Mirror of https://github.com/krahets/hello-algo.git, synced 2024-12-24 04:06:28 +08:00

Compare commits: 4a6e3337c2 ... 00c6f6aa8b (4 commits)

| SHA1 |
| --- |
| 00c6f6aa8b |
| 4db5c19011 |
| 5ec15c4a3e |
| a430cc2607 |

2 changed files with 35 additions and 35 deletions

@ -2,30 +2,30 @@
### Key review

- A graph consists of vertices and edges and can be represented as a set of vertices together with a set of edges.
- Compared to linear relationships (linked lists) and hierarchical relationships (trees), network relationships (graphs) have a higher degree of freedom and are therefore more complex.
- In a directed graph, edges have directions; in a connected graph, any vertex can be reached from any other vertex; in a weighted graph, each edge carries a weight value.
- An adjacency matrix represents a graph with a matrix (2D array): each row and column corresponds to a vertex, and each element indicates whether an edge exists between two vertices, using $1$ for an edge and $0$ for no edge. Adjacency matrices are highly efficient for adding, deleting, and checking edges, but they consume more space.
- An adjacency list represents a graph with a collection of linked lists: the $i^{th}$ list stores all vertices adjacent to vertex $i$. Adjacency lists use less space than adjacency matrices, but finding an edge requires traversing a list, so their time efficiency is lower (a minimal sketch of both representations follows this list).
- When the linked lists in an adjacency list grow long, they can be converted into red-black trees or hash tables to improve lookup efficiency.
- From the perspective of algorithm design, adjacency matrices embody the principle of "trading space for time," while adjacency lists embody "trading time for space."
- Graphs can be used to model various real-world systems, such as social networks and subway routes.
- A tree is a special case of a graph, and tree traversal is a special case of graph traversal.
- Breadth-first traversal of a graph is a search that expands layer by layer from near to far, typically implemented with a queue.
- Depth-first traversal of a graph is a search that goes as deep as possible and backtracks when no further path is available; it is usually implemented with recursion.

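The list above mentions both graph representations; here is a minimal, illustrative sketch (not code from the book) that builds an adjacency matrix and an adjacency list for the same small undirected graph. The names `n`, `edges`, `adj_matrix`, and `adj_list` are assumptions for this example.

```python
# Small undirected graph with n vertices labeled 0..n-1 and an edge list.
n = 5
edges = [(0, 1), (0, 3), (1, 2), (2, 4), (3, 4)]

# Adjacency matrix: an n x n grid where 1 marks an edge and 0 marks no edge.
adj_matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    adj_matrix[u][v] = 1
    adj_matrix[v][u] = 1  # undirected graph: the matrix is symmetric

# Adjacency list: entry i holds all vertices adjacent to vertex i.
adj_list = [[] for _ in range(n)]
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

print(adj_matrix[0][3])  # 1: the edge (0, 3) exists
print(adj_list[2])       # [1, 4]: the neighbors of vertex 2
```
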
### Q & A
**Q**: Is a path defined as a sequence of vertices or a sequence of edges?

Definitions vary between different language versions of Wikipedia: the English version defines a path as "a sequence of edges," while the Chinese version defines it as "a sequence of vertices." Here is the original text from the English version:

> In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices.

In this document, a path is considered a sequence of edges, rather than a sequence of vertices. This is because there might be multiple edges connecting two vertices, in which case each edge corresponds to a path.

**Q**: In a disconnected graph, are there vertices that cannot be reached?

In a disconnected graph, there is at least one vertex that cannot be reached from a given starting vertex. To traverse a disconnected graph, we need to set multiple starting points so that every connected component of the graph is visited.

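As a minimal sketch (assuming an adjacency-list representation like the `adj_list` above; the function name `traverse_components` is made up for this example), the following breadth-first traversal restarts from every unvisited vertex, so every connected component of a disconnected graph gets visited:

```python
from collections import deque

def traverse_components(adj_list: list[list[int]]) -> list[list[int]]:
    """Return the connected components of an undirected graph given as an adjacency list."""
    n = len(adj_list)
    visited = [False] * n
    components = []
    for start in range(n):
        if visited[start]:
            continue
        # BFS from this unvisited vertex collects exactly one connected component.
        visited[start] = True
        queue, component = deque([start]), []
        while queue:
            u = queue.popleft()
            component.append(u)
            for v in adj_list[u]:
                if not visited[v]:
                    visited[v] = True
                    queue.append(v)
        components.append(component)
    return components

# A disconnected graph with two components: {0, 1, 2} and {3, 4}.
print(traverse_components([[1], [0, 2], [1], [4], [3]]))  # [[0, 1, 2], [3, 4]]
```
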
**Q**: In an adjacency list, does the order of "all vertices connected to that vertex" matter?

It can be in any order. However, in real-world applications, it may be necessary to sort the adjacent vertices according to certain rules, such as the order in which they were added or the order of their values, so that vertices with particular extreme values can be found quickly (a small sketch follows below).

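For instance, a minimal sketch (again assuming the adjacency-list form; this `adj_list` is a made-up example) that keeps each neighbor list sorted by vertex value, so the smallest vertex adjacent to vertex $i$ is always `adj_list[i][0]`:

```python
adj_list = [[3, 1], [0, 2], [1, 4], [4, 0], [2, 3]]

# Sort each neighbor list by vertex value; the minimum neighbor is then adj_list[i][0].
for neighbors in adj_list:
    neighbors.sort()

print(adj_list[0])     # [1, 3]
print(adj_list[0][0])  # 1: smallest vertex adjacent to vertex 0
```
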
@ -1,28 +1,28 @@
# Search algorithms revisited

<u>Searching algorithms</u> are used to retrieve one or more elements that meet specific criteria within data structures such as arrays, linked lists, trees, or graphs.

Searching algorithms can be divided into the following two categories based on their implementation approach.

- **Locating the target element by traversing the data structure**, such as traversing arrays, linked lists, trees, and graphs.
- **Using the organizational structure of the data, or prior information contained in the data, to achieve an efficient search**, such as binary search, hash search, and binary search tree search.

These topics were introduced in previous chapters, so they are not unfamiliar to us. In this section, we will revisit searching algorithms from a more systematic perspective.

## Brute-force search

Brute-force search locates the target element by traversing every element of the data structure.

- "Linear search" is suitable for linear data structures such as arrays and linked lists. It starts from one end of the data structure and accesses each element one by one until the target element is found or the other end is reached without finding it (a minimal sketch follows this list).
- "Breadth-first search" and "depth-first search" are two traversal strategies for graphs and trees. Breadth-first search starts from the initial node and searches layer by layer, accessing nodes from near to far. Depth-first search starts from the initial node, follows a path to its end, then backtracks and tries other paths until the entire data structure has been traversed.

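As referenced in the list above, a minimal linear-search sketch, a plain loop over a Python list (the function name `linear_search` is an assumption for this illustration):

```python
def linear_search(nums: list[int], target: int) -> int:
    """Return the index of `target` in `nums`, or -1 if it is absent."""
    # Visit each element one by one, from one end toward the other.
    for i, num in enumerate(nums):
        if num == target:
            return i
    return -1  # the other end was reached without finding the target

print(linear_search([3, 1, 4, 1, 5, 9], 5))  # 4
print(linear_search([3, 1, 4, 1, 5, 9], 7))  # -1
```
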

The advantage of brute-force search is its simplicity and versatility: **it requires no data preprocessing and no additional data structures**.

However, **the time complexity of this type of algorithm is $O(n)$**, where $n$ is the number of elements, so its performance is poor on large data sets.

## Adaptive search

Adaptive search uses the unique properties of the data (such as orderliness) to optimize the search process and thereby locate the target element more efficiently.

- "Binary search" uses the orderliness of the data to search efficiently; it is only suitable for arrays.
- "Hash search" uses a hash table to establish a key-value mapping between the search data and the target data, thereby implementing the query operation (a minimal sketch of both follows this list).

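As referenced in the list above, a minimal sketch of both ideas: a hand-written binary search over a sorted array, and a hash search implemented with a Python `dict` (the names `binary_search` and `index_of` are assumptions for this illustration):

```python
def binary_search(nums: list[int], target: int) -> int:
    """Return the index of `target` in the sorted list `nums`, or -1 if absent."""
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] < target:
            left = mid + 1   # target can only lie in the right half
        elif nums[mid] > target:
            right = mid - 1  # target can only lie in the left half
        else:
            return mid
    return -1

nums = [1, 3, 5, 7, 9, 11]     # binary search requires a sorted, contiguous array
print(binary_search(nums, 9))  # 4

# Hash search: preprocessing builds a value -> index mapping; queries are then O(1) on average.
index_of = {num: i for i, num in enumerate(nums)}
print(index_of.get(9, -1))     # 4
print(index_of.get(8, -1))     # -1
```
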
@ -30,7 +30,7 @@ Adaptive search uses the unique properties of data (such as order) to optimize t

The advantage of these algorithms is their high efficiency, **with time complexities reaching $O(\log n)$ or even $O(1)$**.

However, **using these algorithms often requires data preprocessing**. For example, binary search requires the array to be sorted in advance, and hash search and tree search both rely on additional data structures; maintaining these structures also incurs extra overhead in time and space.

!!! tip
@ -38,11 +38,11 @@ However, **using these algorithms often requires data preprocessing**. For examp
## Choosing a search method

Given a set of data of size $n$, we can use linear search, binary search, tree search, hash search, or other methods to retrieve the target element. The working principles of these methods are shown in the figure below.

![Various search strategies](searching_algorithm_revisited.assets/searching_algorithms.png)

The characteristics and operational efficiency of these methods are shown in the table below.

<p align="center"> Table <id> Comparison of search algorithm efficiency </p>

@ -55,23 +55,23 @@ The operation efficiency and characteristics of the aforementioned methods are s

| | Linear search | Binary search | Tree search | Hash search |
| --- | --- | --- | --- | --- |
| Data preprocessing | / | Sorting $O(n \log n)$ | Building tree $O(n \log n)$ | Building hash table $O(n)$ |
| Data orderliness | Unordered | Ordered | Ordered | Unordered |

The choice of search algorithm also depends on the volume of data, search performance requirements, the frequency of data queries and updates, and other factors.

**Linear search**

- Good versatility; no data preprocessing is required. If we only need to query the data once, the time the other three methods spend on data preprocessing would exceed the time of a linear search.
- Suitable for small volumes of data, where time complexity has a smaller impact on efficiency.
- Suitable for scenarios with very frequent data updates, because this method does not require any additional maintenance of the data.

**Binary search**

- Suitable for larger data volumes, with stable performance and a worst-case time complexity of $O(\log n)$.
- However, the data volume cannot be too large, because storing an array requires contiguous memory space.
- Not suitable for scenarios with frequent insertions and deletions, because maintaining a sorted array incurs high overhead.

**Hash search**

- Suitable for scenarios where fast query performance is essential, with an average time complexity of $O(1)$.
- Not suitable for scenarios needing ordered data or range searches, because hash tables cannot maintain data orderliness.
- Highly dependent on the hash function and the hash collision resolution strategy, with a notable risk of performance degradation.
- Not suitable for overly large data volumes, because hash tables need extra space to minimize collisions and provide good query performance.

@ -80,5 +80,5 @@ The choice of search algorithm also depends on the volume of data, search perfor

**Tree search**

- Suitable for massive data, because tree nodes are stored scattered in memory.
- Suitable for maintaining ordered data or range searches.
- With the continuous addition and deletion of nodes, the binary search tree may become skewed, degrading the time complexity to $O(n)$.
- If AVL trees or red-black trees are used, operations can run stably at $O(\log n)$ efficiency, but maintaining the tree balance adds extra overhead (a minimal binary search tree lookup sketch follows below).

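As referenced above, a minimal binary search tree lookup sketch (an illustrative implementation, not the book's reference code; `TreeNode` and `bst_search` are names made up for this example):

```python
class TreeNode:
    """A binary search tree node: left subtree < val < right subtree."""
    def __init__(self, val: int):
        self.val = val
        self.left: "TreeNode | None" = None
        self.right: "TreeNode | None" = None

def bst_search(root: "TreeNode | None", target: int) -> "TreeNode | None":
    """Return the node holding `target`, or None if it is absent."""
    cur = root
    while cur is not None:
        if target < cur.val:
            cur = cur.left   # target can only be in the left subtree
        elif target > cur.val:
            cur = cur.right  # target can only be in the right subtree
        else:
            return cur
    return None

# Build a tiny BST:   5
#                    / \
#                   3   7
root = TreeNode(5)
root.left, root.right = TreeNode(3), TreeNode(7)
print(bst_search(root, 7) is not None)  # True
print(bst_search(root, 4) is not None)  # False
```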