Mirror of https://github.com/krahets/hello-algo.git (synced 2024-12-24 09:16:28 +08:00)

Compare commits: 5 commits (3b81ff3cba ... ce497b9a50)
Commits:

- ce497b9a50
- 2737357242
- e0d617edbb
- 7a345fc66b
- bc0e32af57
8 changed files with 56 additions and 54 deletions

@@ -46,7 +46,8 @@ class GraphAdjList {
         if (
             !this.adjList.has(vet1) ||
             !this.adjList.has(vet2) ||
-            vet1 === vet2
+            vet1 === vet2 ||
+            this.adjList.get(vet1).indexOf(vet2) === -1
         ) {
             throw new Error('Illegal Argument Exception');
         }

@@ -29,7 +29,7 @@ impl<T> ListNode<T> {
         for item in array.iter().rev() {
             let node = Rc::new(RefCell::new(ListNode {
                 val: *item,
-                next: head.clone(),
+                next: head.take(),
             }));
             head = Some(node);
         }
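
The change above replaces `head.clone()` with `head.take()` when linking each new node to the rest of the list. A plausible reading (mine, not stated in the commit): `clone()` briefly bumps the `Rc` reference count before `head` is overwritten, while `take()` moves the value out and leaves `None` behind, with no reference-count churn. A minimal sketch of that difference, using a hypothetical `Node` type rather than the repository's `ListNode`:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical node type, only for illustrating Option::take vs. Option::clone.
#[allow(dead_code)]
struct Node {
    val: i32,
    next: Option<Rc<RefCell<Node>>>,
}

fn main() {
    let mut head: Option<Rc<RefCell<Node>>> =
        Some(Rc::new(RefCell::new(Node { val: 1, next: None })));

    // clone(): the node is temporarily referenced twice until `head` is overwritten.
    let cloned = head.clone();
    assert_eq!(Rc::strong_count(cloned.as_ref().unwrap()), 2);
    drop(cloned);

    // take(): moves the Rc out and leaves None behind; the count stays at 1.
    let taken = head.take();
    assert_eq!(Rc::strong_count(taken.as_ref().unwrap()), 1);
    assert!(head.is_none());
}
```

Either form builds the same list; `take()` just avoids the extra increment and decrement on each iteration.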

@@ -44,14 +44,14 @@ impl<T> ListNode<T> {
         T: std::hash::Hash + Eq + Copy + Clone,
     {
         let mut hashmap = HashMap::new();
-        if let Some(node) = linked_list {
-            let mut current = Some(node.clone());
-            while let Some(cur) = current {
-                let borrow = cur.borrow();
-                hashmap.insert(borrow.val.clone(), cur.clone());
-                current = borrow.next.clone();
-            }
-        }
+        let mut node = linked_list;
+
+        while let Some(cur) = node {
+            let borrow = cur.borrow();
+            hashmap.insert(borrow.val.clone(), cur.clone());
+            node = borrow.next.clone();
+        }
+
         hashmap
     }
 }

@@ -72,23 +72,21 @@ pub fn vec_to_tree(arr: Vec<Option<i32>>) -> Option<Rc<RefCell<TreeNode>>> {
 }

 /* Serialize a binary tree into a list: recursive */
-fn tree_to_vec_dfs(root: Option<Rc<RefCell<TreeNode>>>, i: usize, res: &mut Vec<Option<i32>>) {
-    if root.is_none() {
-        return;
-    }
-    let root = root.unwrap();
-    // i + 1 is the minimum valid size to access index i
-    while res.len() < i + 1 {
-        res.push(None);
-    }
-    res[i] = Some(root.borrow().val);
-    tree_to_vec_dfs(root.borrow().left.clone(), 2 * i + 1, res);
-    tree_to_vec_dfs(root.borrow().right.clone(), 2 * i + 2, res);
+fn tree_to_vec_dfs(root: Option<&Rc<RefCell<TreeNode>>>, i: usize, res: &mut Vec<Option<i32>>) {
+    if let Some(root) = root {
+        // i + 1 is the minimum valid size to access index i
+        while res.len() < i + 1 {
+            res.push(None);
+        }
+        res[i] = Some(root.borrow().val);
+        tree_to_vec_dfs(root.borrow().left.as_ref(), 2 * i + 1, res);
+        tree_to_vec_dfs(root.borrow().right.as_ref(), 2 * i + 2, res);
+    }
 }

 /* Serialize a binary tree into a list */
 pub fn tree_to_vec(root: Option<Rc<RefCell<TreeNode>>>) -> Vec<Option<i32>> {
     let mut res = vec![];
-    tree_to_vec_dfs(root, 0, &mut res);
+    tree_to_vec_dfs(root.as_ref(), 0, &mut res);
     res
 }
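
Here `tree_to_vec_dfs` now borrows the tree (`Option<&Rc<RefCell<TreeNode>>>`) instead of taking ownership, so callers pass `.as_ref()` and the recursion no longer clones an `Rc` per visited node; the `is_none`/`unwrap` pair also collapses into one `if let`. A standalone sketch of the `Option::as_ref` idiom this relies on, with a plain `String` instead of the repository's `TreeNode`:

```rust
fn main() {
    let maybe_name: Option<String> = Some(String::from("hello-algo"));

    // as_ref(): view the contents as Option<&String> without moving or cloning them.
    let len = maybe_name.as_ref().map(|s| s.len());
    assert_eq!(len, Some(10));

    // The original Option is still intact because nothing was moved out of it.
    assert_eq!(maybe_name.as_deref(), Some("hello-algo"));
}
```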

@@ -46,7 +46,8 @@ class GraphAdjList {
         if (
             !this.adjList.has(vet1) ||
             !this.adjList.has(vet2) ||
-            vet1 === vet2
+            vet1 === vet2 ||
+            this.adjList.get(vet1).indexOf(vet2) === -1
         ) {
             throw new Error('Illegal Argument Exception');
         }

@@ -1,6 +1,6 @@
 # Hash table

-A <u>hash table</u>, also known as a <u>hash map</u>, is a data structure that establishes a mapping between keys and values, enabling efficient element retrieval. Specifically, when we input a `key` into the hash table, we can retrive the corresponding `value` in $O(1)$ time complexity.
+A <u>hash table</u>, also known as a <u>hash map</u>, is a data structure that establishes a mapping between keys and values, enabling efficient element retrieval. Specifically, when we input a `key` into the hash table, we can retrieve the corresponding `value` in $O(1)$ time complexity.

 As shown in the figure below, given $n$ students, each student has two data fields: "Name" and "Student ID". If we want to implement a query function that takes a student ID as input and returns the corresponding name, we can use the hash table shown in the figure below.

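
To make the lookup described above concrete, here is a small sketch (my own illustration in Rust, not code from the book) that maps student IDs to names and retrieves one in expected O(1) time:

```rust
use std::collections::HashMap;

fn main() {
    // Map student IDs to names.
    let mut students: HashMap<i32, String> = HashMap::new();
    students.insert(12836, "Xiao Ha".to_string());
    students.insert(15937, "Xiao Luo".to_string());
    students.insert(16750, "Xiao Suan".to_string());

    // Query by student ID in expected O(1) time.
    if let Some(name) = students.get(&15937) {
        println!("Student 15937 is {name}");
    }
}
```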

@@ -8,9 +8,9 @@ As shown in the figure below, given $n$ students, each student has two data fiel

 In addition to hash tables, arrays and linked lists can also be used to implement query functionality, but the time complexity is different. Their efficiency is compared in the table below:

-- **Inserting elements**: Simply append the element to the tail of the array (or linked list). The time complexity of this operation is $O(1)$.
-- **Searching for elements**: As the array (or linked list) is unsorted, searching for an element requires traversing through all of the elements. The time complexity of this operation is $O(n)$.
-- **Deleting elements**: To remove an element, we first need to locate it. Then, we delete it from the array (or linked list). The time complexity of this operation is $O(n)$.
+- **Inserting an element**: Simply append the element to the tail of the array (or linked list). The time complexity of this operation is $O(1)$.
+- **Searching for an element**: As the array (or linked list) is unsorted, searching for an element requires traversing through all of the elements. The time complexity of this operation is $O(n)$.
+- **Deleting an element**: To remove an element, we first need to locate it. Then, we delete it from the array (or linked list). The time complexity of this operation is $O(n)$.

 <p align="center"> Table <id> Comparison of time efficiency for common operations </p>


@@ -20,7 +20,7 @@ In addition to hash tables, arrays and linked lists can also be used to implemen
 | Insert Elements | $O(1)$ | $O(1)$ | $O(1)$ |
 | Delete Elements | $O(n)$ | $O(n)$ | $O(1)$ |

-It can be seen that **the time complexity for operations (insertion, deletion, searching, and modification) in a hash table is $O(1)$**, which is highly efficient.
+As observed, **the time complexity for operations (insertion, deletion, searching, and modification) in a hash table is $O(1)$**, which is highly efficient.

 ## Common operations of hash table


@@ -56,7 +56,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     unordered_map<int, string> map;

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map[12836] = "Xiao Ha";
     map[15937] = "Xiao Luo";
     map[16750] = "Xiao Suan";

@@ -79,7 +79,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     Map<Integer, String> map = new HashMap<>();

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map.put(12836, "Xiao Ha");
     map.put(15937, "Xiao Luo");
     map.put(16750, "Xiao Suan");

@@ -101,7 +101,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     /* Initialize hash table */
     Dictionary<int, string> map = new() {
         /* Add operation */
-        // Add key-value pair (key, value) to the hash table
+        // Add key-value pair (key, value) to hash table
         { 12836, "Xiao Ha" },
         { 15937, "Xiao Luo" },
         { 16750, "Xiao Suan" },

@@ -125,7 +125,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     hmap := make(map[int]string)

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     hmap[12836] = "Xiao Ha"
     hmap[15937] = "Xiao Luo"
     hmap[16750] = "Xiao Suan"

@@ -148,7 +148,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     var map: [Int: String] = [:]

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map[12836] = "Xiao Ha"
     map[15937] = "Xiao Luo"
     map[16750] = "Xiao Suan"

@@ -192,7 +192,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     /* Initialize hash table */
     const map = new Map<number, string>();
     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map.set(12836, 'Xiao Ha');
     map.set(15937, 'Xiao Luo');
     map.set(16750, 'Xiao Suan');

@@ -220,7 +220,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     Map<int, String> map = {};

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map[12836] = "Xiao Ha";
     map[15937] = "Xiao Luo";
     map[16750] = "Xiao Suan";

@@ -245,7 +245,7 @@ Common operations of a hash table include: initialization, querying, adding key-
     let mut map: HashMap<i32, String> = HashMap::new();

     /* Add operation */
-    // Add key-value pair (key, value) to the hash table
+    // Add key-value pair (key, value) to hash table
     map.insert(12836, "Xiao Ha".to_string());
     map.insert(15937, "Xiao Luo".to_string());
     map.insert(16750, "Xiao Suan".to_string());

@@ -490,10 +490,10 @@ First, let's consider the simplest case: **implementing a hash table using only

 So, how do we locate the corresponding bucket based on the `key`? This is achieved through a <u>hash function</u>. The role of the hash function is to map a larger input space to a smaller output space. In a hash table, the input space consists of all the keys, and the output space consists of all the buckets (array indices). In other words, given a `key`, **we can use the hash function to determine the storage location of the corresponding key-value pair in the array**.

-When given a `key`, the calculation process of the hash function consists of the following two steps:
+With a given `key`, the calculation of the hash function consists of two steps:

 1. Calculate the hash value by using a certain hash algorithm `hash()`.
-2. Take the modulus of the hash value with the bucket count (array length) `capacity` to obtain the array `index` corresponding to that key.
+2. Take the modulus of the hash value with the bucket count (array length) `capacity` to obtain the array `index` corresponding to the key.

 ```shell
 index = hash(key) % capacity
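
A hedged sketch of those two steps in Rust (my own illustration, not the book's array-based hash map): hash the key, reduce the hash modulo the bucket count, and use the result as an array index.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Step 1: compute a hash value for the key with some hash algorithm.
fn hash_key(key: i32) -> u64 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // A fixed number of buckets (the array length).
    const CAPACITY: usize = 100;
    let mut buckets: Vec<Option<(i32, String)>> = vec![None; CAPACITY];

    let key = 12836;
    // Step 2: index = hash(key) % capacity
    let index = (hash_key(key) as usize) % CAPACITY;
    buckets[index] = Some((key, "Xiao Ha".to_string()));

    // A lookup repeats the same two steps.
    let lookup = (hash_key(key) as usize) % CAPACITY;
    if let Some((k, name)) = &buckets[lookup] {
        println!("bucket {lookup}: {k} -> {name}");
    }
}
```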

@@ -9,15 +9,15 @@ As shown in the figure below, a <u>binary search tree</u> satisfies the followin

 ## Operations on a binary search tree

-We encapsulate the binary search tree as a class `BinarySearchTree` and declare a member variable `root`, pointing to the tree's root node.
+We encapsulate the binary search tree as a class `BinarySearchTree` and declare a member variable `root` pointing to the tree's root node.

 ### Searching for a node

-Given a target node value `num`, one can search according to the properties of the binary search tree. As shown in the figure below, we declare a node `cur` and start from the binary tree's root node `root`, looping to compare the size relationship between the node value `cur.val` and `num`.
+Given a target node value `num`, one can search according to the properties of the binary search tree. As shown in the figure below, we declare a node `cur`, start from the binary tree's root node `root`, and loop to compare the size between the node value `cur.val` and `num`.

 - If `cur.val < num`, it means the target node is in `cur`'s right subtree, thus execute `cur = cur.right`.
 - If `cur.val > num`, it means the target node is in `cur`'s left subtree, thus execute `cur = cur.left`.
-- If `cur.val = num`, it means the target node is found, exit the loop and return the node.
+- If `cur.val = num`, it means the target node is found, exit the loop, and return the node.

 === "<1>"
     ![Example of searching for a node in a binary search tree](binary_search_tree.assets/bst_search_step1.png)

@@ -31,7 +31,7 @@ Given a target node value `num`, one can search according to the properties of t
 === "<4>"
     ![bst_search_step4](binary_search_tree.assets/bst_search_step4.png)

-The search operation in a binary search tree works on the same principle as the binary search algorithm, eliminating half of the possibilities in each round. The number of loops is at most the height of the binary tree. When the binary tree is balanced, it uses $O(\log n)$ time. Example code is as follows:
+The search operation in a binary search tree works on the same principle as the binary search algorithm, eliminating half of the cases in each round. The number of loops is at most the height of the binary tree. When the binary tree is balanced, it uses $O(\log n)$ time. The example code is as follows:

 ```src
 [file]{binary_search_tree}-[class]{binary_search_tree}-[func]{search}
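
For reference alongside the hunk above, a minimal sketch of the search loop just described, assuming a simplified `Box`-based node rather than the repository's `Rc<RefCell<TreeNode>>` representation:

```rust
struct TreeNode {
    val: i32,
    left: Option<Box<TreeNode>>,
    right: Option<Box<TreeNode>>,
}

/// Search for `num`, discarding half of the remaining subtree at every step.
fn search(root: &Option<Box<TreeNode>>, num: i32) -> Option<&TreeNode> {
    let mut cur = root.as_deref();
    while let Some(node) = cur {
        if node.val < num {
            // Target is in the right subtree.
            cur = node.right.as_deref();
        } else if node.val > num {
            // Target is in the left subtree.
            cur = node.left.as_deref();
        } else {
            // Found the target node.
            return Some(node);
        }
    }
    None
}

fn main() {
    // A small hand-built BST:   5
    //                          / \
    //                         3   7
    let root = Some(Box::new(TreeNode {
        val: 5,
        left: Some(Box::new(TreeNode { val: 3, left: None, right: None })),
        right: Some(Box::new(TreeNode { val: 7, left: None, right: None })),
    }));

    assert!(search(&root, 7).is_some());
    assert!(search(&root, 4).is_none());
    println!("search works as expected");
}
```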

@@ -41,15 +41,15 @@ The search operation in a binary search tree works on the same principle as the

 Given an element `num` to be inserted, to maintain the property of the binary search tree "left subtree < root node < right subtree," the insertion operation proceeds as shown in the figure below.

-1. **Finding the insertion position**: Similar to the search operation, start from the root node and loop downwards according to the size relationship between the current node value and `num` until passing through the leaf node (traversing to `None`) then exit the loop.
-2. **Insert the node at that position**: Initialize the node `num` and place it where `None` was.
+1. **Finding insertion position**: Similar to the search operation, start from the root node, loop downwards according to the size relationship between the current node value and `num`, until the leaf node is passed (traversed to `None`), then exit the loop.
+2. **Insert the node at this position**: Initialize the node `num` and place it where `None` was.

 ![Inserting a node into a binary search tree](binary_search_tree.assets/bst_insert.png)

 In the code implementation, note the following two points.

-- The binary search tree does not allow duplicate nodes; otherwise, it will violate its definition. Therefore, if the node to be inserted already exists in the tree, the insertion is not performed, and it directly returns.
-- To perform the insertion operation, we need to use the node `pre` to save the node from the last loop. This way, when traversing to `None`, we can get its parent node, thus completing the node insertion operation.
+- The binary search tree does not allow duplicate nodes to exist; otherwise, its definition would be violated. Therefore, if the node to be inserted already exists in the tree, the insertion is not performed, and the node returns directly.
+- To perform the insertion operation, we need to use the node `pre` to save the node from the previous loop. This way, when traversing to `None`, we can get its parent node, thus completing the node insertion operation.

 ```src
 [file]{binary_search_tree}-[class]{binary_search_tree}-[func]{insert}
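
A compact sketch of the insertion rule described above. It is written recursively, so it sidesteps the `pre` parent pointer used by the book's iterative implementation, but it enforces the same ordering and the same "no duplicates" rule; the `TreeNode` is again a simplified `Box`-based stand-in:

```rust
struct TreeNode {
    val: i32,
    left: Option<Box<TreeNode>>,
    right: Option<Box<TreeNode>>,
}

/// Insert `num`, keeping "left subtree < root node < right subtree".
fn insert(root: &mut Option<Box<TreeNode>>, num: i32) {
    match root {
        // Traversed to None: this is the insertion position.
        None => *root = Some(Box::new(TreeNode { val: num, left: None, right: None })),
        Some(node) if num < node.val => insert(&mut node.left, num),
        Some(node) if num > node.val => insert(&mut node.right, num),
        // Equal value: the node already exists, so return without inserting.
        Some(_) => {}
    }
}

fn main() {
    let mut root: Option<Box<TreeNode>> = None;
    for num in [5, 3, 7, 3] {
        insert(&mut root, num); // the second 3 is ignored
    }
    assert_eq!(root.as_ref().map(|n| n.val), Some(5));
    println!("inserted without duplicates");
}
```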

@@ -59,9 +59,9 @@ Similar to searching for a node, inserting a node uses $O(\log n)$ time.

 ### Removing a node

-First, find the target node in the binary tree, then remove it. Similar to inserting a node, we need to ensure that after the removal operation is completed, the property of the binary search tree "left subtree < root node < right subtree" is still satisfied. Therefore, based on the number of child nodes of the target node, we divide it into 0, 1, and 2 cases, performing the corresponding node removal operations.
+First, find the target node in the binary tree, then remove it. Similar to inserting a node, we need to ensure that after the removal operation is completed, the property of the binary search tree "left subtree < root node < right subtree" is still satisfied. Therefore, based on the number of child nodes of the target node, we divide it into three cases: 0, 1, and 2, and perform the corresponding node removal operations.

-As shown in the figure below, when the degree of the node to be removed is $0$, it means the node is a leaf node, and it can be directly removed.
+As shown in the figure below, when the degree of the node to be removed is $0$, it means the node is a leaf node and can be directly removed.

 ![Removing a node in a binary search tree (degree 0)](binary_search_tree.assets/bst_remove_case1.png)


@@ -96,9 +96,9 @@ The operation of removing a node also uses $O(\log n)$ time, where finding the n

 ### In-order traversal is ordered

-As shown in the figure below, the in-order traversal of a binary tree follows the "left $\rightarrow$ root $\rightarrow$ right" traversal order, and a binary search tree satisfies the size relationship "left child node $<$ root node $<$ right child node".
+As shown in the figure below, the in-order traversal of a binary tree follows the traversal order of "left $\rightarrow$ root $\rightarrow$ right," and a binary search tree satisfies the size relationship of "left child node $<$ root node $<$ right child node."

-This means that in-order traversal in a binary search tree always traverses the next smallest node first, thus deriving an important property: **The in-order traversal sequence of a binary search tree is ascending**.
+This means that when performing in-order traversal in a binary search tree, the next smallest node will always be traversed first, thus leading to an important property: **The sequence of in-order traversal in a binary search tree is ascending**.

 Using the ascending property of in-order traversal, obtaining ordered data in a binary search tree requires only $O(n)$ time, without the need for additional sorting operations, which is very efficient.

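
A short sketch of the property (my own illustration, simplified `Box`-based nodes): an in-order traversal of a hand-built binary search tree yields its values in ascending order.

```rust
struct TreeNode {
    val: i32,
    left: Option<Box<TreeNode>>,
    right: Option<Box<TreeNode>>,
}

/// In-order traversal: left -> root -> right.
fn in_order(node: &Option<Box<TreeNode>>, out: &mut Vec<i32>) {
    if let Some(n) = node {
        in_order(&n.left, out);
        out.push(n.val);
        in_order(&n.right, out);
    }
}

fn main() {
    // BST:      5
    //          / \
    //         3   7
    //        /     \
    //       1       9
    let leaf = |val| Some(Box::new(TreeNode { val, left: None, right: None }));
    let root = Some(Box::new(TreeNode {
        val: 5,
        left: Some(Box::new(TreeNode { val: 3, left: leaf(1), right: None })),
        right: Some(Box::new(TreeNode { val: 7, left: None, right: leaf(9) })),
    }));

    let mut out = Vec::new();
    in_order(&root, &mut out);
    assert_eq!(out, vec![1, 3, 5, 7, 9]); // ascending order
    println!("{out:?}");
}
```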

@@ -106,7 +106,7 @@ Using the ascending property of in-order traversal, obtaining ordered data in a

 ## Efficiency of binary search trees

-Given a set of data, we consider using an array or a binary search tree for storage. Observing the table below, the operations on a binary search tree all have logarithmic time complexity, which is stable and efficient. Only in scenarios of high-frequency addition and low-frequency search and removal, arrays are more efficient than binary search trees.
+Given a set of data, we consider using an array or a binary search tree for storage. Observing the table below, the operations on a binary search tree all have logarithmic time complexity, which is stable and efficient. Arrays are more efficient than binary search trees only in scenarios involving frequent additions and infrequent searches or removals.

 <p align="center"> Table <id> Efficiency comparison between arrays and search trees </p>


@@ -116,9 +116,9 @@ Given a set of data, we consider using an array or a binary search tree for stor
 | Insert element | $O(1)$ | $O(\log n)$ |
 | Remove element | $O(n)$ | $O(\log n)$ |

-In ideal conditions, the binary search tree is "balanced," thus any node can be found within $\log n$ loops.
+Ideally, the binary search tree is "balanced," allowing any node can be found within $\log n$ loops.

-However, continuously inserting and removing nodes in a binary search tree may lead to the binary tree degenerating into a chain list as shown in the figure below, at which point the time complexity of various operations also degrades to $O(n)$.
+However, if we continuously insert and remove nodes in a binary search tree, it may degenerate into a linked list as shown in the figure below, where the time complexity of various operations also degrades to $O(n)$.

 ![Degradation of a binary search tree](binary_search_tree.assets/bst_degradation.png)


@@ -46,7 +46,8 @@ class GraphAdjList {
         if (
             !this.adjList.has(vet1) ||
             !this.adjList.has(vet2) ||
-            vet1 === vet2
+            vet1 === vet2 ||
+            this.adjList.get(vet1).indexOf(vet2) === -1
         ) {
             throw new Error('Illegal Argument Exception');
         }

@@ -46,7 +46,8 @@ class GraphAdjList {
         if (
             !this.adjList.has(vet1) ||
             !this.adjList.has(vet2) ||
-            vet1 === vet2
+            vet1 === vet2 ||
+            this.adjList.get(vet1).indexOf(vet2) === -1
         ) {
             throw new Error('Illegal Argument Exception');
         }