Add the initial translation of chapter "sorting" (#1321)
en/docs/chapter_sorting/bubble_sort.md
# Bubble sort
|
||||
|
||||
<u>Bubble sort</u> achieves sorting by continuously comparing and swapping adjacent elements. This process resembles bubbles rising from the bottom to the top, hence the name bubble sort.
|
||||
|
||||
As shown in the following figures, the bubbling process can be simulated using element swap operations: starting from the leftmost end of the array and moving right, compare each pair of adjacent elements in turn; if "left element > right element," swap them. After one full traversal, the largest element has been moved to the far right end of the array.
|
||||
|
||||
=== "<1>"
|
||||
![Simulating bubble process using element swap](bubble_sort.assets/bubble_operation_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![bubble_operation_step2](bubble_sort.assets/bubble_operation_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![bubble_operation_step3](bubble_sort.assets/bubble_operation_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![bubble_operation_step4](bubble_sort.assets/bubble_operation_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![bubble_operation_step5](bubble_sort.assets/bubble_operation_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![bubble_operation_step6](bubble_sort.assets/bubble_operation_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![bubble_operation_step7](bubble_sort.assets/bubble_operation_step7.png)
|
||||
|
||||
## Algorithm process
|
||||
|
||||
Assuming the length of the array is $n$, the steps of bubble sort are shown below.
|
||||
|
||||
1. First, perform a "bubble" on $n$ elements, **swapping the largest element to its correct position**.
|
||||
2. Next, perform a "bubble" on the remaining $n - 1$ elements, **swapping the second largest element to its correct position**.
|
||||
3. Similarly, after $n - 1$ rounds of "bubbling," **the largest $n - 1$ elements will have been swapped to their correct positions**.
|
||||
4. The only remaining element is necessarily the smallest and does not require sorting, thus the array sorting is complete.
|
||||
|
||||
![Bubble sort process](bubble_sort.assets/bubble_sort_overview.png)
|
||||
|
||||
Example code is as follows:
|
||||
|
||||
```src
|
||||
[file]{bubble_sort}-[class]{}-[func]{bubble_sort}
|
||||
```
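Independent of the source files referenced by the placeholder above, a minimal Python sketch of the steps described here might look as follows; it assumes `nums` is a list of comparable elements and sorts it in place:

```python
def bubble_sort(nums: list[int]) -> None:
    """Bubble sort: sorts nums in place in ascending order"""
    n = len(nums)
    # Outer loop: the unsorted interval is [0, i]
    for i in range(n - 1, 0, -1):
        # Inner loop: swap the largest element of [0, i] step by step to index i
        for j in range(i):
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
```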
|
||||
|
||||
## Efficiency optimization
|
||||
|
||||
We find that if no swaps are performed in a round of "bubbling," the array is already sorted, and we can return the result immediately. Thus, we can add a flag `flag` to monitor this situation and return immediately when it occurs.
|
||||
|
||||
Even after optimization, the worst-case time complexity and average time complexity of bubble sort remain at $O(n^2)$; however, when the input array is completely ordered, it can achieve the best time complexity of $O(n)$.
|
||||
|
||||
```src
|
||||
[file]{bubble_sort}-[class]{}-[func]{bubble_sort_with_flag}
|
||||
```
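As an illustrative sketch (not the referenced source file), the same loop with the early-exit `flag` might be written in Python as:

```python
def bubble_sort_with_flag(nums: list[int]) -> None:
    """Bubble sort with an early-exit flag"""
    n = len(nums)
    for i in range(n - 1, 0, -1):
        flag = False  # records whether any swap happened in this round
        for j in range(i):
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
                flag = True
        if not flag:
            break  # no swaps occurred: the array is already sorted
```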
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity of $O(n^2)$, adaptive sorting**: The lengths of the array traversed in successive rounds of "bubbling" are $n - 1$, $n - 2$, $\dots$, $2$, $1$, totaling $(n - 1) n / 2$. With the `flag` optimization, the best-case time complexity can reach $O(n)$.
|
||||
- **Space complexity of $O(1)$, in-place sorting**: Only a constant amount of extra space is used by pointers $i$ and $j$.
|
||||
- **Stable sorting**: Equal elements are not swapped during "bubbling," so their relative order is preserved.
|
en/docs/chapter_sorting/bucket_sort.md
# Bucket sort
|
||||
|
||||
The previously mentioned sorting algorithms are all "comparison-based sorting algorithms," which sort by comparing the size of elements. Such sorting algorithms cannot surpass a time complexity of $O(n \log n)$. Next, we will discuss several "non-comparison sorting algorithms" that can achieve linear time complexity.
|
||||
|
||||
<u>Bucket sort</u> is a typical application of the divide-and-conquer strategy. It involves setting up a series of ordered buckets, each corresponding to a range of data, and then distributing the data evenly among these buckets; each bucket is then sorted individually; finally, all the data are merged in the order of the buckets.
|
||||
|
||||
## Algorithm process
|
||||
|
||||
Consider an array of length $n$, with elements in the range $[0, 1)$. The bucket sort process is illustrated in the figure below.
|
||||
|
||||
1. Initialize $k$ buckets and distribute $n$ elements into these $k$ buckets.
|
||||
2. Sort each bucket individually (using the built-in sorting function of the programming language).
|
||||
3. Merge the results in the order from the smallest to the largest bucket.
|
||||
|
||||
![Bucket sort algorithm process](bucket_sort.assets/bucket_sort_overview.png)
|
||||
|
||||
The code is shown as follows:
|
||||
|
||||
```src
|
||||
[file]{bucket_sort}-[class]{}-[func]{bucket_sort}
|
||||
```
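For illustration only, a minimal Python sketch of these three steps might look like this; it assumes the elements are floats in $[0, 1)$ and uses $k = n / 2$ buckets:

```python
def bucket_sort(nums: list[float]) -> None:
    """Bucket sort: assumes the elements are floats in the range [0, 1)"""
    # Initialize k = n / 2 buckets; each bucket is expected to receive ~2 elements
    k = max(len(nums) // 2, 1)
    buckets: list[list[float]] = [[] for _ in range(k)]
    # 1. Distribute the elements into the buckets
    for num in nums:
        i = int(num * k)  # map num in [0, 1) to a bucket index in [0, k - 1]
        buckets[i].append(num)
    # 2. Sort each bucket individually (here with the built-in sort)
    for bucket in buckets:
        bucket.sort()
    # 3. Merge the buckets in order
    i = 0
    for bucket in buckets:
        for num in bucket:
            nums[i] = num
            i += 1
```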
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
Bucket sort is suitable for handling very large data sets. For example, if the input data includes 1 million elements, and system memory limitations prevent loading all the data at once, you can divide the data into 1,000 buckets and sort each bucket separately before merging the results.
|
||||
|
||||
- **Time complexity is $O(n + k)$**: Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is $n/k$. Assuming sorting a single bucket takes $O(n/k \log(n/k))$ time, sorting all buckets takes $O(n \log(n/k))$ time. **When the number of buckets $k$ is relatively large, the time complexity tends towards $O(n)$**. Merging the results requires traversing all buckets and elements, taking $O(n + k)$ time.
|
||||
- **Adaptive sorting**: In the worst case, all data is distributed into a single bucket, and sorting that bucket takes $O(n^2)$ time.
|
||||
- **Space complexity is $O(n + k)$, non-in-place sorting**: It requires additional space for $k$ buckets and a total of $n$ elements.
|
||||
- Whether bucket sort is stable depends on whether the algorithm used to sort elements within the buckets is stable.
|
||||
|
||||
## How to achieve even distribution
|
||||
|
||||
The theoretical time complexity of bucket sort can reach $O(n)$; **the key is to distribute the elements evenly across all buckets**, because real data is often not uniformly distributed. For example, suppose we want to distribute all products on Taobao evenly into 10 buckets by price range, but product prices are unevenly distributed, with many priced under 100 yuan and few priced over 1,000 yuan. If the price range is divided evenly into 10 intervals, the number of products in each bucket will differ greatly.
|
||||
|
||||
To achieve even distribution, we can initially set a rough dividing line, roughly dividing the data into 3 buckets. **After the distribution is complete, the buckets with more products can be further divided into 3 buckets, until the number of elements in all buckets is roughly equal**.
|
||||
|
||||
As shown in the figure below, this method essentially creates a recursive tree, aiming to make the leaf node values as even as possible. Of course, you don't have to divide the data into 3 buckets each round; the specific division method can be flexibly chosen based on data characteristics.
|
||||
|
||||
![Recursive division of buckets](bucket_sort.assets/scatter_in_buckets_recursively.png)
|
||||
|
||||
If we know the probability distribution of product prices in advance, **we can set the price dividing line for each bucket based on the data probability distribution**. It is worth noting that it is not necessarily required to specifically calculate the data distribution; it can also be approximated based on data characteristics using some probability model.
|
||||
|
||||
As shown in the figure below, we assume that product prices follow a normal distribution, allowing us to reasonably set the price intervals, thereby evenly distributing the products into the respective buckets.
|
||||
|
||||
![Dividing buckets based on probability distribution](bucket_sort.assets/scatter_in_buckets_distribution.png)
|
en/docs/chapter_sorting/counting_sort.md
# Counting sort
|
||||
|
||||
<u>Counting sort</u> achieves sorting by counting the number of elements, typically applied to arrays of integers.
|
||||
|
||||
## Simple implementation
|
||||
|
||||
Let's start with a simple example. Given an array `nums` of length $n$, where all elements are "non-negative integers", the overall process of counting sort is illustrated in the following diagram.
|
||||
|
||||
1. Traverse the array to find the maximum number, denoted as $m$, then create an auxiliary array `counter` of length $m + 1$.
|
||||
2. **Use `counter` to count the occurrence of each number in `nums`**, where `counter[num]` corresponds to the occurrence of the number `num`. The counting method is simple, just traverse `nums` (suppose the current number is `num`), and increase `counter[num]` by $1$ each round.
|
||||
3. **Since the indices of `counter` are naturally ordered, all numbers are essentially sorted already**. Next, we traverse `counter` and fill the numbers back into `nums` in ascending order.
|
||||
|
||||
![Counting sort process](counting_sort.assets/counting_sort_overview.png)
|
||||
|
||||
The code is shown below:
|
||||
|
||||
```src
|
||||
[file]{counting_sort}-[class]{}-[func]{counting_sort_naive}
|
||||
```
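A minimal Python sketch of this simple implementation, assuming `nums` contains only non-negative integers, might look as follows (for illustration; not the referenced source file):

```python
def counting_sort_naive(nums: list[int]) -> None:
    """Simple counting sort: assumes all elements are non-negative integers"""
    # 1. Find the maximum number m and create a counter of length m + 1
    m = max(nums)
    counter = [0] * (m + 1)
    # 2. Count the occurrences of each number
    for num in nums:
        counter[num] += 1
    # 3. Traverse counter and fill the numbers back into nums in ascending order
    i = 0
    for num in range(m + 1):
        for _ in range(counter[num]):
            nums[i] = num
            i += 1
```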
|
||||
|
||||
!!! note "Connection between counting sort and bucket sort"
|
||||
|
||||
From the perspective of bucket sort, we can consider each index of the counting array `counter` in counting sort as a bucket, and the process of counting as distributing elements into the corresponding buckets. Essentially, counting sort is a special case of bucket sort for integer data.
|
||||
|
||||
## Complete implementation
|
||||
|
||||
Astute readers might have noticed that **if the input data is an object, step `3.` above becomes invalid**. Suppose the input is an array of product objects and we want to sort the products by price (a member variable); the above algorithm can only give us the sorted prices, not the sorted products.
|
||||
|
||||
So how can we get the sorting result for the original data? First, we calculate the "prefix sum" of `counter`. As the name suggests, the prefix sum at index `i`, `prefix[i]`, equals the sum of the elements of the array at indices `0` through `i`:
|
||||
|
||||
$$
|
||||
\text{prefix}[i] = \sum_{j=0}^i \text{counter}[j]
|
||||
$$
|
||||
|
||||
**The prefix sum has a clear meaning: `prefix[num] - 1` is the index of the last occurrence of element `num` in the result array `res`**. This information is crucial, as it tells us where each element should appear in the result array. Next, we traverse the original array `nums` in reverse order, and for each element `num` perform the following two steps.
|
||||
|
||||
1. Fill `num` into the array `res` at the index `prefix[num] - 1`.
|
||||
2. Reduce the prefix sum `prefix[num]` by $1$, thus obtaining the next index to place `num`.
|
||||
|
||||
After the traversal, the array `res` contains the sorted result, and finally, `res` replaces the original array `nums`. The complete counting sort process is shown in the figures below.
|
||||
|
||||
=== "<1>"
|
||||
![Counting sort process](counting_sort.assets/counting_sort_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![counting_sort_step2](counting_sort.assets/counting_sort_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![counting_sort_step3](counting_sort.assets/counting_sort_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![counting_sort_step4](counting_sort.assets/counting_sort_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![counting_sort_step5](counting_sort.assets/counting_sort_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![counting_sort_step6](counting_sort.assets/counting_sort_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![counting_sort_step7](counting_sort.assets/counting_sort_step7.png)
|
||||
|
||||
=== "<8>"
|
||||
![counting_sort_step8](counting_sort.assets/counting_sort_step8.png)
|
||||
|
||||
The implementation code of counting sort is shown below:
|
||||
|
||||
```src
|
||||
[file]{counting_sort}-[class]{}-[func]{counting_sort}
|
||||
```
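As an illustration of the prefix-sum idea above, a minimal Python sketch of the complete, stable counting sort might look like this (assuming non-negative integers; not the referenced source file):

```python
def counting_sort(nums: list[int]) -> None:
    """Complete counting sort: stable, assumes non-negative integers"""
    m = max(nums)
    counter = [0] * (m + 1)
    for num in nums:
        counter[num] += 1
    # Prefix sums: counter[num] now stores the index just past the last
    # occurrence of num in the sorted result
    for num in range(m):
        counter[num + 1] += counter[num]
    # Traverse nums in reverse and place each element at its final index
    res = [0] * len(nums)
    for num in reversed(nums):
        counter[num] -= 1        # prefix[num] - 1 is the target index
        res[counter[num]] = num
    nums[:] = res  # copy the result back into nums
```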
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity is $O(n + m)$, non-adaptive sort**: Involves traversing `nums` and `counter`, both using linear time. Generally, $n \gg m$, and the time complexity tends towards $O(n)$.
|
||||
- **Space complexity is $O(n + m)$, non-in-place sort**: Utilizes arrays `res` and `counter` of lengths $n$ and $m$ respectively.
|
||||
- **Stable sort**: Elements are filled into `res` while traversing `nums` from right to left, which preserves the relative order of equal elements and therefore yields a stable sort. In fact, traversing `nums` from left to right also produces a correct sorting result, but the outcome is not stable.
|
||||
|
||||
## Limitations
|
||||
|
||||
By now, you might find counting sort very clever, as it can achieve efficient sorting merely by counting quantities. However, the prerequisites for using counting sort are relatively strict.
|
||||
|
||||
**Counting sort is only suitable for non-negative integers**. If you want to apply it to other types of data, you need to ensure that these data can be converted to non-negative integers without changing the relative sizes of the elements. For example, for an array containing negative integers, you can first add a constant to all numbers, converting them all to positive numbers, and then convert them back after sorting is complete.
|
||||
|
||||
**Counting sort is suitable for large data volumes but small data ranges**. For example, in the above example, $m$ should not be too large, otherwise, it will occupy too much space. And when $n \ll m$, counting sort uses $O(m)$ time, which may be slower than $O(n \log n)$ sorting algorithms.
|
en/docs/chapter_sorting/heap_sort.md
# Heap sort
|
||||
|
||||
!!! tip
|
||||
|
||||
Before reading this section, please make sure you have completed the "Heap" chapter.
|
||||
|
||||
<u>Heap sort</u> is an efficient sorting algorithm based on the heap data structure. We can implement heap sort using the "heap creation" and "element extraction" operations we have already learned.
|
||||
|
||||
1. Input the array and establish a min-heap, where the smallest element is at the heap's top.
|
||||
2. Continuously perform the extraction operation, recording the extracted elements in sequence to obtain a sorted list from smallest to largest.
|
||||
|
||||
Although the above method is feasible, it requires an additional array to save the popped elements, which is somewhat space-consuming. In practice, we usually use a more elegant implementation.
|
||||
|
||||
## Algorithm flow
|
||||
|
||||
Suppose the array length is $n$; the heap sort process is as follows.
|
||||
|
||||
1. Input the array and establish a max-heap. After completion, the largest element is at the heap's top.
|
||||
2. Swap the top element of the heap (the first element) with the heap's bottom element (the last element). After the swap, reduce the heap's length by $1$ and increase the sorted elements count by $1$.
|
||||
3. Starting from the heap top, perform the sift-down operation from top to bottom. After the sift-down, the heap's property is restored.
|
||||
4. Repeat steps `2.` and `3.` Loop for $n - 1$ rounds to complete the sorting of the array.
|
||||
|
||||
!!! tip
|
||||
|
||||
In fact, the element extraction operation also includes steps `2.` and `3.`, with the addition of a popping element step.
|
||||
|
||||
=== "<1>"
|
||||
![Heap sort process](heap_sort.assets/heap_sort_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![heap_sort_step2](heap_sort.assets/heap_sort_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![heap_sort_step3](heap_sort.assets/heap_sort_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![heap_sort_step4](heap_sort.assets/heap_sort_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![heap_sort_step5](heap_sort.assets/heap_sort_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![heap_sort_step6](heap_sort.assets/heap_sort_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![heap_sort_step7](heap_sort.assets/heap_sort_step7.png)
|
||||
|
||||
=== "<8>"
|
||||
![heap_sort_step8](heap_sort.assets/heap_sort_step8.png)
|
||||
|
||||
=== "<9>"
|
||||
![heap_sort_step9](heap_sort.assets/heap_sort_step9.png)
|
||||
|
||||
=== "<10>"
|
||||
![heap_sort_step10](heap_sort.assets/heap_sort_step10.png)
|
||||
|
||||
=== "<11>"
|
||||
![heap_sort_step11](heap_sort.assets/heap_sort_step11.png)
|
||||
|
||||
=== "<12>"
|
||||
![heap_sort_step12](heap_sort.assets/heap_sort_step12.png)
|
||||
|
||||
In the code implementation, we used the sift-down function `sift_down()` from the "Heap" chapter. It is important to note that since the heap's length decreases as the maximum element is extracted, we need to add a length parameter $n$ to the `sift_down()` function to specify the current effective length of the heap. The code is shown below:
|
||||
|
||||
```src
|
||||
[file]{heap_sort}-[class]{}-[func]{heap_sort}
|
||||
```
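For illustration, a minimal Python sketch of heap sort with a `sift_down()` that takes the current heap length `n` might look as follows (a sketch under the max-heap convention described above, not the referenced source file):

```python
def sift_down(nums: list[int], n: int, i: int) -> None:
    """Sift node i downward within the first n elements (max-heap)"""
    while True:
        l, r, ma = 2 * i + 1, 2 * i + 2, i
        if l < n and nums[l] > nums[ma]:
            ma = l
        if r < n and nums[r] > nums[ma]:
            ma = r
        if ma == i:
            break  # node i is no smaller than its children: heap property holds
        nums[i], nums[ma] = nums[ma], nums[i]
        i = ma

def heap_sort(nums: list[int]) -> None:
    """Heap sort: build a max-heap, then repeatedly move the top to the end"""
    n = len(nums)
    # Build the heap: sift down all non-leaf nodes from bottom to top
    for i in range(n // 2 - 1, -1, -1):
        sift_down(nums, n, i)
    # Extract the maximum n - 1 times
    for i in range(n - 1, 0, -1):
        nums[0], nums[i] = nums[i], nums[0]  # move the current maximum to the end
        sift_down(nums, i, 0)                # restore the heap on [0, i)
```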
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity is $O(n \log n)$, non-adaptive sort**: The heap creation uses $O(n)$ time. Extracting the largest element from the heap takes $O(\log n)$ time, looping for $n - 1$ rounds.
|
||||
- **Space complexity is $O(1)$, in-place sort**: A few pointer variables use $O(1)$ space. The element swapping and heapifying operations are performed on the original array.
|
||||
- **Non-stable sort**: The relative positions of equal elements may change during the swapping of the heap's top and bottom elements.
|
en/docs/chapter_sorting/index.md
# Sorting
|
||||
|
||||
![Sorting](../assets/covers/chapter_sorting.jpg)
|
||||
|
||||
!!! abstract
|
||||
|
||||
Sorting is like a magical key that turns chaos into order, enabling us to understand and handle data in a more efficient manner.
|
||||
|
||||
Whether it's simple ascending order or complex categorical arrangements, sorting reveals the harmonious beauty of data.
|
en/docs/chapter_sorting/insertion_sort.md
# Insertion sort
|
||||
|
||||
<u>Insertion sort</u> is a simple sorting algorithm that works very much like the process of manually sorting a deck of cards.
|
||||
|
||||
Specifically, we select a base element from the unsorted interval, compare it with the elements in the sorted interval to its left, and insert it into the correct position.

The figure below shows the process of inserting an element into an array. Assuming the base element is `base`, we need to move all elements between the target index and `base` one position to the right, then assign `base` to the target index.
|
||||
|
||||
![Single insertion operation](insertion_sort.assets/insertion_operation.png)
|
||||
|
||||
## Algorithm process
|
||||
|
||||
The overall process of insertion sort is shown in the following figure.
|
||||
|
||||
1. Initially, the first element of the array is sorted.
|
||||
2. The second element of the array is taken as `base`, and after inserting it into the correct position, **the first two elements of the array are sorted**.
|
||||
3. The third element is taken as `base`, and after inserting it into the correct position, **the first three elements of the array are sorted**.
|
||||
4. And so on, in the last round, the last element is taken as `base`, and after inserting it into the correct position, **all elements are sorted**.
|
||||
|
||||
![Insertion sort process](insertion_sort.assets/insertion_sort_overview.png)
|
||||
|
||||
Example code is as follows:
|
||||
|
||||
```src
|
||||
[file]{insertion_sort}-[class]{}-[func]{insertion_sort}
|
||||
```
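A minimal illustrative Python sketch of insertion sort (not the referenced source file) might be:

```python
def insertion_sort(nums: list[int]) -> None:
    """Insertion sort: grow the sorted interval on the left one element at a time"""
    for i in range(1, len(nums)):
        base = nums[i]
        j = i - 1
        # Shift elements greater than base one position to the right
        while j >= 0 and nums[j] > base:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = base  # insert base at the correct position
```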
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity is $O(n^2)$, adaptive sorting**: In the worst case, each insertion operation requires $n - 1$, $n-2$, ..., $2$, $1$ loops, summing up to $(n - 1) n / 2$, thus the time complexity is $O(n^2)$. In the case of ordered data, the insertion operation will terminate early. When the input array is completely ordered, insertion sort achieves the best time complexity of $O(n)$.
|
||||
- **Space complexity is $O(1)$, in-place sorting**: Pointers $i$ and $j$ use a constant amount of extra space.
|
||||
- **Stable sorting**: During the insertion operation, we insert elements to the right of equal elements, not changing their order.
|
||||
|
||||
## Advantages of insertion sort
|
||||
|
||||
The time complexity of insertion sort is $O(n^2)$, while the time complexity of quicksort, which we will study next, is $O(n \log n)$. Although insertion sort has a higher time complexity, **it is usually faster in cases of small data volumes**.
|
||||
|
||||
This conclusion is similar to the one for linear versus binary search. Algorithms like quick sort, which have a time complexity of $O(n \log n)$ and are based on the divide-and-conquer strategy, often involve more unit operations. For small data volumes, $n^2$ and $n \log n$ are numerically close, so complexity is not the dominant factor; the number of unit operations performed in each round plays the decisive role.
|
||||
|
||||
In fact, many programming languages (such as Java) use insertion sort in their built-in sorting functions. The general approach is: for long arrays, use sorting algorithms based on divide-and-conquer strategies, such as quicksort; for short arrays, use insertion sort directly.
|
||||
|
||||
Although bubble sort, selection sort, and insertion sort all have a time complexity of $O(n^2)$, in practice, **insertion sort is used significantly more frequently than bubble sort and selection sort**, mainly for the following reasons.
|
||||
|
||||
- Bubble sort is based on element swapping, which requires the use of a temporary variable, involving 3 unit operations; insertion sort is based on element assignment, requiring only 1 unit operation. Therefore, **the computational overhead of bubble sort is generally higher than that of insertion sort**.
|
||||
- The time complexity of selection sort is always $O(n^2)$. **Given a set of partially ordered data, insertion sort is usually more efficient than selection sort**.
|
||||
- Selection sort is unstable and cannot be applied to multi-level sorting.
|
en/docs/chapter_sorting/merge_sort.md
# Merge sort
|
||||
|
||||
<u>Merge sort</u> is a sorting algorithm based on the divide-and-conquer strategy, involving the "divide" and "merge" phases shown in the following figure.
|
||||
|
||||
1. **Divide phase**: Recursively split the array from the midpoint, transforming the sorting problem of a long array into that of shorter arrays.
|
||||
2. **Merge phase**: Stop dividing when the length of the sub-array is 1, start merging, and continuously combine two shorter ordered arrays into one longer ordered array until the process is complete.
|
||||
|
||||
![The divide and merge phases of merge sort](merge_sort.assets/merge_sort_overview.png)
|
||||
|
||||
## Algorithm workflow
|
||||
|
||||
As shown in the figure below, the "divide phase" recursively splits the array from the midpoint into two sub-arrays from top to bottom.
|
||||
|
||||
1. Calculate the midpoint `mid`, recursively divide the left sub-array (interval `[left, mid]`) and the right sub-array (interval `[mid + 1, right]`).
|
||||
2. Recursively continue performing step `1.` until the sub-array interval length reaches $1$, then stop dividing.
|
||||
|
||||
The "merge phase" combines the left and right sub-arrays into a single ordered array from bottom to top. Note that merging starts with sub-arrays of length 1, and each sub-array is ordered during the merge phase.
|
||||
|
||||
=== "<1>"
|
||||
![Merge sort process](merge_sort.assets/merge_sort_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![merge_sort_step2](merge_sort.assets/merge_sort_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![merge_sort_step3](merge_sort.assets/merge_sort_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![merge_sort_step4](merge_sort.assets/merge_sort_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![merge_sort_step5](merge_sort.assets/merge_sort_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![merge_sort_step6](merge_sort.assets/merge_sort_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![merge_sort_step7](merge_sort.assets/merge_sort_step7.png)
|
||||
|
||||
=== "<8>"
|
||||
![merge_sort_step8](merge_sort.assets/merge_sort_step8.png)
|
||||
|
||||
=== "<9>"
|
||||
![merge_sort_step9](merge_sort.assets/merge_sort_step9.png)
|
||||
|
||||
=== "<10>"
|
||||
![merge_sort_step10](merge_sort.assets/merge_sort_step10.png)
|
||||
|
||||
It is observed that the order of recursion in merge sort is consistent with the post-order traversal of a binary tree.
|
||||
|
||||
- **Post-order traversal**: First recursively traverse the left subtree, then the right subtree, and finally handle the root node.
|
||||
- **Merge sort**: First recursively handle the left sub-array, then the right sub-array, and finally perform the merge.
|
||||
|
||||
The implementation of merge sort is shown in the following code. Note that the interval to be merged in `nums` is `[left, right]`, while the corresponding interval in `tmp` is `[0, right - left]`.
|
||||
|
||||
```src
|
||||
[file]{merge_sort}-[class]{}-[func]{merge_sort}
|
||||
```
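For illustration, a minimal Python sketch of merge sort on the interval `[left, right]` might look like this (a sketch, not the referenced source file); it would be called as `merge_sort(nums, 0, len(nums) - 1)`:

```python
def merge(nums: list[int], left: int, mid: int, right: int) -> None:
    """Merge the sorted sub-arrays nums[left..mid] and nums[mid+1..right]"""
    tmp = nums[left:right + 1]       # auxiliary copy of the whole interval
    i, j = 0, mid - left + 1         # starts of the two halves within tmp
    for k in range(left, right + 1):
        if i > mid - left:                           # left half exhausted
            nums[k] = tmp[j]; j += 1
        elif j > right - left or tmp[i] <= tmp[j]:   # take from the left half (stable)
            nums[k] = tmp[i]; i += 1
        else:                                        # take from the right half
            nums[k] = tmp[j]; j += 1

def merge_sort(nums: list[int], left: int, right: int) -> None:
    """Recursively divide, then merge"""
    if left >= right:
        return                       # interval of length 1: already sorted
    mid = (left + right) // 2
    merge_sort(nums, left, mid)
    merge_sort(nums, mid + 1, right)
    merge(nums, left, mid, right)
```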
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity of $O(n \log n)$, non-adaptive sort**: The division creates a recursion tree of height $\log n$, with each layer merging a total of $n$ operations, resulting in an overall time complexity of $O(n \log n)$.
|
||||
- **Space complexity of $O(n)$, non-in-place sort**: The recursion depth is $\log n$, using $O(\log n)$ stack frame space. The merging operation requires auxiliary arrays, using an additional space of $O(n)$.
|
||||
- **Stable sort**: During the merging process, the order of equal elements remains unchanged.
|
||||
|
||||
## Linked list sorting
|
||||
|
||||
For linked lists, merge sort has significant advantages over other sorting algorithms, **optimizing the space complexity of the linked list sorting task to $O(1)$**.
|
||||
|
||||
- **Divide phase**: "Iteration" can be used instead of "recursion" to perform the linked list division work, thus saving the stack frame space used by recursion.
|
||||
- **Merge phase**: In linked lists, node addition and deletion operations can be achieved by changing references (pointers), so no extra lists need to be created during the merge phase (combining two short ordered lists into one long ordered list).
|
||||
|
||||
Detailed implementation details are complex, and interested readers can consult related materials for learning.
|
en/docs/chapter_sorting/quick_sort.md
# Quick sort
|
||||
|
||||
<u>Quick sort</u> is a sorting algorithm based on the divide and conquer strategy, known for its efficiency and wide application.
|
||||
|
||||
The core operation of quick sort is "pivot partitioning," whose goal is to select an element of the array as the "pivot," move all elements smaller than the pivot to its left, and move all elements greater than the pivot to its right. Specifically, the pivot partitioning process is illustrated as follows.
|
||||
|
||||
1. Select the leftmost element of the array as the pivot, and initialize two pointers `i` and `j` at both ends of the array.
|
||||
2. Set up a loop where each round uses `i` (`j`) to find the first element larger (smaller) than the pivot, then swap these two elements.
|
||||
3. Repeat step `2.` until `i` and `j` meet, finally swap the pivot to the boundary between the two sub-arrays.
|
||||
|
||||
=== "<1>"
|
||||
![Pivot division process](quick_sort.assets/pivot_division_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![pivot_division_step2](quick_sort.assets/pivot_division_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![pivot_division_step3](quick_sort.assets/pivot_division_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![pivot_division_step4](quick_sort.assets/pivot_division_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![pivot_division_step5](quick_sort.assets/pivot_division_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![pivot_division_step6](quick_sort.assets/pivot_division_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![pivot_division_step7](quick_sort.assets/pivot_division_step7.png)
|
||||
|
||||
=== "<8>"
|
||||
![pivot_division_step8](quick_sort.assets/pivot_division_step8.png)
|
||||
|
||||
=== "<9>"
|
||||
![pivot_division_step9](quick_sort.assets/pivot_division_step9.png)
|
||||
|
||||
After the pivot partitioning, the original array is divided into three parts: left sub-array, pivot, and right sub-array, satisfying "any element in the left sub-array $\leq$ pivot $\leq$ any element in the right sub-array." Therefore, we only need to sort these two sub-arrays next.
|
||||
|
||||
!!! note "Quick sort's divide and conquer strategy"
|
||||
|
||||
The essence of pivot partitioning is to simplify a longer array's sorting problem into two shorter arrays' sorting problems.
|
||||
|
||||
```src
|
||||
[file]{quick_sort}-[class]{quick_sort}-[func]{partition}
|
||||
```
|
||||
|
||||
## Algorithm process
|
||||
|
||||
The overall process of quick sort is shown in the following figure.
|
||||
|
||||
1. First, perform a "pivot partitioning" on the original array to obtain the unsorted left and right sub-arrays.
|
||||
2. Then, recursively perform "pivot partitioning" on both the left and right sub-arrays.
|
||||
3. Continue recursively until the sub-array length reaches 1, thus completing the sorting of the entire array.
|
||||
|
||||
![Quick sort process](quick_sort.assets/quick_sort_overview.png)
|
||||
|
||||
```src
|
||||
[file]{quick_sort}-[class]{quick_sort}-[func]{quick_sort}
|
||||
```
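For illustration, a minimal Python sketch of pivot partitioning and the recursive quick sort (taking `nums[left]` as the pivot and searching from right to left first) might look as follows; it is a sketch, not the referenced source files, and is called as `quick_sort(nums, 0, len(nums) - 1)`:

```python
def partition(nums: list[int], left: int, right: int) -> int:
    """Pivot partitioning with nums[left] as the pivot"""
    i, j = left, right
    while i < j:
        # Search from right to left for the first element smaller than the pivot
        while i < j and nums[j] >= nums[left]:
            j -= 1
        # Then search from left to right for the first element larger than the pivot
        while i < j and nums[i] <= nums[left]:
            i += 1
        nums[i], nums[j] = nums[j], nums[i]
    # Swap the pivot to the boundary between the two sub-arrays
    nums[left], nums[i] = nums[i], nums[left]
    return i  # index of the pivot

def quick_sort(nums: list[int], left: int, right: int) -> None:
    """Recursively partition, then sort both sub-arrays"""
    if left >= right:
        return  # interval of length 0 or 1: already sorted
    pivot = partition(nums, left, right)
    quick_sort(nums, left, pivot - 1)
    quick_sort(nums, pivot + 1, right)
```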
|
||||
|
||||
## Algorithm features
|
||||
|
||||
- **Time complexity of $O(n \log n)$, adaptive sorting**: In average cases, the recursive levels of pivot partitioning are $\log n$, and the total number of loops per level is $n$, using $O(n \log n)$ time overall. In the worst case, each round of pivot partitioning divides an array of length $n$ into two sub-arrays of lengths $0$ and $n - 1$, reaching $n$ recursive levels, and using $O(n^2)$ time overall.
|
||||
- **Space complexity of $O(n)$, in-place sorting**: For a completely reversed input array, the recursion depth reaches its worst case of $n$, using $O(n)$ stack frame space. The sorting operation itself is performed on the original array without the aid of additional arrays.
|
||||
- **Non-stable sorting**: In the final step of pivot partitioning, the pivot may be swapped to the right of equal elements.
|
||||
|
||||
## Why is quick sort fast
|
||||
|
||||
From its name, it is apparent that quick sort should have certain efficiency advantages. Although the average time complexity of quick sort is the same as "merge sort" and "heap sort," quick sort is generally more efficient, mainly for the following reasons.
|
||||
|
||||
- **Low probability of worst-case scenarios**: Although the worst-case time complexity of quick sort is $O(n^2)$, which makes it less predictable than merge sort in this respect, in the vast majority of cases quick sort runs at the $O(n \log n)$ level of time complexity.
|
||||
- **High cache usage efficiency**: During the pivot partitioning operation, the system can load the entire sub-array into the cache, thus accessing elements more efficiently. In contrast, algorithms like "heap sort" need to access elements in a jumping manner, lacking this feature.
|
||||
- **Small constant coefficient of complexity**: Among the mentioned algorithms, quick sort has the fewest total number of comparisons, assignments, and swaps. This is similar to why "insertion sort" is faster than "bubble sort."
|
||||
|
||||
## Pivot optimization
|
||||
|
||||
**Quick sort's time efficiency may decrease under certain inputs**. For example, if the input array is completely reversed, since we select the leftmost element as the pivot, after the pivot partitioning, the pivot is swapped to the array's right end, causing the left sub-array length to be $n - 1$ and the right sub-array length to be $0$. If this recursion continues, each round of pivot partitioning will have a sub-array length of $0$, and the divide and conquer strategy fails, degrading quick sort to a form similar to "bubble sort."
|
||||
|
||||
To avoid this situation, **we can optimize the strategy for selecting the pivot in the pivot partitioning**. For instance, we can randomly select an element as the pivot. However, if luck is not on our side, and we keep selecting suboptimal pivots, the efficiency is still not satisfactory.
|
||||
|
||||
It's important to note that programming languages usually generate "pseudo-random numbers". If we construct a specific test case for a pseudo-random number sequence, the efficiency of quick sort may still degrade.
|
||||
|
||||
For further improvement, we can select three candidate elements (usually the first, last, and midpoint elements of the array), **and use the median of these three candidate elements as the pivot**. This significantly increases the probability that the pivot is "neither too small nor too large". Of course, we can also select more candidate elements to further enhance the algorithm's robustness. Using this method significantly reduces the probability of time complexity degradation to $O(n^2)$.
|
||||
|
||||
Sample code is as follows:
|
||||
|
||||
```src
|
||||
[file]{quick_sort}-[class]{quick_sort_median}-[func]{partition}
|
||||
```
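A minimal Python sketch of median-of-three pivot selection might look like this; `median_three()` and `partition_median()` are illustrative names, not the referenced source file:

```python
def median_three(nums: list[int], left: int, mid: int, right: int) -> int:
    """Return the index of the median of nums[left], nums[mid], nums[right]"""
    l, m, r = nums[left], nums[mid], nums[right]
    if (l <= m <= r) or (r <= m <= l):
        return mid
    if (m <= l <= r) or (r <= l <= m):
        return left
    return right

def partition_median(nums: list[int], left: int, right: int) -> int:
    """Pivot partitioning that first swaps the median of three candidates to nums[left]"""
    med = median_three(nums, left, (left + right) // 2, right)
    nums[left], nums[med] = nums[med], nums[left]
    # Proceed as in the basic partition, using nums[left] as the pivot
    i, j = left, right
    while i < j:
        while i < j and nums[j] >= nums[left]:
            j -= 1
        while i < j and nums[i] <= nums[left]:
            i += 1
        nums[i], nums[j] = nums[j], nums[i]
    nums[left], nums[i] = nums[i], nums[left]
    return i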
|
||||
|
||||
## Tail recursion optimization
|
||||
|
||||
**Under certain inputs, quick sort may occupy more space**. For a completely ordered input array, assume the sub-array length in recursion is $m$, each round of pivot partitioning produces a left sub-array of length $0$ and a right sub-array of length $m - 1$, meaning the problem size reduced per recursive call is very small (only one element), and the height of the recursion tree can reach $n - 1$, requiring $O(n)$ stack frame space.
|
||||
|
||||
To prevent the accumulation of stack frame space, we can compare the lengths of the two sub-arrays after each round of pivot partitioning, **and only recursively sort the shorter sub-array**. Since the length of the shorter sub-array will not exceed $n / 2$, this method ensures that the recursion depth does not exceed $\log n$, thus optimizing the worst space complexity to $O(\log n)$. The code is as follows:
|
||||
|
||||
```src
|
||||
[file]{quick_sort}-[class]{quick_sort_tail_call}-[func]{quick_sort}
|
||||
```
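For illustration, a minimal Python sketch of this optimization, reusing the `partition()` sketch shown earlier, might be:

```python
def quick_sort_tail_call(nums: list[int], left: int, right: int) -> None:
    """Quick sort that only recurses into the shorter sub-array"""
    while left < right:
        pivot = partition(nums, left, right)  # reuses the partition() sketch above
        if pivot - left < right - pivot:
            quick_sort_tail_call(nums, left, pivot - 1)   # recurse into the shorter left part
            left = pivot + 1    # continue the loop on the right part
        else:
            quick_sort_tail_call(nums, pivot + 1, right)  # recurse into the shorter right part
            right = pivot - 1   # continue the loop on the left part
```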
|
en/docs/chapter_sorting/radix_sort.md
# Radix sort
|
||||
|
||||
The previous section introduced counting sort, which is suitable for scenarios where the data volume $n$ is large but the data range $m$ is small. Suppose we need to sort $n = 10^6$ student IDs, where each ID is an $8$-digit number. This means the data range $m = 10^8$ is very large, requiring a significant amount of memory space for counting sort, while radix sort can avoid this situation.
|
||||
|
||||
<u>Radix sort</u> shares its core idea with counting sort: it also sorts by counting the frequency of elements. Building on this, radix sort exploits the progressive relationship between the digits of numbers, sorting on each digit in turn to achieve the final sorted order.
|
||||
|
||||
## Algorithm process
|
||||
|
||||
Taking the student ID data as an example, assuming the least significant digit is the $1^{st}$ and the most significant is the $8^{th}$, the radix sort process is illustrated in the following diagram.
|
||||
|
||||
1. Initialize digit $k = 1$.
|
||||
2. Perform "counting sort" on the $k^{th}$ digit of the student IDs. After completion, the data will be sorted from smallest to largest based on the $k^{th}$ digit.
|
||||
3. Increment $k$ by $1$, then return to step `2.` and continue iterating until all digits have been sorted, then the process ends.
|
||||
|
||||
![Radix sort algorithm process](radix_sort.assets/radix_sort_overview.png)
|
||||
|
||||
Below we dissect the code implementation. For a number $x$ in base $d$, to obtain its $k^{th}$ digit $x_k$, the following calculation formula can be used:
|
||||
|
||||
$$
|
||||
x_k = \lfloor\frac{x}{d^{k-1}}\rfloor \bmod d
|
||||
$$
|
||||
|
||||
Where $\lfloor a \rfloor$ denotes rounding the floating point number $a$ down to an integer, and $\bmod \: d$ denotes taking the remainder modulo $d$. For the student ID data, $d = 10$ and $k \in [1, 8]$.
|
||||
|
||||
Additionally, we need to slightly modify the counting sort code to allow sorting based on the $k^{th}$ digit:
|
||||
|
||||
```src
|
||||
[file]{radix_sort}-[class]{}-[func]{radix_sort}
|
||||
```
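For illustration, a minimal Python sketch of radix sort for non-negative decimal integers might look like this (the helper names are illustrative, not the referenced source file):

```python
def digit(num: int, exp: int) -> int:
    """k-th digit of num, where exp = 10^(k-1)"""
    return (num // exp) % 10

def counting_sort_digit(nums: list[int], exp: int) -> None:
    """Stable counting sort on one decimal digit of the elements"""
    counter = [0] * 10
    for num in nums:
        counter[digit(num, exp)] += 1
    for i in range(1, 10):
        counter[i] += counter[i - 1]   # prefix sums
    res = [0] * len(nums)
    for num in reversed(nums):         # reverse traversal keeps the sort stable
        d = digit(num, exp)
        counter[d] -= 1
        res[counter[d]] = num
    nums[:] = res

def radix_sort(nums: list[int]) -> None:
    """Radix sort: counting-sort each digit, from least to most significant"""
    exp = 1
    while exp <= max(nums):
        counting_sort_digit(nums, exp)
        exp *= 10
```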
|
||||
|
||||
!!! question "Why start sorting from the least significant digit?"
|
||||
|
||||
In consecutive sorting rounds, the result of a later round will override the result of an earlier round. For example, if the result of the first round is $a < b$ and the result of the second round is $a > b$, the result of the second round will replace the first round's result. Since the significance of higher digits is greater than that of lower digits, it makes sense to sort lower digits before higher digits.
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
Compared to counting sort, radix sort is suitable for larger numerical ranges, **but it assumes that the data can be represented in a fixed number of digits, and the number of digits should not be too large**. For example, floating-point numbers are not suitable for radix sort, as their digit count $k$ may be large, potentially leading to a time complexity $O(nk) \gg O(n^2)$.
|
||||
|
||||
- **Time complexity is $O(nk)$, non-adaptive sorting**: Assuming the data size is $n$, the data is in base $d$, and the maximum number of digits is $k$, then sorting a single digit takes $O(n + d)$ time, and sorting all $k$ digits takes $O((n + d)k)$ time. Generally, both $d$ and $k$ are relatively small, leading to a time complexity approaching $O(n)$.
|
||||
- **Space complexity is $O(n + d)$, non-in-place sorting**: Like counting sort, radix sort relies on arrays `res` and `counter` of lengths $n$ and $d$ respectively.
|
||||
- **Stable sorting**: When counting sort is stable, radix sort is also stable; if counting sort is unstable, radix sort cannot guarantee a correct sorting outcome.
|
en/docs/chapter_sorting/selection_sort.md
# Selection sort
|
||||
|
||||
<u>Selection sort</u> works on a very simple principle: it starts a loop where each iteration selects the smallest element from the unsorted interval and moves it to the end of the sorted interval.
|
||||
|
||||
Suppose the length of the array is $n$; the algorithm flow of selection sort is shown below.
|
||||
|
||||
1. Initially, all elements are unsorted, i.e., the unsorted (index) interval is $[0, n-1]$.
|
||||
2. Select the smallest element in the interval $[0, n-1]$ and swap it with the element at index $0$. After this, the first element of the array is sorted.
|
||||
3. Select the smallest element in the interval $[1, n-1]$ and swap it with the element at index $1$. After this, the first two elements of the array are sorted.
|
||||
4. Continue in this manner. After $n - 1$ rounds of selection and swapping, the first $n - 1$ elements are sorted.
|
||||
5. The only remaining element is necessarily the largest element and does not need sorting, thus the array is sorted.
|
||||
|
||||
=== "<1>"
|
||||
![Selection sort process](selection_sort.assets/selection_sort_step1.png)
|
||||
|
||||
=== "<2>"
|
||||
![selection_sort_step2](selection_sort.assets/selection_sort_step2.png)
|
||||
|
||||
=== "<3>"
|
||||
![selection_sort_step3](selection_sort.assets/selection_sort_step3.png)
|
||||
|
||||
=== "<4>"
|
||||
![selection_sort_step4](selection_sort.assets/selection_sort_step4.png)
|
||||
|
||||
=== "<5>"
|
||||
![selection_sort_step5](selection_sort.assets/selection_sort_step5.png)
|
||||
|
||||
=== "<6>"
|
||||
![selection_sort_step6](selection_sort.assets/selection_sort_step6.png)
|
||||
|
||||
=== "<7>"
|
||||
![selection_sort_step7](selection_sort.assets/selection_sort_step7.png)
|
||||
|
||||
=== "<8>"
|
||||
![selection_sort_step8](selection_sort.assets/selection_sort_step8.png)
|
||||
|
||||
=== "<9>"
|
||||
![selection_sort_step9](selection_sort.assets/selection_sort_step9.png)
|
||||
|
||||
=== "<10>"
|
||||
![selection_sort_step10](selection_sort.assets/selection_sort_step10.png)
|
||||
|
||||
=== "<11>"
|
||||
![selection_sort_step11](selection_sort.assets/selection_sort_step11.png)
|
||||
|
||||
In the code, we use $k$ to record the index of the smallest element within the unsorted interval:
|
||||
|
||||
```src
|
||||
[file]{selection_sort}-[class]{}-[func]{selection_sort}
|
||||
```
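A minimal illustrative Python sketch of selection sort (not the referenced source file) might be:

```python
def selection_sort(nums: list[int]) -> None:
    """Selection sort: repeatedly append the minimum of the unsorted interval to the sorted interval"""
    n = len(nums)
    for i in range(n - 1):
        # k records the index of the smallest element in [i, n - 1]
        k = i
        for j in range(i + 1, n):
            if nums[j] < nums[k]:
                k = j
        # Swap the smallest element to the end of the sorted interval
        nums[i], nums[k] = nums[k], nums[i]
```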
|
||||
|
||||
## Algorithm characteristics
|
||||
|
||||
- **Time complexity of $O(n^2)$, non-adaptive sort**: There are $n - 1$ rounds in the outer loop, with the unsorted interval length starting at $n$ in the first round and decreasing to $2$ in the last round, i.e., the outer loops contain $n$, $n - 1$, $\dots$, $3$, $2$ inner loops respectively, summing up to $\frac{(n - 1)(n + 2)}{2}$.
|
||||
- **Space complexity of $O(1)$, in-place sort**: Uses constant extra space with pointers $i$ and $j$.
|
||||
- **Non-stable sort**: As shown in the figure below, an element `nums[i]` may be swapped to the right of an equal element, causing their relative order to change.
|
||||
|
||||
![Selection sort instability example](selection_sort.assets/selection_sort_instability.png)
|
en/docs/chapter_sorting/sorting_algorithm.md
# Sorting algorithms
|
||||
|
||||
<u>Sorting algorithms</u> are used to arrange a set of data in a specific order. Sorting algorithms have a wide range of applications because ordered data can usually be searched, analyzed, and processed more efficiently.
|
||||
|
||||
As shown in the following figure, the data types in sorting algorithms can be integers, floating point numbers, characters, or strings, etc. Sorting rules can be set according to needs, such as numerical size, character ASCII order, or custom rules.
|
||||
|
||||
![Data types and comparator examples](sorting_algorithm.assets/sorting_examples.png)
|
||||
|
||||
## Evaluation dimensions
|
||||
|
||||
**Execution efficiency**: We expect the time complexity of sorting algorithms to be as low as possible, with a lower number of overall operations (reduction in the constant factor of time complexity). For large data volumes, execution efficiency is particularly important.
|
||||
|
||||
**In-place property**: As the name implies, <u>in-place sorting</u> is achieved by directly manipulating the original array, without the need for additional auxiliary arrays, thus saving memory. Generally, in-place sorting involves fewer data movement operations and is faster.
|
||||
|
||||
**Stability**: <u>Stable sorting</u> ensures that the relative order of equal elements in the array does not change after sorting.
|
||||
|
||||
Stable sorting is a necessary condition for multi-level sorting scenarios. Suppose we have a table storing student information, with the first and second columns being name and age, respectively. In this case, <u>unstable sorting</u> might cause the ordering already present in the input data to be lost:
|
||||
|
||||
```shell
|
||||
# Input data is sorted by name
|
||||
# (name, age)
|
||||
('A', 19)
|
||||
('B', 18)
|
||||
('C', 21)
|
||||
('D', 19)
|
||||
('E', 23)
|
||||
|
||||
# Assuming an unstable sorting algorithm is used to sort the list by age,
|
||||
# the result changes the relative position of ('D', 19) and ('A', 19),
|
||||
# and the property of the input data being sorted by name is lost
|
||||
('B', 18)
|
||||
('D', 19)
|
||||
('A', 19)
|
||||
('C', 21)
|
||||
('E', 23)
|
||||
```
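As a concrete illustration of this point (not part of the original example), Python's built-in `sorted()` is a stable sort, so sorting the name-ordered records above by age keeps records with equal ages in their original name order:

```python
records = [('A', 19), ('B', 18), ('C', 21), ('D', 19), ('E', 23)]  # sorted by name
# sorted() in Python is stable, so records with equal age keep their name order
by_age = sorted(records, key=lambda r: r[1])
print(by_age)  # [('B', 18), ('A', 19), ('D', 19), ('C', 21), ('E', 23)]
```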
|
||||
|
||||
**Adaptability**: <u>Adaptive sorting</u> has a time complexity that depends on the input data, i.e., the best time complexity, worst time complexity, and average time complexity are not exactly equal.
|
||||
|
||||
Adaptability needs to be assessed according to the specific situation. If the worst time complexity is worse than the average, it suggests that the performance of the sorting algorithm might deteriorate under certain data, hence it is seen as a negative attribute; whereas, if the best time complexity is better than the average, it is considered a positive attribute.
|
||||
|
||||
**Comparison-based**: <u>Comparison-based sorting</u> relies on comparison operators ($<$, $=$, $>$) to determine the relative order of elements and thus sort the entire array, with the theoretical optimal time complexity being $O(n \log n)$. Meanwhile, <u>non-comparison sorting</u> does not use comparison operators and can achieve a time complexity of $O(n)$, but its versatility is relatively poor.
|
||||
|
||||
## Ideal sorting algorithm
|
||||
|
||||
**Fast execution, in-place, stable, positively adaptive, and versatile**. Clearly, no sorting algorithm that combines all these features has been found to date. Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.
|
||||
|
||||
Next, we will learn about various sorting algorithms together and analyze the advantages and disadvantages of each based on the above evaluation dimensions.
|
en/docs/chapter_sorting/summary.md
# Summary
|
||||
|
||||
### Key review
|
||||
|
||||
- Bubble sort works by swapping adjacent elements. By adding a flag to enable early return, we can optimize the best-case time complexity of bubble sort to $O(n)$.
|
||||
- Insertion sort sorts each round by inserting elements from the unsorted interval into the correct position in the sorted interval. Although the time complexity of insertion sort is $O(n^2)$, it is very popular in sorting small amounts of data due to relatively fewer operations per unit.
|
||||
- Quick sort is based on the pivot partitioning operation. In pivot partitioning, it is possible to keep picking the worst pivot, degrading the time complexity to $O(n^2)$. Introducing median-of-three or random pivots can reduce the probability of such degradation. Tail recursion optimization can effectively limit the recursion depth, optimizing the space complexity to $O(\log n)$.
|
||||
- Merge sort includes dividing and merging two phases, typically embodying the divide-and-conquer strategy. In merge sort, sorting an array requires creating auxiliary arrays, resulting in a space complexity of $O(n)$; however, the space complexity for sorting a list can be optimized to $O(1)$.
|
||||
- Bucket sort consists of three steps: data bucketing, sorting within buckets, and merging results. It also embodies the divide-and-conquer strategy, suitable for very large datasets. The key to bucket sort is the even distribution of data.
|
||||
- Counting sort is a special case of bucket sort, which sorts by counting the occurrences of each data point. Counting sort is suitable for large datasets with a limited range of data and requires that the data can be converted to non-negative integers.
|
||||
- Radix sort sorts data by sorting digit by digit, requiring data to be represented as fixed-length numbers.
|
||||
- Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and positive adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.
|
||||
- The following figure compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.
|
||||
|
||||
![Sorting Algorithm Comparison](summary.assets/sorting_algorithms_comparison.png)
|
||||
|
||||
### Q & A
|
||||
|
||||
**Q**: When is the stability of sorting algorithms necessary?
|
||||
|
||||
In reality, we might sort based on one attribute of an object. For example, students have names and heights as attributes, and we aim to implement multi-level sorting: first by name to get `(A, 180) (B, 185) (C, 170) (D, 170)`; then by height. Because the sorting algorithm is unstable, we might end up with `(D, 170) (C, 170) (A, 180) (B, 185)`.
|
||||
|
||||
It can be seen that the positions of students D and C have been swapped, disrupting the orderliness of the names, which is undesirable.
|
||||
|
||||
**Q**: Can the order of "searching from right to left" and "searching from left to right" in pivot partitioning be swapped?
|
||||
|
||||
No, when using the leftmost element as the pivot, we must first "search from right to left" then "search from left to right". This conclusion is somewhat counterintuitive, so let's analyze the reason.
|
||||
|
||||
The last step of the pivot partition `partition()` is to swap `nums[left]` and `nums[i]`. After the swap, the elements to the left of the pivot are all `<=` the pivot, **which requires that `nums[left] >= nums[i]` must hold before the last swap**. Suppose we "search from left to right" first, then if no element larger than the pivot is found, **we will exit the loop when `i == j`, possibly with `nums[j] == nums[i] > nums[left]`**. In other words, the final swap operation will exchange an element larger than the pivot to the left end of the array, causing the pivot partition to fail.
|
||||
|
||||
For example, given the array `[0, 0, 0, 0, 1]`, if we first "search from left to right", the array after the pivot partition is `[1, 0, 0, 0, 0]`, which is incorrect.
|
||||
|
||||
Upon further consideration, if we choose `nums[right]` as the pivot, then exactly the opposite, we must first "search from left to right".
|
||||
|
||||
**Q**: Regarding tail recursion optimization, why does choosing the shorter array ensure that the recursion depth does not exceed $\log n$?
|
||||
|
||||
The recursion depth is the number of recursive calls that have not yet returned. Each round of pivot partitioning divides the original array into two sub-arrays. With tail recursion optimization, the length of the sub-array that we continue to recurse into is at most half of the original array length. Assuming the worst case always halves the length, the final recursion depth will be $\log n$.
|
||||
|
||||
Reviewing the original quick sort, we might keep recursing into the longer sub-array; in the worst case the sub-array lengths are $n$, $n - 1$, $\dots$, $2$, $1$, giving a recursion depth of $n$. Tail recursion optimization avoids this scenario.
|
||||
|
||||
**Q**: When all elements in the array are equal, is the time complexity of quicksort $O(n^2)$? How should this degenerate case be handled?
|
||||
|
||||
Yes. For this situation, consider extending pivot partitioning to divide the array into three parts: elements less than, equal to, and greater than the pivot, and recursively proceed only with the "less than" and "greater than" parts. With this method, an array in which all elements are equal can be sorted in a single round of partitioning; a sketch of such a three-way partition is given below.
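A minimal Python sketch of such a three-way ("Dutch national flag" style) partition, taking `nums[left]` as the pivot, might look like this; the function name is illustrative:

```python
def partition_three_way(nums: list[int], left: int, right: int) -> tuple[int, int]:
    """Partition nums[left..right] into < pivot, == pivot, > pivot parts;
    return the bounds of the middle (== pivot) part"""
    pivot = nums[left]
    lt, i, gt = left, left, right
    while i <= gt:
        if nums[i] < pivot:
            nums[lt], nums[i] = nums[i], nums[lt]
            lt += 1
            i += 1
        elif nums[i] > pivot:
            nums[i], nums[gt] = nums[gt], nums[i]
            gt -= 1
        else:
            i += 1
    return lt, gt  # nums[lt..gt] are all equal to the pivot
```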
|
||||
|
||||
**Q**: Why is the worst-case time complexity of bucket sort $O(n^2)$?
|
||||
|
||||
In the worst case, all elements are placed in the same bucket. If we use an $O(n^2)$ algorithm to sort these elements, the time complexity will be $O(n^2)$.
|