Mirror of https://github.com/krahets/hello-algo.git (synced 2024-12-26 12:46:31 +08:00)

Commit e56cb78f28 (parent 1c26d6f475): build

13 changed files with 432 additions and 431 deletions
@ -13,9 +13,9 @@ icon: material/timer-sand

!!! abstract
Complexity analysis is like a space-time guide in the vast universe of algorithms.
It leads us to explore deeply in the dimensions of time and space, in search of more elegant solutions.
Complexity analysis is like a space-time navigator in the vast universe of algorithms.
It guides us in exploring deeper within the dimensions of time and space, seeking more elegant solutions.
## Chapter Contents
@ -2,19 +2,19 @@
comments: true
---
# 2.2 Iteration vs. Recursion
|
||||
# 2.2 Iteration and Recursion
|
||||
|
||||
In data structures and algorithms, it is common to repeat a task, which is closely related to the complexity of the algorithm. There are two basic program structures that we usually use to repeat a task: iteration and recursion.
|
||||
In algorithms, repeatedly performing a task is common and closely related to complexity analysis. Therefore, before introducing time complexity and space complexity, let's first understand how to implement task repetition in programs, focusing on two basic programming control structures: iteration and recursion.
|
||||
|
||||
## 2.2.1 Iteration
|
||||
|
||||
An "iteration iteration" is a control structure that repeats a task. In iteration, a program repeats the execution of a piece of code until the condition is no longer satisfied.
|
||||
"Iteration" is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met, until this condition is no longer satisfied.
|
||||
|
||||
### 1. For Loops
|
||||
### 1. for Loop
|
||||
|
||||
`for` loops are one of the most common forms of iteration, **suitable when the number of iterations is known in advance**.
|
||||
The `for` loop is one of the most common forms of iteration, **suitable for use when the number of iterations is known in advance**.
|
||||
|
||||
The following function implements the summation $1 + 2 + \dots + n$ based on a `for` loop, and the result is recorded using the variable `res`. Note that `range(a, b)` in Python corresponds to a "left-closed-right-open" interval, which is traversed in the range $a, a + 1, \dots, b-1$.
|
||||
The following function implements the sum $1 + 2 + \dots + n$ using a `for` loop, with the sum result recorded in the variable `res`. Note that in Python, `range(a, b)` corresponds to a "left-closed, right-open" interval, covering $a, a + 1, \dots, b-1$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -182,19 +182,19 @@ The following function implements the summation $1 + 2 + \dots + n$ based on a `
|
|||
}
|
||||
```
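For reference, a minimal Python sketch of this summation might look as follows (the function name `for_loop` is illustrative):

```python
def for_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n with a for loop"""
    res = 0
    # Iterate over 1, 2, ..., n (range is left-closed, right-open)
    for i in range(1, n + 1):
        res += i
    return res
```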
|
||||
|
||||
The Figure 2-1 shows the flow block diagram of this summation function.
|
||||
The flowchart below represents this sum function.
|
||||
|
||||
![Flow block diagram of the summation function](iteration_and_recursion.assets/iteration.png){ class="animation-figure" }
|
||||
![Flowchart of the Sum Function](iteration_and_recursion.assets/iteration.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-1 Flow block diagram of the summation function </p>
|
||||
<p align="center"> Figure 2-1 Flowchart of the Sum Function </p>
|
||||
|
||||
The number of operations in this summation function is proportional to the size of the input data $n$, or a "linear relationship". In fact, **time complexity describes this "linear relationship"**. This is described in more detail in the next section.
|
||||
The number of operations in this sum function is proportional to the input data size $n$, or in other words, it has a "linear relationship". This is actually what **time complexity describes**. This topic will be detailed in the next section.
|
||||
|
||||
### 2. While Loop
|
||||
### 2. while Loop
|
||||
|
||||
Similar to a `for` loop, a `while` loop is a way to implement iteration. In a `while` loop, the program first checks the condition at each turn, and if the condition is true, it continues, otherwise it ends the loop.
|
||||
Similar to the `for` loop, the `while` loop is another method to implement iteration. In a `while` loop, the program checks the condition in each round; if the condition is true, it continues, otherwise, the loop ends.
|
||||
|
||||
Below, we use a `while` loop to realize the summation $1 + 2 + \dots + n$ .
|
||||
Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -388,9 +388,9 @@ Below, we use a `while` loop to realize the summation $1 + 2 + \dots + n$ .
|
|||
}
|
||||
```
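A minimal Python sketch of the same summation using a `while` loop (the name `while_loop` is illustrative):

```python
def while_loop(n: int) -> int:
    """Sum 1 + 2 + ... + n with a while loop"""
    res = 0
    i = 1  # Initialize the condition variable
    # Keep looping while the condition holds
    while i <= n:
        res += i
        i += 1  # Update the condition variable
    return res
```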
|
||||
|
||||
In `while` loops, since the steps of initializing and updating condition variables are independent of the loop structure, **it has more degrees of freedom than `for` loops**.
|
||||
**The `while` loop is more flexible than the `for` loop**. In a `while` loop, we can freely design the initialization and update steps of the condition variable.
|
||||
|
||||
For example, in the following code, the condition variable $i$ is updated twice per round, which is not convenient to implement with a `for` loop.
|
||||
For example, in the following code, the condition variable $i$ is updated twice in each round, which would be inconvenient to implement with a `for` loop:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -607,11 +607,11 @@ For example, in the following code, the condition variable $i$ is updated twice
|
|||
}
|
||||
```
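A minimal Python sketch of such a loop, assuming the condition variable is updated with `i += 1` followed by `i *= 2` in each round (the name `while_loop_ii` is illustrative; the loop no longer sums $1$ to $n$):

```python
def while_loop_ii(n: int) -> int:
    """While loop that updates the condition variable twice per round"""
    res = 0
    i = 1
    while i <= n:
        res += i
        # The condition variable is updated twice in one round
        i += 1
        i *= 2
    return res
```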
|
||||
|
||||
Overall, **`for` loops have more compact code and `while` loops are more flexible**, and both can implement iteration structures. The choice of which one to use should be based on the needs of the particular problem.
|
||||
Overall, **`for` loops are more concise, while `while` loops are more flexible**. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.
|
||||
|
||||
### 3. Nested Loops
|
||||
|
||||
We can nest one loop structure inside another, using the `for` loop as an example:
|
||||
We can nest one loop structure within another. Below is an example using `for` loops:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -821,30 +821,30 @@ We can nest one loop structure inside another, using the `for` loop as an exampl
|
|||
}
|
||||
```
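A minimal Python sketch of two nested `for` loops (the name `nested_for_loop` and the string result are illustrative):

```python
def nested_for_loop(n: int) -> str:
    """Two nested for loops: the inner body runs n * n times"""
    res = ""
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            res += f"({i}, {j}), "  # Record every (i, j) pair
    return res
```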
|
||||
|
||||
The Figure 2-2 gives the block diagram of the flow of this nested loop.
|
||||
The flowchart below represents this nested loop.
|
||||
|
||||
![Block diagram of the flow of nested loops](iteration_and_recursion.assets/nested_iteration.png){ class="animation-figure" }
|
||||
![Flowchart of the Nested Loop](iteration_and_recursion.assets/nested_iteration.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-2 Block diagram of the flow of nested loops </p>
|
||||
<p align="center"> Figure 2-2 Flowchart of the Nested Loop </p>
|
||||
|
||||
In this case, the number of operations of the function is proportional to $n^2$, or the algorithm's running time is "squared" to the size of the input data $n$.
|
||||
In this case, the number of operations in the function is proportional to $n^2$, or the algorithm's running time and the input data size $n$ have a "quadratic relationship".
|
||||
|
||||
We can continue to add nested loops, and each nest is a "dimension up", which will increase the time complexity to "cubic relations", "quadratic relations", and so on.
|
||||
We can continue adding nested loops, each nesting is a "dimensional escalation," which will increase the time complexity to "cubic," "quartic," and so on.
|
||||
|
||||
## 2.2.2 Recursion
|
||||
|
||||
"Recursion recursion is an algorithmic strategy to solve a problem by calling the function itself. It consists of two main phases.
|
||||
"Recursion" is an algorithmic strategy that solves problems by having a function call itself. It mainly consists of two phases.
|
||||
|
||||
1. **recursive**: the program calls itself deeper and deeper, usually passing smaller or simpler arguments, until a "termination condition" is reached.
|
||||
2. **Recursion**: After the "termination condition" is triggered, the program returns from the deepest level of the recursion function, level by level, aggregating the results of each level.
|
||||
1. **Recursion**: The program continuously calls itself, usually with smaller or more simplified parameters, until reaching a "termination condition."
|
||||
2. **Return**: Upon triggering the "termination condition," the program begins to return from the deepest recursive function, aggregating the results of each layer.
|
||||
|
||||
And from an implementation point of view, recursion code contains three main elements.
|
||||
From an implementation perspective, recursive code mainly includes three elements.
|
||||
|
||||
1. **Termination condition**: used to decide when to switch from "recursive" to "inductive".
|
||||
2. **Recursion call**: corresponds to "recursion", where the function calls itself, usually with smaller or more simplified input parameters.
|
||||
3. **return result**: corresponds to "return", returning the result of the current recursion level to the previous one.
|
||||
1. **Termination Condition**: Determines when to switch from "recursion" to "return."
|
||||
2. **Recursive Call**: Corresponds to "recursion," where the function calls itself, usually with smaller or more simplified parameters.
|
||||
3. **Return Result**: Corresponds to "return," where the result of the current recursion level is returned to the previous layer.
|
||||
|
||||
Observe the following code, we only need to call the function `recur(n)` , and the calculation of $1 + 2 + \dots + n$ is done:
|
||||
Observe the following code, where calling the function `recur(n)` completes the computation of $1 + 2 + \dots + n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1026,45 +1026,45 @@ Observe the following code, we only need to call the function `recur(n)` , and t
|
|||
}
|
||||
```
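A minimal Python sketch of this recursive summation (the name `recur` is illustrative):

```python
def recur(n: int) -> int:
    """Sum 1 + 2 + ... + n via recursion"""
    # Termination condition
    if n == 1:
        return 1
    # Recursive call: decompose into a smaller subproblem
    res = recur(n - 1)
    # Return result: aggregate on the way back up
    return n + res
```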
|
||||
|
||||
The Figure 2-3 shows the recursion of the function.
|
||||
Figure 2-3 shows the recursive process of this function.
|
||||
|
||||
![Recursion process for the summation function](iteration_and_recursion.assets/recursion_sum.png){ class="animation-figure" }
|
||||
![Recursive Process of the Sum Function](iteration_and_recursion.assets/recursion_sum.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-3 Recursion process for the summation function </p>
|
||||
<p align="center"> Figure 2-3 Recursive Process of the Sum Function </p>
|
||||
|
||||
Although iteration and recursion can yield the same results from a computational point of view, **they represent two completely different paradigms for thinking about and solving problems**.
|
||||
Although iteration and recursion can achieve the same results from a computational standpoint, **they represent two entirely different paradigms of thinking and solving problems**.
|
||||
|
||||
- **Iteration**: solving problems "from the bottom up". Start with the most basic steps and repeat or add to them until the task is completed.
|
||||
- **Recursion**: solving problems "from the top down". The original problem is broken down into smaller subproblems that have the same form as the original problem. Next, the subproblem continues to be broken down into smaller subproblems until it stops at the base case (the solution to the base case is known).
|
||||
- **Iteration**: Solves problems "from the bottom up." It starts with the most basic steps, then repeatedly adds or accumulates these steps until the task is complete.
|
||||
- **Recursion**: Solves problems "from the top down." It breaks down the original problem into smaller sub-problems, each of which has the same form as the original problem. These sub-problems are then further decomposed into even smaller sub-problems, stopping at the base case (whose solution is known).
|
||||
|
||||
As an example of the above summation function, set the problem $f(n) = 1 + 2 + \dots + n$ .
|
||||
Taking the sum function as an example, let's define the problem as $f(n) = 1 + 2 + \dots + n$.
|
||||
|
||||
- **Iteration**: the summation process is simulated in a loop, iterating from $1$ to $n$ and executing the summation operation in each round to find $f(n)$.
|
||||
- **Recursion**: decompose the problem into subproblems $f(n) = n + f(n-1)$ and keep (recursively) decomposing until the base case $f(1) = 1$ terminates.
|
||||
- **Iteration**: In a loop, simulate the summing process, iterating from $1$ to $n$, performing the sum operation in each round, to obtain $f(n)$.
|
||||
- **Recursion**: Break down the problem into sub-problems $f(n) = n + f(n-1)$, continuously (recursively) decomposing until reaching the base case $f(1) = 1$ and then stopping.
|
||||
|
||||
### 1. Call The Stack
|
||||
### 1. Call Stack
|
||||
|
||||
Each time a recursion function calls itself, the system allocates memory for the newly opened function to store local variables, call addresses, other information, and so on. This results in two things.
|
||||
Each time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, call addresses, and other information. This leads to two main consequences.
|
||||
|
||||
- The context data for a function is stored in an area of memory called "stack frame space" and is not freed until the function returns. As a result, **recursion is usually more memory-intensive than iteration**.
|
||||
- Recursion calls to functions incur additional overhead. **Therefore recursion is usually less time efficient than loops**.
|
||||
- The function's context data is stored in a memory area called "stack frame space" and is only released after the function returns. Therefore, **recursion generally consumes more memory space than iteration**.
|
||||
- Recursive calls introduce additional overhead. **Hence, recursion is usually less time-efficient than loops**.
|
||||
|
||||
As shown in the Figure 2-4 , before the termination condition is triggered, there are $n$ unreturned recursion functions at the same time, **with a recursion depth of $n$** .
|
||||
As shown in Figure 2-4, there are $n$ unreturned recursive functions before triggering the termination condition, indicating a **recursion depth of $n$**.
|
||||
|
||||
![Recursion call depth](iteration_and_recursion.assets/recursion_sum_depth.png){ class="animation-figure" }
|
||||
![Recursion Call Depth](iteration_and_recursion.assets/recursion_sum_depth.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-4 Recursion call depth </p>
|
||||
<p align="center"> Figure 2-4 Recursion Call Depth </p>
|
||||
|
||||
In practice, the depth of recursion allowed by a programming language is usually limited, and too deep a recursion may result in a stack overflow error.
|
||||
In practice, the depth of recursion allowed by programming languages is usually limited, and excessively deep recursion can lead to stack overflow errors.
|
||||
|
||||
### 2. Tail Recursion
|
||||
|
||||
Interestingly, **if a function makes a recursion call only at the last step before returning**, the function can be optimized by the compiler or interpreter to be comparable to iteration in terms of space efficiency. This situation is called "tail recursion tail recursion".
|
||||
Interestingly, **if a function makes its recursive call as the last step before returning**, it can be optimized by compilers or interpreters to be as space-efficient as iteration. This scenario is known as "tail recursion".
|
||||
|
||||
- **Ordinary recursion**: when a function returns to a function at a higher level, it needs to continue executing the code, so the system needs to save the context of the previous call.
|
||||
- **tail recursion**: the recursion call is the last operation before the function returns, which means that the function does not need to continue with other operations after returning to the previous level, so the system does not need to save the context of the previous function.
|
||||
- **Regular Recursion**: The function needs to perform more code after returning to the previous level, so the system needs to save the context of the previous call.
|
||||
- **Tail Recursion**: The recursive call is the last operation before the function returns, meaning no further actions are required upon returning to the previous level, so the system doesn't need to save the context of the previous level's function.
|
||||
|
||||
In the case of calculating $1 + 2 + \dots + n$, for example, we can implement tail recursion by setting the result variable `res` as a function parameter.
|
||||
For example, in calculating $1 + 2 + \dots + n$, we can make the result variable `res` a parameter of the function, thereby achieving tail recursion:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1222,33 +1222,33 @@ In the case of calculating $1 + 2 + \dots + n$, for example, we can implement ta
|
|||
}
|
||||
```
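A minimal Python sketch of this idea, assuming the running sum is passed along as the parameter `res` (the name `tail_recur` and the default value are illustrative):

```python
def tail_recur(n: int, res: int = 0) -> int:
    """Tail-recursive sum: the recursive call is the last operation"""
    # Termination condition: every number has been accumulated into res
    if n == 0:
        return res
    # The addition happens before the call, so nothing remains to do after it returns
    return tail_recur(n - 1, res + n)
```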
|
||||
|
||||
The execution of tail recursion is shown in the Figure 2-5 . Comparing normal recursion and tail recursion, the execution point of the summation operation is different.
|
||||
The execution process of tail recursion is shown in Figure 2-5. Comparing regular recursion and tail recursion, the point at which the summation operation is performed differs.
|
||||
|
||||
- **Ordinary recursion**: the summing operation is performed during the "return" process, and the summing operation is performed again after returning from each level.
|
||||
- **Tail recursion**: the summing operation is performed in a "recursion" process, the "recursion" process simply returns in levels.
|
||||
- **Regular Recursion**: The summation operation occurs during the "return" phase, requiring another summation after each layer returns.
|
||||
- **Tail Recursion**: The summation operation occurs during the "recursion" phase, and the "return" phase only involves returning through each layer.
|
||||
|
||||
![tail recursion process](iteration_and_recursion.assets/tail_recursion_sum.png){ class="animation-figure" }
|
||||
![Tail Recursion Process](iteration_and_recursion.assets/tail_recursion_sum.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-5 tail recursion process </p>
|
||||
<p align="center"> Figure 2-5 Tail Recursion Process </p>
|
||||
|
||||
!!! tip
|
||||
|
||||
Note that many compilers or interpreters do not support tail recursion optimization. For example, Python does not support tail recursion optimization by default, so even if a function is tail recursive, you may still encounter stack overflow problems.
|
||||
Note that many compilers or interpreters do not support tail recursion optimization. For example, Python does not support tail recursion optimization by default, so even if the function is in the form of tail recursion, it may still encounter stack overflow issues.
|
||||
|
||||
### 3. Recursion Tree
|
||||
|
||||
When dealing with algorithmic problems related to divide and conquer, recursion is often more intuitive and easier to read than iteration. Take the Fibonacci sequence as an example.
|
||||
When dealing with algorithms related to "divide and conquer", recursion often offers a more intuitive approach and more readable code than iteration. Take the "Fibonacci sequence" as an example.
|
||||
|
||||
!!! question
|
||||
|
||||
Given a Fibonacci series $0, 1, 1, 2, 3, 5, 8, 13, \dots$ , find the $n$th number of the series.
|
||||
Given a Fibonacci sequence $0, 1, 1, 2, 3, 5, 8, 13, \dots$, find the $n$th number in the sequence.
|
||||
|
||||
Let the $n$th number of the Fibonacci series be $f(n)$ , which leads to two easy conclusions.
|
||||
Let the $n$th number of the Fibonacci sequence be $f(n)$; it's then easy to deduce two conclusions:
|
||||
|
||||
- The first two numbers of the series are $f(1) = 0$ and $f(2) = 1$.
|
||||
- Each number in the series is the sum of the previous two numbers, i.e. $f(n) = f(n - 1) + f(n - 2)$ .
|
||||
- The first two numbers of the sequence are $f(1) = 0$ and $f(2) = 1$.
|
||||
- Each number in the sequence is the sum of the two preceding ones, that is, $f(n) = f(n - 1) + f(n - 2)$.
|
||||
|
||||
Recursion code can be written by making recursion calls according to the recursion relationship, using the first two numbers as termination conditions. Call `fib(n)` to get the $n$th number of the Fibonacci series.
|
||||
Using the recursive relation, and considering the first two numbers as termination conditions, we can write the recursive code. Calling `fib(n)` will yield the $n$th number of the Fibonacci sequence:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1430,46 +1430,46 @@ Recursion code can be written by making recursion calls according to the recursi
|
|||
}
|
||||
```
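A minimal Python sketch of this recursion, assuming $f(1) = 0$ and $f(2) = 1$ as the termination conditions (the name `fib` is illustrative):

```python
def fib(n: int) -> int:
    """Return the n-th Fibonacci number, with f(1) = 0 and f(2) = 1"""
    # Termination condition: the first two numbers of the sequence
    if n == 1 or n == 2:
        return n - 1
    # Each call branches into two further recursive calls
    return fib(n - 1) + fib(n - 2)
```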
|
||||
|
||||
Looking at the above code, we have recursively called two functions within a function, **this means that from one call, two call branches are created**. As shown in the Figure 2-6 , this recursion will result in a recursion tree with the number of levels $n$.
|
||||
Observing the above code, we see that it makes two recursive calls to itself, **meaning that one call generates two branching calls**. As illustrated in Figure 2-6, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.
|
||||
|
||||
![Recursion tree for Fibonacci series](iteration_and_recursion.assets/recursion_tree.png){ class="animation-figure" }
|
||||
![Fibonacci Sequence Recursion Tree](iteration_and_recursion.assets/recursion_tree.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-6 Recursion tree for Fibonacci series </p>
|
||||
<p align="center"> Figure 2-6 Fibonacci Sequence Recursion Tree </p>
|
||||
|
||||
Essentially, recursion embodies the paradigm of "breaking down a problem into smaller sub-problems", and this divide and conquer strategy is essential.
|
||||
Fundamentally, recursion embodies the paradigm of "breaking down a problem into smaller sub-problems." This divide-and-conquer strategy is crucial.
|
||||
|
||||
- From an algorithmic point of view, many important algorithmic strategies such as searching, sorting algorithm, backtracking, divide and conquer, dynamic programming, etc. directly or indirectly apply this way of thinking.
|
||||
- From a data structure point of view, recursion is naturally suited to problems related to linked lists, trees and graphs because they are well suited to be analyzed with the idea of partitioning.
|
||||
- From an algorithmic perspective, many important strategies like searching, sorting, backtracking, divide-and-conquer, and dynamic programming directly or indirectly use this way of thinking.
|
||||
- From a data structure perspective, recursion is naturally suited for dealing with linked lists, trees, and graphs, as they are well suited for analysis using the divide-and-conquer approach.
|
||||
|
||||
## 2.2.3 Compare The Two
|
||||
## 2.2.3 Comparison
|
||||
|
||||
To summarize the above, as shown in the Table 2-1 , iteration and recursion differ in implementation, performance and applicability.
|
||||
Summarizing the above content, the following table shows the differences between iteration and recursion in terms of implementation, performance, and applicability.
|
||||
|
||||
<p align="center"> Table 2-1 Comparison of iteration and recursion features </p>
|
||||
<p align="center"> Table: Comparison of Iteration and Recursion Characteristics </p>
|
||||
|
||||
<div class="center-table" markdown>
|
||||
|
||||
| | iteration | recursion |
|
||||
| ------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| implementation | circular structure | function call itself |
|
||||
| time-efficient | typically efficient, no function call overhead | overhead on every function call |
|
||||
| Memory Usage | Usually uses a fixed size of memory space | Cumulative function calls may use a lot of stack frame space |
|
||||
| Applicable Problems | For simple cyclic tasks, code is intuitive and readable | For sub-problem decomposition, such as trees, graphs, divide and conquer, backtracking, etc., the code structure is concise and clear |
|
||||
| | Iteration | Recursion |
|
||||
| ----------------- | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| Approach | Loop structure | Function calls itself |
|
||||
| Time Efficiency | Generally higher efficiency, no function call overhead | Each function call generates overhead |
|
||||
| Memory Usage | Typically uses a fixed size of memory space | Accumulative function calls can use a substantial amount of stack frame space |
|
||||
| Suitable Problems | Suitable for simple loop tasks, intuitive and readable code | Suitable for problem decomposition, like trees, graphs, divide-and-conquer, backtracking, etc., concise and clear code structure |
|
||||
|
||||
</div>
|
||||
|
||||
!!! tip
|
||||
|
||||
If you find the following solutions difficult to understand, you can review them after reading the "Stack" chapter.
|
||||
If you find the following content difficult to understand, consider revisiting it after reading the "Stack" chapter.
|
||||
|
||||
So what is the intrinsic connection between iteration and recursion? In the case of the recursive function described above, the summing operation takes place in the "return" phase of the recursion. This means that the function that is initially called is actually the last to complete its summing operation, **This mechanism works in the same way as the stack's "first in, last out" principle**.
|
||||
So, what is the intrinsic connection between iteration and recursion? Taking the above recursive function as an example, the summation operation occurs during the recursion's "return" phase. This means that the initially called function is actually the last to complete its summation operation, **mirroring the "last in, first out" principle of a stack**.
|
||||
|
||||
In fact, recursion terms like "call stack" and "stack frame space" already imply a close relationship between recursion and the stack.
|
||||
In fact, recursive terms like "call stack" and "stack frame space" hint at the close relationship between recursion and stacks.
|
||||
|
||||
1. **Recursive**: When a function is called, the system allocates a new stack frame on the "call stack" for the function, which is used to store the function's local variables, parameters, return address, and other data.
|
||||
2. **Return to**: When a function completes execution and returns, the corresponding stack frame is removed from the "call stack", restoring the function's previous execution environment.
|
||||
1. **Recursion**: When a function is called, the system allocates a new stack frame on the "call stack" for that function, storing local variables, parameters, return addresses, and other data.
|
||||
2. **Return**: When a function completes execution and returns, the corresponding stack frame is removed from the "call stack," restoring the execution environment of the previous function.
|
||||
|
||||
Thus, **we can use an explicit stack to model the behavior of the call stack**, thus transforming recursion into an iteration form:
|
||||
Therefore, **we can use an explicit stack to simulate the behavior of the call stack**, thus transforming recursion into an iterative form:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1748,9 +1748,9 @@ Thus, **we can use an explicit stack to model the behavior of the call stack**,
|
|||
}
|
||||
```
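A minimal Python sketch of this transformation, using a list as the explicit stack (the name `for_loop_recur` is illustrative):

```python
def for_loop_recur(n: int) -> int:
    """Sum 1 + 2 + ... + n iteratively by simulating the call stack"""
    stack = []  # Explicit stack standing in for the system call stack
    res = 0
    # "Recursion" phase: push a frame for each level
    for i in range(n, 0, -1):
        stack.append(i)
    # "Return" phase: pop frames and aggregate the results
    while stack:
        res += stack.pop()
    return res
```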
|
||||
|
||||
Observing the code above, it becomes more complex when recursion is converted to iteration. Although iteration and recursion can be converted to each other in many cases, it is not always worth doing so for two reasons.
|
||||
Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:
|
||||
|
||||
- The transformed code may be more difficult to understand and less readable.
|
||||
- For some complex problems, simulating the behavior of the system call stack can be very difficult.
|
||||
- The transformed code may become harder to understand and less readable.
|
||||
- For some complex problems, simulating the behavior of the system's call stack can be quite challenging.
|
||||
|
||||
In short, **the choice of iteration or recursion depends on the nature of the particular problem**. In programming practice, it is crucial to weigh the advantages and disadvantages of both and choose the appropriate method based on the context.
|
||||
In summary, **choosing between iteration and recursion depends on the nature of the specific problem**. In programming practice, weighing the pros and cons of each and choosing the appropriate method for the situation is essential.
@ -2,52 +2,51 @@

comments: true
---
# 2.1 Evaluation Of Algorithm Efficiency
|
||||
# 2.1 Algorithm Efficiency Assessment
|
||||
|
||||
In algorithm design, we aim to achieve two goals in succession:
|
||||
In algorithm design, we pursue the following two objectives in sequence.
|
||||
|
||||
1. **Finding a Solution to the Problem**: The algorithm needs to reliably find the correct solution within the specified input range.
|
||||
2. **Seeking the Optimal Solution**: There may be multiple ways to solve the same problem, and our goal is to find the most efficient algorithm possible.
|
||||
1. **Finding a Solution to the Problem**: The algorithm should reliably find the correct solution within the stipulated range of inputs.
|
||||
2. **Seeking the Optimal Solution**: For the same problem, multiple solutions might exist, and we aim to find the most efficient algorithm possible.
|
||||
|
||||
In other words, once the ability to solve the problem is established, the efficiency of the algorithm emerges as the main benchmark for assessing its quality, which includes the following two aspects.
|
||||
In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating the merits of an algorithm, which includes the following two dimensions.
|
||||
|
||||
- **Time Efficiency**: The speed at which an algorithm runs.
|
||||
- **Space Efficiency**: The amount of memory space the algorithm consumes.
|
||||
- **Space Efficiency**: The size of the memory space occupied by an algorithm.
|
||||
|
||||
In short, our goal is to design data structures and algorithms that are both "fast and economical". Effectively evaluating algorithm efficiency is crucial, as it allows for the comparison of different algorithms and guides the design and optimization process.
|
||||
In short, **our goal is to design data structures and algorithms that are both fast and memory-efficient**. Effectively assessing algorithm efficiency is crucial because only then can we compare various algorithms and guide the process of algorithm design and optimization.
|
||||
|
||||
There are mainly two approaches for assessing efficiency: practical testing and theoretical estimation.
|
||||
There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.
|
||||
|
||||
## 2.1.1 Practical Testing
|
||||
## 2.1.1 Actual Testing
|
||||
|
||||
Let's consider a scenario where we have two algorithms, `A` and `B`, both capable of solving the same problem. To compare their efficiency, the most direct method is to use a computer to run both algorithms while monitoring and recording their execution time and memory usage. This approach provides a realistic assessment of their performance, but it also has significant limitations.
|
||||
Suppose we have algorithms `A` and `B`, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms and monitor and record their runtime and memory usage. This assessment method reflects the actual situation but has significant limitations.
|
||||
|
||||
On one hand, it's challenging to eliminate the interference of the test environment. Hardware configurations can significantly affect the performance of algorithms. For instance, on one computer, Algorithm `A` might run faster than Algorithm `B`, but the results could be the opposite on another computer with different specifications. This means we would need to conduct tests on a variety of machines and calculate an average efficiency, which is impractical.
|
||||
On one hand, **it's difficult to eliminate interference from the testing environment**. Hardware configurations can affect algorithm performance. For example, algorithm `A` might run faster than `B` on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.
|
||||
|
||||
Furthermore, conducting comprehensive tests is resource-intensive. The efficiency of algorithms can vary with different volumes of input data. For example, with smaller data sets, Algorithm A might run faster than Algorithm B; however, this could change with larger data sets. Therefore, to reach a convincing conclusion, it's necessary to test a range of data sizes, which requires excessive computational resources.
|
||||
On the other hand, **conducting a full test is very resource-intensive**. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm `A` might run faster than `B`, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.
|
||||
|
||||
## 2.1.2 Theoretical Estimation
|
||||
|
||||
Given the significant limitations of practical testing, we can consider assessing algorithm efficiency solely through calculations. This method of estimation is known as 'asymptotic complexity analysis,' often simply referred to as 'complexity analysis.
|
||||
Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as "asymptotic complexity analysis," or simply "complexity analysis."
|
||||
|
||||
Complexity analysis illustrates the relationship between the time (and space) resources required by an algorithm and the size of its input data. **It describes the growing trend in the time and space required for the execution of an algorithm as the size of the input data increases**. This definition might sound a bit complex, so let's break it down into three key points for easier understanding.
|
||||
Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. **It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases**. This definition might sound complex, but we can break it down into three key points to understand it better.
|
||||
|
||||
- In complexity analysis, 'time and space' directly relate to 'time complexity' and 'space complexity,' respectively.
|
||||
- The statement "as the size of the input data increases" highlights that complexity analysis examines the interplay between the size of the input data and the algorithm's efficiency.
|
||||
- Lastly, the phrase "the growing trend in time and space required" emphasizes that the focus of complexity analysis is not on the specific values of running time or space occupied, but on the rate at which these requirements increase with larger input sizes.
|
||||
- "Time and space resources" correspond to "time complexity" and "space complexity," respectively.
|
||||
- "As the size of input data increases" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.
|
||||
- "The trend of growth in time and space" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the "rate" at which time or space grows.
|
||||
|
||||
**Complexity analysis overcomes the drawbacks of practical testing methods in two key ways:**.
|
||||
**Complexity analysis overcomes the disadvantages of actual testing methods**, reflected in the following aspects:
|
||||
|
||||
- It is independent of the testing environment, meaning the analysis results are applicable across all operating platforms.
|
||||
- It effectively demonstrates the efficiency of algorithms with varying data volumes, particularly highlighting performance in large-scale data scenarios.
|
||||
- It is independent of the testing environment and applicable to all operating platforms.
|
||||
- It can reflect algorithm efficiency under different data volumes, especially in the performance of algorithms with large data volumes.
|
||||
|
||||
!!! tip
|
||||
|
||||
If you're still finding the concept of complexity confusing, don't worry. We will cover it in more detail in the subsequent chapters.
|
||||
If you're still confused about the concept of complexity, don't worry. We will introduce it in detail in subsequent chapters.
|
||||
|
||||
Complexity analysis provides us with a 'ruler' for evaluating the efficiency of algorithms, enabling us to measure the time and space resources required to execute a given algorithm and to compare the efficiency of different algorithms.
|
||||
Complexity analysis provides us with a "ruler" to measure the time and space resources needed to execute an algorithm and compare the efficiency between different algorithms.
|
||||
|
||||
Complexity is a mathematical concept that might seem abstract and somewhat challenging for beginners. From this perspective, introducing complexity analysis at the very beginning may not be the most suitable approach. However, when discussing the characteristics of a particular data structure or algorithm, analyzing its operational speed and space usage is often inevitable.
|
||||
|
||||
Therefore, it is recommended that before diving deeply into data structures and algorithms, **one should first gain a basic understanding of complexity analysis. This foundational knowledge will facilitate the complexity analysis of simple algorithms.**
|
||||
Complexity is a mathematical concept and may be abstract and challenging for beginners. From this perspective, complexity analysis might not be the best content to introduce first. However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.
|
||||
|
||||
In summary, it's recommended that you establish a preliminary understanding of complexity analysis before diving deep into data structures and algorithms, **so that you can carry out simple complexity analyses of algorithms**.
@ -4,29 +4,31 @@ comments: true
# 2.4 Space Complexity
|
||||
|
||||
The space complexity is used to measure the growth trend of memory consumption as the scale of data increases for an algorithm solution. This concept is analogous to time complexity by replacing "runtime" with "memory space".
|
||||
"Space complexity" is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that "running time" is replaced with "occupied memory space".
|
||||
|
||||
## 2.4.1 Algorithmic Correlation Space
|
||||
## 2.4.1 Space Related to Algorithms
|
||||
|
||||
The memory space used by algorithms during its execution include the following types.
|
||||
The memory space used by an algorithm during its execution mainly includes the following types.
|
||||
|
||||
- **Input Space**: Used to store the input data for the algorithm.
|
||||
- **Temporary Space**: Used to store variables, objects, function contexts, and other data of the algorithm during runtime.
|
||||
- **Input Space**: Used to store the input data of the algorithm.
|
||||
- **Temporary Space**: Used to store variables, objects, function contexts, and other data during the algorithm's execution.
|
||||
- **Output Space**: Used to store the output data of the algorithm.
|
||||
|
||||
In general, the "Input Space" is excluded from the statistics of space complexity.
|
||||
Generally, the scope of space complexity statistics includes both "Temporary Space" and "Output Space".
|
||||
|
||||
The **Temporary Space** can be further divided into three parts.
|
||||
Temporary space can be further divided into three parts.
|
||||
|
||||
- **Temporary Data**: Used to store various constants, variables, objects, etc., during the the algorithm's execution.
|
||||
- **Stack Frame Space**: Used to hold the context data of the called function. The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is freed when the function returns.
|
||||
- **Instruction Space**: Used to hold compiled program instructions, usually ignored in practical statistics.
|
||||
- **Temporary Data**: Used to save various constants, variables, objects, etc., during the algorithm's execution.
|
||||
- **Stack Frame Space**: Used to save the context data of the called function. The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is released after the function returns.
|
||||
- **Instruction Space**: Used to store compiled program instructions, which are usually negligible in actual statistics.
|
||||
|
||||
When analyzing the space complexity of a piece of program, **three parts are usually taken into account: Temporary Data, Stack Frame Space and Output Data**.
|
||||
When analyzing the space complexity of a program, **we typically count the Temporary Data, Stack Frame Space, and Output Data**, as shown in Figure 2-15.
|
||||
|
||||
![Associated spaces used by the algorithm](space_complexity.assets/space_types.png){ class="animation-figure" }
|
||||
![Space Types Used in Algorithms](space_complexity.assets/space_types.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-15 Associated spaces used by the algorithm </p>
|
||||
<p align="center"> Figure 2-15 Space Types Used in Algorithms </p>
|
||||
|
||||
The relevant code is as follows:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -322,14 +324,14 @@ When analyzing the space complexity of a piece of program, **three parts are usu
|
|||
|
||||
## 2.4.2 Calculation Method
|
||||
|
||||
The calculation method for space complexity is pretty similar to time complexity, with the only difference being that the focus shifts from "operation count" to "space usage size".
|
||||
The method for calculating space complexity is roughly similar to that of time complexity, with the only change being the shift of the statistical object from "number of operations" to "size of used space".
|
||||
|
||||
On top of that, unlike time complexity, **we usually only focus on the worst-case space complexity**. This is because memory space is a hard requirement, and we have to make sure that there is enough memory space reserved for all possibilities incurred by input data.
|
||||
However, unlike time complexity, **we usually only focus on the worst-case space complexity**. This is because memory space is a hard requirement, and we must ensure that there is enough memory space reserved under all input data.
|
||||
|
||||
Looking at the following code, the "worst" in worst-case space complexity has two layers of meaning.
|
||||
Consider the following code, the term "worst-case" in worst-case space complexity has two meanings.
|
||||
|
||||
1. **Based on the worst-case input data**: when $n < 10$, the space complexity is $O(1)$; however, when $n > 10$, the initialized array `nums` occupies $O(n)$ space; thus the worst-case space complexity is $O(n)$.
|
||||
2. **Based on the peak memory during algorithm execution**: for example, the program occupies $O(1)$ space until the last line is executed; when the array `nums` is initialized, the program occupies $O(n)$ space; thus the worst-case space complexity is $O(n)$.
|
||||
1. **Based on the worst input data**: When $n < 10$, the space complexity is $O(1)$; but when $n > 10$, the initialized array `nums` occupies $O(n)$ space, thus the worst-case space complexity is $O(n)$.
|
||||
2. **Based on the peak memory used during the algorithm's execution**: For example, before executing the last line, the program occupies $O(1)$ space; when initializing the array `nums`, the program occupies $O(n)$ space, hence the worst-case space complexity is $O(n)$.
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -466,10 +468,7 @@ Looking at the following code, the "worst" in worst-case space complexity has tw
|
|||
|
||||
```
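A minimal Python sketch of the kind of code being discussed, assuming a fixed-length list `b` and an `n`-dependent list `nums` (all names are illustrative):

```python
def algorithm(n: int):
    a = 0               # O(1): a single variable
    b = [0] * 10000     # O(1): fixed length, independent of n
    if n > 10:
        nums = [0] * n  # O(n): allocated only when n > 10
```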
|
||||
|
||||
**In recursion functions, it is important to take into count the measurement of stack frame space**. For example in the following code:
|
||||
|
||||
- The function `loop()` calls $n$ times `function()` in a loop, and each round of `function()` returns and frees stack frame space, so the space complexity is still $O(1)$.
|
||||
- The recursion function `recur()` will have $n$ unreturned `recur()` during runtime, thus occupying $O(n)$ of stack frame space.
|
||||
**In recursive functions, stack frame space must be taken into account**. Consider the following code:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -706,26 +705,31 @@ Looking at the following code, the "worst" in worst-case space complexity has tw
|
|||
|
||||
```
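For reference, a minimal Python sketch of the `loop()` and `recur()` functions compared below (the helper `function()` is a placeholder):

```python
def function() -> int:
    """Placeholder that does constant work"""
    return 0

def loop(n: int):
    """O(1) stack space: each call to function() returns and frees its frame"""
    for _ in range(n):
        function()

def recur(n: int):
    """O(n) stack space: n frames of recur() are alive at once before returning"""
    if n == 1:
        return
    return recur(n - 1)
```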
|
||||
|
||||
The time complexity of both `loop()` and `recur()` functions is $O(n)$, but their space complexities differ.
|
||||
|
||||
- The `loop()` function calls `function()` $n$ times in a loop, where each iteration's `function()` returns and releases its stack frame space, so the space complexity remains $O(1)$.
|
||||
- The recursive function `recur()` will have $n$ instances of unreturned `recur()` existing simultaneously during its execution, thus occupying $O(n)$ stack frame space.
|
||||
|
||||
## 2.4.3 Common Types
|
||||
|
||||
Assuming the input data size is $n$, the figure illustrates common types of space complexity (ordered from low to high).
|
||||
Let the size of the input data be $n$; the chart below displays common types of space complexities (arranged from low to high).
|
||||
|
||||
$$
|
||||
\begin{aligned}
|
||||
O(1) < O(\log n) < O(n) < O(n^2) < O(2^n) \newline
|
||||
\text{constant order} < \text{logarithmic order} < \text{linear order} < \text{square order} < \text{exponential order}
|
||||
\text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Quadratic Order} < \text{Exponential Order}
|
||||
\end{aligned}
|
||||
$$
|
||||
|
||||
![Common space complexity types](space_complexity.assets/space_complexity_common_types.png){ class="animation-figure" }
|
||||
![Common Types of Space Complexity](space_complexity.assets/space_complexity_common_types.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-16 Common space complexity types </p>
|
||||
<p align="center"> Figure 2-16 Common Types of Space Complexity </p>
|
||||
|
||||
### 1. Constant Order $O(1)$
|
||||
|
||||
Constant order is common for constants, variables, and objects whose quantity is unrelated to the size of the input data $n$.
|
||||
Constant order is common in constants, variables, objects that are independent of the size of input data $n$.
|
||||
|
||||
It is important to note that memory occupied by initializing a variable or calling a function in a loop is released once the next iteration begins. Therefore, there is no accumulation of occupied space and the space complexity remains $O(1)$ :
|
||||
Note that memory occupied by variables initialized or functions called inside a loop is released once the next iteration begins, so it does not accumulate; the space complexity therefore remains $O(1)$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1058,9 +1062,9 @@ It is important to note that memory occupied by initializing a variable or calli
|
|||
}
|
||||
```
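A minimal Python sketch of constant-order space usage (names are illustrative; `function()` is a placeholder):

```python
def function() -> int:
    """Placeholder called repeatedly below"""
    return 0

def constant(n: int):
    """Constant-order space: usage does not grow with n"""
    a = 0
    nums = [0] * 10000    # Fixed length, independent of n
    for _ in range(n):
        c = 0             # Space for c is reused every round
        function()        # Each call's stack frame is freed on return
```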
|
||||
|
||||
### 2. Linear Order $O(N)$
|
||||
### 2. Linear Order $O(n)$
|
||||
|
||||
Linear order is commonly found in arrays, linked lists, stacks, queues, and similar structures where the number of elements is proportional to $n$:
|
||||
Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to $n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1322,7 +1326,7 @@ Linear order is commonly found in arrays, linked lists, stacks, queues, and simi
|
|||
}
|
||||
```
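A minimal Python sketch of linear-order space usage (names are illustrative):

```python
def linear(n: int):
    """Linear-order space: element counts grow in proportion to n"""
    nums = [0] * n                             # List with n elements
    mapping = {i: str(i) for i in range(n)}    # Dict with n entries
```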
|
||||
|
||||
As shown in the Figure 2-17 , the depth of recursion for this function is $n$, which means that there are $n$ unreturned `linear_recur()` functions at the same time, using $O(n)$ size stack frame space:
|
||||
As shown in Figure 2-17, this function's recursion depth is $n$, meaning there are $n$ instances of the unreturned `linear_recur()` function at the same time, occupying $O(n)$ of stack frame space:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1463,13 +1467,13 @@ As shown in the Figure 2-17 , the depth of recursion for this function is $n$, w
|
|||
}
|
||||
```
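A minimal Python sketch of the `linear_recur()` function described above (assumed form):

```python
def linear_recur(n: int):
    """Linear-order space from recursion: n stack frames exist at the deepest point"""
    print("recursion n =", n)
    if n == 1:
        return
    linear_recur(n - 1)
```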
|
||||
|
||||
![Linear order space complexity generated by recursion function](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" }
|
||||
![Recursive Function Generating Linear Order Space Complexity](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-17 Linear order space complexity generated by recursion function </p>
|
||||
<p align="center"> Figure 2-17 Recursive Function Generating Linear Order Space Complexity </p>
|
||||
|
||||
### 3. Quadratic Order $O(N^2)$
|
||||
### 3. Quadratic Order $O(n^2)$
|
||||
|
||||
Quadratic order is common in matrices and graphs, where the number of elements is in a square relationship with $n$:
|
||||
Quadratic order is common in matrices and graphs, where the number of elements is quadratic to $n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1683,7 +1687,7 @@ Quadratic order is common in matrices and graphs, where the number of elements i
|
|||
}
|
||||
```
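A minimal Python sketch of quadratic-order space usage via an $n \times n$ matrix (the name `quadratic` is illustrative):

```python
def quadratic(n: int):
    """Quadratic-order space: an n x n matrix stores n^2 elements"""
    num_matrix = [[0] * n for _ in range(n)]
```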
|
||||
|
||||
As shown in the Figure 2-18 , the recursion depth of this function is $n$, and an array is initialized in each recursion function with lengths $n$, $n-1$, $\dots$, $2$, $1$, and an average length of $n / 2$, thus occupying $O(n^2)$ space overall:
|
||||
As shown in Figure 2-18, the recursion depth of this function is $n$, and each recursive call initializes an array of length $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus occupying $O(n^2)$ space overall:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1842,13 +1846,13 @@ As shown in the Figure 2-18 , the recursion depth of this function is $n$, and a
|
|||
}
|
||||
```
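A minimal Python sketch of the recursive variant described above (the name `quadratic_recur` is illustrative):

```python
def quadratic_recur(n: int) -> int:
    """Quadratic-order space: recursion depth n, each level keeps a list of length n"""
    if n <= 0:
        return 0
    nums = [0] * n  # Lists of length n, n-1, ..., 1 coexist across the live frames
    return quadratic_recur(n - 1)
```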
|
||||
|
||||
![Square-order space complexity generated by the recursion function](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" }
|
||||
![Recursive Function Generating Quadratic Order Space Complexity](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-18 Square-order space complexity generated by the recursion function </p>
|
||||
<p align="center"> Figure 2-18 Recursive Function Generating Quadratic Order Space Complexity </p>
|
||||
|
||||
### 4. Exponential Order $O(2^N)$
|
||||
### 4. Exponential Order $O(2^n)$
|
||||
|
||||
Exponential order is common in binary trees. Looking at the Figure 2-19 , a "full binary tree" of degree $n$ has $2^n - 1$ nodes, occupying $O(2^n)$ space:
|
||||
Exponential order is common in binary trees. As Figure 2-19 shows, a "full binary tree" with $n$ levels has $2^n - 1$ nodes, occupying $O(2^n)$ space:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2015,20 +2019,20 @@ Exponential order is common in binary trees. Looking at the Figure 2-19 , a "ful
|
|||
}
|
||||
```
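A minimal Python sketch that builds such a full binary tree (the `TreeNode` class and the name `build_tree` are illustrative):

```python
class TreeNode:
    """Minimal binary tree node"""
    def __init__(self, val: int = 0):
        self.val = val
        self.left = None
        self.right = None

def build_tree(n: int):
    """Exponential-order space: a full binary tree of height n holds 2^n - 1 nodes"""
    if n == 0:
        return None
    root = TreeNode(0)
    root.left = build_tree(n - 1)
    root.right = build_tree(n - 1)
    return root
```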
|
||||
|
||||
![Exponential order space complexity generated by a full binary tree](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" }
|
||||
![Full Binary Tree Generating Exponential Order Space Complexity](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-19 Exponential order space complexity generated by a full binary tree </p>
|
||||
<p align="center"> Figure 2-19 Full Binary Tree Generating Exponential Order Space Complexity </p>
|
||||
|
||||
### 5. Logarithmic Order $O(\Log N)$
|
||||
### 5. Logarithmic Order $O(\log n)$
|
||||
|
||||
Logarithmic order is commonly used in divide and conquer algorithms. For example, in a merge sort, given an array of length $n$ as the input, each round of recursion divides the array in half from its midpoint to form a recursion tree of height $\log n$, using $O(\log n)$ stack frame space.
|
||||
Logarithmic order is common in divide-and-conquer algorithms. For example, in merge sort, an array of length $n$ is recursively divided in half each round, forming a recursion tree of height $\log n$, using $O(\log n)$ stack frame space.
|
||||
|
||||
Another example is to convert a number into a string. Given a positive integer $n$ with a digit count of $\log_{10} n + 1$, the corresponding string length is $\log_{10} n + 1$. Therefore, the space complexity is $O(\log_{10} n + 1) = O(\log n)$.
|
||||
Another example is converting a number to a string. Given a positive integer $n$, its number of digits is $\lfloor \log_{10} n \rfloor + 1$, which is also the length of the resulting string, thus the space complexity is $O(\log_{10} n + 1) = O(\log n)$.
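A minimal Python sketch of this conversion (the name `number_to_string` is illustrative):

```python
def number_to_string(n: int) -> str:
    """O(log n) space: a positive integer n has about log10(n) + 1 decimal digits"""
    res = ""
    while n > 0:
        res = str(n % 10) + res  # One character per decimal digit
        n //= 10
    return res
```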
|
||||
|
||||
## 2.4.4 Weighing Time And Space
|
||||
## 2.4.4 Balancing Time and Space
|
||||
|
||||
Ideally, we would like to optimize both the time complexity and the space complexity of an algorithm. However, in reality, simultaneously optimizing time and space complexity is often challenging.
|
||||
Ideally, we aim for both time complexity and space complexity to be optimal. However, in practice, optimizing both simultaneously is often difficult.
|
||||
|
||||
**Reducing time complexity usually comes at the expense of increasing space complexity, and vice versa**. The approach of sacrificing memory space to improve algorithm speed is known as "trading space for time", while the opposite is called "trading time for space".
|
||||
**Lowering time complexity usually comes at the cost of increased space complexity, and vice versa**. The approach of sacrificing memory space to improve algorithm speed is known as "space-time tradeoff"; the reverse is known as "time-space tradeoff".
|
||||
|
||||
The choice between these approaches depends on which aspect we prioritize. In most cases, time is more valuable than space, so "trading space for time" is usually the more common strategy. Of course, in situations with large data volumes, controlling space complexity is also crucial.
|
||||
The choice depends on which aspect we value more. In most cases, time is more precious than space, so "space-time tradeoff" is often the more common strategy. Of course, controlling space complexity is also very important when dealing with large volumes of data.
@ -4,50 +4,50 @@ comments: true
# 2.5 Summary
|
||||
|
||||
### 1. Highlights
|
||||
### 1. Key Review
|
||||
|
||||
**Evaluation of Algorithm Efficiency**
|
||||
**Algorithm Efficiency Assessment**
|
||||
|
||||
- Time and space efficiency are the two leading evaluation indicators to measure an algorithm.
|
||||
- We can evaluate the efficiency of an algorithm through real-world testing. Still, it isn't easy to eliminate the side effects from the testing environment, and it consumes a lot of computational resources.
|
||||
- Complexity analysis overcomes the drawbacks of real-world testing. The analysis results can apply to all operating platforms and reveal the algorithm's efficiency under variant data scales.
|
||||
- Time efficiency and space efficiency are the two main criteria for assessing the merits of an algorithm.
|
||||
- We can assess algorithm efficiency through actual testing, but it's challenging to eliminate the influence of the test environment, and it consumes substantial computational resources.
|
||||
- Complexity analysis can overcome the disadvantages of actual testing. Its results are applicable across all operating platforms and can reveal the efficiency of algorithms at different data scales.
|
||||
|
||||
**Time Complexity**
|
||||
|
||||
- Time complexity is used to measure the trend of algorithm running time as the data size grows., which can effectively evaluate the algorithm's efficiency. However, it may fail in some cases, such as when the input volume is small or the time complexities are similar, making it difficult to precisely compare the efficiency of algorithms.
|
||||
- The worst time complexity is denoted by big $O$ notation, which corresponds to the asymptotic upper bound of the function, reflecting the growth rate in the number of operations $T(n)$ as $n$ tends to positive infinity.
|
||||
- The estimation of time complexity involves two steps: first, counting the number of operations, and then determining the asymptotic upper bound.
|
||||
- Common time complexities, from lowest to highest, are $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$.
|
||||
- The time complexity of certain algorithms is not fixed and depends on the distribution of the input data. The time complexity can be categorized into worst-case, best-case, and average. The best-case time complexity is rarely used because the input data must meet strict conditions to achieve the best-case.
|
||||
- The average time complexity reflects the efficiency of an algorithm with random data inputs, which is closest to the performance of algorithms in real-world scenarios. Calculating the average time complexity requires statistical analysis of input data and a synthesized mathematical expectation.
|
||||
- Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.
|
||||
- Worst-case time complexity is denoted using big O notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations $T(n)$ as $n$ approaches infinity.
|
||||
- Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.
|
||||
- Common time complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, $O(2^n)$, and $O(n!)$, among others.
|
||||
- The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve the best case.
|
||||
- Average time complexity reflects the efficiency of an algorithm under random data inputs, closely resembling the algorithm's performance in actual applications. Calculating average time complexity requires accounting for the distribution of input data and the subsequent mathematical expectation.
|
||||
|
||||
**Space Complexity**
|
||||
|
||||
- Space complexity serves a similar purpose to time complexity and is used to measure the trend of space occupied by an algorithm as the data volume increases.
|
||||
- The memory space associated with the operation of an algorithm can be categorized into input space, temporary space, and output space. Normally, the input space is not considered when determining space complexity. The temporary space can be classified into instruction space, data space, and stack frame space, and the stack frame space usually only affects the space complexity for recursion functions.
|
||||
- We mainly focus on the worst-case space complexity, which refers to the measurement of an algorithm's space usage when given the worst-case input data and during the worst-case execution scenario.
|
||||
- Common space complexities are $O(1)$, $O(\log n)$, $O(n)$, $O(n^2)$ and $O(2^n)$ from lowest to highest.
|
||||
- Space complexity, similar to time complexity, measures the trend of memory space occupied by an algorithm with the increase in data volume.
|
||||
- The relevant memory space used during the algorithm's execution can be divided into input space, temporary space, and output space. Generally, input space is not included in space complexity calculations. Temporary space can be divided into temporary data, stack frame space, and instruction space, where stack frame space usually affects space complexity only in recursive functions.
|
||||
- We usually focus only on the worst-case space complexity, which means calculating the space complexity of the algorithm under the worst input data and at the worst moment of operation.
|
||||
- Common space complexities, arranged from low to high, include $O(1)$, $O(\log n)$, $O(n)$, $O(n^2)$, and $O(2^n)$, among others.
|
||||
|
||||
### 2. Q & A
|
||||
|
||||
!!! question "Is the space complexity of tail recursion $O(1)$?"
|
||||
|
||||
Theoretically, the space complexity of a tail recursion function can be optimized to $O(1)$. However, most programming languages (e.g., Java, Python, C++, Go, C#, etc.) do not support auto-optimization for tail recursion, so the space complexity is usually considered as $O(n)$.
|
||||
Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of $O(n)$.
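As a small illustration in Python, which performs no tail-call optimization, both functions below compute the same sum, but the tail-recursive form still keeps one stack frame per call:

```python
def sum_tail(n: int, acc: int = 0) -> int:
    # Tail-recursive: the recursive call is the last action, yet CPython
    # keeps every frame on the stack, so the space cost is still O(n).
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)

def sum_iterative(n: int) -> int:
    # Equivalent loop: O(1) extra space.
    acc = 0
    for i in range(1, n + 1):
        acc += i
    return acc

print(sum_tail(100), sum_iterative(100))  # 5050 5050
```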
|
||||
|
||||
!!! question "What is the difference between the terms function and method?"
|
||||
!!! question "What is the difference between the terms 'function' and 'method'?"
|
||||
|
||||
A *function* can be executed independently, and all arguments are passed explicitly. A *method* is associated with an object and is implicitly passed to the object that calls it, allowing it to operate on the data contained within an instance of a class.
|
||||
A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.
|
||||
|
||||
Let's illustrate with a few common programming languages.
|
||||
Here are some examples from common programming languages:
|
||||
|
||||
- C is a procedural programming language without object-oriented concepts, so it has only functions. However, we can simulate object-oriented programming by creating structures (struct), and the functions associated with structures are equivalent to methods in other languages.
|
||||
- Java and C# are object-oriented programming languages, and blocks of code (methods) are typically part of a class. Static methods behave like a function because it is bound to the class and cannot access specific instance variables.
|
||||
- Both C++ and Python support both procedural programming (functions) and object-oriented programming (methods).
|
||||
- C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (struct), and functions associated with these structures are equivalent to methods in other programming languages.
|
||||
- Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
|
||||
- C++ and Python support both procedural programming (functions) and object-oriented programming (methods).
|
||||
|
||||
!!! question "Does the figure "Common Types of Space Complexity" reflect the absolute size of the occupied space?"
|
||||
!!! question "Does the 'Common Types of Space Complexity' figure reflect the absolute size of occupied space?"
|
||||
|
||||
No, that figure shows the space complexity, which reflects the growth trend, not the absolute size of the space occupied.
|
||||
|
||||
For example, if you take $n = 8$ , the values of each curve do not align with the function because each curve contains a constant term used to compress the range of values to a visually comfortable range.
|
||||
No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.
|
||||
|
||||
In practice, since we usually don't know each method's "constant term" complexity, it is generally impossible to choose the optimal solution for $n = 8$ based on complexity alone. But it's easier to choose for $n = 8^5$ as the growth trend is already dominant.
|
||||
If you take $n = 8$, you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.
|
||||
|
||||
In practice, since we usually don't know the "constant term" complexity of each method, it's generally not possible to choose the best solution for $n = 8$ based solely on complexity. However, for $n = 8^5$, it's much easier to choose, as the growth trend becomes dominant.
|
||||
|
|
|
@ -4,13 +4,13 @@ comments: true
|
|||
|
||||
# 2.3 Time Complexity
|
||||
|
||||
Runtime can be a visual and accurate reflection of the efficiency of an algorithm. What should we do if we want to accurately predict the runtime of a piece of code?
|
||||
Time complexity is a concept used to measure how the run time of an algorithm increases with the size of the input data. Understanding time complexity is crucial for accurately assessing the efficiency of an algorithm. Suppose we tried to predict the run time of a piece of code directly; we could proceed as follows:
|
||||
|
||||
1. **Determine the running platform**, including hardware configuration, programming language, system environment, etc., all of which affect the efficiency of the code.
|
||||
2. **Evaluates the running time** required for various computational operations, e.g., the addition operation `+` takes 1 ns, the multiplication operation `*` takes 10 ns, the print operation `print()` takes 5 ns, and so on.
|
||||
3. **Counts all the computational operations in the code** and sums the execution times of all the operations to get the runtime.
|
||||
1. **Determining the Running Platform**: This includes hardware configuration, programming language, system environment, etc., all of which can affect the efficiency of code execution.
|
||||
2. **Evaluating the Run Time for Various Computational Operations**: For instance, an addition operation `+` might take 1 ns, a multiplication operation `*` might take 10 ns, a print operation `print()` might take 5 ns, etc.
|
||||
3. **Counting All the Computational Operations in the Code**: Summing the execution times of all these operations gives the total run time.
|
||||
|
||||
For example, in the following code, the input data size is $n$ :
|
||||
For example, consider the following code with an input size of $n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -190,19 +190,19 @@ For example, in the following code, the input data size is $n$ :
|
|||
}
|
||||
```
|
||||
|
||||
Based on the above method, the algorithm running time can be obtained as $6n + 12$ ns :
|
||||
Using the above method, the run time of the algorithm can be calculated as $(6n + 12)$ ns:
|
||||
|
||||
$$
|
||||
1 + 1 + 10 + (1 + 5) \times n = 6n + 12
|
||||
$$
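The snippet being counted is collapsed in the hunk above; a plausible reconstruction, with the per-operation costs assumed as stated, is:

```python
def algorithm(n: int):
    a = 2               # assume 1 ns
    a = a + 1           # assume 1 ns
    a = a * 2           # assume 10 ns
    for _ in range(n):  # assume 1 ns per iteration
        print(0)        # assume 5 ns per call
```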
|
||||
|
||||
In practice, however, **statistical algorithm runtimes are neither reasonable nor realistic**. First, we do not want to tie the estimation time to the operation platform, because the algorithm needs to run on a variety of different platforms. Second, it is difficult for us to be informed of the runtime of each operation, which makes the prediction process extremely difficult.
|
||||
However, in practice, **counting the run time of an algorithm is neither practical nor reasonable**. First, we don't want to tie the estimated time to the running platform, as algorithms need to run on various platforms. Second, it's challenging to know the run time for each type of operation, making the estimation process difficult.
|
||||
|
||||
## 2.3.1 Trends In Statistical Time Growth
|
||||
## 2.3.1 Assessing Time Growth Trend
|
||||
|
||||
The time complexity analysis counts not the algorithm running time, **but the tendency of the algorithm running time to increase as the amount of data gets larger**.
|
||||
Time complexity analysis does not count the algorithm's run time, **but rather the growth trend of the run time as the data volume increases**.
|
||||
|
||||
The concept of "time-growing trend" is rather abstract, so let's try to understand it through an example. Suppose the size of the input data is $n$, and given three algorithmic functions `A`, `B` and `C`:
|
||||
Let's understand this concept of "time growth trend" with an example. Assume the input data size is $n$, and consider three algorithms `A`, `B`, and `C`:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -456,25 +456,25 @@ The concept of "time-growing trend" is rather abstract, so let's try to understa
|
|||
}
|
||||
```
|
||||
|
||||
The Figure 2-7 shows the time complexity of the above three algorithmic functions.
|
||||
The following figure shows the time complexities of these three algorithms.
|
||||
|
||||
- Algorithm `A` has only $1$ print operations, and the running time of the algorithm does not increase with $n$. We call the time complexity of this algorithm "constant order".
|
||||
- The print operation in algorithm `B` requires $n$ cycles, and the running time of the algorithm increases linearly with $n$. The time complexity of this algorithm is called "linear order".
|
||||
- The print operation in algorithm `C` requires $1000000$ loops, which is a long runtime, but it is independent of the size of the input data $n$. Therefore, the time complexity of `C` is the same as that of `A`, which is still of "constant order".
|
||||
- Algorithm `A` has just one print operation, and its run time does not grow with $n$. Its time complexity is considered "constant order."
|
||||
- Algorithm `B` involves a print operation looping $n$ times, and its run time grows linearly with $n$. Its time complexity is "linear order."
|
||||
- Algorithm `C` has a print operation looping 1,000,000 times. Although it takes a long time, it is independent of the input data size $n$. Therefore, the time complexity of `C` is the same as `A`, which is "constant order."
|
||||
|
||||
![Time growth trends for algorithms A, B and C](time_complexity.assets/time_complexity_simple_example.png){ class="animation-figure" }
|
||||
![Time Growth Trend of Algorithms A, B, and C](time_complexity.assets/time_complexity_simple_example.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-7 Time growth trends for algorithms A, B and C </p>
|
||||
<p align="center"> Figure 2-7 Time Growth Trend of Algorithms A, B, and C </p>
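For reference, the three functions compared above might look like the following sketch (the book's own snippets are collapsed in the hunks above):

```python
def algorithm_A(n: int):
    print(0)                    # one operation, independent of n -> constant order

def algorithm_B(n: int):
    for _ in range(n):          # n operations -> linear order
        print(0)

def algorithm_C(n: int):
    for _ in range(1000000):    # many operations, but independent of n -> constant order
        print(0)
```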
|
||||
|
||||
What are the characteristics of time complexity analysis compared to direct statistical algorithmic running time?
|
||||
Compared to directly counting the run time of an algorithm, what are the characteristics of time complexity analysis?
|
||||
|
||||
- The **time complexity can effectively evaluate the efficiency of an algorithm**. For example, the running time of algorithm `B` increases linearly and is slower than algorithm `A` for $n > 1$ and slower than algorithm `C` for $n > 1,000,000$. In fact, as long as the input data size $n$ is large enough, algorithms with "constant order" of complexity will always outperform algorithms with "linear order", which is exactly what the time complexity trend means.
|
||||
- The **time complexity of the projection method is simpler**. Obviously, neither the running platform nor the type of computational operation is related to the growth trend of the running time of the algorithm. Therefore, in the time complexity analysis, we can simply consider the execution time of all computation operations as the same "unit time", and thus simplify the "statistics of the running time of computation operations" to the "statistics of the number of computation operations", which is the same as the "statistics of the number of computation operations". The difficulty of the estimation is greatly reduced by considering the execution time of all operations as the same "unit time".
|
||||
- There are also some limitations of **time complexity**. For example, although algorithms `A` and `C` have the same time complexity, the actual running time varies greatly. Similarly, although the time complexity of algorithm `B` is higher than that of `C` , algorithm `B` significantly outperforms algorithm `C` when the size of the input data $n$ is small. In these cases, it is difficult to judge the efficiency of an algorithm based on time complexity alone. Of course, despite the above problems, complexity analysis is still the most effective and commonly used method to judge the efficiency of algorithms.
|
||||
- **Time complexity effectively assesses algorithm efficiency**. For instance, algorithm `B` has linearly growing run time, which is slower than algorithm `A` when $n > 1$ and slower than `C` when $n > 1,000,000$. In fact, as long as the input data size $n$ is sufficiently large, a "constant order" complexity algorithm will always be better than a "linear order" one, demonstrating the essence of time growth trend.
|
||||
- **Time complexity analysis is more straightforward**. Obviously, the running platform and the types of computational operations are irrelevant to the trend of run time growth. Therefore, in time complexity analysis, we can simply treat the execution time of all computational operations as the same "unit time," simplifying the "computational operation run time count" to a "computational operation count." This significantly reduces the complexity of estimation.
|
||||
- **Time complexity has its limitations**. For example, although algorithms `A` and `C` have the same time complexity, their actual run times can be quite different. Similarly, even though algorithm `B` has a higher time complexity than `C`, it is clearly superior when the input data size $n$ is small. In these cases, it's difficult to judge the efficiency of algorithms based solely on time complexity. Nonetheless, despite these issues, complexity analysis remains the most effective and commonly used method for evaluating algorithm efficiency.
|
||||
|
||||
## 2.3.2 Functions Asymptotic Upper Bounds
|
||||
## 2.3.2 Asymptotic Upper Bound
|
||||
|
||||
Given a function with input size $n$:
|
||||
Consider a function with an input size of $n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -643,43 +643,41 @@ Given a function with input size $n$:
|
|||
}
|
||||
```
|
||||
|
||||
Let the number of operations of the algorithm be a function of the size of the input data $n$, denoted as $T(n)$ , then the number of operations of the above function is:
|
||||
Let $T(n)$ denote the number of operations of an algorithm as a function of the input size $n$. Consider the following example:
|
||||
|
||||
$$
|
||||
T(n) = 3 + 2n
|
||||
$$
|
||||
|
||||
$T(n)$ is a primary function, which indicates that the trend of its running time growth is linear, and thus its time complexity is of linear order.
|
||||
Since $T(n)$ is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as $O(n)$. This mathematical notation, known as "big-O notation," represents the "asymptotic upper bound" of the function $T(n)$.
|
||||
|
||||
We denote the time complexity of the linear order as $O(n)$ , and this mathematical notation is called the "big $O$ notation big-$O$ notation", which denotes the "asymptotic upper bound" of the function $T(n)$.
|
||||
In essence, time complexity analysis is about finding the asymptotic upper bound of the "number of operations $T(n)$". It has a precise mathematical definition.
|
||||
|
||||
Time complexity analysis is essentially the computation of asymptotic upper bounds on the "number of operations function $T(n)$", which has a clear mathematical definition.
|
||||
!!! abstract "Asymptotic Upper Bound"
|
||||
|
||||
!!! abstract "Function asymptotic upper bound"
|
||||
If there exist positive real numbers $c$ and $n_0$ such that for all $n > n_0$, $T(n) \leq c \cdot f(n)$, then $f(n)$ is considered an asymptotic upper bound of $T(n)$, denoted as $T(n) = O(f(n))$.
|
||||
|
||||
If there exists a positive real number $c$ and a real number $n_0$ such that $T(n) \leq c \cdot f(n)$ for all $n > n_0$ , then it can be argued that $f(n)$ gives an asymptotic upper bound on $T(n)$ , denoted as $T(n) = O(f(n))$ .
|
||||
As illustrated below, calculating the asymptotic upper bound involves finding a function $f(n)$ such that, as $n$ approaches infinity, $T(n)$ and $f(n)$ have the same growth order, differing only by a constant factor $c$.
|
||||
|
||||
As shown in the Figure 2-8 , computing the asymptotic upper bound is a matter of finding a function $f(n)$ such that $T(n)$ and $f(n)$ are at the same growth level as $n$ tends to infinity, differing only by a multiple of the constant term $c$.
|
||||
![Asymptotic Upper Bound of a Function](time_complexity.assets/asymptotic_upper_bound.png){ class="animation-figure" }
|
||||
|
||||
![asymptotic upper bound of function](time_complexity.assets/asymptotic_upper_bound.png){ class="animation-figure" }
|
||||
<p align="center"> Figure 2-8 Asymptotic Upper Bound of a Function </p>
|
||||
|
||||
<p align="center"> Figure 2-8 asymptotic upper bound of function </p>
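As a quick sanity check of the definition, choosing $c = 3$ and $n_0 = 3$ (values picked here purely for illustration) shows that $T(n) = 3 + 2n$ is indeed $O(n)$:

```python
def T(n: int) -> int:
    return 3 + 2 * n

c, n0 = 3, 3
# 3 + 2n <= 3n holds exactly when n >= 3, so every n > n0 passes the check.
assert all(T(n) <= c * n for n in range(n0 + 1, 10000))
print("T(n) = 3 + 2n is O(n)")
```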
|
||||
## 2.3.3 Calculation Method
|
||||
|
||||
## 2.3.3 Method Of Projection
|
||||
While the concept of asymptotic upper bound might seem mathematically dense, you don't need to fully grasp it right away. Let's first understand the method of calculation, which can be practiced and comprehended over time.
|
||||
|
||||
Asymptotic upper bounds are a bit heavy on math, so don't worry if you feel you don't have a full solution. Because in practice, we only need to master the projection method, and the mathematical meaning can be gradually comprehended.
|
||||
Once $f(n)$ is determined, we obtain the time complexity $O(f(n))$. But how do we determine the asymptotic upper bound $f(n)$? This process generally involves two steps: counting the number of operations and determining the asymptotic upper bound.
|
||||
|
||||
By definition, after determining $f(n)$, we can get the time complexity $O(f(n))$. So how to determine the asymptotic upper bound $f(n)$? The overall is divided into two steps: first count the number of operations, and then determine the asymptotic upper bound.
|
||||
### 1. Step 1: Counting the Number of Operations
|
||||
|
||||
### 1. The First Step: Counting The Number Of Operations
|
||||
This step involves going through the code line by line. However, due to the presence of the constant $c$ in $c \cdot f(n)$, **all coefficients and constant terms in $T(n)$ can be ignored**. This principle allows for simplification techniques in counting operations.
|
||||
|
||||
For the code, it is sufficient to calculate from top to bottom line by line. However, since the constant term $c \cdot f(n)$ in the above $c \cdot f(n)$ can take any size, **the various coefficients and constant terms in the number of operations $T(n)$ can be ignored**. Based on this principle, the following counting simplification techniques can be summarized.
|
||||
1. **Ignore the constant terms in $T(n)$**, because they are independent of $n$ and therefore have no effect on the time complexity.
|
||||
2. **Omit all coefficients**. For example, looping $2n$, $5n + 1$ times, etc., can be simplified to $n$ times since the coefficient before $n$ does not impact the time complexity.
|
||||
3. **Use multiplication for nested loops**. The total number of operations equals the product of the number of operations in each loop, applying the simplification techniques from points 1 and 2 for each loop level.
|
||||
|
||||
1. **Ignore the constant terms in $T(n)$**. Since none of them are related to $n$, they have no effect on the time complexity.
|
||||
2. **omits all coefficients**. For example, loops $2n$ times, $5n + 1$ times, etc., can be simplified and notated as $n$ times because the coefficients before $n$ have no effect on the time complexity.
|
||||
3. **Use multiplication** when loops are nested. The total number of operations is equal to the product of the number of operations of the outer and inner levels of the loop, and each level of the loop can still be nested by applying the techniques in points `1.` and `2.` respectively.
|
||||
|
||||
Given a function, we can use the above trick to count the number of operations.
|
||||
Given a function, we can use these techniques to count operations:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -909,56 +907,54 @@ Given a function, we can use the above trick to count the number of operations.
|
|||
}
|
||||
```
|
||||
|
||||
The following equations show the statistical results before and after using the above technique, both of which were introduced with a time complexity of $O(n^2)$ .
|
||||
The formula below shows the counting results before and after simplification, both leading to a time complexity of $O(n^2)$:
|
||||
|
||||
$$
|
||||
\begin{aligned}
|
||||
T(n) & = 2n(n + 1) + (5n + 1) + 2 & \text{complete statistics (-.-|||)} \newline
|
||||
T(n) & = 2n(n + 1) + (5n + 1) + 2 & \text{Complete Count (-.-|||)} \newline
|
||||
& = 2n^2 + 7n + 3 \newline
|
||||
T(n) & = n^2 + n & \text{Lazy Stats (o.O)}
|
||||
T(n) & = n^2 + n & \text{Simplified Count (o.O)}
|
||||
\end{aligned}
|
||||
$$
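One possible function consistent with these counts (the snippet actually analyzed is collapsed in the hunk above) is:

```python
def algorithm(n: int):
    a = 1         # +1
    a = a + n     # +1
    # +(5n + 1): the loop body runs 5n + 1 times
    for _ in range(5 * n + 1):
        print(0)
    # +2n * (n + 1): nested loop counts multiply
    for _ in range(2 * n):
        for _ in range(n + 1):
            print(0)
```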
|
||||
|
||||
### 2. Step 2: Judging The Asymptotic Upper Bounds
|
||||
### 2. Step 2: Determining the Asymptotic Upper Bound
|
||||
|
||||
**The time complexity is determined by the highest order term in the polynomial $T(n)$**. This is because as $n$ tends to infinity, the highest order term will play a dominant role and the effects of all other terms can be ignored.
|
||||
**The time complexity is determined by the highest order term in $T(n)$**. This is because, as $n$ approaches infinity, the highest order term dominates, rendering the influence of other terms negligible.
|
||||
|
||||
The Table 2-2 shows some examples, some of which have exaggerated values to emphasize the conclusion that "the coefficients can't touch the order". As $n$ tends to infinity, these constants become irrelevant.
|
||||
The following table illustrates examples of different operation counts and their corresponding time complexities. Some exaggerated values are used to emphasize that coefficients cannot alter the order of growth. When $n$ becomes very large, these constants become insignificant.
|
||||
|
||||
<p align="center"> Table 2-2 Time complexity corresponding to different number of operations </p>
|
||||
<p align="center"> Table: Time Complexity for Different Operation Counts </p>
|
||||
|
||||
<div class="center-table" markdown>
|
||||
|
||||
| number of operations $T(n)$ | time complexity $O(f(n))$ |
|
||||
| --------------------------- | ------------------------- |
|
||||
| $100000$ | $O(1)$ |
|
||||
| $3n + 2$ | $O(n)$ |
|
||||
| $2n^2 + 3n + 2$ | $O(n^2)$ |
|
||||
| $n^3 + 10000n^2$ | $O(n^3)$ |
|
||||
| $2^n + 10000n^{10000}$ | $O(2^n)$ |
|
||||
| Operation Count $T(n)$ | Time Complexity $O(f(n))$ |
|
||||
| ---------------------- | ------------------------- |
|
||||
| $100000$ | $O(1)$ |
|
||||
| $3n + 2$ | $O(n)$ |
|
||||
| $2n^2 + 3n + 2$ | $O(n^2)$ |
|
||||
| $n^3 + 10000n^2$ | $O(n^3)$ |
|
||||
| $2^n + 10000n^{10000}$ | $O(2^n)$ |
|
||||
|
||||
</div>
|
||||
|
||||
## 2.3.4 Common Types
|
||||
## 2.3.4 Common Types of Time Complexity
|
||||
|
||||
Let the input data size be $n$ , the common types of time complexity are shown in the Figure 2-9 (in descending order).
|
||||
Let's consider the input data size as $n$. The common types of time complexities are illustrated below, arranged from lowest to highest:
|
||||
|
||||
$$
|
||||
\begin{aligned}
|
||||
O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!) \newline
|
||||
\text{constant order} < \text{logarithmic order} < \text{linear order} < \text{linear logarithmic order} < \text{square order} < \text{exponential order} < \text{multiplication order}
|
||||
\text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Linear-Logarithmic Order} < \text{Quadratic Order} < \text{Exponential Order} < \text{Factorial Order}
|
||||
\end{aligned}
|
||||
$$
|
||||
|
||||
![Common time complexity types](time_complexity.assets/time_complexity_common_types.png){ class="animation-figure" }
|
||||
![Common Types of Time Complexity](time_complexity.assets/time_complexity_common_types.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-9 Common time complexity types </p>
|
||||
<p align="center"> Figure 2-9 Common Types of Time Complexity </p>
|
||||
|
||||
### 1. Constant Order $O(1)$
|
||||
|
||||
The number of operations of the constant order is independent of the input data size $n$, i.e., it does not change with $n$.
|
||||
|
||||
In the following function, although the number of operations `size` may be large, the time complexity is still $O(1)$ because it is independent of the input data size $n$ :
|
||||
Constant order means the number of operations is independent of the input data size $n$. In the following function, although the number of operations `size` might be large, the time complexity remains $O(1)$ as it's unrelated to $n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1123,9 +1119,9 @@ In the following function, although the number of operations `size` may be large
|
|||
}
|
||||
```
|
||||
|
||||
### 2. Linear Order $O(N)$
|
||||
### 2. Linear Order $O(n)$
|
||||
|
||||
The number of operations in a linear order grows in linear steps relative to the input data size $n$. Linear orders are usually found in single level loops:
|
||||
Linear order indicates the number of operations grows linearly with the input data size $n$. Linear order commonly appears in single-loop structures:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1275,7 +1271,7 @@ The number of operations in a linear order grows in linear steps relative to the
|
|||
}
|
||||
```
|
||||
|
||||
The time complexity of operations such as traversing an array and traversing a linked list is $O(n)$ , where $n$ is the length of the array or linked list:
|
||||
Operations like array traversal and linked list traversal have a time complexity of $O(n)$, where $n$ is the length of the array or list:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1443,11 +1439,11 @@ The time complexity of operations such as traversing an array and traversing a l
|
|||
}
|
||||
```
|
||||
|
||||
It is worth noting that **Input data size $n$ needs to be determined specifically** according to the type of input data. For example, in the first example, the variable $n$ is the input data size; in the second example, the array length $n$ is the data size.
|
||||
It's important to note that **the input data size $n$ should be determined based on the type of input data**. For example, in the first example, $n$ represents the input data size, while in the second example, the length of the array $n$ is the data size.
|
||||
|
||||
### 3. Squared Order $O(N^2)$
|
||||
### 3. Quadratic Order $O(n^2)$
|
||||
|
||||
The number of operations in the square order grows in square steps with respect to the size of the input data $n$. The squared order is usually found in nested loops, where both the outer and inner levels are $O(n)$ and therefore overall $O(n^2)$:
|
||||
Quadratic order means the number of operations grows quadratically with the input data size $n$. Quadratic order typically appears in nested loops, where both the outer and inner loops have a time complexity of $O(n)$, resulting in an overall complexity of $O(n^2)$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1640,13 +1636,13 @@ The number of operations in the square order grows in square steps with respect
|
|||
}
|
||||
```
|
||||
|
||||
The Figure 2-10 compares the three time complexities of constant order, linear order and squared order.
|
||||
The following image compares constant order, linear order, and quadratic order time complexities.
|
||||
|
||||
![Time complexity of constant, linear and quadratic orders](time_complexity.assets/time_complexity_constant_linear_quadratic.png){ class="animation-figure" }
|
||||
![Constant, Linear, and Quadratic Order Time Complexities](time_complexity.assets/time_complexity_constant_linear_quadratic.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-10 Time complexity of constant, linear and quadratic orders </p>
|
||||
<p align="center"> Figure 2-10 Constant, Linear, and Quadratic Order Time Complexities </p>
|
||||
|
||||
Taking bubble sort as an example, the outer loop executes $n - 1$ times, and the inner loop executes $n-1$, $n-2$, $\dots$, $2$, $1$ times, which averages out to $n / 2$ times, resulting in a time complexity of $O((n - 1) n / 2) = O(n^2)$ .
|
||||
For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner loop runs $n-1$, $n-2$, ..., $2$, $1$ times, averaging $n / 2$ times, resulting in a time complexity of $O((n - 1) n / 2) = O(n^2)$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -1920,11 +1916,11 @@ Taking bubble sort as an example, the outer loop executes $n - 1$ times, and the
|
|||
}
|
||||
```
|
||||
|
||||
## 2.3.5 Exponential Order $O(2^N)$
|
||||
### 4. Exponential Order $O(2^n)$
|
||||
|
||||
Cell division in biology is a typical example of exponential growth: the initial state is $1$ cells, after one round of division it becomes $2$, after two rounds of division it becomes $4$, and so on, after $n$ rounds of division there are $2^n$ cells.
|
||||
Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in $2^n$ cells after $n$ divisions.
|
||||
|
||||
The Figure 2-11 and the following code simulate the process of cell division with a time complexity of $O(2^n)$ .
|
||||
The following image and code simulate the cell division process, with a time complexity of $O(2^n)$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2148,11 +2144,11 @@ The Figure 2-11 and the following code simulate the process of cell division wi
|
|||
}
|
||||
```
|
||||
|
||||
![time complexity of exponential order](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" }
|
||||
![Exponential Order Time Complexity](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-11 time complexity of exponential order </p>
|
||||
<p align="center"> Figure 2-11 Exponential Order Time Complexity </p>
|
||||
|
||||
In practical algorithms, exponential orders are often found in recursion functions. For example, in the following code, it recursively splits in two and stops after $n$ splits:
|
||||
In practice, exponential order often appears in recursive functions. For example, in the code below, it recursively splits into two halves, stopping after $n$ divisions:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2283,13 +2279,13 @@ In practical algorithms, exponential orders are often found in recursion functio
|
|||
}
|
||||
```
|
||||
|
||||
Exponential order grows very rapidly and is more common in exhaustive methods (brute force search, backtracking, etc.). For problems with large data sizes, exponential order is unacceptable and usually requires the use of algorithms such as dynamic programming or greedy algorithms to solve.
|
||||
Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.
|
||||
|
||||
### 1. Logarithmic Order $O(\Log N)$
|
||||
### 5. Logarithmic Order $O(\log n)$
|
||||
|
||||
In contrast to the exponential order, the logarithmic order reflects the "each round is reduced to half" case. Let the input data size be $n$, and since each round is reduced to half, the number of loops is $\log_2 n$, which is the inverse function of $2^n$.
|
||||
In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size $n$, since the size is halved each round, the number of iterations is $\log_2 n$, the inverse function of $2^n$.
|
||||
|
||||
The Figure 2-12 and the code below simulate the process of "reducing each round to half" with a time complexity of $O(\log_2 n)$, which is abbreviated as $O(\log n)$.
|
||||
The following image and code simulate the "halving each round" process, with a time complexity of $O(\log_2 n)$, commonly abbreviated as $O(\log n)$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2460,11 +2456,11 @@ The Figure 2-12 and the code below simulate the process of "reducing each round
|
|||
}
|
||||
```
|
||||
|
||||
![time complexity of logarithmic order](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" }
|
||||
![Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-12 time complexity of logarithmic order </p>
|
||||
<p align="center"> Figure 2-12 Logarithmic Order Time Complexity </p>
|
||||
|
||||
Similar to the exponential order, the logarithmic order is often found in recursion functions. The following code forms a recursion tree of height $\log_2 n$:
|
||||
Like exponential order, logarithmic order also frequently appears in recursive functions. The code below forms a recursive tree of height $\log_2 n$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2595,21 +2591,21 @@ Similar to the exponential order, the logarithmic order is often found in recurs
|
|||
}
|
||||
```
|
||||
|
||||
Logarithmic order is often found in algorithms based on the divide and conquer strategy, which reflects the algorithmic ideas of "dividing one into many" and "simplifying the complexity into simplicity". It grows slowly and is the second most desirable time complexity after constant order.
|
||||
Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the "split into many" and "simplify complex problems" approach. It's slow-growing and is the most ideal time complexity after constant order.
|
||||
|
||||
!!! tip "What is the base of $O(\log n)$?"
|
||||
|
||||
To be precise, the corresponding time complexity of "one divided into $m$" is $O(\log_m n)$ . And by using the logarithmic permutation formula, we can get equal time complexity with different bases:
|
||||
Technically, "splitting into $m$" corresponds to a time complexity of $O(\log_m n)$. Using the logarithm base change formula, we can equate different logarithmic complexities:
|
||||
|
||||
$$
|
||||
O(\log_m n) = O(\log_k n / \log_k m) = O(\log_k n)
|
||||
$$
|
||||
|
||||
That is, the base $m$ can be converted without affecting the complexity. Therefore we usually omit the base $m$ and write the logarithmic order directly as $O(\log n)$.
|
||||
This means the base $m$ can be changed without affecting the complexity. Therefore, we often omit the base $m$ and simply denote logarithmic order as $O(\log n)$.
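A quick numeric check of this base-change identity, with values chosen arbitrarily:

```python
import math

n, m, k = 10000, 3, 2
print(math.log(n, m))                   # log_m(n)
print(math.log(n, k) / math.log(m, k))  # log_k(n) / log_k(m): equal up to rounding
```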
|
||||
|
||||
### 2. Linear Logarithmic Order $O(N \Log N)$
|
||||
### 6. Linear-Logarithmic Order $O(n \log n)$
|
||||
|
||||
The linear logarithmic order is often found in nested loops, and the time complexity of the two levels of loops is $O(\log n)$ and $O(n)$ respectively. The related code is as follows:
|
||||
Linear-logarithmic order often appears in nested loops, with the complexities of the two loops being $O(\log n)$ and $O(n)$ respectively. The related code is as follows:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2788,23 +2784,23 @@ The linear logarithmic order is often found in nested loops, and the time comple
|
|||
}
|
||||
```
|
||||
|
||||
The Figure 2-13 shows how the linear logarithmic order is generated. The total number of operations at each level of the binary tree is $n$ , and the tree has a total of $\log_2 n + 1$ levels, resulting in a time complexity of $O(n\log n)$ .
|
||||
The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.
|
||||
|
||||
![Time complexity of linear logarithmic order](time_complexity.assets/time_complexity_logarithmic_linear.png){ class="animation-figure" }
|
||||
![Linear-Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic_linear.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-13 Time complexity of linear logarithmic order </p>
|
||||
<p align="center"> Figure 2-13 Linear-Logarithmic Order Time Complexity </p>
|
||||
|
||||
Mainstream sorting algorithms typically have a time complexity of $O(n \log n)$ , such as quick sort, merge sort, heap sort, etc.
|
||||
Mainstream sorting algorithms typically have a time complexity of $O(n \log n)$, such as quicksort, mergesort, and heapsort.
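A rough sketch of where the $n \log n$ count comes from (not the book's snippet): halve the problem size in an outer loop and do $n$ units of work per round:

```python
def linear_log_ops(n: int) -> int:
    count = 0
    m = n
    while m > 1:            # about log2(n) rounds
        for _ in range(n):  # n operations per round
            count += 1
        m //= 2
    return count

print(linear_log_ops(1024))  # 10240 = 1024 * log2(1024)
```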
|
||||
|
||||
### 3. The Factorial Order $O(N!)$
|
||||
### 7. Factorial Order $O(n!)$
|
||||
|
||||
The factorial order corresponds to the mathematical "permutations problem". Given $n$ elements that do not repeat each other, find all possible permutations of them, the number of permutations being:
|
||||
Factorial order corresponds to the mathematical problem of "full permutation." Given $n$ distinct elements, the total number of possible permutations is:
|
||||
|
||||
$$
|
||||
n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
|
||||
$$
|
||||
|
||||
Factorials are usually implemented using recursion. As shown in the Figure 2-14 and in the code below, the first level splits $n$, the second level splits $n - 1$, and so on, until the splitting stops at the $n$th level:
|
||||
Factorials are typically implemented using recursion. As shown in the image and code below, the first level splits into $n$ branches, the second level into $n - 1$ branches, and so on, stopping after the $n$th level:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -2994,20 +2990,20 @@ Factorials are usually implemented using recursion. As shown in the Figure 2-14
|
|||
}
|
||||
```
|
||||
|
||||
![Time complexity of the factorial order](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" }
|
||||
![Factorial Order Time Complexity](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 2-14 Time complexity of the factorial order </p>
|
||||
<p align="center"> Figure 2-14 Factorial Order Time Complexity </p>
|
||||
|
||||
Note that since there is always $n! > 2^n$ when $n \geq 4$, the factorial order grows faster than the exponential order, and is also unacceptable when $n$ is large.
|
||||
Note that factorial order grows even faster than exponential order; it's unacceptable for larger $n$ values.
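A quick comparison makes the gap concrete; from $n = 4$ onward, $n!$ overtakes $2^n$ and then pulls away rapidly:

```python
import math

for n in range(1, 9):
    print(n, math.factorial(n), 2 ** n)
# n = 4: 24 vs 16; n = 8: 40320 vs 256
```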
|
||||
|
||||
## 2.3.6 Worst, Best, Average Time Complexity
|
||||
## 2.3.5 Worst, Best, and Average Time Complexities
|
||||
|
||||
**The time efficiency of algorithms is often not fixed, but is related to the distribution of the input data**. Suppose an array `nums` of length $n$ is input, where `nums` consists of numbers from $1$ to $n$, each of which occurs only once; however, the order of the elements is randomly upset, and the goal of the task is to return the index of element $1$. We can draw the following conclusion.
|
||||
**The time efficiency of an algorithm is often not fixed but depends on the distribution of the input data**. Assume we have an array `nums` of length $n$, consisting of numbers from $1$ to $n$, each appearing only once, but in a randomly shuffled order. The task is to return the index of the element $1$. We can draw the following conclusions:
|
||||
|
||||
- When `nums = [? , ? , ... , 1]` , i.e., when the end element is $1$, a complete traversal of the array is required, **to reach the worst time complexity $O(n)$** .
|
||||
- When `nums = [1, ? , ? , ...]` , i.e., when the first element is $1$ , there is no need to continue traversing the array no matter how long it is, **reaching the optimal time complexity $\Omega(1)$** .
|
||||
- When `nums = [?, ?, ..., 1]`, that is, when the last element is $1$, it requires a complete traversal of the array, **achieving the worst-case time complexity of $O(n)$**.
|
||||
- When `nums = [1, ?, ?, ...]`, that is, when the first element is $1$, no matter the length of the array, no further traversal is needed, **achieving the best-case time complexity of $\Omega(1)$**.
|
||||
|
||||
The "worst time complexity" corresponds to the asymptotic upper bound of the function and is denoted by the large $O$ notation. Correspondingly, the "optimal time complexity" corresponds to the asymptotic lower bound of the function and is denoted in $\Omega$ notation:
|
||||
The "worst-case time complexity" corresponds to the asymptotic upper bound, denoted by the big $O$ notation. Correspondingly, the "best-case time complexity" corresponds to the asymptotic lower bound, denoted by $\Omega$:
|
||||
|
||||
=== "Python"
|
||||
|
||||
|
@ -3356,14 +3352,14 @@ The "worst time complexity" corresponds to the asymptotic upper bound of the fun
|
|||
}
|
||||
```
|
||||
|
||||
It is worth stating that we rarely use the optimal time complexity in practice because it is usually only attainable with a small probability and may be somewhat misleading. **whereas the worst time complexity is more practical because it gives a safe value for efficiency and allows us to use the algorithm with confidence**.
|
||||
It's important to note that the best-case time complexity is rarely used in practice, as it is usually only achievable under very low probabilities and might be misleading. **The worst-case time complexity is more practical as it provides a safety value for efficiency**, allowing us to confidently use the algorithm.
|
||||
|
||||
From the above examples, it can be seen that the worst or best time complexity only occurs in "special data distributions", and the probability of these cases may be very small, which does not truly reflect the efficiency of the algorithm. In contrast, **the average time complexity of can reflect the efficiency of the algorithm under random input data**, which is denoted by the $\Theta$ notation.
|
||||
From the above example, it's clear that both the worst-case and best-case time complexities only occur under "special data distributions," which may have a small probability of occurrence and may not accurately reflect the algorithm's run efficiency. In contrast, **the average time complexity can reflect the algorithm's efficiency under random input data**, denoted by the $\Theta$ notation.
|
||||
|
||||
For some algorithms, we can simply derive the average case under a random data distribution. For example, in the above example, since the input array is scrambled, the probability of an element $1$ appearing at any index is equal, so the average number of loops of the algorithm is half of the length of the array $n / 2$ , and the average time complexity is $\Theta(n / 2) = \Theta(n)$ .
|
||||
For some algorithms, we can simply estimate the average case under a random data distribution. For example, in the aforementioned example, since the input array is shuffled, the probability of element $1$ appearing at any index is equal. Therefore, the average number of loops for the algorithm is half the length of the array $n / 2$, giving an average time complexity of $\Theta(n / 2) = \Theta(n)$.
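A small simulation (an illustration, not part of the book's code) agrees with this estimate:

```python
import random

def loops_to_find_one(n: int) -> int:
    nums = list(range(1, n + 1))
    random.shuffle(nums)
    for i, num in enumerate(nums, start=1):
        if num == 1:
            return i

n, trials = 100, 10000
average = sum(loops_to_find_one(n) for _ in range(trials)) / trials
print(average)  # close to (n + 1) / 2, i.e. Theta(n)
```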
|
||||
|
||||
However, for more complex algorithms, calculating the average time complexity is often difficult because it is hard to analyze the overall mathematical expectation given the data distribution. In this case, we usually use the worst time complexity as a criterion for the efficiency of the algorithm.
|
||||
However, calculating the average time complexity for more complex algorithms can be quite difficult, as it's challenging to analyze the overall mathematical expectation under the data distribution. In such cases, we usually use the worst-case time complexity as the standard for judging the efficiency of the algorithm.
|
||||
|
||||
!!! question "Why do you rarely see the $\Theta$ symbol?"
|
||||
!!! question "Why is the $\Theta$ symbol rarely seen?"
|
||||
|
||||
Perhaps because the $O$ symbol is so catchy, we often use it to denote average time complexity. However, this practice is not standardized in the strict sense. In this book and other sources, if you encounter a statement like "average time complexity $O(n)$", please understand it as $\Theta(n)$.
|
||||
Possibly because the $O$ notation is more commonly spoken, it is often used to represent the average time complexity. However, strictly speaking, this practice is not accurate. In this book and other materials, if you encounter statements like "average time complexity $O(n)$", please understand it directly as $\Theta(n)$.
|
||||
|
|
|
@ -8,6 +8,6 @@
|
|||
|
||||
!!! abstract
|
||||
|
||||
Data structures are like a solid and varied framework.
|
||||
Data structures resemble a stable and diverse framework.
|
||||
|
||||
It provides a blueprint for the orderly organization of data upon which algorithms can come alive.
|
||||
They serve as a blueprint for organizing data orderly, enabling algorithms to come to life upon this foundation.
|
||||
|
|
|
@ -2,48 +2,48 @@
|
|||
comments: true
|
||||
---
|
||||
|
||||
# 1.1 Algorithms Are Everywhere
|
||||
# 1.1 Algorithms are Everywhere
|
||||
|
||||
When we hear the word "algorithm", we naturally think of mathematics. However, many algorithms do not involve complex mathematics but rely more on basic logic, which is ubiquitous in our daily lives.
|
||||
When we hear the word "algorithm," we naturally think of mathematics. However, many algorithms do not involve complex mathematics but rely more on basic logic, which can be seen everywhere in our daily lives.
|
||||
|
||||
Before we formally discuss algorithms, an interesting fact is worth sharing: **you have already learned many algorithms unconsciously and have become accustomed to applying them in your daily life**. Below, I will give a few specific examples to prove this point.
|
||||
Before formally discussing algorithms, there's an interesting fact worth sharing: **you have already unconsciously learned many algorithms and have become accustomed to applying them in your daily life**. Here, I will give a few specific examples to prove this point.
|
||||
|
||||
**Example 1: Looking Up a Dictionary**. In a standard dictionary, each word corresponds to a phonetic transcription and the dictionary is organized alphabetically based on these transcriptions. Let's say we're looking for a word that begins with the letter $r$. This is typically done in the following way:
|
||||
**Example 1: Looking Up a Dictionary**. In an English dictionary, words are listed alphabetically. Suppose we're searching for a word that starts with the letter $r$. This is typically done in the following way:
|
||||
|
||||
1. Open the dictionary around its midpoint and note the first letter on that page, assuming it to be $m$.
|
||||
2. Given the sequence of words following the initial letter $m$, estimate where words starting with the letter $r$ might be located within the alphabetical order.
|
||||
3. Iterate steps `1.` and `2.` until you find the page where the word begins with the letter $r$.
|
||||
1. Open the dictionary to about halfway and check the first letter on the page, let's say the letter is $m$.
|
||||
2. Since $r$ comes after $m$ in the alphabet, we can ignore the first half of the dictionary and focus on the latter half.
|
||||
3. Repeat steps `1.` and `2.` until you find the page where the word starts with $r$.
|
||||
|
||||
=== "<1>"
|
||||
![Dictionary search step](algorithms_are_everywhere.assets/binary_search_dictionary_step1.png){ class="animation-figure" }
|
||||
![Process of Looking Up a Dictionary](algorithms_are_everywhere.assets/binary_search_dictionary_step1.png){ class="animation-figure" }
|
||||
|
||||
=== "<2>"
|
||||
![binary_search_dictionary_step2](algorithms_are_everywhere.assets/binary_search_dictionary_step2.png){ class="animation-figure" }
|
||||
![Binary Search in Dictionary Step 2](algorithms_are_everywhere.assets/binary_search_dictionary_step2.png){ class="animation-figure" }
|
||||
|
||||
=== "<3>"
|
||||
![binary_search_dictionary_step3](algorithms_are_everywhere.assets/binary_search_dictionary_step3.png){ class="animation-figure" }
|
||||
![Binary Search in Dictionary Step 3](algorithms_are_everywhere.assets/binary_search_dictionary_step3.png){ class="animation-figure" }
|
||||
|
||||
=== "<4>"
|
||||
![binary_search_dictionary_step4](algorithms_are_everywhere.assets/binary_search_dictionary_step4.png){ class="animation-figure" }
|
||||
![Binary Search in Dictionary Step 4](algorithms_are_everywhere.assets/binary_search_dictionary_step4.png){ class="animation-figure" }
|
||||
|
||||
=== "<5>"
|
||||
![binary_search_dictionary_step5](algorithms_are_everywhere.assets/binary_search_dictionary_step5.png){ class="animation-figure" }
|
||||
![Binary Search in Dictionary Step 5](algorithms_are_everywhere.assets/binary_search_dictionary_step5.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 1-1 Dictionary search step </p>
|
||||
<p align="center"> Figure 1-1 Process of Looking Up a Dictionary </p>
|
||||
|
||||
The skill of looking up a dictionary, essential for elementary school students, is actually the renowned binary search algorithm. Through the lens of data structures, we can view the dictionary as a sorted "array"; while from an algorithmic perspective, the series of operations in looking up a dictionary can be seen as "binary search".
|
||||
This essential skill for elementary students, looking up a dictionary, is actually the famous "Binary Search" algorithm. From a data structure perspective, we can consider the dictionary as a sorted "array"; from an algorithmic perspective, the series of actions taken to look up a word in the dictionary can be viewed as "Binary Search."
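A minimal sketch of that idea on a sorted list of words (an illustration, not the book's implementation):

```python
def binary_search(words: list[str], target: str) -> int:
    # Discard half of the remaining range each round, like flipping to the
    # middle of the dictionary and keeping only the half that can contain the word.
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if words[mid] < target:
            lo = mid + 1
        elif words[mid] > target:
            hi = mid - 1
        else:
            return mid
    return -1

print(binary_search(["apple", "melon", "river", "zebra"], "river"))  # 2
```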
|
||||
|
||||
**Example 2: Organizing Playing Cards**. When playing cards, we need to arrange the cards in ascending order each game, as shown in the following process.
|
||||
**Example 2: Organizing Playing Cards**. When playing cards, we need to arrange the cards in our hand in ascending order, as shown in the following process.
|
||||
|
||||
1. Divide the playing cards into "ordered" and "unordered" parts, assuming initially that the leftmost card is already ordered.
|
||||
2. Take out a card from the unordered part and insert it into the correct position in the ordered part; once completed, the leftmost two cards will be in an ordered sequence.
|
||||
3. Continue the loop described in step `2.`, each iteration involving insertion of one card from the unordered segment into the ordered portion, until all cards are appropriately ordered.
|
||||
1. Divide the playing cards into "ordered" and "unordered" sections, assuming initially the leftmost card is already in order.
|
||||
2. Take out a card from the unordered section and insert it into the correct position in the ordered section; after this, the leftmost two cards are in order.
|
||||
3. Continue to repeat step `2.` until all cards are in order.
|
||||
|
||||
![Playing cards sorting process](algorithms_are_everywhere.assets/playing_cards_sorting.png){ class="animation-figure" }
|
||||
![Playing Cards Sorting Process](algorithms_are_everywhere.assets/playing_cards_sorting.png){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 1-2 Playing cards sorting process </p>
|
||||
<p align="center"> Figure 1-2 Playing Cards Sorting Process </p>
|
||||
|
||||
The above method of organizing playing cards is essentially the "insertion sort" algorithm, which is very efficient for small datasets. Many programming languages' sorting library functions include insertion sort.
|
||||
The above method of organizing playing cards is essentially the "Insertion Sort" algorithm, which is very efficient for small datasets. Many programming languages' sorting functions include the insertion sort.
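A minimal sketch of the card-sorting idea (an illustration, not the book's implementation):

```python
def insertion_sort(cards: list[int]) -> None:
    # Keep a sorted prefix on the left; insert each remaining card into place.
    for i in range(1, len(cards)):
        card, j = cards[i], i - 1
        while j >= 0 and cards[j] > card:
            cards[j + 1] = cards[j]  # shift larger cards one slot to the right
            j -= 1
        cards[j + 1] = card

hand = [5, 2, 9, 1, 7]
insertion_sort(hand)
print(hand)  # [1, 2, 5, 7, 9]
```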
|
||||
|
||||
**Example 3: Making Change**. Suppose we buy goods worth $69$ yuan at a supermarket and give the cashier $100$ yuan, then the cashier needs to give us $31$ yuan in change. They would naturally complete the thought process as shown below.
|
||||
|
||||
|
@@ -57,9 +57,9 @@ The above method of organizing playing cards is essentially the "insertion sort"
|
|||
|
||||
<p align="center"> Figure 1-3 Change making process </p>
|
||||
|
||||
In the aforementioned steps, at each stage, we make the optimal choice (utilizing the highest denomination possible), ultimately deriving at a feasible change-making approach. From the perspective of data structures and algorithms, this approach is essentially a "greedy" algorithm.
|
||||
In the above steps, we make the best choice at each step (using the largest denomination possible), ultimately resulting in a feasible change-making plan. From the perspective of data structures and algorithms, this method is essentially a "Greedy" algorithm.
|
||||
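As a rough sketch of this greedy strategy in Rust: at every step the largest denomination that still fits is chosen. The denominations list and the function name `make_change` are assumptions for illustration; the greedy choice happens to be optimal for this canonical coin system, but not for every possible set of denominations.

```rust
/* A minimal sketch of greedy change-making (illustrative, not the book's implementation) */
fn make_change(mut amount: i32) -> Vec<i32> {
    // Assumed denominations, from largest to smallest
    let denominations = [100, 50, 20, 10, 5, 1];
    let mut change = Vec::new();
    for &d in &denominations {
        // Greedy choice: use denomination d as many times as it still fits
        while amount >= d {
            change.push(d);
            amount -= d;
        }
    }
    change
}
```

For the $31$ yuan above, `make_change(31)` would return `[20, 10, 1]`.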
|
||||
From preparing a dish to traversing interstellar realms, virtually every problem-solving endeavor relies on algorithms. The emergence of computers enables us to store data structures in memory and write code to call CPUs and GPUs to execute algorithms. Consequently, we can transfer real-life predicaments to computers, efficiently addressing a myriad of complex issues.
|
||||
From cooking a meal to interstellar travel, almost all problem-solving involves algorithms. The advent of computers allows us to store data structures in memory and write code to call the CPU and GPU to execute algorithms. In this way, we can transfer real-life problems to computers, solving various complex issues more efficiently.
|
||||
|
||||
!!! tip
|
||||
|
||||
|
|
|
@ -27,7 +27,7 @@ A "data structure" is a way of organizing and storing data in a computer, with t
|
|||
|
||||
## 1.2.3 Relationship Between Data Structures and Algorithms
|
||||
|
||||
As shown in the diagram below, data structures and algorithms are highly related and closely integrated, specifically in the following three aspects:
|
||||
As shown in Figure 1-4, data structures and algorithms are highly related and closely integrated, specifically in the following three aspects:
|
||||
|
||||
- Data structures are the foundation of algorithms. They provide algorithms with structured data storage and the methods for manipulating data.
|
||||
- Algorithms are the stage where data structures come into play. A data structure by itself only stores data; it is through the application of algorithms that specific problems can be solved.
|
||||
|
|
|
@@ -2,50 +2,51 @@
|
|||
comments: true
|
||||
---
|
||||
|
||||
# 0.1 The Book
|
||||
# 0.1 About This Book
|
||||
|
||||
The aim of this project is to create an open source, free, novice-friendly introductory tutorial on data structures and algorithms.
|
||||
This open-source project aims to create a free and beginner-friendly crash course on data structures and algorithms.
|
||||
|
||||
- Animated graphs are used throughout the book to structure the knowledge of data structures and algorithms in a way that is clear and easy to understand with a smooth learning curve.
|
||||
- The source code of the algorithms can be run with a single click, supporting Java, C++, Python, Go, JS, TS, C#, Swift, Rust, Dart, Zig and other languages.
|
||||
- Readers are encouraged to help each other and make progress in the chapter discussion forums, and questions and comments can usually be answered within two days.
|
||||
- Using animated illustrations, it delivers structured insights into data structures and algorithmic concepts, ensuring comprehensibility and a smooth learning curve.
|
||||
- Run code with just one click, supporting Java, C++, Python, Go, JS, TS, C#, Swift, Rust, Dart, Zig and other languages.
|
||||
- Readers are encouraged to engage with each other in the discussion area for each section; questions and comments are usually answered within two days.
|
||||
|
||||
## 0.1.1 Target Readers
|
||||
## 0.1.1 Target Audience
|
||||
|
||||
If you are a beginner to algorithms, have never touched an algorithm before, or already have some experience brushing up on data structures and algorithms, and have a vague understanding of data structures and algorithms, repeatedly jumping sideways between what you can and can't do, then this book is just for you!
|
||||
If you are new to algorithms with limited exposure, or you have accumulated some experience in algorithms, but you only have a vague understanding of data structures and algorithms, and you are constantly jumping between "yep" and "hmm", then this book is for you!
|
||||
|
||||
If you have already accumulated a certain amount of questions and are familiar with most of the question types, then this book can help you review and organize the algorithm knowledge system, and the repository source code can be used as a "brushing tool library" or "algorithm dictionary".
|
||||
If you have already accumulated a certain amount of problem-solving experience, and are familiar with most types of problems, then this book can help you review and organize your algorithm knowledge system. The repository's source code can be used as a "problem-solving toolkit" or an "algorithm cheat sheet".
|
||||
|
||||
If you are an algorithm expert, we look forward to receiving your valuable suggestions or [participate in the creation together](https://www.hello-algo.com/chapter_appendix/contribution/).
|
||||
If you are an algorithm expert, we look forward to receiving your valuable suggestions, or [join us and collaborate](https://www.hello-algo.com/chapter_appendix/contribution/).
|
||||
|
||||
!!! success "precondition"
|
||||
!!! success "Prerequisites"
|
||||
|
||||
You will need to have at least a basic knowledge of programming in any language and be able to read and write simple code.
|
||||
You should know how to write and read simple code in at least one programming language.
|
||||
|
||||
## 0.1.2 Content Structure
|
||||
|
||||
The main contents of the book are shown in the Figure 0-1 .
|
||||
The main content of the book is shown in the following figure.
|
||||
|
||||
- **Complexity Analysis**: dimensions and methods of evaluation of data structures and algorithms. Methods of deriving time complexity, space complexity, common types, examples, etc.
|
||||
- **Data Structures**: basic data types, classification methods of data structures. Definition, advantages and disadvantages, common operations, common types, typical applications, implementation methods of data structures such as arrays, linked lists, stacks, queues, hash tables, trees, heaps, graphs, etc.
|
||||
- **Algorithms**: definitions, advantages and disadvantages, efficiency, application scenarios, solution steps, sample topics of search, sorting algorithms, divide and conquer, backtracking algorithms, dynamic programming, greedy algorithms, and more.
|
||||
- **Complexity Analysis**: explores aspects and methods for evaluating data structures and algorithms. Covers methods of deriving time complexity and space complexity, along with common types and examples.
|
||||
- **Data Structures**: focuses on fundamental data types, classification methods, definitions, pros and cons, common operations, types, applications, and implementation methods of data structures such as array, linked list, stack, queue, hash table, tree, heap, graph, etc.
|
||||
- **Algorithms**: defines algorithms, discusses their pros and cons, efficiency, application scenarios, problem-solving steps, and includes sample questions for various algorithms such as search, sorting, divide and conquer, backtracking, dynamic programming, greedy algorithms, and more.
|
||||
|
||||
![Hello Algo content structure](about_the_book.assets/hello_algo_mindmap.jpg){ class="animation-figure" }
|
||||
![Main Content of the Book](about_the_book.assets/hello_algo_mindmap.jpg){ class="animation-figure" }
|
||||
|
||||
<p align="center"> Figure 0-1 Hello Algo content structure </p>
|
||||
<p align="center"> Figure 0-1 Main Content of the Book </p>
|
||||
|
||||
## 0.1.3 Acknowledgements
|
||||
|
||||
During the creation of this book, I received help from many people, including but not limited to:
|
||||
Throughout the creation of this book, numerous individuals provided invaluable assistance, including but not limited to:
|
||||
|
||||
- Thank you to my mentor at the company, Dr. Shih Lee, for encouraging me to "get moving" during one of our conversations, which strengthened my resolve to write this book.
|
||||
- I would like to thank my girlfriend Bubbles for being the first reader of this book, and for making many valuable suggestions from the perspective of an algorithm whiz, making this book more suitable for newbies.
|
||||
- Thanks to Tengbao, Qibao, and Feibao for coming up with a creative name for this book that evokes fond memories of writing the first line of code "Hello World!".
|
||||
- Thanks to Sutong for designing the beautiful cover and logo for this book and patiently revising it many times under my OCD.
|
||||
- Thanks to @squidfunk for writing layout suggestions and for developing the open source documentation theme [Material-for-MkDocs](https://github.com/squidfunk/mkdocs-material/tree/master).
|
||||
- Thanks to my mentor at the company, Dr. Xi Li, who encouraged me in a conversation to "get moving fast," which solidified my determination to write this book;
|
||||
- Thanks to my girlfriend Paopao, as the first reader of this book, for offering many valuable suggestions from the perspective of a beginner in algorithms, making this book more suitable for newbies;
|
||||
- Thanks to Tengbao, Qibao, and Feibao for coming up with a creative name for this book, evoking everyone's fond memories of writing their first line of code "Hello World!";
|
||||
- Thanks to Xiaoquan for providing professional help in intellectual property, which has played a significant role in the development of this open-source book;
|
||||
- Thanks to Sutong for designing a beautiful cover and logo for this book, and for patiently making multiple revisions under my insistence;
|
||||
- Thanks to @squidfunk for providing writing and typesetting suggestions, as well as his developed open-source documentation theme [Material-for-MkDocs](https://github.com/squidfunk/mkdocs-material/tree/master).
|
||||
|
||||
During the writing process, I read many textbooks and articles on data structures and algorithms. These works provide excellent models for this book and ensure the accuracy and quality of its contents. I would like to thank all my teachers and predecessors for their outstanding contributions!
|
||||
Throughout the writing journey, I delved into numerous textbooks and articles on data structures and algorithms. These works served as exemplary models, ensuring the accuracy and quality of this book's content. I extend my gratitude to all who preceded me for their invaluable contributions!
|
||||
|
||||
This book promotes a hands-on approach to learning, and in this respect is heavily inspired by ["Dive into Deep Learning"](https://github.com/d2l-ai/d2l-zh). I highly recommend this excellent book to you.
|
||||
This book advocates a combination of hands-on and minds-on learning, inspired in this regard by ["Dive into Deep Learning"](https://github.com/d2l-ai/d2l-zh). I highly recommend this excellent book to all readers.
|
||||
|
||||
**A heartfelt thank you to my parents, it is your constant support and encouragement that gives me the opportunity to do this fun-filled thing**.
|
||||
**Heartfelt thanks to my parents, whose ongoing support and encouragement have allowed me to do this interesting work**.
|
||||
|
|
|
@@ -15,7 +15,7 @@ icon: material/book-open-outline
|
|||
|
||||
Algorithms are like a beautiful symphony, with each line of code flowing like a rhythm.
|
||||
|
||||
May this book ring softly in your head, leaving a unique and profound melody.
|
||||
May this book ring softly in your mind, leaving a unique and profound melody.
|
||||
|
||||
## Chapter Contents
|
||||
|
||||
|
|
|
@@ -7,6 +7,6 @@ comments: true
|
|||
- The main audience of this book is algorithm beginners. If you already have some basic knowledge, this book can help you systematically review your algorithm knowledge, and the source code in this book can also be used as a "Coding Toolkit".
|
||||
- The book consists of three main sections: Complexity Analysis, Data Structures, and Algorithms, covering most of the topics in the field.
|
||||
- For newcomers to algorithms, it is crucial to read an introductory book in the beginning stages to avoid many detours or common pitfalls.
|
||||
- Animations and graphs within the book are usually used to introduce key points and difficult knowledge. These should be given more attention when reading the book.
|
||||
- Animations and figures within the book are usually used to introduce key points and difficult knowledge. These should be given more attention when reading the book.
|
||||
- Practice is the best way to learn programming. It is highly recommended that you run the source code and type in the code yourself.
|
||||
- Each chapter in the web version of this book features a discussion forum, and you are welcome to share your questions and insights at any time.
|
||||
- Each chapter in the web version of this book features a discussion section, and you are welcome to share your questions and insights at any time.
|
||||
|
|
|
@@ -247,24 +247,20 @@ comments: true
|
|||
|
||||
```rust title="binary_search_tree.rs"
|
||||
/* Search for node */
|
||||
pub fn search(&self, num: i32) -> Option<TreeNodeRc> {
|
||||
pub fn search(&self, num: i32) -> OptionTreeNodeRc {
|
||||
let mut cur = self.root.clone();
|
||||
|
||||
// Loop search; break after passing the leaf nodes
|
||||
while let Some(node) = cur.clone() {
|
||||
// Target node is in cur's right subtree
|
||||
if node.borrow().val < num {
|
||||
cur = node.borrow().right.clone();
|
||||
}
|
||||
// Target node is in cur's left subtree
|
||||
else if node.borrow().val > num {
|
||||
cur = node.borrow().left.clone();
|
||||
}
|
||||
// Found the target node; break out of the loop
|
||||
else {
|
||||
break;
|
||||
match num.cmp(&node.borrow().val) {
|
||||
// Target node is in cur's right subtree
|
||||
Ordering::Greater => cur = node.borrow().right.clone(),
|
||||
// Target node is in cur's left subtree
|
||||
Ordering::Less => cur = node.borrow().left.clone(),
|
||||
// Found the target node; break out of the loop
|
||||
Ordering::Equal => break,
|
||||
}
|
||||
}
|
||||
|
||||
// Return the target node
|
||||
cur
|
||||
}
|
||||
|
@@ -644,27 +640,28 @@ comments: true
|
|||
let mut pre = None;
|
||||
// Loop search; break after passing the leaf nodes
|
||||
while let Some(node) = cur.clone() {
|
||||
// Found a duplicate node; return directly
|
||||
if node.borrow().val == num {
|
||||
return;
|
||||
}
|
||||
// Insertion position is in cur's right subtree
|
||||
pre = cur.clone();
|
||||
if node.borrow().val < num {
|
||||
cur = node.borrow().right.clone();
|
||||
}
|
||||
// Insertion position is in cur's left subtree
|
||||
else {
|
||||
cur = node.borrow().left.clone();
|
||||
match num.cmp(&node.borrow().val) {
|
||||
// Found a duplicate node; return directly
|
||||
Ordering::Equal => return,
|
||||
// Insertion position is in cur's right subtree
|
||||
Ordering::Greater => {
|
||||
pre = cur.clone();
|
||||
cur = node.borrow().right.clone();
|
||||
}
|
||||
// Insertion position is in cur's left subtree
|
||||
Ordering::Less => {
|
||||
pre = cur.clone();
|
||||
cur = node.borrow().left.clone();
|
||||
}
|
||||
}
|
||||
}
|
||||
// Insert the node
|
||||
let node = TreeNode::new(num);
|
||||
let pre = pre.unwrap();
|
||||
if pre.borrow().val < num {
|
||||
pre.borrow_mut().right = Some(Rc::clone(&node));
|
||||
let node = Some(TreeNode::new(num));
|
||||
if num > pre.borrow().val {
|
||||
pre.borrow_mut().right = node;
|
||||
} else {
|
||||
pre.borrow_mut().left = Some(Rc::clone(&node));
|
||||
pre.borrow_mut().left = node;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
@@ -1295,18 +1292,19 @@ comments: true
|
|||
let mut pre = None;
|
||||
// Loop search; break after passing the leaf nodes
|
||||
while let Some(node) = cur.clone() {
|
||||
// Found the node to be removed; break out of the loop
|
||||
if node.borrow().val == num {
|
||||
break;
|
||||
}
|
||||
// Node to be removed is in cur's right subtree
|
||||
pre = cur.clone();
|
||||
if node.borrow().val < num {
|
||||
cur = node.borrow().right.clone();
|
||||
}
|
||||
// Node to be removed is in cur's left subtree
|
||||
else {
|
||||
cur = node.borrow().left.clone();
|
||||
match num.cmp(&node.borrow().val) {
|
||||
// Found the node to be removed; break out of the loop
|
||||
Ordering::Equal => break,
|
||||
// Node to be removed is in cur's right subtree
|
||||
Ordering::Greater => {
|
||||
pre = cur.clone();
|
||||
cur = node.borrow().right.clone();
|
||||
}
|
||||
// Node to be removed is in cur's left subtree
|
||||
Ordering::Less => {
|
||||
pre = cur.clone();
|
||||
cur = node.borrow().left.clone();
|
||||
}
|
||||
}
|
||||
}
|
||||
// If there is no node to remove, return directly
|
||||
|
@@ -1314,40 +1312,43 @@ comments: true
|
|||
return;
|
||||
}
|
||||
let cur = cur.unwrap();
|
||||
// Number of child nodes = 0 or 1
|
||||
if cur.borrow().left.is_none() || cur.borrow().right.is_none() {
|
||||
// When the number of child nodes = 0 / 1, child = nullptr / that child node
|
||||
let child = cur.borrow().left.clone().or_else(|| cur.borrow().right.clone());
|
||||
let pre = pre.unwrap();
|
||||
let left = pre.borrow().left.clone().unwrap();
|
||||
// Remove node cur
|
||||
if !Rc::ptr_eq(&cur, self.root.as_ref().unwrap()) {
|
||||
if Rc::ptr_eq(&left, &cur) {
|
||||
pre.borrow_mut().left = child;
|
||||
let (left_child, right_child) = (cur.borrow().left.clone(), cur.borrow().right.clone());
|
||||
match (left_child.clone(), right_child.clone()) {
|
||||
// Number of child nodes = 0 or 1
|
||||
(None, None) | (Some(_), None) | (None, Some(_)) => {
|
||||
// When the number of child nodes = 0 / 1, child = nullptr / that child node
|
||||
let child = left_child.or(right_child);
|
||||
let pre = pre.unwrap();
|
||||
// Remove node cur
|
||||
if !Rc::ptr_eq(&cur, self.root.as_ref().unwrap()) {
|
||||
let left = pre.borrow().left.clone();
|
||||
if left.is_some() && Rc::ptr_eq(&left.as_ref().unwrap(), &cur) {
|
||||
pre.borrow_mut().left = child;
|
||||
} else {
|
||||
pre.borrow_mut().right = child;
|
||||
}
|
||||
} else {
|
||||
pre.borrow_mut().right = child;
|
||||
}
|
||||
} else {
|
||||
// If the removed node is the root, reassign the root
|
||||
self.root = child;
|
||||
}
|
||||
}
|
||||
// Number of child nodes = 2
|
||||
else {
|
||||
// Get the next node of cur in in-order traversal
|
||||
let mut tmp = cur.borrow().right.clone();
|
||||
while let Some(node) = tmp.clone() {
|
||||
if node.borrow().left.is_some() {
|
||||
tmp = node.borrow().left.clone();
|
||||
} else {
|
||||
break;
|
||||
// If the removed node is the root, reassign the root
|
||||
self.root = child;
|
||||
}
|
||||
}
|
||||
let tmpval = tmp.unwrap().borrow().val;
|
||||
// Recursively remove node tmp
|
||||
self.remove(tmpval);
|
||||
// Overwrite cur with tmp
|
||||
cur.borrow_mut().val = tmpval;
|
||||
// Number of child nodes = 2
|
||||
(Some(_), Some(_)) => {
|
||||
// Get the next node of cur in in-order traversal
|
||||
let mut tmp = cur.borrow().right.clone();
|
||||
while let Some(node) = tmp.clone() {
|
||||
if node.borrow().left.is_some() {
|
||||
tmp = node.borrow().left.clone();
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
let tmpval = tmp.unwrap().borrow().val;
|
||||
// Recursively remove node tmp
|
||||
self.remove(tmpval);
|
||||
// Overwrite cur with tmp
|
||||
cur.borrow_mut().val = tmpval;
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
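One readability note on the diff above: the chained `if` / `else if` comparisons are replaced by `match num.cmp(&node.borrow().val)`, which makes the three-way comparison (`Less` / `Equal` / `Greater`) explicit and exhaustive, so the compiler will flag a forgotten branch. A minimal self-contained sketch of the same pattern, independent of the book's `TreeNode` type, might look like this:

```rust
use std::cmp::Ordering;

/* A minimal sketch of the three-way comparison pattern used above (illustrative helper, not part of the book's source) */
fn describe_step(num: i32, val: i32) -> &'static str {
    match num.cmp(&val) {
        // Target is larger: search the right subtree
        Ordering::Greater => "go right",
        // Target is smaller: search the left subtree
        Ordering::Less => "go left",
        // Values are equal: target found
        Ordering::Equal => "found",
    }
}
```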
|
|