Add the initial translation of chapter "dynamic programming" (#1319)
en/docs/chapter_dynamic_programming/dp_problem_features.md
# Dynamic programming problem characteristics

In the previous section, we learned how dynamic programming solves the original problem by decomposing it into subproblems. In fact, subproblem decomposition is a general algorithmic approach, with different emphases in divide and conquer, dynamic programming, and backtracking.

- Divide and conquer algorithms recursively divide the original problem into multiple independent subproblems until the smallest subproblems are reached, then combine the solutions of the subproblems during backtracking to obtain the solution to the original problem.
- Dynamic programming also decomposes the problem recursively, but the main difference from divide and conquer is that the subproblems in dynamic programming are interdependent, and many overlapping subproblems appear during decomposition.
- Backtracking algorithms exhaust all possible solutions through trial and error, and avoid unnecessary search branches by pruning. The solution to the original problem consists of a series of decision steps; we can consider the sub-sequence before each decision step as a subproblem.

In fact, dynamic programming is commonly used to solve optimization problems, which not only contain overlapping subproblems but also exhibit two other major characteristics: optimal substructure and statelessness.

## Optimal substructure

We make a slight modification to the stair-climbing problem, to make it better suited to demonstrating the concept of optimal substructure.

!!! question "Minimum cost of climbing stairs"

    Given a staircase, you can step up 1 or 2 steps at a time, and each step on the staircase has a non-negative integer representing the cost you need to pay at that step. Given a non-negative integer array $cost$, where $cost[i]$ represents the cost you need to pay at the $i$-th step, and $cost[0]$ is the ground (starting point), what is the minimum cost required to reach the top?

As shown in the figure below, if the costs of the 1st, 2nd, and 3rd steps are $1$, $10$, and $1$ respectively, then the minimum cost to climb from the ground to the 3rd step is $2$.

![Minimum cost to climb to the 3rd step](dp_problem_features.assets/min_cost_cs_example.png)

Let $dp[i]$ be the cumulative cost of climbing to the $i$-th step. Since the $i$-th step can only be reached from step $i-1$ or step $i-2$, $dp[i]$ can only be either $dp[i-1] + cost[i]$ or $dp[i-2] + cost[i]$. To minimize the cost, we should choose the smaller of the two:

$$
dp[i] = \min(dp[i-1], dp[i-2]) + cost[i]
$$

This leads us to the meaning of optimal substructure: **the optimal solution to the original problem is constructed from the optimal solutions of subproblems**.

This problem clearly has optimal substructure: we select the better of the optimal solutions of the two subproblems, $dp[i-1]$ and $dp[i-2]$, and use it to construct the optimal solution of the original problem $dp[i]$.

So, does the stair-climbing problem from the previous section have optimal substructure? Its goal is to count the number of solutions, which seems to be a counting problem. But if we rephrase it as "find the maximum number of solutions", we surprisingly discover that **although the problem has changed, optimal substructure has emerged**: the maximum number of solutions at the $n$-th step equals the sum of the maximum numbers of solutions at the $n-1$-th and $n-2$-th steps. Thus, the interpretation of optimal substructure is quite flexible and takes on different meanings in different problems.

According to the state transition equation and the initial states $dp[1] = cost[1]$ and $dp[2] = cost[2]$, we obtain the dynamic programming code:

```src
[file]{min_cost_climbing_stairs_dp}-[class]{}-[func]{min_cost_climbing_stairs_dp}
```
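The `src` marker above expands into the book's implementation in each language. As a rough sketch (the function name mirrors the placeholder; this is not necessarily the book's exact code), a Python version of the recurrence might read:

```python
def min_cost_climbing_stairs_dp(cost: list[int]) -> int:
    """DP sketch: cost[0] is the ground, cost[i] is the price of step i."""
    n = len(cost) - 1
    if n == 1 or n == 2:
        return cost[n]
    # dp[i]: minimum cumulative cost of climbing to step i
    dp = [0] * (n + 1)
    dp[1], dp[2] = cost[1], cost[2]
    for i in range(3, n + 1):
        dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i]
    return dp[n]
```

Running it on the example above, `cost = [0, 1, 10, 1]`, returns $2$.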
The figure below shows the dynamic programming process for the above code.

![Dynamic programming process for minimum cost of climbing stairs](dp_problem_features.assets/min_cost_cs_dp.png)

This problem can also be space-optimized, compressing the one-dimensional $dp$ array into two rolling variables, reducing the space complexity from $O(n)$ to $O(1)$:

```src
[file]{min_cost_climbing_stairs_dp}-[class]{}-[func]{min_cost_climbing_stairs_dp_comp}
```
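A hedged Python sketch of the space-optimized variant (names assumed from the placeholder), where two variables roll forward in place of the $dp$ array:

```python
def min_cost_climbing_stairs_dp_comp(cost: list[int]) -> int:
    """Space-optimized sketch: keep only dp[i-2] and dp[i-1]."""
    n = len(cost) - 1
    if n == 1 or n == 2:
        return cost[n]
    a, b = cost[1], cost[2]  # a = dp[i-2], b = dp[i-1]
    for i in range(3, n + 1):
        a, b = b, min(a, b) + cost[i]
    return b
```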
## Statelessness

Statelessness is one of the important characteristics that make dynamic programming effective at solving problems. It is defined as follows: **given a certain state, its future development depends only on the current state and is unrelated to all the past states experienced**.

Taking the stair-climbing problem as an example, given state $i$, it develops into states $i+1$ and $i+2$, corresponding to jumping 1 step and 2 steps respectively. When making these two choices, we do not need to consider the states before state $i$, as they do not affect the future of state $i$.

However, if we add a constraint to the stair-climbing problem, the situation changes.

!!! question "Stair climbing with constraints"

    Given a staircase with $n$ steps, you can go up 1 or 2 steps each time, **but you cannot jump 1 step twice in a row**. How many ways are there to climb to the top?

As shown in the figure below, there are only 2 feasible options for climbing to the 3rd step; the option of jumping 1 step three times in a row does not meet the constraint and is therefore discarded.

![Number of feasible options for climbing to the 3rd step with constraints](dp_problem_features.assets/climbing_stairs_constraint_example.png)

In this problem, if the last round was a jump of 1 step, then the next round must be a jump of 2 steps. This means that **the next step choice cannot be determined independently by the current state (the current stair step); it also depends on the previous state (the last round's stair step)**.

It is not difficult to see that this problem no longer satisfies statelessness, and the state transition equation $dp[i] = dp[i-1] + dp[i-2]$ also fails: $dp[i-1]$ represents this round's jump of 1 step, but it includes many "last round was a jump of 1 step" options, which, to meet the constraint, cannot be directly counted in $dp[i]$.

For this reason, we need to expand the state definition: **state $[i, j]$ represents being on the $i$-th step with the last round being a jump of $j$ steps**, where $j \in \{1, 2\}$. This state definition effectively distinguishes whether the last round was a jump of 1 step or 2 steps, so we can judge where the current state came from.

- When the last round was a jump of 1 step, the round before last could only have been a jump of 2 steps, that is, $dp[i, 1]$ can only be transferred from $dp[i-1, 2]$.
- When the last round was a jump of 2 steps, the round before last could have been a jump of either 1 step or 2 steps, that is, $dp[i, 2]$ can be transferred from $dp[i-2, 1]$ or $dp[i-2, 2]$.

As shown in the figure below, $dp[i, j]$ represents the number of solutions for state $[i, j]$. At this point, the state transition equation is:

$$
\begin{cases}
dp[i, 1] = dp[i-1, 2] \\
dp[i, 2] = dp[i-2, 1] + dp[i-2, 2]
\end{cases}
$$

![Recursive relationship considering constraints](dp_problem_features.assets/climbing_stairs_constraint_state_transfer.png)

Finally, return $dp[n, 1] + dp[n, 2]$, the sum of the two representing the total number of ways to climb to the $n$-th step:

```src
[file]{climbing_stairs_constraint_dp}-[class]{}-[func]{climbing_stairs_constraint_dp}
```
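A minimal Python sketch of this expanded-state recurrence (function name assumed from the placeholder), storing the two sub-states per step in a small table:

```python
def climbing_stairs_constraint_dp(n: int) -> int:
    """Sketch: dp[i][j] = number of ways to reach step i
    when the last jump covered j steps (j in {1, 2})."""
    if n == 1 or n == 2:
        return 1
    dp = [[0] * 3 for _ in range(n + 1)]
    # Initial states: reach step 1 by a 1-jump, step 2 by a 2-jump
    # (1 + 1 would be two 1-jumps in a row, so dp[2][1] = 0)
    dp[1][1], dp[1][2] = 1, 0
    dp[2][1], dp[2][2] = 0, 1
    for i in range(3, n + 1):
        dp[i][1] = dp[i - 1][2]
        dp[i][2] = dp[i - 2][1] + dp[i - 2][2]
    return dp[n][1] + dp[n][2]
```

For $n = 3$ this returns $2$, matching the figure.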
In the above case, since we only need to consider the previous state, we can still satisfy statelessness by expanding the state definition. However, some problems have very serious "state effects".

!!! question "Stair climbing with obstacle generation"

    Given a staircase with $n$ steps, you can go up 1 or 2 steps each time. **It is stipulated that when climbing to the $i$-th step, the system automatically places an obstacle on the $2i$-th step, and thereafter no round is allowed to jump to the $2i$-th step**. For example, if the first two rounds jump to the 2nd and 3rd steps, then you cannot later jump to the 4th or 6th step. How many ways are there to climb to the top?

In this problem, the next jump depends on all past states, as each jump places obstacles on higher steps and thereby affects future jumps. Dynamic programming often struggles to solve this kind of problem.
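To make the "state effect" concrete, here is an illustrative brute-force sketch (this helper is not part of the book's code): the search must thread the entire set of placed obstacles through every call, so the state can no longer be summarized by the current step alone.

```python
def climbing_stairs_with_obstacles(n: int) -> int:
    """Brute-force sketch: landing on step j places an obstacle
    on step 2*j, so the full obstacle set is part of the state."""
    def dfs(i: int, blocked: frozenset) -> int:
        if i == n:
            return 1
        count = 0
        for step in (1, 2):
            j = i + step
            # A jump to j is allowed only if no obstacle sits there;
            # landing on j then blocks step 2*j for all later rounds
            if j <= n and j not in blocked:
                count += dfs(j, blocked | {2 * j})
        return count
    return dfs(0, frozenset())
```

For $n = 3$ this yields $2$: the path 1, 1, 1 dies because landing on step 1 blocks step 2.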
In fact, many complex combinatorial optimization problems (such as the traveling salesman problem) do not satisfy statelessness. For these kinds of problems, we usually choose other methods, such as heuristic search, genetic algorithms, or reinforcement learning, to obtain usable local optima within a limited time.
en/docs/chapter_dynamic_programming/dp_solution_pipeline.md
# Dynamic programming problem-solving approach

The last two sections introduced the main characteristics of dynamic programming problems. Next, let's explore two more practical questions together.

1. How do we determine whether a problem is a dynamic programming problem?
2. What are the complete steps for solving a dynamic programming problem?

## Problem determination

Generally speaking, if a problem contains overlapping subproblems, has optimal substructure, and satisfies statelessness, it is usually suitable for a dynamic programming solution. However, it is often difficult to extract these characteristics directly from the problem description. Therefore, we usually relax the conditions and **first observe whether the problem is suitable for backtracking (exhaustive search)**.

**Problems suitable for backtracking usually fit the "decision tree model"**: they can be described using a tree structure, where each node represents a decision and each path represents a sequence of decisions.

In other words, if the problem contains explicit decision concepts and the solution is produced through a series of decisions, then it fits the decision tree model and can usually be solved using backtracking.

On this basis, there are some "bonus points" for identifying dynamic programming problems.

- The problem contains descriptions of maximization (minimization) or finding the optimal (most or least) solution.
- The problem's states can be represented using a list, multi-dimensional matrix, or tree, and a state has a recursive relationship with its surrounding states.

Correspondingly, there are also some "penalty points".

- The goal of the problem is to find all possible solutions, not just the optimal one.
- The problem description has obvious characteristics of permutations and combinations, requiring specific multiple solutions to be returned.

If a problem fits the decision tree model and has relatively obvious "bonus points", we can assume it is a dynamic programming problem and verify this during the solution process.

## Problem-solving steps

The dynamic programming problem-solving process varies with the nature and difficulty of the problem, but generally follows these steps: describe decisions, define states, establish a $dp$ table, derive state transition equations, and determine boundary conditions.

To illustrate the problem-solving steps more vividly, we use the classic problem "minimum path sum" as an example.

!!! question

    Given an $n \times m$ two-dimensional grid `grid`, each cell in the grid contains a non-negative integer representing the cost of that cell. A robot starts from the top-left cell and can only move down or right at each step until it reaches the bottom-right cell. Return the minimum path sum from the top-left to the bottom-right.

The figure below shows an example, where the given grid's minimum path sum is $13$.

![Minimum path sum example data](dp_solution_pipeline.assets/min_path_sum_example.png)
**First step: Think about each round of decisions, define the state, and thereby obtain the $dp$ table**

Each round of decisions in this problem is to move one step down or right from the current cell. Suppose the row and column indices of the current cell are $[i, j]$; after moving down or right, the indices become $[i+1, j]$ or $[i, j+1]$. Therefore, the state should include two variables, the row index and the column index, denoted as $[i, j]$.

The state $[i, j]$ corresponds to the subproblem: the minimum path sum from the starting point $[0, 0]$ to $[i, j]$, denoted as $dp[i, j]$.

Thus, we obtain the two-dimensional $dp$ matrix shown below, whose size is the same as that of the input grid $grid$.

![State definition and DP table](dp_solution_pipeline.assets/min_path_sum_solution_state_definition.png)

!!! note

    Dynamic programming and backtracking can both be described as a sequence of decisions, while a state consists of all decision variables. It should include all variables that describe the progress of solving the problem, containing enough information to derive the next state.

    Each state corresponds to a subproblem, and we define a $dp$ table to store the solutions to all subproblems. Each independent variable of the state is a dimension of the $dp$ table. Essentially, the $dp$ table is a mapping between states and solutions to subproblems.

**Second step: Identify the optimal substructure, then derive the state transition equation**

For state $[i, j]$, it can only be derived from the cell above, $[i-1, j]$, or the cell to the left, $[i, j-1]$. Therefore, the optimal substructure is: the minimum path sum to reach $[i, j]$ is determined by the smaller of the minimum path sums of $[i, j-1]$ and $[i-1, j]$.

Based on the above analysis, the state transition equation shown in the figure below can be derived:

$$
dp[i, j] = \min(dp[i-1, j], dp[i, j-1]) + grid[i, j]
$$
![Optimal substructure and state transition equation](dp_solution_pipeline.assets/min_path_sum_solution_state_transition.png)

!!! note

    Based on the defined $dp$ table, think about the relationship between the original problem and the subproblems, and find out how to construct the optimal solution to the original problem from the optimal solutions to the subproblems, i.e., the optimal substructure.

    Once we have identified the optimal substructure, we can use it to build the state transition equation.

**Third step: Determine the boundary conditions and the state transition order**

In this problem, the states in the first row can only come from the states to their left, and the states in the first column can only come from the states above them, so the first row $i = 0$ and the first column $j = 0$ are the boundary conditions.

As shown in the figure below, since each cell is derived from the cell to its left and the cell above it, we use loops to traverse the matrix: the outer loop iterates over the rows and the inner loop iterates over the columns.

![Boundary conditions and state transition order](dp_solution_pipeline.assets/min_path_sum_solution_initial_state.png)

!!! note

    Boundary conditions are used to initialize the $dp$ table in dynamic programming, and to prune in search.

    The core of the state transition order is to ensure that, when the solution to the current problem is calculated, all the smaller subproblems it depends on have already been correctly calculated.

Based on the above analysis, we can directly write the dynamic programming code. However, subproblem decomposition is a top-down approach, so implementing it in the order "brute-force search → memoized search → dynamic programming" better matches the natural flow of thought.

### Method 1: Brute-force search

Start searching from state $[i, j]$, constantly decomposing it into the smaller states $[i-1, j]$ and $[i, j-1]$. The recursive function includes the following elements.

- **Recursive parameter**: state $[i, j]$.
- **Return value**: the minimum path sum from $[0, 0]$ to $[i, j]$, i.e., $dp[i, j]$.
- **Termination condition**: when $i = 0$ and $j = 0$, return the cost $grid[0, 0]$.
- **Pruning**: when $i < 0$ or $j < 0$, the index is out of bounds, so return the cost $+\infty$, representing infeasibility.

The implementation code is as follows:

```src
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dfs}
```
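A minimal Python sketch of this recursive search (function name assumed from the placeholder; the test grid below is a made-up example, not the figure's data):

```python
def min_path_sum_dfs(grid: list[list[int]], i: int, j: int):
    """Brute-force sketch: minimum path sum from [0, 0] to [i, j]."""
    # Termination condition: the top-left cell
    if i == 0 and j == 0:
        return grid[0][0]
    # Pruning: out-of-bounds states are infeasible
    if i < 0 or j < 0:
        return float("inf")
    # Minimum path sums of the cell above and the cell to the left
    up = min_path_sum_dfs(grid, i - 1, j)
    left = min_path_sum_dfs(grid, i, j - 1)
    return min(up, left) + grid[i][j]
```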
The figure below shows the recursive tree rooted at $dp[2, 1]$, which includes some overlapping subproblems, whose number increases sharply as the size of the grid `grid` increases.

Essentially, the reason for the overlapping subproblems is that **there are multiple paths from the top-left corner to a given cell**.

![Brute-force search recursive tree](dp_solution_pipeline.assets/min_path_sum_dfs.png)

Each state has two choices, down and right, and the total number of steps from the top-left corner to the bottom-right corner is $m + n - 2$, so the worst-case time complexity is $O(2^{m + n})$. Please note that this calculation does not account for the situation near the grid edge, where only one choice remains, so the actual number of paths is somewhat smaller.

### Method 2: Memoized search

We introduce a memo list `mem` of the same size as the grid `grid`, used to record the solutions to the various subproblems and prune overlapping subproblems:

```src
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dfs_mem}
```
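A hedged Python sketch of the memoized version (names assumed from the placeholder), where `mem` caches each subproblem's solution and $-1$ marks an unsolved entry:

```python
def min_path_sum_dfs_mem(grid, mem, i, j):
    """Memoized search sketch: each state [i, j] is solved at most once."""
    if i == 0 and j == 0:
        return grid[0][0]
    if i < 0 or j < 0:
        return float("inf")
    # Return the cached solution if this subproblem was already solved
    if mem[i][j] != -1:
        return mem[i][j]
    up = min_path_sum_dfs_mem(grid, mem, i - 1, j)
    left = min_path_sum_dfs_mem(grid, mem, i, j - 1)
    mem[i][j] = min(up, left) + grid[i][j]
    return mem[i][j]
```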
As shown in the figure below, after memoization is introduced, each subproblem's solution only needs to be calculated once, so the time complexity depends on the total number of states, i.e., the grid size $O(nm)$.

![Memoized search recursive tree](dp_solution_pipeline.assets/min_path_sum_dfs_mem.png)

### Method 3: Dynamic programming

Implement the dynamic programming solution iteratively; the code is shown below:

```src
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dp}
```
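A minimal Python sketch of the iterative solution (function name assumed from the placeholder), initializing the boundary row and column first and then filling the rest of the table:

```python
def min_path_sum_dp(grid: list[list[int]]) -> int:
    """DP sketch: fill the dp table row by row."""
    n, m = len(grid), len(grid[0])
    dp = [[0] * m for _ in range(n)]
    dp[0][0] = grid[0][0]
    # Boundary conditions: first row and first column
    for j in range(1, m):
        dp[0][j] = dp[0][j - 1] + grid[0][j]
    for i in range(1, n):
        dp[i][0] = dp[i - 1][0] + grid[i][0]
    # State transition: remaining rows and columns
    for i in range(1, n):
        for j in range(1, m):
            dp[i][j] = min(dp[i][j - 1], dp[i - 1][j]) + grid[i][j]
    return dp[n - 1][m - 1]
```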
The figures below show the state transition process of the minimum path sum, traversing the entire grid, **thus the time complexity is $O(nm)$**.

The array `dp` is of size $n \times m$, **therefore the space complexity is $O(nm)$**.

=== "<1>"
    ![Dynamic programming process of minimum path sum](dp_solution_pipeline.assets/min_path_sum_dp_step1.png)

=== "<2>"
    ![min_path_sum_dp_step2](dp_solution_pipeline.assets/min_path_sum_dp_step2.png)

=== "<3>"
    ![min_path_sum_dp_step3](dp_solution_pipeline.assets/min_path_sum_dp_step3.png)

=== "<4>"
    ![min_path_sum_dp_step4](dp_solution_pipeline.assets/min_path_sum_dp_step4.png)

=== "<5>"
    ![min_path_sum_dp_step5](dp_solution_pipeline.assets/min_path_sum_dp_step5.png)

=== "<6>"
    ![min_path_sum_dp_step6](dp_solution_pipeline.assets/min_path_sum_dp_step6.png)

=== "<7>"
    ![min_path_sum_dp_step7](dp_solution_pipeline.assets/min_path_sum_dp_step7.png)

=== "<8>"
    ![min_path_sum_dp_step8](dp_solution_pipeline.assets/min_path_sum_dp_step8.png)

=== "<9>"
    ![min_path_sum_dp_step9](dp_solution_pipeline.assets/min_path_sum_dp_step9.png)

=== "<10>"
    ![min_path_sum_dp_step10](dp_solution_pipeline.assets/min_path_sum_dp_step10.png)

=== "<11>"
    ![min_path_sum_dp_step11](dp_solution_pipeline.assets/min_path_sum_dp_step11.png)

=== "<12>"
    ![min_path_sum_dp_step12](dp_solution_pipeline.assets/min_path_sum_dp_step12.png)

### Space optimization

Since each cell is only related to the cell to its left and the cell above it, we can implement the $dp$ table using a single-row array.

Please note that, since the array `dp` can only represent the state of one row, we cannot initialize the first-column states in advance; instead, we update them while traversing each row:

```src
[file]{min_path_sum}-[class]{}-[func]{min_path_sum_dp_comp}
```
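A hedged Python sketch of the space-optimized variant (names assumed from the placeholder), keeping only a single row of the $dp$ table:

```python
def min_path_sum_dp_comp(grid: list[list[int]]) -> int:
    """Space-optimized sketch: one row of dp, updated in place."""
    n, m = len(grid), len(grid[0])
    dp = [0] * m
    # Boundary condition: the first row
    dp[0] = grid[0][0]
    for j in range(1, m):
        dp[j] = dp[j - 1] + grid[0][j]
    for i in range(1, n):
        # First-column state of row i, updated on the fly
        dp[0] = dp[0] + grid[i][0]
        for j in range(1, m):
            # dp[j] still holds the previous row's value (cell above),
            # dp[j - 1] already holds this row's value (cell to the left)
            dp[j] = min(dp[j - 1], dp[j]) + grid[i][j]
    return dp[m - 1]
```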
en/docs/chapter_dynamic_programming/edit_distance_problem.md
# Edit distance problem

Edit distance, also known as Levenshtein distance, refers to the minimum number of modifications required to transform one string into another. It is commonly used in information retrieval and natural language processing to measure the similarity between two sequences.

!!! question

    Given two strings $s$ and $t$, return the minimum number of edits required to transform $s$ into $t$.

    You can perform three types of edits on a string: insert a character, delete a character, or replace a character with any other character.

As shown in the figure below, transforming `kitten` into `sitting` requires 3 edits, namely 2 replacements and 1 insertion; transforming `hello` into `algo` requires 3 steps, namely 2 replacements and 1 deletion.

![Example data of edit distance](edit_distance_problem.assets/edit_distance_example.png)

**The edit distance problem can naturally be explained with the decision tree model**. Strings correspond to tree nodes, and a round of decision (an edit operation) corresponds to an edge of the tree.

As shown in the figure below, with unrestricted operations, each node can derive many edges, each corresponding to one operation, meaning there are many possible paths to transform `hello` into `algo`.

From the perspective of the decision tree, the goal of this problem is to find the shortest path between the node `hello` and the node `algo`.

![Edit distance problem represented based on the decision tree model](edit_distance_problem.assets/edit_distance_decision_tree.png)

### Dynamic programming approach

**Step one: Think about each round of decisions, define the state, and thereby obtain the $dp$ table**

Each round of decisions involves performing one edit operation on string $s$.

We aim to gradually reduce the problem size during the edit process, which enables us to construct subproblems. Let the lengths of strings $s$ and $t$ be $n$ and $m$, respectively. We first consider the tail characters of the two strings, $s[n-1]$ and $t[m-1]$.

- If $s[n-1]$ and $t[m-1]$ are the same, we can skip them and directly consider $s[n-2]$ and $t[m-2]$.
- If $s[n-1]$ and $t[m-1]$ are different, we need to perform one edit on $s$ (insert, delete, or replace) so that the tail characters of the two strings match, allowing us to skip them and consider a smaller problem.

Thus, each round of decision (edit operation) on string $s$ changes the remaining characters of $s$ and $t$ to be matched. Therefore, the state is the $i$-th and $j$-th characters currently under consideration in $s$ and $t$, denoted as $[i, j]$.

State $[i, j]$ corresponds to the subproblem: **the minimum number of edits required to change the first $i$ characters of $s$ into the first $j$ characters of $t$**.

From this, we obtain a two-dimensional $dp$ table of size $(n+1) \times (m+1)$.
**Step two: Identify the optimal substructure and then derive the state transition equation**

Consider the subproblem $dp[i, j]$, whose corresponding tail characters of the two strings are $s[i-1]$ and $t[j-1]$. It can be divided into the three scenarios shown below.

1. Add $t[j-1]$ after $s[i-1]$; the remaining subproblem is then $dp[i, j-1]$.
2. Delete $s[i-1]$; the remaining subproblem is then $dp[i-1, j]$.
3. Replace $s[i-1]$ with $t[j-1]$; the remaining subproblem is then $dp[i-1, j-1]$.

![State transition of edit distance](edit_distance_problem.assets/edit_distance_state_transfer.png)

Based on the analysis above, we can determine the optimal substructure: the minimum number of edits for $dp[i, j]$ is the minimum among $dp[i, j-1]$, $dp[i-1, j]$, and $dp[i-1, j-1]$, plus $1$ for the edit step itself. The corresponding state transition equation is:

$$
dp[i, j] = \min(dp[i, j-1], dp[i-1, j], dp[i-1, j-1]) + 1
$$

Please note that **when $s[i-1]$ and $t[j-1]$ are the same, no edit is required for the current character**, in which case the state transition equation is:

$$
dp[i, j] = dp[i-1, j-1]
$$

**Step three: Determine the boundary conditions and the order of state transitions**

When both strings are empty, the number of edits is $0$, i.e., $dp[0, 0] = 0$. When $s$ is empty but $t$ is not, the minimum number of edits equals the length of $t$, that is, the first row $dp[0, j] = j$. When $s$ is not empty but $t$ is, the minimum number of edits equals the length of $s$, that is, the first column $dp[i, 0] = i$.

Observing the state transition equation, solving $dp[i, j]$ depends on the solutions to the left, above, and to the upper left, so a double loop can be used to traverse the entire $dp$ table in the correct order.

### Code implementation

```src
[file]{edit_distance}-[class]{}-[func]{edit_distance_dp}
```
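A minimal Python sketch of the three steps above (function name assumed from the placeholder):

```python
def edit_distance_dp(s: str, t: str) -> int:
    """DP sketch for edit distance, dp table of size (n+1) x (m+1)."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    # Boundary conditions: first column and first row
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                # Matching tail characters need no edit
                dp[i][j] = dp[i - 1][j - 1]
            else:
                # Minimum of insert, delete, replace, plus one edit
                dp[i][j] = min(dp[i][j - 1], dp[i - 1][j],
                               dp[i - 1][j - 1]) + 1
    return dp[n][m]
```

With the example above, `edit_distance_dp("hello", "algo")` returns $3$.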
|
||||||
|
|
||||||
|
As shown below, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.
|
||||||
|
|
||||||
|
=== "<1>"
|
||||||
|
![Dynamic programming process of edit distance](edit_distance_problem.assets/edit_distance_dp_step1.png)
|
||||||
|
|
||||||
|
=== "<2>"
|
||||||
|
![edit_distance_dp_step2](edit_distance_problem.assets/edit_distance_dp_step2.png)
|
||||||
|
|
||||||
|
=== "<3>"
|
||||||
|
![edit_distance_dp_step3](edit_distance_problem.assets/edit_distance_dp_step3.png)
|
||||||
|
|
||||||
|
=== "<4>"
|
||||||
|
![edit_distance_dp_step4](edit_distance_problem.assets/edit_distance_dp_step4.png)
|
||||||
|
|
||||||
|
=== "<5>"
|
||||||
|
![edit_distance_dp_step5](edit_distance_problem.assets/edit_distance_dp_step5.png)
|
||||||
|
|
||||||
|
=== "<6>"
|
||||||
|
![edit_distance_dp_step6](edit_distance_problem.assets/edit_distance_dp_step6.png)
|
||||||
|
|
||||||
|
=== "<7>"
|
||||||
|
![edit_distance_dp_step7](edit_distance_problem.assets/edit_distance_dp_step7.png)
|
||||||
|
|
||||||
|
=== "<8>"
|
||||||
|
![edit_distance_dp_step8](edit_distance_problem.assets/edit_distance_dp_step8.png)
|
||||||
|
|
||||||
|
=== "<9>"
|
||||||
|
![edit_distance_dp_step9](edit_distance_problem.assets/edit_distance_dp_step9.png)
|
||||||
|
|
||||||
|
=== "<10>"
|
||||||
|
![edit_distance_dp_step10](edit_distance_problem.assets/edit_distance_dp_step10.png)
|
||||||
|
|
||||||
|
=== "<11>"
|
||||||
|
![edit_distance_dp_step11](edit_distance_problem.assets/edit_distance_dp_step11.png)
|
||||||
|
|
||||||
|
=== "<12>"
|
||||||
|
![edit_distance_dp_step12](edit_distance_problem.assets/edit_distance_dp_step12.png)
|
||||||
|
|
||||||
|
=== "<13>"
|
||||||
|
![edit_distance_dp_step13](edit_distance_problem.assets/edit_distance_dp_step13.png)
|
||||||
|
|
||||||
|
=== "<14>"
|
||||||
|
![edit_distance_dp_step14](edit_distance_problem.assets/edit_distance_dp_step14.png)
|
||||||
|
|
||||||
|
=== "<15>"
|
||||||
|
![edit_distance_dp_step15](edit_distance_problem.assets/edit_distance_dp_step15.png)
|
||||||
|
|
||||||
|
### Space optimization

Since $dp[i, j]$ is derived from the solution above it $dp[i-1, j]$, the solution to its left $dp[i, j-1]$, and the solution to its upper left $dp[i-1, j-1]$, normal order traversal loses the upper-left solution $dp[i-1, j-1]$, while reverse order traversal cannot build $dp[i, j-1]$ in advance; therefore, neither traversal order is feasible.

For this reason, we can use a variable `leftup` to temporarily store the upper-left solution $dp[i-1, j-1]$, so that only the solutions to the left and above need to be considered. This situation is similar to the complete knapsack problem, allowing for forward traversal. The code is as follows:

```src
[file]{edit_distance}-[class]{}-[func]{edit_distance_dp_comp}
```
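As a concrete illustration of the `leftup` technique, here is a minimal Python sketch (the function name follows the source placeholder; this is not the book's official listing):

```python
def edit_distance_dp_comp(s: str, t: str) -> int:
    """Edit distance via a single rolling row plus a leftup variable"""
    n, m = len(s), len(t)
    dp = [0] * (m + 1)
    # First row: converting "" into t[0:j] takes j insertions
    for j in range(1, m + 1):
        dp[j] = j
    for i in range(1, n + 1):
        leftup = dp[0]  # temporarily stores dp[i-1, j-1]
        dp[0] = i       # converting s[0:i] into "" takes i deletions
        for j in range(1, m + 1):
            temp = dp[j]  # dp[i-1, j], saved before being overwritten
            if s[i - 1] == t[j - 1]:
                dp[j] = leftup  # no edit needed for this character
            else:
                # insert -> dp[j-1], delete -> dp[j], replace -> leftup
                dp[j] = min(dp[j - 1], dp[j], leftup) + 1
            leftup = temp
    return dp[m]
```

With forward traversal, `dp[j-1]` already holds the row-$i$ value while `dp[j]` still holds the row-$(i-1)$ value, exactly as in the complete knapsack case.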
9
en/docs/chapter_dynamic_programming/index.md
Normal file
@ -0,0 +1,9 @@
# Dynamic programming

![Dynamic programming](../assets/covers/chapter_dynamic_programming.jpg)

!!! abstract
    Streams merge into rivers, and rivers merge into the sea.

    Dynamic programming combines the solutions of small problems to solve bigger problems, step by step leading us to the solution.
@ -0,0 +1,110 @@

# Initial exploration of dynamic programming

<u>Dynamic programming</u> is an important algorithmic paradigm that decomposes a problem into a series of smaller subproblems and stores their solutions to avoid redundant computation, thereby significantly improving time efficiency.

In this section, we start with a classic problem, first presenting its brute force backtracking solution, observing the overlapping subproblems contained within it, and then gradually deriving a more efficient dynamic programming solution.

!!! question "Climbing stairs"

    Given a staircase with $n$ steps, where you can climb $1$ or $2$ steps at a time, how many different ways are there to reach the top?

As shown in the figure below, there are $3$ ways to reach the top of a $3$-step staircase.

![Number of ways to reach the 3rd step](intro_to_dynamic_programming.assets/climbing_stairs_example.png)

The goal of this problem is to determine the number of ways, **so consider using backtracking to exhaust all possibilities**. Specifically, imagine climbing the stairs as a multi-round choice process: starting from the ground, choose to go up $1$ or $2$ steps each round, add one to the count of ways upon reaching the top of the stairs, and prune the branch whenever it goes beyond the top. The code is as follows:

```src
[file]{climbing_stairs_backtrack}-[class]{}-[func]{climbing_stairs_backtrack}
```
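A minimal Python sketch of this backtracking process (the function name follows the source placeholder; it is an illustration, not the book's official listing):

```python
def climbing_stairs_backtrack(n: int) -> int:
    """Count the ways to reach step n by exhaustive backtracking"""
    res = [0]  # a list so the count is shared across recursive calls

    def backtrack(state: int) -> None:
        if state == n:       # reached the top: record one valid way
            res[0] += 1
            return
        for step in (1, 2):  # choose to climb 1 or 2 steps this round
            if state + step > n:  # pruning: never climb past the top
                continue
            backtrack(state + step)

    backtrack(0)
    return res[0]
```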

## Method 1: Brute force search

Backtracking algorithms do not explicitly decompose the problem; instead, they treat solving the problem as a series of decision steps, searching for all possible solutions through exploration and pruning.

We can try to analyze this problem from the perspective of decomposition. Let $dp[i]$ be the number of ways to reach the $i^{th}$ step; then $dp[i]$ is the original problem, and its subproblems include:

$$
dp[i-1], dp[i-2], \dots, dp[2], dp[1]
$$

Since each round can only advance $1$ or $2$ steps, when we stand on the $i^{th}$ step, the previous round must have ended on either the $(i-1)^{th}$ or the $(i-2)^{th}$ step. In other words, we can only step to the $i^{th}$ step from the $(i-1)^{th}$ or the $(i-2)^{th}$ step.

This leads to an important conclusion: **the number of ways to reach the $(i-1)^{th}$ step plus the number of ways to reach the $(i-2)^{th}$ step equals the number of ways to reach the $i^{th}$ step**. The formula is as follows:

$$
dp[i] = dp[i-1] + dp[i-2]
$$

This means that in the stair climbing problem, there is a recursive relationship between the subproblems: **the solution to the original problem can be constructed from the solutions to the subproblems**. The figure below shows this recursive relationship.

![Recursive relationship of solution counts](intro_to_dynamic_programming.assets/climbing_stairs_state_transfer.png)

We can obtain the brute force search solution from the recursive formula. Starting with $dp[n]$, **recursively decompose a larger problem into the sum of two smaller problems**, until reaching the smallest subproblems $dp[1]$ and $dp[2]$, whose solutions are known: $dp[1] = 1$ and $dp[2] = 2$, representing the $1$ and $2$ ways to climb to the first and second steps, respectively.

Observe the following code, which, like standard backtracking code, belongs to depth-first search but is more concise:

```src
[file]{climbing_stairs_dfs}-[class]{}-[func]{climbing_stairs_dfs}
```
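The recursion above can be sketched in a few lines of Python (an illustrative version of the source placeholder):

```python
def climbing_stairs_dfs(i: int) -> int:
    """Number of ways to reach step i via plain recursion (exponential time)"""
    if i == 1 or i == 2:  # smallest subproblems with known solutions
        return i
    # state transition: dp[i] = dp[i-1] + dp[i-2]
    return climbing_stairs_dfs(i - 1) + climbing_stairs_dfs(i - 2)
```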

The figure below shows the recursive tree formed by the brute force search. For the problem $dp[n]$, the depth of its recursive tree is $n$, and its time complexity is $O(2^n)$. Exponential order represents explosive growth: the program will enter a long wait if a relatively large $n$ is input.

![Recursive tree for climbing stairs](intro_to_dynamic_programming.assets/climbing_stairs_dfs_tree.png)

Observing the figure above, **the exponential time complexity is caused by "overlapping subproblems"**. For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, and $dp[8]$ is decomposed into $dp[7]$ and $dp[6]$; both contain the subproblem $dp[7]$.

Thus, subproblems contain even smaller overlapping subproblems, endlessly. The vast majority of computational resources are wasted on these overlapping subproblems.

## Method 2: Memoized search

To enhance algorithm efficiency, **we hope that all overlapping subproblems are calculated only once**. For this purpose, we declare an array `mem` to record the solution of each subproblem, and prune overlapping subproblems during the search process.

1. When $dp[i]$ is calculated for the first time, we record it in `mem[i]` for later use.
2. When $dp[i]$ needs to be calculated again, we can directly retrieve the result from `mem[i]`, thus avoiding redundant calculation of that subproblem.

The code is as follows:

```src
[file]{climbing_stairs_dfs_mem}-[class]{}-[func]{climbing_stairs_dfs_mem}
```
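A minimal Python sketch of memoized search (illustrative; `-1` is used here as the "not yet computed" marker):

```python
def climbing_stairs_dfs_mem(n: int) -> int:
    """Memoized search: each subproblem is solved at most once"""
    mem = [-1] * (n + 1)  # -1 marks "not yet computed"

    def dfs(i: int) -> int:
        if i == 1 or i == 2:
            return i
        if mem[i] != -1:   # overlapping subproblem already solved
            return mem[i]
        mem[i] = dfs(i - 1) + dfs(i - 2)
        return mem[i]

    return dfs(n)
```

Unlike the plain recursion, this version comfortably handles inputs such as $n = 30$, since only $O(n)$ subproblems are ever computed.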

Observe the figure below: **after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to $O(n)$**, which is a significant leap.

![Recursive tree with memoized search](intro_to_dynamic_programming.assets/climbing_stairs_dfs_memo_tree.png)

## Method 3: Dynamic programming

**Memoized search is a "top-down" method**: we start with the original problem (root node) and recursively decompose larger subproblems into smaller ones until the solutions to the smallest known subproblems (leaf nodes) are reached. Subsequently, by backtracking, we collect the solutions of the subproblems and construct the solution to the original problem.

On the contrary, **dynamic programming is a "bottom-up" method**: starting with the solutions to the smallest subproblems, it iteratively constructs the solutions to larger subproblems until the original problem is solved.

Since dynamic programming does not include a backtracking process, it can be implemented with simple loop iteration, without needing recursion. In the following code, we initialize an array `dp` to store the solutions to the subproblems; it serves the same recording function as the array `mem` in memoized search:

```src
[file]{climbing_stairs_dp}-[class]{}-[func]{climbing_stairs_dp}
```
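The bottom-up iteration can be sketched in Python as follows (illustrative version of the source placeholder):

```python
def climbing_stairs_dp(n: int) -> int:
    """Bottom-up dynamic programming over a dp table"""
    if n == 1 or n == 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2  # initial states
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # state transition equation
    return dp[n]
```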

The figure below simulates the execution process of the above code.

![Dynamic programming process for climbing stairs](intro_to_dynamic_programming.assets/climbing_stairs_dp.png)

Like the backtracking algorithm, dynamic programming also uses the concept of "states" to represent specific stages in problem solving, with each state corresponding to a subproblem and its local optimal solution. For example, the state of the climbing stairs problem is defined as the current step number $i$.

Based on the above content, we can summarize the commonly used terminology in dynamic programming.

- The array `dp` is referred to as the <u>DP table</u>, with $dp[i]$ representing the solution to the subproblem corresponding to state $i$.
- The states corresponding to the smallest subproblems (steps $1$ and $2$) are called <u>initial states</u>.
- The recursive formula $dp[i] = dp[i-1] + dp[i-2]$ is called the <u>state transition equation</u>.

## Space optimization

Observant readers may have noticed that **since $dp[i]$ is only related to $dp[i-1]$ and $dp[i-2]$, we do not need an array `dp` to store the solutions to all subproblems**; two variables are enough to carry the iteration forward. The code is as follows:

```src
[file]{climbing_stairs_dp}-[class]{}-[func]{climbing_stairs_dp_comp}
```
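A two-variable Python sketch of this rolling-variable version (illustrative):

```python
def climbing_stairs_dp_comp(n: int) -> int:
    """Space-optimized DP: two rolling variables instead of a dp array"""
    if n == 1 or n == 2:
        return n
    a, b = 1, 2  # hold dp[i-2] and dp[i-1]
    for _ in range(3, n + 1):
        a, b = b, a + b  # roll forward: dp[i] = dp[i-1] + dp[i-2]
    return b
```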

Observing the above code, since the space occupied by the array `dp` is eliminated, the space complexity is reduced from $O(n)$ to $O(1)$.

In dynamic programming problems, the current state is often related only to a limited number of previous states, allowing us to retain only the necessary states and save memory through this "dimension reduction". **This space optimization technique is known as "rolling variables" or "rolling array"**.
168
en/docs/chapter_dynamic_programming/knapsack_problem.md
Normal file
@ -0,0 +1,168 @@
# 0-1 Knapsack problem

The knapsack problem is an excellent introductory problem for dynamic programming and the most common type of problem in dynamic programming. It has many variants, such as the 0-1 knapsack problem, the complete knapsack problem, and the multiple knapsack problem.

In this section, we will first solve the most common 0-1 knapsack problem.

!!! question

    Given $n$ items, where the weight of the $i^{th}$ item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. Each item can be chosen at most once. What is the maximum value of the items that can be placed in the knapsack under the capacity limit?

Observe the figure below: since the item number $i$ starts counting from $1$ while the array index starts from $0$, the weight of item $i$ corresponds to $wgt[i-1]$ and its value corresponds to $val[i-1]$.

![Example data of the 0-1 knapsack](knapsack_problem.assets/knapsack_example.png)

We can consider the 0-1 knapsack problem as a process consisting of $n$ rounds of decisions, where for each item there are two decisions, not to put it in or to put it in; thus the problem fits the decision tree model.

The objective of this problem is to "maximize the value of the items that can be put in the knapsack under the capacity limit", so it is more likely a dynamic programming problem.

**First step: Think through each round of decisions, define the state, thereby obtaining the $dp$ table**

For each item, if it is not put into the knapsack, the capacity remains unchanged; if it is put in, the capacity is reduced. From this, the state definition is obtained: the current item number $i$ and the knapsack capacity $c$, denoted as $[i, c]$.

State $[i, c]$ corresponds to the subproblem: **the maximum value of the first $i$ items in a knapsack of capacity $c$**, denoted as $dp[i, c]$.

The solution we are looking for is $dp[n, cap]$, so we need a two-dimensional $dp$ table of size $(n+1) \times (cap+1)$.

**Second step: Identify the optimal substructure, then derive the state transition equation**

After making the decision for item $i$, what remains is the subproblem of decisions for the first $i-1$ items, which can be divided into two cases.

- **Not putting item $i$ in**: The knapsack capacity remains unchanged; the state changes to $[i-1, c]$.
- **Putting item $i$ in**: The knapsack capacity decreases by $wgt[i-1]$ and the value increases by $val[i-1]$; the state changes to $[i-1, c-wgt[i-1]]$.

The above analysis reveals the optimal substructure of this problem: **the maximum value $dp[i, c]$ is equal to the larger value of the two schemes, not putting item $i$ in and putting item $i$ in**. From this, the state transition equation can be derived:

$$
dp[i, c] = \max(dp[i-1, c], dp[i-1, c - wgt[i-1]] + val[i-1])
$$

It is important to note that if the current item's weight $wgt[i - 1]$ exceeds the remaining knapsack capacity $c$, then the only option is not to put it in the knapsack.

**Third step: Determine the boundary conditions and the order of state transitions**

When there are no items or the knapsack capacity is $0$, the maximum value is $0$, i.e., the first column $dp[i, 0]$ and the first row $dp[0, c]$ are both equal to $0$.

The current state $[i, c]$ transitions from the state directly above $[i-1, c]$ and the state to the upper left $[i-1, c-wgt[i-1]]$; thus, the entire $dp$ table can be traversed in normal order through two nested loops.

Following the above analysis, we will next implement the solutions in the order of brute force search, memoized search, and dynamic programming.

### Method one: Brute force search

The search code includes the following elements.

- **Recursive parameters**: State $[i, c]$.
- **Return value**: Solution to the subproblem, $dp[i, c]$.
- **Termination condition**: When the item number is out of bounds ($i = 0$) or the remaining capacity of the knapsack is $0$, terminate the recursion and return the value $0$.
- **Pruning**: If the current item's weight exceeds the remaining capacity of the knapsack, the only option is not to put it in the knapsack.

```src
[file]{knapsack}-[class]{}-[func]{knapsack_dfs}
```
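An illustrative Python sketch of this brute force search (the function name follows the source placeholder):

```python
def knapsack_dfs(wgt: list[int], val: list[int], i: int, c: int) -> int:
    """Max value using the first i items with remaining capacity c"""
    if i == 0 or c == 0:  # no items left or no capacity: value is 0
        return 0
    if wgt[i - 1] > c:    # pruning: item i does not fit, can only skip it
        return knapsack_dfs(wgt, val, i - 1, c)
    # decision: do not put item i in, or put it in
    no = knapsack_dfs(wgt, val, i - 1, c)
    yes = knapsack_dfs(wgt, val, i - 1, c - wgt[i - 1]) + val[i - 1]
    return max(no, yes)
```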

As shown in the figure below, since each item generates two search branches, not selecting and selecting, the time complexity is $O(2^n)$.

Observing the recursive tree, it is easy to see that there are overlapping subproblems, such as $dp[1, 10]$. When there are many items and the knapsack capacity is large, and especially when there are many items of the same weight, the number of overlapping subproblems increases significantly.

![The brute force search recursive tree of the 0-1 knapsack problem](knapsack_problem.assets/knapsack_dfs.png)

### Method two: Memoized search

To ensure that overlapping subproblems are calculated only once, we use a memoization list `mem` to record the solutions to subproblems, where `mem[i][c]` corresponds to $dp[i, c]$.

After introducing memoization, **the time complexity depends on the number of subproblems**, which is $O(n \times cap)$. The implementation code is as follows:

```src
[file]{knapsack}-[class]{}-[func]{knapsack_dfs_mem}
```
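A Python sketch of the memoized variant (illustrative; `mem` is passed in pre-filled with `-1` to mark uncomputed entries):

```python
def knapsack_dfs_mem(wgt: list[int], val: list[int],
                     mem: list[list[int]], i: int, c: int) -> int:
    """Memoized search: mem[i][c] caches dp[i, c]"""
    if i == 0 or c == 0:
        return 0
    if mem[i][c] != -1:  # subproblem already solved
        return mem[i][c]
    if wgt[i - 1] > c:   # pruning: item i does not fit
        res = knapsack_dfs_mem(wgt, val, mem, i - 1, c)
    else:
        res = max(
            knapsack_dfs_mem(wgt, val, mem, i - 1, c),
            knapsack_dfs_mem(wgt, val, mem, i - 1, c - wgt[i - 1]) + val[i - 1],
        )
    mem[i][c] = res
    return res
```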

The figure below shows the search branches that are pruned in memoized search.

![The memoized search recursive tree of the 0-1 knapsack problem](knapsack_problem.assets/knapsack_dfs_mem.png)

### Method three: Dynamic programming

Dynamic programming essentially involves filling the $dp$ table during the state transition; the code is shown below:

```src
[file]{knapsack}-[class]{}-[func]{knapsack_dp}
```
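The table-filling process can be sketched in Python as follows (illustrative version of the source placeholder):

```python
def knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """0-1 knapsack: bottom-up DP over an (n+1) x (cap+1) table"""
    n = len(wgt)
    dp = [[0] * (cap + 1) for _ in range(n + 1)]  # row 0 / column 0 stay 0
    for i in range(1, n + 1):
        for c in range(1, cap + 1):
            if wgt[i - 1] > c:
                dp[i][c] = dp[i - 1][c]  # item i does not fit
            else:
                dp[i][c] = max(dp[i - 1][c],
                               dp[i - 1][c - wgt[i - 1]] + val[i - 1])
    return dp[n][cap]
```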

As shown in the figures below, both the time complexity and the space complexity are determined by the size of the array `dp`, i.e., $O(n \times cap)$.

=== "<1>"
    ![The dynamic programming process of the 0-1 knapsack problem](knapsack_problem.assets/knapsack_dp_step1.png)

=== "<2>"
    ![knapsack_dp_step2](knapsack_problem.assets/knapsack_dp_step2.png)

=== "<3>"
    ![knapsack_dp_step3](knapsack_problem.assets/knapsack_dp_step3.png)

=== "<4>"
    ![knapsack_dp_step4](knapsack_problem.assets/knapsack_dp_step4.png)

=== "<5>"
    ![knapsack_dp_step5](knapsack_problem.assets/knapsack_dp_step5.png)

=== "<6>"
    ![knapsack_dp_step6](knapsack_problem.assets/knapsack_dp_step6.png)

=== "<7>"
    ![knapsack_dp_step7](knapsack_problem.assets/knapsack_dp_step7.png)

=== "<8>"
    ![knapsack_dp_step8](knapsack_problem.assets/knapsack_dp_step8.png)

=== "<9>"
    ![knapsack_dp_step9](knapsack_problem.assets/knapsack_dp_step9.png)

=== "<10>"
    ![knapsack_dp_step10](knapsack_problem.assets/knapsack_dp_step10.png)

=== "<11>"
    ![knapsack_dp_step11](knapsack_problem.assets/knapsack_dp_step11.png)

=== "<12>"
    ![knapsack_dp_step12](knapsack_problem.assets/knapsack_dp_step12.png)

=== "<13>"
    ![knapsack_dp_step13](knapsack_problem.assets/knapsack_dp_step13.png)

=== "<14>"
    ![knapsack_dp_step14](knapsack_problem.assets/knapsack_dp_step14.png)
### Space optimization

Since each state is only related to the states in the row above it, we can use two arrays and roll them forward, reducing the space complexity from $O(n \times cap)$ to $O(cap)$.

Thinking further, can we achieve space optimization with just one array? It can be observed that each state is transferred from the cell directly above or from the cell to its upper left. If there is only one array, when we start to traverse the $i^{th}$ row, that array still stores the states of row $i-1$.

- With normal order traversal, by the time we reach $dp[i, j]$, the upper-left values $dp[i-1, 1]$ ~ $dp[i-1, j-1]$ may have already been overwritten, so the correct state transition result cannot be obtained.
- With reverse order traversal, there is no overwriting problem, and the state transition is performed correctly.

The figures below show the transition process from row $i = 1$ to row $i = 2$ in a single array. Consider the differences between normal order traversal and reverse order traversal.

=== "<1>"
    ![The space-optimized dynamic programming process of the 0-1 knapsack](knapsack_problem.assets/knapsack_dp_comp_step1.png)

=== "<2>"
    ![knapsack_dp_comp_step2](knapsack_problem.assets/knapsack_dp_comp_step2.png)

=== "<3>"
    ![knapsack_dp_comp_step3](knapsack_problem.assets/knapsack_dp_comp_step3.png)

=== "<4>"
    ![knapsack_dp_comp_step4](knapsack_problem.assets/knapsack_dp_comp_step4.png)

=== "<5>"
    ![knapsack_dp_comp_step5](knapsack_problem.assets/knapsack_dp_comp_step5.png)

=== "<6>"
    ![knapsack_dp_comp_step6](knapsack_problem.assets/knapsack_dp_comp_step6.png)

In the code implementation, we only need to delete the first dimension $i$ of the array `dp` and change the inner loop to reverse traversal:

```src
[file]{knapsack}-[class]{}-[func]{knapsack_dp_comp}
```
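A single-array Python sketch with reverse inner traversal (illustrative):

```python
def knapsack_dp_comp(wgt: list[int], val: list[int], cap: int) -> int:
    """0-1 knapsack: space-optimized DP with one rolling array"""
    n = len(wgt)
    dp = [0] * (cap + 1)
    for i in range(1, n + 1):
        # reverse order, so dp[c - wgt[i-1]] still holds row i-1's value
        for c in range(cap, 0, -1):
            if wgt[i - 1] <= c:
                dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])
    return dp[cap]
```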
23
en/docs/chapter_dynamic_programming/summary.md
Normal file
@ -0,0 +1,23 @@
# Summary

- Dynamic programming decomposes problems and improves computational efficiency by storing the solutions of subproblems to avoid redundant computation.
- Without considering time, all dynamic programming problems can be solved with backtracking (brute force search), but the recursion tree contains many overlapping subproblems, resulting in very low efficiency. By introducing a memoization list, it is possible to store the solutions of all computed subproblems, ensuring that each overlapping subproblem is computed only once.
- Memoized search is a top-down recursive solution, whereas dynamic programming corresponds to a bottom-up iterative approach, akin to "filling out a table". Since the current state often depends only on certain local states, we can eliminate one dimension of the $dp$ table to reduce space complexity.
- Decomposition into subproblems is a universal algorithmic approach, with differing characteristics in divide and conquer, dynamic programming, and backtracking.
- Dynamic programming problems have three main characteristics: overlapping subproblems, optimal substructure, and no aftereffects.
- If the optimal solution of the original problem can be constructed from the optimal solutions of its subproblems, the problem has an optimal substructure.
- "No aftereffects" means that the future development of a state depends only on the current state and not on all the past states it has experienced. Many combinatorial optimization problems do not have this property and cannot be quickly solved using dynamic programming.

**Knapsack problem**

- The knapsack problem is one of the most typical dynamic programming problems, with variants including the 0-1 knapsack, the complete knapsack, and the multiple knapsack.
- The state of the 0-1 knapsack is defined as the maximum value of the first $i$ items in a knapsack of capacity $c$. Based on the two decisions, not to include or to include an item, the optimal substructure can be identified and the state transition equation constructed. In space optimization, since each state depends on the states directly above and to the upper left, the list should be traversed in reverse order to avoid overwriting the upper-left state.
- In the complete knapsack problem, there is no limit on the number of each kind of item that can be chosen, so the state transition for including an item differs from that of the 0-1 knapsack. Since the state depends on the states directly above and to the left, space optimization should use forward traversal.
- The coin change problem is a variant of the complete knapsack problem, shifting from seeking the "maximum" value to seeking the "minimum" number of coins, so the state transition equation changes $\max()$ to $\min()$. It also shifts from seeking solutions "not exceeding" the knapsack capacity to making up "exactly" the target amount, so $amt + 1$ is used to represent the invalid solution of "unable to make up the target amount".
- Coin Change Problem II shifts from seeking the "minimum number of coins" to seeking the "number of coin combinations", changing the state transition equation accordingly from $\min()$ to the summation operator.

**Edit distance problem**

- Edit distance (Levenshtein distance) measures the similarity between two strings; it is defined as the minimum number of editing steps needed to change one string into the other, where the editing operations are insertion, deletion, and replacement.
- The state for the edit distance problem is defined as the minimum number of editing steps needed to change the first $i$ characters of $s$ into the first $j$ characters of $t$. When $s[i] \ne t[j]$, there are three decisions, insert, delete, and replace, each with its corresponding remaining subproblem. From this, the optimal substructure can be identified and the state transition equation built. When $s[i] = t[j]$, no editing of the current character is necessary.
- In edit distance, the state depends on the states directly above, to the left, and to the upper left. Therefore, after space optimization, neither normal nor reverse order traversal can perform the state transitions correctly. To address this, we use a variable to temporarily store the upper-left state, making the situation equivalent to the complete knapsack problem and allowing forward traversal after space optimization.
@ -0,0 +1,207 @@

# Complete knapsack problem

In this section, we first solve another common knapsack problem, the complete knapsack, and then explore a special case of it, the coin change problem.

## Complete knapsack problem

!!! question

    Given $n$ items, where the weight of the $i^{th}$ item is $wgt[i-1]$ and its value is $val[i-1]$, and a knapsack with a capacity of $cap$. **Each item can be selected multiple times**. What is the maximum value of the items that can be put into the knapsack without exceeding its capacity? See the example below.

![Example data for the complete knapsack problem](unbounded_knapsack_problem.assets/unbounded_knapsack_example.png)

### Dynamic programming approach

The complete knapsack problem is very similar to the 0-1 knapsack problem, **the only difference being that there is no limit on the number of times an item can be chosen**.

- In the 0-1 knapsack problem, there is only one of each item, so after placing item $i$ into the knapsack, you can only choose from the previous $i-1$ items.
- In the complete knapsack problem, the quantity of each item is unlimited, so after placing item $i$ into the knapsack, **you can still choose from the first $i$ items**.

Under the rules of the complete knapsack problem, the state $[i, c]$ can change in two ways.

- **Not putting item $i$ in**: As in the 0-1 knapsack problem, transition to $[i-1, c]$.
- **Putting item $i$ in**: Unlike the 0-1 knapsack problem, transition to $[i, c-wgt[i-1]]$.

The state transition equation thus becomes:

$$
dp[i, c] = \max(dp[i-1, c], dp[i, c - wgt[i-1]] + val[i-1])
$$

### Code implementation

Comparing the code for the two problems, the state transition changes from $i-1$ to $i$; the rest is completely identical:

```src
[file]{unbounded_knapsack}-[class]{}-[func]{unbounded_knapsack_dp}
```
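An illustrative Python sketch; note that the "put item in" branch reads row $i$ rather than row $i-1$:

```python
def unbounded_knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:
    """Complete knapsack: each item may be chosen any number of times"""
    n = len(wgt)
    dp = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(1, cap + 1):
            if wgt[i - 1] > c:
                dp[i][c] = dp[i - 1][c]  # item i does not fit
            else:
                # dp[i][...] on the right: item i may be chosen again
                dp[i][c] = max(dp[i - 1][c],
                               dp[i][c - wgt[i - 1]] + val[i - 1])
    return dp[n][cap]
```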

### Space optimization

Since the current state comes from the states to its left and above, **the space-optimized solution should perform a forward traversal of each row in the $dp$ table**.

This traversal order is the opposite of that for the 0-1 knapsack. Please refer to the figures below to understand the difference.

=== "<1>"
    ![Dynamic programming process for the complete knapsack problem after space optimization](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step1.png)

=== "<2>"
    ![unbounded_knapsack_dp_comp_step2](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step2.png)

=== "<3>"
    ![unbounded_knapsack_dp_comp_step3](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step3.png)

=== "<4>"
    ![unbounded_knapsack_dp_comp_step4](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step4.png)

=== "<5>"
    ![unbounded_knapsack_dp_comp_step5](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step5.png)

=== "<6>"
    ![unbounded_knapsack_dp_comp_step6](unbounded_knapsack_problem.assets/unbounded_knapsack_dp_comp_step6.png)

The code implementation is quite simple: just remove the first dimension of the array `dp`:

```src
[file]{unbounded_knapsack}-[class]{}-[func]{unbounded_knapsack_dp_comp}
```
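A single-array Python sketch with forward inner traversal (illustrative; compare the loop direction with the 0-1 knapsack version):

```python
def unbounded_knapsack_dp_comp(wgt: list[int], val: list[int], cap: int) -> int:
    """Complete knapsack: space-optimized DP with one rolling array"""
    dp = [0] * (cap + 1)
    for i in range(1, len(wgt) + 1):
        # forward order: dp[c - wgt[i-1]] already holds row i's value,
        # which is exactly what "choosing item i again" requires
        for c in range(1, cap + 1):
            if wgt[i - 1] <= c:
                dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])
    return dp[cap]
```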
|
||||||
|
|
||||||
|
## Coin change problem
|
||||||
|
|
||||||
|
The knapsack problem is a representative of a large class of dynamic programming problems and has many variants, such as the coin change problem.
|
||||||
|
|
||||||
|
!!! question
|
||||||
|
|
||||||
|
Given $n$ types of coins, the denomination of the $i^{th}$ type of coin is $coins[i - 1]$, and the target amount is $amt$. **Each type of coin can be selected multiple times**. What is the minimum number of coins needed to make up the target amount? If it is impossible to make up the target amount, return $-1$. See the example below.
|
||||||
|
|
||||||
|
![Example data for the coin change problem](unbounded_knapsack_problem.assets/coin_change_example.png)
|
||||||
|
|
||||||
|
### Dynamic programming approach

**The coin change problem can be seen as a special case of the complete knapsack problem**, sharing the following similarities and differences.

- The two problems can be converted into each other: "item" corresponds to "coin", "item weight" corresponds to "coin denomination", and "backpack capacity" corresponds to "target amount".
- The optimization goals are opposite: the complete knapsack problem aims to maximize the value of items, while the coin change problem aims to minimize the number of coins.
- The complete knapsack problem seeks solutions "not exceeding" the backpack capacity, while the coin change problem seeks solutions that "exactly" make up the target amount.

**First step: Think through each round's decision-making, define the state, and thus derive the $dp$ table**

The state $[i, a]$ corresponds to the sub-problem: **the minimum number of coins that can make up the amount $a$ using the first $i$ types of coins**, denoted as $dp[i, a]$.

The two-dimensional $dp$ table is of size $(n+1) \times (amt+1)$.

**Second step: Identify the optimal substructure and derive the state transition equation**

This problem differs from the complete knapsack problem in two aspects of the state transition equation.

- This problem seeks the minimum, so the operator $\max()$ needs to be changed to $\min()$.
- The optimization is focused on the number of coins, so simply add $+1$ when a coin is chosen.

$$
dp[i, a] = \min(dp[i-1, a], dp[i, a - coins[i-1]] + 1)
$$

**Third step: Define boundary conditions and state transition order**

When the target amount is $0$, the minimum number of coins needed to make it up is $0$, so all $dp[i, 0]$ in the first column are $0$.

When there are no coins, **it is impossible to make up any amount $> 0$**, which is an invalid solution. To allow the $\min()$ function in the state transition equation to recognize and filter out invalid solutions, consider using $+\infty$ to represent them, i.e., set all $dp[0, a]$ in the first row to $+\infty$.

### Code implementation

Most programming languages do not provide a $+\infty$ constant; the closest substitute is the maximum value of the integer type `int`. However, this can lead to overflow: the $+1$ operation in the state transition equation may overflow.

For this reason, we use the number $amt + 1$ to represent an invalid solution, because any valid solution uses at most $amt$ coins (each denomination is at least $1$). Before returning the result, check whether $dp[n, amt]$ equals $amt + 1$; if so, return $-1$, indicating that the target amount cannot be made up. The code is as follows:

```src
[file]{coin_change}-[class]{}-[func]{coin_change_dp}
```

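As a rough illustration of the three steps above, a Python sketch of this solution might read as follows (the name, signature, and `MAX` sentinel constant mirror the discussion but are assumptions, not the book's listing):

```python
# Illustrative sketch only -- name and signature are assumed.
def coin_change_dp(coins: list[int], amt: int) -> int:
    n = len(coins)
    MAX = amt + 1  # sentinel for "unreachable": any real answer is <= amt
    dp = [[0] * (amt + 1) for _ in range(n + 1)]
    # First row: with no coins, every amount > 0 is unreachable.
    for a in range(1, amt + 1):
        dp[0][a] = MAX
    for i in range(1, n + 1):
        for a in range(1, amt + 1):
            if coins[i - 1] > a:
                dp[i][a] = dp[i - 1][a]  # coin i does not fit
            else:
                # Not choosing coin i vs. choosing one more copy of it.
                dp[i][a] = min(dp[i - 1][a], dp[i][a - coins[i - 1]] + 1)
    return dp[n][amt] if dp[n][amt] != MAX else -1
```

With `coins = [1, 2, 5]` and `amt = 11`, the sketch returns `3` (one possible selection is $5 + 5 + 1$); with `coins = [2]` and `amt = 3`, it returns `-1`.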
The following images show the dynamic programming process for the coin change problem, which is very similar to that of the complete knapsack problem.

=== "<1>"
|
||||||
|
![Dynamic programming process for the coin change problem](unbounded_knapsack_problem.assets/coin_change_dp_step1.png)
|
||||||
|
|
||||||
|
=== "<2>"
|
||||||
|
![coin_change_dp_step2](unbounded_knapsack_problem.assets/coin_change_dp_step2.png)
|
||||||
|
|
||||||
|
=== "<3>"
|
||||||
|
![coin_change_dp_step3](unbounded_knapsack_problem.assets/coin_change_dp_step3.png)
|
||||||
|
|
||||||
|
=== "<4>"
|
||||||
|
![coin_change_dp_step4](unbounded_knapsack_problem.assets/coin_change_dp_step4.png)
|
||||||
|
|
||||||
|
=== "<5>"
|
||||||
|
![coin_change_dp_step5](unbounded_knapsack_problem.assets/coin_change_dp_step5.png)
|
||||||
|
|
||||||
|
=== "<6>"
|
||||||
|
![coin_change_dp_step6](unbounded_knapsack_problem.assets/coin_change_dp_step6.png)
|
||||||
|
|
||||||
|
=== "<7>"
|
||||||
|
![coin_change_dp_step7](unbounded_knapsack_problem.assets/coin_change_dp_step7.png)
|
||||||
|
|
||||||
|
=== "<8>"
|
||||||
|
![coin_change_dp_step8](unbounded_knapsack_problem.assets/coin_change_dp_step8.png)
|
||||||
|
|
||||||
|
=== "<9>"
|
||||||
|
![coin_change_dp_step9](unbounded_knapsack_problem.assets/coin_change_dp_step9.png)
|
||||||
|
|
||||||
|
=== "<10>"
|
||||||
|
![coin_change_dp_step10](unbounded_knapsack_problem.assets/coin_change_dp_step10.png)
|
||||||
|
|
||||||
|
=== "<11>"
|
||||||
|
![coin_change_dp_step11](unbounded_knapsack_problem.assets/coin_change_dp_step11.png)
|
||||||
|
|
||||||
|
=== "<12>"
|
||||||
|
![coin_change_dp_step12](unbounded_knapsack_problem.assets/coin_change_dp_step12.png)
|
||||||
|
|
||||||
|
=== "<13>"
|
||||||
|
![coin_change_dp_step13](unbounded_knapsack_problem.assets/coin_change_dp_step13.png)
|
||||||
|
|
||||||
|
=== "<14>"
|
||||||
|
![coin_change_dp_step14](unbounded_knapsack_problem.assets/coin_change_dp_step14.png)
|
||||||
|
|
||||||
|
=== "<15>"
|
||||||
|
![coin_change_dp_step15](unbounded_knapsack_problem.assets/coin_change_dp_step15.png)
|
||||||
|
|
||||||
|
### Space optimization

The space optimization for the coin change problem is handled in the same way as for the complete knapsack problem:

```src
[file]{coin_change}-[class]{}-[func]{coin_change_dp_comp}
```

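A Python sketch of the one-dimensional version, for illustration only (name and signature assumed), replaces the first row's $+\infty$ initialization with the `amt + 1` sentinel directly:

```python
# Illustrative sketch only -- name and signature are assumed.
def coin_change_dp_comp(coins: list[int], amt: int) -> int:
    MAX = amt + 1  # sentinel for "unreachable"
    dp = [MAX] * (amt + 1)
    dp[0] = 0  # amount 0 needs zero coins
    for i in range(1, len(coins) + 1):
        # Forward traversal: a coin may be reused within the same round.
        for a in range(coins[i - 1], amt + 1):
            dp[a] = min(dp[a], dp[a - coins[i - 1]] + 1)
    return dp[amt] if dp[amt] != MAX else -1
```

It produces the same results as the two-dimensional version while using only $O(amt)$ space.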
## Coin change problem II

!!! question

    Given $n$ types of coins, where the denomination of the $i^{th}$ type of coin is $coins[i - 1]$ and the target amount is $amt$, **each type of coin can be selected multiple times**. **How many combinations of coins can make up the target amount?** See the example below.

![Example data for Coin Change Problem II](unbounded_knapsack_problem.assets/coin_change_ii_example.png)

### Dynamic programming approach

Compared to the previous problem, the goal of this problem is to determine the number of combinations, so the sub-problem becomes: **the number of combinations that can make up amount $a$ using the first $i$ types of coins**. The $dp$ table remains a two-dimensional matrix of size $(n+1) \times (amt + 1)$.

The number of combinations for the current state is the sum of the combinations from not selecting the current coin and from selecting it. The state transition equation is:

$$
dp[i, a] = dp[i-1, a] + dp[i, a - coins[i-1]]
$$

When the target amount is $0$, no coins are needed to make it up, so all $dp[i, 0]$ in the first column should be initialized to $1$. When there are no coins, it is impossible to make up any amount $> 0$, so all $dp[0, a]$ in the first row should be set to $0$.

### Code implementation

```src
[file]{coin_change_ii}-[class]{}-[func]{coin_change_ii_dp}
```

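As an illustration of the transition equation above, a Python sketch (name and signature assumed, not the book's listing) might look like this:

```python
# Illustrative sketch only -- name and signature are assumed.
def coin_change_ii_dp(coins: list[int], amt: int) -> int:
    n = len(coins)
    dp = [[0] * (amt + 1) for _ in range(n + 1)]
    # First column: amount 0 has exactly one combination (select nothing).
    for i in range(n + 1):
        dp[i][0] = 1
    for i in range(1, n + 1):
        for a in range(1, amt + 1):
            dp[i][a] = dp[i - 1][a]  # combinations without coin i
            if coins[i - 1] <= a:
                dp[i][a] += dp[i][a - coins[i - 1]]  # plus those using coin i
    return dp[n][amt]
```

With `coins = [1, 2, 5]` and `amt = 5`, the sketch returns `4`: the combinations are $5$, $2+2+1$, $2+1+1+1$, and $1+1+1+1+1$.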
### Space optimization

The space optimization approach is the same: just remove the coin dimension:

```src
[file]{coin_change_ii}-[class]{}-[func]{coin_change_ii_dp_comp}
```
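For illustration, the one-dimensional version can be sketched in Python as follows (name and signature assumed):

```python
# Illustrative sketch only -- name and signature are assumed.
def coin_change_ii_dp_comp(coins: list[int], amt: int) -> int:
    dp = [0] * (amt + 1)
    dp[0] = 1  # one way to make amount 0: select nothing
    for i in range(1, len(coins) + 1):
        # Forward traversal allows reusing coin i within the same round.
        for a in range(coins[i - 1], amt + 1):
            dp[a] += dp[a - coins[i - 1]]
    return dp[amt]
```

Because coins are processed one type at a time in the outer loop, each combination is counted once regardless of coin order.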