mirror of
https://github.com/krahets/hello-algo.git
synced 2024-12-26 01:36:29 +08:00
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"chapter_appendix/","title":"Chapter 16. \u00a0 Appendix","text":""},{"location":"chapter_appendix/#chapter-contents","title":"Chapter contents","text":"<ul> <li>16.1 \u00a0 Installation</li> <li>16.2 \u00a0 Contributing</li> <li>16.3 \u00a0 Terminology</li> </ul>"},{"location":"chapter_appendix/contribution/","title":"16.2 \u00a0 Contributing","text":"<p>Due to the limited abilities of the author, some omissions and errors are inevitable in this book. Please understand. If you discover any typos, broken links, missing content, textual ambiguities, unclear explanations, or unreasonable text structures, please assist us in making corrections to provide readers with better quality learning resources.</p> <p>The GitHub IDs of all contributors will be displayed on the repository, web, and PDF versions of the homepage of this book to thank them for their selfless contributions to the open-source community.</p> <p>The charm of open source</p> <p>The interval between two printings of a paper book is often long, making content updates very inconvenient.</p> <p>In this open-source book, however, the content update cycle is shortened to just a few days or even hours.</p>"},{"location":"chapter_appendix/contribution/#1-content-fine-tuning","title":"1. \u00a0 Content fine-tuning","text":"<p>As shown in Figure 16-3, there is an \"edit icon\" in the upper right corner of each page. You can follow these steps to modify text or code.</p> <ol> <li>Click the \"edit icon\". If prompted to \"fork this repository\", please agree to do so.</li> <li>Modify the Markdown source file content, check the accuracy of the content, and try to keep the formatting consistent.</li> <li>Fill in the modification description at the bottom of the page, then click the \"Propose file change\" button. 
After the page redirects, click the \"Create pull request\" button to initiate the pull request.</li> </ol> <p></p> <p> Figure 16-3 \u00a0 Edit page button </p> <p>Figures cannot be directly modified and require the creation of a new Issue or a comment to describe the problem. We will redraw and replace the figures as soon as possible.</p>"},{"location":"chapter_appendix/contribution/#2-content-creation","title":"2. \u00a0 Content creation","text":"<p>If you are interested in participating in this open-source project, including translating code into other programming languages or expanding article content, then the following Pull Request workflow needs to be implemented.</p> <ol> <li>Log in to GitHub and Fork the code repository of this book to your personal account.</li> <li>Go to your Forked repository web page and use the <code>git clone</code> command to clone the repository to your local machine.</li> <li>Create content locally and perform complete tests to verify the correctness of the code.</li> <li>Commit the changes made locally, then push them to the remote repository.</li> <li>Refresh the repository webpage and click the \"Create pull request\" button to initiate the pull request.</li> </ol>"},{"location":"chapter_appendix/contribution/#3-docker-deployment","title":"3. \u00a0 Docker deployment","text":"<p>In the <code>hello-algo</code> root directory, execute the following Docker script to access the project at <code>http://localhost:8000</code>:</p> <pre><code>docker-compose up -d\n</code></pre> <p>Use the following command to remove the deployment:</p> <pre><code>docker-compose down\n</code></pre>"},{"location":"chapter_appendix/installation/","title":"16.1 \u00a0 Installation","text":""},{"location":"chapter_appendix/installation/#1611-install-ide","title":"16.1.1 \u00a0 Install IDE","text":"<p>We recommend using the open-source, lightweight VS Code as your local Integrated Development Environment (IDE). 
Visit the VS Code official website and choose the version of VS Code appropriate for your operating system to download and install.</p> <p></p> <p> Figure 16-1 \u00a0 Download VS Code from the official website </p> <p>VS Code has a powerful extension ecosystem, supporting the execution and debugging of most programming languages. For example, after installing the \"Python Extension Pack,\" you can debug Python code. The installation steps are shown in Figure 16-2.</p> <p></p> <p> Figure 16-2 \u00a0 Install VS Code Extension Pack </p>"},{"location":"chapter_appendix/installation/#1612-install-language-environments","title":"16.1.2 \u00a0 Install language environments","text":""},{"location":"chapter_appendix/installation/#1-python-environment","title":"1. \u00a0 Python environment","text":"<ol> <li>Download and install Miniconda3, requiring Python 3.10 or newer.</li> <li>In the VS Code extension marketplace, search for <code>python</code> and install the Python Extension Pack.</li> <li>(Optional) Enter <code>pip install black</code> in the command line to install the code formatting tool.</li> </ol>"},{"location":"chapter_appendix/installation/#2-cc-environment","title":"2. \u00a0 C/C++ environment","text":"<ol> <li>Windows systems need to install MinGW (Configuration tutorial); MacOS comes with Clang, so no installation is necessary.</li> <li>In the VS Code extension marketplace, search for <code>c++</code> and install the C/C++ Extension Pack.</li> <li>(Optional) Open the Settings page, search for the <code>Clang_format_fallback Style</code> code formatting option, and set it to <code>{ BasedOnStyle: Microsoft, BreakBeforeBraces: Attach }</code>.</li> </ol>"},{"location":"chapter_appendix/installation/#3-java-environment","title":"3. 
\u00a0 Java environment","text":"<ol> <li>Download and install OpenJDK (version must be > JDK 9).</li> <li>In the VS Code extension marketplace, search for <code>java</code> and install the Extension Pack for Java.</li> </ol>"},{"location":"chapter_appendix/installation/#4-c-environment","title":"4. \u00a0 C# environment","text":"<ol> <li>Download and install .NET 8.0.</li> <li>In the VS Code extension marketplace, search for <code>C# Dev Kit</code> and install the C# Dev Kit (Configuration tutorial).</li> <li>You can also use Visual Studio (Installation tutorial).</li> </ol>"},{"location":"chapter_appendix/installation/#5-go-environment","title":"5. \u00a0 Go environment","text":"<ol> <li>Download and install Go.</li> <li>In the VS Code extension marketplace, search for <code>go</code> and install Go.</li> <li>Press <code>Ctrl + Shift + P</code> to call up the command bar, enter go, choose <code>Go: Install/Update Tools</code>, select all and install.</li> </ol>"},{"location":"chapter_appendix/installation/#6-swift-environment","title":"6. \u00a0 Swift environment","text":"<ol> <li>Download and install Swift.</li> <li>In the VS Code extension marketplace, search for <code>swift</code> and install Swift for Visual Studio Code.</li> </ol>"},{"location":"chapter_appendix/installation/#7-javascript-environment","title":"7. \u00a0 JavaScript environment","text":"<ol> <li>Download and install Node.js.</li> <li>(Optional) In the VS Code extension marketplace, search for <code>Prettier</code> and install the code formatting tool.</li> </ol>"},{"location":"chapter_appendix/installation/#8-typescript-environment","title":"8. 
\u00a0 TypeScript environment","text":"<ol> <li>Follow the same installation steps as the JavaScript environment.</li> <li>Install TypeScript Execute (tsx).</li> <li>In the VS Code extension marketplace, search for <code>typescript</code> and install Pretty TypeScript Errors.</li> </ol>"},{"location":"chapter_appendix/installation/#9-dart-environment","title":"9. \u00a0 Dart environment","text":"<ol> <li>Download and install Dart.</li> <li>In the VS Code extension marketplace, search for <code>dart</code> and install Dart.</li> </ol>"},{"location":"chapter_appendix/installation/#10-rust-environment","title":"10. \u00a0 Rust environment","text":"<ol> <li>Download and install Rust.</li> <li>In the VS Code extension marketplace, search for <code>rust</code> and install rust-analyzer.</li> </ol>"},{"location":"chapter_appendix/terminology/","title":"16.3 \u00a0 Glossary","text":"<p>Table 16-1 lists the important terms that appear in the book, and it is worth noting the following points.</p> <ul> <li>It is recommended to remember the English names of the terms to facilitate reading English literature.</li> <li>Some terms have different names in Simplified and Traditional Chinese.</li> </ul> <p> Table 16-1 \u00a0 Important Terms in Data Structures and Algorithms </p> English \u7b80\u4f53\u4e2d\u6587 \u7e41\u4f53\u4e2d\u6587 algorithm \u7b97\u6cd5 \u6f14\u7b97\u6cd5 data structure \u6570\u636e\u7ed3\u6784 \u8cc7\u6599\u7d50\u69cb code \u4ee3\u7801 \u7a0b\u5f0f\u78bc file \u6587\u4ef6 \u6a94\u6848 function \u51fd\u6570 \u51fd\u5f0f method \u65b9\u6cd5 \u65b9\u6cd5 variable \u53d8\u91cf \u8b8a\u6578 asymptotic complexity analysis \u6e10\u8fd1\u590d\u6742\u5ea6\u5206\u6790 \u6f38\u8fd1\u8907\u96dc\u5ea6\u5206\u6790 time complexity \u65f6\u95f4\u590d\u6742\u5ea6 \u6642\u9593\u8907\u96dc\u5ea6 space complexity \u7a7a\u95f4\u590d\u6742\u5ea6 \u7a7a\u9593\u8907\u96dc\u5ea6 loop \u5faa\u73af \u8ff4\u5708 iteration \u8fed\u4ee3 \u8fed\u4ee3 recursion \u9012\u5f52 \u905e\u8ff4 tail 
recursion \u5c3e\u9012\u5f52 \u5c3e\u905e\u8ff4 recursion tree \u9012\u5f52\u6811 \u905e\u8ff4\u6a39 big-\\(O\\) notation \u5927 \\(O\\) \u8bb0\u53f7 \u5927 \\(O\\) \u8a18\u865f asymptotic upper bound \u6e10\u8fd1\u4e0a\u754c \u6f38\u8fd1\u4e0a\u754c sign-magnitude \u539f\u7801 \u539f\u78bc 1\u2019s complement \u53cd\u7801 \u4e00\u88dc\u6578 2\u2019s complement \u8865\u7801 \u4e8c\u88dc\u6578 array \u6570\u7ec4 \u9663\u5217 index \u7d22\u5f15 \u7d22\u5f15 linked list \u94fe\u8868 \u93c8\u7d50\u4e32\u5217 linked list node, list node \u94fe\u8868\u8282\u70b9 \u93c8\u7d50\u4e32\u5217\u7bc0\u9ede head node \u5934\u8282\u70b9 \u982d\u7bc0\u9ede tail node \u5c3e\u8282\u70b9 \u5c3e\u7bc0\u9ede list \u5217\u8868 \u4e32\u5217 dynamic array \u52a8\u6001\u6570\u7ec4 \u52d5\u614b\u9663\u5217 hard disk \u786c\u76d8 \u786c\u789f random-access memory (RAM) \u5185\u5b58 \u8a18\u61b6\u9ad4 cache memory \u7f13\u5b58 \u5feb\u53d6 cache miss \u7f13\u5b58\u672a\u547d\u4e2d \u5feb\u53d6\u672a\u547d\u4e2d cache hit rate \u7f13\u5b58\u547d\u4e2d\u7387 \u5feb\u53d6\u547d\u4e2d\u7387 stack \u6808 \u5806\u758a top of the stack \u6808\u9876 \u5806\u758a\u9802 bottom of the stack \u6808\u5e95 \u5806\u758a\u5e95 queue \u961f\u5217 \u4f47\u5217 double-ended queue \u53cc\u5411\u961f\u5217 \u96d9\u5411\u4f47\u5217 front of the queue \u961f\u9996 \u4f47\u5217\u9996 rear of the queue \u961f\u5c3e \u4f47\u5217\u5c3e hash table \u54c8\u5e0c\u8868 \u96dc\u6e4a\u8868 hash set \u54c8\u5e0c\u96c6\u5408 \u96dc\u6e4a\u96c6\u5408 bucket \u6876 \u6876 hash function \u54c8\u5e0c\u51fd\u6570 \u96dc\u6e4a\u51fd\u5f0f hash collision \u54c8\u5e0c\u51b2\u7a81 \u96dc\u6e4a\u885d\u7a81 load factor \u8d1f\u8f7d\u56e0\u5b50 \u8ca0\u8f09\u56e0\u5b50 separate chaining \u94fe\u5f0f\u5730\u5740 \u93c8\u7d50\u4f4d\u5740 open addressing \u5f00\u653e\u5bfb\u5740 \u958b\u653e\u5b9a\u5740 linear probing \u7ebf\u6027\u63a2\u6d4b \u7dda\u6027\u63a2\u67e5 lazy deletion \u61d2\u5220\u9664 \u61f6\u522a\u9664 binary tree 
\u4e8c\u53c9\u6811 \u4e8c\u5143\u6a39 tree node \u6811\u8282\u70b9 \u6a39\u7bc0\u9ede left-child node \u5de6\u5b50\u8282\u70b9 \u5de6\u5b50\u7bc0\u9ede right-child node \u53f3\u5b50\u8282\u70b9 \u53f3\u5b50\u7bc0\u9ede parent node \u7236\u8282\u70b9 \u7236\u7bc0\u9ede left subtree \u5de6\u5b50\u6811 \u5de6\u5b50\u6a39 right subtree \u53f3\u5b50\u6811 \u53f3\u5b50\u6a39 root node \u6839\u8282\u70b9 \u6839\u7bc0\u9ede leaf node \u53f6\u8282\u70b9 \u8449\u7bc0\u9ede edge \u8fb9 \u908a level \u5c42 \u5c64 degree \u5ea6 \u5ea6 height \u9ad8\u5ea6 \u9ad8\u5ea6 depth \u6df1\u5ea6 \u6df1\u5ea6 perfect binary tree \u5b8c\u7f8e\u4e8c\u53c9\u6811 \u5b8c\u7f8e\u4e8c\u5143\u6a39 complete binary tree \u5b8c\u5168\u4e8c\u53c9\u6811 \u5b8c\u5168\u4e8c\u5143\u6a39 full binary tree \u5b8c\u6ee1\u4e8c\u53c9\u6811 \u5b8c\u6eff\u4e8c\u5143\u6a39 balanced binary tree \u5e73\u8861\u4e8c\u53c9\u6811 \u5e73\u8861\u4e8c\u5143\u6a39 binary search tree \u4e8c\u53c9\u641c\u7d22\u6811 \u4e8c\u5143\u641c\u5c0b\u6a39 AVL tree AVL \u6811 AVL \u6a39 red-black tree \u7ea2\u9ed1\u6811 \u7d05\u9ed1\u6a39 level-order traversal \u5c42\u5e8f\u904d\u5386 \u5c64\u5e8f\u8d70\u8a2a breadth-first traversal \u5e7f\u5ea6\u4f18\u5148\u904d\u5386 \u5ee3\u5ea6\u512a\u5148\u8d70\u8a2a depth-first traversal \u6df1\u5ea6\u4f18\u5148\u904d\u5386 \u6df1\u5ea6\u512a\u5148\u8d70\u8a2a binary search tree \u4e8c\u53c9\u641c\u7d22\u6811 \u4e8c\u5143\u641c\u5c0b\u6a39 balanced binary search tree \u5e73\u8861\u4e8c\u53c9\u641c\u7d22\u6811 \u5e73\u8861\u4e8c\u5143\u641c\u5c0b\u6a39 balance factor \u5e73\u8861\u56e0\u5b50 \u5e73\u8861\u56e0\u5b50 heap \u5806 \u5806\u7a4d max heap \u5927\u9876\u5806 \u5927\u9802\u5806\u7a4d min heap \u5c0f\u9876\u5806 \u5c0f\u9802\u5806\u7a4d priority queue \u4f18\u5148\u961f\u5217 \u512a\u5148\u4f47\u5217 heapify \u5806\u5316 \u5806\u7a4d\u5316 top-\\(k\\) problem Top-\\(k\\) \u95ee\u9898 Top-\\(k\\) \u554f\u984c graph \u56fe \u5716 vertex \u9876\u70b9 \u9802\u9ede undirected graph 
\u65e0\u5411\u56fe \u7121\u5411\u5716 directed graph \u6709\u5411\u56fe \u6709\u5411\u5716 connected graph \u8fde\u901a\u56fe \u9023\u901a\u5716 disconnected graph \u975e\u8fde\u901a\u56fe \u975e\u9023\u901a\u5716 weighted graph \u6709\u6743\u56fe \u6709\u6b0a\u5716 adjacency \u90bb\u63a5 \u9130\u63a5 path \u8def\u5f84 \u8def\u5f91 in-degree \u5165\u5ea6 \u5165\u5ea6 out-degree \u51fa\u5ea6 \u51fa\u5ea6 adjacency matrix \u90bb\u63a5\u77e9\u9635 \u9130\u63a5\u77e9\u9663 adjacency list \u90bb\u63a5\u8868 \u9130\u63a5\u8868 breadth-first search \u5e7f\u5ea6\u4f18\u5148\u641c\u7d22 \u5ee3\u5ea6\u512a\u5148\u641c\u5c0b depth-first search \u6df1\u5ea6\u4f18\u5148\u641c\u7d22 \u6df1\u5ea6\u512a\u5148\u641c\u5c0b binary search \u4e8c\u5206\u67e5\u627e \u4e8c\u5206\u641c\u5c0b searching algorithm \u641c\u7d22\u7b97\u6cd5 \u641c\u5c0b\u6f14\u7b97\u6cd5 sorting algorithm \u6392\u5e8f\u7b97\u6cd5 \u6392\u5e8f\u6f14\u7b97\u6cd5 selection sort \u9009\u62e9\u6392\u5e8f \u9078\u64c7\u6392\u5e8f bubble sort \u5192\u6ce1\u6392\u5e8f \u6ce1\u6cab\u6392\u5e8f insertion sort \u63d2\u5165\u6392\u5e8f \u63d2\u5165\u6392\u5e8f quick sort \u5feb\u901f\u6392\u5e8f \u5feb\u901f\u6392\u5e8f merge sort \u5f52\u5e76\u6392\u5e8f \u5408\u4f75\u6392\u5e8f heap sort \u5806\u6392\u5e8f \u5806\u7a4d\u6392\u5e8f bucket sort \u6876\u6392\u5e8f \u6876\u6392\u5e8f counting sort \u8ba1\u6570\u6392\u5e8f \u8a08\u6578\u6392\u5e8f radix sort \u57fa\u6570\u6392\u5e8f \u57fa\u6578\u6392\u5e8f divide and conquer \u5206\u6cbb \u5206\u6cbb hanota problem \u6c49\u8bfa\u5854\u95ee\u9898 \u6cb3\u5167\u5854\u554f\u984c backtracking algorithm \u56de\u6eaf\u7b97\u6cd5 \u56de\u6eaf\u6f14\u7b97\u6cd5 constraint \u7ea6\u675f \u7d04\u675f solution \u89e3 \u89e3 state \u72b6\u6001 \u72c0\u614b pruning \u526a\u679d \u526a\u679d permutations problem \u5168\u6392\u5217\u95ee\u9898 \u5168\u6392\u5217\u554f\u984c subset-sum problem \u5b50\u96c6\u548c\u95ee\u9898 \u5b50\u96c6\u5408\u554f\u984c \\(n\\)-queens problem \\(n\\) 
\u7687\u540e\u95ee\u9898 \\(n\\) \u7687\u540e\u554f\u984c dynamic programming \u52a8\u6001\u89c4\u5212 \u52d5\u614b\u898f\u5283 initial state \u521d\u59cb\u72b6\u6001 \u521d\u59cb\u72c0\u614b state-transition equation \u72b6\u6001\u8f6c\u79fb\u65b9\u7a0b \u72c0\u614b\u8f49\u79fb\u65b9\u7a0b knapsack problem \u80cc\u5305\u95ee\u9898 \u80cc\u5305\u554f\u984c edit distance problem \u7f16\u8f91\u8ddd\u79bb\u95ee\u9898 \u7de8\u8f2f\u8ddd\u96e2\u554f\u984c greedy algorithm \u8d2a\u5fc3\u7b97\u6cd5 \u8caa\u5a6a\u6f14\u7b97\u6cd5"},{"location":"chapter_array_and_linkedlist/","title":"Chapter 4. \u00a0 Arrays and linked lists","text":"<p>Abstract</p> <p>The world of data structures resembles a sturdy brick wall.</p> <p>In arrays, envision bricks snugly aligned, each resting seamlessly beside the next, creating a unified formation. Meanwhile, in linked lists, these bricks disperse freely, embraced by vines gracefully knitting connections between them.</p>"},{"location":"chapter_array_and_linkedlist/#chapter-contents","title":"Chapter contents","text":"<ul> <li>4.1 \u00a0 Array</li> <li>4.2 \u00a0 Linked list</li> <li>4.3 \u00a0 List</li> <li>4.4 \u00a0 Memory and cache *</li> <li>4.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_array_and_linkedlist/array/","title":"4.1 \u00a0 Array","text":"<p>An array is a linear data structure that operates as a lineup of similar items, stored together in a computer's memory in contiguous spaces. It's like a sequence that maintains organized storage. Each item in this lineup has its unique 'spot' known as an index. Please refer to Figure 4-1 to observe how arrays work and grasp these key terms.</p> <p></p> <p> Figure 4-1 \u00a0 Array definition and storage method </p>"},{"location":"chapter_array_and_linkedlist/array/#411-common-operations-on-arrays","title":"4.1.1 \u00a0 Common operations on arrays","text":""},{"location":"chapter_array_and_linkedlist/array/#1-initializing-arrays","title":"1. 
\u00a0 Initializing arrays","text":"<p>Arrays can be initialized in two ways depending on the needs: either without initial values or with specified initial values. When initial values are not specified, most programming languages will set the array elements to \\(0\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig array.py<pre><code># Initialize array\narr: list[int] = [0] * 5 # [ 0, 0, 0, 0, 0 ]\nnums: list[int] = [1, 3, 2, 5, 4]\n</code></pre> array.cpp<pre><code>/* Initialize array */\n// Stored on stack\nint arr[5];\nint nums[5] = { 1, 3, 2, 5, 4 };\n// Stored on heap (manual memory release needed)\nint* arr1 = new int[5];\nint* nums1 = new int[5] { 1, 3, 2, 5, 4 };\n</code></pre> array.java<pre><code>/* Initialize array */\nint[] arr = new int[5]; // { 0, 0, 0, 0, 0 }\nint[] nums = { 1, 3, 2, 5, 4 };\n</code></pre> array.cs<pre><code>/* Initialize array */\nint[] arr = new int[5]; // [ 0, 0, 0, 0, 0 ]\nint[] nums = [1, 3, 2, 5, 4];\n</code></pre> array.go<pre><code>/* Initialize array */\nvar arr [5]int\n// In Go, specifying the length ([5]int) denotes an array, while not specifying it ([]int) denotes a slice.\n// Since Go's arrays are designed to have compile-time fixed length, only constants can be used to specify the length.\n// For convenience in implementing the extend() method, the Slice will be considered as an Array here.\nnums := []int{1, 3, 2, 5, 4}\n</code></pre> array.swift<pre><code>/* Initialize array */\nlet arr = Array(repeating: 0, count: 5) // [0, 0, 0, 0, 0]\nlet nums = [1, 3, 2, 5, 4]\n</code></pre> array.js<pre><code>/* Initialize array */\nvar arr = new Array(5).fill(0);\nvar nums = [1, 3, 2, 5, 4];\n</code></pre> array.ts<pre><code>/* Initialize array */\nlet arr: number[] = new Array(5).fill(0);\nlet nums: number[] = [1, 3, 2, 5, 4];\n</code></pre> array.dart<pre><code>/* Initialize array */\nList<int> arr = List.filled(5, 0); // [0, 0, 0, 0, 0]\nList<int> nums = [1, 3, 2, 5, 4];\n</code></pre> array.rs<pre><code>/* Initialize array 
*/\nlet arr: [i32; 5] = [0; 5]; // [0, 0, 0, 0, 0]\nlet slice: &[i32] = &[0; 5];\n// In Rust, specifying the length ([i32; 5]) denotes an array, while not specifying it (&[i32]) denotes a slice.\n// Since Rust's arrays are designed to have compile-time fixed length, only constants can be used to specify the length.\n// Vectors are generally used as dynamic arrays in Rust.\n// For convenience in implementing the extend() method, the vector will be considered as an array here.\nlet nums: Vec<i32> = vec![1, 3, 2, 5, 4];\n</code></pre> array.c<pre><code>/* Initialize array */\nint arr[5] = { 0 }; // { 0, 0, 0, 0, 0 }\nint nums[5] = { 1, 3, 2, 5, 4 };\n</code></pre> array.kt<pre><code>\n</code></pre> array.zig<pre><code>// Initialize array\nvar arr = [_]i32{0} ** 5; // { 0, 0, 0, 0, 0 }\nvar nums = [_]i32{ 1, 3, 2, 5, 4 };\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#2-accessing-elements","title":"2. \u00a0 Accessing elements","text":"<p>Elements in an array are stored in contiguous memory spaces, making it simpler to compute each element's memory address. The formula shown in the Figure below aids in determining an element's memory address, utilizing the array's memory address (specifically, the first element's address) and the element's index. This computation streamlines direct access to the desired element.</p> <p></p> <p> Figure 4-2 \u00a0 Memory address calculation for array elements </p> <p>As observed in Figure 4-2, array indexing conventionally begins at \\(0\\). While this might appear counterintuitive, considering counting usually starts at \\(1\\), within the address calculation formula, an index is essentially an offset from the memory address. 
For the first element's address, this offset is \\(0\\), validating its index as \\(0\\).</p> <p>Accessing elements in an array is highly efficient, allowing us to randomly access any element in \\(O(1)\\) time.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def random_access(nums: list[int]) -> int:\n \"\"\"Random access to elements\"\"\"\n # Randomly select a number from the interval [0, len(nums)-1]\n random_index = random.randint(0, len(nums) - 1)\n # Retrieve and return a random element\n random_num = nums[random_index]\n return random_num\n</code></pre> array.cpp<pre><code>/* Random access to elements */\nint randomAccess(int *nums, int size) {\n // Randomly select a number in the range [0, size)\n int randomIndex = rand() % size;\n // Retrieve and return a random element\n int randomNum = nums[randomIndex];\n return randomNum;\n}\n</code></pre> array.java<pre><code>/* Random access to elements */\nint randomAccess(int[] nums) {\n // Randomly select a number in the interval [0, nums.length)\n int randomIndex = ThreadLocalRandom.current().nextInt(0, nums.length);\n // Retrieve and return a random element\n int randomNum = nums[randomIndex];\n return randomNum;\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{RandomAccess}\n</code></pre> array.go<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.swift<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.js<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.ts<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.dart<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.rs<pre><code>[class]{}-[func]{random_access}\n</code></pre> array.c<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.kt<pre><code>[class]{}-[func]{randomAccess}\n</code></pre> array.rb<pre><code>[class]{}-[func]{random_access}\n</code></pre> 
array.zig<pre><code>[class]{}-[func]{randomAccess}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#3-inserting-elements","title":"3. \u00a0 Inserting elements","text":"<p>Array elements are tightly packed in memory, with no space available to accommodate additional data between them. As illustrated in Figure 4-3, inserting an element in the middle of an array requires shifting all subsequent elements back by one position to create room for the new element.</p> <p></p> <p> Figure 4-3 \u00a0 Array element insertion example </p> <p>It's important to note that due to the fixed length of an array, inserting an element will unavoidably result in the loss of the last element in the array. Solutions to address this issue will be explored in the \"List\" chapter.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def insert(nums: list[int], num: int, index: int):\n \"\"\"Insert element num at `index`\"\"\"\n # Move all elements after `index` one position backward\n for i in range(len(nums) - 1, index, -1):\n nums[i] = nums[i - 1]\n # Assign num to the element at index\n nums[index] = num\n</code></pre> array.cpp<pre><code>/* Insert element num at `index` */\nvoid insert(int *nums, int size, int num, int index) {\n // Move all elements after `index` one position backward\n for (int i = size - 1; i > index; i--) {\n nums[i] = nums[i - 1];\n }\n // Assign num to the element at index\n nums[index] = num;\n}\n</code></pre> array.java<pre><code>/* Insert element num at `index` */\nvoid insert(int[] nums, int num, int index) {\n // Move all elements after `index` one position backward\n for (int i = nums.length - 1; i > index; i--) {\n nums[i] = nums[i - 1];\n }\n // Assign num to the element at index\n nums[index] = num;\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{Insert}\n</code></pre> array.go<pre><code>[class]{}-[func]{insert}\n</code></pre> array.swift<pre><code>[class]{}-[func]{insert}\n</code></pre> 
array.js<pre><code>[class]{}-[func]{insert}\n</code></pre> array.ts<pre><code>[class]{}-[func]{insert}\n</code></pre> array.dart<pre><code>[class]{}-[func]{insert}\n</code></pre> array.rs<pre><code>[class]{}-[func]{insert}\n</code></pre> array.c<pre><code>[class]{}-[func]{insert}\n</code></pre> array.kt<pre><code>[class]{}-[func]{insert}\n</code></pre> array.rb<pre><code>[class]{}-[func]{insert}\n</code></pre> array.zig<pre><code>[class]{}-[func]{insert}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#4-deleting-elements","title":"4. \u00a0 Deleting elements","text":"<p>Similarly, as depicted in Figure 4-4, to delete an element at index \\(i\\), all elements following index \\(i\\) must be moved forward by one position.</p> <p></p> <p> Figure 4-4 \u00a0 Array element deletion example </p> <p>Please note that after deletion, the former last element becomes \"meaningless,\" hence requiring no specific modification.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def remove(nums: list[int], index: int):\n \"\"\"Remove the element at `index`\"\"\"\n # Move all elements after `index` one position forward\n for i in range(index, len(nums) - 1):\n nums[i] = nums[i + 1]\n</code></pre> array.cpp<pre><code>/* Remove the element at `index` */\nvoid remove(int *nums, int size, int index) {\n // Move all elements after `index` one position forward\n for (int i = index; i < size - 1; i++) {\n nums[i] = nums[i + 1];\n }\n}\n</code></pre> array.java<pre><code>/* Remove the element at `index` */\nvoid remove(int[] nums, int index) {\n // Move all elements after `index` one position forward\n for (int i = index; i < nums.length - 1; i++) {\n nums[i] = nums[i + 1];\n }\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{Remove}\n</code></pre> array.go<pre><code>[class]{}-[func]{remove}\n</code></pre> array.swift<pre><code>[class]{}-[func]{remove}\n</code></pre> array.js<pre><code>[class]{}-[func]{remove}\n</code></pre> 
array.ts<pre><code>[class]{}-[func]{remove}\n</code></pre> array.dart<pre><code>[class]{}-[func]{remove}\n</code></pre> array.rs<pre><code>[class]{}-[func]{remove}\n</code></pre> array.c<pre><code>[class]{}-[func]{removeItem}\n</code></pre> array.kt<pre><code>[class]{}-[func]{remove}\n</code></pre> array.rb<pre><code>[class]{}-[func]{remove}\n</code></pre> array.zig<pre><code>[class]{}-[func]{remove}\n</code></pre> <p>In summary, the insertion and deletion operations in arrays present the following disadvantages:</p> <ul> <li>High time complexity: Both insertion and deletion in an array have an average time complexity of \\(O(n)\\), where \\(n\\) is the length of the array.</li> <li>Loss of elements: Due to the fixed length of arrays, elements that exceed the array's capacity are lost during insertion.</li> <li>Waste of memory: Initializing a longer array and utilizing only the front part results in \"meaningless\" end elements during insertion, leading to some wasted memory space.</li> </ul>"},{"location":"chapter_array_and_linkedlist/array/#5-traversing-arrays","title":"5. 
\u00a0 Traversing arrays","text":"<p>In most programming languages, we can traverse an array either by using indices or by directly iterating over each element:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def traverse(nums: list[int]):\n \"\"\"Traverse array\"\"\"\n count = 0\n # Traverse array by index\n for i in range(len(nums)):\n count += nums[i]\n # Traverse array elements\n for num in nums:\n count += num\n # Traverse both data index and elements\n for i, num in enumerate(nums):\n count += nums[i]\n count += num\n</code></pre> array.cpp<pre><code>/* Traverse array */\nvoid traverse(int *nums, int size) {\n int count = 0;\n // Traverse array by index\n for (int i = 0; i < size; i++) {\n count += nums[i];\n }\n}\n</code></pre> array.java<pre><code>/* Traverse array */\nvoid traverse(int[] nums) {\n int count = 0;\n // Traverse array by index\n for (int i = 0; i < nums.length; i++) {\n count += nums[i];\n }\n // Traverse array elements\n for (int num : nums) {\n count += num;\n }\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{Traverse}\n</code></pre> array.go<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.swift<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.js<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.ts<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.dart<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.rs<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.c<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.kt<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.rb<pre><code>[class]{}-[func]{traverse}\n</code></pre> array.zig<pre><code>[class]{}-[func]{traverse}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#6-finding-elements","title":"6. 
\u00a0 Finding elements","text":"<p>Locating a specific element within an array involves iterating through the array, checking each element to determine if it matches the desired value.</p> <p>Because arrays are linear data structures, this operation is commonly referred to as \"linear search.\"</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def find(nums: list[int], target: int) -> int:\n \"\"\"Search for a specified element in the array\"\"\"\n for i in range(len(nums)):\n if nums[i] == target:\n return i\n return -1\n</code></pre> array.cpp<pre><code>/* Search for a specified element in the array */\nint find(int *nums, int size, int target) {\n for (int i = 0; i < size; i++) {\n if (nums[i] == target)\n return i;\n }\n return -1;\n}\n</code></pre> array.java<pre><code>/* Search for a specified element in the array */\nint find(int[] nums, int target) {\n for (int i = 0; i < nums.length; i++) {\n if (nums[i] == target)\n return i;\n }\n return -1;\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{Find}\n</code></pre> array.go<pre><code>[class]{}-[func]{find}\n</code></pre> array.swift<pre><code>[class]{}-[func]{find}\n</code></pre> array.js<pre><code>[class]{}-[func]{find}\n</code></pre> array.ts<pre><code>[class]{}-[func]{find}\n</code></pre> array.dart<pre><code>[class]{}-[func]{find}\n</code></pre> array.rs<pre><code>[class]{}-[func]{find}\n</code></pre> array.c<pre><code>[class]{}-[func]{find}\n</code></pre> array.kt<pre><code>[class]{}-[func]{find}\n</code></pre> array.rb<pre><code>[class]{}-[func]{find}\n</code></pre> array.zig<pre><code>[class]{}-[func]{find}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#7-expanding-arrays","title":"7. \u00a0 Expanding arrays","text":"<p>In complex system environments, ensuring the availability of memory space after an array for safe capacity extension becomes challenging. 
Consequently, in most programming languages, the length of an array is immutable.</p> <p>To expand an array, it's necessary to create a larger array and then copy the elements from the original array. This operation has a time complexity of \\(O(n)\\) and can be time-consuming for large arrays. The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array.py<pre><code>def extend(nums: list[int], enlarge: int) -> list[int]:\n \"\"\"Extend array length\"\"\"\n # Initialize an extended length array\n res = [0] * (len(nums) + enlarge)\n # Copy all elements from the original array to the new array\n for i in range(len(nums)):\n res[i] = nums[i]\n # Return the new array after expansion\n return res\n</code></pre> array.cpp<pre><code>/* Extend array length */\nint *extend(int *nums, int size, int enlarge) {\n // Initialize an extended length array\n int *res = new int[size + enlarge];\n // Copy all elements from the original array to the new array\n for (int i = 0; i < size; i++) {\n res[i] = nums[i];\n }\n // Free memory\n delete[] nums;\n // Return the new array after expansion\n return res;\n}\n</code></pre> array.java<pre><code>/* Extend array length */\nint[] extend(int[] nums, int enlarge) {\n // Initialize an extended length array\n int[] res = new int[nums.length + enlarge];\n // Copy all elements from the original array to the new array\n for (int i = 0; i < nums.length; i++) {\n res[i] = nums[i];\n }\n // Return the new array after expansion\n return res;\n}\n</code></pre> array.cs<pre><code>[class]{array}-[func]{Extend}\n</code></pre> array.go<pre><code>[class]{}-[func]{extend}\n</code></pre> array.swift<pre><code>[class]{}-[func]{extend}\n</code></pre> array.js<pre><code>[class]{}-[func]{extend}\n</code></pre> array.ts<pre><code>[class]{}-[func]{extend}\n</code></pre> array.dart<pre><code>[class]{}-[func]{extend}\n</code></pre> array.rs<pre><code>[class]{}-[func]{extend}\n</code></pre> 
array.c<pre><code>[class]{}-[func]{extend}\n</code></pre> array.kt<pre><code>[class]{}-[func]{extend}\n</code></pre> array.rb<pre><code>[class]{}-[func]{extend}\n</code></pre> array.zig<pre><code>[class]{}-[func]{extend}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/array/#412-advantages-and-limitations-of-arrays","title":"4.1.2 \u00a0 Advantages and limitations of arrays","text":"<p>Arrays are stored in contiguous memory spaces and consist of elements of the same type. This approach provides substantial prior information that systems can leverage to optimize the efficiency of data structure operations.</p> <ul> <li>High space efficiency: Arrays allocate a contiguous block of memory for data, eliminating the need for additional structural overhead.</li> <li>Support for random access: Arrays allow \\(O(1)\\) time access to any element.</li> <li>Cache locality: When accessing array elements, the computer not only loads them but also caches the surrounding data, utilizing high-speed cache to enhance subsequent operation speeds.</li> </ul> <p>However, continuous space storage is a double-edged sword, with the following limitations:</p> <ul> <li>Low efficiency in insertion and deletion: As arrays accumulate many elements, inserting or deleting elements requires shifting a large number of elements.</li> <li>Fixed length: The length of an array is fixed after initialization. Expanding an array requires copying all data to a new array, incurring significant costs.</li> <li>Space wastage: If the allocated array size exceeds what is necessary, the extra space is wasted.</li> </ul>"},{"location":"chapter_array_and_linkedlist/array/#413-typical-applications-of-arrays","title":"4.1.3 \u00a0 Typical applications of arrays","text":"<p>Arrays are fundamental and widely used data structures. 
They are frequently used in various algorithms and in the implementation of complex data structures.</p> <ul> <li>Random access: Arrays are ideal for storing data when random sampling is required. By generating a random sequence based on indices, we can achieve random sampling efficiently.</li> <li>Sorting and searching: Arrays are the most commonly used data structure for sorting and searching algorithms. Techniques like quick sort, merge sort, binary search, etc., primarily operate on arrays.</li> <li>Lookup tables: Arrays serve as efficient lookup tables for quick element or relationship retrieval. For instance, mapping characters to ASCII codes becomes seamless by using the ASCII code values as indices and storing corresponding elements in the array.</li> <li>Machine learning: Within the domain of neural networks, arrays play a pivotal role in executing crucial linear algebra operations involving vectors, matrices, and tensors. Arrays serve as the primary and most extensively used data structure in neural network programming.</li> <li>Data structure implementation: Arrays serve as the building blocks for implementing various data structures like stacks, queues, hash tables, heaps, graphs, etc. For instance, the adjacency matrix representation of a graph is essentially a two-dimensional array.</li> </ul>"},{"location":"chapter_array_and_linkedlist/linked_list/","title":"4.2 \u00a0 Linked list","text":"<p>Memory space is a shared resource among all programs. In a complex system environment, available memory can be dispersed throughout the memory space. We understand that the memory allocated for an array must be continuous. However, for very large arrays, finding a sufficiently large contiguous memory space might be challenging. This is where the flexibility of linked lists becomes evident.</p> <p>A linked list is a linear data structure in which each element is a node object, and the nodes are interconnected through \"references\". 
These references hold the memory addresses of subsequent nodes, enabling navigation from one node to the next.</p> <p>The design of linked lists allows for their nodes to be distributed across memory locations without requiring contiguous memory addresses.</p> <p></p> <p> Figure 4-5 \u00a0 Linked list definition and storage method </p> <p>As shown in Figure 4-5, the basic building block of a linked list is the node object. Each node comprises two key components: the node's \"value\" and a \"reference\" to the next node.</p> <ul> <li>The first node in a linked list is the \"head node\", and the final one is the \"tail node\".</li> <li>The tail node points to \"null\", designated as <code>null</code> in Java, <code>nullptr</code> in C++, and <code>None</code> in Python.</li> <li>In languages that support pointers, like C, C++, Go, and Rust, this \"reference\" is typically implemented as a \"pointer\".</li> </ul> <p>As the code below illustrates, a <code>ListNode</code> in a linked list, besides holding a value, must also maintain an additional reference (or pointer). Therefore, a linked list occupies more memory space than an array when storing the same quantity of data.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>class ListNode:\n \"\"\"Linked list node class\"\"\"\n def __init__(self, val: int):\n self.val: int = val # Node value\n self.next: ListNode | None = None # Reference to the next node\n</code></pre> <pre><code>/* Linked list node structure */\nstruct ListNode {\n int val; // Node value\n ListNode *next; // Pointer to the next node\n ListNode(int x) : val(x), next(nullptr) {} // Constructor\n};\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode {\n int val; // Node value\n ListNode next; // Reference to the next node\n ListNode(int x) { val = x; } // Constructor\n}\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode(int x) { // Constructor\n int val = x; // Node value\n ListNode? 
next; // Reference to the next node\n}\n</code></pre> <pre><code>/* Linked list node structure */\ntype ListNode struct {\n Val int // Node value\n Next *ListNode // Pointer to the next node\n}\n\n// NewListNode Constructor, creates a new linked list\nfunc NewListNode(val int) *ListNode {\n return &ListNode{\n Val: val,\n Next: nil,\n }\n}\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode {\n var val: Int // Node value\n var next: ListNode? // Reference to the next node\n\n init(x: Int) { // Constructor\n val = x\n }\n}\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode {\n constructor(val, next) {\n this.val = (val === undefined ? 0 : val); // Node value\n this.next = (next === undefined ? null : next); // Reference to the next node\n }\n}\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode {\n val: number;\n next: ListNode | null;\n constructor(val?: number, next?: ListNode | null) {\n this.val = val === undefined ? 0 : val; // Node value\n this.next = next === undefined ? null : next; // Reference to the next node\n }\n}\n</code></pre> <pre><code>/* Linked list node class */\nclass ListNode {\n int val; // Node value\n ListNode? 
next; // Reference to the next node\n ListNode(this.val, [this.next]); // Constructor\n}\n</code></pre> <pre><code>use std::rc::Rc;\nuse std::cell::RefCell;\n/* Linked list node class */\n#[derive(Debug)]\nstruct ListNode {\n val: i32, // Node value\n next: Option<Rc<RefCell<ListNode>>>, // Pointer to the next node\n}\n</code></pre> <pre><code>/* Linked list node structure */\ntypedef struct ListNode {\n int val; // Node value\n struct ListNode *next; // Pointer to the next node\n} ListNode;\n\n/* Constructor */\nListNode *newListNode(int val) {\n ListNode *node;\n node = (ListNode *) malloc(sizeof(ListNode));\n node->val = val;\n node->next = NULL;\n return node;\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>// Linked list node class\npub fn ListNode(comptime T: type) type {\n return struct {\n const Self = @This();\n\n val: T = 0, // Node value\n next: ?*Self = null, // Pointer to the next node\n\n // Constructor\n pub fn init(self: *Self, x: i32) void {\n self.val = x;\n self.next = null;\n }\n };\n}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/linked_list/#421-common-operations-on-linked-lists","title":"4.2.1 \u00a0 Common operations on linked lists","text":""},{"location":"chapter_array_and_linkedlist/linked_list/#1-initializing-a-linked-list","title":"1. \u00a0 Initializing a linked list","text":"<p>Constructing a linked list is a two-step process: first, initializing each node object, and second, forming the reference links between the nodes. 
After initialization, we can traverse all nodes sequentially from the head node by following the <code>next</code> reference.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig linked_list.py<pre><code># Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4\n# Initialize each node\nn0 = ListNode(1)\nn1 = ListNode(3)\nn2 = ListNode(2)\nn3 = ListNode(5)\nn4 = ListNode(4)\n# Build references between nodes\nn0.next = n1\nn1.next = n2\nn2.next = n3\nn3.next = n4\n</code></pre> linked_list.cpp<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nListNode* n0 = new ListNode(1);\nListNode* n1 = new ListNode(3);\nListNode* n2 = new ListNode(2);\nListNode* n3 = new ListNode(5);\nListNode* n4 = new ListNode(4);\n// Build references between nodes\nn0->next = n1;\nn1->next = n2;\nn2->next = n3;\nn3->next = n4;\n</code></pre> linked_list.java<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nListNode n0 = new ListNode(1);\nListNode n1 = new ListNode(3);\nListNode n2 = new ListNode(2);\nListNode n3 = new ListNode(5);\nListNode n4 = new ListNode(4);\n// Build references between nodes\nn0.next = n1;\nn1.next = n2;\nn2.next = n3;\nn3.next = n4;\n</code></pre> linked_list.cs<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nListNode n0 = new(1);\nListNode n1 = new(3);\nListNode n2 = new(2);\nListNode n3 = new(5);\nListNode n4 = new(4);\n// Build references between nodes\nn0.next = n1;\nn1.next = n2;\nn2.next = n3;\nn3.next = n4;\n</code></pre> linked_list.go<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nn0 := NewListNode(1)\nn1 := NewListNode(3)\nn2 := NewListNode(2)\nn3 := NewListNode(5)\nn4 := NewListNode(4)\n// Build references between nodes\nn0.Next = n1\nn1.Next = n2\nn2.Next = n3\nn3.Next = n4\n</code></pre> linked_list.swift<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nlet n0 = ListNode(x: 
1)\nlet n1 = ListNode(x: 3)\nlet n2 = ListNode(x: 2)\nlet n3 = ListNode(x: 5)\nlet n4 = ListNode(x: 4)\n// Build references between nodes\nn0.next = n1\nn1.next = n2\nn2.next = n3\nn3.next = n4\n</code></pre> linked_list.js<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nconst n0 = new ListNode(1);\nconst n1 = new ListNode(3);\nconst n2 = new ListNode(2);\nconst n3 = new ListNode(5);\nconst n4 = new ListNode(4);\n// Build references between nodes\nn0.next = n1;\nn1.next = n2;\nn2.next = n3;\nn3.next = n4;\n</code></pre> linked_list.ts<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nconst n0 = new ListNode(1);\nconst n1 = new ListNode(3);\nconst n2 = new ListNode(2);\nconst n3 = new ListNode(5);\nconst n4 = new ListNode(4);\n// Build references between nodes\nn0.next = n1;\nn1.next = n2;\nn2.next = n3;\nn3.next = n4;\n</code></pre> linked_list.dart<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nListNode n0 = ListNode(1);\nListNode n1 = ListNode(3);\nListNode n2 = ListNode(2);\nListNode n3 = ListNode(5);\nListNode n4 = ListNode(4);\n// Build references between nodes\nn0.next = n1;\nn1.next = n2;\nn2.next = n3;\nn3.next = n4;\n</code></pre> linked_list.rs<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 4 */\n// Initialize each node\nlet n0 = Rc::new(RefCell::new(ListNode { val: 1, next: None }));\nlet n1 = Rc::new(RefCell::new(ListNode { val: 3, next: None }));\nlet n2 = Rc::new(RefCell::new(ListNode { val: 2, next: None }));\nlet n3 = Rc::new(RefCell::new(ListNode { val: 5, next: None }));\nlet n4 = Rc::new(RefCell::new(ListNode { val: 4, next: None }));\n\n// Build references between nodes\nn0.borrow_mut().next = Some(n1.clone());\nn1.borrow_mut().next = Some(n2.clone());\nn2.borrow_mut().next = Some(n3.clone());\nn3.borrow_mut().next = Some(n4.clone());\n</code></pre> linked_list.c<pre><code>/* Initialize linked list: 1 -> 3 -> 2 -> 5 -> 
4 */\n// Initialize each node\nListNode* n0 = newListNode(1);\nListNode* n1 = newListNode(3);\nListNode* n2 = newListNode(2);\nListNode* n3 = newListNode(5);\nListNode* n4 = newListNode(4);\n// Build references between nodes\nn0->next = n1;\nn1->next = n2;\nn2->next = n3;\nn3->next = n4;\n</code></pre> linked_list.kt<pre><code>\n</code></pre> linked_list.zig<pre><code>// Initialize linked list\n// Initialize each node\nvar n0 = inc.ListNode(i32){.val = 1};\nvar n1 = inc.ListNode(i32){.val = 3};\nvar n2 = inc.ListNode(i32){.val = 2};\nvar n3 = inc.ListNode(i32){.val = 5};\nvar n4 = inc.ListNode(i32){.val = 4};\n// Build references between nodes\nn0.next = &n1;\nn1.next = &n2;\nn2.next = &n3;\nn3.next = &n4;\n</code></pre> <p>An array is a single variable as a whole; for instance, the array <code>nums</code> includes elements like <code>nums[0]</code>, <code>nums[1]</code>, and so on, whereas a linked list is made up of several distinct node objects. We typically refer to a linked list by its head node; for example, the linked list in the previous code snippet is referred to as <code>n0</code>.</p>"},{"location":"chapter_array_and_linkedlist/linked_list/#2-inserting-nodes","title":"2. \u00a0 Inserting nodes","text":"<p>Inserting a node into a linked list is very easy. As shown in Figure 4-6, let's assume we aim to insert a new node <code>P</code> between two adjacent nodes <code>n0</code> and <code>n1</code>. 
This can be achieved by simply modifying two node references (pointers), with a time complexity of \\(O(1)\\).</p> <p>By comparison, inserting an element into an array has a time complexity of \\(O(n)\\), which becomes less efficient when dealing with large data volumes.</p> <p></p> <p> Figure 4-6 \u00a0 Linked list node insertion example </p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linked_list.py<pre><code>def insert(n0: ListNode, P: ListNode):\n \"\"\"Insert node P after node n0 in the linked list\"\"\"\n n1 = n0.next\n P.next = n1\n n0.next = P\n</code></pre> linked_list.cpp<pre><code>/* Insert node P after node n0 in the linked list */\nvoid insert(ListNode *n0, ListNode *P) {\n ListNode *n1 = n0->next;\n P->next = n1;\n n0->next = P;\n}\n</code></pre> linked_list.java<pre><code>/* Insert node P after node n0 in the linked list */\nvoid insert(ListNode n0, ListNode P) {\n ListNode n1 = n0.next;\n P.next = n1;\n n0.next = P;\n}\n</code></pre> linked_list.cs<pre><code>[class]{linked_list}-[func]{Insert}\n</code></pre> linked_list.go<pre><code>[class]{}-[func]{insertNode}\n</code></pre> linked_list.swift<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.js<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.ts<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.dart<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.rs<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.c<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.kt<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.rb<pre><code>[class]{}-[func]{insert}\n</code></pre> linked_list.zig<pre><code>[class]{}-[func]{insert}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/linked_list/#3-deleting-nodes","title":"3. 
\u00a0 Deleting nodes","text":"<p>As shown in Figure 4-7, deleting a node from a linked list is also very easy, involving only the modification of a single node's reference (pointer).</p> <p>It's important to note that even though node <code>P</code> continues to point to <code>n1</code> after being deleted, it becomes inaccessible during linked list traversal. This effectively means that <code>P</code> is no longer a part of the linked list.</p> <p></p> <p> Figure 4-7 \u00a0 Linked list node deletion </p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linked_list.py<pre><code>def remove(n0: ListNode):\n \"\"\"Remove the first node after node n0 in the linked list\"\"\"\n if not n0.next:\n return\n # n0 -> P -> n1\n P = n0.next\n n1 = P.next\n n0.next = n1\n</code></pre> linked_list.cpp<pre><code>/* Remove the first node after node n0 in the linked list */\nvoid remove(ListNode *n0) {\n if (n0->next == nullptr)\n return;\n // n0 -> P -> n1\n ListNode *P = n0->next;\n ListNode *n1 = P->next;\n n0->next = n1;\n // Free memory\n delete P;\n}\n</code></pre> linked_list.java<pre><code>/* Remove the first node after node n0 in the linked list */\nvoid remove(ListNode n0) {\n if (n0.next == null)\n return;\n // n0 -> P -> n1\n ListNode P = n0.next;\n ListNode n1 = P.next;\n n0.next = n1;\n}\n</code></pre> linked_list.cs<pre><code>[class]{linked_list}-[func]{Remove}\n</code></pre> linked_list.go<pre><code>[class]{}-[func]{removeItem}\n</code></pre> linked_list.swift<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.js<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.ts<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.dart<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.rs<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.c<pre><code>[class]{}-[func]{removeItem}\n</code></pre> linked_list.kt<pre><code>[class]{}-[func]{remove}\n</code></pre> linked_list.rb<pre><code>[class]{}-[func]{remove}\n</code></pre> 
linked_list.zig<pre><code>[class]{}-[func]{remove}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/linked_list/#4-accessing-nodes","title":"4. \u00a0 Accessing nodes","text":"<p>Accessing nodes in a linked list is less efficient. As previously mentioned, any element in an array can be accessed in \\(O(1)\\) time. In contrast, with a linked list, the program must start from the head node and traverse sequentially through the nodes until the desired node is found. In other words, to access the \\(i\\)-th node in a linked list, the program must iterate through \\(i - 1\\) nodes, resulting in a time complexity of \\(O(n)\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linked_list.py<pre><code>def access(head: ListNode, index: int) -> ListNode | None:\n \"\"\"Access the node at `index` in the linked list\"\"\"\n for _ in range(index):\n if not head:\n return None\n head = head.next\n return head\n</code></pre> linked_list.cpp<pre><code>/* Access the node at `index` in the linked list */\nListNode *access(ListNode *head, int index) {\n for (int i = 0; i < index; i++) {\n if (head == nullptr)\n return nullptr;\n head = head->next;\n }\n return head;\n}\n</code></pre> linked_list.java<pre><code>/* Access the node at `index` in the linked list */\nListNode access(ListNode head, int index) {\n for (int i = 0; i < index; i++) {\n if (head == null)\n return null;\n head = head.next;\n }\n return head;\n}\n</code></pre> linked_list.cs<pre><code>[class]{linked_list}-[func]{Access}\n</code></pre> linked_list.go<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.swift<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.js<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.ts<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.dart<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.rs<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.c<pre><code>[class]{}-[func]{access}\n</code></pre> 
linked_list.kt<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.rb<pre><code>[class]{}-[func]{access}\n</code></pre> linked_list.zig<pre><code>[class]{}-[func]{access}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/linked_list/#5-finding-nodes","title":"5. \u00a0 Finding nodes","text":"<p>Traverse the linked list to locate a node whose value matches <code>target</code>, and then output the index of that node within the linked list. This procedure is also an example of linear search. The corresponding code is provided below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linked_list.py<pre><code>def find(head: ListNode, target: int) -> int:\n \"\"\"Search for the first node with value target in the linked list\"\"\"\n index = 0\n while head:\n if head.val == target:\n return index\n head = head.next\n index += 1\n return -1\n</code></pre> linked_list.cpp<pre><code>/* Search for the first node with value target in the linked list */\nint find(ListNode *head, int target) {\n int index = 0;\n while (head != nullptr) {\n if (head->val == target)\n return index;\n head = head->next;\n index++;\n }\n return -1;\n}\n</code></pre> linked_list.java<pre><code>/* Search for the first node with value target in the linked list */\nint find(ListNode head, int target) {\n int index = 0;\n while (head != null) {\n if (head.val == target)\n return index;\n head = head.next;\n index++;\n }\n return -1;\n}\n</code></pre> linked_list.cs<pre><code>[class]{linked_list}-[func]{Find}\n</code></pre> linked_list.go<pre><code>[class]{}-[func]{findNode}\n</code></pre> linked_list.swift<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.js<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.ts<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.dart<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.rs<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.c<pre><code>[class]{}-[func]{find}\n</code></pre> 
linked_list.kt<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.rb<pre><code>[class]{}-[func]{find}\n</code></pre> linked_list.zig<pre><code>[class]{}-[func]{find}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/linked_list/#422-arrays-vs-linked-lists","title":"4.2.2 \u00a0 Arrays vs. linked lists","text":"<p>Table 4-1 summarizes the characteristics of arrays and linked lists, and it also compares their efficiencies in various operations. Because they utilize opposing storage strategies, their respective properties and operational efficiencies exhibit distinct contrasts.</p> <p> Table 4-1 \u00a0 Efficiency comparison of arrays and linked lists </p> Arrays Linked Lists Storage Contiguous Memory Space Dispersed Memory Space Capacity Expansion Fixed Length Flexible Expansion Memory Efficiency Less Memory per Element, Potential Space Wastage More Memory per Element Accessing Elements \\(O(1)\\) \\(O(n)\\) Adding Elements \\(O(n)\\) \\(O(1)\\) Deleting Elements \\(O(n)\\) \\(O(1)\\)"},{"location":"chapter_array_and_linkedlist/linked_list/#423-common-types-of-linked-lists","title":"4.2.3 \u00a0 Common types of linked lists","text":"<p>As shown in Figure 4-8, there are three common types of linked lists.</p> <ul> <li>Singly linked list: This is the standard linked list described earlier. Nodes in a singly linked list include a value and a reference to the next node. The first node is known as the head node, and the last node, which points to null (<code>None</code>), is the tail node.</li> <li>Circular linked list: This is formed when the tail node of a singly linked list points back to the head node, creating a loop. In a circular linked list, any node can function as the head node.</li> <li>Doubly linked list: In contrast to a singly linked list, a doubly linked list maintains references in two directions. Each node contains references (pointers) to both its successor (the next node) and predecessor (the previous node). 
Although doubly linked lists offer more flexibility for traversing in either direction, they also consume more memory space.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>class ListNode:\n \"\"\"Bidirectional linked list node class\"\"\"\n def __init__(self, val: int):\n self.val: int = val # Node value\n self.next: ListNode | None = None # Reference to the successor node\n self.prev: ListNode | None = None # Reference to a predecessor node\n</code></pre> <pre><code>/* Bidirectional linked list node structure */\nstruct ListNode {\n int val; // Node value\n ListNode *next; // Pointer to the successor node\n ListNode *prev; // Pointer to the predecessor node\n ListNode(int x) : val(x), next(nullptr), prev(nullptr) {} // Constructor\n};\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode {\n int val; // Node value\n ListNode next; // Reference to the next node\n ListNode prev; // Reference to the predecessor node\n ListNode(int x) { val = x; } // Constructor\n}\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode(int x) { // Constructor\n int val = x; // Node value\n ListNode next; // Reference to the next node\n ListNode prev; // Reference to the predecessor node\n}\n</code></pre> <pre><code>/* Bidirectional linked list node structure */\ntype DoublyListNode struct {\n Val int // Node value\n Next *DoublyListNode // Pointer to the successor node\n Prev *DoublyListNode // Pointer to the predecessor node\n}\n\n// NewDoublyListNode initialization\nfunc NewDoublyListNode(val int) *DoublyListNode {\n return &DoublyListNode{\n Val: val,\n Next: nil,\n Prev: nil,\n }\n}\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode {\n var val: Int // Node value\n var next: ListNode? // Reference to the next node\n var prev: ListNode? 
// Reference to the predecessor node\n\n init(x: Int) { // Constructor\n val = x\n }\n}\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode {\n constructor(val, next, prev) {\n this.val = val === undefined ? 0 : val; // Node value\n this.next = next === undefined ? null : next; // Reference to the successor node\n this.prev = prev === undefined ? null : prev; // Reference to the predecessor node\n }\n}\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode {\n val: number;\n next: ListNode | null;\n prev: ListNode | null;\n constructor(val?: number, next?: ListNode | null, prev?: ListNode | null) {\n this.val = val === undefined ? 0 : val; // Node value\n this.next = next === undefined ? null : next; // Reference to the successor node\n this.prev = prev === undefined ? null : prev; // Reference to the predecessor node\n }\n}\n</code></pre> <pre><code>/* Bidirectional linked list node class */\nclass ListNode {\n int val; // Node value\n ListNode next; // Reference to the next node\n ListNode prev; // Reference to the predecessor node\n ListNode(this.val, [this.next, this.prev]); // Constructor\n}\n</code></pre> <pre><code>use std::rc::Rc;\nuse std::cell::RefCell;\n\n/* Bidirectional linked list node type */\n#[derive(Debug)]\nstruct ListNode {\n val: i32, // Node value\n next: Option<Rc<RefCell<ListNode>>>, // Pointer to successor node\n prev: Option<Rc<RefCell<ListNode>>>, // Pointer to predecessor node\n}\n\n/* Constructors */\nimpl ListNode {\n fn new(val: i32) -> Self {\n ListNode {\n val,\n next: None,\n prev: None,\n }\n }\n}\n</code></pre> <pre><code>/* Bidirectional linked list node structure */\ntypedef struct ListNode {\n int val; // Node value\n struct ListNode *next; // Pointer to the successor node\n struct ListNode *prev; // Pointer to the predecessor node\n} ListNode;\n\n/* Constructors */\nListNode *newListNode(int val) {\n ListNode *node, *next;\n node = (ListNode *) 
malloc(sizeof(ListNode));\n node->val = val;\n node->next = NULL;\n node->prev = NULL;\n return node;\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>// Bidirectional linked list node class\npub fn ListNode(comptime T: type) type {\n return struct {\n const Self = @This();\n\n val: T = 0, // Node value\n next: ?*Self = null, // Pointer to the successor node\n prev: ?*Self = null, // Pointer to the predecessor node\n\n // Constructor\n pub fn init(self: *Self, x: i32) void {\n self.val = x;\n self.next = null;\n self.prev = null;\n }\n };\n}\n</code></pre> <p></p> <p> Figure 4-8 \u00a0 Common types of linked lists </p>"},{"location":"chapter_array_and_linkedlist/linked_list/#424-typical-applications-of-linked-lists","title":"4.2.4 \u00a0 Typical applications of linked lists","text":"<p>Singly linked lists are frequently utilized in implementing stacks, queues, hash tables, and graphs.</p> <ul> <li>Stacks and queues: In singly linked lists, if insertions and deletions occur at the same end, it behaves like a stack (last-in-first-out). Conversely, if insertions are at one end and deletions at the other, it functions like a queue (first-in-first-out).</li> <li>Hash tables: Linked lists are used in chaining, a popular method for resolving hash collisions. Here, all collided elements are grouped into a linked list.</li> <li>Graphs: Adjacency lists, a standard method for graph representation, associate each graph vertex with a linked list. This list contains elements that represent vertices connected to the corresponding vertex.</li> </ul> <p>Doubly linked lists are ideal for scenarios requiring rapid access to preceding and succeeding elements.</p> <ul> <li>Advanced data structures: In structures like red-black trees and B-trees, accessing a node's parent is essential. 
This is achieved by incorporating a reference to the parent node in each node, akin to a doubly linked list.</li> <li>Browser history: In web browsers, doubly linked lists facilitate navigating the history of visited pages when users click forward or back.</li> <li>LRU algorithm: Doubly linked lists are apt for Least Recently Used (LRU) cache eviction algorithms, enabling swift identification of the least recently used data and facilitating fast node addition and removal.</li> </ul> <p>Circular linked lists are ideal for applications that require periodic operations, such as resource scheduling in operating systems.</p> <ul> <li>Round-robin scheduling algorithm: In operating systems, the round-robin scheduling algorithm is a common CPU scheduling method, requiring cycling through a group of processes. Each process is assigned a time slice, and upon expiration, the CPU rotates to the next process. This cyclical operation can be efficiently realized using a circular linked list, allowing for a fair and time-shared system among all processes.</li> <li>Data buffers: Circular linked lists are also used in data buffers, like in audio and video players, where the data stream is divided into multiple buffer blocks arranged in a circular fashion for seamless playback.</li> </ul>"},{"location":"chapter_array_and_linkedlist/list/","title":"4.3 \u00a0 List","text":"<p>A list is an abstract data structure concept that represents an ordered collection of elements, supporting operations such as element access, modification, addition, deletion, and traversal, without requiring users to consider capacity limitations. 
Lists can be implemented based on linked lists or arrays.</p> <ul> <li>A linked list inherently serves as a list, supporting operations for adding, deleting, searching, and modifying elements, with the flexibility to dynamically adjust its size.</li> <li>Arrays also support these operations, but due to their immutable length, they can be considered as a list with a length limit.</li> </ul> <p>When implementing lists using arrays, the immutability of length reduces the practicality of the list. This is because predicting the amount of data to be stored in advance is often challenging, making it difficult to choose an appropriate list length. If the length is too small, it may not meet the requirements; if too large, it may waste memory space.</p> <p>To solve this problem, we can implement lists using a dynamic array. It inherits the advantages of arrays and can dynamically expand during program execution.</p> <p>In fact, many programming languages' standard libraries implement lists using dynamic arrays, such as Python's <code>list</code>, Java's <code>ArrayList</code>, C++'s <code>vector</code>, and C#'s <code>List</code>. In the following discussion, we will consider \"list\" and \"dynamic array\" as synonymous concepts.</p>"},{"location":"chapter_array_and_linkedlist/list/#431-common-list-operations","title":"4.3.1 \u00a0 Common list operations","text":""},{"location":"chapter_array_and_linkedlist/list/#1-initializing-a-list","title":"1. 
\u00a0 Initializing a list","text":"<p>We typically use two initialization methods: \"without initial values\" and \"with initial values\".</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Initialize list\n# Without initial values\nnums1: list[int] = []\n# With initial values\nnums: list[int] = [1, 3, 2, 5, 4]\n</code></pre> list.cpp<pre><code>/* Initialize list */\n// Note: in C++, vector plays the role of the list nums described here\n// Without initial values\nvector<int> nums1;\n// With initial values\nvector<int> nums = { 1, 3, 2, 5, 4 };\n</code></pre> list.java<pre><code>/* Initialize list */\n// Without initial values\nList<Integer> nums1 = new ArrayList<>();\n// With initial values (note: the element type must be the wrapper class Integer[], not the primitive int[])\nInteger[] numbers = new Integer[] { 1, 3, 2, 5, 4 };\nList<Integer> nums = new ArrayList<>(Arrays.asList(numbers));\n</code></pre> list.cs<pre><code>/* Initialize list */\n// Without initial values\nList<int> nums1 = [];\n// With initial values\nint[] numbers = [1, 3, 2, 5, 4];\nList<int> nums = [.. 
numbers];\n</code></pre> list_test.go<pre><code>/* Initialize list */\n// Without initial values\nnums1 := []int{}\n// With initial values\nnums := []int{1, 3, 2, 5, 4}\n</code></pre> list.swift<pre><code>/* Initialize list */\n// Without initial values\nlet nums1: [Int] = []\n// With initial values\nvar nums = [1, 3, 2, 5, 4]\n</code></pre> list.js<pre><code>/* Initialize list */\n// Without initial values\nconst nums1 = [];\n// With initial values\nconst nums = [1, 3, 2, 5, 4];\n</code></pre> list.ts<pre><code>/* Initialize list */\n// Without initial values\nconst nums1: number[] = [];\n// With initial values\nconst nums: number[] = [1, 3, 2, 5, 4];\n</code></pre> list.dart<pre><code>/* Initialize list */\n// Without initial values\nList<int> nums1 = [];\n// With initial values\nList<int> nums = [1, 3, 2, 5, 4];\n</code></pre> list.rs<pre><code>/* Initialize list */\n// Without initial values\nlet nums1: Vec<i32> = Vec::new();\n// With initial values\nlet nums: Vec<i32> = vec![1, 3, 2, 5, 4];\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Initialize list\nvar nums = std.ArrayList(i32).init(std.heap.page_allocator);\ndefer nums.deinit();\ntry nums.appendSlice(&[_]i32{ 1, 3, 2, 5, 4 });\n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#2-accessing-elements","title":"2. 
\u00a0 Accessing elements","text":"<p>Lists are essentially arrays, thus they can access and update elements in \\(O(1)\\) time, which is very efficient.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Access elements\nnum: int = nums[1] # Access the element at index 1\n\n# Update elements\nnums[1] = 0 # Update the element at index 1 to 0\n</code></pre> list.cpp<pre><code>/* Access elements */\nint num = nums[1]; // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list.java<pre><code>/* Access elements */\nint num = nums.get(1); // Access the element at index 1\n\n/* Update elements */\nnums.set(1, 0); // Update the element at index 1 to 0\n</code></pre> list.cs<pre><code>/* Access elements */\nint num = nums[1]; // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list_test.go<pre><code>/* Access elements */\nnum := nums[1] // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0 // Update the element at index 1 to 0\n</code></pre> list.swift<pre><code>/* Access elements */\nlet num = nums[1] // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0 // Update the element at index 1 to 0\n</code></pre> list.js<pre><code>/* Access elements */\nconst num = nums[1]; // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list.ts<pre><code>/* Access elements */\nconst num: number = nums[1]; // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list.dart<pre><code>/* Access elements */\nint num = nums[1]; // Access the element at index 1\n\n/* Update elements */\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list.rs<pre><code>/* Access elements */\nlet num: i32 = nums[1]; // Access the element at index 1\n/* Update elements 
*/\nnums[1] = 0; // Update the element at index 1 to 0\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Access elements\nvar num = nums.items[1]; // Access the element at index 1\n\n// Update elements\nnums.items[1] = 0; // Update the element at index 1 to 0 \n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#3-inserting-and-removing-elements","title":"3. \u00a0 Inserting and removing elements","text":"<p>Compared to arrays, lists offer more flexibility in adding and removing elements. While adding elements to the end of a list is an \\(O(1)\\) operation, the efficiency of inserting and removing elements elsewhere in the list remains the same as in arrays, with a time complexity of \\(O(n)\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Clear list\nnums.clear()\n\n# Append elements at the end\nnums.append(1)\nnums.append(3)\nnums.append(2)\nnums.append(5)\nnums.append(4)\n\n# Insert element in the middle\nnums.insert(3, 6) # Insert number 6 at index 3\n\n# Remove elements\nnums.pop(3) # Remove the element at index 3\n</code></pre> list.cpp<pre><code>/* Clear list */\nnums.clear();\n\n/* Append elements at the end */\nnums.push_back(1);\nnums.push_back(3);\nnums.push_back(2);\nnums.push_back(5);\nnums.push_back(4);\n\n/* Insert element in the middle */\nnums.insert(nums.begin() + 3, 6); // Insert number 6 at index 3\n\n/* Remove elements */\nnums.erase(nums.begin() + 3); // Remove the element at index 3\n</code></pre> list.java<pre><code>/* Clear list */\nnums.clear();\n\n/* Append elements at the end */\nnums.add(1);\nnums.add(3);\nnums.add(2);\nnums.add(5);\nnums.add(4);\n\n/* Insert element in the middle */\nnums.add(3, 6); // Insert number 6 at index 3\n\n/* Remove elements */\nnums.remove(3); // Remove the element at index 3\n</code></pre> list.cs<pre><code>/* Clear list */\nnums.Clear();\n\n/* Append elements at the end 
*/\nnums.Add(1);\nnums.Add(3);\nnums.Add(2);\nnums.Add(5);\nnums.Add(4);\n\n/* Insert element in the middle */\nnums.Insert(3, 6);\n\n/* Remove elements */\nnums.RemoveAt(3);\n</code></pre> list_test.go<pre><code>/* Clear list */\nnums = nil\n\n/* Append elements at the end */\nnums = append(nums, 1)\nnums = append(nums, 3)\nnums = append(nums, 2)\nnums = append(nums, 5)\nnums = append(nums, 4)\n\n/* Insert element in the middle */\nnums = append(nums[:3], append([]int{6}, nums[3:]...)...) // Insert number 6 at index 3\n\n/* Remove elements */\nnums = append(nums[:3], nums[4:]...) // Remove the element at index 3\n</code></pre> list.swift<pre><code>/* Clear list */\nnums.removeAll()\n\n/* Append elements at the end */\nnums.append(1)\nnums.append(3)\nnums.append(2)\nnums.append(5)\nnums.append(4)\n\n/* Insert element in the middle */\nnums.insert(6, at: 3) // Insert number 6 at index 3\n\n/* Remove elements */\nnums.remove(at: 3) // Remove the element at index 3\n</code></pre> list.js<pre><code>/* Clear list */\nnums.length = 0;\n\n/* Append elements at the end */\nnums.push(1);\nnums.push(3);\nnums.push(2);\nnums.push(5);\nnums.push(4);\n\n/* Insert element in the middle */\nnums.splice(3, 0, 6);\n\n/* Remove elements */\nnums.splice(3, 1);\n</code></pre> list.ts<pre><code>/* Clear list */\nnums.length = 0;\n\n/* Append elements at the end */\nnums.push(1);\nnums.push(3);\nnums.push(2);\nnums.push(5);\nnums.push(4);\n\n/* Insert element in the middle */\nnums.splice(3, 0, 6);\n\n/* Remove elements */\nnums.splice(3, 1);\n</code></pre> list.dart<pre><code>/* Clear list */\nnums.clear();\n\n/* Append elements at the end */\nnums.add(1);\nnums.add(3);\nnums.add(2);\nnums.add(5);\nnums.add(4);\n\n/* Insert element in the middle */\nnums.insert(3, 6); // Insert number 6 at index 3\n\n/* Remove elements */\nnums.removeAt(3); // Remove the element at index 3\n</code></pre> list.rs<pre><code>/* Clear list */\nnums.clear();\n\n/* Append elements at the end 
*/\nnums.push(1);\nnums.push(3);\nnums.push(2);\nnums.push(5);\nnums.push(4);\n\n/* Insert element in the middle */\nnums.insert(3, 6); // Insert number 6 at index 3\n\n/* Remove elements */\nnums.remove(3); // Remove the element at index 3\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Clear list\nnums.clearRetainingCapacity();\n\n// Append elements at the end\ntry nums.append(1);\ntry nums.append(3);\ntry nums.append(2);\ntry nums.append(5);\ntry nums.append(4);\n\n// Insert element in the middle\ntry nums.insert(3, 6); // Insert number 6 at index 3\n\n// Remove elements\n_ = nums.orderedRemove(3); // Remove the element at index 3\n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#4-iterating-the-list","title":"4. \u00a0 Iterating the list","text":"<p>Similar to arrays, lists can be iterated either by using indices or by directly iterating through each element.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Iterate through the list by index\ncount = 0\nfor i in range(len(nums)):\n count += nums[i]\n\n# Iterate directly through list elements\nfor num in nums:\n count += num\n</code></pre> list.cpp<pre><code>/* Iterate through the list by index */\nint count = 0;\nfor (int i = 0; i < nums.size(); i++) {\n count += nums[i];\n}\n\n/* Iterate directly through list elements */\ncount = 0;\nfor (int num : nums) {\n count += num;\n}\n</code></pre> list.java<pre><code>/* Iterate through the list by index */\nint count = 0;\nfor (int i = 0; i < nums.size(); i++) {\n count += nums.get(i);\n}\n\n/* Iterate directly through list elements */\nfor (int num : nums) {\n count += num;\n}\n</code></pre> list.cs<pre><code>/* Iterate through the list by index */\nint count = 0;\nfor (int i = 0; i < nums.Count; i++) {\n count += nums[i];\n}\n\n/* Iterate directly through list elements */\ncount = 0;\nforeach (int num in nums) {\n count += 
num;\n}\n</code></pre> list_test.go<pre><code>/* Iterate through the list by index */\ncount := 0\nfor i := 0; i < len(nums); i++ {\n count += nums[i]\n}\n\n/* Iterate directly through list elements */\ncount = 0\nfor _, num := range nums {\n count += num\n}\n</code></pre> list.swift<pre><code>/* Iterate through the list by index */\nvar count = 0\nfor i in nums.indices {\n count += nums[i]\n}\n\n/* Iterate directly through list elements */\ncount = 0\nfor num in nums {\n count += num\n}\n</code></pre> list.js<pre><code>/* Iterate through the list by index */\nlet count = 0;\nfor (let i = 0; i < nums.length; i++) {\n count += nums[i];\n}\n\n/* Iterate directly through list elements */\ncount = 0;\nfor (const num of nums) {\n count += num;\n}\n</code></pre> list.ts<pre><code>/* Iterate through the list by index */\nlet count = 0;\nfor (let i = 0; i < nums.length; i++) {\n count += nums[i];\n}\n\n/* Iterate directly through list elements */\ncount = 0;\nfor (const num of nums) {\n count += num;\n}\n</code></pre> list.dart<pre><code>/* Iterate through the list by index */\nint count = 0;\nfor (var i = 0; i < nums.length; i++) {\n count += nums[i];\n}\n\n/* Iterate directly through list elements */\ncount = 0;\nfor (var num in nums) {\n count += num;\n}\n</code></pre> list.rs<pre><code>// Iterate through the list by index\nlet mut _count = 0;\nfor i in 0..nums.len() {\n _count += nums[i];\n}\n\n// Iterate directly through list elements\n_count = 0;\nfor num in &nums {\n _count += num;\n}\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Iterate through the list by index\nvar count: i32 = 0;\nvar i: usize = 0;\nwhile (i < nums.items.len) : (i += 1) {\n count += nums.items[i];\n}\n\n// Iterate directly through list elements\ncount = 0;\nfor (nums.items) |num| {\n count += num;\n}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#5-concatenating-lists","title":"5. 
\u00a0 Concatenating lists","text":"<p>Given a new list <code>nums1</code>, we can append it to the end of the original list.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Concatenate two lists\nnums1: list[int] = [6, 8, 7, 10, 9]\nnums += nums1 # Concatenate nums1 to the end of nums\n</code></pre> list.cpp<pre><code>/* Concatenate two lists */\nvector<int> nums1 = { 6, 8, 7, 10, 9 };\n// Concatenate nums1 to the end of nums\nnums.insert(nums.end(), nums1.begin(), nums1.end());\n</code></pre> list.java<pre><code>/* Concatenate two lists */\nList<Integer> nums1 = new ArrayList<>(Arrays.asList(new Integer[] { 6, 8, 7, 10, 9 }));\nnums.addAll(nums1); // Concatenate nums1 to the end of nums\n</code></pre> list.cs<pre><code>/* Concatenate two lists */\nList<int> nums1 = [6, 8, 7, 10, 9];\nnums.AddRange(nums1); // Concatenate nums1 to the end of nums\n</code></pre> list_test.go<pre><code>/* Concatenate two lists */\nnums1 := []int{6, 8, 7, 10, 9}\nnums = append(nums, nums1...) 
// Concatenate nums1 to the end of nums\n</code></pre> list.swift<pre><code>/* Concatenate two lists */\nlet nums1 = [6, 8, 7, 10, 9]\nnums.append(contentsOf: nums1) // Concatenate nums1 to the end of nums\n</code></pre> list.js<pre><code>/* Concatenate two lists */\nconst nums1 = [6, 8, 7, 10, 9];\nnums.push(...nums1); // Concatenate nums1 to the end of nums\n</code></pre> list.ts<pre><code>/* Concatenate two lists */\nconst nums1: number[] = [6, 8, 7, 10, 9];\nnums.push(...nums1); // Concatenate nums1 to the end of nums\n</code></pre> list.dart<pre><code>/* Concatenate two lists */\nList<int> nums1 = [6, 8, 7, 10, 9];\nnums.addAll(nums1); // Concatenate nums1 to the end of nums\n</code></pre> list.rs<pre><code>/* Concatenate two lists */\nlet nums1: Vec<i32> = vec![6, 8, 7, 10, 9];\nnums.extend(nums1);\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Concatenate two lists\nvar nums1 = std.ArrayList(i32).init(std.heap.page_allocator);\ndefer nums1.deinit();\ntry nums1.appendSlice(&[_]i32{ 6, 8, 7, 10, 9 });\ntry nums.insertSlice(nums.items.len, nums1.items); // Concatenate nums1 to the end of nums\n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#6-sorting-the-list","title":"6. 
\u00a0 Sorting the list","text":"<p>Once the list is sorted, we can employ algorithms commonly used in array-related algorithm problems, such as \"binary search\" and \"two-pointer\" algorithms.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig list.py<pre><code># Sort the list\nnums.sort() # After sorting, the list elements are in ascending order\n</code></pre> list.cpp<pre><code>/* Sort the list */\nsort(nums.begin(), nums.end()); // After sorting, the list elements are in ascending order\n</code></pre> list.java<pre><code>/* Sort the list */\nCollections.sort(nums); // After sorting, the list elements are in ascending order\n</code></pre> list.cs<pre><code>/* Sort the list */\nnums.Sort(); // After sorting, the list elements are in ascending order\n</code></pre> list_test.go<pre><code>/* Sort the list */\nsort.Ints(nums) // After sorting, the list elements are in ascending order\n</code></pre> list.swift<pre><code>/* Sort the list */\nnums.sort() // After sorting, the list elements are in ascending order\n</code></pre> list.js<pre><code>/* Sort the list */ \nnums.sort((a, b) => a - b); // After sorting, the list elements are in ascending order\n</code></pre> list.ts<pre><code>/* Sort the list */\nnums.sort((a, b) => a - b); // After sorting, the list elements are in ascending order\n</code></pre> list.dart<pre><code>/* Sort the list */\nnums.sort(); // After sorting, the list elements are in ascending order\n</code></pre> list.rs<pre><code>/* Sort the list */\nnums.sort(); // After sorting, the list elements are in ascending order\n</code></pre> list.c<pre><code>// C does not provide built-in dynamic arrays\n</code></pre> list.kt<pre><code>\n</code></pre> list.zig<pre><code>// Sort the list\nstd.sort.sort(i32, nums.items, {}, comptime std.sort.asc(i32));\n</code></pre>"},{"location":"chapter_array_and_linkedlist/list/#432-list-implementation","title":"4.3.2 \u00a0 List implementation","text":"<p>Many programming languages come with built-in lists, including Java, 
C++, Python, etc. Their implementations tend to be intricate, featuring carefully considered settings for various parameters, like initial capacity and expansion factors. Readers who are curious can delve into the source code for further learning.</p> <p>To enhance our understanding of how lists work, we will attempt to implement a simplified version of a list, focusing on three crucial design aspects:</p> <ul> <li>Initial capacity: Choose a reasonable initial capacity for the array. In this example, we choose 10 as the initial capacity.</li> <li>Size recording: Declare a variable <code>size</code> to record the current number of elements in the list, updating in real-time with element insertion and deletion. With this variable, we can locate the end of the list and determine whether expansion is needed.</li> <li>Expansion mechanism: If the list reaches full capacity upon an element insertion, an expansion process is required. This involves creating a larger array based on the expansion factor, and then transferring all elements from the current array to the new one. 
In this example, we stipulate that the array size should double with each expansion.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_list.py<pre><code>class MyList:\n \"\"\"List class\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self._capacity: int = 10 # List capacity\n self._arr: list[int] = [0] * self._capacity # Array (stores list elements)\n self._size: int = 0 # List length (current number of elements)\n self._extend_ratio: int = 2 # Multiple for each list expansion\n\n def size(self) -> int:\n \"\"\"Get list length (current number of elements)\"\"\"\n return self._size\n\n def capacity(self) -> int:\n \"\"\"Get list capacity\"\"\"\n return self._capacity\n\n def get(self, index: int) -> int:\n \"\"\"Access element\"\"\"\n # If the index is out of bounds, throw an exception, as below\n if index < 0 or index >= self._size:\n raise IndexError(\"Index out of bounds\")\n return self._arr[index]\n\n def set(self, num: int, index: int):\n \"\"\"Update element\"\"\"\n if index < 0 or index >= self._size:\n raise IndexError(\"Index out of bounds\")\n self._arr[index] = num\n\n def add(self, num: int):\n \"\"\"Add element at the end\"\"\"\n # When the number of elements exceeds capacity, trigger the expansion mechanism\n if self.size() == self.capacity():\n self.extend_capacity()\n self._arr[self._size] = num\n self._size += 1\n\n def insert(self, num: int, index: int):\n \"\"\"Insert element in the middle\"\"\"\n if index < 0 or index >= self._size:\n raise IndexError(\"Index out of bounds\")\n # When the number of elements exceeds capacity, trigger the expansion mechanism\n if self._size == self.capacity():\n self.extend_capacity()\n # Move all elements after `index` one position backward\n for j in range(self._size - 1, index - 1, -1):\n self._arr[j + 1] = self._arr[j]\n self._arr[index] = num\n # Update the number of elements\n self._size += 1\n\n def remove(self, index: int) -> int:\n \"\"\"Remove element\"\"\"\n if index < 0 or index 
>= self._size:\n raise IndexError(\"Index out of bounds\")\n num = self._arr[index]\n # Move all elements after `index` one position forward\n for j in range(index, self._size - 1):\n self._arr[j] = self._arr[j + 1]\n # Update the number of elements\n self._size -= 1\n # Return the removed element\n return num\n\n def extend_capacity(self):\n \"\"\"Extend list\"\"\"\n # Create a new array of _extend_ratio times the length of the original array and copy the original array to the new array\n self._arr = self._arr + [0] * self.capacity() * (self._extend_ratio - 1)\n # Update list capacity\n self._capacity = len(self._arr)\n\n def to_array(self) -> list[int]:\n \"\"\"Return a list of valid lengths\"\"\"\n return self._arr[: self._size]\n</code></pre> my_list.cpp<pre><code>/* List class */\nclass MyList {\n private:\n int *arr; // Array (stores list elements)\n int arrCapacity = 10; // List capacity\n int arrSize = 0; // List length (current number of elements)\n int extendRatio = 2; // Multiple for each list expansion\n\n public:\n /* Constructor */\n MyList() {\n arr = new int[arrCapacity];\n }\n\n /* Destructor */\n ~MyList() {\n delete[] arr;\n }\n\n /* Get list length (current number of elements)*/\n int size() {\n return arrSize;\n }\n\n /* Get list capacity */\n int capacity() {\n return arrCapacity;\n }\n\n /* Access element */\n int get(int index) {\n // If the index is out of bounds, throw an exception, as below\n if (index < 0 || index >= size())\n throw out_of_range(\"Index out of bounds\");\n return arr[index];\n }\n\n /* Update element */\n void set(int index, int num) {\n if (index < 0 || index >= size())\n throw out_of_range(\"Index out of bounds\");\n arr[index] = num;\n }\n\n /* Add element at the end */\n void add(int num) {\n // When the number of elements exceeds capacity, trigger the expansion mechanism\n if (size() == capacity())\n extendCapacity();\n arr[size()] = num;\n // Update the number of elements\n arrSize++;\n }\n\n /* Insert element in 
the middle */\n void insert(int index, int num) {\n if (index < 0 || index >= size())\n throw out_of_range(\"Index out of bounds\");\n // When the number of elements exceeds capacity, trigger the expansion mechanism\n if (size() == capacity())\n extendCapacity();\n // Move all elements after `index` one position backward\n for (int j = size() - 1; j >= index; j--) {\n arr[j + 1] = arr[j];\n }\n arr[index] = num;\n // Update the number of elements\n arrSize++;\n }\n\n /* Remove element */\n int remove(int index) {\n if (index < 0 || index >= size())\n throw out_of_range(\"Index out of bounds\");\n int num = arr[index];\n // Move all elements after `index` one position forward\n for (int j = index; j < size() - 1; j++) {\n arr[j] = arr[j + 1];\n }\n // Update the number of elements\n arrSize--;\n // Return the removed element\n return num;\n }\n\n /* Extend list */\n void extendCapacity() {\n // Create a new array with a length multiple of the original array by extendRatio\n int newCapacity = capacity() * extendRatio;\n int *tmp = arr;\n arr = new int[newCapacity];\n // Copy all elements from the original array to the new array\n for (int i = 0; i < size(); i++) {\n arr[i] = tmp[i];\n }\n // Free memory\n delete[] tmp;\n arrCapacity = newCapacity;\n }\n\n /* Convert the list to a Vector for printing */\n vector<int> toVector() {\n // Only convert elements within valid length range\n vector<int> vec(size());\n for (int i = 0; i < size(); i++) {\n vec[i] = arr[i];\n }\n return vec;\n }\n};\n</code></pre> my_list.java<pre><code>/* List class */\nclass MyList {\n private int[] arr; // Array (stores list elements)\n private int capacity = 10; // List capacity\n private int size = 0; // List length (current number of elements)\n private int extendRatio = 2; // Multiple for each list expansion\n\n /* Constructor */\n public MyList() {\n arr = new int[capacity];\n }\n\n /* Get list length (current number of elements) */\n public int size() {\n return size;\n }\n\n /* Get 
list capacity */\n public int capacity() {\n return capacity;\n }\n\n /* Access element */\n public int get(int index) {\n // If the index is out of bounds, throw an exception, as below\n if (index < 0 || index >= size)\n throw new IndexOutOfBoundsException(\"Index out of bounds\");\n return arr[index];\n }\n\n /* Update element */\n public void set(int index, int num) {\n if (index < 0 || index >= size)\n throw new IndexOutOfBoundsException(\"Index out of bounds\");\n arr[index] = num;\n }\n\n /* Add element at the end */\n public void add(int num) {\n // When the number of elements exceeds capacity, trigger the expansion mechanism\n if (size == capacity())\n extendCapacity();\n arr[size] = num;\n // Update the number of elements\n size++;\n }\n\n /* Insert element in the middle */\n public void insert(int index, int num) {\n if (index < 0 || index >= size)\n throw new IndexOutOfBoundsException(\"Index out of bounds\");\n // When the number of elements exceeds capacity, trigger the expansion mechanism\n if (size == capacity())\n extendCapacity();\n // Move all elements after `index` one position backward\n for (int j = size - 1; j >= index; j--) {\n arr[j + 1] = arr[j];\n }\n arr[index] = num;\n // Update the number of elements\n size++;\n }\n\n /* Remove element */\n public int remove(int index) {\n if (index < 0 || index >= size)\n throw new IndexOutOfBoundsException(\"Index out of bounds\");\n int num = arr[index];\n // Move all elements after `index` one position forward\n for (int j = index; j < size - 1; j++) {\n arr[j] = arr[j + 1];\n }\n // Update the number of elements\n size--;\n // Return the removed element\n return num;\n }\n\n /* Extend list */\n public void extendCapacity() {\n // Create a new array with a length multiple of the original array by extendRatio, and copy the original array to the new array\n arr = Arrays.copyOf(arr, capacity() * extendRatio);\n // Update list capacity\n capacity = arr.length;\n }\n\n /* Convert the list to an array 
*/\n public int[] toArray() {\n int size = size();\n // Only convert elements within valid length range\n int[] arr = new int[size];\n for (int i = 0; i < size; i++) {\n arr[i] = get(i);\n }\n return arr;\n }\n}\n</code></pre> my_list.cs<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.go<pre><code>[class]{myList}-[func]{}\n</code></pre> my_list.swift<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.js<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.ts<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.dart<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.rs<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.c<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.kt<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.rb<pre><code>[class]{MyList}-[func]{}\n</code></pre> my_list.zig<pre><code>[class]{MyList}-[func]{}\n</code></pre>"},{"location":"chapter_array_and_linkedlist/ram_and_cache/","title":"4.4 \u00a0 Memory and cache *","text":"<p>In the first two sections of this chapter, we explored arrays and linked lists, two fundamental and important data structures, representing \"continuous storage\" and \"dispersed storage\" respectively.</p> <p>In fact, the physical structure largely determines the efficiency of a program's use of memory and cache, which in turn affects the overall performance of the algorithm.</p>"},{"location":"chapter_array_and_linkedlist/ram_and_cache/#441-computer-storage-devices","title":"4.4.1 \u00a0 Computer storage devices","text":"<p>There are three types of storage devices in computers: hard disk, random-access memory (RAM), and cache memory. The following table shows their different roles and performance characteristics in computer systems.</p> <p> Table 4-2 \u00a0 Computer storage devices </p> Hard Disk Memory Cache Usage Long-term storage of data, including OS, programs, files, etc. 
Temporary storage of currently running programs and data being processed Stores frequently accessed data and instructions, reducing the number of CPU accesses to memory Volatility Data is not lost after power off Data is lost after power off Data is lost after power off Capacity Larger, TB level Smaller, GB level Very small, MB level Speed Slower, several hundred to thousands MB/s Faster, several tens of GB/s Very fast, several tens to hundreds of GB/s Price Cheaper, several cents to yuan / GB More expensive, tens to hundreds of yuan / GB Very expensive, priced with CPU <p>We can imagine the computer storage system as a pyramid structure shown in Figure 4-9. The storage devices closer to the top of the pyramid are faster, have smaller capacity, and are more costly. This multi-level design is not accidental, but the result of careful consideration by computer scientists and engineers.</p> <ul> <li>Hard disks are difficult to replace with memory. Firstly, data in memory is lost after power off, making it unsuitable for long-term data storage; secondly, the cost of memory is dozens of times that of hard disks, making it difficult to popularize in the consumer market.</li> <li>It is difficult for caches to have both large capacity and high speed. As the capacity of L1, L2, L3 caches gradually increases, their physical size becomes larger, increasing the physical distance from the CPU core, leading to increased data transfer time and higher element access latency. Under current technology, a multi-level cache structure is the best balance between capacity, speed, and cost.</li> </ul> <p></p> <p> Figure 4-9 \u00a0 Computer storage system </p> <p>Tip</p> <p>The storage hierarchy of computers reflects a delicate balance between speed, capacity, and cost. 
In fact, this kind of trade-off is common in all industrial fields, requiring us to find the best balance between different advantages and limitations.</p> <p>Overall, hard disks are used for long-term storage of large amounts of data, memory is used for temporary storage of data being processed during program execution, and cache is used to store frequently accessed data and instructions to improve program execution efficiency. Together, they ensure the efficient operation of computer systems.</p> <p>As shown in Figure 4-10, during program execution, data is read from the hard disk into memory for CPU computation. The cache can be considered a part of the CPU, smartly loading data from memory to provide fast data access to the CPU, significantly enhancing program execution efficiency and reducing reliance on slower memory.</p> <p></p> <p> Figure 4-10 \u00a0 Data flow between hard disk, memory, and cache </p>"},{"location":"chapter_array_and_linkedlist/ram_and_cache/#442-memory-efficiency-of-data-structures","title":"4.4.2 \u00a0 Memory efficiency of data structures","text":"<p>In terms of memory space utilization, arrays and linked lists have their advantages and limitations.</p> <p>On one hand, memory is limited and cannot be shared by multiple programs, so we hope that data structures can use space as efficiently as possible. The elements of an array are tightly packed without extra space for storing references (pointers) between linked list nodes, making them more space-efficient. However, arrays require allocating sufficient continuous memory space at once, which may lead to memory waste, and array expansion also requires additional time and space costs. 
In contrast, linked lists allocate and reclaim memory dynamically on a per-node basis, providing greater flexibility.</p> <p>On the other hand, during program execution, as memory is repeatedly allocated and released, the degree of fragmentation of free memory becomes higher, leading to reduced memory utilization efficiency. Arrays, due to their continuous storage method, are relatively less likely to cause memory fragmentation. In contrast, the elements of a linked list are dispersedly stored, and frequent insertion and deletion operations make memory fragmentation more likely.</p>"},{"location":"chapter_array_and_linkedlist/ram_and_cache/#443-cache-efficiency-of-data-structures","title":"4.4.3 \u00a0 Cache efficiency of data structures","text":"<p>Although caches are much smaller in space capacity than memory, they are much faster and play a crucial role in program execution speed. Since the cache's capacity is limited and can only store a small part of frequently accessed data, when the CPU tries to access data not in the cache, a cache miss occurs, forcing the CPU to load the needed data from slower memory.</p> <p>Clearly, the fewer the cache misses, the higher the CPU's data read-write efficiency, and the better the program performance. The proportion of successful data retrieval from the cache by the CPU is called the cache hit rate, a metric often used to measure cache efficiency.</p> <p>To achieve higher efficiency, caches adopt the following data loading mechanisms.</p> <ul> <li>Cache lines: Caches don't store and load data byte by byte but in units of cache lines. Compared to byte-by-byte transfer, the transmission of cache lines is more efficient.</li> <li>Prefetch mechanism: Processors try to predict data access patterns (such as sequential access, fixed stride jumping access, etc.) 
and load data into the cache according to specific patterns to improve the hit rate.</li> <li>Spatial locality: If data is accessed, data nearby is likely to be accessed in the near future. Therefore, when loading certain data, the cache also loads nearby data to improve the hit rate.</li> <li>Temporal locality: If data is accessed, it's likely to be accessed again in the near future. Caches use this principle to retain recently accessed data to improve the hit rate.</li> </ul> <p>In fact, arrays and linked lists have different cache utilization efficiencies, mainly reflected in the following aspects.</p> <ul> <li>Occupied space: Linked list elements occupy more space than array elements, resulting in less effective data volume in the cache.</li> <li>Cache lines: Linked list data is scattered throughout memory, and since caches load \"by line,\" the proportion of loading invalid data is higher.</li> <li>Prefetch mechanism: The data access pattern of arrays is more \"predictable\" than that of linked lists, meaning the system is more likely to guess which data will be loaded next.</li> <li>Spatial locality: Arrays are stored in concentrated memory spaces, so the data near the loaded data is more likely to be accessed next.</li> </ul> <p>Overall, arrays have a higher cache hit rate and are generally more efficient in operation than linked lists. This makes data structures based on arrays more popular in solving algorithmic problems.</p> <p>It should be noted that high cache efficiency does not mean that arrays are always better than linked lists. Which data structure to choose in actual applications should be based on specific requirements. 
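The traversal-speed gap can be sketched with a rough timing comparison. This is only illustrative: in CPython the interpreter overhead and per-object indirection dominate, so it does not isolate hardware cache effects the way a C benchmark would, and absolute numbers vary by machine. The <code>Node</code> class is our own illustrative definition:

```python
import time

class Node:
    """Linked list node; nodes end up scattered across the heap."""
    __slots__ = ("val", "next")

    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

n = 200_000
arr = list(range(n))
head = None
for v in reversed(range(n)):
    head = Node(v, head)  # head now holds 0, 1, ..., n-1 in order

start = time.perf_counter()
total_arr = 0
for v in arr:             # sequential scan over a contiguous reference array
    total_arr += v
arr_time = time.perf_counter() - start

start = time.perf_counter()
total_list, node = 0, head
while node:               # pointer chasing from node to node
    total_list += node.val
    node = node.next
list_time = time.perf_counter() - start

assert total_arr == total_list
print(f"array scan: {arr_time:.4f}s, linked list scan: {list_time:.4f}s")
```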
For example, both arrays and linked lists can implement the \"stack\" data structure (which will be detailed in the next chapter), but they are suitable for different scenarios.</p> <ul> <li>In algorithm problems, we tend to choose stacks based on arrays because they provide higher operational efficiency and random access capabilities, with the only cost being the need to pre-allocate a certain amount of memory space for the array.</li> <li>If the data volume is very large, highly dynamic, and the expected size of the stack is difficult to estimate, then a stack based on a linked list is more appropriate. Linked lists can disperse a large amount of data in different parts of the memory and avoid the additional overhead of array expansion.</li> </ul>"},{"location":"chapter_array_and_linkedlist/summary/","title":"4.5 \u00a0 Summary","text":""},{"location":"chapter_array_and_linkedlist/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>Arrays and linked lists are two basic data structures, representing two storage methods in computer memory: contiguous space storage and non-contiguous space storage. 
Their characteristics complement each other.</li> <li>Arrays support random access and use less memory; however, they are inefficient in inserting and deleting elements and have a fixed length after initialization.</li> <li>Linked lists implement efficient node insertion and deletion through changing references (pointers) and can flexibly adjust their length; however, they have lower node access efficiency and consume more memory.</li> <li>Common types of linked lists include singly linked lists, circular linked lists, and doubly linked lists, each with its own application scenarios.</li> <li>Lists are ordered collections of elements that support addition, deletion, and modification, typically implemented based on dynamic arrays, retaining the advantages of arrays while allowing flexible length adjustment.</li> <li>The advent of lists significantly enhanced the practicality of arrays but may lead to some memory space wastage.</li> <li>During program execution, data is mainly stored in memory. Arrays provide higher memory space efficiency, while linked lists are more flexible in memory usage.</li> <li>Caches provide fast data access to CPUs through mechanisms like cache lines, prefetching, spatial locality, and temporal locality, significantly enhancing program execution efficiency.</li> <li>Due to higher cache hit rates, arrays are generally more efficient than linked lists. When choosing a data structure, the appropriate choice should be made based on specific needs and scenarios.</li> </ul>"},{"location":"chapter_array_and_linkedlist/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Does storing arrays on the stack versus the heap affect time and space efficiency?</p> <p>Arrays stored on both the stack and heap are stored in contiguous memory spaces, and data operation efficiency is essentially the same. 
However, stacks and heaps have their own characteristics, leading to the following differences.</p> <ol> <li>Allocation and release efficiency: The stack is a smaller memory block, allocated automatically by the compiler; the heap memory is relatively larger and can be dynamically allocated in the code, more prone to fragmentation. Therefore, allocation and release operations on the heap are generally slower than on the stack.</li> <li>Size limitation: Stack memory is relatively small, while the heap size is generally limited by available memory. Therefore, the heap is more suitable for storing large arrays.</li> <li>Flexibility: The size of arrays on the stack needs to be determined at compile-time, while the size of arrays on the heap can be dynamically determined at runtime.</li> </ol> <p>Q: Why do arrays require elements of the same type, while linked lists do not emphasize same-type elements?</p> <p>Linked lists consist of nodes connected by references (pointers), and each node can store data of different types, such as int, double, string, object, etc.</p> <p>In contrast, array elements must be of the same type, allowing the calculation of offsets to access the corresponding element positions. For example, an array containing both int and long types, with single elements occupying 4 bytes and 8 bytes respectively, cannot use the following formula to calculate offsets, as the array contains elements of two different lengths.</p> <pre><code># Element memory address = array memory address + element length * element index\n</code></pre> <p>Q: After deleting a node, is it necessary to set <code>P.next</code> to <code>None</code>?</p> <p>Not modifying <code>P.next</code> is also acceptable. From the perspective of the linked list, traversing from the head node to the tail node will no longer encounter <code>P</code>. 
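This can be seen in a minimal sketch (a hypothetical three-node list of our own construction):

```python
class ListNode:
    """Singly linked list node (illustrative definition)"""
    def __init__(self, val):
        self.val = val
        self.next = None

# Build 1 -> 2 -> 3, then delete the middle node P
head = ListNode(1)
P = ListNode(2)
head.next = P
P.next = ListNode(3)

head.next = P.next  # bypass P; P.next is deliberately left untouched

vals, node = [], head
while node:         # traversal from head never visits P again
    vals.append(node.val)
    node = node.next
print(vals)  # [1, 3]
```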
This means that node <code>P</code> has been effectively removed from the list, and where <code>P</code> points no longer affects the list.</p> <p>From a garbage collection perspective, for languages with automatic garbage collection mechanisms like Java, Python, and Go, whether node <code>P</code> is collected depends on whether there are still references pointing to it, not on the value of <code>P.next</code>. In languages like C and C++, we need to manually free the node's memory.</p> <p>Q: In linked lists, the time complexity for insertion and deletion operations is <code>O(1)</code>. But searching for the element before insertion or deletion takes <code>O(n)</code> time, so why isn't the time complexity <code>O(n)</code>?</p> <p>If an element is searched first and then deleted, the time complexity is indeed <code>O(n)</code>. However, the <code>O(1)</code> advantage of linked lists in insertion and deletion can be realized in other applications. For example, in the implementation of double-ended queues using linked lists, we maintain pointers always pointing to the head and tail nodes, making each insertion and deletion operation <code>O(1)</code>.</p> <p>Q: In the figure \"Linked List Definition and Storage Method\", do the light blue storage nodes occupy a single memory address, or do they share half with the node value?</p> <p>The figure is just a qualitative representation; quantitative analysis depends on specific situations.</p> <ul> <li>Different types of node values occupy different amounts of space, such as int, long, double, and object instances.</li> <li>The memory space occupied by pointer variables depends on the operating system and compilation environment used, usually 8 bytes or 4 bytes.</li> </ul> <p>Q: Is adding elements to the end of a list always <code>O(1)</code>?</p> <p>If adding an element exceeds the list length, the list needs to be expanded first. 
The system will request a new memory block and move all elements of the original list over, in which case the time complexity becomes <code>O(n)</code>.</p> <p>Q: The statement \"The emergence of lists greatly improves the practicality of arrays, but may lead to some memory space wastage\" - does this refer to the memory occupied by additional variables like capacity, length, and expansion multiplier?</p> <p>The space wastage here mainly refers to two aspects: on the one hand, lists are set with an initial length, which we may not always need; on the other hand, to prevent frequent expansion, expansion usually multiplies by a coefficient, such as \\(\\times 1.5\\). This results in many empty slots, which we typically cannot fully fill.</p> <p>Q: In Python, after initializing <code>n = [1, 2, 3]</code>, the addresses of these 3 elements are contiguous, but initializing <code>m = [2, 1, 3]</code> shows that each element's <code>id</code> is not consecutive but identical to those in <code>n</code>. If the addresses of these elements are not contiguous, is <code>m</code> still an array?</p> <p>If we replace list elements with linked list nodes <code>n = [n1, n2, n3, n4, n5]</code>, these 5 node objects are also typically dispersed throughout memory. However, given a list index, we can still access the node's memory address in <code>O(1)</code> time, thereby accessing the corresponding node. This is because the array stores references to the nodes, not the nodes themselves.</p> <p>Unlike many languages, in Python, numbers are also wrapped as objects, and lists store references to these numbers, not the numbers themselves. Therefore, we find that the same number in two arrays has the same <code>id</code>, and these numbers' memory addresses need not be contiguous.</p> <p>Q: The <code>std::list</code> in C++ STL has already implemented a doubly linked list, but it seems that some algorithm books don't directly use it. 
Is there any limitation?</p> <p>On the one hand, we often prefer to use arrays to implement algorithms, only using linked lists when necessary, mainly for two reasons.</p> <ul> <li>Space overhead: Since each element requires two additional pointers (one for the previous element and one for the next), <code>std::list</code> usually occupies more space than <code>std::vector</code>.</li> <li>Cache unfriendly: As the data is not stored continuously, <code>std::list</code> has a lower cache utilization rate. Generally, <code>std::vector</code> performs better.</li> </ul> <p>On the other hand, linked lists are primarily necessary for binary trees and graphs. Stacks and queues are often implemented using the programming language's <code>stack</code> and <code>queue</code> classes, rather than linked lists.</p> <p>Q: Does initializing a list <code>res = [0] * self.size()</code> result in each element of <code>res</code> referencing the same address?</p> <p>No. However, this issue arises with two-dimensional arrays, for example, initializing a two-dimensional list <code>res = [[0]] * self.size()</code> would reference the same list <code>[0]</code> multiple times.</p> <p>Q: In deleting a node, is it necessary to break the reference to its successor node?</p> <p>From the perspective of data structures and algorithms (problem-solving), it's okay not to break the link, as long as the program's logic is correct. From the perspective of standard libraries, breaking the link is safer and more logically clear. If the link is not broken, and the deleted node is not properly recycled, it could affect the recycling of the successor node's memory.</p>"},{"location":"chapter_backtracking/","title":"Chapter 13. 
\u00a0 Backtracking","text":"<p>Abstract</p> <p>Like explorers in a maze, we may encounter difficulties on our path forward.</p> <p>The power of backtracking allows us to start over, keep trying, and eventually find the exit to the light.</p>"},{"location":"chapter_backtracking/#chapter-contents","title":"Chapter contents","text":"<ul> <li>13.1 \u00a0 Backtracking algorithms</li> <li>13.2 \u00a0 Permutation problem</li> <li>13.3 \u00a0 Subset sum problem</li> <li>13.4 \u00a0 n queens problem</li> <li>13.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_backtracking/backtracking_algorithm/","title":"13.1 \u00a0 Backtracking algorithms","text":"<p>A backtracking algorithm solves problems by exhaustive search: starting from an initial state, it brute-forces all possible solutions and records the correct ones, until a solution is found or all possible choices are exhausted without finding one.</p> <p>Backtracking typically employs \"depth-first search\" to traverse the solution space. In the \"Binary Tree\" chapter, we mentioned that pre-order, in-order, and post-order traversals are all depth-first searches. Next, we use pre-order traversal to construct a backtracking problem to gradually understand how the backtracking algorithm works.</p> <p>Example One</p> <p>Given a binary tree, search for and record all nodes with a value of \\(7\\), and return them as a list.</p> <p>For this problem, we traverse the tree in pre-order and check whether the current node's value is \\(7\\). If it is, we add the node to the result list <code>res</code>. 
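For readers who want to run this example directly, here is a self-contained Python version; the small sample tree is our own illustrative construction, not part of the book's code:

```python
class TreeNode:
    """Binary tree node (illustrative definition)"""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def pre_order(root, res):
    """Pre-order traversal: record every node whose value is 7"""
    if root is None:
        return
    if root.val == 7:
        res.append(root)  # record solution: the node itself
    pre_order(root.left, res)
    pre_order(root.right, res)

# Sample tree:   1
#               / \
#              7   3
#                   \
#                    7
root = TreeNode(1)
root.left = TreeNode(7)
root.right = TreeNode(3)
root.right.right = TreeNode(7)

res = []
pre_order(root, res)
print([node.val for node in res])  # [7, 7]
```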
The relevant process is shown in Figure 13-1:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig preorder_traversal_i_compact.py<pre><code>def pre_order(root: TreeNode):\n \"\"\"Pre-order traversal: Example one\"\"\"\n if root is None:\n return\n if root.val == 7:\n # Record solution\n res.append(root)\n pre_order(root.left)\n pre_order(root.right)\n</code></pre> preorder_traversal_i_compact.cpp<pre><code>/* Pre-order traversal: Example one */\nvoid preOrder(TreeNode *root) {\n if (root == nullptr) {\n return;\n }\n if (root->val == 7) {\n // Record solution\n res.push_back(root);\n }\n preOrder(root->left);\n preOrder(root->right);\n}\n</code></pre> preorder_traversal_i_compact.java<pre><code>/* Pre-order traversal: Example one */\nvoid preOrder(TreeNode root) {\n if (root == null) {\n return;\n }\n if (root.val == 7) {\n // Record solution\n res.add(root);\n }\n preOrder(root.left);\n preOrder(root.right);\n}\n</code></pre> preorder_traversal_i_compact.cs<pre><code>[class]{preorder_traversal_i_compact}-[func]{PreOrder}\n</code></pre> preorder_traversal_i_compact.go<pre><code>[class]{}-[func]{preOrderI}\n</code></pre> preorder_traversal_i_compact.swift<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.js<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.ts<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.dart<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.rs<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_i_compact.c<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.kt<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_i_compact.rb<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_i_compact.zig<pre><code>[class]{}-[func]{preOrder}\n</code></pre> <p></p> <p> Figure 13-1 \u00a0 Searching nodes in pre-order traversal 
</p>"},{"location":"chapter_backtracking/backtracking_algorithm/#1311-trying-and-retreating","title":"13.1.1 \u00a0 Trying and retreating","text":"<p>The reason it is called backtracking is that the algorithm uses a \"try\" and \"retreat\" strategy when searching the solution space. When the algorithm encounters a state where it can no longer progress or fails to achieve a satisfying solution, it undoes the previous choice, reverts to the previous state, and tries other possible choices.</p> <p>For Example One, visiting each node represents a \"try\", and passing a leaf node or returning to the parent node's <code>return</code> represents \"retreat\".</p> <p>It's worth noting that retreat is not merely about function returns. We expand slightly on Example One for clarification.</p> <p>Example Two</p> <p>In a binary tree, search for all nodes with a value of \\(7\\) and please return the paths from the root node to these nodes.</p> <p>Based on the code from Example One, we need to use a list <code>path</code> to record the visited node paths. When a node with a value of \\(7\\) is reached, we copy <code>path</code> and add it to the result list <code>res</code>. After the traversal, <code>res</code> holds all the solutions. 
The code is as shown:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig preorder_traversal_ii_compact.py<pre><code>def pre_order(root: TreeNode):\n \"\"\"Pre-order traversal: Example two\"\"\"\n if root is None:\n return\n # Attempt\n path.append(root)\n if root.val == 7:\n # Record solution\n res.append(list(path))\n pre_order(root.left)\n pre_order(root.right)\n # Retract\n path.pop()\n</code></pre> preorder_traversal_ii_compact.cpp<pre><code>/* Pre-order traversal: Example two */\nvoid preOrder(TreeNode *root) {\n if (root == nullptr) {\n return;\n }\n // Attempt\n path.push_back(root);\n if (root->val == 7) {\n // Record solution\n res.push_back(path);\n }\n preOrder(root->left);\n preOrder(root->right);\n // Retract\n path.pop_back();\n}\n</code></pre> preorder_traversal_ii_compact.java<pre><code>/* Pre-order traversal: Example two */\nvoid preOrder(TreeNode root) {\n if (root == null) {\n return;\n }\n // Attempt\n path.add(root);\n if (root.val == 7) {\n // Record solution\n res.add(new ArrayList<>(path));\n }\n preOrder(root.left);\n preOrder(root.right);\n // Retract\n path.remove(path.size() - 1);\n}\n</code></pre> preorder_traversal_ii_compact.cs<pre><code>[class]{preorder_traversal_ii_compact}-[func]{PreOrder}\n</code></pre> preorder_traversal_ii_compact.go<pre><code>[class]{}-[func]{preOrderII}\n</code></pre> preorder_traversal_ii_compact.swift<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_ii_compact.js<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_ii_compact.ts<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_ii_compact.dart<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_ii_compact.rs<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_ii_compact.c<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_ii_compact.kt<pre><code>[class]{}-[func]{preOrder}\n</code></pre> 
preorder_traversal_ii_compact.rb<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_ii_compact.zig<pre><code>[class]{}-[func]{preOrder}\n</code></pre> <p>In each \"try\", we record the path by adding the current node to <code>path</code>; before \"retreating\", we need to pop the node from <code>path</code> to restore the state before this attempt.</p> <p>Observe the process shown in Figure 13-2, we can understand trying and retreating as \"advancing\" and \"undoing\", two operations that are reverse to each other.</p> <1><2><3><4><5><6><7><8><9><10><11> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 13-2 \u00a0 Trying and retreating </p>"},{"location":"chapter_backtracking/backtracking_algorithm/#1312-pruning","title":"13.1.2 \u00a0 Pruning","text":"<p>Complex backtracking problems usually involve one or more constraints, which are often used for \"pruning\".</p> <p>Example Three</p> <p>In a binary tree, search for all nodes with a value of \\(7\\) and return the paths from the root to these nodes, requiring that the paths do not contain nodes with a value of \\(3\\).</p> <p>To meet the above constraints, we need to add a pruning operation: during the search process, if a node with a value of \\(3\\) is encountered, it returns early, discontinuing further search. 
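A self-contained runnable sketch of try, retreat, and pruning combined (the sample tree is our own illustrative construction; note how the \(7\) hidden behind a \(3\) is never reached):

```python
class TreeNode:
    """Binary tree node (illustrative definition)"""
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

res, path = [], []

def pre_order(root):
    """Pre-order traversal with pruning: skip subtrees rooted at value 3"""
    if root is None or root.val == 3:  # pruning: cut this branch off
        return
    path.append(root)                  # try: extend the current path
    if root.val == 7:
        res.append(list(path))         # record a copy of the solution path
    pre_order(root.left)
    pre_order(root.right)
    path.pop()                         # retreat: restore the previous state

# Sample tree:   1
#               / \
#              7   3
#                   \
#                    7   <- unreachable: hidden behind a 3
root = TreeNode(1)
root.left = TreeNode(7)
root.right = TreeNode(3)
root.right.right = TreeNode(7)

pre_order(root)
print([[node.val for node in p] for p in res])  # [[1, 7]]
```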
The code is as shown:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig preorder_traversal_iii_compact.py<pre><code>def pre_order(root: TreeNode):\n \"\"\"Pre-order traversal: Example three\"\"\"\n # Pruning\n if root is None or root.val == 3:\n return\n # Attempt\n path.append(root)\n if root.val == 7:\n # Record solution\n res.append(list(path))\n pre_order(root.left)\n pre_order(root.right)\n # Retract\n path.pop()\n</code></pre> preorder_traversal_iii_compact.cpp<pre><code>/* Pre-order traversal: Example three */\nvoid preOrder(TreeNode *root) {\n // Pruning\n if (root == nullptr || root->val == 3) {\n return;\n }\n // Attempt\n path.push_back(root);\n if (root->val == 7) {\n // Record solution\n res.push_back(path);\n }\n preOrder(root->left);\n preOrder(root->right);\n // Retract\n path.pop_back();\n}\n</code></pre> preorder_traversal_iii_compact.java<pre><code>/* Pre-order traversal: Example three */\nvoid preOrder(TreeNode root) {\n // Pruning\n if (root == null || root.val == 3) {\n return;\n }\n // Attempt\n path.add(root);\n if (root.val == 7) {\n // Record solution\n res.add(new ArrayList<>(path));\n }\n preOrder(root.left);\n preOrder(root.right);\n // Retract\n path.remove(path.size() - 1);\n}\n</code></pre> preorder_traversal_iii_compact.cs<pre><code>[class]{preorder_traversal_iii_compact}-[func]{PreOrder}\n</code></pre> preorder_traversal_iii_compact.go<pre><code>[class]{}-[func]{preOrderIII}\n</code></pre> preorder_traversal_iii_compact.swift<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_iii_compact.js<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_iii_compact.ts<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_iii_compact.dart<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_iii_compact.rs<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_iii_compact.c<pre><code>[class]{}-[func]{preOrder}\n</code></pre> 
preorder_traversal_iii_compact.kt<pre><code>[class]{}-[func]{preOrder}\n</code></pre> preorder_traversal_iii_compact.rb<pre><code>[class]{}-[func]{pre_order}\n</code></pre> preorder_traversal_iii_compact.zig<pre><code>[class]{}-[func]{preOrder}\n</code></pre> <p>\"Pruning\" is a very vivid metaphor. As shown in Figure 13-3, in the search process, we \"cut off\" the search branches that do not meet the constraints, avoiding many meaningless attempts, thus enhancing the search efficiency.</p> <p></p> <p> Figure 13-3 \u00a0 Pruning based on constraints </p>"},{"location":"chapter_backtracking/backtracking_algorithm/#1313-framework-code","title":"13.1.3 \u00a0 Framework code","text":"<p>Next, we attempt to distill the main framework of \"trying, retreating, and pruning\" from backtracking to enhance the code's universality.</p> <p>In the following framework code, <code>state</code> represents the current state of the problem, <code>choices</code> represents the choices available under the current state:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig <pre><code>def backtrack(state: State, choices: list[choice], res: list[state]):\n \"\"\"Backtracking algorithm framework\"\"\"\n # Check if it's a solution\n if is_solution(state):\n # Record the solution\n record_solution(state, res)\n # Stop searching\n return\n # Iterate through all choices\n for choice in choices:\n # Pruning: check if the choice is valid\n if is_valid(state, choice):\n # Try: make a choice, update the state\n make_choice(state, choice)\n backtrack(state, choices, res)\n # Retreat: undo the choice, revert to the previous state\n undo_choice(state, choice)\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nvoid backtrack(State *state, vector<Choice *> &choices, vector<State *> &res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (Choice choice : 
choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice);\n backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nvoid backtrack(State state, List<Choice> choices, List<State> res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (Choice choice : choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice);\n backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nvoid Backtrack(State state, List<Choice> choices, List<State> res) {\n // Check if it's a solution\n if (IsSolution(state)) {\n // Record the solution\n RecordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n foreach (Choice choice in choices) {\n // Pruning: check if the choice is valid\n if (IsValid(state, choice)) {\n // Try: make a choice, update the state\n MakeChoice(state, choice);\n Backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n UndoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfunc backtrack(state *State, choices []Choice, res *[]State) {\n // Check if it's a solution\n if isSolution(state) {\n // Record the solution\n recordSolution(state, res)\n // Stop searching\n return\n }\n // Iterate through all choices\n for _, choice := range choices {\n // Pruning: check if the choice is valid\n if isValid(state, choice) {\n // Try: make a choice, update the 
state\n makeChoice(state, choice)\n backtrack(state, choices, res)\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice)\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfunc backtrack(state: inout State, choices: [Choice], res: inout [State]) {\n // Check if it's a solution\n if isSolution(state: state) {\n // Record the solution\n recordSolution(state: state, res: &res)\n // Stop searching\n return\n }\n // Iterate through all choices\n for choice in choices {\n // Pruning: check if the choice is valid\n if isValid(state: state, choice: choice) {\n // Try: make a choice, update the state\n makeChoice(state: &state, choice: choice)\n backtrack(state: &state, choices: choices, res: &res)\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state: &state, choice: choice)\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfunction backtrack(state, choices, res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (let choice of choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice);\n backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfunction backtrack(state: State, choices: Choice[], res: State[]): void {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (let choice of choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice);\n backtrack(state, choices, 
res);\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nvoid backtrack(State state, List<Choice> choices, List<State> res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (Choice choice in choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice);\n backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfn backtrack(state: &mut State, choices: &Vec<Choice>, res: &mut Vec<State>) {\n // Check if it's a solution\n if is_solution(state) {\n // Record the solution\n record_solution(state, res);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for choice in choices {\n // Pruning: check if the choice is valid\n if is_valid(state, choice) {\n // Try: make a choice, update the state\n make_choice(state, choice);\n backtrack(state, choices, res);\n // Retreat: undo the choice, revert to the previous state\n undo_choice(state, choice);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nvoid backtrack(State *state, Choice *choices, int numChoices, State *res, int numRes) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res, numRes);\n // Stop searching\n return;\n }\n // Iterate through all choices\n for (int i = 0; i < numChoices; i++) {\n // Pruning: check if the choice is valid\n if (isValid(state, &choices[i])) {\n // Try: make a choice, update the state\n makeChoice(state, &choices[i]);\n backtrack(state, choices, numChoices, res, numRes);\n // Retreat: undo the choice, revert to the 
previous state\n undoChoice(state, &choices[i]);\n }\n }\n}\n</code></pre> <pre><code>/* Backtracking algorithm framework */\nfun backtrack(state: State?, choices: List<Choice?>, res: List<State?>?) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record the solution\n recordSolution(state, res)\n // Stop searching\n return\n }\n // Iterate through all choices\n for (choice in choices) {\n // Pruning: check if the choice is valid\n if (isValid(state, choice)) {\n // Try: make a choice, update the state\n makeChoice(state, choice)\n backtrack(state, choices, res)\n // Retreat: undo the choice, revert to the previous state\n undoChoice(state, choice)\n }\n }\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p>Next, we solve Example Three based on the framework code. The <code>state</code> is the node traversal path, <code>choices</code> are the current node's left and right children, and the result <code>res</code> is the list of paths:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig preorder_traversal_iii_template.py<pre><code>def is_solution(state: list[TreeNode]) -> bool:\n \"\"\"Determine if the current state is a solution\"\"\"\n return state and state[-1].val == 7\n\ndef record_solution(state: list[TreeNode], res: list[list[TreeNode]]):\n \"\"\"Record solution\"\"\"\n res.append(list(state))\n\ndef is_valid(state: list[TreeNode], choice: TreeNode) -> bool:\n \"\"\"Determine if the choice is legal under the current state\"\"\"\n return choice is not None and choice.val != 3\n\ndef make_choice(state: list[TreeNode], choice: TreeNode):\n \"\"\"Update state\"\"\"\n state.append(choice)\n\ndef undo_choice(state: list[TreeNode], choice: TreeNode):\n \"\"\"Restore state\"\"\"\n state.pop()\n\ndef backtrack(\n state: list[TreeNode], choices: list[TreeNode], res: list[list[TreeNode]]\n):\n \"\"\"Backtracking algorithm: Example three\"\"\"\n # Check if it's a solution\n if is_solution(state):\n # Record solution\n 
record_solution(state, res)\n # Traverse all choices\n for choice in choices:\n # Pruning: check if the choice is legal\n if is_valid(state, choice):\n # Attempt: make a choice, update the state\n make_choice(state, choice)\n # Proceed to the next round of selection\n backtrack(state, [choice.left, choice.right], res)\n # Retract: undo the choice, restore to the previous state\n undo_choice(state, choice)\n</code></pre> preorder_traversal_iii_template.cpp<pre><code>/* Determine if the current state is a solution */\nbool isSolution(vector<TreeNode *> &state) {\n return !state.empty() && state.back()->val == 7;\n}\n\n/* Record solution */\nvoid recordSolution(vector<TreeNode *> &state, vector<vector<TreeNode *>> &res) {\n res.push_back(state);\n}\n\n/* Determine if the choice is legal under the current state */\nbool isValid(vector<TreeNode *> &state, TreeNode *choice) {\n return choice != nullptr && choice->val != 3;\n}\n\n/* Update state */\nvoid makeChoice(vector<TreeNode *> &state, TreeNode *choice) {\n state.push_back(choice);\n}\n\n/* Restore state */\nvoid undoChoice(vector<TreeNode *> &state, TreeNode *choice) {\n state.pop_back();\n}\n\n/* Backtracking algorithm: Example three */\nvoid backtrack(vector<TreeNode *> &state, vector<TreeNode *> &choices, vector<vector<TreeNode *>> &res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record solution\n recordSolution(state, res);\n }\n // Traverse all choices\n for (TreeNode *choice : choices) {\n // Pruning: check if the choice is legal\n if (isValid(state, choice)) {\n // Attempt: make a choice, update the state\n makeChoice(state, choice);\n // Proceed to the next round of selection\n vector<TreeNode *> nextChoices{choice->left, choice->right};\n backtrack(state, nextChoices, res);\n // Retract: undo the choice, restore to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> preorder_traversal_iii_template.java<pre><code>/* Determine if the current state is a solution 
*/\nboolean isSolution(List<TreeNode> state) {\n return !state.isEmpty() && state.get(state.size() - 1).val == 7;\n}\n\n/* Record solution */\nvoid recordSolution(List<TreeNode> state, List<List<TreeNode>> res) {\n res.add(new ArrayList<>(state));\n}\n\n/* Determine if the choice is legal under the current state */\nboolean isValid(List<TreeNode> state, TreeNode choice) {\n return choice != null && choice.val != 3;\n}\n\n/* Update state */\nvoid makeChoice(List<TreeNode> state, TreeNode choice) {\n state.add(choice);\n}\n\n/* Restore state */\nvoid undoChoice(List<TreeNode> state, TreeNode choice) {\n state.remove(state.size() - 1);\n}\n\n/* Backtracking algorithm: Example three */\nvoid backtrack(List<TreeNode> state, List<TreeNode> choices, List<List<TreeNode>> res) {\n // Check if it's a solution\n if (isSolution(state)) {\n // Record solution\n recordSolution(state, res);\n }\n // Traverse all choices\n for (TreeNode choice : choices) {\n // Pruning: check if the choice is legal\n if (isValid(state, choice)) {\n // Attempt: make a choice, update the state\n makeChoice(state, choice);\n // Proceed to the next round of selection\n backtrack(state, Arrays.asList(choice.left, choice.right), res);\n // Retract: undo the choice, restore to the previous state\n undoChoice(state, choice);\n }\n }\n}\n</code></pre> preorder_traversal_iii_template.cs<pre><code>[class]{preorder_traversal_iii_template}-[func]{IsSolution}\n\n[class]{preorder_traversal_iii_template}-[func]{RecordSolution}\n\n[class]{preorder_traversal_iii_template}-[func]{IsValid}\n\n[class]{preorder_traversal_iii_template}-[func]{MakeChoice}\n\n[class]{preorder_traversal_iii_template}-[func]{UndoChoice}\n\n[class]{preorder_traversal_iii_template}-[func]{Backtrack}\n</code></pre> 
preorder_traversal_iii_template.go<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrackIII}\n</code></pre> preorder_traversal_iii_template.swift<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.js<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.ts<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.dart<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.rs<pre><code>[class]{}-[func]{is_solution}\n\n[class]{}-[func]{record_solution}\n\n[class]{}-[func]{is_valid}\n\n[class]{}-[func]{make_choice}\n\n[class]{}-[func]{undo_choice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.c<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> 
preorder_traversal_iii_template.kt<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.rb<pre><code>[class]{}-[func]{is_solution}\n\n[class]{}-[func]{record_solution}\n\n[class]{}-[func]{is_valid}\n\n[class]{}-[func]{make_choice}\n\n[class]{}-[func]{undo_choice}\n\n[class]{}-[func]{backtrack}\n</code></pre> preorder_traversal_iii_template.zig<pre><code>[class]{}-[func]{isSolution}\n\n[class]{}-[func]{recordSolution}\n\n[class]{}-[func]{isValid}\n\n[class]{}-[func]{makeChoice}\n\n[class]{}-[func]{undoChoice}\n\n[class]{}-[func]{backtrack}\n</code></pre> <p>As per the requirements, after finding a node with a value of \\(7\\), the search should continue, thus the <code>return</code> statement after recording the solution should be removed. Figure 13-4 compares the search processes with and without retaining the <code>return</code> statement.</p> <p></p> <p> Figure 13-4 \u00a0 Comparison of retaining and removing the return in the search process </p> <p>Compared to the implementation based on pre-order traversal, the code implementation based on the backtracking algorithm framework seems verbose, but it has better universality. In fact, many backtracking problems can be solved within this framework. 
We just need to define <code>state</code> and <code>choices</code> according to the specific problem and implement the methods in the framework.</p>"},{"location":"chapter_backtracking/backtracking_algorithm/#1314-common-terminology","title":"13.1.4 \u00a0 Common terminology","text":"<p>To analyze algorithmic problems more clearly, we summarize the meanings of commonly used terminology in backtracking algorithms and provide corresponding examples from Example Three as shown in Table 13-1.</p> <p> Table 13-1 \u00a0 Common backtracking algorithm terminology </p> Term Definition Example Three Solution (solution) A solution is an answer that satisfies specific conditions of the problem, which may have one or more All paths from the root node to node \\(7\\) that meet the constraint Constraint (constraint) Constraints are conditions in the problem that limit the feasibility of solutions, often used for pruning Paths do not contain node \\(3\\) State (state) State represents the situation of the problem at a certain moment, including choices made Current visited node path, i.e., <code>path</code> node list Attempt (attempt) An attempt is the process of exploring the solution space based on available choices, including making choices, updating the state, and checking if it's a solution Recursively visiting left (right) child nodes, adding nodes to <code>path</code>, checking if the node's value is \\(7\\) Backtracking (backtracking) Backtracking refers to the action of undoing previous choices and returning to the previous state when encountering states that do not meet the constraints When passing leaf nodes, ending node visits, encountering nodes with a value of \\(3\\), terminating the search, and function return Pruning (pruning) Pruning is a method to avoid meaningless search paths based on the characteristics and constraints of the problem, which can enhance search efficiency When encountering a node with a value of \\(3\\), no further search is continued <p>Tip</p> 
<p>Concepts like problems, solutions, states, etc., are universal, and are involved in divide and conquer, backtracking, dynamic programming, and greedy algorithms, among others.</p>"},{"location":"chapter_backtracking/backtracking_algorithm/#1315-advantages-and-limitations","title":"13.1.5 \u00a0 Advantages and limitations","text":"<p>The backtracking algorithm is essentially a depth-first search algorithm that attempts all possible solutions until a satisfying solution is found. The advantage of this method is that it can find all possible solutions, and with reasonable pruning operations, it can be highly efficient.</p> <p>However, when dealing with large-scale or complex problems, the operational efficiency of backtracking may be difficult to accept.</p> <ul> <li>Time: Backtracking algorithms usually need to traverse all possible states in the state space, which can reach exponential or factorial time complexity.</li> <li>Space: In recursive calls, it is necessary to save the current state (such as paths, auxiliary variables for pruning, etc.). When the depth is very large, the space requirement may become significant.</li> </ul> <p>Even so, backtracking remains the best solution for certain search problems and constraint satisfaction problems. For these problems, since it is unpredictable which choices can generate valid solutions, we must traverse all possible choices. 
In this case, the key is how to optimize efficiency; two optimization methods are commonly used.</p> <ul> <li>Pruning: Avoid searching paths that definitely will not produce a solution, thus saving time and space.</li> <li>Heuristic search: Introduce some strategies or estimates during the search process to prioritize the paths that are most likely to produce valid solutions.</li> </ul>"},{"location":"chapter_backtracking/backtracking_algorithm/#1316-typical-backtracking-problems","title":"13.1.6 \u00a0 Typical backtracking problems","text":"<p>Backtracking algorithms can be used to solve many search problems, constraint satisfaction problems, and combinatorial optimization problems.</p> <p>Search problems: The goal of these problems is to find solutions that meet specific conditions.</p> <ul> <li>Full permutation problem: Given a set, find all possible permutations and combinations of it.</li> <li>Subset sum problem: Given a set and a target sum, find all subsets of the set that sum to the target.</li> <li>Tower of Hanoi problem: Given three rods and a series of different-sized discs, the goal is to move all the discs from one rod to another, moving only one disc at a time, and never placing a larger disc on a smaller one.</li> </ul> <p>Constraint satisfaction problems: The goal of these problems is to find solutions that satisfy all the constraints.</p> <ul> <li>\(n\) queens: Place \(n\) queens on an \(n \times n\) chessboard so that they do not attack each other.</li> <li>Sudoku: Fill a \(9 \times 9\) grid with the numbers \(1\) to \(9\), ensuring that the numbers do not repeat in each row, each column, and each \(3 \times 3\) subgrid.</li> <li>Graph coloring problem: Given an undirected graph, color each vertex with the fewest possible colors so that adjacent vertices have different colors.</li> </ul> <p>Combinatorial optimization problems: The goal of these problems is to find the optimal solution within a combination space 
that meets certain conditions.</p> <ul> <li>0-1 knapsack problem: Given a set of items and a backpack, each item has a certain value and weight. The goal is to choose items to maximize the total value within the backpack's capacity limit.</li> <li>Traveling salesman problem: In a graph, starting from one point, visit all other points exactly once and then return to the starting point, seeking the shortest path.</li> <li>Maximum clique problem: Given an undirected graph, find the largest complete subgraph, i.e., a subgraph where any two vertices are connected by an edge.</li> </ul> <p>Please note that for many combinatorial optimization problems, backtracking is not the optimal solution.</p> <ul> <li>The 0-1 knapsack problem is usually solved using dynamic programming to achieve higher time efficiency.</li> <li>The traveling salesman problem is a well-known NP-hard problem, commonly solved using genetic algorithms and ant colony algorithms, among others.</li> <li>The maximum clique problem is a classic problem in graph theory, which can be solved using greedy algorithms and other heuristic methods.</li> </ul>"},{"location":"chapter_backtracking/n_queens_problem/","title":"13.4 \u00a0 n queens problem","text":"<p>Question</p> <p>According to the rules of chess, a queen can attack pieces in the same row, column, or on a diagonal line. Given \(n\) queens and an \(n \times n\) chessboard, find arrangements where no two queens can attack each other.</p> <p>As shown in Figure 13-15, when \(n = 4\), there are two solutions. From the perspective of the backtracking algorithm, an \(n \times n\) chessboard has \(n^2\) squares, presenting all possible choices <code>choices</code>. The state of the chessboard <code>state</code> changes continuously as each queen is placed.</p> <p></p> <p> Figure 13-15 \u00a0 Solution to the 4 queens problem </p> <p>Figure 13-16 shows the three constraints of this problem: multiple queens cannot be on the same row, column, or diagonal. 
It is important to note that diagonals are divided into the main diagonal <code>\\</code> and the secondary diagonal <code>/</code>.</p> <p></p> <p> Figure 13-16 \u00a0 Constraints of the n queens problem </p>"},{"location":"chapter_backtracking/n_queens_problem/#1-row-by-row-placing-strategy","title":"1. \u00a0 Row-by-row placing strategy","text":"<p>As the number of queens equals the number of rows on the chessboard, both being \(n\), it is easy to conclude: each row of the chessboard must contain exactly one queen.</p> <p>This means that we can adopt a row-by-row placing strategy: starting from the first row, place one queen per row until the last row is reached.</p> <p>Figure 13-17 shows the row-by-row placing process for the 4 queens problem. Due to space limitations, the figure only expands one search branch of the first row, and prunes any placements that do not meet the column and diagonal constraints.</p> <p></p> <p> Figure 13-17 \u00a0 Row-by-row placing strategy </p> <p>Essentially, the row-by-row placing strategy serves as a pruning function, avoiding all search branches that would place multiple queens in the same row.</p>"},{"location":"chapter_backtracking/n_queens_problem/#2-column-and-diagonal-pruning","title":"2. \u00a0 Column and diagonal pruning","text":"<p>To satisfy column constraints, we can use a boolean array <code>cols</code> of length \(n\) to track whether a queen occupies each column. Before each placement decision, <code>cols</code> is used to prune the columns that already have queens, and it is dynamically updated during backtracking.</p> <p>How about the diagonal constraints? Let the row and column indices of a cell on the chessboard be \((row, col)\). 
By selecting a specific main diagonal, we notice that the difference \\(row - col\\) is the same for all cells on that diagonal, meaning that \\(row - col\\) is a constant value on that diagonal.</p> <p>Thus, if two cells satisfy \\(row_1 - col_1 = row_2 - col_2\\), they are definitely on the same main diagonal. Using this pattern, we can utilize the array <code>diags1</code> shown in Figure 13-18 to track whether a queen is on any main diagonal.</p> <p>Similarly, the sum \\(row + col\\) is a constant value for all cells on a secondary diagonal. We can also use the array <code>diags2</code> to handle secondary diagonal constraints.</p> <p></p> <p> Figure 13-18 \u00a0 Handling column and diagonal constraints </p>"},{"location":"chapter_backtracking/n_queens_problem/#3-code-implementation","title":"3. \u00a0 Code implementation","text":"<p>Please note, in an \\(n\\)-dimensional matrix, the range of \\(row - col\\) is \\([-n + 1, n - 1]\\), and the range of \\(row + col\\) is \\([0, 2n - 2]\\), thus the number of both main and secondary diagonals is \\(2n - 1\\), meaning the length of both arrays <code>diags1</code> and <code>diags2</code> is \\(2n - 1\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig n_queens.py<pre><code>def backtrack(\n row: int,\n n: int,\n state: list[list[str]],\n res: list[list[list[str]]],\n cols: list[bool],\n diags1: list[bool],\n diags2: list[bool],\n):\n \"\"\"Backtracking algorithm: n queens\"\"\"\n # When all rows are placed, record the solution\n if row == n:\n res.append([list(row) for row in state])\n return\n # Traverse all columns\n for col in range(n):\n # Calculate the main and minor diagonals corresponding to the cell\n diag1 = row - col + n - 1\n diag2 = row + col\n # Pruning: do not allow queens on the column, main diagonal, or minor diagonal of the cell\n if not cols[col] and not diags1[diag1] and not diags2[diag2]:\n # Attempt: place the queen in the cell\n state[row][col] = \"Q\"\n cols[col] = diags1[diag1] = 
diags2[diag2] = True\n # Place the next row\n backtrack(row + 1, n, state, res, cols, diags1, diags2)\n # Retract: restore the cell to an empty spot\n state[row][col] = \"#\"\n cols[col] = diags1[diag1] = diags2[diag2] = False\n\ndef n_queens(n: int) -> list[list[list[str]]]:\n \"\"\"Solve n queens\"\"\"\n # Initialize an n*n size chessboard, where 'Q' represents the queen and '#' represents an empty spot\n state = [[\"#\" for _ in range(n)] for _ in range(n)]\n cols = [False] * n # Record columns with queens\n diags1 = [False] * (2 * n - 1) # Record main diagonals with queens\n diags2 = [False] * (2 * n - 1) # Record minor diagonals with queens\n res = []\n backtrack(0, n, state, res, cols, diags1, diags2)\n\n return res\n</code></pre> n_queens.cpp<pre><code>/* Backtracking algorithm: n queens */\nvoid backtrack(int row, int n, vector<vector<string>> &state, vector<vector<vector<string>>> &res, vector<bool> &cols,\n vector<bool> &diags1, vector<bool> &diags2) {\n // When all rows are placed, record the solution\n if (row == n) {\n res.push_back(state);\n return;\n }\n // Traverse all columns\n for (int col = 0; col < n; col++) {\n // Calculate the main and minor diagonals corresponding to the cell\n int diag1 = row - col + n - 1;\n int diag2 = row + col;\n // Pruning: do not allow queens on the column, main diagonal, or minor diagonal of the cell\n if (!cols[col] && !diags1[diag1] && !diags2[diag2]) {\n // Attempt: place the queen in the cell\n state[row][col] = \"Q\";\n cols[col] = diags1[diag1] = diags2[diag2] = true;\n // Place the next row\n backtrack(row + 1, n, state, res, cols, diags1, diags2);\n // Retract: restore the cell to an empty spot\n state[row][col] = \"#\";\n cols[col] = diags1[diag1] = diags2[diag2] = false;\n }\n }\n}\n\n/* Solve n queens */\nvector<vector<vector<string>>> nQueens(int n) {\n // Initialize an n*n size chessboard, where 'Q' represents the queen and '#' represents an empty spot\n vector<vector<string>> state(n, vector<string>(n, 
\"#\"));\n vector<bool> cols(n, false); // Record columns with queens\n vector<bool> diags1(2 * n - 1, false); // Record main diagonals with queens\n vector<bool> diags2(2 * n - 1, false); // Record minor diagonals with queens\n vector<vector<vector<string>>> res;\n\n backtrack(0, n, state, res, cols, diags1, diags2);\n\n return res;\n}\n</code></pre> n_queens.java<pre><code>/* Backtracking algorithm: n queens */\nvoid backtrack(int row, int n, List<List<String>> state, List<List<List<String>>> res,\n boolean[] cols, boolean[] diags1, boolean[] diags2) {\n // When all rows are placed, record the solution\n if (row == n) {\n List<List<String>> copyState = new ArrayList<>();\n for (List<String> sRow : state) {\n copyState.add(new ArrayList<>(sRow));\n }\n res.add(copyState);\n return;\n }\n // Traverse all columns\n for (int col = 0; col < n; col++) {\n // Calculate the main and minor diagonals corresponding to the cell\n int diag1 = row - col + n - 1;\n int diag2 = row + col;\n // Pruning: do not allow queens on the column, main diagonal, or minor diagonal of the cell\n if (!cols[col] && !diags1[diag1] && !diags2[diag2]) {\n // Attempt: place the queen in the cell\n state.get(row).set(col, \"Q\");\n cols[col] = diags1[diag1] = diags2[diag2] = true;\n // Place the next row\n backtrack(row + 1, n, state, res, cols, diags1, diags2);\n // Retract: restore the cell to an empty spot\n state.get(row).set(col, \"#\");\n cols[col] = diags1[diag1] = diags2[diag2] = false;\n }\n }\n}\n\n/* Solve n queens */\nList<List<List<String>>> nQueens(int n) {\n // Initialize an n*n size chessboard, where 'Q' represents the queen and '#' represents an empty spot\n List<List<String>> state = new ArrayList<>();\n for (int i = 0; i < n; i++) {\n List<String> row = new ArrayList<>();\n for (int j = 0; j < n; j++) {\n row.add(\"#\");\n }\n state.add(row);\n }\n boolean[] cols = new boolean[n]; // Record columns with queens\n boolean[] diags1 = new boolean[2 * n - 1]; // Record main diagonals 
with queens\n boolean[] diags2 = new boolean[2 * n - 1]; // Record minor diagonals with queens\n List<List<List<String>>> res = new ArrayList<>();\n\n backtrack(0, n, state, res, cols, diags1, diags2);\n\n return res;\n}\n</code></pre> n_queens.cs<pre><code>[class]{n_queens}-[func]{Backtrack}\n\n[class]{n_queens}-[func]{NQueens}\n</code></pre> n_queens.go<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{n_queens}\n</code></pre> n_queens.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> n_queens.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{n_queens}\n</code></pre> n_queens.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{nQueens}\n</code></pre> <p>Placing \\(n\\) queens row-by-row, considering column constraints, from the first row to the last row there are \\(n\\), \\(n-1\\), \\(\\dots\\), \\(2\\), \\(1\\) choices, using \\(O(n!)\\) time. When recording a solution, it is necessary to copy the matrix <code>state</code> and add it to <code>res</code>, with the copying operation using \\(O(n^2)\\) time. Therefore, the overall time complexity is \\(O(n! \\cdot n^2)\\). 
In practice, pruning based on diagonal constraints can significantly reduce the search space, thus often the search efficiency is better than the above time complexity.</p> <p>Array <code>state</code> uses \\(O(n^2)\\) space, and arrays <code>cols</code>, <code>diags1</code>, and <code>diags2</code> each use \\(O(n)\\) space. The maximum recursion depth is \\(n\\), using \\(O(n)\\) stack space. Therefore, the space complexity is \\(O(n^2)\\).</p>"},{"location":"chapter_backtracking/permutations_problem/","title":"13.2 \u00a0 Permutation problem","text":"<p>The permutation problem is a typical application of the backtracking algorithm. It is defined as finding all possible arrangements of elements from a given set (such as an array or string).</p> <p>Table 13-2 lists several example data, including the input arrays and their corresponding permutations.</p> <p> Table 13-2 \u00a0 Permutation examples </p> Input array Permutations \\([1]\\) \\([1]\\) \\([1, 2]\\) \\([1, 2], [2, 1]\\) \\([1, 2, 3]\\) \\([1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]\\)"},{"location":"chapter_backtracking/permutations_problem/#1321-cases-without-equal-elements","title":"13.2.1 \u00a0 Cases without equal elements","text":"<p>Question</p> <p>Enter an integer array without duplicate elements and return all possible permutations.</p> <p>From the perspective of the backtracking algorithm, we can imagine the process of generating permutations as a series of choices. Suppose the input array is \\([1, 2, 3]\\), if we first choose \\(1\\), then \\(3\\), and finally \\(2\\), we obtain the permutation \\([1, 3, 2]\\). Backtracking means undoing a choice and then continuing to try other choices.</p> <p>From the code perspective, the candidate set <code>choices</code> contains all elements of the input array, and the state <code>state</code> contains elements that have been selected so far. 
Please note that each element can only be chosen once, thus all elements in <code>state</code> must be unique.</p> <p>As shown in Figure 13-5, we can unfold the search process into a recursive tree, where each node represents the current state <code>state</code>. Starting from the root node, after three rounds of choices, we reach the leaf nodes, each corresponding to a permutation.</p> <p></p> <p> Figure 13-5 \u00a0 Permutation recursive tree </p>"},{"location":"chapter_backtracking/permutations_problem/#1-pruning-of-repeated-choices","title":"1. \u00a0 Pruning of repeated choices","text":"<p>To ensure that each element is selected only once, we consider introducing a boolean array <code>selected</code>, where <code>selected[i]</code> indicates whether <code>choices[i]</code> has been selected. We base our pruning operations on this array:</p> <ul> <li>After making the choice <code>choices[i]</code>, we set <code>selected[i]</code> to \(\text{True}\), indicating it has been chosen.</li> <li>When iterating through the choice list <code>choices</code>, skip all elements that have already been selected, i.e., prune.</li> </ul> <p>As shown in Figure 13-6, suppose we choose 1 in the first round, 3 in the second round, and 2 in the third round; then we need to prune the branch of element 1 in the second round and elements 1 and 3 in the third round.</p> <p></p> <p> Figure 13-6 \u00a0 Permutation pruning example </p> <p>Observing Figure 13-6, this pruning operation reduces the search space size from \(O(n^n)\) to \(O(n!)\).</p>"},{"location":"chapter_backtracking/permutations_problem/#2-code-implementation","title":"2. \u00a0 Code implementation","text":"<p>After understanding the above information, we can \"fill in the blanks\" in the framework code. 
To shorten the overall code, we do not implement individual functions within the framework code separately, but expand them in the <code>backtrack()</code> function:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig permutations_i.py<pre><code>def backtrack(\n state: list[int], choices: list[int], selected: list[bool], res: list[list[int]]\n):\n \"\"\"Backtracking algorithm: Permutation I\"\"\"\n # When the state length equals the number of elements, record the solution\n if len(state) == len(choices):\n res.append(list(state))\n return\n # Traverse all choices\n for i, choice in enumerate(choices):\n # Pruning: do not allow repeated selection of elements\n if not selected[i]:\n # Attempt: make a choice, update the state\n selected[i] = True\n state.append(choice)\n # Proceed to the next round of selection\n backtrack(state, choices, selected, res)\n # Retract: undo the choice, restore to the previous state\n selected[i] = False\n state.pop()\n\ndef permutations_i(nums: list[int]) -> list[list[int]]:\n \"\"\"Permutation I\"\"\"\n res = []\n backtrack(state=[], choices=nums, selected=[False] * len(nums), res=res)\n return res\n</code></pre> permutations_i.cpp<pre><code>/* Backtracking algorithm: Permutation I */\nvoid backtrack(vector<int> &state, const vector<int> &choices, vector<bool> &selected, vector<vector<int>> &res) {\n // When the state length equals the number of elements, record the solution\n if (state.size() == choices.size()) {\n res.push_back(state);\n return;\n }\n // Traverse all choices\n for (int i = 0; i < choices.size(); i++) {\n int choice = choices[i];\n // Pruning: do not allow repeated selection of elements\n if (!selected[i]) {\n // Attempt: make a choice, update the state\n selected[i] = true;\n state.push_back(choice);\n // Proceed to the next round of selection\n backtrack(state, choices, selected, res);\n // Retract: undo the choice, restore to the previous state\n selected[i] = false;\n state.pop_back();\n }\n }\n}\n\n/* Permutation 
I */\nvector<vector<int>> permutationsI(vector<int> nums) {\n vector<int> state;\n vector<bool> selected(nums.size(), false);\n vector<vector<int>> res;\n backtrack(state, nums, selected, res);\n return res;\n}\n</code></pre> permutations_i.java<pre><code>/* Backtracking algorithm: Permutation I */\nvoid backtrack(List<Integer> state, int[] choices, boolean[] selected, List<List<Integer>> res) {\n // When the state length equals the number of elements, record the solution\n if (state.size() == choices.length) {\n res.add(new ArrayList<Integer>(state));\n return;\n }\n // Traverse all choices\n for (int i = 0; i < choices.length; i++) {\n int choice = choices[i];\n // Pruning: do not allow repeated selection of elements\n if (!selected[i]) {\n // Attempt: make a choice, update the state\n selected[i] = true;\n state.add(choice);\n // Proceed to the next round of selection\n backtrack(state, choices, selected, res);\n // Retract: undo the choice, restore to the previous state\n selected[i] = false;\n state.remove(state.size() - 1);\n }\n }\n}\n\n/* Permutation I */\nList<List<Integer>> permutationsI(int[] nums) {\n List<List<Integer>> res = new ArrayList<List<Integer>>();\n backtrack(new ArrayList<Integer>(), nums, new boolean[nums.length], res);\n return res;\n}\n</code></pre> permutations_i.cs<pre><code>[class]{permutations_i}-[func]{Backtrack}\n\n[class]{permutations_i}-[func]{PermutationsI}\n</code></pre> permutations_i.go<pre><code>[class]{}-[func]{backtrackI}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> 
permutations_i.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutations_i}\n</code></pre> permutations_i.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre> permutations_i.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutations_i}\n</code></pre> permutations_i.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsI}\n</code></pre>"},{"location":"chapter_backtracking/permutations_problem/#1322-considering-cases-with-equal-elements","title":"13.2.2 \u00a0 Considering cases with equal elements","text":"<p>Question</p> <p>Enter an integer array, which may contain duplicate elements, and return all unique permutations.</p> <p>Suppose the input array is \\([1, 1, 2]\\). To differentiate the two duplicate elements \\(1\\), we mark the second \\(1\\) as \\(\\hat{1}\\).</p> <p>As shown in Figure 13-7, half of the permutations generated by the above method are duplicates.</p> <p></p> <p> Figure 13-7 \u00a0 Duplicate permutations </p> <p>So, how do we eliminate duplicate permutations? Most directly, consider using a hash set to deduplicate permutation results. However, this is not elegant, as branches generating duplicate permutations are unnecessary and should be identified and pruned in advance, which can further improve algorithm efficiency.</p>"},{"location":"chapter_backtracking/permutations_problem/#1-pruning-of-equal-elements","title":"1. 
\u00a0 Pruning of equal elements","text":"<p>Observing Figure 13-8, in the first round, choosing \(1\) or \(\hat{1}\) results in identical permutations under both choices; thus, we should prune \(\hat{1}\).</p> <p>Similarly, after choosing \(2\) in the first round, choosing \(1\) and \(\hat{1}\) in the second round also produces duplicate branches, so we should also prune \(\hat{1}\) in the second round.</p> <p>Essentially, our goal is to ensure that multiple equal elements are only selected once in each round of choices.</p> <p></p> <p> Figure 13-8 \u00a0 Duplicate permutations pruning </p>"},{"location":"chapter_backtracking/permutations_problem/#2-code-implementation_1","title":"2. \u00a0 Code implementation","text":"<p>Based on the code from the previous problem, we consider initializing a hash set <code>duplicated</code> in each round of choices, used to record elements that have been tried in that round, and prune duplicate elements:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig permutations_ii.py<pre><code>def backtrack(\n state: list[int], choices: list[int], selected: list[bool], res: list[list[int]]\n):\n \"\"\"Backtracking algorithm: Permutation II\"\"\"\n # When the state length equals the number of elements, record the solution\n if len(state) == len(choices):\n res.append(list(state))\n return\n # Traverse all choices\n duplicated = set[int]()\n for i, choice in enumerate(choices):\n # Pruning: do not allow repeated selection of elements and do not allow repeated selection of equal elements\n if not selected[i] and choice not in duplicated:\n # Attempt: make a choice, update the state\n duplicated.add(choice) # Record selected element values\n selected[i] = True\n state.append(choice)\n # Proceed to the next round of selection\n backtrack(state, choices, selected, res)\n # Retract: undo the choice, restore to the previous state\n selected[i] = False\n state.pop()\n\ndef permutations_ii(nums: list[int]) -> list[list[int]]:\n 
\"\"\"Permutation II\"\"\"\n res = []\n backtrack(state=[], choices=nums, selected=[False] * len(nums), res=res)\n return res\n</code></pre> permutations_ii.cpp<pre><code>/* Backtracking algorithm: Permutation II */\nvoid backtrack(vector<int> &state, const vector<int> &choices, vector<bool> &selected, vector<vector<int>> &res) {\n // When the state length equals the number of elements, record the solution\n if (state.size() == choices.size()) {\n res.push_back(state);\n return;\n }\n // Traverse all choices\n unordered_set<int> duplicated;\n for (int i = 0; i < choices.size(); i++) {\n int choice = choices[i];\n // Pruning: do not allow repeated selection of elements and do not allow repeated selection of equal elements\n if (!selected[i] && duplicated.find(choice) == duplicated.end()) {\n // Attempt: make a choice, update the state\n duplicated.emplace(choice); // Record selected element values\n selected[i] = true;\n state.push_back(choice);\n // Proceed to the next round of selection\n backtrack(state, choices, selected, res);\n // Retract: undo the choice, restore to the previous state\n selected[i] = false;\n state.pop_back();\n }\n }\n}\n\n/* Permutation II */\nvector<vector<int>> permutationsII(vector<int> nums) {\n vector<int> state;\n vector<bool> selected(nums.size(), false);\n vector<vector<int>> res;\n backtrack(state, nums, selected, res);\n return res;\n}\n</code></pre> permutations_ii.java<pre><code>/* Backtracking algorithm: Permutation II */\nvoid backtrack(List<Integer> state, int[] choices, boolean[] selected, List<List<Integer>> res) {\n // When the state length equals the number of elements, record the solution\n if (state.size() == choices.length) {\n res.add(new ArrayList<Integer>(state));\n return;\n }\n // Traverse all choices\n Set<Integer> duplicated = new HashSet<Integer>();\n for (int i = 0; i < choices.length; i++) {\n int choice = choices[i];\n // Pruning: do not allow repeated selection of elements and do not allow repeated 
selection of equal elements\n if (!selected[i] && !duplicated.contains(choice)) {\n // Attempt: make a choice, update the state\n duplicated.add(choice); // Record selected element values\n selected[i] = true;\n state.add(choice);\n // Proceed to the next round of selection\n backtrack(state, choices, selected, res);\n // Retract: undo the choice, restore to the previous state\n selected[i] = false;\n state.remove(state.size() - 1);\n }\n }\n}\n\n/* Permutation II */\nList<List<Integer>> permutationsII(int[] nums) {\n List<List<Integer>> res = new ArrayList<List<Integer>>();\n backtrack(new ArrayList<Integer>(), nums, new boolean[nums.length], res);\n return res;\n}\n</code></pre> permutations_ii.cs<pre><code>[class]{permutations_ii}-[func]{Backtrack}\n\n[class]{permutations_ii}-[func]{PermutationsII}\n</code></pre> permutations_ii.go<pre><code>[class]{}-[func]{backtrackII}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutations_ii}\n</code></pre> permutations_ii.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> permutations_ii.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutations_ii}\n</code></pre> permutations_ii.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{permutationsII}\n</code></pre> <p>Assuming all elements are distinct from each other, there are \\(n!\\) 
(factorial) permutations of \(n\) elements; when recording results, it is necessary to copy a list of length \(n\), using \(O(n)\) time. Thus, the time complexity is \(O(n!n)\).</p> <p>The maximum recursion depth is \(n\), using \(O(n)\) frame space. The <code>selected</code> list uses \(O(n)\) space. At any one time, there can be up to \(n\) <code>duplicated</code> sets, using \(O(n^2)\) space. Therefore, the space complexity is \(O(n^2)\).</p>"},{"location":"chapter_backtracking/permutations_problem/#3-comparison-of-the-two-pruning-methods","title":"3. \u00a0 Comparison of the two pruning methods","text":"<p>Please note that although both <code>selected</code> and <code>duplicated</code> are used for pruning, their targets are different.</p> <ul> <li>Repeated choice pruning: There is only one <code>selected</code> throughout the search process. It records which elements are currently in the state, aiming to prevent an element from appearing repeatedly in <code>state</code>.</li> <li>Equal element pruning: Each round of choices (each call to the <code>backtrack</code> function) contains a <code>duplicated</code>. It records which elements have been chosen in the current traversal (<code>for</code> loop), aiming to ensure equal elements are selected only once.</li> </ul> <p>Figure 13-9 shows the scope of the two pruning conditions. 
Note, each node in the tree represents a choice, and the nodes from the root to the leaf form a permutation.</p> <p></p> <p> Figure 13-9 \u00a0 Scope of the two pruning conditions </p>"},{"location":"chapter_backtracking/subset_sum_problem/","title":"13.3 \u00a0 Subset sum problem","text":""},{"location":"chapter_backtracking/subset_sum_problem/#1331-case-without-duplicate-elements","title":"13.3.1 \u00a0 Case without duplicate elements","text":"<p>Question</p> <p>Given an array of positive integers <code>nums</code> and a target positive integer <code>target</code>, find all possible combinations such that the sum of the elements in the combination equals <code>target</code>. The given array has no duplicate elements, and each element can be chosen multiple times. Please return these combinations as a list, which should not contain duplicate combinations.</p> <p>For example, for the input set \\(\\{3, 4, 5\\}\\) and target integer \\(9\\), the solutions are \\(\\{3, 3, 3\\}, \\{4, 5\\}\\). Note the following two points.</p> <ul> <li>Elements in the input set can be chosen an unlimited number of times.</li> <li>Subsets do not distinguish the order of elements, for example \\(\\{4, 5\\}\\) and \\(\\{5, 4\\}\\) are the same subset.</li> </ul>"},{"location":"chapter_backtracking/subset_sum_problem/#1-reference-permutation-solution","title":"1. \u00a0 Reference permutation solution","text":"<p>Similar to the permutation problem, we can imagine the generation of subsets as a series of choices, updating the \"element sum\" in real-time during the choice process. When the element sum equals <code>target</code>, the subset is recorded in the result list.</p> <p>Unlike the permutation problem, elements in this problem can be chosen an unlimited number of times, thus there is no need to use a <code>selected</code> boolean list to record whether an element has been chosen. 
We can make minor modifications to the permutation code to initially solve the problem:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig subset_sum_i_naive.py<pre><code>def backtrack(\n state: list[int],\n target: int,\n total: int,\n choices: list[int],\n res: list[list[int]],\n):\n \"\"\"Backtracking algorithm: Subset Sum I\"\"\"\n # When the subset sum equals target, record the solution\n if total == target:\n res.append(list(state))\n return\n # Traverse all choices\n for i in range(len(choices)):\n # Pruning: if the subset sum exceeds target, skip that choice\n if total + choices[i] > target:\n continue\n # Attempt: make a choice, update elements and total\n state.append(choices[i])\n # Proceed to the next round of selection\n backtrack(state, target, total + choices[i], choices, res)\n # Retract: undo the choice, restore to the previous state\n state.pop()\n\ndef subset_sum_i_naive(nums: list[int], target: int) -> list[list[int]]:\n \"\"\"Solve Subset Sum I (including duplicate subsets)\"\"\"\n state = [] # State (subset)\n total = 0 # Subset sum\n res = [] # Result list (subset list)\n backtrack(state, target, total, nums, res)\n return res\n</code></pre> subset_sum_i_naive.cpp<pre><code>/* Backtracking algorithm: Subset Sum I */\nvoid backtrack(vector<int> &state, int target, int total, vector<int> &choices, vector<vector<int>> &res) {\n // When the subset sum equals target, record the solution\n if (total == target) {\n res.push_back(state);\n return;\n }\n // Traverse all choices\n for (size_t i = 0; i < choices.size(); i++) {\n // Pruning: if the subset sum exceeds target, skip that choice\n if (total + choices[i] > target) {\n continue;\n }\n // Attempt: make a choice, update elements and total\n state.push_back(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target, total + choices[i], choices, res);\n // Retract: undo the choice, restore to the previous state\n state.pop_back();\n }\n}\n\n/* Solve Subset Sum I 
(including duplicate subsets) */\nvector<vector<int>> subsetSumINaive(vector<int> &nums, int target) {\n vector<int> state; // State (subset)\n int total = 0; // Subset sum\n vector<vector<int>> res; // Result list (subset list)\n backtrack(state, target, total, nums, res);\n return res;\n}\n</code></pre> subset_sum_i_naive.java<pre><code>/* Backtracking algorithm: Subset Sum I */\nvoid backtrack(List<Integer> state, int target, int total, int[] choices, List<List<Integer>> res) {\n // When the subset sum equals target, record the solution\n if (total == target) {\n res.add(new ArrayList<>(state));\n return;\n }\n // Traverse all choices\n for (int i = 0; i < choices.length; i++) {\n // Pruning: if the subset sum exceeds target, skip that choice\n if (total + choices[i] > target) {\n continue;\n }\n // Attempt: make a choice, update elements and total\n state.add(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target, total + choices[i], choices, res);\n // Retract: undo the choice, restore to the previous state\n state.remove(state.size() - 1);\n }\n}\n\n/* Solve Subset Sum I (including duplicate subsets) */\nList<List<Integer>> subsetSumINaive(int[] nums, int target) {\n List<Integer> state = new ArrayList<>(); // State (subset)\n int total = 0; // Subset sum\n List<List<Integer>> res = new ArrayList<>(); // Result list (subset list)\n backtrack(state, target, total, nums, res);\n return res;\n}\n</code></pre> subset_sum_i_naive.cs<pre><code>[class]{subset_sum_i_naive}-[func]{Backtrack}\n\n[class]{subset_sum_i_naive}-[func]{SubsetSumINaive}\n</code></pre> subset_sum_i_naive.go<pre><code>[class]{}-[func]{backtrackSubsetSumINaive}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> 
subset_sum_i_naive.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_i_naive}\n</code></pre> subset_sum_i_naive.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> subset_sum_i_naive.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_i_naive}\n</code></pre> subset_sum_i_naive.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumINaive}\n</code></pre> <p>Inputting the array \([3, 4, 5]\) and target element \(9\) into the above code yields the results \([3, 3, 3], [4, 5], [5, 4]\). Although it successfully finds all subsets with a sum of \(9\), it includes the duplicate subsets \([4, 5]\) and \([5, 4]\).</p> <p>This is because the search process distinguishes the order of choices; however, subsets do not distinguish the choice order. As shown in Figure 13-10, choosing \(4\) before \(5\) and choosing \(5\) before \(4\) are different branches, but correspond to the same subset.</p> <p></p> <p> Figure 13-10 \u00a0 Subset search and pruning out of bounds </p> <p>To eliminate duplicate subsets, a straightforward idea is to deduplicate the result list. 
However, this method is very inefficient for two reasons.</p> <ul> <li>When there are many array elements, especially when <code>target</code> is large, the search process produces a large number of duplicate subsets.</li> <li>Comparing subsets (arrays) for equality is very time-consuming, requiring the arrays to be sorted first and then compared element by element.</li> </ul>"},{"location":"chapter_backtracking/subset_sum_problem/#2-duplicate-subset-pruning","title":"2. \u00a0 Duplicate subset pruning","text":"<p>We consider deduplication during the search process through pruning. Observing Figure 13-11, duplicate subsets are generated when choosing array elements in different orders, for example, in the following situations.</p> <ol> <li>When choosing \(3\) in the first round and \(4\) in the second round, all subsets containing these two elements are generated, denoted as \([3, 4, \dots]\).</li> <li>Later, when \(4\) is chosen in the first round, the second round should skip \(3\) because the subset \([4, 3, \dots]\) generated by this choice completely duplicates the subset from step <code>1.</code>.</li> </ol> <p>In the search process, each layer's choices are tried one by one from left to right, so the more to the right a branch is, the more it is pruned.</p> <ol> <li>First two rounds choose \(3\) and \(5\), generating subset \([3, 5, \dots]\).</li> <li>First two rounds choose \(4\) and \(5\), generating subset \([4, 5, \dots]\).</li> <li>If \(5\) is chosen in the first round, then the second round should skip \(3\) and \(4\) as the subsets \([5, 3, \dots]\) and \([5, 4, \dots]\) completely duplicate the subsets described in steps <code>1.</code> and <code>2.</code>.</li> </ol> <p></p> <p> Figure 13-11 \u00a0 Different choice orders leading to duplicate subsets </p> <p>In summary, given the input array \([x_1, x_2, \dots, x_n]\), the choice sequence in the search process should be \([x_{i_1}, 
x_{i_2}, \dots, x_{i_m}]\), which needs to satisfy \(i_1 \leq i_2 \leq \dots \leq i_m\). Any choice sequence that does not meet this condition will cause duplicates and should be pruned.</p>"},{"location":"chapter_backtracking/subset_sum_problem/#3-code-implementation","title":"3. \u00a0 Code implementation","text":"<p>To implement this pruning, we initialize the variable <code>start</code>, which indicates the starting point for traversal. After making the choice \(x_{i}\), set the next round to start from index \(i\). This will ensure the choice sequence satisfies \(i_1 \leq i_2 \leq \dots \leq i_m\), thereby ensuring the uniqueness of the subsets.</p> <p>In addition, we have made the following two optimizations to the code.</p> <ul> <li>Before starting the search, sort the array <code>nums</code>. In the traversal of all choices, end the loop directly when the subset sum exceeds <code>target</code>, as subsequent elements are larger and their subset sum will definitely exceed <code>target</code>.</li> <li>Eliminate the element sum variable <code>total</code> by performing subtraction on <code>target</code> to track the element sum. 
When <code>target</code> equals \\(0\\), record the solution.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig subset_sum_i.py<pre><code>def backtrack(\n state: list[int], target: int, choices: list[int], start: int, res: list[list[int]]\n):\n \"\"\"Backtracking algorithm: Subset Sum I\"\"\"\n # When the subset sum equals target, record the solution\n if target == 0:\n res.append(list(state))\n return\n # Traverse all choices\n # Pruning two: start traversing from start to avoid generating duplicate subsets\n for i in range(start, len(choices)):\n # Pruning one: if the subset sum exceeds target, end the loop immediately\n # This is because the array is sorted, and later elements are larger, so the subset sum will definitely exceed target\n if target - choices[i] < 0:\n break\n # Attempt: make a choice, update target, start\n state.append(choices[i])\n # Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i, res)\n # Retract: undo the choice, restore to the previous state\n state.pop()\n\ndef subset_sum_i(nums: list[int], target: int) -> list[list[int]]:\n \"\"\"Solve Subset Sum I\"\"\"\n state = [] # State (subset)\n nums.sort() # Sort nums\n start = 0 # Start point for traversal\n res = [] # Result list (subset list)\n backtrack(state, target, nums, start, res)\n return res\n</code></pre> subset_sum_i.cpp<pre><code>/* Backtracking algorithm: Subset Sum I */\nvoid backtrack(vector<int> &state, int target, vector<int> &choices, int start, vector<vector<int>> &res) {\n // When the subset sum equals target, record the solution\n if (target == 0) {\n res.push_back(state);\n return;\n }\n // Traverse all choices\n // Pruning two: start traversing from start to avoid generating duplicate subsets\n for (int i = start; i < choices.size(); i++) {\n // Pruning one: if the subset sum exceeds target, end the loop immediately\n // This is because the array is sorted, and later elements are larger, so the subset sum will definitely 
exceed target\n if (target - choices[i] < 0) {\n break;\n }\n // Attempt: make a choice, update target, start\n state.push_back(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i, res);\n // Retract: undo the choice, restore to the previous state\n state.pop_back();\n }\n}\n\n/* Solve Subset Sum I */\nvector<vector<int>> subsetSumI(vector<int> &nums, int target) {\n vector<int> state; // State (subset)\n sort(nums.begin(), nums.end()); // Sort nums\n int start = 0; // Start point for traversal\n vector<vector<int>> res; // Result list (subset list)\n backtrack(state, target, nums, start, res);\n return res;\n}\n</code></pre> subset_sum_i.java<pre><code>/* Backtracking algorithm: Subset Sum I */\nvoid backtrack(List<Integer> state, int target, int[] choices, int start, List<List<Integer>> res) {\n // When the subset sum equals target, record the solution\n if (target == 0) {\n res.add(new ArrayList<>(state));\n return;\n }\n // Traverse all choices\n // Pruning two: start traversing from start to avoid generating duplicate subsets\n for (int i = start; i < choices.length; i++) {\n // Pruning one: if the subset sum exceeds target, end the loop immediately\n // This is because the array is sorted, and later elements are larger, so the subset sum will definitely exceed target\n if (target - choices[i] < 0) {\n break;\n }\n // Attempt: make a choice, update target, start\n state.add(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i, res);\n // Retract: undo the choice, restore to the previous state\n state.remove(state.size() - 1);\n }\n}\n\n/* Solve Subset Sum I */\nList<List<Integer>> subsetSumI(int[] nums, int target) {\n List<Integer> state = new ArrayList<>(); // State (subset)\n Arrays.sort(nums); // Sort nums\n int start = 0; // Start point for traversal\n List<List<Integer>> res = new ArrayList<>(); // Result list (subset list)\n backtrack(state, 
target, nums, start, res);\n return res;\n}\n</code></pre> subset_sum_i.cs<pre><code>[class]{subset_sum_i}-[func]{Backtrack}\n\n[class]{subset_sum_i}-[func]{SubsetSumI}\n</code></pre> subset_sum_i.go<pre><code>[class]{}-[func]{backtrackSubsetSumI}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_i}\n</code></pre> subset_sum_i.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> subset_sum_i.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_i}\n</code></pre> subset_sum_i.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumI}\n</code></pre> <p>Figure 13-12 shows the overall backtracking process after inputting the array \\([3, 4, 5]\\) and target element \\(9\\) into the above code.</p> <p></p> <p> Figure 13-12 \u00a0 Subset sum I backtracking process </p>"},{"location":"chapter_backtracking/subset_sum_problem/#1332-considering-cases-with-duplicate-elements","title":"13.3.2 \u00a0 Considering cases with duplicate elements","text":"<p>Question</p> <p>Given an array of positive integers <code>nums</code> and a target positive integer <code>target</code>, find all possible combinations such that the sum of the elements in the combination equals <code>target</code>. The given array may contain duplicate elements, and each element can only be chosen once. 
Please return these combinations as a list, which should not contain duplicate combinations.</p> <p>Compared to the previous question, this question's input array may contain duplicate elements, introducing new problems. For example, given the array \([4, \hat{4}, 5]\) and target element \(9\), the existing code outputs \([4, 5], [\hat{4}, 5]\), resulting in duplicate subsets.</p> <p>The reason for this duplication is that equal elements are chosen multiple times in a certain round. In Figure 13-13, the first round has three choices, two of which are \(4\), generating two duplicate search branches, thus outputting duplicate subsets; similarly, the two \(4\)s in the second round also produce duplicate subsets.</p> <p></p> <p> Figure 13-13 \u00a0 Duplicate subsets caused by equal elements </p>"},{"location":"chapter_backtracking/subset_sum_problem/#1-equal-element-pruning","title":"1. \u00a0 Equal element pruning","text":"<p>To solve this issue, we need to limit equal elements to being chosen only once per round. The implementation is quite clever: since the array is sorted, equal elements are adjacent. This means that in a certain round of choices, if the current element equals the element to its left, it has already been chosen, so skip the current element directly.</p> <p>At the same time, this question stipulates that each array element can only be chosen once. Fortunately, we can also use the variable <code>start</code> to meet this constraint: after making the choice \(x_{i}\), set the next round to start from index \(i + 1\). This not only eliminates duplicate subsets but also avoids repeated selection of elements.</p>"},{"location":"chapter_backtracking/subset_sum_problem/#2-code-implementation","title":"2. 
\u00a0 Code implementation","text":"PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig subset_sum_ii.py<pre><code>def backtrack(\n state: list[int], target: int, choices: list[int], start: int, res: list[list[int]]\n):\n \"\"\"Backtracking algorithm: Subset Sum II\"\"\"\n # When the subset sum equals target, record the solution\n if target == 0:\n res.append(list(state))\n return\n # Traverse all choices\n # Pruning two: start traversing from start to avoid generating duplicate subsets\n # Pruning three: start traversing from start to avoid repeatedly selecting the same element\n for i in range(start, len(choices)):\n # Pruning one: if the subset sum exceeds target, end the loop immediately\n # This is because the array is sorted, and later elements are larger, so the subset sum will definitely exceed target\n if target - choices[i] < 0:\n break\n # Pruning four: if the element equals the left element, it indicates that the search branch is repeated, skip it\n if i > start and choices[i] == choices[i - 1]:\n continue\n # Attempt: make a choice, update target, start\n state.append(choices[i])\n # Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i + 1, res)\n # Retract: undo the choice, restore to the previous state\n state.pop()\n\ndef subset_sum_ii(nums: list[int], target: int) -> list[list[int]]:\n \"\"\"Solve Subset Sum II\"\"\"\n state = [] # State (subset)\n nums.sort() # Sort nums\n start = 0 # Start point for traversal\n res = [] # Result list (subset list)\n backtrack(state, target, nums, start, res)\n return res\n</code></pre> subset_sum_ii.cpp<pre><code>/* Backtracking algorithm: Subset Sum II */\nvoid backtrack(vector<int> &state, int target, vector<int> &choices, int start, vector<vector<int>> &res) {\n // When the subset sum equals target, record the solution\n if (target == 0) {\n res.push_back(state);\n return;\n }\n // Traverse all choices\n // Pruning two: start traversing from start to avoid generating duplicate 
subsets\n // Pruning three: start traversing from start to avoid repeatedly selecting the same element\n for (int i = start; i < choices.size(); i++) {\n // Pruning one: if the subset sum exceeds target, end the loop immediately\n // This is because the array is sorted, and later elements are larger, so the subset sum will definitely exceed target\n if (target - choices[i] < 0) {\n break;\n }\n // Pruning four: if the element equals the left element, it indicates that the search branch is repeated, skip it\n if (i > start && choices[i] == choices[i - 1]) {\n continue;\n }\n // Attempt: make a choice, update target, start\n state.push_back(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i + 1, res);\n // Retract: undo the choice, restore to the previous state\n state.pop_back();\n }\n}\n\n/* Solve Subset Sum II */\nvector<vector<int>> subsetSumII(vector<int> &nums, int target) {\n vector<int> state; // State (subset)\n sort(nums.begin(), nums.end()); // Sort nums\n int start = 0; // Start point for traversal\n vector<vector<int>> res; // Result list (subset list)\n backtrack(state, target, nums, start, res);\n return res;\n}\n</code></pre> subset_sum_ii.java<pre><code>/* Backtracking algorithm: Subset Sum II */\nvoid backtrack(List<Integer> state, int target, int[] choices, int start, List<List<Integer>> res) {\n // When the subset sum equals target, record the solution\n if (target == 0) {\n res.add(new ArrayList<>(state));\n return;\n }\n // Traverse all choices\n // Pruning two: start traversing from start to avoid generating duplicate subsets\n // Pruning three: start traversing from start to avoid repeatedly selecting the same element\n for (int i = start; i < choices.length; i++) {\n // Pruning one: if the subset sum exceeds target, end the loop immediately\n // This is because the array is sorted, and later elements are larger, so the subset sum will definitely exceed target\n if (target - choices[i] < 
0) {\n break;\n }\n // Pruning four: if the element equals the left element, it indicates that the search branch is repeated, skip it\n if (i > start && choices[i] == choices[i - 1]) {\n continue;\n }\n // Attempt: make a choice, update target, start\n state.add(choices[i]);\n // Proceed to the next round of selection\n backtrack(state, target - choices[i], choices, i + 1, res);\n // Retract: undo the choice, restore to the previous state\n state.remove(state.size() - 1);\n }\n}\n\n/* Solve Subset Sum II */\nList<List<Integer>> subsetSumII(int[] nums, int target) {\n List<Integer> state = new ArrayList<>(); // State (subset)\n Arrays.sort(nums); // Sort nums\n int start = 0; // Start point for traversal\n List<List<Integer>> res = new ArrayList<>(); // Result list (subset list)\n backtrack(state, target, nums, start, res);\n return res;\n}\n</code></pre> subset_sum_ii.cs<pre><code>[class]{subset_sum_ii}-[func]{Backtrack}\n\n[class]{subset_sum_ii}-[func]{SubsetSumII}\n</code></pre> subset_sum_ii.go<pre><code>[class]{}-[func]{backtrackSubsetSumII}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_ii}\n</code></pre> subset_sum_ii.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> subset_sum_ii.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subset_sum_ii}\n</code></pre> 
subset_sum_ii.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{subsetSumII}\n</code></pre> <p>Figure 13-14 shows the backtracking process for the array \\([4, 4, 5]\\) and target element \\(9\\), including four types of pruning operations. Please combine the illustration with the code comments to understand the entire search process and how each type of pruning operation works.</p> <p></p> <p> Figure 13-14 \u00a0 Subset sum II backtracking process </p>"},{"location":"chapter_backtracking/summary/","title":"13.5 \u00a0 Summary","text":""},{"location":"chapter_backtracking/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>The essence of the backtracking algorithm is an exhaustive search method, where the solution space is traversed deeply first to find solutions that meet the criteria. During the search, if a satisfying solution is found, it is recorded, until all solutions are found or the search is completed.</li> <li>The search process of the backtracking algorithm includes trying and retreating. It uses depth-first search to explore various choices, and when a choice does not meet the constraint conditions, the previous choice is undone, reverting to the previous state, and other options are then continued to be tried. Trying and retreating are operations in opposite directions.</li> <li>Backtracking problems usually contain multiple constraints, which can be used to perform pruning operations. Pruning can terminate unnecessary search branches early, greatly enhancing search efficiency.</li> <li>Backtracking algorithms are mainly used to solve search problems and constraint satisfaction problems. Although combinatorial optimization problems can be solved using backtracking, there are often more efficient or effective solutions available.</li> <li>The permutation problem aims to search for all possible permutations of a given set of elements. 
We use an array to record whether each element has been chosen, cutting off branches that repeatedly select the same element, ensuring each element is selected only once.</li> <li>In permutation problems, if the set contains duplicate elements, the final result will include duplicate permutations. We need to restrict that identical elements can only be selected once in each round, which is usually implemented using a hash set.</li> <li>The subset-sum problem aims to find all subsets in a given set that sum to a target value. The set does not distinguish the order of elements, but the search process outputs all ordered results, producing duplicate subsets. Before backtracking, we sort the data and set a variable to indicate the starting point of each round of traversal, thereby pruning the search branches that generate duplicate subsets.</li> <li>For the subset-sum problem, equal elements in the array can produce duplicate sets. Using the precondition that the array is already sorted, we prune by determining if adjacent elements are equal, thus ensuring equal elements are only selected once per round.</li> <li>The \\(n\\) queens problem aims to find schemes to place \\(n\\) queens on an \\(n \\times n\\) size chessboard in such a way that no two queens can attack each other. The constraints of the problem include row constraints, column constraints, main diagonal constraints, and secondary diagonal constraints. To meet the row constraint, we adopt a strategy of placing one queen per row, ensuring each row has one queen placed.</li> <li>The handling of column constraints and diagonal constraints is similar. For column constraints, we use an array to record whether there is a queen in each column, thereby indicating whether the selected cell is legal. 
For diagonal constraints, we use two arrays to respectively record the presence of queens on the main and secondary diagonals; the challenge lies in identifying the row and column index patterns that satisfy the same main (secondary) diagonal.</li> </ul>"},{"location":"chapter_backtracking/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: How can we understand the relationship between backtracking and recursion?</p> <p>Overall, backtracking is a \"strategic algorithm,\" while recursion is more of a \"tool.\"</p> <ul> <li>Backtracking algorithms are typically based on recursion. However, backtracking is one of the application scenarios of recursion, specifically in search problems.</li> <li>The structure of recursion reflects the \"sub-problem decomposition\" problem-solving paradigm, commonly used in solving problems involving divide and conquer, backtracking, and dynamic programming (memoized recursion).</li> </ul>"},{"location":"chapter_computational_complexity/","title":"Chapter 2. \u00a0 Complexity analysis","text":"<p>Abstract</p> <p>Complexity analysis is like a space-time navigator in the vast universe of algorithms.</p> <p>It guides us in exploring deeper within the dimensions of time and space, seeking more elegant solutions.</p>"},{"location":"chapter_computational_complexity/#chapter-contents","title":"Chapter contents","text":"<ul> <li>2.1 \u00a0 Algorithm efficiency assessment</li> <li>2.2 \u00a0 Iteration and recursion</li> <li>2.3 \u00a0 Time complexity</li> <li>2.4 \u00a0 Space complexity</li> <li>2.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_computational_complexity/iteration_and_recursion/","title":"2.2 \u00a0 Iteration and recursion","text":"<p>In algorithms, the repeated execution of a task is quite common and is closely related to the analysis of complexity. Therefore, before delving into the concepts of time complexity and space complexity, let's first explore how to implement repetitive tasks in programming. 
This involves understanding two fundamental programming control structures: iteration and recursion.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#221-iteration","title":"2.2.1 \u00a0 Iteration","text":"<p>Iteration is a control structure for repeatedly performing a task. In iteration, a program repeats a block of code as long as a certain condition is met until this condition is no longer satisfied.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#1-for-loops","title":"1. \u00a0 For loops","text":"<p>The <code>for</code> loop is one of the most common forms of iteration, and it's particularly suitable when the number of iterations is known in advance.</p> <p>The following function uses a <code>for</code> loop to perform a summation of \\(1 + 2 + \\dots + n\\), with the sum being stored in the variable <code>res</code>. It's important to note that in Python, <code>range(a, b)</code> creates an interval that is inclusive of <code>a</code> but exclusive of <code>b</code>, meaning it iterates over the range from \\(a\\) up to \\(b\u22121\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig iteration.py<pre><code>def for_loop(n: int) -> int:\n \"\"\"for loop\"\"\"\n res = 0\n # Loop sum 1, 2, ..., n-1, n\n for i in range(1, n + 1):\n res += i\n return res\n</code></pre> iteration.cpp<pre><code>/* for loop */\nint forLoop(int n) {\n int res = 0;\n // Loop sum 1, 2, ..., n-1, n\n for (int i = 1; i <= n; ++i) {\n res += i;\n }\n return res;\n}\n</code></pre> iteration.java<pre><code>/* for loop */\nint forLoop(int n) {\n int res = 0;\n // Loop sum 1, 2, ..., n-1, n\n for (int i = 1; i <= n; i++) {\n res += i;\n }\n return res;\n}\n</code></pre> iteration.cs<pre><code>[class]{iteration}-[func]{ForLoop}\n</code></pre> iteration.go<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.swift<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.js<pre><code>[class]{}-[func]{forLoop}\n</code></pre> 
iteration.ts<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.dart<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.rs<pre><code>[class]{}-[func]{for_loop}\n</code></pre> iteration.c<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.kt<pre><code>[class]{}-[func]{forLoop}\n</code></pre> iteration.rb<pre><code>[class]{}-[func]{for_loop}\n</code></pre> iteration.zig<pre><code>[class]{}-[func]{forLoop}\n</code></pre> <p>Figure 2-1 represents this sum function.</p> <p></p> <p> Figure 2-1 \u00a0 Flowchart of the sum function </p> <p>The number of operations in this summation function is proportional to the size of the input data \\(n\\), or in other words, it has a linear relationship. This \"linear relationship\" is what time complexity describes. This topic will be discussed in more detail in the next section.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#2-while-loops","title":"2. \u00a0 While loops","text":"<p>Similar to <code>for</code> loops, <code>while</code> loops are another approach for implementing iteration. 
In a <code>while</code> loop, the program checks a condition at the beginning of each iteration; if the condition is true, the execution continues, otherwise, the loop ends.</p> <p>Below we use a <code>while</code> loop to implement the sum \\(1 + 2 + \\dots + n\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig iteration.py<pre><code>def while_loop(n: int) -> int:\n \"\"\"while loop\"\"\"\n res = 0\n i = 1 # Initialize condition variable\n # Loop sum 1, 2, ..., n-1, n\n while i <= n:\n res += i\n i += 1 # Update condition variable\n return res\n</code></pre> iteration.cpp<pre><code>/* while loop */\nint whileLoop(int n) {\n int res = 0;\n int i = 1; // Initialize condition variable\n // Loop sum 1, 2, ..., n-1, n\n while (i <= n) {\n res += i;\n i++; // Update condition variable\n }\n return res;\n}\n</code></pre> iteration.java<pre><code>/* while loop */\nint whileLoop(int n) {\n int res = 0;\n int i = 1; // Initialize condition variable\n // Loop sum 1, 2, ..., n-1, n\n while (i <= n) {\n res += i;\n i++; // Update condition variable\n }\n return res;\n}\n</code></pre> iteration.cs<pre><code>[class]{iteration}-[func]{WhileLoop}\n</code></pre> iteration.go<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.swift<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.js<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.ts<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.dart<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.rs<pre><code>[class]{}-[func]{while_loop}\n</code></pre> iteration.c<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.kt<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> iteration.rb<pre><code>[class]{}-[func]{while_loop}\n</code></pre> iteration.zig<pre><code>[class]{}-[func]{whileLoop}\n</code></pre> <p><code>while</code> loops provide more flexibility than <code>for</code> loops, especially since they allow for custom initialization and modification of the 
condition variable at each step.</p> <p>For example, in the following code, the condition variable \\(i\\) is updated twice each round, which would be inconvenient to implement with a <code>for</code> loop.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig iteration.py<pre><code>def while_loop_ii(n: int) -> int:\n \"\"\"while loop (two updates)\"\"\"\n res = 0\n i = 1 # Initialize condition variable\n # Loop sum 1, 4, 10, ...\n while i <= n:\n res += i\n # Update condition variable\n i += 1\n i *= 2\n return res\n</code></pre> iteration.cpp<pre><code>/* while loop (two updates) */\nint whileLoopII(int n) {\n int res = 0;\n int i = 1; // Initialize condition variable\n // Loop sum 1, 4, 10, ...\n while (i <= n) {\n res += i;\n // Update condition variable\n i++;\n i *= 2;\n }\n return res;\n}\n</code></pre> iteration.java<pre><code>/* while loop (two updates) */\nint whileLoopII(int n) {\n int res = 0;\n int i = 1; // Initialize condition variable\n // Loop sum 1, 4, 10, ...\n while (i <= n) {\n res += i;\n // Update condition variable\n i++;\n i *= 2;\n }\n return res;\n}\n</code></pre> iteration.cs<pre><code>[class]{iteration}-[func]{WhileLoopII}\n</code></pre> iteration.go<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.swift<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.js<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.ts<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.dart<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.rs<pre><code>[class]{}-[func]{while_loop_ii}\n</code></pre> iteration.c<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.kt<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> iteration.rb<pre><code>[class]{}-[func]{while_loop_ii}\n</code></pre> iteration.zig<pre><code>[class]{}-[func]{whileLoopII}\n</code></pre> <p>Overall, <code>for</code> loops are more concise, while <code>while</code> loops are more flexible. 
Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#3-nested-loops","title":"3. \u00a0 Nested loops","text":"<p>We can nest one loop structure within another. Below is an example using <code>for</code> loops:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig iteration.py<pre><code>def nested_for_loop(n: int) -> str:\n \"\"\"Double for loop\"\"\"\n res = \"\"\n # Loop i = 1, 2, ..., n-1, n\n for i in range(1, n + 1):\n # Loop j = 1, 2, ..., n-1, n\n for j in range(1, n + 1):\n res += f\"({i}, {j}), \"\n return res\n</code></pre> iteration.cpp<pre><code>/* Double for loop */\nstring nestedForLoop(int n) {\n ostringstream res;\n // Loop i = 1, 2, ..., n-1, n\n for (int i = 1; i <= n; ++i) {\n // Loop j = 1, 2, ..., n-1, n\n for (int j = 1; j <= n; ++j) {\n res << \"(\" << i << \", \" << j << \"), \";\n }\n }\n return res.str();\n}\n</code></pre> iteration.java<pre><code>/* Double for loop */\nString nestedForLoop(int n) {\n StringBuilder res = new StringBuilder();\n // Loop i = 1, 2, ..., n-1, n\n for (int i = 1; i <= n; i++) {\n // Loop j = 1, 2, ..., n-1, n\n for (int j = 1; j <= n; j++) {\n res.append(\"(\" + i + \", \" + j + \"), \");\n }\n }\n return res.toString();\n}\n</code></pre> iteration.cs<pre><code>[class]{iteration}-[func]{NestedForLoop}\n</code></pre> iteration.go<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.swift<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.js<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.ts<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.dart<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.rs<pre><code>[class]{}-[func]{nested_for_loop}\n</code></pre> iteration.c<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> 
iteration.kt<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> iteration.rb<pre><code>[class]{}-[func]{nested_for_loop}\n</code></pre> iteration.zig<pre><code>[class]{}-[func]{nestedForLoop}\n</code></pre> <p>Figure 2-2 represents this nested loop.</p> <p></p> <p> Figure 2-2 \u00a0 Flowchart of the nested loop </p> <p>In such cases, the number of operations of the function is proportional to \\(n^2\\), meaning the algorithm's runtime and the size of the input data \\(n\\) have a 'quadratic relationship.'</p> <p>We can further increase the complexity by adding more nested loops, each level of nesting effectively \"increasing the dimension,\" which raises the time complexity to \"cubic,\" \"quartic,\" and so on.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#222-recursion","title":"2.2.2 \u00a0 Recursion","text":"<p>Recursion is an algorithmic strategy where a function solves a problem by calling itself. It primarily involves two phases:</p> <ol> <li>Calling: This is where the program repeatedly calls itself, often with progressively smaller or simpler arguments, moving towards the \"termination condition.\"</li> <li>Returning: Upon triggering the \"termination condition,\" the program begins to return from the deepest recursive function, aggregating the results of each layer.</li> </ol> <p>From an implementation perspective, recursive code mainly includes three elements.</p> <ol> <li>Termination Condition: Determines when to switch from \"calling\" to \"returning.\"</li> <li>Recursive Call: Corresponds to \"calling,\" where the function calls itself, usually with smaller or more simplified parameters.</li> <li>Return Result: Corresponds to \"returning,\" where the result of the current recursion level is returned to the previous layer.</li> </ol> <p>Observe the following code, where simply calling the function <code>recur(n)</code> can compute the sum of \\(1 + 2 + \\dots + n\\):</p> 
PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig recursion.py<pre><code>def recur(n: int) -> int:\n \"\"\"Recursion\"\"\"\n # Termination condition\n if n == 1:\n return 1\n # Recursive: recursive call\n res = recur(n - 1)\n # Return: return result\n return n + res\n</code></pre> recursion.cpp<pre><code>/* Recursion */\nint recur(int n) {\n // Termination condition\n if (n == 1)\n return 1;\n // Recursive: recursive call\n int res = recur(n - 1);\n // Return: return result\n return n + res;\n}\n</code></pre> recursion.java<pre><code>/* Recursion */\nint recur(int n) {\n // Termination condition\n if (n == 1)\n return 1;\n // Recursive: recursive call\n int res = recur(n - 1);\n // Return: return result\n return n + res;\n}\n</code></pre> recursion.cs<pre><code>[class]{recursion}-[func]{Recur}\n</code></pre> recursion.go<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.swift<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.js<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.ts<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.dart<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.rs<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.c<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.kt<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.rb<pre><code>[class]{}-[func]{recur}\n</code></pre> recursion.zig<pre><code>[class]{}-[func]{recur}\n</code></pre> <p>Figure 2-3 shows the recursive process of this function.</p> <p></p> <p> Figure 2-3 \u00a0 Recursive process of the sum function </p> <p>Although iteration and recursion can achieve the same results from a computational standpoint, they represent two entirely different paradigms of thinking and problem-solving.</p> <ul> <li>Iteration: Solves problems \"from the bottom up.\" It starts with the most basic steps, and then repeatedly adds or accumulates these steps until the task is complete.</li> <li>Recursion: Solves problems \"from the top down.\" 
It breaks down the original problem into smaller sub-problems, each of which has the same form as the original problem. These sub-problems are then further decomposed into even smaller sub-problems, stopping at the base case whose solution is known.</li> </ul> <p>Let's take the earlier example of the summation function, defined as \\(f(n) = 1 + 2 + \\dots + n\\).</p> <ul> <li>Iteration: In this approach, we simulate the summation process within a loop. Starting from \\(1\\) and traversing to \\(n\\), we perform the summation operation in each iteration to eventually compute \\(f(n)\\).</li> <li>Recursion: Here, the problem is broken down into a sub-problem: \\(f(n) = n + f(n-1)\\). This decomposition continues recursively until reaching the base case, \\(f(1) = 1\\), at which point the recursion terminates.</li> </ul>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#1-call-stack","title":"1. \u00a0 Call stack","text":"<p>Every time a recursive function calls itself, the system allocates memory for the newly initiated function to store local variables, the return address, and other relevant information. This leads to two primary outcomes.</p> <ul> <li>The function's context data is stored in a memory area called \"stack frame space\" and is only released after the function returns. Therefore, recursion generally consumes more memory space than iteration.</li> <li>Recursive calls introduce additional overhead. 
Hence, recursion is usually less time-efficient than loops.</li> </ul> <p>As shown in Figure 2-4, there are \\(n\\) unreturned recursive functions before triggering the termination condition, indicating a recursion depth of \\(n\\).</p> <p></p> <p> Figure 2-4 \u00a0 Recursion call depth </p> <p>In practice, the depth of recursion allowed by programming languages is usually limited, and excessively deep recursion can lead to stack overflow errors.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#2-tail-recursion","title":"2. \u00a0 Tail recursion","text":"<p>Interestingly, if a function performs its recursive call as the very last step before returning, it can be optimized by the compiler or interpreter to be as space-efficient as iteration. This scenario is known as tail recursion.</p> <ul> <li>Regular recursion: In standard recursion, when the function returns to the previous level, it continues to execute more code, requiring the system to save the context of the previous call.</li> <li>Tail recursion: Here, the recursive call is the final operation before the function returns. 
This means that upon returning to the previous level, no further actions are needed, so the system does not need to save the context of the previous level.</li> </ul> <p>For example, in calculating \\(1 + 2 + \\dots + n\\), we can make the result variable <code>res</code> a parameter of the function, thereby achieving tail recursion:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig recursion.py<pre><code>def tail_recur(n, res):\n \"\"\"Tail recursion\"\"\"\n # Termination condition\n if n == 0:\n return res\n # Tail recursive call\n return tail_recur(n - 1, res + n)\n</code></pre> recursion.cpp<pre><code>/* Tail recursion */\nint tailRecur(int n, int res) {\n // Termination condition\n if (n == 0)\n return res;\n // Tail recursive call\n return tailRecur(n - 1, res + n);\n}\n</code></pre> recursion.java<pre><code>/* Tail recursion */\nint tailRecur(int n, int res) {\n // Termination condition\n if (n == 0)\n return res;\n // Tail recursive call\n return tailRecur(n - 1, res + n);\n}\n</code></pre> recursion.cs<pre><code>[class]{recursion}-[func]{TailRecur}\n</code></pre> recursion.go<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.swift<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.js<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.ts<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.dart<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.rs<pre><code>[class]{}-[func]{tail_recur}\n</code></pre> recursion.c<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.kt<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> recursion.rb<pre><code>[class]{}-[func]{tail_recur}\n</code></pre> recursion.zig<pre><code>[class]{}-[func]{tailRecur}\n</code></pre> <p>The execution process of tail recursion is shown in Figure 2-5. 
Comparing regular recursion and tail recursion, we can see that the summation operation occurs at different points.</p> <ul> <li>Regular recursion: The summation operation occurs during the \"returning\" phase, requiring another summation after each layer returns.</li> <li>Tail recursion: The summation operation occurs during the \"calling\" phase, and the \"returning\" phase only involves returning through each layer.</li> </ul> <p></p> <p> Figure 2-5 \u00a0 Tail recursion process </p> <p>Tip</p> <p>Note that many compilers or interpreters do not support tail recursion optimization. For example, Python does not support tail recursion optimization by default, so even if the function is in the form of tail recursion, it may still encounter stack overflow issues.</p>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#3-recursion-tree","title":"3. \u00a0 Recursion tree","text":"<p>When dealing with algorithms related to \"divide and conquer\", recursion often offers a more intuitive approach and more readable code than iteration. Take the \"Fibonacci sequence\" as an example.</p> <p>Question</p> <p>Given a Fibonacci sequence \\(0, 1, 1, 2, 3, 5, 8, 13, \\dots\\), find the \\(n\\)th number in the sequence.</p> <p>Let the \\(n\\)th number of the Fibonacci sequence be \\(f(n)\\); it's easy to deduce two conclusions:</p> <ul> <li>The first two numbers of the sequence are \\(f(1) = 0\\) and \\(f(2) = 1\\).</li> <li>Each number in the sequence is the sum of the two preceding ones, that is, \\(f(n) = f(n - 1) + f(n - 2)\\).</li> </ul> <p>Using the recursive relation, and considering the first two numbers as termination conditions, we can write the recursive code. 
Calling <code>fib(n)</code> will yield the \\(n\\)th number of the Fibonacci sequence:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig recursion.py<pre><code>def fib(n: int) -> int:\n \"\"\"Fibonacci sequence: Recursion\"\"\"\n # Termination condition f(1) = 0, f(2) = 1\n if n == 1 or n == 2:\n return n - 1\n # Recursive call f(n) = f(n-1) + f(n-2)\n res = fib(n - 1) + fib(n - 2)\n # Return result f(n)\n return res\n</code></pre> recursion.cpp<pre><code>/* Fibonacci sequence: Recursion */\nint fib(int n) {\n // Termination condition f(1) = 0, f(2) = 1\n if (n == 1 || n == 2)\n return n - 1;\n // Recursive call f(n) = f(n-1) + f(n-2)\n int res = fib(n - 1) + fib(n - 2);\n // Return result f(n)\n return res;\n}\n</code></pre> recursion.java<pre><code>/* Fibonacci sequence: Recursion */\nint fib(int n) {\n // Termination condition f(1) = 0, f(2) = 1\n if (n == 1 || n == 2)\n return n - 1;\n // Recursive call f(n) = f(n-1) + f(n-2)\n int res = fib(n - 1) + fib(n - 2);\n // Return result f(n)\n return res;\n}\n</code></pre> recursion.cs<pre><code>[class]{recursion}-[func]{Fib}\n</code></pre> recursion.go<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.swift<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.js<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.ts<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.dart<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.rs<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.c<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.kt<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.rb<pre><code>[class]{}-[func]{fib}\n</code></pre> recursion.zig<pre><code>[class]{}-[func]{fib}\n</code></pre> <p>Observing the above code, we see that it recursively calls two functions within itself, meaning that one call generates two branching calls. 
As illustrated in Figure 2-6, this continuous recursive calling eventually creates a recursion tree with a depth of \\(n\\).</p> <p></p> <p> Figure 2-6 \u00a0 Fibonacci sequence recursion tree </p> <p>Fundamentally, recursion embodies the paradigm of \"breaking down a problem into smaller sub-problems.\" This divide-and-conquer strategy is crucial.</p> <ul> <li>From an algorithmic perspective, many important strategies like searching, sorting, backtracking, divide-and-conquer, and dynamic programming directly or indirectly use this way of thinking.</li> <li>From a data structure perspective, recursion is naturally suited for dealing with linked lists, trees, and graphs, as they are well suited for analysis using the divide-and-conquer approach.</li> </ul>"},{"location":"chapter_computational_complexity/iteration_and_recursion/#223-comparison","title":"2.2.3 \u00a0 Comparison","text":"<p>Summarizing the above content, the following table shows the differences between iteration and recursion in terms of implementation, performance, and applicability.</p> <p> Table: Comparison of iteration and recursion characteristics </p> Iteration Recursion Approach Loop structure Function calls itself Time Efficiency Generally higher efficiency, no function call overhead Each function call generates overhead Memory Usage Typically uses a fixed size of memory space Accumulative function calls can use a substantial amount of stack frame space Suitable Problems Suitable for simple loop tasks, intuitive and readable code Suitable for problem decomposition, like trees, graphs, divide-and-conquer, backtracking, etc., concise and clear code structure <p>Tip</p> <p>If you find the following content difficult to understand, consider revisiting it after reading the \"Stack\" chapter.</p> <p>So, what is the intrinsic connection between iteration and recursion? Taking the above recursive function as an example, the summation operation occurs during the recursion's \"return\" phase. 
This means that the initially called function is the last to complete its summation operation, mirroring the \"last in, first out\" principle of a stack.</p> <p>Recursive terms like \"call stack\" and \"stack frame space\" hint at the close relationship between recursion and stacks.</p> <ol> <li>Calling: When a function is called, the system allocates a new stack frame on the \"call stack\" for that function, storing local variables, parameters, return addresses, and other data.</li> <li>Returning: When a function completes execution and returns, the corresponding stack frame is removed from the \"call stack,\" restoring the execution environment of the previous function.</li> </ol> <p>Therefore, we can use an explicit stack to simulate the behavior of the call stack, thus transforming recursion into an iterative form:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig recursion.py<pre><code>def for_loop_recur(n: int) -> int:\n \"\"\"Simulate recursion with iteration\"\"\"\n # Use an explicit stack to simulate the system call stack\n stack = []\n res = 0\n # Recursive: recursive call\n for i in range(n, 0, -1):\n # Simulate \"recursive\" by \"pushing onto the stack\"\n stack.append(i)\n # Return: return result\n while stack:\n # Simulate \"return\" by \"popping from the stack\"\n res += stack.pop()\n # res = 1+2+3+...+n\n return res\n</code></pre> recursion.cpp<pre><code>/* Simulate recursion with iteration */\nint forLoopRecur(int n) {\n // Use an explicit stack to simulate the system call stack\n stack<int> stack;\n int res = 0;\n // Recursive: recursive call\n for (int i = n; i > 0; i--) {\n // Simulate \"recursive\" by \"pushing onto the stack\"\n stack.push(i);\n }\n // Return: return result\n while (!stack.empty()) {\n // Simulate \"return\" by \"popping from the stack\"\n res += stack.top();\n stack.pop();\n }\n // res = 1+2+3+...+n\n return res;\n}\n</code></pre> recursion.java<pre><code>/* Simulate recursion with iteration */\nint forLoopRecur(int n) {\n 
// Use an explicit stack to simulate the system call stack\n Stack<Integer> stack = new Stack<>();\n int res = 0;\n // Recursive: recursive call\n for (int i = n; i > 0; i--) {\n // Simulate \"recursive\" by \"pushing onto the stack\"\n stack.push(i);\n }\n // Return: return result\n while (!stack.isEmpty()) {\n // Simulate \"return\" by \"popping from the stack\"\n res += stack.pop();\n }\n // res = 1+2+3+...+n\n return res;\n}\n</code></pre> recursion.cs<pre><code>[class]{recursion}-[func]{ForLoopRecur}\n</code></pre> recursion.go<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.swift<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.js<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.ts<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.dart<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.rs<pre><code>[class]{}-[func]{for_loop_recur}\n</code></pre> recursion.c<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.kt<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> recursion.rb<pre><code>[class]{}-[func]{for_loop_recur}\n</code></pre> recursion.zig<pre><code>[class]{}-[func]{forLoopRecur}\n</code></pre> <p>Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:</p> <ul> <li>The transformed code may become more challenging to understand and less readable.</li> <li>For some complex problems, simulating the behavior of the system's call stack can be quite challenging.</li> </ul> <p>In conclusion, whether to choose iteration or recursion depends on the specific nature of the problem. 
In programming practice, it's crucial to weigh the pros and cons of both and choose the most suitable approach for the situation at hand.</p>"},{"location":"chapter_computational_complexity/performance_evaluation/","title":"2.1 \u00a0 Algorithm efficiency assessment","text":"<p>In algorithm design, we pursue the following two objectives in sequence.</p> <ol> <li>Finding a Solution to the Problem: The algorithm should reliably find the correct solution within the stipulated range of inputs.</li> <li>Seeking the Optimal Solution: For the same problem, multiple solutions might exist, and we aim to find the most efficient algorithm possible.</li> </ol> <p>In other words, under the premise of being able to solve the problem, algorithm efficiency has become the main criterion for evaluating the merits of an algorithm, which includes the following two dimensions.</p> <ul> <li>Time efficiency: The speed at which an algorithm runs.</li> <li>Space efficiency: The size of the memory space occupied by an algorithm.</li> </ul> <p>In short, our goal is to design data structures and algorithms that are both fast and memory-efficient. Effectively assessing algorithm efficiency is crucial because only then can we compare various algorithms and guide the process of algorithm design and optimization.</p> <p>There are mainly two methods of efficiency assessment: actual testing and theoretical estimation.</p>"},{"location":"chapter_computational_complexity/performance_evaluation/#211-actual-testing","title":"2.1.1 \u00a0 Actual testing","text":"<p>Suppose we have algorithms <code>A</code> and <code>B</code>, both capable of solving the same problem, and we need to compare their efficiencies. The most direct method is to use a computer to run these two algorithms and monitor and record their runtime and memory usage. 
This assessment method reflects the actual situation but has significant limitations.</p> <p>On one hand, it's difficult to eliminate interference from the testing environment. Hardware configurations can affect algorithm performance. For example, algorithm <code>A</code> might run faster than <code>B</code> on one computer, but the opposite result may occur on another computer with different configurations. This means we would need to test on a variety of machines to calculate average efficiency, which is impractical.</p> <p>On the other hand, conducting a full test is very resource-intensive. As the volume of input data changes, the efficiency of the algorithms may vary. For example, with smaller data volumes, algorithm <code>A</code> might run faster than <code>B</code>, but the opposite might be true with larger data volumes. Therefore, to draw convincing conclusions, we need to test a wide range of input data sizes, which requires significant computational resources.</p>"},{"location":"chapter_computational_complexity/performance_evaluation/#212-theoretical-estimation","title":"2.1.2 \u00a0 Theoretical estimation","text":"<p>Due to the significant limitations of actual testing, we can consider evaluating algorithm efficiency solely through calculations. This estimation method is known as asymptotic complexity analysis, or simply complexity analysis.</p> <p>Complexity analysis reflects the relationship between the time and space resources required for algorithm execution and the size of the input data. It describes the trend of growth in the time and space required by the algorithm as the size of the input data increases. 
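The idea of a growth trend can be made concrete by counting basic operations instead of timing them. The following toy sketch (both functions are illustrative) shows why the trend, not the absolute count, is what matters:

```python
def count_linear(n: int) -> int:
    """Operation count of a single loop: grows in proportion to n"""
    ops = 0
    for _ in range(n):
        ops += 1  # one basic operation per iteration
    return ops

def count_quadratic(n: int) -> int:
    """Operation count of a nested loop: grows in proportion to n^2"""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1  # one basic operation per inner iteration
    return ops

# Doubling n doubles the linear count but quadruples the quadratic count,
# regardless of hardware or testing environment
for n in (10, 20, 40):
    print(n, count_linear(n), count_quadratic(n))
```

These counts are independent of any machine, which is exactly the property complexity analysis exploits.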
This definition might sound complex, but we can break it down into three key points to understand it better.</p> <ul> <li>\"Time and space resources\" correspond to time complexity and space complexity, respectively.</li> <li>\"As the size of input data increases\" means that complexity reflects the relationship between algorithm efficiency and the volume of input data.</li> <li>\"The trend of growth in time and space\" indicates that complexity analysis focuses not on the specific values of runtime or space occupied but on the \"rate\" at which time or space grows.</li> </ul> <p>Complexity analysis overcomes the disadvantages of actual testing methods, reflected in the following aspects:</p> <ul> <li>It is independent of the testing environment and applicable to all operating platforms.</li> <li>It can reflect algorithm efficiency under different data volumes, especially in the performance of algorithms with large data volumes.</li> </ul> <p>Tip</p> <p>If you're still confused about the concept of complexity, don't worry. We will introduce it in detail in subsequent chapters.</p> <p>Complexity analysis provides us with a \"ruler\" to measure the time and space resources needed to execute an algorithm and compare the efficiency between different algorithms.</p> <p>Complexity is a mathematical concept and may be abstract and challenging for beginners. From this perspective, complexity analysis might not be the best content to introduce first. 
However, when discussing the characteristics of a particular data structure or algorithm, it's hard to avoid analyzing its speed and space usage.</p> <p>In summary, it's recommended that you establish a preliminary understanding of complexity analysis before diving deep into data structures and algorithms, so that you can carry out simple complexity analyses of algorithms.</p>"},{"location":"chapter_computational_complexity/space_complexity/","title":"2.4 \u00a0 Space complexity","text":"<p>Space complexity is used to measure the growth trend of the memory space occupied by an algorithm as the amount of data increases. This concept is very similar to time complexity, except that \"running time\" is replaced with \"occupied memory space\".</p>"},{"location":"chapter_computational_complexity/space_complexity/#241-space-related-to-algorithms","title":"2.4.1 \u00a0 Space related to algorithms","text":"<p>The memory space used by an algorithm during its execution mainly includes the following types.</p> <ul> <li>Input space: Used to store the input data of the algorithm.</li> <li>Temporary space: Used to store variables, objects, function contexts, and other data during the algorithm's execution.</li> <li>Output space: Used to store the output data of the algorithm.</li> </ul> <p>Generally, the scope of space complexity statistics includes both \"Temporary Space\" and \"Output Space\".</p> <p>Temporary space can be further divided into three parts.</p> <ul> <li>Temporary data: Used to save various constants, variables, objects, etc., during the algorithm's execution.</li> <li>Stack frame space: Used to save the context data of the called function. 
The system creates a stack frame at the top of the stack each time a function is called, and the stack frame space is released after the function returns.</li> <li>Instruction space: Used to store compiled program instructions, which are usually negligible in actual statistics.</li> </ul> <p>When analyzing the space complexity of a program, we typically count the Temporary Data, Stack Frame Space, and Output Data, as shown in Figure 2-15.</p> <p></p> <p> Figure 2-15 \u00a0 Space types used in algorithms </p> <p>The relevant code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>class Node:\n \"\"\"Classes\"\"\"\n def __init__(self, x: int):\n self.val: int = x # node value\n self.next: Node | None = None # reference to the next node\n\ndef function() -> int:\n \"\"\"Functions\"\"\"\n # Perform certain operations...\n return 0\n\ndef algorithm(n) -> int: # input data\n A = 0 # temporary data (constant, usually in uppercase)\n b = 0 # temporary data (variable)\n node = Node(0) # temporary data (object)\n c = function() # Stack frame space (call function)\n return A + b + c # output data\n</code></pre> <pre><code>/* Structures */\nstruct Node {\n int val;\n Node *next;\n Node(int x) : val(x), next(nullptr) {}\n};\n\n/* Functions */\nint func() {\n // Perform certain operations...\n return 0;\n}\n\nint algorithm(int n) { // input data\n const int a = 0; // temporary data (constant)\n int b = 0; // temporary data (variable)\n Node* node = new Node(0); // temporary data (object)\n int c = func(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n int val;\n Node next;\n Node(int x) { val = x; }\n}\n\n/* Functions */\nint function() {\n // Perform certain operations...\n return 0;\n}\n\nint algorithm(int n) { // input data\n final int a = 0; // temporary data (constant)\n int b = 0; // temporary data (variable)\n Node node = new Node(0); // temporary data (object)\n 
int c = function(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n int val;\n Node next;\n Node(int x) { val = x; }\n}\n\n/* Functions */\nint Function() {\n // Perform certain operations...\n return 0;\n}\n\nint Algorithm(int n) { // input data\n const int a = 0; // temporary data (constant)\n int b = 0; // temporary data (variable)\n Node node = new(0); // temporary data (object)\n int c = Function(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Structures */\ntype node struct {\n val int\n next *node\n}\n\n/* Create node structure */\nfunc newNode(val int) *node {\n return &node{val: val}\n}\n\n/* Functions */\nfunc function() int {\n // Perform certain operations...\n return 0\n}\n\nfunc algorithm(n int) int { // input data\n const a = 0 // temporary data (constant)\n b := 0 // temporary storage of data (variable)\n newNode(0) // temporary data (object)\n c := function() // stack frame space (call function)\n return a + b + c // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n var val: Int\n var next: Node?\n\n init(x: Int) {\n val = x\n }\n}\n\n/* Functions */\nfunc function() -> Int {\n // Perform certain operations...\n return 0\n}\n\nfunc algorithm(n: Int) -> Int { // input data\n let a = 0 // temporary data (constant)\n var b = 0 // temporary data (variable)\n let node = Node(x: 0) // temporary data (object)\n let c = function() // stack frame space (call function)\n return a + b + c // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n val;\n next;\n constructor(val) {\n this.val = val === undefined ? 
0 : val; // node value\n this.next = null; // reference to the next node\n }\n}\n\n/* Functions */\nfunction constFunc() {\n // Perform certain operations\n return 0;\n}\n\nfunction algorithm(n) { // input data\n const a = 0; // temporary data (constant)\n let b = 0; // temporary data (variable)\n const node = new Node(0); // temporary data (object)\n const c = constFunc(); // Stack frame space (calling function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n val: number;\n next: Node | null;\n constructor(val?: number) {\n this.val = val === undefined ? 0 : val; // node value\n this.next = null; // reference to the next node\n }\n}\n\n/* Functions */\nfunction constFunc(): number {\n // Perform certain operations\n return 0;\n}\n\nfunction algorithm(n: number): number { // input data\n const a = 0; // temporary data (constant)\n let b = 0; // temporary data (variable)\n const node = new Node(0); // temporary data (object)\n const c = constFunc(); // Stack frame space (calling function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Classes */\nclass Node {\n int val;\n Node next;\n Node(this.val, [this.next]);\n}\n\n/* Functions */\nint function() {\n // Perform certain operations...\n return 0;\n}\n\nint algorithm(int n) { // input data\n const int a = 0; // temporary data (constant)\n int b = 0; // temporary data (variable)\n Node node = Node(0); // temporary data (object)\n int c = function(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>use std::rc::Rc;\nuse std::cell::RefCell;\n\n/* Structures */\nstruct Node {\n val: i32,\n next: Option<Rc<RefCell<Node>>>,\n}\n\n/* Constructor */\nimpl Node {\n fn new(val: i32) -> Self {\n Self { val: val, next: None }\n }\n}\n\n/* Functions */\nfn function() -> i32 { \n // Perform certain operations...\n return 0;\n}\n\nfn algorithm(n: i32) -> i32 { // input data\n const a: i32 = 0; // temporary data 
(constant)\n let mut b = 0; // temporary data (variable)\n let node = Node::new(0); // temporary data (object)\n let c = function(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>/* Functions */\nint func() {\n // Perform certain operations...\n return 0;\n}\n\nint algorithm(int n) { // input data\n const int a = 0; // temporary data (constant)\n int b = 0; // temporary data (variable)\n int c = func(); // stack frame space (call function)\n return a + b + c; // output data\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre>"},{"location":"chapter_computational_complexity/space_complexity/#242-calculation-method","title":"2.4.2 \u00a0 Calculation method","text":"<p>The method for calculating space complexity is roughly similar to that of time complexity, with the only change being the shift of the statistical object from \"number of operations\" to \"size of used space\".</p> <p>However, unlike time complexity, we usually only focus on the worst-case space complexity. 
This is because memory space is a hard requirement, and we must ensure that there is enough memory space reserved under all input data.</p> <p>Consider the following code; the term \"worst-case\" in worst-case space complexity has two meanings.</p> <ol> <li>Based on the worst input data: When \\(n < 10\\), the space complexity is \\(O(1)\\); but when \\(n > 10\\), the initialized array <code>nums</code> occupies \\(O(n)\\) space, thus the worst-case space complexity is \\(O(n)\\).</li> <li>Based on the peak memory used during the algorithm's execution: For example, before executing the last line, the program occupies \\(O(1)\\) space; when initializing the array <code>nums</code>, the program occupies \\(O(n)\\) space, hence the worst-case space complexity is \\(O(n)\\).</li> </ol> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>def algorithm(n: int):\n a = 0 # O(1)\n b = [0] * 10000 # O(1)\n if n > 10:\n nums = [0] * n # O(n)\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 0; // O(1)\n vector<int> b(10000); // O(1)\n if (n > 10)\n vector<int> nums(n); // O(n)\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 0; // O(1)\n int[] b = new int[10000]; // O(1)\n if (n > 10)\n int[] nums = new int[n]; // O(n)\n}\n</code></pre> <pre><code>void Algorithm(int n) {\n int a = 0; // O(1)\n int[] b = new int[10000]; // O(1)\n if (n > 10) {\n int[] nums = new int[n]; // O(n)\n }\n}\n</code></pre> <pre><code>func algorithm(n int) {\n a := 0 // O(1)\n b := make([]int, 10000) // O(1)\n var nums []int\n if n > 10 {\n nums := make([]int, n) // O(n)\n }\n fmt.Println(a, b, nums)\n}\n</code></pre> <pre><code>func algorithm(n: Int) {\n let a = 0 // O(1)\n let b = Array(repeating: 0, count: 10000) // O(1)\n if n > 10 {\n let nums = Array(repeating: 0, count: n) // O(n)\n }\n}\n</code></pre> <pre><code>function algorithm(n) {\n const a = 0; // O(1)\n const b = new Array(10000); // O(1)\n if (n > 10) {\n const nums = new Array(n); // O(n)\n 
}\n}\n</code></pre> <pre><code>function algorithm(n: number): void {\n const a = 0; // O(1)\n const b = new Array(10000); // O(1)\n if (n > 10) {\n const nums = new Array(n); // O(n)\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 0; // O(1)\n List<int> b = List.filled(10000, 0); // O(1)\n if (n > 10) {\n List<int> nums = List.filled(n, 0); // O(n)\n }\n}\n</code></pre> <pre><code>fn algorithm(n: i32) {\n let a = 0; // O(1)\n let b = [0; 10000]; // O(1)\n if n > 10 {\n let nums = vec![0; n as usize]; // O(n)\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 0; // O(1)\n int b[10000]; // O(1)\n if (n > 10)\n int nums[n] = {0}; // O(n)\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p>In recursive functions, stack frame space must be taken into account. Consider the following code:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>def function() -> int:\n # Perform certain operations\n return 0\n\ndef loop(n: int):\n \"\"\"Loop O(1)\"\"\"\n for _ in range(n):\n function()\n\ndef recur(n: int):\n \"\"\"Recursion O(n)\"\"\"\n if n == 1:\n return\n return recur(n - 1)\n</code></pre> <pre><code>int func() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nvoid loop(int n) {\n for (int i = 0; i < n; i++) {\n func();\n }\n}\n/* Recursion O(n) */\nvoid recur(int n) {\n if (n == 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>int function() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nvoid loop(int n) {\n for (int i = 0; i < n; i++) {\n function();\n }\n}\n/* Recursion O(n) */\nvoid recur(int n) {\n if (n == 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>int Function() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nvoid Loop(int n) {\n for (int i = 0; i < n; i++) {\n Function();\n }\n}\n/* Recursion O(n) */\nint Recur(int n) {\n if (n == 1) return 1;\n return Recur(n - 1);\n}\n</code></pre> <pre><code>func function() 
int {\n // Perform certain operations\n return 0\n}\n\n/* Loop O(1) */\nfunc loop(n int) {\n for i := 0; i < n; i++ {\n function()\n }\n}\n\n/* Recursion O(n) */\nfunc recur(n int) {\n if n == 1 {\n return\n }\n recur(n - 1)\n}\n</code></pre> <pre><code>@discardableResult\nfunc function() -> Int {\n // Perform certain operations\n return 0\n}\n\n/* Loop O(1) */\nfunc loop(n: Int) {\n for _ in 0 ..< n {\n function()\n }\n}\n\n/* Recursion O(n) */\nfunc recur(n: Int) {\n if n == 1 {\n return\n }\n recur(n: n - 1)\n}\n</code></pre> <pre><code>function constFunc() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nfunction loop(n) {\n for (let i = 0; i < n; i++) {\n constFunc();\n }\n}\n/* Recursion O(n) */\nfunction recur(n) {\n if (n === 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>function constFunc(): number {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nfunction loop(n: number): void {\n for (let i = 0; i < n; i++) {\n constFunc();\n }\n}\n/* Recursion O(n) */\nfunction recur(n: number): void {\n if (n === 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>int function() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nvoid loop(int n) {\n for (int i = 0; i < n; i++) {\n function();\n }\n}\n/* Recursion O(n) */\nvoid recur(int n) {\n if (n == 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>fn function() -> i32 {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nfn loop_fn(n: i32) {\n for _i in 0..n {\n function();\n }\n}\n/* Recursion O(n) */\nfn recur(n: i32) {\n if n == 1 {\n return;\n }\n recur(n - 1);\n}\n</code></pre> <pre><code>int func() {\n // Perform certain operations\n return 0;\n}\n/* Loop O(1) */\nvoid loop(int n) {\n for (int i = 0; i < n; i++) {\n func();\n }\n}\n/* Recursion O(n) */\nvoid recur(int n) {\n if (n == 1) return;\n return recur(n - 1);\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p>The time
complexity of both <code>loop()</code> and <code>recur()</code> functions is \\(O(n)\\), but their space complexities differ.</p> <ul> <li>The <code>loop()</code> function calls <code>function()</code> \\(n\\) times in a loop, where each iteration's <code>function()</code> returns and releases its stack frame space, so the space complexity remains \\(O(1)\\).</li> <li>The recursive function <code>recur()</code> will have \\(n\\) instances of unreturned <code>recur()</code> existing simultaneously during its execution, thus occupying \\(O(n)\\) stack frame space.</li> </ul>"},{"location":"chapter_computational_complexity/space_complexity/#243-common-types","title":"2.4.3 \u00a0 Common types","text":"<p>Let the size of the input data be \\(n\\); Figure 2-16 displays common types of space complexities (arranged from low to high).</p> \\[ \\begin{aligned} & O(1) < O(\\log n) < O(n) < O(n^2) < O(2^n) \\newline & \\text{Constant} < \\text{Logarithmic} < \\text{Linear} < \\text{Quadratic} < \\text{Exponential} \\end{aligned} \\] <p></p> <p> Figure 2-16 \u00a0 Common types of space complexity </p>"},{"location":"chapter_computational_complexity/space_complexity/#1-constant-order-o1","title":"1. 
\u00a0 Constant order \\(O(1)\\)","text":"<p>Constant order is common in constants, variables, and objects that are independent of the size of input data \\(n\\).</p> <p>Note that memory occupied by initializing variables or calling functions in a loop, which is released upon entering the next iteration, does not accumulate, so the space complexity remains \\(O(1)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def function() -> int:\n \"\"\"Function\"\"\"\n # Perform some operations\n return 0\n\ndef constant(n: int):\n \"\"\"Constant complexity\"\"\"\n # Constants, variables, objects occupy O(1) space\n a = 0\n nums = [0] * 10000\n node = ListNode(0)\n # Variables in a loop occupy O(1) space\n for _ in range(n):\n c = 0\n # Functions in a loop occupy O(1) space\n for _ in range(n):\n function()\n</code></pre> space_complexity.cpp<pre><code>/* Function */\nint func() {\n // Perform some operations\n return 0;\n}\n\n/* Constant complexity */\nvoid constant(int n) {\n // Constants, variables, objects occupy O(1) space\n const int a = 0;\n int b = 0;\n vector<int> nums(10000);\n ListNode node(0);\n // Variables in a loop occupy O(1) space\n for (int i = 0; i < n; i++) {\n int c = 0;\n }\n // Functions in a loop occupy O(1) space\n for (int i = 0; i < n; i++) {\n func();\n }\n}\n</code></pre> space_complexity.java<pre><code>/* Function */\nint function() {\n // Perform some operations\n return 0;\n}\n\n/* Constant complexity */\nvoid constant(int n) {\n // Constants, variables, objects occupy O(1) space\n final int a = 0;\n int b = 0;\n int[] nums = new int[10000];\n ListNode node = new ListNode(0);\n // Variables in a loop occupy O(1) space\n for (int i = 0; i < n; i++) {\n int c = 0;\n }\n // Functions in a loop occupy O(1) space\n for (int i = 0; i < n; i++) {\n function();\n }\n}\n</code></pre> 
space_complexity.cs<pre><code>[class]{space_complexity}-[func]{Function}\n\n[class]{space_complexity}-[func]{Constant}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{spaceConstant}\n</code></pre> space_complexity.swift<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{constFunc}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{constFunc}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.dart<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.c<pre><code>[class]{}-[func]{func}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre> space_complexity.zig<pre><code>[class]{}-[func]{function}\n\n[class]{}-[func]{constant}\n</code></pre>"},{"location":"chapter_computational_complexity/space_complexity/#2-linear-order-on","title":"2. 
\u00a0 Linear order \\(O(n)\\)","text":"<p>Linear order is common in arrays, linked lists, stacks, queues, etc., where the number of elements is proportional to \\(n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def linear(n: int):\n \"\"\"Linear complexity\"\"\"\n # A list of length n occupies O(n) space\n nums = [0] * n\n # A hash table of length n occupies O(n) space\n hmap = dict[int, str]()\n for i in range(n):\n hmap[i] = str(i)\n</code></pre> space_complexity.cpp<pre><code>/* Linear complexity */\nvoid linear(int n) {\n // Array of length n occupies O(n) space\n vector<int> nums(n);\n // A list of length n occupies O(n) space\n vector<ListNode> nodes;\n for (int i = 0; i < n; i++) {\n nodes.push_back(ListNode(i));\n }\n // A hash table of length n occupies O(n) space\n unordered_map<int, string> map;\n for (int i = 0; i < n; i++) {\n map[i] = to_string(i);\n }\n}\n</code></pre> space_complexity.java<pre><code>/* Linear complexity */\nvoid linear(int n) {\n // Array of length n occupies O(n) space\n int[] nums = new int[n];\n // A list of length n occupies O(n) space\n List<ListNode> nodes = new ArrayList<>();\n for (int i = 0; i < n; i++) {\n nodes.add(new ListNode(i));\n }\n // A hash table of length n occupies O(n) space\n Map<Integer, String> map = new HashMap<>();\n for (int i = 0; i < n; i++) {\n map.put(i, String.valueOf(i));\n }\n}\n</code></pre> space_complexity.cs<pre><code>[class]{space_complexity}-[func]{Linear}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{spaceLinear}\n</code></pre> space_complexity.swift<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.dart<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{linear}\n</code></pre> 
space_complexity.c<pre><code>[class]{HashTable}-[func]{}\n\n[class]{}-[func]{linear}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{linear}\n</code></pre> space_complexity.zig<pre><code>[class]{}-[func]{linear}\n</code></pre> <p>As shown in Figure 2-17, this function's recursive depth is \\(n\\), meaning there are \\(n\\) instances of unreturned <code>linear_recur()</code> function, using \\(O(n)\\) size of stack frame space:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def linear_recur(n: int):\n \"\"\"Linear complexity (recursive implementation)\"\"\"\n print(\"Recursive n =\", n)\n if n == 1:\n return\n linear_recur(n - 1)\n</code></pre> space_complexity.cpp<pre><code>/* Linear complexity (recursive implementation) */\nvoid linearRecur(int n) {\n cout << \"Recursion n = \" << n << endl;\n if (n == 1)\n return;\n linearRecur(n - 1);\n}\n</code></pre> space_complexity.java<pre><code>/* Linear complexity (recursive implementation) */\nvoid linearRecur(int n) {\n System.out.println(\"Recursion n = \" + n);\n if (n == 1)\n return;\n linearRecur(n - 1);\n}\n</code></pre> space_complexity.cs<pre><code>[class]{space_complexity}-[func]{LinearRecur}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{spaceLinearRecur}\n</code></pre> space_complexity.swift<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.dart<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{linear_recur}\n</code></pre> space_complexity.c<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{linear_recur}\n</code></pre> 
space_complexity.zig<pre><code>[class]{}-[func]{linearRecur}\n</code></pre> <p></p> <p> Figure 2-17 \u00a0 Recursive function generating linear order space complexity </p>"},{"location":"chapter_computational_complexity/space_complexity/#3-quadratic-order-on2","title":"3. \u00a0 Quadratic order \\(O(n^2)\\)","text":"<p>Quadratic order is common in matrices and graphs, where the number of elements is quadratic to \\(n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def quadratic(n: int):\n \"\"\"Quadratic complexity\"\"\"\n # A two-dimensional list occupies O(n^2) space\n num_matrix = [[0] * n for _ in range(n)]\n</code></pre> space_complexity.cpp<pre><code>/* Quadratic complexity */\nvoid quadratic(int n) {\n // A two-dimensional list occupies O(n^2) space\n vector<vector<int>> numMatrix;\n for (int i = 0; i < n; i++) {\n vector<int> tmp;\n for (int j = 0; j < n; j++) {\n tmp.push_back(0);\n }\n numMatrix.push_back(tmp);\n }\n}\n</code></pre> space_complexity.java<pre><code>/* Quadratic complexity */\nvoid quadratic(int n) {\n // Matrix occupies O(n^2) space\n int[][] numMatrix = new int[n][n];\n // A two-dimensional list occupies O(n^2) space\n List<List<Integer>> numList = new ArrayList<>();\n for (int i = 0; i < n; i++) {\n List<Integer> tmp = new ArrayList<>();\n for (int j = 0; j < n; j++) {\n tmp.add(0);\n }\n numList.add(tmp);\n }\n}\n</code></pre> space_complexity.cs<pre><code>[class]{space_complexity}-[func]{Quadratic}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{spaceQuadratic}\n</code></pre> space_complexity.swift<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.dart<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{quadratic}\n</code></pre> 
space_complexity.c<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{quadratic}\n</code></pre> space_complexity.zig<pre><code>[class]{}-[func]{quadratic}\n</code></pre> <p>As shown in Figure 2-18, the recursive depth of this function is \\(n\\), and in each recursive call, an array is initialized with lengths \\(n\\), \\(n-1\\), \\(\\dots\\), \\(2\\), \\(1\\), averaging \\(n/2\\), thus overall occupying \\(O(n^2)\\) space:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def quadratic_recur(n: int) -> int:\n \"\"\"Quadratic complexity (recursive implementation)\"\"\"\n if n <= 0:\n return 0\n # Array nums length = n, n-1, ..., 2, 1\n nums = [0] * n\n return quadratic_recur(n - 1)\n</code></pre> space_complexity.cpp<pre><code>/* Quadratic complexity (recursive implementation) */\nint quadraticRecur(int n) {\n if (n <= 0)\n return 0;\n vector<int> nums(n);\n cout << \"Recursive n = \" << n << \", length of nums = \" << nums.size() << endl;\n return quadraticRecur(n - 1);\n}\n</code></pre> space_complexity.java<pre><code>/* Quadratic complexity (recursive implementation) */\nint quadraticRecur(int n) {\n if (n <= 0)\n return 0;\n // Array nums length = n, n-1, ..., 2, 1\n int[] nums = new int[n];\n System.out.println(\"Recursion n = \" + n + \" in the length of nums = \" + nums.length);\n return quadraticRecur(n - 1);\n}\n</code></pre> space_complexity.cs<pre><code>[class]{space_complexity}-[func]{QuadraticRecur}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{spaceQuadraticRecur}\n</code></pre> space_complexity.swift<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> 
space_complexity.dart<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{quadratic_recur}\n</code></pre> space_complexity.c<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{quadratic_recur}\n</code></pre> space_complexity.zig<pre><code>[class]{}-[func]{quadraticRecur}\n</code></pre> <p></p> <p> Figure 2-18 \u00a0 Recursive function generating quadratic order space complexity </p>"},{"location":"chapter_computational_complexity/space_complexity/#4-exponential-order-o2n","title":"4. \u00a0 Exponential order \\(O(2^n)\\)","text":"<p>Exponential order is common in binary trees. Observe Figure 2-19: a \"full binary tree\" with \\(n\\) levels has \\(2^n - 1\\) nodes, occupying \\(O(2^n)\\) space:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig space_complexity.py<pre><code>def build_tree(n: int) -> TreeNode | None:\n \"\"\"Exponential complexity (building a full binary tree)\"\"\"\n if n == 0:\n return None\n root = TreeNode(0)\n root.left = build_tree(n - 1)\n root.right = build_tree(n - 1)\n return root\n</code></pre> space_complexity.cpp<pre><code>/* Exponential complexity (building a full binary tree) */\nTreeNode *buildTree(int n) {\n if (n == 0)\n return nullptr;\n TreeNode *root = new TreeNode(0);\n root->left = buildTree(n - 1);\n root->right = buildTree(n - 1);\n return root;\n}\n</code></pre> space_complexity.java<pre><code>/* Exponential complexity (building a full binary tree) */\nTreeNode buildTree(int n) {\n if (n == 0)\n return null;\n TreeNode root = new TreeNode(0);\n root.left = buildTree(n - 1);\n root.right = buildTree(n - 1);\n return root;\n}\n</code></pre> space_complexity.cs<pre><code>[class]{space_complexity}-[func]{BuildTree}\n</code></pre> space_complexity.go<pre><code>[class]{}-[func]{buildTree}\n</code></pre> 
space_complexity.swift<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.js<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.ts<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.dart<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.rs<pre><code>[class]{}-[func]{build_tree}\n</code></pre> space_complexity.c<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.kt<pre><code>[class]{}-[func]{buildTree}\n</code></pre> space_complexity.rb<pre><code>[class]{}-[func]{build_tree}\n</code></pre> space_complexity.zig<pre><code>[class]{}-[func]{buildTree}\n</code></pre> <p></p> <p> Figure 2-19 \u00a0 Full binary tree generating exponential order space complexity </p>"},{"location":"chapter_computational_complexity/space_complexity/#5-logarithmic-order-olog-n","title":"5. \u00a0 Logarithmic order \\(O(\\log n)\\)","text":"<p>Logarithmic order is common in divide-and-conquer algorithms. For example, in merge sort, an array of length \\(n\\) is recursively divided in half each round, forming a recursion tree of height \\(\\log n\\), using \\(O(\\log n)\\) stack frame space.</p> <p>Another example is converting a number to a string. Given a positive integer \\(n\\), its number of digits is \\(\\log_{10} n + 1\\), corresponding to the length of the string, thus the space complexity is \\(O(\\log_{10} n + 1) = O(\\log n)\\).</p>"},{"location":"chapter_computational_complexity/space_complexity/#244-balancing-time-and-space","title":"2.4.4 \u00a0 Balancing time and space","text":"<p>Ideally, we aim for both time complexity and space complexity to be optimal. However, in practice, optimizing both simultaneously is often difficult.</p> <p>Lowering time complexity usually comes at the cost of increased space complexity, and vice versa. 
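An illustrative sketch of this tradeoff (the two functions below are hypothetical examples): finding two array indices whose values sum to a target can be done with constant extra space but quadratic time, or with linear extra space and linear time by spending memory on a hash table.

```python
def two_sum_brute(nums: list[int], target: int) -> list[int]:
    """O(1) extra space, O(n^2) time: check every pair"""
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def two_sum_hash(nums: list[int], target: int) -> list[int]:
    """O(n) extra space, O(n) time: trade memory for speed"""
    seen = {}  # maps value -> index of that value
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []
```

The hash-table version gains a full complexity level of speed at the cost of a full level of memory.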
The approach of sacrificing memory space to improve algorithm speed is known as \"space-time tradeoff\"; the reverse is known as \"time-space tradeoff\".</p> <p>The choice depends on which aspect we value more. In most cases, time is more precious than space, so \"space-time tradeoff\" is often the more common strategy. Of course, controlling space complexity is also very important when dealing with large volumes of data.</p>"},{"location":"chapter_computational_complexity/summary/","title":"2.5 \u00a0 Summary","text":""},{"location":"chapter_computational_complexity/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<p>Algorithm Efficiency Assessment</p> <ul> <li>Time efficiency and space efficiency are the two main criteria for assessing the merits of an algorithm.</li> <li>We can assess algorithm efficiency through actual testing, but it's challenging to eliminate the influence of the test environment, and it consumes substantial computational resources.</li> <li>Complexity analysis can overcome the disadvantages of actual testing. Its results are applicable across all operating platforms and can reveal the efficiency of algorithms at different data scales.</li> </ul> <p>Time Complexity</p> <ul> <li>Time complexity measures the trend of an algorithm's running time with the increase in data volume, effectively assessing algorithm efficiency. 
However, it can fail in certain cases, such as with small input data volumes or when time complexities are the same, making it challenging to precisely compare the efficiency of algorithms.</li> <li>Worst-case time complexity is denoted using big-\\(O\\) notation, representing the asymptotic upper bound, reflecting the growth level of the number of operations \\(T(n)\\) as \\(n\\) approaches infinity.</li> <li>Calculating time complexity involves two steps: first counting the number of operations, then determining the asymptotic upper bound.</li> <li>Common time complexities, arranged from low to high, include \\(O(1)\\), \\(O(\\log n)\\), \\(O(n)\\), \\(O(n \\log n)\\), \\(O(n^2)\\), \\(O(2^n)\\), and \\(O(n!)\\), among others.</li> <li>The time complexity of some algorithms is not fixed and depends on the distribution of input data. Time complexities are divided into worst, best, and average cases. The best case is rarely used because input data generally needs to meet strict conditions to achieve the best case.</li> <li>Average time complexity reflects the efficiency of an algorithm under random data inputs, closely resembling the algorithm's performance in actual applications. Calculating average time complexity requires accounting for the distribution of input data and the subsequent mathematical expectation.</li> </ul> <p>Space Complexity</p> <ul> <li>Space complexity, similar to time complexity, measures the trend of memory space occupied by an algorithm with the increase in data volume.</li> <li>The relevant memory space used during the algorithm's execution can be divided into input space, temporary space, and output space. Generally, input space is not included in space complexity calculations. 
Temporary space can be divided into temporary data, stack frame space, and instruction space, where stack frame space usually affects space complexity only in recursive functions.</li> <li>We usually focus only on the worst-case space complexity, which means calculating the space complexity of the algorithm under the worst input data and at the worst moment of operation.</li> <li>Common space complexities, arranged from low to high, include \\(O(1)\\), \\(O(\\log n)\\), \\(O(n)\\), \\(O(n^2)\\), and \\(O(2^n)\\), among others.</li> </ul>"},{"location":"chapter_computational_complexity/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Is the space complexity of tail recursion \\(O(1)\\)?</p> <p>Theoretically, the space complexity of a tail-recursive function can be optimized to \\(O(1)\\). However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of \\(O(n)\\).</p> <p>Q: What is the difference between the terms \"function\" and \"method\"?</p> <p>A function can be executed independently, with all parameters passed explicitly. A method is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.</p> <p>Here are some examples from common programming languages:</p> <ul> <li>C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (struct), and functions associated with these structures are equivalent to methods in other programming languages.</li> <li>Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. 
Static methods behave like functions because they are bound to the class and cannot access specific instance variables.</li> <li>C++ and Python support both procedural programming (functions) and object-oriented programming (methods).</li> </ul> <p>Q: Does the \"Common Types of Space Complexity\" figure reflect the absolute size of occupied space?</p> <p>No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.</p> <p>If you take \\(n = 8\\), you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.</p> <p>In practice, since we usually don't know the \"constant term\" complexity of each method, it's generally not possible to choose the best solution for \\(n = 8\\) based solely on complexity. However, for \\(n = 8^5\\), it's much easier to choose, as the growth trend becomes dominant.</p>"},{"location":"chapter_computational_complexity/time_complexity/","title":"2.3 \u00a0 Time complexity","text":"<p>Time complexity is a concept used to measure how the run time of an algorithm increases with the size of the input data. 
Understanding time complexity is crucial for accurately assessing the efficiency of an algorithm.</p> <ol> <li>Determining the Running Platform: This includes hardware configuration, programming language, system environment, etc., all of which can affect the efficiency of code execution.</li> <li>Evaluating the Run Time for Various Computational Operations: For instance, an addition operation <code>+</code> might take 1 ns, a multiplication operation <code>*</code> might take 10 ns, a print operation <code>print()</code> might take 5 ns, etc.</li> <li>Counting All the Computational Operations in the Code: Summing the execution times of all these operations gives the total run time.</li> </ol> <p>For example, consider the following code with an input size of \\(n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code># Under an operating platform\ndef algorithm(n: int):\n a = 2 # 1 ns\n a = a + 1 # 1 ns\n a = a * 2 # 10 ns\n # Cycle n times\n for _ in range(n): # 1 ns\n print(0) # 5 ns\n</code></pre> <pre><code>// Under a particular operating platform\nvoid algorithm(int n) {\n int a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for (int i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n cout << 0 << endl; // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nvoid algorithm(int n) {\n int a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for (int i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n System.out.println(0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nvoid Algorithm(int n) {\n int a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for (int i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n Console.WriteLine(0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nfunc algorithm(n int) {\n a := 2 // 1 ns\n a = a + 1 // 1 ns\n a = a * 2 // 
10 ns\n // Loop n times\n for i := 0; i < n; i++ { // 1 ns\n fmt.Println(a) // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nfunc algorithm(n: Int) {\n var a = 2 // 1 ns\n a = a + 1 // 1 ns\n a = a * 2 // 10 ns\n // Loop n times\n for _ in 0 ..< n { // 1 ns\n print(0) // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nfunction algorithm(n) {\n var a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for(let i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n console.log(0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nfunction algorithm(n: number): void {\n var a: number = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for(let i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n console.log(0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nvoid algorithm(int n) {\n int a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for (int i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n print(0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nfn algorithm(n: i32) {\n let mut a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for _ in 0..n { // 1 ns for each round i++\n println!(\"{}\", 0); // 5 ns\n }\n}\n</code></pre> <pre><code>// Under a particular operating platform\nvoid algorithm(int n) {\n int a = 2; // 1 ns\n a = a + 1; // 1 ns\n a = a * 2; // 10 ns\n // Loop n times\n for (int i = 0; i < n; i++) { // 1 ns , every round i++ is executed\n printf(\"%d\", 0); // 5 ns\n }\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>// Under a particular operating platform\nfn algorithm(n: usize) void {\n var a: i32 = 2; // 1 ns\n a += 1; // 1 ns\n a *= 2; // 10 ns\n // Loop n times\n for (0..n) |_| { // 1 ns\n std.debug.print(\"{}\\n\", .{0}); // 5 ns\n 
}\n}\n</code></pre> <p>Using the above method, the run time of the algorithm can be calculated as \\((6n + 12)\\) ns:</p> \\[ 1 + 1 + 10 + (1 + 5) \\times n = 6n + 12 \\] <p>However, in practice, counting the run time of an algorithm is neither practical nor reasonable. First, we don't want to tie the estimated time to the running platform, as algorithms need to run on various platforms. Second, it's challenging to know the run time for each type of operation, making the estimation process difficult.</p>"},{"location":"chapter_computational_complexity/time_complexity/#231-assessing-time-growth-trend","title":"2.3.1 \u00a0 Assessing time growth trend","text":"<p>Time complexity analysis does not count the algorithm's run time, but rather the growth trend of the run time as the data volume increases.</p> <p>Let's understand this concept of \"time growth trend\" with an example. Assume the input data size is \\(n\\), and consider three algorithms <code>A</code>, <code>B</code>, and <code>C</code>:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code># Time complexity of algorithm A: constant order\ndef algorithm_A(n: int):\n print(0)\n# Time complexity of algorithm B: linear order\ndef algorithm_B(n: int):\n for _ in range(n):\n print(0)\n# Time complexity of algorithm C: constant order\ndef algorithm_C(n: int):\n for _ in range(1000000):\n print(0)\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nvoid algorithm_A(int n) {\n cout << 0 << endl;\n}\n// Time complexity of algorithm B: linear order\nvoid algorithm_B(int n) {\n for (int i = 0; i < n; i++) {\n cout << 0 << endl;\n }\n}\n// Time complexity of algorithm C: constant order\nvoid algorithm_C(int n) {\n for (int i = 0; i < 1000000; i++) {\n cout << 0 << endl;\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nvoid algorithm_A(int n) {\n System.out.println(0);\n}\n// Time complexity of algorithm B: linear order\nvoid algorithm_B(int n) {\n for 
(int i = 0; i < n; i++) {\n System.out.println(0);\n }\n}\n// Time complexity of algorithm C: constant order\nvoid algorithm_C(int n) {\n for (int i = 0; i < 1000000; i++) {\n System.out.println(0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nvoid AlgorithmA(int n) {\n Console.WriteLine(0);\n}\n// Time complexity of algorithm B: linear order\nvoid AlgorithmB(int n) {\n for (int i = 0; i < n; i++) {\n Console.WriteLine(0);\n }\n}\n// Time complexity of algorithm C: constant order\nvoid AlgorithmC(int n) {\n for (int i = 0; i < 1000000; i++) {\n Console.WriteLine(0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfunc algorithm_A(n int) {\n fmt.Println(0)\n}\n// Time complexity of algorithm B: linear order\nfunc algorithm_B(n int) {\n for i := 0; i < n; i++ {\n fmt.Println(0)\n }\n}\n// Time complexity of algorithm C: constant order\nfunc algorithm_C(n int) {\n for i := 0; i < 1000000; i++ {\n fmt.Println(0)\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfunc algorithmA(n: Int) {\n print(0)\n}\n\n// Time complexity of algorithm B: linear order\nfunc algorithmB(n: Int) {\n for _ in 0 ..< n {\n print(0)\n }\n}\n\n// Time complexity of algorithm C: constant order\nfunc algorithmC(n: Int) {\n for _ in 0 ..< 1_000_000 {\n print(0)\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfunction algorithm_A(n) {\n console.log(0);\n}\n// Time complexity of algorithm B: linear order\nfunction algorithm_B(n) {\n for (let i = 0; i < n; i++) {\n console.log(0);\n }\n}\n// Time complexity of algorithm C: constant order\nfunction algorithm_C(n) {\n for (let i = 0; i < 1000000; i++) {\n console.log(0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfunction algorithm_A(n: number): void {\n console.log(0);\n}\n// Time complexity of algorithm B: linear order\nfunction algorithm_B(n: number): void {\n for 
(let i = 0; i < n; i++) {\n console.log(0);\n }\n}\n// Time complexity of algorithm C: constant order\nfunction algorithm_C(n: number): void {\n for (let i = 0; i < 1000000; i++) {\n console.log(0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nvoid algorithmA(int n) {\n print(0);\n}\n// Time complexity of algorithm B: linear order\nvoid algorithmB(int n) {\n for (int i = 0; i < n; i++) {\n print(0);\n }\n}\n// Time complexity of algorithm C: constant order\nvoid algorithmC(int n) {\n for (int i = 0; i < 1000000; i++) {\n print(0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfn algorithm_A(n: i32) {\n println!(\"{}\", 0);\n}\n// Time complexity of algorithm B: linear order\nfn algorithm_B(n: i32) {\n for _ in 0..n {\n println!(\"{}\", 0);\n }\n}\n// Time complexity of algorithm C: constant order\nfn algorithm_C(n: i32) {\n for _ in 0..1000000 {\n println!(\"{}\", 0);\n }\n}\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nvoid algorithm_A(int n) {\n printf(\"%d\", 0);\n}\n// Time complexity of algorithm B: linear order\nvoid algorithm_B(int n) {\n for (int i = 0; i < n; i++) {\n printf(\"%d\", 0);\n }\n}\n// Time complexity of algorithm C: constant order\nvoid algorithm_C(int n) {\n for (int i = 0; i < 1000000; i++) {\n printf(\"%d\", 0);\n }\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>// Time complexity of algorithm A: constant order\nfn algorithm_A(n: usize) void {\n _ = n;\n std.debug.print(\"{}\\n\", .{0});\n}\n// Time complexity of algorithm B: linear order\nfn algorithm_B(n: i32) void {\n for (0..n) |_| {\n std.debug.print(\"{}\\n\", .{0});\n }\n}\n// Time complexity of algorithm C: constant order\nfn algorithm_C(n: i32) void {\n _ = n;\n for (0..1000000) |_| {\n std.debug.print(\"{}\\n\", .{0});\n }\n}\n</code></pre> <p>Figure 2-7 shows the time complexities of these three algorithms.</p> <ul> <li>Algorithm <code>A</code> has just one print 
operation, and its run time does not grow with \\(n\\). Its time complexity is considered \"constant order.\"</li> <li>Algorithm <code>B</code> involves a print operation looping \\(n\\) times, and its run time grows linearly with \\(n\\). Its time complexity is \"linear order.\"</li> <li>Algorithm <code>C</code> has a print operation looping 1,000,000 times. Although it takes a long time, it is independent of the input data size \\(n\\). Therefore, the time complexity of <code>C</code> is the same as <code>A</code>, which is \"constant order.\"</li> </ul> <p></p> <p> Figure 2-7 \u00a0 Time growth trend of algorithms A, B, and C </p> <p>Compared to directly counting the run time of an algorithm, what are the characteristics of time complexity analysis?</p> <ul> <li>Time complexity effectively assesses algorithm efficiency. For instance, algorithm <code>B</code> has linearly growing run time, which is slower than algorithm <code>A</code> when \\(n > 1\\) and slower than <code>C</code> when \\(n > 1,000,000\\). In fact, as long as the input data size \\(n\\) is sufficiently large, a \"constant order\" complexity algorithm will always be better than a \"linear order\" one, demonstrating the essence of time growth trend.</li> <li>Time complexity analysis is more straightforward. Obviously, the running platform and the types of computational operations are irrelevant to the trend of run time growth. Therefore, in time complexity analysis, we can simply treat the execution time of all computational operations as the same \"unit time,\" simplifying the \"computational operation run time count\" to a \"computational operation count.\" This significantly reduces the complexity of estimation.</li> <li>Time complexity has its limitations. For example, although algorithms <code>A</code> and <code>C</code> have the same time complexity, their actual run times can be quite different. 
Similarly, even though algorithm <code>B</code> has a higher time complexity than <code>C</code>, it is clearly superior when the input data size \\(n\\) is small. In these cases, it's difficult to judge the efficiency of algorithms based solely on time complexity. Nonetheless, despite these issues, complexity analysis remains the most effective and commonly used method for evaluating algorithm efficiency.</li> </ul>"},{"location":"chapter_computational_complexity/time_complexity/#232-asymptotic-upper-bound","title":"2.3.2 \u00a0 Asymptotic upper bound","text":"<p>Consider a function with an input size of \\(n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>def algorithm(n: int):\n a = 1 # +1\n a = a + 1 # +1\n a = a * 2 # +1\n # Cycle n times\n for i in range(n): # +1\n print(0) # +1\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n // Loop n times\n for (int i = 0; i < n; i++) { // +1 (execute i ++ every round)\n cout << 0 << endl; // +1\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n // Loop n times\n for (int i = 0; i < n; i++) { // +1 (execute i ++ every round)\n System.out.println(0); // +1\n }\n}\n</code></pre> <pre><code>void Algorithm(int n) {\n int a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n // Loop n times\n for (int i = 0; i < n; i++) { // +1 (execute i ++ every round)\n Console.WriteLine(0); // +1\n }\n}\n</code></pre> <pre><code>func algorithm(n int) {\n a := 1 // +1\n a = a + 1 // +1\n a = a * 2 // +1\n // Loop n times\n for i := 0; i < n; i++ { // +1\n fmt.Println(a) // +1\n }\n}\n</code></pre> <pre><code>func algorithm(n: Int) {\n var a = 1 // +1\n a = a + 1 // +1\n a = a * 2 // +1\n // Loop n times\n for _ in 0 ..< n { // +1\n print(0) // +1\n }\n}\n</code></pre> <pre><code>function algorithm(n) {\n var a = 1; // +1\n a += 1; // +1\n a *= 2; // +1\n // Loop n times\n for(let i = 0; i < n; i++){ // 
+1 (execute i ++ every round)\n console.log(0); // +1\n }\n}\n</code></pre> <pre><code>function algorithm(n: number): void{\n var a: number = 1; // +1\n a += 1; // +1\n a *= 2; // +1\n // Loop n times\n for(let i = 0; i < n; i++){ // +1 (execute i ++ every round)\n console.log(0); // +1\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n // Loop n times\n for (int i = 0; i < n; i++) { // +1 (execute i ++ every round)\n print(0); // +1\n }\n}\n</code></pre> <pre><code>fn algorithm(n: i32) {\n let mut a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n\n // Loop n times\n for _ in 0..n { // +1 (execute i ++ every round)\n println!(\"{}\", 0); // +1\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +1\n a = a + 1; // +1\n a = a * 2; // +1\n // Loop n times\n for (int i = 0; i < n; i++) { // +1 (execute i ++ every round)\n printf(\"%d\", 0); // +1\n }\n} \n</code></pre> <pre><code>\n</code></pre> <pre><code>fn algorithm(n: usize) void {\n var a: i32 = 1; // +1\n a += 1; // +1\n a *= 2; // +1\n // Loop n times\n for (0..n) |_| { // +1 (execute i ++ every round)\n std.debug.print(\"{}\\n\", .{0}); // +1\n }\n}\n</code></pre> <p>Given a function that represents the number of operations of an algorithm as a function of the input size \\(n\\), denoted as \\(T(n)\\), consider the following example:</p> \\[ T(n) = 3 + 2n \\] <p>Since \\(T(n)\\) is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as \\(O(n)\\). This mathematical notation, known as big-O notation, represents the asymptotic upper bound of the function \\(T(n)\\).</p> <p>In essence, time complexity analysis is about finding the asymptotic upper bound of the \"number of operations \\(T(n)\\)\". 
It has a precise mathematical definition.</p> <p>Asymptotic Upper Bound</p> <p>If there exist positive real numbers \\(c\\) and \\(n_0\\) such that for all \\(n > n_0\\), \\(T(n) \\leq c \\cdot f(n)\\), then \\(f(n)\\) is considered an asymptotic upper bound of \\(T(n)\\), denoted as \\(T(n) = O(f(n))\\).</p> <p>As shown in Figure 2-8, calculating the asymptotic upper bound involves finding a function \\(f(n)\\) such that, as \\(n\\) approaches infinity, \\(T(n)\\) and \\(f(n)\\) have the same growth order, differing only by a constant factor \\(c\\).</p> <p></p> <p> Figure 2-8 \u00a0 Asymptotic upper bound of a function </p>"},{"location":"chapter_computational_complexity/time_complexity/#233-calculation-method","title":"2.3.3 \u00a0 Calculation method","text":"<p>While the concept of asymptotic upper bound might seem mathematically dense, you don't need to fully grasp it right away. Let's first understand the method of calculation, which can be practiced and comprehended over time.</p> <p>Once \\(f(n)\\) is determined, we obtain the time complexity \\(O(f(n))\\). But how do we determine the asymptotic upper bound \\(f(n)\\)? This process generally involves two steps: counting the number of operations and determining the asymptotic upper bound.</p>"},{"location":"chapter_computational_complexity/time_complexity/#1-step-1-counting-the-number-of-operations","title":"1. \u00a0 Step 1: counting the number of operations","text":"<p>This step involves going through the code line by line. However, due to the presence of the constant \\(c\\) in \\(c \\cdot f(n)\\), all coefficients and constant terms in \\(T(n)\\) can be ignored. This principle allows for simplification techniques in counting operations.</p> <ol> <li>Ignore the constant terms in \\(T(n)\\), as they are independent of \\(n\\) and thus do not affect the time complexity.</li> <li>Omit all coefficients. 
For example, looping \\(2n\\), \\(5n + 1\\) times, etc., can be simplified to \\(n\\) times since the coefficient before \\(n\\) does not impact the time complexity.</li> <li>Use multiplication for nested loops. The total number of operations equals the product of the number of operations in each loop, applying the simplification techniques from points 1 and 2 for each loop level.</li> </ol> <p>Given a function, we can use these techniques to count operations:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>def algorithm(n: int):\n a = 1 # +0 (trick 1)\n a = a + n # +0 (trick 1)\n # +n (technique 2)\n for i in range(5 * n + 1):\n print(0)\n # +n*n (technique 3)\n for i in range(2 * n):\n for j in range(n + 1):\n print(0)\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (int i = 0; i < 5 * n + 1; i++) {\n cout << 0 << endl;\n }\n // +n*n (technique 3)\n for (int i = 0; i < 2 * n; i++) {\n for (int j = 0; j < n + 1; j++) {\n cout << 0 << endl;\n }\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (int i = 0; i < 5 * n + 1; i++) {\n System.out.println(0);\n }\n // +n*n (technique 3)\n for (int i = 0; i < 2 * n; i++) {\n for (int j = 0; j < n + 1; j++) {\n System.out.println(0);\n }\n }\n}\n</code></pre> <pre><code>void Algorithm(int n) {\n int a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (int i = 0; i < 5 * n + 1; i++) {\n Console.WriteLine(0);\n }\n // +n*n (technique 3)\n for (int i = 0; i < 2 * n; i++) {\n for (int j = 0; j < n + 1; j++) {\n Console.WriteLine(0);\n }\n }\n}\n</code></pre> <pre><code>func algorithm(n int) {\n a := 1 // +0 (trick 1)\n a = a + n // +0 (trick 1)\n // +n (technique 2)\n for i := 0; i < 5 * n + 1; i++ {\n fmt.Println(0)\n }\n // +n*n (technique 3)\n for i := 0; i < 2 * n; i++ {\n for j := 0; j < n + 1; j++ {\n 
fmt.Println(0)\n }\n }\n}\n</code></pre> <pre><code>func algorithm(n: Int) {\n var a = 1 // +0 (trick 1)\n a = a + n // +0 (trick 1)\n // +n (technique 2)\n for _ in 0 ..< (5 * n + 1) {\n print(0)\n }\n // +n*n (technique 3)\n for _ in 0 ..< (2 * n) {\n for _ in 0 ..< (n + 1) {\n print(0)\n }\n }\n}\n</code></pre> <pre><code>function algorithm(n) {\n let a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (let i = 0; i < 5 * n + 1; i++) {\n console.log(0);\n }\n // +n*n (technique 3)\n for (let i = 0; i < 2 * n; i++) {\n for (let j = 0; j < n + 1; j++) {\n console.log(0);\n }\n }\n}\n</code></pre> <pre><code>function algorithm(n: number): void {\n let a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (let i = 0; i < 5 * n + 1; i++) {\n console.log(0);\n }\n // +n*n (technique 3)\n for (let i = 0; i < 2 * n; i++) {\n for (let j = 0; j < n + 1; j++) {\n console.log(0);\n }\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (int i = 0; i < 5 * n + 1; i++) {\n print(0);\n }\n // +n*n (technique 3)\n for (int i = 0; i < 2 * n; i++) {\n for (int j = 0; j < n + 1; j++) {\n print(0);\n }\n }\n}\n</code></pre> <pre><code>fn algorithm(n: i32) {\n let mut a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n\n // +n (technique 2)\n for i in 0..(5 * n + 1) {\n println!(\"{}\", 0);\n }\n\n // +n*n (technique 3)\n for i in 0..(2 * n) {\n for j in 0..(n + 1) {\n println!(\"{}\", 0);\n }\n }\n}\n</code></pre> <pre><code>void algorithm(int n) {\n int a = 1; // +0 (trick 1)\n a = a + n; // +0 (trick 1)\n // +n (technique 2)\n for (int i = 0; i < 5 * n + 1; i++) {\n printf(\"%d\", 0);\n }\n // +n*n (technique 3)\n for (int i = 0; i < 2 * n; i++) {\n for (int j = 0; j < n + 1; j++) {\n printf(\"%d\", 0);\n }\n }\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>fn algorithm(n: usize) void {\n var a: i32 = 1; // +0 (trick 1)\n a = a 
+ @as(i32, @intCast(n)); // +0 (trick 1)\n\n // +n (technique 2)\n for(0..(5 * n + 1)) |_| {\n std.debug.print(\"{}\\n\", .{0});\n }\n\n // +n*n (technique 3)\n for(0..(2 * n)) |_| {\n for(0..(n + 1)) |_| {\n std.debug.print(\"{}\\n\", .{0});\n }\n }\n}\n</code></pre> <p>The formula below shows the counting results before and after simplification, both leading to a time complexity of \\(O(n^2)\\):</p> \\[ \\begin{aligned} T(n) & = 2n(n + 1) + (5n + 1) + 2 & \\text{Complete Count (-.-|||)} \\newline & = 2n^2 + 7n + 3 \\newline T(n) & = n^2 + n & \\text{Simplified Count (o.O)} \\end{aligned} \\]"},{"location":"chapter_computational_complexity/time_complexity/#2-step-2-determining-the-asymptotic-upper-bound","title":"2. \u00a0 Step 2: determining the asymptotic upper bound","text":"<p>The time complexity is determined by the highest order term in \\(T(n)\\). This is because, as \\(n\\) approaches infinity, the highest order term dominates, rendering the influence of other terms negligible.</p> <p>The following table illustrates examples of different operation counts and their corresponding time complexities. Some exaggerated values are used to emphasize that coefficients cannot alter the order of growth. When \\(n\\) becomes very large, these constants become insignificant.</p> <p> Table: Time complexity for different operation counts </p> Operation Count \\(T(n)\\) Time Complexity \\(O(f(n))\\) \\(100000\\) \\(O(1)\\) \\(3n + 2\\) \\(O(n)\\) \\(2n^2 + 3n + 2\\) \\(O(n^2)\\) \\(n^3 + 10000n^2\\) \\(O(n^3)\\) \\(2^n + 10000n^{10000}\\) \\(O(2^n)\\)"},{"location":"chapter_computational_complexity/time_complexity/#234-common-types-of-time-complexity","title":"2.3.4 \u00a0 Common types of time complexity","text":"<p>Let's consider the input data size as \\(n\\). The common types of time complexities are shown in Figure 2-9, arranged from lowest to highest:</p> \\[ \\begin{aligned} & O(1) < O(\\log n) < O(n) < O(n \\log n) < O(n^2) < O(2^n) < O(n!) 
\\newline & \\text{Constant} < \\text{Log} < \\text{Linear} < \\text{Linear-Log} < \\text{Quadratic} < \\text{Exp} < \\text{Factorial} \\end{aligned} \\] <p></p> <p> Figure 2-9 \u00a0 Common types of time complexity </p>"},{"location":"chapter_computational_complexity/time_complexity/#1-constant-order-o1","title":"1. \u00a0 Constant order \\(O(1)\\)","text":"<p>Constant order means the number of operations is independent of the input data size \\(n\\). In the following function, although the number of operations <code>size</code> might be large, the time complexity remains \\(O(1)\\) as it's unrelated to \\(n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def constant(n: int) -> int:\n \"\"\"Constant complexity\"\"\"\n count = 0\n size = 100000\n for _ in range(size):\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Constant complexity */\nint constant(int n) {\n int count = 0;\n int size = 100000;\n for (int i = 0; i < size; i++)\n count++;\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Constant complexity */\nint constant(int n) {\n int count = 0;\n int size = 100000;\n for (int i = 0; i < size; i++)\n count++;\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{Constant}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{constant}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{constant}\n</code></pre> 
time_complexity.zig<pre><code>[class]{}-[func]{constant}\n</code></pre>"},{"location":"chapter_computational_complexity/time_complexity/#2-linear-order-on","title":"2. \u00a0 Linear order \\(O(n)\\)","text":"<p>Linear order indicates the number of operations grows linearly with the input data size \\(n\\). Linear order commonly appears in single-loop structures:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def linear(n: int) -> int:\n \"\"\"Linear complexity\"\"\"\n count = 0\n for _ in range(n):\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Linear complexity */\nint linear(int n) {\n int count = 0;\n for (int i = 0; i < n; i++)\n count++;\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Linear complexity */\nint linear(int n) {\n int count = 0;\n for (int i = 0; i < n; i++)\n count++;\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{Linear}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{linear}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{linear}\n</code></pre> <p>Operations like array traversal and linked list traversal have a time complexity of \\(O(n)\\), where \\(n\\) is the length of the array or list:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def array_traversal(nums: list[int]) -> int:\n \"\"\"Linear complexity 
(traversing an array)\"\"\"\n count = 0\n # Loop count is proportional to the length of the array\n for num in nums:\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Linear complexity (traversing an array) */\nint arrayTraversal(vector<int> &nums) {\n int count = 0;\n // Loop count is proportional to the length of the array\n for (int num : nums) {\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Linear complexity (traversing an array) */\nint arrayTraversal(int[] nums) {\n int count = 0;\n // Loop count is proportional to the length of the array\n for (int num : nums) {\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{ArrayTraversal}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{array_traversal}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{array_traversal}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{arrayTraversal}\n</code></pre> <p>It's important to note that the input data size \\(n\\) should be determined based on the type of input data. For example, in the first example, \\(n\\) represents the input data size, while in the second example, the length of the array \\(n\\) is the data size.</p>"},{"location":"chapter_computational_complexity/time_complexity/#3-quadratic-order-on2","title":"3. 
\u00a0 Quadratic order \\(O(n^2)\\)","text":"<p>Quadratic order means the number of operations grows quadratically with the input data size \\(n\\). Quadratic order typically appears in nested loops, where both the outer and inner loops have a time complexity of \\(O(n)\\), resulting in an overall complexity of \\(O(n^2)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def quadratic(n: int) -> int:\n \"\"\"Quadratic complexity\"\"\"\n count = 0\n # Loop count is squared in relation to the data size n\n for i in range(n):\n for j in range(n):\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Quadratic complexity */\nint quadratic(int n) {\n int count = 0;\n // Loop count is squared in relation to the data size n\n for (int i = 0; i < n; i++) {\n for (int j = 0; j < n; j++) {\n count++;\n }\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Quadratic complexity */\nint quadratic(int n) {\n int count = 0;\n // Loop count is squared in relation to the data size n\n for (int i = 0; i < n; i++) {\n for (int j = 0; j < n; j++) {\n count++;\n }\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{Quadratic}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{quadratic}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{quadratic}\n</code></pre> <p>Figure 
2-10 compares constant order, linear order, and quadratic order time complexities.</p> <p></p> <p> Figure 2-10 \u00a0 Constant, linear, and quadratic order time complexities </p> <p>For instance, in bubble sort, the outer loop runs \\(n - 1\\) times, and the inner loop runs \\(n-1\\), \\(n-2\\), ..., \\(2\\), \\(1\\) times, averaging \\(n / 2\\) times, resulting in a time complexity of \\(O((n - 1) n / 2) = O(n^2)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def bubble_sort(nums: list[int]) -> int:\n \"\"\"Quadratic complexity (bubble sort)\"\"\"\n count = 0 # Counter\n # Outer loop: unsorted range is [0, i]\n for i in range(len(nums) - 1, 0, -1):\n # Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for j in range(i):\n if nums[j] > nums[j + 1]:\n # Swap nums[j] and nums[j + 1]\n tmp: int = nums[j]\n nums[j] = nums[j + 1]\n nums[j + 1] = tmp\n count += 3 # Element swap includes 3 individual operations\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Quadratic complexity (bubble sort) */\nint bubbleSort(vector<int> &nums) {\n int count = 0; // Counter\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.size() - 1; i > 0; i--) {\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for (int j = 0; j < i; j++) {\n if (nums[j] > nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n int tmp = nums[j];\n nums[j] = nums[j + 1];\n nums[j + 1] = tmp;\n count += 3; // Element swap includes 3 individual operations\n }\n }\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Quadratic complexity (bubble sort) */\nint bubbleSort(int[] nums) {\n int count = 0; // Counter\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.length - 1; i > 0; i--) {\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for (int j = 0; j < i; j++) {\n if (nums[j] > 
nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n int tmp = nums[j];\n nums[j] = nums[j + 1];\n nums[j + 1] = tmp;\n count += 3; // Element swap includes 3 individual operations\n }\n }\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{BubbleSort}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{bubble_sort}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{bubble_sort}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre>"},{"location":"chapter_computational_complexity/time_complexity/#4-exponential-order-o2n","title":"4. \u00a0 Exponential order \\(O(2^n)\\)","text":"<p>Biological \"cell division\" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in \\(2^n\\) cells after \\(n\\) divisions.</p> <p>Figure 2-11 and code simulate the cell division process, with a time complexity of \\(O(2^n)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def exponential(n: int) -> int:\n \"\"\"Exponential complexity (loop implementation)\"\"\"\n count = 0\n base = 1\n # Cells split into two every round, forming the sequence 1, 2, 4, 8, ..., 2^(n-1)\n for _ in range(n):\n for _ in range(base):\n count += 1\n base *= 2\n # count = 1 + 2 + 4 + 8 + .. 
+ 2^(n-1) = 2^n - 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Exponential complexity (loop implementation) */\nint exponential(int n) {\n int count = 0, base = 1;\n // Cells split into two every round, forming the sequence 1, 2, 4, 8, ..., 2^(n-1)\n for (int i = 0; i < n; i++) {\n for (int j = 0; j < base; j++) {\n count++;\n }\n base *= 2;\n }\n // count = 1 + 2 + 4 + 8 + .. + 2^(n-1) = 2^n - 1\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Exponential complexity (loop implementation) */\nint exponential(int n) {\n int count = 0, base = 1;\n // Cells split into two every round, forming the sequence 1, 2, 4, 8, ..., 2^(n-1)\n for (int i = 0; i < n; i++) {\n for (int j = 0; j < base; j++) {\n count++;\n }\n base *= 2;\n }\n // count = 1 + 2 + 4 + 8 + .. + 2^(n-1) = 2^n - 1\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{Exponential}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{exponential}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{exponential}\n</code></pre> <p></p> <p> Figure 2-11 \u00a0 Exponential order time complexity </p> <p>In practice, exponential order often appears in recursive functions. 
For example, in the code below, it recursively splits into two halves, stopping after \\(n\\) divisions:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def exp_recur(n: int) -> int:\n \"\"\"Exponential complexity (recursive implementation)\"\"\"\n if n == 1:\n return 1\n return exp_recur(n - 1) + exp_recur(n - 1) + 1\n</code></pre> time_complexity.cpp<pre><code>/* Exponential complexity (recursive implementation) */\nint expRecur(int n) {\n if (n == 1)\n return 1;\n return expRecur(n - 1) + expRecur(n - 1) + 1;\n}\n</code></pre> time_complexity.java<pre><code>/* Exponential complexity (recursive implementation) */\nint expRecur(int n) {\n if (n == 1)\n return 1;\n return expRecur(n - 1) + expRecur(n - 1) + 1;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{ExpRecur}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{exp_recur}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{expRecur}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{exp_recur}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{expRecur}\n</code></pre> <p>Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.</p>"},{"location":"chapter_computational_complexity/time_complexity/#5-logarithmic-order-olog-n","title":"5. 
\u00a0 Logarithmic order \\(O(\\log n)\\)","text":"<p>In contrast to exponential order, logarithmic order reflects situations where \"the size is halved each round.\" Given an input data size \\(n\\), since the size is halved each round, the number of iterations is \\(\\log_2 n\\), the inverse function of \\(2^n\\).</p> <p>Figure 2-12 and code simulate the \"halving each round\" process, with a time complexity of \\(O(\\log_2 n)\\), commonly abbreviated as \\(O(\\log n)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def logarithmic(n: int) -> int:\n \"\"\"Logarithmic complexity (loop implementation)\"\"\"\n count = 0\n while n > 1:\n n = n / 2\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Logarithmic complexity (loop implementation) */\nint logarithmic(int n) {\n int count = 0;\n while (n > 1) {\n n = n / 2;\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Logarithmic complexity (loop implementation) */\nint logarithmic(int n) {\n int count = 0;\n while (n > 1) {\n n = n / 2;\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{Logarithmic}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{logarithmic}\n</code></pre> <p></p> <p> Figure 
2-12 \u00a0 Logarithmic order time complexity </p> <p>Like exponential order, logarithmic order also frequently appears in recursive functions. The code below forms a recursive tree of height \\(\\log_2 n\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def log_recur(n: int) -> int:\n \"\"\"Logarithmic complexity (recursive implementation)\"\"\"\n if n <= 1:\n return 0\n return log_recur(n / 2) + 1\n</code></pre> time_complexity.cpp<pre><code>/* Logarithmic complexity (recursive implementation) */\nint logRecur(int n) {\n if (n <= 1)\n return 0;\n return logRecur(n / 2) + 1;\n}\n</code></pre> time_complexity.java<pre><code>/* Logarithmic complexity (recursive implementation) */\nint logRecur(int n) {\n if (n <= 1)\n return 0;\n return logRecur(n / 2) + 1;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{LogRecur}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{log_recur}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{logRecur}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{log_recur}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{logRecur}\n</code></pre> <p>Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the \"split into many\" and \"simplify complex problems\" approach. 
It's slow-growing and is the most ideal time complexity after constant order.</p> <p>What is the base of \\(O(\\log n)\\)?</p> <p>Technically, \"splitting into \\(m\\)\" corresponds to a time complexity of \\(O(\\log_m n)\\). Using the logarithm base change formula, we can equate different logarithmic complexities:</p> \\[ O(\\log_m n) = O(\\log_k n / \\log_k m) = O(\\log_k n) \\] <p>This means the base \\(m\\) can be changed without affecting the complexity. Therefore, we often omit the base \\(m\\) and simply denote logarithmic order as \\(O(\\log n)\\).</p>"},{"location":"chapter_computational_complexity/time_complexity/#6-linear-logarithmic-order-on-log-n","title":"6. \u00a0 Linear-logarithmic order \\(O(n \\log n)\\)","text":"<p>Linear-logarithmic order often appears in nested loops, with the complexities of the two loops being \\(O(\\log n)\\) and \\(O(n)\\) respectively. The related code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def linear_log_recur(n: int) -> int:\n \"\"\"Linear logarithmic complexity\"\"\"\n if n <= 1:\n return 1\n count: int = linear_log_recur(n // 2) + linear_log_recur(n // 2)\n for _ in range(n):\n count += 1\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Linear logarithmic complexity */\nint linearLogRecur(int n) {\n if (n <= 1)\n return 1;\n int count = linearLogRecur(n / 2) + linearLogRecur(n / 2);\n for (int i = 0; i < n; i++) {\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Linear logarithmic complexity */\nint linearLogRecur(int n) {\n if (n <= 1)\n return 1;\n int count = linearLogRecur(n / 2) + linearLogRecur(n / 2);\n for (int i = 0; i < n; i++) {\n count++;\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{LinearLogRecur}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> 
time_complexity.swift<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{linear_log_recur}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{linear_log_recur}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{linearLogRecur}\n</code></pre> <p>Figure 2-13 demonstrates how linear-logarithmic order is generated. Each level of a binary tree has \\(n\\) operations, and the tree has \\(\\log_2 n + 1\\) levels, resulting in a time complexity of \\(O(n \\log n)\\).</p> <p></p> <p> Figure 2-13 \u00a0 Linear-logarithmic order time complexity </p> <p>Mainstream sorting algorithms typically have a time complexity of \\(O(n \\log n)\\), such as quicksort, mergesort, and heapsort.</p>"},{"location":"chapter_computational_complexity/time_complexity/#7-factorial-order-on","title":"7. \u00a0 Factorial order \\(O(n!)\\)","text":"<p>Factorial order corresponds to the mathematical problem of \"full permutation.\" Given \\(n\\) distinct elements, the total number of possible permutations is:</p> \\[ n! = n \\times (n - 1) \\times (n - 2) \\times \\dots \\times 2 \\times 1 \\] <p>Factorials are typically implemented using recursion. 
As shown in the code and Figure 2-14, the first level splits into \\(n\\) branches, the second level into \\(n - 1\\) branches, and so on, stopping after the \\(n\\)th level:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig time_complexity.py<pre><code>def factorial_recur(n: int) -> int:\n \"\"\"Factorial complexity (recursive implementation)\"\"\"\n if n == 0:\n return 1\n count = 0\n # From 1 split into n\n for _ in range(n):\n count += factorial_recur(n - 1)\n return count\n</code></pre> time_complexity.cpp<pre><code>/* Factorial complexity (recursive implementation) */\nint factorialRecur(int n) {\n if (n == 0)\n return 1;\n int count = 0;\n // From 1 split into n\n for (int i = 0; i < n; i++) {\n count += factorialRecur(n - 1);\n }\n return count;\n}\n</code></pre> time_complexity.java<pre><code>/* Factorial complexity (recursive implementation) */\nint factorialRecur(int n) {\n if (n == 0)\n return 1;\n int count = 0;\n // From 1 split into n\n for (int i = 0; i < n; i++) {\n count += factorialRecur(n - 1);\n }\n return count;\n}\n</code></pre> time_complexity.cs<pre><code>[class]{time_complexity}-[func]{FactorialRecur}\n</code></pre> time_complexity.go<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.swift<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.js<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.ts<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.dart<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.rs<pre><code>[class]{}-[func]{factorial_recur}\n</code></pre> time_complexity.c<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.kt<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> time_complexity.rb<pre><code>[class]{}-[func]{factorial_recur}\n</code></pre> time_complexity.zig<pre><code>[class]{}-[func]{factorialRecur}\n</code></pre> <p></p> <p> Figure 2-14 \u00a0 Factorial order time 
complexity </p> <p>Note that factorial order grows even faster than exponential order; it's unacceptable for larger \\(n\\) values.</p>"},{"location":"chapter_computational_complexity/time_complexity/#235-worst-best-and-average-time-complexities","title":"2.3.5 \u00a0 Worst, best, and average time complexities","text":"<p>The time efficiency of an algorithm is often not fixed but depends on the distribution of the input data. Assume we have an array <code>nums</code> of length \\(n\\), consisting of numbers from \\(1\\) to \\(n\\), each appearing only once, but in a randomly shuffled order. The task is to return the index of the element \\(1\\). We can draw the following conclusions:</p> <ul> <li>When <code>nums = [?, ?, ..., 1]</code>, that is, when the last element is \\(1\\), it requires a complete traversal of the array, achieving the worst-case time complexity of \\(O(n)\\).</li> <li>When <code>nums = [1, ?, ?, ...]</code>, that is, when the first element is \\(1\\), no matter the length of the array, no further traversal is needed, achieving the best-case time complexity of \\(\\Omega(1)\\).</li> </ul> <p>The \"worst-case time complexity\" corresponds to the asymptotic upper bound, denoted by the big \\(O\\) notation. 
Correspondingly, the \"best-case time complexity\" corresponds to the asymptotic lower bound, denoted by \\(\\Omega\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig worst_best_time_complexity.py<pre><code>def random_numbers(n: int) -> list[int]:\n \"\"\"Generate an array with elements: 1, 2, ..., n, order shuffled\"\"\"\n # Generate array nums = [1, 2, 3, ..., n]\n nums = [i for i in range(1, n + 1)]\n # Randomly shuffle array elements\n random.shuffle(nums)\n return nums\n\ndef find_one(nums: list[int]) -> int:\n \"\"\"Find the index of number 1 in array nums\"\"\"\n for i in range(len(nums)):\n # When element 1 is at the start of the array, achieve best time complexity O(1)\n # When element 1 is at the end of the array, achieve worst time complexity O(n)\n if nums[i] == 1:\n return i\n return -1\n</code></pre> worst_best_time_complexity.cpp<pre><code>/* Generate an array with elements {1, 2, ..., n} in a randomly shuffled order */\nvector<int> randomNumbers(int n) {\n vector<int> nums(n);\n // Generate array nums = { 1, 2, 3, ..., n }\n for (int i = 0; i < n; i++) {\n nums[i] = i + 1;\n }\n // Generate a random seed using system time\n unsigned seed = chrono::system_clock::now().time_since_epoch().count();\n // Randomly shuffle array elements\n shuffle(nums.begin(), nums.end(), default_random_engine(seed));\n return nums;\n}\n\n/* Find the index of number 1 in array nums */\nint findOne(vector<int> &nums) {\n for (int i = 0; i < nums.size(); i++) {\n // When element 1 is at the start of the array, achieve best time complexity O(1)\n // When element 1 is at the end of the array, achieve worst time complexity O(n)\n if (nums[i] == 1)\n return i;\n }\n return -1;\n}\n</code></pre> worst_best_time_complexity.java<pre><code>/* Generate an array with elements {1, 2, ..., n} in a randomly shuffled order */\nint[] randomNumbers(int n) {\n Integer[] nums = new Integer[n];\n // Generate array nums = { 1, 2, 3, ..., n }\n for (int i = 0; i < n; i++) {\n nums[i] = i + 
1;\n }\n // Randomly shuffle array elements\n Collections.shuffle(Arrays.asList(nums));\n // Integer[] -> int[]\n int[] res = new int[n];\n for (int i = 0; i < n; i++) {\n res[i] = nums[i];\n }\n return res;\n}\n\n/* Find the index of number 1 in array nums */\nint findOne(int[] nums) {\n for (int i = 0; i < nums.length; i++) {\n // When element 1 is at the start of the array, achieve best time complexity O(1)\n // When element 1 is at the end of the array, achieve worst time complexity O(n)\n if (nums[i] == 1)\n return i;\n }\n return -1;\n}\n</code></pre> worst_best_time_complexity.cs<pre><code>[class]{worst_best_time_complexity}-[func]{RandomNumbers}\n\n[class]{worst_best_time_complexity}-[func]{FindOne}\n</code></pre> worst_best_time_complexity.go<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.swift<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.js<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.ts<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.dart<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.rs<pre><code>[class]{}-[func]{random_numbers}\n\n[class]{}-[func]{find_one}\n</code></pre> worst_best_time_complexity.c<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.kt<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> worst_best_time_complexity.rb<pre><code>[class]{}-[func]{random_numbers}\n\n[class]{}-[func]{find_one}\n</code></pre> worst_best_time_complexity.zig<pre><code>[class]{}-[func]{randomNumbers}\n\n[class]{}-[func]{findOne}\n</code></pre> <p>It's important to note that the best-case time complexity is rarely used in practice, as it is 
usually only achieved with very low probability and can be misleading. The worst-case time complexity is more practical, as it provides a guaranteed bound on efficiency, allowing us to use the algorithm with confidence.</p> <p>From the above example, it's clear that both the worst-case and best-case time complexities only occur under \"special data distributions,\" which may arise with small probability and may not accurately reflect the algorithm's running efficiency. In contrast, the average time complexity reflects the algorithm's efficiency under random input data and is denoted by the \\(\\Theta\\) notation.</p> <p>For some algorithms, we can simply estimate the average case under a random data distribution. In the example above, since the input array is shuffled, element \\(1\\) is equally likely to appear at any index. Therefore, the average number of loops is half the array length, \\(n / 2\\), giving an average time complexity of \\(\\Theta(n / 2) = \\Theta(n)\\).</p> <p>However, calculating the average time complexity for more complex algorithms can be quite difficult, as it's challenging to analyze the overall mathematical expectation under the data distribution. In such cases, we usually use the worst-case time complexity as the standard for judging the efficiency of the algorithm.</p> <p>Why is the \\(\\Theta\\) symbol rarely seen?</p> <p>Possibly because the \\(O\\) notation is more commonly used in speech, it is often used to represent the average time complexity as well. However, strictly speaking, this practice is not accurate. In this book and other materials, if you encounter a statement like \"average time complexity \\(O(n)\\)\", please understand it directly as \\(\\Theta(n)\\).</p>"},{"location":"chapter_data_structure/","title":"Chapter 3. 
\u00a0 Data structures","text":"<p>Abstract</p> <p>Data structures serve as a robust and diverse framework.</p> <p>They offer a blueprint for the orderly organization of data, upon which algorithms come to life.</p>"},{"location":"chapter_data_structure/#chapter-contents","title":"Chapter contents","text":"<ul> <li>3.1 \u00a0 Classification of data structures</li> <li>3.2 \u00a0 Basic data types</li> <li>3.3 \u00a0 Number encoding *</li> <li>3.4 \u00a0 Character encoding *</li> <li>3.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_data_structure/basic_data_types/","title":"3.2 \u00a0 Basic data types","text":"<p>When discussing data in computers, various forms like text, images, videos, voice, and 3D models come to mind. Despite their different organizational forms, they are all composed of various basic data types.</p> <p>Basic data types are those that the CPU can directly operate on and are directly used in algorithms, mainly including the following.</p> <ul> <li>Integer types: <code>byte</code>, <code>short</code>, <code>int</code>, <code>long</code>.</li> <li>Floating-point types: <code>float</code>, <code>double</code>, used to represent decimals.</li> <li>Character type: <code>char</code>, used to represent letters, punctuation, and even emojis in various languages.</li> <li>Boolean type: <code>bool</code>, used to represent \"yes\" or \"no\" decisions.</li> </ul> <p>Basic data types are stored in computers in binary form. One binary digit is 1 bit. In most modern operating systems, 1 byte consists of 8 bits.</p> <p>The range of values for basic data types depends on the size of the space they occupy. 
Below, we take Java as an example.</p> <ul> <li>The integer type <code>byte</code> occupies 1 byte = 8 bits and can represent \\(2^8\\) numbers.</li> <li>The integer type <code>int</code> occupies 4 bytes = 32 bits and can represent \\(2^{32}\\) numbers.</li> </ul> <p>The following table lists the space occupied, value range, and default values of various basic data types in Java. While memorizing this table isn't necessary, having a general understanding of it and referencing it when required is recommended.</p> <p> Table 3-1 \u00a0 Space occupied and value range of basic data types </p> Type Symbol Space Occupied Minimum Value Maximum Value Default Value Integer <code>byte</code> 1 byte \\(-2^7\\) (\\(-128\\)) \\(2^7 - 1\\) (\\(127\\)) 0 <code>short</code> 2 bytes \\(-2^{15}\\) \\(2^{15} - 1\\) 0 <code>int</code> 4 bytes \\(-2^{31}\\) \\(2^{31} - 1\\) 0 <code>long</code> 8 bytes \\(-2^{63}\\) \\(2^{63} - 1\\) 0 Float <code>float</code> 4 bytes \\(1.175 \\times 10^{-38}\\) \\(3.403 \\times 10^{38}\\) \\(0.0\\text{f}\\) <code>double</code> 8 bytes \\(2.225 \\times 10^{-308}\\) \\(1.798 \\times 10^{308}\\) 0.0 Char <code>char</code> 2 bytes 0 \\(2^{16} - 1\\) 0 Boolean <code>bool</code> 1 byte \\(\\text{false}\\) \\(\\text{true}\\) \\(\\text{false}\\) <p>Please note that the above table is specific to Java's basic data types. Every programming language has its own data type definitions, which might differ in space occupied, value ranges, and default values.</p> <ul> <li>In Python, the integer type <code>int</code> can be of any size, limited only by available memory; the floating-point type <code>float</code> is a 64-bit double-precision type; there is no <code>char</code> type, as a single character is actually a string <code>str</code> of length 1.</li> <li>C and C++ do not specify the size of basic data types; it varies with the implementation and platform. 
The above table follows the LP64 data model, used for Unix 64-bit operating systems including Linux and macOS.</li> <li>The size of <code>char</code> in C and C++ is 1 byte, while in most programming languages, it depends on the specific character encoding method, as detailed in the \"Character Encoding\" chapter.</li> <li>Even though representing a boolean only requires 1 bit (0 or 1), it is usually stored in memory as 1 byte. This is because modern computer CPUs typically use 1 byte as the smallest addressable memory unit.</li> </ul> <p>So, what is the connection between basic data types and data structures? We know that data structures are ways to organize and store data in computers. The focus here is on \"structure\" rather than \"data\".</p> <p>If we want to represent \"a row of numbers\", we naturally think of using an array. This is because the linear structure of an array can represent the adjacency and the ordering of the numbers, but whether the stored content is an integer <code>int</code>, a decimal <code>float</code>, or a character <code>char</code>, is irrelevant to the \"data structure\".</p> <p>In other words, basic data types provide the \"content type\" of data, while data structures provide the \"way of organizing\" data. 
For example, in the following code, we use the same data structure (array) to store and represent different basic data types, including <code>int</code>, <code>float</code>, <code>char</code>, <code>bool</code>, etc.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code># Using various basic data types to initialize arrays\nnumbers: list[int] = [0] * 5\ndecimals: list[float] = [0.0] * 5\n# Python's characters are actually strings of length 1\ncharacters: list[str] = ['0'] * 5\nbools: list[bool] = [False] * 5\n# Python's lists can freely store various basic data types and object references\ndata = [0, 0.0, 'a', False, ListNode(0)]\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nint numbers[5];\nfloat decimals[5];\nchar characters[5];\nbool bools[5];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nint[] numbers = new int[5];\nfloat[] decimals = new float[5];\nchar[] characters = new char[5];\nboolean[] bools = new boolean[5];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nint[] numbers = new int[5];\nfloat[] decimals = new float[5];\nchar[] characters = new char[5];\nbool[] bools = new bool[5];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nvar numbers = [5]int{}\nvar decimals = [5]float64{}\nvar characters = [5]byte{}\nvar bools = [5]bool{}\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nlet numbers = Array(repeating: 0, count: 5)\nlet decimals = Array(repeating: 0.0, count: 5)\nlet characters: [Character] = Array(repeating: \"a\", count: 5)\nlet bools = Array(repeating: false, count: 5)\n</code></pre> <pre><code>// JavaScript's arrays can freely store various basic data types and objects\nconst array = [0, 0.0, 'a', false];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nconst numbers: number[] = [];\nconst characters: string[] = [];\nconst bools: boolean[] 
= [];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nList<int> numbers = List.filled(5, 0);\nList<double> decimals = List.filled(5, 0.0);\nList<String> characters = List.filled(5, 'a');\nList<bool> bools = List.filled(5, false);\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nlet numbers: Vec<i32> = vec![0; 5];\nlet decimals: Vec<f32> = vec![0.0; 5];\nlet characters: Vec<char> = vec!['0'; 5];\nlet bools: Vec<bool> = vec![false; 5];\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nint numbers[10];\nfloat decimals[10];\nchar characters[10];\nbool bools[10];\n</code></pre> <pre><code>\n</code></pre> <pre><code>// Using various basic data types to initialize arrays\nvar numbers: [5]i32 = undefined;\nvar decimals: [5]f32 = undefined;\nvar characters: [5]u8 = undefined;\nvar bools: [5]bool = undefined;\n</code></pre>"},{"location":"chapter_data_structure/character_encoding/","title":"3.4 \u00a0 Character encoding *","text":"<p>In the computer system, all data is stored in binary form, and <code>char</code> is no exception. To represent characters, we need to develop a \"character set\" that defines a one-to-one mapping between each character and binary numbers. With the character set, computers can convert binary numbers to characters by looking up the table.</p>"},{"location":"chapter_data_structure/character_encoding/#341-ascii-character-set","title":"3.4.1 \u00a0 ASCII character set","text":"<p>The ASCII code is one of the earliest character sets, officially known as the American Standard Code for Information Interchange. It uses 7 binary digits (the lower 7 bits of a byte) to represent a character, allowing for a maximum of 128 different characters.
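This character-to-code-point mapping can be inspected directly. A minimal sketch in Python, whose built-in <code>ord</code> and <code>chr</code> convert between characters and code points:

```python
# Each ASCII character maps to a code point in the range [0, 127]
code = ord('A')   # character -> code point
char = chr(65)    # code point -> character
print(code)  # 65
print(char)  # A
# In ASCII, a lowercase letter sits exactly 32 code points after its uppercase form
print(ord('a') - ord('A'))  # 32
```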
As shown in Figure 3-6, ASCII includes uppercase and lowercase English letters, numbers 0 ~ 9, various punctuation marks, and certain control characters (such as newline and tab).</p> <p></p> <p> Figure 3-6 \u00a0 ASCII code </p> <p>However, ASCII can only represent English characters. With the globalization of computers, a character set called EASCII was developed to represent more languages. It expands from the 7-bit structure of ASCII to 8 bits, enabling the representation of 256 characters.</p> <p>Globally, various region-specific EASCII character sets have been introduced. The first 128 characters of these sets are consistent with the ASCII, while the remaining 128 characters are defined differently to accommodate the requirements of different languages.</p>"},{"location":"chapter_data_structure/character_encoding/#342-gbk-character-set","title":"3.4.2 \u00a0 GBK character set","text":"<p>Later, it was found that EASCII still could not meet the character requirements of many languages. For instance, there are nearly a hundred thousand Chinese characters, with several thousand used regularly. In 1980, the Standardization Administration of China released the GB2312 character set, which included 6763 Chinese characters, essentially fulfilling the computer processing needs for the Chinese language.</p> <p>However, GB2312 could not handle some rare and traditional characters. The GBK character set expands GB2312 and includes 21886 Chinese characters. In the GBK encoding scheme, ASCII characters are represented with one byte, while Chinese characters use two bytes.</p>"},{"location":"chapter_data_structure/character_encoding/#343-unicode-character-set","title":"3.4.3 \u00a0 Unicode character set","text":"<p>With the rapid evolution of computer technology and a plethora of character sets and encoding standards, numerous problems arose. 
On the one hand, these character sets generally only defined characters for specific languages and could not function properly in multilingual environments. On the other hand, the existence of multiple character set standards for the same language caused garbled text when information was exchanged between computers using different encoding standards.</p> <p>Researchers of that era thought: What if a comprehensive character set encompassing all global languages and symbols was developed? Wouldn't this resolve the issues associated with cross-linguistic environments and garbled text? Inspired by this idea, the extensive character set, Unicode, was born.</p> <p>Unicode is referred to as \"\u7edf\u4e00\u7801\" (Unified Code) in Chinese, theoretically capable of accommodating over a million characters. It aims to incorporate characters from all over the world into a single set, providing a universal character set for processing and displaying various languages and reducing the issues of garbled text due to different encoding standards.</p> <p>Since its release in 1991, Unicode has continually expanded to include new languages and characters. As of September 2022, Unicode contains 149,186 characters, including characters, symbols, and even emojis from various languages. In the vast Unicode character set, commonly used characters occupy 2 bytes, while some rare characters may occupy 3 or even 4 bytes.</p> <p>Unicode is a universal character set that assigns a number (called a \"code point\") to each character, but it does not specify how these character code points should be stored in a computer system. One might ask: How does a system interpret Unicode code points of varying lengths within a text? For example, given a 2-byte code, how does the system determine if it represents a single 2-byte character or two 1-byte characters?</p> <p>A straightforward solution to this problem is to store all characters as equal-length encodings. 
As shown in Figure 3-7, each character in \"Hello\" occupies 1 byte, while each character in \"\u7b97\u6cd5\" (algorithm) occupies 2 bytes. We could encode all characters in \"Hello \u7b97\u6cd5\" as 2 bytes by padding the higher bits with zeros. This method would enable the system to interpret a character every 2 bytes, recovering the content of the phrase.</p> <p></p> <p> Figure 3-7 \u00a0 Unicode encoding example </p> <p>However, as ASCII has shown us, encoding English only requires 1 byte. Using the above approach would double the space occupied by English text compared to ASCII encoding, which is a waste of memory space. Therefore, a more efficient Unicode encoding method is needed.</p>"},{"location":"chapter_data_structure/character_encoding/#344-utf-8-encoding","title":"3.4.4 \u00a0 UTF-8 encoding","text":"<p>Currently, UTF-8 has become the most widely used Unicode encoding method internationally. It is a variable-length encoding, using 1 to 4 bytes to represent a character, depending on the complexity of the character. ASCII characters need only 1 byte, Latin and Greek letters require 2 bytes, commonly used Chinese characters need 3 bytes, and some other rare characters need 4 bytes.</p> <p>The encoding rules for UTF-8 are not complex and can be divided into two cases:</p> <ul> <li>For 1-byte characters, set the highest bit to \\(0\\), and the remaining 7 bits to the Unicode code point. Notably, ASCII characters occupy the first 128 code points in the Unicode set. This means that UTF-8 encoding is backward compatible with ASCII. 
This implies that UTF-8 can be used to parse ancient ASCII text.</li> <li>For characters of length \\(n\\) bytes (where \\(n > 1\\)), set the highest \\(n\\) bits of the first byte to \\(1\\), and the \\((n + 1)^{\\text{th}}\\) bit to \\(0\\); starting from the second byte, set the highest 2 bits of each byte to \\(10\\); the rest of the bits are used to fill the Unicode code point.</li> </ul> <p>Figure 3-8 shows the UTF-8 encoding for \"Hello\u7b97\u6cd5\". It can be observed that since the highest \\(n\\) bits are set to \\(1\\), the system can determine the length of the character as \\(n\\) by counting the number of highest bits set to \\(1\\).</p> <p>But why set the highest 2 bits of the remaining bytes to \\(10\\)? Actually, this \\(10\\) serves as a kind of checksum. If the system starts parsing text from an incorrect byte, the \\(10\\) at the beginning of the byte can help the system quickly detect anomalies.</p> <p>The reason for using \\(10\\) as a checksum is that, under UTF-8 encoding rules, it's impossible for the highest two bits of a character to be \\(10\\). This can be proven by contradiction: If the highest two bits of a character are \\(10\\), it indicates that the character's length is \\(1\\), corresponding to ASCII. However, the highest bit of an ASCII character should be \\(0\\), which contradicts the assumption.</p> <p></p> <p> Figure 3-8 \u00a0 UTF-8 encoding example </p> <p>Apart from UTF-8, other common encoding methods include:</p> <ul> <li>UTF-16 encoding: Uses 2 or 4 bytes to represent a character. All ASCII characters and commonly used non-English characters are represented with 2 bytes; a few characters require 4 bytes. For 2-byte characters, the UTF-16 encoding equals the Unicode code point.</li> <li>UTF-32 encoding: Every character uses 4 bytes. 
This means UTF-32 occupies more space than UTF-8 and UTF-16, especially for texts with a high proportion of ASCII characters.</li> </ul> <p>From the perspective of storage space, using UTF-8 to represent English characters is very efficient because it only requires 1 byte; using UTF-16 to encode some non-English characters (such as Chinese) can be more efficient because it only requires 2 bytes, while UTF-8 might need 3 bytes.</p> <p>From a compatibility perspective, UTF-8 is the most versatile, with many tools and libraries supporting UTF-8 as a priority.</p>"},{"location":"chapter_data_structure/character_encoding/#345-character-encoding-in-programming-languages","title":"3.4.5 \u00a0 Character encoding in programming languages","text":"<p>Historically, many programming languages utilized fixed-length encodings such as UTF-16 or UTF-32 for processing strings during program execution. This allows strings to be handled as arrays, offering several advantages:</p> <ul> <li>Random access: Strings encoded in UTF-16 can be accessed randomly with ease. For UTF-8, which is a variable-length encoding, locating the \\(i^{th}\\) character requires traversing the string from the start to the \\(i^{th}\\) position, taking \\(O(n)\\) time.</li> <li>Character counting: Similar to random access, counting the number of characters in a UTF-16 encoded string is an \\(O(1)\\) operation. However, counting characters in a UTF-8 encoded string requires traversing the entire string.</li> <li>String operations: Many string operations like splitting, concatenating, inserting, and deleting are easier on UTF-16 encoded strings. These operations generally require additional computation on UTF-8 encoded strings to ensure the validity of the UTF-8 encoding.</li> </ul> <p>The design of character encoding schemes in programming languages is an interesting topic involving various factors:</p> <ul> <li>Java\u2019s <code>String</code> type uses UTF-16 encoding, with each character occupying 2 bytes. 
This was based on the initial belief that 16 bits were sufficient to represent all possible characters, a belief that was later proven incorrect. As the Unicode standard expanded beyond 16 bits, characters in Java may now be represented by a pair of 16-bit values, known as \u201csurrogate pairs.\u201d</li> <li>JavaScript and TypeScript use UTF-16 encoding for reasons similar to Java's. When JavaScript was first introduced by Netscape in 1995, Unicode was still in its early stages, and 16-bit encoding was sufficient to represent all Unicode characters.</li> <li>C# uses UTF-16 encoding, largely because the .NET platform, designed by Microsoft, and many other Microsoft technologies, including the Windows operating system, extensively use UTF-16 encoding.</li> </ul> <p>Due to the underestimation of character counts, these languages had to use \"surrogate pairs\" to represent Unicode characters exceeding 16 bits. This approach has its drawbacks: strings containing surrogate pairs may have characters occupying 2 or 4 bytes, losing the advantage of fixed-length encoding. Additionally, handling surrogate pairs adds complexity and debugging difficulty to programming.</p> <p>Addressing these challenges, some languages have adopted alternative encoding strategies:</p> <ul> <li>Python\u2019s <code>str</code> type uses Unicode encoding with a flexible representation where the storage length of characters depends on the largest Unicode code point in the string. If all characters are ASCII, each character occupies 1 byte, 2 bytes for characters within the Basic Multilingual Plane (BMP), and 4 bytes for characters beyond the BMP.</li> <li>Go\u2019s <code>string</code> type internally uses UTF-8 encoding. Go also provides the <code>rune</code> type for representing individual Unicode code points.</li> <li>Rust\u2019s <code>str</code> and <code>String</code> types use UTF-8 encoding internally.
Rust also offers the <code>char</code> type for individual Unicode code points.</li> </ul> <p>It\u2019s important to note that the above discussion pertains to how strings are stored in programming languages, which is different from how strings are stored in files or transmitted over networks. For file storage or network transmission, strings are usually encoded in UTF-8 format for optimal compatibility and space efficiency.</p>"},{"location":"chapter_data_structure/classification_of_data_structure/","title":"3.1 \u00a0 Classification of data structures","text":"<p>Common data structures include arrays, linked lists, stacks, queues, hash tables, trees, heaps, and graphs. They can be classified into \"logical structure\" and \"physical structure\".</p>"},{"location":"chapter_data_structure/classification_of_data_structure/#311-logical-structure-linear-and-non-linear","title":"3.1.1 \u00a0 Logical structure: linear and non-linear","text":"<p>The logical structures reveal the logical relationships between data elements. In arrays and linked lists, data are arranged in a specific sequence, demonstrating the linear relationship between data; while in trees, data are arranged hierarchically from the top down, showing the derived relationship between \"ancestors\" and \"descendants\"; and graphs are composed of nodes and edges, reflecting the intricate network relationship.</p> <p>As shown in Figure 3-1, logical structures can be divided into two major categories: \"linear\" and \"non-linear\". 
Linear structures are more intuitive, indicating data is arranged linearly in logical relationships; non-linear structures, conversely, are arranged non-linearly.</p> <ul> <li>Linear data structures: Arrays, Linked Lists, Stacks, Queues, Hash Tables.</li> <li>Non-linear data structures: Trees, Heaps, Graphs, Hash Tables.</li> </ul> <p>Non-linear data structures can be further divided into tree structures and network structures.</p> <ul> <li>Tree structures: Trees, Heaps, Hash Tables, where elements have a one-to-many relationship.</li> <li>Network structures: Graphs, where elements have a many-to-many relationship.</li> </ul> <p></p> <p> Figure 3-1 \u00a0 Linear and non-linear data structures </p>"},{"location":"chapter_data_structure/classification_of_data_structure/#312-physical-structure-contiguous-and-dispersed","title":"3.1.2 \u00a0 Physical structure: contiguous and dispersed","text":"<p>During the execution of an algorithm, the data being processed is stored in memory. Figure 3-2 shows a computer memory stick where each black square is a physical memory space. We can think of memory as a vast Excel spreadsheet, with each cell capable of storing a certain amount of data.</p> <p>The system accesses the data at the target location by means of a memory address. As shown in Figure 3-2, the computer assigns a unique identifier to each cell in the table according to specific rules, ensuring that each memory space has a unique memory address. With these addresses, the program can access the data stored in memory.</p> <p></p> <p> Figure 3-2 \u00a0 Memory stick, memory spaces, memory addresses </p> <p>Tip</p> <p>It's worth noting that comparing memory to an Excel spreadsheet is a simplified analogy. The actual working mechanism of memory is more complex, involving concepts like address space, memory management, cache mechanisms, virtual memory, and physical memory.</p> <p>Memory is a shared resource for all programs.
When a block of memory is occupied by one program, it cannot be simultaneously used by other programs. Therefore, considering memory resources is crucial in designing data structures and algorithms. For instance, the algorithm's peak memory usage should not exceed the remaining free memory of the system; if there is a lack of contiguous memory blocks, then the data structure chosen must be able to be stored in non-contiguous memory blocks.</p> <p>As illustrated in Figure 3-3, the physical structure reflects the way data is stored in computer memory and it can be divided into contiguous space storage (arrays) and non-contiguous space storage (linked lists). The two types of physical structures exhibit complementary characteristics in terms of time efficiency and space efficiency.</p> <p></p> <p> Figure 3-3 \u00a0 Contiguous space storage and dispersed space storage </p> <p>It is worth noting that all data structures are implemented based on arrays, linked lists, or a combination of both. For example, stacks and queues can be implemented using either arrays or linked lists; while implementations of hash tables may involve both arrays and linked lists.</p> <ul> <li>Array-based implementations: Stacks, Queues, Hash Tables, Trees, Heaps, Graphs, Matrices, Tensors (arrays with dimensions \\(\\geq 3\\)).</li> <li>Linked-list-based implementations: Stacks, Queues, Hash Tables, Trees, Heaps, Graphs, etc.</li> </ul> <p>Data structures implemented based on arrays are also called \u201cStatic Data Structures,\u201d meaning their length cannot be changed after initialization. 
Conversely, those based on linked lists are called \u201cDynamic Data Structures,\u201d which can still adjust their size during program execution.</p> <p>Tip</p> <p>If you find it challenging to comprehend the physical structure, it is recommended that you read the next chapter, \"Arrays and Linked Lists,\" and revisit this section later.</p>"},{"location":"chapter_data_structure/number_encoding/","title":"3.3 \u00a0 Number encoding *","text":"<p>Tip</p> <p>In this book, chapters marked with an asterisk '*' are optional readings. If you are short on time or find them challenging, you may skip these initially and return to them after completing the essential chapters.</p>"},{"location":"chapter_data_structure/number_encoding/#331-integer-encoding","title":"3.3.1 \u00a0 Integer encoding","text":"<p>In the table from the previous section, we observed that all integer types can represent one more negative number than positive numbers, such as the <code>byte</code> range of \\([-128, 127]\\). This phenomenon seems counterintuitive, and its underlying reason involves knowledge of sign-magnitude, one's complement, and two's complement encoding.</p> <p>Firstly, it's important to note that numbers are stored in computers using the two's complement form. Before analyzing why this is the case, let's define these three encoding methods:</p> <ul> <li>Sign-magnitude: The highest bit of a binary representation of a number is considered the sign bit, where \\(0\\) represents a positive number and \\(1\\) represents a negative number. The remaining bits represent the value of the number.</li> <li>One's complement: The one's complement of a positive number is the same as its sign-magnitude. For negative numbers, it's obtained by inverting all bits except the sign bit.</li> <li>Two's complement: The two's complement of a positive number is the same as its sign-magnitude. 
For negative numbers, it's obtained by adding \\(1\\) to their one's complement.</li> </ul> <p>Figure 3-4 illustrates the conversions among sign-magnitude, one's complement, and two's complement:</p> <p></p> <p> Figure 3-4 \u00a0 Conversions between sign-magnitude, one's complement, and two's complement </p> <p>Although sign-magnitude is the most intuitive, it has limitations. For one, negative numbers in sign-magnitude cannot be directly used in calculations. For example, in sign-magnitude, calculating \\(1 + (-2)\\) results in \\(-3\\), which is incorrect.</p> \\[ \\begin{aligned} & 1 + (-2) \\newline & \\rightarrow 0000 \\; 0001 + 1000 \\; 0010 \\newline & = 1000 \\; 0011 \\newline & \\rightarrow -3 \\end{aligned} \\] <p>To address this, computers introduced the one's complement. If we convert to one's complement and calculate \\(1 + (-2)\\), then convert the result back to sign-magnitude, we get the correct result of \\(-1\\).</p> \\[ \\begin{aligned} & 1 + (-2) \\newline & \\rightarrow 0000 \\; 0001 \\; \\text{(Sign-magnitude)} + 1000 \\; 0010 \\; \\text{(Sign-magnitude)} \\newline & = 0000 \\; 0001 \\; \\text{(One's complement)} + 1111 \\; 1101 \\; \\text{(One's complement)} \\newline & = 1111 \\; 1110 \\; \\text{(One's complement)} \\newline & = 1000 \\; 0001 \\; \\text{(Sign-magnitude)} \\newline & \\rightarrow -1 \\end{aligned} \\] <p>Additionally, there are two representations of zero in sign-magnitude: \\(+0\\) and \\(-0\\). This means two different binary encodings for zero, which could lead to ambiguity. For example, in conditional checks, not differentiating between positive and negative zero might result in incorrect outcomes. Addressing this ambiguity would require additional checks, potentially reducing computational efficiency.</p> \\[ \\begin{aligned} +0 & \\rightarrow 0000 \\; 0000 \\newline -0 & \\rightarrow 1000 \\; 0000 \\end{aligned} \\] <p>Like sign-magnitude, one's complement also suffers from the positive and negative zero ambiguity. 
Therefore, computers further introduced the two's complement. Let's observe the conversion process for negative zero in sign-magnitude, one's complement, and two's complement:</p> \\[ \\begin{aligned} -0 \\rightarrow \\; & 1000 \\; 0000 \\; \\text{(Sign-magnitude)} \\newline = \\; & 1111 \\; 1111 \\; \\text{(One's complement)} \\newline = 1 \\; & 0000 \\; 0000 \\; \\text{(Two's complement)} \\newline \\end{aligned} \\] <p>Adding \\(1\\) to the one's complement of negative zero produces a carry, but with <code>byte</code> length being only 8 bits, the carried-over \\(1\\) to the 9<sup>th</sup> bit is discarded. Therefore, the two's complement of negative zero is \\(0000 \\; 0000\\), the same as positive zero, thus resolving the ambiguity.</p> <p>One last puzzle is the \\([-128, 127]\\) range for <code>byte</code>, with an additional negative number, \\(-128\\). We observe that for the interval \\([-127, +127]\\), all integers have corresponding sign-magnitude, one's complement, and two's complement, allowing for mutual conversion between them.</p> <p>However, the two's complement \\(1000 \\; 0000\\) is an exception without a corresponding sign-magnitude. According to the conversion method, its sign-magnitude would be \\(0000 \\; 0000\\), indicating zero. This presents a contradiction because its two's complement should represent itself. Computers designate this special two's complement \\(1000 \\; 0000\\) as representing \\(-128\\). 
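This boundary case can be reproduced numerically. Below is a minimal sketch in Python, masking its arbitrary-precision integers to 8 bits to emulate <code>byte</code> overflow (the helper <code>to_int8</code> is ours, not part of any library):

```python
def to_int8(bits: int) -> int:
    """Interpret the low 8 bits as an 8-bit two's complement value."""
    bits &= 0xFF  # keep only 8 bits, discarding any carry out
    return bits - 0x100 if bits & 0x80 else bits

# The special pattern 1000 0000 is designated as -128
print(to_int8(0b1000_0000))  # -128
# Adding the two's complement patterns of -127 and -1 wraps around to -128
print(to_int8(0b1000_0001 + 0b1111_1111))  # -128
```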
In fact, the calculation of \\((-1) + (-127)\\) in two's complement results in \\(-128\\).</p> \\[ \\begin{aligned} & (-127) + (-1) \\newline & \\rightarrow 1111 \\; 1111 \\; \\text{(Sign-magnitude)} + 1000 \\; 0001 \\; \\text{(Sign-magnitude)} \\newline & = 1000 \\; 0000 \\; \\text{(One's complement)} + 1111 \\; 1110 \\; \\text{(One's complement)} \\newline & = 1000 \\; 0001 \\; \\text{(Two's complement)} + 1111 \\; 1111 \\; \\text{(Two's complement)} \\newline & = 1000 \\; 0000 \\; \\text{(Two's complement)} \\newline & \\rightarrow -128 \\end{aligned} \\] <p>As you might have noticed, all these calculations are additions, hinting at an important fact: computers' internal hardware circuits are primarily designed around addition operations. This is because addition is simpler to implement in hardware compared to other operations like multiplication, division, and subtraction, allowing for easier parallelization and faster computation.</p> <p>It's important to note that this doesn't mean computers can only perform addition. By combining addition with basic logical operations, computers can execute a variety of other mathematical operations. For example, the subtraction \\(a - b\\) can be translated into \\(a + (-b)\\); multiplication and division can be translated into multiple additions or subtractions.</p> <p>We can now summarize the reason for using two's complement in computers: with two's complement representation, computers can use the same circuits and operations to handle both positive and negative number addition, eliminating the need for special hardware circuits for subtraction and avoiding the ambiguity of positive and negative zero. This greatly simplifies hardware design and enhances computational efficiency.</p> <p>The design of two's complement is quite ingenious, and due to space constraints, we'll stop here. 
Interested readers are encouraged to explore further.</p>"},{"location":"chapter_data_structure/number_encoding/#332-floating-point-number-encoding","title":"3.3.2 \u00a0 Floating-point number encoding","text":"<p>You might have noticed something intriguing: despite having the same length of 4 bytes, why does a <code>float</code> have a much larger range of values compared to an <code>int</code>? This seems counterintuitive, as one would expect the range to shrink for <code>float</code> since it needs to represent fractions.</p> <p>In fact, this is due to the different representation method used by floating-point numbers (<code>float</code>). Let's consider a 32-bit binary number as:</p> \\[ b_{31} b_{30} b_{29} \\ldots b_2 b_1 b_0 \\] <p>According to the IEEE 754 standard, a 32-bit <code>float</code> consists of the following three parts:</p> <ul> <li>Sign bit \\(\\mathrm{S}\\): Occupies 1 bit, corresponding to \\(b_{31}\\).</li> <li>Exponent bit \\(\\mathrm{E}\\): Occupies 8 bits, corresponding to \\(b_{30} b_{29} \\ldots b_{23}\\).</li> <li>Fraction bit \\(\\mathrm{N}\\): Occupies 23 bits, corresponding to \\(b_{22} b_{21} \\ldots b_0\\).</li> </ul> <p>The value of a binary <code>float</code> number is calculated as:</p> \\[ \\text{val} = (-1)^{b_{31}} \\times 2^{\\left(b_{30} b_{29} \\ldots b_{23}\\right)_2 - 127} \\times \\left(1 . 
b_{22} b_{21} \\ldots b_0\\right)_2 \\] <p>Converted to a decimal formula, this becomes:</p> \\[ \\text{val} = (-1)^{\\mathrm{S}} \\times 2^{\\mathrm{E} - 127} \\times (1 + \\mathrm{N}) \\] <p>The range of each component is:</p> \\[ \\begin{aligned} \\mathrm{S} \\in & \\{ 0, 1\\}, \\quad \\mathrm{E} \\in \\{ 1, 2, \\dots, 254 \\} \\newline (1 + \\mathrm{N}) = & (1 + \\sum_{i=1}^{23} b_{23-i} \\times 2^{-i}) \\subset [1, 2 - 2^{-23}] \\end{aligned} \\] <p></p> <p> Figure 3-5 \u00a0 Example calculation of a float in IEEE 754 standard </p> <p>Observing Figure 3-5, given an example data \\(\\mathrm{S} = 0\\), \\(\\mathrm{E} = 124\\), \\(\\mathrm{N} = 2^{-2} + 2^{-3} = 0.375\\), we have:</p> \\[ \\text{val} = (-1)^0 \\times 2^{124 - 127} \\times (1 + 0.375) = 0.171875 \\] <p>Now we can answer the initial question: The representation of <code>float</code> includes an exponent bit, leading to a much larger range than <code>int</code>. Based on the above calculation, the maximum positive number representable by <code>float</code> is approximately \\(2^{254 - 127} \\times (2 - 2^{-23}) \\approx 3.4 \\times 10^{38}\\), and the minimum negative number is obtained by switching the sign bit.</p> <p>However, the trade-off for <code>float</code>'s expanded range is a sacrifice in precision. 
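The example calculation above can be cross-checked programmatically. Here is a brief sketch in Python using the standard <code>struct</code> module to expose the bit fields of a single-precision value:

```python
import struct

# Pack 0.171875 as a big-endian IEEE 754 single-precision float
bits = int.from_bytes(struct.pack('>f', 0.171875), 'big')

sign     = bits >> 31           # S: 1 bit
exponent = (bits >> 23) & 0xFF  # E: 8 bits
fraction = bits & 0x7FFFFF      # N: 23 bits
print(sign, exponent)           # 0 124

# Reassemble the value: (-1)^S * 2^(E - 127) * (1 + N)
value = (-1) ** sign * 2.0 ** (exponent - 127) * (1 + fraction / 2 ** 23)
print(value)                    # 0.171875
```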
The integer type <code>int</code> uses all 32 bits to represent the number, with values evenly distributed; but due to the exponent bit, the larger the value of a <code>float</code>, the greater the difference between adjacent numbers.</p> <p>As shown in Table 3-2, exponent bits \\(\\mathrm{E} = 0\\) and \\(\\mathrm{E} = 255\\) have special meanings, used to represent zero, infinity, \\(\\mathrm{NaN}\\), etc.</p> <p> Table 3-2 \u00a0 Meaning of exponent bits </p> Exponent Bit E Fraction Bit \\(\\mathrm{N} = 0\\) Fraction Bit \\(\\mathrm{N} \\ne 0\\) Calculation Formula \\(0\\) \\(\\pm 0\\) Subnormal Numbers \\((-1)^{\\mathrm{S}} \\times 2^{-126} \\times (0.\\mathrm{N})\\) \\(1, 2, \\dots, 254\\) Normal Numbers Normal Numbers \\((-1)^{\\mathrm{S}} \\times 2^{(\\mathrm{E} -127)} \\times (1.\\mathrm{N})\\) \\(255\\) \\(\\pm \\infty\\) \\(\\mathrm{NaN}\\) <p>It's worth noting that subnormal numbers significantly improve the precision of floating-point numbers. The smallest positive normal number is \\(2^{-126}\\), and the smallest positive subnormal number is \\(2^{-126} \\times 2^{-23}\\).</p> <p>Double-precision <code>double</code> also uses a similar representation method to <code>float</code>, which is not elaborated here for brevity.</p>"},{"location":"chapter_data_structure/summary/","title":"3.5 \u00a0 Summary","text":""},{"location":"chapter_data_structure/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>Data structures can be categorized from two perspectives: logical structure and physical structure. Logical structure describes the logical relationships between data elements, while physical structure describes how data is stored in computer memory.</li> <li>Common logical structures include linear, tree-like, and network structures. We generally classify data structures into linear (arrays, linked lists, stacks, queues) and non-linear (trees, graphs, heaps) based on their logical structure. 
The implementation of hash tables may involve both linear and non-linear data structures.</li> <li>When a program runs, data is stored in computer memory. Each memory space has a corresponding memory address, and the program accesses data through these addresses.</li> <li>Physical structures are primarily divided into contiguous space storage (arrays) and dispersed space storage (linked lists). All data structures are implemented using arrays, linked lists, or a combination of both.</li> <li>Basic data types in computers include integers (<code>byte</code>, <code>short</code>, <code>int</code>, <code>long</code>), floating-point numbers (<code>float</code>, <code>double</code>), characters (<code>char</code>), and booleans (<code>boolean</code>). Their range depends on the size of the space occupied and the representation method.</li> <li>Sign-magnitude, one's complement, and two's complement are three methods of encoding numbers in computers, and they can be converted into each other. The highest bit of the sign-magnitude representation of an integer is the sign bit, and the remaining bits represent the value of the number.</li> <li>Integers are stored in computers in the form of two's complement. In this representation, the computer can treat the addition of positive and negative numbers uniformly, without the need for special hardware circuits for subtraction, and there is no ambiguity of positive and negative zero.</li> <li>The encoding of floating-point numbers consists of 1 sign bit, 8 exponent bits, and 23 fraction bits. Due to the presence of the exponent bit, the range of floating-point numbers is much greater than that of integers, but at the cost of sacrificing precision.</li> <li>ASCII is the earliest English character set, 1 byte in length, and includes 128 characters. The GBK character set is a commonly used Chinese character set, including more than 20,000 Chinese characters.
Unicode strives to provide a complete character set standard, including characters from various languages worldwide, thus solving the problem of garbled characters caused by inconsistent character encoding methods.</li> <li>UTF-8 is the most popular Unicode encoding method, with excellent universality. It is a variable-length encoding method with good scalability and effectively improves the efficiency of space usage. UTF-32 is a fixed-length encoding method, while UTF-16 encodes most common characters in a fixed 2 bytes (characters outside the Basic Multilingual Plane take 4 bytes via surrogate pairs). When encoding Chinese characters, UTF-16 occupies less space than UTF-8. Programming languages like Java and C# use UTF-16 encoding by default.</li> </ul>"},{"location":"chapter_data_structure/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Why does a hash table contain both linear and non-linear data structures?</p> <p>The underlying structure of a hash table is an array. To resolve hash collisions, we may use \"chaining\": each bucket in the array points to a linked list, which, when exceeding a certain threshold, might be transformed into a tree (usually a red-black tree). From a storage perspective, the foundation of a hash table is an array, where each bucket slot might contain a value, a linked list, or a tree. Therefore, hash tables may contain both linear data structures (arrays, linked lists) and non-linear data structures (trees).</p> <p>Q: Is the length of the <code>char</code> type 1 byte?</p> <p>The length of the <code>char</code> type is determined by the encoding method used by the programming language. For example, Java, JavaScript, TypeScript, and C# all use UTF-16 encoding (to represent Unicode code points), so the length of the <code>char</code> type is 2 bytes.</p> <p>Q: Is there ambiguity in calling data structures based on arrays \"static data structures\"? Because operations like push and pop on stacks are \"dynamic\".</p> <p>While stacks indeed allow for dynamic data operations, the data structure itself remains \"static\" (with unchangeable length). 
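This point can be made concrete with a short sketch. The `ArrayStack` class below is hypothetical (it is not one of the book's code listings): it keeps a fixed-capacity underlying array and, when that capacity is exhausted, allocates a new, larger array and copies the old contents into it.

```python
class ArrayStack:
    """Hypothetical array-backed stack with a fixed-capacity underlying array"""

    def __init__(self, capacity: int = 4):
        self._data = [None] * capacity  # pre-allocated, fixed-length array
        self._size = 0                  # number of slots actually in use

    def capacity(self) -> int:
        return len(self._data)

    def push(self, val: int):
        if self._size == len(self._data):
            # Underlying array is full: create a larger one and copy over
            new_data = [None] * (2 * len(self._data))
            new_data[: self._size] = self._data
            self._data = new_data
        self._data[self._size] = val
        self._size += 1

    def pop(self) -> int:
        self._size -= 1
        val = self._data[self._size]
        self._data[self._size] = None  # release the slot
        return val

stack = ArrayStack()
for v in range(5):
    stack.push(v)        # the 5th push doubles the capacity from 4 to 8
print(stack.capacity())  # → 8
print(stack.pop())       # → 4
```

Pushing and popping are dynamic operations, yet at any moment the stack lives in an array of fixed length, which is why such structures are still called "static".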
Even though data structures based on arrays can dynamically add or remove elements, their capacity is fixed. If the data volume exceeds the pre-allocated size, a new, larger array needs to be created, and the contents of the old array copied into it.</p> <p>Q: When building stacks (queues) without specifying their size, why are they considered \"static data structures\"?</p> <p>In high-level programming languages, we don't need to manually specify the initial capacity of stacks (queues); this task is automatically handled internally by the class. For example, the initial capacity of Java's ArrayList is usually 10. Furthermore, the expansion operation is also implemented automatically. See the subsequent \"List\" chapter for details.</p>"},{"location":"chapter_divide_and_conquer/","title":"Chapter 12. \u00a0 Divide and conquer","text":"<p>Abstract</p> <p>Difficult problems are decomposed layer by layer, each decomposition making them simpler.</p> <p>Divide and conquer reveals an important truth: start with simplicity, and nothing is complex anymore.</p>"},{"location":"chapter_divide_and_conquer/#chapter-contents","title":"Chapter contents","text":"<ul> <li>12.1 \u00a0 Divide and conquer algorithms</li> <li>12.2 \u00a0 Divide and conquer search strategy</li> <li>12.3 \u00a0 Building binary tree problem</li> <li>12.4 \u00a0 Tower of Hanoi Problem</li> <li>12.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_divide_and_conquer/binary_search_recur/","title":"12.2 \u00a0 Divide and conquer search strategy","text":"<p>We have learned that search algorithms fall into two main categories.</p> <ul> <li>Brute-force search: It is implemented by traversing the data structure, with a time complexity of \\(O(n)\\).</li> <li>Adaptive search: It utilizes a unique data organization form or prior information, and its time complexity can reach \\(O(\\log n)\\) or even \\(O(1)\\).</li> </ul> <p>In fact, search algorithms with a time complexity of \\(O(\\log n)\\) are usually based on 
the divide-and-conquer strategy, such as binary search and trees.</p> <ul> <li>Each step of binary search divides the problem (searching for a target element in an array) into a smaller problem (searching for the target element in half of the array), continuing until the array is empty or the target element is found.</li> <li>Trees represent the divide-and-conquer idea, where in data structures like binary search trees, AVL trees, and heaps, the time complexity of various operations is \\(O(\\log n)\\).</li> </ul> <p>The divide-and-conquer strategy of binary search is as follows.</p> <ul> <li>The problem can be divided: Binary search recursively divides the original problem (searching in an array) into subproblems (searching in half of the array), achieved by comparing the middle element with the target element.</li> <li>Subproblems are independent: In binary search, each round handles one subproblem, unaffected by other subproblems.</li> <li>The solutions of subproblems do not need to be merged: Binary search aims to find a specific element, so there is no need to merge the solutions of subproblems. When a subproblem is solved, the original problem is also solved.</li> </ul> <p>Divide-and-conquer can enhance search efficiency because brute-force search can only eliminate one option per round, whereas divide-and-conquer can eliminate half of the options.</p>"},{"location":"chapter_divide_and_conquer/binary_search_recur/#1-implementing-binary-search-based-on-divide-and-conquer","title":"1. \u00a0 Implementing binary search based on divide-and-conquer","text":"<p>In previous chapters, binary search was implemented based on iteration. 
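For reference, the iterative version mentioned above can be sketched as follows (a minimal sketch; the function name `binary_search_iter` is ours):

```python
def binary_search_iter(nums: list[int], target: int) -> int:
    """Iterative binary search: returns the index of target, or -1 if absent"""
    i, j = 0, len(nums) - 1  # search interval [i, j]
    while i <= j:
        m = (i + j) // 2  # midpoint index
        if nums[m] < target:
            i = m + 1  # target can only lie in [m+1, j]
        elif nums[m] > target:
            j = m - 1  # target can only lie in [i, m-1]
        else:
            return m  # found the target element
    return -1  # interval is empty: target not present

print(binary_search_iter([1, 3, 6, 8, 12, 15, 23, 26, 31, 35], 6))  # → 2
```

The recursive variant that follows performs exactly the same interval-halving; only the control flow changes from a loop to recursive calls.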
Now, we implement it based on divide-and-conquer (recursion).</p> <p>Question</p> <p>Given an ordered array <code>nums</code> of length \\(n\\), where all elements are unique, please find the element <code>target</code>.</p> <p>From a divide-and-conquer perspective, we denote the subproblem corresponding to the search interval \\([i, j]\\) as \\(f(i, j)\\).</p> <p>Starting from the original problem \\(f(0, n-1)\\), perform the binary search through the following steps.</p> <ol> <li>Calculate the midpoint \\(m\\) of the search interval \\([i, j]\\), and use it to eliminate half of the search interval.</li> <li>Recursively solve the subproblem reduced by half in size, which could be \\(f(i, m-1)\\) or \\(f(m+1, j)\\).</li> <li>Repeat steps <code>1.</code> and <code>2.</code>, until <code>target</code> is found or the interval is empty and returns.</li> </ol> <p>Figure 12-4 shows the divide-and-conquer process of binary search for element \\(6\\) in an array.</p> <p></p> <p> Figure 12-4 \u00a0 The divide-and-conquer process of binary search </p> <p>In the implementation code, we declare a recursive function <code>dfs()</code> to solve the problem \\(f(i, j)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_recur.py<pre><code>def dfs(nums: list[int], target: int, i: int, j: int) -> int:\n \"\"\"Binary search: problem f(i, j)\"\"\"\n # If the interval is empty, indicating no target element, return -1\n if i > j:\n return -1\n # Calculate midpoint index m\n m = (i + j) // 2\n if nums[m] < target:\n # Recursive subproblem f(m+1, j)\n return dfs(nums, target, m + 1, j)\n elif nums[m] > target:\n # Recursive subproblem f(i, m-1)\n return dfs(nums, target, i, m - 1)\n else:\n # Found the target element, thus return its index\n return m\n\ndef binary_search(nums: list[int], target: int) -> int:\n \"\"\"Binary search\"\"\"\n n = len(nums)\n # Solve problem f(0, n-1)\n return dfs(nums, target, 0, n - 1)\n</code></pre> binary_search_recur.cpp<pre><code>/* 
Binary search: problem f(i, j) */\nint dfs(vector<int> &nums, int target, int i, int j) {\n // If the interval is empty, indicating no target element, return -1\n if (i > j) {\n return -1;\n }\n // Calculate midpoint index m\n int m = (i + j) / 2;\n if (nums[m] < target) {\n // Recursive subproblem f(m+1, j)\n return dfs(nums, target, m + 1, j);\n } else if (nums[m] > target) {\n // Recursive subproblem f(i, m-1)\n return dfs(nums, target, i, m - 1);\n } else {\n // Found the target element, thus return its index\n return m;\n }\n}\n\n/* Binary search */\nint binarySearch(vector<int> &nums, int target) {\n int n = nums.size();\n // Solve problem f(0, n-1)\n return dfs(nums, target, 0, n - 1);\n}\n</code></pre> binary_search_recur.java<pre><code>/* Binary search: problem f(i, j) */\nint dfs(int[] nums, int target, int i, int j) {\n // If the interval is empty, indicating no target element, return -1\n if (i > j) {\n return -1;\n }\n // Calculate midpoint index m\n int m = (i + j) / 2;\n if (nums[m] < target) {\n // Recursive subproblem f(m+1, j)\n return dfs(nums, target, m + 1, j);\n } else if (nums[m] > target) {\n // Recursive subproblem f(i, m-1)\n return dfs(nums, target, i, m - 1);\n } else {\n // Found the target element, thus return its index\n return m;\n }\n}\n\n/* Binary search */\nint binarySearch(int[] nums, int target) {\n int n = nums.length;\n // Solve problem f(0, n-1)\n return dfs(nums, target, 0, n - 1);\n}\n</code></pre> binary_search_recur.cs<pre><code>[class]{binary_search_recur}-[func]{DFS}\n\n[class]{binary_search_recur}-[func]{BinarySearch}\n</code></pre> binary_search_recur.go<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.swift<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.js<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> 
binary_search_recur.ts<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.dart<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.rs<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binary_search}\n</code></pre> binary_search_recur.c<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.kt<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre> binary_search_recur.rb<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binary_search}\n</code></pre> binary_search_recur.zig<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{binarySearch}\n</code></pre>"},{"location":"chapter_divide_and_conquer/build_binary_tree_problem/","title":"12.3 \u00a0 Building binary tree problem","text":"<p>Question</p> <p>Given the pre-order traversal <code>preorder</code> and in-order traversal <code>inorder</code> of a binary tree, construct the binary tree and return the root node of the binary tree. Assume that there are no duplicate values in the nodes of the binary tree (as shown in Figure 12-5).</p> <p></p> <p> Figure 12-5 \u00a0 Example data for building a binary tree </p>"},{"location":"chapter_divide_and_conquer/build_binary_tree_problem/#1-determining-if-it-is-a-divide-and-conquer-problem","title":"1. \u00a0 Determining if it is a divide and conquer problem","text":"<p>The original problem of constructing a binary tree from <code>preorder</code> and <code>inorder</code> is a typical divide and conquer problem.</p> <ul> <li>The problem can be decomposed: From the perspective of divide and conquer, we can divide the original problem into two subproblems: building the left subtree and building the right subtree, plus one operation: initializing the root node. 
For each subtree (subproblem), we can still use the above division method, dividing it into smaller subtrees (subproblems), until the smallest subproblem (empty subtree) is reached.</li> <li>The subproblems are independent: The left and right subtrees are independent of each other, with no overlap. When building the left subtree, we only need to focus on the parts of the in-order and pre-order traversals that correspond to the left subtree. The same applies to the right subtree.</li> <li>Solutions to subproblems can be combined: Once the solutions for the left and right subtrees (solutions to subproblems) are obtained, we can link them to the root node to obtain the solution to the original problem.</li> </ul>"},{"location":"chapter_divide_and_conquer/build_binary_tree_problem/#2-how-to-divide-the-subtrees","title":"2. \u00a0 How to divide the subtrees","text":"<p>Based on the above analysis, this problem can be solved using divide and conquer, but how do we use the pre-order traversal <code>preorder</code> and in-order traversal <code>inorder</code> to divide the left and right subtrees?</p> <p>By definition, <code>preorder</code> and <code>inorder</code> can be divided into three parts.</p> <ul> <li>Pre-order traversal: <code>[ Root | Left Subtree | Right Subtree ]</code>, for example, the tree in the figure corresponds to <code>[ 3 | 9 | 2 1 7 ]</code>.</li> <li>In-order traversal: <code>[ Left Subtree | Root | Right Subtree ]</code>, for example, the tree in the figure corresponds to <code>[ 9 | 3 | 1 2 7 ]</code>.</li> </ul> <p>Using the data in the figure above, we can obtain the division results as shown in Figure 12-6.</p> <ol> <li>The first element 3 in the pre-order traversal is the value of the root node.</li> <li>Find the index of the root node 3 in <code>inorder</code>, and use this index to divide <code>inorder</code> into <code>[ 9 | 3 | 1 2 7 ]</code>.</li> <li>Based on the division results of <code>inorder</code>, it is easy to determine the 
number of nodes in the left and right subtrees as 1 and 3, respectively, thus dividing <code>preorder</code> into <code>[ 3 | 9 | 2 1 7 ]</code>.</li> </ol> <p></p> <p> Figure 12-6 \u00a0 Dividing the subtrees in pre-order and in-order traversals </p>"},{"location":"chapter_divide_and_conquer/build_binary_tree_problem/#3-describing-subtree-intervals-based-on-variables","title":"3. \u00a0 Describing subtree intervals based on variables","text":"<p>Based on the above division method, we have now obtained the index intervals of the root, left subtree, and right subtree in <code>preorder</code> and <code>inorder</code>. To describe these index intervals, we need the help of several pointer variables.</p> <ul> <li>Let the index of the current tree's root node in <code>preorder</code> be denoted as \\(i\\).</li> <li>Let the index of the current tree's root node in <code>inorder</code> be denoted as \\(m\\).</li> <li>Let the index interval of the current tree in <code>inorder</code> be denoted as \\([l, r]\\).</li> </ul> <p>As shown in Table 12-1, the above variables can represent the index of the root node in <code>preorder</code> as well as the index intervals of the subtrees in <code>inorder</code>.</p> <p> Table 12-1 \u00a0 Indexes of the root node and subtrees in pre-order and in-order traversals </p> Root node index in <code>preorder</code> Subtree index interval in <code>inorder</code> Current tree \\(i\\) \\([l, r]\\) Left subtree \\(i + 1\\) \\([l, m-1]\\) Right subtree \\(i + 1 + (m - l)\\) \\([m+1, r]\\) <p>Please note, the meaning of \\((m-l)\\) in the right subtree root index is \"the number of nodes in the left subtree\", which is suggested to be understood in conjunction with Figure 12-7.</p> <p></p> <p> Figure 12-7 \u00a0 Indexes of the root node and left and right subtrees </p>"},{"location":"chapter_divide_and_conquer/build_binary_tree_problem/#4-code-implementation","title":"4. 
\u00a0 Code implementation","text":"<p>To improve the efficiency of querying \\(m\\), we use a hash table <code>hmap</code> to store the mapping of elements in <code>inorder</code> to their indexes:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig build_tree.py<pre><code>def dfs(\n preorder: list[int],\n inorder_map: dict[int, int],\n i: int,\n l: int,\n r: int,\n) -> TreeNode | None:\n \"\"\"Build binary tree: Divide and conquer\"\"\"\n # Terminate when subtree interval is empty\n if r - l < 0:\n return None\n # Initialize root node\n root = TreeNode(preorder[i])\n # Query m to divide left and right subtrees\n m = inorder_map[preorder[i]]\n # Subproblem: build left subtree\n root.left = dfs(preorder, inorder_map, i + 1, l, m - 1)\n # Subproblem: build right subtree\n root.right = dfs(preorder, inorder_map, i + 1 + m - l, m + 1, r)\n # Return root node\n return root\n\ndef build_tree(preorder: list[int], inorder: list[int]) -> TreeNode | None:\n \"\"\"Build binary tree\"\"\"\n # Initialize hash table, storing in-order elements to indices mapping\n inorder_map = {val: i for i, val in enumerate(inorder)}\n root = dfs(preorder, inorder_map, 0, 0, len(inorder) - 1)\n return root\n</code></pre> build_tree.cpp<pre><code>/* Build binary tree: Divide and conquer */\nTreeNode *dfs(vector<int> &preorder, unordered_map<int, int> &inorderMap, int i, int l, int r) {\n // Terminate when subtree interval is empty\n if (r - l < 0)\n return NULL;\n // Initialize root node\n TreeNode *root = new TreeNode(preorder[i]);\n // Query m to divide left and right subtrees\n int m = inorderMap[preorder[i]];\n // Subproblem: build left subtree\n root->left = dfs(preorder, inorderMap, i + 1, l, m - 1);\n // Subproblem: build right subtree\n root->right = dfs(preorder, inorderMap, i + 1 + m - l, m + 1, r);\n // Return root node\n return root;\n}\n\n/* Build binary tree */\nTreeNode *buildTree(vector<int> &preorder, vector<int> &inorder) {\n // Initialize hash table, storing in-order 
elements to indices mapping\n unordered_map<int, int> inorderMap;\n for (int i = 0; i < inorder.size(); i++) {\n inorderMap[inorder[i]] = i;\n }\n TreeNode *root = dfs(preorder, inorderMap, 0, 0, inorder.size() - 1);\n return root;\n}\n</code></pre> build_tree.java<pre><code>/* Build binary tree: Divide and conquer */\nTreeNode dfs(int[] preorder, Map<Integer, Integer> inorderMap, int i, int l, int r) {\n // Terminate when subtree interval is empty\n if (r - l < 0)\n return null;\n // Initialize root node\n TreeNode root = new TreeNode(preorder[i]);\n // Query m to divide left and right subtrees\n int m = inorderMap.get(preorder[i]);\n // Subproblem: build left subtree\n root.left = dfs(preorder, inorderMap, i + 1, l, m - 1);\n // Subproblem: build right subtree\n root.right = dfs(preorder, inorderMap, i + 1 + m - l, m + 1, r);\n // Return root node\n return root;\n}\n\n/* Build binary tree */\nTreeNode buildTree(int[] preorder, int[] inorder) {\n // Initialize hash table, storing in-order elements to indices mapping\n Map<Integer, Integer> inorderMap = new HashMap<>();\n for (int i = 0; i < inorder.length; i++) {\n inorderMap.put(inorder[i], i);\n }\n TreeNode root = dfs(preorder, inorderMap, 0, 0, inorder.length - 1);\n return root;\n}\n</code></pre> build_tree.cs<pre><code>[class]{build_tree}-[func]{DFS}\n\n[class]{build_tree}-[func]{BuildTree}\n</code></pre> build_tree.go<pre><code>[class]{}-[func]{dfsBuildTree}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.swift<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.js<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.ts<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.dart<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.rs<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{build_tree}\n</code></pre> 
build_tree.c<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.kt<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> build_tree.rb<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{build_tree}\n</code></pre> build_tree.zig<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{buildTree}\n</code></pre> <p>Figure 12-8 shows the recursive process of building the binary tree, where each node is established during the \"descending\" process, and each edge (reference) is established during the \"ascending\" process.</p> <1><2><3><4><5><6><7><8><9> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 12-8 \u00a0 Recursive process of building a binary tree </p> <p>Each recursive function's division results of <code>preorder</code> and <code>inorder</code> are shown in Figure 12-9.</p> <p></p> <p> Figure 12-9 \u00a0 Division results in each recursive function </p> <p>Assuming the number of nodes in the tree is \\(n\\), initializing each node (executing a recursive function <code>dfs()</code>) takes \\(O(1)\\) time. Thus, the overall time complexity is \\(O(n)\\).</p> <p>The hash table stores the mapping of <code>inorder</code> elements to their indexes, with a space complexity of \\(O(n)\\). In the worst case, when the binary tree degenerates into a linked list, the recursive depth reaches \\(n\\), using \\(O(n)\\) stack frame space. Therefore, the overall space complexity is \\(O(n)\\).</p>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/","title":"12.1 \u00a0 Divide and conquer algorithms","text":"<p>Divide and conquer, fully referred to as \"divide and rule\", is an extremely important and common algorithm strategy. 
Divide and conquer is usually based on recursion and includes two steps: \"divide\" and \"conquer\".</p> <ol> <li>Divide (partition phase): Recursively decompose the original problem into two or more sub-problems until the smallest sub-problem is reached and the process terminates.</li> <li>Conquer (merge phase): Starting from the smallest sub-problem with a known solution, merge the solutions of the sub-problems from bottom to top to construct the solution to the original problem.</li> </ol> <p>As shown in Figure 12-1, \"merge sort\" is one of the typical applications of the divide and conquer strategy.</p> <ol> <li>Divide: Recursively divide the original array (original problem) into two sub-arrays (sub-problems), until the sub-array has only one element (smallest sub-problem).</li> <li>Conquer: Merge the ordered sub-arrays (solutions to the sub-problems) from bottom to top to obtain an ordered original array (solution to the original problem).</li> </ol> <p></p> <p> Figure 12-1 \u00a0 Merge sort's divide and conquer strategy </p>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/#1211-how-to-identify-divide-and-conquer-problems","title":"12.1.1 \u00a0 How to identify divide and conquer problems","text":"<p>Whether a problem is suitable for a divide and conquer solution can usually be judged based on the following criteria.</p> <ol> <li>The problem can be decomposed: The original problem can be decomposed into smaller, similar sub-problems and can be recursively divided in the same manner.</li> <li>Sub-problems are independent: There is no overlap between sub-problems, and they are independent and can be solved separately.</li> <li>Solutions to sub-problems can be merged: The solution to the original problem is obtained by merging the solutions of the sub-problems.</li> </ol> <p>Clearly, merge sort meets these three criteria.</p> <ol> <li>The problem can be decomposed: Recursively divide the array (original problem) into two sub-arrays 
(sub-problems).</li> <li>Sub-problems are independent: Each sub-array can be sorted independently (sub-problems can be solved independently).</li> <li>Solutions to sub-problems can be merged: Two ordered sub-arrays (solutions to the sub-problems) can be merged into one ordered array (solution to the original problem).</li> </ol>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/#1212-improving-efficiency-through-divide-and-conquer","title":"12.1.2 \u00a0 Improving efficiency through divide and conquer","text":"<p>Divide and conquer can not only effectively solve algorithm problems but often also improve algorithm efficiency. In sorting algorithms, quicksort, merge sort, and heap sort are faster than selection, bubble, and insertion sorts because they apply the divide and conquer strategy.</p> <p>Then, we may ask: Why can divide and conquer improve algorithm efficiency, and what is the underlying logic? In other words, why are the steps of decomposing a large problem into multiple sub-problems, solving the sub-problems, and merging the solutions of the sub-problems into the solution of the original problem more efficient than directly solving the original problem? This question can be discussed from the aspects of the number of operations and parallel computation.</p>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/#1-optimization-of-operation-count","title":"1. \u00a0 Optimization of operation count","text":"<p>Taking \"bubble sort\" as an example, it requires \\(O(n^2)\\) time to process an array of length \\(n\\). 
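As a concrete baseline, here is a minimal bubble sort sketch (the comparison counter is our addition, not part of the book's listing); without an early-exit optimization it always performs n(n-1)/2 comparisons on an array of length n:

```python
def bubble_sort(nums: list[int]) -> int:
    """Bubble sort without early exit; returns the number of comparisons performed"""
    n = len(nums)
    count = 0
    # The i-th outer round bubbles the largest remaining element to index i
    for i in range(n - 1, 0, -1):
        for j in range(i):
            count += 1  # one comparison per inner step
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return count

nums = [4, 1, 3, 1, 5, 2]
comparisons = bubble_sort(nums)
print(nums)         # → [1, 1, 2, 3, 4, 5]
print(comparisons)  # → 15, i.e. n(n-1)/2 for n = 6
```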
Suppose we divide the array from the midpoint into two sub-arrays as shown in Figure 12-2, then the division requires \\(O(n)\\) time, sorting each sub-array requires \\(O((n / 2)^2)\\) time, and merging the two sub-arrays requires \\(O(n)\\) time, with the total time complexity being:</p> \\[ O(n + (\\frac{n}{2})^2 \\times 2 + n) = O(\\frac{n^2}{2} + 2n) \\] <p></p> <p> Figure 12-2 \u00a0 Bubble sort before and after array partition </p> <p>Next, we calculate the following inequality, where the left and right sides are the total number of operations before and after the partition, respectively:</p> \\[ \\begin{aligned} n^2 & > \\frac{n^2}{2} + 2n \\newline n^2 - \\frac{n^2}{2} - 2n & > 0 \\newline n(n - 4) & > 0 \\end{aligned} \\] <p>This means that when \\(n > 4\\), the number of operations after partitioning is fewer, and the sorting efficiency should be higher. Please note that the time complexity after partitioning is still quadratic \\(O(n^2)\\), but the constant factor in the complexity has decreased.</p> <p>Further, what if we keep dividing the sub-arrays from their midpoints into two sub-arrays until the sub-arrays have only one element left? This idea is actually \"merge sort,\" with a time complexity of \\(O(n \\log n)\\).</p> <p>Furthermore, what if we set several more partition points and evenly divide the original array into \\(k\\) sub-arrays? This situation is very similar to \"bucket sort,\" which is very suitable for sorting massive data, and theoretically, the time complexity can reach \\(O(n + k)\\).</p>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/#2-optimization-through-parallel-computation","title":"2. \u00a0 Optimization through parallel computation","text":"<p>We know that the sub-problems generated by divide and conquer are independent of each other, thus they can usually be solved in parallel. 
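Returning to the operation-count analysis, the crossover at n = 4 in the inequality above can be checked numerically (a quick sketch; the helper names are ours):

```python
def before(n: int) -> float:
    # Operation count without partitioning: O(n^2) on the whole array
    return n * n

def after(n: int) -> float:
    # Two halves at (n/2)^2 each, plus O(n) to divide and O(n) to merge
    return n * n / 2 + 2 * n

for n in range(1, 9):
    print(n, before(n), after(n), before(n) > after(n))
# The partitioned count becomes strictly smaller once n > 4,
# matching the factored inequality n(n - 4) > 0.
```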
This means that divide and conquer can not only reduce the algorithm's time complexity, but also facilitate parallel optimization by the operating system.</p> <p>Parallel optimization is especially effective in environments with multiple cores or processors, as the system can process multiple sub-problems simultaneously, making fuller use of computing resources and significantly reducing the overall runtime.</p> <p>For example, in the \"bucket sort\" shown in Figure 12-3, we distribute massive data evenly across various buckets, then the sorting tasks of all buckets can be distributed to different computing units, and the results are merged after completion.</p> <p></p> <p> Figure 12-3 \u00a0 Bucket sort's parallel computation </p>"},{"location":"chapter_divide_and_conquer/divide_and_conquer/#1213-common-applications-of-divide-and-conquer","title":"12.1.3 \u00a0 Common applications of divide and conquer","text":"<p>On one hand, divide and conquer can be used to solve many classic algorithm problems.</p> <ul> <li>Finding the closest point pair: This algorithm first divides the set of points into two parts, then finds the closest point pair in each part, and finally finds the closest point pair that spans the two parts.</li> <li>Large integer multiplication: For example, the Karatsuba algorithm, which breaks down large integer multiplication into several smaller integer multiplications and additions.</li> <li>Matrix multiplication: For example, the Strassen algorithm, which decomposes large matrix multiplication into multiple small matrix multiplications and additions.</li> <li>Tower of Hanoi problem: The Tower of Hanoi problem can be solved recursively, a typical application of the divide and conquer strategy.</li> <li>Solving inverse pairs: In a sequence, if a number in front is greater than a number behind, these two numbers form an inverse pair. 
Solving the inverse pair problem can utilize the idea of divide and conquer, with the aid of merge sort.</li> </ul> <p>On the other hand, divide and conquer is very widely applied in the design of algorithms and data structures.</p> <ul> <li>Binary search: Binary search divides an ordered array from the midpoint index into two parts, then decides which half to exclude based on the comparison result between the target value and the middle element value, and performs the same binary operation in the remaining interval.</li> <li>Merge sort: Already introduced at the beginning of this section, no further elaboration is needed.</li> <li>Quicksort: Quicksort selects a pivot value, then divides the array into two sub-arrays, one with elements smaller than the pivot and the other with elements larger than the pivot, and then performs the same partitioning operation on these two parts until the sub-array has only one element.</li> <li>Bucket sort: The basic idea of bucket sort is to distribute data to multiple buckets, then sort the elements within each bucket, and finally retrieve the elements from the buckets in order to obtain an ordered array.</li> <li>Trees: For example, binary search trees, AVL trees, red-black trees, B-trees, B+ trees, etc., their operations such as search, insertion, and deletion can all be considered applications of the divide and conquer strategy.</li> <li>Heap: A heap is a special type of complete binary tree, whose various operations, such as insertion, deletion, and heapification, actually imply the idea of divide and conquer.</li> <li>Hash table: Although hash tables do not directly apply divide and conquer, some hash collision resolution solutions indirectly apply the divide and conquer strategy, for example, long lists in chained addressing being converted to red-black trees to improve query efficiency.</li> </ul> <p>It can be seen that divide and conquer is a subtly pervasive algorithmic idea, embedded within various algorithms and data 
structures.</p>"},{"location":"chapter_divide_and_conquer/hanota_problem/","title":"12.4 \u00a0 Tower of Hanoi Problem","text":"<p>In both merge sorting and building binary trees, we decompose the original problem into two subproblems, each half the size of the original problem. However, for the Tower of Hanoi, we adopt a different decomposition strategy.</p> <p>Question</p> <p>Given three pillars, denoted as <code>A</code>, <code>B</code>, and <code>C</code>. Initially, pillar <code>A</code> is stacked with \\(n\\) discs, arranged in order from top to bottom from smallest to largest. Our task is to move these \\(n\\) discs to pillar <code>C</code>, maintaining their original order (as shown in Figure 12-10). The following rules must be followed during the disc movement process:</p> <ol> <li>A disc can only be picked up from the top of a pillar and placed on top of another pillar.</li> <li>Only one disc can be moved at a time.</li> <li>A smaller disc must always be on top of a larger disc.</li> </ol> <p></p> <p> Figure 12-10 \u00a0 Example of the Tower of Hanoi </p> <p>We denote the Tower of Hanoi of size \\(i\\) as \\(f(i)\\). For example, \\(f(3)\\) represents the Tower of Hanoi of moving \\(3\\) discs from <code>A</code> to <code>C</code>.</p>"},{"location":"chapter_divide_and_conquer/hanota_problem/#1-consider-the-base-case","title":"1. 
\u00a0 Consider the base case","text":"<p>As shown in Figure 12-11, for the problem \\(f(1)\\), i.e., when there is only one disc, we can directly move it from <code>A</code> to <code>C</code>.</p> <1><2> <p></p> <p></p> <p> Figure 12-11 \u00a0 Solution for a problem of size 1 </p> <p>As shown in Figure 12-12, for the problem \\(f(2)\\), i.e., when there are two discs, since the smaller disc must always be above the larger disc, <code>B</code> is needed to assist in the movement.</p> <ol> <li>First, move the smaller disc from <code>A</code> to <code>B</code>.</li> <li>Then move the larger disc from <code>A</code> to <code>C</code>.</li> <li>Finally, move the smaller disc from <code>B</code> to <code>C</code>.</li> </ol> <1><2><3><4> <p></p> <p></p> <p></p> <p></p> <p> Figure 12-12 \u00a0 Solution for a problem of size 2 </p> <p>The process of solving the problem \\(f(2)\\) can be summarized as: moving two discs from <code>A</code> to <code>C</code> with the help of <code>B</code>. Here, <code>C</code> is called the target pillar, and <code>B</code> is called the buffer pillar.</p>"},{"location":"chapter_divide_and_conquer/hanota_problem/#2-decomposition-of-subproblems","title":"2. \u00a0 Decomposition of subproblems","text":"<p>For the problem \\(f(3)\\), i.e., when there are three discs, the situation becomes slightly more complicated.</p> <p>Since we already know the solutions to \\(f(1)\\) and \\(f(2)\\), we can think from a divide-and-conquer perspective and consider the two top discs on <code>A</code> as a unit, performing the steps shown in Figure 12-13. 
This way, the three discs are successfully moved from <code>A</code> to <code>C</code>.</p> <ol> <li>Let <code>B</code> be the target pillar and <code>C</code> the buffer pillar, and move the two discs from <code>A</code> to <code>B</code>.</li> <li>Move the remaining disc from <code>A</code> directly to <code>C</code>.</li> <li>Let <code>C</code> be the target pillar and <code>A</code> the buffer pillar, and move the two discs from <code>B</code> to <code>C</code>.</li> </ol> <1><2><3><4> <p></p> <p></p> <p></p> <p></p> <p> Figure 12-13 \u00a0 Solution for a problem of size 3 </p> <p>Essentially, we divide the problem \\(f(3)\\) into two subproblems \\(f(2)\\) and one subproblem \\(f(1)\\). By solving these three subproblems in order, the original problem is resolved. This indicates that the subproblems are independent, and their solutions can be merged.</p> <p>From this, we can summarize the divide-and-conquer strategy for solving the Tower of Hanoi shown in Figure 12-14: divide the original problem \\(f(n)\\) into two subproblems \\(f(n-1)\\) and one subproblem \\(f(1)\\), and solve these three subproblems in the following order.</p> <ol> <li>Move \\(n-1\\) discs with the help of <code>C</code> from <code>A</code> to <code>B</code>.</li> <li>Move the remaining one disc directly from <code>A</code> to <code>C</code>.</li> <li>Move \\(n-1\\) discs with the help of <code>A</code> from <code>B</code> to <code>C</code>.</li> </ol> <p>For these two subproblems \\(f(n-1)\\), they can be recursively divided in the same manner until the smallest subproblem \\(f(1)\\) is reached. The solution to \\(f(1)\\) is already known and requires only one move.</p> <p></p> <p> Figure 12-14 \u00a0 Divide and conquer strategy for solving the Tower of Hanoi </p>"},{"location":"chapter_divide_and_conquer/hanota_problem/#3-code-implementation","title":"3. 
\u00a0 Code implementation","text":"<p>In the code, we declare a recursive function <code>dfs(i, src, buf, tar)</code> whose role is to move the \\(i\\) discs on top of pillar <code>src</code> with the help of buffer pillar <code>buf</code> to the target pillar <code>tar</code>:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig hanota.py<pre><code>def move(src: list[int], tar: list[int]):\n \"\"\"Move a disc\"\"\"\n # Take out a disc from the top of src\n pan = src.pop()\n # Place the disc on top of tar\n tar.append(pan)\n\ndef dfs(i: int, src: list[int], buf: list[int], tar: list[int]):\n \"\"\"Solve the Tower of Hanoi problem f(i)\"\"\"\n # If only one disc remains on src, move it to tar\n if i == 1:\n move(src, tar)\n return\n # Subproblem f(i-1): move the top i-1 discs from src with the help of tar to buf\n dfs(i - 1, src, tar, buf)\n # Subproblem f(1): move the remaining one disc from src to tar\n move(src, tar)\n # Subproblem f(i-1): move the top i-1 discs from buf with the help of src to tar\n dfs(i - 1, buf, src, tar)\n\ndef solve_hanota(A: list[int], B: list[int], C: list[int]):\n \"\"\"Solve the Tower of Hanoi problem\"\"\"\n n = len(A)\n # Move the top n discs from A with the help of B to C\n dfs(n, A, B, C)\n</code></pre> hanota.cpp<pre><code>/* Move a disc */\nvoid move(vector<int> &src, vector<int> &tar) {\n // Take out a disc from the top of src\n int pan = src.back();\n src.pop_back();\n // Place the disc on top of tar\n tar.push_back(pan);\n}\n\n/* Solve the Tower of Hanoi problem f(i) */\nvoid dfs(int i, vector<int> &src, vector<int> &buf, vector<int> &tar) {\n // If only one disc remains on src, move it to tar\n if (i == 1) {\n move(src, tar);\n return;\n }\n // Subproblem f(i-1): move the top i-1 discs from src with the help of tar to buf\n dfs(i - 1, src, tar, buf);\n // Subproblem f(1): move the remaining one disc from src to tar\n move(src, tar);\n // Subproblem f(i-1): move the top i-1 discs from buf with the help of src to tar\n dfs(i - 
1, buf, src, tar);\n}\n\n/* Solve the Tower of Hanoi problem */\nvoid solveHanota(vector<int> &A, vector<int> &B, vector<int> &C) {\n int n = A.size();\n // Move the top n discs from A with the help of B to C\n dfs(n, A, B, C);\n}\n</code></pre> hanota.java<pre><code>/* Move a disc */\nvoid move(List<Integer> src, List<Integer> tar) {\n // Take out a disc from the top of src\n Integer pan = src.remove(src.size() - 1);\n // Place the disc on top of tar\n tar.add(pan);\n}\n\n/* Solve the Tower of Hanoi problem f(i) */\nvoid dfs(int i, List<Integer> src, List<Integer> buf, List<Integer> tar) {\n // If only one disc remains on src, move it to tar\n if (i == 1) {\n move(src, tar);\n return;\n }\n // Subproblem f(i-1): move the top i-1 discs from src with the help of tar to buf\n dfs(i - 1, src, tar, buf);\n // Subproblem f(1): move the remaining one disc from src to tar\n move(src, tar);\n // Subproblem f(i-1): move the top i-1 discs from buf with the help of src to tar\n dfs(i - 1, buf, src, tar);\n}\n\n/* Solve the Tower of Hanoi problem */\nvoid solveHanota(List<Integer> A, List<Integer> B, List<Integer> C) {\n int n = A.size();\n // Move the top n discs from A with the help of B to C\n dfs(n, A, B, C);\n}\n</code></pre> hanota.cs<pre><code>[class]{hanota}-[func]{Move}\n\n[class]{hanota}-[func]{DFS}\n\n[class]{hanota}-[func]{SolveHanota}\n</code></pre> hanota.go<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfsHanota}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.swift<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.js<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.ts<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.dart<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> 
hanota.rs<pre><code>[class]{}-[func]{move_pan}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solve_hanota}\n</code></pre> hanota.c<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.kt<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> hanota.rb<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solve_hanota}\n</code></pre> hanota.zig<pre><code>[class]{}-[func]{move}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{solveHanota}\n</code></pre> <p>As shown in Figure 12-15, the Tower of Hanoi forms a recursive tree of height \\(n\\). Each node represents a subproblem and corresponds to a call of the <code>dfs()</code> function, so the time complexity is \\(O(2^n)\\) and the space complexity is \\(O(n)\\).</p> <p></p> <p> Figure 12-15 \u00a0 Recursive tree of the Tower of Hanoi </p> <p>Quote</p> <p>The Tower of Hanoi originates from an ancient legend. In a temple in ancient India, monks had three tall diamond pillars and \\(64\\) differently sized golden discs. The monks continuously moved the discs, believing that when the last disc was correctly placed, the world would end.</p> <p>However, even if the monks moved a disc every second, it would take about \\(2^{64} \\approx 1.84\u00d710^{19}\\) seconds, approximately 585 billion years, far exceeding current estimates of the age of the universe. 
Thus, if the legend is true, we probably do not need to worry about the world ending.</p>"},{"location":"chapter_divide_and_conquer/summary/","title":"12.5 \u00a0 Summary","text":"<ul> <li>Divide and conquer is a common algorithm design strategy, which consists of two stages, dividing (partitioning) and conquering (merging), and is usually implemented with recursion.</li> <li>The criteria for judging whether a problem can be solved with divide and conquer include: whether the problem can be decomposed, whether the subproblems are independent, and whether the subproblems can be merged.</li> <li>Merge sort is a typical application of the divide and conquer strategy. It recursively divides the array into two equal-length subarrays until only one element remains, and then merges layer by layer to complete the sorting.</li> <li>Introducing the divide and conquer strategy can often improve algorithm efficiency. On one hand, it reduces the number of operations; on the other hand, the divided subproblems lend themselves to parallel optimization.</li> <li>Divide and conquer can solve many algorithm problems and is ubiquitous in data structure and algorithm design.</li> <li>Compared to brute force search, adaptive search is more efficient. Search algorithms with a time complexity of \\(O(\\log n)\\) are usually based on the divide and conquer strategy.</li> <li>Binary search is another typical application of the divide and conquer strategy, which does not include the step of merging the solutions of subproblems. 
We can implement binary search through recursive divide and conquer.</li> <li>In the problem of constructing binary trees, building the tree (original problem) can be divided into building the left and right subtree (subproblems), which can be achieved by partitioning the index intervals of the pre-order and in-order traversals.</li> <li>In the Tower of Hanoi problem, a problem of size \\(n\\) can be divided into two subproblems of size \\(n-1\\) and one subproblem of size \\(1\\). By solving these three subproblems in sequence, the original problem is consequently resolved.</li> </ul>"},{"location":"chapter_dynamic_programming/","title":"Chapter 14. \u00a0 Dynamic programming","text":"<p>Abstract</p> <p>Streams merge into rivers, and rivers merge into the sea.</p> <p>Dynamic programming combines the solutions of small problems to solve bigger problems, step by step leading us to the solution.</p>"},{"location":"chapter_dynamic_programming/#chapter-contents","title":"Chapter contents","text":"<ul> <li>14.1 \u00a0 Introduction to dynamic programming</li> <li>14.2 \u00a0 Characteristics of DP problems</li> <li>14.3 \u00a0 DP problem-solving approach\u00b6</li> <li>14.4 \u00a0 0-1 Knapsack problem</li> <li>14.5 \u00a0 Unbounded knapsack problem</li> <li>14.6 \u00a0 Edit distance problem</li> <li>14.7 \u00a0 Summary</li> </ul>"},{"location":"chapter_dynamic_programming/dp_problem_features/","title":"14.2 \u00a0 Characteristics of dynamic programming problems","text":"<p>In the previous section, we learned how dynamic programming solves the original problem by decomposing it into subproblems. 
In fact, subproblem decomposition is a general algorithmic approach, with different emphases in divide and conquer, dynamic programming, and backtracking.</p> <ul> <li>Divide and conquer algorithms recursively divide the original problem into multiple independent subproblems until the smallest subproblems are reached, and combine the solutions of the subproblems during backtracking to ultimately obtain the solution to the original problem.</li> <li>Dynamic programming also decomposes the problem recursively, but the main difference from divide and conquer algorithms is that the subproblems in dynamic programming are interdependent, and many overlapping subproblems will appear during the decomposition process.</li> <li>Backtracking algorithms exhaust all possible solutions through trial and error and avoid unnecessary search branches by pruning. The solution to the original problem consists of a series of decision steps, and we can consider each sub-sequence before each decision step as a subproblem.</li> </ul> <p>In fact, dynamic programming is commonly used to solve optimization problems, which not only include overlapping subproblems but also have two other major characteristics: optimal substructure and statelessness.</p>"},{"location":"chapter_dynamic_programming/dp_problem_features/#1421-optimal-substructure","title":"14.2.1 \u00a0 Optimal substructure","text":"<p>We make a slight modification to the stair climbing problem to make it more suitable to demonstrate the concept of optimal substructure.</p> <p>Minimum cost of climbing stairs</p> <p>Given a staircase, you can step up 1 or 2 steps at a time, and each step on the staircase has a non-negative integer representing the cost you need to pay at that step. Given a non-negative integer array \\(cost\\), where \\(cost[i]\\) represents the cost you need to pay at the \\(i\\)-th step, \\(cost[0]\\) is the ground (starting point). 
What is the minimum cost required to reach the top?</p> <p>As shown in Figure 14-6, if the costs of the 1<sup>st</sup>, 2<sup>nd</sup>, and 3<sup>rd</sup> steps are \\(1\\), \\(10\\), and \\(1\\) respectively, then the minimum cost to climb to the 3<sup>rd</sup> step from the ground is \\(2\\).</p> <p></p> <p> Figure 14-6 \u00a0 Minimum cost to climb to the 3rd step </p> <p>Let \\(dp[i]\\) be the cumulative cost of climbing to the \\(i\\)-th step. Since the \\(i\\)-th step can only be reached from the \\(i-1\\) or \\(i-2\\) step, \\(dp[i]\\) can only be either \\(dp[i-1] + cost[i]\\) or \\(dp[i-2] + cost[i]\\). To minimize the cost, we should choose the smaller of the two:</p> \\[ dp[i] = \\min(dp[i-1], dp[i-2]) + cost[i] \\] <p>This leads us to the meaning of optimal substructure: The optimal solution to the original problem is constructed from the optimal solutions of subproblems.</p> <p>This problem obviously has optimal substructure: we select the better of the optimal solutions of the two subproblems, \\(dp[i-1]\\) and \\(dp[i-2]\\), and use it to construct the optimal solution for the original problem \\(dp[i]\\).</p> <p>So, does the stair climbing problem from the previous section have optimal substructure? Its goal is to solve for the number of solutions, which seems to be a counting problem. But if we rephrase the question as \"solve for the maximum number of solutions\", we surprisingly find that although the problem has changed, the optimal substructure has emerged: the maximum number of solutions at the \\(n\\)-th step equals the sum of the maximum number of solutions at the \\(n-1\\) and \\(n-2\\) steps. 
Thus, the interpretation of optimal substructure is quite flexible and will have different meanings in different problems.</p> <p>According to the state transition equation, and the initial states \\(dp[1] = cost[1]\\) and \\(dp[2] = cost[2]\\), we can obtain the dynamic programming code:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_cost_climbing_stairs_dp.py<pre><code>def min_cost_climbing_stairs_dp(cost: list[int]) -> int:\n \"\"\"Climbing stairs with minimum cost: Dynamic programming\"\"\"\n n = len(cost) - 1\n if n == 1 or n == 2:\n return cost[n]\n # Initialize dp table, used to store subproblem solutions\n dp = [0] * (n + 1)\n # Initial state: preset the smallest subproblem solution\n dp[1], dp[2] = cost[1], cost[2]\n # State transition: gradually solve larger subproblems from smaller ones\n for i in range(3, n + 1):\n dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i]\n return dp[n]\n</code></pre> min_cost_climbing_stairs_dp.cpp<pre><code>/* Climbing stairs with minimum cost: Dynamic programming */\nint minCostClimbingStairsDP(vector<int> &cost) {\n int n = cost.size() - 1;\n if (n == 1 || n == 2)\n return cost[n];\n // Initialize dp table, used to store subproblem solutions\n vector<int> dp(n + 1);\n // Initial state: preset the smallest subproblem solution\n dp[1] = cost[1];\n dp[2] = cost[2];\n // State transition: gradually solve larger subproblems from smaller ones\n for (int i = 3; i <= n; i++) {\n dp[i] = min(dp[i - 1], dp[i - 2]) + cost[i];\n }\n return dp[n];\n}\n</code></pre> min_cost_climbing_stairs_dp.java<pre><code>/* Climbing stairs with minimum cost: Dynamic programming */\nint minCostClimbingStairsDP(int[] cost) {\n int n = cost.length - 1;\n if (n == 1 || n == 2)\n return cost[n];\n // Initialize dp table, used to store subproblem solutions\n int[] dp = new int[n + 1];\n // Initial state: preset the smallest subproblem solution\n dp[1] = cost[1];\n dp[2] = cost[2];\n // State transition: gradually solve larger subproblems from smaller 
ones\n for (int i = 3; i <= n; i++) {\n dp[i] = Math.min(dp[i - 1], dp[i - 2]) + cost[i];\n }\n return dp[n];\n}\n</code></pre> min_cost_climbing_stairs_dp.cs<pre><code>[class]{min_cost_climbing_stairs_dp}-[func]{MinCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.go<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.swift<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.js<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.ts<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.dart<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.rs<pre><code>[class]{}-[func]{min_cost_climbing_stairs_dp}\n</code></pre> min_cost_climbing_stairs_dp.c<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.kt<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> min_cost_climbing_stairs_dp.rb<pre><code>[class]{}-[func]{min_cost_climbing_stairs_dp}\n</code></pre> min_cost_climbing_stairs_dp.zig<pre><code>[class]{}-[func]{minCostClimbingStairsDP}\n</code></pre> <p>Figure 14-7 shows the dynamic programming process for the above code.</p> <p></p> <p> Figure 14-7 \u00a0 Dynamic programming process for minimum cost of climbing stairs </p> <p>This problem can also be space-optimized, compressing one dimension to zero, reducing the space complexity from \\(O(n)\\) to \\(O(1)\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_cost_climbing_stairs_dp.py<pre><code>def min_cost_climbing_stairs_dp_comp(cost: list[int]) -> int:\n \"\"\"Climbing stairs with minimum cost: Space-optimized dynamic programming\"\"\"\n n = len(cost) - 1\n if n == 1 or n == 2:\n return cost[n]\n a, b = cost[1], cost[2]\n for i in range(3, n + 1):\n a, b = b, min(a, b) + cost[i]\n return b\n</code></pre> 
min_cost_climbing_stairs_dp.cpp<pre><code>/* Climbing stairs with minimum cost: Space-optimized dynamic programming */\nint minCostClimbingStairsDPComp(vector<int> &cost) {\n int n = cost.size() - 1;\n if (n == 1 || n == 2)\n return cost[n];\n int a = cost[1], b = cost[2];\n for (int i = 3; i <= n; i++) {\n int tmp = b;\n b = min(a, tmp) + cost[i];\n a = tmp;\n }\n return b;\n}\n</code></pre> min_cost_climbing_stairs_dp.java<pre><code>/* Climbing stairs with minimum cost: Space-optimized dynamic programming */\nint minCostClimbingStairsDPComp(int[] cost) {\n int n = cost.length - 1;\n if (n == 1 || n == 2)\n return cost[n];\n int a = cost[1], b = cost[2];\n for (int i = 3; i <= n; i++) {\n int tmp = b;\n b = Math.min(a, tmp) + cost[i];\n a = tmp;\n }\n return b;\n}\n</code></pre> min_cost_climbing_stairs_dp.cs<pre><code>[class]{min_cost_climbing_stairs_dp}-[func]{MinCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.go<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.swift<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.js<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.ts<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.dart<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.rs<pre><code>[class]{}-[func]{min_cost_climbing_stairs_dp_comp}\n</code></pre> min_cost_climbing_stairs_dp.c<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.kt<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre> min_cost_climbing_stairs_dp.rb<pre><code>[class]{}-[func]{min_cost_climbing_stairs_dp_comp}\n</code></pre> 
min_cost_climbing_stairs_dp.zig<pre><code>[class]{}-[func]{minCostClimbingStairsDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/dp_problem_features/#1422-statelessness","title":"14.2.2 \u00a0 Statelessness","text":"<p>Statelessness is one of the important characteristics that make dynamic programming effective in solving problems. Its definition is: Given a certain state, its future development is only related to the current state and unrelated to all past states experienced.</p> <p>Taking the stair climbing problem as an example, given state \\(i\\), it will develop into states \\(i+1\\) and \\(i+2\\), corresponding to jumping 1 step and 2 steps respectively. When making these two choices, we do not need to consider the states before state \\(i\\), as they do not affect the future of state \\(i\\).</p> <p>However, if we add a constraint to the stair climbing problem, the situation changes.</p> <p>Stair climbing with constraints</p> <p>Given a staircase with \\(n\\) steps, you can go up 1 or 2 steps each time, but you cannot jump 1 step twice in a row. How many ways are there to climb to the top?</p> <p>As shown in Figure 14-8, there are only 2 feasible options for climbing to the 3<sup>rd</sup> step, among which the option of jumping 1 step three times in a row does not meet the constraint condition and is therefore discarded.</p> <p></p> <p> Figure 14-8 \u00a0 Number of feasible options for climbing to the 3rd step with constraints </p> <p>In this problem, if the last round was a jump of 1 step, then the next round must be a jump of 2 steps. 
This means that the next step choice cannot be independently determined by the current state (current stair step), but also depends on the previous state (last round's stair step).</p> <p>It is not difficult to find that this problem no longer satisfies statelessness, and the state transition equation \\(dp[i] = dp[i-1] + dp[i-2]\\) also fails, because \\(dp[i-1]\\) represents this round's jump of 1 step, but it includes many \"last round was a jump of 1 step\" options, which, to meet the constraint, cannot be directly included in \\(dp[i]\\).</p> <p>For this, we need to expand the state definition: State \\([i, j]\\) represents being on the \\(i\\)-th step and the last round was a jump of \\(j\\) steps, where \\(j \\in \\{1, 2\\}\\). This state definition effectively distinguishes whether the last round was a jump of 1 step or 2 steps, and we can judge accordingly where the current state came from.</p> <ul> <li>When the last round was a jump of 1 step, the round before last could only choose to jump 2 steps, that is, \\(dp[i, 1]\\) can only be transferred from \\(dp[i-1, 2]\\).</li> <li>When the last round was a jump of 2 steps, the round before last could choose to jump 1 step or 2 steps, that is, \\(dp[i, 2]\\) can be transferred from \\(dp[i-2, 1]\\) or \\(dp[i-2, 2]\\).</li> </ul> <p>As shown in Figure 14-9, \\(dp[i, j]\\) represents the number of solutions for state \\([i, j]\\). 
At this point, the state transition equation is:</p> \\[ \\begin{cases} dp[i, 1] = dp[i-1, 2] \\\\ dp[i, 2] = dp[i-2, 1] + dp[i-2, 2] \\end{cases} \\] <p></p> <p> Figure 14-9 \u00a0 Recursive relationship considering constraints </p> <p>In the end, returning \\(dp[n, 1] + dp[n, 2]\\) will do, the sum of the two representing the total number of solutions for climbing to the \\(n\\)-th step:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_constraint_dp.py<pre><code>def climbing_stairs_constraint_dp(n: int) -> int:\n \"\"\"Constrained climbing stairs: Dynamic programming\"\"\"\n if n == 1 or n == 2:\n return 1\n # Initialize dp table, used to store subproblem solutions\n dp = [[0] * 3 for _ in range(n + 1)]\n # Initial state: preset the smallest subproblem solution\n dp[1][1], dp[1][2] = 1, 0\n dp[2][1], dp[2][2] = 0, 1\n # State transition: gradually solve larger subproblems from smaller ones\n for i in range(3, n + 1):\n dp[i][1] = dp[i - 1][2]\n dp[i][2] = dp[i - 2][1] + dp[i - 2][2]\n return dp[n][1] + dp[n][2]\n</code></pre> climbing_stairs_constraint_dp.cpp<pre><code>/* Constrained climbing stairs: Dynamic programming */\nint climbingStairsConstraintDP(int n) {\n if (n == 1 || n == 2) {\n return 1;\n }\n // Initialize dp table, used to store subproblem solutions\n vector<vector<int>> dp(n + 1, vector<int>(3, 0));\n // Initial state: preset the smallest subproblem solution\n dp[1][1] = 1;\n dp[1][2] = 0;\n dp[2][1] = 0;\n dp[2][2] = 1;\n // State transition: gradually solve larger subproblems from smaller ones\n for (int i = 3; i <= n; i++) {\n dp[i][1] = dp[i - 1][2];\n dp[i][2] = dp[i - 2][1] + dp[i - 2][2];\n }\n return dp[n][1] + dp[n][2];\n}\n</code></pre> climbing_stairs_constraint_dp.java<pre><code>/* Constrained climbing stairs: Dynamic programming */\nint climbingStairsConstraintDP(int n) {\n if (n == 1 || n == 2) {\n return 1;\n }\n // Initialize dp table, used to store subproblem solutions\n int[][] dp = new int[n + 1][3];\n // 
Initial state: preset the smallest subproblem solution\n dp[1][1] = 1;\n dp[1][2] = 0;\n dp[2][1] = 0;\n dp[2][2] = 1;\n // State transition: gradually solve larger subproblems from smaller ones\n for (int i = 3; i <= n; i++) {\n dp[i][1] = dp[i - 1][2];\n dp[i][2] = dp[i - 2][1] + dp[i - 2][2];\n }\n return dp[n][1] + dp[n][2];\n}\n</code></pre> climbing_stairs_constraint_dp.cs<pre><code>[class]{climbing_stairs_constraint_dp}-[func]{ClimbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.go<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.swift<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.js<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.ts<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.dart<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.rs<pre><code>[class]{}-[func]{climbing_stairs_constraint_dp}\n</code></pre> climbing_stairs_constraint_dp.c<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.kt<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> climbing_stairs_constraint_dp.rb<pre><code>[class]{}-[func]{climbing_stairs_constraint_dp}\n</code></pre> climbing_stairs_constraint_dp.zig<pre><code>[class]{}-[func]{climbingStairsConstraintDP}\n</code></pre> <p>In the above cases, since we only need to consider the previous state, we can still meet the statelessness by expanding the state definition. However, some problems have very serious \"state effects\".</p> <p>Stair climbing with obstacle generation</p> <p>Given a staircase with \\(n\\) steps, you can go up 1 or 2 steps each time. 
It is stipulated that when climbing to the \\(i\\)-th step, the system automatically places an obstacle on the \\(2i\\)-th step, and no subsequent round may land on the \\(2i\\)-th step. For example, if the first two rounds jump to the 2<sup>nd</sup> and 3<sup>rd</sup> steps, then later you cannot jump to the 4<sup>th</sup> and 6<sup>th</sup> steps. How many ways are there to climb to the top?</p> <p>In this problem, the next jump depends on all past states, as each jump places obstacles on higher steps, affecting future jumps. Dynamic programming often struggles to solve such problems.</p> <p>In fact, many complex combinatorial optimization problems (such as the traveling salesman problem) do not satisfy statelessness. For these kinds of problems, we usually turn to other methods, such as heuristic search, genetic algorithms, reinforcement learning, etc., to obtain usable local optimal solutions within a limited time.</p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/","title":"14.3 \u00a0 Dynamic programming problem-solving approach","text":"<p>The last two sections introduced the main characteristics of dynamic programming problems. Next, let's explore two more practical issues together.</p> <ol> <li>How to determine whether a problem is a dynamic programming problem?</li> <li>What are the complete steps to solve a dynamic programming problem?</li> </ol>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#1431-problem-determination","title":"14.3.1 \u00a0 Problem determination","text":"<p>Generally speaking, if a problem contains overlapping subproblems and optimal substructure, and satisfies statelessness, it is usually suitable for a dynamic programming solution. However, it is often difficult to directly extract these characteristics from the problem description. 
Therefore, we usually relax the conditions and first observe whether the problem is suitable for resolution using backtracking (exhaustive search).</p> <p>Problems suitable for backtracking usually fit the \"decision tree model\", which can be described using a tree structure, where each node represents a decision, and each path represents a sequence of decisions.</p> <p>In other words, if the problem contains explicit decision concepts, and the solution is produced through a series of decisions, then it fits the decision tree model and can usually be solved using backtracking.</p> <p>On this basis, there are some \"bonus points\" for determining dynamic programming problems.</p> <ul> <li>The problem contains descriptions of maximization (minimization) or finding the most (least) optimal solution.</li> <li>The problem's states can be represented using a list, multi-dimensional matrix, or tree, and a state has a recursive relationship with its surrounding states.</li> </ul> <p>Correspondingly, there are also some \"penalty points\".</p> <ul> <li>The goal of the problem is to find all possible solutions, not just the optimal solution.</li> <li>The problem description has obvious characteristics of permutations and combinations, requiring the return of specific multiple solutions.</li> </ul> <p>If a problem fits the decision tree model and has relatively obvious \"bonus points\", we can assume it is a dynamic programming problem and verify it during the solution process.</p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#1432-problem-solving-steps","title":"14.3.2 \u00a0 Problem-solving steps","text":"<p>The dynamic programming problem-solving process varies with the nature and difficulty of the problem but generally follows these steps: describe decisions, define states, establish a \\(dp\\) table, derive state transition equations, and determine boundary conditions, etc.</p> <p>To illustrate the problem-solving steps more vividly, we use a classic 
problem, \"Minimum Path Sum\", as an example.</p> <p>Question</p> <p>Given an \\(n \\times m\\) two-dimensional grid <code>grid</code>, each cell in the grid contains a non-negative integer representing the cost of that cell. The robot starts from the top-left cell and can only move down or right at each step until it reaches the bottom-right cell. Return the minimum path sum from the top-left to the bottom-right.</p> <p>Figure 14-10 shows an example, where the given grid's minimum path sum is \\(13\\).</p> <p></p> <p> Figure 14-10 \u00a0 Minimum Path Sum Example Data </p> <p>First step: Think about each round of decisions, define the state, and thereby obtain the \\(dp\\) table</p> <p>Each round of decisions in this problem is to move one step down or right from the current cell. Suppose the row and column indices of the current cell are \\([i, j]\\), then after moving down or right, the indices become \\([i+1, j]\\) or \\([i, j+1]\\). Therefore, the state should include two variables: the row index and the column index, denoted as \\([i, j]\\).</p> <p>The state \\([i, j]\\) corresponds to the subproblem: the minimum path sum from the starting point \\([0, 0]\\) to \\([i, j]\\), denoted as \\(dp[i, j]\\).</p> <p>Thus, we obtain the two-dimensional \\(dp\\) matrix shown in Figure 14-11, whose size is the same as the input grid \\(grid\\).</p> <p></p> <p> Figure 14-11 \u00a0 State definition and DP table </p> <p>Note</p> <p>Dynamic programming and backtracking can be described as a sequence of decisions, while a state consists of all decision variables. It should include all variables that describe the progress of solving the problem, containing enough information to derive the next state.</p> <p>Each state corresponds to a subproblem, and we define a \\(dp\\) table to store the solutions to all subproblems. Each independent variable of the state is a dimension of the \\(dp\\) table. 
Essentially, the \\(dp\\) table is a mapping between states and solutions to subproblems.</p> <p>Second step: Identify the optimal substructure, then derive the state transition equation</p> <p>For the state \\([i, j]\\), it can only be derived from the cell above \\([i-1, j]\\) or the cell to the left \\([i, j-1]\\). Therefore, the optimal substructure is: the minimum path sum to reach \\([i, j]\\) is determined by the smaller of the minimum path sums of \\([i, j-1]\\) and \\([i-1, j]\\).</p> <p>Based on the above analysis, the state transition equation shown in Figure 14-12 can be derived:</p> \\[ dp[i, j] = \\min(dp[i-1, j], dp[i, j-1]) + grid[i, j] \\] <p></p> <p> Figure 14-12 \u00a0 Optimal substructure and state transition equation </p> <p>Note</p> <p>Based on the defined \\(dp\\) table, think about the relationship between the original problem and the subproblems, and find out how to construct the optimal solution to the original problem from the optimal solutions to the subproblems, i.e., the optimal substructure.</p> <p>Once we have identified the optimal substructure, we can use it to build the state transition equation.</p> <p>Third step: Determine boundary conditions and state transition order</p> <p>In this problem, the states in the first row can only come from the states to their left, and the states in the first column can only come from the states above them, so the first row \\(i = 0\\) and the first column \\(j = 0\\) are the boundary conditions.</p> <p>As shown in Figure 14-13, since each cell is derived from the cell to its left and the cell above it, we use loops to traverse the matrix, the outer loop iterating over the rows and the inner loop iterating over the columns.</p> <p></p> <p> Figure 14-13 \u00a0 Boundary conditions and state transition order </p> <p>Note</p> <p>Boundary conditions are used in dynamic programming to initialize the \\(dp\\) table, and in search to prune.</p> <p>The core of the state transition order is to ensure that 
when calculating the solution to the current problem, all the smaller subproblems it depends on have already been correctly calculated.</p> <p>Based on the above analysis, we can directly write the dynamic programming code. However, the decomposition of subproblems is a top-down approach, so implementing it in the order of \"brute-force search \u2192 memoized search \u2192 dynamic programming\" is more in line with habitual thinking.</p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#1-method-1-brute-force-search","title":"1. \u00a0 Method 1: Brute-force search","text":"<p>Start searching from the state \\([i, j]\\), constantly decomposing it into smaller states \\([i-1, j]\\) and \\([i, j-1]\\). The recursive function includes the following elements.</p> <ul> <li>Recursive parameter: state \\([i, j]\\).</li> <li>Return value: the minimum path sum from \\([0, 0]\\) to \\([i, j]\\), i.e., \\(dp[i, j]\\).</li> <li>Termination condition: when \\(i = 0\\) and \\(j = 0\\), return the cost \\(grid[0, 0]\\).</li> <li>Pruning: when \\(i < 0\\) or \\(j < 0\\), the index is out of bounds, so return a cost of \\(+\\infty\\), representing infeasibility.</li> </ul> <p>The implementation code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_path_sum.py<pre><code>def min_path_sum_dfs(grid: list[list[int]], i: int, j: int) -> int:\n \"\"\"Minimum path sum: Brute force search\"\"\"\n # If it's the top-left cell, terminate the search\n if i == 0 and j == 0:\n return grid[0][0]\n # If the row or column index is out of bounds, return a +\u221e cost\n if i < 0 or j < 0:\n return inf\n # Calculate the minimum path cost from the top-left to (i-1, j) and (i, j-1)\n up = min_path_sum_dfs(grid, i - 1, j)\n left = min_path_sum_dfs(grid, i, j - 1)\n # Return the minimum path cost from the top-left to (i, j)\n return min(left, up) + grid[i][j]\n</code></pre> min_path_sum.cpp<pre><code>/* Minimum path sum: Brute force search */\nint minPathSumDFS(vector<vector<int>> &grid, int i, 
int j) {\n // If it's the top-left cell, terminate the search\n if (i == 0 && j == 0) {\n return grid[0][0];\n }\n // If the row or column index is out of bounds, return a +\u221e cost\n if (i < 0 || j < 0) {\n return INT_MAX;\n }\n // Calculate the minimum path cost from the top-left to (i-1, j) and (i, j-1)\n int up = minPathSumDFS(grid, i - 1, j);\n int left = minPathSumDFS(grid, i, j - 1);\n // Return the minimum path cost from the top-left to (i, j)\n return min(left, up) != INT_MAX ? min(left, up) + grid[i][j] : INT_MAX;\n}\n</code></pre> min_path_sum.java<pre><code>/* Minimum path sum: Brute force search */\nint minPathSumDFS(int[][] grid, int i, int j) {\n // If it's the top-left cell, terminate the search\n if (i == 0 && j == 0) {\n return grid[0][0];\n }\n // If the row or column index is out of bounds, return a +\u221e cost\n if (i < 0 || j < 0) {\n return Integer.MAX_VALUE;\n }\n // Calculate the minimum path cost from the top-left to (i-1, j) and (i, j-1)\n int up = minPathSumDFS(grid, i - 1, j);\n int left = minPathSumDFS(grid, i, j - 1);\n // Return the minimum path cost from the top-left to (i, j)\n return Math.min(left, up) + grid[i][j];\n}\n</code></pre> min_path_sum.cs<pre><code>[class]{min_path_sum}-[func]{MinPathSumDFS}\n</code></pre> min_path_sum.go<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.swift<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.js<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.ts<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.dart<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.rs<pre><code>[class]{}-[func]{min_path_sum_dfs}\n</code></pre> min_path_sum.c<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.kt<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> min_path_sum.rb<pre><code>[class]{}-[func]{min_path_sum_dfs}\n</code></pre> 
min_path_sum.zig<pre><code>[class]{}-[func]{minPathSumDFS}\n</code></pre> <p>Figure 14-14 shows the recursive tree rooted at \\(dp[2, 1]\\), which includes some overlapping subproblems, the number of which increases sharply as the size of the grid <code>grid</code> increases.</p> <p>Essentially, the reason for overlapping subproblems is: there are multiple paths to reach a certain cell from the top-left corner.</p> <p></p> <p> Figure 14-14 \u00a0 Brute-force search recursive tree </p> <p>Each state has two choices, down and right, and it takes \\(m + n - 2\\) steps to travel from the top-left corner to the bottom-right corner, so the worst-case time complexity is \\(O(2^{m + n})\\). Please note that this calculation method does not consider the situation near the grid edge, where only one choice remains upon reaching the grid edge, so the actual number of paths is somewhat smaller.</p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#2-method-2-memoized-search","title":"2. 
\u00a0 Method 2: Memoized search","text":"<p>We introduce a memo list <code>mem</code> of the same size as the grid <code>grid</code>, used to record the solutions to various subproblems, and prune overlapping subproblems:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_path_sum.py<pre><code>def min_path_sum_dfs_mem(\n grid: list[list[int]], mem: list[list[int]], i: int, j: int\n) -> int:\n \"\"\"Minimum path sum: Memoized search\"\"\"\n # If it's the top-left cell, terminate the search\n if i == 0 and j == 0:\n return grid[0][0]\n # If the row or column index is out of bounds, return a +\u221e cost\n if i < 0 or j < 0:\n return inf\n # If there is a record, return it\n if mem[i][j] != -1:\n return mem[i][j]\n # The minimum path cost from the left and top cells\n up = min_path_sum_dfs_mem(grid, mem, i - 1, j)\n left = min_path_sum_dfs_mem(grid, mem, i, j - 1)\n # Record and return the minimum path cost from the top-left to (i, j)\n mem[i][j] = min(left, up) + grid[i][j]\n return mem[i][j]\n</code></pre> min_path_sum.cpp<pre><code>/* Minimum path sum: Memoized search */\nint minPathSumDFSMem(vector<vector<int>> &grid, vector<vector<int>> &mem, int i, int j) {\n // If it's the top-left cell, terminate the search\n if (i == 0 && j == 0) {\n return grid[0][0];\n }\n // If the row or column index is out of bounds, return a +\u221e cost\n if (i < 0 || j < 0) {\n return INT_MAX;\n }\n // If there is a record, return it\n if (mem[i][j] != -1) {\n return mem[i][j];\n }\n // The minimum path cost from the left and top cells\n int up = minPathSumDFSMem(grid, mem, i - 1, j);\n int left = minPathSumDFSMem(grid, mem, i, j - 1);\n // Record and return the minimum path cost from the top-left to (i, j)\n mem[i][j] = min(left, up) != INT_MAX ? 
min(left, up) + grid[i][j] : INT_MAX;\n return mem[i][j];\n}\n</code></pre> min_path_sum.java<pre><code>/* Minimum path sum: Memoized search */\nint minPathSumDFSMem(int[][] grid, int[][] mem, int i, int j) {\n // If it's the top-left cell, terminate the search\n if (i == 0 && j == 0) {\n return grid[0][0];\n }\n // If the row or column index is out of bounds, return a +\u221e cost\n if (i < 0 || j < 0) {\n return Integer.MAX_VALUE;\n }\n // If there is a record, return it\n if (mem[i][j] != -1) {\n return mem[i][j];\n }\n // The minimum path cost from the left and top cells\n int up = minPathSumDFSMem(grid, mem, i - 1, j);\n int left = minPathSumDFSMem(grid, mem, i, j - 1);\n // Record and return the minimum path cost from the top-left to (i, j)\n mem[i][j] = Math.min(left, up) + grid[i][j];\n return mem[i][j];\n}\n</code></pre> min_path_sum.cs<pre><code>[class]{min_path_sum}-[func]{MinPathSumDFSMem}\n</code></pre> min_path_sum.go<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.swift<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.js<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.ts<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.dart<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.rs<pre><code>[class]{}-[func]{min_path_sum_dfs_mem}\n</code></pre> min_path_sum.c<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.kt<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> min_path_sum.rb<pre><code>[class]{}-[func]{min_path_sum_dfs_mem}\n</code></pre> min_path_sum.zig<pre><code>[class]{}-[func]{minPathSumDFSMem}\n</code></pre> <p>As shown in Figure 14-15, after introducing memoization, all subproblem solutions only need to be calculated once, so the time complexity depends on the total number of states, i.e., the grid size \\(O(nm)\\).</p> <p></p> <p> Figure 14-15 \u00a0 Memoized search recursive tree 
</p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#3-method-3-dynamic-programming","title":"3. \u00a0 Method 3: Dynamic programming","text":"<p>Implement the dynamic programming solution iteratively, code as shown below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_path_sum.py<pre><code>def min_path_sum_dp(grid: list[list[int]]) -> int:\n \"\"\"Minimum path sum: Dynamic programming\"\"\"\n n, m = len(grid), len(grid[0])\n # Initialize dp table\n dp = [[0] * m for _ in range(n)]\n dp[0][0] = grid[0][0]\n # State transition: first row\n for j in range(1, m):\n dp[0][j] = dp[0][j - 1] + grid[0][j]\n # State transition: first column\n for i in range(1, n):\n dp[i][0] = dp[i - 1][0] + grid[i][0]\n # State transition: the rest of the rows and columns\n for i in range(1, n):\n for j in range(1, m):\n dp[i][j] = min(dp[i][j - 1], dp[i - 1][j]) + grid[i][j]\n return dp[n - 1][m - 1]\n</code></pre> min_path_sum.cpp<pre><code>/* Minimum path sum: Dynamic programming */\nint minPathSumDP(vector<vector<int>> &grid) {\n int n = grid.size(), m = grid[0].size();\n // Initialize dp table\n vector<vector<int>> dp(n, vector<int>(m));\n dp[0][0] = grid[0][0];\n // State transition: first row\n for (int j = 1; j < m; j++) {\n dp[0][j] = dp[0][j - 1] + grid[0][j];\n }\n // State transition: first column\n for (int i = 1; i < n; i++) {\n dp[i][0] = dp[i - 1][0] + grid[i][0];\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i < n; i++) {\n for (int j = 1; j < m; j++) {\n dp[i][j] = min(dp[i][j - 1], dp[i - 1][j]) + grid[i][j];\n }\n }\n return dp[n - 1][m - 1];\n}\n</code></pre> min_path_sum.java<pre><code>/* Minimum path sum: Dynamic programming */\nint minPathSumDP(int[][] grid) {\n int n = grid.length, m = grid[0].length;\n // Initialize dp table\n int[][] dp = new int[n][m];\n dp[0][0] = grid[0][0];\n // State transition: first row\n for (int j = 1; j < m; j++) {\n dp[0][j] = dp[0][j - 1] + grid[0][j];\n }\n // State 
transition: first column\n for (int i = 1; i < n; i++) {\n dp[i][0] = dp[i - 1][0] + grid[i][0];\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i < n; i++) {\n for (int j = 1; j < m; j++) {\n dp[i][j] = Math.min(dp[i][j - 1], dp[i - 1][j]) + grid[i][j];\n }\n }\n return dp[n - 1][m - 1];\n}\n</code></pre> min_path_sum.cs<pre><code>[class]{min_path_sum}-[func]{MinPathSumDP}\n</code></pre> min_path_sum.go<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.swift<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.js<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.ts<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.dart<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.rs<pre><code>[class]{}-[func]{min_path_sum_dp}\n</code></pre> min_path_sum.c<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.kt<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> min_path_sum.rb<pre><code>[class]{}-[func]{min_path_sum_dp}\n</code></pre> min_path_sum.zig<pre><code>[class]{}-[func]{minPathSumDP}\n</code></pre> <p>Figure 14-16 shows the state transition process of the minimum path sum, which traverses the entire grid, so the time complexity is \\(O(nm)\\).</p> <p>The array <code>dp</code> is of size \\(n \\times m\\), therefore the space complexity is \\(O(nm)\\).</p> <1><2><3><4><5><6><7><8><9><10><11><12> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-16 \u00a0 Dynamic programming process of minimum path sum </p>"},{"location":"chapter_dynamic_programming/dp_solution_pipeline/#4-space-optimization","title":"4. 
\u00a0 Space optimization","text":"<p>Since each cell is only related to the cell to its left and above, we can use a single-row array to implement the \\(dp\\) table.</p> <p>Please note, since the array <code>dp</code> can only represent the state of one row, we cannot initialize the first column state in advance, but update it as we traverse each row:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig min_path_sum.py<pre><code>def min_path_sum_dp_comp(grid: list[list[int]]) -> int:\n \"\"\"Minimum path sum: Space-optimized dynamic programming\"\"\"\n n, m = len(grid), len(grid[0])\n # Initialize dp table\n dp = [0] * m\n # State transition: first row\n dp[0] = grid[0][0]\n for j in range(1, m):\n dp[j] = dp[j - 1] + grid[0][j]\n # State transition: the rest of the rows\n for i in range(1, n):\n # State transition: first column\n dp[0] = dp[0] + grid[i][0]\n # State transition: the rest of the columns\n for j in range(1, m):\n dp[j] = min(dp[j - 1], dp[j]) + grid[i][j]\n return dp[m - 1]\n</code></pre> min_path_sum.cpp<pre><code>/* Minimum path sum: Space-optimized dynamic programming */\nint minPathSumDPComp(vector<vector<int>> &grid) {\n int n = grid.size(), m = grid[0].size();\n // Initialize dp table\n vector<int> dp(m);\n // State transition: first row\n dp[0] = grid[0][0];\n for (int j = 1; j < m; j++) {\n dp[j] = dp[j - 1] + grid[0][j];\n }\n // State transition: the rest of the rows\n for (int i = 1; i < n; i++) {\n // State transition: first column\n dp[0] = dp[0] + grid[i][0];\n // State transition: the rest of the columns\n for (int j = 1; j < m; j++) {\n dp[j] = min(dp[j - 1], dp[j]) + grid[i][j];\n }\n }\n return dp[m - 1];\n}\n</code></pre> min_path_sum.java<pre><code>/* Minimum path sum: Space-optimized dynamic programming */\nint minPathSumDPComp(int[][] grid) {\n int n = grid.length, m = grid[0].length;\n // Initialize dp table\n int[] dp = new int[m];\n // State transition: first row\n dp[0] = grid[0][0];\n for (int j = 1; j < m; j++) {\n dp[j] 
= dp[j - 1] + grid[0][j];\n }\n // State transition: the rest of the rows\n for (int i = 1; i < n; i++) {\n // State transition: first column\n dp[0] = dp[0] + grid[i][0];\n // State transition: the rest of the columns\n for (int j = 1; j < m; j++) {\n dp[j] = Math.min(dp[j - 1], dp[j]) + grid[i][j];\n }\n }\n return dp[m - 1];\n}\n</code></pre> min_path_sum.cs<pre><code>[class]{min_path_sum}-[func]{MinPathSumDPComp}\n</code></pre> min_path_sum.go<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.swift<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.js<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.ts<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.dart<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.rs<pre><code>[class]{}-[func]{min_path_sum_dp_comp}\n</code></pre> min_path_sum.c<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.kt<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre> min_path_sum.rb<pre><code>[class]{}-[func]{min_path_sum_dp_comp}\n</code></pre> min_path_sum.zig<pre><code>[class]{}-[func]{minPathSumDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/edit_distance_problem/","title":"14.6 \u00a0 Edit distance problem","text":"<p>Edit distance, also known as Levenshtein distance, refers to the minimum number of modifications required to transform one string into another, commonly used in information retrieval and natural language processing to measure the similarity between two sequences.</p> <p>Question</p> <p>Given two strings \\(s\\) and \\(t\\), return the minimum number of edits required to transform \\(s\\) into \\(t\\).</p> <p>You can perform three types of edits on a string: insert a character, delete a character, or replace a character with any other character.</p> <p>As shown in Figure 14-27, transforming <code>kitten</code> into <code>sitting</code> requires 3 edits, 
including 2 replacements and 1 insertion; transforming <code>hello</code> into <code>algo</code> requires 3 steps, including 2 replacements and 1 deletion.</p> <p></p> <p> Figure 14-27 \u00a0 Example data of edit distance </p> <p>The edit distance problem can naturally be explained with a decision tree model. Strings correspond to tree nodes, and a round of decision (an edit operation) corresponds to an edge of the tree.</p> <p>As shown in Figure 14-28, with unrestricted operations, each node can derive many edges, each corresponding to one operation, meaning there are many possible paths to transform <code>hello</code> into <code>algo</code>.</p> <p>From the perspective of the decision tree, the goal of this problem is to find the shortest path between the node <code>hello</code> and the node <code>algo</code>.</p> <p></p> <p> Figure 14-28 \u00a0 Edit distance problem represented based on decision tree model </p>"},{"location":"chapter_dynamic_programming/edit_distance_problem/#1-dynamic-programming-approach","title":"1. \u00a0 Dynamic programming approach","text":"<p>Step one: Think about each round of decision, define the state, thus obtaining the \\(dp\\) table</p> <p>Each round of decision involves performing one edit operation on string \\(s\\).</p> <p>We aim to gradually reduce the problem size during the edit process, which enables us to construct subproblems. Let the lengths of strings \\(s\\) and \\(t\\) be \\(n\\) and \\(m\\), respectively. 
We first consider the tail characters of both strings \\(s[n-1]\\) and \\(t[m-1]\\).</p> <ul> <li>If \\(s[n-1]\\) and \\(t[m-1]\\) are the same, we can skip them and directly consider \\(s[n-2]\\) and \\(t[m-2]\\).</li> <li>If \\(s[n-1]\\) and \\(t[m-1]\\) are different, we need to perform one edit on \\(s\\) (insert, delete, replace) so that the tail characters of the two strings match, allowing us to skip them and consider a smaller-scale problem.</li> </ul> <p>Thus, each round of decision (edit operation) in string \\(s\\) changes the remaining characters in \\(s\\) and \\(t\\) to be matched. Therefore, the state is the \\(i\\)-th and \\(j\\)-th characters currently considered in \\(s\\) and \\(t\\), denoted as \\([i, j]\\).</p> <p>State \\([i, j]\\) corresponds to the subproblem: The minimum number of edits required to change the first \\(i\\) characters of \\(s\\) into the first \\(j\\) characters of \\(t\\).</p> <p>From this, we obtain a two-dimensional \\(dp\\) table of size \\((n+1) \\times (m+1)\\).</p> <p>Step two: Identify the optimal substructure and then derive the state transition equation</p> <p>Consider the subproblem \\(dp[i, j]\\), whose corresponding tail characters of the two strings are \\(s[i-1]\\) and \\(t[j-1]\\), which can be divided into three scenarios as shown in Figure 14-29.</p> <ol> <li>Add \\(t[j-1]\\) after \\(s[i-1]\\), then the remaining subproblem is \\(dp[i, j-1]\\).</li> <li>Delete \\(s[i-1]\\), then the remaining subproblem is \\(dp[i-1, j]\\).</li> <li>Replace \\(s[i-1]\\) with \\(t[j-1]\\), then the remaining subproblem is \\(dp[i-1, j-1]\\).</li> </ol> <p></p> <p> Figure 14-29 \u00a0 State transition of edit distance </p> <p>Based on the analysis above, we can determine the optimal substructure: The minimum number of edits for \\(dp[i, j]\\) is the minimum among \\(dp[i, j-1]\\), \\(dp[i-1, j]\\), and \\(dp[i-1, j-1]\\), plus the edit step \\(1\\). 
The corresponding state transition equation is:</p> \\[ dp[i, j] = \\min(dp[i, j-1], dp[i-1, j], dp[i-1, j-1]) + 1 \\] <p>Please note, when \\(s[i-1]\\) and \\(t[j-1]\\) are the same, no edit is required for the current character, in which case the state transition equation is:</p> \\[ dp[i, j] = dp[i-1, j-1] \\] <p>Step three: Determine the boundary conditions and the order of state transitions</p> <p>When both strings are empty, the number of edits is \\(0\\), i.e., \\(dp[0, 0] = 0\\). When \\(s\\) is empty but \\(t\\) is not, the minimum number of edits equals the length of \\(t\\), that is, the first row \\(dp[0, j] = j\\). When \\(s\\) is not empty but \\(t\\) is, the minimum number of edits equals the length of \\(s\\), that is, the first column \\(dp[i, 0] = i\\).</p> <p>Observing the state transition equation, solving \\(dp[i, j]\\) depends on the solutions to the left, above, and upper left, so a double loop can be used to traverse the entire \\(dp\\) table in the correct order.</p>"},{"location":"chapter_dynamic_programming/edit_distance_problem/#2-code-implementation","title":"2. 
\u00a0 Code implementation","text":"PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig edit_distance.py<pre><code>def edit_distance_dp(s: str, t: str) -> int:\n \"\"\"Edit distance: Dynamic programming\"\"\"\n n, m = len(s), len(t)\n dp = [[0] * (m + 1) for _ in range(n + 1)]\n # State transition: first row and first column\n for i in range(1, n + 1):\n dp[i][0] = i\n for j in range(1, m + 1):\n dp[0][j] = j\n # State transition: the rest of the rows and columns\n for i in range(1, n + 1):\n for j in range(1, m + 1):\n if s[i - 1] == t[j - 1]:\n # If the two characters are equal, skip these two characters\n dp[i][j] = dp[i - 1][j - 1]\n else:\n # The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[i][j] = min(dp[i][j - 1], dp[i - 1][j], dp[i - 1][j - 1]) + 1\n return dp[n][m]\n</code></pre> edit_distance.cpp<pre><code>/* Edit distance: Dynamic programming */\nint editDistanceDP(string s, string t) {\n int n = s.length(), m = t.length();\n vector<vector<int>> dp(n + 1, vector<int>(m + 1, 0));\n // State transition: first row and first column\n for (int i = 1; i <= n; i++) {\n dp[i][0] = i;\n }\n for (int j = 1; j <= m; j++) {\n dp[0][j] = j;\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i <= n; i++) {\n for (int j = 1; j <= m; j++) {\n if (s[i - 1] == t[j - 1]) {\n // If the two characters are equal, skip these two characters\n dp[i][j] = dp[i - 1][j - 1];\n } else {\n // The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[i][j] = min(min(dp[i][j - 1], dp[i - 1][j]), dp[i - 1][j - 1]) + 1;\n }\n }\n }\n return dp[n][m];\n}\n</code></pre> edit_distance.java<pre><code>/* Edit distance: Dynamic programming */\nint editDistanceDP(String s, String t) {\n int n = s.length(), m = t.length();\n int[][] dp = new int[n + 1][m + 1];\n // State transition: first row and first column\n for (int i = 1; i <= n; i++) {\n 
dp[i][0] = i;\n }\n for (int j = 1; j <= m; j++) {\n dp[0][j] = j;\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i <= n; i++) {\n for (int j = 1; j <= m; j++) {\n if (s.charAt(i - 1) == t.charAt(j - 1)) {\n // If the two characters are equal, skip these two characters\n dp[i][j] = dp[i - 1][j - 1];\n } else {\n // The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[i][j] = Math.min(Math.min(dp[i][j - 1], dp[i - 1][j]), dp[i - 1][j - 1]) + 1;\n }\n }\n }\n return dp[n][m];\n}\n</code></pre> edit_distance.cs<pre><code>[class]{edit_distance}-[func]{EditDistanceDP}\n</code></pre> edit_distance.go<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.swift<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.js<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.ts<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.dart<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.rs<pre><code>[class]{}-[func]{edit_distance_dp}\n</code></pre> edit_distance.c<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.kt<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> edit_distance.rb<pre><code>[class]{}-[func]{edit_distance_dp}\n</code></pre> edit_distance.zig<pre><code>[class]{}-[func]{editDistanceDP}\n</code></pre> <p>As shown in Figure 14-30, the process of state transition in the edit distance problem is very similar to that in the knapsack problem, which can be seen as filling a two-dimensional grid.</p> <1><2><3><4><5><6><7><8><9><10><11><12><13><14><15> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-30 \u00a0 Dynamic programming process of edit distance </p>"},{"location":"chapter_dynamic_programming/edit_distance_problem/#3-space-optimization","title":"3. 
\u00a0 Space optimization","text":"<p>\\(dp[i, j]\\) is derived from the solutions above \\(dp[i-1, j]\\), to the left \\(dp[i, j-1]\\), and to the upper left \\(dp[i-1, j-1]\\). Direct (forward) traversal would lose the upper-left solution \\(dp[i-1, j-1]\\), while reverse traversal cannot build \\(dp[i, j-1]\\) in advance, so neither traversal order is feasible.</p> <p>For this reason, we can use a variable <code>leftup</code> to temporarily store the solution from the upper left \\(dp[i-1, j-1]\\), thus only needing to consider the solutions to the left and above. This situation is similar to the unbounded knapsack problem, allowing for direct traversal. The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig edit_distance.py<pre><code>def edit_distance_dp_comp(s: str, t: str) -> int:\n \"\"\"Edit distance: Space-optimized dynamic programming\"\"\"\n n, m = len(s), len(t)\n dp = [0] * (m + 1)\n # State transition: first row\n for j in range(1, m + 1):\n dp[j] = j\n # State transition: the rest of the rows\n for i in range(1, n + 1):\n # State transition: first column\n leftup = dp[0] # Temporarily store dp[i-1, j-1]\n dp[0] += 1\n # State transition: the rest of the columns\n for j in range(1, m + 1):\n temp = dp[j]\n if s[i - 1] == t[j - 1]:\n # If the two characters are equal, skip these two characters\n dp[j] = leftup\n else:\n # The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[j] = min(dp[j - 1], dp[j], leftup) + 1\n leftup = temp # Update for the next round of dp[i-1, j-1]\n return dp[m]\n</code></pre> edit_distance.cpp<pre><code>/* Edit distance: Space-optimized dynamic programming */\nint editDistanceDPComp(string s, string t) {\n int n = s.length(), m = t.length();\n vector<int> dp(m + 1, 0);\n // State transition: first row\n for (int j = 1; j <= m; j++) {\n dp[j] = j;\n }\n // State transition: the rest of the rows\n for (int i = 1; i <= n; i++) {\n // State 
transition: first column\n int leftup = dp[0]; // Temporarily store dp[i-1, j-1]\n dp[0] = i;\n // State transition: the rest of the columns\n for (int j = 1; j <= m; j++) {\n int temp = dp[j];\n if (s[i - 1] == t[j - 1]) {\n // If the two characters are equal, skip these two characters\n dp[j] = leftup;\n } else {\n // The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[j] = min(min(dp[j - 1], dp[j]), leftup) + 1;\n }\n leftup = temp; // Update for the next round of dp[i-1, j-1]\n }\n }\n return dp[m];\n}\n</code></pre> edit_distance.java<pre><code>/* Edit distance: Space-optimized dynamic programming */\nint editDistanceDPComp(String s, String t) {\n int n = s.length(), m = t.length();\n int[] dp = new int[m + 1];\n // State transition: first row\n for (int j = 1; j <= m; j++) {\n dp[j] = j;\n }\n // State transition: the rest of the rows\n for (int i = 1; i <= n; i++) {\n // State transition: first column\n int leftup = dp[0]; // Temporarily store dp[i-1, j-1]\n dp[0] = i;\n // State transition: the rest of the columns\n for (int j = 1; j <= m; j++) {\n int temp = dp[j];\n if (s.charAt(i - 1) == t.charAt(j - 1)) {\n // If the two characters are equal, skip these two characters\n dp[j] = leftup;\n } else {\n // The minimum number of edits = the minimum number of edits from three operations (insert, remove, replace) + 1\n dp[j] = Math.min(Math.min(dp[j - 1], dp[j]), leftup) + 1;\n }\n leftup = temp; // Update for the next round of dp[i-1, j-1]\n }\n }\n return dp[m];\n}\n</code></pre> edit_distance.cs<pre><code>[class]{edit_distance}-[func]{EditDistanceDPComp}\n</code></pre> edit_distance.go<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.swift<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.js<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.ts<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> 
edit_distance.dart<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.rs<pre><code>[class]{}-[func]{edit_distance_dp_comp}\n</code></pre> edit_distance.c<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.kt<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre> edit_distance.rb<pre><code>[class]{}-[func]{edit_distance_dp_comp}\n</code></pre> edit_distance.zig<pre><code>[class]{}-[func]{editDistanceDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/intro_to_dynamic_programming/","title":"14.1 \u00a0 Introduction to dynamic programming","text":"<p>Dynamic programming is an important algorithmic paradigm that decomposes a problem into a series of smaller subproblems and stores the solutions of these subproblems to avoid redundant computations, thereby significantly improving time efficiency.</p> <p>In this section, we start with a classic problem, first presenting its brute-force backtracking solution, observing the overlapping subproblems contained within, and then gradually deriving a more efficient dynamic programming solution.</p> <p>Climbing stairs</p> <p>Given a staircase with \\(n\\) steps, where you can climb \\(1\\) or \\(2\\) steps at a time, how many different ways are there to reach the top?</p> <p>As shown in Figure 14-1, there are \\(3\\) ways to reach the top of a \\(3\\)-step staircase.</p> <p></p> <p> Figure 14-1 \u00a0 Number of ways to reach the 3rd step </p> <p>The goal of this problem is to determine the number of ways, so we can consider using backtracking to exhaust all possibilities. Specifically, imagine climbing stairs as a multi-round choice process: starting from the ground, choosing to go up \\(1\\) or \\(2\\) steps each round, adding one to the count of ways upon reaching the top of the stairs, and pruning the process when exceeding the top. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_backtrack.py<pre><code>def backtrack(choices: list[int], state: int, n: int, res: list[int]) -> int:\n \"\"\"Backtracking\"\"\"\n # When climbing to the nth step, add 1 to the number of solutions\n if state == n:\n res[0] += 1\n # Traverse all choices\n for choice in choices:\n # Pruning: do not allow climbing beyond the nth step\n if state + choice > n:\n continue\n # Attempt: make a choice, update the state\n backtrack(choices, state + choice, n, res)\n # Retract\n\ndef climbing_stairs_backtrack(n: int) -> int:\n \"\"\"Climbing stairs: Backtracking\"\"\"\n choices = [1, 2] # Can choose to climb up 1 step or 2 steps\n state = 0 # Start climbing from the 0th step\n res = [0] # Use res[0] to record the number of solutions\n backtrack(choices, state, n, res)\n return res[0]\n</code></pre> climbing_stairs_backtrack.cpp<pre><code>/* Backtracking */\nvoid backtrack(vector<int> &choices, int state, int n, vector<int> &res) {\n // When climbing to the nth step, add 1 to the number of solutions\n if (state == n)\n res[0]++;\n // Traverse all choices\n for (auto &choice : choices) {\n // Pruning: do not allow climbing beyond the nth step\n if (state + choice > n)\n continue;\n // Attempt: make a choice, update the state\n backtrack(choices, state + choice, n, res);\n // Retract\n }\n}\n\n/* Climbing stairs: Backtracking */\nint climbingStairsBacktrack(int n) {\n vector<int> choices = {1, 2}; // Can choose to climb up 1 step or 2 steps\n int state = 0; // Start climbing from the 0th step\n vector<int> res = {0}; // Use res[0] to record the number of solutions\n backtrack(choices, state, n, res);\n return res[0];\n}\n</code></pre> climbing_stairs_backtrack.java<pre><code>/* Backtracking */\nvoid backtrack(List<Integer> choices, int state, int n, List<Integer> res) {\n // When climbing to the nth step, add 1 to the number of solutions\n if (state == n)\n res.set(0, res.get(0) + 1);\n 
// Traverse all choices\n for (Integer choice : choices) {\n // Pruning: do not allow climbing beyond the nth step\n if (state + choice > n)\n continue;\n // Attempt: make a choice, update the state\n backtrack(choices, state + choice, n, res);\n // Retract\n }\n}\n\n/* Climbing stairs: Backtracking */\nint climbingStairsBacktrack(int n) {\n List<Integer> choices = Arrays.asList(1, 2); // Can choose to climb up 1 step or 2 steps\n int state = 0; // Start climbing from the 0th step\n List<Integer> res = new ArrayList<>();\n res.add(0); // Use res[0] to record the number of solutions\n backtrack(choices, state, n, res);\n return res.get(0);\n}\n</code></pre> climbing_stairs_backtrack.cs<pre><code>[class]{climbing_stairs_backtrack}-[func]{Backtrack}\n\n[class]{climbing_stairs_backtrack}-[func]{ClimbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.go<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.swift<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.js<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.ts<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.dart<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.rs<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbing_stairs_backtrack}\n</code></pre> climbing_stairs_backtrack.c<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.kt<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre> climbing_stairs_backtrack.rb<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbing_stairs_backtrack}\n</code></pre> 
climbing_stairs_backtrack.zig<pre><code>[class]{}-[func]{backtrack}\n\n[class]{}-[func]{climbingStairsBacktrack}\n</code></pre>"},{"location":"chapter_dynamic_programming/intro_to_dynamic_programming/#1411-method-1-brute-force-search","title":"14.1.1 \u00a0 Method 1: Brute force search","text":"<p>Backtracking algorithms do not explicitly decompose the problem but treat solving the problem as a series of decision steps, searching for all possible solutions through exploration and pruning.</p> <p>We can try to analyze this problem from the perspective of decomposition. If we let \\(dp[i]\\) be the number of ways to reach the \\(i^{th}\\) step, then \\(dp[i]\\) is the original problem, and its subproblems include:</p> \\[ dp[i-1], dp[i-2], \\dots, dp[2], dp[1] \\] <p>Since each round can only advance \\(1\\) or \\(2\\) steps, when we stand on the \\(i^{th}\\) step, the previous round must have been either on the \\(i-1^{th}\\) or the \\(i-2^{th}\\) step. In other words, we can only step from the \\(i-1^{th}\\) or the \\(i-2^{th}\\) step to the \\(i^{th}\\) step.</p> <p>This leads to an important conclusion: the number of ways to reach the \\(i-1^{th}\\) step plus the number of ways to reach the \\(i-2^{th}\\) step equals the number of ways to reach the \\(i^{th}\\) step. The formula is as follows:</p> \\[ dp[i] = dp[i-1] + dp[i-2] \\] <p>This means that in the stair climbing problem, there is a recursive relationship between the subproblems: the solution to the original problem can be constructed from the solutions to the subproblems. Figure 14-2 shows this recursive relationship.</p> <p></p> <p> Figure 14-2 \u00a0 Recursive relationship of solution counts </p> <p>We can obtain the brute force search solution according to the recursive formula. 
Starting with \\(dp[n]\\), recursively decompose a larger problem into the sum of two smaller problems, until reaching the smallest subproblems \\(dp[1]\\) and \\(dp[2]\\) where the solutions are known, with \\(dp[1] = 1\\) and \\(dp[2] = 2\\), representing \\(1\\) and \\(2\\) ways to climb to the first and second steps, respectively.</p> <p>Observe the following code, which, like standard backtracking code, belongs to depth-first search but is more concise:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_dfs.py<pre><code>def dfs(i: int) -> int:\n \"\"\"Search\"\"\"\n # Known dp[1] and dp[2], return them\n if i == 1 or i == 2:\n return i\n # dp[i] = dp[i-1] + dp[i-2]\n count = dfs(i - 1) + dfs(i - 2)\n return count\n\ndef climbing_stairs_dfs(n: int) -> int:\n \"\"\"Climbing stairs: Search\"\"\"\n return dfs(n)\n</code></pre> climbing_stairs_dfs.cpp<pre><code>/* Search */\nint dfs(int i) {\n // Known dp[1] and dp[2], return them\n if (i == 1 || i == 2)\n return i;\n // dp[i] = dp[i-1] + dp[i-2]\n int count = dfs(i - 1) + dfs(i - 2);\n return count;\n}\n\n/* Climbing stairs: Search */\nint climbingStairsDFS(int n) {\n return dfs(n);\n}\n</code></pre> climbing_stairs_dfs.java<pre><code>/* Search */\nint dfs(int i) {\n // Known dp[1] and dp[2], return them\n if (i == 1 || i == 2)\n return i;\n // dp[i] = dp[i-1] + dp[i-2]\n int count = dfs(i - 1) + dfs(i - 2);\n return count;\n}\n\n/* Climbing stairs: Search */\nint climbingStairsDFS(int n) {\n return dfs(n);\n}\n</code></pre> climbing_stairs_dfs.cs<pre><code>[class]{climbing_stairs_dfs}-[func]{DFS}\n\n[class]{climbing_stairs_dfs}-[func]{ClimbingStairsDFS}\n</code></pre> climbing_stairs_dfs.go<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.swift<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.js<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> 
climbing_stairs_dfs.ts<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.dart<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.rs<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbing_stairs_dfs}\n</code></pre> climbing_stairs_dfs.c<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.kt<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> climbing_stairs_dfs.rb<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbing_stairs_dfs}\n</code></pre> climbing_stairs_dfs.zig<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFS}\n</code></pre> <p>Figure 14-3 shows the recursive tree formed by brute force search. For the problem \\(dp[n]\\), the depth of its recursive tree is \\(n\\), with a time complexity of \\(O(2^n)\\). Exponential growth is explosive: even a moderately large \\(n\\) results in a very long wait.</p> <p></p> <p> Figure 14-3 \u00a0 Recursive tree for climbing stairs </p> <p>Observing Figure 14-3, the exponential time complexity is caused by 'overlapping subproblems'. For example, \\(dp[9]\\) is decomposed into \\(dp[8]\\) and \\(dp[7]\\), \\(dp[8]\\) into \\(dp[7]\\) and \\(dp[6]\\), both containing the subproblem \\(dp[7]\\).</p> <p>Thus, subproblems include even smaller overlapping subproblems, endlessly. The vast majority of computational resources are wasted on these overlapping subproblems.</p>"},{"location":"chapter_dynamic_programming/intro_to_dynamic_programming/#1412-method-2-memoized-search","title":"14.1.2 \u00a0 Method 2: Memoized search","text":"<p>To enhance algorithm efficiency, we hope that all overlapping subproblems are calculated only once. 
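To make the overlap concrete, the following sketch tallies how many times each subproblem is solved by the plain search. The call counter is our own instrumentation for illustration, not code from this book; the recursive logic matches the `dfs` function above.

```python
from collections import Counter

calls = Counter()  # hypothetical instrumentation: counts how often each dp[i] is solved

def dfs(i: int) -> int:
    """Search, identical to the code above except for the call counter"""
    calls[i] += 1
    # Known dp[1] and dp[2], return them
    if i == 1 or i == 2:
        return i
    # dp[i] = dp[i-1] + dp[i-2]
    return dfs(i - 1) + dfs(i - 2)

res = dfs(9)  # dp[9] = 55
# dp[7] is required by both dp[9] and dp[8], so it is solved twice;
# smaller subproblems such as dp[3] are recomputed far more often
```

Running this shows `calls[7] == 2` while `calls[3] == 13`: exactly the redundant work that memoization eliminates.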
For this purpose, we declare an array <code>mem</code> to record the solution of each subproblem, and prune overlapping subproblems during the search process.</p> <ol> <li>When \\(dp[i]\\) is calculated for the first time, we record it in <code>mem[i]</code> for later use.</li> <li>When \\(dp[i]\\) needs to be calculated again, we can directly retrieve the result from <code>mem[i]</code>, thus avoiding redundant calculations of that subproblem.</li> </ol> <p>The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_dfs_mem.py<pre><code>def dfs(i: int, mem: list[int]) -> int:\n \"\"\"Memoized search\"\"\"\n # Known dp[1] and dp[2], return them\n if i == 1 or i == 2:\n return i\n # If there is a record for dp[i], return it\n if mem[i] != -1:\n return mem[i]\n # dp[i] = dp[i-1] + dp[i-2]\n count = dfs(i - 1, mem) + dfs(i - 2, mem)\n # Record dp[i]\n mem[i] = count\n return count\n\ndef climbing_stairs_dfs_mem(n: int) -> int:\n \"\"\"Climbing stairs: Memoized search\"\"\"\n # mem[i] records the total number of solutions for climbing to the ith step, -1 means no record\n mem = [-1] * (n + 1)\n return dfs(n, mem)\n</code></pre> climbing_stairs_dfs_mem.cpp<pre><code>/* Memoized search */\nint dfs(int i, vector<int> &mem) {\n // Known dp[1] and dp[2], return them\n if (i == 1 || i == 2)\n return i;\n // If there is a record for dp[i], return it\n if (mem[i] != -1)\n return mem[i];\n // dp[i] = dp[i-1] + dp[i-2]\n int count = dfs(i - 1, mem) + dfs(i - 2, mem);\n // Record dp[i]\n mem[i] = count;\n return count;\n}\n\n/* Climbing stairs: Memoized search */\nint climbingStairsDFSMem(int n) {\n // mem[i] records the total number of solutions for climbing to the ith step, -1 means no record\n vector<int> mem(n + 1, -1);\n return dfs(n, mem);\n}\n</code></pre> climbing_stairs_dfs_mem.java<pre><code>/* Memoized search */\nint dfs(int i, int[] mem) {\n // Known dp[1] and dp[2], return them\n if (i == 1 || i == 2)\n return i;\n // If there is a 
record for dp[i], return it\n if (mem[i] != -1)\n return mem[i];\n // dp[i] = dp[i-1] + dp[i-2]\n int count = dfs(i - 1, mem) + dfs(i - 2, mem);\n // Record dp[i]\n mem[i] = count;\n return count;\n}\n\n/* Climbing stairs: Memoized search */\nint climbingStairsDFSMem(int n) {\n // mem[i] records the total number of solutions for climbing to the ith step, -1 means no record\n int[] mem = new int[n + 1];\n Arrays.fill(mem, -1);\n return dfs(n, mem);\n}\n</code></pre> climbing_stairs_dfs_mem.cs<pre><code>[class]{climbing_stairs_dfs_mem}-[func]{DFS}\n\n[class]{climbing_stairs_dfs_mem}-[func]{ClimbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.go<pre><code>[class]{}-[func]{dfsMem}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.swift<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.js<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.ts<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.dart<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.rs<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbing_stairs_dfs_mem}\n</code></pre> climbing_stairs_dfs_mem.c<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.kt<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> climbing_stairs_dfs_mem.rb<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbing_stairs_dfs_mem}\n</code></pre> climbing_stairs_dfs_mem.zig<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{climbingStairsDFSMem}\n</code></pre> <p>Observe Figure 14-4, after memoization, all overlapping subproblems need to be calculated only once, optimizing the time complexity to \\(O(n)\\), which is a significant leap.</p> <p></p> 
<p> Figure 14-4 \u00a0 Recursive tree with memoized search </p>"},{"location":"chapter_dynamic_programming/intro_to_dynamic_programming/#1413-method-3-dynamic-programming","title":"14.1.3 \u00a0 Method 3: Dynamic programming","text":"<p>Memoized search is a 'top-down' method: we start with the original problem (root node), recursively decompose larger subproblems into smaller ones until the solutions to the smallest known subproblems (leaf nodes) are reached. Subsequently, by backtracking, we collect the solutions of the subproblems, constructing the solution to the original problem.</p> <p>On the contrary, dynamic programming is a 'bottom-up' method: starting with the solutions to the smallest subproblems, iteratively construct the solutions to larger subproblems until the original problem is solved.</p> <p>Since dynamic programming does not include a backtracking process, it only requires looping iteration to implement, without needing recursion. In the following code, we initialize an array <code>dp</code> to store the solutions to the subproblems, serving the same recording function as the array <code>mem</code> in memoized search:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_dp.py<pre><code>def climbing_stairs_dp(n: int) -> int:\n \"\"\"Climbing stairs: Dynamic programming\"\"\"\n if n == 1 or n == 2:\n return n\n # Initialize dp table, used to store subproblem solutions\n dp = [0] * (n + 1)\n # Initial state: preset the smallest subproblem solution\n dp[1], dp[2] = 1, 2\n # State transition: gradually solve larger subproblems from smaller ones\n for i in range(3, n + 1):\n dp[i] = dp[i - 1] + dp[i - 2]\n return dp[n]\n</code></pre> climbing_stairs_dp.cpp<pre><code>/* Climbing stairs: Dynamic programming */\nint climbingStairsDP(int n) {\n if (n == 1 || n == 2)\n return n;\n // Initialize dp table, used to store subproblem solutions\n vector<int> dp(n + 1);\n // Initial state: preset the smallest subproblem solution\n dp[1] = 1;\n dp[2] 
= 2;\n // State transition: gradually solve larger subproblems from smaller ones\n for (int i = 3; i <= n; i++) {\n dp[i] = dp[i - 1] + dp[i - 2];\n }\n return dp[n];\n}\n</code></pre> climbing_stairs_dp.java<pre><code>/* Climbing stairs: Dynamic programming */\nint climbingStairsDP(int n) {\n if (n == 1 || n == 2)\n return n;\n // Initialize dp table, used to store subproblem solutions\n int[] dp = new int[n + 1];\n // Initial state: preset the smallest subproblem solution\n dp[1] = 1;\n dp[2] = 2;\n // State transition: gradually solve larger subproblems from smaller ones\n for (int i = 3; i <= n; i++) {\n dp[i] = dp[i - 1] + dp[i - 2];\n }\n return dp[n];\n}\n</code></pre> climbing_stairs_dp.cs<pre><code>[class]{climbing_stairs_dp}-[func]{ClimbingStairsDP}\n</code></pre> climbing_stairs_dp.go<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.swift<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.js<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.ts<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.dart<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.rs<pre><code>[class]{}-[func]{climbing_stairs_dp}\n</code></pre> climbing_stairs_dp.c<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.kt<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> climbing_stairs_dp.rb<pre><code>[class]{}-[func]{climbing_stairs_dp}\n</code></pre> climbing_stairs_dp.zig<pre><code>[class]{}-[func]{climbingStairsDP}\n</code></pre> <p>Figure 14-5 simulates the execution process of the above code.</p> <p></p> <p> Figure 14-5 \u00a0 Dynamic programming process for climbing stairs </p> <p>Like the backtracking algorithm, dynamic programming also uses the concept of \"states\" to represent specific stages in problem solving, each state corresponding to a subproblem and its local optimal solution. 
For example, the state of the climbing stairs problem is defined as the current step number \\(i\\).</p> <p>Based on the above content, we can summarize the commonly used terminology in dynamic programming.</p> <ul> <li>The array <code>dp</code> is referred to as the DP table, with \\(dp[i]\\) representing the solution to the subproblem corresponding to state \\(i\\).</li> <li>The states corresponding to the smallest subproblems (steps \\(1\\) and \\(2\\)) are called initial states.</li> <li>The recursive formula \\(dp[i] = dp[i-1] + dp[i-2]\\) is called the state transition equation.</li> </ul>"},{"location":"chapter_dynamic_programming/intro_to_dynamic_programming/#1414-space-optimization","title":"14.1.4 \u00a0 Space optimization","text":"<p>Observant readers may have noticed that since \\(dp[i]\\) is only related to \\(dp[i-1]\\) and \\(dp[i-2]\\), we do not need to use an array <code>dp</code> to store the solutions to all subproblems, but can simply use two variables to progress iteratively. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig climbing_stairs_dp.py<pre><code>def climbing_stairs_dp_comp(n: int) -> int:\n \"\"\"Climbing stairs: Space-optimized dynamic programming\"\"\"\n if n == 1 or n == 2:\n return n\n a, b = 1, 2\n for _ in range(3, n + 1):\n a, b = b, a + b\n return b\n</code></pre> climbing_stairs_dp.cpp<pre><code>/* Climbing stairs: Space-optimized dynamic programming */\nint climbingStairsDPComp(int n) {\n if (n == 1 || n == 2)\n return n;\n int a = 1, b = 2;\n for (int i = 3; i <= n; i++) {\n int tmp = b;\n b = a + b;\n a = tmp;\n }\n return b;\n}\n</code></pre> climbing_stairs_dp.java<pre><code>/* Climbing stairs: Space-optimized dynamic programming */\nint climbingStairsDPComp(int n) {\n if (n == 1 || n == 2)\n return n;\n int a = 1, b = 2;\n for (int i = 3; i <= n; i++) {\n int tmp = b;\n b = a + b;\n a = tmp;\n }\n return b;\n}\n</code></pre> climbing_stairs_dp.cs<pre><code>[class]{climbing_stairs_dp}-[func]{ClimbingStairsDPComp}\n</code></pre> climbing_stairs_dp.go<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.swift<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.js<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.ts<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.dart<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.rs<pre><code>[class]{}-[func]{climbing_stairs_dp_comp}\n</code></pre> climbing_stairs_dp.c<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.kt<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> climbing_stairs_dp.rb<pre><code>[class]{}-[func]{climbing_stairs_dp_comp}\n</code></pre> climbing_stairs_dp.zig<pre><code>[class]{}-[func]{climbingStairsDPComp}\n</code></pre> <p>Observing the above code, since the space occupied by the array <code>dp</code> is eliminated, the 
space complexity is reduced from \\(O(n)\\) to \\(O(1)\\).</p> <p>In dynamic programming problems, the current state is often only related to a limited number of previous states, allowing us to retain only the necessary states and save memory space by \"dimension reduction\". This space optimization technique is known as 'rolling variable' or 'rolling array'.</p>"},{"location":"chapter_dynamic_programming/knapsack_problem/","title":"14.4 \u00a0 0-1 Knapsack problem","text":"<p>The knapsack problem is an excellent introductory problem for dynamic programming and is the most common type of problem in dynamic programming. It has many variants, such as the 0-1 knapsack problem, the unbounded knapsack problem, and the multiple knapsack problem.</p> <p>In this section, we will first solve the most common 0-1 knapsack problem.</p> <p>Question</p> <p>Given \\(n\\) items, where the weight of the \\(i\\)-th item is \\(wgt[i-1]\\) and its value is \\(val[i-1]\\), and a knapsack with a capacity of \\(cap\\). Each item can be chosen only once. 
What is the maximum value of items that can be placed in the knapsack under the capacity limit?</p> <p>As shown in Figure 14-17, since the item number \\(i\\) starts counting from 1 while the array index starts from 0, the weight of item \\(i\\) corresponds to \\(wgt[i-1]\\) and the value corresponds to \\(val[i-1]\\).</p> <p></p> <p> Figure 14-17 \u00a0 Example data of the 0-1 knapsack </p> <p>We can consider the 0-1 knapsack problem as a process consisting of \\(n\\) rounds of decisions, where for each item there are two decisions: not to put it in or to put it in, so the problem fits the decision tree model.</p> <p>The objective of this problem is to \"maximize the value of the items that can be put in the knapsack under the limited capacity,\" which suggests it is a dynamic programming problem.</p> <p>First step: Think about each round of decisions, define states, thereby obtaining the \\(dp\\) table</p> <p>For each item, if not put into the knapsack, the capacity remains unchanged; if put in, the capacity is reduced. 
From this, the state definition can be obtained: the current item number \\(i\\) and knapsack capacity \\(c\\), denoted as \\([i, c]\\).</p> <p>State \\([i, c]\\) corresponds to the sub-problem: the maximum value of the first \\(i\\) items in a knapsack of capacity \\(c\\), denoted as \\(dp[i, c]\\).</p> <p>The solution we are looking for is \\(dp[n, cap]\\), so we need a two-dimensional \\(dp\\) table of size \\((n+1) \\times (cap+1)\\).</p> <p>Second step: Identify the optimal substructure, then derive the state transition equation</p> <p>After making the decision for item \\(i\\), what remains is the sub-problem of decisions for the first \\(i-1\\) items, which can be divided into two cases.</p> <ul> <li>Not putting item \\(i\\): The knapsack capacity remains unchanged, state changes to \\([i-1, c]\\).</li> <li>Putting item \\(i\\): The knapsack capacity decreases by \\(wgt[i-1]\\), and the value increases by \\(val[i-1]\\), state changes to \\([i-1, c-wgt[i-1]]\\).</li> </ul> <p>The above analysis reveals the optimal substructure of this problem: the maximum value \\(dp[i, c]\\) is equal to the larger value of the two schemes of not putting item \\(i\\) and putting item \\(i\\). 
From this, the state transition equation can be derived:</p> \\[ dp[i, c] = \\max(dp[i-1, c], dp[i-1, c - wgt[i-1]] + val[i-1]) \\] <p>It is important to note that if the current item's weight \\(wgt[i - 1]\\) exceeds the remaining knapsack capacity \\(c\\), then the only option is not to put it in the knapsack.</p> <p>Third step: Determine the boundary conditions and the order of state transitions</p> <p>When there are no items or the knapsack capacity is \\(0\\), the maximum value is \\(0\\), i.e., the first column \\(dp[i, 0]\\) and the first row \\(dp[0, c]\\) are both equal to \\(0\\).</p> <p>The current state \\([i, c]\\) transitions from the state directly above \\([i-1, c]\\) and the state to the upper left \\([i-1, c-wgt[i-1]]\\), thus, the entire \\(dp\\) table is traversed in order through two layers of loops.</p> <p>Following the above analysis, we will next implement the solutions in the order of brute force search, memoized search, and dynamic programming.</p>"},{"location":"chapter_dynamic_programming/knapsack_problem/#1-method-one-brute-force-search","title":"1. 
\u00a0 Method one: Brute force search","text":"<p>The search code includes the following elements.</p> <ul> <li>Recursive parameters: State \\([i, c]\\).</li> <li>Return value: Solution to the sub-problem \\(dp[i, c]\\).</li> <li>Termination condition: When the item number is out of bounds \\(i = 0\\) or the remaining capacity of the knapsack is \\(0\\), terminate the recursion and return the value \\(0\\).</li> <li>Pruning: If the current item's weight exceeds the remaining capacity of the knapsack, the only option is not to put it in the knapsack.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig knapsack.py<pre><code>def knapsack_dfs(wgt: list[int], val: list[int], i: int, c: int) -> int:\n \"\"\"0-1 Knapsack: Brute force search\"\"\"\n # If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if i == 0 or c == 0:\n return 0\n # If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if wgt[i - 1] > c:\n return knapsack_dfs(wgt, val, i - 1, c)\n # Calculate the maximum value of not putting in and putting in item i\n no = knapsack_dfs(wgt, val, i - 1, c)\n yes = knapsack_dfs(wgt, val, i - 1, c - wgt[i - 1]) + val[i - 1]\n # Return the greater value of the two options\n return max(no, yes)\n</code></pre> knapsack.cpp<pre><code>/* 0-1 Knapsack: Brute force search */\nint knapsackDFS(vector<int> &wgt, vector<int> &val, int i, int c) {\n // If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if (i == 0 || c == 0) {\n return 0;\n }\n // If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if (wgt[i - 1] > c) {\n return knapsackDFS(wgt, val, i - 1, c);\n }\n // Calculate the maximum value of not putting in and putting in item i\n int no = knapsackDFS(wgt, val, i - 1, c);\n int yes = knapsackDFS(wgt, val, i - 1, c - wgt[i - 1]) + val[i - 1];\n // Return the greater value of the two options\n return max(no, yes);\n}\n</code></pre> 
knapsack.java<pre><code>/* 0-1 Knapsack: Brute force search */\nint knapsackDFS(int[] wgt, int[] val, int i, int c) {\n // If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if (i == 0 || c == 0) {\n return 0;\n }\n // If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if (wgt[i - 1] > c) {\n return knapsackDFS(wgt, val, i - 1, c);\n }\n // Calculate the maximum value of not putting in and putting in item i\n int no = knapsackDFS(wgt, val, i - 1, c);\n int yes = knapsackDFS(wgt, val, i - 1, c - wgt[i - 1]) + val[i - 1];\n // Return the greater value of the two options\n return Math.max(no, yes);\n}\n</code></pre> knapsack.cs<pre><code>[class]{knapsack}-[func]{KnapsackDFS}\n</code></pre> knapsack.go<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.swift<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.js<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.ts<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.dart<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.rs<pre><code>[class]{}-[func]{knapsack_dfs}\n</code></pre> knapsack.c<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.kt<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> knapsack.rb<pre><code>[class]{}-[func]{knapsack_dfs}\n</code></pre> knapsack.zig<pre><code>[class]{}-[func]{knapsackDFS}\n</code></pre> <p>As shown in Figure 14-18, since each item generates two search branches of not selecting and selecting, the time complexity is \\(O(2^n)\\).</p> <p>Observing the recursive tree, it is easy to see that there are overlapping sub-problems, such as \\(dp[1, 10]\\), etc. 
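To make these overlaps concrete, a small instrumented sketch can tally how often each state \([i, c]\) is visited by the brute-force search. The visit counter and the sample data below are our own illustrative additions, not taken from the book's figures; the search logic mirrors `knapsackDFS` above.

```python
from collections import Counter

visits = Counter()  # hypothetical instrumentation: counts visits to each state [i, c]

def knapsack_dfs(wgt, val, i, c):
    """0-1 knapsack brute-force search, with a visit counter added"""
    visits[(i, c)] += 1
    # If all items have been chosen or the knapsack has no remaining capacity, return value 0
    if i == 0 or c == 0:
        return 0
    # If exceeding the knapsack capacity, can only choose not to put it in the knapsack
    if wgt[i - 1] > c:
        return knapsack_dfs(wgt, val, i - 1, c)
    # Return the greater value of not putting in and putting in item i
    no = knapsack_dfs(wgt, val, i - 1, c)
    yes = knapsack_dfs(wgt, val, i - 1, c - wgt[i - 1]) + val[i - 1]
    return max(no, yes)

# sample data chosen for illustration (not the book's figure data)
wgt, val, cap = [10, 20, 30, 40, 50], [50, 120, 150, 210, 240], 50
res = knapsack_dfs(wgt, val, len(wgt), cap)  # best value: 270 (items weighing 20 and 30)
# some states are reached along more than one branch and therefore solved repeatedly
```

Checking `max(visits.values())` after the run confirms that certain states are visited more than once, which is the redundancy memoization removes.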
When there are many items and the knapsack capacity is large, especially when there are many items of the same weight, the number of overlapping sub-problems will increase significantly.</p> <p></p> <p> Figure 14-18 \u00a0 The brute force search recursive tree of the 0-1 knapsack problem </p>"},{"location":"chapter_dynamic_programming/knapsack_problem/#2-method-two-memoized-search","title":"2. \u00a0 Method two: Memoized search","text":"<p>To ensure that overlapping sub-problems are only calculated once, we use a memoization list <code>mem</code> to record the solutions to sub-problems, where <code>mem[i][c]</code> corresponds to \\(dp[i, c]\\).</p> <p>After introducing memoization, the time complexity depends on the number of sub-problems, which is \\(O(n \\times cap)\\). The implementation code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig knapsack.py<pre><code>def knapsack_dfs_mem(\n wgt: list[int], val: list[int], mem: list[list[int]], i: int, c: int\n) -> int:\n \"\"\"0-1 Knapsack: Memoized search\"\"\"\n # If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if i == 0 or c == 0:\n return 0\n # If there is a record, return it\n if mem[i][c] != -1:\n return mem[i][c]\n # If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if wgt[i - 1] > c:\n return knapsack_dfs_mem(wgt, val, mem, i - 1, c)\n # Calculate the maximum value of not putting in and putting in item i\n no = knapsack_dfs_mem(wgt, val, mem, i - 1, c)\n yes = knapsack_dfs_mem(wgt, val, mem, i - 1, c - wgt[i - 1]) + val[i - 1]\n # Record and return the greater value of the two options\n mem[i][c] = max(no, yes)\n return mem[i][c]\n</code></pre> knapsack.cpp<pre><code>/* 0-1 Knapsack: Memoized search */\nint knapsackDFSMem(vector<int> &wgt, vector<int> &val, vector<vector<int>> &mem, int i, int c) {\n // If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if (i == 0 || c == 0) 
{\n return 0;\n }\n // If there is a record, return it\n if (mem[i][c] != -1) {\n return mem[i][c];\n }\n // If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if (wgt[i - 1] > c) {\n return knapsackDFSMem(wgt, val, mem, i - 1, c);\n }\n // Calculate the maximum value of not putting in and putting in item i\n int no = knapsackDFSMem(wgt, val, mem, i - 1, c);\n int yes = knapsackDFSMem(wgt, val, mem, i - 1, c - wgt[i - 1]) + val[i - 1];\n // Record and return the greater value of the two options\n mem[i][c] = max(no, yes);\n return mem[i][c];\n}\n</code></pre> knapsack.java<pre><code>/* 0-1 Knapsack: Memoized search */\nint knapsackDFSMem(int[] wgt, int[] val, int[][] mem, int i, int c) {\n // If all items have been chosen or the knapsack has no remaining capacity, return value 0\n if (i == 0 || c == 0) {\n return 0;\n }\n // If there is a record, return it\n if (mem[i][c] != -1) {\n return mem[i][c];\n }\n // If exceeding the knapsack capacity, can only choose not to put it in the knapsack\n if (wgt[i - 1] > c) {\n return knapsackDFSMem(wgt, val, mem, i - 1, c);\n }\n // Calculate the maximum value of not putting in and putting in item i\n int no = knapsackDFSMem(wgt, val, mem, i - 1, c);\n int yes = knapsackDFSMem(wgt, val, mem, i - 1, c - wgt[i - 1]) + val[i - 1];\n // Record and return the greater value of the two options\n mem[i][c] = Math.max(no, yes);\n return mem[i][c];\n}\n</code></pre> knapsack.cs<pre><code>[class]{knapsack}-[func]{KnapsackDFSMem}\n</code></pre> knapsack.go<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.swift<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.js<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.ts<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.dart<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.rs<pre><code>[class]{}-[func]{knapsack_dfs_mem}\n</code></pre> 
knapsack.c<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.kt<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> knapsack.rb<pre><code>[class]{}-[func]{knapsack_dfs_mem}\n</code></pre> knapsack.zig<pre><code>[class]{}-[func]{knapsackDFSMem}\n</code></pre> <p>Figure 14-19 shows the search branches that are pruned in memoized search.</p> <p></p> <p> Figure 14-19 \u00a0 The memoized search recursive tree of the 0-1 knapsack problem </p>"},{"location":"chapter_dynamic_programming/knapsack_problem/#3-method-three-dynamic-programming","title":"3. \u00a0 Method three: Dynamic programming","text":"<p>Dynamic programming essentially involves filling the \\(dp\\) table during the state transition; the code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig knapsack.py<pre><code>def knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:\n \"\"\"0-1 Knapsack: Dynamic programming\"\"\"\n n = len(wgt)\n # Initialize dp table\n dp = [[0] * (cap + 1) for _ in range(n + 1)]\n # State transition\n for i in range(1, n + 1):\n for c in range(1, cap + 1):\n if wgt[i - 1] > c:\n # If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c]\n else:\n # The greater value between not choosing and choosing item i\n dp[i][c] = max(dp[i - 1][c], dp[i - 1][c - wgt[i - 1]] + val[i - 1])\n return dp[n][cap]\n</code></pre> knapsack.cpp<pre><code>/* 0-1 Knapsack: Dynamic programming */\nint knapsackDP(vector<int> &wgt, vector<int> &val, int cap) {\n int n = wgt.size();\n // Initialize dp table\n vector<vector<int>> dp(n + 1, vector<int>(cap + 1, 0));\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[i][c] = max(dp[i - 1][c], dp[i - 1][c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return 
dp[n][cap];\n}\n</code></pre> knapsack.java<pre><code>/* 0-1 Knapsack: Dynamic programming */\nint knapsackDP(int[] wgt, int[] val, int cap) {\n int n = wgt.length;\n // Initialize dp table\n int[][] dp = new int[n + 1][cap + 1];\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[i][c] = Math.max(dp[i - 1][c], dp[i - 1][c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[n][cap];\n}\n</code></pre> knapsack.cs<pre><code>[class]{knapsack}-[func]{KnapsackDP}\n</code></pre> knapsack.go<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.swift<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.js<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.ts<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.dart<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.rs<pre><code>[class]{}-[func]{knapsack_dp}\n</code></pre> knapsack.c<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.kt<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> knapsack.rb<pre><code>[class]{}-[func]{knapsack_dp}\n</code></pre> knapsack.zig<pre><code>[class]{}-[func]{knapsackDP}\n</code></pre> <p>As shown in Figure 14-20, both the time complexity and space complexity are determined by the size of the array <code>dp</code>, i.e., \\(O(n \\times cap)\\).</p> <1><2><3><4><5><6><7><8><9><10><11><12><13><14> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-20 \u00a0 The dynamic programming process of the 0-1 knapsack problem </p>"},{"location":"chapter_dynamic_programming/knapsack_problem/#4-space-optimization","title":"4. 
\u00a0 Space optimization","text":"<p>Since each state is only related to the state in the row above it, we can use two arrays to roll forward, reducing the space complexity from \\(O(n \\times cap)\\) to \\(O(cap)\\).</p> <p>Thinking further, can we use just one array to achieve space optimization? It can be observed that each state is transferred from the cell directly above or from the upper left cell. If there is only one array, when starting to traverse the \\(i\\)-th row, that array still stores the state of row \\(i-1\\).</p> <ul> <li>If using normal order traversal, then when traversing to \\(dp[i, j]\\), the values from the upper left \\(dp[i-1, 1]\\) ~ \\(dp[i-1, j-1]\\) may have already been overwritten, thus the correct state transition result cannot be obtained.</li> <li>If using reverse order traversal, there will be no overwriting problem, and the state transition can be conducted correctly.</li> </ul> <p>The figures below show the transition process from row \\(i = 1\\) to row \\(i = 2\\) in a single array. 
Please think about the differences between normal order traversal and reverse order traversal.</p> <1><2><3><4><5><6> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-21 \u00a0 The space-optimized dynamic programming process of the 0-1 knapsack </p> <p>In the code implementation, we only need to delete the first dimension \\(i\\) of the array <code>dp</code> and change the inner loop to reverse traversal:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig knapsack.py<pre><code>def knapsack_dp_comp(wgt: list[int], val: list[int], cap: int) -> int:\n \"\"\"0-1 Knapsack: Space-optimized dynamic programming\"\"\"\n n = len(wgt)\n # Initialize dp table\n dp = [0] * (cap + 1)\n # State transition\n for i in range(1, n + 1):\n # Traverse in reverse order\n for c in range(cap, 0, -1):\n if wgt[i - 1] > c:\n # If exceeding the knapsack capacity, do not choose item i\n dp[c] = dp[c]\n else:\n # The greater value between not choosing and choosing item i\n dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])\n return dp[cap]\n</code></pre> knapsack.cpp<pre><code>/* 0-1 Knapsack: Space-optimized dynamic programming */\nint knapsackDPComp(vector<int> &wgt, vector<int> &val, int cap) {\n int n = wgt.size();\n // Initialize dp table\n vector<int> dp(cap + 1, 0);\n // State transition\n for (int i = 1; i <= n; i++) {\n // Traverse in reverse order\n for (int c = cap; c >= 1; c--) {\n if (wgt[i - 1] <= c) {\n // The greater value between not choosing and choosing item i\n dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[cap];\n}\n</code></pre> knapsack.java<pre><code>/* 0-1 Knapsack: Space-optimized dynamic programming */\nint knapsackDPComp(int[] wgt, int[] val, int cap) {\n int n = wgt.length;\n // Initialize dp table\n int[] dp = new int[cap + 1];\n // State transition\n for (int i = 1; i <= n; i++) {\n // Traverse in reverse order\n for (int c = cap; c >= 1; c--) {\n if (wgt[i - 1] <= c) {\n // The greater value between not choosing and 
choosing item i\n dp[c] = Math.max(dp[c], dp[c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[cap];\n}\n</code></pre> knapsack.cs<pre><code>[class]{knapsack}-[func]{KnapsackDPComp}\n</code></pre> knapsack.go<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.swift<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.js<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.ts<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.dart<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.rs<pre><code>[class]{}-[func]{knapsack_dp_comp}\n</code></pre> knapsack.c<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.kt<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre> knapsack.rb<pre><code>[class]{}-[func]{knapsack_dp_comp}\n</code></pre> knapsack.zig<pre><code>[class]{}-[func]{knapsackDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/summary/","title":"14.7 \u00a0 Summary","text":"<ul> <li>Dynamic programming decomposes problems and improves computational efficiency by avoiding redundant computations through storing solutions of subproblems.</li> <li>Without considering time, all dynamic programming problems can be solved using backtracking (brute force search), but the recursion tree has many overlapping subproblems, resulting in very low efficiency. 
By introducing a memoization list, it's possible to store solutions of all computed subproblems, ensuring that overlapping subproblems are only computed once.</li> <li>Memoized search is a top-down recursive solution, whereas dynamic programming corresponds to a bottom-up iterative approach, akin to \"filling out a table.\" Since the current state only depends on certain local states, we can eliminate one dimension of the dp table to reduce space complexity.</li> <li>Decomposition of subproblems is a universal algorithmic approach, differing in characteristics among divide and conquer, dynamic programming, and backtracking.</li> <li>Dynamic programming problems have three main characteristics: overlapping subproblems, optimal substructure, and no aftereffects.</li> <li>If the optimal solution of the original problem can be constructed from the optimal solutions of its subproblems, it has an optimal substructure.</li> <li>No aftereffects mean that the future development of a state depends only on the current state and not on all past states experienced. Many combinatorial optimization problems do not have this property and cannot be quickly solved using dynamic programming.</li> </ul> <p>Knapsack problem</p> <ul> <li>The knapsack problem is one of the most typical dynamic programming problems, with variants including the 0-1 knapsack, unbounded knapsack, and multiple knapsacks.</li> <li>The state definition of the 0-1 knapsack is the maximum value in a knapsack of capacity \\(c\\) with the first \\(i\\) items. Based on decisions not to include or to include an item in the knapsack, optimal substructures can be identified and state transition equations constructed. 
In space optimization, since each state depends on the state directly above and to the upper left, the list should be traversed in reverse order to avoid overwriting the upper left state.</li> <li>In the unbounded knapsack problem, there is no limit on the number of each kind of item that can be chosen, thus the state transition for including items differs from the 0-1 knapsack. Since the state depends on the state directly above and to the left, space optimization should involve forward traversal.</li> <li>The coin change problem is a variant of the unbounded knapsack problem, shifting from seeking the \u201cmaximum\u201d value to seeking the \u201cminimum\u201d number of coins, thus the state transition equation should change \\(\\max()\\) to \\(\\min()\\). From pursuing \u201cnot exceeding\u201d the capacity of the knapsack to seeking exactly the target amount, thus use \\(amt + 1\\) to represent the invalid solution of \u201cunable to make up the target amount.\u201d</li> <li>Coin Change Problem II shifts from seeking the \u201cminimum number of coins\u201d to seeking the \u201cnumber of coin combinations,\u201d changing the state transition equation accordingly from \\(\\min()\\) to summation operator.</li> </ul> <p>Edit distance problem</p> <ul> <li>Edit distance (Levenshtein distance) measures the similarity between two strings, defined as the minimum number of editing steps needed to change one string into another, with editing operations including adding, deleting, or replacing.</li> <li>The state definition for the edit distance problem is the minimum number of editing steps needed to change the first \\(i\\) characters of \\(s\\) into the first \\(j\\) characters of \\(t\\). When \\(s[i] \\ne t[j]\\), there are three decisions: add, delete, replace, each with their corresponding residual subproblems. From this, optimal substructures can be identified, and state transition equations built. 
When \\(s[i] = t[j]\\), no editing of the current character is necessary.</li> <li>In edit distance, the state depends on the state directly above, to the left, and to the upper left. Therefore, after space optimization, neither forward nor reverse traversal can correctly perform state transitions. To address this, we use a variable to temporarily store the upper left state, making it equivalent to the situation in the unbounded knapsack problem, allowing for forward traversal after space optimization.</li> </ul>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/","title":"14.5 \u00a0 Unbounded knapsack problem","text":"<p>In this section, we first solve another common knapsack problem: the unbounded knapsack, and then explore a special case of it: the coin change problem.</p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1451-unbounded-knapsack-problem","title":"14.5.1 \u00a0 Unbounded knapsack problem","text":"<p>Question</p> <p>Given \\(n\\) items, where the weight of the \\(i^{th}\\) item is \\(wgt[i-1]\\) and its value is \\(val[i-1]\\), and a backpack with a capacity of \\(cap\\). Each item can be selected multiple times. What is the maximum value of the items that can be put into the backpack without exceeding its capacity? See the example below.</p> <p></p> <p> Figure 14-22 \u00a0 Example data for the unbounded knapsack problem </p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1-dynamic-programming-approach","title":"1. 
\u00a0 Dynamic programming approach","text":"<p>The unbounded knapsack problem is very similar to the 0-1 knapsack problem, the only difference being that there is no limit on the number of times an item can be chosen.</p> <ul> <li>In the 0-1 knapsack problem, there is only one of each item, so after placing item \\(i\\) into the backpack, you can only choose from the previous \\(i-1\\) items.</li> <li>In the unbounded knapsack problem, the quantity of each item is unlimited, so after placing item \\(i\\) in the backpack, you can still choose from the previous \\(i\\) items.</li> </ul> <p>Under the rules of the unbounded knapsack problem, the state \\([i, c]\\) can change in two ways.</p> <ul> <li>Not putting item \\(i\\) in: As with the 0-1 knapsack problem, transition to \\([i-1, c]\\).</li> <li>Putting item \\(i\\) in: Unlike the 0-1 knapsack problem, transition to \\([i, c-wgt[i-1]]\\).</li> </ul> <p>The state transition equation thus becomes:</p> \\[ dp[i, c] = \\max(dp[i-1, c], dp[i, c - wgt[i-1]] + val[i-1]) \\]"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#2-code-implementation","title":"2. 
\u00a0 Code implementation","text":"<p>Comparing the code for the two problems, the state transition changes from \\(i-1\\) to \\(i\\), the rest is completely identical:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig unbounded_knapsack.py<pre><code>def unbounded_knapsack_dp(wgt: list[int], val: list[int], cap: int) -> int:\n \"\"\"Complete knapsack: Dynamic programming\"\"\"\n n = len(wgt)\n # Initialize dp table\n dp = [[0] * (cap + 1) for _ in range(n + 1)]\n # State transition\n for i in range(1, n + 1):\n for c in range(1, cap + 1):\n if wgt[i - 1] > c:\n # If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c]\n else:\n # The greater value between not choosing and choosing item i\n dp[i][c] = max(dp[i - 1][c], dp[i][c - wgt[i - 1]] + val[i - 1])\n return dp[n][cap]\n</code></pre> unbounded_knapsack.cpp<pre><code>/* Complete knapsack: Dynamic programming */\nint unboundedKnapsackDP(vector<int> &wgt, vector<int> &val, int cap) {\n int n = wgt.size();\n // Initialize dp table\n vector<vector<int>> dp(n + 1, vector<int>(cap + 1, 0));\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[i][c] = max(dp[i - 1][c], dp[i][c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[n][cap];\n}\n</code></pre> unbounded_knapsack.java<pre><code>/* Complete knapsack: Dynamic programming */\nint unboundedKnapsackDP(int[] wgt, int[] val, int cap) {\n int n = wgt.length;\n // Initialize dp table\n int[][] dp = new int[n + 1][cap + 1];\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the knapsack capacity, do not choose item i\n dp[i][c] = dp[i - 1][c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[i][c] = 
Math.max(dp[i - 1][c], dp[i][c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[n][cap];\n}\n</code></pre> unbounded_knapsack.cs<pre><code>[class]{unbounded_knapsack}-[func]{UnboundedKnapsackDP}\n</code></pre> unbounded_knapsack.go<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.swift<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.js<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.ts<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.dart<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.rs<pre><code>[class]{}-[func]{unbounded_knapsack_dp}\n</code></pre> unbounded_knapsack.c<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.kt<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre> unbounded_knapsack.rb<pre><code>[class]{}-[func]{unbounded_knapsack_dp}\n</code></pre> unbounded_knapsack.zig<pre><code>[class]{}-[func]{unboundedKnapsackDP}\n</code></pre>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#3-space-optimization","title":"3. \u00a0 Space optimization","text":"<p>Since the current state comes from the state to the left and above, the space-optimized solution should perform a forward traversal for each row in the \\(dp\\) table.</p> <p>This traversal order is the opposite of that for the 0-1 knapsack. 
Please refer to Figure 14-23 to understand the difference.</p> <1><2><3><4><5><6> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-23 \u00a0 Dynamic programming process for the unbounded knapsack problem after space optimization </p> <p>The code implementation is quite simple, just remove the first dimension of the array <code>dp</code>:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig unbounded_knapsack.py<pre><code>def unbounded_knapsack_dp_comp(wgt: list[int], val: list[int], cap: int) -> int:\n \"\"\"Complete knapsack: Space-optimized dynamic programming\"\"\"\n n = len(wgt)\n # Initialize dp table\n dp = [0] * (cap + 1)\n # State transition\n for i in range(1, n + 1):\n # Traverse in order\n for c in range(1, cap + 1):\n if wgt[i - 1] > c:\n # If exceeding the knapsack capacity, do not choose item i\n dp[c] = dp[c]\n else:\n # The greater value between not choosing and choosing item i\n dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1])\n return dp[cap]\n</code></pre> unbounded_knapsack.cpp<pre><code>/* Complete knapsack: Space-optimized dynamic programming */\nint unboundedKnapsackDPComp(vector<int> &wgt, vector<int> &val, int cap) {\n int n = wgt.size();\n // Initialize dp table\n vector<int> dp(cap + 1, 0);\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the knapsack capacity, do not choose item i\n dp[c] = dp[c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[c] = max(dp[c], dp[c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[cap];\n}\n</code></pre> unbounded_knapsack.java<pre><code>/* Complete knapsack: Space-optimized dynamic programming */\nint unboundedKnapsackDPComp(int[] wgt, int[] val, int cap) {\n int n = wgt.length;\n // Initialize dp table\n int[] dp = new int[cap + 1];\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int c = 1; c <= cap; c++) {\n if (wgt[i - 1] > c) {\n // If exceeding the 
knapsack capacity, do not choose item i\n dp[c] = dp[c];\n } else {\n // The greater value between not choosing and choosing item i\n dp[c] = Math.max(dp[c], dp[c - wgt[i - 1]] + val[i - 1]);\n }\n }\n }\n return dp[cap];\n}\n</code></pre> unbounded_knapsack.cs<pre><code>[class]{unbounded_knapsack}-[func]{UnboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.go<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.swift<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.js<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.ts<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.dart<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.rs<pre><code>[class]{}-[func]{unbounded_knapsack_dp_comp}\n</code></pre> unbounded_knapsack.c<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.kt<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre> unbounded_knapsack.rb<pre><code>[class]{}-[func]{unbounded_knapsack_dp_comp}\n</code></pre> unbounded_knapsack.zig<pre><code>[class]{}-[func]{unboundedKnapsackDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1452-coin-change-problem","title":"14.5.2 \u00a0 Coin change problem","text":"<p>The knapsack problem is a representative of a large class of dynamic programming problems and has many variants, such as the coin change problem.</p> <p>Question</p> <p>Given \\(n\\) types of coins, the denomination of the \\(i^{th}\\) type of coin is \\(coins[i - 1]\\), and the target amount is \\(amt\\). Each type of coin can be selected multiple times. What is the minimum number of coins needed to make up the target amount? If it is impossible to make up the target amount, return \\(-1\\). 
See the example below.</p> <p></p> <p> Figure 14-24 \u00a0 Example data for the coin change problem </p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1-dynamic-programming-approach_1","title":"1. \u00a0 Dynamic programming approach","text":"<p>The coin change can be seen as a special case of the unbounded knapsack problem, sharing the following similarities and differences.</p> <ul> <li>The two problems can be converted into each other: \"item\" corresponds to \"coin\", \"item weight\" corresponds to \"coin denomination\", and \"backpack capacity\" corresponds to \"target amount\".</li> <li>The optimization goals are opposite: the unbounded knapsack problem aims to maximize the value of items, while the coin change problem aims to minimize the number of coins.</li> <li>The unbounded knapsack problem seeks solutions \"not exceeding\" the backpack capacity, while the coin change seeks solutions that \"exactly\" make up the target amount.</li> </ul> <p>First step: Think through each round's decision-making, define the state, and thus derive the \\(dp\\) table</p> <p>The state \\([i, a]\\) corresponds to the sub-problem: the minimum number of coins that can make up the amount \\(a\\) using the first \\(i\\) types of coins, denoted as \\(dp[i, a]\\).</p> <p>The two-dimensional \\(dp\\) table is of size \\((n+1) \\times (amt+1)\\).</p> <p>Second step: Identify the optimal substructure and derive the state transition equation</p> <p>This problem differs from the unbounded knapsack problem in two aspects of the state transition equation.</p> <ul> <li>This problem seeks the minimum, so the operator \\(\\max()\\) needs to be changed to \\(\\min()\\).</li> <li>The optimization is focused on the number of coins, so simply add \\(+1\\) when a coin is chosen.</li> </ul> \\[ dp[i, a] = \\min(dp[i-1, a], dp[i, a - coins[i-1]] + 1) \\] <p>Third step: Define boundary conditions and state transition order</p> <p>When the target amount is \\(0\\), the minimum 
number of coins needed to make it up is \\(0\\), so all \\(dp[i, 0]\\) in the first column are \\(0\\).</p> <p>When there are no coins, it is impossible to make up any amount >0, which is an invalid solution. To allow the \\(\\min()\\) function in the state transition equation to recognize and filter out invalid solutions, consider using \\(+\\infty\\) to represent them, i.e., set all \\(dp[0, a]\\) in the first row to \\(+\\infty\\).</p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#2-code-implementation_1","title":"2. \u00a0 Code implementation","text":"<p>Most programming languages do not provide a \\(+\\infty\\) variable, only the maximum value of an integer <code>int</code> can be used as a substitute. This can lead to overflow: the \\(+1\\) operation in the state transition equation may overflow.</p> <p>For this reason, we use the number \\(amt + 1\\) to represent an invalid solution, because the maximum number of coins needed to make up \\(amt\\) is at most \\(amt\\). Before returning the result, check if \\(dp[n, amt]\\) equals \\(amt + 1\\), and if so, return \\(-1\\), indicating that the target amount cannot be made up. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig coin_change.py<pre><code>def coin_change_dp(coins: list[int], amt: int) -> int:\n \"\"\"Coin change: Dynamic programming\"\"\"\n n = len(coins)\n MAX = amt + 1\n # Initialize dp table\n dp = [[0] * (amt + 1) for _ in range(n + 1)]\n # State transition: first row and first column\n for a in range(1, amt + 1):\n dp[0][a] = MAX\n # State transition: the rest of the rows and columns\n for i in range(1, n + 1):\n for a in range(1, amt + 1):\n if coins[i - 1] > a:\n # If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a]\n else:\n # The smaller value between not choosing and choosing coin i\n dp[i][a] = min(dp[i - 1][a], dp[i][a - coins[i - 1]] + 1)\n return dp[n][amt] if dp[n][amt] != MAX else -1\n</code></pre> coin_change.cpp<pre><code>/* Coin change: Dynamic programming */\nint coinChangeDP(vector<int> &coins, int amt) {\n int n = coins.size();\n int MAX = amt + 1;\n // Initialize dp table\n vector<vector<int>> dp(n + 1, vector<int>(amt + 1, 0));\n // State transition: first row and first column\n for (int a = 1; a <= amt; a++) {\n dp[0][a] = MAX;\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a];\n } else {\n // The smaller value between not choosing and choosing coin i\n dp[i][a] = min(dp[i - 1][a], dp[i][a - coins[i - 1]] + 1);\n }\n }\n }\n return dp[n][amt] != MAX ? 
dp[n][amt] : -1;\n}\n</code></pre> coin_change.java<pre><code>/* Coin change: Dynamic programming */\nint coinChangeDP(int[] coins, int amt) {\n int n = coins.length;\n int MAX = amt + 1;\n // Initialize dp table\n int[][] dp = new int[n + 1][amt + 1];\n // State transition: first row and first column\n for (int a = 1; a <= amt; a++) {\n dp[0][a] = MAX;\n }\n // State transition: the rest of the rows and columns\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a];\n } else {\n // The smaller value between not choosing and choosing coin i\n dp[i][a] = Math.min(dp[i - 1][a], dp[i][a - coins[i - 1]] + 1);\n }\n }\n }\n return dp[n][amt] != MAX ? dp[n][amt] : -1;\n}\n</code></pre> coin_change.cs<pre><code>[class]{coin_change}-[func]{CoinChangeDP}\n</code></pre> coin_change.go<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.swift<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.js<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.ts<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.dart<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.rs<pre><code>[class]{}-[func]{coin_change_dp}\n</code></pre> coin_change.c<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.kt<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> coin_change.rb<pre><code>[class]{}-[func]{coin_change_dp}\n</code></pre> coin_change.zig<pre><code>[class]{}-[func]{coinChangeDP}\n</code></pre> <p>Figure 14-25 shows the dynamic programming process for the coin change problem, which is very similar to the unbounded knapsack problem.</p> <1><2><3><4><5><6><7><8><9><10><11><12><13><14><15> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 14-25 \u00a0 Dynamic programming process for the coin change 
problem </p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#3-space-optimization_1","title":"3. \u00a0 Space optimization","text":"<p>The space optimization for the coin change problem is handled in the same way as for the unbounded knapsack problem:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig coin_change.py<pre><code>def coin_change_dp_comp(coins: list[int], amt: int) -> int:\n \"\"\"Coin change: Space-optimized dynamic programming\"\"\"\n n = len(coins)\n MAX = amt + 1\n # Initialize dp table\n dp = [MAX] * (amt + 1)\n dp[0] = 0\n # State transition\n for i in range(1, n + 1):\n # Traverse in order\n for a in range(1, amt + 1):\n if coins[i - 1] > a:\n # If exceeding the target amount, do not choose coin i\n dp[a] = dp[a]\n else:\n # The smaller value between not choosing and choosing coin i\n dp[a] = min(dp[a], dp[a - coins[i - 1]] + 1)\n return dp[amt] if dp[amt] != MAX else -1\n</code></pre> coin_change.cpp<pre><code>/* Coin change: Space-optimized dynamic programming */\nint coinChangeDPComp(vector<int> &coins, int amt) {\n int n = coins.size();\n int MAX = amt + 1;\n // Initialize dp table\n vector<int> dp(amt + 1, MAX);\n dp[0] = 0;\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[a] = dp[a];\n } else {\n // The smaller value between not choosing and choosing coin i\n dp[a] = min(dp[a], dp[a - coins[i - 1]] + 1);\n }\n }\n }\n return dp[amt] != MAX ? 
dp[amt] : -1;\n}\n</code></pre> coin_change.java<pre><code>/* Coin change: Space-optimized dynamic programming */\nint coinChangeDPComp(int[] coins, int amt) {\n int n = coins.length;\n int MAX = amt + 1;\n // Initialize dp table\n int[] dp = new int[amt + 1];\n Arrays.fill(dp, MAX);\n dp[0] = 0;\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[a] = dp[a];\n } else {\n // The smaller value between not choosing and choosing coin i\n dp[a] = Math.min(dp[a], dp[a - coins[i - 1]] + 1);\n }\n }\n }\n return dp[amt] != MAX ? dp[amt] : -1;\n}\n</code></pre> coin_change.cs<pre><code>[class]{coin_change}-[func]{CoinChangeDPComp}\n</code></pre> coin_change.go<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.swift<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.js<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.ts<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.dart<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.rs<pre><code>[class]{}-[func]{coin_change_dp_comp}\n</code></pre> coin_change.c<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.kt<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre> coin_change.rb<pre><code>[class]{}-[func]{coin_change_dp_comp}\n</code></pre> coin_change.zig<pre><code>[class]{}-[func]{coinChangeDPComp}\n</code></pre>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1453-coin-change-problem-ii","title":"14.5.3 \u00a0 Coin change problem II","text":"<p>Question</p> <p>Given \\(n\\) types of coins, where the denomination of the \\(i^{th}\\) type of coin is \\(coins[i - 1]\\), and the target amount is \\(amt\\). Each type of coin can be selected multiple times, ask how many combinations of coins can make up the target amount. 
See the example below.</p> <p></p> <p> Figure 14-26 \u00a0 Example data for Coin Change Problem II </p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#1-dynamic-programming-approach_2","title":"1. \u00a0 Dynamic programming approach","text":"<p>Compared to the previous problem, the goal of this problem is to determine the number of combinations, so the sub-problem becomes: the number of combinations that can make up amount \\(a\\) using the first \\(i\\) types of coins. The \\(dp\\) table remains a two-dimensional matrix of size \\((n+1) \\times (amt + 1)\\).</p> <p>The number of combinations for the current state is the sum of the combinations from not selecting the current coin and from selecting it. The state transition equation is:</p> \\[ dp[i, a] = dp[i-1, a] + dp[i, a - coins[i-1]] \\] <p>When the target amount is \\(0\\), no coins are needed to make up the target amount, so all \\(dp[i, 0]\\) in the first column should be initialized to \\(1\\). When there are no coins, it is impossible to make up any amount greater than \\(0\\), so all \\(dp[0, a]\\) in the first row should be set to \\(0\\).</p>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#2-code-implementation_2","title":"2. 
\u00a0 Code implementation","text":"PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig coin_change_ii.py<pre><code>def coin_change_ii_dp(coins: list[int], amt: int) -> int:\n \"\"\"Coin change II: Dynamic programming\"\"\"\n n = len(coins)\n # Initialize dp table\n dp = [[0] * (amt + 1) for _ in range(n + 1)]\n # Initialize first column\n for i in range(n + 1):\n dp[i][0] = 1\n # State transition\n for i in range(1, n + 1):\n for a in range(1, amt + 1):\n if coins[i - 1] > a:\n # If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a]\n else:\n # The sum of the two options of not choosing and choosing coin i\n dp[i][a] = dp[i - 1][a] + dp[i][a - coins[i - 1]]\n return dp[n][amt]\n</code></pre> coin_change_ii.cpp<pre><code>/* Coin change II: Dynamic programming */\nint coinChangeIIDP(vector<int> &coins, int amt) {\n int n = coins.size();\n // Initialize dp table\n vector<vector<int>> dp(n + 1, vector<int>(amt + 1, 0));\n // Initialize first column\n for (int i = 0; i <= n; i++) {\n dp[i][0] = 1;\n }\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a];\n } else {\n // The sum of the two options of not choosing and choosing coin i\n dp[i][a] = dp[i - 1][a] + dp[i][a - coins[i - 1]];\n }\n }\n }\n return dp[n][amt];\n}\n</code></pre> coin_change_ii.java<pre><code>/* Coin change II: Dynamic programming */\nint coinChangeIIDP(int[] coins, int amt) {\n int n = coins.length;\n // Initialize dp table\n int[][] dp = new int[n + 1][amt + 1];\n // Initialize first column\n for (int i = 0; i <= n; i++) {\n dp[i][0] = 1;\n }\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[i][a] = dp[i - 1][a];\n } else {\n // The sum of the two options of not choosing and choosing coin i\n dp[i][a] = 
dp[i - 1][a] + dp[i][a - coins[i - 1]];\n }\n }\n }\n return dp[n][amt];\n}\n</code></pre> coin_change_ii.cs<pre><code>[class]{coin_change_ii}-[func]{CoinChangeIIDP}\n</code></pre> coin_change_ii.go<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.swift<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.js<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.ts<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.dart<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.rs<pre><code>[class]{}-[func]{coin_change_ii_dp}\n</code></pre> coin_change_ii.c<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.kt<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre> coin_change_ii.rb<pre><code>[class]{}-[func]{coin_change_ii_dp}\n</code></pre> coin_change_ii.zig<pre><code>[class]{}-[func]{coinChangeIIDP}\n</code></pre>"},{"location":"chapter_dynamic_programming/unbounded_knapsack_problem/#3-space-optimization_2","title":"3. 
\u00a0 Space optimization","text":"<p>The space optimization approach is the same, just remove the coin dimension:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig coin_change_ii.py<pre><code>def coin_change_ii_dp_comp(coins: list[int], amt: int) -> int:\n \"\"\"Coin change II: Space-optimized dynamic programming\"\"\"\n n = len(coins)\n # Initialize dp table\n dp = [0] * (amt + 1)\n dp[0] = 1\n # State transition\n for i in range(1, n + 1):\n # Traverse in order\n for a in range(1, amt + 1):\n if coins[i - 1] > a:\n # If exceeding the target amount, do not choose coin i\n dp[a] = dp[a]\n else:\n # The sum of the two options of not choosing and choosing coin i\n dp[a] = dp[a] + dp[a - coins[i - 1]]\n return dp[amt]\n</code></pre> coin_change_ii.cpp<pre><code>/* Coin change II: Space-optimized dynamic programming */\nint coinChangeIIDPComp(vector<int> &coins, int amt) {\n int n = coins.size();\n // Initialize dp table\n vector<int> dp(amt + 1, 0);\n dp[0] = 1;\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[a] = dp[a];\n } else {\n // The sum of the two options of not choosing and choosing coin i\n dp[a] = dp[a] + dp[a - coins[i - 1]];\n }\n }\n }\n return dp[amt];\n}\n</code></pre> coin_change_ii.java<pre><code>/* Coin change II: Space-optimized dynamic programming */\nint coinChangeIIDPComp(int[] coins, int amt) {\n int n = coins.length;\n // Initialize dp table\n int[] dp = new int[amt + 1];\n dp[0] = 1;\n // State transition\n for (int i = 1; i <= n; i++) {\n for (int a = 1; a <= amt; a++) {\n if (coins[i - 1] > a) {\n // If exceeding the target amount, do not choose coin i\n dp[a] = dp[a];\n } else {\n // The sum of the two options of not choosing and choosing coin i\n dp[a] = dp[a] + dp[a - coins[i - 1]];\n }\n }\n }\n return dp[amt];\n}\n</code></pre> 
coin_change_ii.cs<pre><code>[class]{coin_change_ii}-[func]{CoinChangeIIDPComp}\n</code></pre> coin_change_ii.go<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.swift<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.js<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.ts<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.dart<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.rs<pre><code>[class]{}-[func]{coin_change_ii_dp_comp}\n</code></pre> coin_change_ii.c<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.kt<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre> coin_change_ii.rb<pre><code>[class]{}-[func]{coin_change_ii_dp_comp}\n</code></pre> coin_change_ii.zig<pre><code>[class]{}-[func]{coinChangeIIDPComp}\n</code></pre>"},{"location":"chapter_graph/","title":"Chapter 9. \u00a0 Graph","text":"<p>Abstract</p> <p>In the journey of life, we are like individual nodes, connected by countless invisible edges.</p> <p>Each encounter and parting leaves a distinctive imprint on this vast network graph.</p>"},{"location":"chapter_graph/#chapter-contents","title":"Chapter contents","text":"<ul> <li>9.1 \u00a0 Graph</li> <li>9.2 \u00a0 Basic graph operations</li> <li>9.3 \u00a0 Graph traversal</li> <li>9.4 \u00a0 Summary</li> </ul>"},{"location":"chapter_graph/graph/","title":"9.1 \u00a0 Graph","text":"<p>A graph is a type of nonlinear data structure, consisting of vertices and edges. A graph \\(G\\) can be abstractly represented as a collection of a set of vertices \\(V\\) and a set of edges \\(E\\). 
The following example shows a graph containing 5 vertices and 7 edges.</p> \\[ \\begin{aligned} V & = \\{ 1, 2, 3, 4, 5 \\} \\newline E & = \\{ (1,2), (1,3), (1,5), (2,3), (2,4), (2,5), (4,5) \\} \\newline G & = \\{ V, E \\} \\newline \\end{aligned} \\] <p>If vertices are viewed as nodes and edges as references (pointers) connecting the nodes, graphs can be seen as a data structure that extends from linked lists. As shown in Figure 9-1, compared to linear relationships (linked lists) and divide-and-conquer relationships (trees), network relationships (graphs) are more complex due to their higher degree of freedom.</p> <p></p> <p> Figure 9-1 \u00a0 Relationship between linked lists, trees, and graphs </p>"},{"location":"chapter_graph/graph/#911-common-types-of-graphs","title":"9.1.1 \u00a0 Common types of graphs","text":"<p>Based on whether edges have direction, graphs can be divided into undirected graphs and directed graphs, as shown in Figure 9-2.</p> <ul> <li>In undirected graphs, edges represent a \"bidirectional\" connection between two vertices, for example, the \"friendship\" in WeChat or QQ.</li> <li>In directed graphs, edges have directionality, that is, the edges \\(A \\rightarrow B\\) and \\(A \\leftarrow B\\) are independent of each other, for example, the \"follow\" and \"be followed\" relationship on Weibo or TikTok.</li> </ul> <p></p> <p> Figure 9-2 \u00a0 Directed and undirected graphs </p> <p>Based on whether all vertices are connected, graphs can be divided into connected graphs and disconnected graphs, as shown in Figure 9-3.</p> <ul> <li>For connected graphs, it is possible to reach any other vertex starting from a certain vertex.</li> <li>For disconnected graphs, there is at least one vertex that cannot be reached from a certain starting vertex.</li> </ul> <p></p> <p> Figure 9-3 \u00a0 Connected and disconnected graphs </p> <p>We can also add a weight variable to edges, resulting in weighted graphs as shown in Figure 9-4. 
For example, in mobile games like \"Honor of Kings\", the system calculates the \"closeness\" between players based on shared gaming time, and this closeness network can be represented with a weighted graph.</p> <p></p> <p> Figure 9-4 \u00a0 Weighted and unweighted graphs </p> <p>Graph data structures include the following commonly used terms.</p> <ul> <li>Adjacency: When there is an edge connecting two vertices, these two vertices are said to be \"adjacent\". In Figure 9-4, the adjacent vertices of vertex 1 are vertices 2, 3, and 5.</li> <li>Path: The sequence of edges passed from vertex A to vertex B is called a path from A to B. In Figure 9-4, the edge sequence 1-5-2-4 is a path from vertex 1 to vertex 4.</li> <li>Degree: The number of edges a vertex has. For directed graphs, in-degree refers to how many edges point to the vertex, and out-degree refers to how many edges point out from the vertex.</li> </ul>"},{"location":"chapter_graph/graph/#912-representation-of-graphs","title":"9.1.2 \u00a0 Representation of graphs","text":"<p>Common representations of graphs include \"adjacency matrices\" and \"adjacency lists\". The following examples use undirected graphs.</p>"},{"location":"chapter_graph/graph/#1-adjacency-matrix","title":"1. 
\u00a0 Adjacency matrix","text":"<p>Let the number of vertices in the graph be \\(n\\); the adjacency matrix then uses an \\(n \\times n\\) matrix to represent the graph, where each row (column) represents a vertex, and the matrix elements represent edges, with \\(1\\) or \\(0\\) indicating whether there is an edge between two vertices.</p> <p>As shown in Figure 9-5, let the adjacency matrix be \\(M\\) and the list of vertices be \\(V\\); then the matrix element \\(M[i, j] = 1\\) indicates there is an edge between vertex \\(V[i]\\) and vertex \\(V[j]\\), while conversely \\(M[i, j] = 0\\) indicates there is no edge between the two vertices.</p> <p></p> <p> Figure 9-5 \u00a0 Representation of a graph with an adjacency matrix </p> <p>Adjacency matrices have the following characteristics.</p> <ul> <li>A vertex cannot be connected to itself, so the elements on the main diagonal of the adjacency matrix are meaningless.</li> <li>For undirected graphs, edges in both directions are equivalent, thus the adjacency matrix is symmetric about the main diagonal.</li> <li>By replacing the \\(1\\)s and \\(0\\)s in the adjacency matrix with edge weights, weighted graphs can be represented.</li> </ul> <p>When representing graphs with adjacency matrices, it is possible to directly access matrix elements to obtain edges, thus operations of addition, deletion, lookup, and modification are very efficient, all with a time complexity of \\(O(1)\\). However, the space complexity of the matrix is \\(O(n^2)\\), which consumes more memory.</p>"},{"location":"chapter_graph/graph/#2-adjacency-list","title":"2. \u00a0 Adjacency list","text":"<p>The adjacency list uses \\(n\\) linked lists to represent the graph, with each linked list node representing a vertex. The \\(i\\)-th linked list corresponds to vertex \\(i\\) and contains all adjacent vertices (vertices connected to that vertex). 
Figure 9-6 shows an example of a graph stored using an adjacency list.</p> <p></p> <p> Figure 9-6 \u00a0 Representation of a graph with an adjacency list </p> <p>The adjacency list only stores actual edges, and the total number of edges is often much less than \\(n^2\\), making it more space-efficient. However, finding edges in the adjacency list requires traversing the linked list, so its time efficiency is not as good as that of the adjacency matrix.</p> <p>Observing Figure 9-6, the structure of the adjacency list is very similar to the \"chaining\" in hash tables, hence we can use similar methods to optimize efficiency. For example, when the linked list is long, it can be transformed into an AVL tree or red-black tree, thus optimizing the time efficiency from \\(O(n)\\) to \\(O(\\log n)\\); the linked list can also be transformed into a hash table, thus reducing the time complexity to \\(O(1)\\).</p>"},{"location":"chapter_graph/graph/#913-common-applications-of-graphs","title":"9.1.3 \u00a0 Common applications of graphs","text":"<p>As shown in Table 9-1, many real-world systems can be modeled with graphs, and corresponding problems can be reduced to graph computing problems.</p> <p> Table 9-1 \u00a0 Common graphs in real life </p> Vertices Edges Graph Computing Problem Social Networks Users Friendships Potential Friend Recommendations Subway Lines Stations Connectivity Between Stations Shortest Route Recommendations Solar System Celestial Bodies Gravitational Forces Between Celestial Bodies Planetary Orbit Calculations"},{"location":"chapter_graph/graph_operations/","title":"9.2 \u00a0 Basic operations on graphs","text":"<p>The basic operations on graphs can be divided into operations on \"edges\" and operations on \"vertices\". 
Under the two representation methods of \"adjacency matrix\" and \"adjacency list\", the implementation methods are different.</p>"},{"location":"chapter_graph/graph_operations/#921-implementation-based-on-adjacency-matrix","title":"9.2.1 \u00a0 Implementation based on adjacency matrix","text":"<p>Given an undirected graph with \\(n\\) vertices, the various operations are implemented as shown in Figure 9-7.</p> <ul> <li>Adding or removing an edge: Directly modify the specified edge in the adjacency matrix, using \\(O(1)\\) time. Since it is an undirected graph, it is necessary to update the edges in both directions simultaneously.</li> <li>Adding a vertex: Add a row and a column at the end of the adjacency matrix and fill them all with \\(0\\)s, using \\(O(n)\\) time.</li> <li>Removing a vertex: Delete a row and a column in the adjacency matrix. The worst case is when the first row and column are removed, requiring \\((n-1)^2\\) elements to be \"moved up and to the left\", thus using \\(O(n^2)\\) time.</li> <li>Initialization: Pass in \\(n\\) vertices, initialize a vertex list <code>vertices</code> of length \\(n\\), using \\(O(n)\\) time; initialize an \\(n \\times n\\) size adjacency matrix <code>adjMat</code>, using \\(O(n^2)\\) time.</li> </ul> Initialize adjacency matrixAdd an edgeRemove an edgeAdd a vertexRemove a vertex <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 9-7 \u00a0 Initialization, adding and removing edges, adding and removing vertices in adjacency matrix </p> <p>Below is the implementation code for graphs represented using an adjacency matrix:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig graph_adjacency_matrix.py<pre><code>class GraphAdjMat:\n \"\"\"Undirected graph class based on adjacency matrix\"\"\"\n\n def __init__(self, vertices: list[int], edges: list[list[int]]):\n \"\"\"Constructor\"\"\"\n # Vertex list, elements represent \"vertex value\", index represents \"vertex index\"\n self.vertices: list[int] = []\n # Adjacency 
matrix, row and column indices correspond to \"vertex index\"\n self.adj_mat: list[list[int]] = []\n # Add vertex\n for val in vertices:\n self.add_vertex(val)\n # Add edge\n # Edges elements represent vertex indices\n for e in edges:\n self.add_edge(e[0], e[1])\n\n def size(self) -> int:\n \"\"\"Get the number of vertices\"\"\"\n return len(self.vertices)\n\n def add_vertex(self, val: int):\n \"\"\"Add vertex\"\"\"\n n = self.size()\n # Add new vertex value to the vertex list\n self.vertices.append(val)\n # Add a row to the adjacency matrix\n new_row = [0] * n\n self.adj_mat.append(new_row)\n # Add a column to the adjacency matrix\n for row in self.adj_mat:\n row.append(0)\n\n def remove_vertex(self, index: int):\n \"\"\"Remove vertex\"\"\"\n if index >= self.size():\n raise IndexError()\n # Remove vertex at `index` from the vertex list\n self.vertices.pop(index)\n # Remove the row at `index` from the adjacency matrix\n self.adj_mat.pop(index)\n # Remove the column at `index` from the adjacency matrix\n for row in self.adj_mat:\n row.pop(index)\n\n def add_edge(self, i: int, j: int):\n \"\"\"Add edge\"\"\"\n # Parameters i, j correspond to vertices element indices\n # Handle index out of bounds and equality\n if i < 0 or j < 0 or i >= self.size() or j >= self.size() or i == j:\n raise IndexError()\n # In an undirected graph, the adjacency matrix is symmetric about the main diagonal, i.e., satisfies (i, j) == (j, i)\n self.adj_mat[i][j] = 1\n self.adj_mat[j][i] = 1\n\n def remove_edge(self, i: int, j: int):\n \"\"\"Remove edge\"\"\"\n # Parameters i, j correspond to vertices element indices\n # Handle index out of bounds and equality\n if i < 0 or j < 0 or i >= self.size() or j >= self.size() or i == j:\n raise IndexError()\n self.adj_mat[i][j] = 0\n self.adj_mat[j][i] = 0\n\n def print(self):\n \"\"\"Print adjacency matrix\"\"\"\n print(\"Vertex list =\", self.vertices)\n print(\"Adjacency matrix =\")\n print_matrix(self.adj_mat)\n</code></pre> 
graph_adjacency_matrix.cpp<pre><code>/* Undirected graph class based on adjacency matrix */\nclass GraphAdjMat {\n vector<int> vertices; // Vertex list, elements represent \"vertex value\", index represents \"vertex index\"\n vector<vector<int>> adjMat; // Adjacency matrix, row and column indices correspond to \"vertex index\"\n\n public:\n /* Constructor */\n GraphAdjMat(const vector<int> &vertices, const vector<vector<int>> &edges) {\n // Add vertex\n for (int val : vertices) {\n addVertex(val);\n }\n // Add edge\n // Edges elements represent vertex indices\n for (const vector<int> &edge : edges) {\n addEdge(edge[0], edge[1]);\n }\n }\n\n /* Get the number of vertices */\n int size() const {\n return vertices.size();\n }\n\n /* Add vertex */\n void addVertex(int val) {\n int n = size();\n // Add new vertex value to the vertex list\n vertices.push_back(val);\n // Add a row to the adjacency matrix\n adjMat.emplace_back(vector<int>(n, 0));\n // Add a column to the adjacency matrix\n for (vector<int> &row : adjMat) {\n row.push_back(0);\n }\n }\n\n /* Remove vertex */\n void removeVertex(int index) {\n if (index >= size()) {\n throw out_of_range(\"Vertex does not exist\");\n }\n // Remove vertex at `index` from the vertex list\n vertices.erase(vertices.begin() + index);\n // Remove the row at `index` from the adjacency matrix\n adjMat.erase(adjMat.begin() + index);\n // Remove the column at `index` from the adjacency matrix\n for (vector<int> &row : adjMat) {\n row.erase(row.begin() + index);\n }\n }\n\n /* Add edge */\n // Parameters i, j correspond to vertices element indices\n void addEdge(int i, int j) {\n // Handle index out of bounds and equality\n if (i < 0 || j < 0 || i >= size() || j >= size() || i == j) {\n throw out_of_range(\"Vertex does not exist\");\n }\n // In an undirected graph, the adjacency matrix is symmetric about the main diagonal, i.e., satisfies (i, j) == (j, i)\n adjMat[i][j] = 1;\n adjMat[j][i] = 1;\n }\n\n /* Remove edge */\n // Parameters 
i, j correspond to vertices element indices\n void removeEdge(int i, int j) {\n // Handle index out of bounds and equality\n if (i < 0 || j < 0 || i >= size() || j >= size() || i == j) {\n throw out_of_range(\"Vertex does not exist\");\n }\n adjMat[i][j] = 0;\n adjMat[j][i] = 0;\n }\n\n /* Print adjacency matrix */\n void print() {\n cout << \"Vertex list = \";\n printVector(vertices);\n cout << \"Adjacency matrix =\" << endl;\n printVectorMatrix(adjMat);\n }\n};\n</code></pre> graph_adjacency_matrix.java<pre><code>/* Undirected graph class based on adjacency matrix */\nclass GraphAdjMat {\n List<Integer> vertices; // Vertex list, elements represent \"vertex value\", index represents \"vertex index\"\n List<List<Integer>> adjMat; // Adjacency matrix, row and column indices correspond to \"vertex index\"\n\n /* Constructor */\n public GraphAdjMat(int[] vertices, int[][] edges) {\n this.vertices = new ArrayList<>();\n this.adjMat = new ArrayList<>();\n // Add vertex\n for (int val : vertices) {\n addVertex(val);\n }\n // Add edge\n // Edges elements represent vertex indices\n for (int[] e : edges) {\n addEdge(e[0], e[1]);\n }\n }\n\n /* Get the number of vertices */\n public int size() {\n return vertices.size();\n }\n\n /* Add vertex */\n public void addVertex(int val) {\n int n = size();\n // Add new vertex value to the vertex list\n vertices.add(val);\n // Add a row to the adjacency matrix\n List<Integer> newRow = new ArrayList<>(n);\n for (int j = 0; j < n; j++) {\n newRow.add(0);\n }\n adjMat.add(newRow);\n // Add a column to the adjacency matrix\n for (List<Integer> row : adjMat) {\n row.add(0);\n }\n }\n\n /* Remove vertex */\n public void removeVertex(int index) {\n if (index >= size())\n throw new IndexOutOfBoundsException();\n // Remove vertex at `index` from the vertex list\n vertices.remove(index);\n // Remove the row at `index` from the adjacency matrix\n adjMat.remove(index);\n // Remove the column at `index` from the adjacency matrix\n for 
(List<Integer> row : adjMat) {\n row.remove(index);\n }\n }\n\n /* Add edge */\n // Parameters i, j correspond to vertices element indices\n public void addEdge(int i, int j) {\n // Handle index out of bounds and equality\n if (i < 0 || j < 0 || i >= size() || j >= size() || i == j)\n throw new IndexOutOfBoundsException();\n // In an undirected graph, the adjacency matrix is symmetric about the main diagonal, i.e., satisfies (i, j) == (j, i)\n adjMat.get(i).set(j, 1);\n adjMat.get(j).set(i, 1);\n }\n\n /* Remove edge */\n // Parameters i, j correspond to vertices element indices\n public void removeEdge(int i, int j) {\n // Handle index out of bounds and equality\n if (i < 0 || j < 0 || i >= size() || j >= size() || i == j)\n throw new IndexOutOfBoundsException();\n adjMat.get(i).set(j, 0);\n adjMat.get(j).set(i, 0);\n }\n\n /* Print adjacency matrix */\n public void print() {\n System.out.print(\"Vertex list = \");\n System.out.println(vertices);\n System.out.println(\"Adjacency matrix =\");\n PrintUtil.printMatrix(adjMat);\n }\n}\n</code></pre> graph_adjacency_matrix.cs<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.go<pre><code>[class]{graphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.swift<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.js<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.ts<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.dart<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.rs<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.c<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.kt<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> graph_adjacency_matrix.rb<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre> 
graph_adjacency_matrix.zig<pre><code>[class]{GraphAdjMat}-[func]{}\n</code></pre>"},{"location":"chapter_graph/graph_operations/#922-implementation-based-on-adjacency-list","title":"9.2.2 \u00a0 Implementation based on adjacency list","text":"<p>Given an undirected graph with a total of \\(n\\) vertices and \\(m\\) edges, the various operations can be implemented as shown in Figure 9-8.</p> <ul> <li>Adding an edge: Simply add the edge at the end of the corresponding vertex's linked list, using \\(O(1)\\) time. Because it is an undirected graph, it is necessary to add edges in both directions simultaneously.</li> <li>Removing an edge: Find and remove the specified edge in the corresponding vertex's linked list, using \\(O(m)\\) time. In an undirected graph, it is necessary to remove edges in both directions simultaneously.</li> <li>Adding a vertex: Add a linked list in the adjacency list and make the new vertex the head node of the list, using \\(O(1)\\) time.</li> <li>Removing a vertex: It is necessary to traverse the entire adjacency list, removing all edges that include the specified vertex, using \\(O(n + m)\\) time.</li> <li>Initialization: Create \\(n\\) vertices and \\(2m\\) edges in the adjacency list, using \\(O(n + m)\\) time.</li> </ul> Initialize adjacency listAdd an edgeRemove an edgeAdd a vertexRemove a vertex <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 9-8 \u00a0 Initialization, adding and removing edges, adding and removing vertices in adjacency list </p> <p>Below is the adjacency list code implementation. 
Compared to Figure 9-8, the actual code has the following differences.</p> <ul> <li>For convenience in adding and removing vertices, and to simplify the code, we use lists (dynamic arrays) instead of linked lists.</li> <li>Use a hash table to store the adjacency list, <code>key</code> being the vertex instance, <code>value</code> being the list (linked list) of adjacent vertices of that vertex.</li> </ul> <p>Additionally, we use the <code>Vertex</code> class to represent vertices in the adjacency list. The reason is that if list indices were used to distinguish vertices, as in the adjacency matrix, then deleting the vertex at index \\(i\\) would require traversing the entire adjacency list and decrementing every index greater than \\(i\\) by \\(1\\), which is very inefficient. However, if each vertex is a unique <code>Vertex</code> instance, then deleting a vertex does not require any changes to other vertices.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig graph_adjacency_list.py<pre><code>class GraphAdjList:\n \"\"\"Undirected graph class based on adjacency list\"\"\"\n\n def __init__(self, edges: list[list[Vertex]]):\n \"\"\"Constructor\"\"\"\n # Adjacency list, key: vertex, value: all adjacent vertices of that vertex\n self.adj_list = dict[Vertex, list[Vertex]]()\n # Add all vertices and edges\n for edge in edges:\n self.add_vertex(edge[0])\n self.add_vertex(edge[1])\n self.add_edge(edge[0], edge[1])\n\n def size(self) -> int:\n \"\"\"Get the number of vertices\"\"\"\n return len(self.adj_list)\n\n def add_edge(self, vet1: Vertex, vet2: Vertex):\n \"\"\"Add edge\"\"\"\n if vet1 not in self.adj_list or vet2 not in self.adj_list or vet1 == vet2:\n raise ValueError()\n # Add edge vet1 - vet2\n self.adj_list[vet1].append(vet2)\n self.adj_list[vet2].append(vet1)\n\n def remove_edge(self, vet1: Vertex, vet2: Vertex):\n \"\"\"Remove edge\"\"\"\n if vet1 not in self.adj_list or vet2 not in self.adj_list or vet1 == vet2:\n 
raise ValueError()\n # Remove edge vet1 - vet2\n self.adj_list[vet1].remove(vet2)\n self.adj_list[vet2].remove(vet1)\n\n def add_vertex(self, vet: Vertex):\n \"\"\"Add vertex\"\"\"\n if vet in self.adj_list:\n return\n # Add a new linked list to the adjacency list\n self.adj_list[vet] = []\n\n def remove_vertex(self, vet: Vertex):\n \"\"\"Remove vertex\"\"\"\n if vet not in self.adj_list:\n raise ValueError()\n # Remove the vertex vet's corresponding linked list from the adjacency list\n self.adj_list.pop(vet)\n # Traverse other vertices' linked lists, removing all edges containing vet\n for vertex in self.adj_list:\n if vet in self.adj_list[vertex]:\n self.adj_list[vertex].remove(vet)\n\n def print(self):\n \"\"\"Print the adjacency list\"\"\"\n print(\"Adjacency list =\")\n for vertex in self.adj_list:\n tmp = [v.val for v in self.adj_list[vertex]]\n print(f\"{vertex.val}: {tmp},\")\n</code></pre> graph_adjacency_list.cpp<pre><code>/* Undirected graph class based on adjacency list */\nclass GraphAdjList {\n public:\n // Adjacency list, key: vertex, value: all adjacent vertices of that vertex\n unordered_map<Vertex *, vector<Vertex *>> adjList;\n\n /* Remove a specified node from vector */\n void remove(vector<Vertex *> &vec, Vertex *vet) {\n for (int i = 0; i < vec.size(); i++) {\n if (vec[i] == vet) {\n vec.erase(vec.begin() + i);\n break;\n }\n }\n }\n\n /* Constructor */\n GraphAdjList(const vector<vector<Vertex *>> &edges) {\n // Add all vertices and edges\n for (const vector<Vertex *> &edge : edges) {\n addVertex(edge[0]);\n addVertex(edge[1]);\n addEdge(edge[0], edge[1]);\n }\n }\n\n /* Get the number of vertices */\n int size() {\n return adjList.size();\n }\n\n /* Add edge */\n void addEdge(Vertex *vet1, Vertex *vet2) {\n if (!adjList.count(vet1) || !adjList.count(vet2) || vet1 == vet2)\n throw invalid_argument(\"Vertex does not exist\");\n // Add edge vet1 - vet2\n adjList[vet1].push_back(vet2);\n adjList[vet2].push_back(vet1);\n }\n\n /* Remove edge 
*/\n void removeEdge(Vertex *vet1, Vertex *vet2) {\n if (!adjList.count(vet1) || !adjList.count(vet2) || vet1 == vet2)\n throw invalid_argument(\"Vertex does not exist\");\n // Remove edge vet1 - vet2\n remove(adjList[vet1], vet2);\n remove(adjList[vet2], vet1);\n }\n\n /* Add vertex */\n void addVertex(Vertex *vet) {\n if (adjList.count(vet))\n return;\n // Add a new linked list to the adjacency list\n adjList[vet] = vector<Vertex *>();\n }\n\n /* Remove vertex */\n void removeVertex(Vertex *vet) {\n if (!adjList.count(vet))\n throw invalid_argument(\"Vertex does not exist\");\n // Remove the vertex vet's corresponding linked list from the adjacency list\n adjList.erase(vet);\n // Traverse other vertices' linked lists, removing all edges containing vet\n for (auto &adj : adjList) {\n remove(adj.second, vet);\n }\n }\n\n /* Print the adjacency list */\n void print() {\n cout << \"Adjacency list =\" << endl;\n for (auto &adj : adjList) {\n const auto &key = adj.first;\n const auto &vec = adj.second;\n cout << key->val << \": \";\n printVector(vetsToVals(vec));\n }\n }\n};\n</code></pre> graph_adjacency_list.java<pre><code>/* Undirected graph class based on adjacency list */\nclass GraphAdjList {\n // Adjacency list, key: vertex, value: all adjacent vertices of that vertex\n Map<Vertex, List<Vertex>> adjList;\n\n /* Constructor */\n public GraphAdjList(Vertex[][] edges) {\n this.adjList = new HashMap<>();\n // Add all vertices and edges\n for (Vertex[] edge : edges) {\n addVertex(edge[0]);\n addVertex(edge[1]);\n addEdge(edge[0], edge[1]);\n }\n }\n\n /* Get the number of vertices */\n public int size() {\n return adjList.size();\n }\n\n /* Add edge */\n public void addEdge(Vertex vet1, Vertex vet2) {\n if (!adjList.containsKey(vet1) || !adjList.containsKey(vet2) || vet1 == vet2)\n throw new IllegalArgumentException();\n // Add edge vet1 - vet2\n adjList.get(vet1).add(vet2);\n adjList.get(vet2).add(vet1);\n }\n\n /* Remove edge */\n public void removeEdge(Vertex 
vet1, Vertex vet2) {\n if (!adjList.containsKey(vet1) || !adjList.containsKey(vet2) || vet1 == vet2)\n throw new IllegalArgumentException();\n // Remove edge vet1 - vet2\n adjList.get(vet1).remove(vet2);\n adjList.get(vet2).remove(vet1);\n }\n\n /* Add vertex */\n public void addVertex(Vertex vet) {\n if (adjList.containsKey(vet))\n return;\n // Add a new linked list to the adjacency list\n adjList.put(vet, new ArrayList<>());\n }\n\n /* Remove vertex */\n public void removeVertex(Vertex vet) {\n if (!adjList.containsKey(vet))\n throw new IllegalArgumentException();\n // Remove the vertex vet's corresponding linked list from the adjacency list\n adjList.remove(vet);\n // Traverse other vertices' linked lists, removing all edges containing vet\n for (List<Vertex> list : adjList.values()) {\n list.remove(vet);\n }\n }\n\n /* Print the adjacency list */\n public void print() {\n System.out.println(\"Adjacency list =\");\n for (Map.Entry<Vertex, List<Vertex>> pair : adjList.entrySet()) {\n List<Integer> tmp = new ArrayList<>();\n for (Vertex vertex : pair.getValue())\n tmp.add(vertex.val);\n System.out.println(pair.getKey().val + \": \" + tmp + \",\");\n }\n }\n}\n</code></pre> graph_adjacency_list.cs<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.go<pre><code>[class]{graphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.swift<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.js<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.ts<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.dart<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.rs<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.c<pre><code>[class]{AdjListNode}-[func]{}\n\n[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.kt<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> 
graph_adjacency_list.rb<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre> graph_adjacency_list.zig<pre><code>[class]{GraphAdjList}-[func]{}\n</code></pre>"},{"location":"chapter_graph/graph_operations/#923-efficiency-comparison","title":"9.2.3 \u00a0 Efficiency comparison","text":"<p>Assuming there are \\(n\\) vertices and \\(m\\) edges in the graph, Table 9-2 compares the time efficiency and space efficiency of the adjacency matrix and adjacency list.</p> <p> Table 9-2 \u00a0 Comparison of adjacency matrix and adjacency list </p> Adjacency matrix Adjacency list (Linked list) Adjacency list (Hash table) Determine adjacency \\(O(1)\\) \\(O(m)\\) \\(O(1)\\) Add an edge \\(O(1)\\) \\(O(1)\\) \\(O(1)\\) Remove an edge \\(O(1)\\) \\(O(m)\\) \\(O(1)\\) Add a vertex \\(O(n)\\) \\(O(1)\\) \\(O(1)\\) Remove a vertex \\(O(n^2)\\) \\(O(n + m)\\) \\(O(n)\\) Memory space usage \\(O(n^2)\\) \\(O(n + m)\\) \\(O(n + m)\\) <p>Observing Table 9-2, it seems that the adjacency list (hash table) has the best time efficiency and space efficiency. However, in practice, operating on edges in the adjacency matrix is more efficient, requiring only a single array access or assignment operation. Overall, the adjacency matrix exemplifies the principle of \"space for time\", while the adjacency list exemplifies \"time for space\".</p>"},{"location":"chapter_graph/graph_traversal/","title":"9.3 \u00a0 Graph traversal","text":"<p>Trees represent a \"one-to-many\" relationship, while graphs have a higher degree of freedom and can represent any \"many-to-many\" relationship. Therefore, we can consider trees as a special case of graphs. Clearly, tree traversal operations are also a special case of graph traversal operations.</p> <p>Both graphs and trees require the application of search algorithms to implement traversal operations. 
Graph traversal can be divided into two types: Breadth-First Search (BFS) and Depth-First Search (DFS).</p>"},{"location":"chapter_graph/graph_traversal/#931-breadth-first-search","title":"9.3.1 \u00a0 Breadth-first search","text":"<p>Breadth-first search is a near-to-far traversal method, starting from a certain node, always prioritizing the visit to the nearest vertices and expanding outwards layer by layer. As shown in Figure 9-9, starting from the top left vertex, first traverse all adjacent vertices of that vertex, then traverse all adjacent vertices of the next vertex, and so on, until all vertices have been visited.</p> <p></p> <p> Figure 9-9 \u00a0 Breadth-first traversal of a graph </p>"},{"location":"chapter_graph/graph_traversal/#1-algorithm-implementation","title":"1. \u00a0 Algorithm implementation","text":"<p>BFS is usually implemented with the help of a queue, as shown in the code below. The queue has a \"first in, first out\" property, which aligns with the BFS idea of traversing \"from near to far\".</p> <ol> <li>Add the starting vertex <code>startVet</code> to the queue and start the loop.</li> <li>In each iteration of the loop, pop the vertex at the front of the queue and record it as visited, then add all adjacent vertices of that vertex to the back of the queue.</li> <li>Repeat step <code>2.</code> until all vertices have been visited.</li> </ol> <p>To prevent revisiting vertices, we use a hash set <code>visited</code> to record which nodes have been visited.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig graph_bfs.py<pre><code>def graph_bfs(graph: GraphAdjList, start_vet: Vertex) -> list[Vertex]:\n \"\"\"Breadth-first traversal\"\"\"\n # Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\n # Vertex traversal sequence\n res = []\n # Hash set, used to record visited vertices\n visited = set[Vertex]([start_vet])\n # Queue used to implement BFS\n que = deque[Vertex]([start_vet])\n # Starting from 
vertex vet, loop until all vertices are visited\n while len(que) > 0:\n vet = que.popleft() # Dequeue the vertex at the head of the queue\n res.append(vet) # Record visited vertex\n # Traverse all adjacent vertices of that vertex\n for adj_vet in graph.adj_list[vet]:\n if adj_vet in visited:\n continue # Skip already visited vertices\n que.append(adj_vet) # Only enqueue unvisited vertices\n visited.add(adj_vet) # Mark the vertex as visited\n # Return the vertex traversal sequence\n return res\n</code></pre> graph_bfs.cpp<pre><code>/* Breadth-first traversal */\n// Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\nvector<Vertex *> graphBFS(GraphAdjList &graph, Vertex *startVet) {\n // Vertex traversal sequence\n vector<Vertex *> res;\n // Hash set, used to record visited vertices\n unordered_set<Vertex *> visited = {startVet};\n // Queue used to implement BFS\n queue<Vertex *> que;\n que.push(startVet);\n // Starting from vertex vet, loop until all vertices are visited\n while (!que.empty()) {\n Vertex *vet = que.front();\n que.pop(); // Dequeue the vertex at the head of the queue\n res.push_back(vet); // Record visited vertex\n // Traverse all adjacent vertices of that vertex\n for (auto adjVet : graph.adjList[vet]) {\n if (visited.count(adjVet))\n continue; // Skip already visited vertices\n que.push(adjVet); // Only enqueue unvisited vertices\n visited.emplace(adjVet); // Mark the vertex as visited\n }\n }\n // Return the vertex traversal sequence\n return res;\n}\n</code></pre> graph_bfs.java<pre><code>/* Breadth-first traversal */\n// Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\nList<Vertex> graphBFS(GraphAdjList graph, Vertex startVet) {\n // Vertex traversal sequence\n List<Vertex> res = new ArrayList<>();\n // Hash set, used to record visited vertices\n Set<Vertex> visited = new HashSet<>();\n visited.add(startVet);\n // Queue used to implement BFS\n 
Queue<Vertex> que = new LinkedList<>();\n    que.offer(startVet);\n    // Starting from vertex vet, loop until all vertices are visited\n    while (!que.isEmpty()) {\n        Vertex vet = que.poll(); // Dequeue the vertex at the head of the queue\n        res.add(vet);            // Record visited vertex\n        // Traverse all adjacent vertices of that vertex\n        for (Vertex adjVet : graph.adjList.get(vet)) {\n            if (visited.contains(adjVet))\n                continue;        // Skip already visited vertices\n            que.offer(adjVet);   // Only enqueue unvisited vertices\n            visited.add(adjVet); // Mark the vertex as visited\n        }\n    }\n    // Return the vertex traversal sequence\n    return res;\n}\n</code></pre> graph_bfs.cs<pre><code>[class]{graph_bfs}-[func]{GraphBFS}\n</code></pre> graph_bfs.go<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.swift<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.js<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.ts<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.dart<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.rs<pre><code>[class]{}-[func]{graph_bfs}\n</code></pre> graph_bfs.c<pre><code>[class]{Queue}-[func]{}\n\n[class]{}-[func]{isVisited}\n\n[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.kt<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> graph_bfs.rb<pre><code>[class]{}-[func]{graph_bfs}\n</code></pre> graph_bfs.zig<pre><code>[class]{}-[func]{graphBFS}\n</code></pre> <p>The code is relatively abstract; it is suggested to compare it with Figure 9-10 to deepen your understanding.</p> <1><2><3><4><5><6><7><8><9><10><11> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 9-10 \u00a0 Steps of breadth-first search of a graph </p> <p>Is the sequence of breadth-first traversal unique?</p> <p>Not unique. Breadth-first traversal only requires traversing in a \"from near to far\" order, and the traversal order of multiple vertices at the same distance can be arbitrarily shuffled. 
For example, in Figure 9-10, the visitation order of vertices \\(1\\) and \\(3\\) can be switched, as can the order of vertices \\(2\\), \\(4\\), and \\(6\\).</p>"},{"location":"chapter_graph/graph_traversal/#2-complexity-analysis","title":"2. \u00a0 Complexity analysis","text":"<p>Time complexity: All vertices will be enqueued and dequeued once, using \\(O(|V|)\\) time; in the process of traversing adjacent vertices, since it is an undirected graph, all edges will be visited \\(2\\) times, using \\(O(2|E|)\\) time; overall using \\(O(|V| + |E|)\\) time.</p> <p>Space complexity: The maximum number of vertices in list <code>res</code>, hash set <code>visited</code>, and queue <code>que</code> is \\(|V|\\), using \\(O(|V|)\\) space.</p>"},{"location":"chapter_graph/graph_traversal/#932-depth-first-search","title":"9.3.2 \u00a0 Depth-first search","text":"<p>Depth-first search is a traversal method that prioritizes going as far as possible and then backtracks when no further paths are available. As shown in Figure 9-11, starting from the top left vertex, visit some adjacent vertex of the current vertex until no further path is available, then return and continue until all vertices are traversed.</p> <p></p> <p> Figure 9-11 \u00a0 Depth-first traversal of a graph </p>"},{"location":"chapter_graph/graph_traversal/#1-algorithm-implementation_1","title":"1. \u00a0 Algorithm implementation","text":"<p>This \"go as far as possible and then return\" algorithm paradigm is usually implemented based on recursion. 
Similar to breadth-first search, in depth-first search, we also need the help of a hash set <code>visited</code> to record the visited vertices to avoid revisiting.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig graph_dfs.py<pre><code>def dfs(graph: GraphAdjList, visited: set[Vertex], res: list[Vertex], vet: Vertex):\n \"\"\"Depth-first traversal helper function\"\"\"\n res.append(vet) # Record visited vertex\n visited.add(vet) # Mark the vertex as visited\n # Traverse all adjacent vertices of that vertex\n for adjVet in graph.adj_list[vet]:\n if adjVet in visited:\n continue # Skip already visited vertices\n # Recursively visit adjacent vertices\n dfs(graph, visited, res, adjVet)\n\ndef graph_dfs(graph: GraphAdjList, start_vet: Vertex) -> list[Vertex]:\n \"\"\"Depth-first traversal\"\"\"\n # Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\n # Vertex traversal sequence\n res = []\n # Hash set, used to record visited vertices\n visited = set[Vertex]()\n dfs(graph, visited, res, start_vet)\n return res\n</code></pre> graph_dfs.cpp<pre><code>/* Depth-first traversal helper function */\nvoid dfs(GraphAdjList &graph, unordered_set<Vertex *> &visited, vector<Vertex *> &res, Vertex *vet) {\n res.push_back(vet); // Record visited vertex\n visited.emplace(vet); // Mark the vertex as visited\n // Traverse all adjacent vertices of that vertex\n for (Vertex *adjVet : graph.adjList[vet]) {\n if (visited.count(adjVet))\n continue; // Skip already visited vertices\n // Recursively visit adjacent vertices\n dfs(graph, visited, res, adjVet);\n }\n}\n\n/* Depth-first traversal */\n// Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\nvector<Vertex *> graphDFS(GraphAdjList &graph, Vertex *startVet) {\n // Vertex traversal sequence\n vector<Vertex *> res;\n // Hash set, used to record visited vertices\n unordered_set<Vertex *> visited;\n dfs(graph, visited, res, startVet);\n return 
res;\n}\n</code></pre> graph_dfs.java<pre><code>/* Depth-first traversal helper function */\nvoid dfs(GraphAdjList graph, Set<Vertex> visited, List<Vertex> res, Vertex vet) {\n res.add(vet); // Record visited vertex\n visited.add(vet); // Mark the vertex as visited\n // Traverse all adjacent vertices of that vertex\n for (Vertex adjVet : graph.adjList.get(vet)) {\n if (visited.contains(adjVet))\n continue; // Skip already visited vertices\n // Recursively visit adjacent vertices\n dfs(graph, visited, res, adjVet);\n }\n}\n\n/* Depth-first traversal */\n// Use adjacency list to represent the graph, to obtain all adjacent vertices of a specified vertex\nList<Vertex> graphDFS(GraphAdjList graph, Vertex startVet) {\n // Vertex traversal sequence\n List<Vertex> res = new ArrayList<>();\n // Hash set, used to record visited vertices\n Set<Vertex> visited = new HashSet<>();\n dfs(graph, visited, res, startVet);\n return res;\n}\n</code></pre> graph_dfs.cs<pre><code>[class]{graph_dfs}-[func]{DFS}\n\n[class]{graph_dfs}-[func]{GraphDFS}\n</code></pre> graph_dfs.go<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.swift<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.js<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.ts<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.dart<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.rs<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graph_dfs}\n</code></pre> graph_dfs.c<pre><code>[class]{}-[func]{isVisited}\n\n[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.kt<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> graph_dfs.rb<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graph_dfs}\n</code></pre> graph_dfs.zig<pre><code>[class]{}-[func]{dfs}\n\n[class]{}-[func]{graphDFS}\n</code></pre> 
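To make both traversals concrete, here is a minimal, runnable sketch that exercises BFS and DFS on a plain dictionary adjacency list. The dictionary stands in for the book's <code>GraphAdjList</code> class, and the sample graph is hypothetical, chosen only for illustration:

```python
from collections import deque

# Hypothetical sample graph as a plain dict {vertex: [adjacent vertices]},
# a simplified stand-in for the GraphAdjList class used in the listings above
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}

def graph_bfs(adj: dict[int, list[int]], start: int) -> list[int]:
    """Breadth-first traversal: expand outward layer by layer via a queue"""
    visited = {start}
    que = deque([start])
    res = []
    while que:
        vet = que.popleft()  # Dequeue the vertex at the head of the queue
        res.append(vet)      # Record visited vertex
        for adj_vet in adj[vet]:
            if adj_vet not in visited:
                visited.add(adj_vet)  # Mark as visited before enqueueing
                que.append(adj_vet)   # Only enqueue unvisited vertices
    return res

def graph_dfs(adj: dict[int, list[int]], start: int) -> list[int]:
    """Depth-first traversal: go as deep as possible, then backtrack"""
    visited, res = set(), []
    def dfs(vet: int):
        res.append(vet)   # Record visited vertex
        visited.add(vet)  # Mark the vertex as visited
        for adj_vet in adj[vet]:
            if adj_vet not in visited:
                dfs(adj_vet)  # Recursively visit adjacent vertices
    dfs(start)
    return res

print(graph_bfs(adj, 0))  # [0, 1, 3, 2, 4, 5] (near-to-far order)
print(graph_dfs(adj, 0))  # [0, 1, 2, 5, 4, 3] (deep-then-backtrack order)
```

Note that both sequences depend on the order in which neighbors are listed, which is exactly why neither traversal order is unique.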
<p>The algorithm process of depth-first search is shown in Figure 9-12.</p> <ul> <li>Dashed lines represent downward recursion, indicating that a new recursive method has been initiated to visit a new vertex.</li> <li>Curved dashed lines represent upward backtracking, indicating that this recursive method has returned to the position where this method was initiated.</li> </ul> <p>To deepen the understanding, it is suggested to combine Figure 9-12 with the code to simulate (or draw) the entire DFS process in your mind, including when each recursive method is initiated and when it returns.</p> <1><2><3><4><5><6><7><8><9><10><11> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 9-12 \u00a0 Steps of depth-first search of a graph </p> <p>Is the sequence of depth-first traversal unique?</p> <p>Similar to breadth-first traversal, the order of the depth-first traversal sequence is also not unique. Given a certain vertex, exploring in any direction first is possible, that is, the order of adjacent vertices can be arbitrarily shuffled, all being part of depth-first traversal.</p> <p>Taking tree traversal as an example, \"root \\(\\rightarrow\\) left \\(\\rightarrow\\) right\", \"left \\(\\rightarrow\\) root \\(\\rightarrow\\) right\", \"left \\(\\rightarrow\\) right \\(\\rightarrow\\) root\" correspond to pre-order, in-order, and post-order traversals, respectively. They showcase three types of traversal priorities, yet all three are considered depth-first traversal.</p>"},{"location":"chapter_graph/graph_traversal/#2-complexity-analysis_1","title":"2. 
\u00a0 Complexity analysis","text":"<p>Time complexity: All vertices will be visited once, using \\(O(|V|)\\) time; all edges will be visited twice, using \\(O(2|E|)\\) time; overall using \\(O(|V| + |E|)\\) time.</p> <p>Space complexity: The maximum number of vertices in list <code>res</code>, hash set <code>visited</code> is \\(|V|\\), and the maximum recursion depth is \\(|V|\\), therefore using \\(O(|V|)\\) space.</p>"},{"location":"chapter_graph/summary/","title":"9.4 \u00a0 Summary","text":""},{"location":"chapter_graph/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>A graph consists of vertices and edges and can be represented as a set comprising a group of vertices and a group of edges.</li> <li>Compared to linear relationships (linked lists) and divide-and-conquer relationships (trees), network relationships (graphs) have a higher degree of freedom and are therefore more complex.</li> <li>The edges of a directed graph have directionality, any vertex in a connected graph is reachable, and each edge in a weighted graph contains a weight variable.</li> <li>Adjacency matrices use matrices to represent graphs, with each row (column) representing a vertex and matrix elements representing edges, using \\(1\\) or \\(0\\) to indicate the presence or absence of an edge between two vertices. Adjacency matrices are highly efficient for add, delete, find, and modify operations, but they consume more space.</li> <li>Adjacency lists use multiple linked lists to represent graphs, with the \\(i^{th}\\) list corresponding to vertex \\(i\\), containing all its adjacent vertices. 
Adjacency lists save more space compared to adjacency matrices, but since it is necessary to traverse the list to find edges, their time efficiency is lower.</li> <li>When the linked lists in the adjacency list are too long, they can be converted into red-black trees or hash tables to improve query efficiency.</li> <li>From the perspective of algorithmic thinking, adjacency matrices embody the principle of \"space for time,\" while adjacency lists embody \"time for space.\"</li> <li>Graphs can be used to model various real systems, such as social networks, subway routes, etc.</li> <li>A tree is a special case of a graph, and tree traversal is also a special case of graph traversal.</li> <li>Breadth-first traversal of a graph is a search method that expands layer by layer from near to far, usually implemented with a queue.</li> <li>Depth-first traversal of a graph is a search method that prefers to go as deep as possible and backtracks when no further paths are available, often based on recursion.</li> </ul>"},{"location":"chapter_graph/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Is a path defined as a sequence of vertices or a sequence of edges?</p> <p>Definitions vary between different language versions on Wikipedia: the English version defines a path as \"a sequence of edges,\" while the Chinese version defines it as \"a sequence of vertices.\" Here is the original text from the English version: In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices.</p> <p>In this document, a path is considered a sequence of edges, rather than a sequence of vertices. This is because there might be multiple edges connecting two vertices, in which case each edge corresponds to a path.</p> <p>Q: In a disconnected graph, are there points that cannot be traversed to?</p> <p>In a disconnected graph, starting from a certain vertex, there is at least one vertex that cannot be reached. 
Traversing a disconnected graph requires setting multiple starting points to traverse all connected components of the graph.</p> <p>Q: In an adjacency list, does the order of \"all vertices connected to that vertex\" matter?</p> <p>It can be in any order. However, in practical applications, it might be necessary to sort according to certain rules, such as the order in which vertices are added, or the order of vertex values, etc., to facilitate the quick search for vertices with certain extremal values.</p>"},{"location":"chapter_greedy/","title":"Chapter 15. \u00a0 Greedy","text":"<p>Abstract</p> <p>Sunflowers turn towards the sun, always seeking the greatest possible growth for themselves.</p> <p>A greedy strategy guides us to the best answer step by step through rounds of simple choices.</p>"},{"location":"chapter_greedy/#chapter-contents","title":"Chapter contents","text":"<ul> <li>15.1 \u00a0 Greedy algorithms</li> <li>15.2 \u00a0 Fractional knapsack problem</li> <li>15.3 \u00a0 Maximum capacity problem</li> <li>15.4 \u00a0 Maximum product cutting problem</li> <li>15.5 \u00a0 Summary</li> </ul>"},{"location":"chapter_greedy/fractional_knapsack_problem/","title":"15.2 \u00a0 Fractional knapsack problem","text":"<p>Question</p> <p>Given \\(n\\) items, the weight of the \\(i\\)-th item is \\(wgt[i-1]\\) and its value is \\(val[i-1]\\), and a knapsack with a capacity of \\(cap\\). Each item can be chosen only once, but a part of an item can be selected, with its value calculated based on the proportion of the weight chosen. What is the maximum value of the items in the knapsack under the limited capacity? 
An example is shown in Figure 15-3.</p> <p></p> <p> Figure 15-3 \u00a0 Example data of the fractional knapsack problem </p> <p>The fractional knapsack problem is very similar overall to the 0-1 knapsack problem, involving the current item \\(i\\) and capacity \\(c\\), aiming to maximize the value within the limited capacity of the knapsack.</p> <p>The difference is that, in this problem, only a part of an item can be chosen. As shown in Figure 15-4, we can arbitrarily split the items and calculate the corresponding value based on the weight proportion.</p> <ol> <li>For item \\(i\\), its value per unit weight is \\(val[i-1] / wgt[i-1]\\), referred to as the unit value.</li> <li>Suppose we put a part of item \\(i\\) with weight \\(w\\) into the knapsack, then the value added to the knapsack is \\(w \\times val[i-1] / wgt[i-1]\\).</li> </ol> <p></p> <p> Figure 15-4 \u00a0 Value per unit weight of the item </p>"},{"location":"chapter_greedy/fractional_knapsack_problem/#1-greedy-strategy-determination","title":"1. \u00a0 Greedy strategy determination","text":"<p>Maximizing the total value of the items in the knapsack essentially means maximizing the value per unit weight. From this, the greedy strategy shown in Figure 15-5 can be deduced.</p> <ol> <li>Sort the items by their unit value from high to low.</li> <li>Iterate over all items, greedily choosing the item with the highest unit value in each round.</li> <li>If the remaining capacity of the knapsack is insufficient, use part of the current item to fill the knapsack.</li> </ol> <p></p> <p> Figure 15-5 \u00a0 Greedy strategy of the fractional knapsack problem </p>"},{"location":"chapter_greedy/fractional_knapsack_problem/#2-code-implementation","title":"2. \u00a0 Code implementation","text":"<p>We have created an <code>Item</code> class in order to sort the items by their unit value. 
We loop and make greedy choices until the knapsack is full, then exit and return the solution:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig fractional_knapsack.py<pre><code>class Item:\n \"\"\"Item\"\"\"\n\n def __init__(self, w: int, v: int):\n self.w = w # Item weight\n self.v = v # Item value\n\ndef fractional_knapsack(wgt: list[int], val: list[int], cap: int) -> int:\n \"\"\"Fractional knapsack: Greedy\"\"\"\n # Create an item list, containing two properties: weight, value\n items = [Item(w, v) for w, v in zip(wgt, val)]\n # Sort by unit value item.v / item.w from high to low\n items.sort(key=lambda item: item.v / item.w, reverse=True)\n # Loop for greedy selection\n res = 0\n for item in items:\n if item.w <= cap:\n # If the remaining capacity is sufficient, put the entire item into the knapsack\n res += item.v\n cap -= item.w\n else:\n # If the remaining capacity is insufficient, put part of the item into the knapsack\n res += (item.v / item.w) * cap\n # No remaining capacity left, thus break the loop\n break\n return res\n</code></pre> fractional_knapsack.cpp<pre><code>/* Item */\nclass Item {\n public:\n int w; // Item weight\n int v; // Item value\n\n Item(int w, int v) : w(w), v(v) {\n }\n};\n\n/* Fractional knapsack: Greedy */\ndouble fractionalKnapsack(vector<int> &wgt, vector<int> &val, int cap) {\n // Create an item list, containing two properties: weight, value\n vector<Item> items;\n for (int i = 0; i < wgt.size(); i++) {\n items.push_back(Item(wgt[i], val[i]));\n }\n // Sort by unit value item.v / item.w from high to low\n sort(items.begin(), items.end(), [](Item &a, Item &b) { return (double)a.v / a.w > (double)b.v / b.w; });\n // Loop for greedy selection\n double res = 0;\n for (auto &item : items) {\n if (item.w <= cap) {\n // If the remaining capacity is sufficient, put the entire item into the knapsack\n res += item.v;\n cap -= item.w;\n } else {\n // If the remaining capacity is insufficient, put part of the item into the knapsack\n 
res += (double)item.v / item.w * cap;\n // No remaining capacity left, thus break the loop\n break;\n }\n }\n return res;\n}\n</code></pre> fractional_knapsack.java<pre><code>/* Item */\nclass Item {\n int w; // Item weight\n int v; // Item value\n\n public Item(int w, int v) {\n this.w = w;\n this.v = v;\n }\n}\n\n/* Fractional knapsack: Greedy */\ndouble fractionalKnapsack(int[] wgt, int[] val, int cap) {\n // Create an item list, containing two properties: weight, value\n Item[] items = new Item[wgt.length];\n for (int i = 0; i < wgt.length; i++) {\n items[i] = new Item(wgt[i], val[i]);\n }\n // Sort by unit value item.v / item.w from high to low\n Arrays.sort(items, Comparator.comparingDouble(item -> -((double) item.v / item.w)));\n // Loop for greedy selection\n double res = 0;\n for (Item item : items) {\n if (item.w <= cap) {\n // If the remaining capacity is sufficient, put the entire item into the knapsack\n res += item.v;\n cap -= item.w;\n } else {\n // If the remaining capacity is insufficient, put part of the item into the knapsack\n res += (double) item.v / item.w * cap;\n // No remaining capacity left, thus break the loop\n break;\n }\n }\n return res;\n}\n</code></pre> fractional_knapsack.cs<pre><code>[class]{Item}-[func]{}\n\n[class]{fractional_knapsack}-[func]{FractionalKnapsack}\n</code></pre> fractional_knapsack.go<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.swift<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.js<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.ts<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.dart<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> 
fractional_knapsack.rs<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractional_knapsack}\n</code></pre> fractional_knapsack.c<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.kt<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> fractional_knapsack.rb<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractional_knapsack}\n</code></pre> fractional_knapsack.zig<pre><code>[class]{Item}-[func]{}\n\n[class]{}-[func]{fractionalKnapsack}\n</code></pre> <p>Apart from sorting, in the worst case, the entire list of items needs to be traversed, hence the time complexity is \\(O(n)\\), where \\(n\\) is the number of items.</p> <p>Since an <code>Item</code> object list is initialized, the space complexity is \\(O(n)\\).</p>"},{"location":"chapter_greedy/fractional_knapsack_problem/#3-correctness-proof","title":"3. \u00a0 Correctness proof","text":"<p>Using proof by contradiction. Suppose item \\(x\\) has the highest unit value, and some algorithm yields a maximum value <code>res</code>, but the solution does not include item \\(x\\).</p> <p>Now remove a unit weight of any item from the knapsack and replace it with a unit weight of item \\(x\\). Since the unit value of item \\(x\\) is the highest, the total value after replacement will definitely be greater than <code>res</code>. This contradicts the assumption that <code>res</code> is the optimal solution, proving that the optimal solution must include item \\(x\\).</p> <p>For other items in this solution, we can also construct the above contradiction. 
Overall, items with greater unit value are always better choices, proving that the greedy strategy is effective.</p> <p>As shown in Figure 15-6, if the item weight and unit value are viewed as the horizontal and vertical axes of a two-dimensional chart respectively, the fractional knapsack problem can be transformed into \"seeking the largest area enclosed within a limited horizontal axis range\". This analogy can help us understand the effectiveness of the greedy strategy from a geometric perspective.</p> <p></p> <p> Figure 15-6 \u00a0 Geometric representation of the fractional knapsack problem </p>"},{"location":"chapter_greedy/greedy_algorithm/","title":"15.1 \u00a0 Greedy algorithms","text":"<p>Greedy algorithm is a common algorithm for solving optimization problems, which fundamentally involves making the seemingly best choice at each decision-making stage of the problem, i.e., greedily making locally optimal decisions in hopes of finding a globally optimal solution. Greedy algorithms are concise and efficient, and are widely used in many practical problems.</p> <p>Greedy algorithms and dynamic programming are both commonly used to solve optimization problems. They share some similarities, such as relying on the property of optimal substructure, but they operate differently.</p> <ul> <li>Dynamic programming considers all previous decisions at the current decision stage and uses solutions to past subproblems to construct solutions for the current subproblem.</li> <li>Greedy algorithms do not consider past decisions; instead, they proceed with greedy choices, continually narrowing the scope of the problem until it is solved.</li> </ul> <p>Let's first understand the working principle of the greedy algorithm through the example of \"coin change,\" which has been introduced in the \"Complete Knapsack Problem\" chapter. 
I believe you are already familiar with it.</p> <p>Question</p> <p>Given \\(n\\) types of coins, where the denomination of the \\(i\\)th type of coin is \\(coins[i - 1]\\), and the target amount is \\(amt\\), with each type of coin available indefinitely, what is the minimum number of coins needed to make up the target amount? If it is not possible to make up the target amount, return \\(-1\\).</p> <p>The greedy strategy adopted in this problem is shown in Figure 15-1. Given the target amount, we greedily choose the coin that is closest to and not greater than it, repeatedly following this step until the target amount is met.</p> <p></p> <p> Figure 15-1 \u00a0 Greedy strategy for coin change </p> <p>The implementation code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig coin_change_greedy.py<pre><code>def coin_change_greedy(coins: list[int], amt: int) -> int:\n    \"\"\"Coin change: Greedy\"\"\"\n    # Assume coins list is ordered\n    i = len(coins) - 1\n    count = 0\n    # Loop for greedy selection until no remaining amount\n    while amt > 0:\n        # Find the coin closest to and not greater than the remaining amount\n        while i > 0 and coins[i] > amt:\n            i -= 1\n        # Choose coins[i]\n        amt -= coins[i]\n        count += 1\n    # If no feasible solution is found, return -1\n    return count if amt == 0 else -1\n</code></pre> coin_change_greedy.cpp<pre><code>/* Coin change: Greedy */\nint coinChangeGreedy(vector<int> &coins, int amt) {\n    // Assume coins list is ordered\n    int i = coins.size() - 1;\n    int count = 0;\n    // Loop for greedy selection until no remaining amount\n    while (amt > 0) {\n        // Find the coin closest to and not greater than the remaining amount\n        while (i > 0 && coins[i] > amt) {\n            i--;\n        }\n        // Choose coins[i]\n        amt -= coins[i];\n        count++;\n    }\n    // If no feasible solution is found, return -1\n    return amt == 0 ? 
count : -1;\n}\n</code></pre> coin_change_greedy.java<pre><code>/* Coin change: Greedy */\nint coinChangeGreedy(int[] coins, int amt) {\n // Assume coins list is ordered\n int i = coins.length - 1;\n int count = 0;\n // Loop for greedy selection until no remaining amount\n while (amt > 0) {\n // Find the smallest coin close to and less than the remaining amount\n while (i > 0 && coins[i] > amt) {\n i--;\n }\n // Choose coins[i]\n amt -= coins[i];\n count++;\n }\n // If no feasible solution is found, return -1\n return amt == 0 ? count : -1;\n}\n</code></pre> coin_change_greedy.cs<pre><code>[class]{coin_change_greedy}-[func]{CoinChangeGreedy}\n</code></pre> coin_change_greedy.go<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.swift<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.js<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.ts<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.dart<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.rs<pre><code>[class]{}-[func]{coin_change_greedy}\n</code></pre> coin_change_greedy.c<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.kt<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> coin_change_greedy.rb<pre><code>[class]{}-[func]{coin_change_greedy}\n</code></pre> coin_change_greedy.zig<pre><code>[class]{}-[func]{coinChangeGreedy}\n</code></pre> <p>You might exclaim: So clean! The greedy algorithm solves the coin change problem in about ten lines of code.</p>"},{"location":"chapter_greedy/greedy_algorithm/#1511-advantages-and-limitations-of-greedy-algorithms","title":"15.1.1 \u00a0 Advantages and limitations of greedy algorithms","text":"<p>Greedy algorithms are not only straightforward and simple to implement, but they are also usually very efficient. 
In the code above, if the smallest coin denomination is \\(\\min(coins)\\), the greedy choice loops at most \\(amt / \\min(coins)\\) times, giving a time complexity of \\(O(amt / \\min(coins))\\). This is an order of magnitude smaller than the time complexity of the dynamic programming solution, which is \\(O(n \\times amt)\\).</p> <p>However, for some combinations of coin denominations, greedy algorithms cannot find the optimal solution. Figure 15-2 provides two examples.</p> <ul> <li>Positive example \\(coins = [1, 5, 10, 20, 50, 100]\\): In this coin combination, given any \\(amt\\), the greedy algorithm can find the optimal solution.</li> <li>Negative example \\(coins = [1, 20, 50]\\): Suppose \\(amt = 60\\); the greedy algorithm can only find the combination \\(50 + 1 \\times 10\\), totaling 11 coins, but dynamic programming can find the optimal solution of \\(20 + 20 + 20\\), needing only 3 coins.</li> <li>Negative example \\(coins = [1, 49, 50]\\): Suppose \\(amt = 98\\); the greedy algorithm can only find the combination \\(50 + 1 \\times 48\\), totaling 49 coins, but dynamic programming can find the optimal solution of \\(49 + 49\\), needing only 2 coins.</li> </ul> <p></p> <p> Figure 15-2 \u00a0 Examples where greedy algorithms do not find the optimal solution </p> <p>This means that for the coin change problem, greedy algorithms cannot guarantee finding the globally optimal solution, and they might find a very poor solution. Such problems are better suited to dynamic programming.</p> <p>Generally, the suitability of greedy algorithms falls into two categories.</p> <ol> <li>Guaranteed to find the optimal solution: In these cases, greedy algorithms are often the best choice, as they tend to be more efficient than backtracking or dynamic programming.</li> <li>Can find a near-optimal solution: Greedy algorithms are also applicable here. 
For many complex problems, finding the global optimal solution is very challenging, and being able to find a high-efficiency suboptimal solution is also very commendable.</li> </ol>"},{"location":"chapter_greedy/greedy_algorithm/#1512-characteristics-of-greedy-algorithms","title":"15.1.2 \u00a0 Characteristics of greedy algorithms","text":"<p>So, what kind of problems are suitable for solving with greedy algorithms? Or rather, under what conditions can greedy algorithms guarantee to find the optimal solution?</p> <p>Compared to dynamic programming, greedy algorithms have stricter usage conditions, focusing mainly on two properties of the problem.</p> <ul> <li>Greedy choice property: Only when the locally optimal choice can always lead to a globally optimal solution can greedy algorithms guarantee to obtain the optimal solution.</li> <li>Optimal substructure: The optimal solution to the original problem contains the optimal solutions to its subproblems.</li> </ul> <p>Optimal substructure has already been introduced in the \"Dynamic Programming\" chapter, so it is not discussed further here. It's important to note that some problems do not have an obvious optimal substructure, but can still be solved using greedy algorithms.</p> <p>We mainly explore the method for determining the greedy choice property. Although its description seems simple, in practice, proving the greedy choice property for many problems is not easy.</p> <p>For example, in the coin change problem, although we can easily cite counterexamples to disprove the greedy choice property, proving it is much more challenging. If asked, what conditions must a coin combination meet to be solvable using a greedy algorithm? 
We often have to rely on intuition or examples to provide an ambiguous answer, as it is difficult to provide a rigorous mathematical proof.</p> <p>Quote</p> <p>A paper presents an algorithm with a time complexity of \\(O(n^3)\\) for determining whether a coin combination can use a greedy algorithm to find the optimal solution for any amount.</p> <p>Pearson, D. A polynomial-time algorithm for the change-making problem[J]. Operations Research Letters, 2005, 33(3): 231-234.</p>"},{"location":"chapter_greedy/greedy_algorithm/#1513-steps-for-solving-problems-with-greedy-algorithms","title":"15.1.3 \u00a0 Steps for solving problems with greedy algorithms","text":"<p>The problem-solving process for greedy problems can generally be divided into the following three steps.</p> <ol> <li>Problem analysis: Sort out and understand the characteristics of the problem, including state definition, optimization objectives, and constraints, etc. This step is also involved in backtracking and dynamic programming.</li> <li>Determine the greedy strategy: Determine how to make a greedy choice at each step. This strategy can reduce the scale of the problem at each step and eventually solve the entire problem.</li> <li>Proof of correctness: It is usually necessary to prove that the problem has both a greedy choice property and optimal substructure. This step may require mathematical proofs, such as induction or reductio ad absurdum.</li> </ol> <p>Determining the greedy strategy is the core step in solving the problem, but it may not be easy to implement, mainly for the following reasons.</p> <ul> <li>Greedy strategies vary greatly between different problems. For many problems, the greedy strategy is fairly straightforward, and we can come up with it through some general thinking and attempts. 
However, for some complex problems, the greedy strategy may be very elusive, which is a real test of individual problem-solving experience and algorithmic capability.</li> <li>Some greedy strategies are quite misleading. When we confidently design a greedy strategy, write the code, and submit it for testing, it is quite possible that some test cases will not pass. This is because the designed greedy strategy is only \"partially correct,\" as described above with the coin change example.</li> </ul> <p>To ensure accuracy, we should provide rigorous mathematical proofs for the greedy strategy, usually involving reductio ad absurdum or mathematical induction.</p> <p>However, proving correctness may not be an easy task. If we are at a loss, we usually choose to debug the code based on test cases, modifying and verifying the greedy strategy step by step.</p>"},{"location":"chapter_greedy/greedy_algorithm/#1514-typical-problems-solved-by-greedy-algorithms","title":"15.1.4 \u00a0 Typical problems solved by greedy algorithms","text":"<p>Greedy algorithms are often applied to optimization problems that satisfy the properties of greedy choice and optimal substructure. Below are some typical greedy algorithm problems.</p> <ul> <li>Coin change problem: In some coin combinations, the greedy algorithm always provides the optimal solution.</li> <li>Interval scheduling problem: Suppose you have several tasks, each of which takes place over a period of time. Your goal is to complete as many tasks as possible. If you always choose the task that ends the earliest, then the greedy algorithm can achieve the optimal solution.</li> <li>Fractional knapsack problem: Given a set of items and a carrying capacity, your goal is to select a set of items such that the total weight does not exceed the carrying capacity and the total value is maximized. 
If you always choose the item with the highest value-to-weight ratio (value / weight), the greedy algorithm can achieve the optimal solution in some cases.</li> <li>Stock trading problem: Given the historical prices of a stock, you may complete multiple trades, but if you already hold the stock, you cannot buy again until you have sold it. The goal is to achieve the maximum profit.</li> <li>Huffman coding: Huffman coding is a greedy algorithm used for lossless data compression. By constructing a Huffman tree, it always merges the two nodes with the lowest frequency, resulting in a Huffman tree with the minimum weighted path length (coding length).</li> <li>Dijkstra's algorithm: It is a greedy algorithm for solving the shortest path problem from a given source vertex to all other vertices.</li> </ul>"},{"location":"chapter_greedy/max_capacity_problem/","title":"15.3 \u00a0 Maximum capacity problem","text":"<p>Question</p> <p>Input an array \\(ht\\), where each element represents the height of a vertical partition. Any two partitions in the array, along with the space between them, can form a container.</p> <p>The capacity of the container is the product of the height and the width (area), where the height is determined by the shorter partition, and the width is the difference in array indices between the two partitions.</p> <p>Please select two partitions in the array that maximize the container's capacity and return this maximum capacity. An example is shown in Figure 15-7.</p> <p></p> <p> Figure 15-7 \u00a0 Example data for the maximum capacity problem </p> <p>The container is formed by any two partitions; therefore, the state of this problem is represented by the indices of the two partitions, denoted as \\([i, j]\\).</p> <p>According to the problem statement, the capacity equals the product of height and width, where the height is determined by the shorter partition, and the width is the difference in array indices between the two partitions. 
The formula for capacity \\(cap[i, j]\\) is:</p> \\[ cap[i, j] = \\min(ht[i], ht[j]) \\times (j - i) \\] <p>Assuming the length of the array is \\(n\\), the number of combinations of two partitions (total number of states) is \\(C_n^2 = \\frac{n(n - 1)}{2}\\). The most straightforward approach is to enumerate all possible states, resulting in a time complexity of \\(O(n^2)\\).</p>"},{"location":"chapter_greedy/max_capacity_problem/#1-determination-of-a-greedy-strategy","title":"1. \u00a0 Determination of a greedy strategy","text":"<p>There is a more efficient solution to this problem. As shown in Figure 15-8, we select a state \\([i, j]\\) where the indices \\(i < j\\) and the height \\(ht[i] < ht[j]\\), meaning \\(i\\) is the shorter partition, and \\(j\\) is the taller one.</p> <p></p> <p> Figure 15-8 \u00a0 Initial state </p> <p>As shown in Figure 15-9, if we move the taller partition \\(j\\) closer to the shorter partition \\(i\\), the capacity will definitely decrease.</p> <p>This is because when moving the taller partition \\(j\\), the width \\(j-i\\) definitely decreases; and since the height is determined by the shorter partition, the height can only remain the same (if \\(i\\) remains the shorter partition) or decrease (if the moved \\(j\\) becomes the shorter partition).</p> <p></p> <p> Figure 15-9 \u00a0 State after moving the taller partition inward </p> <p>Conversely, we can only possibly increase the capacity by moving the shorter partition \\(i\\) inward. Although the width will definitely decrease, the height may increase (if the moved shorter partition \\(i\\) becomes taller). 
For example, in Figure 15-10, the area increases after moving the shorter partition.</p> <p></p> <p> Figure 15-10 \u00a0 State after moving the shorter partition inward </p> <p>This leads us to the greedy strategy for this problem: initialize two pointers at the ends of the container, and in each round, move the pointer corresponding to the shorter partition inward until the two pointers meet.</p> <p>Figure 15-11 illustrates the execution of the greedy strategy.</p> <ol> <li>Initially, the pointers \\(i\\) and \\(j\\) are positioned at the ends of the array.</li> <li>Calculate the current state's capacity \\(cap[i, j]\\) and update the maximum capacity.</li> <li>Compare the heights of partitions \\(i\\) and \\(j\\), and move the shorter partition inward by one step.</li> <li>Repeat steps <code>2.</code> and <code>3.</code> until \\(i\\) and \\(j\\) meet.</li> </ol> <1><2><3><4><5><6><7><8><9> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 15-11 \u00a0 The greedy process for maximum capacity problem </p>"},{"location":"chapter_greedy/max_capacity_problem/#2-implementation","title":"2. 
\u00a0 Implementation","text":"<p>The code loops at most \\(n\\) times, thus the time complexity is \\(O(n)\\).</p> <p>The variables \\(i\\), \\(j\\), and \\(res\\) use a constant amount of extra space, thus the space complexity is \\(O(1)\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig max_capacity.py<pre><code>def max_capacity(ht: list[int]) -> int:\n \"\"\"Maximum capacity: Greedy\"\"\"\n # Initialize i, j, making them split the array at both ends\n i, j = 0, len(ht) - 1\n # Initial maximum capacity is 0\n res = 0\n # Loop for greedy selection until the two boards meet\n while i < j:\n # Update maximum capacity\n cap = min(ht[i], ht[j]) * (j - i)\n res = max(res, cap)\n # Move the shorter board inward\n if ht[i] < ht[j]:\n i += 1\n else:\n j -= 1\n return res\n</code></pre> max_capacity.cpp<pre><code>/* Maximum capacity: Greedy */\nint maxCapacity(vector<int> &ht) {\n // Initialize i, j, making them split the array at both ends\n int i = 0, j = ht.size() - 1;\n // Initial maximum capacity is 0\n int res = 0;\n // Loop for greedy selection until the two boards meet\n while (i < j) {\n // Update maximum capacity\n int cap = min(ht[i], ht[j]) * (j - i);\n res = max(res, cap);\n // Move the shorter board inward\n if (ht[i] < ht[j]) {\n i++;\n } else {\n j--;\n }\n }\n return res;\n}\n</code></pre> max_capacity.java<pre><code>/* Maximum capacity: Greedy */\nint maxCapacity(int[] ht) {\n // Initialize i, j, making them split the array at both ends\n int i = 0, j = ht.length - 1;\n // Initial maximum capacity is 0\n int res = 0;\n // Loop for greedy selection until the two boards meet\n while (i < j) {\n // Update maximum capacity\n int cap = Math.min(ht[i], ht[j]) * (j - i);\n res = Math.max(res, cap);\n // Move the shorter board inward\n if (ht[i] < ht[j]) {\n i++;\n } else {\n j--;\n }\n }\n return res;\n}\n</code></pre> max_capacity.cs<pre><code>[class]{max_capacity}-[func]{MaxCapacity}\n</code></pre> 
max_capacity.go<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.swift<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.js<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.ts<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.dart<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.rs<pre><code>[class]{}-[func]{max_capacity}\n</code></pre> max_capacity.c<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.kt<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre> max_capacity.rb<pre><code>[class]{}-[func]{max_capacity}\n</code></pre> max_capacity.zig<pre><code>[class]{}-[func]{maxCapacity}\n</code></pre>"},{"location":"chapter_greedy/max_capacity_problem/#3-proof-of-correctness","title":"3. \u00a0 Proof of correctness","text":"<p>The reason why the greedy method is faster than enumeration is that each round of greedy selection \"skips\" some states.</p> <p>For example, under the state \\(cap[i, j]\\) where \\(i\\) is the shorter partition and \\(j\\) is the taller partition, greedily moving the shorter partition \\(i\\) inward by one step leads to the \"skipped\" states shown in Figure 15-12. This means that these states' capacities cannot be verified later.</p> \\[ cap[i, i+1], cap[i, i+2], \\dots, cap[i, j-2], cap[i, j-1] \\] <p></p> <p> Figure 15-12 \u00a0 States skipped by moving the shorter partition </p> <p>It is observed that these skipped states are actually all states where the taller partition \\(j\\) is moved inward. We have already proven that moving the taller partition inward will definitely decrease the capacity. 
Therefore, the skipped states cannot possibly be the optimal solution, and skipping them does not lead to missing the optimal solution.</p> <p>The analysis shows that the operation of moving the shorter partition is \"safe\", and the greedy strategy is effective.</p>"},{"location":"chapter_greedy/max_product_cutting_problem/","title":"15.4 \u00a0 Maximum product cutting problem","text":"<p>Question</p> <p>Given a positive integer \\(n\\), split it into at least two positive integers that sum up to \\(n\\), and find the maximum product of these integers, as illustrated in Figure 15-13.</p> <p></p> <p> Figure 15-13 \u00a0 Definition of the maximum product cutting problem </p> <p>Assume we split \\(n\\) into \\(m\\) integer factors, where the \\(i\\)-th factor is denoted as \\(n_i\\), that is,</p> \\[ n = \\sum_{i=1}^{m}n_i \\] <p>The goal of this problem is to find the maximum product of all integer factors, namely,</p> \\[ \\max(\\prod_{i=1}^{m}n_i) \\] <p>We need to consider: How large should the number of splits \\(m\\) be, and what should each \\(n_i\\) be?</p>"},{"location":"chapter_greedy/max_product_cutting_problem/#1-greedy-strategy-determination","title":"1. \u00a0 Greedy strategy determination","text":"<p>Experience suggests that the product of two integers is often greater than their sum. Suppose we split a factor of \\(2\\) from \\(n\\), then their product is \\(2(n-2)\\). Compare this product with \\(n\\):</p> \\[ \\begin{aligned} 2(n-2) & \\geq n \\newline 2n - n - 4 & \\geq 0 \\newline n & \\geq 4 \\end{aligned} \\] <p>As shown in Figure 15-14, when \\(n \\geq 4\\), splitting out a \\(2\\) increases the product, which indicates that integers greater than or equal to \\(4\\) should be split.</p> <p>Greedy strategy one: If the splitting scheme includes factors \\(\\geq 4\\), they should be further split. 
The final split should only include factors \\(1\\), \\(2\\), and \\(3\\).</p> <p></p> <p> Figure 15-14 \u00a0 Product increase due to splitting </p> <p>Next, consider which factor is optimal. Among the factors \\(1\\), \\(2\\), and \\(3\\), clearly \\(1\\) is the worst, as \\(1 \\times (n-1) < n\\) always holds, meaning splitting out \\(1\\) actually decreases the product.</p> <p>As shown in Figure 15-15, when \\(n = 6\\), \\(3 \\times 3 > 2 \\times 2 \\times 2\\). This means splitting out \\(3\\) is better than splitting out \\(2\\).</p> <p>Greedy strategy two: In the splitting scheme, there should be at most two \\(2\\)s. Because three \\(2\\)s can always be replaced by two \\(3\\)s to obtain a higher product.</p> <p></p> <p> Figure 15-15 \u00a0 Optimal splitting factors </p> <p>From the above, the following greedy strategies can be derived.</p> <ol> <li>Input integer \\(n\\), continually split out factor \\(3\\) until the remainder is \\(0\\), \\(1\\), or \\(2\\).</li> <li>When the remainder is \\(0\\), it means \\(n\\) is a multiple of \\(3\\), so no further action is taken.</li> <li>When the remainder is \\(2\\), do not continue to split, keep it.</li> <li>When the remainder is \\(1\\), since \\(2 \\times 2 > 1 \\times 3\\), the last \\(3\\) should be replaced with \\(2\\).</li> </ol>"},{"location":"chapter_greedy/max_product_cutting_problem/#2-code-implementation","title":"2. 
\u00a0 Code implementation","text":"<p>As shown in Figure 15-16, we do not need to use loops to split the integer but can use the floor division operation to get the number of \\(3\\)s, \\(a\\), and the modulo operation to get the remainder, \\(b\\), thus:</p> \\[ n = 3a + b \\] <p>Please note, for the boundary case where \\(n \\leq 3\\), a \\(1\\) must be split out, with a product of \\(1 \\times (n - 1)\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig max_product_cutting.py<pre><code>def max_product_cutting(n: int) -> int:\n \"\"\"Maximum product of cutting: Greedy\"\"\"\n # When n <= 3, must cut out a 1\n if n <= 3:\n return 1 * (n - 1)\n # Greedy cut out 3s, a is the number of 3s, b is the remainder\n a, b = n // 3, n % 3\n if b == 1:\n # When the remainder is 1, convert a pair of 1 * 3 into 2 * 2\n return int(math.pow(3, a - 1)) * 2 * 2\n if b == 2:\n # When the remainder is 2, do nothing\n return int(math.pow(3, a)) * 2\n # When the remainder is 0, do nothing\n return int(math.pow(3, a))\n</code></pre> max_product_cutting.cpp<pre><code>/* Maximum product of cutting: Greedy */\nint maxProductCutting(int n) {\n // When n <= 3, must cut out a 1\n if (n <= 3) {\n return 1 * (n - 1);\n }\n // Greedy cut out 3s, a is the number of 3s, b is the remainder\n int a = n / 3;\n int b = n % 3;\n if (b == 1) {\n // When the remainder is 1, convert a pair of 1 * 3 into 2 * 2\n return (int)pow(3, a - 1) * 2 * 2;\n }\n if (b == 2) {\n // When the remainder is 2, do nothing\n return (int)pow(3, a) * 2;\n }\n // When the remainder is 0, do nothing\n return (int)pow(3, a);\n}\n</code></pre> max_product_cutting.java<pre><code>/* Maximum product of cutting: Greedy */\nint maxProductCutting(int n) {\n // When n <= 3, must cut out a 1\n if (n <= 3) {\n return 1 * (n - 1);\n }\n // Greedy cut out 3s, a is the number of 3s, b is the remainder\n int a = n / 3;\n int b = n % 3;\n if (b == 1) {\n // When the remainder is 1, convert a pair of 1 * 3 into 2 * 2\n return (int) 
Math.pow(3, a - 1) * 2 * 2;\n }\n if (b == 2) {\n // When the remainder is 2, do nothing\n return (int) Math.pow(3, a) * 2;\n }\n // When the remainder is 0, do nothing\n return (int) Math.pow(3, a);\n}\n</code></pre> max_product_cutting.cs<pre><code>[class]{max_product_cutting}-[func]{MaxProductCutting}\n</code></pre> max_product_cutting.go<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.swift<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.js<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.ts<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.dart<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.rs<pre><code>[class]{}-[func]{max_product_cutting}\n</code></pre> max_product_cutting.c<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.kt<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> max_product_cutting.rb<pre><code>[class]{}-[func]{max_product_cutting}\n</code></pre> max_product_cutting.zig<pre><code>[class]{}-[func]{maxProductCutting}\n</code></pre> <p></p> <p> Figure 15-16 \u00a0 Calculation method of the maximum product after cutting </p> <p>Time complexity depends on the implementation of the power operation in the programming language. In Python, there are three commonly used ways to compute a power:</p> <ul> <li>Both the operator <code>**</code> and the function <code>pow()</code> have a time complexity of \\(O(\\log a)\\).</li> <li>The <code>math.pow()</code> function internally calls the C language library's <code>pow()</code> function, performing floating-point exponentiation, with a time complexity of \\(O(1)\\).</li> </ul> <p>Variables \\(a\\) and \\(b\\) use a constant amount of extra space, hence the space complexity is \\(O(1)\\).</p>"},{"location":"chapter_greedy/max_product_cutting_problem/#3-correctness-proof","title":"3. 
\u00a0 Correctness proof","text":"<p>Using proof by contradiction, we only analyze the cases where \\(n \\geq 3\\).</p> <ol> <li>All factors \\(\\leq 3\\): Assume the optimal splitting scheme includes a factor \\(x \\geq 4\\); then it can definitely be further split into \\(2(x-2)\\), obtaining a larger product. This contradicts the assumption.</li> <li>The splitting scheme does not contain \\(1\\): Assume the optimal splitting scheme includes a factor of \\(1\\); then it can definitely be merged into another factor to obtain a larger product. This contradicts the assumption.</li> <li>The splitting scheme contains at most two \\(2\\)s: Assume the optimal splitting scheme includes three \\(2\\)s; then they can definitely be replaced by two \\(3\\)s, achieving a higher product. This contradicts the assumption.</li> </ol>"},{"location":"chapter_greedy/summary/","title":"15.5 \u00a0 Summary","text":"<ul> <li>Greedy algorithms are often used to solve optimization problems, where the principle is to make locally optimal decisions at each decision stage in order to achieve a globally optimal solution.</li> <li>Greedy algorithms iteratively make one greedy choice after another, transforming the problem into a smaller sub-problem with each round, until the problem is resolved.</li> <li>Greedy algorithms are not only simple to implement but also have high problem-solving efficiency. Compared to dynamic programming, greedy algorithms generally have a lower time complexity.</li> <li>In the problem of coin change, greedy algorithms can guarantee the optimal solution for certain combinations of coins; for others, however, the greedy algorithm might find a very poor solution.</li> <li>Problems suitable for greedy algorithm solutions possess two main properties: greedy-choice property and optimal substructure. The greedy-choice property represents the effectiveness of the greedy strategy.</li> <li>For some complex problems, proving the greedy-choice property is not straightforward. 
Conversely, disproving it is often easier, as with the coin change problem.</li> <li>Solving greedy problems mainly consists of three steps: problem analysis, determining the greedy strategy, and proving correctness. Among these, determining the greedy strategy is the key step, while proving correctness often poses the challenge.</li> <li>The fractional knapsack problem builds on the 0-1 knapsack problem by allowing the selection of a part of the items, hence it can be solved using a greedy algorithm. The correctness of the greedy strategy can be proved by contradiction.</li> <li>The maximum capacity problem can be solved using the exhaustive method, with a time complexity of \\(O(n^2)\\). By designing a greedy strategy that moves the shorter partition inward in each round, the time complexity can be optimized to \\(O(n)\\).</li> <li>In the problem of maximum product after cutting, we deduce two greedy strategies: integers \\(\\geq 4\\) should continue to be cut, with the optimal cutting factor being \\(3\\). The code includes power operations, and the time complexity depends on the method of implementing power operations, generally being \\(O(1)\\) or \\(O(\\log n)\\).</li> </ul>"},{"location":"chapter_hashing/","title":"Chapter 6. \u00a0 Hash table","text":"<p>Abstract</p> <p>In the world of computing, a hash table is akin to an intelligent librarian.</p> <p>It understands how to compute index numbers, enabling swift retrieval of the desired book.</p>"},{"location":"chapter_hashing/#chapter-contents","title":"Chapter contents","text":"<ul> <li>6.1 \u00a0 Hash table</li> <li>6.2 \u00a0 Hash collision</li> <li>6.3 \u00a0 Hash algorithm</li> <li>6.4 \u00a0 Summary</li> </ul>"},{"location":"chapter_hashing/hash_algorithm/","title":"6.3 \u00a0 Hash algorithms","text":"<p>The previous two sections introduced the working principle of hash tables and the methods to handle hash collisions. 
However, both open addressing and chaining can only ensure that the hash table functions normally when collisions occur, but cannot reduce the frequency of hash collisions.</p> <p>If hash collisions occur too frequently, the performance of the hash table will deteriorate drastically. As shown in Figure 6-8, for a chaining hash table, in the ideal case, the key-value pairs are evenly distributed across the buckets, achieving optimal query efficiency; in the worst case, all key-value pairs are stored in the same bucket, degrading the time complexity to \\(O(n)\\).</p> <p></p> <p> Figure 6-8 \u00a0 Ideal and worst cases of hash collisions </p> <p>The distribution of key-value pairs is determined by the hash function. Recalling the steps of calculating a hash function, first compute the hash value, then modulo it by the array length:</p> <pre><code>index = hash(key) % capacity\n</code></pre> <p>Observing the above formula, when the hash table capacity <code>capacity</code> is fixed, the hash algorithm <code>hash()</code> determines the output value, thereby determining the distribution of key-value pairs in the hash table.</p> <p>This means that, to reduce the probability of hash collisions, we should focus on the design of the hash algorithm <code>hash()</code>.</p>"},{"location":"chapter_hashing/hash_algorithm/#631-goals-of-hash-algorithms","title":"6.3.1 \u00a0 Goals of hash algorithms","text":"<p>To achieve a \"fast and stable\" hash table data structure, hash algorithms should have the following characteristics:</p> <ul> <li>Determinism: For the same input, the hash algorithm should always produce the same output. Only then can the hash table be reliable.</li> <li>High efficiency: The process of computing the hash value should be fast enough. The smaller the computational overhead, the more practical the hash table.</li> <li>Uniform distribution: The hash algorithm should ensure that key-value pairs are evenly distributed in the hash table. 
The more uniform the distribution, the lower the probability of hash collisions.</li> </ul> <p>In fact, hash algorithms are not only used to implement hash tables but are also widely applied in other fields.</p> <ul> <li>Password storage: To protect the security of user passwords, systems usually do not store the plaintext passwords but rather the hash values of the passwords. When a user enters a password, the system calculates the hash value of the input and compares it with the stored hash value. If they match, the password is considered correct.</li> <li>Data integrity check: The data sender can calculate the hash value of the data and send it along; the receiver can recalculate the hash value of the received data and compare it with the received hash value. If they match, the data is considered intact.</li> </ul> <p>For cryptographic applications, to prevent reverse engineering such as deducing the original password from the hash value, hash algorithms need higher-level security features.</p> <ul> <li>Unidirectionality: It should be impossible to deduce any information about the input data from the hash value.</li> <li>Collision resistance: It should be extremely difficult to find two different inputs that produce the same hash value.</li> <li>Avalanche effect: Minor changes in the input should lead to significant and unpredictable changes in the output.</li> </ul> <p>Note that \"Uniform Distribution\" and \"Collision Resistance\" are two separate concepts. Satisfying uniform distribution does not necessarily mean collision resistance. For example, under random input <code>key</code>, the hash function <code>key % 100</code> can produce a uniformly distributed output. 
However, this hash algorithm is too simple, and all <code>key</code> with the same last two digits will have the same output, making it easy to deduce a usable <code>key</code> from the hash value, thereby cracking the password.</p>"},{"location":"chapter_hashing/hash_algorithm/#632-design-of-hash-algorithms","title":"6.3.2 \u00a0 Design of hash algorithms","text":"<p>The design of hash algorithms is a complex issue that requires consideration of many factors. However, for some less demanding scenarios, we can also design some simple hash algorithms.</p> <ul> <li>Additive hash: Add up the ASCII codes of each character in the input and use the total sum as the hash value.</li> <li>Multiplicative hash: Utilize the non-correlation of multiplication, multiplying each round by a constant, accumulating the ASCII codes of each character into the hash value.</li> <li>XOR hash: Accumulate the hash value by XORing each element of the input data.</li> <li>Rotating hash: Accumulate the ASCII code of each character into a hash value, performing a rotation operation on the hash value before each accumulation.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig simple_hash.py<pre><code>def add_hash(key: str) -> int:\n \"\"\"Additive hash\"\"\"\n hash = 0\n modulus = 1000000007\n for c in key:\n hash += ord(c)\n return hash % modulus\n\ndef mul_hash(key: str) -> int:\n \"\"\"Multiplicative hash\"\"\"\n hash = 0\n modulus = 1000000007\n for c in key:\n hash = 31 * hash + ord(c)\n return hash % modulus\n\ndef xor_hash(key: str) -> int:\n \"\"\"XOR hash\"\"\"\n hash = 0\n modulus = 1000000007\n for c in key:\n hash ^= ord(c)\n return hash % modulus\n\ndef rot_hash(key: str) -> int:\n \"\"\"Rotational hash\"\"\"\n hash = 0\n modulus = 1000000007\n for c in key:\n hash = (hash << 4) ^ (hash >> 28) ^ ord(c)\n return hash % modulus\n</code></pre> simple_hash.cpp<pre><code>/* Additive hash */\nint addHash(string key) {\n long long hash = 0;\n const int MODULUS = 1000000007;\n for 
(unsigned char c : key) {\n hash = (hash + (int)c) % MODULUS;\n }\n return (int)hash;\n}\n\n/* Multiplicative hash */\nint mulHash(string key) {\n long long hash = 0;\n const int MODULUS = 1000000007;\n for (unsigned char c : key) {\n hash = (31 * hash + (int)c) % MODULUS;\n }\n return (int)hash;\n}\n\n/* XOR hash */\nint xorHash(string key) {\n int hash = 0;\n const int MODULUS = 1000000007;\n for (unsigned char c : key) {\n hash ^= (int)c;\n }\n return hash % MODULUS;\n}\n\n/* Rotational hash */\nint rotHash(string key) {\n long long hash = 0;\n const int MODULUS = 1000000007;\n for (unsigned char c : key) {\n hash = ((hash << 4) ^ (hash >> 28) ^ (int)c) % MODULUS;\n }\n return (int)hash;\n}\n</code></pre> simple_hash.java<pre><code>/* Additive hash */\nint addHash(String key) {\n long hash = 0;\n final int MODULUS = 1000000007;\n for (char c : key.toCharArray()) {\n hash = (hash + (int) c) % MODULUS;\n }\n return (int) hash;\n}\n\n/* Multiplicative hash */\nint mulHash(String key) {\n long hash = 0;\n final int MODULUS = 1000000007;\n for (char c : key.toCharArray()) {\n hash = (31 * hash + (int) c) % MODULUS;\n }\n return (int) hash;\n}\n\n/* XOR hash */\nint xorHash(String key) {\n int hash = 0;\n final int MODULUS = 1000000007;\n for (char c : key.toCharArray()) {\n hash ^= (int) c;\n }\n return hash % MODULUS;\n}\n\n/* Rotational hash */\nint rotHash(String key) {\n long hash = 0;\n final int MODULUS = 1000000007;\n for (char c : key.toCharArray()) {\n hash = ((hash << 4) ^ (hash >> 28) ^ (int) c) % MODULUS;\n }\n return (int) hash;\n}\n</code></pre> simple_hash.cs<pre><code>[class]{simple_hash}-[func]{AddHash}\n\n[class]{simple_hash}-[func]{MulHash}\n\n[class]{simple_hash}-[func]{XorHash}\n\n[class]{simple_hash}-[func]{RotHash}\n</code></pre> simple_hash.go<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> 
simple_hash.swift<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.js<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.ts<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.dart<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.rs<pre><code>[class]{}-[func]{add_hash}\n\n[class]{}-[func]{mul_hash}\n\n[class]{}-[func]{xor_hash}\n\n[class]{}-[func]{rot_hash}\n</code></pre> simple_hash.c<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.kt<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> simple_hash.rb<pre><code>[class]{}-[func]{add_hash}\n\n[class]{}-[func]{mul_hash}\n\n[class]{}-[func]{xor_hash}\n\n[class]{}-[func]{rot_hash}\n</code></pre> simple_hash.zig<pre><code>[class]{}-[func]{addHash}\n\n[class]{}-[func]{mulHash}\n\n[class]{}-[func]{xorHash}\n\n[class]{}-[func]{rotHash}\n</code></pre> <p>Observe that the last step of each hash algorithm is to take the result modulo the large prime number \\(1000000007\\), which ensures that the hash value stays within an appropriate range. It is worth pondering why we emphasize taking the modulus with a prime number: what disadvantages would a composite modulus have? This is an interesting question.</p> <p>To conclude: using a large prime number as the modulus helps keep the distribution of hash values as uniform as possible. 
This is because a prime modulus shares no common factors (other than \\(1\\)) with the step sizes of typical input patterns, which suppresses the periodic patterns produced by the modulo operation and thus helps avoid hash collisions.</p> <p>For example, suppose we choose the composite number \\(9\\) as the modulus; since \\(9\\) is divisible by \\(3\\), all <code>key</code> divisible by \\(3\\) will be mapped to the hash values \\(0\\), \\(3\\), \\(6\\).</p> \\[ \\begin{aligned} \\text{modulus} & = 9 \\newline \\text{key} & = \\{ 0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, \\dots \\} \\newline \\text{hash} & = \\{ 0, 3, 6, 0, 3, 6, 0, 3, 6, 0, 3, 6, \\dots \\} \\end{aligned} \\] <p>If the input <code>key</code> happens to follow this kind of arithmetic sequence, the hash values will cluster, thereby exacerbating hash collisions. Now, suppose we replace <code>modulus</code> with the prime number \\(13\\); since the step size of the sequence and the modulus are coprime, the uniformity of the output hash values is significantly improved.</p> \\[ \\begin{aligned} \\text{modulus} & = 13 \\newline \\text{key} & = \\{ 0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, \\dots \\} \\newline \\text{hash} & = \\{ 0, 3, 6, 9, 12, 2, 5, 8, 11, 1, 4, 7, \\dots \\} \\end{aligned} \\] <p>It is worth noting that if the <code>key</code> is guaranteed to be randomly and uniformly distributed, then choosing either a prime number or a composite number as the modulus produces uniformly distributed hash values. 
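The two mappings above can be reproduced in a few lines of plain Python; this is just a numerical check of the formulas, not code from the book:

```python
# Arithmetic sequence of keys with step 3, as in the formulas above
keys = list(range(0, 36, 3))  # 0, 3, 6, ..., 33

# Composite modulus 9 shares the factor 3 with the step: hash values cluster
print(sorted({k % 9 for k in keys}))   # only [0, 3, 6]

# Prime modulus 13 is coprime to the step: hash values spread across the range
print(sorted({k % 13 for k in keys}))  # 12 distinct values
```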
However, when the distribution of <code>key</code> has some periodicity, modulo a composite number is more likely to result in clustering.</p> <p>In summary, we usually choose a prime number as the modulus, and this prime number should be large enough to eliminate periodic patterns as much as possible, enhancing the robustness of the hash algorithm.</p>"},{"location":"chapter_hashing/hash_algorithm/#633-common-hash-algorithms","title":"6.3.3 \u00a0 Common hash algorithms","text":"<p>It is not hard to see that the simple hash algorithms mentioned above are quite \"fragile\" and far from reaching the design goals of hash algorithms. For example, since addition and XOR obey the commutative law, additive hash and XOR hash cannot distinguish strings with the same content but in different order, which may exacerbate hash collisions and cause security issues.</p> <p>In practice, we usually use some standard hash algorithms, such as MD5, SHA-1, SHA-2, and SHA-3. They can map input data of any length to a fixed-length hash value.</p> <p>Over the past century, hash algorithms have been in a continuous process of upgrading and optimization. Some researchers strive to improve the performance of hash algorithms, while others, including hackers, are dedicated to finding security issues in hash algorithms. 
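These standard algorithms are available in Python's built-in `hashlib` module. The brief sketch below (the input string is chosen arbitrarily) illustrates two of the properties discussed earlier: fixed-length output and the avalanche effect.

```python
import hashlib

# Fixed-length output: the digest length depends only on the algorithm,
# not on the length of the input
assert len(hashlib.md5(b"Hello Algo").digest()) == 16     # 128 bits
assert len(hashlib.sha256(b"Hello Algo").digest()) == 32  # 256 bits

# Avalanche effect: changing a single character alters most of the digest
d1 = hashlib.sha256(b"Hello Algo").hexdigest()
d2 = hashlib.sha256(b"Hello Algp").hexdigest()
diff = sum(a != b for a, b in zip(d1, d2))
print(f"{diff} of 64 hex characters differ")
```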
Table 6-2 shows hash algorithms commonly used in practical applications.</p> <ul> <li>MD5 and SHA-1 have been successfully attacked multiple times and are thus abandoned in various security applications.</li> <li>SHA-2 series, especially SHA-256, is one of the most secure hash algorithms to date, with no successful attacks reported, hence commonly used in various security applications and protocols.</li> <li>SHA-3 has lower implementation costs and higher computational efficiency compared to SHA-2, but its current usage coverage is not as extensive as the SHA-2 series.</li> </ul> <p> Table 6-2 \u00a0 Common hash algorithms </p> MD5 SHA-1 SHA-2 SHA-3 Release Year 1992 1995 2002 2008 Output Length 128 bit 160 bit 256/512 bit 224/256/384/512 bit Hash Collisions Frequent Frequent Rare Rare Security Level Low, has been successfully attacked Low, has been successfully attacked High High Applications Abandoned, still used for data integrity checks Abandoned Cryptocurrency transaction verification, digital signatures, etc. Can be used to replace SHA-2"},{"location":"chapter_hashing/hash_algorithm/#hash-values-in-data-structures","title":"Hash values in data structures","text":"<p>We know that the keys in a hash table can be of various data types such as integers, decimals, or strings. Programming languages usually provide built-in hash algorithms for these data types to calculate the bucket indices in the hash table. 
Taking Python as an example, we can use the <code>hash()</code> function to compute the hash values for various data types.</p> <ul> <li>The hash values of integers and booleans are their own values.</li> <li>The calculation of hash values for floating-point numbers and strings is more complex, and interested readers are encouraged to study this on their own.</li> <li>The hash value of a tuple is a combination of the hash values of each of its elements, resulting in a single hash value.</li> <li>The hash value of an object is generated based on its memory address. By overriding the hash method of an object, hash values can be generated based on content.</li> </ul> <p>Tip</p> <p>Be aware that the definition and methods of the built-in hash value calculation functions in different programming languages vary.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig built_in_hash.py<pre><code>num = 3\nhash_num = hash(num)\n# Hash value of integer 3 is 3\n\nbol = True\nhash_bol = hash(bol)\n# Hash value of boolean True is 1\n\ndec = 3.14159\nhash_dec = hash(dec)\n# Hash value of decimal 3.14159 is 326484311674566659\n\nstr = \"Hello \u7b97\u6cd5\"\nhash_str = hash(str)\n# Hash value of string \"Hello \u7b97\u6cd5\" is 4617003410720528961\n\ntup = (12836, \"\u5c0f\u54c8\")\nhash_tup = hash(tup)\n# Hash value of tuple (12836, '\u5c0f\u54c8') is 1029005403108185979\n\nobj = ListNode(0)\nhash_obj = hash(obj)\n# Hash value of ListNode object at 0x1058fd810 is 274267521\n</code></pre> built_in_hash.cpp<pre><code>int num = 3;\nsize_t hashNum = hash<int>()(num);\n// Hash value of integer 3 is 3\n\nbool bol = true;\nsize_t hashBol = hash<bool>()(bol);\n// Hash value of boolean 1 is 1\n\ndouble dec = 3.14159;\nsize_t hashDec = hash<double>()(dec);\n// Hash value of decimal 3.14159 is 4614256650576692846\n\nstring str = \"Hello \u7b97\u6cd5\";\nsize_t hashStr = hash<string>()(str);\n// Hash value of string \"Hello \u7b97\u6cd5\" is 15466937326284535026\n\n// In C++, built-in std::hash() 
only provides hash values for basic data types\n// Hash values for arrays and objects need to be implemented separately\n</code></pre> built_in_hash.java<pre><code>int num = 3;\nint hashNum = Integer.hashCode(num);\n// Hash value of integer 3 is 3\n\nboolean bol = true;\nint hashBol = Boolean.hashCode(bol);\n// Hash value of boolean true is 1231\n\ndouble dec = 3.14159;\nint hashDec = Double.hashCode(dec);\n// Hash value of decimal 3.14159 is -1340954729\n\nString str = \"Hello \u7b97\u6cd5\";\nint hashStr = str.hashCode();\n// Hash value of string \"Hello \u7b97\u6cd5\" is -727081396\n\nObject[] arr = { 12836, \"\u5c0f\u54c8\" };\nint hashTup = Arrays.hashCode(arr);\n// Hash value of array [12836, \u5c0f\u54c8] is 1151158\n\nListNode obj = new ListNode(0);\nint hashObj = obj.hashCode();\n// Hash value of ListNode object utils.ListNode@7dc5e7b4 is 2110121908\n</code></pre> built_in_hash.cs<pre><code>int num = 3;\nint hashNum = num.GetHashCode();\n// Hash value of integer 3 is 3;\n\nbool bol = true;\nint hashBol = bol.GetHashCode();\n// Hash value of boolean true is 1;\n\ndouble dec = 3.14159;\nint hashDec = dec.GetHashCode();\n// Hash value of decimal 3.14159 is -1340954729;\n\nstring str = \"Hello \u7b97\u6cd5\";\nint hashStr = str.GetHashCode();\n// Hash value of string \"Hello \u7b97\u6cd5\" is -586107568;\n\nobject[] arr = [12836, \"\u5c0f\u54c8\"];\nint hashTup = arr.GetHashCode();\n// Hash value of array [12836, \u5c0f\u54c8] is 42931033;\n\nListNode obj = new(0);\nint hashObj = obj.GetHashCode();\n// Hash value of ListNode object 0 is 39053774;\n</code></pre> built_in_hash.go<pre><code>// Go does not provide built-in hash code functions\n</code></pre> built_in_hash.swift<pre><code>let num = 3\nlet hashNum = num.hashValue\n// Hash value of integer 3 is 9047044699613009734\n\nlet bol = true\nlet hashBol = bol.hashValue\n// Hash value of boolean true is -4431640247352757451\n\nlet dec = 3.14159\nlet hashDec = dec.hashValue\n// Hash value of decimal 3.14159 is 
-2465384235396674631\n\nlet str = \"Hello \u7b97\u6cd5\"\nlet hashStr = str.hashValue\n// Hash value of string \"Hello \u7b97\u6cd5\" is -7850626797806988787\n\nlet arr = [AnyHashable(12836), AnyHashable(\"\u5c0f\u54c8\")]\nlet hashTup = arr.hashValue\n// Hash value of array [AnyHashable(12836), AnyHashable(\"\u5c0f\u54c8\")] is -2308633508154532996\n\nlet obj = ListNode(x: 0)\nlet hashObj = obj.hashValue\n// Hash value of ListNode object utils.ListNode is -2434780518035996159\n</code></pre> built_in_hash.js<pre><code>// JavaScript does not provide built-in hash code functions\n</code></pre> built_in_hash.ts<pre><code>// TypeScript does not provide built-in hash code functions\n</code></pre> built_in_hash.dart<pre><code>int num = 3;\nint hashNum = num.hashCode;\n// Hash value of integer 3 is 34803\n\nbool bol = true;\nint hashBol = bol.hashCode;\n// Hash value of boolean true is 1231\n\ndouble dec = 3.14159;\nint hashDec = dec.hashCode;\n// Hash value of decimal 3.14159 is 2570631074981783\n\nString str = \"Hello \u7b97\u6cd5\";\nint hashStr = str.hashCode;\n// Hash value of string \"Hello \u7b97\u6cd5\" is 468167534\n\nList arr = [12836, \"\u5c0f\u54c8\"];\nint hashArr = arr.hashCode;\n// Hash value of array [12836, \u5c0f\u54c8] is 976512528\n\nListNode obj = new ListNode(0);\nint hashObj = obj.hashCode;\n// Hash value of ListNode object Instance of 'ListNode' is 1033450432\n</code></pre> built_in_hash.rs<pre><code>use std::collections::hash_map::DefaultHasher;\nuse std::hash::{Hash, Hasher};\n\nlet num = 3;\nlet mut num_hasher = DefaultHasher::new();\nnum.hash(&mut num_hasher);\nlet hash_num = num_hasher.finish();\n// Hash value of integer 3 is 568126464209439262\n\nlet bol = true;\nlet mut bol_hasher = DefaultHasher::new();\nbol.hash(&mut bol_hasher);\nlet hash_bol = bol_hasher.finish();\n// Hash value of boolean true is 4952851536318644461\n\nlet dec: f32 = 3.14159;\nlet mut dec_hasher = DefaultHasher::new();\ndec.to_bits().hash(&mut dec_hasher);\nlet hash_dec 
= dec_hasher.finish();\n// Hash value of decimal 3.14159 is 2566941990314602357\n\nlet str = \"Hello \u7b97\u6cd5\";\nlet mut str_hasher = DefaultHasher::new();\nstr.hash(&mut str_hasher);\nlet hash_str = str_hasher.finish();\n// Hash value of string \"Hello \u7b97\u6cd5\" is 16092673739211250988\n\nlet arr = (&12836, &\"\u5c0f\u54c8\");\nlet mut tup_hasher = DefaultHasher::new();\narr.hash(&mut tup_hasher);\nlet hash_tup = tup_hasher.finish();\n// Hash value of tuple (12836, \"\u5c0f\u54c8\") is 1885128010422702749\n\nlet node = ListNode::new(42);\nlet mut hasher = DefaultHasher::new();\nnode.borrow().val.hash(&mut hasher);\nlet hash = hasher.finish();\n// Hash value of ListNode object RefCell { value: ListNode { val: 42, next: None } } is 15387811073369036852\n</code></pre> built_in_hash.c<pre><code>// C does not provide built-in hash code functions\n</code></pre> built_in_hash.kt<pre><code>\n</code></pre> built_in_hash.zig<pre><code>\n</code></pre> Code Visualization <p> Full Screen ></p> <p>In many programming languages, only immutable objects can serve as the <code>key</code> in a hash table. If we use a list (dynamic array) as a <code>key</code>, when the contents of the list change, its hash value also changes, and we would no longer be able to find the original <code>value</code> in the hash table.</p> <p>Although the member variables of a custom object (such as a linked list node) are mutable, it is hashable. This is because the hash value of an object is usually generated based on its memory address, and even if the contents of the object change, the memory address remains the same, so the hash value remains unchanged.</p> <p>You might have noticed that the hash values output in different consoles are different. This is because the Python interpreter adds a random salt to the string hash function each time it starts up. 
This approach effectively prevents HashDoS attacks and enhances the security of the hash algorithm.</p>"},{"location":"chapter_hashing/hash_collision/","title":"6.2 \u00a0 Hash collision","text":"<p>The previous section mentioned that, in most cases, the input space of a hash function is much larger than the output space, so theoretically, hash collisions are inevitable. For example, if the input space is all integers and the output space is the size of the array capacity, then multiple integers will inevitably be mapped to the same bucket index.</p> <p>Hash collisions can lead to incorrect query results, severely impacting the usability of the hash table. To address this issue, we could resize the hash table whenever a collision occurs, until the collision disappears. This approach is simple and straightforward, and it works. However, it is quite inefficient, because resizing involves extensive data migration and recalculation of hash values, both of which are expensive. To improve efficiency, we can adopt the following strategies instead:</p> <ol> <li>Improve the hash table data structure so that the target element can still be located correctly when a hash collision occurs.</li> <li>Perform resizing only as a last resort, when collisions become severe.</li> </ol> <p>There are mainly two methods for improving the structure of hash tables: \"Separate Chaining\" and \"Open Addressing\".</p>"},{"location":"chapter_hashing/hash_collision/#621-separate-chaining","title":"6.2.1 \u00a0 Separate chaining","text":"<p>In the original hash table, each bucket can store only one key-value pair. Separate chaining converts a single element into a linked list, treating key-value pairs as list nodes, storing all colliding key-value pairs in the same linked list. 
Figure 6-5 shows an example of a hash table with separate chaining.</p> <p></p> <p> Figure 6-5 \u00a0 Separate chaining hash table </p> <p>The operations of a hash table implemented with separate chaining have changed as follows:</p> <ul> <li>Querying Elements: Input <code>key</code>, obtain the bucket index through the hash function, then access the head node of the linked list. Traverse the linked list and compare key to find the target key-value pair.</li> <li>Adding Elements: Access the head node of the linked list via the hash function, then append the node (key-value pair) to the list.</li> <li>Deleting Elements: Access the head of the linked list based on the result of the hash function, then traverse the linked list to find the target node and delete it.</li> </ul> <p>Separate chaining has the following limitations:</p> <ul> <li>Increased Space Usage: The linked list contains node pointers, which consume more memory space than arrays.</li> <li>Reduced Query Efficiency: This is because linear traversal of the linked list is required to find the corresponding element.</li> </ul> <p>The code below provides a simple implementation of a separate chaining hash table, with two things to note:</p> <ul> <li>Lists (dynamic arrays) are used instead of linked lists for simplicity. In this setup, the hash table (array) contains multiple buckets, each of which is a list.</li> <li>This implementation includes a hash table resizing method. 
When the load factor exceeds \\(\\frac{2}{3}\\), we expand the hash table to twice its original size.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig hash_map_chaining.py<pre><code>class HashMapChaining:\n \"\"\"Chained address hash table\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self.size = 0 # Number of key-value pairs\n self.capacity = 4 # Hash table capacity\n self.load_thres = 2.0 / 3.0 # Load factor threshold for triggering expansion\n self.extend_ratio = 2 # Expansion multiplier\n self.buckets = [[] for _ in range(self.capacity)] # Bucket array\n\n def hash_func(self, key: int) -> int:\n \"\"\"Hash function\"\"\"\n return key % self.capacity\n\n def load_factor(self) -> float:\n \"\"\"Load factor\"\"\"\n return self.size / self.capacity\n\n def get(self, key: int) -> str | None:\n \"\"\"Query operation\"\"\"\n index = self.hash_func(key)\n bucket = self.buckets[index]\n # Traverse the bucket, if the key is found, return the corresponding val\n for pair in bucket:\n if pair.key == key:\n return pair.val\n # If the key is not found, return None\n return None\n\n def put(self, key: int, val: str):\n \"\"\"Add operation\"\"\"\n # When the load factor exceeds the threshold, perform expansion\n if self.load_factor() > self.load_thres:\n self.extend()\n index = self.hash_func(key)\n bucket = self.buckets[index]\n # Traverse the bucket, if the specified key is encountered, update the corresponding val and return\n for pair in bucket:\n if pair.key == key:\n pair.val = val\n return\n # If the key is not found, add the key-value pair to the end\n pair = Pair(key, val)\n bucket.append(pair)\n self.size += 1\n\n def remove(self, key: int):\n \"\"\"Remove operation\"\"\"\n index = self.hash_func(key)\n bucket = self.buckets[index]\n # Traverse the bucket, remove the key-value pair from it\n for pair in bucket:\n if pair.key == key:\n bucket.remove(pair)\n self.size -= 1\n break\n\n def extend(self):\n \"\"\"Extend hash table\"\"\"\n # 
Temporarily store the original hash table\n buckets = self.buckets\n # Initialize the extended new hash table\n self.capacity *= self.extend_ratio\n self.buckets = [[] for _ in range(self.capacity)]\n self.size = 0\n # Move key-value pairs from the original hash table to the new hash table\n for bucket in buckets:\n for pair in bucket:\n self.put(pair.key, pair.val)\n\n def print(self):\n \"\"\"Print hash table\"\"\"\n for bucket in self.buckets:\n res = []\n for pair in bucket:\n res.append(str(pair.key) + \" -> \" + pair.val)\n print(res)\n</code></pre> hash_map_chaining.cpp<pre><code>/* Chained address hash table */\nclass HashMapChaining {\n private:\n int size; // Number of key-value pairs\n int capacity; // Hash table capacity\n double loadThres; // Load factor threshold for triggering expansion\n int extendRatio; // Expansion multiplier\n vector<vector<Pair *>> buckets; // Bucket array\n\n public:\n /* Constructor */\n HashMapChaining() : size(0), capacity(4), loadThres(2.0 / 3.0), extendRatio(2) {\n buckets.resize(capacity);\n }\n\n /* Destructor */\n ~HashMapChaining() {\n for (auto &bucket : buckets) {\n for (Pair *pair : bucket) {\n // Free memory\n delete pair;\n }\n }\n }\n\n /* Hash function */\n int hashFunc(int key) {\n return key % capacity;\n }\n\n /* Load factor */\n double loadFactor() {\n return (double)size / (double)capacity;\n }\n\n /* Query operation */\n string get(int key) {\n int index = hashFunc(key);\n // Traverse the bucket, if the key is found, return the corresponding val\n for (Pair *pair : buckets[index]) {\n if (pair->key == key) {\n return pair->val;\n }\n }\n // If key not found, return an empty string\n return \"\";\n }\n\n /* Add operation */\n void put(int key, string val) {\n // When the load factor exceeds the threshold, perform expansion\n if (loadFactor() > loadThres) {\n extend();\n }\n int index = hashFunc(key);\n // Traverse the bucket, if the specified key is encountered, update the corresponding val and return\n for 
(Pair *pair : buckets[index]) {\n if (pair->key == key) {\n pair->val = val;\n return;\n }\n }\n // If the key is not found, add the key-value pair to the end\n buckets[index].push_back(new Pair(key, val));\n size++;\n }\n\n /* Remove operation */\n void remove(int key) {\n int index = hashFunc(key);\n auto &bucket = buckets[index];\n // Traverse the bucket, remove the key-value pair from it\n for (int i = 0; i < bucket.size(); i++) {\n if (bucket[i]->key == key) {\n Pair *tmp = bucket[i];\n bucket.erase(bucket.begin() + i); // Remove key-value pair\n delete tmp; // Free memory\n size--;\n return;\n }\n }\n }\n\n /* Extend hash table */\n void extend() {\n // Temporarily store the original hash table\n vector<vector<Pair *>> bucketsTmp = buckets;\n // Initialize the extended new hash table\n capacity *= extendRatio;\n buckets.clear();\n buckets.resize(capacity);\n size = 0;\n // Move key-value pairs from the original hash table to the new hash table\n for (auto &bucket : bucketsTmp) {\n for (Pair *pair : bucket) {\n put(pair->key, pair->val);\n // Free memory\n delete pair;\n }\n }\n }\n\n /* Print hash table */\n void print() {\n for (auto &bucket : buckets) {\n cout << \"[\";\n for (Pair *pair : bucket) {\n cout << pair->key << \" -> \" << pair->val << \", \";\n }\n cout << \"]\\n\";\n }\n }\n};\n</code></pre> hash_map_chaining.java<pre><code>/* Chained address hash table */\nclass HashMapChaining {\n int size; // Number of key-value pairs\n int capacity; // Hash table capacity\n double loadThres; // Load factor threshold for triggering expansion\n int extendRatio; // Expansion multiplier\n List<List<Pair>> buckets; // Bucket array\n\n /* Constructor */\n public HashMapChaining() {\n size = 0;\n capacity = 4;\n loadThres = 2.0 / 3.0;\n extendRatio = 2;\n buckets = new ArrayList<>(capacity);\n for (int i = 0; i < capacity; i++) {\n buckets.add(new ArrayList<>());\n }\n }\n\n /* Hash function */\n int hashFunc(int key) {\n return key % capacity;\n }\n\n /* Load 
factor */\n double loadFactor() {\n return (double) size / capacity;\n }\n\n /* Query operation */\n String get(int key) {\n int index = hashFunc(key);\n List<Pair> bucket = buckets.get(index);\n // Traverse the bucket, if the key is found, return the corresponding val\n for (Pair pair : bucket) {\n if (pair.key == key) {\n return pair.val;\n }\n }\n // If key is not found, return null\n return null;\n }\n\n /* Add operation */\n void put(int key, String val) {\n // When the load factor exceeds the threshold, perform expansion\n if (loadFactor() > loadThres) {\n extend();\n }\n int index = hashFunc(key);\n List<Pair> bucket = buckets.get(index);\n // Traverse the bucket, if the specified key is encountered, update the corresponding val and return\n for (Pair pair : bucket) {\n if (pair.key == key) {\n pair.val = val;\n return;\n }\n }\n // If the key is not found, add the key-value pair to the end\n Pair pair = new Pair(key, val);\n bucket.add(pair);\n size++;\n }\n\n /* Remove operation */\n void remove(int key) {\n int index = hashFunc(key);\n List<Pair> bucket = buckets.get(index);\n // Traverse the bucket, remove the key-value pair from it\n for (Pair pair : bucket) {\n if (pair.key == key) {\n bucket.remove(pair);\n size--;\n break;\n }\n }\n }\n\n /* Extend hash table */\n void extend() {\n // Temporarily store the original hash table\n List<List<Pair>> bucketsTmp = buckets;\n // Initialize the extended new hash table\n capacity *= extendRatio;\n buckets = new ArrayList<>(capacity);\n for (int i = 0; i < capacity; i++) {\n buckets.add(new ArrayList<>());\n }\n size = 0;\n // Move key-value pairs from the original hash table to the new hash table\n for (List<Pair> bucket : bucketsTmp) {\n for (Pair pair : bucket) {\n put(pair.key, pair.val);\n }\n }\n }\n\n /* Print hash table */\n void print() {\n for (List<Pair> bucket : buckets) {\n List<String> res = new ArrayList<>();\n for (Pair pair : bucket) {\n res.add(pair.key + \" -> \" + pair.val);\n }\n 
System.out.println(res);\n }\n }\n}\n</code></pre> hash_map_chaining.cs<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.go<pre><code>[class]{hashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.swift<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.js<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.ts<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.dart<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.rs<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.c<pre><code>[class]{Node}-[func]{}\n\n[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.kt<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.rb<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> hash_map_chaining.zig<pre><code>[class]{HashMapChaining}-[func]{}\n</code></pre> <p>It's worth noting that when the linked list is very long, the query efficiency \\(O(n)\\) is poor. In this case, the list can be converted to an \"AVL tree\" or \"Red-Black tree\" to optimize the time complexity of the query operation to \\(O(\\log n)\\).</p>"},{"location":"chapter_hashing/hash_collision/#622-open-addressing","title":"6.2.2 \u00a0 Open addressing","text":"<p>Open addressing does not introduce additional data structures but instead handles hash collisions through \"multiple probing\". The probing methods mainly include linear probing, quadratic probing, and double hashing.</p> <p>Let's use linear probing as an example to introduce the mechanism of open addressing hash tables.</p>"},{"location":"chapter_hashing/hash_collision/#1-linear-probing","title":"1. \u00a0 Linear probing","text":"<p>Linear probing uses a fixed-step linear search for probing, differing from ordinary hash tables.</p> <ul> <li>Inserting Elements: Calculate the bucket index using the hash function. 
If the bucket already contains an element, linearly traverse forward from the conflict position (usually with a step size of \\(1\\)) until an empty bucket is found, then insert the element.</li> <li>Searching for Elements: If a hash collision is encountered, use the same step size to linearly traverse forward until the corresponding element is found and return <code>value</code>; if an empty bucket is encountered, it means the target element is not in the hash table, so return <code>None</code>.</li> </ul> <p>Figure 6-6 shows the distribution of key-value pairs in an open addressing (linear probing) hash table. According to this hash function, keys with the same last two digits will be mapped to the same bucket. Through linear probing, they are stored sequentially in that bucket and the buckets below it.</p> <p></p> <p> Figure 6-6 \u00a0 Distribution of key-value pairs in open addressing (linear probing) hash table </p> <p>However, linear probing is prone to create \"clustering\". Specifically, the longer the continuously occupied positions in the array, the greater the probability of hash collisions occurring in these continuous positions, further promoting the growth of clustering at that position, forming a vicious cycle, and ultimately leading to degraded efficiency of insertion, deletion, query, and update operations.</p> <p>It's important to note that we cannot directly delete elements in an open addressing hash table. Deleting an element creates an empty bucket <code>None</code> in the array. When searching for elements, if linear probing encounters this empty bucket, it will return, making the elements below this bucket inaccessible. 
The program may incorrectly assume these elements do not exist, as shown in Figure 6-7.</p> <p></p> <p> Figure 6-7 \u00a0 Query issues caused by deletion in open addressing </p> <p>To solve this problem, we can adopt the lazy deletion mechanism: instead of directly removing elements from the hash table, use a constant <code>TOMBSTONE</code> to mark the bucket. In this mechanism, both <code>None</code> and <code>TOMBSTONE</code> represent empty buckets and can hold key-value pairs. However, when linear probing encounters <code>TOMBSTONE</code>, it should continue traversing since there may still be key-value pairs below it.</p> <p>That said, lazy deletion may accelerate the performance degradation of the hash table. Every deletion operation produces a delete mark, and as the number of <code>TOMBSTONE</code> marks increases, the search time will also increase, because linear probing may need to skip multiple <code>TOMBSTONE</code> marks to find the target element.</p> <p>To address this, consider recording the index of the first <code>TOMBSTONE</code> encountered during linear probing and swapping the positions of the searched target element with that <code>TOMBSTONE</code>. The benefit of doing this is that each time an element is queried or added, the element is moved to a bucket closer to its ideal position (the starting point of probing), thereby optimizing query efficiency.</p> <p>The code below implements an open addressing (linear probing) hash table with lazy deletion. To make better use of the hash table space, we treat the hash table as a \"circular array\". 
When going beyond the end of the array, we return to the beginning and continue traversing.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig hash_map_open_addressing.py<pre><code>class HashMapOpenAddressing:\n \"\"\"Open addressing hash table\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self.size = 0 # Number of key-value pairs\n self.capacity = 4 # Hash table capacity\n self.load_thres = 2.0 / 3.0 # Load factor threshold for triggering expansion\n self.extend_ratio = 2 # Expansion multiplier\n self.buckets: list[Pair | None] = [None] * self.capacity # Bucket array\n self.TOMBSTONE = Pair(-1, \"-1\") # Removal mark\n\n def hash_func(self, key: int) -> int:\n \"\"\"Hash function\"\"\"\n return key % self.capacity\n\n def load_factor(self) -> float:\n \"\"\"Load factor\"\"\"\n return self.size / self.capacity\n\n def find_bucket(self, key: int) -> int:\n \"\"\"Search for the bucket index corresponding to key\"\"\"\n index = self.hash_func(key)\n first_tombstone = -1\n # Linear probing, break when encountering an empty bucket\n while self.buckets[index] is not None:\n # If the key is encountered, return the corresponding bucket index\n if self.buckets[index].key == key:\n # If a removal mark was encountered earlier, move the key-value pair to that index\n if first_tombstone != -1:\n self.buckets[first_tombstone] = self.buckets[index]\n self.buckets[index] = self.TOMBSTONE\n return first_tombstone # Return the moved bucket index\n return index # Return bucket index\n # Record the first encountered removal mark\n if first_tombstone == -1 and self.buckets[index] is self.TOMBSTONE:\n first_tombstone = index\n # Calculate the bucket index, return to the head if exceeding the tail\n index = (index + 1) % self.capacity\n # If the key does not exist, return the index of the insertion point\n return index if first_tombstone == -1 else first_tombstone\n\n def get(self, key: int) -> str:\n \"\"\"Query operation\"\"\"\n # Search for the bucket index corresponding 
to key\n index = self.find_bucket(key)\n # If the key-value pair is found, return the corresponding val\n if self.buckets[index] not in [None, self.TOMBSTONE]:\n return self.buckets[index].val\n # If the key-value pair does not exist, return None\n return None\n\n def put(self, key: int, val: str):\n \"\"\"Add operation\"\"\"\n # When the load factor exceeds the threshold, perform expansion\n if self.load_factor() > self.load_thres:\n self.extend()\n # Search for the bucket index corresponding to key\n index = self.find_bucket(key)\n # If the key-value pair is found, overwrite val and return\n if self.buckets[index] not in [None, self.TOMBSTONE]:\n self.buckets[index].val = val\n return\n # If the key-value pair does not exist, add the key-value pair\n self.buckets[index] = Pair(key, val)\n self.size += 1\n\n def remove(self, key: int):\n \"\"\"Remove operation\"\"\"\n # Search for the bucket index corresponding to key\n index = self.find_bucket(key)\n # If the key-value pair is found, cover it with a removal mark\n if self.buckets[index] not in [None, self.TOMBSTONE]:\n self.buckets[index] = self.TOMBSTONE\n self.size -= 1\n\n def extend(self):\n \"\"\"Extend hash table\"\"\"\n # Temporarily store the original hash table\n buckets_tmp = self.buckets\n # Initialize the extended new hash table\n self.capacity *= self.extend_ratio\n self.buckets = [None] * self.capacity\n self.size = 0\n # Move key-value pairs from the original hash table to the new hash table\n for pair in buckets_tmp:\n if pair not in [None, self.TOMBSTONE]:\n self.put(pair.key, pair.val)\n\n def print(self):\n \"\"\"Print hash table\"\"\"\n for pair in self.buckets:\n if pair is None:\n print(\"None\")\n elif pair is self.TOMBSTONE:\n print(\"TOMBSTONE\")\n else:\n print(pair.key, \"->\", pair.val)\n</code></pre> hash_map_open_addressing.cpp<pre><code>/* Open addressing hash table */\nclass HashMapOpenAddressing {\n private:\n int size; // Number of key-value pairs\n int capacity = 4; // Hash 
table capacity\n const double loadThres = 2.0 / 3.0; // Load factor threshold for triggering expansion\n const int extendRatio = 2; // Expansion multiplier\n vector<Pair *> buckets; // Bucket array\n Pair *TOMBSTONE = new Pair(-1, \"-1\"); // Removal mark\n\n public:\n /* Constructor */\n HashMapOpenAddressing() : size(0), buckets(capacity, nullptr) {\n }\n\n /* Destructor */\n ~HashMapOpenAddressing() {\n for (Pair *pair : buckets) {\n if (pair != nullptr && pair != TOMBSTONE) {\n delete pair;\n }\n }\n delete TOMBSTONE;\n }\n\n /* Hash function */\n int hashFunc(int key) {\n return key % capacity;\n }\n\n /* Load factor */\n double loadFactor() {\n return (double)size / capacity;\n }\n\n /* Search for the bucket index corresponding to key */\n int findBucket(int key) {\n int index = hashFunc(key);\n int firstTombstone = -1;\n // Linear probing, break when encountering an empty bucket\n while (buckets[index] != nullptr) {\n // If the key is encountered, return the corresponding bucket index\n if (buckets[index]->key == key) {\n // If a removal mark was encountered earlier, move the key-value pair to that index\n if (firstTombstone != -1) {\n buckets[firstTombstone] = buckets[index];\n buckets[index] = TOMBSTONE;\n return firstTombstone; // Return the moved bucket index\n }\n return index; // Return bucket index\n }\n // Record the first encountered removal mark\n if (firstTombstone == -1 && buckets[index] == TOMBSTONE) {\n firstTombstone = index;\n }\n // Calculate the bucket index, return to the head if exceeding the tail\n index = (index + 1) % capacity;\n }\n // If the key does not exist, return the index of the insertion point\n return firstTombstone == -1 ? 
index : firstTombstone;\n }\n\n /* Query operation */\n string get(int key) {\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, return the corresponding val\n if (buckets[index] != nullptr && buckets[index] != TOMBSTONE) {\n return buckets[index]->val;\n }\n // If key-value pair does not exist, return an empty string\n return \"\";\n }\n\n /* Add operation */\n void put(int key, string val) {\n // When the load factor exceeds the threshold, perform expansion\n if (loadFactor() > loadThres) {\n extend();\n }\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, overwrite val and return\n if (buckets[index] != nullptr && buckets[index] != TOMBSTONE) {\n buckets[index]->val = val;\n return;\n }\n // If the key-value pair does not exist, add the key-value pair\n buckets[index] = new Pair(key, val);\n size++;\n }\n\n /* Remove operation */\n void remove(int key) {\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, cover it with a removal mark\n if (buckets[index] != nullptr && buckets[index] != TOMBSTONE) {\n delete buckets[index];\n buckets[index] = TOMBSTONE;\n size--;\n }\n }\n\n /* Extend hash table */\n void extend() {\n // Temporarily store the original hash table\n vector<Pair *> bucketsTmp = buckets;\n // Initialize the extended new hash table\n capacity *= extendRatio;\n buckets = vector<Pair *>(capacity, nullptr);\n size = 0;\n // Move key-value pairs from the original hash table to the new hash table\n for (Pair *pair : bucketsTmp) {\n if (pair != nullptr && pair != TOMBSTONE) {\n put(pair->key, pair->val);\n delete pair;\n }\n }\n }\n\n /* Print hash table */\n void print() {\n for (Pair *pair : buckets) {\n if (pair == nullptr) {\n cout << \"nullptr\" << endl;\n } else if (pair == TOMBSTONE) {\n cout << \"TOMBSTONE\" << endl;\n } else {\n cout << 
pair->key << \" -> \" << pair->val << endl;\n }\n }\n }\n};\n</code></pre> hash_map_open_addressing.java<pre><code>/* Open addressing hash table */\nclass HashMapOpenAddressing {\n private int size; // Number of key-value pairs\n private int capacity = 4; // Hash table capacity\n private final double loadThres = 2.0 / 3.0; // Load factor threshold for triggering expansion\n private final int extendRatio = 2; // Expansion multiplier\n private Pair[] buckets; // Bucket array\n private final Pair TOMBSTONE = new Pair(-1, \"-1\"); // Removal mark\n\n /* Constructor */\n public HashMapOpenAddressing() {\n size = 0;\n buckets = new Pair[capacity];\n }\n\n /* Hash function */\n private int hashFunc(int key) {\n return key % capacity;\n }\n\n /* Load factor */\n private double loadFactor() {\n return (double) size / capacity;\n }\n\n /* Search for the bucket index corresponding to key */\n private int findBucket(int key) {\n int index = hashFunc(key);\n int firstTombstone = -1;\n // Linear probing, break when encountering an empty bucket\n while (buckets[index] != null) {\n // If the key is encountered, return the corresponding bucket index\n if (buckets[index].key == key) {\n // If a removal mark was encountered earlier, move the key-value pair to that index\n if (firstTombstone != -1) {\n buckets[firstTombstone] = buckets[index];\n buckets[index] = TOMBSTONE;\n return firstTombstone; // Return the moved bucket index\n }\n return index; // Return bucket index\n }\n // Record the first encountered removal mark\n if (firstTombstone == -1 && buckets[index] == TOMBSTONE) {\n firstTombstone = index;\n }\n // Calculate the bucket index, return to the head if exceeding the tail\n index = (index + 1) % capacity;\n }\n // If the key does not exist, return the index of the insertion point\n return firstTombstone == -1 ? 
index : firstTombstone;\n }\n\n /* Query operation */\n public String get(int key) {\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, return the corresponding val\n if (buckets[index] != null && buckets[index] != TOMBSTONE) {\n return buckets[index].val;\n }\n // If the key-value pair does not exist, return null\n return null;\n }\n\n /* Add operation */\n public void put(int key, String val) {\n // When the load factor exceeds the threshold, perform expansion\n if (loadFactor() > loadThres) {\n extend();\n }\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, overwrite val and return\n if (buckets[index] != null && buckets[index] != TOMBSTONE) {\n buckets[index].val = val;\n return;\n }\n // If the key-value pair does not exist, add the key-value pair\n buckets[index] = new Pair(key, val);\n size++;\n }\n\n /* Remove operation */\n public void remove(int key) {\n // Search for the bucket index corresponding to key\n int index = findBucket(key);\n // If the key-value pair is found, cover it with a removal mark\n if (buckets[index] != null && buckets[index] != TOMBSTONE) {\n buckets[index] = TOMBSTONE;\n size--;\n }\n }\n\n /* Extend hash table */\n private void extend() {\n // Temporarily store the original hash table\n Pair[] bucketsTmp = buckets;\n // Initialize the extended new hash table\n capacity *= extendRatio;\n buckets = new Pair[capacity];\n size = 0;\n // Move key-value pairs from the original hash table to the new hash table\n for (Pair pair : bucketsTmp) {\n if (pair != null && pair != TOMBSTONE) {\n put(pair.key, pair.val);\n }\n }\n }\n\n /* Print hash table */\n public void print() {\n for (Pair pair : buckets) {\n if (pair == null) {\n System.out.println(\"null\");\n } else if (pair == TOMBSTONE) {\n System.out.println(\"TOMBSTONE\");\n } else {\n System.out.println(pair.key + \" -> \" + pair.val);\n }\n }\n 
}\n}\n</code></pre> hash_map_open_addressing.cs<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.go<pre><code>[class]{hashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.swift<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.js<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.ts<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.dart<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.rs<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.c<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.kt<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.rb<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre> hash_map_open_addressing.zig<pre><code>[class]{HashMapOpenAddressing}-[func]{}\n</code></pre>"},{"location":"chapter_hashing/hash_collision/#2-quadratic-probing","title":"2. \u00a0 Quadratic probing","text":"<p>Quadratic probing is similar to linear probing and is one of the common strategies of open addressing. 
When a collision occurs, quadratic probing does not simply skip a fixed number of steps but skips a number of steps equal to the \"square of the number of probes\", i.e., \\(1, 4, 9, \\dots\\) steps.</p> <p>Quadratic probing has the following advantages:</p> <ul> <li>Quadratic probing attempts to alleviate the clustering effect of linear probing by skipping a distance equal to the square of the probe count.</li> <li>Quadratic probing skips larger distances to find empty positions, which helps to distribute data more evenly.</li> </ul> <p>However, quadratic probing is not perfect:</p> <ul> <li>Clustering still exists, i.e., some positions are more likely to be occupied than others.</li> <li>Due to the growth of squares, quadratic probing may not probe the entire hash table, meaning that even if there are empty buckets in the hash table, quadratic probing may not be able to access them.</li> </ul>"},{"location":"chapter_hashing/hash_collision/#3-double-hashing","title":"3. \u00a0 Double hashing","text":"<p>As the name suggests, the double hashing method uses multiple hash functions \\(f_1(x)\\), \\(f_2(x)\\), \\(f_3(x)\\), \\(\\dots\\) for probing.</p> <ul> <li>Inserting Elements: If hash function \\(f_1(x)\\) encounters a conflict, it tries \\(f_2(x)\\), and so on, until an empty position is found and the element is inserted.</li> <li>Searching for Elements: Search in the same order of hash functions until the target element is found and returned; if an empty position is encountered or all hash functions have been tried, it indicates the element is not in the hash table, so return <code>None</code>.</li> </ul> <p>Compared to linear probing, the double hashing method is less prone to clustering, but multiple hash functions introduce additional computational overhead.</p> <p>Tip</p> <p>Please note that open addressing (linear probing, quadratic probing, and double hashing) hash tables all have the problem of \"cannot directly delete 
elements.\"</p>"},{"location":"chapter_hashing/hash_collision/#623-choice-of-programming-languages","title":"6.2.3 \u00a0 Choice of programming languages","text":"<p>Different programming languages adopt different hash table implementation strategies. Here are a few examples:</p> <ul> <li>Python uses open addressing. The <code>dict</code> dictionary uses pseudo-random numbers for probing.</li> <li>Java uses separate chaining. Since JDK 1.8, when the array length in <code>HashMap</code> reaches 64 and the length of a linked list reaches 8, the linked list is converted to a red-black tree to improve search performance.</li> <li>Go uses separate chaining. Go stipulates that each bucket can store up to 8 key-value pairs, and if the capacity is exceeded, an overflow bucket is linked; when there are too many overflow buckets, a special equal-capacity resizing operation is performed to ensure performance.</li> </ul>"},{"location":"chapter_hashing/hash_map/","title":"6.1 \u00a0 Hash table","text":"<p>A hash table achieves efficient element querying by establishing a mapping between keys and values. Specifically, when we input a <code>key</code> into the hash table, we can retrieve the corresponding <code>value</code> in \\(O(1)\\) time.</p> <p>As shown in Figure 6-1, given \\(n\\) students, each with two pieces of data: \"name\" and \"student number\". If we want to implement a query feature that returns the corresponding name when given a student number, we can use the hash table shown in Figure 6-1.</p> <p></p> <p> Figure 6-1 \u00a0 Abstract representation of a hash table </p> <p>Apart from hash tables, arrays and linked lists can also be used to implement querying functions. 
Their efficiency is compared in Table 6-1.</p> <ul> <li>Adding elements: Simply add the element to the end of the array (or linked list), using \\(O(1)\\) time.</li> <li>Querying elements: Since the array (or linked list) is unordered, it requires traversing all the elements, using \\(O(n)\\) time.</li> <li>Deleting elements: First, locate the element, then delete it from the array (or linked list), using \\(O(n)\\) time.</li> </ul> <p> Table 6-1 \u00a0 Comparison of element query efficiency </p> Array Linked List Hash Table Find Element \\(O(n)\\) \\(O(n)\\) \\(O(1)\\) Add Element \\(O(1)\\) \\(O(1)\\) \\(O(1)\\) Delete Element \\(O(n)\\) \\(O(n)\\) \\(O(1)\\) <p>Observations reveal that the time complexity for adding, deleting, and querying in a hash table is \\(O(1)\\), which is highly efficient.</p>"},{"location":"chapter_hashing/hash_map/#611-common-operations-of-hash-table","title":"6.1.1 \u00a0 Common operations of hash table","text":"<p>Common operations of a hash table include initialization, querying, adding key-value pairs, and deleting key-value pairs, etc. 
Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig hash_map.py<pre><code># Initialize hash table\nhmap: dict = {}\n\n# Add operation\n# Add key-value pair (key, value) to the hash table\nhmap[12836] = \"Xiao Ha\"\nhmap[15937] = \"Xiao Luo\"\nhmap[16750] = \"Xiao Suan\"\nhmap[13276] = \"Xiao Fa\"\nhmap[10583] = \"Xiao Ya\"\n\n# Query operation\n# Input key into hash table, get value\nname: str = hmap[15937]\n\n# Delete operation\n# Delete key-value pair (key, value) from hash table\nhmap.pop(10583)\n</code></pre> hash_map.cpp<pre><code>/* Initialize hash table */\nunordered_map<int, string> map;\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap[12836] = \"Xiao Ha\";\nmap[15937] = \"Xiao Luo\";\nmap[16750] = \"Xiao Suan\";\nmap[13276] = \"Xiao Fa\";\nmap[10583] = \"Xiao Ya\";\n\n/* Query operation */\n// Input key into hash table, get value\nstring name = map[15937];\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.erase(10583);\n</code></pre> hash_map.java<pre><code>/* Initialize hash table */\nMap<Integer, String> map = new HashMap<>();\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap.put(12836, \"Xiao Ha\"); \nmap.put(15937, \"Xiao Luo\"); \nmap.put(16750, \"Xiao Suan\"); \nmap.put(13276, \"Xiao Fa\");\nmap.put(10583, \"Xiao Ya\");\n\n/* Query operation */\n// Input key into hash table, get value\nString name = map.get(15937);\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.remove(10583);\n</code></pre> hash_map.cs<pre><code>/* Initialize hash table */\nDictionary<int, string> map = new() {\n /* Add operation */\n // Add key-value pair (key, value) to the hash table\n { 12836, \"Xiao Ha\" },\n { 15937, \"Xiao Luo\" },\n { 16750, \"Xiao Suan\" },\n { 13276, \"Xiao Fa\" },\n { 10583, \"Xiao Ya\" }\n};\n\n/* Query operation */\n// Input key into hash table, get value\nstring name = map[15937];\n\n/* 
Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.Remove(10583);\n</code></pre> hash_map_test.go<pre><code>/* Initialize hash table */\nhmap := make(map[int]string)\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nhmap[12836] = \"Xiao Ha\"\nhmap[15937] = \"Xiao Luo\"\nhmap[16750] = \"Xiao Suan\"\nhmap[13276] = \"Xiao Fa\"\nhmap[10583] = \"Xiao Ya\"\n\n/* Query operation */\n// Input key into hash table, get value\nname := hmap[15937]\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\ndelete(hmap, 10583)\n</code></pre> hash_map.swift<pre><code>/* Initialize hash table */\nvar map: [Int: String] = [:]\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap[12836] = \"Xiao Ha\"\nmap[15937] = \"Xiao Luo\"\nmap[16750] = \"Xiao Suan\"\nmap[13276] = \"Xiao Fa\"\nmap[10583] = \"Xiao Ya\"\n\n/* Query operation */\n// Input key into hash table, get value\nlet name = map[15937]!\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.removeValue(forKey: 10583)\n</code></pre> hash_map.js<pre><code>/* Initialize hash table */\nconst map = new Map();\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap.set(12836, 'Xiao Ha');\nmap.set(15937, 'Xiao Luo');\nmap.set(16750, 'Xiao Suan');\nmap.set(13276, 'Xiao Fa');\nmap.set(10583, 'Xiao Ya');\n\n/* Query operation */\n// Input key into hash table, get value\nlet name = map.get(15937);\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.delete(10583);\n</code></pre> hash_map.ts<pre><code>/* Initialize hash table */\nconst map = new Map<number, string>();\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap.set(12836, 'Xiao Ha');\nmap.set(15937, 'Xiao Luo');\nmap.set(16750, 'Xiao Suan');\nmap.set(13276, 'Xiao Fa');\nmap.set(10583, 'Xiao Ya');\nconsole.info('\\nAfter adding, the hash table is\\nKey -> 
Value');\nconsole.info(map);\n\n/* Query operation */\n// Input key into hash table, get value\nlet name = map.get(15937);\nconsole.info('\\nInput student number 15937, query name ' + name);\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.delete(10583);\nconsole.info('\\nAfter deleting 10583, the hash table is\\nKey -> Value');\nconsole.info(map);\n</code></pre> hash_map.dart<pre><code>/* Initialize hash table */\nMap<int, String> map = {};\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap[12836] = \"Xiao Ha\";\nmap[15937] = \"Xiao Luo\";\nmap[16750] = \"Xiao Suan\";\nmap[13276] = \"Xiao Fa\";\nmap[10583] = \"Xiao Ya\";\n\n/* Query operation */\n// Input key into hash table, get value\nString name = map[15937];\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nmap.remove(10583);\n</code></pre> hash_map.rs<pre><code>use std::collections::HashMap;\n\n/* Initialize hash table */\nlet mut map: HashMap<i32, String> = HashMap::new();\n\n/* Add operation */\n// Add key-value pair (key, value) to the hash table\nmap.insert(12836, \"Xiao Ha\".to_string());\nmap.insert(15937, \"Xiao Luo\".to_string());\nmap.insert(16750, \"Xiao Suan\".to_string());\nmap.insert(13279, \"Xiao Fa\".to_string());\nmap.insert(10583, \"Xiao Ya\".to_string());\n\n/* Query operation */\n// Input key into hash table, get value\nlet _name: Option<&String> = map.get(&15937);\n\n/* Delete operation */\n// Delete key-value pair (key, value) from hash table\nlet _removed_value: Option<String> = map.remove(&10583);\n</code></pre> hash_map.c<pre><code>// C does not provide a built-in hash table\n</code></pre> hash_map.kt<pre><code>\n</code></pre> hash_map.zig<pre><code>\n</code></pre> Code Visualization <p> Full Screen ></p> <p>There are three common ways to traverse a hash table: traversing key-value pairs, keys, and values. 
Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig hash_map.py<pre><code># Traverse hash table\n# Traverse key-value pairs key->value\nfor key, value in hmap.items():\n print(key, \"->\", value)\n# Traverse keys only\nfor key in hmap.keys():\n print(key)\n# Traverse values only\nfor value in hmap.values():\n print(value)\n</code></pre> hash_map.cpp<pre><code>/* Traverse hash table */\n// Traverse key-value pairs key->value\nfor (auto kv: map) {\n cout << kv.first << \" -> \" << kv.second << endl;\n}\n// Traverse using iterator key->value\nfor (auto iter = map.begin(); iter != map.end(); iter++) {\n cout << iter->first << \"->\" << iter->second << endl;\n}\n</code></pre> hash_map.java<pre><code>/* Traverse hash table */\n// Traverse key-value pairs key->value\nfor (Map.Entry<Integer, String> kv: map.entrySet()) {\n System.out.println(kv.getKey() + \" -> \" + kv.getValue());\n}\n// Traverse keys only\nfor (int key: map.keySet()) {\n System.out.println(key);\n}\n// Traverse values only\nfor (String val: map.values()) {\n System.out.println(val);\n}\n</code></pre> hash_map.cs<pre><code>/* Traverse hash table */\n// Traverse key-value pairs Key->Value\nforeach (var kv in map) {\n Console.WriteLine(kv.Key + \" -> \" + kv.Value);\n}\n// Traverse keys only\nforeach (int key in map.Keys) {\n Console.WriteLine(key);\n}\n// Traverse values only\nforeach (string val in map.Values) {\n Console.WriteLine(val);\n}\n</code></pre> hash_map_test.go<pre><code>/* Traverse hash table */\n// Traverse key-value pairs key->value\nfor key, value := range hmap {\n fmt.Println(key, \"->\", value)\n}\n// Traverse keys only\nfor key := range hmap {\n fmt.Println(key)\n}\n// Traverse values only\nfor _, value := range hmap {\n fmt.Println(value)\n}\n</code></pre> hash_map.swift<pre><code>/* Traverse hash table */\n// Traverse key-value pairs Key->Value\nfor (key, value) in map {\n print(\"\\(key) -> \\(value)\")\n}\n// Traverse keys only\nfor key in map.keys {\n 
print(key)\n}\n// Traverse values only\nfor value in map.values {\n print(value)\n}\n</code></pre> hash_map.js<pre><code>/* Traverse hash table */\nconsole.info('\\nTraverse key-value pairs Key->Value');\nfor (const [k, v] of map.entries()) {\n console.info(k + ' -> ' + v);\n}\nconsole.info('\\nTraverse keys only Key');\nfor (const k of map.keys()) {\n console.info(k);\n}\nconsole.info('\\nTraverse values only Value');\nfor (const v of map.values()) {\n console.info(v);\n}\n</code></pre> hash_map.ts<pre><code>/* Traverse hash table */\nconsole.info('\\nTraverse key-value pairs Key->Value');\nfor (const [k, v] of map.entries()) {\n console.info(k + ' -> ' + v);\n}\nconsole.info('\\nTraverse keys only Key');\nfor (const k of map.keys()) {\n console.info(k);\n}\nconsole.info('\\nTraverse values only Value');\nfor (const v of map.values()) {\n console.info(v);\n}\n</code></pre> hash_map.dart<pre><code>/* Traverse hash table */\n// Traverse key-value pairs Key->Value\nmap.forEach((key, value) {\nprint('$key -> $value');\n});\n\n// Traverse keys only Key\nmap.keys.forEach((key) {\nprint(key);\n});\n\n// Traverse values only Value\nmap.values.forEach((value) {\nprint(value);\n});\n</code></pre> hash_map.rs<pre><code>/* Traverse hash table */\n// Traverse key-value pairs Key->Value\nfor (key, value) in &map {\n println!(\"{key} -> {value}\");\n}\n\n// Traverse keys only Key\nfor key in map.keys() {\n println!(\"{key}\"); \n}\n\n// Traverse values only Value\nfor value in map.values() {\n println!(\"{value}\");\n}\n</code></pre> hash_map.c<pre><code>// C does not provide a built-in hash table\n</code></pre> hash_map.kt<pre><code>\n</code></pre> hash_map.zig<pre><code>// Zig example is not provided\n</code></pre> Code Visualization <p> Full Screen ></p>"},{"location":"chapter_hashing/hash_map/#612-simple-implementation-of-hash-table","title":"6.1.2 \u00a0 Simple implementation of hash table","text":"<p>First, let's consider the simplest case: implementing a hash table using 
just an array. In the hash table, each empty slot in the array is called a bucket, and each bucket can store one key-value pair. Therefore, the query operation involves finding the bucket corresponding to the <code>key</code> and retrieving the <code>value</code> from it.</p> <p>So, how do we locate the appropriate bucket based on the <code>key</code>? This is achieved through a hash function. The role of the hash function is to map a larger input space to a smaller output space. In a hash table, the input space is all possible keys, and the output space is all buckets (array indices). In other words, input a <code>key</code>, and we can use the hash function to determine the storage location of the corresponding key-value pair in the array.</p> <p>The calculation process of the hash function for a given <code>key</code> is divided into the following two steps:</p> <ol> <li>Calculate the hash value using a certain hash algorithm <code>hash()</code>.</li> <li>Take the modulus of the hash value with the number of buckets (array length) <code>capacity</code> to obtain the array index <code>index</code>.</li> </ol> <pre><code>index = hash(key) % capacity\n</code></pre> <p>Afterward, we can use <code>index</code> to access the corresponding bucket in the hash table and thereby retrieve the <code>value</code>.</p> <p>Assuming array length <code>capacity = 100</code> and hash algorithm <code>hash(key) = key</code>, the hash function is <code>key % 100</code>. Figure 6-2 uses <code>key</code> as the student number and <code>value</code> as the name to demonstrate the working principle of the hash function.</p> <p></p> <p> Figure 6-2 \u00a0 Working principle of hash function </p> <p>The following code implements a simple hash table. 
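Before that implementation, the two-step calculation above can be verified directly. This quick sketch assumes the section's running example: <code>capacity = 100</code> and the identity hash algorithm <code>hash(key) = key</code>, applied to the student numbers used in this chapter:

```python
def hash_func(key: int, capacity: int = 100) -> int:
    """Map a key to a bucket index in two steps."""
    h = key              # step 1: compute the hash value (identity function here)
    return h % capacity  # step 2: modulus by the bucket count gives the index

hash_func(12836)  # -> 36: student number 12836 is stored in bucket 36
hash_func(15937)  # -> 37
```

Because the index is `key % 100`, the last two digits of the student number determine the bucket, which is also why keys sharing those digits collide.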
Here, we encapsulate <code>key</code> and <code>value</code> into a class <code>Pair</code> to represent the key-value pair.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array_hash_map.py<pre><code>class Pair:\n \"\"\"Key-value pair\"\"\"\n\n def __init__(self, key: int, val: str):\n self.key = key\n self.val = val\n\nclass ArrayHashMap:\n \"\"\"Hash table based on array implementation\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n # Initialize an array, containing 100 buckets\n self.buckets: list[Pair | None] = [None] * 100\n\n def hash_func(self, key: int) -> int:\n \"\"\"Hash function\"\"\"\n index = key % 100\n return index\n\n def get(self, key: int) -> str:\n \"\"\"Query operation\"\"\"\n index: int = self.hash_func(key)\n pair: Pair = self.buckets[index]\n if pair is None:\n return None\n return pair.val\n\n def put(self, key: int, val: str):\n \"\"\"Add operation\"\"\"\n pair = Pair(key, val)\n index: int = self.hash_func(key)\n self.buckets[index] = pair\n\n def remove(self, key: int):\n \"\"\"Remove operation\"\"\"\n index: int = self.hash_func(key)\n # Set to None, representing removal\n self.buckets[index] = None\n\n def entry_set(self) -> list[Pair]:\n \"\"\"Get all key-value pairs\"\"\"\n result: list[Pair] = []\n for pair in self.buckets:\n if pair is not None:\n result.append(pair)\n return result\n\n def key_set(self) -> list[int]:\n \"\"\"Get all keys\"\"\"\n result = []\n for pair in self.buckets:\n if pair is not None:\n result.append(pair.key)\n return result\n\n def value_set(self) -> list[str]:\n \"\"\"Get all values\"\"\"\n result = []\n for pair in self.buckets:\n if pair is not None:\n result.append(pair.val)\n return result\n\n def print(self):\n \"\"\"Print hash table\"\"\"\n for pair in self.buckets:\n if pair is not None:\n print(pair.key, \"->\", pair.val)\n</code></pre> array_hash_map.cpp<pre><code>/* Key-value pair */\nstruct Pair {\n public:\n int key;\n string val;\n Pair(int key, string val) {\n this->key = 
key;\n this->val = val;\n }\n};\n\n/* Hash table based on array implementation */\nclass ArrayHashMap {\n private:\n vector<Pair *> buckets;\n\n public:\n ArrayHashMap() {\n // Initialize an array, containing 100 buckets\n buckets = vector<Pair *>(100);\n }\n\n ~ArrayHashMap() {\n // Free memory\n for (const auto &bucket : buckets) {\n delete bucket;\n }\n buckets.clear();\n }\n\n /* Hash function */\n int hashFunc(int key) {\n int index = key % 100;\n return index;\n }\n\n /* Query operation */\n string get(int key) {\n int index = hashFunc(key);\n Pair *pair = buckets[index];\n if (pair == nullptr)\n return \"\";\n return pair->val;\n }\n\n /* Add operation */\n void put(int key, string val) {\n Pair *pair = new Pair(key, val);\n int index = hashFunc(key);\n buckets[index] = pair;\n }\n\n /* Remove operation */\n void remove(int key) {\n int index = hashFunc(key);\n // Free memory and set to nullptr\n delete buckets[index];\n buckets[index] = nullptr;\n }\n\n /* Get all key-value pairs */\n vector<Pair *> pairSet() {\n vector<Pair *> pairSet;\n for (Pair *pair : buckets) {\n if (pair != nullptr) {\n pairSet.push_back(pair);\n }\n }\n return pairSet;\n }\n\n /* Get all keys */\n vector<int> keySet() {\n vector<int> keySet;\n for (Pair *pair : buckets) {\n if (pair != nullptr) {\n keySet.push_back(pair->key);\n }\n }\n return keySet;\n }\n\n /* Get all values */\n vector<string> valueSet() {\n vector<string> valueSet;\n for (Pair *pair : buckets) {\n if (pair != nullptr) {\n valueSet.push_back(pair->val);\n }\n }\n return valueSet;\n }\n\n /* Print hash table */\n void print() {\n for (Pair *kv : pairSet()) {\n cout << kv->key << \" -> \" << kv->val << endl;\n }\n }\n};\n</code></pre> array_hash_map.java<pre><code>/* Key-value pair */\nclass Pair {\n public int key;\n public String val;\n\n public Pair(int key, String val) {\n this.key = key;\n this.val = val;\n }\n}\n\n/* Hash table based on array implementation */\nclass ArrayHashMap {\n private List<Pair> 
buckets;\n\n public ArrayHashMap() {\n // Initialize an array, containing 100 buckets\n buckets = new ArrayList<>();\n for (int i = 0; i < 100; i++) {\n buckets.add(null);\n }\n }\n\n /* Hash function */\n private int hashFunc(int key) {\n int index = key % 100;\n return index;\n }\n\n /* Query operation */\n public String get(int key) {\n int index = hashFunc(key);\n Pair pair = buckets.get(index);\n if (pair == null)\n return null;\n return pair.val;\n }\n\n /* Add operation */\n public void put(int key, String val) {\n Pair pair = new Pair(key, val);\n int index = hashFunc(key);\n buckets.set(index, pair);\n }\n\n /* Remove operation */\n public void remove(int key) {\n int index = hashFunc(key);\n // Set to null, indicating removal\n buckets.set(index, null);\n }\n\n /* Get all key-value pairs */\n public List<Pair> pairSet() {\n List<Pair> pairSet = new ArrayList<>();\n for (Pair pair : buckets) {\n if (pair != null)\n pairSet.add(pair);\n }\n return pairSet;\n }\n\n /* Get all keys */\n public List<Integer> keySet() {\n List<Integer> keySet = new ArrayList<>();\n for (Pair pair : buckets) {\n if (pair != null)\n keySet.add(pair.key);\n }\n return keySet;\n }\n\n /* Get all values */\n public List<String> valueSet() {\n List<String> valueSet = new ArrayList<>();\n for (Pair pair : buckets) {\n if (pair != null)\n valueSet.add(pair.val);\n }\n return valueSet;\n }\n\n /* Print hash table */\n public void print() {\n for (Pair kv : pairSet()) {\n System.out.println(kv.key + \" -> \" + kv.val);\n }\n }\n}\n</code></pre> array_hash_map.cs<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.go<pre><code>[class]{pair}-[func]{}\n\n[class]{arrayHashMap}-[func]{}\n</code></pre> array_hash_map.swift<pre><code>[file]{utils/pair.swift}-[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.js<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> 
array_hash_map.ts<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.dart<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.rs<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.c<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.kt<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.rb<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre> array_hash_map.zig<pre><code>[class]{Pair}-[func]{}\n\n[class]{ArrayHashMap}-[func]{}\n</code></pre>"},{"location":"chapter_hashing/hash_map/#613-hash-collision-and-resizing","title":"6.1.3 \u00a0 Hash collision and resizing","text":"<p>Fundamentally, the role of the hash function is to map the entire input space of all keys to the output space of all array indices. However, the input space is often much larger than the output space. Therefore, theoretically, there must be situations where \"multiple inputs correspond to the same output\".</p> <p>For the hash function in the above example, if the last two digits of the input <code>key</code> are the same, the output of the hash function will also be the same. For example, when querying for students with student numbers 12836 and 20336, we find:</p> <pre><code>12836 % 100 = 36\n20336 % 100 = 36\n</code></pre> <p>As shown in Figure 6-3, both student numbers point to the same name, which is obviously incorrect. This situation where multiple inputs correspond to the same output is known as hash collision.</p> <p></p> <p> Figure 6-3 \u00a0 Example of hash collision </p> <p>It is easy to understand that the larger the capacity \\(n\\) of the hash table, the lower the probability of multiple keys being allocated to the same bucket, and the fewer the collisions. 
Therefore, expanding the capacity of the hash table can reduce hash collisions.</p> <p>As shown in Figure 6-4, before expansion, key-value pairs <code>(136, A)</code> and <code>(236, D)</code> collided; after expansion, the collision is resolved.</p> <p></p> <p> Figure 6-4 \u00a0 Hash table expansion </p> <p>Similar to array expansion, resizing a hash table requires migrating all key-value pairs from the original hash table to the new one, which is time-consuming. Furthermore, since the capacity <code>capacity</code> of the hash table changes, we need to recalculate the storage positions of all key-value pairs using the hash function, which adds to the computational overhead of the resizing process. Therefore, programming languages often reserve a sufficiently large capacity for the hash table to prevent frequent resizing.</p> <p>The load factor is an important concept for hash tables. It is defined as the ratio of the number of elements in the hash table to the number of buckets. It is used to measure the severity of hash collisions and is often used as a trigger for resizing the hash table. For example, in Java, when the load factor exceeds \\(0.75\\), the system will resize the hash table to twice its original size.</p>"},{"location":"chapter_hashing/summary/","title":"6.4 \u00a0 Summary","text":""},{"location":"chapter_hashing/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>Given an input <code>key</code>, a hash table can retrieve the corresponding <code>value</code> in \\(O(1)\\) time, which is highly efficient.</li> <li>Common hash table operations include querying, adding key-value pairs, deleting key-value pairs, and traversing the hash table.</li> <li>The hash function maps a <code>key</code> to an array index, allowing access to the corresponding bucket and retrieval of the <code>value</code>.</li> <li>Two different keys may end up with the same array index after hashing, leading to erroneous query results. 
This phenomenon is known as hash collision.</li> <li>The larger the capacity of the hash table, the lower the probability of hash collisions. Therefore, hash table resizing can mitigate hash collisions. Similar to array resizing, hash table resizing is costly.</li> <li>The load factor, defined as the number of elements divided by the number of buckets, reflects the severity of hash collisions and is often used as a condition to trigger hash table resizing.</li> <li>Chaining addresses hash collisions by converting each element into a linked list, storing all colliding elements in the same list. However, excessively long lists can reduce query efficiency, which can be improved by converting the lists into red-black trees.</li> <li>Open addressing handles hash collisions through multiple probes. Linear probing uses a fixed step size but it cannot delete elements and is prone to clustering. Multiple hashing uses several hash functions for probing which reduces clustering compared to linear probing but increases computational overhead.</li> <li>Different programming languages adopt various hash table implementations. For example, Java's <code>HashMap</code> uses chaining, while Python's <code>dict</code> employs open addressing.</li> <li>In hash tables, we desire hash algorithms with determinism, high efficiency, and uniform distribution. In cryptography, hash algorithms should also possess collision resistance and the avalanche effect.</li> <li>Hash algorithms typically use large prime numbers as moduli to ensure uniform distribution of hash values and reduce hash collisions.</li> <li>Common hash algorithms include MD5, SHA-1, SHA-2, and SHA-3. MD5 is often used for file integrity checks, while SHA-2 is commonly used in secure applications and protocols.</li> <li>Programming languages usually provide built-in hash algorithms for data types to calculate bucket indices in hash tables. 
Generally, only immutable objects are hashable.</li> </ul>"},{"location":"chapter_hashing/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: When does the time complexity of a hash table degrade to \\(O(n)\\)?</p> <p>The time complexity of a hash table can degrade to \\(O(n)\\) when hash collisions are severe. When the hash function is well-designed, the capacity is set appropriately, and collisions are evenly distributed, the time complexity is \\(O(1)\\). We usually consider the time complexity to be \\(O(1)\\) when using built-in hash tables in programming languages.</p> <p>Q: Why not use the hash function \\(f(x) = x\\)? This would eliminate collisions.</p> <p>Under the hash function \\(f(x) = x\\), each element corresponds to a unique bucket index, which is equivalent to an array. However, the input space is usually much larger than the output space (array length), so the last step of a hash function is often to take the modulo of the array length. In other words, the goal of a hash table is to map a larger state space to a smaller one while providing \\(O(1)\\) query efficiency.</p> <p>Q: Why can hash tables be more efficient than arrays, linked lists, or binary trees, even though hash tables are implemented using these structures?</p> <p>Firstly, hash tables have higher time efficiency but lower space efficiency. A significant portion of memory in hash tables remains unused.</p> <p>Secondly, hash tables are only more time-efficient in specific use cases. If a feature can be implemented with the same time complexity using an array or a linked list, it's usually faster than using a hash table. This is because the computation of the hash function incurs overhead, making the constant factor in the time complexity larger.</p> <p>Lastly, the time complexity of hash tables can degrade. 
For example, in chaining, we perform search operations in a linked list or red-black tree, which still risks degrading to \\(O(n)\\) time.</p> <p>Q: Does multiple hashing also have the flaw of not being able to delete elements directly? Can space marked as deleted be reused?</p> <p>Multiple hashing is a form of open addressing, and all open addressing methods have the drawback of not being able to delete elements directly; they require marking elements as deleted. Marked spaces can be reused. When inserting new elements into the hash table, and the hash function points to a position marked as deleted, that position can be used by the new element. This maintains the probing sequence of the hash table while ensuring efficient use of space.</p> <p>Q: Why do hash collisions occur during the search process in linear probing?</p> <p>During the search process, the hash function points to the corresponding bucket and key-value pair. If the <code>key</code> doesn't match, it indicates a hash collision. Therefore, linear probing will search downwards at a predetermined step size until the correct key-value pair is found or the search fails.</p> <p>Q: Why can resizing a hash table alleviate hash collisions?</p> <p>The last step of a hash function often involves taking the modulo of the array length \\(n\\), to keep the output within the array index range. When resizing, the array length \\(n\\) changes, and the indices corresponding to the keys may also change. Keys that were previously mapped to the same bucket might be distributed across multiple buckets after resizing, thereby mitigating hash collisions.</p>"},{"location":"chapter_heap/","title":"Chapter 8. 
\u00a0 Heap","text":"<p>Abstract</p> <p>The heap is like mountain peaks, stacked and undulating, each with its unique shape.</p> <p>Among these peaks, the highest one always catches the eye first.</p>"},{"location":"chapter_heap/#chapter-contents","title":"Chapter contents","text":"<ul> <li>8.1 \u00a0 Heap</li> <li>8.2 \u00a0 Building a heap</li> <li>8.3 \u00a0 Top-k problem</li> <li>8.4 \u00a0 Summary</li> </ul>"},{"location":"chapter_heap/build_heap/","title":"8.2 \u00a0 Heap construction operation","text":"<p>In some cases, we want to build a heap using all elements of a list, and this process is known as \"heap construction operation.\"</p>"},{"location":"chapter_heap/build_heap/#821-implementing-with-heap-insertion-operation","title":"8.2.1 \u00a0 Implementing with heap insertion operation","text":"<p>First, we create an empty heap and then iterate through the list, performing the \"heap insertion operation\" on each element in turn. This means adding the element to the end of the heap and then \"heapifying\" it from bottom to top.</p> <p>Each time an element is added to the heap, the length of the heap increases by one. 
Since nodes are added to the binary tree from top to bottom, the heap is constructed \"from top to bottom.\"</p> <p>Let the number of elements be \\(n\\), and each element's insertion operation takes \\(O(\\log{n})\\) time, thus the time complexity of this heap construction method is \\(O(n \\log n)\\).</p>"},{"location":"chapter_heap/build_heap/#822-implementing-by-heapifying-through-traversal","title":"8.2.2 \u00a0 Implementing by heapifying through traversal","text":"<p>In fact, we can implement a more efficient method of heap construction in two steps.</p> <ol> <li>Add all elements of the list as they are into the heap, at this point the properties of the heap are not yet satisfied.</li> <li>Traverse the heap in reverse order (reverse of level-order traversal), and perform \"top to bottom heapify\" on each non-leaf node.</li> </ol> <p>After heapifying a node, the subtree with that node as the root becomes a valid sub-heap. Since the traversal is in reverse order, the heap is built \"from bottom to top.\"</p> <p>The reason for choosing reverse traversal is that it ensures the subtree below the current node is already a valid sub-heap, making the heapification of the current node effective.</p> <p>It's worth mentioning that since leaf nodes have no children, they naturally form valid sub-heaps and do not need to be heapified. 
As shown in the following code, the last non-leaf node is the parent of the last node; we start from it and traverse in reverse order to perform heapification:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_heap.py<pre><code>def __init__(self, nums: list[int]):\n \"\"\"Constructor, build heap based on input list\"\"\"\n # Add all list elements into the heap\n self.max_heap = nums\n # Heapify all nodes except leaves\n for i in range(self.parent(self.size() - 1), -1, -1):\n self.sift_down(i)\n</code></pre> my_heap.cpp<pre><code>/* Constructor, build heap based on input list */\nMaxHeap(vector<int> nums) {\n // Add all list elements into the heap\n maxHeap = nums;\n // Heapify all nodes except leaves\n for (int i = parent(size() - 1); i >= 0; i--) {\n siftDown(i);\n }\n}\n</code></pre> my_heap.java<pre><code>/* Constructor, build heap based on input list */\nMaxHeap(List<Integer> nums) {\n // Add all list elements into the heap\n maxHeap = new ArrayList<>(nums);\n // Heapify all nodes except leaves\n for (int i = parent(size() - 1); i >= 0; i--) {\n siftDown(i);\n }\n}\n</code></pre> my_heap.cs<pre><code>[class]{MaxHeap}-[func]{MaxHeap}\n</code></pre> my_heap.go<pre><code>[class]{maxHeap}-[func]{newMaxHeap}\n</code></pre> my_heap.swift<pre><code>[class]{MaxHeap}-[func]{init}\n</code></pre> my_heap.js<pre><code>[class]{MaxHeap}-[func]{constructor}\n</code></pre> my_heap.ts<pre><code>[class]{MaxHeap}-[func]{constructor}\n</code></pre> my_heap.dart<pre><code>[class]{MaxHeap}-[func]{MaxHeap}\n</code></pre> my_heap.rs<pre><code>[class]{MaxHeap}-[func]{new}\n</code></pre> my_heap.c<pre><code>[class]{MaxHeap}-[func]{newMaxHeap}\n</code></pre> my_heap.kt<pre><code>[class]{MaxHeap}-[func]{}\n</code></pre> my_heap.rb<pre><code>[class]{MaxHeap}-[func]{__init__}\n</code></pre> my_heap.zig<pre><code>[class]{MaxHeap}-[func]{init}\n</code></pre>"},{"location":"chapter_heap/build_heap/#823-complexity-analysis","title":"8.2.3 \u00a0 Complexity analysis","text":"<p>Next, let's 
attempt to calculate the time complexity of this second method of heap construction.</p> <ul> <li>Assuming the number of nodes in the complete binary tree is \\(n\\), then the number of leaf nodes is \\((n + 1) / 2\\), where \\(/\\) is integer division. Therefore, the number of nodes that need to be heapified is \\((n - 1) / 2\\).</li> <li>In the process of \"top to bottom heapification,\" each node is sifted down at most as far as the leaf nodes, so the maximum number of iterations is the height of the binary tree \\(\\log n\\).</li> </ul> <p>Multiplying the two, we get the time complexity of the heap construction process as \\(O(n \\log n)\\). But this estimate is not tight, because it does not take into account the nature of the binary tree having far more nodes at the lower levels than at the top.</p> <p>Let's perform a more accurate calculation. To simplify the calculation, assume a \"perfect binary tree\" with \\(n\\) nodes and height \\(h\\); this assumption does not affect the correctness of the result.</p> <p></p> <p> Figure 8-5 \u00a0 Node counts at each level of a perfect binary tree </p> <p>As shown in Figure 8-5, the maximum number of iterations for a node \"to be heapified from top to bottom\" is equal to the distance from that node to the leaf nodes, which is precisely \"node height.\" Therefore, we can sum the \"number of nodes \\(\\times\\) node height\" at each level, to get the total number of heapification iterations for all nodes.</p> \\[ T(h) = 2^0h + 2^1(h-1) + 2^2(h-2) + \\dots + 2^{h-1}\\times1 \\] <p>To simplify this equation, we can use the standard technique for summing such series: first multiply \\(T(h)\\) by \\(2\\), to get:</p> \\[ \\begin{aligned} T(h) & = 2^0h + 2^1(h-1) + 2^2(h-2) + \\dots + 2^{h-1}\\times1 \\newline 2T(h) & = 2^1h + 2^2(h-1) + 2^3(h-2) + \\dots + 2^h\\times1 \\newline \\end{aligned} \\] <p>Subtracting \\(T(h)\\) from \\(2T(h)\\) term by term (shifted subtraction), most terms cancel, and we get:</p> \\[ 2T(h) - T(h) = T(h) = -2^0h + 2^1
+ 2^2 + \\dots + 2^{h-1} + 2^h \\] <p>Observing the equation, apart from the linear term \\(-h\\), \\(T(h)\\) is a geometric series, which can be calculated directly using the geometric sum formula, resulting in a time complexity of:</p> \\[ \\begin{aligned} T(h) & = 2 \\frac{1 - 2^h}{1 - 2} - h \\newline & = 2^{h+1} - h - 2 \\newline & = O(2^h) \\end{aligned} \\] <p>Further, a perfect binary tree with height \\(h\\) has \\(n = 2^{h+1} - 1\\) nodes, thus the complexity is \\(O(2^h) = O(n)\\). This calculation shows that the time complexity of inputting a list and constructing a heap is \\(O(n)\\), which is very efficient.</p>"},{"location":"chapter_heap/heap/","title":"8.1 \u00a0 Heap","text":"<p>A heap is a complete binary tree that satisfies specific conditions and can be mainly divided into two types, as shown in Figure 8-1.</p> <ul> <li>Min heap: The value of any node \\(\\leq\\) the values of its child nodes.</li> <li>Max heap: The value of any node \\(\\geq\\) the values of its child nodes.</li> </ul> <p></p> <p> Figure 8-1 \u00a0 Min heap and max heap </p> <p>As a special case of a complete binary tree, heaps have the following characteristics:</p> <ul> <li>The bottom layer nodes are filled from left to right, and nodes in other layers are fully filled.</li> <li>The root node of the binary tree is called the \"heap top,\" and the bottom-rightmost node is called the \"heap bottom.\"</li> <li>For max heaps (min heaps), the value of the heap top element (root node) is the largest (smallest).</li> </ul>"},{"location":"chapter_heap/heap/#811-common-operations-on-heaps","title":"8.1.1 \u00a0 Common operations on heaps","text":"<p>It should be noted that many programming languages provide a priority queue, which is an abstract data structure defined as a queue with priority sorting.</p> <p>In fact, heaps are often used to implement priority queues, with max heaps equivalent to priority queues where elements are dequeued in descending order.
From a usage perspective, we can consider \"priority queue\" and \"heap\" as equivalent data structures. Therefore, this book does not make a special distinction between the two, uniformly referring to them as \"heap.\"</p> <p>Common operations on heaps are shown in Table 8-1, and the method names depend on the programming language.</p> <p> Table 8-1 \u00a0 Efficiency of Heap Operations </p> Method name Description Time complexity <code>push()</code> Add an element to the heap \\(O(\\log n)\\) <code>pop()</code> Remove the top element from the heap \\(O(\\log n)\\) <code>peek()</code> Access the top element (for max/min heap, the max/min value) \\(O(1)\\) <code>size()</code> Get the number of elements in the heap \\(O(1)\\) <code>isEmpty()</code> Check if the heap is empty \\(O(1)\\) <p>In practice, we can directly use the heap class (or priority queue class) provided by programming languages.</p> <p>Similar to sorting algorithms where we have \"ascending order\" and \"descending order,\" we can switch between \"min heap\" and \"max heap\" by setting a <code>flag</code> or modifying the <code>Comparator</code>. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig heap.py<pre><code># \u521d\u59cb\u5316\u5c0f\u9876\u5806\nmin_heap, flag = [], 1\n# \u521d\u59cb\u5316\u5927\u9876\u5806\nmax_heap, flag = [], -1\n\n# Python \u7684 heapq \u6a21\u5757\u9ed8\u8ba4\u5b9e\u73b0\u5c0f\u9876\u5806\n# \u8003\u8651\u5c06\u201c\u5143\u7d20\u53d6\u8d1f\u201d\u540e\u518d\u5165\u5806\uff0c\u8fd9\u6837\u5c31\u53ef\u4ee5\u5c06\u5927\u5c0f\u5173\u7cfb\u98a0\u5012\uff0c\u4ece\u800c\u5b9e\u73b0\u5927\u9876\u5806\n# \u5728\u672c\u793a\u4f8b\u4e2d\uff0cflag = 1 \u65f6\u5bf9\u5e94\u5c0f\u9876\u5806\uff0cflag = -1 \u65f6\u5bf9\u5e94\u5927\u9876\u5806\n\n# \u5143\u7d20\u5165\u5806\nheapq.heappush(max_heap, flag * 1)\nheapq.heappush(max_heap, flag * 3)\nheapq.heappush(max_heap, flag * 2)\nheapq.heappush(max_heap, flag * 5)\nheapq.heappush(max_heap, flag * 4)\n\n# \u83b7\u53d6\u5806\u9876\u5143\u7d20\npeek: int = flag * max_heap[0] # 5\n\n# \u5806\u9876\u5143\u7d20\u51fa\u5806\n# \u51fa\u5806\u5143\u7d20\u4f1a\u5f62\u6210\u4e00\u4e2a\u4ece\u5927\u5230\u5c0f\u7684\u5e8f\u5217\nval = flag * heapq.heappop(max_heap) # 5\nval = flag * heapq.heappop(max_heap) # 4\nval = flag * heapq.heappop(max_heap) # 3\nval = flag * heapq.heappop(max_heap) # 2\nval = flag * heapq.heappop(max_heap) # 1\n\n# \u83b7\u53d6\u5806\u5927\u5c0f\nsize: int = len(max_heap)\n\n# \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a\nis_empty: bool = not max_heap\n\n# \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806\nmin_heap: list[int] = [1, 3, 2, 5, 4]\nheapq.heapify(min_heap)\n</code></pre> heap.cpp<pre><code>/* \u521d\u59cb\u5316\u5806 */\n// \u521d\u59cb\u5316\u5c0f\u9876\u5806\npriority_queue<int, vector<int>, greater<int>> minHeap;\n// \u521d\u59cb\u5316\u5927\u9876\u5806\npriority_queue<int, vector<int>, less<int>> maxHeap;\n\n/* \u5143\u7d20\u5165\u5806 */\nmaxHeap.push(1);\nmaxHeap.push(3);\nmaxHeap.push(2);\nmaxHeap.push(5);\nmaxHeap.push(4);\n\n/* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\nint peek = maxHeap.top(); // 
5\n\n/* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\n// \u51fa\u5806\u5143\u7d20\u4f1a\u5f62\u6210\u4e00\u4e2a\u4ece\u5927\u5230\u5c0f\u7684\u5e8f\u5217\nmaxHeap.pop(); // 5\nmaxHeap.pop(); // 4\nmaxHeap.pop(); // 3\nmaxHeap.pop(); // 2\nmaxHeap.pop(); // 1\n\n/* \u83b7\u53d6\u5806\u5927\u5c0f */\nint size = maxHeap.size();\n\n/* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\nbool isEmpty = maxHeap.empty();\n\n/* \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806 */\nvector<int> input{1, 3, 2, 5, 4};\npriority_queue<int, vector<int>, greater<int>> minHeap(input.begin(), input.end());\n</code></pre> heap.java<pre><code>/* \u521d\u59cb\u5316\u5806 */\n// \u521d\u59cb\u5316\u5c0f\u9876\u5806\nQueue<Integer> minHeap = new PriorityQueue<>();\n// \u521d\u59cb\u5316\u5927\u9876\u5806\uff08\u4f7f\u7528 lambda \u8868\u8fbe\u5f0f\u4fee\u6539 Comparator \u5373\u53ef\uff09\nQueue<Integer> maxHeap = new PriorityQueue<>((a, b) -> b - a);\n\n/* \u5143\u7d20\u5165\u5806 */\nmaxHeap.offer(1);\nmaxHeap.offer(3);\nmaxHeap.offer(2);\nmaxHeap.offer(5);\nmaxHeap.offer(4);\n\n/* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\nint peek = maxHeap.peek(); // 5\n\n/* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\n// \u51fa\u5806\u5143\u7d20\u4f1a\u5f62\u6210\u4e00\u4e2a\u4ece\u5927\u5230\u5c0f\u7684\u5e8f\u5217\npeek = maxHeap.poll(); // 5\npeek = maxHeap.poll(); // 4\npeek = maxHeap.poll(); // 3\npeek = maxHeap.poll(); // 2\npeek = maxHeap.poll(); // 1\n\n/* \u83b7\u53d6\u5806\u5927\u5c0f */\nint size = maxHeap.size();\n\n/* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\nboolean isEmpty = maxHeap.isEmpty();\n\n/* \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806 */\nminHeap = new PriorityQueue<>(Arrays.asList(1, 3, 2, 5, 4));\n</code></pre> heap.cs<pre><code>/* \u521d\u59cb\u5316\u5806 */\n// \u521d\u59cb\u5316\u5c0f\u9876\u5806\nPriorityQueue<int, int> minHeap = new();\n// \u521d\u59cb\u5316\u5927\u9876\u5806\uff08\u4f7f\u7528 lambda \u8868\u8fbe\u5f0f\u4fee\u6539 Comparator \u5373\u53ef\uff09\nPriorityQueue<int, int> 
maxHeap = new(Comparer<int>.Create((x, y) => y - x));\n\n/* \u5143\u7d20\u5165\u5806 */\nmaxHeap.Enqueue(1, 1);\nmaxHeap.Enqueue(3, 3);\nmaxHeap.Enqueue(2, 2);\nmaxHeap.Enqueue(5, 5);\nmaxHeap.Enqueue(4, 4);\n\n/* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\nint peek = maxHeap.Peek();//5\n\n/* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\n// \u51fa\u5806\u5143\u7d20\u4f1a\u5f62\u6210\u4e00\u4e2a\u4ece\u5927\u5230\u5c0f\u7684\u5e8f\u5217\npeek = maxHeap.Dequeue(); // 5\npeek = maxHeap.Dequeue(); // 4\npeek = maxHeap.Dequeue(); // 3\npeek = maxHeap.Dequeue(); // 2\npeek = maxHeap.Dequeue(); // 1\n\n/* \u83b7\u53d6\u5806\u5927\u5c0f */\nint size = maxHeap.Count;\n\n/* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\nbool isEmpty = maxHeap.Count == 0;\n\n/* \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806 */\nminHeap = new PriorityQueue<int, int>([(1, 1), (3, 3), (2, 2), (5, 5), (4, 4)]);\n</code></pre> heap.go<pre><code>// Go \u8bed\u8a00\u4e2d\u53ef\u4ee5\u901a\u8fc7\u5b9e\u73b0 heap.Interface \u6765\u6784\u5efa\u6574\u6570\u5927\u9876\u5806\n// \u5b9e\u73b0 heap.Interface \u9700\u8981\u540c\u65f6\u5b9e\u73b0 sort.Interface\ntype intHeap []any\n\n// Push heap.Interface \u7684\u65b9\u6cd5\uff0c\u5b9e\u73b0\u63a8\u5165\u5143\u7d20\u5230\u5806\nfunc (h *intHeap) Push(x any) {\n // Push \u548c Pop \u4f7f\u7528 pointer receiver \u4f5c\u4e3a\u53c2\u6570\n // \u56e0\u4e3a\u5b83\u4eec\u4e0d\u4ec5\u4f1a\u5bf9\u5207\u7247\u7684\u5185\u5bb9\u8fdb\u884c\u8c03\u6574\uff0c\u8fd8\u4f1a\u4fee\u6539\u5207\u7247\u7684\u957f\u5ea6\u3002\n *h = append(*h, x.(int))\n}\n\n// Pop heap.Interface \u7684\u65b9\u6cd5\uff0c\u5b9e\u73b0\u5f39\u51fa\u5806\u9876\u5143\u7d20\nfunc (h *intHeap) Pop() any {\n // \u5f85\u51fa\u5806\u5143\u7d20\u5b58\u653e\u5728\u6700\u540e\n last := (*h)[len(*h)-1]\n *h = (*h)[:len(*h)-1]\n return last\n}\n\n// Len sort.Interface \u7684\u65b9\u6cd5\nfunc (h *intHeap) Len() int {\n return len(*h)\n}\n\n// Less sort.Interface \u7684\u65b9\u6cd5\nfunc (h *intHeap) Less(i, j int) bool {\n 
// \u5982\u679c\u5b9e\u73b0\u5c0f\u9876\u5806\uff0c\u5219\u9700\u8981\u8c03\u6574\u4e3a\u5c0f\u4e8e\u53f7\n return (*h)[i].(int) > (*h)[j].(int)\n}\n\n// Swap sort.Interface \u7684\u65b9\u6cd5\nfunc (h *intHeap) Swap(i, j int) {\n (*h)[i], (*h)[j] = (*h)[j], (*h)[i]\n}\n\n// Top \u83b7\u53d6\u5806\u9876\u5143\u7d20\nfunc (h *intHeap) Top() any {\n return (*h)[0]\n}\n\n/* Driver Code */\nfunc TestHeap(t *testing.T) {\n /* \u521d\u59cb\u5316\u5806 */\n // \u521d\u59cb\u5316\u5927\u9876\u5806\n maxHeap := &intHeap{}\n heap.Init(maxHeap)\n /* \u5143\u7d20\u5165\u5806 */\n // \u8c03\u7528 heap.Interface \u7684\u65b9\u6cd5\uff0c\u6765\u6dfb\u52a0\u5143\u7d20\n heap.Push(maxHeap, 1)\n heap.Push(maxHeap, 3)\n heap.Push(maxHeap, 2)\n heap.Push(maxHeap, 4)\n heap.Push(maxHeap, 5)\n\n /* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\n top := maxHeap.Top()\n fmt.Printf(\"\u5806\u9876\u5143\u7d20\u4e3a %d\\n\", top)\n\n /* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\n // \u8c03\u7528 heap.Interface \u7684\u65b9\u6cd5\uff0c\u6765\u79fb\u9664\u5143\u7d20\n heap.Pop(maxHeap) // 5\n heap.Pop(maxHeap) // 4\n heap.Pop(maxHeap) // 3\n heap.Pop(maxHeap) // 2\n heap.Pop(maxHeap) // 1\n\n /* \u83b7\u53d6\u5806\u5927\u5c0f */\n size := len(*maxHeap)\n fmt.Printf(\"\u5806\u5143\u7d20\u6570\u91cf\u4e3a %d\\n\", size)\n\n /* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\n isEmpty := len(*maxHeap) == 0\n fmt.Printf(\"\u5806\u662f\u5426\u4e3a\u7a7a %t\\n\", isEmpty)\n}\n</code></pre> heap.swift<pre><code>/* \u521d\u59cb\u5316\u5806 */\n// Swift \u7684 Heap \u7c7b\u578b\u540c\u65f6\u652f\u6301\u6700\u5927\u5806\u548c\u6700\u5c0f\u5806\uff0c\u4e14\u9700\u8981\u5f15\u5165 swift-collections\nvar heap = Heap<Int>()\n\n/* \u5143\u7d20\u5165\u5806 */\nheap.insert(1)\nheap.insert(3)\nheap.insert(2)\nheap.insert(5)\nheap.insert(4)\n\n/* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\nvar peek = heap.max()!\n\n/* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\npeek = heap.removeMax() // 5\npeek = heap.removeMax() // 4\npeek = 
heap.removeMax() // 3\npeek = heap.removeMax() // 2\npeek = heap.removeMax() // 1\n\n/* \u83b7\u53d6\u5806\u5927\u5c0f */\nlet size = heap.count\n\n/* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\nlet isEmpty = heap.isEmpty\n\n/* \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806 */\nlet heap2 = Heap([1, 3, 2, 5, 4])\n</code></pre> heap.js<pre><code>// JavaScript \u672a\u63d0\u4f9b\u5185\u7f6e Heap \u7c7b\n</code></pre> heap.ts<pre><code>// TypeScript \u672a\u63d0\u4f9b\u5185\u7f6e Heap \u7c7b\n</code></pre> heap.dart<pre><code>// Dart \u672a\u63d0\u4f9b\u5185\u7f6e Heap \u7c7b\n</code></pre> heap.rs<pre><code>use std::collections::BinaryHeap;\nuse std::cmp::Reverse;\n\n/* \u521d\u59cb\u5316\u5806 */\n// \u521d\u59cb\u5316\u5c0f\u9876\u5806\nlet mut min_heap = BinaryHeap::<Reverse<i32>>::new();\n// \u521d\u59cb\u5316\u5927\u9876\u5806\nlet mut max_heap = BinaryHeap::new();\n\n/* \u5143\u7d20\u5165\u5806 */\nmax_heap.push(1);\nmax_heap.push(3);\nmax_heap.push(2);\nmax_heap.push(5);\nmax_heap.push(4);\n\n/* \u83b7\u53d6\u5806\u9876\u5143\u7d20 */\nlet peek = max_heap.peek().unwrap(); // 5\n\n/* \u5806\u9876\u5143\u7d20\u51fa\u5806 */\n// \u51fa\u5806\u5143\u7d20\u4f1a\u5f62\u6210\u4e00\u4e2a\u4ece\u5927\u5230\u5c0f\u7684\u5e8f\u5217\nlet peek = max_heap.pop().unwrap(); // 5\nlet peek = max_heap.pop().unwrap(); // 4\nlet peek = max_heap.pop().unwrap(); // 3\nlet peek = max_heap.pop().unwrap(); // 2\nlet peek = max_heap.pop().unwrap(); // 1\n\n/* \u83b7\u53d6\u5806\u5927\u5c0f */\nlet size = max_heap.len();\n\n/* \u5224\u65ad\u5806\u662f\u5426\u4e3a\u7a7a */\nlet is_empty = max_heap.is_empty();\n\n/* \u8f93\u5165\u5217\u8868\u5e76\u5efa\u5806 */\nlet min_heap = BinaryHeap::from(vec![Reverse(1), Reverse(3), Reverse(2), Reverse(5), Reverse(4)]);\n</code></pre> heap.c<pre><code>// C \u672a\u63d0\u4f9b\u5185\u7f6e Heap \u7c7b\n</code></pre> heap.kt<pre><code>/* \u521d\u59cb\u5316\u5806 */\n// \u521d\u59cb\u5316\u5c0f\u9876\u5806\nvar minHeap = PriorityQueue<Int>()\n// 
Initialize a max-heap (just modify the Comparator with a lambda expression)\nval maxHeap = PriorityQueue { a: Int, b: Int -> b - a }\n\n/* Push elements into the heap */\nmaxHeap.offer(1)\nmaxHeap.offer(3)\nmaxHeap.offer(2)\nmaxHeap.offer(5)\nmaxHeap.offer(4)\n\n/* Access the heap's top element */\nvar peek = maxHeap.peek() // 5\n\n/* Pop the heap's top element */\n// The popped elements form a descending sequence\npeek = maxHeap.poll() // 5\npeek = maxHeap.poll() // 4\npeek = maxHeap.poll() // 3\npeek = maxHeap.poll() // 2\npeek = maxHeap.poll() // 1\n\n/* Get the heap size */\nval size = maxHeap.size\n\n/* Check if the heap is empty */\nval isEmpty = maxHeap.isEmpty()\n\n/* Input a list and build the heap */\nminHeap = PriorityQueue(mutableListOf(1, 3, 2, 5, 4))\n</code></pre> heap.rb<pre><code>\n</code></pre> heap.zig<pre><code>\n</code></pre> Code visualization 
<p>https://pythontutor.com/render.html#code=import%20heapq%0A%0A%22%22%22Driver%20Code%22%22%22%0Aif%20__name__%20%3D%3D%20%22__main__%22%3A%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E5%B0%8F%E9%A1%B6%E5%A0%86%0A%20%20%20%20min_heap,%20flag%20%3D%20%5B%5D,%201%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E5%A4%A7%E9%A1%B6%E5%A0%86%0A%20%20%20%20max_heap,%20flag%20%3D%20%5B%5D,%20-1%0A%20%20%20%20%0A%20%20%20%20%23%20Python%20%E7%9A%84%20heapq%20%E6%A8%A1%E5%9D%97%E9%BB%98%E8%AE%A4%E5%AE%9E%E7%8E%B0%E5%B0%8F%E9%A1%B6%E5%A0%86%0A%20%20%20%20%23%20%E8%80%83%E8%99%91%E5%B0%86%E2%80%9C%E5%85%83%E7%B4%A0%E5%8F%96%E8%B4%9F%E2%80%9D%E5%90%8E%E5%86%8D%E5%85%A5%E5%A0%86%EF%BC%8C%E8%BF%99%E6%A0%B7%E5%B0%B1%E5%8F%AF%E4%BB%A5%E5%B0%86%E5%A4%A7%E5%B0%8F%E5%85%B3%E7%B3%BB%E9%A2%A0%E5%80%92%EF%BC%8C%E4%BB%8E%E8%80%8C%E5%AE%9E%E7%8E%B0%E5%A4%A7%E9%A1%B6%E5%A0%86%0A%20%20%20%20%23%20%E5%9C%A8%E6%9C%AC%E7%A4%BA%E4%BE%8B%E4%B8%AD%EF%BC%8Cflag%20%3D%201%20%E6%97%B6%E5%AF%B9%E5%BA%94%E5%B0%8F%E9%A1%B6%E5%A0%86%EF%BC%8Cflag%20%3D%20-1%20%E6%97%B6%E5%AF%B9%E5%BA%94%E5%A4%A7%E9%A1%B6%E5%A0%86%0A%20%20%20%20%0A%20%20%20%20%23%20%E5%85%83%E7%B4%A0%E5%85%A5%E5%A0%86%0A%20%20%20%20heapq.heappush%28max_heap,%20flag%20*%201%29%0A%20%20%20%20heapq.heappush%28max_heap,%20flag%20*%203%29%0A%20%20%20%20heapq.heappush%28max_heap,%20flag%20*%202%29%0A%20%20%20%20heapq.heappush%28max_heap,%20flag%20*%205%29%0A%20%20%20%20heapq.heappush%28max_heap,%20flag%20*%204%29%0A%20%20%20%20%0A%20%20%20%20%23%20%E8%8E%B7%E5%8F%96%E5%A0%86%E9%A1%B6%E5%85%83%E7%B4%A0%0A%20%20%20%20peek%20%3D%20flag%20*%20max_heap%5B0%5D%20%23%205%0A%20%20%20%20%0A%20%20%20%20%23%20%E5%A0%86%E9%A1%B6%E5%85%83%E7%B4%A0%E5%87%BA%E5%A0%86%0A%20%20%20%20%23%20%E5%87%BA%E5%A0%86%E5%85%83%E7%B4%A0%E4%BC%9A%E5%BD%A2%E6%88%90%E4%B8%80%E4%B8%AA%E4%BB%8E%E5%A4%A7%E5%88%B0%E5%B0%8F%E7%9A%84%E5%BA%8F%E5%88%97%0A%20%20%20%20val%20%3D%20flag%20*%20heapq.heappop%28max_heap%29%20%23%205%0A%20%20%20%20val%20%3D%20flag%20*%20heapq.heappop%28ma
x_heap%29%20%23%204%0A%20%20%20%20val%20%3D%20flag%20*%20heapq.heappop%28max_heap%29%20%23%203%0A%20%20%20%20val%20%3D%20flag%20*%20heapq.heappop%28max_heap%29%20%23%202%0A%20%20%20%20val%20%3D%20flag%20*%20heapq.heappop%28max_heap%29%20%23%201%0A%20%20%20%20%0A%20%20%20%20%23%20%E8%8E%B7%E5%8F%96%E5%A0%86%E5%A4%A7%E5%B0%8F%0A%20%20%20%20size%20%3D%20len%28max_heap%29%0A%20%20%20%20%0A%20%20%20%20%23%20%E5%88%A4%E6%96%AD%E5%A0%86%E6%98%AF%E5%90%A6%E4%B8%BA%E7%A9%BA%0A%20%20%20%20is_empty%20%3D%20not%20max_heap%0A%20%20%20%20%0A%20%20%20%20%23%20%E8%BE%93%E5%85%A5%E5%88%97%E8%A1%A8%E5%B9%B6%E5%BB%BA%E5%A0%86%0A%20%20%20%20min_heap%20%3D%20%5B1,%203,%202,%205,%204%5D%0A%20%20%20%20heapq.heapify%28min_heap%29&cumulative=false&curInstr=3&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=311&rawInputLstJSON=%5B%5D&textReferences=false</p>"},{"location":"chapter_heap/heap/#812-implementation-of-heaps","title":"8.1.2 \u00a0 Implementation of heaps","text":"<p>The following implementation is of a max heap. To convert it into a min heap, simply invert all size logic comparisons (for example, replace \\(\\geq\\) with \\(\\leq\\)). Interested readers are encouraged to implement it on their own.</p>"},{"location":"chapter_heap/heap/#1-storage-and-representation-of-heaps","title":"1. \u00a0 Storage and representation of heaps","text":"<p>As mentioned in the \"Binary Trees\" section, complete binary trees are well-suited for array representation. Since heaps are a type of complete binary tree, we will use arrays to store heaps.</p> <p>When using an array to represent a binary tree, elements represent node values, and indexes represent node positions in the binary tree. Node pointers are implemented through an index mapping formula.</p> <p>As shown in Figure 8-2, given an index \\(i\\), the index of its left child is \\(2i + 1\\), the index of its right child is \\(2i + 2\\), and the index of its parent is \\((i - 1) / 2\\) (floor division). 
When the index is out of bounds, it signifies a null node or the node does not exist.</p> <p></p> <p> Figure 8-2 \u00a0 Representation and storage of heaps </p> <p>We can encapsulate the index mapping formula into functions for convenient later use:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_heap.py<pre><code>def left(self, i: int) -> int:\n \"\"\"Get index of left child node\"\"\"\n return 2 * i + 1\n\ndef right(self, i: int) -> int:\n \"\"\"Get index of right child node\"\"\"\n return 2 * i + 2\n\ndef parent(self, i: int) -> int:\n \"\"\"Get index of parent node\"\"\"\n return (i - 1) // 2 # Integer division down\n</code></pre> my_heap.cpp<pre><code>/* Get index of left child node */\nint left(int i) {\n return 2 * i + 1;\n}\n\n/* Get index of right child node */\nint right(int i) {\n return 2 * i + 2;\n}\n\n/* Get index of parent node */\nint parent(int i) {\n return (i - 1) / 2; // Integer division down\n}\n</code></pre> my_heap.java<pre><code>/* Get index of left child node */\nint left(int i) {\n return 2 * i + 1;\n}\n\n/* Get index of right child node */\nint right(int i) {\n return 2 * i + 2;\n}\n\n/* Get index of parent node */\nint parent(int i) {\n return (i - 1) / 2; // Integer division down\n}\n</code></pre> my_heap.cs<pre><code>[class]{MaxHeap}-[func]{Left}\n\n[class]{MaxHeap}-[func]{Right}\n\n[class]{MaxHeap}-[func]{Parent}\n</code></pre> my_heap.go<pre><code>[class]{maxHeap}-[func]{left}\n\n[class]{maxHeap}-[func]{right}\n\n[class]{maxHeap}-[func]{parent}\n</code></pre> my_heap.swift<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.js<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.ts<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> 
my_heap.dart<pre><code>[class]{MaxHeap}-[func]{_left}\n\n[class]{MaxHeap}-[func]{_right}\n\n[class]{MaxHeap}-[func]{_parent}\n</code></pre> my_heap.rs<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.c<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.kt<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.rb<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre> my_heap.zig<pre><code>[class]{MaxHeap}-[func]{left}\n\n[class]{MaxHeap}-[func]{right}\n\n[class]{MaxHeap}-[func]{parent}\n</code></pre>"},{"location":"chapter_heap/heap/#2-accessing-the-top-element-of-the-heap","title":"2. \u00a0 Accessing the top element of the heap","text":"<p>The top element of the heap is the root node of the binary tree, which is also the first element of the list:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_heap.py<pre><code>def peek(self) -> int:\n \"\"\"Access heap top element\"\"\"\n return self.max_heap[0]\n</code></pre> my_heap.cpp<pre><code>/* Access heap top element */\nint peek() {\n return maxHeap[0];\n}\n</code></pre> my_heap.java<pre><code>/* Access heap top element */\nint peek() {\n return maxHeap.get(0);\n}\n</code></pre> my_heap.cs<pre><code>[class]{MaxHeap}-[func]{Peek}\n</code></pre> my_heap.go<pre><code>[class]{maxHeap}-[func]{peek}\n</code></pre> my_heap.swift<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.js<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.ts<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.dart<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.rs<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.c<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> 
my_heap.kt<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.rb<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre> my_heap.zig<pre><code>[class]{MaxHeap}-[func]{peek}\n</code></pre>"},{"location":"chapter_heap/heap/#3-inserting-an-element-into-the-heap","title":"3. \u00a0 Inserting an element into the heap","text":"<p>Given an element <code>val</code>, we first add it to the bottom of the heap. After addition, since <code>val</code> may be larger than other elements in the heap, the heap's integrity might be compromised, so it is necessary to repair the path from the inserted node to the root node. This operation is called heapifying.</p> <p>Starting from the inserted node, we perform heapify from bottom to top. As shown in Figure 8-3, we compare the value of the inserted node with that of its parent node, and if the inserted node is larger, we swap them. We then continue this operation, repairing each node along the path from bottom to top, until we pass the root node or encounter a node that does not need to be swapped.</p> <1><2><3><4><5><6><7><8><9> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 8-3 \u00a0 Steps of element insertion into the heap </p> <p>Given a total of \\(n\\) nodes, the height of the tree is \\(O(\\log n)\\). Hence, the loop iterations for the heapify operation are at most \\(O(\\log n)\\), making the time complexity of the element insertion operation \\(O(\\log n)\\). 
The code is as shown:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_heap.py<pre><code>def push(self, val: int):\n \"\"\"Push the element into heap\"\"\"\n # Add node\n self.max_heap.append(val)\n # Heapify from bottom to top\n self.sift_up(self.size() - 1)\n\ndef sift_up(self, i: int):\n \"\"\"Start heapifying node i, from bottom to top\"\"\"\n while True:\n # Get parent node of node i\n p = self.parent(i)\n # When \"crossing the root node\" or \"node does not need repair\", end heapification\n if p < 0 or self.max_heap[i] <= self.max_heap[p]:\n break\n # Swap two nodes\n self.swap(i, p)\n # Loop upwards heapification\n i = p\n</code></pre> my_heap.cpp<pre><code>/* Push the element into heap */\nvoid push(int val) {\n // Add node\n maxHeap.push_back(val);\n // Heapify from bottom to top\n siftUp(size() - 1);\n}\n\n/* Start heapifying node i, from bottom to top */\nvoid siftUp(int i) {\n while (true) {\n // Get parent node of node i\n int p = parent(i);\n // When \"crossing the root node\" or \"node does not need repair\", end heapification\n if (p < 0 || maxHeap[i] <= maxHeap[p])\n break;\n // Swap two nodes\n swap(maxHeap[i], maxHeap[p]);\n // Loop upwards heapification\n i = p;\n }\n}\n</code></pre> my_heap.java<pre><code>/* Push the element into heap */\nvoid push(int val) {\n // Add node\n maxHeap.add(val);\n // Heapify from bottom to top\n siftUp(size() - 1);\n}\n\n/* Start heapifying node i, from bottom to top */\nvoid siftUp(int i) {\n while (true) {\n // Get parent node of node i\n int p = parent(i);\n // When \"crossing the root node\" or \"node does not need repair\", end heapification\n if (p < 0 || maxHeap.get(i) <= maxHeap.get(p))\n break;\n // Swap two nodes\n swap(i, p);\n // Loop upwards heapification\n i = p;\n }\n}\n</code></pre> my_heap.cs<pre><code>[class]{MaxHeap}-[func]{Push}\n\n[class]{MaxHeap}-[func]{SiftUp}\n</code></pre> my_heap.go<pre><code>[class]{maxHeap}-[func]{push}\n\n[class]{maxHeap}-[func]{siftUp}\n</code></pre> 
my_heap.swift<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.js<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.ts<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.dart<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.rs<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{sift_up}\n</code></pre> my_heap.c<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.kt<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre> my_heap.rb<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{sift_up}\n</code></pre> my_heap.zig<pre><code>[class]{MaxHeap}-[func]{push}\n\n[class]{MaxHeap}-[func]{siftUp}\n</code></pre>"},{"location":"chapter_heap/heap/#4-removing-the-top-element-from-the-heap","title":"4. \u00a0 Removing the top element from the heap","text":"<p>The top element of the heap is the root node of the binary tree, that is, the first element of the list. If we directly remove the first element from the list, all node indexes in the binary tree would change, making it difficult to use heapify for repairs subsequently. To minimize changes in element indexes, we use the following steps.</p> <ol> <li>Swap the top element with the bottom element of the heap (swap the root node with the rightmost leaf node).</li> <li>After swapping, remove the bottom of the heap from the list (note, since it has been swapped, what is actually being removed is the original top element).</li> <li>Starting from the root node, perform heapify from top to bottom.</li> </ol> <p>As shown in Figure 8-4, the direction of \"heapify from top to bottom\" is opposite to \"heapify from bottom to top\". We compare the value of the root node with its two children and swap it with the largest child. 
Then repeat this operation until passing the leaf node or encountering a node that does not need to be swapped.</p> <1><2><3><4><5><6><7><8><9><10> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 8-4 \u00a0 Steps of removing the top element from the heap </p> <p>Similar to the element insertion operation, the time complexity of the top element removal operation is also \\(O(\\log n)\\). The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig my_heap.py<pre><code>def pop(self) -> int:\n \"\"\"Element exits heap\"\"\"\n # Empty handling\n if self.is_empty():\n raise IndexError(\"Heap is empty\")\n # Swap the root node with the rightmost leaf node (swap the first element with the last element)\n self.swap(0, self.size() - 1)\n # Remove node\n val = self.max_heap.pop()\n # Heapify from top to bottom\n self.sift_down(0)\n # Return heap top element\n return val\n\ndef sift_down(self, i: int):\n \"\"\"Start heapifying node i, from top to bottom\"\"\"\n while True:\n # Determine the largest node among i, l, r, noted as ma\n l, r, ma = self.left(i), self.right(i), i\n if l < self.size() and self.max_heap[l] > self.max_heap[ma]:\n ma = l\n if r < self.size() and self.max_heap[r] > self.max_heap[ma]:\n ma = r\n # If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if ma == i:\n break\n # Swap two nodes\n self.swap(i, ma)\n # Loop downwards heapification\n i = ma\n</code></pre> my_heap.cpp<pre><code>/* Element exits heap */\nvoid pop() {\n // Empty handling\n if (isEmpty()) {\n throw out_of_range(\"Heap is empty\");\n }\n // Swap the root node with the rightmost leaf node (swap the first element with the last element)\n swap(maxHeap[0], maxHeap[size() - 1]);\n // Remove node\n maxHeap.pop_back();\n // Heapify from top to bottom\n siftDown(0);\n}\n\n/* Start heapifying node i, from top to bottom */\nvoid siftDown(int i) {\n while (true) {\n // Determine the largest 
node among i, l, r, noted as ma\n int l = left(i), r = right(i), ma = i;\n if (l < size() && maxHeap[l] > maxHeap[ma])\n ma = l;\n if (r < size() && maxHeap[r] > maxHeap[ma])\n ma = r;\n // If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if (ma == i)\n break;\n swap(maxHeap[i], maxHeap[ma]);\n // Loop downwards heapification\n i = ma;\n }\n}\n</code></pre> my_heap.java<pre><code>/* Element exits heap */\nint pop() {\n // Empty handling\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n // Swap the root node with the rightmost leaf node (swap the first element with the last element)\n swap(0, size() - 1);\n // Remove node\n int val = maxHeap.remove(size() - 1);\n // Heapify from top to bottom\n siftDown(0);\n // Return heap top element\n return val;\n}\n\n/* Start heapifying node i, from top to bottom */\nvoid siftDown(int i) {\n while (true) {\n // Determine the largest node among i, l, r, noted as ma\n int l = left(i), r = right(i), ma = i;\n if (l < size() && maxHeap.get(l) > maxHeap.get(ma))\n ma = l;\n if (r < size() && maxHeap.get(r) > maxHeap.get(ma))\n ma = r;\n // If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if (ma == i)\n break;\n // Swap two nodes\n swap(i, ma);\n // Loop downwards heapification\n i = ma;\n }\n}\n</code></pre> my_heap.cs<pre><code>[class]{MaxHeap}-[func]{Pop}\n\n[class]{MaxHeap}-[func]{SiftDown}\n</code></pre> my_heap.go<pre><code>[class]{maxHeap}-[func]{pop}\n\n[class]{maxHeap}-[func]{siftDown}\n</code></pre> my_heap.swift<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> my_heap.js<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> my_heap.ts<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> my_heap.dart<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> 
my_heap.rs<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{sift_down}\n</code></pre> my_heap.c<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> my_heap.kt<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre> my_heap.rb<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{sift_down}\n</code></pre> my_heap.zig<pre><code>[class]{MaxHeap}-[func]{pop}\n\n[class]{MaxHeap}-[func]{siftDown}\n</code></pre>"},{"location":"chapter_heap/heap/#813-common-applications-of-heaps","title":"8.1.3 \u00a0 Common applications of heaps","text":"<ul> <li>Priority Queue: Heaps are often the preferred data structure for implementing priority queues, with both enqueue and dequeue operations having a time complexity of \\(O(\\log n)\\), and building a queue having a time complexity of \\(O(n)\\), all of which are very efficient.</li> <li>Heap Sort: Given a set of data, we can create a heap from them and then continually perform element removal operations to obtain ordered data. However, we usually use a more elegant method to implement heap sort, as detailed in the \"Heap Sort\" section.</li> <li>Finding the Largest \\(k\\) Elements: This is a classic algorithm problem and also a typical application, such as selecting the top 10 hot news for Weibo hot search, picking the top 10 selling products, etc.</li> </ul>"},{"location":"chapter_heap/summary/","title":"8.4 \u00a0 Summary","text":""},{"location":"chapter_heap/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>A heap is a complete binary tree, which can be divided into a max heap and a min heap based on its property. 
The top element of a max (min) heap is the largest (smallest).</li> <li>A priority queue is defined as a queue with dequeue priority, usually implemented using a heap.</li> <li>Common operations of a heap and their corresponding time complexities include: element insertion into the heap \\(O(\\log n)\\), removing the top element from the heap \\(O(\\log n)\\), and accessing the top element of the heap \\(O(1)\\).</li> <li>A complete binary tree is well-suited to be represented by an array, thus heaps are commonly stored using arrays.</li> <li>Heapify operations are used to maintain the properties of the heap and are used in both heap insertion and removal operations.</li> <li>The time complexity of inserting \\(n\\) elements into a heap and building the heap can be optimized to \\(O(n)\\), which is highly efficient.</li> <li>Top-k is a classic algorithm problem that can be efficiently solved using the heap data structure, with a time complexity of \\(O(n \\log k)\\).</li> </ul>"},{"location":"chapter_heap/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Is the \"heap\" in data structures the same concept as the \"heap\" in memory management?</p> <p>The two are not the same concept, even though they are both referred to as \"heap\". The heap in computer system memory is part of dynamic memory allocation, where the program can use it to store data during execution. The program can request a certain amount of heap memory to store complex structures like objects and arrays. When these data are no longer needed, the program needs to release this memory to prevent memory leaks. 
Compared to stack memory, the management and usage of heap memory need to be more cautious, as improper use may lead to memory leaks and dangling pointers.</p>"},{"location":"chapter_heap/top_k/","title":"8.3 \u00a0 Top-k problem","text":"<p>Question</p> <p>Given an unordered array <code>nums</code> of length \\(n\\), return the largest \\(k\\) elements in the array.</p> <p>For this problem, we will first introduce two straightforward solutions, then explain a more efficient heap-based method.</p>"},{"location":"chapter_heap/top_k/#831-method-1-iterative-selection","title":"8.3.1 \u00a0 Method 1: Iterative selection","text":"<p>We can perform \\(k\\) rounds of iterations as shown in Figure 8-6, extracting the \\(1^{st}\\), \\(2^{nd}\\), \\(\\dots\\), \\(k^{th}\\) largest elements in each round, with a time complexity of \\(O(nk)\\).</p> <p>This method is only suitable when \\(k \\ll n\\), as the time complexity approaches \\(O(n^2)\\) when \\(k\\) is close to \\(n\\), which is very time-consuming.</p> <p></p> <p> Figure 8-6 \u00a0 Iteratively finding the largest k elements </p> <p>Tip</p> <p>When \\(k = n\\), we can obtain a complete ordered sequence, which is equivalent to the \"selection sort\" algorithm.</p>"},{"location":"chapter_heap/top_k/#832-method-2-sorting","title":"8.3.2 \u00a0 Method 2: Sorting","text":"<p>As shown in Figure 8-7, we can first sort the array <code>nums</code> and then return the last \\(k\\) elements, with a time complexity of \\(O(n \\log n)\\).</p> <p>Clearly, this method \"overachieves\" the task, as we only need to find the largest \\(k\\) elements, without the need to sort the other elements.</p> <p></p> <p> Figure 8-7 \u00a0 Sorting to find the largest k elements </p>"},{"location":"chapter_heap/top_k/#833-method-3-heap","title":"8.3.3 \u00a0 Method 3: Heap","text":"<p>We can solve the Top-k problem more efficiently based on heaps, as shown in the following process.</p> <ol> <li>Initialize a min heap, where the top element is the 
smallest.</li> <li>First, insert the first \\(k\\) elements of the array into the heap.</li> <li>Starting from the \\(k + 1^{th}\\) element, if the current element is greater than the top element of the heap, remove the top element of the heap and insert the current element into the heap.</li> <li>After completing the traversal, the heap contains the largest \\(k\\) elements.</li> </ol> <1><2><3><4><5><6><7><8><9> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 8-8 \u00a0 Find the largest k elements based on heap </p> <p>Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig top_k.py<pre><code>def top_k_heap(nums: list[int], k: int) -> list[int]:\n \"\"\"Using heap to find the largest k elements in an array\"\"\"\n # Initialize min-heap\n heap = []\n # Enter the first k elements of the array into the heap\n for i in range(k):\n heapq.heappush(heap, nums[i])\n # From the k+1th element, keep the heap length as k\n for i in range(k, len(nums)):\n # If the current element is larger than the heap top element, remove the heap top element and enter the current element into the heap\n if nums[i] > heap[0]:\n heapq.heappop(heap)\n heapq.heappush(heap, nums[i])\n return heap\n</code></pre> top_k.cpp<pre><code>/* Using heap to find the largest k elements in an array */\npriority_queue<int, vector<int>, greater<int>> topKHeap(vector<int> &nums, int k) {\n // Initialize min-heap\n priority_queue<int, vector<int>, greater<int>> heap;\n // Enter the first k elements of the array into the heap\n for (int i = 0; i < k; i++) {\n heap.push(nums[i]);\n }\n // From the k+1th element, keep the heap length as k\n for (int i = k; i < nums.size(); i++) {\n // If the current element is larger than the heap top element, remove the heap top element and enter the current element into the heap\n if (nums[i] > heap.top()) {\n heap.pop();\n heap.push(nums[i]);\n }\n }\n return heap;\n}\n</code></pre> top_k.java<pre><code>/* Using heap 
to find the largest k elements in an array */\nQueue<Integer> topKHeap(int[] nums, int k) {\n // Initialize min-heap\n Queue<Integer> heap = new PriorityQueue<Integer>();\n // Enter the first k elements of the array into the heap\n for (int i = 0; i < k; i++) {\n heap.offer(nums[i]);\n }\n // From the k+1th element, keep the heap length as k\n for (int i = k; i < nums.length; i++) {\n // If the current element is larger than the heap top element, remove the heap top element and enter the current element into the heap\n if (nums[i] > heap.peek()) {\n heap.poll();\n heap.offer(nums[i]);\n }\n }\n return heap;\n}\n</code></pre> top_k.cs<pre><code>[class]{top_k}-[func]{TopKHeap}\n</code></pre> top_k.go<pre><code>[class]{}-[func]{topKHeap}\n</code></pre> top_k.swift<pre><code>[class]{}-[func]{topKHeap}\n</code></pre> top_k.js<pre><code>[class]{}-[func]{pushMinHeap}\n\n[class]{}-[func]{popMinHeap}\n\n[class]{}-[func]{peekMinHeap}\n\n[class]{}-[func]{getMinHeap}\n\n[class]{}-[func]{topKHeap}\n</code></pre> top_k.ts<pre><code>[class]{}-[func]{pushMinHeap}\n\n[class]{}-[func]{popMinHeap}\n\n[class]{}-[func]{peekMinHeap}\n\n[class]{}-[func]{getMinHeap}\n\n[class]{}-[func]{topKHeap}\n</code></pre> top_k.dart<pre><code>[class]{}-[func]{topKHeap}\n</code></pre> top_k.rs<pre><code>[class]{}-[func]{top_k_heap}\n</code></pre> top_k.c<pre><code>[class]{}-[func]{pushMinHeap}\n\n[class]{}-[func]{popMinHeap}\n\n[class]{}-[func]{peekMinHeap}\n\n[class]{}-[func]{getMinHeap}\n\n[class]{}-[func]{topKHeap}\n</code></pre> top_k.kt<pre><code>[class]{}-[func]{topKHeap}\n</code></pre> top_k.rb<pre><code>[class]{}-[func]{top_k_heap}\n</code></pre> top_k.zig<pre><code>[class]{}-[func]{topKHeap}\n</code></pre> <p>A total of \\(n\\) rounds of heap insertions and deletions are performed, with the maximum heap size being \\(k\\), hence the time complexity is \\(O(n \\log k)\\). 
This method is very efficient; when \\(k\\) is small, the time complexity tends towards \\(O(n)\\); when \\(k\\) is large, the time complexity will not exceed \\(O(n \\log n)\\).</p> <p>Additionally, this method is suitable for scenarios with dynamic data streams. By continuously adding data, we can maintain the elements within the heap, thereby achieving dynamic updates of the largest \\(k\\) elements.</p>"},{"location":"chapter_hello_algo/","title":"Before starting","text":"<p>A few years ago, I shared the \"Sword for Offer\" problem solutions on LeetCode, receiving encouragement and support from many readers. During interactions with readers, the most common question I encountered was \"how to get started with algorithms.\" Gradually, I developed a keen interest in this question.</p> <p>Directly solving problems seems to be the most popular method \u2014 it's simple, direct, and effective. However, problem-solving is like playing a game of Minesweeper: those with strong self-study abilities can defuse the mines one by one, but those with insufficient basics might end up metaphorically bruised from explosions, retreating step by step in frustration. Going through textbooks is also common, but for those aiming for job applications, the energy spent on thesis writing, resume submissions, and preparation for written tests and interviews leaves little for tackling thick books, turning it into a daunting challenge.</p> <p>If you're facing similar troubles, then this book is lucky to have found you. This book is my answer to the question. While it may not be the best solution, it is at least a positive attempt. 
This book may not directly land you an offer, but it will guide you through the \"knowledge map\" in data structures and algorithms, help you understand the shapes, sizes, and locations of different \"mines,\" and enable you to master various \"demining methods.\" With these skills, I believe you can solve problems and read literature more comfortably, gradually building a knowledge system.</p> <p>I deeply agree with Professor Feynman's statement: \"Knowledge isn't free. You have to pay attention.\" In this sense, this book is not entirely \"free.\" To not disappoint the precious \"attention\" you pay for this book, I will do my best, dedicating my utmost \"attention\" to this book.</p> <p>Knowing my limitations, although the content of this book has been refined over time, there are surely many errors remaining. I sincerely request critiques and corrections from all teachers and students.</p> <p></p> Hello, Algo! <p>The advent of computers has brought significant changes to the world. With their high-speed computing power and excellent programmability, they have become the ideal medium for executing algorithms and processing data. Whether it's the realistic graphics of video games, the intelligent decisions in autonomous driving, the brilliant Go games of AlphaGo, or the natural interactions of ChatGPT, these applications are all exquisite demonstrations of algorithms at work on computers.</p> <p>In fact, before the advent of computers, algorithms and data structures already existed in every corner of the world. Early algorithms were relatively simple, such as ancient counting methods and tool-making procedures. As civilization progressed, algorithms became more refined and complex. 
From the exquisite craftsmanship of artisans, to industrial products that liberate productive forces, to the scientific laws governing the universe, almost every ordinary or astonishing thing has behind it the ingenious thought of algorithms.</p> <p>Similarly, data structures are everywhere: from social networks to subway lines, many systems can be modeled as \"graphs\"; from a country to a family, the main forms of social organization exhibit characteristics of \"trees\"; winter clothes are like a \"stack\", where the first item worn is the last to be taken off; a badminton shuttle tube resembles a \"queue\", with one end for insertion and the other for retrieval; a dictionary is like a \"hash table\", enabling quick search for target entries.</p> <p>This book aims to help readers understand the core concepts of algorithms and data structures through clear, easy-to-understand animated illustrations and runnable code examples, and to be able to implement them through programming. On this basis, this book strives to reveal the vivid manifestations of algorithms in the complex world, showcasing the beauty of algorithms. I hope this book can help you!</p>"},{"location":"chapter_introduction/","title":"Chapter 1. \u00a0 Encounter with algorithms","text":"<p>Abstract</p> <p>A graceful maiden dances, intertwined with the data, her skirt swaying to the melody of algorithms.</p> <p>She invites you to a dance, follow her steps, and enter the world of algorithms full of logic and beauty.</p>"},{"location":"chapter_introduction/#chapter-contents","title":"Chapter contents","text":"<ul> <li>1.1 \u00a0 Algorithms are everywhere</li> <li>1.2 \u00a0 What is an algorithm</li> <li>1.3 \u00a0 Summary</li> </ul>"},{"location":"chapter_introduction/algorithms_are_everywhere/","title":"1.1 \u00a0 Algorithms are everywhere","text":"<p>When we hear the word \"algorithm,\" we naturally think of mathematics. 
However, many algorithms do not involve complex mathematics but rely more on basic logic, which can be seen everywhere in our daily lives.</p> <p>Before formally discussing algorithms, there's an interesting fact worth sharing: you have already unconsciously learned many algorithms and have become accustomed to applying them in your daily life. Here, I will give a few specific examples to prove this point.</p> <p>Example 1: Looking Up a Dictionary. In an English dictionary, words are listed alphabetically. Suppose we're searching for a word that starts with the letter \\(r\\). This is typically done in the following way:</p> <ol> <li>Open the dictionary to about halfway and check the first letter on the page, let's say the letter is \\(m\\).</li> <li>Since \\(r\\) comes after \\(m\\) in the alphabet, we can ignore the first half of the dictionary and focus on the latter half.</li> <li>Repeat steps <code>1.</code> and <code>2.</code> until you find the page where the word starts with \\(r\\).</li> </ol> <1><2><3><4><5> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 1-1 \u00a0 Process of Looking Up a Dictionary </p> <p>This essential skill for elementary students, looking up a dictionary, is actually the famous \"Binary Search\" algorithm. From a data structure perspective, we can consider the dictionary as a sorted \"array\"; from an algorithmic perspective, the series of actions taken to look up a word in the dictionary can be viewed as \"Binary Search.\"</p> <p>Example 2: Organizing Playing Cards. 
When playing cards, we need to arrange the cards in our hand in ascending order, as shown in the following process.</p> <ol> <li>Divide the playing cards into \"ordered\" and \"unordered\" sections, assuming initially the leftmost card is already in order.</li> <li>Take out a card from the unordered section and insert it into the correct position in the ordered section; after this, the leftmost two cards are in order.</li> <li>Continue to repeat step <code>2.</code> until all cards are in order.</li> </ol> <p></p> <p> Figure 1-2 \u00a0 Playing cards sorting process </p> <p>The above method of organizing playing cards is essentially the \"Insertion Sort\" algorithm, which is very efficient for small datasets. The sorting functions of many programming languages make use of insertion sort for small subarrays.</p> <p>Example 3: Making Change. Suppose we buy goods worth \\(69\\) yuan at a supermarket and give the cashier \\(100\\) yuan, then the cashier needs to give us \\(31\\) yuan in change. They would naturally complete the thought process as shown in Figure 1-3.</p> <ol> <li>The options are denominations smaller than \\(31\\), including \\(1\\), \\(5\\), \\(10\\), and \\(20\\).</li> <li>Take out the largest \\(20\\) from the options, leaving \\(31 - 20 = 11\\).</li> <li>Take out the largest \\(10\\) from the remaining options, leaving \\(11 - 10 = 1\\).</li> <li>Take out the largest \\(1\\) from the remaining options, leaving \\(1 - 1 = 0\\).</li> <li>Complete the change-making, with the solution being \\(20 + 10 + 1 = 31\\).</li> </ol> <p></p> <p> Figure 1-3 \u00a0 Change making process </p> <p>In the above steps, we make the best choice at each step (using the largest denomination possible), ultimately resulting in a feasible change-making plan. From the perspective of data structures and algorithms, this method is essentially a \"Greedy\" algorithm.</p> <p>From cooking a meal to interstellar travel, almost all problem-solving involves algorithms. 
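The card-sorting and change-making procedures described above can be expressed as short Python sketches. This is a minimal illustration, not code from the book's repository; the function names and sample values are chosen for this example.

```python
def insertion_sort(cards: list[int]) -> list[int]:
    """Sort a hand of cards by growing an ordered prefix one card at a time"""
    cards = cards[:]  # work on a copy; cards[:i] is the "ordered" section
    for i in range(1, len(cards)):
        card = cards[i]  # next card taken from the unordered section
        j = i - 1
        # Shift larger cards in the ordered section one slot to the right
        while j >= 0 and cards[j] > card:
            cards[j + 1] = cards[j]
            j -= 1
        cards[j + 1] = card  # drop the card into its correct position
    return cards

def make_change(amount: int, denominations: list[int]) -> list[int]:
    """Greedy change-making: repeatedly take the largest usable denomination"""
    change = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            change.append(coin)
            amount -= coin
    return change

print(insertion_sort([4, 1, 3, 2]))     # [1, 2, 3, 4]
print(make_change(31, [1, 5, 10, 20]))  # [20, 10, 1]
```

Note that the greedy choice happens to yield optimal change for the denominations used here; for arbitrary coin systems it is not guaranteed to.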
The advent of computers allows us to store data structures in memory and write code to call the CPU and GPU to execute algorithms. In this way, we can transfer real-life problems to computers, solving various complex issues more efficiently.</p> <p>Tip</p> <p>If concepts such as data structures, algorithms, arrays, and binary search still seem somewhat obscure, I encourage you to continue reading. This book will gently guide you into the realm of understanding data structures and algorithms.</p>"},{"location":"chapter_introduction/summary/","title":"1.3 \u00a0 Summary","text":"<ul> <li>Algorithms are ubiquitous in daily life and are not as inaccessible and complex as they might seem. In fact, we have already unconsciously learned many algorithms to solve various problems in life.</li> <li>The principle of looking up a word in a dictionary is consistent with the binary search algorithm. The binary search algorithm embodies the important algorithmic concept of divide and conquer.</li> <li>The process of organizing playing cards is very similar to the insertion sort algorithm. The insertion sort algorithm is suitable for sorting small datasets.</li> <li>The steps of making change in currency essentially follow the greedy algorithm, where each step involves making the best possible choice at the moment.</li> <li>An algorithm is a set of instructions or steps used to solve a specific problem within a finite amount of time, while a data structure is the way data is organized and stored in a computer.</li> <li>Data structures and algorithms are closely linked. Data structures are the foundation of algorithms, and algorithms are the stage on which data structures come into play.</li> <li>We can liken data structures and algorithms to building blocks. 
The blocks represent data, the shape and connection method of the blocks represent data structures, and the steps of assembling the blocks correspond to algorithms.</li> </ul>"},{"location":"chapter_introduction/what_is_dsa/","title":"1.2 \u00a0 What is an algorithm","text":""},{"location":"chapter_introduction/what_is_dsa/#121-definition-of-an-algorithm","title":"1.2.1 \u00a0 Definition of an algorithm","text":"<p>An algorithm is a set of instructions or steps to solve a specific problem within a finite amount of time. It has the following characteristics:</p> <ul> <li>The problem is clearly defined, including unambiguous definitions of input and output.</li> <li>The algorithm is feasible, meaning it can be completed within a finite number of steps, time, and memory space.</li> <li>Each step has a definitive meaning. The output is consistently the same under the same inputs and conditions.</li> </ul>"},{"location":"chapter_introduction/what_is_dsa/#122-definition-of-a-data-structure","title":"1.2.2 \u00a0 Definition of a data structure","text":"<p>A data structure is a way of organizing and storing data in a computer, with the following design goals:</p> <ul> <li>Minimize space occupancy to save computer memory.</li> <li>Make data operations as fast as possible, covering data access, addition, deletion, updating, etc.</li> <li>Provide concise data representation and logical information to enable efficient algorithm execution.</li> </ul> <p>Designing data structures is a balancing act, often requiring trade-offs. If you want to improve in one aspect, you often need to compromise in another. 
Here are two examples:</p> <ul> <li>Compared to arrays, linked lists offer more convenience in data addition and deletion but sacrifice data access speed.</li> <li>Graphs, compared to linked lists, provide richer logical information but require more memory space.</li> </ul>"},{"location":"chapter_introduction/what_is_dsa/#123-relationship-between-data-structures-and-algorithms","title":"1.2.3 \u00a0 Relationship between data structures and algorithms","text":"<p>As shown in Figure 1-4, data structures and algorithms are highly related and closely integrated, specifically in the following three aspects:</p> <ul> <li>Data structures are the foundation of algorithms. They provide structured data storage and methods for manipulating data for algorithms.</li> <li>Algorithms are the stage where data structures come into play. The data structure alone only stores data information; it is through the application of algorithms that specific problems can be solved.</li> <li>Algorithms can often be implemented based on different data structures, but their execution efficiency can vary greatly. Choosing the right data structure is key.</li> </ul> <p></p> <p> Figure 1-4 \u00a0 Relationship between data structures and algorithms </p> <p>Data structures and algorithms can be likened to a set of building blocks, as illustrated in Figure 1-5. A building block set includes numerous pieces, accompanied by detailed assembly instructions. 
Following these instructions step by step allows us to construct an intricate block model.</p> <p></p> <p> Figure 1-5 \u00a0 Assembling blocks </p> <p>The detailed correspondence between the two is shown in Table 1-1.</p> <p> Table 1-1 \u00a0 Comparing data structures and algorithms to building blocks </p> Data Structures and Algorithms Building Blocks Input data Unassembled blocks Data structure Organization of blocks, including shape, size, connections, etc Algorithm A series of steps to assemble the blocks into the desired shape Output data Completed Block model <p>It's worth noting that data structures and algorithms are independent of programming languages. For this reason, this book is able to provide implementations in multiple programming languages.</p> <p>Conventional Abbreviation</p> <p>In real-life discussions, we often refer to \"Data Structures and Algorithms\" simply as \"Algorithms\". For example, the well-known LeetCode algorithm problems actually test both data structure and algorithm knowledge.</p>"},{"location":"chapter_preface/","title":"Chapter 0. 
\u00a0 Preface","text":"<p>Abstract</p> <p>Algorithms are like a beautiful symphony, with each line of code flowing like a rhythm.</p> <p>May this book ring softly in your mind, leaving a unique and profound melody.</p>"},{"location":"chapter_preface/#chapter-contents","title":"Chapter contents","text":"<ul> <li>0.1 \u00a0 About this book</li> <li>0.2 \u00a0 How to read</li> <li>0.3 \u00a0 Summary</li> </ul>"},{"location":"chapter_preface/about_the_book/","title":"0.1 \u00a0 About this book","text":"<p>This open-source project aims to create a free and beginner-friendly crash course on data structures and algorithms.</p> <ul> <li>Using animated illustrations, it delivers structured insights into data structures and algorithmic concepts, ensuring comprehensibility and a smooth learning curve.</li> <li>Run code with just one click, supporting Java, C++, Python, Go, JS, TS, C#, Swift, Rust, Dart, Zig and other languages.</li> <li>Readers are encouraged to engage with each other in the discussion area for each section; questions and comments are usually answered within two days.</li> </ul>"},{"location":"chapter_preface/about_the_book/#011-target-audience","title":"0.1.1 \u00a0 Target audience","text":"<p>If you are new to algorithms with limited exposure, or you have accumulated some experience in algorithms but only have a vague understanding of data structures and algorithms, constantly jumping between \"yep\" and \"hmm\", then this book is for you!</p> <p>If you have already accumulated a certain amount of problem-solving experience, and are familiar with most types of problems, then this book can help you review and organize your algorithm knowledge system. 
The repository's source code can be used as a \"problem-solving toolkit\" or an \"algorithm cheat sheet\".</p> <p>If you are an algorithm expert, we look forward to receiving your valuable suggestions, or join us and collaborate.</p> <p>Prerequisites</p> <p>You should know how to write and read simple code in at least one programming language.</p>"},{"location":"chapter_preface/about_the_book/#012-content-structure","title":"0.1.2 \u00a0 Content structure","text":"<p>The main content of the book is shown in Figure 0-1.</p> <ul> <li>Complexity analysis: explores aspects and methods for evaluating data structures and algorithms. Covers methods of deriving time complexity and space complexity, along with common types and examples.</li> <li>Data structures: focuses on fundamental data types, classification methods, definitions, pros and cons, common operations, types, applications, and implementation methods of data structures such as array, linked list, stack, queue, hash table, tree, heap, graph, etc.</li> <li>Algorithms: defines algorithms, discusses their pros and cons, efficiency, application scenarios, problem-solving steps, and includes sample questions for various algorithms such as search, sorting, divide and conquer, backtracking, dynamic programming, greedy algorithms, and more.</li> </ul> <p></p> <p> Figure 0-1 \u00a0 Main content of the book </p>"},{"location":"chapter_preface/about_the_book/#013-acknowledgements","title":"0.1.3 \u00a0 Acknowledgements","text":"<p>This book is continuously improved with the joint efforts of many contributors from the open-source community. 
Thanks to each writer who invested their time and energy, listed in the order generated by GitHub: krahets, codingonion, nuomi1, Gonglja, Reanon, justin-tse, danielsss, hpstory, S-N-O-R-L-A-X, night-cruise, msk397, gvenusleo, RiverTwilight, gyt95, zhuoqinyue, Zuoxun, Xia-Sang, mingXta, FangYuan33, GN-Yu, IsChristina, xBLACKICEx, guowei-gong, Cathay-Chen, mgisr, JoseHung, qualifier1024, pengchzn, Guanngxu, longsizhuo, L-Super, what-is-me, yuan0221, lhxsm, Slone123c, WSL0809, longranger2, theNefelibatas, xiongsp, JeffersonHuang, hongyun-robot, K3v123, yuelinxin, a16su, gaofer, malone6, Wonderdch, xjr7670, DullSword, Horbin-Magician, NI-SW, reeswell, XC-Zero, XiaChuerwu, yd-j, iron-irax, huawuque404, MolDuM, Nigh, KorsChen, foursevenlove, 52coder, bubble9um, youshaoXG, curly210102, gltianwen, fanchenggang, Transmigration-zhou, FloranceYeh, FreddieLi, ShiMaRing, lipusheng, Javesun99, JackYang-hellobobo, shanghai-Jerry, 0130w, Keynman, psychelzh, logan-qiu, ZnYang2018, MwumLi, 1ch0, Phoenix0415, qingpeng9802, Richard-Zhang1019, QiLOL, Suremotoo, Turing-1024-Lee, Evilrabbit520, GaochaoZhu, ZJKung, linzeyan, hezhizhen, ZongYangL, beintentional, czruby, coderlef, dshlstarr, szu17dmy, fbigm, gledfish, hts0000, boloboloda, iStig, jiaxianhua, wenjianmin, keshida, kilikilikid, lclc6, lwbaptx, liuxjerry, lucaswangdev, lyl625760, chadyi, noobcodemaker, selear, siqyka, syd168, 4yDX3906, tao363, wangwang105, weibk, yabo083, yi427, yishangzhang, zhouLion, baagod, ElaBosak233, xb534, luluxia, yanedie, thomasq0, YangXuanyi and th1nk3r-ing.</p> <p>The code review work for this book was completed by codingonion, Gonglja, gvenusleo, hpstory, justin\u2010tse, krahets, night-cruise, nuomi1, and Reanon (listed in alphabetical order). 
Thanks to them for their time and effort, ensuring the standardization and uniformity of the code in various languages.</p> <p>Throughout the creation of this book, numerous individuals provided invaluable assistance, including but not limited to:</p> <ul> <li>Thanks to my mentor at the company, Dr. Xi Li, who encouraged me in a conversation to \"get moving fast,\" which solidified my determination to write this book;</li> <li>Thanks to my girlfriend Bubble, as the first reader of this book, for offering many valuable suggestions from the perspective of a beginner in algorithms, making this book more suitable for newbies;</li> <li>Thanks to Tengbao, Qibao, and Feibao for coming up with a creative name for this book, evoking everyone's fond memories of writing their first line of code \"Hello World!\";</li> <li>Thanks to Xiaoquan for providing professional help in intellectual property, which has played a significant role in the development of this open-source book;</li> <li>Thanks to Sutong for designing a beautiful cover and logo for this book, and for patiently making multiple revisions under my insistence;</li> <li>Thanks to @squidfunk for providing writing and typesetting suggestions, as well as his developed open-source documentation theme Material-for-MkDocs.</li> </ul> <p>Throughout the writing journey, I delved into numerous textbooks and articles on data structures and algorithms. These works served as exemplary models, ensuring the accuracy and quality of this book's content. I extend my gratitude to all who preceded me for their invaluable contributions!</p> <p>This book advocates a combination of hands-on and minds-on learning, inspired in this regard by \"Dive into Deep Learning\". 
I highly recommend this excellent book to all readers.</p> <p>Heartfelt thanks to my parents, whose ongoing support and encouragement have allowed me to do this interesting work.</p>"},{"location":"chapter_preface/suggestions/","title":"0.2 \u00a0 How to read","text":"<p>Tip</p> <p>For the best reading experience, it is recommended that you read through this section.</p>"},{"location":"chapter_preface/suggestions/#021-writing-conventions","title":"0.2.1 \u00a0 Writing conventions","text":"<ul> <li>Chapters marked with '*' after the title are optional and contain relatively challenging content. If you are short on time, it is advisable to skip them.</li> <li>Technical terms will be in boldface (in the print and PDF versions) or underlined (in the web version), for instance, array. It's advisable to familiarize yourself with these for better comprehension of technical texts.</li> <li>Bolded text indicates key content or summary statements, which deserve special attention.</li> <li>Words and phrases with specific meanings are indicated with \u201cquotation marks\u201d to avoid ambiguity.</li> <li>When it comes to terms that are inconsistent between programming languages, this book follows Python, for example using <code>None</code> to mean <code>null</code>.</li> <li>This book partially ignores the comment conventions for programming languages in exchange for a more compact layout of the content. 
The comments primarily consist of three types: title comments, content comments, and multi-line comments.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig <pre><code>\"\"\"Header comments for labeling functions, classes, test samples, etc\"\"\"\n\n# Comments for explaining details\n\n\"\"\"\nMultiline\ncomments\n\"\"\"\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, 
test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>/* Header comments for labeling functions, classes, test samples, etc */\n\n// Comments for explaining details.\n\n/**\n * Multiline\n * comments\n */\n</code></pre> <pre><code>// Header comments for labeling functions, classes, test samples, etc\n\n// Comments for explaining details.\n\n// Multiline\n// comments\n</code></pre>"},{"location":"chapter_preface/suggestions/#022-efficient-learning-via-animated-illustrations","title":"0.2.2 \u00a0 Efficient learning via animated illustrations","text":"<p>Compared with text, videos and pictures have a higher density of information and are more structured, making them easier to understand. In this book, key and difficult concepts are mainly presented through animations and illustrations, with text serving as explanations and supplements.</p> <p>When encountering content with animations or illustrations as shown in Figure 0-2, prioritize understanding the figure, with text as supplementary, integrating both for a comprehensive understanding.</p> <p></p> <p> Figure 0-2 \u00a0 Animated illustration example </p>"},{"location":"chapter_preface/suggestions/#023-deepen-understanding-through-coding-practice","title":"0.2.3 \u00a0 Deepen understanding through coding practice","text":"<p>The source code of this book is hosted on the GitHub Repository. As shown in Figure 0-3, the source code comes with test examples and can be executed with just a single click.</p> <p>If time permits, it's recommended to type out the code yourself. If pressed for time, at least read and run all the codes.</p> <p>Compared to just reading code, writing code often yields more learning. Learning by doing is the real way to learn.</p> <p></p> <p> Figure 0-3 \u00a0 Running code example </p> <p>Setting up to run the code involves three main steps.</p> <p>Step 1: Install a local programming environment. 
Follow the tutorial in the appendix for installation, or skip this step if already installed.</p> <p>Step 2: Clone or download the code repository. Visit the GitHub Repository.</p> <p>If Git is installed, use the following command to clone the repository:</p> <pre><code>git clone https://github.com/krahets/hello-algo.git\n</code></pre> <p>Alternatively, you can also click the \"Download ZIP\" button at the location shown in Figure 0-4 to directly download the code as a compressed ZIP file. Then, you can simply extract it locally.</p> <p></p> <p> Figure 0-4 \u00a0 Cloning repository and downloading code </p> <p>Step 3: Run the source code. As shown in Figure 0-5, for the code block labeled with the file name at the top, we can find the corresponding source code file in the <code>codes</code> folder of the repository. These files can be executed with a single click, which will help you save unnecessary debugging time and allow you to focus on learning.</p> <p></p> <p> Figure 0-5 \u00a0 Code block and corresponding source code file </p>"},{"location":"chapter_preface/suggestions/#024-learning-together-in-discussion","title":"0.2.4 \u00a0 Learning together in discussion","text":"<p>While reading this book, please don't skip over the points that you didn't learn. Feel free to post your questions in the comment section. We will be happy to answer them and can usually respond within two days.</p> <p>As illustrated in Figure 0-6, each chapter features a comment section at the bottom. I encourage you to pay attention to these comments. 
Not only do they expose you to the problems other readers have encountered, helping you identify knowledge gaps and think more deeply, but they also invite you to contribute generously by answering fellow readers' questions, sharing insights, and improving together.</p> <p></p> <p> Figure 0-6 \u00a0 Comment section example </p>"},{"location":"chapter_preface/suggestions/#025-algorithm-learning-path","title":"0.2.5 \u00a0 Algorithm learning path","text":"<p>Overall, the journey of mastering data structures and algorithms can be divided into three stages:</p> <ol> <li>Stage 1: Introduction to algorithms. We need to familiarize ourselves with the characteristics and usage of various data structures and learn about the principles, processes, uses, and efficiency of different algorithms.</li> <li>Stage 2: Practicing algorithm problems. It is recommended to start from popular problems, such as Sword for Offer and LeetCode Hot 100, and accumulate at least 100 questions to familiarize yourself with mainstream algorithmic problems. Forgetfulness can be a challenge when you start practicing, but rest assured that this is normal. We can follow the \"Ebbinghaus Forgetting Curve\" to review the questions, and usually after 3~5 rounds of repetitions, we will be able to memorize them.</li> <li>Stage 3: Building the knowledge system. In terms of learning, we can read algorithm column articles, solution frameworks, and algorithm textbooks to continuously enrich the knowledge system. In terms of practice, we can try advanced strategies, such as categorizing by topic, multiple solutions for a single problem, and one solution for multiple problems, etc. 
Insights on these strategies can be found in various communities.</li> </ol> <p>As shown in Figure 0-7, this book mainly covers \u201cStage 1,\u201d aiming to help you more efficiently embark on Stages 2 and 3.</p> <p></p> <p> Figure 0-7 \u00a0 Algorithm learning path </p>"},{"location":"chapter_preface/summary/","title":"0.3 \u00a0 Summary","text":"<ul> <li>The main audience of this book is beginners in algorithms. If you already have some basic knowledge, this book can help you systematically review your algorithm knowledge, and the source code in this book can also be used as a \"Coding Toolkit\".</li> <li>The book consists of three main sections: Complexity Analysis, Data Structures, and Algorithms, covering most of the topics in the field.</li> <li>For newcomers to algorithms, it is crucial to read an introductory book in the beginning stages to avoid many detours or common pitfalls.</li> <li>Animations and figures within the book are usually used to introduce key points and difficult knowledge. These should be given more attention when reading the book.</li> <li>Practice is the best way to learn programming. It is highly recommended that you run the source code and type in the code yourself.</li> <li>Each chapter in the web version of this book features a discussion section, and you are welcome to share your questions and insights at any time.</li> </ul>"},{"location":"chapter_reference/","title":"References","text":"<p>[1] Thomas H. Cormen, et al. Introduction to Algorithms (3<sup>rd</sup> Edition).</p> <p>[2] Aditya Bhargava. Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People (1<sup>st</sup> Edition).</p> <p>[3] Robert Sedgewick, et al. Algorithms (4<sup>th</sup> Edition).</p> <p>[4] Yan Weimin. Data Structures (C Language Version).</p> <p>[5] Deng Junhui. Data Structures (C++ Language Version, Third Edition).</p> <p>[6] Mark Allen Weiss, translated by Chen Yue. 
Data Structures and Algorithm Analysis in Java (Third Edition).</p> <p>[7] Cheng Jie. Speaking of Data Structures.</p> <p>[8] Wang Zheng. The Beauty of Data Structures and Algorithms.</p> <p>[9] Gayle Laakmann McDowell. Cracking the Coding Interview: 189 Programming Questions and Solutions (6<sup>th</sup> Edition).</p> <p>[10] Aston Zhang, et al. Dive into Deep Learning.</p>"},{"location":"chapter_searching/","title":"Chapter 10. \u00a0 Searching","text":"<p>Abstract</p> <p>Searching is an unknown adventure, where we may need to traverse every corner of a mysterious space, or perhaps quickly pinpoint our target.</p> <p>In this journey of discovery, each exploration may yield an unexpected answer.</p>"},{"location":"chapter_searching/#chapter-contents","title":"Chapter contents","text":"<ul> <li>10.1 \u00a0 Binary search</li> <li>10.2 \u00a0 Binary search insertion</li> <li>10.3 \u00a0 Binary search boundaries</li> <li>10.4 \u00a0 Hashing optimization strategies</li> <li>10.5 \u00a0 Search algorithms revisited</li> <li>10.6 \u00a0 Summary</li> </ul>"},{"location":"chapter_searching/binary_search/","title":"10.1 \u00a0 Binary search","text":"<p>Binary search is an efficient search algorithm based on the divide-and-conquer strategy. It utilizes the orderliness of data, reducing the search range by half each round until the target element is found or the search interval is empty.</p> <p>Question</p> <p>Given an array <code>nums</code> of length \\(n\\), with elements arranged in ascending order and non-repeating. Please find and return the index of element <code>target</code> in this array. If the array does not contain the element, return \\(-1\\). An example is shown in Figure 10-1.</p> <p></p> <p> Figure 10-1 \u00a0 Binary search example data </p> <p>As shown in Figure 10-2, we first initialize pointers \\(i = 0\\) and \\(j = n - 1\\), pointing to the first and last elements of the array, representing the search interval \\([0, n - 1]\\). 
Please note that square brackets indicate a closed interval, which includes the boundary values themselves.</p> <p>Next, perform the following two steps in a loop.</p> <ol> <li>Calculate the midpoint index \\(m = \\lfloor {(i + j) / 2} \\rfloor\\), where \\(\\lfloor \\: \\rfloor\\) denotes the floor operation.</li> <li>Compare the size of <code>nums[m]</code> and <code>target</code>, divided into the following three scenarios.<ol> <li>If <code>nums[m] < target</code>, it indicates that <code>target</code> is in the interval \\([m + 1, j]\\), thus set \\(i = m + 1\\).</li> <li>If <code>nums[m] > target</code>, it indicates that <code>target</code> is in the interval \\([i, m - 1]\\), thus set \\(j = m - 1\\).</li> <li>If <code>nums[m] = target</code>, it indicates that <code>target</code> is found, thus return index \\(m\\).</li> </ol> </li> </ol> <p>If the array does not contain the target element, the search interval will eventually reduce to empty. In this case, return \\(-1\\).</p> <1><2><3><4><5><6><7> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 10-2 \u00a0 Binary search process </p> <p>It's worth noting that since \\(i\\) and \\(j\\) are both of type <code>int</code>, \\(i + j\\) might exceed the range of <code>int</code> type. 
To avoid large number overflow, we usually use the formula \\(m = \\lfloor {i + (j - i) / 2} \\rfloor\\) to calculate the midpoint.</p> <p>The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search.py<pre><code>def binary_search(nums: list[int], target: int) -> int:\n \"\"\"Binary search (double closed interval)\"\"\"\n # Initialize double closed interval [0, n-1], i.e., i, j point to the first element and last element of the array respectively\n i, j = 0, len(nums) - 1\n # Loop until the search interval is empty (when i > j, it is empty)\n while i <= j:\n # Theoretically, Python's numbers can be infinitely large (depending on memory size), so there is no need to consider large number overflow\n m = (i + j) // 2 # Calculate midpoint index m\n if nums[m] < target:\n i = m + 1 # This situation indicates that target is in the interval [m+1, j]\n elif nums[m] > target:\n j = m - 1 # This situation indicates that target is in the interval [i, m-1]\n else:\n return m # Found the target element, thus return its index\n return -1 # Did not find the target element, thus return -1\n</code></pre> binary_search.cpp<pre><code>/* Binary search (double closed interval) */\nint binarySearch(vector<int> &nums, int target) {\n // Initialize double closed interval [0, n-1], i.e., i, j point to the first element and last element of the array respectively\n int i = 0, j = nums.size() - 1;\n // Loop until the search interval is empty (when i > j, it is empty)\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) // This situation indicates that target is in the interval [m+1, j]\n i = m + 1;\n else if (nums[m] > target) // This situation indicates that target is in the interval [i, m-1]\n j = m - 1;\n else // Found the target element, thus return its index\n return m;\n }\n // Did not find the target element, thus return -1\n return -1;\n}\n</code></pre> binary_search.java<pre><code>/* Binary search (double 
closed interval) */\nint binarySearch(int[] nums, int target) {\n // Initialize double closed interval [0, n-1], i.e., i, j point to the first element and last element of the array respectively\n int i = 0, j = nums.length - 1;\n // Loop until the search interval is empty (when i > j, it is empty)\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) // This situation indicates that target is in the interval [m+1, j]\n i = m + 1;\n else if (nums[m] > target) // This situation indicates that target is in the interval [i, m-1]\n j = m - 1;\n else // Found the target element, thus return its index\n return m;\n }\n // Did not find the target element, thus return -1\n return -1;\n}\n</code></pre> binary_search.cs<pre><code>[class]{binary_search}-[func]{BinarySearch}\n</code></pre> binary_search.go<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.swift<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.js<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.ts<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.dart<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.rs<pre><code>[class]{}-[func]{binary_search}\n</code></pre> binary_search.c<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.kt<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> binary_search.rb<pre><code>[class]{}-[func]{binary_search}\n</code></pre> binary_search.zig<pre><code>[class]{}-[func]{binarySearch}\n</code></pre> <p>Time complexity is \\(O(\\log n)\\) : In the binary loop, the interval reduces by half each round, hence the number of iterations is \\(\\log_2 n\\).</p> <p>Space complexity is \\(O(1)\\) : Pointers \\(i\\) and \\(j\\) use constant size space.</p>"},{"location":"chapter_searching/binary_search/#1011-interval-representation-methods","title":"10.1.1 \u00a0 Interval representation methods","text":"<p>Besides the 
aforementioned closed interval, a common interval representation is the \"left-closed right-open\" interval, defined as \\([0, n)\\), where the left boundary includes itself, and the right boundary does not include itself. In this representation, the interval \\([i, j)\\) is empty when \\(i = j\\).</p> <p>We can implement a binary search algorithm with the same functionality based on this representation:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search.py<pre><code>def binary_search_lcro(nums: list[int], target: int) -> int:\n \"\"\"Binary search (left closed right open interval)\"\"\"\n # Initialize left closed right open interval [0, n), i.e., i, j point to the first element and the last element +1 of the array respectively\n i, j = 0, len(nums)\n # Loop until the search interval is empty (when i = j, it is empty)\n while i < j:\n m = (i + j) // 2 # Calculate midpoint index m\n if nums[m] < target:\n i = m + 1 # This situation indicates that target is in the interval [m+1, j)\n elif nums[m] > target:\n j = m # This situation indicates that target is in the interval [i, m)\n else:\n return m # Found the target element, thus return its index\n return -1 # Did not find the target element, thus return -1\n</code></pre> binary_search.cpp<pre><code>/* Binary search (left closed right open interval) */\nint binarySearchLCRO(vector<int> &nums, int target) {\n // Initialize left closed right open interval [0, n), i.e., i, j point to the first element and the last element +1 of the array respectively\n int i = 0, j = nums.size();\n // Loop until the search interval is empty (when i = j, it is empty)\n while (i < j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) // This situation indicates that target is in the interval [m+1, j)\n i = m + 1;\n else if (nums[m] > target) // This situation indicates that target is in the interval [i, m)\n j = m;\n else // Found the target element, thus return its index\n return m;\n }\n // 
Did not find the target element, thus return -1\n return -1;\n}\n</code></pre> binary_search.java<pre><code>/* Binary search (left closed right open interval) */\nint binarySearchLCRO(int[] nums, int target) {\n // Initialize left closed right open interval [0, n), i.e., i, j point to the first element and the last element +1 of the array respectively\n int i = 0, j = nums.length;\n // Loop until the search interval is empty (when i = j, it is empty)\n while (i < j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) // This situation indicates that target is in the interval [m+1, j)\n i = m + 1;\n else if (nums[m] > target) // This situation indicates that target is in the interval [i, m)\n j = m;\n else // Found the target element, thus return its index\n return m;\n }\n // Did not find the target element, thus return -1\n return -1;\n}\n</code></pre> binary_search.cs<pre><code>[class]{binary_search}-[func]{BinarySearchLCRO}\n</code></pre> binary_search.go<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.swift<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.js<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.ts<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.dart<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.rs<pre><code>[class]{}-[func]{binary_search_lcro}\n</code></pre> binary_search.c<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.kt<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> binary_search.rb<pre><code>[class]{}-[func]{binary_search_lcro}\n</code></pre> binary_search.zig<pre><code>[class]{}-[func]{binarySearchLCRO}\n</code></pre> <p>As shown in Figure 10-3, in the two types of interval representations, the initialization of the binary search algorithm, the loop condition, and the narrowing interval operation are different.</p> <p>Since both boundaries in the 
\"closed interval\" representation are defined as closed, the operations to narrow the interval through pointers \\(i\\) and \\(j\\) are also symmetrical. This makes it less prone to errors; therefore, it is generally recommended to use the \"closed interval\" approach.</p> <p></p> <p> Figure 10-3 \u00a0 Two types of interval definitions </p>"},{"location":"chapter_searching/binary_search/#1012-advantages-and-limitations","title":"10.1.2 \u00a0 Advantages and limitations","text":"<p>Binary search performs well in terms of both time and space.</p> <ul> <li>Binary search is time-efficient. With large data volumes, the logarithmic time complexity has a significant advantage. For instance, when the data size \\(n = 2^{20}\\), linear search requires \\(2^{20} = 1048576\\) iterations, while binary search only requires \\(\\log_2 2^{20} = 20\\) iterations.</li> <li>Binary search does not require extra space. Compared to search algorithms that rely on additional space (like hash search), binary search is more space-efficient.</li> </ul> <p>However, binary search is not suitable for all situations, mainly for the following reasons.</p> <ul> <li>Binary search is only applicable to ordered data. If the input data is unordered, it is not worth sorting it just to use binary search, as sorting algorithms typically have a time complexity of \\(O(n \\log n)\\), which is higher than that of both linear and binary search. For scenarios with frequent element insertions that must maintain array order, inserting elements at specific positions has a time complexity of \\(O(n)\\), which is also quite costly.</li> <li>Binary search is only applicable to arrays. Binary search requires non-continuous (jumping) element access, which is inefficient in linked lists, so it is not suitable for linked lists or data structures based on linked lists.</li> <li>With small data volumes, linear search performs better. 
Each round of linear search requires only 1 decision operation, whereas each round of binary search involves 1 addition, 1 division, 1 to 3 decision operations, and 1 addition (or subtraction), totaling 4 to 6 operations; therefore, when the data volume \\(n\\) is small, linear search can be faster than binary search.</li> </ul>"},{"location":"chapter_searching/binary_search_edge/","title":"10.3 \u00a0 Binary search boundaries","text":""},{"location":"chapter_searching/binary_search_edge/#1031-find-the-left-boundary","title":"10.3.1 \u00a0 Find the left boundary","text":"<p>Question</p> <p>Given a sorted array <code>nums</code> of length \\(n\\), which may contain duplicate elements, return the index of the leftmost occurrence of the element <code>target</code>. If the element is not present in the array, return \\(-1\\).</p> <p>Recall the binary search method for an insertion point: after the search is completed, \\(i\\) points to the leftmost <code>target</code>, so searching for the insertion point is essentially searching for the index of the leftmost <code>target</code>.</p> <p>Consider implementing the search for the left boundary using the function for finding an insertion point. Note that the array might not contain <code>target</code>, which could lead to the following two results:</p> <ul> <li>The index \\(i\\) of the insertion point is out of bounds.</li> <li>The element <code>nums[i]</code> is not equal to <code>target</code>.</li> </ul> <p>In these cases, simply return \\(-1\\). 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_edge.py<pre><code>def binary_search_left_edge(nums: list[int], target: int) -> int:\n \"\"\"Binary search for the leftmost target\"\"\"\n # Equivalent to finding the insertion point of target\n i = binary_search_insertion(nums, target)\n # Did not find target, thus return -1\n if i == len(nums) or nums[i] != target:\n return -1\n # Found target, return index i\n return i\n</code></pre> binary_search_edge.cpp<pre><code>/* Binary search for the leftmost target */\nint binarySearchLeftEdge(vector<int> &nums, int target) {\n // Equivalent to finding the insertion point of target\n int i = binarySearchInsertion(nums, target);\n // Did not find target, thus return -1\n if (i == nums.size() || nums[i] != target) {\n return -1;\n }\n // Found target, return index i\n return i;\n}\n</code></pre> binary_search_edge.java<pre><code>/* Binary search for the leftmost target */\nint binarySearchLeftEdge(int[] nums, int target) {\n // Equivalent to finding the insertion point of target\n int i = binary_search_insertion.binarySearchInsertion(nums, target);\n // Did not find target, thus return -1\n if (i == nums.length || nums[i] != target) {\n return -1;\n }\n // Found target, return index i\n return i;\n}\n</code></pre> binary_search_edge.cs<pre><code>[class]{binary_search_edge}-[func]{BinarySearchLeftEdge}\n</code></pre> binary_search_edge.go<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.swift<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.js<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.ts<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.dart<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.rs<pre><code>[class]{}-[func]{binary_search_left_edge}\n</code></pre> 
binary_search_edge.c<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.kt<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre> binary_search_edge.rb<pre><code>[class]{}-[func]{binary_search_left_edge}\n</code></pre> binary_search_edge.zig<pre><code>[class]{}-[func]{binarySearchLeftEdge}\n</code></pre>"},{"location":"chapter_searching/binary_search_edge/#1032-find-the-right-boundary","title":"10.3.2 \u00a0 Find the right boundary","text":"<p>So how do we find the rightmost <code>target</code>? The most straightforward way is to modify the code, changing the pointer contraction operation for the case <code>nums[m] == target</code>. The code is omitted here, but interested readers can implement it on their own.</p> <p>Below we introduce two more clever methods.</p>"},{"location":"chapter_searching/binary_search_edge/#1-reusing-the-search-for-the-left-boundary","title":"1. \u00a0 Reusing the search for the left boundary","text":"<p>In fact, we can use the function for finding the leftmost element to find the rightmost element, specifically by transforming the search for the rightmost <code>target</code> into a search for the leftmost <code>target + 1</code>.</p> <p>As shown in Figure 10-7, after the search is completed, the pointer \\(i\\) points to the leftmost <code>target + 1</code> (if it exists), while \\(j\\) points to the rightmost <code>target</code>, so returning \\(j\\) is sufficient.</p> <p></p> <p> Figure 10-7 \u00a0 Transforming the search for the right boundary into the search for the left boundary </p> <p>Note that the insertion point returned is \\(i\\); therefore, subtract \\(1\\) from it to obtain \\(j\\):</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_edge.py<pre><code>def binary_search_right_edge(nums: list[int], target: int) -> int:\n \"\"\"Binary search for the rightmost target\"\"\"\n # Convert to finding the leftmost target + 1\n i = binary_search_insertion(nums, 
target + 1)\n # j points to the rightmost target, i points to the first element greater than target\n j = i - 1\n # Did not find target, thus return -1\n if j == -1 or nums[j] != target:\n return -1\n # Found target, return index j\n return j\n</code></pre> binary_search_edge.cpp<pre><code>/* Binary search for the rightmost target */\nint binarySearchRightEdge(vector<int> &nums, int target) {\n // Convert to finding the leftmost target + 1\n int i = binarySearchInsertion(nums, target + 1);\n // j points to the rightmost target, i points to the first element greater than target\n int j = i - 1;\n // Did not find target, thus return -1\n if (j == -1 || nums[j] != target) {\n return -1;\n }\n // Found target, return index j\n return j;\n}\n</code></pre> binary_search_edge.java<pre><code>/* Binary search for the rightmost target */\nint binarySearchRightEdge(int[] nums, int target) {\n // Convert to finding the leftmost target + 1\n int i = binary_search_insertion.binarySearchInsertion(nums, target + 1);\n // j points to the rightmost target, i points to the first element greater than target\n int j = i - 1;\n // Did not find target, thus return -1\n if (j == -1 || nums[j] != target) {\n return -1;\n }\n // Found target, return index j\n return j;\n}\n</code></pre> binary_search_edge.cs<pre><code>[class]{binary_search_edge}-[func]{BinarySearchRightEdge}\n</code></pre> binary_search_edge.go<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.swift<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.js<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.ts<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.dart<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.rs<pre><code>[class]{}-[func]{binary_search_right_edge}\n</code></pre> 
binary_search_edge.c<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.kt<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre> binary_search_edge.rb<pre><code>[class]{}-[func]{binary_search_right_edge}\n</code></pre> binary_search_edge.zig<pre><code>[class]{}-[func]{binarySearchRightEdge}\n</code></pre>"},{"location":"chapter_searching/binary_search_edge/#2-transforming-into-an-element-search","title":"2. \u00a0 Transforming into an element search","text":"<p>We know that when the array does not contain <code>target</code>, \\(i\\) and \\(j\\) will eventually point to the first element greater and smaller than <code>target</code> respectively.</p> <p>Thus, as shown in Figure 10-8, we can construct an element that does not exist in the array, to search for the left and right boundaries.</p> <ul> <li>To find the leftmost <code>target</code>: it can be transformed into searching for <code>target - 0.5</code>, and return the pointer \\(i\\).</li> <li>To find the rightmost <code>target</code>: it can be transformed into searching for <code>target + 0.5</code>, and return the pointer \\(j\\).</li> </ul> <p></p> <p> Figure 10-8 \u00a0 Transforming the search for boundaries into the search for an element </p> <p>The code is omitted here, but two points are worth noting.</p> <ul> <li>The given array does not contain decimals, meaning we do not need to worry about how to handle equal situations.</li> <li>Since this method introduces decimals, the variable <code>target</code> in the function needs to be changed to a floating point type (no change needed in Python).</li> </ul>"},{"location":"chapter_searching/binary_search_insertion/","title":"10.2 \u00a0 Binary search insertion","text":"<p>Binary search is not only used to search for target elements but also to solve many variant problems, such as searching for the insertion position of target 
elements.</p>"},{"location":"chapter_searching/binary_search_insertion/#1021-case-with-no-duplicate-elements","title":"10.2.1 \u00a0 Case with no duplicate elements","text":"<p>Question</p> <p>Given an ordered array <code>nums</code> of length \\(n\\) and an element <code>target</code>, where the array has no duplicate elements. Now insert <code>target</code> into the array <code>nums</code> while maintaining its order. If the element <code>target</code> already exists in the array, insert it to its left side. Please return the index of <code>target</code> in the array after insertion. See the example shown in Figure 10-4.</p> <p></p> <p> Figure 10-4 \u00a0 Example data for binary search insertion point </p> <p>If you want to reuse the binary search code from the previous section, you need to answer the following two questions.</p> <p>Question one: When the array contains <code>target</code>, is the insertion point index the index of that element?</p> <p>The requirement to insert <code>target</code> to the left of equal elements means that the newly inserted <code>target</code> replaces the original <code>target</code> position. Thus, when the array contains <code>target</code>, the insertion point index is the index of that <code>target</code>.</p> <p>Question two: When the array does not contain <code>target</code>, what is the index of the insertion point?</p> <p>Further consider the binary search process: when <code>nums[m] < target</code>, pointer \\(i\\) moves, meaning that pointer \\(i\\) is approaching an element greater than or equal to <code>target</code>. Similarly, pointer \\(j\\) is always approaching an element less than or equal to <code>target</code>.</p> <p>Therefore, at the end of the binary search, it is certain that \\(i\\) points to the first element greater than <code>target</code>, and \\(j\\) points to the first element less than <code>target</code>. 
It is easy to see that when the array does not contain <code>target</code>, the insertion index is \\(i\\). The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_insertion.py<pre><code>def binary_search_insertion_simple(nums: list[int], target: int) -> int:\n \"\"\"Binary search for insertion point (no duplicate elements)\"\"\"\n i, j = 0, len(nums) - 1 # Initialize double closed interval [0, n-1]\n while i <= j:\n m = (i + j) // 2 # Calculate midpoint index m\n if nums[m] < target:\n i = m + 1 # Target is in interval [m+1, j]\n elif nums[m] > target:\n j = m - 1 # Target is in interval [i, m-1]\n else:\n return m # Found target, return insertion point m\n # Did not find target, return insertion point i\n return i\n</code></pre> binary_search_insertion.cpp<pre><code>/* Binary search for insertion point (no duplicate elements) */\nint binarySearchInsertionSimple(vector<int> &nums, int target) {\n int i = 0, j = nums.size() - 1; // Initialize double closed interval [0, n-1]\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) {\n i = m + 1; // Target is in interval [m+1, j]\n } else if (nums[m] > target) {\n j = m - 1; // Target is in interval [i, m-1]\n } else {\n return m; // Found target, return insertion point m\n }\n }\n // Did not find target, return insertion point i\n return i;\n}\n</code></pre> binary_search_insertion.java<pre><code>/* Binary search for insertion point (no duplicate elements) */\nint binarySearchInsertionSimple(int[] nums, int target) {\n int i = 0, j = nums.length - 1; // Initialize double closed interval [0, n-1]\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) {\n i = m + 1; // Target is in interval [m+1, j]\n } else if (nums[m] > target) {\n j = m - 1; // Target is in interval [i, m-1]\n } else {\n return m; // Found target, return insertion point m\n }\n }\n // Did not find target, return insertion point 
i\n return i;\n}\n</code></pre> binary_search_insertion.cs<pre><code>[class]{binary_search_insertion}-[func]{BinarySearchInsertionSimple}\n</code></pre> binary_search_insertion.go<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.swift<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.js<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.ts<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.dart<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.rs<pre><code>[class]{}-[func]{binary_search_insertion_simple}\n</code></pre> binary_search_insertion.c<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.kt<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre> binary_search_insertion.rb<pre><code>[class]{}-[func]{binary_search_insertion_simple}\n</code></pre> binary_search_insertion.zig<pre><code>[class]{}-[func]{binarySearchInsertionSimple}\n</code></pre>"},{"location":"chapter_searching/binary_search_insertion/#1022-case-with-duplicate-elements","title":"10.2.2 \u00a0 Case with duplicate elements","text":"<p>Question</p> <p>Based on the previous question, assume the array may contain duplicate elements, all else remains the same.</p> <p>Suppose there are multiple <code>target</code>s in the array, ordinary binary search can only return the index of one of the <code>target</code>s, and it cannot determine how many <code>target</code>s are to the left and right of that element.</p> <p>The task requires inserting the target element to the very left, so we need to find the index of the leftmost <code>target</code> in the array. 
Initially consider implementing this through the steps shown in Figure 10-5.</p> <ol> <li>Perform a binary search to get an arbitrary index of <code>target</code>, denoted as \\(k\\).</li> <li>Starting from index \\(k\\), perform a linear search to the left until the leftmost <code>target</code> is found, then return it.</li> </ol> <p></p> <p> Figure 10-5 \u00a0 Linear search for the insertion point of duplicate elements </p> <p>Although this method is feasible, it includes linear search, so its time complexity is \\(O(n)\\). This method is inefficient when the array contains many duplicate <code>target</code>s.</p> <p>Now consider extending the binary search code. As shown in Figure 10-6, the overall process remains the same: each round first calculates the midpoint index \\(m\\), then judges the size relationship between <code>target</code> and <code>nums[m]</code>, divided into the following cases.</p> <ul> <li>When <code>nums[m] < target</code> or <code>nums[m] > target</code>, it means <code>target</code> has not been found yet, so we use the normal binary search interval reduction operation, thereby making pointers \\(i\\) and \\(j\\) approach <code>target</code>.</li> <li>When <code>nums[m] == target</code>, it indicates that the elements less than <code>target</code> are in the interval \\([i, m - 1]\\), so we use \\(j = m - 1\\) to narrow the interval, thereby making pointer \\(j\\) approach elements less than <code>target</code>.</li> </ul> <p>After the loop, \\(i\\) points to the leftmost <code>target</code>, and \\(j\\) points to the first element less than <code>target</code>; therefore, index \\(i\\) is the insertion point.</p> <1><2><3><4><5><6><7><8> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 10-6 \u00a0 Steps for binary search insertion point of duplicate elements </p> <p>Observing the code, the operations of the branches <code>nums[m] > target</code> and <code>nums[m] == target</code> are the same, so the two can be combined.</p> 
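As a minimal Python sketch of that merged form (the function name is our own, illustrative only; it assumes the same double closed interval setup used in this section):

```python
def binary_search_insertion_merged(nums: list[int], target: int) -> int:
    """Binary search for insertion point, with the two j-shrinking branches merged (sketch)"""
    i, j = 0, len(nums) - 1  # Initialize double closed interval [0, n-1]
    while i <= j:
        m = (i + j) // 2  # Calculate midpoint index m
        if nums[m] < target:
            i = m + 1  # Target is in interval [m+1, j]
        else:
            # nums[m] >= target: the first element less than target is in [i, m-1]
            j = m - 1
    # i is the insertion point, i.e., the index of the leftmost target if present
    return i
```

This behaves identically to the expanded three-branch version in the listings that follow; the expanded form simply spells out the two cases separately.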
<p>Even so, we can still keep the conditions expanded, as their logic is clearer and more readable.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_insertion.py<pre><code>def binary_search_insertion(nums: list[int], target: int) -> int:\n \"\"\"Binary search for insertion point (with duplicate elements)\"\"\"\n i, j = 0, len(nums) - 1 # Initialize double closed interval [0, n-1]\n while i <= j:\n m = (i + j) // 2 # Calculate midpoint index m\n if nums[m] < target:\n i = m + 1 # Target is in interval [m+1, j]\n elif nums[m] > target:\n j = m - 1 # Target is in interval [i, m-1]\n else:\n j = m - 1 # First element less than target is in interval [i, m-1]\n # Return insertion point i\n return i\n</code></pre> binary_search_insertion.cpp<pre><code>/* Binary search for insertion point (with duplicate elements) */\nint binarySearchInsertion(vector<int> &nums, int target) {\n int i = 0, j = nums.size() - 1; // Initialize double closed interval [0, n-1]\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) {\n i = m + 1; // Target is in interval [m+1, j]\n } else if (nums[m] > target) {\n j = m - 1; // Target is in interval [i, m-1]\n } else {\n j = m - 1; // First element less than target is in interval [i, m-1]\n }\n }\n // Return insertion point i\n return i;\n}\n</code></pre> binary_search_insertion.java<pre><code>/* Binary search for insertion point (with duplicate elements) */\nint binarySearchInsertion(int[] nums, int target) {\n int i = 0, j = nums.length - 1; // Initialize double closed interval [0, n-1]\n while (i <= j) {\n int m = i + (j - i) / 2; // Calculate midpoint index m\n if (nums[m] < target) {\n i = m + 1; // Target is in interval [m+1, j]\n } else if (nums[m] > target) {\n j = m - 1; // Target is in interval [i, m-1]\n } else {\n j = m - 1; // First element less than target is in interval [i, m-1]\n }\n }\n // Return insertion point i\n return i;\n}\n</code></pre> 
binary_search_insertion.cs<pre><code>[class]{binary_search_insertion}-[func]{BinarySearchInsertion}\n</code></pre> binary_search_insertion.go<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.swift<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.js<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.ts<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.dart<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.rs<pre><code>[class]{}-[func]{binary_search_insertion}\n</code></pre> binary_search_insertion.c<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.kt<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> binary_search_insertion.rb<pre><code>[class]{}-[func]{binary_search_insertion}\n</code></pre> binary_search_insertion.zig<pre><code>[class]{}-[func]{binarySearchInsertion}\n</code></pre> <p>Tip</p> <p>The code in this section uses \"closed intervals\". Readers interested can implement the \"left-closed right-open\" method themselves.</p> <p>In summary, binary search is merely about setting search targets for pointers \\(i\\) and \\(j\\), which might be a specific element (like <code>target</code>) or a range of elements (like elements less than <code>target</code>).</p> <p>In the continuous loop of binary search, pointers \\(i\\) and \\(j\\) gradually approach the predefined target. Ultimately, they either find the answer or stop after crossing the boundary.</p>"},{"location":"chapter_searching/replace_linear_by_hashing/","title":"10.4 \u00a0 Hash optimization strategies","text":"<p>In algorithm problems, we often reduce the time complexity of algorithms by replacing linear search with hash search. 
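The general pattern can be sketched in a few lines of Python (illustrative only; the function names here are our own): a repeated \(O(n)\) linear membership scan is replaced by building a hash set once and then answering each query in \(O(1)\) on average.

```python
def contains_linear(nums: list[int], x: int) -> bool:
    """Linear search: scans every element, O(n) per query"""
    for v in nums:
        if v == x:
            return True
    return False

def make_hash_lookup(nums: list[int]):
    """Builds a hash set once in O(n); each later query is O(1) on average"""
    s = set(nums)
    return lambda x: x in s

nums = [6, 2, 9, 4, 7]
contains_hash = make_hash_lookup(nums)
# Both strategies agree on membership; only the per-query cost differs
assert all(contains_linear(nums, x) == contains_hash(x) for x in range(11))
```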
Let's use an algorithm problem to deepen understanding.</p> <p>Question</p> <p>Given an integer array <code>nums</code> and a target element <code>target</code>, please search for two elements in the array whose \"sum\" equals <code>target</code>, and return their array indices. Any solution is acceptable.</p>"},{"location":"chapter_searching/replace_linear_by_hashing/#1041-linear-search-trading-time-for-space","title":"10.4.1 \u00a0 Linear search: trading time for space","text":"<p>Consider traversing all possible combinations directly. As shown in Figure 10-9, we initiate a two-layer loop, and in each round, we determine whether the sum of the two integers equals <code>target</code>. If so, we return their indices.</p> <p></p> <p> Figure 10-9 \u00a0 Linear search solution for two-sum problem </p> <p>The code is shown below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig two_sum.py<pre><code>def two_sum_brute_force(nums: list[int], target: int) -> list[int]:\n \"\"\"Method one: Brute force enumeration\"\"\"\n # Two-layer loop, time complexity is O(n^2)\n for i in range(len(nums) - 1):\n for j in range(i + 1, len(nums)):\n if nums[i] + nums[j] == target:\n return [i, j]\n return []\n</code></pre> two_sum.cpp<pre><code>/* Method one: Brute force enumeration */\nvector<int> twoSumBruteForce(vector<int> &nums, int target) {\n int size = nums.size();\n // Two-layer loop, time complexity is O(n^2)\n for (int i = 0; i < size - 1; i++) {\n for (int j = i + 1; j < size; j++) {\n if (nums[i] + nums[j] == target)\n return {i, j};\n }\n }\n return {};\n}\n</code></pre> two_sum.java<pre><code>/* Method one: Brute force enumeration */\nint[] twoSumBruteForce(int[] nums, int target) {\n int size = nums.length;\n // Two-layer loop, time complexity is O(n^2)\n for (int i = 0; i < size - 1; i++) {\n for (int j = i + 1; j < size; j++) {\n if (nums[i] + nums[j] == target)\n return new int[] { i, j };\n }\n }\n return new int[0];\n}\n</code></pre> 
two_sum.cs<pre><code>[class]{two_sum}-[func]{TwoSumBruteForce}\n</code></pre> two_sum.go<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.swift<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.js<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.ts<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.dart<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.rs<pre><code>[class]{}-[func]{two_sum_brute_force}\n</code></pre> two_sum.c<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.kt<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> two_sum.rb<pre><code>[class]{}-[func]{two_sum_brute_force}\n</code></pre> two_sum.zig<pre><code>[class]{}-[func]{twoSumBruteForce}\n</code></pre> <p>This method has a time complexity of \\(O(n^2)\\) and a space complexity of \\(O(1)\\), which is very time-consuming with large data volumes.</p>"},{"location":"chapter_searching/replace_linear_by_hashing/#1042-hash-search-trading-space-for-time","title":"10.4.2 \u00a0 Hash search: trading space for time","text":"<p>Consider using a hash table, with key-value pairs being the array elements and their indices, respectively. Loop through the array, performing the steps shown in Figure 10-10 each round.</p> <ol> <li>Check if the number <code>target - nums[i]</code> is in the hash table. 
If so, directly return the indices of these two elements.</li> <li>Add the element <code>nums[i]</code> with its index <code>i</code> to the hash table as a key-value pair.</li> </ol> <1><2><3> <p></p> <p></p> <p></p> <p> Figure 10-10 \u00a0 Solving the two-sum problem with the help of a hash table </p> <p>The implementation code is shown below, requiring only a single loop:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig two_sum.py<pre><code>def two_sum_hash_table(nums: list[int], target: int) -> list[int]:\n \"\"\"Method two: Auxiliary hash table\"\"\"\n # Auxiliary hash table, space complexity is O(n)\n dic = {}\n # Single-layer loop, time complexity is O(n)\n for i in range(len(nums)):\n if target - nums[i] in dic:\n return [dic[target - nums[i]], i]\n dic[nums[i]] = i\n return []\n</code></pre> two_sum.cpp<pre><code>/* Method two: Auxiliary hash table */\nvector<int> twoSumHashTable(vector<int> &nums, int target) {\n int size = nums.size();\n // Auxiliary hash table, space complexity is O(n)\n unordered_map<int, int> dic;\n // Single-layer loop, time complexity is O(n)\n for (int i = 0; i < size; i++) {\n if (dic.find(target - nums[i]) != dic.end()) {\n return {dic[target - nums[i]], i};\n }\n dic.emplace(nums[i], i);\n }\n return {};\n}\n</code></pre> two_sum.java<pre><code>/* Method two: Auxiliary hash table */\nint[] twoSumHashTable(int[] nums, int target) {\n int size = nums.length;\n // Auxiliary hash table, space complexity is O(n)\n Map<Integer, Integer> dic = new HashMap<>();\n // Single-layer loop, time complexity is O(n)\n for (int i = 0; i < size; i++) {\n if (dic.containsKey(target - nums[i])) {\n return new int[] { dic.get(target - nums[i]), i };\n }\n dic.put(nums[i], i);\n }\n return new int[0];\n}\n</code></pre> two_sum.cs<pre><code>[class]{two_sum}-[func]{TwoSumHashTable}\n</code></pre> two_sum.go<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.swift<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> 
two_sum.js<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.ts<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.dart<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.rs<pre><code>[class]{}-[func]{two_sum_hash_table}\n</code></pre> two_sum.c<pre><code>[class]{HashTable}-[func]{}\n\n[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.kt<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> two_sum.rb<pre><code>[class]{}-[func]{two_sum_hash_table}\n</code></pre> two_sum.zig<pre><code>[class]{}-[func]{twoSumHashTable}\n</code></pre> <p>This method reduces the time complexity from \\(O(n^2)\\) to \\(O(n)\\) by using hash search, greatly improving the running efficiency.</p> <p>As it requires maintaining an additional hash table, the space complexity is \\(O(n)\\). Nevertheless, this method has a more balanced time-space efficiency overall, making it the optimal solution for this problem.</p>"},{"location":"chapter_searching/searching_algorithm_revisited/","title":"10.5 \u00a0 Search algorithms revisited","text":"<p>Searching algorithms are used to search for one or several elements that meet specific criteria in data structures such as arrays, linked lists, trees, or graphs.</p> <p>Searching algorithms can be divided into the following two categories based on their implementation approaches.</p> <ul> <li>Locating the target element by traversing the data structure, such as traversals of arrays, linked lists, trees, and graphs.</li> <li>Using the organizational structure of the data or the prior information contained in the data to achieve efficient element search, such as binary search, hash search, and binary search tree search.</li> </ul> <p>It is not difficult to notice that these topics have been introduced in previous chapters, so searching algorithms are not unfamiliar to us. 
In this section, we will revisit searching algorithms from a more systematic perspective.</p>"},{"location":"chapter_searching/searching_algorithm_revisited/#1051-brute-force-search","title":"10.5.1 \u00a0 Brute-force search","text":"<p>Brute-force search locates the target element by traversing every element of the data structure.</p> <ul> <li>\"Linear search\" is suitable for linear data structures such as arrays and linked lists. It starts from one end of the data structure, accesses each element one by one, until the target element is found or the other end is reached without finding the target element.</li> <li>\"Breadth-first search\" and \"Depth-first search\" are two traversal strategies for graphs and trees. Breadth-first search starts from the initial node and searches layer by layer, accessing nodes from near to far. Depth-first search starts from the initial node, follows a path until the end, then backtracks and tries other paths until the entire data structure is traversed.</li> </ul> <p>The advantage of brute-force search is its simplicity and versatility: it requires no data preprocessing and no additional data structures.</p> <p>However, the time complexity of this type of algorithm is \\(O(n)\\), where \\(n\\) is the number of elements, so the performance is poor in cases of large data volumes.</p>"},{"location":"chapter_searching/searching_algorithm_revisited/#1052-adaptive-search","title":"10.5.2 \u00a0 Adaptive search","text":"<p>Adaptive search uses the unique properties of data (such as order) to optimize the search process, thereby locating the target element more efficiently.</p> <ul> <li>\"Binary search\" uses the orderliness of data to achieve efficient searching, only suitable for arrays.</li> <li>\"Hash search\" uses a hash table to establish a key-value mapping between search data and target data, thus implementing the query operation.</li> <li>\"Tree search\" in a specific tree structure (such as a binary search tree) quickly 
eliminates nodes based on node value comparisons, thus locating the target element.</li> </ul> <p>The advantage of these algorithms is high efficiency, with time complexities reaching \\(O(\\log n)\\) or even \\(O(1)\\).</p> <p>However, using these algorithms often requires data preprocessing. For example, binary search requires sorting the array in advance, while hash search and tree search both require the help of additional data structures; maintaining these structures also incurs extra time and space overhead.</p> <p>Tip</p> <p>Adaptive search algorithms are often referred to as search algorithms, mainly used for quickly retrieving target elements in specific data structures.</p>"},{"location":"chapter_searching/searching_algorithm_revisited/#1053-choosing-a-search-method","title":"10.5.3 \u00a0 Choosing a search method","text":"<p>Given a set of data of size \\(n\\), we can use linear search, binary search, tree search, hash search, and other methods to search for the target element from it. 
The working principles of these methods are shown in Figure 10-11.</p> <p></p> <p> Figure 10-11 \u00a0 Various search strategies </p> <p>The operation efficiency and characteristics of the aforementioned methods are shown in the following table.</p> <p> Table 10-1 \u00a0 Comparison of search algorithm efficiency </p> Linear search Binary search Tree search Hash search Search element \\(O(n)\\) \\(O(\\log n)\\) \\(O(\\log n)\\) \\(O(1)\\) Insert element \\(O(1)\\) \\(O(n)\\) \\(O(\\log n)\\) \\(O(1)\\) Delete element \\(O(n)\\) \\(O(n)\\) \\(O(\\log n)\\) \\(O(1)\\) Extra space \\(O(1)\\) \\(O(1)\\) \\(O(n)\\) \\(O(n)\\) Data preprocessing / Sorting \\(O(n \\log n)\\) Building tree \\(O(n \\log n)\\) Building hash table \\(O(n)\\) Data orderliness Unordered Ordered Ordered Unordered <p>The choice of search algorithm also depends on the volume of data, search performance requirements, data query and update frequency, etc.</p> <p>Linear search</p> <ul> <li>Good versatility, no need for any data preprocessing operations. 
If we only need to query the data once, then the time for data preprocessing in the other three methods would be longer than the time for linear search.</li> <li>Suitable for small volumes of data, where time complexity has a smaller impact on efficiency.</li> <li>Suitable for scenarios with high data update frequency, because this method does not require any additional maintenance of the data.</li> </ul> <p>Binary search</p> <ul> <li>Suitable for large data volumes, with stable efficiency performance, the worst time complexity being \\(O(\\log n)\\).</li> <li>The data volume cannot be too large, because storing arrays requires contiguous memory space.</li> <li>Not suitable for scenarios with frequent additions and deletions, because maintaining an ordered array incurs high overhead.</li> </ul> <p>Hash search</p> <ul> <li>Suitable for scenarios with high query performance requirements, with an average time complexity of \\(O(1)\\).</li> <li>Not suitable for scenarios needing ordered data or range searches, because hash tables cannot maintain data orderliness.</li> <li>High dependency on hash functions and hash collision handling strategies, with significant performance degradation risks.</li> <li>Not suitable for overly large data volumes, because hash tables need extra space to minimize collisions and provide good query performance.</li> </ul> <p>Tree search</p> <ul> <li>Suitable for massive data, because tree nodes are stored scattered in memory.</li> <li>Suitable for maintaining ordered data or range searches.</li> <li>In the continuous addition and deletion of nodes, the binary search tree may become skewed, degrading the time complexity to \\(O(n)\\).</li> <li>If using AVL trees or red-black trees, operations can run stably at \\(O(\\log n)\\) efficiency, but the operation to maintain tree balance adds extra overhead.</li> </ul>"},{"location":"chapter_searching/summary/","title":"10.6 \u00a0 Summary","text":"<ul> <li>Binary search depends on the order of data 
and performs the search by iteratively halving the search interval. It requires the input data to be sorted and is only applicable to arrays or array-based data structures.</li> <li>Brute force search locates data by traversing the data structure. Linear search is suitable for arrays and linked lists, while breadth-first search and depth-first search are suitable for graphs and trees. These algorithms are highly versatile, requiring no preprocessing of data, but have a higher time complexity of \\(O(n)\\).</li> <li>Hash search, tree search, and binary search are efficient searching methods, capable of quickly locating target elements in specific data structures. These algorithms are highly efficient, with time complexities reaching \\(O(\\log n)\\) or even \\(O(1)\\), but they usually require additional data structures.</li> <li>In practice, we need to analyze factors such as data volume, search performance requirements, data query and update frequencies, etc., to choose the appropriate search method.</li> <li>Linear search is suitable for small or frequently updated data; binary search is suitable for large, sorted data; hash search is suitable for scenarios requiring high query efficiency without the need for range queries; tree search is appropriate for large dynamic data that needs to maintain order and support range queries.</li> <li>Replacing linear search with hash search is a common strategy to optimize runtime, reducing the time complexity from \\(O(n)\\) to \\(O(1)\\).</li> </ul>"},{"location":"chapter_sorting/","title":"Chapter 11. 
\u00a0 Sorting","text":"<p>Abstract</p> <p>Sorting is like a magical key that turns chaos into order, enabling us to understand and handle data in a more efficient manner.</p> <p>Whether it's simple ascending order or complex categorical arrangements, sorting reveals the harmonious beauty of data.</p>"},{"location":"chapter_sorting/#chapter-contents","title":"Chapter contents","text":"<ul> <li>11.1 \u00a0 Sorting algorithms</li> <li>11.2 \u00a0 Selection sort</li> <li>11.3 \u00a0 Bubble sort</li> <li>11.4 \u00a0 Insertion sort</li> <li>11.5 \u00a0 Quick sort</li> <li>11.6 \u00a0 Merge sort</li> <li>11.7 \u00a0 Heap sort</li> <li>11.8 \u00a0 Bucket sort</li> <li>11.9 \u00a0 Counting sort</li> <li>11.10 \u00a0 Radix sort</li> <li>11.11 \u00a0 Summary</li> </ul>"},{"location":"chapter_sorting/bubble_sort/","title":"11.3 \u00a0 Bubble sort","text":"<p>Bubble sort achieves sorting by continuously comparing and swapping adjacent elements. This process resembles bubbles rising from the bottom to the top, hence the name bubble sort.</p> <p>As shown in Figure 11-4, the bubbling process can be simulated using element swap operations: starting from the leftmost end of the array and moving right, sequentially compare the size of adjacent elements. If \"left element > right element,\" then swap them. 
After the traversal, the largest element will be moved to the far right end of the array.</p> <1><2><3><4><5><6><7> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-4 \u00a0 Simulating bubble process using element swap </p>"},{"location":"chapter_sorting/bubble_sort/#1131-algorithm-process","title":"11.3.1 \u00a0 Algorithm process","text":"<p>Assuming the length of the array is \\(n\\), the steps of bubble sort are shown in Figure 11-5.</p> <ol> <li>First, perform a \"bubble\" on \\(n\\) elements, swapping the largest element to its correct position.</li> <li>Next, perform a \"bubble\" on the remaining \\(n - 1\\) elements, swapping the second largest element to its correct position.</li> <li>Similarly, after \\(n - 1\\) rounds of \"bubbling,\" the top \\(n - 1\\) largest elements will be swapped to their correct positions.</li> <li>The only remaining element is necessarily the smallest and does not require sorting, thus the array sorting is complete.</li> </ol> <p></p> <p> Figure 11-5 \u00a0 Bubble sort process </p> <p>Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig bubble_sort.py<pre><code>def bubble_sort(nums: list[int]):\n \"\"\"Bubble sort\"\"\"\n n = len(nums)\n # Outer loop: unsorted range is [0, i]\n for i in range(n - 1, 0, -1):\n # Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for j in range(i):\n if nums[j] > nums[j + 1]:\n # Swap nums[j] and nums[j + 1]\n nums[j], nums[j + 1] = nums[j + 1], nums[j]\n</code></pre> bubble_sort.cpp<pre><code>/* Bubble sort */\nvoid bubbleSort(vector<int> &nums) {\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.size() - 1; i > 0; i--) {\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for (int j = 0; j < i; j++) {\n if (nums[j] > nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n // Use std::swap() here\n swap(nums[j], nums[j + 1]);\n }\n }\n 
}\n}\n</code></pre> bubble_sort.java<pre><code>/* Bubble sort */\nvoid bubbleSort(int[] nums) {\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.length - 1; i > 0; i--) {\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for (int j = 0; j < i; j++) {\n if (nums[j] > nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n int tmp = nums[j];\n nums[j] = nums[j + 1];\n nums[j + 1] = tmp;\n }\n }\n }\n}\n</code></pre> bubble_sort.cs<pre><code>[class]{bubble_sort}-[func]{BubbleSort}\n</code></pre> bubble_sort.go<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.swift<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.js<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.ts<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.dart<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.rs<pre><code>[class]{}-[func]{bubble_sort}\n</code></pre> bubble_sort.c<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.kt<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre> bubble_sort.rb<pre><code>[class]{}-[func]{bubble_sort}\n</code></pre> bubble_sort.zig<pre><code>[class]{}-[func]{bubbleSort}\n</code></pre>"},{"location":"chapter_sorting/bubble_sort/#1132-efficiency-optimization","title":"11.3.2 \u00a0 Efficiency optimization","text":"<p>We find that if no swaps are performed in a round of \"bubbling,\" the array is already sorted, and we can return the result immediately. 
Thus, we can add a flag variable <code>flag</code> to monitor this situation and return immediately when it occurs.</p> <p>Even after optimization, the worst-case time complexity and average time complexity of bubble sort remain at \\(O(n^2)\\); however, when the input array is completely ordered, it can achieve the best time complexity of \\(O(n)\\).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig bubble_sort.py<pre><code>def bubble_sort_with_flag(nums: list[int]):\n \"\"\"Bubble sort (optimized with flag)\"\"\"\n n = len(nums)\n # Outer loop: unsorted range is [0, i]\n for i in range(n - 1, 0, -1):\n flag = False # Initialize flag\n # Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for j in range(i):\n if nums[j] > nums[j + 1]:\n # Swap nums[j] and nums[j + 1]\n nums[j], nums[j + 1] = nums[j + 1], nums[j]\n flag = True # Record swapped elements\n if not flag:\n break # If no elements were swapped in this round of \"bubbling\", exit\n</code></pre> bubble_sort.cpp<pre><code>/* Bubble sort (optimized with flag) */\nvoid bubbleSortWithFlag(vector<int> &nums) {\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.size() - 1; i > 0; i--) {\n bool flag = false; // Initialize flag\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for (int j = 0; j < i; j++) {\n if (nums[j] > nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n // Use std::swap() here\n swap(nums[j], nums[j + 1]);\n flag = true; // Record swapped elements\n }\n }\n if (!flag)\n break; // If no elements were swapped in this round of \"bubbling\", exit\n }\n}\n</code></pre> bubble_sort.java<pre><code>/* Bubble sort (optimized with flag) */\nvoid bubbleSortWithFlag(int[] nums) {\n // Outer loop: unsorted range is [0, i]\n for (int i = nums.length - 1; i > 0; i--) {\n boolean flag = false; // Initialize flag\n // Inner loop: swap the largest element in the unsorted range [0, i] to the right end of the range\n for 
(int j = 0; j < i; j++) {\n if (nums[j] > nums[j + 1]) {\n // Swap nums[j] and nums[j + 1]\n int tmp = nums[j];\n nums[j] = nums[j + 1];\n nums[j + 1] = tmp;\n flag = true; // Record swapped elements\n }\n }\n if (!flag)\n break; // If no elements were swapped in this round of \"bubbling\", exit\n }\n}\n</code></pre> bubble_sort.cs<pre><code>[class]{bubble_sort}-[func]{BubbleSortWithFlag}\n</code></pre> bubble_sort.go<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.swift<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.js<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.ts<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.dart<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.rs<pre><code>[class]{}-[func]{bubble_sort_with_flag}\n</code></pre> bubble_sort.c<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.kt<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre> bubble_sort.rb<pre><code>[class]{}-[func]{bubble_sort_with_flag}\n</code></pre> bubble_sort.zig<pre><code>[class]{}-[func]{bubbleSortWithFlag}\n</code></pre>"},{"location":"chapter_sorting/bubble_sort/#1133-algorithm-characteristics","title":"11.3.3 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity of \\(O(n^2)\\), adaptive sorting: The length of the array traversed in each round of \"bubbling\" decreases sequentially from \\(n - 1\\), \\(n - 2\\), \\(\\dots\\), \\(2\\), \\(1\\), totaling \\((n - 1) n / 2\\). 
With the introduction of <code>flag</code> optimization, the best time complexity can reach \\(O(n)\\).</li> <li>Space complexity of \\(O(1)\\), in-place sorting: Only a constant amount of extra space is used by pointers \\(i\\) and \\(j\\).</li> <li>Stable sorting: As equal elements are not swapped during the \"bubbling\".</li> </ul>"},{"location":"chapter_sorting/bucket_sort/","title":"11.8 \u00a0 Bucket sort","text":"<p>The previously mentioned sorting algorithms are all \"comparison-based sorting algorithms,\" which sort by comparing the size of elements. Such sorting algorithms cannot surpass a time complexity of \\(O(n \\log n)\\). Next, we will discuss several \"non-comparison sorting algorithms\" that can achieve linear time complexity.</p> <p>Bucket sort is a typical application of the divide-and-conquer strategy. It involves setting up a series of ordered buckets, each corresponding to a range of data, and then distributing the data evenly among these buckets; each bucket is then sorted individually; finally, all the data are merged in the order of the buckets.</p>"},{"location":"chapter_sorting/bucket_sort/#1181-algorithm-process","title":"11.8.1 \u00a0 Algorithm process","text":"<p>Consider an array of length \\(n\\), with elements in the range \\([0, 1)\\). 
The bucket sort process is illustrated in Figure 11-13.</p> <ol> <li>Initialize \\(k\\) buckets and distribute \\(n\\) elements into these \\(k\\) buckets.</li> <li>Sort each bucket individually (using the built-in sorting function of the programming language).</li> <li>Merge the results in the order from the smallest to the largest bucket.</li> </ol> <p></p> <p> Figure 11-13 \u00a0 Bucket sort algorithm process </p> <p>The code is shown as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig bucket_sort.py<pre><code>def bucket_sort(nums: list[float]):\n \"\"\"Bucket sort\"\"\"\n # Initialize k = n/2 buckets, expected to allocate 2 elements per bucket\n k = len(nums) // 2\n buckets = [[] for _ in range(k)]\n # 1. Distribute array elements into various buckets\n for num in nums:\n # Input data range is [0, 1), use num * k to map to index range [0, k-1]\n i = int(num * k)\n # Add num to bucket i\n buckets[i].append(num)\n # 2. Sort each bucket\n for bucket in buckets:\n # Use built-in sorting function, can also replace with other sorting algorithms\n bucket.sort()\n # 3. Traverse buckets to merge results\n i = 0\n for bucket in buckets:\n for num in bucket:\n nums[i] = num\n i += 1\n</code></pre> bucket_sort.cpp<pre><code>/* Bucket sort */\nvoid bucketSort(vector<float> &nums) {\n // Initialize k = n/2 buckets, expected to allocate 2 elements per bucket\n int k = nums.size() / 2;\n vector<vector<float>> buckets(k);\n // 1. Distribute array elements into various buckets\n for (float num : nums) {\n // Input data range is [0, 1), use num * k to map to index range [0, k-1]\n int i = num * k;\n // Add num to bucket i\n buckets[i].push_back(num);\n }\n // 2. Sort each bucket\n for (vector<float> &bucket : buckets) {\n // Use built-in sorting function, can also replace with other sorting algorithms\n sort(bucket.begin(), bucket.end());\n }\n // 3. 
Traverse buckets to merge results\n int i = 0;\n for (vector<float> &bucket : buckets) {\n for (float num : bucket) {\n nums[i++] = num;\n }\n }\n}\n</code></pre> bucket_sort.java<pre><code>/* Bucket sort */\nvoid bucketSort(float[] nums) {\n // Initialize k = n/2 buckets, expected to allocate 2 elements per bucket\n int k = nums.length / 2;\n List<List<Float>> buckets = new ArrayList<>();\n for (int i = 0; i < k; i++) {\n buckets.add(new ArrayList<>());\n }\n // 1. Distribute array elements into various buckets\n for (float num : nums) {\n // Input data range is [0, 1), use num * k to map to index range [0, k-1]\n int i = (int) (num * k);\n // Add num to bucket i\n buckets.get(i).add(num);\n }\n // 2. Sort each bucket\n for (List<Float> bucket : buckets) {\n // Use built-in sorting function, can also replace with other sorting algorithms\n Collections.sort(bucket);\n }\n // 3. Traverse buckets to merge results\n int i = 0;\n for (List<Float> bucket : buckets) {\n for (float num : bucket) {\n nums[i++] = num;\n }\n }\n}\n</code></pre> bucket_sort.cs<pre><code>[class]{bucket_sort}-[func]{BucketSort}\n</code></pre> bucket_sort.go<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.swift<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.js<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.ts<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.dart<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.rs<pre><code>[class]{}-[func]{bucket_sort}\n</code></pre> bucket_sort.c<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.kt<pre><code>[class]{}-[func]{bucketSort}\n</code></pre> bucket_sort.rb<pre><code>[class]{}-[func]{bucket_sort}\n</code></pre> bucket_sort.zig<pre><code>[class]{}-[func]{bucketSort}\n</code></pre>"},{"location":"chapter_sorting/bucket_sort/#1182-algorithm-characteristics","title":"11.8.2 \u00a0 Algorithm characteristics","text":"<p>Bucket sort is suitable for 
handling very large data sets. For example, if the input data includes 1 million elements, and system memory limitations prevent loading all the data at once, you can divide the data into 1,000 buckets and sort each bucket separately before merging the results.</p> <ul> <li>Time complexity is \\(O(n + k)\\): Assuming the elements are evenly distributed across the buckets, the number of elements in each bucket is \\(n/k\\). Assuming sorting a single bucket takes \\(O(n/k \\log(n/k))\\) time, sorting all buckets takes \\(O(n \\log(n/k))\\) time. When the number of buckets \\(k\\) is relatively large, the time complexity tends towards \\(O(n)\\). Merging the results requires traversing all buckets and elements, taking \\(O(n + k)\\) time.</li> <li>Adaptive sorting: In the worst case, all data is distributed into a single bucket, and sorting that bucket takes \\(O(n^2)\\) time.</li> <li>Space complexity is \\(O(n + k)\\), non-in-place sorting: It requires additional space for \\(k\\) buckets and a total of \\(n\\) elements.</li> <li>Whether bucket sort is stable depends on whether the algorithm used to sort elements within the buckets is stable.</li> </ul>"},{"location":"chapter_sorting/bucket_sort/#1183-how-to-achieve-even-distribution","title":"11.8.3 \u00a0 How to achieve even distribution","text":"<p>The theoretical time complexity of bucket sort can reach \\(O(n)\\); the key is to distribute the elements evenly across all buckets, but real data is often not uniformly distributed. For example, suppose we want to evenly distribute all products on Taobao by price range into 10 buckets, but the distribution of product prices is uneven, with many under 100 yuan and few over 1000 yuan. If the price range is evenly divided into 10 intervals, the difference in the number of products in each bucket will be very large.</p> <p>To achieve even distribution, we can initially set a rough dividing line, roughly dividing the data into 3 buckets. 
After the distribution is complete, the buckets with more products can be further divided into 3 buckets, until the number of elements in all buckets is roughly equal.</p> <p>As shown in Figure 11-14, this method essentially creates a recursive tree, aiming to make the leaf node values as even as possible. Of course, you don't have to divide the data into 3 buckets each round; the specific division method can be flexibly chosen based on data characteristics.</p> <p></p> <p> Figure 11-14 \u00a0 Recursive division of buckets </p> <p>If we know the probability distribution of product prices in advance, we can set the price dividing line for each bucket based on the data probability distribution. It is worth noting that it is not necessarily required to specifically calculate the data distribution; it can also be approximated based on data characteristics using some probability model.</p> <p>As shown in Figure 11-15, we assume that product prices follow a normal distribution, allowing us to reasonably set the price intervals, thereby evenly distributing the products into the respective buckets.</p> <p></p> <p> Figure 11-15 \u00a0 Dividing buckets based on probability distribution </p>"},{"location":"chapter_sorting/counting_sort/","title":"11.9 \u00a0 Counting sort","text":"<p>Counting sort achieves sorting by counting the number of elements, typically applied to arrays of integers.</p>"},{"location":"chapter_sorting/counting_sort/#1191-simple-implementation","title":"11.9.1 \u00a0 Simple implementation","text":"<p>Let's start with a simple example. 
Given an array <code>nums</code> of length \\(n\\), where all elements are \"non-negative integers\", the overall process of counting sort is illustrated in Figure 11-16.</p> <ol> <li>Traverse the array to find the maximum number, denoted as \\(m\\), then create an auxiliary array <code>counter</code> of length \\(m + 1\\).</li> <li>Use <code>counter</code> to count the occurrence of each number in <code>nums</code>, where <code>counter[num]</code> corresponds to the occurrence of the number <code>num</code>. The counting method is simple, just traverse <code>nums</code> (suppose the current number is <code>num</code>), and increase <code>counter[num]</code> by \\(1\\) each round.</li> <li>Since the indices of <code>counter</code> are naturally ordered, all numbers are essentially sorted already. Next, we traverse <code>counter</code>, filling <code>nums</code> in ascending order of occurrence.</li> </ol> <p></p> <p> Figure 11-16 \u00a0 Counting sort process </p> <p>The code is shown below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig counting_sort.py<pre><code>def counting_sort_naive(nums: list[int]):\n \"\"\"Counting sort\"\"\"\n # Simple implementation, cannot be used for sorting objects\n # 1. Count the maximum element m in the array\n m = 0\n for num in nums:\n m = max(m, num)\n # 2. Count the occurrence of each digit\n # counter[num] represents the occurrence of num\n counter = [0] * (m + 1)\n for num in nums:\n counter[num] += 1\n # 3. Traverse counter, filling each element back into the original array nums\n i = 0\n for num in range(m + 1):\n for _ in range(counter[num]):\n nums[i] = num\n i += 1\n</code></pre> counting_sort.cpp<pre><code>/* Counting sort */\n// Simple implementation, cannot be used for sorting objects\nvoid countingSortNaive(vector<int> &nums) {\n // 1. Count the maximum element m in the array\n int m = 0;\n for (int num : nums) {\n m = max(m, num);\n }\n // 2. 
Count the occurrence of each digit\n // counter[num] represents the occurrence of num\n vector<int> counter(m + 1, 0);\n for (int num : nums) {\n counter[num]++;\n }\n // 3. Traverse counter, filling each element back into the original array nums\n int i = 0;\n for (int num = 0; num < m + 1; num++) {\n for (int j = 0; j < counter[num]; j++, i++) {\n nums[i] = num;\n }\n }\n}\n</code></pre> counting_sort.java<pre><code>/* Counting sort */\n// Simple implementation, cannot be used for sorting objects\nvoid countingSortNaive(int[] nums) {\n // 1. Count the maximum element m in the array\n int m = 0;\n for (int num : nums) {\n m = Math.max(m, num);\n }\n // 2. Count the occurrence of each digit\n // counter[num] represents the occurrence of num\n int[] counter = new int[m + 1];\n for (int num : nums) {\n counter[num]++;\n }\n // 3. Traverse counter, filling each element back into the original array nums\n int i = 0;\n for (int num = 0; num < m + 1; num++) {\n for (int j = 0; j < counter[num]; j++, i++) {\n nums[i] = num;\n }\n }\n}\n</code></pre> counting_sort.cs<pre><code>[class]{counting_sort}-[func]{CountingSortNaive}\n</code></pre> counting_sort.go<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.swift<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.js<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.ts<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.dart<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.rs<pre><code>[class]{}-[func]{counting_sort_naive}\n</code></pre> counting_sort.c<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.kt<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> counting_sort.rb<pre><code>[class]{}-[func]{counting_sort_naive}\n</code></pre> counting_sort.zig<pre><code>[class]{}-[func]{countingSortNaive}\n</code></pre> <p>Connection between counting sort and bucket sort</p> 
<p>From the perspective of bucket sort, we can consider each index of the counting array <code>counter</code> in counting sort as a bucket, and the process of counting as distributing elements into the corresponding buckets. Essentially, counting sort is a special case of bucket sort for integer data.</p>"},{"location":"chapter_sorting/counting_sort/#1192-complete-implementation","title":"11.9.2 \u00a0 Complete implementation","text":"<p>Astute readers might have noticed that if the input data is an object, the above step <code>3.</code> becomes ineffective. Suppose the input data is a set of product objects and we want to sort the products by their price (a class member variable); the above algorithm can only provide the sorting result for the prices themselves.</p> <p>So how can we get the sorting result for the original data? First, we calculate the \"prefix sum\" of <code>counter</code>. As the name suggests, the prefix sum at index <code>i</code>, <code>prefix[i]</code>, equals the sum of the elements of the array from index \\(0\\) to \\(i\\):</p> \\[ \\text{prefix}[i] = \\sum_{j=0}^i \\text{counter}[j] \\] <p>The prefix sum has a clear meaning: <code>prefix[num] - 1</code> represents the last occurrence index of element <code>num</code> in the result array <code>res</code>. This information is crucial, as it tells us where each element should appear in the result array. Next, we traverse the original array <code>nums</code> in reverse order, and for each element <code>num</code> we perform the following two steps.</p> <ol> <li>Fill <code>num</code> into the array <code>res</code> at the index <code>prefix[num] - 1</code>.</li> <li>Reduce the prefix sum <code>prefix[num]</code> by \\(1\\), thus obtaining the next index to place <code>num</code>.</li> </ol> <p>After the traversal, the array <code>res</code> contains the sorted result, and finally, <code>res</code> replaces the original array <code>nums</code>. 
The complete counting sort process is shown in Figure 11-17.</p> <1><2><3><4><5><6><7><8> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-17 \u00a0 Counting sort process </p> <p>The implementation code of counting sort is shown below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig counting_sort.py<pre><code>def counting_sort(nums: list[int]):\n \"\"\"Counting sort\"\"\"\n # Complete implementation, can sort objects and is a stable sort\n # 1. Count the maximum element m in the array\n m = max(nums)\n # 2. Count the occurrence of each digit\n # counter[num] represents the occurrence of num\n counter = [0] * (m + 1)\n for num in nums:\n counter[num] += 1\n # 3. Calculate the prefix sum of counter, converting \"occurrence count\" to \"tail index\"\n # counter[num]-1 is the last index where num appears in res\n for i in range(m):\n counter[i + 1] += counter[i]\n # 4. Traverse nums in reverse order, placing each element into the result array res\n # Initialize the array res to record results\n n = len(nums)\n res = [0] * n\n for i in range(n - 1, -1, -1):\n num = nums[i]\n res[counter[num] - 1] = num # Place num at the corresponding index\n counter[num] -= 1 # Decrement the prefix sum by 1, getting the next index to place num\n # Use result array res to overwrite the original array nums\n for i in range(n):\n nums[i] = res[i]\n</code></pre> counting_sort.cpp<pre><code>/* Counting sort */\n// Complete implementation, can sort objects and is a stable sort\nvoid countingSort(vector<int> &nums) {\n // 1. Count the maximum element m in the array\n int m = 0;\n for (int num : nums) {\n m = max(m, num);\n }\n // 2. Count the occurrence of each digit\n // counter[num] represents the occurrence of num\n vector<int> counter(m + 1, 0);\n for (int num : nums) {\n counter[num]++;\n }\n // 3. 
Calculate the prefix sum of counter, converting \"occurrence count\" to \"tail index\"\n // counter[num]-1 is the last index where num appears in res\n for (int i = 0; i < m; i++) {\n counter[i + 1] += counter[i];\n }\n // 4. Traverse nums in reverse order, placing each element into the result array res\n // Initialize the array res to record results\n int n = nums.size();\n vector<int> res(n);\n for (int i = n - 1; i >= 0; i--) {\n int num = nums[i];\n res[counter[num] - 1] = num; // Place num at the corresponding index\n counter[num]--; // Decrement the prefix sum by 1, getting the next index to place num\n }\n // Use result array res to overwrite the original array nums\n nums = res;\n}\n</code></pre> counting_sort.java<pre><code>/* Counting sort */\n// Complete implementation, can sort objects and is a stable sort\nvoid countingSort(int[] nums) {\n // 1. Count the maximum element m in the array\n int m = 0;\n for (int num : nums) {\n m = Math.max(m, num);\n }\n // 2. Count the occurrence of each digit\n // counter[num] represents the occurrence of num\n int[] counter = new int[m + 1];\n for (int num : nums) {\n counter[num]++;\n }\n // 3. Calculate the prefix sum of counter, converting \"occurrence count\" to \"tail index\"\n // counter[num]-1 is the last index where num appears in res\n for (int i = 0; i < m; i++) {\n counter[i + 1] += counter[i];\n }\n // 4. 
Traverse nums in reverse order, placing each element into the result array res\n // Initialize the array res to record results\n int n = nums.length;\n int[] res = new int[n];\n for (int i = n - 1; i >= 0; i--) {\n int num = nums[i];\n res[counter[num] - 1] = num; // Place num at the corresponding index\n counter[num]--; // Decrement the prefix sum by 1, getting the next index to place num\n }\n // Use result array res to overwrite the original array nums\n for (int i = 0; i < n; i++) {\n nums[i] = res[i];\n }\n}\n</code></pre> counting_sort.cs<pre><code>[class]{counting_sort}-[func]{CountingSort}\n</code></pre> counting_sort.go<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.swift<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.js<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.ts<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.dart<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.rs<pre><code>[class]{}-[func]{counting_sort}\n</code></pre> counting_sort.c<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.kt<pre><code>[class]{}-[func]{countingSort}\n</code></pre> counting_sort.rb<pre><code>[class]{}-[func]{counting_sort}\n</code></pre> counting_sort.zig<pre><code>[class]{}-[func]{countingSort}\n</code></pre>"},{"location":"chapter_sorting/counting_sort/#1193-algorithm-characteristics","title":"11.9.3 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity is \\(O(n + m)\\), non-adaptive sort: Involves traversing <code>nums</code> and <code>counter</code>, both using linear time. 
Generally, \\(n \\gg m\\), and the time complexity tends towards \\(O(n)\\).</li> <li>Space complexity is \\(O(n + m)\\), non-in-place sort: Utilizes arrays <code>res</code> and <code>counter</code> of lengths \\(n\\) and \\(m\\) respectively.</li> <li>Stable sort: Since elements are filled into <code>res</code> in a \"right-to-left\" order, traversing <code>nums</code> in reverse preserves the relative positions of equal elements, thereby achieving a stable sort. Actually, traversing <code>nums</code> in order can also produce the correct sorting result, but the outcome is unstable.</li> </ul>"},{"location":"chapter_sorting/counting_sort/#1194-limitations","title":"11.9.4 \u00a0 Limitations","text":"<p>By now, you might find counting sort very clever, as it can achieve efficient sorting merely by counting quantities. However, the prerequisites for using counting sort are relatively strict.</p> <p>Counting sort is only suitable for non-negative integers. If you want to apply it to other types of data, you need to ensure that the data can be converted to non-negative integers without changing the relative order of the elements. For example, for an array containing negative integers, you can first add a constant to all numbers, converting them all to non-negative numbers, and then subtract the constant to convert them back after sorting is complete.</p> <p>Counting sort is suitable for large data volumes but small data ranges. For example, in the above example, \\(m\\) should not be too large; otherwise, it will occupy too much space. When \\(n \\ll m\\), counting sort uses \\(O(m)\\) time, which may be slower than \\(O(n \\log n)\\) sorting algorithms.</p>"},{"location":"chapter_sorting/heap_sort/","title":"11.7 \u00a0 Heap sort","text":"<p>Tip</p> <p>Before reading this section, please make sure you have completed the \"Heap\" chapter.</p> <p>Heap sort is an efficient sorting algorithm based on the heap data structure. 
We can implement heap sort using the \"heap creation\" and \"element extraction\" operations we have already learned.</p> <ol> <li>Input the array and establish a min-heap, where the smallest element is at the heap's top.</li> <li>Continuously perform the extraction operation, recording the extracted elements in sequence to obtain a sorted list from smallest to largest.</li> </ol> <p>Although the above method is feasible, it requires an additional array to save the popped elements, which is somewhat space-consuming. In practice, we usually use a more elegant implementation.</p>"},{"location":"chapter_sorting/heap_sort/#1171-algorithm-flow","title":"11.7.1 \u00a0 Algorithm flow","text":"<p>Suppose the array length is \\(n\\); the heap sort process is as follows.</p> <ol> <li>Input the array and establish a max-heap. After completion, the largest element is at the heap's top.</li> <li>Swap the top element of the heap (the first element) with the heap's bottom element (the last element). After the swap, reduce the heap's length by \\(1\\) and increase the count of sorted elements by \\(1\\).</li> <li>Starting from the heap top, perform the sift-down operation from top to bottom. After the sift-down, the heap's property is restored.</li> <li>Repeat steps <code>2.</code> and <code>3.</code> Loop for \\(n - 1\\) rounds to complete the sorting of the array.</li> </ol> <p>Tip</p> <p>In fact, the element extraction operation also includes steps <code>2.</code> and <code>3.</code>, with an additional step of popping the element.</p> <1><2><3><4><5><6><7><8><9><10><11><12> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-12 \u00a0 Heap sort process </p> <p>In the code implementation, we used the sift-down function <code>sift_down()</code> from the \"Heap\" chapter. 
It is important to note that since the heap's length decreases as the maximum element is extracted, we need to add a length parameter \\(n\\) to the <code>sift_down()</code> function to specify the current effective length of the heap. The code is shown below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig heap_sort.py<pre><code>def sift_down(nums: list[int], n: int, i: int):\n \"\"\"Heap length is n, start heapifying node i, from top to bottom\"\"\"\n while True:\n # Determine the largest node among i, l, r, noted as ma\n l = 2 * i + 1\n r = 2 * i + 2\n ma = i\n if l < n and nums[l] > nums[ma]:\n ma = l\n if r < n and nums[r] > nums[ma]:\n ma = r\n # If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if ma == i:\n break\n # Swap two nodes\n nums[i], nums[ma] = nums[ma], nums[i]\n # Loop downwards heapification\n i = ma\n\ndef heap_sort(nums: list[int]):\n \"\"\"Heap sort\"\"\"\n # Build heap operation: heapify all nodes except leaves\n for i in range(len(nums) // 2 - 1, -1, -1):\n sift_down(nums, len(nums), i)\n # Extract the largest element from the heap and repeat for n-1 rounds\n for i in range(len(nums) - 1, 0, -1):\n # Swap the root node with the rightmost leaf node (swap the first element with the last element)\n nums[0], nums[i] = nums[i], nums[0]\n # Start heapifying the root node, from top to bottom\n sift_down(nums, i, 0)\n</code></pre> heap_sort.cpp<pre><code>/* Heap length is n, start heapifying node i, from top to bottom */\nvoid siftDown(vector<int> &nums, int n, int i) {\n while (true) {\n // Determine the largest node among i, l, r, noted as ma\n int l = 2 * i + 1;\n int r = 2 * i + 2;\n int ma = i;\n if (l < n && nums[l] > nums[ma])\n ma = l;\n if (r < n && nums[r] > nums[ma])\n ma = r;\n // If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if (ma == i) {\n break;\n }\n // Swap two nodes\n swap(nums[i], nums[ma]);\n // Loop downwards 
heapification\n i = ma;\n }\n}\n\n/* Heap sort */\nvoid heapSort(vector<int> &nums) {\n // Build heap operation: heapify all nodes except leaves\n for (int i = nums.size() / 2 - 1; i >= 0; --i) {\n siftDown(nums, nums.size(), i);\n }\n // Extract the largest element from the heap and repeat for n-1 rounds\n for (int i = nums.size() - 1; i > 0; --i) {\n // Swap the root node with the rightmost leaf node (swap the first element with the last element)\n swap(nums[0], nums[i]);\n // Start heapifying the root node, from top to bottom\n siftDown(nums, i, 0);\n }\n}\n</code></pre> heap_sort.java<pre><code>/* Heap length is n, start heapifying node i, from top to bottom */\nvoid siftDown(int[] nums, int n, int i) {\n while (true) {\n // Determine the largest node among i, l, r, noted as ma\n int l = 2 * i + 1;\n int r = 2 * i + 2;\n int ma = i;\n if (l < n && nums[l] > nums[ma])\n ma = l;\n if (r < n && nums[r] > nums[ma])\n ma = r;\n // If node i is the largest or indices l, r are out of bounds, no further heapification needed, break\n if (ma == i)\n break;\n // Swap two nodes\n int temp = nums[i];\n nums[i] = nums[ma];\n nums[ma] = temp;\n // Loop downwards heapification\n i = ma;\n }\n}\n\n/* Heap sort */\nvoid heapSort(int[] nums) {\n // Build heap operation: heapify all nodes except leaves\n for (int i = nums.length / 2 - 1; i >= 0; i--) {\n siftDown(nums, nums.length, i);\n }\n // Extract the largest element from the heap and repeat for n-1 rounds\n for (int i = nums.length - 1; i > 0; i--) {\n // Swap the root node with the rightmost leaf node (swap the first element with the last element)\n int tmp = nums[0];\n nums[0] = nums[i];\n nums[i] = tmp;\n // Start heapifying the root node, from top to bottom\n siftDown(nums, i, 0);\n }\n}\n</code></pre> heap_sort.cs<pre><code>[class]{heap_sort}-[func]{SiftDown}\n\n[class]{heap_sort}-[func]{HeapSort}\n</code></pre> heap_sort.go<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> 
heap_sort.swift<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.js<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.ts<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.dart<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.rs<pre><code>[class]{}-[func]{sift_down}\n\n[class]{}-[func]{heap_sort}\n</code></pre> heap_sort.c<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.kt<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre> heap_sort.rb<pre><code>[class]{}-[func]{sift_down}\n\n[class]{}-[func]{heap_sort}\n</code></pre> heap_sort.zig<pre><code>[class]{}-[func]{siftDown}\n\n[class]{}-[func]{heapSort}\n</code></pre>"},{"location":"chapter_sorting/heap_sort/#1172-algorithm-characteristics","title":"11.7.2 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity is \\(O(n \\log n)\\), non-adaptive sort: The heap creation uses \\(O(n)\\) time. Extracting the largest element from the heap takes \\(O(\\log n)\\) time, looping for \\(n - 1\\) rounds.</li> <li>Space complexity is \\(O(1)\\), in-place sort: A few pointer variables use \\(O(1)\\) space. 
The element swapping and heapifying operations are performed on the original array.</li> <li>Non-stable sort: The relative positions of equal elements may change during the swapping of the heap's top and bottom elements.</li> </ul>"},{"location":"chapter_sorting/insertion_sort/","title":"11.4 \u00a0 Insertion sort","text":"<p>Insertion sort is a simple sorting algorithm that works very much like the process of manually sorting a deck of cards.</p> <p>Specifically, we select a pivot element from the unsorted interval, compare it with the elements in the sorted interval to its left, and insert the element into the correct position.</p> <p>Figure 11-6 shows the process of inserting an element into an array. Assuming the pivot element is <code>base</code>, we need to move all elements between the target index and <code>base</code> one position to the right, then assign <code>base</code> to the target index.</p> <p></p> <p> Figure 11-6 \u00a0 Single insertion operation </p>"},{"location":"chapter_sorting/insertion_sort/#1141-algorithm-process","title":"11.4.1 \u00a0 Algorithm process","text":"<p>The overall process of insertion sort is shown in Figure 11-7.</p> <ol> <li>Initially, the first element of the array is sorted.</li> <li>The second element of the array is taken as <code>base</code>, and after inserting it into the correct position, the first two elements of the array are sorted.</li> <li>The third element is taken as <code>base</code>, and after inserting it into the correct position, the first three elements of the array are sorted.</li> <li>And so on, in the last round, the last element is taken as <code>base</code>, and after inserting it into the correct position, all elements are sorted.</li> </ol> <p></p> <p> Figure 11-7 \u00a0 Insertion sort process </p> <p>Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig insertion_sort.py<pre><code>def insertion_sort(nums: list[int]):\n \"\"\"Insertion sort\"\"\"\n # Outer loop: sorted 
range is [0, i-1]\n for i in range(1, len(nums)):\n base = nums[i]\n j = i - 1\n # Inner loop: insert base into the correct position within the sorted range [0, i-1]\n while j >= 0 and nums[j] > base:\n nums[j + 1] = nums[j] # Move nums[j] to the right by one position\n j -= 1\n nums[j + 1] = base # Assign base to the correct position\n</code></pre> insertion_sort.cpp<pre><code>/* Insertion sort */\nvoid insertionSort(vector<int> &nums) {\n // Outer loop: sorted range is [0, i-1]\n for (int i = 1; i < nums.size(); i++) {\n int base = nums[i], j = i - 1;\n // Inner loop: insert base into the correct position within the sorted range [0, i-1]\n while (j >= 0 && nums[j] > base) {\n nums[j + 1] = nums[j]; // Move nums[j] to the right by one position\n j--;\n }\n nums[j + 1] = base; // Assign base to the correct position\n }\n}\n</code></pre> insertion_sort.java<pre><code>/* Insertion sort */\nvoid insertionSort(int[] nums) {\n // Outer loop: sorted range is [0, i-1]\n for (int i = 1; i < nums.length; i++) {\n int base = nums[i], j = i - 1;\n // Inner loop: insert base into the correct position within the sorted range [0, i-1]\n while (j >= 0 && nums[j] > base) {\n nums[j + 1] = nums[j]; // Move nums[j] to the right by one position\n j--;\n }\n nums[j + 1] = base; // Assign base to the correct position\n }\n}\n</code></pre> insertion_sort.cs<pre><code>[class]{insertion_sort}-[func]{InsertionSort}\n</code></pre> insertion_sort.go<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.swift<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.js<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.ts<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.dart<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.rs<pre><code>[class]{}-[func]{insertion_sort}\n</code></pre> insertion_sort.c<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> 
insertion_sort.kt<pre><code>[class]{}-[func]{insertionSort}\n</code></pre> insertion_sort.rb<pre><code>[class]{}-[func]{insertion_sort}\n</code></pre> insertion_sort.zig<pre><code>[class]{}-[func]{insertionSort}\n</code></pre>"},{"location":"chapter_sorting/insertion_sort/#1142-algorithm-characteristics","title":"11.4.2 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity is \\(O(n^2)\\), adaptive sorting: In the worst case, each insertion operation requires \\(n - 1\\), \\(n-2\\), ..., \\(2\\), \\(1\\) loops, summing up to \\((n - 1) n / 2\\), thus the time complexity is \\(O(n^2)\\). In the case of ordered data, the insertion operation will terminate early. When the input array is completely ordered, insertion sort achieves the best time complexity of \\(O(n)\\).</li> <li>Space complexity is \\(O(1)\\), in-place sorting: Pointers \\(i\\) and \\(j\\) use a constant amount of extra space.</li> <li>Stable sorting: During the insertion operation, we insert elements to the right of equal elements, not changing their order.</li> </ul>"},{"location":"chapter_sorting/insertion_sort/#1143-advantages-of-insertion-sort","title":"11.4.3 \u00a0 Advantages of insertion sort","text":"<p>The time complexity of insertion sort is \\(O(n^2)\\), while the time complexity of quicksort, which we will study next, is \\(O(n \\log n)\\). Although insertion sort has a higher time complexity, it is usually faster in cases of small data volumes.</p> <p>This conclusion is similar to that for linear and binary search. Algorithms like quicksort that have a time complexity of \\(O(n \\log n)\\) and are based on the divide-and-conquer strategy often involve more unit operations. 
In cases of small data volumes, the numerical values of \\(n^2\\) and \\(n \\log n\\) are close, so the asymptotic complexity is no longer decisive; instead, the number of unit operations per round plays the dominant role.</p> <p>In fact, many programming languages (such as Java) use insertion sort in their built-in sorting functions. The general approach is: for long arrays, use sorting algorithms based on divide-and-conquer strategies, such as quicksort; for short arrays, use insertion sort directly.</p> <p>Although bubble sort, selection sort, and insertion sort all have a time complexity of \\(O(n^2)\\), in practice, insertion sort is used significantly more frequently than bubble sort and selection sort, mainly for the following reasons.</p> <ul> <li>Bubble sort is based on element swapping, which requires the use of a temporary variable, involving 3 unit operations; insertion sort is based on element assignment, requiring only 1 unit operation. Therefore, the computational overhead of bubble sort is generally higher than that of insertion sort.</li> <li>The time complexity of selection sort is always \\(O(n^2)\\). 
Given a set of partially ordered data, insertion sort is usually more efficient than selection sort.</li> <li>Selection sort is unstable and cannot be applied to multi-level sorting.</li> </ul>"},{"location":"chapter_sorting/merge_sort/","title":"11.6 \u00a0 Merge sort","text":"<p>Merge sort is a sorting algorithm based on the divide-and-conquer strategy, involving the \"divide\" and \"merge\" phases shown in Figure 11-10.</p> <ol> <li>Divide phase: Recursively split the array from the midpoint, transforming the sorting problem of a long array into that of shorter arrays.</li> <li>Merge phase: Stop dividing when the length of the sub-array is 1, start merging, and continuously combine two shorter ordered arrays into one longer ordered array until the process is complete.</li> </ol> <p></p> <p> Figure 11-10 \u00a0 The divide and merge phases of merge sort </p>"},{"location":"chapter_sorting/merge_sort/#1161-algorithm-workflow","title":"11.6.1 \u00a0 Algorithm workflow","text":"<p>As shown in Figure 11-11, the \"divide phase\" recursively splits the array from the midpoint into two sub-arrays from top to bottom.</p> <ol> <li>Calculate the midpoint <code>mid</code>, recursively divide the left sub-array (interval <code>[left, mid]</code>) and the right sub-array (interval <code>[mid + 1, right]</code>).</li> <li>Continue with step <code>1.</code> recursively until the sub-array interval length is 1 to stop.</li> </ol> <p>The \"merge phase\" combines the left and right sub-arrays into a single ordered array from bottom to top. 
Note that merging starts with sub-arrays of length 1, and each sub-array is ordered during the merge phase.</p> <1><2><3><4><5><6><7><8><9><10> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-11 \u00a0 Merge sort process </p> <p>It is observed that the order of recursion in merge sort is consistent with the post-order traversal of a binary tree.</p> <ul> <li>Post-order traversal: First recursively traverse the left subtree, then the right subtree, and finally handle the root node.</li> <li>Merge sort: First recursively handle the left sub-array, then the right sub-array, and finally perform the merge.</li> </ul> <p>The implementation of merge sort is shown in the following code. Note that the interval to be merged in <code>nums</code> is <code>[left, right]</code>, while the corresponding interval in <code>tmp</code> is <code>[0, right - left]</code>.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig merge_sort.py<pre><code>def merge(nums: list[int], left: int, mid: int, right: int):\n \"\"\"Merge left subarray and right subarray\"\"\"\n # Left subarray interval is [left, mid], right subarray interval is [mid+1, right]\n # Create a temporary array tmp to store the merged results\n tmp = [0] * (right - left + 1)\n # Initialize the start indices of the left and right subarrays\n i, j, k = left, mid + 1, 0\n # While both subarrays still have elements, compare and copy the smaller element into the temporary array\n while i <= mid and j <= right:\n if nums[i] <= nums[j]:\n tmp[k] = nums[i]\n i += 1\n else:\n tmp[k] = nums[j]\n j += 1\n k += 1\n # Copy the remaining elements of the left and right subarrays into the temporary array\n while i <= mid:\n tmp[k] = nums[i]\n i += 1\n k += 1\n while j <= right:\n tmp[k] = nums[j]\n j += 1\n k += 1\n # Copy the elements from the temporary array tmp back to the original array nums at the corresponding interval\n for k in range(0, len(tmp)):\n nums[left + k] = tmp[k]\n\ndef 
merge_sort(nums: list[int], left: int, right: int):\n \"\"\"Merge sort\"\"\"\n # Termination condition\n if left >= right:\n return # Terminate recursion when subarray length is 1\n # Partition stage\n mid = (left + right) // 2 # Calculate midpoint\n merge_sort(nums, left, mid) # Recursively process the left subarray\n merge_sort(nums, mid + 1, right) # Recursively process the right subarray\n # Merge stage\n merge(nums, left, mid, right)\n</code></pre> merge_sort.cpp<pre><code>/* Merge left subarray and right subarray */\nvoid merge(vector<int> &nums, int left, int mid, int right) {\n // Left subarray interval is [left, mid], right subarray interval is [mid+1, right]\n // Create a temporary array tmp to store the merged results\n vector<int> tmp(right - left + 1);\n // Initialize the start indices of the left and right subarrays\n int i = left, j = mid + 1, k = 0;\n // While both subarrays still have elements, compare and copy the smaller element into the temporary array\n while (i <= mid && j <= right) {\n if (nums[i] <= nums[j])\n tmp[k++] = nums[i++];\n else\n tmp[k++] = nums[j++];\n }\n // Copy the remaining elements of the left and right subarrays into the temporary array\n while (i <= mid) {\n tmp[k++] = nums[i++];\n }\n while (j <= right) {\n tmp[k++] = nums[j++];\n }\n // Copy the elements from the temporary array tmp back to the original array nums at the corresponding interval\n for (k = 0; k < tmp.size(); k++) {\n nums[left + k] = tmp[k];\n }\n}\n\n/* Merge sort */\nvoid mergeSort(vector<int> &nums, int left, int right) {\n // Termination condition\n if (left >= right)\n return; // Terminate recursion when subarray length is 1\n // Partition stage\n int mid = (left + right) / 2; // Calculate midpoint\n mergeSort(nums, left, mid); // Recursively process the left subarray\n mergeSort(nums, mid + 1, right); // Recursively process the right subarray\n // Merge stage\n merge(nums, left, mid, right);\n}\n</code></pre> merge_sort.java<pre><code>/* Merge left 
subarray and right subarray */\nvoid merge(int[] nums, int left, int mid, int right) {\n // Left subarray interval is [left, mid], right subarray interval is [mid+1, right]\n // Create a temporary array tmp to store the merged results\n int[] tmp = new int[right - left + 1];\n // Initialize the start indices of the left and right subarrays\n int i = left, j = mid + 1, k = 0;\n // While both subarrays still have elements, compare and copy the smaller element into the temporary array\n while (i <= mid && j <= right) {\n if (nums[i] <= nums[j])\n tmp[k++] = nums[i++];\n else\n tmp[k++] = nums[j++];\n }\n // Copy the remaining elements of the left and right subarrays into the temporary array\n while (i <= mid) {\n tmp[k++] = nums[i++];\n }\n while (j <= right) {\n tmp[k++] = nums[j++];\n }\n // Copy the elements from the temporary array tmp back to the original array nums at the corresponding interval\n for (k = 0; k < tmp.length; k++) {\n nums[left + k] = tmp[k];\n }\n}\n\n/* Merge sort */\nvoid mergeSort(int[] nums, int left, int right) {\n // Termination condition\n if (left >= right)\n return; // Terminate recursion when subarray length is 1\n // Partition stage\n int mid = (left + right) / 2; // Calculate midpoint\n mergeSort(nums, left, mid); // Recursively process the left subarray\n mergeSort(nums, mid + 1, right); // Recursively process the right subarray\n // Merge stage\n merge(nums, left, mid, right);\n}\n</code></pre> merge_sort.cs<pre><code>[class]{merge_sort}-[func]{Merge}\n\n[class]{merge_sort}-[func]{MergeSort}\n</code></pre> merge_sort.go<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.swift<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.js<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.ts<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> 
merge_sort.dart<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.rs<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{merge_sort}\n</code></pre> merge_sort.c<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.kt<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre> merge_sort.rb<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{merge_sort}\n</code></pre> merge_sort.zig<pre><code>[class]{}-[func]{merge}\n\n[class]{}-[func]{mergeSort}\n</code></pre>"},{"location":"chapter_sorting/merge_sort/#1162-algorithm-characteristics","title":"11.6.2 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity of \\(O(n \\log n)\\), non-adaptive sort: The division creates a recursion tree of height \\(\\log n\\), with each layer merging a total of \\(n\\) operations, resulting in an overall time complexity of \\(O(n \\log n)\\).</li> <li>Space complexity of \\(O(n)\\), non-in-place sort: The recursion depth is \\(\\log n\\), using \\(O(\\log n)\\) stack frame space. 
The merging operation requires auxiliary arrays, using an additional space of \\(O(n)\\).</li> <li>Stable sort: During the merging process, the order of equal elements remains unchanged.</li> </ul>"},{"location":"chapter_sorting/merge_sort/#1163-linked-list-sorting","title":"11.6.3 \u00a0 Linked list sorting","text":"<p>For linked lists, merge sort has significant advantages over other sorting algorithms, optimizing the space complexity of the linked list sorting task to \\(O(1)\\).</p> <ul> <li>Divide phase: \"Iteration\" can be used instead of \"recursion\" to perform the linked list division work, thus saving the stack frame space used by recursion.</li> <li>Merge phase: In linked lists, node addition and deletion operations can be achieved by changing references (pointers), so no extra lists need to be created during the merge phase (combining two short ordered lists into one long ordered list).</li> </ul> <p>The implementation details are relatively complex, and interested readers can consult related materials to learn more.</p>"},{"location":"chapter_sorting/quick_sort/","title":"11.5 \u00a0 Quick sort","text":"<p>Quick sort is a sorting algorithm based on the divide and conquer strategy, known for its efficiency and wide application.</p> <p>The core operation of quick sort is \"pivot partitioning,\" which aims to select an element from the array as the \"pivot,\" move all elements smaller than the pivot to its left, and move all elements greater than the pivot to its right. 
Specifically, the pivot partitioning process is illustrated in Figure 11-8.</p> <ol> <li>Select the leftmost element of the array as the pivot, and initialize two pointers <code>i</code> and <code>j</code> at both ends of the array.</li> <li>Set up a loop where each round uses <code>i</code> (<code>j</code>) to find the first element larger (smaller) than the pivot, then swap these two elements.</li> <li>Repeat step <code>2.</code> until <code>i</code> and <code>j</code> meet, finally swap the pivot to the boundary between the two sub-arrays.</li> </ol> <1><2><3><4><5><6><7><8><9> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-8 \u00a0 Pivot division process </p> <p>After the pivot partitioning, the original array is divided into three parts: left sub-array, pivot, and right sub-array, satisfying \"any element in the left sub-array \\(\\leq\\) pivot \\(\\leq\\) any element in the right sub-array.\" Therefore, we only need to sort these two sub-arrays next.</p> <p>Quick sort's divide and conquer strategy</p> <p>The essence of pivot partitioning is to simplify a longer array's sorting problem into two shorter arrays' sorting problems.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig quick_sort.py<pre><code>def partition(self, nums: list[int], left: int, right: int) -> int:\n \"\"\"Partition\"\"\"\n # Use nums[left] as the pivot\n i, j = left, right\n while i < j:\n while i < j and nums[j] >= nums[left]:\n j -= 1 # Search from right to left for the first element smaller than the pivot\n while i < j and nums[i] <= nums[left]:\n i += 1 # Search from left to right for the first element greater than the pivot\n # Swap elements\n nums[i], nums[j] = nums[j], nums[i]\n # Swap the pivot to the boundary between the two subarrays\n nums[i], nums[left] = nums[left], nums[i]\n return i # Return the index of the pivot\n</code></pre> quick_sort.cpp<pre><code>/* Swap elements */\nvoid swap(vector<int> &nums, int i, int j) {\n int tmp = 
nums[i];\n nums[i] = nums[j];\n nums[j] = tmp;\n}\n\n/* Partition */\nint partition(vector<int> &nums, int left, int right) {\n // Use nums[left] as the pivot\n int i = left, j = right;\n while (i < j) {\n while (i < j && nums[j] >= nums[left])\n j--; // Search from right to left for the first element smaller than the pivot\n while (i < j && nums[i] <= nums[left])\n i++; // Search from left to right for the first element greater than the pivot\n swap(nums, i, j); // Swap these two elements\n }\n swap(nums, i, left); // Swap the pivot to the boundary between the two subarrays\n return i; // Return the index of the pivot\n}\n</code></pre> quick_sort.java<pre><code>/* Swap elements */\nvoid swap(int[] nums, int i, int j) {\n int tmp = nums[i];\n nums[i] = nums[j];\n nums[j] = tmp;\n}\n\n/* Partition */\nint partition(int[] nums, int left, int right) {\n // Use nums[left] as the pivot\n int i = left, j = right;\n while (i < j) {\n while (i < j && nums[j] >= nums[left])\n j--; // Search from right to left for the first element smaller than the pivot\n while (i < j && nums[i] <= nums[left])\n i++; // Search from left to right for the first element greater than the pivot\n swap(nums, i, j); // Swap these two elements\n }\n swap(nums, i, left); // Swap the pivot to the boundary between the two subarrays\n return i; // Return the index of the pivot\n}\n</code></pre> quick_sort.cs<pre><code>[class]{quickSort}-[func]{Swap}\n\n[class]{quickSort}-[func]{Partition}\n</code></pre> quick_sort.go<pre><code>[class]{quickSort}-[func]{partition}\n</code></pre> quick_sort.swift<pre><code>[class]{}-[func]{partition}\n</code></pre> quick_sort.js<pre><code>[class]{QuickSort}-[func]{swap}\n\n[class]{QuickSort}-[func]{partition}\n</code></pre> quick_sort.ts<pre><code>[class]{QuickSort}-[func]{swap}\n\n[class]{QuickSort}-[func]{partition}\n</code></pre> quick_sort.dart<pre><code>[class]{QuickSort}-[func]{_swap}\n\n[class]{QuickSort}-[func]{_partition}\n</code></pre> 
quick_sort.rs<pre><code>[class]{QuickSort}-[func]{partition}\n</code></pre> quick_sort.c<pre><code>[class]{}-[func]{swap}\n\n[class]{}-[func]{partition}\n</code></pre> quick_sort.kt<pre><code>[class]{}-[func]{swap}\n\n[class]{}-[func]{partition}\n</code></pre> quick_sort.rb<pre><code>[class]{QuickSort}-[func]{partition}\n</code></pre> quick_sort.zig<pre><code>[class]{QuickSort}-[func]{swap}\n\n[class]{QuickSort}-[func]{partition}\n</code></pre>"},{"location":"chapter_sorting/quick_sort/#1151-algorithm-process","title":"11.5.1 \u00a0 Algorithm process","text":"<p>The overall process of quick sort is shown in Figure 11-9.</p> <ol> <li>First, perform a \"pivot partitioning\" on the original array to obtain the unsorted left and right sub-arrays.</li> <li>Then, recursively perform \"pivot partitioning\" on both the left and right sub-arrays.</li> <li>Continue recursively until the sub-array length reaches 1, thus completing the sorting of the entire array.</li> </ol> <p></p> <p> Figure 11-9 \u00a0 Quick sort process </p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig quick_sort.py<pre><code>def quick_sort(self, nums: list[int], left: int, right: int):\n \"\"\"Quick sort\"\"\"\n # Terminate recursion when subarray length is 1\n if left >= right:\n return\n # Partition\n pivot = self.partition(nums, left, right)\n # Recursively process the left subarray and right subarray\n self.quick_sort(nums, left, pivot - 1)\n self.quick_sort(nums, pivot + 1, right)\n</code></pre> quick_sort.cpp<pre><code>/* Quick sort */\nvoid quickSort(vector<int> &nums, int left, int right) {\n // Terminate recursion when subarray length is 1\n if (left >= right)\n return;\n // Partition\n int pivot = partition(nums, left, right);\n // Recursively process the left subarray and right subarray\n quickSort(nums, left, pivot - 1);\n quickSort(nums, pivot + 1, right);\n}\n</code></pre> quick_sort.java<pre><code>/* Quick sort */\nvoid quickSort(int[] nums, int left, int right) {\n // Terminate 
recursion when subarray length is 1\n if (left >= right)\n return;\n // Partition\n int pivot = partition(nums, left, right);\n // Recursively process the left subarray and right subarray\n quickSort(nums, left, pivot - 1);\n quickSort(nums, pivot + 1, right);\n}\n</code></pre> quick_sort.cs<pre><code>[class]{quickSort}-[func]{QuickSort}\n</code></pre> quick_sort.go<pre><code>[class]{quickSort}-[func]{quickSort}\n</code></pre> quick_sort.swift<pre><code>[class]{}-[func]{quickSort}\n</code></pre> quick_sort.js<pre><code>[class]{QuickSort}-[func]{quickSort}\n</code></pre> quick_sort.ts<pre><code>[class]{QuickSort}-[func]{quickSort}\n</code></pre> quick_sort.dart<pre><code>[class]{QuickSort}-[func]{quickSort}\n</code></pre> quick_sort.rs<pre><code>[class]{QuickSort}-[func]{quick_sort}\n</code></pre> quick_sort.c<pre><code>[class]{}-[func]{quickSort}\n</code></pre> quick_sort.kt<pre><code>[class]{}-[func]{quickSort}\n</code></pre> quick_sort.rb<pre><code>[class]{QuickSort}-[func]{quick_sort}\n</code></pre> quick_sort.zig<pre><code>[class]{QuickSort}-[func]{quickSort}\n</code></pre>"},{"location":"chapter_sorting/quick_sort/#1152-algorithm-features","title":"11.5.2 \u00a0 Algorithm features","text":"<ul> <li>Time complexity of \\(O(n \\log n)\\), adaptive sorting: In average cases, the recursive levels of pivot partitioning are \\(\\log n\\), and the total number of loops per level is \\(n\\), using \\(O(n \\log n)\\) time overall. In the worst case, each round of pivot partitioning divides an array of length \\(n\\) into two sub-arrays of lengths \\(0\\) and \\(n - 1\\), reaching \\(n\\) recursive levels, and using \\(O(n^2)\\) time overall.</li> <li>Space complexity of \\(O(n)\\), in-place sorting: In completely reversed input arrays, reaching the worst recursion depth of \\(n\\), using \\(O(n)\\) stack frame space. 
The sorting operation is performed on the original array without the aid of additional arrays.</li> <li>Non-stable sorting: In the final step of pivot partitioning, the pivot may be swapped to the right of equal elements.</li> </ul>"},{"location":"chapter_sorting/quick_sort/#1153-why-is-quick-sort-fast","title":"11.5.3 \u00a0 Why is quick sort fast","text":"<p>As its name suggests, quick sort should have certain advantages in efficiency. Although the average time complexity of quick sort is the same as that of \"merge sort\" and \"heap sort,\" quick sort is generally more efficient, mainly for the following reasons.</p> <ul> <li>Low probability of worst-case scenarios: Although the worst-case time complexity of quick sort is \\(O(n^2)\\), which makes it less reliable than merge sort, in most cases quick sort operates at a time complexity of \\(O(n \\log n)\\).</li> <li>High cache usage efficiency: During the pivot partitioning operation, the system can load the entire sub-array into the cache, thus accessing elements more efficiently. In contrast, algorithms like \"heap sort\" need to access elements in a jumping manner, lacking this feature.</li> <li>Small constant coefficient of complexity: Among the mentioned algorithms, quick sort has the lowest total number of comparisons, assignments, and swaps. This is similar to why \"insertion sort\" is faster than \"bubble sort.\"</li> </ul>
If this recursion continues, each round of pivot partitioning will have a sub-array length of \\(0\\), and the divide and conquer strategy fails, degrading quick sort to a form similar to \"bubble sort.\"</p> <p>To avoid this situation, we can optimize the strategy for selecting the pivot in the pivot partitioning. For instance, we can randomly select an element as the pivot. However, if luck is not on our side, and we keep selecting suboptimal pivots, the efficiency is still not satisfactory.</p> <p>It's important to note that programming languages usually generate \"pseudo-random numbers\". If we construct a specific test case for a pseudo-random number sequence, the efficiency of quick sort may still degrade.</p> <p>For further improvement, we can select three candidate elements (usually the first, last, and midpoint elements of the array), and use the median of these three candidate elements as the pivot. This significantly increases the probability that the pivot is \"neither too small nor too large\". Of course, we can also select more candidate elements to further enhance the algorithm's robustness. 
Using this method significantly reduces the probability of time complexity degradation to \\(O(n^2)\\).</p> <p>Sample code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig quick_sort.py<pre><code>def median_three(self, nums: list[int], left: int, mid: int, right: int) -> int:\n \"\"\"Select the median of three candidate elements\"\"\"\n l, m, r = nums[left], nums[mid], nums[right]\n if (l <= m <= r) or (r <= m <= l):\n return mid # m is between l and r\n if (m <= l <= r) or (r <= l <= m):\n return left # l is between m and r\n return right\n\ndef partition(self, nums: list[int], left: int, right: int) -> int:\n \"\"\"Partition (median of three)\"\"\"\n # Use nums[left] as the pivot\n med = self.median_three(nums, left, (left + right) // 2, right)\n # Swap the median to the array's leftmost position\n nums[left], nums[med] = nums[med], nums[left]\n # Use nums[left] as the pivot\n i, j = left, right\n while i < j:\n while i < j and nums[j] >= nums[left]:\n j -= 1 # Search from right to left for the first element smaller than the pivot\n while i < j and nums[i] <= nums[left]:\n i += 1 # Search from left to right for the first element greater than the pivot\n # Swap elements\n nums[i], nums[j] = nums[j], nums[i]\n # Swap the pivot to the boundary between the two subarrays\n nums[i], nums[left] = nums[left], nums[i]\n return i # Return the index of the pivot\n</code></pre> quick_sort.cpp<pre><code>/* Select the median of three candidate elements */\nint medianThree(vector<int> &nums, int left, int mid, int right) {\n int l = nums[left], m = nums[mid], r = nums[right];\n if ((l <= m && m <= r) || (r <= m && m <= l))\n return mid; // m is between l and r\n if ((m <= l && l <= r) || (r <= l && l <= m))\n return left; // l is between m and r\n return right;\n}\n\n/* Partition (median of three) */\nint partition(vector<int> &nums, int left, int right) {\n // Select the median of three candidate elements\n int med = medianThree(nums, left, (left + right) / 
2, right);\n // Swap the median to the array's leftmost position\n swap(nums, left, med);\n // Use nums[left] as the pivot\n int i = left, j = right;\n while (i < j) {\n while (i < j && nums[j] >= nums[left])\n j--; // Search from right to left for the first element smaller than the pivot\n while (i < j && nums[i] <= nums[left])\n i++; // Search from left to right for the first element greater than the pivot\n swap(nums, i, j); // Swap these two elements\n }\n swap(nums, i, left); // Swap the pivot to the boundary between the two subarrays\n return i; // Return the index of the pivot\n}\n</code></pre> quick_sort.java<pre><code>/* Select the median of three candidate elements */\nint medianThree(int[] nums, int left, int mid, int right) {\n int l = nums[left], m = nums[mid], r = nums[right];\n if ((l <= m && m <= r) || (r <= m && m <= l))\n return mid; // m is between l and r\n if ((m <= l && l <= r) || (r <= l && l <= m))\n return left; // l is between m and r\n return right;\n}\n\n/* Partition (median of three) */\nint partition(int[] nums, int left, int right) {\n // Select the median of three candidate elements\n int med = medianThree(nums, left, (left + right) / 2, right);\n // Swap the median to the array's leftmost position\n swap(nums, left, med);\n // Use nums[left] as the pivot\n int i = left, j = right;\n while (i < j) {\n while (i < j && nums[j] >= nums[left])\n j--; // Search from right to left for the first element smaller than the pivot\n while (i < j && nums[i] <= nums[left])\n i++; // Search from left to right for the first element greater than the pivot\n swap(nums, i, j); // Swap these two elements\n }\n swap(nums, i, left); // Swap the pivot to the boundary between the two subarrays\n return i; // Return the index of the pivot\n}\n</code></pre> quick_sort.cs<pre><code>[class]{QuickSortMedian}-[func]{MedianThree}\n\n[class]{QuickSortMedian}-[func]{Partition}\n</code></pre> 
quick_sort.go<pre><code>[class]{quickSortMedian}-[func]{medianThree}\n\n[class]{quickSortMedian}-[func]{partition}\n</code></pre> quick_sort.swift<pre><code>[class]{}-[func]{medianThree}\n\n[class]{}-[func]{partitionMedian}\n</code></pre> quick_sort.js<pre><code>[class]{QuickSortMedian}-[func]{medianThree}\n\n[class]{QuickSortMedian}-[func]{partition}\n</code></pre> quick_sort.ts<pre><code>[class]{QuickSortMedian}-[func]{medianThree}\n\n[class]{QuickSortMedian}-[func]{partition}\n</code></pre> quick_sort.dart<pre><code>[class]{QuickSortMedian}-[func]{_medianThree}\n\n[class]{QuickSortMedian}-[func]{_partition}\n</code></pre> quick_sort.rs<pre><code>[class]{QuickSortMedian}-[func]{median_three}\n\n[class]{QuickSortMedian}-[func]{partition}\n</code></pre> quick_sort.c<pre><code>[class]{}-[func]{medianThree}\n\n[class]{}-[func]{partitionMedian}\n</code></pre> quick_sort.kt<pre><code>[class]{}-[func]{medianThree}\n\n[class]{}-[func]{partitionMedian}\n</code></pre> quick_sort.rb<pre><code>[class]{QuickSortMedian}-[func]{median_three}\n\n[class]{QuickSortMedian}-[func]{partition}\n</code></pre> quick_sort.zig<pre><code>[class]{QuickSortMedian}-[func]{medianThree}\n\n[class]{QuickSortMedian}-[func]{partition}\n</code></pre>"},{"location":"chapter_sorting/quick_sort/#1155-tail-recursion-optimization","title":"11.5.5 \u00a0 Tail recursion optimization","text":"<p>Under certain inputs, quick sort may occupy more space. 
For a completely ordered input array, suppose the sub-array length at some level of recursion is \\(m\\). Each round of pivot partitioning then produces a left sub-array of length \\(0\\) and a right sub-array of length \\(m - 1\\), meaning that each recursive call reduces the problem size by only one element. As a result, the height of the recursion tree can reach \\(n - 1\\), requiring \\(O(n)\\) of stack frame space.</p> <p>To prevent the accumulation of stack frame space, we can compare the lengths of the two sub-arrays after each round of pivot partitioning, and only recursively sort the shorter sub-array. Since the length of the shorter sub-array will not exceed \\(n / 2\\), this method ensures that the recursion depth does not exceed \\(\\log n\\), thus optimizing the worst space complexity to \\(O(\\log n)\\). The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig quick_sort.py<pre><code>def quick_sort(self, nums: list[int], left: int, right: int):\n \"\"\"Quick sort (tail recursion optimization)\"\"\"\n # Terminate when subarray length is 1\n while left < right:\n # Partition operation\n pivot = self.partition(nums, left, right)\n # Perform quick sort on the shorter of the two subarrays\n if pivot - left < right - pivot:\n self.quick_sort(nums, left, pivot - 1) # Recursively sort the left subarray\n left = pivot + 1 # Remaining unsorted interval is [pivot + 1, right]\n else:\n self.quick_sort(nums, pivot + 1, right) # Recursively sort the right subarray\n right = pivot - 1 # Remaining unsorted interval is [left, pivot - 1]\n</code></pre> quick_sort.cpp<pre><code>/* Quick sort (tail recursion optimization) */\nvoid quickSort(vector<int> &nums, int left, int right) {\n // Terminate when subarray length is 1\n while (left < right) {\n // Partition operation\n int pivot = partition(nums, left, right);\n // Perform quick sort on the shorter of the two subarrays\n if (pivot - left < right - pivot) {\n quickSort(nums, left, pivot - 1); // Recursively sort the left 
subarray\n left = pivot + 1; // Remaining unsorted interval is [pivot + 1, right]\n } else {\n quickSort(nums, pivot + 1, right); // Recursively sort the right subarray\n right = pivot - 1; // Remaining unsorted interval is [left, pivot - 1]\n }\n }\n}\n</code></pre> quick_sort.java<pre><code>/* Quick sort (tail recursion optimization) */\nvoid quickSort(int[] nums, int left, int right) {\n // Terminate when subarray length is 1\n while (left < right) {\n // Partition operation\n int pivot = partition(nums, left, right);\n // Perform quick sort on the shorter of the two subarrays\n if (pivot - left < right - pivot) {\n quickSort(nums, left, pivot - 1); // Recursively sort the left subarray\n left = pivot + 1; // Remaining unsorted interval is [pivot + 1, right]\n } else {\n quickSort(nums, pivot + 1, right); // Recursively sort the right subarray\n right = pivot - 1; // Remaining unsorted interval is [left, pivot - 1]\n }\n }\n}\n</code></pre> quick_sort.cs<pre><code>[class]{QuickSortTailCall}-[func]{QuickSort}\n</code></pre> quick_sort.go<pre><code>[class]{quickSortTailCall}-[func]{quickSort}\n</code></pre> quick_sort.swift<pre><code>[class]{}-[func]{quickSortTailCall}\n</code></pre> quick_sort.js<pre><code>[class]{QuickSortTailCall}-[func]{quickSort}\n</code></pre> quick_sort.ts<pre><code>[class]{QuickSortTailCall}-[func]{quickSort}\n</code></pre> quick_sort.dart<pre><code>[class]{QuickSortTailCall}-[func]{quickSort}\n</code></pre> quick_sort.rs<pre><code>[class]{QuickSortTailCall}-[func]{quick_sort}\n</code></pre> quick_sort.c<pre><code>[class]{}-[func]{quickSortTailCall}\n</code></pre> quick_sort.kt<pre><code>[class]{}-[func]{quickSortTailCall}\n</code></pre> quick_sort.rb<pre><code>[class]{QuickSortTailCall}-[func]{quick_sort}\n</code></pre> quick_sort.zig<pre><code>[class]{QuickSortTailCall}-[func]{quickSort}\n</code></pre>"},{"location":"chapter_sorting/radix_sort/","title":"11.10 \u00a0 Radix sort","text":"<p>The previous section introduced counting sort, 
which is suitable for scenarios where the data volume \\(n\\) is large but the data range \\(m\\) is small. Suppose we need to sort \\(n = 10^6\\) student IDs, where each ID is an \\(8\\)-digit number. This means the data range \\(m = 10^8\\) is very large, requiring a significant amount of memory space for counting sort, while radix sort can avoid this situation.</p> <p>Radix sort shares the core idea with counting sort, which also sorts by counting the frequency of elements. Building on this, radix sort utilizes the progressive relationship between the digits of numbers, sorting each digit in turn to achieve the final sorted order.</p>"},{"location":"chapter_sorting/radix_sort/#11101-algorithm-process","title":"11.10.1 \u00a0 Algorithm process","text":"<p>Taking the student ID data as an example, assuming the least significant digit is the \\(1^{st}\\) and the most significant is the \\(8^{th}\\), the radix sort process is illustrated in Figure 11-18.</p> <ol> <li>Initialize digit \\(k = 1\\).</li> <li>Perform \"counting sort\" on the \\(k^{th}\\) digit of the student IDs. After completion, the data will be sorted from smallest to largest based on the \\(k^{th}\\) digit.</li> <li>Increment \\(k\\) by \\(1\\), then return to step <code>2.</code> and continue iterating until all digits have been sorted, then the process ends.</li> </ol> <p></p> <p> Figure 11-18 \u00a0 Radix sort algorithm process </p> <p>Below we dissect the code implementation. For a number \\(x\\) in base \\(d\\), to obtain its \\(k^{th}\\) digit \\(x_k\\), the following calculation formula can be used:</p> \\[ x_k = \\lfloor\\frac{x}{d^{k-1}}\\rfloor \\bmod d \\] <p>Where \\(\\lfloor a \\rfloor\\) denotes rounding down the floating point number \\(a\\), and \\(\\bmod \\: d\\) denotes taking the modulus of \\(d\\). 
For student ID data, \\(d = 10\\) and \\(k \\in [1, 8]\\).</p> <p>Additionally, we need to slightly modify the counting sort code to allow sorting based on the \\(k^{th}\\) digit:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig radix_sort.py<pre><code>def digit(num: int, exp: int) -> int:\n \"\"\"Get the k-th digit of element num, where exp = 10^(k-1)\"\"\"\n # Passing exp instead of k can avoid repeated expensive exponentiation here\n return (num // exp) % 10\n\ndef counting_sort_digit(nums: list[int], exp: int):\n \"\"\"Counting sort (based on nums k-th digit)\"\"\"\n # Decimal digit range is 0~9, therefore need a bucket array of length 10\n counter = [0] * 10\n n = len(nums)\n # Count the occurrence of digits 0~9\n for i in range(n):\n d = digit(nums[i], exp) # Get the k-th digit of nums[i], noted as d\n counter[d] += 1 # Count the occurrence of digit d\n # Calculate prefix sum, converting \"occurrence count\" into \"array index\"\n for i in range(1, 10):\n counter[i] += counter[i - 1]\n # Traverse in reverse, based on bucket statistics, place each element into res\n res = [0] * n\n for i in range(n - 1, -1, -1):\n d = digit(nums[i], exp)\n j = counter[d] - 1 # Get the index j for d in the array\n res[j] = nums[i] # Place the current element at index j\n counter[d] -= 1 # Decrease the count of d by 1\n # Use result to overwrite the original array nums\n for i in range(n):\n nums[i] = res[i]\n\ndef radix_sort(nums: list[int]):\n \"\"\"Radix sort\"\"\"\n # Get the maximum element of the array, used to determine the maximum number of digits\n m = max(nums)\n # Traverse from the lowest to the highest digit\n exp = 1\n while exp <= m:\n # Perform counting sort on the k-th digit of array elements\n # k = 1 -> exp = 1\n # k = 2 -> exp = 10\n # i.e., exp = 10^(k-1)\n counting_sort_digit(nums, exp)\n exp *= 10\n</code></pre> radix_sort.cpp<pre><code>/* Get the k-th digit of element num, where exp = 10^(k-1) */\nint digit(int num, int exp) {\n // Passing exp instead 
of k can avoid repeated expensive exponentiation here\n return (num / exp) % 10;\n}\n\n/* Counting sort (based on nums k-th digit) */\nvoid countingSortDigit(vector<int> &nums, int exp) {\n // Decimal digit range is 0~9, therefore need a bucket array of length 10\n vector<int> counter(10, 0);\n int n = nums.size();\n // Count the occurrence of digits 0~9\n for (int i = 0; i < n; i++) {\n int d = digit(nums[i], exp); // Get the k-th digit of nums[i], noted as d\n counter[d]++; // Count the occurrence of digit d\n }\n // Calculate prefix sum, converting \"occurrence count\" into \"array index\"\n for (int i = 1; i < 10; i++) {\n counter[i] += counter[i - 1];\n }\n // Traverse in reverse, based on bucket statistics, place each element into res\n vector<int> res(n, 0);\n for (int i = n - 1; i >= 0; i--) {\n int d = digit(nums[i], exp);\n int j = counter[d] - 1; // Get the index j for d in the array\n res[j] = nums[i]; // Place the current element at index j\n counter[d]--; // Decrease the count of d by 1\n }\n // Use result to overwrite the original array nums\n for (int i = 0; i < n; i++)\n nums[i] = res[i];\n}\n\n/* Radix sort */\nvoid radixSort(vector<int> &nums) {\n // Get the maximum element of the array, used to determine the maximum number of digits\n int m = *max_element(nums.begin(), nums.end());\n // Traverse from the lowest to the highest digit\n for (int exp = 1; exp <= m; exp *= 10)\n // Perform counting sort on the k-th digit of array elements\n // k = 1 -> exp = 1\n // k = 2 -> exp = 10\n // i.e., exp = 10^(k-1)\n countingSortDigit(nums, exp);\n}\n</code></pre> radix_sort.java<pre><code>/* Get the k-th digit of element num, where exp = 10^(k-1) */\nint digit(int num, int exp) {\n // Passing exp instead of k can avoid repeated expensive exponentiation here\n return (num / exp) % 10;\n}\n\n/* Counting sort (based on nums k-th digit) */\nvoid countingSortDigit(int[] nums, int exp) {\n // Decimal digit range is 0~9, therefore need a bucket array of length 
10\n int[] counter = new int[10];\n int n = nums.length;\n // Count the occurrence of digits 0~9\n for (int i = 0; i < n; i++) {\n int d = digit(nums[i], exp); // Get the k-th digit of nums[i], noted as d\n counter[d]++; // Count the occurrence of digit d\n }\n // Calculate prefix sum, converting \"occurrence count\" into \"array index\"\n for (int i = 1; i < 10; i++) {\n counter[i] += counter[i - 1];\n }\n // Traverse in reverse, based on bucket statistics, place each element into res\n int[] res = new int[n];\n for (int i = n - 1; i >= 0; i--) {\n int d = digit(nums[i], exp);\n int j = counter[d] - 1; // Get the index j for d in the array\n res[j] = nums[i]; // Place the current element at index j\n counter[d]--; // Decrease the count of d by 1\n }\n // Use result to overwrite the original array nums\n for (int i = 0; i < n; i++)\n nums[i] = res[i];\n}\n\n/* Radix sort */\nvoid radixSort(int[] nums) {\n // Get the maximum element of the array, used to determine the maximum number of digits\n int m = Integer.MIN_VALUE;\n for (int num : nums)\n if (num > m)\n m = num;\n // Traverse from the lowest to the highest digit\n for (int exp = 1; exp <= m; exp *= 10) {\n // Perform counting sort on the k-th digit of array elements\n // k = 1 -> exp = 1\n // k = 2 -> exp = 10\n // i.e., exp = 10^(k-1)\n countingSortDigit(nums, exp);\n }\n}\n</code></pre> radix_sort.cs<pre><code>[class]{radix_sort}-[func]{Digit}\n\n[class]{radix_sort}-[func]{CountingSortDigit}\n\n[class]{radix_sort}-[func]{RadixSort}\n</code></pre> radix_sort.go<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.swift<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.js<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> 
radix_sort.ts<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.dart<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.rs<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{counting_sort_digit}\n\n[class]{}-[func]{radix_sort}\n</code></pre> radix_sort.c<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.kt<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> radix_sort.rb<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{counting_sort_digit}\n\n[class]{}-[func]{radix_sort}\n</code></pre> radix_sort.zig<pre><code>[class]{}-[func]{digit}\n\n[class]{}-[func]{countingSortDigit}\n\n[class]{}-[func]{radixSort}\n</code></pre> <p>Why start sorting from the least significant digit?</p> <p>In consecutive sorting rounds, the result of a later round will override the result of an earlier round. For example, if the result of the first round is \\(a < b\\) and the result of the second round is \\(a > b\\), the result of the second round will replace the first round's result. Since the significance of higher digits is greater than that of lower digits, it makes sense to sort lower digits before higher digits.</p>"},{"location":"chapter_sorting/radix_sort/#11102-algorithm-characteristics","title":"11.10.2 \u00a0 Algorithm characteristics","text":"<p>Compared to counting sort, radix sort is suitable for larger numerical ranges, but it assumes that the data can be represented in a fixed number of digits, and the number of digits should not be too large. 
For example, floating-point numbers are not suitable for radix sort, as their digit count \\(k\\) may be large, potentially leading to a time complexity \\(O(nk) \\gg O(n^2)\\).</p> <ul> <li>Time complexity is \\(O(nk)\\), non-adaptive sorting: Assuming the data size is \\(n\\), the data is in base \\(d\\), and the maximum number of digits is \\(k\\), then sorting a single digit takes \\(O(n + d)\\) time, and sorting all \\(k\\) digits takes \\(O((n + d)k)\\) time. Generally, both \\(d\\) and \\(k\\) are relatively small, leading to a time complexity approaching \\(O(n)\\).</li> <li>Space complexity is \\(O(n + d)\\), non-in-place sorting: Like counting sort, radix sort relies on arrays <code>res</code> and <code>counter</code> of lengths \\(n\\) and \\(d\\) respectively.</li> <li>Stable sorting: When counting sort is stable, radix sort is also stable; if counting sort is unstable, radix sort cannot guarantee a correct sorting outcome.</li> </ul>"},{"location":"chapter_sorting/selection_sort/","title":"11.2 \u00a0 Selection sort","text":"<p>Selection sort works on a very simple principle: it starts a loop where each iteration selects the smallest element from the unsorted interval and moves it to the end of the sorted interval.</p> <p>Suppose the length of the array is \\(n\\), the algorithm flow of selection sort is as shown in Figure 11-2.</p> <ol> <li>Initially, all elements are unsorted, i.e., the unsorted (index) interval is \\([0, n-1]\\).</li> <li>Select the smallest element in the interval \\([0, n-1]\\) and swap it with the element at index \\(0\\). After this, the first element of the array is sorted.</li> <li>Select the smallest element in the interval \\([1, n-1]\\) and swap it with the element at index \\(1\\). After this, the first two elements of the array are sorted.</li> <li>Continue in this manner. 
After \\(n - 1\\) rounds of selection and swapping, the first \\(n - 1\\) elements are sorted.</li> <li>The only remaining element is necessarily the largest element and does not need sorting, thus the array is sorted.</li> </ol> <1><2><3><4><5><6><7><8><9><10><11> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 11-2 \u00a0 Selection sort process </p> <p>In the code, we use \\(k\\) to record the index of the smallest element within the unsorted interval:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig selection_sort.py<pre><code>def selection_sort(nums: list[int]):\n \"\"\"Selection sort\"\"\"\n n = len(nums)\n # Outer loop: unsorted range is [i, n-1]\n for i in range(n - 1):\n # Inner loop: find the smallest element within the unsorted range\n k = i\n for j in range(i + 1, n):\n if nums[j] < nums[k]:\n k = j # Record the index of the smallest element\n # Swap the smallest element with the first element of the unsorted range\n nums[i], nums[k] = nums[k], nums[i]\n</code></pre> selection_sort.cpp<pre><code>/* Selection sort */\nvoid selectionSort(vector<int> &nums) {\n int n = nums.size();\n // Outer loop: unsorted range is [i, n-1]\n for (int i = 0; i < n - 1; i++) {\n // Inner loop: find the smallest element within the unsorted range\n int k = i;\n for (int j = i + 1; j < n; j++) {\n if (nums[j] < nums[k])\n k = j; // Record the index of the smallest element\n }\n // Swap the smallest element with the first element of the unsorted range\n swap(nums[i], nums[k]);\n }\n}\n</code></pre> selection_sort.java<pre><code>/* Selection sort */\nvoid selectionSort(int[] nums) {\n int n = nums.length;\n // Outer loop: unsorted range is [i, n-1]\n for (int i = 0; i < n - 1; i++) {\n // Inner loop: find the smallest element within the unsorted range\n int k = i;\n for (int j = i + 1; j < n; j++) {\n if (nums[j] < nums[k])\n k = j; // Record the index of the smallest element\n }\n // Swap the smallest element with the first element of 
the unsorted range\n int temp = nums[i];\n nums[i] = nums[k];\n nums[k] = temp;\n }\n}\n</code></pre> selection_sort.cs<pre><code>[class]{selection_sort}-[func]{SelectionSort}\n</code></pre> selection_sort.go<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.swift<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.js<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.ts<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.dart<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.rs<pre><code>[class]{}-[func]{selection_sort}\n</code></pre> selection_sort.c<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.kt<pre><code>[class]{}-[func]{selectionSort}\n</code></pre> selection_sort.rb<pre><code>[class]{}-[func]{selection_sort}\n</code></pre> selection_sort.zig<pre><code>[class]{}-[func]{selectionSort}\n</code></pre>"},{"location":"chapter_sorting/selection_sort/#1121-algorithm-characteristics","title":"11.2.1 \u00a0 Algorithm characteristics","text":"<ul> <li>Time complexity of \\(O(n^2)\\), non-adaptive sort: There are \\(n - 1\\) rounds in the outer loop, with the unsorted interval length starting at \\(n\\) in the first round and decreasing to \\(2\\) in the last round, i.e., the outer loops contain \\(n\\), \\(n - 1\\), \\(\\dots\\), \\(3\\), \\(2\\) inner loops respectively, summing up to \\(\\frac{(n - 1)(n + 2)}{2}\\).</li> <li>Space complexity of \\(O(1)\\), in-place sort: Uses constant extra space with pointers \\(i\\) and \\(j\\).</li> <li>Non-stable sort: As shown in Figure 11-3, an element <code>nums[i]</code> may be swapped to the right of an equal element, causing their relative order to change.</li> </ul> <p> Figure 11-3 \u00a0 Selection sort instability example </p>"},{"location":"chapter_sorting/sorting_algorithm/","title":"11.1 \u00a0 Sorting algorithms","text":"<p>Sorting algorithms are used to arrange a set 
of data in a specific order. Sorting algorithms have a wide range of applications because ordered data can usually be searched, analyzed, and processed more efficiently.</p> <p>As shown in Figure 11-1, the data to be sorted can be integers, floating-point numbers, characters, or strings. Sorting rules can be set according to needs, such as numerical size, character ASCII order, or custom rules.</p> <p></p> <p> Figure 11-1 \u00a0 Data types and comparator examples </p>"},{"location":"chapter_sorting/sorting_algorithm/#1111-evaluation-dimensions","title":"11.1.1 \u00a0 Evaluation dimensions","text":"<p>Execution efficiency: We expect the time complexity of sorting algorithms to be as low as possible, with as few overall operations as possible (i.e., a small constant factor in the time complexity). For large data volumes, execution efficiency is particularly important.</p> <p>In-place property: As the name implies, in-place sorting is achieved by directly manipulating the original array, without the need for additional auxiliary arrays, thus saving memory. Generally, in-place sorting involves fewer data movement operations and is faster.</p> <p>Stability: Stable sorting ensures that the relative order of equal elements in the array does not change after sorting.</p> <p>Stable sorting is a necessary condition for multi-level sorting scenarios. Suppose we have a table storing student information, with the first and second columns being name and age, respectively. 
In this case, unstable sorting might lead to the loss of the input data's existing order:</p> <pre><code># Input data is sorted by name\n# (name, age)\n ('A', 19)\n ('B', 18)\n ('C', 21)\n ('D', 19)\n ('E', 23)\n\n# Assuming an unstable sorting algorithm is used to sort the list by age,\n# the result changes the relative position of ('D', 19) and ('A', 19),\n# and the property of the input data being sorted by name is lost\n ('B', 18)\n ('D', 19)\n ('A', 19)\n ('C', 21)\n ('E', 23)\n</code></pre> <p>Adaptability: Adaptive sorting has a time complexity that depends on the input data, i.e., the best time complexity, worst time complexity, and average time complexity are not exactly equal.</p> <p>Adaptability needs to be assessed according to the specific situation. If the worst time complexity is worse than the average, it suggests that the performance of the sorting algorithm might deteriorate under certain data, hence it is seen as a negative attribute; whereas if the best time complexity is better than the average, it is considered a positive attribute.</p> <p>Comparison-based: Comparison-based sorting relies on comparison operators (\(<\), \(=\), \(>\)) to determine the relative order of elements and thus sort the entire array, with the theoretical optimal time complexity being \(O(n \log n)\). Meanwhile, non-comparison sorting does not use comparison operators and can achieve a time complexity of \(O(n)\), but its versatility is relatively poor.</p>"},{"location":"chapter_sorting/sorting_algorithm/#1112-ideal-sorting-algorithm","title":"11.1.2 \u00a0 Ideal sorting algorithm","text":"<p>Fast execution, in-place, stable, positively adaptive, and versatile. Clearly, no sorting algorithm that combines all these features has been found to date. 
Therefore, when selecting a sorting algorithm, it is necessary to decide based on the specific characteristics of the data and the requirements of the problem.</p> <p>Next, we will learn about various sorting algorithms together and analyze the advantages and disadvantages of each based on the above evaluation dimensions.</p>"},{"location":"chapter_sorting/summary/","title":"11.11 \u00a0 Summary","text":""},{"location":"chapter_sorting/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>Bubble sort works by swapping adjacent elements. By adding a flag to enable early return, we can optimize the best-case time complexity of bubble sort to \(O(n)\).</li> <li>Insertion sort sorts each round by inserting elements from the unsorted interval into the correct position in the sorted interval. Although the time complexity of insertion sort is \(O(n^2)\), it is very popular for sorting small amounts of data because each round involves relatively few operations.</li> <li>Quick sort is based on sentinel partitioning operations. In sentinel partitioning, it's possible to always pick the worst pivot, leading to a time complexity degradation to \(O(n^2)\). Introducing median or random pivots can reduce the probability of such degradation. Tail recursion can effectively reduce the recursion depth, optimizing the space complexity to \(O(\log n)\).</li> <li>Merge sort includes two phases, dividing and merging, and typically embodies the divide-and-conquer strategy. In merge sort, sorting an array requires creating auxiliary arrays, resulting in a space complexity of \(O(n)\); however, the space complexity for sorting a list can be optimized to \(O(1)\).</li> <li>Bucket sort consists of three steps: data bucketing, sorting within buckets, and merging results. It also embodies the divide-and-conquer strategy and is suitable for very large datasets. 
The key to bucket sort is the even distribution of data.</li> <li>Counting sort is a special case of bucket sort, which sorts by counting the occurrences of each data point. Counting sort is suitable for large datasets with a limited range of data and requires that data can be converted to positive integers.</li> <li>Radix sort sorts data digit by digit, requiring the data to be representable as fixed-length numbers.</li> <li>Overall, we hope to find a sorting algorithm that has high efficiency, stability, in-place operation, and positive adaptability. However, like other data structures and algorithms, no sorting algorithm can meet all these conditions simultaneously. In practical applications, we need to choose the appropriate sorting algorithm based on the characteristics of the data.</li> <li>Figure 11-19 compares mainstream sorting algorithms in terms of efficiency, stability, in-place nature, and adaptability.</li> </ul> <p> Figure 11-19 \u00a0 Sorting Algorithm Comparison </p>"},{"location":"chapter_sorting/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: When is the stability of sorting algorithms necessary?</p> <p>In reality, we might sort based on one attribute of an object. For example, students have names and heights as attributes, and we aim to implement multi-level sorting: first by name to get <code>(A, 180) (B, 185) (C, 170) (D, 170)</code>; then by height. If the sorting algorithm is unstable, we might end up with <code>(D, 170) (C, 170) (A, 180) (B, 185)</code>.</p> <p>It can be seen that the positions of students D and C have been swapped, disrupting the orderliness of the names, which is undesirable.</p> <p>Q: Can the order of \"searching from right to left\" and \"searching from left to right\" in sentinel partitioning be swapped?</p> <p>No, when using the leftmost element as the pivot, we must first \"search from right to left\" and then \"search from left to right\". 
This conclusion is somewhat counterintuitive, so let's analyze the reason.</p> <p>The last step of the sentinel partition <code>partition()</code> is to swap <code>nums[left]</code> and <code>nums[i]</code>. After the swap, the elements to the left of the pivot are all <code><=</code> the pivot, which requires that <code>nums[left] >= nums[i]</code> must hold before the last swap. Suppose we \"search from left to right\" first: if the scan stops only because <code>i == j</code>, rather than because an element larger than the pivot was found, we may exit the loop with <code>nums[j] == nums[i] > nums[left]</code>. In other words, the final swap operation would move an element larger than the pivot to the left end of the array, causing the sentinel partition to fail.</p> <p>For example, given the array <code>[0, 0, 0, 0, 1]</code>, if we first \"search from left to right\", the array after the sentinel partition is <code>[1, 0, 0, 0, 0]</code>, which is incorrect.</p> <p>Upon further consideration, if we choose <code>nums[right]</code> as the pivot, the situation is exactly reversed: we must first \"search from left to right\".</p> <p>Q: Regarding tail recursion optimization, why does choosing the shorter array ensure that the recursion depth does not exceed \(\log n\)?</p> <p>The recursion depth is the number of currently unreturned recursive methods. Each round of sentinel partition divides the original array into two subarrays. With tail recursion optimization, the length of the subarray to be recursed into is at most half of the original array length. Assuming the worst case always halves the length, the final recursion depth will be \(\log n\).</p> <p>Reviewing the original quicksort, we might continuously recurse into the larger subarray, in the worst case processing arrays of length \(n\), \(n - 1\), ..., \(2\), \(1\), with a recursion depth of \(n\). 
Tail recursion optimization can avoid this scenario.</p> <p>Q: When all elements in the array are equal, is the time complexity of quicksort \\(O(n^2)\\)? How should this degenerate case be handled?</p> <p>Yes. For this situation, consider using sentinel partitioning to divide the array into three parts: less than, equal to, and greater than the pivot. Only recursively proceed with the less than and greater than parts. In this method, an array where all input elements are equal can be sorted in just one round of sentinel partitioning.</p> <p>Q: Why is the worst-case time complexity of bucket sort \\(O(n^2)\\)?</p> <p>In the worst case, all elements are placed in the same bucket. If we use an \\(O(n^2)\\) algorithm to sort these elements, the time complexity will be \\(O(n^2)\\).</p>"},{"location":"chapter_stack_and_queue/","title":"Chapter 5. \u00a0 Stack and queue","text":"<p>Abstract</p> <p>A stack is like cats placed on top of each other, while a queue is like cats lined up one by one.</p> <p>They represent the logical relationships of Last-In-First-Out (LIFO) and First-In-First-Out (FIFO), respectively.</p>"},{"location":"chapter_stack_and_queue/#chapter-contents","title":"Chapter contents","text":"<ul> <li>5.1 \u00a0 Stack</li> <li>5.2 \u00a0 Queue</li> <li>5.3 \u00a0 Double-ended queue</li> <li>5.4 \u00a0 Summary</li> </ul>"},{"location":"chapter_stack_and_queue/deque/","title":"5.3 \u00a0 Double-ended queue","text":"<p>In a queue, we can only delete elements from the head or add elements to the tail. 
As shown in Figure 5-7, a double-ended queue (deque) offers more flexibility, allowing the addition or removal of elements at both the head and the tail.</p> <p></p> <p> Figure 5-7 \u00a0 Operations in double-ended queue </p>"},{"location":"chapter_stack_and_queue/deque/#531-common-operations-in-double-ended-queue","title":"5.3.1 \u00a0 Common operations in double-ended queue","text":"<p>The common operations in a double-ended queue are listed below, and the names of specific methods depend on the programming language used.</p> <p> Table 5-3 \u00a0 Efficiency of double-ended queue operations </p> Method Name Description Time Complexity <code>pushFirst()</code> Add an element to the head \\(O(1)\\) <code>pushLast()</code> Add an element to the tail \\(O(1)\\) <code>popFirst()</code> Remove the first element \\(O(1)\\) <code>popLast()</code> Remove the last element \\(O(1)\\) <code>peekFirst()</code> Access the first element \\(O(1)\\) <code>peekLast()</code> Access the last element \\(O(1)\\) <p>Similarly, we can directly use the double-ended queue classes implemented in programming languages:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig deque.py<pre><code>from collections import deque\n\n# Initialize the deque\ndeq: deque[int] = deque()\n\n# Enqueue elements\ndeq.append(2) # Add to the tail\ndeq.append(5)\ndeq.append(4)\ndeq.appendleft(3) # Add to the head\ndeq.appendleft(1)\n\n# Access elements\nfront: int = deq[0] # The first element\nrear: int = deq[-1] # The last element\n\n# Dequeue elements\npop_front: int = deq.popleft() # The first element dequeued\npop_rear: int = deq.pop() # The last element dequeued\n\n# Get the length of the deque\nsize: int = len(deq)\n\n# Check if the deque is empty\nis_empty: bool = len(deq) == 0\n</code></pre> deque.cpp<pre><code>/* Initialize the deque */\ndeque<int> deque;\n\n/* Enqueue elements */\ndeque.push_back(2); // Add to the tail\ndeque.push_back(5);\ndeque.push_back(4);\ndeque.push_front(3); // Add to the 
head\ndeque.push_front(1);\n\n/* Access elements */\nint front = deque.front(); // The first element\nint back = deque.back(); // The last element\n\n/* Dequeue elements */\ndeque.pop_front(); // The first element dequeued\ndeque.pop_back(); // The last element dequeued\n\n/* Get the length of the deque */\nint size = deque.size();\n\n/* Check if the deque is empty */\nbool empty = deque.empty();\n</code></pre> deque.java<pre><code>/* Initialize the deque */\nDeque<Integer> deque = new LinkedList<>();\n\n/* Enqueue elements */\ndeque.offerLast(2); // Add to the tail\ndeque.offerLast(5);\ndeque.offerLast(4);\ndeque.offerFirst(3); // Add to the head\ndeque.offerFirst(1);\n\n/* Access elements */\nint peekFirst = deque.peekFirst(); // The first element\nint peekLast = deque.peekLast(); // The last element\n\n/* Dequeue elements */\nint popFirst = deque.pollFirst(); // The first element dequeued\nint popLast = deque.pollLast(); // The last element dequeued\n\n/* Get the length of the deque */\nint size = deque.size();\n\n/* Check if the deque is empty */\nboolean isEmpty = deque.isEmpty();\n</code></pre> deque.cs<pre><code>/* Initialize the deque */\n// In C#, LinkedList is used as a deque\nLinkedList<int> deque = new();\n\n/* Enqueue elements */\ndeque.AddLast(2); // Add to the tail\ndeque.AddLast(5);\ndeque.AddLast(4);\ndeque.AddFirst(3); // Add to the head\ndeque.AddFirst(1);\n\n/* Access elements */\nint peekFirst = deque.First.Value; // The first element\nint peekLast = deque.Last.Value; // The last element\n\n/* Dequeue elements */\ndeque.RemoveFirst(); // The first element dequeued\ndeque.RemoveLast(); // The last element dequeued\n\n/* Get the length of the deque */\nint size = deque.Count;\n\n/* Check if the deque is empty */\nbool isEmpty = deque.Count == 0;\n</code></pre> deque_test.go<pre><code>/* Initialize the deque */\n// In Go, use list as a deque\ndeque := list.New()\n\n/* Enqueue elements */\ndeque.PushBack(2) // Add to the 
tail\ndeque.PushBack(5)\ndeque.PushBack(4)\ndeque.PushFront(3) // Add to the head\ndeque.PushFront(1)\n\n/* Access elements */\nfront := deque.Front() // The first element\nrear := deque.Back() // The last element\n\n/* Dequeue elements */\ndeque.Remove(front) // The first element dequeued\ndeque.Remove(rear) // The last element dequeued\n\n/* Get the length of the deque */\nsize := deque.Len()\n\n/* Check if the deque is empty */\nisEmpty := deque.Len() == 0\n</code></pre> deque.swift<pre><code>/* Initialize the deque */\n// Swift does not have a built-in deque class, so Array can be used as a deque\nvar deque: [Int] = []\n\n/* Enqueue elements */\ndeque.append(2) // Add to the tail\ndeque.append(5)\ndeque.append(4)\ndeque.insert(3, at: 0) // Add to the head\ndeque.insert(1, at: 0)\n\n/* Access elements */\nlet peekFirst = deque.first! // The first element\nlet peekLast = deque.last! // The last element\n\n/* Dequeue elements */\n// Using Array, popFirst has a complexity of O(n)\nlet popFirst = deque.removeFirst() // The first element dequeued\nlet popLast = deque.removeLast() // The last element dequeued\n\n/* Get the length of the deque */\nlet size = deque.count\n\n/* Check if the deque is empty */\nlet isEmpty = deque.isEmpty\n</code></pre> deque.js<pre><code>/* Initialize the deque */\n// JavaScript does not have a built-in deque, so Array is used as a deque\nconst deque = [];\n\n/* Enqueue elements */\ndeque.push(2);\ndeque.push(5);\ndeque.push(4);\n// Note that unshift() has a time complexity of O(n) as it's an array\ndeque.unshift(3);\ndeque.unshift(1);\n\n/* Access elements */\nconst peekFirst = deque[0]; // The first element\nconst peekLast = deque[deque.length - 1]; // The last element\n\n/* Dequeue elements */\n// Note that shift() has a time complexity of O(n) as it's an array\nconst popFront = deque.shift(); // The first element dequeued\nconst popBack = deque.pop(); // The last element dequeued\n\n/* Get the length of the deque */\nconst size = 
deque.length;\n\n/* Check if the deque is empty */\nconst isEmpty = size === 0;\n</code></pre> deque.ts<pre><code>/* Initialize the deque */\n// TypeScript does not have a built-in deque, so Array is used as a deque\nconst deque: number[] = [];\n\n/* Enqueue elements */\ndeque.push(2);\ndeque.push(5);\ndeque.push(4);\n// Note that unshift() has a time complexity of O(n) as it's an array\ndeque.unshift(3);\ndeque.unshift(1);\n\n/* Access elements */\nconst peekFirst: number = deque[0]; // The first element\nconst peekLast: number = deque[deque.length - 1]; // The last element\n\n/* Dequeue elements */\n// Note that shift() has a time complexity of O(n) as it's an array\nconst popFront: number = deque.shift() as number; // The first element dequeued\nconst popBack: number = deque.pop() as number; // The last element dequeued\n\n/* Get the length of the deque */\nconst size: number = deque.length;\n\n/* Check if the deque is empty */\nconst isEmpty: boolean = size === 0;\n</code></pre> deque.dart<pre><code>/* Initialize the deque */\n// In Dart, Queue is defined as a deque\nQueue<int> deque = Queue<int>();\n\n/* Enqueue elements */\ndeque.addLast(2); // Add to the tail\ndeque.addLast(5);\ndeque.addLast(4);\ndeque.addFirst(3); // Add to the head\ndeque.addFirst(1);\n\n/* Access elements */\nint peekFirst = deque.first; // The first element\nint peekLast = deque.last; // The last element\n\n/* Dequeue elements */\nint popFirst = deque.removeFirst(); // The first element dequeued\nint popLast = deque.removeLast(); // The last element dequeued\n\n/* Get the length of the deque */\nint size = deque.length;\n\n/* Check if the deque is empty */\nbool isEmpty = deque.isEmpty;\n</code></pre> deque.rs<pre><code>/* Initialize the deque */\nlet mut deque: VecDeque<u32> = VecDeque::new();\n\n/* Enqueue elements */\ndeque.push_back(2); // Add to the tail\ndeque.push_back(5);\ndeque.push_back(4);\ndeque.push_front(3); // Add to the head\ndeque.push_front(1);\n\n/* Access elements 
*/\nif let Some(front) = deque.front() { // The first element\n}\nif let Some(rear) = deque.back() { // The last element\n}\n\n/* Dequeue elements */\nif let Some(pop_front) = deque.pop_front() { // The first element dequeued\n}\nif let Some(pop_rear) = deque.pop_back() { // The last element dequeued\n}\n\n/* Get the length of the deque */\nlet size = deque.len();\n\n/* Check if the deque is empty */\nlet is_empty = deque.is_empty();\n</code></pre> deque.c<pre><code>// C does not provide a built-in deque\n</code></pre> deque.kt<pre><code>\n</code></pre> deque.zig<pre><code>\n</code></pre> Visualizing Code <p>https://pythontutor.com/render.html#code=from%20collections%20import%20deque%0A%0A%22%22%22Driver%20Code%22%22%22%0Aif%20__name__%20%3D%3D%20%22__main__%22%3A%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%0A%20%20%20%20deq%20%3D%20deque%28%29%0A%0A%20%20%20%20%23%20%E5%85%83%E7%B4%A0%E5%85%A5%E9%98%9F%0A%20%20%20%20deq.append%282%29%20%20%23%20%E6%B7%BB%E5%8A%A0%E8%87%B3%E9%98%9F%E5%B0%BE%0A%20%20%20%20deq.append%285%29%0A%20%20%20%20deq.append%284%29%0A%20%20%20%20deq.appendleft%283%29%20%20%23%20%E6%B7%BB%E5%8A%A0%E8%87%B3%E9%98%9F%E9%A6%96%0A%20%20%20%20deq.appendleft%281%29%0A%20%20%20%20print%28%22%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%20deque%20%3D%22,%20deq%29%0A%0A%20%20%20%20%23%20%E8%AE%BF%E9%97%AE%E5%85%83%E7%B4%A0%0A%20%20%20%20front%20%3D%20deq%5B0%5D%20%20%23%20%E9%98%9F%E9%A6%96%E5%85%83%E7%B4%A0%0A%20%20%20%20print%28%22%E9%98%9F%E9%A6%96%E5%85%83%E7%B4%A0%20front%20%3D%22,%20front%29%0A%20%20%20%20rear%20%3D%20deq%5B-1%5D%20%20%23%20%E9%98%9F%E5%B0%BE%E5%85%83%E7%B4%A0%0A%20%20%20%20print%28%22%E9%98%9F%E5%B0%BE%E5%85%83%E7%B4%A0%20rear%20%3D%22,%20rear%29%0A%0A%20%20%20%20%23%20%E5%85%83%E7%B4%A0%E5%87%BA%E9%98%9F%0A%20%20%20%20pop_front%20%3D%20deq.popleft%28%29%20%20%23%20%E9%98%9F%E9%A6%96%E5%85%83%E7%B4%A0%E5%87%BA%E9%98%9F%0A%20%20%20%20print%28%22%E9%98%9F%E9%A6%96%E5%87%BA%E9%98%9F%E5%85%83%E7%B4%A0%20
%20pop_front%20%3D%22,%20pop_front%29%0A%20%20%20%20print%28%22%E9%98%9F%E9%A6%96%E5%87%BA%E9%98%9F%E5%90%8E%20deque%20%3D%22,%20deq%29%0A%20%20%20%20pop_rear%20%3D%20deq.pop%28%29%20%20%23%20%E9%98%9F%E5%B0%BE%E5%85%83%E7%B4%A0%E5%87%BA%E9%98%9F%0A%20%20%20%20print%28%22%E9%98%9F%E5%B0%BE%E5%87%BA%E9%98%9F%E5%85%83%E7%B4%A0%20%20pop_rear%20%3D%22,%20pop_rear%29%0A%20%20%20%20print%28%22%E9%98%9F%E5%B0%BE%E5%87%BA%E9%98%9F%E5%90%8E%20deque%20%3D%22,%20deq%29%0A%0A%20%20%20%20%23%20%E8%8E%B7%E5%8F%96%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%E7%9A%84%E9%95%BF%E5%BA%A6%0A%20%20%20%20size%20%3D%20len%28deq%29%0A%20%20%20%20print%28%22%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%E9%95%BF%E5%BA%A6%20size%20%3D%22,%20size%29%0A%0A%20%20%20%20%23%20%E5%88%A4%E6%96%AD%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%E6%98%AF%E5%90%A6%E4%B8%BA%E7%A9%BA%0A%20%20%20%20is_empty%20%3D%20len%28deq%29%20%3D%3D%200%0A%20%20%20%20print%28%22%E5%8F%8C%E5%90%91%E9%98%9F%E5%88%97%E6%98%AF%E5%90%A6%E4%B8%BA%E7%A9%BA%20%3D%22,%20is_empty%29&cumulative=false&curInstr=3&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=311&rawInputLstJSON=%5B%5D&textReferences=false</p>"},{"location":"chapter_stack_and_queue/deque/#532-implementing-a-double-ended-queue","title":"5.3.2 \u00a0 Implementing a double-ended queue *","text":"<p>The implementation of a double-ended queue is similar to that of a regular queue, it can be based on either a linked list or an array as the underlying data structure.</p>"},{"location":"chapter_stack_and_queue/deque/#1-implementation-based-on-doubly-linked-list","title":"1. 
\u00a0 Implementation based on doubly linked list","text":"<p>Recall from the previous section that we used a regular singly linked list to implement a queue, as it conveniently allows for deleting from the head (corresponding to the dequeue operation) and adding new elements after the tail (corresponding to the enqueue operation).</p> <p>For a double-ended queue, both the head and the tail can perform enqueue and dequeue operations. In other words, a double-ended queue needs to implement operations in the opposite direction as well. For this, we use a \"doubly linked list\" as the underlying data structure of the double-ended queue.</p> <p>As shown in Figure 5-8, we treat the head and tail nodes of the doubly linked list as the front and rear of the double-ended queue, respectively, and implement the functionality to add and remove nodes at both ends.</p> LinkedListDequepushLast()pushFirst()popLast()popFirst() <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 5-8 \u00a0 Implementing Double-Ended Queue with Doubly Linked List for Enqueue and Dequeue Operations </p> <p>The implementation code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linkedlist_deque.py<pre><code>class ListNode:\n \"\"\"Double-linked list node\"\"\"\n\n def __init__(self, val: int):\n \"\"\"Constructor\"\"\"\n self.val: int = val\n self.next: ListNode | None = None # Reference to successor node\n self.prev: ListNode | None = None # Reference to predecessor node\n\nclass LinkedListDeque:\n \"\"\"Double-ended queue class based on double-linked list\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self._front: ListNode | None = None # Head node front\n self._rear: ListNode | None = None # Tail node rear\n self._size: int = 0 # Length of the double-ended queue\n\n def size(self) -> int:\n \"\"\"Get the length of the double-ended queue\"\"\"\n return self._size\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the double-ended queue is empty\"\"\"\n return 
self._size == 0\n\n def push(self, num: int, is_front: bool):\n \"\"\"Enqueue operation\"\"\"\n node = ListNode(num)\n # If the list is empty, make front and rear both point to node\n if self.is_empty():\n self._front = self._rear = node\n # Front enqueue operation\n elif is_front:\n # Add node to the head of the list\n self._front.prev = node\n node.next = self._front\n self._front = node # Update head node\n # Rear enqueue operation\n else:\n # Add node to the tail of the list\n self._rear.next = node\n node.prev = self._rear\n self._rear = node # Update tail node\n self._size += 1 # Update queue length\n\n def push_first(self, num: int):\n \"\"\"Front enqueue\"\"\"\n self.push(num, True)\n\n def push_last(self, num: int):\n \"\"\"Rear enqueue\"\"\"\n self.push(num, False)\n\n def pop(self, is_front: bool) -> int:\n \"\"\"Dequeue operation\"\"\"\n if self.is_empty():\n raise IndexError(\"Double-ended queue is empty\")\n # Front dequeue operation\n if is_front:\n val: int = self._front.val # Temporarily store the head node value\n # Remove head node\n fnext: ListNode | None = self._front.next\n if fnext != None:\n fnext.prev = None\n self._front.next = None\n self._front = fnext # Update head node\n # Rear dequeue operation\n else:\n val: int = self._rear.val # Temporarily store the tail node value\n # Remove tail node\n rprev: ListNode | None = self._rear.prev\n if rprev != None:\n rprev.next = None\n self._rear.prev = None\n self._rear = rprev # Update tail node\n self._size -= 1 # Update queue length\n return val\n\n def pop_first(self) -> int:\n \"\"\"Front dequeue\"\"\"\n return self.pop(True)\n\n def pop_last(self) -> int:\n \"\"\"Rear dequeue\"\"\"\n return self.pop(False)\n\n def peek_first(self) -> int:\n \"\"\"Access front element\"\"\"\n if self.is_empty():\n raise IndexError(\"Double-ended queue is empty\")\n return self._front.val\n\n def peek_last(self) -> int:\n \"\"\"Access rear element\"\"\"\n if self.is_empty():\n raise IndexError(\"Double-ended 
queue is empty\")\n return self._rear.val\n\n def to_array(self) -> list[int]:\n \"\"\"Return array for printing\"\"\"\n node = self._front\n res = [0] * self.size()\n for i in range(self.size()):\n res[i] = node.val\n node = node.next\n return res\n</code></pre> linkedlist_deque.cpp<pre><code>/* Double-linked list node */\nstruct DoublyListNode {\n int val; // Node value\n DoublyListNode *next; // Pointer to successor node\n DoublyListNode *prev; // Pointer to predecessor node\n DoublyListNode(int val) : val(val), prev(nullptr), next(nullptr) {\n }\n};\n\n/* Double-ended queue class based on double-linked list */\nclass LinkedListDeque {\n private:\n DoublyListNode *front, *rear; // Front node front, back node rear\n int queSize = 0; // Length of the double-ended queue\n\n public:\n /* Constructor */\n LinkedListDeque() : front(nullptr), rear(nullptr) {\n }\n\n /* Destructor */\n ~LinkedListDeque() {\n // Traverse the linked list, remove nodes, free memory\n DoublyListNode *pre, *cur = front;\n while (cur != nullptr) {\n pre = cur;\n cur = cur->next;\n delete pre;\n }\n }\n\n /* Get the length of the double-ended queue */\n int size() {\n return queSize;\n }\n\n /* Determine if the double-ended queue is empty */\n bool isEmpty() {\n return size() == 0;\n }\n\n /* Enqueue operation */\n void push(int num, bool isFront) {\n DoublyListNode *node = new DoublyListNode(num);\n // If the list is empty, make front and rear both point to node\n if (isEmpty())\n front = rear = node;\n // Front enqueue operation\n else if (isFront) {\n // Add node to the head of the list\n front->prev = node;\n node->next = front;\n front = node; // Update head node\n // Rear enqueue operation\n } else {\n // Add node to the tail of the list\n rear->next = node;\n node->prev = rear;\n rear = node; // Update tail node\n }\n queSize++; // Update queue length\n }\n\n /* Front enqueue */\n void pushFirst(int num) {\n push(num, true);\n }\n\n /* Rear enqueue */\n void pushLast(int num) {\n 
push(num, false);\n }\n\n /* Dequeue operation */\n int pop(bool isFront) {\n if (isEmpty())\n throw out_of_range(\"Queue is empty\");\n int val;\n // Front dequeue operation\n if (isFront) {\n val = front->val; // Temporarily store the head node value\n // Remove head node\n DoublyListNode *fNext = front->next;\n if (fNext != nullptr) {\n fNext->prev = nullptr;\n front->next = nullptr;\n }\n delete front;\n front = fNext; // Update head node\n // Rear dequeue operation\n } else {\n val = rear->val; // Temporarily store the tail node value\n // Remove tail node\n DoublyListNode *rPrev = rear->prev;\n if (rPrev != nullptr) {\n rPrev->next = nullptr;\n rear->prev = nullptr;\n }\n delete rear;\n rear = rPrev; // Update tail node\n }\n queSize--; // Update queue length\n return val;\n }\n\n /* Front dequeue */\n int popFirst() {\n return pop(true);\n }\n\n /* Rear dequeue */\n int popLast() {\n return pop(false);\n }\n\n /* Access front element */\n int peekFirst() {\n if (isEmpty())\n throw out_of_range(\"Double-ended queue is empty\");\n return front->val;\n }\n\n /* Access rear element */\n int peekLast() {\n if (isEmpty())\n throw out_of_range(\"Double-ended queue is empty\");\n return rear->val;\n }\n\n /* Return array for printing */\n vector<int> toVector() {\n DoublyListNode *node = front;\n vector<int> res(size());\n for (int i = 0; i < res.size(); i++) {\n res[i] = node->val;\n node = node->next;\n }\n return res;\n }\n};\n</code></pre> linkedlist_deque.java<pre><code>/* Double-linked list node */\nclass ListNode {\n int val; // Node value\n ListNode next; // Reference to successor node\n ListNode prev; // Reference to predecessor node\n\n ListNode(int val) {\n this.val = val;\n prev = next = null;\n }\n}\n\n/* Double-ended queue class based on double-linked list */\nclass LinkedListDeque {\n private ListNode front, rear; // Front node front, back node rear\n private int queSize = 0; // Length of the double-ended queue\n\n public LinkedListDeque() {\n front = 
rear = null;\n }\n\n /* Get the length of the double-ended queue */\n public int size() {\n return queSize;\n }\n\n /* Determine if the double-ended queue is empty */\n public boolean isEmpty() {\n return size() == 0;\n }\n\n /* Enqueue operation */\n private void push(int num, boolean isFront) {\n ListNode node = new ListNode(num);\n // If the list is empty, make front and rear both point to node\n if (isEmpty())\n front = rear = node;\n // Front enqueue operation\n else if (isFront) {\n // Add node to the head of the list\n front.prev = node;\n node.next = front;\n front = node; // Update head node\n // Rear enqueue operation\n } else {\n // Add node to the tail of the list\n rear.next = node;\n node.prev = rear;\n rear = node; // Update tail node\n }\n queSize++; // Update queue length\n }\n\n /* Front enqueue */\n public void pushFirst(int num) {\n push(num, true);\n }\n\n /* Rear enqueue */\n public void pushLast(int num) {\n push(num, false);\n }\n\n /* Dequeue operation */\n private int pop(boolean isFront) {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n int val;\n // Front dequeue operation\n if (isFront) {\n val = front.val; // Temporarily store the head node value\n // Remove head node\n ListNode fNext = front.next;\n if (fNext != null) {\n fNext.prev = null;\n front.next = null;\n }\n front = fNext; // Update head node\n // Rear dequeue operation\n } else {\n val = rear.val; // Temporarily store the tail node value\n // Remove tail node\n ListNode rPrev = rear.prev;\n if (rPrev != null) {\n rPrev.next = null;\n rear.prev = null;\n }\n rear = rPrev; // Update tail node\n }\n queSize--; // Update queue length\n return val;\n }\n\n /* Front dequeue */\n public int popFirst() {\n return pop(true);\n }\n\n /* Rear dequeue */\n public int popLast() {\n return pop(false);\n }\n\n /* Access front element */\n public int peekFirst() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return front.val;\n }\n\n /* Access rear element */\n 
public int peekLast() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return rear.val;\n }\n\n /* Return array for printing */\n public int[] toArray() {\n ListNode node = front;\n int[] res = new int[size()];\n for (int i = 0; i < res.length; i++) {\n res[i] = node.val;\n node = node.next;\n }\n return res;\n }\n}\n</code></pre> linkedlist_deque.cs<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.go<pre><code>[class]{linkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.swift<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.js<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.ts<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.dart<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.rs<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.c<pre><code>[class]{DoublyListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.kt<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.rb<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre> linkedlist_deque.zig<pre><code>[class]{ListNode}-[func]{}\n\n[class]{LinkedListDeque}-[func]{}\n</code></pre>"},{"location":"chapter_stack_and_queue/deque/#2-implementation-based-on-array","title":"2. 
\u00a0 Implementation based on array","text":"<p>As shown in Figure 5-9, similar to implementing a queue with an array, we can also use a circular array to implement a double-ended queue.</p> ArrayDequepushLast()pushFirst()popLast()popFirst() <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 5-9 \u00a0 Implementing Double-Ended Queue with Array for Enqueue and Dequeue Operations </p> <p>The implementation only needs to add methods for \"front enqueue\" and \"rear dequeue\":</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array_deque.py<pre><code>class ArrayDeque:\n \"\"\"Double-ended queue class based on circular array\"\"\"\n\n def __init__(self, capacity: int):\n \"\"\"Constructor\"\"\"\n self._nums: list[int] = [0] * capacity\n self._front: int = 0\n self._size: int = 0\n\n def capacity(self) -> int:\n \"\"\"Get the capacity of the double-ended queue\"\"\"\n return len(self._nums)\n\n def size(self) -> int:\n \"\"\"Get the length of the double-ended queue\"\"\"\n return self._size\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the double-ended queue is empty\"\"\"\n return self._size == 0\n\n def index(self, i: int) -> int:\n \"\"\"Calculate circular array index\"\"\"\n # Implement circular array by modulo operation\n # When i exceeds the tail of the array, return to the head\n # When i exceeds the head of the array, return to the tail\n return (i + self.capacity()) % self.capacity()\n\n def push_first(self, num: int):\n \"\"\"Front enqueue\"\"\"\n if self._size == self.capacity():\n print(\"Double-ended queue is full\")\n return\n # Move the front pointer one position to the left\n # Implement front crossing the head of the array to return to the tail by modulo operation\n self._front = self.index(self._front - 1)\n # Add num to the front\n self._nums[self._front] = num\n self._size += 1\n\n def push_last(self, num: int):\n \"\"\"Rear enqueue\"\"\"\n if self._size == self.capacity():\n print(\"Double-ended queue is full\")\n return\n # Calculate 
rear pointer, pointing to rear index + 1\n rear = self.index(self._front + self._size)\n # Add num to the rear\n self._nums[rear] = num\n self._size += 1\n\n def pop_first(self) -> int:\n \"\"\"Front dequeue\"\"\"\n num = self.peek_first()\n # Move front pointer one position backward\n self._front = self.index(self._front + 1)\n self._size -= 1\n return num\n\n def pop_last(self) -> int:\n \"\"\"Rear dequeue\"\"\"\n num = self.peek_last()\n self._size -= 1\n return num\n\n def peek_first(self) -> int:\n \"\"\"Access front element\"\"\"\n if self.is_empty():\n raise IndexError(\"Double-ended queue is empty\")\n return self._nums[self._front]\n\n def peek_last(self) -> int:\n \"\"\"Access rear element\"\"\"\n if self.is_empty():\n raise IndexError(\"Double-ended queue is empty\")\n # Calculate rear element index\n last = self.index(self._front + self._size - 1)\n return self._nums[last]\n\n def to_array(self) -> list[int]:\n \"\"\"Return array for printing\"\"\"\n # Only convert elements within valid length range\n res = []\n for i in range(self._size):\n res.append(self._nums[self.index(self._front + i)])\n return res\n</code></pre> array_deque.cpp<pre><code>/* Double-ended queue class based on circular array */\nclass ArrayDeque {\n private:\n vector<int> nums; // Array used to store elements of the double-ended queue\n int front; // Front pointer, pointing to the front element\n int queSize; // Length of the double-ended queue\n\n public:\n /* Constructor */\n ArrayDeque(int capacity) {\n nums.resize(capacity);\n front = queSize = 0;\n }\n\n /* Get the capacity of the double-ended queue */\n int capacity() {\n return nums.size();\n }\n\n /* Get the length of the double-ended queue */\n int size() {\n return queSize;\n }\n\n /* Determine if the double-ended queue is empty */\n bool isEmpty() {\n return queSize == 0;\n }\n\n /* Calculate circular array index */\n int index(int i) {\n // Implement circular array by modulo operation\n // When i exceeds the tail of the 
array, return to the head\n // When i exceeds the head of the array, return to the tail\n return (i + capacity()) % capacity();\n }\n\n /* Front enqueue */\n void pushFirst(int num) {\n if (queSize == capacity()) {\n cout << \"Double-ended queue is full\" << endl;\n return;\n }\n // Move the front pointer one position to the left\n // Implement front crossing the head of the array to return to the tail by modulo operation\n front = index(front - 1);\n // Add num to the front\n nums[front] = num;\n queSize++;\n }\n\n /* Rear enqueue */\n void pushLast(int num) {\n if (queSize == capacity()) {\n cout << \"Double-ended queue is full\" << endl;\n return;\n }\n // Calculate rear pointer, pointing to rear index + 1\n int rear = index(front + queSize);\n // Add num to the rear\n nums[rear] = num;\n queSize++;\n }\n\n /* Front dequeue */\n int popFirst() {\n int num = peekFirst();\n // Move front pointer one position backward\n front = index(front + 1);\n queSize--;\n return num;\n }\n\n /* Rear dequeue */\n int popLast() {\n int num = peekLast();\n queSize--;\n return num;\n }\n\n /* Access front element */\n int peekFirst() {\n if (isEmpty())\n throw out_of_range(\"Double-ended queue is empty\");\n return nums[front];\n }\n\n /* Access rear element */\n int peekLast() {\n if (isEmpty())\n throw out_of_range(\"Double-ended queue is empty\");\n // Calculate rear element index\n int last = index(front + queSize - 1);\n return nums[last];\n }\n\n /* Return array for printing */\n vector<int> toVector() {\n // Only convert elements within valid length range\n vector<int> res(queSize);\n for (int i = 0, j = front; i < queSize; i++, j++) {\n res[i] = nums[index(j)];\n }\n return res;\n }\n};\n</code></pre> array_deque.java<pre><code>/* Double-ended queue class based on circular array */\nclass ArrayDeque {\n private int[] nums; // Array used to store elements of the double-ended queue\n private int front; // Front pointer, pointing to the front element\n private int queSize; // 
Length of the double-ended queue\n\n /* Constructor */\n public ArrayDeque(int capacity) {\n this.nums = new int[capacity];\n front = queSize = 0;\n }\n\n /* Get the capacity of the double-ended queue */\n public int capacity() {\n return nums.length;\n }\n\n /* Get the length of the double-ended queue */\n public int size() {\n return queSize;\n }\n\n /* Determine if the double-ended queue is empty */\n public boolean isEmpty() {\n return queSize == 0;\n }\n\n /* Calculate circular array index */\n private int index(int i) {\n // Implement circular array by modulo operation\n // When i exceeds the tail of the array, return to the head\n // When i exceeds the head of the array, return to the tail\n return (i + capacity()) % capacity();\n }\n\n /* Front enqueue */\n public void pushFirst(int num) {\n if (queSize == capacity()) {\n System.out.println(\"Double-ended queue is full\");\n return;\n }\n // Move the front pointer one position to the left\n // Implement front crossing the head of the array to return to the tail by modulo operation\n front = index(front - 1);\n // Add num to the front\n nums[front] = num;\n queSize++;\n }\n\n /* Rear enqueue */\n public void pushLast(int num) {\n if (queSize == capacity()) {\n System.out.println(\"Double-ended queue is full\");\n return;\n }\n // Calculate rear pointer, pointing to rear index + 1\n int rear = index(front + queSize);\n // Add num to the rear\n nums[rear] = num;\n queSize++;\n }\n\n /* Front dequeue */\n public int popFirst() {\n int num = peekFirst();\n // Move front pointer one position backward\n front = index(front + 1);\n queSize--;\n return num;\n }\n\n /* Rear dequeue */\n public int popLast() {\n int num = peekLast();\n queSize--;\n return num;\n }\n\n /* Access front element */\n public int peekFirst() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return nums[front];\n }\n\n /* Access rear element */\n public int peekLast() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n 
// Calculate rear element index\n int last = index(front + queSize - 1);\n return nums[last];\n }\n\n /* Return array for printing */\n public int[] toArray() {\n // Only convert elements within valid length range\n int[] res = new int[queSize];\n for (int i = 0, j = front; i < queSize; i++, j++) {\n res[i] = nums[index(j)];\n }\n return res;\n }\n}\n</code></pre> array_deque.cs<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.go<pre><code>[class]{arrayDeque}-[func]{}\n</code></pre> array_deque.swift<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.js<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.ts<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.dart<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.rs<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.c<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.kt<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.rb<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre> array_deque.zig<pre><code>[class]{ArrayDeque}-[func]{}\n</code></pre>"},{"location":"chapter_stack_and_queue/deque/#533-applications-of-double-ended-queue","title":"5.3.3 \u00a0 Applications of double-ended queue","text":"<p>The double-ended queue combines the logic of both stacks and queues and can thus implement all of their use cases while offering greater flexibility.</p> <p>We know that software's \"undo\" feature is typically implemented using a stack: the system <code>pushes</code> each change operation onto the stack and then <code>pops</code> to implement undoing. However, considering the limitations of system resources, software often restricts the number of undo steps (for example, only allowing the last 50 steps). When the stack length exceeds 50, the software needs to perform a deletion operation at the bottom of the stack (the front of the queue). 
A regular stack cannot perform this operation, which is where a double-ended queue becomes necessary. Note that the core logic of \"undo\" still follows the Last-In-First-Out principle of a stack, but a double-ended queue can more flexibly implement some additional logic.</p>"},{"location":"chapter_stack_and_queue/queue/","title":"5.2 \u00a0 Queue","text":"<p>A queue is a linear data structure that follows the First-In-First-Out (FIFO) rule. As the name suggests, a queue simulates the phenomenon of lining up, where newcomers join the queue at the rear, and the person at the front leaves the queue first.</p> <p>As shown in Figure 5-4, we call the front of the queue the \"head\" and the back the \"tail.\" The operation of adding elements to the rear of the queue is termed \"enqueue,\" and the operation of removing elements from the front is termed \"dequeue.\"</p> <p></p> <p> Figure 5-4 \u00a0 Queue's first-in-first-out rule </p>"},{"location":"chapter_stack_and_queue/queue/#521-common-operations-on-queue","title":"5.2.1 \u00a0 Common operations on queue","text":"<p>The common operations on a queue are shown in Table 5-2. Note that method names may vary across different programming languages. 
Here, we use the same naming convention as that used for stacks.</p> <p> Table 5-2 \u00a0 Efficiency of queue operations </p> Method Name Description Time Complexity <code>push()</code> Enqueue an element, add it to the tail \\(O(1)\\) <code>pop()</code> Dequeue the head element \\(O(1)\\) <code>peek()</code> Access the head element \\(O(1)\\) <p>We can directly use the ready-made queue classes in programming languages:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig queue.py<pre><code>from collections import deque\n\n# Initialize the queue\n# In Python, we generally use the deque class as a queue\n# Although queue.Queue() is a pure queue class, it's not very user-friendly, so it's not recommended\nque: deque[int] = deque()\n\n# Enqueue elements\nque.append(1)\nque.append(3)\nque.append(2)\nque.append(5)\nque.append(4)\n\n# Access the first element\nfront: int = que[0]\n\n# Dequeue an element\npop: int = que.popleft()\n\n# Get the length of the queue\nsize: int = len(que)\n\n# Check if the queue is empty\nis_empty: bool = len(que) == 0\n</code></pre> queue.cpp<pre><code>/* Initialize the queue */\nqueue<int> queue;\n\n/* Enqueue elements */\nqueue.push(1);\nqueue.push(3);\nqueue.push(2);\nqueue.push(5);\nqueue.push(4);\n\n/* Access the first element*/\nint front = queue.front();\n\n/* Dequeue an element */\nqueue.pop();\n\n/* Get the length of the queue */\nint size = queue.size();\n\n/* Check if the queue is empty */\nbool empty = queue.empty();\n</code></pre> queue.java<pre><code>/* Initialize the queue */\nQueue<Integer> queue = new LinkedList<>();\n\n/* Enqueue elements */\nqueue.offer(1);\nqueue.offer(3);\nqueue.offer(2);\nqueue.offer(5);\nqueue.offer(4);\n\n/* Access the first element */\nint peek = queue.peek();\n\n/* Dequeue an element */\nint pop = queue.poll();\n\n/* Get the length of the queue */\nint size = queue.size();\n\n/* Check if the queue is empty */\nboolean isEmpty = queue.isEmpty();\n</code></pre> queue.cs<pre><code>/* Initialize the queue 
*/\nQueue<int> queue = new();\n\n/* Enqueue elements */\nqueue.Enqueue(1);\nqueue.Enqueue(3);\nqueue.Enqueue(2);\nqueue.Enqueue(5);\nqueue.Enqueue(4);\n\n/* Access the first element */\nint peek = queue.Peek();\n\n/* Dequeue an element */\nint pop = queue.Dequeue();\n\n/* Get the length of the queue */\nint size = queue.Count;\n\n/* Check if the queue is empty */\nbool isEmpty = queue.Count == 0;\n</code></pre> queue_test.go<pre><code>/* Initialize the queue */\n// In Go, use list as a queue\nqueue := list.New()\n\n/* Enqueue elements */\nqueue.PushBack(1)\nqueue.PushBack(3)\nqueue.PushBack(2)\nqueue.PushBack(5)\nqueue.PushBack(4)\n\n/* Access the first element */\npeek := queue.Front()\n\n/* Dequeue an element */\npop := queue.Front()\nqueue.Remove(pop)\n\n/* Get the length of the queue */\nsize := queue.Len()\n\n/* Check if the queue is empty */\nisEmpty := queue.Len() == 0\n</code></pre> queue.swift<pre><code>/* Initialize the queue */\n// Swift does not have a built-in queue class, so Array can be used as a queue\nvar queue: [Int] = []\n\n/* Enqueue elements */\nqueue.append(1)\nqueue.append(3)\nqueue.append(2)\nqueue.append(5)\nqueue.append(4)\n\n/* Access the first element */\nlet peek = queue.first!\n\n/* Dequeue an element */\n// Since it's an array, removeFirst has a complexity of O(n)\nlet pop = queue.removeFirst()\n\n/* Get the length of the queue */\nlet size = queue.count\n\n/* Check if the queue is empty */\nlet isEmpty = queue.isEmpty\n</code></pre> queue.js<pre><code>/* Initialize the queue */\n// JavaScript does not have a built-in queue, so Array can be used as a queue\nconst queue = [];\n\n/* Enqueue elements */\nqueue.push(1);\nqueue.push(3);\nqueue.push(2);\nqueue.push(5);\nqueue.push(4);\n\n/* Access the first element */\nconst peek = queue[0];\n\n/* Dequeue an element */\n// Since the underlying structure is an array, shift() method has a time complexity of O(n)\nconst pop = queue.shift();\n\n/* Get the length of the queue */\nconst size = 
queue.length;\n\n/* Check if the queue is empty */\nconst empty = queue.length === 0;\n</code></pre> queue.ts<pre><code>/* Initialize the queue */\n// TypeScript does not have a built-in queue, so Array can be used as a queue \nconst queue: number[] = [];\n\n/* Enqueue elements */\nqueue.push(1);\nqueue.push(3);\nqueue.push(2);\nqueue.push(5);\nqueue.push(4);\n\n/* Access the first element */\nconst peek = queue[0];\n\n/* Dequeue an element */\n// Since the underlying structure is an array, shift() method has a time complexity of O(n)\nconst pop = queue.shift();\n\n/* Get the length of the queue */\nconst size = queue.length;\n\n/* Check if the queue is empty */\nconst empty = queue.length === 0;\n</code></pre> queue.dart<pre><code>/* Initialize the queue */\n// In Dart, the Queue class is a double-ended queue but can be used as a queue\nQueue<int> queue = Queue();\n\n/* Enqueue elements */\nqueue.add(1);\nqueue.add(3);\nqueue.add(2);\nqueue.add(5);\nqueue.add(4);\n\n/* Access the first element */\nint peek = queue.first;\n\n/* Dequeue an element */\nint pop = queue.removeFirst();\n\n/* Get the length of the queue */\nint size = queue.length;\n\n/* Check if the queue is empty */\nbool isEmpty = queue.isEmpty;\n</code></pre> queue.rs<pre><code>/* Initialize the double-ended queue */\n// In Rust, use a double-ended queue as a regular queue\nlet mut deque: VecDeque<u32> = VecDeque::new();\n\n/* Enqueue elements */\ndeque.push_back(1);\ndeque.push_back(3);\ndeque.push_back(2);\ndeque.push_back(5);\ndeque.push_back(4);\n\n/* Access the first element */\nif let Some(front) = deque.front() {\n}\n\n/* Dequeue an element */\nif let Some(pop) = deque.pop_front() {\n}\n\n/* Get the length of the queue */\nlet size = deque.len();\n\n/* Check if the queue is empty */\nlet is_empty = deque.is_empty();\n</code></pre> queue.c<pre><code>// C does not provide a built-in queue\n</code></pre> queue.kt<pre><code>\n</code></pre> queue.zig<pre><code>\n</code></pre> Code Visualization <p> 
Full Screen ></p>"},{"location":"chapter_stack_and_queue/queue/#522-implementing-a-queue","title":"5.2.2 \u00a0 Implementing a queue","text":"<p>To implement a queue, we need a data structure that allows adding elements at one end and removing them at the other. Both linked lists and arrays meet this requirement.</p>"},{"location":"chapter_stack_and_queue/queue/#1-implementation-based-on-a-linked-list","title":"1. \u00a0 Implementation based on a linked list","text":"<p>As shown in Figure 5-5, we can consider the \"head node\" and \"tail node\" of a linked list as the \"front\" and \"rear\" of the queue, respectively. It is stipulated that nodes can only be added at the rear and removed at the front.</p> LinkedListQueuepush()pop() <p></p> <p></p> <p></p> <p> Figure 5-5 \u00a0 Implementing Queue with Linked List for Enqueue and Dequeue Operations </p> <p>Below is the code for implementing a queue using a linked list:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linkedlist_queue.py<pre><code>class LinkedListQueue:\n \"\"\"Queue class based on linked list\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self._front: ListNode | None = None # Head node front\n self._rear: ListNode | None = None # Tail node rear\n self._size: int = 0\n\n def size(self) -> int:\n \"\"\"Get the length of the queue\"\"\"\n return self._size\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the queue is empty\"\"\"\n return self._size == 0\n\n def push(self, num: int):\n \"\"\"Enqueue\"\"\"\n # Add num behind the tail node\n node = ListNode(num)\n # If the queue is empty, make the head and tail nodes both point to that node\n if self._front is None:\n self._front = node\n self._rear = node\n # If the queue is not empty, add that node behind the tail node\n else:\n self._rear.next = node\n self._rear = node\n self._size += 1\n\n def pop(self) -> int:\n \"\"\"Dequeue\"\"\"\n num = self.peek()\n # Remove head node\n self._front = self._front.next\n self._size -= 1\n return 
num\n\n def peek(self) -> int:\n \"\"\"Access front element\"\"\"\n if self.is_empty():\n raise IndexError(\"Queue is empty\")\n return self._front.val\n\n def to_list(self) -> list[int]:\n \"\"\"Convert to a list for printing\"\"\"\n queue = []\n temp = self._front\n while temp:\n queue.append(temp.val)\n temp = temp.next\n return queue\n</code></pre> linkedlist_queue.cpp<pre><code>/* Queue class based on linked list */\nclass LinkedListQueue {\n private:\n ListNode *front, *rear; // Front node front, back node rear\n int queSize;\n\n public:\n LinkedListQueue() {\n front = nullptr;\n rear = nullptr;\n queSize = 0;\n }\n\n ~LinkedListQueue() {\n // Traverse the linked list, remove nodes, free memory\n freeMemoryLinkedList(front);\n }\n\n /* Get the length of the queue */\n int size() {\n return queSize;\n }\n\n /* Determine if the queue is empty */\n bool isEmpty() {\n return queSize == 0;\n }\n\n /* Enqueue */\n void push(int num) {\n // Add num behind the tail node\n ListNode *node = new ListNode(num);\n // If the queue is empty, make the head and tail nodes both point to that node\n if (front == nullptr) {\n front = node;\n rear = node;\n }\n // If the queue is not empty, add that node behind the tail node\n else {\n rear->next = node;\n rear = node;\n }\n queSize++;\n }\n\n /* Dequeue */\n int pop() {\n int num = peek();\n // Remove head node\n ListNode *tmp = front;\n front = front->next;\n // Free memory\n delete tmp;\n queSize--;\n return num;\n }\n\n /* Access front element */\n int peek() {\n if (size() == 0)\n throw out_of_range(\"Queue is empty\");\n return front->val;\n }\n\n /* Convert the linked list to Vector and return */\n vector<int> toVector() {\n ListNode *node = front;\n vector<int> res(size());\n for (int i = 0; i < res.size(); i++) {\n res[i] = node->val;\n node = node->next;\n }\n return res;\n }\n};\n</code></pre> linkedlist_queue.java<pre><code>/* Queue class based on linked list */\nclass LinkedListQueue {\n private ListNode front, rear; 
// Front node front, back node rear\n private int queSize = 0;\n\n public LinkedListQueue() {\n front = null;\n rear = null;\n }\n\n /* Get the length of the queue */\n public int size() {\n return queSize;\n }\n\n /* Determine if the queue is empty */\n public boolean isEmpty() {\n return size() == 0;\n }\n\n /* Enqueue */\n public void push(int num) {\n // Add num behind the tail node\n ListNode node = new ListNode(num);\n // If the queue is empty, make the head and tail nodes both point to that node\n if (front == null) {\n front = node;\n rear = node;\n // If the queue is not empty, add that node behind the tail node\n } else {\n rear.next = node;\n rear = node;\n }\n queSize++;\n }\n\n /* Dequeue */\n public int pop() {\n int num = peek();\n // Remove head node\n front = front.next;\n queSize--;\n return num;\n }\n\n /* Access front element */\n public int peek() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return front.val;\n }\n\n /* Convert the linked list to Array and return */\n public int[] toArray() {\n ListNode node = front;\n int[] res = new int[size()];\n for (int i = 0; i < res.length; i++) {\n res[i] = node.val;\n node = node.next;\n }\n return res;\n }\n}\n</code></pre> linkedlist_queue.cs<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.go<pre><code>[class]{linkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.swift<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.js<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.ts<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.dart<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.rs<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.c<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.kt<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> 
linkedlist_queue.rb<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre> linkedlist_queue.zig<pre><code>[class]{LinkedListQueue}-[func]{}\n</code></pre>"},{"location":"chapter_stack_and_queue/queue/#2-implementation-based-on-an-array","title":"2. \u00a0 Implementation based on an array","text":"<p>Deleting the first element in an array has a time complexity of \\(O(n)\\), which would make the dequeue operation inefficient. However, this problem can be cleverly avoided as follows.</p> <p>We use a variable <code>front</code> to indicate the index of the front element and maintain a variable <code>size</code> to record the queue's length. Define <code>rear = front + size</code>, which points to the position immediately following the tail element.</p> <p>With this design, the effective interval of elements in the array is <code>[front, rear - 1]</code>. The implementation methods for various operations are shown in Figure 5-6.</p> <ul> <li>Enqueue operation: Assign the input element to the <code>rear</code> index and increase <code>size</code> by 1.</li> <li>Dequeue operation: Simply increase <code>front</code> by 1 and decrease <code>size</code> by 1.</li> </ul> <p>Both enqueue and dequeue require only a single step, each with a time complexity of \\(O(1)\\).</p> ArrayQueuepush()pop() <p></p> <p></p> <p></p> <p> Figure 5-6 \u00a0 Implementing Queue with Array for Enqueue and Dequeue Operations </p> <p>You might notice a problem: as enqueue and dequeue operations are continuously performed, both <code>front</code> and <code>rear</code> move to the right and will eventually reach the end of the array, where they can move no further. To resolve this, we can treat the array as a \"circular array\" by connecting the end of the array back to its beginning.</p> <p>In a circular array, <code>front</code> or <code>rear</code> needs to loop back to the start of the array upon reaching the end. 
This cyclical pattern can be achieved with a \"modulo operation\" as shown in the code below:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array_queue.py<pre><code>class ArrayQueue:\n \"\"\"Queue class based on circular array\"\"\"\n\n def __init__(self, size: int):\n \"\"\"Constructor\"\"\"\n self._nums: list[int] = [0] * size # Array for storing queue elements\n self._front: int = 0 # Front pointer, pointing to the front element\n self._size: int = 0 # Queue length\n\n def capacity(self) -> int:\n \"\"\"Get the capacity of the queue\"\"\"\n return len(self._nums)\n\n def size(self) -> int:\n \"\"\"Get the length of the queue\"\"\"\n return self._size\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the queue is empty\"\"\"\n return self._size == 0\n\n def push(self, num: int):\n \"\"\"Enqueue\"\"\"\n if self._size == self.capacity():\n raise IndexError(\"Queue is full\")\n # Calculate rear pointer, pointing to rear index + 1\n # Use modulo operation to wrap the rear pointer from the end of the array back to the start\n rear: int = (self._front + self._size) % self.capacity()\n # Add num to the rear\n self._nums[rear] = num\n self._size += 1\n\n def pop(self) -> int:\n \"\"\"Dequeue\"\"\"\n num: int = self.peek()\n # Move front pointer one position backward, returning to the head of the array if it exceeds the tail\n self._front = (self._front + 1) % self.capacity()\n self._size -= 1\n return num\n\n def peek(self) -> int:\n \"\"\"Access front element\"\"\"\n if self.is_empty():\n raise IndexError(\"Queue is empty\")\n return self._nums[self._front]\n\n def to_list(self) -> list[int]:\n \"\"\"Return array for printing\"\"\"\n res = [0] * self.size()\n j: int = self._front\n for i in range(self.size()):\n res[i] = self._nums[(j % self.capacity())]\n j += 1\n return res\n</code></pre> array_queue.cpp<pre><code>/* Queue class based on circular array */\nclass ArrayQueue {\n private:\n int *nums; // Array for storing queue elements\n int front; // Front 
pointer, pointing to the front element\n int queSize; // Queue length\n int queCapacity; // Queue capacity\n\n public:\n ArrayQueue(int capacity) {\n // Initialize an array\n nums = new int[capacity];\n queCapacity = capacity;\n front = queSize = 0;\n }\n\n ~ArrayQueue() {\n delete[] nums;\n }\n\n /* Get the capacity of the queue */\n int capacity() {\n return queCapacity;\n }\n\n /* Get the length of the queue */\n int size() {\n return queSize;\n }\n\n /* Determine if the queue is empty */\n bool isEmpty() {\n return size() == 0;\n }\n\n /* Enqueue */\n void push(int num) {\n if (queSize == queCapacity) {\n cout << \"Queue is full\" << endl;\n return;\n }\n // Calculate rear pointer, pointing to rear index + 1\n // Use modulo operation to wrap the rear pointer from the end of the array back to the start\n int rear = (front + queSize) % queCapacity;\n // Add num to the rear\n nums[rear] = num;\n queSize++;\n }\n\n /* Dequeue */\n int pop() {\n int num = peek();\n // Move front pointer one position backward, returning to the head of the array if it exceeds the tail\n front = (front + 1) % queCapacity;\n queSize--;\n return num;\n }\n\n /* Access front element */\n int peek() {\n if (isEmpty())\n throw out_of_range(\"Queue is empty\");\n return nums[front];\n }\n\n /* Convert array to Vector and return */\n vector<int> toVector() {\n // Only convert elements within valid length range\n vector<int> arr(queSize);\n for (int i = 0, j = front; i < queSize; i++, j++) {\n arr[i] = nums[j % queCapacity];\n }\n return arr;\n }\n};\n</code></pre> array_queue.java<pre><code>/* Queue class based on circular array */\nclass ArrayQueue {\n private int[] nums; // Array for storing queue elements\n private int front; // Front pointer, pointing to the front element\n private int queSize; // Queue length\n\n public ArrayQueue(int capacity) {\n nums = new int[capacity];\n front = queSize = 0;\n }\n\n /* Get the capacity of the queue */\n public int capacity() {\n return 
nums.length;\n }\n\n /* Get the length of the queue */\n public int size() {\n return queSize;\n }\n\n /* Determine if the queue is empty */\n public boolean isEmpty() {\n return queSize == 0;\n }\n\n /* Enqueue */\n public void push(int num) {\n if (queSize == capacity()) {\n System.out.println(\"Queue is full\");\n return;\n }\n // Calculate rear pointer, pointing to rear index + 1\n // Use modulo operation to wrap the rear pointer from the end of the array back to the start\n int rear = (front + queSize) % capacity();\n // Add num to the rear\n nums[rear] = num;\n queSize++;\n }\n\n /* Dequeue */\n public int pop() {\n int num = peek();\n // Move front pointer one position backward, returning to the head of the array if it exceeds the tail\n front = (front + 1) % capacity();\n queSize--;\n return num;\n }\n\n /* Access front element */\n public int peek() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return nums[front];\n }\n\n /* Return array */\n public int[] toArray() {\n // Only convert elements within valid length range\n int[] res = new int[queSize];\n for (int i = 0, j = front; i < queSize; i++, j++) {\n res[i] = nums[j % capacity()];\n }\n return res;\n }\n}\n</code></pre> array_queue.cs<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.go<pre><code>[class]{arrayQueue}-[func]{}\n</code></pre> array_queue.swift<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.js<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.ts<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.dart<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.rs<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.c<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.kt<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.rb<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> array_queue.zig<pre><code>[class]{ArrayQueue}-[func]{}\n</code></pre> <p>The above 
implementation of the queue still has its limitations: its length is fixed. However, this issue is not difficult to resolve. We can replace the array with a dynamic array that can expand itself if needed. Interested readers can try to implement this themselves.</p> <p>The comparison of the two implementations is consistent with that of the stack and is not repeated here.</p>"},{"location":"chapter_stack_and_queue/queue/#523-typical-applications-of-queue","title":"5.2.3 \u00a0 Typical applications of queue","text":"<ul> <li>Amazon orders: After shoppers place orders, these orders join a queue, and the system processes them in order. During events like Singles' Day, a massive number of orders are generated in a short time, making high concurrency a key challenge for engineers.</li> <li>Various to-do lists: Any scenario requiring a \"first-come, first-served\" functionality, such as a printer's task queue or a restaurant's food delivery queue, can effectively maintain the order of processing with a queue.</li> </ul>"},{"location":"chapter_stack_and_queue/stack/","title":"5.1 \u00a0 Stack","text":"<p>A stack is a linear data structure that follows the principle of Last-In-First-Out (LIFO).</p> <p>We can compare a stack to a pile of plates on a table. To access the bottom plate, one must first remove the plates on top. 
By replacing the plates with various types of elements (such as integers, characters, objects, etc.), we obtain the data structure known as a stack.</p> <p>As shown in Figure 5-1, we refer to the top of the pile of elements as the \"top of the stack\" and the bottom as the \"bottom of the stack.\" The operation of adding elements to the top of the stack is called \"push,\" and the operation of removing the top element is called \"pop.\"</p> <p></p> <p> Figure 5-1 \u00a0 Stack's last-in-first-out rule </p>"},{"location":"chapter_stack_and_queue/stack/#511-common-operations-on-stack","title":"5.1.1 \u00a0 Common operations on stack","text":"<p>The common operations on a stack are shown in Table 5-1. The specific method names depend on the programming language used. Here, we use <code>push()</code>, <code>pop()</code>, and <code>peek()</code> as examples.</p> <p> Table 5-1 \u00a0 Efficiency of stack operations </p> Method Description Time Complexity <code>push()</code> Push an element onto the stack (add to the top) \\(O(1)\\) <code>pop()</code> Pop the top element from the stack \\(O(1)\\) <code>peek()</code> Access the top element of the stack \\(O(1)\\) <p>Typically, we can directly use the stack class built into the programming language. However, some languages may not specifically provide a stack class. 
In these cases, we can use the language's \"array\" or \"linked list\" as a stack and ignore operations that are not related to stack logic in the program.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinZig stack.py<pre><code># Initialize the stack\n# Python does not have a built-in stack class, so a list can be used as a stack\nstack: list[int] = []\n\n# Push elements onto the stack\nstack.append(1)\nstack.append(3)\nstack.append(2)\nstack.append(5)\nstack.append(4)\n\n# Access the top element of the stack\npeek: int = stack[-1]\n\n# Pop an element from the stack\npop: int = stack.pop()\n\n# Get the length of the stack\nsize: int = len(stack)\n\n# Check if the stack is empty\nis_empty: bool = len(stack) == 0\n</code></pre> stack.cpp<pre><code>/* Initialize the stack */\nstack<int> stack;\n\n/* Push elements onto the stack */\nstack.push(1);\nstack.push(3);\nstack.push(2);\nstack.push(5);\nstack.push(4);\n\n/* Access the top element of the stack */\nint top = stack.top();\n\n/* Pop an element from the stack */\nstack.pop(); // No return value\n\n/* Get the length of the stack */\nint size = stack.size();\n\n/* Check if the stack is empty */\nbool empty = stack.empty();\n</code></pre> stack.java<pre><code>/* Initialize the stack */\nStack<Integer> stack = new Stack<>();\n\n/* Push elements onto the stack */\nstack.push(1);\nstack.push(3);\nstack.push(2);\nstack.push(5);\nstack.push(4);\n\n/* Access the top element of the stack */\nint peek = stack.peek();\n\n/* Pop an element from the stack */\nint pop = stack.pop();\n\n/* Get the length of the stack */\nint size = stack.size();\n\n/* Check if the stack is empty */\nboolean isEmpty = stack.isEmpty();\n</code></pre> stack.cs<pre><code>/* Initialize the stack */\nStack<int> stack = new();\n\n/* Push elements onto the stack */\nstack.Push(1);\nstack.Push(3);\nstack.Push(2);\nstack.Push(5);\nstack.Push(4);\n\n/* Access the top element of the stack */\nint peek = stack.Peek();\n\n/* Pop an element from the stack */\nint 
pop = stack.Pop();\n\n/* Get the length of the stack */\nint size = stack.Count;\n\n/* Check if the stack is empty */\nbool isEmpty = stack.Count == 0;\n</code></pre> stack_test.go<pre><code>/* Initialize the stack */\n// In Go, it is recommended to use a Slice as a stack\nvar stack []int\n\n/* Push elements onto the stack */\nstack = append(stack, 1)\nstack = append(stack, 3)\nstack = append(stack, 2)\nstack = append(stack, 5)\nstack = append(stack, 4)\n\n/* Access the top element of the stack */\npeek := stack[len(stack)-1]\n\n/* Pop an element from the stack */\npop := stack[len(stack)-1]\nstack = stack[:len(stack)-1]\n\n/* Get the length of the stack */\nsize := len(stack)\n\n/* Check if the stack is empty */\nisEmpty := len(stack) == 0\n</code></pre> stack.swift<pre><code>/* Initialize the stack */\n// Swift does not have a built-in stack class, so Array can be used as a stack\nvar stack: [Int] = []\n\n/* Push elements onto the stack */\nstack.append(1)\nstack.append(3)\nstack.append(2)\nstack.append(5)\nstack.append(4)\n\n/* Access the top element of the stack */\nlet peek = stack.last!\n\n/* Pop an element from the stack */\nlet pop = stack.removeLast()\n\n/* Get the length of the stack */\nlet size = stack.count\n\n/* Check if the stack is empty */\nlet isEmpty = stack.isEmpty\n</code></pre> stack.js<pre><code>/* Initialize the stack */\n// JavaScript does not have a built-in stack class, so Array can be used as a stack\nconst stack = [];\n\n/* Push elements onto the stack */\nstack.push(1);\nstack.push(3);\nstack.push(2);\nstack.push(5);\nstack.push(4);\n\n/* Access the top element of the stack */\nconst peek = stack[stack.length-1];\n\n/* Pop an element from the stack */\nconst pop = stack.pop();\n\n/* Get the length of the stack */\nconst size = stack.length;\n\n/* Check if the stack is empty */\nconst is_empty = stack.length === 0;\n</code></pre> stack.ts<pre><code>/* Initialize the stack */\n// TypeScript does not have a built-in stack class, so Array 
can be used as a stack\nconst stack: number[] = [];\n\n/* Push elements onto the stack */\nstack.push(1);\nstack.push(3);\nstack.push(2);\nstack.push(5);\nstack.push(4);\n\n/* Access the top element of the stack */\nconst peek = stack[stack.length - 1];\n\n/* Pop an element from the stack */\nconst pop = stack.pop();\n\n/* Get the length of the stack */\nconst size = stack.length;\n\n/* Check if the stack is empty */\nconst is_empty = stack.length === 0;\n</code></pre> stack.dart<pre><code>/* Initialize the stack */\n// Dart does not have a built-in stack class, so List can be used as a stack\nList<int> stack = [];\n\n/* Push elements onto the stack */\nstack.add(1);\nstack.add(3);\nstack.add(2);\nstack.add(5);\nstack.add(4);\n\n/* Access the top element of the stack */\nint peek = stack.last;\n\n/* Pop an element from the stack */\nint pop = stack.removeLast();\n\n/* Get the length of the stack */\nint size = stack.length;\n\n/* Check if the stack is empty */\nbool isEmpty = stack.isEmpty;\n</code></pre> stack.rs<pre><code>/* Initialize the stack */\n// Use Vec as a stack\nlet mut stack: Vec<i32> = Vec::new();\n\n/* Push elements onto the stack */\nstack.push(1);\nstack.push(3);\nstack.push(2);\nstack.push(5);\nstack.push(4);\n\n/* Access the top element of the stack */\nlet top = stack.last().unwrap();\n\n/* Pop an element from the stack */\nlet pop = stack.pop().unwrap();\n\n/* Get the length of the stack */\nlet size = stack.len();\n\n/* Check if the stack is empty */\nlet is_empty = stack.is_empty();\n</code></pre> stack.c<pre><code>// C does not provide a built-in stack\n</code></pre> stack.kt<pre><code>\n</code></pre> stack.zig<pre><code>\n</code></pre> Code Visualization <p> Full Screen ></p>"},{"location":"chapter_stack_and_queue/stack/#512-implementing-a-stack","title":"5.1.2 \u00a0 Implementing a stack","text":"<p>To gain a deeper understanding of how a stack operates, let's try implementing a stack class ourselves.</p> <p>A stack follows the principle 
of Last-In-First-Out, which means we can only add or remove elements at the top of the stack. However, both arrays and linked lists allow adding and removing elements at any position, so a stack can be seen as a restricted array or linked list. In other words, we can \"shield\" certain irrelevant operations of an array or linked list, aligning their external behavior with the characteristics of a stack.</p>"},{"location":"chapter_stack_and_queue/stack/#1-implementation-based-on-a-linked-list","title":"1. \u00a0 Implementation based on a linked list","text":"<p>When implementing a stack using a linked list, we can consider the head node of the list as the top of the stack and the tail node as the bottom of the stack.</p> <p>As shown in Figure 5-2, for the push operation, we simply insert elements at the head of the linked list. This method of node insertion is known as \"head insertion.\" For the pop operation, we just need to remove the head node from the list.</p> LinkedListStackpush()pop() <p></p> <p></p> <p></p> <p> Figure 5-2 \u00a0 Implementing Stack with Linked List for Push and Pop Operations </p> <p>Below is example code for implementing a stack based on a linked list:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig linkedlist_stack.py<pre><code>class LinkedListStack:\n \"\"\"Stack class based on linked list\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self._peek: ListNode | None = None\n self._size: int = 0\n\n def size(self) -> int:\n \"\"\"Get the length of the stack\"\"\"\n return self._size\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the stack is empty\"\"\"\n return self._size == 0\n\n def push(self, val: int):\n \"\"\"Push\"\"\"\n node = ListNode(val)\n node.next = self._peek\n self._peek = node\n self._size += 1\n\n def pop(self) -> int:\n \"\"\"Pop\"\"\"\n num = self.peek()\n self._peek = self._peek.next\n self._size -= 1\n return num\n\n def peek(self) -> int:\n \"\"\"Access stack top element\"\"\"\n if 
self.is_empty():\n raise IndexError(\"Stack is empty\")\n return self._peek.val\n\n def to_list(self) -> list[int]:\n \"\"\"Convert to a list for printing\"\"\"\n arr = []\n node = self._peek\n while node:\n arr.append(node.val)\n node = node.next\n arr.reverse()\n return arr\n</code></pre> linkedlist_stack.cpp<pre><code>/* Stack class based on linked list */\nclass LinkedListStack {\n private:\n ListNode *stackTop; // Use the head node as the top of the stack\n int stkSize; // Length of the stack\n\n public:\n LinkedListStack() {\n stackTop = nullptr;\n stkSize = 0;\n }\n\n ~LinkedListStack() {\n // Traverse the linked list, remove nodes, free memory\n freeMemoryLinkedList(stackTop);\n }\n\n /* Get the length of the stack */\n int size() {\n return stkSize;\n }\n\n /* Determine if the stack is empty */\n bool isEmpty() {\n return size() == 0;\n }\n\n /* Push */\n void push(int num) {\n ListNode *node = new ListNode(num);\n node->next = stackTop;\n stackTop = node;\n stkSize++;\n }\n\n /* Pop */\n int pop() {\n int num = top();\n ListNode *tmp = stackTop;\n stackTop = stackTop->next;\n // Free memory\n delete tmp;\n stkSize--;\n return num;\n }\n\n /* Access stack top element */\n int top() {\n if (isEmpty())\n throw out_of_range(\"Stack is empty\");\n return stackTop->val;\n }\n\n /* Convert the List to Array and return */\n vector<int> toVector() {\n ListNode *node = stackTop;\n vector<int> res(size());\n for (int i = res.size() - 1; i >= 0; i--) {\n res[i] = node->val;\n node = node->next;\n }\n return res;\n }\n};\n</code></pre> linkedlist_stack.java<pre><code>/* Stack class based on linked list */\nclass LinkedListStack {\n private ListNode stackPeek; // Use the head node as the top of the stack\n private int stkSize = 0; // Length of the stack\n\n public LinkedListStack() {\n stackPeek = null;\n }\n\n /* Get the length of the stack */\n public int size() {\n return stkSize;\n }\n\n /* Determine if the stack is empty */\n public boolean isEmpty() {\n return 
size() == 0;\n }\n\n /* Push */\n public void push(int num) {\n ListNode node = new ListNode(num);\n node.next = stackPeek;\n stackPeek = node;\n stkSize++;\n }\n\n /* Pop */\n public int pop() {\n int num = peek();\n stackPeek = stackPeek.next;\n stkSize--;\n return num;\n }\n\n /* Access stack top element */\n public int peek() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return stackPeek.val;\n }\n\n /* Convert the List to Array and return */\n public int[] toArray() {\n ListNode node = stackPeek;\n int[] res = new int[size()];\n for (int i = res.length - 1; i >= 0; i--) {\n res[i] = node.val;\n node = node.next;\n }\n return res;\n }\n}\n</code></pre> linkedlist_stack.cs<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.go<pre><code>[class]{linkedListStack}-[func]{}\n</code></pre> linkedlist_stack.swift<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.js<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.ts<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.dart<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.rs<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.c<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.kt<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.rb<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre> linkedlist_stack.zig<pre><code>[class]{LinkedListStack}-[func]{}\n</code></pre>"},{"location":"chapter_stack_and_queue/stack/#2-implementation-based-on-an-array","title":"2. \u00a0 Implementation based on an array","text":"<p>When implementing a stack using an array, we can consider the end of the array as the top of the stack. 
As shown in Figure 5-3, push and pop operations correspond to adding and removing elements at the end of the array, respectively, both with a time complexity of \\(O(1)\\).</p> ArrayStackpush()pop() <p></p> <p></p> <p></p> <p> Figure 5-3 \u00a0 Implementing Stack with Array for Push and Pop Operations </p> <p>Since the elements to be pushed onto the stack may continuously increase, we can use a dynamic array, thus avoiding the need to handle array expansion ourselves. Here is an example code:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array_stack.py<pre><code>class ArrayStack:\n \"\"\"Stack class based on array\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n self._stack: list[int] = []\n\n def size(self) -> int:\n \"\"\"Get the length of the stack\"\"\"\n return len(self._stack)\n\n def is_empty(self) -> bool:\n \"\"\"Determine if the stack is empty\"\"\"\n return self.size() == 0\n\n def push(self, item: int):\n \"\"\"Push\"\"\"\n self._stack.append(item)\n\n def pop(self) -> int:\n \"\"\"Pop\"\"\"\n if self.is_empty():\n raise IndexError(\"Stack is empty\")\n return self._stack.pop()\n\n def peek(self) -> int:\n \"\"\"Access stack top element\"\"\"\n if self.is_empty():\n raise IndexError(\"Stack is empty\")\n return self._stack[-1]\n\n def to_list(self) -> list[int]:\n \"\"\"Return array for printing\"\"\"\n return self._stack\n</code></pre> array_stack.cpp<pre><code>/* Stack class based on array */\nclass ArrayStack {\n private:\n vector<int> stack;\n\n public:\n /* Get the length of the stack */\n int size() {\n return stack.size();\n }\n\n /* Determine if the stack is empty */\n bool isEmpty() {\n return stack.size() == 0;\n }\n\n /* Push */\n void push(int num) {\n stack.push_back(num);\n }\n\n /* Pop */\n int pop() {\n int num = top();\n stack.pop_back();\n return num;\n }\n\n /* Access stack top element */\n int top() {\n if (isEmpty())\n throw out_of_range(\"Stack is empty\");\n return stack.back();\n }\n\n /* Return Vector */\n 
vector<int> toVector() {\n return stack;\n }\n};\n</code></pre> array_stack.java<pre><code>/* Stack class based on array */\nclass ArrayStack {\n private ArrayList<Integer> stack;\n\n public ArrayStack() {\n // Initialize the list (dynamic array)\n stack = new ArrayList<>();\n }\n\n /* Get the length of the stack */\n public int size() {\n return stack.size();\n }\n\n /* Determine if the stack is empty */\n public boolean isEmpty() {\n return size() == 0;\n }\n\n /* Push */\n public void push(int num) {\n stack.add(num);\n }\n\n /* Pop */\n public int pop() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return stack.remove(size() - 1);\n }\n\n /* Access stack top element */\n public int peek() {\n if (isEmpty())\n throw new IndexOutOfBoundsException();\n return stack.get(size() - 1);\n }\n\n /* Convert the List to Array and return */\n public Object[] toArray() {\n return stack.toArray();\n }\n}\n</code></pre> array_stack.cs<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.go<pre><code>[class]{arrayStack}-[func]{}\n</code></pre> array_stack.swift<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.js<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.ts<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.dart<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.rs<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.c<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.kt<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.rb<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre> array_stack.zig<pre><code>[class]{ArrayStack}-[func]{}\n</code></pre>"},{"location":"chapter_stack_and_queue/stack/#513-comparison-of-the-two-implementations","title":"5.1.3 \u00a0 Comparison of the two implementations","text":"<p>Supported Operations</p> <p>Both implementations support all the operations defined in a stack. 
The array implementation additionally supports random access, but this is beyond the scope of a stack definition and is generally not used.</p> <p>Time Efficiency</p> <p>In the array-based implementation, both push and pop operations occur in pre-allocated contiguous memory, which has good cache locality and therefore higher efficiency. However, if the push operation exceeds the array capacity, it triggers a resizing mechanism, making the time complexity of that push operation \\(O(n)\\).</p> <p>In the linked list implementation, the list can grow flexibly, without the efficiency loss that array expansion incurs. However, the push operation requires initializing a node object and modifying pointers, so its efficiency is relatively lower. If the elements being pushed are already node objects, then the initialization step can be skipped, improving efficiency.</p> <p>Thus, when the elements for push and pop operations are basic data types like <code>int</code> or <code>double</code>, we can draw the following conclusions:</p> <ul> <li>The array-based stack implementation's efficiency decreases during expansion, but since expansion is a low-frequency operation, its average efficiency is higher.</li> <li>The linked list-based stack implementation offers more stable performance.</li> </ul> <p>Space Efficiency</p> <p>When initializing a list, the system allocates an \"initial capacity,\" which might exceed the actual need; moreover, the expansion mechanism usually increases capacity by a specific factor (like doubling), which may also exceed the actual need. Therefore, the array-based stack might waste some space.</p> <p>However, since linked list nodes require extra space for storing pointers, the space occupied by linked list nodes is relatively larger.</p> <p>In summary, we cannot simply determine which implementation is more memory-efficient. 
It requires analysis based on specific circumstances.</p>"},{"location":"chapter_stack_and_queue/stack/#514-typical-applications-of-stack","title":"5.1.4 \u00a0 Typical applications of stack","text":"<ul> <li>Back and forward in browsers, undo and redo in software. Every time we open a new webpage, the browser pushes the previous page onto the stack, allowing us to go back to the previous page through the back operation, which is essentially a pop operation. To support both back and forward, two stacks are needed to work together.</li> <li>Memory management in programs. Each time a function is called, the system adds a stack frame at the top of the stack to record the function's context information. In recursive functions, the downward recursion phase keeps pushing onto the stack, while the upward backtracking phase keeps popping from the stack.</li> </ul>"},{"location":"chapter_stack_and_queue/summary/","title":"5.4 \u00a0 Summary","text":""},{"location":"chapter_stack_and_queue/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>Stack is a data structure that follows the Last-In-First-Out (LIFO) principle and can be implemented using arrays or linked lists.</li> <li>In terms of time efficiency, the array implementation of the stack has a higher average efficiency. However, during expansion, the time complexity for a single push operation can degrade to \\(O(n)\\). In contrast, the linked list implementation of a stack offers more stable efficiency.</li> <li>Regarding space efficiency, the array implementation of the stack may lead to a certain degree of space wastage. However, it's important to note that the memory space occupied by nodes in a linked list is generally larger than that for elements in an array.</li> <li>A queue is a data structure that follows the First-In-First-Out (FIFO) principle, and it can also be implemented using arrays or linked lists. 
The conclusions regarding time and space efficiency for queues are similar to those for stacks.</li> <li>A double-ended queue (deque) is a more flexible type of queue that allows adding and removing elements at both ends.</li> </ul>"},{"location":"chapter_stack_and_queue/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: Is the browser's forward and backward functionality implemented with a doubly linked list?</p> <p>A browser's forward and backward navigation is essentially a manifestation of the \"stack\" concept. When a user visits a new page, the page is added to the top of the stack; when they click the back button, the page is popped from the top of the stack. A double-ended queue (deque) can conveniently implement some additional operations, as mentioned in the \"Double-Ended Queue\" section.</p> <p>Q: After popping from a stack, is it necessary to free the memory of the popped node?</p> <p>If the popped node will still be used later, it's not necessary to free its memory. In languages like Java and Python that have automatic garbage collection, manual memory release is not necessary; in C and C++, manual memory release is required.</p> <p>Q: A double-ended queue seems like two stacks joined together. What are its uses?</p> <p>A double-ended queue, which is a combination of a stack and a queue or two stacks joined together, exhibits both stack and queue logic. 
Thus, it can implement all applications of stacks and queues while offering more flexibility.</p> <p>Q: How exactly are undo and redo implemented?</p> <p>Undo and redo operations are implemented using two stacks: Stack <code>A</code> for undo and Stack <code>B</code> for redo.</p> <ol> <li>Each time a user performs an operation, it is pushed onto Stack <code>A</code>, and Stack <code>B</code> is cleared.</li> <li>When the user executes an \"undo\", the most recent operation is popped from Stack <code>A</code> and pushed onto Stack <code>B</code>.</li> <li>When the user executes a \"redo\", the most recent operation is popped from Stack <code>B</code> and pushed back onto Stack <code>A</code>.</li> </ol>"},{"location":"chapter_tree/","title":"Chapter 7. \u00a0 Tree","text":"<p>Abstract</p> <p>The towering tree, vibrant with its deep roots and lush leaves, branches spreading wide.</p> <p>It vividly illustrates the concept of divide-and-conquer in data.</p>"},{"location":"chapter_tree/#chapter-contents","title":"Chapter contents","text":"<ul> <li>7.1 \u00a0 Binary tree</li> <li>7.2 \u00a0 Binary tree traversal</li> <li>7.3 \u00a0 Array representation of tree</li> <li>7.4 \u00a0 Binary search tree</li> <li>7.5 \u00a0 AVL tree *</li> <li>7.6 \u00a0 Summary</li> </ul>"},{"location":"chapter_tree/array_representation_of_tree/","title":"7.3 \u00a0 Array representation of binary trees","text":"<p>Under the linked list representation, the storage unit of a binary tree is a node <code>TreeNode</code>, with nodes connected by pointers. The basic operations of binary trees under the linked list representation were introduced in the previous section.</p> <p>So, can we use an array to represent a binary tree? The answer is yes.</p>"},{"location":"chapter_tree/array_representation_of_tree/#731-representing-perfect-binary-trees","title":"7.3.1 \u00a0 Representing perfect binary trees","text":"<p>Let's analyze a simple case first. 
Given a perfect binary tree, we store all nodes in an array according to the order of level-order traversal, where each node corresponds to a unique array index.</p> <p>Based on the characteristics of level-order traversal, we can deduce a \"mapping formula\" between the index of a parent node and its children: If a node's index is \\(i\\), then the index of its left child is \\(2i + 1\\) and the right child is \\(2i + 2\\). Figure 7-12 shows the mapping relationship between the indices of various nodes.</p> <p></p> <p> Figure 7-12 \u00a0 Array representation of a perfect binary tree </p> <p>The mapping formula plays a role similar to the node references (pointers) in linked lists. Given any node in the array, we can access its left (right) child node using the mapping formula.</p>"},{"location":"chapter_tree/array_representation_of_tree/#732-representing-any-binary-tree","title":"7.3.2 \u00a0 Representing any binary tree","text":"<p>Perfect binary trees are a special case; there are often many <code>None</code> values in the middle levels of a binary tree. Since the sequence of level-order traversal does not include these <code>None</code> values, we cannot solely rely on this sequence to deduce the number and distribution of <code>None</code> values. This means that multiple binary tree structures can match the same level-order traversal sequence.</p> <p>As shown in Figure 7-13, given a non-perfect binary tree, the above method of array representation fails.</p> <p></p> <p> Figure 7-13 \u00a0 Level-order traversal sequence corresponds to multiple binary tree possibilities </p> <p>To solve this problem, we can consider explicitly writing out all <code>None</code> values in the level-order traversal sequence. As shown in Figure 7-14, after this treatment, the level-order traversal sequence can uniquely represent a binary tree. 
Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig <pre><code># Array representation of a binary tree\n# Using None to represent empty slots\ntree = [1, 2, 3, 4, None, 6, 7, 8, 9, None, None, 12, None, None, 15]\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using the maximum integer value INT_MAX to mark empty slots\nvector<int> tree = {1, 2, 3, 4, INT_MAX, 6, 7, 8, 9, INT_MAX, INT_MAX, 12, INT_MAX, INT_MAX, 15};\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using the Integer wrapper class allows for using null to mark empty slots\nInteger[] tree = { 1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15 };\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using nullable int (int?) allows for using null to mark empty slots\nint?[] tree = [1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15];\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using an any type slice, allowing for nil to mark empty slots\ntree := []any{1, 2, 3, 4, nil, 6, 7, 8, 9, nil, nil, 12, nil, nil, 15}\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using optional Int (Int?) allows for using nil to mark empty slots\nlet tree: [Int?] = [1, 2, 3, 4, nil, 6, 7, 8, 9, nil, nil, 12, nil, nil, 15]\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using null to represent empty slots\nlet tree = [1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15];\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using null to represent empty slots\nlet tree: (number | null)[] = [1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15];\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using nullable int (int?) 
allows for using null to mark empty slots\nList<int?> tree = [1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15];\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using None to mark empty slots\nlet tree = [Some(1), Some(2), Some(3), Some(4), None, Some(6), Some(7), Some(8), Some(9), None, None, Some(12), None, None, Some(15)];\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using the maximum int value to mark empty slots, therefore, node values must not be INT_MAX\nint tree[] = {1, 2, 3, 4, INT_MAX, 6, 7, 8, 9, INT_MAX, INT_MAX, 12, INT_MAX, INT_MAX, 15};\n</code></pre> <pre><code>/* Array representation of a binary tree */\n// Using null to represent empty slots\nval tree = mutableListOf( 1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15 )\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p></p> <p> Figure 7-14 \u00a0 Array representation of any type of binary tree </p> <p>It's worth noting that complete binary trees are very suitable for array representation. Recalling the definition of a complete binary tree, <code>None</code> appears only at the bottom level and towards the right, meaning all <code>None</code> values definitely appear at the end of the level-order traversal sequence.</p> <p>This means that when using an array to represent a complete binary tree, it's possible to omit storing all <code>None</code> values, which is very convenient. 
Figure 7-15 gives an example.</p> <p></p> <p> Figure 7-15 \u00a0 Array representation of a complete binary tree </p> <p>The following code implements a binary tree based on array representation, including the following operations:</p> <ul> <li>Given a node, obtain its value, left (right) child node, and parent node.</li> <li>Obtain the pre-order, in-order, post-order, and level-order traversal sequences.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig array_binary_tree.py<pre><code>class ArrayBinaryTree:\n \"\"\"Array-based binary tree class\"\"\"\n\n def __init__(self, arr: list[int | None]):\n \"\"\"Constructor\"\"\"\n self._tree = list(arr)\n\n def size(self):\n \"\"\"List capacity\"\"\"\n return len(self._tree)\n\n def val(self, i: int) -> int | None:\n \"\"\"Get the value of the node at index i\"\"\"\n # If the index is out of bounds, return None, representing a vacancy\n if i < 0 or i >= self.size():\n return None\n return self._tree[i]\n\n def left(self, i: int) -> int | None:\n \"\"\"Get the index of the left child of the node at index i\"\"\"\n return 2 * i + 1\n\n def right(self, i: int) -> int | None:\n \"\"\"Get the index of the right child of the node at index i\"\"\"\n return 2 * i + 2\n\n def parent(self, i: int) -> int | None:\n \"\"\"Get the index of the parent of the node at index i\"\"\"\n return (i - 1) // 2\n\n def level_order(self) -> list[int]:\n \"\"\"Level-order traversal\"\"\"\n self.res = []\n # Traverse array\n for i in range(self.size()):\n if self.val(i) is not None:\n self.res.append(self.val(i))\n return self.res\n\n def dfs(self, i: int, order: str):\n \"\"\"Depth-first traversal\"\"\"\n if self.val(i) is None:\n return\n # Pre-order traversal\n if order == \"pre\":\n self.res.append(self.val(i))\n self.dfs(self.left(i), order)\n # In-order traversal\n if order == \"in\":\n self.res.append(self.val(i))\n self.dfs(self.right(i), order)\n # Post-order traversal\n if order == \"post\":\n self.res.append(self.val(i))\n\n def 
pre_order(self) -> list[int]:\n \"\"\"Pre-order traversal\"\"\"\n self.res = []\n self.dfs(0, order=\"pre\")\n return self.res\n\n def in_order(self) -> list[int]:\n \"\"\"In-order traversal\"\"\"\n self.res = []\n self.dfs(0, order=\"in\")\n return self.res\n\n def post_order(self) -> list[int]:\n \"\"\"Post-order traversal\"\"\"\n self.res = []\n self.dfs(0, order=\"post\")\n return self.res\n</code></pre> array_binary_tree.cpp<pre><code>/* Array-based binary tree class */\nclass ArrayBinaryTree {\n public:\n /* Constructor */\n ArrayBinaryTree(vector<int> arr) {\n tree = arr;\n }\n\n /* List capacity */\n int size() {\n return tree.size();\n }\n\n /* Get the value of the node at index i */\n int val(int i) {\n // If index is out of bounds, return INT_MAX, representing a null\n if (i < 0 || i >= size())\n return INT_MAX;\n return tree[i];\n }\n\n /* Get the index of the left child of the node at index i */\n int left(int i) {\n return 2 * i + 1;\n }\n\n /* Get the index of the right child of the node at index i */\n int right(int i) {\n return 2 * i + 2;\n }\n\n /* Get the index of the parent of the node at index i */\n int parent(int i) {\n return (i - 1) / 2;\n }\n\n /* Level-order traversal */\n vector<int> levelOrder() {\n vector<int> res;\n // Traverse array\n for (int i = 0; i < size(); i++) {\n if (val(i) != INT_MAX)\n res.push_back(val(i));\n }\n return res;\n }\n\n /* Pre-order traversal */\n vector<int> preOrder() {\n vector<int> res;\n dfs(0, \"pre\", res);\n return res;\n }\n\n /* In-order traversal */\n vector<int> inOrder() {\n vector<int> res;\n dfs(0, \"in\", res);\n return res;\n }\n\n /* Post-order traversal */\n vector<int> postOrder() {\n vector<int> res;\n dfs(0, \"post\", res);\n return res;\n }\n\n private:\n vector<int> tree;\n\n /* Depth-first traversal */\n void dfs(int i, string order, vector<int> &res) {\n // If it is an empty spot, return\n if (val(i) == INT_MAX)\n return;\n // Pre-order traversal\n if (order == \"pre\")\n 
res.push_back(val(i));\n dfs(left(i), order, res);\n // In-order traversal\n if (order == \"in\")\n res.push_back(val(i));\n dfs(right(i), order, res);\n // Post-order traversal\n if (order == \"post\")\n res.push_back(val(i));\n }\n};\n</code></pre> array_binary_tree.java<pre><code>/* Array-based binary tree class */\nclass ArrayBinaryTree {\n private List<Integer> tree;\n\n /* Constructor */\n public ArrayBinaryTree(List<Integer> arr) {\n tree = new ArrayList<>(arr);\n }\n\n /* List capacity */\n public int size() {\n return tree.size();\n }\n\n /* Get the value of the node at index i */\n public Integer val(int i) {\n // If the index is out of bounds, return null, representing an empty spot\n if (i < 0 || i >= size())\n return null;\n return tree.get(i);\n }\n\n /* Get the index of the left child of the node at index i */\n public Integer left(int i) {\n return 2 * i + 1;\n }\n\n /* Get the index of the right child of the node at index i */\n public Integer right(int i) {\n return 2 * i + 2;\n }\n\n /* Get the index of the parent of the node at index i */\n public Integer parent(int i) {\n return (i - 1) / 2;\n }\n\n /* Level-order traversal */\n public List<Integer> levelOrder() {\n List<Integer> res = new ArrayList<>();\n // Traverse array\n for (int i = 0; i < size(); i++) {\n if (val(i) != null)\n res.add(val(i));\n }\n return res;\n }\n\n /* Depth-first traversal */\n private void dfs(Integer i, String order, List<Integer> res) {\n // If it is an empty spot, return\n if (val(i) == null)\n return;\n // Pre-order traversal\n if (\"pre\".equals(order))\n res.add(val(i));\n dfs(left(i), order, res);\n // In-order traversal\n if (\"in\".equals(order))\n res.add(val(i));\n dfs(right(i), order, res);\n // Post-order traversal\n if (\"post\".equals(order))\n res.add(val(i));\n }\n\n /* Pre-order traversal */\n public List<Integer> preOrder() {\n List<Integer> res = new ArrayList<>();\n dfs(0, \"pre\", res);\n return res;\n }\n\n /* In-order traversal */\n public 
List<Integer> inOrder() {\n List<Integer> res = new ArrayList<>();\n dfs(0, \"in\", res);\n return res;\n }\n\n /* Post-order traversal */\n public List<Integer> postOrder() {\n List<Integer> res = new ArrayList<>();\n dfs(0, \"post\", res);\n return res;\n }\n}\n</code></pre> array_binary_tree.cs<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.go<pre><code>[class]{arrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.swift<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.js<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.ts<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.dart<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.rs<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.c<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.kt<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.rb<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre> array_binary_tree.zig<pre><code>[class]{ArrayBinaryTree}-[func]{}\n</code></pre>"},{"location":"chapter_tree/array_representation_of_tree/#733-advantages-and-limitations","title":"7.3.3 \u00a0 Advantages and limitations","text":"<p>The array representation of binary trees has the following advantages:</p> <ul> <li>Arrays are stored in contiguous memory spaces, which is cache-friendly and allows for faster access and traversal.</li> <li>It does not require storing pointers, which saves space.</li> <li>It allows random access to nodes.</li> </ul> <p>However, the array representation also has some limitations:</p> <ul> <li>Array storage requires contiguous memory space, so it is not suitable for storing trees with a large amount of data.</li> <li>Adding or deleting nodes requires array insertion and deletion operations, which are less efficient.</li> <li>When there are many <code>None</code> values in the 
binary tree, the proportion of node data contained in the array is low, leading to lower space utilization.</li> </ul>"},{"location":"chapter_tree/avl_tree/","title":"7.5 \u00a0 AVL tree *","text":"<p>In the \"Binary Search Tree\" section, we mentioned that after multiple insertions and removals, a binary search tree might degrade to a linked list. In such cases, the time complexity of all operations degrades from \\(O(\\log n)\\) to \\(O(n)\\).</p> <p>As shown in Figure 7-24, after two node removal operations, this binary search tree will degrade into a linked list.</p> <p></p> <p> Figure 7-24 \u00a0 Degradation of an AVL tree after removing nodes </p> <p>For example, in the perfect binary tree shown in Figure 7-25, after inserting two nodes, the tree will lean heavily to the left, and the time complexity of search operations will also degrade.</p> <p></p> <p> Figure 7-25 \u00a0 Degradation of an AVL tree after inserting nodes </p> <p>In 1962, G. M. Adelson-Velsky and E. M. Landis proposed the AVL Tree in their paper \"An algorithm for the organization of information\". The paper detailed a series of operations to ensure that after continuously adding and removing nodes, the AVL tree would not degrade, thus maintaining the time complexity of various operations at \\(O(\\log n)\\) level. In other words, in scenarios where frequent additions, removals, searches, and modifications are needed, the AVL tree can always maintain efficient data operation performance, which has great application value.</p>"},{"location":"chapter_tree/avl_tree/#751-common-terminology-in-avl-trees","title":"7.5.1 \u00a0 Common terminology in AVL trees","text":"<p>An AVL tree is both a binary search tree and a balanced binary tree, satisfying all properties of these two types of binary trees, hence it is a balanced binary search tree.</p>"},{"location":"chapter_tree/avl_tree/#1-node-height","title":"1. 
\u00a0 Node height","text":"<p>Since the operations related to AVL trees require obtaining node heights, we need to add a <code>height</code> variable to the node class:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig <pre><code>class TreeNode:\n \"\"\"AVL tree node\"\"\"\n def __init__(self, val: int):\n self.val: int = val # Node value\n self.height: int = 0 # Node height\n self.left: TreeNode | None = None # Left child reference\n self.right: TreeNode | None = None # Right child reference\n</code></pre> <pre><code>/* AVL tree node */\nstruct TreeNode {\n int val{}; // Node value\n int height = 0; // Node height\n TreeNode *left{}; // Left child\n TreeNode *right{}; // Right child\n TreeNode() = default;\n explicit TreeNode(int x) : val(x){}\n};\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode {\n public int val; // Node value\n public int height; // Node height\n public TreeNode left; // Left child\n public TreeNode right; // Right child\n public TreeNode(int x) { val = x; }\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode(int? x) {\n public int? val = x; // Node value\n public int height; // Node height\n public TreeNode? left; // Left child reference\n public TreeNode? right; // Right child reference\n}\n</code></pre> <pre><code>/* AVL tree node */\ntype TreeNode struct {\n Val int // Node value\n Height int // Node height\n Left *TreeNode // Left child reference\n Right *TreeNode // Right child reference\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode {\n var val: Int // Node value\n var height: Int // Node height\n var left: TreeNode? // Left child\n var right: TreeNode? // Right child\n\n init(x: Int) {\n val = x\n height = 0\n }\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode {\n val; // Node value\n height; // Node height\n left; // Left child pointer\n right; // Right child pointer\n constructor(val, left, right, height) {\n this.val = val === undefined ? 
0 : val;\n this.height = height === undefined ? 0 : height;\n this.left = left === undefined ? null : left;\n this.right = right === undefined ? null : right;\n }\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode {\n val: number; // Node value\n height: number; // Node height\n left: TreeNode | null; // Left child pointer\n right: TreeNode | null; // Right child pointer\n constructor(val?: number, height?: number, left?: TreeNode | null, right?: TreeNode | null) {\n this.val = val === undefined ? 0 : val;\n this.height = height === undefined ? 0 : height; \n this.left = left === undefined ? null : left; \n this.right = right === undefined ? null : right; \n }\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode {\n int val; // Node value\n int height; // Node height\n TreeNode? left; // Left child\n TreeNode? right; // Right child\n TreeNode(this.val, [this.height = 0, this.left, this.right]);\n}\n</code></pre> <pre><code>use std::rc::Rc;\nuse std::cell::RefCell;\n\n/* AVL tree node */\nstruct TreeNode {\n val: i32, // Node value\n height: i32, // Node height\n left: Option<Rc<RefCell<TreeNode>>>, // Left child\n right: Option<Rc<RefCell<TreeNode>>>, // Right child\n}\n\nimpl TreeNode {\n /* Constructor */\n fn new(val: i32) -> Rc<RefCell<Self>> {\n Rc::new(RefCell::new(Self {\n val,\n height: 0,\n left: None,\n right: None\n }))\n }\n}\n</code></pre> <pre><code>/* AVL tree node */\ntypedef struct TreeNode {\n int val;\n int height;\n struct TreeNode *left;\n struct TreeNode *right;\n} TreeNode;\n\n/* Constructor */\nTreeNode *newTreeNode(int val) {\n TreeNode *node;\n\n node = (TreeNode *)malloc(sizeof(TreeNode));\n node->val = val;\n node->height = 0;\n node->left = NULL;\n node->right = NULL;\n return node;\n}\n</code></pre> <pre><code>/* AVL tree node */\nclass TreeNode(val _val: Int) { // Node value\n var height: Int = 0 // Node height\n var left: TreeNode? = null // Left child\n var right: TreeNode? 
= null // Right child\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p>The \"node height\" refers to the distance from that node to its farthest leaf node, i.e., the number of \"edges\" passed. It is important to note that the height of a leaf node is \\(0\\), and the height of a null node is \\(-1\\). We will create two utility functions for getting and updating the height of a node:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def height(self, node: TreeNode | None) -> int:\n \"\"\"Get node height\"\"\"\n # Empty node height is -1, leaf node height is 0\n if node is not None:\n return node.height\n return -1\n\ndef update_height(self, node: TreeNode | None):\n \"\"\"Update node height\"\"\"\n # Node height equals the height of the tallest subtree + 1\n node.height = max([self.height(node.left), self.height(node.right)]) + 1\n</code></pre> avl_tree.cpp<pre><code>/* Get node height */\nint height(TreeNode *node) {\n // Empty node height is -1, leaf node height is 0\n return node == nullptr ? -1 : node->height;\n}\n\n/* Update node height */\nvoid updateHeight(TreeNode *node) {\n // Node height equals the height of the tallest subtree + 1\n node->height = max(height(node->left), height(node->right)) + 1;\n}\n</code></pre> avl_tree.java<pre><code>/* Get node height */\nint height(TreeNode node) {\n // Empty node height is -1, leaf node height is 0\n return node == null ? 
-1 : node.height;\n}\n\n/* Update node height */\nvoid updateHeight(TreeNode node) {\n // Node height equals the height of the tallest subtree + 1\n node.height = Math.max(height(node.left), height(node.right)) + 1;\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{Height}\n\n[class]{AVLTree}-[func]{UpdateHeight}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{height}\n\n[class]{aVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{update_height}\n</code></pre> avl_tree.c<pre><code>[class]{}-[func]{height}\n\n[class]{}-[func]{updateHeight}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{update_height}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{height}\n\n[class]{AVLTree}-[func]{updateHeight}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#2-node-balance-factor","title":"2. \u00a0 Node balance factor","text":"<p>The balance factor of a node is defined as the height of the node's left subtree minus the height of its right subtree, with the balance factor of a null node defined as \\(0\\). 
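The definitions above — node height, and balance factor as left subtree height minus right subtree height — can be sanity-checked on a tiny left-leaning tree. The sketch below is a minimal, standalone Python illustration using plain functions rather than the book's AVLTree class:

```python
class TreeNode:
    """Minimal node with a cached height, mirroring the book's AVL node"""
    def __init__(self, val: int):
        self.val = val
        self.height = 0
        self.left = None
        self.right = None

def height(node) -> int:
    # A null node has height -1, a leaf node has height 0
    return node.height if node is not None else -1

def update_height(node):
    # Node height equals the height of the taller subtree + 1
    node.height = max(height(node.left), height(node.right)) + 1

def balance_factor(node) -> int:
    # Balance factor = left subtree height - right subtree height (0 for null)
    if node is None:
        return 0
    return height(node.left) - height(node.right)

# Build the left-leaning chain 3 <- 2 <- 1 and update heights bottom-up
root = TreeNode(3)
root.left = TreeNode(2)
root.left.left = TreeNode(1)
update_height(root.left)
update_height(root)
print(height(root))          # 2
print(balance_factor(root))  # 2, i.e. |f| > 1: node 3 is unbalanced
```

Since the chain leans entirely to the left, node 3's balance factor exceeds 1, which is exactly the situation the rotations below are designed to repair.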
We will also encapsulate the functionality of obtaining the node balance factor into a function for easy use later on:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def balance_factor(self, node: TreeNode | None) -> int:\n \"\"\"Get balance factor\"\"\"\n # Empty node balance factor is 0\n if node is None:\n return 0\n # Node balance factor = left subtree height - right subtree height\n return self.height(node.left) - self.height(node.right)\n</code></pre> avl_tree.cpp<pre><code>/* Get balance factor */\nint balanceFactor(TreeNode *node) {\n // Empty node balance factor is 0\n if (node == nullptr)\n return 0;\n // Node balance factor = left subtree height - right subtree height\n return height(node->left) - height(node->right);\n}\n</code></pre> avl_tree.java<pre><code>/* Get balance factor */\nint balanceFactor(TreeNode node) {\n // Empty node balance factor is 0\n if (node == null)\n return 0;\n // Node balance factor = left subtree height - right subtree height\n return height(node.left) - height(node.right);\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{BalanceFactor}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{balance_factor}\n</code></pre> avl_tree.c<pre><code>[class]{}-[func]{balanceFactor}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{balance_factor}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{balanceFactor}\n</code></pre> <p>Tip</p> <p>Let the balance factor be \\(f\\), then the balance factor of any node in 
an AVL tree satisfies \\(-1 \\le f \\le 1\\).</p>"},{"location":"chapter_tree/avl_tree/#752-rotations-in-avl-trees","title":"7.5.2 \u00a0 Rotations in AVL trees","text":"<p>The characteristic feature of an AVL tree is the \"rotation\" operation, which can restore balance to an unbalanced node without affecting the in-order traversal sequence of the binary tree. In other words, the rotation operation can maintain the property of a \"binary search tree\" while also turning the tree back into a \"balanced binary tree\".</p> <p>We call nodes with an absolute balance factor \\(> 1\\) \"unbalanced nodes\". Depending on the type of imbalance, there are four kinds of rotations: right rotation, left rotation, right-left rotation, and left-right rotation. Below, we detail these rotation operations.</p>"},{"location":"chapter_tree/avl_tree/#1-right-rotation","title":"1. \u00a0 Right rotation","text":"<p>As shown in Figure 7-26, the first unbalanced node from the bottom up in the binary tree is \"node 3\". Focusing on the subtree with this unbalanced node as the root, denoted as <code>node</code>, and its left child as <code>child</code>, perform a \"right rotation\". 
After the right rotation, the subtree is balanced again while still maintaining the properties of a binary search tree.</p> <1><2><3><4> <p></p> <p></p> <p></p> <p></p> <p> Figure 7-26 \u00a0 Steps of right rotation </p> <p>As shown in Figure 7-27, when the <code>child</code> node has a right child (denoted as <code>grand_child</code>), a step needs to be added in the right rotation: set <code>grand_child</code> as the left child of <code>node</code>.</p> <p></p> <p> Figure 7-27 \u00a0 Right rotation with grand_child </p> <p>\"Right rotation\" is a figurative term; in practice, it is achieved by modifying node pointers, as shown in the following code:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def right_rotate(self, node: TreeNode | None) -> TreeNode | None:\n \"\"\"Right rotation operation\"\"\"\n child = node.left\n grand_child = child.right\n # Rotate node to the right around child\n child.right = node\n node.left = grand_child\n # Update node height\n self.update_height(node)\n self.update_height(child)\n # Return the root of the subtree after rotation\n return child\n</code></pre> avl_tree.cpp<pre><code>/* Right rotation operation */\nTreeNode *rightRotate(TreeNode *node) {\n TreeNode *child = node->left;\n TreeNode *grandChild = child->right;\n // Rotate node to the right around child\n child->right = node;\n node->left = grandChild;\n // Update node height\n updateHeight(node);\n updateHeight(child);\n // Return the root of the subtree after rotation\n return child;\n}\n</code></pre> avl_tree.java<pre><code>/* Right rotation operation */\nTreeNode rightRotate(TreeNode node) {\n TreeNode child = node.left;\n TreeNode grandChild = child.right;\n // Rotate node to the right around child\n child.right = node;\n node.left = grandChild;\n // Update node height\n updateHeight(node);\n updateHeight(child);\n // Return the root of the subtree after rotation\n return child;\n}\n</code></pre> 
avl_tree.cs<pre><code>[class]{AVLTree}-[func]{RightRotate}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{right_rotate}\n</code></pre> avl_tree.c<pre><code>[class]{}-[func]{rightRotate}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{right_rotate}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{rightRotate}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#2-left-rotation","title":"2. \u00a0 Left rotation","text":"<p>Correspondingly, if considering the \"mirror\" of the above unbalanced binary tree, the \"left rotation\" operation shown in Figure 7-28 needs to be performed.</p> <p></p> <p> Figure 7-28 \u00a0 Left rotation operation </p> <p>Similarly, as shown in Figure 7-29, when the <code>child</code> node has a left child (denoted as <code>grand_child</code>), a step needs to be added in the left rotation: set <code>grand_child</code> as the right child of <code>node</code>.</p> <p></p> <p> Figure 7-29 \u00a0 Left rotation with grand_child </p> <p>It can be observed that the right and left rotation operations are logically symmetrical, and they solve two symmetrical types of imbalance. 
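As a quick check that rotation restores balance without disturbing the binary search tree property, the sketch below applies a right rotation to a left-leaning chain and verifies that the in-order sequence is unchanged. It is a minimal standalone illustration with a bare TreeNode (heights omitted for brevity), not the book's AVLTree class:

```python
class TreeNode:
    """Bare node for demonstration (height bookkeeping omitted)"""
    def __init__(self, val: int):
        self.val = val
        self.left = None
        self.right = None

def right_rotate(node):
    # Lift the left child: child becomes the new root of this subtree
    child = node.left
    grand_child = child.right
    child.right = node
    node.left = grand_child
    return child

def in_order(node):
    # In-order traversal yields the keys of a BST in sorted order
    if node is None:
        return []
    return in_order(node.left) + [node.val] + in_order(node.right)

# Left-leaning chain: 3 -> 2 -> 1, so node 3 is unbalanced
root = TreeNode(3)
root.left = TreeNode(2)
root.left.left = TreeNode(1)

before = in_order(root)   # [1, 2, 3]
root = right_rotate(root)
after = in_order(root)    # [1, 2, 3]: in-order sequence is preserved
print(root.val)           # 2: node 2 is now the subtree root, tree is balanced
```

The mirrored check for a right-leaning chain and a left rotation behaves symmetrically.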
Based on symmetry, by replacing all <code>left</code> with <code>right</code>, and all <code>right</code> with <code>left</code> in the implementation code of right rotation, we can get the implementation code for left rotation:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def left_rotate(self, node: TreeNode | None) -> TreeNode | None:\n \"\"\"Left rotation operation\"\"\"\n child = node.right\n grand_child = child.left\n # Rotate node to the left around child\n child.left = node\n node.right = grand_child\n # Update node height\n self.update_height(node)\n self.update_height(child)\n # Return the root of the subtree after rotation\n return child\n</code></pre> avl_tree.cpp<pre><code>/* Left rotation operation */\nTreeNode *leftRotate(TreeNode *node) {\n TreeNode *child = node->right;\n TreeNode *grandChild = child->left;\n // Rotate node to the left around child\n child->left = node;\n node->right = grandChild;\n // Update node height\n updateHeight(node);\n updateHeight(child);\n // Return the root of the subtree after rotation\n return child;\n}\n</code></pre> avl_tree.java<pre><code>/* Left rotation operation */\nTreeNode leftRotate(TreeNode node) {\n TreeNode child = node.right;\n TreeNode grandChild = child.left;\n // Rotate node to the left around child\n child.left = node;\n node.right = grandChild;\n // Update node height\n updateHeight(node);\n updateHeight(child);\n // Return the root of the subtree after rotation\n return child;\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{LeftRotate}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{leftRotate}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre> 
avl_tree.rs<pre><code>[class]{AVLTree}-[func]{left_rotate}\n</code></pre> avl_tree.c<pre><code>[class]{}-[func]{leftRotate}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{left_rotate}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{leftRotate}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#3-right-left-rotation","title":"3. \u00a0 Right-left rotation","text":"<p>For the unbalanced node 3 shown in Figure 7-30, using either left or right rotation alone cannot restore balance to the subtree. In this case, a \"left rotation\" needs to be performed on <code>child</code> first, followed by a \"right rotation\" on <code>node</code>.</p> <p></p> <p> Figure 7-30 \u00a0 Right-left rotation </p>"},{"location":"chapter_tree/avl_tree/#4-left-right-rotation","title":"4. \u00a0 Left-right rotation","text":"<p>As shown in Figure 7-31, for the mirror case of the above unbalanced binary tree, a \"right rotation\" needs to be performed on <code>child</code> first, followed by a \"left rotation\" on <code>node</code>.</p> <p></p> <p> Figure 7-31 \u00a0 Left-right rotation </p>"},{"location":"chapter_tree/avl_tree/#5-choice-of-rotation","title":"5. 
\u00a0 Choice of rotation","text":"<p>The four kinds of imbalances shown in Figure 7-32 correspond to the cases described above, respectively requiring right rotation, left-right rotation, right-left rotation, and left rotation.</p> <p></p> <p> Figure 7-32 \u00a0 The four rotation cases of AVL tree </p> <p>As shown in Table 7-3, we determine which of the above cases an unbalanced node belongs to by judging the sign of the balance factor of the unbalanced node and its higher-side child's balance factor.</p> <p> Table 7-3 \u00a0 Conditions for Choosing Among the Four Rotation Cases </p> Balance factor of unbalanced node Balance factor of child node Rotation method to use \\(> 1\\) (Left-leaning tree) \\(\\geq 0\\) Right rotation \\(> 1\\) (Left-leaning tree) \\(<0\\) Left rotation then right rotation \\(< -1\\) (Right-leaning tree) \\(\\leq 0\\) Left rotation \\(< -1\\) (Right-leaning tree) \\(>0\\) Right rotation then left rotation <p>For convenience, we encapsulate the rotation operations into a function. With this function, we can perform rotations on various kinds of imbalances, restoring balance to unbalanced nodes. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def rotate(self, node: TreeNode | None) -> TreeNode | None:\n \"\"\"Perform rotation operation to restore balance to the subtree\"\"\"\n # Get the balance factor of node\n balance_factor = self.balance_factor(node)\n # Left-leaning tree\n if balance_factor > 1:\n if self.balance_factor(node.left) >= 0:\n # Right rotation\n return self.right_rotate(node)\n else:\n # First left rotation then right rotation\n node.left = self.left_rotate(node.left)\n return self.right_rotate(node)\n # Right-leaning tree\n elif balance_factor < -1:\n if self.balance_factor(node.right) <= 0:\n # Left rotation\n return self.left_rotate(node)\n else:\n # First right rotation then left rotation\n node.right = self.right_rotate(node.right)\n return self.left_rotate(node)\n # Balanced tree, no rotation needed, return\n return node\n</code></pre> avl_tree.cpp<pre><code>/* Perform rotation operation to restore balance to the subtree */\nTreeNode *rotate(TreeNode *node) {\n // Get the balance factor of node\n int _balanceFactor = balanceFactor(node);\n // Left-leaning tree\n if (_balanceFactor > 1) {\n if (balanceFactor(node->left) >= 0) {\n // Right rotation\n return rightRotate(node);\n } else {\n // First left rotation then right rotation\n node->left = leftRotate(node->left);\n return rightRotate(node);\n }\n }\n // Right-leaning tree\n if (_balanceFactor < -1) {\n if (balanceFactor(node->right) <= 0) {\n // Left rotation\n return leftRotate(node);\n } else {\n // First right rotation then left rotation\n node->right = rightRotate(node->right);\n return leftRotate(node);\n }\n }\n // Balanced tree, no rotation needed, return\n return node;\n}\n</code></pre> avl_tree.java<pre><code>/* Perform rotation operation to restore balance to the subtree */\nTreeNode rotate(TreeNode node) {\n // Get the balance factor of node\n int balanceFactor = balanceFactor(node);\n // Left-leaning tree\n if 
(balanceFactor > 1) {\n if (balanceFactor(node.left) >= 0) {\n // Right rotation\n return rightRotate(node);\n } else {\n // First left rotation then right rotation\n node.left = leftRotate(node.left);\n return rightRotate(node);\n }\n }\n // Right-leaning tree\n if (balanceFactor < -1) {\n if (balanceFactor(node.right) <= 0) {\n // Left rotation\n return leftRotate(node);\n } else {\n // First right rotation then left rotation\n node.right = rightRotate(node.right);\n return leftRotate(node);\n }\n }\n // Balanced tree, no rotation needed, return\n return node;\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{Rotate}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{rotate}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.c<pre><code>[class]{}-[func]{rotate}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{rotate}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#753-common-operations-in-avl-trees","title":"7.5.3 \u00a0 Common operations in AVL trees","text":""},{"location":"chapter_tree/avl_tree/#1-node-insertion","title":"1. \u00a0 Node insertion","text":"<p>The node insertion operation in AVL trees is similar to that in binary search trees. The only difference is that after inserting a node in an AVL tree, a series of unbalanced nodes may appear along the path from that node to the root node. Therefore, we need to start from this node and perform rotation operations upwards to restore balance to all unbalanced nodes. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def insert(self, val):\n \"\"\"Insert node\"\"\"\n self._root = self.insert_helper(self._root, val)\n\ndef insert_helper(self, node: TreeNode | None, val: int) -> TreeNode:\n \"\"\"Recursively insert node (helper method)\"\"\"\n if node is None:\n return TreeNode(val)\n # 1. Find insertion position and insert node\n if val < node.val:\n node.left = self.insert_helper(node.left, val)\n elif val > node.val:\n node.right = self.insert_helper(node.right, val)\n else:\n # Do not insert duplicate nodes, return\n return node\n # Update node height\n self.update_height(node)\n # 2. Perform rotation operation to restore balance to the subtree\n return self.rotate(node)\n</code></pre> avl_tree.cpp<pre><code>/* Insert node */\nvoid insert(int val) {\n root = insertHelper(root, val);\n}\n\n/* Recursively insert node (helper method) */\nTreeNode *insertHelper(TreeNode *node, int val) {\n if (node == nullptr)\n return new TreeNode(val);\n /* 1. Find insertion position and insert node */\n if (val < node->val)\n node->left = insertHelper(node->left, val);\n else if (val > node->val)\n node->right = insertHelper(node->right, val);\n else\n return node; // Do not insert duplicate nodes, return\n updateHeight(node); // Update node height\n /* 2. Perform rotation operation to restore balance to the subtree */\n node = rotate(node);\n // Return the root node of the subtree\n return node;\n}\n</code></pre> avl_tree.java<pre><code>/* Insert node */\nvoid insert(int val) {\n root = insertHelper(root, val);\n}\n\n/* Recursively insert node (helper method) */\nTreeNode insertHelper(TreeNode node, int val) {\n if (node == null)\n return new TreeNode(val);\n /* 1. 
Find insertion position and insert node */\n if (val < node.val)\n node.left = insertHelper(node.left, val);\n else if (val > node.val)\n node.right = insertHelper(node.right, val);\n else\n return node; // Do not insert duplicate nodes, return\n updateHeight(node); // Update node height\n /* 2. Perform rotation operation to restore balance to the subtree */\n node = rotate(node);\n // Return the root node of the subtree\n return node;\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{Insert}\n\n[class]{AVLTree}-[func]{InsertHelper}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{insert}\n\n[class]{aVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insert_helper}\n</code></pre> avl_tree.c<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{}-[func]{insertHelper}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insert_helper}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{insert}\n\n[class]{AVLTree}-[func]{insertHelper}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#2-node-removal","title":"2. \u00a0 Node removal","text":"<p>Similarly, based on the method of removing nodes in binary search trees, rotation operations need to be performed from the bottom up to restore balance to all unbalanced nodes. 
The code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig avl_tree.py<pre><code>def remove(self, val: int):\n \"\"\"Remove node\"\"\"\n self._root = self.remove_helper(self._root, val)\n\ndef remove_helper(self, node: TreeNode | None, val: int) -> TreeNode | None:\n \"\"\"Recursively remove node (helper method)\"\"\"\n if node is None:\n return None\n # 1. Find and remove the node\n if val < node.val:\n node.left = self.remove_helper(node.left, val)\n elif val > node.val:\n node.right = self.remove_helper(node.right, val)\n else:\n if node.left is None or node.right is None:\n child = node.left or node.right\n # Number of child nodes = 0, remove node and return\n if child is None:\n return None\n # Number of child nodes = 1, remove node\n else:\n node = child\n else:\n # Number of child nodes = 2, remove the next node in in-order traversal and replace the current node with it\n temp = node.right\n while temp.left is not None:\n temp = temp.left\n node.right = self.remove_helper(node.right, temp.val)\n node.val = temp.val\n # Update node height\n self.update_height(node)\n # 2. Perform rotation operation to restore balance to the subtree\n return self.rotate(node)\n</code></pre> avl_tree.cpp<pre><code>/* Remove node */\nvoid remove(int val) {\n root = removeHelper(root, val);\n}\n\n/* Recursively remove node (helper method) */\nTreeNode *removeHelper(TreeNode *node, int val) {\n if (node == nullptr)\n return nullptr;\n /* 1. Find and remove the node */\n if (val < node->val)\n node->left = removeHelper(node->left, val);\n else if (val > node->val)\n node->right = removeHelper(node->right, val);\n else {\n if (node->left == nullptr || node->right == nullptr) {\n TreeNode *child = node->left != nullptr ? 
node->left : node->right;\n // Number of child nodes = 0, remove node and return\n if (child == nullptr) {\n delete node;\n return nullptr;\n }\n // Number of child nodes = 1, remove node\n else {\n delete node;\n node = child;\n }\n } else {\n // Number of child nodes = 2, remove the next node in in-order traversal and replace the current node with it\n TreeNode *temp = node->right;\n while (temp->left != nullptr) {\n temp = temp->left;\n }\n int tempVal = temp->val;\n node->right = removeHelper(node->right, temp->val);\n node->val = tempVal;\n }\n }\n updateHeight(node); // Update node height\n /* 2. Perform rotation operation to restore balance to the subtree */\n node = rotate(node);\n // Return the root node of the subtree\n return node;\n}\n</code></pre> avl_tree.java<pre><code>/* Remove node */\nvoid remove(int val) {\n root = removeHelper(root, val);\n}\n\n/* Recursively remove node (helper method) */\nTreeNode removeHelper(TreeNode node, int val) {\n if (node == null)\n return null;\n /* 1. Find and remove the node */\n if (val < node.val)\n node.left = removeHelper(node.left, val);\n else if (val > node.val)\n node.right = removeHelper(node.right, val);\n else {\n if (node.left == null || node.right == null) {\n TreeNode child = node.left != null ? node.left : node.right;\n // Number of child nodes = 0, remove node and return\n if (child == null)\n return null;\n // Number of child nodes = 1, remove node\n else\n node = child;\n } else {\n // Number of child nodes = 2, remove the next node in in-order traversal and replace the current node with it\n TreeNode temp = node.right;\n while (temp.left != null) {\n temp = temp.left;\n }\n node.right = removeHelper(node.right, temp.val);\n node.val = temp.val;\n }\n }\n updateHeight(node); // Update node height\n /* 2. 
Perform rotation operation to restore balance to the subtree */\n node = rotate(node);\n // Return the root node of the subtree\n return node;\n}\n</code></pre> avl_tree.cs<pre><code>[class]{AVLTree}-[func]{Remove}\n\n[class]{AVLTree}-[func]{RemoveHelper}\n</code></pre> avl_tree.go<pre><code>[class]{aVLTree}-[func]{remove}\n\n[class]{aVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.swift<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.js<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.ts<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.dart<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.rs<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{remove_helper}\n</code></pre> avl_tree.c<pre><code>[class]{AVLTree}-[func]{removeItem}\n\n[class]{}-[func]{removeHelper}\n</code></pre> avl_tree.kt<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre> avl_tree.rb<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{remove_helper}\n</code></pre> avl_tree.zig<pre><code>[class]{AVLTree}-[func]{remove}\n\n[class]{AVLTree}-[func]{removeHelper}\n</code></pre>"},{"location":"chapter_tree/avl_tree/#3-node-search","title":"3. 
\u00a0 Node search","text":"<p>The node search operation in AVL trees is consistent with that in binary search trees and will not be detailed here.</p>"},{"location":"chapter_tree/avl_tree/#754-typical-applications-of-avl-trees","title":"7.5.4 \u00a0 Typical applications of AVL trees","text":"<ul> <li>Organizing and storing large amounts of data, suitable for scenarios with high-frequency searches and low-frequency intertions and removals.</li> <li>Used to build index systems in databases.</li> <li>Red-black trees are also a common type of balanced binary search tree. Compared to AVL trees, red-black trees have more relaxed balancing conditions, require fewer rotations for node insertion and removal, and have a higher average efficiency for node addition and removal operations.</li> </ul>"},{"location":"chapter_tree/binary_search_tree/","title":"7.4 \u00a0 Binary search tree","text":"<p>As shown in Figure 7-16, a binary search tree satisfies the following conditions.</p> <ol> <li>For the root node, the value of all nodes in the left subtree \\(<\\) the value of the root node \\(<\\) the value of all nodes in the right subtree.</li> <li>The left and right subtrees of any node are also binary search trees, i.e., they satisfy condition <code>1.</code> as well.</li> </ol> <p></p> <p> Figure 7-16 \u00a0 Binary search tree </p>"},{"location":"chapter_tree/binary_search_tree/#741-operations-on-a-binary-search-tree","title":"7.4.1 \u00a0 Operations on a binary search tree","text":"<p>We encapsulate the binary search tree as a class <code>BinarySearchTree</code> and declare a member variable <code>root</code>, pointing to the tree's root node.</p>"},{"location":"chapter_tree/binary_search_tree/#1-searching-for-a-node","title":"1. \u00a0 Searching for a node","text":"<p>Given a target node value <code>num</code>, one can search according to the properties of the binary search tree. 
As shown in Figure 7-17, we declare a node <code>cur</code>, start from the binary tree's root node <code>root</code>, and loop to compare <code>cur.val</code> with <code>num</code>.</p> <ul> <li>If <code>cur.val < num</code>, the target node is in <code>cur</code>'s right subtree, so execute <code>cur = cur.right</code>.</li> <li>If <code>cur.val > num</code>, the target node is in <code>cur</code>'s left subtree, so execute <code>cur = cur.left</code>.</li> <li>If <code>cur.val = num</code>, the target node has been found, so exit the loop and return the node.</li> </ul> <1><2><3><4> <p></p> <p></p> <p></p> <p></p> <p> Figure 7-17 \u00a0 Example of searching for a node in a binary search tree </p> <p>The search operation in a binary search tree works on the same principle as the binary search algorithm, eliminating half of the possibilities in each round. The number of loops is at most the height of the binary tree. When the binary tree is balanced, it uses \(O(\log n)\) time. 
Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_tree.py<pre><code>def search(self, num: int) -> TreeNode | None:\n \"\"\"Search node\"\"\"\n cur = self._root\n # Loop find, break after passing leaf nodes\n while cur is not None:\n # Target node is in cur's right subtree\n if cur.val < num:\n cur = cur.right\n # Target node is in cur's left subtree\n elif cur.val > num:\n cur = cur.left\n # Found target node, break loop\n else:\n break\n return cur\n</code></pre> binary_search_tree.cpp<pre><code>/* Search node */\nTreeNode *search(int num) {\n TreeNode *cur = root;\n // Loop find, break after passing leaf nodes\n while (cur != nullptr) {\n // Target node is in cur's right subtree\n if (cur->val < num)\n cur = cur->right;\n // Target node is in cur's left subtree\n else if (cur->val > num)\n cur = cur->left;\n // Found target node, break loop\n else\n break;\n }\n // Return target node\n return cur;\n}\n</code></pre> binary_search_tree.java<pre><code>/* Search node */\nTreeNode search(int num) {\n TreeNode cur = root;\n // Loop find, break after passing leaf nodes\n while (cur != null) {\n // Target node is in cur's right subtree\n if (cur.val < num)\n cur = cur.right;\n // Target node is in cur's left subtree\n else if (cur.val > num)\n cur = cur.left;\n // Found target node, break loop\n else\n break;\n }\n // Return target node\n return cur;\n}\n</code></pre> binary_search_tree.cs<pre><code>[class]{BinarySearchTree}-[func]{Search}\n</code></pre> binary_search_tree.go<pre><code>[class]{binarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.swift<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.js<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.ts<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.dart<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> 
binary_search_tree.rs<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.c<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.kt<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.rb<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre> binary_search_tree.zig<pre><code>[class]{BinarySearchTree}-[func]{search}\n</code></pre>"},{"location":"chapter_tree/binary_search_tree/#2-inserting-a-node","title":"2. \u00a0 Inserting a node","text":"<p>Given an element <code>num</code> to be inserted, to maintain the property of the binary search tree \"left subtree < root node < right subtree,\" the insertion operation proceeds as shown in Figure 7-18.</p> <ol> <li>Finding the insertion position: Similar to the search operation, start from the root node and loop downwards according to the size relationship between the current node value and <code>num</code> until passing through the leaf node (traversing to <code>None</code>) then exit the loop.</li> <li>Insert the node at that position: Initialize the node <code>num</code> and place it where <code>None</code> was.</li> </ol> <p></p> <p> Figure 7-18 \u00a0 Inserting a node into a binary search tree </p> <p>In the code implementation, note the following two points.</p> <ul> <li>The binary search tree does not allow duplicate nodes; otherwise, it will violate its definition. Therefore, if the node to be inserted already exists in the tree, the insertion is not performed, and it directly returns.</li> <li>To perform the insertion operation, we need to use the node <code>pre</code> to save the node from the last loop. 
This way, when traversing to <code>None</code>, we can get its parent node, thus completing the node insertion operation.</li> </ul> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_tree.py<pre><code>def insert(self, num: int):\n \"\"\"Insert node\"\"\"\n # If tree is empty, initialize root node\n if self._root is None:\n self._root = TreeNode(num)\n return\n # Loop find, break after passing leaf nodes\n cur, pre = self._root, None\n while cur is not None:\n # Found duplicate node, thus return\n if cur.val == num:\n return\n pre = cur\n # Insertion position is in cur's right subtree\n if cur.val < num:\n cur = cur.right\n # Insertion position is in cur's left subtree\n else:\n cur = cur.left\n # Insert node\n node = TreeNode(num)\n if pre.val < num:\n pre.right = node\n else:\n pre.left = node\n</code></pre> binary_search_tree.cpp<pre><code>/* Insert node */\nvoid insert(int num) {\n // If tree is empty, initialize root node\n if (root == nullptr) {\n root = new TreeNode(num);\n return;\n }\n TreeNode *cur = root, *pre = nullptr;\n // Loop find, break after passing leaf nodes\n while (cur != nullptr) {\n // Found duplicate node, thus return\n if (cur->val == num)\n return;\n pre = cur;\n // Insertion position is in cur's right subtree\n if (cur->val < num)\n cur = cur->right;\n // Insertion position is in cur's left subtree\n else\n cur = cur->left;\n }\n // Insert node\n TreeNode *node = new TreeNode(num);\n if (pre->val < num)\n pre->right = node;\n else\n pre->left = node;\n}\n</code></pre> binary_search_tree.java<pre><code>/* Insert node */\nvoid insert(int num) {\n // If tree is empty, initialize root node\n if (root == null) {\n root = new TreeNode(num);\n return;\n }\n TreeNode cur = root, pre = null;\n // Loop find, break after passing leaf nodes\n while (cur != null) {\n // Found duplicate node, thus return\n if (cur.val == num)\n return;\n pre = cur;\n // Insertion position is in cur's right subtree\n if (cur.val < num)\n cur = cur.right;\n 
// Insertion position is in cur's left subtree\n else\n cur = cur.left;\n }\n // Insert node\n TreeNode node = new TreeNode(num);\n if (pre.val < num)\n pre.right = node;\n else\n pre.left = node;\n}\n</code></pre> binary_search_tree.cs<pre><code>[class]{BinarySearchTree}-[func]{Insert}\n</code></pre> binary_search_tree.go<pre><code>[class]{binarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.swift<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.js<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.ts<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.dart<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.rs<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.c<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.kt<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.rb<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> binary_search_tree.zig<pre><code>[class]{BinarySearchTree}-[func]{insert}\n</code></pre> <p>Similar to searching for a node, inserting a node uses \\(O(\\log n)\\) time.</p>"},{"location":"chapter_tree/binary_search_tree/#3-removing-a-node","title":"3. \u00a0 Removing a node","text":"<p>First, find the target node in the binary tree, then remove it. Similar to inserting a node, we need to ensure that after the removal operation is completed, the property of the binary search tree \"left subtree < root node < right subtree\" is still satisfied. 
Therefore, depending on whether the target node has 0, 1, or 2 child nodes, we perform the corresponding removal operation.</p> <p>As shown in Figure 7-19, when the degree of the node to be removed is \(0\), it means the node is a leaf node, and it can be directly removed.</p> <p></p> <p> Figure 7-19 \u00a0 Removing a node in a binary search tree (degree 0) </p> <p>As shown in Figure 7-20, when the degree of the node to be removed is \(1\), replacing the node to be removed with its child node is sufficient.</p> <p></p> <p> Figure 7-20 \u00a0 Removing a node in a binary search tree (degree 1) </p> <p>When the degree of the node to be removed is \(2\), we cannot remove it directly; instead, we need to replace it with another node. To maintain the property of the binary search tree \"left subtree \(<\) root node \(<\) right subtree,\" this node can be either the smallest node of the right subtree or the largest node of the left subtree.</p> <p>Assuming we choose the smallest node of the right subtree (the next node in in-order traversal), then the removal operation proceeds as shown in Figure 7-21.</p> <ol> <li>Find the next node in the \"in-order traversal sequence\" of the node to be removed, denoted as <code>tmp</code>.</li> <li>Replace the value of the node to be removed with <code>tmp</code>'s value, and recursively remove the node <code>tmp</code> in the tree.</li> </ol> <1><2><3><4> <p></p> <p></p> <p></p> <p></p> <p> Figure 7-21 \u00a0 Removing a node in a binary search tree (degree 2) </p> <p>The operation of removing a node also uses \(O(\log n)\) time, where finding the node to be removed requires \(O(\log n)\) time, and obtaining the in-order traversal successor node requires \(O(\log n)\) time. 
Example code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_search_tree.py<pre><code>def remove(self, num: int):\n \"\"\"Remove node\"\"\"\n # If tree is empty, return\n if self._root is None:\n return\n # Loop find, break after passing leaf nodes\n cur, pre = self._root, None\n while cur is not None:\n # Found node to be removed, break loop\n if cur.val == num:\n break\n pre = cur\n # Node to be removed is in cur's right subtree\n if cur.val < num:\n cur = cur.right\n # Node to be removed is in cur's left subtree\n else:\n cur = cur.left\n # If no node to be removed, return\n if cur is None:\n return\n\n # Number of child nodes = 0 or 1\n if cur.left is None or cur.right is None:\n # When the number of child nodes = 0/1, child = null/that child node\n child = cur.left or cur.right\n # Remove node cur\n if cur != self._root:\n if pre.left == cur:\n pre.left = child\n else:\n pre.right = child\n else:\n # If the removed node is the root, reassign the root\n self._root = child\n # Number of child nodes = 2\n else:\n # Get the next node in in-order traversal of cur\n tmp: TreeNode = cur.right\n while tmp.left is not None:\n tmp = tmp.left\n # Recursively remove node tmp\n self.remove(tmp.val)\n # Replace cur with tmp\n cur.val = tmp.val\n</code></pre> binary_search_tree.cpp<pre><code>/* Remove node */\nvoid remove(int num) {\n // If tree is empty, return\n if (root == nullptr)\n return;\n TreeNode *cur = root, *pre = nullptr;\n // Loop find, break after passing leaf nodes\n while (cur != nullptr) {\n // Found node to be removed, break loop\n if (cur->val == num)\n break;\n pre = cur;\n // Node to be removed is in cur's right subtree\n if (cur->val < num)\n cur = cur->right;\n // Node to be removed is in cur's left subtree\n else\n cur = cur->left;\n }\n // If no node to be removed, return\n if (cur == nullptr)\n return;\n // Number of child nodes = 0 or 1\n if (cur->left == nullptr || cur->right == nullptr) {\n // When the number of child 
nodes = 0 / 1, child = nullptr / that child node\n TreeNode *child = cur->left != nullptr ? cur->left : cur->right;\n // Remove node cur\n if (cur != root) {\n if (pre->left == cur)\n pre->left = child;\n else\n pre->right = child;\n } else {\n // If the removed node is the root, reassign the root\n root = child;\n }\n // Free memory\n delete cur;\n }\n // Number of child nodes = 2\n else {\n // Get the next node in in-order traversal of cur\n TreeNode *tmp = cur->right;\n while (tmp->left != nullptr) {\n tmp = tmp->left;\n }\n int tmpVal = tmp->val;\n // Recursively remove node tmp\n remove(tmp->val);\n // Replace cur with tmp\n cur->val = tmpVal;\n }\n}\n</code></pre> binary_search_tree.java<pre><code>/* Remove node */\nvoid remove(int num) {\n // If tree is empty, return\n if (root == null)\n return;\n TreeNode cur = root, pre = null;\n // Loop find, break after passing leaf nodes\n while (cur != null) {\n // Found node to be removed, break loop\n if (cur.val == num)\n break;\n pre = cur;\n // Node to be removed is in cur's right subtree\n if (cur.val < num)\n cur = cur.right;\n // Node to be removed is in cur's left subtree\n else\n cur = cur.left;\n }\n // If no node to be removed, return\n if (cur == null)\n return;\n // Number of child nodes = 0 or 1\n if (cur.left == null || cur.right == null) {\n // When the number of child nodes = 0/1, child = null/that child node\n TreeNode child = cur.left != null ? 
cur.left : cur.right;\n // Remove node cur\n if (cur != root) {\n if (pre.left == cur)\n pre.left = child;\n else\n pre.right = child;\n } else {\n // If the removed node is the root, reassign the root\n root = child;\n }\n }\n // Number of child nodes = 2\n else {\n // Get the next node in in-order traversal of cur\n TreeNode tmp = cur.right;\n while (tmp.left != null) {\n tmp = tmp.left;\n }\n // Recursively remove node tmp\n remove(tmp.val);\n // Replace cur with tmp\n cur.val = tmp.val;\n }\n}\n</code></pre> binary_search_tree.cs<pre><code>[class]{BinarySearchTree}-[func]{Remove}\n</code></pre> binary_search_tree.go<pre><code>[class]{binarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.swift<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.js<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.ts<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.dart<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.rs<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.c<pre><code>[class]{BinarySearchTree}-[func]{removeItem}\n</code></pre> binary_search_tree.kt<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.rb<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre> binary_search_tree.zig<pre><code>[class]{BinarySearchTree}-[func]{remove}\n</code></pre>"},{"location":"chapter_tree/binary_search_tree/#4-in-order-traversal-is-ordered","title":"4. 
\u00a0 In-order traversal is ordered","text":"<p>As shown in Figure 7-22, the in-order traversal of a binary tree follows the \"left \\(\\rightarrow\\) root \\(\\rightarrow\\) right\" traversal order, and a binary search tree satisfies the size relationship \"left child node \\(<\\) root node \\(<\\) right child node\".</p> <p>This means that in-order traversal in a binary search tree always traverses the next smallest node first, thus deriving an important property: The in-order traversal sequence of a binary search tree is ascending.</p> <p>Using the ascending property of in-order traversal, obtaining ordered data in a binary search tree requires only \\(O(n)\\) time, without the need for additional sorting operations, which is very efficient.</p> <p></p> <p> Figure 7-22 \u00a0 In-order traversal sequence of a binary search tree </p>"},{"location":"chapter_tree/binary_search_tree/#742-efficiency-of-binary-search-trees","title":"7.4.2 \u00a0 Efficiency of binary search trees","text":"<p>Given a set of data, we consider using an array or a binary search tree for storage. Observing Table 7-2, the operations on a binary search tree all have logarithmic time complexity, which is stable and efficient. 
Arrays are more efficient than binary search trees only in scenarios with high-frequency insertion and low-frequency search and removal.</p> <p> Table 7-2 \u00a0 Efficiency comparison between arrays and search trees </p> Unsorted array Binary search tree Search element \(O(n)\) \(O(\log n)\) Insert element \(O(1)\) \(O(\log n)\) Remove element \(O(n)\) \(O(\log n)\) <p>In ideal conditions, the binary search tree is \"balanced,\" thus any node can be found within \(\log n\) loops.</p> <p>However, continuously inserting and removing nodes in a binary search tree may lead to the binary tree degenerating into a linked list as shown in Figure 7-23, at which point the time complexity of various operations also degrades to \(O(n)\).</p> <p></p> <p> Figure 7-23 \u00a0 Degradation of a binary search tree </p>"},{"location":"chapter_tree/binary_search_tree/#743-common-applications-of-binary-search-trees","title":"7.4.3 \u00a0 Common applications of binary search trees","text":"<ul> <li>Used as multi-level indexes in systems to implement efficient search, insertion, and removal operations.</li> <li>Serves as the underlying data structure for certain search algorithms.</li> <li>Used to store data streams to maintain their ordered state.</li> </ul>"},{"location":"chapter_tree/binary_tree/","title":"7.1 \u00a0 Binary tree","text":"<p>A binary tree is a non-linear data structure that represents the hierarchical relationship between ancestors and descendants, embodying the divide-and-conquer logic of \"splitting into two\". 
Similar to a linked list, the basic unit of a binary tree is a node, each containing a value, a reference to the left child node, and a reference to the right child node.</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig <pre><code>class TreeNode:\n \"\"\"Binary tree node\"\"\"\n def __init__(self, val: int):\n self.val: int = val # Node value\n self.left: TreeNode | None = None # Reference to left child node\n self.right: TreeNode | None = None # Reference to right child node\n</code></pre> <pre><code>/* Binary tree node */\nstruct TreeNode {\n int val; // Node value\n TreeNode *left; // Pointer to left child node\n TreeNode *right; // Pointer to right child node\n TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}\n};\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode {\n int val; // Node value\n TreeNode left; // Reference to left child node\n TreeNode right; // Reference to right child node\n TreeNode(int x) { val = x; }\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode(int? x) {\n public int? val = x; // Node value\n public TreeNode? left; // Reference to left child node\n public TreeNode? right; // Reference to right child node\n}\n</code></pre> <pre><code>/* Binary tree node */\ntype TreeNode struct {\n Val int\n Left *TreeNode\n Right *TreeNode\n}\n/* Constructor */\nfunc NewTreeNode(v int) *TreeNode {\n return &TreeNode{\n Left: nil, // Pointer to left child node\n Right: nil, // Pointer to right child node\n Val: v, // Node value\n }\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode {\n var val: Int // Node value\n var left: TreeNode? // Reference to left child node\n var right: TreeNode? 
// Reference to right child node\n\n init(x: Int) {\n val = x\n }\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode {\n val; // Node value\n left; // Pointer to left child node\n right; // Pointer to right child node\n constructor(val, left, right) {\n this.val = val === undefined ? 0 : val;\n this.left = left === undefined ? null : left;\n this.right = right === undefined ? null : right;\n }\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode {\n val: number;\n left: TreeNode | null;\n right: TreeNode | null;\n\n constructor(val?: number, left?: TreeNode | null, right?: TreeNode | null) {\n this.val = val === undefined ? 0 : val; // Node value\n this.left = left === undefined ? null : left; // Reference to left child node\n this.right = right === undefined ? null : right; // Reference to right child node\n }\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode {\n int val; // Node value\n TreeNode? left; // Reference to left child node\n TreeNode? 
right; // Reference to right child node\n TreeNode(this.val, [this.left, this.right]);\n}\n</code></pre> <pre><code>use std::rc::Rc;\nuse std::cell::RefCell;\n\n/* Binary tree node */\nstruct TreeNode {\n val: i32, // Node value\n left: Option<Rc<RefCell<TreeNode>>>, // Reference to left child node\n right: Option<Rc<RefCell<TreeNode>>>, // Reference to right child node\n}\n\nimpl TreeNode {\n /* Constructor */\n fn new(val: i32) -> Rc<RefCell<Self>> {\n Rc::new(RefCell::new(Self {\n val,\n left: None,\n right: None\n }))\n }\n}\n</code></pre> <pre><code>/* Binary tree node */\ntypedef struct TreeNode {\n int val; // Node value\n int height; // Node height\n struct TreeNode *left; // Pointer to left child node\n struct TreeNode *right; // Pointer to right child node\n} TreeNode;\n\n/* Constructor */\nTreeNode *newTreeNode(int val) {\n TreeNode *node;\n\n node = (TreeNode *)malloc(sizeof(TreeNode));\n node->val = val;\n node->height = 0;\n node->left = NULL;\n node->right = NULL;\n return node;\n}\n</code></pre> <pre><code>/* Binary tree node */\nclass TreeNode(val _val: Int) { // Node value\n val left: TreeNode? = null // Reference to left child node\n val right: TreeNode? = null // Reference to right child node\n}\n</code></pre> <pre><code>\n</code></pre> <pre><code>\n</code></pre> <p>Each node has two references (pointers), pointing to the left child node and right child node, respectively. This node is called the parent node of these two child nodes. When given a node of a binary tree, we call the tree formed by this node's left child and all nodes under it the left subtree of this node. Similarly, the right subtree can be defined.</p> <p>In a binary tree, except for leaf nodes, all other nodes contain child nodes and non-empty subtrees. As shown in Figure 7-1, if \"Node 2\" is considered as the parent node, then its left and right child nodes are \"Node 4\" and \"Node 5,\" respectively. 
The left subtree is \"the tree formed by Node 4 and all nodes under it,\" and the right subtree is \"the tree formed by Node 5 and all nodes under it.\"</p> <p></p> <p> Figure 7-1 \u00a0 Parent Node, child Node, subtree </p>"},{"location":"chapter_tree/binary_tree/#711-common-terminology-of-binary-trees","title":"7.1.1 \u00a0 Common terminology of binary trees","text":"<p>The commonly used terminology of binary trees is shown in Figure 7-2.</p> <ul> <li>Root node: The node at the top level of the binary tree, which has no parent node.</li> <li>Leaf node: A node with no children, both of its pointers point to <code>None</code>.</li> <li>Edge: The line segment connecting two nodes, i.e., node reference (pointer).</li> <li>The level of a node: Incrementing from top to bottom, with the root node's level being 1.</li> <li>The degree of a node: The number of children a node has. In a binary tree, the degree can be 0, 1, or 2.</li> <li>The height of a binary tree: The number of edges passed from the root node to the farthest leaf node.</li> <li>The depth of a node: The number of edges passed from the root node to the node.</li> <li>The height of a node: The number of edges from the farthest leaf node to the node.</li> </ul> <p></p> <p> Figure 7-2 \u00a0 Common Terminology of Binary Trees </p> <p>Tip</p> <p>Please note that we typically define \"height\" and \"depth\" as \"the number of edges traversed\", but some problems or textbooks may define them as \"the number of nodes traversed\". In such cases, both height and depth need to be incremented by 1.</p>"},{"location":"chapter_tree/binary_tree/#712-basic-operations-of-binary-trees","title":"7.1.2 \u00a0 Basic operations of binary trees","text":""},{"location":"chapter_tree/binary_tree/#1-initializing-a-binary-tree","title":"1. 
\u00a0 Initializing a binary tree","text":"<p>Similar to a linked list, begin by initialize nodes, then construct references (pointers).</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_tree.py<pre><code># Initializing a binary tree\n# Initializing nodes\nn1 = TreeNode(val=1)\nn2 = TreeNode(val=2)\nn3 = TreeNode(val=3)\nn4 = TreeNode(val=4)\nn5 = TreeNode(val=5)\n# Linking references (pointers) between nodes\nn1.left = n2\nn1.right = n3\nn2.left = n4\nn2.right = n5\n</code></pre> binary_tree.cpp<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nTreeNode* n1 = new TreeNode(1);\nTreeNode* n2 = new TreeNode(2);\nTreeNode* n3 = new TreeNode(3);\nTreeNode* n4 = new TreeNode(4);\nTreeNode* n5 = new TreeNode(5);\n// Linking references (pointers) between nodes\nn1->left = n2;\nn1->right = n3;\nn2->left = n4;\nn2->right = n5;\n</code></pre> binary_tree.java<pre><code>// Initializing nodes\nTreeNode n1 = new TreeNode(1);\nTreeNode n2 = new TreeNode(2);\nTreeNode n3 = new TreeNode(3);\nTreeNode n4 = new TreeNode(4);\nTreeNode n5 = new TreeNode(5);\n// Linking references (pointers) between nodes\nn1.left = n2;\nn1.right = n3;\nn2.left = n4;\nn2.right = n5;\n</code></pre> binary_tree.cs<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nTreeNode n1 = new(1);\nTreeNode n2 = new(2);\nTreeNode n3 = new(3);\nTreeNode n4 = new(4);\nTreeNode n5 = new(5);\n// Linking references (pointers) between nodes\nn1.left = n2;\nn1.right = n3;\nn2.left = n4;\nn2.right = n5;\n</code></pre> binary_tree.go<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nn1 := NewTreeNode(1)\nn2 := NewTreeNode(2)\nn3 := NewTreeNode(3)\nn4 := NewTreeNode(4)\nn5 := NewTreeNode(5)\n// Linking references (pointers) between nodes\nn1.Left = n2\nn1.Right = n3\nn2.Left = n4\nn2.Right = n5\n</code></pre> binary_tree.swift<pre><code>// Initializing nodes\nlet n1 = TreeNode(x: 1)\nlet n2 = TreeNode(x: 2)\nlet n3 = TreeNode(x: 3)\nlet n4 = TreeNode(x: 4)\nlet n5 = 
TreeNode(x: 5)\n// Linking references (pointers) between nodes\nn1.left = n2\nn1.right = n3\nn2.left = n4\nn2.right = n5\n</code></pre> binary_tree.js<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nlet n1 = new TreeNode(1),\n n2 = new TreeNode(2),\n n3 = new TreeNode(3),\n n4 = new TreeNode(4),\n n5 = new TreeNode(5);\n// Linking references (pointers) between nodes\nn1.left = n2;\nn1.right = n3;\nn2.left = n4;\nn2.right = n5;\n</code></pre> binary_tree.ts<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nlet n1 = new TreeNode(1),\n n2 = new TreeNode(2),\n n3 = new TreeNode(3),\n n4 = new TreeNode(4),\n n5 = new TreeNode(5);\n// Linking references (pointers) between nodes\nn1.left = n2;\nn1.right = n3;\nn2.left = n4;\nn2.right = n5;\n</code></pre> binary_tree.dart<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nTreeNode n1 = new TreeNode(1);\nTreeNode n2 = new TreeNode(2);\nTreeNode n3 = new TreeNode(3);\nTreeNode n4 = new TreeNode(4);\nTreeNode n5 = new TreeNode(5);\n// Linking references (pointers) between nodes\nn1.left = n2;\nn1.right = n3;\nn2.left = n4;\nn2.right = n5;\n</code></pre> binary_tree.rs<pre><code>// Initializing nodes\nlet n1 = TreeNode::new(1);\nlet n2 = TreeNode::new(2);\nlet n3 = TreeNode::new(3);\nlet n4 = TreeNode::new(4);\nlet n5 = TreeNode::new(5);\n// Linking references (pointers) between nodes\nn1.borrow_mut().left = Some(n2.clone());\nn1.borrow_mut().right = Some(n3);\nn2.borrow_mut().left = Some(n4);\nn2.borrow_mut().right = Some(n5);\n</code></pre> binary_tree.c<pre><code>/* Initializing a binary tree */\n// Initializing nodes\nTreeNode *n1 = newTreeNode(1);\nTreeNode *n2 = newTreeNode(2);\nTreeNode *n3 = newTreeNode(3);\nTreeNode *n4 = newTreeNode(4);\nTreeNode *n5 = newTreeNode(5);\n// Linking references (pointers) between nodes\nn1->left = n2;\nn1->right = n3;\nn2->left = n4;\nn2->right = n5;\n</code></pre> binary_tree.kt<pre><code>// Initializing nodes\nval n1 = 
TreeNode(1)\nval n2 = TreeNode(2)\nval n3 = TreeNode(3)\nval n4 = TreeNode(4)\nval n5 = TreeNode(5)\n// Linking references (pointers) between nodes\nn1.left = n2\nn1.right = n3\nn2.left = n4\nn2.right = n5\n</code></pre> binary_tree.rb<pre><code>\n</code></pre> binary_tree.zig<pre><code>\n</code></pre> Code visualization <p>https://pythontutor.com/render.html#code=class%20TreeNode%3A%0A%20%20%20%20%22%22%22%E4%BA%8C%E5%8F%89%E6%A0%91%E8%8A%82%E7%82%B9%E7%B1%BB%22%22%22%0A%20%20%20%20def%20__init__%28self,%20val%3A%20int%29%3A%0A%20%20%20%20%20%20%20%20self.val%3A%20int%20%3D%20val%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%23%20%E8%8A%82%E7%82%B9%E5%80%BC%0A%20%20%20%20%20%20%20%20self.left%3A%20TreeNode%20%7C%20None%20%3D%20None%20%20%23%20%E5%B7%A6%E5%AD%90%E8%8A%82%E7%82%B9%E5%BC%95%E7%94%A8%0A%20%20%20%20%20%20%20%20self.right%3A%20TreeNode%20%7C%20None%20%3D%20None%20%23%20%E5%8F%B3%E5%AD%90%E8%8A%82%E7%82%B9%E5%BC%95%E7%94%A8%0A%0A%22%22%22Driver%20Code%22%22%22%0Aif%20__name__%20%3D%3D%20%22__main__%22%3A%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E4%BA%8C%E5%8F%89%E6%A0%91%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E8%8A%82%E7%82%B9%0A%20%20%20%20n1%20%3D%20TreeNode%28val%3D1%29%0A%20%20%20%20n2%20%3D%20TreeNode%28val%3D2%29%0A%20%20%20%20n3%20%3D%20TreeNode%28val%3D3%29%0A%20%20%20%20n4%20%3D%20TreeNode%28val%3D4%29%0A%20%20%20%20n5%20%3D%20TreeNode%28val%3D5%29%0A%20%20%20%20%23%20%E6%9E%84%E5%BB%BA%E8%8A%82%E7%82%B9%E4%B9%8B%E9%97%B4%E7%9A%84%E5%BC%95%E7%94%A8%EF%BC%88%E6%8C%87%E9%92%88%EF%BC%89%0A%20%20%20%20n1.left%20%3D%20n2%0A%20%20%20%20n1.right%20%3D%20n3%0A%20%20%20%20n2.left%20%3D%20n4%0A%20%20%20%20n2.right%20%3D%20n5&cumulative=false&curInstr=3&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=311&rawInputLstJSON=%5B%5D&textReferences=false</p>"},{"location":"chapter_tree/binary_tree/#2-inserting-and-removing-nodes","title":"2. 
\u00a0 Inserting and removing nodes","text":"<p>Similar to a linked list, inserting and removing nodes in a binary tree can be achieved by modifying pointers. Figure 7-3 provides an example.</p> <p></p> <p> Figure 7-3 \u00a0 Inserting and removing nodes in a binary tree </p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_tree.py<pre><code># Inserting and removing nodes\np = TreeNode(0)\n# Inserting node P between n1 -> n2\nn1.left = p\np.left = n2\n# Removing node P\nn1.left = n2\n</code></pre> binary_tree.cpp<pre><code>/* Inserting and removing nodes */\nTreeNode* P = new TreeNode(0);\n// Inserting node P between n1 and n2\nn1->left = P;\nP->left = n2;\n// Removing node P\nn1->left = n2;\n</code></pre> binary_tree.java<pre><code>TreeNode P = new TreeNode(0);\n// Inserting node P between n1 and n2\nn1.left = P;\nP.left = n2;\n// Removing node P\nn1.left = n2;\n</code></pre> binary_tree.cs<pre><code>/* Inserting and removing nodes */\nTreeNode P = new(0);\n// Inserting node P between n1 and n2\nn1.left = P;\nP.left = n2;\n// Removing node P\nn1.left = n2;\n</code></pre> binary_tree.go<pre><code>/* Inserting and removing nodes */\n// Inserting node P between n1 and n2\np := NewTreeNode(0)\nn1.Left = p\np.Left = n2\n// Removing node P\nn1.Left = n2\n</code></pre> binary_tree.swift<pre><code>let P = TreeNode(x: 0)\n// Inserting node P between n1 and n2\nn1.left = P\nP.left = n2\n// Removing node P\nn1.left = n2\n</code></pre> binary_tree.js<pre><code>/* Inserting and removing nodes */\nlet P = new TreeNode(0);\n// Inserting node P between n1 and n2\nn1.left = P;\nP.left = n2;\n// Removing node P\nn1.left = n2;\n</code></pre> binary_tree.ts<pre><code>/* Inserting and removing nodes */\nconst P = new TreeNode(0);\n// Inserting node P between n1 and n2\nn1.left = P;\nP.left = n2;\n// Removing node P\nn1.left = n2;\n</code></pre> binary_tree.dart<pre><code>/* Inserting and removing nodes */\nTreeNode P = new TreeNode(0);\n// Inserting node P between n1 and 
n2\nn1.left = P;\nP.left = n2;\n// Removing node P\nn1.left = n2;\n</code></pre> binary_tree.rs<pre><code>let p = TreeNode::new(0);\n// Inserting node P between n1 and n2\nn1.borrow_mut().left = Some(p.clone());\np.borrow_mut().left = Some(n2.clone());\n// Removing node P\nn1.borrow_mut().left = Some(n2);\n</code></pre> binary_tree.c<pre><code>/* Inserting and removing nodes */\nTreeNode *P = newTreeNode(0);\n// Inserting node P between n1 and n2\nn1->left = P;\nP->left = n2;\n// Removing node P\nn1->left = n2;\n</code></pre> binary_tree.kt<pre><code>val P = TreeNode(0)\n// Inserting node P between n1 and n2\nn1.left = P\nP.left = n2\n// Removing node P\nn1.left = n2\n</code></pre> binary_tree.rb<pre><code>\n</code></pre> binary_tree.zig<pre><code>\n</code></pre> Code visualization <p>https://pythontutor.com/render.html#code=class%20TreeNode%3A%0A%20%20%20%20%22%22%22%E4%BA%8C%E5%8F%89%E6%A0%91%E8%8A%82%E7%82%B9%E7%B1%BB%22%22%22%0A%20%20%20%20def%20__init__%28self,%20val%3A%20int%29%3A%0A%20%20%20%20%20%20%20%20self.val%3A%20int%20%3D%20val%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%23%20%E8%8A%82%E7%82%B9%E5%80%BC%0A%20%20%20%20%20%20%20%20self.left%3A%20TreeNode%20%7C%20None%20%3D%20None%20%20%23%20%E5%B7%A6%E5%AD%90%E8%8A%82%E7%82%B9%E5%BC%95%E7%94%A8%0A%20%20%20%20%20%20%20%20self.right%3A%20TreeNode%20%7C%20None%20%3D%20None%20%23%20%E5%8F%B3%E5%AD%90%E8%8A%82%E7%82%B9%E5%BC%95%E7%94%A8%0A%0A%22%22%22Driver%20Code%22%22%22%0Aif%20__name__%20%3D%3D%20%22__main__%22%3A%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E4%BA%8C%E5%8F%89%E6%A0%91%0A%20%20%20%20%23%20%E5%88%9D%E5%A7%8B%E5%8C%96%E8%8A%82%E7%82%B9%0A%20%20%20%20n1%20%3D%20TreeNode%28val%3D1%29%0A%20%20%20%20n2%20%3D%20TreeNode%28val%3D2%29%0A%20%20%20%20n3%20%3D%20TreeNode%28val%3D3%29%0A%20%20%20%20n4%20%3D%20TreeNode%28val%3D4%29%0A%20%20%20%20n5%20%3D%20TreeNode%28val%3D5%29%0A%20%20%20%20%23%20%E6%9E%84%E5%BB%BA%E8%8A%82%E7%82%B9%E4%B9%8B%E9%97%B4%E7%9A%84%E5%BC%95%E7%94%A8%EF%BC%88%E6%8C%87%E
9%92%88%EF%BC%89%0A%20%20%20%20n1.left%20%3D%20n2%0A%20%20%20%20n1.right%20%3D%20n3%0A%20%20%20%20n2.left%20%3D%20n4%0A%20%20%20%20n2.right%20%3D%20n5%0A%0A%20%20%20%20%23%20%E6%8F%92%E5%85%A5%E4%B8%8E%E5%88%A0%E9%99%A4%E8%8A%82%E7%82%B9%0A%20%20%20%20p%20%3D%20TreeNode%280%29%0A%20%20%20%20%23%20%E5%9C%A8%20n1%20-%3E%20n2%20%E4%B8%AD%E9%97%B4%E6%8F%92%E5%85%A5%E8%8A%82%E7%82%B9%20P%0A%20%20%20%20n1.left%20%3D%20p%0A%20%20%20%20p.left%20%3D%20n2%0A%20%20%20%20%23%20%E5%88%A0%E9%99%A4%E8%8A%82%E7%82%B9%20P%0A%20%20%20%20n1.left%20%3D%20n2&cumulative=false&curInstr=37&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=311&rawInputLstJSON=%5B%5D&textReferences=false</p> <p>Tip</p> <p>It's important to note that inserting nodes may change the original logical structure of the binary tree, while removing nodes typically involves removing the node and all its subtrees. Therefore, in a binary tree, insertion and removal are usually performed through a coordinated set of operations to achieve meaningful outcomes.</p>"},{"location":"chapter_tree/binary_tree/#713-common-types-of-binary-trees","title":"7.1.3 \u00a0 Common types of binary trees","text":""},{"location":"chapter_tree/binary_tree/#1-perfect-binary-tree","title":"1. \u00a0 Perfect binary tree","text":"<p>As shown in Figure 7-4, in a perfect binary tree, all levels of nodes are fully filled. In a perfect binary tree, the degree of leaf nodes is \\(0\\), while the degree of all other nodes is \\(2\\); if the tree's height is \\(h\\), then the total number of nodes is \\(2^{h+1} - 1\\), showing a standard exponential relationship, reflecting the common phenomenon of cell division in nature.</p> <p>Tip</p> <p>Please note that in the Chinese community, a perfect binary tree is often referred to as a full binary tree.</p> <p></p> <p> Figure 7-4 \u00a0 Perfect binary tree </p>"},{"location":"chapter_tree/binary_tree/#2-complete-binary-tree","title":"2. 
\u00a0 Complete binary tree","text":"<p>As shown in Figure 7-5, in a complete binary tree, only the bottom level may be partially filled, and the bottom-level nodes are filled as far left as possible.</p> <p></p> <p> Figure 7-5 \u00a0 Complete binary tree </p>"},{"location":"chapter_tree/binary_tree/#3-full-binary-tree","title":"3. \u00a0 Full binary tree","text":"<p>As shown in Figure 7-6, in a full binary tree, every node except the leaf nodes has two children.</p> <p></p> <p> Figure 7-6 \u00a0 Full binary tree </p>"},{"location":"chapter_tree/binary_tree/#4-balanced-binary-tree","title":"4. \u00a0 Balanced binary tree","text":"<p>As shown in Figure 7-7, in a balanced binary tree, the absolute difference in height between the left and right subtrees of any node does not exceed 1.</p> <p></p> <p> Figure 7-7 \u00a0 Balanced binary tree </p>"},{"location":"chapter_tree/binary_tree/#714-degeneration-of-binary-trees","title":"7.1.4 \u00a0 Degeneration of binary trees","text":"<p>Figure 7-8 shows the ideal and degenerate structures of binary trees. 
A binary tree becomes a \"perfect binary tree\" when every level is filled, while it degenerates into a \"linked list\" when all nodes lean toward one side.</p> <ul> <li>The perfect binary tree is the ideal situation, fully leveraging the \"divide and conquer\" advantage of binary trees.</li> <li>A linked list is another extreme, where operations become linear, degrading the time complexity to \\(O(n)\\).</li> </ul> <p></p> <p> Figure 7-8 \u00a0 The Best and Worst Structures of Binary Trees </p> <p>As shown in Table 7-1, in the best and worst structures, the number of leaf nodes, total number of nodes, and height of the binary tree reach their maximum or minimum values.</p> <p> Table 7-1 \u00a0 The Best and Worst Structures of Binary Trees </p> Perfect binary tree Linked list Number of nodes at level \\(i\\) \\(2^{i-1}\\) \\(1\\) Number of leaf nodes in a tree with height \\(h\\) \\(2^h\\) \\(1\\) Total number of nodes in a tree with height \\(h\\) \\(2^{h+1} - 1\\) \\(h + 1\\) Height of a tree with \\(n\\) total nodes \\(\\log_2 (n+1) - 1\\) \\(n - 1\\)"},{"location":"chapter_tree/binary_tree_traversal/","title":"7.2 \u00a0 Binary tree traversal","text":"<p>From the perspective of physical structure, a tree is a data structure based on linked lists, hence its traversal method involves accessing nodes one by one through pointers. 
However, a tree is a non-linear data structure, which makes traversing a tree more complex than traversing a linked list and requires the assistance of search algorithms.</p> <p>Common traversal methods for binary trees include level-order traversal, pre-order traversal, in-order traversal, and post-order traversal, among others.</p>"},{"location":"chapter_tree/binary_tree_traversal/#721-level-order-traversal","title":"7.2.1 \u00a0 Level-order traversal","text":"<p>As shown in Figure 7-9, level-order traversal traverses the binary tree from top to bottom, layer by layer, and accesses nodes in each layer in a left-to-right order.</p> <p>Level-order traversal essentially belongs to breadth-first traversal, also known as breadth-first search (BFS), which embodies a layer-by-layer traversal method that \"expands outward circle by circle\".</p> <p></p> <p> Figure 7-9 \u00a0 Level-order traversal of a binary tree </p>"},{"location":"chapter_tree/binary_tree_traversal/#1-code-implementation","title":"1. \u00a0 Code implementation","text":"<p>Breadth-first traversal is usually implemented with the help of a \"queue\". The queue follows the \"first in, first out\" rule, while breadth-first traversal follows the \"layer-by-layer progression\" rule; the underlying ideas of the two are consistent. 
The implementation code is as follows:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_tree_bfs.py<pre><code>def level_order(root: TreeNode | None) -> list[int]:\n \"\"\"Level-order traversal\"\"\"\n # Initialize queue, add root node\n queue: deque[TreeNode] = deque()\n queue.append(root)\n # Initialize a list to store the traversal sequence\n res = []\n while queue:\n node: TreeNode = queue.popleft() # Queue dequeues\n res.append(node.val) # Save node value\n if node.left is not None:\n queue.append(node.left) # Left child node enqueues\n if node.right is not None:\n queue.append(node.right) # Right child node enqueues\n return res\n</code></pre> binary_tree_bfs.cpp<pre><code>/* Level-order traversal */\nvector<int> levelOrder(TreeNode *root) {\n // Initialize queue, add root node\n queue<TreeNode *> queue;\n queue.push(root);\n // Initialize a list to store the traversal sequence\n vector<int> vec;\n while (!queue.empty()) {\n TreeNode *node = queue.front();\n queue.pop(); // Queue dequeues\n vec.push_back(node->val); // Save node value\n if (node->left != nullptr)\n queue.push(node->left); // Left child node enqueues\n if (node->right != nullptr)\n queue.push(node->right); // Right child node enqueues\n }\n return vec;\n}\n</code></pre> binary_tree_bfs.java<pre><code>/* Level-order traversal */\nList<Integer> levelOrder(TreeNode root) {\n // Initialize queue, add root node\n Queue<TreeNode> queue = new LinkedList<>();\n queue.add(root);\n // Initialize a list to store the traversal sequence\n List<Integer> list = new ArrayList<>();\n while (!queue.isEmpty()) {\n TreeNode node = queue.poll(); // Queue dequeues\n list.add(node.val); // Save node value\n if (node.left != null)\n queue.offer(node.left); // Left child node enqueues\n if (node.right != null)\n queue.offer(node.right); // Right child node enqueues\n }\n return list;\n}\n</code></pre> binary_tree_bfs.cs<pre><code>[class]{binary_tree_bfs}-[func]{LevelOrder}\n</code></pre> 
binary_tree_bfs.go<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.swift<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.js<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.ts<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.dart<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.rs<pre><code>[class]{}-[func]{level_order}\n</code></pre> binary_tree_bfs.c<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.kt<pre><code>[class]{}-[func]{levelOrder}\n</code></pre> binary_tree_bfs.rb<pre><code>[class]{}-[func]{level_order}\n</code></pre> binary_tree_bfs.zig<pre><code>[class]{}-[func]{levelOrder}\n</code></pre>"},{"location":"chapter_tree/binary_tree_traversal/#2-complexity-analysis","title":"2. \u00a0 Complexity analysis","text":"<ul> <li>Time complexity is \\(O(n)\\): All nodes are visited once, using \\(O(n)\\) time, where \\(n\\) is the number of nodes.</li> <li>Space complexity is \\(O(n)\\): In the worst case, i.e., a perfect binary tree, before traversing to the lowest level, the queue can contain at most \\((n + 1) / 2\\) nodes at the same time, occupying \\(O(n)\\) space.</li> </ul>"},{"location":"chapter_tree/binary_tree_traversal/#722-preorder-in-order-and-post-order-traversal","title":"7.2.2 \u00a0 Preorder, in-order, and post-order traversal","text":"<p>Correspondingly, pre-order, in-order, and post-order traversal all belong to depth-first traversal, also known as depth-first search (DFS), which embodies a \"proceed to the end first, then backtrack and continue\" traversal method.</p> <p>Figure 7-10 shows the working principle of performing a depth-first traversal on a binary tree. 
Depth-first traversal is like walking around the perimeter of the entire binary tree, encountering three positions at each node, corresponding to pre-order traversal, in-order traversal, and post-order traversal.</p> <p></p> <p> Figure 7-10 \u00a0 Preorder, in-order, and post-order traversal of a binary search tree </p>"},{"location":"chapter_tree/binary_tree_traversal/#1-code-implementation_1","title":"1. \u00a0 Code implementation","text":"<p>Depth-first search is usually implemented based on recursion:</p> PythonC++JavaC#GoSwiftJSTSDartRustCKotlinRubyZig binary_tree_dfs.py<pre><code>def pre_order(root: TreeNode | None):\n \"\"\"Pre-order traversal\"\"\"\n if root is None:\n return\n # Visit priority: root node -> left subtree -> right subtree\n res.append(root.val)\n pre_order(root=root.left)\n pre_order(root=root.right)\n\ndef in_order(root: TreeNode | None):\n \"\"\"In-order traversal\"\"\"\n if root is None:\n return\n # Visit priority: left subtree -> root node -> right subtree\n in_order(root=root.left)\n res.append(root.val)\n in_order(root=root.right)\n\ndef post_order(root: TreeNode | None):\n \"\"\"Post-order traversal\"\"\"\n if root is None:\n return\n # Visit priority: left subtree -> right subtree -> root node\n post_order(root=root.left)\n post_order(root=root.right)\n res.append(root.val)\n</code></pre> binary_tree_dfs.cpp<pre><code>/* Pre-order traversal */\nvoid preOrder(TreeNode *root) {\n if (root == nullptr)\n return;\n // Visit priority: root node -> left subtree -> right subtree\n vec.push_back(root->val);\n preOrder(root->left);\n preOrder(root->right);\n}\n\n/* In-order traversal */\nvoid inOrder(TreeNode *root) {\n if (root == nullptr)\n return;\n // Visit priority: left subtree -> root node -> right subtree\n inOrder(root->left);\n vec.push_back(root->val);\n inOrder(root->right);\n}\n\n/* Post-order traversal */\nvoid postOrder(TreeNode *root) {\n if (root == nullptr)\n return;\n // Visit priority: left subtree -> right subtree -> root 
node\n postOrder(root->left);\n postOrder(root->right);\n vec.push_back(root->val);\n}\n</code></pre> binary_tree_dfs.java<pre><code>/* Pre-order traversal */\nvoid preOrder(TreeNode root) {\n if (root == null)\n return;\n // Visit priority: root node -> left subtree -> right subtree\n list.add(root.val);\n preOrder(root.left);\n preOrder(root.right);\n}\n\n/* In-order traversal */\nvoid inOrder(TreeNode root) {\n if (root == null)\n return;\n // Visit priority: left subtree -> root node -> right subtree\n inOrder(root.left);\n list.add(root.val);\n inOrder(root.right);\n}\n\n/* Post-order traversal */\nvoid postOrder(TreeNode root) {\n if (root == null)\n return;\n // Visit priority: left subtree -> right subtree -> root node\n postOrder(root.left);\n postOrder(root.right);\n list.add(root.val);\n}\n</code></pre> binary_tree_dfs.cs<pre><code>[class]{binary_tree_dfs}-[func]{PreOrder}\n\n[class]{binary_tree_dfs}-[func]{InOrder}\n\n[class]{binary_tree_dfs}-[func]{PostOrder}\n</code></pre> binary_tree_dfs.go<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.swift<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.js<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.ts<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.dart<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.rs<pre><code>[class]{}-[func]{pre_order}\n\n[class]{}-[func]{in_order}\n\n[class]{}-[func]{post_order}\n</code></pre> binary_tree_dfs.c<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> 
binary_tree_dfs.kt<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> binary_tree_dfs.rb<pre><code>[class]{}-[func]{pre_order}\n\n[class]{}-[func]{in_order}\n\n[class]{}-[func]{post_order}\n</code></pre> binary_tree_dfs.zig<pre><code>[class]{}-[func]{preOrder}\n\n[class]{}-[func]{inOrder}\n\n[class]{}-[func]{postOrder}\n</code></pre> <p>Tip</p> <p>Depth-first search can also be implemented based on iteration; interested readers can study this on their own.</p> <p>Figure 7-11 shows the recursive process of pre-order traversal of a binary tree, which can be divided into two opposite parts: \"recursion\" and \"return\".</p> <ol> <li>\"Recursion\" means starting a new method call; in this process, the program accesses the next node.</li> <li>\"Return\" means the function returns, indicating the current node has been fully accessed.</li> </ol> <1><2><3><4><5><6><7><8><9><10><11> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p> Figure 7-11 \u00a0 The recursive process of pre-order traversal </p>"},{"location":"chapter_tree/binary_tree_traversal/#2-complexity-analysis_1","title":"2. \u00a0 Complexity analysis","text":"<ul> <li>Time complexity is \\(O(n)\\): All nodes are visited once, using \\(O(n)\\) time.</li> <li>Space complexity is \\(O(n)\\): In the worst case, i.e., the tree degrades into a linked list, the recursion depth reaches \\(n\\), and the system occupies \\(O(n)\\) stack frame space.</li> </ul>"},{"location":"chapter_tree/summary/","title":"7.6 \u00a0 Summary","text":""},{"location":"chapter_tree/summary/#1-key-review","title":"1. \u00a0 Key review","text":"<ul> <li>A binary tree is a non-linear data structure that reflects the \"divide and conquer\" logic of splitting one into two. 
Each binary tree node contains a value and two pointers, which point to its left and right child nodes, respectively.</li> <li>For a node in a binary tree, the tree formed by its left (right) child node and all nodes under it is called the node's left (right) subtree.</li> <li>Related terminology of binary trees includes root node, leaf node, level, degree, edge, height, and depth, among others.</li> <li>The operations of initializing a binary tree, inserting nodes, and removing nodes are similar to those of linked list operations.</li> <li>Common types of binary trees include perfect binary trees, complete binary trees, full binary trees, and balanced binary trees. The perfect binary tree represents the ideal state, while the linked list is the worst state after degradation.</li> <li>A binary tree can be represented using an array by arranging the node values and empty slots in a level-order traversal sequence and implementing pointers based on the index mapping relationship between parent nodes and child nodes.</li> <li>The level-order traversal of a binary tree is a breadth-first search method, which reflects a layer-by-layer traversal manner of \"expanding circle by circle.\" It is usually implemented using a queue.</li> <li>Pre-order, in-order, and post-order traversals are all depth-first search methods, reflecting the traversal manner of \"going to the end first, then backtracking to continue.\" They are usually implemented using recursion.</li> <li>A binary search tree is an efficient data structure for element searching, with the time complexity of search, insert, and remove operations all being \\(O(\\log n)\\). 
When a binary search tree degrades into a linked list, these time complexities deteriorate to \\(O(n)\\).</li> <li>An AVL tree, also known as a balanced binary search tree, ensures that the tree remains balanced after continuous node insertions and removals through rotation operations.</li> <li>Rotation operations in an AVL tree include right rotation, left rotation, right-then-left rotation, and left-then-right rotation. After inserting or removing nodes, an AVL tree performs rotation operations from bottom to top to rebalance the tree.</li> </ul>"},{"location":"chapter_tree/summary/#2-q-a","title":"2. \u00a0 Q & A","text":"<p>Q: For a binary tree with only one node, are both the height of the tree and the depth of the root node \\(0\\)?</p> <p>Yes, because height and depth are typically defined as \"the number of edges passed.\"</p> <p>Q: The insertion and removal in a binary tree are generally completed by a set of operations. What does \"a set of operations\" refer to here? Can it be understood as the release of resources of the child nodes?</p> <p>Taking the binary search tree as an example, the operation of removing a node needs to be handled in three different scenarios, each requiring multiple steps of node operations.</p> <p>Q: Why are there three sequences: pre-order, in-order, and post-order for DFS traversal of a binary tree, and what are their uses?</p> <p>Similar to sequential and reverse traversal of arrays, pre-order, in-order, and post-order traversals are three methods of traversing a binary tree, allowing us to obtain a traversal result in a specific order. 
For example, in a binary search tree, since the node sizes satisfy <code>left child node value < root node value < right child node value</code>, we can obtain an ordered node sequence by traversing the tree in the \"left \\(\\rightarrow\\) root \\(\\rightarrow\\) right\" priority.</p> <p>Q: In the right rotation operation that handles the relationship among the unbalanced node <code>node</code>, its <code>child</code>, and <code>grand_child</code>, aren't the connection between <code>node</code> and its parent node and the original links of <code>node</code> lost after the right rotation?</p> <p>We need to view this problem from a recursive perspective. The <code>right_rotate(root)</code> operation passes the root node of the subtree and eventually returns the root node of the rotated subtree with <code>return child</code>. The connection between the subtree's root node and its parent node is established after this function returns, which is outside the scope of the right rotation operation's maintenance.</p> <p>Q: In C++, functions are divided into <code>private</code> and <code>public</code> sections. What considerations are there for this? Why are the <code>height()</code> function and the <code>updateHeight()</code> function placed in <code>public</code> and <code>private</code>, respectively?</p> <p>It depends on the scope of the method's use. If a method is only used within the class, then it is designed to be <code>private</code>. For example, it makes no sense for users to call <code>updateHeight()</code> on their own, as it is just a step in the insertion or removal operations. However, <code>height()</code> is for accessing node height, similar to <code>vector.size()</code>, thus it is set to <code>public</code> for use.</p> <p>Q: How do you build a binary search tree from a set of input data? 
Is the choice of root node very important?</p> <p>Yes. The method for building the tree is provided in <code>build_tree()</code> in the binary search tree code. As for the choice of the root node, we usually sort the input data and then select the middle element as the root node, recursively building the left and right subtrees. This approach maximizes the balance of the tree.</p> <p>Q: In Java, do you always have to use the <code>equals()</code> method for string comparison?</p> <p>In Java, for primitive data types, <code>==</code> is used to compare whether the values of two variables are equal. For reference types, the working principles of the two symbols are different.</p> <ul> <li><code>==</code>: Used to compare whether two variables point to the same object, i.e., whether their positions in memory are the same.</li> <li><code>equals()</code>: Used to compare whether the values of two objects are equal.</li> </ul> <p>Therefore, to compare values, we should use <code>equals()</code>. However, strings initialized with <code>String a = \"hi\"; String b = \"hi\";</code> are stored in the string constant pool and point to the same object, so <code>a == b</code> can also be used to compare the contents of two strings.</p> <p>Q: Before reaching the bottom level, is the number of nodes in the queue \\(2^h\\) in breadth-first traversal?</p> <p>Yes. For example, a perfect binary tree with height \\(h = 2\\) has a total of \\(n = 7\\) nodes; the bottom level then has \\(4 = 2^h = (n + 1) / 2\\) nodes.</p>"}]}