Complete Reference · File 05 of 05
C++ Programming

DSA with C++
Every Structure, Every Algorithm

The complete DSA reference in C++. Arrays, linked lists, stacks, queues, trees, heaps, graphs, hashing, sorting, searching, dynamic programming, greedy, divide and conquer, backtracking, bit manipulation, segment trees, tries — every data structure implemented from scratch, every algorithm traced with real examples, every pattern you need to solve any problem.

Ch 1 · Complexity Analysis
Ch 2 · Arrays & Strings
Ch 3 · Linked Lists
Ch 4 · Stacks & Queues
Ch 5 · Recursion & Backtracking
Ch 6 · Sorting Algorithms
Ch 7 · Searching & Binary Search
Ch 8 · Hashing
Ch 9 · Trees & BST
Ch 10 · Heaps & Priority Queues
Ch 11 · Graphs
Ch 12 · Dynamic Programming
Ch 13 · Greedy Algorithms
Ch 14 · Advanced Structures
Ch 15 · Bit Manipulation
Chapter 01
Complexity Analysis — Big-O, Space, and the Rules of the Game

1.1   Big-O Notation — What It Means Core

Big-O describes how an algorithm's time or space grows as input size n grows. It measures the worst-case upper bound, ignoring constants and lower-order terms. It tells you whether your solution will pass within time limits before you write a single line of code.

Notation   | Name         | n=10  | n=1000 | n=10⁶  | Typical Example
O(1)       | Constant     | 1     | 1      | 1      | Array index, hash lookup
O(log n)   | Logarithmic  | 3     | 10     | 20     | Binary search, balanced BST
O(n)       | Linear       | 10    | 1,000  | 10⁶    | Linear scan, single loop
O(n log n) | Linearithmic | 33    | 10,000 | 2×10⁷  | Merge sort, heap sort
O(n²)      | Quadratic    | 100   | 10⁶    | 10¹²   | Bubble sort, nested loops
O(2ⁿ)      | Exponential  | 1024  | ★      | ★      | Naive recursion, subset enumeration
O(n!)      | Factorial    | 3.6M  | ★      | ★      | Brute-force permutations

★ = astronomically large — infeasible at any time limit
The 10⁸ Rule — Quick Feasibility Check

A modern computer executes roughly 10⁸ simple operations per second. A 1-second time limit means your algorithm must do ≤ 10⁸ operations. For n = 10⁵: O(n²) = 10¹⁰ — TLE. O(n log n) = 1.7×10⁶ — fast. For n = 10⁶: O(n) = 10⁶ — ok. O(n log n) = 2×10⁷ — ok. Use this mental model before coding.
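The rule above can be turned into a back-of-envelope check in code. This is only an illustration: the helper names (`fits`, `quadratic`, `linearithmic`) and the 10⁸ constant are assumptions of this sketch, not part of any library.

```cpp
#include <cmath>

// Back-of-envelope feasibility: does an estimated op count fit in ~1e8?
// The 1e8 ops/sec figure is a rule of thumb, not a measured benchmark.
bool fits(double ops, double limit = 1e8) { return ops <= limit; }

double quadratic(double n)    { return n * n; }            // O(n^2) op estimate
double linearithmic(double n) { return n * std::log2(n); } // O(n log n) op estimate
```

For n = 10⁵, `fits(quadratic(1e5))` is false (10¹⁰ ops — TLE) while `fits(linearithmic(1e5))` is true (≈1.7×10⁶ ops).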

// Recognizing complexity from code patterns
O(1)       → return arr[i];
O(log n)   → while(lo <= hi) { mid = lo + (hi-lo)/2; ... }   // halving search space
O(n)       → for(int i=0; i<n; i++) process(arr[i]);
O(n log n) → sort(arr, arr+n);                               // recursive divide
O(n²)      → for(int i=0; i<n; i++)                          // two nested loops
                 for(int j=0; j<n; j++) ...
O(n³)      → for(int i=0; i<n; i++)                          // three nested loops
                 for(int j=0; j<n; j++)
                     for(int k=0; k<n; k++) ...
O(2ⁿ)      → int fib(int n){ return n<2 ? n : fib(n-1)+fib(n-2); }   // branching recursion

1.2   Big-O Rules & Amortized Analysis Core

Drop Constants

O(2n) = O(n). O(100) = O(1). Constants don't matter for growth rate. A loop that does 3 things per iteration is still O(n).

Drop Lower Terms

O(n² + n) = O(n²). O(n + log n) = O(n). The dominant term wins as n grows large.

Different Variables

Two loops: one over n, one over m → O(n + m), not O(n²). Nested over both → O(n × m).
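The rule can be verified by counting iterations directly. A minimal sketch — the helper names are made up for illustration:

```cpp
// Two independent loops do n + m iterations → O(n + m).
long long separateLoops(int n, int m) {
    long long ops = 0;
    for(int i = 0; i < n; i++) ops++;   // O(n)
    for(int j = 0; j < m; j++) ops++;   // O(m)
    return ops;                         // n + m
}

// A nested pair over both inputs does n · m iterations → O(n × m).
long long nestedLoops(int n, int m) {
    long long ops = 0;
    for(int i = 0; i < n; i++)
        for(int j = 0; j < m; j++) ops++;
    return ops;                         // n * m
}
```

With n = 1000 and m = 50, the separate loops do 1,050 iterations while the nested loops do 50,000 — the same inputs, a very different growth rate.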

Amortized Analysis

vector::push_back is O(1) amortized. Individual pushes may be O(n) (realloc), but averaged over n pushes the total is O(n) → O(1) each.
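One way to see the amortized bound is to count reallocations directly. Note that `vector`'s growth factor is implementation-defined (commonly 1.5× or 2×), so the exact reallocation points vary by standard library; `reallocCount` is an illustrative helper, not a standard API.

```cpp
#include <vector>
#include <cstddef>

// Counts how many times push_back triggers a reallocation over n pushes.
// Geometric growth guarantees only O(log n) reallocations, so the total
// copy work across n pushes is O(n) → O(1) amortized per push.
int reallocCount(int n) {
    std::vector<int> v;
    std::size_t cap = v.capacity();
    int reallocs = 0;
    for(int i = 0; i < n; i++) {
        v.push_back(i);              // O(1) amortized
        if(v.capacity() != cap) {    // A reallocation (O(size) copy) happened here
            cap = v.capacity();
            reallocs++;
        }
    }
    return reallocs;
}
```

For 1000 pushes you should see on the order of 10–20 reallocations, not 1000 — that is the amortization at work.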

// Space complexity — memory used relative to input size

// O(1) space — only a few variables regardless of input
int sum(int* arr, int n) {
    int total = 0;                               // O(1) space
    for(int i = 0; i < n; i++) total += arr[i];
    return total;
}

// O(n) space — extra array proportional to input
vector<int> prefix(int* arr, int n) {
    vector<int> pre(n);                          // O(n) space
    pre[0] = arr[0];
    for(int i = 1; i < n; i++) pre[i] = pre[i-1] + arr[i];
    return pre;
}

// O(log n) space — recursion stack in binary search
// O(n)     space — recursion stack in linear recursion
// O(V+E)   space — graph adjacency list

// IMPORTANT: The call stack counts as space!
// Recursive quicksort: O(log n) average stack depth, O(n) worst

1.3   Data Structure Complexity Cheatsheet Reference

Structure              | Access     | Search     | Insert     | Delete     | Space
Array                  | O(1)       | O(n)       | O(n)       | O(n)       | O(n)
Dynamic Array (vector) | O(1)       | O(n)       | O(1)★      | O(n)       | O(n)
Singly Linked List     | O(n)       | O(n)       | O(1)†      | O(1)†     | O(n)
Stack / Queue          | O(n)       | O(n)       | O(1)       | O(1)       | O(n)
Hash Table (unordered) | —          | O(1)★      | O(1)★      | O(1)★      | O(n)
Binary Search Tree     | O(log n)★  | O(log n)★  | O(log n)★  | O(log n)★  | O(n)
AVL / Red-Black Tree   | O(log n)   | O(log n)   | O(log n)   | O(log n)   | O(n)
Binary Heap            | O(n)       | O(n)       | O(log n)   | O(log n)   | O(n)
Trie                   | O(k)       | O(k)       | O(k)       | O(k)       | O(n·k)
Segment Tree           | O(log n)   | O(log n)   | O(log n)   | —          | O(n)

★ = average/amortized case  |  † = with known position  |  k = key/word length  |  — = not applicable

Chapter 02
Arrays & Strings — The Foundation of Everything

2.1   Two-Pointer Technique Pattern

Use two pointers moving through an array to reduce O(n²) brute force to O(n). Works on sorted arrays or when one pointer chases another (sliding window variant).

Two Sum II — find pair that sums to target in SORTED array.

// O(n) time, O(1) space — classic two-pointer
pair<int,int> twoSum(const vector<int>& arr, int target) {
    int lo = 0, hi = (int)arr.size() - 1;
    while(lo < hi) {
        int s = arr[lo] + arr[hi];
        if(s == target) return {lo, hi};
        if(s < target) lo++;   // Sum too small → move left pointer right
        else hi--;             // Sum too big → move right pointer left
    }
    return {-1, -1};
}

Container with Most Water — maximize trapped water between two walls.

int maxWater(const vector<int>& h) {
    int lo = 0, hi = (int)h.size() - 1, best = 0;
    while(lo < hi) {
        best = max(best, min(h[lo], h[hi]) * (hi - lo));
        // Move the shorter wall — keeping the taller wall can only help
        h[lo] < h[hi] ? lo++ : hi--;
    }
    return best;
} // O(n) time

Remove duplicates from sorted array in-place.

int removeDuplicates(vector<int>& nums) {
    if(nums.empty()) return 0;
    int slow = 0;                               // slow: last unique position
    for(int fast = 1; fast < (int)nums.size(); fast++) {
        if(nums[fast] != nums[slow])            // Found new unique element
            nums[++slow] = nums[fast];          // Write it after last unique
    }
    return slow + 1;                            // Length of unique array
} // O(n) time, O(1) space

2.2   Sliding Window — Subarray Problems Pattern

Maintain a window [left, right] over the array. Expand right to add elements, shrink left to remove. Converts O(n²) brute force to O(n).

Longest substring without repeating characters.

int lengthOfLongestSubstring(const string& s) {
    unordered_map<char,int> last;   // char → last seen index
    int best = 0, left = 0;
    for(int right = 0; right < (int)s.size(); right++) {
        if(last.count(s[right]))
            left = max(left, last[s[right]] + 1);   // Shrink: skip past duplicate
        last[s[right]] = right;
        best = max(best, right - left + 1);
    }
    return best;
} // O(n) time, O(1) space (charset is bounded)

Minimum window substring — find smallest window in s containing all chars of t.

string minWindow(const string& s, const string& t) {
    unordered_map<char,int> need, have;
    for(char c : t) need[c]++;
    int formed = 0, required = need.size();
    int best = INT_MAX, bl = 0, left = 0;
    for(int right = 0; right < (int)s.size(); right++) {
        char c = s[right];
        have[c]++;
        if(need.count(c) && have[c] == need[c]) formed++;
        while(formed == required) {   // Window is valid — try to shrink
            if(right - left + 1 < best) { best = right - left + 1; bl = left; }
            have[s[left]]--;
            if(need.count(s[left]) && have[s[left]] < need[s[left]]) formed--;
            left++;
        }
    }
    return best == INT_MAX ? "" : s.substr(bl, best);
} // O(|s| + |t|) time

2.3   Prefix Sum & Difference Array Pattern

Range sum query — answer Q range-sum queries in O(1) each after O(n) preprocessing.

struct RangeSum {
    vector<long long> pre;
    RangeSum(const vector<int>& a) : pre(a.size()+1, 0) {
        for(int i = 0; i < (int)a.size(); i++) pre[i+1] = pre[i] + a[i];
    }
    // Sum of a[l..r] inclusive — O(1)
    long long query(int l, int r) { return pre[r+1] - pre[l]; }
};

// 2D Prefix Sum — sum of any rectangle in O(1)
vector<vector<long long>> buildPrefix2D(const vector<vector<int>>& g) {
    int R = g.size(), C = g[0].size();
    vector<vector<long long>> p(R+1, vector<long long>(C+1, 0));
    for(int i = 1; i <= R; i++)
        for(int j = 1; j <= C; j++)
            p[i][j] = g[i-1][j-1] + p[i-1][j] + p[i][j-1] - p[i-1][j-1];
    return p;
}
// Query (r1,c1) to (r2,c2): p[r2+1][c2+1] - p[r1][c2+1] - p[r2+1][c1] + p[r1][c1]

Subarray with given XOR — count subarrays with XOR equal to k.

int countXorSubarrays(const vector<int>& a, int k) {
    unordered_map<int,int> freq;
    freq[0] = 1;                      // Empty prefix has XOR = 0
    int prefXor = 0, count = 0;
    for(int x : a) {
        prefXor ^= x;
        count += freq[prefXor ^ k];   // prefXor ^ (prefXor^k) = k
        freq[prefXor]++;
    }
    return count;
} // O(n) time — prefix-XOR + hash map technique

2.4   String Algorithms — KMP, Z-function, Hashing Essential

KMP — Pattern Matching in O(n+m)

// KMP — find all occurrences of pattern p in text t
// Failure function: lps[i] = length of the longest proper prefix of p[0..i]
// that is also a suffix of p[0..i]
vector<int> buildLPS(const string& p) {
    int n = p.size();
    vector<int> lps(n, 0);
    int len = 0, i = 1;
    while(i < n) {
        if(p[i] == p[len]) { lps[i++] = ++len; }
        else if(len)       { len = lps[len-1]; }   // Fall back using lps
        else               { lps[i++] = 0; }
    }
    return lps;
}

vector<int> kmpSearch(const string& t, const string& p) {
    vector<int> lps = buildLPS(p);
    vector<int> matches;
    int i = 0, j = 0;
    while(i < (int)t.size()) {
        if(t[i] == p[j]) { i++; j++; }
        if(j == (int)p.size()) {
            matches.push_back(i - j);   // Match found at i-j
            j = lps[j-1];
        } else if(i < (int)t.size() && t[i] != p[j]) {
            j ? j = lps[j-1] : i++;
        }
    }
    return matches;
} // O(n+m) time, O(m) space

Rabin-Karp Rolling Hash — O(n+m) average

// Rolling hash — slide the window hash without rehashing the whole window
// Convention: hash(s) = s[0]·B^(m-1) + s[1]·B^(m-2) + … + s[m-1]  (mod M),
// so rolling needs only a subtract, a multiply, and an add — no division.
bool rabinKarp(const string& t, const string& p) {
    const long long BASE = 31, MOD = 1e9 + 7;
    int n = t.size(), m = p.size();
    if(m > n) return false;
    long long hp = 0, ht = 0, pw = 1;   // pw = BASE^(m-1) after the loop
    for(int i = 0; i < m; i++) {
        hp = (hp * BASE + (p[i] - 'a' + 1)) % MOD;
        ht = (ht * BASE + (t[i] - 'a' + 1)) % MOD;
        if(i < m - 1) pw = pw * BASE % MOD;
    }
    for(int i = 0; i + m <= n; i++) {
        if(ht == hp && t.substr(i, m) == p) return true;   // Verify to avoid collision
        if(i + m < n) {   // Roll: drop t[i], append t[i+m]
            ht = (ht - (t[i] - 'a' + 1) * pw % MOD + MOD) % MOD;
            ht = (ht * BASE + (t[i + m] - 'a' + 1)) % MOD;
        }
    }
    return false;
} // O(n+m) average
Chapter 03
Linked Lists — Nodes, Pointers & Classic Problems

3.1   Singly & Doubly Linked List — Full Implementation Core

Why Linked Lists Exist

An array provides O(1) random access, but inserting or deleting an element in the middle requires shifting all subsequent elements — an O(n) operation. A Linked List solves this by storing data in scattered memory blocks (nodes) connected by pointers. You lose O(1) random access (no indexing), but gain O(1) insertions and deletions anywhere in the list (if you have the pointer). You reach for a Linked List when you need constant-time splicing, or when implementing dynamic queues/stacks where you only care about the ends.

// Node definition
struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

// Singly Linked List
class LinkedList {
    ListNode* head = nullptr;
    int size_ = 0;
public:
    void pushFront(int val) {   // O(1)
        ListNode* node = new ListNode(val);
        node->next = head;
        head = node;
        size_++;
    }
    void pushBack(int val) {    // O(n) without a tail pointer
        ListNode* node = new ListNode(val);
        if(!head) { head = node; }
        else {
            ListNode* cur = head;
            while(cur->next) cur = cur->next;
            cur->next = node;
        }
        size_++;
    }
    void deleteVal(int val) {   // O(n)
        ListNode dummy(0);
        dummy.next = head;
        ListNode* prev = &dummy;
        while(prev->next) {
            if(prev->next->val == val) {
                ListNode* del = prev->next;
                prev->next = del->next;
                delete del;
                size_--;
                head = dummy.next;   // Handles deletion of the head node too
                return;
            }
            prev = prev->next;
        }
    }
    void print() const {
        for(ListNode* c = head; c; c = c->next) cout << c->val << " → ";
        cout << "null\n";
    }
    int size() const { return size_; }
    ~LinkedList() {
        while(head) { ListNode* t = head; head = head->next; delete t; }
    }
};

3.2   Linked List Patterns — Every Classic Problem Must Know

Reverse a Linked List

// Iterative — O(n) time, O(1) space
ListNode* reverse(ListNode* head) {
    ListNode* prev = nullptr, *cur = head;
    while(cur) {
        ListNode* nxt = cur->next;   // Save next
        cur->next = prev;            // Reverse pointer
        prev = cur;                  // Advance prev
        cur = nxt;                   // Advance cur
    }
    return prev;                     // New head
}

// Recursive
ListNode* reverseRec(ListNode* head) {
    if(!head || !head->next) return head;
    ListNode* newHead = reverseRec(head->next);
    head->next->next = head;
    head->next = nullptr;
    return newHead;
}

Floyd's Cycle Detection — Fast & Slow Pointers

// Detect a cycle in O(n) time, O(1) space
bool hasCycle(ListNode* head) {
    ListNode* slow = head, *fast = head;
    while(fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if(slow == fast) return true;   // They meet → cycle exists
    }
    return false;
}

// Find the start of the cycle
ListNode* detectCycleStart(ListNode* head) {
    ListNode* slow = head, *fast = head;
    while(fast && fast->next) {
        slow = slow->next; fast = fast->next->next;
        if(slow == fast) {
            ListNode* ptr = head;
            while(ptr != slow) { ptr = ptr->next; slow = slow->next; }
            return ptr;   // Cycle start
        }
    }
    return nullptr;
}

// Find the middle of a linked list
ListNode* findMiddle(ListNode* head) {
    ListNode* slow = head, *fast = head;
    while(fast->next && fast->next->next) {
        slow = slow->next; fast = fast->next->next;
    }
    return slow;   // Slow is at the middle when fast reaches the end
}

Merge Two Sorted Lists

ListNode* mergeSorted(ListNode* l1, ListNode* l2) {
    ListNode dummy(0);
    ListNode* tail = &dummy;
    while(l1 && l2) {
        if(l1->val <= l2->val) { tail->next = l1; l1 = l1->next; }
        else                   { tail->next = l2; l2 = l2->next; }
        tail = tail->next;
    }
    tail->next = l1 ? l1 : l2;   // Attach the remainder
    return dummy.next;
}

// Find the kth node from the end (one pass)
ListNode* kthFromEnd(ListNode* head, int k) {
    ListNode* fast = head, *slow = head;
    while(k--) fast = fast->next;                         // Advance fast by k
    while(fast) { fast = fast->next; slow = slow->next; } // Move both
    return slow;                                          // Slow is k from the end
}

LRU Cache — implement get/put in O(1) using doubly linked list + hash map.

class LRUCache {
    struct Node { int key, val; Node *prev, *next; };
    int cap;
    unordered_map<int, Node*> cache;
    Node *head, *tail;   // Dummy head and tail

    void remove(Node* n) {
        n->prev->next = n->next;
        n->next->prev = n->prev;
    }
    void insertFront(Node* n) {   // MRU position
        n->next = head->next;
        n->prev = head;
        head->next->prev = n;
        head->next = n;
    }
public:
    LRUCache(int capacity) : cap(capacity) {
        head = new Node{0, 0, nullptr, nullptr};
        tail = new Node{0, 0, nullptr, nullptr};
        head->next = tail;
        tail->prev = head;
    }
    int get(int key) {
        if(!cache.count(key)) return -1;
        Node* n = cache[key];
        remove(n); insertFront(n);   // Move to MRU
        return n->val;
    }
    void put(int key, int val) {
        if(cache.count(key)) {
            remove(cache[key]); delete cache[key];
        } else if((int)cache.size() == cap) {
            Node* lru = tail->prev;   // Least recently used
            remove(lru);
            cache.erase(lru->key);
            delete lru;
        }
        Node* n = new Node{key, val};
        insertFront(n);
        cache[key] = n;
    }
};
Chapter 04
Stacks & Queues — LIFO, FIFO & Monotonic Patterns

4.1   Stack Patterns — Matching, Spans, Next Greater Pattern

Why Stacks? The Power of Invariants

Why use a Stack instead of just a vector? Because restrictions create invariants. By forbidding random access, a Stack (Last-In, First-Out) mathematically forces you to process the most recently seen incomplete element before any older ones. You reach for a stack when you need to match history: parsing parentheses, undo/redo features, or finding the "next greater element" where recent smaller elements are shadowed by larger ones.

Valid Parentheses — check balanced brackets.

bool isValid(const string& s) {
    stack<char> st;
    unordered_map<char,char> close = {{'(',')'},{'{','}'},{'[',']'}};   // open → close
    for(char c : s) {
        if(close.count(c)) {   // Opening bracket
            st.push(c);
        } else {               // Closing bracket
            if(st.empty() || close[st.top()] != c) return false;
            st.pop();
        }
    }
    return st.empty();
} // O(n) time

Next Greater Element — for each element find the first greater to its right.

// Monotonic stack — the stack always holds indices of a decreasing sequence
vector<int> nextGreater(const vector<int>& a) {
    int n = a.size();
    vector<int> res(n, -1);
    stack<int> st;   // Stores INDICES
    for(int i = 0; i < n; i++) {
        while(!st.empty() && a[i] > a[st.top()]) {
            res[st.top()] = a[i];   // a[i] is the next greater for the stack's top
            st.pop();
        }
        st.push(i);
    }
    return res;
}
// Input:  [2, 1, 2, 4,  3]
// Output: [4, 2, 4, -1, -1]

Largest Rectangle in Histogram — classic monotonic stack.

int largestRectangle(const vector<int>& h) {
    stack<int> st;
    int best = 0;
    for(int i = 0; i <= (int)h.size(); i++) {
        int cur = (i == (int)h.size()) ? 0 : h[i];   // Sentinel 0 flushes the stack
        while(!st.empty() && cur < h[st.top()]) {
            int height = h[st.top()]; st.pop();
            int width = st.empty() ? i : i - st.top() - 1;
            best = max(best, height * width);
        }
        st.push(i);
    }
    return best;
} // O(n) time — each index is pushed and popped once

4.2   Monotonic Queue — Sliding Window Maximum Pattern

Sliding window maximum — maximum in every window of size k in O(n).

vector<int> slidingMax(const vector<int>& a, int k) {
    deque<int> dq;   // Stores indices; front holds the index of the max
    vector<int> res;
    for(int i = 0; i < (int)a.size(); i++) {
        // Remove indices that fell outside the window
        while(!dq.empty() && dq.front() < i - k + 1) dq.pop_front();
        // Remove elements smaller than current (they can never be the max)
        while(!dq.empty() && a[dq.back()] < a[i]) dq.pop_back();
        dq.push_back(i);
        if(i >= k - 1) res.push_back(a[dq.front()]);   // Window full
    }
    return res;
} // O(n) — each element enters and leaves the deque at most once

4.3   Special Stack Designs Core

// Min Stack — getMin() in O(1)
class MinStack {
    stack<int> st, minSt;
public:
    void push(int val) {
        st.push(val);
        minSt.push(minSt.empty() ? val : min(val, minSt.top()));
    }
    void pop()   { st.pop(); minSt.pop(); }
    int top()    { return st.top(); }
    int getMin() { return minSt.top(); }
};

// Queue using two stacks — amortized O(1) enqueue/dequeue
class MyQueue {
    stack<int> inbox, outbox;
public:
    void enqueue(int val) { inbox.push(val); }
    int dequeue() {
        if(outbox.empty())
            while(!inbox.empty()) { outbox.push(inbox.top()); inbox.pop(); }
        int v = outbox.top(); outbox.pop();
        return v;
    }
    int front() {
        if(outbox.empty())
            while(!inbox.empty()) { outbox.push(inbox.top()); inbox.pop(); }
        return outbox.top();
    }
};
Chapter 05
Recursion & Backtracking — Exploring All Possibilities

5.1   Backtracking Template — The Universal Pattern Must Know

Backtracking Template

Choose → Explore → Un-choose. At each step, make a choice, recurse, then undo the choice to try the next option. Base case: reached a solution or dead end. Prune: skip choices that can't lead to a valid solution.

// Universal backtracking skeleton (pseudocode)
void backtrack(state, choices, results) {
    if(isSolution(state)) { results.add(state); return; }
    for(auto choice : choices) {
        if(!isValid(choice, state)) continue;   // Prune invalid choices
        makeChoice(choice, state);              // Choose
        backtrack(state, choices, results);     // Explore
        undoChoice(choice, state);              // Un-choose
    }
}

Subsets — generate all subsets of a set.

vector<vector<int>> subsets(const vector<int>& nums) {
    vector<vector<int>> res;
    vector<int> curr;
    function<void(int)> bt = [&](int start) {
        res.push_back(curr);   // Every state is a valid subset
        for(int i = start; i < (int)nums.size(); i++) {
            curr.push_back(nums[i]);   // Choose
            bt(i + 1);                 // Explore
            curr.pop_back();           // Un-choose
        }
    };
    bt(0);
    return res;   // 2^n subsets
}

Permutations — generate all permutations of an array.

vector<vector<int>> permute(const vector<int>& nums) {
    vector<vector<int>> res;
    vector<int> curr;
    vector<bool> used(nums.size(), false);
    function<void()> bt = [&]() {
        if(curr.size() == nums.size()) { res.push_back(curr); return; }
        for(int i = 0; i < (int)nums.size(); i++) {
            if(used[i]) continue;
            used[i] = true; curr.push_back(nums[i]);   // Choose
            bt();                                      // Explore
            used[i] = false; curr.pop_back();          // Un-choose
        }
    };
    bt();
    return res;   // n! permutations
}

N-Queens — place N queens on N×N board so none attack each other.

vector<vector<string>> solveNQueens(int n) {
    vector<vector<string>> res;
    vector<string> board(n, string(n, '.'));
    vector<bool> col(n, false), diag1(2*n, false), diag2(2*n, false);
    function<void(int)> bt = [&](int row) {
        if(row == n) { res.push_back(board); return; }
        for(int c = 0; c < n; c++) {
            if(col[c] || diag1[row - c + n] || diag2[row + c]) continue;
            board[row][c] = 'Q';                                   // Choose
            col[c] = diag1[row - c + n] = diag2[row + c] = true;
            bt(row + 1);                                           // Explore
            board[row][c] = '.';                                   // Un-choose
            col[c] = diag1[row - c + n] = diag2[row + c] = false;
        }
    };
    bt(0);
    return res;
}
Chapter 06
Sorting Algorithms — Every Sort, Implemented & Analyzed

6.1   Sorting Algorithm Comparison Reference

AlgorithmBestAverageWorstSpaceStable
Bubble SortO(n)O(n²)O(n²)O(1)Yes
Selection SortO(n²)O(n²)O(n²)O(1)No
Insertion SortO(n)O(n²)O(n²)O(1)Yes
Merge SortO(n log n)O(n log n)O(n log n)O(n)Yes
Quick SortO(n log n)O(n log n)O(n²)O(log n)No
Heap SortO(n log n)O(n log n)O(n log n)O(1)No
Counting SortO(n+k)O(n+k)O(n+k)O(k)Yes
Radix SortO(nk)O(nk)O(nk)O(n+k)Yes

6.2   Full Implementations — Every Essential Sort Must Know

Merge Sort — Divide, Sort, Merge

void merge(int* a, int lo, int mid, int hi) {
    vector<int> tmp(hi - lo + 1);
    int i = lo, j = mid + 1, k = 0;
    while(i <= mid && j <= hi) tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
    while(i <= mid) tmp[k++] = a[i++];
    while(j <= hi)  tmp[k++] = a[j++];
    for(int x = 0; x < (int)tmp.size(); x++) a[lo + x] = tmp[x];
}
void mergeSort(int* a, int lo, int hi) {
    if(lo >= hi) return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);
    mergeSort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}
// T(n) = 2T(n/2) + O(n) → O(n log n) by the Master Theorem

Quick Sort — Partition Around Pivot

int partition(int* a, int lo, int hi) {
    int pivot = a[hi], i = lo - 1;
    for(int j = lo; j < hi; j++)
        if(a[j] <= pivot) swap(a[++i], a[j]);
    swap(a[i + 1], a[hi]);
    return i + 1;
}
void quickSort(int* a, int lo, int hi) {
    if(lo >= hi) return;
    // Randomize the pivot to avoid O(n²) on sorted input
    int r = lo + rand() % (hi - lo + 1);
    swap(a[r], a[hi]);
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);
    quickSort(a, p + 1, hi);
}

// 3-way partition (Dutch National Flag) — handles duplicates in O(n)
void quickSort3Way(int* a, int lo, int hi) {
    if(lo >= hi) return;
    int lt = lo, gt = hi, cur = lo + 1, pivot = a[lo];
    while(cur <= gt) {
        if     (a[cur] < pivot) swap(a[lt++], a[cur++]);
        else if(a[cur] > pivot) swap(a[cur], a[gt--]);
        else cur++;
    }
    quickSort3Way(a, lo, lt - 1);
    quickSort3Way(a, gt + 1, hi);
}

Counting Sort & Radix Sort — Linear Time

// Counting Sort — O(n+k) where k = range of values
void countSort(vector<int>& a, int maxVal) {
    vector<int> cnt(maxVal + 1, 0);
    for(int x : a) cnt[x]++;
    int i = 0;
    for(int v = 0; v <= maxVal; v++)
        while(cnt[v]--) a[i++] = v;
}

// Radix Sort — sort integers digit by digit using a stable counting sort
void countSortByDigit(vector<int>& a, int exp) {
    vector<int> out(a.size()), cnt(10, 0);
    for(int x : a) cnt[(x / exp) % 10]++;
    for(int i = 1; i < 10; i++) cnt[i] += cnt[i - 1];
    for(int i = (int)a.size() - 1; i >= 0; i--)
        out[--cnt[(a[i] / exp) % 10]] = a[i];
    a = out;
}
void radixSort(vector<int>& a) {
    int mx = *max_element(a.begin(), a.end());
    for(int exp = 1; mx / exp > 0; exp *= 10) countSortByDigit(a, exp);
}
// O(nk) where k = number of digits — beats O(n log n) for bounded integers
Chapter 07
Searching & Binary Search — The O(log n) Swiss Army Knife

7.1   Binary Search — All Templates Critical

Binary search is not just for sorted arrays. Any problem where the answer space is monotone (if x works, x-1 also works, or vice versa) can be binary-searched. This is one of the most powerful techniques in competitive programming.

// Template 1 — Exact match
int binarySearch(const vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size() - 1;
    while(lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // Avoid (lo+hi)/2 overflow
        if(a[mid] == target) return mid;
        else if(a[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

// Template 2 — First position where a condition is true
// Find the first index where a[i] >= target (lower_bound)
int lowerBound(const vector<int>& a, int target) {
    int lo = 0, hi = a.size();   // hi = n (past the end) is a valid answer position
    while(lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if(a[mid] < target) lo = mid + 1;   // Not yet — move right
        else hi = mid;                      // Might be the answer — keep searching left
    }
    return lo;   // First index where a[i] >= target
}

// Template 3 — Last position where a condition is true
// Find the last index where a[i] <= target (upper_bound - 1)
int upperBound(const vector<int>& a, int target) {
    int lo = 0, hi = a.size();
    while(lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if(a[mid] <= target) lo = mid + 1;   // Still valid — move right
        else hi = mid;
    }
    return lo;   // First index where a[i] > target
}

// BINARY SEARCH ON THE ANSWER — the most powerful variant
// "Is it possible to do X with limit Y?" — binary search on Y
// Example: minimize the largest subarray sum when splitting into k groups
bool canSplit(vector<int>& a, int k, long long limit) {
    int groups = 1; long long cur = 0;
    for(int x : a) {
        if(x > limit) return false;
        if(cur + x > limit) { groups++; cur = 0; }
        cur += x;
    }
    return groups <= k;
}
long long splitArray(vector<int>& a, int k) {
    long long lo = *max_element(a.begin(), a.end());
    long long hi = accumulate(a.begin(), a.end(), 0LL);
    while(lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        canSplit(a, k, mid) ? hi = mid : lo = mid + 1;
    }
    return lo;
} // O(n log(sum)) — binary search on the answer space

7.2   Binary Search on Special Cases Patterns

// Search in a Rotated Sorted Array
int searchRotated(vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size() - 1;
    while(lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if(a[mid] == target) return mid;
        if(a[lo] <= a[mid]) {   // Left half is sorted
            if(a[lo] <= target && target < a[mid]) hi = mid - 1;
            else lo = mid + 1;
        } else {                // Right half is sorted
            if(a[mid] < target && target <= a[hi]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return -1;
}

// Find a peak element (mountain array)
int findPeak(vector<int>& a) {
    int lo = 0, hi = (int)a.size() - 1;
    while(lo < hi) {
        int mid = lo + (hi - lo) / 2;
        a[mid] < a[mid + 1] ? lo = mid + 1 : hi = mid;   // Go toward the higher side
    }
    return lo;
}

// Integer square root using binary search
int mySqrt(int x) {
    if(x < 2) return x;
    long long lo = 1, hi = x / 2;
    while(lo < hi) {
        long long mid = lo + (hi - lo + 1) / 2;   // Round up to avoid an infinite loop
        mid * mid <= x ? lo = mid : hi = mid - 1;
    }
    return lo;
}
Chapter 08
Hashing — O(1) Lookup, Collision Handling & Applications

8.1   Hash Table from Scratch — Chaining & Open Addressing Core

// Hash table with separate chaining
class HashTable {
    static const int BUCKETS = 1009;      // Prime size reduces clustering
    vector<list<pair<int,int>>> table;    // Each bucket: list of (key, val)
    int hash(int key) const { return ((key % BUCKETS) + BUCKETS) % BUCKETS; }
public:
    HashTable() : table(BUCKETS) {}
    void insert(int key, int val) {
        auto& bucket = table[hash(key)];
        for(auto& p : bucket)
            if(p.first == key) { p.second = val; return; }   // Update existing
        bucket.emplace_back(key, val);
    }
    int get(int key) const {
        for(const auto& p : table[hash(key)])
            if(p.first == key) return p.second;
        return -1;
    }
    bool remove(int key) {
        auto& bucket = table[hash(key)];
        for(auto it = bucket.begin(); it != bucket.end(); it++) {
            if(it->first == key) { bucket.erase(it); return true; }
        }
        return false;
    }
};

8.2   Hashing Patterns — Anagram, Frequency, Grouping Patterns

Group Anagrams — group strings that are anagrams of each other.

vector<vector<string>> groupAnagrams(const vector<string>& words) {
    unordered_map<string, vector<string>> groups;
    for(const auto& w : words) {
        string key = w;
        sort(key.begin(), key.end());   // Sorted version = canonical key
        groups[key].push_back(w);
    }
    vector<vector<string>> res;
    for(auto& [k, v] : groups) res.push_back(v);
    return res;
} // O(n · k log k) — n words, each of length k

Subarray sum equals K — count subarrays with given sum.

int subarraySum(const vector<int>& nums, int k) {
    unordered_map<int,int> prefixCount;
    prefixCount[0] = 1;                      // Empty prefix
    int prefSum = 0, count = 0;
    for(int x : nums) {
        prefSum += x;
        count += prefixCount[prefSum - k];   // How many prefixes sum to prefSum-k?
        prefixCount[prefSum]++;
    }
    return count;
} // O(n) — prefix sum + hash map

Longest consecutive sequence — find length of longest sequence of consecutive integers.

int longestConsecutive(const vector<int>& nums) {
    unordered_set<int> s(nums.begin(), nums.end());
    int best = 0;
    for(int n : nums) {
        if(s.count(n - 1)) continue;   // n is not the start of a sequence
        int len = 1;
        while(s.count(++n)) len++;
        best = max(best, len);
    }
    return best;
} // O(n) — each element is visited at most twice
Chapter 09
Trees & BST — Binary Trees, Traversals & Search Trees

9.1   Binary Tree — Node, Traversals & Properties Core

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// ── DFS TRAVERSALS ──────────────────────────────────────
// Pre-order: Root → Left → Right (copies tree structure)
void preorder(TreeNode* node, vector<int>& res) {
    if(!node) return;
    res.push_back(node->val);   // Visit root FIRST
    preorder(node->left, res);
    preorder(node->right, res);
}
// In-order: Left → Root → Right (gives SORTED order for a BST)
void inorder(TreeNode* node, vector<int>& res) {
    if(!node) return;
    inorder(node->left, res);
    res.push_back(node->val);   // Visit root BETWEEN children
    inorder(node->right, res);
}
// Post-order: Left → Right → Root (used for deletion, size calculation)
void postorder(TreeNode* node, vector<int>& res) {
    if(!node) return;
    postorder(node->left, res);
    postorder(node->right, res);
    res.push_back(node->val);   // Visit root LAST
}

// ── BFS (LEVEL ORDER) ───────────────────────────────────
vector<vector<int>> levelOrder(TreeNode* root) {
    vector<vector<int>> res;
    if(!root) return res;
    queue<TreeNode*> q;
    q.push(root);
    while(!q.empty()) {
        int sz = q.size();
        res.emplace_back();
        while(sz--) {
            TreeNode* n = q.front(); q.pop();
            res.back().push_back(n->val);
            if(n->left)  q.push(n->left);
            if(n->right) q.push(n->right);
        }
    }
    return res;
}

9.2   Classic Tree Problems — Height, Diameter, LCA Must Know

// Height / Maximum Depth
int height(TreeNode* root) {
    if(!root) return 0;
    return 1 + max(height(root->left), height(root->right));
}

// Diameter — longest path through any node
int diameter(TreeNode* root) {
    int best = 0;
    function<int(TreeNode*)> dfs = [&](TreeNode* node) -> int {
        if(!node) return 0;
        int L = dfs(node->left), R = dfs(node->right);
        best = max(best, L + R);   // Diameter through this node
        return 1 + max(L, R);      // Height from this node
    };
    dfs(root);
    return best;
}

// Lowest Common Ancestor (LCA)
TreeNode* lca(TreeNode* root, TreeNode* p, TreeNode* q) {
    if(!root || root == p || root == q) return root;
    TreeNode* L = lca(root->left, p, q);
    TreeNode* R = lca(root->right, p, q);
    if(L && R) return root;   // p and q in different subtrees → this is the LCA
    return L ? L : R;         // Both in the same subtree
}

// Is the binary tree balanced?
bool isBalanced(TreeNode* root) {
    function<int(TreeNode*)> check = [&](TreeNode* node) -> int {
        if(!node) return 0;
        int L = check(node->left), R = check(node->right);
        if(L == -1 || R == -1 || abs(L - R) > 1) return -1;   // -1 = unbalanced
        return 1 + max(L, R);
    };
    return check(root) != -1;
}

// Serialize / Deserialize a binary tree (pre-order with null markers)
string serialize(TreeNode* root) {
    if(!root) return "null,";
    return to_string(root->val) + "," + serialize(root->left) + serialize(root->right);
}
TreeNode* deserialize(const string& data) {
    istringstream ss(data);
    function<TreeNode*()> parse = [&]() -> TreeNode* {
        string tok;
        getline(ss, tok, ',');
        if(tok == "null") return nullptr;
        TreeNode* node = new TreeNode(stoi(tok));
        node->left = parse();
        node->right = parse();
        return node;
    };
    return parse();
}

9.3   Binary Search Tree — Full Implementation Core

class BST {
    TreeNode* root = nullptr;

    TreeNode* insert(TreeNode* node, int val) {
        if(!node) return new TreeNode(val);
        if(val < node->val)      node->left  = insert(node->left, val);
        else if(val > node->val) node->right = insert(node->right, val);
        return node;
    }
    bool search(TreeNode* node, int val) const {
        if(!node) return false;
        if(val == node->val) return true;
        return val < node->val ? search(node->left, val) : search(node->right, val);
    }
    TreeNode* minNode(TreeNode* node) {
        while(node->left) node = node->left;
        return node;
    }
    TreeNode* remove(TreeNode* node, int val) {
        if(!node) return nullptr;
        if(val < node->val)      node->left  = remove(node->left, val);
        else if(val > node->val) node->right = remove(node->right, val);
        else {
            if(!node->left)  { TreeNode* r = node->right; delete node; return r; }
            if(!node->right) { TreeNode* l = node->left;  delete node; return l; }
            // Two children: replace with the in-order successor
            TreeNode* succ = minNode(node->right);
            node->val = succ->val;
            node->right = remove(node->right, succ->val);
        }
        return node;
    }
public:
    void insert(int v)       { root = insert(root, v); }
    bool search(int v) const { return search(root, v); }
    void remove(int v)       { root = remove(root, v); }
    // In-order traversal yields sorted order
};
Chapter 10
Heaps & Priority Queues — The O(log n) Structure for Extremes

10.1   Binary Heap — Full Array-Based Implementation Core

Why Heaps Exist

What problem does a Heap solve that a sorted array cannot? A sorted array gives you the maximum in O(1), but inserting a new element is O(n). An unsorted array gives you O(1) insertion, but finding the maximum is O(n). A Heap is the perfect middle ground: it maintains just enough order to give you the maximum in O(1), while allowing insertions and deletions in O(log n). You reach for a Heap when you need to repeatedly process the "best" or "most urgent" item in a dynamic dataset.

// Max-Heap using array (index 1-based for cleaner parent/child math)
class MaxHeap {
    vector<int> h;
    void siftUp(int i) {
        while(i > 1 && h[i] > h[i/2]) {   // Parent at i/2
            swap(h[i], h[i/2]);
            i /= 2;
        }
    }
    void siftDown(int i) {
        int n = h.size() - 1;
        while(2*i <= n) {
            int child = 2*i;                                      // Left child
            if(child+1 <= n && h[child+1] > h[child]) child++;    // Pick larger child
            if(h[i] >= h[child]) break;
            swap(h[i], h[child]);
            i = child;
        }
    }
public:
    MaxHeap() { h.push_back(0); }   // Dummy at index 0
    void push(int val) { h.push_back(val); siftUp(h.size()-1); }
    int top() { return h[1]; }
    void pop() {
        h[1] = h.back();
        h.pop_back();
        if(h.size() > 1) siftDown(1);
    }
    bool empty() { return h.size() == 1; }
    int size() { return h.size() - 1; }
    // Heapify — build heap from unsorted array in O(n)
    void heapify(const vector<int>& a) {
        h = {0};
        h.insert(h.end(), a.begin(), a.end());
        for(int i = h.size()/2; i >= 1; i--) siftDown(i);   // O(n) — not O(n log n)!
    }
};

10.2   Heap Patterns — Top-K, Median, Merge Patterns

Kth Largest Element — find kth largest without full sort.

// Min-heap of size k — top is the kth largest
int findKthLargest(vector<int>& nums, int k) {
    priority_queue<int, vector<int>, greater<int>> minPQ;   // Min-heap
    for(int x : nums) {
        minPQ.push(x);
        if(int(minPQ.size()) > k) minPQ.pop();   // Keep only the k largest
    }
    return minPQ.top();   // Smallest of the k largest = kth largest overall
}
// O(n log k) time, O(k) space

Find Median from Data Stream — median of all elements seen so far.

class MedianFinder {
    priority_queue<int> maxH;                              // Max-heap: lower half
    priority_queue<int, vector<int>, greater<int>> minH;   // Min-heap: upper half
public:
    void addNum(int num) {
        maxH.push(num);
        minH.push(maxH.top());
        maxH.pop();
        // Ensure balance invariant
        if(minH.size() > maxH.size() + 1) { maxH.push(minH.top()); minH.pop(); }
    }
    double findMedian() {
        if(minH.size() > maxH.size()) return minH.top();
        return (maxH.top() + minH.top()) / 2.0;
    }
};
// O(log n) add, O(1) findMedian

Merge K Sorted Lists — merge k sorted linked lists in O(n log k).

ListNode* mergeKLists(vector<ListNode*>& lists) {
    using P = pair<int, ListNode*>;
    priority_queue<P, vector<P>, greater<P>> pq;   // Min-heap by value
    for(auto* l : lists) if(l) pq.push({l->val, l});
    ListNode dummy(0);
    ListNode* tail = &dummy;
    while(!pq.empty()) {
        auto [val, node] = pq.top(); pq.pop();
        tail->next = node;
        tail = tail->next;
        if(node->next) pq.push({node->next->val, node->next});
    }
    return dummy.next;
}
Chapter 11
Graphs — BFS, DFS, Shortest Paths & MST

11.1   Graph Representation & Traversal Core

Adjacency List vs. Adjacency Matrix

Adjacency List (vector<vector<int>>) is used for 95% of problems. It uses O(V + E) memory and is optimal for sparse graphs. Adjacency Matrix (vector<vector<bool>>) uses O(V²) memory, which causes Memory Limit Exceeded for V > 10,000. Only use a matrix for dense graphs or when you need O(1) edge existence lookups (like Floyd-Warshall).

BFS vs. DFS: How to Choose

BFS searches layer by layer. Reach for BFS when asked for the shortest path in an unweighted graph, or when modeling "moves" in a grid puzzle. DFS plunges to the bottom of a path before backtracking. Reach for DFS when investigating connectivity, counting components, finding cycles, or doing backtracking where you need to track the current path state.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// Adjacency List — most common representation
int V;                               // Number of vertices (set after reading input)
vector<vector<int>> adj;             // Unweighted: adj[u] = list of neighbors
vector<vector<pair<int,int>>> wadj;  // Weighted: wadj[u] = {neighbor, weight}
// After reading V, size them: adj.assign(V, {}); wadj.assign(V, {});
// (Sizing them at global scope with an unread V would leave them empty.)

// Add edge
void addEdge(int u, int v) {
    adj[u].push_back(v);
    adj[v].push_back(u);   // Remove for directed graph
}

// BFS — shortest path in UNWEIGHTED graph, level-order traversal
vector<int> bfs(int src, int V) {
    vector<int> dist(V, -1);
    queue<int> q;
    dist[src] = 0;
    q.push(src);
    while(!q.empty()) {
        int u = q.front(); q.pop();
        for(int v : adj[u]) {
            if(dist[v] == -1) {
                dist[v] = dist[u] + 1;
                q.push(v);
            }
        }
    }
    return dist;   // dist[v] = shortest path from src to v (-1 if unreachable)
}

// DFS — connectivity, cycle detection, topological sort, path finding
vector<bool> visited;
void dfs(int u) {
    visited[u] = true;
    for(int v : adj[u]) if(!visited[v]) dfs(v);
}

// Count connected components
int countComponents(int V) {
    visited.assign(V, false);
    int count = 0;
    for(int i = 0; i < V; i++)
        if(!visited[i]) { dfs(i); count++; }
    return count;
}

11.2   Topological Sort & Cycle Detection Must Know

// Topological Sort using Kahn's Algorithm (BFS-based)
vector<int> topoSort(int V) {
    vector<int> indegree(V, 0);
    for(int u = 0; u < V; u++)
        for(int v : adj[u]) indegree[v]++;
    queue<int> q;
    for(int i = 0; i < V; i++) if(indegree[i] == 0) q.push(i);
    vector<int> order;
    while(!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for(int v : adj[u])
            if(--indegree[v] == 0) q.push(v);
    }
    if(int(order.size()) != V) return {};   // Cycle detected!
    return order;
}

// DFS-based Topological Sort — pop the stack afterwards for the order
stack<int> topoStack;
void dfsTopoSort(int u, vector<bool>& vis) {
    vis[u] = true;
    for(int v : adj[u]) if(!vis[v]) dfsTopoSort(v, vis);
    topoStack.push(u);   // Push after all descendants processed
}

// Cycle detection in DIRECTED graph (3-color DFS)
// white=0 (unvisited), gray=1 (in stack), black=2 (done)
bool hasCycle(int u, vector<int>& color) {
    color[u] = 1;   // Gray: currently in the DFS stack
    for(int v : adj[u]) {
        if(color[v] == 1) return true;   // Back edge → cycle!
        if(color[v] == 0 && hasCycle(v, color)) return true;
    }
    color[u] = 2;   // Black: fully processed
    return false;
}

11.3   Shortest Path Algorithms Critical

Why Dijkstra is Correct (Proof Sketch)

Dijkstra works because of the greedy choice property on non-negative weights. When the priority queue pops the unvisited node u with the smallest tentative distance, that distance is guaranteed to be optimal. Why? Any other path to u would have to leave the settled set through some other unvisited node v first. But since u has the smallest distance in the queue, the path to v is already at least as long as the path to u, and appending non-negative edges can never make it shorter. The moment a node is popped, it is settled permanently — which is exactly why the argument collapses once negative edges are allowed.

// Dijkstra — single-source shortest path, non-negative weights, O((V+E) log V)
vector<ll> dijkstra(int src, int V) {
    vector<ll> dist(V, LLONG_MAX);
    priority_queue<pair<ll,int>, vector<pair<ll,int>>, greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while(!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if(d > dist[u]) continue;   // Stale entry — skip
        for(auto [v, w] : wadj[u]) {
            if(dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}

// Bellman-Ford — handles negative weights, detects negative cycles, O(VE)
vector<ll> bellmanFord(int src, int V, vector<tuple<int,int,int>>& edges) {
    vector<ll> dist(V, LLONG_MAX);
    dist[src] = 0;
    for(int i = 0; i < V-1; i++) {   // Relax all edges V-1 times
        for(auto [u, v, w] : edges)
            if(dist[u] != LLONG_MAX && dist[u] + w < dist[v]) dist[v] = dist[u] + w;
    }
    // Vth relaxation: if any update happens → negative cycle
    for(auto [u, v, w] : edges)
        if(dist[u] != LLONG_MAX && dist[u] + w < dist[v]) dist[v] = -LLONG_MAX;
    return dist;
}

// Floyd-Warshall — all-pairs shortest paths, O(V³)
void floydWarshall(vector<vector<ll>>& dist, int V) {
    for(int k = 0; k < V; k++)
        for(int i = 0; i < V; i++)
            for(int j = 0; j < V; j++)
                if(dist[i][k] != LLONG_MAX && dist[k][j] != LLONG_MAX)
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
}

11.4   MST — Kruskal's & Prim's + DSU Essential

MST Correctness Intuition

Why do Kruskal's and Prim's greedy approaches work? Imagine dividing all nodes into two arbitrary sets (a cut). To connect these two sets, you must pick at least one edge crossing the cut. To minimize the total weight, you should obviously pick the lightest crossing edge. Both algorithms just apply this logic repeatedly: Prim's grows a single component by picking the lightest edge leaving it, while Kruskal's considers the globally lightest edge and adds it if it connects two disconnected components (preventing cycles).

// Disjoint Set Union (DSU / Union-Find) — O(α(n)) per operation
struct DSU {
    vector<int> parent, rank_;
    DSU(int n) : parent(n), rank_(n, 0) {
        iota(parent.begin(), parent.end(), 0);   // parent[i] = i initially
    }
    int find(int x) {
        if(parent[x] != x) parent[x] = find(parent[x]);   // Path compression
        return parent[x];
    }
    bool unite(int x, int y) {
        x = find(x); y = find(y);
        if(x == y) return false;              // Already the same component
        if(rank_[x] < rank_[y]) swap(x, y);   // Union by rank
        parent[y] = x;
        if(rank_[x] == rank_[y]) rank_[x]++;
        return true;
    }
    bool connected(int x, int y) { return find(x) == find(y); }
};

// Kruskal's MST — sort edges by weight, greedily add if no cycle
ll kruskal(int V, vector<tuple<ll,int,int>>& edges) {
    sort(edges.begin(), edges.end());   // Sort by weight (first tuple element)
    DSU dsu(V);
    ll totalCost = 0;
    int edgeCount = 0;
    for(auto [w, u, v] : edges) {
        if(dsu.unite(u, v)) {
            totalCost += w;
            if(++edgeCount == V-1) break;   // MST has V-1 edges
        }
    }
    return totalCost;
}
// O(E log E) time
Chapter 12
Dynamic Programming — Optimal Substructure, Every Classic Problem

12.1   DP Framework — How to Think and Code Any DP Critical

The Core Insight: Overlapping Subproblems

Dynamic Programming is essentially optimized recursion. In naive recursion (like fib(5) = fib(4) + fib(3)), you recompute fib(3) multiple times because subproblems overlap. The recursion tree grows exponentially (O(2ⁿ)). DP stores the result of fib(3) the first time it is computed, trimming the exponential tree down to a linear path. Every DP is solved by answering 3 questions: 1) What is the State? 2) What is the Transition? 3) What is the Base Case?

Top-Down (Memoization)

Recursive function + cache (array or map). Natural to write. Only computes strictly needed states. The recursion trace: solve(i) → check cache → compute if needed → save and return.

Bottom-Up (Tabulation)

Iterative nested loops filling a table from the base cases up to the answer. No recursion overhead. Allows for space optimization (e.g., dropping older rows from the table).

12.2   Classic DP Problems — Full Implementations Must Know

0/1 Knapsack

Trace Table: W=5, items = [{w:2, v:3}, {w:3, v:4}, {w:4, v:5}]

       w=0   1   2   3   4   5
i=0:  [ 0,   0,  0,  0,  0,  0 ]   <-- Empty knapsack base case
i=1:  [ 0,   0,  3,  3,  3,  3 ]   <-- Consider item 1 (w=2,v=3): fills all capacities >= 2
i=2:  [ 0,   0,  3,  4,  4,  7 ]   <-- Consider item 2 (w=3,v=4): at w=5, takes both (3+4=7)
i=3:  [ 0,   0,  3,  4,  5,  7 ]   <-- Consider item 3 (w=4,v=5): takes it at w=4 (5>4), keeps 7 at w=5
// dp[i][w] = max value using first i items with capacity w
int knapsack(const vector<int>& wt, const vector<int>& val, int W) {
    int n = wt.size();
    vector<vector<int>> dp(n+1, vector<int>(W+1, 0));
    for(int i = 1; i <= n; i++) {
        for(int w = 0; w <= W; w++) {
            dp[i][w] = dp[i-1][w];   // Don't take item i
            if(wt[i-1] <= w)
                dp[i][w] = max(dp[i][w], dp[i-1][w-wt[i-1]] + val[i-1]);   // Take it
        }
    }
    return dp[n][W];
}

// Space optimized: use 1D dp (iterate w in reverse for 0/1 knapsack)
int knapsack1D(const vector<int>& wt, const vector<int>& val, int W) {
    vector<int> dp(W+1, 0);
    for(int i = 0; i < (int)wt.size(); i++)
        for(int w = W; w >= wt[i]; w--)   // REVERSE order prevents using an item twice
            dp[w] = max(dp[w], dp[w-wt[i]] + val[i]);
    return dp[W];
}

Longest Common Subsequence (LCS)

// dp[i][j] = LCS of s1[0..i-1] and s2[0..j-1]
int lcs(const string& s1, const string& s2) {
    int m = s1.size(), n = s2.size();
    vector<vector<int>> dp(m+1, vector<int>(n+1, 0));
    for(int i = 1; i <= m; i++)
        for(int j = 1; j <= n; j++)
            dp[i][j] = (s1[i-1] == s2[j-1]) ? dp[i-1][j-1] + 1
                                            : max(dp[i-1][j], dp[i][j-1]);
    return dp[m][n];
}

Longest Increasing Subsequence (LIS)

// O(n²) DP approach
int lisDP(const vector<int>& a) {
    int n = a.size();
    vector<int> dp(n, 1);
    for(int i = 1; i < n; i++)
        for(int j = 0; j < i; j++)
            if(a[j] < a[i]) dp[i] = max(dp[i], dp[j] + 1);
    return *max_element(dp.begin(), dp.end());
}

// O(n log n) — patience sorting (binary search trick)
int lisBS(const vector<int>& a) {
    vector<int> tails;   // tails[i] = smallest tail of an LIS of length i+1
    for(int x : a) {
        auto it = lower_bound(tails.begin(), tails.end(), x);
        if(it == tails.end()) tails.push_back(x);   // Extend the LIS
        else *it = x;                               // Replace: better (smaller) tail
    }
    return tails.size();
}

Edit Distance (Levenshtein)

// dp[i][j] = min operations to convert s1[0..i-1] to s2[0..j-1]
int editDistance(const string& s1, const string& s2) {
    int m = s1.size(), n = s2.size();
    vector<vector<int>> dp(m+1, vector<int>(n+1));
    for(int i = 0; i <= m; i++) dp[i][0] = i;   // Delete all of s1
    for(int j = 0; j <= n; j++) dp[0][j] = j;   // Insert all of s2
    for(int i = 1; i <= m; i++)
        for(int j = 1; j <= n; j++)
            dp[i][j] = (s1[i-1] == s2[j-1])
                ? dp[i-1][j-1]
                : 1 + min({dp[i-1][j],       // Delete s1[i]
                           dp[i][j-1],       // Insert into s1
                           dp[i-1][j-1]});   // Replace s1[i] with s2[j]
    return dp[m][n];
}

Coin Change — Unbounded Knapsack

// Minimum coins to make amount
int coinChange(const vector<int>& coins, int amount) {
    vector<int> dp(amount+1, INT_MAX);
    dp[0] = 0;
    for(int i = 1; i <= amount; i++)
        for(int c : coins)
            if(c <= i && dp[i-c] != INT_MAX) dp[i] = min(dp[i], dp[i-c] + 1);
    return dp[amount] == INT_MAX ? -1 : dp[amount];
}

// Number of ways to make amount (combinations)
long long coinChangeWays(const vector<int>& coins, int amount) {
    vector<long long> dp(amount+1, 0);
    dp[0] = 1;
    for(int c : coins)                  // Outer loop over coins counts combinations, not permutations
        for(int i = c; i <= amount; i++)
            dp[i] += dp[i-c];
    return dp[amount];   // long long return — counts overflow int quickly
}

12.3   Advanced DP — Interval, Bitmask & DP on Trees Advanced

// Matrix Chain Multiplication — interval DP
// dp[i][j] = min cost to multiply matrices i through j
int matrixChain(const vector<int>& dims) {
    int n = dims.size() - 1;   // n matrices
    vector<vector<int>> dp(n, vector<int>(n, 0));
    for(int len = 2; len <= n; len++) {   // Chain length
        for(int i = 0; i + len - 1 < n; i++) {
            int j = i + len - 1;
            dp[i][j] = INT_MAX;
            for(int k = i; k < j; k++)    // Split point
                dp[i][j] = min(dp[i][j],
                               dp[i][k] + dp[k+1][j] + dims[i]*dims[k+1]*dims[j+1]);
        }
    }
    return dp[0][n-1];
}
Bitmask DP Intuition

A bitmask is just an integer used compactly as an array of booleans. In TSP, mask = 5 (binary 101) means we have currently visited cities 0 and 2. The DP state dp[mask][i] answers: "What is the minimum cost to visit exactly the set of cities represented by mask, ending our path at city i?" Transitions simply try adding an unvisited city v to the mask: new_mask = mask | (1 << v).

// Bitmask DP — Travelling Salesman Problem
// dp[mask][i] = min cost to visit all cities in mask, ending at city i
int tsp(const vector<vector<int>>& dist) {
    int n = dist.size();
    int FULL = (1 << n) - 1;
    vector<vector<int>> dp(1 << n, vector<int>(n, INT_MAX));
    dp[1][0] = 0;   // Start at city 0
    for(int mask = 1; mask <= FULL; mask++) {
        for(int u = 0; u < n; u++) {
            if(!(mask & (1 << u)) || dp[mask][u] == INT_MAX) continue;
            for(int v = 0; v < n; v++) {
                if(mask & (1 << v)) continue;   // Already visited
                int nmask = mask | (1 << v);
                dp[nmask][v] = min(dp[nmask][v], dp[mask][u] + dist[u][v]);
            }
        }
    }
    int ans = INT_MAX;
    for(int i = 1; i < n; i++)
        if(dp[FULL][i] != INT_MAX) ans = min(ans, dp[FULL][i] + dist[i][0]);
    return ans;
}
// O(2^n · n²) — feasible for n ≤ 20
Chapter 13
Greedy Algorithms — The Locally Optimal Choice

13.1   Greedy Strategy & Classic Problems Core

When Greedy Works

A greedy algorithm works when the problem has the greedy choice property (a locally optimal choice leads to a globally optimal solution) and optimal substructure (an optimal solution contains optimal solutions to its subproblems). Proving correctness usually takes an exchange argument: show that any optimal solution can be rewritten to start with the greedy choice without getting worse.

Activity Selection — maximize number of non-overlapping intervals.

int activitySelection(vector<pair<int,int>>& intervals) {
    if(intervals.empty()) return 0;   // Guard: code below reads intervals[0]
    // Sort by END time — always pick the activity that ends earliest
    sort(intervals.begin(), intervals.end(),
         [](auto& a, auto& b){ return a.second < b.second; });
    int count = 1, lastEnd = intervals[0].second;
    for(int i = 1; i < (int)intervals.size(); i++) {
        if(intervals[i].first >= lastEnd) {   // No overlap
            count++;
            lastEnd = intervals[i].second;
        }
    }
    return count;
}
// O(n log n) — the earliest finisher "erases the fewest remaining future opportunities"

Jump Game II — minimum jumps to reach end of array.

int jump(const vector<int>& nums) {
    int jumps = 0, curEnd = 0, farthest = 0;
    for(int i = 0; i < (int)nums.size() - 1; i++) {
        farthest = max(farthest, i + nums[i]);   // Farthest reachable from here
        if(i == curEnd) {                        // Must jump now
            jumps++;
            curEnd = farthest;
        }
    }
    return jumps;
}
// O(n) — BFS-style greedy

Fractional Knapsack — maximize value, items are divisible.

double fractionalKnapsack(int W, vector<pair<int,int>>& items) {   // items = {value, weight}
    // Sort by value/weight ratio, descending
    sort(items.begin(), items.end(), [](auto& a, auto& b){
        return (double)a.first / a.second > (double)b.first / b.second;
    });
    double total = 0;
    for(auto [val, wt] : items) {
        if(W >= wt) { total += val; W -= wt; }                    // Take the whole item
        else        { total += (double)val * W / wt; break; }     // Take a fraction
    }
    return total;
}

Gas Station — can you complete a circular route?

int canCompleteCircuit(const vector<int>& gas, const vector<int>& cost) {
    int total = 0, tank = 0, start = 0;
    for(int i = 0; i < (int)gas.size(); i++) {
        total += gas[i] - cost[i];
        tank  += gas[i] - cost[i];
        if(tank < 0) { start = i + 1; tank = 0; }   // Can't start here — try the next station
    }
    return total >= 0 ? start : -1;
}
// Key insight: if total gas >= total cost, a solution always exists
Chapter 14
Advanced Structures — Segment Trees, Tries & Fenwick Trees

14.1   Segment Tree — Range Query & Point Update Critical

A Segment Tree solves a very specific problem: you have an array, and you need to perform thousands of Range Queries (what is the sum/min/max from index L to R?) interspersed with thousands of Point Updates (change index i to value v). A standard array gives O(1) updates but O(n) range queries. A prefix sum array gives O(1) range queries but O(n) updates. A Segment Tree gives O(log n) for both.

The core idea is recursively halving the array into contiguous segments and precalculating the answer for each segment in a binary tree. A node's value is the combination (sum, min, max) of its two children. To update an element, you only need to recompute the O(log n) nodes above it. To query a range, you combine at most about 2 log n precalculated segments, rather than iterating through n individual elements.

Segment Tree Structure (Sum) for Array [1, 2, 3, 4, 5, 6, 7, 8]

                       [36]  (Sum of 0..7)               <-- Root (Node 1)
                     /        \
             [10](0..3)        [26](4..7)                <-- Nodes 2 & 3
             /      \           /       \
       [3](0..1)  [7](2..3)  [11](4..5)  [15](6..7)      <-- Nodes 4, 5, 6, 7
        /   \      /   \      /   \       /    \
      [1]   [2]  [3]   [4]  [5]   [6]   [7]    [8]       <-- Leaves (Indices 0..7)

Index mapping: left child = 2*i, right child = 2*i + 1, parent = i/2.
// Segment Tree — O(log n) range sum query and point update
struct SegTree {
    int n;
    vector<long long> tree;
    SegTree(int n) : n(n), tree(4*n, 0) {}
    SegTree(const vector<int>& a) : n(a.size()), tree(4*a.size(), 0) {
        build(a, 1, 0, n-1);
    }
    void build(const vector<int>& a, int node, int lo, int hi) {
        if(lo == hi) { tree[node] = a[lo]; return; }
        int mid = (lo + hi) / 2;
        build(a, 2*node, lo, mid);
        build(a, 2*node+1, mid+1, hi);
        tree[node] = tree[2*node] + tree[2*node+1];
    }
    void update(int node, int lo, int hi, int idx, int val) {
        if(lo == hi) { tree[node] = val; return; }
        int mid = (lo + hi) / 2;
        idx <= mid ? update(2*node, lo, mid, idx, val)
                   : update(2*node+1, mid+1, hi, idx, val);
        tree[node] = tree[2*node] + tree[2*node+1];
    }
    long long query(int node, int lo, int hi, int l, int r) {
        if(r < lo || hi < l) return 0;               // Completely outside
        if(l <= lo && hi <= r) return tree[node];    // Completely inside
        int mid = (lo + hi) / 2;
        return query(2*node, lo, mid, l, r) + query(2*node+1, mid+1, hi, l, r);
    }
    // Public interface wrappers
    void update(int idx, int val) { update(1, 0, n-1, idx, val); }
    long long query(int l, int r) { return query(1, 0, n-1, l, r); }
};
Range Updates & Lazy Propagation

If you need to add +5 to an entire range [L, R], doing (R-L) point updates takes O(N) per query. Lazy Propagation fixes this by recording the +5 at the highest possible nodes in the tree that fully cover the update range, stopping early. We mark the node as "lazy" (it has pending updates for its children). The next time any operation travels down that path, it "pushes" the lazy value down to its immediate children before proceeding. This keeps range updates and range queries at strictly O(log n).

// Lazy Propagation — range update + range query in O(log n)
struct LazySegTree {
    int n;
    vector<ll> tree, lazy;
    LazySegTree(int n) : n(n), tree(4*n, 0), lazy(4*n, 0) {}
    void push(int node, int lo, int hi) {
        if(!lazy[node]) return;
        int mid = (lo + hi) / 2;
        tree[2*node]   += lazy[node] * (mid - lo + 1);
        lazy[2*node]   += lazy[node];
        tree[2*node+1] += lazy[node] * (hi - mid);
        lazy[2*node+1] += lazy[node];
        lazy[node] = 0;
    }
    void update(int node, int lo, int hi, int l, int r, ll val) {
        if(r < lo || hi < l) return;
        if(l <= lo && hi <= r) {
            tree[node] += val * (hi - lo + 1);
            lazy[node] += val;
            return;
        }
        push(node, lo, hi);
        int mid = (lo + hi) / 2;
        update(2*node, lo, mid, l, r, val);
        update(2*node+1, mid+1, hi, l, r, val);
        tree[node] = tree[2*node] + tree[2*node+1];
    }
    ll query(int node, int lo, int hi, int l, int r) {
        if(r < lo || hi < l) return 0;
        if(l <= lo && hi <= r) return tree[node];
        push(node, lo, hi);
        int mid = (lo + hi) / 2;
        return query(2*node, lo, mid, l, r) + query(2*node+1, mid+1, hi, l, r);
    }
    void update(int l, int r, ll v) { update(1, 0, n-1, l, r, v); }
    ll query(int l, int r) { return query(1, 0, n-1, l, r); }
};

14.2   Fenwick Tree (BIT) — Simpler Range Sum Essential

// Binary Indexed Tree — O(log n) point update and prefix-sum query
// Simpler to code than a segment tree, same complexity for point update + prefix query
struct BIT {
    int n;
    vector<ll> tree;
    BIT(int n) : n(n), tree(n+1, 0) {}
    void update(int i, ll delta) {        // Add delta at position i (0-indexed input, 1-indexed internally)
        for(i++; i <= n; i += i & (-i))   // i & (-i) = lowest set bit
            tree[i] += delta;
    }
    ll prefix(int i) {                    // Sum of [0..i] inclusive
        ll sum = 0;
        for(i++; i > 0; i -= i & (-i)) sum += tree[i];
        return sum;
    }
    ll query(int l, int r) {              // Sum of [l..r]
        return prefix(r) - (l ? prefix(l-1) : 0);
    }
};

// BIT for counting inversions — classic application
ll countInversions(vector<int>& a) {
    int n = a.size();
    // Coordinate compression
    vector<int> sorted = a;
    sort(sorted.begin(), sorted.end());
    sorted.erase(unique(sorted.begin(), sorted.end()), sorted.end());
    auto compress = [&](int x){
        return lower_bound(sorted.begin(), sorted.end(), x) - sorted.begin();
    };
    BIT bit(n);
    ll inv = 0;
    for(int i = n-1; i >= 0; i--) {
        int c = compress(a[i]);
        inv += bit.prefix(c-1);   // Count already-seen elements that are smaller
        bit.update(c, 1);
    }
    return inv;
}

14.3   Trie — Prefix Tree for Strings Core

struct TrieNode {
    TrieNode* children[26] = {};
    int count = 0;    // Number of words ending here
    int prefix = 0;   // Number of words with this prefix
};

class Trie {
    TrieNode* root = new TrieNode();
public:
    void insert(const string& word) {
        TrieNode* cur = root;
        for(char c : word) {
            int idx = c - 'a';
            if(!cur->children[idx]) cur->children[idx] = new TrieNode();
            cur = cur->children[idx];
            cur->prefix++;
        }
        cur->count++;
    }
    bool search(const string& word) {
        TrieNode* cur = root;
        for(char c : word) {
            if(!cur->children[c-'a']) return false;
            cur = cur->children[c-'a'];
        }
        return cur->count > 0;
    }
    bool startsWith(const string& prefix) {
        TrieNode* cur = root;
        for(char c : prefix) {
            if(!cur->children[c-'a']) return false;
            cur = cur->children[c-'a'];
        }
        return true;
    }
    int countWithPrefix(const string& prefix) {
        TrieNode* cur = root;
        for(char c : prefix) {
            if(!cur->children[c-'a']) return 0;
            cur = cur->children[c-'a'];
        }
        return cur->prefix;
    }
};

// XOR Trie — find maximum XOR of any pair in an array
struct XORTrie {
    struct Node { Node* ch[2] = {}; };
    Node* root = new Node();
    void insert(int x) {
        Node* cur = root;
        for(int i = 31; i >= 0; i--) {
            int b = (x >> i) & 1;
            if(!cur->ch[b]) cur->ch[b] = new Node();
            cur = cur->ch[b];
        }
    }
    int maxXOR(int x) {
        Node* cur = root;
        int res = 0;
        for(int i = 31; i >= 0; i--) {
            int b = (x >> i) & 1, want = 1 - b;   // Want the opposite bit for max XOR
            if(cur->ch[want]) { res |= (1 << i); cur = cur->ch[want]; }
            else if(cur->ch[b]) cur = cur->ch[b];
            else break;
        }
        return res;
    }
};
Chapter 15
Bit Manipulation — The Low-Level Power Tool

15.1   Bit Tricks — Complete Reference CP Essential

// ── FUNDAMENTAL OPERATIONS ───────────────────────────────
n & 1                    // Is n odd? (faster than n%2)
n & (n-1)                // Turn off lowest set bit
n | (n+1)                // Turn on lowest unset bit
n ^ n                    // 0 (XOR with self)
n ^ 0                    // n (XOR with zero)
~n + 1                   // -n (two's complement)

// ── BIT TESTING ──────────────────────────────────────────
(n >> i) & 1             // Get bit i
n & (1 << i)             // Test if bit i is set (non-zero = set)

// ── BIT SETTING/CLEARING ─────────────────────────────────
n |= (1 << i)            // Set bit i
n &= ~(1 << i)           // Clear bit i
n ^= (1 << i)            // Toggle bit i

// ── POWER OF TWO ─────────────────────────────────────────
n && !(n & (n-1))        // Is n a power of 2?
n & -n                   // Isolate lowest set bit (LSB)
__builtin_ctz(n)         // Count trailing zeros = position of LSB
__builtin_clz(n)         // Count leading zeros
__builtin_popcount(n)    // Count set bits
__builtin_popcountll(n)  // For long long

// ── MULTIPLY/DIVIDE BY POWERS OF 2 ───────────────────────
n << k                   // n * 2^k
n >> k                   // n / 2^k (arithmetic right shift for signed)

// ── SWAP WITHOUT TEMP ────────────────────────────────────
a ^= b; b ^= a; a ^= b;

// ── ABSOLUTE VALUE WITHOUT BRANCH ────────────────────────
int mask = n >> 31;              // All 1s if negative
int abs_n = (n + mask) ^ mask;

// ── NEXT PERMUTATION OF BITS (Gosper's Hack) ─────────────
// Given a mask with k set bits, find the next larger number with k set bits
int nextPerm(int v) {
    int t = v | (v-1);
    return (t+1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

15.2   Bit Manipulation Problems Must Know

Single Number — find the one element that appears once (all others appear twice).

int singleNumber(const vector<int>& nums) {
    int result = 0;
    for(int x : nums) result ^= x;   // a^a=0, 0^b=b — pairs cancel out
    return result;
}
// O(n) time, O(1) space

Count set bits in all numbers from 1 to n.

int countBits(int n) {
    vector<int> dp(n+1, 0);
    for(int i = 1; i <= n; i++)
        dp[i] = dp[i>>1] + (i & 1);   // dp[i] = dp[i/2] + last bit of i
    return accumulate(dp.begin(), dp.end(), 0);
}

Generate all subsets using bitmask.

vector<vector<int>> subsets(const vector<int>& nums) {
    int n = nums.size();
    vector<vector<int>> res;
    for(int mask = 0; mask < (1 << n); mask++) {   // 2^n possible subsets
        vector<int> subset;
        for(int i = 0; i < n; i++)
            if(mask & (1 << i)) subset.push_back(nums[i]);
        res.push_back(subset);
    }
    return res;
}

// Enumerate all submasks of a mask — classic bitmask DP technique
for(int sub = mask; sub > 0; sub = (sub-1) & mask) {
    // Process submask 'sub'
    // When sub=0, (sub-1)&mask = mask, so the loop would be infinite without sub>0
}

15.3   DSA Master Reference — Problem Type → Algorithm Final Cheatsheet

Problem Pattern                 | Go-To Algorithm                          | Complexity
Subarray with sum/XOR = k       | Prefix sum/XOR + hash map                | O(n)
Longest substring / window      | Sliding window                           | O(n)
Two values summing to target    | Two pointers (sorted) or hash set        | O(n)
k-th largest/smallest           | Min-heap of size k or QuickSelect        | O(n log k)
Merge k sorted arrays           | Min-heap / priority_queue                | O(n log k)
Top k frequent elements         | Bucket sort or heap                      | O(n)
Shortest path in graph          | BFS (unweighted) / Dijkstra (weighted)   | O((V+E) log V)
Connected components            | DFS/BFS or Union-Find                    | O(V+E)
Minimum spanning tree           | Kruskal (sparse) / Prim (dense)          | O(E log E)
Cycle in directed graph         | DFS with 3-color or Kahn's topo sort     | O(V+E)
Scheduling / interval           | Sort by end time, greedy                 | O(n log n)
Optimal substructure            | Dynamic programming                      | Varies
Enumerate all subsets           | Backtracking or bitmask                  | O(2^n)
Range sum query                 | Prefix sum (static) / BIT / SegTree      | O(1) / O(log n)
Range min/max query             | Sparse Table (static) / SegTree          | O(1) / O(log n)
String matching                 | KMP or Rabin-Karp                        | O(n+m)
Prefix / autocomplete           | Trie                                     | O(k) per op
Max XOR pair                    | XOR Trie                                 | O(n log maxVal)
Count inversions                | Merge sort or BIT                        | O(n log n)
Binary search on answer         | BS + feasibility check function          | O(n log(range))