Part 6 of 6. The grand finale covering Greedy algorithms, Backtracking, Bit Manipulation, and a Mixed Bag of advanced concepts. Every problem follows the full structure: Problem → Decode → Approach Comparison → Dry Run Trace → Annotated C++ Code → Complexity Footer.
The Core Idea: Make the optimal choice at the current step and never look back.
When to use: When a problem exhibits "Optimal Substructure" and the "Greedy Choice Property." If you sort the data (often by end time, profit, or cost), can you just pick the best valid option sequentially? If yes, it's greedy.
Given an array of intervals intervals[i] = [start_i, end_i], return the minimum number of intervals you need to remove to make the rest of the intervals non-overlapping.
[[1,2],[2,3],[3,4],[1,3]]
1
Remove [1,3]
Instead of thinking about what to remove, think about what to keep. We want to pack as many intervals as possible. An interval that ends earlier leaves more room for subsequent intervals. This is the classic Activity Selection Problem.
Sorting by start time requires you to constantly update the end time when overlaps occur.
Sort by end time. Pick the first interval, then greedily pick the next one that starts after the current one ends.
1. Sort intervals by their end times.
2. Track the end of the last added interval (initially the end of the first interval).
3. Iterate through the rest. If an interval starts before end, it overlaps! Increment the removal count. Otherwise, update end to this new interval's end time.
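A minimal C++ sketch of the three steps above (function name follows the LeetCode convention):

```cpp
#include <vector>
#include <algorithm>

// Greedy: sort by end time, keep the interval that ends earliest,
// and count anything that overlaps the kept interval as a removal.
int eraseOverlapIntervals(std::vector<std::vector<int>> intervals) {
    if (intervals.empty()) return 0;
    std::sort(intervals.begin(), intervals.end(),
              [](const auto& a, const auto& b) { return a[1] < b[1]; });
    int removals = 0;
    int end = intervals[0][1];
    for (size_t i = 1; i < intervals.size(); ++i) {
        if (intervals[i][0] < end) {
            ++removals;              // overlaps the kept interval: remove it
        } else {
            end = intervals[i][1];   // no overlap: this becomes the new boundary
        }
    }
    return removals;
}
```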
There are n gas stations along a circular route, where the amount of gas at station i is gas[i]. You have a car with an unlimited gas tank. It costs cost[i] of gas to travel from station i to station i+1. You begin the journey with an empty tank at one of the stations. Return the starting gas station's index if you can travel around the circuit once, otherwise return -1.
[1,2,3,4,5]
[3,4,5,1,2]
3
This problem relies on a mathematical proof: if the sum of all gas is greater than or equal to the sum of all costs, there is guaranteed to be a valid starting point. We just need to find it.
Try starting at every station and looping around. Very slow for large inputs.
Track current fuel. If it drops below 0, the starting point (and any point before it) is invalid. Reset start to next station.
Iterate through the stations. Keep a running tally of your current_gas. If at any point your current_gas drops below 0, it means you can't reach the current station from your chosen starting point.
Crucially, it also means you can't reach it from any station between your start and the current one! Therefore, reset your starting point to the next station (i + 1) and reset current_gas to 0.
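The one-pass logic above can be sketched like this (the feasibility check `total >= 0` encodes the mathematical proof from the Decode):

```cpp
#include <vector>

// If current fuel dips below zero, no station up to here can be the
// start, so restart the attempt from the next station.
int canCompleteCircuit(const std::vector<int>& gas, const std::vector<int>& cost) {
    int total = 0, current = 0, start = 0;
    for (int i = 0; i < (int)gas.size(); ++i) {
        int diff = gas[i] - cost[i];
        total += diff;
        current += diff;
        if (current < 0) {       // can't reach station i+1 from `start`
            start = i + 1;
            current = 0;
        }
    }
    return total >= 0 ? start : -1;  // feasible iff total gas >= total cost
}
```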
You have children, each with a greed factor g[i], and cookies, each with a size s[j]. A child will be content if the cookie's size is ≥ their greed factor. Maximize the number of content children. Each child gets at most one cookie.
[1,2,3]
[1,1]
1
To maximize the number of content children, always try to satisfy the least greedy child first. And to save your big cookies for greedier children, satisfy them using the smallest cookie that meets their requirement.
For each child, scan the entire cookie array to find a valid cookie. O(N*M) time.
Sort both arrays. Use two pointers to iterate. If cookie size ≥ child's greed, move both. Otherwise, just move the cookie pointer.
Sort both the children and cookie arrays in ascending order. Initialize two pointers at 0. If the current cookie is large enough for the current child (s[cookie] >= g[child]), the child gets the cookie (increment child). Regardless of whether it fits, increment the cookie pointer to look at the next cookie.
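A short two-pointer sketch of the approach, assuming arrays `g` (greed) and `s` (sizes):

```cpp
#include <vector>
#include <algorithm>

// Sort both, then match the smallest viable cookie to the least greedy child.
int findContentChildren(std::vector<int> g, std::vector<int> s) {
    std::sort(g.begin(), g.end());
    std::sort(s.begin(), s.end());
    size_t child = 0, cookie = 0;
    while (child < g.size() && cookie < s.size()) {
        if (s[cookie] >= g[child]) ++child;  // this cookie satisfies the child
        ++cookie;                            // either way, this cookie is done
    }
    return (int)child;
}
```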
Customers buy a $5 lemonade and pay with $5, $10, or $20 bills. You must provide the correct change. Return true if you can provide every customer with correct change.
[5,5,5,10,20]
true
When a customer hands you a $20, you have two ways to make change: one $10 and one $5, or three $5s. A $5 bill is more versatile because it's strictly required to make change for a $10. Therefore, greedily part with your $10s first to save your $5s.
Use a map to store count of 5, 10, 20. Overkill and uses extra space unnecessarily.
Just track `fives` and `tens` counters. For a 20, subtract a 10 and a 5 if possible.
Initialize fives = 0 and tens = 0. Loop through bills. If 5, fives++. If 10, check if fives > 0, decrement it, and tens++. If 20, greedily check if we have a 10 and a 5. If not, check if we have three 5s. If neither, return false.
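As code, the two counters and the greedy $20 handling look like:

```cpp
#include <vector>

// Spend $10 bills before $5s when changing a $20, since $5s are
// the only way to make change for a $10.
bool lemonadeChange(const std::vector<int>& bills) {
    int fives = 0, tens = 0;
    for (int bill : bills) {
        if (bill == 5) {
            ++fives;
        } else if (bill == 10) {
            if (fives == 0) return false;
            --fives; ++tens;
        } else {                                          // $20: prefer 10+5
            if (tens > 0 && fives > 0) { --tens; --fives; }
            else if (fives >= 3)       { fives -= 3; }    // fall back to 5+5+5
            else return false;
        }
    }
    return true;
}
```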
Balloons are represented as 2D intervals [x_start, x_end]. An arrow shot up at exactly x bursts all balloons where x_start <= x <= x_end. Find the minimum arrows needed to burst all balloons.
[[10,16],[2,8],[1,6],[7,12]]
2
This is structurally identical to Non-overlapping Intervals (Q176). If you sort the balloons by their end coordinate, you can shoot an arrow at the very end of the first balloon, guaranteeing you pop it and any others overlapping that coordinate.
If sorted by start, you have to keep track of the minimum end coordinate of overlapping balloons to know when an arrow MUST be shot.
Sort by end. The greedy choice is to shoot at the end of the first balloon, popping everything that starts before this point.
Sort the array based on the end coordinate. Initialize arrows = 1 and currentEnd = points[0][1]. Loop through the balloons. If a balloon's start point is greater than currentEnd, it means it doesn't overlap with our current cluster, so we must shoot another arrow (arrows++) and update currentEnd to this new balloon's end point.
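The same sort-by-end pattern as Q176, expressed as a sketch:

```cpp
#include <vector>
#include <algorithm>

// Shoot at the end of each balloon cluster; a new arrow is needed only
// when a balloon starts past the last arrow's position.
int findMinArrowShots(std::vector<std::vector<int>> points) {
    if (points.empty()) return 0;
    std::sort(points.begin(), points.end(),
              [](const auto& a, const auto& b) { return a[1] < b[1]; });
    int arrows = 1;
    int currentEnd = points[0][1];       // arrow position for the first cluster
    for (size_t i = 1; i < points.size(); ++i) {
        if (points[i][0] > currentEnd) { // balloon starts past the arrow
            ++arrows;
            currentEnd = points[i][1];
        }
    }
    return arrows;
}
```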
Given an array of intervals where intervals[i] = [start_i, end_i], merge all overlapping intervals, and return an array of the non-overlapping intervals.
[[1,3],[2,6],[8,10]]
[[1,6],[8,10]]
Unlike Q176 or Q180 where we sort by end time to remove/skip, here we sort by start time because we want to combine. If the next interval starts before or exactly when the current one ends, they overlap and their new end is the max of both ends.
Check every interval against every other interval and merge if they overlap. Very inefficient.
Sort by start time. Iterate once, comparing the current interval with the last merged interval.
Sort the array. Add the first interval to a merged list. Iterate through the rest. For each interval, check if its start is <= the end of the last interval in the merged list. If yes, update the end of the last interval to be the max of both ends. If no, push it to the list as a new non-overlapping interval.
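A compact sketch of the merge pass (here leaning on `std::sort`'s default lexicographic ordering, which sorts by start first):

```cpp
#include <vector>
#include <algorithm>

// Sort by start; either extend the last merged interval or open a new one.
std::vector<std::vector<int>> mergeIntervals(std::vector<std::vector<int>> intervals) {
    std::sort(intervals.begin(), intervals.end());
    std::vector<std::vector<int>> merged;
    for (const auto& in : intervals) {
        if (merged.empty() || in[0] > merged.back()[1]) {
            merged.push_back(in);                                  // no overlap
        } else {
            merged.back()[1] = std::max(merged.back()[1], in[1]);  // overlap: extend
        }
    }
    return merged;
}
```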
There are n children in a line with ratings. You must give candies so that: 1. Every child gets at least 1 candy. 2. Children with higher ratings get more candies than their neighbors. Return the minimum candies needed.
[1,0,2]
5
Candies: 2, 1, 2 → total = 5
Trying to check both left and right neighbors simultaneously is chaotic. Break it down: first, sweep left-to-right to ensure kids with higher ratings than their left neighbor get more candy. Then, sweep right-to-left to ensure kids with higher ratings than their right neighbor get more, without breaking the first rule (by taking the max).
Do L-to-R pass into one array, R-to-L into another. Final candy is max(L[i], R[i]).
Use one array. L-to-R pass. Then R-to-L pass, updating the same array using max().
Create a candies array initialized to 1 for all children.
1. Left-to-Right: Loop from index 1. If rating[i] > rating[i-1], set candies[i] = candies[i-1] + 1.
2. Right-to-Left: Loop backwards from n-2. If rating[i] > rating[i+1], set candies[i] = max(candies[i], candies[i+1] + 1). Finally, sum the array.
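The two sweeps, as described above, in a short sketch:

```cpp
#include <vector>
#include <algorithm>
#include <numeric>

// Left-to-right fixes left-neighbor violations; right-to-left fixes
// right-neighbor ones without undoing the first pass (via max).
int candy(const std::vector<int>& ratings) {
    int n = (int)ratings.size();
    std::vector<int> candies(n, 1);
    for (int i = 1; i < n; ++i)
        if (ratings[i] > ratings[i - 1]) candies[i] = candies[i - 1] + 1;
    for (int i = n - 2; i >= 0; --i)
        if (ratings[i] > ratings[i + 1])
            candies[i] = std::max(candies[i], candies[i + 1] + 1);
    return std::accumulate(candies.begin(), candies.end(), 0);
}
```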
You are given an array of CPU tasks (A-Z) and a cooling time n. Identical tasks must be separated by at least n intervals. Find the minimum intervals needed to complete all tasks.
["A","A","A","B","B","B"]
2
8
The most frequent task dictates the minimum length. If task 'A' appears 3 times and cooldown is 2, the frame looks like: A _ _ A _ _ A. The minimum length is (max_freq - 1) * (n + 1) + tasks_with_max_freq. If there are enough other tasks to fill all the gaps, the answer is just the total number of tasks!
Use a priority queue and a cooldown queue to simulate every tick of the CPU. Good to know, but overkill here.
Calculate idle slots based purely on the highest frequency task. Fill slots with remaining tasks.
1. Count frequencies of all tasks.
2. Find the highest frequency (maxFreq).
3. Count how many tasks have this exact maxFreq (maxFreqCount).
4. Calculate the base time: ans = (maxFreq - 1) * (n + 1) + maxFreqCount.
5. The answer is the max(ans, tasks.size()). If tasks.size() is larger, it means we had enough tasks to fill all idle slots naturally without needing extra cooldowns.
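The frame formula translates almost directly into code:

```cpp
#include <vector>
#include <algorithm>

// (maxFreq - 1) gaps of width (n + 1), plus one slot per task tied at
// maxFreq; never less than the total task count.
int leastInterval(const std::vector<char>& tasks, int n) {
    int freq[26] = {0};
    for (char t : tasks) ++freq[t - 'A'];
    int maxFreq = *std::max_element(freq, freq + 26);
    int maxFreqCount = (int)std::count(freq, freq + 26, maxFreq);
    int framed = (maxFreq - 1) * (n + 1) + maxFreqCount;
    return std::max(framed, (int)tasks.size());
}
```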
Given weights and values of n items, put these items in a knapsack of capacity W to get the maximum total value. Unlike 0/1 Knapsack, you can break items into fractions.
Vals:[60,100,120], Wts:[10,20,30], W = 50
240.0
Because we can slice an item, we simply want the "best bang for our buck." Calculate the value / weight ratio for each item. Sort items descending by this ratio. Take as much as you can of the highest ratio item, then move to the next.
Fails or is incredibly inefficient here because the weights can be fractional. DP is for discrete choices.
The optimal choice is always to take the item with the highest value per unit of weight.
Sort the items using a custom comparator that calculates (double)value / weight and sorts descending. Loop through the sorted items. If the item's weight is <= the remaining capacity W, take the whole thing. If it's greater, take a fraction: (remaining W / item's weight) * item's value, and break the loop since the knapsack is full.
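A sketch of the ratio-greedy loop; here an index array is sorted rather than the items themselves, which is one of several workable layouts:

```cpp
#include <vector>
#include <algorithm>

// Take whole items in descending value/weight order, then a fraction
// of the first item that no longer fits.
double fractionalKnapsack(std::vector<int> values, std::vector<int> weights, double W) {
    std::vector<int> idx(values.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = (int)i;
    std::sort(idx.begin(), idx.end(), [&](int a, int b) {
        return (double)values[a] / weights[a] > (double)values[b] / weights[b];
    });
    double total = 0.0;
    for (int i : idx) {
        if (weights[i] <= W) {                   // item fits whole
            total += values[i];
            W -= weights[i];
        } else {                                 // take the fitting fraction, stop
            total += W / weights[i] * values[i];
            break;
        }
    }
    return total;
}
```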
Given arrival and departure times of all trains that reach a railway station, find the minimum number of platforms required so that no train is kept waiting.
[900, 940, 950]
[910, 1200, 1120]
2
Instead of tying arrivals and departures to specific trains, think of them as independent events on a timeline. Sorting both arrays independently and walking through them like the "merge" step of merge sort lets you track how many trains are currently stationed. Arrival = +1 platform, Departure = -1 platform.
Count how many intervals overlap with every single interval. Max overlap is the answer. O(N²) time.
Sort both independently. Move pointers based on which event (arrival or departure) happens next.
Sort arr and dep arrays. Use pointers i=1 (for arr) and j=0 (for dep). Initialize platforms = 1 and max_platforms = 1. While pointers are valid: if arr[i] <= dep[j], a train arrived before the previous one left, so increment platforms and i. Otherwise, a train departed, so decrement platforms and increment j. Track the max.
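The merge-walk over the two sorted event arrays can be sketched as:

```cpp
#include <vector>
#include <algorithm>

// Arrivals need a platform (+1), departures free one (-1); track the peak.
int minPlatforms(std::vector<int> arr, std::vector<int> dep) {
    std::sort(arr.begin(), arr.end());
    std::sort(dep.begin(), dep.end());
    int platforms = 0, best = 0;
    size_t i = 0, j = 0;
    while (i < arr.size()) {
        if (arr[i] <= dep[j]) {      // a train arrives before the next departure
            ++platforms; ++i;
            best = std::max(best, platforms);
        } else {                     // a train departs, freeing a platform
            --platforms; ++j;
        }
    }
    return best;
}
```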
Given an array nums of distinct integers, return all the possible permutations. You can return the answer in any order.
[1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
A permutation is just a rearrangement of elements. Instead of creating new arrays and tracking visited elements, we can build permutations in-place by swapping elements. At each position, swap the current element with every other available element, explore that path, and then swap them back (backtrack).
Use a boolean array to track used elements and build a new path array. Easy to write, but uses O(N) extra space.
Swap elements in the original array to generate branches. Revert swaps when returning.
Use a recursive function backtrack(start). Base case: if start == nums.size(), add the current nums array to the result. Recursive step: iterate i from start to the end. Choose: swap(nums[start], nums[i]). Explore: call backtrack(start + 1). Un-choose: swap(nums[start], nums[i]) to restore the array for the next iteration.
The n-queens puzzle is the problem of placing n queens on an n x n chessboard such that no two queens attack each other. Return all distinct solutions.
n = 4
[[".Q..","...Q","Q...","..Q."], ["..Q.","Q...","...Q",".Q.."]]
We only place one queen per row, so row attacks are impossible. For column and diagonal attacks, instead of scanning the board (which takes O(n) time per check), we can use boolean arrays.
Main diagonal ID: row - col + (n - 1). Anti-diagonal ID: row + col.
For every placement, iterate vertically and diagonally to check if safe. Slower validation.
Use arrays to track occupied columns and diagonals in O(1) time.
Create a boolean array for cols (size n), diag1 (size 2n), and diag2 (size 2n).
Iterate through columns for the current row. If col, d1, and d2 are all false, place the Queen, set the arrays to true, and recurse to row + 1. On backtrack, set the arrays back to false and remove the Queen.
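A sketch tying together the diagonal-ID formulas and the backtracking loop:

```cpp
#include <vector>
#include <string>

// Row-by-row placement with O(1) column/diagonal conflict checks.
void solveRow(int row, int n, std::vector<std::string>& board,
              std::vector<bool>& cols, std::vector<bool>& d1, std::vector<bool>& d2,
              std::vector<std::vector<std::string>>& result) {
    if (row == n) { result.push_back(board); return; }
    for (int col = 0; col < n; ++col) {
        int i1 = row - col + n - 1;              // main-diagonal id
        int i2 = row + col;                      // anti-diagonal id
        if (cols[col] || d1[i1] || d2[i2]) continue;
        cols[col] = d1[i1] = d2[i2] = true;      // place queen
        board[row][col] = 'Q';
        solveRow(row + 1, n, board, cols, d1, d2, result);
        board[row][col] = '.';                   // backtrack
        cols[col] = d1[i1] = d2[i2] = false;
    }
}

std::vector<std::vector<std::string>> solveNQueens(int n) {
    std::vector<std::vector<std::string>> result;
    std::vector<std::string> board(n, std::string(n, '.'));
    std::vector<bool> cols(n, false), d1(2 * n, false), d2(2 * n, false);
    solveRow(0, n, board, cols, d1, d2, result);
    return result;
}
```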
Given an integer array nums of unique elements, return all possible subsets (the power set). The solution set must not contain duplicate subsets.
[1,2,3]
[[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]
Unlike permutations where we only record the answer at the end of the recursion tree (when length == n), for subsets, every path we take is a valid subset. We simply add the current path to our results at the very beginning of every recursive call.
Start with [[]]. For each num, take existing subsets, copy them, and add num. Very fast but consumes more memory.
Explore all paths via DFS. Add node to subset, recurse, remove node.
Create a backtrack(start, path) function. First thing inside: result.push_back(path). Then loop i from start to nums.size() - 1. Inside loop: push nums[i] to path, recurse with i + 1, pop from path.
Given an integer array nums that may contain duplicates, return all possible subsets. The solution set must not contain duplicate subsets.
[1,2,2]
[[],[1],[1,2],[1,2,2],[2],[2,2]]
To avoid duplicate subsets like generating [1, 2a] and [1, 2b], we must sort the array first. Then, during the loop inside our backtracking function, if i > start and nums[i] == nums[i-1], we continue to skip it.
Generate all subsets normally, sort each subset, and store in a HashSet to filter duplicates. Very slow.
Sort the array. If the current element is the same as the previous (at the same tree level), skip it.
Sort the nums array first. Inside the backtrack loop, add a condition: if(i > start && nums[i] == nums[i-1]) continue;. This ensures that if we have `[1, 2, 2]`, we explore the first `2`, but skip the second `2` at that specific level, preventing duplicate branches.
i(2) > start(1) && 2 == 2. SKIP! (Prevents duplicate [1,2])
i(2) > start(0) && 2 == 2. SKIP! (Prevents duplicate [2])
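The sort-then-skip variant of the subsets template, as a sketch:

```cpp
#include <vector>
#include <algorithm>

// Sorted input plus the same-level duplicate skip from the Decode.
void subsetsDupHelper(const std::vector<int>& nums, size_t start,
                      std::vector<int>& path, std::vector<std::vector<int>>& result) {
    result.push_back(path);
    for (size_t i = start; i < nums.size(); ++i) {
        if (i > start && nums[i] == nums[i - 1]) continue;  // skip duplicate branch
        path.push_back(nums[i]);
        subsetsDupHelper(nums, i + 1, path, result);
        path.pop_back();
    }
}

std::vector<std::vector<int>> subsetsWithDup(std::vector<int> nums) {
    std::sort(nums.begin(), nums.end());
    std::vector<std::vector<int>> result;
    std::vector<int> path;
    subsetsDupHelper(nums, 0, path, result);
    return result;
}
```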
Given an array of distinct integers and a target integer, return a list of all unique combinations where the chosen numbers sum to target. You may choose the same number an unlimited number of times.
c = [2,3,6,7], target = 7
[[2,2,3],[7]]
Because we can reuse elements, when we make the recursive backtrack call after choosing nums[i], we don't move to the next index. We pass i again. We stop when the target becomes 0 (success) or less than 0 (failure).
At each step, either take the number and stay, or don't take and move to `i+1`. Valid, but loop-based is cleaner.
Loop from `start`. Subtract from target. Recurse passing `i` instead of `i+1`.
Pass the remaining target into the function. Base case 1: target < 0 (return). Base case 2: target == 0 (save path and return). Inside the loop, recurse via backtrack(i, target - nums[i]). Passing i allows infinite reuse. The loop naturally moves to i+1 when we backtrack.
Given a collection of candidate numbers (with duplicates) and a target, find all unique combinations that sum to target. Each number may only be used once.
c = [10,1,2,7,6,1,5], t = 8
[[1,1,6],[1,2,5],[1,7],[2,6]]
Since we can't reuse elements, we pass i + 1 to the recursive call. Since there are duplicates in the input array, we must sort the array and use the if (i > start && candidates[i] == candidates[i-1]) continue; trick from Subsets II to avoid duplicate combinations.
Find all combinations, sort each one, store in a Set. Will TLE (Time Limit Exceeded).
Sort the array. Skip duplicates. Stop loop early if `nums[i] > target`.
Sort the array. Base cases: target == 0 (save) and target < 0 (return). In the loop, add duplicate skipping. Add an optimization: if(nums[i] > target) break; since the array is sorted, no subsequent numbers will be valid either! Pass i + 1 to recurse.
i(1) > start(0) && 1 == 1. Skip Duplicate!
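Combining the no-reuse index, the duplicate skip, and the sorted-array early break:

```cpp
#include <vector>
#include <algorithm>

// No reuse (i + 1), duplicate skip, and early break on a sorted array.
void combo2Helper(const std::vector<int>& nums, size_t start, int target,
                  std::vector<int>& path, std::vector<std::vector<int>>& result) {
    if (target == 0) { result.push_back(path); return; }
    for (size_t i = start; i < nums.size(); ++i) {
        if (i > start && nums[i] == nums[i - 1]) continue;  // skip duplicate branch
        if (nums[i] > target) break;                        // sorted: nothing fits
        path.push_back(nums[i]);
        combo2Helper(nums, i + 1, target - nums[i], path, result);
        path.pop_back();
    }
}

std::vector<std::vector<int>> combinationSum2(std::vector<int> nums, int target) {
    std::sort(nums.begin(), nums.end());
    std::vector<std::vector<int>> result;
    std::vector<int> path;
    combo2Helper(nums, 0, target, path, result);
    return result;
}
```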
Given two integers n and k, return all possible combinations of k numbers chosen from the range [1, n].
n = 4, k = 2
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]
Instead of iterating over an array, we are generating numbers from 1 to n using a loop. We push the path to our results only when its size reaches k. We can also optimize: if we need 2 more numbers but only 1 is left in the range, we can stop early!
Explore all subset branches and save only those with length k. Wastes time exploring impossible paths.
Stop loop if remaining elements are fewer than needed to reach length k.
Base case: path.size() == k. Save and return. Loop i from start. The maximum value i can be is n - needed + 1, where needed = k - path.size(). Push i, recurse with i + 1, pop.
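The pruned loop bound `n - needed + 1` in sketch form:

```cpp
#include <vector>

// Prune the loop: i can go at most to n - needed + 1, otherwise
// too few numbers remain to reach length k.
void combineHelper(int n, int k, int start,
                   std::vector<int>& path, std::vector<std::vector<int>>& result) {
    if ((int)path.size() == k) { result.push_back(path); return; }
    int needed = k - (int)path.size();
    for (int i = start; i <= n - needed + 1; ++i) {
        path.push_back(i);
        combineHelper(n, k, i + 1, path, result);
        path.pop_back();
    }
}

std::vector<std::vector<int>> combine(int n, int k) {
    std::vector<std::vector<int>> result;
    std::vector<int> path;
    combineHelper(n, k, 1, path, result);
    return result;
}
```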
Given a string s, partition s such that every substring of the partition is a palindrome. Return all possible palindrome partitioning of s.
"aab"
[["a","a","b"],["aa","b"]]
Loop through the string to make a "cut". Check if the substring from start to i is a palindrome. If it is, add it to your path and recursively ask the function to partition the rest of the string starting from i + 1.
Use O(N²) DP to precompute all palindromes. Then backtrack. Slightly faster, but complex.
Use a helper function to check palindrome status during the DFS. Clean and standard.
Base case: start == s.length(). Save path. Loop i from start to end. Check isPalindrome(s, start, i). If true, extract the substring s.substr(start, i - start + 1), push it, recurse with i + 1, and pop.
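The cut-and-recurse structure with the palindrome helper, as a sketch:

```cpp
#include <vector>
#include <string>

bool isPalindrome(const std::string& s, int lo, int hi) {
    while (lo < hi) if (s[lo++] != s[hi--]) return false;
    return true;
}

// Try every cut position; only recurse past palindromic prefixes.
void partitionHelper(const std::string& s, int start,
                     std::vector<std::string>& path,
                     std::vector<std::vector<std::string>>& result) {
    if (start == (int)s.size()) { result.push_back(path); return; }
    for (int i = start; i < (int)s.size(); ++i) {
        if (!isPalindrome(s, start, i)) continue;
        path.push_back(s.substr(start, i - start + 1));
        partitionHelper(s, i + 1, path, result);
        path.pop_back();
    }
}

std::vector<std::vector<std::string>> partition(const std::string& s) {
    std::vector<std::vector<std::string>> result;
    std::vector<std::string> path;
    partitionHelper(s, 0, path, result);
    return result;
}
```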
Given an m x n grid of characters board and a string word, return true if word exists in the grid. The word can be constructed from letters of sequentially adjacent cells (horizontal/vertical). The same letter cell may not be used more than once.
We run a DFS starting from every cell that matches the first letter. To ensure we don't visit the same cell twice in a single path, we temporarily change the character on the board (e.g., to *). After exploring all 4 directions, we backtrack by restoring the original character.
Create a `vector<vector<bool>>` visited matrix to mark used cells. Works, but costs O(M*N) extra space.
Modify board directly. Save char, set to `*`, recurse, restore char.
1. Double loop to find `board[r][c] == word[0]`. Start DFS.
2. DFS Base cases: index == word.length() return true. Bounds check & char match check.
3. Save temp = board[r][c]. Set board[r][c] = '*'.
4. Recurse 4 directions. If any return true, return true.
5. Restore board[r][c] = temp.
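The five steps above in sketch form, with the temporary `'*'` marking:

```cpp
#include <vector>
#include <string>

// DFS with in-place marking: overwrite the cell with '*' for the current
// path, then restore it on backtrack.
bool dfsWord(std::vector<std::vector<char>>& board, const std::string& word,
             int r, int c, size_t idx) {
    if (idx == word.size()) return true;
    if (r < 0 || r >= (int)board.size() || c < 0 || c >= (int)board[0].size() ||
        board[r][c] != word[idx]) return false;
    char temp = board[r][c];
    board[r][c] = '*';                       // mark visited for this path
    bool found = dfsWord(board, word, r + 1, c, idx + 1) ||
                 dfsWord(board, word, r - 1, c, idx + 1) ||
                 dfsWord(board, word, r, c + 1, idx + 1) ||
                 dfsWord(board, word, r, c - 1, idx + 1);
    board[r][c] = temp;                      // restore on backtrack
    return found;
}

bool exist(std::vector<std::vector<char>> board, const std::string& word) {
    for (int r = 0; r < (int)board.size(); ++r)
        for (int c = 0; c < (int)board[0].size(); ++c)
            if (dfsWord(board, word, r, c, 0)) return true;
    return false;
}
```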
Write a program to solve a Sudoku puzzle by filling the empty cells. Empty cells are indicated by the character '.'.
Iterate through the board. Find an empty cell. Try placing digits '1' through '9'. For each digit, check if it's valid in the row, column, and 3x3 subgrid. If valid, place it, and recursively try to solve the rest of the board. If the recursive call fails, undo the placement (backtrack to '.') and try the next digit.
Re-scanning the entire 9x9 board every time to validate a move. Works, but inefficient.
Use `3 * (r/3) + i/3` logic to check row, col, and 3x3 box in a single loop of 9.
Our `solve` function iterates the grid. When it hits `'.'`, it loops 1-9. It calls `isValid(r, c, char)`. `isValid` loops 0-8 checking `board[r][i]`, `board[i][c]`, and the 3x3 grid using the math formula `board[3*(row/3) + i/3][3*(col/3) + i%3]`. If valid, place char, if(solve(board)) return true. If false, backtrack. If all 1-9 fail, return false. If loop finishes, return true (solved).
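The whole solver, including the single-loop `isValid` with the 3x3 box formula, fits in a short sketch:

```cpp
#include <vector>

// Check row, column, and 3x3 box for `ch` in one loop of 9.
bool isValid(const std::vector<std::vector<char>>& board, int r, int c, char ch) {
    for (int i = 0; i < 9; ++i) {
        if (board[r][i] == ch || board[i][c] == ch) return false;
        if (board[3 * (r / 3) + i / 3][3 * (c / 3) + i % 3] == ch) return false;
    }
    return true;
}

// Fill the first empty cell with each valid digit, recurse, undo on failure.
bool solveSudoku(std::vector<std::vector<char>>& board) {
    for (int r = 0; r < 9; ++r)
        for (int c = 0; c < 9; ++c)
            if (board[r][c] == '.') {
                for (char ch = '1'; ch <= '9'; ++ch)
                    if (isValid(board, r, c, ch)) {
                        board[r][c] = ch;
                        if (solveSudoku(board)) return true;
                        board[r][c] = '.';   // backtrack
                    }
                return false;                // no digit fits: dead end
            }
    return true;                             // no empty cell left: solved
}
```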
AND (&): 1 if both are 1. Used for masking.
OR (|): 1 if either is 1. Used to set bits.
XOR (^): 1 if different, 0 if same. Perfect for toggling and finding odd-occurrences.
Shifts (<<, >>): Multiply or divide by 2.
Given a non-empty array of integers, every element appears twice except for one. Find that single one. You must implement a solution with a linear runtime complexity and use only constant extra space.
[4,1,2,1,2]
4
The XOR operator (^) has two magical properties:
1. a ^ a = 0 (Any number XORed with itself is 0)
2. a ^ 0 = a (Any number XORed with 0 remains unchanged).
If we XOR every number in the array together, all pairs will cancel out into 0s, leaving only the unique number.
Store counts in a map. Loop map to find the element with count == 1. Violates the O(1) space constraint.
Keep a running XOR total. Fast, elegant, and uses zero extra memory.
Initialize a variable result = 0. Loop through every number in the array, applying result ^= num. The order doesn't matter because XOR is commutative and associative. Return the result.
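The XOR fold is a one-liner loop:

```cpp
#include <vector>

// Pairs cancel out (a ^ a = 0), leaving only the unique element.
int singleNumber(const std::vector<int>& nums) {
    int result = 0;
    for (int num : nums) result ^= num;  // order irrelevant: XOR is commutative
    return result;
}
```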
Write a function that takes the binary representation of an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).
n = 11 (1011 in binary)
3
If you subtract 1 from a number (n - 1), all the trailing 0s become 1s, and the lowest 1 becomes a 0. If you then bitwise AND it with the original number (n & (n - 1)), it perfectly erases that lowest 1 bit. We can just count how many times we can do this until n is 0!
Loop exactly 32 times, shifting a '1' mask left, and checking `(n & mask) != 0`. Works, but always takes 32 operations.
Only loop exactly as many times as there are '1's in the number.
Initialize a count = 0. Create a while(n > 0) loop. Inside the loop, perform n = n & (n - 1). This drops the lowest 1 bit. Increment count++. Return the count.
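Kernighan's bit trick in sketch form:

```cpp
#include <cstdint>

// n & (n - 1) clears the lowest set bit; loop until nothing is left.
int hammingWeight(uint32_t n) {
    int count = 0;
    while (n > 0) {
        n &= (n - 1);   // drop the lowest 1 bit
        ++count;
    }
    return count;
}
```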
Given an integer n, return an array ans of length n + 1 such that for each i (0 <= i <= n), ans[i] is the number of 1's in the binary representation of i.
n = 5
[0,1,1,2,1,2]
If you bit-shift a number right (i >> 1), you are essentially dividing it by 2. The number of 1s in i is exactly equal to the number of 1s in i / 2, plus 1 if i itself was odd (because it had a 1 in the least significant bit position). This is perfect for Dynamic Programming.
Run `hammingWeight()` on every number from 0 to n. Works, but duplicates effort.
ans[i] = ans[i >> 1] + (i & 1). Calculates in pure O(N) time.
Create an array ans of size n + 1, initialized with 0s. Loop i from 1 to n. Calculate ans[i] by taking the pre-computed answer for i >> 1 (which is ans[i/2]), and add i & 1 (which evaluates to 1 if odd, 0 if even).
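The DP recurrence as code:

```cpp
#include <vector>

// ans[i] = bits of i/2, plus 1 if i is odd.
std::vector<int> countBits(int n) {
    std::vector<int> ans(n + 1, 0);
    for (int i = 1; i <= n; ++i)
        ans[i] = ans[i >> 1] + (i & 1);
    return ans;
}
```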
Reverse bits of a given 32 bits unsigned integer.
00000010100101000001111010011100
00111001011110000010100101000000
Instead of swapping bits in place, it is much easier to extract the bits from n one by one starting from the right (Least Significant Bit), and append them to a new result variable, shifting result to the left each time to make room.
Convert number to a 32-char binary string, reverse the string, convert back to int. Terribly slow.
Iterate 32 times. Extract rightmost bit with `n & 1`. Shift `result` left, add bit. Shift `n` right.
Initialize res = 0. Loop exactly 32 times. Shift res left by 1 to make room (res << 1). Extract the rightmost bit of n using n & 1. Add it to res using the OR operator (res | bit). Finally, shift n right by 1 (n >> 1) to process the next bit.
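The 32-iteration shuffle as a sketch:

```cpp
#include <cstdint>

// Peel bits off n from the right and push them onto res from the left.
uint32_t reverseBits(uint32_t n) {
    uint32_t res = 0;
    for (int i = 0; i < 32; ++i) {
        res = (res << 1) | (n & 1);  // make room, append the next bit
        n >>= 1;
    }
    return res;
}
```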
Given an array nums containing n distinct numbers in the range [0, n], return the only number in the range that is missing from the array.
[3,0,1]
2
n=3. Range is 0,1,2,3.
If the array wasn't missing a number, the elements would perfectly match the indices 0 through n. If we XOR every index together, and XOR every array value together, everything will come in pairs and cancel out to 0... except the missing number, which has an index but no matching value!
Calculate `n * (n + 1) / 2`. Subtract sum of elements. Great, but strictly speaking, risks integer overflow on massive arrays.
XOR avoids large sums entirely. Initialize `res = n`, then XOR `i` and `nums[i]`.
Initialize res = nums.size() (because the array indices only go up to n-1, but the numbers go up to n). Loop i from 0 to n-1. Update res ^= i ^ nums[i]. The result will hold the missing number.
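The index-vs-value cancellation in sketch form:

```cpp
#include <vector>

// XOR indices against values; the unpaired index is the missing number.
int missingNumber(const std::vector<int>& nums) {
    int res = (int)nums.size();          // seed with n, the index with no slot
    for (int i = 0; i < (int)nums.size(); ++i)
        res ^= i ^ nums[i];
    return res;
}
```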
These final problems represent the culmination of your prep. They rarely fit into a single easy category. They require you to combine multiple data structures (like a Map and a Linked List) or use highly specialized structures (like a Trie) to achieve optimal performance.
A trie (pronounced as "try") or prefix tree is a tree data structure used to efficiently store and retrieve keys in a dataset of strings. Implement the Trie class with insert, search, and startsWith methods.
["Trie","insert","search","startsWith"]
[[],["apple"],["apple"],["app"]]
[null,null,true,true]
Instead of storing whole words in a node, each node represents a single character. A node contains an array of 26 pointers (one for each letter of the alphabet) to its children, and a boolean flag isEndOfWord. Words sharing a prefix share the same path down the tree.
O(1) exact word search, but `startsWith` requires checking every word in the set. O(N*M) prefix search.
Every operation takes time proportional to the length of the word (L). Extremely fast for autocomplete.
Create a TrieNode struct with TrieNode* children[26] and bool isEnd.
Insert: Iterate letters. If children[char - 'a'] is null, create a new node. Move pointer down. Set isEnd = true at the final node.
Search/Prefix: Iterate letters. If the child is null, return false. If loop finishes, return isEnd (for search) or true (for startsWith).
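A sketch of the node struct and the three operations (the `walk` helper is an illustrative addition that factors out the shared traversal; node cleanup is omitted for brevity):

```cpp
#include <string>

// One node per character: 26 child pointers and an end-of-word flag.
struct TrieNode {
    TrieNode* children[26] = {nullptr};
    bool isEnd = false;
};

struct Trie {
    TrieNode* root = new TrieNode();

    void insert(const std::string& word) {
        TrieNode* node = root;
        for (char ch : word) {
            int idx = ch - 'a';
            if (!node->children[idx]) node->children[idx] = new TrieNode();
            node = node->children[idx];
        }
        node->isEnd = true;                  // mark the final node as a word
    }

    TrieNode* walk(const std::string& s) {   // shared traversal helper
        TrieNode* node = root;
        for (char ch : s) {
            node = node->children[ch - 'a'];
            if (!node) return nullptr;
        }
        return node;
    }

    bool search(const std::string& word) {
        TrieNode* node = walk(word);
        return node && node->isEnd;
    }

    bool startsWith(const std::string& prefix) { return walk(prefix) != nullptr; }
};
```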
A transformation sequence from beginWord to endWord using a wordList requires changing exactly one letter at a time. Every intermediate word must exist in the wordList. Return the length of the shortest transformation sequence, or 0 if none exists.
"hit" -> "cog"
["hot","dot","dog","lot","log","cog"]
5
Anytime a problem asks for the "shortest path" or "minimum steps" and the costs are uniform (1 change = 1 step), it is a screaming siren for Breadth-First Search (BFS). The "nodes" are words, and "edges" connect words that differ by exactly one letter.
Exploring every possible valid path recursively to find the minimum length will result in Time Limit Exceeded.
Use a Queue to explore radially. Use a Hash Set for O(1) dictionary lookups and to mark words as visited.
1. Put wordList into an unordered_set. If endWord isn't in it, return 0.
2. Push beginWord into a queue. Initialize steps = 1.
3. While queue is not empty, process level by level (using size = q.size()).
4. For each word, try changing each character from 'a' to 'z'. If the mutated word equals endWord, return steps + 1. If it exists in the set, erase it from the set (to mark as visited) and push to queue.
5. Increment steps after each level.
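The five steps above as a level-order BFS sketch:

```cpp
#include <string>
#include <vector>
#include <queue>
#include <unordered_set>

// BFS over one-letter mutations; erasing from the set marks words visited.
int ladderLength(std::string beginWord, const std::string& endWord,
                 const std::vector<std::string>& wordList) {
    std::unordered_set<std::string> dict(wordList.begin(), wordList.end());
    if (!dict.count(endWord)) return 0;
    std::queue<std::string> q;
    q.push(beginWord);
    int steps = 1;
    while (!q.empty()) {
        int size = (int)q.size();
        for (int k = 0; k < size; ++k) {         // process one BFS level
            std::string word = q.front(); q.pop();
            for (size_t i = 0; i < word.size(); ++i) {
                char original = word[i];
                for (char c = 'a'; c <= 'z'; ++c) {
                    word[i] = c;
                    if (word == endWord) return steps + 1;
                    if (dict.count(word)) {
                        dict.erase(word);        // mark visited
                        q.push(word);
                    }
                }
                word[i] = original;
            }
        }
        ++steps;
    }
    return 0;
}
```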
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache. Implement the LRUCache class with get(key) and put(key, value). Both operations must run in O(1) average time complexity.
To get O(1) lookup, we MUST use a Hash Map. To get O(1) deletion and insertion (for reordering usage), we MUST use a Doubly Linked List. The trick is to store the Linked List Node pointer inside the Hash Map as the value!
O(1) access with an array, but moving an accessed item to the "front/back" of the array takes O(N) shift time.
Map gives O(1) node lookup. DLL allows O(1) severing and reconnecting of nodes.
1. Create a Node struct with key, val, prev, next.
2. Use two dummy nodes: head and tail. head->next is the Most Recently Used. tail->prev is the Least Recently Used.
3. Get: If key exists in map, extract the node, delete it from its current DLL position, insert it right after head (marking it recently used), return val.
4. Put: If key exists, update value and move to front. If new, create node, insert after head, add to map. If map size > capacity, remove node before tail, delete from map, delete node from memory.
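A sketch of the map-plus-DLL design; here `std::list` stands in for the hand-rolled doubly linked list with dummy nodes described above, with `splice` providing the O(1) move-to-front:

```cpp
#include <list>
#include <unordered_map>

// Front of the list = most recently used, back = least recently used.
class LRUCache {
    int capacity;
    std::list<std::pair<int, int>> items;                       // {key, value}
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> pos;
public:
    explicit LRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        auto it = pos.find(key);
        if (it == pos.end()) return -1;
        items.splice(items.begin(), items, it->second);         // move to front, O(1)
        return it->second->second;
    }

    void put(int key, int value) {
        auto it = pos.find(key);
        if (it != pos.end()) {                                  // existing key: update
            it->second->second = value;
            items.splice(items.begin(), items, it->second);
            return;
        }
        if ((int)items.size() == capacity) {                    // evict the LRU entry
            pos.erase(items.back().first);
            items.pop_back();
        }
        items.emplace_front(key, value);
        pos[key] = items.begin();
    }
};
```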
Given an m x n 2D binary grid which represents a map of '1's (land) and '0's (water), return the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically.
This is the classic connected-components problem. Every time we encounter an unvisited '1' in our loop, we have discovered a brand new island. To ensure we don't count its connected pieces as separate islands, we launch a DFS that visits all connected '1's and changes them to '0's ("sinking" the island).
Use a boolean matrix to track visited land. Works well but uses O(M*N) extra memory.
Mutate the input grid by changing '1's to '0's. Saves space if modification is allowed.
Iterate through every cell [r][c]. If the cell is '1', increment the islandCount and call dfs(grid, r, c). The DFS function checks if bounds are valid and if the cell is '1'. If so, it sets the cell to '0' and makes four recursive calls for Up, Down, Left, and Right.
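The sink-the-island DFS as a sketch:

```cpp
#include <vector>

// Flip every connected '1' to '0' so each island is counted exactly once.
void sink(std::vector<std::vector<char>>& grid, int r, int c) {
    if (r < 0 || r >= (int)grid.size() || c < 0 || c >= (int)grid[0].size() ||
        grid[r][c] != '1') return;
    grid[r][c] = '0';
    sink(grid, r + 1, c); sink(grid, r - 1, c);
    sink(grid, r, c + 1); sink(grid, r, c - 1);
}

int numIslands(std::vector<std::vector<char>> grid) {
    int count = 0;
    for (int r = 0; r < (int)grid.size(); ++r)
        for (int c = 0; c < (int)grid[0].size(); ++c)
            if (grid[r][c] == '1') { ++count; sink(grid, r, c); }
    return count;
}
```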
Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it can trap after raining.
[0,1,0,2,1,0,1,3,2,1,2,1]
6
For any given column i, the amount of water resting exactly on top of it is min(highest_wall_to_the_left, highest_wall_to_the_right) - height[i]. If this math yields a negative number, it holds 0 water.
Make a leftMax array and a rightMax array. Then loop and apply the formula. Easy to write, takes O(N) space.
Keep pointers at ends. Move the smaller pointer inward, updating leftMax or rightMax dynamically.
Initialize left = 0, right = n - 1. Keep track of leftMax = 0 and rightMax = 0. While left < right, compare the heights at the two pointers. The smaller boundary is the bottleneck! So, if height[left] < height[right], we process the left side: update leftMax, add leftMax - height[left] to total water, and move left++. Else, process the right side.
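The two-pointer bottleneck logic above can be sketched as:

```cpp
#include <vector>
#include <algorithm>

// The smaller boundary is the bottleneck, so process that side and move inward.
int trap(const std::vector<int>& height) {
    int left = 0, right = (int)height.size() - 1;
    int leftMax = 0, rightMax = 0, water = 0;
    while (left < right) {
        if (height[left] < height[right]) {
            leftMax = std::max(leftMax, height[left]);
            water += leftMax - height[left];     // bounded by the left wall
            ++left;
        } else {
            rightMax = std::max(rightMax, height[right]);
            water += rightMax - height[right];   // bounded by the right wall
            --right;
        }
    }
    return water;
}
```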