Complete Reference · File 06 of 06
C++ Mastery Series · ICPC & Codeforces Edition

The Black Box

// COPY. PASTE. WIN. — Contest-Grade Templates & Algorithms

This is not a textbook. This is your arsenal. Every template is contest-tested, every function is copy-paste ready, every pattern is mapped to the problem types that actually appear in ICPC and Codeforces Div 1/2. From the moment you open this file to the moment you submit AC — this is the only reference you need.

Ch 16 · Silver Bullet Boilerplate Ch 17 · Number Theory Ch 18 · Heavy Tree Algorithms Ch 19 · Advanced Range Queries Ch 20 · String Mastery Ch 21 · Coordinate Compression & Mo's Ch 22 · Network Flow & Matching Ch 23 · Computational Geometry Ch 24 · Advanced Graph Algorithms Ch 25 · Digit DP & Advanced DP Ch 26 · Game Theory & Math Ch 27 · ICPC Strategy & Pattern Recognition
Chapter 16 · Start Here
The Silver Bullet — Your Contest Starting File

16.1   The Ultimate CP Template Copy This First

This is the file you paste at the start of every contest. Every macro, every typedef, every fast-I/O trick — pre-loaded and battle-tested. The goal: zero boilerplate overhead when the clock starts.

Boilerplate Breakdown (Why It Forms the Silver Bullet)
  • using ll = long long;: Prevents 90% of integer overflow bugs. When in doubt, use ll.
  • #define pb push_back: Saves seconds typing adjacency lists and array builders, which compound over a contest.
  • #define all(x) (x).begin(),(x).end(): Turns sort(v.begin(), v.end()) into sort(all(v)). Eliminates typo risks.
  • #define sz(x) (int)(x).size(): size() yields an unsigned type, which breaks horribly when subtracting — for an empty vector, v.size() - 1 wraps around to a huge unsigned value instead of -1. This casts it to signed int.
  • #define rep(i,a,b): Clean syntax for loops. Reduces loop index boundary errors (< vs <=).
  • chmin/chmax: Turns dp[i] = min(dp[i], cost) into chmin(dp[i], cost), streamlining DP transitions visually and textually.
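The sz() pitfall in particular is worth seeing once. A minimal demonstration (the helper names here are illustrative, not part of the template):

```cpp
#include <bits/stdc++.h>
using namespace std;
#define sz(x) (int)(x).size()

// For an empty vector, v.size() - 1 wraps around to a huge unsigned value,
// so a guard like `i < v.size() - 1` silently misfires.
bool unsignedGuardMisfires(){
    vector<int> v;              // empty
    return 0 < v.size() - 1;    // true(!) — the 0 converts to size_t, and size_t wraps to SIZE_MAX
}
bool signedGuardBehaves(){
    vector<int> v;              // empty
    return !(0 < sz(v) - 1);    // sz(v) - 1 == -1, comparison stays signed
}
```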
#include <bits/stdc++.h>
using namespace std;

// ─── TYPEDEFS ────────────────────────────────────────────────────────
using ll  = long long;
using ull = unsigned long long;
using ld  = long double;
using pii = pair<int,int>;
using pll = pair<ll,ll>;
using vi  = vector<int>;
using vll = vector<ll>;
using vvi = vector<vi>;

// ─── CONSTANTS ───────────────────────────────────────────────────────
const ll INF = 2e18;
const int INF32 = 2e9;
const ll MOD = 1e9 + 7;             // Also 998244353 for NTT problems
const ld PI = acosl(-1.0L);
const int dx[] = {0,0,1,-1};        // 4-directional grid movement
const int dy[] = {1,-1,0,0};

// ─── MACROS ──────────────────────────────────────────────────────────
#define pb push_back
#define eb emplace_back
#define all(x) (x).begin(),(x).end()
#define rall(x) (x).rbegin(),(x).rend()
#define sz(x) (int)(x).size()
#define fi first
#define se second
#define mp make_pair
#define YES cout << "YES\n"
#define NO cout << "NO\n"
#define rep(i,a,b) for(int i=(a);i<(b);i++)
#define per(i,a,b) for(int i=(b)-1;i>=(a);i--)
#define FOR(i,n) for(int i=0;i<(n);i++)

// ─── DEBUG (local-only, stripped in judge) ───────────────────────────
#ifdef LOCAL
#define dbg(x) cerr << "[" << #x << "] = " << (x) << "\n"
#define dbg2(x,y) cerr <<"["<<#x<<","<<#y<<"] = "<<(x)<<","<<(y)<<"\n"
template<typename T> void dbgVec(const vector<T>& v, const string& name){
    cerr << name << ": [";
    for(int i=0;i<sz(v);i++) cerr << v[i] << (i+1<sz(v)?",":"]");
    cerr << "\n";
}
#else
#define dbg(x)
#define dbg2(x,y)
#define dbgVec(v,n)
#endif

// ─── UTILITY FUNCTIONS ───────────────────────────────────────────────
template<typename T> bool chmin(T& a, T b){ return a > b ? a=b,true : false; }
template<typename T> bool chmax(T& a, T b){ return a < b ? a=b,true : false; }
ll safeMod(ll a, ll m){ return ((a%m)+m)%m; }

// ─── MAIN ────────────────────────────────────────────────────────────
void solve(){
    // Your solution here
}

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int T = 1;
    // cin >> T;   // Uncomment for multi-test problems
    while(T--) solve();
    return 0;
}
Compile on your machine with LOCAL

Add -DLOCAL to your compile command: g++ -std=c++17 -O2 -DLOCAL -Wall sol.cpp -o sol. Then use dbg(variable) freely. On the online judge, -DLOCAL is not set, so all debug output is compiled away — no performance impact, no runtime errors.

16.2   I/O Tricks & Reading Patterns Essential

// ── Reading n integers into a vector ─────────────────────────────────
int n; cin >> n;
vi a(n);
for(auto& x : a) cin >> x;

// ── Reading an n×m grid ──────────────────────────────────────────────
int n, m; cin >> n >> m;
vector<string> grid(n);
for(auto& row : grid) cin >> row;   // grid[i][j] = character

// ── Weighted graph reading ───────────────────────────────────────────
int V, E; cin >> V >> E;
vector<vector<pair<int,int>>> adj(V+1);
for(int i=0;i<E;i++){
    int u, v, w; cin >> u >> v >> w;
    adj[u].pb({v,w});
    adj[v].pb({u,w});   // Omit for directed graphs
}

// ── Manual fast integer reader — use only when cin is still too slow ──
// Non-negative integers only; getchar_unlocked is POSIX-only (use getchar on Windows)
inline ll readInt(){
    ll x = 0; char c = getchar_unlocked();
    while(c < '0' || c > '9') c = getchar_unlocked();
    while(c >= '0' && c <= '9'){ x = x*10 + c - '0'; c = getchar_unlocked(); }
    return x;
}

// ── Output — avoid endl, use "\n" ────────────────────────────────────
cout << ans << "\n";   // GOOD
cout << ans << endl;   // BAD — flushes the buffer on every line, much slower

// ── Printing a vector ────────────────────────────────────────────────
for(int i=0;i<sz(a);i++) cout << a[i] << " \n"[i+1==sz(a)];
Chapter 17 · Number Theory
Modular Arithmetic & Number Theory — The Engine of Div2 D/E

17.1   Binary Exponentiation & Modular Inverse Must Have

Deployment Guide: Mod Inverse & Expo

Trigger: Any problem asking you to "output the answer modulo 10⁹+7" involving division (e.g., combinatorics nCr, probabilities P/Q mod M).
Constraints: MOD must be prime for Fermat's Little Theorem (inv(a) = a^(MOD-2)). If MOD is not prime, use Extended Euclidean.
Trap: The inverse of 0 does not exist. Ensure a and MOD are coprime — for prime MOD, that just means a % MOD != 0.

// Binary Exponentiation — a^b mod m in O(log b)
ll binpow(ll a, ll b, ll mod){
    a %= mod;
    ll res = 1;
    while(b > 0){
        if(b & 1) res = res * a % mod;   // Odd exponent: multiply in the current base
        a = a * a % mod;                  // Square the base
        b >>= 1;                          // Halve the exponent
    }
    return res;
}

// Modular Inverse via Fermat's Little Theorem
// Requires: MOD is PRIME, a is not divisible by MOD
// inv(a) = a^(MOD-2) mod MOD
ll inv(ll a, ll mod = MOD){ return binpow(a, mod-2, mod); }

// Modular Inverse via Extended Euclidean — works for non-prime moduli
ll extgcd(ll a, ll b, ll& x, ll& y){
    if(!b){ x = 1; y = 0; return a; }
    ll x1, y1;
    ll g = extgcd(b, a%b, x1, y1);
    x = y1;
    y = x1 - (a/b)*y1;
    return g;
}
ll modinv(ll a, ll m){
    ll x, y;
    ll g = extgcd(a, m, x, y);
    if(g != 1) return -1;   // No inverse if gcd != 1
    return (x%m + m) % m;
}

17.2   nCr with Precomputed Factorials Must Have

const int MAXN = 2e6 + 5;
ll fact[MAXN], inv_fact[MAXN];

void precompute(){
    fact[0] = 1;
    for(int i=1;i<MAXN;i++) fact[i] = fact[i-1] * i % MOD;
    inv_fact[MAXN-1] = binpow(fact[MAXN-1], MOD-2, MOD);
    for(int i=MAXN-2;i>=0;i--) inv_fact[i] = inv_fact[i+1] * (i+1) % MOD;   // inv_fact[i] = inv_fact[i+1]*(i+1)
}

ll nCr(int n, int r){
    if(r < 0 || r > n) return 0;
    return fact[n] * inv_fact[r] % MOD * inv_fact[n-r] % MOD;
}
ll nPr(int n, int r){
    if(r < 0 || r > n) return 0;
    return fact[n] * inv_fact[n-r] % MOD;
}
Lucas' Theorem — nCr when MOD is small or n is huge

If n ≥ MOD (with MOD a small prime), use Lucas: nCr(n,r) mod p = nCr(n/p, r/p) * nCr(n%p, r%p) mod p. Recurse until n and r are below p. Essential for problems where n can be up to 10¹⁸ but MOD is a small prime.
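A minimal sketch of that recursion (the Lucas struct and helper names are illustrative, not part of the template above):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

ll binpow(ll a, ll b, ll mod){
    a %= mod; ll res = 1;
    while(b){ if(b & 1) res = res * a % mod; a = a * a % mod; b >>= 1; }
    return res;
}

// nCr mod a SMALL prime p, for n possibly far larger than p:
// precompute factorials mod p up to p-1, then combine base-p digits.
struct Lucas {
    ll p; vector<ll> fact;
    Lucas(ll p) : p(p), fact(p) {
        fact[0] = 1;
        for(ll i=1;i<p;i++) fact[i] = fact[i-1]*i % p;
    }
    ll smallC(ll n, ll r){          // requires n, r < p
        if(r < 0 || r > n) return 0;
        return fact[n] * binpow(fact[r]*fact[n-r] % p, p-2, p) % p;
    }
    ll C(ll n, ll r){               // nCr mod p via Lucas' theorem
        ll res = 1;
        while(n > 0 || r > 0){      // one digit of n and r per iteration
            res = res * smallC(n % p, r % p) % p;
            n /= p; r /= p;
        }
        return res;
    }
};
```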

17.3   Linear Sieve — Primes & SPF in O(n) Must Have

Deployment Guide: Linear Sieve & SPF

Trigger: You need to prime factorize multiple queries (e.g., 10⁵ queries) of numbers up to 10⁷. Or you need to quickly compute Phi/Sigma for a range.
Constraints: Works for N up to roughly 10⁷ — the spf array alone is 40 MB of ints at that size.
Trap: Don't use a sieve for N > 10⁸ (Memory Limit Exceeded). Use Pollard-Rho if you need to factorize sparse, large numbers (up to 10¹⁸).

const int SIEVE_MAX = 1e7 + 5;
int spf[SIEVE_MAX];     // spf[i] = smallest prime factor of i
bool isCmp[SIEVE_MAX];  // isCmp[i] = true if i is composite
vi primes;

void linearSieve(){
    for(int i=2;i<SIEVE_MAX;i++){
        if(!isCmp[i]){ primes.pb(i); spf[i] = i; }
        for(int j=0; j<sz(primes) && (ll)i*primes[j]<SIEVE_MAX; j++){
            isCmp[i*primes[j]] = true;
            spf[i*primes[j]] = primes[j];
            if(i % primes[j] == 0) break;   // Key: each composite is marked exactly once
        }
    }
}

// O(log n) factorization using SPF
map<int,int> factorize(int n){
    map<int,int> f;
    while(n > 1){ f[spf[n]]++; n /= spf[n]; }
    return f;
}

// Count DISTINCT prime factors (the inner while skips multiplicity)
int omega(int n){
    int cnt = 0;
    while(n > 1){ cnt++; int p = spf[n]; while(n%p == 0) n /= p; }
    return cnt;
}

// Euler's Totient Function using SPF
int phi(int n){
    int result = n, tmp = n;
    while(tmp > 1){
        int p = spf[tmp];
        result -= result / p;
        while(tmp % p == 0) tmp /= p;
    }
    return result;
}

17.4   CRT, GCD, LCM & Number Theory Tricks Reference

// Chinese Remainder Theorem — find x with x ≡ r1 (mod m1), x ≡ r2 (mod m2)
// Requires: m1, m2 coprime and m1*m2 fits in ll
// (for moduli near 10⁹+, switch the inner products to __int128 to avoid overflow)
ll crt(ll r1, ll m1, ll r2, ll m2){
    ll x, y;
    extgcd(m1, m2, x, y);   // gcd is 1 since m1, m2 are coprime
    return (r1 + m1*((r2-r1)%m2*x%m2 + m2)%m2) % (m1*m2);
}

// Fast GCD (C++17 has std::gcd; this recursive version handles 0 correctly too)
ll gcd(ll a, ll b){ return b ? gcd(b, a%b) : a; }
ll lcm(ll a, ll b){ return a / gcd(a,b) * b; }   // Divide first to avoid overflow

// Integer square root (exact)
ll isqrt(ll n){
    ll x = sqrtl(n);
    while(x*x > n) x--;
    while((x+1)*(x+1) <= n) x++;
    return x;
}

// Miller-Rabin Primality Test — O(k log² n); these 12 bases are deterministic
// for all 64-bit n (the bound 3.3×10²⁴ comfortably covers ll)
ll mulmod(ll a, ll b, ll m){ return (__int128)a * b % m; }
ll powmod(ll a, ll d, ll m){   // plain binpow would overflow ll for m > ~3·10⁹
    ll r = 1; a %= m;
    while(d){ if(d & 1) r = mulmod(r, a, m); a = mulmod(a, a, m); d >>= 1; }
    return r;
}
bool millerTest(ll n, ll a){
    if(n % a == 0) return n == a;
    ll d = n-1; int r = 0;
    while(d % 2 == 0){ d /= 2; r++; }
    ll x = powmod(a, d, n);
    if(x == 1 || x == n-1) return true;
    for(int i=0;i<r-1;i++){
        x = mulmod(x, x, n);
        if(x == n-1) return true;
    }
    return false;
}
bool isPrime(ll n){
    if(n < 2) return false;
    for(ll a : {2,3,5,7,11,13,17,19,23,29,31,37})
        if(!millerTest(n, a)) return false;
    return true;
}
Chapter 18 · Trees
Heavy Tree Algorithms — LCA, Euler Tour & Centroid Decomposition

18.1   Binary Lifting — LCA in O(log n) Must Have

Deployment Guide: Binary Lifting LCA

Trigger: "Find the distance between u and v on a tree", "Path sum on a static tree", or queries requiring the k-th ancestor of a node.
Constraints: N, Q ≤ 2·10⁵. Precomputation takes O(N log N) memory and time. Queries are O(log N).
Trap: Array size for par[MAXV][LOG] — ensure 2^LOG > MAXV (here 2^18 > 2·10⁵). Also, lca() swaps u and v internally, so don't rely on their original order afterwards.

const int MAXV = 2e5+5, LOG = 18;
int par[MAXV][LOG];   // par[v][k] = 2^k-th ancestor of v
int depth[MAXV];
vector<int> adj[MAXV];

void dfsLCA(int u, int p, int d){
    par[u][0] = p; depth[u] = d;
    for(int k=1;k<LOG;k++)
        par[u][k] = par[par[u][k-1]][k-1];   // 2^k ancestor = 2^(k-1) of the 2^(k-1)
    for(int v : adj[u]) if(v != p) dfsLCA(v, u, d+1);
}
void buildLCA(int root, int n){ dfsLCA(root, root, 0); }

int lca(int u, int v){
    if(depth[u] < depth[v]) swap(u,v);
    int diff = depth[u] - depth[v];
    for(int k=0;k<LOG;k++)               // Lift u to the same depth as v
        if((diff>>k)&1) u = par[u][k];
    if(u == v) return u;
    for(int k=LOG-1;k>=0;k--)            // Lift both until just below the LCA
        if(par[u][k] != par[v][k]){ u = par[u][k]; v = par[v][k]; }
    return par[u][0];
}
int dist(int u, int v){ return depth[u] + depth[v] - 2*depth[lca(u,v)]; }

// Kth ancestor of u — O(log k)
int kthAnc(int u, int k){
    for(int b=0;b<LOG;b++) if((k>>b)&1) u = par[u][b];
    return u;
}

18.2   Euler Tour — Tree Flattening for Range Queries Must Have

int tin[MAXV], tout[MAXV];
int timer_e = 0, flat[MAXV];   // flat[i] = node at euler-tour position i
int sz_tree[MAXV];             // subtree size

void dfsEuler(int u, int p){
    tin[u] = timer_e; flat[timer_e++] = u;
    sz_tree[u] = 1;
    for(int v : adj[u]) if(v != p){ dfsEuler(v, u); sz_tree[u] += sz_tree[v]; }
    tout[u] = timer_e - 1;
}
// Now: the subtree of u spans positions [tin[u], tout[u]] in flat[]
// Any query on the subtree of u = range query on flat[tin[u]..tout[u]]
// isAncestor(u,v): tin[u] <= tin[v] && tout[v] <= tout[u]

// Heavy-Light Decomposition — path queries in O(log²n) with a segment tree
int heavy[MAXV], head_[MAXV], pos_[MAXV], hld_timer = 0;

void dfsHLD(int u, int p){
    sz_tree[u] = 1; heavy[u] = -1;
    for(int v : adj[u]) if(v != p){
        dfsHLD(v, u);
        sz_tree[u] += sz_tree[v];
        if(heavy[u] == -1 || sz_tree[v] > sz_tree[heavy[u]]) heavy[u] = v;
    }
}
void decomposeHLD(int u, int p, int h){
    head_[u] = h; pos_[u] = hld_timer++;
    if(heavy[u] != -1) decomposeHLD(heavy[u], u, h);   // Continue the heavy chain
    for(int v : adj[u]) if(v != p && v != heavy[u]) decomposeHLD(v, u, v);
}

// Path query u→v: while heads differ, query the chain [pos[head], pos[u]], jump up
template<class Seg>   // an `auto` parameter needs C++20; a template works in C++17
ll pathQuery(int u, int v, Seg& seg){
    ll res = 0;
    while(head_[u] != head_[v]){
        if(depth[head_[u]] < depth[head_[v]]) swap(u,v);
        res += seg.query(pos_[head_[u]], pos_[u]);
        u = par[head_[u]][0];
    }
    if(depth[u] > depth[v]) swap(u,v);
    res += seg.query(pos_[u], pos_[v]);
    return res;
}

18.3   Centroid Decomposition — Path Problems on Trees Advanced

int subtreeSz[MAXV];
bool removed[MAXV];
int centTree[MAXV];   // centTree[v] = parent of v in the centroid decomposition tree

int getSubtreeSize(int u, int p){
    subtreeSz[u] = 1;
    for(int v : adj[u]) if(!removed[v] && v != p) subtreeSz[u] += getSubtreeSize(v, u);
    return subtreeSz[u];
}
int getCentroid(int u, int p, int treeSize){
    for(int v : adj[u])
        if(!removed[v] && v != p && subtreeSz[v] > treeSize/2)
            return getCentroid(v, u, treeSize);
    return u;
}
void buildCentroid(int u, int p){
    int sz = getSubtreeSize(u, -1);
    int c = getCentroid(u, -1, sz);
    centTree[c] = p;
    removed[c] = true;   // Remove the centroid and recurse on the remaining pieces
    for(int v : adj[c]) if(!removed[v]) buildCentroid(v, c);
}
// Use case: count paths of length k.
// For each centroid c, count paths passing through c using two DFS passes.
// O(n log²n) for most path queries on trees.
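The "count paths of length k" use case mentioned in the comments can be made concrete. A standalone sketch (countPairsAtDistK and its helpers are illustrative names, and it assumes k ≤ MAXK): for each centroid, depths from already-processed child subtrees sit in a frequency table, so every path is counted exactly once — at the centroid where its endpoints separate.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count pairs (u,v) with dist(u,v) == k via centroid decomposition, O(n log n).
const int MAXK = 100000;
vector<vector<int>> adjT;
vector<bool> removedC;
vector<int> subSz;
vector<long long> freqDepth(MAXK + 2, 0);
long long answer = 0;
int K;

int calcSz(int u, int p){
    subSz[u] = 1;
    for(int v : adjT[u]) if(v != p && !removedC[v]) subSz[u] += calcSz(v, u);
    return subSz[u];
}
int findCentroid(int u, int p, int total){
    for(int v : adjT[u])
        if(v != p && !removedC[v] && subSz[v] > total/2) return findCentroid(v, u, total);
    return u;
}
void collect(int u, int p, int d, vector<int>& depths){
    if(d > K) return;                       // paths longer than k can't contribute
    depths.push_back(d);
    for(int v : adjT[u]) if(v != p && !removedC[v]) collect(v, u, d+1, depths);
}
void solveCentroid(int c){
    vector<int> touched = {0};
    freqDepth[0] = 1;                       // the centroid itself, at depth 0
    for(int v : adjT[c]) if(!removedC[v]){
        vector<int> depths;
        collect(v, c, 1, depths);
        // pair this subtree against everything seen in EARLIER subtrees (and c)
        for(int d : depths) if(K - d >= 0) answer += freqDepth[K - d];
        for(int d : depths){ freqDepth[d]++; touched.push_back(d); }
    }
    for(int d : touched) freqDepth[d] = 0;  // cheap cleanup, no memset over MAXK
}
void decompose(int u){
    int c = findCentroid(u, -1, calcSz(u, -1));
    solveCentroid(c);
    removedC[c] = true;
    for(int v : adjT[c]) if(!removedC[v]) decompose(v);
}
long long countPairsAtDistK(int n, vector<pair<int,int>>& edges, int k){
    adjT.assign(n, {}); removedC.assign(n, false); subSz.assign(n, 0);
    answer = 0; K = k;
    for(auto& [a,b] : edges){ adjT[a].push_back(b); adjT[b].push_back(a); }
    decompose(0);
    return answer;
}
```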
Chapter 19 · Range Queries
Advanced Range Queries — Sparse Table, Lazy SegTree & 2D BIT

19.1   Sparse Table — O(1) Range Minimum Query Must Have

Deployment Guide: Sparse Table

Trigger: Static array (NO updates) with 10⁶+ range queries for MIN, MAX, GCD, or Bitwise AND/OR.
Constraints: N ≤ 2·10⁵ (due to O(N log N) memory). Q can be up to 10⁷ since query is O(1).
Trap: The operation must be idempotent (f(x, x) = x). Do not use Sparse Table for Range Sum queries (use Prefix Sums or BIT instead).
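The idempotency requirement is easy to see from the two overlapping blocks the O(1) query uses (helper names here are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// The O(1) sparse-table query covers [l,r] with two length-2^k blocks that
// may overlap. Overlap is harmless for min (min(x,x) = x) but double-counts
// for sum — which is why sum needs prefix sums or a BIT instead.
int overlappedMin(const vector<int>& a, int l, int r){
    int k = (int)log2(r - l + 1);
    int left  = *min_element(a.begin()+l, a.begin()+l+(1<<k));
    int right = *min_element(a.begin()+r-(1<<k)+1, a.begin()+r+1);
    return min(left, right);   // correct despite the overlap
}
long long overlappedSum(const vector<int>& a, int l, int r){   // WRONG on purpose
    int k = (int)log2(r - l + 1);
    long long left  = accumulate(a.begin()+l, a.begin()+l+(1<<k), 0LL);
    long long right = accumulate(a.begin()+r-(1<<k)+1, a.begin()+r+1, 0LL);
    return left + right;       // overlapped elements counted twice
}
```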

// O(n log n) build, O(1) query — only for IDEMPOTENT operations (min, max, gcd, AND, OR)
// NOT for sum (use BIT/SegTree for sum)
struct SparseTable {
    int n;
    vvi table;   // table[k][i] = min of a[i..i+2^k-1]
    vi log2_;
    SparseTable(const vi& a) : n(sz(a)), table(20, vi(sz(a))), log2_(sz(a)+1) {
        log2_[1] = 0;
        for(int i=2;i<=n;i++) log2_[i] = log2_[i/2] + 1;
        table[0] = a;
        for(int k=1;(1<<k)<=n;k++)
            for(int i=0;i+(1<<k)<=n;i++)
                table[k][i] = min(table[k-1][i], table[k-1][i+(1<<(k-1))]);
    }
    // Range minimum query on [l, r] inclusive — O(1)
    int query(int l, int r){
        int k = log2_[r-l+1];
        return min(table[k][l], table[k][r-(1<<k)+1]);
    }
};

19.2   Lazy Propagation Segment Tree — Modular Version Must Have

// Generic segment tree with lazy propagation (range add, range query)
// SWAP the merge() function for sum/min/max/gcd problems
struct SegTree {
    int n;
    vector<ll> tree, lazy;
    SegTree(int n) : n(n), tree(4*n,0), lazy(4*n,0) {}
    SegTree(const vector<ll>& a) : n(sz(a)), tree(4*sz(a),0), lazy(4*sz(a),0) { build(a,1,0,n-1); }
    ll merge(ll L, ll R){ return L + R; }   // ← SWAP THIS: min(L,R) / max(L,R) / gcd(L,R)
    void build(const vector<ll>& a, int nd, int lo, int hi){
        if(lo == hi){ tree[nd] = a[lo]; return; }
        int mid = (lo+hi)/2;
        build(a, 2*nd, lo, mid); build(a, 2*nd+1, mid+1, hi);
        tree[nd] = merge(tree[2*nd], tree[2*nd+1]);
    }
    void push(int nd, int lo, int hi){
        if(!lazy[nd]) return;
        int mid = (lo+hi)/2;
        tree[2*nd]   += lazy[nd]*(mid-lo+1);   // ← Adjust for non-sum operations
        lazy[2*nd]   += lazy[nd];
        tree[2*nd+1] += lazy[nd]*(hi-mid);
        lazy[2*nd+1] += lazy[nd];
        lazy[nd] = 0;
    }
    void update(int nd, int lo, int hi, int l, int r, ll val){
        if(r < lo || hi < l) return;
        if(l <= lo && hi <= r){ tree[nd] += val*(hi-lo+1); lazy[nd] += val; return; }
        push(nd, lo, hi);
        int mid = (lo+hi)/2;
        update(2*nd, lo, mid, l, r, val);
        update(2*nd+1, mid+1, hi, l, r, val);
        tree[nd] = merge(tree[2*nd], tree[2*nd+1]);
    }
    ll query(int nd, int lo, int hi, int l, int r){
        if(r < lo || hi < l) return 0;   // ← identity: 0 for sum, INF for min, -INF for max
        if(l <= lo && hi <= r) return tree[nd];
        push(nd, lo, hi);
        int mid = (lo+hi)/2;
        return merge(query(2*nd, lo, mid, l, r), query(2*nd+1, mid+1, hi, l, r));
    }
    void update(int l, int r, ll v){ update(1, 0, n-1, l, r, v); }
    ll query(int l, int r){ return query(1, 0, n-1, l, r); }
    void pointSet(int i, ll v){ update(i, i, v); }   // NOTE: ADDS v at i (range-add tree); to assign, add (v - current value)
};

19.3   2D Fenwick Tree — Grid Range Sums Important

struct BIT2D {
    int n, m;
    vector<vector<ll>> tree;
    BIT2D(int n, int m) : n(n), m(m), tree(n+1, vector<ll>(m+1,0)) {}
    void update(int x, int y, ll delta){   // Point add — 1-indexed
        for(int i=x;i<=n;i+=i&-i)
            for(int j=y;j<=m;j+=j&-j)
                tree[i][j] += delta;
    }
    ll prefix(int x, int y){   // Sum of rectangle (1,1)..(x,y)
        ll s = 0;
        for(int i=x;i>0;i-=i&-i)
            for(int j=y;j>0;j-=j&-j)
                s += tree[i][j];
        return s;
    }
    // Sum of rectangle (r1,c1)..(r2,c2) — 1-indexed
    ll query(int r1, int c1, int r2, int c2){
        return prefix(r2,c2) - prefix(r1-1,c2) - prefix(r2,c1-1) + prefix(r1-1,c1-1);
    }
};

19.4   Merge Sort Tree — Kth Smallest in Range Advanced

// Segment tree where each node stores the sorted elements of its range
// Answers "how many elements in [l,r] are ≤ x?" in O(log²n)
struct MergeSortTree {
    int n;
    vector<vi> t;
    MergeSortTree(const vi& a) : n(sz(a)), t(4*sz(a)) { build(a,1,0,n-1); }
    void build(const vi& a, int nd, int lo, int hi){
        if(lo == hi){ t[nd] = {a[lo]}; return; }
        int mid = (lo+hi)/2;
        build(a, 2*nd, lo, mid); build(a, 2*nd+1, mid+1, hi);
        merge(all(t[2*nd]), all(t[2*nd+1]), back_inserter(t[nd]));
    }
    // Count elements <= x in [l,r]
    int countLE(int nd, int lo, int hi, int l, int r, int x){
        if(r < lo || hi < l) return 0;
        if(l <= lo && hi <= r) return upper_bound(all(t[nd]), x) - t[nd].begin();
        int mid = (lo+hi)/2;
        return countLE(2*nd, lo, mid, l, r, x) + countLE(2*nd+1, mid+1, hi, l, r, x);
    }
    int countLE(int l, int r, int x){ return countLE(1, 0, n-1, l, r, x); }
};
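To actually answer "k-th smallest in [l,r]" — the section title — binary search over the sorted values with countLE. A standalone sketch (same structure as above with the macros expanded; kthSmallest is an added helper, not part of the original template):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Merge sort tree plus a k-th smallest query: binary search the smallest
// value x with countLE(l, r, x) >= k. O(log³n) per query in this simple form.
struct MergeSortTree {
    int n; vector<vector<int>> t;
    MergeSortTree(const vector<int>& a) : n(a.size()), t(4*a.size()) { build(a,1,0,n-1); }
    void build(const vector<int>& a, int nd, int lo, int hi){
        if(lo == hi){ t[nd] = {a[lo]}; return; }
        int mid = (lo+hi)/2;
        build(a, 2*nd, lo, mid); build(a, 2*nd+1, mid+1, hi);
        merge(t[2*nd].begin(), t[2*nd].end(),
              t[2*nd+1].begin(), t[2*nd+1].end(), back_inserter(t[nd]));
    }
    int countLE(int nd, int lo, int hi, int l, int r, int x){
        if(r < lo || hi < l) return 0;
        if(l <= lo && hi <= r) return upper_bound(t[nd].begin(), t[nd].end(), x) - t[nd].begin();
        int mid = (lo+hi)/2;
        return countLE(2*nd, lo, mid, l, r, x) + countLE(2*nd+1, mid+1, hi, l, r, x);
    }
    int countLE(int l, int r, int x){ return countLE(1, 0, n-1, l, r, x); }
    // k-th smallest (1-indexed) in a[l..r]: binary search over the root's sorted values
    int kthSmallest(int l, int r, int k){
        int lo = 0, hi = (int)t[1].size() - 1;   // t[1] holds every value, sorted
        while(lo < hi){
            int mid = (lo+hi)/2;
            if(countLE(l, r, t[1][mid]) >= k) hi = mid; else lo = mid+1;
        }
        return t[1][lo];
    }
};
```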
Chapter 20 · Strings
String Mastery — Z, Manacher, Double Hashing & Suffix Arrays

20.1   Z-Algorithm — Cleaner Pattern Matching Must Have

Deployment Guide: Z-Algorithm

Trigger: String matching ("find all occurrences of P in T"), finding the shortest repeating period of a string, or prefix-suffix overlaps.
Constraints: N + M ≤ 10⁶. Linear O(N) time and space.
Trap: When concatenating P + "#" + T, ensure the separator character # is absolutely not present in either string. Without it, the Z-array might exceed length(P) and give false positives.

// z[i] = length of the longest substring starting at s[i] that is also a PREFIX of s
// Convention here: z[0] = n (some implementations use z[0] = 0 instead)
vi zFunction(const string& s){
    int n = sz(s);
    vi z(n, 0); z[0] = n;
    for(int i=1,l=0,r=0;i<n;i++){
        if(i < r) z[i] = min(r-i, z[i-l]);
        while(i+z[i] < n && s[z[i]] == s[i+z[i]]) z[i]++;
        if(i+z[i] > r){ l = i; r = i+z[i]; }
    }
    return z;
}

// Find all occurrences of pattern p in text t — O(n+m)
vi findOccurrences(const string& t, const string& p){
    string s = p + "#" + t;   // Separator must not occur in either string
    vi z = zFunction(s);
    vi res;
    int m = sz(p);
    for(int i=m+1;i<sz(s);i++)
        if(z[i] == m) res.pb(i-m-1);   // Match at position i-(m+1) in t
    return res;
}

// Minimum period of a string
int minPeriod(const string& s){
    vi z = zFunction(s);
    for(int p=1;p<sz(s);p++)
        if(sz(s)%p == 0 && z[p] == sz(s)-p) return p;
    return sz(s);
}

20.2   Manacher's Algorithm — All Palindromes in O(n) Must Have

// d1[i] = radius of the longest ODD palindrome centered at i
// d2[i] = radius of the longest EVEN palindrome centered between i-1 and i
pair<vi,vi> manacher(const string& s){
    int n = sz(s);
    vi d1(n), d2(n);
    // Odd palindromes
    for(int i=0,l=0,r=-1;i<n;i++){
        int k = i>r ? 1 : min(d1[l+r-i], r-i+1);
        while(i-k >= 0 && i+k < n && s[i-k] == s[i+k]) k++;
        d1[i] = k--;
        if(i+k > r){ l = i-k; r = i+k; }
    }
    // Even palindromes
    for(int i=0,l=0,r=-1;i<n;i++){
        int k = i>r ? 0 : min(d2[l+r-i+1], r-i+1);
        while(i-k-1 >= 0 && i+k < n && s[i-k-1] == s[i+k]) k++;
        d2[i] = k--;
        if(i+k > r){ l = i-k-1; r = i+k; }
    }
    return {d1, d2};
}
// isPalindrome(l,r) for odd length: d1[(l+r)/2] > (r-l)/2
// Count total palindromic substrings: sum(d1[i]) + sum(d2[i])
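The counting formula in the final comment, exercised end-to-end (a standalone sketch — manacher() is re-declared here, without the template macros, so the snippet compiles on its own):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count palindromic substrings of s as sum(d1) + sum(d2), where d1/d2 are the
// odd/even palindrome radii from Manacher's algorithm: each center of radius k
// contributes exactly k palindromes.
pair<vector<int>,vector<int>> manacher(const string& s){
    int n = s.size();
    vector<int> d1(n), d2(n);
    for(int i=0,l=0,r=-1;i<n;i++){                     // odd palindromes
        int k = i>r ? 1 : min(d1[l+r-i], r-i+1);
        while(i-k >= 0 && i+k < n && s[i-k] == s[i+k]) k++;
        d1[i] = k--;
        if(i+k > r){ l = i-k; r = i+k; }
    }
    for(int i=0,l=0,r=-1;i<n;i++){                     // even palindromes
        int k = i>r ? 0 : min(d2[l+r-i+1], r-i+1);
        while(i-k-1 >= 0 && i+k < n && s[i-k-1] == s[i+k]) k++;
        d2[i] = k--;
        if(i+k > r){ l = i-k-1; r = i+k; }
    }
    return {d1, d2};
}

long long countPalindromicSubstrings(const string& s){
    auto [d1, d2] = manacher(s);
    return accumulate(d1.begin(), d1.end(), 0LL)
         + accumulate(d2.begin(), d2.end(), 0LL);
}
```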

20.3   Polynomial Double Hashing — Anti-Hack Template Must Have

// Double hashing: two independent hashes → collision probability ~ 1/(MOD1·MOD2)
// Randomize the bases to survive anti-hash hacks on Codeforces
struct StringHash {
    // Integer literals on purpose: `1e9+7` is a double and is not a valid
    // in-class constant initializer for an integral static member
    static constexpr ll MOD1 = 1000000007, MOD2 = 1000000009;
    static constexpr ll B1 = 131, B2 = 137;
    // Or randomize: mt19937 rng(chrono::steady_clock::now().time_since_epoch().count())
    vector<ll> h1, h2, p1, p2;
    int n;
    StringHash(const string& s)
        : h1(sz(s)+1,0), h2(sz(s)+1,0), p1(sz(s)+1,1), p2(sz(s)+1,1), n(sz(s)) {
        for(int i=0;i<n;i++){
            h1[i+1] = (h1[i]*B1 + (s[i]-'a'+1)) % MOD1;
            h2[i+1] = (h2[i]*B2 + (s[i]-'a'+1)) % MOD2;
            p1[i+1] = p1[i]*B1 % MOD1;
            p2[i+1] = p2[i]*B2 % MOD2;
        }
    }
    // Hash of s[l..r] — O(1)
    pair<ll,ll> get(int l, int r){
        ll v1 = (h1[r+1] - h1[l]*p1[r-l+1]%MOD1 + MOD1*2) % MOD1;
        ll v2 = (h2[r+1] - h2[l]*p2[r-l+1]%MOD2 + MOD2*2) % MOD2;
        return {v1, v2};
    }
    bool equal(int l1, int r1, int l2, int r2){ return get(l1,r1) == get(l2,r2); }
    // Binary search for the longest common prefix of s[a..] and s[b..]
    int lcp(int a, int b){
        int lo = 0, hi = min(n-a, n-b);
        while(lo < hi){
            int mid = (lo+hi+1)/2;
            equal(a, a+mid-1, b, b+mid-1) ? lo = mid : hi = mid-1;
        }
        return lo;
    }
};

20.4   Suffix Array + LCP Array — O(n log² n) Advanced

// Prefix doubling with std::sort — O(n log² n), plenty for n ≤ 2·10⁵
// (radix-sorted doubling gives O(n log n); SA-IS gives O(n))
// sa[i] = starting position of the i-th lexicographically smallest suffix
// lcp[i] = LCP of the suffixes at sa[i-1] and sa[i]
vi buildSA(const string& s){
    int n = sz(s);
    vi sa(n), rank_(n), tmp(n);
    iota(all(sa), 0);
    for(int i=0;i<n;i++) rank_[i] = s[i];
    for(int k=1;k<n;k<<=1){
        auto cmp = [&](int a, int b){
            if(rank_[a] != rank_[b]) return rank_[a] < rank_[b];
            int ra = a+k<n ? rank_[a+k] : -1;
            int rb = b+k<n ? rank_[b+k] : -1;
            return ra < rb;
        };
        sort(all(sa), cmp);
        tmp[sa[0]] = 0;
        for(int i=1;i<n;i++) tmp[sa[i]] = tmp[sa[i-1]] + (cmp(sa[i-1], sa[i]) ? 1 : 0);
        rank_ = tmp;
        if(rank_[sa[n-1]] == n-1) break;   // All ranks distinct — done early
    }
    return sa;
}
vi buildLCP(const string& s, const vi& sa){   // Kasai's algorithm, O(n)
    int n = sz(s);
    vi rank_(n), lcp(n, 0);
    for(int i=0;i<n;i++) rank_[sa[i]] = i;
    for(int i=0,h=0;i<n;i++){
        if(rank_[i] > 0){
            int j = sa[rank_[i]-1];
            while(i+h < n && j+h < n && s[i+h] == s[j+h]) h++;
            lcp[rank_[i]] = h;
            if(h) h--;
        }
    }
    return lcp;
}
// Number of distinct substrings = n*(n+1)/2 - sum(lcp)
// All occurrences of a pattern p: binary search on the SA
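The distinct-substring formula in the final comment, exercised end-to-end (a standalone copy of the two builders with the template macros expanded; distinctSubstrings is an added helper):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Suffix array by prefix doubling + Kasai LCP, then
// distinct substrings = n(n+1)/2 - sum(lcp).
vector<int> buildSA(const string& s){
    int n = s.size();
    vector<int> sa(n), rnk(n), tmp(n);
    iota(sa.begin(), sa.end(), 0);
    for(int i=0;i<n;i++) rnk[i] = s[i];
    for(int k=1;k<n;k<<=1){
        auto cmp = [&](int a, int b){
            if(rnk[a] != rnk[b]) return rnk[a] < rnk[b];
            int ra = a+k<n ? rnk[a+k] : -1, rb = b+k<n ? rnk[b+k] : -1;
            return ra < rb;
        };
        sort(sa.begin(), sa.end(), cmp);
        tmp[sa[0]] = 0;
        for(int i=1;i<n;i++) tmp[sa[i]] = tmp[sa[i-1]] + (cmp(sa[i-1], sa[i]) ? 1 : 0);
        rnk = tmp;
        if(rnk[sa[n-1]] == n-1) break;
    }
    return sa;
}
vector<int> buildLCP(const string& s, const vector<int>& sa){
    int n = s.size();
    vector<int> rnk(n), lcp(n, 0);
    for(int i=0;i<n;i++) rnk[sa[i]] = i;
    for(int i=0,h=0;i<n;i++){
        if(rnk[i] > 0){
            int j = sa[rnk[i]-1];
            while(i+h < n && j+h < n && s[i+h] == s[j+h]) h++;
            lcp[rnk[i]] = h;
            if(h) h--;
        }
    }
    return lcp;
}
long long distinctSubstrings(const string& s){
    long long n = s.size();
    auto sa = buildSA(s);
    auto lcp = buildLCP(s, sa);
    // every suffix contributes its length, minus what it shares with its SA neighbor
    return n*(n+1)/2 - accumulate(lcp.begin(), lcp.end(), 0LL);
}
```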
Chapter 21 · Offline Algorithms
Coordinate Compression & Mo's — Offline Range Mastery

21.1   Coordinate Compression — Shrink 10⁹ to 10⁵ Must Have

// Problem: values up to 10⁹, but only n ≤ 10⁵ distinct values
// Compress: map each value to its 0-indexed rank
struct Compress {
    vi vals;
    void add(int x){ vals.pb(x); }
    void build(){ sort(all(vals)); vals.erase(unique(all(vals)), vals.end()); }
    int get(int x){   // O(log n) — compressed index of x
        return lower_bound(all(vals), x) - vals.begin();
    }
    int size(){ return sz(vals); }
};

// One-liner compress for a single array:
vi compress(const vi& a){
    vi sorted = a;
    sort(all(sorted));
    sorted.erase(unique(all(sorted)), sorted.end());
    vi res(sz(a));
    for(int i=0;i<sz(a);i++) res[i] = lower_bound(all(sorted), a[i]) - sorted.begin();
    return res;
}

21.2   Mo's Algorithm — O((n+q)√n) Offline Queries Must Have

// Answer range queries [l,r] offline by sorting them cleverly, then
// expanding/shrinking a window [curL, curR] to match each query.
// Cost: O((n+q)·sqrt(n)) — each pointer moves O(n·sqrt(n)) in total
int blockSize;
struct Query {
    int l, r, idx;
    bool operator<(const Query& o) const {
        if(l/blockSize != o.l/blockSize) return l/blockSize < o.l/blockSize;
        return (l/blockSize & 1) ? r > o.r : r < o.r;   // Alternating sort = Mo's optimization
    }
};

// State — modify these for your problem:
int cnt[MAXV];       // frequency of each element
int distinct = 0;    // current answer (here: count of distinct elements)
vi a;                // input array (coordinate-compressed)
void add(int x){ if(cnt[x]++ == 0) distinct++; }
void rem(int x){ if(--cnt[x] == 0) distinct--; }

vi mosSolve(int n, vector<Query>& qs){
    blockSize = max(1, (int)sqrt(n));
    sort(all(qs));
    vi ans(sz(qs));
    int curL = 0, curR = -1;
    for(auto& q : qs){
        while(curR < q.r) add(a[++curR]);
        while(curL > q.l) add(a[--curL]);
        while(curR > q.r) rem(a[curR--]);
        while(curL < q.l) rem(a[curL++]);
        ans[q.idx] = distinct;
    }
    return ans;
}
// Mo's on Trees — flatten with an Euler tour, then run Mo's on the flat array
// Mo's with updates — block size n^(2/3), total O(n^(5/3))
Chapter 22 · Flows
Network Flow & Matching — Dinic's, Bipartite & Min-Cost Flow

22.1   Dinic's Max Flow — O(V²E) General, O(E√V) for Unit Graphs Must Have

Deployment Guide: Dinic's

Trigger: "Find the maximum bipartite matching", "Minimum number of cuts to separate A and B", "Assign items to buckets with capacities". If constraints allow V, E ≤ 10³, think Flow.
Constraints: In practice, Dinic's is insanely fast. It often crushes graphs with V=10⁴, E=10⁵ easily, especially on bipartite graphs (where it runs in O(E√V)).
Trap: Building the graph correctly is 99% of the problem. Don't forget to add reverse edges with 0 capacity for directed graphs. Remember that Min-Cut edges are those going from the BFS-reachable component to the non-reachable component in the residual graph.

struct Dinic {
    struct Edge { int to, rev; ll cap; };
    vector<vector<Edge>> g;
    vi level, iter_;
    int n;
    Dinic(int n) : g(n), level(n), iter_(n), n(n) {}
    void addEdge(int u, int v, ll cap, bool directed = true){
        g[u].push_back({v, sz(g[v]), cap});
        g[v].push_back({u, sz(g[u])-1, directed ? 0 : cap});
    }
    bool bfs(int s, int t){
        fill(all(level), -1);
        queue<int> q;
        level[s] = 0; q.push(s);
        while(!q.empty()){
            int u = q.front(); q.pop();
            for(auto& e : g[u]) if(e.cap > 0 && level[e.to] < 0){
                level[e.to] = level[u] + 1;
                q.push(e.to);
            }
        }
        return level[t] >= 0;
    }
    ll dfs(int u, int t, ll f){
        if(u == t) return f;
        for(int& i = iter_[u]; i < sz(g[u]); i++){
            Edge& e = g[u][i];
            if(e.cap > 0 && level[e.to] == level[u]+1){
                ll d = dfs(e.to, t, min(f, e.cap));
                if(d > 0){ e.cap -= d; g[e.to][e.rev].cap += d; return d; }
            }
        }
        return 0;
    }
    ll maxflow(int s, int t){
        ll flow = 0;
        while(bfs(s, t)){
            fill(all(iter_), 0);
            ll d;
            while((d = dfs(s, t, INF)) > 0) flow += d;
        }
        return flow;
    }
    // Min cut: after maxflow, S-side = nodes reachable from s in the residual graph (level[v] != -1)
};
Flow Problem Recognition

Max flow = min cut (the max-flow min-cut theorem). Bipartite matching = max flow with a unit-capacity source and sink; in bipartite graphs, König's theorem then gives max matching = min vertex cover. Assignment problem = min-cost max-flow. Circulation problems = flow with lower bounds. If you see "maximum number of non-conflicting paths" or "minimum number of elements to remove to disconnect" — it's flow.
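The bipartite-matching reduction, wired up end-to-end with a compact Dinic (same structure as 22.1; the graph in the usage below is a made-up toy instance, and bipartiteMatching is an illustrative helper name):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// Bipartite matching as max flow: source → each left vertex (cap 1),
// left → right along allowed pairs (cap 1), each right vertex → sink (cap 1).
struct Dinic {
    struct Edge { int to, rev; ll cap; };
    vector<vector<Edge>> g; vector<int> level, it;
    Dinic(int n) : g(n), level(n), it(n) {}
    void addEdge(int u, int v, ll c){
        g[u].push_back({v, (int)g[v].size(), c});
        g[v].push_back({u, (int)g[u].size()-1, 0});   // reverse edge, cap 0
    }
    bool bfs(int s, int t){
        fill(level.begin(), level.end(), -1);
        queue<int> q; level[s] = 0; q.push(s);
        while(!q.empty()){
            int u = q.front(); q.pop();
            for(auto& e : g[u]) if(e.cap > 0 && level[e.to] < 0){
                level[e.to] = level[u]+1; q.push(e.to);
            }
        }
        return level[t] >= 0;
    }
    ll dfs(int u, int t, ll f){
        if(u == t) return f;
        for(int& i = it[u]; i < (int)g[u].size(); i++){
            auto& e = g[u][i];
            if(e.cap > 0 && level[e.to] == level[u]+1){
                ll d = dfs(e.to, t, min(f, e.cap));
                if(d > 0){ e.cap -= d; g[e.to][e.rev].cap += d; return d; }
            }
        }
        return 0;
    }
    ll maxflow(int s, int t){
        ll f = 0;
        while(bfs(s, t)){
            fill(it.begin(), it.end(), 0);
            ll d;
            while((d = dfs(s, t, LLONG_MAX)) > 0) f += d;
        }
        return f;
    }
};

// nL left vertices, nR right vertices, pairs = allowed (left, right) edges
ll bipartiteMatching(int nL, int nR, vector<pair<int,int>>& pairs){
    int S = nL + nR, T = S + 1;
    Dinic din(nL + nR + 2);
    for(int u=0;u<nL;u++) din.addEdge(S, u, 1);
    for(int v=0;v<nR;v++) din.addEdge(nL + v, T, 1);
    for(auto& [u,v] : pairs) din.addEdge(u, nL + v, 1);
    return din.maxflow(S, T);
}
```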

22.2   Bipartite Matching — Hopcroft-Karp O(E√V) Must Have

struct HopcroftKarp {
    int n, m;
    vvi g;   // g[u] = neighbors of left-vertex u in the right partition
    vi matchL, matchR, dist_;
    HopcroftKarp(int n, int m) : n(n), m(m), g(n), matchL(n,-1), matchR(m,-1), dist_(n) {}
    void addEdge(int u, int v){ g[u].pb(v); }
    bool bfs(){
        queue<int> q;
        for(int u=0;u<n;u++){
            if(matchL[u] == -1){ dist_[u] = 0; q.push(u); }
            else dist_[u] = INF32;
        }
        bool found = false;
        while(!q.empty()){
            int u = q.front(); q.pop();
            for(int v : g[u]){
                int w = matchR[v];
                if(w == -1) found = true;
                else if(dist_[w] == INF32){ dist_[w] = dist_[u]+1; q.push(w); }
            }
        }
        return found;
    }
    bool dfs(int u){
        for(int v : g[u]){
            int w = matchR[v];
            if(w == -1 || (dist_[w] == dist_[u]+1 && dfs(w))){
                matchL[u] = v; matchR[v] = u;
                return true;
            }
        }
        dist_[u] = INF32;
        return false;
    }
    int maxMatching(){
        int res = 0;
        while(bfs())
            for(int u=0;u<n;u++)
                if(matchL[u] == -1) res += dfs(u);
        return res;
    }
};

22.3   Min-Cost Max-Flow — SPFA based O(VEf) Advanced

struct MCMF {
    struct Edge { int to, rev; ll cap, cost; };
    vector<vector<Edge>> g;
    int n;
    MCMF(int n) : g(n), n(n) {}
    void addEdge(int u, int v, ll cap, ll cost){
        g[u].push_back({v, sz(g[v]), cap, cost});
        g[v].push_back({u, sz(g[u])-1, 0, -cost});
    }
    pair<ll,ll> flow(int s, int t){
        ll totalFlow = 0, totalCost = 0;
        while(true){
            // SPFA: cheapest augmenting path in the residual graph
            vector<ll> dist(n, INF);
            vi prevv(n), preve(n);
            vector<bool> inQ(n);
            deque<int> q;
            dist[s] = 0; q.push_back(s); inQ[s] = true;
            while(!q.empty()){
                int u = q.front(); q.pop_front(); inQ[u] = false;
                for(int i=0;i<sz(g[u]);i++){
                    auto& e = g[u][i];
                    if(e.cap > 0 && dist[u] + e.cost < dist[e.to]){
                        dist[e.to] = dist[u] + e.cost;
                        prevv[e.to] = u; preve[e.to] = i;
                        if(!inQ[e.to]){ inQ[e.to] = true; q.push_back(e.to); }
                    }
                }
            }
            if(dist[t] == INF) break;
            ll d = INF;
            for(int v=t;v!=s;v=prevv[v]) d = min(d, g[prevv[v]][preve[v]].cap);
            for(int v=t;v!=s;v=prevv[v]){
                g[prevv[v]][preve[v]].cap -= d;
                g[v][g[prevv[v]][preve[v]].rev].cap += d;
            }
            totalFlow += d;
            totalCost += d * dist[t];
        }
        return {totalFlow, totalCost};
    }
};
Chapter 23 · Geometry
Computational Geometry — Points, Lines, Convex Hull & Intersections

23.1   Geometry Toolkit — The Complete Template Must Have

using ld = long double;
const ld EPS = 1e-9;
int sign(ld x){ return (x>EPS) - (x<-EPS); }

struct P {
    ld x, y;
    P(ld x=0, ld y=0) : x(x), y(y) {}
    P operator+(P o) const { return {x+o.x, y+o.y}; }
    P operator-(P o) const { return {x-o.x, y-o.y}; }
    P operator*(ld t) const { return {x*t, y*t}; }
    P operator/(ld t) const { return {x/t, y/t}; }
    ld dot(P o)   const { return x*o.x + y*o.y; }
    ld cross(P o) const { return x*o.y - y*o.x; }
    ld norm() const { return x*x + y*y; }
    ld abs()  const { return sqrtl(norm()); }
    P perp()  const { return {-y, x}; }   // Rotate 90° CCW
    P unit()  const { return *this/abs(); }
    bool operator<(P o)  const { return x < o.x-EPS || (sign(x-o.x)==0 && y < o.y-EPS); }
    bool operator==(P o) const { return sign(x-o.x)==0 && sign(y-o.y)==0; }
};

// Signed area of triangle (a,b,c), doubled — positive if CCW
ld area2(P a, P b, P c){ return (b-a).cross(c-a); }
// Orientation: 1 = CCW, -1 = CW, 0 = collinear
int orient(P a, P b, P c){ return sign(area2(a,b,c)); }

// Segment intersection test
bool onSegment(P a, P b, P p){
    return sign((b-a).cross(p-a)) == 0
        && sign((p-a).dot(b-a)) >= 0
        && sign((p-b).dot(a-b)) >= 0;
}
bool segmentsIntersect(P a, P b, P c, P d){
    int d1=orient(c,d,a), d2=orient(c,d,b), d3=orient(a,b,c), d4=orient(a,b,d);
    if(((d1>0&&d2<0)||(d1<0&&d2>0)) && ((d3>0&&d4<0)||(d3<0&&d4>0))) return true;
    if(!d1 && onSegment(c,d,a)) return true;
    if(!d2 && onSegment(c,d,b)) return true;
    if(!d3 && onSegment(a,b,c)) return true;
    if(!d4 && onSegment(a,b,d)) return true;
    return false;
}

// Convex Hull — monotone chain (Andrew's algorithm), O(n log n)
vector<P> convexHull(vector<P> pts){
    int n = sz(pts);
    if(n < 2) return pts;
    sort(all(pts));
    vector<P> hull;
    for(int phase=0;phase<2;phase++){   // Lower hull, then upper hull
        int start = sz(hull);
        for(auto& p : pts){
            while(sz(hull) >= start+2 && orient(hull[sz(hull)-2], hull.back(), p) <= 0)
                hull.pop_back();
            hull.pb(p);
        }
        hull.pop_back();
        reverse(all(pts));
    }
    return hull;
}

// Polygon area (signed, positive if CCW)
ld polyArea(vector<P>& poly){
    ld area = 0; int n = sz(poly);
    for(int i=0;i<n;i++) area += poly[i].cross(poly[(i+1)%n]);
    return area/2;
}

// Point in polygon (ray casting) — O(n), works for non-convex polygons
bool pointInPolygon(vector<P>& poly, P p){
    int n = sz(poly), cnt = 0;
    for(int i=0;i<n;i++){
        P a = poly[i], b = poly[(i+1)%n];
        if(onSegment(a,b,p)) return true;   // On the boundary
        if((a.y<=p.y && p.y<b.y) || (b.y<=p.y && p.y<a.y))
            if(sign((b-a).cross(p-a))*sign(b.y-a.y) > 0) cnt++;
    }
    return cnt & 1;
}

// Rotating Calipers — diameter of a convex hull
ld convexDiameter(vector<P> hull){
    int n = sz(hull);
    if(n == 1) return 0;
    ld res = 0;
    for(int i=0,j=1;i<n;i++){
        while((hull[(i+1)%n]-hull[i]).cross(hull[(j+1)%n]-hull[j]) > 0) j = (j+1)%n;
        res = max(res, max((hull[i]-hull[j]).abs(), (hull[(i+1)%n]-hull[j]).abs()));
    }
    return res;
}
Chapter 24 · Advanced Graphs
Advanced Graph Algorithms — SCC, Bridges, Articulation & Convex Hull Trick

24.1   Strongly Connected Components — Tarjan's O(V+E) Must Have

struct Tarjan {
    int n, timer_ = 0, numSCC = 0;
    vector<vi> g;
    vi disc, low, comp;
    vector<bool> onStack;
    stack<int> st;
    Tarjan(int n) : n(n), g(n), disc(n,-1), low(n), comp(n,-1), onStack(n,false) {}
    void addEdge(int u, int v){ g[u].pb(v); }
    void dfs(int u){
        disc[u] = low[u] = timer_++;
        st.push(u); onStack[u] = true;
        for(int v : g[u]){
            if(disc[v] == -1){ dfs(v); low[u] = min(low[u], low[v]); }
            else if(onStack[v]) low[u] = min(low[u], disc[v]);
        }
        if(low[u] == disc[u]){   // u is the root of an SCC
            while(true){
                int v = st.top(); st.pop();
                onStack[v] = false;
                comp[v] = numSCC;
                if(v == u) break;
            }
            numSCC++;
        }
    }
    void solve(){ for(int i=0;i<n;i++) if(disc[i] == -1) dfs(i); }
    // comp[v] = SCC id of vertex v (ids come out in reverse topological order)
    // Condensation DAG: for each edge (u,v) with comp[u] != comp[v], add edge comp[u]→comp[v]
};

24.2   Bridges & Articulation Points — O(V+E) Must Have

int disc_b[MAXV],low_b[MAXV],par_b[MAXV]; bool isAP[MAXV]; int timer_b=0; vector<pair<int,int>> bridges; void dfsBridge(int u,int p){ disc_b[u]=low_b[u]=timer_b++; int children=0; for(int v:adj[u]){ if(disc_b[v]==-1){ children++; par_b[v]=u; dfsBridge(v,u); low_b[u]=min(low_b[u],low_b[v]); // Bridge condition: no back edge from subtree of v can reach above u if(low_b[v]>disc_b[u]) bridges.pb({u,v}); // Articulation point condition if(p==-1&&children>1) isAP[u]=true; if(p!=-1&&low_b[v]>=disc_b[u]) isAP[u]=true; } else if(v!=p){ low_b[u]=min(low_b[u],disc_b[v]); } } } void findBridges(int V){ fill(disc_b,disc_b+V,-1); for(int i=0;i<V;i++) if(disc_b[i]==-1) dfsBridge(i,-1); }

24.3   Convex Hull Trick — O(n) DP Optimization Expert

// When dp[i] = min over j&lt;i of dp[j] + m(j)*x(i) + c(j) // Lines y = m*x + b — query minimum at each x // Requires: queries in sorted order (monotone deque), or use Li Chao tree for arbitrary query order struct Line { ll m, b; ll eval(ll x){ return m*x+b; } }; struct CHT { deque<Line> hull; bool bad(Line l1,Line l2,Line l3){ // l2 is never minimum if intersection of l1,l3 is to the left of l1,l2 return (__int128)(l3.b-l1.b)*(l1.m-l2.m)<=(__int128)(l2.b-l1.b)*(l1.m-l3.m); } void addLine(ll m,ll b){ // Add line y=m*x+b; slopes must be DECREASING for min Line l={m,b}; while(sz(hull)>=2&&bad(hull[sz(hull)-2],hull[sz(hull)-1],l)) hull.pop_back(); hull.pb(l); } ll query(ll x){ // x must be INCREASING for O(1) amortized while(sz(hull)>1&&hull[0].eval(x)>=hull[1].eval(x)) hull.pop_front(); return hull[0].eval(x); } }; // Li Chao Tree — arbitrary query order, O(n log RANGE) struct LiChao { struct Node{ Line line; int l=-1,r=-1; }; vector<Node> t; ll lo,hi; LiChao(ll lo,ll hi):lo(lo),hi(hi){ t.push_back({{0,LLONG_MAX}}); } void addLine(int nd,ll l,ll r,Line line){ ll mid=(l+r)/2; bool leftBetter=line.eval(l)<t[nd].line.eval(l); bool midBetter =line.eval(mid)<t[nd].line.eval(mid); if(midBetter) swap(t[nd].line,line); if(l==r) return; if(leftBetter!=midBetter){ if(t[nd].l==-1){t[nd].l=sz(t);t.push_back({{0,LLONG_MAX}});} addLine(t[nd].l,l,mid,line); } else { if(t[nd].r==-1){t[nd].r=sz(t);t.push_back({{0,LLONG_MAX}});} addLine(t[nd].r,mid+1,r,line); } } void addLine(ll m,ll b){ addLine(0,lo,hi,{m,b}); } ll query(int nd,ll l,ll r,ll x){ ll res=t[nd].line.eval(x); if(l==r) return res; ll mid=(l+r)/2; if(x<=mid&&t[nd].l!=-1) res=min(res,query(t[nd].l,l,mid,x)); if(x>mid &&t[nd].r!=-1) res=min(res,query(t[nd].r,mid+1,r,x)); return res; } ll query(ll x){ return query(0,lo,hi,x); } };
Chapter 25 · Advanced DP
Digit DP & Advanced DP — SOS, Profile, Divide & Conquer Opt

25.1   Digit DP — Count Numbers with Property in [L,R] Must Have

Digit DP counts numbers in a range satisfying a digit-based constraint (sum of digits = k, no consecutive equal digits, etc.) by processing digits one by one, tracking a state and a "tight" flag.

// Template: count numbers in [0, n] with digit sum = target string num; // Digits of the upper bound n as string int target; int memo[20][200][2][2]; // [pos][sum][tight][started] // pos=digit position, sum=running digit sum, tight=still <= n's digits, started=no leading zeros yet int dp(int pos, int sum, bool tight, bool started){ if(pos == sz(num)) return started ? (sum==target) : (target==0); // all-leading-zeros path = the number 0, digit sum 0 int& ret = memo[pos][sum][tight][started]; if(ret != -1) return ret; ret = 0; int limit = tight ? (num[pos]-'0') : 9; for(int d=0; d<=limit; d++){ if(sum+d > target) break; // Prune: already exceeded target bool newTight = tight && (d==limit); bool newStarted = started || (d>0); ret += dp(pos+1, sum+d, newTight, newStarted); // sum stays 0 until the first nonzero digit } return ret; } ll digitDP(ll n, int tgt){ num = to_string(n); target = tgt; memset(memo, -1, sizeof(memo)); return dp(0, 0, true, false); } // Answer for [L,R]: digitDP(R, target) - digitDP(L-1, target) (handle L=0 separately) // State customization examples: // No two adjacent digits equal: track prev digit in state // Divisible by k: track current_number % k in state // Count of 1s, 0s, etc.: track count in state

25.2   SOS DP — Sum over Subsets in O(n·2ⁿ) Must Have

// SOS DP: dp[mask] = sum of f[sub] for all submasks sub of mask // Used in bitmask DP problems where you need sum over all subsets int n = 20; // Number of bits vector<ll> dp(1<<n); // dp[mask] = initially f[mask] // Build SOS: O(n * 2^n) for(int i=0;i<n;i++) for(int mask=0;mask<(1<<n);mask++) if(mask&(1<<i)) dp[mask] += dp[mask^(1<<i)]; // Add contribution from subset missing bit i // After: dp[mask] = sum of original dp[sub] for all sub ⊆ mask // Superset sum (dual): sum over all supersets for(int i=0;i<n;i++) for(int mask=(1<<n)-1;mask>=0;mask--) if(!(mask&(1<<i))) dp[mask] += dp[mask|(1<<i)];

25.3   Divide & Conquer DP Optimization — O(n log n) Expert

// When: dp[i][j] = min over k&lt;j of dp[i-1][k] + cost(k,j) // AND: opt[i][j] <= opt[i][j+1] (the optimum is monotone in j; implied by the quadrangle/Monge inequality on cost) // Reduces each O(n²) layer to O(n log n): call solve over the full j range once per layer vector<ll> cur, prev_; auto cost = [&](int l,int r) -> ll { /* compute cost(l,r) here */ return 0; }; void solve(int lo,int hi,int optL,int optR){ if(lo>hi) return; int mid=(lo+hi)/2, opt=optL; cur[mid] = INF; for(int k=optL;k<=min(mid-1,optR);k++){ ll val = prev_[k] + cost(k,mid); if(val < cur[mid]){ cur[mid]=val; opt=k; } } solve(lo,mid-1,optL,opt); solve(mid+1,hi,opt,optR); }

25.4   Knuth's Optimization — O(n²) Interval DP Expert

// When: dp[i][j] = min over i<=k&lt;j of dp[i][k] + dp[k+1][j] + cost(i,j) // AND: cost satisfies quadrangle inequality: cost(a,c)+cost(b,d)<=cost(a,d)+cost(b,c) for a<=b<=c<=d // AND: cost is monotone on inclusion: cost(b,c) <= cost(a,d) for a<=b<=c<=d // opt[i][j] = optimal split point for dp[i][j] // Claim: opt[i][j-1] <= opt[i][j] <= opt[i+1][j] → reduces O(n³) to O(n²) int opt[MAXV][MAXV]; ll dp_k[MAXV][MAXV]; void knuth(int n){ for(int i=0;i<n;i++) opt[i][i]=i; for(int len=2;len<=n;len++){ for(int i=0;i+len-1<n;i++){ int j=i+len-1; dp_k[i][j]=INF; for(int k=opt[i][j-1];k<=min(opt[i+1][j],j-1);k++){ // cap at j-1 so dp_k[k+1][j] stays in range ll val = dp_k[i][k]+dp_k[k+1][j]+cost(i,j); if(val<dp_k[i][j]){ dp_k[i][j]=val; opt[i][j]=k; } } } } }
Chapter 26 · Math & Game Theory
Game Theory & Math — Sprague-Grundy, Nim & Combinatorics

26.1   Sprague-Grundy Theorem & Nim Important

// Nim: given piles of stones, players alternate removing any amount from one pile // First player LOSES iff XOR of all pile sizes = 0 (Bouton's theorem) bool nimWins(vi& piles){ int xorSum=0; for(int p:piles) xorSum^=p; return xorSum!=0; // true = first player wins } // Sprague-Grundy: compute Grundy number (mex) for each game state // mex = minimum excludant = smallest non-negative integer not in set int mex(set<int>& s){ int m=0; while(s.count(m)) m++; return m; } // Precompute Grundy values for game states // G(state) = mex of {G(state') : state' reachable from state} // Sum game (multiple independent games): G = XOR of all Grundy values // G(state)==0 → current player LOSES (P-position) // G(state)!=0 → current player WINS (N-position) // Staircase Nim — only odd-numbered steps matter: // piles[i] = stones on step i, with step 0 the ground; XOR the piles on steps 1,3,5,... bool staircaseNimWins(vi& piles){ int xorSum=0; for(int i=0;i<sz(piles);i++) if(i&1) xorSum^=piles[i]; return xorSum!=0; }

26.2   Advanced Math — FFT, Matrix Expo, Catalan Expert

// Matrix Exponentiation — compute A^n in O(k³ log n) // Use: linear recurrences, counting length-n paths in a graph using Matrix = vector<vector<ll>>; Matrix multiply(Matrix& A, Matrix& B, ll mod){ int n=sz(A); Matrix C(n, vector<ll>(n,0)); for(int i=0;i<n;i++) for(int k=0;k<n;k++) if(A[i][k]) for(int j=0;j<n;j++) C[i][j]=(C[i][j]+A[i][k]*B[k][j])%mod; return C; } Matrix matpow(Matrix A, ll p, ll mod){ int n=sz(A); Matrix R(n,vector<ll>(n,0)); for(int i=0;i<n;i++) R[i][i]=1; // Identity while(p){ if(p&1) R=multiply(R,A,mod); A=multiply(A,A,mod); p>>=1; } return R; } // Fibonacci(n) in O(log n) using matrix: // [F(n+1)] [1 1]^n [1] // [F(n) ] = [1 0] * [0] // Catalan Number: C(n) = nCr(2n,n)/(n+1) // C(0)=1, C(1)=1, C(2)=2, C(3)=5, C(4)=14... // Counts: balanced parentheses, BST shapes, triangulations, monotonic paths ll catalan(int n){ return nCr(2*n,n)*inv(n+1)%MOD; } // Burnside's Lemma — counting distinct objects under symmetry // |orbits| = (1/|G|) * sum over g in G of |Fix(g)| // Fix(g) = number of arrangements fixed by the symmetry g // Inclusion-Exclusion Principle // |A ∪ B ∪ C| = |A|+|B|+|C| - |A∩B| - |A∩C| - |B∩C| + |A∩B∩C| // For bitmask: iterate over all non-empty subsets; term must be the size of the intersection of the chosen sets // (the product below assumes independent sets; for divisibility problems use n / lcm of the chosen divisors instead) ll inclusionExclusion(int n, vector<ll>& a){ // a[i] = |set_i| ll ans=0; for(int mask=1;mask<(1<<n);mask++){ ll term=1; int bits=__builtin_popcount(mask); for(int i=0;i<n;i++) if(mask&(1<<i)) term*=a[i]; ans += (bits&1?1:-1)*term; } return ans; }

Fast Fourier Transform — Polynomial Multiplication in O(n log n)

// NTT (Number Theoretic Transform) — FFT mod prime (998244353 = 119*2^23+1) // Use for polynomial multiplication with exact integer results const int MOD_NTT = 998244353; const int PRIM_ROOT = 3; void ntt(vector<int>& a, bool invert){ int n=sz(a); for(int i=1,j=0;i<n;i++){ int bit=n>>1; for(;j&bit;bit>>=1) j^=bit; j^=bit; if(i<j) swap(a[i],a[j]); } for(int len=2;len<=n;len<<=1){ ll w=invert?binpow(PRIM_ROOT,MOD_NTT-1-(MOD_NTT-1)/len,MOD_NTT) :binpow(PRIM_ROOT,(MOD_NTT-1)/len,MOD_NTT); for(int i=0;i<n;i+=len){ ll wn=1; for(int j=0;j<len/2;j++){ ll u=a[i+j], v=a[i+j+len/2]*wn%MOD_NTT; a[i+j]=(u+v)%MOD_NTT; a[i+j+len/2]=(u-v+MOD_NTT)%MOD_NTT; wn=wn*w%MOD_NTT; } } } if(invert){ ll n_inv=binpow(n,MOD_NTT-2,MOD_NTT); for(auto& x:a) x=x*n_inv%MOD_NTT; } } vector<int> multiply(vector<int> a, vector<int> b){ int result_size=sz(a)+sz(b)-1, n=1; while(n<result_size) n<<=1; a.resize(n); b.resize(n); ntt(a,false); ntt(b,false); for(int i=0;i<n;i++) a[i]=(long long)a[i]*b[i]%MOD_NTT; ntt(a,true); a.resize(result_size); return a; }
Chapter 27 · The Endgame
ICPC Strategy & Pattern Recognition — How to Win Contests

27.1   Problem → Algorithm Pattern Map The Black Map

Problem Pattern (what you read) → Algorithm (what you code) · Complexity
"Count numbers in [L,R] with property on digits" → Digit DP · O(D·S)
"Maximum flow / minimum cut between nodes" → Dinic's Algorithm · O(V²E)
"Minimum path/cut to disconnect graph" → Min-Cut = Max-Flow · O(V²E)
"Maximum matching in bipartite graph" → Hopcroft-Karp · O(E√V)
"k-th ancestor of node / LCA queries" → Binary Lifting · O(log N) per query
"Path sum / range update on tree path" → HLD + Segment Tree · O(log²N)
"Count paths of length k on tree" → Centroid Decomposition · O(N log N)
"Range min/max query (static array)" → Sparse Table · O(1) per query
"Range update + range query (dynamic)" → Lazy Propagation SegTree · O(log N)
"Find all palindromes in string" → Manacher's Algorithm · O(N)
"String matching / pattern in text" → KMP or Z-Algorithm · O(N+M)
"Compare substrings quickly" → Polynomial Hashing · O(1) per query
"Find bridges / articulation points" → Tarjan's DFS · O(V+E)
"Strongly connected components" → Tarjan's / Kosaraju's · O(V+E)
"Is there a way to order tasks? / Cycle?" → Topological Sort / Kahn's · O(V+E)
"Shortest path with negative edges" → Bellman-Ford · O(VE)
"All-pairs shortest paths, small V" → Floyd-Warshall · O(V³)
"Minimum spanning tree" → Kruskal + DSU · O(E log E)
"Count/sum of subset queries" → SOS DP (bitmask) · O(N·2ⁿ)
"Polynomial multiplication / counting" → NTT / FFT · O(N log N)
"Many range queries offline" → Mo's Algorithm · O((N+Q)√N)
"DP with lines y=mx+b optimization" → Convex Hull Trick / Li Chao · O(N) or O(N log N)
"Monotone optimal split in DP" → D&C Optimization / Knuth's · O(N log N) or O(N²)
"Game: who wins / Grundy values" → Sprague-Grundy, Nim · Problem-specific
"Permutation / symmetry counting" → Burnside + Pólya · O(|G|)
"Large n mod prime / choose" → Lucas' Theorem · O(p log n)

27.2   Contest Execution — The 3-Hour Playbook ICPC Meta

The "Easy Problems First" Trap

Yes, you must solve the easy problems first to secure your rank. But the biggest trap is the whole team tunnel-visioning on a single tricky "easy" problem while abandoning the rest of the problem set. If a problem is rated "easy" but has an implementation trap or brutal edge cases, move on. Do not spend 90 minutes debugging Problem B while Problem E is actually a standard textbook algorithm you know completely. Keep reading, keep moving.

Minutes 0–30: Triage & Setup

One person reads ALL the problems straight through, front to back, flagging the obvious templates. Classify each: Easy, Medium, Hard. Attack the easiest first; early accepted runs secure rank and keep penalty time low. Meanwhile, set up the environment.

Minutes 30–90: Rapid Execution

Submit the guaranteed A/B/C tier problems. Any WA — print the code, jump off the PC, and debug on paper. PC time is gold. Never think for hours on a single problem if teammates are idle. Rotate if stuck.

Minutes 90–150: The Meat

This is where contests are won. One person defends already-solved problems (edge cases, WA fixes). One person attacks the next hardest unsolved. The third person validates ideas before coding — 30 minutes writing the wrong algorithm is fatal.

Minutes 150–180: The Endgame

The scoreboard freezes. Do not start a new implementation from scratch unless it is under 20 lines. Focus entirely on debugging the closest remaining problem. Write stress tests. Scrutinize constraints, variable overflows, and modulos.

Team Communication

Announce when you "think you have it" — teammate sanity-checks before you spend 45 minutes coding. Use the shared notebook. Agree on variable naming conventions beforehand.

When You're Stuck

Read the problem again from scratch — you often misread constraints. Check: is the answer always a specific structure? Can you brute force small cases and observe a pattern? Is there a simpler equivalent problem?

Constraint Analysis

n≤10 → brute force. n≤20 → bitmask/meet-in-middle. n≤10³ → O(n²). n≤10⁵ → O(n log n). n≤10⁶ → O(n). n≤10¹⁸ → O(log n) or O(1). Let constraints tell you the algorithm.

Debugging Checklist — When You Get WA

  • Integer overflow: Did you use ll for all intermediate products? Multiply two ints before casting?
  • Array bounds: Off-by-one? Is your array large enough? 1-indexed vs 0-indexed mismatch?
  • Edge cases: n=1, empty input, all same values, negative numbers, disconnected graph?
  • Modular arithmetic: Did you take mod after every addition/multiplication? Is (a-b) possibly negative before mod?
  • Graph: Is the graph directed/undirected? Are self-loops possible? Multiple edges?
  • Floating point: Use long double. Never compare with ==. Use fabsl(a-b) < EPS.
  • Multi-test reset: Did you reinitialize all global arrays between test cases?

27.3   Codeforces Rating Roadmap Your Path

Rating · What to Master · Problems to Solve
Newbie → Pupil (800–1200) · Implementation, basic math, greedy, simple arrays/strings, brute force · Solve every A+B from Div3 contests
Pupil → Specialist (1200–1600) · Binary search, two pointers, prefix sums, basic DP, basic graphs (BFS/DFS) · 150+ problems rated 1200–1500
Specialist → Expert (1600–1900) · Sorting with custom comparators, segment trees, DSU, Dijkstra, basic backtracking, divide and conquer · 200+ problems rated 1500–1800
Expert → Candidate Master (1900–2100) · Advanced DP (bitmask, interval, D&C opt), flows, binary lifting, CHT, Sparse Table, HLD · File 05 + File 06 mastery + 300+ problems
Master+ (2100+) · Everything in this file + FFT/NTT, advanced geometry, research-level insights, speed · Thousands of problems + ICPC practice
The Truth About Competitive Programming

Knowing algorithms is 20%. Reading problems fast, not panicking under time pressure, clean bug-free code at speed — that's 80%. This file gives you the weapons. You build the instincts by solving problems, thousands of them, consistently. There is no shortcut. The ones who reach ICPC World Finals solve 2000+ problems before they get there. Start today.

27.4   Curated Practice Problem List by Topic Study Plan

Binary Search Problems

CF: 1288C, 1204C, 1117C, 862B, 1284B. CSES: Array Division, Factory Machines, Subarray Distinct Values. Key: practice BS on answer space.

DP Problems

CF: 837D (DP+BIT), 1498E (digit DP), 436E (DP opt), 660F (bitmask). CSES: full DP section (15 problems). AtCoder DP Contest: all 26 problems.

Graph Problems

CF: 1030E (LCA), 916E (centroid), 1239D (SCC+2-SAT), 962F (flow). CSES: full graph section. Must do: 2-SAT, SCC condensation problems.

String Problems

CF: 1063F (suffix array), 1063G (hashing+BS), 126E (Manacher). CSES: string hashing section. Hackerrank: advanced strings.

Tree Problems

CF: 600E (HLD), 916E (centroid decomp), 1017E (binary lifting+DP). CSES: full tree section (15 problems). Must do: tree diameter, tree DP.

ICPC Archives

ICPC World Finals problems (all years). Kattis: ICPC problem set. SPOJ: CLAS, QTREE series. POJ: classic ICPC-style problems. Solve old regionals.
