I'm always excited to take on new projects and collaborate with innovative minds.

Mail

say@niteshsynergy.com

Website

https://www.niteshsynergy.com/

Code

Coding To 1Cr ~ Nitesh Synergy

Please check the GitHub repos below for code and research on the 1Cr problem statements & solutions.

 

https://github.com/niteshsynergy/DSA_Phase1

https://github.com/niteshsynergy/DSA_Phase2

https://github.com/niteshsynergy/DSA_Phase3

 

https://github.com/niteshsynergy/Code1CrPlusMission

 

https://github.com/niteshsynergy/ResearchProblemsCoding1CrPlus

 

https://github.com/niteshsynergy/techassign/

 

 

 

DSA To 1Cr Coding ~ Notes Available ~ Offline + GitHub

DSA Weekly Syllabus:

Phase 1: Introduction 

Phase 2: DSA Topic Code Practice: https://github.com/niteshsynergy/DSA_Phase2

Week / Topic - Phase 2 practice code with CRUD operations for each topic, so that you become an expert in each topic:

Week 1: Introduction to DSA + Time & Space Complexity; Arrays (Basics, Sliding Window, Prefix Sum); Strings (Palindrome, Anagram, Substring Search)
Week 2: Searching Algorithms (Linear, Binary Search & Variants)
Week 3: Sorting Algorithms (Bubble, Merge, Quick, Count Sort)
Week 4: Hashing (HashMap, HashSet, Frequency Maps, Hashing Tricks)
Week 5: Stack (NGE, Stock Span, Balanced Brackets, Histogram Area)
Week 6: Queue & Deque (Circular Queue, Sliding Window Maximum)
Week 7: Linked List (Reverse, Detect Loop, Merge, K-Reverse)
Week 8: Trees - Part 1 (Traversals, Height, Diameter, Leaf Count); Trees - Part 2 (Binary Search Tree, LCA, Floor/Ceil, Insertion/Deletion)
Week 9: Heaps & Priority Queue (Heap Sort, Kth Largest, Median in Stream)
Week 10: Tries (Prefix Matching, Auto-complete, Word Dictionary)
Week 11: Graphs - Part 1 (BFS, DFS, Connected Components, Grid Problems); Graphs - Part 2 (Dijkstra, Kruskal, Prim’s, Topological Sort, Cycle Detection)
Week 12: Recursion + Backtracking (N-Queens, Subsets, Maze)
Week 13: Dynamic Programming - Part 1 (Fibonacci, 0/1 Knapsack, LCS, LIS)
Week 14: Dynamic Programming - Part 2 (Matrix Chain Multiplication, Partition, Palindromes)
Week 15: Greedy Algorithms (Activity Selection, Job Sequencing, Interval Problems)
Week 16: Sliding Window + Two Pointers (Max Sum Subarray, Rainwater, Unique Window)
Week 17: Bit Manipulation (XOR, Bit Masks, Single Number, Count Set Bits)
Week 18: Segment Tree & Disjoint Set Union (Range Queries, Union-Find)

 

What is DSA (Data Structures and Algorithms)?

DSA stands for Data Structures and Algorithms. It is the core foundation of computer science that helps in writing efficient and optimized code.

 

 Why DSA is Needed

Reason - Explanation
Efficient Code - Helps you write faster, memory-optimized code.
Problem Solving - Sharpens your logical thinking and structured approach.
Interview Preparation - Almost every tech company focuses on DSA in coding interviews (Amazon, Google, etc.).
Build Strong Foundations - Boosts confidence to learn advanced topics like AI, OS, Networks.
Competitive Programming - DSA is the core of platforms like LeetCode, Codeforces, HackerRank.
System Design Help - Understanding how data is structured gives deep insight into system architecture.

 

Types of Data Structures (DS)

1. Linear Data Structures

Data is organized in a sequential (linear) manner.

Type - Description - Example Use Case
Array - Collection of elements stored at contiguous memory - Storing a list of items
Linked List - Elements are connected via pointers - Dynamic memory allocation
Stack - LIFO (Last In, First Out) - Undo operations
Queue - FIFO (First In, First Out) - Scheduling processes
Deque - Double-ended queue - Browser history, palindromes

 

2. Non-Linear Data Structures

Data is organized hierarchically or in complex relationships.

Type - Description - Example Use Case
Tree - Hierarchical structure - File systems, HTML DOM
Binary Tree / BST - Binary tree with ordering - Searching, sorting
Heap (Min/Max) - Complete binary tree for priority - Priority queue
Trie - Prefix tree - Autocomplete, dictionary
Graph - Nodes connected by edges - Maps, social networks

 

3. Hash-Based Structures

Efficient access using hash functions.

Type - Description - Example Use Case
Hash Table / HashMap - Key-value pairs - Caching, fast lookups
HashSet - Unique elements only - Duplicate detection

Types of Algorithms (A)

1. Searching Algorithms

Used to find a specific element in a structure.

Type - Description
Linear Search - Check each element one-by-one
Binary Search - Search in sorted data (divide & conquer)
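Binary search from the table above can be sketched as follows. This is a minimal sketch: it assumes the array is sorted ascending, and the class name is illustrative.

```java
class BinarySearchDemo {
    // Iterative binary search: halve the candidate range each step.
    // Returns the index of key in the sorted array, or -1 if absent.
    static int binarySearch(int[] arr, int key) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow vs (lo + hi) / 2
            if (arr[mid] == key) return mid;
            if (arr[mid] < key) lo = mid + 1; // key must be in the right half
            else hi = mid - 1;                // key must be in the left half
        }
        return -1; // not found
    }
}
```

Each iteration halves the search range, giving O(log n) time.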

 

2. Sorting Algorithms

Used to reorder elements.

Type - Description
Bubble Sort - Compare adjacent elements
Selection Sort - Select min/max and place
Insertion Sort - Build sorted list
Merge Sort - Divide and conquer
Quick Sort - Partition-based divide & conquer
Heap Sort - Use heap data structure
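As one concrete instance from the table, insertion sort builds a sorted prefix by inserting each next element into place (a minimal sketch; class name illustrative):

```java
class InsertionSortDemo {
    // Insertion sort: grow a sorted prefix by inserting each next element.
    // O(n^2) worst case, O(n) on already-sorted input, in-place.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift larger elements one slot right
                j--;
            }
            a[j + 1] = key; // drop key into its slot
        }
    }
}
```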

 

3. Recursion & Backtracking

Call function within itself and explore all possibilities.

Type - Description
Recursion - Solve via repeated calls
Backtracking - Try all solutions and backtrack if needed (used in puzzles, path finding)
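A small backtracking sketch: generate all subsets by choosing or skipping each element, then undoing the choice (names are illustrative):

```java
import java.util.*;

class SubsetsDemo {
    // Backtracking: at each index, either include the element or skip it.
    static List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, 0, new ArrayList<>(), result);
        return result;
    }

    static void backtrack(int[] nums, int i, List<Integer> current,
                          List<List<Integer>> result) {
        if (i == nums.length) {
            result.add(new ArrayList<>(current)); // record one complete subset
            return;
        }
        current.add(nums[i]);                 // choose nums[i]
        backtrack(nums, i + 1, current, result);
        current.remove(current.size() - 1);   // un-choose (backtrack)
        backtrack(nums, i + 1, current, result);
    }
}
```

For n elements this explores 2ⁿ leaves, one per subset.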

 

4. Dynamic Programming (DP)

Break down problems into subproblems and store their results (memoization/tabulation).

Use Case - Examples
DP - Fibonacci, Knapsack, Longest Common Subsequence (LCS)

 

5. Greedy Algorithms

Make the locally optimal choice at each step.

Use Case - Examples
Greedy - Coin change, Activity selection, Huffman coding
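A minimal greedy sketch for activity selection: sort by finish time and always take the earliest-finishing activity that doesn't overlap the last one taken. Intervals are assumed here as {start, end} rows; names are illustrative.

```java
import java.util.*;

class ActivitySelectionDemo {
    // Greedy activity selection: pick by earliest finish time.
    static int maxActivities(int[][] intervals) { // each row is {start, end}
        Arrays.sort(intervals, (x, y) -> Integer.compare(x[1], y[1]));
        int count = 0;
        int lastEnd = Integer.MIN_VALUE;
        for (int[] iv : intervals) {
            if (iv[0] >= lastEnd) { // no overlap with the previous pick
                count++;
                lastEnd = iv[1];
            }
        }
        return count;
    }
}
```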

 

6. Divide and Conquer

Divide the problem into subproblems, solve them independently, then combine.

Use Case - Examples
Divide & Conquer - Merge sort, Quick sort, Binary search

 

 

7. Graph Algorithms

Used for graph-related problems.

Type - Description
DFS / BFS - Explore graphs
Dijkstra’s - Shortest path
Kruskal’s / Prim’s - Minimum spanning tree
Topological Sort - Task scheduling
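BFS from the table above can be sketched as a level-by-level traversal with a queue (a minimal sketch over an adjacency list; the class name and sample graph are illustrative):

```java
import java.util.*;

class BfsDemo {
    // BFS: visit nodes level by level from the source using a queue.
    // graph is an adjacency list; returns the visit order.
    static List<Integer> bfs(List<List<Integer>> graph, int src) {
        List<Integer> order = new ArrayList<>();
        boolean[] seen = new boolean[graph.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(src);
        seen[src] = true;
        while (!queue.isEmpty()) {
            int u = queue.poll();
            order.add(u);
            for (int v : graph.get(u)) {
                if (!seen[v]) { // enqueue each neighbor exactly once
                    seen[v] = true;
                    queue.add(v);
                }
            }
        }
        return order;
    }

    // Small example graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}, 3 -> {}
    static List<List<Integer>> sample() {
        List<List<Integer>> g = new ArrayList<>();
        g.add(Arrays.asList(1, 2));
        g.add(Arrays.asList(3));
        g.add(Arrays.asList(3));
        g.add(new ArrayList<>());
        return g;
    }
}
```

BFS and DFS both run in O(V + E) on an adjacency list.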

 

 

→ Next →

Let’s start from the basics →

 

 1. Time and Space Complexity

 

What is Time Complexity?

Time Complexity measures the amount of computational time an algorithm takes relative to the input size n.

Think of it as:

"How does the algorithm scale as input grows?"

It’s about how many operations your code performs, not about how long in seconds it takes (that depends on hardware).

 

What is Space Complexity?

Space Complexity refers to the amount of memory an algorithm uses relative to the input size. This includes:

  • Input space
  • Auxiliary space (temporary data structures, recursion stack, etc.)
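A small illustration of auxiliary space: the recursive sum below keeps one stack frame per call (O(n) auxiliary space), while the iterative version uses O(1) extra space. Names are illustrative.

```java
class SpaceDemo {
    // Recursive: O(n) auxiliary space — one stack frame per call until n hits 0.
    static int sumRec(int n) {
        if (n == 0) return 0;
        return n + sumRec(n - 1);
    }

    // Iterative: O(1) auxiliary space — only a couple of local variables.
    static int sumIter(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }
}
```

Both compute the same result; only the memory profile differs.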

 

 Part 1: Big-O, Big-Theta, Big-Omega

These are formal notations that describe growth rates:

Notation - Represents - Meaning
O(f(n)) - Big-O - Upper bound (worst-case scenario)
Ω(f(n)) - Big-Omega - Lower bound (best-case scenario)
Θ(f(n)) - Big-Theta - Tight bound (when upper = lower = actual)

 

Big-O (O) - Worst case

"The algorithm won't be worse than this."

Example:

int findMax(int[] arr) {
   int max = arr[0];
   for (int i = 1; i < arr.length; i++) {
       if (arr[i] > max) max = arr[i];
   }
   return max;
}

Time Complexity: O(n) – because in the worst case, we scan all elements.

Use Big-O in interviews because companies want to know how bad your solution can get.

 

Big-Omega (Ω) - Best case

"The algorithm won't be better than this."

Same example:

if (arr.length == 1) return arr[0];

Best case: Ω(1) – if array size is 1.

 

Big-Theta (Θ) - Tight bound

"It always behaves like this."

When the algorithm runs exactly in linear time, regardless of input (no best or worst):

for (int i = 0; i < n; i++) System.out.println(i);

Time: Θ(n)

 

Part 2: Best, Average, Worst Case

 Why Case Analysis Matters:

  • Best case = Impractical optimism
  • Average case = Real-world expectation
  • Worst case = Safety net

Example: Linear Search

int find(int[] arr, int key) {
   for (int i = 0; i < arr.length; i++) {
       if (arr[i] == key) return i;
   }
   return -1;
}

Case - Scenario - Time Complexity
Best Case - Key is at arr[0] - O(1)
Worst Case - Key is at arr[n-1] or not found - O(n)
Average Case - Key is somewhere in the middle - O(n)

 

Part 3: Common Time Complexities (With Examples)

Complexity - Description - Example
O(1) - Constant Time - Accessing an array element
O(log n) - Logarithmic - Binary Search
O(n) - Linear - Loop through array
O(n log n) - Linearithmic - Merge Sort, Quick Sort (avg)
O(n²) - Quadratic - Nested loops (Bubble Sort)
O(2ⁿ) - Exponential - Recursion (Fibonacci without DP)
O(n!) - Factorial - Generating all permutations

 

For anything above O(n log n), optimize it unless the input size is guaranteed small.

 

 Part 4: Space vs Time Trade-offs

Often, you trade more memory to get faster time, or vice versa.

Example: Caching / Memoization

Map<Integer, Integer> cache = new HashMap<>();

int fib(int n) {
   if (cache.containsKey(n)) return cache.get(n);
   if (n <= 1) return n;
   int result = fib(n - 1) + fib(n - 2);
   cache.put(n, result);
   return result;
}

Without Memoization - O(2ⁿ) Time, O(n) Space
With Memoization - O(n) Time, O(n) Space

 

Space used = performance gained

 Real-Life Examples

Use Case - Time Complexity - Why It Matters
Searching in DB Index - O(log n) - Fast access, scalable
Google Search Ranking - O(n log n) - Efficient sorting & relevance
Encryption (RSA) - O(log n) or more - Needs prime checking
AI Neural Net Training - O(n²)+ - Huge memory and CPU needed
Image Compression - O(n) - Balance quality vs. processing time

 

 

2. Mathematics for DSA

Prime Numbers — Sieve of Eratosthenes (In-Depth)

 What is a Prime Number?

A prime number is a natural number greater than 1 that has exactly two divisors: 1 and itself.
Examples: 2, 3, 5, 7, 11, 13, 17, ...

Non-prime numbers are called composite numbers (like 4, 6, 8, 9, etc.).

 Why Are Prime Numbers Important in DSA?

  •  Used in Hash Functions to reduce collisions.
  • Core to modular arithmetic & cryptography (RSA, Diffie-Hellman).
  • Appear in probability, factorization, LCM/GCD, sieve optimizations.
  • Used in primality testing, number theory, counting divisors, and modular inverses.

Brute Force Approach: Primality Check

Check if a number n is divisible by any number from 2 to sqrt(n).

boolean isPrime(int n) {
   if (n <= 1) return false;
   for (int i = 2; i <= Math.sqrt(n); i++) {
       if (n % i == 0) return false;
   }
   return true;
}
 

Time Complexity: O(√n)

Checking each number this way is not feasible for finding all primes up to 10^6 or 10^7.
Hence, we need an optimized algorithm...

 

 Sieve of Eratosthenes

 Purpose:

To find all prime numbers up to a given number N in O(n log log n) time.

 

 Core Idea:

If a number is not prime, it must be divisible by some smaller prime number.
So, we:

  1. Assume all numbers from 2 to N are prime.
  2. Start from 2, mark its multiples as not prime.
  3. Move to next unmarked number and repeat.

 

 Steps:

Let’s say N = 30
We'll maintain a boolean array isPrime[0…30], initialized to true (except isPrime[0] = false, isPrime[1] = false)

boolean[] isPrime = new boolean[n + 1];
Arrays.fill(isPrime, true);
isPrime[0] = isPrime[1] = false;
for (int i = 2; i * i <= n; i++) {
   if (isPrime[i]) {
       for (int j = i * i; j <= n; j += i) {
           isPrime[j] = false;
       }
   }
}
 

 Time Complexity Analysis:

  • Outer loop runs for √n iterations.
  • Inner loop marks multiples of primes.
    → Number of operations:
    n/2 + n/3 + n/5 + n/7 + ... ≈ n * log(log n)

 So, Total Time: O(n log log n)

 

Space Complexity:

  • O(n) for the isPrime[] boolean array.

 

 Visual Walkthrough: N = 30

Initial Array (T = True):

 

[ F, F, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T ]
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
 

 

→   Start with i = 2 → mark 4, 6, 8, 10...30

   Next i = 3 → mark 9, 12, 15, 18...30

   Skip 4 (already false), next i = 5, mark 25, 30...

Final isPrime[] shows all prime indices:

 

[ F, F, T, T, F, T, F, T, F, F, F, T, F, T, F, F, F, T, F, T, F, F, F, T, F, F, F, F, F, T, F ]
→ Indices that are still true: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
 

Optimizations

 Sieve from i * i instead of 2*i

Start marking multiples from i * i because smaller multiples are already handled.

 Segmented Sieve

Used when N is large (e.g. 10^9), and memory is limited.
It divides the range into chunks and applies the sieve in segments.

Applications of Sieve

Use Case - Description
Cryptography - RSA uses large primes for public/private key generation
Competitive Programming - Problems involving factorization, coprime pairs, LCM/GCD
Totient Function - φ(n) uses prime factorization
Number Theory - Sum/count of divisors, Euler’s Theorem
Preprocessing Primes - For quick lookups in coding interviews
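The segmented sieve described earlier can be sketched as follows: first collect base primes up to √hi with a simple sieve, then use them to cross off composites inside the segment only. This is a minimal sketch, not an optimized implementation; names are illustrative.

```java
import java.util.*;

class SegmentedSieveDemo {
    // Find all primes in [lo, hi] without sieving everything below lo.
    static List<Long> primesInRange(long lo, long hi) {
        // Step 1: simple sieve for base primes up to sqrt(hi).
        int limit = (int) Math.sqrt((double) hi) + 1;
        boolean[] isComposite = new boolean[limit + 1];
        List<Integer> basePrimes = new ArrayList<>();
        for (int i = 2; i <= limit; i++) {
            if (!isComposite[i]) {
                basePrimes.add(i);
                for (long j = (long) i * i; j <= limit; j += i) {
                    isComposite[(int) j] = true;
                }
            }
        }
        // Step 2: mark composites inside the segment [lo, hi].
        boolean[] mark = new boolean[(int) (hi - lo + 1)];
        for (int p : basePrimes) {
            // First multiple of p in the segment, but never below p*p.
            long start = Math.max((long) p * p, (lo + p - 1) / p * p);
            for (long j = start; j <= hi; j += p) {
                mark[(int) (j - lo)] = true;
            }
        }
        // Step 3: collect the unmarked numbers.
        List<Long> primes = new ArrayList<>();
        for (long x = Math.max(lo, 2); x <= hi; x++) {
            if (!mark[(int) (x - lo)]) primes.add(x);
        }
        return primes;
    }
}
```

Only O(hi − lo + 1) segment memory plus O(√hi) for base primes is needed, which is what makes large ranges feasible.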

 

What is GCD & LCM?

  • GCD (Greatest Common Divisor): Largest number that divides two numbers without a remainder.
  • LCM (Least Common Multiple): Smallest number divisible by both numbers.

Euclidean Algorithm for GCD

Key idea:
gcd(a, b) = gcd(b, a % b) and gcd(a, 0) = a.

Code: GCD (Iterative & Recursive)

 

// Recursive GCD
int gcd(int a, int b) {
   if (b == 0) return a;
   return gcd(b, a % b);
}
// Iterative GCD
int gcdIter(int a, int b) {
   while (b != 0) {
       int temp = b;
       b = a % b;
       a = temp;
   }
   return a;
}

Problem: Find GCD of two numbers

Example:
Find GCD of a = 48 and b = 18.

int gcd(int a, int b) {
   if (b == 0) return a;          // Base case: if second number is 0, gcd is 'a'
   return gcd(b, a % b);          // Recursive call with (b, remainder of a divided by b)
}
 

How it works for gcd(48, 18):

  • Step 1: gcd(48, 18)
    • b != 0, so call gcd(18, 48 % 18) → gcd(18, 12)
  • Step 2: gcd(18, 12)
    • b != 0, so call gcd(12, 18 % 12) → gcd(12, 6)
  • Step 3: gcd(12, 6)
    • b != 0, so call gcd(6, 12 % 6) → gcd(6, 0)
  • Step 4: gcd(6, 0)
    • b == 0, so return a → return 6

Now the recursive calls unwind, and final result is 6.

 

2. Iterative GCD

int gcdIter(int a, int b) {
   while (b != 0) {
       int temp = b;       // Store current b
       b = a % b;          // Update b to remainder of a divided by b
       a = temp;           // Update a to old b value
   }
   return a;               // When b is 0, a is the GCD
}
 

How it works for gcdIter(48, 18):

  • Initial values: a=48, b=18
  • Loop 1:
    • temp = 18
    • b = 48 % 18 = 12
    • a = 18
  • Now a=18, b=12
  • Loop 2:
    • temp = 12
    • b = 18 % 12 = 6
    • a = 12
  • Now a=12, b=6
  • Loop 3:
    • temp = 6
    • b = 12 % 6 = 0
    • a = 6
  • Now a=6, b=0 → exit loop

Return a = 6

 

LCM Using GCD

Formula:

LCM(a, b) = |a × b| / GCD(a, b)

 

Code:

int lcm(int a, int b) {
   return (a / gcd(a, b)) * b;
}

 

Example: Calculate LCM(48, 18)

Step 1: Find gcd(48, 18)

  • From previous example, gcd(48, 18) = 6

Step 2: Calculate LCM

  • LCM = (48 / 6) * 18
  • LCM = 8 * 18 = 144
  • Verification:

  • 48 divides 144 (144 / 48 = 3)
  • 18 divides 144 (144 / 18 = 8)
  • There is no smaller positive number divisible by both 48 and 18 than 144.
  • So, LCM(48, 18) = 144 is correct.

     

Why Important?

  • Used in fractions simplification.
  • Helps in synchronizing cycles, periodicity problems.
  • Core in problems related to number theory and modular arithmetic.

 

Modular Arithmetic

 

What?

Performing arithmetic operations under a modulus (usually a prime number or large number like 10^9+7).
E.g., compute (a + b) % mod, (a * b) % mod.


Why?

  • To avoid integer overflow.
  • Used heavily in cryptography, hashing, combinatorics.
  • Keeps numbers in manageable range.

Common Operations

  • Addition: (a + b) % mod
  • Subtraction: (a - b + mod) % mod (to avoid negatives)
  • Multiplication: (a * b) % mod
  • Division: Use modular inverse (if mod is prime, use Fermat’s Little Theorem)
long mod = 1_000_000_007;
long modAdd(long a, long b) {
   return (a + b) % mod;
}
long modMul(long a, long b) {
   return (a * b) % mod;
}
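The subtraction and division rules listed above can be sketched in the same style. Division relies on the modular inverse computed by fast exponentiation, which assumes the modulus is prime (Fermat’s little theorem). Names are illustrative.

```java
class ModDemo {
    static final long MOD = 1_000_000_007L; // a large prime modulus

    // (a - b) mod MOD, shifted by +MOD so the result stays non-negative.
    static long modSub(long a, long b) {
        return ((a - b) % MOD + MOD) % MOD;
    }

    // Fast exponentiation: a^b mod MOD in O(log b).
    static long modPow(long a, long b) {
        long result = 1;
        a %= MOD;
        while (b > 0) {
            if ((b & 1) == 1) result = result * a % MOD; // odd bit: multiply in
            a = a * a % MOD;                             // square the base
            b >>= 1;
        }
        return result;
    }

    // a / b mod MOD == a * b^(MOD-2) mod MOD (Fermat, MOD prime).
    static long modDiv(long a, long b) {
        return a % MOD * modPow(b, MOD - 2) % MOD;
    }
}
```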

 

Power Exponentiation (Fast Power)


Problem:

Calculate a^b mod m efficiently for large b.


Naive:

Multiply a by itself b times → O(b) time, too slow for large b.


Fast Exponentiation:

Use divide and conquer by exploiting:

a^b = (a^(b/2))^2       if b is even
a^b = a × a^(b-1)       if b is odd


Code (Recursive):

long modPow(long a, long b, long mod) {
   if (b == 0) return 1;
   long half = modPow(a, b / 2, mod);
   long result = (half * half) % mod;
   if (b % 2 == 1) result = (result * a) % mod;
   return result;
}

 

Example: Calculate 3^13 mod 17

We want:
3^13 mod 17

 

Step-by-step trace:

Call: modPow(3, 13, 17)

  • Since b=13 ≠ 0, recursive call with b/2 = 6:

Call: modPow(3, 6, 17)

  • Since b=6 ≠ 0, recursive call with b/2 = 3:

Call: modPow(3, 3, 17)

  • Since b=3 ≠ 0, recursive call with b/2 = 1:

Call: modPow(3, 1, 17)

  • Since b=1 ≠ 0, recursive call with b/2 = 0:

Call: modPow(3, 0, 17)

  • b=0 → return 1

Returning back:

  • From modPow(3,0,17): returns 1
  • modPow(3,1,17):
    • half = 1
    • result = (1 * 1) % 17 = 1
    • b is odd (1 % 2 == 1), so result = (1 * 3) % 17 = 3
    • returns 3
  • modPow(3,3,17):
    • half = modPow(3,1,17) = 3
    • result = (3 * 3) % 17 = 9
    • b is odd (3 % 2 == 1), so result = (9 * 3) % 17 = 27 % 17 = 10
    • returns 10
  • modPow(3,6,17):
    • half = modPow(3,3,17) = 10
    • result = (10 * 10) % 17 = 100 % 17 = 15
    • b is even (6 % 2 == 0), no extra multiplication
    • returns 15
  • modPow(3,13,17):
    • half = modPow(3,6,17) = 15
    • result = (15 * 15) % 17 = 225 % 17 = 4
    • b is odd (13 % 2 == 1), so result = (4 * 3) % 17 = 12
    • returns 12

Final result:

3^13 mod 17 = 12

Time Complexity:

O(log b) — drastically faster than naive.

 

Combinatorics (nCr, nPr)


Definitions:

  • nCr: Number of ways to choose r items from n without order.
  • nPr: Number of ways to arrange r items from n (order matters).

Formulas:

nCr = n! / (r! × (n-r)!)
nPr = n! / (n-r)!

 

Efficient Calculation with Modulus:

Precompute factorials and inverse factorials to calculate nCr modulo mod.


Code (Precompute factorials):

long mod = 1_000_000_007;
int MAX = 1000000;
long[] fact = new long[MAX+1];
long[] invFact = new long[MAX+1];

void precompute() {
   fact[0] = 1;
   for (int i = 1; i <= MAX; i++) {
       fact[i] = (fact[i-1] * i) % mod;
   }
   invFact[MAX] = modInverse(fact[MAX], mod);
   for (int i = MAX-1; i >= 0; i--) {
       invFact[i] = (invFact[i+1] * (i+1)) % mod;
   }
}

// Modular inverse using Fermat's little theorem (mod must be prime)
long modInverse(long a, long m) {
   return modPow(a, m - 2, m);
}

long nCr(int n, int r) {
   if (r > n) return 0;
   return (((fact[n] * invFact[r]) % mod) * invFact[n - r]) % mod;
}
 

Step 1: Precompute factorials fact[]

fact[0] = 1;
for (int i = 1; i <= MAX; i++) {
   fact[i] = (fact[i-1] * i) % mod;
}
 

  • fact[i] stores i! mod mod.
  • For example:
    • fact[1] = 1! = 1
    • fact[2] = 2! = 2
    • fact[3] = 6
    • fact[4] = 24, etc., all modulo mod.

Step 2: Precompute inverse factorials invFact[]

invFact[MAX] = modInverse(fact[MAX], mod);
for (int i = MAX-1; i >= 0; i--) {
   invFact[i] = (invFact[i+1] * (i+1)) % mod;
}
 

  • invFact[i] stores the modular inverse of fact[i], i.e., (i!)^(-1) mod mod.
  • Start from invFact[MAX] using the modular inverse of fact[MAX].
  • Then compute inverses going backward using:

invFact[i] = invFact[i+1] × (i+1) mod mod

This works because:

(i!)^(-1) = ((i+1)!)^(-1) × (i+1) mod mod

 

Step 3: Modular inverse function (Fermat’s Little Theorem)

long modInverse(long a, long m) {  
   return modPow(a, m - 2, m);  
}  
 

  • Uses Fermat’s little theorem:
    If m is prime, the inverse of a modulo m is:

    a^(m-2) mod m

  • Uses the previously defined modPow for fast exponentiation.

Step 4: Calculate nCr modulo mod

long nCr(int n, int r) {
   if (r > n) return 0;
   return (((fact[n] * invFact[r]) % mod) * invFact[n - r]) % mod;
}
 

  • Formula for binomial coefficient modulo a prime:

C(n, r) = n! / (r! × (n-r)!) mod mod = fact[n] × invFact[r] × invFact[n-r] mod mod


Example: Calculate C(5, 2) mod 1,000,000,007

  • fact[5] = 120
  • invFact[2] = modular inverse of 2! = modular inverse of 2 = 500000004
  • invFact[3] = modular inverse of 3! = modular inverse of 6 = 166666668

Calculate:

nCr = (120 × 500000004 × 166666668) mod 1,000,000,007

Stepwise:

  • 120 × 500000004 ≡ 60 (mod 1,000,000,007)
  • 60 × 166666668 ≡ 10 (mod 1,000,000,007)

So, C(5, 2) = 10, which is correct.

Why precompute?

  • Factorials and inverse factorials are expensive to compute repeatedly.
  • Precomputing lets you calculate any nCr quickly in O(1) time after O(MAX) preprocessing.

 

 

Prefix Sum & Difference Arrays

 

Prefix Sum:

Precompute sums to answer sum queries efficiently.


Example:

Given array arr = [1, 2, 3, 4]
Prefix sum prefix = [0, 1, 3, 6, 10] (with a leading 0 for ease)
Sum from l to r:

sum = prefix[r+1] - prefix[l]

 

Code:

int[] prefixSum(int[] arr) {
   int n = arr.length;
   int[] prefix = new int[n + 1];
   prefix[0] = 0;
   for (int i = 1; i <= n; i++) {
       prefix[i] = prefix[i-1] + arr[i-1];
   }
   return prefix;
}

// Query sum l to r
int rangeSum(int[] prefix, int l, int r) {
   return prefix[r+1] - prefix[l];
}
 

Step by step:

arr = [ 2 , 4 , 5 , 7 , 8 ]

Step 1: Compute prefix sums

  • prefix[0] = 0 (by definition)
  • prefix[1] = prefix[0] + arr[0] = 0 + 2 = 2
  • prefix[2] = prefix[1] + arr[1] = 2 + 4 = 6
  • prefix[3] = prefix[2] + arr[2] = 6 + 5 = 11
  • prefix[4] = prefix[3] + arr[3] = 11 + 7 = 18
  • prefix[5] = prefix[4] + arr[4] = 18 + 8 = 26

So, prefix array is:

prefix = [0, 2, 6, 11, 18, 26]
 

Step 2: Query sums using rangeSum

Query: sum of elements from index l=1 to r=3 (which means elements 4, 5, 7)

Calculate:

rangeSum(prefix, 1, 3) = prefix[3 + 1] - prefix[1] = prefix[4] - prefix[1] = 18 - 2 = 16
 

Check manually:

arr[1] + arr[2] + arr[3] = 4 + 5 + 7 = 16
 

Perfect!

 

Difference Array:

For efficient range update queries.


Idea:

To add val to range [l, r] in O(1):

  • diff[l] += val
  • diff[r+1] -= val (if r+1 < n)

Then prefix sum of diff gives the updated array.

Code:

int[] buildDiffArray(int[] arr) {
   int n = arr.length;
   int[] diff = new int[n];
   diff[0] = arr[0];
   for (int i = 1; i < n; i++) {
       diff[i] = arr[i] - arr[i-1];
   }
   return diff;
}

void rangeUpdate(int[] diff, int l, int r, int val) {
   diff[l] += val;
   if (r + 1 < diff.length) diff[r+1] -= val;
}

int[] buildUpdatedArray(int[] diff) {
   int n = diff.length;
   int[] arr = new int[n];
   arr[0] = diff[0];
   for (int i = 1; i < n; i++) {
       arr[i] = arr[i-1] + diff[i];
   }
   return arr;
}
 

Explanation with example:

Suppose: arr = [10, 20, 30, 40, 50]
Step 1: Build difference array

  • diff[0] = arr[0] = 10
  • diff[1] = arr[1] - arr[0] = 20 - 10 = 10
  • diff[2] = 30 - 20 = 10
  • diff[3] = 40 - 30 = 10
  • diff[4] = 50 - 40 = 10

So,

diff = [10, 10, 10, 10, 10]
Step 2: Range update

Now suppose you want to add 5 to elements from index 1 to 3 (i.e., arr[1], arr[2], arr[3])

Call:

rangeUpdate(diff, 1, 3, 5);
 

Effect on diff:

  • diff[1] += 5 → diff[1] = 10 + 5 = 15
  • diff[4] -= 5 → diff[4] = 10 - 5 = 5 (because 3+1 = 4)

Updated diff:

diff = [10, 15, 10, 10, 5]
 

Step 3: Rebuild updated array

Rebuild with prefix sums of diff:

  • arr[0] = diff[0] = 10
  • arr[1] = arr[0] + diff[1] = 10 + 15 = 25
  • arr[2] = arr[1] + diff[2] = 25 + 10 = 35
  • arr[3] = arr[2] + diff[3] = 35 + 10 = 45
  • arr[4] = arr[3] + diff[4] = 45 + 5 = 50

Resulting updated array:

[10, 25, 35, 45, 50]
 

Check update:

Original array was [10, 20, 30, 40, 50]
After adding 5 to indices 1 to 3:

  • arr[1] = 20 + 5 = 25
  • arr[2] = 30 + 5 = 35
  • arr[3] = 40 + 5 = 45

All other elements unchanged. Perfect!

 

Why use difference array?

  • Normally, updating values in a range [l, r] by adding val takes O(r-l+1) time if done naively.
  • Using difference array, each range update is done in O(1) time by updating only two positions.
  • Rebuilding the final array after all updates takes O(n) time.

Bit Manipulation

 

Why Bit Manipulation?

  • Fast operations at the bit level.
  • Useful in problems with flags, subsets, masks, parity, counting bits, etc.
  • Operates in constant time on integers.

Common Operations:

Operation - Symbol - Description
AND - & - Bitwise AND
OR - | - Bitwise OR
XOR - ^ - Bitwise XOR
NOT - ~ - Bitwise NOT
Left Shift - << - Shift bits left (multiply by 2)
Right Shift - >> - Shift bits right (divide by 2)

 

Examples:

Check if the ith bit is set:

boolean isSet(int n, int i) {
   return (n & (1 << i)) != 0;
}
 

Set ith bit:

int setBit(int n, int i) {
   return n | (1 << i);
}
 

Clear ith bit:

int clearBit(int n, int i) {
   return n & ~(1 << i);
}
 

Toggle ith bit:

int toggleBit(int n, int i) {
   return n ^ (1 << i);
}
 

Check if n is a power of two:

boolean isPowerOfTwo(int n) {
   return n > 0 && (n & (n - 1)) == 0;
}
 


 

Use Cases

  • Efficient subset generation.
  • Storing boolean flags in an integer.
  • Fast parity checks.
  • Bitmask dynamic programming.
  • Power of two checks.
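The subset-generation use case above can be sketched with bitmasks: each mask from 0 to 2ⁿ−1 encodes one subset, where bit i says whether arr[i] is included (names are illustrative):

```java
import java.util.*;

class BitmaskSubsetsDemo {
    // Enumerate all subsets of arr: mask bit i means "include arr[i]".
    static List<List<Integer>> allSubsets(int[] arr) {
        int n = arr.length;
        List<List<Integer>> subsets = new ArrayList<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            List<Integer> subset = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) subset.add(arr[i]); // bit i set → include
            }
            subsets.add(subset);
        }
        return subsets;
    }
}
```

For n elements this yields exactly 2ⁿ subsets in O(n · 2ⁿ) time.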

 

Summary Table

Concept - Time Complexity - Use Case / Importance
GCD (Euclidean Algorithm) - O(log(min(a, b))) - Simplify fractions, number theory
LCM - Depends on GCD - Synchronization, periodic tasks
Modular Arithmetic - O(1) per operation - Cryptography, hashing, combinatorics
Fast Power - O(log b) - Efficient exponentiation under modulus
Combinatorics (nCr, nPr) - Preprocessing O(n), query O(1) - Counting problems, probability, permutations
Prefix Sum - O(n) preprocessing, O(1) queries - Range sum queries
Difference Array - O(1) range update - Range updates on arrays
Bit Manipulation - O(1) per operation - Flags, subsets, optimization, dynamic programming

 

100+ Assignments: Basics + Mathematics for DSA → Go to the end of this page to know more.

 

→ Next →  CORE DATA STRUCTURES → Coming soon..

Follow Part→1 To …Part→N  + Live Coding → Plan Simple

  1. Techie Delight
  2. HackerRank
  3. HackerEarth
  4. Codeforces
  5. CodeChef
  6. SPOJ
  7. CSES Problem Set
  8. AtCoder
  9. TopCoder
  10. CodinGame
  11. CodingBat
  12. UVa Online Judge
  13. Kattis
  14. CheckiO
  15. CS50 (edX)
  16. Project Euler
  17. Exercism.io
  18. Replit (Free Plan)
  19. CS Dojo (YouTube + GitHub challenges)
  20. Visualgo.net (DSA visualizer + quizzes)
  21. GeeksforGeeks Practice (Basic tier)
  22. Open DSA (Virginia Tech’s open DSA textbook + questions)
  23. Advent of Code (Yearly puzzles in December)
  24. Brilliant (Free tier)
  25. FreeCodeCamp (Coding certifications + challenges)
  26. MIT OCW (6.006, 6.0001, etc.)
  27. Stanford CS106B Assignments (Available on GitHub)
  28. 100 Days of Code (GitHub + Twitter community)
  29. CS50x (YouTube)
  30. Python Tutor (Step-through code debugger)
  31. The Odin Project
  32. Nand2Tetris (Build a computer from scratch)
  33. Rustlings (Rust language exercises)
  34. JavaScript30 (30 mini projects, no libraries)
  35. Frontend Mentor (Free challenges)
  36. W3Schools (JS & Python code playgrounds)
  37. Codewars (Free tier)
  38. Codingame Clash of Code (Quick multiplayer coding)
  39. CodeGym (Free Java lessons)
  40. The Algorithms (GitHub repo of open DSA implementations)
  41. OpenGenus IQ (Interview prep & knowledge base)
  42. Bitburner (Coding game)
  43. CodeCombat (Free up to certain levels)
  44. Paiza.io (Online coding IDE + challenges)
  45. LearnCpp.com Challenges
  46. LeetCode
  47. CoderByte
  48. CodeSignal
  49. InterviewBit
  50. Pramp
  51. Edabit
  52. JetBrains Academy
  53. Binary Search
  54. Codewars (Premium kata explanations)
  55. Frontend Mentor (Pro challenges)
  56. Scrimba (Interactive code screencasts)
  57. Codewell (Free + paid frontend challenges)
  58. HackInScience (Python practice with feedback)
  59. Programmr
  60. Coding Ninjas Studio (Free + premium questions)
  61. Scaler Topics (Some free DSA content)
  62. AlgoExpert (Preview questions)
  63. DevChallenges.io
  64. Codepip (HTML/CSS games)
  65. Skillrack (Free + college contests)
  66. LearnDSA.io (Subset of Educative content)
  67. Turing (Practice + job tests)
  68. Code Avengers (Free trial)
  69. Coding Minutes (Free samples from paid DSA courses)
  70. Skillshare (Free for 1 month)
  71. Udemy Coding Courses (Often free coupons)
  72. Code.org (For younger learners, basic JavaScript)
  73. Coding Hero (Freemium beginner problems)
  74. Khan Academy – Computer Programming
  75. ZTM Academy Free Challenges
  76. LinkedIn Learning (Free month trial)
  77. GeeksForGeeks Premium Courses (Some lessons free)
  78. Platzi (Free coding basics)
  79. ByteDegree (Intro tracks free)
  80. Thinkful Coding Bootcamp (Free prep modules)
  81. Educative.io (Grokking series)
  82. ByteByteGo (System design)
  83. DesignGurus.io
  84. AlgoExpert
  85. Ex-Facebook Interview Prep (Sean Prashad)
  86. Turing Hut Challenges (Paid tracks)
  87. Coursera Coding Specializations
  88. Udacity Nanodegrees (DSA & System Design)
  89. Pluralsight Coding Paths
  90. Launch School
  91. Treehouse (Techdegree programs)
  92. Lambda School / BloomTech (Paid bootcamp)
  93. Coding Dojo
  94. Interview Cake
  95. Fullstack Academy
  96. Codesmith
  97. Springboard (Career tracks)
  98. Coding Blocks Online
  99. Code Fellows
  100. DataCamp (Python + Data Structures for DS)

 

 

Part 1: Time and Space Complexity (50+ assignments)

A. Concepts and Analysis

  1. Explain Big-O notation with 5 original examples.
  2. Explain Big-Theta notation with 3 original examples.
  3. Explain Big-Omega notation with 3 original examples.
  4. Analyze time complexity of linear search in best, average, and worst cases.
  5. Analyze time complexity of binary search in best, average, and worst cases.
  6. Analyze space complexity of recursive Fibonacci.
  7. Calculate time complexity of bubble sort in all cases.
  8. Calculate time complexity of insertion sort in all cases.
  9. Calculate time complexity of merge sort.
  10. Calculate time complexity of quicksort and explain pivot choice effects.
  11. Calculate space complexity of merge sort.
  12. Compare space vs time trade-off in merge sort vs quicksort.
  13. Time complexity of a balanced binary search tree operations.
  14. Explain amortized time complexity with stack push/pop.
  15. Calculate time complexity for matrix multiplication naive approach.
  16. Calculate time complexity for matrix multiplication using Strassen’s algorithm.
  17. Explain why hashmap operations are average O(1) but worst-case O(n).
  18. Calculate time complexity of naive substring search.
  19. Calculate time complexity of KMP substring search.
  20. Explain how caching (memoization) improves time complexity for Fibonacci.
  21. Analyze time complexity of DFS and BFS on graphs.
  22. Calculate space complexity of DFS and BFS.
  23. Analyze time complexity of Dijkstra’s shortest path algorithm.
  24. Analyze space complexity of Dijkstra’s algorithm.
  25. Calculate complexity of Prim’s and Kruskal’s MST algorithms.
  26. Explain time complexity of heap operations.
  27. Implement and analyze complexity of priority queue insert/extract.
  28. Calculate time complexity of n-queens problem brute force.
  29. Calculate time complexity of backtracking Sudoku solver.
  30. Analyze recursive vs iterative factorial time and space complexity.
  31. Explain complexity of exponentiation by squaring.
  32. Compare iterative vs recursive binary search.
  33. Analyze complexity of array vs linked list insertion/deletion.
  34. Calculate complexity of dynamic array resizing.
  35. Explain time complexity of hash collision resolution methods (chaining, open addressing).
  36. Analyze space complexity of trie data structure.
  37. Calculate time complexity of radix sort.
  38. Compare time complexity of different sorting algorithms on nearly sorted data.
  39. Calculate complexity of counting sort.
  40. Analyze worst-case scenario for quicksort.
  41. Explain time-space trade-offs with example of lookup tables.
  42. Analyze time complexity of substring search with Rabin-Karp.
  43. Calculate complexity of finding duplicates using sorting vs hashing.
  44. Analyze the complexity of string reversal.
  45. Explain time complexity of palindrome checking.
  46. Calculate time complexity of all pairs shortest path (Floyd-Warshall).
  47. Compare complexity of BFS vs DFS for cycle detection.
  48. Calculate complexity of dynamic programming solutions for Fibonacci.
  49. Explain time complexity of topological sort.
  50. Analyze complexity of graph coloring heuristics.
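
Assignment 20 above can be verified by experiment: counting recursive calls shows why memoization turns exponential Fibonacci into linear. A minimal Python sketch (the call counter is only for illustration):

```python
def fib_naive(n, counter):
    """Naive recursion: roughly O(phi^n) calls for input n."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, cache=None):
    """Memoized recursion: each value computed once -> O(n) time and space."""
    if cache is None:
        cache = {}
    if n not in cache:
        cache[n] = n if n < 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```

For n = 10 the naive version already makes 177 calls, while the memoized one makes at most one computation per value.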

 

Part 2: Mathematics for DSA (50+ assignments)

Prime Numbers and Sieve

  1. Implement and analyze sieve of Eratosthenes for primes up to 10^5.
  2. Modify sieve to return prime count up to N.
  3. Find sum of primes up to 10^6 using sieve.
  4. Count twin primes up to 10^6.
  5. Implement naive prime checker using trial division.
  6. Compare sieve and naive prime checker performance.
  7. Implement segmented sieve for range [L, R].
  8. Use sieve to factorize numbers up to 10^6.
  9. Find largest prime factor of a number using sieve.
  10. Implement prime checking using Miller-Rabin test.
  11. Generate all primes between 1 and 10^7 using sieve.
  12. Find prime gaps between consecutive primes up to 10^6.
  13. Find first 100 primes.
  14. Find sum of first N primes.
  15. Check if number is perfect square of a prime.
  16. Find primes in arithmetic progression using sieve.
  17. Count prime numbers in a large interval efficiently.
  18. Calculate product of primes up to N modulo M.
  19. Implement Eratosthenes sieve with bitset optimization.
  20. Find the smallest prime factor for all numbers up to N.
  21. Check Goldbach’s conjecture for even numbers up to 10^6.
  22. Find number of primes with digit sum equal to X.
  23. Find primes that are palindromes.
  24. Find next prime number after N.
  25. Calculate Euler’s Totient function for numbers up to N.
  26. Calculate GCD using Euclidean Algorithm.
  27. Implement Extended Euclidean Algorithm.
  28. Solve linear Diophantine equations using extended Euclidean.
  29. Calculate LCM using GCD.
  30. Implement fast modular exponentiation.
  31. Use modular exponentiation to compute large powers modulo M.
  32. Calculate modular inverse using Fermat’s little theorem.
  33. Implement combinatorial nCr using Pascal’s triangle.
  34. Precompute factorials and inverse factorials modulo M for fast nCr.
  35. Implement Lucas theorem for large nCr modulo prime.
  36. Use prefix sums to answer range sum queries.
  37. Implement difference array for range update and point query.
  38. Use prefix sums in 2D arrays.
  39. Solve subarray sum problems using prefix sums.
  40. Implement bit manipulation for counting set bits.
  41. Use bit manipulation to generate all subsets of a set.
  42. Implement Brian Kernighan’s algorithm for bit counting.
  43. Use bitmask DP to solve subset sum problem.
  44. Find XOR of all elements in an array.
  45. Implement left and right bit shifts and explain effect.
  46. Solve problems involving checking power of two using bits.
  47. Implement and explain combinatorial permutations (nPr).
  48. Calculate number of derangements for n elements.
  49. Solve problems combining prefix sums and combinatorics.
  50. Explain how modular arithmetic applies to hash functions.
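
Assignments 1–3 build on the same core routine. One possible Python sketch of the Sieve of Eratosthenes:

```python
def sieve(n):
    """Sieve of Eratosthenes: primality flags for 0..n in O(n log log n)."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # p*p is the first multiple not already crossed off
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return is_prime

def primes_up_to(n):
    return [i for i, ok in enumerate(sieve(n)) if ok]
```

Counting, summing, or gap-scanning primes up to N are then one-liners over the returned list.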

Optional stretch tasks

  1. Implement segmented sieve for prime factorization on huge ranges.
  2. Use sieve to solve prime factor queries online.
  3. Implement multiplicative inverse using extended Euclidean.
  4. Create a mini library for modular arithmetic (add, subtract, multiply, divide).
  5. Compare iterative vs recursive factorial with memoization.
  6. Optimize prefix sums for dynamic arrays.
  7. Use Fenwick tree or Segment tree to answer prefix sum queries with updates.
  8. Implement fast exponentiation for matrices.
  9. Explore connections between combinatorics and probability in algorithms.
  10. Solve classic combinatorial puzzles using code (e.g., Catalan numbers).
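
Stretch task 7 asks for prefix sums that survive updates; a Fenwick (binary indexed) tree handles point update and prefix query both in O(log n). A minimal sketch, 0-indexed on the outside:

```python
class Fenwick:
    """Fenwick tree: point update and prefix sum, each O(log n)."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def update(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # jump to the next responsible node

    def prefix_sum(self, i):
        """Sum of elements [0..i]."""
        i += 1
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return total
```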

     

 PHASE 1: Basic Recursion (Understanding the Flow)

Goal: Understand base case, recursive case, and how call stack works

Assignments:

  1. Print Numbers 1 to N and N to 1 (both ways)
  2. Factorial of N
  3. Sum of Digits of a Number
  4. Fibonacci Series (basic recursive)
  5. Power of a Number (a^b)

 

 PHASE 2: Intermediate Recursion (Strings, Arrays, Backtracking Intro)

Goal: Practice recursion with decisions and branching

Assignments:

  1. Reverse a String using Recursion
  2. Check if String is Palindrome
  3. Find Max/Min in an Array Recursively
  4. Check if Array is Sorted
  5. Linear Search in Array using Recursion

 

 PHASE 3: Advanced Recursion (Combinatorics & Backtracking Base)

Goal: Solve subset problems, string operations, and decision trees

 Assignments:

  1. Generate All Subsets of Array/String (Power Set)
  2. Generate All Permutations of a String
  3. Count Paths in Grid (Right/Down only)
  4. Print All Balanced Parentheses (n pairs)
  5. String to Integer Conversion using Recursion

 

 PHASE 4: Recursive + Optimization Concepts

Goal: Merge recursion with optimization (divide & conquer, pruning, memoization)

Assignments:

  1. Binary Search using Recursion
  2. Merge Sort using Recursion
  3. Quick Sort using Recursion
  4. Tower of Hanoi
  5. Generate All Binary Strings Without Consecutive 1s

 

 PHASE 5: Real-World Recursive Challenges

Goal: Solve deep recursive problems asked in interviews and competitive coding

Assignments:

  1. Josephus Problem
  2. Letter Combinations of a Phone Number (like keypad input)
  3. Generate All Palindromic Partitions
  4. Subset Sum Problem
  5. Word Break (check if string can be segmented using dictionary)

 

PHASE 1: Basic Backtracking (Foundation Level)

Goal: Understand recursion + backtracking flow (choose → explore → un-choose)

Assignments:

  1. Subset Sum (Yes/No)
    • Check if a subset exists with given sum.
  2. Generate All Subsets (Power Set)
    • Input: [1, 2, 3] → Output: [[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]
  3. String Permutations
    • Input: "ABC" → Output: "ABC", "ACB", "BAC", ...
  4. Palindrome Partitioning
    • Input: "aab" → Output: [["a","a","b"], ["aa","b"]]
  5. Rat in a Maze (4 directions)
    • Maze of 1s and 0s, find all paths from top-left to bottom-right.

 PHASE 2: Intermediate Backtracking (Constraint Solving)

Goal: Solve constraint-based branching problems using recursion

Assignments:

  1. N-Queens Problem
    • Place N queens on N×N board such that no two attack each other.
  2. Sudoku Solver
    • Fill a partially completed 9x9 grid following Sudoku rules.
  3. Word Search in 2D Grid
    • Search word from a 2D board using DFS/backtracking.
  4. Unique Permutations with Duplicates
    • Input: [1, 1, 2] → Avoid duplicate permutations.
  5. Combination Sum (1 & 2)
    • Find all combinations of numbers that sum to a target.

 

 PHASE 3: Advanced Backtracking (Optimization + Pruning)

Goal: Combine backtracking with smart pruning, optimization, and state tracking

Assignments:

  1. Knight’s Tour Problem
    • Visit every square once with knight's L-shaped moves.
  2. Solve M-Coloring of Graph
    • Color graph vertices with at most M colors such that no adjacent share color.
  3. Word Break II
    • Given a string and dictionary, return all valid sentence combinations.
  4. Expression Add Operators
    • Input: "123", target=6 → Output: ["1+2+3", "1*2*3"]
  5. Tug of War (Partition array into two balanced subsets)
    • Minimize diff of two subsets.
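
Assignment 2 (M-coloring) shows pruning directly: a color is attempted only when no already-colored neighbour uses it, so dead branches are cut before recursing. One possible sketch (adjacency lists, colors 1..m):

```python
def m_coloring(adj, m):
    """True if the graph given by adjacency lists is m-colorable."""
    n = len(adj)
    color = [0] * n  # 0 means uncolored

    def solve(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            # prune: skip c if any neighbour already has it
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if solve(v + 1):
                    return True
                color[v] = 0  # backtrack
        return False

    return solve(0)
```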

 

PHASE 4: Expert Challenges (Interview-Style)

Goal: Apply deep recursion with optimization, bitmasking, and memoization

Assignments:

  1. Restore IP Addresses
    • Input: "25525511135" → Output: All valid IPs.
  2. Match Regular Expression (., *)
    • Implement regex engine with backtracking.
  3. Robot Room Cleaner (Leetcode Hard)
    • Robot moves in unknown grid, must clean every accessible cell.
  4. Minimum Number of Squares (Integer Partition)
    • Example: 12 = 4+4+4 or 9+1+1+1... → find minimum count.
  5. Travelling Salesman Problem (TSP) using Backtracking + Bitmasking
    • Find minimum path visiting all cities.
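
For assignment 5, the standard upgrade over pure backtracking is Held-Karp: memoize on (visited-mask, current city), giving O(2^n · n^2) instead of O(n!). A minimal sketch for a tour starting and ending at city 0:

```python
from functools import lru_cache

def tsp(dist):
    """Minimum-cost tour over all cities, given a full distance matrix."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def best(mask, pos):
        if mask == (1 << n) - 1:
            return dist[pos][0]  # all visited: return to start
        ans = float("inf")
        for nxt in range(n):
            if not mask & (1 << nxt):  # nxt not yet visited
                ans = min(ans, dist[pos][nxt] + best(mask | (1 << nxt), nxt))
        return ans

    return best(1, 0)  # mask=1: only city 0 visited
```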

 


 Phase 1: Basics (1D DP)

Learn fundamentals like state transition, base case, and 1D dp[]

  1. Fibonacci Number
  2. Climbing Stairs
  3. Min Cost Climbing Stairs
  4. House Robber
  5. Maximum Sum of Non-Adjacent Elements
  6. Decode Ways (A=1, B=2...)
  7. Jump Game I (Can reach end?)
  8. Jump Game II (Min jumps to end)

 

 Phase 2: Subset / Knapsack DP (2D DP)

Practice problems with dp[i][j] pattern

  1. 0/1 Knapsack
  2. Subset Sum
  3. Equal Subset Partition
  4. Count Subsets with Given Sum
  5. Target Sum
  6. Perfect Sum Problem
  7. Partition with Minimum Absolute Difference

 

 Phase 3: String & LCS Based DP

Work on problems with dp[i][j] based on two strings

  1. Longest Common Subsequence (LCS)
  2. Longest Palindromic Subsequence
  3. Longest Palindromic Substring
  4. Edit Distance
  5. Shortest Common Supersequence
  6. Sequence Pattern Matching
  7. Wildcard Matching
  8. Regular Expression Matching

 

Phase 4: Grid & Coin Problems

Learn tabulation across grids and combinations

  1. Unique Paths
  2. Unique Paths II (with obstacles)
  3. Minimum Path Sum
  4. Coin Change 1 (Min coins to make value)
  5. Coin Change 2 (Number of ways to make value)
  6. Gold Mine Problem
  7. Cherry Pickup (advanced grid DP)

 

Phase 5: Partition & Palindrome Problems

Focus on splitting and recursive subproblem DP

  1. Palindrome Partitioning
  2. Palindrome Partitioning II (Min cuts)
  3. Boolean Parenthesization
  4. Matrix Chain Multiplication
  5. Evaluate Expression to True
  6. Burst Balloons

 

 Phase 6: Advanced Topics

Covers DP on trees, memory optimization, and hard state transitions

  1. Egg Dropping Puzzle
  2. Painting Houses / Fences
  3. Job Scheduling with Profits
  4. Digit DP (Count numbers with constraints)
  5. LIS (Longest Increasing Subsequence)
  6. DP on Trees (subtree sums, max path sum, etc.)
  7. DP + Bitmask (Travelling Salesman, N-Queens)
  8. K-th Order Statistics in Arrays (Bonus)

 

Platform-wise DP Resources

  1. LeetCode
  2. GeeksforGeeks (GFG)
  3. Codeforces