I'm always excited to take on new projects and collaborate with innovative minds.
say@niteshsynergy.com
https://www.niteshsynergy.com/
Phase 1: Introduction
Phase 2: DSA Topic Code Practice: https://github.com/niteshsynergy/DSA_Phase2
| Week | Topic (with Phase 2 practice code: CRUD-style exercises for each topic, so you become an expert in it) |
|---|---|
| Week 1 | Introduction to DSA + Time & Space Complexity & Arrays (Basics, Sliding Window, Prefix Sum) & Strings (Palindrome, Anagram, Substring Search) |
| Week 2 | Searching Algorithms (Linear, Binary Search & Variants) |
| Week 3 | Sorting Algorithms (Bubble, Merge, Quick, Count Sort) |
| Week 4 | Hashing (HashMap, HashSet, Frequency Maps, Hashing Tricks) |
| Week 5 | Stack (NGE, Stock Span, Balanced Brackets, Histogram Area) |
| Week 6 | Queue & Deque (Circular Queue, Sliding Window Maximum) |
| Week 7 | Linked List (Reverse, Detect Loop, Merge, K-Reverse) |
| Week 8 | Trees - Part 1 (Traversals, Height, Diameter, Leaf Count) & Trees - Part 2 (Binary Search Tree, LCA, Floor/Ceil, Insertion/Deletion) |
| Week 9 | Heaps & Priority Queue (Heap Sort, Kth Largest, Median in Stream) |
| Week 10 | Tries (Prefix Matching, Auto-complete, Word Dictionary) |
| Week 11 | Graphs - Part 1 (BFS, DFS, Connected Components, Grid Problems) & Graphs - Part 2 (Dijkstra, Kruskal, Prim’s, Topological Sort, Cycle Detection) |
| Week 12 | Recursion + Backtracking (N-Queens, Subsets, Maze) |
| Week 13 | Dynamic Programming - Part 1 (Fibonacci, 0/1 Knapsack, LCS, LIS) |
| Week 14 | Dynamic Programming - Part 2 (Matrix Chain Multiplication, Partition, Palindromes) |
| Week 15 | Greedy Algorithms (Activity Selection, Job Sequencing, Interval Problems) |
| Week 16 | Sliding Window + Two Pointers (Max Sum Subarray, Rainwater, Unique Window) |
| Week 17 | Bit Manipulation (XOR, Bit Masks, Single Number, Count Set Bits) |
| Week 18 | Segment Tree & Disjoint Set Union (Range Queries, Union-Find) |
DSA stands for Data Structures and Algorithms. It is the core foundation of computer science that helps in writing efficient and optimized code.
| Reason | Explanation |
|---|---|
| Efficient Code | Helps you write faster, memory-optimized code. |
| Problem Solving | Sharpens your logical thinking and structured approach. |
| Interview Preparation | Almost every tech company focuses on DSA in coding interviews (Amazon, Google, etc.). |
| Build Strong Foundations | Boosts confidence to learn advanced topics like AI, OS, Networks. |
| Competitive Programming | DSA is the core of platforms like LeetCode, Codeforces, HackerRank. |
| System Design Help | Understanding how data is structured gives a deep insight into system architecture. |
Data is organized in a sequential (linear) manner.
| Type | Description | Example Use Case |
|---|---|---|
| Array | Collection of elements stored at contiguous memory | Storing list of items |
| Linked List | Elements are connected via pointers | Dynamic memory allocation |
| Stack | LIFO (Last In, First Out) | Undo operations |
| Queue | FIFO (First In, First Out) | Scheduling processes |
| Deque | Double-ended queue | Browser history, palindromes |
Data is organized hierarchically or in complex relationships.
| Type | Description | Example Use Case |
|---|---|---|
| Tree | Hierarchical structure | File systems, HTML DOM |
| Binary Tree / BST | Binary tree with ordering | Searching, sorting |
| Heap (Min/Max) | Complete binary tree for priority | Priority queue |
| Trie | Prefix tree | Autocomplete, dictionary |
| Graph | Nodes connected by edges | Maps, social networks |
Efficient access using hash functions.
| Type | Description | Example Use Case |
|---|---|---|
| Hash Table / HashMap | Key-value pairs | Caching, fast lookups |
| HashSet | Unique elements only | Duplicate detection |
Used to find a specific element in a structure.
| Type | Description |
|---|---|
| Linear Search | Check each element one-by-one |
| Binary Search | Search in sorted data (divide & conquer) |
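Binary Search repeatedly halves a sorted search space until the key is found or the range is empty. A minimal Java sketch (class and method names are illustrative):

```java
public class BinarySearchDemo {
    // Returns the index of key in a sorted array, or -1 if absent.
    static int binarySearch(int[] arr, int key) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow compared to (lo + hi) / 2
            if (arr[mid] == key) return mid;
            if (arr[mid] < key) lo = mid + 1; // key is in the right half
            else hi = mid - 1;                // key is in the left half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 4, 5, 7, 8, 11};
        System.out.println(binarySearch(sorted, 7));  // 3
        System.out.println(binarySearch(sorted, 6));  // -1
    }
}
```

Each iteration discards half of the remaining range, which is where the O(log n) cost comes from.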
Used to reorder elements.
| Type | Description |
|---|---|
| Bubble Sort | Compare adjacent elements |
| Selection Sort | Select min/max and place |
| Insertion Sort | Build sorted list |
| Merge Sort | Divide and conquer |
| Quick Sort | Partition-based divide & conquer |
| Heap Sort | Use heap data structure |
A function calls itself to explore all possibilities.
| Type | Description |
|---|---|
| Recursion | Solve via repeated calls |
| Backtracking | Try all solutions and backtrack if needed (used in puzzles, path finding) |
Break down problems into subproblems and store their results (memoization/tabulation).
| Use Case | Examples |
|---|---|
| DP | Fibonacci, Knapsack, Longest Common Subsequence (LCS) |
Make the locally optimal choice at each step.
| Use Case | Examples |
|---|---|
| Greedy | Coin change, Activity selection, Huffman coding |
Divide the problem into subproblems, solve them independently, then combine.
| Use Case | Examples |
|---|---|
| Divide & Conquer | Merge sort, Quick sort, Binary search |
Used for graph-related problems.
| Type | Description |
|---|---|
| DFS / BFS | Explore graphs |
| Dijkstra’s | Shortest path |
| Kruskal’s / Prim’s | Minimum spanning tree |
| Topological Sort | Task scheduling |
→ Next →
Let's start from the basics →
Time Complexity measures the amount of computational time an algorithm takes relative to the input size n.
Think of it as:
"How does the algorithm scale as input grows?"
It’s about how many operations your code performs, not about how long in seconds it takes (that depends on hardware).
Space Complexity refers to the amount of memory an algorithm uses relative to the input size. This includes the input storage plus any auxiliary space: variables, data structures, and the recursion call stack.
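For example, summing an array can be done with O(1) auxiliary space (one accumulator) or O(n) auxiliary space (storing every running total); both take O(n) time. An illustrative sketch (names are mine, not from any library):

```java
public class SpaceDemo {
    // O(1) auxiliary space: a single accumulator variable.
    static int sumConstantSpace(int[] arr) {
        int sum = 0;
        for (int x : arr) sum += x;
        return sum;
    }

    // O(n) auxiliary space: stores all running totals in a new array.
    static int[] runningTotals(int[] arr) {
        int[] totals = new int[arr.length];
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i];
            totals[i] = sum;
        }
        return totals;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(sumConstantSpace(a));                          // 10
        System.out.println(java.util.Arrays.toString(runningTotals(a)));  // [1, 3, 6, 10]
    }
}
```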
These are formal notations that describe growth rates:
| Notation | Represents | Meaning |
|---|---|---|
| O(f(n)) | Big-O | Upper bound (Worst-case scenario) |
| Ω(f(n)) | Big-Omega | Lower bound (Best-case scenario) |
| Θ(f(n)) | Big-Theta | Tight bound (When upper = lower = actual) |
"The algorithm won't be worse than this."
Example:
int findMax(int[] arr) {
int max = arr[0];
for (int i = 1; i < arr.length; i++) {
if (arr[i] > max) max = arr[i];
}
return max;
}
Time Complexity: O(n) – because in the worst case, we scan all elements.
Use Big-O in interviews because companies want to know how bad your solution can get.
"The algorithm won't be better than this."
Same example:
if (arr.length == 1) return arr[0];
Best case: Ω(1) – if array size is 1.
"It always behaves like this."
When the algorithm runs exactly in linear time, regardless of input (no best or worst):
for (int i = 0; i < n; i++) System.out.println(i);
Time: Θ(n)
int find(int[] arr, int key) {
for (int i = 0; i < arr.length; i++) {
if (arr[i] == key) return i;
}
return -1;
}
| Case | Scenario | Time Complexity |
|---|---|---|
| Best Case | Key is at arr[0] | O(1) |
| Worst Case | Key is at arr[n-1] or not found | O(n) |
| Average Case | Key is somewhere in the middle | O(n) |
| Complexity | Description | Example |
|---|---|---|
| O(1) | Constant Time | Accessing an array element |
| O(log n) | Logarithmic | Binary Search |
| O(n) | Linear | Loop through array |
| O(n log n) | Linearithmic | Merge Sort, Quick Sort (avg) |
| O(n²) | Quadratic | Nested loops (Bubble Sort) |
| O(2ⁿ) | Exponential | Recursion (Fibonacci without DP) |
| O(n!) | Factorial | Generating all permutations |
For anything above O(n log n), optimize it unless the input size is guaranteed small.
Often, you trade more memory to get faster time, or vice versa.
Map<Integer, Integer> cache = new HashMap<>();
int fib(int n) {
if (cache.containsKey(n)) return cache.get(n);
if (n <= 1) return n;
int result = fib(n - 1) + fib(n - 2);
cache.put(n, result);
return result;
}
| Approach | Complexity |
|---|---|
| Without Memoization | O(2ⁿ) Time, O(n) Space |
| With Memoization | O(n) Time, O(n) Space |
Space used = performance gained
| Use Case | Time Complexity | Why It Matters |
|---|---|---|
| Searching in DB Index | O(log n) | Fast access, scalable |
| Google Search Ranking | O(n log n) | Efficient sorting & relevance |
| Encryption (RSA) | O(log n) or more | Needs prime checking |
| AI Neural Net Training | O(n^2)+ | Huge memory and CPU needed |
| Image Compression | O(n) | Balance quality vs. processing time |
A prime number is a natural number greater than 1 that has exactly two divisors: 1 and itself.
Examples: 2, 3, 5, 7, 11, 13, 17, ...
Non-prime numbers are called composite numbers (like 4, 6, 8, 9, etc.).
Check if a number n is divisible by any number from 2 to sqrt(n).
boolean isPrime(int n) {
if (n <= 1) return false;
for (int i = 2; i <= Math.sqrt(n); i++) {
if (n % i == 0) return false;
}
return true;
}
O(√n)
Not feasible for finding primes up to 10^6 or 10^7.
Hence, we need an optimized algorithm...
To find all prime numbers up to a given number N in O(n log log n) time.
If a number is not prime, it must be divisible by some smaller prime number.
So, we assume all numbers from 2 to N are prime, then repeatedly mark the multiples of each prime as composite.
Let’s say N = 30
We'll maintain a boolean array isPrime[0…30], initialized to true (except isPrime[0] = false, isPrime[1] = false)
boolean[] isPrime = new boolean[n + 1];
Arrays.fill(isPrime, true);
isPrime[0] = isPrime[1] = false;
for (int i = 2; i * i <= n; i++) {
if (isPrime[i]) {
for (int j = i * i; j <= n; j += i) {
isPrime[j] = false;
}
}
}
n/2 + n/3 + n/5 + n/7 + ... ≈ n × log(log n)
So, Total Time: O(n log log n)
O(n) for the isPrime[] boolean array.
N = 30
Initial Array (T = True):
[ F, F, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T ]
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
→ Start with i = 2 → mark 4, 6, 8, 10...30
Next i = 3 → mark 9, 12, 15, 18...30
Skip 4 (already false), next i = 5, mark 25, 30...
Final isPrime[] shows all prime indices:
[ F, F, T, T, F, T, F, T, F, F, F, T, F, T, F, F, F, T, F, T, F, F, F, T, F, F, F, T, F, T, F ]
→ Indices that are still true: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
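Putting the marking loop together, here is a runnable sketch that collects the primes up to N (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SieveDemo {
    // Sieve of Eratosthenes: returns all primes in [2, n].
    static List<Integer> primesUpTo(int n) {
        boolean[] isPrime = new boolean[n + 1];
        Arrays.fill(isPrime, true);
        isPrime[0] = isPrime[1] = false;
        for (int i = 2; (long) i * i <= n; i++) {
            if (isPrime[i]) {
                // Start at i*i: smaller multiples were marked by smaller primes.
                for (int j = i * i; j <= n; j += i) isPrime[j] = false;
            }
        }
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++) if (isPrime[i]) primes.add(i);
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```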
Why start marking multiples at i * i instead of 2 * i? Because smaller multiples of i have already been marked by smaller primes.
Used when N is large (e.g. 10^9), and memory is limited.
It divides the range into chunks and applies the sieve in segments.
Applications of Sieve
| Use Case | Description |
|---|---|
| Cryptography | RSA uses large primes for public/private key generation |
| Competitive Programming | Problems involving factorization, coprime pairs, LCM/GCD |
| Totient Function | φ(n) uses prime factorization |
| Number Theory | Sum/count of divisors, Euler’s Theorem |
| Preprocessing Primes | For quick lookups in coding interviews |
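The segmented sieve mentioned above can be sketched as follows: first sieve the primes up to √hi, then use them to mark composites inside the window [lo, hi]. A simplified, illustrative version (not production code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SegmentedSieveDemo {
    // Returns the primes in [lo, hi] without allocating a full array up to hi.
    static List<Long> primesInRange(long lo, long hi) {
        int limit = (int) Math.sqrt(hi) + 1;
        // Base sieve: primes up to sqrt(hi).
        boolean[] small = new boolean[limit + 1];
        Arrays.fill(small, true);
        small[0] = small[1] = false;
        for (int i = 2; (long) i * i <= limit; i++)
            if (small[i])
                for (int j = i * i; j <= limit; j += i) small[j] = false;

        // Segment: one flag per number in [lo, hi].
        int size = (int) (hi - lo + 1);
        boolean[] seg = new boolean[size];
        Arrays.fill(seg, true);
        for (int p = 2; p <= limit; p++) {
            if (!small[p]) continue;
            // First multiple of p inside the window, but no lower than p*p.
            long start = Math.max((long) p * p, ((lo + p - 1) / p) * p);
            for (long j = start; j <= hi; j += p) seg[(int) (j - lo)] = false;
        }
        List<Long> primes = new ArrayList<>();
        for (int i = 0; i < size; i++)
            if (seg[i] && lo + i >= 2) primes.add(lo + i);
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesInRange(90, 110)); // [97, 101, 103, 107, 109]
    }
}
```

Only O(√hi + (hi − lo)) memory is needed, which is why this works even when hi is around 10^9.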
Key idea: gcd(a, b) = gcd(b, a % b) and gcd(a, 0) = a.
Code: GCD (Iterative & Recursive)
// Recursive GCD
int gcd(int a, int b) {
if (b == 0) return a;
return gcd(b, a % b);
}
// Iterative GCD
int gcdIter(int a, int b) {
while (b != 0) {
int temp = b;
b = a % b;
a = temp;
}
return a;
}
Example:
Find GCD of a = 48 and b = 18.
int gcd(int a, int b) {
if (b == 0) return a; // Base case: if second number is 0, gcd is 'a'
return gcd(b, a % b); // Recursive call with (b, remainder of a divided by b)
}
Call chain: gcd(48, 18) → gcd(18, 12) → gcd(12, 6) → gcd(6, 0) = 6. Now the recursive calls unwind, and the final result is 6.
2. Iterative GCD
int gcdIter(int a, int b) {
while (b != 0) {
int temp = b; // Store current b
b = a % b; // Update b to remainder of a divided by b
a = temp; // Update a to old b value
}
return a; // When b is 0, a is the GCD
}
Trace: (a=48, b=18) → (a=18, b=12) → (a=12, b=6) → (a=6, b=0).
Return a = 6
Formula:
LCM(a, b) = |a × b| / GCD(a, b)
int lcm(int a, int b) {
return (a / gcd(a, b)) * b;
}
Step 1: Find gcd(48, 18) = 6
Step 2: Calculate LCM = (48 / 6) × 18 = 8 × 18 = 144
So, LCM(48, 18) = 144 is correct.
Why Important?
Performing arithmetic operations under a modulus (usually a prime number or large number like 10^9+7).
E.g., compute (a + b) % mod, (a * b) % mod.
- Addition: (a + b) % mod
- Subtraction: (a - b + mod) % mod (to avoid negatives)
- Multiplication: (a * b) % mod
- Division: multiply by the modular inverse (when mod is prime, use Fermat’s Little Theorem)
long mod = 1_000_000_007;
long modAdd(long a, long b) {
return (a + b) % mod;
}
long modMul(long a, long b) {
return (a * b) % mod;
}
Calculate a^b mod m efficiently for large b.
Naive approach: multiply a by itself b times → O(b) time, too slow for large b.
Use divide and conquer by exploiting:
a^b = (a^(b/2))^2 if b is even, and a^b = a × a^(b−1) if b is odd.
long modPow(long a, long b, long mod) {
if (b == 0) return 1;
long half = modPow(a, b / 2, mod);
long result = (half * half) % mod;
if (b % 2 == 1) result = (result * a) % mod;
return result;
}
We want: 3^13 mod 17
Call: modPow(3, 13, 17)
Call: modPow(3, 6, 17)
Call: modPow(3, 3, 17)
Call: modPow(3, 1, 17)
Call: modPow(3, 0, 17)
Result: 3^13 mod 17 = 12
O(log b) — drastically faster than naive.
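The same idea can also be written iteratively (binary exponentiation), which avoids recursion depth entirely. A sketch, with illustrative names:

```java
public class FastPowDemo {
    // Iterative binary exponentiation: O(log b) multiplications.
    static long modPow(long a, long b, long mod) {
        long result = 1;
        a %= mod;
        while (b > 0) {
            if ((b & 1) == 1) result = (result * a) % mod; // use this bit of b
            a = (a * a) % mod;                             // square the base
            b >>= 1;                                       // shift to the next bit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(modPow(3, 13, 17)); // 12
    }
}
```

Each loop iteration consumes one bit of b, so the loop runs ⌈log₂ b⌉ times.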
nCr = n! / (r! × (n−r)!)
Precompute factorials and inverse factorials to calculate nCr modulo mod.
long mod = 1_000_000_007;
int MAX = 1000000;
long[] fact = new long[MAX+1];
long[] invFact = new long[MAX+1];
void precompute() {
fact[0] = 1;
for (int i = 1; i <= MAX; i++) {
fact[i] = (fact[i-1] * i) % mod;
}
invFact[MAX] = modInverse(fact[MAX], mod);
for (int i = MAX-1; i >= 0; i--) {
invFact[i] = (invFact[i+1] * (i+1)) % mod;
}
}
// Modular inverse using Fermat's little theorem (mod must be prime)
long modInverse(long a, long m) {
return modPow(a, m - 2, m);
}
long nCr(int n, int r) {
if (r > n) return 0;
return (((fact[n] * invFact[r]) % mod) * invFact[n - r]) % mod;
}
Step 1: Precompute factorials fact[]
fact[0] = 1;
for (int i = 1; i <= MAX; i++) {
fact[i] = (fact[i-1] * i) % mod;
}
fact[i] stores i! mod mod.
Step 2: Precompute inverse factorials invFact[]
invFact[MAX] = modInverse(fact[MAX], mod);
for (int i = MAX-1; i >= 0; i--) {
invFact[i] = (invFact[i+1] * (i+1)) % mod;
}
invFact[i] stores the modular inverse of fact[i], i.e., (i!)^(−1) mod mod.
First set invFact[MAX] using the modular inverse of fact[MAX], then fill downward:
invFact[i] = invFact[i+1] × (i+1) mod mod
This works because:
(i!)^(−1) = ((i+1)!)^(−1) × (i+1) mod mod
Step 3: Modular inverse function (Fermat’s Little Theorem)
long modInverse(long a, long m) {
return modPow(a, m - 2, m);
}
Uses Fermat’s little theorem:
If m is prime, the inverse of a modulo m is:
a^(m−2) mod m
computed with modPow for fast exponentiation.
Step 4: Calculate nCr modulo mod
long nCr(int n, int r) {
if (r > n) return 0;
return (((fact[n] * invFact[r]) % mod) * invFact[n - r]) % mod;
}
C(n, r) = n! / (r! × (n−r)!) mod mod = fact[n] × invFact[r] × invFact[n−r] mod mod
Calculate: for n = 5, r = 2,
fact[5] = 120, invFact[2] = 500000004 (inverse of 2), invFact[3] = 166666668 (inverse of 6).
nCr = (120 × 500000004 × 166666668) mod 1,000,000,007 = 10
So, C(5, 2) = 10, which is correct.
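Tying the pieces together in one runnable class (a smaller MAX is used here for illustration; the iterative modPow is equivalent to the recursive one above):

```java
public class NCrDemo {
    static final long MOD = 1_000_000_007L;
    static final int MAX = 1000;
    static long[] fact = new long[MAX + 1];
    static long[] invFact = new long[MAX + 1];

    // Iterative fast power, used for the Fermat inverse.
    static long modPow(long a, long b, long mod) {
        long r = 1; a %= mod;
        while (b > 0) {
            if ((b & 1) == 1) r = (r * a) % mod;
            a = (a * a) % mod;
            b >>= 1;
        }
        return r;
    }

    static void precompute() {
        fact[0] = 1;
        for (int i = 1; i <= MAX; i++) fact[i] = (fact[i - 1] * i) % MOD;
        invFact[MAX] = modPow(fact[MAX], MOD - 2, MOD); // Fermat: a^(m-2) mod m
        for (int i = MAX - 1; i >= 0; i--) invFact[i] = (invFact[i + 1] * (i + 1)) % MOD;
    }

    static long nCr(int n, int r) {
        if (r < 0 || r > n) return 0;
        return fact[n] * invFact[r] % MOD * invFact[n - r] % MOD;
    }

    public static void main(String[] args) {
        precompute();
        System.out.println(nCr(5, 2));  // 10
        System.out.println(nCr(10, 3)); // 120
    }
}
```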
Precompute sums to answer sum queries efficiently.
Given array arr = [1, 2, 3, 4]
Prefix sum prefix = [0, 1, 3, 6, 10] (0-based for ease)
Sum from l to r:
sum = prefix[r + 1] − prefix[l]
Code:
int[] prefixSum(int[] arr) {
int n = arr.length;
int[] prefix = new int[n + 1];
prefix[0] = 0;
for (int i = 1; i <= n; i++) {
prefix[i] = prefix[i-1] + arr[i-1];
}
return prefix;
}
// Query sum l to r
int rangeSum(int[] prefix, int l, int r) {
return prefix[r+1] - prefix[l];
}
Step by step:
arr = [2, 4, 5, 7, 8]
prefix[0] = 0 (by definition)
prefix[1] = prefix[0] + arr[0] = 0 + 2 = 2
prefix[2] = prefix[1] + arr[1] = 2 + 4 = 6
prefix[3] = prefix[2] + arr[2] = 6 + 5 = 11
prefix[4] = prefix[3] + arr[3] = 11 + 7 = 18
prefix[5] = prefix[4] + arr[4] = 18 + 8 = 26
So, prefix array is:
prefix = [0, 2, 6, 11, 18, 26]
rangeSumQuery: sum of elements from index l=1 to r=3 (which means elements 4, 5, 7)
Calculate:
rangeSum(prefix, 1, 3) = prefix[3 + 1] - prefix[1] = prefix[4] - prefix[1] = 18 - 2 = 16
Check manually:
arr[1] + arr[2] + arr[3] = 4 + 5 + 7 = 16
Perfect!
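The worked example above, end to end as one runnable sketch (class name is illustrative):

```java
import java.util.Arrays;

public class PrefixSumDemo {
    // prefix[i] = sum of the first i elements; prefix[0] = 0.
    static int[] prefixSum(int[] arr) {
        int[] prefix = new int[arr.length + 1];
        for (int i = 1; i <= arr.length; i++) prefix[i] = prefix[i - 1] + arr[i - 1];
        return prefix;
    }

    // Sum of arr[l..r] in O(1).
    static int rangeSum(int[] prefix, int l, int r) {
        return prefix[r + 1] - prefix[l];
    }

    public static void main(String[] args) {
        int[] prefix = prefixSum(new int[]{2, 4, 5, 7, 8});
        System.out.println(Arrays.toString(prefix)); // [0, 2, 6, 11, 18, 26]
        System.out.println(rangeSum(prefix, 1, 3));  // 16
    }
}
```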
For efficient range update queries.
To add val to range [l, r] in O(1):
diff[l] += val
diff[r+1] -= val (if r+1 < n)
Then the prefix sum of diff gives the updated array.
Code:
int[] buildDiffArray(int[] arr) {
int n = arr.length;
int[] diff = new int[n];
diff[0] = arr[0];
for (int i = 1; i < n; i++) {
diff[i] = arr[i] - arr[i-1];
}
return diff;
}
void rangeUpdate(int[] diff, int l, int r, int val) {
diff[l] += val;
if (r + 1 < diff.length) diff[r+1] -= val;
}
int[] buildUpdatedArray(int[] diff) {
int n = diff.length;
int[] arr = new int[n];
arr[0] = diff[0];
for (int i = 1; i < n; i++) {
arr[i] = arr[i-1] + diff[i];
}
return arr;
}
Suppose: arr = [10, 20, 30, 40, 50]
Step 1: Build difference array
diff[0] = arr[0] = 10, and each later diff[i] = arr[i] − arr[i−1] = 10. So,
diff = [10, 10, 10, 10, 10]
Step 2: Range update
Now suppose you want to add 5 to elements from index 1 to 3 (i.e., arr[1], arr[2], arr[3])
Call:
rangeUpdate(diff, 1, 3, 5);
Effect on diff:
diff[1] += 5 → diff[1] = 10 + 5 = 15
diff[4] -= 5 → diff[4] = 10 − 5 = 5 (because 3 + 1 = 4)
Updated diff:
diff = [10, 15, 10, 10, 5]
Rebuild with prefix sums of diff:
Resulting updated array:
[10, 25, 35, 45, 50]
Original array was [10, 20, 30, 40, 50]
After adding 5 to indices 1 to 3, those elements become 25, 35, and 45; all other elements are unchanged. Perfect!
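The three helpers above, tied together on the same worked example (a runnable sketch; class name is illustrative):

```java
import java.util.Arrays;

public class DiffArrayDemo {
    static int[] buildDiffArray(int[] arr) {
        int[] diff = new int[arr.length];
        diff[0] = arr[0];
        for (int i = 1; i < arr.length; i++) diff[i] = arr[i] - arr[i - 1];
        return diff;
    }

    // O(1) range update: add val to every element in [l, r].
    static void rangeUpdate(int[] diff, int l, int r, int val) {
        diff[l] += val;
        if (r + 1 < diff.length) diff[r + 1] -= val;
    }

    // Prefix sum of diff reconstructs the updated array.
    static int[] buildUpdatedArray(int[] diff) {
        int[] arr = new int[diff.length];
        arr[0] = diff[0];
        for (int i = 1; i < diff.length; i++) arr[i] = arr[i - 1] + diff[i];
        return arr;
    }

    public static void main(String[] args) {
        int[] diff = buildDiffArray(new int[]{10, 20, 30, 40, 50}); // [10, 10, 10, 10, 10]
        rangeUpdate(diff, 1, 3, 5);                                 // add 5 to indices 1..3
        System.out.println(Arrays.toString(buildUpdatedArray(diff))); // [10, 25, 35, 45, 50]
    }
}
```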
Why bother? Updating a range [l, r] by adding val takes O(r−l+1) time if done naively; the difference array does each update in O(1).
| Operation | Symbol | Description |
|---|---|---|
| AND | & | Bitwise AND |
| OR | \| | Bitwise OR |
| XOR | ^ | Bitwise XOR |
| NOT | ~ | Bitwise NOT |
| Left Shift | << | Shift bits left (multiply by 2) |
| Right Shift | >> | Shift bits right (divide by 2) |
Check if ith bit is set:
boolean isSet(int n, int i) {
return (n & (1 << i)) != 0;
}
Set ith bit:
int setBit(int n, int i) {
return n | (1 << i);
}
Clear ith bit:
int clearBit(int n, int i) {
return n & ~(1 << i);
}
Toggle ith bit:
int toggleBit(int n, int i) {
return n ^ (1 << i);
}
boolean isPowerOfTwo(int n) {
return n > 0 && (n & (n - 1)) == 0;
}
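A closely related trick: n & (n - 1) clears the lowest set bit of n, which gives Brian Kernighan's way of counting set bits in O(number of set bits). A sketch, with an illustrative class name:

```java
public class CountBitsDemo {
    // Each iteration clears the lowest set bit, so the loop runs once per 1-bit.
    static int countSetBits(int n) {
        int count = 0;
        while (n != 0) {
            n &= (n - 1);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countSetBits(13)); // 13 = 1101₂ → 3
        System.out.println(countSetBits(0));  // 0
    }
}
```

This is also why the isPowerOfTwo check above works: a power of two has exactly one set bit, so n & (n - 1) is 0.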
Summary: a complete single-page view of these concepts, with complexities and use cases:
| Concept | Time Complexity | Use Case / Importance |
|---|---|---|
| GCD (Euclidean Algorithm) | O(log(min(a, b))) | Simplify fractions, number theory |
| LCM | Depends on GCD | Synchronization, periodic tasks |
| Modular Arithmetic | O(1) per operation | Cryptography, hashing, combinatorics |
| Fast Power | O(log b) | Efficient exponentiation under modulus |
| Combinatorics (nCr, nPr) | Preprocessing O(n), Query O(1) | Counting problems, probability, permutations |
| Prefix Sum | O(n) preprocessing, O(1) queries | Range sum queries |
| Difference Array | O(1) range update | Range updates on arrays |
| Bit Manipulation | O(1) per operation | Flags, subsets, optimization, dynamic programming |
100+ Assignments: Basics + Mathematics for DSA → go to the end of this page to know more.
→ Next → CORE DATA STRUCTURES → Coming soon..
Follow Part→1 To …Part→N + Live Coding → Plan Simple
Solve classic combinatorial puzzles using code (e.g., Catalan numbers).
Goal: Understand base case, recursive case, and how call stack works
Goal: Practice recursion with decisions and branching
Goal: Solve subset problems, string operations, and decision trees
Goal: Merge recursion with optimization (divide & conquer, pruning, memoization)
Goal: Solve deep recursive problems asked in interviews and competitive coding
Track Recursive Tree using logging:
Goal: Understand recursion + backtracking flow (choose → explore → un-choose)
- Subsets: [1, 2, 3] → Output: [[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]
- Permutations: "ABC" → Output: "ABC", "ACB", "BAC", ...
- Palindrome partitioning: "aab" → Output: [[a, a, b], [aa, b]]
- Maze paths: in a grid of 1s and 0s, find all paths from top-left to bottom-right.
Goal: Solve constraint-based branching problems using recursion
[1, 1, 2] → Avoid duplicate permutations.
Goal: Combine backtracking with smart pruning, optimization, and state tracking
"123", target=6 → Output: ["1+2+3", "1*2*3"]
Goal: Apply deep recursion with optimization, bitmasking, and memoization
"25525511135" → Output: All valid IPs.., *)
System.out.println("i=" + i + ", path=" + path);
Learn fundamentals like state transition, base case, and 1D dp[]
Practice problems with the dp[i][j] pattern
Work on problems with dp[i][j] based on two strings
Learn tabulation across grids and combinations
Focus on splitting and recursive subproblem DP
Covers DP on trees, memory optimization, and hard state transitions
Ideal for understanding “base case” focused tabulation