4 - Common Algorithmic Patterns
Algorithmic patterns are reusable approaches to solving common types of problems. Think of them as templates or blueprints that can be adapted to solve specific problems. Understanding these patterns can help you recognize problem types and apply proven solutions, saving time and improving code quality.
Why Learn Algorithmic Patterns?
Learning algorithmic patterns offers several benefits:
- Problem Recognition: You'll quickly identify what type of problem you're facing
- Solution Framework: Patterns provide a starting point for your solution
- Efficiency: You'll avoid reinventing the wheel for common problems
- Interview Preparation: These patterns frequently appear in coding interviews
- Code Quality: Using established patterns leads to more maintainable code
Divide and Conquer
The divide and conquer pattern breaks a problem into smaller subproblems, solves each subproblem independently, and then combines the solutions to solve the original problem.
Key Characteristics
- Divide: Break the problem into smaller subproblems.
- Conquer: Solve the subproblems recursively.
- Combine: Merge the solutions of the subproblems to create a solution to the original problem.
Real-World Analogy
Think of organizing a large office. Instead of one person trying to organize everything, you might:
- Divide: Split the office into sections (marketing, engineering, etc.)
- Conquer: Have teams organize their own sections
- Combine: Connect the sections to create a cohesive office layout
Example: Merge Sort
Merge Sort is a classic divide and conquer algorithm that sorts an array by repeatedly dividing it into halves, sorting each half, and then merging the sorted halves.
/// <summary>
/// Sorts an array using the Merge Sort algorithm.
/// </summary>
/// <param name="array">The array to sort.</param>
/// <returns>A sorted array containing the same elements.</returns>
public int[] MergeSort(int[] array)
{
// Base case: arrays with 0 or 1 elements are already sorted
if (array.Length <= 1)
return array;
// DIVIDE: Split the array into two halves
int middle = array.Length / 2;
int[] left = new int[middle];
int[] right = new int[array.Length - middle];
// Copy elements to the left and right arrays
Array.Copy(array, 0, left, 0, middle);
Array.Copy(array, middle, right, 0, array.Length - middle);
// CONQUER: Recursively sort both halves
left = MergeSort(left);
right = MergeSort(right);
// COMBINE: Merge the sorted halves
return Merge(left, right);
}
/// <summary>
/// Merges two sorted arrays into a single sorted array.
/// </summary>
/// <param name="left">First sorted array.</param>
/// <param name="right">Second sorted array.</param>
/// <returns>A merged sorted array.</returns>
private int[] Merge(int[] left, int[] right)
{
// Create a result array to hold the merged elements
int[] result = new int[left.Length + right.Length];
int leftIndex = 0, rightIndex = 0, resultIndex = 0;
// Compare elements from both arrays and take the smaller one
while (leftIndex < left.Length && rightIndex < right.Length)
{
if (left[leftIndex] <= right[rightIndex])
{
result[resultIndex++] = left[leftIndex++];
}
else
{
result[resultIndex++] = right[rightIndex++];
}
}
// Copy any remaining elements from the left array
while (leftIndex < left.Length)
{
result[resultIndex++] = left[leftIndex++];
}
// Copy any remaining elements from the right array
while (rightIndex < right.Length)
{
result[resultIndex++] = right[rightIndex++];
}
return result;
}
Step-by-Step Explanation
Let's trace through a simple example: [38, 27, 43, 3, 9, 82, 10]
- Initial Call: MergeSort([38, 27, 43, 3, 9, 82, 10])
  - Divide into left = [38, 27, 43] and right = [3, 9, 82, 10]
- Left Recursion: MergeSort([38, 27, 43])
  - Divide into left = [38] and right = [27, 43]
  - Further divide right into [27] and [43]
  - Merge [27] and [43] to get [27, 43]
  - Merge [38] and [27, 43] to get [27, 38, 43]
- Right Recursion: MergeSort([3, 9, 82, 10])
  - Similar process yields [3, 9, 10, 82]
- Final Merge: Merge [27, 38, 43] and [3, 9, 10, 82]
  - Result: [3, 9, 10, 27, 38, 43, 82]
Other Divide and Conquer Examples
- Quick Sort: Selects a 'pivot' element and partitions the array around it
- Binary Search: Repeatedly divides the search space in half (see the sketch after this list)
- Closest Pair of Points: Finds the closest pair of points in a 2D plane
- Strassen's Matrix Multiplication: Efficiently multiplies large matrices
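As a contrast to Merge Sort, Binary Search shows how lightweight divide and conquer can be when only one half needs to be explored. The following sketch assumes the array is already sorted in ascending order; the method name and signature are illustrative rather than taken from the earlier examples.
/// <summary>
/// Searches a sorted array for a target value using recursive binary search.
/// </summary>
/// <param name="sortedArray">An array sorted in ascending order.</param>
/// <param name="target">The value to search for.</param>
/// <param name="low">The lowest index of the current search range.</param>
/// <param name="high">The highest index of the current search range.</param>
/// <returns>The index of the target, or -1 if it is not present.</returns>
public int BinarySearch(int[] sortedArray, int target, int low, int high)
{
    // Base case: the search range is empty, so the target is not present
    if (low > high)
        return -1;
    // DIVIDE: pick the middle of the current range (written this way to avoid overflow)
    int middle = low + (high - low) / 2;
    if (sortedArray[middle] == target)
        return middle;
    // CONQUER: only one half can contain the target, so recurse into that half;
    // COMBINE is trivial because the recursive result is already the answer
    if (sortedArray[middle] < target)
        return BinarySearch(sortedArray, target, middle + 1, high);
    return BinarySearch(sortedArray, target, low, middle - 1);
}
For example, BinarySearch(new[] { 1, 3, 5, 7, 9 }, 7, 0, 4) returns index 3. Because only one subproblem of half the size is solved at each step (a = 1, b = 2), the Master Theorem gives T(n) = O(log n).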
Time Complexity Analysis
For many divide and conquer algorithms, we can use the Master Theorem to analyze time complexity:
T(n) = aT(n/b) + f(n)
Where:
- T(n) is the time complexity for input size n
- a is the number of subproblems
- n/b is the size of each subproblem
- f(n) is the cost of dividing and combining
For Merge Sort:
- a = 2 (two recursive calls)
- b = 2 (each subproblem is half the size)
- f(n) = O(n) (merging takes linear time)
Here n^(log_b a) = n^(log_2 2) = n, which matches f(n) = O(n), so the balanced case of the Master Theorem applies and T(n) = O(n log n).
Dynamic Programming
Dynamic programming solves complex problems by breaking them down into simpler subproblems and storing the results of these subproblems to avoid redundant calculations.
Key Characteristics
- Optimal Substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
- Overlapping Subproblems: The same subproblems are solved multiple times.
Real-World Analogy
Think of planning a road trip with multiple possible routes. Instead of recalculating the time for common segments every time, you:
- Calculate the time for each road segment once
- Store these results
- Reuse them when evaluating different routes that share segments
Approaches
- Top-down (Memoization): Start with the original problem and recursively solve subproblems, storing results in a cache.
- Bottom-up (Tabulation): Start with the smallest subproblems and iteratively build up to the original problem.
Example: Fibonacci Sequence
The Fibonacci sequence is defined as: F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.
Top-down Approach (Memoization)
/// <summary>
/// Calculates the nth Fibonacci number using memoization.
/// </summary>
/// <param name="n">The position in the Fibonacci sequence.</param>
/// <param name="memo">Dictionary to store previously calculated values.</param>
/// <returns>The nth Fibonacci number.</returns>
public int Fibonacci(int n, Dictionary<int, int> memo = null)
{
// Initialize the memoization dictionary if it's null
if (memo == null)
memo = new Dictionary<int, int>();
// Base cases: F(0) = 0, F(1) = 1
if (n <= 1)
return n;
// Check if we've already calculated this value
if (memo.ContainsKey(n))
return memo[n];
// Calculate F(n) = F(n-1) + F(n-2) and store in memo
memo[n] = Fibonacci(n - 1, memo) + Fibonacci(n - 2, memo);
return memo[n];
}
Bottom-up Approach (Tabulation)
/// <summary>
/// Calculates the nth Fibonacci number using tabulation.
/// </summary>
/// <param name="n">The position in the Fibonacci sequence.</param>
/// <returns>The nth Fibonacci number.</returns>
public int Fibonacci(int n)
{
// Handle base cases
if (n <= 1)
return n;
// Create an array to store all Fibonacci numbers from 0 to n
int[] fib = new int[n + 1];
fib[0] = 0; // F(0) = 0
fib[1] = 1; // F(1) = 1
// Build up the array by calculating each Fibonacci number
for (int i = 2; i <= n; i++)
{
// Each number is the sum of the two previous numbers
fib[i] = fib[i - 1] + fib[i - 2];
}
// Return the nth Fibonacci number
return fib[n];
}
Step-by-Step Explanation
Let's trace through calculating F(5) using the bottom-up approach:
- Initialize fib[0] = 0 and fib[1] = 1
- For i = 2: fib[2] = fib[1] + fib[0] = 1 + 0 = 1
- For i = 3: fib[3] = fib[2] + fib[1] = 1 + 1 = 2
- For i = 4: fib[4] = fib[3] + fib[2] = 2 + 1 = 3
- For i = 5: fib[5] = fib[4] + fib[3] = 3 + 2 = 5
- Return fib[5] = 5
Other Dynamic Programming Examples
- Longest Common Subsequence: Finds the longest subsequence common to two sequences (sketched after this list)
- Knapsack Problem: Selects items to maximize value while staying within a weight limit
- Matrix Chain Multiplication: Determines the most efficient way to multiply a chain of matrices
- Shortest Path Algorithms: Like Floyd-Warshall for finding shortest paths in a graph
- Edit Distance: Calculates the minimum operations to transform one string into another
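To show tabulation on something richer than Fibonacci, here is a sketch of the Longest Common Subsequence length from the list above. It assumes only the length is needed; recovering the subsequence itself would require walking back through the table.
/// <summary>
/// Computes the length of the longest common subsequence of two strings
/// using bottom-up dynamic programming (tabulation).
/// </summary>
/// <param name="a">The first string.</param>
/// <param name="b">The second string.</param>
/// <returns>The length of the longest common subsequence.</returns>
public int LongestCommonSubsequenceLength(string a, string b)
{
    // dp[i, j] = LCS length of the first i characters of a and the first j characters of b
    int[,] dp = new int[a.Length + 1, b.Length + 1];
    // Build the table row by row; row 0 and column 0 stay 0 (empty prefix)
    for (int i = 1; i <= a.Length; i++)
    {
        for (int j = 1; j <= b.Length; j++)
        {
            if (a[i - 1] == b[j - 1])
            {
                // Matching characters extend the LCS of the shorter prefixes
                dp[i, j] = dp[i - 1, j - 1] + 1;
            }
            else
            {
                // Otherwise take the better of dropping a character from either string
                dp[i, j] = Math.Max(dp[i - 1, j], dp[i, j - 1]);
            }
        }
    }
    return dp[a.Length, b.Length];
}
For example, LongestCommonSubsequenceLength("ABCBDAB", "BDCABA") returns 4. Both time and space complexity are O(a.Length * b.Length).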
When to Use Dynamic Programming
Use dynamic programming when:
- The problem can be broken down into overlapping subproblems
- The problem has optimal substructure
- The problem requires finding an optimal solution (maximizing or minimizing something)
Greedy Algorithms
Greedy algorithms make locally optimal choices at each step with the hope of finding a global optimum. They're like taking the best immediate option without reconsidering past choices.
Key Characteristics
- Greedy Choice Property: A globally optimal solution can be reached by making locally optimal choices.
- Optimal Substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
Real-World Analogy
Think of climbing a hill by always taking the steepest upward path. This might lead to the peak (global optimum) in some hills, but could lead to a smaller local peak in others.
Example: Coin Change (with specific denominations)
The problem is to make change for a given amount using the minimum number of coins.
/// <summary>
/// Makes change for a given amount using the minimum number of coins.
/// Note: This greedy approach works for some coin systems (like US coins) but not all.
/// </summary>
/// <param name="amount">The amount to make change for.</param>
/// <param name="denominations">Available coin denominations.</param>
/// <returns>List of coins used to make change.</returns>
public List<int> MakeChange(int amount, int[] denominations)
{
// Sort denominations in descending order (largest first)
Array.Sort(denominations);
Array.Reverse(denominations);
List<int> result = new List<int>();
int remainingAmount = amount;
// For each denomination, use as many coins as possible
foreach (int coin in denominations)
{
// While we can still use this coin denomination
while (remainingAmount >= coin)
{
// Add the coin to our result
result.Add(coin);
// Reduce the remaining amount
remainingAmount -= coin;
}
}
return result;
}
Step-by-Step Explanation
Let's make change for 63 cents using US coins [25, 10, 5, 1]:
- Start with denominations sorted: [25, 10, 5, 1]
- For coin = 25:
- Use 2 quarters (25¢ each): result = [25, 25], remaining = 13
- For coin = 10:
- Use 1 dime (10¢): result = [25, 25, 10], remaining = 3
- For coin = 5:
- Cannot use (5 > 3)
- For coin = 1:
- Use 3 pennies (1¢ each): result = [25, 25, 10, 1, 1, 1], remaining = 0
- Final result: [25, 25, 10, 1, 1, 1] (6 coins)
Important Note: This greedy approach works for US coins [25, 10, 5, 1] but not for all coin systems. For example, with denominations [1, 3, 4] and amount = 6, the greedy approach gives [4, 1, 1] (3 coins), but the optimal solution is [3, 3] (2 coins).
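When the greedy choice property does not hold, as with [1, 3, 4] above, dynamic programming gives the guaranteed optimum. The sketch below is one common bottom-up formulation (not a continuation of the greedy MakeChange method); it returns only the coin count and uses -1 to signal that the amount cannot be made.
/// <summary>
/// Computes the minimum number of coins needed to make the given amount,
/// using bottom-up dynamic programming. Works for arbitrary denominations.
/// </summary>
/// <param name="amount">The amount to make change for.</param>
/// <param name="denominations">Available coin denominations (positive values).</param>
/// <returns>The minimum number of coins, or -1 if the amount cannot be made.</returns>
public int MinCoins(int amount, int[] denominations)
{
    // minCoins[a] = fewest coins needed to make amount a (int.MaxValue = not reachable yet)
    int[] minCoins = new int[amount + 1];
    for (int a = 1; a <= amount; a++)
        minCoins[a] = int.MaxValue;
    // Build up from smaller amounts to the target amount
    for (int a = 1; a <= amount; a++)
    {
        foreach (int coin in denominations)
        {
            // If this coin fits and the remaining amount is reachable, try using it
            if (coin <= a && minCoins[a - coin] != int.MaxValue)
            {
                minCoins[a] = Math.Min(minCoins[a], minCoins[a - coin] + 1);
            }
        }
    }
    return minCoins[amount] == int.MaxValue ? -1 : minCoins[amount];
}
With denominations [1, 3, 4] and amount = 6, MinCoins returns 2 (using 3 + 3), whereas the greedy MakeChange above would use 3 coins (4 + 1 + 1).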
Other Greedy Algorithm Examples
- Huffman Coding: Creates optimal prefix codes for data compression
- Prim's and Kruskal's Algorithms: Find minimum spanning trees in graphs
- Dijkstra's Algorithm: Finds shortest paths in a graph with non-negative weights
- Activity Selection Problem: Selects the maximum number of non-overlapping activities (see the sketch below)
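To make the greedy choice property concrete, here is a sketch of the Activity Selection Problem, assuming activities are given as (start, end) pairs. The greedy choice is to always take the activity that finishes earliest among those that do not overlap the last one selected; the tuple-based signature is illustrative.
/// <summary>
/// Selects the maximum number of non-overlapping activities using the
/// "earliest finish time first" greedy choice.
/// </summary>
/// <param name="activities">List of (start, end) activity times.</param>
/// <returns>A list of selected, non-overlapping activities.</returns>
public List<(int Start, int End)> SelectActivities(List<(int Start, int End)> activities)
{
    List<(int Start, int End)> selected = new List<(int Start, int End)>();
    if (activities == null || activities.Count == 0)
        return selected;
    // Greedy key: sort by finish time, earliest first
    List<(int Start, int End)> sorted = new List<(int Start, int End)>(activities);
    sorted.Sort((x, y) => x.End.CompareTo(y.End));
    // Take each activity that starts at or after the end of the last selected one
    int lastEnd = int.MinValue;
    foreach ((int Start, int End) activity in sorted)
    {
        if (activity.Start >= lastEnd)
        {
            selected.Add(activity);
            lastEnd = activity.End;
        }
    }
    return selected;
}
Sorting dominates the running time, so the overall complexity is O(n log n).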
When to Use Greedy Algorithms
Use greedy algorithms when:
- The problem has optimal substructure
- The problem has the greedy choice property
- A locally optimal choice leads to a globally optimal solution
Backtracking
Backtracking is an algorithmic technique that builds a solution incrementally, abandoning a path as soon as it determines that the path cannot lead to a valid solution.
Key Characteristics
- Incremental Construction: Build the solution one piece at a time.
- Feasibility Function: Check if the current partial solution can lead to a complete solution.
- Pruning: Abandon paths that cannot lead to a valid solution.
Real-World Analogy
Think of navigating a maze. You try a path, and if you reach a dead end, you backtrack to the last intersection and try a different path.
Example: N-Queens Problem
The N-Queens problem asks how to place N chess queens on an N×N chessboard so that no two queens threaten each other.
/// <summary>
/// Solves the N-Queens problem.
/// </summary>
public class NQueensSolver
{
private int n; // Board size and number of queens
private List<int[]> solutions = new List<int[]>();
/// <summary>
/// Initializes a new instance of the NQueensSolver class.
/// </summary>
/// <param name="n">The size of the board and number of queens.</param>
public NQueensSolver(int n)
{
this.n = n;
}
/// <summary>
/// Solves the N-Queens problem and returns all solutions.
/// </summary>
/// <returns>A list of solutions, where each solution is an array representing
/// the column position of a queen in each row.</returns>
public List<int[]> Solve()
{
// Initialize the board (board[row] = column where queen is placed)
int[] board = new int[n];
// Start backtracking from the first row
Backtrack(board, 0);
return solutions;
}
/// <summary>
/// Recursive backtracking function to place queens.
/// </summary>
/// <param name="board">Current board state.</param>
/// <param name="row">Current row to place a queen.</param>
private void Backtrack(int[] board, int row)
{
// If we've placed queens in all rows, we've found a solution
if (row == n)
{
// Add a copy of the current board to our solutions
solutions.Add((int[])board.Clone());
return;
}
// Try placing a queen in each column of the current row
for (int col = 0; col < n; col++)
{
// Check if it's safe to place a queen at board[row][col]
if (IsSafe(board, row, col))
{
// Place the queen
board[row] = col;
// Recur to place queens in subsequent rows
Backtrack(board, row + 1);
// Backtrack: clear this row so the next iteration can try another column
// (returning from the recursive call is what actually backtracks to this row)
board[row] = 0;
}
}
}
/// <summary>
/// Checks if it's safe to place a queen at the given position.
/// </summary>
/// <param name="board">Current board state.</param>
/// <param name="row">Row to check.</param>
/// <param name="col">Column to check.</param>
/// <returns>True if it's safe to place a queen, false otherwise.</returns>
private bool IsSafe(int[] board, int row, int col)
{
// Check if there's a conflict with any queen in previous rows
for (int i = 0; i < row; i++)
{
// Check three conditions:
// 1. Same column (vertical attack)
// 2. Same diagonal (diagonal attack, top-left to bottom-right)
// 3. Same diagonal (diagonal attack, top-right to bottom-left)
if (board[i] == col ||
board[i] - i == col - row ||
board[i] + i == col + row)
{
return false;
}
}
return true;
}
}
Step-by-Step Explanation
Let's trace through solving the 4-Queens problem:
- Start with an empty 4×4 board
- Try placing a queen in row 0:
  - Try col 0: Place queen at (0,0)
  - Recur for row 1:
    - Try col 0: Not safe (same column as the queen in row 0)
    - Try col 1: Not safe (diagonal attack)
    - Try col 2: Safe, place queen at (1,2)
      - Recur for row 2: no safe positions, backtrack to row 1
    - Try col 3: Safe, place queen at (1,3)
      - Recur for row 2: place queen at (2,1), but row 3 then has no safe positions, so backtrack
    - No columns left in row 1, backtrack to row 0
  - Try col 1: Place queen at (0,1)
  - ... (continue exploring all possibilities)
One solution found: [1, 3, 0, 2] (meaning queens at positions (0,1), (1,3), (2,0), (3,2))
Other Backtracking Examples
- Sudoku Solver: Fills a 9×9 grid with digits according to Sudoku rules
- Hamiltonian Path: Finds a path that visits every vertex exactly once
- Graph Coloring: Assigns colors to vertices so that no adjacent vertices have the same color
- Subset Sum: Finds a subset of numbers that sum to a target value (sketched after this list)
- Permutations and Combinations: Generates all possible arrangements or selections
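As a smaller companion to N-Queens, here is a sketch of the Subset Sum variant from the list above. It assumes all numbers are non-negative, which is what makes the "sum exceeded the target" pruning valid; the method name and default parameters are illustrative.
/// <summary>
/// Determines whether some subset of the given non-negative numbers sums to the target,
/// using backtracking with pruning.
/// </summary>
/// <param name="numbers">The candidate numbers (assumed non-negative).</param>
/// <param name="target">The target sum.</param>
/// <param name="index">The index of the next number to consider.</param>
/// <param name="currentSum">The sum of the numbers chosen so far.</param>
/// <returns>True if a subset summing to the target exists, false otherwise.</returns>
public bool HasSubsetWithSum(int[] numbers, int target, int index = 0, int currentSum = 0)
{
    // Found a subset that hits the target exactly
    if (currentSum == target)
        return true;
    // PRUNE: we overshot the target (valid because numbers are non-negative)
    // or ran out of numbers to consider
    if (currentSum > target || index == numbers.Length)
        return false;
    // Choose: include numbers[index] in the subset
    if (HasSubsetWithSum(numbers, target, index + 1, currentSum + numbers[index]))
        return true;
    // Backtrack: exclude numbers[index] and try the remaining numbers
    return HasSubsetWithSum(numbers, target, index + 1, currentSum);
}
For example, HasSubsetWithSum(new[] { 3, 34, 4, 12, 5, 2 }, 9) returns true (4 + 5). In the worst case every number is either included or excluded, giving O(2^n) time, but pruning often cuts the search dramatically.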
When to Use Backtracking
Use backtracking when:
- The problem requires finding all (or some) solutions that satisfy certain constraints
- The solution can be built incrementally
- You can determine if a partial solution can be extended to a complete solution
Two Pointers
The two pointers technique uses two pointers to iterate through a data structure, often moving at different speeds or in different directions.
Key Characteristics
- Multiple Pointers: Use two or more pointers to traverse the data structure.
- Relative Movement: Pointers move based on certain conditions.
Real-World Analogy
Think of two people searching for a book in a library, starting from opposite ends of a shelf and moving toward each other to find the book faster.
Example: Finding a Pair with a Given Sum in a Sorted Array
/// <summary>
/// Finds a pair of elements in a sorted array that sum to a target value using the two pointers technique.
/// This algorithm has O(n) time complexity, which is more efficient than the O(n²) brute force approach.
/// </summary>
/// <param name="sortedArray">A sorted array of integers (must be sorted in ascending order).</param>
/// <param name="targetSum">The target sum to find.</param>
/// <returns>A tuple containing the indices of the two elements, or (-1, -1) if no pair is found.</returns>
/// <remarks>
/// The two pointers technique works here because the array is sorted.
/// By moving pointers from opposite ends, we can efficiently narrow down the search space.
/// </remarks>
public (int, int) FindPairWithSum(int[] sortedArray, int targetSum)
{
// Validate input
if (sortedArray == null || sortedArray.Length < 2)
{
throw new ArgumentException("Array must contain at least two elements", nameof(sortedArray));
}
// Initialize two pointers: one at the beginning and one at the end
// This is the key to the two pointers pattern
int left = 0;
int right = sortedArray.Length - 1;
// Continue until the pointers meet
while (left < right)
{
// Calculate the current sum of elements at both pointers
int currentSum = sortedArray[left] + sortedArray[right];
if (currentSum == targetSum)
{
// Found the pair that sums to the target
return (left, right);
}
else if (currentSum < targetSum)
{
// If current sum is less than target, move left pointer right
// This increases the sum (since array is sorted in ascending order)
left++;
}
else // currentSum > targetSum
{
// If current sum is greater than target, move right pointer left
// This decreases the sum (since array is sorted in ascending order)
right--;
}
}
// No pair found that sums to the target
return (-1, -1);
// Example: For array [1, 2, 3, 4, 6] and targetSum = 8
// Initial state: left = 0 (value 1), right = 4 (value 6), sum = 7 < 8, so left++
// Next state: left = 1 (value 2), right = 4 (value 6), sum = 8 == 8, return (1, 4)
}
Step-by-Step Explanation
Let's find a pair that sums to 16 in the array [1, 4, 6, 8, 10, 15]:
- Initialize left = 0 (value 1) and right = 5 (value 15)
- currentSum = 1 + 15 = 16 = targetSum, so return (0, 5)
Let's try another example, finding a pair that sums to 14:
- Initialize left = 0 (value 1) and right = 5 (value 15)
- currentSum = 1 + 15 = 16 > targetSum, so move right pointer: right = 4
- currentSum = 1 + 10 = 11 < targetSum, so move left pointer: left = 1
- currentSum = 4 + 10 = 14 = targetSum, so return (1, 4)
Other Two Pointers Examples
- Three Sum Problem: Finds triplets that sum to a target value
- Container With Most Water: Finds two lines that contain the most water
- Remove Duplicates from Sorted Array: Removes duplicates in-place
- Palindrome Checking: Determines if a string is a palindrome (see the sketch below)
- Merge Two Sorted Arrays: Combines two sorted arrays into one
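Palindrome checking from the list above is perhaps the simplest use of pointers moving toward each other. This sketch compares characters directly, so case and punctuation are treated as significant; a production version might normalize the input first.
/// <summary>
/// Checks whether a string is a palindrome using two pointers moving toward each other.
/// </summary>
/// <param name="text">The string to check.</param>
/// <returns>True if the string reads the same forwards and backwards, false otherwise.</returns>
public bool IsPalindrome(string text)
{
    // One pointer at each end of the string
    int left = 0;
    int right = text.Length - 1;
    while (left < right)
    {
        // Any mismatch means the string is not a palindrome
        if (text[left] != text[right])
            return false;
        // Move both pointers toward the middle
        left++;
        right--;
    }
    return true;
}
For example, IsPalindrome("racecar") returns true and IsPalindrome("hello") returns false, in O(n) time and O(1) extra space.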
When to Use Two Pointers
Use the two pointers technique when:
- You need to find a pair of elements in a collection that satisfy certain conditions
- You're dealing with sorted arrays or linked lists
- You need to compare elements from different positions in the collection
Sliding Window
The sliding window technique maintains a subset of elements as a "window" and slides this window through the data structure to solve the problem.
Key Characteristics
- Window Definition: A contiguous sequence of elements within the data structure.
- Window Movement: The window expands or contracts based on certain conditions.
Real-World Analogy
Think of a security camera that can only view a fixed section of a street at a time. As it pans from left to right, it maintains a "window" of visibility that shifts along the street.
Example: Maximum Sum Subarray of Size K
/// <summary>
/// Finds the maximum sum of any contiguous subarray of size k using the sliding window technique.
/// This algorithm has O(n) time complexity, which is more efficient than the O(n*k) brute force approach.
/// </summary>
/// <param name="array">The input array of integers.</param>
/// <param name="k">The size of the subarray (window size).</param>
/// <returns>The maximum sum of any contiguous subarray of size k.</returns>
/// <exception cref="ArgumentException">Thrown when the array length is less than k.</exception>
/// <exception cref="ArgumentNullException">Thrown when the array is null.</exception>
public int MaxSumSubarray(int[] array, int k)
{
// Validate input
if (array == null)
throw new ArgumentNullException(nameof(array), "Array cannot be null");
if (k <= 0)
throw new ArgumentException("Window size must be positive", nameof(k));
if (array.Length < k)
throw new ArgumentException($"Array length ({array.Length}) is less than window size ({k})", nameof(array));
// Step 1: Compute sum of first window (first k elements)
// This initializes our sliding window
int windowSum = 0;
for (int i = 0; i < k; i++)
{
windowSum += array[i];
}
// Initialize maximum sum to the sum of the first window
int maxSum = windowSum;
// Step 2: Slide the window through the array
// Instead of recalculating the sum for each window from scratch (which would be O(n*k)),
// we efficiently update the window sum by removing the element leaving the window
// and adding the element entering the window (making it O(n))
for (int i = k; i < array.Length; i++)
{
// Slide the window by one position:
// - Subtract the element going out of the window (at position i-k)
// - Add the element coming into the window (at position i)
windowSum = windowSum - array[i - k] + array[i];
// Update maximum sum if current window sum is larger
maxSum = Math.Max(maxSum, windowSum);
}
return maxSum;
// Example: For array [2, 1, 5, 1, 3, 2] and k = 3
// Initial window: [2, 1, 5], sum = 8
// Next window: [1, 5, 1], sum = 7 (removed 2, added 1)
// Next window: [5, 1, 3], sum = 9 (removed 1, added 3)
// Next window: [1, 3, 2], sum = 6 (removed 5, added 2)
// Maximum sum = 9
}
Step-by-Step Explanation
Let's find the maximum sum subarray of size 3 in the array [2, 1, 5, 1, 3, 2]:
- Compute sum of first window [2, 1, 5]: windowSum = 8, maxSum = 8
- Slide window to [1, 5, 1]: windowSum = 8 - 2 + 1 = 7, maxSum = 8
- Slide window to [5, 1, 3]: windowSum = 7 - 1 + 3 = 9, maxSum = 9
- Slide window to [1, 3, 2]: windowSum = 9 - 5 + 2 = 6, maxSum = 9
- Return maxSum = 9
Other Sliding Window Examples
- Longest Substring Without Repeating Characters: Finds the longest substring with unique characters (sketched after this list)
- Minimum Size Subarray Sum: Finds the smallest subarray with a sum greater than a target
- Longest Substring with K Distinct Characters: Finds the longest substring with at most K distinct characters
- String Anagrams: Finds all anagrams of a pattern in a string
- Maximum Points You Can Obtain from Cards: Maximizes points by taking cards from either end
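The window does not have to stay a fixed size. Here is a sketch of the Longest Substring Without Repeating Characters problem from the list above, where the window grows on the right and shrinks from the left whenever a duplicate appears; the HashSet-based approach shown is one of several reasonable choices.
/// <summary>
/// Finds the length of the longest substring without repeating characters,
/// using a variable-size sliding window.
/// </summary>
/// <param name="text">The input string.</param>
/// <returns>The length of the longest substring with all distinct characters.</returns>
public int LongestUniqueSubstringLength(string text)
{
    // Characters currently inside the window [windowStart, windowEnd]
    HashSet<char> inWindow = new HashSet<char>();
    int longest = 0;
    int windowStart = 0;
    // Expand the window one character at a time
    for (int windowEnd = 0; windowEnd < text.Length; windowEnd++)
    {
        // Shrink the window from the left until the new character is no longer a duplicate
        while (inWindow.Contains(text[windowEnd]))
        {
            inWindow.Remove(text[windowStart]);
            windowStart++;
        }
        inWindow.Add(text[windowEnd]);
        // The current window [windowStart, windowEnd] contains only distinct characters
        longest = Math.Max(longest, windowEnd - windowStart + 1);
    }
    return longest;
}
For example, LongestUniqueSubstringLength("abcabcbb") returns 3 (for "abc"). Each character enters and leaves the window at most once, so the running time is O(n).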
When to Use Sliding Window
Use the sliding window technique when:
- You need to find a subarray or substring that satisfies certain conditions
- The problem involves contiguous sequences
- You need to minimize or maximize something over a contiguous sequence
Applying Algorithmic Patterns
When faced with a new problem, follow these steps:
- Identify the Pattern: Look for clues in the problem statement:
  - "Find all combinations..." might suggest backtracking
  - "Maximize/minimize..." might suggest dynamic programming or greedy
  - "Sorted array..." might suggest two pointers or binary search
  - "Contiguous subarray..." might suggest sliding window
- Adapt the Pattern: Modify the template to fit your specific problem:
  - Define the state for dynamic programming
  - Determine the greedy choice for greedy algorithms
  - Define the backtracking constraints
  - Set up the pointers or window appropriately
- Analyze Complexity: Determine the time and space complexity:
  - How many subproblems are there in dynamic programming?
  - How many recursive calls in backtracking?
  - How many iterations in two pointers or sliding window?
- Test Edge Cases: Consider:
  - Empty inputs
  - Single-element inputs
  - Very large inputs
  - Inputs with duplicates or special patterns
Conclusion
Understanding these common algorithmic patterns can significantly improve your problem-solving skills. They provide frameworks that can be adapted to solve a wide variety of problems efficiently.
Remember that mastering these patterns takes practice. Try to identify which pattern(s) might apply to problems you encounter, and practice implementing solutions based on these patterns.
As you gain experience, you'll develop an intuition for which pattern to apply to different types of problems, making you a more effective problem solver.