
7 - Greedy Algorithms

Greedy algorithms are a class of algorithms that make locally optimal choices at each step with the hope of finding a global optimum. They are typically used for optimization problems where the goal is to find the maximum or minimum value of something.

Understanding Greedy Algorithms

What are Greedy Algorithms?

Greedy algorithms follow a simple principle: at each step, make the choice that looks best at the moment. Unlike dynamic programming, which considers all possible solutions, greedy algorithms make a series of choices, each one looking optimal at that point, without reconsidering previous choices.

Real-World Analogy

Think of climbing a hill by always taking the steepest upward path. On some hills this reaches the summit (the global optimum), but on others it can leave you stranded on a smaller local peak.

Another analogy is grocery shopping with a limited budget. A greedy approach would be to always pick the item with the highest value-to-cost ratio until your budget is exhausted.

Key Characteristics

  1. Greedy Choice Property: A globally optimal solution can be reached by making locally optimal choices.
  2. Optimal Substructure: The optimal solution to the problem contains optimal solutions to its subproblems.

Advantages and Limitations

Advantages

  • Simplicity: Greedy algorithms are usually straightforward to design and implement.
  • Efficiency: They are often more efficient than other approaches like dynamic programming or backtracking.
  • Immediacy: They make decisions based on the information available at the moment, without reconsidering previous choices.

Limitations

  • Not Always Optimal: Greedy algorithms don't always yield the optimal solution. They work only for problems with the greedy choice property.
  • Short-Sighted: They make decisions based on immediate gain without considering the global picture.
  • No Backtracking: Once a choice is made, it's never reconsidered, which can lead to suboptimal solutions for some problems.

Classic Greedy Algorithm Problems

1. Coin Change (with specific denominations)

The problem is to find the minimum number of coins needed to make a given amount of change, given a set of coin denominations.

/// <summary>
/// Makes change for a given amount using the minimum number of coins.
/// Uses a greedy approach by always taking the largest possible coin.
/// </summary>
/// <param name="amount">The amount to make change for.</param>
/// <param name="denominations">Available coin denominations.</param>
/// <returns>A list of coins that sum to the given amount.</returns>
/// <remarks>
/// Note: This greedy approach works for standard US coin denominations [25, 10, 5, 1]
/// but may not produce the optimal solution for all coin systems.
/// For example, with denominations [1, 3, 4] and amount 6, the greedy solution
/// would use [4, 1, 1] (3 coins) while the optimal solution is [3, 3] (2 coins).
/// </remarks>
public List<int> MakeChange(int amount, int[] denominations)
{
    // Sort denominations in descending order (largest first)
    // This is the key to the greedy approach - we always take the largest coin possible
    Array.Sort(denominations);
    Array.Reverse(denominations);

    List<int> result = new List<int>();
    int remainingAmount = amount;

    // For each denomination, use as many coins as possible
    foreach (int coin in denominations)
    {
        // While we can still use this coin denomination
        while (remainingAmount >= coin)
        {
            // Add the coin to our result
            result.Add(coin);
            // Reduce the remaining amount
            remainingAmount -= coin;
        }
    }

    // If there's still a remaining amount, it means we couldn't make exact change
    if (remainingAmount > 0)
    {
        throw new ArgumentException("Cannot make exact change with the given denominations");
    }

    return result;

    // Example: For amount = 63 with US coins [25, 10, 5, 1]
    // We'd use 2 quarters (25¢), 1 dime (10¢), and 3 pennies (1¢)
    // Result: [25, 25, 10, 1, 1, 1] (6 coins total)
}

Step-by-Step Explanation

Let's make change for 63 cents using US coins [25, 10, 5, 1]:

  1. Start with denominations sorted: [25, 10, 5, 1]
  2. For coin = 25:
    • Use 2 quarters (25¢ each): result = [25, 25], remaining = 13
  3. For coin = 10:
    • Use 1 dime (10¢): result = [25, 25, 10], remaining = 3
  4. For coin = 5:
    • Cannot use (5 > 3)
  5. For coin = 1:
    • Use 3 pennies (1¢ each): result = [25, 25, 10, 1, 1, 1], remaining = 0
  6. Final result: [25, 25, 10, 1, 1, 1] (6 coins)

Important Note: This greedy approach works for US coins [25, 10, 5, 1] but not for all coin systems. For example, with denominations [1, 3, 4] and amount = 6, the greedy approach gives [4, 1, 1] (3 coins), but the optimal solution is [3, 3] (2 coins).
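
For coin systems where the greedy choice fails, a dynamic-programming formulation always finds the minimum number of coins. The sketch below is illustrative only (the method name MinCoinsDP and its structure are not part of the chapter's code); it shows how the [1, 3, 4] example above is handled correctly.

/// <summary>
/// Finds the minimum number of coins for an amount using dynamic programming.
/// Works for arbitrary denominations, unlike the greedy approach above.
/// (Illustrative sketch; not part of the original chapter code.)
/// </summary>
public int MinCoinsDP(int amount, int[] denominations)
{
    // dp[i] = minimum number of coins needed to make amount i
    int[] dp = new int[amount + 1];
    for (int i = 1; i <= amount; i++)
    {
        dp[i] = int.MaxValue; // "Unreachable" marker; dp[0] stays 0
    }

    for (int i = 1; i <= amount; i++)
    {
        foreach (int coin in denominations)
        {
            if (coin <= i && dp[i - coin] != int.MaxValue)
            {
                dp[i] = Math.Min(dp[i], dp[i - coin] + 1);
            }
        }
    }

    // int.MaxValue means the amount cannot be made with these denominations
    return dp[amount] == int.MaxValue ? -1 : dp[amount];
}

// Example: MinCoinsDP(6, new[] { 1, 3, 4 }) returns 2 (coins [3, 3]),
// while the greedy MakeChange above would use 3 coins ([4, 1, 1]).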

2. Activity Selection Problem

The problem is to select the maximum number of activities that can be performed by a single person, given the start and finish times of each activity.

/// <summary>
/// Represents an activity with start and finish times.
/// </summary>
public class Activity
{
    public int Start { get; set; }
    public int Finish { get; set; }

    public Activity(int start, int finish)
    {
        Start = start;
        Finish = finish;
    }

    public override string ToString()
    {
        return $"({Start}, {Finish})";
    }
}

/// <summary>
/// Selects the maximum number of non-overlapping activities using a greedy approach.
/// The algorithm always chooses the activity that finishes earliest among the remaining compatible activities.
/// </summary>
/// <param name="activities">List of activities with start and finish times.</param>
/// <returns>List of selected activities that don't overlap.</returns>
/// <exception cref="ArgumentException">Thrown when the activities list is empty.</exception>
public List<Activity> SelectActivities(List<Activity> activities)
{
    // Validate input
    if (activities == null || activities.Count == 0)
    {
        throw new ArgumentException("Activities list cannot be empty", nameof(activities));
    }

    // Sort activities by finish time (this is the key greedy choice)
    // We prioritize activities that finish earliest to maximize the number of activities
    activities.Sort((a, b) => a.Finish.CompareTo(b.Finish));

    List<Activity> selected = new List<Activity>();

    // Select the first activity (the one that finishes earliest)
    selected.Add(activities[0]);

    // Keep track of the finish time of the last selected activity
    int lastFinishTime = activities[0].Finish;

    // Consider the rest of the activities
    for (int i = 1; i < activities.Count; i++)
    {
        // If this activity starts after or at the finish time of the last selected activity
        // (i.e., it doesn't overlap), select it
        if (activities[i].Start >= lastFinishTime)
        {
            selected.Add(activities[i]);
            lastFinishTime = activities[i].Finish;
        }
        // If there's an overlap, we skip this activity (greedy choice)
    }

    return selected;

    // Time Complexity: O(n log n) due to sorting
    // Space Complexity: O(n) for the selected activities list
}

Step-by-Step Explanation

Let's select activities from the following list:

  • Activity 1: (1, 4)
  • Activity 2: (3, 5)
  • Activity 3: (0, 6)
  • Activity 4: (5, 7)
  • Activity 5: (3, 9)
  • Activity 6: (5, 9)
  • Activity 7: (6, 10)
  • Activity 8: (8, 11)
  • Activity 9: (8, 12)
  • Activity 10: (2, 14)
  • Activity 11: (12, 16)
  1. Sort by finish time: [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
  2. Select the first activity: (1, 4)
  3. For each remaining activity, check if it starts after the last selected activity finishes:
    • (3, 5): 3 < 4, skip
    • (0, 6): 0 < 4, skip
    • (5, 7): 5 > 4, select, lastFinishTime = 7
    • (3, 9): 3 < 7, skip
    • (5, 9): 5 < 7, skip
    • (6, 10): 6 < 7, skip
    • (8, 11): 8 > 7, select, lastFinishTime = 11
    • (8, 12): 8 < 11, skip
    • (2, 14): 2 < 11, skip
    • (12, 16): 12 > 11, select, lastFinishTime = 16
  4. Final selection: [(1, 4), (5, 7), (8, 11), (12, 16)]
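
To reproduce this walkthrough, a small illustrative driver (assuming the Activity class and SelectActivities method above are in scope) could look like this:

// Build the example activities from the list above
var activities = new List<Activity>
{
    new Activity(1, 4), new Activity(3, 5), new Activity(0, 6), new Activity(5, 7),
    new Activity(3, 9), new Activity(5, 9), new Activity(6, 10), new Activity(8, 11),
    new Activity(8, 12), new Activity(2, 14), new Activity(12, 16)
};

List<Activity> selected = SelectActivities(activities);
Console.WriteLine(string.Join(", ", selected));
// Expected output: (1, 4), (5, 7), (8, 11), (12, 16)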

3. Huffman Coding

Huffman coding is a lossless data compression algorithm that assigns variable-length codes to input characters based on their frequencies.

/// <summary>
/// Represents a node in the Huffman tree.
/// </summary>
public class HuffmanNode
{
    public char Character { get; set; }
    public int Frequency { get; set; }
    public HuffmanNode Left { get; set; }
    public HuffmanNode Right { get; set; }

    public HuffmanNode(char character, int frequency)
    {
        Character = character;
        Frequency = frequency;
        Left = null;
        Right = null;
    }

    // A node is a leaf if it has no children
    public bool IsLeaf => Left == null && Right == null;
}

/// <summary>
/// Builds Huffman codes for characters in a text.
/// </summary>
/// <param name="text">The input text to encode.</param>
/// <returns>Dictionary mapping characters to their Huffman codes.</returns>
public Dictionary<char, string> BuildHuffmanCodes(string text)
{
    if (string.IsNullOrEmpty(text))
        return new Dictionary<char, string>();

    // Step 1: Count frequency of each character
    Dictionary<char, int> frequencies = new Dictionary<char, int>();
    foreach (char c in text)
    {
        if (frequencies.ContainsKey(c))
            frequencies[c]++;
        else
            frequencies[c] = 1;
    }

    // Step 2: Create a priority queue (min heap) of nodes
    // Note: PriorityQueue is available in .NET 6+
    PriorityQueue<HuffmanNode, int> priorityQueue = new PriorityQueue<HuffmanNode, int>();

    // Add all characters to the priority queue
    foreach (var entry in frequencies)
    {
        priorityQueue.Enqueue(new HuffmanNode(entry.Key, entry.Value), entry.Value);
    }

    // Step 3: Build the Huffman tree
    // Repeatedly take the two nodes with lowest frequency and create a parent node
    while (priorityQueue.Count > 1)
    {
        // Get the two nodes with lowest frequencies
        HuffmanNode left = priorityQueue.Dequeue();
        HuffmanNode right = priorityQueue.Dequeue();

        // Create a parent node with these two as children
        // The parent's frequency is the sum of the children's frequencies
        // We use '\0' as a placeholder character for internal nodes
        HuffmanNode parent = new HuffmanNode('\0', left.Frequency + right.Frequency)
        {
            Left = left,
            Right = right
        };

        // Add the parent back to the queue
        priorityQueue.Enqueue(parent, parent.Frequency);
    }

    // The last remaining node is the root of the Huffman tree
    HuffmanNode root = priorityQueue.Dequeue();

    // Step 4: Generate Huffman codes by traversing the tree
    Dictionary<char, string> huffmanCodes = new Dictionary<char, string>();
    GenerateHuffmanCodes(root, "", huffmanCodes);

    return huffmanCodes;
}

/// <summary>
/// Recursively generates Huffman codes by traversing the Huffman tree.
/// </summary>
/// <param name="node">Current node in the Huffman tree.</param>
/// <param name="code">Current code (path from root to node).</param>
/// <param name="huffmanCodes">Dictionary to store the generated codes.</param>
private void GenerateHuffmanCodes(HuffmanNode node, string code, Dictionary<char, string> huffmanCodes)
{
    if (node == null)
        return;

    // If this is a leaf node (has a character), store its code
    if (node.IsLeaf)
    {
        huffmanCodes[node.Character] = code.Length > 0 ? code : "0"; // Special case for single character
        return;
    }

    // Traverse left (add 0 to code)
    GenerateHuffmanCodes(node.Left, code + "0", huffmanCodes);

    // Traverse right (add 1 to code)
    GenerateHuffmanCodes(node.Right, code + "1", huffmanCodes);
}

Step-by-Step Explanation

Let's build Huffman codes for the text "ABRACADABRA":

  1. Count frequencies:

    • A: 5
    • B: 2
    • R: 2
    • C: 1
    • D: 1
  2. Create a priority queue with nodes (sorted by frequency):

    • (C, 1), (D, 1), (B, 2), (R, 2), (A, 5)
  3. Build the Huffman tree:

    • Take (C, 1) and (D, 1), create parent (-, 2)
    • Queue: (B, 2), (R, 2), (-, 2), (A, 5)
    • Take (B, 2) and (R, 2), create parent (-, 4)
    • Queue: (-, 2), (-, 4), (A, 5)
    • Take (-, 2) and (-, 4), create parent (-, 6)
    • Queue: (A, 5), (-, 6)
    • Take (A, 5) and (-, 6), create parent (-, 11)
    • Queue: (-, 11)
    • Final tree root: (-, 11)
  4. Generate codes by traversing the tree:

    • A: 0
    • B: 100
    • R: 101
    • C: 110
    • D: 111
  5. The encoded text "ABRACADABRA" becomes:

    • 0 (A) + 100 (B) + 101 (R) + 0 (A) + 110 (C) + 0 (A) + 111 (D) + 0 (A) + 100 (B) + 101 (R) + 0 (A)
    • = 01001010110011101001010

This encoding uses 23 bits instead of the original 88 bits (11 characters × 8 bits), achieving significant compression.
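
An illustrative way to verify that count (assuming BuildHuffmanCodes above is in scope) is to sum the code lengths over the input. Note that the exact 0/1 code assigned to each character can vary with how the priority queue breaks frequency ties, but the total encoded length stays the same:

// Build the codes and measure the encoded size of the example text
Dictionary<char, string> codes = BuildHuffmanCodes("ABRACADABRA");

int encodedBits = 0;
foreach (char c in "ABRACADABRA")
{
    encodedBits += codes[c].Length;
}

Console.WriteLine($"Encoded: {encodedBits} bits vs. 88 bits uncompressed (11 chars x 8 bits)");
// Expected: 23 bits. Individual codes may differ from the table above when
// frequencies tie, but every valid Huffman tree gives the same total length.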

4. Fractional Knapsack

Unlike the 0/1 Knapsack problem (which requires dynamic programming), the Fractional Knapsack problem can be solved using a greedy approach because items can be taken in fractions.

/// <summary>
/// Represents an item with weight and value.
/// </summary>
public class Item
{
    public int Weight { get; set; }
    public int Value { get; set; }

    public Item(int weight, int value)
    {
        Weight = weight;
        Value = value;
    }

    // Value per unit weight (used for greedy choice)
    public double ValuePerWeight => (double)Value / Weight;

    public override string ToString()
    {
        return $"(Weight: {Weight}, Value: {Value}, Value/Weight: {ValuePerWeight:F2})";
    }
}

/// <summary>
/// Solves the Fractional Knapsack problem.
/// </summary>
/// <param name="items">List of items with weights and values.</param>
/// <param name="capacity">Maximum weight capacity of the knapsack.</param>
/// <returns>The maximum value that can be obtained.</returns>
public double FractionalKnapsack(List<Item> items, int capacity)
{
    // Sort items by value per weight in descending order (greedy choice)
    items.Sort((a, b) => b.ValuePerWeight.CompareTo(a.ValuePerWeight));

    double totalValue = 0;
    int remainingCapacity = capacity;

    // For each item, take as much as possible
    foreach (Item item in items)
    {
        if (remainingCapacity >= item.Weight)
        {
            // Take the whole item
            totalValue += item.Value;
            remainingCapacity -= item.Weight;
            Console.WriteLine($"Take 100% of {item} -> Added value: {item.Value}, Remaining capacity: {remainingCapacity}");
        }
        else
        {
            // Take a fraction of the item
            double fraction = (double)remainingCapacity / item.Weight;
            totalValue += fraction * item.Value;
            Console.WriteLine($"Take {fraction:P2} of {item} -> Added value: {fraction * item.Value:F2}, Remaining capacity: 0");
            break; // Knapsack is full
        }
    }

    return totalValue;
}

Step-by-Step Explanation

Let's solve the Fractional Knapsack problem with:

  • Items: [(Weight: 10, Value: 60), (Weight: 20, Value: 100), (Weight: 30, Value: 120)]
  • Capacity: 50
  1. Calculate value per weight for each item:

    • Item 1: 60/10 = 6
    • Item 2: 100/20 = 5
    • Item 3: 120/30 = 4
  2. Sort items by value per weight (descending):

    • [(Weight: 10, Value: 60, Value/Weight: 6), (Weight: 20, Value: 100, Value/Weight: 5), (Weight: 30, Value: 120, Value/Weight: 4)]
  3. Fill the knapsack:

    • Take 100% of Item 1: Added value = 60, Remaining capacity = 40
    • Take 100% of Item 2: Added value = 100, Remaining capacity = 20
    • Take 66.67% of Item 3: Added value = 0.6667 × 120 = 80, Remaining capacity = 0
  4. Total value: 60 + 100 + 80 = 240
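
A short illustrative driver for this walkthrough (assuming the Item class and FractionalKnapsack method above are in scope):

// Build the example items and run the greedy fractional knapsack
var items = new List<Item>
{
    new Item(10, 60),
    new Item(20, 100),
    new Item(30, 120)
};

double maxValue = FractionalKnapsack(items, 50);
Console.WriteLine($"Maximum value: {maxValue}"); // Expected: 240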

5. Minimum Spanning Tree (MST)

A minimum spanning tree is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together with the minimum possible total edge weight.

Prim's Algorithm

/// <summary>
/// Represents a graph using adjacency lists.
/// </summary>
public class Graph
{
    private int V; // Number of vertices
    private List<(int, int)>[] adj; // Adjacency list: (vertex, weight)

    /// <summary>
    /// Initializes a new graph with the specified number of vertices.
    /// </summary>
    /// <param name="v">Number of vertices.</param>
    public Graph(int v)
    {
        V = v;
        adj = new List<(int, int)>[v];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new List<(int, int)>();
        }
    }

    /// <summary>
    /// Adds an edge to the graph.
    /// </summary>
    /// <param name="u">First vertex.</param>
    /// <param name="v">Second vertex.</param>
    /// <param name="weight">Edge weight.</param>
    public void AddEdge(int u, int v, int weight)
    {
        adj[u].Add((v, weight));
        adj[v].Add((u, weight)); // For undirected graph
    }

    /// <summary>
    /// Finds the minimum spanning tree using Prim's algorithm.
    /// </summary>
    /// <returns>List of edges in the MST: (u, v, weight).</returns>
    public List<(int, int, int)> PrimMST()
    {
        // Result will store the MST edges: (u, v, weight)
        List<(int, int, int)> result = new List<(int, int, int)>();

        // Key values used to pick minimum weight edge
        int[] key = new int[V];

        // To represent set of vertices included in MST
        bool[] mstSet = new bool[V];

        // Parent array to store MST
        int[] parent = new int[V];

        // Initialize all keys as INFINITE
        for (int i = 0; i < V; i++)
        {
            key[i] = int.MaxValue;
            mstSet[i] = false;
        }

        // Always include first vertex in MST
        key[0] = 0; // Make key 0 so that this vertex is picked as first vertex
        parent[0] = -1; // First node is always root of MST

        // The MST will have V-1 edges
        for (int count = 0; count < V - 1; count++)
        {
            // Pick the minimum key vertex from the set of vertices not yet included in MST
            int u = MinKey(key, mstSet);

            // Add the picked vertex to the MST Set
            mstSet[u] = true;

            // Update key value and parent index of the adjacent vertices of the picked vertex
            foreach (var edge in adj[u])
            {
                int v = edge.Item1;
                int weight = edge.Item2;

                // Update key only if v is not in mstSet, there is an edge from u to v,
                // and weight of (u,v) is smaller than current key of v
                if (!mstSet[v] && weight < key[v])
                {
                    parent[v] = u;
                    key[v] = weight;
                }
            }
        }

        // Construct the MST
        for (int i = 1; i < V; i++)
        {
            // Find the weight of edge parent[i] - i
            int weight = adj[i].Find(edge => edge.Item1 == parent[i]).Item2;
            result.Add((parent[i], i, weight));
        }

        return result;
    }

    /// <summary>
    /// Finds the vertex with the minimum key value.
    /// </summary>
    /// <param name="key">Array of key values.</param>
    /// <param name="mstSet">Set of vertices included in MST.</param>
    /// <returns>The vertex with the minimum key value.</returns>
    private int MinKey(int[] key, bool[] mstSet)
    {
        int min = int.MaxValue;
        int minIndex = -1;

        for (int v = 0; v < V; v++)
        {
            if (!mstSet[v] && key[v] < min)
            {
                min = key[v];
                minIndex = v;
            }
        }

        return minIndex;
    }
}

Step-by-Step Explanation of Prim's Algorithm

Let's trace through Prim's algorithm on a small graph:

  • Vertices: 0, 1, 2, 3
  • Edges: (0-1, weight 10), (0-2, weight 6), (0-3, weight 5), (1-3, weight 15), (2-3, weight 4)
  1. Initialize:

    • key = [0, ∞, ∞, ∞]
    • mstSet = [false, false, false, false]
    • parent = [-1, -, -, -]
  2. Iteration 1:

    • Select vertex 0 (min key)
    • mstSet = [true, false, false, false]
    • Update neighbors:
      • key[1] = 10, parent[1] = 0
      • key[2] = 6, parent[2] = 0
      • key[3] = 5, parent[3] = 0
    • key = [0, 10, 6, 5]
  3. Iteration 2:

    • Select vertex 3 (min key)
    • mstSet = [true, false, false, true]
    • Update neighbors:
      • key[1] = min(10, 15) = 10, parent[1] = 0
      • key[2] = min(6, 4) = 4, parent[2] = 3
    • key = [0, 10, 4, 5]
  4. Iteration 3:

    • Select vertex 2 (min key)
    • mstSet = [true, false, true, true]
    • No updates needed
    • key = [0, 10, 4, 5]
  5. Final MST:

    • Edges: (0-3, weight 5), (3-2, weight 4), (0-1, weight 10)
    • Total weight: 19
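
To reproduce this trace, an illustrative driver (assuming the adjacency-list Graph class above is in scope) could look like this:

// Build the example graph and run Prim's algorithm
Graph graph = new Graph(4);
graph.AddEdge(0, 1, 10);
graph.AddEdge(0, 2, 6);
graph.AddEdge(0, 3, 5);
graph.AddEdge(1, 3, 15);
graph.AddEdge(2, 3, 4);

foreach (var (u, v, weight) in graph.PrimMST())
{
    Console.WriteLine($"{u} - {v}: {weight}");
}
// Expected output (in parent-array order): 0 - 1: 10, 3 - 2: 4, 0 - 3: 5 (total weight 19)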

Kruskal's Algorithm

/// <summary>
/// Implements a disjoint-set data structure.
/// </summary>
public class DisjointSet
{
    private int[] parent;
    private int[] rank;

    /// <summary>
    /// Initializes a new disjoint-set with n elements.
    /// </summary>
    /// <param name="n">Number of elements.</param>
    public DisjointSet(int n)
    {
        parent = new int[n];
        rank = new int[n];

        for (int i = 0; i < n; i++)
        {
            parent[i] = i; // Each element is initially its own parent
            rank[i] = 0;
        }
    }

    /// <summary>
    /// Finds the representative (root) of the set containing x.
    /// </summary>
    /// <param name="x">The element to find.</param>
    /// <returns>The representative of the set.</returns>
    public int Find(int x)
    {
        if (parent[x] != x)
        {
            parent[x] = Find(parent[x]); // Path compression
        }
        return parent[x];
    }

    /// <summary>
    /// Merges the sets containing x and y.
    /// </summary>
    /// <param name="x">First element.</param>
    /// <param name="y">Second element.</param>
    public void Union(int x, int y)
    {
        int rootX = Find(x);
        int rootY = Find(y);

        if (rootX == rootY)
            return;

        // Union by rank
        if (rank[rootX] < rank[rootY])
        {
            parent[rootX] = rootY;
        }
        else if (rank[rootX] > rank[rootY])
        {
            parent[rootY] = rootX;
        }
        else
        {
            parent[rootY] = rootX;
            rank[rootX]++;
        }
    }
}

/// <summary>
/// Represents a graph using an edge list.
/// </summary>
public class Graph
{
    private int V; // Number of vertices
    private List<(int, int, int)> edges; // Edge list: (u, v, weight)

    /// <summary>
    /// Initializes a new graph with the specified number of vertices.
    /// </summary>
    /// <param name="v">Number of vertices.</param>
    public Graph(int v)
    {
        V = v;
        edges = new List<(int, int, int)>();
    }

    /// <summary>
    /// Adds an edge to the graph.
    /// </summary>
    /// <param name="u">First vertex.</param>
    /// <param name="v">Second vertex.</param>
    /// <param name="weight">Edge weight.</param>
    public void AddEdge(int u, int v, int weight)
    {
        edges.Add((u, v, weight));
    }

    /// <summary>
    /// Finds the minimum spanning tree using Kruskal's algorithm.
    /// </summary>
    /// <returns>List of edges in the MST: (u, v, weight).</returns>
    public List<(int, int, int)> KruskalMST()
    {
        List<(int, int, int)> result = new List<(int, int, int)>();

        // Sort edges by weight (greedy choice)
        edges.Sort((a, b) => a.Item3.CompareTo(b.Item3));

        // Create disjoint set
        DisjointSet ds = new DisjointSet(V);

        // Process edges in ascending order of weight
        foreach (var edge in edges)
        {
            int u = edge.Item1;
            int v = edge.Item2;
            int weight = edge.Item3;

            int rootU = ds.Find(u);
            int rootV = ds.Find(v);

            // If including this edge doesn't cause a cycle
            if (rootU != rootV)
            {
                result.Add(edge);
                ds.Union(rootU, rootV);
            }
        }

        return result;
    }
}

Step-by-Step Explanation of Kruskal's Algorithm

Let's trace through Kruskal's algorithm on the same graph:

  • Vertices: 0, 1, 2, 3
  • Edges: (0-1, weight 10), (0-2, weight 6), (0-3, weight 5), (1-3, weight 15), (2-3, weight 4)
  1. Sort edges by weight:

    • (2-3, weight 4)
    • (0-3, weight 5)
    • (0-2, weight 6)
    • (0-1, weight 10)
    • (1-3, weight 15)
  2. Initialize disjoint sets:

    • Each vertex starts in its own set: {0}, {1}, {2}, {3}
  3. Process edges:

    • Consider (2-3, weight 4):
      • 2 and 3 are in different sets, so add the edge
      • Sets after union: {0}, {1}, {2, 3}
    • Consider (0-3, weight 5):
      • 0 and 3 are in different sets, so add the edge
      • Sets after union: {0, 2, 3}, {1}
    • Consider (0-2, weight 6):
      • 0 and 2 are in the same set, so skip (would create a cycle)
    • Consider (0-1, weight 10):
      • 0 and 1 are in different sets, so add the edge
      • Sets after union: {0, 1, 2, 3}
    • Consider (1-3, weight 15):
      • 1 and 3 are in the same set, so skip (would create a cycle)
  4. Final MST:

    • Edges: (2-3, weight 4), (0-3, weight 5), (0-1, weight 10)
    • Total weight: 19
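
To reproduce this trace, an illustrative driver (assuming the edge-list Graph class and DisjointSet above are in scope) could look like this:

// Build the same example graph and run Kruskal's algorithm
Graph graph = new Graph(4);
graph.AddEdge(0, 1, 10);
graph.AddEdge(0, 2, 6);
graph.AddEdge(0, 3, 5);
graph.AddEdge(1, 3, 15);
graph.AddEdge(2, 3, 4);

int totalWeight = 0;
foreach (var (u, v, weight) in graph.KruskalMST())
{
    Console.WriteLine($"{u} - {v}: {weight}");
    totalWeight += weight;
}
Console.WriteLine($"Total weight: {totalWeight}");
// Expected edges (in sorted order): 2 - 3: 4, 0 - 3: 5, 0 - 1: 10; total weight 19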

When to Use Greedy Algorithms

Greedy algorithms are suitable for problems with the following characteristics:

  1. Greedy Choice Property: A global optimum can be reached by making locally optimal choices.
  2. Optimal Substructure: The optimal solution to the problem contains optimal solutions to its subproblems.
  3. No Need for Backtracking: Once a choice is made, it doesn't need to be reconsidered.

How to Identify if a Greedy Approach Works

To determine if a greedy approach will work for a problem:

  1. Formulate the greedy choice: Identify what "locally optimal" means for your problem.
  2. Prove the greedy choice property: Show that making the greedy choice at each step leads to a globally optimal solution.
  3. Prove optimal substructure: Show that the optimal solution to the problem contains optimal solutions to its subproblems.

If you can't prove these properties, a greedy approach might not work, and you should consider other techniques like dynamic programming.

Greedy vs. Dynamic Programming

| Greedy | Dynamic Programming |
| --- | --- |
| Makes the locally optimal choice at each step | Considers all possible choices and picks the best one |
| Doesn't reconsider previous choices | Builds a table of subproblem solutions |
| Usually more efficient | Usually more time and space intensive |
| Works only for problems with the greedy choice property | Works for a wider range of problems |
| Examples: Activity Selection, Huffman Coding | Examples: 0/1 Knapsack, Longest Common Subsequence |

Common Pitfalls

  1. Assuming Greedy Works: Not all problems can be solved optimally with a greedy approach. Always verify that the problem has the greedy choice property.

  2. Incorrect Greedy Choice: Choosing the wrong metric for the greedy choice can lead to suboptimal solutions.

  3. Overlooking Edge Cases: Greedy algorithms might fail for certain edge cases. Always test with various inputs.

Examples of Problems Where Greedy Fails

1. 0/1 Knapsack Problem

In the 0/1 Knapsack problem, items cannot be taken in fractions. A greedy approach (taking items with the highest value-to-weight ratio first) doesn't always yield the optimal solution.

Example:

  • Items: (weight=10, value=60), (weight=20, value=100), (weight=30, value=120)
  • Capacity: 50
  • Greedy solution (by value/weight): Take items 1 and 2 for a total value of 160
  • Optimal solution: Take items 2 and 3 for a total value of 220
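
For comparison, here is a minimal dynamic-programming sketch that finds the optimal value of 220. It is illustrative only: it reuses the Item class defined in the Fractional Knapsack section, and the method name Knapsack01 is not part of the chapter's code.

/// <summary>
/// Solves the 0/1 Knapsack problem with dynamic programming
/// (illustrative sketch; reuses the Item class defined earlier).
/// </summary>
public int Knapsack01(List<Item> items, int capacity)
{
    // dp[w] = best value achievable with capacity w using the items considered so far
    int[] dp = new int[capacity + 1];

    foreach (Item item in items)
    {
        // Iterate capacities downwards so each item is used at most once
        for (int w = capacity; w >= item.Weight; w--)
        {
            dp[w] = Math.Max(dp[w], dp[w - item.Weight] + item.Value);
        }
    }

    return dp[capacity];
}

// Example: for the three items above and capacity 50, Knapsack01 returns 220
// (items 2 and 3), while the greedy value/weight ordering yields only 160.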

2. Coin Change (with arbitrary denominations)

For some coin systems, the greedy approach doesn't yield the minimum number of coins.

Example:

  • Denominations: 1, 3, 4
  • Amount: 6
  • Greedy solution: 4 + 1 + 1 = 3 coins
  • Optimal solution: 3 + 3 = 2 coins

Conclusion

Greedy algorithms are a powerful tool for solving optimization problems when the problem has the greedy choice property and optimal substructure. They are often simpler and more efficient than other approaches like dynamic programming or backtracking. However, they don't always yield the optimal solution, so it's important to verify that the greedy approach works for your specific problem before implementing it.

When designing a greedy algorithm, the key is to identify the locally optimal choice at each step. This often involves sorting the input or using a priority queue to select the best option at each step. Once you've identified the greedy strategy, implementing the algorithm is usually straightforward.

Remember the key steps:

  1. Identify the greedy choice
  2. Make that choice
  3. Reduce to a smaller problem
  4. Repeat until the problem is solved
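
Most of the algorithms in this chapter follow the same code shape. The generic sketch below is illustrative only (the helper GreedySolve is not part of the chapter's code); it shows how those four steps typically translate into C#:

/// <summary>
/// A generic greedy template: order candidates by the greedy criterion,
/// then commit to each feasible candidate exactly once (illustrative sketch).
/// </summary>
public List<T> GreedySolve<T>(
    List<T> candidates,
    Comparison<T> byGreedyCriterion,
    Func<List<T>, T, bool> isFeasible)
{
    // Step 1: Identify the greedy choice by ordering the candidates
    candidates.Sort(byGreedyCriterion);

    var solution = new List<T>();

    // Steps 2-4: Make the choice, shrink the problem, repeat
    foreach (T candidate in candidates)
    {
        if (isFeasible(solution, candidate))
        {
            solution.Add(candidate); // Committed choices are never reconsidered
        }
    }

    return solution;
}

// Example: activity selection fits this template with
//   byGreedyCriterion = (a, b) => a.Finish.CompareTo(b.Finish)
//   isFeasible = (chosen, a) => chosen.Count == 0 || a.Start >= chosen[chosen.Count - 1].Finish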