
2 - Understanding Big O Notation

Big O notation is a mathematical tool used in computer science to describe the performance or complexity of an algorithm. It expresses how an algorithm's runtime or space requirements grow as the input size increases.

🔰 Beginner's Corner: What is Big O Notation?

Think of Big O notation as a way to measure how long it takes to complete a task as the task gets bigger:

ALGORITHM EFFICIENCY VISUALIZED

Time
 ↑                        O(n²)
 │                      ↗
 │                   ↗
 │                ↗
 │             ↗            O(n)
 │          ↗            ↗
 │       ↗           ↗
 │    ↗          ↗
 │  ↗        ↗
 │ ↗     ↗
 │↗  ↗
 └─────────────────────────────→ Input size (n)

💡 Concept Breakdown: Big O in Plain English

Big O notation answers the question: "How does the time (or space) required by this algorithm grow as the input gets larger?"

Think of it like these real-world examples:

  1. O(1) - Constant Time: Like finding a book when you know exactly which shelf it's on. No matter how many books are in the library, it takes the same amount of time.

  2. O(log n) - Logarithmic Time: Like finding a word in a dictionary. You can skip half the pages each time because the words are sorted.

  3. O(n) - Linear Time: Like reading every page in a book. If the book has twice as many pages, it takes twice as long.

  4. O(n²) - Quadratic Time: Like checking if any students in a class have the same birthday. With twice as many students, it takes four times as long.

⚠️ Common Misconceptions

  1. "Big O measures exact runtime" - It doesn't! It measures how runtime grows as input size increases.
  2. "Lower Big O always means faster code" - For small inputs, an O(n²) algorithm might actually run faster than an O(n) algorithm due to other factors.
  3. "Big O only matters for huge datasets" - Even with moderate data sizes, the difference between O(n) and O(n²) can be significant.

Why Big O Matters

Understanding Big O notation helps you:

  1. Compare algorithms: Determine which algorithm is more efficient for a specific problem.
  2. Predict performance: Estimate how an algorithm will perform with large inputs.
  3. Make informed decisions: Choose the right data structure or algorithm for your specific needs.
  4. Optimize code: Identify and eliminate inefficiencies in your implementations.

Common Big O Complexities

Here are the most common time complexities, ordered from most efficient to least efficient:

O(1) - Constant Time

An algorithm with O(1) complexity performs the same number of operations regardless of the input size.

/// <summary>
/// Checks if the first element of an array is null.
/// This operation has O(1) time complexity as it always performs
/// exactly one operation regardless of the array size.
/// </summary>
/// <param name="strings">The array of strings to check.</param>
/// <returns>True if the first element is null, otherwise false.</returns>
public bool IsFirstElementNull(string[] strings)
{
    // This single operation takes the same amount of time
    // regardless of whether the array has 10 or 10,000 elements
    return strings[0] == null;
}

O(log n) - Logarithmic Time

The number of operations grows logarithmically with the input size. Binary search is a classic example.

/// <summary>
/// Performs a binary search on a sorted array to find a target value.
/// This algorithm has O(log n) time complexity because it divides the search space in half with each step.
/// </summary>
/// <param name="sortedArray">A sorted array of integers to search in.</param>
/// <param name="target">The value to search for.</param>
/// <returns>The index of the target if found, otherwise -1.</returns>
public int BinarySearch(int[] sortedArray, int target)
{
    // Initialize the search boundaries
    int left = 0;
    int right = sortedArray.Length - 1;

    // Continue searching while there are elements to check
    while (left <= right)
    {
        // Calculate the middle index
        // Using (left + right) / 2 can cause integer overflow for large arrays
        // This formula avoids that problem
        int mid = left + (right - left) / 2;

        // If we found the target, return its index
        if (sortedArray[mid] == target)
            return mid;

        // If the target is greater than the middle element,
        // search in the right half of the current range
        if (sortedArray[mid] < target)
            left = mid + 1;
        // Otherwise, search in the left half
        else
            right = mid - 1;
    }

    // If we get here, the target was not found
    return -1;
}

O(n) - Linear Time

The number of operations grows linearly with the input size.

/// <summary>
/// Finds the maximum value in an array of integers.
/// This algorithm has O(n) time complexity because it needs to check each element once.
/// </summary>
/// <param name="numbers">The array of integers to search.</param>
/// <returns>The maximum value found in the array.</returns>
public int FindMax(int[] numbers)
{
    // Start by assuming the first element is the maximum
    int max = numbers[0];

    // Check each remaining element in the array
    for (int i = 1; i < numbers.Length; i++)
    {
        // If we find a larger element, update our maximum
        if (numbers[i] > max)
            max = numbers[i];
    }

    // Return the maximum value found
    return max;
}

O(n log n) - Linearithmic Time

Common in efficient sorting algorithms like merge sort and quicksort (which is O(n log n) on average).

/// <summary>
/// Sorts an array using the Merge Sort algorithm.
/// This algorithm has O(n log n) time complexity because it divides the array log(n) times
/// and performs O(n) work at each level to merge the subarrays.
/// </summary>
/// <param name="array">The array to sort.</param>
/// <returns>A new sorted array.</returns>
public int[] MergeSort(int[] array)
{
    // Base case: arrays with 0 or 1 elements are already sorted
    if (array.Length <= 1)
        return array;

    // DIVIDE: Split the array into two halves
    int middle = array.Length / 2;
    int[] left = new int[middle];
    int[] right = new int[array.Length - middle];

    // Copy elements to the left and right arrays
    Array.Copy(array, 0, left, 0, middle);
    Array.Copy(array, middle, right, 0, array.Length - middle);

    // CONQUER: Recursively sort both halves
    left = MergeSort(left);
    right = MergeSort(right);

    // COMBINE: Merge the sorted halves
    return Merge(left, right);
}

/// <summary>
/// Merges two sorted arrays into a single sorted array.
/// </summary>
/// <param name="left">First sorted array.</param>
/// <param name="right">Second sorted array.</param>
/// <returns>A merged sorted array.</returns>
private int[] Merge(int[] left, int[] right)
{
    // Create a result array to hold the merged elements
    int[] result = new int[left.Length + right.Length];
    int leftIndex = 0, rightIndex = 0, resultIndex = 0;

    // Compare elements from both arrays and take the smaller one
    while (leftIndex < left.Length && rightIndex < right.Length)
    {
        if (left[leftIndex] <= right[rightIndex])
            result[resultIndex++] = left[leftIndex++];
        else
            result[resultIndex++] = right[rightIndex++];
    }

    // Copy any remaining elements from the left array
    while (leftIndex < left.Length)
        result[resultIndex++] = left[leftIndex++];

    // Copy any remaining elements from the right array
    while (rightIndex < right.Length)
        result[resultIndex++] = right[rightIndex++];

    return result;
}

O(n²) - Quadratic Time

Common in nested loops and simple sorting algorithms like bubble sort.

/// <summary>
/// Sorts an array using the Bubble Sort algorithm.
/// This algorithm has O(n²) time complexity because it uses nested loops,
/// comparing and potentially swapping each pair of elements.
/// </summary>
/// <param name="array">The array to sort.</param>
public void BubbleSort(int[] array)
{
    // Outer loop: each pass places one more element in its final position
    for (int i = 0; i < array.Length; i++)
    {
        // Inner loop: compare adjacent elements and swap if they're in the wrong order
        // Note: array.Length - 1 is used because we compare each element with the next one
        // Note: array.Length - 1 - i could be used instead, since the last i elements are already sorted
        for (int j = 0; j < array.Length - 1; j++)
        {
            // If the current element is greater than the next element, swap them
            if (array[j] > array[j + 1])
            {
                // Swap the elements using a temporary variable
                int temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
            }
        }
    }
}

O(2ⁿ) - Exponential Time

The number of operations doubles with each additional input element. Often seen in recursive algorithms that solve problems of size n by solving two subproblems of size n-1.

/// <summary>
/// Calculates the nth Fibonacci number using a naive recursive approach.
/// This algorithm has O(2^n) time complexity because each call branches into two more calls,
/// leading to an exponential number of function calls.
/// </summary>
/// <param name="n">The position in the Fibonacci sequence (0-based).</param>
/// <returns>The nth Fibonacci number.</returns>
public int Fibonacci(int n)
{
    // Base cases: F(0) = 0, F(1) = 1
    if (n <= 1)
        return n;

    // Recursive case: F(n) = F(n-1) + F(n-2)
    // This creates a binary tree of recursive calls, leading to exponential time complexity
    return Fibonacci(n - 1) + Fibonacci(n - 2);

    // Note: This implementation is inefficient for large values of n
    // A dynamic programming approach would be much more efficient
}
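As the note above suggests, caching previously computed values avoids the exponential blow-up. Here is a minimal sketch of that memoized variant (the `FibonacciMemoized` name and the dictionary cache are illustrative additions, not part of the original example); each value is computed once, so the time complexity drops to O(n):

```csharp
using System.Collections.Generic;

/// <summary>
/// Calculates the nth Fibonacci number using memoization.
/// Each value is computed at most once, giving O(n) time complexity
/// (at the cost of O(n) space for the cache and recursion).
/// </summary>
public int FibonacciMemoized(int n, Dictionary<int, int> cache = null)
{
    // Create the cache on the first call
    cache ??= new Dictionary<int, int>();

    // Base cases: F(0) = 0, F(1) = 1
    if (n <= 1)
        return n;

    // Return the cached result if this value was already computed
    if (cache.TryGetValue(n, out int cached))
        return cached;

    // Compute, cache, and return the result
    int result = FibonacciMemoized(n - 1, cache) + FibonacciMemoized(n - 2, cache);
    cache[n] = result;
    return result;
}
```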

O(n!) - Factorial Time

The number of operations grows factorially with the input size. Common in algorithms that generate all permutations.

/// <summary>
/// Generates all permutations of a character array.
/// This algorithm has O(n!) time complexity because it generates all possible
/// arrangements of n elements, which is n! (n factorial) permutations.
/// </summary>
/// <param name="chars">The character array to permute.</param>
/// <param name="start">The starting index for the current permutation step.</param>
/// <param name="end">The ending index of the array.</param>
public void GeneratePermutations(char[] chars, int start, int end)
{
    // Base case: If we've reached the end of the array, print the current permutation
    if (start == end)
    {
        Console.WriteLine(new string(chars));
        return;
    }

    // Try each possible character at the current position
    for (int i = start; i <= end; i++)
    {
        // Swap the current position with each possible character
        // This places the character at position 'i' into position 'start'
        char temp = chars[start];
        chars[start] = chars[i];
        chars[i] = temp;

        // Recursively generate permutations for the remaining positions
        GeneratePermutations(chars, start + 1, end);

        // Backtrack: restore the array to its original state
        // This is necessary to try the next character at the current position
        temp = chars[start];
        chars[start] = chars[i];
        chars[i] = temp;
    }

    // For a string of length n, this will generate exactly n! permutations
    // For example, "abc" (n=3) generates 3! = 6 permutations:
    // abc, acb, bac, bca, cab, cba
}

Big O Rules of Thumb

  1. Drop the constants: O(2n) is simplified to O(n).
  2. Drop the non-dominant terms: O(n² + n) is simplified to O(n²).
  3. Different inputs, different variables: If an algorithm takes two different inputs, use different variables (e.g., O(n + m)).
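To illustrate rules 1 and 3, here is a hypothetical method (the `SumBothArrays` name is an illustrative addition) that walks two independent arrays. Each loop contributes its own term, and the constants are dropped:

```csharp
/// <summary>
/// Sums the elements of two independent arrays.
/// One pass over each array gives O(n + m) time complexity,
/// where n and m are the lengths of the two inputs.
/// </summary>
public int SumBothArrays(int[] first, int[] second)
{
    int total = 0;

    // O(n): one pass over the first array
    foreach (int value in first)
        total += value;

    // O(m): one pass over the second array
    foreach (int value in second)
        total += value;

    // Sequential loops add their terms: O(n) + O(m) = O(n + m)
    // Nesting the loops would multiply them instead: O(n * m)
    return total;
}
```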

Space Complexity

Big O notation also applies to space complexity, which measures the amount of memory an algorithm uses relative to the input size.

/// <summary>
/// Creates a new array with each element doubled from the input array.
/// This function has O(n) space complexity because it creates a new array
/// of the same size as the input array.
/// </summary>
/// <param name="array">The input array.</param>
/// <returns>A new array with doubled values.</returns>
public int[] DoubleArray(int[] array)
{
    // Create a new array of the same size as the input array
    // This is what gives us O(n) space complexity
    int[] result = new int[array.Length];

    // Populate the new array with doubled values
    for (int i = 0; i < array.Length; i++)
    {
        result[i] = array[i] * 2;
    }

    // Return the new array
    return result;

    // Note: The space complexity is O(n) because the size of the additional
    // memory used (the result array) grows linearly with the input size
}
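For contrast, modifying the array in place needs only a constant amount of extra memory, giving O(1) space complexity. A quick sketch (the `DoubleArrayInPlace` name is an illustrative addition, and it assumes the caller is happy for the input to be mutated):

```csharp
/// <summary>
/// Doubles each element of the array in place.
/// This has O(1) space complexity because it allocates no additional
/// memory that grows with the input size.
/// </summary>
public void DoubleArrayInPlace(int[] array)
{
    for (int i = 0; i < array.Length; i++)
    {
        // Overwrite each element; the loop counter is the only extra memory
        array[i] = array[i] * 2;
    }
}
```

Note that the time complexity is still O(n) in both versions; only the memory behavior differs.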

Practical Considerations

While Big O notation is a powerful tool for analyzing algorithms, it's important to remember:

  1. Constants matter in practice: An O(n) algorithm with a large constant factor might be slower than an O(n²) algorithm for small inputs.
  2. Average case vs. worst case: Big O typically describes the worst-case scenario, but average-case performance might be more relevant in practice.
  3. Input characteristics: The actual performance might depend on specific characteristics of the input data.

Understanding Big O notation is essential for writing efficient code and making informed decisions about algorithm and data structure choices.