Understanding Big O Notation
My introduction to Big O notation for technical interviews
“Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.”
– Wikipedia
Why do I need Big O notation in interviews?
When it comes to deciding whether code is good or not, we can analyze it in two main aspects:
- How easy is it to understand?
- How well does it scale in space and/or time?
Big O answers the second question.
In computer science, Big O expresses how the runtime or memory usage of an algorithm grows as the input grows.
It’s important to note that in most interview scenarios we look at the worst case. Even if we might get lucky and find an item immediately, we assume the input is arranged in the least favorable way.
Even if interviewers don’t explicitly ask you to write the Big O of your solution, you will almost always need to solve problems in the most scalable and efficient way. Knowing how to evaluate code through this lens is essential.
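To make the worst-case idea concrete, here is a minimal sketch (the function name and sample values are mine, purely for illustration): a linear search may get lucky and find the target at index 0, but Big O describes the run where it has to scan every element.

```javascript
// Hypothetical example: searching for a value in an unsorted array.
function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i; // best case: found immediately
  }
  return -1; // worst case: we inspected all n elements
}

linearSearch([7, 2, 9], 7); // lucky run, 1 comparison
linearSearch([7, 2, 9], 5); // worst case, n comparisons
```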
So let’s talk about the most common complexities:
O(1) - Constant time
If we grow the size of the input, the time stays the same
A constant-time algorithm performs the same number of operations no matter how large the input is.
For example, if you want to get the first item of an array, the size of the array doesn’t matter.
```javascript
function getFirstItem(array) {
  return array[0];
}
```
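Array index access isn’t the only constant-time pattern: looking up a key in an object (a hash table) is also O(1) on average. A small sketch with made-up values:

```javascript
// Hash lookups don't rescan existing entries, so access time
// stays flat (on average) as the object grows.
const ages = { ada: 36, alan: 41, grace: 85 };
console.log(ages["alan"]); // 41, regardless of how many keys exist
```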
O(log n) - Logarithmic time
If we grow the size of the input, the time grows far more slowly than the input
Logarithmic time appears when the algorithm reduces the search space at each step. The classic example is binary search: each comparison cuts the remaining range in half and keeps only the half that can contain the target, so even a million items take only about 20 comparisons.
```javascript
// Works only on a sorted array: each comparison discards half the range.
function binarySearch(arr, x) {
  let start = 0, end = arr.length - 1;
  while (start <= end) {
    const mid = Math.floor((start + end) / 2);
    if (arr[mid] === x) return true;        // found it
    else if (arr[mid] < x) start = mid + 1; // target must be in the right half
    else end = mid - 1;                     // target must be in the left half
  }
  return false; // range is empty: x is not in the array
}
```
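A quick usage sketch (the sample array is mine); note that binary search only works on sorted input:

```javascript
const sorted = [1, 3, 5, 7, 9, 11, 13];
console.log(binarySearch(sorted, 9)); // true, found in 3 steps
console.log(binarySearch(sorted, 4)); // false, after 3 steps
```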
O(n) - Linear time
If we grow the size of the input, the time grows at the same rate as the input
When you need to access every item of the input, you have linear time complexity.
```javascript
function logItems(array) {
  for (const item of array) {
    console.log(item);
  }
}
```
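Any “touch every element once” task fits here, not just logging. For instance, finding the maximum (a sketch of mine):

```javascript
// Finding the maximum also requires visiting every element once: O(n).
function findMax(array) {
  let max = array[0];
  for (const item of array) {
    if (item > max) max = item;
  }
  return max;
}

findMax([3, 9, 4]); // 9, after n comparisons
```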
O(n log n) - Quasilinear time
If we grow the size of the input, the time grows slightly faster than linear
O(n log n) appears when the algorithm needs to process every element and also perform a logarithmic number of operations per element. This usually happens when the input is repeatedly divided into smaller parts and each part is processed: the halving produces about log n levels, and each level does O(n) work in total.
The most common examples are efficient sorting algorithms like Merge Sort, Heap Sort, and Quick Sort (average case).
Below is an example of Merge Sort, a classic O(n log n) algorithm:
```javascript
// Merge two sorted subarrays arr[left..mid] and arr[mid+1..right] in place.
function merge(arr, left, mid, right) {
  const n1 = mid - left + 1;
  const n2 = right - mid;

  // Copy both halves into temporary arrays.
  const L = new Array(n1);
  const R = new Array(n2);
  for (let i = 0; i < n1; i++) L[i] = arr[left + i];
  for (let j = 0; j < n2; j++) R[j] = arr[mid + 1 + j];

  // Walk through both halves, always taking the smaller front element.
  let i = 0, j = 0, k = left;
  while (i < n1 && j < n2) {
    if (L[i] <= R[j]) {
      arr[k] = L[i];
      i++;
    } else {
      arr[k] = R[j];
      j++;
    }
    k++;
  }

  // Copy any leftovers (only one of these loops ever runs).
  while (i < n1) {
    arr[k] = L[i];
    i++;
    k++;
  }
  while (j < n2) {
    arr[k] = R[j];
    j++;
    k++;
  }
}

// Recursively split the array in half (about log n levels),
// then merge the sorted halves back together (O(n) work per level).
function mergeSort(arr, left, right) {
  if (left >= right) return; // base case: zero or one element
  const mid = Math.floor(left + (right - left) / 2);
  mergeSort(arr, left, mid);
  mergeSort(arr, mid + 1, right);
  merge(arr, left, mid, right);
}

const arr = [38, 27, 43, 10];
mergeSort(arr, 0, arr.length - 1);
console.log(arr.join(" ")); // 10 27 38 43
```
O(n²) - Quadratic time
If we grow the size of the input, the time grows proportionally to the square of the input size
Quadratic complexity usually appears when we need to compare each element of the input with every other element. This typically happens when there are nested loops, where for each item we iterate over the entire array again.
```javascript
// Classic bubble sort: compare adjacent elements and swap them when
// out of order. Each pass "bubbles" the largest remaining value to
// the end, so the inner loop shrinks by one each time.
function bubbleSort(array) {
  for (let i = 0; i < array.length - 1; i++) {
    for (let j = 0; j < array.length - 1 - i; j++) {
      if (array[j] > array[j + 1]) {
        const aux = array[j];
        array[j] = array[j + 1];
        array[j + 1] = aux;
      }
    }
  }
}
```
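A quick check with sample values of my own:

```javascript
const numbers = [5, 1, 4, 2, 8];
bubbleSort(numbers);
console.log(numbers); // [1, 2, 4, 5, 8]
```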
O(2ⁿ) - Exponential time
If we grow the size of the input, the time roughly doubles for each additional element
Exponential complexity usually appears in brute-force algorithms where all possible combinations or states must be explored. This happens often in combinatorics, subset generation, decision trees, and some areas of cryptography.
A classic example is the naive recursive Fibonacci implementation, where each call generates two more calls:
```javascript
function fibonacci(n) {
  if (n <= 1) {
    return n; // base cases: fib(0) = 0, fib(1) = 1
  }
  // Two recursive calls per invocation, so the call tree
  // roughly doubles with every additional n.
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```
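To feel the blow-up, try it with small inputs (the sample calls are mine); every extra unit of n roughly doubles the work:

```javascript
console.log(fibonacci(10)); // 55
console.log(fibonacci(30)); // 832040, already over a million recursive calls
```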
O(n!) - Factorial time
Factorial time appears when we need to generate every possible permutation of the input. This complexity is extremely expensive and grows faster than any other common complexity.
In other words, if you have 5 items, you need to evaluate 5 × 4 × 3 × 2 × 1 = 120 possibilities. If you have 10 items, that becomes 10! = 3,628,800 possibilities.
A classic example is generating every permutation of an array:
```javascript
function permute(arr) {
  const result = [];

  // Build permutations one element at a time: `current` is the partial
  // permutation, `remaining` holds the elements not yet used.
  function backtrack(current, remaining) {
    if (remaining.length === 0) {
      result.push(current); // every element placed: one full permutation
      return;
    }
    for (let i = 0; i < remaining.length; i++) {
      const next = current.concat(remaining[i]);
      const rest = remaining.filter((_, index) => index !== i);
      backtrack(next, rest);
    }
  }

  backtrack([], arr);
  return result; // n! permutations in total
}
```
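A usage sketch with a tiny input of mine; even 3 items already produce 3! = 6 results:

```javascript
const perms = permute([1, 2, 3]);
console.log(perms.length); // 6 (= 3!)
console.log(perms[0]);     // [1, 2, 3]
```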
Big O Complexity Summary
| Complexity | Name | Description | Common Examples |
|---|---|---|---|
| O(1) | Constant | Time doesn’t change with input size | Hash table lookup, array index access |
| O(log n) | Logarithmic | Time grows slowly as input grows | Binary search, balanced trees |
| O(n) | Linear | Time grows at the same rate as input | Iterating through an array |
| O(n log n) | Quasilinear | Grows slightly faster than linear, far slower than quadratic | Merge Sort, Heap Sort, Quick Sort (avg) |
| O(n²) | Quadratic | Time grows with the square of the input | Nested loops, Bubble Sort |
| O(2ⁿ) | Exponential | Time doubles for each additional input element | Recursive Fibonacci, brute-force search |
| O(n!) | Factorial | Evaluates all permutations | Permutations, Traveling Salesman brute-force |
Visual Growth Comparison
```mermaid
xychart-beta horizontal
    title "Big O Complexity Growth"
    x-axis ["O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n²)", "O(2ⁿ)", "O(n!)"]
    y-axis "Growth" 0 --> 70
    bar [1, 2, 7, 12, 22, 40, 70]
```
Final Thoughts
Understanding Big O notation is one of the most fundamental skills in computer science and technical interviews.
You don’t need to memorize every complexity or become a math expert; you just need to recognize patterns:
- Loops → usually O(n)
- Nested loops → usually O(n²)
- Divide and conquer → usually O(n log n)
- Brute-force combinations → O(2ⁿ) or O(n!)
Once you learn to think this way, you naturally write more scalable and efficient code.
And in interviews, showing that you understand complexity is often more important than the exact number itself.
If you want to explore further, I highly recommend the excellent resource:
Big-O Cheat Sheet