Your algorithm could be slow and you may not even notice it unless you learn this essential skill.

Many programmers write algorithms that turn out to be slow without ever realizing it. We need a way to measure how long an algorithm takes to run.

How do we measure how long an algorithm takes to run?

We could run an algorithm 10,000 times and measure the average time taken, but this creates a problem. Say we have an algorithm that takes a shopping list and prints out every item on it. If the shopping list has 3 items, the algorithm might take 3 seconds to run. If it has 10 items, it may take 10 seconds.

This creates a problem: how do we know what the “perfect” input size is to get the “perfect” measure of how long the algorithm takes? For this reason we use a notation called Big O (pronounced “Big Oh”) notation.

Big O notation is a notation used to describe how efficient an algorithm is. Big O is easy to read once you know this hierarchy:

O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ), O(n!)

The further right a function is in this hierarchy, the longer it takes. Big O notation uses these functions to describe algorithm efficiency. Taking our shopping list as an example, the algorithm will be O(n). We don’t know how many items are in the shopping list, so we give it an algebraic variable such as n. We then print each item to the screen, which takes O(n) time.

Let’s run through every single one of these.


Constant time

A constant algorithm is an algorithm that doesn’t take more time if the input size gets larger. Let’s look at a quick example:

O(1), O(16), O(17)

The numbers here are small, but all of them take the same amount of time according to Big O notation — each is a constant, so each is equivalent to O(1).

But what if the numbers were much larger, like:

O(17264613721), O(17), O(18737138182)

Still the same time, as they’re still constants.

The numbers are hardcoded in. If you calculate 5 + 10, you’ll always get 15. The time never changes with the input, because the numbers are hardcoded in.

However, the time does change if the number of operations increase. For example:

$$ 5 + 10 * 2$$

is larger than:

$$ 5 + 10$$

The number of operations increases the overall time, but not the individual computation time of each operation.
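To make this concrete, here is a minimal Python sketch (the function names are my own): each call does a fixed number of operations no matter how big its input is.

```python
def constant_sum():
    # The operands are hardcoded, so the work never depends on any input size.
    return 5 + 10

def first_item(items):
    # Indexing a Python list is also O(1): one step, however long the list is.
    return items[0]

print(constant_sum())                      # 15
print(first_item([1, 2, 3]))               # 1
print(first_item(list(range(1_000_000))))  # 0 — same cost as the line above
```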

Log n

You might be wondering “what is log n”? Well…

A logarithm is often shortened to “log”. In computer science logs are usually base 2 (binary), but the base can differ. Let’s do a quick example:

$$\log_3 9 = ?$$

What’s being asked here is “3 to what power gives you 9?” Since 3 to the power of 2 gives us 9, the whole expression looks like:

$$\log_3 9 = 2$$

A logarithmic algorithm “halves” the input every time it runs. Binary search is the classic example. If you give the algorithm a list of 10 items like so:

a = [1, 2, 3, 4, 5, 6 , 7, 8, 9, 10]

And the algorithm halves it every time like so:

a = [1, 2, 3, 4, 5]
a = [1, 2, 3]
a = [2]

Then it runs in logarithmic time — O(log n).
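A sketch of binary search in Python shows the halving in action (a standard textbook implementation, not tied to any particular library):

```python
def binary_search(a, target):
    # Each pass halves the remaining search range, so the number of
    # passes is at most about log2(len(a)) + 1 — O(log n).
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(binary_search(a, 7))   # 6 (index of 7)
print(binary_search(a, 11))  # -1
```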

Polynomial time

If we have our shopping list from earlier and it looks like this:

a = ['water', 'vegetables', 'Bose QC35']

If you went through every item in the list and said each one out loud, the complexity would be O(n). This is because there are n items in the list, and the work grows or shrinks as the number of items does.
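In Python, the “say each item out loud” loop might look like this (the step counter is only there for illustration, to show the work matches the list length):

```python
a = ['water', 'vegetables', 'Bose QC35']

steps = 0
for item in a:   # one pass over the list
    print(item)  # one "say it out loud" step per item
    steps += 1

print(steps)  # 3 — the number of steps equals the number of items: O(n)
```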

For O(n²), the running time grows with the square of the input size: n steps for each of the n items. An example of this is:

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in a:
	for x in a:
		print(i, x)  # runs len(a) * len(a) = 100 times

If you see a nested for loop like the one above, it’s n²; three nested for loops is n³, and so on.
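You can verify the n², n³ pattern by counting how often the innermost statement runs (a toy counter of my own, not a real algorithm):

```python
def count_ops(a, depth):
    # Number of times the innermost statement would run for `depth`
    # nested loops over the same list.
    if depth == 0:
        return 1
    return sum(count_ops(a, depth - 1) for _ in a)

a = list(range(10))
print(count_ops(a, 2))  # 100  = 10**2 -> n^2 for two nested loops
print(count_ops(a, 3))  # 1000 = 10**3 -> n^3 for three
```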


Exponential & factorial time

These algorithms are the slowest of them all. As an example, say you have a password consisting only of numbers (that’s 10 digits, 0 through 9) and you want to crack a password of length n. To bruteforce through every combination you’ll have:

$$10^n$$

combinations to work through — that’s exponential time. Factorial time, O(n!), is slower still: generating every possible ordering of n items takes n! steps, and even 10! = 3,628,800.
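As a sketch, a digit-only brute force in Python tries every string of the password’s length, so a length-n password costs up to 10ⁿ guesses in the worst case (the function name and target here are made up for illustration):

```python
from itertools import product

def brute_force(target):
    # Try every digit string of the target's length: up to 10**len(target)
    # guesses — the work is exponential in the password length.
    for attempt in product('0123456789', repeat=len(target)):
        guess = ''.join(attempt)
        if guess == target:
            return guess

print(brute_force('042'))  # found within at most 10**3 = 1000 guesses
```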

In Big O notation, we always use the worst-case scenario in our calculations. Computer scientists are pessimistic.

Simplifying Big O notation

What if your algorithm looks something like O(n + n²)? Well, there are some super useful rules to simplify your algorithm.

Drop the constants

If you have an algorithm described as O(2n), drop the 2 so it’s O(n).

Drop the non-dominant terms

O(n² + n) becomes O(n²). Only keep the larger one in Big O.

If you have a sum such as O(b² + a), you can’t drop either term: without knowing what b and a represent, you don’t know which one dominates.

Formal Big O notation

Given two positive functions, $f(n)$ and $g(n)$, we say $f(n)$ is $O(g(n))$, written $f(n) \in O(g(n))$, if there are constants $c > 0$ and $n_0 \geq 1$ such that:

$$f(n) \le c \cdot g(n) \text{ for all } n \geq n_0$$

Let's see an example.

$$7n - 4 \in O(n)$$

We need to find constants $c$ and $n_0$ such that $7n-4 \le cn$ for all $n \geq n_0$.

One possible choice is $c = 7$ and $n_0 = 1$: $7n - 4 \le 7n$ holds for every $n \geq 1$, because subtracting 4 only makes the left-hand side smaller.

In fact, this is just one of infinitely many choices, because any real number $c \geq 7$ and any integer $n_0 \geq 1$ would also work.
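Since the definition is just an inequality, you can spot-check a candidate $c$ and $n_0$ with a few lines of Python (a finite check is evidence, not a proof; the helper name is my own):

```python
def holds(f, g, c, n0, upto=10_000):
    # Spot-check the Big O definition f(n) <= c * g(n) for n0 <= n < upto.
    return all(f(n) <= c * g(n) for n in range(n0, upto))

# 7n - 4 is O(n) with c = 7, n0 = 1:
print(holds(lambda n: 7 * n - 4, lambda n: n, c=7, n0=1))  # True
```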

All you have to do is substitute values into the formula until you find values for $c$ and $n_0$ that work. Let’s work through some examples now.

$$f(n) = 4n^2 + 16n + 2$$

Is $f(n)$ $O(n^4)$?

We need to take this function:

$$f(n) = 4n^2 + 16n + 2$$

and say "is this less than some constant times $n^4$?" We need to find out if there is such a constant.

$$4n^2 + 16n + 2 \le n^4$$

Let's make a chart, trying $c = 1$. When $n = 0$ we get:

$$4(0)^2 + 16(0) + 2 = 2 \le 0^4 = 0$$

This isn't true, so $n = 0$ doesn't work.

When $n = 1$:

$$4(1)^2 + 16(1) + 2 = 22 \le 1^4 = 1$$

Not true either. Let's try again with $n = 2$:

$$4(2)^2 + 16(2) + 2 = 50 \le 2^4 = 16$$

Not true, so let's try another one. $n = 3$:

$$4(3)^2 + 16(3) + 2 = 86 \le 3^4 = 81$$

Not true, but we're getting close — the next one should work. $n = 4$:

$$4(4)^2 + 16(4) + 2 = 130 \le 4^4 = 256$$

This is true. So for $n = 4$ or any greater value, $f(n) \le n^4$ holds. With $c = 1$ and $n_0 = 4$ the definition is satisfied.

The answer to the question "is $4n^2 + 16n + 2$ $O(n^4)$?" is yes, with $c = 1$ and $n \geq 4$.
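The chart above can be reproduced in a couple of lines of Python, purely as a sanity check:

```python
f = lambda n: 4 * n**2 + 16 * n + 2
g = lambda n: n**4

for n in range(5):
    print(n, f(n), g(n), f(n) <= g(n))
# ...the last row is: 4 130 256 True — the bound first holds at n = 4
```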

$3n^2 + n + 16$ is $O(n^2)$

We know that $n \le n^2$ for all $n \geq 1$. Also, $16 \le n^2$ for $n \geq 4$. So:

$$3n^2 + n + 16 \le 3n^2 + n^2 + n^2 = 5n^2$$ for all $n \geq 4$.

The constant $c$ is 5, and $n_0 = 4$.
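A quick numeric spot-check (a sanity test, not a proof) shows why $n_0$ has to be at least 4 here:

```python
f = lambda n: 3 * n**2 + n + 16
bound = lambda n: 5 * n**2

print(f(3) <= bound(3))  # False — 46 > 45, so n0 = 3 is not big enough
print(all(f(n) <= bound(n) for n in range(4, 10_000)))  # True from n0 = 4 on
```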

$$13n^3 + 7n \log n + 3$$ is $O(n^3)$

Because $\log n \le n$ for all $n \geq 1$, we have $7n \log n \le 7n^2 \le 7n^3$, and similarly $3 \le 3n^3$, so we may conclude that:

$$13n^3 + 7n \log n + 3 \le 13n^3 + 7n^3 + 3n^3 = 23n^3$$ for all $n \geq 1$.

In this instance, $c = 23$ and $n_0 = 1$.

Let's see another one.

Any polynomial $a_k n^k + ... + a_2 n^2 + a_1 n + a_0$ with $a_k > 0$ is $O(n^k)$.

Along the same lines, we can argue that any polynomial $a_k n^k + ... + a_2 n^2 + a_1 n + a_0$ with $a_k > 0$ is also $O(n^j)$ for all $j \geq k$.

Therefore $45n^5 - 10n^2 + 6n - 12$ is $O(n^5)$ (and is also $O(n^8)$, $O(n^9)$, or indeed $O(n^k)$ for any $k \geq 5$).

$$\sqrt{n}$$ is $O(n)$.

This holds because $\sqrt{m} \le m$ for all $m \geq 1$ — the square root of $m$ is at most $m$. Therefore $n_0 = 1$ and $c = 1$. This shows that $\sqrt{n}$ grows no faster than $O(n)$.

$$ 3 log n + log log n$$ is $O(log n)$

First, we have the inequality $\log n \le n$ for every $n \geq 1$. Replacing $n$ with $\log n$ on both sides gives $\log \log n \le \log n$: the double log is smaller than the single log (for $n \geq 2$, where $\log \log n$ is defined). So:

$$3 \log n + \log \log n \le 3 \log n + \log n = 4 \log n$$

$$c = 4, \quad n_0 = 2$$
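A numeric spot-check of this bound, using base-2 logs (any fixed base works; it only changes the constant):

```python
import math

f = lambda n: 3 * math.log2(n) + math.log2(math.log2(n))
bound = lambda n: 4 * math.log2(n)

print(all(f(n) <= bound(n) for n in range(2, 10_000)))  # True
```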

Let's see another one.

$\log n$ is $O(n)$, and $\log n$ is also $O(\sqrt{n})$. In fact, log grows more slowly than any positive power of its argument:

$$\log m \le m^\epsilon$$ for every $\epsilon > 0$, no matter how small, as long as it is positive and $m$ is sufficiently large.

Plugging $\epsilon = \frac{1}{2}$ into this inequality gives $\log m \le m^{\frac{1}{2}} = \sqrt{m}$, which is the claim above.

$2^{70}$ is $O(1)$ — it's a huge number, but it's still a constant.


Big Omega

Whereas Big O measures an upper bound (the maximum amount of time something can take), Big Omega (using the symbol Ω) measures a lower bound — the best-case running time.

We say that $f(n)$ is $Ω(g(n))$ if there are real constants c and $n_0$ such that:

$$f(n) \geq c \cdot g(n) \text{ for all } n \geq n_0$$

We say that $f(n)$ is $\Theta(g(n))$ (Theta) if $f(n)$ is $\Omega(g(n))$ and $f(n)$ is also $O(g(n))$. Equivalently, $g(n)$ is $O(f(n))$ and $f(n)$ is $O(g(n))$. Let's see some examples.

$$3 \log n + \log \log n$$ is $\Omega(\log n)$ because $3 \log n + \log \log n \geq 3 \log n \geq \log n$ for all $n \geq 2$ (where $\log \log n \geq 0$). So $c = 3$ and $n_0 = 2$.

$$\frac{2}{3}n^2 - n \in \Omega(n^2)$$ if we take any constant $c$ below $\frac{2}{3}$: the $\frac{2}{3}n^2$ term eventually dominates the $-n$ term, so past some point the left-hand side stays above $cn^2$.

For example, take $c = \frac{1}{3}$. Let's calculate it. We claim that:

$$\frac{2}{3}n^2 - n \geq \frac{1}{3}n^2$$ for all $n \geq 3$. Multiply both sides by 3:

$$2n^2 - 3n \geq n^2$$

Subtract $n^2$ from both sides:

$$n^2 \geq 3n$$

Divide both sides by $n$:

$$n \geq 3 = n_0$$

So $c = \frac{1}{3}$ and $n_0 = 3$.
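The same kind of spot-check works for Big Omega by flipping the inequality (again just a sanity check over a finite range):

```python
f = lambda n: (2 / 3) * n**2 - n
bound = lambda n: (1 / 3) * n**2

print(f(2) >= bound(2))  # False — n = 2 is below n0
print(all(f(n) >= bound(n) for n in range(3, 10_000)))  # True from n0 = 3 on
```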


Bet you were expecting some hard-to-understand guide to Big O, huh? Well, this is all it is. You need to learn the hierarchy, take some algorithms, and work out their Big O notation.

Big O represents how long an algorithm takes, but sometimes we also care about how much memory it uses — its space complexity.

There are other ways of measuring an algorithm's complexity, such as Big Omega (a lower bound on the running time) and Big Theta (a tight bound), which we met above.

If you liked this article, connect with me!

LinkedIn | Twitter | Website | Newsletter