# Big O Beginner's Guide

## Disclaimer

I'm not a mathematician, so this won't be mathematically exact, but I'll try to give you an intuitive understanding of Big O.

Big O (usually written as *O(x)*, where *x* is a mathematical expression like *1*, *n*, *n^2*, or *2^n*) describes *how many steps an algorithm will maximally take to run.*

## Example 1

If you have an array or list and want to look up the nth item, it takes so-called "constant time", expressed as *O(1)*. This doesn't mean it literally takes 1 step; it means that no matter how many items your data structure holds, the number of steps to retrieve the nth item is always constant (whether it's actually 1 step or 10 doesn't matter in practice).
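As a quick sketch in Python (the list name `items` is just made up for this example):

```python
# Indexing a Python list is O(1): the interpreter jumps straight
# to the nth slot, no matter how long the list is.
items = [10, 20, 30, 40, 50]

print(items[2])  # → 30, one step whether the list has 5 or 5 million items
```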

## Example 2

A lookup in a sorted binary tree takes *O(log n)* steps, because every time you go left or right you eliminate the entire other branch. (For example, if a node offers you the choice between 3 and 9 and you choose 9, you never need to check the 1 and 4 sitting under the 3.)

Note that you may execute a few constant-time steps before that (like creating a variable to store the answer in, so you can return it), but since *O(log n)*'s runtime overshadows that of any *O(1)* code, we only write down the *O(log n)* part.
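The same halving trick works on a plain sorted list via binary search; here's a minimal sketch (the function and variable names are my own):

```python
def contains(sorted_items, target):
    """Binary search: each comparison halves the remaining range,
    so a list of n items takes at most about log2(n) steps."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return False

print(contains([1, 3, 4, 9], 9))  # → True
```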

## Example 3

If you want to find out whether your array or list contains a certain item (assuming it's an unsorted list), it takes *O(n)* steps, because each item has to be checked *once* to see if it's the item you're looking for.

If you have a list with 10 items and the last item is the one you're looking for, it takes 10 steps. That's obviously not the case every time, but it is the *worst* case: Big O is *how many steps an algorithm will maximally take to run.*

Again, if your algorithm also has a part that happens to be *O(1)* or *O(log n)* (just as an example), you still write it down as *O(n)*, because *O(n)* overshadows the runtime of both *O(1)* and *O(log n)*. If, for some reason, you did want to jot down everything, you'd write *O(n + log n)*, but that's not what you'll see in practice. You can see what I mean, visually speaking, under the "Visually speaking" paragraph. :^)
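This worst-case linear scan looks like the following in Python (again, names are mine):

```python
def contains_unsorted(items, target):
    """Linear search: in the worst case every item is checked once -> O(n)."""
    for item in items:        # up to n iterations
        if item == target:
            return True
    return False

print(contains_unsorted([7, 2, 9, 4], 4))  # → True (found on the last, 4th check)
print(contains_unsorted([7, 2, 9, 4], 5))  # → False (all 4 items were checked)
```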

## Most of the Big O notations

Now that you're getting the gist: I've copied a small, *simplified* list of possible Big O expressions from Wikipedia, sorted from shortest to longest runtime. The italic rows are the ones I've seen most often.

*n* is the number of items fed to the algorithm; *c* is a constant that depends on the specific algorithm (often something small like 2 or 3, though I don't guarantee it).

| Notation | Name |
|---|---|
| *O(1)* | constant – an array lookup, like `list[2]` |
| O(log log n) | double logarithmic |
| *O(log n)* | logarithmic – lookup in a sorted binary tree |
| O((log n)^c) | polylogarithmic |
| O(n^c), 0 < c < 1 | fractional power |
| *O(n)* | linear – for-loops |
| O(n log n) = O(log n!) | linearithmic, loglinear, or quasilinear |
| *O(n^2)* | quadratic – 'double for' loop, or "a for-loop in a for-loop" |
| O(n^c), c > 1 | polynomial or algebraic |
| O(c^n) | exponential |
| O(n!) | factorial |
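To make the "for-loop in a for-loop" row concrete, here's a sketch of a quadratic algorithm – a naive duplicate check (the function is my own example, not from any library):

```python
def has_duplicates(items):
    """Naive duplicate check: compares every pair of items, so roughly
    n * n comparisons in the worst case -> O(n^2)."""
    for i in range(len(items)):             # outer loop: n iterations
        for j in range(i + 1, len(items)):  # inner loop: up to n iterations
            if items[i] == items[j]:
                return True
    return False

print(has_duplicates([1, 2, 3, 2]))  # → True
print(has_duplicates([1, 2, 3]))     # → False
```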

For the full table: Wikipedia/Big_O_notation

Another great wiki article is Wikipedia/Time_complexity

## Visually speaking

Anything beyond *O(n)* should be avoided where possible. That said, for some problems – turning two lists into a Cartesian product/table, for example, or a naive implementation of a Fibonacci number generator – we currently have no algorithm faster than, say, *2^n* (though for Fibonacci, faster implementations *are* known), which means you can't always avoid it. As far as I know, for many algorithms it isn't even mathematically settled whether a faster one can exist.
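The naive Fibonacci generator mentioned above looks like this (a sketch – the blow-up comes from recomputing the same values over and over):

```python
def fib(n):
    """Naive recursion: each call spawns two more calls, giving
    on the order of 2^n calls in total -> exponential time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # → 55, but fib(50) this way would take ages
```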

You can see why you should avoid anything *>O(n)*: past that point, the rate of growth just shoots off the chart.

## A bit of trivia

There's a famous question in the mathematical world called *P=NP*, which asks whether every problem whose solution can be quickly *verified* can also be quickly *solved*. In concrete form: a filled-in Sudoku puzzle can be verified to be correct in a relatively short time, yet we know of no way to *solve* one in a comparably short time. P basically means "polynomial" and NP roughly "exponential" or even "factorial". Do note that this is not mathematically precise, but I'm also not going into what the hell "nondeterministic polynomial time" is. That's a bit out of scope for this tutorial.

Anyway, this *P=NP* thing is still an unsolved problem within Computer Science, so maybe (if you're smarter than me) you'll be the one to solve it in the future.

## Closing words

I hope this short intro into Big O has been of help to you 🙂

PS: Note that there are other so-called asymptotic notations (of which Big O is one), like o (little-o), Ω (Big-Omega), ω (little-omega), and Θ (Theta). I have to confess that I have no idea what those represent – I'm guessing things like average runtime and minimal runtime or something like that, but those mathematicians have to fuck everything up for me by using words I don't understand ;_;

Fucking mathematics, how does it work?