Big O

You might wonder: why do we need a notation at all? Why not just measure the time it takes to run the algorithm? Here are two of many reasons:

  1. Different computers have different processors, so the same algorithm will run faster on some machines than on others.

  2. Some programming languages are faster than others.

Taking all of these factors into account when analyzing an algorithm would be impractical. Instead, Big O notation uses something more standard: the input. It describes how the runtime of an algorithm grows in relation to the size of its input. It is also worth noting that Big O analysis conventionally considers the worst-case scenario.
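
As a minimal sketch of this idea (the function name and sample values here are just for illustration), consider a linear search: in the worst case it must look at every element before giving up, so its runtime grows in direct proportion to the size of the input, which we write as O(n).

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for index, value in enumerate(items):  # at most len(items) comparisons
        if value == target:
            return index  # best case: target is the first element
    return -1  # worst case: target is absent, so every element was checked


# The number of comparisons depends on the input size, not on the machine
# or the language: 10 items means at most 10 comparisons, a million items
# means at most a million.
print(linear_search([4, 8, 15, 16, 23, 42], 23))  # 4
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # -1 (worst case)
```

Big O focuses on that worst case: even though the search might get lucky and finish on the first comparison, we describe it by the bound that holds no matter what input it receives.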
