Big O notation in mathematics and computer science
What’s Big O notation?
A way of comparing the rates of growth of different functions, which depend on the size of the function's argument. It is often used to compare the efficiency of different algorithms, by calculating how much memory an algorithm needs (space complexity) and how much time it takes to complete (time complexity).
— Simple English Wikipedia
Big O notation is a consistent and concise way to analyze the efficiency of algorithms. It answers questions like: if there are N data elements (sometimes N first needs to be determined), how many steps and how much memory will algorithm X need to do something? How do we compare the efficiency of any two algorithms? Is this algorithm fast or slow, as algorithms generally go?
Big O key concepts:
- Growth is with respect to the input
- Constants and lower order terms are ignored
- Worst case is usually the way we measure
The key question that describes Big O:
If there are N data elements, how many steps will the algorithm take?
Big O is about how an algorithm's performance (efficiency) changes as the data increases.
Big O notation categories: linear growth, quadratic growth, exponential growth, etc.:
Big O notation
Big O notation allows us to easily categorize the efficiency of a given algorithm by focusing on the number of operations the algorithm requires to complete.
An algorithm that is O(N) is also known as having linear time.
If there are N data elements, how many steps will linear search take?
N steps.
An O(1) algorithm can also be referred to as having constant time. It is eventually always faster than O(N): even if the O(1) algorithm takes a lot of steps, a googolplex steps for example, at some input data size it becomes the faster one.
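As a sketch (the names are my own), a linear search over N elements may take up to N steps, while a constant-time operation such as reading the first element takes one step regardless of N:

```python
def linear_search(items, target):
    """O(N): in the worst case every one of the N elements is inspected."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def first_element(items):
    """O(1): a single step regardless of how many elements there are."""
    return items[0]

data = list(range(1, 9))           # 8 elements
print(linear_search(data, 8))      # worst case: found at the last index, 7
print(first_element(data))         # 1
```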
Big O notation resolves the key question: if there are N data elements, how many steps will the algorithm take?
Big O is not only about how many steps an algorithm takes with a specific input, but about how the number of steps grows as the size of the input grows. So it describes the relationship between the input data and the algorithm's efficiency.
Let’s say we have an algorithm that always takes three steps no matter how
much data there is. That is, for N elements, the algorithm always takes three
steps. How would you express that in terms of Big O?
The algorithm is O(1), because the number of steps is constant. Even
if an O(1) algorithm takes a lot of steps, at some point it will be faster than
O(N).
Let’s say you have N + N² steps in your algorithm; how do you express that
with Big O?
At first glance you could write this as O(N + N²), but Big O notation only
takes into account the highest order of N when we have multiple orders added
together, so we reduce it to O(N²).
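A small counting sketch (my own) of an algorithm doing N + N² steps makes it clear why the lower-order term is dropped: as N grows, the N part becomes negligible next to N².

```python
def n_plus_n_squared_steps(n):
    """An algorithm performing N + N² 'steps'; Big O keeps only the N² term."""
    steps = 0
    for _ in range(n):             # the N part
        steps += 1
    for _ in range(n):             # the N² part
        for _ in range(n):
            steps += 1
    return steps

# The N term matters less and less as N grows:
for n in (10, 100, 1000):
    print(n, n_plus_n_squared_steps(n))
```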
Is O(1) more efficient than O(N)?
O(N) is less efficient than O(1): no matter how many data
elements (N) there are, the O(1) algorithm always takes 1 step, or a constant number of steps. Even
if O(1) takes a lot of steps, at some point it will be more efficient.
Big O can describe the best-case and worst-case scenarios of an algorithm, but Big O notation generally refers to the worst-case (pessimistic) scenario unless specified otherwise.
O(log N) is the Big O way of describing an algorithm that increases by one step each time the data is doubled.
O(log N) means that for N data elements, the algorithm would take log₂ N (the 2 is conventionally omitted) steps.
O(log N) means the algorithm takes as many steps as it takes to keep halving the data elements until we remain with 1.
If there are 8 elements, the algorithm would take 3 steps.
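The halving description above can be checked with a short sketch: count how many times N can be halved before reaching 1.

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1: about log2(n)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(8))      # 3: matches the 8-element example
print(halving_steps(1024))   # 10
```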
Bubble sort takes about N² steps and
selection sort about N²/2 steps; what is the complexity of these algorithms?
O(N²) for both. This follows a major rule of Big O: Big O
notation ignores constants. Big O notation never includes regular numbers that
aren’t an exponent (except for O(1)).
Which Big O notation should be used in these cases?
Is this algorithm O(1) or O(N)?
Results:
prime: Number 59 is prime
NOT prime: Found factor 2 for number 60
prime: Number 61 is prime
NOT prime: Found factor 2 for number 66
prime: Number 67 is prime
It’s O(N). When passing the number N, how many steps will the
algorithm take? For 7 it is 5 steps, for 101 about 101 steps, etc. So it takes linear time:
the number of steps grows with the number itself.
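The code under discussion is missing here; a sketch that reproduces the results above (names are my own) trial-divides by each candidate factor from 2 up to N - 1, so the loop runs linearly in the value of N:

```python
def check_prime(number):
    """Trial division: up to number - 2 loop steps, so O(N) in the value N."""
    for factor in range(2, number):
        if number % factor == 0:
            print(f"NOT prime: Found factor {factor} for number {number}")
            return False
    print(f"prime: Number {number} is prime")
    return True

for n in (59, 60, 61, 66, 67):
    check_prime(n)
```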
An example where two algorithms can be expressed the same way using Big O notation, but further analysis is needed to figure out which algorithm is faster.
What is the complexity of the two algorithms? How many significant steps does the first
version have? Is the second version faster?
The complexity of both algorithms is O(N).
Significant steps for the first version: N comparisons, about N/2 prints and
N increments, roughly 2.5N steps in total (in practice it can be more).
Yes, the second version is much faster, taking roughly half as many steps: we skip half of the
comparisons, half of the iterations, and half of the incrementing. TODO: verify.
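A sketch of the kind of pair being compared (my reconstruction, collecting values instead of printing so the step counts are easy to reason about): version one examines every number up to N, version two steps by two and needs no comparisons.

```python
def evens_version_one(n):
    """Examines every number from 2 to n: N increments, N comparisons,
    and about N/2 appends."""
    evens = []
    for number in range(2, n + 1):
        if number % 2 == 0:
            evens.append(number)
    return evens

def evens_version_two(n):
    """Steps by 2: only N/2 iterations and no comparisons at all."""
    return list(range(2, n + 1, 2))

print(evens_version_one(10))   # [2, 4, 6, 8, 10]
print(evens_version_two(10))   # [2, 4, 6, 8, 10]
```

Both are O(N), since constants like 1/2 are ignored, yet the second does meaningfully less work per element.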
What is the complexity of this algorithm?
If it doesn’t get optimized out by the compiler, the complexity will still
be O(N²): even though the loop bodies are empty, the condition
checks and the incrementing of both counters are still real operations which have
to be performed.
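A reconstruction of the kind of loop in question (the shape is my assumption): the bodies are empty, but a counter added purely for illustration shows the loop machinery still runs N·N times.

```python
def empty_nested_loops(n):
    """Nested loops with 'empty' bodies: each of the n*n inner iterations
    still performs a bounds check and counter bookkeeping."""
    iterations = 0                 # illustration only; stands in for the
    for _ in range(n):             # work the loop machinery does anyway
        for _ in range(n):
            iterations += 1
    return iterations

print(empty_nested_loops(100))     # 10000, quadratic in n
```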
To-do
- Big-O notation in 5 minutes - YouTube
- CSE 373 — Lecture 2, Fall 2020 - YouTube
- Big Oh Notation (and Omega and Theta) - YouTube
- Asymptotic Notation - YouTube
- Binary Search - YouTube
- Big O Notations - YouTube
- Big-O Algorithm Complexity Cheat Sheet (Know Thy Complexities!) @ericdrowell
- Understanding the formal definition of Big-O