Big O notation in mathematics and computer science

What’s Big O notation?

A way of comparing the rates of growth of different functions, which depends on the size of the function's argument. It is often used to compare the efficiency of different algorithms, which is done by calculating how much memory is needed (space complexity) and how much time it takes to complete (time complexity).
— Simple English Wikipedia

Big O notation is a consistent and concise way to analyze the efficiency of algorithms. It answers questions like: if there are N data elements (sometimes N must first be determined), how many steps and how many resources will algorithm X take to do its job? How can we compare the efficiency of any two algorithms? Is this algorithm fast or slow as algorithms generally go?

Big O key concepts:

  1. Growth is with respect to the input
  2. Constants and lower order terms are ignored
  3. Worst case is usually the way we measure

The key question that describes Big O:
If there are N data elements, how many steps will the algorithm take?

Big O is about how an algorithm's performance (efficiency) changes as the data increases.

Big O notation categories: linear growth O(N), quadratic growth O(N²), exponential growth O(2^N), etc.

Big O notation allows us to easily categorize the efficiency of a given algorithm; it focuses on the number of operations an algorithm requires to complete.

An algorithm that is O(N) is also known as having linear time.

If there are N data elements, how many steps will linear search take?
Up to N steps, so linear search is O(N).
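To make the step count concrete, here is a minimal linear search sketch in Python (a hypothetical helper, not from the original text): in the worst case it inspects every element.

```python
def linear_search(items, target):
    """Scan items one by one; the worst case inspects all N elements."""
    for index, value in enumerate(items):
        if value == target:
            return index  # found after index + 1 steps
    return -1  # target absent: the loop ran all N times

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 42))  # last element: takes N = 6 steps
print(linear_search(data, 99))  # absent: also takes N = 6 steps
```

Searching for the last element (or a missing one) forces the full N-step scan, which is why linear search is O(N).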

An O(1) algorithm can also be referred to as having constant time. It is ultimately always faster than O(N): even if the O(1) algorithm takes a lot of steps, a googolplex steps for example, at some input data size it becomes the faster one.
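One concrete O(1) operation in Python is indexing into a list: the lookup cost does not depend on how many elements the list holds (an illustrative sketch, not from the original text).

```python
small = list(range(10))
large = list(range(10_000_000))

# Indexing is a single step regardless of list size: O(1).
print(small[5])  # 5
print(large[5])  # 5, the same one-step lookup despite 10 million elements
```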

Big O notation resolves the key question: if there are N data elements, how many steps will the algorithm take?

Big O is not only about how many steps an algorithm takes for a specific input, but more about how the number of steps grows as the size of the input grows. So it describes the relationship between the input data and the algorithm's efficiency.

Let’s say we have an algorithm that always takes three steps no matter how much data there is. That is, for N elements, the algorithm always takes three steps. How would you express that in terms of Big O?
The algorithm is O(1), because the number of steps is constant. Even if an O(1) algorithm takes a lot of steps, at some point it will still be faster than O(N).

Let’s say you have N + N² steps in your algorithm; how do you express that with Big O?
At first glance you might write it as O(N + N²), but Big O notation only takes into account the highest order of N when multiple orders are added together, so we reduce it to O(N²).
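A quick way to see why the lower-order N term is dropped: compute what fraction of N² + N the N term contributes as N grows (an illustrative sketch; the helper name is mine, not from the text).

```python
def lower_term_share(n):
    """Fraction of N^2 + N contributed by the lower-order N term."""
    return n / (n**2 + n)

for n in (10, 100, 10_000):
    print(f"N={n}: N^2+N={n**2 + n}, N contributes {lower_term_share(n):.4%}")
```

For N = 10 the N term is already under 10% of the total; by N = 10,000 it is about 0.01%, so asymptotically only the N² term matters.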

Is O(N) more efficient than O(1)?
O(N) is less efficient than O(1): no matter how many data elements (N) there are, the O(1) algorithm always takes one step, or some constant number of steps. Even if the O(1) algorithm takes a lot of steps, at some point it will still be more efficient than O(N).

Big O can describe both the best-case and worst-case scenarios of an algorithm, but Big O notation generally refers to the worst-case (pessimistic) scenario unless specified otherwise.

O(log N) is the Big O way of describing an algorithm that increases by one step each time the data is doubled.

O(log N) means that for N data elements, the algorithm would take ==log₂ N (omitting the 2)== steps.

O(log N) means the algorithm takes as many steps as it takes to keep halving the data elements until we remain with 1.

If there are 8 elements, the algorithm would take 3 steps (8 → 4 → 2 → 1, and log₂ 8 = 3).
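The halving description translates directly into code: count how many times N can be halved until 1 remains (a sketch; for N = 8 this yields 3, matching log₂ 8).

```python
def halving_steps(n):
    """Count how many times n is halved until 1 remains: ~log2(n) steps."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(8))     # 8 -> 4 -> 2 -> 1: 3 steps
print(halving_steps(1024))  # 10 steps, since 2**10 == 1024
```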

Bubble sort takes about N² steps and selection sort about N²/2 steps; what is the complexity of these algorithms?
O(N²) for both. This follows a major rule of Big O: Big O notation ignores constants. Big O notation never includes regular numbers that aren’t an exponent (except for O(1)).
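To see why N² and N²/2 land in the same category, compare their ratio as N grows: it stays a constant 2, rather than growing with N (an illustrative sketch, not a real sort implementation).

```python
def step_ratio(n):
    """Ratio of bubble sort's ~N^2 steps to selection sort's ~N^2/2 steps."""
    return (n * n) / (n * n / 2)

for n in (100, 1000, 10_000):
    print(n, step_ratio(n))  # always 2.0: a constant factor, so both are O(N^2)
```

A constant factor shifts the curve but never changes its shape, which is exactly what Big O is designed to ignore.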

Which Big O notation should we use in the following cases?


Is this algorithm O(1) or O(N)?

def is_prime(number):
    """Check whether number is prime (has only 2 factors: 1 and itself)."""
    for i in range(2, number):
        if number % i == 0:
            # we found a factor: number is divisible by i
            print(f"NOT prime: Found factor {i} for number {number}")
            return False

    print(f"prime: Number {number} is prime")
    return True
 
is_prime(59)
is_prime(60)
is_prime(61)
is_prime(66)
is_prime(67)

Results:

prime: Number 59 is prime
NOT prime: Found factor 2 for number 60
prime: Number 61 is prime
NOT prime: Found factor 2 for number 66
prime: Number 67 is prime


It’s O(N). When passing the number N, how many steps will the algorithm take? For 7 it’s 5 steps, for 101 about 101, etc. So it takes linear time: the number of steps grows in proportion to the number itself.
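Instrumenting the loop confirms the linear relationship: the iteration count tracks the input number itself (a hypothetical counting wrapper, not part of the original example).

```python
def is_prime_counting(number):
    """Return (is_prime, loop_iterations) to expose the step count."""
    steps = 0
    for i in range(2, number):
        steps += 1
        if number % i == 0:
            return False, steps
    return True, steps

print(is_prime_counting(7))    # (True, 5): checked candidates 2..6
print(is_prime_counting(101))  # (True, 99): roughly the number itself
print(is_prime_counting(60))   # (False, 1): bails out at the first factor, 2
```

The worst case (a prime input) does about N iterations, hence O(N); composite inputs can exit much earlier.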

An example where two algorithms can be expressed the same way using Big O notation, but further analysis is needed to figure out which algorithm is faster.

def print_even_numbers_v1(upper_bound):
    number = 2
 
    while number <= upper_bound:
        if number % 2 == 0:
            print(number)
        number += 1
 
def print_even_numbers_v2(upper_bound):
    number = 2
 
    while number <= upper_bound:
        print(number)  # 2, 4, 6, ...
        number += 2    # skip the odd numbers: after 2 we jump straight to 4

What’s the complexity of these two algorithms? How many significant steps does the first version take? Is the second version faster?
The complexity of both algorithms is O(N). Significant steps for the first version: N comparisons, about N/2 print calls, and N increments, roughly 2.5N steps in total (in practice there can be more). Yes, the second version is much faster, about N steps (N/2 prints and N/2 increments): we skip half of the iterations along with their comparisons and increments.
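The step estimates above can be checked by counting operations instead of printing (an instrumented sketch of the two versions; it counts comparisons, prints, and increments as one step each).

```python
def count_steps_v1(upper_bound):
    """Steps of the check-every-number version: ~2.5N."""
    steps = 0
    number = 2
    while number <= upper_bound:
        steps += 1           # the `number % 2 == 0` comparison
        if number % 2 == 0:
            steps += 1       # the print
        steps += 1           # the increment
        number += 1
    return steps

def count_steps_v2(upper_bound):
    """Steps of the skip-by-two version: ~N."""
    steps = 0
    number = 2
    while number <= upper_bound:
        steps += 1           # the print
        steps += 1           # the increment
        number += 2
    return steps

print(count_steps_v1(100))  # 248: roughly 2.5 * N
print(count_steps_v2(100))  # 100: roughly N, about 2.5x fewer steps
```

Both counts grow linearly with the upper bound, so both versions are O(N), yet the second does a constant factor (about 2.5x) less work.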

What’s complexity of this algorithm?

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}


If it doesn’t get optimized away by the compiler, the complexity will still be O(N²): even though the loop bodies are empty, the condition checks and the incrementing of both counters are still real operations that have to be performed.
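A Python equivalent makes the operation count tangible: the inner body runs exactly N × N times even when it does nothing useful (an illustrative sketch of the nested loop above).

```python
def nested_loop_iterations(n):
    """Count how many times the inner body of two nested N-loops runs."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1  # each inner iteration is a real operation
    return count

print(nested_loop_iterations(10))  # 100 == 10 * 10: O(N^2)
```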
