# Definition

Big O Notation is a simplified analysis of an algorithm's efficiency. It is also known as Landau's symbol. Big O Notation is used in Computer Science and Mathematics to describe the asymptotic behavior of functions. Let's study some of its characteristics:

- Big O Notation describes the **complexity of a target algorithm** for a given input size, considered as "N".
- Big O Notation is an **abstraction** of efficiency in terms of **machine independence**. The analysis must **hold the same way** on **every operating system and hardware**.
- Big O Notation is not concerned with how much time an algorithm takes, but with **how it performs as the input grows**.
- Big O Notation gives us both the "**Time**" and the "**Space**" constraints.

In addition to the second characteristic: when we run a program, we observe its **performance**, *that is, how much time or hardware resource is used*, and its complexity, *how the algorithm behaves and grows*. Note that **complexity affects performance**, *but not the other way around*.

Furthermore, there are three types of measurements, which I'll explain and demonstrate in a different chapter.

# Rules

## Drop the constants

If you have a function with a running time of **5N**, it is expressed as **O(n)**, because as n gets bigger, the constant 5 __is no longer significant__.
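The rule above can be sketched with a small illustrative class (the name `DropConstants` and the operation count are hypothetical, just to show that a constant factor per element still grows linearly):

```java
public class DropConstants {
    // Each element triggers five constant-time steps, so the running
    // time is 5N -- still O(n) once the constant factor is dropped.
    static int countOperations(int[] data) {
        int ops = 0;
        for (int value : data) {
            ops += 5; // five constant-time steps per element
        }
        return ops;
    }

    public static void main(String[] args) {
        // Doubling the input doubles the work: linear growth.
        System.out.println(countOperations(new int[10])); // 50
        System.out.println(countOperations(new int[20])); // 100
    }
}
```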

## Observe all inputs

Different inputs and variables carry different weights when identifying the notation. If you iterate over two different arrays in nested loops, you get **O(a*b)**. Study the following pseudo code:

```java
void method(int[] array1, int[] array2) {
    for (int a : array1) {               // O(a)
        for (int b : array2) {           // O(b)
            System.out.println("match"); // O(1)
        }
    }
}
```

## Drop the low order terms

Certain terms dominate other terms; in that case we **drop the lower-order term(s)**. Here is the sequence of the terms, where domination **increases from left to right**:

O(1) **<** O(log n) **<** O(n) **<** O(n log n) **<** O(n²) **<** O(2ⁿ) **<** O(n!)

# Complexities

Big O Notation can be used to describe both the Time Complexity and the Space Complexity of algorithms. These are two different measures.

## Time Complexity

The Time Complexity corresponds to the amount of time that an algorithm takes to run. It has three cases: Best, Average and Worst.

## Space Complexity

The Space Complexity describes how much space the algorithm allocates in memory relative to the amount of given data.
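A minimal sketch of the difference (the class and method names are my own, for illustration): the first method uses O(1) extra space no matter how large the input is, while the second allocates space proportional to the input, O(n).

```java
public class SpaceComplexity {
    // O(1) space: a single accumulator, regardless of input size.
    static int sum(int[] data) {
        int total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    // O(n) space: the copy grows with the amount of given data.
    static int[] duplicate(int[] data) {
        int[] copy = new int[data.length];
        System.arraycopy(data, 0, copy, 0, data.length);
        return copy;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sum(data));              // 10
        System.out.println(duplicate(data).length); // 4
    }
}
```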

# Cases

In Big O Notation we have three cases:

- Best Case,
- Average Case,
- Worst Case

When algorithms are analyzed, the "**Worst Case**" is generally the one referred to. That doesn't mean the other cases are less important, but depending on the input, the Worst Case carries the most weight.
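Linear search makes the three cases easy to see (a small sketch with made-up data): the best case finds the target in the first slot, the worst case only in the last, and the average falls in between.

```java
public class SearchCases {
    // Linear search: best case the target is first (1 comparison),
    // worst case it is last or absent (n comparisons).
    static int linearSearch(int[] data, int target) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == target) {
                return i;
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1};
        System.out.println(linearSearch(data, 7)); // 0 -- best case
        System.out.println(linearSearch(data, 1)); // 3 -- worst case
    }
}
```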

# Notations

*Image courtesy of www.bigocheatsheet.com*

## O(1) Constant

**Constant time** describes a statement that involves only **constants**, in other words values that are fixed and will not change. Regardless of the amount of data, the code executes in the same amount of time; this can be a variable definition, an array access by index, or a print-out. The simplest examples look like this:

To find out the Big O Notation of such constant statements:

```java
int a = (9/2) * 12 - 1;     // O(1)
int b = 100 / 2;            // O(1)
int result = a + b;         // O(1)
System.out.println(result); // O(1)
```

Total time spent: O(1) + O(1) + O(1) + O(1) = 4*O(1) = **O(1)**

*(Constants are dropped)*

## O(n) Linear

**Linear time** means the **completion time grows** in direct proportion to the **given amount of data**. A good **example** is a **linear search**: a loop that iterates through N elements.

```java
for (int i = 0; i < N; i++) { // O(n)
    System.out.println(i);    // O(1)
}
```

Total time spent: O(n)*O(1) = O(n)

```java
int x = 55 * 3 + (10 - 9);    // O(1)
for (int i = 0; i < N; i++) { // O(n)
    System.out.println(i);    // O(1)
}
```

Total time spent: O(1) + O(n)*O(1) = O(n)

*(Drop low order terms)*

## O(n²) Quadratic

The completion time of **quadratic** algorithms is proportional to the square of the given amount of data. We commonly spot these sorts of algorithms in nested loops; the more levels of nesting there are, the complexity becomes **cubic** O(n³) or higher. A good **example** is **Bubble Sort**. Such algorithms dramatically hamper performance as the amount of data increases.
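The Bubble Sort mentioned above can be sketched as follows; the nested loops each run up to n times, which is where the n² comes from:

```java
import java.util.Arrays;

public class BubbleSortDemo {
    // Bubble sort: roughly n * n comparisons -> O(n^2).
    static void bubbleSort(int[] data) {
        for (int i = 0; i < data.length - 1; i++) {
            for (int j = 0; j < data.length - 1 - i; j++) {
                if (data[j] > data[j + 1]) {
                    // Swap adjacent out-of-order elements.
                    int tmp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```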

## O(2ⁿ) Exponential

**Exponential** algorithms are those whose running time **doubles** with **every addition to the input**. We typically spot such algorithms in recursive calculations. For the sake of simplicity, I'll give the **example** of **Fibonacci numbers**. As you will see, the method calls itself twice for the given input.

```java
public int fibonacciNumbers(int number) {
    if (number <= 1) {
        return number;
    } else {
        // Two recursive calls per input -> O(2^n) growth.
        return fibonacciNumbers(number - 1) + fibonacciNumbers(number - 2);
    }
}
```

## O(n!) Factorial

A **factorial** algorithm, such as one that calculates all permutations of a given array, is considered O(n!). The most suitable **example** is the **travelling salesman problem** solved with a **brute-force search**.
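A full brute-force travelling salesman solver is lengthy, so here is a minimal sketch of the underlying factorial step it relies on: generating every ordering of a list. With n items there are n! orderings, so both the running time and the output size grow as O(n!).

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    // Generates every ordering of the input -- n! results in total.
    static List<List<Integer>> permute(List<Integer> items) {
        List<List<Integer>> result = new ArrayList<>();
        if (items.isEmpty()) {
            result.add(new ArrayList<>());
            return result;
        }
        for (int i = 0; i < items.size(); i++) {
            // Fix one element, then permute the rest recursively.
            List<Integer> rest = new ArrayList<>(items);
            Integer chosen = rest.remove(i);
            for (List<Integer> perm : permute(rest)) {
                perm.add(0, chosen);
                result.add(perm);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(permute(List.of(1, 2, 3)).size()); // 3! = 6
    }
}
```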

## O(log n) Logarithmic

The best way to explain **logarithmic** algorithms is searching in **chunks of data**: the data is repeatedly **halved**, and only one half is examined further. Since we split the data, we gain time on each lookup. The classic real-world **example** of these types of algorithms is the **Binary Search Tree**, and they are very efficient, because increasing the amount of data has only a minor effect: as with the Binary Search Tree, the amount of data under consideration is halved at each step.
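The halving idea can be sketched with a plain binary search over a sorted array (a simpler relative of the Binary Search Tree): each step discards half of the remaining range, so n elements take only about log n comparisons.

```java
public class BinarySearchDemo {
    // Each step halves the remaining range -> O(log n) comparisons.
    static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;  // discard the lower half
            } else {
                high = mid - 1; // discard the upper half
            }
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 23, 38, 56, 72};
        System.out.println(binarySearch(data, 23)); // 4
        System.out.println(binarySearch(data, 7));  // -1
    }
}
```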

## O(n log n) Quasi Linear

**Quasi-linear** algorithms are the ones that are hard to spot. Each given value is compared once, and essentially each comparison reduces the possible final sorted orderings in half, as in O(log n). Furthermore, the total number of comparisons performed is equal to the log of the factorial of "n", which works out to n log n; this is how O(n log n) algorithms are formed. The mathematical representation is as follows:

The sum of the comparisons equals log n!

Expanding it: log n! = log n + log(n-1) + … + log 1

The outcome is approximately **n log n**

**Examples** of quasi-linear complexity are **Merge Sort** and **Heap Sort**.
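Merge Sort shows where the n log n comes from: the array is halved about log n times, and every level of halving merges all n elements once. A minimal sketch:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Merge sort: log n levels of halving, each merging n elements
    // -> O(n log n) overall.
    static void mergeSort(int[] data) {
        if (data.length < 2) {
            return;
        }
        int mid = data.length / 2;
        int[] left = Arrays.copyOfRange(data, 0, mid);
        int[] right = Arrays.copyOfRange(data, mid, data.length);
        mergeSort(left);
        mergeSort(right);
        // Merge the two sorted halves back into data.
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            data[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) data[k++] = left[i++];
        while (j < right.length) data[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] data = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(data);
        System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
    }
}
```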

# Further Reading

- Big O Cheat Sheet: a great web site that points out the Time and Space complexities of Data Structures and Algorithms.
- Stackoverflow Post: while doing my research, I found John's explanation of the notations, which uses the same example, very easy to follow.