The post Radix Sort appeared first on Tugrul ASLAN.

Radix Sort is an optimal choice when the numbers range from 1 to n^{2}. The Radix Sort algorithm uses Counting Sort internally as a stable subroutine to sort the array. The keys are the digits of each element in the array.

It starts from the Least Significant Digit, which is the rightmost digit, then proceeds towards the Most Significant Digit, the leftmost one.

Each digit goes to a correspondingly numbered bucket. After the buckets are filled with the elements of the array, the elements are collected again according to their bucket positions. Let’s see an example illustration to better apprehend the logic; we will sort the numbers “551, 12, 346, 311”:

Now we have a rough idea of how Radix Sort works internally. There is one gap I’d like to point out: what happens to 12, which has two digits compared to the others that have three? In this situation such numbers are treated as if padded with leading *0s*, so for the missing digit positions they **always sit in bucket zero**.

n numbers consisting of k digits

**n:** number of elements

**k:** the number of digits (the key length) of each number. We will also repeat the counting pass this many times.

The best, average and worst-case Time Complexities of Radix Sort are all **O(n*k)**.

Space complexity of Radix Sort is **O(n+k)**
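To make the steps above concrete, here is a minimal LSD Radix Sort sketch. The class and method names are mine, not the ones from the repository linked below; it repeatedly applies a stable Counting Sort on each digit, from the least significant to the most significant:

```java
import java.util.Arrays;

// A sketch, assuming integer keys >= 0; names are illustrative only.
public class RadixSortSketch {

    public static void sort(int[] array) {
        int max = Arrays.stream(array).max().orElse(0);
        // One stable counting pass per digit, from rightmost to leftmost
        for (int exp = 1; max / exp > 0; exp *= 10) {
            countingSortByDigit(array, exp);
        }
    }

    private static void countingSortByDigit(int[] array, int exp) {
        int[] output = new int[array.length];
        int[] count = new int[10]; // ten buckets, one per digit value
        for (int value : array) {
            count[(value / exp) % 10]++;
        }
        // Prefix sums turn digit counts into final positions
        for (int i = 1; i < 10; i++) {
            count[i] += count[i - 1];
        }
        // Traverse backwards to keep the sort stable
        for (int i = array.length - 1; i >= 0; i--) {
            int digit = (array[i] / exp) % 10;
            output[--count[digit]] = array[i];
        }
        System.arraycopy(output, 0, array, 0, array.length);
    }

    public static void main(String[] args) {
        int[] numbers = {551, 12, 346, 311};
        sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [12, 311, 346, 551]
    }
}
```

Note how 12, with only two digits, naturally lands in bucket zero for the hundreds pass, exactly as described above.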

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Linked List appeared first on Tugrul ASLAN.

Linked List is yet another linear data structure, like the Array, but its internals are completely different from other data structures. Let’s first have a visual look at what the data structure looks like:

As you can see in the above image, a Linked List maintains a list of objects linked to one another, just as the name suggests. To summarize some characteristics of the Linked List;

- Each node has a next (and, in some variants, a previous) pointer/reference, and we iterate through these references,
- The last node’s next reference is usually null,
- However, the next/previous references do not refer to null in certain cases (*in the following chapters, I’ll demonstrate the reasons*).

Linked List is used indirectly inside some other data structures, such as Queues and Stacks. Check them out separately and see how Linked List fits their requirements.

- *No size limitation* compared to arrays,
- It is not costly to insert and remove nodes in between, whereas it is **very pricey** especially with larger arrays because *all the subsequent elements must be shifted*,
- Random data access *is not possible*; **the data structure must be traversed to access the designated object**,
- Storing references to the next and previous nodes **takes up some memory space**.

*image courtesy of bigocheatsheet.com*

Linked List has a variety of implementations that often confuse us. I’ll show all the implementations in subsections with visuals, descriptions and code that will let you interact more and apprehend the slightest differences better.

In a Singly Linked List the traversal is *unidirectional*: each node refers to the __next node__ in the chain, and there is **no reference to previous nodes**. The *last node’s next refers to null*.

See the Implementation “*SinglyLinkedList.java*” and the Unit Test “*SinglyLinkedListUnitTest.java*” to apprehend all the operations and internals of the Singly Linked List.
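As a minimal sketch of the idea (class and method names here are mine and may differ from the repository’s SinglyLinkedList.java), each node only knows its next node, and the last node’s next is null:

```java
// A sketch of a Singly Linked List; names are illustrative only.
public class SinglyLinkedListSketch {

    private static class Node {
        int data;
        Node next; // reference to the next node, null at the tail
        Node(int data) { this.data = data; }
    }

    private Node head;

    public boolean isEmpty() {
        return head == null;
    }

    // Inserting at the head is O(1): no shifting, unlike arrays
    public void insertFirst(int data) {
        Node node = new Node(data);
        node.next = head;
        head = node;
    }

    public void deleteFirst() {
        if (head != null) {
            head = head.next;
        }
    }

    // Random access is not possible: we must traverse from the head
    public boolean contains(int data) {
        for (Node current = head; current != null; current = current.next) {
            if (current.data == data) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        SinglyLinkedListSketch list = new SinglyLinkedListSketch();
        list.insertFirst(3);
        list.insertFirst(2);
        list.insertFirst(1);
        System.out.println(list.contains(2)); // true
        list.deleteFirst();
        System.out.println(list.contains(1)); // false
    }
}
```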

Doubly Linked List maintains a *bidirectional* path: each node contains *next* and *previous* links, where *next refers to the next node* and *previous refers to the previous node*. This maintenance comes with **an extra overhead**. Last of all, the *first node’s previous* and the *last node’s next* are *null*.

See the Implementation “*DoublyLinkedList.java*” and the Unit Test “*DoublyLinkedListUnitTest.java*” to apprehend all the operations and internals of the Doubly Linked List.

Circular Linked List is the *last variation of the implementation*. I like to call the Circular Linked List the spiced-up version of the Singly and Doubly Linked List implementations. As the name suggests, the basic idea is that the Linked List is *circular*. There are two distinct characteristics of the Circular Linked List;

- The **head and the tail** of the data structure **don’t point to null**; instead, the __head’s previous reference points to the tail__ and the __tail’s next reference points to the head__,
- A Circular List can be *built* using either the *Singly* or the *Doubly Linked List* implementation.

The common Linked List operations are as follows;

- **isEmpty:** checks whether the Linked List is empty,
- **insertFirst:** inserts the given Node at the head of the Linked List,
- **insertAfter:** inserts the given Node after an existing Node in the Linked List,
- **insertLast:** inserts the given Node at the end of the Linked List,
- **deleteFirst:** deletes the Node at the head of the Linked List,
- **deleteNode:** deletes the given Node in the Linked List,
- **deleteLast:** deletes the Node at the end of the Linked List,
- **containsIterativeSearch:** iteratively searches the Linked List,
- **containsRecursiveSearch:** recursively searches the Linked List.

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures To see how the code works, you can also check its Unit Test.


The post Queue appeared first on Tugrul ASLAN.

Queue is a linear data structure in which iteration starts at one end and carries on to the other. It maintains FIFO (First In, First Out) order and has two common implementations: an array or a singly linked list. To elaborate FIFO with an example: in a queue of people waiting to buy a cinema ticket, the first person in the queue is the one who buys the ticket first, and the rest of the people follow.

- Hardware scheduling: CPUs and disks are properly scheduled in concurrent environments,
- Asynchronous communication between two processes makes a great use case.

Since the internals of the implementations differ for each variation (Singly Linked List and Array), the cost of the operations can differ. The given table applies to the Singly Linked List implementation;

*image courtesy of bigocheatsheet.com*

Queue has three core operations that we need to know. Some other implementations, like Java’s Queue implementations, definitely offer further operations as well. However, the operations below are the defining properties of the Queue data structure.

- **enqueue:** inserts the element at the tail of the queue,
- **dequeue:** removes the element from the head and returns the dequeued value,
- **peek:** returns the head data but doesn’t delete it; takes a peek at it.

There are multiple variations on the implementation; internally we can use a Singly Linked List or an Array to hold the data, and the Time Complexity of the operations differs accordingly. In the Stackoverflow article there are more insights into the internals of the implementations. In my own implementation I preferred the Singly Linked List.
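A minimal sketch of the linked-list-backed Queue described above (class and method names are mine, not the repository’s); enqueue adds at the tail and dequeue removes from the head, so both run in O(1):

```java
// A FIFO Queue sketch backed by a singly linked list; names are illustrative.
public class QueueSketch {

    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head; // dequeue side
    private Node tail; // enqueue side

    public void enqueue(int data) {
        Node node = new Node(data);
        if (tail == null) {
            head = node; // first element: head and tail are the same node
        } else {
            tail.next = node;
        }
        tail = node;
    }

    public int dequeue() {
        if (head == null) {
            throw new IllegalStateException("queue is empty");
        }
        int value = head.data;
        head = head.next;
        if (head == null) {
            tail = null; // queue became empty
        }
        return value;
    }

    public int peek() {
        if (head == null) {
            throw new IllegalStateException("queue is empty");
        }
        return head.data; // look at the head without removing it
    }

    public static void main(String[] args) {
        QueueSketch tickets = new QueueSketch();
        tickets.enqueue(1);
        tickets.enqueue(2);
        tickets.enqueue(3);
        System.out.println(tickets.dequeue()); // 1 - first in, first out
        System.out.println(tickets.peek());    // 2
    }
}
```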

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures To see how the code works, you can also check its Unit Test.


The post Stack appeared first on Tugrul ASLAN.

Stack is a very usable data structure, though it is not as widely used in our daily coding tasks because of its LIFO nature. Let’s elaborate: LIFO is the abbreviation of Last-In-First-Out. What does it really mean for us? There is one specific reason why you would want to use it: when you want to keep a pile of things and, every time you need one, you take it from the top.

Yes, we understood it well. In the LIFO setting we insert at the top and take from the top.

- In text editors, “Undo” operations remove the most recently added entry,
- Browsers’ back buttons make use of a similar approach to navigate to earlier pages,
- Recursive methods also utilize a stack very well; from the first call to the last, all the calls are stacked on top of each other.

Since the internals of the implementations differ for each variation (Singly Linked List and Array), the cost of the operations can differ. The given table applies to the Singly Linked List implementation;

*image courtesy of bigocheatsheet.com*

Stack has three core operations that we need to know. Some other implementations, like Java’s Stack class, definitely offer further operations as well. However, the operations below are the defining properties of the Stack data structure.

- **push:** pushes the element onto the top of the stack,
- **pop:** removes the element from the top and returns the popped value,
- **peek:** returns the top data but doesn’t delete it; takes a peek at it.

There are multiple variations on the implementation; internally we can use a Singly Linked List or an Array to hold the data, and the Time Complexity of the operations differs accordingly. In the Stackoverflow article there are more insights into the internals of the implementations. In my own implementation I preferred the Singly Linked List.
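A minimal sketch of the linked-list-backed Stack described above (class and method names are mine, not the repository’s); push and pop both work at the head, so each is O(1):

```java
// A LIFO Stack sketch backed by a singly linked list; names are illustrative.
public class StackSketch {

    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node top;

    public void push(int data) {
        Node node = new Node(data);
        node.next = top; // new element sits on top of the pile
        top = node;
    }

    public int pop() {
        if (top == null) {
            throw new IllegalStateException("stack is empty");
        }
        int value = top.data;
        top = top.next;
        return value;
    }

    public int peek() {
        if (top == null) {
            throw new IllegalStateException("stack is empty");
        }
        return top.data; // look without removing
    }

    public static void main(String[] args) {
        StackSketch undo = new StackSketch();
        undo.push(1);
        undo.push(2);
        undo.push(3);
        System.out.println(undo.pop());  // 3 - last in, first out
        System.out.println(undo.peek()); // 2
    }
}
```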

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures To see how the code works, you can also check its Unit Test.


The post Shell Sort appeared first on Tugrul ASLAN.

Shell Sort is a variation of Insertion Sort. It is a fast algorithm that is compact in code size. A gap, in other words a distance, is set and used between the elements of the array.

Sub lists are made out of the elements that are a gap apart, and the elements of each sub list are compared; in the comparison, the lower element goes to the left and the greater to the right.

The process continues with smaller and smaller gaps until the gap becomes one. Once the gap reaches one, a plain Insertion Sort is applied to sort the rest. Depending on this gap sequence, the time complexity of the algorithm varies.
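The process above can be sketched as follows; the class and method names are mine, and this sketch uses the simple halving gap sequence (other gap sequences give different time complexities):

```java
import java.util.Arrays;

// A Shell Sort sketch with a halving gap sequence; names are illustrative.
public class ShellSortSketch {

    public static void sort(int[] array) {
        // Start with a gap of half the length, then keep halving until 1
        for (int gap = array.length / 2; gap > 0; gap /= 2) {
            // Gapped insertion sort: compare elements that are `gap` apart
            for (int i = gap; i < array.length; i++) {
                int current = array[i];
                int j = i;
                while (j >= gap && array[j - gap] > current) {
                    array[j] = array[j - gap]; // shift the greater element right
                    j -= gap;
                }
                array[j] = current;
            }
        }
    }

    public static void main(String[] args) {
        int[] numbers = {35, 33, 42, 10, 14, 19, 27, 44};
        sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [10, 14, 19, 27, 33, 35, 42, 44]
    }
}
```

When the gap reaches one, the inner loop is exactly a plain Insertion Sort, as described above.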

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Insertion Sort appeared first on Tugrul ASLAN.

The 1st element is assumed to be sorted and the iteration starts from the second element towards the end. The difference of this algorithm compared to Bubble Sort is that each element is compared against the elements on its left. This means the sorting does not go forward but backwards, from the right to the left.

Like Bubble Sort, this algorithm is only suitable for smaller data sets, because its Time Complexity is **O(n ^{2})**.

Between the implementations of Insertion Sort, only the space complexity changes:

- Imperative: O(1),
- Recursive: O(n), because of the stack frames that are created.

Both the imperative and the recursive versions are very similar, except that in the recursive version the comparison starts when i is at the index of the second element.
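The imperative version can be sketched as below (class and method names are mine); note how each element walks backwards, being compared against the elements on its left:

```java
import java.util.Arrays;

// An imperative Insertion Sort sketch; names are illustrative.
public class InsertionSortSketch {

    public static void sort(int[] array) {
        // The 1st element is assumed sorted; start from the second
        for (int i = 1; i < array.length; i++) {
            int current = array[i];
            int j = i - 1;
            // Walk backwards, shifting greater elements one slot to the right
            while (j >= 0 && array[j] > current) {
                array[j + 1] = array[j];
                j--;
            }
            array[j + 1] = current; // insert where it belongs
        }
    }

    public static void main(String[] args) {
        int[] numbers = {5, 2, 4, 6, 1, 3};
        sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [1, 2, 3, 4, 5, 6]
    }
}
```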

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Merge Sort appeared first on Tugrul ASLAN.

Merge Sort is yet another sorting algorithm that benefits from the divide-and-conquer principle. The main goal of this algorithm is to split the given array into individual elements and merge them back together during the course of the comparison.

Merge Sort seems kind of similar to Quick Sort; in the comparison below you can study the differences and similarities. The one challenge I see in this algorithm is the merge step, which I find rather complex; apart from that, the algorithm is very easy to apprehend.

- Best Case: O(n log n),
- Average Case: O(n log n),
- Worst Case: O(n log n)

Space Complexity: O(n)

In general terms, Merge Sort is often compared to Quick Sort. In some sense, they tend to act similarly as they inherit the same divide-and-conquer principle. To address a few of the differences;

- Merge Sort demands a copy of the data structure, whereas Quick Sort applies the changes with no requirement of extra space allocated,
- Both algorithms split the given data structure. However, Merge Sort splits from the middle to divide the left and right subsets down to individual elements, whereas Quick Sort picks a partition point and swaps the lower and greater values towards the left and right sides.

- The algorithm divides the array into smaller halves recursively until only individual items are left,
- Once individuals are created, they are compared and merged back from smaller to larger arrays,
- Merge Sort requires extra space allocation, which makes its space complexity O(n), whereas Quick Sort only keeps a single slot while swapping, which makes its space complexity O(log n). The one similarity is that, because of the recursive calls, stack frames are created on each call, and they also count as space.

- leftPointer: a pointer to the left/beginning of the array,
- rightPointer: a pointer to the right/end of the array,
- middleElementPointer: represents the element in the center of the array,
- leftArray: temporary storage for the elements of the left side,
- rightArray: temporary storage for the elements of the right side.

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Quick Sort appeared first on Tugrul ASLAN.

Quick Sort is a very efficient algorithm that leverages the divide-and-conquer principle to sort a data structure. The calculated performance complexities of Quick Sort are as follows;

- Best Case: O(n log n),
- Average Case: O(n log n),
- Worst Case: O(n^{2}), *reason: the algorithm partitions off only one element in each iteration*,
- Space Complexity: O(log n).

Furthermore, for starters, it will be good practice to apprehend the terms of this algorithm;

- **Pivot:** a reference element used as a line whose left and right elements are divided. There are a few Quick Sort implementations, whose suggestions vary between picking the Pivot from the beginning, the middle, the end, or randomly,
- **Partition:** the practice that swaps elements between the left and right ends while using the Pivot as a reference. By the end of partitioning, a partition point for the next divided subsets (which will be divided further) is returned,
- **Left Pointer:** a pointer or index value that traverses the left/low subset of the designated array,
- **Right Pointer:** a pointer or index value that traverses the right/high subset of the designated array.

In every step, Quick Sort divides the array into subsets and aims to collect the lower numbers on the left side and the greater numbers on the right side of the pivot, in ascending order. Let’s look at how the code performs the operation;

- Choosing a Pivot,
- Beginning the partitioning by swapping the lower elements on the left, greater elements on the right side of the pivot,
- Apply partitioning on the left side and later on the right side.
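The steps above can be sketched as follows; the class and method names are mine, and this sketch picks the last element as the Pivot (one of the choices mentioned earlier):

```java
import java.util.Arrays;

// A Quick Sort sketch with a last-element pivot; names are illustrative.
public class QuickSortSketch {

    public static void sort(int[] array, int low, int high) {
        if (low < high) {
            int partitionPoint = partition(array, low, high);
            sort(array, low, partitionPoint - 1);  // left subset
            sort(array, partitionPoint + 1, high); // right subset
        }
    }

    private static int partition(int[] array, int low, int high) {
        int pivot = array[high]; // last element as the Pivot
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (array[j] < pivot) {
                swap(array, ++i, j); // move a lower element to the left side
            }
        }
        swap(array, i + 1, high); // place the pivot between the two sides
        return i + 1;             // the partition point
    }

    private static void swap(int[] array, int a, int b) {
        int tmp = array[a];
        array[a] = array[b];
        array[b] = tmp;
    }

    public static void main(String[] args) {
        int[] numbers = {10, 80, 30, 90, 40, 50, 70};
        sort(numbers, 0, numbers.length - 1);
        System.out.println(Arrays.toString(numbers)); // [10, 30, 40, 50, 70, 80, 90]
    }
}
```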

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Selection Sort appeared first on Tugrul ASLAN.

Selection Sort searches through the array to find the smallest item in the unsorted portion.

Sorting proceeds from the left to the right, and all the sorted elements reside on the left side of the array.

Selection Sort is not a fast algorithm, because it uses nested loops to sort. It comes in handy when we sort smaller data sets. Its worst-case run time complexity is **O(n ^{2})**
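As a minimal sketch (class and method names are mine), each pass scans the unsorted right side for the smallest element and swaps it into the next slot of the sorted left side:

```java
import java.util.Arrays;

// A Selection Sort sketch; names are illustrative.
public class SelectionSortSketch {

    public static void sort(int[] array) {
        for (int i = 0; i < array.length - 1; i++) {
            int minIndex = i;
            // Nested scan of the unsorted portion: this is the O(n^2) part
            for (int j = i + 1; j < array.length; j++) {
                if (array[j] < array[minIndex]) {
                    minIndex = j;
                }
            }
            // Swap the smallest unsorted item into the sorted left side
            int tmp = array[i];
            array[i] = array[minIndex];
            array[minIndex] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] numbers = {64, 25, 12, 22, 11};
        sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [11, 12, 22, 25, 64]
    }
}
```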

The code can also be found in my GitHub Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms


The post Big O Notation appeared first on Tugrul ASLAN.

The Big O Notation is a simplified analysis of an algorithm’s efficiency. It is also known as Landau’s Symbol. Big O Notation is used in Computer Science and Mathematics to describe the asymptotic behavior of functions. Let’s study some of Big O Notation’s characteristics;

- Big O Notation describes the complexity of a target algorithm for a given input size, considered as “N”,
- Big O Notation is a machine-independent abstraction of efficiency; it reads the same on every operating system and hardware,
- Big O Notation does not concern itself with how much wall-clock time an algorithm takes, but with how it scales under certain situations,
- Big O Notation gives us the “**Time**” and the “**Space**” constraints.

In addition to the second characteristic: when we run a program, we have **performance**, *that is, how much time or hardware resource is used*, and complexity, how the algorithm acts and grows. Note that **the complexity affects the performance**, *but not the other way around*.

Furthermore, there are three types of measurements that I’ll explain and demonstrate those measurements in a different chapter.

If you have a function that has a running time of **5N**, it is realized as **O(n)**, because as n gets bigger the constant 5 __is no longer the consideration__.

Different inputs and variables have different weights in identifying the notation. If you iterate over two different arrays in nested loops, you get **O(a*b)**. Study the following pseudo code;

```java
void method(int[] array1, int[] array2) {
    for (int a : array1) {               // O(a)
        for (int b : array2) {           // O(b)
            System.out.println("match"); // O(1)
        }
    }
}
```

Certain terms dominate the others, so we drop the lower terms. Here is the sequence of the notations, from the cheapest to the most expensive:

O(1) **<** O(log n) **<** O(n) **<** O(n log n) **<** O(n^{2}) **<** O(2^{n}) **<** O(n!)

Big O Notation can be used to describe both the Time Complexity and the Space Complexity of algorithms. These are different terms.

The Time Complexity corresponds to the amount of time that an algorithm takes to run. The Time Complexity also has Best, Average and Worst cases.

The Space Complexity describes how much space the algorithm allocates in memory according to the amount of given data.

In Big O Notation we have three cases: Worst, Best and Average. When algorithms are analyzed, generally the “*Worst Case*” is referred to. It doesn’t mean that the other cases are less important, but depending on the input, the Worst Case carries the most weight.

*image courtesy of www.bigocheatsheet.com*

**Constant time** describes a statement that involves only **constants**, in other words values that are fixed and will not change. Regardless of the amount of data, the code executes the process in the same amount of time; this can be a variable definition, an access into an array, or a print out. The simplest example would be:

```java
int a = (9 / 2) * 12 - 1;   // O(1)
int b = 100 / 2;            // O(1)
int result = a + b;         // O(1)
System.out.println(result); // O(1)
```

Total time spent: O(1) + O(1) + O(1) + O(1) = 4*O(1) = **O(1)** *(constants are dropped)*

**Linear time** means that the completion time grows with the given amount of data. A good example is a linear search: a loop that iterates through N elements.

```java
for (int i = 0; i < N; i++) { // O(n)
    System.out.println(i);    // O(1)
}
```

Total time spent: O(n)*O(1) = O(n)

```java
int x = 55 * 3 + (10 - 9);    // O(1)
for (int i = 0; i < N; i++) { // O(n)
    System.out.println(i);    // O(1)
}
```

Total time spent: O(1) + O(n)*O(1) = O(n) *(drop the low order terms)*


The completion time of **quadratic** algorithms is proportional to the square of the given amount of data. We commonly spot these algorithms in nested loops. In addition, the more nested loops there are, the square becomes **cubic** O(n^{3}) or more. A good example is Bubble Sort. Furthermore, such algorithms dramatically hamper performance as the amount of data increases.
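To make the quadratic growth concrete, here is a hypothetical sketch (the class and method names are mine) that counts how often the innermost statement of a nested loop runs:

```java
// Counts the O(1) operations inside a doubly nested loop; names are illustrative.
public class QuadraticSketch {

    public static long countOperations(int n) {
        long operations = 0;
        for (int i = 0; i < n; i++) {     // O(n)
            for (int j = 0; j < n; j++) { // O(n) for each i
                operations++;             // O(1) work, executed n*n times
            }
        }
        return operations;
    }

    public static void main(String[] args) {
        System.out.println(countOperations(10));  // 100
        System.out.println(countOperations(100)); // 10000 - 10x the input, 100x the work
    }
}
```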

**Exponential** algorithms are those whose running time doubles for every additional input element. We can typically spot such algorithms in recursive calculations. For the sake of simplicity, I’ll give the example of Fibonacci numbers. As you will see, the method calls itself twice for the given input.

```java
public int fibonacciNumbers(int number) {
    if (number <= 1) {
        return number;
    } else {
        return fibonacciNumbers(number - 1) + fibonacciNumbers(number - 2);
    }
}
```

A **factorial** algorithm that calculates all permutations of a given array is considered O(n!). The most suitable example is the travelling salesman problem solved with brute-force search.

The best way to explain **logarithmic** algorithms is searching through chunks of data by halving them each time. Since we split the data, we gain time during the look up. A good real-world example of these algorithms is the Binary Search Tree; they are very efficient, because increasing the amount of data has only a minor effect at some point, as the amount of data is halved on every step.

**Quasilinear **algorithms are the ones hard to spot. The given values will be compared once. Essentially each comparison will reduce the possible final sorted data structure in half like in O(log n). Furthermore, the number of comparisons that will be performed is equal to log in the factorial of “n”. The comparisons are also equal to n log n this is how O(n log n) algorithms are formed. The mathematical representation as follows;

Sum of comparisons = log n! = log n + log(n-1) + … + log(1) ≈ **n log n**

Examples of Quasilinear complexities are Merge and Heap Sorts.

- Big O Cheat Sheet: a great web site that points out the Time and Space complexities of Data Structures and Algorithms.
- Stackoverflow Post: while making my research, I found John’s answer, which explains the notations very clearly by using the same examples.

