A Heap is a tree-based data structure with some special attributes embedded into it. The Heap data structure has the following characteristics:

- It is a form of Complete Binary Tree,
- It has a root node whose key is compared to the keys of its children, and this comparison is carried out again whenever a new node is inserted.

In addition to these characteristics, the Heap is internally implemented using an Array, and it is in turn the common choice for implementing Priority Queues.

The Heap data structure has mainly two types, which correspond to how the order of the Heap is arranged. Let's have a look at the types in detail:

**Min Heap:** The values of children are **greater than or equal** to the value of their parents, which indicates that **parent nodes** tend to **have lower values** than the children nodes.

**Max Heap:** The values of children are **less than or equal** to the value of their parents, which indicates that **parent nodes** tend to **have greater values** than the children nodes.

The time and space complexities are summed up in the common table below:

The Heap data structure is of great use in the following areas:

- Heap Sort: a very efficient sorting algorithm whose **time complexities** are all the same, **O(n log n)**,
- Priority Queues: the priority version of the Queue benefits greatly from the Heap data structure, which provides insertion, deletion, extract-maximum and decrease-key operations in **O(log n) time complexity**.

Heapifying is a recursive process of turning the Heap into the Max Heap type; the algorithm visits the non-leaf nodes, looks for the largest value among each node and its children, and continuously raises the greater values toward the top.

Parent of a tree node, represented in the array: (index - 1) / 2;

Left child of a tree node, represented in the array: 2 * index + 1;

Right child of a tree node, represented in the array: 2 * index + 2;
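The index formulas and the heapify process described above can be sketched in Java as follows (a minimal illustration; the class and method names are my own, not taken from the post's repository):

```java
public class MaxHeapify {

    // Index arithmetic for a binary heap stored in an array.
    public static int parent(int index) { return (index - 1) / 2; }
    public static int left(int index)   { return 2 * index + 1; }
    public static int right(int index)  { return 2 * index + 2; }

    // Sift the subtree rooted at index i down until it satisfies the Max Heap property.
    static void heapify(int[] heap, int size, int i) {
        int largest = i;
        int l = left(i);
        int r = right(i);
        if (l < size && heap[l] > heap[largest]) largest = l;
        if (r < size && heap[r] > heap[largest]) largest = r;
        if (largest != i) {
            int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
            heapify(heap, size, largest); // keep raising the greater value up
        }
    }

    // Build a Max Heap by heapifying every non-leaf node, bottom-up.
    public static void buildMaxHeap(int[] heap) {
        for (int i = heap.length / 2 - 1; i >= 0; i--) {
            heapify(heap, heap.length, i);
        }
    }
}
```

Starting the loop at `heap.length / 2 - 1` visits exactly the non-leaf nodes, since every index past the halfway point is a leaf.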

Heap Sort is a very efficient algorithm that performs very well in sorting arrays.

Time Complexity: all the cases are O(n log n)

Space Complexity: O(1)
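A minimal in-place sketch of Heap Sort (the class name is my own, not from the repository linked below): build a Max Heap, then repeatedly swap the root, which holds the maximum, to the end of the array and shrink the heap.

```java
public class HeapSortDemo {

    public static void sort(int[] array) {
        // Build the Max Heap: the largest value ends up at index 0.
        for (int i = array.length / 2 - 1; i >= 0; i--) {
            siftDown(array, array.length, i);
        }
        // Repeatedly move the root (maximum) to the end and shrink the heap.
        for (int end = array.length - 1; end > 0; end--) {
            int tmp = array[0]; array[0] = array[end]; array[end] = tmp;
            siftDown(array, end, 0);
        }
    }

    // Restore the Max Heap property for the subtree rooted at i.
    private static void siftDown(int[] heap, int size, int i) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        if (left < size && heap[left] > heap[largest]) largest = left;
        if (right < size && heap[right] > heap[largest]) largest = right;
        if (largest != i) {
            int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
            siftDown(heap, size, largest);
        }
    }
}
```

Only a constant number of temporaries is used, which is where the O(1) space complexity comes from.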

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

Radix Sort is the optimal algorithm for numbers ranging from 1 to n². Radix Sort makes use of Counting Sort internally to sort the array. The keys are the digits of each element in the array.

It starts from the Least Significant Digit, which is the rightmost digit, then proceeds towards the Most Significant Digit, the leftmost one.

Each digit goes into a correspondingly numbered bucket. After the buckets are filled with the elements of the array, the elements are collected again according to their bucket positions. Let's see an example illustration to better apprehend the logic; we will sort the numbers "551, 12, 346, 311":

Now we have a rough idea of how Radix Sort works internally. There is one gap I'd like to point out: what happens to 12, which has two digits compared to the others that have three? In this situation such numbers are padded with leading *0*s, and those digits **always sit in bucket zero**.

n numbers consisting of k digits

**n:** number of elements

**k:** the range of the keys for each number; we also repeat the operation this many times.

The time complexity of Radix Sort is **O(n*k)** in all cases.

Space complexity of Radix Sort is **O(n+k)**
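The bucket mechanics above can be sketched with an LSD Radix Sort that runs a stable Counting Sort once per digit (a minimal sketch for non-negative integers; the class name is my own, not from the repository):

```java
public class RadixSortDemo {

    // LSD Radix Sort: one stable Counting Sort pass per digit, starting
    // from the least significant (rightmost) digit.
    public static void sort(int[] array) {
        int max = 0;
        for (int value : array) max = Math.max(max, value);
        for (int exp = 1; max / exp > 0; exp *= 10) {
            countingSortByDigit(array, exp);
        }
    }

    private static void countingSortByDigit(int[] array, int exp) {
        int[] output = new int[array.length];
        int[] count = new int[10]; // ten buckets, one per digit 0-9
        for (int value : array) count[(value / exp) % 10]++;
        // Prefix sums turn bucket counts into end positions.
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];
        // Walk backwards so equal digits keep their order (stability).
        for (int i = array.length - 1; i >= 0; i--) {
            int digit = (array[i] / exp) % 10;
            output[--count[digit]] = array[i];
        }
        System.arraycopy(output, 0, array, 0, array.length);
    }
}
```

Numbers with fewer digits, like 12 in the example, yield digit 0 for the higher passes, which is exactly the "bucket zero" behavior described above.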

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

A Linked List is a **linear data structure** like an Array, but its internals are completely different from other data structures'. Let's first have a visual look at what the data structure looks like:

As you can see in the image above, a Linked List maintains a list of **objects linked to each other**, as the name suggests. To sum up some characteristics of the Linked List:

- Each **node** has a **next pointer/reference** to the **next** or **previous object**, and we iterate through these references,
- The **last node's** reference is usually **null**,
- However, the next/previous references do not refer to null in certain cases (*in the following chapters, I'll demonstrate the reasons*).

The Linked List is used internally in data structures like Queues and Stacks. Check them out separately to see how the Linked List fits their requirements.

- **No size limitation** compared to arrays,
- It is **not costly** to **insert** and **remove** nodes in between, whereas it is **very costly** in arrays, especially larger ones, because *all the elements have to be shifted*.

- Random data access *is not possible*; **the whole data structure must be traversed to access the designated object**,
- Storing references to the next and previous nodes **takes up some memory space**.

*image courtesy of bigocheatsheet.com*

The Linked List has a variety of implementations that often confuse us. I'll show all the implementations in subsections with visuals, descriptions and code that will let you interact more and apprehend the slightest differences better.

In a Singly Linked List the traversal is *unidirectional*: each node refers to the __next node__ in the link, and there is **no reference to previous nodes**. The *last node's next refers to null*.

See the Implementation “*SinglyLinkedList.java*” and the Unit Test “*SinglyLinkedListUnitTest.java*” to apprehend all the operations and internals of the Singly Linked List.

The Doubly Linked List maintains a *bidirectional* path, thus it contains *next* and *previous* links, where *next refers to the next node* and *previous refers to the previous node*. This maintenance comes with **an extra overhead**. Last of all, the *first node's previous* and the *last node's next* are *null*.

See the Implementation "*DoublyLinkedList.java*" and the Unit Test "*DoublyLinkedListUnitTest.java*" to apprehend all the operations and internals of the Doubly Linked List.

The Circular Linked List is the *last variation of the implementation*. I would like to call the Circular Linked List, in my own terms, the spiced-up version of the Singly and Doubly Linked List implementations. In addition, as the name suggests, its basic characteristic is that the Linked List is *circular*. Now it is time to explain the two distinct characteristics of the Circular Linked List:

- The **head and the tail** of the data structure **don't point to null**; instead, the __head's previous reference points to the tail__ and the __tail's next reference points to the head__,
- A Circular Linked List can be *made* using either the *Singly* or the *Doubly Linked List* implementation.


- **isEmpty:** checks whether the Linked List is empty,
- **insertFirst:** inserts the given node at the head of the Linked List,
- **insertAfter:** inserts the given node after an existing node in the Linked List,
- **insertLast:** inserts the given node at the end of the Linked List,
- **deleteFirst:** deletes the node at the head of the Linked List,
- **deleteNode:** deletes the given node in the Linked List,
- **deleteLast:** deletes the node at the end of the Linked List,
- **containsIterativeSearch:** iteratively searches the Linked List,
- **containsRecursiveSearch:** recursively searches the Linked List.
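A few of these operations can be sketched as follows (a simplified, hypothetical version; the class name is my own and the repository's `SinglyLinkedList.java` may differ in details):

```java
public class SinglyLinkedListSketch {

    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head;

    public boolean isEmpty() { return head == null; }

    // Insert at the head: the new node's next points to the old head.
    public void insertFirst(int data) {
        Node node = new Node(data);
        node.next = head;
        head = node;
    }

    // Insert at the tail: traverse to the last node, then link the new one.
    public void insertLast(int data) {
        Node node = new Node(data);
        if (isEmpty()) { head = node; return; }
        Node current = head;
        while (current.next != null) current = current.next;
        current.next = node;
    }

    public void deleteFirst() {
        if (!isEmpty()) head = head.next;
    }

    // Iterative search: follow the next references until the value
    // is found or the list ends at null.
    public boolean containsIterativeSearch(int data) {
        for (Node current = head; current != null; current = current.next) {
            if (current.data == data) return true;
        }
        return false;
    }
}
```

Note how `insertFirst` and `deleteFirst` touch only the head reference, while `insertLast` must traverse the whole list, which is the cost of not keeping a tail reference.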

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures. To see how the code works, you can also check its Unit Test.

The Queue is a linear data structure that maintains the **FIFO** setting: **First In, First Out**. The Queue comes in two possible **internal implementations**: a **Singly Linked List** or an **Array**. When we think of FIFO, we can picture a group of **people queued up** to buy a cinema ticket. The **first person** in the queue gets to buy a ticket first, followed by the rest of the people in the queue.

- **Hardware scheduling:** CPUs and disks are properly scheduled in concurrent environments,
- **Asynchronous communication** makes great use of queues while two processes wait for each other to respond in sequence.

Internally, a Queue can be **implemented** with a **Singly Linked List** or an **Array**, and the time complexity of the operations will differ slightly between the two. In this Stack Overflow article there are more insights and arguments about the implementations. In my own implementation I preferred the Singly Linked List.

Since the internals **differ** for **each variation** (Singly Linked List and Array), the costs of the operations can differ. The given table is suitable for the **Singly Linked List** implementation:

*image courtesy of bigocheatsheet.com*

The Queue has **three vital operations** that we need to cover. Queue implementations in some other languages, like Java's, definitely have additional operations. However, the operations below are fundamental properties of the Queue data structure:

- **enqueue:** inserts an element at the tail of the queue,
- **dequeue:** removes the element at the head and returns the dequeued value,
- **peek:** returns the head data but doesn't delete it; it just takes a peek at it.
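A minimal Singly-Linked-List-backed sketch of these three operations (the class name is my own, not the repository's); keeping both a head and a tail reference makes enqueue and dequeue O(1):

```java
public class LinkedQueue {

    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head; // dequeue and peek happen here
    private Node tail; // enqueue happens here

    public void enqueue(int data) {
        Node node = new Node(data);
        if (tail == null) { head = tail = node; return; }
        tail.next = node;
        tail = node;
    }

    public int dequeue() {
        if (head == null) throw new IllegalStateException("queue is empty");
        int data = head.data;
        head = head.next;
        if (head == null) tail = null; // the queue became empty
        return data;
    }

    public int peek() {
        if (head == null) throw new IllegalStateException("queue is empty");
        return head.data;
    }
}
```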

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures. To see how the code works, you can also check its Unit Test.

The Stack is a very usable data structure that brings the LIFO setting into the game. Let's elaborate on LIFO: **LIFO** is the abbreviation of **Last-In-First-Out**. What does LIFO really mean for us? The intentions may vary, and one of them is to have a **pile of things stacked** down to the **bottom** and to **take** one **from** the **top**. Let's apprehend the illustration below:

Yes, we understood it well: in the LIFO setting we stack items down towards the bottom and take from the top.

- In text editors, "**Undo**" operations revert an unwanted entry,
- Browsers' **back buttons** make use of a similar approach to **navigate to earlier pages**,
- **Recursive methods** also utilize the **stack** very well: starting from the first call until the last, all of the method executions are added on top of each other.

Internally, a Stack can be **implemented** with a **Singly Linked List** or an **Array**, and the time complexity of the operations will differ slightly between the two. In this Stack Overflow article there are more insights and arguments about the implementations. In my own implementation I preferred the Singly Linked List.

Since the internals **differ** for **each variation** (Singly Linked List and Array), the costs of the operations can differ. The given table is suitable for the **Singly Linked List** implementation:

*image courtesy of bigocheatsheet.com*

The Stack has **three vital operations** that we need to cover. Stack implementations in some other languages, like Java's, definitely have additional operations. However, the operations below are fundamental properties of the Stack data structure:

- **push:** pushes an element onto the top of the stack,
- **pop:** removes the element from the top and returns the popped value,
- **peek:** returns the top data but doesn't delete it; it just takes a peek at it.
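A minimal Singly-Linked-List-backed sketch of these three operations (the class name is my own, not the repository's); since every operation touches only the top node, each runs in O(1):

```java
public class LinkedStack {

    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node top; // all three operations work only on the top

    // The new node points to the old top and becomes the new top.
    public void push(int data) {
        Node node = new Node(data);
        node.next = top;
        top = node;
    }

    public int pop() {
        if (top == null) throw new IllegalStateException("stack is empty");
        int data = top.data;
        top = top.next;
        return data;
    }

    public int peek() {
        if (top == null) throw new IllegalStateException("stack is empty");
        return top.data;
    }
}
```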

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/DataStructures. To see how the code works, you can also check its Unit Test.

Shell Sort is a variation of Insertion Sort. Shell Sort is a very fast algorithm that is compact in code size. A gap, in other words a distance, is set and used between the elements in the array.

Sublists are made out of the elements that are a gap apart, and the elements of each sublist are compared. In the comparison, the lower element goes to the left and the greater one to the right.

The process continues and the gap gets smaller until it becomes one. Once the gap reaches one, Insertion Sort is applied to sort the rest. Depending on this gap, the time complexity of the algorithm varies.
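A minimal sketch using the common halving gap sequence (the class name is my own, not the repository's); when the gap reaches 1, the inner loop is exactly an Insertion Sort:

```java
public class ShellSortDemo {

    // Gap sequence: n/2, n/4, ..., 1.
    public static void sort(int[] array) {
        for (int gap = array.length / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < array.length; i++) {
                int current = array[i];
                int j = i;
                // Compare elements that are 'gap' apart;
                // greater values shift to the right.
                while (j >= gap && array[j - gap] > current) {
                    array[j] = array[j - gap];
                    j -= gap;
                }
                array[j] = current;
            }
        }
    }
}
```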

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

The first element is assumed to be sorted, and the iteration starts from the second element towards the end. The difference in this algorithm compared to Bubble Sort is that it compares each element with the ones on its left. This means the sorting goes not forward but backwards, from the right to the left.

This algorithm is sufficient on smaller data sets, like Bubble Sort, because its time complexity is **O(n²)**.

In the implementations of Insertion Sort only the space complexity changes:

- Imperative: O(1)
- Recursive: O(n), because of the stack frames that are created

Both the imperative and the recursive versions are very similar, except that in the recursive version the comparison starts when `i` is at the second element's index.
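The imperative version can be sketched as follows (the class name is my own, not the repository's); each element walks backwards, right to left, until it finds its place:

```java
public class InsertionSortDemo {

    // The first element is assumed sorted; every following element
    // is compared with the ones on its left.
    public static void sort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            int current = array[i];
            int j = i - 1;
            // Shift greater elements one step to the right.
            while (j >= 0 && array[j] > current) {
                array[j + 1] = array[j];
                j--;
            }
            array[j + 1] = current;
        }
    }
}
```

Only the `current` temporary is used, which gives the imperative version its O(1) space complexity.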

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

Merge Sort is yet another sorting algorithm that benefits from the divide-and-conquer principle. The main goal in this algorithm is to split the given array into individual elements and merge them back in the course of the comparison.

Merge Sort seems kind of similar to Quick Sort; in the comparison below you can study the differences and similarities. However, the one challenge I see in this algorithm is the merge step. I find this part complex, but apart from that the algorithm is very easy to apprehend.

- Best Case: O(n log n),
- Average Case: O(n log n),
- Worst Case: O(n log n)

Space Complexity: O(n)

In general terms, Merge Sort is often compared to Quick Sort. In some sense they tend to act similarly, as they inherit the same divide-and-conquer principle. To address a few of the differences:

- Merge Sort demands a copy of the data structure, whereas Quick Sort applies the changes in place with no extra space allocated,
- Both algorithms split the given data structure. However, Merge Sort splits from the half, dividing the left and right subsets down to individual elements, whereas Quick Sort picks a partition point and swaps the lower and greater values towards the left and right directions.

- The algorithm divides the array into smaller halves until only individual items are left, using recursion,
- Once the individuals are created, they are compared and merged back from smaller to larger arrays,
- Merge Sort requires extra space allocation, which makes its space complexity O(n), whereas Quick Sort only keeps a space while swapping, which makes its space complexity O(log n). One similarity is that the recursive calls in both create stack frames, which are also considered space.

- leftPointer: a pointer to the left/beginning of the array
- rightPointer: a pointer to the right/end of the array
- middleElementPointer: represents the element in the center of the array
- leftArray: the elements of the left side, as temporary storage
- rightArray: the elements of the right side, as temporary storage
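Using those pointer names, the split-and-merge flow can be sketched as follows (a minimal sketch; the class name is my own, not the repository's):

```java
public class MergeSortDemo {

    public static void sort(int[] array) {
        mergeSort(array, 0, array.length - 1);
    }

    private static void mergeSort(int[] array, int leftPointer, int rightPointer) {
        if (leftPointer >= rightPointer) return; // an individual item is already sorted
        int middleElementPointer = (leftPointer + rightPointer) / 2;
        mergeSort(array, leftPointer, middleElementPointer);      // split the left half
        mergeSort(array, middleElementPointer + 1, rightPointer); // split the right half
        merge(array, leftPointer, middleElementPointer, rightPointer);
    }

    // Merge two sorted halves back, using temporary left and right arrays.
    private static void merge(int[] array, int left, int middle, int right) {
        int[] leftArray = java.util.Arrays.copyOfRange(array, left, middle + 1);
        int[] rightArray = java.util.Arrays.copyOfRange(array, middle + 1, right + 1);
        int i = 0, j = 0, k = left;
        while (i < leftArray.length && j < rightArray.length) {
            array[k++] = leftArray[i] <= rightArray[j] ? leftArray[i++] : rightArray[j++];
        }
        // Copy whatever remains of either half.
        while (i < leftArray.length) array[k++] = leftArray[i++];
        while (j < rightArray.length) array[k++] = rightArray[j++];
    }
}
```

The temporary `leftArray` and `rightArray` copies are exactly the extra allocation behind the O(n) space complexity.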

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

Quick Sort is a very efficient algorithm that leverages the divide-and-conquer principle to sort a data structure. The calculated performance complexity of Quick Sort is as follows:

- Best Case: O(n log n),
- Average Case: O(n log n),
- Worst Case: O(n²), *reason: the algorithm will select only one element in each iteration*,
- Space Complexity: O(log n).

Furthermore, for starters, it is good practice to apprehend the terms used in this algorithm:

- **Pivot:** a reference element used as a line whose left and right elements are divided. There are a few Quick Sort implementations, whose suggestions vary between picking the pivot from the beginning, the middle, the end, or randomly,
- **Partition:** a practice that swaps elements between the left and right ends, using the pivot as a reference. By the end of partitioning, a partition point for the next divided subsets (which will also be divided) is returned,
- **Left Pointer:** a pointer or index value that traverses the first/low/left subset of the designated array,
- **Right Pointer:** a pointer or index value that traverses the last/high/right subset of the designated array.

In every step, Quick Sort divides the array into subsets and aims to collect the lower numbers on the left side and the greater numbers on the right side of the pivot, in ascending order. Let's look at a glance at how the code performs the operation:

- Choosing a Pivot,
- Beginning the partitioning by swapping the lower elements on the left, greater elements on the right side of the pivot,
- Apply partitioning on the left side and later on the right side.
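The steps above can be sketched with a Lomuto-style partition that picks the last element as the pivot (one of the pivot choices mentioned earlier; the class name is my own, not the repository's):

```java
public class QuickSortDemo {

    public static void sort(int[] array) {
        quickSort(array, 0, array.length - 1);
    }

    private static void quickSort(int[] array, int low, int high) {
        if (low >= high) return;
        int partitionPoint = partition(array, low, high);
        quickSort(array, low, partitionPoint - 1);  // left side of the pivot
        quickSort(array, partitionPoint + 1, high); // right side of the pivot
    }

    // Swap lower values to the left of the pivot; greater values
    // end up on its right. Returns the pivot's final position.
    private static int partition(int[] array, int low, int high) {
        int pivot = array[high]; // last element chosen as the pivot
        int leftPointer = low - 1;
        for (int rightPointer = low; rightPointer < high; rightPointer++) {
            if (array[rightPointer] < pivot) {
                leftPointer++;
                swap(array, leftPointer, rightPointer);
            }
        }
        swap(array, leftPointer + 1, high);
        return leftPointer + 1;
    }

    private static void swap(int[] array, int i, int j) {
        int tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
}
```

When the input is already sorted, this pivot choice splits off only one element per partition, which is the worst-case O(n²) scenario noted above.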

You can check out my GitHub repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms

Selection Sort searches through the array to find the smallest item in the unsorted portion of the list.

Sorting proceeds from the left to the right, and all the sorted elements reside on the left side of the array.

Selection Sort is not a fast algorithm, because it uses nested loops to sort. It comes in handy when we sort smaller data sets. Its worst-case run time complexity is **O(n²)**.
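The nested loops can be sketched as follows (the class name is my own, not the repository's); the outer loop grows the sorted region on the left while the inner loop scans the unsorted region for the smallest item:

```java
public class SelectionSortDemo {

    public static void sort(int[] array) {
        for (int i = 0; i < array.length - 1; i++) {
            // Find the smallest item in the unsorted region [i, end).
            int minIndex = i;
            for (int j = i + 1; j < array.length; j++) {
                if (array[j] < array[minIndex]) minIndex = j;
            }
            // Move it to the end of the sorted region on the left.
            int tmp = array[i];
            array[i] = array[minIndex];
            array[minIndex] = tmp;
        }
    }
}
```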

The code can also be found in my Github Repository @ https://github.com/tugrulaslan/BlogCodeSnippets/tree/master/SortingAlgorithms
