Summarize how the below sorting algorithm works, describe its efficiency using Big-O notation, and explain the best uses of this algorithm.
Cocktail shaker sort
Solution
Sorting is a fundamental operation in computer science. It refers to arranging data in some given order, such as increasing or decreasing for numerical data, or alphabetical for character data.
All sorting algorithms are problem specific: the particular algorithm one chooses depends on the properties of the data and the operations one may perform on it. Accordingly, we want to know the complexity of each algorithm; that is, the running time f(n) of the algorithm as a function of the number n of input elements, and we also want to analyze its space requirements. There is a direct correlation between the complexity of an algorithm and its relative efficiency. Algorithmic complexity is generally written in a form known as Big-O notation, where the O expresses the growth rate of the algorithm's cost and n represents the size of the set the algorithm is run against. The two common classes of sorting algorithms are O(n^2), which includes the bubble, insertion, selection, and shell sorts, and O(n log n), which includes the heap, merge, and quick sorts. Cocktail shaker sort, a bidirectional variant of bubble sort, belongs to the O(n^2) class.
Let A be a list of n elements A1, A2, …, An in memory. Sorting A means rearranging its contents so that they are in increasing order:

A1 <= A2 <= A3 <= A4 <= … <= An

Since A has n elements, there are n! ways its contents can appear in A; these correspond to the n! permutations of 1, 2, 3, …, n. Accordingly, each sorting algorithm must handle all n! possibilities.
Big-O analysis gives us a basis for measuring the efficiency of an algorithm. A more precise definition is this: it measures the efficiency of an algorithm by how its running time grows as a function of the input size. Think of the input simply as what goes into a function, whether it is an array of numbers, a linked list, etc.
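To answer the question directly: cocktail shaker sort is a bidirectional variant of bubble sort. It sweeps the list left-to-right, bubbling the largest remaining element to the end, then sweeps right-to-left, bubbling the smallest remaining element to the front, shrinking the unsorted region from both ends until a full pass makes no swaps. It is O(n^2) in the average and worst cases, but O(n) on an already-sorted list, so its best use is on small or nearly-sorted data. A minimal Python sketch (the function name and details are my own illustration, not a reference implementation):

```python
def cocktail_shaker_sort(items):
    """Bidirectional bubble sort: alternate forward and backward passes,
    swapping adjacent out-of-order pairs, until no swaps occur."""
    a = list(items)                 # work on a copy
    start, end = 0, len(a) - 1
    swapped = True
    while swapped:
        swapped = False
        # forward pass: bubble the largest remaining element to the end
        for i in range(start, end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
        end -= 1
        swapped = False
        # backward pass: bubble the smallest remaining element to the front
        for i in range(end - 1, start - 1, -1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        start += 1
    return a
```

Because the sorted runs at both ends are excluded after each pass, it typically performs somewhat fewer comparisons than plain bubble sort, though the asymptotic class is the same.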
Bubble sort, selection sort: When you're doing something quick and dirty and, for some reason, you can't just use the standard library's sorting algorithm. The only advantage these have over insertion sort is being slightly easier to implement.
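As a sketch of this "quick and dirty" class, here is a minimal selection sort in Python (the function name is my own; this is an illustration, not the standard library's sort):

```python
def selection_sort(items):
    """Repeatedly select the smallest remaining element and swap it
    into place; O(n^2) comparisons but at most n-1 swaps."""
    a = list(items)
    for i in range(len(a)):
        # find the index of the smallest element in the unsorted tail
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a
```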
Insertion sort: When N is guaranteed to be small, including as the base case of a quick sort or merge sort. While this is O(N^2), it has a very small constant factor and is a stable sort.
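A minimal insertion sort sketch in Python (function name is my own illustration): each element is inserted into its place among the already-sorted prefix, which is why it is so fast on small or nearly-sorted inputs.

```python
def insertion_sort(items):
    """Grow a sorted prefix one element at a time; stable, O(n^2)
    worst case but O(n) on already-sorted input."""
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger elements right to open a slot for key
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```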
Merge sort: When you need a stable O(N log N) sort, this is about your only option. The only downsides are that it uses O(N) auxiliary space and has a slightly larger constant factor than a quick sort.
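A compact merge sort sketch in Python (function name is my own; this version returns a new list rather than sorting in place). Using `<=` when comparing the two heads is what preserves stability.

```python
def merge_sort(items):
    """Split in half, sort each half recursively, then merge;
    stable and O(n log n), at the cost of O(n) extra space."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # merge: take the smaller head; <= keeps equal elements in order
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```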
Radix sort: When log(N) is significantly larger than K, where K is the number of radix digits.
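A least-significant-digit radix sort can be sketched as follows (assuming non-negative integer keys; the function name and `base` parameter are my own choices). Each pass distributes the values into buckets by one digit, from least to most significant.

```python
def radix_sort(items, base=10):
    """LSD radix sort for non-negative integers: one stable bucket
    pass per digit, so O(K * n) total for K digits."""
    a = list(items)
    if not a:
        return a
    place = 1
    while place <= max(a):
        buckets = [[] for _ in range(base)]
        for v in a:
            # distribute by the digit at the current place value
            buckets[(v // place) % base].append(v)
        a = [v for bucket in buckets for v in bucket]
        place *= base
    return a
```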
Non-comparison sorts: Under some fairly limited conditions it's possible to break the O(N log N) barrier and sort in O(N).
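One such non-comparison sort is counting sort, sketched below under the assumption that the keys are small non-negative integers with a known maximum (the function name and `max_value` parameter are my own). It runs in O(N + max_value) time, never comparing elements to each other.

```python
def counting_sort(items, max_value):
    """Tally how many times each key occurs, then emit the keys in
    order; O(n + k) where k = max_value, no comparisons needed."""
    counts = [0] * (max_value + 1)
    for v in items:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)   # emit each key as many times as it occurred
    return out
```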
