Trees: Binary Search (BST), Insertion and Deletion in BST: Unit V


UNIT V

Syllabus: Searching: Linear, Binary search techniques. Sorting: Insertion Sort,
Bubble Sort, Quick Sort, Two-way Merge Sort. Trees: Binary Search Tree (BST),
Insertion and Deletion in BST.

Trees: Binary Search (BST), Insertion and Deletion in BST


TREE - A tree is a non-linear data structure in which items are arranged in a
hierarchical manner. It is used to represent the hierarchical relationships existing
among several data items. The graph-theoretic definition of a tree is: a finite set of
one or more data items (nodes) such that:

1. There is a special data item called the root of the tree.
2. The remaining data items are partitioned into a number of mutually exclusive,
disjoint subsets, each of which is itself a tree; they are called sub-trees.
3. Natural trees grow upwards from the ground into the air, but a tree data
structure grows downwards, from top to bottom. This is a universally practiced
convention for trees.

Figure 12. A tree

Introduction to Binary Search Tree


In a simple binary tree the nodes may be arranged in any fashion: depending on the
user's wish, a new node can be attached as the left or right child of any desired node.
In such a case, finding any particular node is a lengthy procedure, because we may
have to search the entire tree, and the searching time complexity increases
unnecessarily. To make searching in a binary tree faster, we build a binary search
tree. The binary search tree is based on the binary search algorithm. While creating
the binary search tree the data is systematically arranged, so that
values in the left subtree < root node value < values in the right subtree.

If you observe the figure carefully, you will find that the rule left value < parent value
< right value is followed throughout the tree.

Figure Binary search tree

Various operations that can be performed on a binary search tree are:
1. Insertion of a node in the binary search tree.
2. Deletion of an element from the binary search tree.
3. Searching for an element in the binary search tree.
4. Display of the binary search tree.

1. Insertion of a node in a binary tree

Algorithm:

1. Read the value for the node which is to be created, and store it in a node called
New.
2. Initially if (root==NULL) then root=New.
3. Again read the next value and store it in a newly created node New.

Prepared by: Prof Dheresh Soni CSE Dept. SIRT Page 1


4. If (Newvalue < rootvalue) then attach New node as a left Child of at otherwise
attach New node as a right child of root.
5. Repeat step 3 and 4 for constructing required binary search tree completely-

2. Deletion of an element from the binary tree

For deletion of any node from binary search tree there are three cases which are
possible.
i. Deletion of leaf node.
ii. Deletion of a node having one child.
iii. Deletion of a node having two children.

i. Deletion of leaf node - This is the simplest deletion, in which we set the left or
right pointer of the parent node to NULL. In the tree below, if we want to delete the
node having value 6, we set the left pointer of its parent node to NULL.

Figure Before and after deletion of leaf node

ii. Deletion of a node having one child - To explain this kind of deletion, consider
the tree given below. If we want to delete node 15, then we simply copy node 18 into
the place of 15 and then set the node free. The inorder successor is always copied at
the position of the node being deleted.

Figure Before and after deletion of non-leaf node

iii. Deletion of a node having two children - Again, let us take an example to
discuss this kind of deletion. Suppose we want to delete the node having value 7. We
first find the inorder successor of node 7. The inorder successor is then simply
copied at the location of node 7: that means we copy 8 at the position where the
value of the node is 7, and set the left pointer of 9 to NULL. This completes the
deletion procedure.

Figure Before and after deletion of non-leaf node

Algorithm:

1. Read the value for the node which is to be deleted, let delete node is x.
2. Find the node to be deleted and check the condition
2.1. If delete node (x) is leaf node, simply delete it and place NULL.

2.2. If the delete node (x) is a non-leaf node and it has one child node, delete
node (x) and replace it with its child node.
2.3. If the delete node (x) is a non-leaf node and it has two child nodes, delete
node (x) and replace it with its inorder successor.
3. Print "node deleted" and exit.

3. Searching a node from Binary search tree

In searching, the node which we want to search for is called the key node. The key
node is compared with each node starting from the root node: if the value of the key
node is greater than the current node, we search for it in the right sub-branch,
otherwise in the left sub-branch. If we reach a leaf node and still have not found the
value of the key node, we declare "node is not present in the tree".

Figure Binary search tree

In the above tree, suppose we want to search for the value 9. We compare 9 with the
root node 10; as 9 is less than 10, we search the left sub-branch. Now compare 9
with 5; as 9 is greater than 5, we move to the right sub-branch. Now compare 9 with
8; as 9 is greater than 8, we move to the right sub-branch, and the node we reach
holds the value 9. Thus the desired node can be searched.

Algorithm:

1. Read the value for the node which is to be searched; let the search value be x.
2. Compare the search value x with the root value:
2.1. If root value == x, print "item searched" and exit.
2.2. Else if root value > x, go to the left child of root in the left subtree and
repeat 2.1 until the required value is found. If node value == x, print "item
searched" and exit.
2.3. Else if root value < x, go to the right child of root in the right subtree and
repeat 2.1 until the required value is found. If node value == x, print "item
searched" and exit.
2.4. Else print "item is not present in BST" and exit.

Searching: Linear, Binary search


Linear search techniques.
In linear search, we access each element of an array one by one sequentially and see
whether it is the desired element or not. A search is unsuccessful if all the elements
are accessed and the desired element is not found. In the worst case we have to scan
all n elements; in the average case we may have to scan half of the array (n / 2).
Therefore linear search can be defined as the technique which traverses the array
sequentially to locate the given item. The program given below illustrates the
searching of an element of the array using linear search.

Efficiency of sequential searching - The time taken, or the number of comparisons
made, in searching for a record in a search table determines the efficiency of the
technique. If the desired record is present in the first position of the search table,
then only one comparison is made. If the desired record is the last one, then n
comparisons have to be made. If the record is present somewhere in the search
table, on an
average, the number of comparisons will be (n + 1)/2. The worst case efficiency of
this technique is O(n), where O stands for the order of execution.

Here data is a linear array with n elements and item is a given item of information.
This algorithm finds the location loc of item in data, or sets loc = 0 if the search is
unsuccessful.

1. [Insert item at the end of data.] Set data[n+1] = item.
2. [Initialize counter.] Set loc = 1.
3. [Search for item.]
Repeat while data[loc] != item
Set loc = loc + 1.
4. [Successful?] If loc = n+1, then set loc = 0.
5. Exit.

Program to search an element using linear search technique.

#include<stdio.h>
#include<conio.h>
void main( )
{
    int a[100], n, i, item, loc = -1 ;
    clrscr( ) ;
    printf("\nEnter the number of elements : ") ;
    scanf("%d", &n) ;
    printf("Enter the numbers : \n") ;
    for (i = 0 ; i <= n-1 ; i++)
    {
        scanf("%d", &a[i]) ;
    }
    printf("\nEnter the number to be searched : ");
    scanf("%d", &item);
    for (i = 0 ; i <= n-1 ; i++)
    {
        if (item == a[i])
        {
            loc = i;
            break;
        }
    }
    if (loc >= 0)
        printf("\n%d is found in position %d", item, loc + 1);
    else
        printf("\nItem does not exist") ;
    getch( ) ;
}

Binary search techniques.


Binary search is an extremely efficient algorithm. This search technique searches for
the given item in the minimum possible number of comparisons. To do the binary
search, first we have to sort the array elements. The logic behind this technique is
given below:
1. First find the middle element of the array.

2. Compare the middle element with the item.
3. There are three cases:
(a) If it is the desired element, the search is successful.
(b) If the desired item is less than the middle element, search only the first half
of the array.
(c) If the desired item is greater than the middle element, search the second
half of the array.

Repeat the same steps until the element is found or the search area is exhausted. In
this algorithm every time we reduce the search area, so the number of comparisons
keeps on decreasing. In the worst case the number of comparisons is at most
log2(n) + 1. So it is an efficient algorithm compared to linear search, but the array
has to be sorted before doing binary search.

Here a is a sorted array with lower bound LB and upper bound UB, and item is a
given item of information. The variables beg, end and mid denote, respectively, the
beginning, end and middle locations of a segment of elements of a. This algorithm
finds the location loc of item in a, or sets loc = NULL.
1. [Initialize segment variables.]
Set beg = LB, end = UB and mid = int((beg + end)/2)
2. Repeat steps 3 and 4 while beg <= end and a[mid] != item
3. If item < a[mid] then
Set end = mid - 1
else
Set beg = mid + 1
[End of if structure]
4. Set mid = int((beg + end)/2)
[End of step 2 loop]
5. If a[mid] = item then
Set loc = mid
else
Set loc = NULL
[End of if structure]
6. Exit

Program to search an element from list of elements using binary search technique.

#include<stdio.h>
#include<conio.h>
void main( )
{
    int a[100], i, loc, mid, beg, end, n, flag = 0, item ;
    clrscr( ) ;
    printf("How many elements : ") ;
    scanf("%d", &n) ;
    printf("Enter the elements of the array\n") ;
    for (i = 0 ; i <= n-1 ; i++)
    {
        scanf("%d", &a[i]);
    }
    printf("Enter the element to be searched\n") ;
    scanf("%d", &item) ;
    loc = 0 ;
    beg = 0 ;
    end = n-1 ;
    mid = (beg + end) / 2 ;   /* initialize mid before the loop tests a[mid] */
    while ((beg <= end) && (item != a[mid]))
    {
        if (item < a[mid])
            end = mid - 1 ;
        else
            beg = mid + 1 ;
        mid = (beg + end) / 2 ;
    }
    if ((beg <= end) && (item == a[mid]))
    {
        printf("Search is successful\n") ;
        loc = mid ;
        printf("Position of the item %d\n", loc + 1);
        flag = flag + 1 ;
    }
    if (flag == 0)
    {
        printf("Search is not successful\n") ;
    }
    getch( );
}

Sorting
Systematic arrangement of data is called SORTING. For example, in a telephone
directory the surnames of persons are arranged in increasing or decreasing order.
Sorting is a technique by which the elements are arranged in some particular order.
Usually the sorting order is of two types:

Ascending order: It is the sorting order in which the elements are arranged from low
value to high value. In other words, the elements are in increasing order. For
example: 10, 50, 40, 20, 30 can be arranged in ascending order after applying some
sorting technique as 10, 20, 30, 40, 50.

Descending order: It is the sorting order in which the elements are arranged from
high value to low value. In other words, the elements are in decreasing order; it is
the reverse of ascending order. For example: 10, 50, 40, 20, 30 can be arranged in
descending order after applying some sorting technique as 50, 40, 30, 20, 10.

Need for sorting


Sorting is useful for arranging the data in a desired order. After sorting, the required
element can be located easily. Sorting is needed for the following reasons:
1. Sorting is useful in database applications for arranging the data in a desired
order.
2. In dictionary-like applications the data is arranged in sorted order.
3. For searching an element in a list of elements, sorting is required.
4. For checking the uniqueness of elements, sorting is required.
5. For finding the closest pair in a list of elements, sorting is required.

Types of sorting algorithm - Internal & External sorting algo.


There are two types of sorting techniques - Internal sorting and External sorting.

Internal sorting - In internal sorting the data resides in the main memory of the
computer. It is used to sort a small amount of data. This type of sorting is faster in
comparison to external sorting and requires less memory.

External sorting - In external sorting, the data stored on secondary memory is
loaded part by part into main memory, and sorting is done there. The sorted data
can then be stored in intermediate files. Finally these intermediate files are merged
repeatedly to get the sorted data. Thus a huge amount of data can be sorted using
this technique.

The external merge sort is a technique in which the data is loaded into intermediate
files. Each intermediate file is sorted independently and then combined or merged to
get the sorted data. For example: consider that there are 10,000 records that have to
be sorted. Clearly we need to apply an external sorting method. Suppose main
memory has a capacity to store 500 records in blocks, with each block holding 100
records.

The 5 sorted blocks (i.e. 500 records) are stored in an intermediate file. This process
is repeated 20 times to get all the records sorted in chunks. In the second step, we
start merging pairs of intermediate files in the main memory to get the output file.

Various internal sorting techniques are bubble sort, insertion sort and selection sort.

Difference between internal and external sorting:
1. Internal sorting is performed for sorting a small amount of data, whereas
external sorting is performed for sorting a large amount of data.
2. In internal sorting all the data lies in the same main memory, whereas in
external sorting all the data cannot be accommodated in a single memory.
3. Internal sorting does not require secondary memory for sorting the data,
whereas external sorting needs some amount of secondary memory, such as a
hard disk or compact disc, to keep all the data for sorting.

Sorting Techniques
Sorting is an important activity, and each time we insert or delete data, the
remaining data may need to be kept in sorted order. Various algorithms that have
been developed for sorting are as follows:
1. Insertion Sort  2. Bubble Sort  3. Selection Sort  4. Merge Sort
5. Quick Sort  6. Radix Sort  7. Shell Sort  8. Bucket Sort

9. Heap Sort

Let us discuss above techniques in detail with example.

1. Insertion Sort: In this method the elements are inserted at their appropriate
place; hence the name insertion sort. Consider the following example to understand
insertion sort. For example, consider a list of elements: 30, 70, 20, 50, 40, 10, 60.

The process starts with the first element.

Step 1: Put the 1st element, i.e. 30, in the sorted zone.

Step 2: Compare the 2nd element, i.e. 70, with the 1st element and insert it at its
proper position: 30, 70.

Step 3: Now compare 20 with the elements of the sorted zone, i.e. 30, 70, and insert
it at its proper position: 20, 30, 70.

Step 4: Similarly, compare 50 with the elements of the sorted zone, i.e. 20, 30, 70,
and insert it at its proper position: 20, 30, 50, 70.

Step 5: Similarly, compare 40 with the elements of the sorted zone, i.e. 20, 30, 50,
70, and insert it at its proper position: 20, 30, 40, 50, 70.

Step 6: Similarly, compare 10 with the elements of the sorted zone, i.e. 20, 30, 40,
50, 70, and insert it at its proper position: 10, 20, 30, 40, 50, 70.

Step 7: Similarly, compare 60 with the elements of the sorted zone, i.e. 10, 20, 30,
40, 50, 70, and insert it at its proper position: 10, 20, 30, 40, 50, 60, 70.

Step 8: Finally we get all the elements in the sorted zone: 10, 20, 30, 40, 50, 60, 70.
Analysis - When an array of elements is almost sorted, we get the best case
complexity; the best case time complexity of insertion sort is O(n). If the array is
randomly distributed, we get the average case time complexity, which is O(n²). If the
list of elements is arranged in descending order and we want to sort the elements in
ascending order, we get the worst case time complexity, which is O(n²).

Advantages of insertion sort


1. Simple to implement.
2. This method is efficient when we want to sort a small number of elements, and it
has excellent performance on an almost sorted list of elements.
3. It is more efficient than most other simple O(n²) algorithms, such as selection
sort or bubble sort.
4. It is stable (it does not change the relative order of equal elements).
5. It is an in-place sorting algorithm. An algorithm in which the input is
overwritten by the output, and which does not require any additional space to
execute the sort, is called an in-place sorting algorithm.

2. Bubble Sorting - This is the simplest kind of sorting method. In the bubble sort
procedure we perform several iterations, which are called passes. First the elements
are stored in an array. In each pass we compare element a[0] of the array with the
next element, i.e. a[1]. If a[0] is greater than a[1] we interchange the values; if not,
the values remain the same, and we move to the next step and compare a[1] with
a[2]. Similarly, if a[1] is greater than a[2] we interchange the values, otherwise not,
and move on. When we reach the last position, the largest element has come to the
last position in the array and the 1st pass is over. Now the list is sorted up to some
extent. The passes are repeated from a[0] to a[n] till all the elements get sorted.
Example: Consider 5 unsorted elements: 45, -40, 190, 99, 11
Example: Consider 5 unsorted elements are 45, -40, 190, 99, 11

First store those elements in the array a

Pass 1: In this pass each element will be compared with its neighbouring element.

Compare a[0] and a[1], i.e. 45 and -40. As a[0] is greater than a[1], interchange
them; therefore a[0] = -40 and a[1] = 45.
Compare a[1] and a[2], i.e. 45 and 190. As a[1] is smaller than a[2], no interchange
is done; therefore a[1] = 45 and a[2] = 190.
Compare a[2] and a[3], i.e. 190 and 99. As a[2] is greater than a[3], interchange
them; therefore a[2] = 99 and a[3] = 190.
Compare a[3] and a[4], i.e. 190 and 11. As a[3] is greater than a[4], interchange
them; therefore a[3] = 11 and a[4] = 190.

After the first pass the array (-40, 45, 99, 11, 190) holds the elements sorted to
some extent.

Pass 2:
Compare a[0] and a[1]: no interchange. Compare a[1] and a[2]: no interchange.
Compare a[2] and a[3]: since 99 > 11, interchange them; a[2] = 11 and a[3] = 99.
Compare a[3] and a[4]: no interchange.

Pass 3:
Compare a[0] and a[1]: no interchange. Compare a[1] and a[2]: since 45 > 11,
interchange them; a[1] = 11 and a[2] = 45.
Compare a[2] and a[3]: no interchange. Compare a[3] and a[4]: no interchange.

This is the end of pass 3. The process is continued till pass 4. Finally, at the end of
the last pass, the array holds all the sorted elements: -40, 11, 45, 99, 190. Since the
elements move up through the comparison positions like bubbles, this method is
called bubble sort.

Algorithm:

1. Read the total number of n elements.


2. Store the elements in the array.
3. Set the i = 0.
4. Compare the adjacent elements, i.e. a[0] with a[1].
4.1. If a[0] > a[1], interchange the elements; otherwise no interchange is done.
4.2. Move to the next adjacent pair of elements, i.e. a[1] and a[2].
5. Repeat step 4 for all n elements.
6. Increment the value of i by 1 and repeat steps 4 and 5 while i < n.
7. Print the sorted list of elements.
8. Stop.

3. Selection Sorting - Scan the array to find its smallest element and swap it with
the first element. Then, starting with the second element, scan the rest of the list to
find the smallest element and swap it with the second element. Then, starting from
the third element, the rest of the list is scanned to find the next smallest element.
Continuing in this fashion we can sort the entire list.

Generally, on pass i (0 <= i <= n-2), the smallest element is searched for among the
last n-i elements and is swapped with A[i]. The list gets sorted after n-1 passes.
Example: Consider the elements 70, 30, 20, 50, 60, 10, 40. We can store these
elements in array A.

1st Pass: The smallest element is 10. Swap A[0] with it. The array then becomes
10, 30, 20, 50, 60, 70, 40.

2nd Pass: The smallest remaining element is 20. Swap A[1] with it. The array then
becomes 10, 20, 30, 50, 60, 70, 40.

3rd Pass: As there is no element smaller than 30 in the remaining list, we simply
increment the i pointer.

4th Pass: The smallest remaining element is 40. Swap A[3] with it. The array then
becomes 10, 20, 30, 40, 60, 70, 50.

5th Pass: The smallest remaining element is 50. Swap A[4] with it. The array then
becomes 10, 20, 30, 40, 50, 70, 60.

6th Pass: The smallest remaining element is 60. Swap A[5] with it. The array then
becomes 10, 20, 30, 40, 50, 60, 70.

This is a sorted array.


Analysis:

The above algorithm can be analysed mathematically. We apply the general plan for
non-recursive mathematical analysis. The input size is n, i.e. the total number of
elements in the list. In the algorithm the basic operation is the key comparison
A[i] < A[min]. This basic operation depends only on the array size n. Thus the time
complexity of selection sort is Θ(n²) for all inputs, but the total number of key swaps
is only Θ(n).

4. Merge Sorting - The merge sort is a sorting algorithm that uses the divide and
conquer strategy. In this method division is dynamically carried out. Merge sort on
an input array with n elements consists of three steps:

1. Divide: Partition array into two sublists S1 and S2 with n/2 elements each
2. Conquer: Then sort sub list S1 and sublist S2.
3. Combine: Merge S1 and S2 into a unique sorted group.

Analysis: In the merge sort algorithm two recursive calls are made, each of which
works on n/2 elements of the list. After the two recursive calls, one call is made to
combine the two sublists, i.e. to merge all the elements. We can write this as:

T(n) = T(n/2) + T(n/2) + Cn

where the two T(n/2) terms are the time taken by the left and right sublists to get
sorted, and Cn is the time for combining the two sorted sublists. Solving this
recurrence gives

T(n) = O(n log2 n)

The average and worst case time complexity of merge sort is O(n log2 n).

Example 1: Consider the elements 70, 20, 30, 40, 10, 50, 60. We repeatedly split
this list into two sublists, sort each sublist, and merge the sorted sublists to obtain
the sorted list 10, 20, 30, 40, 50, 60, 70.

5. Quick Sorting - Quick sort is a sorting algorithm that uses the divide and
conquer strategy. In this method division is dynamically carried out. The three
steps of quick sort are as follows:

1. Divide: Split the array into two sub-arrays such that each element in the left
sub-array is less than or equal to the middle element and each element in the
right sub-array is greater than the middle element. The splitting of the array
into two sub-arrays is based on the pivot element: all the elements that are less
than the pivot should be in the left sub-array and all the elements that are
greater than the pivot should be in the right sub-array.
2. Conquer: Recursively sort the two sub-arrays.
3. Combine: Combine all the sorted elements into a group to form the list of
sorted elements.

In merge sort the division of the array is based on the positions of the array
elements, but in quick sort this division is based on the actual values of the
elements. Consider an array a[i] where i ranges from 0 to n-1; then we can
formulate the division of the array elements as

A[0], ..., A[m-1], A[m], A[m+1], ..., A[n-1]

where the elements A[0] to A[m-1] are smaller than A[m] (the mid element) and the
elements A[m+1] to A[n-1] are greater than A[m].

Let us understand this algorithm with the help of an example.

Step 1:

We will now split the array in two parts. The left sublist will contain the elements
less than Pivot (i.e. 50) and right sublist contains elements greater than pivot.

Step 2:

We will increment i. If A[i] <= Pivot, we will continue to increment it until the
element pointed by i is greater than A[Low].

Step 3:

increment i as A[i] <= A[Low].

Step 4:

As A[i] > A [Low], we will stop incrementing i.

Step 5:

As A[j] > Pivot (i.e. 70 > 50). We will decrement j. We will continue to decrement j
until the element pointed by j is less than A [Low].

Step 6:

Now we cannot decrement j because 40 < 50. Hence we will swap A[i] and A[j] i.e. 90
and 40.

Step 7:

As A[i] is less than A[Low] and A[j] is greater than A[Low], we will continue
incrementing i and decrementing j until these conditions become false.

Step 8:

We will stop incrementing i and stop decrementing j. As i is smaller than j, we will
swap 80 and 20.

Step 9:

As A[i] < A[Low] and A[j] > A[Low], we will continue incrementing i and decrementing
j.

Step 10:

As A[j] < A[Low] and j has crossed i that is j < i, we will swap A[low] and A[j].

Step 11:

We will now start sorting left sublist, assuming the first element of left sublist as
pivot element. Thus now new pivot = 20.

Step 12:

Now we will set i and j pointer and then we will start comparing A[i] with A[Low] or
A[Pivot]. Similarly comparison with A[j] and A[Pivot].

Step 13:

As A[i] > A[Pivot], we stop incrementing i. Now as A[j] > A[Pivot], we decrement j.

Step 14:

Now j cannot be decremented further because 10 < 20. Hence we will swap A[i] and
A[j].

Step 15:

As A[i] < A[Low] increment i.

Step 16:

Now as A[i] > A[Low], i.e. A[i] > A[Pivot], we decrement j.

Step 17:

As A[j] < A[Low] we cannot decrement j now. We will now swap A[Low] and A[j], as j
has crossed i (i > j).

Step 18:

As there is only one element in left sublist hence we will sort right sublist.

Step 19:

As left sublist is sorted completely we will sort right sublist, assuming first element
of right sublist as pivot.

Step 20:

As A[i] > A[Pivot], we stop incrementing i. Similarly, as A[j] < A[Pivot], we stop
decrementing j. Swap A[i] and A[j].

Step 21:

As A[i] < A[Pivot]. increment i.

Step 22:

As A[i] > A[Pivot], decrement j.

Step 23:

Now swap A[pivot] and A[j]

Step 24:

The left sublist now contains 70 and the right sublist contains only 90. We cannot
subdivide the list further.

Hence list is

This is a sorted list.

The partition function is called to arrange the elements such that all the elements
that are less than pivot are at the left side of pivot and all the elements that are
greater than pivot are all at the right of pivot. In other words pivot is occupying its
proper position and the partitioned list is obtained in an ordered manner.

Analysis: When the pivot is chosen such that the array gets divided at the middle,
we get the best case time complexity, which for quick sort is O(n log2 n). The worst
case for quick sort occurs when the pivot is the minimum or maximum of all the
elements in the list; this ultimately results in O(n²) time complexity. When the array
elements are randomly distributed we get the average case time complexity, which
is O(n log2 n).

6. Radix Sorting
In this method sorting is done digit by digit, from the ones place to the tens place,
then the hundreds, and so on; thus all the elements get sorted. Example for radix
sort: consider an unsorted array of 8 elements: 45, 37, 05, 09, 06, 11, 18, 27.

Step 1: Sort the elements according to the last digit (ones place) and rearrange
them in that order.

Step 2: Now sort the above array with the help of the second-last digit (tens place).

Since the elements have only two digits, we stop comparing here. The list we now
have is the list of sorted elements. Thus finally the sorted list produced by the radix
sort method is 05 06 09 11 18 27 37 45.
Algorithm:

1. Read the total number of elements in the array.
2. Store the unsorted elements in the array.
3. The procedure is to sort the elements digit by digit.
4. Sort the elements according to the last digit, then the second-last digit, and so
on.
5. The elements are thus sorted up to the most significant digit.
6. Store the sorted elements in the array and print them.
7. Stop.

7. Bucket Sorting
Bucket sort is a sorting technique in which the array is partitioned into buckets.
Each bucket is then sorted individually, using some other sorting algorithm such as
insertion sort.
Algorithm
1. Set up an array of initially empty buckets.
2. Put each element in corresponding bucket.
3. Sort each non empty bucket.
4. Visit the buckets in order and put all the elements into a sequence and print
them.

Example: Sort the element using bucket sort 56, 12, 84, 56, 28, 0, -13, 47, 94,
31, 12, -2
Solution: We will set up an array as follows

Now we will fill up each bucket by corresponding elements

Print the array by visiting each bucket sequentially.

This is the sorted list: -13, -2, 0, 12, 12, 28, 31, 47, 56, 56, 84, 94.

8. Shell Sorting
This method is an improvement over simple insertion sort. In this method the
elements at a fixed distance are compared. The distance is then decremented by
some fixed amount and the comparison is made again. Finally, individual elements
are compared. Consider the following example to understand shell sort.
For example, consider a list of elements: 25, 57, 48, 37, 12, 92, 86, 33.

Step 1: Let the distance be d = 5.
In the first iteration compare
( x[0], x[5] ) ( x[1], x[6] ) ( x[2], x[7] )
(x[3] and x[4] have no partner at this distance.)

Step 2: Initially the distance d was 5; now decrement d by 2, so d = 5 - 2 = 3.
In the second iteration compare
( x[0], x[3] ) ( x[1], x[4] ) ( x[2], x[5] ) ( x[3], x[6] ) ( x[4], x[7] )

Step 3: The distance d was 3; now decrement d by 2, so d = 3 - 2 = 1.
In the third iteration compare
( x[0], x[1] ) ( x[1], x[2] ) ( x[2], x[3] )
( x[3], x[4] ) ( x[4], x[5] ) ( x[5], x[6] ) ( x[6], x[7] )

After the third iteration:
x[0] x[1] x[2] x[3] x[4] x[5] x[6] x[7]
12   25   33   37   48   86   92   57

After the passes above, the elements are finished off with a simple insertion sort,
because simple insertion sort is highly efficient on an almost sorted file. So we get:
x[0] x[1] x[2] x[3] x[4] x[5] x[6] x[7]
12   25   33   37   48   57   86   92

Analysis:

Best Case: The best case for shell sort is when the array is already sorted in the
right order; the number of comparisons is then small, as the inner loop does not
need to do any work and a single comparison is sufficient to skip the inner sort
loop. Hence the best case time complexity of shell sort is O(n log n).

Worst Case and Average Case: The running time of shell sort depends on the
choice of increment sequence. The problem with Shell's original increments is that
pairs of increments are not necessarily relatively prime, so smaller increments can
have little effect. With Shell's increments the worst case time complexity is O(n²).

Comparison of sorting algorithms (time complexity, space complexity and stability):

S.No.  Name       Best Case     Avg. Case     Worst Case    Auxiliary Space  Memory   Is Stable
1      Bubble     Ω(n)          θ(n²)         O(n²)         O(1)             1        Yes
2      Selection  Ω(n²)         θ(n²)         O(n²)         O(1)             1        No
3      Insertion  Ω(n)          θ(n²)         O(n²)         O(1)             1        Yes
4      Shell      Ω(n log2 n)   θ(n²)         O(n²)         O(1)             1        No
5      Merge      Ω(n log2 n)   θ(n log2 n)   O(n log2 n)   O(n)             Depends  Yes
6      Heap       Ω(n log2 n)   θ(n log2 n)   O(n log2 n)   O(1)             1        No
7      Quick      Ω(n log2 n)   θ(n log2 n)   O(n²)         O(log2 n)        -        Depends
8      Radix      Ω(nk)         θ(nk)         O(nk)         O(n+k)           1        Yes
9      Bucket     Ω(n+k)        θ(n+k)        O(n²)         O(n)             1        Yes

Two way Merge Sort


Multiway merge sort is a technique of merging 'm' sorted lists into a single sorted
list. The 2-way merge is a special case of multiway merge sort.

Two-Way Merge Sort - The two-way merge sort makes use of two input tapes and
two output tapes for sorting the records. A tape here is simply storage used to hold
the items. It works in two stages:

Stage 1: Break the records into blocks. Sort individual records with the help of two
input tapes.

Stage 2: Merge the sorted blocks and create a single sorted file with the help of two
output tapes.

Fig. 3 Two-way merge

Let us understand it with the help of an example. Sort the following list of elements
using two way merge sort with M=3 and elements are 20, 47, 15, 8, 9, 4, 40, 30, 12,
17, 11, 56, 28, 35
Solution: As M = 3, we will break the records into groups of 3 and store them on
tape, storing data on alternate tapes.
Stage I: Sorting Phase
1. Read the first 3 items from the list, sort them and store them on tape Tb1:
20, 47, 15 → 15, 20, 47

2. Read the next three records, sort them and store them on tape Tb2:
8, 9, 4 → 4, 8, 9

3. Read the next three records, sort them and store them on tape Tb1:
40, 30, 12 → 12, 30, 40

4. Read the next three records, sort them and store them on tape Tb2:
17, 11, 56 → 11, 17, 56

5. Read the remaining two records, sort them and store them on tape Tb1:
28, 35 → 28, 35

At the end of stage 1 we have following data

Stage II: Merging of runs

The input tapes Tb1 and Tb2 are merged onto two more output tapes, Ta1 and Ta2.
Finally the sorted data will be on tape Ta1 or Ta2. We read the elements from both
tapes Tb1 and Tb2, compare them, and store them on Ta1 in sorted order.
Comparison is done block by block: compare 4 and 15, so 4 goes to Ta1; then
compare 15 with 8, so 8 comes out; then compare 15 with 9, so 9 goes to Ta1. Now
one block of Tb2 is empty, so the remaining elements of that block of Tb1 go to Ta1.

Similarly, we read the second block from Tb1 and Tb2, compare them, and store the
result in Ta2.

Finally we read the third block from Tb1 and store it in sorted order on Ta1. We do
not merge this block, as there is no third block on Tb2. Hence we get the following
blocks.

Now merge the first and second blocks of Ta1 and Ta2, in the same way as the
blocks were merged above, and store the sorted elements in Tb1.

The last (third) block remains on Ta1.

Now merge Ta1 and Tb1 and store the elements in Ta1. Finally we get the fully
sorted list.

Algorithm:
Step 1: Divide the elements into the blocks of size M. Sort each block and write runs
on disk.
Step 2: Merge two runs
i) Read first value on each of two runs
ii) Compare and sort
iii) Write on output tape.
Step 3: Repeat the step 2 and get longer and longer runs on alternate tapes. Finally
we will get single sorted list.
Analysis: The number of merge passes required is approximately log2(N/M), where
N is the number of records and M is the size of each initial sorted block.
