Diploma 5 Sem ADA Notes 3


Learning Outcomes 3

Divide and Conquer Approach



Divide and Conquer is a problem-solving strategy that involves breaking down a complex
problem into smaller, more manageable sub-problems, solving each sub-problem
independently, and then combining the solutions to solve the original problem. The strategy
is based on the idea that it is easier to solve smaller problems than larger ones.

The Divide and Conquer approach typically consists of three steps:

1. Divide: The problem is divided into smaller sub-problems that are similar to the
original problem but of smaller size.
2. Conquer: The sub-problems are solved recursively using the same algorithm until
they become simple enough to be solved directly.
3. Combine: The solutions to the sub-problems are combined to solve the original
problem.
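As a small illustration of the three steps, here is a sketch in Python (the language and the
`dac_max` routine are chosen only for illustration; it is a toy example, not one of the
standard algorithms discussed below) that finds the maximum of an array by divide and
conquer:

```python
def dac_max(a, lo, hi):
    """Largest element of a[lo..hi] by divide and conquer."""
    if lo == hi:                     # conquer directly: one element
        return a[lo]
    mid = (lo + hi) // 2             # divide: split the range in half
    left = dac_max(a, lo, mid)       # conquer each half recursively
    right = dac_max(a, mid + 1, hi)
    return left if left >= right else right   # combine the two answers

print(dac_max([3, 8, 1, 9, 4], 0, 4))   # prints 9
```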

Figure: Divide and Conquer Approach

This approach is used in many algorithms, including Quicksort, Merge Sort, and Strassen’s
Algorithm.

The following are some standard algorithms that follow the Divide and Conquer approach.

 Quicksort is a sorting algorithm. The algorithm picks a pivot element and rearranges
the array elements so that all elements smaller than the picked pivot element move to
the left side of the pivot, and all greater elements move to the right side. Finally, the
algorithm recursively sorts the subarrays on the left and right of the pivot element.
 Merge Sort is also a sorting algorithm. The algorithm divides the array into two
halves, recursively sorts them, and finally merges the two sorted halves.
 Closest Pair of Points The problem is to find the closest pair of points in a set of
points in the x-y plane. The problem can be solved in O(n^2) time by calculating the
distances of every pair of points and comparing the distances to find the minimum.
The Divide and Conquer algorithm solves the problem in O(N log N) time.
 Strassen’s Algorithm is an efficient algorithm to multiply two matrices. A simple
method to multiply two matrices needs 3 nested loops and is O(n^3). Strassen’s
algorithm multiplies two matrices in O(n^2.8074) time.
 Cooley–Tukey Fast Fourier Transform (FFT) algorithm is the most common
algorithm for FFT. It is a divide and conquer algorithm which works in O(N log N)
time.
 Karatsuba algorithm for fast multiplication does the multiplication of two n-digit
numbers in at most O(n^1.59) time (more precisely, O(n^(log2 3))).
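The saving behind Karatsuba (three recursive multiplications instead of four) can be
sketched in Python; this is a simplified version for non-negative integers, not a production
implementation:

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y by divide and conquer."""
    if x < 10 or y < 10:                       # base case: a single digit
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)                        # x = a*p + b
    c, d = divmod(y, p)                        # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba(a + b, c + d) - ac - bd  # = a*d + b*c, one multiply saved
    return ac * p * p + ad_bc * p + bd

print(karatsuba(1234, 5678))   # prints 7006652
```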

How Do Divide and Conquer Algorithms Work?

Here are the steps involved:

1. Divide: Divide the given problem into sub problems using recursion.

2. Conquer: Solve the smaller sub problems recursively. If the sub problem is small
enough, then solve it directly.

3. Combine: Combine the solutions of the sub-problems that are part of the recursive
process to solve the actual problem.

Let us understand this concept with the help of an example.

Here, we will sort an array using the divide and conquer approach (i.e., merge sort).

1. Let the given array be:

2. Divide the array into two halves.


Again, divide each subpart recursively into two halves until you get individual elements.

Divide the array into smaller subparts

3. Now, combine the individual elements in a sorted manner. Here, conquer and
combine steps go side by side.

Combine the subparts

Advantages of Divide and Conquer Technique or Approach

 A difficult problem can be broken into simpler sub-problems and solved more easily.
 Because the problem is divided into independent sub-problems, they can be solved in
parallel on a multiprocessor system.
 It uses cache memory efficiently, since small sub-problems fit in the cache.
 It often reduces the time complexity of the problem.
 Solving difficult problems: Divide and conquer technique is a tool for solving difficult
problems conceptually. e.g. Tower of Hanoi puzzle. It requires a way of breaking the
problem into sub-problems, and solving all of them as individual cases and then
combining sub- problems to the original problem.
 Algorithm efficiency: The divide-and-conquer paradigm often helps in the discovery
of efficient algorithms. It was the key, for example to the Quicksort and Mergesort
algorithms,
 Parallelism: Divide and conquer algorithms map naturally onto multiprocessor
machines, especially shared-memory systems, because distinct sub-problems can be
executed on different processors without the communication of data between
processors having to be planned in advance.
 Memory access: These algorithms naturally make efficient use of memory caches,
since the sub-problems eventually become small enough to be solved entirely in
cache without using the slower main memory. An algorithm designed to use the
cache efficiently without depending on the cache's parameters is called cache-
oblivious.
 Roundoff control: In computations with rounded arithmetic, e.g. with floating point
numbers, a divide-and-conquer algorithm may yield more accurate results than a
superficially equivalent iterative method.

Binary Search Algorithm

Binary search is a fast search algorithm with run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer. For this algorithm to work properly,
the data collection should be in sorted form.

Binary search looks for a particular item by comparing it with the middle item of the
collection. If a match occurs, the index of the item is returned. If the middle item is greater
than the item, the item is searched for in the sub-array to the left of the middle item.
Otherwise, the item is searched for in the sub-array to the right of the middle item. This
process continues on the sub-array until the size of the sub-array reduces to zero.

How Does Binary Search Work?

Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It will be
easy to understand the working of Binary search with an example.

The array in which searching is to be performed is:


Let K = 56 be the element to be searched.

We have to use the below formula to calculate the mid of the array –

mid = (beg + end)/2

So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.

The middle element is compared with K. If they do not match, the search continues in the
left or right half, recomputing beg, end and mid each time, until the element to search is
found. The algorithm then returns the index of the matched element.

The pseudocode of the binary search algorithm looks like this:

binarySearch(arr, x, beg, end)
    if beg > end
        return False
    else
        mid = (beg + end) / 2        // integer division
        if x == arr[mid]
            return mid
        else if x > arr[mid]         // x is on the right side
            return binarySearch(arr, x, mid + 1, end)
        else                         // x is on the left side
            return binarySearch(arr, x, beg, mid - 1)
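The pseudocode above can be translated into a runnable Python sketch. It returns -1 rather
than False when the key is absent (more idiomatic for an index-returning function), and the
sample array used here is purely illustrative:

```python
def binary_search(arr, x, beg, end):
    """Search sorted arr[beg..end] for x; return its index, or -1."""
    if beg > end:
        return -1                         # search space exhausted
    mid = (beg + end) // 2                # integer midpoint
    if x == arr[mid]:
        return mid
    elif x > arr[mid]:                    # x is on the right side
        return binary_search(arr, x, mid + 1, end)
    else:                                 # x is on the left side
        return binary_search(arr, x, beg, mid - 1)

data = [10, 12, 24, 29, 39, 40, 51, 56, 69]
print(binary_search(data, 56, 0, len(data) - 1))   # prints 7
```

Each call halves the search space, which is where the O(log n) running time comes from.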

Algorithm of Binary Search (Step by Step)

1. Divide the search space into two halves by finding the middle index “mid”.
2. Compare the middle element of the search space with the key.
3. If the key is found at middle element, the process is terminated.
4. If the key is not found at the middle element, choose which half will be used as the
next search space.
a. If the key is smaller than the middle element, then the left half is used for the next
search.
b. If the key is larger than the middle element, then the right half is used for the next
search.
5. This process is continued until the key is found or the total search space is exhausted.

Binary Search Complexity:

Time Complexity:

Best Case: O(1)


Average Case: O(log N)
Worst Case: O(log N)
Auxiliary Space: O(1) for the iterative version; if the recursive call stack is considered, the
auxiliary space is O(log N).

Advantages of Binary Search:


 Binary search is faster than linear search, especially for large arrays.
 It is simpler and more predictable than other searching algorithms with a similar time
complexity, such as interpolation search or exponential search.
 Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.

Drawbacks of Binary Search:


 The array should be sorted.
 Binary search requires that the data structure being searched supports fast random
access, such as an array stored in contiguous memory locations.
 Binary search requires that the elements of the array be comparable, meaning that
they must be able to be ordered.

Applications of Binary Search:

 Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the
optimal hyper parameters for a model.
 It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
 It can be used for searching a database.

Merge Sort Algorithm

Merge Sort is one of the most popular sorting algorithms that is based on the principle of
Divide and Conquer Algorithm.

Here, a problem is divided into multiple sub-problems. Each sub-problem is solved
individually. Finally, the sub-problems are combined to form the final solution.

Divide and Conquer Strategy

Using the Divide and Conquer technique, we divide a problem into subproblems. When the
solution to each subproblem is ready, we 'combine' the results from the subproblems to solve
the main problem.

Suppose we had to sort an array A. A subproblem would be to sort a sub-section of this array
starting at index p and ending at index r, denoted as A[p..r]

Divide

If q is the half-way point between p and r, then we can split the subarray A[p..r] into two
arrays A[p..q] and A[q+1..r].

Conquer

In the conquer step, we try to sort both the subarrays A[p..q] and A[q+1..r]. If we haven't yet
reached the base case, we again divide both these subarrays and try to sort them.

Combine

When the conquer step reaches the base case and we get two sorted subarrays A[p..q] and
A[q+1..r] for the array A[p..r], we combine the results by creating a sorted array A[p..r] from
the two sorted subarrays A[p..q] and A[q+1..r].

In Short:
Figure: Divide and Conquer Technique

Algorithm of Merge Sort

Step 1 − If the list has only one element, it is already sorted; return.

Step 2 − Divide the list recursively into two halves until it can no longer be divided.

Step 3 − Merge the smaller lists into a new list in sorted order.

Pseudocode of the Merge Sort algorithm:

MERGE_SORT(arr, beg, end)
    if beg < end
        set mid = (beg + end)/2
        MERGE_SORT(arr, beg, mid)
        MERGE_SORT(arr, mid + 1, end)
        MERGE(arr, beg, mid, end)
    end of if
END MERGE_SORT
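A runnable Python version of this pseudocode follows, with one common way to implement
the MERGE step (copying the two halves into temporary lists); the sample array is only
illustrative:

```python
def merge_sort(arr, beg, end):
    """Sort arr[beg..end] in place, following the pseudocode above."""
    if beg < end:
        mid = (beg + end) // 2
        merge_sort(arr, beg, mid)
        merge_sort(arr, mid + 1, end)
        merge(arr, beg, mid, end)

def merge(arr, beg, mid, end):
    """Merge the sorted halves arr[beg..mid] and arr[mid+1..end]."""
    left = arr[beg:mid + 1]
    right = arr[mid + 1:end + 1]
    i = j = 0
    k = beg
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # <= keeps equal elements stable
            arr[k] = left[i]; i += 1
        else:
            arr[k] = right[j]; j += 1
        k += 1
    arr[k:end + 1] = left[i:] + right[j:]  # copy whatever is left over

a = [14, 33, 27, 10, 35, 19, 42, 44]
merge_sort(a, 0, len(a) - 1)
print(a)   # prints [10, 14, 19, 27, 33, 35, 42, 44]
```

The temporary lists in merge() are what gives merge sort its O(n) auxiliary space.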

Working of Merge Sort Algorithm

For example, to understand merge sort, we take the unsorted array {14, 33, 27, 10, 35, 19,
42, 44}. We know that merge sort first divides the whole array recursively into equal halves
until the atomic values are reached. We see here that an array of 8 items is divided into two
arrays of size 4.

This does not change the sequence of appearance of items in the original. Now we divide
these two arrays into halves.

We further divide these arrays and we achieve atomic value which can no more be divided.

Now, we combine them in exactly the same manner as they were broken down.

We first compare the elements of each list and then combine them into another list in sorted
order. We see that 14 and 33 are already in sorted positions. We compare 27 and 10, and in
the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35,
whereas 42 and 44 are placed sequentially.

In the next iteration of the combining phase, we compare lists of two data values and merge
them into a list of four data values, placing all in sorted order.

After the final merging, the list looks like this – {10, 14, 19, 27, 33, 35, 42, 44}.

Applications of Merge Sort

 Sorting large datasets: Merge sort is particularly well-suited for sorting large datasets
due to its guaranteed worst-case time complexity of O(n log n).
 External sorting: Merge sort is commonly used in external sorting, where the data to
be sorted is too large to fit into memory.
 Custom sorting: Merge sort can be adapted to handle different input distributions,
such as partially sorted, nearly sorted, or completely unsorted data.
 Used for Counting inversions in a List.

Advantages of Merge Sort Algorithm

 Stability: Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.
 Guaranteed worst-case performance: Merge sort has a worst-case time complexity of
O(n log n), which means it performs well even on large datasets.
 Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it can
be easily parallelized to take advantage of multiple processors or threads.

Disadvantages of Merge Sort

 Space complexity: Merge sort requires additional memory to store the merged sub-
arrays during the sorting process.
 Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in applications
where memory usage is a concern.
 Not always optimal for small datasets: For small datasets, Merge sort has a higher
time complexity than some other sorting algorithms, such as insertion sort. This can
result in slower performance for very small datasets.

Time Complexity of Merge Sort Algorithm

Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of merge sort is O(n*logn).

Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of merge
sort is O(n*logn).

Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order, but
its elements are in descending order. The worst-case time complexity of merge sort is
O(n*logn).

Space Complexity of Merge Sort Algorithm

The space complexity of merge sort is O(n). This is because the merge step requires an
auxiliary array to hold the sub-arrays while they are merged.
Quick Sort Algorithm

Quick sort is a highly efficient sorting algorithm and is based on partitioning of array of data
into smaller arrays. A large array is partitioned into two arrays one of which holds values
smaller than the specified value, say pivot, based on which the partition is made and another
array holds values greater than the pivot value.

Pivot element can be any element from the array, it can be the first element, the last element
or any random element.

Quicksort partitions an array and then calls itself recursively twice to sort the two resulting
subarrays. This algorithm is quite efficient for large-sized data sets, as its average-case
complexity is O(n log n); its worst-case complexity, however, is O(n^2).

Figure: Correct Position of Pivot


Algorithm of Quick Sort

Step 1: Select a pivot element

Step 2: Partition the array around the pivot element.

Move all the elements < pivot to the left of the pivot and move all elements >= pivot to the
pivot’s right

Step 3: After Step 2, the pivot element is in its correct position

Step 4: Apply the quicksort recursively on the left partition

Step 5: Apply the quicksort recursively on the right partition

Step 6: Stop the recursion when the base case is reached. The base case is an array of size
zero or one
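The steps above can be sketched in Python. Note that this sketch picks the last element as
the pivot and uses the Lomuto partition scheme, which is one common choice (the worked
example in these notes uses the leftmost element with a pointer-swapping scheme, but the
end result is the same):

```python
def quicksort(arr, low, high):
    """Sort arr[low..high] in place."""
    if low < high:
        p = partition(arr, low, high)   # pivot lands at its final index p
        quicksort(arr, low, p - 1)      # recurse on the left partition
        quicksort(arr, p + 1, high)     # recurse on the right partition

def partition(arr, low, high):
    """Lomuto partition: pivot = last element; return its final index."""
    pivot = arr[high]
    i = low - 1                         # end of the "< pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

a = [24, 9, 29, 14, 19, 27]
quicksort(a, 0, len(a) - 1)
print(a)   # prints [9, 14, 19, 24, 27, 29]
```

After partition() returns, every element left of the pivot is smaller than it and every element
right of it is greater or equal, which is exactly the invariant Step 2 describes.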

Working of Quick Sort Algorithm

Now, let's see the working of the Quicksort Algorithm.

To understand the working of quick sort, let's take an unsorted array. It will make the concept
more clear and understandable.

Let the elements of the array be {24, 9, 29, 14, 19, 27}.

In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] =
24, a[right] = 27 and a[pivot] = 24.

Since, pivot is at left, so algorithm starts from right and move towards left.
Now, a[pivot] < a[right], so algorithm moves forward one position towards left, i.e. –

Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because, a[pivot] > a[right], so, algorithm will swap a[pivot] with a[right], and pivot moves
to right, as –

Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm starts
from left and moves to right.

As a[pivot] > a[left], so algorithm moves one position to right as –


Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm moves
one position to right as –

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap a[pivot] and
a[left], now pivot is at left, i.e. –

Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] = 24,
a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one position to
left, as –
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot]
and a[right], now pivot is at right, i.e. –

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts
from left and move to right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are all pointing to
the same element, which marks the termination of the partitioning procedure.

Elements that are right side of element 24 are greater than it, and the elements that are left
side of element 24 are smaller than it.

Now, in a similar manner, the quick sort algorithm is applied separately to the left and right
sub-arrays. After sorting is done, the array will be {9, 14, 19, 24, 27, 29}.
Time Complexity of Quick Sort Algorithm

 Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is
the middle element or near to the middle element. The best-case time complexity of
quicksort is O(n*logn).

 Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of quicksort is O(n*logn).

 Worst Case Complexity - In quick sort, the worst case occurs when the pivot element
is either the greatest or the smallest element. For example, if the pivot element is
always the last element of the array, the worst case occurs when the given array is
already sorted in ascending or descending order. The worst-case time complexity of
quicksort is O(n^2).

Space Complexity

The space complexity of quicksort is O(log n) on average for the recursion stack, and O(n)
in the worst case.

Advantages of Quick Sort Algorithm

 It is a divide-and-conquer algorithm that makes it easier to solve problems.

 It is efficient on large data sets.

 It has a low overhead, as it only requires a small amount of memory to function.

Disadvantages of Quick Sort Algorithm

 It has a worst-case time complexity of O(n^2), which occurs when the pivot is chosen
poorly.

 It is not a good choice for small data sets.

 It is not a stable sort, meaning that if two elements have the same key, their relative
order will not be preserved in the sorted output in case of quick sort, because here we
are swapping elements according to the pivot’s position (without considering their
original positions).

Application of Quicksort
 Commercial Computing is used in various government and private organizations for
the purpose of sorting various data like sorting files by name/date/price, sorting of
students by their roll no., sorting of account profile by given id, etc.
 Sorting speeds up information searching; since Quicksort is among the fastest
general-purpose sorting algorithms in practice, it is widely used for this purpose.
 It is used everywhere where a stable sort is not needed.
 It is an in-place sort that does not require any extra storage memory.
 Numerical computations and in scientific research, for accuracy in calculations most
of the efficiently developed algorithm uses priority queue and quick sort is used for
sorting.
 Quicksort is a cache-friendly algorithm as it has a good locality of reference when
used for arrays.
 It is used in operational research and event-driven simulation.

Characteristics of quick sort

 In efficient implementations Quick Sort is not a stable sort, meaning that the relative
order of equal sort items is not preserved.
 The overall time complexity of Quick Sort is O(n log n). In the worst case, it makes
O(n^2) comparisons, though this behavior is rare.
 The space complexity of Quick Sort is O(log n) for the recursion stack. Apart from
the stack, it is an in-place sort (i.e. it doesn't require any extra storage).

***
