Bca Daa 04
UNIT - 4
DECREASE AND CONQUER
Decrease-and-conquer is an algorithm design paradigm in which a problem is transformed
into a single smaller subproblem, which is then solved. Unlike divide-and-conquer
algorithms, the reduction process here generates only one subproblem.
In other words, "decrease and conquer" is a problem-solving strategy that shrinks the size of
the input at every stage of the solution procedure. It resembles divide-and-conquer in that it
works on smaller subproblems, but it reduces the input to a single smaller instance at each
stage instead of splitting it into several parts.
Finding the maximum or minimum element in an array, finding the closest pair of points in a
set of points, and binary search are a few problems that can be solved using the decrease-and-
conquer strategy.
Since the amount of input data shrinks at each stage, which often lowers the time and space
complexity of the solution, the benefit of decrease-and-conquer is that it frequently produces
efficient algorithms. A poor choice of reduction step can result in an inefficient algorithm, so
it is critical to pick the best technique for shrinking the input.
INSERTION SORT
Insertion sort is one of the simplest sorting algorithms because it sorts one element at a
time. It is not the best sorting algorithm in terms of performance, but it is slightly more
efficient than selection sort and bubble sort in practical scenarios.
It is an intuitive sorting technique.
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j-1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
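The pseudocode above translates directly into a runnable sketch (0-indexed here, since Python lists start at 0):

```python
def insertion_sort(a):
    """Sort the list a in place by inserting each element into the sorted prefix."""
    for j in range(1, len(a)):
        key = a[j]              # element to insert into the sorted prefix a[0 .. j-1]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]     # shift larger elements one position to the right
            i -= 1
        a[i + 1] = key          # drop key into the gap that opened up
    return a
```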
1. We will start by assuming that the very first element of the array is already sorted. We
store the second element in key.
Next, we compare key with the first element: if key is found to be smaller than the first
element, we shift the first element right and place key at the first index. After doing this, we
will notice that the first two elements are sorted.
2. Now, we will move on to the third element and compare it with the elements on its left-
hand side. If it is the smallest, we place the third element at the first index. Else, if it is
greater than the first element and smaller than the second element, we insert it between
them. After doing this, we will have our first three elements in a sorted manner.
3. Similarly, we will sort the rest of the elements and place them in their correct positions.
Consider the following example of an unsorted array that we will sort with the help of the
Insertion Sort algorithm.
Initially,
1st Iteration:
Compare a1 with a0
2nd Iteration:
Set key = 63
3rd Iteration:
Set key = 14
Since a3 is the smallest among all the elements on the left-hand side, place a3 at the
beginning of the array.
4th Iteration:
Set key = 55
Compare a4 with a3, a2, a1 and a0.
5th Iteration:
Set key = 36
Compare a5 with a4, a3, a2, a1 and a0.
Since a5 < a2, we place the elements in their correct positions.
Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array, because only the outer loop runs n times while the
inner loop never shifts any element.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n²), which is incurred when the elements are in jumbled order,
i.e., neither in ascending order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which
occurs when the array is initially sorted in reverse order, so every element must be
shifted past all the elements before it.
TOPOLOGICAL SORTING
Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of vertices such
that for every directed edge u → v, vertex u comes before v in the ordering. Note: topological
sorting for a graph is not possible if the graph is not a DAG.
Topological sorting is mostly used to schedule jobs based on their dependencies. Instruction
scheduling, ordering formula cell evaluation when recomputing formula values in
spreadsheets, logic synthesis, determining the order of compilation tasks in makefiles, data
serialization, and resolving symbol dependencies in linkers are all examples of its
applications in computer science.
• Finding a cycle in a graph: only directed acyclic graphs (DAGs) can be ordered
topologically. A graph containing a cycle has no topological order, so a failed topological
sort detects the cycle.
• Operating system deadlock detection: a deadlock occurs when each process in a set is
waiting for a resource held by another process in the same set. Such a circular wait forms a
cycle in the wait-for graph, which can be detected in the same way.
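One standard way to compute a topological order is Kahn's algorithm, which repeatedly removes vertices of in-degree 0; a minimal sketch (the vertex numbering and edge list used here are illustrative):

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm on a graph with vertices 0..n-1 and directed edges (u, v).
    Returns a topological order, or None if the graph contains a cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(v for v in range(n) if indeg[v] == 0)   # vertices with no prerequisites
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1          # "remove" edge u -> v
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None   # leftover vertices => cycle
```

If the returned order is `None`, the graph is not a DAG, which is exactly the cycle/deadlock detection use noted above.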
DIVIDE AND CONQUER
Divide and Conquer is an algorithmic pattern. In this design, we take a problem on a huge
input, break the input into smaller pieces, solve the problem on each of the small pieces,
and then merge the piecewise solutions into a global solution. This mechanism of solving
the problem is called the Divide & Conquer strategy.
A Divide and Conquer algorithm solves a problem using the following three steps:
1. Divide: break the problem into smaller subproblems of the same type.
2. Conquer: solve the subproblems recursively.
3. Combine: merge the subproblem solutions into a solution for the original problem.
Examples: The following well-known computer algorithms are based on the Divide &
Conquer technique:
• Binary Search: The binary search algorithm is a searching algorithm, also called
half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. After making the comparison, if the values differ, the half
that cannot contain the target is eliminated, and the search continues on the other half. We
again take the middle element of that half and compare it with the target value. The process
keeps repeating until the target value is found. If the remaining half becomes empty, it can
be concluded that the target is not present in the array.
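The halving process described above can be sketched as an iterative loop:

```python
def binary_search(a, target):
    """Search a sorted list a for target; return its index, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:                 # search space is a[lo .. hi]
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1            # target can only be in the right half
        else:
            hi = mid - 1            # target can only be in the left half
    return -1                       # search space became empty: target absent
```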
• Quicksort: It is one of the most efficient sorting algorithms, also known as partition-
exchange sort. It starts by selecting a pivot value from the array and then divides the rest
of the array elements into two sub-arrays. The partition is made by comparing each
element with the pivot value: elements smaller than the pivot go to one side, larger
elements to the other. The two sub-arrays are then sorted recursively.
• Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts
by dividing the array into sub-arrays and then recursively sorts each of them. Once the
sorting is done, it merges them back together.
• Divide and Conquer successfully solves big problems such as the Tower of Hanoi, a
mathematical puzzle. It is challenging to solve complicated problems for which you have
no basic idea, but the divide and conquer approach lessens the effort, since it divides the
main problem into smaller subproblems and solves them recursively. The resulting
algorithms are often much faster than naive alternatives.
• It uses cache memory efficiently, because the small subproblems can be solved entirely
within the cache instead of repeatedly accessing the slower main memory.
• It is more efficient than its counterpart, the brute-force technique.
• Since these algorithms exhibit parallelism, the independent subproblems can be solved
simultaneously by systems that support parallel processing, often without modification.
• Since most of these algorithms are designed using recursion, they necessitate careful
memory management.
• An explicit stack may overuse space.
• It may even crash the system if the recursion depth exceeds the capacity of the call
stack.
1. ALGORITHM MERGE-SORT (A, p, r)
2. if p < r
3.     then q ← ⌊(p + r)/2⌋
4.         MERGE-SORT (A, p, q)
5.         MERGE-SORT (A, q + 1, r)
6.         MERGE (A, p, q, r)
Here we call MERGE-SORT (A, 1, A.length) to sort the complete array.
As you can see in the image given below, the merge sort algorithm recursively divides the
array into halves until the base condition is met, where we are left with only one element in
the array. Then the merge function picks up the sorted sub-arrays and merges them back to
sort the entire array.
ALGORITHM MERGE (A, p, q, r)
1. n1 ← q − p + 1
2. n2 ← r − q
3. create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
4. for i ← 1 to n1
5.     do L[i] ← A[p + i − 1]
6. for j ← 1 to n2
7.     do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13.     do if L[i] ≤ R[j]
14.         then A[k] ← L[i]
15.             i ← i + 1
16.         else A[k] ← R[j]
17.             j ← j + 1
18. END
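The two pseudocode routines above correspond to this runnable sketch (0-indexed, with `float('inf')` standing in for the ∞ sentinels):

```python
def merge_sort(a, p=0, r=None):
    """Recursively sort a[p .. r] in place, mirroring MERGE-SORT(A, p, r)."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)         # sort the left half
        merge_sort(a, q + 1, r)     # sort the right half
        merge(a, p, q, r)           # merge the two sorted halves
    return a

def merge(a, p, q, r):
    """Merge the sorted runs a[p .. q] and a[q+1 .. r], mirroring MERGE(A, p, q, r)."""
    left = a[p:q + 1] + [float('inf')]       # sentinel so the run never "runs out"
    right = a[q + 1:r + 1] + [float('inf')]
    i = j = 0
    for k in range(p, r + 1):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
```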
The merge step of Merge Sort
Let T(n) be the total time taken by the Merge Sort algorithm on n elements. Sorting each
half takes T(n/2), and merging the two halves takes time proportional to n, so
T(n) = 2T(n/2) + cn ... equation 1
Expanding equation 1 repeatedly gives T(n) = 4T(n/4) + 2cn = 8T(n/8) + 3cn = ... =
2^k · T(n/2^k) + k·cn. The recursion stops when n/2^k = 1, i.e. k = log₂ n, giving
T(n) = cn + cn·log₂ n = O(n log n).
• Best Case Complexity: The merge sort algorithm has a best-case time complexity of
O(n log n), even for an already sorted array.
• Average Case Complexity: The average-case time complexity for the merge sort algorithm
is O(n log n), which happens when the elements are in jumbled order, i.e., neither in
ascending order nor in descending order.
• Worst Case Complexity: The worst-case time complexity is also O(n log n), which occurs
even when sorting a reverse-ordered array into ascending order, since merge sort performs
the same splitting and merging regardless of the input order.
QUICK SORT:
Divide: Rearrange the elements and split the array into two sub-arrays with a pivot element
in between, such that each element in the left sub-array is less than or equal to the pivot
and each element in the right sub-array is greater than the pivot.
Partition Algorithm:
Consider the array 44 33 11 55 77 90 40 60 99 22 88. Let 44 be the pivot element. Scanning
from right to left, the element must be smaller than 44 to swap. As 22 is smaller than 44,
swap them.
22 33 11 55 77 90 40 60 99 44 88
Now scan from left to right, comparing with 44; the element must be greater than 44 to
swap. As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88
Recursively repeating steps 1 & 2, until the two scans meet at the pivot, we get two lists:
one to the left of pivot element 44 and one to its right.
22 33 11 40 77 90 44 60 99 55 88
22 33 11 40 44 90 77 60 99 55 88
Now, the elements on the right side and left side of 44 are greater than and smaller than 44,
respectively.
These sublists are then sorted by the same process as above, and the two sorted sublists sit
side by side around the pivot.
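The walkthrough above scans inward from both ends around the pivot; an equivalent and simpler way to partition is the Lomuto scheme (last element as pivot), sketched here:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort of a[lo .. hi] using Lomuto partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)     # sort the elements smaller than the pivot
        quicksort(a, p + 1, hi)     # sort the elements greater than the pivot
    return a

def partition(a, lo, hi):
    """Place a[hi] (the pivot) in its final position and return that index."""
    pivot = a[hi]
    i = lo - 1                      # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]   # pivot lands between the two regions
    return i + 1
```

After each partition call, exactly as in the walkthrough, everything left of the pivot is smaller than it and everything right of it is larger.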
Merging Sublists:
TIME COMPLEXITY:
Quick sort runs in O(n log n) time in the best and average cases, when the pivot splits the
array into roughly equal halves, and in O(n²) time in the worst case, when the pivot is
repeatedly the smallest or largest element (for example, an already sorted array with the
last element chosen as pivot).
TREE TRAVERSALS:
Traversal is a process of visiting all the nodes of a tree, and possibly printing their values
too. Because all nodes are connected via edges (links), we always start from the root (head)
node; that is, we cannot randomly access a node in a tree. There are three ways in which we
traverse a tree −
• In-order Traversal
• Pre-order Traversal
• Post-order Traversal
Generally, we traverse a tree to search or locate a given item or key in the tree or to print all the
values it contains.
In-order Traversal
• In this traversal method, the left subtree is visited first, then the root, and later the right
subtree. We should always remember that every node may represent a subtree itself.
• If a binary tree is traversed in-order, the output will produce the key values sorted in
ascending order.
The output of in-order traversal of this tree will be −
D→B→E→A→F→C→G
Algorithm
Until all nodes are traversed −
Step 1 − Recursively traverse the left subtree.
Step 2 − Visit the root node.
Step 3 − Recursively traverse the right subtree.
Pre-order Traversal
• In this traversal method, the root node is visited first, then the left subtree and finally the
right subtree.
We start from A, and following pre-order traversal, we first visit A itself and then move to its
left subtree B. B is also traversed pre-order. The process goes on until all the nodes are visited.
The output of pre-order traversal of this tree will be −
A→B→D→E→C→F→G
Algorithm
Until all nodes are traversed −
Step 1 − Visit the root node.
Step 2 − Recursively traverse the left subtree.
Step 3 − Recursively traverse the right subtree.
Post-order Traversal
• In this traversal method, the root node is visited last, hence the name. First we traverse the
left subtree, then the right subtree, and finally the root node.
We start from A, and following post-order traversal, we first visit the left subtree B. B is also
traversed post-order. The process goes on until all the nodes are visited. The output of
post-order traversal of this tree will be −
D→E→B→F→G→C→A
Algorithm
Until all nodes are traversed −
Step 1 − Recursively traverse the left subtree.
Step 2 − Recursively traverse the right subtree.
Step 3 − Visit the root node.
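The three traversal orders can be sketched together; the example tree built below matches the outputs listed above (A with children B and C, B with children D and E, C with children F and G):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    """Left subtree, then root, then right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def preorder(node):
    """Root, then left subtree, then right subtree."""
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def postorder(node):
    """Left subtree, then right subtree, then root."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

# The example tree from the text.
tree = Node('A',
            Node('B', Node('D'), Node('E')),
            Node('C', Node('F'), Node('G')))
```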
A tree is a common non-linear data structure. A tree illustrates a hierarchical structure, in
contrast to data structures such as arrays, stacks, queues, and linked lists, which are linear in
nature. A tree's ordering information is irrelevant. A binary tree is made up of nodes, where
each node holds two pointers representing the parent node's left and right children. Let us
thoroughly comprehend the terms used with trees.
• Root: the highest node in a tree, the one that has no parent node. Every tree has a single
root node.
• Parent Node: a node's parent is the node that comes immediately before it in the tree.
• Child Node: the node that is a node's direct successor is referred to as its child node.
• Siblings: children of the same parent node.
• Edge: an edge is the link connecting a parent node and a child node.
• Leaf: a node without children is referred to as a leaf node. It is a last node of the tree; a
tree may have several leaf nodes.
• Subtree: a node's subtree is the tree that views that specific node as the root node.
• Depth: the depth of a node is the distance between it and the root node.
• Height: the height of a node is the distance between it and the deepest node of its
subtree.
• Height of the tree: the maximum height of any node is referred to as the tree's height.
It is the same as the height of the root node.
• Level: the level of a node is the number of ancestors on the path from the root to that
particular node.
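As a small illustration of the height definition above, a sketch (the TreeNode class here is illustrative, not part of the text):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def height(node):
    """Height of a node: edges on the longest downward path to a leaf.
    By convention, an empty tree has height -1, so a leaf gets height 0."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))
```

For a root A with children B and C, where B has a single child D, the deepest node D is two edges below A, so the tree's height is 2.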