Design and Analysis of an Algorithm

UNIT - 4
DECREASE AND CONQUER
Decrease-and-conquer is an algorithm design paradigm in which a problem is transformed
into a smaller subproblem, and that subproblem is solved first. Unlike divide-and-conquer
algorithms, the reduction process here generates a single subproblem.

Decrease and conquer is thus a problem-solving strategy that reduces the size of the
input at every stage of the solution procedure. It is similar to divide-and-conquer in that it
works on smaller instances of the problem, but instead of splitting the input into several
subproblems it shrinks the input to a single smaller instance at each stage.

Finding the maximum or minimum element in an array, finding the closest pair of points in a
set of points, and binary search are a few problems that can be solved using the decrease-and-
conquer strategy.

The benefit of decrease-and-conquer is that it frequently produces efficient algorithms, since
the amount of input data handled shrinks at each stage, which in turn reduces the time and
space requirements of the solution. However, a poor choice of reduction step can result in an
ineffective algorithm, so it is important to pick a suitable technique for shrinking the input.
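
As a small illustration of the decrease-by-one idea (the example and function name here are chosen for illustration, not taken from the text above), the maximum of an array can be found by first finding the maximum of the first n - 1 elements and then comparing it with the last element:

def find_max(a, n):
    # Decrease-by-one: the maximum of a[0..n-1] is the larger of
    # the maximum of a[0..n-2] and the last element a[n-1].
    if n == 1:
        return a[0]
    prev_max = find_max(a, n - 1)   # solve the single smaller subproblem
    return prev_max if prev_max > a[n - 1] else a[n - 1]

print(find_max([41, 22, 63, 14, 55, 36], 6))   # prints 63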

INSERTION SORT

Insertion sort is one of the simplest sorting algorithms because it sorts one element at a time.
It is not the best sorting algorithm in terms of asymptotic performance, but in practice it is
slightly more efficient than selection sort and bubble sort.
It is an intuitive sorting technique.

ALGORITHM: INSERTION SORT (A)

1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j - 1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
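
The same procedure can be sketched in Python; note that this version uses 0-based indexing, while the pseudocode above is 1-based:

def insertion_sort(a):
    # a[0..j-1] is already sorted; insert a[j] into its correct place.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements greater than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i = i - 1
        a[i + 1] = key
    return a

print(insertion_sort([41, 22, 63, 14, 55, 36]))   # [14, 22, 36, 41, 55, 63]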


How Insertion Sort Works

1. We start by assuming that the very first element of the array is already sorted. The second
element is stored in the key.

Next, we compare the key with the first element: if the key is smaller than the first element,
we shift the first element one position to the right and place the key at the first index. After
this step, the first two elements are sorted.

2. Now we move on to the third element and compare it with the elements on its left-hand
side. If it is the smallest of the three, we place it at the first index.

Else, if it is greater than the first element but smaller than the second, we place it between
them, at the second index. After this step, the first three elements are in sorted order.

3. Similarly, we sort the rest of the elements and place each of them in its correct position.
Consider the following example of an unsorted array that we will sort with the help of the
insertion sort algorithm.

A = (41, 22, 63, 14, 55, 36)

Initially, the first element 41 is considered to be sorted.

1st Iteration:

Set key = 22.

Compare a1 with a0. Since a0 = 41 > a1 = 22, swap them.

The array becomes (22, 41, 63, 14, 55, 36).


2nd Iteration:

Set key = 63.

Compare a2 with a1 and a0. Since a2 > a1 > a0, keep the array as it is.

The array remains (22, 41, 63, 14, 55, 36).

3rd Iteration:

Set key = 14.

Compare a3 with a2, a1 and a0. Since a3 is the smallest among all the elements on its
left-hand side, shift them to the right and place a3 at the beginning of the array.

The array becomes (14, 22, 41, 63, 55, 36).

4th Iteration:

Set key = 55.

Compare a4 with a3, a2, a1 and a0. As a4 = 55 < a3 = 63, swap them; 55 is already greater
than 41, so no further shifting is needed.

The array becomes (14, 22, 41, 55, 63, 36).


5th Iteration:

Set key = 36.

Compare a5 with a4, a3, a2, a1 and a0. Since 36 is smaller than 63, 55 and 41 but greater
than 22, shift 41, 55 and 63 one position to the right and place 36 after 22.

The array becomes (14, 22, 36, 41, 55, 63).

Hence the array is arranged in ascending order, and no more passes are required.

TIME COMPLEXITY OF INSERTION SORT

The insertion sort algorithm has a worst-case time complexity of O(n²).

Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array, because the outer loop runs n times while the inner
loop never shifts any element.
o Average Case Complexity: The average-case time complexity of insertion sort is O(n²),
which occurs when the elements are in jumbled order, i.e., neither in ascending order
nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which
occurs when the array is in reverse (descending) order and we want to sort it in
ascending order.

TOPOLOGICAL SORTING

Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of its vertices such
that for every directed edge u → v, vertex u comes before v in the ordering. Note: topological
sorting for a graph is not possible if the graph is not a DAG.

Problems: Refer class notes
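
As a rough sketch of one standard way to compute a topological order (Kahn's algorithm; the example graph below is made up for illustration and is not from the class notes), we repeatedly output a vertex with in-degree zero and remove its outgoing edges:

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each vertex to the list of vertices it points to.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:                 # "remove" the edges leaving v
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(graph):           # leftover vertices mean a cycle
        raise ValueError("graph is not a DAG, no topological order exists")
    return order

# Edge u -> v means u must come before v in the ordering.
print(topological_sort({'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}))
# One valid order: ['A', 'B', 'C', 'D']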

Advantages of Topological Sorting

Topological sorting is mostly used to schedule jobs based on their dependencies. Instruction
scheduling, ordering formula cell evaluation when recomputing formula values in
spreadsheets, logic synthesis, determining the order of compilation tasks to perform in
makefiles, data serialization, and resolving symbol dependencies in linkers are all examples of
applications of this kind in computer science.

• Finding a cycle in a graph: Only directed acyclic graphs (DAGs) can be ordered topologically;
it is impossible to topologically order a graph that contains a cycle, so a failed topological
sort indicates a cycle.
• Operating system deadlock detection: A deadlock occurs when one process is waiting
while another holds the requested resource; a cycle in the resource wait-for graph indicates
such a deadlock.


• Dependency resolution: Topological sorting is very helpful in dependency resolution,
since it processes every item only after all the items it depends on.
• Sentence ordering: Suppose we have a set of n documents D = {d1, d2, ..., dn}, where
document di contains vi sentences (vi >= 1 for every i). For a document, take a random
order o = [o1, ..., ovi] of its vi sentences {So1, So2, ..., Sovi}. The task is to find the correct
order o* = [o*1, ..., o*vi]. A set of constraints Ci represents the relative ordering between
every pair of sentences in di, so |Ci| = vi(vi - 1)/2. For example, if a document has three
sentences in the correct order s1 < s2 < s3, then we have the three constraints
{s1 < s2, s1 < s3, s2 < s3}. The order of the sentences can be represented using a DAG:
the sentences (Si) are the vertices, and the edges represent the ordering between
sentences. For example, a directed edge from S1 to S2 means S1 must come before S2.
Topological sort can then produce an ordering of these sentences (sentence ordering).
• Critical path analysis: Critical path analysis is a project management technique used to
figure out how long a project should take and how each activity depends on the others.
An activity may have some preceding activities, and all preceding activities must be
completed before a new activity can begin; a topological order gives a valid sequence in
which to schedule them.

DIVIDE AND CONQUER:

Divide and Conquer is an algorithmic pattern. In this design approach, a problem on a large
input is broken into smaller pieces, the problem is solved on each of the small pieces, and the
piecewise solutions are then merged into a global solution. This mechanism of solving the
problem is called the Divide & Conquer strategy.

A Divide and Conquer algorithm solves a problem using the following three steps:

1. Divide: Break the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.


Generally, we follow the divide-and-conquer approach as a three-step process.

Examples: The following well-known computer algorithms are based on the Divide & Conquer approach:

1. Maximum and Minimum Problem
2. Binary Search
3. Sorting (merge sort, quick sort)

Applications of Divide and Conquer Approach:

The following algorithms are based on the concept of the Divide and Conquer technique:

• Binary Search: The binary search algorithm is a searching algorithm, also called
half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. If the values differ, the half that cannot contain the target
is eliminated and the search continues on the other half: we again take the middle element
of that half and compare it with the target. The process keeps repeating until the target
value is found. If the remaining half becomes empty, we can conclude that the target is not
present in the array. (A short sketch in code is given after this list.)
• Quicksort: Quicksort is one of the most efficient sorting algorithms in practice and is also
known as partition-exchange sort. It starts by selecting a pivot value from the array and then
divides the rest of the elements into two sub-arrays. The partition is made by comparing
each element with the pivot value, placing elements smaller than the pivot in one sub-array
and larger elements in the other, and the sub-arrays are then sorted recursively.
• Merge Sort: Merge sort is a comparison-based sorting algorithm. It starts by dividing the
array into sub-arrays and then recursively sorts each of them. Once the sub-arrays are
sorted, it merges them back together.
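
As referenced in the Binary Search item above, here is a minimal Python sketch of binary search (assuming the input list is already sorted in ascending order):

def binary_search(a, target):
    # Returns the index of target in the sorted list a, or -1 if it is absent.
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1        # discard the left half
        else:
            high = mid - 1       # discard the right half
    return -1                    # the remaining half became empty

print(binary_search([11, 22, 33, 40, 44, 55], 40))   # prints 3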

Advantages of Divide and Conquer

• Divide and Conquer successfully solves some classic problems, such as the Tower of Hanoi,
a mathematical puzzle. It is challenging to solve a complicated problem directly, but the
divide and conquer approach lessens the effort by dividing the main problem into smaller
parts and solving them recursively; the resulting algorithms are often much faster than
straightforward alternatives.
• It can use cache memory efficiently, because small subproblems may be solved entirely
within the cache instead of repeatedly accessing the slower main memory.
• It is often more efficient than its counterpart, the brute force technique.
• Since the subproblems are independent, these algorithms naturally exhibit parallelism and
can be handled by systems with parallel processing without major modification.


Disadvantages of Divide and Conquer

• Since most divide and conquer algorithms are designed using recursion, they demand
careful memory (call stack) management.
• An explicit stack, if used instead of recursion, may consume extra space.
• Very deep recursion can exceed the stack space available to the program and crash it.

MERGE SORT (REFER THIS OR CLASS NOTES)

1. ALGORITHM: MERGE-SORT (A, p, r)
2. if p < r
3.     then q ← ⌊(p + r)/2⌋
4.         MERGE-SORT (A, p, q)
5.         MERGE-SORT (A, q + 1, r)
6.         MERGE (A, p, q, r)

Here we call MERGE-SORT(A, 1, A.length) to sort the complete array.

The merge sort algorithm recursively divides the array into halves until the base condition is
met, where we are left with only one element in the sub-array. Then the merge function picks
up the sorted sub-arrays and merges them back together to sort the entire array.

Figure: the dividing (splitting) procedure of merge sort.

1. FUNCTION: MERGE (A, p, q, r)
2.  n1 ← q - p + 1
3.  n2 ← r - q
4.  create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
5.  for i ← 1 to n1
6.      do L[i] ← A[p + i - 1]
7.  for j ← 1 to n2
8.      do R[j] ← A[q + j]
9.  L[n1 + 1] ← ∞
10. R[n2 + 1] ← ∞
11. i ← 1
12. j ← 1
13. for k ← p to r
14.     do if L[i] ≤ R[j]
15.         then A[k] ← L[i]
16.              i ← i + 1
17.         else A[k] ← R[j]
18.              j ← j + 1
19. END
The merge step of Merge Sort
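
The same algorithm can be sketched compactly in Python; unlike the pseudocode above, this version allocates new lists while merging instead of using the ∞ sentinels:

def merge_sort(a):
    if len(a) <= 1:                  # base case: one element is already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # recursively sort the two halves
    right = merge_sort(a[mid:])
    return merge(left, right)

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])          # copy whatever remains of either half
    result.extend(right[j:])
    return result

print(merge_sort([41, 22, 63, 14, 55, 36]))   # [14, 22, 36, 41, 55, 63]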

Analysis of Merge Sort:

Let T(n) be the total time taken by the Merge Sort algorithm on an input of size n.

• Sorting the two halves takes at most 2T(n/2) time.

• When we merge the two sorted lists, we make at most n - 1 comparisons, because once one
list is exhausted the remaining elements are simply copied into the combined list without
further comparison.

Thus, the recurrence relation is

T(n) = 2T(n/2) + n - 1

We ignore the '-1', since each remaining element still takes constant time to be copied into
the merged list, so

T(n) = 2T(n/2) + n        ... (1)

Expanding the recurrence by repeated substitution:

T(n) = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
T(n) = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
...
T(n) = 2^k T(n/2^k) + k·n

From the stopping condition, the recursion stops when n/2^k = 1, i.e., when k = log2 n.
Taking T(1) as a constant, this gives

T(n) = n·T(1) + n·log2 n = O(n log n)

• Best Case Complexity: The merge sort algorithm has a best-case time complexity of
O(n log n), even for an already sorted array.
• Average Case Complexity: The average-case time complexity of merge sort is also
O(n log n), which occurs when the elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
• Worst Case Complexity: The worst-case time complexity is likewise O(n log n), which
occurs, for example, when an array given in descending order is sorted into ascending order.


QUICK SORT:

Divide: Rearrange the elements to split the array into two sub-arrays around a pivot element,
such that each element in the left sub-array is less than or equal to the pivot and each element
in the right sub-array is greater than the pivot.

Conquer: Recursively sort the two sub-arrays.

Combine: The sub-arrays are sorted in place, so no extra work is needed to combine them.

Algorithm:

1. QUICKSORT (array A, int m, int n)
2. if (n > m)
3.     then
4.         i ← a random index from [m, n]
5.         swap A[i] with A[m]
6.         o ← PARTITION (A, m, n)
7.         QUICKSORT (A, m, o - 1)
8.         QUICKSORT (A, o + 1, n)

Partition Algorithm:

The partition algorithm rearranges the elements of the sub-array in place.

1. PARTITION (array A, int m, int n)
2. x ← A[m]
3. o ← m
4. for p ← m + 1 to n
5.     do if (A[p] < x)
6.         then o ← o + 1
7.              swap A[o] with A[p]
8. swap A[m] with A[o]
9. return o

Figure: execution trace of the partition algorithm.
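
The pseudocode above can be sketched in Python as follows (same random-pivot scheme and the same partition method; treat this as an illustration, not a tuned implementation):

import random

def quicksort(a, m, n):
    if n > m:
        i = random.randint(m, n)        # choose a random pivot position
        a[i], a[m] = a[m], a[i]         # move the pivot to the front
        o = partition(a, m, n)
        quicksort(a, m, o - 1)          # sort the left sub-array
        quicksort(a, o + 1, n)          # sort the right sub-array

def partition(a, m, n):
    x = a[m]                            # pivot value
    o = m
    for p in range(m + 1, n + 1):
        if a[p] < x:                    # grow the "less than pivot" region
            o += 1
            a[o], a[p] = a[p], a[o]
    a[m], a[o] = a[o], a[m]             # put the pivot in its final position
    return o

data = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
quicksort(data, 0, len(data) - 1)
print(data)   # [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]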


Example of Quick Sort:


44 33 11 55 77 90 40 60 99 22 88

Let 44 be the pivot element, and start scanning from right to left.

Compare 44 with the elements on its right side; if a right-side element is smaller than 44,
swap them. As 22 is smaller than 44, swap 44 and 22:

22 33 11 55 77 90 40 60 99 44 88

Now compare 44 with the elements on its left side; a left-side element must be greater
than 44 to be swapped. As 55 is greater than 44, swap 44 and 55:

22 33 11 44 77 90 40 60 99 55 88

Recursively repeating steps 1 and 2: the next right-to-left scan finds 40, which is smaller
than 44, so 44 and 40 are swapped:

22 33 11 40 77 90 44 60 99 55 88

The next left-to-right scan finds 77, which is greater than 44, so 44 and 77 are swapped:

22 33 11 40 44 90 77 60 99 55 88

Now every element on the left side of 44 is smaller than 44 and every element on the right
side is greater than 44, so the pivot 44 is in its final position. This gives two sublists, one
on the left of the pivot and one on the right.

These sublists are sorted by the same process, and the sorted sublists placed side by side
with the pivot in between give the fully sorted array.


Merging Sublists: Sorting the two sublists recursively and placing them on either side of the pivot 44 gives the final sorted array: 11 22 33 40 44 55 60 77 88 90 99.

TIME COMPLEXITY: Quick sort takes O(n log n) time in the best and average cases and O(n²) time in the worst case, which occurs when the chosen pivot is always the smallest or largest remaining element. Picking the pivot at random, as in the algorithm above, makes the worst case very unlikely in practice.

BINARY SEARCH: Binary search works on a sorted array by repeatedly comparing the target value with the middle element and discarding the half that cannot contain the target, as described under the applications of divide and conquer above. Each comparison halves the search space, so the recurrence is T(n) = T(n/2) + 1, which gives a time complexity of O(log n).

BINARY TREE TRAVERSAL

Traversal is a process of visiting all the nodes of a tree, possibly printing their values as well.
Because all nodes are connected via edges (links), we always start from the root (head) node;
that is, we cannot randomly access a node in a tree. There are three ways in which we traverse a tree −

• In-order Traversal
• Pre-order Traversal
• Post-order Traversal

Generally, we traverse a tree to search or locate a given item or key in the tree or to print all the
values it contains.
In-order Traversal
• In this traversal method, the left subtree is visited first, then the root, and finally the right
subtree. We should always remember that every node may itself represent a subtree.
• If a binary search tree is traversed in-order, the output will produce sorted key values in
ascending order.


Consider a tree whose root A has left child B and right child C, where B has children D and E,
and C has children F and G. We start from A and, following in-order traversal, we move to its
left subtree B. B is also traversed in-order. The process goes on until all the nodes are visited.
The output of in-order traversal of this tree will be −

D→B→E→A→F→C→G

Algorithm

Until all nodes are traversed −

Step 1 − Recursively traverse left subtree.

Step 2 − Visit root node.

Step 3 − Recursively traverse right subtree.

Pre-order Traversal

• In this traversal method, the root node is visited first, then the left subtree and finally the
right subtree.

We start from A, and following pre-order traversal, we first visit A itself and then move to its
left subtree B. B is also traversed pre-order. The process goes on until all the nodes are visited.
The output of pre-order traversal of this tree will be −
A→B→D→E→C→F→G

Algorithm

Until all nodes are traversed −

Step 1 − Visit root node.

Step 2 − Recursively traverse left subtree.

Step 3 − Recursively traverse right subtree.

Post-order Traversal

• In this traversal method, the root node is visited last, hence the name. First we traverse the
left subtree, then the right subtree, and finally the root node.


We start from A and, following post-order traversal, we first visit the left subtree B. B is also
traversed post-order. The process goes on until all the nodes are visited. The output of
post-order traversal of this tree will be −

D→E→B→F→G→C→A

Algorithm

Until all nodes are traversed −

Step 1 − Recursively traverse left subtree.

Step 2 − Recursively traverse right subtree.

Step 3 − Visit root node.
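
The three traversals can be sketched in Python on the tree used in the examples above (root A with children B and C, where B has children D, E and C has children F, G):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):                       # left subtree, root, right subtree
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def preorder(node):                      # root, left subtree, right subtree
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def postorder(node):                     # left subtree, right subtree, root
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

root = Node('A', Node('B', Node('D'), Node('E')), Node('C', Node('F'), Node('G')))
print(inorder(root))     # ['D', 'B', 'E', 'A', 'F', 'C', 'G']
print(preorder(root))    # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
print(postorder(root))   # ['D', 'E', 'B', 'F', 'G', 'C', 'A']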

Properties of Binary Tree

A tree is a common non-linear data structure. In contrast to linear data structures such as
arrays, stacks, queues, and linked lists, a tree illustrates a hierarchical structure, and its
elements have no linear ordering. A binary tree is made up of nodes, and each node carries
two pointers that represent its left and right children. Let us thoroughly go through the terms
used with trees.

• Root: The topmost node of a tree, which has no parent node, is the root of the tree. Every
tree has a single root node.
• Parent Node: A node's parent is the node that comes immediately before it in the tree.
• Child Node: A node that is the direct successor of another node is referred to as that node's
child node.
• Siblings: Children of the same parent node are siblings.
• Edge: An edge is the link connecting a parent node to a child node.
• Leaf: A node without children is referred to as a leaf node. It is a terminal node of the tree,
and a tree may have several leaf nodes.
• Subtree: A node's subtree is the tree that has that specific node as its root.
• Depth: The depth of a node is the distance between it and the root node.
• Height: The height of a node is the distance between it and the deepest node in its subtree.
• Tree height: The maximum height of any node is referred to as the tree's height; it is the
same as the height of the root node.
• Level: The level of a node is the number of parent (ancestor) nodes on the path from the
root to that node.


• Node degree: A node's degree is the number of children it has.
• NULL links: A binary tree with N nodes has (N + 1) NULL links (empty child pointers),
where N is the total number of nodes in the tree.
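
As a small sketch of the height and NULL-link properties above (conventions assumed here: an empty subtree has height -1, so a single leaf has height 0), using the same example tree as in the traversal section:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def height(node):
    # Height of an empty subtree is -1, so a single leaf has height 0.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def count_null_links(node):
    # Every missing child pointer contributes one NULL link.
    if node is None:
        return 1
    return count_null_links(node.left) + count_null_links(node.right)

root = Node('A', Node('B', Node('D'), Node('E')), Node('C', Node('F'), Node('G')))
print(height(root))            # 2
print(count_null_links(root))  # 8, i.e., N + 1 for N = 7 nodes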
