20MCA023 Algorithm Assignment
Chapter 1
Ans:
Big-O Analysis of Algorithms
We can express algorithmic complexity using the big-O notation. For a problem
of size N:
A constant-time function/method is “order 1” : O(1)
A linear-time function/method is “order N” : O(N)
A quadratic-time function/method is “order N squared” : O(N²)
Definition: Let g and f be functions from the set of natural numbers to itself. The
function f is said to be O(g) (read big-oh of g) if there exist a constant c and a
natural number n₀ such that f(n) ≤ c·g(n) for all n > n₀. For example, f(n) = 3n + 2
is O(n), since 3n + 2 ≤ 4n for all n > 2 (take c = 4 and n₀ = 2).
The general step-wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate all but the highest-order term.
4. Remove all the constant factors.
Basically, this asymptotic notation is used to measure and compare the worst-
case scenarios of algorithms theoretically. For any algorithm, the Big-O analysis
should be straightforward as long as we correctly identify the operations that
are dependent on n, the input size.
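As a small illustration of this procedure, here is a Python sketch (the functions and
their names are purely illustrative); the comments note how the operation count in
terms of n leads to the stated order.

def get_first(items):
    # O(1): a single operation, independent of n = len(items)
    return items[0]

def find_max(items):
    # O(n): the loop body executes n times
    best = items[0]
    for x in items:
        if x > best:
            best = x
    return best

def has_duplicate(items):
    # O(n^2): the nested loops perform about n*(n-1)/2 comparisons;
    # dropping constants and lower-order terms leaves n^2
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False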
Chapter 2
Merge Sort is a Divide and Conquer algorithm. It divides the input array into two
halves, calls itself for the two halves, and then merges the two sorted
halves. The merge() function is used for merging the two halves: merge(arr, l,
m, r) is a key process that assumes that arr[l..m] and arr[m+1..r] are sorted and
merges the two sorted sub-arrays into one.
Example of Merge Sort:
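A minimal Python sketch of the procedure described above, following the
merge(arr, l, m, r) interface from the text (other names are illustrative):

def merge_sort(arr, l, r):
    # Sort arr[l..r] in place: split at the midpoint, recurse, then merge.
    if l >= r:
        return
    m = (l + r) // 2
    merge_sort(arr, l, m)
    merge_sort(arr, m + 1, r)
    merge(arr, l, m, r)

def merge(arr, l, m, r):
    # Merge the sorted halves arr[l..m] and arr[m+1..r] into one sorted run.
    left, right = arr[l:m + 1], arr[m + 1:r + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        arr[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        arr[k] = right[j]
        j += 1
        k += 1

For example, with data = [12, 11, 13, 5, 6, 7], calling merge_sort(data, 0, len(data) - 1)
leaves data as [5, 6, 7, 11, 12, 13].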
2. Write a note on Binary Search.
Ans: Binary search is a fast search algorithm with run-time complexity of O(log n). This
search algorithm works on the principle of divide and conquer. For this algorithm to
work properly, the data collection should be in sorted form.
Binary search looks for a particular item by comparing it with the middle-most item of the
collection. If a match occurs, the index of the item is returned. If the middle item is
greater than the item, the item is searched for in the sub-array to the left of the middle
item. Otherwise, the item is searched for in the sub-array to the right of the middle item.
This process continues on the sub-array until the size of the sub-array reduces to zero.
Example of Binary Search: searching is performed in a sorted array, and x = 4 is the
element to be searched.
1. Set two pointers, low and high, at the lowest and the highest positions respectively.
2. Find the middle element mid of the array, i.e. mid = (low + high) / 2; in this example
arr[mid] = 6.
3. If x == arr[mid], then return mid. Else, compare the element to be searched with arr[mid].
4. If x > arr[mid], compare x with the middle element of the elements on the right side of
mid. This is done by setting low = mid + 1.
5. Else, compare x with the middle element of the elements on the left side of mid. This is
done by setting high = mid - 1.
6. Repeat steps 2 to 5 until low meets high.
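These steps can be written as a short Python sketch (a minimal version; the sample array
in the usage note below is only illustrative):

def binary_search(arr, x):
    # arr must be sorted in ascending order; returns the index of x, or -1.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid          # match found
        elif arr[mid] < x:
            low = mid + 1       # continue in the right half
        else:
            high = mid - 1      # continue in the left half
    return -1                   # x is not present

For instance, binary_search([3, 4, 5, 6, 7, 8, 9], 4) returns index 1.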
Chapter 3
1. Floyd Warshall Algorithm
The Floyd Warshall Algorithm solves the All Pairs Shortest Path problem: find the
shortest distances between every pair of vertices in a given edge-weighted directed
graph.
Example:
Consider the following edge-weighted directed graph:

           10
    (0)--------->(3)
     |           /|\
   5 |            |
     |            | 1
    \|/           |
    (1)--------->(2)
           3

graph[][] = { {0,   5,   INF, 10 },
              {INF, 0,   3,   INF},
              {INF, INF, 0,   1  },
              {INF, INF, INF, 0  } }

Note that the value of graph[i][j] is 0 if i is equal to j,
and graph[i][j] is INF (infinite) if there is no edge from vertex i to j.
Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0
The algorithm picks every vertex k, one by one, as a possible intermediate vertex. For
every pair (i, j) of source and destination vertices, there are then two possible cases:
1) k is not an intermediate vertex on the shortest path from i to j. We keep the
value of dist[i][j] as it is.
2) k is an intermediate vertex on the shortest path from i to j. We update the value
of dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
This update rule reflects the optimal substructure property of the all-pairs shortest
path problem.
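The update rule above can be expressed as a short Python sketch (the function and
variable names are illustrative; INF stands for "no edge"):

INF = float('inf')

def floyd_warshall(graph):
    # graph is an n x n matrix: graph[i][j] is the weight of edge i -> j,
    # 0 on the diagonal, INF where there is no edge.
    n = len(graph)
    dist = [row[:] for row in graph]      # start from the direct distances
    for k in range(n):                    # try every vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

Running it on the graph from the example, i.e.
graph = [[0, 5, INF, 10], [INF, 0, 3, INF], [INF, INF, 0, 1], [INF, INF, INF, 0]],
produces the shortest-distance matrix shown in the output above.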
Chapter 5
NP-Complete Problems
Some well-known NP-complete problems, for which no polynomial-time algorithm is known,
include Boolean satisfiability (SAT), vertex cover, Hamiltonian cycle, and the decision
version of the travelling salesman problem.
Lower Bound Theory
Lower bound theory says that no algorithm can solve a given problem in fewer than L(n)
time units for arbitrary inputs; in particular, every comparison-based sorting algorithm
must take at least L(n) time in the worst case. Here L(n) is the minimum, taken over all
possible algorithms for the problem, of an algorithm's maximum (worst-case) running time.
The proof techniques that are useful for obtaining lower bounds are:
1. Comparison trees:
Comparison trees are a computational model useful for determining lower bounds for
sorting and searching problems (a worked example is given after this list).
2. Oracles and adversary arguments:
One of the important techniques for obtaining lower bounds consists of making use of
an oracle, i.e. an adversary that answers the algorithm's queries in the way that
forces it to do the most work.
3. Lower bounds through reduction:
This is a very important lower-bound technique. It calls for reducing a problem whose
lower bound is already known to the given problem, so that the known lower bound
carries over to it.
4. Techniques for algebraic problems:
Substitution and linear independence are two methods used for deriving lower
bounds on algebraic and arithmetic problems. Algebraic problems involve operations
on integers, polynomials, and rational functions.
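As a concrete illustration of the comparison-tree technique (item 1 above), consider
sorting n distinct elements. A comparison tree that sorts them must have at least n!
leaves, one for each possible permutation of the input, and a binary tree of height h
has at most 2^h leaves. Therefore:

    2^h >= n!
    h >= log2(n!) >= (n/2) * log2(n/2), which is Ω(n log n).

Hence every comparison-based sorting algorithm must perform Ω(n log n) comparisons in
the worst case; this is the lower bound L(n) for comparison-based sorting.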