20MCA023 Algorithm Assignment


Subject: Algorithm Design Assignment

20MCA023

Chapter 1

Q1) What is Time And Space Complexity?


Ans. The time complexity of an algorithm quantifies the amount of time taken by
the algorithm to run as a function of the length of the input. Similarly, the space
complexity of an algorithm quantifies the amount of space or memory taken by
the algorithm to run as a function of the length of the input.
Time and space requirements depend on many factors, such as hardware, operating
system, and processor. However, we do not consider any of these factors when
analyzing the algorithm.
The time complexity of an algorithm represents the amount of time required by the
algorithm to execute to completion. The time requirement can be defined as a
numerical function t(N), where t(N) can be measured as the number of steps,
provided each step takes constant time.
The space complexity of an algorithm has two components:
A fixed part: the space required to store certain data and variables (i.e.,
simple variables and constants, program size, etc.) that does not depend on
the size of the problem.
A variable part: the space required by variables whose size depends entirely
on the size of the problem, for example recursion stack space and dynamically
allocated memory.
The space complexity S(P) of any algorithm P is S(P) = A + Sp(I), where A is
the fixed part and Sp(I) is the variable part, which depends on the instance
characteristic I.
*Algorithm for space complexity:
Sum(P, Q)
Start
R <- P + Q + 10
Stop
Here only three variables (P, Q, R) and one constant are used, so the space
requirement is fixed and the space complexity is O(1).
Example for time complexity:
For example, in the case of addition of two N-bit integers, N steps are taken.
Consequently, the total computational time is t(N) = c * N, where c is the time
consumed for the addition of two bits. Here, we observe that t(N) grows linearly
as the input size increases.
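The linear growth of t(N) can be sketched in code. The following is an illustrative sketch, not part of the assignment; the function name and the bit-list representation (least significant bit first) are my own assumptions:

```python
def add_with_step_count(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), counting one step per bit pair."""
    result, carry, steps = [], 0, 0
    for x, y in zip(a_bits, b_bits):
        total = x + y + carry
        result.append(total % 2)
        carry = total // 2
        steps += 1               # one constant-time step per bit pair
    if carry:
        result.append(carry)
    return result, steps

# t(N) grows linearly: doubling the input length doubles the step count
_, s4 = add_with_step_count([1, 0, 1, 1], [1, 1, 0, 0])   # N = 4 -> 4 steps
_, s8 = add_with_step_count([1] * 8, [0] * 8)             # N = 8 -> 8 steps
```

Counting steps this way makes the constant c irrelevant to the growth rate, which is the point of the t(N) = c * N analysis above.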
2. Write a note on Big-O notation.

Ans:
Big-O Analysis of Algorithms
We can express algorithmic complexity using the big-O notation. For a problem
of size N:
 A constant-time function/method is “order 1”: O(1)
 A linear-time function/method is “order N”: O(N)
 A quadratic-time function/method is “order N squared”: O(N^2)
Definition: Let g and f be functions from the set of natural numbers to itself. The
function f is said to be O(g) (read big-oh of g) if there is a constant c and a
natural number n0 such that f(n) ≤ c·g(n) for all n > n0.
The general step wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations, the algorithm performs in
terms of n.
3. Eliminate all but the highest-order terms.
4. Remove all constant factors.
Basically, this asymptotic notation is used to measure and compare the worst-
case scenarios of algorithms theoretically. For any algorithm, the Big-O analysis
should be straightforward as long as we correctly identify the operations that
are dependent on n, the input size.
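As a small illustration of the step-wise procedure above (the function name and sample data are my own, not from the text), consider counting the operations of a doubly nested loop:

```python
def count_pairs(items):
    """Quadratic-time example: counts equal (i, j) pairs with a double loop,
    also returning how many comparisons were performed."""
    ops = 0
    count = 0
    for i in range(len(items)):
        for j in range(len(items)):
            ops += 1                      # one comparison per inner iteration
            if items[i] == items[j]:
                count += 1
    return count, ops

# For input size n the inner comparison runs n * n times, so the maximum
# number of operations is n^2; dropping constants and lower-order terms
# gives O(n^2), i.e. "order N squared".
_, ops = count_pairs([1, 2, 3, 4])   # n = 4 -> 16 comparisons
```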
Chapter 2

1. Explain Merge Sort with example.


Ans:

Merge Sort is a Divide and Conquer algorithm. It divides the input array into two
halves, calls itself for the two halves, and then merges the two sorted
halves. The merge() function is used for merging two halves. The merge(arr, l,
m, r) is a key process that assumes that arr[l..m] and arr[m+1..r] are sorted and
merges the two sorted sub-arrays into one.
Example of Merge Sort:
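The example figure is not reproduced here; as a substitute, the following is a minimal runnable sketch of the procedure described above. The merge(arr, l, m, r) signature follows the text, while variable names such as left and right are my own:

```python
def merge(arr, l, m, r):
    """Merge the sorted sub-arrays arr[l..m] and arr[m+1..r] in place."""
    left, right = arr[l:m + 1], arr[m + 1:r + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]; i += 1
        else:
            arr[k] = right[j]; j += 1
        k += 1
    while i < len(left):            # copy any leftovers from the left half
        arr[k] = left[i]; i += 1; k += 1
    while j < len(right):           # copy any leftovers from the right half
        arr[k] = right[j]; j += 1; k += 1

def merge_sort(arr, l, r):
    """Recursively split arr[l..r] in half, sort each half, then merge."""
    if l < r:
        m = (l + r) // 2
        merge_sort(arr, l, m)
        merge_sort(arr, m + 1, r)
        merge(arr, l, m, r)

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
# data is now [3, 9, 10, 27, 38, 43, 82]
```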
2. Write a note on Binary Search.
Ans: Binary search is a fast search algorithm with run-time complexity of O(log n). This
search algorithm works on the principle of divide and conquer. For this algorithm to
work properly, the data collection should be in sorted form.
Binary search looks for a particular item by comparing it with the middlemost item of
the collection. If a match occurs, the index of the item is returned. If the middle item is
greater than the item, the item is searched for in the sub-array to the left of the middle
item. Otherwise, the item is searched for in the sub-array to the right of the middle item.
This process continues on the sub-array until the size of the sub-array reduces
to zero.
Example of Binary Search: Suppose searching is to be performed in a sorted array, and
let x = 4 be the element to be searched.
1. Set two pointers low and high at the lowest and the highest positions respectively.
2. Find the middle element of the array at index mid = (low + high) / 2; here arr[mid] = 6.
3. If x == arr[mid], return mid. Else, compare the element to be searched with arr[mid].
4. If x > arr[mid], compare x with the middle element of the elements on the right side
of mid. This is done by setting low = mid + 1.
5. Else, compare x with the middle element of the elements on the left side of mid. This
is done by setting high = mid - 1.
6. Repeat steps 2 to 5 until low crosses high.
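The procedure above can be sketched as follows. The array [3, 4, 5, 6, 7, 8, 9] is an assumed sample, chosen only because it is consistent with the worked values x = 4 and middle element 6:

```python
def binary_search(arr, x):
    """Return the index of x in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2       # middle index
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            low = mid + 1             # search the right sub-array
        else:
            high = mid - 1            # search the left sub-array
    return -1

arr = [3, 4, 5, 6, 7, 8, 9]           # assumed sample: arr[mid] starts at 6
idx = binary_search(arr, 4)           # → 1
```

The loop halves the search range each iteration, which is where the O(log n) run-time comes from.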
Chapter 3
1. Floyd Warshall Algorithm
Ans: The Floyd Warshall Algorithm solves the All Pairs Shortest Path
problem: find the shortest distances between every pair of
vertices in a given edge-weighted directed graph.
Example:

graph[][] = { {0, 5, INF, 10},

{INF, 0, 3, INF},

{INF, INF, 0, 1},

{INF, INF, INF, 0}}

which represents the following graph

        10
  (0)------->(3)
   |         /|\
 5 |          |
   |          | 1
  \|/         |
  (1)------->(2)
        3
Note that the value of graph[i][j] is 0 if i is equal to j
And graph[i][j] is INF (infinite) if there is no edge from vertex i to j.

Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0

Floyd Warshall Algorithm


We initialize the solution matrix to the input graph matrix as a first step.
Then we update the solution matrix by considering all vertices as
intermediate vertices. The idea is to pick all vertices one by one and update
all shortest paths that include the picked vertex as an intermediate vertex in
the shortest path. When we pick vertex number k as an intermediate vertex,
we have already considered vertices {0, 1, 2, ..., k-1} as intermediate vertices.

For every pair (i, j) of the source and destination vertices respectively, there
are two possible cases.
1) k is not an intermediate vertex in the shortest path from i to j. We keep the
value of dist[i][j] as it is.
2) k is an intermediate vertex in the shortest path from i to j. We update the value
of dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
The following figure shows the above optimal substructure property in the all-
pairs shortest path problem.
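The update rule above can be sketched as follows; the input matrix matches the example graph at the start of this answer (the function and variable names are my own):

```python
INF = float('inf')

def floyd_warshall(graph):
    """All-pairs shortest paths; graph is an adjacency matrix with INF
    where there is no edge and 0 on the diagonal."""
    n = len(graph)
    dist = [row[:] for row in graph]          # initialize from the input matrix
    for k in range(n):                        # k = intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0,   5,   INF, 10],
         [INF, 0,   3,   INF],
         [INF, INF, 0,   1],
         [INF, INF, INF, 0]]
dist = floyd_warshall(graph)
# dist[0] is [0, 5, 8, 9], matching the shortest-distance matrix above
```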

2. Explain optimal binary search tree.


Ans:
A Binary Search Tree (BST) is a tree where the key values are stored in the internal
nodes. The external nodes are null nodes. The keys are ordered lexicographically, i.e.
for each internal node all the keys in the left sub-tree are less than the keys in the
node, and all the keys in the right sub-tree are greater.
When we know the frequency of searching each one of the keys, it is quite easy to
compute the expected cost of accessing each node in the tree. An optimal binary
search tree is a BST, which has minimal expected cost of locating each node.
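The minimal expected cost can be computed with the standard O(n^3) dynamic program; the sketch below is illustrative, and the key frequencies used in the example are my own assumptions:

```python
def optimal_bst_cost(freq):
    """Minimum expected search cost for sorted keys k1 < k2 < ... < kn
    with the given access frequencies (classic O(n^3) dynamic program)."""
    n = len(freq)
    # cost[i][j] = minimal cost of a BST built from keys i..j
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])        # every key sinks one level deeper
            cost[i][j] = min(                 # try each key r as the root
                (cost[i][r - 1] if r > i else 0) +
                (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)
            ) + total
    return cost[0][n - 1]

# three sorted keys with assumed frequencies 34, 8, 50
print(optimal_bst_cost([34, 8, 50]))  # → 142
```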
Chapter 4

Q1) Explain Sum Subset problem with example?

Ans: The Subset-Sum Problem is to find a subset s' of a given set S =
(S1, S2, S3, ..., Sn), where the elements of the set S are n positive integers, in
such a manner that s' ⊆ S and the sum of the elements of subset s' is equal to
some positive integer X.

The Subset-Sum Problem can be solved by using the backtracking approach.


In this approach the implicit tree is a binary tree. The root of the tree
represents the state in which no decision has yet been taken on any input. We
assume that the elements of the given set are arranged in increasing order:

S1 < S2 < S3 < ... < Sn


Example:
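As the worked example is not reproduced here, the following is a minimal sketch of the backtracking approach described above; the sample set and target value are my own illustrative choices:

```python
def subset_sum(S, X):
    """Backtracking search for a subset of S summing to X (None if none exists).
    S is sorted ascending first, which enables the pruning step below."""
    S = sorted(S)
    solution = []

    def backtrack(i, remaining):
        if remaining == 0:
            return True                       # found a subset s'
        if i == len(S) or S[i] > remaining:
            return False                      # prune: later elements only grow
        solution.append(S[i])                 # left branch: include S[i]
        if backtrack(i + 1, remaining - S[i]):
            return True
        solution.pop()                        # right branch: exclude S[i]
        return backtrack(i + 1, remaining)

    return solution if backtrack(0, X) else None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → [2, 3, 4]
```

Each node of the implicit binary tree corresponds to an include/exclude decision on one element, and sorting the set lets the search cut off whole subtrees early.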
Q2) Explain Assignment problem with example?
Ans: The assignment problem is a special type of linear programming problem
which deals with allocating various resources to various activities on a
one-to-one basis. It does so in such a way that the cost or time involved in
the process is minimized and the profit or sale is maximized. Though these
problems can be solved by the simplex method or by the transportation method,
the assignment model gives a simpler approach.
Example:
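Since the example figure is not reproduced here, a brute-force sketch illustrates the one-to-one allocation idea. The 3×3 cost matrix is an illustrative assumption, and real instances would normally use the Hungarian algorithm rather than enumeration:

```python
from itertools import permutations

def min_assignment(cost):
    """Brute-force assignment: try every one-to-one mapping of n workers
    to n jobs and keep the cheapest (only practical for small n)."""
    n = len(cost)
    best_cost, best_perm = float('inf'), None
    for perm in permutations(range(n)):       # perm[i] = job given to worker i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm

# cost[i][j] = assumed cost of assigning worker i to job j
cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
print(min_assignment(cost))  # → (9, (1, 0, 2)): jobs 1, 0, 2 cost 2 + 6 + 1
```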
Chapter 5

1. P-NP and NP-Complete Problem:


Ans:
A language B is NP-complete if it satisfies two conditions
 B is in NP
 Every A in NP is polynomial time reducible to B.
If a language satisfies the second property, but not necessarily the first one, the
language B is known as NP-Hard. Informally, a search problem B is NP-Hard if there
exists some NP-Complete problem A that Turing reduces to B.
A problem that is NP-Hard cannot be solved in polynomial time unless P = NP. If a
problem is proved to be NP-Complete, there is no need to waste time trying to find an
efficient exact algorithm for it. Instead, we can focus on designing approximation
algorithms.

NP-Complete Problems
Following are some NP-Complete problems, for which no polynomial time algorithm is
known.

 Determining whether a graph has a Hamiltonian cycle


 Determining whether a Boolean formula is satisfiable, etc.
To prove TSP is NP-Complete, first we have to prove that TSP belongs to NP. In
TSP, we find a tour and check that the tour contains each vertex once. Then the
total cost of the edges of the tour is calculated. Finally, we check whether the cost
is minimum. This can be completed in polynomial time. Thus TSP belongs to NP.
 Secondly, we have to prove that TSP is NP-Hard. To prove this, one way is to
show that Hamiltonian cycle ≤p TSP (as we know that the Hamiltonian cycle
problem is NP-Complete).
 Assume G = (V, E) to be an instance of Hamiltonian cycle.
 Hence, an instance of TSP is constructed. We create the complete graph G' =
(V, E'), where
 E′ = {(i, j) : i, j ∈ V and i ≠ j}
 Thus, the cost function is defined as follows −
 t(i, j) = 0 if (i, j) ∈ E, and 1 otherwise
 Now, suppose that a Hamiltonian cycle h exists in G. It is clear that the cost of
each edge in h is 0 in G' as each edge belongs to E. Therefore, h has a cost
of 0 in G'. Thus, if graph G has a Hamiltonian cycle, then graph G' has a tour
of 0 cost.
 Conversely, we assume that G' has a tour h' of cost at most 0. The cost of edges
in E' are 0 and 1 by definition. Hence, each edge must have a cost of 0 as the
cost of h' is 0. We therefore conclude that h' contains only edges in E.
 We have thus proven that G has a Hamiltonian cycle if and only if G′ has a tour
of cost at most 0. Hence, TSP is NP-Complete.
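The construction in the reduction above can be sketched directly; the function and variable names below are my own, and the 4-cycle used to exercise it is an illustrative assumption:

```python
def hamiltonian_to_tsp(V, E):
    """Build the TSP cost function t for G' = (V, E') from a Hamiltonian-cycle
    instance G = (V, E): complete graph on V, edge cost 0 if the edge is in E,
    else 1."""
    edge_set = {frozenset(e) for e in E}
    t = {}
    for i in V:
        for j in V:
            if i != j:
                t[(i, j)] = 0 if frozenset((i, j)) in edge_set else 1
    return t

# A 4-cycle has a Hamiltonian cycle, so G' must have a tour of cost 0
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
t = hamiltonian_to_tsp(V, E)
tour = [0, 1, 2, 3, 0]
cost = sum(t[(tour[k], tour[k + 1])] for k in range(len(tour) - 1))  # → 0
```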

2. Lower bound arguments in the analysis of algorithms:


A lower bound L(n) is a property of the specific problem (e.g., the sorting
problem or matrix multiplication), not of any particular algorithm solving that problem.

Lower bound theory says that no algorithm can do the job in fewer than L(n) time
units for arbitrary inputs; for example, every comparison-based sorting algorithm
must take at least L(n) time in the worst case.

L(n) is the minimum, over all possible algorithms, of the maximum complexity.

The proof techniques that are useful for obtaining lower bounds are:

1. Comparison trees:
Comparison trees are the computational model useful for determining lower bounds
for sorting and searching problems.
2. Oracles and adversary arguments:
One of the important techniques for obtaining lower bounds consists of
making use of an oracle together with an adversary argument.
3. Lower bounds through reduction:
This is a very important lower-bound technique: the given problem is related,
by reduction, to a problem for which a lower bound is already known.
4. Techniques for the algebraic problem:
Substitution and linear independence are two methods used for deriving lower
bounds on algebraic and arithmetic problems. Algebraic problems involve
operations on integers, polynomials, and rational functions.
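As a small illustration of the comparison-tree technique listed above (the function name is my own), the information-theoretic bound for comparison sorting can be computed directly: a comparison tree must have at least n! leaves, so its height is at least ⌈log2(n!)⌉.

```python
import math

def comparison_lower_bound(n):
    """Minimum worst-case comparisons for sorting n items:
    a comparison tree with n! leaves has height at least ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (4, 8, 16):
    print(n, comparison_lower_bound(n))   # prints 4 5, 8 16, 16 45
```

This is the L(n) = Ω(n log n) bound that every comparison-based sorting algorithm must obey.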
