Ada RGPV Chatgpt Ques


Analysis & Design of Algorithms

Q.1: What are the rules for manipulating Big-Oh expressions, and what are the typical growth rates of algorithms?

Answer:
Rules for manipulating Big-Oh expressions

Rule 1 (Sequential Composition) The worst-case running time of a sequence of C++ statements such as
S1;
S2;
...
Sm;
is O(max(T1(n), T2(n), ..., Tm(n))), where the running time of Si, the ith statement in the sequence, is O(Ti(n)).

Rule 2 (Iteration) The worst-case running time of a C++ for loop such as
for (S1; S2; S3)
S4;
is O(max(T1(n), T2(n) × (l(n)+1), T3(n) × l(n), T4(n) × l(n))), where the running time of statement Si is O(Ti(n)), for i = 1, 2, 3, and 4, and l(n) is the number of iterations executed in the worst case.

Rule 3 (Conditional Execution) The worst-case running time of a C++ if-then-else such as
if (S1)
S2;
else
S3;
is O(max(T1(n), T2(n), T3(n))), where the running time of statement Si is O(Ti(n)), for i = 1, 2, 3.
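
As an illustration, consider applying the three rules to a short C++ function (a hypothetical fragment, not from the question; countAbove and its parameters are invented for the example):

*******************************
#include <vector>

// Counts how many elements of v exceed a threshold.
int countAbove(const std::vector<int>& v, int threshold) {
    int count = 0;                               // O(1)
    for (std::size_t i = 0; i < v.size(); ++i)   // Rule 2: l(n) = n iterations
        if (v[i] > threshold)                    // Rule 3: both branches O(1)
            ++count;
    return count;                                // O(1)
}
*******************************

Rule 2 gives O(max(1, 1 × (n+1), 1 × n, 1 × n)) = O(n) for the loop, and Rule 1 (sequential composition of the initialization, the loop and the return) gives O(max(1, n, 1)) = O(n) overall.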

RATES OF GROWTH

In the analysis of algorithms, it is not important to know exactly how many operations an algorithm performs. Of greater concern is the rate of increase in the number of operations needed to solve a problem as the size of the problem increases. This is referred to as the rate of growth of the algorithm. What happens with small sets of input data is not as interesting as what happens when the data set gets large.

Because we are interested in general behavior, we just look at the overall growth rate of algorithms, not at the details. If we look closely at the graph in Fig. 1.1, we will see some trends. The function based on x increases slowly at first, but as the problem size gets larger, it begins to grow at a rapid rate. Typical growth rates, from slowest to fastest, are O(1) (constant), O(log n) (logarithmic), O(n) (linear), O(n log n), O(n²) (quadratic), O(n³) (cubic) and O(2ⁿ) (exponential).
Q.2: Discuss the steps in mathematical analysis for recursive
algorithm. Do the same for finding the factorial of a number.

Answer:
MATHEMATICAL ANALYSIS FOR RECURSIVE ALGORITHMS

General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

Recursive Evaluation of n!

Definition: n! = 1 * 2 * ... * (n−1) * n for n >= 1, and 0! = 1

Recursive definition of n!: F(n) = F(n−1) * n for n >= 1, and F(0) = 1

*******************************
ALGORITHM F(n)

//Computes n! recursively

//Input: A nonnegative integer n

//Output: The value of n!

if n = 0 return 1

else return F(n - 1)*n

*******************************
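
A direct C++ rendering of this algorithm may help (a minimal sketch; the unsigned long long return type is an assumption and overflows for n > 20):

*******************************
#include <iostream>

// Computes n! recursively: one multiplication per call, so M(n) = M(n-1) + 1.
unsigned long long F(unsigned int n) {
    if (n == 0) return 1;   // basis: 0! = 1, no multiplication
    return F(n - 1) * n;    // recursive case: F(n) = F(n-1) * n
}

int main() {
    std::cout << F(5) << '\n';   // prints 120
}
*******************************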

The number of multiplications M(n) satisfies

M(n) = M(n − 1) + 1 for n > 0
(M(n − 1) multiplications to compute F(n − 1), plus 1 to multiply F(n − 1) by n)

M(0) = 0
(the calls stop when n = 0, and there are no multiplications when n = 0)

Solving by backward substitution:

M(n) = M(n − 1) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3
     ...
     = M(n − i) + i
     ...
     = M(n − n) + n = n

Overall time complexity: O(n)

Q.3: Find the optimal binary merge tree (pattern) for ten files whose lengths are 28, 32, 12, 5, 84, 53, 91, 35, 3 and 11. Also find its weighted external path length.

Answer:
To obtain the optimal binary merge pattern, merge the two smallest files at each step. The figure shows a binary merge tree representing the optimal merge pattern obtained for the above 10 files.

Weighted external path length = ∑ diqi, where
di = distance from the root to the external node for file Xi
qi = the length of Xi
Here n = 10.

Therefore ∑diqi = d1q1 + d2q2 + ... + d10q10

=(4x28) +
(3x32)+(5x12)+(7x5)+(2x84)+(3x53)+(2x91)+(3x35)+(7x3)+(6x11)

=112+96+60+35+168+159+182+105+21+66

=1004 (Answer)
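
The answer can be checked mechanically with a min-heap, since the optimal merge pattern repeatedly merges the two smallest files; the following C++ sketch (assuming the file lengths above) prints the same total, 1004:

*******************************
#include <iostream>
#include <queue>
#include <vector>

int main() {
    // Min-heap of file lengths.
    std::priority_queue<long long, std::vector<long long>,
                        std::greater<long long>> pq;
    for (long long len : {28, 32, 12, 5, 84, 53, 91, 35, 3, 11}) pq.push(len);

    long long total = 0;                   // total merge cost = weighted external path length
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();  // smallest file
        long long b = pq.top(); pq.pop();  // second smallest file
        total += a + b;                    // cost of this merge
        pq.push(a + b);                    // merged file goes back into the heap
    }
    std::cout << total << '\n';            // prints 1004
}
*******************************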

Q.4: Explain the memory function method for the Knapsack problem and give the algorithm.

Answer:
Memory Functions
With a plain top-down (recursive) approach, common subproblems may be solved more than once, which makes solving the problem inefficient; with the classical bottom-up dynamic programming approach, entries for all subproblems are computed, including some that are never needed. The memory function method avoids both drawbacks. Hence we can define the goal of a memory function as: solve only the subproblems that are necessary, and solve them only once.
Memoization is a way to deal with overlapping subproblems in dynamic programming. During memoization:

1. After computing the solution to a subproblem, store it in a table.


2. Make use of recursive calls.

The 0-1 Knapsack Memory Function Algorithm is as given below -


Globally, some values are set before the first call of the memory function algorithm:

for i = 1 to n do
    for j = 1 to W do
        table[i, j] = -1
for j = 0 to W do
    table[0, j] = 0   // making the 0th row 0
for i = 1 to n do
    table[i, 0] = 0   // making the 0th column 0

Algorithm Mem_Fun_Knapsack(i, j)
// Problem Description: Implementation of the memory function method
// for the knapsack problem
// Input: i is the number of items and j denotes the knapsack's capacity
// Output: the value of an optimal solution using the first i items and capacity j
if (table[i, j] < 0) then
{
    if (j < w[i]) then
        value = Mem_Fun_Knapsack(i - 1, j)
    else
        value = max(Mem_Fun_Knapsack(i - 1, j),
                    v[i] + Mem_Fun_Knapsack(i - 1, j - w[i]))
    table[i, j] = value
}
return table[i, j]
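
A runnable C++ version of the same memory function is sketched below; the instance in main (weights, values, capacity) is illustrative, not from the question:

*******************************
#include <algorithm>
#include <iostream>
#include <vector>

// table_[i][j] = best value using the first i items with capacity j; -1 = unknown.
std::vector<int> w, v;
std::vector<std::vector<int>> table_;

int knapsack(int i, int j) {
    if (table_[i][j] < 0) {                      // not computed yet
        int value;
        if (j < w[i - 1])                        // item i does not fit (0-based arrays)
            value = knapsack(i - 1, j);
        else
            value = std::max(knapsack(i - 1, j),
                             v[i - 1] + knapsack(i - 1, j - w[i - 1]));
        table_[i][j] = value;                    // memoize
    }
    return table_[i][j];
}

int main() {
    w = {2, 1, 3, 2};  v = {12, 10, 20, 15};     // illustrative instance
    int n = 4, W = 5;
    table_.assign(n + 1, std::vector<int>(W + 1, -1));
    for (int j = 0; j <= W; ++j) table_[0][j] = 0;   // row 0 = 0
    for (int i = 0; i <= n; ++i) table_[i][0] = 0;   // column 0 = 0
    std::cout << knapsack(n, W) << '\n';             // prints 37
}
*******************************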

Memory Function Method for the Knapsack Problem


The memory function method is a technique used to improve the
efficiency of solving the Knapsack problem by storing the results of
subproblems (memoization) to avoid redundant calculations. This is
particularly useful in dynamic programming to reduce the time
complexity.

Problem Statement
Given a set of items, each with a weight and a value, determine the
number of each item to include in a collection so that the total weight is
less than or equal to a given limit and the total value is as large as
possible.

Algorithm Using Memory Function (Memoization)

1. Define the Problem:
 Let n be the number of items.
 Let W be the maximum weight capacity of the knapsack.
 Let w[i] and v[i] be the weight and value of the i-th item, respectively.

2. Recursive Definition:
 Define K(i, w) as the maximum value that can be obtained with the first i items and a knapsack capacity of w.
 The recursive formula is:

K(i, w) = 0, if i = 0 or w = 0
K(i, w) = K(i−1, w), if w[i] > w
K(i, w) = max(K(i−1, w), v[i] + K(i−1, w − w[i])), otherwise

3. Memoization Table:
 Use a table M[i][w] to store the results of subproblems to avoid redundant calculations.

Advantages of Memory Function Method


 Efficiency: Avoids redundant calculations by storing the results of
subproblems.
 Simplicity: Simplifies the problem by breaking it down into smaller
subproblems.
 Scalability: Can handle larger input sizes efficiently compared to a
naive recursive approach.

By using the memory function method, the time complexity of solving the knapsack problem is reduced to O(n × W), making it more feasible for larger datasets.


Q.5: Apply Warshall's algorithm to find the transitive closure of
the digraph defined by the following adjacency matrix.

0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
i) Prove that the time efficiency of Warshall's algorithm is cubic.

ii) Explain why the time efficiency of Warshall's algorithm is inferior to that of the traversal-based algorithm for sparse graphs represented by their adjacency lists.

Answer:
Applying Warshall's algorithm yields the sequence of matrices shown in the steps below.

(i) For a graph with n vertices, the algorithm computes n matrices R(k) (k = 1, 2, ..., n), each of which has n² elements. Hence, the total number of elements to be computed is n³. Since computing each element takes constant time, the time efficiency of the algorithm is in Θ(n³).

(ii) Since one DFS or BFS traversal of a graph with n vertices and m edges, represented by its adjacency lists, takes Θ(n + m) time, doing this n times takes nΘ(n + m) = Θ(n² + nm) time. For sparse graphs (i.e., if m ∈ O(n)), Θ(n² + nm) = Θ(n²), which is more efficient than the Θ(n³) time efficiency of Warshall's algorithm.

Warshall's Algorithm for Transitive Closure


Warshall's algorithm is used to find the transitive closure of a directed
graph. The transitive closure of a graph is a matrix that indicates whether
there is a path between any two nodes.

Given the adjacency matrix:

0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0

Steps to Apply Warshall's Algorithm

1. Initialize the Transitive Closure Matrix:
 Start with the given adjacency matrix as the initial transitive closure matrix.

2. Algorithm:
 For each intermediate vertex k, update the transitive closure matrix T using:
 T[i][j] = T[i][j] ∨ (T[i][k] ∧ T[k][j])
 This means that there is a path from vertex i to vertex j either directly or through an intermediate vertex k.

3. Update the Matrix for Each k (vertices indexed 0 to 3):

 k = 0 (no edge enters vertex 0, so nothing changes):
 0 1 0 0
 0 0 1 0
 0 0 0 1
 0 0 0 0

 k = 1 (the path 0 → 1 → 2 is added):
 0 1 1 0
 0 0 1 0
 0 0 0 1
 0 0 0 0

 k = 2 (the paths 0 → 2 → 3 and 1 → 2 → 3 are added):
 0 1 1 1
 0 0 1 1
 0 0 0 1
 0 0 0 0

 k = 3 (no edge leaves vertex 3, so nothing changes):
 0 1 1 1
 0 0 1 1
 0 0 0 1
 0 0 0 0
4. Final Transitive Closure Matrix:

 0 1 1 1
 0 0 1 1
 0 0 0 1
 0 0 0 0
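
A compact C++ sketch of the algorithm on this instance (the triple loop mirrors the update rule above):

*******************************
#include <iostream>
#include <vector>

int main() {
    const int n = 4;
    std::vector<std::vector<int>> t = {          // the given adjacency matrix
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 1},
        {0, 0, 0, 0}};

    for (int k = 0; k < n; ++k)                  // intermediate vertex
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                t[i][j] = t[i][j] || (t[i][k] && t[k][j]);

    for (auto& row : t) {                        // prints the final closure:
        for (int x : row) std::cout << x << ' '; // 0 1 1 1 / 0 0 1 1 /
        std::cout << '\n';                       // 0 0 0 1 / 0 0 0 0
    }
}
*******************************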

i) Proving the Time Efficiency of Warshall's Algorithm is Cubic

Warshall's algorithm consists of three nested loops:

1. The outer loop runs for each vertex k, which iterates n times.
2. For each k, the inner loops iterate over all pairs of vertices (i, j), which also iterate n times each.

Thus, the total number of iterations is n × n × n = n³.

Hence, the time complexity of Warshall's algorithm is O(n³).

ii) Comparison to Traversal-Based Algorithms for Sparse Graphs

Traversal-based algorithms, such as Depth-First Search (DFS) or Breadth-First Search (BFS), can be more efficient for sparse graphs represented by adjacency lists. Here's why:

1. Sparse Graphs:
 A sparse graph has a number of edges E that is much less than the maximum possible number of edges V² (i.e., E ≪ V²).
 For sparse graphs, traversal algorithms can efficiently find paths by exploring only the existing edges.

2. Traversal-Based Algorithms:
 DFS/BFS Complexity: For an adjacency list representation, both DFS and BFS have a time complexity of O(V + E).
 This is significantly better than O(n³) for Warshall's algorithm when E is much less than V².

3. Why Warshall's Algorithm is Inferior:
 Warshall's Algorithm: The cubic time complexity O(n³) is incurred regardless of the number of edges, making it less efficient for sparse graphs.
 Traversal-Based Algorithms: These algorithms scale linearly with the number of edges and vertices, making them more suitable for sparse graphs where E is much smaller than V².

Conclusion
Warshall's algorithm is efficient for dense graphs where the adjacency
matrix is fully populated. However, for sparse graphs, traversal-based
algorithms using adjacency lists provide a significant performance
advantage due to their linear time complexity in terms of vertices and
edges.

Q.6: Explain how the branch and bound technique can be used to solve the travelling salesperson problem.

Answer:
Travelling Salesman Problem – The least cost branch-and-bound (LCBB) technique can be used to find a minimum cost tour for a given set of nodes. For this, we need to define two cost functions ĉ(·) and u(·) such that ĉ(r) ≤ c(r) ≤ u(r) for every node r.
The cost function c(·) should be such that the solution node with least c(·) corresponds to a shortest tour in G. One way of defining c(·) is:

c(A) = length of the tour defined by the path from the root to A, if A is a leaf
     = cost of a minimum-cost leaf in the subtree rooted at A, if A is not a leaf

The state space tree for such a problem is shown in the figure, with each edge assigned a value that indicates the route to be followed.

For example, node 8 shows the path 1, 3, 4 and node 13 shows the path 1, 3, 2, 4. A reduced cost matrix is used for finding a better value of ĉ(·). For this, we have to reduce a matrix. A matrix is said to be reduced if all its rows and columns are reduced, and a row (column) is said to be reduced iff it contains at least one zero and all remaining entries are non-negative.

We associate a reduced cost matrix with every node in the travelling salesperson state space tree. Suppose that A is the reduced cost matrix for a node P and Q is a child of node P, and suppose that P is not a leaf node. The tree edge (P, Q) corresponds to including edge (i, j) in the tour.

A reduced cost matrix for Q can be obtained as follows:

(i) Change all entries in row i and column j of A to ∞. This prevents the use of any more edges leaving vertex i or entering vertex j.
(ii) Set A(j, 1) to ∞. This prevents the use of edge (j, 1).
(iii) Reduce all rows and columns in the resulting matrix except for rows and columns containing only ∞.

The total cost of Q can be given as:

ĉ(Q) = ĉ(P) + A(i, j) + r

where r is the total amount subtracted in step (iii).

At each step of determining the minimum cost tour we select the node having minimum cost, proceed step by step in the same way, and finally arrive at a minimum cost tour.

Branch and Bound Technique for the Travelling Salesperson Problem (TSP)
The Travelling Salesperson Problem (TSP) is a classic optimization
problem where a salesperson must visit a set of cities exactly once and
return to the starting city, with the goal of minimizing the total travel
distance. The Branch and Bound technique is an efficient way to solve this
problem by systematically exploring all possible routes while pruning
suboptimal solutions to reduce computation.

Steps to Solve TSP Using Branch and Bound

1. Initialization:
 Start with an initial solution (often a greedy solution) to establish an
upper bound on the solution's cost.
 Create a priority queue or a list to store nodes representing partial
solutions (routes).

2. Branching:
 From the current node (partial solution), generate all possible next
steps (routes).
 Each generated node represents a city added to the current route.

3. Bounding:
 Calculate a lower bound for the cost of each generated node.
 The lower bound helps in estimating the minimal possible cost to
complete the tour from that node.

4. Pruning:
 Compare the lower bound of each node with the current upper
bound.
 If the lower bound of a node is greater than the upper bound,
discard (prune) that node as it cannot lead to an optimal solution.
 If a complete tour is found with a cost less than the current upper
bound, update the upper bound.

5. Termination:
 The algorithm continues until all nodes in the priority queue are
processed or pruned.
 The optimal tour is the one corresponding to the final upper bound.

Algorithm Details

1. Representation:
 Represent the problem using a cost matrix C, where C[i][j] is the cost of traveling from city i to city j.
 Use a binary decision tree where each node represents a partial tour.

2. Initial Bound:
 Use the nearest neighbor heuristic or another greedy approach to
find an initial solution and set an initial upper bound.

3. Priority Queue:
 Use a priority queue to store nodes, prioritized by their lower bound
costs.

4. Bounding Function:
 Compute the lower bound for a partial tour using techniques like
reduced cost matrices or minimum spanning trees.

Example
Consider a small TSP with 4 cities and the following cost matrix:

        0    1    2    3
  0 [   ∞   10   15   20 ]
  1 [  10    ∞   35   25 ]
  2 [  15   35    ∞   30 ]
  3 [  20   25   30    ∞ ]

1. Initial Solution:
 Use the nearest neighbor or any other heuristic to find an initial tour. The nearest-neighbor tour is 0 → 1 → 3 → 2 → 0 with a cost of 10 + 25 + 30 + 15 = 80. Set this as the initial upper bound.

2. Branching:
 Start with the root node representing city 0.
 From city 0, generate branches to cities 1, 2, and 3.
 Calculate the lower bound for each branch.

3. Bounding:
 For each partial tour (e.g., 0 → 1), calculate the lower bound using
reduced cost matrices or other techniques.

4. Pruning:
 If the lower bound of a partial tour exceeds the current upper bound (80), prune that branch.
 Otherwise, add the branch to the priority queue.

5. Iteration:
 Continue branching from the node with the lowest lower bound in
the priority queue.
 Update the upper bound whenever a complete tour with a lower
cost is found.

6. Termination:
 The algorithm terminates when the priority queue is empty or all
nodes are pruned.
 The final upper bound gives the cost of the optimal tour.
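
A minimal C++ sketch of the method follows. For brevity it uses depth-first branching and a simpler lower bound (cost so far plus the cheapest edge out of every city still to be left) rather than the reduced cost matrix bound described above; the cost matrix is the one from the example below:

*******************************
#include <algorithm>
#include <iostream>
#include <vector>

const int N = 4;
const int INF = 1000000000;
int C[N][N] = {{INF, 10, 15, 20}, {10, INF, 35, 25},
               {15, 35, INF, 30}, {20, 25, 30, INF}};
int best = INF;                                  // current upper bound

// Cheapest edge leaving city i (used by the simple lower bound).
int minOut(int i) {
    int m = INF;
    for (int j = 0; j < N; ++j)
        if (j != i) m = std::min(m, C[i][j]);
    return m;
}

void solve(std::vector<int>& path, std::vector<bool>& used, int cost) {
    int last = path.back();
    if ((int)path.size() == N) {                 // complete tour: close the cycle
        best = std::min(best, cost + C[last][0]);
        return;
    }
    int bound = cost + minOut(last);             // every remaining city must be left once
    for (int j = 1; j < N; ++j)
        if (!used[j]) bound += minOut(j);
    if (bound >= best) return;                   // prune: cannot beat the upper bound

    for (int j = 1; j < N; ++j) {                // branch: extend the tour with city j
        if (used[j]) continue;
        used[j] = true; path.push_back(j);
        solve(path, used, cost + C[last][j]);
        path.pop_back(); used[j] = false;        // backtrack
    }
}

int main() {
    std::vector<int> path = {0};                 // all tours start at city 0
    std::vector<bool> used(N, false);
    used[0] = true;
    solve(path, used, 0);
    std::cout << best << '\n';                   // prints 80 for this matrix
}
*******************************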

Example Execution

1. Start with city 0:
 Generate branches: 0 → 1, 0 → 2, 0 → 3.
 Calculate a lower bound for each branch; suppose the bounds are 60, 70 and 75, respectively.

2. Select the branch with the lowest bound (0 → 1):
 Generate new branches from 0 → 1: 0 → 1 → 2 and 0 → 1 → 3.
 With four cities each of these has exactly one completion: 0 → 1 → 2 → 3 → 0 costs 10 + 35 + 30 + 20 = 95, and 0 → 1 → 3 → 2 → 0 costs 10 + 25 + 30 + 15 = 80.

3. Prune:
 The branch 0 → 1 → 2 (bound 95) exceeds the upper bound 80 and is pruned.
 The tour 0 → 1 → 3 → 2 → 0 matches the upper bound 80, so the bound is not improved.

4. Terminate:
 The remaining branches (0 → 2, 0 → 3) cannot produce a tour cheaper than 80 (their completions cost 80 or 95), so the algorithm terminates with the optimal tour 0 → 1 → 3 → 2 → 0 of cost 80.
Advantages of Branch and Bound
 Efficiency: Prunes large parts of the search space, reducing computation.
 Optimality: Guarantees finding the optimal solution.
 Scalability: More efficient than brute-force search, especially for larger
problems.

Conclusion
The Branch and Bound technique systematically explores possible
solutions for the TSP while using bounds to prune suboptimal solutions.
This ensures that the optimal tour is found efficiently, making it a powerful
method for solving combinatorial optimization problems like TSP.

Q.7: What is a Hamiltonian cycle? Write an algorithm to find all Hamiltonian cycles in a graph.

Answer:

Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its initial or starting position. In other words, if a Hamiltonian cycle begins at some vertex V1 ∈ G and the vertices of G are visited in the order V1, V2, ..., Vn+1, then the edges (Vi, Vi+1) are in E, 1 <= i <= n, and the Vi are distinct except for V1 and Vn+1, which are equal. The Hamiltonian cycle was suggested by Sir William Hamilton.

Fig. shows a graph G1 which contains the Hamiltonian cycle 1, 2, 8, 7, 6, 5, 4, 3, 1. The graph G2 does not contain any Hamiltonian cycle.

There is no easy way to determine whether a given graph contains a Hamiltonian cycle. We have a backtracking algorithm that finds all the Hamiltonian cycles in a graph. The graph may be directed or undirected. Only distinct cycles are output.

Algorithm – The backtracking solution vector (x1, ..., xn) is defined so that xi represents the ith visited vertex of the proposed cycle. Now we have to determine how to compute the set of possible vertices for xk if x1, ..., xk-1 have already been chosen. If k = 1, then x1 can be any of the n vertices. To avoid printing the same cycle n times, we require that x1 = 1. If 1 < k < n, then xk can be any vertex v that is distinct from x1, x2, ..., xk-1 and is connected by an edge to xk-1. The vertex xn must be the one remaining vertex, and it must be connected to both xn-1 and x1. Algorithm 1, NextValue(k), determines a possible next vertex for the proposed cycle.

Algorithm 1 Generating a Next Vertex

Algorithm NextValue(k)
// x[1 : k-1] is a path of k-1 distinct vertices. If x[k] = 0, then no vertex
// has as yet been assigned to x[k]. After execution, x[k] is assigned to the
// next highest numbered vertex which does not already appear in x[1 : k-1]
// and is connected by an edge to x[k-1]. Otherwise x[k] = 0. If k = n, then
// in addition x[k] is connected to x[1].
{
    repeat
    {
        x[k] := (x[k] + 1) mod (n + 1); // Next vertex.
        if (x[k] = 0) then return;
        if (G[x[k-1], x[k]] ≠ 0) then
        { // Is there an edge?
            for j := 1 to k-1 do // Check for distinctness.
                if (x[j] = x[k]) then break;
            if (j = k) then // If true, then the vertex is distinct.
                if ((k < n) or ((k = n) and G[x[n], x[1]] ≠ 0))
                    then return;
        }
    } until (false);
}

Algorithm 2 is used to find all Hamiltonian cycles. This algorithm is started by first initializing the adjacency matrix G[1 : n, 1 : n], then setting x[2 : n] to zero and x[1] to 1, and then executing Hamiltonian(2).

Algorithm 2 Finding All Hamiltonian Cycles


Algorithm Hamiltonian(k)
// This algorithm uses the recursive formulation of backtracking to find
// all the Hamiltonian cycles of a graph. The graph is stored as an
// adjacency matrix G[1 : n, 1 : n]. All cycles begin at node 1.
{
    repeat
    { // Generate values for x[k].
        NextValue(k); // Assign a legal next value to x[k].
        if (x[k] = 0) then return;
        if (k = n) then write(x[1 : n]);
        else Hamiltonian(k + 1);
    } until (false);
}
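
The same backtracking scheme can be written as a runnable C++ sketch (0-indexed, with all cycles fixed to start at vertex 0; the complete graph in main is illustrative):

*******************************
#include <iostream>
#include <vector>

int n;
std::vector<std::vector<int>> G;   // adjacency matrix
std::vector<int> x;                // x[k] = k-th vertex on the path
std::vector<bool> used;

void hamiltonian(int k) {
    if (k == n) {                                  // all vertices placed
        if (G[x[n - 1]][x[0]]) {                   // edge back to the start?
            for (int v : x) std::cout << v << ' ';
            std::cout << x[0] << '\n';             // print the full cycle
        }
        return;
    }
    for (int v = 1; v < n; ++v) {                  // candidate next vertices
        if (!used[v] && G[x[k - 1]][v]) {          // distinct and connected
            used[v] = true; x[k] = v;
            hamiltonian(k + 1);
            used[v] = false;                       // backtrack
        }
    }
}

int main() {
    n = 4;                                         // illustrative graph: K4
    G = {{0,1,1,1}, {1,0,1,1}, {1,1,0,1}, {1,1,1,0}};
    x.assign(n, 0); used.assign(n, false);
    x[0] = 0; used[0] = true;                      // all cycles begin at vertex 0
    hamiltonian(1);                                // prints each cycle in both directions
}
*******************************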

The travelling salesman problem is the problem of finding a tour with minimum cost; such a tour is a Hamiltonian cycle. For the simple case of a graph all of whose edge costs are identical, Hamiltonian will determine a minimum-cost tour if a tour exists. If the common edge cost is c, the cost of a tour is cn, since there are n edges in a Hamiltonian cycle.

Hamiltonian Cycle's Complexity –

We can define the Hamiltonian-cycle problem as a formal language:

HAM-CYCLE = {⟨G⟩ : G is a Hamiltonian graph}

If we use the reasonable encoding of a graph as its adjacency matrix, the number m of vertices in the graph is Ω(√n), where n = |⟨G⟩| is the length of the encoding of G. There are m! possible permutations of the vertices, and therefore the running time of a naive algorithm is Ω(m!) = Ω((√n)!) = Ω(2^√n), which is not O(n^k) for any constant k.

Q.8: Explain the relationship between the classes P, NP, NP-complete and NP-hard, with an example of each class.

Answer

Class P – If a problem can be solved by a deterministic Turing machine in polynomial time, the problem belongs to the complexity class P. All problems in this class have a solution whose time requirement is a polynomial in the input size n, i.e. f(n) is of the form ak·n^k + ak-1·n^(k-1) + ... + a2·n² + a1·n + a0, where ak, ..., a0 are constant factors (possibly 0). The order of the polynomial is the largest exponent k such that ak ≠ 0 (if ak = 0, then the polynomial is ak-1·n^(k-1) + ... + a0 and the order is k − 1, unless ak-1 = 0 too, and so on). Example: sorting and the single-source shortest path problem are in P.

Class NP contains all computational problems such that the corresponding decision problem can be solved in polynomial time by a nondeterministic Turing machine. Example: the decision version of the travelling salesman problem (is there a tour of cost at most k?) is in NP.

NP-complete problems
NP-complete problems are special problems in class NP, i.e. they are a subset of class NP. A problem p is NP-complete if

1. p ∈ NP (you can solve it in polynomial time by a nondeterministic Turing machine), and

2. all other problems in class NP can be reduced to problem p in polynomial time.

Examples: satisfiability (SAT), the Hamiltonian cycle problem, and the decision version of TSP.

NP-hard problems are partly similar but more difficult problems than NP-complete problems. They don't themselves have to belong to class NP (or if they do, nobody has proved it yet), but all problems in class NP can be reduced to them. Very often, NP-hard problems really require exponential time or even worse. Examples: the halting problem and the optimization version of TSP.
Notice that NP-complete problems are a subset of NP-hard problems, and that's why NP-complete problems are sometimes called NP-hard. It can also happen that some problem which is today known only to be NP-hard will be proved to be NP-complete in the future – it is enough that somebody invents a nondeterministic Turing machine which solves the problem in polynomial time.

Q.9: Solve the recurrence relation: T(n) = 3 (n/4) + n

Answer:
Given
T(n) = 3 (n/4) + n

But the recurrence relation should be

T(n) = 3T(n/4) + n

We iterate it as follows:

T(n) = n + 3T(n/4)
     = n + 3((n/4) + 3T(n/16))
     = n + 3((n/4) + 3((n/16) + 3T(n/64)))
     = n + 3(n/4) + 9(n/16) + 27T(n/64)

where ((n/4)/4) = (n/16) and ((n/16)/4) = (n/64).

The ith term in the series is 3^i (n/4^i). The iteration hits n/4^i = 1, or equivalently i = log₄n. Continuing the iteration until this point, we discover that the summation contains a decreasing geometric series:

T(n) <= n + 3(n/4) + 9(n/16) + 27(n/64) + ... + 3^(log₄n)·Θ(1)
     = 4n + o(n)
     = O(n)

Here we used the fact that 3^(log₄n) = n^(log₄3), and since log₄3 < 1, we conclude that Θ(n^(log₄3)) = o(n).
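
The bounding step can be written out explicitly. Assuming n is a power of 4 and T(1) = Θ(1), the sum is dominated by an infinite geometric series:

T(n) <= ∑ (i = 0 to log₄n) 3^i · (n/4^i) <= n · ∑ (i = 0 to ∞) (3/4)^i = n · 1/(1 − 3/4) = 4n = O(n)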

Q.10: Tabulate the differences between Kruskal's and Prim's algorithms.

Answer
Prim's Algorithm:
1. It starts to build the MST from any node of the graph.
2. An adjacency matrix, binary heap or Fibonacci heap is used in Prim's algorithm.
3. Prim's algorithm runs faster on dense graphs.
4. Time complexity is O(E log V) with a binary heap and O(E + V log V) with a Fibonacci heap.
5. The next node included must be connected to a node already traversed.
6. It may consider a node several times in order to obtain its minimum distance.

Kruskal's Algorithm:
1. It starts to build the MST from the minimum weighted edge in the graph.
2. A disjoint set (union-find) is used in Kruskal's algorithm.
3. Kruskal's algorithm runs faster on sparse graphs.
4. Time complexity is O(E log V).
5. The next edge included may or may not be connected to the previously chosen edges, but it must not form a cycle.
6. It considers each edge only once and, based on whether it forms a cycle, either rejects or accepts it.

Q.11: Using Dijkstra's algorithm, find out the shortest distance from the source vertex 's' to the rest of the vertices in the given graph. Also write the order in which all the vertices of the graph are visited.

Answer:

Dijkstra's algorithm solution for the given problem:
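
The graph figure for this question is not reproduced here, so the following C++ sketch shows the standard procedure on an assumed sample graph (vertex 0 plays the role of s); it prints each vertex in the order it is visited (finalized) together with its shortest distance:

*******************************
#include <iostream>
#include <queue>
#include <vector>

int main() {
    // Assumed sample graph as adjacency lists of {neighbor, weight} pairs.
    std::vector<std::vector<std::pair<int, int>>> adj = {
        {{1, 4}, {2, 1}},   // 0 -> 1 (4), 0 -> 2 (1)
        {{3, 1}},           // 1 -> 3 (1)
        {{1, 2}, {3, 5}},   // 2 -> 1 (2), 2 -> 3 (5)
        {}                  // 3
    };
    int n = (int)adj.size(), s = 0;
    const long long INF = 1000000000000000000LL;
    std::vector<long long> dist(n, INF);
    std::vector<bool> done(n, false);
    using P = std::pair<long long, int>;             // {distance, vertex}
    std::priority_queue<P, std::vector<P>, std::greater<P>> pq;

    dist[s] = 0;
    pq.push({0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (done[u]) continue;                       // stale queue entry
        done[u] = true;
        std::cout << "visit " << u << " dist " << d << '\n';
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {                   // relax edge (u, v)
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }
}
*******************************

With a binary heap this runs in O((V + E) log V) time; on the sample graph it visits the vertices in the order 0, 2, 1, 3.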


Q.16: Write the procedure of merge sort and sort the given array of 8 elements using merge sort: 35, 10, 7, 22, 5, 13, 16, 3.

Answer:

Merge Sort Algorithm


Merge sort is one of the most efficient sorting algorithms. It works on the principle of divide and conquer: merge sort repeatedly breaks a list down into several sublists until each sublist consists of a single element, then merges those sublists in a manner that results in a sorted list.

A merge sort works as follows:

Top-down Merge Sort Implementation:
The top-down merge sort approach is the methodology that uses recursion. It starts at the top and proceeds downwards, with each recursive call asking the same question, "What is required to be done to sort the array?", and having the answer, "Split the array into two, make a recursive call, and merge the results", until one gets to the bottom of the array tree.
Example: Let us consider an example to understand the approach better.
1. Divide the unsorted list into n sublists, each comprising one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce newly sorted sublists until there is only one sublist remaining. This will be the sorted list.

Merging of two lists is done as follows: the first elements of both lists are compared. If sorting in ascending order, the smaller element of the two becomes a new element of the sorted list. This procedure is repeated until both of the smaller sublists are empty and the newly combined sublist covers all the elements of both sublists.
Bottom-Up Merge Sort Implementation:
The bottom-up merge sort approach uses an iterative methodology. It starts with "single-element" arrays and combines two adjacent elements, sorting the two at the same time. The combined-sorted arrays are again combined and sorted with each other until one single unit of sorted array is achieved.

Example: Let us apply the approach to the given array 35, 10, 7, 22, 5, 13, 16, 3.
1. Iteration (1): merge pairs of arrays of size 1:
(10, 35) (7, 22) (5, 13) (3, 16)
2. Iteration (2): merge pairs of arrays of size 2:
(7, 10, 22, 35) (3, 5, 13, 16)
3. Iteration (3): merge pairs of arrays of size 4:
(3, 5, 7, 10, 13, 16, 22, 35)
Thus the entire array has been sorted and merged.
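
A compact top-down C++ implementation (a sketch matching the recursive description above), applied to the given array:

*******************************
#include <iostream>
#include <vector>

// Top-down merge sort: split a[lo..hi), sort each half recursively, then merge.
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2) return;                 // a single element is already sorted
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);                   // sort the left half
    mergeSort(a, mid, hi);                   // sort the right half
    std::vector<int> tmp;                    // merge the two sorted halves
    int i = lo, j = mid;
    while (i < mid && j < hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) tmp.push_back(a[i++]);
    while (j < hi)  tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k) a[lo + k] = tmp[k];
}

int main() {
    std::vector<int> a = {35, 10, 7, 22, 5, 13, 16, 3};
    mergeSort(a, 0, (int)a.size());
    for (int x : a) std::cout << x << ' ';   // prints 3 5 7 10 13 16 22 35
}
*******************************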

Q.17: What are the factors which affect the running time of an
algorithm?

Answer:
Factors that affect the running time of an algorithm are:

1. Parallelism – A multi-threaded program running on a multicore machine will be faster.

2. CPU Utilization – If the CPU is already utilized by other processes, the running time of the algorithm will increase.

3. I/O Bounds – Sometimes I/O bounds, like disk read/write speed, affect the running time.

4. Overhead Due to Many Function Calls – There can be a large overhead if a function call is made many times for a small function.

5. Recursion – Recursion can cause a lot of overhead, which increases the running time of an algorithm.

6. Disk Layout – For multiple disks, RAID might work better or faster than NAS, SAN or other storage layouts.

7. Contention – If multiple threads are synchronized to access common resources, there can be contention for the resources, which causes threads to wait.

8. Buggy Synchronization – If there are deadlocks, the algorithm or program will stop working, or the two threads involved in the deadlock will stop. This increases the running time of any algorithm.

Q.18: Show how the following matrices would be multiplied using Strassen's algorithm?
Answer:
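The matrices for this question are not reproduced here, so the general scheme is given as a sketch. For matrices partitioned into 2 × 2 blocks, Strassen's algorithm replaces the usual 8 block multiplications with the following 7 products:

M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)B11
M3 = A11(B12 − B22)
M4 = A22(B21 − B11)
M5 = (A11 + A12)B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)

and combines them as

C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6

Applying this recursively gives T(n) = 7T(n/2) + Θ(n²) = Θ(n^(log₂7)) ≈ Θ(n^2.81), against Θ(n³) for the classical method.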
Q.19: Consider n = 7, m = 15, (P1, ..., P7) = (10, 5, 15, 7, 6, 18, 3) and (w1, ..., w7) = (2, 3, 5, 7, 1, 4, 1). Obtain the optimal solution for this knapsack instance.

Answer:
To solve this problem, we use a strategy that determines the fraction of each weight to be included so as to maximize the profit and fill the knapsack.

     (X1, X2, X3, X4, X5, X6, X7)             ∑WiXi    ∑PiXi

(1)  (1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8)       5.51    15.76

Now taking the maximum profit 18 (weight 4): X6 = 1, and still ∑WiXi < m.

(2)  (1/2, 1/3, 1/4, 1/5, 1/6, 1, 1/8)         8.94    31.19

(3)  (1/2, 1/3, 1, 1/5, 1/6, 1, 1/8)          12.69    42.44

(4)  (1, 1/3, 1, 1/5, 1/6, 1, 1/8)            13.69    47.44

(5)  (1, 1/3, 1, 1/5, 1, 1, 1/8)              14.52    52.44

(6)  (1, 1/3, 1, 1/7, 1, 1, 1)                15       54.67

(7)  (1, 2/3, 1, 0, 1, 1, 1)                  15       55.33

At each step, we try to increase the profit. The maximum profit is obtained at step (7), taking

X1 = 1, X2 = 2/3, X3 = 1, X4 = 0, X5 = 1, X6 = 1, and X7 = 1

These fractions of the weights provide the maximum profit of 55.33.
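
The same optimum is reached directly by the standard greedy rule for the fractional knapsack, sort by profit/weight ratio and fill; a C++ sketch for this instance:

*******************************
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> p = {10, 5, 15, 7, 6, 18, 3};   // profits
    std::vector<double> w = {2, 3, 5, 7, 1, 4, 1};      // weights
    double m = 15;                                      // capacity

    std::vector<int> idx = {0, 1, 2, 3, 4, 5, 6};
    std::sort(idx.begin(), idx.end(),                   // decreasing p/w ratio
              [&](int a, int b) { return p[a] / w[a] > p[b] / w[b]; });

    double profit = 0;
    std::vector<double> x(7, 0);
    for (int i : idx) {
        double take = std::min(w[i], m);   // whole item if it fits, else a fraction
        x[i] = take / w[i];
        profit += p[i] * x[i];
        m -= take;
        if (m == 0) break;
    }
    for (double xi : x) std::cout << xi << ' ';         // 1 0.666667 1 0 1 1 1
    std::cout << "\nprofit = " << profit << '\n';       // profit = 55.3333
}
*******************************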

Unit I: Definitions of Algorithms and Complexity


1. Define algorithm and discuss the importance of analyzing algorithms.
2. Explain time and space complexity with examples.
3. What is asymptotic notation? Explain Big-O, Big-Ω, and Big-Θ notations with
examples.
4. Discuss various methods of solving recurrences. Solve T(n) = 2T(n/2) + n.
5. Explain the divide and conquer technique. Illustrate with binary search and merge
sort.
6. Describe Strassen’s matrix multiplication algorithm.
7. Discuss loop optimization techniques in code tuning.

Unit II: Greedy Strategy

1. Explain the greedy strategy. What are the characteristics of problems that can be
solved using a greedy approach?
2. Describe Huffman coding and its application.
3. Explain the algorithm for finding a minimum spanning tree using Kruskal’s and
Prim’s algorithms.
4. Solve the 0/1 knapsack problem using a greedy approach. Is it always optimal? Why
or why not?
5. Discuss the job sequencing problem with deadlines.
6. Explain Dijkstra’s algorithm for the single-source shortest path problem.
7. Provide a correctness proof for a chosen greedy algorithm.

Unit III: Dynamic Programming

1. Explain the concept of dynamic programming. How does it differ from the greedy
approach?
2. Describe the 0/1 knapsack problem and solve it using dynamic programming.
3. Explain the Floyd-Warshall algorithm and its use in finding the shortest paths in a
weighted graph.
4. Discuss the reliability design problem and how dynamic programming can be applied.
5. Solve a multistage graph problem using dynamic programming.

Unit IV: Backtracking and Branch & Bound

1. Explain the backtracking technique. Describe the 8 queens problem and provide a
solution.
2. Discuss the Hamiltonian cycle problem and how backtracking can be used to solve it.
3. Explain the graph coloring problem and provide a backtracking solution.
4. Introduce the branch and bound method. Describe the travelling salesman problem
and provide an example.
5. What is the lower bound theory? How is it used in solving algebraic problems?
6. Provide an overview of parallel algorithms and their significance.

Unit V: Advanced Tree and Graph Algorithms, NP-Hard and NP-Complete Problems

1. Discuss NP-hard and NP-complete problems with examples.


2. Explain approximation algorithms and provide an example.
3. Describe data stream algorithms and their applications.
4. Explain advanced tree algorithms, such as AVL trees or B-trees.
5. Provide an introduction to the design and complexity of parallel algorithms.
