DAA Solution All Years Previous Year Paper


DAA Solution NCS-501

(a) What are the Characteristics of an Algorithm?

Ans:
The characteristics of a good algorithm are:
Precision – the steps are precisely stated (defined).
Uniqueness – the results of each step are uniquely defined and depend only on the input and the
results of the preceding steps.
Finiteness – the algorithm stops after a finite number of instructions are executed.
Input – the algorithm receives input.
Output – the algorithm produces output.
Generality – the algorithm applies to a set of inputs.

(b) Explain Divide and Conquer.

Ans: Divide-and-conquer is a top-down technique for designing algorithms: divide the problem
into smaller subproblems, in the hope that the subproblems are easier to solve, and then compose
the partial solutions into a solution of the original problem.

A little more formally, the divide-and-conquer paradigm consists of the following major phases:

 Breaking the problem into several sub-problems that are similar to the original problem
but smaller in size,
 Solving the sub-problems recursively (successively and independently), and then
 Combining these solutions to the sub-problems to create a solution to the original
problem, as in the sketch below.
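For instance, a minimal divide-and-conquer sketch in C that finds the maximum of an array by
halving it (findMax is an illustrative name, not from the original text):

/* Divide and conquer: maximum of arr[lo..hi] */
int findMax(int arr[], int lo, int hi)
{
    if (lo == hi)                                     /* base case: a single element */
        return arr[lo];
    int mid = lo + (hi - lo) / 2;
    int leftMax = findMax(arr, lo, mid);              /* divide: left half  */
    int rightMax = findMax(arr, mid + 1, hi);         /* divide: right half */
    return leftMax > rightMax ? leftMax : rightMax;   /* combine */
}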

(c) How can we analyze an algorithm?

Ans: Analysis of Algorithms.

A complete analysis of the running time of an algorithm involves the following steps:
 Implement the algorithm completely.
 Determine the time required for each basic operation.
 Identify unknown quantities that can be used to describe the frequency of execution of
the basic operations.
 Develop a realistic model for the input to the program.
 Analyze the unknown quantities, assuming the modelled input.
 Calculate the total running time by multiplying the time by the frequency for each
operation, then adding all the products.

(d) What is time Complexity?

Ans: Time complexity of an algorithm signifies the total time required by the program to run to
completion. The time complexity of algorithms is most commonly expressed using the big O
notation.

Time complexity is most commonly estimated by counting the number of elementary operations
performed by the algorithm. Since an algorithm's performance may vary with different types of
input data, we usually quote the worst-case time complexity, because that is the maximum time
taken over all inputs of a given size.

(e) Why counting sort is called Stable Sort?

Ans:

The output is an array of the items, in order by their keys. Because of the application to radix
sorting, it is important for counting sort to be a stable sort: if two items have the same key as
each other, they should have the same relative position in the output as they did in the input.
Counting sort achieves this because its final pass scans the input from right to left and fills
each key's output slots from the back, so equal keys keep their original relative order.

(f) What is Dynamic Programming?


Ans:

Dynamic programming (usually referred to as DP) is a very powerful technique to solve a
particular class of problems. It demands an elegant formulation of the approach and simple
thinking, and the coding part is very easy. The idea is simple: if you have solved a problem
with the given input, then save the result for future reference, so as to avoid solving the same
problem again; in short, 'remember your past'. If the given problem can be broken up into
smaller sub-problems, and these smaller sub-problems are in turn divided into still smaller ones,
and in this process you observe some overlapping sub-problems, then that is a big hint for DP.
Also, the optimal solutions to the sub-problems must contribute to the optimal solution of the
given problem (referred to as the Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem
has been solved already, then just return the saved answer. If it has not been solved, solve it and
save the answer. This is usually easy to think of and very intuitive. This is referred to
as Memoization.

2.) Bottom-Up : Analyze the problem and see the order in which the sub-problems are solved
and start solving from the trivial subproblem, up towards the given problem. In this process, it is
guaranteed that the subproblems are solved before solving the problem. This is referred to
as Dynamic Programming.
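A small sketch in C of both styles on the Fibonacci numbers (memo[], MAXN, and the function
names are assumptions of this illustration, not from the original text):

#include <string.h>

#define MAXN 90
long long memo[MAXN + 1];

/* Top-down (memoization): solve on demand, saving each answer. */
long long fibTopHelper(int n)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];      /* already solved: return the saved answer */
    return memo[n] = fibTopHelper(n - 1) + fibTopHelper(n - 2);
}

long long fibTop(int n)                     /* requires 0 <= n <= MAXN */
{
    memset(memo, -1, sizeof memo);          /* -1 marks "not yet solved" */
    return fibTopHelper(n);
}

/* Bottom-up (dynamic programming): start from the trivial subproblems. */
long long fibBottom(int n)
{
    long long a = 0, b = 1;                 /* F(0) and F(1) */
    for (int i = 0; i < n; i++) {
        long long t = a + b;
        a = b;
        b = t;
    }
    return a;
}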

(g) Write the names of Various Design Techniques of Algorithm.

1. Brute force
2. Greedy algorithms
3. Divide-and-conquer, decrease-and-conquer
4. Dynamic programming
5. Transform-and-conquer
6. Backtracking and branch-and-bound
7. Genetic algorithms

(h) Write the properties of RB Tree.


Ans: Properties of red-black tree:
 It is a binary search tree in which every node is either red or black.
 The root of the tree is always black.
 Every leaf (NIL) node is black.
 If a node is red, then both of its children are black. This means two consecutive red nodes
are not allowed on any path.
 Every simple path from a node to a descendant leaf contains the same number of black nodes.

(i) Write the properties of a binomial tree.


Ans:

For the binomial tree Bk:

1. There are 2^k nodes,

2. The height of the tree is k,
3. There are exactly C(k, i) (the binomial coefficient) nodes at depth i for i = 0, 1, ..., k, and
4. The root has degree k, which is greater than the degree of any other node; if the children of
the root are numbered from left to right as k-1, k-2, ..., 0, then child i is the root of a subtree Bi.
(j) What is the benefit of using randomized quick sort over quick sort?

Ans: One can show that randomized quicksort has the desirable property that, for any input, it
requires only O(n log n) expected time (averaged over all choices of pivots), so no single fixed
input can force the worst case. There also exists a combinatorial proof, sketched below in terms
of binary search trees.

To each execution of quicksort corresponds the following binary search tree (BST): the initial
pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right
half is the root of the right subtree, and so on. The number of comparisons of the execution of
quicksort equals the number of comparisons during the construction of the BST by a sequence of
insertions. So, the average number of comparisons for randomized quicksort equals the average
cost of constructing a BST when the values inserted form a random
permutation.
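A hedged sketch in C of the only change randomization requires – choosing the pivot uniformly
at random before a standard (Lomuto) partition; the function names are illustrative, and a real
program would seed rand() once with srand():

#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Partition arr[left..right] around a uniformly random pivot;
   returns the pivot's final index. */
int randomizedPartition(int arr[], int left, int right)
{
    int r = left + rand() % (right - left + 1);   /* random pivot index */
    swap(&arr[r], &arr[right]);                   /* move pivot to the end */
    int pivot = arr[right], i = left - 1;
    for (int j = left; j < right; j++)
        if (arr[j] < pivot)
            swap(&arr[++i], &arr[j]);
    swap(&arr[i + 1], &arr[right]);
    return i + 1;
}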
Q1(a) T(n) = 6T(n/3) + n^2 log n

(b) T(n) = T(n-1) + (n-1)^3

Solution 1(a):

T(n) = 6T(n/3) + n^2 log n

Here a = 6, b = 3, f(n) = n^2 log n.

Now n^(log_b a) = n^(log_3 6) ≈ n^1.631.

Since f(n) = n^2 log n is polynomially larger than n^1.631 (their ratio is n^0.369 log n), case 3
of the master theorem applies; the regularity condition also holds, since 6·f(n/3) =
(2/3)·n^2 log(n/3) ≤ c·n^2 log n for c = 2/3.

T(n) = Θ(n^2 log n).

Solution 1(b):

T(n) = T(n-1) + (n-1)^3

Apply the iteration method:

T(n) = T(n-1) + (n-1)^3

= [T(n-2) + (n-2)^3] + (n-1)^3

= [T(n-3) + (n-3)^3] + (n-2)^3 + (n-1)^3

.........................................................

= (n-1)^3 + (n-2)^3 + (n-3)^3 + ... + 2^3 + 1^3 + T(0)

= [n(n-1)/2]^2 + T(0)

T(n) = Θ(n^4)

Q 2(a) For a given problem P, two algorithms A1 and A2 have time complexities
T1(n) and T2(n) respectively in terms of size n, where
T1(n) = 100n^2 and
T2(n) = 2^n.
Find the minimum value of n for which A1 is more efficient than A2.

Solution 2(a):
Algorithm A1 has time complexity T1(n) = 100n^2.

Algorithm A2 has time complexity T2(n) = 2^n.

A1 is more efficient than A2 from the smallest n satisfying 100n^2 < 2^n.

At n = 14: 100·14^2 = 19600 > 2^14 = 16384, so A1 is still slower.

At n = 15: 100·15^2 = 22500 < 2^15 = 32768.

So the minimum value is n = 15.

(b) Apply quick sort to sort the list A, L, G, O, R, I, T, H, M in alphabetical order, and draw
the tree of recursive calls made.
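Solution (a worked sketch, assuming Lomuto partitioning with the last element of each sublist
as the pivot; a different pivot rule gives a different tree):

[A L G O R I T H M]  pivot M  →  [A L G I H]  M  [T R O]
[A L G I H]          pivot H  →  [A G]  H  [I L]
[A G]                pivot G  →  [A]  G
[I L]                pivot L  →  [I]  L
[T R O]              pivot O  →  O  [R T]
[R T]                pivot T  →  [R]  T

Reading the leaves and pivots in order gives the sorted list A G H I L M O R T.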

Q 3 (a) Sort the Array by Counting Sort.

0 2 3 6 0 3 1 4 0 3 2 1

Solution 3(a):

Index: 1 2 3 4 5 6 7 8 9 10 11 12
A:     0 2 3 6 0 3 1 4 0  3  2  1

Here k = 6 (the highest number in A).

For i = 0 to 6: C[i] = 0, i.e.

i: 0 1 2 3 4 5 6
C: 0 0 0 0 0 0 0

For j = 1 to 12: C[A[j]] = C[A[j]] + 1, giving the counts

i: 0 1 2 3 4 5 6
C: 3 2 2 3 1 0 1

For i = 1 to 6: C[i] = C[i] + C[i-1], giving the running totals (number of elements ≤ i)

i: 0 1 2 3 4 5 6
C: 3 5 7 10 11 11 12

j A[j] C[A[j]] B[C[A[j]]]←A[j] C[A[j]]←C[A[j]]-1


12 1 5 B[5] ←1 C[1] ←4
11 2 7 B[7] ←2 C[2] ←6
10 3 10 B[10] ←3 C[3] ←9
9 0 3 B[3] ←0 C[0] ←2
8 4 11 B[11] ←4 C[4] ←10
7 1 4 B[4] ←1 C[1] ←3
6 3 9 B[9] ←3 C[3] ←8
5 0 2 B[2] ←0 C[0] ←1
4 6 12 B[12] ←6 C[6] ←11
3 3 8 B[8] ←3 C[3] ←7
2 2 6 B[6] ←2 C[2] ←5
1 0 1 B[1] ←0 C[0] ←0

Final sorted array B:

Index: 1 2 3 4 5 6 7 8 9 10 11 12
B:     0 0 0 1 1 2 2 3 3  3  4  6
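The same procedure as straight-line C, matching the 1-indexed trace above (the caller supplies
A[1..n], an output array B[1..n], and the largest key k):

/* Stable counting sort of A[1..n] into B[1..n]; keys lie in 0..k. */
void countingSort(int A[], int B[], int n, int k)
{
    int C[k + 1];
    for (int i = 0; i <= k; i++) C[i] = 0;          /* clear the counts        */
    for (int j = 1; j <= n; j++) C[A[j]]++;         /* C[v] = number of v's    */
    for (int i = 1; i <= k; i++) C[i] += C[i - 1];  /* C[v] = # elements <= v  */
    for (int j = n; j >= 1; j--) {                  /* right-to-left => stable */
        B[C[A[j]]] = A[j];
        C[A[j]]--;
    }
}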

(b) Consider an array A = {3, 14, 27, 31, 39, 42, 55, 70, 74 }. Sort this Array by Heap Sort

Solution 3(b):

Array A=

3 14 27 31 39 42 55 70 74

First we call BUILD-MAX-HEAP. Heap-size(A) = 9.

So, for i = 4 down to 1, call MAX-HEAPIFY(A, i); after BUILD-MAX-HEAP the array is

74 70 55 31 39 42 27 14 3

Now, for i = 9 down to 2, exchange A[1] ↔ A[i], set size = size - 1, and call
MAX-HEAPIFY(A, 1) each time.

Exchanging A[1] ↔ A[9] and size = size - 1, we get

3 70 55 31 39 42 27 14 | 74

Now call MAX-HEAPIFY(A, 1); we get

70 39 55 31 3 42 27 14 | 74

Exchanging A[1] ↔ A[8], calling MAX-HEAPIFY(A, 1) again, and continuing the process, we get
the sorted array

3 14 27 31 39 42 55 70 74
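A sketch in C of the two CLRS routines used above, with 1-indexed arrays (the caller passes
A[1..n] and the current heap size explicitly):

void maxHeapify(int A[], int heapSize, int i)
{
    int l = 2 * i, r = 2 * i + 1, largest = i;
    if (l <= heapSize && A[l] > A[largest]) largest = l;
    if (r <= heapSize && A[r] > A[largest]) largest = r;
    if (largest != i) {
        int t = A[i]; A[i] = A[largest]; A[largest] = t;
        maxHeapify(A, heapSize, largest);   /* float the value down */
    }
}

void buildMaxHeap(int A[], int n)
{
    for (int i = n / 2; i >= 1; i--)        /* leaves are already heaps */
        maxHeapify(A, n, i);
}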

4 (a) Design an algorithm to find the log n largest numbers of an unsorted array of length n in linear time.

Solution 4(a):

QUICKSELECT Algorithm

QuickSelect(A, k)
  let r be chosen uniformly at random in the range 1 to length(A)
  let pivot = A[r]
  let A1, A2 be new arrays
  # split into a pile A1 of small elements and A2 of big elements
  for i = 1 to length(A)
    if A[i] < pivot then
      append A[i] to A1
    else if A[i] > pivot then
      append A[i] to A2
    else
      # do nothing: elements equal to the pivot stay out of both piles
  end for
  if k <= length(A1)
    # it's in the pile of small elements
    return QuickSelect(A1, k)
  else if k > length(A) - length(A2)
    # it's in the pile of big elements
    return QuickSelect(A2, k - (length(A) - length(A2)))
  else
    # it's equal to the pivot
    return pivot

(b) Prove that a red-black tree with n internal nodes has height at most 2 lg(n+1).

Solution 4(b):

In a RBT, no path from a node x to a leaf is more than twice as long as any other path
from x to a leaf.

Let bh(x) be the black height of x. Then the length of a longest path from x to a leaf is at most

2·bh(x) (a path on which red and black nodes alternate),

while the length of the shortest possible path from x to a leaf is at least

bh(x) (a path that contains only black nodes).

Hence the result.

A red-black tree with n internal nodes has height at most

2log(n + 1)

Consider any node x and the subtree rooted at x. We will first show that this subtree
has at least

2^bh(x) - 1 internal nodes.

We do this by induction on the height of x. If h(x) = 0, then bh(x) = 0 and x is a leaf, so
the subtree has no internal nodes, as corroborated by

2^0 - 1 = 0.

Let h(x) > 0 and let x1 and x2 be its two children.

Note that h(x1) and h(x2) are both ≤ h(x) - 1. Assume the result to be true for x1 and x2.
We shall show the result is true for x.

Now,

bh(x1) ≥ bh(x) - 1 and bh(x2) ≥ bh(x) - 1

(each child has black-height bh(x) if it is red, or bh(x) - 1 if it is black).

Therefore, the tree with root x1 has at least

2^(bh(x) - 1) - 1 internal nodes,

whereas the tree with root x2 has at least

2^(bh(x) - 1) - 1 internal nodes.

Thus the tree with root x has at least

1 + (2^(bh(x) - 1) - 1) + (2^(bh(x) - 1) - 1) = 2^bh(x) - 1 internal nodes.

To complete the proof, let h be the height of the tree. Then

bh(root) ≥ h/2.

Thus

n ≥ 2^(h/2) - 1

h ≤ 2 log2(n + 1).

Q 5 (a) Prove that the height of a B-tree with n keys and minimum degree t is at most log_t((n+1)/2).

Solution:

In a B-tree of minimum degree t ≥ 2, the root contains at least one key and every other node
contains at least t - 1 keys. Hence a B-tree of height h has at least 2 nodes at depth 1, at
least 2t nodes at depth 2, and in general at least 2t^(i-1) nodes at depth i. Counting keys,

n ≥ 1 + (t - 1) · Σ_{i=1}^{h} 2t^(i-1) = 1 + 2(t^h - 1) = 2t^h - 1.

Hence t^h ≤ (n + 1)/2, i.e. h ≤ log_t((n + 1)/2).

Most B-tree operations (search, insert, delete) examine O(h) nodes, so this logarithmic height
bound is what guarantees their efficiency.

(b) Write an algorithm to print the array in reverse order after applying quick sort.

Solution:

void quickSort(int arr[], int left, int right) {

      int i = left, j = right;
      int tmp;
      int pivot = arr[(left + right) / 2];

      /* partition */
      while (i <= j) {
            while (arr[i] < pivot)
                  i++;
            while (arr[j] > pivot)
                  j--;
            if (i <= j) {
                  tmp = arr[i];
                  arr[i] = arr[j];
                  arr[j] = tmp;
                  i++;
                  j--;
            }
      }

      /* recursion */
      if (left < j)
            quickSort(arr, left, j);
      if (i < right)
            quickSort(arr, i, right);
}
Then print the sorted array in reverse:

For i = n down to 1
      print arr[i]

Q 6. Insert the keys F, S, D, E, V, S, H, G, A, R, P, C, E, T, H, I, J, Y, M, H, Z into an empty
B-tree with degree 3.

Solution: The resulting B-tree with degree 3 is shown in the figure (diagram not reproduced in this scan).


Q 7. Insert the keys 2, 5, 15, 4, 8, 7, 9, 11, 3, 6 into an empty binomial heap.

Solution: The resulting binomial heap is shown in the figure (diagram not reproduced in this scan).
Q8. Write down all the cases of “Binomial Heap Union” of two binomial Heaps.
Solution:
CASE 1: Occurs when degree[x] ≠ degree[next-x]; the pointers simply move one position further
down the root list.
CASE 2: Occurs when x is the first of three roots of equal degree,
degree[x] = degree[next-x] = degree[sibling[next-x]]; the pointers again move one position down
the list, and the situation is handled by CASE 3 or 4 on a later iteration.
CASE 3 & 4: Occur when x is the first of exactly two roots of equal degree,
degree[x] = degree[next-x] ≠ degree[sibling[next-x]]
• Occur on the next iteration after any case
• Always occur immediately following CASE 2
• The two cases are distinguished by whether x or next-x has the smaller key
• The root with the smaller key becomes the root of the linked tree
Part C
Q 1 (a) Define the Various structural properties of Binomial Heap.

Solution:

A binomial heap H is a set of binomial trees that satisfies the following binomial-
heap properties.
1. Each binomial tree in H is heap-ordered: the key of a node is greater than or equal
to the key of its parent.

2. There is at most one binomial tree in H whose root has a given degree.

The first property tells us that the root of a heap-ordered tree contains the smallest key
in the tree.

(b) Write the algorithm to insert a node into a Fibonacci heap, and insert 32 into the given
Fibonacci heap (the heap diagram from the question is not reproduced in this scan).

The following procedure inserts node x into Fibonacci heap H, assuming of course
that the node has already been allocated and that key[x] has already been filled in.
FIB-HEAP-INSERT(H, x)
1 degree[x] ← 0
2 p[x] ← NIL
3 child[x] ← NIL
4 left[x] ← x
5 right[x] ← x
6 mark[x] ← FALSE
7 concatenate the root list containing x with the root list of H
8 if min[H] = NIL or key[x] < key[min[H]]
9    then min[H] ← x
10 n[H] ← n[H] + 1

After lines 1-6 initialize the structural fields of node x, making it its own circular,
doubly linked list, line 7 adds x to the root list of H in O(1) actual time. Thus,
node x becomes a single-node heap-ordered tree, and thus an unordered binomial tree,
in the Fibonacci heap. It has no children and is unmarked. Lines 8-9 then update the
pointer to the minimum node of Fibonacci heap H if necessary. Finally, line 10
increments n[H] to reflect the addition of the new node.
Unlike the BINOMIAL-HEAP-INSERT procedure, FIB-HEAP-INSERT makes no attempt to
consolidate the trees within the Fibonacci heap. If k consecutive FIB-HEAP-
INSERT operations occur, then k single-node trees are added to the root list.
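A structural sketch in C of the node fields that the pseudocode manipulates (the names mirror
CLRS; this is illustrative, not a complete heap implementation):

struct FibNode {
    int key;
    int degree;                        /* number of children */
    int mark;                          /* lost a child since becoming a child? */
    struct FibNode *parent, *child;
    struct FibNode *left, *right;      /* circular, doubly linked sibling list */
};

/* Splice node x into the root list just before min: O(1) pointer work,
   which is why FIB-HEAP-INSERT needs no consolidation. */
void rootListInsert(struct FibNode *min, struct FibNode *x)
{
    x->left = min->left;
    x->right = min;
    min->left->right = x;
    min->left = x;
}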

OR
Q1. If the knapsack capacity W = 8, weights w(i) = {1, 2, 2, 3, 4} and profits v(i) = {3, 5, 7, 6, 7},
find the maximum profit and the chosen weights by the 0/1 knapsack method.
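Solution (a worked sketch, using the standard recurrence c[i][w] = max(c[i-1][w],
c[i-1][w - w(i)] + v(i)); the full 5 × 9 DP table is omitted here): the best selection is items
1–4, whose weights 1 + 2 + 2 + 3 = 8 fill the capacity exactly, giving the maximum profit
3 + 5 + 7 + 6 = 21. No subset containing item 5 (weight 4, profit 7) does better within W = 8;
the best such subset is {2, 3, 5} with profit only 19.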

Q2 (a) Give an algorithm to find the kth smallest element in a sequence of n elements. Your
algorithm must run in linear time.

Sol:

1) Divide arr[] into ⌈n/5⌉ groups where the size of each group is 5,

except possibly the last group, which may have fewer than 5 elements.

2) Sort the ⌈n/5⌉ groups so created and find the median of each group. Create an auxiliary
array median[] and store the medians of all ⌈n/5⌉ groups in this median array.

// Recursively call this method to find the median of median[0..⌈n/5⌉-1]

3) medOfMed = kthSmallest(median[0..⌈n/5⌉-1], ⌈n/10⌉)

4) Partition arr[] around medOfMed and obtain its position.

pos = partition(arr, n, medOfMed)

5) If pos == k, return medOfMed

6) If pos > k, return kthSmallest(arr[l..pos-1], k)

7) If pos < k, return kthSmallest(arr[pos+1..r], k-pos+l-1)

(b) Prove that the height of a heap is O(log n).

Sol:

Theorem: A heap storing n keys has height O(log n).

Proof: (we apply the complete binary tree property)

Let h be the height of a heap storing n keys.

Since there are 2^i keys at depth i = 0, ..., h - 1 and at least one key at depth h, we
have n ≥ 1 + 2 + 4 + ... + 2^(h-1) + 1 = 2^h.

Thus n ≥ 2^h, i.e., h ≤ log n.

OR
Find the LCS of the sequences X = {A, B, C, B, D, A, B} and Y = {B, D, C, A, B, A}.

        j:  0   1   2   3   4   5   6   7
            -   A   B   C   B   D   A   B
i=0  -      0   0   0   0   0   0   0   0
i=1  B      0   0   1   1   1   1   1   1
i=2  D      0   0   1   1   1   2   2   2
i=3  C      0   0   1   2   2   2   2   2
i=4  A      0   1   1   2   2   2   3   3
i=5  B      0   1   2   2   3   3   3   4
i=6  A      0   1   2   2   3   3   4   4

The LCS is: BDAB (length 4, the value in the bottom-right cell).
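The table above is the same kind of array c[i][j] that a textbook LCS routine fills in; a sketch
in C (returns the length, 4 for the strings above):

/* Length of the longest common subsequence of X[0..m-1] and Y[0..n-1]. */
int lcsLength(const char *X, int m, const char *Y, int n)
{
    int c[m + 1][n + 1];
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                c[i][j] = 0;                        /* empty prefix */
            else if (X[i - 1] == Y[j - 1])
                c[i][j] = c[i - 1][j - 1] + 1;      /* match: extend the LCS */
            else
                c[i][j] = c[i - 1][j] > c[i][j - 1] ? c[i - 1][j] : c[i][j - 1];
        }
    return c[m][n];
}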
Q 3. Insert the nodes 15,13,12,16,19,23,5,8 in empty Red Black Tree and delete in the
reverse order of insertion.

OR
Find an optimal parenthesization of a matrix-chain product whose sequence of
dimensions is {5, 10, 3, 12, 5}
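Solution (a worked sketch using the standard m-table; here A1: 5×10, A2: 10×3, A3: 3×12, A4: 12×5):

m[1,2] = 5·10·3 = 150,  m[2,3] = 10·3·12 = 360,  m[3,4] = 3·12·5 = 180
m[1,3] = min(0 + 360 + 5·10·12, 150 + 0 + 5·3·12) = min(960, 330) = 330  (split after A2)
m[2,4] = min(0 + 180 + 10·3·5, 360 + 0 + 10·12·5) = min(330, 960) = 330  (split after A2)
m[1,4] = min(0 + 330 + 5·10·5, 150 + 180 + 5·3·5, 330 + 0 + 5·12·5)
       = min(580, 405, 630) = 405  (split after A2)

So an optimal parenthesization is (A1 A2)(A3 A4), costing 405 scalar multiplications.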
ECS502

DAA Solution
Section A

1. a. Why should we do asymptotic analysis of algorithms? Justify.

Sol: Algorithmic complexity is a very important topic in computer science. Knowing the
complexity of algorithms allows you to answer questions such as

 How long will a program run on an input?


 How much space will it take?
 Is the problem solvable?

These are important bases of comparison between different algorithms. An understanding of


algorithmic complexity provides programmers with insight into the efficiency of their code.
Complexity is also important to several theoretical areas in computer science, including
algorithms, data structures, and complexity theory.

When analyzing the running time or space usage of programs, we usually try to estimate the time
or space as function of the input size. For example, when analyzing the worst case running time
of a function that sorts a list of numbers, we will be concerned with how long it takes as a
function of the length of the input list. For example, we say the standard insertion sort takes
time T(n) where T(n) = c*n^2 + k for some constants c and k. In contrast, merge sort takes
time T'(n) = c'*n*log2(n) + k'.

The asymptotic behavior of a function f(n) (such as f(n) = c*n or f(n) = c*n^2, etc.) refers to the
growth of f(n) as n gets large. We typically ignore small values of n, since we are usually
interested in estimating how slow the program will be on large inputs. A good rule of thumb is:
the slower the asymptotic growth rate, the better the algorithm (although this is often not the
whole story).

b. Order the following expressions by their asymptotic growth and justify your answer.

Sol: n! > 2^(2n) > e^n > 2^n > (log n)!

(n! grows faster than any fixed exponential c^n; 2^(2n) = 4^n dominates e^n since 4 > e; and
(log n)! is only quasi-polynomial, so every exponential dominates it.)

c. How can you modify the quick sort algorithm to search for an item in a list?

Sol: The algorithm outline:

1. Choose a pivot value.


2. Partition the array (put all value less than the pivot in the left part of the array, then the
pivot itself, then all values greater than the pivot).
3. Recursively, sort the values less than the pivot.
4. Recursively, sort the values greater than the pivot.

Note that, as for merge sort, we need an auxiliary method with two extra parameters -- low and
high indexes to indicate which part of the array to sort. Also, although we could "recurse" all the
way down to a single item, in practice, it is better to switch to a sort like insertion sort when the
number of items to be sorted is small.

Using the randomized partition step and recursing only into the side that can contain the item
turns quick sort into a search procedure (randomized quickselect).

d. What is the all-pairs shortest path problem?

Sol: The all-pairs shortest path problem is the determination of the shortest graph
distances between every pair of vertices in a given graph. The problem can be solved using
applications of Dijkstra's algorithm or all at once using the Floyd-Warshall algorithm. The latter
algorithm also works in the case of a weighted graph where the edges have negative weights.

The matrix of all distances between pairs of vertices is called the graph distance matrix, or
sometimes the all-pairs shortest path matrix.

e. Define Convex Hull?

Sol: The convex hull of a set Q of points is the smallest convex polygon P such that every point
of Q lies either on the boundary of P or in its interior. Equivalently, it is the intersection of
all convex sets containing Q.

f. Discuss various properties of the binomial tree.

Sol:
• The binomial tree Bk is an ordered tree defined recursively:

B0 consists of a single node.

Bk consists of two binomial trees B(k-1) linked together: the root of one is the leftmost child
of the root of the other.

g. What are the steps for designing an algorithm?

Sol: Steps in the development of algorithms:

1. Problem definition
2. Development of a model
3. Specification of Algorithm
4. Designing an Algorithm
5. Checking the correctness of Algorithm
6. Analysis of Algorithm
7. Implementation of Algorithm
8. Program testing
9. Documentation Preparation

h. Prove that red black tree with n internal nodes has height at most 2log2 (n+1)

Ans:

Most of the BST operations (e.g., search, max, min, insert, delete..etc) take O(h) time
where h is the height of the BST. The cost of these operations may become O(n) for a
skewed Binary tree. If we make sure that height of the tree remains O(Logn) after every
insertion and deletion, then we can guarantee an upper bound of O(Logn) for all these
operations. The height of a Red Black tree is always O(Logn) where n is the number of
nodes in the tree.

Every Red Black Tree with n nodes has height <= 2Log2(n+1)

i. Prove that the maximum degree of an n-node binomial tree is log2 n.

Sol:

For all integers k ≥ 0, the following properties hold:

1. Bk has 2^k nodes.

2. Bk has height k.

3. For i = 0, ..., k, Bk has exactly C(k, i) nodes at depth i.

4. The root of Bk has degree k, and all other nodes in Bk have degree smaller than k.

5. If k ≥ 1, then the children of the root of Bk are B(k-1), B(k-2), ..., B0 from left to right.

By property 4, the maximum degree of an n-node binomial tree is lg n (the root of Bk, with
n = 2^k nodes, has degree k = lg n).

j. What do you understand by stable sort? Name two stable sorting algorithms.

Sol: A sorting algorithm is said to be stable if two objects with equal keys appear in the same
order in sorted output as they appear in the input unsorted array. Some sorting algorithms are
stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. And some sorting algorithms
are not, like Heap Sort, Quick Sort, etc.
However, any given sorting algo which is not stable can be modified to be stable. There can be
sorting algo specific ways to make it stable, but in general, any comparison based sorting
algorithm which is not stable by nature can be modified to be stable by changing the key
comparison operation so that the comparison of two keys considers position as a factor for
objects with equal keys.
Eg. Counting sort and radix sort

k. define greedy approach.

Sol: Greedy method:

The greedy method builds up a solution by expanding a partially constructed solution step by
step, at each step making the locally best (greedy) choice. It always yields a feasible solution,
but in general it guarantees only a locally optimal solution for a given problem.

Part C
10. (a) Why is the statement "the running time of an algorithm A is at least O(n^2)"
meaningless? Explain.

Sol: "The running time of algorithm A is at least O(n^2)"

"At least" is the same as "not less than".

"The running time of algorithm A is not less than O(n^2)"

O(n) is less than O(n^2). O(1) is less than O(n^2), and O(n*log(n)) is less than O(n^2), and
many others besides.

"The running time of algorithm A is not O(n)"


"The running time of algorithm A is not O(1)"
"The running time of algorithm A is not O(n*log(n))"
.. and many others besides.

The original statement, "The running time of algorithm A is at least O(n^2)", is 'shorthand'
for "The running time of algorithm A is not O(f(n)), where f(n) is any function in o(n^2)"
(note the use of little-o, different from big-O). Since O-notation only ever gives upper bounds,
the meaningful way to state a lower bound is with Ω-notation: "the running time of A is Ω(n^2)".

(b) What is the procedure PARTITION(A, p, r) in quick sort? Also define the complexity of
quick sort.
Sol:
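For the first part, a standard sketch of the partition procedure (Lomuto scheme, as in CLRS)
that rearranges A[p..r] in place around the pivot A[r] and returns the pivot's final index:

PARTITION(A, p, r)
1  x ← A[r]                 ▷ pivot
2  i ← p - 1
3  for j ← p to r - 1
4      if A[j] ≤ x
5          i ← i + 1
6          exchange A[i] ↔ A[j]
7  exchange A[i+1] ↔ A[r]
8  return i + 1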

QUICKSORT: Performance – a quick look.


• We first look at (apparent) worst-case partitioning:
T(n) = T(n-1) + T(0) + Θ(n) = T(n-1) + Θ(n).
It is easy to show – using substitution – that T(n) = O(n^2).
• We next look at (apparent) best-case partitioning:
T(n) = 2T(n/2) + Θ(n).
It is also easy to show (case 2 of the Master Theorem) that T(n) = Θ(n lg n).

Since the disparity between the two is substantial, we need to look further; a randomized
analysis shows the expected running time is Θ(n lg n).
11. What do you mean by branch and bound? How can TSP be solved using this
approach?

Sol: The Branch-and-Bound algorithm explicitly explores a large search space by
recursively branching on disjunctions of possible values assigned to various decisions.
Some of the recursions can be avoided if it can be established that no subsequent recursive
call contains a good solution. Typically, with optimization problems, recursions are
avoided by employing a bound on the optimal value: if a good solution was already
identified and the current bound is worse, there is no point in further recursion. In the rest
of the section we shall discuss a pure branching algorithm first, and add the bounding part
as a modification yielding better performance.
Algorithm branchAndBound(n, d)

Require: set Q of nodes (Y, N, S, c′) ordered by increasing c′ ∈ R

Let Q = {(∅, ∅, ∅, −∞)}

while Q ≠ ∅ do
    Let α = (Y, N, S, c′) be the element of Q with minimum c′;
    let Q = Q \ {α}
    if φ(α) = 1 then
        Let R = lowerBound(α)
        if d(R) < d(S) then
            if τ(R) = 0 then
                if Y ∪ N ≠ A then
                    choose a ∈ A \ (Y ∪ N)
                    Let β = (Y ∪ {a}, N, S, d(R));
                    let γ = (Y, N ∪ {a}, S, d(R))
                    Let Q = Q ∪ {β, γ}
                end if
            else Let S = R
            end if
        end if
    end if
end while
return S
12. (a) Discuss the relation between the classes P, NP, NP-complete, and NP-hard, with a
suitable example of each class.

Sol: Definition 1 of NP: A problem is said to be Nondeterministically Polynomial (NP) if we can
find a nondeterministic Turing machine that can solve the problem in a polynomial number of
nondeterministic moves.

 For those who are not familiar with Turing machines, two alternative definitions of NP
will be developed.

 Definition 2 of NP: A problem is said to be NP if

1. its solution comes from a finite set of possibilities, and

2. it takes polynomial time to verify the correctness of a candidate solution

 Remark: It is much easier and faster to "grade" a solution than to find a solution from
scratch.

 We use NP to designate the class of all nondeterministically polynomial problems.

 Clearly, P is a subset of NP

NP-hard

What does NP-hard mean? A lot of times you can solve a problem by reducing it to a different
problem. I can reduce Problem B to Problem A if, given a solution to Problem A, I can easily
construct a solution to Problem B. (In this case, "easily" means "in polynomial time.")

If a problem is NP-hard, this means I can reduce any problem in NP to that problem. This
means if I can solve that problem, I can easily solve any problem in NP. If we could solve an
NP-hard problem in polynomial time, this would prove P = NP.

NP-complete

A problem is NP-complete if the problem is both

 NP-hard, and
 in NP.

A technical point: O(n) actually means the algorithm runs in asymptotically linear time, which
means the time complexity approaches a line as n gets very large. Also, O(n) is technically an
upper bound, so if the algorithm ran in sublinear time you could still say it's O(n), even if that's
not the best description of it.
(b) Define approximation algorithm. What is approximation ratio? Give an approximation
algorithm for travelling salesman.

Sol: An approximation algorithm is a way of dealing with NP-completeness for
optimization problems. This technique does not guarantee the best solution. The goal
of an approximation algorithm is to come as close as possible to the optimum value in
a reasonable amount of time, which is at most polynomial time.

Suppose we have some optimization problem instance i, which has a large number of
feasible solutions. Also let c(i) be the cost of solution produced by approximate
algorithm and c*(i) be the cost of optimal solution. For minimization problem, we are
interested in finding a solution of a given instance i in the set of feasible solutions,
such that c(i)/c*(i) be as small as possible. On the other hand, for maximization
problem, we are interested in finding a solution in the feasible solution set such
that c*(i)/c(i) be as small as possible.

We say that an approximation algorithm for the given problem instance i has a ratio
bound of p(n) if, for any input of size n, the cost c of the solution produced by the
approximation algorithm is within a factor of p(n) of the cost c* of an optimal
solution. That is,
max(c(i)/c*(i), c*(i)/c(i)) ≤ p(n)

This definition applies for both minimization and maximization problems.

Note that p(n) is always greater than or equal to 1. If the solution produced by the
approximation algorithm is the true optimal solution, then clearly we have p(n) = 1.

For a minimization problem, 0 < c*(i) ≤ c(i), and the ratio c(i)/c*(i) gives the factor
by which the cost of the approximate solution is larger than the cost of an optimal
solution. Similarly, for a maximization problem, 0 < c(i) ≤ c*(i), and the
ratio c*(i)/c(i) gives the factor by which the cost of an optimal solution is larger than
the cost of the approximate solution.

Theorem: APPROX-TSP-TOUR is a polynomial-time 2-approximation algorithm for


TSP with triangle inequality.

Proof:

1. We have already shown that APPROX-TSP-TOUR runs in polynomial time.

2. Let H* denote the optimal tour. Observe that a TSP tour with one edge removed is a
spanning tree (not necessarily an MST).

It implies that the weight of the MST T is a lower bound on the cost of an optimal
tour:
c(T) ≤ c(H*)

A "full" walk, W, traverses every edge of the MST T exactly twice. That is,
c(W) = 2c(T)
which means
c(W) ≤ 2c(H*)
and we have
c(W)/c(H*) ≤ p(n) = 2

That is, the cost of the walk, c(W), produced by the algorithm is within a factor of
p(n) = 2 of the cost c(H*) of an optimal solution (by the triangle inequality, shortcutting
the walk into a tour does not increase its cost).
2016-17 AKTU question paper Solution DAA(NCS 501)

SECTION A
1.(a) List out the disadvantages of the divide and conquer approach.

Ans:

 One of the most common issues with this sort of algorithm is the fact that the
recursion is slow, which in some cases outweighs any advantages of this divide
and conquer process.
 Sometimes it can become more complicated than a basic iterative approach,
especially in cases with a large n. In other words, if someone wanted to add a
large amount of numbers together, if they just create a simple loop to add them
together, it would turn out to be a much simpler approach than it would be to
divide the numbers up into two groups, add these groups recursively, and then
add the sums of the two groups together.

(b) What are the fundamental steps involved in algorithm problem solving.

Ans:

• Understanding the problem
• Ascertain the capabilities of the computational device
• Exact / approximate solution
• Decide on the appropriate data structure
• Algorithm design techniques
• Methods of specifying an algorithm
• Proving an algorithm's correctness
• Analysing an algorithm

(c) Write a recursive function to find the nth Fibonacci number.

Ans:

int Fibonacci(int n)
{
    if (n == 0)
        return 0;
    else if (n == 1)
        return 1;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}

(d) Define binary heap.

Ans: A binary heap is a complete binary tree which satisfies the heap ordering
property. The ordering can be one of two types: the min-heap property, where the value of
each node is greater than or equal to the value of its parent, with the minimum-value
element at the root; or the max-heap property, where the value of each node is less than or
equal to the value of its parent, with the maximum-value element at the root.
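Because a complete binary tree packs into an array with no gaps, the heap's parent/child links
are pure index arithmetic; a small sketch in C (0-indexed, root at index 0):

int parent(int i) { return (i - 1) / 2; }
int left(int i)   { return 2 * i + 1; }
int right(int i)  { return 2 * i + 2; }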

(e) Briefly explain Prim's algorithm.

Ans: Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for
a weighted undirected graph. This means it finds a subset of the edges that forms
a tree including every vertex, where the total weight of all the edges in the tree is
minimized. The algorithm operates by building this tree one vertex at a time, from an
arbitrary starting vertex, at each step adding the cheapest possible connection from the
tree to another vertex, as in the sketch below.
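A minimal O(V^2) sketch in C over an adjacency matrix (V, the use of 0 for "no edge", and the
function name are assumptions of this illustration):

#include <limits.h>

#define V 5   /* assumed vertex count for this sketch */

/* Prim's MST: key[v] holds the cheapest edge weight connecting v
   to the growing tree; parent[] records the resulting tree edges. */
void primMST(int graph[V][V], int parent[V])
{
    int key[V], inMST[V];
    for (int v = 0; v < V; v++) { key[v] = INT_MAX; inMST[v] = 0; }
    key[0] = 0; parent[0] = -1;                /* grow from vertex 0 */

    for (int count = 0; count < V - 1; count++) {
        int u = -1;
        for (int v = 0; v < V; v++)            /* cheapest vertex not yet in tree */
            if (!inMST[v] && (u == -1 || key[v] < key[u]))
                u = v;
        inMST[u] = 1;
        for (int v = 0; v < V; v++)            /* relax edges leaving u */
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }
}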

(f) Define principle of optimality

Ans:The principle of optimality is the basic principle of dynamic programming, which


was developed by Richard Bellman: that an optimal path has the property that whatever
the initial conditions and control variables (choices) over some initial period, the control
(or decision variables) chosen over the remaining period must be optimal for the
remaining problem, with the state resulting from the early decisions taken to be the
initial condition.

(g) Write the various design techniques of Algorithm

Ans:
Greedy algorithm
Divide and conquer
Dynamic programing
Backtracking
Branch and bound
(h) Difference between Branch & bound and Backtracking techniques.
Ans:
Backtracking

 Backtracking is a general algorithm for finding all (or some) solutions to some
computational problems, notably constraint satisfaction problems, that incrementally
builds candidates to the solutions, and abandons each partial candidate c
("backtracks") as soon as it determines that c cannot possibly be completed to a valid
solution.
 It enumerates a set of partial candidates that, in principle, could be completed in
various ways to give all the possible solutions to the given problem. The completion
is done incrementally, by a sequence of candidate extension steps.
 Conceptually, the partial candidates are represented as the nodes of a tree structure,
the potential search tree. Each partial candidate is the parent of the candidates that
differ from it by a single extension step, the leaves of the tree are the partial
candidates that cannot be extended any further.
 It traverses this search tree recursively, from the root down, in depth-first order
(DFS). It realizes that it has made a bad choice & undoes the last choice by backing
up.
Branch And Bound

 A branch-and-bound algorithm consists of a systematic enumeration of candidate


solutions by means of state space search: the set of candidate solutions is thought of
as forming a rooted tree with the full set at the root.
 The algorithm explores branches of this tree, which represent subsets of the solution
set. Before enumerating the candidate solutions of a branch, the branch is checked
against upper and lower estimated bounds on the optimal solution, and is discarded
if it cannot produce a better solution than the best one found so far by the algorithm.
 It may traverse the tree in any following manner:
1. BFS (Breadth First Search) or (FIFO) Branch and Bound
2. D-Search or (LIFO) Branch and Bound
3. Least Count Search or (LC) Branch and Bound

(I) What is the running time complexity of the N queens problem?

Ans: The brute-force search space is O(n^n), since each of the n queens can occupy any of n
rows; restricting the search to one queen per column and backtracking on conflicts reduces the
placements explored to O(n!).
(j) Define P, NP and NP-complete in decision problems.

Ans:

P – Polynomial time. Problems which can be solved in polynomial time, taking time like O(n),
O(n^2), O(n^3). E.g.: finding the maximum element in an array, or checking whether a string is
a palindrome. There are many problems which can be solved in polynomial time.

NP – Nondeterministic Polynomial time. Problems for which no polynomial-time algorithm is
known, such as TSP (the travelling salesman problem) or subset sum (given a set of numbers,
does there exist a subset whose sum is zero?). NP problems are, however, checkable in
polynomial time: given a candidate solution, we can verify in polynomial time whether it is
correct.

NP-Complete – The problems which are both in NP and NP-hard are known as NP-Complete. If an
NP-Complete problem R is reducible to a problem Q, then Q is at least as hard as R; since R is
NP-hard, Q is also at least NP-hard, and it may be NP-complete as well.
SECTION B

2. Explain the concept of quick sort method and analyze its complexity with suitable
example.

Ans: Mathematical analysis of quicksort shows that, on average, the algorithm
takes O(n log n) comparisons to sort n items. In the worst case, it makes O(n^2)
comparisons, though this behavior is rare.

Quicksort is a comparison sort, meaning that it can sort items of any type for which a
"less-than" relation (formally, a total order) is defined. In efficient implementations it is
not a stable sort, meaning that the relative order of equal sort items is not preserved.
Quicksort can operate in-place on an array, requiring small additional amounts
of memory to perform the sorting.

Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into
two smaller sub-arrays: the low elements and the high elements. Quicksort can then
recursively sort the sub-arrays.

The steps are:

1. Pick an element, called a pivot, from the array.


2. Partitioning: reorder the array so that all elements with values less than the pivot
come before the pivot, while all elements with values greater than the pivot come
after it (equal values can go either way). After this partitioning, the pivot is in its
final position. This is called the partition operation.
3. Recursively apply the above steps to the sub-array of elements with smaller
values and separately to the sub-array of elements with greater values.
4. Explain the concept of merge sort with an example.
Ans: Sorting Problem: Sort a sequence of n elements into non-decreasing order.

• Divide: Divide the n-element sequence to be sorted into two subsequences of n/2
elements each
• Conquer: Sort the two subsequences recursively using merge sort.
• Combine: Merge the two sorted subsequences to produce the sorted answer.

Merge Sort – Example

Divide: the sequence 18 26 32 6 43 15 9 1 22 26 19 55 37 43 99 2 is halved repeatedly until
only single elements remain.

Conquer and combine: taking the left half 18 26 32 6 43 15 9 1 as an illustration, the single
elements are merged back up level by level:

18 | 26 | 32 | 6 | 43 | 15 | 9 | 1
18 26 | 6 32 | 15 43 | 1 9
6 18 26 32 | 1 9 15 43
1 6 9 15 18 26 32 43   (sorted sequence)
6. Write pseudocode for the 8 queens problem.
Ans:

SolveQueens(Integer boardSize, Queen queen[boardSize])
  i <- 0                                  // Begin by placing queen number 0
  while i < boardSize
      queen[i].row <- queen[i].row + 1    // Place queen[i] on the next row

      /* If queen[i] exceeds the row count, reset the queen and
         re-place queen[i-1] */
      if (queen[i].row >= boardSize)
          queen[i].row <- -1
          i <- i - 1
      else
          // While queen[i] is under attack (and still on the board), move it down the rows
          while (queen[i].row < boardSize and isUnderAttack(queen[i]))
              queen[i].row <- queen[i].row + 1
          // If queen[i] ran off the board, reset it and re-place queen[i-1]
          if (queen[i].row >= boardSize)
              queen[i].row <- -1
              i <- i - 1
          else
              i <- i + 1                  // queen[i] is safely placed; move to the next queen
  end while

Q7) Write a non-deterministic algorithm for sorting.

Ans:

Note that Choose function does not itself indicate success or failure. The algorithm
must take the result of Choose function and check for success or failure.
• Another useful way to view Choose is as an infinitely wise guesser (or oracle) – if a
correct guess can be made, Choose will make it.

• A deterministic interpretation of a nondeterministic algorithm can be made by


allowing unbounded parallelism in computation.

• We now combine the concept of decision problems and nondeterministic
algorithms. Consider the following nondeterministic algorithm for the 0-1 Knapsack
Decision Problem:

Procedure Non-Bknap(p, w, n, M, Q, X)
    For j = 1 to n do X(j) = Choose(0, 1)
    If ∑(1≤j≤n) w_j x_j > M or ∑(1≤j≤n) p_j x_j < Q
        Then Failure
        Else Success

• A successful termination is possible if and only if the answer to the decision problem
is “yes.” The complexity of the algorithm is O(n).

Non-deterministic Sorting:

Procedure Non-Sort(A, n)        // sort n positive integers in increasing order
    B(1:n) <- 0                 // initialize B to zero
    For j = 1 to n do
        k = Choose(1:n)
        If B(k) ≠ 0 Then Failure
        B(k) <- A(j)
    repeat
    For j = 1 to n-1 do         // verify order
        If B(j) > B(j+1) Then Failure
    repeat
    print(B)
    Success

The complexity of non-deterministic sorting is O(n).

Q8) What is backtracking? Write a general iterative algorithm for backtracking.

Ans:
Backtracking is a general algorithm for finding all (or some) solutions to
some computational problems, notably constraint satisfaction problems, that
incrementally builds candidates to the solutions and abandons each partial
candidate ("backtracks") as soon as it determines that the candidate cannot
possibly be completed to a valid solution.

The classic textbook example of the use of backtracking is the eight queens
puzzle, that asks for all arrangements of eight chess queens on a
standard chessboard so that no queen attacks any other. In the common
backtracking approach, the partial candidates are arrangements of k queens in
the first k rows of the board, all in different rows and columns. Any partial
solution that contains two mutually attacking queens can be abandoned.

• Iterative backtracking algorithm:

algorithm ibacktrack(n)
// Iterative backtracking process
// All solutions are generated in x[1:n] and printed as soon as they are found
{
    k = 1;
    while (k != 0)
        if (there remains an untried x[k] in T(x[1], x[2], ..., x[k-1])
            and B_k(x[1], ..., x[k]) is true)
            if (x[1], ..., x[k] is a path to an answer node)
                write(x[1:k]);
            k = k + 1;    // Consider the next set
        else
            k = k - 1;    // Backtrack to the previous set
}

Q9) Differentiate NP-complete from NP-hard.

NP-Complete

NP-Complete is a complexity class which represents the set of all problems X in


NP for which it is possible to reduce any other NP problem Y to X in polynomial
time.
Intuitively this means that we can solve Y quickly if we know how to
solve X quickly. Precisely, Y is reducible to X if there is a polynomial-time
algorithm f to transform instances y of Y into instances x = f(y) of X, with the
property that the answer to y is yes if and only if the answer to f(y) is yes.
Example
3-SAT. This is the problem wherein we are given a conjunction (ANDs) of 3-
clause disjunctions (ORs), statements of the form
1. (x_v11 OR x_v21 OR x_v31) AND
2. (x_v12 OR x_v22 OR x_v32) AND
3. ... AND
4. (x_v1n OR x_v2n OR x_v3n)

NP-hard

Intuitively, these are the problems that are at least as hard as the NP-complete
problems. Note that NP-hard problems do not have to be in NP, and they do not have
to be decision problems.

The precise definition here is that a problem X is NP-hard if there is an
NP-complete problem Y such that Y is reducible to X in polynomial time.

But since any NP-complete problem can be reduced to any other NP-complete problem in
polynomial time, all NP-complete problems can be reduced to any NP-hard problem in
polynomial time. Then, if there is a solution to one NP-hard problem in polynomial
time, there is a solution to all NP problems in polynomial time.
Q12) Prove that the three-colouring problem is NP-complete.

Ans:

The minimum colouring problem asks for the smallest k to properly colour G. The k-
colouring problem asks whether G can be properly coloured using at most k colours.
We call the subset of vertices that receive the same colour a colour class. It is easy to
see that every colour class is an independent set (yeah?). Notice that we’ve
encountered the colouring problem before! Recall in Assignment 1, we wanted to send
the minimum number of journalists to cover all the world cup games. Well the subset
of games a journalist got is a colour class! We were able to solve this optimization
problem in O(|V | log |V |) time, so how can we claim that the k-colouring decision
problem is NP-complete? It is the same reasoning for the Independent Set problem.
One way to deal with NP-completeness is to

restrict the problem to subsets of the input (in this assignment, we restricted
“arbitrary” graphs to interval graphs). In this lecture we will show that the colouring
problem on arbitrary graphs becomes NP-complete even for k = 3! Crazy! No? Think
about it: one easy way to rule out a 3-colouring is to check if G has a clique of size 4
(as in the example figure, not reproduced in this scan). How hard is it to check for a
clique of size at least k = 4? We just showed that CLIQUE is NP-complete!
