DAA Solution All Years Previous Year Paper
DAA Solution NCS-501
Ans:
The characteristics of a good algorithm are:
Precision – the steps are precisely stated (defined).
Uniqueness – the result of each step is uniquely defined and depends only on the input and the results of the preceding steps.
Finiteness – the algorithm stops after a finite number of instructions are executed.
Input – the algorithm receives input.
Output – the algorithm produces output.
Generality – the algorithm applies to a set of inputs.
A divide-and-conquer algorithm works by:
Breaking the problem into several sub-problems that are similar to the original problem but smaller in size,
Solving the sub-problems recursively (successively and independently), and then
Combining these solutions to the sub-problems to create a solution to the original problem.
A complete analysis of the running time of an algorithm involves the following steps:
Implement the algorithm completely.
Determine the time required for each basic operation.
Identify unknown quantities that can be used to describe the frequency of execution of
the basic operations.
Develop a realistic model for the input to the program.
Analyze the unknown quantities, assuming the modelled input.
Calculate the total running time by multiplying the time by the frequency for each
operation, then adding all the products.
Ans: The time complexity of an algorithm signifies the total time required by the program to run to completion. It is most commonly expressed using big-O notation.
Time complexity is most commonly estimated by counting the number of elementary operations performed by the algorithm. Since an algorithm's performance may vary with different types of input data, we usually use the worst-case time complexity of an algorithm, because that is the maximum time taken over all inputs of a given size.
Ans:
The output is an array of the items, in order by their keys. Because of its application to radix sort, it is important for counting sort to be a stable sort: if two items have the same key, they retain the same relative position in the output as they had in the input.
1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem
has been solved already, then just return the saved answer. If it has not been solved, solve it and
save the answer. This is usually easy to think of and very intuitive. This is referred to
as Memoization.
2.) Bottom-Up : Analyze the problem and see the order in which the sub-problems are solved
and start solving from the trivial subproblem, up towards the given problem. In this process, it is
guaranteed that the subproblems are solved before solving the problem. This is referred to
as Dynamic Programming.
1. Brute force
2. Greedy algorithms
3. Divide-and-conquer, decrease-and-conquer
4. Dynamic programming
5. Transform-and-conquer
6. Backtracking and branch-and-bound
7. Genetic algorithms
Using the same analysis, one can show that randomized quicksort has the desirable property that,
for any input, it requires only O(n log n) expected time (averaged over all choices of pivots).
However, there also exists a combinatorial proof.
To each execution of quicksort corresponds the following binary search tree (BST): the initial
pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right
half is the root of the right subtree, and so on. The number of comparisons of the execution of
quicksort equals the number of comparisons during the construction of the BST by a sequence of
insertions. So, the average number of comparisons for randomized quicksort equals the average
cost of constructing a BST when the values inserted form a random
permutation.
Q1(a) T(n) = 6T(n/3) + n^2 log n
Solution 1(a):
T(n) = 6T(n/3) + n^2 log n
Here a = 6, b = 3, f(n) = n^2 log n.
Since f(n) = n^2 log n is polynomially larger than n^(log_b a) = n^(log_3 6) ≈ n^1.631, case 3 of the master theorem applies (the regularity condition also holds), so
T(n) = Θ(n^2 log n).
Solution 1(b):
T(n) =T(n-1)+(n-1)3
=[T(n-2)+(n-2)3]+(n-1)3
=[T(n-3)+(n-3)3]+(n-2)3+(n-1)3
.........................................................
=(n-1)^3 + (n-2)^3 + (n-3)^3 + … + 2^3 + 1^3 + T(0)
= [ (n-1)n / 2 ]^2 + T(0)        (sum of the first n-1 cubes)
T(n) = Θ(n^4)
Q 2(a) For a given problem P, two algorithms A1 and A2 have time complexities
T1(n) and T2(n) respectively in terms of size n where
T1(n) = 100n^2 and
T2(n) = 2^n
Find the minimum value of n for which A1 is more efficient than A2
Solution 2(a):
Algorithm A1 has time complexity T1(n) = 100n^2, and A2 has T2(n) = 2^n. A1 is more efficient than A2 when 100n^2 < 2^n.
For n = 14: 100(14)^2 = 19600 and 2^14 = 16384, so A1 is still slower.
For n = 15: 100(15)^2 = 22500 and 2^15 = 32768, so 100n^2 < 2^n.
Hence the minimum value of n for which A1 is more efficient than A2 is n = 15.
(b) Apply Quick sort to sort the list A, L, G, O, R, I, T, H, M in alphabetical order, and draw the tree of recursive calls made.
Input array for 3(a): 0 2 3 6 0 3 1 4 0 3 2 1
Solution 3(a):
index j:  1  2  3  4  5  6  7  8  9 10 11 12
A[j]:     0  2  3  6  0  3  1  4  0  3  2  1
For i = 0 to 6: C[i] = 0
i:     0  1  2  3  4  5  6
C[i]:  0  0  0  0  0  0  0
For j = 1 to 12: C[A[j]] = C[A[j]] + 1 (count of each key)
i:     0  1  2  3  4  5  6
C[i]:  3  2  2  3  1  0  1
For i = 1 to 6: C[i] = C[i] + C[i-1] (number of keys ≤ i)
i:     0  1  2  3  4  5  6
C[i]:  3  5  7 10 11 11 12
Output array B (scanning A from right to left for stability):
index:  1  2  3  4  5  6  7  8  9 10 11 12
B:      0  0  0  1  1  2  2  3  3  3  4  6
(b) Consider an array A = {3, 14, 27, 31, 39, 42, 55, 70, 74 }. Sort this Array by Heap Sort
Solution 3(b):
Array A=
3 14 27 31 39 42 55 70 74
So, for i = 4 down to 1, call MAX-HEAPIFY(A, i); after BUILD-MAX-HEAP the array is
74 70 55 31 39 42 27 14 3
Now, for i = 9 down to 2, exchange A[1] ↔ A[i], set heap-size = heap-size - 1, and call MAX-HEAPIFY(A, 1) each time. After the first exchange and MAX-HEAPIFY the array is
70 39 55 31 3 42 27 14 | 74
Continuing the process, we get the sorted array
3 14 27 31 39 42 55 70 74
4 (a) Design an algorithm to find the k-th smallest number of an unsorted array of length n in linear time.
Solution 4(a):
QUICKSELECT Algorithm
QuickSelect(A, k)
  let r be chosen uniformly at random in the range 1 to length(A)
  let pivot = A[r]
  let A1, A2 be new, empty arrays
  # split into a pile A1 of small elements and A2 of big elements
  for i = 1 to length(A)
      if A[i] < pivot then
          append A[i] to A1
      else if A[i] > pivot then
          append A[i] to A2
      else
          # A[i] = pivot: do nothing
  end for
  if k <= length(A1)
      # it's in the pile of small elements
      return QuickSelect(A1, k)
  else if k > length(A) - length(A2)
      # it's in the pile of big elements
      return QuickSelect(A2, k - (length(A) - length(A2)))
  else
      # it's equal to the pivot
      return pivot
(b) Prove that a red-black tree with n internal nodes has height at most 2 lg(n+1).
Solution 4(b):
In an RBT, no path from a node x to a leaf is more than twice as long as any other path from x to a leaf.
Let bh(x) be the black-height of x. Since at least half of the nodes on any path from the root to a leaf (not counting the root) must be black, bh(root) ≥ h/2.
Claim: the subtree rooted at any node x contains at least 2^bh(x) - 1 internal nodes.
We prove the claim by induction on the height of x. If h(x) = 0, then x is a leaf, and its subtree contains
2^0 - 1 = 0
internal nodes, as required.
Let h(x) > 0 and let x1 and x2 be its two children.
Note that h(x1), h(x2) are both ≤ h(x) - 1, and each child has black-height bh(x) or bh(x) - 1, depending on its colour. Assume the result to be true for x1 and x2.
Then the subtree rooted at x contains at least (2^(bh(x)-1) - 1) + (2^(bh(x)-1) - 1) + 1 = 2^bh(x) - 1 internal nodes, so the result is true for x.
Now, applying the claim to the root,
n ≥ 2^bh(root) - 1 ≥ 2^(h/2) - 1
Thus
h ≤ 2 lg(n + 1)
Solution:
Most of the BST operations (e.g., search, max, min, insert, delete, etc.) take O(h) time, where h is
the height of the BST. The cost of these operations may become O(n) for a skewed binary tree. If
we make sure that the height of the tree remains O(log n) after every insertion and deletion, then we
can guarantee an upper bound of O(log n) for all these operations. The height of a red-black tree
is always O(log n), where n is the number of nodes in the tree.
Every Red Black Tree with n nodes has height <= 2Log2(n+1)
(b) Write an algorithm to print the array in reverse order after applying Quick sort.
Solution:
/* quickSort(arr, left, right) sorts arr[left..right] in ascending order */
void quickSort(int arr[], int left, int right) {
    int i = left, j = right, tmp;
    int pivot = arr[(left + right) / 2];
    /* partition */
    while (i <= j) {
        while (arr[i] < pivot)
            i++;
        while (arr[j] > pivot)
            j--;
        if (i <= j) {
            tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
            i++;
            j--;
        }
    }
    /* recursion */
    if (left < j)
        quickSort(arr, left, j);
    if (i < right)
        quickSort(arr, i, right);
}
After sorting in ascending order, print in reverse:
For i = n down to 1
    print arr[i]
Solution:
A binomial heap H is a set of binomial trees that satisfies the following binomial-
heap properties.
1. Each binomial tree in H is heap-ordered: the key of a node is greater than or equal
to the key of its parent.
2. There is at most one binomial tree in H whose root has a given degree.
The first property tells us that the root of a heap-ordered tree contains the smallest key
in the tree.
(b) Write the algorithm for insert a node in Fibonacci heap and insert 32 in Fibonacci Heap.
[Figure: a Fibonacci heap H with keys 6, 8, 1, 7, 9, 2, 3, 4, 5 into which 32 is to be inserted]
The following procedure inserts node x into Fibonacci heap H, assuming of course
that the node has already been allocated and that key[x] has already been filled in.
FIB-HEAP-INSERT(H, x)
1  degree[x] ← 0
2  p[x] ← NIL
3  child[x] ← NIL
4  left[x] ← x
5  right[x] ← x
6  mark[x] ← FALSE
7  concatenate the root list containing x with root list of H
8  if min[H] = NIL or key[x] < key[min[H]]
9      then min[H] ← x
10 n[H] ← n[H] + 1
Lines 1-6 initialize the structural fields of node x, making it its own circular,
doubly linked list. Line 7 adds x to the root list of H in O(1) actual time. Thus,
node x becomes a single-node heap-ordered tree, and thus an unordered binomial tree,
in the Fibonacci heap. It has no children and is unmarked. Lines 8-9 then update the
pointer to the minimum node of Fibonacci heap H if necessary. Finally, line 10
increments n[H] to reflect the addition of the new node.
Unlike the BINOMIAL-HEAP-INSERT procedure, FIB-HEAP-INSERT makes no attempt to
consolidate the trees within the Fibonacci heap. If k consecutive FIB-HEAP-
INSERT operations occur, then k single-node trees are added to the root list.
OR
Q1. If the knapsack capacity is W = 8, the weights are w(i) = {1, 2, 2, 3, 4} and the profits are v(i) = {3, 5, 7, 6, 7}, find the maximum profit and the chosen weights by the 0/1 knapsack method.
Sol:
Divide the n elements into groups of 5, except possibly the last group, which may have fewer than 5 elements.
Sol:
Since there are 2^i keys at depth i = 0, …, h - 1 and at least one key at depth h, we
have n ≥ 1 + 2 + 4 + … + 2^(h-1) + 1 = 2^h, so h ≤ lg n.
OR
Find the LCS of sequences X = {A, B, C, B, D, A, B} and Y = {B, D, C, A, B, A}.
        A  B  C  B  D  A  B
     0  1  2  3  4  5  6  7
  0  0  0  0  0  0  0  0  0
B 1  0  0  1  1  1  1  1  1
D 2  0  0  1  1  1  2  2  2
C 3  0  0  1  2  2  2  2  2
A 4  0  1  1  2  2  2  3  3
B 5  0  1  2  2  3  3  3  4
A 6  0  1  2  2  3  3  4  4
The LCS is: BDAB (length 4)
Q 3. Insert the nodes 15,13,12,16,19,23,5,8 in empty Red Black Tree and delete in the
reverse order of insertion.
OR
Find an optimal parenthesization of a matrix-chain product whose sequence of
dimensions is {5, 10, 3, 12, 5}
ECS502
DAA Solution
Section A
Sol: Algorithmic complexity is a very important topic in computer science. Knowing the
complexity of algorithms allows you to answer questions such as how long a program will run on an input, and how the running time grows as the input gets larger.
When analyzing the running time or space usage of programs, we usually try to estimate the time
or space as function of the input size. For example, when analyzing the worst case running time
of a function that sorts a list of numbers, we will be concerned with how long it takes as a
function of the length of the input list. For example, we say the standard insertion sort takes
time T(n) where T(n)= c*n2+k for some constants c and k. In contrast, merge sort takes
time T '(n) = c'*n*log2(n) + k'.
The asymptotic behavior of a function f(n) (such as f(n)=c*n or f(n)=c*n2, etc.) refers to the
growth of f(n) as n gets large. We typically ignore small values of n, since we are usually
interested in estimating how slow the program will be on large inputs. A good rule of thumb is:
the slower the asymptotic growth rate, the better the algorithm (although this is often not the
whole story).
b. order the following expressions by their asymptotic growth and justify your answer.
c. how can you modify quick sort algorithm to search an item in a list?
Note that, as for merge sort, we need an auxiliary method with two extra parameters -- low and
high indexes to indicate which part of the array to sort. Also, although we could "recurse" all the
way down to a single item, in practice, it is better to switch to a sort like insertion sort when the
number of items to be sorted is small.
Sol: The all-pairs shortest path problem is the determination of the shortest graph
distances between every pair of vertices in a given graph. The problem can be solved using
applications of Dijkstra's algorithm or all at once using the Floyd-Warshall algorithm. The latter
algorithm also works in the case of a weighted graph where the edges have negative weights.
The matrix of all distances between pairs of vertices is called the graph distance matrix, or
sometimes the all-pairs shortest path matrix.
Sol:
Sol:
• The binomial tree Bk is an ordered tree defined recursively
1. Problem definition
2. Development of a model
3. Specification of Algorithm
4. Designing an Algorithm
5. Checking the correctness of Algorithm
6. Analysis of Algorithm
7. Implementation of Algorithm
8. Program testing
9. Documentation Preparation
h. Prove that red black tree with n internal nodes has height at most 2log2 (n+1)
Ans:
Most of the BST operations (e.g., search, max, min, insert, delete..etc) take O(h) time
where h is the height of the BST. The cost of these operations may become O(n) for a
skewed Binary tree. If we make sure that height of the tree remains O(Logn) after every
insertion and deletion, then we can guarantee an upper bound of O(Logn) for all these
operations. The height of a Red Black tree is always O(Logn) where n is the number of
nodes in the tree.
Every Red Black Tree with n nodes has height <= 2Log2(n+1)
Sol:
1. Bk has 2^k nodes.
2. Bk has height k.
3. There are exactly C(k, i) nodes at depth i, for i = 0, 1, …, k.
4. The root of Bk has degree k and all other nodes in Bk have degree smaller than k.
5. If k ≥ 1, then the children of the root of Bk are Bk−1, Bk−2, …, B0 from left to right.
j. what do you understand by stable sort? Name two stable sorting algorithm.
Sol: A sorting algorithm is said to be stable if two objects with equal keys appear in the same
order in sorted output as they appear in the input unsorted array. Some sorting algorithms are
stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. And some sorting algorithms
are not, like Heap Sort, Quick Sort, etc.
However, any given sorting algorithm which is not stable can be modified to be stable. There can be
algorithm-specific ways to make it stable, but in general, any comparison-based sorting
algorithm which is not stable by nature can be modified to be stable by changing the key
comparison operation so that the comparison of two keys considers original position as a factor
for objects with equal keys.
E.g., counting sort and radix sort are also stable by nature.
Part C
10. (a) Why is the statement "the running time of an algorithm A is at least O(n²)" meaningless? Explain.
Sol: Big-O notation expresses only an upper bound, so "at least O(n²)" excludes nothing: O(n) is contained in O(n²), O(1) is contained in O(n²), O(n log n) is contained in O(n²), and
many others besides.
The original statement, "the running time of algorithm A is at least O(n²)", is 'shorthand'
for "the running time of algorithm A is not O(f(n)), where f(n) is any function in o(n²)"
(note the use of little-o, different from big-O). The meaningful way to state a lower bound is with Ω-notation: "the running time of A is Ω(n²)".
(b)what is the procedure of partition (A,p,r) in Quick sort and also define the
complexity of quick sort.
Sol:
11. What do you mean by branch and bound? How TSP can be solved using this
approach.
while Q ≠ ∅
    do let α = (Y, N, S, c′) be the element of Q with minimum c′;
       let Q = Q \ {α}
       if φ(α) = 1 then
           let R = lowerBound(α)
           if d(R) < d(S) then
               if τ(R) = 0 then
                   if Y ∪ N ≠ A then
                       choose a ∈ A \ (Y ∪ N)
                       let Q = Q ∪ {β, γ}
                   end if
               else let S = R
               end if
           end if
       end if
end while
return S
12. (a) discuss the relation between the class P, NP, NPC, and NP-hard with suitable
example of each class
For those who are not familiar with Turing machines, two alternative definitions of NP
will be developed.
Remark: It is much easier and faster to "grade" a solution than to find a solution from
scratch.
Clearly, P is a subset of NP
NP-hard
What does NP-hard mean? A lot of times you can solve a problem by reducing it to a different
problem. I can reduce Problem B to Problem A if, given a solution to Problem A, I can easily
construct a solution to Problem B. (In this case, "easily" means "in polynomial time.")
If a problem is NP-hard, this means I can reduce any problem in NP to that problem. This
means if I can solve that problem, I can easily solve any problem in NP. If we could solve an
NP-hard problem in polynomial time, this would prove P = NP.
NP-complete
NP-hard, and
in NP.
A technical point: O(n) actually means the algorithm runs in asymptotically linear time, which
means the time complexity approaches a line as n gets very large. Also, O(n) is technically an
upper bound, so if the algorithm ran in sublinear time you could still say it's O(n), even if that's
not the best description of it.
(b) Define approximation algorithm. What is approximation ratio? Give an approximation
algorithm for travelling salesman.
Suppose we have some optimization problem instance i, which has a large number of
feasible solutions. Also let c(i) be the cost of solution produced by approximate
algorithm and c*(i) be the cost of optimal solution. For minimization problem, we are
interested in finding a solution of a given instance i in the set of feasible solutions,
such that c(i)/c*(i) be as small as possible. On the other hand, for maximization
problem, we are interested in finding a solution in the feasible solution set such
that c*(i)/c(i) be as small as possible.
We say that an approximation algorithm for the given problem instance i has a ratio
bound of p(n) if, for any input of size n, the cost c of the solution produced by the
approximation algorithm is within a factor of p(n) of the cost c* of an optimal
solution. That is,
max(c(i)/c*(i), c*(i)/c(i)) ≤ p(n)
For a minimization problem, 0 < c*(i) ≤ c(i), and the ratio c(i)/c*(i) gives the factor
by which the cost of the approximate solution is larger than the cost of an optimal
solution. Similarly, for a maximization problem, 0 < c(i) ≤ c*(i), and the
ratio c*(i)/c(i) gives the factor by which the cost of an optimal solution is larger than
the cost of the approximate solution.
Proof:
Deleting any edge from an optimal tour H* yields a spanning tree, so the weight of the MST T is a lower bound on the cost of an optimal
tour:
c(T) ≤ c(H*)
A "Full" walk, W, traverse every edge of MST, T, exactly twice. That is,
c(w) = 2c(T)
which means
c(w) ≤ 2c(H*)
and we have
c(w)/c(H*) ≤ p(n) = 2
That is, the cost of walk, c(w), of the solution produced by the algorithm is within a
factor of p(n)=2 of the cost c(H*) of an optimal solution.
2016-17 AKTU question paper Solution DAA(NCS 501)
SECTION A
1.(a) List out the disadvantage of divide and conquer algorithm.
Ans:
One of the most common issues with this sort of algorithm is the fact that the
recursion is slow, which in some cases outweighs any advantages of this divide
and conquer process.
Sometimes it can become more complicated than a basic iterative approach,
especially in cases with a large n. In other words, if someone wanted to add a
large amount of numbers together, if they just create a simple loop to add them
together, it would turn out to be a much simpler approach than it would be to
divide the numbers up into two groups, add these groups recursively, and then
add the sums of the two groups together.
(b) What are the fundamental steps involved in algorithm problem solving.
Ans:
Ans:
int Fibonacci(int n)
{
    if (n == 0)
        return 0;
    else if (n == 1)
        return 1;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
Ans: A binary heap is a complete binary tree which satisfies the heap-ordering
property. The ordering can be one of two types. Under the min-heap property, the value of
each node is greater than or equal to the value of its parent, with the minimum-value
element at the root. Under the max-heap property, the value of each node is less than or equal to the value of its parent, with the maximum-value element at the root.
Ans: Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for
a weighted undirected graph. This means it finds a subset of the edges that forms
a tree that includes every vertex, where the total weight of all the edges in the tree is
minimized. The algorithm operates by building this tree one vertex at a time, from an
arbitrary starting vertex, at each step adding the cheapest possible connection from the
tree to another vertex.
Ans:
Greedy algorithm
Divide and conquer
Dynamic programing
Backtracking
Branch and bound
(h) Difference between Branch & bound and Backtracking techniques.
Ans:
Backtracking
Backtracking is a general algorithm for finding all (or some) solutions to some
computational problems, notably constraint satisfaction problems, that incrementally
builds candidates to the solutions, and abandons each partial candidate c
("backtracks") as soon as it determines that c cannot possibly be completed to a valid
solution.
It enumerates a set of partial candidates that, in principle, could be completed in
various ways to give all the possible solutions to the given problem. The completion
is done incrementally, by a sequence of candidate extension steps.
Conceptually, the partial candidates are represented as the nodes of a tree structure,
the potential search tree. Each partial candidate is the parent of the candidates that
differ from it by a single extension step, the leaves of the tree are the partial
candidates that cannot be extended any further.
It traverses this search tree recursively, from the root down, in depth-first order
(DFS). It realizes that it has made a bad choice & undoes the last choice by backing
up.
Branch And Bound
Branch and bound also explores a state-space tree of partial solutions, but in addition it computes a bound on the best value attainable from each node; a node is pruned ("bounded") when its bound shows it cannot improve on the best complete solution found so far. Unlike backtracking's depth-first order, branch and bound may explore nodes in best-first or breadth-first order, and it is typically used for optimization problems.
Ans:
P- Polynomial time solving . Problems which can be solved in polynomial time, which
take time like O(n), O(n2), O(n3). Eg: finding maximum element in an array or to check
whether a string is palindrome or not. so there are many problems which can be solved
in polynomial time.
NP- Nondeterministic polynomial time. Problems for which no polynomial-time algorithm is
known, like TSP (travelling salesman problem); an easy example of this is
subset sum: given a set of numbers, does there exist a subset whose sum is zero? But NP
problems are checkable in polynomial time, meaning that given a candidate solution,
we can check whether the solution is correct or not in polynomial time.
NP-Complete -- The group of problems which are both in NP and NP-hard are known
as NP-complete problems. Now suppose we have an NP-complete problem R that is
reducible to Q; then Q is at least as hard as R, and since R is NP-hard,
Q will also be at least NP-hard; it may be NP-complete as well.
SECTION B
2. Explain the concept of quick sort method and analyze its complexity with suitable
example.
Quicksort is a comparison sort, meaning that it can sort items of any type for which a
"less-than" relation (formally, a total order) is defined. In efficient implementations it is
not a stable sort, meaning that the relative order of equal sort items is not preserved.
Quicksort can operate in-place on an array, requiring small additional amounts
of memory to perform the sorting.
Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into
two smaller sub-arrays: the low elements and the high elements. Quicksort can then
recursively sort the sub-arrays.
By contrast, merge sort splits at the midpoint rather than around a pivot:
• Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: Sort the two subsequences recursively using merge sort.
• Combine: Merge the two sorted subsequences to produce the sorted answer.
Merge Sort – Example
Original sequence:  18 26 32 6 43 15 9 1
Split:              18 26 32 6 | 43 15 9 1
Split:              18 26 | 32 6 | 43 15 | 9 1
Merge:              18 26 | 6 32 | 15 43 | 1 9
Merge:              6 18 26 32 | 1 9 15 43
Sorted sequence:    1 6 9 15 18 26 32 43
6. write pseudo code for 8 queen problem.
Ans:
Note that Choose function does not itself indicate success or failure. The algorithm
must take the result of Choose function and check for success or failure.
• Another useful way to view Choose is as an infinitely wise guesser (or oracle) – if a
correct guess can be made, Choose will make it.
• A successful termination is possible if and only if the answer to the decision problem
is “yes.” The complexity of the algorithm is O(n)
Ans:
Backtracking is a general algorithm for finding all (or some) solutions to
some computational problems, notably constraint satisfaction problems, that
incrementally builds candidates to the solutions, and abandons each partial
candidate ("backtracks") as soon as it determines that the candidate cannot
possibly be completed to a valid solution.
The classic textbook example of the use of backtracking is the eight queens
puzzle, that asks for all arrangements of eight chess queens on a
standard chessboard so that no queen attacks any other. In the common
backtracking approach, the partial candidates are arrangements of k queens in
the first k rows of the board, all in different rows and columns. Any partial
solution that contains two mutually attacking queens can be abandoned.
// All solutions are generated in x[1:n] and printed as soon as they are found
{
    k = 1;
    while ( k != 0 )
        if ( there remains an untried x[k] in T(x[1], x[2], ..., x[k-1]) and B_k( x[1], ..., x[k] ) is true )
        {
            if ( x[1:k] is a solution )
                write ( x[1:k] );
            k = k + 1;
        }
        else k = k - 1;
}
NP-Complete
NP-hard
5. Intuitively, these are the problems that are at least as hard as the NP-complete problems. Note that NP-hard problems do not have to be in NP,
and they do not have to be decision problems.
6. The precise definition here is that a problem X is NP-hard, if there is an
NP-complete problem Y, such that Y is reducible to X in polynomial time.
7. But since any NP-complete problem can be reduced to any other NP-complete problem in polynomial time, all NP-complete problems can be
reduced to any NP-hard problem in polynomial time. Then, if there is a
solution to one NP-hard problem in polynomial time, there is a solution to
all NP problems in polynomial time.
Q12) Prove that three coloring problem is NP Complete
Ans:
The minimum colouring problem asks for the smallest k to properly colour G. The k-
colouring problem asks whether G can be properly coloured using at most k colours.
We call the subset of vertices that receive the same colour a colour class. It is easy to
see that every colour class is an independent set (yeah?). Notice that we’ve
encountered the colouring problem before! Recall in Assignment 1, we wanted to send
the minimum number of journalists to cover all the world cup games. Well the subset
of games a journalist got is a colour class! We were able to solve this optimization
problem in O(|V | log |V |) time, so how can we claim that the k-colouring decision
problem is NP-complete? It is the same reasoning for the Independent Set problem.
One way to deal with NP-completeness is to
restrict the problem to subsets of the input (in this assignment, we restricted
“arbitrary” graphs to interval graphs). In this lecture we will show that the colouring
problem on arbitrary graphs becomes NP-complete even for k = 3! Crazy! No? Think
about it: one easy way to rule out a 3-colouring is to check if G has a clique of size 4, like in
the example below: how hard is it to check for a clique of size at least k = 4? We just
showed that CLIQUE is NP-complete!