CS8451 – DESIGN AND ANALYSIS OF ALGORITHMS
IMPORTANT QUESTIONS AND ANSWERS
Department of CSE & IT | Regulation 2017 | Year II, Semester IV (2016–17 Even Semester)
Prepared by: Dr. M. Duraipandian, HOD/VP, SCADIT
Copyright © SCAD
SYLLABUS
UNIT I INTRODUCTION 9
Notion of an Algorithm – Fundamentals of Algorithmic Problem Solving –
Important Problem Types – Fundamentals of the Analysis of Algorithm Efficiency –
Analysis Framework – Asymptotic Notations and their properties – Mathematical
analysis for Recursive and Non- recursive algorithms.
OUTCOMES: At the end of the course, the student should be able to:
TEXT BOOK:
1. Anany Levitin, "Introduction to the Design and Analysis of Algorithms", Third Edition, Pearson Education, 2012.
REFERENCES:
1. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms", Third Edition, PHI Learning Private Limited, 2012.
2. Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman, "Data Structures and Algorithms", Pearson Education, Reprint 2006.
3. Donald E. Knuth, "The Art of Computer Programming", Volumes 1 & 3, Pearson Education, 2009.
4. Steven S. Skiena, "The Algorithm Design Manual", Second Edition, Springer, 2008.
5. http://nptel.ac.in/
5. http://nptel.ac.in/
INDEX

1. Aim and objective
2. Detailed Lesson plan

Unit I – Introduction
3. Part A
4. Part B
5. Fundamentals of algorithmic problem solving
6. Important problem types
7. Mathematical analysis of non-recursive algorithms
8. Asymptotic notations
9. Sequential search
10. Mathematical analysis of recursive algorithms
11. Part C

Unit II – Brute Force and Divide & Conquer
12. Part A
13. Part B
14. Exhaustive search
15. Merge sort
16. Binary search
17. Quick sort
18. Strassen's matrix multiplication
19. Closest pair and convex hull
20. Part C

Unit III – Dynamic Programming & Greedy Technique
21. Part A
22. Part B
23. All-pairs shortest path problem
24. Knapsack problem
25. Optimal binary search tree

36. Part A
37. Part B
38. N-Queens problem
39. Hamiltonian and subset problem
40. Traveling salesman problem
41. Knapsack problem
42. Job assignment problem
43. Approximation algorithms – TSP
44. Approximation algorithms – Knapsack problem
45. Part C
46. Industrial/Practical connectivity of the subject
Text Book
1. Anany Levitin, "Introduction to the Design and Analysis of Algorithms", Third Edition, Pearson Education, 2012. (Available in the library)

References
1. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms", Third Edition, PHI Learning Private Limited, 2012. (Available in the library)
2. Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman, "Data Structures and Algorithms", Pearson Education, Reprint 2006. (Available in the library)
3. Donald E. Knuth, "The Art of Computer Programming", Volumes 1 & 3, Pearson Education, 2009. (Not available in the library)
4. Steven S. Skiena, "The Algorithm Design Manual", Second Edition, Springer, 2008. (Not available in the library)
5. http://nptel.ac.in/
DETAILED LESSON PLAN (extract)

Sl. No.  Unit  Topic                                                   Periods  Cum.  Book
2        I     Fundamentals of Algorithmic Problem Solving             1        2     T1
3        I     Algorithm design techniques                             1        3     T1
13       II    Assignment problem                                      1        14    T1
16       II    Binary search                                           1        17    T1
17       II    Multiplication of Large Integers, Strassen's
               Matrix Multiplication                                   2        19    T1
18       II    Closest-Pair and Convex-Hull Problems                   1        20    T1
32       IV    The Maximum-Flow Problem: flow graph and maximum flow   2        36    T1, R1
34       IV    Matching in Bipartite Graphs                            2        38    T1, R1
35       IV    The Stable Marriage Problem                             1        39    T1
36       V     Lower-Bound Arguments                                   1        40    T1
37       V     Decision Trees                                          1        41    T1
UNIT I – INTRODUCTION
Notion of an Algorithm – Fundamentals of Algorithmic Problem Solving –
Important Problem Types – Fundamentals of the Analysis of Algorithm Efficiency –
Analysis Framework – Asymptotic Notations and their properties – Mathematical
analysis for Recursive and Non-recursive algorithms.
PART – A
1. Define: Algorithm. May / June 2013 Nov/Dec 2016
An algorithm is a sequence of unambiguous instructions for solving a
problem, i.e., for obtaining a required output for any legitimate input in a finite
amount of time. In short, an algorithm is a step-by-step procedure to solve a problem.
2. Differentiate time and space complexity. April / May 2010, Nov / Dec
2011
Time efficiency: indicates how fast the algorithm runs
Space efficiency: indicates how much extra memory the algorithm needs
count ← count + 1
n ← ⌊n/2⌋
return count
6. What do you mean by linear search? Nov / Dec 2011, May / June 2012
Linear search (also called sequential search) scans the given set of items one
by one and returns the position of the key value if it is found; otherwise it
returns −1.
7. What are average, best and worst case efficiency? May / June 2014
The average case efficiency of an algorithm is its efficiency for an average
case input of size n. It provides information about an algorithm behavior on a typical
or random input.
The best-case efficiency of an algorithm is its efficiency for the best-
case input of size n, which is an input or inputs for which the algorithm runs the
fastest among all possible inputs of that size.
The worst-case efficiency of an algorithm is its efficiency for the worst-
case input of size n, which is an input or inputs of size n for which the
algorithm runs the longest among all possible inputs of that size.
9. What are the components of fixed and variable parts in space complexity?
Nov / Dec 2013
Space complexity of an algorithm is the amount of memory it needs to run to
completion, i.e., from the start of execution to its termination. The space needed
by any algorithm is the sum of the following components:
Fixed component: this is independent of the characteristics of the inputs and
outputs. It includes the instruction space and the space for simple variables,
fixed-size component variables, and constants.
Variable component: this depends on the characteristics of the instance being
solved, e.g., the space for referenced variables and the recursion stack.
10. Define O-notation. List the properties of Big Oh notation. Nov/Dec 2011, 12
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is
bounded above by some constant multiple of g(n) for all large n, i.e., if there exist
some positive constant c and some non-negative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
Property: if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
11. What are little Oh and omega notations? Nov / Dec 2013
Little-oh notation: a function t(n) is said to be in o(g(n)), denoted t(n) ∈ o(g(n)),
if for every positive constant c there exists a non-negative integer n0 such that
t(n) < c·g(n) for all n ≥ n0.
Little-omega notation: t(n) ∈ ω(g(n)) if for every positive constant c,
t(n) > c·g(n) for all sufficiently large n.
The limit -based approach is often more convenient than the one based on the
definitions because it can take advantage of the powerful calculus techniques
developed for computing limits, such as L'Hopital's rule:
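As a compact summary (standard, and consistent with the definitions above), the three possible outcomes of the limit classify t(n) against g(n) as follows:

\[
\lim_{n \to \infty} \frac{t(n)}{g(n)} =
\begin{cases}
0 & \Rightarrow\ t(n) \in o(g(n)), \\
c > 0 & \Rightarrow\ t(n) \in \Theta(g(n)), \\
\infty & \Rightarrow\ t(n) \in \omega(g(n)).
\end{cases}
\]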
n²   quadratic
n³   cubic
2ⁿ   exponential
n!   factorial
17. Compare the order of growth of n(n−1)/2 and n². (May/June 2016)

n        n(n−1)/2     n²
1        0            1
2        1            4
4        6            16
8        28           64
10       45           100
100      4950         10000

Both functions are quadratic and hence have the same order of growth, but the
values of n(n−1)/2 are roughly half those of n², so it grows more slowly in
absolute terms.
Algorithm F(n)
if n = 0 then return 1        // base case
else return F(n − 1) * n      // recursive call
This is called finding the k-th order statistic. There is a very simple randomized
algorithm (called quickselect) taking O(n) average time and O(n²) worst-case time,
and a fairly complicated non-randomized algorithm (called introselect) taking O(n)
worst-case time.
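As an illustration, here is a minimal Python sketch of the randomized quickselect just described; the function name and the list-based partitioning are choices made here for clarity, not part of the source text:

    import random

    def quickselect(a, k):
        """Return the k-th smallest element (1-based) of list a.
        O(n) average time, O(n^2) worst case."""
        pivot = random.choice(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        if k <= len(less):
            return quickselect(less, k)      # answer lies among smaller items
        if k <= len(less) + len(equal):
            return pivot                     # pivot is the k-th order statistic
        return quickselect(greater, k - len(less) - len(equal))

    # e.g. quickselect([50, 30, 10, 90, 80], 2) returns 30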
PART – B
scheduled at the same time, a solution to this graph gives an optimal schedule.
5. Combinatorial problems
The traveling salesman problem and the graph-coloring problem are examples
of combinatorial problems. These are problems that ask us to find a
combinatorial object, such as a permutation, combination, or subset, that
satisfies certain constraints and has some desired property (e.g., maximizes a
value or minimizes a cost).
These problems are difficult to solve for the following reasons. First, the number
of combinatorial objects grows extremely fast with a problem's size. Second,
there are no known algorithms that solve most such problems in an acceptable
amount of time.
6. Geometric problems
Geometric algorithms deal with geometric objects such as points, lines and
polygons. It also includes various geometric shapes such as triangles, circles
etc. The applications for these algorithms are in computer graphic, robotics
etc. The two most widely studied problems are the closest-pair problem (given
n points in the plane, find the closest pair among them) and the convex-hull
problem (find the smallest convex polygon that would include all the points of
a given set).
7. Numerical problems
This is another large special area of applications, where the problems involve
mathematical objects of continuous nature: solving equations, computing
definite integrals, evaluating functions, and so on. These problems can be
solved only approximately, since they require real numbers, which can be
represented in a computer only approximately. This can also lead to an
accumulation of round-off errors. The algorithms designed are mainly used in
scientific and engineering applications.
Analysis:
Let us denote by C(n) the number of times this comparison is executed.
ASYMPTOTIC NOTATIONS

3. Asymptotic Notations:
Asymptotic notations describe how an algorithm's running time grows as the
input size increases.

Big-Oh (O): by the definition, f(n) ≤ c·g(n) for all n ≥ n0, and so on.
Big-Omega (Ω): by the definition, f(n) ≥ c·g(n) for all n ≥ n0, and so on.
Big-Theta (Θ): f(n) is bounded both above and below, i.e.,
c1·g(n) ≤ f(n) ≤ c2·g(n).
Example: c1·n ≤ 3n + 2 ≤ c2·n holds for c1 = 3, c2 = 4 and all n ≥ 2,
so 3n + 2 ∈ Θ(n).
5. With an algorithm explain in detail about the linear search. Give its
efficiency.
SEQUENTIAL SEARCH
Sequential search scans the given set of items one by one and returns the
position of the key value if it is found; otherwise it returns −1.
Algorithm SequentialSearch(A[0..n−1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n−1] and a search key K
//Output: The index of the first element of A that matches K, or −1 if there
//are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
Analysis:
For sequential search, best-case inputs are lists of size n with their first
elements equal to the search key; accordingly, Cbest(n) = 1. In the worst case
(the key is absent or occurs last), Cworst(n) = n.
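For reference, a runnable Python version of the pseudocode above (a direct translation; the function name is chosen here, not taken from the text):

    def sequential_search(a, key):
        """Return the index of the first element equal to key, or -1."""
        i = 0
        while i < len(a) and a[i] != key:
            i += 1
        return i if i < len(a) else -1

    # sequential_search([9, 2, 7, 8], 7) returns 2; a missing key returns -1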
Example 1:
Algorithm Factorial(n)
//Computes n! recursively
if n = 0
    return 1
else
    return Factorial(n − 1) * n
Example 2:
Algorithm TOH(n, A, C, B)
//Moves n disks from peg A to peg C, using peg B as auxiliary
if n = 1 then
    move the disk from A to C
else
    TOH(n − 1, A, B, C)
    move disk n from A to C
    TOH(n − 1, B, C, A)
PART-C
ALGORITHM UniqueElements(A[0..n−1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n−1]
//Output: Returns "true" if all the elements in A are distinct and "false"
//otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n‗s binary representation
if n = 1 return 1
else return BinRec( n/2 )+ 1
Algorithm analysis
The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋),
plus one more addition is made by the algorithm to increase the returned
value by 1. This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and no additions are
made then, the initial condition is A(1) = 0.
The standard approach to solving such a recurrence is to solve it
only for n = 2k
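Solving the recurrence for n = 2^k by backward substitution (a standard step, consistent with the initial condition above):

\[
A(2^k) = A(2^{k-1}) + 1 = A(2^{k-2}) + 2 = \cdots = A(2^{k-k}) + k = k,
\]

so A(n) = log2 n ∈ Θ(log n); for arbitrary n, A(n) = ⌊log2 n⌋.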
4. How does a divide and conquer algorithm work? May / June 2013,
2016
Divide and conquer is an algorithm design paradigm based on multi
branched recursion. The general plan is as follows:
• A problem instance is divided into several smaller instances of
the same problem, ideally about the same size
• The smaller instances are solved, typically recursively
• If necessary the solutions obtained are combined to get the
solution of the original problem
• Weights: w1 w2 … wn
• Values: v1 v2 … vn
• A knapsack of capacity W
7. Give the control abstraction of divide and conquer techniques. Nov
/Dec 2012,13
Control abstraction for recursive divide and conquer, with
problem instance P:
divide_and_conquer ( P )
{
    if P is small enough, solve it directly and return the solution;
    else split P into smaller instances P1, P2, ..., Pk;
    return ( combine ( divide_and_conquer ( P1 ),
                       divide_and_conquer ( P2 ),
                       ...
                       divide_and_conquer ( Pk ) ) );
}
8. Differentiate linear search and binary search. Nov / Dec 2014
Linear search: Linear Search or Sequential Search searches for
the key value in the given set of items sequentially and returns the position
of the key value if found; otherwise it returns −1. Binary search: binary search
is a remarkably efficient algorithm for searching in a sorted array. It works by
comparing a search key K with the array's middle element A[m]. If they match,
the algorithm stops; otherwise the same operation is repeated recursively for
the first half of the array if K < A[m] and for the second half if K > A[m].
9. Derive the complexity for binary search algorithm. April / May 2012,15
Nov/Dec 2016
10. Distinguish between quick sort and merge sort. Nov/Dec 2011
Merge sort: the arrays are partitioned according to the positions of the elements.
Quick sort: the arrays are partitioned according to the values of the elements.
11. Give the time complexity of merge sort.
Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0
Cworst(n) ∈ Θ(n log n) (as per the Master Theorem)
PART – B
1. Using exhaustive search solve traveling salesman problem,
knapsack problem and assignment problem for the given data.
EXHAUSTIVE
SEARCH
Solution:
C = | 9  2  7  8 |
    | 6  4  3  7 |
    | 5  8  1  8 |
    | 7  6  9  9 |
2. Explain the merge sort using divide and conquer technique give an
example. April / May 2010,May/June 2016
Devise an algorithm to perform merge sort.
Trace the steps of merge sort algorithm for the elements 8, 3, 2, 9,
7, 1, 5, 4 and also compute its time complexity.
MERGE SORT
o It is a perfect example of divide-and-conquer.
o It sorts a given array A [0..n-1] by dividing it into two halves
A[0…(n/2-1)] and A[n/2…n-1], sorting each of them recursively
and then merging the two smaller sorted arrays into a single sorted
one.
Algorithm for merge sort:
Algorithm Mergesort(A[0..n−1])
//Sorts array A[0..n−1] by recursive merge sort
//Input: An array A[0..n−1] of orderable elements
//Output: Array A[0..n−1] sorted in increasing order
if n > 1
    copy A[0..(n/2 − 1)] to B[0..(n/2 − 1)]
    copy A[n/2..n−1] to C[0..(n/2 − 1)]
    Mergesort(B[0..(n/2 − 1)])
    Mergesort(C[0..(n/2 − 1)])
    Merge(B, C, A)
Merging process:
The merging of two sorted arrays can be done as follows:
o Two pointers are initialized to point to first elements of the arrays
being merged.
o Then the elements pointed to are compared, and the smaller of them
is added to a new array being constructed;
o After that, the index of the smaller element is incremented to
point to its immediate successor in the array it was copied from.
This operation is continued until one of the two given arrays is
exhausted; then the remaining elements of the other array are
copied to the end of the new array.
Algorithm for merging:
Algorithm Merge(B[0..p−1], C[0..q−1], A[0..p+q−1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p−1] and C[0..q−1], both sorted
//Output: Sorted array A[0..p+q−1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j] then A[k] ← B[i]; i ← i + 1
    else A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p copy C[j..q−1] to A[k..p+q−1]
else copy B[i..p−1] to A[k..p+q−1]
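The whole method can be condensed into a short runnable Python sketch (list slicing is used here for brevity; an in-place version would follow the pseudocode more literally):

    def merge_sort(a):
        """Sort list a by recursive merge sort and return the result."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        b = merge_sort(a[:mid])              # sort the left half
        c = merge_sort(a[mid:])              # sort the right half
        result, i, j = [], 0, 0
        while i < len(b) and j < len(c):     # merge the two sorted halves
            if b[i] <= c[j]:
                result.append(b[i]); i += 1
            else:
                result.append(c[j]); j += 1
        result.extend(b[i:]); result.extend(c[j:])   # copy the leftover tail
        return result

    # The trace asked for above: merge_sort([8, 3, 2, 9, 7, 1, 5, 4])
    # returns [1, 2, 3, 4, 5, 7, 8, 9]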
3. Explain the binary search with suitable example. Nov/ Dec 2011 (Nov/Dec
15) Write an algorithm to perform binary search on a sorted list of elements
and analyze the algorithm. April / May 2011
BINARY SEARCH
It is an efficient algorithm for searching in a sorted array. It is an example for
divide and conquer technique.
Example:
Given set of elements
3 14 27 31 42 55 70 81 98
Search key: K=55
The iterations of the algorithm are given as:
Iteration 1: l = 0, r = 8, m = 4; A[4] = 42 < 55, so continue with the right half.
Iteration 2: l = 5, r = 8, m = 6; A[6] = 70 > 55, so continue with the left half.
Iteration 3: l = 5, r = 5, m = 5; A[5] = 55 = K, so return 5.
If the key were absent, the algorithm would return −1.
Analysis:
The efficiency of binary search is measured by the number of times the search
key is compared with an element of the array.
For simplicity, three-way comparisons are considered. This assumes that after
one comparison of K with A[m], the algorithm can determine whether K is
smaller than, equal to, or larger than A[m]. The number of comparisons depends
not only on n but also on the particular instance of the problem.
The worst-case count Cw(n) covers all arrays that do not contain the search
key; after one comparison the algorithm considers an array of half the size:
Cw(n) = Cw(⌊n/2⌋) + 1 for n > 1, Cw(1) = 1 ----- eqn (1)
To solve such recurrence equations, assume that n = 2^k to obtain the solution
Cw(2^k) = k + 1 = log2 n + 1.
For any positive even number n, n = 2i, where i > 0. Now the L.H.S. of eqn (1) is:
Cw(n) = ⌊log2 n⌋ + 1 = ⌊log2 2i⌋ + 1 = ⌊log2 2 + log2 i⌋ + 1 = (1 + ⌊log2 i⌋) + 1
      = ⌊log2 i⌋ + 2
The R.H.S. of eqn (1) for n = 2i is:
Cw(⌊n/2⌋) + 1 = Cw(⌊2i/2⌋) + 1 = Cw(i) + 1 = (⌊log2 i⌋ + 1) + 1 = ⌊log2 i⌋ + 2
Since both expressions are the same, the assertion is proved.
The worst-case efficiency is in Θ(log n): since the algorithm reduces the size of
the array to about half on each iteration, the number of iterations needed to
reduce the initial size n to the final size 1 is about log2 n. Also, the logarithmic
function grows so slowly that its values remain small even for very large values
of n.
The average-case efficiency, measured by the number of key comparisons made,
is only slightly smaller than the worst case.
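A minimal iterative Python sketch matching this analysis (the three-way comparison is expressed as an if/elif chain):

    def binary_search(a, key):
        """Search sorted list a for key; return its index or -1."""
        low, high = 0, len(a) - 1
        while low <= high:
            m = (low + high) // 2
            if key == a[m]:
                return m
            elif key < a[m]:
                high = m - 1          # continue in the left half
            else:
                low = m + 1           # continue in the right half
        return -1

    # The example above: binary_search([3, 14, 27, 31, 42, 55, 70, 81, 98], 55)
    # returns 5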
QUICK SORT
Quick sort is an algorithm of choice in many situations because it is not
difficult to implement, it is a good "general-purpose" sort, and it consumes
relatively few resources during execution.
Divide: Partition array A[l..r] into two sub-arrays, A[l..s−1] and A[s+1..r], such
that each element of the first array is ≤ A[s] and each element of the second
array is ≥ A[s]. (Computing the index s is part of the partition.)
Implication: A[s] will be in its final position in the sorted array.
Conquer: Sort the two sub-arrays A[l..s−1] and A[s+1..r] by recursive calls to
quicksort.
Combine: No work is needed, because A[s] is already in its correct place after
the partition is done, and the two sub-arrays have been sorted.
Steps in Quicksort
Select a pivot with respect to whose value we are going to divide the
sublist. (e.g., p = A[l])
Rearrange the list so that it starts with the pivot followed by a ≤ sublist
(a sublist whose elements are all smaller than or equal to the pivot)
and a ≥ sublist (a sublist whose elements are all greater than or equal
to the pivot ) Exchange the pivot with the last element in the first
sublist(i.e., ≤ sublist) – the pivot is now in its final position.
Efficiency of Quicksort
Based on whether the partitioning is balanced.
Best case: split in the middle — Θ (n log n)
C (n) = 2C (n/2) + Θ (n) //2 subproblems of size n/2 each
Worst case: sorted array! — Θ (n2)
C (n) = C (n-1) + n+1 //2 subproblems of size 0 and n-1 respectively
Average case: random arrays — Θ (n log n)
Conditions:
If i < j, swap A[i] and A[j].
If i > j, partition the array after exchanging the pivot and A[j].
If i = j, the value pointed to is the pivot p itself, so partition the array.
Example:
50 30 10 90 80 20 40 70
Step 1: p = A[l] = 50; i starts just after the pivot and j at the right end.
Step 2: Increment i while A[i] < p: i stops at 90.
Step 3: Decrement j while A[j] > p: j stops at 40.
Step 4: i < j, so swap A[i] and A[j]:
50 30 10 40 80 20 90 70
Step 5: Increment i: i stops at 80.
Step 6: Decrement j: j stops at 20.
Step 7: i < j, so swap A[i] and A[j]:
50 30 10 40 20 80 90 70
Step 8: Increment i (stops at 80) and decrement j (stops at 20); now i > j, so
exchange the pivot with A[j]:
20 30 10 40 50 80 90 70
The pivot 50 is now in its final position, and the two sub-arrays on either side
are sorted recursively.
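A runnable Python sketch of this partitioning scheme (first element as pivot, two scanning indices, as in the trace above):

    def quicksort(a, l=0, r=None):
        """Sort list a in place between indices l and r."""
        if r is None:
            r = len(a) - 1
        if l >= r:
            return a
        p = a[l]                         # pivot = first element
        i, j = l, r + 1
        while True:
            i += 1
            while i <= r and a[i] < p:   # advance i past elements < pivot
                i += 1
            j -= 1
            while a[j] > p:              # retreat j past elements > pivot
                j -= 1
            if i >= j:
                break                    # indices crossed
            a[i], a[j] = a[j], a[i]
        a[l], a[j] = a[j], a[l]          # put the pivot in its final position
        quicksort(a, l, j - 1)
        quicksort(a, j + 1, r)
        return a

    # quicksort([50, 30, 10, 90, 80, 20, 40, 70]) -> [10, 20, 30, 40, 50, 70, 80, 90]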
Master's Theorem (best case):
C(n) = 2C(n/2) + n, C(1) = 0
As per the Master Theorem (a = 2, b = 2, d = 1):
C(n) ∈ Θ(n log n)
Substitution method:
C(n) = 2C(n/2) + n, with n = 2^k:
C(2^k) = 2C(2^(k−1)) + 2^k
       = 2²C(2^(k−2)) + 2·2^k
       = 2³C(2^(k−3)) + 3·2^k
       ...
       = 2^k C(1) + k·2^k = k·2^k
Since n = 2^k, k = log2 n, so C(n) = n log2 n ∈ Θ(n log n).
where
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 − b11)
m4 = a11 * (b10 − b00)
m5 = (a00 + a01) * b11
m6 = (a10 − a00) * (b00 + b01)
m7 = (a01 − a11) * (b10 + b11)
Analysis:
The efficiency measure of this algorithm, M(n), is the number of multiplications
made in multiplying two n-by-n matrices according to Strassen's algorithm. The
recurrence relation is:
M(n) = 7M(n/2) for n > 1, M(1) = 1,
whose solution is M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
6. Write a brute force algorithm to find closest points in a set of n points and an
algorithm based on divide and conquer to solve convex hull problem.
April / May 2015Nov/Dec 15
Find the two closest points in a set of n points (in the two-dimensional Cartesian plane).
Brute-force algorithm
Compute the distance between every pair of distinct points and return the indexes of the
points for which the distance is the smallest.
Algorithm BruteForceClosestPoints(P)
//Input: A list P of n (n ≥ 2) points P1 = (x1, y1), ..., Pn = (xn, yn)
//Output: Indices index1 and index2 of the closest pair of points
dmin ← ∞
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        d ← sqrt((xi − xj)² + (yi − yj)²)   // sqrt is the square root function
        if d < dmin
            dmin ← d; index1 ← i; index2 ← j
return index1, index2
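The same brute-force algorithm as a runnable Python sketch (math.dist is used for the Euclidean distance; it requires Python 3.8+):

    from math import inf, dist

    def brute_force_closest_points(points):
        """points: list of (x, y) tuples; returns indices of the closest pair."""
        dmin, index1, index2 = inf, -1, -1
        n = len(points)
        for i in range(n - 1):
            for j in range(i + 1, n):
                d = dist(points[i], points[j])      # Euclidean distance
                if d < dmin:
                    dmin, index1, index2 = d, i, j  # remember the new closest pair
        return index1, index2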
Finding points farthest away from line P1P2 can be done in linear time.
Time efficiency: T(n) = T(x) + T(y) + T(z) + T(v) + O(n), where x + y + z +v <=
n.
Find the two closest points in a set of n points (in the two-dimensional
Cartesian plane). The brute-force approach computes the distance between every
pair of distinct points and returns the indices of the points for which the distance
is the smallest; the divide-and-conquer algorithm below does better.
Algorithm
Step 0 Sort the points by x (list one) and then by y (list two).
Step 1 Divide the points given into two subsets S1 and S2 by a vertical line x = c
so that half the points lie to the left or on the line and half the points lie to the right or
on the line.
Step 2 Find recursively the closest pairs for the left and right subsets.
Step 3 Set d = min{d1, d2}
Limit the attention to the points in the symmetric vertical strip of width
2d as possible closest pair. Let C1 and C2 be the subsets of points in
the left subset S1 and of the right subset S2, respectively, that lie in
this vertical strip. The points in C1 and C2 are stored in increasing
order of their y coordinates, taken from the second list.
Step 4 For every point P(x,y) in C1, we inspect points in C2 that may be closer to
P than d. There can be no more than 6 such points (because d ≤ d2)!
Time Complexity:
Running time of the algorithm (without sorting) is:
T(n) = 2T(n/2) + M(n), where M(n) ∈ Θ(n).
By the Master Theorem (with a = 2, b = 2, d = 1),
T(n) ∈ Θ(n log n).
So the total time is Θ(n log n).
PART-C
In the conventional pen-and-pencil algorithm for multiplying two n-digit integers,
each of the n digits of the first number is multiplied by each of the n digits of the
second number, for a total of n² digit multiplications.
The divide-and-conquer method does the above multiplication in fewer than n²
digit multiplications.
Example:
23 ∗ 14 = (2 · 10¹ + 3 · 10⁰) ∗ (1 · 10¹ + 4 · 10⁰)
        = (2 ∗ 1)10² + (2 ∗ 4 + 3 ∗ 1)10¹ + (3 ∗ 4)10⁰
        = 2 · 10² + 11 · 10¹ + 12 · 10⁰
        = 3 · 10² + 2 · 10¹ + 2 · 10⁰
        = 322
The middle term 2 ∗ 4 + 3 ∗ 1 can be computed as (2 + 3) ∗ (1 + 4) − (2 ∗ 1) − (3 ∗ 4).
Here (2 ∗ 1) and (3 ∗ 4) have already been computed, so only one extra
multiplication is needed. For any pair of two-digit numbers a = a1a0 and
b = b1b0, their product c can be computed by the formula
c = a ∗ b = c2·10² + c1·10¹ + c0,
where
c2 = a1 * b1 is the product of their first halves,
c0 = a0 * b0 is the product of their second halves,
c1 = (a1 + a0) * (b1 + b0) − (c2 + c0).
If n/2 is even, we can apply the same method for computing the products c2, c0, and c1.
Thus, if n is a power of 2, we have a recursive algorithm for computing the product of two
n-digit integers. In its pure form, the recursion is stopped when n becomes 1. It can also be
stopped when we deem n small enough to multiply the numbers of that size directly.
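A hedged Python sketch of this divide-and-conquer multiplication (three recursive products instead of four; the splitting by powers of ten is one possible realization, not taken verbatim from the text):

    def multiply(a, b):
        """Multiply nonnegative integers with three recursive multiplications."""
        if a < 10 or b < 10:                 # small enough: multiply directly
            return a * b
        half = max(len(str(a)), len(str(b))) // 2
        a1, a0 = divmod(a, 10 ** half)       # split each number into two halves
        b1, b0 = divmod(b, 10 ** half)
        c2 = multiply(a1, b1)                # product of the first halves
        c0 = multiply(a0, b0)                # product of the second halves
        c1 = multiply(a1 + a0, b1 + b0) - (c2 + c0)
        return c2 * 10 ** (2 * half) + c1 * 10 ** half + c0

    # The worked example above: multiply(23, 14) returns 322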
a11, a12, a21, a22 = 1, 3, 7, 5     # matrix A
b11, b12, b21, b22 = 6, 8, 4, 2     # matrix B
m1 = (a11 + a22) * (b11 + b22)
m2 = (a21 + a22) * b11
m3 = a11 * (b12 - b22)
m4 = a22 * (b21 - b11)
m5 = (a11 + a12) * b22
m6 = (a21 - a11) * (b11 + b12)
m7 = (a12 - a22) * (b21 + b22)
c11 = m1 + m4 - m5 + m7   # 18
c12 = m3 + m5             # 14
c21 = m2 + m4             # 62
c22 = m1 - m2 + m3 + m6   # 66
RESULT:
C= [ 18 14]
[ 62 66]
PART – A
2. List out the memory functions used under dynamic programming. April / May
2015
The memory function technique solves a given problem in the top-down manner
but, in addition, maintains a table of the kind that would have been used by a
bottom-up dynamic programming algorithm. Initially, all the table's entries hold a
special "null" symbol; each needed value is computed once and thereafter simply
retrieved from the table (memoization).
guarantees a solution with a value no worse than 1/2 of the optimal solution.
13. What is the best algorithm suited to identify the topography for
a graph? Mention its efficiency factors. (Nov/Dec 2015)
Prim's and Kruskal's algorithms are best suited to identify the topography
(spanning structure) of a graph. Prim's algorithm runs in O(|E| log |V|) time with
a min-heap; Kruskal's algorithm runs in O(|E| log |E|) time, dominated by sorting
the edges.
15. Define the single source shortest path problem. (May/June 2016)
(Nov/Dec 2016)
Dijkstra's algorithm solves the single-source shortest-path problem of finding
the shortest paths from a given vertex (the source) to all the other vertices of a
weighted graph or digraph. Dijkstra's algorithm provides a correct solution for a
graph with non-negative weights.
PART-B
1. Write down and explain the algorithm to solve all pair shortest path
algorithm. April / May 2010
Write and explain the algorithm to compute all pair source shortest
path using dynamic programming. Nov / Dec 2010
Explain in detail about Floyd’s algorithm. (Nov/Dec 2016)
Algorithm Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph
//Output: The distance matrix of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
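A direct Python translation of the pseudocode (0-based indices; math.inf marks missing edges):

    from math import inf

    def floyd(W):
        """All-pairs shortest paths; W is an n x n weight matrix."""
        n = len(W)
        D = [row[:] for row in W]            # D <- W (work on a copy)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    D[i][j] = min(D[i][j], D[i][k] + D[k][j])
        return D

    # Example shape: floyd([[0, 3, inf], [2, 0, 1], [inf, 7, 0]])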
KNAPSACK PROBLEM
Given n items of integer weights w1, w2, ..., wn and values v1, v2, ..., vn, and a
knapsack of integer capacity W, find the most valuable subset of the items that
fits into the knapsack.
Consider an instance defined by the first i items and capacity j (j ≤ W), and let
V[i, j] be the optimal value of such an instance.
The Recurrence:
To design a dynamic programming algorithm, we need to derive a
recurrence relation that expresses a solution to an instance of the
knapsack problem in terms of solutions to its smaller sub instances.
Let us consider an instance defined by the first i items, 1≤i≤n, with
weights w1, ... , wi, values v1, ... , vi, and knapsack capacity j, 1 ≤j ≤
W. Let V[i, j] be the value of an optimal solution to this instance, i.e.,
the value of the most valuable subset of the first i items that fit into the
knapsack of capacity j.
Divide all the subsets of the first i items that fit the knapsack of
capacity j into two categories: those that do not include the ith item
and those that do.
Two possibilities for the most valuable subset for the subproblem P(i, j):
i. It does not include the i-th item: V[i, j] = V[i−1, j].
ii. It includes the i-th item: V[i, j] = vi + V[i−1, j − wi].
Hence V[i, j] = max{V[i−1, j], vi + V[i−1, j − wi]} if j − wi ≥ 0,
and V[i, j] = V[i−1, j] otherwise.
                   j = 0    1     2     3     4     5
i = 0                  0    0     0     0     0     0
w1 = 2, v1 = 12        0    0     12    12    12    12
w2 = 1, v2 = 10        0    10    12    22    22    22
w3 = 3, v3 = 20        0    10    12    22    30    32
w4 = 2, v4 = 15        0    10    15    25    30    37
Thus, the maximal value is V[4, 5] = $37.
Since V[4, 5] ≠ V[3, 5], item 4 was included in an optimal solution along
with an optimal subset for filling 5 - 2 = 3 remaining units of the knapsack
capacity.
The latter is represented by element V[3, 3]. Since V[3, 3] = V[2, 3], item
3 is not a part of an optimal subset.
Since V[2, 3] ≠ V[1, 3], item 2 is a part of an optimal selection, which leaves
element V[1, 3 − 1] = V[1, 2] to specify its remaining composition.
Similarly, since V[1, 2] ≠ V[0, 2], item 1 is the final part of the optimal solution
{item 1, item 2, item 4}.
Efficiency:
The time efficiency and space efficiency of this algorithm are both in
Θ(n W).
The time needed to find the composition of an optimal solution is in O(n
+ W).
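The table and the traceback above can be reproduced with a short Python sketch of the same recurrence:

    def knapsack(weights, values, W):
        """Return the optimal value and the chosen items (1-based)."""
        n = len(weights)
        V = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, W + 1):
                if weights[i - 1] <= j:      # the i-th item can fit
                    V[i][j] = max(V[i - 1][j],
                                  values[i - 1] + V[i - 1][j - weights[i - 1]])
                else:
                    V[i][j] = V[i - 1][j]
        items, j = [], W
        for i in range(n, 0, -1):            # item i was taken iff the value changed
            if V[i][j] != V[i - 1][j]:
                items.append(i)
                j -= weights[i - 1]
        return V[n][W], sorted(items)

    # The instance above: knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
    # returns (37, [1, 2, 4])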
The recurrence for the optimal binary search tree:
C[i, j] = min over i ≤ k ≤ j of {C[i, k−1] + C[k+1, j]} + Σ (s = i to j) ps
EXAMPLE
Let us illustrate the algorithm by applying it to the four key set we used at the beginning of
this section:
KEY          A     B     C     D
PROBABILITY  0.1   0.2   0.4   0.3
Initial tables (main table C on the left, root table R on the right):
C:  j=0   1     2     3     4        R:  j=0  1   2   3   4
i=1  0    0.1                        i=1      1
i=2       0     0.2                  i=2          2
i=3             0     0.4            i=3              3
i=4                   0     0.3      i=4                  4
i=5                         0        i=5
K=1: c(1,2) = c(1,0) + c(2,2) + 0.1 + 0.2 = 0 + 0.2 + 0.1 + 0.2 = 0.5
K=2: c(1,2) = c(1,1) + c(3,2) + 0.1 + 0.2 = 0.1 + 0 + 0.1 + 0.2 = 0.4
The minimum is 0.4, obtained for k = 2, so C[1, 2] = 0.4 and R[1, 2] = 2.
K=2: c(2,3) = c(2,1) + c(3,3) + 0.2 + 0.4 = 0 + 0.4 + 0.2 + 0.4 = 1.0
K=3: c(2,3) = c(2,2) + c(4,3) + 0.2 + 0.4 = 0.2 + 0 + 0.2 + 0.4 = 0.8
The minimum is 0.8, obtained for k = 3, so C[2, 3] = 0.8 and R[2, 3] = 3.
K=3: c(3,4) = c(3,2) + c(4,4) + 0.4 + 0.3 = 0 + 0.3 + 0.4 + 0.3 = 1.0
K=4: c(3,4) = c(3,3) + c(5,4) + 0.4 + 0.3 = 0.4 + 0 + 0.4 + 0.3 = 1.1
The minimum is 1.0, obtained for k = 3, so C[3, 4] = 1.0 and R[3, 4] = 3.
K=1: c(1,3) = c(1,0) + c(2,3) + 0.1 + 0.2 + 0.4 = 0 + 0.8 + 0.1 + 0.2 + 0.4 = 1.5
K=2: c(1,3) = c(1,1) + c(3,3) + 0.1 + 0.2 + 0.4 = 0.1 + 0.4 + 0.1 + 0.2 + 0.4 = 1.2
K=3: c(1,3) = c(1,2) + c(4,3) + 0.1 + 0.2 + 0.4 = 0.4 + 0 + 0.1 + 0.2 + 0.4 = 1.1
The minimum is 1.1, obtained for k = 3, so C[1, 3] = 1.1 and R[1, 3] = 3.
K=2: c(2,4) = c(2,1) + c(3,4) + 0.2 + 0.4 + 0.3 = 0 + 1.0 + 0.2 + 0.4 + 0.3 = 1.9
K=3: c(2,4) = c(2,2) + c(4,4) + 0.2 + 0.4 + 0.3 = 0.2 + 0.3 + 0.2 + 0.4 + 0.3 = 1.4
K=4: c(2,4) = c(2,3) + c(5,4) + 0.2 + 0.4 + 0.3 = 0.8 + 0 + 0.2 + 0.4 + 0.3 = 1.7
The minimum is 1.4, obtained for k = 3, so C[2, 4] = 1.4 and R[2, 4] = 3.
K=1: c(1,4) = c(1,0) + c(2,4) + 0.1 + 0.2 + 0.4 + 0.3 = 0 + 1.4 + 0.1 + 0.2 + 0.4 + 0.3 = 2.4
K=2: c(1,4) = c(1,1) + c(3,4) + 0.1 + 0.2 + 0.4 + 0.3 = 0.1 + 1.0 + 0.1 + 0.2 + 0.4 + 0.3 = 2.1
K=3: c(1,4) = c(1,2) + c(4,4) + 0.1 + 0.2 + 0.4 + 0.3 = 0.4 + 0.3 + 0.1 + 0.2 + 0.4 + 0.3 = 1.7
K=4: c(1,4) = c(1,3) + c(5,4) + 0.1 + 0.2 + 0.4 + 0.3 = 1.1 + 0 + 0.1 + 0.2 + 0.4 + 0.3 = 2.1
The minimum is 1.7, obtained for k = 3, so C[1, 4] = 1.7 and R[1, 4] = 3.
Final tables:
C:  j=0   1     2     3     4        R:  j=0  1   2   3   4
i=1  0    0.1   0.4   1.1   1.7      i=1      1   2   3   3
i=2       0     0.2   0.8   1.4      i=2          2   3   3
i=3             0     0.4   1.0      i=3              3   3
i=4                   0     0.3      i=4                  4
i=5                         0        i=5
Thus, the average number of key comparisons in the optimal tree is equal to
1.7.
Since R(1, 4) = 3, the root of the optimal tree contains the third key, i.e., C.
Its left subtree is made up of keys A and B, and its right subtree contains just
key D.
Since R(1, 2) = 2, the root of the optimal tree containing A and B is B, with A
being its left child (and the root of the one node tree: R(1, 1) = 1).
Prim's Algorithm:
Prim's algorithm constructs a minimum spanning tree through
a sequence of expanding sub trees.
The initial sub tree in such a sequence consists of a single vertex
selected arbitrarily from the set V of the graph's vertices.
On each iteration, the current tree is expanded in the greedy
manner by simply attaching to it the nearest vertex not in that tree.
The algorithm stops after all the graph's vertices have been included
in the tree being constructed.
Since the algorithm expands a tree by exactly one vertex on each of its
iterations, the total number of such iterations is n - 1, where n is the number of
vertices in the graph. The tree generated by the algorithm is obtained as the set
of edges used for the tree expansions.
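A minimal Python sketch of this expansion strategy using a min-heap of fringe edges (the adjacency-dict representation is an assumption made here for brevity):

    import heapq

    def prim(graph, start):
        """graph: dict vertex -> list of (weight, neighbor); returns MST edges."""
        visited = {start}
        heap = [(w, start, v) for w, v in graph[start]]
        heapq.heapify(heap)
        mst = []
        while heap and len(visited) < len(graph):
            w, u, v = heapq.heappop(heap)    # nearest fringe edge
            if v in visited:
                continue                     # both ends already in the tree
            visited.add(v)
            mst.append((u, v, w))
            for w2, x in graph[v]:
                if x not in visited:
                    heapq.heappush(heap, (w2, v, x))
        return mst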
Kruskal's Algorithm:
o Discovered by Joseph Kruskal.
o Constructs a minimum spanning tree for a weighted connected graph
G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum
of edge weights is the smallest.
Procedure – Kruskal's Algorithm:
ALGORITHM Kruskal(G)
//Kruskal's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(e_i1) ≤ ... ≤ w(e_i|E|)
ET ← ∅; ecounter ← 0   //initialize the set of tree edges and its size
k ← 0                  //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {e_ik} is acyclic
        ET ← ET ∪ {e_ik}; ecounter ← ecounter + 1
return ET
5. Compute the Huffman code to compress the data effectively and
encode the string _AB_D. April / May 2015 (Nov/Dec 15)
Character A B C D _
Huffman’s algorithm
Initialize n one-node trees with alphabet characters and the tree
weights with their frequencies.
Repeat the following step n-1 times: join two binary trees with smallest weights
into one (as left and right subtrees) and make its weight equal the sum of the
weights of the two trees.
Mark edges leading to left and right subtrees with 0's and 1's, respectively.
Example:
character A B C D _
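A Python sketch of these three steps using a min-heap; the frequencies in the sample call are illustrative assumptions, since the source table omits them:

    import heapq

    def huffman(freq):
        """freq: dict character -> weight; returns dict character -> code."""
        heap = [(w, i, {ch: ""}) for i, (ch, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            w1, _, left = heapq.heappop(heap)     # two smallest-weight trees
            w2, _, right = heapq.heappop(heap)
            merged = {ch: "0" + c for ch, c in left.items()}         # 0 on left
            merged.update({ch: "1" + c for ch, c in right.items()})  # 1 on right
            heapq.heappush(heap, (w1 + w2, count, merged))
            count += 1                            # unique tie-breaker for the heap
        return heap[0][2]

    # e.g. huffman({"A": 0.35, "B": 0.1, "C": 0.2, "D": 0.2, "_": 0.15})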
PART-C
1. Discuss about the algorithm and pseudocode to find the Minimum
Spanning Tree using Prim’s Algorithm. Find the Minimum Spanning Tree
for the graph shown below. And Discuss about the
efficiency of the algorithm. (May/June 16)
find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u)
Tour                                 Length
a ---> b ---> c ---> d ---> a        l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a        l = 2 + 3 + 1 + 5 = 11   (optimal)
a ---> c ---> b ---> d ---> a        l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a        l = 5 + 1 + 3 + 2 = 11   (optimal)
a ---> d ---> b ---> c ---> a        l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a        l = 7 + 1 + 8 + 2 = 18
capacity constraints:
0 ≤ xij ≤ uij for every edge (i, j) ∈ E
9. Determine the Dual linear program for the following LP, Maximize
3a + 2b + c .
Subject to: 2a + b + c ≤ 3; a + b + c ≤ 4; 3a + 3b + 6c ≤ 6; a, b, c ≥ 0.
(Nov/Dec 2015)
Sol: Introduce one dual variable per primal constraint, say u, v, w ≥ 0. The dual is:
Minimize 3u + 4v + 6w
subject to 2u + v + 3w ≥ 3
           u + v + 3w ≥ 2
           u + v + 6w ≥ 1
           u, v, w ≥ 0.
10. What is iterative improvement? (Nov/Dec 2016)
The iterative improvement technique involves finding a solution to an
optimization problem by generating a sequence of feasible solutions with
improving values of the problem's objective function. Each subsequent solution
in such a sequence typically involves a small, localized change in the previous
feasible solution. When no such change improves the value of the objective
function, the algorithm returns the last feasible solution as optimal and stops.
C(n, k) = C(n−1, k−1) + C(n−1, k) for n > k > 0,
C(n, 0) = 1, C(n, n) = 1.
PART – B
1. Discuss in detail about stable marriage problem. (Nov/Dec 2016) Explain in
detail about Gale-Shapley Algorithm.
Marriage Matching:
A marriage matching M is a set of n (m, w) pairs whose members are selected
from disjoint n-element sets Y and X in a one-to-one fashion, i.e., each man m
from Y is paired with exactly one woman w from X and vice versa.
Blocking Pair:
A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage
matching M if man m and woman w are not matched in M but they prefer each
other to their mates in M. For example, (Bob, Lea) is a blocking pair for the
marriage matching M = {(Bob, Ann), (Jim, Lea), (Tom, Sue)} because they are not
matched in M while Bob prefers Lea to Ann and Lea prefers Bob to Jim.
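A Python sketch of the Gale-Shapley procedure the question asks about (men propose, women keep the best proposal so far; the names and data layout are choices made here, not part of the source):

    def gale_shapley(men_prefs, women_prefs):
        """prefs: dict person -> list ordered from most to least preferred.
        Returns a stable matching as a dict {woman: man}."""
        free_men = list(men_prefs)
        next_choice = {m: 0 for m in men_prefs}     # next woman to propose to
        engaged = {}                                # woman -> current mate
        rank = {w: {m: i for i, m in enumerate(p)}
                for w, p in women_prefs.items()}
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if w not in engaged:
                engaged[w] = m                      # w accepts her first proposal
            elif rank[w][m] < rank[w][engaged[w]]:
                free_men.append(engaged[w])         # w trades up; old mate freed
                engaged[w] = m
            else:
                free_men.append(m)                  # w rejects m; he proposes again
        return engaged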
2. How do you compute the maximum flow for the following graph
using Ford – Fulkerson method? Explain. (May/June 2016)
digraph with n vertices numbered from 1 to n and a set of edges E, with the
following properties:
contains exactly one vertex with no entering edges, called the source
(numbered 1)
contains exactly one vertex with no leaving edges, called the sink
(numbered n)
has a positive integer weight uij on each directed edge (i, j),
called the edge capacity, indicating the upper bound on the
amount of material that can be sent from i to j through this edge.
Flow: A flow is an assignment of real numbers xij to edges (i,j) of a given
network that satisfy the following:
Flow-conservation requirements: The total amount of material entering an
intermediate vertex must be equal to the total amount
of the material
leaving the vertex
Capacity constraints: 0 ≤ xij ≤ uij for every edge (i, j) ∈ E.
The total outflow from the source (equivalently, the total inflow into the sink) is
called the value of the flow:
v = Σ x1j over j: (1, j) ∈ E = Σ xjn over j: (j, n) ∈ E.
Thus, a (feasible) flow is an assignment of real numbers xij to the edges (i, j) of
a given network that satisfies the flow-conservation constraints and the capacity
constraints.
The maximum-flow problem can be stated as: given a network N, find a flow f
of maximum value. More formally,
Maximize v = Σ x1j over j: (1, j) ∈ E,
subject to the flow-conservation and capacity constraints above.
The augmenting-path method adjusts the flow along forward edges (→) and
backward edges (←) of a path from the source to the sink.
Shortest-Augmenting-Path Algorithm
Generate augmenting path with the least number of edges by BFS as follows.
Starting at the source, perform BFS traversal by marking new (unlabeled) vertices
with two labels:
• first label – indicates the amount of additional flow that can be
brought from the source to the vertex being labeled
• second label – indicates the vertex from which the vertex
being labeled was reached, with ―+‖ or ―–‖ added to the
second label to indicate whether the vertex was reached via a
forward or backward edge
Labeling of vertices:
The source is always labeled with (∞, −).
All other vertices are labeled as follows:
• If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from i to j with positive unused capacity
rij = uij − xij (forward edge), vertex j is labeled (lj, i+), where
lj = min{li, rij}.
• If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from j to i with positive flow xji (backward
edge), vertex j is labeled (lj, i−), where lj = min{li, xji}.
If the sink ends up being labeled, the current flow can be augmented by the
amount indicated by the sink's first label. The augmentation of the current flow
is performed along the augmenting path traced by following the vertex second
labels from the sink back to the source.
Definition of a Cut:
Let X be a set of vertices in a network that includes its source but does not
include its sink, and let X̄, the complement of X, be the rest of the vertices,
including the sink. The cut induced by this partition of the vertices is the set of
all the edges with a tail in X and a head in X̄.
The capacity of a cut is defined as the sum of the capacities of the edges that
compose the cut.
A cut and its capacity are denoted by C(X, X̄) and c(X, X̄).
Note that if all the edges of a cut were deleted from the network, there
would be no directed path from source to sink
Efficiency:
• The number of augmenting paths needed by the shortest-augmenting-
path algorithm never exceeds nm/2, where n and m are the number of
vertices and edges, respectively
• Since the time required to find a shortest augmenting path by breadth-first
search is in O(n + m) = O(m) for networks represented by their adjacency
lists, the time efficiency of the shortest-augmenting-path algorithm is in
O(nm²) for this representation.
• More efficient algorithms have been found that can run in close to O(nm)
time, but these algorithms don't fall into the iterative-improvement
paradigm.
A matching in a graph is a subset of its edges with the property that no two
edges share a vertex. A maximum (or maximum-cardinality) matching is a
matching with the largest number of edges. For a given matching M, a vertex is
called free (or unmatched) if it is not an endpoint of any edge in M; otherwise, a
vertex is said to be matched.
• If every vertex is matched, then M is a maximum matching.
• If there are unmatched or free vertices, then M may be able to be improved.
• We can immediately increase a matching by adding an edge connecting
two free vertices (e.g., edge (1, 6) in the example above).
Initialize queue Q with all free vertices in one of the sets (say V)
While Q is not empty, delete front vertex w and label every unlabeled
vertex u
adjacent to w as follows:
Maximize P = 3x + 4y
subject to x+3y≤30 ; 2x+y≤20
SIMPLEX METHOD
o The classic method for solving LP problems; one of the most
important algorithms ever invented
o Simplex method was invented by George Dantzig in 1947
o Based on the iterative improvement idea:
o Generates a sequence of adjacent points of the problem‗s feasible
region with improving values of the objective function until no further
improvement is possible
o Standard Linear Programming
Problem:
o must be a maximization problem
o all constraints (except the non negativity constraints) must be
in the form of linear equations
o all the variables must be required to be nonnegative
Thus, the general linear programming problem in
standard form with m
constraints and n unknowns (n ≥ m) is
maximize c1x1 + ... + cnxn
subject to ai1x1 + ... + ainxn = bi, i = 1, ..., m,
x1 ≥ 0, ..., xn ≥ 0
Every LP problem can be represented in such form:
maximize 3x + 5y                 maximize 3x + 5y + 0u + 0v
subject to x + y ≤ 4             subject to x + y + u     = 4
           x + 3y ≤ 6                       x + 3y     + v = 6
x ≥ 0, y ≥ 0                     x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0
A basic solution is called feasible if all its (basic) variables are nonnegative.
Example: x + y + u = 4
         x + 3y + v = 6
(0, 0, 4, 6) is a basic feasible solution (x, y are nonbasic; u, v are basic).
Step 4 [Form the next tableau] Divide all the entries in the pivot row by its
entry in the pivot column. Subtract from each of the other rows, including
the objective row, the new pivot row multiplied by the entry in the pivot
column of the row in question. Replace the label of the pivot row by the
variable's name of the pivot column and go back to Step 1.
PART-C
2. The shortest augmenting path algorithm yields both a maximum flow and a
minimum cut:
• All the edges from the labeled to unlabeled vertices are full, i.e.,
their flow amounts are equal to the edge capacities, while all the
edges from the unlabeled to labeled vertices, if any, have zero flow
amounts on them.
PART – A
solution.
Non-Promising Node: a node in a state-space tree is said to be non-promising
if it cannot lead to a complete solution; when such a node is reached, the
algorithm backtracks to the node's parent to consider the next possible option
for its last component.
11. Compare backtracking and branch and bound technique. April / May
2015
Backtracking Technique: constructs its state-space tree in the depth-first-search
fashion in the majority of its applications.
Branch-and-Bound Technique: an algorithm design technique that enhances the
idea of generating a state-space tree with the idea of estimating the best value
obtainable from a current node of the decision tree.
14. State the reason for terminating search path at the current node in
branch bound algorithm (Nov/Dec 2016)
The value of the node's bound is not better than the value of the best solution seen
so far. The node represents no feasible solutions because the constraints of the
problem are already violated.
PART – B
Algorithm Place(k, i)
// returns true if a queen can be placed in column i of row k
{
    for j := 1 to k - 1 do
        if (x[j] = i) or (abs(x[j] - i) = abs(j - k)) then
            return false;
    return true;
}

Algorithm NQueens(k, n)
// tries all columns for the queen in row k; prints complete solutions
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            x[k] := i;
            if (k = n) then write (x[1..n]);
            else NQueens(k + 1, n);
        }
    }
}
Example
N=4
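A runnable Python sketch following the Place/NQueens pseudocode above (1-based x[], as in the pseudocode):

    def n_queens(n):
        """Print every placement of n non-attacking queens, one per row."""
        x = [0] * (n + 1)                # x[k] = column of the queen in row k

        def place(k, i):
            # no earlier queen in the same column or on a shared diagonal
            return all(x[j] != i and abs(x[j] - i) != abs(j - k)
                       for j in range(1, k))

        def solve(k):
            for i in range(1, n + 1):
                if place(k, i):
                    x[k] = i
                    if k == n:
                        print(x[1:])     # a complete solution
                    else:
                        solve(k + 1)

        solve(1)

    # n_queens(4) prints [2, 4, 1, 3] and [3, 1, 4, 2]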
void NextValue(int k)
{
    /* generate the next vertex x[k]: a vertex adjacent to x[k-1] that is
       distinct from x[1..k-1]; x[k] becomes 0 when no such vertex is left */
    do
    {
        x[k] = (x[k] + 1) % (n + 1);
        if (!x[k]) return;
        if (G[x[k-1]][x[k]])
        {
            int j;
            for (j = 1; j <= k - 1; j++)
                if (x[j] == x[k]) break;
            if (j == k)                      /* x[k] is distinct */
                if ((k < n) || ((k == n) && G[x[n]][x[1]]))
                    return;
        }
    } while (1);
}

void Hamiltonian(int k)
{
do
{
NextValue(k);
if(!x[k]) return;
if(k==n)
{
            for (int i = 1; i <= n; i++)
                printf("%d ", x[i]);   /* print the cycle found */
}
else Hamiltonian(k+1);
}while(1);
}
Example: State-space tree:
3. Explain how the travelling salesman problem is solved using branch and
bound technique.
For example, for the instance above the lower bound is:
lb = [(1+ 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2 = 14.
The bounding function can be used to find the shortest Hamiltonian circuit for
the given graph.
Root:
First, without loss of generality, consider only tours that start at a.
First level
Second, because the graph is undirected, we can generate only tours in which b
is visited before c. In addition, after visiting n − 1 = 4 cities, a tour has no choice
but to visit the remaining unvisited city and return to the starting one.
Lower bound if edge (a, b) is chosen:
lb = ceil([(3+1) + (3+6) + (1+2) + (4+3) + (2+3)]/2) = 14.
Edge (a, c) is not included, since b is visited before c.
Lower bound if edge (a, d) is chosen:
lb = ceil([(5+1) + (3+6) + (1+2) + (5+3) + (2+3)]/2) = 16.
Lower bound if edge (a, e) is chosen:
lb = ceil([(8+1) + (3+6) + (1+2) + (4+3) + (2+8)]/2) = 19.
Since the lower bound for edge (a, b) is the smallest among all the edges, it is
included in the solution, and the state-space tree is expanded from this node.
Second level
The choice of edges should be made between three vertices: c, d and e.
Lower bound if edge (b, c) is chosen; the path taken will be a -> b -> c:
lb = ceil([(3+1) + (3+6) + (1+6) + (4+3) + (2+3)]/2) = 16.
Lower bound if edge (b, d) is chosen; the path taken will be a -> b -> d:
lb = ceil([(3+1) + (3+7) + (7+3) + (1+2) + (2+3)]/2) = 16.
Lower bound if edge (b, e) is chosen; the path taken will be a -> b -> e:
lb = ceil([(3+1) + (3+9) + (2+9) + (1+2) + (4+3)]/2) = 19. (Since this lb is
larger than the other values, further expansion from this node is stopped.)
The paths a -> b -> c and a -> b -> d are more promising. Hence the state-space
tree is expanded from those nodes.
Next level
a -> b -> c -> d -> e -> a
a -> b -> c -> e -> d -> a
a -> b -> d -> c -> e -> a
a -> b -> d -> e -> c -> a
KNAPSACK PROBLEM
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack
of capacity W, find the most valuable subset of the items that fit in the knapsack.
It is convenient to order the items of a given instance in descending order by
their value-to-weight ratios. Then the first item gives the best payoff per weight
unit and the last one gives the worst payoff per weight unit, with ties resolved
arbitrarily:
v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn
W = 10
Hence the items in the knapsack are {item 1, item 3} with the profit $65.
This problem deals with the order in which the tree's nodes will be
generated. Rather than generating a single child of the last promising node,
as in backtracking, all the children of the most promising node among non-
terminated leaves in the current tree are generated.
To find which of the nodes is most promising, compare the lower bounds
of the live node. It is sensible to consider a node with the best
bound as most promising, although this does not, of course, preclude the
possibility that an optimal solution will ultimately belong to a different branch
of the state-space tree.
The nodes on the first level of the tree correspond to the four elements (jobs)
in the first row of the matrix, since each is a potential selection for the
first component of the solution. So there are four live leaves (nodes 1
through 4) that may contain an optimal solution. The most promising of them
is node 2, because it has the smallest lower-bound value.
By following the best-first search strategy, branch out from that node first
by considering the three different ways of selecting an element from the
second row and not in the second column—the three different jobs that
can be assigned to person b.
Of the six live leaves (nodes 1, 3, 4, 5, 6, and 7) that may contain an
optimal solution, we again choose the one with the smallest lower bound,
node 5.
First, consider selecting the third column's element from c's row (i.e.,
assigning person c to job 3); this leaves no choice but to select the element
from the fourth column of d's row (assigning person d to job 4).
This yields leaf 8, which corresponds to the feasible solution (a→2, b→1,
c→3, d→4) with the total cost of 13. Its sibling, node 9, corresponds to the
feasible solution (a→2, b→1, c→4, d→3) with the total cost of 25. Since its
cost is larger than the cost of the solution represented by leaf 8, node 9 is
simply terminated.
o Note that if its cost were smaller than 13, the information about the best
solution seen so far would have to be replaced with the data provided by
this node.
o Now, as we inspect each of the live leaves of the last state-space tree
(nodes 1, 3, 4, 6, and 7 in the following figure), we discover that their
lower-bound values are not smaller than 13, the value of the best selection
seen so far (leaf 8).
o Hence all of them are terminated, and the solution represented by leaf 8 is
recognized as the optimal solution to the problem.
sa: A – B – C – D – A, of length 10
s*: A – B – D – C – A, of length 8
Multifragment-heuristic algorithm
Step 1 Sort the edges in increasing order of their weights. (Ties can be broken
arbitrarily.) Initialize the set of tour edges to be constructed to the empty set.
Step 2 Repeat this step until a tour of length n is obtained, where n is the number of
cities in the
instance being solved: add the next edge on the sorted edge list to the set of tour
edges, provided this addition does not create a vertex of degree 3 or a cycle of
length less than n; otherwise, skip the edge.
Step 3 Return the set of tour edges.
This algorithm yields the set of edges for the graph shown above:
{(a, b), (c, d), (b, c), (a, d)}.
Minimum-spanning-tree-based algorithms
There are approximation algorithms for the traveling salesman problem that exploit
a connection between Hamiltonian circuits and spanning trees of the same graph.
Since removing an edge from a Hamiltonian circuit yields a spanning tree, we
can expect that the structure of a minimum spanning tree provides a good basis
for constructing a shortest tour approximation. Here is an algorithm that
implements this idea in a rather straightforward fashion.
Twice-around-the-tree algorithm:
Step 1 Construct a minimum spanning tree of the graph corresponding to a
given instance of the problem.
Step 2 Starting at an arbitrary vertex, perform a walk around the minimum
spanning tree, recording all the vertices passed by.
Step 3 Scan the vertex list obtained in Step 2 and eliminate from it all repeated
occurrences of the same vertex except the starting one at the end of the list.
(This step is equivalent to making shortcuts in the walk.) The vertices remaining
on the list will form a Hamiltonian circuit, which is the output of the algorithm.
Christofides algorithm:
It also uses a minimum spanning tree but does this in a more sophisticated
way than the twice- around-the-tree algorithm.
Stage 1: Construct a minimum spanning tree of the graph.
Stage 2: Add the edges of a minimum-weight matching of all the odd-degree
vertices in the minimum spanning tree.
Stage 3: Find an Eulerian circuit of the multigraph obtained in Stage 2.
Stage 4: Create a tour from the circuit constructed in Stage 3 by making
shortcuts to avoid visiting intermediate vertices more than once.
Greedy algorithm for the discrete knapsack problem:
Step 1 Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items
given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until no item is left in the sorted list:
if the current item on the list fits into the knapsack, place it in the knapsack;
otherwise, proceed to the next item. A runnable sketch follows.
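The three steps above translate into a short Python sketch (the capacity W = 10 and the instance in the demo call are assumptions for illustration, matching the table below):

    def greedy_knapsack(items, W):
        """items: list of (weight, value) pairs; returns (chosen 1-based
        indices, total value). A heuristic: the result may not be optimal."""
        order = sorted(range(len(items)),
                       key=lambda i: items[i][1] / items[i][0],
                       reverse=True)               # by value-to-weight ratio
        taken, total_w, total_v = [], 0, 0
        for i in order:
            w, v = items[i]
            if total_w + w <= W:                   # place the item if it fits
                taken.append(i + 1)
                total_w += w
                total_v += v
        return sorted(taken), total_v

    # greedy_knapsack([(4, 40), (7, 42), (5, 25), (1, 4)], 10)
    # returns ([1, 3, 4], 69)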
Approximation scheme by S. Sahni:
This algorithm generates all subsets of k items or less, and for each one that
fits into the knapsack, it adds the remaining items as the greedy algorithm
would (i.e., in non increasing order of their value-to-weight ratios). The subset
of the highest value obtained in this fashion is returned as the algorithm's
output.
Example: A small example of an approximation scheme with k = 2
and instances given below:
Item   Weight   Value   Value/Weight
1      4        $40     10
2      7        $42     6
3      5        $25     5
4      1        $4      4
PART-C
1.Explain in detail about P,NP,NP complete and NP hard
PROBLEMS
P: The class of decision problems that can be solved in polynomial time by a
deterministic algorithm.
NP:
There are a lot of programs that don't run in polynomial time on a
regular computer, but do run in polynomial time on a
nondeterministic Turing machine.
These programs solve problems in NP, which stands for
nondeterministic polynomial time.
NP problems are questions that have yes/no answers that are easy to
verify, but are hard to solve. That means it would take years or
centuries for your computer to come up with an answer.
For example, Given the cities and distances, is there a route that
covers all the cities, returning to the starting point, in less than x
distance?
Two stages in solving NP-class problems:
Guessing (nondeterministic) stage: an arbitrary candidate solution is generated.
Verification stage: the candidate is checked for being an actual solution in
polynomial time; verification is easy even though finding a solution is hard.
NP-COMPLETE:
A problem is NP-complete if it belongs to NP and every problem in NP is
polynomially reducible to it (e.g., CNF-satisfiability).
NP-HARD:
Problems that are at least as hard as every problem in NP but need not belong
to NP themselves. The classical undecidable problems below are standard
examples:
Halting problem.
Post correspondence problem.
Hilbert's tenth problem: the problem of deciding whether a Diophantine
equation (multivariable polynomial equation) has a solution in integers.
Determining if a context-free grammar generates all possible strings, or
if it is ambiguous.
The word problem in algebra and computer science.
The word problem for certain formal languages.
Determining whether a λ-calculus formula has a normal form.