DAA Backup


Program: Implement Binary Search using the divide and conquer technique.


A binary search or half-interval search algorithm finds the position of a specified
value (the input "key") within a sorted array. In each step, the algorithm
compares the input key value with the key value of the middle element of the
array. If the keys match, then a matching element has been found so its index, or
position, is returned. Otherwise, if the sought key is less than the middle
element's key, then the algorithm repeats its action on the sub-array to the left of
the middle element or, if the input key is greater, on the sub-array to the right. If
the remaining array to be searched is reduced to zero, then the key cannot be
found in the array and a special "Not found" indication is returned.

Algorithm:

Step 1: if (low > high) then return -1


Step 2: Otherwise, compute mid = (low + high) / 2
Step 3: Let X be the key. If a[mid] = X then return mid
Step 4: If a[mid] > X then search for X from a[low] to a[mid-1]
Step 5: If a[mid] < X then search for X from a[mid + 1] to a[high]

Properties:

1. Best-case performance – O(1) (the middle element is equal to the input key)
2. Worst-case performance – O(log n) (the input key is not present in the list)
3. Average-case performance – O(log n) (the input key is present, but it is not
the middle element)

It is faster than the linear search algorithm, and its advantage over linear
search grows as N grows.
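For comparison with the iterative lab program that follows, here is a minimal recursive sketch of the same divide-and-conquer steps (illustrative only, not part of the lab program; it assumes a 0-based sorted array and returns -1 when the key is absent):

#include <stdio.h>

/* Recursive binary search: index of key in a[low..high], or -1 if absent */
int binSearchRec(int a[], int low, int high, int key)
{
    int mid;
    if (low > high)                      /* Step 1: empty range            */
        return -1;
    mid = (low + high) / 2;              /* Step 2: middle element         */
    if (a[mid] == key)                   /* Step 3: found                  */
        return mid;
    if (a[mid] > key)                    /* Step 4: search the left half   */
        return binSearchRec(a, low, mid - 1, key);
    return binSearchRec(a, mid + 1, high, key);   /* Step 5: right half    */
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binSearchRec(a, 0, n - 1, 23));   /* prints 5 */
    return 0;
}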

1. Implement Binary Search using Divide and Conquer
approach

#include<stdio.h>
#include<conio.h>

void binSearch(int key,int a[],int n)
{
    int lower=0,upper=n-1,mid;
    int j=-1;                      /* index of the key, -1 if not found */
    while(upper>=lower)
    {
        mid=(upper+lower)/2;
        if(key==a[mid])
        {
            j=mid;
            break;
        }
        else
        {
            if(key>a[mid])
                lower=mid+1;       /* search the right half */
            else
                upper=mid-1;       /* search the left half  */
        }
    }
    if(j==-1)
        printf("The number not found");
    else
        printf("The number is found");
    getch();
}

int main()
{
    int a[10],i,n,key;
    printf("Enter the size of an array: ");
    scanf("%d",&n);
    printf("Enter the elements in ascending order: ");
    for(i=0;i<n;i++)
    {
        scanf("%d",&a[i]);
    }
    printf("Enter the number to be searched: ");
    scanf("%d",&key);
    binSearch(key,a,n);
    getch();
    return 0;
}

2. Find Maximum and Minimum element from an array of integers using
Divide and Conquer approach
#include<stdio.h>
#include<conio.h>

/* globals: the array, the running max/min of the current half,
   and the max/min of each half (1000 acts as a sentinel, so the
   elements are assumed to be smaller than 1000) */
int a[100],max=0,min=1000,mid,n;
int max1,max2,min1,min2;

void max_min(int j,int n)
{
    int p=0,i;
    if(p==j)
    {
        max=min=a[p];              /* only one element in the array */
        printf("max is:%d and min is : %d ",max,min);
    }
    else if(p==j-1)
    {
        if(a[p]<a[j])              /* exactly two elements: one comparison */
        {
            max=a[j];
            min=a[p];
        }
        else
        {
            max=a[p];
            min=a[j];
        }
        printf("max is:%d and min is :%d ",max,min);
    }
    else
    {
        mid=(int)((p+j)/2);
        for(i=0;i<mid;i++)         /* max/min of the first half */
        {
            if(a[i]>max)
                max=a[i];
            if(a[i]<min)
                min=a[i];
        }
        max1=max;
        min1=min;
        max=0;
        min=1000;
        for(i=mid;i<n;i++)         /* max/min of the second half */
        {
            if(a[i]>max)
                max=a[i];
            if(a[i]<min)
                min=a[i];
        }
        max2=max;
        min2=min;
        if(max1<max2)              /* combine the results of the two halves */
            max=max2;
        else
            max=max1;
        if(min1<min2)
            min=min1;
        else
            min=min2;
        printf("\nmaximum is:%d\n",max);
        printf("\n minimum is:%d\n",min);
    }
}

int main()
{
    int i,j;
    printf("enter the size of the array");
    scanf("%d",&n);
    printf("enter the elements of the array");
    for(i=0;i<n;i++)
    {
        scanf("%d",&a[i]);
    }
    j=n-1;
    max_min(j,n);
    getch();
    return 0;
}

Merge Sort
Merge sort or Mergesort is an O(n log n) sorting algorithm. It is easy to implement
merge sort such that it is stable, meaning it preserves the input order of equal elements
in the sorted output. It is an example of the divide and conquer algorithmic paradigm. It
is a comparison sort. The algorithm was invented by John von Neumann in 1945.

Merge sort recursively divides the array into smaller subarrays, sorts them,
and then merges them to form a completely sorted array.

Conceptually, merge sort works as follows:

1. Divide the unsorted list into two sublists of about half the size
2. Divide each of the two sublists recursively until we have list sizes of length 1, in
which case the list itself is returned
3. Merge the two sorted sublists back into one sorted list.

Description of the Implementation of Merge Sort:


The program contains 2 important functions.

1. merge() - This function takes as argument the bounds of parts of array to be


merged. These two parts are contiguous and hence only 3 bounds are required.
These parts are copied in separate arrays and then actual merging takes place.
There are two pointers to the beginning of these arrays. The first elements are
compared and the smaller one is copied to the original array and its pointer is
incremented. This process continues till one of the arrays is exhausted. After this
the remaining array is just copied. Thus the two arrays are merged and the result
is a sorted array part.
2. merge_sort() - This function breaks the array into two nearly equal parts and calls
merge_sort for these parts. Then it calls the merge() function to merge these parts.

Merge Sort is a Divide and Conquer algorithm. It divides input array in two
halves, calls itself for the two halves and then merges the two sorted
halves. The merge() function is used for merging two halves. merge(arr,
l, m, r) is the key step: it assumes that arr[l..m] and arr[m+1..r] are sorted
and merges the two sorted sub-arrays into one. See following C
implementation for details.

ALGORITHM:

MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)

Complexity Analysis of Merge Sort


Merge Sort is quite fast, and has a time complexity of O(n*log n). It is also a
stable sort, which means the "equal" elements are ordered in the same order
in the sorted list.

In this section we will understand why the running time for merge sort
is O(n*log n).
As we have already seen for binary search, whenever we divide a
number in half at every step, the process can be represented using a logarithmic
function, log n, and the number of steps can be represented by log n
+ 1 (at most).

Also, we perform a single step operation to find out the middle of any
subarray, i.e. O(1).
And to merge the subarrays, made by dividing the original array
of n elements, a running time of O(n) will be required.
Hence the total time for mergeSort function will become n(log n + 1), which
gives us a time complexity of O(n*log n).
Worst Case Time Complexity [ Big-O ]: O(n*log n)
Best Case Time Complexity [Big-omega]: O(n*log n)
Average Time Complexity [Big-theta]: O(n*log n)
Space Complexity: O(n)
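The O(n*log n) bound can also be read off the standard divide-and-conquer recurrence. A sketch of the usual derivation (assuming, for simplicity, that n is a power of 2 and that splitting plus merging costs c·n per level):

T(n) = 2\,T(n/2) + c\,n, \qquad T(1) = c

Unrolling the recurrence over its \log_2 n levels gives

T(n) = c\,n\,\log_2 n + c\,n = O(n \log n),

which matches the complexities listed above.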

 Time complexity of Merge Sort is O(n*log n) in all the 3 cases (worst,
average and best), as merge sort always divides the array in two halves
and takes linear time to merge the two halves.
 It requires an amount of additional space equal to the unsorted array.
Hence, because of this overhead, it is not recommended for sorting very
large arrays when memory is tight.
 It is the best sorting technique for sorting linked lists.

Example:
Let the array be 80 32 56 37 69 76.
It is first partitioned into two parts:

80 32 56 | 37 69 76

Now 80 32 56 is partitioned into two parts:

80 | 32 56

Since 80 is a single-element list, it is trivially sorted.

Now 32 56 is partitioned into two parts:

32 | 56

Again, these are trivially sorted.

Merging 32 | 56 results in 32 56,
and merging 80 | 32 56 results in 32 56 80.

Now 37 69 76 is partitioned into two parts:

37 | 69 76

Since 37 is a single-element list, it is trivially sorted.

Now 69 76 is partitioned into two parts:

69 | 76

Again, these are trivially sorted.

Merging 69 | 76 results in 69 76,
and merging 37 | 69 76 results in 37 69 76.

At last, merging 32 56 80 | 37 69 76 results in the sorted array

32 37 56 69 76 80

3. Implement Merge Sort using Divide and Conquer approach

#include<stdio.h>
#include<conio.h>

void merge(int A[],int lower1,int upper1,int lower2,int upper2)
{
    int p,q,j,n;
    int D[50];                     /* temporary buffer, sized to match A */
    p=lower1;
    q=lower2;
    n=0;
    while((p<=upper1)&&(q<=upper2))
    {
        D[n++]=(A[p]<A[q]?A[p++]:A[q++]);   /* copy the smaller front element */
    }
    while(p<=upper1)               /* copy whatever is left of either part */
        D[n++]=A[p++];
    while(q<=upper2)
        D[n++]=A[q++];
    for(q=lower1,n=0;q<=upper1;q++,n++)     /* copy the merged result back */
        A[q]=D[n];
    for(q=lower2,j=n;q<=upper2;q++,j++)
        A[q]=D[j];
}

void mergesort(int A[],int lower,int upper)
{
    int mid;
    if(upper>lower)
    {
        mid=(lower+upper)/2;
        mergesort(A,lower,mid);
        mergesort(A,mid+1,upper);
        merge(A,lower,mid,mid+1,upper);
    }
}

int main()
{
    int i,lower,upper,n;
    int A[50];
    printf("enter the number of elements");
    scanf("%d",&n);
    printf("enter the numbers");
    for(i=0;i<n;i++)
    {
        scanf("%d",&A[i]);
    }
    lower=0;
    upper=n-1;
    mergesort(A,lower,upper);
    for(i=0;i<n;i++)
    {
        printf(" %d ",A[i]);
    }
    getch();
    return(0);
}

Quicksort
Quicksort is a fast sorting algorithm which is used not only for educational
purposes but is also widely applied in practice. On average, it has O(n log n)
complexity, making quicksort suitable for sorting big data volumes. The idea of
the algorithm is quite simple, and once you understand it you can write quicksort
almost as easily as bubble sort.

Algorithm
The divide-and-conquer strategy is used in quicksort. Below the recursion step is
described:

1. Choose a pivot value. We take the value of the middle element as the pivot
value, but it can be any value that is in the range of the sorted values, even
if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than
the pivot go to the left part of the array and all elements greater than the
pivot go to the right part. Values equal to the pivot can stay in either part.
Notice that the array may be divided into parts of unequal size.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and the
right parts.

Partition algorithm in detail

There are two indices i and j. At the very beginning of the partition algorithm i
points to the first element in the array and j points to the last one. Then the
algorithm moves i forward until an element with a value greater than or equal to
the pivot is found. Index j is moved backward until an element with a value less
than or equal to the pivot is found. If i ≤ j, the elements are swapped, i steps to
the next position (i + 1) and j steps to the previous one (j - 1). The algorithm
stops when i becomes greater than j.

After partition, all values before the i-th element are less than or equal to the
pivot and all values after the j-th element are greater than or equal to the pivot.
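A minimal sketch of the partition scheme just described, with the middle element as the pivot (the lab program in section 4 below uses a related but slightly different formulation with the first element as the pivot):

#include <stdio.h>

/* Partition a[left..right] around the middle element; on return, a[left..j]
   holds values <= pivot and a[i..right] holds values >= pivot. */
void partitionMiddle(int a[], int left, int right, int *pi, int *pj)
{
    int i = left, j = right;
    int pivot = a[(left + right) / 2];
    while (i <= j) {
        while (a[i] < pivot) i++;        /* move i forward          */
        while (a[j] > pivot) j--;        /* move j backward         */
        if (i <= j) {                    /* swap and step both ways */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    *pi = i; *pj = j;
}

void quicksortSketch(int a[], int left, int right)
{
    int i, j;
    if (left >= right) return;
    partitionMiddle(a, left, right, &i, &j);
    quicksortSketch(a, left, j);
    quicksortSketch(a, i, right);
}

int main(void)
{
    int a[] = {1, 12, 5, 26, 7, 14, 3, 7, 2};   /* the array of the example below */
    int n = sizeof(a) / sizeof(a[0]), k;
    quicksortSketch(a, 0, n - 1);
    for (k = 0; k < n; k++) printf("%d ", a[k]);   /* 1 2 3 5 7 7 12 14 26 */
    printf("\n");
    return 0;
}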

Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.

Notice that only the first recursion step is shown here, in order not to make the
example too long. After it, {1, 2, 5, 7, 3} and {14, 7, 26, 12} are sorted
recursively.

Complexity Analysis of Quick Sort
For an array in which partitioning leads to unbalanced subarrays, to the extent
that one side has no elements and all the elements greater than the pivot end up
on the other side, and this keeps happening at every level, the running time is
the worst case, which is O(n²).

Whereas if partitioning leads to almost equal subarrays, the running

time is the best, with time complexity O(n*log n).
Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big-omega]: O(n*log n)
Average Time Complexity [Big-theta]: O(n*log n)
Space Complexity: O(log n) on average (recursion stack), O(n) in the worst case

As we know now, if the subarrays produced by partitioning are unbalanced,

quicksort will take more time to finish. If someone knows that you always pick
the last index as the pivot, they can intentionally provide an array that results
in worst-case running time for quicksort.
To avoid this, you can pick a random pivot element instead. It makes no
difference to the algorithm: all you need to do is pick a random element
from the array, swap it with the element at the last index, make it the pivot and
carry on with quicksort.
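A minimal sketch of that randomization step (illustrative only; rand() comes from <stdlib.h>, and pivotIndex is whatever slot the partition routine reads its pivot from):

#include <stdlib.h>    /* rand() */

/* Swap a randomly chosen element of a[lower..upper] into the pivot slot,
   so that an adversarial input ordering cannot force the O(n^2) worst case. */
void randomizePivot(int a[], int lower, int upper, int pivotIndex)
{
    int r = lower + rand() % (upper - lower + 1);   /* random index in [lower, upper] */
    int t = a[pivotIndex];
    a[pivotIndex] = a[r];
    a[r] = t;
}

With a last-index pivot, the call just before partitioning would be randomizePivot(a, lower, upper, upper); with the first-element pivot used by program 4 below, it would be randomizePivot(a, lower, upper, lower).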

 Space required by quicksort is very little: only O(log n) additional space

(for the recursion stack) is needed on average.
 Quicksort is not a stable sorting technique, so it might change the relative
order of equal elements in the list while sorting.

4. Implement Quick Sort using Divide and Conquer
approach

#include<stdio.h>
#include<conio.h>

int partition(int a[],int lower,int upper)
{
    int i,p,q,t;
    p=lower;
    q=upper+1;
    i=a[lower];                    /* the first element of the range is the pivot */
    while(q>=p)
    {
        while(a[++p]<i);           /* scan right for an element >= pivot */
        while(a[--q]>i);           /* scan left for an element <= pivot  */
        if(q>p)
        {
            t=a[p];                /* swap the out-of-place pair */
            a[p]=a[q];
            a[q]=t;
        }
    }
    t=a[lower];                    /* put the pivot into its final position */
    a[lower]=a[q];
    a[q]=t;
    return q;
}

void quicksort(int a[],int lower,int upper)
{
    int i;
    if(upper>lower)
    {
        i=partition(a,lower,upper);
        quicksort(a,lower,i-1);
        quicksort(a,i+1,upper);
    }
}

int main()
{
    int a[100],lower,upper,i,n;
    printf("\tenter the no of elements to be inserted:\t");
    scanf("%d",&n);
    lower=0;
    upper=n-1;
    printf("\tenter the elements\n");
    for(i=0;i<n;i++)
    {
        printf("\t%d : ",i);
        scanf("%d",&a[i]);
    }
    quicksort(a,lower,upper);
    printf("sorted list is:\n");
    for(i=0;i<n;i++)
        printf("%d\t",a[i]);
    getch();
    return 0;
}

Dynamic Programming (Matrix Chain Multiplication)
Given a sequence of matrices, find the most efficient way to multiply these matrices together. The
problem is not actually to perform the multiplications, but merely to decide in which order to
perform the multiplications.

We have many options to multiply a chain of matrices because matrix multiplication is associative.
In other words, no matter how we parenthesize the product, the result will be the same. For example,
if we had four matrices A, B, C, and D, we would have:
(ABC)D = (AB)(CD) = A(BCD) = ....

However, the order in which we parenthesize the product affects the number of simple arithmetic
operations needed to compute the product, or the efficiency. For example, suppose A is a 10 × 30
matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then,
(AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.

Clearly the first parenthesization requires fewer operations.

We are given an array p[] which represents the chain of matrices such that the ith matrix Ai is of
dimension p[i-1] x p[i]. We need to write a function MatrixChainOrder() that returns the minimum
number of multiplications needed to multiply the chain.
Input: p[] = {40, 20, 30, 10, 30}
Output: 26000
There are 4 matrices of dimensions 40x20, 20x30, 30x10 and 10x30.
Let the input 4 matrices be A, B, C and D. The minimum number of multiplications are obtained
by putting parenthesis in following way
(A(BC))D --> 20*30*10 + 40*20*10 + 40*10*30

Input: p[] = {10, 20, 30, 40, 30}


Output: 30000
There are 4 matrices of dimensions 10x20, 20x30, 30x40 and 40x30.
Let the input 4 matrices be A, B, C and D. The minimum number of multiplications are obtained
by putting parenthesis in following way
((AB)C)D --> 10*20*30 + 10*30*40 + 10*40*30
Input: p[] = {10, 20, 30}
Output: 6000
There are only two matrices of dimensions 10x20 and 20x30. So there
is only one way to multiply the matrices, cost of which is 10*20*30

1) Optimal Substructure:

A simple solution is to place parentheses at all possible places, calculate the cost for each placement
and return the minimum value. In a chain of n matrices, we can place the first set of parentheses
in n-1 ways. For example, if the given chain is of 4 matrices, say ABCD, then there are 3 ways to
place the first set of parentheses: A(BCD), (AB)(CD) and (ABC)D. So when we place a set of
parentheses, we divide the problem into subproblems of smaller size. Therefore, the problem has
the optimal substructure property and can be easily solved using recursion.

Minimum number of multiplications needed to multiply a chain of size n = minimum over all n-1
placements (these placements create subproblems of smaller size).

2) Overlapping Subproblems

The following is a recursive implementation that simply follows the above optimal-substructure
property. The plain recursion recomputes the same subproblems many times; storing their results
in a table (memoization, or the bottom-up version sketched next) brings the cost down to:

Time Complexity: O(n^3)

Auxiliary Space: O(n^2)
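A sketch of that tabulated (bottom-up) version, using dp[i][j] for the minimum cost of multiplying the chain Ai..Aj (illustrative only; the lab program in section 5 below is the plain recursive form):

#include <stdio.h>
#include <limits.h>

/* p[] holds the dimensions: matrix i is p[i-1] x p[i], for i = 1..n-1 */
int matrixChainDP(int p[], int n)
{
    int dp[20][20];
    int i, j, k, len, cost;
    for (i = 1; i < n; i++)
        dp[i][i] = 0;                        /* a single matrix costs nothing */
    for (len = 2; len < n; len++)            /* chain length */
        for (i = 1; i <= n - len; i++) {
            j = i + len - 1;
            dp[i][j] = INT_MAX;
            for (k = i; k < j; k++) {        /* try every split point */
                cost = dp[i][k] + dp[k+1][j] + p[i-1] * p[k] * p[j];
                if (cost < dp[i][j])
                    dp[i][j] = cost;
            }
        }
    return dp[1][n-1];
}

int main(void)
{
    int p[] = {40, 20, 30, 10, 30};
    int n = sizeof(p) / sizeof(p[0]);
    printf("%d\n", matrixChainDP(p, n));     /* prints 26000, as in the example above */
    return 0;
}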

5. Find the minimum number of scalar multiplications
needed for a chain of matrices

#include<stdio.h>
#include<conio.h>
#include<limits.h>

int matrixchain(int p[],int i,int j)
{
    int k,count;
    int min=INT_MAX;
    if(i==j)                       /* a single matrix needs no multiplication */
        return 0;
    for(k=i;k<j;k++)               /* try every split point */
    {
        count=matrixchain(p,i,k)+matrixchain(p,k+1,j)+p[i-1]*p[k]*p[j];
        if(count<min)
            min=count;
    }
    return min;
}

void main()
{
    int arr[4]={10,100,5,50};
    int n=sizeof(arr)/sizeof(arr[0]);
    printf("\n The value of n is %d",n);
    printf("\n The optimal solution is %d ",matrixchain(arr,1,n-1));
    getch();
}

Floyd-Warshall algorithm
The Floyd-Warshall algorithm is a procedure used to find the shortest (or longest) paths among all
pairs of nodes in a graph that does not contain any cycles of negative length. The main advantage
of the Floyd-Warshall algorithm is its simplicity.

DESCRIPTION

The Floyd-Warshall algorithm uses a matrix of lengths as its input. If there is an edge between nodes
i and j, then the matrix contains its length at the corresponding position (i, j). The diagonal of the
matrix contains only zeros. If there is no edge between nodes i and j, then the position (i, j) contains
positive infinity. In other words, the matrix represents the lengths of all paths between nodes that do
not use any intermediate node.

In each iteration of the Floyd-Warshall algorithm this matrix is recalculated, so that it contains the
lengths of paths among all pairs of nodes using a gradually enlarging set of intermediate nodes. The
matrix D^1, which is created by the first iteration of the procedure, contains path lengths that use
exactly one (predefined) intermediate node. D^2 contains lengths using two predefined intermediate
nodes. Finally, the matrix D^n uses all n nodes as possible intermediate nodes.

This transformation can be described using the following recurrence:

D^k[i][j] = min( D^(k-1)[i][j], D^(k-1)[i][k] + D^(k-1)[k][j] )

Because this transformation never rewrites elements which are still to be used to

calculate the new matrix, we can use the same matrix for both D^k and D^(k-1).

Input:
graph[][] = { {0, 5, INF, 10},
{INF, 0, 3, INF},
{INF, INF, 0, 1},
{INF, INF, INF, 0} }
which represents the following graph
         10
   (0)------->(3)
    |        /|\
  5 |         |
    |         | 1
   \|/        |
   (1)------->(2)
         3
Note that the value of graph[i][j] is 0 if i is equal to j
And graph[i][j] is INF (infinite) if there is no edge from vertex i to j.

Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0

Floyd-Warshall Algorithm
n = number of vertices

A = matrix of dimension n*n (A^0 is the input cost matrix)

for k = 1 to n

    for i = 1 to n

        for j = 1 to n

            A^k[i, j] = min( A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j] )

return A^n

6. Implement all-pairs shortest paths for a graph
(Floyd-Warshall Algorithm)

#include<stdio.h>
#include<conio.h>

void floyd(int cost[10][10],int n)
{
    int i,j,k,t,x[10][10];
    for(i=0;i<n;i++)
        for(j=0;j<n;j++)
            x[i][j]=cost[i][j];
    for(k=0;k<n;k++)
        for(i=0;i<n;i++)
            for(j=0;j<n;j++)
            {
                if((x[i][k]==1000)||(x[k][j]==1000))   /* 1000 stands for "no edge" */
                    t=1000;
                else
                    t=x[i][k]+x[k][j];
                x[i][j]=(x[i][j]>t)?t:x[i][j];
            }
    printf("the dist is:\n");
    for(i=0;i<n;i++)
    {
        for(j=0;j<n;j++)
        {
            printf("%d ",x[i][j]);
        }
        printf("\n");
    }
}

void main()
{
    int i,j,n,a[10][10];
    clrscr();
    printf("enter the no of vertices");
    scanf("%d",&n);
    printf("enter the cost matrix");
    for(i=0;i<n;i++)
    {
        for(j=0;j<n;j++)
        {
            scanf("%d",&a[i][j]);
        }
    }
    floyd(a,n);
    getch();
}

Example: a weighted directed graph on vertices V1..V4 (the original figure is not
reproduced here); its cost matrix, with 1000 standing for a missing edge, is entered
below.

enter the no of vertices: 4

enter the cost matrix
7    5    1000 1000
7    1000 1000 2
1000 3    1000 1000
4    1000 1    1000

The dist is:

7 5 8 7
7 6 3 2
9 3 6 5
4 4 1 6

Asymptotic complexity

The algorithm consists of three nested loops over all nodes, and the innermost loop
contains only operations of constant complexity. Hence the asymptotic complexity of

the whole Floyd-Warshall algorithm is O(n³), where n is the number of nodes of the

graph.

Fractional Knapsack (using The Greedy algorithm)

Problem Scenario
A thief is robbing a store and can carry a maximal weight of W into his knapsack.
There are n items available in the store and weight of ith item is wi and its profit
is pi. What items should the thief take?
In this context, the items should be selected in such a way that the thief will
carry those items for which he will gain maximum profit. Hence, the objective of
the thief is to maximize the profit.
Based on the nature of the items, knapsack problems are categorized as

 Fractional Knapsack
 0-1 Knapsack

Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select
fractions of items.
According to the problem statement,
 There are n items in the store
 Weight of the ith item is wi > 0
 Profit of the ith item is pi > 0, and
 Capacity of the knapsack is W
In this version of the knapsack problem, items can be broken into smaller pieces, so
the thief may take only a fraction xi of the ith item, where
0 ≤ xi ≤ 1

The ith item contributes the weight xi·wi to the total weight in the knapsack
and the profit xi·pi to the total profit.
Hence, the objective of this algorithm is to
maximize Σ (i = 1 to n) xi·pi

subject to the constraint

Σ (i = 1 to n) xi·wi ≤ W

It is clear that an optimal solution must fill the knapsack exactly; otherwise we
could add a fraction of one of the remaining items and increase the overall profit.
Thus, an optimal solution is obtained when
Σ (i = 1 to n) xi·wi = W

In this context, we first need to sort the items according to the value of
pi/wi, so that p(i+1)/w(i+1) ≤ pi/wi. Here, x is an array that stores the
fraction taken of each item.
ALGORITHM:

Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)


for i = 1 to n
do x[i] = 0
weight = 0
for i = 1 to n
if weight + w[i] ≤ W then
x[i] = 1
weight = weight + w[i]
else
x[i] = (W - weight) / w[i]
weight = W
break
return x

Analysis
If the provided items are already sorted in decreasing order of pi/wi, then the loop
takes O(n) time; therefore, the total time including the sort is O(n log n).
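A compact sketch of the greedy loop above in C (illustrative only; the items are assumed to be already sorted by decreasing pi/wi, as in the table of the example that follows):

#include <stdio.h>

/* Greedy fractional knapsack: fills x[i] in [0,1] and returns the total profit. */
double fractionalKnapsack(double w[], double p[], double x[], int n, double W)
{
    double weight = 0.0, profit = 0.0;
    int i;
    for (i = 0; i < n; i++) x[i] = 0.0;
    for (i = 0; i < n; i++) {
        if (weight + w[i] <= W) {            /* the whole item fits */
            x[i] = 1.0;
            weight += w[i];
            profit += p[i];
        } else {                             /* take only the fitting fraction */
            x[i] = (W - weight) / w[i];
            profit += x[i] * p[i];
            weight = W;
            break;
        }
    }
    return profit;
}

int main(void)
{
    /* items B, A, C, D from the example below, already sorted by ratio */
    double w[] = {10, 40, 20, 24}, p[] = {100, 280, 120, 120}, x[4];
    printf("%.0f\n", fractionalKnapsack(w, p, x, 4, 60));   /* prints 440 */
    return 0;
}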

Example
Let us consider that the capacity of the knapsack W = 60 and the list of provided items are shown in
the following table −

Item            A    B    C    D

Profit         280  100  120  120

Weight          40   10   20   24

Ratio (pi/wi)    7   10    6    5

The provided items are not sorted based on pi/wi. After sorting, the items are as shown in the
following table.

Item            B    A    C    D

Profit         100  280  120  120

Weight          10   40   20   24

Ratio (pi/wi)   10    7    6    5

Solution
After sorting all the items according to pi/wi, all of B is chosen first, as the weight
of B is less than the capacity of the knapsack. Next, item A is chosen, as the
available capacity of the knapsack is greater than the weight of A. Now, C is
chosen as the next item. However, the whole item cannot be chosen, as the
remaining capacity of the knapsack is less than the weight of C.
Hence, a fraction of C (i.e. (60 − 50)/20) is chosen.
Now, the total weight of the selected items equals the capacity of the knapsack.
Hence, no more items can be selected.
The total weight of the selected items is 10 + 40 + 20 * (10/20) = 60
And the total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440
This is the optimal solution. We cannot gain more profit selecting any different
combination of items.

7. Knapsack Problem using Greedy method

a. Fractional Knapsack (Partial)


#include<stdio.h>
#include<conio.h>

void sort(int n,float p[50],float w[50],float v[50])
{
    int i,j;
    float temp;                       /* float, since p, v and w are float arrays */
    for(i=0;i<n;i++)
    {
        for(j=i+1;j<n;j++)
        {
            if(p[i]<p[j])
            {
                temp=p[j]; p[j]=p[i]; p[i]=temp;
                /***********/
                temp=v[j]; v[j]=v[i]; v[i]=temp;
                /***********/
                temp=w[j]; w[j]=w[i]; w[i]=temp;
            }
        }
    }
}

int Knapsackpart(float p[],float v[],float w[],int n,float c)
{
    int i;
    float t=0;                        /* weight taken so far */
    for(i=0;i<n;i++)
    {
        if(t<c)
        {
            if((t+w[i])>c)
            {
                w[i]=c-t;             /* take only the fitting fraction */
                v[i]=w[i]*p[i];
                t=c;
            }
            else
            {
                t=t+w[i];
            }
        }
        else
            break;
    }
    return i;                         /* number of (possibly partial) items taken */
}

void main()
{
    int n,i;
    float c,p[50],w[50],v[50],sum_vi=0.0,sum_wi=0.0;
    clrscr();
    printf("\nEnter the number of Items:");
    scanf("%d",&n);
    for(i=0;i<n;i++)
    {
        printf("Enter Weight of Items");
        scanf("%f",&w[i]);
        printf("Enter the value of Items");
        scanf("%f",&v[i]);
        p[i]=v[i]/w[i];               /* profit/weight ratio */
    }
    printf("\nEnter Capacity:");
    scanf("%f",&c);
    sort(n,p,w,v);                    /* sort by decreasing ratio */
    n=Knapsackpart(p,v,w,n,c);
    for(i=0;i<n;i++)
    {
        printf("\n%f",w[i]);
        printf("\t%f",v[i]);
        printf("\t%f",p[i]);
        sum_vi=sum_vi+v[i];
        sum_wi=sum_wi+w[i];
    }
    printf("\n\n sum of value: %f",sum_vi);
    printf("\n\n Total weight: %f",sum_wi);
    getch();
}

OUTPUT:
Enter the number of Items::

Enter Weight of Items:40
Enter the value of Items:160

Enter Weight of Items:30
Enter the value of Items:100

Enter Weight of Items:20
Enter the value of Items:90

Enter Weight of Items:5
Enter the value of Items:30

Enter Weight of Items:10
Enter the value of Items:20

Enter Capacity: 60
5.000000    30.000000    6.000000
20.000000   100.000000   5.000000
35.000000   140.000000   4.000000

sum of value: 270.000000
Total weight: 60.000000

DAA - 0-1 Knapsack
In 0-1 Knapsack, items cannot be broken, which means the thief should take an
item as a whole or leave it. This is the reason it is called 0-1 Knapsack.
Hence, in the case of 0-1 Knapsack, the value of xi can be either 0 or 1, where the
other constraints remain the same.
0-1 Knapsack cannot be solved by the Greedy approach: the Greedy approach does
not ensure an optimal solution, although in some instances it may happen to give
one.
The following examples will establish our statement.
Example-1
Let us consider that the capacity of the knapsack is W = 25 and the items are as
shown in the following table.

Item      A    B    C    D

Profit    24   18   18   10

Weight    24   10   10    7
Without considering the profit per unit weight (pi/wi), if we apply Greedy
approach to solve this problem, first item A will be selected as it will contribute
maximum profit among all the elements.
After selecting item A, no more items can be selected. Hence, for this given set of
items the total profit is 24, whereas the optimal solution can be achieved by
selecting items B and C, where the total profit is 18 + 18 = 36.

Example-2
Instead of selecting the items based on the overall benefit, in this example the
items are selected based on ratio pi/wi. Let us consider that the capacity of the
knapsack is W = 60 and the items are as shown in the following table.

Item      A    B    C

Price    100  280  120

Weight    10   40   20

Ratio     10    7    6
Using the Greedy approach, first item A is selected. Then, the next item B is
chosen. Hence, the total profit is 100 + 280 = 380. However, the optimal solution
of this instance can be achieved by selecting items, B and C, where the total
profit is 280 + 120 = 400.
Hence, it can be concluded that Greedy approach may not give an optimal
solution.
To solve 0-1 Knapsack, Dynamic Programming approach is required.
Problem Statement
A thief is robbing a store and can carry a maximal weight of W into his knapsack.
There are n items and weight of ith item is wi and the profit of selecting this item
is pi. What items should the thief take?

Dynamic-Programming Approach
Let i be the highest-numbered item in an optimal solution S for W dollars.
Then S' = S - {i} is an optimal solution for W - wi dollars, and the value of the
solution S is vi plus the value of the sub-problem.
We can express this fact in the following formula: define c[i, w] to be the solution
for items 1,2, … , i and the maximum weight w.
The algorithm takes the following inputs
 The maximum weight W
 The number of items n
 The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

Dynamic-0-1-knapsack (v, w, n, W)
for w = 0 to W do
c[0, w] = 0
for i = 1 to n do
c[i, 0] = 0
for w = 1 to W do
if wi ≤ w then
if vi + c[i-1, w-wi] > c[i-1, w] then
c[i, w] = vi + c[i-1, w-wi]
else c[i, w] = c[i-1, w]
else
c[i, w] = c[i-1, w]
The set of items to take can be deduced from the table, starting at c[n, w] and
tracing backwards where the optimal values came from.
If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue
tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue
tracing with c[i-1, w-wi].
Analysis
This algorithm takes θ(n·W) time, as the table c has (n + 1)·(W + 1) entries, where
each entry requires θ(1) time to compute.
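The pseudocode above translates directly into a table-filling routine. A sketch of it in C (illustrative only; program 8 below, by contrast, uses a simple greedy selection by profit, which, as the examples show, is not guaranteed to be optimal):

#include <stdio.h>

#define MAXN 50
#define MAXW 200

/* c[i][w] = best profit using items 1..i with capacity w (1-based arrays) */
int knapsackDP(int v[], int wt[], int n, int W)
{
    int c[MAXN + 1][MAXW + 1];
    int i, w;
    for (w = 0; w <= W; w++) c[0][w] = 0;
    for (i = 1; i <= n; i++) {
        c[i][0] = 0;
        for (w = 1; w <= W; w++) {
            if (wt[i] <= w && v[i] + c[i-1][w - wt[i]] > c[i-1][w])
                c[i][w] = v[i] + c[i-1][w - wt[i]];     /* take item i  */
            else
                c[i][w] = c[i-1][w];                    /* leave item i */
        }
    }
    return c[n][W];
}

int main(void)
{
    /* Example-1 above: W = 25, items A..D */
    int v[]  = {0, 24, 18, 18, 10};
    int wt[] = {0, 24, 10, 10, 7};
    printf("%d\n", knapsackDP(v, wt, 4, 25));   /* prints 36 (items B and C) */
    return 0;
}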

8. 0-1 Knapsack Problem
# include<stdio.h>
#include<conio.h>

void sort(int n,float w[50],float p[50])
{
int i,j;
float temp;    /* float, to swap float values correctly */
for(i=0;i<n;i++)
{
for(j=i+1;j<n;j++)
{
if(p[i]<p[j])
{
temp=p[j];
p[j]=p[i];
p[i]=temp;
//***********//
temp=w[j];
w[j]=w[i];
w[i]=temp;
}
}
}
}

void Knapsack(int no, float wt[50],float pft[50],float max)


{
int i; float total=0,tpft=0;
/* greedy 0-1 selection: take the item whole if it still fits, otherwise skip it */
for(i=0;i<no;i++)
{
if((total+wt[i])<=max)
{
total=total+wt[i];
tpft=tpft+pft[i];
}
}

printf("\nTotal Weight %f",total);
printf("\nTotal Profit %f",tpft);
}
void main()
{ int n,i,j;
float temp, p[50],w[50],m;
clrscr();
printf("\nEnter the number of Items:");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("Enter Weight of Items");
scanf("%f",&w[i]);
printf("Enter the value of Items");
scanf("%f",&p[i]);
}
printf("\nEnter Capacity:");
scanf("%f",&m);
sort(n,w,p);
Knapsack(n,w,p,m);
getch();}
Dijkstra's Algorithm (Single-Source Shortest Paths, Greedy Method)

Dijkstra's algorithm solves the single-source shortest-path problem when all

edges have non-negative weights. It is a greedy algorithm, similar to Prim's algorithm.
The algorithm starts at the source vertex s and grows a tree T that ultimately spans all
vertices reachable from s. Vertices are added to T in order of distance, i.e., first s, then
the vertex closest to s, then the next closest, and so on. The following implementation
assumes that graph G is represented by adjacency lists.

DIJKSTRA (G, w, s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. S ← { }   // S will ultimately contain the vertices whose shortest-path weights
from s are settled
3. Initialize priority queue Q, i.e., Q ← V[G]
4. while priority queue Q is not empty do
5.     u ← EXTRACT-MIN(Q)   // Pull out new vertex
6.     S ← S ∪ {u}
       // Perform relaxation for each vertex v adjacent to u
7.     for each vertex v in Adj[u] do
8.         Relax(u, v, w)

The time complexity of Dijkstra's algorithm is O(V²), but with a

min-priority queue it drops down to O((V + E) log V).
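A minimal sketch of the O(V²) formulation in C, using an adjacency matrix, a visited[] set in place of the priority queue, and 999 as the "no edge" marker (illustrative only; the lab program 9 below follows a different, menu-driven structure):

#include <stdio.h>

#define V 4
#define INF 999            /* stands for "no edge" */

/* Dijkstra, O(V^2): dist[] receives the shortest distances from src. */
void dijkstraSketch(int cost[V][V], int src, int dist[V])
{
    int visited[V] = {0};
    int i, j, u, min;
    for (i = 0; i < V; i++) dist[i] = cost[src][i];
    dist[src] = 0;
    for (i = 0; i < V - 1; i++) {
        u = -1; min = INF;                 /* unvisited vertex with smallest distance */
        for (j = 0; j < V; j++)
            if (!visited[j] && dist[j] < min) { min = dist[j]; u = j; }
        if (u == -1) break;                /* remaining vertices are unreachable */
        visited[u] = 1;
        for (j = 0; j < V; j++)            /* relax the edges leaving u */
            if (!visited[j] && cost[u][j] != INF && dist[u] + cost[u][j] < dist[j])
                dist[j] = dist[u] + cost[u][j];
    }
}

int main(void)
{
    int g[V][V] = {                        /* the graph from the Floyd-Warshall example */
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  }
    };
    int d[V], i;
    dijkstraSketch(g, 0, d);
    for (i = 0; i < V; i++) printf("%d ", d[i]);   /* 0 5 8 9 */
    printf("\n");
    return 0;
}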

9. Implement Single-Source Shortest Paths for a graph
(Dijkstra's Algorithm)

#include<stdio.h>
#include<conio.h>

void dijsaktra(int cost[100][100],int n)
{
    int c=1,s,i,dist[100],cnt=0,z,k,u;
    while(c)
    {
        printf("enter the source node whose shortest path is to be determined");
        scanf("%d",&s);                 /* get the input for the source node */
        for(i=1;i<=n;i++)               /* initial distances: direct edges from s */
            dist[i]=cost[s][i];
        for(k=2;k<n;k++)                /* repeatedly relax every vertex */
        {
            for(u=1;u<=n;u++)
            {
                for(z=1;z<=n;z++)
                {
                    if(cost[z][u]!=1000)    /* 1000 stands for "no edge" */
                        cnt=cnt+1;
                }
                if(cnt!=0)
                {
                    cnt=0;
                    for(i=1;i<=n;i++)
                    {
                        if(dist[u]>(dist[i]+cost[i][u]))
                            dist[u]=dist[i]+cost[i][u];
                    }
                }
            }
        }
        printf("shortest path from %d",s);
        for(i=1;i<=n;i++)               /* display the shortest paths */
            if(i!=s)
                printf("\n%d : %d",i,dist[i]);
        printf("continue? y=1 or n=0"); /* check if the user wishes to continue */
        scanf("%d",&c);
    }
}

int main()
{
    int n,i,j,cost[100][100];
    clrscr();
    printf("enter no of vertices");
    scanf("%d",&n);
    for(i=1;i<=n;i++)
    {
        for(j=1;j<=n;j++)
        {
            scanf("%d",&cost[i][j]);
        }
    }
    dijsaktra(cost,n);
    getch();
    return 0;
}

Prim's algorithm
How does Prim's algorithm work? The idea behind Prim's algorithm
is simple: a spanning tree means all vertices must be connected. So
the two disjoint subsets of vertices (those already in the tree and
those not yet added) must be connected to make a spanning tree, and
they must be connected with the minimum-weight edge to make it a
Minimum Spanning Tree.

Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.

2) Assign a key value to all vertices in the input graph. Initialize all key values
as INFINITE. Assign key value as 0 for the first vertex so that it is picked first.

3) While mstSet doesn't include all vertices:

   a) Pick a vertex u which is not in mstSet and has the minimum key value.

   b) Include u in mstSet.

   c) Update the key values of all vertices adjacent to u: for every adjacent
      vertex v, if the weight of edge u-v is less than the previous key value
      of v, update the key value to the weight of u-v.

Time Complexity:
The time complexity of Prim's algorithm is O((V+E) log V), because
each vertex is inserted in the priority queue only once and each insertion into the
priority queue takes logarithmic time.
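A minimal sketch of those steps in C, using an adjacency matrix and a simple O(V²) scan in place of the priority queue (illustrative only; it assumes a connected graph, INF marks a missing edge, and program 10 below is the full interactive lab version):

#include <stdio.h>

#define V 4
#define INF 1000           /* no edge */

/* Prim's MST: key[v] is the cheapest edge connecting v to the growing tree. */
int primSketch(int g[V][V])
{
    int key[V], parent[V], inMST[V];
    int v, u, count, total = 0;
    for (v = 0; v < V; v++) { key[v] = INF; parent[v] = -1; inMST[v] = 0; }
    key[0] = 0;                            /* start the tree at vertex 0 */
    for (count = 0; count < V; count++) {
        u = -1;                            /* cheapest vertex not yet in the tree */
        for (v = 0; v < V; v++)
            if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;
        inMST[u] = 1;
        total += key[u];
        if (parent[u] != -1)
            printf("edge %d - %d  weight %d\n", parent[u], u, key[u]);
        for (v = 0; v < V; v++)            /* update keys of u's neighbours */
            if (g[u][v] != INF && !inMST[v] && g[u][v] < key[v]) {
                key[v] = g[u][v];
                parent[v] = u;
            }
    }
    return total;
}

int main(void)
{
    int g[V][V] = {                        /* a small undirected example graph */
        {0,   2,   3,   INF},
        {2,   0,   1,   4  },
        {3,   1,   0,   5  },
        {INF, 4,   5,   0  }
    };
    printf("total weight = %d\n", primSketch(g));   /* prints 7 */
    return 0;
}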

10. Minimum Cost Spanning Tree by Prim's Algorithm
using Greedy method

#include<stdio.h>
#include<conio.h>
#define INF 1000

int vertex[10],wght[10][10];
int new_wght[10][10],n,closed[10];

/* returns 1 if vertex i is already in closed[0..n1], i.e. already in the tree */
int inclose(int i,int n1)
{
    int j;
    for(j=0;j<=n1;j++)
        if(closed[j]==i)
            return 1;
    return 0;
}

void buildtree()
{
    int i=0,j,count=0,min,v1=0,v2=0;
    closed[0]=0;                    /* start the tree from vertex 0 */
    while(count<n-1)
    {
        min=INF;
        for(i=0;i<=count;i++)       /* cheapest edge from the tree to an outside vertex */
            for(j=0;j<n;j++)
                if(wght[closed[i]][j]<min && !inclose(j,count))
                {
                    min=wght[closed[i]][j];
                    v1=closed[i];
                    v2=j;
                }
        new_wght[v1][v2]=min;
        new_wght[v2][v1]=min;
        count++;
        closed[count]=v2;
        printf("\nscan %d : %d ::.............%d wght=%d\n",count,v1+1,v2+1,min);
        getch();
    }
}

void main()
{
    int i,j,ed,sum=0;
    clrscr();
    printf("\n\n\t PRIMS ALGORITHM TO FIND SPANNING TREE");
    printf("\n enter the no of NODES:");
    scanf("%d",&n);
    for(i=0;i<n;i++)
    {
        vertex[i]=i+1;
        for(j=0;j<n;j++)
        {
            wght[i][j]=INF;
            new_wght[i][j]=INF;
        }
    }
    printf("GETTING WEIGHT");
    printf("\n\t enter 0 if path doesn't exist between {v1,v2} else enter weight");
    for(i=0;i<n;i++)
        for(j=i+1;j<n;j++)
        {
            printf("\n\t%d ............%d:",vertex[i],vertex[j]);
            scanf("%d",&ed);
            if(ed>=1)
                wght[i][j]=wght[j][i]=ed;
        }
    getch();
    printf("\n\n\t NODES CURRENTLY ADDED TO SPANNING TREE:");
    buildtree();
    printf("\n\n\t MINIMUM SPANNING TREE");
    printf("\n \t LIST OF EDGES");
    for(i=0;i<n;i++)
        for(j=i+1;j<n;j++)
            if(new_wght[i][j]!=INF)
            {
                printf("\n\t %d.........%d = %d",vertex[i],vertex[j],new_wght[i][j]);
                sum+=new_wght[i][j];
            }
    printf("\n\n \t TOTAL WEIGHT = %d",sum);
    getch();
}
Travelling Salesman Problem
Given a set of cities and distance between every pair of cities, the problem is to find
the shortest possible route that visits every city exactly once and returns to the starting
point.
Note the difference between a Hamiltonian cycle and TSP. The Hamiltonian
cycle problem is to find whether there exists a tour that visits every city exactly once. Here we
know that a Hamiltonian tour exists (because the graph is complete), and in fact many such
tours exist; the problem is to find a minimum-weight Hamiltonian cycle.

For example, consider a complete graph on four
vertices (the original figure is not reproduced here).
A TSP tour in the graph is 1-2-4-3-1. The cost of the
tour is 10+25+30+15, which is 80.

Dynamic Programming:

Let the given set of vertices be {1, 2, 3, 4, …, n}. Let us consider 1 as the starting and ending
point of the tour. For every other vertex i (other than 1), we find the minimum cost path
with 1 as the starting point, i as the ending point and all vertices appearing exactly once.
Let the cost of this path be cost(i); the cost of the corresponding cycle would be cost(i) + dist(i,
1), where dist(i, 1) is the distance from i to 1. Finally, we return the minimum of all [cost(i)
+ dist(i, 1)] values.

This looks simple so far. Now the question is how to get cost(i)?
To calculate cost(i) using Dynamic Programming, we need to have some recursive relation
in terms of sub-problems. Let us define a term C(S, i) be the cost of the minimum cost path
visiting each vertex in set S exactly once, starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for all subsets where S is the
subset, then we calculate C(S, i) for all subsets S of size 3 and so on. Note that 1 must be
present in every subset.

If the size of S is 2, then S must be {1, i}, and

C(S, i) = dist(1, i)
Else, if the size of S is greater than 2,
C(S, i) = min { C(S-{i}, j) + dist(j, i) }  where j belongs to S, j != i and j != 1.

For a set of size n, we consider n-2 subsets, each of size n-1, such that none of these subsets
has the nth vertex in it.

Using the above recurrence relation, we can write a dynamic programming based solution.
There are at most O(n·2^n) subproblems, and each one takes linear time to solve.

The total running time is therefore O(n²·2^n). The time complexity is much less than
O(n!), but still exponential. The space required is also exponential. So this approach is
infeasible even for a slightly higher number of vertices.

Analysis
There are at most 2^n · n subproblems and each one takes linear time to solve.
Therefore, the total running time is O(2^n · n²).
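The same idea can be written generically with a bitmask over the visited set. A sketch of that form (illustrative only; the cost matrix below is the usual 4-city example consistent with the tour 1-2-4-3-1 of cost 80 quoted above, an assumption since the original figure is not reproduced here):

#include <stdio.h>

#define N 4
#define INF 1000000

/* Held-Karp: dp[mask][i] = min cost of a path that starts at city 0,
   visits exactly the cities in mask, and ends at city i. */
int tspHeldKarp(int dist[N][N])
{
    static int dp[1 << N][N];
    int mask, i, j, full = (1 << N) - 1, best = INF, cand;
    for (mask = 0; mask <= full; mask++)
        for (i = 0; i < N; i++)
            dp[mask][i] = INF;
    dp[1][0] = 0;                               /* only city 0 visited, standing at 0 */
    for (mask = 1; mask <= full; mask++) {
        if (!(mask & 1)) continue;              /* city 0 must be in every subset */
        for (i = 0; i < N; i++) {
            if (!(mask & (1 << i)) || dp[mask][i] == INF) continue;
            for (j = 0; j < N; j++) {           /* extend the path to a new city j */
                if (mask & (1 << j)) continue;
                cand = dp[mask][i] + dist[i][j];
                if (cand < dp[mask | (1 << j)][j])
                    dp[mask | (1 << j)][j] = cand;
            }
        }
    }
    for (i = 1; i < N; i++)                     /* close the tour back to city 0 */
        if (dp[full][i] + dist[i][0] < best)
            best = dp[full][i] + dist[i][0];
    return best;
}

int main(void)
{
    int dist[N][N] = {
        { 0, 10, 15, 20},
        {10,  0, 35, 25},
        {15, 35,  0, 30},
        {20, 25, 30,  0}
    };
    printf("%d\n", tspHeldKarp(dist));          /* prints 80 */
    return 0;
}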
//Implement Travelling Salesman Problem (dynamic programming, hard-coded for 4 cities)

#include<stdio.h>
#include<conio.h>

int c[10][10],g[10][10][10][10],n;

/* g[x][y][0][0]: cost of going from x through {y} and back to city 1 */
void calg2(int x,int y)
{
    g[x][y][0][0]=c[x][y]+g[y][0][0][0];
    g[y][x][0][0]=c[y][x]+g[x][0][0][0];
}

/* g[x][y][z][0]: cost of going from x through {y,z} and back to city 1 */
void calg3(int x,int y,int z)
{
    int temp1,temp2;
    temp1=c[x][y]+g[y][z][0][0];
    temp2=c[x][z]+g[z][y][0][0];
    if(temp1>temp2)
        g[x][y][z][0]=temp2;
    else
        g[x][y][z][0]=temp1;
}

/* g[x][y][z][a]: cost of going from x through {y,z,a} and back to city 1 */
void calg4(int x,int y,int z,int a)
{
    int temp1,temp2,temp3;
    temp1=c[x][y]+g[y][z][a][0];
    temp2=c[x][z]+g[z][y][a][0];
    temp3=c[x][a]+g[a][y][z][0];
    if(temp1<temp2 && temp1<temp3)
        g[x][y][z][a]=temp1;
    else if(temp2<temp1 && temp2<temp3)
        g[x][y][z][a]=temp2;
    else
        g[x][y][z][a]=temp3;
}

//*************************

void tsp()
{
    int i;
    //|S|=0
    for(i=2;i<=n;i++)
        g[i][0][0][0]=c[i][1];
    //|S|=1
    calg2(2,3);
    calg2(2,4);
    calg2(3,4);
    //|S|=2
    calg3(2,3,4);
    calg3(3,2,4);
    calg3(4,2,3);
    //|S|=3
    calg4(1,2,3,4);
}

int main()
{
    int i,j;
    printf("algorithm for tsp");
    printf("enter the no. of cities:\n");
    scanf("%d",&n);
    printf("enter the edge matrix value:\n");
    for(i=1;i<=n;i++)
    {
        for(j=1;j<=n;j++)
        {
            printf("c[%d][%d]=",i,j);
            scanf("%d",&c[i][j]);
        }
    }
    tsp();
    printf("\n minimum path covered is : %d",g[1][2][3][4]);
    getch();
    return(0);
}

11. Implement Traveling Salesman Problem

#include <stdio.h>
#include <conio.h>
#define MAX 100
#define INFINITY 999

int tsp_dp (int c[][MAX], int tour[], int start, int n);

void main()
{
    int n;
    int i, j;
    int c[MAX][MAX];
    int tour[MAX];
    int cost;
    clrscr();
    printf ("This program demonstrates the TSP problem.");
    printf ("\nHow many cities to traverse? ");
    scanf ("%d", &n);
    printf ("Enter the cost matrix: (999: no connection)\n");
    for (i=0; i<n; i++)
        for (j=0; j<n; j++)
            scanf ("%d", &c[i][j]);
    for (i=0; i<n; i++)
        tour[i] = i;                    /* start from the identity tour 1,2,...,n */
    cost = tsp_dp (c, tour, 0, n);
    printf ("Minimum cost: %d.\nTour: ", cost);
    for (i=0; i<n; i++)
        printf ("%d ", tour[i]+1);
    printf ("1\n");
    getch();
}

int tsp_dp (int c[][MAX], int tour[], int start, int n)
{
    int i, j, k;
    int temp[MAX];
    int mintour[MAX];
    int mincost;
    int ccost;
    if (start == n - 2)                 /* only one city left: close the tour */
        return c[tour[n-2]][tour[n-1]] + c[tour[n-1]][0];
    mincost = INFINITY;
    for (i = start+1; i<n; i++)
    {
        for (j=0; j<n; j++)
            temp[j] = tour[j];
        temp[start+1] = tour[i];        /* try city i as the next stop */
        temp[i] = tour[start+1];
        if (c[tour[start]][tour[i]] + (ccost = tsp_dp (c, temp, start+1, n)) < mincost)
        {
            mincost = c[tour[start]][tour[i]] + ccost;
            for (k=0; k<n; k++)
                mintour[k] = temp[k];
        }
    }
    for (i=0; i<n; i++)
        tour[i] = mintour[i];
    return mincost;
}
12. Implement 8-Queens Problem Using Backtracking Method

#include<stdio.h>
#include<conio.h>
#include<stdlib.h>      /* abs() */
#define MAX 10

/* returns 1 if a queen can be placed in row K, column I */
int fnPlace(int K,int I,int X[10])
{
    int J;
    for(J=1;J<=K-1;J++)
    {
        if(X[J]==I || abs(J-K)==abs(X[J]-I))   /* same column or same diagonal */
            return 0;
    }
    return 1;
}

void fnQueens(int K,int N)
{
    int I,J,L;
    static int Count,X[10];
    for(I=1;I<=N;I++)
    {
        if(fnPlace(K,I,X))
        {
            X[K]=I;
            if(K==N)
            {
                printf("\nFeasible solution : %d",++Count);
                printf("\nSolutions are : ");
                for(J=1;J<=N;J++)
                    printf(" %d ",X[J]);
                for(J=1;J<=N;J++)
                {
                    printf("\n");
                    for(L=1;L<=N;L++)
                    {
                        if(L==X[J])
                            printf(" X ");
                        else
                            printf(" 0 ");
                    }
                }
            }
            else
                fnQueens(K+1,N);
        }
    }
}

void main()
{
    int No;
    printf("Enter the no of Queens : ");
    scanf("%d",&No);
    fnQueens(1,No);
    getch();
}

13. Graph Coloring Problem Using Backtracking Method

#include<stdio.h>
#include<conio.h>

int m,n;                 /* m = number of colors, n = number of vertices */
int found=0;
int G[10][10];           /* adjacency matrix */
int x[10];               /* x[k] = color assigned to vertex k */

/* advance x[k] to the next color that does not clash with any neighbour */
void NextValue(int k)
{
    int j;
    while(1)
    {
        x[k]=(x[k]+1)%(m+1);
        if(x[k]==0)                     /* all colors tried */
            break;
        for(j=1;j<=n;j++)
        {
            if((G[k][j]!=0) && (x[k]==x[j]))
                break;
        }
        if(j==n+1)                      /* no clash found */
            break;
    }
}

void mColoring(int k)
{
    int i;
    while(1)
    {
        NextValue(k);
        if(x[k]==0)
            break;
        if(k==n)                        /* all vertices colored: print the solution */
        {
            for(i=1;i<=k;i++)
                printf("%d\t",x[i]);
            printf("\n");
            found=1;
            break;
        }
        else
        {
            mColoring(k+1);
        }
    }
}

void main()
{
    int i,j;
    printf("\nGRAPH COLORING");
    printf("\nEnter the number of the vertices: ");
    scanf("%d",&n);
    printf("\nIf edge between the following vertices enter 1 else 0:\n");
    for(i=1;i<=n;i++)
    {
        for(j=1;j<=n;j++)
        {
            if((i!=j)&&(i<j))
            {
                printf("\n%d----%d",i,j);
                scanf("%d",&G[i][j]);
                G[j][i]=G[i][j];
            }
            if(i==j)
                G[i][j]=0;
        }
    }
    printf("\nEnter the number of colors available: ");
    scanf("%d",&m);
    printf("\nSolution:\n");
    mColoring(1);
    if(found==0)
        printf("\nNo Solution possible!");
    getch();
}

Breadth First Traversal


BFS implementation
procedure bfs(G, s):

Q := queue containing only s

while Q not empty

v := Q.front();

Q.remove front()

for w ∈ G.neighbors(v):

if w not seen:

mark w seen

Q.enqueue(w)

The Breadth First Search algorithm (BFS) traverses a graph in a breadthward

motion and uses a queue to remember the next vertex from which to continue the
search when a dead end occurs in any iteration.

As in the example given above (the accompanying figure is not reproduced here),

the BFS algorithm traverses from A to B to E to F first, then to C and G, and lastly
to D. It employs the following rules.
Rule 1 − Visit an adjacent unvisited vertex. Mark it visited. Display it. Insert it in a
queue.
Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.

Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.

Step-by-step traversal (the original figures are omitted):

1. Initialize the queue.

2. We start by visiting S (the starting node) and mark it visited.

3. We then look at the unvisited adjacent nodes of S. In this example we have
three such nodes, but alphabetically we choose A, mark it visited and
enqueue it.

4. The next unvisited adjacent node from S is B. We mark it visited and
enqueue it.

5. The next unvisited adjacent node from S is C. We mark it visited and
enqueue it.

6. Now S is left with no unvisited adjacent nodes. So we dequeue and find A.

7. From A we have D as an unvisited adjacent node. We mark it visited and
enqueue it.

At this stage we are left with no unmarked (unvisited) nodes. But as per the
algorithm we keep on dequeuing in order to get all unvisited nodes. When the
queue is emptied, the program is over.

14. Implement Breadth First Search (BFS)
#include<stdio.h>
#include<conio.h>

int f,r,n,v1,q[20],visited[20],a[20][20],i,j;

void bfs(int v)
{
    f=r=-1;                         /* empty queue */
    visited[v]=1;
    f++;
    r++;
    q[r]=v;                         /* enqueue the source vertex */
    while(f<=r)
    {
        v=q[f];                     /* dequeue */
        f++;
        for(i=1;i<=n;i++)
        {
            if(a[v][i]==1 && visited[i]==0)
            {
                printf(" %d",i);
                visited[i]=1;
                r++;
                q[r]=i;             /* enqueue every unvisited neighbour */
            }
        }
    }
}

void main()
{
    printf("\n enter number of nodes:");
    scanf("%d",&n);
    for(i=1;i<=n;i++)
        visited[i]=0;
    for(i=1;i<=n;i++)
    {
        for(j=1;j<=n;j++)
        {
            printf("%d........%d",i,j);
            scanf("%d",&a[i][j]);
        }
    }
    printf("\nenter the source vertex:");
    scanf("%d",&v1);
    bfs(v1);
}

Depth First Traversal


The Depth First Search algorithm (DFS) traverses a graph in a depthward motion and
uses a stack to remember the next vertex from which to continue the search when a
dead end occurs in any iteration.

Recursive implementation of DFS


procedure dfs(G, u):
while u has an unvisited neighbor in G
v := an unvisited neighbor of u
mark v visited
dfs(G, v)

Stack-based implementation of DFS


procedure dfs(G, s):
S := stack containing only s
while S not empty
v := S.pop()
if v not visited:
mark v visited
for w ∈ G.neighbors(v): S.push(w)

As in the example given above (the accompanying figure is not reproduced here), the
DFS algorithm traverses from A to B to C to D first, then to E, then to F and lastly to G.
It employs the following rules.

Rule 1 − Visit adjacent unvisited vertex. Mark it visited. Display it. Push it in a
stack.

Rule 2 − If no adjacent vertex found, pop up a vertex from stack. (It will pop up all
the vertices from the stack which do not have adjacent vertices.)

Rule 3 − Repeat Rule 1 and Rule 2 until stack is empty.

Step-by-step traversal (the original figures are omitted):

1. Initialize the stack.

2. Mark S as visited and put it onto the stack. Explore any unvisited adjacent
node from S. We have three nodes and we can pick any of them. For this
example, we shall take the nodes in alphabetical order.

3. Mark A as visited and put it onto the stack. Explore any unvisited adjacent
node from A. Both S and D are adjacent to A, but we are concerned with
unvisited nodes only.

4. Visit D, mark it visited and put it onto the stack. Here we have B and C,
which are adjacent to D and both are unvisited. We shall again choose in
alphabetical order.

5. We choose B, mark it visited and put it onto the stack. Here B does not have
any unvisited adjacent node, so we pop B from the stack.

6. We check the stack top to return to the previous node and check whether it
has any unvisited nodes. Here we find D on the top of the stack.

7. The only unvisited adjacent node from D now is C. So we visit C, mark it
visited and put it onto the stack.

As C does not have any unvisited adjacent node, we keep popping the stack
until we find a node which has an unvisited adjacent node. In this case there is
none, and we keep popping until the stack is empty.

15. Implement Depth First Search (DFS)
#include<stdio.h>
#include<conio.h>

int a[20][20],reach[20],n;

void dfs(int v)
{
    int i;
    reach[v]=1;
    for(i=1;i<=n;i++)
        if(a[v][i] && !reach[i])
        {
            printf("\n %d->%d",v,i);
            dfs(i);                    /* recurse on the unvisited neighbour */
        }
}

void main()
{
    int i,j,count=0;
    clrscr();
    printf("\n Enter number of vertices:");
    scanf("%d",&n);
    for(i=1;i<=n;i++)
    {
        reach[i]=0;
        for(j=1;j<=n;j++)
            a[i][j]=0;
    }
    printf("\n Enter the adjacency matrix:\n");
    for(i=1;i<=n;i++)
        for(j=1;j<=n;j++)
        {
            printf("%d....%d",i,j);
            scanf("%d",&a[i][j]);
        }
    dfs(2);                            /* start the traversal from vertex 2 */
    printf("\n");
    for(i=1;i<=n;i++)
    {
        if(reach[i])
            count++;
    }
    if(count==n)
        printf("\n Graph is connected");
    else
        printf("\n Graph is not connected");
    getch();
}
