4. Graphs-1


4.1 GRAPH
• A graph is an abstract data structure that is used to implement the mathematical
concept of graphs. It is basically a collection of vertices (also called nodes) and
edges that connect these vertices.

Graphs are widely used to model any situation where entities or things are
related to each other in pairs.
REAL WORLD EXAMPLE

On Facebook, everything is a node. That includes User, Photo, Album, Event, Group, Page,
Comment, Story, Video, Link, Note...anything that has data is a node.

Every relationship is an edge from one node to another. Whether you post a photo, join
a group, like a page, etc., a new edge is created for that relationship.
DEFINITION
• A graph G is defined as an ordered set (V, E), where V(G) represents
the set of vertices and E(G) represents the edges that connect these
vertices

V = {0, 1, 2, 3}
E = {(0,1), (0,2), (0,3), (1,2)}
G = (V, E)
GRAPH TERMINOLOGY
• When designing a graph we can make decisions as to:
• Use a directed graph or an undirected graph,
• Use a weighted graph or an unweighted graph.

An undirected graph is one in which edges have no direction. A directed graph is one in which edges have a direction.
GRAPH TERMINOLOGY

A weighted graph is one in which edges have a numerical value. A weighted and directed graph is one in which edges have both a direction and a numerical value!
GRAPH TERMINOLOGY
Adjacent nodes or neighbours:
For every edge e = (u, v) that connects nodes u and v, the nodes u and
v are the end-points and are said to be adjacent nodes or neighbours.
GRAPH TERMINOLOGY
Degree of a node:
• Degree of a node u, deg(u), is the total number of edges containing
the node u. If deg(u) = 0, it means that u does not belong to any edge
and such a node is known as an isolated node.
GRAPH TERMINOLOGY
Regular graph:
It is a graph where each vertex has the same number of neighbors. That
is, every node has the same degree. A regular graph with vertices of
degree k is called a k–regular graph or a regular graph of degree k.
GRAPH TERMINOLOGY
Path:
A path P, written as P = (v0, v1, v2, ..., vn), of length n from a node u
to a node v is defined as a sequence of (n+1) nodes. Here, u = v0, v = vn, and
vi–1 is adjacent to vi for i = 1, 2, 3, ..., n.
Closed path: A path P is known as a closed path if it begins and ends at the
same node, that is, if v0 = vn.
Simple path: A path P is known as a simple path if all the nodes in the
path are distinct, with the exception that v0 may be equal to vn. If v0 =
vn, then the path is called a closed simple path.
GRAPH TERMINOLOGY
Cycle:
A path in which the first and the last vertices are the same. A simple cycle
has no repeated edges or vertices (except the first and last vertices).
Connected graph:
A graph is said to be connected if for any two vertices (u, v) in V there
is a path from u to v. That is to say that there are no isolated nodes in a
connected graph. A connected graph that does not have any cycle is
called a tree. Therefore, a tree is treated as a special graph.
GRAPH TERMINOLOGY
Complete graph:
A graph G is said to be complete if all its nodes are fully connected.
That is, there is an edge between every pair of nodes in the graph. A
complete graph has n(n–1)/2 edges, where n is the number of nodes in G.
GRAPH TERMINOLOGY
Clique:
In an undirected graph G = (V, E), a clique is a subset C ⊆ V of the vertex
set such that every two vertices in C are connected by an edge.
GRAPH TERMINOLOGY
Labelled graph or weighted graph:
A graph is said to be labelled if every edge in the graph is assigned some
data. In a weighted graph, the edges of the graph are assigned some
weight or length. The weight of an edge denoted by w(e) is a positive
value which indicates the cost of traversing the edge.
GRAPH TERMINOLOGY
Multiple edges :
Distinct edges which connect the same end-points are called multiple edges.
That is, e = (u, v) and e' = (u, v) are known as multiple edges of G.

Loop: An edge that has identical end-points is called a loop. That is, e = (u,
u).

Multi-graph: A graph with multiple edges and/or loops is called a multi-graph.

Size of a graph: The size of a graph is the total number of edges in it.


REPRESENTATION OF GRAPH
There are two common ways of storing graphs in the computer’s
memory. They are:
1. Sequential representation by using an adjacency matrix.

2. Linked representation by using an adjacency list that stores the
neighbours of a node using a linked list.
4.6 ADJACENCY MATRIX REPRESENTATION
An adjacency matrix is used to represent which nodes are adjacent to
one another. By definition, two nodes are said to be adjacent if there is
an edge connecting them.
ADJACENCY MATRIX REPRESENTATION
• Since an adjacency matrix contains only 0s and 1s, it is also called a bit
matrix or a Boolean matrix. The entries in the matrix depend on the ordering
of the nodes in G; therefore, a change in the order of the nodes will result
in a different adjacency matrix.
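As a minimal Python sketch, this builds the adjacency matrix for the example graph given in the definition above (V = {0, 1, 2, 3}, E = {(0,1), (0,2), (0,3), (1,2)}), assuming the graph is undirected:

```python
# Build the adjacency matrix for the example graph from the definition:
# V = {0, 1, 2, 3}, E = {(0,1), (0,2), (0,3), (1,2)} (undirected).
def adjacency_matrix(num_vertices, edges):
    matrix = [[0] * num_vertices for _ in range(num_vertices)]
    for u, v in edges:
        matrix[u][v] = 1
        matrix[v][u] = 1  # mirror entry, since the graph is undirected
    return matrix

V = 4
E = [(0, 1), (0, 2), (0, 3), (1, 2)]
print(adjacency_matrix(V, E))
```

Note that the matrix of an undirected graph is symmetric, and reordering the vertices would produce a different (but equivalent) matrix, as stated above.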
ADJACENCY LIST REPRESENTATION
• An adjacency list is another way in which graphs can be represented in
the computer’s memory.
• This structure consists of a list of all nodes in G. Furthermore, every
node is in turn linked to its own list that contains the names of all other
nodes that are adjacent to it.
ADJACENCY LIST REPRESENTATION
The key advantages of using an adjacency list are:
➢It is easy to follow and clearly shows the adjacent nodes of a
particular node.
➢Adding new nodes in G is easy and straightforward when G is
represented using an adjacency list.
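A minimal Python sketch of the adjacency-list idea, using a dictionary of neighbour lists in place of linked lists, for the same example graph:

```python
from collections import defaultdict

def adjacency_list(edges):
    # Each node maps to the list of its adjacent nodes.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # undirected: store the edge in both directions
    return dict(adj)

print(adjacency_list([(0, 1), (0, 2), (0, 3), (1, 2)]))
```

Adding a new node is indeed straightforward: it simply gets a new (initially empty) neighbour list.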
4.2 GRAPH TRAVERSAL
• Traversing?

It is the method of examining the nodes and edges of the graph.


GRAPH TRAVERSAL ALGORITHM
• There are two standard methods of graph traversal Algorithms are
there .They are
1. Breadth-first search (4.2.1)
2. Depth-first search (4.2.3)

While breadth-first search uses a queue as an auxiliary data structure to


store nodes for further processing, the depth-first search scheme uses a
stack
4.2.1 BREADTH-FIRST SEARCH ALGORITHM
• Breadth-first search (BFS) is a graph search algorithm that begins at
the root node and explores all the neighboring nodes, level by level.
BREADTH-FIRST SEARCH ALGORITHM
ALGORITHM:
Step 1: SET STATUS=1 (ready state)
for each node in G
Step 2: Enqueue the starting node A
and set its STATUS=2
(waiting state)
Step 3: Repeat Steps 4 and 5 until
QUEUE is empty
Step 4: Dequeue a node N. Process it
and set its STATUS=3
(processed state).
Step 5: Enqueue all the neighbours of
N that are in the ready state
(whose STATUS=1) and set
their STATUS=2
(waiting state)
[END OF LOOP]
Step 6: EXIT
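The steps above can be sketched in Python; the numeric status codes mirror the ready/waiting/processed states, and the sample graph is illustrative:

```python
from collections import deque

READY, WAITING, PROCESSED = 1, 2, 3

def bfs(graph, start):
    status = {node: READY for node in graph}   # Step 1
    queue = deque([start])
    status[start] = WAITING                    # Step 2
    order = []
    while queue:                               # Step 3
        n = queue.popleft()                    # Step 4: dequeue and process
        order.append(n)
        status[n] = PROCESSED
        for m in graph[n]:                     # Step 5: enqueue ready neighbours
            if status[m] == READY:
                queue.append(m)
                status[m] = WAITING
    return order

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(bfs(graph, 'A'))  # visits nodes level by level from 'A'
```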
FEATURES OF BREADTH-FIRST SEARCH ALGORITHM
• Space complexity:
In the breadth-first search algorithm, all the nodes at a particular level
must be saved until their child nodes in the next level have been
generated. The space complexity is therefore proportional to the number
of nodes at the deepest level of the graph.
• Time complexity:
In the worst case, breadth-first search has to traverse through all paths
to all possible nodes, thus the time complexity of this algorithm
asymptotically approaches O(b^d), where b is the branching factor and d is
the depth. However, the time complexity can also be expressed as
O(|E| + |V|).
FEATURES OF BREADTH-FIRST SEARCH ALGORITHM
• Completeness:
Breadth-first search is said to be a complete algorithm because if there
is a solution, breadth-first search will find it regardless of the kind of
graph. But in case of an infinite graph where there is no possible
solution, it will diverge.
• Optimality
Breadth-first search is optimal for a graph that has edges of equal
length, since it always returns the result with the fewest edges between
the start node and the goal node.
APPLICATIONS OF BREADTH-FIRST SEARCH ALGORITHM
Breadth-first search can be used to solve many problems such as:
• Finding all connected components in a graph G.
• Finding all nodes within an individual connected component.
• Finding the shortest path between two nodes, u and v, of an
unweighted graph. (Plain BFS does not give shortest paths in a
weighted graph; that requires Dijkstra’s algorithm, a priority-queue
variant of the same idea.)
4.2.2 DEPTH-FIRST SEARCH ALGORITHM
• The depth-first search algorithm progresses by expanding the starting
node of G and then going deeper and deeper until the goal node is
found, or until a node that has no children is encountered.
• When a dead-end is reached, the algorithm backtracks, returning to the
most recent node that has not been completely explored.
DEPTH-FIRST SEARCH ALGORITHM
ALGORITHM:
• Step 1: SET STATUS=1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set
•         its STATUS=2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its
• STATUS=3 (processed state)
• Step 5: Push on the stack all the neighbours of N that
• are in the ready state (whose STATUS=1) and
• set their STATUS=2 (waiting state)
• [END OF LOOP]
• Step 6: EXIT
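The steps above can be sketched in Python, with the same status convention as the BFS sketch but a stack in place of the queue; the sample graph is illustrative:

```python
READY, WAITING, PROCESSED = 1, 2, 3

def dfs(graph, start):
    status = {node: READY for node in graph}   # Step 1
    stack = [start]
    status[start] = WAITING                    # Step 2
    order = []
    while stack:                               # Step 3
        n = stack.pop()                        # Step 4: pop and process
        order.append(n)
        status[n] = PROCESSED
        for m in graph[n]:                     # Step 5: push ready neighbours
            if status[m] == READY:
                stack.append(m)
                status[m] = WAITING
    return order

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(dfs(graph, 'A'))  # goes deep along one branch before backtracking
```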
FEATURES OF DEPTH-FIRST SEARCH ALGORITHM
• Space complexity :
The space complexity of a depth-first search is lower than that of a breadth
first search.
• Time complexity:
The time complexity of a depth-first search is proportional to the number of
vertices plus the number of edges in the graphs that are traversed. The time
complexity can be given as (O(|V| + |E|)).
• Completeness:
Depth-first search is complete on finite graphs: if there is a solution,
depth-first search will find it. In the case of an infinite graph, however,
it may diverge and never find a solution.
APPLICATIONS OF DEPTH-FIRST SEARCH ALGORITHM
Depth-first search is useful for:
• Finding a path between two specified nodes, u and v, of an unweighted
graph.
• Finding a path between two specified nodes, u and v, of a weighted
graph.
• Finding whether a graph is connected or not.
• Computing the spanning tree of a connected graph.
4.4 TOPOLOGICAL SORT
• Topological sort of a directed acyclic graph (DAG) G is defined as a
linear ordering of its nodes in which each node comes before all nodes
to which it has outbound edges.
• Topological sorting for a graph is not possible if the graph is not a
DAG.
• Every DAG has one or more topological sorts.
TOPOLOGICAL SORTING
ALGORITHM:
Step 1: Find the in-degree INDEG(N) of every node N in the graph
Step 2: Enqueue all the nodes with a zero in-degree
Step 3: Repeat Steps 4 and 5 until the QUEUE is empty
Step 4: Remove the front node N of the QUEUE by setting
        FRONT = FRONT+1
Step 5: Repeat for each neighbour M of node N:
        a) Delete the edge from N to M by setting
           INDEG(M) = INDEG(M)-1
        b) IF INDEG(M) = 0, then Enqueue M, that is,
           add M to the rear of the queue
        [END OF INNER LOOP]
[END OF LOOP]
Step 6: Exit
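The steps above (often called Kahn's algorithm) can be sketched in Python; the sample DAG is illustrative:

```python
from collections import deque

def topological_sort(graph):
    # graph: node -> list of nodes it has outbound edges to (must be a DAG)
    indeg = {n: 0 for n in graph}                      # Step 1
    for n in graph:
        for m in graph[n]:
            indeg[m] += 1
    queue = deque(n for n in graph if indeg[n] == 0)   # Step 2
    order = []
    while queue:                                       # Step 3
        n = queue.popleft()                            # Step 4
        order.append(n)
        for m in graph[n]:                             # Step 5
            indeg[m] -= 1                              # delete edge N -> M
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle; no topological order exists")
    return order

dag = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(topological_sort(dag))
```

If the queue empties before every node is output, some in-degree never reached zero, which is exactly the "not a DAG" case noted above.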
4.5 Minimum Spanning Tree
Spanning Trees
• Given a (connected) graph G(V, E),
a spanning tree T(V’, E’):
• Is a subgraph of G; that is, V’ ⊆ V, E’ ⊆ E
• Spans the graph (V’ = V)
• Forms a tree (no cycle);
• So, E’ has |V| - 1 edges

10/16/2024 CSE 373 AU 04 - Minimum Spanning Trees 39


Minimum Spanning Trees
• Edges are weighted: find minimum cost spanning tree
• Applications
• Find cheapest way to wire your house
• Find minimum cost to send a message on the Internet



Strategy for Minimum Spanning Tree
• For any spanning tree T, inserting an edge enew not in T creates a cycle
• But removing any edge eold from that cycle gives back a spanning tree
• If enew has a lower cost than eold, we have progressed!


Strategy
• Strategy for construction:
• Add an edge of minimum cost that does not create a cycle (greedy algorithm)
• Repeat |V| - 1 times
• Correct, since if we could replace an edge with one of lower cost, the
algorithm would have picked it up


Two Algorithms
• Prim: (build tree incrementally) (4.5.1)
• Pick the lowest-cost edge connected to the known (incomplete) spanning tree
that does not create a cycle and expand the tree to include it
• Kruskal: (build a forest that will finish as a tree) (4.5.2)
• Pick the lowest-cost edge not yet in a tree that does not create a cycle. Then
expand the set of included edges to include it. (It will be somewhere in the
forest.)


Prim’s algorithm
Starting from an empty tree T, choose a vertex at random and initialize
V = {1}, E’ = {}
[Figure: weighted graph on vertices 1–6 used as the running example]


Prim’s algorithm
Choose the vertex u not in V such that the edge weight from u to a
vertex in V is minimal (greedy!)
V = {1,3}  E’ = {(1,3)}


Prim’s algorithm
Repeat until all vertices have been chosen: choose the vertex u not in V
such that the edge weight from u to a vertex in V is minimal (greedy!)
V = {1,3,4}  E’ = {(1,3),(3,4)}
V = {1,3,4,5}  E’ = {(1,3),(3,4),(4,5)}
…
V = {1,3,4,5,2,6}  E’ = {(1,3),(3,4),(4,5),(5,2),(2,6)}


Prim’s algorithm
Repeat until all vertices have been chosen.
V = {1,3,4,5,2,6}
E’ = {(1,3),(3,4),(4,5),(5,2),(2,6)}
Final Cost: 1 + 3 + 4 + 1 + 1 = 10


Prim’s Algorithm Implementation
• Assume adjacency list representation
Initialize the connection cost of each node to “inf” and “unmark” them
Choose one node, say v, and set cost[v] = 0 and prev[v] = 0
While there are unmarked nodes
    Select the unmarked node u with minimum cost; mark it
    For each unmarked node w adjacent to u
        if cost(u,w) < cost(w) then cost(w) := cost(u,w)
                                     prev[w] := u
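A Python sketch of Prim's algorithm using a binary heap of candidate edges. The edge list below is hypothetical (the slide figure is not fully recoverable); it is chosen so that the result matches the tree found above, E’ = {(1,3),(3,4),(4,5),(5,2),(2,6)} with cost 10:

```python
import heapq

def prim(graph, start):
    # graph: node -> list of (neighbour, weight) pairs; assumes connectivity.
    marked = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    tree, total = [], 0
    while heap and len(marked) < len(graph):
        w, u, v = heapq.heappop(heap)   # cheapest edge leaving the tree
        if v in marked:
            continue                    # both ends in the tree: would form a cycle
        marked.add(v)
        tree.append((u, v))
        total += w
        for x, wx in graph[v]:
            if x not in marked:
                heapq.heappush(heap, (wx, v, x))
    return tree, total

# Hypothetical weighted graph on vertices 1-6 (adjacency list with weights).
wgraph = {
    1: [(2, 10), (3, 1)],
    2: [(1, 10), (5, 1), (6, 1)],
    3: [(1, 1), (4, 3)],
    4: [(3, 3), (5, 4)],
    5: [(2, 1), (4, 4)],
    6: [(2, 1)],
}
print(prim(wgraph, 1))
```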



Prim’s algorithm Analysis
• If “Select the unmarked node u with minimum cost” is done with a binary heap,
then the running time is O((n+m) log n)


Kruskal’s Algorithm
• Select edges in order of increasing cost
• Accept an edge to expand the tree or forest only if it does not cause a
cycle
• Implementation using an adjacency list, a priority queue, and disjoint sets


Kruskal’s Algorithm
Initialize a forest of trees, each tree being a single node
Build a priority queue of edges with priority being lowest cost
Repeat until |V| -1 edges have been accepted {
Deletemin edge from priority queue
If it forms a cycle then discard it
else accept the edge – It will join 2 existing trees yielding a larger tree and
reducing the forest by one tree
}
The accepted edges form the minimum spanning tree
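A Python sketch of the loop above, using a disjoint-set (Union-Find) structure for cycle detection and a binary heap as the edge priority queue. The edge weights are hypothetical (the slide figure is not fully recoverable) but chosen to reproduce the worked example that follows, with total cost 10:

```python
import heapq

class DisjointSet:
    def __init__(self, nodes):
        self.parent = {n: n for n in nodes}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(b)] = self.find(a)

def kruskal(nodes, edges):
    # edges: list of (weight, u, v); returns (accepted edges, total cost).
    forest = DisjointSet(nodes)          # forest of single-node trees
    heap = list(edges)
    heapq.heapify(heap)                  # priority queue of edges by cost
    tree, total = [], 0
    while heap and len(tree) < len(nodes) - 1:
        w, u, v = heapq.heappop(heap)    # Deletemin
        if forest.find(u) == forest.find(v):
            continue                     # would form a cycle: discard it
        forest.union(u, v)               # join two trees into a larger one
        tree.append((u, v))
        total += w
    return tree, total

# Hypothetical edge list (weight, u, v) on vertices 1-6.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2, 5), (1, 2, 6), (1, 1, 3), (2, 5, 6),
         (3, 3, 4), (4, 4, 5), (10, 1, 2)]
print(kruskal(nodes, edges))
```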



Detecting Cycles
• If the edge to be added (u,v) is such that vertices u and v belong to
the same tree, then adding (u,v) would form a cycle
• Therefore, to check, compute Find(u) and Find(v). If they are the same,
discard (u,v)
• If they are different, Union(Find(u), Find(v))


Properties of trees in Kruskal’s algorithm
• Vertices in different trees are disjoint
• True at initialization, and Union won’t modify the fact for the remaining trees
• Trees form equivalence classes under the relation “is connected to”
• u connected to u (reflexivity)
• u connected to v implies v connected to u (symmetry)
• u connected to v and v connected to w implies a path from u to w, so u
connected to w (transitivity)


Kruskal’s Algorithm Data Structures
• Adjacency list for the graph
• To perform the initialization of the data structures below
• Disjoint Set ADT for the trees (recall the up-tree implementation of
Union-Find)
• Binary heap for edges


Example
[Figure: weighted graph on vertices 1–6 used as the running example]


Initialization
Initially, a forest of 6 trees:
F = {{1},{2},{3},{4},{5},{6}}
Edges in a heap (not shown)


Step 1
Select the edge with lowest cost: (2,5)
Find(2) = 2, Find(5) = 5
Union(2,5)
F = {{1},{2,5},{3},{4},{6}}
1 edge accepted


Step 2
Select the edge with lowest cost: (2,6)
Find(2) = 2, Find(6) = 6
Union(2,6)
F = {{1},{2,5,6},{3},{4}}
2 edges accepted


Step 3
Select the edge with lowest cost: (1,3)
Find(1) = 1, Find(3) = 3
Union(1,3)
F = {{1,3},{2,5,6},{4}}
3 edges accepted


Step 4
Select the edge with lowest cost: (5,6)
Find(5) = 2, Find(6) = 2
Do nothing (same tree: the edge would form a cycle)
F = {{1,3},{2,5,6},{4}}
3 edges accepted


Step 5
Select the edge with lowest cost: (3,4)
Find(3) = 1, Find(4) = 4
Union(1,4)
F = {{1,3,4},{2,5,6}}
4 edges accepted


Step 6
Select the edge with lowest cost: (4,5)
Find(4) = 1, Find(5) = 2
Union(1,2)
F = {{1,3,4,2,5,6}}
5 edges accepted: end
Total cost = 10
Although there is a unique minimum spanning tree in this example, this is
not generally the case


Kruskal’s Algorithm Analysis
• Initialize forest: O(n)
• Initialize heap: O(m), m = |E|
• Loop performed m times
• In the loop, one DeleteMin: O(log m)
• Two Finds, each O(log n)
• One Union (at most): O(1)
• So worst case O(m log m) = O(m log n)


Time Complexity Summary
• Recall that m = |E| = O(n^2)
• Prim’s runs in O((n+m) log n)
• Kruskal’s runs in O(m log m) = O(m log n)
• In practice, Kruskal’s has a tendency to run faster, since graphs might
not be dense and not all edges need to be looked at in the DeleteMin
operations


The Travelling Salesman Problem (TSP) (4.7)

A salesman wishes to travel around a given set of cities, and return to
the beginning, covering the smallest total distance.

Easy to State

Difficult to Solve
If there is no condition to return to the beginning, it can still be
regarded as a TSP.
Suppose we wish to go from A to B visiting all cities.
[Figure: cities with designated endpoints A and B]
If there is no condition to return to the beginning, it can still be
regarded as a TSP.

Connect A and B to a ‘dummy’ city at zero distance.

(If there is no stipulation of start and finish cities, connect all cities
to the dummy at zero distance.)
[Figure: A and B joined through a dummy city]
If there is no condition to return to the beginning, it can still be
regarded as a TSP.

Create a TSP tour around all cities.
[Figure: tour through all cities including the dummy]
A route returning to the beginning is known as a Hamiltonian Circuit.

A route not returning to the beginning is known as a Hamiltonian Path.

Essentially the same class of problem.
Applications of the TSP
Routing around Cities
Computer Wiring - connecting together computer components using minimum
wire length
Archaeological Seriation - ordering sites in time
Genome Sequencing - arranging DNA fragments in sequence
Job Sequencing - sequencing jobs in order to minimise total set-up time
between jobs
Wallpapering to Minimise Waste

NB: The first three applications are generally symmetric

Major Practical Extension of the TSP
Vehicle Routing - meet customers’ demands within given time windows using
lorries of limited capacity
[Figure: depot and customers, each with a delivery time window]
Much more difficult than the TSP
History of TSP
1800s  Irish mathematician Sir William Rowan Hamilton
1930s  Studied by mathematicians Menger, Whitney, Flood etc.
1954   Dantzig, Fulkerson, Johnson: 49-city problem (capitals of USA
       states) solved
1971   64 cities
1975   100 cities
1977   120 cities
1980   318 cities
1987   666 cities
1987   2392 cities (electronic wiring example)
1994   7397 cities
1998   13509 cities (all towns in the USA with population > 500)
2001   15112 cities (towns in Germany)
2004   24978 cities (places in Sweden)
But many smaller instances are not yet solved (to proven optimality).
Recent TSP Problems and Optimal Solutions
from the web page of William Cook, Georgia Tech, USA, with thanks
[Figure: gallery of solved TSP instances, n = 33, 49, 120, 532, 666, 1512,
2392, 7397, 13509]

Printed Circuit Board, 2392 cities, 1987, Padberg and Rinaldi
USA towns of 500 or more population, 13509 cities, 1998, Applegate, Bixby,
Chvátal and Cook
Towns in Germany, 15112 cities, 2001, Applegate, Bixby, Chvátal and Cook
Sweden, 24978 cities, 2004, Applegate, Bixby, Chvátal, Cook and Helsgaun
Solution Methods
I. Try every possibility: (n-1)! possibilities - grows faster than
exponentially.

If it took 1 microsecond to calculate each possibility, it would take
10^140 centuries to calculate all possibilities when n = 100.

II. Optimising Methods: obtain a guaranteed optimal solution, but can
take a very, very long time.

III. Heuristic Methods: obtain ‘good’ solutions ‘quickly’ by intuitive
methods. No guarantee of optimality.
(Place problem in newspaper with cash prize)
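The exhaustive Method I can be sketched directly in Python on a toy instance; fixing city 0 as the start gives the (n-1)! tours counted above. The distance matrix is hypothetical:

```python
from itertools import permutations

def brute_force_tsp(dist):
    # Enumerate all (n-1)! tours with city 0 fixed as the start/end,
    # and keep the shortest. Only feasible for very small n.
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Hypothetical symmetric distance matrix for 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_tsp(dist))
```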

The Nearest Neighbour Method (Heuristic) - A ‘Greedy’ Method

1. Start Anywhere

2. Go to Nearest Unvisited City


3. Continue until all Cities visited

4. Return to Beginning

A 42-City Problem: The Nearest Neighbour Method (Starting at City 1)
[Figure: 42-city instance with the nearest-neighbour tour under construction]
The Nearest Neighbour Method (Starting at City 1)
[Figure: nearest-neighbour tour in progress on the 42-city instance]
The Nearest Neighbour Method (Starting at City 1): Length 1498
[Figure: completed nearest-neighbour tour of the 42 cities]
Remove Crossovers
[Figure: tour with crossing edges identified and uncrossed]
Remove Crossovers: Length 1453
[Figure: improved tour after removing crossovers]
Christofides Method (Heuristic)

1. Create a Minimum Cost Spanning Tree (greedy algorithm)

2. ‘Match’ odd-degree nodes

3. Create an Eulerian tour, short-circuiting cities revisited
Christofides Method, 42-City Problem
Minimum Cost Spanning Tree: Length 1124
[Figure: minimum cost spanning tree of the 42 cities]
Minimum Cost Spanning Tree by Greedy Algorithm

Match Odd Degree Nodes
Match Odd Degree Nodes in Cheapest Way - Matching Problem
[Figure: spanning tree with odd-degree nodes matched]
Create an Eulerian Tour - Short-Circuiting Cities Revisited: Length 1436
[Figure: Christofides tour of the 42 cities]
Optimising Method
1. Make sure every city is visited once and left once, in the cheapest way
(easy) - the Assignment Problem. Results in subtours: Length 1098
[Figure: assignment solution consisting of several subtours]
Put in extra constraints to remove subtours (more difficult).
Results in new subtours: Length 1154
[Figure: solution after the first round of subtour elimination]
Remove the new subtours.
Results in further subtours: Length 1179
[Figure: solution after the second round of subtour elimination]
Further subtours: Length 1189
[Figure: solution after another round of subtour elimination]
Further subtours: Length 1192
[Figure: solution after another round of subtour elimination]
Further subtours: Length 1193
[Figure: solution after another round of subtour elimination]
Optimal Solution: Length 1194
[Figure: optimal tour of the 42 cities]
