Chapter 8: The Matrix of a Graph
Graph Matrices and Applications: Motivational Overview
Whenever a graph is used as a model of software, sooner or later we trace paths through it: to find a set of covering paths, the logic function that controls the flow, the processing time of the routine, the equations that define a domain, whether the routine pushes or pops, or whether a state is reachable or not.
Tool Building
If you build test tools or want to know how they work, sooner or later you’ll be implementing or investigating analysis routines based on these methods—or you should be. Think about how a naive tool builder would go about finding a property of all paths (a possibly infinite number) versus how one might do it based on the methods of this chapter. Our earlier treatment of graphs was pictorial, and it’s hard to build algorithms over visual graphs. The properties of graph matrices are fundamental to test tool building.
We talk about graphs in testing theory, but we prove theorems about graphs by proving theorems about their matrix representations. Without the conceptual apparatus of graph matrices, you will be blind to much of testing theory, especially those parts that lead to useful algorithms.
1. Matrix multiplication, which is used to get the path expression from every node to every other node.
Basic Principles
A graph matrix is a square array with one row and one column for every node in the
graph. Each row-column combination corresponds to a relation between the node corresponding
to the row and the node corresponding to the column. The relation, for example, could be as
simple as the link name, if there is a link between the nodes. Observe the following:
1. The size of the matrix (i.e., the number of rows and columns) equals the number of
nodes.
2. There is a place to put every possible direct connection or link between any node and
any other node.
3. The entry at a row and column intersection is the link weight of the link (if any) that
connects the two nodes in that direction.
A Simple Weight
The simplest weight we can use is to note that there is or isn’t a connection. Let “1” mean
that there is a connection and “0” that there isn’t. The arithmetic rules are:
1 + 1 = 1, 1 + 0 = 1, 0 + 0 = 0,
1 × 1 = 1, 1 × 0 = 0, 0 × 0 = 0.
A matrix with weights defined like this is called a connection matrix. The connection matrix for a graph is obtained by replacing each entry with 1 if there is a link and 0 if there isn’t. As usual, to reduce clutter we don’t write down 0 entries. Each row of a matrix (whatever the weights) denotes the outlinks of the node corresponding to that row, and each column denotes the inlinks corresponding to that node. A branch node is a node with more than one nonzero entry in its row, and a junction node is a node with more than one nonzero entry in its column.
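To make this concrete, here is a small Python sketch of a connection matrix. The graph is the same five-node example used for the linked-list representation later in this chapter; the variable names are illustrative only.

A = [
    [0, 0, 1, 0, 0],  # node 1 -> node 3
    [0, 0, 0, 0, 0],  # node 2: the exit, no outlinks
    [0, 1, 0, 1, 0],  # node 3 -> nodes 2 and 4
    [0, 1, 0, 0, 1],  # node 4 -> nodes 2 and 5
    [0, 1, 1, 0, 1],  # node 5 -> nodes 2, 3, and itself (a self-loop)
]

n = len(A)
out_degree = [sum(row) for row in A]                        # nonzero entries per row
in_degree = [sum(A[i][j] for i in range(n)) for j in range(n)]

print([i + 1 for i in range(n) if out_degree[i] > 1])       # branch nodes: [3, 4, 5]
print([j + 1 for j in range(n) if in_degree[j] > 1])        # junction nodes: [2, 3, 5]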
Further Notation
Talking about the “entry at row 6, column 7” is wordy. To compact things, the entry corresponding to node i and node j, which is to say the link weight between nodes i and j, is denoted by a_{ij}. A self-loop about node i is denoted by a_{ii}, while the link weight for the link between nodes j and i is denoted by a_{ji}. The path segments expressed in terms of link names and, in this notation, for several paths in the graph are:
abmd = a_{13} a_{35} a_{56} a_{67};
degef = a_{67} a_{78} a_{87} a_{78} a_{82};
ahekmlld = a_{13} a_{37} a_{78} a_{85} a_{56} a_{66} a_{66} a_{67};
because a_{13} = a, a_{35} = b, a_{56} = m, a_{66} = l, a_{67} = d, etc.
Relations
This isn’t a section on aunts and uncles but on abstract relations that can exist between
abstract objects, although family and personal relations can also be modeled by abstract
relations, if you want to. A relation is a property that exists between (usually two) objects of interest. We’ve had many examples of relations in this book. Here’s a sample, where a and b
denote objects and R is used to denote that a has the relation R to b:
1. “Node a is connected to node b” or aRb where “R” means “is connected to.”
2. “a >= b” or aRb where “R” means “greater than or equal.”
3. “a is a subset of b” where the relation is “is a subset of.”
4. “It takes 20 microseconds of processing time to get from node a to node b.” The
relation is expressed by the number 20.
Properties of Relations
General
In the most general case we assume nothing about a relation except that it either does or does not hold between a pair of objects. If that’s all we ask, then our relation arithmetic is too weak to be useful. The following sections concern some properties of relations that have been found to be useful. Any given relation may or may not have these properties, in almost any combination.
Transitive Relations
A relation R is transitive if aRb and bRc implies aRc. Most relations used in testing are
transitive. Examples of transitive relations include: is connected to, is greater than or equal to, is
less than or equal to, is a relative of, is faster than, is slower than, takes more time than, is a subset of.
Reflexive Relations
A relation R is reflexive if, for every a, aRa. Examples of reflexive relations include: equals, is acquainted with (except, perhaps, for amnesiacs), is a relative of. Examples of irreflexive relations include: not equals, is a
friend of (unfortunately), is on top of, is under.
Symmetric Relations
A relation R is symmetric if for every a and b, aRb implies bRa. A symmetric relation
means that if there is a link from a to b then there is also a link from b to a; which furthermore
means that we can do away with arrows and replace the pair of links with a single undirected
link. A graph whose relations are not symmetric is called a directed graph because we must use
arrows to denote the relation’s direction. A graph over a symmetric relation is called an
undirected graph. The matrix of an undirected graph is symmetric (a_{ij} = a_{ji} for all i, j).
Equivalence Relations
An equivalence relation is a relation that satisfies the reflexive, transitive, and symmetric
properties. Numerical equality is the most familiar example of an equivalence relation. If a set of
objects satisfy an equivalence relation, we say that they form an equivalence class over that
relation.
Partial Ordering Relations
A partial ordering relation satisfies the reflexive, transitive, and antisymmetric properties.
Partly ordered graphs have several important properties: they are loop-free, there is at least one
maximum element, there is at least one minimum element, and if you reverse all the arrows, the
resulting graph is also partly ordered. A maximum element a is one for which the relation xRa does not hold for any other element x; similarly, a minimum element a is one for which aRx does not hold for any other element x.
Principles
Each entry in the graph’s matrix (that is, each link) expresses a relation between the pair
of nodes that corresponds to that entry. It is a direct relation, but we are usually interested in
indirect relations that exist by virtue of intervening nodes between the two nodes of interest.
Squaring the matrix (using suitable arithmetic for the weights) yields a new matrix that expresses
the relation between each pair of nodes via one intermediate node under the assumption that the
relation is transitive.
Matrix Powers and Products
Given a matrix whose entries are a_{ij}, the square of that matrix is obtained by replacing every entry with

\sum_{k=1}^{n} a_{ik} a_{kj}.

More generally, given two matrices A and B, with entries a_{ik} and b_{kj}, respectively, their product is a new matrix C, whose entries are c_{ij}, where

c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.

The indexes of a product entry (e.g., c_{32}) identify, respectively, the row of the first matrix and the column of the second matrix that will be combined to yield that entry in the product matrix.
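A minimal Python sketch of this product, with the weight “addition” and “multiplication” passed in as functions so the same routine serves the Boolean connection-matrix rules or any other weight arithmetic; the function and variable names are illustrative, not from the original text.

def matrix_product(A, B, add, mul, zero):
    # C[i][j] is the "sum" over k of A[i][k] "times" B[k][j], using the
    # supplied weight arithmetic and its additive identity `zero`.
    n = len(A)
    C = [[zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = zero
            for k in range(n):
                acc = add(acc, mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# The connection-matrix rules: 1 + 1 = 1, 1 x 1 = 1, 1 x 0 = 0, and so on.
bool_add = lambda x, y: 1 if (x or y) else 0
bool_mul = lambda x, y: 1 if (x and y) else 0

A = [[0, 1, 0],    # a small three-node example: 1 -> 2 -> 3
     [0, 0, 1],
     [0, 0, 0]]
A2 = matrix_product(A, A, bool_add, bool_mul, 0)
print(A2[0][2])    # 1: node 3 is reachable from node 1 via one intermediate node

Squaring the connection matrix in this way computes exactly the one-intermediate-node relation described above.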
The Set of All Paths
Our main objective is to use matrix operations to obtain the set of all paths between all
nodes or, equivalently, a property (described by link weights) over the set of all paths from every
node to every other node, using the appropriate arithmetic rules for such weights. The set of all
paths between all nodes is easily expressed in terms of matrix operations. It’s given by the following infinite series of matrix powers:

A + A^2 + A^3 + A^4 + ...

This is an eloquent, but practically useless, expression. Let I be an n by n matrix, where n is the number of nodes, whose entries consist of multiplicative identity elements along the principal diagonal. For link names, this can be the number “1.” For other kinds of weights, it is the multiplicative identity for those weights. The above sum can then be rephrased as

A(I + A + A^2 + A^3 + A^4 + ...).

But often, for relations, A + A = A, so that

(A + I)^2 = A^2 + A + A + I = A^2 + A + I,

and more generally

(A + I)^n = I + A + A^2 + A^3 + ... + A^n.
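For connection matrices this gives a direct, if brute-force, way to compute reachability over paths of any length: keep squaring (A + I) under the Boolean rules until nothing changes. A sketch under that assumption; the function name is illustrative.

def reachability(A):
    # Compute (A + I)^n under the Boolean rules by repeated squaring.
    # Entry [i][j] of the result is 1 iff node j can be reached from node i
    # by a path of length zero or more.
    n = len(A)
    M = [[1 if i == j else A[i][j] for j in range(n)] for i in range(n)]  # A + I
    while True:
        M2 = [[1 if any(M[i][k] and M[k][j] for k in range(n)) else 0
               for j in range(n)] for i in range(n)]
        if M2 == M:            # fixed point: no longer paths left to add
            return M
        M = M2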
Loops
Every loop forces us into a potentially infinite sum of matrix powers. The way to handle loops is
similar to what we did for regular expressions. Every loop shows up as a term in the diagonal of
some power of the matrix—the power at which the loop finally closes—or, equivalently, the
length of the loop. The impact of the loop can be obtained by preceding every element in the row
of the node at which the loop occurs by the path expression of the loop term starred and then
deleting the loop term.
Partitioning Algorithm (BEIZ71, SOHO84)
Consider any graph over a transitive relation. The graph may have loops. We would like to
partition the graph by grouping nodes in such a way that every loop is contained within one
group or another. Such a graph is partly ordered. There are many uses for an algorithm that does that:
1. We might want to embed the loops within a subroutine so as to have a resulting graph
which is loop-free at the top level.
2. Many graphs with loops are easy to analyze if you know where to break the loops.
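Nodes that lie on a common loop necessarily reach each other, so one way to carry out the partitioning is to group nodes by mutual reachability in the closure matrix (A + I)^n. A sketch under that assumption, with illustrative names; it is not the only possible implementation.

def partition(C):
    # C is a Boolean closure matrix: C[i][j] == 1 iff node j is reachable
    # from node i (C[i][i] == 1 by construction). Two nodes belong to the
    # same group exactly when each reaches the other, which guarantees that
    # every loop falls inside a single group.
    n = len(C)
    groups, assigned = [], [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group = [j for j in range(n) if C[i][j] and C[j][i]]
        for j in group:
            assigned[j] = True
        groups.append(group)
    return groups

The resulting groups, treated as single nodes, form a loop-free (partly ordered) graph at the top level.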
Breaking Loops and Applications
How do you find the point at which to break the loops? Consider the matrix of a strongly
connected subgraph. If there are entries on the principal diagonal, then start by breaking the loop
for those links. Now consider successive powers of the matrix. At some power or another, a loop
is manifested as an entry on the principal diagonal. Furthermore, the regular expression over the
link names that appears in the diagonal entry tells you all the places you can or must break the
loop.
Node-Reduction Algorithm
In the context of testing, we’re usually interested in establishing a relation between two nodes—typically the entry and exit nodes—rather than between every node and every other node. In a debugging context it is unlikely that we would want to know the path expression between every node and every other node; there also, it is the path expression or some other related expression between a specific pair of nodes that is sought.
The effect of renumbering the nodes of a graph is to interchange the rows and columns of the corresponding matrix. Say that you wanted to change the names of nodes i and j to j and i, respectively: you would do this by interchanging rows i and j and then interchanging columns i and j.
The first step in the Node-Reduction Algorithm is the most complicated one: eliminating
a node and replacing it with a set of equivalent links. The reduction is done one node at a time by
combining the elements in the last column with the elements in the last row and putting the result
into the entry at the corresponding intersection.
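As a sketch of that first step, here is the elimination of a matrix’s last node over path-expression weights. The string-based weight arithmetic (concatenation for multiplication, “+” for parallel links, “*” for the starred loop term) and the names are illustrative only.

def eliminate_last_node(A):
    # A is a square matrix of path-expression strings; '' means no link.
    # Star the last node's self-loop (if any), combine every inlink to the
    # last node with every outlink from it, then drop that row and column.
    n = len(A)
    last = n - 1
    star = "(" + A[last][last] + ")*" if A[last][last] else ""
    for i in range(last):
        for j in range(last):
            if A[i][last] and A[last][j]:
                term = A[i][last] + star + A[last][j]
                A[i][j] = A[i][j] + "+" + term if A[i][j] else term
    return [row[:last] for row in A[:last]]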
Applications
The path expression is usually the most difficult and complicated to get. The arithmetic
rules for most applications are simpler. In this section we’ll redo earlier applications using the appropriate arithmetic rules, but this time using matrices rather than graphs. Refer back to the corresponding examples to follow the successive stages of the analysis.
Building Tools: Matrix Representation Software
Overview
We draw graphs or display them on screens as visual objects; we prove theorems and develop
graph algorithms by using matrices; and when we want to process graphs in a computer, because
we’re building tools, we represent them as linked lists. We use linked lists because graph
matrices are usually very sparse; that is, the rows and columns are mostly empty.
The out-degree of a node is the number of outlinks it has. The in-degree of a node is the number
of inlinks it has. The degree of a node is the sum of the out-degree and in-degree. The average
degree of a node (the mean over all nodes) for a typical graph defined over software is between 3 and 4.
2. Weights—Most weights are complicated and can have several components. That
would require an additional weight matrix for each such weight.
3. Variable-Length Weights—If the weights are regular expressions, say, or algebraic
expressions (which is what we need for a timing analyzer), then we need a two-
dimensional string array, most of whose entries would be null.
Linked-List Representation
Give every node a unique name or number. A link is a pair of node names. The linked list for the example graph is:
1,3;a
2,
3,2;d
3,4;b
4,2;c
4,5;f
5,2;g
5,3;e
5,5;h
Note that I’ve put the list entries in lexicographic order. The link names will usually be pointers to entries in a string array where the actual link weight expressions are stored. If the weights are fixed-length then they can be associated directly with the links in a parallel, fixed entry-length array. Let’s clarify the notation a bit by using node names and pointers.
a
1 node1,3;a
2 node2,exit
3 node3,2;d
  ,4;b
4 node4,2;c
  ,5;f
5 node5,2;g
  ,3;e
  ,5;h
The node names appear only once, at the first link entry. Also, instead of naming the other end of
the link, we have just the pointer to the list position in which that node starts. Finally, it is also
very useful to have back pointers for the inlinks. Doing this we get
List Entry   Content
1            node1,3;a
2            node2,exit
             3,
             4,
             5,
3            node3,2;d
             ,4;b
             1,
             5,
4            node4,2;c
             ,5;f
             3,
5            node5,2;g
             ,3;e
             ,5;h
             4,
             5,
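The same structure is easy to hold in a tool. A minimal Python sketch using dictionaries in place of pointer arrays; the variable names are illustrative, not from the original text.

# Outlink lists with weights, mirroring the list entries above,
# plus back pointers for the inlinks.
outlinks = {
    1: [(3, "a")],
    2: [],                           # the exit node has no outlinks
    3: [(2, "d"), (4, "b")],
    4: [(2, "c"), (5, "f")],
    5: [(2, "g"), (3, "e"), (5, "h")],
}
inlinks = {node: [] for node in outlinks}
for src, links in outlinks.items():
    for dst, _weight in links:
        inlinks[dst].append(src)     # back pointer

print(inlinks[2])                    # [3, 4, 5]: the nodes with links into node 2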
Matrix Operations
Parallel Reduction
This is the easiest operation. Parallel links after sorting are adjacent entries with the same pair of
node names. For example:
node17,21;x
,44;y
,44;z
,44;w
We have three parallel links from node 17 to node 44. We fetch the weight expressions using the
y, z, and w pointers and we obtain a new link that is their sum:
node17,21;x
,44;y' (where y' = y + z + w).
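A sketch of the same step in Python over one node’s sorted outlink list; weights are kept as strings and “added” by joining with “+”, and the names are illustrative.

def parallel_reduce(links):
    # links: sorted list of (destination, weight) pairs for one node.
    # Adjacent entries with the same destination are parallel links;
    # replace each run with a single link whose weight is the sum.
    reduced = []
    for dst, w in links:
        if reduced and reduced[-1][0] == dst:
            reduced[-1] = (dst, reduced[-1][1] + "+" + w)
        else:
            reduced.append((dst, w))
    return reduced

print(parallel_reduce([(21, "x"), (44, "y"), (44, "z"), (44, "w")]))
# [(21, 'x'), (44, 'y+z+w')]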
Loop Reduction
Loop reduction is almost as easy. A loop term is spotted as a self-link. The effect of the loop
must be applied to all the outlinks of the node. Scan the link list for the node to find the loop(s).
Apply the loop calculation to every outlink, except another loop. Remove that loop. Repeat for
all loops about that node. Repeat for all nodes. For example removing node 5’s loop:
5 node5,2;g → node5,2;h*g
  ,3;e → ,3;h*e
  ,5;h →
  4, → 4,
  5, →
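In code, the same step might look like this (string weights and names are illustrative, as before):

def loop_reduce(node, links):
    # Remove self-loops from a node's outlink list by prefixing the starred
    # loop expression to every remaining outlink.
    loops = [w for dst, w in links if dst == node]
    others = [(dst, w) for dst, w in links if dst != node]
    for loop in loops:
        others = [(dst, loop + "*" + w) for dst, w in others]
    return others

print(loop_reduce(5, [(2, "g"), (3, "e"), (5, "h")]))
# [(2, 'h*g'), (3, 'h*e')]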
Cross-Term Reduction
Select a node for reduction. The cross-term step requires that you combine every inlink to the
node with every outlink from that node. The outlinks are associated with the node you have
selected. The inlinks are obtained by using the back pointers. The new links created by removing
the node will be associated with the nodes of the inlinks. Say that the node to be removed was
node 4.
List Entry   Before                 After
2            node2,exit             node2,exit
             3,                     3,
             4,                     4,
             5,                     5,
3            node3,2;d              node3,2;d
             ,4;b                   ,2;bc
                                    ,5;bf
             1,                     1,
             5,                     5,
4            node4,2;c
             ,5;f
             3,
5            node5,2;h*g            node5,2;h*g
             ,3;h*e                 ,3;h*e
             4,
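A sketch of the cross-term step on the linked-list representation, assuming loop reduction has already been applied to the node being removed; the names and the string weight arithmetic are illustrative.

def cross_term_remove(node, outlinks):
    # Remove `node` by combining every inlink to it with every outlink from
    # it; the new links are attached to the nodes at the tails of the inlinks.
    removed = outlinks.pop(node)
    for src, links in outlinks.items():
        new_links = []
        for dst, w in links:
            if dst == node:
                new_links.extend((d, w + w2) for d, w2 in removed)
            else:
                new_links.append((dst, w))
        outlinks[src] = new_links

outlinks = {
    3: [(2, "d"), (4, "b")],
    4: [(2, "c"), (5, "f")],
    5: [(2, "h*g"), (3, "h*e")],
}
cross_term_remove(4, outlinks)
print(outlinks[3])   # [(2, 'd'), (2, 'bc'), (5, 'bf')]: parallel links remain to be reduced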
Addition of two matrices is straightforward. If you keep the lists sorted, then simply merge the lists; any parallel links that result are combined by a subsequent parallel reduction.
Node-Reduction Optimization
The optimum order for node reduction is to do lowest-degree nodes first. The idea is to get the
lists as short as possible as quickly as possible. Nodes of degree 3 (one in and two out or two in
and one out) reduce the total link count by one link when removed. A degree-4 node keeps the
link count the same, and all higher-degree nodes increase the link count.
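A sketch of how a tool might pick the next node to reduce under this heuristic, assuming the outlink and back-pointer lists described earlier; the names are illustrative.

def next_node_to_reduce(outlinks, inlinks, keep):
    # Choose the lowest-degree node (in-degree + out-degree), skipping the
    # nodes that must survive the reduction (typically the entry and exit).
    # Reducing low-degree nodes first keeps the link lists as short as possible.
    candidates = [n for n in outlinks if n not in keep]
    return min(candidates, key=lambda n: len(outlinks[n]) + len(inlinks[n]))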