A Study On Focus Arithmetics Labeling of Graphs


CHAPTER-1

A STUDY ON FOCUS ARITHMETICS LABELING OF GRAPHS


BASIC DEFINITIONS

Graph:

A graph G is an ordered triple (V(G), E(G), ψG) consisting of

(i) a nonempty finite set V(G),

(ii) a finite set E(G) (which may be empty), disjoint from V(G), and

(iii) an incidence function ψG that associates with each element of E(G) an unordered pair of elements of V(G).

The elements of V(G) are called the vertices (or nodes), and the elements of E(G) are called the edges (or links) of G. If e is an edge and ψ(e) = uv, then we say that e is an edge joining u and v, and the vertices u and v are called the ends (end vertices) of e.

Example:

G = (V(G), E(G), ψ),

where V(G) = {v1, v2, v3, v4},

E(G) = {e1, e2, e3, e4},

and ψ is defined by

ψ(e1) = v1v2,  ψ(e2) = v2v3,  ψ(e3) = v3v4,  ψ(e4) = v4v1.

Adjacent vertices:

Two vertices v1 and v2 are said to be adjacent if they are connected by an edge.

Adjacent edges:

Two edges e1 and e2 are said to be adjacent edges if they have a vertex in common.

Incident:

A vertex v is said to be incident with an edge e if v is an end of e.

Sub graph:

Let G = <V, E> be a graph .A graph H = <W, F> is said to be a sub


graph of G. If W
 V and F  E.

Spanning sub graph:

Let G = <V, E> be a graph .A sub graph H = <W, F> is said to be a


spanning sub graph of G. If W  V

Degree:

The degree of a vertex v in a graph G is the number of edges incident with v.

DEFINITIONS RELATED TO A TREE

Acyclic Graph:

An acyclic graph is a graph having no graph cycles.

Tree:

A connected acyclic graph is known as a tree, and a possibly disconnected


acyclic graph is known as a forest (i.e., a collection of trees)

Example:

Connected:

A graph can be a tree only if it is connected; that is, every node must be joined by a link (edge) to the rest of the graph, so that each node can be reached from every other node. If some node is not connected to the others, then the assembly is not a tree.

Here is an example of a tree, because it is connected.


Eccentricity:
The eccentricity e(u) = max {d(u, v): v ∈ V} of a node u is the maximum distance from u to any node v ∈ V. The eccentricity of each node is indicated in the graph below.
Example:

Radius:
The radius rad(G) = min {e(u): u ∈ V} of a graph G is the minimal eccentricity over all nodes in V.
Diameter:
The diameter diam(G) = max {e(u): u ∈ V} of a graph G is the maximal eccentricity over all nodes in V. It is also the maximal distance between any two nodes in V.
Center:
The center of a graph G is the set of nodes in V of minimal eccentricity (shown as the black node in the figure).
Subtree:
A tree G′ whose graph vertices and graph edges form subsets of the graph vertices and graph edges of a given tree G.
PROPERTIES OF TREES

A tree is an undirected simple graph G that satisfies any of the following


equivalent conditions:

 G is connected and has no cycles.


 G has no cycles, and a simple cycle is formed if any edge is added to G.
 G is connected, but is not connected if any single edge is removed from G.
 G is connected and the 3-vertex complete graph K3 is not a minor of G.

 Any two vertices in G can be connected by a unique simple path.

If G has finitely many vertices, say n of them, then the above statements are
also equivalent to any of the following conditions:

 G is connected and has n − 1 edges.


 G has no simple cycles and has n − 1 edges.
 G is connected, and every edge is a cut-edge.
 Any two vertices of G are connected by exactly one path.
 G contains no cycles, and for any new edge e, the graph G + e has exactly one
cycle.

Leaf:
A leaf is a vertex of degree 1.

Forest:

A forest is a disjoint union of trees.

14
Internal vertex:

An internal vertex is a vertex of degree at least 2.

Spanning Tree:

A spanning tree of a graph with n vertices is a subset of n − 1 edges that form a tree.

Example:

Complexity of the graph:

A spanning subgraph H of a graph is said to be a spanning tree if H is a tree.


We know that every connected graph contains a spanning tree. The number of spanning trees in a connected graph G is denoted by τ(G) and is said to be the complexity of the graph G.

Minimum Spanning Tree:

The minimum spanning tree of a weighted graph is a set of n − 1 edges of minimum total weight which form a spanning tree of the graph. When a graph is unweighted, any spanning tree is a minimum spanning tree.
Maximum Spanning Tree:

A maximum spanning tree is a spanning tree of a weighted graph having


maximum weight. It can be computed by negating the weights for each edge and
applying Kruskal's algorithm.

Optimal tree:

An optimal tree in a connected graph is a spanning tree that has the smallest
possible sum of weights of its edges.

Rooted Tree:

A rooted tree is a tree in which a special ("labeled") node is singled out.


This node is called the "root" or (less commonly) "eve" of the tree. Rooted trees are equivalent to oriented trees.

Centered tree:

A centered tree is a tree having a single node that is a graph center; it is then also called a central tree.

Example:
Planted Tree:

A planted tree is a rooted tree whose root node has vertex degree 1.

The number of planted trees on n nodes is Tn−1, where Tn is the number of rooted trees on n vertices, so there are 0, 1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, ... planted trees on n = 1, 2, 3, ... nodes.

Example:

Descendant:
A vertex w is called a descendant of a vertex v (and v is called an ancestor of w) if v is on the unique path from the root to w. If, in addition, w ≠ v, then w is a proper descendant of v (and v is a proper ancestor of w).
Parent:
If vertex v immediately precedes vertex w on the path from the root to w,
then v is parent of w and w is child of v.
Depth:
In a rooted tree, the depth or level of a vertex v is its distance from the
root, i.e., the length of the unique path from the root to v. Thus, the root has depth
0.
Height:
The height of a rooted tree is the length of a longest path from the
root (or the greatest depth in the tree).
Sibling:
A sibling is a node which shares the same parent with a given node.
Ancestor:
An ancestor is any node between a given node and the root, including the
root.
Bicentered Tree:
A bicentered tree (also called a bicentral tree) is a tree having two nodes that are graph centers. The numbers of bicentered trees on n = 1, 2, ... nodes are 0, 1, 0, 1, 1, 3, 4, 11, 20, 51, 108, ...

Example:

Path Graph:

A path graph is a graph that can be drawn so that all of its vertices and
edges lie on a single straight line.

Example:
Labeled tree:

A labeled tree is a tree in which each vertex is given a unique label. The
vertices of a labeled tree on n vertices are typically given the labels 1, 2,…, n.

Recursive tree:

A recursive tree is a labeled rooted tree whose vertex labels respect the tree order (i.e., if u < v for two vertices u and v, meaning that u lies on the path from the root to v, then the label of u is smaller than the label of v).

n-ary tree:

An n-ary tree is a rooted tree for which each vertex has at most n children. 2-
ary trees are sometimes called binary trees, while 3-ary trees are sometimes called
ternary trees.

Complete m-ary tree:

A complete m-ary tree is an m-ary tree in which every internal vertex has
exactly m children and all leaves have the same depth.
2.4 BINARY TREE

A binary tree is a tree in which there is exactly one vertex of degree two and each of the remaining vertices is of degree one or three.

Properties of binary tree:

The number of vertices n in a binary tree is always odd. This is because there is exactly one vertex of even degree, and the remaining n − 1 vertices are of odd degree.

Traversals of Binary tree:

The traversal of a binary tree consists of visiting each vertex of the tree in some prescribed order. The most common binary tree traversals are differentiated by the order in which the root and its subtrees are visited. The three traversals are described below; a short code sketch illustrating them follows the list:

1. Preorder traversal

(a) Visit the root of the tree.

(b) Pre order traverses the left subtree.

(c) Pre order traverses the right subtree.

2. Post order traversal

(a) Post order traverses the left subtree.

(b) Post order traverses the right subtree.

(c) Visit the root of the tree.


3. In order traversal

(a) In order traverse the left subtree.

(b) Visit the root of the tree.

(c) In order traverse the right subtree.
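As an illustration of the three orders, the sketch below encodes a small binary tree as nested tuples (a representation chosen only for this example, not taken from the text) and prints its preorder, inorder and postorder sequences.

```python
# A node is a tuple (value, left, right); None marks an empty subtree.
tree = ("A", ("B", ("D", None, None), ("E", None, None)),
             ("C", None, ("F", None, None)))

def preorder(node):
    if node is None:
        return []
    value, left, right = node
    return [value] + preorder(left) + preorder(right)      # root, left, right

def inorder(node):
    if node is None:
        return []
    value, left, right = node
    return inorder(left) + [value] + inorder(right)        # left, root, right

def postorder(node):
    if node is None:
        return []
    value, left, right = node
    return postorder(left) + postorder(right) + [value]    # left, right, root

print(preorder(tree))    # ['A', 'B', 'D', 'E', 'C', 'F']
print(inorder(tree))     # ['D', 'B', 'E', 'A', 'C', 'F']
print(postorder(tree))   # ['D', 'E', 'B', 'F', 'C', 'A']
```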

Full regular binary tree:

A Full regular binary tree is a regular tree in which all leaves have the same
path length (equal to the height of the tree).
Weight of a binary tree:
A binary tree that has n leaves with the weights w1, w2, ..., wn assigned to the leaves is said to be a binary tree for the weights w1, w2, ..., wn. The weight of a binary tree for the weights w1, w2, ..., wn is ∑_{i=1}^{n} wi l(wi), where l(wi) denotes the path length of the leaf to which the weight wi is assigned.
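A quick arithmetic check of this formula, with illustrative leaf weights and path lengths (not taken from the text):

```python
# Illustrative leaf weights w_i and the path lengths l(w_i) of their leaves.
weights = [2, 3, 5]
path_lengths = [2, 2, 1]

tree_weight = sum(w * l for w, l in zip(weights, path_lengths))
print(tree_weight)   # 2*2 + 3*2 + 5*1 = 15
```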

CHAPTER-2

SPANNING TREES ALGORITHM

INTRODUCTION

In this section also we deal only with undirected graphs.

A spanning subgraph H of a graph is said to be a spanning tree if H is a tree. We know that every connected graph contains a spanning tree. The number of spanning trees in a connected graph G is denoted by τ(G) and is said to be the complexity of the graph G.

If e = uv is an edge of G, then the contraction of e is the operation of replacing u and v by a single vertex whose incident edges are the edges other than e that were incident to u or to v.

Depth-first traversal of a graph is analogous to preorder traversal of an ordered tree. Suppose that the traversal has just visited a vertex v, and let w1, w2, ..., wk be the vertices adjacent to v. Then we next visit w1, and after visiting w1, we traverse all vertices to which it is adjacent before returning to traverse w2, ..., wk. The depth-first traversal algorithm is a recursive one.


DEPTH FIRST SEARCH: (DFS)

Let G =< V, E> be a simple connected graph with n vertices.


Arrange the elements of V as ordered list v1, v2 ,..., vn .

Let v denote the vertex that is presently examined. Let T denote the spanning
tree obtained as a rooted tree by applying DFS algorithm.

Algorithm:

Step: 1 (Initialization)

Let v = v1 and T = {v1} (v1 is the root of the tree T obtained in the final step).

Step: 2

Select the smallest i such that (v, vi) ∈ E and vi has not been visited earlier. If no such vi exists, go to Step 3. Otherwise,

(i) attach the edge (v, vi) to T;

(ii) let v = vi;

(iii) return to Step 2.

Step: 3

If v = v1, then T is a rooted spanning tree with v1 as its root; stop.

Step: 4

If v ≠ v1, backtrack from v: if p is the parent of v in T, let v = p and go to Step 2.

Note:

In this algorithm, we construct a longest possible path starting from the vertex v1 (Step 2). If this path gives a spanning tree, then we stop (Step 3); otherwise we go back to the parent p of the last vertex v visited (Step 4) and construct another path from p (we may have to backtrack to a higher level if no new vertices are found while applying Step 4).
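The following Python sketch implements the recursive procedure just described. The adjacency list is a reconstruction inferred from the edges named in the steps of the worked example below (the original figure is not reproduced), with neighbours ordered so that the same visiting order is produced.

```python
def dfs_spanning_tree(adjacency, root):
    """Return the DFS spanning tree edges and the order in which vertices are
    visited. adjacency maps a vertex to its neighbours, listed in the order
    in which they are to be tried."""
    order = [root]
    tree_edges = []

    def visit(v):
        for w in adjacency[v]:              # first unvisited neighbour in list order
            if w not in order:
                order.append(w)
                tree_edges.append((v, w))   # attach the edge (v, w) to T
                visit(w)                    # returning from here is the backtrack

    visit(root)
    return tree_edges, order

adjacency = {
    "a": ["d", "b"], "b": ["a", "e", "g", "h"], "c": ["d", "j", "i"],
    "d": ["a", "c"], "e": ["b", "f", "g"], "f": ["e"],
    "g": ["b", "e", "h"], "h": ["b", "g"], "i": ["c"], "j": ["c"],
}
edges, order = dfs_spanning_tree(adjacency, "a")
print(order)   # ['a', 'd', 'c', 'j', 'i', 'b', 'e', 'f', 'g', 'h'], matching the labels 1..10
```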

Example:

Consider the depth first search spanning tree for the given graph.

Initial conditions:

Let the starting vertex v = a. Let the frontier edge set FE = ∅.

Label(a) = 1. Let i = 1.

Then the tree T is displayed in the figure (1).


Step: 1

Now, let i = 2 and FE = {(a, b), (a, d)}. Among these two edges, arbitrarily select one; take e = (a, d). Then u = d and label(d) = l(d) = 2. Add e to the tree T. The resulting tree is depicted in the figure (2).

Step: 2

Now, let i = 3 and FE = {(a, b), (d, c)}.

Here e = (d, c) (since label(d) > label(a), the path is extended from d). Then u = c and label(c) = l(c) = 3.

The resulting tree is shown in the figure (3).

Step: 3

Now, let i = 4 and FE = {(a, b), (c, i), (c, j)}. Select e = (c, j). Then u = j and l(j) = 4.

The resulting tree is displayed in the figure (4).


Step: 4

Now, let i = 5 and FE = {(a, b), (c, i)}; e = (c, i), u = i and l(i) = 5.

The resulting tree is displayed in the figure (5).

Step: 5

Now, let i = 6 and FE = {(a, b)}; e = (a, b), u = b, l(b) = 6.

The resulting tree is displayed in the figure (6).


Step: 6

Now, let i = 7 and FE = {(b, e), (b, g), (b, h)}; e = (b, e), u = e, l(e) = 7.

The resulting tree is displayed in the figure (7).


Step: 7

Now, let i = 8 and FE = {(e, f), (b, g), (b, h)}; e = (e, f), u = f, l(f) = 8.

The resulting tree is displayed in the figure (8).

Step: 8

Now, let i = 9 and FE = {(e, g), (b, g), (b, h)}; e = (e, g), u = g, l(g) = 9.

The resulting tree is displayed in the figure (9).


Step: 9

Now, let i = 10 and FE = {(b, h), (h, g)}; e = (b, h), u = h, l(h) = 10.

The resulting tree is displayed in the figure (10).

In figure (10) there is a label associated with each vertex. These labels
correspond to the order in which a depth first search visits the respective vertex.
3.3 BREADTH FIRST SEARCH: (BFS)

Breadth-first search traversal of a graph is analogous to level-by-level traversal of an ordered tree. If the traversal has just visited a vertex v, then it next visits all the vertices adjacent to v, putting the vertices adjacent to these in a waiting list to be traversed after all vertices adjacent to v have been visited. For the breadth-first algorithm a queue is needed instead of a stack. A queue is a list in which all insertions are made at one end, called the back, and all deletions are made at the other end, called the front. Queues are referred to as "first in, first out" lists.

This algorithm is used to find a path which uses the least number of edges, that is, a shortest path.

Algorithm:

Step: 1

Label the starting vertex a with 0; set i = 0.

Step: 2

Find all unlabeled vertices in G which are adjacent to vertices labeled i. If there are no such vertices, then z is not connected to a. If there are such vertices, label them i + 1.

Step: 3

If z is labeled, go to Step 4. If not, increase i to i + 1 and go to Step 2.

Step: 4

The length of the shortest path from a to z is i + 1; then stop.

Once the length of the shortest path is found from BFS, we use the back-tracking algorithm to find the shortest path from a to z. This algorithm uses the labels λ(v) which are generated in the BFS algorithm.

Back Tracking Algorithm:

Step: 1

Set i = λ(z) and assign vi = z.

Step: 2

Find a vertex u adjacent to vi and with λ(u) = i − 1. Assign vi−1 = u.

Step: 3

If i = 1, then stop; if not, decrease i to i − 1 and go to Step 2.

Example:

Obtain a BFS spanning tree for the given graph.

Initial conditions:

Let the starting vertex v = a.

Let the frontier edge set FE = ∅.

Label(a) = 1. Let i = 1.

Then the tree T is displayed in the figure (1).

Step: 1

Now, let i=2 and FE= {(a, b), (a, g)}

e = (a, b), u = b, label(b) = l(b) = 2. Add e to tree T.

Step: 2

Now, let i = 3 and FE = {(a, g), (b, c), (b, e)}; e = (a, g), u = g, label(g) = l(g) = 3.

Add e to tree T.

Step: 3

Now, let i = 4 and FE = {(b, c), (b, e), (g, f), (g, h)}; e = (b, c), u = c, label(c) = l(c) = 4. Add e to tree T.

Step: 4

Now, let i = 5 and FE = {(b, e), (g, f), (g, h), (c, d)}; e = (b, e), u = e, label(e) = l(e) = 5.

Add e to tree T.

If we proceed with further steps in the same way, we obtain the BFS spanning tree T of the graph G.

In figure (2) there is a label associated with each vertex. These labels
correspond to the order in which a breadth-first search visits the respective vertex.
CHAPTER-4

GREEDY ALGORITHMS

INTRODUCTION TO MINIMAL SPANNING TREE

Greedy is an algorithmic paradigm that builds up a solution piece by piece,


always choosing the next piece that offers the most obvious and immediate benefit.
Greedy algorithms are used for optimization problems. An optimization problem
can be solved using Greedy if the problem has the following property: At every step,
we can make a choice that looks best at the moment, and we get the optimal solution
of the complete problem.
If a Greedy Algorithm can solve a problem, then it generally becomes the
best method to solve that problem as the Greedy algorithms are in general more
efficient than other techniques like Dynamic Programming. But Greedy algorithms
cannot always be applied. For example, Fractional Knapsack problem can be solved
using Greedy, but 0-1 Knapsack cannot be solved using Greedy.
KRUSKAL’S ALGORITHM

In Kruskal's algorithm, we create an MST by picking edges one by one. The greedy choice is to pick the smallest-weight edge that does not create a cycle in the MST constructed so far.

We construct a spanning subgraph H at each iteration. We start with E(H) = ∅, so that H has n components (as E(H) = ∅, every vertex of G is a component of H). At each iteration we consider the cheapest edge not in E(H); if adding it would create a cycle we discard it and consider the next cheapest edge, and so on.

The algorithm stops after the (n − 1)th iteration, as at that stage the spanning subgraph H is a tree.

Algorithm:

Step: 1

Choose an edge e1 such that w(e1) is as small as possible.

Step: 2

If the edges e1, e2, e3, ..., ei have been chosen, then choose an edge ei+1 from E(G) \ {e1, e2, e3, ..., ei} in such a way that

(i) the spanning subgraph H with E(H) = {e1, e2, e3, ..., ei, ei+1} is acyclic;

(ii) w(ei+1) is as small as possible subject to (i).

Step: 3

Stop when Step 2 cannot be implemented further.

The spanning tree T produced by Kruskal's algorithm is optimal.


Example:

Consider the graph below.

Solution:

The graph contains 9 vertices and 14 edges. So the minimum spanning tree formed will have (9 − 1) = 8 edges.

Now pick the edges one by one from the sorted list of edges:
1. Pick edge 7-6: No cycle is formed, include it.

2. Pick edge 8-2: No cycle is formed, include it.

3. Pick edge 6-5: No cycle is formed, include it.

4. Pick edge 0-1: No cycle is formed, include it.

5. Pick edge 2-5: No cycle is formed, include it.

6. Pick edge 8-6: Including this edge results in a cycle, so discard it.

7. Pick edge 2-3: No cycle is formed, include it.

8. Pick edge 7-8: Including this edge results in a cycle, so discard it.

9. Pick edge 0-7: No cycle is formed, include it.

10. Pick edge 1-2: Including this edge results in a cycle, so discard it.

11. Pick edge 3-4: No cycle is formed, include it.

Since the number of edges included equals (V − 1), the algorithm stops here.
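A minimal Kruskal sketch in Python, using a union-find structure for the cycle test. The weighted edge list is an assumption: the figure with the actual weights is not reproduced here, so the weights below were simply chosen to be consistent with the inclusion/discard order listed above.

```python
def kruskal(n, weighted_edges):
    """weighted_edges: list of (weight, u, v) on vertices 0..n-1.
    Returns the MST edges as (u, v, weight)."""
    parent = list(range(n))

    def find(x):                        # root of x's component (path halving)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(weighted_edges):   # cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                    # ends in different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            if len(mst) == n - 1:       # stop after n - 1 inclusions
                break
    return mst

# Illustrative weights, assumed so that the pick order matches the steps above.
edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (4, 2, 5),
         (2, 2, 8), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
mst = kruskal(9, edges)
print(mst)                              # 8 edges
print(sum(w for _, _, w in mst))        # total weight 37
```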
REVERSE DELETING ALGORITHM

1. It is the reverse form of Kruskal's algorithm.

2. Start with all the edges of the graph.
3. Delete the largest-weight edge, provided its removal does not disconnect the graph.
4. Continue deleting the largest-weight remaining edge in the same way, skipping any edge whose removal would disconnect the graph, until only the n − 1 edges of a spanning tree remain (a code sketch of this procedure follows the example below).

Example:

Consider the graph below.

Solution:

Step: 1
Step: 2

Step: 3

If we delete the edge e = (3, 4), the graph gets disconnected, so we skip this edge and keep it.

Step: 4
Step: 5

Step: 6
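The sketch below implements the reverse-delete procedure: it considers the edges in decreasing order of weight and deletes an edge only if the graph stays connected without it (connectivity is re-checked with a simple depth-first search). The edge list is the same illustrative one used for Kruskal's algorithm above.

```python
def reverse_delete(n, weighted_edges):
    """weighted_edges: list of (weight, u, v) on vertices 0..n-1 of a connected
    graph. Returns the n-1 edges that survive the deletions."""
    def connected(edge_list):
        adjacency = {v: [] for v in range(n)}
        for _, u, v in edge_list:
            adjacency[u].append(v)
            adjacency[v].append(u)
        seen, stack = {0}, [0]
        while stack:                        # plain DFS from vertex 0
            x = stack.pop()
            for y in adjacency[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return len(seen) == n

    kept = sorted(weighted_edges, reverse=True)   # heaviest edge first
    for edge in list(kept):
        trial = [e for e in kept if e != edge]
        if connected(trial):                # removal keeps the graph connected,
            kept = trial                    # so the edge is deleted
    return kept

edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (4, 2, 5),
         (2, 2, 8), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
tree = reverse_delete(9, edges)
print(len(tree), sum(w for w, _, _ in tree))   # 8 edges, total weight 37
```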
PRIM’S ALGORITHM

In Prim's algorithm also, we create an MST by picking edges one by one. We maintain two sets: the set of vertices already included in the MST and the set of vertices not yet included. The greedy choice is to pick the smallest-weight edge that connects the two sets.

Prim's algorithm is a special case of a greedy algorithm. First we choose any edge with smallest weight. We then successively add to the tree edges of minimum weight that are incident to a vertex already in the tree and do not form a simple circuit with the edges already in the tree. We stop when n − 1 edges have been added.

Algorithm:

Step: 1

Choose any vertex v1 of G.

Step: 2

Choose an edge e1 = (v1, v2) of G such that v2 ≠ v1 and e1 has the smallest weight among the edges of G incident with v1.

Step: 3

If the edges e1, e2, ..., ei have been chosen, involving the end points v1, v2, ..., vi+1, choose an edge ei+1 = (vj, vk) with vj ∈ {v1, v2, ..., vi+1} and vk ∉ {v1, v2, ..., vi+1}, such that ei+1 has the smallest weight among the edges of G with precisely one end in {v1, v2, ..., vi+1}.

Step: 4

Stop after n − 1 edges have been chosen.
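A sketch of Prim's procedure using a priority queue of frontier edges; it makes the same greedy choice as the steps above. The weighted edge list is the same illustrative one used for Kruskal's algorithm (the actual figure is not reproduced).

```python
import heapq

def prim(n, weighted_edges, start=0):
    """weighted_edges: list of (weight, u, v) on vertices 0..n-1.
    Returns the MST edges as (u, v, weight)."""
    adjacency = {v: [] for v in range(n)}
    for w, u, v in weighted_edges:
        adjacency[u].append((w, v))
        adjacency[v].append((w, u))

    in_tree = {start}
    frontier = [(w, start, v) for w, v in adjacency[start]]
    heapq.heapify(frontier)                 # edges with one end in the tree
    mst = []
    while frontier and len(mst) < n - 1:
        w, u, v = heapq.heappop(frontier)   # smallest-weight frontier edge
        if v in in_tree:
            continue                        # both ends already in the tree
        in_tree.add(v)
        mst.append((u, v, w))
        for w2, x in adjacency[v]:
            if x not in in_tree:
                heapq.heappush(frontier, (w2, v, x))
    return mst

edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (4, 2, 5),
         (2, 2, 8), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
mst = prim(9, edges)
print(sum(w for _, _, w in mst))            # total weight 37, same as Kruskal
```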

Example:

Let us understand with the following example:

Solution:

The minimum spanning tree set is initially empty, and the keys assigned to the vertices are {0, INF, INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity. Now pick the vertex with minimum key value. The vertex 0 is picked; include it in the minimum spanning tree set, so the set becomes {0}. After including 0, update the key values of its adjacent vertices. The adjacent vertices of 0 are 1 and 7; their key values are updated to 4 and 8. The following subgraph shows the vertices and their key values; only vertices with finite key values are shown, and the vertices included in the MST are shown in ash color.
Pick the vertex with minimum key value and not already included in MST
(not in minimum spanning tree set). The vertex 1 is picked and added to minimum
spanning tree set. So minimum spanning tree set now becomes {0, 1}. Update the key
values of adjacent vertices of 1. The key value of vertex 2 becomes 8.

Pick the vertex with minimum key value that is not already included in the MST (not in the minimum spanning tree set). We can pick either vertex 7 or vertex 2; let vertex 7 be picked. So the minimum spanning tree set now becomes {0, 1, 7}. Update the key values of the adjacent vertices of 7. The key values of vertices 6 and 8 become finite (7 and 1 respectively).
Pick the vertex with minimum key value that is not already included in the MST. Vertex 6 is picked, so the minimum spanning tree set now becomes {0, 1, 7, 6}. Update the key values of the adjacent vertices of 6; the key values of vertices 5 and 8 are updated.

We repeat the above steps until the minimum spanning tree set includes all vertices of the given graph. Finally, we get the following graph.
CHAPTER 1

FIXED POINT THEOREM ON GENERALIZED METRIC SPACE ENDOWED

WITH GRAPHS

INTRODUCTION

In 1964, Perov extended the classical Banach contraction principle to contraction mappings on spaces endowed with vector-valued metrics, and numerous contributions to this topic have since followed. Let X be a non-empty set and let Rm denote the set of all m-tuples of real numbers.

If α, β ∈ Rm, α = (α1, α2, ..., αm)T, β = (β1, β2, ..., βm)T and c ∈ R, then by α ≤ β (resp., α < β) we mean αi ≤ βi (resp., αi < βi) for i ∈ {1, 2, ..., m}, and by α ≤ c we mean that αi ≤ c for i ∈ {1, 2, ..., m}. A mapping d: X × X → Rm is called a vector-valued metric on X if the following properties are satisfied:

(d1) d(x, y) ≥ 0 for all x, y ∈ X; if d(x, y) = 0, then x = y;

(d2) d(x, y) = d(y, x) for all x, y ∈ X;

(d3) d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.

A set X equipped with a vector-valued metric d is called a generalized metric space, and it is denoted by (X, d). The notions that are defined in generalized metric spaces are similar to those defined in usual metric spaces.

Throughout this chapter we denote the non-empty closed subsets of X by Cl(X), the set of all

m × m matrices with non-negative elements by Mm,m(R+), the zero m × m matrix by ō and the

identity m × m matrix by I, and note that A0 = I.

A matrix A is said to be convergent to zero if and only if An → ō as n → ∞.

THEOREM

Let A ∈ Mm,m(R+). The following statements are equivalent:

(i) A is convergent towards zero;

(ii) An → ō as n → ∞;

(iii) the eigenvalues of A are in the open unit disc, that is, |λ| < 1 for every λ ∈ C with det(A − λI) = 0;

(iv) the matrix I − A is nonsingular and

(I − A)−1 = I + A + · · · + An + · · · ;   (1.1)

(v) Anq → 0 and qAn → 0 as n → ∞, for each q ∈ Rm.

REMARK

Some examples of matrices convergent to zero are:

(a) any matrix A = [a a; b b], where a, b ∈ R+ and a + b < 1;

(b) any matrix A = [a b; a b], where a, b ∈ R+ and a + b < 1;

(c) any matrix A = [a b; 0 c], where a, b, c ∈ R+ and max{a, c} < 1.

(Here [p q; r s] denotes the 2 × 2 matrix with first row (p, q) and second row (r, s).)

For other examples and considerations on matrices which converge to zero, see [8] and [12].
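The criteria in the theorem above are easy to test numerically. The sketch below (using numpy; the particular matrix is only an illustration of type (c) from the remark) checks that the eigenvalues lie in the open unit disc, that the powers An tend to the zero matrix, and that (I − A)−1 agrees with the Neumann series in (1.1).

```python
import numpy as np

# Type (c) from the remark: a = 0.5, b = 0.7, c = 0.6, so max{a, c} < 1.
A = np.array([[0.5, 0.7],
              [0.0, 0.6]])

# (iii): every eigenvalue lies in the open unit disc.
print(np.abs(np.linalg.eigvals(A)))                 # [0.5 0.6]

# (ii): A^n -> 0 as n -> infinity.
print(np.linalg.matrix_power(A, 50).max())          # of order 1e-10

# (iv): (I - A)^(-1) equals the Neumann series I + A + A^2 + ...
series = sum(np.linalg.matrix_power(A, k) for k in range(200))
print(np.allclose(series, np.linalg.inv(np.eye(2) - A)))   # True
```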

THEOREM

Let (X, d) be a complete generalized metric space and let f: X → X be a mapping with the property that there exists a matrix A ∈ Mm,m(R+) such that d(f(x), f(y)) ≤ Ad(x, y) for all x, y ∈ X. If A is a matrix convergent towards zero, then

(1) Fix(f) = {x∗};

(2) the sequence of successive approximations {xn}, with xn = fn(x0), is convergent and has the limit x∗, for all x0 ∈ X.

On the other hand, Jachymski [4] generalized the Banach contraction principle on a complete metric space endowed with a graph. He introduced the notion of a Banach G-contraction as follows:

DEFINITION

Let (M, d) be a metric space, let △ be the diagonal of the Cartesian product M × M, and let G be a directed graph such that the set V of its vertices coincides with M and the set E of its edges contains all loops; that is, E ⊇ △. Assume that G has no parallel edges. A mapping f: M → M is called a Banach G-contraction if

(i) for all x, y ∈ M, (x, y) ∈ E ⇒ (fx, fy) ∈ E;

(ii) there exists α with 0 < α < 1 such that for all x, y ∈ M, (x, y) ∈ E ⇒ d(fx, fy) ≤ αd(x, y).

DEFINITION

A mapping f: M → M is called G-continuous if, for each sequence {xn} in M with xn → x and (xn, xn+1) ∈ E for each n ∈ N, we have fxn → fx. Several other interesting extensions of the Banach G-contraction have been studied.

MAIN RESULTS

Throughout this chapter, (X, d) is a generalized metric space and G = (V, E) denotes a directed graph such that the set V of its vertices coincides with X and the set E of its edges contains all loops; that is, E ⊇ △, where △ is the diagonal of the Cartesian product X × X.

THEOREM

Let (X, d) be a complete generalized metric space endowed with the graph G and let f: X → X be an edge-preserving mapping with A, B ∈ Mm,m(R+) such that

d(fx, fy) ≤ Ad(x, y) + Bd(y, fx)   (2.1)

for all (x, y) ∈ E. Assume that the following conditions hold:

(i) the matrix A converges towards zero;

(ii) there exists x0 ∈ X such that (x0, fx0) ∈ E;

(iii) (a) f is G-continuous; or (b) for each sequence {xn} in X such that xn → x and (xn, xn+1) ∈ E for all n ∈ N, we have (xn, x) ∈ E for all n ∈ N.

Then f has a fixed point. Moreover, if for each x, y ∈ Fix(f) we have (x, y) ∈ E and A + B converges to zero, then f has a unique fixed point.

Proof.

By hypothesis (ii), we have (x0, fx0) ∈ E. Take x1 = fx0 and x2 = fx1. From (2.1), we have

d(x1, x2) = d(fx0, fx1) ≤ Ad(x0, x1) + Bd(x1, fx0) = Ad(x0, x1).   (2.2)

As f is an edge-preserving mapping, (x1, x2) ∈ E; again from (2.1), we have

d(x2, x3) = d(fx1, fx2) ≤ Ad(x1, x2) + Bd(x2, fx1) ≤ A2 d(x0, x1)   (by using (2.2)).

Continuing in the same way, we get a sequence {xn} ⊆ X such that xn = fxn−1, (xn−1, xn) ∈ E and d(xn, xn+1) ≤ An d(x0, x1) for all n ∈ N.

Now, for each n, m ∈ N, by using the triangle inequality we get

d(xn, xn+m) ≤ ∑_{i=n}^{n+m−1} d(xi, xi+1) ≤ ∑_{i=n}^{n+m−1} Ai d(x0, x1) ≤ An (∑_{i=0}^{∞} Ai) d(x0, x1) = An (I − A)−1 d(x0, x1).

Letting n → ∞ in the above inequality we get d(xn, xn+m) → 0, since A converges towards zero. Thus the sequence {xn} is a Cauchy sequence. As X is complete, there exists x∗ ∈ X such that xn → x∗. If hypothesis (iii.a) holds, then fxn → fx∗, that is, xn+1 → fx∗; thus fx∗ = x∗.

If hypothesis (iii.b) holds, then we have (xn, x∗) ∈ E for all n ∈ N. From (2.1), we have

d(xn+1, fx∗) = d(fxn, fx∗) ≤ Ad(xn, x∗) + Bd(x∗, fxn) = Ad(xn, x∗) + Bd(x∗, xn+1).

Letting n → ∞ in the above inequality, we get d(x∗, fx∗) = 0. This shows that x∗ = fx∗.

Further, assume that x, y ∈ Fix(f) and (x, y) ∈ E. Then by (2.1) we have

d(x, y) ≤ Ad(x, y) + Bd(x, y), that is,

(I − (A + B)) d(x, y) ≤ 0.

Since the matrix I − (A + B) is nonsingular, d(x, y) = 0. Thus we have Fix(f) = {x∗}.
REMARK

If we assume that E = X × X and B = ō, then the above theorem reduces to Theorem 1.2.2.

EXAMPLE
Let X = R2 be endowed with the generalized metric defined by d(x, y) = (|x1 − y1|, |x2 − y2|) for each x = (x1, x2), y = (y1, y2) ∈ R2. Define the operator f: R2 → R2 by

f(x, y) = (2x/3 − y/3 + 1, y/3 + 1) for (x, y) ∈ X with x ≤ 3,

f(x, y) = (2x/3 − y/3 + 1, −5x/3 + y/3 + 1) for (x, y) ∈ X with x > 3.

That is, f(x, y) = (f1(x, y), f2(x, y)), where

f1(x, y) = 2x/3 − y/3 + 1, and

f2(x, y) = y/3 + 1 if x ≤ 3,  f2(x, y) = −5x/3 + y/3 + 1 if x > 3.

Then it is easy to see that

|f1(x1, x2) − f1(y1, y2)| ≤ (2/3)|x1 − y1| + (1/3)|x2 − y2|, and

|f2(x1, x2) − f2(y1, y2)| ≤ (1/3)|x2 − y2| if x1, y1 ≤ 3,
|f2(x1, x2) − f2(y1, y2)| ≤ (5/3)|x1 − y1| + (1/3)|x2 − y2| otherwise,

for each (x1, x2), (y1, y2) ∈ X. Define the graph G = (V, E) such that V = R2 and E = {((x1, x2), (y1, y2)): x1, x2, y1, y2 ∈ [0, 3]} ∪ {(z, z): z ∈ R2}. Now for each (x, y) ∈ E, we have

d(fx, fy) = (|f1(x1, x2) − f1(y1, y2)|, |f2(x1, x2) − f2(y1, y2)|) ≤ [2/3 1/3; 0 1/3] (|x1 − y1|, |x2 − y2|) = Ad(x, y).

Moreover, it is easy to see that all the other conditions of Theorem 1.3.1 hold. Thus, f has a fixed
point, that is x = fx = (f1x, f2x), where x = (1.5, 1.5).
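A quick numerical check of this example, using the operator f as reconstructed above (so the formulas below inherit that reading): iterating f from a starting point in [0, 3] × [0, 3], the successive approximations converge to the fixed point (1.5, 1.5), as the theorem predicts.

```python
def f(x, y):
    # The operator of the example as reconstructed above.
    f1 = 2 * x / 3 - y / 3 + 1
    f2 = y / 3 + 1 if x <= 3 else -5 * x / 3 + y / 3 + 1
    return f1, f2

point = (0.0, 0.0)            # a starting point x0 with (x0, f x0) in E
for _ in range(60):
    point = f(*point)         # successive approximations x_{n+1} = f(x_n)
print(point)                  # approximately (1.5, 1.5)
```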
THEOREM

Let X be a non-empty set endowed with the graph G and two generalized metrics d and ρ. Let f: (X, ρ) → (X, ρ) be an edge-preserving mapping with A, B ∈ Mm,m(R+) such that

ρ(fx, fy) ≤ Aρ(x, y) + Bρ(y, fx) for all (x, y) ∈ E.   (2.3)

Assume that the following conditions hold:

(i) The matrix A converges towards zero;

(ii) There exists x0 ∈ X such that (x0, fx0) ∈ E;

(iii) f: (X, d) → (X, d) is a G-contraction;

(iv) There exists C ∈ Mm,m(R+) such that d(fx, fy) ≤ C ρ (x, y), whenever, there exists a

path between x and y;

(v) (X, d) is a complete generalized metric space.

Then f has a fixed point. Moreover, if for each x, y ∈ Fix(f) we have (x, y) ∈ E and A + B converges to zero, then f has a unique fixed point.

Proof.

By hypothesis (ii), we have (x0, fx0) ∈ E. Take x1 = fx0. From (2.3), we have,

ρ(x1, x2) = ρ(fx0, fx1) ≤ Aρ(x0, x1) + Bρ(x1, fx0)

= Aρ(x0, x1).

As f is edge preserving, (x1, x2) ∈ E. Again from (2.3), we have

ρ(x2, x3) = ρ(fx1, fx2) ≤ Aρ(x1, x2) + Bρ(x2, fx1) ≤ A2 ρ(x0, x1).

Continuing in the same way we get a sequence {xn} in X such that xn = fxn−1, (xn−1, xn) ∈ E, and ρ(xn, xn+1) ≤ An ρ(x0, x1) for all n ∈ N.

Now we will show that {xn} is a Cauchy sequence in (X, ρ). By using the triangle inequality, we have

ρ(xn, xn+m) ≤ ∑_{i=n}^{n+m−1} ρ(xi, xi+1) ≤ ∑_{i=n}^{n+m−1} Ai ρ(x0, x1) ≤ An (∑_{i=0}^{∞} Ai) ρ(x0, x1) = An (I − A)−1 ρ(x0, x1).

Since A converges towards zero, {xn} is a Cauchy sequence in (X, ρ). By the construction of the sequence, for each n, m ∈ N there is a path between xn and xn+m. Now, by using hypothesis (iv), we have

d(xn+1, xn+m+1) = d(fxn , fxn+m)

≤ Cρ(xn, xn+m)

≤ C[An (I − A) −1ρ(x0, x1)].

This shows that {xn} is also a Cauchy sequence in (X, d). As (X, d) is complete, there exists x∗ ∈ X such that xn → x∗. By hypothesis (iii) we get limn→∞ d(fxn, fx∗) = 0. As xn+1 = fxn for each n ∈ N, it follows that x∗ is a fixed point of f. Further, assume that x, y ∈ Fix(f) and (x, y) ∈ E. Then by (2.3) we have

ρ(x, y) ≤ Aρ(x, y) + Bρ(y, x), that is,

(I − (A + B)) ρ(x, y) ≤ 0.

Since the matrix I − (A + B) is non-singular, ρ(x, y) = 0. Thus we have Fix(f) = {x∗}.

EXAMPLE

Let X = (0, ∞) be endowed with the generalized metrics ρ and d defined by

ρ(x, y) = (|x − y|, |x − y|), and

d(x, y) = (|x − y| + 1, |x − y| + 1) if x or y (or both) lies in (0, 1),
d(x, y) = (0, 0) if x = y ∈ (0, 1),
d(x, y) = (|x − y|, |x − y|) otherwise,

for each x, y ∈ X. Define the operator

f: X → X,  fx = (x + 12)/4.

Define the graph G = (V, E) such that V = X and E = {(x, y): x, y ≥ 1} ∪ {(z, z): z ∈ X}. It is easy

to see that f satisfies (2.3) with


A = [1/4 0; 0 1/4] and B = [0 0; 0 0],

and all the other conditions of Theorem 1.3.4 hold. Thus, f has a fixed point.

THEOREM

Let (X, d) be a complete generalized metric space endowed with the graph G and let F: X → Cl(X)

be a multi-valued mapping with A, B ∈ Mm,m(R+), such that for each (x, y) ∈ E and u ∈ F x,

there exists v ∈ F y satisfying

d(u, v) ≤ Ad(x, y) + Bd(y, u). (2.4)

Assume that the following conditions hold:

(i) The matrix A converges towards zero;

(ii) There exist x0 ∈ X and x1 ∈ Fx0 such that (x0, x1) ∈ E;

(iii) For each u ∈ Fx and v ∈ Fy with d(u, v) ≤ Ad(x, y) we have (u, v) ∈ E whenever (x, y) ∈ E;

(iv) For each sequence {xn} in X such that xn → x and (xn, xn+1) ∈ E for all n ∈ N, we have (xn, x) ∈ E for all n ∈ N.

Then F has a fixed point.

Proof.

By hypothesis (ii), we have x0 ∈ X and x1 ∈ Fx0 with (x0, x1) ∈ E. From (2.4), for (x0, x1) ∈ E,

we have x2 ∈ F x1 such that

d(x1, x2) ≤ Ad(x0, x1) + Bd(x1, x1)

= Ad(x0, x1). (2.5)

By hypothesis (iii) and (2.5), we have (x1, x2) ∈ E. Again from (2.4), for (x1, x2) ∈ E and x2 ∈

Fx1, we have x3 ∈ Fx2 such that

d(x2, x3) ≤ Ad(x1, x2) + Bd(x2, x2)


≤ A2d(x0, x1), (by using (2.5)).

Continuing in the same way, we get a sequence {xn} in X such that xn ∈ Fxn−1, (xn−1, xn) ∈ E and

d(xn, xn+1) ≤ An d(x0, x1) for all n ∈ N.

Now, for each n, m ∈ N, by using the triangle inequality we get

d(xn, xn+m) ≤ ∑_{i=n}^{n+m−1} d(xi, xi+1) ≤ ∑_{i=n}^{n+m−1} Ai d(x0, x1) ≤ An (∑_{i=0}^{∞} Ai) d(x0, x1) = An (I − A)−1 d(x0, x1).

Since the matrix A converges towards ō, the sequence {xn} is a Cauchy sequence in X. As X is complete, there exists x∗ ∈ X such that xn → x∗. By hypothesis (iv) we have (xn, x∗) ∈ E for each n ∈ N. From (2.4), for (xn, x∗) ∈ E and xn+1 ∈ Fxn, we have w∗ ∈ Fx∗ such that

d(xn+1, w∗) ≤ Ad(xn, x∗) + Bd(x∗, xn+1).

Letting n → ∞ in the above inequality, we get d(x∗, w∗) = 0, that is, x∗ = w∗. Thus x∗ ∈ Fx∗.

EXAMPLE
Let X = R2 be endowed with a generalized metric defined by d(x, y) = (|x1 − y1|, |x2 − y2|) for each x = (x1, x2), y = (y1, y2) ∈ R2. Define the operator
F: R2 → Cl(R2),  F(x1, x2) = {(0, 0), (x1/3, x2/3)} for x1, x2 ≥ 0,  and  F(x1, x2) = {(0, 0), (x1 + 1, x2 + 1)} otherwise.

Define the graph G = (V, E) such that V = R2 and E = {((x1, x2), (y1, y2)): x1, x2, y1, y2 ≥ 0} ∪

{(z, z): z ∈ R2}. It is easy to see that F satisfies (2.4) with

A = [1/3 0; 0 1/3] and B = [0 0; 0 0],

and all the other conditions of Theorem 1.3.6 hold. Thus, F has a fixed point.

THEOREM.

Let X be a non-empty set endowed with the graph G and two generalized metrics d, ρ. Let F:

X → Cl(X) be a multi-valued mapping with A, B ∈ Mm,m (R+), such that for each (x, y) ∈ E and u

∈ Fx there exists v ∈ F y satisfying

ρ(u, v) ≤ Aρ(x, y) + Bρ(y, u). (2.6)

Assume that the following conditions hold:

(i) The matrix A converges towards zero;

(ii) There exist x0 ∈ X and x1 ∈ Fx0 such that (x0, x1) ∈ E;

(iii) For each u ∈ Fx and v ∈ Fy with ρ(u, v) ≤ Aρ(x, y), we have (u, v) ∈ E whenever (x, y) ∈ E;

(iv) (X, d) is a complete generalized metric space;

(v) There exists C ∈ Mm,m(R+) such that d(x, y) ≤ C ρ(x, y), whenever, there exists a path

between x and y;

(vi) For each sequence {xn} in X such that xn → x and (xn, xn+1) ∈ E for each n ∈ N, we have (xn, x) ∈ E for all n ∈ N.

Then F has a fixed point.

Proof.

By hypothesis (ii), we have x0 ∈ X and x1 ∈ Fx0 such that (x0, x1) ∈ E. From (2.6), for (x0, x1) ∈

E and x1∈ Fx0, we have x2∈ F x1 such that

ρ(x1, x2) ≤ Aρ(x0, x1) + Bρ(x1, x1)

= Aρ(x0, x1).

By hypothesis (iii) and above inequality, we have (x1, x2) ∈ E. Again from (2.6) for (x1, x2) ∈ E,

and x2 ∈ Fx1, we have x3 ∈ Fx2 such that

ρ(x2, x3) ≤ Aρ(x1, x2) + Bρ(x2, x2)


≤ A2ρ(x0, x1).

Continuing in the same way, we get a sequence {xn} in X such that xn ∈ Fxn−1, (xn−1, xn) ∈ E and ρ(xn, xn+1) ≤ An ρ(x0, x1) for each n ∈ N.

Now, we will show that {xn} is a Cauchy sequence in (X, ρ). Let n, m ∈ N; then by using the triangle inequality we get

ρ(xn, xn+m) ≤ ∑_{i=n}^{n+m−1} ρ(xi, xi+1) ≤ ∑_{i=n}^{n+m−1} Ai ρ(x0, x1)   (2.7)

≤ An (∑_{i=0}^{∞} Ai) ρ(x0, x1) = An (I − A)−1 ρ(x0, x1).

Since the matrix A converges towards zero, {xn} is a Cauchy sequence in (X, ρ). Clearly, for each m, n ∈ N there exists a path between xn and xn+m. By using hypothesis (v) we get

d(xn, xn+m) ≤ Cρ(xn, xn+m) ≤ C[An (I − A)−1 ρ(x0, x1)]   (by using (2.7)).

Thus {xn} is also a Cauchy sequence in (X, d). As (X, d) is complete, there exists x∗ ∈ X such that xn → x∗. By hypothesis (vi) we have (xn, x∗) ∈ E for each n ∈ N. From (2.6), for (xn, x∗) ∈ E and xn+1 ∈ Fxn, we have w∗ ∈ Fx∗ such that

ρ(xn+1, w∗) ≤ Aρ(xn, x∗) + Bρ(x∗, xn+1).

Letting n → ∞ in the above inequality we get ρ(x∗, w∗) = 0. This implies that x∗ ∈ Fx∗.

EXAMPLE

Let X = (0, ∞) be endowed with the generalized metrics ρ and d defined by

ρ(x, y) = (|x − y|, |x − y|), and

d(x, y) = (|x − y| + 1, |x − y| + 1) if x or y (or both) lies in (0, 1),
d(x, y) = (0, 0) if x = y ∈ (0, 1),
d(x, y) = (|x − y|, |x − y|) otherwise,

for each x, y ∈ X. Define the operator


F: X → Cl(X),  F(x) = {(x + 5)/4, (x + 4)/3} for x ≥ 1,  and  F(x) = {1/n : n ∈ N, n ≤ ⌈1/x⌉} otherwise.

Define the graph G = (V, E) such that V = X and E = {(x, y): x, y ≥ 1} ∪ {(z, z): z ∈ X}. It is easy

to see that F satisfies (2.6) with

A = [1/3 0; 0 1/3] and B = [0 0; 0 0],

and all the other conditions of Theorem 1.3.8 hold. Thus, F has a fixed point.
