Jeffrey D. Ullman
Stanford University
 Web pages are important if people visit them a
lot.
 But we can’t watch everybody using the Web.
 A good surrogate for visiting pages is to assume
people follow links randomly.
 Leads to random surfer model:
 Start at a random page and follow random out-links
repeatedly, from whatever page you are at.
 PageRank = limiting probability of being at a page.

2
 Solve the recursive equations: “importance of a
page = its share of the importance of each of its
predecessor pages.”
 Equivalent to the random-surfer definition of
PageRank.
 Technically, importance = the principal
eigenvector of the transition matrix of the Web.
 A few fixups needed.

3
 Number the pages 1, 2,… .
 Page i corresponds to row and column i.
 M[i, j] = 1/n if page j links to n pages,
including page i; 0 if j does not link to i.
 M [i, j] is the probability a surfer will next be at
page i if it is now at page j.
 Or it is the share of j’s importance that i receives.

4
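A minimal sketch (not from the slides) of building such a matrix in Python, using the three-page Web graph that appears in the example a few slides below; the `links` dictionary encodes its out-links:

```python
# Build the transition matrix M from out-link lists.
# links[j] = the pages that page j links to.
links = {
    "y": ["y", "a"],   # Yahoo links to itself and Amazon
    "a": ["y", "m"],   # Amazon links to Yahoo and M'soft
    "m": ["a"],        # M'soft links to Amazon
}
pages = ["y", "a", "m"]
index = {p: i for i, p in enumerate(pages)}

# M[i][j] = 1/n if page j has n out-links, one of which is page i.
M = [[0.0] * len(pages) for _ in pages]
for j, src in enumerate(pages):
    for dst in links[src]:
        M[index[dst]][j] = 1.0 / len(links[src])

for row in M:
    print(row)   # each column sums to 1 (a stochastic matrix)
```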
Suppose page j links to 3 pages, including i but not x. Then column j of M has 1/3 in the row for each of j's three successors (including row i) and 0 in row x.

M is called a stochastic matrix: "all columns sum to 1."
5
 Suppose v is a vector whose ith component is
the probability that a random surfer is at page
i at a certain time.
 If each surfer then chooses a successor page
at random, the probability distribution for
surfers is given by the vector Mv.

6
 Starting from any vector u, the limit
M(M(…M(Mu)…)) is the long-term
distribution of the surfers.
 The math: limiting distribution = principal
eigenvector of M = PageRank.
 Note: If v is the limit of MM…Mu, then v satisfies
the equation v = Mv, so v is an eigenvector of M
with eigenvalue 1.

7
The Web graph: Yahoo links to itself and to Amazon; Amazon links to Yahoo and to M'soft; M'soft links only to Amazon. Its transition matrix:

         y    a    m
    y   1/2  1/2   0
M = a   1/2   0    1
    m    0   1/2   0
8
 Because there are no constant terms, the
equations v = Mv do not have a unique
solution.
 Example: doubling each component of solution v
yields another solution.
 In Web-sized examples, we cannot solve by
Gaussian elimination anyway; we need to use
relaxation (= iterative solution).

9
 Start with the vector u = [1, 1,…, 1]
representing the idea that each Web page is
given one unit of importance.
 Note: it is more common to start with each vector
element = 1/N, where N is the number of Web
pages, and to keep the sum of the elements at 1.
 Question for thought: Why such small values?
 Repeatedly apply the matrix M to u, allowing
the importance to flow like a random walk.
 About 50 iterations is sufficient to estimate
the limiting solution.
10
 Equations v = Mv (note: "=" is really "assignment"):
    y = y/2 + a/2
    a = y/2 + m
    m = a/2

Iterating from v = (1, 1, 1):

    y     1    1     5/4    9/8          6/5
    a  =  1   3/2     1    11/8   ...    6/5
    m     1   1/2    3/4    1/2          3/5
11
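A small Python sketch of this relaxation (plain lists, no matrix library); after enough iterations it reproduces the limit (6/5, 6/5, 3/5) shown above:

```python
def power_iterate(M, v, steps=50):
    """Repeatedly replace v by Mv."""
    n = len(v)
    for _ in range(steps):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

M = [[0.5, 0.5, 0.0],   # y = y/2 + a/2
     [0.5, 0.0, 1.0],   # a = y/2 + m
     [0.0, 0.5, 0.0]]   # m = a/2
print(power_iterate(M, [1.0, 1.0, 1.0]))  # ~[1.2, 1.2, 0.6]
```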
[Slides 12-16: the Yahoo / Amazon / M'soft graph, animated to show the importance values converging over successive iterations.]
 Some pages are dead ends (have no links out).
 Such a page causes importance to leak out, or
surfers to disappear.
 Other groups of pages are spider traps (all out-
links are within the group).
 Eventually spider traps absorb all importance; all
surfers get stuck in the trap.

18
The same graph, except that M'soft now has no out-links (a dead end):

         y    a    m
    y   1/2  1/2   0
M = a   1/2   0    0
    m    0   1/2   0

Called a substochastic matrix: "all columns sum to at most 1."
19
 Equations v = Mv:
    y = y/2 + a/2
    a = y/2
    m = a/2

    y     1    1    3/4   5/8         0
    a  =  1   1/2   1/2   3/8   ...   0
    m     1   1/2   1/4   1/4         0
20
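Running the same iteration sketch on this substochastic matrix (an illustrative check, not from the slides) makes the leak visible:

```python
# Dead-end example: column m of M is all zeros, so the total
# importance shrinks at every step.
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0]]
v = [1.0, 1.0, 1.0]
for _ in range(50):
    v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
print(v)  # all three components are near 0
```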
[Slides 21-25: the graph again, showing importance draining away through the dead end at M'soft.]
The same graph, except that M'soft now links only to itself (a spider trap):

         y    a    m
    y   1/2  1/2   0
M = a   1/2   0    0
    m    0   1/2   1
26
 Equations v = Mv:
    y = y/2 + a/2
    a = y/2
    m = a/2 + m

    y     1    1    3/4   5/8         0
    a  =  1   1/2   1/2   3/8   ...   0
    m     1   3/2   7/4    2          3
27
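The analogous check for the spider trap (again just an illustrative sketch):

```python
# Spider-trap example: M'soft links only to itself, so column m
# keeps all of m's importance at m; y and a drain into the trap.
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 1.0]]
v = [1.0, 1.0, 1.0]
for _ in range(50):
    v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
print(v)  # approaches [0, 0, 3]: the trap absorbs everything
```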
[Slides 28-31: the graph again, showing all importance accumulating at the spider trap M'soft.]
 “Tax” each page a fixed percentage at each
iteration.
 Add a fixed constant to all pages.
 Optional but useful: add exactly enough to balance
the loss (tax + PageRank of dead ends).
 Models a random walk with a fixed probability
of leaving the system, and a fixed number of
new surfers injected into the system at each
step.
 Divided equally among all pages.
32
 Equations v = 0.8(Mv) + 0.2:
    y = 0.8(y/2 + a/2) + 0.2
    a = 0.8(y/2) + 0.2
    m = 0.8(a/2 + m) + 0.2
 Note: the amount injected is chosen to balance the tax. If we started with 1/3 for each page rather than 1, the 0.2 would be replaced by 0.0667.

    y     1   1.00   0.84   0.776          7/11
    a  =  1   0.60   0.60   0.536   ...    5/11
    m     1   1.40   1.56   1.688         21/11
33
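A sketch of the taxed iteration, matching the 0.8/0.2 split in the equations above; it reproduces the (7/11, 5/11, 21/11) limit:

```python
# Taxed PageRank on the spider-trap graph: v = 0.8(Mv) + 0.2.
beta, inject = 0.8, 0.2
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 1.0]]
v = [1.0, 1.0, 1.0]
for _ in range(50):
    v = [beta * sum(M[i][j] * v[j] for j in range(3)) + inject
         for i in range(3)]
print(v)  # ~[0.636, 0.455, 1.909] = (7/11, 5/11, 21/11)
```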
 Goal: Evaluate Web pages not just by popularity,
but also by relevance to a particular topic, e.g.
“sports” or “history.”
 Allows search queries to be answered based on
interests of the user.
 Example: Search query [jaguar] wants different
pages depending on whether you are interested
in automobiles, nature, or sports.
 Might discover a user's interests from browsing
history, bookmarks, etc.
35
 Assume each surfer has a small probability of
“teleporting” at any tick.
 Teleport can go to:
1. Any page with equal probability.
 As in the “taxation” scheme.
2. A set of “relevant” pages (teleport set).
 For topic-specific PageRank.
 Note: can also inject surfers to compensate for
surfers lost at dead ends.
 Or imagine a surfer always teleports from a dead
end.
36
 Only Microsoft is in the teleport set.
 Assume 20% “tax.”
 I.e., probability of a teleport is 20%.

37
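A sketch of this computation, assuming the original three-page matrix and that all teleporting surfers re-enter at M'soft (the injected total of 0.6 keeps the sum of the components at 3, as before); the limiting values in the comment are what this sketch computes, not numbers taken from the slides:

```python
# Topic-specific PageRank: the 20% teleport goes only to M'soft.
beta = 0.8
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 1.0],
     [0.0, 0.5, 0.0]]
s = [0.0, 0.0, 0.6]   # teleport vector: all injected weight at m
v = [1.0, 1.0, 1.0]
for _ in range(50):
    v = [beta * sum(M[i][j] * v[j] for j in range(3)) + s[i]
         for i in range(3)]
print(v)  # ~[0.774, 1.161, 1.065]: pages near the teleport set gain
```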
[Slides 38-44: the graph with M'soft drawn as Dr. Who's phone booth (the teleport destination), showing the topic-specific PageRank converging.]
1. One option is to choose the pages belonging to
the topic in Open Directory.
2. Another option is to "learn," from a training
set (which could be Open Directory), the
typical words in pages belonging to the topic;
use pages heavy in those words as the teleport
set (see the sketch below).

45
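A rough, entirely hypothetical sketch of option 2: score pages by their density of topic words and keep the high scorers as the teleport set (the word list and page texts here are invented):

```python
# Pick a teleport set by topic-word density (hypothetical data).
topic_words = {"jaguar", "engine", "sedan", "horsepower"}

pages = {
    "p1": "the jaguar sedan has a quiet engine and ample horsepower",
    "p2": "the jaguar is a big cat of the americas",
    "p3": "history of the roman empire",
}

def topic_score(text):
    words = text.split()
    return sum(w in topic_words for w in words) / len(words)

teleport_set = {p for p, text in pages.items() if topic_score(text) > 0.2}
print(teleport_set)  # {'p1'}: only the automobile page qualifies
```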
 Spam farmers create networks of millions of
pages designed to focus PageRank on a few
undeserving pages.
 We’ll discuss this technology shortly.
 To minimize their influence, use a teleport set
consisting of trusted pages only.
 Example: home pages of universities.

46
 Mutually recursive definition:
 A hub links to many authorities;
 An authority is linked to by many hubs.
 Authorities turn out to be places where
information can be found.
 Example: course home pages.
 Hubs tell where the authorities are.
 Example: departmental course-listing page.

48
 HITS uses a matrix A[i, j] = 1 if page i links to
page j, 0 if not.
 Aᵀ, the transpose of A, is similar to the PageRank
matrix M, but Aᵀ has 1's where M has fractions.
 Also, HITS uses column vectors h and a
representing the degrees to which each page is
a hub or authority, respectively.
 Computation of h and a is similar to the
iterative way we compute PageRank.

49
The Web graph for HITS: Yahoo links to all three pages; Amazon links to Yahoo and M'soft; M'soft links to Amazon. Its link matrix:

        y  a  m
    y   1  1  1
A = a   1  0  1
    m   0  1  0

50
 Powers of A and Aᵀ have elements whose
values grow exponentially with the exponent,
so we need scale factors λ and μ.
 Let h and a be column vectors measuring the
“hubbiness” and authority of each page.
 Equations: h = λAa; a = μAᵀh.
 Hubbiness = scaled sum of authorities of successor
pages (out-links).
 Authority = scaled sum of hubbiness of
predecessor pages (in-links).
51
 From h = λAa and a = μAᵀh we can derive:
 h = λμAAᵀh
 a = λμAᵀA a
 Compute h and a by iteration, assuming
initially each page has one unit of hubbiness
and one unit of authority.
 Technically, these equations let you solve for
λμ as well as h and a.
 In practice, you don’t fix λμ, but rather scale
the result at each iteration.
 Example: scale to keep largest value at 1.
52
 Remember: it is only the direction of the
vectors, i.e., the relative hubbiness and authority
of Web pages, that matters.
 As for PageRank, the only reason to worry
about scale is so you don’t get overflows or
underflows in the values as you iterate.

53
a = λμAᵀA a;  h = λμAAᵀh

        1 1 1         1 1 0          3 2 1          2 1 2
    A = 1 0 1    Aᵀ = 1 0 1    AAᵀ = 2 2 0    AᵀA = 1 2 1
        0 1 0         1 1 0          1 0 1          2 1 2

    a(yahoo)   =  1   5   24   114   ...   1+√3
    a(amazon)  =  1   4   18    84   ...    2
    a(m'soft)  =  1   5   24   114   ...   1+√3

    h(yahoo)     =  1   6   28   132   ...   1.000
    h(amazon)    =  1   4   20    96   ...   0.735
    h(microsoft) =  1   2    8    36   ...   0.268
54
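A Python sketch of this alternation, scaling so the largest component is 1 as suggested two slides back; it reproduces the limiting h values above:

```python
# HITS: alternate a = scaled A^T h and h = scaled A a.
A = [[1, 1, 1],   # Yahoo links to y, a, m
     [1, 0, 1],   # Amazon links to y and m
     [0, 1, 0]]   # M'soft links to a

def scale(v):
    big = max(v)
    return [x / big for x in v]

h = [1.0, 1.0, 1.0]
for _ in range(50):
    a = scale([sum(A[j][i] * h[j] for j in range(3)) for i in range(3)])
    h = scale([sum(A[i][j] * a[j] for j in range(3)) for i in range(3)])
print("a =", a)  # ~[1.000, 0.732, 1.000], proportional to (1+√3, 2, 1+√3)
print("h =", h)  # ~[1.000, 0.735, 0.268]
```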
 Start with h = [1, 1,…, 1]; multiply by Aᵀ to get
the first a; scale so the largest component is 1;
then multiply by A to get the next h, and repeat
until approximate convergence.
 You may be tempted to compute AAᵀ and AᵀA
first, and then iterate multiplication by these
matrices, as for PageRank.
 Question for thought: Why is computing h and a
separately, by iterating with AAᵀ and AᵀA, actually
less efficient than the method suggested above?
55
