Centrality - Complex Networks - Lecture Slides


Lecture 7

Centrality (cont)

Eigenvalues and eigenvectors have their origins in physics, in particular in problems where motion is involved, although their uses extend from solutions to stress and strain problems to differential equations and quantum mechanics.

Eigenvectors are vectors that point in directions the matrix does not rotate; the matrix only stretches or shrinks them. The corresponding eigenvalue is the factor by which the eigenvector's length changes from its original length.

 The basic equation in eigenvalue problems is:

Ax = λ x

Eigenvalues and eigenvectors

Slides from Fred K. Duennebier

Ax = λ x

The vector x is called an eigenvector, and the scalar λ is called an eigenvalue.

Do all matrices have real eigenvalues?

No, they must be square, and the determinant of A − λI must equal zero. This is easy to show: rearranging Ax = λx gives

$$Ax - \lambda x = 0, \qquad (A - \lambda I)x = 0 \tag{E.02}$$

which can only be true for a nonzero x if

$$\det(A - \lambda I) = |A - \lambda I| = 0 \tag{E.03}$$

Are eigenvectors unique?

No. If x is an eigenvector, then βx is also an eigenvector, with the same eigenvalue λ:

$$A(\beta x) = \beta Ax = \beta\lambda x = \lambda(\beta x) \tag{E.04}$$

How do you calculate eigenvectors and eigenvalues? Expand equation (E.03), det(A − λI) = |A − λI| = 0, for a 2×2 matrix:

$$A - \lambda I = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{bmatrix}$$

$$\det[A - \lambda I] = \begin{vmatrix} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{vmatrix} = (a_{11}-\lambda)(a_{22}-\lambda) - a_{12}a_{21} = 0$$

$$0 = a_{11}a_{22} - a_{12}a_{21} - \lambda(a_{11}+a_{22}) + \lambda^2$$

For a 2-dimensional problem such as this, the equation above is a simple quadratic equation with two solutions for λ. In fact, there is generally one eigenvalue for each dimension, but some may be zero, and some complex.

$$0 = a_{11}a_{22} - a_{12}a_{21} - (a_{11}+a_{22})\lambda + \lambda^2 \tag{E.05}$$

For example, with $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$:

$$0 = 1\cdot 4 - 2\cdot 2 - (1+4)\lambda + \lambda^2, \qquad (1+4)\lambda = \lambda^2$$

We see that one solution to this equation is λ = 0, and dividing both sides of the above equation by λ yields λ = 5. Thus we have our two eigenvalues, and the eigenvectors for the first eigenvalue, λ = 0, follow from Ax = λx, i.e. (A − λI)x = 0:

$$\left(\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\right)\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + 2y \\ 2x + 4y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

These equations are multiples of x = −2y, so the smallest whole-number values that fit are x = 2, y = −1.
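As a quick numerical cross-check (a sketch of mine, not part of the original slides), NumPy can produce the characteristic-polynomial coefficients and their roots directly:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A):
# lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21)
coeffs = np.poly(A)       # [1., -5., 0.] for this A
print(np.roots(coeffs))   # the eigenvalues: [5., 0.]
```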

For the other eigenvalue, λ = 5:

$$\left(\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} - \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix}\right)\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -4 & 2 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -4x + 2y \\ 2x - y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

−4x + 2y = 0 and 2x − y = 0, so x = 1, y = 2.

This example is rather special: $A^{-1}$ does not exist, and the two rows of A − λI are dependent, so one of the eigenvalues is zero. (Zero is a legitimate eigenvalue!)
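To verify the whole example, here is a minimal NumPy sketch (not from the slides) that reproduces the eigenvalues and eigenvectors numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# np.linalg.eig returns eigenvalues and unit-length eigenvectors
# (the columns of V); the ordering is not guaranteed
eigvals, V = np.linalg.eig(A)
print(eigvals)   # approximately [0., 5.]
print(V)         # columns proportional to [2, -1] and [1, 2]

# check the defining relation A x = lambda x for each pair
for lam, x in zip(eigvals, V.T):
    assert np.allclose(A @ x, lam * x)
```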

EXAMPLE: A more common case is A = [1.05 .05; .05 1], used in the strain exercise. Find the eigenvectors and eigenvalues for this A, and then calculate [V,D] = eig(A) in MATLAB.

The procedure is:

  1. Compute the determinant of A − λI.

  2. Find the roots of the polynomial given by |A − λI| = 0.

  3. Solve the system of equations (A − λI)x = 0.
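In NumPy (rather than MATLAB), the equivalent of [V,D] = eig(A) for the strain-exercise matrix would be a sketch like:

```python
import numpy as np

A = np.array([[1.05, 0.05],
              [0.05, 1.00]])

# NumPy's analogue of MATLAB's [V, D] = eig(A):
# d holds the eigenvalues; the columns of V are the eigenvectors
d, V = np.linalg.eig(A)
D = np.diag(d)

# A V = V D is the defining property of the eigendecomposition
assert np.allclose(A @ V, V @ D)
print(d)   # the two eigenvalues of the strain matrix
```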

For now, I’ll just tell you that there are two eigenvectors for an example matrix A = [.8 .3; .2 .7]:

$$x_1 = \begin{bmatrix} .6 \\ .4 \end{bmatrix} \quad\text{and}\quad Ax_1 = \begin{bmatrix} .8 & .3 \\ .2 & .7 \end{bmatrix}\begin{bmatrix} .6 \\ .4 \end{bmatrix} = \begin{bmatrix} .6 \\ .4 \end{bmatrix} = x_1 \qquad (\lambda_1 = 1)$$

$$x_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad\text{and}\quad Ax_2 = \begin{bmatrix} .8 & .3 \\ .2 & .7 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} .5 \\ -.5 \end{bmatrix} = 0.5\,x_2 \qquad (\lambda_2 = 0.5)$$

The eigenvectors are x1 = [.6; .4] and x2 = [1; −1], and the eigenvalues are λ1 = 1 and λ2 = 0.5.

Note that if we multiply x1 by A, we get x1. If we multiply x1 by A again, we STILL get x1. Thus x1 doesn't change as we multiply it by A^n.

What about x2? When we multiply A by x2, we get x2/2, and if we multiply x2 by A^2, we get x2/4. This component shrinks very quickly.

Note that when A is squared the eigenvectors stay the same, but the eigenvalues are squared!

Back to our original problem: for A^100 the eigenvectors will be the same, and the eigenvalues are λ1 = 1 and λ2 = (0.5)^100, which is effectively zero.

Each eigenvector is multiplied by its eigenvalue whenever A is applied, so after many applications of A only the component along the eigenvector with the largest eigenvalue survives.
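The decay of the smaller eigen-component is easy to see numerically; here is a short sketch of mine using the example matrix from above:

```python
import numpy as np

A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

v = np.array([1.0, 0.0])   # an arbitrary starting vector
for _ in range(100):       # form A^100 v one multiplication at a time
    v = A @ v

# The lambda = 0.5 component has shrunk by (0.5)^100, so only the
# lambda = 1 eigenvector direction [.6, .4] survives (up to scale).
print(v)                   # approximately [0.6, 0.4]
```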

Eigenvector Centrality

 An extension of degree centrality
  • centrality increases with the number of neighbors

 Not all neighbors are equal
  • having connections to more central nodes increases importance

 Eigenvector centrality gives each vertex a score proportional to the sum of the scores of its neighbors

Start from an arbitrary vector x(0) expanded in the eigenvector basis and repeatedly multiply by A:

$$x(t) = A^t\, x(0), \qquad\text{where } x(0) = \sum_i c_i v_i \text{ and the } v_i \text{ are the eigenvectors of } A$$

Eigenvector Centrality

 As $t \to \infty$, we get $x(t) \to c_1 k_1^t\, v_1$, where $k_1$ is the largest eigenvalue and $v_1$ the corresponding eigenvector

 Hence the limiting vector satisfies $Ax = k_1 x$

 where $x_i = k_1^{-1} \sum_j A_{ij}\, x_j$

 The eigenvector centrality of a vertex is large if either:
  • it has many neighbors, and/or
  • it has important neighbors
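As an illustration, a minimal power-iteration sketch of eigenvector centrality; the small adjacency matrix here is my own example, not one from the slides:

```python
import numpy as np

# Adjacency matrix of a small undirected example graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

x = np.ones(A.shape[0])    # x(0): start every vertex with equal score
for _ in range(100):       # x(t) = A^t x(0), renormalized each step
    x = A @ x
    x /= np.linalg.norm(x)

# x has converged to the leading eigenvector v1: vertex 1, which has
# the most (and the best-connected) neighbors, scores highest
print(x)
```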

Katz centrality

 Give each vertex a small amount of centrality
  • regardless of its position in the network or the centrality of its neighbors

 Hence, $\mathbf{x} = \alpha\mathbf{A}\mathbf{x} + \beta\mathbf{1}$
  • where $x_i = \alpha \sum_j A_{ij}\, x_j + \beta$

Solving for x gives

$$\mathbf{x} = \beta(\mathbf{I} - \alpha\mathbf{A})^{-1}\cdot\mathbf{1}$$

α is a scaling factor, set to normalize the score (for the expression to converge, α < 1/κ1, where κ1 is the largest eigenvalue of A)
β reflects the extent to which you weight the centrality of the people ego is tied to
I is the identity matrix (1s down the diagonal)
1 is a column vector of all ones
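Putting the closed form into code, a NumPy sketch of Katz centrality (the graph, α, and β here are my illustrative choices, with α safely below 1/κ1):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

kappa1 = max(np.linalg.eigvals(A).real)  # largest eigenvalue of A
alpha = 0.5 / kappa1                     # must satisfy alpha < 1/kappa1
beta = 1.0
n = A.shape[0]

# Katz centrality: x = beta * (I - alpha A)^(-1) 1
# (solve the linear system rather than forming the inverse)
x = beta * np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
print(x)
```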

Katz Centrality: β

 The magnitude of β reflects the radius of power

  • Small values of β weight local structure
  • Larger values weight global structure

 If β > 0, ego has higher centrality when tied to people who are central

 If β < 0, then ego has higher centrality when tied to people who are not central

 With β = 0, you get degree centrality

PageRank: bringing order to the web

 It’s in the links:
  • links to URLs can be interpreted as endorsements or recommendations
  • the more links a URL receives, the more likely it is to be a good/entertaining/provocative/authoritative/interesting information source
  • but not all link sources are created equal:
    • a link from a respected information source
    • a link from a page created by a spammer

[Figure: many webpages scattered across the web, with a link from an important page, e.g. Slashdot; if a web page is "slashdotted", it gains attention.]

PageRank

 An issue with the Katz centrality measure is that a high-centrality vertex pointing to a large number of vertices gives them all high centrality
  • e.g., the Yahoo directory

 This can be fixed by dividing the passed centrality by the out-degree of the vertex:

$$x_i = \alpha \sum_j A_{ij}\, \frac{x_j}{k_j^{\text{out}}} + \beta$$

In matrix form,

$$\mathbf{x} = \alpha\mathbf{A}\mathbf{D}^{-1}\mathbf{x} + \beta\mathbf{1}, \qquad\text{where } D_{ii} = \max(k_i^{\text{out}}, 1)$$

$$\mathbf{x} = \beta(\mathbf{I} - \alpha\mathbf{A}\mathbf{D}^{-1})^{-1}\cdot\mathbf{1}$$
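Finally, a NumPy sketch of this PageRank-style fix on the same illustrative graph (the directed adjacency matrix and the parameter values are my assumptions):

```python
import numpy as np

# Directed example graph: A[i, j] = 1 if vertex j links to vertex i
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

out_degree = A.sum(axis=0)              # column sums = out-degrees
D = np.diag(np.maximum(out_degree, 1))  # D_ii = max(k_i^out, 1)

alpha, beta = 0.85, 1.0
n = A.shape[0]

# x = beta * (I - alpha A D^(-1))^(-1) 1
M = A @ np.linalg.inv(D)   # each column of A scaled by 1/out-degree
x = beta * np.linalg.solve(np.eye(n) - alpha * M, np.ones(n))
print(x)
```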