GRAPH 1

https://towardsdatascience.com/how-to-do-deep-learning-on-graphs-with-graph-convolutional-networks-7d2250723780

https://github.com/dmlc/dgl

https://github.com/dglai/DGL-GTC2019/blob/master/slides.pptx

http://tkipf.github.io/misc/SlidesCambridge.pdf

GCN

$G(V, E)$

Input feature matrix

$H^0 = X \leftarrow [N \cdot F^0]$

$N$ = # of nodes
$F^0$ = # of input features per node

Adjacency matrix

representation of the graph structure

$A \leftarrow [N \cdot N]$

Output

$H^{l+1} = f(H^l, A) \leftarrow [N \cdot F^{l+1}]$

$f$ = propagation rule
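As a minimal sketch, one layer update can be written in NumPy. Here $f$ is taken to be the sum rule $\sigma(A \cdot H^l \cdot W^l)$ with ReLU as $\sigma$; the 3-node graph, features, and weights are made-up toy values:

```python
import numpy as np

# Made-up toy graph: N = 3 nodes on a path (0-1, 1-2), F0 = 2 features per node.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])            # adjacency matrix, [N, N]
H0 = np.array([[1., 0.],
               [0., 1.],
               [1., 1.]])               # input feature matrix X, [N, F0]
W0 = np.ones((2, 4))                    # layer-0 weights, [F0, F1]

def sigma(x):
    return np.maximum(x, 0)             # ReLU activation

# One propagation step: H1 = sigma(A @ H0 @ W0), shape [N, F1]
H1 = sigma(A @ H0 @ W0)
print(H1.shape)  # (3, 4)
```

Stacking several such layers (with a fresh $W^l$ per layer) gives a multi-layer GCN.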

Propagation

Sum Rule

sum up the feature representations of the neighbors of the $i$th node

$f(H^l, A) = \sigma(A \cdot H^l \cdot W^l)$

$\text{aggregate}(A, X)_i = \sum_{j=1}^{N} A_{i,j} \cdot X_j$

$W^l$ = weight matrix of layer $l$, dimension = $[F^l \cdot F^{l+1}]$
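A quick check that the matrix product $A \cdot X$ really performs this neighbor sum, on a made-up 3-node path graph:

```python
import numpy as np

# Made-up toy graph: node 1 is connected to nodes 0 and 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# Sum rule: aggregate(A, X)_i = sum_j A[i, j] * X[j]
agg = A @ X

# Row 1 is the sum of its neighbors' features (nodes 0 and 2).
print(agg[1])  # [6. 8.]
```

Note that without a self-loop, a node's own features do not appear in its aggregate, which motivates the $\hat{A} = A + I$ fix below.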

Mean Rule: Self-loop & normalization

$\hat{A} = A + I$

$f(X, A) = D^{-1} \cdot \hat{A} \cdot X$

$\text{aggregate}(A, X)_i = \sum_{j=1}^{N} \frac{\hat{A}_{i,j}}{D_{i,i}} \cdot X_j$

$D$ = degree matrix (for normalization)
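A NumPy sketch of the mean rule on the same kind of made-up toy graph. One assumption worth flagging: $D$ is computed here from $\hat{A}$ (degrees including the self-loop), so each row becomes an exact mean over the node itself and its neighbors:

```python
import numpy as np

# Made-up toy graph with self-loops added: A_hat = A + I.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = A + np.eye(3)

# Degree matrix of A_hat (assumption: degrees include the self-loop).
D = np.diag(A_hat.sum(axis=1))

X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# Mean rule: row i is the mean of node i's own and its neighbors' features.
agg = np.linalg.inv(D) @ A_hat @ X
print(agg[1])  # mean of X[0], X[1], X[2] -> [3. 4.]
```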

Spectral Rule

$f(X, A) = D^{-0.5} \cdot \hat{A} \cdot D^{-0.5} \cdot X$

$\text{aggregate}(A, X)_i = \sum_{j=1}^{N} \frac{1}{D_{i,i}^{0.5}} \cdot \hat{A}_{i,j} \cdot \frac{1}{D_{j,j}^{0.5}} \cdot X_j$

the aggregated feature of the $i$th node is normalized by both the degree of the $i$th node and the degree of the $j$th node (symmetric normalization)
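The spectral rule as a NumPy sketch, again with made-up values and again assuming $D$ is the degree matrix of $\hat{A}$. Since $D$ is diagonal, $D^{-0.5}$ is just the elementwise inverse square root of the degrees:

```python
import numpy as np

# Made-up toy graph: path 0-1-2, with self-loops added.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = A + np.eye(3)
D = np.diag(A_hat.sum(axis=1))          # degree matrix of A_hat (assumption)

X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

# D^{-0.5}: inverse square root of each diagonal degree.
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))

# Spectral rule: normalize each edge by the degrees of BOTH endpoints.
agg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X
```

Compared to the mean rule, contributions from high-degree neighbors are down-weighted, not just the row sums.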

Inductive = the model generalizes to nodes/graphs not seen during training

Transductive = all nodes (including unlabeled test nodes) are present in the graph during training
