|
73 | 73 | "#IFrame(\"doc/SpectralGraphTheory/cbms.pdf\", width=1200, height=800)"
|
74 | 74 | ]
|
75 | 75 | },
|
| 76 | + { |
| 77 | + "cell_type": "markdown", |
| 78 | + "metadata": {}, |
| 79 | + "source": [ |
| 80 | + "Here are some very good references from Petar Veličković:\n",
| 81 | + "* [Theoretical Foundations of Graph Neural Networks](https://www.youtube.com/watch?v=uF53xsT7mjc&ab_channel=PetarVeli%C4%8Dkovi%C4%87)\n", |
| 82 | + "* [Intro to graph neural networks (ML Tech Talks)](https://www.youtube.com/watch?v=8owQBFAHw7E&ab_channel=TensorFlow)\n",
| 83 | + "* [Recent set of references](https://twitter.com/PetarV_93/status/1306689702020382720)\n", |
| 84 | + "* [An introduction to graph attention networks](https://petar-v.com/GAT/)"
| 85 | + ] |
| 86 | + }, |
76 | 87 | {
|
77 | 88 | "cell_type": "markdown",
|
78 | 89 | "metadata": {},
|
|
125 | 136 | "| Gradient operator | $(\\nabla f)_{ij} = \\sqrt{a_{ij}}(f_i-f_j)$ | $\\nabla : L^2(\\mathcal{V}) \\rightarrow L^2(\\mathcal{E})$ |\n",
|
126 | 137 | "| Divergence operator | $(div F)_{i} = \\frac{1}{b_i} \\sum_{j:(i,j) \\in \\mathcal{E}} \\sqrt{a_{ij}}(F_{ji}-F_{ij})$ |$div : L^2(\\mathcal{E}) \\rightarrow L^2(\\mathcal{V})$ |\n",
|
127 | 138 | "| Gradient Adjoint |$\\nabla^{\\star}F = -div F$ | $\\langle F,\\nabla f \\rangle_{L^2(\\mathcal{E})} = \\langle \\nabla^{\\star}F, f \\rangle_{L^2(\\mathcal{V})} = \\langle -div F, f \\rangle_{L^2(\\mathcal{V})}$ |\n",
|
128 |
| - "| Laplacian operator | $(\\Delta F)_{i} = \\frac{1}{b_i} \\sum_{j:(i,j) \\in \\mathcal{E}} a_{ij}(f_i-f_j)$ | $\\Delta : L^2(\\mathcal{V}) \\rightarrow L^2(\\mathcal{V})$, this one is the $D-A$ |\n", |
129 | 139 | "| Weight matrix | $A = (a_{ij})$ | |\n",
|
130 | 140 | "| Degree matrix | $D = diag\left(d_i\right)$ where $d_i=\sum_{j\neq i}a_{ij}$ | |\n",
|
131 |
| - "| Unnormalized Laplacian | $\\Delta = D - A$ | | |\n", |
132 |
| - "| Normalized Laplacian | $\\Delta = I - D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$ | | |\n", |
133 |
| - "| Random walk Laplacian | $\\Delta = I - D^{-1} A $ | | |\n", |
| 141 | + "| (Unnormalized) Laplacian operator | $(\Delta f)_{i} = \frac{1}{b_i} \sum_{j:(i,j) \in \mathcal{E}} a_{ij}(f_i-f_j)$ | $\Delta : L^2(\mathcal{V}) \rightarrow L^2(\mathcal{V})$; with $b_i = 1$ this is $\Delta = D - A$ |\n", |
| 142 | + "| Normalized Laplacian | $(\Delta f)_{i} = \frac{1}{\sqrt{d_i}} \sum_{j:(i,j) \in \mathcal{E}} a_{ij}\left(\frac{f_i}{\sqrt{d_i}}-\frac{f_j}{\sqrt{d_j}}\right)$ | $\Delta : L^2(\mathcal{V}) \rightarrow L^2(\mathcal{V})$; this is $\Delta = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ |\n", |
| 143 | + "| Random walk Laplacian | $(\Delta f)_{i} = \frac{1}{d_i} \sum_{j:(i,j) \in \mathcal{E}} a_{ij}(f_i-f_j)$ | $\Delta : L^2(\mathcal{V}) \rightarrow L^2(\mathcal{V})$; this is $\Delta = I - D^{-1} A$ |\n", |
134 | 144 | "\n",
|
135 | 145 | "These definitions come from Xavier Bresson's slides."
|
136 | 146 | ]
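The correspondence between the pointwise and matrix forms of the Laplacians in the table above can be checked numerically. A minimal NumPy sketch (the graph `A` and the signal `f` are illustrative values, not from the notebook):

```python
import numpy as np

# Illustrative symmetric weight matrix of a small undirected graph (a_ij = a_ji)
A = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
d = A.sum(axis=1)          # degrees d_i = sum_j a_ij
D = np.diag(d)
n = len(d)

L     = D - A                                                  # unnormalized: D - A
L_sym = np.eye(n) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)    # normalized: I - D^{-1/2} A D^{-1/2}
L_rw  = np.eye(n) - np.diag(1/d) @ A                           # random walk: I - D^{-1} A

f = np.array([1., -2., 3.])  # an arbitrary signal on the vertices

# Pointwise unnormalized form (b_i = 1): (Δf)_i = Σ_j a_ij (f_i - f_j)
pw = np.array([sum(A[i, j] * (f[i] - f[j]) for j in range(n)) for i in range(n)])
assert np.allclose(pw, L @ f)

# Pointwise normalized form: (Δf)_i = (1/√d_i) Σ_j a_ij (f_i/√d_i - f_j/√d_j)
pw_sym = np.array([
    sum(A[i, j] * (f[i] / np.sqrt(d[i]) - f[j] / np.sqrt(d[j])) for j in range(n))
    / np.sqrt(d[i]) for i in range(n)])
assert np.allclose(pw_sym, L_sym @ f)
```

Constant signals are in the kernel of `L` and `L_rw` (each row sums to zero), which is a quick sanity check on any Laplacian construction.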
|
|