This notebook provides a demo of the interpretation method [GNN-LRP](https://arxiv.org/abs/2006.03589), available at
<blockquote>
T. Schnake, O. Eberle, J. Lederer, S. Nakajima, K. T. Schütt, K.-R. Müller, G. Montavon<br><a href="https://arxiv.org/abs/2006.03589">Higher-Order Explanations of Graph Neural Networks via Relevant Walks</a><br><font color="#008800">arXiv:2006.03589, 2020</font>
</blockquote>
which explains the network prediction strategy of a GNN by extracting relevant walks on the input graph.
We will train a [GCN](https://arxiv.org/abs/1609.02907) on scale-free Barabási-Albert graphs to predict their growth parameter. We will then give an implementation of the GNN-LRP method and apply it to the trained network. Finally, we will show qualitative evidence for the network's prediction strategy by visualizing heatmaps of relevant walks.
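As a preliminary step, the Barabási-Albert graphs used for training can be generated, for example, with `networkx`. The sketch below is a hypothetical illustration (the function name `make_dataset`, the graph sizes, and the choice of growth parameters are assumptions, not taken from this notebook): each graph is labeled by the growth parameter `m` used to generate it, and its adjacency matrix serves as input to the GCN.

```python
import networkx as nx
import numpy as np

def make_dataset(n_graphs=100, n_nodes=20, growth_params=(1, 2)):
    """Generate Barabási-Albert graphs labeled by their growth parameter.

    Hypothetical sketch: each class corresponds to one value of the BA
    growth parameter m; the adjacency matrix is the GCN input.
    """
    graphs, labels = [], []
    for label, m in enumerate(growth_params):
        for _ in range(n_graphs):
            G = nx.barabasi_albert_graph(n_nodes, m)
            graphs.append(nx.to_numpy_array(G))  # dense adjacency matrix
            labels.append(label)
    return graphs, np.array(labels)

graphs, labels = make_dataset()
```

In this sketch the classification task is binary (m = 1 vs. m = 2), which keeps the demo simple while still requiring the network to pick up on structural differences between the graph classes.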