DOI: 10.1145/3711122 ISSN: 0360-0300

Can Graph Neural Networks be Adequately Explained? A Survey

Xuyan Li, Jie Wang, Zheng Yan

To overcome the barrier that the black-box nature of Deep Learning (DL) poses to practical deployment, eXplainable Artificial Intelligence (XAI) has emerged and is developing rapidly. While significant progress has been made in explanation techniques for DL models targeting images and texts, research on explaining DL models for graph data is still in its infancy. As Graph Neural Networks (GNNs) have shown superiority on various network analysis tasks, their explainability has also gained attention from both academia and industry. However, despite the increasing number of GNN explanation methods, there is currently neither a fine-grained taxonomy of them nor a holistic set of criteria for their quantitative and qualitative evaluation. To fill this gap, we conduct a comprehensive survey of existing GNN explanation methods in this paper. Specifically, we propose a novel four-dimensional taxonomy of GNN explanation methods and summarize evaluation criteria in terms of correctness, robustness, usability, understandability, and computational complexity. Based on the taxonomy and criteria, we thoroughly review recent advances in GNN explanation methods and analyze their pros and cons. Finally, we identify a series of open issues and put forward future research directions to facilitate XAI research in the field of GNNs.