A review paper by three undergraduates, Wang Chenyu, Zuo Chaoying, and Su Zihan, from Huazhong Agricultural University (HZAU) has been accepted at the 28th European Conference on Artificial Intelligence (ECAI-2025), a leading international conference in artificial intelligence (AI).
Titled "Deep Learning and Explainable AI: New Pathways to Genetic Insights", the paper provides a comprehensive classification of explainable AI (XAI) methods in genomics, offering theoretical and practical guidance for researchers seeking to understand deep learning models and their interpretability in genetic studies.
Deep learning techniques have become increasingly prominent in genomics. Common approaches include visualizing convolutional kernels in convolutional neural networks (CNNs) to detect enhancers in DNA sequences, using gradient-based methods to map regulatory elements, applying perturbation techniques to link gene loci with biological traits, and building transparent models such as DCell, grounded in biological priors, to predict gene functions.
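To make the gradient-based idea concrete, the sketch below computes a simple saliency map: the gradient of a sequence score with respect to each input base. The `ToyDNACNN` architecture, the `one_hot` helper, and the example sequence are illustrative stand-ins, not the published models the paper reviews.

```python
import torch
import torch.nn as nn

# Minimal sketch of gradient-based attribution (a saliency map) for a
# hypothetical CNN that scores one-hot encoded DNA sequences.

class ToyDNACNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(4, 8, kernel_size=5, padding=2)  # channels: A, C, G, T
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(8, 1)

    def forward(self, x):                         # x: (batch, 4, seq_len)
        h = torch.relu(self.conv(x))
        return self.fc(self.pool(h).squeeze(-1))  # one score per sequence

def one_hot(seq: str) -> torch.Tensor:
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = torch.zeros(4, len(seq))
    for pos, base in enumerate(seq):
        x[idx[base], pos] = 1.0
    return x

model = ToyDNACNN()
x = one_hot("ACGTACGTGGCCAATT").unsqueeze(0)  # shape (1, 4, 16)
x.requires_grad_(True)

model(x).sum().backward()                     # d(score) / d(input)

# Gradient * input keeps only the sensitivity at the base actually
# present, giving one importance value per sequence position.
saliency = (x.grad * x).sum(dim=1).squeeze(0).detach()
print(saliency)
```

Multiplying the gradient by the one-hot input is a common convention: positions where the saliency has large magnitude are the ones whose bases most influence the model's score.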
Despite their power, the complexity of these models creates an urgent need for interpretability so that model-driven decisions in genetic research can be trusted. Current interpretability methods in genomics, however, still rely largely on heuristics rather than robust theory.
The review classifies existing XAI methods into input-based and model-based categories. Input-based methods include kernel visualization, gradient-based analysis, and perturbation tests, while model-based methods cover attention mechanisms and biologically grounded transparent models. The authors assess these techniques through real-world biological use cases, highlighting limitations such as dropout-induced inconsistencies. They also introduce novel mathematical interpretations, including differential geometry-based insights into neural redundancy.
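A perturbation test from the input-based category can be sketched without a trained network at all. In the hypothetical example below, a stand-in scoring function (here, simple GC content) is probed by in silico mutagenesis, re-scoring the sequence after every possible single-base substitution; the function name and sequence are assumptions for illustration only.

```python
import numpy as np

BASES = "ACGT"

def score_sequence(seq: str) -> float:
    """Stand-in for a black-box genomic model: scores by GC content."""
    return sum(base in "GC" for base in seq) / len(seq)

def mutagenesis_effects(seq: str) -> np.ndarray:
    """Score change for every single-base substitution, shape (4, len(seq))."""
    base_score = score_sequence(seq)
    effects = np.zeros((len(BASES), len(seq)))
    for pos in range(len(seq)):
        for b, base in enumerate(BASES):
            mutated = seq[:pos] + base + seq[pos + 1:]
            effects[b, pos] = score_sequence(mutated) - base_score
    return effects

effects = mutagenesis_effects("ACGTACGTGGCCAATT")
# Positions where some substitution shifts the score most are the ones a
# perturbation analysis would flag as influential.
print(np.abs(effects).max(axis=0))
```

In practice the same loop is run against a trained genomic model, and positions where substitutions most change the prediction are flagged as candidate functional loci.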

The research offers new theoretical insights into model interpretability. [Photo/news.hzau.edu.cn]