Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex to be understood by humans, which is why they are frequently referred to as “closed-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to understand DNNs better, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods have been developed. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and is highly dependent on the end user and task, leading to ill-defined research questions and a lack of standardized evaluation practices.
Rapidly increasing model sizes and the rise of large-scale foundation models have brought fresh challenges to the field, such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation.
The goals of this workshop are thus two-fold:
Discussion and dissemination of ideas at the cutting-edge of XAI research
A critical introspection on the challenges faced by the community and the way forward (“Quo Vadis?”)
The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:
Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.
The workshop has four submission tracks:
Papers in these tracks will be published in the ICCV 2025 Workshop Proceedings and must be up to 8 pages in length, excluding references and supplementary material. Papers submitted to the Proceedings Track should follow the ICCV 2025 Submission Policies and the Author Guidelines. Each accepted paper in the Proceedings Track needs to be covered by an Author Registration, and one registration can cover up to three papers. Please see the ICCV 2025 Registration page for the most up-to-date details.
Papers in these tracks will not be published in the ICCV 2025 Workshop Proceedings.
For all tracks, accepted papers will be presented in person during the poster session of the workshop. At least one author of each accepted paper should plan to attend the workshop to present a poster.
Coming soon.