Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex to be understood by humans, which is why they are commonly referred to as “closed-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to understand DNNs better, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods have been developed. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and is highly dependent on the end user and task, leading to ill-defined research questions and a lack of standardized evaluation practices.
Rapidly increasing model sizes and the rise of large-scale foundation models have brought forth fresh challenges to the field such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation.
The goals of this workshop are thus two-fold:
Discussion and dissemination of ideas at the cutting-edge of XAI research
A critical introspection on the challenges faced by the community and the way forward (“Quo Vadis?”)
In keeping with the spirit of the workshop’s theme “Quo Vadis”, where we aim to discuss the state of the field of XAI, its challenges, and its opportunities, we wish to provide a platform for the broader community to participate in this discussion and share their views. We plan to do this by holding short 5-minute talks (in-person only) during the workshop.
If you would like to participate, please submit a short proposal (at most half a page; bullet points are fine) on the topic you would like to speak about and the position you would like to take. The proposal does not have to be very detailed but should contain the key messages you would like to convey in your talk. The general idea is to present a position on a matter of interest to the community (similar to position papers). Using references to support your position is encouraged. You may use your own work to support your position, but the talk should have a broader focus and should ideally not be limited to your own research. “Unpopular” positions that challenge norms and assumptions taken by the field are welcome. We welcome everyone to speak, including but not limited to XAI researchers (students included), practitioners who use XAI methods, stakeholders who use explanations, and members of the broader vision, ML, and AI community.
The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:
Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results, as well as papers that show the limitations of today’s XAI methods.
The workshop has four submission tracks:
Papers in these tracks will be published in the ICCV 2025 Workshop Proceedings and must be up to 8 pages in length, excluding references and supplementary material. Papers submitted to the Proceedings Track should follow the ICCV 2025 Submission Policies and the Author Guidelines. Each accepted paper in the Proceedings Track needs to be covered by an Author Registration, and one registration can cover up to three papers. Please see the ICCV 2025 Registration page for the most up-to-date details.
Papers in these tracks will not be published in the ICCV 2025 Workshop Proceedings.
For all tracks, accepted papers will be presented in person in the poster session of the workshop. At least one author of each accepted paper should plan to attend the workshop to present a poster.
Rolling deadline for Nectar track only: Submissions will be accepted as long as poster space is available. Please submit via this form. Note that this form may close at any time without prior notice.