Explainable Computer Vision: Quo Vadis?

Workshop at the International Conference on Computer Vision (ICCV) 2025

Honolulu, Hawai'i, USA

October 19, 2025

About

Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex to be understood by humans, which is why they are frequently referred to as “closed-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to better understand DNNs, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods have been developed. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and is highly dependent on the end user and task, leading to ill-defined research questions and a lack of standardized evaluation practices.

Rapidly increasing model sizes and the rise of large-scale foundation models have brought forth fresh challenges to the field such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation.

The goals of this workshop are thus two-fold:

  1. Discussion and dissemination of ideas at the cutting-edge of XAI research

  2. A critical introspection on the challenges faced by the community and the way forward (“Quo Vadis?”)

Call for Open-Mic Opinions

In keeping with the spirit of the workshop’s theme “Quo Vadis”, where we aim to discuss the state of the field of XAI, its challenges, and its opportunities, we wish to provide a platform for the broader community to participate in this discussion and share their views. We plan to do this by holding short five-minute talks (in-person only) during the workshop.

If you would like to participate, please submit a short proposal (at most half a page; bullet points are fine) on the topic you would like to speak about and the position you would like to take. The proposal does not have to be very detailed but should contain the key messages you would like to convey in your talk. The general idea is to present a position on a matter of interest to the community (similar to position papers). Using references to support your position is encouraged. You may use your own work to support your position, but the talk should have a broader focus and should ideally not be limited to your own research. “Unpopular” positions that challenge norms and assumptions held by the field are welcome. We welcome everyone to speak, including but not limited to XAI researchers (students included), practitioners who use XAI methods, stakeholders who use explanations, and members of the broader vision, ML, and AI community.

  • Submission Link: https://forms.gle/fw9PdvKNyWSqnTwe7 (if you are unable to submit via Google Forms, please email us.)
  • Deadline: September 29, 2025 AoE
  • Decisions: October 1, 2025 AoE

Call for Papers

The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:

  • Attribution maps
  • Evaluating XAI methods
  • Intrinsically explainable models
  • Language as an explanation for vision models
  • Counterfactual explanations
  • Causality in XAI for vision models
  • Mechanistic interpretability
  • XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
  • Concept discovery
  • Understanding Foundation models’ representations
  • Feature visualizations
  • New forms of explanations

Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.

Submission Instructions

The workshop has four submission tracks:

Proceedings Tracks

Papers in these tracks will be published in the ICCV 2025 Workshop Proceedings and must be up to 8 pages in length, excluding references and supplementary material. Papers submitted to the Proceedings Tracks should follow the ICCV 2025 Submission Policies and the Author Guidelines. Each accepted paper in the Proceedings Tracks needs to be covered by an Author Registration, and one registration can cover up to three papers. Please see the ICCV 2025 Registration page for the most up-to-date details.

  • Full Papers: We welcome papers presenting novel and original XAI work, within the broad scope described above.
  • Position Papers: We invite thought-provoking papers that articulate bold positions, propose new directions, or present challenges for the field of XAI. We expect accepted papers to spark discussion rather than present research results, and we will select papers based on their potential to stimulate debate during the workshop.

Non-Proceedings Tracks

Papers in these tracks will not be published in the ICCV 2025 Workshop Proceedings.

  • Early Stage Track: In this track, we welcome submissions describing preliminary work, ongoing projects, or novel ideas that are in early stages of development. We particularly encourage contributions from researchers from underrepresented communities and/or interdisciplinary teams. Papers should be up to 4 pages excluding references and should follow the ICCV submission template.
  • Nectar Track: We invite papers that have been previously published at a leading international conference on computer vision or machine learning in 2024 or 2025 (e.g., ECCV, ICCV, CVPR, NeurIPS, ICLR, ICML, AAAI). The aim of the Nectar Track is to increase the visibility of exciting XAI work and to give researchers an opportunity to connect with the XAI community. The submission should be a single PDF containing the already published paper (not anonymized and in the formatting of the original venue).

For all tracks, accepted papers will be presented in-person in the poster session of the workshop. At least one author for each accepted paper should plan to attend the workshop to present a poster.

Important Dates

Proceedings Tracks:

  • Paper submission deadline: June 26, 2025 (23:59 AoE)
  • Paper decision notification: July 10, 2025
  • Camera Ready deadline: August 18, 2025

Non-Proceedings Tracks:

  • Paper submission deadline: August 15, 2025 (23:59 AoE)
  • Paper decision notification: August 27, 2025

Rolling deadline for Nectar track only: Submissions will be accepted as long as poster space is available. Please submit via this form. Note that this form may close at any time without prior notice.

Submission Sites

Invited Speakers

 

  • Hila Chefer (Tel Aviv University)
  • Jitendra Malik (UC Berkeley)
  • Sharon Li (UW Madison)
  • Thomas Fel (Harvard University)

Organizing Team

 

  • Sukrut Rao (MPI Informatics)
  • Robin Hesse (TU Darmstadt)
  • Quentin Bouniot (TUM and Helmholtz Munich)
  • Sweta Mahajan (MPI Informatics)
  • Amin Parchami-Araghi (MPI Informatics)
  • Jayneel Parekh (Sorbonne Université)
  • Simone Schaub-Meyer (TU Darmstadt)
  • Florence d'Alché-Buc (Télécom Paris)
  • Zeynep Akata (TUM and Helmholtz Munich)
  • Stefan Roth (TU Darmstadt)
  • Bernt Schiele (MPI Informatics)