Explainable Computer Vision: Quo Vadis?

Workshop at the International Conference on Computer Vision (ICCV) 2025

Honolulu, Hawai'i, USA

October 19/20, 2025

About

Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex to be understood by humans, which is why they are commonly referred to as “closed-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to understand DNNs better, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods have been developed. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and is highly dependent on the end user and task, leading to ill-defined research questions and a lack of standardized evaluation practices.

Rapidly increasing model sizes and the rise of large-scale foundation models have brought fresh challenges to the field, such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation.

The goals of this workshop are thus two-fold:

  1. Discussion and dissemination of ideas at the cutting-edge of XAI research

  2. A critical introspection on the challenges faced by the community and the way forward (“Quo Vadis?”)

Call for Papers

The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:

  • Attribution maps
  • Evaluating XAI methods
  • Intrinsically explainable models
  • Language as an explanation for vision models
  • Counterfactual explanations
  • Causality in XAI for vision models
  • Mechanistic interpretability
  • XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
  • Concept discovery
  • Understanding foundation models’ representations
  • Feature visualizations
  • New forms of explanations

Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.

Submission Instructions

The workshop has four submission tracks:

Proceedings Tracks

Papers in these tracks will be published in the ICCV 2025 Workshop Proceedings and must be up to 8 pages in length, excluding references and supplementary material. Papers submitted to the Proceedings Tracks should follow the ICCV 2025 Submission Policies and the Author Guidelines. Each accepted paper in the Proceedings Tracks needs to be covered by an Author Registration, and one registration can cover up to three papers. Please see the ICCV 2025 Registration page for the most up-to-date details.

  • Full Papers: We welcome papers presenting novel and original XAI work, within the broad scope described above.
  • Position Papers: We invite thought-provoking papers that articulate bold positions, propose new directions, or present challenges for the field of XAI. We expect accepted papers to spark discussion rather than present research results, and we will select papers based on their potential to stimulate debate during the workshop.

Non-Proceedings Tracks

Papers in these tracks will not be published in the ICCV 2025 Workshop Proceedings.

  • Early Stage Track: In this track, we welcome submissions describing preliminary work, ongoing projects, or novel ideas that are in early stages of development. We particularly encourage contributions from researchers from underrepresented communities and/or interdisciplinary teams. Papers should be up to 4 pages excluding references and should follow the ICCV submission template.
  • Nectar Track: We invite papers that have been previously published at a leading international conference on computer vision or machine learning in 2024 or 2025 (e.g., ECCV, ICCV, CVPR, NeurIPS, ICLR, ICML, AAAI). The aim of the Nectar Track is to increase the visibility of exciting XAI work and to give researchers an opportunity to connect with the XAI community. The submission should be a single PDF containing the already published paper (not anonymized and in the formatting of the original venue).

For all tracks, accepted papers will be presented in person in the poster session of the workshop. At least one author of each accepted paper should plan to attend the workshop to present a poster.

Important Dates

Proceedings Tracks:

  • Paper submission deadline: June 20, 2025 (23:59 AoE)
  • Paper decision notification: July 10, 2025
  • Camera Ready deadline: August 18, 2025

Non-Proceedings Tracks:

  • Paper submission deadline: July 18, 2025
  • Paper decision notification: August 14, 2025

Submission Sites

Coming soon.

Invited Speakers

 


Hila Chefer

Tel Aviv University


Jitendra Malik

UC Berkeley


Sharon Li

UW Madison


Thomas Fel

Harvard University


Trevor Darrell

UC Berkeley

Organizing Team

 


Sukrut Rao

MPI Informatics


Robin Hesse

TU Darmstadt


Quentin Bouniot

TUM and Helmholtz Munich


Sweta Mahajan

MPI Informatics


Amin Parchami-Araghi

MPI Informatics


Jayneel Parekh

Sorbonne Université


Simone Schaub-Meyer

TU Darmstadt


Florence d'Alché-Buc

Télécom Paris


Zeynep Akata

TUM and Helmholtz Munich


Stefan Roth

TU Darmstadt


Bernt Schiele

MPI Informatics