Explainable Computer Vision: Where are We and Where are We Going?

Workshop at the European Conference on Computer Vision (ECCV) 2024

Milan, Italy · September 29, 2024 · Afternoon

About

Deep neural networks (DNNs) are an essential component of modern computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex for humans to understand, which is why they are commonly referred to as “black-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to gain a better understanding of DNNs, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches have been developed, such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not properly defined and depends strongly on the end user and the task, leading to ill-defined research questions and a lack of standardized evaluation practices. The goals of this workshop are thus two-fold:

  1. Discussion and dissemination of ideas at the cutting-edge of XAI research (“Where are we?”)

  2. A critical introspection on the challenges faced by the community and the way to go forward (“Where are we going?”)

Call for Papers

The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:

  • Attribution maps
  • Evaluating XAI methods
  • Intrinsically explainable models
  • Language as an explanation for vision models
  • Counterfactual explanations
  • Causality in XAI for vision models
  • Mechanistic interpretability
  • XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
  • Concept discovery

Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.

Submission Instructions

The workshop has two submission tracks.

  • Proceedings Track: We welcome papers of up to 14 pages, excluding references and supplementary materials. Accepted papers will be included in the ECCV Workshop Proceedings. Papers submitted to the Proceedings Track should follow the ECCV 2024 Submission Policies.
  • Non-Proceedings / Nectar Track: For the Nectar Track, we invite papers that have been previously published at a leading international conference on computer vision or machine learning in 2023 or 2024 (e.g., ECCV, ICCV, CVPR, NeurIPS, ICLR, ICML, AAAI). The aim of the Nectar Track is to increase the visibility of exciting XAI work and to give researchers an opportunity to connect with the XAI community. The submission should be a single PDF containing the already published paper (not anonymized and in the formatting of the original venue).

Important Dates

Proceedings Track:

Non-Proceedings / Nectar Track:

Program (Tentative)

Time         Program
13:00-13:15  Welcome
13:15-13:35  Speaker 1
13:35-13:55  Speaker 2
13:55-14:15  Q/A for Speakers 1 and 2
14:15-15:30  Coffee Break + Poster Session
15:30-15:50  Speaker 3
15:50-16:10  Speaker 4
16:10-16:30  Speaker 5
16:30-16:50  Q/A for Speakers 3, 4, and 5
16:50-17:00  Closing

Invited Speakers

 

More coming soon!

 


Olga Russakovsky

Princeton University


Zeynep Akata

Helmholtz Munich

Organizing Team

 


Robin Hesse

TU Darmstadt


Sukrut Rao

MPI Informatics


Moritz Böhle

MPI Informatics


Quentin Bouniot

Télécom Paris


Stefan Roth

TU Darmstadt


Kate Saenko

FAIR / Boston University


Bernt Schiele

MPI Informatics