Explainable Computer Vision: Where are We and Where are We Going?

Workshop at the European Conference on Computer Vision (ECCV) 2024

Milan, Italy

September 29, 2024 · 14:00-18:00 · Brown 1

Virtual Attendees: Zoom link and Q/A via RocketChat

About

Deep neural networks (DNNs) are an essential component of modern computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex for humans to understand, which is why they are frequently referred to as “black-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to develop a better understanding of DNNs, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches have been developed, such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods. While this important field of research is gaining more and more traction, there is also justified criticism of how the research is conducted. For example, the term “explainability” itself is not properly defined and depends heavily on the end user and the task, leading to ill-defined research questions and a lack of standardized evaluation practices. The goals of this workshop are thus twofold:

  1. Discussion and dissemination of ideas at the cutting-edge of XAI research (“Where are we?”)

  2. A critical introspection on the challenges faced by the community and the way to go forward (“Where are we going?”)
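To make the notion of an attribution map concrete, the sketch below computes a gradient-times-input attribution for a toy linear classifier in NumPy. This is a minimal illustration of the general technique mentioned above, not any specific method from the workshop; the toy model, sizes, and random data are assumptions for the example. For a linear model, the gradient of a class score with respect to the input is simply the corresponding weight row, so the resulting attributions are exact and sum to the class score.

```python
import numpy as np

# Toy "model": a linear classifier over flattened 4x4 "images".
# (Hypothetical setup for illustration only.)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))   # 3 classes, 16 input pixels
x = rng.normal(size=16)        # one input "image", flattened

scores = W @ x
c = int(np.argmax(scores))     # predicted class

# For a linear model, d(score_c)/dx is just the weight row W[c].
grad = W[c]
attribution = (grad * x).reshape(4, 4)  # gradient x input, as a 4x4 map

# Completeness holds exactly here: attributions sum to the class score.
assert np.isclose(attribution.sum(), scores[c])
print(attribution.shape)  # (4, 4)
```

For deep networks, the same recipe applies with the gradient obtained via backpropagation, but the resulting maps are only local approximations, which is one reason evaluating such explanations is an open problem.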

Program

Time Program
14:00-14:15 Welcome
14:15-14:45 Yossi Gandelsman: Interpreting Models by Reverse Engineering
14:45-15:15 Alexei A. Efros: Instead of “explaining” the Algorithms, we need to understand the Data
15:15-16:30 Coffee Break + Poster Session
16:30-17:00 Olga Russakovsky and Sunnie S. Y. Kim: Human-Centered Approaches to Explainable Computer Vision
17:00-17:30 René Vidal: Explainable AI via Semantic Information Pursuit
17:30-18:00 Stephan Alaniz and Zeynep Akata: Explainability in the Era of Multimodal Large Language Models
18:00-18:05 Closing

Invited Speakers

  • Alexei A. Efros, UC Berkeley
  • Olga Russakovsky, Princeton University
  • René Vidal, University of Pennsylvania
  • Stephan Alaniz, Helmholtz Munich
  • Sunnie S. Y. Kim, Princeton University
  • Yossi Gandelsman, UC Berkeley
  • Zeynep Akata, TUM and Helmholtz Munich

Organizing Team

  • Robin Hesse, TU Darmstadt
  • Sukrut Rao, MPI Informatics
  • Moritz Böhle, MPI Informatics / Kyutai
  • Quentin Bouniot, Télécom Paris
  • Stefan Roth, TU Darmstadt
  • Kate Saenko, FAIR / Boston University
  • Bernt Schiele, MPI Informatics

Sponsors

Reviewers

We thank the reviewers for their efforts!

Ada Görgün, Aditya Chinchure, Akash Guna R.T, Amin Parchami-Araghi, Angelos Nalmpantis, Ashkan Khakzar, Ashwath Shetty, Bor-Shiun Wang, Chenyang Zhao, Dhruv Srikanth, Dmitry Kangin, Elisa Nguyen, Erfan Darzi, Eunji Kim, Fawaz Sammani, Giang Nguyen, Hubert Baniecki, John Gkountouras, Joseph Paul Cohen, Junyi Wu, Konstantinos P. Panousis, Lama Moukheiber, Manxi Lin, Matthew Kowal, Maximilian Dreyer, Meghal Dani, Mira Moukheiber, Nhi Pham, Nina Weng, Romain Xu-Darme, Sebastian Bordt, Simon Schrodi, Stephan Alaniz, Susu Sun, Sweta Mahajan, Syed Nouman Hasany, Yannic Neuhaus

Call for Papers

The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:

  • Attribution maps
  • Evaluating XAI methods
  • Intrinsically explainable models
  • Language as an explanation for vision models
  • Counterfactual explanations
  • Causality in XAI for vision models
  • Mechanistic interpretability
  • XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
  • Concept discovery

Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we also invite papers that present interesting and detailed negative results and papers that show the limitations of today’s XAI methods.

Submission Instructions

The workshop has two tracks of submissions.

  • Proceedings Track: We welcome papers of up to 14 pages, excluding references and supplementary material. Accepted papers will be included in the ECCV Workshop Proceedings. Papers submitted to the Proceedings Track should follow the ECCV 2024 Submission Policies. Each accepted paper in the Proceedings Track needs to be covered by a full Workshop Registration, and one registration can cover up to two papers. Please see the ECCV 2024 Registration page for full details.
  • Non-Proceedings / Nectar Track: For the Nectar Track we invite papers that have been previously published at a leading international conference on computer vision or machine learning in 2023 or 2024 (e.g., ECCV, ICCV, CVPR, NeurIPS, ICLR, ICML, AAAI). The aim of the Nectar Track is to increase the visibility of exciting XAI work and to give researchers an opportunity to connect with the XAI community. The submission should be a single PDF containing the already published paper (not anonymized and in the formatting of the original venue).

For both tracks, accepted papers will be presented in the in-person poster session of the workshop. At least one author for each accepted paper should plan to attend the workshop to present a poster.

Important Dates

Proceedings Track:

  • Paper submission deadline: July 24, 2024 (23:59 CEST)
  • Paper decision notification: August 16, 2024 (23:59 CEST)
  • Camera Ready deadline: August 22, 2024 (23:59 CEST)

Non-Proceedings / Nectar Track:

  • Paper submission deadline: August 5, 2024 (23:59 CEST)
  • Paper decision notification: August 16, 2024 (23:59 CEST)

Submission Sites