Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex for humans to understand, which is why they are frequently referred to as “black-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medical applications. With this problem in mind, explainable artificial intelligence (XAI) aims to provide a better understanding of DNNs, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches have been developed, such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not properly defined and depends heavily on the end user and the task, leading to ill-defined research questions and a lack of standardized evaluation practices. The goals of this workshop are thus two-fold:
- Discussion and dissemination of ideas at the cutting edge of XAI research (“Where are we?”)
- A critical introspection on the challenges faced by the community and the way forward (“Where are we going?”)
| Time | Program |
|---|---|
| 14:00-14:15 | Welcome |
| 14:15-14:45 | Yossi Gandelsman: Interpreting Models by Reverse Engineering |
| 14:45-15:15 | Alexei A. Efros: Instead of “explaining” the Algorithms, we need to understand the Data |
| 15:15-16:30 | Coffee Break + Poster Session |
| 16:30-17:00 | Olga Russakovsky and Sunnie S. Y. Kim: Human-Centered Approaches to Explainable Computer Vision |
| 17:00-17:30 | René Vidal: Explainable AI via Semantic Information Pursuit |
| 17:30-18:00 | Stephan Alaniz and Zeynep Akata: Explainability in the Era of Multimodal Large Language Models |
| 18:00-18:05 | Closing |
We thank the reviewers for their efforts!
Ada Görgün, Aditya Chinchure, Akash Guna R.T, Amin Parchami-Araghi, Angelos Nalmpantis, Ashkan Khakzar, Ashwath Shetty, Bor-Shiun Wang, Chenyang ZHAO, Dhruv Srikanth, Dmitry Kangin, Elisa Nguyen, Erfan Darzi, Eunji Kim, Fawaz Sammani, Giang Nguyen, Hubert Baniecki, John Gkountouras, Joseph Paul Cohen, Junyi Wu, Konstantinos P. Panousis, Lama Moukheiber, Manxi Lin, Matthew Kowal, Maximilian Dreyer, Meghal Dani, Mira Moukheiber, Nhi Pham, Nina Weng, Romain Xu-Darme, Sebastian Bordt, Simon Schrodi, Stephan Alaniz, Susu Sun, Sweta Mahajan, Syed Nouman Hasany, Yannic Neuhaus
The eXCV workshop aims to advance and critically examine the current landscape in the field of XAI for computer vision. To this end, we invite papers covering all topics within XAI for computer vision, including but not limited to:
Since the aim of the workshop is not only to present new XAI methods but also to question current practices, we likewise invite papers that present interesting and detailed negative results, as well as papers that demonstrate the limitations of today’s XAI methods.
The workshop has two submission tracks. For both tracks, accepted papers will be presented in the workshop’s in-person poster session. At least one author of each accepted paper should plan to attend the workshop to present a poster.