While Deep Neural Networks (DNNs) excel at predictive performance, they are often too complex to be understood by humans, rendering them “closed-box models”. This is of particular concern when they are deployed in safety-critical domains such as autonomous driving or medical applications. Therefore, explainable artificial intelligence (XAI) aims to better understand DNNs, ultimately leading to more robust, fair, and interpretable models. While this important field of research has been gaining traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not consistently defined and is highly dependent on the end user and the task, leading to ill-defined research questions and a lack of standardized evaluation practices. Motivated by this, the goal of our workshop series is the discussion and dissemination of ideas at the cutting edge of XAI research, while also introducing a dedicated sub-theme each year to encourage critical introspection on the challenges faced by the community. To this end, we offer dedicated paper tracks (proceedings and non-proceedings), a poster session, elevator pitches for each poster, invited speakers presenting their latest work, and a panel discussion focusing on theme-specific challenges. In this third iteration of our workshop, our theme will be the challenges to and opportunities for XAI in the era of large foundation models, which have been rapidly gaining traction in the field of computer vision and beyond. This focus will be reflected in the selection of the invited speakers—we aim for research expertise at the intersection of traditional XAI and foundation models—and, especially, in the composition and framing of the panel discussion.
The rapid rise of foundation models over the past few years has made the role of XAI even more critical. For example, the widespread deployment of generative AI models has given them the power to steer collective thought processes, shift cultural norms, and spread biases. It is therefore essential to properly understand their inner workings, the knowledge they store, and how to steer them away from harmful and biased outputs. However, these models have also brought fresh challenges to the XAI field, such as handling scale, maintaining model performance, performing appropriate evaluations, complying with regulatory requirements, and fundamentally rethinking what one wants from an explanation. Traditional XAI methods often cannot be directly applied to these models, and even when they can, it is unclear whether they remain useful. Thus, the field of XAI needs to adapt to the rapidly changing landscape of AI research and deployment, and this workshop will act as a platform to foster discussion on how to do so. A key goal for our workshop this year is to bring together researchers who have worked on classical XAI methods since before the advent of foundation models and those who have been at the forefront of XAI for such models, enabling a fruitful exchange of ideas and perspectives.