CLEAR — Collaborative Methods and Tools for
Engineering and Evaluating Transparency in AI

European Conference on Artificial Intelligence  ·  Bologna, Italy  ·  2025

Morning Session

09:30 – 10:30
Welcome & Introduction
"Enhancing Active Learning efficiency with a Recommendation System based on Annotator Accuracy, Mood, and Fatigue"
Diana Mortágua, Luis Macedo and F. Amílcar Cardoso
10:30 – 11:00
☕ Coffee Break
Session 1: Presentations
11:00 – 12:30
"A Visual Reader's Eye for Image-Generation AI"
Randi Cecchine and Martha Larson

"Uncertainty-Guided Expert-AI Collaboration for Efficient Soil Horizon Annotation"
Teodor Chiaburu, Vipin Singh, Frank Haußer and Felix Biessmann

"Towards Transparent and Interpretable Credit Risk Models: Classifier Selection Insights from Australia, Germany, and Taiwan"
Krzysztof Lorenz, Piotr Gutowski, Anna Drab-Kurowska, Agnieszka Budziewicz-Guźlecka, Ewelina Gutowska, Magdalena Majchrzak, Tymoteusz Miller, Irmina Durlik and Ewelina Kostecka
12:30 – 14:00
🍽️ Lunch Break

Afternoon Session

Session 2: Presentations
14:00 – 15:30
"Information Flow Modeling for Transparent and Accountable AI Act Assessment"
Mattias Brännström, Themis Xanthopoulou and Lili Jiang

"The Machinery of Government: Bureaucracy, Automation and Institutional Black-Boxing"
Diletta Huyskes

"Making Privacy Risks Transparent: Causal Analysis of Generalization and Membership Inference Attack in Differentially Private SGD"
Zhou Zhou and Lili Jiang
15:30 – 16:00
☕ Coffee Break
16:00 – 17:00
Roundtable Discussion
Interactive discussion with presenters and organizers
17:00 – 17:15
Closing Remarks

As AI systems become integral to critical decision-making, ensuring their transparency, fairness, and accountability is more essential than ever. The EU AI Act, the world's first comprehensive attempt to regulate artificial intelligence, places significant emphasis on this need. In particular, the Act establishes transparency as a fundamental principle, aiming to safeguard user rights and to foster trust in and reliability of new technologies.

Achieving these objectives requires not only robust technical frameworks but also the active engagement of a diverse range of stakeholders, so that the development of AI remains aligned with societal values. The CLEAR-AI Workshop addresses this need for interdisciplinary collaboration, advancing methods, tools, and evaluation frameworks that ensure transparency, fairness, and trustworthiness in AI systems.

The CLEAR-AI Workshop aligns with ECAI 2025's mission of advancing sustainable AI by promoting transparency, participatory approaches, and rigorous methods that ensure accountability and societal alignment.

Challenge 1

Reframing Transparency as a Relation, Not a Static Property

Transparency is too often treated as something systems have rather than something they provide in relation to those who use, audit, or are affected by them. This theme challenges us to rethink transparency as an active relation between information and its audience.

Challenge 2

Connecting Methods to Meaning and Purpose

Technical tools and conceptual models for transparency abound, but they often operate without a clear sense of for whom and for what they exist. This theme explores how metrics, explanations, and frameworks can be oriented toward concrete purposes—whether legal accountability, informed oversight, or effective user understanding.

Challenge 3

Moving Toward Stakeholder-Centered Transparency

The ultimate challenge remains to make transparency usable, relevant, and responsive. This theme looks at how today's methods can evolve—through participatory, methodological, or institutional innovation—toward transparency that genuinely empowers those who depend on it.

ACM Journal on Responsible Computing

Following the workshop, a special section in the ACM Journal on Responsible Computing (JRC) dedicated to transparency in AI systems is currently open for submissions. See the full call for papers (PDF).

Submissions deadline: April 15, 2026
First-round review decisions: June 15, 2026
Deadline for revision submissions: July 8, 2026
Notification of final decisions: July 31, 2026
Camera-ready manuscripts: August 22, 2026
Tentative publication: end of 2026

Contact: Francien Dechesne, f.dechesne@law.leidenuniv.nl

Open for submissions: April 15, 2025
1st submission cut-off: June 15, 2025
1st round notification: July 15, 2025
2nd submission cut-off: August 15, 2025
2nd round notification: August 20, 2025
Final schedule published: October 22, 2025
Themis Dimitra Xanthopoulou
Dept. of Computing Science, Umeå University
Rachele Carli
Dept. of Computing Science, Umeå University
Andreas Brännström
Dept. of Computing Science, Umeå University
Francien Dechesne
eLaw Center for Law and Digital Technologies, Leiden University
Chiara Gallese
Dept. of Law, University of Turin
Mattias Brännström
Dept. of Computing Science, Umeå University
Virginia Dignum
AI Policy Lab
Juan Carlos Nieves
Umeå University
Neil Yorke-Smith
TU Delft
Tim Miller
University of Queensland
Tony Ribeiro
Laboratoire des Sciences du Numérique de Nantes
Agate Balayn
TU Delft
Daniel Kostic
Leiden University
Giuseppe Primiero
University of Milan
Jayati Deshmukh
University of Southampton
Imane Hmiddou
ALLAI
Amro Najjar
Luxembourg Institute of Science and Technology

Questions regarding the workshop: Themis Dimitra Xanthopoulou, Umeå University, Sweden — themis.xanthopoulou@umu.se