CLEAR-AI Workshop: Collaborative Methods and Tools for Engineering and Evaluating Transparency in AI [ECAI 2025]

Workshop Schedule

Morning Session

09:30–10:30 | Welcome & Introduction

Presentation: “Enhancing Active Learning efficiency with a Recommendation System based on Annotator Accuracy, Mood, and Fatigue”
Diana Mortágua, Luis Macedo and F. Amílcar Cardoso


10:30–11:00 | ☕ Coffee Break


11:00–12:30 | Session 1: Presentations

“A Visual Reader’s Eye for Image-Generation AI”
Randi Cecchine and Martha Larson

“Uncertainty-Guided Expert-AI Collaboration for Efficient Soil Horizon Annotation”
Teodor Chiaburu, Vipin Singh, Frank Haußer and Felix Biessmann

“Towards Transparent and Interpretable Credit Risk Models: Classifier Selection Insights from Australia, Germany, and Taiwan”
Krzysztof Lorenz, Piotr Gutowski, Anna Drab-Kurowska, Agnieszka Budziewicz-Guźlecka, Ewelina Gutowska, Magdalena Majchrzak, Tymoteusz Miller, Irmina Durlik and Ewelina Kostecka


12:30–14:00 | 🍽️ Lunch Break


Afternoon Session

14:00–15:30 | Session 2: Presentations

“Information Flow Modeling for Transparent and Accountable AI Act Assessment”
Mattias Brännström, Themis Xanthopoulou and Lili Jiang

“The Machinery of Government: Bureaucracy, Automation and Institutional Black-Boxing”
Diletta Huyskes

“Making Privacy Risks Transparent: Causal Analysis of Generalization and Membership Inference Attack in Differentially Private SGD”
Zhou Zhou and Lili Jiang


15:30–16:00 | ☕ Coffee Break


16:00–17:00 | Roundtable Discussion

Interactive discussion with presenters and organizers


17:00–17:15 | Closing Remarks


About the Workshop

The CLEAR-AI Workshop brings together researchers, practitioners, and stakeholders to address transparency in AI systems through collaborative and participatory approaches. Below you’ll find more details about the workshop’s focus, submission guidelines, and how to participate.

Background

As AI systems become integral to critical decision-making processes, ensuring their transparency, fairness, and accountability is more essential than ever. It is therefore not surprising that the EU AI Act, the world's first attempt to systematically regulate artificial intelligence, places significant emphasis on this need. In particular, the Act establishes transparency as a fundamental principle, aiming to safeguard user rights and foster trust and reliability in new technologies. In this context, transparency assumes a salient value as a prerequisite for ensuring the fair and responsible development and deployment of AI.

Achieving these objectives necessitates not only the establishment of robust technical frameworks but also the active engagement of a diverse range of stakeholders to ensure that the development of AI is aligned with societal values. In this regard, interdisciplinary research and implementation initiatives have been identified as crucial to facilitate progress in this area.

Following from this, the workshop focuses on advancing the design, monitoring, and evaluation of transparent AI systems. By combining participatory approaches that amplify stakeholder voices with formal methodologies that ensure rigour and reproducibility, the workshop aims to advance the state of the art in trustworthy AI, bridging technical and social perspectives.

The CLEAR-AI Workshop addresses the critical need for interdisciplinary collaboration to advance methods, tools, and evaluation frameworks that ensure transparency, fairness, and trustworthiness in AI systems. In addition, specific focus is placed on tackling the challenges that emerge from participatory settings. Submissions should emphasize the integration of technical rigour with societal relevance to address the challenges of building trustworthy AI systems. The workshop aligns with ECAI 2025's mission of advancing sustainable AI by promoting transparency, participatory approaches, and rigorous methods that ensure accountability and societal alignment.

Problem Description

Transparency plays a central instrumental role in addressing and understanding a wide range of problems, from legal to ethical, in AI development and deployment. Yet transparency is often a vague notion that is hard to pin down from an objective perspective, and it is defined differently at different levels of understanding of AI systems.

In CLEAR, we want to approach the problem from the direction of stakeholders' needs for transparency. By understanding these information needs, we can work to meet them concretely.

Transparency needs can arise from many different sources, such as legal understanding and effective user agency; any workable method will therefore have to address the participatory aspect of eliciting these needs from stakeholders.

Topics of Interest

The workshop welcomes contributions on topics including, but not limited to:

Perspectives on the above issues from under-represented countries are particularly welcome.

Challenges and Workshop perspective

Challenge 1. Reframing Transparency as a Relation, Not a Static Property

Transparency is too often treated as something systems have rather than something they provide in relation to those who use, audit, or are affected by them. This theme challenges us to rethink transparency as an active relation between information and its audience, asking what kinds of transparency actually reach and serve their intended stakeholders. Taking a step back, this theme also pushes us to reflect on the stakeholders behind the system design and their agency through the system.

Challenge 2. Connecting Methods to Meaning and Purpose

Technical tools and conceptual models for transparency abound, but they often operate without a clear sense of for whom and for what they exist. This theme explores how metrics, explanations, and frameworks can be oriented toward concrete purposes, whether legal accountability, informed oversight, or effective user understanding, rather than remaining abstract or self-referential.

Challenge 3. Moving Toward Stakeholder-Centered Transparency

The ultimate challenge remains to make transparency usable, relevant, and responsive. This theme looks at how today’s methods can evolve—through participatory, methodological, or institutional innovation—toward transparency that genuinely empowers those who depend on it.

Target Audience

The CLEAR-AI Workshop targets an interdisciplinary audience, including:

Submission guidelines

Authors are invited to submit full papers of 5,000 to 8,000 words following the ACM guidelines (https://www.acm.org/publications/authors/submissions). Submissions will be reviewed by at least two reviewers in a double-blind review process. Papers, including transfer papers from the ECAI main track, should be submitted through our EasyChair link. At least one author of each accepted paper is required to register for the workshop and for ECAI. Informal proceedings will be distributed to ECAI 2025 registrants in electronic form.

📚 Special Issue: ACM Journal on Responsible Computing

Extended Publication Opportunity

Following the workshop, we are pleased to announce a special section in the ACM Journal on Responsible Computing (JRC) dedicated to transparency in AI systems. This special section will be part of a regular issue of the JRC, with additional submissions from workshop participants also welcomed.

🎯 Submission Requirements

Post-Workshop Revision:

📅 Timeline

Submission Deadline: 31 January 2026

✅ Publication Details

💡 Contact person

Francien Dechesne
📧 f.dechesne@law.leidenuniv.nl

Important Dates

Venue

Organising Committee

Advisory Board

Program Committee

Contact

Please direct any questions regarding the workshop to: