TADM 2025

Second International Workshop on Trusted Automated Decision-Making
An ETAPS 2025 Workshop
ETAPS is the premier conference on theory and practice of software systems.

Workshop Description

The TADM workshop fosters discussion of safety in the development and deployment of software that incorporates artificial intelligence technologies. Trust is critical to the safe adoption of new technologies. As a community of software researchers, what can we do to ensure that automated decision-making systems are designed through trustworthy processes? How can we assess machine learning, large language models, and other emerging artificial intelligence (AI) technologies? Is explainability enough? If not, what further measures can be taken?

Safety and trust challenges arise with unique specificities in each new context. The recent explosion in the use of large language models (LLMs) such as Codex, derived from GPT-3, to generate source code for software applications poses distinctive questions. It is estimated that Codex produces correct code only about 30% of the time, increasing the likelihood that the internet will be riddled with vulnerable code that malicious actors can readily exploit. Is this inevitable, or can we learn to avoid these potentially dangerous pitfalls of AI-based code generation? Can we invent methods and tools for error mitigation and containment? Are formal methods the answer for finding and fixing flaws in generated code? Could we exploit adversarial learning to retrain the models and prevent them from repeating their mistakes? Are there benchmarks or tests to certify freedom from exploitable vulnerabilities?

To initiate discussion on this important societal need, we invite interdisciplinary researchers, computer scientists, and practitioners with novel research ideas. We particularly encourage research of a nascent or speculative nature, in order to chart a way forward.

Submission Procedure

Two kinds of submissions are solicited: abstracts and position papers. TADM fosters future collaborative research toward attaining the highest levels of trust in automated decision making. Abstracts and position papers will be peer reviewed by an interdisciplinary team, with a minimum of two reviewers per submission. The top six submissions will be invited for presentation, and their camera-ready versions will be included in the pre-proceedings.

We invite abstracts and full papers on research in the following areas:

Creation of interpretable models for specific domains

Extraction of interpretable models of comparable accuracy from black box models

Unique and novel approaches to learning sparse models

Formal approaches for synthesis of interpretable models from specifications

Metrics to assess veracity of recent approaches in interpretable model creation

Challenge problems in finance, criminal justice, or social and industrial robotics

Post-workshop proceedings

Proceedings will be published as videos on the TADM website.

Speakers TBA

Program Committee TBA

Get in touch

Please feel free to reach out to Ramesh Bharadwaj with any questions you may have.