TADM 2025

Second International Workshop on Trusted Automated Decision-Making
An ETAPS 2025 Workshop
ETAPS is the premier conference on theory and practice of software systems.

Workshop Description

The Second International Workshop on Trusted Automated Decision-Making (TADM 2025) elicits discussion from researchers on what constitutes trustworthiness in automated decision-making. The landscape of AI safety has continued to evolve and expand rapidly. From hallucinations to black-box algorithms, AI requires a paradigmatic shift in thinking about safety in software design and development. The aim of this workshop is to bring researchers together to advance the technical aspects of attaining trustworthiness and to ask how to engineer safe and robust AI. Continuing the conversation on safety is critical to ensuring the responsible development and deployment of automated decision-making systems. Topics of interest include, but are not limited to, Adversarial Testing, Explainability, Interpretability, Counterfactual Reasoning, Bias Audits, Multi-Disciplinary Collaboration, and Safety by Design.

Submission Procedure

We particularly encourage research of a nascent or speculative nature, in order to chart a way forward. Two kinds of submissions are solicited: abstracts and position papers. TADM fosters the future of collaborative research for attaining the highest levels of trust in automated decision-making. Abstracts and position papers will be peer reviewed by an interdisciplinary team, with a minimum of two reviewers per submission. The top six submissions will be invited for presentation, and their camera-ready versions will be included in the pre-proceedings.

We invite abstracts and position papers on research in the following areas:

Creation of interpretable models for specific domains

Extraction of interpretable models of comparable accuracy from black box models (a sketch of one simple approach follows this list)

Unique and novel approaches to learning sparse models

Formal approaches for synthesis of interpretable models from specifications

Metrics to assess the veracity of recent approaches to interpretable model creation

Challenge problems in finance, criminal justice, or social and industrial robotics
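
To make the extraction topic concrete: one common baseline is surrogate modelling, in which a small decision tree is fit to a black-box model's predictions and then compared against it for accuracy and fidelity. Below is a minimal sketch, assuming scikit-learn is available; the dataset, the random-forest stand-in for the black box, and the depth limit are illustrative choices only.

# Minimal sketch: distill a black-box classifier into a small,
# interpretable decision tree and compare their test accuracies.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in "black box": an ensemble whose decisions are hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate: fit a depth-limited tree to the black box's *predictions*,
# not the true labels, so the tree approximates the black box itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black box accuracy:", black_box.score(X_test, y_test))
print("surrogate accuracy:", surrogate.score(X_test, y_test))
# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", (surrogate.predict(X_test) == black_box.predict(X_test)).mean())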

Post-workshop proceedings

Proceedings will be published as videos on the TADM website.

Speakers

David Lorge Parnas

Although AI is often presented as a new field that has made great strides recently, we should have a sense of deja vu. There were university courses on AI more than fifty years ago. Then, as now, it was a promising field, a field in which researchers made great promises. It was then, and remains, a field that produces untrustworthy programs. Why is that, and what can we do about it?

David Lorge Parnas began studying professional software development in 1969. He is best known for the concept of information hiding, methods of precise software documentation, and advocacy for professionalism. He has taught at Carnegie Mellon, Technische Hochschule Darmstadt, McMaster University, and the University of Limerick, among others. He has received four honorary doctorates and is a Fellow of the Royal Society of Canada, the Royal Irish Academy, the Canadian Academy of Engineering, the Gesellschaft für Informatik, the ACM, and the IEEE.

Andrés Corrada-Emmanuel

Dr. Corrada-Emmanuel trained as a physicist at Harvard and the University of Massachusetts at Amherst. He transitioned into industrial work by joining Paul Bamberg, one of his instructors at Harvard, at Dragon Systems. There he was part of the R&D team that released the first consumer continuous speech recognition product, the still-continuing Naturally Speaking line. The problem of errors in human annotators' transcriptions of recorded speech, which he encountered there, was the genesis of his research into evaluating experts without ground truth. In 2008 he started working with Howard Schultz at UMass Amherst on the problem of fusing multiple noisy maps without ground truth. They developed an algebraic method that used Compressed Sensing techniques to recover the average error in sparsely-correlated maps. This was followed in 2010 by a classification version that uses Algebraic Geometry. More than a decade of research with this algebraic approach to evaluation culminated in 2023 with the realization that it is part of a wider logic of unsupervised evaluation. He recently founded NTQR Logic to research and promote the use of this logic for the formal verification of expert evaluations in unsupervised settings.
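
For readers new to the area, the evaluation-without-ground-truth problem has a classic algebraic illustration: if three binary classifiers err independently given the true label, and symmetrically across classes, then their pairwise agreement rates alone determine each one's accuracy. The following is a minimal sketch of that generic moment-based recovery under those stated assumptions; it is not Dr. Corrada-Emmanuel's specific method, and the simulated annotators exist only to exercise the identity.

# Minimal sketch: recover the accuracies of three classifiers from
# their pairwise agreement alone, with no access to the true labels.
# Assumes errors independent given the label, symmetric across
# classes, and all classifiers better than chance.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y = rng.choice([-1, 1], size=n)   # hidden ground truth, never shown to the estimator

true_acc = [0.9, 0.8, 0.7]        # invented annotators for illustration
votes = np.array([np.where(rng.random(n) < a, y, -y) for a in true_acc])

# Under the independence assumption, the agreement moment
# m_ij = E[c_i * c_j] factors as (2a_i - 1)(2a_j - 1), so each
# d_i = 2a_i - 1 is recoverable as sqrt(m_ij * m_ik / m_jk).
m = votes @ votes.T / n
for i in range(3):
    j, k = [x for x in range(3) if x != i]
    d = np.sqrt(m[i, j] * m[i, k] / m[j, k])
    print(f"classifier {i}: true accuracy {true_acc[i]:.2f}, estimated {(1 + d) / 2:.3f}")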

Alan Wassyng

Alan Wassyng is a Professor in the Department of Computing and Software at McMaster University. He has been working on safety-critical software-intensive systems for over 30 years. His research focuses on how we develop such systems so that they are safe and dependable, and how we certify this. As a consultant to Ontario Hydro (now Ontario Power Generation) in the 1990s, he helped develop a new methodology for building safety-critical software and was a senior member of the team that used that methodology to redesign the shutdown systems. That methodology, slightly modified, is still in use today. After running a software consulting business for 15 years, he returned to academic life, joining the Department of Computing and Software at McMaster University in 2002. His research at McMaster focuses on the safety assurance of safety-critical software-intensive systems. With colleagues Tom Maibaum and Mark Lawford at McMaster, he founded the McMaster Centre for Software Certification (McSCert) in 2009 and was its Director from 2009 to 2017. McSCert won the 2024 IEEE CS TCSE Distinguished Synergy Award "for outstanding and sustained contributions to the software engineering community". Alan's graduate students publish primarily on developing and assuring the safety of software-intensive systems, especially in the automotive and medical device domains. He was also one of the founders of the Software Certification Consortium (SCC) and has been the Chair of the SCC Steering Committee since its inception; the SCC held its first meeting in August 2007 and its 21st meeting in May 2024. Alan has collaborated with the US Nuclear Regulatory Commission (NRC) and the US Food and Drug Administration on the safety of digital systems. He was one of ten researchers selected worldwide by the NRC to participate in its Expert Clinic on Safety of Digital Instrumentation & Control in 2010.

Omri Isac

I am a PhD student advised by Prof. Guy Katz at the School of Computer Science and Engineering at the Hebrew University of Jerusalem. My research is centered on the verification of deep neural networks (DNNs), focusing on proof production. In particular, I am working on: (1) enabling DNN verifiers to produce proofs that attest to their correctness or reveal potential errors in their implementation; (2) reliably checking proofs produced by verifiers; and (3) leveraging proofs to improve the scalability of DNN verifiers.

Additionally, I'm interested in the complexity- and computability-theoretic aspects of DNN verification with respect to the DNN architecture.
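
To give a flavour of what DNN verification involves, the sketch below uses interval bound propagation, one of the simplest sound-but-incomplete techniques: propagate an input box through the layers and check whether the resulting output bounds rule out the unsafe behaviour. This is a generic illustration, not the proof-producing verifiers described above; the toy network, weights, and property are invented.

# Minimal sketch: interval bound propagation through a 2-layer ReLU net.
# Soundly over-approximates the outputs reachable from an input box; if
# the over-approximation satisfies the property, so does the network.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Tight interval bounds of W @ x + b for x in [lo, hi]."""
    center = W @ (lo + hi) / 2 + b
    radius = np.abs(W) @ (hi - lo) / 2
    return center - radius, center + radius

# Invented toy network: 2 inputs -> 3 hidden ReLU units -> 2 outputs.
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 1.0]]); b1 = np.zeros(3)
W2 = np.array([[1.0, 1.0, 0.0], [0.0, -1.0, 1.0]]);    b2 = np.zeros(2)

# Input region: a small box around a nominal point (a robustness query).
x0, eps = np.array([1.0, 0.5]), 0.1
lo, hi = x0 - eps, x0 + eps

lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
lo, hi = affine_bounds(lo, hi, W2, b2)

# Property: output 0 exceeds output 1 everywhere on the input box.
# Certified if the worst case (min out0 minus max out1) is positive.
print("verified" if lo[0] - hi[1] > 0 else "inconclusive (bounds too loose)")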

Sandhya Saisubramanian

Sandhya Saisubramanian is an Assistant Professor in the School of EECS at Oregon State University, where she leads the Intelligent and Reliable Autonomous Systems (IRAS) research group. Her research focuses on safe and reliable decision-making in autonomous systems that operate in complex, unstructured environments. She received her Ph.D. from the University of Massachusetts Amherst. Saisubramanian is a recipient of the Outstanding Program Committee Member award at ICAPS 2022 and a Distinguished Paper Award at IJCAI 2020.

William M. Farmer

William M. Farmer has 40 years of experience working in industry and academia in computing and mathematics. He received a B.A. in mathematics from the University of Notre Dame in 1978, and an M.A. in mathematics (1980), an M.S. in computer sciences (1983), and a Ph.D. in mathematics (1984) from the University of Wisconsin-Madison. He is currently a Professor in the Department of Computing and Software at McMaster University. Before joining McMaster in 1999, he conducted research in computer science for twelve years at The MITRE Corporation in Bedford, Massachusetts, USA, and taught computer programming and networking courses for two years at St. Cloud State University.

Dr. Farmer's research interests are logic, mathematical knowledge management, mechanized mathematics, and formal methods. One of his most significant achievements is the design and implementation of the IMPS proof assistant, done at MITRE in partnership with Dr. Joshua Guttman and Dr. Javier Thayer. His work on IMPS has led to research on developing practical logics based on simple type theory and NBG set theory, and on organizing mathematical knowledge as a network of interconnected axiomatic theories. He has also collaborated for several years with Dr. Jacques Carette at McMaster on developing a framework for integrating axiomatic and algorithmic mathematics. As part of this research, Dr. Farmer has investigated how to reason about the interplay of syntax and semantics, as exhibited in syntax-based mathematical algorithms like symbolic differentiation, within a logic equipped with global quotation and evaluation operators. Dr. Farmer is currently working on developing a communication-oriented approach to formal mathematics as an alternative to the standard certification-oriented approach employed by proof assistants.
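
Symbolic differentiation, mentioned above, is the canonical syntax-based algorithm: it recurses on the shape of an expression rather than computing with its value. Below is a minimal sketch over an invented toy expression type; it illustrates the general idea only, not IMPS or the quotation/evaluation framework.

# Minimal sketch: symbolic differentiation as recursion on syntax.
# Expressions are nested tuples: ('x',), ('const', c),
# ('add', e1, e2), ('mul', e1, e2).

def diff(e):
    """Return the derivative of expression e with respect to x."""
    tag = e[0]
    if tag == 'x':
        return ('const', 1)
    if tag == 'const':
        return ('const', 0)
    if tag == 'add':                      # (f + g)' = f' + g'
        return ('add', diff(e[1]), diff(e[2]))
    if tag == 'mul':                      # (f * g)' = f'*g + f*g'
        return ('add', ('mul', diff(e[1]), e[2]),
                       ('mul', e[1], diff(e[2])))
    raise ValueError(f"unknown expression: {e!r}")

# d/dx (x * x + x)  ==>  (1*x + x*1) + 1, unsimplified
print(diff(('add', ('mul', ('x',), ('x',)), ('x',))))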

Dr. Lingyang Chu

Dr. Lingyang Chu is an Assistant Professor in the Department of Computing and Software at McMaster University. He earned his Ph.D. in Computer Science from the University of Chinese Academy of Sciences in 2015. Following his doctorate, Dr. Chu was a Postdoctoral Fellow at Simon Fraser University under the supervision of Prof. Jian Pei from 2015 to 2018. He then served as a Principal Researcher at Huawei Technologies Canada before joining McMaster University on January 1, 2021.

Throughout his career, Dr. Chu has made significant contributions to both academia and industry. His scholarly output includes 56 research works that have accumulated over 1,976 citations. His research has been published in top-tier venues, and one of his works on interpretable AI was highlighted by a mainstream AI research portal in China in 2018. In industry, many of his research outcomes have been successfully deployed as services and products. Notably, a personalized federated learning system he worked on was implemented as a core service on tens of millions of smartphones running Harmony OS.
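
For context, the core step behind federated learning, the family of techniques underlying the system mentioned above, is easy to state: each client improves the model on its private data, and the server averages the resulting models weighted by data size, never seeing the raw data. Below is a minimal generic sketch of federated averaging (FedAvg) with invented synthetic clients; it is not a description of Dr. Chu's deployed system.

# Minimal sketch: rounds of federated averaging (FedAvg) for a linear
# model, with synthetic per-client data standing in for private
# on-device data.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w_global = np.zeros(2)

def local_update(w, X, y, lr=0.1, steps=20):
    """Client-side: a few gradient steps on the local least-squares loss."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Clients with differently sized private datasets.
sizes = [50, 200, 1000]
clients = []
for n in sizes:
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

for _ in range(10):
    # Server: average client models, weighted by local data size.
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = sum(n * w for n, w in zip(sizes, updates)) / sum(sizes)

print("recovered weights:", w_global)   # should approach w_true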

Get in touch

Please feel free to reach out to Ramesh Bharadwaj with any questions you may have.