TADM 2021

First International Workshop on Trusted Automated Decision-Making
An ETAPS 2021 Workshop

Workshop Goals

When can automated decision-making systems or processes be trusted? Is it sufficient if all decisions are explainable, secure, and fair? As more and more life-defining decisions are delegated to algorithms based on Machine Learning (ML), it is becoming increasingly clear that the touted benefits of introducing novel algorithms, especially those based on Artificial Intelligence (AI), into our daily lives are accompanied by serious negative societal consequences. Corporations are incentivized to promote opacity rather than transparency in their decision-making processes, owing to the proprietary nature of their algorithms. Which disciplines can help software professionals demonstrate the trustworthiness of automated decision-making systems?

The decision-making logic of black-box approaches -- such as those based on deep learning or deep neural networks -- cannot be comprehended by humans. The field of "explainable AI," which prescribes the use of adjunct explainable models, partially mitigates this problem. But do adjunct models make the whole process trustworthy enough? Detractors of explainable AI propose that decisions that could potentially impact human safety be restricted to interpretable and transparent algorithms. Although there have been a few recent successes in the creation of interpretable models -- including decision trees and case-based reasoning approaches -- it is not yet clear whether they are sufficiently accurate or practical.

Submission Guidelines

We particularly encourage research of a nascent or speculative nature, in order to chart a way forward. Two kinds of submissions are solicited: abstracts and position papers. TADM aims to foster collaborative research toward attaining the highest levels of trust in automated decision-making. Abstracts and position papers will be subject to peer review by an interdisciplinary team, with a minimum of two reviewers per submission. The top six submissions will be invited for presentation, and their camera-ready versions will be included in the pre-proceedings.

We invite abstracts and position papers on research in the following areas:

Creation of interpretable models for specific domains

Extraction of interpretable models of comparable accuracy from black box models

Novel approaches to learning sparse models

Formal approaches for synthesis of interpretable models from specifications

Metrics to assess the veracity of recent approaches to interpretable model creation

Challenge problems in finance, criminal justice, or social and industrial robotics

Speakers

Cynthia Rudin

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.

Some of her (collaborative) projects are: (1) She has developed practical code for optimal decision trees and sparse scoring systems, used for creating models for high-stakes decisions; some of these models are used to manage treatment and monitoring for patients in hospital intensive care units. (2) She led the first major effort to maintain a power distribution network with machine learning (in NYC). (3) She developed algorithms for crime series detection, which allow police detectives to find patterns of housebreaks; her code was developed with detectives in Cambridge, MA, and later adopted by the NYPD. (4) She solved several well-known, previously open theoretical problems about the convergence of AdaBoost and related boosting methods. (5) She is a co-lead of the Almost-Matching-Exactly lab, which develops matching methods for use in interpretable causal inference.

Workshop Organizers

Ramesh Bharadwaj, NRL

From 2002 to 2007, Ramesh Bharadwaj ran a workshop series on Automated Verification of Infinite-State Systems (AVIS) at ETAPS, which was well received and made significant contributions to that area. The stakes are now higher, and the field has moved on, necessitating the involvement of transdisciplinary researchers and practitioners in the socially important problem domain of trusted automated decision-making.

Ilya Parker, 3D Rationality

Ilya Parker will moderate a panel on transdisciplinary collaborative research on trustworthiness.

Program Committee:
Jennifer M. Logg, Georgetown University
Madhavan Mukund, CMI Chennai
Sanjit A. Seshia, UC Berkeley

Get in touch

Please feel free to reach out to Ramesh Bharadwaj with any questions you may have.