CURATOR
A pinboard by
Himabindu Lakkaraju

PhD Student, Stanford University

PINBOARD SUMMARY

Exploring how AI can help improve decision making in health care and criminal justice

My research interests lie at the intersection of artificial intelligence and human decision making. The primary goal of my research is to explore how artificial intelligence can help improve real-world decision making in domains such as health care and criminal justice. My work has a two-fold focus:

1) Understanding the underlying patterns, biases, and motivations involved in human decision making, and obtaining diagnostic insights into the mistakes made by human decision makers via empirical analysis and machine learning algorithms.

2) Building models and algorithms which are transparent, easy to understand, and able to readily assist human decision makers.

To this end, my thesis addresses various challenges involved in developing and evaluating intelligent frameworks which can complement and analyze human decision making. More specifically, my research focuses on the following questions:

a) How do we evaluate the performance of algorithms designed to perform real-world decision-making tasks, such as making judicial bail decisions and health care treatment recommendations?

b) How do we obtain interpretable diagnostic insights into the patterns of mistakes made by human decision makers on these tasks?

c) How do we build accurate and interpretable models which can be readily understood, trusted, and consequently employed by human judges in their decision making?

d) How do we identify and remedy undesirable biases of human decision makers and algorithms?

3 ITEMS PINNED

Interpretable Decision Sets: A Joint Framework for Description and Prediction.

Abstract: One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
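
The core object the abstract describes, a set of independent if-then rules with a default fallback, is easy to picture in code. Below is a minimal, illustrative Python sketch; the Rule class, feature names, and example rules are inventions for illustration, not the paper's actual code or data:

```python
# Minimal sketch of a decision set: independent if-then rules plus a default.
# The Rule class, features, and example rules below are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """One independent if-then rule: if condition(x) holds, predict label."""
    description: str
    condition: Callable[[dict], bool]
    label: str

def classify(rules: List[Rule], x: dict, default: str = "no-risk") -> str:
    """Check each rule independently; in a learned decision set the rules
    are non-overlapping, so at most one should fire per example."""
    for rule in rules:
        if rule.condition(x):
            return rule.label
    return default  # fallback when no rule covers x

# Hypothetical rules loosely styled after the paper's medical examples.
decision_set = [
    Rule("if bmi >= 30 and age < 40 then diabetes-risk",
         lambda x: x["bmi"] >= 30 and x["age"] < 40, "diabetes-risk"),
    Rule("if smoker and age >= 40 then cardiac-risk",
         lambda x: x["smoker"] and x["age"] >= 40, "cardiac-risk"),
]

print(classify(decision_set, {"bmi": 32, "age": 35, "smoker": False}))
# prints: diabetes-risk
```

Because each rule can be read and checked on its own, a user can explain any single prediction by pointing at the one rule that fired, which is what the user-study results above are measuring.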

Pub.: 18 Nov '16, Pinned: 01 Jul '17
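
The learning side can also be sketched at a high level: the paper casts rule selection as maximizing an objective that rewards accuracy while penalizing interpretability costs such as set size and rule overlap, and proves that objective is non-monotone submodular. The standalone toy sketch below uses a plain greedy loop with made-up penalty weights as a stand-in; the paper's actual objective terms and its approximation algorithm (which carries near-optimality guarantees) differ.

```python
# Toy sketch of selecting rules under an accuracy-vs-interpretability
# trade-off. The tuple representation, penalty weights, and greedy loop
# are illustrative stand-ins, not the paper's formulation or algorithm.
from typing import Callable, List, Tuple

Rule = Tuple[str, Callable[[dict], bool], str]  # (description, condition, label)
Example = Tuple[dict, str]                      # (features, true label)

def predict(rules: List[Rule], x: dict, default: str = "no-risk") -> str:
    for _, cond, label in rules:
        if cond(x):
            return label
    return default

def objective(rules: List[Rule], data: List[Example],
              lam_size: float = 0.05, lam_overlap: float = 0.1) -> float:
    """Accuracy reward minus penalties for set size and rule overlap."""
    acc = sum(predict(rules, x) == y for x, y in data) / max(len(data), 1)
    overlap = sum(sum(cond(x) for _, cond, _ in rules) > 1 for x, _ in data)
    return acc - lam_size * len(rules) - lam_overlap * overlap

def greedy_select(candidates: List[Rule], data: List[Example]) -> List[Rule]:
    """Repeatedly add the candidate rule with the largest marginal gain;
    stop once no remaining rule improves the objective."""
    selected: List[Rule] = []
    while True:
        base = objective(selected, data)
        gains = [(objective(selected + [r], data) - base, i)
                 for i, r in enumerate(candidates) if r not in selected]
        if not gains:
            break
        best_gain, best_i = max(gains)
        if best_gain <= 0:
            break
        selected.append(candidates[best_i])
    return selected
```

The penalty weights make the trade-off explicit: raising lam_size or lam_overlap buys a smaller, cleaner rule set at some cost in accuracy, which mirrors the balance the abstract claims the learned objective strikes.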