CS Seminar: Dr. Himabindu Lakkaraju, “Enforcing Right to Explanation: Algorithmic Challenges and Opportunities”

Speaker: Himabindu Lakkaraju
Title: “Enforcing Right to Explanation: Algorithmic Challenges and Opportunities”
Abstract: As predictive and generative models are increasingly deployed in high-stakes applications across critical domains including healthcare, law, policy, and finance, it becomes important to ensure that relevant stakeholders understand the behaviors and outputs of these models so that they can determine if and when to intervene. To this end, several techniques have been proposed in recent literature to explain these models. In addition, multiple regulatory frameworks (e.g., GDPR, CCPA) introduced in recent years have emphasized the importance of enforcing the key principle of “Right to Explanation” to ensure that individuals who are adversely impacted by algorithmic outcomes are provided with actionable explanations. In this talk, I will discuss the gaps that exist between regulations and state-of-the-art technical solutions when it comes to explainability of predictive and generative models. I will then present some of our latest research that attempts to address some of these gaps. I will conclude the talk by discussing broader challenges that arise as we think about enforcing the right to explanation in the context of large language models and other large generative models.
Bio: Himabindu (Hima) Lakkaraju is an assistant professor at Harvard University focusing on the algorithmic, theoretical, and applied aspects of explainability, fairness, and robustness of machine learning models. Hima has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair. She has also received several prestigious awards, including the NSF CAREER award, the AI2050 Early Career Fellowship from Schmidt Futures, and multiple best paper awards at top-tier ML conferences, as well as grants from NSF, Google, Amazon, JP Morgan, and Bayer. Hima has given keynote talks at various top ML conferences and associated workshops, including CIKM, ICML, NeurIPS, ICLR, AAAI, and CVPR, and her research has been showcased by popular media outlets including the New York Times, MIT Tech Review, TIME magazine, and Forbes. More recently, she co-founded the Trustworthy ML Initiative to enable easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.
Zoom: https://wse.zoom.us/j/93686573446

Thursday, April 4, 2024, 10:45 to 11:45

Hackerman Hall B17, Johns Hopkins University