Michelle Lam
Stanford University
mlam4@cs.stanford.edu
Bio
Michelle Lam is a fifth-year Computer Science Ph.D. candidate at Stanford University in the Human-Computer Interaction Group. She builds systems that grant non-AI experts creative control over AI by allowing them to define AI behavior in their own words, from objective functions to evaluations to end-to-end AI applications. Advised by Michael Bernstein and James Landay, Michelle is supported by a Stanford Interdisciplinary Graduate Fellowship and was a Stanford HAI Graduate Fellow, a Stanford Technology and Racial Equity Graduate Fellow, a Brown Institute for Media Innovation Magic Grant recipient, and a Siebel Scholar. Her work has received awards at CHI and CSCW, and she holds an MS and BS in Computer Science from Stanford University. Website: michelle123lam.github.io
Areas of Research
- Human-Computer Interaction
Granting Non-AI Experts Creative Control Over AI Systems
Many harmful behaviors and problematic deployments of AI stem from the fact that AI experts are not experts in the vast array of settings where AI is applied. Non-AI experts from these domains are well positioned to contribute their expertise and directly design the AI systems that impact them, but they face substantial technical and effort barriers. Could we redesign AI development tools to match the language of non-technical end users? My research develops novel systems that allow non-AI experts to define AI behavior in terms of interpretable, self-defined concepts. Monolithic, black-box models do not yield such control, so we introduce techniques for users to create many narrow, personalized models that they can better understand and steer. We demonstrate the success of this approach across the AI lifecycle: from designing AI objectives to evaluating AI behavior to authoring end-to-end AI systems. When non-AI experts design AI from start to finish, they notice gaps and build solutions that AI experts could not, such as creating new feed ranking models to mitigate partisan animosity, surfacing underreported issues with content moderation models, and activating unique pockets of LLM behavior to amplify their personal writing style.