Blaine Hoak

University of Wisconsin-Madison

Position: PhD Candidate
Rising Stars year of participation: 2025
Bio

Blaine Hoak is a Ph.D. Candidate in Computer Sciences at the University of Wisconsin-Madison in the Madison Security and Privacy Group (MadS&P), advised by Prof. Patrick McDaniel. Blaine previously worked as a Research Scientist Intern at Visa Research on the Trustworthy AI team. Her research sits at the intersection of computer security and AI/ML, focusing on uncovering and solving trustworthiness issues in AI systems. Her work has been published in a variety of security and AI/ML venues, including USENIX Security, ICLR, IEEE SaTML, and ECAI. She has taken part in a wide range of service, including serving on the Program Committees of IEEE S&P, USENIX Security, CCS, NeurIPS, and ICLR, and has received multiple awards for her reviewing. Blaine received her B.S. in Biomedical Engineering from Pennsylvania State University in 2020.

Areas of Research
  • Machine Learning
Uncovering the Mechanics of Model Failure: Textures and Beyond

Artificial Intelligence (AI) models now serve as core components of a range of mature applications, yet they remain vulnerable to a wide spectrum of attacks. Despite more than a decade of robustness research in the vision domain, we still lack a systematic understanding of the underpinnings of model vulnerability. Without interpretable measures of robustness, defenses remain underdeveloped and unused in real systems. My research at the nexus of machine learning and security, influenced by my background in biomedical engineering, uncovers trustworthiness issues that arise from functional differences in how humans and machines “see” the world. Specifically, I have identified that textures, or repeated patterns, are a fundamental mechanism that vision models use to generalize, and one that leaves them consistently exposed to failures and adversarial manipulation. My work has focused on developing new datasets and metrics to quantify model reliance on texture, as well as experimental frameworks that provide interpretable insights into why models fail. Through my research, I aim to make evaluations of robustness both measurable and understandable, offering an intuitive lens on model brittleness that traditional robustness benchmarks do not. In the future, I intend to extend this research to generative AI, where texture-related biases may play a central role in both vulnerabilities and the detection of synthetic media.