Aimen Gaba

University of Massachusetts Amherst

Position: PhD Candidate
Rising Stars year of participation: 2025
Bio

Aimen is a final-year PhD candidate in Computer Science at the University of Massachusetts Amherst. Her research sits at the intersection of HCI, Visualization Design, and Machine Learning, with a focus on building AI systems that are transparent, fair, and trustworthy. Drawing from HCI, psychology, and social science, she investigates how design choices in language and visualization influence user perceptions of bias, fairness, and trust in machine learning models. She is a mixed-methods researcher, employing interviews, surveys, large-scale behavioral studies, and participatory design to understand how people navigate technology in real-world contexts. While her primary focus is on fairness and trust in AI, she is also interested in accessibility and works in this space by centering disability-led innovation with blind and low-vision communities. Aimen also interned at Adobe Research in 2024, where she explored user preferences for extractive versus generative answers in document-grounded contexts.

Areas of Research
  • Human-Computer Interaction
Towards Responsible and Trustworthy AI Using Human-Centered Evaluation

As machine learning (ML) systems become increasingly embedded in high-impact domains such as healthcare, finance, hiring, and criminal justice, they carry significant risks of perpetuating bias and inequity. Language models, for example, can generate stigmatizing language when referring to marginalized groups, including women and non-binary people, reinforcing stereotypes and eroding user trust. Such harms reveal that the challenge of building trustworthy AI extends beyond algorithms themselves – design choices in how these systems are communicated and presented to users play an equally critical role. My research investigates how visualization design and natural language influence users' perceptions of fairness and bias in ML models, and how those perceptions shape trust. Through large-scale behavioral experiments and qualitative studies, I examine how design elements, model performance, fairness trade-offs, and user characteristics affect engagement with ML outputs. A strand of this work focuses on how cisgender and non-binary individuals experience harmful language in large language models, the ways this language affects their willingness to trust or contest system outputs, and the design changes they advocate for achieving more accurate and inclusive representation. By bridging human–computer interaction, visualization design, and AI ethics, I develop actionable design guidelines that enable more transparent and inclusive communication of ML outputs. These guidelines go beyond technical mitigation to empower users to critically interpret AI behavior and make informed decisions. Ultimately, my work seeks to reimagine fairness in AI not only as a technical property, but as a lived, user-centered experience. This research advances pathways for building ML systems that are not only accurate, but also equitable, transparent, and trustworthy.