Beatrice Bevilacqua
Purdue University
bbevilac@purdue.edu
Bio
Beatrice Bevilacqua is a Ph.D. candidate in Computer Science at Purdue University, advised by Prof. Bruno Ribeiro and also working closely with Prof. Haggai Maron. Her research focuses on models for graph data and for other mathematical objects with an inherent symmetry structure. Prior to joining Purdue, she earned an M.Sc. in Computer Engineering at Sapienza University of Rome. Beatrice has been a Research Scientist Intern at Google DeepMind, working with Dr. Petar Veličković, and at Meta AI (FAIR). She won the 2024 Employee Recognition Award from Purdue University, the Top Reviewer Award at NeurIPS ’22 and ’23, and the Honors Award from Sapienza University of Rome, and she was the recipient of the Andrews PhD Fellowship.
Areas of Research
- Machine Learning
Unified Graph Representations for Diverse Tasks: Towards Graph Foundation Models
Graph Neural Networks (GNNs) have become the dominant approach for learning on graph-structured data. Despite their widespread adoption, however, they face several challenges that limit their effectiveness in practical applications, including poor generalization to out-of-distribution (OOD) data and limited expressive power. While my previous research has addressed these issues through targeted solutions, it has become evident that a unified approach is needed to overcome these limitations across diverse tasks. My current research focuses on a new paradigm in graph learning: Graph Foundation Models (GFMs). I am exploring new graph representations tailored to GFMs that enable pretraining graph models on diverse tasks, including node, link, and multi-node prediction. The pretrained models can then be applied inductively to new data, generating representations suitable for a variety of tasks, even ones not encountered during training. Preliminary results highlight the potential of this approach, particularly its ability to recover canonical versions of graph eigenvectors, demonstrating both the expressiveness and the generalization capabilities of the learned representations. Empirically, our method can be effectively pretrained and reused across different tasks, achieving strong performance in diverse scenarios.
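To make the eigenvector claim concrete: eigenvectors of a graph Laplacian are only defined up to a sign flip per eigenvector (and up to an arbitrary rotation within the eigenspace of a repeated eigenvalue), so two computations on the same graph can return different but equally valid eigenvectors, and a "canonical version" is one fixed, consistent choice among them. The sketch below is a minimal illustration of this ambiguity and of one simple sign-fixing convention; it is not the method described above, and the `canonicalize_signs` rule is an assumed convention used purely for illustration.

```python
# Illustrative sketch only (not the method described above): graph
# Laplacian eigenvectors and one assumed sign-canonicalization rule.
import numpy as np

def laplacian_eigenvectors(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k eigenvectors of the combinatorial graph Laplacian
    with smallest eigenvalues, before any canonicalization."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    _, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, :k]

def canonicalize_signs(eigvecs: np.ndarray) -> np.ndarray:
    """Fix the per-eigenvector sign ambiguity by flipping each column
    so that its largest-magnitude entry is positive (an assumed
    convention; ties are broken arbitrarily)."""
    idx = np.argmax(np.abs(eigvecs), axis=0)
    signs = np.sign(eigvecs[idx, np.arange(eigvecs.shape[1])])
    signs[signs == 0] = 1.0
    return eigvecs * signs

# Tiny usage example on a 4-node cycle graph.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(canonicalize_signs(laplacian_eigenvectors(adj, k=3)))
```

Note that sign fixing alone does not resolve the basis ambiguity among repeated eigenvalues (the 4-cycle above has one), which is part of what makes recovering canonical eigenvector representations with a learned model nontrivial.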