Isabella Poles
Politecnico di Milano
isabella.poles@polimi.it
Bio
Isabella Poles is a Ph.D. candidate in Computer Science and Engineering at Politecnico di Milano, focusing on developing domain-expert deep learning (DL) models along the diagnostic–therapeutic medical pathway to support both clinical and research needs. Currently, she is a Deep Learning Intern at Siemens Healthineers, developing generative AI solutions for radiation planning and treatment under the guidance of Ali Kamen, Ph.D. and Simon Arberet, Ph.D. Previously, Isabella was a Visiting Ph.D. Student at the Mahmood Lab at Harvard Medical School and Brigham and Women’s Hospital, working with Prof. Faisal Mahmood and Guillaume Jaume, Ph.D. on foundation models for pathology image analysis. Prior to that, she received her B.Sc. and M.Sc. degrees in Biomedical Engineering from Politecnico di Milano in 2019 and 2022, respectively. In 2022, she also earned an M.Sc. in Bioengineering from the University of Illinois at Chicago, where she worked on DL strategies for image registration in radiology.
Areas of Research
- AI for Healthcare and Life Sciences
Domain Expert and Multimodal AI for Collaborative Clinical Workflows from Diagnosis to Treatment
Healthcare is increasingly shaped by the demand for personalized, data-driven care and by the rapid growth of multimodal data spanning radiology, pathology, clinical reports, and patient histories. Interpreting such heterogeneous information requires systems that integrate diverse modalities while preserving interpretability. Recent advances in vision-language models (VLMs) and large multimodal models (LMMs) offer a unified paradigm for processing textual and visual data. However, current pipelines rely on static, general-purpose pretraining datasets and sequential fine-tuning stages, which limits their ability to capture the fine-grained, multiscale, and clinically specific nature of medical imaging tasks.
My research to date has focused on developing domain-expert DL models spanning diagnosis, prognosis, and treatment. For diagnosis at the macroscale, I developed GAN-based disease classification methods and human-aligned evaluation protocols to capture disease-specific patterns. At the microscale, I worked on biopsy-derived tissue analysis, mitigating hidden stratification through autoencoder-driven latent space manipulation strategies and enhancing tissue segmentation via diffusion-based multimodal knowledge distillation. Building on these foundations, I addressed prognosis with a multimodal survival prediction framework that integrates radiology and histopathology. This approach combines modality-specific encoders, contextual embeddings, and gradient-regularized optimization, achieving robustness under heterogeneous and partially missing data while fusing local-to-global biological pathways for outcome prediction. Finally, I extended the scope of domain-expert models to treatment. This includes 3D registration methods for consistent longitudinal and multimodal acquisitions, as well as a generative 3D model for dose painting and radiotherapy planning, enabling controllable fluence-to-dose modeling for personalized treatment strategies.
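To make the multimodal fusion step concrete, the minimal PyTorch sketch below shows one way such a design could look: modality-specific encoders project radiology and histopathology features into a shared embedding space, and masked pooling keeps the risk prediction well defined when a modality is missing. All class and variable names here are hypothetical, and the contextual embeddings and gradient-regularized optimization mentioned above are omitted; this is an illustrative sketch, not the actual framework implementation.

```python
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Projects one modality's features into a shared embedding space."""

    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultimodalSurvivalModel(nn.Module):
    """Fuses radiology and histopathology embeddings and tolerates
    missing modalities by masking them out before pooling."""

    def __init__(self, radiology_dim: int, pathology_dim: int, embed_dim: int = 128):
        super().__init__()
        self.radiology_encoder = ModalityEncoder(radiology_dim, embed_dim)
        self.pathology_encoder = ModalityEncoder(pathology_dim, embed_dim)
        # Risk head outputs a single log-risk score (Cox-style setup).
        self.risk_head = nn.Linear(embed_dim, 1)

    def forward(
        self,
        radiology: torch.Tensor,      # (B, radiology_dim)
        pathology: torch.Tensor,      # (B, pathology_dim)
        modality_mask: torch.Tensor,  # (B, 2); 1 = modality present, 0 = missing
    ) -> torch.Tensor:
        embeds = torch.stack(
            [self.radiology_encoder(radiology), self.pathology_encoder(pathology)],
            dim=1,
        )  # (B, 2, embed_dim)
        mask = modality_mask.unsqueeze(-1)  # (B, 2, 1)
        # Mean-pool only over the modalities that are actually available.
        fused = (embeds * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        return self.risk_head(fused).squeeze(-1)  # (B,) log-risk scores


if __name__ == "__main__":
    model = MultimodalSurvivalModel(radiology_dim=512, pathology_dim=768)
    radiology = torch.randn(4, 512)
    pathology = torch.randn(4, 768)
    # Patients 2 and 4 in the batch are each missing one modality.
    mask = torch.tensor([[1.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    risks = model(radiology, pathology, mask)
    print(risks.shape)  # torch.Size([4])
```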
Looking forward, I envision unifying these domain-expert models with the reasoning capabilities of LMMs within multi-agent VLM frameworks. Such systems would mirror clinical practice, where specialists collaborate, defer, or consult as needed, and could dynamically adapt to patient-specific workflows from early diagnosis through prognosis and treatment decision-making. This long-term agenda aims to move beyond monolithic models toward collaborative, trustworthy systems that scale across clinical pipelines.