Marin Vogelsang
Massachusetts Institute of Technology
ozaki@mit.edu
Bio
Marin Vogelsang is a Postdoctoral Fellow in Prof. Pawan Sinha's lab in the Department of Brain and Cognitive Sciences at MIT, where she is supported by a JSPS Postdoctoral Fellowship and a Yamada Science Foundation grant. Previously, she received a B.Sc. in Biology from the University of Tokyo, an M.Sc. in Neural Systems & Computation from UZH/ETH Zurich, a second B.Sc. in Computer Science from EPFL, and a Ph.D. in Cognitive Science from the University of Osnabrueck. Her current work focuses on visual learning in humans and machines. She runs simulations with deep neural networks as computational model systems and also contributes to Project Prakash, studying children in rural India who gain sight late in life after treatment for congenital blindness.
Areas of Research
- Machine Learning
Using deep learning to illuminate human development & leveraging human developmental insights to improve AI
Human perceptual development unfolds in a stereotypical sequence. Early visual inputs, for instance, are severely limited in color, acuity, and contrast, gradually improving over the first months of life. Our recent studies suggest that, far from being mere 'hurdles', these early degradations may be adaptive and help establish robust perceptual mechanisms. In essence, initially degraded inputs may prompt the developing system to prioritize more holistic and robust representations rather than relying too heavily on fine-grained details.
This proposal is supported by joint experimental and computational findings. Experimental evidence derives from studies of children born blind who later gained sight through Project Prakash. Unlike typical neonates, these children begin to see with a more mature retina, effectively bypassing early visual degradations. Consistent with our predictions, perceptual examinations of Prakash children revealed significant deficits in recognizing color-degraded images and in tasks requiring more holistic processing. Comprehensive simulations with deep neural networks support these observations, revealing causal links between such deficits and the absence of initially degraded inputs. These simulations also highlight computational advantages of deep network training trajectories that transition from initially degraded inputs to gradually improving fidelity.
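The degraded-to-clear training trajectory described above can be illustrated with a minimal sketch. The function names, the specific degradation operations (box blur plus desaturation), and the linear schedule below are illustrative assumptions for exposition, not the actual simulation pipeline used in these studies.

```python
import numpy as np

def degrade(image, severity):
    """Simulate immature vision on an (H, W, 3) float image in [0, 1].

    severity in [0, 1]: 1 = maximally degraded (newborn-like, blurry
    and desaturated), 0 = full-fidelity input.
    """
    # Reduced acuity: box blur with a kernel that grows with severity.
    k = 1 + 2 * int(round(severity * 3))  # odd kernel size: 1, 3, 5, or 7
    if k > 1:
        pad = k // 2
        padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        blurred = np.zeros_like(image)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + image.shape[0],
                                  dx:dx + image.shape[1]]
        image = blurred / (k * k)
    # Reduced color sensitivity: interpolate toward grayscale.
    gray = image.mean(axis=2, keepdims=True)
    return (1 - severity) * image + severity * gray * np.ones(3)

def curriculum_severity(epoch, total_epochs):
    """Linear developmental schedule: fully degraded at epoch 0,
    full fidelity by the final epoch."""
    return max(0.0, 1.0 - epoch / (total_epochs - 1))
```

During training, each batch would be passed through `degrade` with the severity given by the current epoch's schedule, so the network first sees blurry, color-poor inputs and only gradually receives high-fidelity images.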
Similar results that we have obtained in the domain of prenatal hearing indicate that our 'adaptive initial degradation' hypothesis may reflect a more domain-general phenomenon. Together, these findings help reveal the computational principles underlying typical and atypical perceptual development. They also offer inspiration for the design of rehabilitation protocols following sight-restoring surgeries and for more robust deep network training strategies.