Elena Glassman

MIT

Position: PhD Candidate, EECS
Rising Stars year of participation: 2015
Bio

Elena Glassman is an EECS Ph.D. candidate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), where she specializes in human-computer interaction. For her dissertation, Elena has created tools that help teach programming and hardware design to thousands of students at once. She draws on theories from the learning sciences, as well as the pain points of students and teachers, to guide the creation of new systems for teaching and learning online and at scale. Elena earned her B.S. and M.Eng. degrees in EECS from MIT in ‘08 and ‘10, respectively, and expects her Ph.D. in ‘16. She has been a visiting researcher at Stanford and an intern at Google and Microsoft Research. She has been awarded NSF and NDSEG fellowships and MIT’s Amar Bose Teaching Fellowship. She also leads the MIT chapter of MEET, which teaches computer science and teamwork to gifted Palestinian and Israeli students in Jerusalem.

Systems for Teaching Programming and Hardware Design at Scale

In a massive open online course (MOOC), a single programming exercise may yield thousands of student solutions that vary in many ways, some superficial and some fundamental. Understanding large-scale variation in programs is a hard but important problem. For teachers, this variation can be a source of pedagogically valuable examples, and it can expose corner cases not yet covered by autograding. For students, the variation in a large class means that other students may have struggled along a similar solution path, hit the same bugs, and can offer hints based on that hard-won expertise.

I have developed three systems for exploring solution variation in large-scale programming and computer architecture classes. (1) OverCode visualizes thousands of programming solutions, using static and dynamic analysis to cluster similar ones. It lets teachers quickly develop a high-level view of student understanding and misconceptions, and give feedback that is relevant to many student solutions at once. (2) Foobaz clusters the variables in student programs by their names and behavior so that teachers can give feedback on variable naming. Rather than requiring the teacher to comment on thousands of solutions individually, Foobaz generates personalized quizzes that help students evaluate their own variable names by comparing them with good and bad names drawn from other students’ solutions. (3) ClassOverflow collects and organizes solution hints, indexed by the autograder test that failed or by a performance characteristic such as size or speed. It helps students reflect on their debugging or optimization process, generates hints that can help other students with the same problem, and could potentially bootstrap an intelligent tutor tailored to the exercise. All three systems have been evaluated on archived data or in live deployments in on-campus and edX courses with thousands of students.
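To make the clustering idea concrete, the Python sketch below groups student solutions that behave identically on a shared set of test inputs. This is a minimal, hypothetical illustration in the spirit of OverCode's dynamic analysis, not the published pipeline (which also canonicalizes variable names based on their runtime values); all function names, inputs, and sample solutions here are invented for the example.

# Illustrative only: cluster student solutions by observable behavior
# on a shared test suite, so a teacher can review one representative
# solution per cluster instead of thousands of raw submissions.
from collections import defaultdict

TEST_INPUTS = [0, 1, 5, 10]  # hypothetical inputs for one exercise

def behavior_signature(solution_fn):
    """Run a student's function on the shared inputs and record outputs."""
    outputs = []
    for x in TEST_INPUTS:
        try:
            outputs.append(repr(solution_fn(x)))
        except Exception as e:
            # A raised exception is also observable behavior worth clustering on.
            outputs.append("raised " + type(e).__name__)
    return tuple(outputs)

def cluster_by_behavior(solutions):
    """Group solutions whose outputs match on every test input.

    `solutions` maps a student id to a callable implementing the exercise.
    Returns {signature: [student ids]}.
    """
    clusters = defaultdict(list)
    for student_id, fn in solutions.items():
        clusters[behavior_signature(fn)].append(student_id)
    return clusters

if __name__ == "__main__":
    # Two behaviorally identical solutions and one buggy off-by-one variant.
    solutions = {
        "s1": lambda n: sum(range(n + 1)),
        "s2": lambda n: n * (n + 1) // 2,
        "s3": lambda n: sum(range(n)),  # bug: omits n itself
    }
    for signature, ids in cluster_by_behavior(solutions).items():
        print(ids, "->", signature)

Running the sketch places s1 and s2 in one cluster and isolates the buggy s3 in its own, which is the kind of grouping that lets one piece of teacher feedback reach many students at once.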