Peggy Chi
UC Berkeley
peggychi@eecs.berkeley.edu
Bio
Pei-yu (Peggy) Chi designs intelligent systems that enhance everyday experiences. She is currently a fifth-year Ph.D. student in Computer Science at UC Berkeley, working with Prof. Bjoern Hartmann on computer-generated interactive tutorials. She received the Google Ph.D. Fellowship in Human-Computer Interaction (2014-2016) and the Berkeley Fellowship for Graduate Study (2011-2013). Peggy earned her M.S. in Media Arts and Sciences from the MIT Media Lab in 2010, where she was a lab fellow and worked with Henry Lieberman in the Software Agents Group. She also holds an M.S. in Computer Science from National Taiwan University (2008), where she worked with Hao-hua Chu at the UbiComp Lab.
Peggy’s research in Human-Computer Interaction focuses on novel authoring tools for content creation. Her recent work, published at top HCI conferences, includes tutorial generation for software applications and physical tasks, designing and scripting cross-device interactions, and interactive storytelling for sharing personal media.
Designing Video-Based Interactive Instructions
When aiming to accomplish unfamiliar, complicated tasks, people often search for online help and follow instructions shared by experts or hobbyists. Although the availability of content-sharing sites such as YouTube and Blogger has led to an explosion in user-generated tutorials, it remains a challenge for tutorial creators to offer concise, effective content that learners can put into action. From using software applications, to performing physical tasks such as machine repair and cooking, to giving a lecture, each domain involves specific “how-to” knowledge with a certain degree of complexity. Authors therefore need to carefully decide what concepts to introduce and when, in addition to accurately performing the tasks.
My research introduces video editing, recording, and playback tools optimized for producing and consuming instructional demonstrations. We focus on videos because they are commonly used to capture demonstrations rich in visual and auditory detail. Using video and audio analysis techniques, our goal is to dramatically increase the quality of amateur-produced instructions, which in turn helps viewers learn by navigating the content interactively. We present a series of systems that support this vision by creating effective tutorials: MixT, which automatically generates mixed-media software instructions; DemoCut, which automatically applies video editing effects to a recording of a physical demonstration; and DemoWiz, which increases awareness of upcoming actions through glanceable visualizations.