Nora Willett
Princeton University
noraw@princeton.edu
Bio
Nora is a fourth-year PhD student in computer science at Princeton, working with Adam Finkelstein. Her thesis focuses on making animation more accessible to novices. She received a best paper award at the ACM Symposium on User Interface Software and Technology (UIST) in 2017. She interned twice at Adobe Research and once at Autodesk Research. Previously, she received a bachelor’s degree from Stanford University and worked on the film “Mr. Peabody & Sherman” at DreamWorks Animation.
Animation Authoring Tools
My research explores methods that give animation novices authoring tools for creating live 2D animations and dynamic illustrations. One line of work leverages simulation to automatically create secondary animation for live performances; another designs an interface for triggering artwork swaps during a performance.

To add expressiveness to live 2D animated characters, I developed behaviors that allow artists to apply secondary motion to parts of characters (Willett UIST’17). Secondary animation is the subtle movement of parts such as hair, foliage, or cloth that complements and emphasizes the primary motion – the animation of the character’s main parts. My work introduces physically-inspired rigs – the Follow rig, the Rest pose rig, and the Collision rig – that propagate the primary motion of layered illustrated characters to produce plausible secondary motion (a toy illustration of this idea is sketched below).

Another way to improve the live performance of illustrated characters is to trigger discrete artwork changes on a character. In 2D animation, characters are represented by a set of artwork layers that typically depict different body parts or accessories. Within this context, discrete transitions are realized by swapping artwork layers to produce large changes in pose and appearance. I designed and evaluated a multi-touch interface for triggering artwork swaps during performances (Willett UIST’17). The trigger layout enables users to quickly recognize and tap visual triggers without looking away from a live preview of the character. In addition, since animators typically practice before live performances, common patterns from practice sessions are encoded in a predictive model that highlights suggested triggers during the performance (a simple version of such a model is also sketched below). Users of this system found it helpful in reducing the cognitive load of choosing artwork swaps during a performance.

In summary, the goal of my research is to expand the range of possibilities for 2D animation by creating tools that enable novice animators to author it.
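To give a concrete sense of the kind of propagation a follow-style rig performs, here is a deliberately simplified sketch in which a secondary layer (for example, hair) trails a primary anchor point like a damped spring. The function, constants, and spring model are illustrative assumptions made for this statement, not the implementation described in the paper.

import math

# Minimal, hypothetical "follow"-style secondary-motion sketch:
# a secondary layer lags behind a primary anchor via a damped spring.
def step_follow(secondary_pos, secondary_vel, primary_pos,
                stiffness=40.0, damping=8.0, dt=1.0 / 60.0):
    """Advance the secondary layer one frame toward the primary anchor."""
    # Spring force pulls the secondary layer toward the primary position;
    # damping keeps the motion from oscillating forever.
    ax = stiffness * (primary_pos[0] - secondary_pos[0]) - damping * secondary_vel[0]
    ay = stiffness * (primary_pos[1] - secondary_pos[1]) - damping * secondary_vel[1]
    vx = secondary_vel[0] + ax * dt
    vy = secondary_vel[1] + ay * dt
    return (secondary_pos[0] + vx * dt, secondary_pos[1] + vy * dt), (vx, vy)

# Example: the primary anchor sways side to side (e.g., a head turn);
# the secondary layer follows with visible lag and overshoot.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for frame in range(120):
    anchor = (10.0 * math.sin(frame / 20.0), 0.0)
    pos, vel = step_follow(pos, vel, anchor)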
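Likewise, the trigger-suggestion idea can be illustrated with a generic first-order transition model over practice sequences: count which trigger tends to follow which during practice, then highlight the most frequent successors of the last swap performed. The class, method names, and example triggers below are hypothetical and simpler than the system’s actual predictive model.

from collections import Counter, defaultdict

# Hypothetical sketch: learn trigger-to-trigger transitions from practice
# sessions and suggest likely next triggers during a live performance.
class TriggerSuggester:
    def __init__(self):
        # Maps each trigger to a Counter of the triggers that followed it.
        self.successors = defaultdict(Counter)

    def record_practice(self, trigger_sequence):
        """Learn transition counts from one practice session."""
        for prev, nxt in zip(trigger_sequence, trigger_sequence[1:]):
            self.successors[prev][nxt] += 1

    def suggest(self, last_trigger, k=3):
        """Return up to k triggers most likely to follow the last one."""
        return [t for t, _ in self.successors[last_trigger].most_common(k)]

suggester = TriggerSuggester()
suggester.record_practice(["mouth_open", "blink", "wave", "mouth_open", "blink"])
print(suggester.suggest("mouth_open"))  # e.g., ['blink']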