Tiantian Liu

Zhejiang University

Position: Ph.D. Candidate
Rising Stars year of participation: 2024
Bio

Tiantian Liu is a fourth-year Ph.D. candidate at Zhejiang University, supervised by Professors Kui Ren and Feng Lin. Her research centers on human-centered computing and IoT security, with a focus on voice interaction technologies, particularly defending against malicious injection attacks and improving noise-resistant speech recognition. To date, she has published 11 papers in top-tier conferences and journals, including IEEE S&P, USENIX, ACM MobiCom, and SenSys. Her work has been recognized with several honors, including a SenSys Best Paper Candidate selection and the SIGMOBILE Research Highlight Award. She has also received IEEE S&P travel grants and USENIX student grants, and won second prize in the finals of the 2021 National AI Innovation and Application Competition. Additionally, she serves as a reviewer for multiple journals, including TIFS, TDSC, TWC, IMWUT, IoTJ, and PeerJ.

Areas of Research
  • Human-Computer Interaction
Enhancing Voice Interaction Security and Performance in Cyber-Physical Systems

Voice User Interfaces (VUIs) are increasingly integrated into smart environments, but their open nature makes them susceptible to security threats and performance degradation, especially under adversarial conditions or in noisy settings. My research advances the security and performance of VUIs within human-centered computing and IoT systems, addressing two critical challenges: defending against malicious attacks and achieving noise-resistant speech recognition.

To tackle the security vulnerabilities inherent in VUIs, I have developed a comprehensive protection framework. It includes multimodal sensing systems that use ubiquitous wireless sensing to detect and prevent spoofing attacks, as well as a memory-bank detection system that efficiently rejects out-of-band injection signals without requiring prior attack data, strengthening the resilience of voice-controlled devices against sophisticated threats. (Illustrative sketches of both ideas follow this statement.)

In parallel, my research on multimodal learning fuses millimeter-wave (mmWave) and audio signals to improve speech recognition accuracy in environments with significant noise or long-distance communication. I have designed attention-based fusion networks that optimize the exchange of information between these heterogeneous data sources, significantly improving the robustness and reliability of VUIs.

By addressing these issues, my work strengthens the integration of VUI technologies in real-world applications, ensuring both security and effective performance across diverse scenarios. This research contributes to the theoretical foundations of voice interaction and paves the way for more secure and resilient cyber-physical systems in the IoT era.
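As a rough illustration of the wireless-sensing defense: a live speaker produces vocal vibrations that a radio sensor can pick up alongside the microphone signal, while an injected command does not. The sketch below is a minimal, hypothetical version of that idea, assuming both signals have already been resampled to a common rate and reduced to amplitude envelopes; the function name and threshold are illustrative assumptions, not the published system.

```python
import numpy as np

def likely_live_speaker(audio_env: np.ndarray,
                        vibration_env: np.ndarray,
                        threshold: float = 0.5) -> bool:
    """Hypothetical liveness check: correlate the microphone envelope with
    the wirelessly sensed vocal-vibration envelope. A live speaker drives
    both channels; an out-of-band injection leaves the vibration flat."""
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-8)
    v = (vibration_env - vibration_env.mean()) / (vibration_env.std() + 1e-8)
    # Peak normalized cross-correlation over all lags, tolerant to small
    # misalignment between the two sensing channels.
    peak = np.correlate(a, v, mode="full").max() / len(a)
    return peak > threshold
```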
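The memory-bank idea can likewise be sketched as nearest-neighbor anomaly detection over embeddings of known-benign commands: enrollment needs only legitimate recordings, and no attack samples are required. The class name, embedding source, and threshold below are assumptions for illustration, not the published design.

```python
import numpy as np

class MemoryBankDetector:
    """Illustrative memory-bank detector: stores feature embeddings of
    legitimate voice commands and rejects inputs whose nearest-neighbor
    distance exceeds a calibrated threshold (no attack data needed)."""

    def __init__(self, threshold: float):
        self.bank = None          # (N, D) matrix of benign embeddings
        self.threshold = threshold

    def enroll(self, embeddings: np.ndarray) -> None:
        # Populate the bank with embeddings of known-benign audio.
        self.bank = np.asarray(embeddings, dtype=np.float32)

    def score(self, embedding: np.ndarray) -> float:
        # Anomaly score = distance to the closest enrolled embedding.
        dists = np.linalg.norm(self.bank - embedding, axis=1)
        return float(dists.min())

    def is_injection(self, embedding: np.ndarray) -> bool:
        # Out-of-band injections fall far from every benign exemplar.
        return self.score(embedding) > self.threshold
```

In such a scheme the threshold can be calibrated from held-out benign data alone, for example as a high percentile of benign anomaly scores.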
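For the mmWave-audio fusion, one common shape for an attention-based fusion block is bidirectional cross-attention, where each modality queries the other before the attended features are combined for the recognition head. The PyTorch sketch below assumes frame-aligned, equal-dimension feature sequences; it is a generic instance of the technique, and the actual published architecture may differ.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative attention-based fusion block: each modality attends
    to the other, and the attended features are concatenated for a
    downstream speech-recognition head."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.audio_to_mmwave = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mmwave_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, audio: torch.Tensor, mmwave: torch.Tensor) -> torch.Tensor:
        # audio, mmwave: (batch, time, dim) frame-aligned feature sequences.
        # Each stream borrows complementary cues from the other, which is
        # what makes the fused representation robust to acoustic noise.
        a, _ = self.audio_to_mmwave(audio, mmwave, mmwave)
        m, _ = self.mmwave_to_audio(mmwave, audio, audio)
        a = self.norm_a(audio + a)      # residual + norm per modality
        m = self.norm_m(mmwave + m)
        return self.proj(torch.cat([a, m], dim=-1))

# Example: fuse 100 aligned frames of audio and mmWave features.
fusion = CrossModalFusion()
audio_feats = torch.randn(8, 100, 256)   # e.g., encoded log-mel frames
mmwave_feats = torch.randn(8, 100, 256)  # e.g., encoded vibration features
fused = fusion(audio_feats, mmwave_feats)  # -> (8, 100, 256)
```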