Yashaswini Murthy
California Institute of Technology
yashaswini.krishnamurthy23@gmail.com
Bio
Yashaswini Murthy is a postdoctoral scholar in Computing and Mathematical Sciences at Caltech and an incoming Assistant Professor in Operations Research and Industrial Engineering at UT Austin (starting 2026). She studies provable guarantees for algorithms in average-reward and robust reinforcement learning; her interests broadly span applied probability and reinforcement learning theory. She was recognized as an ISyE/MS&E/IOE Joint Rising Star in 2025. She earned her PhD in Electrical and Computer Engineering at the University of Illinois Urbana-Champaign (UIUC), supported by several fellowships, including the Mavis Future Faculty Fellowship and the Joan & Lalit Bahl Fellowship. She holds a B.Tech and an M.Tech in Mechanical Engineering from the Indian Institute of Technology Bombay (IIT Bombay).
Areas of Research
- Information and System Science
Title: Learning and Control in Countable State Spaces
Abstract: We consider policy optimization methods in reinforcement learning settings where the state space is arbitrarily large, or even countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. Specifically, we consider the popular Natural Policy Gradient (NPG) algorithm, which has previously been studied only under the assumptions that the cost is bounded and the state space is finite, neither of which holds for the aforementioned control problems. Assuming a Lyapunov drift condition, which is naturally satisfied in some cases and can be satisfied in others at a small cost in performance, we design a state-dependent step-size rule that dramatically improves the performance of NPG for our intended applications. In addition to experimentally verifying the performance improvement, we show theoretically that the iteration complexity of NPG can be made independent of the size of the state space. The key analytical tool is the connection between NPG step sizes and the solution to Poisson’s equation. In particular, we provide policy-independent bounds on the solution to Poisson’s equation, which are then used to guide the choice of NPG step sizes.
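
To make the ingredients of the abstract concrete, here is a minimal sketch of tabular NPG with a state-dependent step size on a small queueing example. Everything specific in it is an assumption for illustration: the truncated single-server queue, the holding/service costs, and the placeholder step rule eta(s) = eta0 / (1 + s^2) are not the rule or the models from the work itself. What the sketch does show is the structural connection the abstract points to: average-cost policy evaluation amounts to solving Poisson's equation h(s) + g = c_pi(s) + sum_s' P_pi(s, s') h(s'), and the resulting relative Q-values drive a softmax NPG (multiplicative-weights) update whose step size is allowed to vary with the state.

```python
# Illustrative sketch only: tabular NPG with a state-dependent step size on a
# truncated single-server queue. The step rule below is a placeholder, not the
# rule developed in the work.
import numpy as np

N = 50                      # buffer truncation (for simulation only)
ARRIVAL = 0.5               # arrival probability per slot
SERVICE = [0.3, 0.7]        # action a chooses a service success probability
ACTION_COST = [0.0, 1.0]    # faster service costs more
states = np.arange(N + 1)
A = len(SERVICE)

def transition(s, a):
    """Return dict next_state -> probability for state s under action a."""
    p = {}
    for da, pa in ((1, ARRIVAL), (0, 1 - ARRIVAL)):        # arrival
        for dd, pd in ((-1, SERVICE[a]), (0, 1 - SERVICE[a])):  # departure
            ns = min(max(s + da + dd, 0), N)
            p[ns] = p.get(ns, 0.0) + pa * pd
    return p

def cost(s, a):
    return s + ACTION_COST[a]       # unbounded holding cost plus service cost

def evaluate(pi):
    """Solve Poisson's equation h + g = c_pi + P_pi h with h(0) = 0."""
    P = np.zeros((N + 1, N + 1))
    c = np.zeros(N + 1)
    for s in states:
        for a in range(A):
            for ns, pr in transition(s, a).items():
                P[s, ns] += pi[s, a] * pr
            c[s] += pi[s, a] * cost(s, a)
    # Unknowns: h(0..N) and the average cost g.
    M = np.zeros((N + 2, N + 2))
    b = np.zeros(N + 2)
    M[:N + 1, :N + 1] = np.eye(N + 1) - P
    M[:N + 1, N + 1] = 1.0          # coefficient of g
    b[:N + 1] = c
    M[N + 1, 0] = 1.0               # normalization h(0) = 0
    sol = np.linalg.lstsq(M, b, rcond=None)[0]
    h, g = sol[:N + 1], sol[N + 1]
    # Relative Q-values: Q(s,a) = c(s,a) - g + E[h(s')]
    Q = np.array([[cost(s, a) - g +
                   sum(pr * h[ns] for ns, pr in transition(s, a).items())
                   for a in range(A)] for s in states])
    return g, Q

def npg(iters=200, eta0=1.0):
    pi = np.full((N + 1, A), 1.0 / A)
    for _ in range(iters):
        g, Q = evaluate(pi)
        # Placeholder state-dependent step size: damp the update where a
        # Lyapunov-like term V(s) = 1 + s^2 is large.
        eta = eta0 / (1.0 + states.astype(float) ** 2)
        # Softmax NPG for cost minimization: pi_{t+1} propto pi_t * exp(-eta Q)
        logits = np.log(np.clip(pi, 1e-300, 1.0)) - eta[:, None] * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
    return g, pi

if __name__ == "__main__":
    g, pi = npg()
    print("average cost after NPG:", round(g, 3))
```

The truncation to N states is purely so the sketch can solve Poisson's equation exactly; the point of the abstract is precisely that, with appropriately chosen state-dependent step sizes, the guarantees for NPG need not depend on any such finite bound on the state space.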