I'm a PhD student in computer science at Harvard in the Statistical Reinforcement Learning Lab. I'm interested in understanding and addressing the challenges people face when applying reinforcement learning algorithms to real-world problems. I am currently working on statistical inference methods for adaptively collected data, e.g., data collected using a bandit algorithm. I am advised by Susan Murphy and Lucas Janson. I am fortunate to have received an NSF Graduate Research Fellowship.

I previously worked on natural language processing problems with Sasha Rush and was a member of the Natural Language Processing group. During my undergrad at NYU, I was a member of the Machine Learning for Language Lab (ML²) at CILVR, and was advised by Sam Bowman and Yann LeCun.


Research Papers

Inference for Batched Bandits
Kelly Zhang, Lucas Janson, and Susan Murphy
NeurIPS 2020
[paper] [code]

Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis
Kelly Zhang and Samuel Bowman
BlackboxNLP 2018 (EMNLP Workshop)

Adversarially Regularized Autoencoders
Junbo (Jake) Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun
ICML 2018
[paper] [code]


Mentoring

  • Zeyang Jia (April 2020-present) Weighting methods for maximizing power on adaptively collected data.
  • Raymond Feng (February 2020-present) Predicting disengagement among users of Track Your Tinnitus.


Other

  • In summer 2018, I interned at Facebook AI Research in New York.
  • I was a grader for the fall 2017 offering of the NYU Data Science course Natural Language Processing with Representation Learning (DS-GA 1011), which was jointly taught by Sam Bowman and Kyunghyun Cho.
  • In summer 2017, I worked at eBay New York on the homepage recommendations team.

