I'm a CS Ph.D. student at the University of Pennsylvania, advised by Dinesh Jayaraman. I am broadly interested in artificial intelligence, from virtual agents to physical robots, so my research spans perception, reinforcement learning, and robotics.
I received my BS/MS in CS from the University of Southern California, where I worked with Joseph J. Lim on RL. In Summer 2024, I interned at Microsoft NYC and worked on LLM training and planning with John Langford and Alex Lamb.
Code and reviews for all of my PhD papers are public. Check them out!
ICLR 2024 (Spotlight, 5% accept rate, 3rd highest-rated paper)
Keywords: Privileged Information, Multimodal Perception, RL
CoRL 2022 (Oral, 6.5% accept rate, Best Paper Award)
Keywords: Interactive Perception, Task Specification, RL
Implementation of Planning Exploratory Goals, an unsupervised RL agent for hard-exploration tasks (ICLR'23)
Code for "Know Thyself: Transferable Visual Control Policies Through Robot-Awareness" (ICLR'22)
Code release for "Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning" (CoRL'22)
IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks (ICRA'21)
OpenAI's GPT-2 integrated with Slack.
A simple optical illusion in Python.