I'm a CS Ph.D. student at the University of Pennsylvania advised by Dinesh Jayaraman. I received my BS/MS in CS from the University of Southern California, where I worked with Joseph J. Lim. I also developed software for nonprofits at Code The Change, and interned at Tesla and Intel.


I am broadly interested in artificial intelligence, from virtual agents to physical robots. My research spans perception, reinforcement learning, and robotics.


I am looking for internships!

Publications and Preprints

Code and reviews for all of my PhD papers are public.

Privileged Sensing Scaffolds Reinforcement Learning
Edward S. Hu, James Springer, Oleh Rybkin, Dinesh Jayaraman

In Submission, 2023

Keywords: Privileged Information, Multimodal Perception, RL

Planning Goals for Exploration
Edward S. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman

International Conference on Learning Representations (ICLR), 2023 (Spotlight)
CoRL 2022 RoboAdapt Workshop (Oral, Best Paper Award)

Keywords: Exploration, Goal-conditioned RL, World Models

Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning
Kun Huang, Edward S. Hu, Dinesh Jayaraman

Conference on Robot Learning (CoRL), 2022 (Oral, Best Paper Award)

Keywords: Interactive Perception, Task Specification, RL

Transferable Visual Control Policies Through Robot-Awareness
Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

International Conference on Learning Representations (ICLR), 2022
ICLR Generalizable Policy Learning Workshop, 2022 (Oral)

Keywords: World Models, Robot Transfer, Manipulation

IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks

International Conference on Robotics and Automation (ICRA), 2021

Keywords: RL, Manipulation, Benchmark

To Follow or not to Follow: Selective Imitation Learning from Observations

Conference on Robot Learning (CoRL), 2019

Keywords: Learning from Demonstrations, Goal-conditioned RL

Composing Complex Skills by Learning Transition Policies

International Conference on Learning Representations (ICLR), 2019

Keywords: Hierarchical RL