I'm a CS Ph.D. student at the University of Pennsylvania advised by Dinesh Jayaraman. I am broadly interested in artificial intelligence, ranging from virtual agents to physical robots. As a result, my research spans perception, reinforcement learning, and robotics.


I received my BS/MS in CS from the University of Southern California, where I worked with Joseph J. Lim. This summer, I will be interning at Microsoft Research NYC, hosted by John Langford and Alex Lamb.


Publications

Code and reviews for all of my PhD papers are public. Check them out!

An Extremely Unsupervised RL Agent

In Submission

Keywords: World Models, Autonomy

Privileged Sensing Scaffolds Reinforcement Learning
Edward S. Hu, James Springer, Oleh Rybkin, Dinesh Jayaraman

ICLR 2024 (Spotlight, 5% accept rate, 3rd highest-rated paper at ICLR 2024)

Keywords: Privileged Information, Multimodal Perception, RL

Planning Goals for Exploration
Edward S. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman

ICLR 2023 (Spotlight, 5% accept rate)
CoRL 2022 RoboAdapt Workshop (Oral, Best Paper Award)

Keywords: Exploration, Goal-conditioned RL, World Models

Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning
Kun Huang, Edward S. Hu, Dinesh Jayaraman

CoRL 2022 (Oral, 6.5% accept rate, Best Paper Award)

Keywords: Interactive Perception, Task Specification, RL

Transferable Visual Control Policies Through Robot-Awareness
Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

ICLR 2022
ICLR Generalizable Policy Learning Workshop, 2022 (Oral)

Keywords: World Models, Robot Transfer, Manipulation

IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks

ICRA 2021

Keywords: RL, Manipulation, Benchmark

To Follow or not to Follow: Selective Imitation Learning from Observations

CoRL 2019

Keywords: Learning from Demonstrations, Goal-conditioned RL

Composing Complex Skills by Learning Transition Policies

ICLR 2019

Keywords: Hierarchical RL

Mentorship

Current:
  • James Springer, UPenn MS
Past:
  • Harsh Goel, UPenn MS -> UT Austin PhD
  • Kun Huang, UPenn MS -> Full-time SWE at Cruise
  • Richard Chang, UPenn BS
  • Lucy Shi, USC undergrad -> Stanford visitor