Controlling Assistive Robots with Learned Latent Actions


We want to make it easier for humans to teleoperate dexterous robots. We present a learning approach that embeds high-dimensional robot actions into an intuitive, human-controllable, and low-dimensional latent space.
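
As a rough illustration of this idea, the sketch below shows one way such an embedding could be learned: a conditional autoencoder (in PyTorch) that compresses high-DoF arm actions into a 2-DoF latent space conditioned on the robot's state, so that a low-dimensional input like a joystick can be decoded back into a full robot action. The network sizes, dimensions, and training loop are illustrative assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    """Conditional autoencoder sketch: embeds high-DoF robot actions into a
    low-dimensional latent space, conditioned on the current robot state.
    Dimensions and layer sizes here are placeholders."""

    def __init__(self, state_dim=7, action_dim=7, latent_dim=2, hidden=64):
        super().__init__()
        # Encoder maps (state, action) -> low-dimensional latent action z
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder maps (state, z) -> reconstructed high-DoF robot action
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1)), z

    def control(self, state, z):
        # At run time, a 2-DoF joystick input z is decoded into a full action.
        return self.decoder(torch.cat([state, z], dim=-1))


# Training sketch: reconstruct demonstrated actions through the latent space.
model = LatentActionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for state, action in []:  # replace with batches of demonstration data
    recon, _ = model(state, action)
    loss = ((recon - action) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```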

Learning from My Partner’s Actions: Roles in Decentralized Robot Teams


When groups of robots work together, their actions communicate valuable information. We introduce a collaborative learning and control strategy that enables robots to harness the information contained within their partner's actions.

Influencing Leading and Following in Human-Robot Teams


So much of our lives centers around coordinating in groups. As robots become increasingly integrated into society, they should similarly be able to coordinate well with human groups. However, influencing groups of people is challenging. Our goal is to develop a framework that enables robots to model and influence human groups, and that scales with the number of human agents.

Learning Reward Functions by Integrating Human Demonstrations and Preferences


When learning from humans, we typically use data from only one form of feedback. In this work, we investigate whether we can leverage multiple modes of feedback, such as demonstrations and preferences, to learn reward functions more effectively.
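
One common way to fuse these two feedback modes, sketched below, is to treat demonstrations and pairwise preference queries as noisy observations of the same underlying reward weights and combine them in a single Bayesian posterior. The Boltzmann demonstration model, Bradley-Terry preference model, feature dimensions, and toy numbers here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def demo_loglik(w, demo_features, beta=1.0):
    # Boltzmann-rational demonstration model: the human is more likely to
    # demonstrate trajectories with higher reward R(xi) = w . phi(xi).
    # (Normalizing constant over trajectories omitted for brevity.)
    return beta * demo_features @ w

def pref_loglik(w, feats_a, feats_b, choice, beta=1.0):
    # Bradley-Terry / softmax preference model: probability the human
    # prefers trajectory A over trajectory B given reward weights w.
    ra, rb = beta * feats_a @ w, beta * feats_b @ w
    p_a = np.exp(ra) / (np.exp(ra) + np.exp(rb))
    return np.log(p_a if choice == 0 else 1.0 - p_a)

def posterior_logprob(w, demos, prefs):
    # Combine both feedback modes in one (unnormalized) log-posterior.
    lp = sum(demo_loglik(w, phi) for phi in demos)
    lp += sum(pref_loglik(w, fa, fb, c) for fa, fb, c in prefs)
    return lp

# Toy usage with 3-dimensional trajectory features (all values hypothetical).
demos = [np.array([0.8, 0.1, 0.3])]
prefs = [(np.array([0.9, 0.2, 0.1]), np.array([0.1, 0.7, 0.5]), 0)]
samples = np.random.randn(5000, 3)               # crude prior samples over w
log_posts = np.array([posterior_logprob(w, demos, prefs) for w in samples])
w_hat = samples[np.argmax(log_posts)]            # MAP-style reward estimate
```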