Deep Reinforcement Learning for Motion Generation and Decision Making

This project explores how deep reinforcement learning can enable robots to reason about and interact with their environment more effectively.

Human-Robot Collaboration & Human-Object Interaction

This project explores how a robot can generate appropriate behaviors when working with a human on collaborative tasks. The robot must understand and model human motion, and learn how best to work with the human so that collaborative goals are achieved in ways that are smoother and easier for the human user. We also study how humans interact with objects and seek to classify those interactions accurately.

Robot Introspection and Decision Making

This project explores multimodal robot skill acquisition, in particular for contact tasks. We explore novel ways of modeling, integrating, and classifying low-level robot sensory-motor signals and high-level abstracted representations. Our approaches bootstrap a robot's ability to introspect on the behaviors it executes as it runs. This in turn helps the robot recognize unmodeled and external disturbances that would likely cause the task to fail, and to learn efficient recovery strategies so it can continue with the task at hand.
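
As a rough illustration of the introspection idea (not the project's actual models, which are richer), the sketch below fits a per-timestep Gaussian to nominal executions of a skill and flags low-likelihood windows of a new execution as candidate anomalies. The function names and the diagonal-Gaussian assumption are ours, chosen for brevity.

    import numpy as np

    def fit_nominal_model(trials):
        # trials: (n_trials, T, d) multimodal features (e.g., wrench and
        # joint signals) logged from successful executions of one skill
        mu = trials.mean(axis=0)            # per-timestep mean, shape (T, d)
        sigma = trials.std(axis=0) + 1e-6   # per-timestep std, shape (T, d)
        return mu, sigma

    def detect_anomalies(x, mu, sigma, threshold):
        # score one new execution x of shape (T, d) under a diagonal
        # Gaussian; timesteps whose log-likelihood drops below a threshold
        # calibrated on held-out nominal trials suggest an unmodeled
        # disturbance that may lead the task to fail
        z = (x - mu) / sigma
        ll = -0.5 * (z**2 + np.log(2 * np.pi * sigma**2)).sum(axis=1)
        return np.where(ll < threshold)[0]

Detected anomalies can then index into a library of recovery behaviors so the robot continues the task rather than aborting.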

Modular Controllers, Strategies, and Modules for Robots in Contact Tasks

Human-level tasks demand a steady stream of new motion strategies and associated controllers. This work explores how strategies and controllers can be developed jointly in a modular, flexible way that yields highly dexterous manipulation. The approach has helped a heterogeneous team of robots (a compliant dual-armed humanoid and a rigid single-arm industrial manipulator) work in tandem to perform reactive assemblies of truss structures using different coordination and cooperation approaches. It has also enabled single-arm and dual-arm snap assemblies of male and female cantilever snap parts.
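
To make the modularity concrete, here is a minimal, hypothetical sketch (the Primitive and run_strategy names are ours): each module pairs a control law with the sensed condition that terminates it, and a coordinator chains modules so that switching happens on sensed events such as contact forces rather than on fixed timing.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Primitive:
        # one reusable module: a control law plus its termination condition
        name: str
        control: Callable[[dict], dict]  # sensed state -> motor command
        done: Callable[[dict], bool]     # sensed state -> module finished?

    def run_strategy(modules, read_state, send_command, max_steps=10000):
        # execute modules in sequence; event-driven switching keeps the
        # strategy reactive to contact rather than tied to a clock
        for m in modules:
            for _ in range(max_steps):
                state = read_state()
                if m.done(state):
                    break
                send_command(m.control(state))

A guarded approach, for instance, might command a slow downward velocity until the sensed normal force exceeds a contact threshold, at which point a compliant insertion module takes over.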

Robot Sensory-Motor Coordination

This work studied how robots can, over time, integrate multimodal sensory information with motion, grounding the robot in reality and providing a framework for understanding the world. We studied how sensory-motor state data can self-organize into vector-space structures that categorize the world in terms of the robot's sensory-motor coordination (SMC). We enacted tasks of increasing complexity that imitate how babies interact with objects of interest in the world, using audio, vision, and tactile sensing as sensory modalities.
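
As a loose illustration only (the original experiments may have used different learning machinery), the sketch below clusters normalized multimodal sensory-motor samples with k-means so that the learned cluster centers act as emergent SMC categories; the function names are ours.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_smc_space(logs, n_categories=8):
        # logs: (N, d) rows of synchronized audio, vision, tactile, and
        # motor features recorded while the robot interacts with objects
        mu, sd = logs.mean(axis=0), logs.std(axis=0) + 1e-6
        km = KMeans(n_clusters=n_categories, n_init=10).fit((logs - mu) / sd)
        return km, mu, sd  # centers play the role of SMC categories

    def categorize(km, mu, sd, sample):
        # assign a new sensory-motor sample to its nearest learned category
        return int(km.predict(((sample - mu) / sd).reshape(1, -1))[0])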

Education

This work studies how to improve STEM thinking skills through robotics education.

Vision

Supporting vision work for our manipulation experiments: simultaneous multi-object 3D pose estimation.