The Sensory Needs of Robot Learners

Dr. Dinesh Jayaraman

Assistant Professor, Grasp Lab, University of Pennsylvania


Time: 11:30 am – 12:30 pm

Date: Friday, April 12

Location: BC 220

Abstract: General-purpose robots of the future will need vision and learning, but vision-based robot learning today is inflexible and inefficient: it needs robot-and-task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibility in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about pre-trained object-centric visual representations that track task-directed progress, task-relevant world model learning for model-based RL, and how robot learners can benefit from additional sensory inputs at training time.

Bio: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania’s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh’s research has received a Best Paper Award at CoRL ’22, a Best Paper Runner-Up Award at ICRA ’18, a Best Application Paper Award at ACCV ’16, an Amazon Research Award ’21, and the NSF CAREER Award ’23, and has been featured on the cover page of Science Robotics and in several press outlets. His webpage is at: