Sarah Dean (Cornell University): Foundations for Learning with Human Interaction & Dynamics (09/06)

Dr. Sarah Dean

Assistant Professor, Computer Science Department, Cornell University

Time: 12:00 pm – 1:00 pm

Date: Friday, September 6

Location: Building C

Abstract: Modern robotic systems benefit from machine learning and human interactions. In this talk, I will discuss recent and ongoing work on developing algorithmic foundations for learning with and from human interactions. I will start with motivation: a collaboration on building a robot for assistive feeding that adaptively asks for help. The first key algorithmic question is how to decide when to query a human expert. I will describe a recently developed interactive bandit algorithm with favorable regret guarantees. The second key question is how to learn dynamical models of human mental state, like cognitive load or boredom, from partial observations. I will describe a learning algorithm based on ideas from system identification that comes with sample complexity guarantees. This is based on joint work with Rohan Banerjee, Tapomayukh Bhattacharjee, Jinyan Su, Wen Sun, Yahya Sattar, and Yassir Jedra.

Bio: Sarah is an Assistant Professor in the Computer Science Department at Cornell. She is interested in the interplay between optimization, machine learning, and dynamics, and her research focuses on understanding the fundamentals of data-driven control and decision-making. This work is grounded in and inspired by applications ranging from robotics to recommendation systems. Sarah received her PhD in EECS from UC Berkeley and completed a postdoc at the University of Washington.

Guoquan (Paul) Huang (UD): Visual-Inertial Sensing, Estimation and Learning (09/20)

Dr. Guoquan (Paul) Huang

Associate Professor, Mechanical Engineering (ME) and Computer and Information Sciences (CIS), University of Delaware

Time: 12:00 pm – 1:00 pm

Date: Friday, September 20

Location: Building C

Abstract: As cameras and IMUs become ubiquitous, visual-inertial systems for spatial perception, analogous to biological visual-vestibular neural systems, hold great potential in a wide range of applications from extended reality (XR) to autonomous robots. While visual-inertial navigation systems (VINS), along with SLAM, have witnessed tremendous progress in the past decade, certain critical aspects of visual-inertial system design remain poorly explored, hindering the widespread deployment of these systems in practice. In this talk, I will present some of my group's recent research efforts on advancing the state of the art in visual-inertial sensing, navigation, and perception, including consistent and efficient 3D motion tracking and scene understanding. Many of our codebases have been open-sourced to promote visual-inertial systems, broadly benefiting the community.

Bio: Guoquan (Paul) Huang is an Associate Professor of Mechanical Engineering (ME) and Computer and Information Sciences (CIS) at the University of Delaware (UD), where he leads the Robot Perception and Navigation Group (RPNG). He is also a Principal Scientist at Meituan. He was a Postdoctoral Associate (2012-2014) at MIT CSAIL (Marine Robotics) after receiving his PhD in Computer Science from the University of Minnesota, and from 2003 to 2005 he was a Research Assistant (Electrical Engineering) at the Hong Kong Polytechnic University. His research focuses on state estimation and spatial computing for robotics and XR, including optimal sensing, calibration, localization, mapping, tracking, perception, navigation, and locomotion of autonomous robots and mobile devices. He has served as an Associate Editor for the IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), and IET Cyber-Systems and Robotics (CSR), as well as for the flagship robotics conferences (ICRA and IROS). Dr. Huang has received many honors and awards, including the 2015 UD Research Award (UDRF), the 2015/2023 NASA DE Space Research Seed Awards, the 2016 NSF Research Initiation Award, the 2018 Google Daydream Faculty Research Award, the 2019 Google AR/VR Faculty Research Award, the 2020 ARL SARA Award, the 2022 Google AI Faculty Research Award, and the 2023 Meta Reality Labs Faculty Research Award. He was recognized among the AI 2000 Scholars (top 100 in robotics) and the World's Top 2% Scientists. He received the ICRA 2022 Best Paper Award (Navigation) and the 2022 Best Paper Award of the GNC Journal, and was a finalist for the ICRA 2024 Best Paper Award (Robot Vision), the RSS 2023 Best Student Paper Award, the ICRA 2021 Best Paper Award (Robot Vision), and the RSS 2009 Best Paper Award.

S. Farokh Atashzar (NYU): Human-Robot Symbiosis: From Data-driven Control to AI-enabled Interface (09/27)

Dr. S. Farokh Atashzar

Assistant Professor, Departments of Mechanical and Aerospace Engineering and Electrical and Computer Engineering, New York University

Time: 12:00 pm – 1:00 pm

Date: Friday, September 27

Location: Building C

Abstract: In this talk, Atashzar will present his team's recent work toward achieving human-robot symbiosis. In the first part, he will discuss a new family of data-driven nonlinear control systems based on passivity theory that enable resilient and transparent physical human-robot interaction by accounting for the passivity-based energetic signature of human biomechanics interacting with robot dynamics. In the second part, he will present the team's recent efforts in AI for neural engineering to decode complex neural signals for (a) predicting human intention for the control of neurorobots and (b) assessing human neuromusculoskeletal disability for closed-loop interfacing. This work ranges from novel computational models and deep networks to wearable myographic sensors. He will discuss the practical limitations of non-invasive interfaces such as high-density electromyography and how AI can help address them. Finally, he will briefly introduce two new directions of his team: (a) how AI can support the design and fabrication of soft robots, and (b) how flexible probabilistic models can enable generalizable human-to-robot skill transfer, overcoming limitations of existing probabilistic learning-from-demonstration approaches. The overall vision of the talk is integrated human-robot systems that combine the best of human and machine capabilities for applications in medical robotics and neurorobotics.

Bio: S. Farokh Atashzar is currently an Assistant Professor at the School of Engineering, New York University (NYU), where he holds joint appointments in the Departments of Mechanical and Aerospace Engineering and Electrical and Computer Engineering, which he joined in August 2019. Previously, he was a postdoctoral scientist in the Department of Bioengineering at Imperial College London, supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) postdoctoral award. At NYU, he leads the Medical Robotics and Interactive Intelligent Technologies (MERIIT) lab, which focuses on the intersection of robotics, control, and AI for broad applications in human-robot interaction and neural engineering. His research aims to achieve human-robot symbiosis at the physical, cognitive, and metacognitive levels by developing technical and technological means to bridge human intelligence and physics with machine cognition and kinodynamics. The outcome will be next-generation machines that augment and learn from, rather than replace, human skills, merging the best of human and machine capabilities. He has published ~90 journal papers, ~60 conference papers, and two book chapters. He has received several awards, including the MathWorks Research Award, the NSERC Postdoctoral Fellowship award, and the IEEE RAS RA-L Distinguished Associate Editor Award. His research is funded by an NIH R01 and five NSF grants, in addition to multiple industrial grants and a $2M equipment grant. He currently serves as an Associate Editor for the IEEE Transactions on Robotics, IEEE Transactions on Haptics, and IEEE Robotics and Automation Letters. Atashzar also chairs the IEEE RAS cluster for human-centered robotics and is the General Chair of the IEEE SPS PROGRESS diversity initiative.