S. Farokh Atashzar (NYU): Human-Robot Symbiosis: From Data-driven Control to AI-enabled Interface

Dr. S. Farokh Atashzar

Assistant Professor, Departments of Mechanical and Aerospace Engineering and Electrical and Computer Engineering, New York University

Time: 12:30 pm – 1:30 pm

Date: Friday, October 4

Location: BC 220

Abstract: In this talk, Atashzar will present his team’s recent work towards achieving human-robot symbiosis. In the first part, he will discuss a new family of data-driven nonlinear control systems, based on passivity theory, that enable resilient and transparent physical human-robot interaction by accounting for the passivity-based energetic signature of human biomechanics interacting with robot dynamics. In the second part, he will present the team’s recent efforts in AI for neural engineering, decoding complex neural signals to (a) predict human intention for the control of neurorobots and (b) assess human neuromusculoskeletal disability for closed-loop interfacing. This work ranges from novel computational models and deep networks to wearable myographic sensors. He will discuss the practical limitations of non-invasive interfaces such as high-density electromyography and how AI can help address them. Finally, he will briefly introduce two new directions of his team: (a) how AI can support the design and fabrication of soft robots and (b) how flexible probabilistic models can enable generalizable human-to-robot skill transfer, overcoming limitations of existing probabilistic learning-from-demonstration approaches. The overall vision of the talk is integrated human-robot systems that combine the best of human and machine capabilities for applications in medical robotics and neurorobotics.
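
For context, a standard definition from the passivity literature (background, not material specific to this talk): a system with input $u$ and output $y$ is passive if there exists a finite stored energy $E_0$ such that

$$\int_0^T u(t)^\top y(t)\,dt \;\ge\; -E_0 \qquad \text{for all } T \ge 0,$$

i.e., the system can never release more energy than it has stored or absorbed. Classical passivity-based teleoperation guarantees stable physical interaction by keeping every interconnected block passive; the data-driven controllers described above aim to be less conservative by accounting for the measured energetic behavior of the human limb rather than treating it as an unknown worst-case element.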

Bio: S. Farokh Atashzar is currently an Assistant Professor at the School of Engineering, New York University (NYU). He holds joint appointments in the Departments of Mechanical and Aerospace Engineering and Electrical and Computer Engineering, which he joined in August 2019. Previously, he was a postdoctoral scientist in the Department of Bioengineering at Imperial College London, supported by a postdoctoral award from the Natural Sciences and Engineering Research Council (NSERC) of Canada. At NYU, he leads the Medical Robotics and Interactive Intelligent Technologies (MERIIT) lab, focusing on the intersection of robotics, control, and AI for broad applications in human-robot interaction and neural engineering. His research aims to achieve human-robot symbiosis at the physical, cognitive, and metacognitive levels by developing the technical means to bridge human intelligence and physics with machine cognition and kinodynamics. The outcome will be next-generation machines that augment and learn from, rather than replace, human skills, merging the best of human and machine capabilities. He has published ~90 journal papers, ~60 conference papers, and two book chapters. He has received several awards, including the MathWorks Research Award, the NSERC PDF award, and the IEEE RAS RA-L Distinguished AE award. His research is funded by an NIH R01 and five NSF grants, in addition to multiple industrial grants and a $2M equipment grant. He currently serves as an Associate Editor for IEEE Transactions on Robotics, IEEE Transactions on Haptics, and IEEE Robotics and Automation Letters. Atashzar also chairs the IEEE RAS Cluster for human-centered robotics and is the General Chair for the IEEE SPS PROGRESS diversity initiative.

Guoquan (Paul) Huang (UD): Visual-Inertial Sensing, Estimation and Learning

Dr. Guoquan (Paul) Huang

Associate Professor, Mechanical Engineering (ME) and Computer and Information Sciences (CIS), University of Delaware


Time: 12:30 pm – 1:30 pm

Date: Friday, September 20

Location: BC 220

Abstract: As cameras and IMUs are becoming ubiquitous, visual-inertial systems for spatial perception, in analogy to biological visual-vestibular neural systems, hold great potential in a wide range of applications from extended reality (XR) to autonomous robots. While visual-inertial navigation systems (VINS), alongside SLAM, have witnessed tremendous progress in the past decade, certain critical aspects of the design of visual-inertial systems remain poorly explored, hindering the widespread deployment of these systems in practice. In this talk, I will present some recent research efforts of my group on advancing the state of the art in visual-inertial sensing, navigation and perception, including consistent and efficient 3D motion tracking and scene understanding. Many of the codebases have been open-sourced to promote visual-inertial systems, broadly benefiting the community.

Bio: Guoquan (Paul) Huang is an Associate Professor of Mechanical Engineering (ME) and Computer and Information Sciences (CIS) at the University of Delaware (UD), where he leads the Robot Perception and Navigation Group (RPNG). He is also a Principal Scientist at Meituan. He was a Postdoctoral Associate (2012-2014) at MIT CSAIL (Marine Robotics) after receiving his PhD in Computer Science from the University of Minnesota. From 2003 to 2005, he was a Research Assistant (Electrical Engineering) at the Hong Kong Polytechnic University. His research interests focus on state estimation and spatial computing for robotics and XR, including optimal sensing, calibration, localization, mapping, tracking, perception, navigation and locomotion of autonomous robots and mobile devices. He has served as an Associate Editor for the IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), and IET Cyber-Systems and Robotics (CSR), as well as for the robotics flagship conferences (ICRA and IROS). Dr. Huang has received many honors and awards, including the 2015 UD Research Award (UDRF), 2015/2023 NASA DE Space Research Seed Award, 2016 NSF Research Initiation Award, 2018 Google Daydream Faculty Research Award, 2019 Google AR/VR Faculty Research Award, 2020 ARL SARA Award, 2022 Google AI Faculty Research Award, and 2023 Meta Reality Labs Faculty Research Award. He was recognized among the AI 2000 Scholars (top 100 in robotics) and the World Top 2% Scientists. He received the ICRA 2022 Best Paper Award (Navigation) and the 2022 Best Paper Award of the GNC Journal, and was a finalist for the ICRA 2024 Best Paper Award (Robot Vision), the RSS 2023 Best Student Paper Award, the ICRA 2021 Best Paper Award (Robot Vision), and the RSS 2009 Best Paper Award.

Qiyu Sun (University of Central Florida): Dynamic Systems: Carleman meets Fourier

Dr. Qiyu Sun

Professor, Department of Mathematics, University of Central Florida


Time: 12:30 pm – 1:30 pm

Date: Friday, September 13

Location: BC 220

Abstract: Taylor expansion and Fourier expansion have been widely used to represent functions. The question discussed in this talk is whether there is an analog for nonlinear dynamic systems. In particular, we consider Carleman linearization and Carleman-Fourier linearization of nonlinear dynamic systems and show that the primary block of the finite-section approach converges exponentially to the solution of the original dynamic system.
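
As a point of reference (an illustration, not material from the talk): Carleman linearization lifts a polynomial system to an infinite-dimensional linear one by taking monomials of the state as new coordinates. For the scalar system $\dot{x} = ax + bx^2$, setting $y_k = x^k$ gives

$$\dot{y}_k = k\,x^{k-1}\dot{x} = ka\,y_k + kb\,y_{k+1}, \qquad k = 1, 2, \ldots,$$

an infinite linear system in which each monomial couples only to the next one. The finite-section approach truncates this system at $k = N$ and solves the resulting finite linear system; the convergence result concerns how quickly its primary block (the approximation of $x = y_1$) approaches the true solution as $N$ grows.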

Bio: Qiyu Sun received the Ph.D. degree in mathematics from Hangzhou University, Hangzhou, China, in 1990. He is currently a Professor of mathematics at the University of Central Florida, Orlando, FL, USA. His research interests include applied and computational harmonic analysis, optimal control theory, mathematical signal processing and sampling theory. Together with Nader Motee, he received the 2019 SIAG/CST Best SICON Paper Prize for making a fundamental contribution to spatially distributed systems theory. He is on the editorial boards of several journals, including the Journal of Fourier Analysis and Applications, Frontiers in Signal Processing, and Sampling Theory, Signal Processing, and Data Analysis.

Sarah Dean (Cornell University): Foundations for Learning with Human Interaction & Dynamics

Dr. Sarah Dean

Assistant Professor, Computer Science Department, Cornell University

Time: 12:00 pm – 1:00 pm

Date: Friday, September 6

Location: BC 220

Abstract: Modern robotic systems benefit from machine learning and human interactions. In this talk, I will discuss recent and ongoing work on developing algorithmic foundations for learning with and from human interactions. I will start with motivation: a collaboration on building a robot for assistive feeding that adaptively asks for help. The first key algorithmic question is how to decide when to query a human expert. I will describe a recently developed interactive bandit algorithm with favorable regret guarantees. The second key question is how to learn dynamical models of human mental state, like cognitive load or boredom, from partial observations. I will describe a learning algorithm based on ideas from system identification that comes with sample complexity guarantees. This is based on joint work with Rohan Banerjee, Tapomayukh Bhattacharjee, Jinyan Su, Wen Sun, Yahya Sattar, and Yassir Jedra.

Bio: Sarah is an Assistant Professor in the Computer Science Department at Cornell. She is interested in the interplay between optimization, machine learning, and dynamics, and her research focuses on understanding the fundamentals of data-driven control and decision-making. This work is grounded in and inspired by applications ranging from robotics to recommendation systems. Sarah has a PhD in EECS from UC Berkeley and did a postdoc at the University of Washington.

Matei Ciocarlie (Columbia University): We (finally) have dexterous robotic manipulation. Now what?

Dr. Matei Ciocarlie

Associate Professor, Department of Mechanical Engineering, Columbia University


Time: 11:00 am – 12:00 pm

Date: Friday, April 26

Location: PA 466

Abstract: At long last, robot hands are becoming truly dexterous. It took advances in sensor design, mechanisms, and computational motor learning all working together, but we’re finally starting to see true dexterity, in our lab as well as in others. This talk will focus on the path our lab took to get here, and on questions for the future. From a mechanism design perspective, I will present our work on optimizing an underactuated hand transmission mechanism jointly with the grasping policy that uses it, an approach we refer to as “Hardware as Policy”. From a sensing perspective, I will present our optics-based tactile finger, providing accurate touch information over a multi-curved three-dimensional surface with no blind spots. From a motor learning perspective, I will talk about learning tactile-based policies for dexterous in-hand manipulation and object recognition. Finally, we can discuss implications for the future: how do we consolidate these gains by making dexterity more robust, versatile, and general, and what new applications can it enable?
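
As a deliberately toy illustration of the co-design idea (invented for this summary, not the lab’s implementation): “Hardware as Policy” can be read as appending hardware design parameters to the policy parameters and optimizing the single combined vector against rollout reward. The sketch below does this with a basic evolution-strategies loop on a made-up one-dimensional grasping objective; every name and the objective itself are assumptions.

```python
# Hypothetical sketch: jointly optimizing a hardware parameter (a transmission
# ratio) and tiny policy weights as one parameter vector, via a simple
# evolution-strategies (ES) gradient estimator. Toy objective, not real grasping.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(theta):
    """One simulated 'grasp': theta = [transmission_ratio, w0, w1].
    Reward is higher when the commanded aperture matches the object width."""
    ratio, w0, w1 = theta
    obj_width = rng.uniform(0.5, 1.5)          # randomized object size
    command = np.tanh(w0 * obj_width + w1)     # minimal 'policy'
    aperture = ratio * command                 # hardware maps command to motion
    return -(aperture - obj_width) ** 2        # negative squared matching error

theta = np.array([1.0, 0.5, 0.0])              # hardware + policy, one vector
sigma, lr, n_samples = 0.1, 0.05, 64

for step in range(300):
    eps = rng.standard_normal((n_samples, theta.size))
    returns = np.array([rollout_return(theta + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta = theta + lr / (n_samples * sigma) * (eps.T @ returns)  # ES update

print("co-optimized transmission ratio:", round(float(theta[0]), 3))
```

The actual work uses deep reinforcement learning with far richer simulation, but the structural point is the same: the hardware parameter receives an optimization signal through exactly the same objective as the policy.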

Bio: Matei Ciocarlie is an Associate Professor in the Mechanical Engineering Department at Columbia University, with affiliated appointments in Computer Science and the Data Science Institute. His work focuses on robot motor control, mechanism and sensor design, planning and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation. Matei completed his Ph.D. at Columbia University in New York; before joining the faculty at Columbia, Matei was a Research Scientist and then Group Manager at Willow Garage, Inc., and then a Senior Research Scientist at Google, Inc. In these positions, Matei contributed to the development of the open-source Robot Operating System (ROS), and led research projects in areas such as hand design, manipulation under uncertainty, and assistive robotics. In recognition of his work, Matei was awarded the Early Career Award by the IEEE Robotics and Automation Society, a Young Investigator Award by the Office of Naval Research, a CAREER Award by the National Science Foundation, and a Sloan Research Fellowship by the Alfred P. Sloan Foundation.

Axel Krieger (JHU): Smart and Autonomous Robots for Surgery

Dr. Axel Krieger

Associate Professor, Department of Mechanical Engineering, Johns Hopkins University


Time: 12:00 pm – 1:00 pm

Date: Friday, April 19

Location: BC 115

Abstract: Robotic-assisted surgery (RAS) systems incorporate highly dexterous tools, hand-tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires the surgeon to command every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality has been introduced in orthopedic procedures, radiotherapy, and cochlear implantation. Efforts to automate soft tissue surgery have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft tissue surgery into autonomous robotic surgery, improving patient outcomes by reducing reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in autonomous soft tissue surgery. Presentation topics will include a robotic system for supervised autonomous laparoscopic anastomosis and robotic trauma assessment and care.

Bio: Axel Krieger, PhD, joined the Department of Mechanical Engineering at Johns Hopkins University in July 2020. He leads a team of students, scientists, and engineers in the research and development of robotic systems for surgery and interventions. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include the medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining Johns Hopkins, he was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of industry experience at Sentinelle Medical Inc. and Hologic Inc., where, as a product leader, he developed devices and software systems from concept to FDA approval.

Dinesh Jayaraman (UPenn): The Sensory Needs of Robot Learners

Dr. Dinesh Jayaraman

Assistant Professor, GRASP Lab, University of Pennsylvania


Time: 11:30 am – 12:30 pm

Date: Friday, April 12

Location: BC 220

Abstract: General-purpose robots of the future will need vision and learning, but vision-based robot learning today is inflexible and inefficient: it needs robot- and task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibility in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about pre-trained object-centric visual representations that track task-directed progress, task-relevant world model learning for model-based RL, and how robot learners can benefit from additional sensory inputs at training time.

Bio: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania’s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh’s research has received a Best Paper Award at CoRL ’22, a Best Paper Runner-Up Award at ICRA ’18, a Best Application Paper Award at ACCV ’16, an Amazon Research Award in ’21, and the NSF CAREER Award in ’23, and has been featured on the cover page of Science Robotics and in several press outlets. His webpage is at: https://www.seas.upenn.edu/~dineshj/

Nikolai Matni (UPenn): What makes learning to control easy or hard?

Dr. Nikolai Matni

Assistant Professor, Electrical and Systems Engineering, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, April 5

Location: BC 115

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) characterizing fundamental limits of learning-enabled control, and (ii) developing novel robust imitation learning algorithms with finite sample-complexity guarantees. In both cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the AFOSR YIP (2024), the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award. He is also a co-author of papers that won the 2022 IEEE CDC Best Student Paper Award and the 2017 IEEE ACC Best Student Paper Award.

Jaime Fernández Fisac (Princeton University): Games and Filters: a Road to Safe Robot Autonomy

Dr. Jaime Fernández Fisac

Assistant Professor, Department of Electrical and Computer Engineering, Princeton University


Time: 11 am (coffee and cookies at 10:30 am in PA 367)

Date: Friday, December 8

Location: PA 466

Abstract: Autonomous robotic systems promise to revolutionize our homes, cities and roads—but will we trust them with our lives? This talk will take stock of today’s safety-critical robot autonomy, highlight recent advances, and offer some reasons for optimism. We will first see that a broad family of safety schemes from the last decade can be understood under the common lens of a universal safety filter theorem, lighting the way for the systematic design of next-generation safety mechanisms. We will explore reinforcement learning (RL) as a general tool to synthesize global safety filters for previously intractable robotics domains, such as walking or driving through abrupt terrain, by departing from the traditional notion of accruing rewards in favor of a safety-specific Bellman equation. We will show that this safety-RL formulation naturally allows learning from near misses and boosting robustness through adversarial gameplay. Finally, we will turn our attention to dense urban driving, where safety hinges on the autonomous vehicle’s rapidly unfolding interactions with other road users. We will examine how robots can bolster safety by leaning on their future ability to seek out missing key information about other agents’ intent or even their location. We will conclude with an outlook on future autonomous systems and the role of transparent, real-time safety proofs in generating public trust.
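
For readers who want the flavor of the “safety-specific Bellman equation” mentioned above, one form that has appeared in the safety-RL literature (a reference point, not necessarily the exact formulation in the talk) replaces the usual discounted sum of rewards with a discounted minimum of a safety margin $\ell(s)$, positive on safe states:

$$V(s) \;=\; (1-\gamma)\,\ell(s) \;+\; \gamma \min\Bigl\{\, \ell(s),\; \max_{a}\, V\bigl(f(s,a)\bigr) \Bigr\}.$$

Here $V$ tracks the worst safety margin the best controller can guarantee along a trajectory rather than accumulated reward; the zero level set of $V$ delimits the states that can be kept safe, and a safety filter overrides the task policy only near that boundary.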

Bio: Jaime Fernández Fisac is an Assistant Professor of Electrical and Computer Engineering at Princeton University, where he directs the Safe Robotics Laboratory and co-directs the Princeton AI4ALL outreach summer program. His research combines control theory, artificial intelligence, cognitive science, and game theory with the goal of enabling robots to operate safely in human-populated spaces in a way that is well understood, and thereby trusted, by their users and the public at large. Prior to joining the Princeton faculty, he was a Research Scientist at Waymo (formerly Google’s Self-Driving Car project) from 2019 to 2020, working on autonomous vehicle safety and interaction. He received an Engineering Diploma from the Universidad Politécnica de Madrid, Spain in 2012, a M.Sc. in Aeronautics from Cranfield University, U.K. in 2013, and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2019. He is a recipient of the La Caixa Foundation Fellowship, the Leon O. Chua Award, the Google Research Scholar Award, and the Sony Focused Research Award.


Kostas Daniilidis (UPenn): Efficiency through symmetry and event-based processing in robot perception


Dr. Kostas Daniilidis

Professor, Computer and Information Science, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, December 1

Location: BC 220


Abstract: Scaling up data and computation is regarded today as the key to achieving unprecedented performance in many visual tasks. Given the lack of scale in real-world experience, robotics has turned to simulation as the vehicle for scaling up. Biological perception, though, is characterized by principles of efficiency implemented through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, equivariant models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. In this talk, we will present a framework for achieving equivariance by design rather than by data augmentation, using models with orders of magnitude lower capacity than competing approaches. Then, we will show how the new paradigm of event-based vision can facilitate visual motion and reconstruction tasks with asynchronous, low-bandwidth processing.
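
For reference, the symmetry property such models build in is standard (background, not specific to the talk): a feature map $\Phi$ is equivariant to a group $G$ acting on its input and output spaces if

$$\Phi(g \cdot x) \;=\; g \cdot \Phi(x) \qquad \text{for all } g \in G,$$

so transforming the input (say, rotating an image) transforms the output in a predictable, structured way; invariance is the special case where $g$ acts trivially on the output. Baking this constraint into the architecture means the symmetry need not be learned from augmented data, which is the source of the parameter and data savings mentioned above.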

Bio: Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013, Associate Dean for Graduate Education from 2012 to 2016, and Faculty Director of Online Learning from 2013 to 2017. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986, and his PhD in Computer Science from the University of Karlsruhe in 1992, under the supervision of Hans-Hellmut Nagel. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.