Axel Krieger (JHU): Smart and Autonomous Robots for Surgery (4/19)

Dr. Axel Krieger

Associate Professor, Department of Mechanical Engineering, Johns Hopkins University

 

Time: 12:00 pm – 1:00 pm

Date: Friday, April 19

Location: BC 115

Abstract: Robot-assisted surgery (RAS) systems incorporate highly dexterous tools, hand-tremor filtering, and motion scaling to enable minimally invasive surgical approaches, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires the surgeon to command every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality has been introduced in orthopedic procedures, radiotherapy, and cochlear implantation. Efforts to automate soft-tissue surgery have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft-tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft-tissue surgery into autonomous robotic surgery, improving patient outcomes by reducing reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies for overcoming the challenges of autonomous soft-tissue surgery. Presentation topics will include a robotic system for supervised autonomous laparoscopic anastomosis and robotic trauma assessment and care.

Bio: Axel Krieger, PhD, joined the Department of Mechanical Engineering at Johns Hopkins University in July 2020. He leads a team of students, scientists, and engineers in the research and development of robotic systems for surgery and interventions. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include the medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining Johns Hopkins University, Professor Krieger was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where he was a product leader developing devices and software systems from concept to FDA clearance.

Dinesh Jayaraman (UPenn): The Sensory Needs of Robot Learners


Dr. Dinesh Jayaraman

Assistant Professor, GRASP Lab, University of Pennsylvania

 

Time: 11:30 am – 12:30 pm

Date: Friday, April 12

Location: BC 220

Abstract: General-purpose robots of the future will need vision and learning, but vision-based robot learning today is inflexible and inefficient: it needs robot- and task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibility in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about pre-trained object-centric visual representations that track task-directed progress, task-relevant world model learning for model-based RL, and how robot learners can benefit from additional sensory inputs at training time.

Bio: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania’s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh’s research has received a Best Paper Award at CoRL ’22, a Best Paper Runner-Up Award at ICRA ’18, a Best Application Paper Award at ACCV ’16, an Amazon Research Award in ’21, and the NSF CAREER award in ’23, and has been featured on the cover of Science Robotics and in several press outlets. His webpage is at: https://www.seas.upenn.edu/~dineshj/

Nikolai Matni (UPenn): What makes learning to control easy or hard?


Dr. Nikolai Matni

Assistant Professor, Electrical and Systems Engineering, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, April 5

Location: BC 115

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) characterizing fundamental limits of learning-enabled control, and (ii) developing novel robust imitation learning algorithms with finite sample-complexity guarantees. In both cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and an M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the AFOSR YIP (2024), the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award. He is also a co-author on papers that have won the 2022 IEEE CDC Best Student Paper Award and the 2017 IEEE ACC Best Student Paper Award.

Jaime Fernández Fisac (Princeton University): Games and Filters: a Road to Safe Robot Autonomy

Dr. Jaime Fernández Fisac

Assistant Professor, Department of Electrical and Computer Engineering, Princeton University

 

Time: 11 am (coffee and cookies at 10:30 am in PA 367)

Date: Friday, December 8

Location: PA 466

Abstract: Autonomous robotic systems promise to revolutionize our homes, cities, and roads—but will we trust them with our lives? This talk will take stock of today’s safety-critical robot autonomy, highlight recent advances, and offer some reasons for optimism. We will first see that a broad family of safety schemes from the last decade can be understood under the common lens of a universal safety filter theorem, lighting the way for the systematic design of next-generation safety mechanisms. We will explore reinforcement learning (RL) as a general tool to synthesize global safety filters for previously intractable robotics domains, such as walking or driving through abrupt terrain, by departing from the traditional notion of accruing rewards in favor of a safety-specific Bellman equation. We will show that this safety-RL formulation naturally allows learning from near misses and boosting robustness through adversarial gameplay. Finally, we will turn our attention to dense urban driving, where safety hinges on the autonomous vehicle’s rapidly unfolding interactions with other road users. We will examine how robots can bolster safety by leaning on their future ability to seek out missing key information about other agents’ intent or even their location. We will conclude with an outlook on future autonomous systems and the role of transparent, real-time safety proofs in generating public trust.
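The least-restrictive safety-filter pattern that underlies much of this line of work can be sketched in a few lines: the task controller’s command passes through unchanged unless it would drive the next state out of the safe set, in which case a fallback safe action overrides it. The 1-D dynamics, safe set, and fallback action below are hypothetical illustrations, not the speaker’s implementation:

```python
def safe_set(x):
    # hypothetical safe set: position must stay below a wall at x = 10
    return x < 10.0

def step(x, u):
    # trivial 1-D dynamics: position plus commanded velocity (dt = 1)
    return x + u

def safety_filter(x, u_task, u_safe=-1.0):
    """Least-restrictive filter: pass the task command through unless the
    next state would leave the safe set, then fall back to a safe action."""
    if safe_set(step(x, u_task)):
        return u_task
    return u_safe

# the aggressive task command is overridden only near the safety boundary
print(safety_filter(x=0.0, u_task=2.0))  # far from the wall -> 2.0
print(safety_filter(x=9.5, u_task=2.0))  # would cross the wall -> -1.0
```

A learned safety filter replaces the hand-written `safe_set` check with a value function trained via a safety-specific Bellman backup like the one mentioned in the abstract; the interception logic stays the same.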

Bio: Jaime Fernández Fisac is an Assistant Professor of Electrical and Computer Engineering at Princeton University, where he directs the Safe Robotics Laboratory and co-directs the Princeton AI4ALL outreach summer program. His research combines control theory, artificial intelligence, cognitive science, and game theory with the goal of enabling robots to operate safely in human-populated spaces in a way that is well understood, and thereby trusted, by their users and the public at large. Prior to joining the Princeton faculty, he was a Research Scientist at Waymo (formerly Google’s Self-Driving Car project) from 2019 to 2020, working on autonomous vehicle safety and interaction. He received an Engineering Diploma from the Universidad Politécnica de Madrid, Spain in 2012, a M.Sc. in Aeronautics from Cranfield University, U.K. in 2013, and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2019. He is a recipient of the La Caixa Foundation Fellowship, the Leon O. Chua Award, the Google Research Scholar Award, and the Sony Focused Research Award.

 


Kostas Daniilidis (UPenn): Efficiency through symmetry and event-based processing in robot perception

 

Kostas Daniilidis
Prof., CIS, University of Pennsylvania

Dec 1, 2023 (Friday), 12:00 – 1:00 pm
Room 220, Building C

 

Abstract: Scaling up data and computation is regarded today as the key to achieving unprecedented performance in many visual tasks. Given the lack of scale in real-world experience, robotics has turned to simulation as the vehicle for scaling up. Biological perception, by contrast, is characterized by principles of efficiency implemented through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, equivariant models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. In this talk, we will present a framework for achieving equivariance by design rather than by data augmentation, using models with orders of magnitude lower capacity than competing approaches. Then, we will show how the new paradigm of event-based vision can facilitate visual motion and reconstruction tasks with asynchronous low-bandwidth processing.
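Equivariance by design, as opposed to equivariance learned through data augmentation, can be illustrated with the textbook case of circular convolution, which commutes with cyclic shifts by construction. The toy check below (plain Python, made-up values) verifies that shifting the input and then applying the map gives the same output as applying the map and then shifting:

```python
def shift(x, k):
    """Circularly shift a sequence by k positions (the group action)."""
    k %= len(x)
    return x[-k:] + x[:-k]

def circ_conv(x, w):
    """Circular cross-correlation of x with a small filter w --
    shift-equivariant by construction."""
    n = len(x)
    return [sum(w[j] * x[(i + j) % n] for j in range(len(w))) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0]
w = [0.5, -0.25]

# equivariance: f(g . x) == g . f(x) for every shift g
lhs = circ_conv(shift(x, 2), w)
rhs = shift(circ_conv(x, w), 2)
print(lhs == rhs)  # True
```

The same identity is what an equivariant network architecture guarantees for richer groups (rotations, permutations, etc.) without ever seeing transformed copies of the training data.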

Bio: Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013, Associate Dean for Graduate Education from 2012 to 2016, and Faculty Director of Online Learning from 2013 to 2017. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992, under the supervision of Hans-Hellmut Nagel. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.

 

Philip Dames (Temple University): Developing and Deploying Situational Awareness in Autonomous Robotic Systems


Dr. Philip Dames

Associate Professor, Department of Mechanical Engineering, Temple University

Nov 10, 2023, 11am – noon, Packard Lab room 466

Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework, allowing robots to estimate, in real time, the relative importance of each portion of the environment, and dynamic tessellation schemes, which account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team. The second problem is autonomous navigation through dynamic social spaces filled with people. To solve this, we develop a novel neural network-based controller that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data.
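As a caricature of the tessellation-based task assignment in the abstract, the sketch below partitions targets among robots by nearest distance; the robot names and coordinates are invented for illustration, and the actual framework additionally accounts for pose uncertainty, collision avoidance, and team heterogeneity:

```python
# hypothetical robot poses and target estimates (2-D positions)
robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
targets = [(1.0, 1.0), (9.0, -1.0), (2.0, 0.5)]

def assign(robots, targets):
    """Give each target to the closest robot -- a crude Voronoi-style
    partition of the tracking workload."""
    out = {name: [] for name in robots}
    for t in targets:
        nearest = min(robots, key=lambda name: (robots[name][0] - t[0]) ** 2
                                             + (robots[name][1] - t[1]) ** 2)
        out[nearest].append(t)
    return out

print(assign(robots, targets))
# {'r1': [(1.0, 1.0), (2.0, 0.5)], 'r2': [(9.0, -1.0)]}
```

In a distributed setting, each robot runs its own multi-target tracker and only exchanges enough information with neighbors to agree on such a partition in real time.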

Bio: Philip Dames is an Associate Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015 and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010. He is the recipient of an NSF CAREER award. His research aims to improve robots’ ability to operate in complex, real-world environments to address societal needs.

Alireza Ramezani (Northeastern University): Multi-Modal Mobility Morphobot (M4) with Appendage Repurposing for Locomotion Plasticity Enhancement


 

Alireza Ramezani,

Assist. Prof., ECE, Northeastern University

Nov. 3, 2023 (Friday), 12:00 – 1:00 pm

Room 115, Building C

Abstract: Robot designs can take many inspirations from nature, where there are many examples of highly resilient and fault-tolerant locomotion strategies for navigating complex terrains by recruiting multi-functional appendages. For example, birds such as chukars and hoatzins can repurpose wings for quadrupedal walking and wing-assisted incline running. These animals showcase impressive dexterity in employing the same appendages in different ways, generating multiple modes of locomotion and highly plastic locomotion traits that enable them to interact with and navigate various environments and expand their habitat range. Robotic biomimicry of animals’ appendage repurposing can yield mobile robots with unparalleled capabilities. Taking inspiration from animals, we have designed a robot capable of negotiating unstructured, multi-substrate environments, including land and air, by employing its components in different ways as wheels, thrusters, and legs. This robot is called the Multi-Modal Mobility Morphobot, or M4 for short. M4 can employ its multi-functional components, composed of several actuator types, to (1) fly, (2) roll, (3) crawl, (4) crouch, (5) balance, (6) tumble, (7) scout, and (8) loco-manipulate. M4 can traverse steep slopes of up to 45 degrees and rough terrains with large obstacles when in balancing mode. M4 possesses onboard computers and sensors and can autonomously employ its modes to negotiate an unstructured environment. We present the design of M4 and several experiments showcasing its multi-modal capabilities.

Bio: Alireza Ramezani is an Assistant Professor in the Department of Electrical & Computer Engineering at Northeastern University (NU). Before joining NU in 2018, he served as a postdoctoral fellow at Caltech’s Division of Engineering and Applied Science. He received his Ph.D. in Mechanical Engineering from the University of Michigan, Ann Arbor, under the supervision of Jessy Grizzle. Alireza’s research interests lie in the design of bioinspired mobile robots with nontrivial morphologies, involving high degrees of freedom and dynamic interactions with the environment, as well as the analysis and nonlinear, closed-loop feedback design of locomotion systems. His designs have been featured in several high-impact journals, including two cover articles in Science Robotics, listed among the top 5% of all research outputs scored by Science, and one research article in Nature Communications. Alireza has twice been awarded the Breakthrough, Innovative, and Game-Changing (BIG) Idea Award from the Space Technology Mission Directorate (STMD) program (in 2020 and 2022) for designing bioinspired locomotion systems to explore craters on the Moon and Mars. Furthermore, in 2022, Northeastern’s team, under his leadership, won NASA’s top award, the ARTEMIS Award, at the BIG Idea Challenge. He also holds a position in JPL’s Faculty Research Program. Since 2018, Alireza’s research has received widespread attention and has been covered by over 200 news outlets, including IEEE Spectrum, Space Magazine, the Independent, the New York Times, the Wall Street Journal, the Associated Press, National Geographic, CNN, NBC, and Euronews.

Navid Azizan (MIT): Machine Learning for Safety-Critical Systems


Navid Azizan

Esther & Harold E. Edgerton Career Development Assistant Professor, Massachusetts Institute of Technology

Oct. 27, 2023 (Friday), 12:00 – 1:00 pm

Room 220, Building C

 

Abstract: The integration of machine learning, particularly deep neural networks (DNNs), into autonomous systems has revolutionized their capabilities, enabling sophisticated interpretation of high-dimensional sensory data and informed decision-making. However, the deployment of these systems in safety-critical applications is hindered by the opaque nature of DNNs, especially their unpredictable behavior under out-of-distribution (OoD) or anomalous conditions. This talk presents recent results on enhancing the safety and reliability of machine learning models for autonomous systems. Specifically, we will discuss (1) run-time monitors for learning-enabled components, namely uncertainty estimation and anomaly detection mechanisms for pre-trained models as well as latent representations, mitigating risks associated with unforeseen operational deviations; (2) model adaptation techniques and continual learning algorithms that ensure consistent integration of new data without the setback of “catastrophic forgetting,” thereby sustaining the model’s adaptiveness and relevance in dynamic environments; and (3) safety-assured, learning-based control and decision-making systems, focusing on controllers intrinsically designed with safety and stability guarantees.
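As a toy stand-in for the latent-space monitors in item (1), the sketch below scores a test feature by its standardized distance from the training-feature mean and flags it once the score crosses a threshold. The features, the threshold, and the diagonal-covariance simplification are all illustrative assumptions, not the methods from the talk:

```python
import math

# hypothetical 2-D latent features of in-distribution training inputs
train = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0), (0.95, 1.05)]

mean = tuple(sum(c) / len(train) for c in zip(*train))
var = tuple(sum((x - m) ** 2 for x in c) / len(train)
            for c, m in zip(zip(*train), mean))

def anomaly_score(z, eps=1e-6):
    """Per-dimension standardized distance from the training mean."""
    return math.sqrt(sum((x - m) ** 2 / (v + eps)
                         for x, m, v in zip(z, mean, var)))

def is_anomalous(z, threshold=5.0):
    return anomaly_score(z) > threshold

print(is_anomalous((1.0, 1.0)))   # near the training data -> False
print(is_anomalous((8.0, -3.0)))  # far from the training data -> True
```

A run-time monitor of this shape sits beside the DNN: the network still produces its prediction, but downstream components can discount or reject it when the monitor fires.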

 

Bio: Navid Azizan is the Esther & Harold E. Edgerton (1927) Assistant Professor at MIT, where he is a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS) and holds dual appointments in the Department of Mechanical Engineering (Control, Instrumentation, & Robotics) and the Schwarzman College of Computing’s Institute for Data, Systems, & Society (IDSS). His research interests broadly lie in machine learning, systems and control, mathematical optimization, and network science. His research lab focuses on various aspects of enabling large-scale intelligent systems, with an emphasis on principled learning and optimization algorithms for autonomous systems and societal networks. He obtained his PhD in Computing and Mathematical Sciences (CMS) from the California Institute of Technology (Caltech), co-advised by Babak Hassibi and Adam Wierman, in 2020, his MSc in electrical engineering from the University of Southern California in 2015, and his BSc in electrical engineering with a minor in physics from Sharif University of Technology in 2013. Prior to joining MIT, he completed a postdoc in the Autonomous Systems Laboratory (ASL) at Stanford University in 2021. Additionally, he was a research scientist intern at Google DeepMind in 2019. His work has been recognized by several awards, including the 2020 Information Theory and Applications (ITA) Gold Graduation Award and the 2016 ACM GREENMETRICS Best Student Paper Award. He was named in the list of Leading Academic Data Leaders from the CDO Magazine in 2023, named an Amazon Fellow in Artificial Intelligence in 2017, and a PIMCO Fellow in Data Science in 2018. He was also the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran.

Kyriakos G. Vamvoudakis (Georgia Tech): Bounded Rational Intelligence and Metalearning in Autonomous Systems


 

Prof. Kyriakos G. Vamvoudakis

Dutton-Ducoffe Endowed Professor, The Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology

11:00 am – 12:00 pm, Friday October 20, 2023

Packard Lab 466

Abstract: Autonomous systems will be tasked with operating in complex, human-centric environments, both in cooperation and in competition with humans and other agents. In this talk, issues of prediction in the context of game theory and multi-agent cyber-physical systems will be addressed. To account for the cognitive limitations of human and machine decision-makers, we introduce ideas and principles of bounded rationality for autonomy using tools from control theory and reinforcement learning. Specifically, we will formulate level-k thinking and cognitive hierarchy models in nonlinear and linear noncooperative differential games, where each agent is assigned an intelligence level corresponding to a number of thinking iterations. The applicability of this approach will be highlighted via the example of a pursuit-evasion game between unmanned aerial vehicles. The versatility of the proposed methods will be shown via results of level-k thinking in discrete stochastic games. Finally, to design more advanced decision-making algorithms that explicitly exploit the learning abilities of the other agents in the environment, a meta-learning framework in games will be presented, via which an autonomous agent can achieve learning manipulation and deception.
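The level-k recursion itself is easy to state: a level-0 agent plays a fixed default action, and a level-k agent best-responds to an opponent modeled as level-(k-1). A minimal sketch on a symmetric 2x2 matrix game (the payoffs are illustrative, not taken from the talk):

```python
# symmetric 2x2 game, indexed payoff[own_action][other_action]
# (prisoner's-dilemma-style numbers, purely illustrative)
PAYOFF = [[3, 0],
          [5, 1]]

def best_response(payoff, other_action):
    """Action index maximizing payoff against a fixed opponent action."""
    values = [payoff[a][other_action] for a in range(len(payoff))]
    return max(range(len(values)), key=values.__getitem__)

def level_k_action(k, payoff=PAYOFF, level0=0):
    """Level-0 plays the default; level-k best-responds to level-(k-1)."""
    if k == 0:
        return level0
    return best_response(payoff, level_k_action(k - 1, payoff, level0))

# already at level 1 the agent deviates from the naive level-0 default
print([level_k_action(k) for k in range(4)])  # [0, 1, 1, 1]
```

The differential-game versions in the talk replace the discrete best response with the solution of a control problem at each level, but the hierarchy of nested opponent models is the same.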

About Dr. Kyriakos G. Vamvoudakis: Dr. Kyriakos G. Vamvoudakis was born in Athens, Greece. He received the Diploma (a five-year degree, equivalent to a Master of Science) in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2006 with highest honors. After moving to the United States, he studied at The University of Texas at Arlington with Frank L. Lewis as his advisor and received his M.S. and Ph.D. in Electrical Engineering in 2008 and 2011, respectively. From May 2011 to January 2012, he worked as an Adjunct Professor and Faculty Research Associate at the University of Texas at Arlington and at the Automation and Robotics Research Institute. From 2012 to 2016, he was a project research scientist at the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara. He was an assistant professor in the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech until 2018. He currently serves as the Dutton-Ducoffe Endowed Professor at The Daniel Guggenheim School of Aerospace Engineering at Georgia Tech, with a secondary appointment in the School of Electrical and Computer Engineering. His research interests include reinforcement learning, control theory, cyber-physical security, bounded rationality, and safe/assured autonomy. Dr. Vamvoudakis is the recipient of a 2019 ARO YIP award, a 2018 NSF CAREER award, a 2021 GT Chapter Sigma Xi Young Faculty Award, and several international awards, including the 2016 International Neural Network Society (INNS) Young Investigator Award, the Best Paper Award for Autonomous/Unmanned Vehicles at the 27th Army Science Conference in 2010, and the Best Researcher Award from the Automation and Robotics Research Institute in 2011. He has also served on various international program committees and has organized special sessions, workshops, and tutorials for several international conferences.
He currently is a member of the Technical Committee on Intelligent Control of the IEEE Control Systems Society, a member of the Technical Committee on Adaptive Dynamic Programming and Reinforcement Learning of the IEEE Computational Intelligence Society, a member of the IEEE Control Systems Society Conference Editorial Board, an Associate Editor of: Automatica; IEEE Transactions on Automatic Control; IEEE Computational Intelligence Magazine; IEEE Transactions on Systems, Man, and Cybernetics: Systems; IEEE Transactions on Artificial Intelligence; Neurocomputing; Journal of Optimization Theory and Applications; IEEE Control Systems Letters; and of Frontiers in Control Engineering-Adaptive, Robust and Fault Tolerant Control. He is also a registered Electrical/Computer engineer (PE), a member of the Technical Chamber of Greece, and a Senior Member of both IEEE and AIAA.

Pratik Chaudhari (UPenn): A Picture of the Prediction Space of Deep Networks

A Picture of the Prediction Space of Deep Networks

 

Prof. Pratik Chaudhari
Electrical and Systems Engineering and Computer and Information Science, University of Pennsylvania

Oct. 13, 2023 (Friday), 1:30 – 2:30 pm
Room 220, Building C

Abstract: Deep networks have many more parameters than training samples and can therefore overfit—and yet, they predict remarkably accurately in practice. Training such networks is a high-dimensional, large-scale, and non-convex optimization problem and should be prohibitively difficult—and yet, it is quite tractable. This talk aims to illuminate these puzzling contradictions.
We will argue that deep networks generalize well because of a characteristic structure in the space of learnable tasks. The input correlation matrix for typical tasks has a “sloppy” eigenspectrum where, in addition to a few large eigenvalues, there is a large number of small eigenvalues that are distributed uniformly over a very large range. As a consequence, the Hessian and the Fisher Information Matrix of a trained network also have a sloppy eigenspectrum. Using these ideas, we will demonstrate an analytical non-vacuous PAC Bayes generalization bound for general deep networks.
We will next develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we will reveal that the training process explores an effectively low-dimensional manifold. Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data augmentation techniques, and weight initializations, lie on the same manifold in the prediction space. We will also show that the predictions of networks being trained on different tasks (e.g., different subsets of ImageNet) using different representation learning methods (e.g., supervised, meta-, semi-supervised, and contrastive learning) also lie on a low-dimensional manifold.
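The "sloppy" eigenspectrum and the low effective dimensionality described in the abstract can be mimicked numerically: build a spectrum with a few large eigenvalues plus many small ones spread log-uniformly over several orders of magnitude, then measure its participation ratio. The numbers below are synthetic and only meant to show the qualitative effect:

```python
import random

random.seed(0)

# a synthetic "sloppy" spectrum: 3 dominant eigenvalues plus 500 small
# ones distributed log-uniformly across five orders of magnitude
large = [100.0, 50.0, 20.0]
small = [10 ** random.uniform(-6, -1) for _ in range(500)]
spectrum = sorted(large + small, reverse=True)

def effective_dim(eigs):
    """Participation ratio (sum)^2 / (sum of squares) -- a common proxy
    for how many directions carry most of the variance."""
    s = sum(eigs)
    s2 = sum(e * e for e in eigs)
    return s * s / s2

# 503 eigenvalues, but only a handful of effective directions
print(len(spectrum), effective_dim(spectrum))
```

Despite the 503 nominal dimensions, the participation ratio comes out between 2 and 3: the handful of large eigenvalues dominates, which is the qualitative signature of the sloppy spectra discussed in the talk.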

Bio: Pratik Chaudhari is an Assistant Professor in Electrical and Systems Engineering and Computer and Information Science at the University of Pennsylvania. He is a core member of the GRASP Laboratory. From 2018 to 2019, he was a Senior Applied Scientist at Amazon Web Services and a Postdoctoral Scholar in Computing and Mathematical Sciences at Caltech. Pratik received his PhD (2018) in Computer Science from UCLA, and his Master’s (2012) and Engineer’s (2014) degrees in Aeronautics and Astronautics from MIT. He was a part of nuTonomy Inc. (now Hyundai-Aptiv Motional) from 2014 to 2016. He is the recipient of the Amazon Machine Learning Research Award (2020), the NSF CAREER award (2022), and the Intel Rising Star Faculty Award (2022).