Prof. Qiyu Sun (University of Central Florida): Dynamic systems: Carleman meets Fourier

Dr. Qiyu Sun

Professor, Department of Mathematics, University of Central Florida

 

Time: 12:30 pm – 1:30 pm

Date: Friday, September 13

Location: BC 220

Abstract: Taylor expansion and Fourier expansion have been widely used to represent functions. The question to be discussed in this talk is whether there is an analog for nonlinear dynamic systems. In particular, we consider Carleman linearization and Carleman-Fourier linearization of nonlinear dynamic systems and show that the primary block of the finite-section approach converges exponentially to the solution of the original dynamic system.
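To give a flavor of the construction, here is a minimal numerical sketch of Carleman linearization (an illustrative toy example of ours, not material from the talk): for the scalar system x' = -x + x^2, the monomials y_k = x^k satisfy the infinite linear chain y_k' = -k y_k + k y_{k+1}, and truncating at order N (the finite-section step) yields a finite linear ODE whose first component approximates x(t).

```python
import numpy as np
from scipy.linalg import expm

def carleman_approx(x0, t, N):
    """Approximate x(t) for x' = -x + x^2 via truncated Carleman linearization.

    The lifting y_k = x^k gives y_k' = -k y_k + k y_{k+1}; dropping the
    coupling to y_{N+1} leaves the finite linear system y' = A y.
    """
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = -k        # coefficient of y_k in y_k'
        if k < N:
            A[k - 1, k] = k         # coupling to y_{k+1}
    y0 = np.array([x0 ** k for k in range(1, N + 1)])
    return (expm(A * t) @ y0)[0]    # first component approximates x(t)

# Compare against the exact solution x(t) = 1 / (1 + (1/x0 - 1) e^t)
x0, t = 0.2, 1.0
exact = 1.0 / (1.0 + (1.0 / x0 - 1.0) * np.exp(t))
approx = carleman_approx(x0, t, N=10)
```

For small initial data the truncation error here is tiny (the neglected term is of size N x0^(N+1)), which is the kind of exponential-in-N accuracy the abstract refers to for the primary block.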

Bio: Qiyu Sun received the Ph.D. degree in mathematics from Hangzhou University, Hangzhou, China, in 1990. He is currently a Professor of mathematics with the University of Central Florida, Orlando, FL, USA.  His research interests include applied and computational harmonic analysis, optimal control theory, mathematical signal processing and sampling theory. Together with Nader Motee, he received the 2019 SIAG/CST Best SICON Paper Prize for making a fundamental contribution to spatially distributed systems theory.  He is on the editorial board of several journals, including Journal of Fourier Analysis and Applications, Frontiers in Signal Processing, and  Sampling Theory, Signal Processing, and Data Analysis.

Sarah Dean (Cornell University): Foundations for Learning with Human Interaction & Dynamics

Dr. Sarah Dean

 Assistant Professor, Computer Science Department, Cornell University

 

Time: 12:00 pm – 1:00 pm

Date: Friday, September 6

Location: BC 220

Abstract: Modern robotic systems benefit from machine learning and human interactions. In this talk, I will discuss recent and ongoing work on developing algorithmic foundations for learning with and from human interactions. I will start with motivation: a collaboration on building a robot for assistive feeding that adaptively asks for help. The first key algorithmic question is how to decide when to query a human expert. I will describe a recently developed interactive bandit algorithm with favorable regret guarantees. The second key question is how to learn dynamical models of human mental state, like cognitive load or boredom, from partial observations. I will describe a learning algorithm based on ideas from system identification that comes with sample complexity guarantees. This is based on joint work with Rohan Banerjee, Tapomayukh Bhattacharjee, Jinyan Su, Wen Sun, Yahya Sattar, and Yassir Jedra.

Bio: Sarah is an Assistant Professor in the Computer Science Department at Cornell. She is interested in the interplay between optimization, machine learning, and dynamics, and her research focuses on understanding the fundamentals of data-driven control and decision-making. This work is grounded in and inspired by applications ranging from robotics to recommendation systems. Sarah has a PhD in EECS from UC Berkeley and did a postdoc at the University of Washington.

Matei Ciocarlie (Columbia University): We (finally) have dexterous robotic manipulation. Now what?

Dr. Matei Ciocarlie

Associate Professor, Department of Mechanical Engineering, Columbia University

 

Time: 11:00 am – 12:00 pm

Date: Friday, April 26

Location: PA 466

Abstract: At long last, robot hands are becoming truly dexterous. It took advances in sensor design, mechanisms, and computational motor learning all working together, but we’re finally starting to see true dexterity, in our lab as well as others. This talk will focus on the path our lab took to get here, and questions for the future. From a mechanism design perspective, I will present our work on optimizing an underactuated hand transmission mechanism jointly with the grasping policy that uses it, an approach we refer to as “Hardware as Policy”. From a sensing perspective, I will present our optics-based tactile finger, providing accurate touch information over a multi-curved three-dimensional surface with no blind spots. From a motor learning perspective, I will talk about learning tactile-based policies for dexterous in-hand manipulation and object recognition. Finally, we can discuss implications for the future: how do we consolidate these gains by making dexterity more robust, versatile, and general, and what new applications can it enable?

Bio: Matei Ciocarlie is an Associate Professor in the Mechanical Engineering Department at Columbia University, with affiliated appointments in Computer Science and the Data Science Institute. His work focuses on robot motor control, mechanism and sensor design, planning and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation. Matei completed his Ph.D. at Columbia University in New York; before joining the faculty at Columbia, Matei was a Research Scientist and then Group Manager at Willow Garage, Inc., and then a Senior Research Scientist at Google, Inc. In these positions, Matei contributed to the development of the open-source Robot Operating System (ROS), and led research projects in areas such as hand design, manipulation under uncertainty, and assistive robotics. In recognition of his work, Matei was awarded the Early Career Award by the IEEE Robotics and Automation Society, a Young Investigator Award by the Office of Naval Research, a CAREER Award by the National Science Foundation, and a Sloan Research Fellowship by the Alfred P. Sloan Foundation.

Axel Krieger (JHU): Smart and Autonomous Robots for Surgery

Dr. Axel Krieger

Associate Professor, Department of Mechanical Engineering, Johns Hopkins University

 

Time: 12:00 pm – 1:00 pm

Date: Friday, April 19

Location: BC 115

Abstract: Robot-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to command every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have been limited so far to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft tissue surgery to autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery.  Presentation topics will include a robotic system for supervised autonomous laparoscopic anastomosis and robotic trauma assessment and care. 

Bio: Axel Krieger, PhD, joined the Department of Mechanical Engineering at Johns Hopkins University in July 2020. He leads a team of students, scientists, and engineers in the research and development of robotic systems for surgery and interventions. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining Johns Hopkins, he was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where as Product Leader he developed devices and software systems from concept to FDA clearance.

Dinesh Jayaraman (UPenn): The Sensory Needs of Robot Learners


Dr. Dinesh Jayaraman

Assistant Professor, Grasp Lab, University of Pennsylvania

 

Time: 11:30 am – 12:30 pm

Date: Friday, April 12

Location: BC 220

Abstract: General-purpose robots of the future will need vision and learning, but such vision-based robot learning today is inflexible and inefficient: it needs robot-and-task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibilities in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about pre-trained object-centric visual representations that track task-directed progress, task-relevant world model learning for model-based RL, and how robot learners can benefit from additional sensory inputs at training time.

Bio: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania’s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh’s research has received a Best Paper Award at CoRL ’22, a Best Paper Runner-Up Award at ICRA ’18, a Best Application Paper Award at ACCV ’16, an Amazon Research Award in ’21, and the NSF CAREER Award in ’23, and has been featured on the cover page of Science Robotics and in several press outlets. His webpage is at: https://www.seas.upenn.edu/~dineshj/

Nikolai Matni (UPenn): What makes learning to control easy or hard?


Dr. Nikolai Matni

Assistant Professor, Electrical and Systems Engineering, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, April 5

Location: BC 115

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) characterizing fundamental limits of learning-enabled control, and (ii) developing novel robust imitation learning algorithms with finite sample-complexity guarantees. In both cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in the Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the AFOSR YIP (2024), NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award. He is also a co-author on papers that have won the 2022 IEEE CDC Best Student Paper Award and the 2017 IEEE ACC Best Student Paper Award.

Jaime Fernández Fisac (Princeton University): Games and Filters: a Road to Safe Robot Autonomy

Dr. Jaime Fernández Fisac

Assistant Professor, Department of Electrical and Computer Engineering, Princeton University

 

Time: 11 am (coffee and cookies at 10:30 am in PA 367)

Date: Friday, December 8

Location: PA 466

Abstract: Autonomous robotic systems promise to revolutionize our homes, cities, and roads—but will we trust them with our lives? This talk will take stock of today’s safety-critical robot autonomy, highlight recent advances, and offer some reasons for optimism. We will first see that a broad family of safety schemes from the last decade can be understood under the common lens of a universal safety filter theorem, lighting the way for the systematic design of next-generation safety mechanisms. We will explore reinforcement learning (RL) as a general tool to synthesize global safety filters for previously intractable robotics domains, such as walking or driving through abrupt terrain, by departing from the traditional notion of accruing rewards in favor of a safety-specific Bellman equation. We will show that this safety-RL formulation naturally allows learning from near misses and boosting robustness through adversarial gameplay. Finally, we will turn our attention to dense urban driving, where safety hinges on the autonomous vehicle’s rapidly unfolding interactions with other road users. We will examine how robots can bolster safety by leaning on their future ability to seek out missing key information about other agents’ intent or even their location. We will conclude with an outlook on future autonomous systems and the role of transparent, real-time safety proofs in generating public trust.
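As a rough illustration of what a safety-specific Bellman backup looks like (a generic textbook-style construction of ours, not the talk's RL formulation; the grid world, drift, and margin function below are invented for the sketch), the safety value V(s) = min( l(s), max_a V(s') ) tracks the worst safety margin l encountered along the best-controlled trajectory, in place of a sum of rewards:

```python
import numpy as np

# Toy safety value iteration on a 1-D grid with rightward drift.
# States s = 0..12; actions a in {-1, 0, +1}; dynamics s' = clip(s + a + 1),
# so the +1 models drift the controller must fight.
# Safety margin l(s) = 9 - s: negative past s = 9 (the unsafe region).
n_states = 13
l = np.array([9 - s for s in range(n_states)], dtype=float)

V = l.copy()
for _ in range(n_states):  # enough sweeps to converge on this small grid
    V_new = np.empty_like(V)
    for s in range(n_states):
        succ = [min(max(s + a + 1, 0), n_states - 1) for a in (-1, 0, 1)]
        # safety Bellman backup: worst margin so far vs. best reachable value
        V_new[s] = min(l[s], max(V[v] for v in succ))
    if np.allclose(V_new, V):
        break
    V = V_new

safe_set = np.where(V > 0)[0]  # states from which safety can be maintained
```

States with V(s) > 0 form the maximal safe set, which is exactly what a safety filter monitors before overriding a nominal policy.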

Bio: Jaime Fernández Fisac is an Assistant Professor of Electrical and Computer Engineering at Princeton University, where he directs the Safe Robotics Laboratory and co-directs the Princeton AI4ALL outreach summer program. His research combines control theory, artificial intelligence, cognitive science, and game theory with the goal of enabling robots to operate safely in human-populated spaces in a way that is well understood, and thereby trusted, by their users and the public at large. Prior to joining the Princeton faculty, he was a Research Scientist at Waymo (formerly Google’s Self-Driving Car project) from 2019 to 2020, working on autonomous vehicle safety and interaction. He received an Engineering Diploma from the Universidad Politécnica de Madrid, Spain in 2012, an M.Sc. in Aeronautics from Cranfield University, U.K. in 2013, and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2019. He is a recipient of the La Caixa Foundation Fellowship, the Leon O. Chua Award, the Google Research Scholar Award, and the Sony Focused Research Award.

 


Kostas Daniilidis (Upenn): Efficiency through symmetry and event-based processing in robot perception

 

Dr. Kostas Daniilidis

Professor, Department of Computer and Information Science, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, December 1

Location: BC 220

 

Abstract: Scaling up data and computation is regarded today as the key to achieving unprecedented performance in many visual tasks. Given the lack of scale in real-world experience, robotics has turned to simulation as the vehicle for scaling up. Biological perception, though, is characterized by principles of efficiency implemented through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, equivariant models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. In this talk, we will present a framework for achieving equivariance by design rather than data augmentation, using orders of magnitude lower capacity models than competing approaches. Then, we will show how the new paradigm of event-based vision can facilitate visual motion and reconstruction tasks with asynchronous low-bandwidth processing.

Bio: Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013, Associate Dean for Graduate Education from 2012 to 2016, and Faculty Director of Online Learning from 2013 to 2017. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986, and his PhD in Computer Science from the University of Karlsruhe in 1992, under the supervision of Hans-Hellmut Nagel. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.

 

Philip Dames (Temple University): Developing and Deploying Situational Awareness in Autonomous Robotic Systems


Dr. Philip Dames

Associate Professor, Department of Mechanical Engineering, Temple University

Time: 11:00 am – 12:00 pm

Date: Friday, November 10

Location: PA 466

Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework, allowing robots to estimate, in real time, the relative importance of each portion of the environment, and dynamic tessellation schemes, which account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team. The second problem is autonomous navigation through dynamic social spaces filled with people. To solve this, we develop a novel neural network-based controller that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data.
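To make the tessellation idea concrete, here is a generic Voronoi coverage sketch (a standard Lloyd-style step with invented robot positions and a uniform importance map, not the lab's uncertainty-aware scheme): each robot is assigned the region of points nearest to it, and its next waypoint is that region's importance-weighted centroid.

```python
import numpy as np

# Discretize the unit-square environment into candidate points
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)         # (400, 2)
importance = np.ones(len(pts))                           # uniform weight map

robots = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])  # invented poses

# Voronoi assignment: each point belongs to its nearest robot
d = np.linalg.norm(pts[:, None, :] - robots[None, :, :], axis=2)
owner = d.argmin(axis=1)                                 # (400,)

# Importance-weighted centroid of each robot's cell = its next waypoint
targets = np.array([
    np.average(pts[owner == i], axis=0, weights=importance[owner == i])
    for i in range(len(robots))
])
```

Replacing the uniform importance map with a target-density estimate from the MTT, and the plain distance with an uncertainty-aware one, recovers the spirit of the dynamic tessellation schemes the abstract describes.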

Bio: Philip Dames is an Associate Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015 and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010. He is the recipient of an NSF CAREER award. His research aims to improve robots’ ability to operate in complex, real-world environments to address societal needs.

Alireza Ramezani (Northeastern University): Multi-Modal Mobility Morphobot (M4) with Appendage Repurposing for Locomotion Plasticity Enhancement


 

Dr. Alireza Ramezani

Assistant Professor, Department of Electrical and Computer Engineering, Northeastern University

Time: 12:00 pm – 1:00 pm

Date: Friday, November 3

Location: BC 115

Abstract: Robot designs can take many inspirations from nature, where there are many examples of highly resilient and fault-tolerant locomotion strategies to navigate complex terrains by recruiting multi-functional appendages. For example, birds such as Chukars and Hoatzins can repurpose wings for quadrupedal walking and wing-assisted incline running. These animals showcase impressive dexterity in employing the same appendages in different ways and generating multiple modes of locomotion, resulting in highly plastic locomotion traits which enable them to interact and navigate various environments and expand their habitat range. The robotic biomimicry of animals’ appendage repurposing can yield mobile robots with unparalleled capabilities. Taking inspiration from animals, we have designed a robot capable of negotiating unstructured, multi-substrate environments, including land and air, by employing its components in different ways as wheels, thrusters, and legs. This robot is called the Multi-Modal Mobility Morphobot, or M4 in short. M4 can employ its multi-functional components composed of several actuator types to (1) fly, (2) roll, (3) crawl, (4) crouch, (5) balance, (6) tumble, (7) scout, and (8) loco-manipulate. M4 can traverse steep slopes of up to 45 deg. and rough terrains with large obstacles when in balancing mode. M4 possesses onboard computers and sensors and can autonomously employ its modes to negotiate an unstructured environment. We present the design of M4 and several experiments showcasing its multi-modal capabilities.

Bio: Alireza Ramezani is an Assistant Professor in the Department of Electrical & Computer Engineering at Northeastern University (NU). Before joining NU in 2018, he served as a postdoctoral fellow at Caltech’s Division of Engineering and Applied Science. He received his Ph.D. in Mechanical Engineering from the University of Michigan, Ann Arbor, under the supervision of Jessy Grizzle. Alireza’s research interests lie in the design of bioinspired mobile robots with nontrivial morphologies, involving high degrees of freedom and dynamic interactions with the environment, as well as the analysis and nonlinear, closed-loop feedback design of locomotion systems. His designs have been featured in several high-impact journals, including two cover articles in Science Robotics, listed among the top 5% of all research outputs scored by Science Magazine, and one research article in Nature Communications. Alireza has twice been awarded the Breakthrough, Innovative, and Game-Changing (BIG) Idea Award from the Space Technology Mission Directorate (STMD) program (in 2020 and 2022) for designing bioinspired locomotion systems to explore Moon and Mars craters. Furthermore, in 2022, Northeastern’s team, under his leadership, won NASA’s top award, the ARTEMIS Award, at the BIG Idea Challenge. He has also held a position through JPL’s Faculty Research Program. Since 2018, Alireza’s research has received widespread attention and has been covered by over 200 news outlets, including IEEE Spectrum, Space Magazine, the Independent, the New York Times, the Wall Street Journal, the Associated Press, National Geographic, CNN, NBC, and Euronews.