Date and Time: Friday, Apr. 1, 2022, at 12 pm
Speaker: Prof. Anirudha Majumdar  (Princeton University)
Talk Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Abstract ⬇️

The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.

In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques on problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.

Short Bio ⬇️

Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.

Video ⬇️

 


Date and Time: Friday, Mar 25, 2022, at 12 pm
Speaker: Dr. Luis Guerrero Bonilla  (University of California, Irvine)
Talk Title: Defense and surveillance strategies based on set-invariance for multi-robot systems

Abstract ⬇️

In this talk, a formulation and a solution to the perimeter defense problem for one intruder and multiple defenders will be presented. The proposed reactive closed-form control laws allow for groups of robots with different body sizes and speeds to defend polygonal perimeters. Extensions of the formulation and solution to area defense and robust defense will also be discussed, as well as experiments showing the application and effectiveness of the theoretical results.

Short Bio ⬇️

Luis Guerrero Bonilla received his B.Sc. in Mechatronics Engineering from Tecnológico de Monterrey, and his Ph.D. in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania. He has held postdoctoral positions at KTH Royal Institute of Technology and the Georgia Institute of Technology. He is currently a postdoctoral researcher at the University of California, Irvine, where he develops control strategies for defense and surveillance in collaboration with the US Army Research Laboratory. His research interests include autonomy and resilience of multi-robot systems.

 


Date and Time: Friday, Mar 11, 2022, at 12 pm
Speaker: Prof. Jnaneshwar Das  (Arizona State University)
Talk Title: Closing the Loop on Robotic Sampling of Terrestrial and Aquatic Environments

Abstract ⬇️

My talk will present our methodology for closing the loop on the observation and monitoring of terrestrial and aquatic environments. First, I will present a drone-based, data-driven geomorphology pipeline for the mapping of rocky fault scarps and sparse geologic features such as precariously balanced rocks, which are negative indicators of strong seismic activity. Then, I will discuss our efforts in the aquatic realm, with robotic boats and underwater drones, for coral reef monitoring. Finally, I will present preliminary results from the NASA TechFlights Lunar Lander ExoCam project, in which researchers from Zandef Deksit Inc., Honeybee Robotics, and ASU are simulating lunar landings on Earth to better understand regolith interactions during landing. I will close with highlights from the annual NSF cyber-physical systems (CPS) competitions, in which students are challenged to develop autonomy for drones to enable deployment and recovery of sensor probes. Ongoing activities for the 2021-2022 event will be showcased.

Short Bio ⬇️

Jnaneshwar Das holds the Alberto Enrique Behar Research Professorship at the ASU School of Earth and Space Exploration. He is also a core faculty member at the ASU Center for Global Discovery and Conservation Science. Das is the director of the Distributed Robotic Exploration and Mapping Systems (DREAMS) laboratory, which leverages robotics and artificial intelligence to close the loop on environmental monitoring, with research spanning marine sciences, precision agriculture, geomorphology, and ecology. Das obtained his Ph.D. in computer science from the University of Southern California in 2014, after which he was a postdoctoral researcher at the GRASP Laboratory, University of Pennsylvania, until his appointment at ASU in 2018.

Video ⬇️

 


Date and Time: Friday, Feb 25, 2022, at 12 pm
Speaker: Dr. Lars Lindemann  (University of Pennsylvania)
Talk Title: Safe and Robust Data-Enabled Autonomy

Abstract ⬇️

Autonomous systems show great promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Over the past years, there has been tremendous success in the development of autonomous systems, which was especially accelerated by the computational advances in machine learning and AI. At the same time, however, new fundamental questions were raised regarding the safety and reliability of these increasingly complex systems that often operate in uncertain and dynamic environments. In this seminar, I will provide new insights and exciting opportunities to address these challenges.

In the first part of the seminar, I will present a data-driven optimization framework to learn safe control laws for dynamical systems. For most safety-critical systems, such as self-driving cars, safe expert demonstrations in the form of system trajectories that show safe system behavior are readily available or can easily be collected. At the same time, accurate models of these systems can be identified from data or obtained from first-principles modeling. To learn safe control laws, I present a constrained optimization problem with constraints on the expert demonstrations and the system model. Safety guarantees are provided in terms of the density of the data and the smoothness of the system model. Two case studies on a self-driving car and a bipedal walking robot illustrate the presented method.

To design autonomous systems that also operate safely outside the lab, robustness against various forms of uncertainty is of paramount importance. The second part of the seminar is motivated by this fact and takes a closer look at temporal robustness, which is much less studied than spatial robustness despite its importance, e.g., for jitter in autonomous driving or delays in multi-agent systems. We will introduce temporal robustness to quantify the robustness of a system trajectory against timing uncertainties. For stochastic systems, we consequently obtain a distribution of temporal robustness values. I will show in particular how risk measures, classically used in finance, can quantify the risk of a system failing to be temporally robust, and how we can estimate this risk from data. We will conclude with numerical simulations and illustrate the benefits of using risk measures.
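
As a rough illustration of the risk-measure idea, here is a minimal sketch (not the speaker's implementation; all numbers are made up) of estimating the Conditional Value-at-Risk (CVaR) of sampled robustness losses, i.e., the average of the worst tail of outcomes:

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Estimate CVaR_alpha: the mean of the worst (1 - alpha) fraction
    of the sampled loss values (higher loss = less robust)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil((1 - alpha) * len(losses)))  # number of tail samples
    return losses[-k:].mean()

# Toy example: negated temporal-robustness samples (losses) from 10 runs.
samples = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
print(empirical_cvar(samples, alpha=0.8))  # mean of the 2 worst: (1.0 + 2.0) / 2 = 1.5
```

Unlike the sample mean, CVaR focuses on the tail of the distribution, which is why it is attractive for quantifying the risk of rare but severe robustness violations.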

Short Bio ⬇️

Lars Lindemann is currently a Postdoctoral Researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his B.Sc. degree in Electrical and Information Engineering and his B.Sc. degree in Engineering Management, both in 2014, from the Christian-Albrechts-University (CAU), Kiel, Germany. He received his M.Sc. degree in Systems, Control and Robotics in 2016 and his Ph.D. degree in Electrical Engineering in 2020, both from KTH Royal Institute of Technology, Stockholm, Sweden. His current research interests include systems and control theory, formal methods, data-driven control, and autonomous systems. Lars received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and was a Best Student Paper Award Finalist at the 2018 American Control Conference. He also received the Student Best Paper Award as a co-author at the 60th IEEE Conference on Decision and Control.

Video ⬇️

 


Date and Time: Friday, Feb 18, 2022, at 12 pm
Speaker: Dr. Johannes Betz  (University of Pennsylvania)
Talk Title: Autonomous Vehicles on the Edge: Autonomous Racing & The Indy Autonomous Challenge

Abstract ⬇️

The rising popularity of self-driving cars has led to the creation of a new research and development branch in recent years: autonomous racing. Algorithms and hardware for high-performance race vehicles aim to operate autonomously at the limits of the vehicle: high speed, high acceleration, high computational power, low reaction time, adversarial environments, racing opponents, etc. The increasing number of competitions in the field of autonomous racing not only excites the public but also provides researchers with ideal platforms to test their high-performance algorithms. This talk will give an overview of the current efforts in the field, the main research outcomes, and the open challenges associated with autonomous racing. In particular, we will focus on the Indy Autonomous Challenge and the software setup of the TUM Autonomous Motorsport team, the winning team of the Indy Autonomous Challenge in 2021. A detailed look into the software will show how each software module is connected and how high-speed autonomous driving on the racetrack can be achieved.

Short Bio ⬇️

Johannes Betz earned a B.Eng. and an M.Sc. in the field of Automotive Engineering. After completing his Ph.D. at the Technical University of Munich (TUM), he was a Postdoctoral Researcher at the Institute of Automotive Technology at TUM, where he founded the TUM Autonomous Motorsport team. Currently, he is a postdoctoral researcher in the xLab for Safe Autonomous Systems at the University of Pennsylvania. His research focuses on holistic software development for autonomous systems that must execute extreme motions at the dynamic limits in extreme and unknown environments, using modern algorithms from the field of artificial intelligence. Drawing on his additional M.A. in philosophy, he extends current path and behavior planners for autonomous systems with ethical theories.

 


Date and Time: Friday, Feb 11, 2022, at 12 pm
Speaker: Michael Sobrepera  (University of Pennsylvania)
Talk Title: Social Robot-Augmented Telerehab Robots

Abstract ⬇️

In this presentation, I will introduce a new class of rehabilitation robots: social robot-augmented telerehab robots. These systems are designed to address the challenges patients face in receiving care due to limited access to their rehab therapist. Traditional telepresence, while useful for overcoming some limits to care, fails to deliver the rich interactions and data desired for motor rehabilitation and assessment. I will briefly present the design of an exemplar system, Flo, which combines traditional telepresence and computer vision with a humanoid social robot that can play games with patients and guide them in a physically present and engaging way under the supervision of a remote clinician. The goal of such a system is to help motivate patients, promote understanding by patients, and drive both compliance and adherence. I will discuss results from a large survey of therapists which examined the perceived usefulness of such a system, the features they believe are important to make an effective telerehab robot, and their feelings towards robots in general. I will also present recent work testing the robot Flo with patients and therapists in an elder care center and with subjects in a laboratory environment.

Short Bio ⬇️

Michael Sobrepera is a sixth-year PhD student in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He works in the Rehab Robotics Lab, within GRASP, on developing social robots to improve tele-rehabilitation. He received an M.S. in robotics from Penn and a B.S. in biomedical engineering from Georgia Tech. He is conducting his work under an NIH F31 Fellowship.

 


Date and Time: Friday, Dec 03, 2021, at 12 pm
Speaker: Prof. Kaidi Xu  (Drexel University)
Talk Title: Neural Networks with Provable Robustness Guarantees

Abstract ⬇️

Neural networks have become a crucial element in modern artificial intelligence. However, they are often black-boxes and can sometimes behave unpredictably and produce surprisingly wrong results. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness such as safety and robustness. Unfortunately, the complexity of neural networks has made the task of formally verifying their properties very challenging. To tackle this challenge, we first propose an efficient verification algorithm based on linear relaxations of neural networks, which produces guaranteed output bounds given bounded input perturbations. The algorithm propagates linear inequalities through the network efficiently in a backward manner and can be applied to arbitrary network architectures using our auto_LiRPA library. To reduce relaxation errors, we further develop an efficient optimization procedure that can tighten verification bounds rapidly on GPUs. Lastly, I discuss how to further empower the verifier with branch and bound by incorporating the additional branching constraints into the bound propagation procedure. The combination of these advanced neural network verification techniques leads to α,β-CROWN (alpha-beta-CROWN), a scalable, powerful and GPU-based neural network verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP’21) with the highest total score.
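
To give a flavor of bound propagation, the sketch below uses interval arithmetic, a simpler relative of the linear relaxations used in CROWN/auto_LiRPA, to compute guaranteed output bounds for a tiny made-up ReLU network. This is an illustrative toy, not the talk's algorithm, and the weights are invented:

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an elementwise box [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_bounds(x, eps, layers):
    """Guaranteed output bounds for all inputs within L-inf radius eps of x."""
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b)
        if i < len(layers) - 1:           # ReLU on hidden layers
            l, u = np.maximum(l, 0), np.maximum(u, 0)
    return l, u

# Tiny made-up network: 2 inputs -> 2 hidden (ReLU) -> 1 output.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
          (np.array([[1.0, 1.0]]), np.array([0.5]))]
l, u = ibp_bounds(np.array([1.0, 0.0]), 0.1, layers)
print(l, u)  # l ≈ [1.3], u ≈ [1.7]; every true output for inputs in the box lies within
```

Linear relaxations tighten these bounds by tracking linear lower/upper functions of the input instead of constant intervals, which is what makes verifiers such as α,β-CROWN far less conservative than this toy.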

Short Bio ⬇️

Kaidi Xu is an assistant professor in the Department of Computer Science at Drexel University. He obtained his Ph.D. from Northeastern University in 2021. Kaidi’s primary research interest is the robustness of machine learning, including physical adversarial attacks and rigorous robustness verification. Kaidi has published in various top international conferences, and his work ‘Adversarial T-shirt’ has been covered by more than 200 media outlets. Kaidi and his team are also the global winners of the 2nd International Verification of Neural Networks Competition.

Video ⬇️

 


Date and Time: Friday, Nov 19, 2021, at 12 pm
Speaker: Xianyi Cheng  (Carnegie Mellon University)
Talk Title: Contact Mode Guided Motion Planning for General Nonprehensile Dexterous Manipulation

Abstract ⬇️

Human beings are highly creative in manipulating objects: grasping, pushing, pivoting, manipulating with limbs, using the environment as extra support, and so on. In contrast, robots lack such intelligence and capabilities for dexterous manipulation. How do we bring human-level dexterity into robotic manipulation? Our current work focuses on dexterous manipulation motion generation, the first step towards general manipulation intelligence. We consider dexterity achieved by planning through both robot hand contacts and environment contacts. The contact-rich nature of the problem makes it challenging and, to date, unsolved. To address it, we first studied the mechanics of contacts. We proved that the complexity of enumerating contact modes for one object is polynomial, rather than exponential, in the number of contacts. This exciting observation inspired us to use instantly enumerated contact modes as guidance for an RRT-based planner. Contact modes guide the motion generation like automatically generated motion primitives, capturing all possible transitions of contacts in the system. As a result, our planner can generate dexterous motions for many different nonprehensile manipulation tasks given only the start and goal object poses. We also observed that some generated strategies resemble human manipulation behaviors, such as the “simultaneous levering out and grasp formation” seen in human grasping. To the best of our knowledge, our planner is the first method capable of solving diverse dexterous manipulation tasks without any pre-designed skill or pre-specified contact modes.
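
To make the notion of a contact mode concrete, the toy sketch below brute-forces the naive enumeration for a planar system, assigning one label per contact. This naive approach is exponential; the talk's contribution is precisely that the geometrically feasible modes can instead be enumerated in polynomial time, which this sketch does not attempt:

```python
from itertools import product

# Naive enumeration of contact modes: in the planar case, each contact
# is either separating, sticking, or sliding in one of two tangential
# directions. Brute force yields 4^n combinations for n contacts.
MODES = ("separate", "stick", "slide+", "slide-")

def enumerate_modes(n_contacts):
    """Return every labeling of the contacts (exponential in n_contacts)."""
    return list(product(MODES, repeat=n_contacts))

modes = enumerate_modes(3)
print(len(modes))  # 4^3 = 64
```

Each such labeling corresponds to a different set of motion constraints on the object, which is why modes can serve as automatically generated motion primitives for a sampling-based planner.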

Short Bio ⬇️

Xianyi Cheng is a third-year Ph.D. student at Carnegie Mellon University, advised by Professor Matthew T. Mason. She received a master’s degree in Robotics from Carnegie Mellon University and a bachelor’s degree in Astronautical Engineering from Harbin Institute of Technology. Her primary research interests include mechanics, planning, and optimization in robotic manipulation. Specifically, her current work focuses on generating versatile and robust dexterous manipulation skills. She is a recipient of a Foxconn Graduate Fellowship.

Video ⬇️

 


Date and Time: Friday, Oct 29, 2021, at 12 pm
Speaker: Prof. Chengyu Li  (Villanova University)
Talk Title: Learning from nature: Odor-guided navigation of flying insects

Abstract ⬇️

Insects rely on their olfactory system for detecting food sources, prey, and mates. They can sense odors emitted from sources of their interest, use their highly efficient flapping-wing mechanism to follow odor trails, and track down odor sources. During such odor-guided navigation, flapping wings not only serve as propulsors for generating lift and maneuvering, but also actively draw odors to the antennae via wing-induced flow. This helps enhance olfactory detection, mimicking “sniffing” in mammals. However, the flow physics underlying this odor-tracking behavior is still unclear due to insects’ small wing size, fast flapping motion, and the unpredictability of their flying trajectories. Limited success has been achieved in evaluating the impact of wing-induced flow on odorant transport during odor-guided navigation. Utilizing an in-house computational fluid dynamics solver, we investigate the unsteady aerodynamics and olfactory sensitivities of insects in upwind surge motion. This study aims to advance our understanding of odor-tracking in animal navigation and to enable transformative advances in unmanned aerial vehicles, with potential applications in national security, chemical-disaster response, drug-trafficking detection, and navigation in GPS-denied indoor environments.

Short Bio ⬇️

Dr. Chengyu Li is an Assistant Professor in the Department of Mechanical Engineering at Villanova University. He received his Ph.D. in Mechanical and Aerospace Engineering from the University of Virginia in 2016 and was a Postdoctoral Researcher at Ohio State University from 2016 to 2018. His research is situated at the intersection of fluid dynamics and computation with an emphasis on engineering and healthcare applications. In particular, he focuses on developing state-of-the-art computational methods that leverage mathematical models and numerical simulations to improve understanding of biological and physiological flows. Dr. Li’s research interests are in bio-inspired flow, unsteady aerodynamics, and odorant transport phenomena of animal olfaction. His interdisciplinary research was recognized with the Polak Young Investigator Award from the Association for Chemoreception Sciences in 2017 and the Ralph E. Powe Junior Faculty Enhancement Award from Oak Ridge Associated Universities in 2019. He also received an NSF CAREER Award in 2021 for examining the fluid dynamic mechanisms of odor-guided flapping flight in nature.

 


Date and Time: Friday, Oct 22, 2021, at 12 pm
Speaker: Dr. Armon Shariati  (Amazon Robotics)
Talk Title: From Academia to Amazon – Life of a Roboticist

Abstract ⬇️

Amazon Robotics builds the world’s largest mobile robotic fleet where hundreds of thousands of robots help deliver hundreds of millions of customer orders per year. In order to support this massive system, Amazon Robotics relies on cutting edge technologies within the fields of robotic movement, machine learning, computer vision, manipulation, simulation, cloud computing, and data science. In this talk, I will provide an overview of some of the largest software challenges facing our scientists and engineers within each of these domains today. In addition, as a first year scientist fresh out of academia, I’ll also be sharing my own background and experiences which ultimately drove me to pursue my career at Amazon, in an effort to help guide those interested in following a similar path.

Short Bio ⬇️

Armon Shariati is a Research Scientist in the Applied Machine Learning team and the Research and Advanced Development team at Amazon Robotics. His research interests include large-scale optimization and machine learning. Armon joined Amazon after defending his doctoral thesis titled “Online Synthesis of Speculative Building Information Models for Robot Motion Planning” under the advisement of Prof. Camillo Jose Taylor at the University of Pennsylvania’s GRASP Laboratory. He earned his B.S. degree in Computer Engineering from Lehigh University in 2015, where he also worked as an undergraduate research assistant in the VADER Laboratory under the advisement of Prof. John Spletzer.

 


Date and Time: Friday, Sep 24, 2021, at 12 pm
Speaker: Jingxi Xu  (Columbia University)
Talk Title: Dynamic Grasping with Reachability and Motion Awareness

Abstract ⬇️

Grasping in dynamic environments presents a unique set of challenges: a stable and reachable grasp can become unreachable and unstable as the target object moves, motion planning must be adaptive and run in real time, and the delay in computation makes prediction necessary. In this talk, we present a dynamic grasping framework that is reachability-aware and motion-aware. Specifically, we model the reachability space of the robot using a signed distance field, which enables us to quickly screen unreachable grasps. We also train a neural network to predict grasp quality conditioned on the current motion of the target. Using these as ranking functions, we quickly filter a large grasp database down to a few grasps in real time. In addition, we present a seeding approach for arm motion generation that utilizes the solution from the previous time step. This quickly generates a new arm trajectory that is close to the previous plan and prevents fluctuation. We implement a recurrent neural network (RNN) for modeling and predicting object motion. Our extensive experiments demonstrate the importance of each of these components, and we validate the pipeline on a real robot.
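
The reachability-screening step might look roughly like the sketch below, where a crude spherical workspace stands in for the actual signed distance field (all names and numbers here are illustrative, not the authors' code):

```python
import numpy as np

# Hypothetical stand-in for the reachability signed distance field:
# negative inside the reachable workspace, positive outside.
ARM_BASE = np.array([0.0, 0.0, 0.3])
ARM_REACH = 0.8  # meters, made-up

def reachability_sdf(p):
    """Signed distance to the boundary of a spherical reachable set."""
    return np.linalg.norm(p - ARM_BASE) - ARM_REACH

def screen_grasps(grasp_positions, margin=0.05):
    """Keep only grasps comfortably inside the reachable set
    (SDF below -margin); a cheap O(n) pre-filter before ranking."""
    return [g for g in grasp_positions
            if reachability_sdf(np.asarray(g)) < -margin]

candidates = [(0.2, 0.1, 0.5), (1.5, 0.0, 0.3), (0.0, 0.6, 0.4)]
print(screen_grasps(candidates))  # the second grasp is out of reach and dropped
```

Because each query is a single distance evaluation, such a filter can prune thousands of database grasps per control cycle before the more expensive, motion-aware quality predictor is applied.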

Short Bio ⬇️

Jingxi is a second-year Ph.D. student in Computer Science at Columbia University, co-advised by Professor Matei Ciocarlie and Professor Shuran Song. He also works closely with Dinesh Jayaraman and Nikolai Matni at the University of Pennsylvania. Before joining Columbia, Jingxi spent time at MIT with Leslie Pack Kaelbling and Tomás Lozano-Pérez (2018) and at Columbia with Peter Allen (2018-2019). He received a Bachelor’s degree from Edinburgh. His research interests are machine learning and vision-based methods for improving the planning, perception, and control capabilities of autonomous robots.

Video ⬇️

 


Date and Time: Friday, June 18, 2021, at 11 am
Speaker: Prof. Stephanie Gil  (Harvard University)
Talk Title: Trust in Multi-Robot Systems and its Role in Achieving Resilient Coordination

Abstract ⬇️

Our understanding of multi-robot coordination and control has experienced great advances, to the point where deploying multi-robot systems in the near future seems to be a feasible reality. However, many of these algorithms are vulnerable to non-cooperation and/or malicious attacks that limit their practicality in real-world settings. An example is the consensus problem, where classical results hold that agreement cannot be reached when malicious agents make up more than half of the network connectivity; this quickly leads to limitations in the practicality of many multi-robot coordination tasks. However, with the growing prevalence of cyber-physical systems come novel opportunities for detecting attacks by cross-validation with physical channels of information. In this talk, we consider the class of problems where the probability of a particular (i,j) link being trustworthy is available as a random variable. We refer to these as “stochastic observations of trust.” We show that under this model, strong performance guarantees such as convergence for the consensus problem can be recovered, even in the case where the number of malicious agents is greater than half of the network connectivity and consensus would otherwise fail. Moreover, under this model we can reason about the deviation from the nominal (no-attack) consensus value and the rate of achieving consensus. Finally, we make the case for the importance of deriving such stochastic observations of trust for cyber-physical systems, and we demonstrate one such example for the Sybil attack that uses wireless communication channels to arrive at the desired observations of trust. In this way, our results demonstrate the promise of exploiting trust to provide a novel perspective on achieving resilient coordination in multi-robot systems.
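
As a hypothetical toy of the idea (not the speaker's algorithm or trust model), the simulation below runs consensus where legitimate agents weight incoming values by stochastic trust observations, discarding links whose estimated trust falls below one half. Note that the malicious agents outnumber the legitimate ones, the regime where classical consensus would fail:

```python
import numpy as np

rng = np.random.default_rng(0)
n_legit, n_mal = 4, 5                      # more malicious than legitimate agents
values = np.array([1.0, 2.0, 3.0, 4.0])    # legitimate agents' initial values
mal_value = 100.0                          # constant adversarial broadcast

def trust_estimate(is_legit, n_obs=200):
    """Stochastic trust observation: fraction of n_obs Bernoulli trials
    succeeding, with p > 1/2 for legitimate links and p < 1/2 for
    malicious ones (made-up probabilities)."""
    p = 0.8 if is_legit else 0.2
    return rng.binomial(n_obs, p) / n_obs

for _ in range(50):
    new = np.empty_like(values)
    for i in range(n_legit):
        # Weight each incoming value by its trust estimate minus 1/2,
        # clipping to zero so untrusted links are effectively cut.
        # (Trust is re-sampled each round here for simplicity.)
        w_legit = [max(trust_estimate(True) - 0.5, 0) for _ in range(n_legit)]
        w_mal = [max(trust_estimate(False) - 0.5, 0) for _ in range(n_mal)]
        num = sum(w * v for w, v in zip(w_legit, values)) \
              + sum(w * mal_value for w in w_mal)
        den = sum(w_legit) + sum(w_mal)
        new[i] = num / den
    values = new
print(values)  # all legitimate agents end up close to the legitimate average (2.5)
```

With enough trust observations, the malicious links receive near-zero weight, so the adversarial broadcast barely shifts the agreement point, echoing the deviation-from-nominal guarantees discussed in the talk.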

Short Bio ⬇️

Stephanie is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers around trust and coordination in multi-robot systems. Her research has received recognition including the National Science Foundation CAREER award (2019) and selection as a 2020 Sloan Research Fellow for her contributions at the intersection of robotics and communication. She held a Visiting Assistant Professor position at Stanford University during the summer of 2019 and an Assistant Professorship at Arizona State University from 2018 to 2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, which resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and on accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.

 


Date and Time: Friday, June 4, 2021, at 11 am
Speaker: Prof. Guilherme Pereira  (West Virginia University)
Talk Title: Contributions to Autonomous Field Robotics

Abstract ⬇️

In this talk, I will present the current research projects at the Field and Aerial Robotics Laboratory at West Virginia University. These projects aim at the development of mobile robotic systems that work for long periods in challenging real-world environments, including forests and underground mines. I will show our solutions for hardware design and localization but will focus the talk on motion planning. Our proposed motion planning approaches are mostly based on vector fields combined with optimal planners. In this combination, global vector fields encode the main robotic task but do not have complete knowledge of the environment, while optimal planners are used by the robot to track the vector field and avoid previously unknown obstacles. Results showing ground and aerial robots operating with this methodology will be presented throughout the talk.
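
A reactive potential-field caricature of the idea is sketched below: a global field encodes the task (head to the goal) while a local term handles a previously unknown obstacle. This simple field summation stands in for the talk's actual approach, which tracks the vector field with optimal planners; every function and number here is illustrative:

```python
import numpy as np

def goal_field(p, goal):
    """Global vector field encoding the task: unit vector toward the goal."""
    d = goal - p
    return d / (np.linalg.norm(d) + 1e-9)

def avoid_field(p, obs_center, obs_radius):
    """Local repulsive term for a previously unknown disc obstacle,
    active only within 0.5 m of the obstacle boundary."""
    d = p - obs_center
    dist = np.linalg.norm(d)
    if dist > obs_radius + 0.5:
        return np.zeros(2)
    return d / dist * (1.0 / max(dist - obs_radius, 1e-2))

p, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obs = (np.array([2.5, 0.1]), 0.5)   # center, radius of an unknown obstacle
path = [p.copy()]
for _ in range(400):
    v = goal_field(p, goal) + avoid_field(p, *obs)
    p = p + 0.05 * v / (np.linalg.norm(v) + 1e-9)  # fixed 5 cm steps
    path.append(p.copy())
    if np.linalg.norm(p - goal) < 0.1:
        break
print(f"steps: {len(path) - 1}, final position: {path[-1]}")
```

The repulsive term keeps the trajectory collision-free near the obstacle; replacing this reactive sum with an optimal planner that tracks the global field, as in the talk, avoids the local-minimum issues that plague pure potential fields.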

Short Bio ⬇️

Guilherme A. S. Pereira received his bachelor’s and master’s degrees in electrical engineering from the Federal University of Minas Gerais (UFMG), Brazil, in 1998 and 2000, respectively, and his Ph.D. degree in computer science from the same university in 2003. He received the Gold Medal Award from the Engineering School of UFMG for graduating first among the electrical engineering students in 1998. From 11/2000 to 05/2003, he was a visiting scientist at the GRASP Laboratory of the University of Pennsylvania. He was also a visiting scholar at The Robotics Institute of Carnegie Mellon University from 08/2015 to 07/2016. From July 2004 to July 2018, he was a full-time professor in the electrical engineering department of the Federal University of Minas Gerais. He joined WVU in October 2018 as an associate professor in the Department of Mechanical and Aerospace Engineering. He also holds an adjunct appointment in the Lane Department of Computer Science and Electrical Engineering. At WVU, he directs the Field and Aerial Robotics (FARO) Laboratory. His main research areas are field robotics, robot motion planning, aerial robotics, state estimation, cooperative robotics, and computer vision.

Video ⬇️

 


Date and Time: Friday, May 14, 2021, at 11 am
Speaker: Prof. Rafael Fierro  (The University of New Mexico)
Talk Title: Towards Stable Interstellar Flight: Modeling and Stability of a Laser-Propelled Sailcraft

Abstract ⬇️

Traveling to distant stars has long fascinated humanity, but vast distances have limited space exploration to our solar system. The Breakthrough Starshot Program aims to eliminate this limitation by traveling to Alpha Centauri, which is 4.37 light-years away. The idea is to accelerate a sail to relativistic speeds using a laser beam aimed at the sail. Currently, the dynamic stability of the sailcraft is not well understood, and there is no agreement on the proper shape of the beam-driven sail. In this seminar, I will present some results on the dynamic stability of a beam-driven sail modeled as a rigid body whose shape is parameterized by a sweep function. We analyze the stability of the beam-driven sail and estimate its region of attraction (ROA) using Lyapunov theory and Sum-of-Squares (SOS) programming. I will also summarize our recent efforts in aerial robotics, including monitoring of extreme environments, a testbed and methods for countering small UAS, and airborne manipulation.

Short Bio ⬇️

Rafael Fierro is a Professor in the Department of Electrical and Computer Engineering at the University of New Mexico, where he has been since 2007. He received his Bachelor of Science from the Escuela Politécnica Nacional, Quito, Ecuador; his M.Sc. degree in control engineering from the University of Bradford, England; and his Ph.D. in electrical engineering from the University of Texas at Arlington in 1997. Before joining UNM, he held a postdoctoral appointment with the General Robotics, Automation, Sensing & Perception (GRASP) Laboratory at the University of Pennsylvania and a faculty position with the Department of Electrical and Computer Engineering at Oklahoma State University. His areas of interest include cyber-physical systems, heterogeneous robotic networks, and uncrewed aerial vehicles (UAVs). The National Science Foundation (NSF), US Army Research Laboratory (ARL), Air Force Research Laboratory (AFRL), Department of Energy (DOE), and Sandia National Laboratories have funded his research. He directs the AFRL-UNM Agile Manufacturing Center and the Multi-Agent, Robotics, and Heterogeneous Systems (MARHES) Laboratory. Dr. Fierro was the recipient of a Fulbright Scholarship, a 2004 National Science Foundation CAREER Award, and the 2008 International Society of Automation (ISA) Transactions Best Paper Award. He has served on the editorial boards of the Journal of Intelligent & Robotic Systems, IEEE Control Systems Magazine, IEEE Transactions on Control of Network Systems (T-CNS), and IEEE Transactions on Automation Science and Engineering (T-ASE).

 


Date and Time: Friday, April 2, 2021, at 11 am
Speaker: Cherie Ho  (Carnegie Mellon University)
Talk Title: Adaptive Safety Margins for Safe Replanning under Time-Varying Disturbances

Abstract ⬇️

Safe real-time navigation is a considerable challenge because engineers often need to work with uncertain vehicle dynamics, variable external disturbances, and imperfect controllers. A common strategy used to address safety is to employ hand-defined margins for obstacle inflation. However, arbitrary static margins often fail in more dynamic scenarios, and using worst-case assumptions proves to be overly conservative in most real-world settings where disturbance varies.
In this work, we propose a middle ground: safety margins that adapt on the fly using online measurements. In an offline phase, we use Monte Carlo simulations to pre-compute a library of safety margins for multiple levels of disturbance uncertainty. At runtime, our system estimates the current disturbance level and queries the associated safety margins, enabling real-time replanning that offers the best trade-off between safety and performance. Through extensive simulated and real-world flight tests, we show that our approach balances safety and performance better than baseline static-margin methods.
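The offline-library/online-query pattern described above can be sketched in a few lines. Everything below is illustrative and hypothetical (a 1-D toy error model, invented disturbance bins, and a 95th-percentile margin rule), not the vehicle dynamics or estimator from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline phase: for each disturbance level, run Monte Carlo rollouts of
# a closed-loop tracking error and keep a high quantile of the worst-case
# error as the safety margin for obstacle inflation.
disturbance_levels = [0.1, 0.5, 1.0, 2.0]  # hypothetical std-dev bins

def rollout_max_error(sigma, horizon=100):
    # Toy stable error dynamics driven by noise of strength sigma
    e, worst = 0.0, 0.0
    for _ in range(horizon):
        e = 0.9 * e + rng.normal(0.0, sigma)
        worst = max(worst, abs(e))
    return worst

margin_library = {
    sigma: np.quantile([rollout_max_error(sigma) for _ in range(500)], 0.95)
    for sigma in disturbance_levels
}

# Online phase: estimate the current disturbance level and look up the
# margin for the nearest precomputed bin before replanning.
def query_margin(sigma_hat):
    nearest = min(disturbance_levels, key=lambda s: abs(s - sigma_hat))
    return margin_library[nearest]
```

The library lookup is cheap enough to run inside a replanning loop, which is the point of moving the Monte Carlo work offline.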

Video ⬇️

 


Date and Time: Friday, Feb 22, 2021, at 11 am
Speaker: Dr. Juan D. Hernández Vega  (Cardiff University)
Talk Title: Motion Planning for Multipurpose Autonomous Systems

Abstract ⬇️

Once limited to highly controlled industrial environments, today robots are continuously evolving toward becoming autonomous entities capable of operating in changing and unstructured settings. This evolution is only possible thanks to multi-disciplinary efforts that endow autonomous systems with the capabilities required to deal with such changing and uncertain conditions. One of these disciplines, commonly known as motion planning, consists of computational techniques that calculate collision-free and feasible motions for autonomous systems. In this talk, we will discuss how motion planning is being used to improve the decision-making capabilities of different types of autonomous systems, such as autonomous underwater vehicles (AUVs), autonomous/automated cars, and service robots.

 


Date and Time: Friday, Dec 18, 2020, at 11 am
Speaker: Bing Song (Columbia University)
Talk Title: What can robot learning bring to control?

Video ⬇️

 


Date and Time: Friday, Dec 11, 2020, at 11 am
Speaker: Juan David Pabon (Universidad Nacional de Colombia)
Talk Title: Event-Triggered Control for Weight-Unbalanced Directed Robot Networks

Video ⬇️

 


Date and Time: Friday, Sep 25, 2020, at 11 am
Speaker: Bowen Wen (Rutgers University)
Talk Title: Robust, Real-time 6D Pose Estimation and Tracking for Robot Manipulation

Abstract ⬇️

Many manipulation tasks, such as picking, placing, or within-hand manipulation, require knowing the object’s pose relative to the manipulating agent, robot or human. These tasks frequently involve significant occlusions, which complicate the estimation and tracking process. This work presents a framework based on RGB-D data. It aims for robust pose estimation under severe occlusions, such as those arising when a hand occludes a manipulated object. It also aims for short response times so as to allow for responsive decision making in dynamic setups. In particular, the proposed framework leverages the complementary attributes of: i) novel deep neural network architectures via domain disentanglement, and ii) 3D geometric reasoning that uses the end-effector’s state to guide the pose estimation of severely occluded objects. Additionally, only synthetic data are required for training, making the proposed approach applicable to new tasks and circumventing the overhead of collecting labeled real-world data. Extensive evaluation on multiple real-world benchmarks, including public ones and some developed as part of this effort, demonstrates superior performance compared with state-of-the-art approaches and generalization to different robotic manipulation scenarios. The talk will conclude with ongoing efforts toward generalizing this line of work to perceiving and manipulating novel objects without access to object models.
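For readers unfamiliar with 6D pose estimation, the geometric core that learned trackers ultimately refine is the classical rigid-alignment problem: recover a rotation and translation from corresponding 3D points. The SVD-based (Kabsch) solution below is a standard textbook baseline, not the deep, occlusion-aware pipeline presented in the talk.

```python
import numpy as np

def estimate_rigid_pose(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    Classical SVD (Kabsch) solution for 6D pose from corresponding
    3D points: center both clouds, align via the SVD of the
    cross-covariance, and guard against reflections.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With exact correspondences this recovers the pose in closed form; the hard part in manipulation settings, and the subject of the talk, is producing reliable correspondences under severe occlusion.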

Short Bio ⬇️

Bowen Wen is currently a third-year PhD student in the Computer Science Department at Rutgers, working with Prof. Kostas Bekris. He received his BS in Energy and Automation from Xi’an Jiaotong University in 2016 and his MS in ECE from Ohio State University in 2018. His research interests include robotics, computer vision, and artificial intelligence. He has published papers at ICRA, IROS, RA-L, the Conference on Robot Learning (CoRL), etc. He worked as a research intern with Facebook Reality Labs, Amazon Lab126, and SenseTime in the summers of 2020, 2019, and 2018, respectively.

Video ⬇️

 

Previous Talks:

Sep 18th, 2020, Towards Scalable Algorithms for Distributed Optimization and Learning, Dr. Cesar Uribe (MIT)

Aug 28th, 2020, Tightly Coupled Vision-based Perception and Control for Autonomous Aerial Robots, Dr. Eric Cristofalo (Stanford)

Aug 14th, 2020, Introduction to F1tenth race: 1/10 the size, 10 times the fun, Prof. Rosa Zheng (Lehigh)

AIRLab Invited Talks