Papers at IROS 2021

Non-Prehensile Manipulation of Cuboid Objects Using a Catenary Robot (IROS + RAL)

Gustavo A. Cardona, Diego S. D’Antonio, Cristian-Ioan Vasile, and David Saldaña

Abstract: Manipulating objects with quadrotors has been widely studied in the literature, but the majority of those approaches assume quadrotors and loads are previously attached. This setup requires human intervention that is not always achievable or desirable in practice. Furthermore, most of the robot configurations consider rods, manipulators, magnets, and cables modeled as rigid links attached to predefined places on objects. In contrast, we are interested in manipulating objects that are not specifically designed to interact with quadrotors, e.g., no predefined connections, and that do not require humans to set up. In this paper, we control a catenary robot composed of a cable and two quadrotors attached to its ends. Our robot is tasked with moving cuboid objects (boxes) on a planar surface. We design a controller that allows the catenary robot to place the cable in a specific area on the box to perform dragging or rolling. We validate our control design in simulation and with real robots, where we show them rolling and dragging boxes to track desired trajectories.


AIRLab Members Contribute to the US Robotics Roadmap and Science Robotics

Professor Jeff Trinkle has been sharing his expertise and experiences with the robotics community for decades.

His most recent effort is co-authoring the 2020 edition of the “Roadmap for US Robotics – From Internet to Robotics”, which will be published as a journal paper in a few weeks.

We are looking forward to its publication!



Available now: Prof. Jeff Trinkle and his Ph.D. student, Jinda Cui, have published a review paper in Science Robotics:
(Author’s Publication Page for full-text)

This paper summarizes types of variations robots may encounter in human environments, and categorizes, compares, and contrasts the ways in which learning has been applied to manipulation problems through the lens of adaptability. Promising avenues for future research are proposed at the end.

A quick summary of this paper can be found in this report:


Lehigh’s AIRLab has created and shared knowledge in robotics since its founding, and it will continue to do so.

AIRLab Invited Talks

Date and Time: Friday, Apr. 1, 2022, at 12 pm
Speaker: Prof. Anirudha Majumdar  (Princeton University)
Talk Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Abstract ⬇️

The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.

In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques to problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
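As a back-of-the-envelope companion to the question in the abstract (not code from the talk; the speaker’s guarantees build on PAC-Bayes theory and are much tighter), one can compute a simple Hoeffding-style lower confidence bound on success in a novel environment, assuming environments are drawn i.i.d. from a fixed distribution. The function name and numbers below are purely illustrative:

```python
import math

def success_lower_bound(successes, trials, delta=0.01):
    """Hoeffding lower confidence bound on the true success probability.

    With probability at least 1 - delta over the sampled environments,
    the true success rate on a fresh environment from the same
    distribution is at least this value.
    """
    emp = successes / trials
    return max(0.0, emp - math.sqrt(math.log(1 / delta) / (2 * trials)))

# E.g., a policy that succeeded on 980 of 1000 sampled environments.
lb = success_lower_bound(successes=980, trials=1000)
```

With these made-up numbers the bound is roughly 0.93: a thousand sampled environments already pin the novel-environment success rate down to within about five percentage points.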

Short Bio ⬇️

Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.

Video ⬇️


Date and Time: Friday, Mar 25, 2022, at 12 pm
Speaker: Dr. Luis Guerrero Bonilla  (University of California, Irvine)
Talk Title: Defense and surveillance strategies based on set-invariance for multi-robot systems

Abstract ⬇️

In this talk, a formulation and a solution to the perimeter defense problem for one intruder and multiple defenders will be presented. The proposed reactive closed-form control laws allow for groups of robots with different body sizes and speeds to defend polygonal perimeters. Extensions of the formulation and solution to area defense and robust defense will also be discussed, as well as experiments showing the application and effectiveness of the theoretical results.

Short Bio ⬇️

Luis Guerrero Bonilla received his B.Sc. in Mechatronics Engineering from Tecnológico de Monterrey, and his Ph.D. in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania. He has held postdoctoral positions at KTH Royal Institute of Technology and Georgia Institute of Technology. He is currently a postdoctoral researcher at the University of California, Irvine, where he develops control strategies for defense and surveillance in collaboration with the US Army Research Laboratory. His research interests include autonomy and resilience of multi-robot systems.


Date and Time: Friday, Mar 11, 2022, at 12 pm
Speaker: Prof. Jnaneshwar Das  (Arizona State University)
Talk Title: Closing the Loop on Robotic Sampling of Terrestrial and Aquatic Environments

Abstract ⬇️

My talk will present our methodology for closing the loop on the observation and monitoring of terrestrial and aquatic environments. First, I will present a drone-based data-driven geomorphology pipeline for the mapping of rocky fault scarps, and sparse geologic features such as precariously balanced rocks that are negative indicators of strong seismic activity. Then, I will discuss our efforts in the aquatic realm, with robotic boats and underwater drones, for coral reef monitoring. Finally, I will present preliminary results from the NASA TechFlights Lunar Lander ExoCam project, where researchers from Zandef Deksit Inc., Honeybee Robotics, and ASU are simulating lunar landings on Earth to better understand regolith interactions during landing. I will close with highlights from the annual NSF cyber-physical systems (CPS) competitions, where students are challenged to develop autonomy for drones to enable deployment and recovery of sensor probes. Ongoing activities for the 2021-2022 event will be showcased.

Short Bio ⬇️

Jnaneshwar Das holds the Alberto Enrique Behar Research Professorship at the ASU School of Earth and Space Exploration. He is also a core faculty member at the ASU Center for Global Discovery and Conservation Science. Das is the director of the Distributed Robotic Exploration and Mapping Systems (DREAMS) laboratory, which leverages robotics and artificial intelligence for closing the loop on environmental monitoring, with research spanning marine sciences, precision agriculture, geomorphology, and ecology. Das obtained his Ph.D. in computer science from the University of Southern California in 2014, following which he was a postdoctoral researcher at the GRASP Laboratory, University of Pennsylvania, until his appointment at ASU in 2018.

Video ⬇️


Date and Time: Friday, Feb 25, 2022, at 12 pm
Speaker: Dr. Lars Lindemann  (University of Pennsylvania)
Talk Title: Safe and Robust Data-Enabled Autonomy

Abstract ⬇️

Autonomous systems show great promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Over the past years, there has been tremendous success in the development of autonomous systems, which was especially accelerated by the computational advances in machine learning and AI. At the same time, however, new fundamental questions were raised regarding the safety and reliability of these increasingly complex systems that often operate in uncertain and dynamic environments. In this seminar, I will provide new insights and exciting opportunities to address these challenges.

In the first part of the seminar, I will present a data-driven optimization framework to learn safe control laws for dynamical systems. For most safety-critical systems, such as self-driving cars, safe expert demonstrations in the form of system trajectories that show safe system behavior are readily available or can easily be collected. At the same time, accurate models of these systems can be identified from data or obtained from first principles. To learn safe control laws, I present a constrained optimization problem with constraints on the expert demonstrations and the system model. Safety guarantees are provided in terms of the density of the data and the smoothness of the system model. Two case studies on a self-driving car and a bipedal walking robot illustrate the presented method.

To design autonomous systems that also operate safely outside the lab, robustness against various forms of uncertainty is of paramount importance. The second part of the seminar is motivated by this fact and takes a closer look at temporal robustness, which is much less studied than spatial robustness despite its importance, e.g., jitter in autonomous driving or delays in multi-agent systems. We will introduce temporal robustness to quantify the robustness of a system trajectory against timing uncertainties. For stochastic systems, we consequently obtain a distribution of temporal robustness values. I particularly show how risk measures, classically used in finance, can be used to quantify the risk of a system failing to be robust, and how we can estimate this risk from data. We will conclude with numerical simulations and illustrate the benefits of using risk measures.
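The risk-measure idea in the second part can be illustrated in a few lines of code. The sketch below is an illustration of the general concept, not the speaker’s method: it estimates the conditional value-at-risk (CVaR), a standard risk measure from finance, over hypothetical samples of a temporal-robustness value, where lower means less robust:

```python
import numpy as np

def cvar(samples, alpha=0.9):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of outcomes.

    Here lower values mean less temporal robustness, so we average the
    lower tail of the sample distribution.
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(samples))))
    return samples[:k].mean()

# Hypothetical temporal-robustness samples (e.g., seconds of timing slack).
rng = np.random.default_rng(0)
robustness = rng.normal(loc=1.0, scale=0.3, size=1000)
risk = cvar(robustness, alpha=0.9)
```

Unlike the mean, CVaR focuses on the worst-case tail of the distribution, which is exactly what matters when asking how badly timing uncertainty can hurt a stochastic system.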

Short Bio ⬇️

Lars Lindemann is currently a Postdoctoral Researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received B.Sc. degrees in Electrical and Information Engineering and in Engineering Management in 2014 from the Christian-Albrechts-University (CAU), Kiel, Germany. He received his M.Sc. degree in Systems, Control and Robotics in 2016 and his Ph.D. degree in Electrical Engineering in 2020, both from KTH Royal Institute of Technology, Stockholm, Sweden. His current research interests include systems and control theory, formal methods, data-driven control, and autonomous systems. Lars received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and was a Best Student Paper Award Finalist at the 2018 American Control Conference. He also received the Student Best Paper Award as a co-author at the 60th IEEE Conference on Decision and Control.

Video ⬇️


Date and Time: Friday, Feb 18, 2022, at 12 pm
Speaker: Dr. Johannes Betz  (University of Pennsylvania)
Talk Title: Autonomous Vehicles on the Edge: Autonomous Racing & The Indy Autonomous Challenge

Abstract ⬇️

The rising popularity of self-driving cars has led to the creation of a new research and development branch in recent years: autonomous racing. Algorithms and hardware for high-performance race vehicles aim to operate autonomously at the limits of the vehicle: high speed, high acceleration, high computational power, low reaction time, an adversarial environment, racing opponents, etc. The increasing number of competitions in the field of autonomous racing not only excites the public, but also provides researchers with ideal platforms to test their high-performance algorithms. This talk will give an overview of the current efforts in the field, the main research outcomes, and the open challenges associated with autonomous racing. In particular, we will focus on the Indy Autonomous Challenge and the software setup of the TUM Autonomous Motorsports Team – the winning team of the Indy Autonomous Challenge in 2021. A detailed look into the software will show how each software module is connected and how we can achieve high-speed autonomous driving on the racetrack.

Short Bio ⬇️

Johannes Betz earned a B.Eng. and an M.Sc. in the field of Automotive Engineering. After completing his Ph.D. at the Technical University of Munich (TUM), he was a Postdoctoral Researcher at the Institute of Automotive Technology at TUM, where he founded the TUM Autonomous Motorsport Team. Currently, he is a postdoctoral researcher in the xLab for Safe Autonomous Systems at the University of Pennsylvania. His research focuses on holistic software development for autonomous systems that perform extreme motions at the dynamic limits in extreme and unknown environments, using modern algorithms from the field of artificial intelligence. Drawing on his additional M.A. in philosophy, he extends current path and behavior planners for autonomous systems with ethical theories.


Date and Time: Friday, Feb 11, 2022, at 12 pm
Speaker: Michael Sobrepera  (University of Pennsylvania)
Talk Title: Social Robot-Augmented Telerehab Robots

Abstract ⬇️

In this presentation, I will introduce a new class of rehabilitation robots: social robot-augmented telerehab robots. These systems are designed to address challenges patients face in receiving care due to limited access to their rehabilitation therapists. Traditional telepresence, while useful for overcoming some limits to care, fails to deliver the rich interactions and data desired for motor rehabilitation and assessment. I will briefly present the design of an exemplar system, Flo, which combines traditional telepresence and computer vision with a humanoid social robot that can play games with patients and guide them in a present and engaging way under the supervision of a remote clinician. The goal of such a system is to help motivate patients, promote understanding by patients, and drive both compliance and adherence. I will discuss results from a large survey of therapists that examined their perceived usefulness of such a system, the features they believe are important to make an effective telerehab robot, and their feelings towards robots in general. I will also present recent work testing the robot Flo with patients and therapists in an elder care center and with subjects in a laboratory environment.

Short Bio ⬇️

Michael Sobrepera is a sixth-year Ph.D. student in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He works in the Rehab Robotics Lab, within GRASP, on developing social robots to improve tele-rehabilitation. He received an M.S. in Robotics from Penn and a B.S. in Biomedical Engineering from Georgia Tech. He is conducting his work under an NIH F31 Fellowship.


Date and Time: Friday, Dec 03, 2021, at 12 pm
Speaker: Prof. Kaidi Xu  (Drexel University)
Talk Title: Neural Networks with Provable Robustness Guarantees

Abstract ⬇️

Neural networks have become a crucial element in modern artificial intelligence. However, they are often black-boxes and can sometimes behave unpredictably and produce surprisingly wrong results. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness such as safety and robustness. Unfortunately, the complexity of neural networks has made the task of formally verifying their properties very challenging. To tackle this challenge, we first propose an efficient verification algorithm based on linear relaxations of neural networks, which produces guaranteed output bounds given bounded input perturbations. The algorithm propagates linear inequalities through the network efficiently in a backward manner and can be applied to arbitrary network architectures using our auto_LiRPA library. To reduce relaxation errors, we further develop an efficient optimization procedure that can tighten verification bounds rapidly on GPUs. Lastly, I discuss how to further empower the verifier with branch and bound by incorporating the additional branching constraints into the bound propagation procedure. The combination of these advanced neural network verification techniques leads to α,β-CROWN (alpha-beta-CROWN), a scalable, powerful and GPU-based neural network verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP’21) with the highest total score.
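To give a feel for what “propagating guaranteed output bounds” means, here is the simplest possible variant, interval bound propagation, as an illustrative sketch. The linear relaxations in CROWN/auto_LiRPA are considerably tighter, and none of this code comes from the speaker’s library:

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through a ReLU network.

    For a linear layer Wx + b, the output interval splits W into its
    positive and negative parts; ReLU then clips the bounds at zero.
    This is looser than linear relaxations (as in CROWN/auto_LiRPA)
    but sound: the true outputs always lie inside the result.
    """
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Tiny random two-layer network and an L-infinity input perturbation.
rng = np.random.default_rng(1)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
x = np.array([0.5, -0.2, 0.1])
lo, hi = interval_bounds(Ws, bs, x - 0.01, x + 0.01)
```

Soundness here means the true network outputs for any input in the box always lie between `lo` and `hi`; verification then reduces to checking a property (e.g., a safety margin) against those bounds.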

Short Bio ⬇️

Kaidi Xu is an assistant professor in the Department of Computer Science at Drexel University. He obtained his Ph.D. from Northeastern University in 2021. Kaidi’s primary research interest is the robustness of machine learning, including physical adversarial attacks and rigorous robustness verification. Kaidi has published in various top international conferences, and his work ‘Adversarial T-shirt’ has been covered by more than 200 media outlets. Kaidi and his team are also the global winners of the 2nd International Verification of Neural Networks Competition.

Video ⬇️


Date and Time: Friday, Nov 19, 2021, at 12 pm
Speaker: Xianyi Cheng  (Carnegie Mellon University)
Talk Title: Contact Mode Guided Motion Planning for General Nonprehensile Dexterous Manipulation

Abstract ⬇️

Human beings are highly creative in manipulating objects: grasping, pushing, pivoting, manipulating with limbs, using the environment as extra support, and so on. In contrast, robots lack such intelligence and capabilities for dexterous manipulation. How do we bring human-level dexterity into robotic manipulation? Our current work focuses on dexterous manipulation motion generation, the first step towards general manipulation intelligence. We consider the dexterity afforded by planning through both robot hand contacts and environment contacts. The contact-rich nature makes this problem challenging and unsolved. To address it, we first studied the mechanics of contacts. We proved that the complexity of enumerating contact modes for one object is polynomial, rather than exponential, in the number of contacts. This exciting observation inspired us to use instantly enumerated contact modes as guidance for an RRT-based planner. Contact modes guide the motion generation like automatically generated motion primitives, capturing all possible transitions of contacts in the system. As a result, our planner can generate dexterous motions for many different nonprehensile manipulation tasks given only the start and goal object poses. We also observed that some generated strategies resemble human manipulation behaviors, such as the “simultaneous levering out and grasp formation” in human grasping. To the best of our knowledge, our planner is the first method capable of solving diverse dexterous manipulation tasks without any pre-designed skill or pre-specified contact modes.

Short Bio ⬇️

Xianyi Cheng is a third-year Ph.D. student at Carnegie Mellon University, advised by Professor Matthew T. Mason. She received a master’s degree in Robotics from Carnegie Mellon University and a bachelor’s degree in Astronautical Engineering from Harbin Institute of Technology. Her primary research interests include mechanics, planning, and optimization in robotic manipulation. Specifically, her current work focuses on generating versatile and robust dexterous manipulation skills. She is a recipient of a Foxconn Graduate Fellowship.

Video ⬇️


Date and Time: Friday, Oct 29, 2021, at 12 pm
Speaker: Prof. Chengyu Li  (Villanova University)
Talk Title: Learning from nature: Odor-guided navigation of flying insects

Abstract ⬇️

Insects rely on their olfactory system for detecting food sources, prey, and mates. They can sense odors emitting from sources of interest, use their highly efficient flapping-wing mechanism to follow odor trails, and track down odor sources. During such odor-guided navigation, flapping wings not only serve as propulsors for generating lift and maneuvering, but also actively draw odors to the antennae via wing-induced flow. This helps enhance olfactory detection, mimicking “sniffing” in mammals. However, the flow physics underlying this odor-tracking behavior is still unclear due to insects’ small wing size, fast flapping motion, and the unpredictability of their flying trajectories. Limited success has been achieved in evaluating the impact of wing-induced flow on odorant transport during odor-guided navigation. Utilizing an in-house computational fluid dynamics solver, we investigate the unsteady aerodynamics and olfactory sensitivities of insects in upwind surge motion. This study aims to advance our understanding of odor tracking in animal navigation and to enable transformative advances in unmanned aerial devices, with potential impact on national security and on industrial applications such as chemical-disaster response, drug-trafficking detection, and navigation in GPS-denied indoor environments.

Short Bio ⬇️

Dr. Chengyu Li is an Assistant Professor in the Department of Mechanical Engineering at Villanova University. He received his Ph.D. in Mechanical and Aerospace Engineering from the University of Virginia in 2016 and was a Postdoctoral Researcher at Ohio State University from 2016 to 2018. His research is situated at the intersection of fluid dynamics and computation, with an emphasis on engineering and healthcare applications. In particular, he focuses on developing state-of-the-art computational methods that leverage mathematical models and numerical simulations to improve understanding of biological and physiological flows. Dr. Li’s research interests are in bio-inspired flow, unsteady aerodynamics, and odorant transport phenomena of animal olfaction. His interdisciplinary research was recognized with the Polak Young Investigator Award by the Association for Chemoreception Sciences in 2017 and the Ralph E. Powe Junior Faculty Enhancement Award by Oak Ridge Associated Universities in 2019. He also received an NSF CAREER Award in 2021 for examining the fluid dynamic mechanisms of odor-guided flapping flight in nature.


Date and Time: Friday, Oct 22, 2021, at 12 pm
Speaker: Dr. Armon Shariati  (Amazon Robotics)
Talk Title: From Academia to Amazon – Life of a Roboticist

Abstract ⬇️

Amazon Robotics builds the world’s largest mobile robotic fleet where hundreds of thousands of robots help deliver hundreds of millions of customer orders per year. In order to support this massive system, Amazon Robotics relies on cutting edge technologies within the fields of robotic movement, machine learning, computer vision, manipulation, simulation, cloud computing, and data science. In this talk, I will provide an overview of some of the largest software challenges facing our scientists and engineers within each of these domains today. In addition, as a first year scientist fresh out of academia, I’ll also be sharing my own background and experiences which ultimately drove me to pursue my career at Amazon, in an effort to help guide those interested in following a similar path.

Short Bio ⬇️

Armon Shariati is a Research Scientist in the Applied Machine Learning team and the Research and Advanced Development team at Amazon Robotics. His research interests include large-scale optimization and machine learning. Armon joined Amazon after defending his doctoral thesis titled “Online Synthesis of Speculative Building Information Models for Robot Motion Planning” under the advisement of Prof. Camillo Jose Taylor at the University of Pennsylvania’s GRASP Laboratory. He earned his B.S. degree in Computer Engineering from Lehigh University in 2015, where he also worked as an undergraduate research assistant in the VADER Laboratory under the advisement of Prof. John Spletzer.


Date and Time: Friday, Sep 24, 2021, at 12 pm
Speaker: Jingxi Xu  (Columbia University)
Talk Title: Dynamic Grasping with Reachability and Motion Awareness

Abstract ⬇️

Grasping in dynamic environments presents a unique set of challenges: a stable and reachable grasp can become unreachable and unstable as the target object moves; motion planning needs to be adaptive and run in real time; and the delay in computation makes prediction necessary. In this talk, we present a dynamic grasping framework that is reachability-aware and motion-aware. Specifically, we model the reachability space of the robot using a signed distance field, which enables us to quickly screen unreachable grasps. Also, we train a neural network to predict the grasp quality conditioned on the current motion of the target. Using these as ranking functions, we quickly filter a large grasp database down to a few grasps in real time. In addition, we present a seeding approach for arm motion generation that utilizes a solution from the previous time step. This quickly generates a new arm trajectory that is close to the previous plan and prevents fluctuation. We implement a recurrent neural network (RNN) for modeling and predicting object motion. Our extensive experiments demonstrate the importance of each of these components, and we validate our pipeline on a real robot.
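The reachability-screening step can be pictured with a toy signed distance field. Everything below (the spherical reachable region, the function names, the numbers) is a made-up illustration of the idea, not the authors' pipeline:

```python
import numpy as np

def make_sdf_sphere(center, radius):
    """Toy signed distance field: negative inside a spherical reachable
    region, positive outside. A real reachability SDF would be learned
    or precomputed from the robot's workspace."""
    center = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(np.asarray(p) - center) - radius

def screen_grasps(grasp_positions, sdf, margin=0.0):
    """Keep only grasps whose position lies inside the reachable set
    (signed distance below -margin)."""
    return [g for g in grasp_positions if sdf(g) < -margin]

# Hypothetical reachable region and candidate grasp positions.
sdf = make_sdf_sphere(center=[0.0, 0.0, 0.5], radius=0.8)
grasps = [[0.1, 0.0, 0.5], [2.0, 0.0, 0.5], [0.0, 0.5, 0.6]]
reachable = screen_grasps(grasps, sdf)
```

Because evaluating an SDF is a single distance query, thousands of candidate grasps can be screened per control cycle before the more expensive grasp-quality network runs.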

Short Bio ⬇️

Jingxi is a second-year Ph.D. student in Computer Science at Columbia University, co-advised by Professor Matei Ciocarlie and Professor Shuran Song. He also works closely with Dinesh Jayaraman and Nikolai Matni at the University of Pennsylvania. Before joining Columbia, Jingxi spent time at MIT with Leslie Pack Kaelbling and Tomás Lozano-Pérez (2018), and at Columbia with Peter Allen (2018-2019). He received a Bachelor’s degree from the University of Edinburgh. His research interests are machine learning and vision-based methods that give autonomous robots better planning, perception, and control abilities.

Video ⬇️


Date and Time: Friday, June 18, 2021, at 11 am
Speaker: Prof. Stephanie Gil  (Harvard University)
Talk Title: Trust in Multi-Robot Systems and its Role in Achieving Resilient Coordination

Abstract ⬇️

Our understanding of multi-robot coordination and control has experienced great advances, to the point where deploying multi-robot systems in the near future seems to be a feasible reality. However, many of these algorithms are vulnerable to non-cooperation and/or malicious attacks that limit their practicality in real-world settings. An example is the consensus problem, where classical results hold that agreement cannot be reached when malicious agents make up more than half of the network connectivity; this quickly leads to limitations in the practicality of many multi-robot coordination tasks. However, with the growing prevalence of cyber-physical systems come novel opportunities for detecting attacks by using cross-validation with physical channels of information. In this talk we consider the class of problems where the probability of a particular (i,j) link being trustworthy is available as a random variable. We refer to these as “stochastic observations of trust.” We show that under this model, strong performance guarantees such as convergence for the consensus problem can be recovered, even in the case where the number of malicious agents is greater than half of the network connectivity and consensus would otherwise fail. Moreover, under this model we can reason about the deviation from the nominal (no attack) consensus value and the rate of achieving consensus. Finally, we make the case for the importance of deriving such stochastic observations of trust for cyber-physical systems, and we demonstrate one such example for the Sybil attack that uses wireless communication channels to arrive at the desired observations of trust. In this way our results demonstrate the promise of exploiting trust to provide a novel perspective on achieving resilient coordination in multi-robot systems.
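A crude way to see why trust observations help is to gate consensus updates by them. The sketch below simply drops links whose trust probability falls under a threshold; it is a stand-in of my own devising, far simpler than the stochastic analysis in the talk, with made-up agent values and trust numbers:

```python
import numpy as np

def trusted_consensus(values, trust, threshold=0.5, iters=100, eps=0.3):
    """Average-consensus iteration that only listens to links whose
    trust probability exceeds a threshold (a crude stand-in for the
    stochastic-trust machinery described in the talk).

    values: initial scalar states, one per agent.
    trust:  matrix of trust probabilities for each (i, j) link.
    """
    x = np.asarray(values, dtype=float).copy()
    A = (np.asarray(trust) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    n = len(x)
    for _ in range(iters):
        x = x + (eps / n) * (A * (x[None, :] - x[:, None])).sum(axis=1)
    return x

values = [0.0, 1.0, 2.0, 100.0]            # agent 3 is adversarial
trust = np.ones((4, 4))
trust[:, 3] = trust[3, :] = 0.05           # its links are rarely trusted
x = trusted_consensus(values, trust)
```

With the adversarial agent's links gated out, the three cooperative agents recover the average of their own values (1.0), which plain consensus in the presence of the outlier could not guarantee.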

Short Bio ⬇️

Stephanie is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers around trust and coordination in multi-robot systems. Her work has received recognition including the National Science Foundation CAREER award (2019) and selection as a 2020 Sloan Research Fellow for her contributions at the intersection of robotics and communication. She held a Visiting Assistant Professor position at Stanford University during the summer of 2019, and an Assistant Professorship at Arizona State University from 2018 to 2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, which resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and on accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.


Date and Time: Friday, June 4, 2021, at 11 am
Speaker: Prof. Guilherme Pereira  (West Virginia University)
Talk Title: Contributions to Autonomous Field Robotics

Abstract ⬇️

In this talk I will present the current research projects at the Field and Aerial Robotics Laboratory at West Virginia University. These projects aim at developing mobile robotic systems that work for long periods in challenging real-world environments, including forests and underground mines. I will show our solutions for hardware design and localization, but will focus the talk on motion planning. Our proposed motion planning approaches are mostly based on vector fields combined with optimal planners. In this combination, a global vector field encodes the main robotic task but does not have complete knowledge of the environment, while an optimal planner is used by the robot to track the vector field and avoid previously unknown obstacles. Results showing ground and aerial robots operating with this methodology will be presented throughout the talk.
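The role of the global vector field can be illustrated with a classic example: a planar field whose integral curves converge to, and circulate along, a circle (a common loitering task). The field, gains, and numbers below are illustrative only, not the lab's actual implementation:

```python
import numpy as np

def circulation_field(p, radius=1.0, k=1.0):
    """Toy planar vector field whose integral curves converge to a
    circle of the given radius and circulate along it. Fields like
    this encode the task; in the talk's architecture, an optimal
    planner tracks the field and handles unknown obstacles."""
    x, y = p
    r = np.hypot(x, y)
    # Radial component toward the circle plus a tangential component.
    to_circle = k * (radius - r) * np.array([x, y]) / max(r, 1e-9)
    tangent = np.array([-y, x]) / max(r, 1e-9)
    return to_circle + tangent

# Integrate the field with explicit Euler from an off-circle start.
p = np.array([2.0, 0.0])
dt = 0.01
for _ in range(5000):
    p = p + dt * circulation_field(p)
```

Starting from any point away from the origin, the integrated trajectory spirals onto the unit circle, regardless of obstacles the field knows nothing about; handling those is the local planner's job.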

Short Bio ⬇️

Guilherme A. S. Pereira received his bachelor’s and master’s degrees in electrical engineering from the Federal University of Minas Gerais, Brazil, in 1998 and 2000, respectively, and his Ph.D. in computer science from the same university in 2003. He received the Gold Medal Award from the Engineering School of UFMG for graduating first among the electrical engineering students in 1998. From November 2000 to May 2003, he was a visiting scientist at the GRASP Laboratory of the University of Pennsylvania. He was also a visiting scholar at The Robotics Institute of Carnegie Mellon University from August 2015 to July 2016. From July 2004 to July 2018, he was a full-time professor in the electrical engineering department of the Federal University of Minas Gerais. He joined WVU in October 2018 as an associate professor in the Department of Mechanical and Aerospace Engineering. He also holds an adjunct appointment in the Lane Department of Computer Science and Electrical Engineering. At WVU he directs the Field and Aerial Robotics (FARO) Laboratory. His main research areas are field robotics, robot motion planning, aerial robotics, state estimation, cooperative robotics, and computer vision.

Video ⬇️


Date and Time: Friday, May 14, 2021, at 11 am
Speaker: Prof. Rafael Fierro  (The University of New Mexico)
Talk Title: Towards Stable Interstellar Flight: Modeling and Stability of a Laser-Propelled Sailcraft

Abstract ⬇️

Traveling to distant stars has long fascinated humanity, but vast distances have limited space exploration to our solar system. The Breakthrough Starshot Program aims to eliminate this limitation by traveling to Alpha Centauri, which is 4.37 light-years away. The idea is to accelerate a sail to relativistic speeds using a laser beam aimed at the sail. Currently, the dynamic stability of the sailcraft is not well understood, and there is no agreement on the proper shape of the beam-driven sail. In this seminar, I will present some results on the dynamic stability of a beam-driven sail modeled as a rigid body whose shape is parameterized by a sweep function. We analyze the stability of the beam-driven sail and estimate its region of attraction (ROA) using Lyapunov theory and Sum-of-Squares (SOS) programming. I will also summarize our recent efforts in aerial robotics, including monitoring of extreme environments, a testbed and methods for countering small UAS, and airborne manipulation.

Short Bio ⬇️

Rafael Fierro is a Professor in the Department of Electrical and Computer Engineering at the University of New Mexico, where he has been since 2007. He received his Bachelor of Science from the Escuela Politécnica Nacional, Quito, Ecuador, his MSc degree in control engineering from the University of Bradford, England, and his Ph.D. in electrical engineering from the University of Texas at Arlington in 1997. Before joining UNM, he held a postdoctoral appointment with the General Robotics, Automation, Sensing & Perception (GRASP) Laboratory at the University of Pennsylvania and a faculty position with the Department of Electrical and Computer Engineering at Oklahoma State University. His areas of interest include cyber-physical systems, heterogeneous robotic networks, and uncrewed aerial vehicles (UAVs). The National Science Foundation (NSF), the US Army Research Laboratory (ARL) and Air Force Research Laboratory (AFRL), the Department of Energy (DOE), and Sandia National Laboratories have funded his research. He directs the AFRL-UNM Agile Manufacturing Center and the Multi-Agent, Robotics, and Heterogeneous Systems (MARHES) Laboratory. Dr. Fierro was the recipient of a Fulbright Scholarship, a 2004 National Science Foundation CAREER Award, and the 2008 International Society of Automation (ISA) Transactions Best Paper Award. He has served on the editorial boards of the Journal of Intelligent & Robotic Systems, IEEE Control Systems Magazine, IEEE Transactions on Control of Network Systems (T-CNS), and IEEE Transactions on Automation Science and Engineering (T-ASE).


Date and Time: Friday, April 2, 2021, at 11 am
Speaker: Cherie Ho  (Carnegie Mellon University)
Talk Title: Adaptive Safety Margins for Safe Replanning under Time-Varying Disturbances

Abstract ⬇️

Safe real-time navigation is a considerable challenge because engineers often need to work with uncertain vehicle dynamics, variable external disturbances, and imperfect controllers. A common strategy used to address safety is to employ hand-defined margins for obstacle inflation. However, arbitrary static margins often fail in more dynamic scenarios, and using worst-case assumptions proves to be overly conservative in most real-world settings where disturbance varies.
In this work, we propose a middle ground: safety margins that adapt on the fly using online measurements. In an offline phase, we use Monte Carlo simulations to pre-compute a library of safety margins for multiple levels of disturbance uncertainty. At runtime, our system estimates the current disturbance level and queries the associated safety margin, enabling real-time replanning with the best trade-off between safety and performance. Extensive simulated and real-world flight tests validate that our approach balances safety and performance better than baseline static-margin methods.
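
To make the offline/online split concrete, here is a minimal 1-D sketch of the idea (our own simplification, not the paper's implementation): Monte Carlo samples from a hypothetical Gaussian deviation model build a margin library over a few disturbance levels offline, and at runtime the margin for the smallest covering level is looked up.

```python
import random

def monte_carlo_margin(sigma, n=2000, quantile=0.95, rng=random.Random(0)):
    # Offline: sample absolute deviations under disturbance std `sigma` and
    # take a high quantile as the obstacle-inflation margin (toy 1-D model).
    devs = sorted(abs(rng.gauss(0.0, sigma)) for _ in range(n))
    return devs[int(quantile * n) - 1]

# Offline phase: pre-compute a margin library over disturbance levels.
levels = [0.1, 0.2, 0.4, 0.8]
library = {s: monte_carlo_margin(s) for s in levels}

def margin_at_runtime(sigma_hat):
    # Online: query the margin for the smallest pre-computed level that covers
    # the current disturbance estimate (conservative fallback to the largest).
    for s in levels:
        if sigma_hat <= s:
            return library[s]
    return library[levels[-1]]
```

The margin grows with the estimated disturbance, so planning is conservative only when measurements say it needs to be.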

Video ⬇️


Date and Time: Friday, Feb 22, 2021, at 11 am
Speaker: Dr. Juan D. Hernández Vega  (Cardiff University)
Talk Title: Motion Planning for Multipurpose Autonomous Systems

Abstract ⬇️

Once limited to highly controlled industrial environments, robots today are continuously evolving toward becoming autonomous entities capable of operating in changing and unstructured settings. This evolution is only possible thanks to multi-disciplinary efforts that endow autonomous systems with the capabilities required to deal with such changing and uncertain conditions. One of these disciplines, commonly known as motion planning, consists of computational techniques that calculate collision-free and feasible motions for autonomous systems. In this talk, we will discuss how motion planning is being used to improve the decision-making capabilities of different types of autonomous systems, such as autonomous underwater vehicles (AUVs), autonomous/automated cars, and service robots.


Date and Time: Friday, Dec 18, at 11 am
Speaker: Bing Song (Columbia University)
Talk Title: What can robot learning bring to control?

Video ⬇️


Date and Time: Friday, Dec 11, at 11 am
Speaker: Juan David Pabon (Universidad Nacional de Colombia)
Talk Title: Event-Triggered Control for Weight-Unbalanced Directed Robot Networks

Video ⬇️


Date and Time: Friday, Sep 25th, at 11 am
Speaker: Bowen Wen (Rutgers University)
Talk Title: Robust, Real-time 6D Pose Estimation and Tracking for Robot Manipulation

Abstract ⬇️

Many manipulation tasks, such as picking, placing, or within-hand manipulation, require the object's pose relative to the manipulating agent, robot or human. These tasks frequently involve significant occlusions, which complicate the estimation and tracking process. This work presents a framework based on RGB-D data. It aims at robust pose estimation under severe occlusions, such as those arising when a hand occludes a manipulated object, and at short response times that allow responsive decision making in dynamic setups. In particular, the proposed framework leverages the complementary attributes of: i) novel deep neural network architectures via domain disentanglement, and ii) 3D geometric reasoning that uses the end-effector's state to guide the pose estimation of severely occluded objects. Additionally, only synthetic data are required for training, making the proposed approach applicable to new tasks and circumventing the overhead of collecting labeled real-world data. Extensive evaluation on multiple real-world benchmarks, including public ones and some developed as part of this effort, demonstrates superior performance compared with state-of-the-art approaches and generalization to different robotic manipulation scenarios. The talk will conclude with ongoing efforts to generalize this line of work to perceiving and manipulating novel objects without access to models.

Short Bio ⬇️

Bowen Wen is currently a 3rd-year PhD student in the Computer Science Department at Rutgers, working with Prof. Kostas Bekris. He received his BS in Energy and Automation from Xi’an Jiaotong University in 2016 and his MS in ECE from Ohio State University in 2018. His research interests include robotics, computer vision, and artificial intelligence. He has published papers at ICRA, IROS, RA-L, the Conference on Robot Learning, and elsewhere. He worked as a research intern with Facebook Reality Labs, Amazon Lab126, and SenseTime in the summers of 2020, 2019, and 2018, respectively.

Video ⬇️


Previous Talks:

Sep 18th, 2020, Towards Scalable Algorithms for Distributed Optimization and Learning, Dr. Cesar Uribe (MIT)

Aug 28th, 2020, Tightly Coupled Vision-based Perception and Control for Autonomous Aerial Robots, Dr. Eric Cristofalo (Stanford)

Aug 14th, 2020, Introduction to F1tenth race: 1/10 the size, 10 times the fun, Prof. Rosa Zheng (Lehigh)

ICRA 2021 Best Paper Nomination

Our #ICRA2021 paper titled “Vision-Based Self-Assembly for Modular Multirotor Structures” has been selected as a finalist paper in the Multi-robot Systems Session.

Authors: Yehonathan Litman*, Neeraj Gandhi, Linh Thi Xuan Phan, David Saldaña

Abstract: Modular aerial robots can adapt their shape to suit a wide range of tasks, but developing efficient self-reconfiguration algorithms is still a challenge. Self-reconfiguration algorithms in the literature rely on high-accuracy global positioning systems, which are not realistic for real-world applications. In this paper, we study self-reconfiguration algorithms using a combination of low-accuracy global positioning systems (e.g., GPS) and on-board relative positioning (e.g., visual sensing) for precise docking actions. We present three algorithms:
1) parallelized self-assembly sequencing that minimizes the number of serial “docking steps”;
2) parallelized self-assembly sequencing that minimizes total distance traveled by modules; and
3) parallelized self-reconfiguration that breaks an initial structure down as little as possible before assembling a new structure.
The algorithms take into account the constraints of the local sensors and use heuristics to provide a computationally efficient solution for the combinatorial problem. Our evaluation in 2-D and 3-D simulations shows that the algorithms scale with the number of modules and structure shape.
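
The "serial docking steps" objective can be illustrated with a toy calculation (a generic pairing argument, not the paper's algorithm): if sub-assemblies can dock pairwise in parallel, the number of serial rounds grows roughly logarithmically in the number of modules, which is what parallelized sequencing buys over one-module-at-a-time assembly.

```python
def docking_rounds(n_modules):
    # Hypothetical fully parallel pairing: in each round, sub-assemblies dock
    # in pairs (an odd one out waits), so the count halves every round.
    rounds = 0
    while n_modules > 1:
        n_modules = (n_modules + 1) // 2
        rounds += 1
    return rounds
```

For example, 8 modules need 3 parallel rounds instead of 7 serial docking steps.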

Latest Papers to Appear in Top Conferences

We have multiple papers accepted by ICRA and ACC 2021, congratulations to those lab members! See below for details (updating):


H-ModQuad: Modular Multi-Rotors with 4, 5, and 6 Controllable DOF (ICRA 2021)

Jiawei Xu*, Diego S. D’Antonio, David Saldaña

Abstract: Traditional aerial vehicles are usually custom-designed for specific tasks. Although such a vehicle is efficient, it might not be able to perform a task after a small change in the specification, e.g., an increase in payload. This applies to quadrotors: with a fixed maximum payload and only four controllable degrees of freedom, their adaptability to task variations is limited. We propose a versatile modular robotic system, called H-ModQuad, that can increase its payload and degrees of freedom by assembling heterogeneous modules. It consists of cuboid modules propelled by quadrotors with tilted propellers that can generate forces in different directions. By connecting different types of modules, an H-ModQuad can increase its controllable degrees of freedom from 4 to 5 or 6. We model the general structure and propose three controllers, one for each number of controllable degrees of freedom. We extend the concept of the actuation ellipsoid to find the best reference orientation that maximizes the performance of the structure. Our approach is validated in experiments with actual robots, showing the independence of the translation and orientation of a structure.


The Catenary Robot: Design and Control of a Cable Propelled by Two Quadrotors (ICRA 2021/RAL)

Diego S. D’Antonio*, Gustavo A. Cardona, David Saldaña

Abstract: Transporting objects using aerial robots has been widely studied in the literature. Still, those approaches always assume that the connection between the quadrotor and the load is made in a previous stage. However, that previous stage usually requires human intervention, and autonomous procedures to locate and attach the object are not considered. Additionally, most approaches model cables as rigid links, but manipulating cables requires considering their state when they are hanging. In this work, we design and control a catenary robot that can transport hook-shaped objects in the environment. The robotic system is composed of two quadrotors attached to the two ends of a cable. By defining the catenary curve with five degrees of freedom, position in 3-D, orientation about the z-axis, and span, we can drive the two quadrotors to track a given trajectory. We validate our approach in simulation and with real robots, across four different experimental scenarios. Our numerical solution is computationally fast and can be executed in real time.

DOI: 10.1109/LRA.2021.3062603
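
For intuition on the five degrees of freedom named in the abstract, the sketch below (our own simplification, taking the catenary's lowest point as the position reference and a fixed cable length) recovers the two quadrotor endpoint positions from position, yaw, and span, solving the standard catenary relation L = 2a·sinh(s/(2a)) for the parameter a by bisection.

```python
import math

def catenary_parameter(span, length):
    # Solve length = 2*a*sinh(span/(2*a)) for the catenary parameter a.
    # The left-hand side decreases in a, so bisection on a bracket works.
    assert length > span > 0, "cable must be longer than its span"
    f = lambda a: 2.0 * a * math.sinh(span / (2.0 * a)) - length
    lo, hi = span / 600.0, 1e6 * span   # f(lo) > 0 > f(hi) for sane inputs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def quadrotor_endpoints(position, yaw, span, length):
    # position: lowest point of the hanging cable (x, y, z);
    # yaw: heading of the span direction in the horizontal plane.
    a = catenary_parameter(span, length)
    rise = a * (math.cosh(span / (2.0 * a)) - 1.0)  # endpoint height above sag
    dx, dy = 0.5 * span * math.cos(yaw), 0.5 * span * math.sin(yaw)
    x, y, z = position
    return (x - dx, y - dy, z + rise), (x + dx, y + dy, z + rise)
```

Driving (position, yaw, span) along a trajectory then yields the two endpoint trajectories the quadrotors must track.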


Vision-Based Self-Assembly for Modular Quadrotor Structures (ICRA 2021)

Yehonathan Litman*, Neeraj Gandhi, Linh Thi Xuan Phan, David Saldaña

Abstract: Modular aerial robots can adapt their shape to suit a wide range of tasks, but developing efficient self-reconfiguration algorithms is still a challenge. Self-reconfiguration algorithms in the literature rely on high-accuracy global positioning systems, which are not realistic for real-world applications. In this paper, we study self-reconfiguration algorithms using a combination of low-accuracy global positioning systems (e.g., GPS) and on-board relative positioning (e.g., visual sensing) for precise docking actions. We present three algorithms:
1) parallelized self-assembly sequencing that minimizes the number of serial “docking steps”;
2) parallelized self-assembly sequencing that minimizes total distance traveled by modules; and
3) parallelized self-reconfiguration that breaks an initial structure down as little as possible before assembling a new structure.
The algorithms take into account the constraints of the local sensors and use heuristics to provide a computationally efficient solution for the combinatorial problem. Our evaluation in 2-D and 3-D simulations shows that the algorithms scale with the number of modules and structure shape.


Resilient Task Allocation in Heterogeneous Multi-Robot Systems (ICRA 2021/RAL)

Siddharth Mayya*, Diego S. D’Antonio, David Saldaña, Vijay Kumar

Abstract: This paper presents a resilient mechanism to allocate heterogeneous robots to tasks under difficult environmental conditions such as weather events or adversarial attacks. Our primary objective is to ensure that each task is assigned the requisite level of resources, measured as the aggregated capabilities of the robots allocated to the task. By keeping track of task performance deviations under external perturbations, our framework quantifies the extent to which robot capabilities (e.g., visual sensing or aerial mobility) are affected by environmental conditions. This enables an optimization-based framework to flexibly reallocate robots to tasks based on the most degraded capabilities within each task. In the face of resource limitations and adverse environmental conditions, our algorithm relaxes the resource constraints corresponding to some tasks, thus exhibiting a graceful degradation of performance. Simulated experiments in a multi-robot coverage and target tracking scenario demonstrate the efficacy of the proposed approach.

DOI: 10.1109/LRA.2021.3057559


Vehicle Trajectory Prediction Using Generative Adversarial Network With Temporal Logic Syntax Tree Features (ICRA 2021/RAL)

Xiao Li, Guy Rosman, Igor Gilitschenski, Cristian-Ioan Vasile, Jonathan A. DeCastro, Sertac Karaman, and Daniela Rus

Abstract: In this work, we propose a novel approach for integrating rules into traffic agent trajectory prediction. Consideration of rules is important for understanding how people behave; yet, it cannot be assumed that rules are always followed. To address this challenge, we evaluate different approaches of integrating rules as inductive biases into deep learning-based prediction models. We propose a framework based on generative adversarial networks that uses tools from formal methods, namely signal temporal logic and syntax trees. This allows us to leverage information on rule obedience as features in neural networks and improves prediction accuracy without biasing towards lawful behavior. We evaluate our method on a real-world driving dataset and show improvement in performance over off-the-shelf predictors.

To appear in ICRA 2021 / IEEE Robotics and Automation Letters.


Specifying User Preferences using Weighted Signal Temporal Logic (ACC/CSL)

Noushin Mehdipour, Cristian-Ioan Vasile, and Calin Belta

Abstract: We extend Signal Temporal Logic (STL) to enable the specification of importance and priorities. The extension, called Weighted STL (wSTL), has the same qualitative (Boolean) semantics as STL, but additionally defines weights associated with Boolean and temporal operators that modulate its quantitative semantics (robustness). We show that the robustness of wSTL can be defined as weighted generalizations of all known compatible robustness functionals (i.e., robustness scores that are recursively defined over formulae) that can take into account the weights in wSTL formulae. We utilize this weighted robustness to distinguish signals with respect to a desired wSTL formula that has subformulae with different importance or priorities and time preferences, and demonstrate its usefulness in problems with conflicting tasks where satisfaction of all tasks cannot be achieved. We also employ wSTL robustness in an optimization framework to synthesize controllers that maximize satisfaction of a specification with user specified preferences.

ACC/IEEE Control Systems Letters, December 2020.

DOI: 10.1109/LCSYS.2020.3047362.
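
As a small illustration of how weights can modulate robustness (the specific weighted-min aggregation below is our simplification; the paper defines weighted generalizations of all compatible robustness functionals), a weighted conjunction can scale each subformula's robustness before taking the minimum. Positive weights leave the sign, and hence the Boolean verdict, unchanged while importance reshapes the score:

```python
def wstl_and(weighted_terms):
    # weighted_terms: [(w_i, rho_i)] with weights w_i > 0 encoding importance;
    # weighted-min aggregation of subformula robustness values rho_i.
    return min(w * r for w, r in weighted_terms)

# Two conflicting subtasks: one violated (rho = -0.2), one satisfied (rho = 0.5).
even  = wstl_and([(1.0, -0.2), (1.0, 0.5)])  # violation scored as-is
harsh = wstl_and([(5.0, -0.2), (1.0, 0.5)])  # higher weight amplifies the violation
```

When not all tasks can be satisfied, an optimizer maximizing this score sacrifices the low-weight subformula first.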


A Control Architecture for Provably-Correct Autonomous Driving (ACC 2021)

Erfan Aasi, Cristian-Ioan Vasile, and Calin Belta

Abstract: This paper presents a novel two-level control architecture for a fully autonomous vehicle in a deterministic environment, which handles traffic rules as specifications and performs low-level vehicle control in real time. At the top level, we use a simple representation of the environment and vehicle dynamics to formulate a linear Model Predictive Control (MPC) problem. We describe the traffic rules and safety constraints using Signal Temporal Logic (STL) formulas, which are mapped to mixed integer-linear constraints in the optimization problem. The solution obtained at the top level is used at the bottom level to determine the best control command for satisfying the constraints in a more detailed framework. At the bottom level, specification-based runtime monitoring techniques, together with detailed representations of the environment and vehicle dynamics, compensate for the mismatch between the simple models used in the MPC and the real, more complex models. We obtain substantial improvements over existing approaches in terms of runtime performance, and we validate the effectiveness of our proposed control approach in the CARLA simulator.

In American Control Conference (ACC), New Orleans, LA, USA, May 2021.


Robust Adaptive Synchronization of Interconnected Heterogeneous Quadrotors Transporting a Cable-Suspended Load (ICRA 2021)

Gustavo A. Cardona, Miguel Felipe Arevalo-Castiblanco, Duvan Tellez-Castro, Juan Calderon, Eduardo Mojica-Nava

Abstract: We tackle the problem of multiple quadrotors transporting a cable-suspended point-mass load. The quadrotors are coordinated through a virtual leader-follower scheme, with a multi-layer graph encapsulating both the communication and the physical interaction. On the one hand, communication corresponds to following the reference trajectory of a virtual leader. On the other hand, the load exerts a distributed tension force through each cable, modeled as a spring-damper system acting on each quadrotor, which establishes interconnected dynamics. We assume cables are stretchable and have negligible mass. Both objectives are accomplished through a Model Reference Adaptive Control approach with a robust modification that handles uncertainties and perturbations caused by parameter errors, signal noise, and wind drag forces. We prove stability with a Lyapunov-based approach, and the results are demonstrated in simulation.



Prof. Mooi Choo Chuah Receives Qualcomm Faculty Award

Prof. Chuah of Lehigh CSE and the AIRLab has been named a recipient of the 2021 Qualcomm Faculty Award (QFA). Congratulations to her!

The award “supports key professors and their research through a $75,000 charitable donation to their university. The goal of the QFA funding is to strengthen Qualcomm’s engagement with faculty who are playing a key role in our recruiting of top graduate students.”

Prof. Chuah is the associate chair of Lehigh’s Computer Science & Engineering Department, and served as one of Lehigh’s NSF ADVANCE Chairs in 2011. She is an IEEE Fellow and an NAI Fellow. With the award, her team will advance their work on unsupervised video object segmentation and related topics.


New Lab Facility is Getting Ready!

Despite the disruptions from the pandemic, the new lab facility has been built in Building C in a safe and orderly manner. This massive space is almost ready now and will soon unleash imaginations in all three dimensions.

A motion capture system has been installed. It will provide precise tracking for robots, whether they are on the ground or in the air.

The working area for researchers is being furnished. It will provide a comfortable and safe environment for experiments and the incubation of brilliant ideas.

We are looking forward to 2021 and the inauguration of the new facility!


New Paper Published in IJRR

A new paper from the AIRLab is available in IJRR:


Reactive sampling-based path planning with temporal logic specifications

Cristian Ioan Vasile | Lehigh University
Xiao Li | Boston University
Calin Belta | Boston University

Abstract: We develop a sampling-based motion planning algorithm that combines long-term temporal logic goals with short-term reactive requirements. The mission specification has two parts: (1) a global specification given as a linear temporal logic (LTL) formula over a set of static service requests that occur at the regions of a known environment, and (2) a local specification that requires servicing a set of dynamic requests that can be sensed locally during the execution. The proposed computational framework consists of two main ingredients: (a) an off-line sampling-based algorithm for the construction of a global transition system that contains a path satisfying the LTL formula; and (b) an on-line sampling-based algorithm to generate paths that service the local requests, while making sure that the satisfaction of the global specification is not affected. The off-line algorithm has four main features. First, it is incremental, in the sense that the procedure for finding a satisfying path at each iteration scales only with the number of new samples generated at that iteration. Second, the underlying graph is sparse, which implies low complexity for the overall method. Third, it is probabilistically complete. Fourth, under some mild assumptions, it has the best possible complexity bound. The on-line algorithm leverages ideas from LTL monitoring and potential functions to ensure progress towards the satisfaction of the global specification while servicing locally sensed requests. Examples and experimental trials illustrating the usefulness and the performance of the framework are included.


Papers to Appear in ICRA 2020

We have multiple papers accepted by ICRA 2020, congratulations to those lab members! See below for details:


Multi-Robot Path Deconfliction through Prioritization by Path Prospects

Wu, Wenying | University of Cambridge
Bhattacharya, Subhrajit | Lehigh University
Prorok, Amanda | University of Cambridge

Abstract: This work deals with the problem of planning conflict-free paths for mobile robots in cluttered environments. Since centralized, coupled planning algorithms are computationally intractable for large numbers of robots, we consider decoupled planning, in which robots plan their paths sequentially in order of priority. Choosing how to prioritize the robots is a key consideration. State-of-the-art prioritization heuristics, however, do not model the coupling between a robot’s mobility and its environment. This is particularly relevant when prioritizing between robots with different degrees of mobility. In this paper, we propose a prioritization rule that can be computed online by each robot independently, and that provides consistent, conflict-free path plans. Our innovation is to formalize a robot’s path prospects to reach its goal from its current location. To this end, we consider the number of homology classes of trajectories, which capture distinct prospects of paths for each robot. This measure is used as a prioritization rule whenever robots enter negotiation to deconflict path plans. We perform simulations with heterogeneous robot teams and compare our method to five benchmarks. Our method achieves the highest success rate, and strikes a good balance between makespan and flowtime objectives.

Dense R-Robust Formations on Lattices

Guerrero-Bonilla, Luis | KTH Royal Institute of Technology
Saldaña, David | Lehigh University
Kumar, Vijay | University of Pennsylvania

Abstract: Robot networks are susceptible to failure in the presence of malicious or defective robots. Resilient networks in the literature require high connectivity and large communication ranges, leading to high energy consumption in the communication network. This paper presents robot formations with guaranteed resiliency that use smaller communication ranges than previous results in the literature. The formations can be built on triangular and square lattices in the plane, and on cubic lattices in three-dimensional space. We support our theoretical framework with simulations.

Estimation with Fast Feature Selection in Robot Visual Navigation

Mousavi, Hossein K. | Lehigh University
Motee, Nader | Lehigh University

Abstract: We consider the robot localization problem with sparse visual feature selection. The underlying key property is that the contributions of trackable features (landmarks) appear linearly in the information matrix of the corresponding estimation problem. We utilize standard models for the motion and vision systems, using a camera to formulate the feature selection problem over moving finite-time horizons. We propose a scalable randomized sampling algorithm that selects the more informative features needed to obtain a certain estimation quality, and we provide probabilistic performance guarantees for our method. The time complexity of our feature selection algorithm is linear in the number of candidate features, which is practically appealing and outperforms existing greedy methods that scale quadratically with the number of candidate features. Our numerical simulations confirm that not only is the execution time of our proposed method considerably lower than that of the greedy method, but the resulting estimation quality is also very close to that of the greedy method.
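
The linearity property the abstract relies on, that each landmark adds its own term to the information matrix, can be sketched with synthetic data (the random Jacobians and trace-proportional sampling below are our invented stand-ins, not the paper's sampling distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n_feats, dim, k = 200, 6, 20

# Synthetic per-landmark information contributions: I(S) = sum_{i in S} H_i,
# where each H_i = J_i J_i^T comes from one measurement Jacobian row.
J = rng.normal(size=(n_feats, dim))
H = np.einsum('ni,nj->nij', J, J)

# Randomized selection: sample k landmarks with probability proportional to
# their information content (trace), in one linear pass over the candidates.
p = np.einsum('nii->n', H)
p = p / p.sum()
selected = rng.choice(n_feats, size=k, replace=False, p=p)

info = H[selected].sum(axis=0)        # information matrix of the chosen subset
quality = np.linalg.slogdet(info)[1]  # log-determinant as estimation quality
```

Because scoring and sampling touch each candidate once, the cost stays linear in the number of features, in contrast to greedy selection, which re-evaluates all remaining candidates at every pick.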

Self-Reconfiguration in Response to Faults in Modular Aerial Systems

Gandhi, Neeraj | University of Pennsylvania
Saldaña, David | Lehigh University
Kumar, Vijay | University of Pennsylvania
Phan, Linh Thi Xuan | University of Pennsylvania

Abstract: We present a self-reconfiguration technique by which a modular flying platform can mitigate the impact of rotor failures. In this technique, the system adapts its configuration in response to rotor failures to be able to continue its mission while efficiently utilizing resources. A mixed integer linear program determines an optimal module-to-position allocation in the structure based on rotor faults and desired trajectories. We further propose an efficient dynamic programming algorithm that minimizes the number of disassembly and reassembly steps needed for reconfiguration. Evaluation results show that our technique can substantially increase the robustness of the system while utilizing resources efficiently, and that it can scale well with the number of modules.