AIR Lab Robot Autonomy Seminar Series

The AIR Lab Robot Autonomy Seminar Series is a dynamic platform at the intersection of robotics, artificial intelligence, and autonomous systems. This seminar series provides an invaluable forum for experts, researchers, and enthusiasts to delve into the latest advancements and breakthroughs in the field of robot autonomy. Through engaging talks, demonstrations, and interactive discussions, the seminar series explores a wide spectrum of topics, from state-of-the-art machine-learning algorithms for robot perception and control to novel approaches in human-robot interaction and cutting-edge technologies that are reshaping the future of autonomous robotics. Whether you’re a seasoned professional or just starting your journey in robotics, this seminar series is a vital hub for sharing knowledge, fostering collaboration, and staying on the cutting edge of the ever-evolving field of robot autonomy.

Check the upcoming and past seminars.

Axel Krieger (JHU): Smart and Autonomous Robots for Surgery (4/19)

Dr. Axel Krieger

Associate Professor, Department of Mechanical Engineering, Johns Hopkins University


Time: 12:00 pm – 1:00 pm

Date: Friday, April 19

Location: BC 115

Abstract: Robot-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires the surgeon to command every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

My research goal is to transform current manual and teleoperated robotic soft tissue surgery to autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery. Presentation topics will include a robotic system for supervised autonomous laparoscopic anastomosis and robotic trauma assessment and care.

Bio: Axel Krieger, PhD, joined the Johns Hopkins University Department of Mechanical Engineering in July 2020. He leads a team of students, scientists, and engineers in the research and development of robotic systems for surgery and interventions. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining Johns Hopkins, Professor Krieger was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children's National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where he served as a Product Leader developing devices and software systems from concept to FDA clearance.

Dinesh Jayaraman (UPenn): The Sensory Needs of Robot Learners


Dr. Dinesh Jayaraman

Assistant Professor, Grasp Lab, University of Pennsylvania


Time: 11:30 am – 12:30 pm

Date: Friday, April 12

Location: BC 220

Abstract: General-purpose robots of the future will need vision and learning, but such vision-based robot learning today is inflexible and inefficient: it needs robot-and-task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibilities in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about pre-trained object-centric visual representations that track task-directed progress, task-relevant world model learning for model-based RL, and how robot learners can benefit from additional sensory inputs at training time.

Bio: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania's CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh's research has received a Best Paper Award at CoRL '22, a Best Paper Runner-Up Award at ICRA '18, a Best Application Paper Award at ACCV '16, an Amazon Research Award in '21, and the NSF CAREER Award in '23, and has been featured on the cover page of Science Robotics and in several press outlets.

ICRA 2024 Workshop: How to Ensure Correct Robot Behaviors? Software Challenges in Formal Methods for Robotics

Lehigh AIRLab core faculty member Prof. Cristian-Ioan Vasile and Ph.D. students Disha Kamale and Gustavo Andres Cardona Calderon are organizing a full-day workshop on May 17, 2024, at the IEEE International Conference on Robotics and Automation (ICRA) in Yokohama, Japan.


Organizers:
Cristian-Ioan Vasile, Lehigh University
Jonathan DeCastro, Toyota Research Institute
Xiao Li, Shanghai Jiao Tong University
Tichakorn (Nok) Wongpiromsarn, Iowa State University
Karen Leung, University of Washington
Disha Kamale, Lehigh University
Gustavo Andres Cardona Calderon, Lehigh University



The formal methods community has produced many successful results and tools in computer science, and has made a steady effort over the past two decades to move toward applications in robotics. However, most robotics researchers and practitioners are unaware of these methods or find them difficult to adopt, due to: 1) the high initial effort required to use them, and 2) the disconnect between the method and tool developers and the larger community. While methods were initially demonstrated in simple proof-of-concept scenarios, tools have since matured to conform to the requirements of robotics applications. These tools offer the logical reasoning necessary to automate planning and control, provide a priori guarantees for bug-free achievement of tasks, provide unambiguous specification languages, lay the foundation for formal verification of planning components as well as their composition with perception components (e.g., machine learning), provide a framework for explainable learning, and enable embedding of logical structure in learned models. This workshop seeks to bring together the robotics and formal methods communities in a hands-on event that will demystify the current state of the art, bridge the knowledge-transfer gap, and foster continued collaboration. We aim to encourage roboticists to adopt automated synthesis and formal verification tools in their research, and to direct the formal methods community to the needs of robotics.
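To make "unambiguous specification languages" concrete, here is a toy example (my own illustration, not material from the workshop): the classic response requirement "every request is eventually followed by a grant" is written in linear temporal logic as G(request → F grant), and a bounded version of it can be checked on a finite execution trace in a few lines of Python:

```python
def satisfies_response(trace, request="request", grant="grant"):
    """Check the LTL response property G(request -> F grant) on a finite trace.

    `trace` is a list of sets of event labels, one set per time step.
    Toy finite-trace semantics: every step containing `request` must be
    followed (at that step or later) by a step containing `grant`.
    """
    for i, labels in enumerate(trace):
        if request in labels and not any(grant in later for later in trace[i:]):
            return False
    return True

# A granted request satisfies the property; an unanswered one violates it.
print(satisfies_response([{"request"}, set(), {"grant"}]))  # True
print(satisfies_response([{"request"}, set(), set()]))      # False
```

Real formal methods tools go much further than this sketch: model checkers verify such properties over all behaviors of a system model, not a single recorded trace, and synthesis tools generate controllers that satisfy the specification by construction.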

The speaker list includes: Prof. Kostas Bekris (Rutgers University), Prof. Calin Belta (UMD), Prof. Chuchu Fan (MIT), Prof. Jie Fu (University of Florida), Prof. Yiannis Kantaros (Washington University in St. Louis), Prof. Rahul Mangharam (University of Pennsylvania), and Prof. Sanjit A. Seshia (University of California, Berkeley).

Supported by the IEEE Robotics and Autonomous Systems Technical Committee for Verification
of Autonomous Systems.

Nikolai Matni (UPenn): What makes learning to control easy or hard?


Dr. Nikolai Matni

Assistant Professor, Electrical and Systems Engineering, University of Pennsylvania

Time: 12:00 pm – 1:00 pm

Date: Friday, April 5

Location: BC 115

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) characterizing fundamental limits of learning-enabled control, and (ii) developing novel robust imitation learning algorithms with finite sample-complexity guarantees. In both cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He has held positions as a Visiting Faculty Researcher at Google Brain Robotics, NYC, as a postdoctoral scholar in EECS at UC Berkeley, and as a postdoctoral scholar in the Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds a B.A.Sc. and M.A.Sc. in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the AFOSR YIP (2024), NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, and the 2013 IEEE CDC Best Student Paper Award. He is also a co-author on papers that have won the 2022 IEEE CDC Best Student Paper Award and the 2017 IEEE ACC Best Student Paper Award.

Matei Ciocarlie (Columbia University): We (finally) have dexterous robotic manipulation. Now what? (4/26)

Dr. Matei Ciocarlie

Associate Professor, Department of Mechanical Engineering, Columbia University


Time: 11:00 am – 12:00 pm

Date: Friday, April 26

Location: PA 466

Abstract: At long last, robot hands are becoming truly dexterous. It took advances in sensor design, mechanisms, and computational motor learning all working together, but we're finally starting to see true dexterity, in our lab as well as others. This talk will focus on the path our lab took to get here, and questions for the future. From a mechanism design perspective, I will present our work on optimizing an underactuated hand transmission mechanism jointly with the grasping policy that uses it, an approach we refer to as "Hardware as Policy". From a sensing perspective, I will present our optics-based tactile finger, providing accurate touch information over a multi-curved three-dimensional surface with no blind spots. From a motor learning perspective, I will talk about learning tactile-based policies for dexterous in-hand manipulation and object recognition. Finally, we can discuss implications for the future: how do we consolidate these gains by making dexterity more robust, versatile, and general, and what new applications can it enable?

Bio: Matei Ciocarlie is an Associate Professor in the Mechanical Engineering Department at Columbia University, with affiliated appointments in Computer Science and the Data Science Institute. His work focuses on robot motor control, mechanism and sensor design, planning and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation. Matei completed his Ph.D. at Columbia University in New York; before joining the faculty at Columbia, Matei was a Research Scientist and then Group Manager at Willow Garage, Inc., and then a Senior Research Scientist at Google, Inc. In these positions, Matei contributed to the development of the open-source Robot Operating System (ROS), and led research projects in areas such as hand design, manipulation under uncertainty, and assistive robotics. In recognition of his work, Matei was awarded the Early Career Award by the IEEE Robotics and Automation Society, a Young Investigator Award by the Office of Naval Research, a CAREER Award by the National Science Foundation, and a Sloan Research Fellowship by the Alfred P. Sloan Foundation.

Spring 2024 Newsletter


Read this newsletter online

Robotics and Autonomy

News and highlights from Lehigh’s AIR Lab

Spring 2024

I’m delighted to share with you good news about advances and research in Lehigh’s Autonomous and Intelligent Robotics (AIR) Lab and associated robotics research across campus.

Over the past year, there have been several notable developments among our research community; find more about them in the links below.

Our faculty have secured significant grants from several federal agencies, including the National Science Foundation and Department of Defense, paving the way for future projects.


The highly successful “Defend the Republic 2023” competition held at Lehigh for the first time this Fall underscored our university’s growing prominence in the field of robotics. And we’re excited to expand our team with a new faculty position in robotics.

Our connection with AIR Lab alumni and friends remains vital, and we eagerly seek collaboration opportunities in education and research. Your stories, ideas, and projects are always welcome. In this dynamic era, the AIR Lab continues to strive for excellence, and your support is crucial. We encourage you to consider AIR Lab as a philanthropic focus, aiding us in achieving groundbreaking innovations.

Thank you for your support of the research team in the AIR Lab at Lehigh! Feel free to drop me a line with your thoughts and comments!


Nader Motee

Professor of Mechanical Engineering and Mechanics

Director, AIR Lab

P.C. Rossin College of Engineering and Applied Science

Lehigh University


National robotics competition takes flight at Lehigh

Hosted by Lehigh’s SWARMSLab, November’s “Defend the Republic” event found the perfect home in Mountaintop’s Building C.


Solving ambiguity in robot perception

ONR supports Professor Nader Motee’s novel approach to robotic risk assessment


Advancing the science of network control

Interdisciplinary team aims to develop security tools using formal methods


Improving control and performance of robot swarms

A Lehigh research team builds novel control algorithms and linked robots


Advancing autonomous vehicles in urban spaces

Lehigh’s self-driving cars team works to enhance safety and adaptability on the roads


Video: A ‘cool’ internship at JPL

Student-athlete Harrison Jenkins ’25 tackles challenges related to aerospace engineering and robotics


Autonomous driving for underwater drones

Lane Raczy ’25: Off the deep end as part of Lehigh’s Mountaintop Summer Experience


‘Cutting the cord’ to advance ocean data collection

Professor Rosa Zheng leads NSF-funded team developing a novel autonomous observatory node prototype


Robot autonomy seminars at AIR Lab

Conversations at the intersection of robotics, artificial intelligence, and autonomous systems


Summer CHOICES: Spreading the joy of STEM

Summer outreach program delves into the fundamentals of robotics—with emphasis on fun


PhD students shine at 2023 American Control Conference

ICRA and ACC 2023 accepted several academic submissions from AIR Lab researchers, and Guangyi Liu organized a half-day workshop at ACC


AIR Lab at LehighU



Advancing Network Control

With support from a nearly $500,000 Air Force Office of Scientific Research grant, researchers in Lehigh University’s P.C. Rossin College of Engineering and Applied Science are making strides in understanding and managing signal transmission dynamics in networks.

The interdisciplinary project is led by Subhrajit Bhattacharya, an assistant professor of mechanical engineering and mechanics, with Rick Blum, the Robert W. Wieseman Professor of Electrical and Computer Engineering, and focuses on developing theoretical foundations and algorithmic tools to control signal flow across networks.


Jaime Fernández Fisac (Princeton University): Games and Filters: a Road to Safe Robot Autonomy

Dr. Jaime Fernández Fisac

Assistant Professor, Department of Electrical and Computer Engineering, Princeton University


Time: 11 am (coffee and cookies at 10:30 am in PA 367)

Date: Friday, December 8

Location: PA 466

Abstract: Autonomous robotic systems promise to revolutionize our homes, cities, and roads—but will we trust them with our lives? This talk will take stock of today's safety-critical robot autonomy, highlight recent advances, and offer some reasons for optimism. We will first see that a broad family of safety schemes from the last decade can be understood under the common lens of a universal safety filter theorem, lighting the way for the systematic design of next-generation safety mechanisms. We will explore reinforcement learning (RL) as a general tool to synthesize global safety filters for previously intractable robotics domains, such as walking or driving through abrupt terrain, by departing from the traditional notion of accruing rewards in favor of a safety-specific Bellman equation. We will show that this safety-RL formulation naturally allows learning from near misses and boosting robustness through adversarial gameplay. Finally, we will turn our attention to dense urban driving, where safety hinges on the autonomous vehicle's rapidly unfolding interactions with other road users. We will examine how robots can bolster safety by leaning on their future ability to seek out missing key information about other agents' intent or even their location. We will conclude with an outlook on future autonomous systems and the role of transparent, real-time safety proofs in generating public trust.
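As a toy illustration of what a safety-specific Bellman equation can look like (a minimal sketch under assumptions of my own, not the speaker's actual formulation): instead of summing rewards, the backup V(s) = min( g(s), max_a V(f(s, a)) ) tracks the worst safety margin g ever reached along the best trajectory, so the sign of V certifies whether a state can be kept out of the failure set:

```python
import numpy as np

# Toy safety value iteration on a 1-D gridworld (hypothetical setup).
# Failure states sit at both ends of the grid.
N = 10
g = np.array([min(s, N - 1 - s) for s in range(N)], dtype=float)
# g(s) is a safety margin: non-positive exactly on the failure set {0, N-1}.

def step(s, a):
    """Deterministic dynamics: move left / stay / right, clipped to the grid."""
    return min(max(s + a, 0), N - 1)

# Safety Bellman backup: V(s) = min( g(s), max_a V(f(s, a)) ).
# Unlike a sum-of-rewards backup, the outer min remembers the worst
# margin ever encountered under the best available action.
V = g.copy()
for _ in range(100):
    V_new = np.array([min(g[s], max(V[step(s, a)] for a in (-1, 0, 1)))
                      for s in range(N)])
    if np.allclose(V_new, V):
        break
    V = V_new

# V(s) > 0 certifies that some policy keeps state s out of the failure set forever.
```

The grid size, dynamics, and margin function here are all invented for illustration; in practice the RL formulation described in the talk learns such a safety value function without enumerating states.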

Bio: Jaime Fernández Fisac is an Assistant Professor of Electrical and Computer Engineering at Princeton University, where he directs the Safe Robotics Laboratory and co-directs the Princeton AI4ALL outreach summer program. His research combines control theory, artificial intelligence, cognitive science, and game theory with the goal of enabling robots to operate safely in human-populated spaces in a way that is well understood, and thereby trusted, by their users and the public at large. Prior to joining the Princeton faculty, he was a Research Scientist at Waymo (formerly Google's Self-Driving Car project) from 2019 to 2020, working on autonomous vehicle safety and interaction. He received an Engineering Diploma from the Universidad Politécnica de Madrid, Spain in 2012, an M.Sc. in Aeronautics from Cranfield University, U.K. in 2013, and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2019. He is a recipient of the La Caixa Foundation Fellowship, the Leon O. Chua Award, the Google Research Scholar Award, and the Sony Focused Research Award.

