Author: ammb23

Dr. Sarath Sreedharan (Colorado State University): Human-Aware AI: A Unifying Framework to Capture Human-AI Interaction

Dr. Sarath Sreedharan

Assistant Professor, Department of Computer Science, Colorado State University

Time: 1:00 pm – 2:00 pm

Date: Friday, November 21, 2025

Location: BC 220

Abstract: While we have witnessed extraordinary progress across various subfields of AI, the deployment of these systems in safety-critical and mission-critical domains has lagged behind. A key requirement for such deployments is the availability of AI systems capable of generating optimal behavior that can be formally verified and effectively used by non-AI experts from diverse backgrounds. While significant advances have been made toward building such systems, we still lack comprehensive formal frameworks to model and analyze the complex dynamics of human-AI interaction. In this talk, I will introduce the Human-Aware AI framework, a multi-agent planning framework specifically designed to support and reason about human-AI interaction. I will show how this framework provides novel solutions to challenges such as explainability, value alignment, and trust calibration, and demonstrate its application in domains including intelligent tutoring systems, cybersecurity, and robotics.

Bio: Prof. Sreedharan is an Assistant Professor at Colorado State University. His core research interests include designing human-aware decision-making systems that generate behaviors aligned with human expectations. He completed his Ph.D. at Arizona State University, where his doctoral dissertation received one of the 2022 Dean’s Dissertation Awards from the Ira A. Fulton Schools of Engineering and an Honorable Mention for the ICAPS-23 Outstanding Dissertation Award. He is the lead author of a Morgan & Claypool monograph on explainable human-AI interaction and has given tutorials and invited talks on the topic at various venues. He was selected as a DARPA Riser Scholar for 2022, a Highlighted New Faculty at AAAI-23, and one of IEEE’s “AI’s 10 to Watch” for 2024.

Dr. Yunzhu Li (Columbia University): Foundation Models for Robotic Manipulation: Opportunities and Challenges

Dr. Yunzhu Li

Assistant Professor, Computer Science and Computer Engineering, Columbia University

Time: 1:00 pm – 2:00 pm

Date: Friday, November 14, 2025

Location: BC 220

Abstract: Foundation models, such as GPT, have achieved remarkable progress in natural language and vision, demonstrating strong adaptability to new tasks and scenarios. Physical interaction, such as cooking, cleaning, or caregiving, remains a frontier where these models and robotic systems have yet to reach comparable levels of generalization. In this talk, I will discuss opportunities for incorporating foundation models into robotic pipelines to extend capabilities beyond those of traditional methods. The focus will be on two areas: (1) task specification and (2) task-level planning. The central idea is to translate the commonsense knowledge embedded in foundation models into structural priors that can be integrated into robot learning systems. This approach combines the strengths of different modules (for example, VLMs for task interpretation and constrained optimization for motion planning), achieving the best of both worlds. I will show how such integration enables robots to interpret free-form natural language instructions and perform a wide range of real-world manipulation tasks. In the latter half of the talk, I will discuss current limitations of foundation models, key challenges ahead, and potential avenues for progress, particularly in multi-modal sensing and structured world modeling.

Bio: Yunzhu Li is an Assistant Professor of Computer Science at Columbia University. Before joining Columbia, he was an Assistant Professor at UIUC CS and spent time as a Postdoc at Stanford, collaborating with Fei-Fei Li and Jiajun Wu. Yunzhu earned his PhD from MIT under the guidance of Antonio Torralba and Russ Tedrake. His work has been recognized with the Best Paper Award at ICRA, the Best Systems Paper Award, and as a Finalist for the Best Paper Award at CoRL. He is also a recipient of the AAAI New Faculty Highlights, the Sony Faculty Innovation Award, the Amazon Research Award, the Adobe Research Fellowship, and the First Place Ernst A. Guillemin Master’s Thesis Award in AI and Decision Making at MIT. His research has been published in top journals and conferences, including Nature and Science, and featured by major media outlets such as CNN, BBC, and The Wall Street Journal.

Dr. Dinesh Manocha (University of Maryland at College Park): Robot Navigation in the Wild

Dr. Dinesh Manocha

Professor, Computer Science and Electrical and Computer Engineering, University of Maryland at College Park

Time: 1:00 pm – 2:00 pm

Date: Friday, November 7, 2025

Location: BC 220

Abstract: In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments such as homes, dense traffic, outdoor terrains, and public places. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and over uneven terrain. We have integrated these methods with wheeled robots, home robots, and legged platforms, and highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.

Bio: Prof. Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physics-based modeling, and robotics. His group has developed several software packages that have become standards and are licensed to 60+ commercial vendors. He has published more than 850 papers and supervised 57 Ph.D. dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. in November 2016.

Professor Negar Mehr (University of California, Berkeley): Interactive Autonomy: Learning and Control for Multi-agent Interactions

Dr. Negar Mehr

Assistant Professor, Mechanical Engineering, University of California, Berkeley

Time: 1:00 pm – 2:00 pm

Date: Friday, October 31, 2025

Location: BC 220

Abstract: To truly transform our lives, autonomous systems must operate in environments shared with other agents. For instance, delivery robots need to move through spaces that are shared with humans, and warehouse robots must coordinate on shared factory floors. Such multi-agent settings demand systematic methods that enable efficient and reliable interactions among agents. In the first part of my talk, I focus on control challenges in such domains and will discuss game-theoretic planning and control for robots. Intelligent interaction requires robots to reason about how their decisions affect and are affected by others. I will present our recent results showing how exploiting structural properties of interactions leads to motion planning algorithms that are both efficient and tractable for real-time deployment. The second part of the talk will focus on learning in interactive domains, including imitation learning and reinforcement learning. While these approaches have advanced significantly in single-agent settings, multi-agent domains present unique learning challenges because decisions are tightly coupled across agents. I will highlight how some of these challenges can be addressed to make learning feasible in interactive multi-agent domains.

Bio: Negar Mehr is an assistant professor in the Department of Mechanical Engineering at the University of California, Berkeley. Previously, she was an assistant professor of Aerospace Engineering at the University of Illinois Urbana-Champaign. Before that, she was a postdoctoral scholar in the Aeronautics and Astronautics department at Stanford. She received her Ph.D. in Mechanical Engineering from UC Berkeley in 2019 and her B.Sc. in Mechanical Engineering from Sharif University of Technology, Tehran, Iran, in 2013. She is a recipient of the NSF CAREER Award. She has been recognized as a rising star by the American Society of Mechanical Engineers (ASME). She was awarded the IEEE Intelligent Transportation Systems Best Ph.D. Dissertation Award in 2020.

Professor Christine Allen-Blanchette (Princeton University): Symmetries in Neural Network Design and Application

Professor Christine Allen-Blanchette

Assistant Professor, Mechanical and Aerospace Engineering, Princeton University

Time: 1:00 pm – 2:00 pm

Date: Friday, October 24, 2025

Location: BC 220

Abstract: Scientists and engineers are increasingly applying deep neural networks (DNNs) to the modeling and design of complex systems. While the flexibility of DNNs makes them an attractive tool, it also makes their solutions difficult to interpret and their predictive capability difficult to quantify. In contrast, scientific models directly expose the equations governing a process, but their applicability is restricted in the presence of unknown effects or when the data are high-dimensional. The emerging paradigm of physics-guided artificial intelligence asks: How can we combine the flexibility of DNNs with the interpretability of scientific models to learn relationships from data consistent with known scientific theories? In this talk, I will discuss my work on incorporating prior knowledge of problem structure (e.g., physics-based constraints) into neural network design. I will demonstrate how prior knowledge of task symmetries can be leveraged for improved learning outcomes, and how appropriately structured learning algorithms can be useful in scientific contexts.

Bio: Christine Allen-Blanchette is an assistant professor in the Department of Mechanical and Aerospace Engineering, and Center for Statistics and Machine Learning at Princeton University. They hold an associated faculty appointment in the Computer Science department and an affiliation with Robotics at Princeton. Before joining the faculty, they were a Princeton Presidential Postdoctoral Fellow mentored by Naomi Leonard. They completed their PhD in Computer Science and MSE in Robotics at the University of Pennsylvania, and their BS degrees in Mechanical and Computer Engineering at San Jose State University. Among their awards are the Princeton Presidential Postdoctoral Fellowship, NSF Integrative Graduate Education and Research Training award, and GEM Fellowship sponsored by the Adobe Foundation.

Professor Octavia Camps (Northeastern University): When Ordering Matters: Dynamics-Inspired AI for Sequences

Professor Octavia Camps

Professor, Electrical and Computer Engineering, Northeastern University

Time: 1:00 pm – 2:00 pm

Date: Friday, October 10, 2025

Location: BC 220

Abstract: A long-standing goal of Machine Learning is to enable machines to structure and interpret the world as humans do. This challenge is particularly complex in time-series data, such as video sequences, where seemingly different observations can arise from the same underlying dynamics. In this talk, I will explore how leveraging these inherent dynamics can lead to frugal and interpretable architectures for sequence analysis, classification, prediction, and manipulation. I will illustrate these ideas with two key examples. First, I will introduce CVAR, a dynamics-based architecture for cross-view action recognition. By exploiting temporal coherence in sequential data, CVAR extracts dynamics-based features and invariant representations through an information-autoencoding unsupervised learning paradigm. This flexible framework accommodates various input modalities, including RGB and 3D skeleton data. Experimental results on four benchmark datasets demonstrate that CVAR not only outperforms state-of-the-art methods across all modalities but also significantly bridges the performance gap between RGB and 3D skeleton-based approaches. Next, I will present JPDVT, a framework designed to solve “Set to Sequence” problems, where unordered, incomplete sets must be assembled into meaningful sequences. By employing conditional diffusion denoising probabilistic models, JPDVT learns the probability distribution of all possible permutations in the training data, enabling sequence reconstruction through distribution sampling. Our approach achieves state-of-the-art performance in both quantitative and qualitative evaluations, demonstrating its ability to handle missing data and to solve larger, more complex image and video puzzles than previous methods. These examples highlight the power of ordering-aware architectures in structured sequence learning and their broad applicability across domains.

Bio: Octavia Camps received B.S. degrees in computer science and in electrical engineering from the Universidad de la República (Uruguay), and M.S. and Ph.D. degrees in electrical engineering from the University of Washington. Since 2006, she has been a Professor in the Electrical and Computer Engineering Department at Northeastern University. From 1991 to 2006 she was a faculty member in Electrical Engineering and in Computer Science and Engineering at The Pennsylvania State University. Prof. Camps was a visiting researcher in the Computer Science Department at Boston University during Spring 2013, and in 2000 she was a visiting faculty member at the California Institute of Technology and at the University of Southern California. She is an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and was a General Chair of IEEE/CVF Computer Vision and Pattern Recognition (CVPR) 2024.

Her main research interests include dynamics-based computer vision, machine learning, and image processing. In particular, her work seeks data-driven dynamic representations for high dimensional temporal sequences, which are compact, physically meaningful, and capture causality relationships. Combining recent deep learning developments with concepts from dynamic systems identification, she has developed models and algorithms for a range of video analytics applications, including human re-identification, visual tracking, action recognition, video generation, and medical imaging.

Professor Xuesu Xiao (George Mason University): Deployable Robots that Learn

Professor Xuesu Xiao

Assistant Professor, Computer Science, George Mason University

Time: 1:00 pm – 2:00 pm

Date: Friday, September 19, 2025

Location: BC 220

Abstract: While many robots are currently deployable in factories, warehouses, and homes, their autonomous deployment requires either that the deployment environments be highly controlled or that the deployment entail executing only a single preprogrammed task. These deployable robots do not learn to address changes and to improve performance. In uncontrolled environments and for novel tasks, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment. In this talk, I will present three approaches to removing these limitations by learning to enable autonomous deployment, in the context of mobile robot navigation, a common core capability for deployable robots: (1) Interactive Learning via Adaptive Planner Parameter Learning fine-tunes existing motion planners by learning from simple interactions with non-expert users before autonomous deployment and adapts to different deployment scenarios; (2) In-Situ Learning of vehicle kinodynamics allows robots to learn from vehicle-terrain interactions during deployment and to navigate accurately, quickly, and stably on unstructured off-road terrain; (3) Reflective Learning via Learning from Hallucination enables agile navigation in highly constrained deployment environments by reflecting on previous deployment experiences and creating synthetic obstacle configurations to learn from. In addition, I will briefly introduce some more recent work on robot night vision, social robot navigation, multi-robot coordination, and human-robot interaction.

Bio: Xuesu Xiao is an Assistant Professor in the Department of Computer Science at George Mason University. Xuesu (Prof. XX) directs the RobotiXX lab, in which researchers (XX-Men and XX-Women) and robots (XX-Bots) work together at the intersection of motion planning and machine learning, with a specific focus on developing highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision. Xuesu’s work has been deployed in real-world robot field missions, including search and rescue efforts in the Mexico City earthquake and the Greece refugee crisis, the decommissioning effort in the Fukushima nuclear disaster, and multiple search and rescue exercises in the US. Xuesu’s research has been featured by The New York Times, InDro Robotics, Google AI Blog, Clearpath Robotics, IEEE Spectrum, US Army, Robotics Business Review, Tech Briefs, NSF Science Nation, WIRED, and KBTX-TV. Xuesu has been awarded the George Mason University Presidential Award for Faculty Excellence in Research and the College of Engineering and Computing 2025 Faculty Excellence Award for Research.

Professor Nadia Figueroa (University of Pennsylvania): Teaching Robots to Move, Grasp, and Interact Safely in Our Dynamic, Human-Centric World

Professor Nadia Figueroa

Shalini and Rajeev Misra Presidential Assistant Professor, Mechanical Engineering and Applied Mechanics (MEAM) Department, University of Pennsylvania

Time: 12:30 pm – 1:30 pm

Date: Friday, April 18, 2025

Location: BC 220

Abstract: For the last few decades we have lived with the promise of one day being able to own a robot that can coexist, collaborate, and cooperate with humans in our everyday lives. This has motivated a vast amount of research on robot control, motion planning, machine learning, perception, and physical human-robot interaction (pHRI). However, we have yet to see robots fluidly collaborating with humans and other robots in the human-centric, dynamic spaces we inhabit. This deployment bottleneck is due to traditionalist views of how robot tasks and behaviors should be specified and controlled. For collaborative robots to be truly adopted in such dynamic, ever-changing environments, they must be adaptive, compliant, reactive, safe, and easy to teach or program. Combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity, time-critical and safety-critical requirements, and contradictory goals. In this talk, I will show that with a Dynamical Systems (DS) approach to motion planning and pHRI, we can achieve reactive, provably safe, and stable robot behaviors while efficiently teaching the robot complex tasks from a single demonstration (or a handful of demonstrations). Such an approach can be extended to offer task-level reactivity and transferability, and can be used to incrementally learn from new data and failures in a matter of seconds, even during physical interactions, just as humans do. Furthermore, I will show that such a DS perspective on robot motion planning naturally allows for compliant and passive robot behaviors that inherently ensure human safety. While reactivity and compliance are favorable from the human perspective, it is often difficult to enforce safety-critical constraints with classical reactive and impedance control techniques. Hence, I will conclude the talk with recent work that offers the best of both worlds: real-time reactivity and compliance while ensuring safety-critical constraints, allowing the robot to be passive only when feasible and to perform constraint-aware physical interaction tasks with humans, such as dynamic co-manipulation of large and heavy objects.

Bio: Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania. She holds secondary appointments in Computer and Information Science and Electrical and Systems Engineering and is a core faculty of the GRASP laboratory. Before joining Penn, she was a Postdoctoral Associate in the Interactive Robotics Group (part of CSAIL) at the Massachusetts Institute of Technology (MIT). She obtained a Ph.D. in Robotics, Control and Intelligent Systems from the Swiss Federal Institute of Technology in Lausanne (EPFL). Prior to this, she spent time as a Research Assistant in the Robotics and Mechatronics Institute at the German Aerospace Center (DLR) and at NYU Abu Dhabi. She holds a B.Sc. in Mechatronics from Monterrey Tech and M.Sc. in Automation and Robotics from TU Dortmund. Her research focuses on developing tightly coupled learning, control and estimation algorithms to achieve fluid human-robot collaborative autonomy with safety, efficiency and robustness guarantees. This involves research at the intersection of machine learning, control theory, artificial intelligence, perception, biomechanics and psychology – with a physical human-robot interaction perspective. She has received several honors for her contributions to robotics, including being a finalist for the Georges Giralt PhD award, the KUKA innovation award and receiving best paper awards and nominations at major robotics conferences and journals.

Professor Brendan Englot (Stevens Institute of Technology): Advancing Autonomy for Marine Robots through Improved Situational Awareness and Decision-Making Under Uncertainty

Professor Brendan Englot

Professor, Department of Mechanical Engineering, and Director of the Stevens Institute for Artificial Intelligence

Time: 12:30 pm – 1:30 pm

Date: Friday, April 11, 2025

Location: BC 220

Abstract: This talk will discuss two research projects aimed at advancing the autonomy of marine robots operating in complex environments. The first is inspired by the need for underwater robots to serve as resident autonomous systems for offshore infrastructure, including at aquaculture sites. To achieve the situational awareness needed for autonomous inspection and precise physical intervention, I will discuss recent work that aims to produce accurate, high-definition 3D maps of underwater infrastructure using wide-aperture multibeam imaging sonar, an underwater sensing technology that permits long-range sensing with wide area coverage in turbid water. A drawback of this sensor technology is the flattened, 2D imagery it produces from 3D observations. Several novel techniques will be discussed that eliminate ambiguity and enable dense 3D mapping of underwater structures. The second topic is inspired by the need for intelligent decision-making in stochastic environments by teams of unmanned surface vehicles (USVs). I will describe a multi-year research effort to identify practical and impactful ways to apply Distributional Reinforcement Learning to autonomous navigation for USVs in disturbance-filled environments. Our results include several open-source algorithm implementations and benchmarking tools that advance the capabilities of Sim2Real for marine robots.

Bio: Dr. Brendan Englot joined Stevens Institute of Technology as an Assistant Professor of Mechanical Engineering in 2014, and he is currently the Anson Wood Burchard Endowed Professor and Director of the Stevens Institute for Artificial Intelligence. Brendan is also the recipient of a 2017 NSF CAREER award and a 2020 ONR Young Investigator award. Prior to his time at Stevens, he was a research scientist at United Technologies Research Center in East Hartford, Connecticut, where he was a principal investigator in the Autonomous and Intelligent Robotics Laboratory and a technical contributor to the Sikorsky Autonomous Research Aircraft. Brendan received S.B., S.M. and Ph.D. degrees in Mechanical Engineering from MIT in 2007, 2009 and 2012, respectively. He is a senior member of the IEEE, and a co-author of eight U.S. patents and more than 75 refereed journal and conference papers. 

Professor Leonardo Bobadilla (Florida International University): Robotics in Adverse Conditions: Overcoming Sensing, Communication, and Uncertainty Challenges

Professor Leonardo Bobadilla

Associate Professor, Florida International University School of Computing and Information Sciences

Time: 12:30 pm – 1:30 pm

Date: Friday, April 4, 2025

Location: BC 220

Abstract: Several essential domains of robotics and autonomous vehicles, such as surveillance, planetary exploration, oceanic monitoring, automated construction, and search and rescue, require filtering and planning for robots in scenarios where communication, sensing, and uncertainty modeling are difficult. In this talk, I will discuss our progress and experiments with collaborators in addressing these issues in three directions: 1) new limited-communication coordination approaches that can work in extremely low-bandwidth conditions; 2) a framework for embedding task-specific uncertainty requirements into navigation policies; and 3) an approach to localize and navigate using features of naturally occurring scalar fields.

Bio: Dr. Leonardo Bobadilla is currently an Associate Professor at the Knight Foundation School of Computing and Information Sciences (KFSCIS), College of Engineering and Computing, at Florida International University (FIU). He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2013. He is interested in understanding the information requirements for solving fundamental robotics tasks such as navigation, patrolling, tracking, and motion safety. He has deployed test beds that monitor and control many mobile units requiring minimal sensing, actuation, and computation. He has published over 70 peer-reviewed journal articles and conference papers in Robotics, Control, and Oceanic Engineering. His research articles have appeared in prestigious journals such as IEEE Transactions on Automation Science and Engineering, IEEE Journal of Oceanic Engineering, IEEE Robotics and Automation Letters, ACM Transactions on Sensor Networks, and Journal of Intelligent and Robotic Systems, and in top conferences such as ICRA (IEEE International Conference on Robotics and Automation), IROS (IEEE/RSJ International Conference on Intelligent Robots and Systems), and RSS (Robotics: Science and Systems). The ARL, ARPA-E, DoD, NSF, ONR, DHS, FDEP, and the Ware Foundation have sponsored his research.