Studying Robotics Through Powerful Research
Zhexin Xu, MS’25, robotics, pursued a master’s degree to explore a subject that has always intrigued him. Xu has fed his curiosity by participating in specialized research at Northeastern, where he has been able to deepen his knowledge of robotics and strengthen his skill set.
Zhexin Xu graduated from North China Electric Power University in 2023 with a bachelor’s degree in automation engineering. After completing his bachelor’s, he secured an internship in robotics, focused on perception and state estimation. Fascinated by the subject and eager to deepen his knowledge of the field, Xu knew his next step would be a master’s degree in robotics.
Before Xu started at Northeastern University, he became fascinated by Assistant Professor David Rosen’s research at the NEU Robot Autonomy Lab (NEURAL). The work in the NEURAL lab focuses on “the development of computationally efficient and provably robust algorithms for machine learning, perception and control.” A major focus in the lab is the “design of scalable algorithms with provable performance guarantees for fundamental robotics state-estimation problems.” This was exactly the kind of research Xu wanted to get involved in.
NEURAL Lab
With an established interest in Professor Rosen’s research, Xu reached out to him about a co-op opportunity and was thrilled to join the NEURAL lab as a graduate research assistant. He started his work in January 2024 and has continued his research at the lab beyond his co-op.

Xu and the NEURAL lab team at dinner.
Xu’s primary focus for his co-op research was proposing a factor graph-based approach to certifiable estimation. Simultaneous localization and mapping (SLAM) is the process of constructing a global model of an environment from local observations, and it is a foundational capability for mobile robots; it supports core functions like planning, navigation and control. However, most state estimation problems, including SLAM, are non-convex, so the field’s reliance on smooth local optimization carries a fundamental algorithmic limitation. The problem is that robots need to estimate their own location while simultaneously building a map of their environment: imagine a self-driving car trying to navigate a new neighborhood. Currently, the methods robots use to solve this problem have a weakness: they work like climbing a hill in fog. The robot might think it has reached the highest point when really it has just found a small bump, not the actual peak. This means the robot’s answer depends heavily on where it started looking, which can be dangerous in situations like autonomous driving.
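To make the “climbing a hill in fog” picture concrete, the short Python sketch below (a toy example, not drawn from Xu’s work) runs an off-the-shelf local optimizer on a simple non-convex cost from two different starting points and arrives at two different answers, only one of which is the true best solution.

```python
# Toy illustration of why local optimization is sensitive to where it starts.
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # A simple non-convex cost with two basins, standing in for a SLAM objective.
    return (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]

for x0 in (np.array([2.0]), np.array([-2.0])):
    result = minimize(cost, x0)
    print(f"start = {x0[0]:+.1f}  ->  estimate = {result.x[0]:+.3f}, "
          f"cost = {result.fun:.3f}")

# Both runs "converge", but only the second finds the lower of the two minima:
# the answer depends on the starting point, which is exactly the reliability
# problem that certifiable estimation methods aim to remove.
```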
While scientists have made progress on fixing this issue, the solutions are complicated and custom-built for each specific situation, like needing a different tool for every job. Meanwhile, there’s a popular, user-friendly method called “factor graphs” that engineers love because it’s modular (like building with LEGO® blocks), but it has that “climbing in fog” limitation. In his research, Xu worked on connecting these two worlds: finding a way to use the easy, modular factor-graph approach while getting the reliability of the more sophisticated methods. This would make it simpler to create trustworthy navigation systems for robots without needing to custom-design solutions from scratch every time.
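For readers curious what the “LEGO® block” style looks like in practice, here is a minimal sketch using GTSAM, a widely used open-source factor-graph library. The article does not say which software the NEURAL lab uses, so treat this purely as an illustration of the modular workflow: each measurement becomes one factor added to a graph, and a local optimizer refines an initial guess (exact class names can vary slightly between GTSAM versions).

```python
# Hypothetical illustration with GTSAM, not necessarily the NEURAL lab's tooling.
import numpy as np
import gtsam

# Noise models: standard deviations on (x, y, heading).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph = gtsam.NonlinearFactorGraph()
# A prior anchors the first pose; odometry factors connect successive poses.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# The local optimizer refines an initial guess, so the guess's quality matters.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.0, 0.2))
initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))
initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

The appeal is that adding a new sensor is just adding new factors to the graph; the limitation Xu’s work targets is that the final optimization step is still local, with no guarantee it found the best answer.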
Xu and the NEURAL lab team presented their early results at the IEEE International Conference on Robotics and Automation (ICRA) in Atlanta, Georgia, in May 2025. They also recently submitted this work to a top-tier robotics journal. Xu said that resolving a difficult challenge is one of the most thrilling and rewarding moments in research. He has enjoyed working through these challenges to find a solution and hopes to continue working in a laboratory environment.

Xu presenting his research at the IEEE International Conference on Robotics and Automation.
While at the NEURAL lab, Xu also worked on helping robots deal with bad data. When robots collect information from their sensors (like cameras or GPS), some of that information can be wrong or misleading; these errors are called “outliers.” It’s like trying to find your way using a map where someone has drawn a few fake streets. Xu and his team created a method that can identify and ignore this bad data while remaining reliable and trustworthy. They built it directly into the factor-graph framework (the LEGO®-like modular approach mentioned earlier), making it easier for robots to navigate accurately even when some of their information is incorrect. This work by Xu and the NEURAL team has been submitted for consideration at ICRA 2026.
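The basic intuition behind outlier-robust estimation can be shown in a few lines of Python. This toy sketch is not the NEURAL team’s method; it simply uses Huber-style reweighting to show how downweighting measurements that disagree badly with the rest keeps one bad reading from dragging the estimate away from the truth.

```python
# Toy example: estimate a single quantity from repeated measurements,
# one of which is a gross outlier.
import numpy as np

measurements = np.array([2.0, 2.1, 1.9, 2.05, 9.0])  # last one is an outlier

# Ordinary least squares reduces to the mean, which the outlier pulls upward.
ols_estimate = measurements.mean()

# Robust estimate via iteratively reweighted least squares with Huber weights.
estimate = ols_estimate
delta = 0.5  # residuals larger than this get downweighted
for _ in range(20):
    residuals = measurements - estimate
    weights = np.where(np.abs(residuals) <= delta,
                       1.0, delta / np.abs(residuals))
    estimate = np.sum(weights * measurements) / np.sum(weights)

print(f"least squares: {ols_estimate:.2f}   robust: {estimate:.2f}")
# The least-squares answer is dragged toward 9.0; the robust one stays near 2.0.
```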
Future Perspectives
Xu came to Northeastern to grow his understanding of robotics, specifically in perception and state estimation. He believes his work at the NEURAL lab has strengthened his technical abilities and given him the tools and knowledge to continue research in robotics. Xu will graduate with his master’s degree in December 2025 and plans to further his education with a PhD. He is interested in diving deeper into theoretical robotics research and hopes to work at the intersection of mathematics and practical robot applications.