Sarah Ostadabbas
Associate Professor, Electrical and Computer Engineering
Director, Women in Engineering Program
Office
- 520 ISEC
- 617.373.4992
Research Focus
Data-efficient machine learning and computer vision for real-world video perception and generation
About
Professor Ostadabbas is an associate professor in the Electrical and Computer Engineering Department at Northeastern University (NU) in Boston, Massachusetts, USA. She joined NU in 2016 after completing her postdoctoral research at Georgia Tech, having earned her PhD at the University of Texas at Dallas in 2014. At NU, she directs the Augmented Cognition Laboratory (ACLab) and serves as Director of the Women in Engineering (WIE) program.

Her research focuses on the convergence of computer vision and machine learning, with particular emphasis on representation learning for visual perception problems. In her applied work, she has contributed significantly to the understanding, detection, and prediction of human and animal behaviors by modeling visual motion under various biomechanical factors. She also works in the Small Data domain, including medical and military applications where data collection and labeling are costly and protected by strict privacy laws. Her solutions involve deep learning frameworks that operate effectively with limited labeled training data, incorporate domain knowledge for prior learning and synthetic data augmentation, and improve generalization across domains by learning invariant representations.

Professor Ostadabbas has co-authored over 140 peer-reviewed journal and conference articles and has received research awards from the National Science Foundation (NSF), the Department of Defense (DoD), Sony, MathWorks, Amazon AWS, Verizon, Oracle, Biogen, and NVIDIA. Her honors include the NSF CAREER Award (2022), the Sony Faculty Innovation Award (2023), the Cade Prize for Inventivity in the Technology category (2024), runner-up for the Oracle Excellence Award (2023), and recognition by LDV Capital as one of the 120+ Women Spearheading Advances in Visual Tech and AI (2024). In 2025, she was elected a Faculty Fellow of NU's College of Engineering and received its Constantinos Mavroidis Translational Research Award. She has served on the organizing committees of many workshops and renowned conferences (including CVPR, ECCV, ICCV, ICIP, ICASSP, BioCAS, CHASE, and ICHI) in roles such as Lead/Co-Lead Organizer, Program Chair, Board Member, Publicity Co-Chair, Session Chair, Technical Committee Member, and Mentor.
Education
- Postdoc (2015)—Georgia Tech
- PhD (2014) Electrical & Computer Engineering (Signal Processing)—UT Dallas
- MS (2007) Electrical Engineering (Control)—Sharif University of Tech, Tehran, Iran
- BS (2006) Electrical Engineering (Electronics)—Amirkabir University of Tech, Tehran, Iran
- BS (2005) Electrical Engineering (Biomedical)—Amirkabir University of Tech, Tehran, Iran
Professional Affiliations
- Member of IEEE
- IEEE Women in Engineering
- IEEE Signal Processing Society
- IEEE EMBS
- IEEE Young Professionals
- ACM SIGCHI
Research Overview
Data-efficient machine learning and computer vision for real-world video perception and generation
At the Augmented Cognition Lab (ACLab), we rethink how machines perceive and understand the world: not by scaling up data, but by learning more from less. Our research focuses on building intelligent systems that can reason, predict, and interact meaningfully with the real world in “Small Data” domains, where large-scale annotation is impractical or impossible. We specialize in motion-centric video understanding, motivated by the insight that motion encodes causality, intent, and dynamics that static appearance alone cannot capture. Across applications ranging from infant and animal behavior analysis to robotics, healthcare, and defense, we develop methods that operate reliably in unconstrained, real-world environments. Rather than asking how much data can be collected, we ask how much structure can be extracted from the data already available, leveraging physics-inspired priors, generative modeling, and interpretable motion representations such as pose, trajectories, and temporal patterns. Our vision is to augment human understanding rather than replace it, by creating AI systems that are data-efficient, interpretable, and actionable in the settings where they matter most.
Selected Research Projects
- Graph-Centric Exploration of Nonlinear Neural Dynamics in Visuospatial-Motor Functions During Immersive Human-Computer Interactions – Principal Investigator, National Science Foundation
- PFI-RP: Augmented Reality and Electroencephalography for Detecting, Assessing, and Rehabilitating Visual Unilateral Neglect in Stroke Patients – Principal Investigator, National Science Foundation
- Development of a Precision Closed-Loop BCI for Socially Fearful Teens with Depression and Anxiety – Principal Investigator, National Science Foundation
- CAREER: Learning Visual Representations of Motor Function in Infants as Prodromal Signs for Autism – Principal Investigator, National Science Foundation
- CHS: Small: Collaborative Research: A Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface – Principal Investigator, National Science Foundation
- CRII: SCH: Semi-Supervised Physics-Based Generative Model for Data Augmentation and Cross-Modality Data Reconstruction – Principal Investigator, National Science Foundation
- NCS-FO: Leveraging Deep Probabilistic Models to Understand the Neural Bases of Subjective Experience – Co-Principal Investigator, National Science Foundation (Neural and Cognitive Systems)
- NRI: EAGER: Teaching Aerial Robots to Perch Like a Bat via AI-Guided Design and Control – Principal Investigator, National Science Foundation
- SCH: INT: Collaborative Research: Detection, Assessment and Rehabilitation of Stroke-Induced Visual Neglect Using Augmented Reality (AR) and Electroencephalography (EEG) – Principal Investigator, National Science Foundation
Selected Publications
- B. Galoaa, X. Bai, S. Moezzi, U. Nandi, D. R. Sai Siddhartha Vivek, S. Amraee, and S. Ostadabbas, “Look around and pay attention: Multi-camera point tracking reimagined with transformers,” in International Conference on 3D Vision (3DV), Mar. 2026.
- L. Jiang, S. Zhu, S. Moezzi, Y. Luo, and S. Ostadabbas, “Broadening view synthesis of dynamic scenes from constrained monocular videos,” in International Conference on 3D Vision (3DV), Mar. 2026.
- L. Song, H. Bishnoi, S. K. R. Manne, S. Ostadabbas, B. J. Taylor, and M. Wan, “Overcoming small data limitations in video-based infant respiration estimation,” in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Mar. 2026.
- X. Bai, S. A. Sreeramagiri⋆, D. R. Sai Siddhartha Vivek, B. Galoaa, E. Mortin, and S. Ostadabbas, “Spartan: Spatiotemporal pose-aware retrieval for text-guided autonomous navigation,” in British Machine Vision Conference (BMVC), Nov. 2025.
- B. Galoaa, S. Amraee, and S. Ostadabbas, “More than meets the eye: Enhancing multi-object tracking with softmax splatting and optical flow,” in International Conference on Machine Learning (ICML), Jul. 2025.
- X. Bai, L. Jiang, Y. Luo, and S. Ostadabbas, “Dual-conditioned temporal diffusion modeling for driving scene generation,” in International Conference on Robotics and Automation (ICRA), May 2025.
- B. Galoaa, S. Amraee, and S. Ostadabbas, “Dragontrack: Transformer-enhanced graphical multi-person tracking in complex scenarios,” in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Mar. 2025.
- X. Huang, E. Hatamimajoumerd, A. Mathew, and S. Ostadabbas, “Infant action generative modeling,” in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Mar. 2025.
- Y. Luo, X. Bai, L. Jiang, A. Gupta, E. Mortin, H. Singh, and S. Ostadabbas, “Temporal-controlled frame swap for generating high-fidelity stereo driving data for autonomy analysis,” in British Machine Vision Conference (BMVC), Nov. 2023.
- D. Teotia, A. Lapedriza, and S. Ostadabbas, “Interpreting face inference models using hierarchical network dissection,” International Journal of Computer Vision (IJCV), 2022.
- S. Liu, X. Huang⋆, L. Marcenaro, and S. Ostadabbas, “Privacy-preserving in-bed human pose estimation: Highlights from the IEEE Video and Image Processing Cup 2021 student competition,” IEEE Signal Processing Magazine, 2022.
- A. Farnoosh and S. Ostadabbas, “Deep Markov Factor Analysis: Towards concurrent temporal and spatial analysis of fMRI data,” in Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS), 2021.
- A. Farnoosh, B. Azari, and S. Ostadabbas, “Deep Switching Auto-Regressive Factorization: Application to time series forecasting,” in Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), Feb. 2021.
- B. Rezaei, A. Farnoosh, and S. Ostadabbas, “G-LBM: Generative low-dimensional background model estimation from video sequences,” in 16th European Conference on Computer Vision (ECCV), Aug. 2020.
- S. Liu and S. Ostadabbas, “Seeing under the cover: A physics guided learning approach for in-bed pose estimation,” in 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Oct. 2019, Shenzhen, China.
- S. Liu and S. Ostadabbas, “Inner Space Preserving Generative Pose Machine,” in 15th European Conference on Computer Vision (ECCV), Sep. 2018, Munich, Germany.
Sep 04, 2025
Innovative AI Technology Used to Detect Neurodevelopmental Disorders
ECE Professor Sarah Ostadabbas has finalized a patent licensing agreement with Northeastern University for her spinout company AIWover, Inc.
Jul 21, 2025
The Value of Collaboration and Time in the World of Research
Bishoy Galoaa, MS’25, electrical and computer engineering, has participated in multiple impactful research projects throughout his time at Northeastern. These projects have taught him valuable skills and the importance of time and collaboration when conducting research. Galoaa has decided to continue his education and pursue a PhD in computer engineering to dive deeper into the world of research.
May 27, 2025
From Financial Analyst to AI Researcher as an MS Student
Bishoy Galoaa, MS’25, electrical and computer engineering, is pursuing his passion for making a meaningful impact with artificial intelligence through academic research at Northeastern, machine learning at Massachusetts General Hospital, and now as a PhD student.
Apr 30, 2025
Patent for 3D Human Pose Estimation System
ECE Associate Professor Sarah Ostadabbas was awarded a patent for “3D human pose estimation system.”
Feb 19, 2025
Patent for Contactless In-Bed Pressure Estimation
ECE Associate Professor Sarah Ostadabbas was awarded a patent for “Method and system for in-bed contact pressure estimation via contactless imaging.”
Jan 30, 2025
Faculty and Staff Awards 2025
Faculty and staff were recognized at the 27th Annual College of Engineering Faculty and Staff Awards for their exceptional service and dedication in support of students, the COE community, and the university during the 2024-2025 academic year.
Oct 01, 2024
Patent for Color-Sensing Technology
ChE Affiliated Faculty Swastik Kar and ECE Associate Professor Sarah Ostadabbas were awarded a patent for “Device and method for color identification.”
Oct 01, 2024
Ostadabbas Wins 2024 Cade Prize for Inventivity
ECE Associate Professor Sarah Ostadabbas received the 2024 Cade Prize for Inventivity in the technology category for AiWover, a groundbreaking spin-off from her lab that uses AI to transform visual monitoring of babies and toddlers, enhancing both safety and developmental tracking.
Aug 29, 2024
Creating Age-Inclusive VR
ECE Associate Professor Sarah Ostadabbas, in collaboration with the University of Rhode Island, was awarded a $600,000 NSF grant for “Graph-Centric Exploration of Nonlinear Neural Dynamics in Visuospatial-Motor Functions During Immersive Human-Computer Interactions.” She is investigating how aging impacts the ability to use emerging HCI technologies.
Jul 08, 2024
Using AI To Save Lives on the Battlefield
Liam McEneaney, MS’25, engineering and public policy, is working with ECE Associate Professor Sarah Ostadabbas and MIE Teaching Professor Beverly Kris Jaeger-Helton, in collaboration with MIT Lincoln Lab, to develop an AI-powered computer program that autonomously fills out tactical combat casualty care cards for injured soldiers on the battlefield by processing video and audio from medics in real time and quickly sending the digital card to hospital staff.