Team Led by Sznaier Awarded $7.5M by DoD for Control and Learning Enabled Verifiably Robust AI
Mario Sznaier, Dennis Picard Trustee Professor, electrical and computer engineering (ECE), is leading a multi-university team that was awarded a $7.5 million, five-year grant from the Department of Defense (DoD).
This award comes as part of the DoD's annual Multidisciplinary University Research Initiative (MURI) competition, which funds university teams to tackle selected research topics. This year, the MURI competition awarded $179 million to 25 teams across 26 topic areas of importance to the Army Research Office, the Air Force Office of Scientific Research, and the Office of Naval Research.
Sznaier’s project, titled “Control and Learning Enabled Verifiable Robust AI (CLEVR-AI),” is sponsored by the Office of Naval Research and includes co-PIs from Northeastern—Professor Octavia Camps and Assistant Professor Milad Siami, both from ECE, and University Distinguished Professor Eduardo Sontag, ECE and bioengineering—and from UC Berkeley, the University of Michigan, and Johns Hopkins.
“We are honored to be selected for this highly competitive funding opportunity,” said Sznaier.
Sznaier and his team are designing control systems capable of utilizing artificial intelligence (AI) and machine learning methods to learn from and interact within complex environments in a safe way. Like living systems, the resulting systems will adapt to novel scenarios, where data is generated—and decisions are made—in real time.
The research will lead to a new neurally inspired framework for learning and control, where insights from dynamical systems are used to design verifiable and safe machine learning algorithms, and insights from machine learning and neuroscience are used to design the next generation of learning-enabled control systems. This framework will be a key enabler for designing a new class of truly autonomous systems that are aware of high-level mission specifications and low-level physical constraints and capabilities.
This capability will benefit a variety of mission-critical applications, such as providing situational awareness to first responders facing information overload in a disaster area and monitoring large uninhabited areas, such as coastlines and forests, for potentially hazardous situations.
“There is a disconnect between the promise of AI and the low level of autonomy in existing systems,” explained Sznaier. “The issue we are working to solve is how to make an autonomous system learn about new situations in its environment while still making safe decisions in real time. This is something that researchers have struggled with; it’s hard to task something that is self-learning to perform safety-critical actions in unknown, previously unseen environments.”
Self-learning autonomous systems need to operate in a closed-loop system, meaning they are making continuous corrections to their actions and learning as they go. The problem is that if a system acts on incorrect information, the errors in those continuous corrections compound. In extreme examples, the self-driving car hits a stop sign, or the self-driving drone crashes.
Sznaier likened it to catching a ball with your eyes open (closed loop, with feedback) or closed (open loop, without it).
“If you have your eyes open and can make small corrections as the ball is coming toward your hand, then you’re likely to succeed,” he said. “With your eyes closed, you’re making guesses with no real information to guide you, and that’s where mistakes can happen.”
Sznaier’s team brings together deep expertise across diverse areas, including control theory, machine learning, computer vision, and bioengineering. This academic expertise is enhanced by university capabilities, such as Northeastern’s flight facility for unmanned autonomous systems and the University of Michigan’s Mcity Test Facility, a purpose-built proving ground for testing automated vehicles under controlled and realistic conditions.