New Biomimicry Technology Can Increase Safety in Autonomous Vehicles
A Waymo robotaxi waiting for a pedestrian to pass. AP Photo/Matt York
New research from Electrical and Computer Engineering Professor Ravinder Dahiya has the potential to make self-driving cars safer. The technology mimics how the human retina analyzes images, using synaptic transistors, devices that simulate neural pathways, to significantly reduce processing time for machines like autonomous vehicles.
This article originally appeared on Northeastern Global News. It was written by Cesareo Contreras.
This brain-inspired hardware could one day make autonomous vehicles safer
Hop inside a robotaxi cruising the streets of San Francisco or Phoenix, and you may come away thinking these driverless cars operate on magic.
Not quite fairy dust, but they are backed by a series of sophisticated technologies, including machine-learning software, high-definition cameras and state-of-the-art sensors. Even so, these vehicles are not foolproof; they can have problems with how they perceive other objects or people on the road.
Developed in part by Northeastern University Electrical and Computer Engineering Professor Ravinder Dahiya, a new component inspired by biology and designed to improve these vehicles’ vision may make them even more responsive in the future.
In a newly published paper in Nature Communications, Dahiya and his co-authors outline new perception technologies that they designed and developed to mimic how a human retina analyzes visual changes.
The goal of the hardware is to reduce the delay between when an autonomous system sees an object and when it can analyze it and take action, a particularly important issue for driverless vehicles, which operate in close proximity to pedestrians, Dahiya said.
Researchers say they were able to tackle this challenge by taking advantage of synaptic transistors, electrical devices designed to simulate the neural pathways of the brain. The work builds on previous research Dahiya has conducted that involved using these “neuromorphic” or brain-like sensing technologies.

Ravinder Dahiya, a professor of electrical and computer engineering, helped develop the vision system component. Photo by Matthew Modoono/Northeastern University
“We call it neuromorphic behavior because that’s pretty much how processing takes place in our body,” Dahiya said.
The researchers’ system is designed to emulate a human’s ability to take in and make sense of visual information even as that information changes, such as when someone’s hair is blowing in the wind or, in the case of a robotaxi, when a pedestrian walks into the car’s path.
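To make the retina analogy concrete, the minimal sketch below illustrates the general idea behind change-driven (neuromorphic) vision: rather than re-encoding every full frame, the system reacts only to pixels whose brightness changes, which is what cuts the delay between seeing and acting. This is a conceptual illustration, not the synaptic-transistor hardware described in the paper; the frame representation and the threshold value are assumptions.

```python
# Conceptual sketch of retina-like, change-driven sensing.
# Assumptions: frames are grayscale NumPy arrays; the 0.1 threshold
# is illustrative, not a value from the paper.
import numpy as np

def retina_events(prev_frame: np.ndarray, frame: np.ndarray, threshold: float = 0.1):
    """Return (row, col, polarity) events for pixels whose brightness
    changed by more than `threshold`, mimicking how the retina responds
    to change instead of re-processing the whole scene each frame."""
    diff = frame.astype(np.float32) - prev_frame.astype(np.float32)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(np.int8)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# Toy usage: one bright spot appears between two otherwise static frames,
# so a single event is produced instead of a full-frame update.
prev = np.zeros((4, 4), dtype=np.float32)
curr = prev.copy()
curr[1, 2] = 1.0  # a pedestrian-like change enters the scene
print(retina_events(prev, curr))  # -> [(1, 2, 1)]
```

Because only changed pixels generate output, downstream processing scales with scene activity rather than frame size, which is the source of the latency savings the article describes.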
Read the full story at Northeastern Global News