Using AI To Save Lives on the Battlefield
Liam McEneaney, MS’25, engineering and public policy, is working with ECE Associate Professor Sarah Ostadabbas and MIE Teaching Professor Beverly Kris Jaeger-Helton, in collaboration with MIT Lincoln Laboratory, to develop an AI-powered program that accurately fills out tactical combat casualty care cards for injured soldiers on the battlefield by processing video and audio from medics in real time, then quickly sends the digital card to hospital staff.
15 minutes to save a soldier: Northeastern researchers make tech to coordinate care on the battlefield
When an American soldier gets seriously injured on the battlefield, a carefully executed plan is put into motion. A field medic rushes over to administer emergency care—anything from applying a tourniquet to injecting the wounded soldier with morphine. The medic calls another soldier over, who rips a “tactical combat casualty care card” off the injured soldier’s uniform and records the treatment and medication the wounded soldier is receiving. If no one else is around, the medic has to fill out the card themselves.
The TCCC card is then passed up the ranks so that medical staff at the nearest hospital can prepare for the handoff. It tells them if they need to send a Black Hawk medevac helicopter and what drugs are already in the patient’s system.
One study published in the Journal of Trauma found that when the broader battlefield care protocol that utilizes TCCC cards is executed properly, preventable deaths are eight times less likely to happen.
But the protocol sometimes breaks down; medics and soldiers often must fill out TCCC cards in moments of high pressure and urgency. Senior combat medics interviewed by researchers estimated that only 10% to 15% of TCCC cards reach the hands of hospital staff, and one study found that soldiers had completed TCCC cards for only 7% of the 363 battlefield deaths it looked at.
“There is a golden hour between life and death,” said renowned military surgeon R Adams Cowley in an interview. “If you are critically injured, you have less than 60 minutes to survive.”
But Liam McEneaney, a Northeastern graduate student and former military medic who served with the Marines, says it’s more extreme than that. “It’s not true, the golden hour … You have about 15 minutes.”
The amount of time it takes to fill out and hand off a TCCC card? Three of those 15.
McEneaney is working on a project with Northeastern professors—Sarah Ostadabbas, an electrical and computer engineering professor, and Kris Jaeger-Helton, the director of undergraduate industrial engineering—aimed at streamlining the TCCC card process. Together with MIT Lincoln Laboratory, a research institute funded by the Department of Defense, the team hopes their technology can take the burden of documentation off medics and soldiers, and ultimately, save lives.
They’ve used AI to develop a computer program that can fill out the cards on its own by processing video and audio from medics in real time. Once complete, it can quickly send the digital card to hospital staff. The researchers hope it will reduce the time it takes to fill out the cards, increase the number of cards that get completed, and improve the accuracy of the information on them. Lincoln Lab aims to find external and internal funding to bring the technology not just to the battlefield, but to EMTs as well.
The project was born out of a partnership between Northeastern’s Gordon Institute, which places Northeastern engineering grad students in leadership positions for projects at industry partners, and Lincoln Lab’s SOCOM Ignite program, an innovation pipeline where military students help turn a concept into a marketable product. The project has become one of Lincoln Lab’s greatest success stories for the program, says Marc Vaillant, the lab’s advisor on the project.
***
Trevor Powers, a former Northeastern graduate student and McEneaney’s predecessor on the project, developed the AI model that recognizes which treatment is being administered from headcam video. He partnered with Ostadabbas, whose research focuses on computer vision. Their major challenge was to make the model as close to 100% accurate as possible with very little data available for training.
“Ten years ago, people were talking about big data because the world was filled with data. Smartphones, smartwatches, wireless sensor networks: They are constantly collecting data,” says Ostadabbas. “Now you have these complex, advanced models, but they are very, very data hungry.”
Ostadabbas calls this data collection paradigm the “big data domain.” But now she’s focusing on the opposite: small data. She develops techniques that allow her AI models to work in situations where very little data is available. It’s led her to projects where data is limited—whether due to legal and privacy challenges or logistical hurdles in collecting and processing it.
She’s developed models that can pick up on visual cues from lab rats to measure their behavior more objectively, watch infants on baby cams to identify potential neurological diseases earlier than traditional diagnostic methods can, and now, fill out TCCC cards on the battlefield using video and audio.
Ostadabbas and Powers put small data techniques into action to create their AI. Instead of looking for real footage from the battlefield, they looked into adjacent domains, such as reenactments, and instead of only trying to obtain data, they created it. The team ended up making their own videos of researchers reenacting common battlefield procedures in the lab.
Then they studied how they could make the model more efficient—how to get equally accurate results with less training data. They focused on four scenarios: tourniquet application, pressure dressing, hemostatic dressing and chest seal placement. Instead of having the model open-endedly identify what’s happening in a video, they built one algorithm that identifies the type of treatment and another that estimates the patient’s pose. The model then combines the two results to determine what the treatment is and where it’s being administered.
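As a rough illustration of that two-branch design—a sketch only, not the team’s actual code, with all function and label names invented for the example—the fusion step might look like this in Python:

```python
import numpy as np

# Hypothetical labels for the four treatments named in the article.
TREATMENTS = ["tourniquet", "pressure_dressing", "hemostatic_dressing", "chest_seal"]
# Hypothetical body regions a pose estimator might localize a treatment to.
BODY_REGIONS = ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]

def classify_treatment(clip: np.ndarray) -> np.ndarray:
    """Stand-in for the treatment-recognition branch: one score per treatment.
    A real system would run a trained video classifier on the headcam clip."""
    scores = np.random.rand(len(TREATMENTS))  # dummy scores for illustration
    return scores / scores.sum()

def localize_on_body(clip: np.ndarray) -> np.ndarray:
    """Stand-in for the pose-estimation branch: one score per body region."""
    scores = np.random.rand(len(BODY_REGIONS))  # dummy scores for illustration
    return scores / scores.sum()

def fuse(clip: np.ndarray) -> tuple[str, str]:
    """Combine both branches: what the treatment is and where it is applied."""
    treatment = TREATMENTS[int(np.argmax(classify_treatment(clip)))]
    region = BODY_REGIONS[int(np.argmax(localize_on_body(clip)))]
    return treatment, region

clip = np.zeros((16, 224, 224, 3))  # placeholder: 16 RGB headcam frames
print(fuse(clip))
```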
At the end of Powers’ year-long fellowship in 2023, he and Ostadabbas published a paper on the model. Their preliminary model’s accuracy: 100%.
Powers headed off to Fort Gordon in Augusta, Georgia, to begin his military career in cybersecurity, and his Lincoln Lab advisor, Marc Vaillant, headed to Fort Liberty in North Carolina with two students to demo the technology to military medics. They gave a quick presentation to the soldiers, then headed out into the field to test the prototype. They hooked up the helmet camera on one of the medics and attached the black computer brick—the brains of the system—and battery pack. The medic practiced putting a tourniquet on a patient.
“It was a cobbled-together system in some sense,” says Vaillant. “But it was pretty successful. Anytime you do a demo, it unveils some of the difficulties with the system.”
From the feedback they received at Fort Liberty, Vaillant and the Lincoln Lab team decided their next steps should focus on the speech recognition component of the prototype and designing a sleeker and easier-to-use interface for the medics. They wanted to swap out the camera, computer brick, battery pack and wires for something simpler: an Android phone.
***
In late February, graduate researcher McEneaney pulled out his brand-new Samsung Galaxy S24 Ultra (a major upgrade from his five-year-old S10+) and launched an app. He tapped a big pink record button and started talking while a live transcript populated the screen. He pressed another pink button, and the patient’s record and digital TCCC card popped up. The spoken information neatly populated the card. The diagram of the body had a mark on the area of treatment, with a list of medications next to it.
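As a deliberately simplified illustration of that last step—turning a spoken transcript into structured card fields—a sketch might look like the Python below; the field names, keyword list and pattern are hypothetical and not the app’s actual logic:

```python
import re
from dataclasses import dataclass, field

@dataclass
class TCCCCard:
    """Hypothetical, stripped-down digital TCCC card with two fields."""
    treatments: list = field(default_factory=list)
    medications: list = field(default_factory=list)

# Toy keyword list and dosage pattern; a real system would rely on the speech
# model's output and a much richer medical vocabulary.
TREATMENT_TERMS = ["tourniquet", "pressure dressing", "chest seal"]
MEDICATION_PATTERN = re.compile(r"(\d+\s*mg)\s+of\s+(\w+)", re.IGNORECASE)

def populate_card(transcript: str) -> TCCCCard:
    """Pull recognizable treatments and medications out of a transcript."""
    card = TCCCCard()
    lowered = transcript.lower()
    card.treatments = [t for t in TREATMENT_TERMS if t in lowered]
    card.medications = [f"{dose} {drug}"
                        for dose, drug in MEDICATION_PATTERN.findall(transcript)]
    return card

print(populate_card("Applied tourniquet to the right leg, gave 10 mg of morphine."))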
McEneaney worked with Jaeger-Helton through the Gordon Institute to create the prototype. Jaeger-Helton specializes in engineering how people interact with machines.
“Human systems integration is where I come in as a technical advisor,” says Jaeger-Helton. “When we’re dealing with humans in the loop, humans are human beings … They’ll be imperfect, or they’ll be affected by factors in the environment.”
Jaeger-Helton thinks not only about how to make humans’ interaction with systems as smooth as possible, but how the systems can recover when something inevitably goes wrong. In her PhD work, she helped build a simulator for training soldiers to work in hazardous environments. Since then, she’s studied how humans interact with cars and highway infrastructure and conducted mechanical testing and human performance research at Reebok. Now, she’s developing new parachute inspection procedures for the Army with a group of Northeastern undergraduate students.
“She’s a perfect match for my project,” says McEneaney. “Everything about it is solving a human factor problem and reducing cognitive load, so I couldn’t have a better faculty advisor.”
There are plenty of complicating factors on the battlefield. Jaeger-Helton thinks about what could disrupt the system: What if a medic starts talking about a different patient or problem when recording? What if the medic misremembers the exact protocol? What if the background noise becomes too significant to understand what the medic is saying? What if there’s a connectivity issue that prevents the information from being sent to the hospital immediately?
To holistically understand the environment in which the human-machine system has to operate, “We always think about three elements: the human, the activity and the context,” says Jaeger-Helton.
It’s led them to create a reference guide for the medics so they don’t have to rely solely on their memory in such a high-stress environment. They also plan to implement a step where the program reads the TCCC card back to the medic and asks if any corrections need to be made before the information is sent off.
***
The team is riding the wave of the lightning-fast development of reliable, openly available AI models and the hardware to support them. In September 2022, OpenAI released Whisper, a general-purpose speech recognition model trained on hundreds of thousands of hours of audio. McEneaney decided to try it out for their prototype.
“It’s the bleeding edge,” he says. “It blows everything else out of the water — and I haven’t even fine-tuned it yet.”
Before Whisper, individual research teams had to collect their own data and train their own speech recognition models. Now, OpenAI’s model—available to the public for free—has done the heavy lifting. Research teams can take the model and fine-tune it for their specific application. It’s like teaching a college student a new subject instead of starting with an infant and working all the way up to college-level calculus.
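For a sense of what taking the model off the shelf looks like in practice, here is a minimal sketch using the open-source openai-whisper Python package; the audio file name is a placeholder, and this is not the team’s actual pipeline:

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")             # small general-purpose checkpoint
result = model.transcribe("medic_report.wav")  # placeholder file; returns a dict
print(result["text"])                          # the raw transcript
```

Fine-tuning on domain-specific medical vocabulary—which McEneaney notes he has not yet done—would be a separate training step layered on top of this pretrained checkpoint.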
Companies have also been hard at work developing new hardware that can support and run the new AI models. In January, with the launch of the S24, Samsung announced Galaxy AI, a suite of AI features that allows users to run language models directly on their Galaxy phones. Instead of connecting to the internet and messaging a model like ChatGPT that’s stored on a hefty computer in Silicon Valley, users can run it on the computer in the palm of their hand.
“The fact that it runs on my phone, offline, only became possible in the last year,” says McEneaney.
The team hopes that, someday, they can make the tech accessible enough to find a life beyond the battlefield. EMTs face many of the same challenges as military medics: working in a dynamic, time-sensitive emergency while having to communicate essential medical information before care is transferred.
This technology “is the perfect way to optimize the process,” says McEneaney. Its uses extend beyond the U.S. military, to “EMTs, the war in Ukraine, and our NATO allies.”
by Noah Haggerty, Northeastern University Research