$1.2M NSF Grant to Secure Deep Neural Networks

ECE Professor Yunsi Fei, Assistant Professor Xue (Shelley) Lin, and Khoury Associate Professor Thomas Wahl were awarded a $1.2M NSF grant for “Protecting Confidentiality and Integrity of Deep Neural Networks against Side-Channel and Fault Attacks”.


Abstract Source: NSF

Deep learning (DL) has become a foundational means for solving diverse problems, ranging from computer vision, natural language processing, and digital surveillance to finance and healthcare. The security of deep neural network (DNN) inference engines and trained DNN models on various platforms has become one of the biggest challenges in deploying artificial intelligence. Confidentiality breaches of a DNN model can facilitate manipulation of the DNN inference, with potentially devastating consequences. This project aims to promote broader applications of DNNs in security-critical scenarios by ensuring secure execution of DNN inference engines against side-channel and fault injection attacks.

The project is composed of three salient and interdependent thrusts. SpyNet will study the vulnerability of DNNs implemented on mainstream platforms to model reverse engineering via passive side-channel attacks. DisruptNet will investigate the feasibility of active fault injection attacks that disrupt execution of DNN inference engines, and SecureNet will identify protection, detection, and hardening mechanisms for their secure execution. This project may deepen the understanding of the inherent information leakage and fault tolerance of DNN models.
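To make the DisruptNet idea concrete, the following minimal sketch (not drawn from the project itself) simulates in software what a hardware fault might do to a model: a single bit is flipped in one weight of a toy two-layer network, and the inference output shifts as a result. The network, weights, fault location, and bit position are all illustrative assumptions.

```python
# Minimal sketch: simulating a single-bit fault in a DNN weight (DisruptNet-style
# scenario). All model details here are toy assumptions, not the project's code.
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network with random weights standing in for a trained model.
W1 = rng.standard_normal((16, 8)).astype(np.float32)
W2 = rng.standard_normal((8, 4)).astype(np.float32)

def infer(x, w1, w2):
    """Forward pass: ReLU hidden layer followed by a linear output layer."""
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def flip_bit(weights, index, bit):
    """Return a copy of `weights` with one bit flipped in the chosen element,
    emulating a memory fault (e.g., a Rowhammer-style bit flip)."""
    faulty = weights.copy()
    raw = faulty.reshape(-1).view(np.uint32)  # reinterpret float32 bits
    raw[index] ^= np.uint32(1 << bit)
    return faulty

x = rng.standard_normal((1, 16)).astype(np.float32)
clean = infer(x, W1, W2)

# Flip bit 30 (part of the float32 exponent) of one first-layer weight.
W1_faulty = flip_bit(W1, index=3, bit=30)
faulty = infer(x, W1_faulty, W2)

print("clean output :", clean)
print("faulty output:", faulty)
print("max deviation:", np.abs(clean - faulty).max())
```

Because the flipped bit sits in the exponent field, a single fault can change a weight by orders of magnitude and noticeably corrupt the inference result, which is the kind of integrity violation the SecureNet thrust aims to detect and harden against.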

The unprecedented rise of DL technology in diverse application domains has made secure execution, primarily confidentiality and integrity, a top priority. This project significantly advances the state of the art in DL implementations, computer architecture and heterogeneous systems, hardware security, and formal methods/verification. Research results and insights on secure DNN design techniques will be incorporated into courses developed by the researchers. The interdisciplinary research will provide unique training and opportunities for graduate and undergraduate students, as well as for industry partners, through a newly established Industry-University Collaborative Research Center. The project will leverage the Experiential Education model of Northeastern University to engage undergraduates, women, and minority students in independent research projects.

The attack library, metrics, methodologies, and software tools will all be made publicly available on a dedicated project website (https://tescase.coe.neu.edu), and the protected and hardened DL models will be released on GitHub to facilitate community use. The repository will be maintained during and beyond the project.

Related Faculty: Yunsi Fei, Xue "Shelley" Lin

Related Departments: Electrical & Computer Engineering