Enhancing Security for Brain-Inspired Computing

Xiaolin Xu

ECE Assistant Professor Xiaolin Xu, in collaboration with Shaolei Ren from the University of California, Riverside, was awarded a $600,000 NSF grant for “Securing Brain-inspired Hyperdimensional Computing against Design-time and Run-time Attacks for Edge Devices.”

Abstract Source: NSF

Many computing applications depend on machine learning (ML) algorithms that analyze patterns in data and make predictions about new data they encounter. Many recent advances in these machine learning classifiers use approaches based on neural networks; however, neural networks often require large amounts of data, memory, and processing power. Brain-inspired hyperdimensional computing (HDC) has emerged in recent years as a less resource-heavy approach to building classifiers that are well suited to smaller computing devices with less computing power. Yet, just like other ML classifier architectures, HDC models may be threatened by attackers who want to degrade the models’ performance, insert backdoor “triggers” that let attackers take control of devices by presenting secret inputs, or steal the models themselves. These security risks in HDC models are not as well studied as HDC performance. This project’s goal is to close that gap through a better understanding of HDC security vulnerabilities and defenses. This includes analyzing the space of possible attacks on HDC models, drawing parallels between attacks and defenses in neural networks and those in HDC models, and developing defenses that are as effective, efficient, and lightweight as the HDC models themselves, so that they too can be deployed on devices with limited computing power.
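To make the resource-efficiency argument concrete, the following is a minimal, hypothetical sketch of an HDC classifier (not code from this project): each feature position is assigned a random bipolar hypervector, inputs are encoded by superposing feature-weighted hypervectors, each class is the bundled sum of its training encodings, and inference picks the class hypervector most similar to the query encoding.

```python
import numpy as np

# Minimal HDC classifier sketch (illustrative; names and setup are
# assumptions, not the project's actual design).

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality
N_FEATURES = 4

# Random bipolar (+1/-1) base hypervectors, one per feature position.
base = rng.choice([-1, 1], size=(N_FEATURES, D))

def encode(x):
    """Encode a feature vector as a D-dimensional bipolar hypervector."""
    return np.sign(x @ base)  # weighted superposition, then binarize

def train(samples, labels):
    """Bundle (sum) per-class encodings into one hypervector per class."""
    classes = {}
    for x, y in zip(samples, labels):
        classes[y] = classes.get(y, np.zeros(D)) + encode(x)
    return {y: np.sign(v) for y, v in classes.items()}

def classify(model, x):
    """Return the class whose hypervector best matches encode(x)."""
    h = encode(x)
    return max(model, key=lambda y: np.dot(model[y], h))

# Toy data: two well-separated clusters.
X0 = rng.normal(loc=-1.0, size=(20, N_FEATURES))
X1 = rng.normal(loc=+1.0, size=(20, N_FEATURES))
samples = np.vstack([X0, X1])
labels = [0] * 20 + [1] * 20
model = train(samples, labels)
```

Note that training and inference here are just additions, sign operations, and dot products over integer-valued vectors, which is why HDC is attractive for edge devices compared with gradient-trained neural networks.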

This project paves the way for HDC-based inference on edge devices by systematically investigating the attack surface of HDC, from design time to run time and from algorithm to hardware. First, it explores the vulnerabilities associated with HDC and systematically defines its unique attack surface. Accordingly, it investigates critical threats against HDC model performance and privacy from adversarial input, model perturbation, and reverse engineering. Second, it explores effective and efficient defense strategies by incorporating algorithmic-, hardware-, and system-level methods. A key insight and tool in the proposed work is a set of methods for relating neural network-based models and HDC models; this will enable comparative studies and open possibilities for adapting existing attacks and defenses on neural network-based architectures to HDC contexts. The scientific outcomes will help reshape HDC-enabled computing systems toward greater security and robustness. The project also includes a significant educational component and provides abundant opportunities to nurture and attract students from under-represented groups to engage in computer science and computer science research.
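As a rough illustration of why run-time model perturbation is a credible threat (a hypothetical sketch, not an attack developed by this project): because a bipolar class hypervector is compared to queries by similarity, flipping even a modest fraction of its bits, for example via injected hardware faults, lowers its similarity to a matching query roughly in proportion to the flip rate.

```python
import numpy as np

# Illustrative model-perturbation sketch: bit flips in a stored
# bipolar class hypervector degrade its cosine similarity to a
# matching query. The scenario and numbers are assumptions.

rng = np.random.default_rng(1)
D = 10_000
class_hv = rng.choice([-1, 1], size=D)   # a stored class hypervector
query_hv = class_hv.copy()               # a perfectly matching query

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for flip_rate in (0.0, 0.1, 0.3):
    corrupted = class_hv.copy()
    idx = rng.choice(D, size=int(flip_rate * D), replace=False)
    corrupted[idx] *= -1                 # e.g. fault-injected bit flips
    print(flip_rate, round(cosine(corrupted, query_hv), 2))
```

For bipolar vectors the drop is exact: flipping a fraction p of the bits reduces the cosine similarity from 1 to 1 - 2p, so a 30% flip rate leaves a similarity of only 0.4, which is why lightweight integrity defenses for the stored model are part of the attack surface worth studying.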

Related Faculty: Xiaolin Xu

Related Departments: Electrical & Computer Engineering