Ruyi Ding PhD Proposal Review

July 29, 2024 @ 9:00 am - 10:30 am

Name:
Ruyi Ding

Title:
Towards Robust and Secure Deep Learning: From Training through Deployment to Inference

Date:
July 29, 2024

Time:
9:00 AM - 10:30 AM

Committee Members:
Prof. Yunsi Fei (Advisor)
Prof. Aidong Ding
Prof. Lili Su

Abstract:
In recent years, deep learning has experienced rapid advancement, leading to the development of numerous commercial deep neural network (DNN) models across diverse fields such as autonomous driving, healthcare, and recommendation systems. However, this wide adoption has intensified concerns about AI security throughout a neural network's lifecycle, from training through deployment to inference. Various vulnerabilities have emerged that threaten confidentiality, privacy, and intellectual property (IP) rights: poisoned training datasets facilitate privacy leakage and backdoor injection; after deployment, models may be misused through unauthorized transfer learning, a new form of IP infringement, and their weights and parameters are subject to side-channel-assisted model extraction attacks; during inference, adversarial attacks may compromise DNN functionality, causing misclassifications.
This dissertation addresses new security challenges across the neural network lifecycle through several novel contributions. We identify a new poisoning vulnerability in graph neural networks, where injecting poisoned nodes exacerbates link privacy leakage, allowing attackers to steal adjacency information from private training data and highlighting the necessity of robust AI training. To prevent model misuse after deployment, we introduce EncoderLock and Non-transferable Pruning, employing innovative training schemes and pruning methods to restrict the malicious use of pre-trained models through transfer learning, effectively implementing applicability authorization. Towards secure deep learning implementations, we adopt a software-hardware co-design approach to address DNN vulnerabilities. Specifically, we leverage the electromagnetic emanations from DNN accelerators in a new approach called EMShepherd, which detects adversarial examples (AEs) on edge devices in a "black-box" manner. To protect deployed DNNs against side-channel-based weight-stealing attacks, we develop PixelMask, which leverages the characteristics of DNNs for side-channel defense by masking out unimportant inputs and dropping the related operations to obfuscate side-channel signals. Lastly, we explore the use of Trusted Execution Environments (TEEs) to safeguard model weights and data privacy against model stealing and membership inference attacks.
This proposal identifies key challenges in robust and secure deep learning, tackles vulnerabilities at various stages of the AI lifecycle, and provides comprehensive protection mechanisms, from securing the training process to safeguarding deployed models, paving the way for more resilient and reliable AI technologies in real-world applications.
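
The abstract describes PixelMask only at a high level. As a rough aid to intuition, the toy sketch below (written for this announcement, not taken from the proposal) illustrates the general idea of masking low-importance inputs and skipping the corresponding multiply-accumulate operations, so that data-dependent computation activity is harder to correlate with the protected weights. The layer shape, the magnitude-based importance ranking, and the keep_ratio parameter are all illustrative assumptions, not the actual PixelMask design.

    import numpy as np

    def masked_dense_layer(x, W, b, keep_ratio=0.5):
        """Toy illustration of input masking as a side-channel countermeasure.

        Inputs with the smallest magnitudes are treated as 'unimportant',
        zeroed out, and their multiply-accumulate work is skipped entirely,
        so the operation pattern no longer tracks the full input.
        Conceptual sketch only; not the PixelMask implementation.
        """
        # Rank inputs by magnitude and keep only the top fraction.
        k = max(1, int(keep_ratio * x.size))
        keep_idx = np.argsort(np.abs(x))[-k:]

        # Accumulate contributions only for the kept inputs;
        # masked inputs contribute no operations at all.
        out = np.array(b, dtype=float)
        for i in keep_idx:
            out += x[i] * W[i]          # hypothetical skipped-MAC loop
        return np.maximum(out, 0.0)     # ReLU activation

    # Usage example with random placeholder data.
    rng = np.random.default_rng(0)
    x = rng.normal(size=64)             # toy input vector
    W = rng.normal(size=(64, 16))       # toy weight matrix
    b = np.zeros(16)
    y = masked_dense_layer(x, W, b, keep_ratio=0.25)
    print(y.shape)                      # (16,)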

Details

Department:
Electrical and Computer Engineering

Topics:
MS/PhD Thesis Defense

Audience:
PhD