BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231204T103000
DTEND;TZID=America/New_York:20231204T113000
DTSTAMP:20260511T204729Z
CREATED:20231127T163905Z
LAST-MODIFIED:20231127T163905Z
UID:40523-1701685800-1701689400@coe.northeastern.edu
SUMMARY:Cheng Gongye PhD Dissertation Defense
DESCRIPTION:Title:\nHardware Security Vulnerabilities in Deep Neural Networks and Mitigations \nDate:\n12/4/2023 \nTime:\n10:30:00 AM \nCommittee Members:\nProf. Yunsi Fei (Advisor)\nProf. Aidong Ding\nProf. Xue Lin\nProf. Xiaolin Xu \nAbstract:\nIn the past decade\, Deep Neural Networks (DNNs) have become pivotal in numerous fields\, including security-sensitive autonomous driving and privacy-critical medical diagnosis. This Ph.D. dissertation delves into the hardware security of DNNs\, discovering their vulnerabilities to fault and side-channel attacks and exploring novel countermeasures essential for their safe deployment in critical applications. \nFault attacks disrupt computation or inject faults into parameters\, compromising the integrity of targeted applications. This dissertation demonstrates a power-glitching fault injection attack on FPGA-based DNN accelerators\, common in cloud environments\, which exploits vulnerabilities in the shared power distribution network and results in model misclassification. In response to these threats\, we introduce a novel\, lightweight defense mechanism to protect DNN parameters from adversarial bit-flip attacks. The proposed framework incorporates a dynamic channel-shuffling obfuscation scheme coupled with a logits-based model integrity monitor. The approach effectively safeguards various DNN models against bit-flip attacks\, without necessitating retraining or structural changes to the models. Furthermore\, our research expands the scope of fault analysis beyond just the parameters of DNN models. We thoroughly examine the entire implementation of commercial products\, defying the prevailing assumption that quantized DNNs are inherently resistant to bit-flips. \nSide-channel attacks exploit information leakage of system implementations\, such as power consumption and electromagnetic emanations\, to reveal system secrets and therefore compromise confidentiality. This dissertation makes significant contributions to side-channel-assisted model extraction of DNNs. We present a floating-point timing side-channel attack on x86 CPUs that reverse-engineers DNN model parameters in software implementations. For hardware accelerators\, we target the state-of-the-art AMD-Xilinx deep-learning processor unit (DPU)\, a reconfigurable engine dedicated to convolutional neural networks (CNNs) and representing the most complex commercial FPGA accelerator with encrypted IPs. Our work demonstrates that electromagnetic analysis can be leveraged to recover the data flow and scheduling of the DNN accelerators\, facilitating follow-on architecture and parameter extraction attacks. To mitigate EM side-channel model extraction attacks\, we introduce a novel defense mechanism that devises a random importance-aware activation mask on input pixels to disrupt the operation alignment on EM traces\, with minimal performance and efficiency impacts. \nOverall\, this dissertation significantly deepens the understanding of hardware security of DNN models. It makes important contributions in discovering novel and critical vulnerabilities of DNN inference pertaining to system implementations\, and proposing effective and practical solutions for securing DNNs in mission-critical environments. The research work marks a substantial step forward in the development of resilient and secure AI systems.
URL:https://coe.northeastern.edu/event/cheng-gongye-phd-dissertation-defense/
END:VEVENT
END:VCALENDAR