
Siyue Wang’s PhD Dissertation Defense

July 21, 2022 @ 2:30 pm - 3:30 pm

“Towards Robust and Secure Deep Learning Models and Beyond”

Abstract:

Modern science and technology have witnessed the breakthroughs of deep learning over the past decades. Fueled by rapid improvements in computational resources, learning algorithms, and massive amounts of data, deep neural networks (DNNs) play a dominant role in many real-world applications. Nonetheless, this remarkable success is tempered by recent studies that reveal limitations of DNNs and raise safety and reliability concerns about their widespread use: 1) the robustness of DNN models under adversarial attacks and under the instability problems of edge devices, and 2) the protection and verification of the intellectual property of well-trained DNN models.

In this dissertation, we first investigate how to build robust DNNs under adversarial attacks, where deliberately crafted small perturbations added to clean inputs can lead to wrong predictions with high confidence. We approach the solution by incorporating stochasticity into DNN models, and propose multiple schemes to harden DNN models against adversarial threats, including Defensive Dropout (DD), Hierarchical Random Switching (HRS), and Adversarially Trained Model Switching (AdvMS). In addition, we propose a stochastic fault-tolerant training scheme that generally improves the robustness of DNNs facing instability problems on DNN accelerators, without relying on optimizations for individual devices.

The second part of this dissertation focuses on how to effectively protect the intellectual property of DNNs and reliably identify their ownership. We propose Characteristic Examples (C-examples) for effectively fingerprinting DNN models, featuring high robustness to the well-trained DNN and its derived versions (e.g., pruned models) as well as low transferability to unassociated models. To better perform functionality verification of DNNs implemented on edge devices for on-device inference applications, we also propose Intrinsic Examples, which serve as fingerprints of a DNN and can detect adversarial third-party attacks that embed misbehaviors through re-training. The generation process of our fingerprints does not interfere with the training phase, and no additional data are required from the training/testing set.
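To make the adversarial-perturbation threat concrete, the following is a minimal sketch of a generic gradient-sign (FGSM-style) perturbation against a toy linear classifier. This is only an illustration of the attack model the abstract describes, not the specific attacks or defenses studied in the dissertation; the weights, input, and epsilon are made-up values.

```python
import numpy as np

# Toy linear "model": predict 1 if w.x + b > 0, else 0.
# All values here are illustrative, not from the dissertation.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x_clean = np.array([0.3, 0.1, 0.2])
eps = 0.25  # maximum per-dimension perturbation budget

# Gradient-sign step: for a linear model, moving each input dimension
# by -eps * sign(w) most quickly decreases the decision score, so a
# small, bounded change to the input can flip the prediction.
x_adv = x_clean - eps * np.sign(w)

print(predict(x_clean))  # clean input is classified 1
print(predict(x_adv))    # perturbed input is classified 0
```

Each coordinate of `x_adv` differs from the clean input by at most `eps`, yet the prediction flips; DNNs exhibit the same vulnerability at far smaller perturbation magnitudes, which motivates the stochastic defenses (DD, HRS, AdvMS) proposed in the dissertation.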

Committee:

Prof. Xue Lin (Advisor)
Prof. Yunsi Fei
Prof. Yanzhi Wang

Details

Date:
July 21, 2022
Time:
2:30 pm - 3:30 pm
Website:
https://northeastern.zoom.us/j/3303330369

Other

Department
Electrical and Computer Engineering
Topics
MS/PhD Thesis Defense
Audience
Faculty, Staff