ECE PhD Proposal Review: Siyue Wang
May 13, 2021 @ 2:00 pm - 3:00 pm
PhD Proposal Review: Towards Robust and Secure Deep Learning Models and Beyond
Location: Zoom Link
Abstract: Modern science and technology have witnessed the breakthroughs made by deep learning over the past decades. Fueled by rapid improvements in computational resources, learning algorithms, and massive amounts of data, deep neural networks (DNNs) have come to play a dominant role in more and more real-world applications. Nonetheless, this remarkable success is tinged with bitterness: recent studies reveal two main security threats that limit the widespread use of DNNs: 1) the vulnerability of DNN models to adversarial attacks, and 2) the difficulty of protecting and verifying the intellectual property of well-trained DNN models.
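The adversarial threat mentioned above can be made concrete with a minimal sketch (not taken from the proposal): the classic fast gradient sign method applied to a toy softmax classifier. All weights, inputs, and the epsilon value below are illustrative assumptions; real attacks operate on full DNNs.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy classifier: 3 classes, 4 input features
x = rng.normal(size=4)        # a "clean" input (assumed for illustration)
y = 0                         # its assumed true label

# Gradient of the cross-entropy loss with respect to the input x.
p = softmax(W @ x)
grad_x = W.T @ (p - np.eye(3)[y])

# Fast gradient sign method: a small signed step that increases the loss,
# producing a perturbation bounded by eps in the L-infinity norm.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

clean_pred = int(np.argmax(softmax(W @ x)))
adv_pred = int(np.argmax(softmax(W @ x_adv)))
```

The perturbation stays within an eps-ball around the clean input, yet on large networks such steps routinely flip the prediction with high confidence.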
In this dissertation, we first focus on the security problem of how to build robust DNNs under adversarial attacks, in which deliberately crafted small perturbations added to a clean input can lead to wrong prediction results with high confidence. We approach the solution by incorporating stochasticity into DNN models, proposing multiple schemes to harden DNN models against adversarial threats: Defensive Dropout (DD), Hierarchical Random Switching (HRS), and Adversarially Trained Model Switching (AdvMS). The second part of this dissertation focuses on how to effectively protect the intellectual property of DNNs and reliably identify their ownership. We propose Characteristic Examples (C-examples) for effectively fingerprinting DNN models, featuring high robustness to the well-trained DNN and its derived versions (e.g., pruned models) as well as low transferability to unassociated models. The generation process of our fingerprints does not interfere with the training phase, and no additional data are required from the training/testing set.
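The general idea behind stochastic defenses such as those named above can be sketched as follows: keep randomness active at inference time, so an attacker cannot compute gradients against one fixed network. This is only a toy illustration of the principle, not the proposal's actual Defensive Dropout or HRS procedure; the two-layer network, dropout rate, and sample count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 4))  # toy hidden layer: 4 inputs -> 8 units
W2 = rng.normal(size=(3, 8))  # toy output layer: 8 units -> 3 classes
x = rng.normal(size=4)        # an illustrative test input

def stochastic_predict(x, drop_p=0.3, n_samples=20):
    """Average class scores over random dropout masks kept ON at test time."""
    scores = np.zeros(3)
    for _ in range(n_samples):
        h = np.maximum(W1 @ x, 0.0)           # ReLU hidden activations
        mask = rng.random(h.shape) >= drop_p  # fresh random mask per query
        h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
        scores += W2 @ h
    return scores / n_samples

pred = int(np.argmax(stochastic_predict(x)))
```

Because each forward pass uses a different random sub-network, the gradient an attacker observes on one query need not match the model that answers the next one, which is the intuition these defenses build on.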