ECE PhD Proposal Review: Tong Jian
December 2, 2021 @ 3:00 pm - 4:00 pm
PhD Proposal Review: Robust Sparsified Deep Learning
Location: Zoom Link
(ID: 75807284369, Passcode: 463BXOZk)
Abstract: In this thesis, we investigate and address robustness concerns for DNN-based real-life applications along three axes: deployment on resource-constrained systems, environment adaptation, and adversarial learning. First, we propose a means of compressing a Radio Frequency (RF) deep neural network architecture through weight pruning, and provide a systems-level analysis of implementing such a pruned architecture on resource-constrained edge devices. In particular, we jointly train and sparsify neural networks tailored to edge hardware implementations. With only negligible accuracy loss (less than 1%), we achieve a pruning rate of up to 27.2x for 50-device classification. We demonstrate the efficacy of our approach on multiple edge hardware platforms, where our method yields significant inference speedups (11.5x on an FPGA and 3x on a smartphone) as well as high efficiency.
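The idea behind weight pruning can be illustrated with a minimal magnitude-based sketch: keep only the largest-magnitude weights of a layer so that roughly 1/prune_rate of them survive. This is an assumption-laden simplification, not the thesis's joint training-and-sparsification scheme; all names here (`magnitude_prune`, the 64x64 toy layer) are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, prune_rate):
    # Keep roughly 1/prune_rate of the weights (the largest in magnitude);
    # zero out everything else. Returns the sparse weights and the binary mask.
    flat = np.abs(weights).ravel()
    keep = max(1, round(flat.size / prune_rate))
    threshold = np.sort(flat)[-keep]     # smallest magnitude still kept
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))            # stand-in for one layer's weights
pruned, mask = magnitude_prune(w, prune_rate=27.2)  # the 27.2x rate from the abstract
print(mask.sum(), "of", mask.size, "weights kept")
```

On hardware, such sparsity pays off only if the storage format and kernels exploit it (e.g. compressed sparse formats on FPGA), which is what the systems-level analysis in the thesis addresses.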
Furthermore, we propose a new learn-prune-share (LPS) algorithm that achieves robustness to environment adaptation in the lifelong learning setting. Our method maintains a parsimonious neural network model and achieves exactly zero forgetting by splitting the network into task-specific partitions via an ADMM-based weight pruning strategy. Moreover, a novel selective knowledge sharing scheme is integrated seamlessly into the ADMM optimization framework to enable knowledge reuse. We show that our approach achieves significant improvements over state-of-the-art methods on multiple real-life datasets.
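The no-forgetting guarantee rests on the partitions being disjoint: each task owns its own subset of weights, so later tasks can never overwrite an earlier task's parameters. The sketch below illustrates only that partitioning invariant, using a greedy magnitude heuristic as a stand-in for the ADMM-based pruning step; the function name and the 25% allocation per task are assumptions for illustration.

```python
import numpy as np

def allocate_task_partition(weights, free_mask, task_fraction):
    # Give a new task a disjoint slice of the still-unclaimed weights
    # (here: the largest-magnitude free weights, as a simple stand-in).
    free_idx = np.flatnonzero(free_mask)
    n_take = int(len(free_idx) * task_fraction)
    order = np.argsort(-np.abs(weights.ravel()[free_idx]))
    task_mask = np.zeros(weights.size, dtype=bool)
    task_mask[free_idx[order[:n_take]]] = True
    task_mask = task_mask.reshape(weights.shape)
    return task_mask, free_mask & ~task_mask   # shrink the free pool

rng = np.random.default_rng(1)
w = rng.normal(size=(32, 32))
free = np.ones_like(w, dtype=bool)
masks = []
for _ in range(3):                             # three sequential tasks
    m, free = allocate_task_partition(w, free, task_fraction=0.25)
    masks.append(m)
# Partitions are disjoint by construction, so no task overwrites another.
print("overlap between task 1 and 2:", (masks[0] & masks[1]).any())
```

At inference time, task t would apply only its own mask (plus any selectively shared knowledge), which is how LPS keeps earlier tasks intact.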
Finally, we investigate the Hilbert-Schmidt Independence Criterion (HSIC) bottleneck as a regularizer (HBaR) as a means of enhancing adversarial robustness. We show, both theoretically and experimentally, that the HSIC bottleneck enhances robustness to adversarial attacks. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating the HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training.
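The quantity at the heart of HBaR is the empirical HSIC, a kernel-based measure of statistical dependence between two batches of variables (e.g. inputs and hidden representations). A minimal sketch of the standard biased estimator with Gaussian kernels is below; the bandwidth sigma=1.0 and the toy data are assumptions, and this shows only the estimator, not the full HBaR training objective.

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    sq = np.sum(x * x, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC estimator: trace(K H L H) / (n - 1)^2,
    # where H = I - (1/n) * ones(n, n) centers the Gram matrices.
    n = x.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    k = gaussian_gram(x, sigma)
    l = gaussian_gram(y, sigma)
    return float(np.trace(k @ h @ l @ h)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))        # toy input batch
noise = rng.normal(size=(50, 1))    # drawn independently of x
dep = hsic(x, x)                    # strongly dependent pair: large
indep = hsic(x, noise)              # independent pair: near zero
print(f"HSIC(x, x) = {dep:.4f}, HSIC(x, noise) = {indep:.4f}")
```

Penalizing the dependence between inputs and intermediate representations while preserving dependence with the labels is, roughly, how an HSIC bottleneck regularizer limits the information an attacker's input perturbations can push through the network.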