
ECE PhD Dissertation Defense: Sheng Lin

December 8, 2020 @ 3:00 pm - 4:00 pm

PhD Dissertation Defense: Platform-specific Model Compression for Deep Neural Networks with Joint Methods

Sheng Lin

Location: Zoom (link under Details below)

Abstract: Deep learning has demonstrated its power in many application domains, especially computer vision, natural language processing, and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens hardware platforms with intensive computation and storage. To meet the requirements of specific tasks, many researchers have investigated reducing DNN model size for efficient implementation on hardware devices while maintaining reasonable prediction accuracy. However, a systematic investigation of platform-specific DNN acceleration frameworks is still lacking.

In this dissertation, we present several software-hardware co-design techniques to speed up DNN algorithms on specific platforms. At the software level, we present joint model compression techniques for DNN model training and inference with reasonable accuracy. At the hardware level, these algorithms and methods target storage reduction, low power consumption, efficient inference, and data security. By using joint methods to optimize different types of networks, the targeted hardware platforms can reduce the asymptotic complexity of both computation and storage, which distinguishes our approach from existing ones. First, we present a Fast Fourier Transform-based DNN model for the inference phase on embedded platforms. Second, we build a framework for the two most commonly used model compression techniques: low-bit linear weight quantization and its combination with different weight pruning methods. Third, we apply quantization techniques to an always-on keyword spotting system and eliminate the energy-consuming ADC by using an energy-efficient analog processing circuit. Finally, we propose a federated learning framework to protect users' data privacy while reducing the overall communication cost during training.
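The abstract names joint quantization and pruning but does not describe the dissertation's specific algorithms. Purely as an illustration of how magnitude-based weight pruning can be combined with low-bit uniform quantization on a single layer's weights, here is a minimal NumPy sketch; all function names, sparsity levels, and bit widths are hypothetical choices, not taken from the dissertation:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def quantize_uniform(weights, num_bits=4):
    """Symmetric uniform quantization of the (already pruned) weights to num_bits."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax if np.any(weights) else 1.0
    q = np.round(weights / scale).clip(-qmax, qmax)
    return q * scale, scale

# Toy example: one fully connected layer's weight matrix (sizes are arbitrary).
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)

w_pruned, mask = prune_by_magnitude(w, sparsity=0.75)   # keep 25% of weights
w_compressed, scale = quantize_uniform(w_pruned, num_bits=4)

print("sparsity:", 1.0 - mask.mean())
print("mean abs reconstruction error:", np.abs(w - w_compressed).mean())
```

In this kind of joint scheme, the pruning mask shrinks the number of stored weights while the low-bit codes shrink each surviving weight, which is the sense in which compression reduces both storage and computation on the target platform.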

Details

Date:
December 8, 2020
Time:
3:00 pm - 4:00 pm
Website:
https://us02web.zoom.us/j/8333031167?pwd=Y2lvNmZSYnlYMFZ5dFBCRGV2azNvQT09#success

Other

Department
Electrical and Computer Engineering
Topics
MS/PhD Thesis Defense