BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Northeastern University College of Engineering
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230720T130000
DTEND;TZID=America/New_York:20230720T140000
DTSTAMP:20260513T212647Z
CREATED:20230711T140015Z
LAST-MODIFIED:20230711T140015Z
UID:37433-1689858000-1689861600@coe.northeastern.edu
SUMMARY:Qing Jin's PhD Dissertation Defense
DESCRIPTION:Title: Decoupling Efficiency-Performance Optimization for Modern Neural Networks \nDate: \n7/20/2023 \nCommittee Members: \nProf. Yanzhi Wang (Advisor); Prof. David Kaeli; Prof. Sunil Mittal; Prof. Jennifer Dy \nAbstract: \nDeep learning has achieved remarkable success in a variety of modern applications\, but this success is often accompanied by inefficiency in storage and inference speed\, which can hinder practical deployment on resource-constrained hardware. Developing highly efficient neural networks that maintain high prediction accuracy is therefore both crucial and challenging. This dissertation explores the potential for simultaneously achieving high efficiency and high prediction accuracy in neural networks\, and is broadly divided into three sections. (1) In Section One\, we explore the implementation of highly efficient generative adversarial networks (GANs) capable of generating high-quality images within a predefined computational budget. The key challenge lies in identifying the optimal architecture for the generative model while preserving the quality of the images generated by the compressed model\, despite its reduced computational cost. To achieve this\, we propose a novel neural architecture search (NAS) algorithm and a new knowledge distillation technique. (2) In Section Two\, we address the challenge of quantizing discriminative models without relying on high-precision multiplications. We present an innovative approach to determining the optimal fixed-point formats for both weights and activations based on their statistical properties. Our results demonstrate that high accuracy in quantized neural networks can be achieved without high-precision multiplications. (3) In Section Three\, we delve into the challenge of training neural networks for emerging computing platforms\, specifically processing-in-memory (PIM) systems.
 Through a detailed mathematical derivation of the backward propagation algorithm\, we enable the training of quantized models on these platforms. Additionally\, through a thorough theoretical analysis of the training dynamics\, we ensure convergence and propose a systematic solution for quantizing neural networks on PIM systems.
URL:https://coe.northeastern.edu/event/qing-jins-phd-dissertation-defense/
END:VEVENT
END:VCALENDAR