BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Northeastern University College of Engineering
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220627T140000
DTEND;TZID=America/New_York:20220627T150000
DTSTAMP:20260413T153340Z
CREATED:20221103T143838Z
LAST-MODIFIED:20221103T143838Z
UID:34083-1656338400-1656342000@coe.northeastern.edu
SUMMARY:Xiaolong Ma's PhD Dissertation Defense
DESCRIPTION:“Towards Efficient Deep Neural Network Execution with Model Compression and Platform-specific Optimization” \nAbstract: \nDeep learning\, or the deep neural network (DNN)\, as one of the most powerful machine learning techniques\, has become a fundamental element and core enabler of artificial intelligence. Many incredible\, bleeding-edge applications\, such as community/shared virtual reality experiences and self-driving cars\, will crucially rely on the ubiquitous availability and real-time executability of high-quality deep learning models. Among the variety of AI-associated platforms\, mobile and embedded computing devices have become key carriers of deep learning\, facilitating the widespread adoption of machine intelligence. In this talk\, I will first focus on a compression-compilation co-design method that deploys a unique sparse model on an off-the-shelf mobile device with real-time execution speed. This method advances the state of the art by introducing a new dimension\, fine-grained pruning patterns inside the coarse-grained structures\, revealing a previously unknown point in the design space. The designed patterns are interpretable and can be obtained by a fully automatic pattern-aware pruning framework that achieves pattern library extraction\, pattern assignment (pruning)\, and weight training simultaneously. With the higher accuracy enabled by fine-grained pruning patterns\, the unique insight is to use the compiler to regain and guarantee high hardware efficiency. We take a step forward by considering a more practical scenario in which the deployment-execution mode for AI tasks no longer satisfies user preferences\, and enabling edge training becomes inevitable\, since it promotes much better personalized intelligent services while strengthening users’ privacy by avoiding data egress from their devices. To this end\, I will demonstrate my approaches that use sparsity to achieve fast and efficient training on edge devices. I will evaluate static lottery-ticket sparse training\, and then demonstrate a high-accuracy\, low-cost dynamic sparse training framework that makes edge training possible. It successfully incorporates pattern-based sparsity into sparse training\, and also exploits data-level sparsity to further improve acceleration. I will conclude by applying our sparse training method to a distributed training scenario\, demonstrating state-of-the-art accuracy and great flexibility for modern AI model training. \nCommittee: \nProf. Yanzhi Wang (Advisor) \nProf. Xue Lin \nProf. David Kaeli
URL:https://coe.northeastern.edu/event/xiaolong-mas-phd-dissertation-defense/
END:VEVENT
END:VCALENDAR