
Kaustubh Shivdikar PhD Dissertation Defense

April 3, 2024 @ 3:30 pm - 5:00 pm

Announcing:
PhD Dissertation Defense

Name:
Kaustubh Shivdikar

Title:
Enabling Accelerators for Graph Computing

Date:
4/3/2024

Time:
3:30 PM

Location:
Zoom

Committee Members:
Prof. David Kaeli (Advisor)
Prof. Devesh Tiwari
Prof. Ajay Joshi (Boston University)
Prof. John Kim (KAIST)
Prof. José Luis Abellán (University of Murcia)

Abstract:
The advent of Graph Neural Networks (GNNs) has revolutionized the field of machine learning, offering a novel paradigm for learning on graph-structured data. Unlike traditional neural networks, GNNs are capable of capturing complex relationships and dependencies inherent in graph data, making them particularly suited for a wide range of applications including social network analysis, molecular chemistry, and network security. The impact of GNNs in these domains is profound, enabling more accurate models and predictions, and thereby contributing significantly to advances in these fields.

GNNs, with their unique structure and operation, present new computational challenges compared to conventional neural networks. This requires comprehensive benchmarking and a thorough characterization of GNNs to obtain insight into their computational requirements and to identify potential performance bottlenecks. In this thesis, we aim to develop a better understanding of how GNNs interact with the underlying hardware and will leverage this knowledge as we design specialized accelerators and develop new optimizations, leading to more efficient and faster GNN computations.
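
To make the computational structure concrete, the sketch below (illustrative only, not code from the thesis) shows a single GNN layer as two phases: a sparse, irregular neighbor-aggregation step followed by a regular dense feature transform. The function name gnn_layer and the NumPy formulation are assumptions made for this example; the contrast between the two phases is what motivates characterizing how GNNs interact with the underlying hardware.

import numpy as np

def gnn_layer(edges, features, weight):
    """edges: iterable of (src, dst) pairs; features: (N, F) array; weight: (F, F_out) array."""
    aggregated = np.zeros_like(features)
    for src, dst in edges:                       # sparse aggregation: irregular, data-dependent memory access
        aggregated[dst] += features[src]
    return np.maximum(aggregated @ weight, 0.0)  # dense transform: regular GEMM followed by ReLU

# Tiny example: a 3-node graph with 2-dimensional node features.
edges = [(0, 1), (1, 2), (2, 1)]
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
weight = np.array([[1.0, -1.0], [0.5, 2.0]])
print(gnn_layer(edges, features, weight))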

A pivotal component within GNNs is the Sparse General Matrix-Matrix Multiplication (SpGEMM) kernel, known for its computational intensity and irregular memory access patterns. In this thesis, we address the challenges posed by SpGEMM by implementing a highly optimized hashing-based SpGEMM kernel tailored for a custom accelerator. This optimization is crucial to enhancing the performance of GNN workloads, ensuring that the acceleration potential of custom hardware is fully realized.
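
As a rough illustration of the hashing-based approach (a minimal sketch, not the accelerator kernel developed in the thesis), the following Python code computes C = A x B by accumulating each output row in a per-row hash table keyed by column index. The function name spgemm_hash and the row-list input format are assumed for this example.

def spgemm_hash(a_rows, b_rows):
    """a_rows/b_rows: one list of (col, value) pairs per row; returns C in the same format."""
    c_rows = []
    for a_row in a_rows:
        accumulator = {}                          # hash table: output column -> partial sum
        for a_col, a_val in a_row:                # for each nonzero A[i, k]
            for b_col, b_val in b_rows[a_col]:    # scale row k of B and accumulate
                accumulator[b_col] = accumulator.get(b_col, 0.0) + a_val * b_val
        c_rows.append(sorted(accumulator.items()))
    return c_rows

# Tiny example with 2x2 sparse matrices.
A = [[(0, 1.0), (1, 2.0)], [(1, 3.0)]]
B = [[(1, 4.0)], [(0, 5.0), (1, 6.0)]]
print(spgemm_hash(A, B))   # [[(0, 10.0), (1, 16.0)], [(0, 15.0), (1, 18.0)]]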

Synthesizing these insights and optimizations, we design state-of-the-art hardware accelerators capable of efficiently handling various GNN workloads. Our accelerator architectures are built on our characterization of GNN computational demands, providing clear motivation for our approaches. Furthermore, we extend our exploration to emerging workloads in the domain of graph neural networks. This exploration into novel models underlines our comprehensive approach, as we strive to enable accelerators that are not just performant, but also versatile, able to adapt to the evolving landscape of graph computing.

Details

Date:
April 3, 2024
Time:
3:30 pm - 5:00 pm
Website:
https://northeastern.zoom.us/j/97674271897

Other

Department
Electrical and Computer Engineering
Topics
MS/PhD Thesis Defense
Audience
MS, PhD, Faculty, Staff