
Zulqarnain Qayyum Khan’s PhD Dissertation Defense

July 12, 2022 @ 3:30 pm - 4:30 pm

“Interpretable Machine Learning for Affective Psychophysiology and Neuroscience”

Abstract:

In this thesis, we leverage existing Machine Learning (ML) models where appropriate and develop novel models to advance the understanding of affective psychophysiology and neuroscience. Additionally, considering the increased use of ML as a toolbox, we highlight the underlying assumptions and limitations of basic ML methods to help better contextualize the conclusions drawn from the application of ML in this domain. Similarly, given increasingly opaque ML models, the resulting rise of methods to explain these models, and the importance of explainability to interdisciplinary research, we investigate the theoretical properties of these explainers.
Affective psychophysiology research typically uses supervised analyses, which leave little room for exploration. Studies of motivated performance tasks often focus on two states, threat and challenge, which exhibit somewhat inconsistent physiological properties. Using unsupervised analysis of physiology data, we find, for the first time, evidence for the presence of a third state that may help explain these inconsistencies. Similarly, the prototypical view of emotion often searches for consistency and specificity, whereas the constructionist account of emotion proposes that emotion categories are populations of situation-specific, variable instances. In results supportive of this constructionist view, we find large variability in both the number and nature of clusters in unsupervised analyses of ambulatory physiological data.
In functional neuroimaging, a largely unsolved challenge is to develop models that appropriately account for the commonalities and variations among participants and stimuli, scale to large amounts of data, and reason about uncertainty in an unsupervised manner. Such models are needed to investigate important neuroscientific phenomena such as individual variation and degeneracy. We develop Neural Topographic Factor Analysis (NTFA), a novel ML model for fMRI data with a deep generative prior that teases apart participant-driven and stimulus-driven variation and commonalities, and demonstrate its potential for investigating individual variation and degeneracy.
We further draw on this interdisciplinary research experience to shed light on the assumptions and limitations of some of the basic ML methods commonly used in the sciences (especially psychological science). These methods are often applied as software packages, and we argue that researchers need to be more mindful of their underlying assumptions when drawing conclusions. Along the same lines, ML methods themselves are becoming increasingly black-box, making it harder to reason about their underlying assumptions. This has led to an increased focus on explainers, which provide the interpretability that is critical for interdisciplinary research. We further research in this direction by defining explainer astuteness as a measure of robustness and theoretically demonstrating that smooth classifiers lend themselves to more astute explanations.

Committee:

Prof. Jennifer Dy (Advisor)
Prof. Lisa Feldman Barrett
Prof. Dana Brooks
Prof. Karen Quigley
Prof. Octavia Camps

Details

Date:
July 12, 2022
Time:
3:30 pm - 4:30 pm
Website:
https://northeastern.zoom.us/j/91427465251?pwd=MFRYWWU2bUd4Q1ZEdGtMVEZ4Z2Njdz09

Other

Department
Electrical and Computer Engineering
Topics
MS/PhD Thesis Defense
Audience
Faculty, Staff