ECE PhD Proposal Review: Zulqarnain Qayyum Khan
December 1, 2021 @ 3:30 pm - 4:30 pm
PhD Proposal Review: Interpretable Machine Learning for Affective Neuroscience and Psychophysiology
Zulqarnain Qayyum Khan
Location: Zoom Link
Abstract: In this thesis, we leverage machine learning to investigate questions of interest in affective psychophysiology and neuroscience. We argue for and apply appropriate existing methods where possible and analyze the results they provide; where existing methods fail to provide an answer, we propose and build new models. We demonstrate the use of hierarchical clustering to investigate autonomic nervous system reactivity during an active-coping stressor task, revealing physiological indices of challenge and threat. Similarly, we leverage Dirichlet Process Gaussian Mixture Modeling (DP-GMM) to reveal the variation in affective experience during a context-aware experience-sampling study, and to investigate the relationship between emotional granularity and cardiorespiratory physiological activity using resting-state data from participants in the same study. Finally, we propose and develop Neural Topographic Factor Analysis (NTFA), a novel factor analysis model for fMRI data with a deep generative prior that teases apart participant-driven and stimulus-driven variation and commonalities, learning a latent space that can shed light on important neuroscientific phenomena such as individual variation and degeneracy.
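The two clustering approaches mentioned above can be illustrated with off-the-shelf implementations. The following is a minimal sketch, assuming synthetic two-dimensional features as a stand-in for the physiological measures analyzed in the thesis; the data, feature dimensions, and cluster counts here are placeholders, not those from the actual studies.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for per-participant physiological features
# (e.g., reactivity indices); purely illustrative, not the thesis data.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(30, 2))
group_b = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(30, 2))
features = np.vstack([group_a, group_b])

# Hierarchical (agglomerative) clustering: build a Ward-linkage
# dendrogram, then cut it into two flat clusters.
tree = linkage(features, method="ward")
hier_labels = fcluster(tree, t=2, criterion="maxclust")

# DP-GMM via a truncated stick-breaking approximation: the model is
# given 10 components but can concentrate weight on fewer, so the
# number of occupied clusters is inferred rather than fixed.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(features)
dp_labels = dpgmm.predict(features)

print(len(set(hier_labels)), len(set(dp_labels)))
```

The practical contrast is that hierarchical clustering requires choosing where to cut the tree, whereas the Dirichlet process prior lets the mixture model shrink away unneeded components on its own.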
Based on the work we have already done, we propose three further lines of research to include in this thesis. First, NTFA can be viewed as a family of models, where appropriate modifications can be made depending on which questions need to be answered. Leveraging this, we propose explicitly adapting NTFA to tackle the question of degeneracy in neural responses. This involves introducing an additional latent space that captures and visualizes the interaction of each participant with each stimulus in a given fMRI study; the arrangement of inferred embeddings in this space can then suggest the presence or absence of different types of degeneracy in participants' neural responses to the presented stimuli. Second, during the course of this interdisciplinary research we realized that there is a need for a comprehensive work that sheds light on the assumptions and limitations of the most popular machine learning methods used in the sciences (especially psychology) and provides recommendations on how researchers can be more mindful of the assumptions those methods make. Such a resource would equip users of ML methods to draw more appropriate conclusions from their results, and we intend to include it in this thesis. Third, continuing along the same lines, there is also a need for better explanation models for the increasingly complicated ML models in use today. This is especially true in the health sciences, where knowing why an ML model made a particular decision is almost as important as that decision being accurate. To this end, we propose a theoretical work that ties the reliability of explanation models to the robustness of the models they aim to explain.
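The proposed per-(participant, stimulus) interaction latent space can be sketched schematically. This is a toy illustration only, assuming random placeholder embeddings and a generic linear decoder; the real NTFA extension would infer these quantities from fMRI data with a deep generative model, and every dimension and mapping below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_participants, n_stimuli = 5, 4
embed_dim, n_voxels = 2, 100

# Hypothetical low-dimensional embeddings; in the proposal these
# would be inferred from data, here they are random placeholders.
participant_z = rng.normal(size=(n_participants, embed_dim))
stimulus_z = rng.normal(size=(n_stimuli, embed_dim))

# One interaction embedding per (participant, stimulus) pair: a
# concatenation passed through a random nonlinear map stands in for
# whatever learned combination the extended model would use.
W = rng.normal(size=(2 * embed_dim, embed_dim))
pairs = np.array([
    np.concatenate([participant_z[p], stimulus_z[s]])
    for p in range(n_participants)
    for s in range(n_stimuli)
])
interaction_z = np.tanh(pairs @ W)  # shape: (n_participants * n_stimuli, embed_dim)

# A generic linear decoder mapping interaction embeddings to simulated
# voxel responses, in place of NTFA's deep generative decoder.
decoder = rng.normal(size=(embed_dim, n_voxels))
responses = interaction_z @ decoder

print(interaction_z.shape, responses.shape)
```

Plotting the 2-D `interaction_z` points, colored by participant or by stimulus, is the kind of visualization that could suggest whether different participants show similar or degenerate neural responses to the same stimuli.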