ECE PhD Dissertation Defense: Andac Demir

January 14, 2022 @ 2:00 pm - 3:00 pm

PhD Dissertation Defense: Automated Bayesian Network Exploration for Nuisance-Robust Inference

Andac Demir

Location: Zoom Link

Abstract: A fundamental challenge in the analysis of physiological signals is learning useful features that are robust to nuisance factors (e.g., inter-subject and inter-session variability) and that achieve the highest nuisance-invariant classification performance. To address this problem, we introduce three frameworks: AutoBayes, an AutoML approach that conducts neural architecture search for research prototyping, and two GNN-based frameworks, EEG-GNN and EEG-GAT.
The ultimate goal of the AutoBayes framework is to identify the conditional relationships between a physiological dataset, the associated task labels, nuisance variations, and potential latent variables, in order to robustly infer task labels invariant to nuisance factors. For physiological datasets, nuisance factors can include variations across subjects or sessions; our experiments focus on subject variations. AutoBayes enumerates all plausible Bayesian networks over the data, labels, nuisance variations, and potential latent variables; detects and prunes unnecessary edges using the Bayes-Ball algorithm; and then trains the resulting DNN architectures under different hyperparameter configurations in adversarial/non-adversarial and variational/non-variational settings to achieve the highest validation performance. Rather than tuning hyperparameters for model optimization, AutoBayes concentrates on the architecture search over plausible Bayesian networks, and it achieves state-of-the-art performance across several physiological datasets. Furthermore, we ensemble several Bayesian networks by stacking their posterior probability vectors in a higher-level learning space, train a shallow MLP as a meta-learner, and measure task and nuisance classification performance on a hold-out dataset. We observe that exploring different inference Bayesian networks significantly improves the robustness of the machine learning pipeline, and that the parallel activity of vast assemblies of different Bayesian network models significantly reduces variation across subjects in the cross-validation setting.
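The edge pruning mentioned above rests on d-separation queries answered by the Bayes-Ball algorithm. The following is a minimal, self-contained sketch of that reachability test (not the dissertation's implementation): the "ball" travels over the DAG, and a trail is blocked or opened depending on whether it arrives at a node from a parent or a child and whether the node (or an ancestor of the conditioning set) is observed. The graph encoding via `parents`/`children` dictionaries is an illustrative choice.

```python
from collections import deque

def ancestors(parents, nodes):
    """Return the given nodes together with all of their ancestors in the DAG."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, children, x, y, z):
    """True iff x and y are d-separated given the set z (Bayes-Ball reachability).

    Assumes x and y are not in z. Direction "up" means the ball arrived from a
    child; "down" means it arrived from a parent.
    """
    z = set(z)
    anc_z = ancestors(parents, z)                  # z plus its ancestors
    visited, queue = set(), deque([(x, "up")])
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y:
            return False                           # active trail from x reaches y
        if direction == "up" and node not in z:
            # chain/fork through an unobserved node: pass in both directions
            for p in parents.get(node, ()):
                queue.append((p, "up"))
            for c in children.get(node, ()):
                queue.append((c, "down"))
        elif direction == "down":
            if node not in z:                      # chain: continue to children
                for c in children.get(node, ()):
                    queue.append((c, "down"))
            if node in anc_z:                      # collider opened by conditioning
                for p in parents.get(node, ()):
                    queue.append((p, "up"))
    return True

# Example: the collider A -> C <- B is blocked marginally, opened given C.
parents = {"C": ["A", "B"]}
children = {"A": ["C"], "B": ["C"]}
print(d_separated(parents, children, "A", "B", []))     # True
print(d_separated(parents, children, "A", "B", ["C"]))  # False
```

Edges whose removal leaves every such independence query unchanged are the "unnecessary" ones a search like AutoBayes can prune before instantiating a DNN for the network.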
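The stacking step described above can be sketched as follows. This is a toy illustration with made-up shapes and random data, not the dissertation's pipeline: three base models' posterior probability vectors are concatenated into meta-features, and a softmax regression stands in for the shallow MLP meta-learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 base Bayesian-network models, 4 task classes,
# 200 hold-out trials; posteriors[m] holds model m's posterior vectors.
n_models, n_classes, n_trials = 3, 4, 200
labels = rng.integers(0, n_classes, size=n_trials)
posteriors = rng.dirichlet(np.ones(n_classes), size=(n_models, n_trials))

# Stacking: concatenate the models' posterior vectors into one
# meta-feature vector per trial (the higher-level learning space).
meta_X = posteriors.transpose(1, 0, 2).reshape(n_trials, n_models * n_classes)

# Meta-learner: softmax regression trained by gradient descent on
# cross-entropy (a shallow MLP would add one hidden layer here).
W = np.zeros((meta_X.shape[1], n_classes))
Y = np.eye(n_classes)[labels]                  # one-hot targets
for _ in range(200):
    logits = meta_X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # predicted class probabilities
    W -= 0.5 * meta_X.T @ (P - Y) / n_trials   # gradient step
preds = (meta_X @ W).argmax(axis=1)            # meta-level task predictions
```

With real base-model posteriors in place of the random ones, `preds` would be evaluated on the hold-out set for both task and nuisance classification.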
In the second part of the dissertation, we benchmark the performance of EEG-GNN and EEG-GAT against the AutoBayes framework. CNNs have frequently been used to extract subject-invariant features from EEG for classification tasks, but this approach implicitly assumes that electrodes are equidistant, analogous to the pixels of an image, and hence fails to explore or exploit the complex functional neural connectivity between different electrode sites. We overcome this limitation by tailoring the concepts of convolution and pooling, normally applied to 2D grid-like inputs, to the functional network of electrode sites. Furthermore, we develop various GNN models that project electrodes onto the nodes of a graph, where the node features are the EEG channel samples collected over a trial, and nodes can be connected by weighted or unweighted edges according to a flexible policy formulated by a neuroscientist. Empirical evaluations show that our proposed GNN-based framework, EEG-GNN, outperforms standard CNN classifiers on the ErrP and RSVP datasets, while bringing neuroscientific interpretability and explainability to deep learning methods tailored to EEG classification problems. In addition, EEG-GAT employs a multi-head attention mechanism in conjunction with the GNN architecture to learn the graph topology from the observations, instead of relying on a graph shift operator heuristically constructed by a domain expert. This implicitly enables exploration of the functional neural connectivity between pairs of EEG electrode sites peculiar to a given cognitive task, as well as EEG channel selection, which is critical for reducing computational cost and designing portable EEG headsets.
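The electrodes-as-graph construction described above can be sketched in a few lines. All shapes and the distance-threshold edge policy below are illustrative assumptions (one of many policies a neuroscientist might formulate), not the dissertation's configuration; the convolution step follows the common symmetric-normalization recipe of Kipf and Welling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 electrodes with 2-D scalp coordinates; node
# features are the raw channel samples over one trial (8 x 128 samples).
coords = rng.uniform(size=(8, 2))
X = rng.standard_normal((8, 128))

# Edge policy: connect electrode pairs closer than a distance threshold.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
A = ((dist < 0.5) & ~np.eye(8, dtype=bool)).astype(float)

# One graph-convolution step: add self-loops, symmetrically normalize,
# then mix each electrode's features with its neighbors' features.
A_hat = A + np.eye(8)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
W = rng.standard_normal((128, 16))         # learnable weights in a real model
H = np.maximum(A_norm @ X @ W, 0.0)        # ReLU(A_norm X W): (8, 16) embeddings
```

Graph pooling (e.g., coarsening electrode clusters) and a classifier head would sit on top of such layers in a full EEG-GNN model.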
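The topology-learning idea behind EEG-GAT can likewise be sketched with a single GAT-style attention head over all electrode pairs; the shapes and parameter names below are hypothetical, and a real model would learn the parameters and use multiple heads.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: 8 EEG electrodes with 16-dim node embeddings.
feats = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 16))          # shared linear transform
a_src = rng.standard_normal(16)            # attention parameters (one head)
a_dst = rng.standard_normal(16)

Z = feats @ W
# Scores e_ij = LeakyReLU(a_src.z_i + a_dst.z_j) over ALL electrode pairs:
# the topology is learned from data, not fixed by a hand-built operator.
e = np.add.outer(Z @ a_src, Z @ a_dst)
e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU
e -= e.max(axis=1, keepdims=True)          # numerically stable softmax
alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
out = alpha @ Z                            # attention-weighted aggregation
```

Large entries of `alpha[i, j]` flag electrode pairs whose interaction matters for the task, which is the handle used for interpreting connectivity and for channel selection.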