BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Northeastern University College of Engineering
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220712T153000
DTEND;TZID=America/New_York:20220712T163000
DTSTAMP:20260512T162132Z
CREATED:20221103T143319Z
LAST-MODIFIED:20221103T143319Z
UID:34092-1657639800-1657643400@coe.northeastern.edu
SUMMARY:Zulqarnain Qayyum Khan's PhD Dissertation Defense
DESCRIPTION:“Interpretable Machine Learning for Affective Psychophysiology and Neuroscience” \nAbstract: \nIn this thesis\, we leverage existing Machine Learning (ML) models where appropriate and develop novel models to advance the understanding of affective psychophysiology and neuroscience. Additionally\, considering the increased use of ML as a toolbox\, we highlight underlying assumptions and limitations of basic ML methods to help better contextualize the conclusions drawn from application of ML in this domain. Similarly\, given the increasingly opaque ML models\, the resulting rise of methods to explain these models\, and the importance of explainability to interdisciplinary research\, we investigate theoretical properties of these explainers.\nAffective psychophysiology research typically uses supervised analyses\, which leave little room for exploration. Studies of motivated performance tasks often focus on two states\, threat and challenge\, which exhibit somewhat inconsistent physiological properties. Using unsupervised analysis of physiology data\, we find\, for the first time\, evidence for a third state that may help explain these inconsistencies. Similarly\, the prototypical view of emotion often searches for consistency and specificity\, as opposed to the constructionist account of emotion\, which proposes emotion categories as populations of situation-specific\, variable instances. In results supportive of this constructionist view\, we find large variability in both the number and nature of clusters in unsupervised analyses of ambulatory physiological data. Similarly\, in functional neuroimaging a largely unsolved challenge is to develop models that appropriately account for the commonalities and variations among participants and stimuli\, scale to large amounts of data\, and reason about uncertainty in an unsupervised manner. Such models are needed to investigate important neuroscientific phenomena such as individual variation and degeneracy. 
 We develop Neural Topographic Factor Analysis (NTFA)\, a novel ML model for fMRI data with a deep generative prior that teases apart participant- and stimulus-driven variation and commonalities\, and demonstrate its potential for investigating individual variation and degeneracy.\nWe further utilize this interdisciplinary research experience to shed light on the assumptions and limitations of some of the basic ML methods commonly used in the sciences (especially psychological science). These methods are often used as software packages. We argue that researchers need to be more mindful of their underlying assumptions when drawing conclusions. Along the same lines\, ML methods themselves are becoming increasingly black-box\, making it harder to reason about underlying assumptions. This has led to an increased focus on explainers\, which provide interpretability for ML methods\, a property critical for interdisciplinary research. The theoretical properties of these explainers\, however\, remain understudied. We further the research in this direction by defining explainer astuteness as a measure of robustness and theoretically demonstrating that smooth classifiers lend themselves to more astute explanations. \nCommittee: \nProf. Jennifer Dy (Advisor)\nProf. Lisa Feldman Barrett\nProf. Dana Brooks\nProf. Karen Quigley\nProf. Octavia Camps
URL:https://coe.northeastern.edu/event/zulqarnain-qayyum-khans-phd-dissertation-defense/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220712T173000
DTEND;TZID=America/New_York:20220712T183000
DTSTAMP:20260512T162132Z
CREATED:20220705T201358Z
LAST-MODIFIED:20220705T201358Z
UID:31785-1657647000-1657650600@coe.northeastern.edu
SUMMARY:Gordon Institute Virtual Information Session
DESCRIPTION:Learn how you can earn a Graduate Certificate in Engineering Leadership as a stand-alone certificate or in combination with one of twenty Master of Science degrees offered through Northeastern’s College of Engineering\, College of Science\, or Khoury College of Computer Sciences. \nThe National Academy of Engineering recognized The Gordon Institute of Engineering Leadership (GIEL) for its innovative curriculum\, which combines technical education\, leadership capabilities\, and the “Challenge Project”: an opportunity for students to receive master’s-level credit while working in industry. \nBy aligning technical proficiency with leadership capabilities\, GIEL accelerates the development of high-potential engineers and prepares them to lead complex projects early in their careers. Upon completion of the program\, more than 88% of the 2020 class reported increased leadership responsibility\, and more than 50% reported being promoted within one year of graduation. \nOur Director of Admissions will directly answer your application questions for Fall 2022. \nYou will also have the opportunity to hear from alumni about how The Gordon Institute propelled their engineering careers. Program professors will be present to answer curriculum questions.
URL:https://coe.northeastern.edu/event/gordon-institute-virtual-information-session-7/
ORGANIZER;CN="Gordon Engineering Leadership program":MAILTO:gordonleadership@northeastern.edu
END:VEVENT
END:VCALENDAR