BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20201015
DTEND;VALUE=DATE:20201231
DTSTAMP:20260516T120654Z
CREATED:20201015T142444Z
LAST-MODIFIED:20201015T142444Z
UID:22804-1602720000-1609372799@coe.northeastern.edu
SUMMARY:Meet Your Graduate Student Ambassadors!
DESCRIPTION:Meet your Student Ambassadors! Prospective and admitted graduate students are invited to meet their Student Ambassador via Unibuddy.
URL:https://coe.northeastern.edu/event/meet-your-graduate-student-ambassadors/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20201124
DTEND;VALUE=DATE:20201125
DTSTAMP:20260516T120654Z
CREATED:20201116T145202Z
LAST-MODIFIED:20201116T150907Z
UID:23175-1606176000-1606262399@coe.northeastern.edu
SUMMARY:3 Minute Thesis Competition - Video Submission
DESCRIPTION:The annual GWiSE 3 Minute Thesis Competition 2020 is finally here! The 3MT is an academic competition that challenges Ph.D. students to describe their research within three minutes. This is a great opportunity to practice pitching your research to a non-specialist audience and to improve your science communication. Northeastern GWiSE and the Northeastern University Library have partnered to make 3 Minute Thesis possible with some pretty cool prizes: \n\nFirst place: $100 Grubhub card\, an interview on the Dean’s podcast\, $100 credit for 3D printing at the library\nSecond place: $50 Grubhub card\, an interview on the Dean’s podcast\, $50 credit for 3D printing at the library\nThird place: $25 Grubhub card\, an interview on the Dean’s podcast\n\nRSVP to participate here. \nMore details for submission will be sent to those who RSVP. The deadline for video submission is Tuesday\, November 24th\, via email to gwise.neu@gmail.com. Video requirements: a 3-minute recording over two slides: \n\n1st slide: title and author’s name\n2nd slide: thesis content\n\nThe live event will take place on Wednesday\, December 2nd from 2 PM to 4 PM ET on Zoom! All grad students are welcome to attend and/or present. The event will work in this way: \n\nGWiSE will host the event on Zoom and play prerecorded videos of participants explaining their thesis in under 3 minutes\nAfter each video is shown\, the judges will have time to discuss the presentations and assign scores\nGWiSE will announce the winners and award the prizes!\n\nReminder: please RSVP to participate here. The deadline for video submission is the 24th of November. To submit your video\, send a video file to gwise.neu@gmail.com. The actual event is on Wednesday\, December 2nd.
URL:https://coe.northeastern.edu/event/3-minute-thesis-competition-video-submission/
ORGANIZER;CN="GWiSE: Graduate Women in Science and Engineering":MAILTO:gwise.neu@gmail.com
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201124T140000
DTEND;TZID=America/New_York:20201124T150000
DTSTAMP:20260516T120654Z
CREATED:20201103T160959Z
LAST-MODIFIED:20201103T160959Z
UID:22989-1606226400-1606230000@coe.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Joseph Robinson
DESCRIPTION:PhD Dissertation Defense: Automatic Face Understanding: Recognizing Families in Photos \nJoseph Robinson \nLocation: Zoom Link \nAbstract: Visual kinship recognition has an abundance of practical uses. For this\, we built the largest database for kinship recognition\, FIW\, entirely in-house at no cost using a semi-automatic labeling scheme. Specifically\, we first aligned faces detected in family photos with names in the corresponding text metadata to mine label proposals with high confidence. The remaining data were labeled using a novel clustering algorithm that used the label proposals as side information to guide more accurate clusters\, yielding great savings in time and human input. Statistically\, FIW shows enormous gains over its predecessors. We provide several benchmarks in kinship verification\, family classification\, tri-subject verification\, and large-scale search & retrieval. We also trained CNNs on FIW and deployed the models on the renowned KinWild I and II to achieve state-of-the-art (SOTA) results. Most recently\, we further augmented FIW with multimedia (MM) for 200 of its 1\,000 families\, a labeled collection we dubbed FIW-MM. Now\, video dynamics\, audio\, and text captions can be used in the decision making of kinship recognition systems. \nFIW continues to pave the way for this research track: (1) advanced SOTA (e.g.\, a marginalized denoising auto-encoder based on metric learning that preserves intrinsic structures of kin-data and encapsulates discriminating information in learned features); (2) introduced generative models to predict a child’s appearance from a parent pair (i.e.\, proposed an adversarial autoencoder conditioned on age and gender to map between facial appearance and these higher-level features for control of age and gender); (3) designed evaluations with benchmarks to support challenges\, workshops\, and tutorials at top-tier conferences (e.g.\, CVPR\, MM\, FG\, ICME)\, and a premiere Kaggle Competition. We expect FIW will significantly impact research and reality. \nAdditionally\, we tackled the classic problem of facial landmark localization in images. This task has been in focus for decades\, and many solutions have been proposed. However\, there is renewed interest in pushing facial landmark detection technologies to handle more challenging data\, with deep networks now prevailing throughout machine learning. A majority of these networks have objectives based on L1 or L2 norms\, which inherit several disadvantages. First of all\, the locations of landmarks are determined from generated heatmaps (i.e.\, confidence maps)\, from which predicted landmark locations (i.e.\, the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice versa. To address this\, we introduced a LaplaceKL objective that penalizes low confidence. Another issue is a dependency on labeled data\, which is expensive to collect and susceptible to error. We addressed both issues by proposing an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims SOTA on renowned benchmarks. Furthermore\, our model is robust at a reduced size: with 1/8 the number of channels (i.e.\, 0.0398 MB)\, it is comparable to the state-of-the-art while running in real time on a CPU. Thus\, our method is of high practical value to real-life applications. \nFinally\, we built the Balanced Faces in the Wild (BFW) dataset to serve as a proxy to measure bias across ethnicity and gender subgroups\, allowing us to characterize FR performance per subgroup. We show performance is non-optimal when a single score threshold is used to determine whether sample pairs are genuine or imposter. Furthermore\, actual performance ratings vary greatly from those reported across subgroups. Thus\, claims of specific error rates only hold for populations matching those of the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on the facial encodings extracted using SOTA deep networks. Not only does this technique balance performance\, but it also boosts overall performance. A benefit of the proposed scheme is that it preserves identity information in facial features while removing demographic knowledge from the lower-dimensional features. The removal of demographic knowledge prevents potential future biases from being injected into decision making and also addresses privacy concerns. We explore why this works qualitatively with hard samples. We also show quantitatively that subgroup classifiers can no longer learn from the encodings mapped by the proposed scheme.
URL:https://coe.northeastern.edu/event/ece-phd-dissertation-defense-joseph-robinson/
END:VEVENT
END:VCALENDAR