BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northeastern University College of Engineering - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://coe.northeastern.edu
X-WR-CALDESC:Events for Northeastern University College of Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210719T113000
DTEND;TZID=America/New_York:20210719T123000
DTSTAMP:20260423T020628Z
CREATED:20210713T172719Z
LAST-MODIFIED:20210713T172719Z
UID:26599-1626694200-1626697800@coe.northeastern.edu
SUMMARY:ECE PhD Dissertation Defense: Berkan Kadioglu
DESCRIPTION:PhD Dissertation Defense: An Analysis of Algorithms with Discrete Choice Models \nBerkan Kadioglu \nLocation: Zoom Link \nAbstract: In the first half of our work\, we consider a rank regression setting\, in which a dataset of $N$ samples with features in $\mathbb{R}^d$ is ranked by an oracle via $M$ pairwise comparisons.\nSpecifically\, there exists a latent total ordering of the samples; when presented with a pair of samples\, a noisy oracle identifies the one ranked higher w.r.t. the underlying total ordering.\nA learner observes a dataset of such comparisons and wishes to regress sample ranks from their features.\nWe show that to learn the model parameters with $\epsilon > 0$ accuracy\, it suffices to conduct $M \in \Omega(dN\log^3 N/\epsilon^2)$ comparisons uniformly at random when $N$ is $\Omega(d/\epsilon^2)$.\nCompared to learning from class labels\, learning from comparison labels has two advantages: first\, comparison labels reveal both inter- and intra-class information\, whereas class labels contain only the former.\nSecond\, comparison labels exhibit lower variability across different labelers.\nThis has been observed experimentally in multiple domains\, including medicine \citep{campbell2016plus\,kalpathy2016plus\,stewart2005absolute} and recommendation systems \citep{schultz2004learning\,zheng2009mining\,brun2010towards\,koren2011ordrec}\, and is due to the fact that humans often find it easier to make relative\, rather than absolute\, judgements.\nMany works that learn from comparison labels show excellent empirical performance \citep{tian2019severity\,yildiz2019classification}.\nOur work provides a theoretical foundation for analyzing and understanding this empirical performance.\nMoreover\, we extend the problem we initially study to a harder setting by moving from pairwise to multi-way comparisons.\nFurthermore\, we study an online variant of the previous problem in which the goal is to
 maintain high user engagement throughout the learning period.\nThis\, of course\, indirectly leads to the goal of learning the parameters of the discrete choice model as accurately and as quickly as possible.\nThis new problem is directly related to a setting in which a retailer recommends products to customers.\nA common problem in many recommendation tasks is to simultaneously learn the utilities of the items to be recommended and maintain high user engagement.\nWe are generally constrained by a limit on the total number of items that can be recommended at a time\, for an unknown time horizon.\nRecently\, bandit algorithms have been proposed for this setting under the multinomial logit model.\nBounds on error metrics are provided for upper-confidence-bound and Thompson-sampling-based algorithms.\nIn our paper\, we propose a variational-inference-based Thompson sampling algorithm and identify the properties required to achieve $\tilde O(D^{3/2}\sqrt T)$ worst-case regret.\nThrough extensive experiments\, we show that our method performs much better than the recently proposed \emph{TSMNL} algorithm on many error metrics.\nWe further accelerate our algorithm for use in practical settings.
URL:https://coe.northeastern.edu/event/ece-phd-dissertation-defense-berkan-kadioglu/
END:VEVENT
END:VCALENDAR