
AIX SEMINAR SERIES

March 3, 2021 @ 3:00 pm - 4:00 pm

We cordially invite you to join the AIX Seminar Series.

Zoom Link: https://northeastern.zoom.us/j/96209636039

__________________________________________________________________________________

Learning Strong Inference Models in Small Data Domains: Enabling AI in Data Extreme ML

Dr. Sarah Ostadabbas | Assistant Professor, Northeastern University

Human Factors in Artificial Decision Making: Understanding Humans and Helping Humans Understand Learning Agents

Dr. Pedro Sequeira | Advanced Computer Scientist, SRI International

__________________________________________________________________________________

Learning Strong Inference Models in Small Data Domains: Enabling AI in Data Extreme ML

Sarah Ostadabbas | Assistant Professor, Electrical and Computer Engineering, Northeastern University

Abstract: Recent efforts in machine learning (especially the waves of deep learning introduced over the last decade) have obliterated records for regression and classification tasks that had previously seen only incremental accuracy improvements. Many other fields would benefit significantly from machine learning (ML)-based inference, but in these fields data collection or labeling is expensive. In these small data domains, the challenge is to learn efficiently, with the same performance, from less data. Many applications would benefit from a strong inference framework with deep structure that: (i) works with limited labeled training samples; (ii) integrates explicit (structural or data-driven) domain knowledge into the inference model as editable priors that constrain the search space; and (iii) maximizes the generalization of learning across domains. My research explores a generalized ML approach to the small data problem that leverages existing research and fills in key gaps with original work. There are two basic approaches to reducing data needs during model training: (1) decreasing the learning complexity of the inference model via data-efficient machine learning, and (2) incorporating domain knowledge into the learning pipeline through data-driven or simulation-based generative models. In this talk, I present my recent work on merging the benefits of these two approaches to enable the training of robust and accurate (i.e., strong) inference models that can be applied to real-world problems constrained by limited data.

My plan to achieve this aim is structured in four research thrusts: (i) introduction of physics- and/or data-driven computational models, here referred to as a weak generator, to synthesize enough labeled data in an adjacent domain; (ii) design and analysis of unsupervised domain adaptation techniques to close the gap between the domain-adjacent and domain-specific data distributions; (iii) combined use of the weak generator, a weak inference model, and an adversarial framework to refine the domain-adjacent dataset using unlabeled domain-specific data; and (iv) development and analysis of co-labeling/active learning techniques to select the most informative data to refine and adapt the weak inference model into a strong inference model in the target application.
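As background for thrust (ii), the following is a minimal, illustrative sketch, not code from the talk, of one standard unsupervised domain adaptation technique: a DANN-style gradient-reversal setup in PyTorch that trains a classifier on labeled synthetic ("domain-adjacent") data while pushing its features to be indistinguishable from unlabeled real ("domain-specific") data. All network sizes, names, and hyperparameters are placeholder assumptions.

```python
# Illustrative sketch only: adversarial domain adaptation via gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
label_classifier = nn.Linear(32, 10)    # trained on synthetic labels only
domain_classifier = nn.Linear(32, 2)    # synthetic-vs-real discriminator

opt = torch.optim.Adam(
    list(feature_extractor.parameters())
    + list(label_classifier.parameters())
    + list(domain_classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_syn, y_syn, x_real, lambd=0.1):
    """One update: task loss on synthetic data plus a domain-confusion loss."""
    f_syn = feature_extractor(x_syn)
    f_real = feature_extractor(x_real)

    task_loss = ce(label_classifier(f_syn), y_syn)

    feats = torch.cat([f_syn, f_real])
    domains = torch.cat([torch.zeros(len(x_syn), dtype=torch.long),
                         torch.ones(len(x_real), dtype=torch.long)])
    domain_loss = ce(domain_classifier(GradReverse.apply(feats, lambd)), domains)

    loss = task_loss + domain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for synthetic and real batches.
x_syn, y_syn = torch.randn(16, 64), torch.randint(0, 10, (16,))
x_real = torch.randn(16, 64)
print(train_step(x_syn, y_syn, x_real))
```

The gradient reversal makes the feature extractor work against the domain discriminator, so features learned from the weak generator's synthetic data remain useful on the unlabeled target-domain data.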

Bio: Professor Ostadabbas is an assistant professor in the Electrical and Computer Engineering Department of Northeastern University (NEU), Boston, Massachusetts, USA. She joined NEU in 2016 from Georgia Tech, where she was a post-doctoral researcher following completion of her PhD at the University of Texas at Dallas in 2014. At NEU, Professor Ostadabbas directs the Augmented Cognition Laboratory (ACLab), whose goal is to enhance human information-processing capabilities through the design of adaptive interfaces via physical, physiological, and cognitive state estimation. These interfaces are based on rigorous models adaptively parameterized using machine learning and computer vision algorithms. In particular, she has been integrating domain knowledge with machine learning by using physics-based simulation as a generative model for bootstrapping deep learning recognizers. Professor Ostadabbas is the co-author of more than 70 peer-reviewed journal and conference articles, and her research has received awards and funding from the National Science Foundation (NSF), MathWorks, Amazon AWS, Biogen, and NVIDIA. She co-organized the Multimodal Data Fusion (MMDF2018) workshop and an NSF PI mini-workshop on Deep Learning in Small Data, has co-organized the CVPR Workshop on Analysis and Modeling of Faces and Gestures since 2019, and was the program chair of the Machine Learning for Signal Processing workshop (MLSP 2019). Professor Ostadabbas is an associate editor of the IEEE Transactions on Biomedical Circuits and Systems, serves on the editorial boards of IEEE Sensors Letters and the Digital Biomarkers Journal, and has served as a technical chair or session chair at several signal processing and machine learning conferences. She is a member of IEEE, the IEEE Computer Society, IEEE Women in Engineering, the IEEE Signal Processing Society, IEEE EMBS, IEEE Young Professionals, the International Society for Virtual Rehabilitation (ISVR), and ACM SIGCHI.

__________________________________________________________________________________


Human Factors in Artificial Decision Making: Understanding Humans and Helping Humans Understand Learning Agents

Dr. Pedro Sequeira | Advanced Computer Scientist, SRI International, Artificial Intelligence Center

Abstract: In this talk I will give an overview of some of my research on the broad topic of human factors in artificial decision-making. I will start by showing how reinforcement learning (RL) agents, equipped with intrinsic motivation provided by emotion appraisal-like rewards, can learn more efficiently and overcome perceptual limitations. In the second part of the talk, I will present a program synthesis approach for automated cognitive behavior analysis (ACBA), in which genetic programming (GP) is used to search for programs that reproduce observed human decisions and thereby help us understand the underlying strategies and goals. I will show the results of an experiment in which we used ACBA-GP to analyze human negotiation behavior; the generated programs yielded strategies consistent with the way people with different personality traits approach negotiation tasks. Finally, I will give an overview of our work on explainable RL (XRL), where a framework based on interestingness elements identifies relevant decision points in an RL policy and helps explain an agent's behavior in a task. I will show the results of a user study in which we presented people with short video clips of RL agents, selected using our XRL framework, allowing the subjects to correctly identify the capabilities and limitations of different agents in a task.
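As a rough illustration of the first idea, an intrinsic, appraisal-like signal supplementing the extrinsic reward, here is a toy tabular Q-learning sketch (not the speaker's implementation) in which a novelty-style bonus encourages exploration of a small chain environment. The environment, constants, and the choice of novelty as the appraisal dimension are all illustrative assumptions.

```python
# Toy sketch only: tabular Q-learning with an intrinsic "novelty" reward bonus.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON, BETA = 0.1, 0.95, 0.1, 0.5  # BETA scales the intrinsic term
ACTIONS = [-1, +1]                                  # move left / right on a chain

Q = defaultdict(float)          # Q[(state, action)]
visits = defaultdict(int)       # state visit counts used for the novelty bonus

def intrinsic_reward(state):
    """Novelty-style appraisal: rarely visited states feel more 'surprising'."""
    return 1.0 / (1 + visits[state])

def step(state, action):
    """A 10-state chain: extrinsic reward only at the right end (state 9)."""
    nxt = max(0, min(9, state + action))
    return nxt, (1.0 if nxt == 9 else 0.0), nxt == 9

for episode in range(200):
    state, done = 0, False
    while not done:
        visits[state] += 1
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, r_ext, done = step(state, action)
        r = r_ext + BETA * intrinsic_reward(nxt)        # combined reward signal
        target = r + (0 if done else GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

print("Greedy action from state 0:", max(ACTIONS, key=lambda a: Q[(0, a)]))
```

Because unvisited states yield a larger bonus, the agent is drawn toward the sparse extrinsic reward far sooner than with the extrinsic signal alone, which is the intuition behind intrinsically motivated RL in the talk.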

Bio: Dr. Pedro Sequeira is an advanced computer scientist at SRI International in the Artificial Intelligence Center (AIC). His research interests lie mainly in the field of machine learning (ML) and involve the creation of autonomous and adaptive systems that learn and reason under uncertainty. His approach is based on creating ML mechanisms inspired by human learning and decision-making, and on using ML to better understand how humans learn and make decisions in complex tasks. Prior to joining SRI, Dr. Sequeira was an associate research scientist at Northeastern University working in the Cognitive Embodied Social Agents Research (CESAR) lab, led by Prof. Stacy Marsella, on modeling human decision-making from observation and on multiagent systems in the context of pharmaceutical supply chains. Dr. Sequeira completed his Ph.D. in Information Systems and Computer Engineering in 2013 at Instituto Superior Técnico (IST), Universidade de Lisboa, Portugal, under the supervision of Prof. Ana Paiva and Prof. Francisco S. Melo. His thesis focused on building more flexible and robust reward mechanisms for intrinsically motivated reinforcement learning agents, based on appraisal theories of emotions.

_________________________________________

Sign up to receive further AIX seminar notifications.

Presented by the Institute for Experiential Robotics at Northeastern University


Details

Date: March 3, 2021
Time: 3:00 pm - 4:00 pm
Website: https://mailchi.mp/90de5a73951f/aix-seminar-series-4841667?e=dfa036360f
Organizer: Other
Department: Electrical and Computer Engineering
Topics: Research, Seminar
Audience: Undergraduate, Graduate, Alumni, Student Groups, Faculty, Staff