Aria Masoomi PhD Proposal Review
November 29, 2023 @ 3:00 pm - 4:00 pm
Making Deep Neural Networks Transparent
Prof. Jennifer Dy (Advisor)
Prof. Mario Sznaier
Prof. Eduardo Sontag
Prof. Peter Castaldi
As machine learning algorithms are deployed across an ever-wider variety of domains, it is imperative to make these often black-box models transparent.
The ability to interpret and comprehend the reasoning behind machine learning models plays a pivotal role in increasing user trust. It not only offers insights into how a model functions but also opens avenues for model enhancements.
This research delves into the realm of interpretability, focusing on the dichotomy between ‘intrinsic’ and ‘post hoc’ interpretability. Intrinsic interpretability involves constraining the complexity of the machine learning model itself, resulting in models inherently interpretable due to their simplicity, such as decision trees or sparse linear regression. On the other hand, post hoc interpretability employs techniques that assess the model’s behavior after training, offering insights into the model’s outcomes. Examples of post hoc techniques include permutation feature importance and the Shapley value method for feature importance.
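To illustrate the post hoc category, the permutation feature importance idea mentioned above can be sketched in a few lines: shuffle one feature at a time and measure how much the model's score degrades. This is a minimal, generic sketch, not the proposal's own method; the `model`, `metric`, and toy data below are hypothetical stand-ins.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Post hoc importance: permute each feature and record the score drop.

    A large drop means the model relied on that feature; near zero means
    the feature was unused. `model` is any callable X -> predictions.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle column j only, breaking its link to the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, model(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: a "model" that depends only on the first of three features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]
r2 = lambda yt, yp: 1 - np.sum((yt - yp) ** 2) / np.sum((yt - yt.mean()) ** 2)
imp = permutation_importance(model, X, y, r2)
# Only the first feature should receive a large positive importance.
```

Because the technique treats the model purely as a function from inputs to predictions, it applies after training to any black-box model, which is exactly what distinguishes post hoc from intrinsic interpretability.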
The core contribution of this thesis proposal lies in the development of novel methods to enhance both intrinsic and post hoc interpretability. These methods aim to advance the field by offering new perspectives on understanding machine learning models, thereby contributing to the ongoing discourse on model transparency and user trust.