I'm a PhD candidate at Mila, researching interpretability for Natural Language Processing, with a primary focus on ensuring that interpretability methods provide valid explanations. My supervisors are Prof. Sarath Chandar and Prof. Siva Reddy. Before that, I was an independent researcher, also working on interpretability.
Neural networks are very complex, and their logic is not transparent to users or developers. Providing explanations of neural networks is called interpretability, and I believe that deploying machine learning in some areas is socially irresponsible without it. Unfortunately, there is not enough research in this area: most work revolves around beating well-defined benchmarks, while what makes a "good" explanation remains ambiguous. I want to change that. My compass is to ground my research in real-world settings, drawing on my 3 years of industry experience in machine learning.
During my PhD I have published at venues including ICML, ACL, EMNLP, ACM, and Distill, and given invited talks about my work. In particular, I was invited by Sara Hooker to give the inaugural talk at Cohere for AI.
Before starting my PhD, I published first in Distill.pub and later at ICLR 2020, where my paper received a spotlight. Both of these works received a lot of attention, and a blog post I wrote about my life as an independent researcher went quite viral. All of this also led to several interviews and invited talks.
Before academia, I worked in industry on machine learning for 3 years. One of my projects was implementing clinic.js, which has become the de facto profiling tool for JavaScript and has won awards. I was also a very active open-source contributor in JavaScript: I have helped develop Node.js since 2011, working on major core components and infrastructure, and serving on several steering committees. Finally, my own open-source modules were downloaded 173 million times in 2023 alone.