I'm a PhD candidate at Mila, researching interpretability for Natural Language Processing, with a primary focus on ensuring that interpretability methods provide valid explanations. My supervisors are Prof. Sarath Chandar and Prof. Siva Reddy. Before that, I was an independent researcher, also working on interpretability.
Neural networks are highly complex, and their logic is not transparent to users or developers. Providing explanations of neural networks is called interpretability, and I believe that deploying machine learning in some areas is socially irresponsible without it. Unfortunately, there is not enough research on interpretability: most work revolves around beating well-defined benchmarks, while what makes a "good" explanation remains ambiguous. I want to change that. My compass is to ground my research in real-world settings, drawing on my past experience as a freelancer in machine learning.
I've published 1) at ICLR 2020, where I received a spotlight, 2) in Distill, 3) in ACM Computing Surveys, and 4) at EMNLP 2022 and BlackboxNLP 2022. I've also been interviewed and given invited talks several times about my publications and work.
I've also written a blog post about my life as an independent researcher that went quite viral.