I'm interested in developing interpretable machine learning models, as well as the methodology and mathematics for understanding existing models in general.
I believe that without interpretability, applying machine learning in some areas is socially irresponsible. Unfortunately, I don't think there is enough research on interpretability, as most work revolves around beating the state-of-the-art. I want to change that, to do good.
I've published 1) at ICLR 2020, where I received a spotlight award; 2) in the SEDL workshop at NeurIPS 2019; and 3) in the Distill.pub journal; Practical AI interviewed me about this work.
I've written a blog post about my life as an independent researcher that went quite viral. I'm looking for a PhD or Research Software Engineer position. Please reach out if you like my work!