I'm interested in developing interpretable machine learning models, as well as the methodology and mathematics for understanding existing models more generally.
I believe that without interpretability, applying machine learning in some areas is socially irresponsible. Unfortunately, I don't think there is enough research on interpretability, as most work focuses on beating the state of the art. I want to change that and do some good.
I've published at ICLR 2020, where my paper received a spotlight. I've also published in Distill.pub; Practical AI interviewed me about this.
I'm looking for a PhD or Research Software Engineer position. Please reach out if you like my work!