I am an assistant professor in computational linguistics at the Institute for Logic, Language and Computation working on machine learning for natural language processing. Some of the problems I’ve looked into are machine translation, word alignment, textual entailment, paraphrasing, and question answering. My interests sit at the intersection of disciplines such as formal languages, machine learning, approximate inference, global optimisation, and computational linguistics.
My current projects mostly focus on learning deep probabilistic latent variable models. I’m particularly interested in unsupervised induction of discrete structures such as trees and graphs. I’ve also been interested in making machine learning models more transparent about their inner workings.
If you are looking for a project with me, you might want to start by joining one of our reading groups. I’ve put together a list of resources that I think can help one navigate the landscape of deep generative models. Finally, I’ve made some technical notes available on GitHub.
If you need to find me, try Science Park 107 (F1.01). Here is my public calendar.
- Check the final version of our ICLR2020 paper.
- 2 papers at ACL2019 and 1 at UAI2019!
- Our VI tutorial has a new face
- My research group now has a name and a page!
- Our VITutorial is visiting Moscow: here is the material and programme! Thanks Yandex!
- New paper on multi-hop question answering accepted at NAACL19
- Gourmet is now live!
- Check our ACL18 paper on latent variation in translation data (code).
- Philip and I are going to present our tutorial on variational inference and deep generative models at ACL 2018! See you in Melbourne!
- We will present our work on generative models of joint word representation and alignment at NAACL18 (code and paper).
- Check the material of our DGM day at UvA, a mini-workshop on variational inference and deep generative models in NLP.