I am an assistant professor in computational linguistics at the Institute for Logic, Language and Computation, working on machine learning for natural language processing. Some of the problems I've looked into are machine translation, word alignment, textual entailment, paraphrasing, and question answering. My interests sit at the intersection of disciplines such as formal languages, machine learning, approximate inference, global optimisation, and computational linguistics.
Recently, I’ve developed quite an interest in Bayesian deep learning. In particular, I’m developing probabilistic neural network models that reason with and induce forms of discrete generalisation such as trees and graphs.
If you are looking for a project with me, you might want to start by joining one of our reading groups. I've put together a list of resources that I think may help you navigate the landscape of deep generative models. And finally, I've made some technical notes available on github.
If you need to find me, try Science Park 107 (F1.01). Here is my public calendar.
- Two papers at ACL 2019 and one at UAI 2019!
- Our VI tutorial has a new face
- My research group now has a name and a page!
- Our VITutorial is visiting Moscow: here is the material and programme! Thanks Yandex!
- New paper on multi-hop question answering accepted at NAACL 2019
- Gourmet is now live!
- Check out our ACL 2018 paper on latent variation in translation data (code).
- Philip and I are going to present our tutorial on variational inference and deep generative models at ACL 2018! See you in Melbourne!
- We will present our work on generative models of joint word representation and alignment at NAACL 2018 (code and paper).
- Check out the material from our DGM day at UvA, a mini-workshop on variational inference and deep generative models in NLP.