I am an assistant professor in computational linguistics at the Institute for Logic, Language and Computation working on machine learning for natural language processing. Some of the problems I’ve looked into are machine translation, word alignment, textual entailment, paraphrasing, and question answering. My interests sit at the intersection of disciplines such as formal languages, machine learning, approximate inference, global optimisation, and computational linguistics.
Recently, I’ve developed quite an interest in Bayesian deep learning. In particular, I’m developing probabilistic neural network models that reason with and induce forms of discrete generalisation such as trees and graphs.
If you are looking for a project with me, you might want to start by joining one of our reading groups. I’ve put together a list of resources that I think may help you navigate the landscape of deep generative models. And finally, I’ve made some technical notes available on GitHub.
- We will present our work on modelling latent variation in translation data at ACL 2018 (code and paper).
- Philip and I are going to present our tutorial on variational inference and deep generative models at ACL 2018! See you in Melbourne!
- We will present our work on generative models of joint word representation and alignment at NAACL 2018 (code and paper).
- Check out the materials from our DGM day at UvA, a mini workshop on variational inference and deep generative models in NLP.