Deep generative models (DGMs) are probabilistic models parametrised by neural networks (NNs). DGMs combine the power of NNs with the generality of the probabilistic learning framework, allowing a modeller to be more explicit about her statistical assumptions. To unlock this power, however, one must consider efficient ways to approach probabilistic inference. Variational inference (VI) has surfaced as the method of choice; however, efficient and effective VI for DGMs requires low-variance gradient estimation for stochastic computation graphs (Kingma and Welling, 2013; Rezende et al., 2014; Titsias and Lazaro-Gredilla, 2014). In this talk I will present an overview of deep generative modelling, amortised variational inference, and the mathematics behind low-variance reparameterised gradients.
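
For a flavour of the reparameterised-gradient idea, here is a minimal sketch in PyTorch (the variable names, toy values, and stand-in loss are illustrative assumptions, not material from the talk): instead of sampling z directly from N(mu, sigma^2), one writes z = mu + sigma * eps with eps ~ N(0, I), so the sample becomes a differentiable function of the variational parameters and a Monte Carlo estimate of the objective admits low-variance pathwise gradients.

```python
import torch

# Hypothetical variational parameters for a 2-dimensional latent variable
# (illustrative values only; in a VAE these would come from an encoder NN).
mu = torch.tensor([0.5, -1.0], requires_grad=True)
log_sigma = torch.tensor([0.0, 0.2], requires_grad=True)

# Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I).
# The randomness is pushed into eps, so gradients flow through mu and log_sigma.
eps = torch.randn_like(mu)
z = mu + torch.exp(log_sigma) * eps

# Stand-in for a loss term that depends on the sample z; in a real model this
# would be the negative ELBO (reconstruction term plus KL divergence).
loss = (z ** 2).sum()
loss.backward()

print(mu.grad, log_sigma.grad)  # low-variance pathwise gradient estimates
```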