We also read about deep learning (even when latent variables are nowhere to be seen).
We moved our activities to this page.
- Unsupervised Learning of Task-Specific Tree Structures with Tree-LSTMs
- Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs
- Grammar Variational Autoencoder
- Pointer Networks
- Key-Value Memory Networks for Directly Reading Documents
- June 30: Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation and Gumbel relaxations (a and b)
- June 15: Attention Is All You Need
- June 9: Learning Structured Text Representations and Structured Attention Networks
- May 17: Frustratingly Short Attention Spans in Neural Language Modeling
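For the Gumbel-relaxations entry above, a minimal sketch of the Gumbel-softmax trick in NumPy (the temperature `tau` and the example logits are illustrative choices, not from any of the listed papers): categorical sampling is made differentiable by adding Gumbel noise to the logits and passing the result through a tempered softmax.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a 'soft' one-hot sample from a categorical distribution
    parameterized by `logits`, using the Gumbel-softmax relaxation."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via inverse transform sampling
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()            # shift for numerical stability
    e = np.exp(y)
    return e / e.sum()

sample = gumbel_softmax(np.array([1.0, 2.0, 3.0]), tau=0.5)
# `sample` is non-negative and sums to 1; lowering tau pushes it
# toward a one-hot vector, recovering a hard categorical sample
```

As `tau` goes to 0 the relaxation approaches discrete sampling, which is the basis of the straight-through variants discussed in the conditional-computation paper.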