This group is interested in NLP applications, with a particular focus on parsing and machine translation.
Scheduled
Where:
When:
What:
Done
- Multi-Space VAE for Semi-Supervised Labeled Sequence Transduction
- A Challenge Set Approach to Evaluating Machine Translation
- The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time
- Universal Semantic Parsing
- Learning to parse from a semantic objective: It works. Is it syntax?
- Towards Bidirectional Hierarchical Representations for Attention-Based Neural Machine Translation
- Neural Machine Translation with Gumbel-Greedy Decoding
- Generating Sentences from a Continuous Space
- Non-projective Dependency-based Pre-Reordering with Recurrent Neural Network for Machine Translation
- Semi-Supervised Learning with Ladder Networks
- Recurrent Additive Networks
- A Neural Attention Model for Sentence Summarization
- Short text clustering by finding core terms
- Probabilistic Matrix Factorization (Extra: A Tutorial on Principal Component Analysis)
- Dynamic Programming for Linear-Time Incremental Parsing
- Trainable Greedy Decoding for Neural Machine Translation
- Fully Character-Level Neural Machine Translation without Explicit Segmentation
- Neural Machine Translation in Linear Time
- A Convolutional Encoder Model for Neural Machine Translation
- A Theoretically Grounded Application of Dropout in Recurrent Neural Networks (Extra: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning)
- The Neural Noisy Channel
- Recurrent Neural Network Grammars (Extra: Transition-Based Dependency Parsing with Stack Long Short-Term Memory and What Do Recurrent Neural Network Grammars Learn About Syntax?)
- Tree-To-Sequence Attentional Neural Machine Translation
- LSTM: A Search Space Odyssey
- Nonparametric Spherical Topic Modeling with Word Embeddings
- Word Embeddings as Metric Recovery in Semantic Spaces
- Variational Neural Machine Translation
- Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
- Incremental Parsing with Minimal Features Using Bi-Directional LSTM (Extra: Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations)
- Most “babies” are “little” and most “problems” are “huge”: Compositional Entailment in Adjective-Nouns