MT paper lists (by conference)

Papers!!!

List by Conference

ACL

2018

  1. Unsupervised Neural Machine Translation with Weight Sharing. paper

    Zhen Yang, Wei Chen, Feng Wang, Bo Xu

    Domain: Low-resource

  2. Triangular Architecture for Rare Language Translation. paper

    Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou and Shuai Ma

    Domain: Low-resource

  3. On the Limitations of Unsupervised Bilingual Dictionary Induction. paper

    Anders Søgaard, Sebastian Ruder, Ivan Vulić

    Domain: Low-resource

  4. The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. paper

    Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Noam Shazeer, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Mike Schuster, Zhifeng Chen

    Domain: Neural Network Architecture

  5. Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings. paper

    Shaohui Kuang, Junhui Li, Antonio Branco, Weihua Luo, Deyi Xiong

    Domain: Neural Network Architecture

  6. Accelerating Neural Transformer via an Average Attention Network. paper

    Biao Zhang, Deyi Xiong, and Jinsong Su

    Domain: Neural Network Architecture

  7. Neural Hidden Markov Model for Machine Translation. paper

    Weiyue Wang, Derui Zhu, Tamer Alkhouli, Zixuan Gan, Hermann Ney

    Domain: Neural Network Architecture

  8. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates. paper

    Taku Kudo

    Domain: Robustness of NMT

  9. A Stochastic Decoder for Neural Machine Translation. paper

    Philip Schulz, Wilker Aziz, Trevor Cohn

    Domain: Robustness of NMT

  10. Forest-Based Neural Machine Translation. paper

    Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Tiejun Zhao, Eiichiro Sumita

    Domain: Robustness of NMT

  11. Towards Robust Neural Machine Translation. paper

    Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai and Yang Liu

    Domain: Robustness of NMT

2019

  1. Latent Variable Model for Multi-modal Translation. paper

    Iacer Calixto, Miguel Rios and Wilker Aziz

    Domain: Multimodal Translation

  2. Learning Deep Transformer Models for Machine Translation. paper

    Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong and Lidia S. Chao

    Domain: Neural Network Architecture

  3. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion. paper

    Elena Voita, Rico Sennrich and Ivan Titov

    Domain: Context-aware NMT

  4. A Compact and Language-Sensitive Multilingual Translation Method. paper

    Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu and Chengqing Zong

    Domain: Training

  5. Robust Neural Machine Translation with Doubly Adversarial Inputs. paper

    Yong Cheng, Lu Jiang and Wolfgang Macherey

    Domain: Robustness of NMT

  6. Shared-Private Bilingual Word Embeddings for Neural Machine Translation. paper

    Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao and Jingbo Zhu

    Domain: Practical Problems in Machine Translation

  7. Unsupervised Parallel Sentence Extraction with Parallel Segment Detection Helps Machine Translation. paper

    Viktor Hangya and Alexander Fraser

    Domain: Low-resource

  8. Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation. paper

    Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita and Tiejun Zhao

    Domain: Low-resource

  9. Neural Machine Translation with Reordering Embeddings. paper

    Kehai Chen, Rui Wang, Masao Utiyama and Eiichiro Sumita

    Domain: Training

  10. Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation. paper

    Bram Bulte and Arda Tezcan

    Domain: Training

  11. Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies. paper

    Yunsu Kim, Yingbo Gao and Hermann Ney

    Domain: Domain Adaptation

  12. Bridging the Gap between Training and Inference for Neural Machine Translation. paper

    Wen Zhang, Yang Feng, Fandong Meng, Di You and Qun Liu

    Domain: Training

  13. Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations. paper

    Jiatao Gu, Yong Wang, Kyunghyun Cho and Victor O.K. Li

    Domain: Low-resource

  14. Lattice Transformer for Speech Translation. paper

    Pei Zhang, Niyu Ge, Boxing Chen and Kai Fan

    Domain: Practical Problems in Machine Translation

  15. Generalized Data Augmentation for Low-Resource Translation. paper

    Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos and Graham Neubig

    Domain: Low-resource

  16. Syntactically Supervised Transformers for Faster Neural Machine Translation. paper

    Nader Akoury, Kalpesh Krishna and Mohit Iyyer

    Domain: Practical Problems in Machine Translation

  17. Unsupervised Pivot Translation for Distant Languages. paper

    Yichong Leng, Xu Tan, Tao Qin, Xiang-Yang Li and Tie-Yan Liu

    Domain: Low-resource

  18. Dynamically Composing Domain-Data Selection with Clean-Data Selection by "Co-Curricular Learning" for Neural Machine Translation. paper

    Wei Wang, Isaac Caswell and Ciprian Chelba

    Domain: Training

  19. On the Word Alignment from Neural Machine Translation. paper

    Xintong Li, Guanlin Li, Lemao Liu, Max Meng and Shuming Shi

    Domain: Problem Analysis

  20. Imitation Learning for Non-Autoregressive Neural Machine Translation. paper

    Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin and Xu Sun

    Domain: Neural Network Architecture

  21. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation. paper

    Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li and Colin Raffel

    Domain: Neural Network Architecture

  22. Domain Adaptation of Neural Machine Translation by Lexicon Induction. paper

    Junjie Hu, Mengzhou Xia, Graham Neubig and Jaime Carbonell

    Domain: Domain Adaptation

  23. Beyond BLEU: Training Neural Machine Translation with Semantic Similarity. paper

    John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel and Graham Neubig

    Domain: Problem Analysis

  24. An Effective Approach to Unsupervised Machine Translation. paper

    Mikel Artetxe, Gorka Labaka and Eneko Agirre

    Domain: Low-resource

  25. Distilling Translations with Visual Awareness. paper

    Julia Ive, Pranava Madhyastha and Lucia Specia

    Domain: Multimodal Translation

  26. Reference Network for Neural Machine Translation. paper

    Han Fu, Chenghao Liu and Jianling Sun

    Domain: Neural Network Architecture

  27. Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation. paper

    Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen and Jie Zhou

    Domain: Problem Analysis

  28. Sparse Sequence-to-Sequence Models. paper

    Ben Peters, Vlad Niculae, André F.T. Martins

    Domain: Neural Network Architecture

  29. Look Harder: A Neural Machine Translation Model with Hard Attention. paper

    Sathish Reddy Indurthi, Insoo Chung and Sangha Kim

    Domain: Neural Network Architecture

  30. Robust Neural Machine Translation with Joint Textual and Phonetic Embedding. paper

    Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong and Zhongjun He

    Domain: Robustness of NMT

  31. Self-Supervised Neural Machine Translation. paper

    Dana Ruiter, Cristina España-Bonet and Josef van Genabith

    Domain: Low-resource

  32. Soft Contextual Data Augmentation for Neural Machine Translation. paper

    Jinhua Zhu, Fei Gao, Lijun Wu, Yingce Xia, Tao Qin, Wengang Zhou, Xueqi Cheng and Tie-Yan Liu

    Domain: Low-resource

  33. Domain Adaptive Inference for Neural Machine Translation. paper

    Danielle Saunders, Felix Stahlberg, Adrià de Gispert and Bill Byrne

    Domain: Domain Adaptation

  34. Revisiting Low-Resource Neural Machine Translation: A Case Study. paper

    Rico Sennrich and Biao Zhang

    Domain: Problem Analysis

  35. Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation. paper

    Xinyi Wang and Graham Neubig

    Domain: Practical Problems in Machine Translation

  36. Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach. paper

    Zonghan Yang, Yong Cheng, Yang Liu and Maosong Sun

    Domain: Practical Problems in Machine Translation

  37. Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation. paper

    Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight and Jonathan May

    Domain: Low-resource

  38. Generating Diverse Translations with Sentence Codes. paper

    Raphael Shu, Hideki Nakayama and Kyunghyun Cho

    Domain: Practical Problems in Machine Translation

  39. Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation. paper

    Elizabeth Salesky, Matthias Sperber and Alan W Black

    Domain: Practical Problems in Machine Translation

  40. Training Neural Machine Translation To Apply Terminology Constraints. paper

    Georgiana Dinu, Prashant Mathur, Marcello Federico and Yaser Al-Onaizan

    Domain: Practical Problems in Machine Translation

  41. Exploiting Sentential Context for Neural Machine Translation. paper

    Xing Wang, Zhaopeng Tu, Longyue Wang and Shuming Shi

    Domain: Context-aware NMT

  42. Depth Growing for Neural Machine Translation. paper

    Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai and Tie-Yan Liu

    Domain: Neural Network Architecture

  43. Effective Adversarial Regularization for Neural Machine Translation. paper

    Motoki Sato, Jun Suzuki and Shun Kiyono

    Domain: Neural Network Architecture

  44. Evaluating Gender Bias in Machine Translation. paper

    Gabriel Stanovsky, Noah A. Smith and Luke Zettlemoyer

    Domain: Problem Analysis

  45. Putting Evaluation in Context: Contextual Embeddings improve Machine Translation Evaluation. paper

    Nitika Mathur, Timothy Baldwin and Trevor Cohn

    Domain: Context-aware NMT

  46. Sentence-Level Agreement for Neural Machine Translation. paper

    Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang and Tiejun Zhao

    Domain: Problem Analysis

  47. Simple and Effective Paraphrastic Similarity from Parallel Translations. paper

    John Wieting, Kevin Gimpel, Graham Neubig and Taylor Berg-Kirkpatrick

    Domain: Problem Analysis

  48. Bilingual Lexicon Induction through Unsupervised Machine Translation. paper

    Mikel Artetxe, Gorka Labaka and Eneko Agirre

    Domain: Practical Problems in Machine Translation

  49. Better OOV Translation with Bilingual Terminology Mining. paper

    Matthias Huck, Viktor Hangya and Alexander Fraser

    Domain: Practical Problems in Machine Translation

  50. Lattice-Based Transformer Encoder for Neural Machine Translation. paper

    Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang and Kehai Chen

    Domain: Neural Network Architecture

  51. Simultaneous Translation with Flexible Policy via Restricted Imitation Learning. paper

    Baigong Zheng, Renjie Zheng, Mingbo Ma and Liang Huang

    Domain: Practical Problems in Machine Translation

2020

  1. Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation. paper

    Arya D. McCarthy, Xian Li, Jiatao Gu, Ning Dong

    Domain: Neural Network Architecture

  2. Multiscale Collaborative Deep Models for Neural Machine Translation. paper

    Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng and Weihua Luo

    Domain: Neural Network Architecture

  3. Hard-Coded Gaussian Attention for Neural Machine Translation. paper

    Weiqiu You, Simeng Sun and Mohit Iyyer

    Domain: Neural Network Architecture

  4. Improving Neural Machine Translation with Soft Template Prediction. paper

    Jian Yang, Shuming Ma, Dongdong Zhang, Zhoujun Li and Ming Zhou

    Domain: Neural Network Architecture

  5. Learning Source Phrase Representations for Neural Machine Translation. paper

    Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu, Jingyi Zhang

    Domain: Neural Network Architecture

  6. A Reinforced Generation of Adversarial Examples for Neural Machine Translation. paper

    Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, Jiajun Chen

    Domain: Data Augmentation

  7. Boosting Neural Machine Translation with Similar Translations. paper

    Jitao Xu, Josep Crego, Jean Senellart

    Domain: Data Augmentation

  8. Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation. paper

    Xabier Soto, Dimitar Shterionov, Alberto Poncelas and Andy Way

    Domain: Data Augmentation

  9. AdvAug: Robust Adversarial Augmentation for Neural Machine Translation. paper

    Yong Cheng, Lu Jiang, Wolfgang Macherey, Jacob Eisenstein

    Domain: Training

  10. Norm-Based Curriculum Learning for Neural Machine Translation. paper

    Xuebo Liu, Houtim Lai, Derek F. Wong and Lidia S. Chao

    Domain: Training

  11. Uncertainty-Aware Curriculum Learning for Neural Machine Translation. paper

    Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan and Lidia S. Chao

    Domain: Training

  12. Balancing Training for Multilingual Neural Machine Translation. paper

    Xinyi Wang, Yulia Tsvetkov and Graham Neubig

    Domain: Multilingual Translation

  13. Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation. paper

    Biao Zhang, Philip Williams, Ivan Titov and Rico Sennrich

    Domain: Multilingual Translation

  14. Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation. paper

    Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita and Tiejun Zhao

    Domain: Multilingual Translation

  15. Translationese as a Language in “Multilingual” NMT. paper

    Parker Riley, Isaac Caswell, Markus Freitag and David Grangier

    Domain: Multilingual Translation

  16. A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation. paper

    Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie Zhou and Jiebo Luo

    Domain: Multimodal Translation

  17. Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting. paper

    Po-Yao Huang, Junjie Hu, Xiaojun Chang and Alexander Hauptmann

    Domain: Multimodal Translation

  18. Learning a Multi-Domain Curriculum for Neural Machine Translation. paper

    Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, Zarana Parekh

    Domain: Multi-Domain

  19. Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing. paper

    Haoming Jiang, Chen Liang, Chong Wang, Tuo Zhao

    Domain: Multi-Domain

  20. Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem. paper

    Danielle Saunders and Bill Byrne

    Domain: Domain Adaptation

  21. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. paper

    Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

    Domain: Pre-training

  22. Curriculum Pre-training for End-to-End Speech Translation. paper

    Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou and Zhenglu Yang

    Domain: Pre-training

  23. Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences. paper

    Xiangyu Duan, Baijun Ji, Hao Jia, Min Tan, Min Zhang, Boxing Chen, Weihua Luo, Yue Zhang

    Domain: Low-resource

  24. A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation. paper

    Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou and Shuai Ma

    Domain: Low-resource

  25. Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain. paper

    Shadi Saleh and Pavel Pecina

    Domain: Cross-Lingual Information Retrieval

  26. Dynamic Programming Encoding for Subword Segmentation in Neural Machine Translation. paper

    Xuanli He, Gholamreza Haffari, Mohammad Norouzi

    Domain: Subword Segmentation

  27. In Neural Machine Translation, What Does Transfer Learning Transfer? paper

    Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield and Rico Sennrich

    Domain: Problem Analysis

  28. Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation. paper

    Junliang Guo, Linli Xu and Enhong Chen

    Domain: Non-Autoregressive Translation

  29. Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation. paper

    Qiu Ran, Yankai Lin, Peng Li, Jie Zhou

    Domain: Non-Autoregressive Translation

  30. Multi-Hypothesis Machine Translation Evaluation. paper

    Marina Fomicheva, Lucia Specia and Francisco Guzmán

    Domain: Evaluation

  31. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation. paper

    Wei Zhao, Goran Glavaš, Maxime Peyrard, Yang Gao, Robert West and Steffen Eger

    Domain: Evaluation

  32. On The Evaluation of Machine Translation Systems Trained With Back-Translation. paper

    Sergey Edunov, Myle Ott, Marc’Aurelio Ranzato and Michael Auli

    Domain: Evaluation

  33. Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics. paper

    Nitika Mathur, Timothy Baldwin and Trevor Cohn

    Domain: Evaluation

  34. Evaluating Explanation Methods for Neural Machine Translation. paper

    Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang and Shuming Shi

    Domain: Evaluation

  35. On the Inference Calibration of Neural Machine Translation. paper

    Shuo Wang, Zhaopeng Tu, Shuming Shi and Yang Liu

    Domain: Problem Analysis

  36. Regularized Context Gates on Transformer for Machine Translation. paper

    Xintong Li, Lemao Liu, Rui Wang, Guoping Huang and Max Meng

    Domain: Neural Network Architecture

  37. Character-Level Translation with Self-attention. paper

    Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu and Richard H.R. Hahnloser

    Domain: Neural Network Architecture

  38. Variational Neural Machine Translation with Normalizing Flows. paper

    Hendra Setiawan, Matthias Sperber, Udhyakumar Nallasamy, Matthias Paulik

    Domain: Neural Network Architecture

  39. Enhancing Machine Translation with Dependency-Aware Self-Attention. paper

    Emanuele Bugliarello and Naoaki Okazaki

    Domain: Neural Network Architecture

  40. Content Word Aware Neural Machine Translation. paper

    Kehai Chen, Rui Wang, Masao Utiyama and Eiichiro Sumita

    Domain: Training

  41. Using Context in Neural Machine Translation Training Objectives. paper

    Danielle Saunders, Felix Stahlberg and Bill Byrne

    Domain: Training

  42. A Simple and Effective Unified Encoder for Document-Level Machine Translation. paper

    Shuming Ma, Dongdong Zhang and Ming Zhou

    Domain: Context-Aware Translation

  43. Contextual Neural Machine Translation Improves Translation of Cataphoric Pronouns. paper

    KayYen Wong, Sameen Maruf and Gholamreza Haffari

    Domain: Context-Aware Translation

  44. Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation. paper

    Bei Li, Hui Liu, Ziyang Wang, Yufan Jiang, Tong Xiao, Jingbo Zhu, Tongran Liu and Changliang Li

    Domain: Context-Aware Translation

  45. ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation. paper

    Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman and Kevin Gimpel

    Domain: Non-Autoregressive Translation

  46. Lexically Constrained Neural Machine Translation with Levenshtein Transformer. paper

    Raymond Hendy Susanto, Shamil Chollampatt and Liling Tan

    Domain: Non-Autoregressive Translation

  47. Improving Non-autoregressive Neural Machine Translation with Monolingual Data. paper

    Jiawei Zhou and Phillip Keung

    Domain: Non-Autoregressive Translation

  48. Evaluating Robustness to Input Perturbations for Neural Machine Translation. paper

    Xing Niu, Prashant Mathur, Georgiana Dinu and Yaser Al-Onaizan

    Domain: Evaluation

  49. It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information. paper

    Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell and Naoaki Okazaki

    Domain: Evaluation

  50. Automatic Machine Translation Evaluation using Source Language Inputs and Cross-lingual Language Model. paper

    Kosuke Takahashi, Katsuhito Sudoh and Satoshi Nakamura

    Domain: Evaluation

  51. Language-aware Interlingua for Multilingual Neural Machine Translation. paper

    Changfeng Zhu, Heng Yu, Shanbo Cheng and Weihua Luo

    Domain: Multilingual Translation

  52. Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation. paper

    Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudugunta, Naveen Arivazhagan and Yonghui Wu

    Domain: Multilingual Translation

  53. Multimodal Transformer for Multimodal Machine Translation. paper

    Shaowei Yao and Xiaojun Wan

    Domain: Multimodal Translation

  54. Opportunistic Decoding with Timely Correction for Simultaneous Translation. paper

    Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu and Liang Huang

    Domain: Simultaneous Translation

  55. Simultaneous Translation Policies: From Fixed to Adaptive. paper

    Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu and Liang Huang

    Domain: Simultaneous Translation

  56. Tagged Back-translation Revisited: Why Does It Really Work? paper

    Benjamin Marie, Raphael Rubino and Atsushi Fujita

    Domain: Data Augmentation

  57. Modeling Word Formation in English–German Neural Machine Translation. paper

    Marion Weller-Di Marco and Alexander Fraser

    Domain: Practical Problems in Machine Translation

  58. “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases. paper

    Dirk Hovy, Federico Bianchi and Tommaso Fornaciari

    Domain: Practical Problems in Machine Translation

  59. On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation. paper

    Chaojun Wang and Rico Sennrich

    Domain: Practical Problems in Machine Translation

EMNLP

2018

  1. Meta-Learning for Low-Resource Neural Machine Translation. paper

    Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho and Victor O.K. Li

    Domain: Low-resource

  2. Unsupervised Statistical Machine Translation. paper

    Mikel Artetxe, Gorka Labaka and Eneko Agirre

    Domain: Low-resource

  3. Phrase-Based & Neural Unsupervised Machine Translation. paper

    Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, Marc’Aurelio Ranzato

    Domain: Low-resource

  4. Contextual Parameter Generation for Universal Neural Machine Translation. paper

    Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, Tom M. Mitchell

    Domain: Domain Adaptation

  5. Multi-Domain Neural Machine Translation with Word-Level Domain Context Discrimination. paper

    Jiali Zeng, Jinsong Su, Huating Wen, Yang Liu, Jun Xie, Yongjing Yin and Jianqiang Zhao

    Domain: Domain Adaptation

  6. A Study of Reinforcement Learning for Neural Machine Translation. paper

    Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai and Tie-Yan Liu

    Domain: Training

  7. A Visual Attention Grounding Neural Model for Multimodal Machine Translation. paper

    Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, Zhou Yu

    Domain: Multimodal Translation

  8. Adaptive Multi-pass Decoder for Neural Machine Translation. paper

    Xinwei Geng, Xiaocheng Feng, Bing Qin and Ting Liu

    Domain: Neural Network Architecture

  9. Exploiting Deep Representations for Neural Machine Translation. paper

    Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi and Tong Zhang

    Domain: Neural Network Architecture

  10. Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks. paper

    Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin and Huiji Zhang

    Domain: Neural Network Architecture

  11. Modeling Localness for Self-Attention Networks. paper

    Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao and Tong Zhang

    Domain: Neural Network Architecture

  12. Addressing Troublesome Words in Neural Machine Translation. paper

    Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong and Hua Wu

    Domain: Neural Network Architecture

  13. Revisiting Character-Based Neural Machine Translation with Capacity and Compression. paper

    Colin Cherry, George Foster, Ankur Bapna, Orhan Firat and Wolfgang Macherey

    Domain: Neural Network Architecture

  14. Speeding Up Neural Machine Translation Decoding by Cube Pruning. paper

    Wen Zhang, Liang Huang, Yang Feng, Lei Shen and Qun Liu

    Domain: Decoding

  15. Semi-Autoregressive Neural Machine Translation. paper

    Chunqi Wang, Ji Zhang and Haiqing Chen

    Domain: Decoding

  16. Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing. paper

    Jetic Gu, Hassan S. Shavarani and Anoop Sarkar

    Domain: External Knowledge

  17. Back-Translation Sampling by Targeting Difficult Words in Neural Machine Translation. paper

    Marzieh Fadaee and Christof Monz

    Domain: Back Translation

  18. Understanding Back-Translation at Scale. paper

    Sergey Edunov, Myle Ott, Michael Auli and David Grangier

    Domain: Back Translation

  19. MTNT: A Testbed for Machine Translation of Noisy Text. paper

    Paul Michel and Graham Neubig

    Domain: Corpus

  20. Improving the Transformer Translation Model with Document-Level Context. paper

    Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang and Yang Liu

    Domain: Document-Level NMT

  21. Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures. paper

    Gongbo Tang, Mathias Müller, Annette Rios and Rico Sennrich

    Domain: Analysis

  22. Compact Personalized Models for Neural Machine Translation. paper

    Joern Wuebker, Patrick Simianer and John DeNero

    Domain: Domain Adaptation

  23. Rapid Adaptation of Neural Machine Translation to New Languages. paper

    Graham Neubig and Junjie Hu

    Domain: Domain Adaptation

  24. The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation. paper

    Arianna Bisazza and Clara Tump

    Domain: External Knowledge

  25. Context and Copying in Neural Machine Translation. paper

    Rebecca Knowles and Philipp Koehn

    Domain: Neural Network Architecture

  26. Adversarial Evaluation of Multimodal Machine Translation. paper

    Desmond Elliott

    Domain: Multimodal Translation

  27. Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation. paper

    Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, Qi Su

    Domain: Neural Network Architecture

  28. Encoding Gated Translation Memory into Neural Machine Translation. paper

    Qian Cao and Deyi Xiong

    Domain: Neural Network Architecture

  29. Towards Two-Dimensional Sequence to Sequence Model in Neural Machine Translation. paper

    Parnia Bahar, Christopher Brix and Hermann Ney

    Domain: Neural Network Architecture

  30. Training Deeper Neural Machine Translation Models with Transparent Attention. paper

    Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao and Yonghui Wu

    Domain: Neural Network Architecture

  31. Multi-Head Attention with Disagreement Regularization. paper

    Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu and Tong Zhang

    Domain: Neural Network Architecture

  32. Accelerating Asynchronous Stochastic Gradient Descent for Neural Machine Translation. paper

    Nikolay Bogoychev, Marcin Junczys-Dowmunt, Kenneth Heafield and Alham Fikri Aji

    Domain: Training

  33. Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation. paper

    Chenze Shao, Yang Feng, Xilin Chen

    Domain: Training

  34. Exploring Recombination for Efficient Decoding of Neural Machine Translation. paper

    Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita and Hai Zhao

    Domain: Decoding

  35. Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and Stopping Criteria for Neural Machine Translation. paper

    Yilin Yang, Liang Huang and Mingbo Ma

    Domain: Decoding

  36. End-to-End Non-Autoregressive Neural Machine Translation with Connectionist Temporal Classification. paper

    Jindřich Libovický and Jindřich Helcl

    Domain: Decoding

  37. Multi-Source Syntactic Neural Machine Translation. paper

    Anna Currey and Kenneth Heafield

    Domain: External Knowledge

  38. A Tree-based Decoder for Neural Machine Translation. paper

    Xinyi Wang, Hieu Pham, Pengcheng Yin and Graham Neubig

    Domain: External Knowledge

  39. SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation. paper

    Xinyi Wang, Hieu Pham, Zihang Dai and Graham Neubig

    Domain: Data Augmentation

  40. Fixing Translation Divergences in Parallel Corpora for Neural MT. paper

    MinhQuang Pham, Josep Crego, Jean Senellart and François Yvon

    Domain: Data Processing

  41. Getting Gender Right in Neural Machine Translation. paper

    Eva Vanmassenhove, Christian Hardmeier and Andy Way

    Domain: Corpus

  42. Document-Level Neural Machine Translation with Hierarchical Attention Networks. paper

    Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas and James Henderson

    Domain: Document-Level NMT

  43. Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation. paper

    Samuel Läubli, Rico Sennrich, Martin Volk

    Domain: Document-Level NMT

  44. Beyond Error Propagation in Neural Machine Translation: Characteristics of Language Also Matter. paper

    Lijun Wu, Xu Tan, Di He, Fei Tian, Tao Qin, Jianhuang Lai, Tie-Yan Liu

    Domain: Analysis

  45. Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach. paper

    Thuy-Trang Vu, Gholamreza Haffari

    Domain: Post-Editing

2019

  1. Explicit Cross-lingual Pre-training for Unsupervised Machine Translation. paper

    Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou and Shuai Ma

    Domain: Unsupervised

  2. Latent Part-of-Speech Sequences for Neural Machine Translation. paper

    Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, Niranjan Balasubramanian

    Domain: External Knowledge

  3. Towards Linear Time Neural Machine Translation with Capsule Networks. paper

    Mingxuan Wang

    Domain: Neural Network Architecture

  4. Iterative Dual Domain Adaptation for Neural Machine Translation. paper

    Jiali Zeng, Yang Liu, Jinsong Su, Yubing Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo

    Domain: Domain Adaptation

  5. Multi-agent Learning for Neural Machine Translation. paper

    Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang

    Domain: Ensemble Training

  6. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages. paper

    Yunsu Kim, Petre Petrov, Pavel Petrushkov, Shahram Khadivi, Hermann Ney

    Domain: Pre-training

  7. Context-Aware Monolingual Repair for Neural Machine Translation. paper

    Elena Voita, Rico Sennrich, Ivan Titov

    Domain: Document Level

  8. Multi-Granularity Self-Attention for Neural Machine Translation. paper

    Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

    Domain: Neural Network Architecture

  9. Dynamic Past and Future for Neural Machine Translation. paper

    Zaixiang Zheng, Shujian Huang, Zhaopeng Tu, Xin-Yu Dai, Jiajun Chen

    Domain: Coverage

  10. Towards Understanding Neural Machine Translation with Word Importance. paper

    Shilin He, Zhaopeng Tu, Xing Wang, Longyue Wang, Michael Lyu, Shuming Shi

    Domain: Interpretability

  11. Multilingual Neural Machine Translation with Language Clustering. paper

    Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, Tie-Yan Liu

    Domain: Multilingual

  12. Simple, Scalable Adaptation for Neural Machine Translation. paper

    Ankur Bapna, Orhan Firat

    Domain: Fine Tuning

  13. Controlling Text Complexity in Neural Machine Translation. paper

    Sweta Agrawal, Marine Carpuat

    Domain: Text Complexity

  14. Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation. paper

    Xin Tan, Longyin Zhang, Deyi Xiong, Guodong Zhou

    Domain: Document Level

  15. Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite. paper

    Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov

    Domain: Evaluation Measure

  16. Exploiting Monolingual Data at Scale for Neural Machine Translation. paper

    Lijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jianhuang Lai, Tie-Yan Liu

    Domain: Data Augmentation

  17. Machine Translation With Weakly Paired Documents. paper

    Lijun Wu, Jinhua Zhu, Di He, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu

    Domain: Data Augmentation

  18. Transformer Dissection: An Unified Understanding for Transformer’s Attention via the Lens of Kernel. paper

    Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

    Domain: Neural Network Architecture

  19. Attention is not not Explanation. paper

    Sarah Wiegreffe, Yuval Pinter

    Domain: Interpretability

  20. The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English. paper

    Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc’Aurelio Ranzato

    Domain: Corpus

  21. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. paper

    Elena Voita, Rico Sennrich, Ivan Titov

    Domain: Analysis

  22. Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention. paper

    Biao Zhang, Ivan Titov, Rico Sennrich

    Domain: Neural Network Architecture/Deep model

  23. Tree Transformer: Integrating Tree Structures into Self-Attention. paper

    Yaushian Wang, Hung-Yi Lee, Yun-Nung Chen

    Domain: Neural Network Architecture

  24. Adaptively Sparse Transformers. paper

    Gonçalo M. Correia, Vlad Niculae, André F. T. Martins

    Domain: Neural Network Architecture

  25. Jointly Learning to Align and Translate with Transformer Models. paper

    Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik

    Domain: Neural Network Architecture

  26. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. paper

    Hao Tan, Mohit Bansal

    Domain: Pre-training

  27. Encoders Help You Disambiguate Word Senses in Neural Machine Translation. paper

    Gongbo Tang, Rico Sennrich, Joakim Nivre

    Domain: Interpretability

  28. Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings. paper

    Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig

    Domain: Domain Adaptation

  29. Exploiting Multilingualism through Multistage Fine-Tuning for Low-Resource Neural Machine Translation. paper

    Raj Dabre, Atsushi Fujita, Chenhui Chu

    Domain: Multilingual / Low Resource

  30. Handling Syntactic Divergence in Low-resource Machine Translation. paper

    Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig

    Domain: Low Resource

  31. HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation. paper

    Brian Thompson, Rebecca Knowles, Xuan Zhang, Huda Khayrallah, Kevin Duh, Philipp Koehn

    Domain: External Knowledge

  32. Machine Translation for Machines: the Sentiment Classification Use Case. paper

    Amirhossein Tebbifakhr, Luisa Bentivogli, Matteo Negri, Marco Turchi

    Domain: Training

  33. Recurrent Positional Embedding for Neural Machine Translation. paper

    Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita

    Domain: Neural Network Architecture

  34. Self-Attention with Structural Position Representations. paper

    Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi

    Domain: Neural Network Architecture

  35. Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons. paper

    Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

    Domain: Neural Network Architecture

  36. Hint-Based Training for Non-Autoregressive Machine Translation. paper

    Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu

    Domain: Non-AutoRegressive Translation

  37. Simple and Effective Noisy Channel Modeling for Neural Machine Translation. paper

    Kyra Yee, Yann Dauphin, Michael Auli

    Domain: Neural Network Architecture

  38. Understanding Data Augmentation in Neural Machine Translation: Two Perspectives towards Generalization. paper

    Guanlin Li, Lemao Liu, Guoping Huang, Conghui Zhu, Tiejun Zhao

    Domain: Data Augmentation

NAACL

2018

  1. Universal Neural Machine Translation for Extremely Low Resource Languages. paper

    Jiatao Gu, Hany Hassan, Jacob Devlin and Victor O.K. Li

    Domain: Low-resource

  2. Neural Machine Translation for Bilingually Scarce Scenarios: A Deep Multi-task Learning Approach. paper

    Poorya Zaremoodi and Gholamreza Haffari

    Domain: Low-resource

  3. Evaluating Discourse Phenomena in Neural Machine Translation. paper

    Rachel Bawden, Rico Sennrich, Alexandra Birch and Barry Haddow

    Domain: Context-aware NMT

  4. On the Evaluation of Semantic Phenomena in Neural Machine Translation Using Natural Language Inference. paper

    Adam Poliak, Yonatan Belinkov, James Glass and Benjamin Van Durme

    Domain: Problem Analysis

  5. Improving Lexical Choice in Neural Machine Translation. paper

    Toan Q. Nguyen and David Chiang

    Domain: Neural Network Architecture

  6. Combining Character and Word Information in Neural Machine Translation Using a Multi-Level Attention. paper

    Huadong Chen, Shujian Huang, David Chiang, Xinyu Dai, Jiajun Chen

    Domain: Neural Network Architecture

  7. Dense Information Flow for Neural Machine Translation. paper

    Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu

    Domain: Neural Network Architecture

  8. Self-Attentive Residual Decoder for Neural Machine Translation. paper

    Lesly Miculicich Werlen, Nikolaos Pappas, Dhananjay Ram and Andrei Popescu-Belis

    Domain: Neural Network Architecture

  9. Target Foresight based Attention for Neural Machine Translation. paper

    Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng

    Domain: Neural Network Architecture

  10. Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets. paper

    Zhen Yang, Wei Chen, Feng Wang and Bo Xu

    Domain: Training

  11. Can Neural Machine Translation be Improved with User Feedback? paper

    Julia Kreutzer, Shahram Khadivi, Evgeny Matusov and Stefan Riezler

    Domain: Training

  12. Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation. paper

    Matt Post and David Vilar

    Domain: Decoding

  13. Guiding Neural Machine Translation with Retrieved Translation Pieces. paper

    Jingyi Zhang, Masao Utiyama, Eiichro Sumita, Graham Neubig and Satoshi Nakamura

    Domain: Decoding

  14. Improving Character-Based Decoding Using Target-Side Morphological Information for Neural Machine Translation. paper

    Peyman Passban, Qun Liu and Andy Way

    Domain: External Knowledge

  15. Handling Homographs in Neural Machine Translation. paper

    Frederick Liu, Han Lu and Graham Neubig

    Domain: External Knowledge

  16. Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora. paper

    Sree Harsha Ramesh and Krishna Prasad Sankaranarayanan

    Domain: Data Processing

  17. Learning Hidden Unit Contribution for Adapting Neural Machine Translation Models. paper

    David Vilar

    Domain: Domain Adaptation

  18. Neural Machine Translation Decoding with Terminology Constraints. paper

    Eva Hasler, Adria de Gispert, Gonzalo Iglesias and Bill Byrne

    Domain: Decoding

  19. Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation. paper

    Fahim Dalvi, Nadir Durrani, Hassan Sajjad and Stephan Vogel

    Domain: Decoding

  20. Pieces of Eight: 8-bit Neural Machine Translation. paper

    Jerry Quinn and Miguel Ballesteros

    Domain: Practical Problems in Machine Translation

  21. Japanese Predicate Conjugation for Neural Machine Translation. paper

    Michiki Kurosawa, Yukio Matsumura, Hayahide Yamagishi and Mamoru Komachi

    Domain: External Knowledge

  22. Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks. paper

    Diego Marcheggiani, Joost Bastings and Ivan Titov

    Domain: External Knowledge

  23. Automated Paraphrase Lattice Creation for HyTER Machine Translation Evaluation. paper

    Marianna Apidianaki, Guillaume Wisniewski, Anne Cocos and Chris Callison-Burch

    Domain: Problem Analysis

  24. Metric for Automatic Machine Translation Evaluation based on Universal Sentence Representations. paper

    Hiroki Shimanaka, Tomoyuki Kajiwara and Mamoru Komachi

    Domain: Problem Analysis

  25. When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation? paper

    Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan and Graham Neubig

    Domain: Problem Analysis

  26. Self-Attention with Relative Position Representations. paper

    Peter Shaw, Jakob Uszkoreit, Ashish Vaswani

    Domain: Neural Network Architecture

2019

  1. Consistency by Agreement in Zero-Shot Neural Machine Translation. paper

    Maruan Al Shedivat, Ankur Parikh

    Domain: Low Resource

  2. Extract and Edit: An Alternative to Back-Translation for Unsupervised Neural Machine Translation. paper

    Jiawei Wu, Xin Wang, William Yang Wang

    Domain: Low Resource

  3. Comparing Pipelined and Integrated Approaches to Dialectal Arabic Neural Machine Translation. paper

    Pamela Shapiro, Kevin Duh

    Domain: Problem Analysis

  4. Neural Machine Translation between Myanmar (Burmese) and Rakhine (Arakanese). paper

    Thazin Myint Oo, Ye Kyaw Thu, Khin Mar Soe

    Domain: Problem Analysis

  5. Measuring Immediate Adaptation Performance for Neural Machine Translation. paper

    Patrick Simianer, Joern Wuebker, John DeNero

    Domain: Domain Adaptation

  6. Non-Parametric Adaptation for Neural Machine Translation. paper

    Ankur Bapna, Orhan Firat

    Domain: Domain Adaptation

  7. Curriculum Learning for Domain Adaptation in Neural Machine Translation. paper

    Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, Kevin Duh

    Domain: Domain Adaptation

  8. Massively Multilingual Neural Machine Translation. paper

    Roee Aharoni, Melvin Johnson, Orhan Firat

    Domain: Multilingual

  9. Probing the Need for Visual Context in Multimodal Machine Translation. paper

    Ozan Caglayan, Pranava Madhyastha, Lucia Specia, Loïc Barrault

    Domain: Multimodal NMT

  10. Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations. paper

    Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang

    Domain: Syntax Aware

  11. Online Distilling from Checkpoints for Neural Machine Translation. paper

    Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen

    Domain: Training

  12. Reinforcement Learning based Curriculum Optimization for Neural Machine Translation. paper

    Gaurav Kumar, George Foster, Colin Cherry, Maxim Krikun

    Domain: Training

  13. Competence-based Curriculum Learning for Neural Machine Translation. paper

    Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, Tom Mitchell

    Domain: Training, Curriculum Learning

  14. Understanding and Improving Hidden Representations for Neural Machine Translation. paper

    Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi

    Domain: Training, Problem Analysis

  15. Unsupervised Extraction of Partial Translations for Neural Machine Translation. paper

    Benjamin Marie, Atsushi Fujita

    Domain: Data Augmentation

  16. Selective Attention for Context-aware Neural Machine Translation. paper

    Sameen Maruf, André F. T. Martins, Gholamreza Haffari

    Domain: Context-aware NMT

  17. Neural Machine Translation of Text from Non-Native Speakers. paper

    Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, David Chiang

    Domain: Robustness of NMT

  18. Star-Transformer. paper

    Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang

    Domain: Neural Network Architecture

  19. Modeling Recurrence for Transformer. paper

    Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, Zhaopeng Tu

    Domain: Neural Network Architecture

  20. Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation. paper

    Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, Philipp Koehn

    Domain: Domain Adaptation

  21. Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages. paper

    Rudra Murthy, Anoop Kunchukuttan, Pushpak Bhattacharyya

    Domain: Transfer Learning, Low Resource

  22. Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation. paper

    Xing Niu, Weijia Xu, Marine Carpuat

    Domain: Low Resource

  23. Convolutional Self-attention Networks. paper

    Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu

    Domain: Neural Network Architecture

  24. Differentiable Sampling with Flexible Reference Word Order for Neural Machine Translation. paper

    Weijia Xu, Xing Niu, Marine Carpuat

    Domain: Training

  25. ReWE: Regressing Word Embeddings for Regularization of Neural Machine Translation Systems. paper

    Inigo Jauregi Unanue, Ehsan Zare Borzeshi, Nazanin Esmaili, Massimo Piccardi

    Domain: Training

  26. Lost in Machine Translation: A Method to Reduce Meaning Loss. paper

    Reuben Cohn-Gordon, Noah Goodman

    Domain: Training

  27. Improving Neural Machine Translation with Neural Syntactic Distance. paper

    Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao

    Domain: Syntax Aware

  28. Improving Robustness of Machine Translation with Synthetic Noise. paper

    Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, Graham Neubig

    Domain: Robustness

  29. Learning to Stop in Structured Prediction for Neural Machine Translation. paper

    Mingbo Ma, Renjie Zheng, Liang Huang

    Domain: Decoding

  30. Multimodal Machine Translation with Embedding Prediction. paper

    Tosho Hirasawa, Hayahide Yamagishi, Yukio Matsumura, Mamoru Komachi

    Domain: Multimodal

AAAI

2020

  1. Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation. paper

    Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng and Jie Zhou

    Domain: Training, Non-Autoregressive NMT

  2. Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior. paper

    Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho

    Domain: Non-Autoregressive NMT

  3. Neural Machine Translation with Byte-Level Subwords. paper

    Changhan Wang, Kyunghyun Cho and Jiatao Gu

    Domain: Data Processing

  4. Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation. paper

    Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu

    Domain: Curriculum Learning, Non-Autoregressive NMT

  5. Reinforced Curriculum Learning on Pre-trained Neural Machine Translation Models. paper

    Mingjun Zhao, Haijiang Wu, Di Niu, Xiaoli Wang

    Domain: Curriculum Learning

  6. Unsupervised Neural Dialect Translation with Commonality and Diversity Modeling. paper

    Yu Wan, Baosong Yang, Derek F. Wong, Lidia S. Chao, Haihua Du, Ben C.H. Ao

    Domain: Unsupervised

  7. Transductive Ensemble Learning for Neural Machine Translation. paper

    Yiren Wang, Lijun Wu, Yingce Xia, Tao Qin, ChengXiang Zhai, Tie-Yan Liu

    Domain: Ensemble Learning

  8. Improving Context-Aware Neural Machine Translation Using Self-Attentive Sentence Embedding. paper

    Hyeongu Yun, Yongkeun Hwang, Kyomin Jung

    Domain: Context-Aware NMT

  9. Controlling Neural Machine Translation Formality with Synthetic Supervision. paper

    Xing Niu, Marine Carpuat

    Domain: Training

  10. Towards Making the Most of BERT in Neural Machine Translation. paper

    Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Weinan Zhang, Lei Li

    Domain: Pre-Training

  11. Acquiring Knowledge from Pre-trained Model to Neural Machine Translation. paper

    Rongxiang Weng, Heng Yu, Shujian Huang, Shanbo Cheng, Weihua Luo

    Domain: Pre-Training

  12. A Meta Learning Method Leveraging Multiple Domain Data for Low Resource Machine Translation. paper

    Rumeng Li, Xun Wang, Hong Yu

    Domain: Low-Resource

  13. Alignment-Enhanced Transformer for Constraining NMT with Pre-Specified Translations. paper

    Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, Min Zhang

    Domain: Neural Network Architecture

  14. GRET: Global Representation Enhanced Transformer. paper

    Rongxiang Weng, Haoran Wei, Shujian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jiajun Chen

    Domain: Neural Network Architecture

  15. Explicit Sentence Compression for Neural Machine Translation. paper

    Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao

    Domain: Neural Network Architecture

  16. Neuron Interaction Based Representation Composition for Neural Machine Translation. paper

    Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

    Domain: Neural Network Architecture

  17. Neural Machine Translation with Joint Representation. paper

    Yanyang Li, Qiang Wang, Tong Xiao, Tongran Liu and Jingbo Zhu

    Domain: Neural Network Architecture

  18. IntroVNMT: An Introspective Model for Variational Neural Machine Translation. paper

    Xin Sheng, Linli Xu, Junliang Guo, Jingchang Liu, Ruoyu Zhao, Yinlong Xu

    Domain: Neural Network Architecture

  19. Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation. paper

    Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen and Weihua Luo

    Domain: Low-resource

  20. Modeling Fluency and Faithfulness for Diverse Neural Machine Translation. paper

    Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, Dong Yu

    Domain: Decoding

  21. Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation. paper

    Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman

    Domain: Multilingual NMT

  22. Visual Agreement Regularized Training for Multi-Modal Machine Translation. paper

    Pengcheng Yang, Boxing Chen, Pei Zhang, Xu Sun

    Domain: Multi-modal machine translation

  23. Balancing Quality and Human Involvement: an Effective Approach to Interactive Neural Machine Translation. paper

    Tianxiang Zhao, Lemao Liu, Guoping Huang, Zhaopeng Tu, Huayang Li, Yingling Liu, Guiquan Liu, Shuming Shi

    Domain: Interactive Translation

IJCAI

2019

  1. Correct-and-Memorize: Learning to Translate from Interactive Revisions. paper

    Rongxiang Weng, Hao Zhou, Shujian Huang, Yifan Xia, Lei Li, Jiajun Chen

    Domain: Interactive Translation

  2. Sharing Attention Weights for Fast Transformer. paper

    Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, Tongran Liu

    Domain: Neural Network Architecture

  3. From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots. paper

    Shizhe Chen, Qin Jin, Jianlong Fu

    Domain: Low Resource

  4. Polygon-Net: A General Framework for Jointly Boosting Multiple Unsupervised Neural Machine Translation Models. paper

    Chang Xu, Tao Qin, Gang Wang, Tie-Yan Liu

    Domain: Unsupervised

COLING

2018

  1. A Comparison of Transformer and Recurrent Neural Networks on Multilingual Neural Machine Translation. paper

    Surafel M. Lakew, Mauro Cettolo and Marcello Federico

    Domain: Problem Analysis

  2. Appraise Evaluation Framework for Machine Translation. paper

    Christian Federmann

    Domain: Problem Analysis

  3. On Adversarial Examples for Character-Level Neural Machine Translation. paper

    Javid Ebrahimi, Daniel Lowd and Dejing Dou

    Domain: Problem Analysis

  4. Neural Machine Translation with Decoding-History Enhanced Attention. paper

    Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong and Chao Bian

    Domain: Neural Network Architecture

  5. Refining Source Representations with Relation Networks for Neural Machine Translation. paper

    Wen Zhang, Jiawei Hu, Yang Feng and Qun Liu

    Domain: Neural Network Architecture

  6. Adaptive Weighting for Neural Machine Translation. paper

    Yachao Li, Junhui Li and Min Zhang

    Domain: Neural Network Architecture

  7. Multilingual Neural Machine Translation with Task-Specific Attention. paper

    Graeme Blackwood, Miguel Ballesteros and Todd Ward

    Domain: Neural Network Architecture

  8. Deconvolution-Based Global Decoding for Neural Machine Translation. paper

    Junyang Lin, Xu Sun, Xuancheng Ren, Shuming Ma, Jinsong Su and Qi Su

    Domain: Neural Network Architecture

  9. Improving Neural Machine Translation by Incorporating Hierarchical Subword Features. paper

    Makoto Morishita, Jun Suzuki and Masaaki Nagata

    Domain: Neural Network Architecture

  10. Neural Machine Translation Incorporating Named Entity. paper

    Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura and Manabu Okumura

    Domain: Neural Network Architecture

  11. Multi-layer Representation Fusion for Neural Machine Translation. paper

    Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li and Jingbo Zhu

    Domain: Neural Network Architecture

  12. Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches. paper

    Shaohui Kuang, Deyi Xiong, Weihua Luo and Guodong Zhou

    Domain: Context-aware NMT

  13. Fusing Recency into Neural Machine Translation with an Inter-Sentence Gate Model. paper

    Shaohui Kuang and Deyi Xiong

    Domain: Context-aware NMT

  14. A Survey of Domain Adaptation for Neural Machine Translation. paper

    Chenhui Chu, Rui Wang

    Domain: Domain Adaptation

  15. Sentence Weighting for Neural Machine Translation Domain Adaptation. paper

    Shiqi Zhang and Deyi Xiong

    Domain: Domain Adaptation

  16. Incorporating Syntactic Uncertainty in Neural Machine Translation with a Forest-to-Sequence Model. paper

    Poorya Zaremoodi and Gholamreza Haffari

    Domain: External Knowledge

  17. Extracting Parallel Sentences with Bidirectional Recurrent Neural Networks to Improve Machine Translation. paper

    Francis Grégoire and Philippe Langlais

    Domain: Data Processing

  18. Parallel Corpora for bi-lingual English-Ethiopian Languages Statistical Machine Translation. paper

    Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assabie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondimagegnhue Tsegaye, Amanuel Lemma, Tsegaye Andargie and Seifedin Shifaw

    Domain: Data Processing

List by Area

Deep Transformer

  1. Training Deeper Neural Machine Translation Models with Transparent Attention. paper

    Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, Yonghui Wu

    Venue: EMNLP-2018

  2. Learning Deep Transformer Models for Machine Translation. paper

    Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao

    Venue: ACL-2019

  3. Depth Growing for Neural Machine Translation. paper

    Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu

    Venue: ACL-2019

  4. Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention. paper

    Biao Zhang, Ivan Titov, Rico Sennrich

    Venue: EMNLP-2019

  5. Lipschitz Constrained Parameter Initialization for Deep Transformers. paper

    Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Jingyi Zhang

    Venue: ACL-2020

  6. Multiscale Collaborative Deep Models for Neural Machine Translation. paper

    Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo

    Venue: ACL-2020

  7. Improving Transformer Optimization Through Better Initialization. paper

    Xiao Shi Huang, Felipe Perez, Jimmy Ba, Maksims Volkovs

    Venue: ICML-2020

  8. Understanding the Difficulty of Training Transformers. paper

    Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Jiawei Han

    Venue: EMNLP-2020

  9. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. paper

    Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, Zhifeng Chen

    Venue: arXiv

  10. Character-Level Language Modeling with Deeper Self-Attention. paper

    Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones

    Venue: CoRR

  11. Very Deep Self-Attention Networks for End-to-End Speech Recognition. paper

    Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, Sebastian Stüker, Alexander Waibel

    Venue: CoRR

  12. Shallow-to-Deep Training for Neural Machine Translation. paper

    Bei Li, Ziyang Wang, Hui Liu, Yufan Jiang, Quan Du, Tong Xiao, Huizhen Wang, Jingbo Zhu

    Venue: EMNLP-2020

  13. Learning Light-Weight Translation Models from Deep Transformer. paper

    Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, Jingbo Zhu

    Venue: AAAI-2021

Fast RNN

  1. A Lightweight Recurrent Network for Sequence Modeling. paper

    Biao Zhang, Rico Sennrich

    Venue: ACL-2019

  2. Simple Recurrent Units for Highly Parallelizable Recurrence. paper

    Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

    Venue: EMNLP-2018

  3. Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks. paper

    Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin, Huiji Zhang

    Venue: EMNLP-2018

Knowledge Distillation of BERT

  1. Patient Knowledge Distillation for BERT Model Compression. paper

    Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu

    Venue: EMNLP 2019

  2. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. paper

    Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut

    Venue: ICLR 2020

  3. TinyBERT: Distilling BERT for Natural Language Understanding. paper

    Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

    Venue: arXiv

  4. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. paper

    Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf

    Venue: NeurIPS 2019

Non-Autoregressive Translation

  1. Patient Knowledge Distillation for BERT Model Compression. paper

    Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu

    Venue: EMNLP 2019

  2. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. paper

    Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut

    Venue: ICLR 2020

  3. TinyBERT: Distilling BERT for Natural Language Understanding. paper

    Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

    Venue: arxiv

  4. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. paper

    Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf

    Venue: NeurlPS 2019

  5. Non-Autoregressive Neural Machine Translation. paper

    Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, Richard Socher

    Venue: ICLR-2018

  6. Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement. paper

    Jason Lee, Elman Mansimov, Kyunghyun Cho

    Venue: EMNLP-2018

  7. End-to-End Non-Autoregressive Neural Machine Translation with Connectionist Temporal Classification. paper

    Jindrich Libovický, Jindrich Helcl

    Venue: EMNLP-2018

  8. Fast Decoding in Sequence Models Using Discrete Latent Variables. paper

    Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, Noam Shazeer

    Venue: ICML-2018

  9. Imitation Learning for Non-Autoregressive Neural Machine Translation. paper

    Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Xu Sun

    Venue: ACL-2019

  10. Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation. paper

    Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, Jie Zhou

    Venue: ACL-2019

  11. Syntactically Supervised Transformers for Faster Neural Machine Translation. paper

    Nader Akoury, Kalpesh Krishna, Mohit Iyyer

    Venue: ACL-2019

  12. Non-Autoregressive Machine Translation with Auxiliary Regularization. paper

    Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, Tie-Yan Liu

    Venue: AAAI-2019

  13. Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input. paper

    Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, Tie-Yan Liu

    Venue: AAAI-2019

  14. Mask-Predict: Parallel Decoding of Conditional Masked Language Models. paper

    Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer

    Venue: EMNLP-2019

  15. Insertion-based Decoding with Automatically Inferred Generation Order. paper

    Jiatao Gu, Qi Liu, Kyunghyun Cho

    Venue: TACL-2019

  16. Fast Structured Decoding for Sequence Models. paper

    Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, Zhi-Hong Deng

    Venue: NeurIPS-2019

  17. Levenshtein Transformer. paper

    Jiatao Gu, Changhan Wang, Junbo Zhao

    Venue: NeurIPS-2019

  18. Hint-Based Training for Non-Autoregressive Machine Translation. paper

    Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu

    Venue: EMNLP-2019

  19. FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow. paper

    Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, Eduard H. Hovy

    Venue: EMNLP-2019

  20. Improving Non-autoregressive Neural Machine Translation with Monolingual Data. paper

    Jiawei Zhou, Phillip Keung

    Venue: ACL-2020

  21. Parallel Machine Translation with Disentangled Context Transformer. paper

    Jungo Kasai, James Cross, Marjan Ghazvininejad, Jiatao Gu

    Venue: ICML-2020

  22. Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation. paper

    Junliang Guo, Linli Xu, Enhong Chen

    Venue: ACL-2020

  23. A Study of Non-autoregressive Model for Sequence Generation. paper

    Yi Ren, Jinglin Liu, Xu Tan, Zhou Zhao, Sheng Zhao, Tie-Yan Liu

    Venue: ACL-2020

  24. Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation. paper

    Qiu Ran, Yankai Lin, Peng Li, Jie Zhou

    Venue: ACL-2020

  25. ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation. paper

    Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, Kevin Gimpel

    Venue: ACL-2020

  26. Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior. paper

    Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho

    Venue: AAAI-2020

  27. Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation. paper

    Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, Jie Zhou

    Venue: AAAI-2020

  28. Understanding Knowledge Distillation in Non-autoregressive Machine Translation. paper

    Chunting Zhou, Jiatao Gu, Graham Neubig

    Venue: ICLR-2020

  29. Aligned Cross Entropy for Non-Autoregressive Machine Translation. paper

    Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy

    Venue: ICML-2020

  30. Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation. paper

    Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu

    Venue: AAAI-2020

  31. Non-Autoregressive Machine Translation with Latent Alignments. paper

    Chitwan Saharia, William Chan, Saurabh Saxena, Mohammad Norouzi

    Venue: CoRR

  32. Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information. paper

    Qiu Ran, Yankai Lin, Peng Li, Jie Zhou

    Venue: CoRR

  33. LAVA NAT: A Non-Autoregressive Translation Model with Look-Around Decoding and Vocabulary Attention. paper

    Xiaoya Li, Yuxian Meng, Arianna Yuan, Fei Wu, Jiwei Li

    Venue: CoRR

  34. Improving Fluency of Non-Autoregressive Machine Translation. paper

    Zdeněk Kasner, Jindřich Libovický, Jindřich Helcl

    Venue: CoRR
