
RetrivalLMPapers (personal usage)

A collection of papers on retrieval-based (augmented) methods for language modeling and related tasks.

Survey

  1. A Survey on Retrieval-Augmented Text Generation. Preprint. Huayang Li, Yixuan Su, Deng Cai, Yan Wang, Lemao Liu [pdf] 2022.2

Papers

  1. Improving Neural Language Models with a Continuous Cache. ICLR.

    Edouard Grave, Armand Joulin, Nicolas Usunier [pdf] 2016.12

  2. Coupling Retrieval and Meta-Learning for Context-Dependent Semantic Parsing. ACL.

    Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, Jian Yin [pdf] 2019.6

  3. Generalization through Memorization: Nearest Neighbor Language Models. ICLR.

    Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis [pdf], [implementation] 2019.11

  4. Retrieval Augmented Language Model Pre-Training. ICML.

    Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang [pdf] 2020.2

  5. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS.

    Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela [pdf], [implementation], 2020.5

  6. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL.

    Gautier Izacard, Edouard Grave [pdf], 2020.7

  7. FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation. EMNLP.

    Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srinivasan Iyer [pdf], 2020.12

  8. Distilling Knowledge from Reader to Retriever for Question Answering. Preprint.

    Gautier Izacard, Edouard Grave [pdf], 2020.12

  9. UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering. Preprint.

    Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Scott Yih [pdf], 2020.12

  10. Adaptive Semiparametric Language Models. TACL.

    Dani Yogatama, Cyprien de Masson d'Autume, Lingpeng Kong [pdf], 2021.2

  11. Case-based Reasoning for Natural Language Queries over Knowledge Bases. EMNLP 2021.

    Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay-Yoon Lee, Lizhen Tan, Lazaros Polymenakos, Andrew McCallum [pdf], [implementation], 2021.4

  12. NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework. Preprint.

    Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang [pdf], [implementation], 2021.11

  13. End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering. NeurIPS.

    Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, Dani Yogatama [pdf], [implementation], 2021.6

  14. Improving language models by retrieving from trillions of tokens. Preprint.

    Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, Laurent Sifre [pdf], [unofficial-implementation], 2021.12

  15. Learning To Retrieve Prompts for In-Context Learning. Preprint.

    Ohad Rubin, Jonathan Herzig, Jonathan Berant [pdf], [implementation], 2021.12

  16. Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks. Preprint. Akari Asai, Matt Gardner, Hannaneh Hajishirzi [pdf], [code], 2021.12

  17. LaMDA: Language Models for Dialog Applications. Preprint.

    Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, Quoc Le [pdf], [blog], 2022.1

  18. Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval. Preprint.

    Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, Graham Neubig [pdf], 2022.1

  19. Transformer Memory as a Differentiable Search Index. Preprint.

    Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler [pdf], [twitter], 2022.2

  20. Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data. Preprint.

    Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, Michael Zeng [pdf], [code], 2022.3

  21. In-Context Learning for Few-Shot Dialogue State Tracking. Preprint.

    Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari Ostendorf [pdf], [code], 2022.3

  22. Unsupervised Cross-Task Generalization via Retrieval Augmentation. Preprint.

    Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren [pdf], [code], 2022.4

  23. Training Language Models with Memory Augmentation. Preprint.

    Zexuan Zhong, Tao Lei, Danqi Chen [pdf], [code], 2022.5

  24. Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning. NeurIPS 2022.

    Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen [pdf], [code], 2022.5

  25. Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning. SIGIR 2022.

    Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen [pdf], [code], 2022.8.

  26. Selective Annotation Makes Language Models Better Few-Shot Learners. Preprint.

    Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu [pdf], [code], [twitter] 2022.9.

  27. Promptagator: Few-shot Dense Retrieval From 8 Examples. Preprint.

    Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang [pdf] 2022.9.

  28. Generate rather than Retrieve: Large Language Models are Strong Context Generators. Preprint.

    Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang [pdf] 2022.9.

  29. Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge. Preprint.

    Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, Jianfeng Gao [pdf] 2022.10.

  30. Retrieval-Augmented Multimodal Language Modeling. Preprint.

    Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih [pdf] 2022.11.
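
A pattern shared by several of the papers above (e.g. Khandelwal et al.'s "Generalization through Memorization") is to interpolate the base language model's next-token distribution with a distribution induced by nearest neighbors retrieved from a datastore of (context representation, next token) pairs. The following is a minimal illustrative sketch, not any paper's actual implementation: the function names, the toy brute-force search, and the squared-L2 distance are assumptions for demonstration.

```python
import math
from collections import defaultdict

def softmax_neg_dist(dists):
    # Turn (negative) distances into a probability distribution:
    # closer neighbors get higher weight.
    exps = [math.exp(-d) for d in dists]
    z = sum(exps)
    return [e / z for e in exps]

def knn_lm_next_token(p_lm, context_vec, datastore, k=2, lam=0.25):
    """Interpolate base-LM probabilities with a kNN distribution.

    p_lm: dict mapping token -> probability from the base language model
    context_vec: vector representation of the current context
    datastore: list of (key_vector, next_token) pairs built from training text
    lam: interpolation weight on the kNN distribution
    """
    def sq_dist(key):
        return sum((a - b) ** 2 for a, b in zip(key, context_vec))

    # Brute-force k-nearest-neighbor search over the datastore.
    scored = sorted(datastore, key=lambda kv: sq_dist(kv[0]))[:k]
    weights = softmax_neg_dist([sq_dist(key) for key, _ in scored])

    # Aggregate neighbor weights by the token each neighbor predicts.
    p_knn = defaultdict(float)
    for (_, tok), w in zip(scored, weights):
        p_knn[tok] += w

    # p(y|x) = lam * p_knn(y|x) + (1 - lam) * p_lm(y|x)
    vocab = set(p_lm) | set(p_knn)
    return {t: lam * p_knn[t] + (1 - lam) * p_lm.get(t, 0.0) for t in vocab}
```

In practice, the brute-force scan is replaced by an approximate nearest-neighbor index over keys extracted from a trained model's hidden states, so the datastore can hold billions of entries.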