  • Stars: 315
  • Rank: 132,213 (Top 3%)
  • Created: over 1 year ago
  • Updated: about 1 year ago

ChatGPTPapers

Must-read papers, related blogs and API tools on the pre-training and tuning methods for ChatGPT.

Papers

  1. 【GPT-1】Improving Language Understanding by Generative Pre-Training.

    Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever [pdf] 2018.6

  2. 【GPT-2】Language Models are Unsupervised Multitask Learners.

    Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever [pdf] 2019.2

  3. 【GPT-3】Language Models are Few-Shot Learners.

    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei [pdf] 2020.5

  4. 【GPT-4】GPT-4 Technical Report.

    OpenAI [pdf] 2023.3

  5. 【WebGPT】WebGPT: Browser-assisted question-answering with human feedback.

    Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman [pdf] 2021.12

  6. 【Toolformer】Toolformer: Language Models Can Teach Themselves to Use Tools.

    Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom [pdf] 2023.2

  7. 【InstructGPT】Training language models to follow instructions with human feedback.

    Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe [pdf] 2022.3

  8. 【RLHF】Augmenting Reinforcement Learning with Human Feedback.

    W. Bradley Knox, Peter Stone [pdf] 2011.7

  9. 【PPO】Proximal Policy Optimization Algorithms. (see the sketch after this list)

    John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov [pdf] 2017.7

  10. 【LaMDA】LaMDA: Language Models for Dialog Applications.

    Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, Quoc Le [pdf] 2022.1

  11. 【Sparrow】 Improving alignment of dialogue agents via targeted human judgements.

    Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Soňa Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, Geoffrey Irving [pdf] 2022.9

  12. 【Claude】Constitutional AI: Harmlessness from AI Feedback.

    Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan [pdf] 2022.12

  13. OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization.

    Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov [pdf] 2022.12

  14. Fine-tuning language models from human preferences.

    Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving [pdf] [code] 2019.9

  15. Learning to summarize from human feedback.

    Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano [pdf] [code] 2020.9

  16. Cross-task generalization via natural language crowdsourcing instructions.

    Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi [pdf] 2021.4

  17. Finetuned language models are zero-shot learners.

    Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le [pdf] 2021.9

  18. Multitask Prompted Training Enables Zero-Shot Task Generalization.

    Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush [pdf] 2021.10

  19. Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks.

    Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, Daniel Khashabi [pdf] 2022.4

  20. Putting Humans in the Natural Language Processing Loop: A Survey.

    Zijie J. Wang, Dongjin Choi, Shenyu Xu, Diyi Yang [pdf] 2021.4

  21. Scaling Instruction-Finetuned Language Models.

    Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei [pdf] 2022.10

  22. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.

    Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu [pdf] 2023.1

  23. Is ChatGPT A Good Translator? A Preliminary Study.

    Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu [pdf] 2023.1

  24. Exploring AI Ethics of ChatGPT: A Diagnostic Analysis.

    Terry Yue Zhuo, Yujin Huang, Chunyang Chen, Zhenchang Xing [pdf] 2023.1

  25. A Categorical Archive of ChatGPT Failures.

    Ali Borji [pdf] 2023.2

  26. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity.

    Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung [pdf] 2023.2

  27. Is ChatGPT a General-Purpose Natural Language Processing Task Solver?

    Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang [pdf] 2023.2

  28. Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.

    Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao [pdf] [code] 2023.2

  29. The Wisdom of Hindsight Makes Language Models Better Instruction Followers.

    Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez [pdf] 2023.2

  30. Theory of Mind May Have Spontaneously Emerged in Large Language Models.

    Michal Kosinski [pdf] 2023.2

  31. Augmented Language Models: a Survey.

    Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom [pdf] 2023.2

  32. On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective.

    Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxing Jiao, Yue Zhang, Xing Xie [pdf] 2023.2

  33. ChatGPT: Jack of all trades, master of none.

    Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak, Przemysław Kazienko [pdf] [code] 2023.2
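
Items 7–9 above form the RLHF training pipeline behind ChatGPT-style tuning. The sketch below illustrates PPO's clipped surrogate objective (item 9); it is a minimal illustration assuming PyTorch, with hypothetical function and argument names rather than code from any listed paper.

```python
import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    # Clipped surrogate objective from "Proximal Policy Optimization
    # Algorithms" (item 9). Inputs are 1-D tensors over sampled actions:
    # log-probabilities under the current and the rollout (old) policy,
    # plus advantage estimates (in RLHF, derived from a reward model).
    ratio = torch.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two terms;
    # negate and average to obtain a loss for gradient descent.
    return -torch.min(unclipped, clipped).mean()
```

In InstructGPT-style RLHF (item 7), the advantages come from a learned reward model trained on human preference comparisons, usually combined with a KL penalty that keeps the tuned policy close to the pre-trained one.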

Blogs

APIs
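
As a starting point, here is a minimal sketch of querying ChatGPT programmatically, assuming the official OpenAI Python SDK (v1 or later, installed via pip install openai) and an OPENAI_API_KEY environment variable; the model name and prompts are illustrative only.

```python
# Minimal sketch: a single chat-completion request, assuming the official
# OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize RLHF in one sentence."},
    ],
)
print(response.choices[0].message.content)
```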

More Repositories

  1. active-prompt: Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" (Python, 209 stars)

  2. R-Tuning: Source code for the NAACL 2024 Outstanding Paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" (Python, 80 stars)

  3. awesome-domain-adaptation-NLP: Domain adaptation in NLP (51 stars)

  4. DaVinci: Source code for the paper "Prefix Language Models are Unified Modal Learners" (Jupyter Notebook, 42 stars)

  5. TILGAN: Source code for the Findings of ACL-IJCNLP 2021 paper "TILGAN: Transformer-based Implicit Latent GAN for Diverse and Coherent Text Generation" (Python, 26 stars)

  6. automate-cot: Source code for the paper "Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data" (20 stars)

  7. T-DNA: Source code for the ACL-IJCNLP 2021 paper "T-DNA: Taming Pre-trained Language Models with N-gram Representations for Low-Resource Domain Adaptation" by Shizhe Diao et al. (Python, 19 stars)

  8. BigGAN-PyTorch-TPU-Distribute: Distributed (multi-process) version for training BigGAN with TPU (Python, 9 stars)

  9. Post-Training-Data-Flywheel: References for searching, selecting, and synthesizing high-quality, large-quantity data for post-training LLMs (Python, 9 stars)

  10. awesome-transformers: A curated list of resources dedicated to Transformers (8 stars)

  11. HashTation: Source code for the paper "Hashtag-Guided Low-Resource Tweet Classification" (Python, 5 stars)

  12. Transformers_TPU: Transformers on TPU, an attempt to solve RAM issues when mapping datasets (Python, 3 stars)

  13. BigGAN-PyTorch-TPU-Single: Single-thread version for training BigGAN with TPU (Python, 3 stars)

  14. SEDST3: SEDST version 3.0, based on the code for the CIKM'18 long paper "Explicit state tracking with semi-supervision for neural dialogue generation" (Python, 3 stars)

  15. Doolittle: Source code for the EMNLP 2023 paper "Doolittle: Benchmarks and Corpora for Academic Writing Formalization" by Shizhe Diao et al. (Python, 3 stars)

  16. Black-Box-Prompt-Learning: Source code for the paper "Black-Box Prompt Learning for Pre-trained Language Models" (2 stars)

  17. BigGAN-PyTorch-TPU-Parallel: Parallel (multi-thread) version for training BigGAN with TPU (Python, 2 stars)

  18. TPU-Tutorial: A tutorial for beginners who would like to use TPU with PyTorch (1 star)

  19. MATH6450-CIFAR10: Course project for MATH6450F, training two models on CIFAR-10 to achieve good performance; the code is adapted from CIFAR-ZOO (https://github.com/BIGBALLON/CIFAR-ZOO) (Python, 1 star)