
Awesome RLHF (RL with Human Feedback)


This is a collection of research papers for Reinforcement Learning with Human Feedback (RLHF). The repository will be continuously updated to track the frontier of RLHF.

Feel free to follow and star this repository!

Table of Contents

  • Overview of RLHF
  • Papers
  • Codebases
  • Dataset
  • Blogs
  • Other Language Support
  • Contributing
  • License

Overview of RLHF

The idea of RLHF is to use methods from reinforcement learning to directly optimize a language model with human feedback. RLHF makes it possible to align a language model trained on a general corpus of text data with complex human values.

  • RLHF for Large Language Model (LLM)


  • RLHF for Video Game (e.g. Atari)

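Concretely, in the LLM setting the optimization step usually maximizes a score from a learned reward model while penalizing divergence from a frozen reference (SFT) model. The snippet below is a minimal, self-contained sketch of that shaped, per-token reward in PyTorch; the toy tensor shapes, the placeholder reward scores, and the `beta` coefficient are illustrative assumptions, not values taken from any specific codebase listed here.

```python
import torch
import torch.nn.functional as F

# Toy shapes: 2 responses, 5 generated tokens each, vocabulary of 100.
# In practice these logits come from the current policy and a frozen
# reference (SFT) model evaluated on the same sampled response tokens.
torch.manual_seed(0)
policy_logits = torch.randn(2, 5, 100)
ref_logits = torch.randn(2, 5, 100)
response_ids = torch.randint(0, 100, (2, 5))

def token_logprobs(logits, ids):
    """Log-probability of each sampled token under the given logits."""
    logp = F.log_softmax(logits, dim=-1)
    return logp.gather(-1, ids.unsqueeze(-1)).squeeze(-1)

policy_logp = token_logprobs(policy_logits, response_ids)
ref_logp = token_logprobs(ref_logits, response_ids)

# Scalar scores from a (hypothetical) reward model, one per response.
reward_model_scores = torch.tensor([0.7, -0.2])

# A per-token KL-style penalty keeps the policy close to the reference model;
# the scalar reward is typically added on the final token of each response.
beta = 0.1  # assumed KL coefficient
per_token_reward = -beta * (policy_logp - ref_logp)
per_token_reward[:, -1] += reward_model_scores

print(per_token_reward)  # shaped rewards that a PPO-style update would maximize
```

A PPO-style policy-gradient update on these shaped rewards is what "directly optimizing a language model with human feedback" amounts to in most of the codebases listed below.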

Detailed Explanation

(The following section was automatically generated by ChatGPT)

RLHF typically refers to "Reinforcement Learning with Human Feedback". Reinforcement Learning (RL) is a type of machine learning that involves training an agent to make decisions based on feedback from its environment. In RLHF, the agent also receives feedback from humans in the form of ratings or evaluations of its actions, which can help it learn more quickly and accurately.

RLHF is an active research area in artificial intelligence, with applications in fields such as robotics, gaming, and personalized recommendation systems. It seeks to address the challenges of RL in scenarios where the agent has limited access to feedback from the environment and requires human input to improve its performance.

Reinforcement Learning with Human Feedback (RLHF) is a rapidly developing area of research in artificial intelligence, and several advanced techniques have been developed to improve the performance of RLHF systems. Here are some examples:

  • Inverse Reinforcement Learning (IRL): IRL is a technique that allows the agent to learn a reward function from human feedback, rather than relying on pre-defined reward functions. This makes it possible for the agent to learn from more complex feedback signals, such as demonstrations of desired behavior (a minimal sketch of preference-based reward learning follows this list).

  • Apprenticeship Learning: Apprenticeship learning is a technique that combines IRL with supervised learning to enable the agent to learn from both human feedback and expert demonstrations. This can help the agent learn more quickly and effectively, as it is able to learn from both positive and negative feedback.

  • Interactive Machine Learning (IML): IML is a technique that involves active interaction between the agent and the human expert, allowing the expert to provide feedback on the agent's actions in real-time. This can help the agent learn more quickly and efficiently, as it can receive feedback on its actions at each step of the learning process.

  • Human-in-the-Loop Reinforcement Learning (HITLRL): HITLRL is a technique that involves integrating human feedback into the RL process at multiple levels, such as reward shaping, action selection, and policy optimization. This can help to improve the efficiency and effectiveness of the RLHF system by taking advantage of the strengths of both humans and machines.
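
The reward-learning idea behind several of these techniques can be made concrete with a pairwise preference loss: given a human judgment that one response is better than another for the same prompt, a reward model is trained to score the preferred response higher. The sketch below is a minimal, self-contained illustration in PyTorch; the tiny `RewardModel` and the random toy features are assumptions for illustration, not any particular paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model mapping a fixed-size representation to a scalar score.
    Real RLHF reward models are usually an LM backbone with a scalar head."""
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.score(x).squeeze(-1)

torch.manual_seed(0)
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: features of the human-preferred ("chosen") and the
# dispreferred ("rejected") response for the same prompts.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for _ in range(200):
    r_chosen = model(chosen)
    r_rejected = model(rejected)
    # Pairwise logistic (Bradley-Terry) loss: push r_chosen above r_rejected.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())  # decreases as the model learns to rank chosen > rejected
```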

Here are some examples of Reinforcement Learning with Human Feedback (RLHF):

  • Game Playing: In game playing, human feedback can help the agent learn strategies and tactics that are effective in different game scenarios. For example, in the popular game of Go, human experts can provide feedback to the agent on its moves, helping it improve its gameplay and decision-making.

  • Personalized Recommendation Systems: In recommendation systems, human feedback can help the agent learn the preferences of individual users, making it possible to provide personalized recommendations. For example, the agent could use feedback from users on recommended products to learn which features are most important to them.

  • Robotics: In robotics, human feedback can help the agent learn how to interact with the physical environment in a safe and efficient manner. For example, a robot could learn to navigate a new environment more quickly with feedback from a human operator on the best path to take or which objects to avoid.

  • Education: In education, human feedback can help the agent learn how to teach students more effectively. For example, an AI-based tutor could use feedback from teachers on which teaching strategies work best with different students, helping to personalize the learning experience.

Papers

format:
- [title](paper link) [links]
  - author1, author2, and author3...
  - publisher
  - keyword
  - code
  - experiment environments and datasets

2023

2022

2021

2020 and before

Codebases

format:
- [title](codebase link) [links]
  - author1, author2, and author3...
  - keyword
  - experiment environments, datasets or tasks
  • PaLM + RLHF - Pytorch
    • Phil Wang, Yachine Zahidi, Ikko Eltociear Ashimine, Eric Alcaide
    • Keyword: Transformers, PaLM architecture
    • Dataset: enwik8
  • lm-human-preferences
    • Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving
    • Keyword: Reward learning for language, Continuing text with positive sentiment, Summary task, Physically descriptive text
    • Dataset: TL;DR, CNN/DM
  • following-instructions-human-feedback
    • Long Ouyang, Jeff Wu, Xu Jiang, et al.
    • Keyword: Large Language Model, Align Language Model with Human Intent
    • Dataset: TruthfulQA, RealToxicityPrompts
  • Transformer Reinforcement Learning (TRL)
    • Leandro von Werra, Younes Belkada, Lewis Tunstall, et al.
    • Keyword: Train LLM with RL, PPO, Transformer
    • Task: IMDB sentiment (a usage sketch follows this list)
  • Transformer Reinforcement Learning X (TRLX)
    • Jonathan Tow, Leandro von Werra, et al.
    • Keyword: Distributed training framework, T5-based language models, Train LLM with RL, PPO, ILQL
    • Task: Fine-tuning LLMs with RL using a provided reward function or a reward-labeled dataset
  • RL4LMs (A modular RL library to fine-tune language models to human preferences)
  • LaMDA-rlhf-pytorch
    • Phil Wang
    • Keyword: LaMDA, Attention-mechanism
    • Task: Open-source pre-training implementation of Google's LaMDA research paper in PyTorch
  • TextRL
    • Eric Lam
    • Keyword: Hugging Face's transformers
    • Task: Text generation
    • Env: PFRL, gym
  • minRLHF
    • Thomfoster
    • Keyword: PPO, Minimal library
    • Task: educational purposes
  • DeepSpeed-Chat
    • Microsoft
    • Keyword: Affordable RLHF Training
  • Dromedary
    • IBM
    • Keyword: Minimal human supervision, Self-aligned
    • Task: Self-aligned language model trained with minimal human supervision
  • FG-RLHF
    • Zeqiu Wu, Yushi Hu, Weijia Shi, et al.
    • Keyword: Fine-Grained RLHF, providing a reward after every segment, Incorporating multiple RMs associated with different feedback types
    • Task: A framework that enables training and learning from reward functions that are fine-grained in density and multiple RMs
  • Safe-RLHF
    • Xuehai Pan, Ruiyang Sun, Jiaming Ji, et al.
    • Keyword: Support popular pre-trained models, Large human-labeled dataset, Multi-scale metrics for safety constraints verification, Customized parameters
    • Task: Constrained Value-Aligned LLM via Safe RLHF
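
As a usage note for the TRL entry above, the sketch below loosely follows TRL's PPO workflow on a toy prompt: load a causal LM with a value head, generate a continuation, attach a reward, and call `step`. TRL's API has changed across its 0.x releases, so the class names and signatures used here (`PPOConfig`, `PPOTrainer`, `AutoModelForCausalLMWithValueHead`) should be checked against the installed version; the constant reward stands in for the sentiment classifier used in TRL's IMDB example.

```python
# pip install trl transformers torch  -- sketch based on TRL 0.x; verify against your version
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "gpt2"  # small model, purely for illustration
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

query = tokenizer("This movie was", return_tensors="pt").input_ids[0]
generated = model.generate(query.unsqueeze(0), max_new_tokens=16,
                           pad_token_id=tokenizer.eos_token_id)
response = generated[0][len(query):]  # keep only the generated continuation

# Placeholder reward; TRL's IMDB example scores the text with a sentiment model.
reward = torch.tensor(1.0)
stats = ppo_trainer.step([query], [response], [reward])
print(sorted(stats)[:5])  # a few of the training statistics returned by the step
```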

Dataset

format:
- [title](dataset link) [links]
  - author1, author2, and author3...
  - keyword
  - experiment environments or tasks
  • HH-RLHF
    • Ben Mann, Deep Ganguli
    • Keyword: Human preference dataset, Red teaming data, machine-written
    • Task: Open-source dataset for human preference data about helpfulness and harmlessness (a loading sketch follows this list)
  • Stanford Human Preferences Dataset (SHP)
    • Kawin Ethayarajh, Heidi Zhang, Yizhong Wang, and Dan Jurafsky
    • Keyword: Naturally occurring and human-written dataset, 18 different subject areas
    • Task: Intended to be used for training RLHF reward models
  • PromptSource
    • Stephen H. Bach, Victor Sanh, Zheng-Xin Yong et al.
    • Keyword: Prompted English datasets, Mapping a data example into natural language
    • Task: Toolkit for creating, sharing, and using natural language prompts
  • Structured Knowledge Grounding (SKG) Resources Collections
    • Tianbao Xie, Chen Henry Wu, Peng Shi et al.
    • Keyword: Structured Knowledge Grounding
    • Task: Collection of datasets related to structured knowledge grounding
  • The Flan Collection
    • Longpre Shayne, Hou Le, Vu Tu et al.
    • Task: Collection compiling datasets from Flan 2021, P3, and Super-Natural Instructions
  • rlhf-reward-datasets
    • Yiting Xie
    • Keyword: Machine-written dataset
  • webgpt_comparisons
    • OpenAI
    • Keyword: Human-written dataset, Long form question answering
    • Task: Train a long form question answering model to align with human preferences
  • summarize_from_feedback
    • OpenAI
    • Keyword: Human-written dataset, summarization
    • Task: Train a summarization model to align with human preferences
  • Dahoas/synthetic-instruct-gptj-pairwise
    • Dahoas
    • Keyword: Human-written dataset, synthetic dataset
  • Stable Alignment - Alignment Learning in Social Games
    • Ruibo Liu, Ruixin (Ray) Yang, Qiang Peng
    • Keyword: Interaction data used for alignment training, Run in Sandbox
    • Task: Train on the recorded interaction data in simulated social games
  • LIMA
    • Meta AI
    • Keyword: without any RLHF, few carefully curated prompts and responses
    • Task: Dataset used for training the LIMA model
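
As a loading note for the HH-RLHF entry above, the snippet below shows how such a pairwise preference dataset is typically pulled and inspected with the Hugging Face datasets library. The Hub id `Anthropic/hh-rlhf` and the `chosen`/`rejected` fields match the public dataset card, but verify them (and the license terms) before use.

```python
# pip install datasets
from datasets import load_dataset

# Each record pairs a preferred ("chosen") and a dispreferred ("rejected")
# conversation, which is the format pairwise reward-model training expects.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

example = dataset[0]
print(example.keys())            # expected: dict_keys(['chosen', 'rejected'])
print(example["chosen"][:200])   # preferred conversation transcript
print(example["rejected"][:200]) # dispreferred conversation transcript
```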

Blogs

Other Language Support

Turkish

Contributing

Our purpose is to make this repo even better. If you are interested in contributing, please refer to HERE for contribution instructions.

License

Awesome RLHF is released under the Apache 2.0 license.
