  • Stars: 1,139
  • Rank: 39,629 (Top 0.9 %)
  • Language: Python
  • License: MIT License
  • Created: over 3 years ago
  • Updated: 28 days ago


Repository Details

FinRL-Meta: Dynamic datasets and market environments for FinRL.

FinRL-Meta: A Metaverse of Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning


FinRL-Meta (docs website) builds a universe of market environments for data-driven financial reinforcement learning. We aim to help users in our community easily build their own environments.

  1. FinRL-Meta provides hundreds of market environments.
  2. FinRL-Meta reproduces existing papers as benchmarks.
  3. FinRL-Meta provides dozens of demos/tutorials, organized in a curriculum.

Previously called Neo_FinRL: Near real-market Environments for data-driven Financial Reinforcement Learning.


Our Goals

  • To provide benchmarks and facilitate fair comparisons: we allow researchers to evaluate different strategies on the same dataset. This also helps researchers better understand the "black-box" nature of (deep neural network-based) DRL algorithms.
  • To reduce the simulation-reality gap: existing works rely on backtesting over historical data, while actual trading performance may be quite different.
  • To reduce the data pre-processing burden, so that quants can focus on developing and optimizing strategies.

Design Principles

  • Plug-and-Play (PnP): modularity; handle different markets (say, T+0 vs. T+1).
  • Completeness and universality: multiple markets; various data sources (APIs, Excel, etc.); user-friendly variables.
  • Layered structure and extensibility: three layers (data layer, environment layer, and agent layer) that interact through end-to-end interfaces, achieving high extensibility.
  • "Training-Testing-Trading" pipeline: simulation for training, and connection to real-time APIs for testing/trading, closing the sim-real gap.
  • Efficient data sampling: accelerating the data sampling process is key to DRL training. From the ElegantRL project, we know that multi-processing is powerful for reducing training time (scheduling between CPU and GPU).
  • Transparency: a virtual env that is invisible to the upper layer.
  • Flexibility and extensibility: inheritance may be helpful here.
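To illustrate the efficient-data-sampling principle, here is a minimal sketch of sampling data for many tickers in parallel. All names (`fetch_ohlcv`, `TICKERS`, `sample_parallel`) are illustrative placeholders, not part of FinRL-Meta's API, and the "download" is simulated.

```python
# Hypothetical sketch: parallel data sampling across tickers.
from concurrent.futures import ThreadPoolExecutor

def fetch_ohlcv(ticker):
    # Stand-in for an I/O-bound API call that downloads OHLCV bars.
    return {"ticker": ticker, "bars": [(100.0 + i, 101.0 + i) for i in range(3)]}

TICKERS = ["AAPL", "MSFT", "GOOG", "AMZN"]

def sample_parallel(tickers, workers=4):
    # I/O-bound downloads overlap well under threads; CPU-bound feature
    # engineering would instead use processes (or GPU batching, as in
    # ElegantRL's CPU+GPU scheduling).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_ohlcv, tickers))

results = sample_parallel(TICKERS)
```

`Executor.map` preserves input order, so results line up with the ticker list even though the downloads complete out of order.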

Overview

Overview figure of FinRL-Meta's layered structure

We utilize a layered structure in FinRL-Meta, as shown in the figure above, consisting of three layers: the data layer, the environment layer, and the agent layer. Each layer executes its own functions and is independent of the others. Meanwhile, the layers interact through end-to-end interfaces to implement the complete workflow of algorithmic trading. Moreover, the layered structure allows easy extension with user-defined functions.

DataOps

DataOps applies the ideas of lean development and DevOps to the data analytics field. DataOps practices have been developed in companies and organizations to improve the quality and efficiency of data analytics. These implementations consolidate various data sources and unify and automate the data analytics pipeline, including data accessing, cleaning, analysis, and visualization.

However, the DataOps methodology has not been applied to financial reinforcement learning research. Most researchers access data, clean data, and extract technical indicators (features) in a case-by-case manner, which involves heavy manual work and may not guarantee data quality.

To deal with unstructured financial big data, we follow the DataOps paradigm and implement an automatic pipeline, shown in the following figure: task planning, data processing, training-testing-trading, and monitoring agents' performance. Through this pipeline, we continuously produce DRL benchmarks on dynamic market datasets.
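The data-processing stages of that pipeline can be sketched as a chain of small functions. This is a toy illustration of the DataOps idea (one reproducible, automatable flow); the function names and the hard-coded rows are made up, not FinRL-Meta's actual processors.

```python
# Hypothetical sketch of the access -> clean -> feature-engineering flow.
def access(source):
    # Stage 1: pull raw OHLCV rows from a data source (simulated here).
    return [{"close": c} for c in (10.0, 10.5, None, 11.0)]

def clean(rows):
    # Stage 2: drop rows with missing values (real pipelines often
    # forward-fill instead of dropping).
    return [r for r in rows if r["close"] is not None]

def add_features(rows):
    # Stage 3: attach a toy indicator (close-to-close return).
    out, prev = [], None
    for r in rows:
        r = dict(r, ret=0.0 if prev is None else r["close"] / prev - 1.0)
        prev = r["close"]
        out.append(r)
    return out

def pipeline(source):
    # Chaining the stages yields one automated, repeatable run.
    return add_features(clean(access(source)))

data = pipeline("yahoofinance")
```

Because every stage is a pure function of the previous stage's output, the same pipeline can be re-run on each dynamic dataset refresh to regenerate benchmarks.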

Supported Data Sources:

| Data Source  | Type                   | Range and Frequency      | Request Limits       | Raw Data                                  | Preprocessed Data   |
|--------------|------------------------|--------------------------|----------------------|-------------------------------------------|---------------------|
| Akshare      | CN Securities, A-share | 2017-now, 1 day          | NA                   | OHLCV                                     | Prices & Indicators |
| Alpaca       | US Stocks, ETFs        | 2015-now, 1 min          | Account-specific     | OHLCV                                     | Prices & Indicators |
| Baostock     | CN Securities          | 1990-12-19-now, 5 min    | Account-specific     | OHLCV                                     | Prices & Indicators |
| Binance      | Cryptocurrency         | API-specific, 1 s, 1 min | API-specific         | Tick-level daily aggregated trades, OHLCV | Prices & Indicators |
| CCXT         | Cryptocurrency         | API-specific, 1 min      | API-specific         | OHLCV                                     | Prices & Indicators |
| IEXCloud     | NMS US securities      | 1970-now, 1 day          | 100 per second per IP| OHLCV                                     | Prices & Indicators |
| JoinQuant    | CN Securities          | 2005-now, 1 min          | 3 requests each time | OHLCV                                     | Prices & Indicators |
| QuantConnect | US Securities          | 1998-now, 1 s            | NA                   | OHLCV                                     | Prices & Indicators |
| RiceQuant    | CN Securities          | 2005-now, 1 ms           | Account-specific     | OHLCV                                     | Prices & Indicators |
| Tushare      | CN Securities, A-share | -now, 1 min              | Account-specific     | OHLCV                                     | Prices & Indicators |
| WRDS         | US Securities          | 2003-now, 1 ms           | 5 requests each time | Intraday Trades                           | Prices & Indicators |
| YahooFinance | US Securities          | Frequency-specific, 1 min| 2,000/hour           | OHLCV                                     | Prices & Indicators |

OHLCV: open, high, low, and close prices; volume

adjusted_close: adjusted close price

Technical indicators users can add: 'macd', 'boll_ub', 'boll_lb', 'rsi_30', 'dx_30', 'close_30_sma', 'close_60_sma'. Users can also add their own features.
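As a sketch of what two of those indicators compute, here is the underlying math done with pandas alone on a toy price series. FinRL-Meta's processors wrap indicator libraries for this; the code below is only the arithmetic, and the 12/26 MACD spans are the conventional defaults, not a FinRL-Meta setting.

```python
# Toy computation of 'close_30_sma' and 'macd' from a close-price series.
import pandas as pd

close = pd.Series(range(1, 41), dtype=float)  # toy close prices 1..40

close_30_sma = close.rolling(30).mean()            # 30-bar simple moving average
ema12 = close.ewm(span=12, adjust=False).mean()    # fast exponential MA
ema26 = close.ewm(span=26, adjust=False).mean()    # slow exponential MA
macd = ema12 - ema26                               # standard MACD line

df = pd.DataFrame({"close": close,
                   "close_30_sma": close_30_sma,
                   "macd": macd})
```

Custom features are added the same way: compute a column from `close` (or any OHLCV column) and join it onto the preprocessed DataFrame.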

Plug-and-Play (PnP)

In the development pipeline, we separate market environments from the data layer and the agent layer. A DRL agent can be directly plugged into our environments. Different agents/algorithms can be compared by running on the same benchmark environment for fair evaluations.
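The plug-and-play contract boils down to the gym-style `reset`/`step` interface. The toy environment below is a hypothetical illustration of that surface, not a FinRL-Meta class; a real FinRL-Meta environment exposes the same two methods backed by market data, which is why any DRL agent can be dropped in.

```python
# Minimal sketch of the gym-style interface that makes agents pluggable.
class ToyTradingEnv:
    """One asset; actions: 0 = hold, 1 = buy, 2 = sell (flatten)."""

    def __init__(self, prices):
        self.prices = prices
        self.t = 0
        self.position = 0

    def reset(self):
        self.t, self.position = 0, 0
        return self._state()

    def step(self, action):
        if action == 1:
            self.position = 1
        elif action == 2:
            self.position = 0
        self.t += 1
        done = self.t >= len(self.prices) - 1
        # Reward: one-bar P&L of the held position.
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        return self._state(), reward, done, {}

    def _state(self):
        return (self.prices[self.t], self.position)

env = ToyTradingEnv([10.0, 11.0, 10.5, 12.0])
state = env.reset()
total, done = 0.0, False
while not done:
    state, reward, done, _ = env.step(1)  # a DRL agent would choose here
    total += reward
```

Because the agent only sees `reset`, `step`, and the returned state/reward, swapping ElegantRL for Stable-Baselines3 or RLlib requires no change to the environment.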

The following DRL libraries are supported:

  • ElegantRL: Lightweight, efficient and stable DRL implementation using PyTorch.
  • Stable-Baselines3: Improved DRL algorithms based on OpenAI Baselines.
  • RLlib: An open-source DRL library that offers high scalability and unified APIs.

A demonstration notebook for plug-and-play with ElegantRL, Stable-Baselines3, and RLlib: Plug and Play with DRL Agents

"Training-Testing-Trading" Pipeline

We employ a training-testing-trading pipeline. First, a DRL agent is trained on a training dataset and fine-tuned (adjusting hyperparameters) on a testing dataset. Then the agent is backtested on historical data, or deployed in a paper/live trading market.

This pipeline addresses the information leakage problem by separating the training/testing periods from the trading period.

Such a unified pipeline also allows fair comparisons among different algorithms.
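The split itself is just non-overlapping date ranges. Here is a hedged sketch, with illustrative boundary dates that are not prescribed by FinRL-Meta:

```python
# Hypothetical training-testing-trading split by chronological date ranges.
from datetime import date

def split(rows, train_end, test_end):
    # rows: list of (date, bar) tuples in chronological order.
    train = [r for r in rows if r[0] <= train_end]
    test = [r for r in rows if train_end < r[0] <= test_end]
    trade = [r for r in rows if r[0] > test_end]
    return train, test, trade

# One toy bar per month of 2021.
rows = [(date(2021, m, 1), None) for m in range(1, 13)]
train, test, trade = split(rows, date(2021, 6, 30), date(2021, 9, 30))
# train: Jan-Jun, test: Jul-Sep, trade: Oct-Dec. The three periods never
# overlap, so no information from the trading period leaks into training.
```

Hyperparameters are tuned only against the testing slice; the trading slice is touched exactly once, which is what makes the final comparison fair.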

Our Vision

For future work, we plan to build a multi-agent-based market simulator consisting of tens of thousands of agents, namely, a FinRL-Metaverse. First, FinRL-Metaverse aims to build a universe of market environments, like the XLand environment (source) and planet-scale climate forecast (source) by DeepMind. To improve performance on large-scale markets, we will employ GPU-based massively parallel simulation, just as Isaac Gym does (source). Moreover, it will be interesting to explore the deep evolutionary RL framework (source) to simulate the markets. Our final goal is to provide insights into complex market phenomena and offer guidance for financial regulations through FinRL-Meta.

Citing FinRL-Meta

FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning

@article{finrl_meta_2022,
    author = {Liu, Xiao-Yang and Xia, Ziyi and Rui, Jingyang and Gao, Jiechao and Yang, Hongyang and Zhu, Ming and Wang, Christina Dan and Wang, Zhaoran and Guo, Jian},
    title = {{FinRL-Meta}: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning},
    journal = {NeurIPS},
    year = {2022}
}

FinRL-Meta: Data-Driven Deep Reinforcement Learning in Quantitative Finance

@article{finrl_meta_2021,
    author  = {Liu, Xiao-Yang and Rui, Jingyang and Gao, Jiechao and Yang, Liuqing and Yang, Hongyang and Wang, Zhaoran and Wang, Christina Dan and Guo, Jian},
    title   = {{FinRL-Meta}: Data-Driven Deep Reinforcement Learning in Quantitative Finance},
    journal = {Data-Centric AI Workshop, NeurIPS},
    year    = {2021}
}


Disclaimer: Nothing herein is financial advice or a recommendation to trade real money. Please use common sense and always consult a professional first before trading or investing.

More Repositories

1. FinGPT (Jupyter Notebook, 12,072 stars): FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
2. FinRL (Jupyter Notebook, 9,186 stars): FinRL: Financial Reinforcement Learning. 🔥
3. ElegantRL (Python, 3,490 stars): Massively Parallel Deep Reinforcement Learning. 🔥
4. FinRL-Trading (Jupyter Notebook, 1,905 stars): For trading. Please star.
5. FinNLP (Jupyter Notebook, 1,019 stars): Democratizing Internet-scale financial data.
6. FinRL-Tutorials (Jupyter Notebook, 719 stars): Tutorials. Please star.
7. FinRL_Podracer (Python, 336 stars): Cloud-native Financial Reinforcement Learning.
8. Awesome_AI4Finance (127 stars): Resources.
9. RLSolver (Python, 112 stars): Solvers for NP-hard and NP-complete problems with an emphasis on high-performance GPU computing.
10. FinML (Jupyter Notebook, 72 stars): FinML: A Practical Machine Learning Framework for Dynamic Stock Selection.
11. FinRL_Crypto (Python, 71 stars): FinRL_Crypto: Cryptocurrency trading of FinRL.
12. FinRL_Market_Simulator (Python, 66 stars)
13. Quantifying-ESG-Alpha-using-Scholar-Big-Data-ICAIF-2020 (Jupyter Notebook, 64 stars): Quantifying ESG Alpha using Scholar Big Data: An Automated Machine Learning Approach.
14. Deep-Reinforcement-Learning-for-Stock-Trading-DDPG-Algorithm-NIPS-2018 (Python, 62 stars): Practical Deep Reinforcement Learning Approach for Stock Trading. NeurIPS 2018 AI in Finance.
15. FinRL-Blogs (56 stars): Blogs, tutorials, news. Please star.
16. FinRobot (Python, 49 stars): FinRobot: An Open-Source AI Agent Platform for Financial Applications using LLMs.
17. Financial-News-for-Stock-Prediction-using-DP-LSTM-NIPS-2019 (Python, 33 stars): Differential Privacy-inspired LSTM for Stock Prediction Using Financial News. NeurIPS Robust AI in Financial Services 2019.
18. Liquidation-Analysis-using-Multi-Agent-Reinforcement-Learning-ICML-2019 (Jupyter Notebook, 24 stars): Multi-agent Reinforcement Learning for Liquidation Strategy Analysis. ICML 2019 AI in Finance.
19. TransportRL (Python, 23 stars): High-performance RL library for transportation problems, e.g., autonomous driving, traffic light control, UAV control, and path planning.
20. Popular-RL-Algorithms (Jupyter Notebook, 22 stars)
21. AI4Finance-Education (22 stars): Education channel.
22. FinEmotion (Python, 20 stars)
23. Risk-Management-using-Deep-Learning-for-Midterm-Stock-Prediction-KDD-2019 (Jupyter Notebook, 18 stars): Risk Management via Anomaly Circumvent: Mnemonic Deep Learning for Midterm Stock Prediction. KDD 2019.
24. FinRL_Imitation_Learning (Jupyter Notebook, 15 stars)
25. Dynamic-Stock-Recommendation-Machine_Learning-Published-Paper-IEEE (Jupyter Notebook, 13 stars)
26. Quantum-Tensor-Networks-for-Variational-Reinforcement-Learning-NeurIPS-2020 (Python, 13 stars): Quantum Tensor Networks for Variational Reinforcement Learning. NeurIPS 2020.
27. AI4Finance_Job_Info (6 stars): Job info at the intersection of AI, big data, and finance.
28. FinGPT-Earnings-Call-LLM-Agent (Jupyter Notebook, 5 stars)
29. Optimistic-Bull-Pessimistic-Bear-DRL-Stock-Portfolio-Allocation-ICML-2019 (5 stars)
30. Awesome_FinRL (5 stars): FinRL resources: papers, projects.
31. Scholar-Data-Driven-Alpha-in-AI-Industry-IEEE-BigData-2019 (Jupyter Notebook, 5 stars): Practical Machine Learning Approach to Capture the Scholar Data Driven Alpha in AI Industry. IEEE BigData 2019.
32. ML_Price_Prediction (5 stars): Predict price.
33. .github (4 stars)
34. PlotFigs (Python, 1 star): Plot figures for academic papers.
35. FinGPT-Research (Jupyter Notebook, 1 star)