
A3C trading

Note: apologies for the misleading naming; please use A3C_trading.py for training and test_trading.py for testing.

Trading with recurrent actor-critic reinforcement learning. For details, see the paper and the more detailed old report.

(Full UML diagram of the project)

Configuration: config.py

This file contains all the paths and global variables that need to be set up.
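
For orientation, here is a minimal sketch of the kind of settings such a config file typically holds; the variable names below are hypothetical and may not match the repository's actual ones.

```python
# Hypothetical config.py sketch: paths and global hyperparameters.
import os

_BASE = os.path.dirname(os.path.abspath(__file__))

# Hypothetical paths; point these at your local directories before running.
DATA_DIR = os.path.join(_BASE, "data")                 # preprocessed dataset
MODEL_DIR = os.path.join(_BASE, "models")              # saved checkpoints
TENSORBOARD_DIR = os.path.join(_BASE, "tensorboard")   # training logs

# Hypothetical global hyperparameters.
NUM_WORKERS = 8       # number of parallel A3C workers
GAMMA = 0.99          # discount factor
LEARNING_RATE = 1e-4  # optimizer step size
```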

Dataset: download from GDrive

After setting up config.py, run this file to download and preprocess the data needed for training and evaluation.
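
Purely for illustration, downloading a shared file from Google Drive programmatically can be done with the gdown package, assuming you know the file ID; the repository's own script already handles the download and preprocessing, so this is not part of its API.

```python
# Illustrative only: fetch a file from Google Drive by its ID using gdown
# (an assumed third-party dependency, not necessarily used by this repo).
import gdown

file_id = "YOUR_GDRIVE_FILE_ID"  # hypothetical placeholder
gdown.download(f"https://drive.google.com/uc?id={file_id}",
               "data/raw_dataset.csv", quiet=False)
```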

Environment: trader_gym.py

An OpenAI Gym-like environment class.
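
A Gym-like environment exposes reset() and step(action), with step returning (observation, reward, done, info). Below is a rough, simplified sketch of what such a trading environment can look like; the class name, action encoding, and reward definition are hypothetical and not the ones used in trader_gym.py.

```python
# Simplified sketch of a Gym-like trading environment (hypothetical names).
import numpy as np

class TradingEnv:
    def __init__(self, prices, window=64):
        self.prices = np.asarray(prices)  # 1-D array of prices
        self.window = window              # length of the observation window
        self.t = window
        self.position = 0                 # -1 short, 0 flat, +1 long

    def reset(self):
        self.t = self.window
        self.position = 0
        return self.prices[self.t - self.window:self.t]

    def step(self, action):
        # Hypothetical action encoding: 0 = short, 1 = flat, 2 = long.
        self.position = action - 1
        self.t += 1
        # Reward: position times the price change over the last step.
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t >= len(self.prices) - 1
        obs = self.prices[self.t - self.window:self.t]
        return obs, reward, done, {}
```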

Model: A3C_class.py

This file contains the AC_network, Worker, and Test_Worker classes.
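
For intuition, here is a rough sketch of a recurrent actor-critic network, written in PyTorch purely for illustration and not necessarily matching the repository's own AC_network implementation: an LSTM over the input features feeds a policy head (actor) and a value head (critic).

```python
# Illustrative recurrent actor-critic skeleton (hypothetical names).
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    def __init__(self, n_features, n_actions, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)  # actor: action logits
        self.value = nn.Linear(hidden, 1)           # critic: state value

    def forward(self, x, state=None):
        # x: (batch, time, n_features)
        out, state = self.lstm(x, state)
        last = out[:, -1]                           # hidden state at the final step
        return self.policy(last), self.value(last), state
```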

Training: A3C_training.py

Run this file, preferably in tmux. During training it will create files in tensorboard_dir and in model_dir.
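
Conceptually, each A3C worker collects an n-step rollout, computes advantages against the critic's value estimates, and combines a policy-gradient term, a value-regression term, and an entropy bonus into one loss before pushing gradients to the shared network. A hedged sketch of that loss is below (again in PyTorch and with hypothetical names; the repository's actual training code lives in A3C_training.py).

```python
# Sketch of an A3C-style loss over one rollout (hypothetical shapes/names).
import torch
import torch.nn.functional as F

def a3c_loss(logits, values, actions, returns, beta=0.01):
    # logits: (T, n_actions), values: (T, 1), actions: (T,), returns: (T,)
    advantages = returns - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages.detach()).mean()   # actor term
    value_loss = advantages.pow(2).mean()                   # critic term
    entropy = -(log_probs * log_probs.exp()).sum(dim=1).mean()
    return policy_loss + 0.5 * value_loss - beta * entropy
```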

Testing: A3C_testing.ipynb

A Jupyter notebook containing everything needed to plot the results.
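
As an illustration of the kind of figure such a notebook produces, plotting the cumulative test-period reward with matplotlib might look like this (the data below is a random placeholder, not output of the repository).

```python
# Illustrative plot of cumulative reward over the test period.
import numpy as np
import matplotlib.pyplot as plt

rewards = np.random.randn(1000) * 1e-3   # placeholder for per-step test rewards
plt.plot(np.cumsum(rewards))
plt.xlabel("time step")
plt.ylabel("cumulative reward")
plt.title("Test-period performance (illustrative)")
plt.show()
```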

Cite as:

@article{ponomarev2019using,
  title={Using Reinforcement Learning in the Algorithmic Trading Problem},
  author={Ponomarev, ES and Oseledets, IV and Cichocki, AS},
  journal={Journal of Communications Technology and Electronics},
  volume={64},
  number={12},
  pages={1450--1457},
  year={2019},
  publisher={Springer}
}