Grid_Scale_Energy_Storage_Q_Learning
Final Project for AA 228: Decision-Making under Uncertainty

Abstract: Grid-scale energy storage systems (ESSs) can participate in multiple grid applications simultaneously, allowing a single system to capture multiple value streams, a practice termed "value-stacking". This paper introduces a decision-making framework that uses reinforcement learning to quantify the financial advantage of value-stacking grid-scale energy storage, applied to a single residential home with energy storage. A dispatch policy is learned via Q-learning that allocates the energy storage between two grid applications: time-of-use (TOU) bill reduction and energy arbitrage on the locational marginal price (LMP). The dispatch produced by this learned policy is then compared against several alternatives: a baseline with no dispatch, a naively determined dispatch, and the optimal dispatches for TOU and LMP considered separately. The Q-learned policy achieved the lowest cost, demonstrating the financial advantage of value-stacking.
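To make the approach concrete, the following is a minimal sketch of tabular Q-learning for the two-application dispatch described above. All specifics here are illustrative assumptions, not the project's actual implementation: the state is (hour of day, discretized battery state of charge), the actions are charge/idle/discharge by one SOC level, and the TOU and LMP price series are placeholder values. The reward models value-stacking in a simplified way: net grid draw (home load plus battery charging) is billed at the TOU rate, while any exported surplus is credited at the LMP.

```python
import random

HOURS = 24
SOC_LEVELS = 5          # discretized battery state of charge
ACTIONS = [-1, 0, 1]    # discharge, idle, charge (one SOC level per hour)

# Placeholder price series (assumptions, not real tariff or market data):
TOU = [0.15 if h < 16 else 0.35 for h in range(HOURS)]            # $/kWh, off-peak then peak
LMP = [0.05 + (0.04 if 17 <= h < 21 else 0.0) for h in range(HOURS)]  # $/kWh, evening spike

LOAD = 1.0  # kWh consumed by the home each hour (assumed constant)


def step(hour, soc, action):
    """Apply an action; return (next_hour, next_soc, reward)."""
    soc2 = min(max(soc + action, 0), SOC_LEVELS - 1)
    delta = soc2 - soc            # energy actually moved into the battery (kWh)
    net = LOAD + delta            # net draw from the grid
    # Positive net draw is billed at TOU; exported surplus is credited at LMP.
    cost = net * TOU[hour] if net > 0 else net * LMP[hour]
    return (hour + 1) % HOURS, soc2, -cost


def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    Q = {(h, s): [0.0] * len(ACTIONS)
         for h in range(HOURS) for s in range(SOC_LEVELS)}
    for _ in range(episodes):
        hour, soc = 0, 0
        for _ in range(HOURS):
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(hour, soc)][i])
            nh, ns, r = step(hour, soc, ACTIONS[a])
            # Q-learning update toward the one-step bootstrapped target.
            Q[(hour, soc)][a] += alpha * (r + gamma * max(Q[(nh, ns)])
                                          - Q[(hour, soc)][a])
            hour, soc = nh, ns
    return Q


if __name__ == "__main__":
    Q = train()
    # Greedy policy: best action index for each (hour, soc) state.
    policy = {s: max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in Q}
```

Under these assumptions, the learned greedy policy tends to charge during low-price hours and discharge during the TOU peak, which is the qualitative behavior the value-stacking comparison in the paper evaluates.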