Guided A* Search for Scheduling Power Generation Under Uncertainty (Papers Track)

Patrick de Mars (UCL); Aidan O'Sullivan (UCL)

Topics: Power & Energy · Reinforcement Learning


Increasing renewables penetration motivates the development of new approaches to operating power systems under uncertainty. We apply a novel approach combining self-play reinforcement learning (RL) and traditional planning to solve the unit commitment problem, an essential power systems scheduling task. Applied to problems with stochastic demand and wind generation, our method achieves significant cost reductions and improvements to security of supply compared with an industry-standard mixed-integer linear programming benchmark. Applying a carbon price of \$50/tCO$_2$ achieves carbon emissions reductions of up to 10\%. Our results demonstrate scalability to larger problems than those tackled in the existing literature, and indicate the potential for RL to contribute to decarbonising power systems.
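The core idea of combining a learned policy with planning can be illustrated with a minimal sketch of policy-guided A* search on a toy unit commitment instance. Everything below (generator data, the `policy_scores` stand-in for a trained network, the `TOP_K` branching limit) is illustrative and assumed, not the paper's actual implementation: the search commits generators period by period, and the "policy" prunes the branching factor to its top-ranked actions while an admissible heuristic guides the priority queue.

```python
import heapq
import itertools

# Toy unit commitment: choose on/off status for each generator in each
# period so committed capacity covers demand at minimum cost.
# Hypothetical example data (not from the paper):
GENS = [  # (capacity in MW, cost per period if committed)
    (100, 50),
    (60, 40),
    (40, 35),
]
DEMAND = [120, 90, 150]  # MW demand per period
TOP_K = 3  # branching limit imposed by the guiding "policy"

def actions():
    # All on/off commitment combinations for the generators.
    return list(itertools.product([0, 1], repeat=len(GENS)))

def step_cost(action, demand):
    # Cost of a commitment for one period, or None if demand is unmet.
    cap = sum(g[0] for g, a in zip(GENS, action) if a)
    if cap < demand:
        return None  # infeasible: load not served
    return sum(g[1] for g, a in zip(GENS, action) if a)

def policy_scores(demand):
    # Stand-in for a learned policy: rank feasible commitments by cost
    # and keep only the top-k, shrinking the search tree.
    scored = sorted(
        (c, a) for a in actions()
        if (c := step_cost(a, demand)) is not None
    )
    return [a for _, a in scored[:TOP_K]]

def heuristic(t):
    # Admissible lower bound: cheapest feasible cost for each
    # remaining period, ignoring inter-period coupling.
    return sum(
        min(c for a in actions() if (c := step_cost(a, d)) is not None)
        for d in DEMAND[t:]
    )

def guided_astar():
    # Frontier entries: (f = g + h, g = cost so far, period t, schedule).
    frontier = [(heuristic(0), 0, 0, ())]
    while frontier:
        f, g, t, sched = heapq.heappop(frontier)
        if t == len(DEMAND):
            return g, sched  # all periods committed
        for a in policy_scores(DEMAND[t]):
            c = step_cost(a, DEMAND[t])
            heapq.heappush(
                frontier,
                (g + c + heuristic(t + 1), g + c, t + 1, sched + (a,)),
            )
    return None

cost, schedule = guided_astar()
print(cost, schedule)  # → 225 ((1, 0, 1), (1, 0, 0), (1, 1, 0))
```

In the paper's setting the policy comes from self-play RL and the problem is stochastic; here the policy is a cost heuristic and the demand is deterministic, so the sketch only shows the search structure, not the learning.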
