Ocean Wave Energy: Optimizing Reinforcement Learning Agents for Effective Deployment (Papers Track)

Vineet Gundecha (Hewlett Packard Enterprise); Sahand Ghorbanpour (Hewlett Packard Enterprise); Ashwin Ramesh Babu (Hewlett Packard Enterprise Labs); Avisek Naug (Hewlett Packard Enterprise); Alexandre Pichard (Carnegie Clean Energy); Mathieu Cocho (Carnegie Clean Energy); Soumyendu Sarkar (Hewlett Packard Enterprise)

NeurIPS 2023 Poster
Power & Energy · Reinforcement Learning


Fossil fuel energy production is a leading cause of climate change. While wind and solar energy have made advancements, ocean waves, a more consistent clean energy source, remain underutilized. Wave Energy Converters (WECs) transform wave power into electric energy. To be economically viable, modern WECs need sophisticated real-time controllers that boost energy output and minimize mechanical stress, thus lowering the levelized cost of energy (LCOE). This paper shows how a Reinforcement Learning (RL) controller can outperform the default spring-damper controller for the complex spread waves encountered at sea, enhancing wave energy's viability. Using the Proximal Policy Optimization (PPO) algorithm with Transformer variants as function approximators, the RL controllers optimize multi-generator WECs, leveraging wave sensor data to pursue multiple cost-efficiency goals. After successful tests in the EuropeWave\footnote{EuropeWave: https://www.europewave.eu/} project's emulator tank, the platform is planned for deployment. We discuss the challenges of deployment at the BiMEP\footnote{BiMEP: https://www.bimep.com/en/} site and how we had to tune the RL controller to address them. The RL controller outperforms the default spring-damper controller under BiMEP conditions by 22.8% in energy capture. Enhancing wave energy's economic viability will expedite the transition to clean energy, reducing carbon emissions and fostering a healthier climate.
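The abstract states that the controller is trained with PPO. As a minimal, standalone sketch of the core training objective (not the authors' implementation, which additionally uses Transformer function approximators over wave sensor inputs), the clipped surrogate loss at the heart of PPO can be written as follows; the function name, array inputs, and the default `clip_eps=0.2` are illustrative assumptions:

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective used by PPO.

    Illustrative only: the paper's controller is trained with PPO,
    but this function is a generic sketch, not the authors' code.
    """
    # Probability ratio between the updated and the behavior policy.
    ratio = np.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    # Clipping the ratio bounds how far a single update can move the policy.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the surrogate, so the loss is the negated mean of the
    # pessimistic (elementwise minimum) objective.
    return -np.mean(np.minimum(unclipped, clipped))
```

When the new and old log-probabilities coincide, the ratio is 1 everywhere and the loss reduces to the negated mean advantage; large ratios on positive-advantage samples are capped at `1 + clip_eps`, which is what keeps the policy update conservative.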