Interpretable Machine Learning for power systems: Establishing Confidence in SHapley Additive exPlanationS (Papers Track)

Tabia Ahmad (University of Strathclyde); Robert Hamilton (Shell); Panagiotis Papadopoulos (University of Manchester); Samuel Chevalier (University of Vermont); Ilgiz Murzakhanov (Technical University of Denmark); Rahul Nellikkath (Technical University of Denmark); Jochen Bernhard Stiasny (Technical University of Denmark); Spyros Chatzivasileiadis (Technical University of Denmark)

Topics: Power & Energy; Interpretable ML

Abstract

Interpretable Machine Learning (IML) is expected to remove significant barriers to the application of Machine Learning (ML) algorithms in power systems. This work showcases the benefits of SHapley Additive exPlanations (SHAP) for understanding the outcomes of ML models, which are increasingly used to optimise power systems with a growing share of Renewable Energy (RE), in support of worldwide calls for decarbonisation in response to climate change. Specifically, we demonstrate that the Power Transfer Distribution Factors (PTDF), a linear sensitivity index grounded in power system physics, can be derived from SHAP values: we take the derivatives of the SHAP values of an ML model trained to learn line flows from generator power injections, using a DC power-flow case on a benchmark test network. By demonstrating that SHAP values can be related back to the physics that underpin the power system, we build confidence in the explanations SHAP can offer.
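The paper's own code is not included here, so the following is a minimal Python sketch of the core idea on a synthetic example. The 3-bus PTDF row, the use of plain linear regression, and the finite-difference step size are illustrative assumptions, not taken from the paper: for a model trained on DC power-flow data, differentiating the SHAP values with respect to the generator injections should recover the PTDF sensitivities.

```python
# Minimal sketch (not the authors' code): on a synthetic DC power-flow case,
# the derivatives of SHAP values w.r.t. generator injections recover the PTDF.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

ptdf_row = np.array([0.6, -0.3, 0.1])  # hypothetical PTDF entries for one line
P = rng.normal(size=(500, 3))          # synthetic generator power injections
f = P @ ptdf_row                       # DC power flow: line flow is linear in injections

model = LinearRegression().fit(P, f)   # ML surrogate of the line flow

explainer = shap.LinearExplainer(model, P)  # exact SHAP values for a linear model
x = P[:1]
phi = explainer.shap_values(x)

# Finite-difference derivative of each SHAP value w.r.t. its own injection,
# d(phi_j)/d(p_j), which for a linear model equals the learned coefficient.
eps = 1e-4
sens = np.zeros(3)
for j in range(3):
    x_pert = x.copy()
    x_pert[0, j] += eps
    phi_pert = explainer.shap_values(x_pert)
    sens[j] = (phi_pert[0, j] - phi[0, j]) / eps

print(np.round(sens, 4))  # approximately [0.6, -0.3, 0.1], matching the PTDF row
```

Since SHAP values of a linear model take the form phi_j = w_j (p_j - E[p_j]), the derivative is simply w_j, which the regression has fit to the PTDF entry; the paper extends this reasoning to build trust in SHAP explanations of power system ML models more generally.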
