
As AI models see their first applications in the development of intelligent energy systems, especially for the management of renewable energy sources and their operations & maintenance (O&M), Explainable AI will be the driving force ensuring the reliability, efficiency and affordability of non-conventional energy sources on the path to a low-carbon future.
Artificial Intelligence (AI) models, which have long been popular in domains such as healthcare, finance and marketing, have recently started to receive increased attention in the development and management of intelligent energy systems, particularly for making renewable energy sources such as wind and solar more reliable. This is a welcome development: AI can play a significant role in tackling climate change by reducing the associated costs of O&M, enhancing efficiency (e.g. maximizing the power generated through optimization) and reducing the downtimes of energy sources (e.g. by predicting failures before they lead to unexpected shutdowns). However, a plethora of associated risks and challenges prevent the operators of modern energy sources from widely adopting AI in their everyday routine, including for autonomous decision support and O&M planning. Some of these major challenges can be posed as interesting questions, as enunciated below:
- When an AI model predicts a fault in advance (e.g. in specific sub-components of wind turbines), how do we know what phenomenon would lead to the fault? If we already know what is likely to cause failures in energy systems, appropriate decisions can be taken in advance to avert them, but do black-box AI models help here?
- As no AI model can always deliver accurate decisions, how do we deal with false alarms, inaccurate decisions, missed detections and the like? Unless we really know how and why a model makes (or does not make) certain decisions under different scenarios, we cannot be confident in the model's outputs, leading to redundant costs of managing energy systems in such situations.
- Not every engineer and technician who performs O&M planning on site for present-day energy systems can be expected to have a background in AI and data science. How do we prevent AI models from misleading key personnel through inaccurate decisions, poor data or unreasoned choices?
All these questions point to one overall issue: unreliable AI models, and a fear of the inaccurate decisions and choices they may make due to their conventionally black-box nature. This pressing challenge not only makes modern energy operators reluctant to adopt AI – leading them to rely on conventional (manual or semi-automatic) techniques of O&M and energy management – but also significantly affects the end costs to consumers, who are largely households and businesses. This leads to a very simple notion: only if energy operators adopt AI models for autonomous decision support which provide accurate, transparent and well-reasoned decisions can modern energy sources become affordable and reliable, fulfilling the United Nations' Sustainable Development Goal (SDG) 7.
So how do we make AI models reliable and develop them to provide well-reasoned and accurate decisions? Explainable AI is the way forward.

What is Explainable AI?
Explainable AI, simply put, is the domain of AI models and techniques that focus on generating decisions which are accurate, reliable and transparent. While conventional AI techniques can generate predictions with high accuracy, their lack of transparency generally makes them infeasible in applications where efficiency and reliability are vital, as in present-day energy systems. Given the importance of this domain, companies such as Microsoft, IBM and Google have focused significantly on building trust in conventional AI models through responsible machine learning, which is unachievable without Explainable AI. While the area of Explainable AI is in itself very vast, some major Explainable AI techniques are likely to play an immensely powerful role in the energy systems domain:
- Transparent AI models for reliable and trustworthy decisions: The most basic family of Explainable AI techniques involves transparent AI models, which can generate a rationale for their predictions, for example in the form of the importance of the parameters on the basis of which they generate (or do not generate) certain predictions. For instance, given historical data from solar panels – Photovoltaic (PV) modules – Explainable AI techniques can not only facilitate early prediction of faults and detection of inconsistencies in a panel's operation, but can also isolate the specific parameters involved (such as irradiance, current, power or voltage), giving field engineers & technicians a better understanding of the context of such faults. And what does a better understanding of faults (and the parameters which cause them) ultimately lead to? Reduced O&M costs, better savings for energy system operators, reduced downtimes, and therefore affordable, reliable and efficient energy.
Similar models can be utilized for other types of energy sources, e.g. wind energy, where Explainable AI models can identify which of a turbine's sensor parameters require early investigation (say an early fault is predicted in the gearbox of a turbine; the model may indicate likely causes such as high gearbox oil temperature), helping turbine engineers fix issues before faults occur and making wind energy systems potentially more reliable. Another aspect is forecasting the energy output of such systems, e.g. solar or wind power production, where Explainable AI models can indicate how much power an energy system can generate in future periods (along with the reasons for any shortfalls). This helps grid and energy source operators better plan for periods of high demand and prevent black-outs, given that electricity grids need to balance the energy fed into the grid against the electricity consumed to keep power plants operating within normal frequency ranges.
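To make the idea of "isolating the parameters behind a fault" concrete, here is a minimal sketch in Python. It is not a production model: the PV data is synthetic, the feature names are hypothetical, and the importance measure is a simple class-separation score (how strongly each parameter differs between normal and faulty operation), standing in for the feature-importance outputs of a real transparent model.

```python
# Toy example: rank which PV parameters best separate faulty from normal
# operation. All data and feature names are made up for illustration.
import random
random.seed(42)

FEATURES = ["irradiance", "current", "voltage", "module_temp"]

def make_sample(faulty):
    # Hypothetical behaviour: the simulated fault mainly depresses voltage.
    return {
        "irradiance": random.gauss(800, 50),
        "current": random.gauss(8, 0.5),
        "voltage": random.gauss(24 if faulty else 36, 1.0),
        "module_temp": random.gauss(45, 3),
    }

data = [(make_sample(False), 0) for _ in range(200)] + \
       [(make_sample(True), 1) for _ in range(200)]

def separation_score(feature):
    # Absolute difference of class means, scaled by the pooled spread.
    normal = [x[feature] for x, y in data if y == 0]
    faulty = [x[feature] for x, y in data if y == 1]
    mean = lambda v: sum(v) / len(v)
    std = lambda v: (sum((e - mean(v)) ** 2 for e in v) / len(v)) ** 0.5
    pooled = (std(normal) + std(faulty)) / 2 or 1e-9
    return abs(mean(normal) - mean(faulty)) / pooled

ranking = sorted(FEATURES, key=separation_score, reverse=True)
print(ranking)  # voltage ranks first, pointing the engineer at the cause
```

An engineer reading such a ranking immediately knows which sensor signal to investigate first – exactly the kind of context a black-box 0/1 prediction cannot give.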
- Natural language generation (NLG) for well-reasoned, human-intelligible decisions: While AI models have traditionally been excellent at discovering discriminative patterns across datasets (such as distinguishing between periods of normal operation and failures of an energy system), they only provide decisions in a binary (or cardinal) manner. AI models are good at learning with numbers, just as our computers work with numbers (even non-numeric data such as text and images is translated into numeric form to be machine-readable). For instance, if a conventional AI model has been trained to predict faults as 1s and normal operation as 0s, the model's decisions are limited to exactly that, without any further reasoning or rationale behind the predictions. This is a major challenge, as these 1s and 0s do not reveal much beyond the fact that a fault is predicted to occur. There is a significant dearth of information on what causes the faults (such as the exact alarms leading up to them) and, above all, on how to fix or avert them. While such decisions can be made by engineers & technicians based on judgment (or, more precisely put, a best guess), this defeats the purpose of automated reasoning and fully autonomous intelligent energy systems. Unlike machines, humans are less capable of understanding raw numbers, especially when they only mean 1s and 0s. What is the best way to communicate well-reasoned decisions to a human? Language is the key: humans understand words and sentences better, and take far less time to grasp the context of sentences and natural language phrases.
This leads us to the amazing domain of Natural Language Generation (NLG), another family of Explainable AI models which can provide human-intelligible predictions and decisions in the form of natural language phrases. In other words, NLG systems let your data speak to you. How can NLG help in the domain of energy systems? Especially for renewable energy sources – e.g. solar energy, which relies on sufficient sunlight, or wind energy, which relies on enough wind to generate power – NLG can provide simple messages on the environmental conditions, very similar to the weather forecast you see every day on news channels and in newspapers (such as the predicted wind speed, or sunny versus overcast weather). This is a very basic example of the applications NLG can have, but the possibilities are likely endless. For complex engineering systems such as wind turbines, NLG models can provide human-intelligible messages for predicted faults in turbine sub-components, as well as maintenance action reports on the strategies engineers & technicians can adopt to fix or avert failures. This leads to better context for the model's decisions, well-reasoned choices and outputs, and thereby more trustworthy AI models suitable for real-world utilization in present-day energy systems.
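The simplest form of NLG is template-based: the raw model output (a fault code, a confidence score, the contributing signals) is slotted into a human-readable sentence. The sketch below illustrates that idea; the component names, confidence value and recommended action are all hypothetical, and real NLG systems for O&M would be far richer.

```python
# Minimal template-based NLG sketch: turn a raw fault prediction into a
# message a technician can act on. All values here are illustrative.

def explain_prediction(component, probability, drivers):
    """Render a model's fault prediction as a natural-language message."""
    driver_text = ", ".join(f"{name} ({value})" for name, value in drivers)
    return (f"Warning: a fault is predicted in the {component} "
            f"(confidence {probability:.0%}). "
            f"Main contributing signals: {driver_text}. "
            f"Recommended action: inspect the {component} before the next shift.")

msg = explain_prediction(
    component="gearbox",
    probability=0.87,
    drivers=[("gearbox oil temperature", "high"), ("bearing vibration", "rising")],
)
print(msg)
```

Compare this message with the bare `1` a conventional classifier would emit for the same prediction: the content of the decision is identical, but the NLG version carries the context, confidence and suggested action in a form any engineer can use.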
- Causal inference for discovering novel insights: AI models are excellent at prediction and forecasting tasks, as already outlined in this article, with a plethora of applications for energy systems. But is this sufficient for trust and confidence in the AI model and the data? The unfortunate answer is no: bad data with outliers (which is very common, e.g. defective sensors in turbines producing wrong measurements, or incorrectly recorded solar power values) can easily be misjudged by humans when developing AI models for routine applications in the management of energy systems. Another aspect is the identification of novel constraints and situations in energy systems which hold them back from performing optimally. To tackle this pressing challenge, it is vital to better understand and interpret the data used in the energy systems domain, and here causal inference can be an integral source of information.
Causal inference spans a specialized class of AI algorithms which can help discover novel associations and hidden insights in your datasets. For example, if a solar PV panel is not generating sufficient power despite sufficient sunlight, or turbines in wind farms are failing frequently despite timely monitoring of vital parameters and enough wind, the problem may lie somewhere entirely different: factors which conventional AI models fail to discover during prediction may be causing the issues. Causal inference comes to the rescue here, as it can identify the causal associations between parameters in the datasets – after all, correlation is not always causation, right? This will likely become even more important with the adoption of deep learners in the energy systems domain, and as such systems become more complex (e.g. larger wind farms with higher turbine power generation capacities, bigger solar panels etc.).
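A toy numerical illustration of why "correlation is not causation" matters here: in the synthetic scenario below, high module temperature appears strongly correlated with high PV power output, but only because strong sunlight (irradiance) drives both. Stratifying on the confounder – a crude back-door adjustment, standing in for proper causal inference methods – makes the spurious link essentially vanish. All numbers and the data-generating process are invented for illustration.

```python
# Synthetic confounding example: irradiance causes both panel temperature
# and power output; temperature itself has no causal effect on power.
import random
random.seed(0)

n = 2000
irradiance = [random.uniform(200, 1000) for _ in range(n)]
temp  = [0.05 * g + random.gauss(0, 2) for g in irradiance]   # sun heats the panel
power = [0.30 * g + random.gauss(0, 10) for g in irradiance]  # sun makes power

def corr(xs, ys):
    # Pearson correlation coefficient, from scratch.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

naive = corr(temp, power)  # strong, but spurious

# Adjust for the confounder: look only within a narrow irradiance band.
band = [(t, p) for g, t, p in zip(irradiance, temp, power) if 590 <= g <= 610]
adjusted = corr([t for t, _ in band], [p for _, p in band])

print(round(naive, 2), round(adjusted, 2))
```

A conventional predictive model would happily exploit the naive correlation; a causal analysis reveals that cooling the panel would not, in this toy world, raise its output – exactly the kind of actionable distinction operators need.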
- Knowledge graphs (KGs) for ambient, human-level intelligence: Finally, an important emerging domain for ensuring Explainable AI is the utilization of knowledge graphs (KGs). What really are KGs? Well, Google utilizes them to present the results in its search engine. KGs used in search engines interlink different types of heterogeneous information (e.g. a company's address, customer feedback/reviews, a business's opening and closing hours) to provide the end user with a thorough, informative description. But how would they work in the case of energy systems, and are they at all viable? The exciting answer is yes: KGs can serve as an indispensable tool for ensuring Explainable AI in the management of present-day energy systems!
Consider, for instance, the big data available from energy systems like solar panels and wind farms. Such datasets consist not only of the historical performance (including failures and inconsistencies) of the energy systems, but also include e.g. work orders and maintenance reports which engineers generally log into spreadsheets, and long documents full of textual descriptions. With such a plethora of data types available, it becomes possible to interlink different kinds of information (e.g. linking data from wind turbine sensors with historically recorded alarm messages and maintenance action reports). This allows the construction of domain-specific KGs for energy systems, which can then be interfaced with Explainable AI models for ambient, human-level intelligence. When an Explainable AI model generates a prediction, it can extract the relevant content from the KG (e.g. appropriate maintenance strategies to fix a specific fault in an energy system) and show it to the user. Thus we get accurate, reasonable and easy-to-understand decisions, as the KGs are domain-specific and their content consists only of information provided by humans, not machine-generated rationale/reports. This leads to trustworthy and human-intelligible AI.
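At its core, a KG is a collection of subject–predicate–object triples that can be queried once a model makes a prediction. The sketch below shows that lookup step in miniature; every fault code, alarm and maintenance action in it is hypothetical, and a real deployment would use a proper graph store and richer relations.

```python
# Miniature domain-specific knowledge graph: triples linking (hypothetical)
# turbine faults to the alarms that indicate them and the maintenance
# actions that fix them, queried after a model predicts a fault.

TRIPLES = [
    ("gearbox_fault", "indicated_by", "high_oil_temperature_alarm"),
    ("gearbox_fault", "fixed_by", "replace_oil_filter"),
    ("gearbox_fault", "fixed_by", "inspect_bearing"),
    ("pitch_fault", "indicated_by", "blade_angle_alarm"),
    ("pitch_fault", "fixed_by", "recalibrate_pitch_motor"),
]

def query(subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

# Suppose an Explainable AI model has just predicted a gearbox fault:
predicted_fault = "gearbox_fault"
actions = query(predicted_fault, "fixed_by")
alarms = query(predicted_fault, "indicated_by")
print(actions, alarms)
```

Because the triples are authored by domain experts (from work orders and maintenance reports), the actions surfaced to the technician are human-provided knowledge, not machine-generated text – which is precisely what makes the KG route trustworthy.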

Conclusion
So how are we presently doing in terms of adopting Explainable AI models in the energy systems domain? Sadly, there has been very limited utilization of such models and techniques in present-day energy systems compared to other areas (e.g. healthcare, where explainable decisions are of utmost importance). Moreover, their utilization has mostly been restricted to early research in academic labs, with little attention from industry, without which affordable, reliable and efficient clean energy will remain a distant goal. This article has argued that unless Explainable AI models are utilized, conventional AI models will not be widely adopted for autonomous decision-making by present-day energy system operators, and faults, downtimes, under-performing assets and poorly reasoned choices will continue to prevail. It is therefore the need of the hour to increasingly utilize Explainable AI in the industry for management of energy systems, which will not only benefit energy system operators with reduced costs (and increased profits), but also give every household across the globe access to affordable energy, especially as we aspire to a low-carbon future.
ABOUT THE AUTHOR
Dr. Joyjit Chatterjee is presently a Data Scientist (KTP Research Associate) at Reckitt, UK – a leading MNC behind major health, hygiene and nutrition products such as Dettol, Lysol and Strepsils. Joyjit was named in the prestigious Forbes 30 Under 30 Europe list (Manufacturing and Industry) in 2022 for his impactful work on developing AI products that can help bolster manufacturing and energy processes.