SafeML-based Confidence Generation and Explainability for UAV-based Anomaly Detection of Blade Surfaces in Offshore Wind Turbines

Research projects

  • Research area

    Push the Frontiers of Offshore Wind Technology

  • Institution

    University of Hull

  • Research project

    SafeML-based Confidence Generation and Explainability for UAV-based Anomaly Detection of Blade Surfaces in Offshore Wind Turbines

  • Lead supervisor

    Dr Zhibao Mian (Lecturer - Faculty of Science and Engineering, University of Hull)

  • PhD Student

    Planned for 2024 Entry

  • Supervisory Team

    Dr Koorosh Aslansefat (Lecturer/Assistant Professor - Faculty of Science and Engineering, University of Hull)
    Professor Yiannis Papadopoulos (Professor – Faculty of Science and Engineering, University of Hull)

Project Description:

This Research Project is part of the Aura CDT’s Reliability and Health Monitoring Cluster.

Unmanned Aerial Vehicles (UAVs), e.g. drones, are increasingly used for equipment anomaly and fault detection. When drones are employed to capture images, the quality of those images can be affected by several factors. For instance, images can be blurred by the relative motion between the blades and the camera mounted on the drone. Noise can be introduced by the harsh operating conditions in which drones fly, and can also be produced by various surrounding electronic devices. As a result, any decision made from these images may be compromised, depending on their quality. For this reason, the aim of this project is to propose a methodology that generates a confidence measure for such decisions.
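The degradations described above can be simulated for testing purposes. The sketch below is illustrative only: it assumes a greyscale image stored as a NumPy array, models motion blur as a simple horizontal box filter, and models sensor/electronics noise as additive Gaussian noise — both deliberate simplifications, not the project's actual imaging model.

```python
import numpy as np

def degrade(image, blur_len=5, noise_std=10.0, rng=None):
    """Apply a crude horizontal motion blur followed by additive
    Gaussian noise, mimicking the degradations described above.
    `image` is a 2-D uint8 greyscale array (an assumed format)."""
    rng = np.random.default_rng() if rng is None else rng
    # Horizontal box blur approximates relative motion between the
    # blade and the camera along one image axis.
    kernel = np.ones(blur_len) / blur_len
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"),
        axis=1,
        arr=image.astype(float),
    )
    # Sensor and electronics noise modelled as zero-mean Gaussian.
    noisy = blurred + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Degraded copies of trusted images like these could be used to probe how sensitive a downstream classifier's decisions are to blur and noise levels.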

Figure 1 illustrates the proposed methodology. In this method, the images taken by each drone are loaded into a pre-processing unit, and the pre-processed data are then used as the input of a deep learning algorithm. In the next phase, the SafeML tool (a novel open-source safety monitoring tool) [1-3] measures the statistical difference between the new images and the trusted datasets (the datasets with which the deep learning model was trained, validated by an expert at design time) to generate the confidence. Having generated the confidence, three scenarios are considered: (a) if the confidence is very low, the approach notifies the O&M team to carry out a manual inspection; (b) if the confidence is low, the approach asks the drone to take more pictures of that specific area; and (c) if the confidence is high, the approach generates the diagnosis report, annotated with the evaluated confidence. In the last scenario, the system is permitted to proceed with the results autonomously. Note that the thresholds defining these confidence bands should be tuned at design time by an expert.

The approach is also capable of providing deep learning explainability and interpretability. To highlight this feature, consider an example: based on drone images, the system wrongly diagnoses a problem such as erosion or fatigue and sends the maintenance team to fix it, or at least to carry out a further investigation, which is costly for an offshore wind turbine. Three different questions can then be asked from three different perspectives: (a) as the owner of the offshore wind farm, one may want to know why the system made the wrong decision; (b) as the designer of the deep-learning-based diagnosis system, one may want to know which part of the algorithm is responsible for the wrong decision (or where the deep neural network's attention lies); and (c) as the drone's third-party operating company, one may want to know what problem with the images, or with their pre-processing, caused the wrong decision. The statistical analysis provided by this approach can help answer these kinds of questions.
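As a rough sketch of how such a confidence measure, three-way decision rule, and distance-based explanation might fit together, the Python below uses a per-feature two-sample Kolmogorov-Smirnov distance — one simple statistical distance, standing in for the family of measures SafeML supports. The feature representation, the threshold values `low` and `high`, and all function names are illustrative assumptions, not the project's actual implementation; in practice the thresholds would be tuned by an expert at design time.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def feature_distances(trusted_features, new_features):
    """Per-feature KS statistic between trusted (design-time) data and
    newly captured data; both are (n_samples, n_features) arrays."""
    return np.array([
        ks_2samp(trusted_features[:, j], new_features[:, j]).statistic
        for j in range(trusted_features.shape[1])
    ])

def safeml_confidence(trusted_features, new_features):
    """Confidence = 1 - mean statistical distance: a simplified
    stand-in for SafeML's distance-based confidence generation."""
    return 1.0 - float(feature_distances(trusted_features, new_features).mean())

def decide(confidence, low=0.5, high=0.8):
    """Three-way rule from the text; thresholds are placeholders."""
    if confidence < low:
        return "manual inspection"        # (a) very low: notify the O&M team
    if confidence < high:
        return "re-capture images"        # (b) low: ask the drone for more pictures
    return "autonomous diagnosis report"  # (c) high: proceed autonomously

def explain(trusted_features, new_features):
    """Rank features by individual KS distance; the largest distances
    point at where the data shift (and hence a potential wrong
    decision) most likely originates."""
    return np.argsort(feature_distances(trusted_features, new_features))[::-1]
```

The `explain` helper hints at how the same statistics that drive the confidence can also attribute a low-confidence verdict to specific features, addressing the designer's and drone operator's questions above.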

Figure 1. SafeML concept for confidence measurement and explainability for UAV-based anomaly detection of blade surfaces in offshore wind turbines.

References & Further Reading
[1] Aslansefat, K., Sorokos, I., Whiting, D., Kolagari, R. T., & Papadopoulos, Y. (2020, September). SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures. In International Symposium on Model-Based Safety and Assessment (pp. 197-211). Springer, Cham.
[2] Aslansefat, K., Kabir, S., Abdullatif, A., Vasudevan, V., & Papadopoulos, Y. (2021). Toward Improving Confidence in Autonomous Vehicle Software: A Study on Traffic Sign Recognition Systems. Computer, 54(8), 66-76.
[3] The SafeML projects, tools, dataset, and related papers.

For an informal discussion, call +44 (0) 1482 463331
or contact