The proposed model was designed by an international research group. It uses the convolutional neural network (CNN) architecture U-Net for image segmentation and the CNN architecture InceptionV3-Net for fault classification.
An international research team has developed a new deep learning-based method for detecting PV faults from aerial images. The proposed methodology uses the convolutional neural network (CNN) architecture U-Net for image segmentation and then applies the CNN architecture InceptionV3-Net for fault classification.
“The presence of dust, snow, bird droppings and other physical and electrical problems on the surface of the solar panels can lead to energy loss,” the academics said. “The need for efficient monitoring and cleaning protocols in solar energy systems cannot be overstated. Based on that goal, we selected research topics to improve image processing and classification tasks related to different types of solar panel damage.”
For the model’s segmentation step, the group used a publicly available annotated database of 4,616 aerial images divided into six land-cover categories: cropland, grassland, saline-alkali land, shrubs, water surface and rooftops. The database was split 60%/20%/20% for training, validation and testing, respectively.
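A minimal sketch of such a 60%/20%/20% split, assuming the images and masks live in local folders; the folder names, file format and random seed below are illustrative assumptions, not the authors’ exact setup:

```python
# Hedged sketch of a 60%/20%/20% train/validation/test split for the segmentation data.
# Folder layout, file format and random seed are illustrative assumptions.
import glob
from sklearn.model_selection import train_test_split

image_paths = sorted(glob.glob("aerial_dataset/images/*.png"))
mask_paths = sorted(glob.glob("aerial_dataset/masks/*.png"))

# Carve off 60% for training, then split the remaining 40% evenly into validation and test.
train_imgs, rest_imgs, train_masks, rest_masks = train_test_split(
    image_paths, mask_paths, train_size=0.6, random_state=42)
val_imgs, test_imgs, val_masks, test_masks = train_test_split(
    rest_imgs, rest_masks, test_size=0.5, random_state=42)

print(len(train_imgs), len(val_imgs), len(test_imgs))  # roughly 2,770 / 923 / 923 of 4,616 images
```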
Another database containing 885 images was split in the same proportions for fault classification. The dataset covers six categories of PV module condition: clean, dusty, bird drop, electrical damage, physical damage and snow-covered. In addition to the InceptionV3-Net model – which builds on an InceptionV3 base with ImageNet weights – the researchers also tested other classification models for comparison: DenseNet, MobileNetV3, VGG19, a plain CNN, VGG16, ResNet50 and InceptionV3.
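All of the comparison backbones named above are available pretrained in tf.keras.applications. A sketch of how such a comparison could be assembled; the shared classification head and input size are assumptions for illustration, not the paper’s exact configuration:

```python
# Sketch: instantiating the comparison backbones from tf.keras.applications.
# The shared GAP + softmax head is an assumption for illustration only.
from tensorflow.keras import layers, models, applications

backbones = {
    "DenseNet121": applications.DenseNet121,
    "MobileNetV3Large": applications.MobileNetV3Large,
    "VGG19": applications.VGG19,
    "VGG16": applications.VGG16,
    "ResNet50": applications.ResNet50,
    "InceptionV3": applications.InceptionV3,
}

def build_classifier(backbone_fn, num_classes=6, input_shape=(256, 256, 3)):
    base = backbone_fn(weights="imagenet", include_top=False, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, outputs)

comparison_models = {name: build_classifier(fn) for name, fn in backbones.items()}
```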
“Initially, aerial satellite images are processed using the U-Net model architecture with an input shape of 256x256x3, going through three stages: encoding the input, combining encoding and decoding, and generating the output,” the group explained.
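A compact U-Net sketch with the quoted 256x256x3 input shape; the network depth and filter counts are assumptions, since the article does not spell out the exact configuration:

```python
# Compact U-Net sketch with a 256x256x3 input: encoder, bottleneck, decoder with skip connections.
# Depth and filter counts are illustrative assumptions.
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3), num_classes=6):
    inputs = layers.Input(input_shape)

    # Encoder: downsample while widening the feature maps.
    c1 = conv_block(inputs, 64)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 128)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 256)
    p3 = layers.MaxPooling2D()(c3)

    # Bottleneck that links the encoding and decoding paths.
    b = conv_block(p3, 512)

    # Decoder: upsample and concatenate the matching encoder features (skip connections).
    u3 = layers.Concatenate()([layers.Conv2DTranspose(256, 2, strides=2, padding="same")(b), c3])
    c4 = conv_block(u3, 256)
    u2 = layers.Concatenate()([layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c4), c2])
    c5 = conv_block(u2, 128)
    u1 = layers.Concatenate()([layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5), c1])
    c6 = conv_block(u1, 64)

    # Per-pixel class map over the six land-cover categories.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c6)
    return models.Model(inputs, outputs)
```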
It also highlighted that the InceptionV3-Net architecture uses an InceptionV3 base with ImageNet weights, enhanced by convolutional layers, Squeeze-and-Excitation (SE) blocks, residual connections, and global average pooling. The model contains two dense layers with LeakyReLU activations and batch normalization, ending with a softmax output layer. Training also uses data augmentation techniques such as rotation, shifting, shearing, zooming and brightness adjustments.
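A sketch of how such a head could be assembled on top of an InceptionV3 base; the filter count, SE reduction ratio and dense-layer widths are assumptions, as the article does not give them:

```python
# Sketch of the described InceptionV3-Net: InceptionV3 base (ImageNet weights), extra convolution,
# SE block with a residual connection, global average pooling, two dense layers with LeakyReLU
# and batch normalization, and a softmax output. Layer widths are illustrative assumptions.
from tensorflow.keras import layers, models, applications

def se_block(x, ratio=16):
    # Squeeze-and-Excitation: rescale channels with a learned, globally pooled gate.
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, 1, channels))(s)])

def build_inceptionv3_net(num_classes=6, input_shape=(256, 256, 3)):
    base = applications.InceptionV3(weights="imagenet", include_top=False, input_shape=input_shape)

    x = layers.Conv2D(512, 3, padding="same", activation="relu")(base.output)
    x = layers.BatchNormalization()(x)

    # SE block wrapped in a residual connection.
    x = layers.Add()([x, se_block(x)])

    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(512)(x)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(256)(x)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, outputs)
```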
“The model was trained using the Adam optimizer with a learning rate of 0.0001 and categorical cross-entropy loss,” they also said.
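Putting the quoted settings together with the augmentation listed above, a training sketch could look as follows; the augmentation magnitudes, folder layout, batch size and epoch count are assumptions, and build_inceptionv3_net refers to the sketch above:

```python
# Training sketch: Adam with learning rate 0.0001 and categorical cross-entropy come from the
# paper's description; augmentation magnitudes, folder layout, batch size and epochs are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam

datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    brightness_range=(0.8, 1.2),
    rescale=1.0 / 255,
)

train_gen = datagen.flow_from_directory(
    "pv_faults/train",            # hypothetical folder with one subfolder per fault class
    target_size=(256, 256),
    batch_size=32,
    class_mode="categorical",
)

model = build_inceptionv3_net(num_classes=6)
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, epochs=50)
```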
Their analysis showed that the proposed InceptionV3-Net achieved a validation accuracy of 98.34% and an F1 score (representing the balance between precision and recall) of 0.99. That compares with validation accuracies of 20.9% to 89.87% and F1 scores of 0.21 to 0.92 for the competing models.
On the test set, the proposed InceptionV3-Net achieved an accuracy of 94.35% and an F1 score of 0.94, compared with accuracies of 21% to 90.19% and F1 scores of 0.19 to 0.91 for the competing models.
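For reference, the F1 score is the harmonic mean of precision and recall. A sketch of computing test accuracy and F1 with scikit-learn; the variable names are placeholders and the macro averaging is an assumption:

```python
# Evaluation sketch: accuracy and macro-averaged F1 on held-out test data.
# `test_images` and `test_labels` are placeholders for the test split (one-hot labels assumed).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_prob = model.predict(test_images)        # shape: (num_samples, 6)
y_pred = np.argmax(y_prob, axis=1)
y_true = np.argmax(test_labels, axis=1)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred, average="macro"))
```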
“Future work could focus on several open opportunities to further improve the capabilities of the InceptionV3-Net model,” the researchers concluded. “Applying the model to other sustainable energy systems, such as wind turbines or hydroelectric power stations, would test its versatility. Further optimization of the model for real-time fault detection could be outlined as future work to improve its practical usability.”
The new method was presented in “SPF-Net: Solar panel fault detection using U-Net based deep learning image classification,” published in Energy Reports. The team included scientists from Bangladesh’s American International University-Bangladesh, King Saud University in Saudi Arabia and India’s GMR Institute of Technology.