A Hybrid Transformer-CNN Framework For Electroluminescence Image-Based Solar Panel Defect Classification
DOI: https://doi.org/10.64252/1q3d1366

Keywords: Electroluminescence Imaging, Solar Panel Defect Classification, Transformer-CNN Hybrid Model, Residual Attention Network, ReliefF Subspace Weighted SVM, Photovoltaic Fault Detection

Abstract
This paper introduces RA-TransformerNet, a hybrid deep learning framework that automatically identifies defects in solar panels from electroluminescence (EL) images. The framework integrates Residual Convolutional Neural Networks (ResCNN) for local feature extraction with Vision Transformers (ViT) to capture long-range dependencies across the image. An attention mechanism, either CBAM or SE, is applied to enhance relevant spatial and channel features. The extracted deep features are then passed to a ReliefF-based Subspace Weighted Support Vector Machine (RSWS) classifier, enabling high interpretability and fine-grained classification. The model was evaluated on the benchmark ELPV dataset and classified solar cells into four categories: normal, defective, possibly normal, and possibly defective. The proposed method achieved superior results, with an accuracy of 98.23%, precision of 97.12%, sensitivity of 95.67%, and F-score of 96.35%, outperforming traditional CNNs, hybrid models, and CNN-SVM baselines. Confusion matrix analysis demonstrated minimal misclassifications, especially in the borderline classes. Moreover, the model remained computationally efficient, with a training time of 92 seconds and 38.5 million parameters. These results establish RA-TransformerNet as a powerful and scalable solution for solar panel defect analysis, paving the way for intelligent, real-time PV system monitoring and maintenance in renewable energy applications.
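To make the channel-attention step concrete, the following is a minimal NumPy sketch of a Squeeze-and-Excitation (SE) block, one of the two attention mechanisms the abstract mentions. The weight matrices here are random placeholders; in the actual RA-TransformerNet they would be learned end-to-end, and the layout (channels-first, `reduction=4`) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def se_channel_attention(feature_map, reduction=4, rng=None):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    feature_map: array of shape (C, H, W), channels-first.
    The FC weights are random stand-ins for learned parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C, H, W = feature_map.shape
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck FC (C -> C//reduction -> C)
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    s = np.maximum(w1 @ z, 0.0)              # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # sigmoid gate in (0, 1)
    # Scale: reweight each channel of the original feature map
    return feature_map * s[:, None, None]

# Toy input: 8 channels of a 4x4 feature map
x = np.ones((8, 4, 4))
y = se_channel_attention(x)
```

The gate values lie in (0, 1), so each channel of the output is a damped copy of the input channel; channels the gate deems informative are suppressed less than the others.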