Deep Learning-Driven Image Fusion For Remote Sensing: Advancements, Comparative Analysis, And Applications
DOI: https://doi.org/10.64252/nx969y08

Keywords: generative adversarial networks, convolutional neural networks, edge detection, spatial resolution, segmentation, feature extraction

Abstract
Image fusion has emerged as a key tool in remote sensing because of its ability to integrate multimodal data and improve spatial, spectral, and temporal resolution. Although traditional image fusion techniques such as Principal Component Analysis (PCA) and the Wavelet Transform have proven useful, they often struggle to preserve both spatial detail and spectral accuracy. Recently emerged deep learning-based techniques, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Transformer-based models, have revolutionized image fusion by offering improved feature extraction, noise reduction, and information retention. This research investigates advances in deep learning-driven image fusion techniques, comparing them with conventional methods in terms of accuracy, computational efficiency, and real-world applicability. Additionally, the study explores the impact of fused high-resolution images in critical remote sensing applications such as urban mapping, disaster management, and environmental monitoring. The findings highlight the advantages and limitations of existing models while identifying future research directions for optimizing deep learning architectures for image fusion in remote sensing.
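For context on the PCA baseline the abstract contrasts with deep learning methods, the following is a minimal sketch of classical PCA-based pansharpening (component substitution): project the multispectral bands onto their principal components, replace the first component with a histogram-matched panchromatic band, and invert the projection. The array shapes and random data are illustrative assumptions; a real pipeline would operate on co-registered satellite rasters.

```python
# Sketch of classical PCA-based pansharpening, a traditional fusion
# baseline. Inputs are synthetic; real use requires co-registered rasters.
import numpy as np

def pca_fuse(ms, pan):
    """Fuse low-res multispectral bands (H, W, C) with a high-res
    panchromatic band (H, W) by first-principal-component substitution."""
    h, w, c = ms.shape
    x = ms.reshape(-1, c).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal components from the eigen-decomposition of the band covariance.
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]          # sort by descending variance
    pcs = xc @ vecs
    # Match the pan band's mean and spread to the first PC, then substitute it.
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Invert the projection to recover fused multispectral bands.
    fused = pcs @ vecs.T + mean
    return fused.reshape(h, w, c)

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 4))    # toy 4-band multispectral image
pan = rng.random((8, 8))      # toy panchromatic band
fused = pca_fuse(ms, pan)
print(fused.shape)            # (8, 8, 4)
```

Because only the first component is replaced, this scheme injects spatial detail from the pan band but can distort spectra when that component does not align well with panchromatic radiance, which is the spectral-accuracy weakness the abstract attributes to such methods.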
