Deep Learning-Driven Image Fusion For Remote Sensing: Advancements, Comparative Analysis, And Applications

Authors

  • Anita Chaudhary
  • Dr. Navdeep Kaur

DOI:

https://doi.org/10.64252/nx969y08

Keywords:

Generative adversarial networks, convolutional neural networks, edge detection, spatial resolution, segmentation, feature extraction

Abstract

Image fusion has emerged as a key tool in remote sensing because of its ability to integrate multimodal data and improve spatial, spectral, and temporal resolution. Although traditional image fusion techniques such as Principal Component Analysis (PCA) and the Wavelet Transform have proven useful, they often struggle to preserve both spatial detail and spectral accuracy. Deep learning-based techniques that have recently emerged, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Transformer-based models, have revolutionized image fusion by offering improved feature extraction, noise reduction, and information retention. This research investigates advancements in deep learning-driven image fusion techniques, comparing them with conventional methods in terms of accuracy, computational efficiency, and real-world applicability. Additionally, the study explores the impact of fused high-resolution images in critical remote sensing applications such as urban mapping, disaster management, and environmental monitoring. The findings highlight the advantages and limitations of existing models while identifying future research directions for optimizing deep learning architectures for image fusion in remote sensing.
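To make the baseline the abstract contrasts against concrete, the following is a minimal, hypothetical sketch of classic PCA-based pansharpening (not the authors' method): the multispectral bands are projected onto their principal components, the first component is replaced with a histogram-matched panchromatic band, and the transform is inverted. The function name and array layout are assumptions for illustration only.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Classic PCA-based pansharpening (illustrative sketch, not the paper's method).

    ms  : (H, W, B) low-resolution multispectral image, upsampled to pan's size
    pan : (H, W)    high-resolution panchromatic image
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # PCA via eigendecomposition of the band covariance matrix
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]   # sort components by variance
    pcs = xc @ vecs                          # project onto principal components
    # Histogram-match pan to the first PC, then substitute it
    pc1 = pcs[:, 0]
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean              # invert the PCA transform
    return fused.reshape(h, w, b)
```

Because only the first component is swapped, spatial detail from the panchromatic band is injected into every output band; the spectral distortion this introduces is exactly the weakness of PCA fusion that the deep learning methods surveyed here aim to overcome.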

Published

2024-12-30

Issue

Section

Articles

How to Cite

Deep Learning-Driven Image Fusion For Remote Sensing: Advancements, Comparative Analysis, And Applications. (2024). International Journal of Environmental Sciences, 1092-1100. https://doi.org/10.64252/nx969y08