Canadian Journal of Remote Sensing
Journal canadien de télédétection
Volume 49, 2023 - Issue 1
Research Article

An Algorithmic Approach towards Remote Sensing Imagery Data Restoration Using Guided Filters in Real-Time Applications


Article: 2257323 | Received 20 Mar 2023, Accepted 02 Sep 2023, Published online: 15 Sep 2023

Abstract

Images captured by SAR sensors are inherently degraded by speckle noise. The SAR image processing community has targeted this problem with many feature-based filters. Since SAR images are low-contrast images, edge retention is the most crucial aspect to consider, as it enables efficient retrieval of information. This paper presents a two-step edge-preserving homomorphic SAR image despeckling technique that applies a guided filter as the first step and, as the second step, a modified method noise thresholding scheme using the bivariate shrinkage rule and the Canny edge operator in the Discrete Orthonormal Stockwell Transform (DOST) domain. The Canny edge operator improves overall edge preservation after despeckling, while noise thresholding delivers strong speckle reduction in the DOST domain. The detected edges are added to the residual part obtained after removing the noise to produce more informative content. The suggested approach is compared with several recent despeckling methods according to qualitative and quantitative criteria. The execution time of the proposed method is approximately 7.2679 seconds. Qualitative and quantitative analysis shows that the proposed method surpasses all compared despeckling methods.


This article is part of the following collections:
Technological Advancements in Urban Remote Sensing

Introduction

Radars are classified as real aperture radar (RAR) and SAR according to their antenna size. Non-coherent radars whose resolution is governed by the physical antenna length are known as RAR (Zhu et al. Citation2013). The antenna of an active radar system transmits high-frequency radar waves toward a specific region of the terrain for image capture and processing. According to Lapini et al. (Citation2014) and Singh and Shree (Citation2020a), a longer antenna yields a more detailed and hence more accurate image. However, a high-resolution image cannot be obtained this way, because it is very difficult, if not impossible, to fit huge antennas on satellites and aircraft. Engineers and scientists devised the synthetic aperture to solve this problem: by moving a small antenna along the platform's flight path and coherently combining the returns, the small antenna is made to behave like a much larger one, yielding ever-increasing data resolution. SAR is a coherent radar mounted on satellites and airplanes. It provides high-resolution images of large areas of the Earth's surface. The physical antenna size on satellites and aircraft is fixed, but the aperture is synthetic: as the platform moves forward, the SAR antenna keeps transmitting high-frequency radar waves onto the Earth's surface. Upon striking the target, the waves bounce back to the source, and the SAR receives and processes the reflected radiation (Singh et al. Citation2021). SAR data processing takes a long time since it is a high-computing task. The constant interplay of constructive and destructive interference between the transmitted high-frequency radar waves and targets on the Earth's surface results in high-resolution SAR images (Iqbal et al. Citation2013).

During the image capture stage, this constructive and destructive interaction results in significant information loss and quality deterioration, which appears in the SAR image as a scattering phenomenon. Speckle noise is an outcome of this scattering process and is therefore present in SAR images by default. Speckle noise severely diminishes the visual quality of SAR images (Singh and Shree Citation2017a). Its pattern is granular and resembles badly distorted salt-and-pepper noise (Yommy et al. Citation2015). Before executing any type of segmentation procedure on SAR images, the images must first be pre-processed, and SAR image despeckling constitutes this pre-processing (Singh, Diwakar, et al. Citation2021). SAR image despeckling is the act of eliminating speckle noise from SAR images. Any kind of SAR image processing must include this pre-processing step, since it enhances image quality and facilitates subsequent processing (Singh and Shree Citation2016). Compared to other types of noise patterns, the impact of speckle noise is much worse (Mv and Mn Citation2015); its multiplicative character is the fundamental reason, as multiplicative noise has a worse impact than additive noise. Speckle noise follows a gamma-like distribution (Singh and Shree Citation2020a). In this multiplicative model, the external noise components multiply the reference SAR data to produce a noisy SAR image (Shree et al. Citation2020).
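The multiplicative, gamma-distributed speckle model described above can be illustrated with a short simulation. The sketch below is our own illustration (the function name and parameters are hypothetical, not from the paper): an L-look intensity speckle field with unit mean is drawn from a gamma distribution and multiplied into a clean reflectivity patch.

```python
import numpy as np

def add_speckle(clean, looks=4, seed=0):
    """Simulate multiplicative speckle: Y = X * S, where the speckle
    field S is gamma-distributed with mean 1 (L-look intensity model)."""
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * speckle

# A flat 64x64 reflectivity patch; multiplying by unit-mean speckle
# leaves the mean roughly unchanged but adds strong granular variation.
clean = np.full((64, 64), 100.0)
noisy = add_speckle(clean, looks=4)
```

For L looks the speckle field has standard deviation 1/sqrt(L), so a flat patch of value 100 acquires a standard deviation near 50 at L = 4, which is why speckle dominates low-contrast SAR scenes.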

There are several conventional and unconventional SAR image despeckling techniques, which may be classified into Bayesian and non-Bayesian approaches. Traditional SAR image despeckling approaches mostly rely on spatial Bayesian algorithms, while nontraditional approaches are based on non-Bayesian methods and Bayesian approaches in the transform domain. In the pre-processing step, it is important to address the major problem of speckle noise in SAR images; when working with SAR images, this step is recognized as essential. As an outcome, there has been constant progress in this area, and numerous studies have produced excellent outcomes in a variety of fields. Although hybrid, nontraditional approaches are more successful in this field, conventional ones are still useful. The outcomes of nontraditional approaches are more clearly defined than those of conventional ones.

Real-time urban remote sensing applications

The use of SAR data is becoming more important in geospatial research. SAR may be used regardless of lighting conditions or cloud cover, unlike many other observational techniques. Recent years have seen a dramatic rise in data quality and availability owing to an ever-increasing number of orbital SAR devices, with more on the horizon. This has prompted the development of new processing tools. Therefore, analytical processes based on SAR data may now be automated and executed at scale to address issues in fields as diverse as natural and man-made disaster response, urban planning and land use, agriculture, change detection, and ocean and coastal monitoring. The obtained SAR image after speckle reduction can be used for urban object analysis and identification, urban disaster monitoring and change analysis, urban climate change and variation, urban data synthesis and analysis, urban visualization and virtual reality applications, and urban applications of high-resolution optical sensors.

Related work

The authors in Baraha et al. (Citation2022) give an insight into some of the most important current standard speckle filtering algorithms. The various SAR multiplicative noise models and their features are described, and an overview of nonlocal means (NLMs) and variational models (VMs) is provided. The various algorithms are described in detail, including how they operate as well as their benefits and drawbacks. SAR-DRDNet was suggested by the authors in Wu et al. (Citation2022) as a complete SAR image despeckling model. To exploit global information in images, a non-local block was added, and the multi-scale impact of the image was explored. With SAR-DRDNet, despeckling is achieved while still preserving fine details. A new single-image despeckling approach based on a combination of resemblance-based block matching and a deep learning network using noise as a reference is presented in Wang et al. (Citation2022). This approach uses a convolutional encoder-decoder network to remove noise from small image patches. After creating many noisy pairs from one or more noisy SAR images, the approach uses these noisy pairs as training input to create a huge number of new pairs (Wang et al. Citation2022). With two parameter-shared branches, the approach then trains the network in a Siamese fashion. The authors of Perera et al. (Citation2022a) describe a transformer-based network for despeckling SAR images. The proposed method includes a transformer-based encoder that enables the network to discover global relationships between various image areas, improving despeckling. Using a composite loss function, the network is trained entirely on artificially produced speckled images.

By narrowing the receptive field, the authors in Perera et al. (Citation2022a) use an overcomplete convolutional neural network (CNN) architecture to concentrate on learning low-level characteristics. In the proposed model, an overcomplete branch concentrates on local patterns, whereas an undercomplete branch concentrates on global patterns. The authors in Singh et al. (Citation2022) proposed wavelet thresholding-based SAR image despeckling techniques using the 2D Discrete Wavelet Transform (DWT). Here, a speckled SAR image is first pre-processed using an iterative inverse variance-based non-homomorphic filter. The low-frequency components are directed to a bilateral filter, and the high-frequency components are directed to modified Bayesian thresholding followed by inter-level method noise thresholding. Intra-level method noise thresholding is applied as a post-processing operation to get the final despeckled image (Singh et al. Citation2022). Despeckling of SAR images using dictionary learning and sparse coding is proposed in Liu et al. (Citation2022) to address the issue of speckle noise. The proposed model is built using quick and efficient solution stages that concurrently conduct orthogonal dictionary learning, weight parameter updating, sparse coding, and image reconstruction. A despeckling approach that relies on a sparse model is presented in Li et al. (Citation2022) to minimize speckle noise in SAR images. It employs homomorphic filtering to convert the multiplicative noise into additive noise; after that, the log-intensity image is segmented using a simple linear iterative clustering method. Based on deep learning methods, a modified CNN algorithm is presented for speckle noise reduction in Mohanakrishnan et al. (Citation2022). This technique employs a CNN with 12 layers to reduce speckle noise. The LReLU activation function, batch normalization (BN), and residual learning are all part of the network's architecture, which additionally employs dilated convolution to broaden the receptive field. To prevent image quality deterioration, a skip connection is implemented. Two cutting-edge techniques for removing speckle have been combined in Wang et al. (Citation2022) to overcome their weaknesses. Clustering and Gray Level Co-Occurrence Matrices (GLCM) are used for image classification and weighting, respectively, to find the optimal filter output for each region of the image. Due to their continuous structural information, optical images, which are substantially cleaner than SAR imagery, are used for clustering and GLCM.

The authors of Dalsasso et al. (Citation2022) compare three self-supervised techniques in terms of data preparation and performance; SAR image despeckling using deep neural networks is discussed in this paper. The authors of Farhadiani et al. (Citation2022) carried out the speckle reduction procedure in the complex wavelet domain. Based on a pre-trained CNN model, the multichannel logarithm with Gaussian denoising technique is used to first despeckle the approximate complex wavelet coefficients. The averaged version of the maximum a posteriori estimator is then used to despeckle the log-transformed features of the complex wavelet coefficients. The suggested technique in Perera et al. (Citation2022a) uses a Markov chain to repeatedly add random noise to clean images until they become white Gaussian noise. By utilizing a noise predictor conditioned on the speckled image, a reverse process that repeatedly predicts the new noise yields the despeckled image. To enhance the despeckling performance, a novel inference approach based on cycle spinning is also suggested. In the shearlet domain, a unique statistical methodology based on Gaussian copula modeling has been suggested. Two essential parts make up the proposed multi-dimensional minimum mean square error processor (Morteza and Amirmazlaghani Citation2022). First, the marginal distribution of the shearlet coefficients is calculated using the biparameter Cauchy Gaussian Mixture Model. Second, a joint-prior distribution is created to model the dependence of the target coefficient on its neighbors. This is based on the Gaussian copula proposed by Morteza and Amirmazlaghani (Citation2022), Nabil et al. (Citation2023), and Baraha and Sahoo (Citation2020). In Kamath et al. (Citation2021), a despeckling technique employing the shrinkage of 2D-DOST coefficients in the transform domain coupled with a shock filter is presented. As a post-processing step, an effort was made to maintain the borders and other characteristics while eradicating the speckle. The suggested technique comprises splitting the SAR image into low- and high-frequency components and processing them individually. In Kamath et al. (Citation2021), the speckle noise in an image is eliminated with the shock filter and 2D-DOST; edge enhancement algorithms are used to improve the sharpness of edges and their surrounding detail. This results in a smooth SAR image over the homogeneous areas, while the heterogeneous areas maintain their original detail. Improved SAR image despeckling using a self-supervised method was suggested in Yu and Shin (Citation2022). The heart of the suggested method's design is the transformer block and the residual block; the loss function was the mean squared error with regularization. Both synthetic and actual SAR datasets have been used in the experiments. The findings demonstrate that, in comparison to other despeckling approaches, the suggested method better maintains detail and lessens over-smoothing.

This study (Baraha et al. Citation2022) provides an overview of some of the most important state-of-the-art speckle filtering techniques and methodologies. The many multiplicative noise models used for SAR images are described, along with their features, and brief explanations are provided for NLMs and VMs. Different algorithms are described in detail, together with their advantages and disadvantages (Baraha et al. Citation2022). Another article explains the fundamentals of SAR imaging with the least amount of mathematics possible (Bamler Citation2000); it stresses that understanding the unique characteristics of SAR images is necessary before interpreting these data (Bamler Citation2000). In another paper, the authors propose an FPGA-based accelerator to train CNNs in an energy-efficient manner. They used improvements, including quantization, a standard method for compressing models, to speed up the CNN training procedure. In addition, a gradient accumulation buffer is used to guarantee optimal performance while keeping the learning algorithm's gradient descent intact. Another paper centers on the implementation of a watermark encryption and decryption technique utilizing the DWT through MATLAB simulation (Tufa et al. Citation2022); its objective is to guarantee the retrievability of embedded data while refraining from limiting access to the primary image. The authors of Barman et al. (Citation2023) put forward a structure for parallel single-image super-resolution based on edge-preserving dictionary learning and sparse representations. This framework is aimed at recovering edge information from low-resolution images and is implemented on compute-unified-device-architecture-enabled graphics processing units. To restore edges, a set of interconnected dictionaries is acquired, specifically the scale-invariant feature transform keypoint and non-keypoint patch-based dictionaries. Li et al. (Citation2023) present a novel guidance-aided triple-adaptive Frost filter that exhibits potential for utilization in real-time processing platforms. The proposed approach utilizes a scale-adaptive sliding-window sizing technique to establish the neighborhood ranges for each point within the image and incorporates an adaptive calculation for the tuning factor within the Frost filter. Finally, the feature information extracted from the initial image is utilized to facilitate automatic edge recovery, thereby ensuring effective preservation of features.

Major contributions and significance of the proposed method

With the motivation of Singh et al. (Citation2022), this research paper proposes a two-step edge-preserving hybrid SAR image despeckling technique that implements a guided filter as the first step; the second step includes modified method noise thresholding using the bivariate shrinkage rule and the Canny edge operator in the DOST domain. Two noteworthy contributions of this study are emphasized below:

  • The outcome of traditional method noise processing is improved by adding the concept of a Canny edge operator. Traditional method noise processing works specifically on residual components that remain unfiltered; incorporating the Canny edge operator enhances its performance in terms of edge retention and uniformity in homogeneous areas.

  • The concept of method noise thresholding is employed in the proposed method to obtain better edge retention for those regions of the despeckled SAR image that are not processed or filtered. The despeckled SAR image is subtracted from the speckled SAR image, and a Canny edge detector is applied to locate the image's edges. To obtain a more detailed residual component, the identified edges are added to the subtracted part. The DOST is used to decompose this more detailed residual portion into an approximate and a detailed component. To achieve the required level of filtering, the detailed component is directed to the bivariate shrinkage algorithm. The significance of the proposed method is verified by testing the outcomes with and without method noise in the proposed methodology.

The organization of this paper is as follows. Section “Introduction” introduces the SAR image and its various perspectives, including speckle noise, surveys some of the latest nontraditional research in the field, and states the contribution of the paper. Section “Background” gives a brief introduction to the techniques used in the proposed method. Section “Proposed methodology” explains the proposed work using an algorithm and a flowchart, and also discusses its significance. The numerical outcomes with visual analysis are discussed in Section “Experimental outcomes and discussion”. The merits, demerits, and future aspects of this work are discussed in Section “Merits, demerits, and future perspectives”. Section “Conclusion” concludes the paper with a future perspective.

Background

Guided filter

Guided image filters are edge-preserving smoothing filters that are widely used for edge-aware smoothing and for identifying edges. The filter assumes a local linear model between the guidance image and the output (He et al. Citation2013). Let b be the output image, a the input image, and e the guidance image; the filter output is then represented as
(1) \( b_c = \sum_{d} M_{cd}(e)\, a_d \)
where \(M\) is the filter weight and \(c, d\) are pixel indexes (Hiremath Citation2021).

The quality and effectiveness of guided image filters are excellent across many use cases, including noise reduction and detail smoothing. Unlike the bilateral filter, the guided image filter does not suffer from gradient-reversal artifacts, producing clean borders, and it also performs very well in terms of computational time.
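As a concrete illustration of the local linear model behind Equation (1), the sketch below implements the standard box-filter formulation of the guided filter (He et al.). The function name, window radius, and regularization default are our own choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing via the local linear model
    q = a * guide + b, solved in each (2r+1)x(2r+1) window."""
    win = 2 * radius + 1
    mean_I = uniform_filter(guide, win)
    mean_p = uniform_filter(src, win)
    corr_Ip = uniform_filter(guide * src, win)
    corr_II = uniform_filter(guide * guide, win)
    var_I = corr_II - mean_I ** 2          # guide variance per window
    cov_Ip = corr_Ip - mean_I * mean_p     # guide/source covariance
    a = cov_Ip / (var_I + eps)             # eps controls edge preservation
    b = mean_p - a * mean_I
    # Average the per-window coefficients before applying the model
    return uniform_filter(a, win) * guide + uniform_filter(b, win)
```

When the filter is self-guided (guide and src are the same speckled image), it behaves as an edge-preserving denoiser: flat regions are smoothed while windows with variance well above eps keep their structure.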

Canny edge detection

An enhanced Canny edge detection algorithm is applied to the despeckled image to detect the edges (Zhou et al. Citation2011). The major motives for applying this algorithm are its efficient detection, efficient localization, exactly one response per edge, and low error rate. The detection process has five steps: first, remove the noise; second, find the intensity gradients; third, apply gradient magnitude thresholding; fourth, determine potential edges; and last, validate the detected edges by suppressing weak and unconnected edges (Wu et al. Citation2022).

Because a Gaussian filter is applied first, image noise is suppressed. An improved signal-to-noise ratio is achieved with non-maximum suppression, which produces ridges one pixel wide. Hysteresis thresholding finds edges even when they are obscured by noise, and its parameters allow fine-tuning of the level of efficacy. The detector provides accurate localization, fast response, and robustness to background noise.
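The five steps above are available as a single call in common libraries. The sketch below uses scikit-image's Canny implementation on a synthetic test image (the paper itself works in MATLAB; the image, sigma, and threshold values here are our own illustrative choices).

```python
import numpy as np
from skimage.feature import canny

# Synthetic test image: a bright square on a dark background with mild noise
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += rng.normal(0, 0.05, img.shape)

# sigma sets the Gaussian pre-smoothing; the two thresholds drive the
# hysteresis linking of strong and weak edge candidates
edges = canny(img, sigma=1.0, low_threshold=0.1, high_threshold=0.2)
```

The result is a boolean edge map whose true pixels trace the square's one-pixel-wide outline, which is exactly the edge map C that the proposed method later adds back to the method noise residual.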

DOST based method noise thresholding

R.G. Stockwell established the Stockwell Transform (ST) in 1996 (Stockwell et al. Citation1996). The ST represents a signal in the time-frequency domain and provides referenced phase information in addition to progressive resolution. Because it provides a redundant representation of the time-frequency plane, the ST cannot be used on high-resolution images. For the sake of efficiency, a non-redundant variant of the ST, the DOST, has been developed. The DOST is an energy-preserving transform since it is orthonormal and conjugate symmetric (Katunin Citation2021), and it gathers local orientation data. The basis function of the DOST with parameters α, β, and Ω is represented as
(2) \( D(kT)_{\alpha,\beta,\Omega} = \frac{e^{i\pi\Omega}}{\sqrt{\beta}} \sum_{f=\alpha-\beta/2}^{\alpha+\beta/2-1} e^{\,i2\pi\left(\frac{k}{N}-\frac{\Omega}{\beta}\right) f} \)
where α is the center of each frequency band (voice), β is the width of the band, and Ω is the location in time. The spectral partitioning of Equation (2) may be reversed to obtain the DOST inverse by reconstructing the Fourier spectrum from the DOST frequency bands (Hong Citation2021). The inverse matrix of a time series transformed into DOST form is equal to the complex conjugate transpose of the orthogonal transformation matrix. Because this is an orthonormal transformation, the vector norm is retained; as an outcome, the DOST norm is the same as the time series norm (Mejjaoli Citation2021).

The DOST is the only method that includes both progressive resolution and referenced phase information about a signal. It is common practice in the wavelet transform to translate the phase reference point in parallel with the wavelet; however, this is not the case with the DOST, since all phase information is always referenced to zero time. As an outcome, the DOST has proven a highly valuable tool in many image processing applications, such as encoding, denoising, despeckling, and compression (Esam El-Dine Atta et al. Citation2021).
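The norm-preservation property discussed above can be checked numerically. The sketch below is our own simplified dyadic 1-D DOST: it partitions an orthonormal FFT spectrum into dyadic bands and applies an orthonormal inverse FFT per band. The exp(iπΩ) phase factor of Equation (2) is omitted, so this illustrates the energy-preserving band structure rather than the full transform.

```python
import numpy as np

def dost1d(x):
    """Simplified dyadic 1-D DOST: orthonormal FFT, dyadic band
    partition, orthonormal inverse FFT per band. Each band of width
    beta yields beta time-localized coefficients; since every step is
    unitary, total energy is preserved (Parseval)."""
    N = len(x)                                  # N must be a power of two, N >= 4
    F = np.fft.fft(x, norm="ortho")
    bands = [slice(0, 1), slice(1, 2)]          # DC and first harmonic
    lo = 2
    while lo < N // 2:
        bands.append(slice(lo, 2 * lo))         # band [lo, 2*lo)
        lo *= 2
    bands.append(slice(N // 2, N))              # negative-frequency half
    return [np.fft.ifft(F[b], norm="ortho") for b in bands]

x = np.random.default_rng(1).normal(size=64)
coeffs = dost1d(x)
```

Because every FFT here uses the orthonormal convention, the sum of squared coefficient magnitudes equals the signal energy, matching the statement that the DOST norm equals the time-series norm.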

The concept of method noise thresholding is applied to obtain better edge retention for those parts of the despeckled SAR image that are unprocessed or unfiltered. The despeckled SAR image (B) is subtracted from the speckled SAR image (A), and a Canny edge detector is applied to detect the edges of the despeckled image. The detected edges (C) are added to the subtracted part (A − B) to obtain a more detailed residual part. This more detailed residual part ((A − B) + C) is decomposed using Equation (2) of the DOST into an approximate component (AC) and detailed components (DC). The detailed components (DC) are directed to the bivariate shrinkage rule for filtering, using the equations below.

The detailed components (DC) are processed using the bivariate shrinkage rule, which can be represented as
(3) \( \hat{Y}_{1k} = \frac{\left(\sqrt{DC_{1k}^2 + DC_{2k}^2} - \lambda_k\right)_+}{\sqrt{DC_{1k}^2 + DC_{2k}^2}} \cdot DC_{1k} \)

The function \((z)_+\) is defined as
(4) \( (z)_+ := \begin{cases} 0, & z < 0 \\ z, & z \ge 0 \end{cases} \)
where \(\lambda_k\) is a threshold value estimated by
(5) \( \lambda_k = \frac{\sqrt{3}\,\sigma_{noise}^2}{\hat{\sigma}_k} \)

To compute the speckle standard deviation \(\sigma_{noise}\) from the speckled detailed components, a robust median estimator over the finest-scale detailed components is used:
(6) \( \sigma_{noise} = \frac{\operatorname{median}(|DC_{1k}|)}{0.6745} \)
where \(DC_{1k}\) ranges over the speckled detailed components. The marginal variance \(\hat{\sigma}_k^2\) of the underlying signal coefficients can be calculated as
(7) \( \hat{\sigma}_k^2 = \left(\hat{\sigma}_{DC_{1k}}^2 - \sigma_{noise}^2\right)_+ \)
where \(\hat{\sigma}_{DC_{1k}}^2\) is the local variance of the observed (speckled) detailed components, calculated as
(8) \( \hat{\sigma}_{DC_{1k}}^2 = \frac{1}{|N(k)|} \sum_{DC_{1r} \in N(k)} DC_{1r}^2 \)
where \(|N(k)|\) is the block size (Zhu et al. Citation2013; Singh and Shree Citation2020a; Singh, Diwakar, et al. Citation2021).
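Equations (3)-(8) can be sketched directly in code. The implementation below is our own illustration (NumPy/SciPy, with a square |N(k)| window); dc2 plays the role of the companion coefficients \(DC_{2k}\) in the bivariate rule, which in standard bivariate shrinkage are the parent-scale coefficients.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bivariate_shrink(dc1, dc2, block=7):
    """Apply Eqs. (3)-(8): shrink detailed coefficients dc1 using the
    companion coefficients dc2 and a locally estimated signal variance."""
    # Eq. (6): robust median estimate of the noise level
    sigma_noise = np.median(np.abs(dc1)) / 0.6745
    # Eq. (8): local second moment over an |N(k)| = block x block window
    sigma2_obs = uniform_filter(dc1 ** 2, block)
    # Eq. (7): marginal signal standard deviation, clipped at zero
    sigma_k = np.sqrt(np.maximum(sigma2_obs - sigma_noise ** 2, 0.0)) + 1e-12
    # Eq. (5): per-coefficient threshold
    lam = np.sqrt(3.0) * sigma_noise ** 2 / sigma_k
    # Eqs. (3)-(4): soft-shrink the joint magnitude
    mag = np.sqrt(dc1 ** 2 + dc2 ** 2) + 1e-12
    gain = np.maximum(mag - lam, 0.0) / mag
    return gain * dc1
```

Coefficients whose local variance barely exceeds the noise floor receive a large threshold and are suppressed, while coefficients in high-variance (edge) regions pass nearly unchanged, which is the edge-retention behavior the method relies on.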

Proposed methodology

Figure 1 below shows the full methodology framework for SAR image despeckling using a guided filter and DOST-based method noise thresholding with a bivariate shrinkage rule.

Figure 1. Methodology framework of SAR image despeckling.

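The overall two-step flow of the framework can be sketched structurally in code. The sketch below is only a data-flow illustration with crude stand-ins, all of our own choosing: a box filter in place of the guided filter, a Sobel-gradient threshold in place of the Canny operator, and direct soft-thresholding of the residual in place of DOST-domain bivariate shrinkage.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def despeckle_sketch(noisy):
    """Structural sketch of the proposed two-step pipeline (stand-ins
    only; the actual method uses a guided filter, Canny, and
    DOST-domain bivariate shrinkage)."""
    # Step 1: edge-preserving smoothing (guided-filter stand-in)
    filtered = uniform_filter(noisy, 5)
    # Method noise: the residual the first step left unprocessed (A - B)
    residual = noisy - filtered
    # Edge map of the filtered image (Canny stand-in)
    grad = np.hypot(sobel(filtered, 0), sobel(filtered, 1))
    edges = (grad > grad.mean() + 2 * grad.std()).astype(float)
    # Enrich the residual with the detected edges: (A - B) + C
    enriched = residual + edges
    # Step 2: threshold the enriched residual (shrinkage stand-in)
    t = np.median(np.abs(enriched)) / 0.6745
    denoised_residual = np.sign(enriched) * np.maximum(np.abs(enriched) - t, 0)
    # Final despeckled image: smoothed part plus recovered detail
    return filtered + denoised_residual

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean * rng.gamma(4, 0.25, clean.shape)   # multiplicative speckle
out = despeckle_sketch(noisy)
```

Even with these crude stand-ins, the filtered-plus-thresholded-residual structure reduces the error against the clean scene, which is the mechanism the full method refines with its stronger components.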

Experimental outcomes and discussion

All experimental evaluations were performed on a computer with an Intel Core i5 processor and 8 gigabytes of memory. The proposed method was implemented and evaluated in MATLAB R2020a. The experiments were conducted extensively, both on real speckled SAR image datasets and on artificially speckled image datasets. To validate the urban remote sensing application of the proposed method, the tested real speckled SAR images are true urban remote sensing images. Reducing speckle noise while retaining the image's fine details is the most challenging aspect of SAR image despeckling. The lack of a recognized ground truth makes the problem of identifying speckle-free reflectivity the field's most pressing one. The connection between despeckled SAR image quality and reliability is another important issue. Analyzing the deterioration in the homogeneous parts, i.e., the decrease of speckle noise, and the retention of fine features in heterogeneous areas allows one to assess the other despeckling criteria, i.e., the fidelity and reliability of the despeckled SAR images. The simulated image dataset and the urban remote sensing image dataset (real speckled) are available in open public access databases (DATASET OF STANDARD, Anonymous Citation2023; Test Images, Anonymous Citation2014).

Another approach for determining image quality is to evaluate it without any reference image, which is the case for despeckled SAR images. This enables the recognition of the primary traits visible to the naked eye that best characterize the despeckling process, including the inability to preserve edges and point targets, blurriness, and structural and blocky artifacts. Visual evaluation is limited in that it cannot accurately quantify the filter's bias or compare the relative effectiveness of alternative despeckling techniques. Several other performance metrics have been developed for the evaluation of despeckling procedures to overcome the limitations of visual inspection. These numerical metrics can be broken down into two classes: with a reference image and without a reference image. These metrics are used to measure the effectiveness of the proposed strategies.

When a reference image exists, the image denoising and despeckling literature is practically unlimited; in this instance, the researcher has full knowledge of the image. With a with-reference index, image researchers can quickly and simply evaluate their despeckling methods by comparing them to a set of reference images. Edge retention, texture retention, and uniformity maintenance in both homogeneous and heterogeneous regions can all be measured in this context with a variety of metrics. For comparative analysis, some recent popular methods are used, such as Wang et al. (Citation2022), Perera et al. (Citation2022a), Liu et al. (Citation2022), Perera et al. (Citation2023), Wu et al. (Citation2022), Nabil et al. (Citation2023), and Baraha and Sahoo (Citation2022). Table 1 shows the proposed method's average outcomes (115 images) between despeckled and reference images, with the performance metrics measured as shown there. From this table, it can be seen that the proposed method with method noise improves the outcomes in terms of performance metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Image Quality Index (UIQI). The PSNR quantifies the ratio between the maximum potential value of a signal and the power of any distorting noise that may impact the quality of its representation (Mahajan Citation2023). The SSIM is a technique utilized to predict the perceived quality of digital television and cinematic images, along with various other forms of digital images and videos; it evaluates the degree of similarity between a pair of images (Wang et al. Citation2004). The UIQI metric is straightforward to compute and can be utilized in diverse image-processing contexts; it models image distortion as a composite of three distinct factors, namely loss of correlation, luminance distortion, and contrast distortion (Wang and Bovik Citation2002).
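The with-reference metrics mentioned above are straightforward to compute. The sketch below implements PSNR and the global form of the UIQI in NumPy (SSIM is omitted for brevity); these are the textbook formulas, not the paper's exact evaluation code, and the function names are our own.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def uiqi(ref, test):
    """Universal Image Quality Index (Wang & Bovik, 2002), global form:
    combines correlation loss, luminance distortion and contrast
    distortion in a single factor bounded above by 1."""
    mx, my = ref.mean(), test.mean()
    vx, vy = ref.var(), test.var()
    cov = np.mean((ref - mx) * (test - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
```

A perfect match gives UIQI = 1, and any distortion lowers it, which is why the index is reported alongside PSNR when comparing despeckling methods.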

Table 1. The proposed method’s average outcomes (115 images) between despeckled and reference images.

Without-reference indexes

The quality of a despeckled SAR image can be evaluated visually in a variety of ways, including artifacts, edge retention, appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and fine-detail retention in heterogeneous regions. Figure 2 depicts the comparison of the proposed method with the existing methods. Figure 2a is a speckled SAR image used for outcome analysis. Figures 2b-i show the findings of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed technique, respectively. The findings of Wang et al. (2022) visually provide good outcomes in terms of appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and retention of tiny details in heterogeneous regions. The outcome of Perera et al. (2022a) is also excellent in terms of texture retention, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions; however, low-contrast features are not properly identified. The outcome of Liu et al. (2022) is good in terms of texture appearance, but uniformity in homogeneous regions and fine-detail retention in heterogeneous regions are not successfully preserved. The outcomes of Perera et al. (2023) are excellent in terms of texture retention and low-contrast features; however, uniformity in homogeneous regions and fine-detail retention in heterogeneous regions are not properly maintained. The outcome of Wu et al. (2022) is good in terms of texture retention, but low-contrast features, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions are not successfully preserved. The outcome of Nabil et al. (2023) is excellent in terms of texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of small features in heterogeneous regions are not properly maintained. The outcomes of Baraha and Sahoo (2022) are good in terms of appearance and texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions are not successfully preserved. The proposed method produces overall outstanding outcomes in terms of artifacts, edge retention, appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and fine-detail retention in heterogeneous regions.

Figure 2. (a) Speckled SAR image; (b) outcome of Wang et al. (2022); (c) outcome of Perera et al. (2022a); (d) outcome of Liu et al. (2022); (e) outcome of Perera et al. (2023); (f) outcome of Wu et al. (2022); (g) outcome of Nabil et al. (2023); (h) outcome of Baraha and Sahoo (2022); (i) outcome of proposed method.


Figure 3 displays a comparison between the proposed method and the existing methods. Figure 3a is a speckled SAR image used for outcome analysis. Figures 3b-i illustrate the outcomes of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed method, respectively. Visually, the outcome of Wang et al. (2022) is favorable in terms of the appearance of low-contrast features, the retention of texture, the uniformity of homogeneous regions, and the retention of minute details in heterogeneous regions. The outcome of Perera et al. (2022a) is similarly favorable; however, low-contrast features are not accurately detected. The outcome of Liu et al. (2022) is satisfactory in terms of texture appearance, but uniformity in homogeneous regions and retention of fine detail in heterogeneous regions are not preserved. In the outcome of Perera et al. (2023), uniformity in homogeneous regions and fine-detail retention in heterogeneous regions are likewise not identified correctly. Texture retention is preserved well by Wu et al. (2022); however, low-contrast features, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions are not identified satisfactorily. Low-contrast features, uniformity in homogeneous regions, and retention of minor features in heterogeneous regions are not accurately identified in the outcomes of Nabil et al. (2023). Appearance and texture retention are well preserved in the outcomes of Baraha and Sahoo (2022); however, low-contrast features, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions are not properly detected. In terms of artifacts, edge retention, appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and fine-detail retention in heterogeneous regions, the proposed method produces overall remarkable outcomes.

Figure 3. (a) Speckled SAR image; (b) outcome of Wang et al. (2022); (c) outcome of Perera et al. (2022a); (d) outcome of Liu et al. (2022); (e) outcome of Perera et al. (2023); (f) outcome of Wu et al. (2022); (g) outcome of Nabil et al. (2023); (h) outcome of Baraha and Sahoo (2022); (i) outcome of proposed method.


The comparison of the proposed technique with the existing methods is shown in Figure 4. A speckled SAR image used for outcome analysis is shown in Figure 4a. The outcomes of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed technique are displayed in Figures 4b-i, respectively. In terms of appearance of low-contrast objects, texture retention, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions, the findings of Wang et al. (2022) visually produce good outcomes. Although the outcomes of Perera et al. (2022a) are outstanding in terms of uniformity in homogeneous regions, texture retention, and retention of fine features in heterogeneous regions, low-contrast features are not correctly recognized. The outcome of Liu et al. (2022) achieves decent texture appearance; however, it fails to preserve uniformity in homogeneous regions and fine-detail retention in heterogeneous regions. The findings of Perera et al. (2023) are outstanding in terms of texture retention and low-contrast features; however, uniformity in homogeneous regions and retention of fine features in heterogeneous regions are not properly identified. The outcome of Wu et al. (2022) is good in terms of texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of small features in heterogeneous regions are not well identified. The outcome of Nabil et al. (2023) is outstanding in terms of texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions are not successfully detected. The outcomes of Baraha and Sahoo (2022) are good in terms of appearance and texture retention. In terms of artifacts, edge retention, appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and fine-detail retention in heterogeneous regions, the proposed method produces overall excellent outcomes.

Figure 4. (a) Speckled SAR image; (b) outcome of Wang et al. (2022); (c) outcome of Perera et al. (2022a); (d) outcome of Liu et al. (2022); (e) outcome of Perera et al. (2023); (f) outcome of Wu et al. (2022); (g) outcome of Nabil et al. (2023); (h) outcome of Baraha and Sahoo (2022); (i) outcome of proposed method.


On the other hand, the performance metrics of the without-reference indexes do not depend on ground-truth SAR information in any way. Calculating these metrics requires mathematical SAR data models as well as fundamental image properties such as feature-level heterogeneity and homogeneity. A few instances of these metrics, which analyze the statistical organization of pixel values in the actual speckled SAR image, are ratio images, the coefficient of variation (CV), the equivalent number of looks (ENL), the target-to-clutter ratio (TCR), and the noise variance (NV). The metrics used throughout this paper are discussed below:

The ENL (Zhu et al. 2013) evaluates the smoothing factor as a performance-assessment parameter of the despeckled SAR image throughout the image-creation and post-processing activity. The statistic is based on despeckling the SAR image and then counting the increase in the equivalent number of looks. It is calculated over a homogeneous area and is defined as the ratio of the squared mean (μ) to the variance (σ²):

(9) ENL = (μ/σ)²
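Eq. (9) is computed over a user-selected homogeneous patch; a minimal NumPy sketch (the patch itself must be chosen by the analyst, and the values below are illustrative only):

```python
import numpy as np

def enl(region):
    """Eq. (9): equivalent number of looks, (mean / std)^2 over a homogeneous area."""
    region = np.asarray(region, dtype=np.float64)
    return (region.mean() / region.std()) ** 2

patch = np.array([1.0, 1.0, 3.0, 3.0])   # mean 2, standard deviation 1
print(enl(patch))  # prints 4.0
```

A higher ENL over a flat region indicates stronger speckle suppression, since the variance shrinks relative to the mean.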

The NV is employed to quantify the level of speckle content remaining in the image (Singh and Shree 2017a). A lower NV indicates less residual speckle noise in the image, and the metric does not depend on the image's brightness (Singh and Shree 2017a). NV is determined by the following formula:

(10) σ² = (1/N) ∑_{j=0}^{N−1} u_j²

where N is the size of the image and u_j denotes the j-th pixel value.
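Eq. (10), as written, computes the mean of the squared values u_j over all N pixels; a direct sketch, assuming u is supplied as a NumPy array:

```python
import numpy as np

def noise_variance(u):
    """Eq. (10): mean of the squared values over all N pixels."""
    u = np.asarray(u, dtype=np.float64).ravel()
    return np.mean(u ** 2)

print(noise_variance([[1.0, 2.0], [3.0, 4.0]]))  # prints 7.5, i.e. (1+4+9+16)/4
```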

The performance of despeckled SAR images without reference images is assessed using NV, MSE, ENL, and CV. Table 3 shows the NV values for the existing methods and the proposed method; it can be seen that the NV values of Wang et al. (2022) and Perera et al. (2022a) are good, but the best NV results are obtained by the proposed method. Table 3 also displays the MSE values for both the existing and the proposed methodologies; although the MSE values of Wang et al. (2022) and Perera et al. (2022a) are satisfactory, the best MSE outcomes are achieved by the proposed method. Table 3 likewise reports the ENL values: while Wang et al. (2022) and Perera et al. (2022a) have good ENL values, the proposed method yields the best ENL outcomes.

Table 2. UIQI of despeckled SAR images.

With-reference indexes

As SAR images are speckled in nature, they cannot themselves serve as reference images. Therefore, the with-reference outcome analyses are performed on the Barbara and Cameraman images. A despeckled image's visual quality can be assessed in a variety of ways, including artifacts, edge retention, the appearance of low-contrast features, texture restoration, uniformity in homogeneous regions, and retention of small details in heterogeneous regions. The comparison of the results between the proposed method and the current methods is shown in Figure 5. Figure 5a is the noise-free reference image. To evaluate the despeckling techniques, Figure 5b shows a synthetically produced speckled image of Barbara. The results of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed approach are shown in Figures 5c-j, respectively. The results of Wang et al. (2022) are visually suitable in terms of low-contrast object appearance, texture retention, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions. Although low-contrast features are difficult to distinguish, the results of Perera et al. (2022a) are nevertheless good in terms of texture retention, uniformity in homogeneous regions, and retention of tiny details in heterogeneous regions. The results of Liu et al. (2022) are excellent in terms of texture retention, but uniformity in homogeneous regions and retention of tiny features in heterogeneous regions are not successfully preserved. The results of Perera et al. (2023) are excellent in terms of texture retention and low-contrast features, but uniformity in homogeneous regions and retention of tiny details in heterogeneous regions are not successfully preserved. The result of Wu et al. (2022) provides good texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of small features in heterogeneous regions are not successfully identified. The result of Nabil et al. (2023) provides good texture retention, but low-contrast features, uniformity in homogeneous regions, and retention of small features in heterogeneous regions are not properly identified. The results of Baraha and Sahoo (2022) are excellent in terms of texture retention; however, low-contrast features, uniformity in homogeneous regions, and retention of tiny features in heterogeneous regions are not properly identified. In terms of artifacts, edge retention, the appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions, the proposed method consistently produces outstanding results.

Figure 5. (a) Reference image of Barbara; (b) speckled image of Barbara; (c) outcome of Wang et al. (2022); (d) outcome of Perera et al. (2022a); (e) outcome of Liu et al. (2022); (f) outcome of Perera et al. (2023); (g) outcome of Wu et al. (2022); (h) outcome of Nabil et al. (2023); (i) outcome of Baraha and Sahoo (2022); (j) outcome of proposed method.


Figure 6 shows the comparison between the proposed method and the existing methods. Figure 6a is the noise-free reference image, and Figure 6b is a synthetically speckled image of Cameraman used to test the despeckling methods. Figures 6c-j show the outcomes of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed method, respectively. The outcome of Wang et al. (2022) is good in terms of the visibility of low-contrast features, the retention of texture and uniformity, and the preservation of fine details in heterogeneous regions. The outcome of Perera et al. (2022a) is also satisfactory in terms of preserving texture, uniformity in homogeneous areas, and fine details in heterogeneous areas; however, low-contrast features are not well identified. The outcome of Perera et al. (2023) is satisfactory in terms of texture retention; however, uniformity in homogeneous areas and fine details in heterogeneous areas are not well preserved. The outcome of Liu et al. (2022) is satisfactory in terms of the retention of texture and low-contrast features; however, uniformity in homogeneous areas and the retention of fine details in heterogeneous areas are not well preserved. The outcome of Wu et al. (2022) is satisfactory in terms of texture preservation; however, low-contrast features, uniformity in homogeneous regions, and the retention of fine details in heterogeneous regions are not well identified. The outcome of Baraha and Sahoo (2022) is satisfactory in that the texture remains visible; however, low-contrast features, uniformity in homogeneous areas, and the retention of fine details in heterogeneous areas are not well identified. The outcomes of Nabil et al. (2023) are good in terms of keeping the texture visible; however, low-contrast features, uniformity in homogeneous areas, and fine details in heterogeneous areas are not well identified. Overall, the proposed method gives excellent outcomes in terms of artifacts, edge retention, visibility of low-contrast features, texture retention, uniformity in homogeneous areas, and fine-detail retention in heterogeneous areas.

Figure 6. (a) Reference image of Cameraman; (b) speckled image of Cameraman; (c) outcome of Wang et al. (2022); (d) outcome of Perera et al. (2022a); (e) outcome of Liu et al. (2022); (f) outcome of Perera et al. (2023); (g) outcome of Wu et al. (2022); (h) outcome of Nabil et al. (2023); (i) outcome of Baraha and Sahoo (2022); (j) outcome of proposed method.


Additionally, histogram analysis is performed on the Cameraman image for comparative study. Figure 7a shows a comparative histogram analysis between the reference image, Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), and the proposed method. Similarly, Figure 7b shows a comparative histogram analysis between the reference image, Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed method. From both histograms, it can be seen that the histogram of the proposed method is the most similar to that of the reference image among the compared methods. The histograms of Wang et al. (2022) and Perera et al. (2022a) also give satisfactory outcomes; nevertheless, in the overall histogram analysis, the proposed method gives excellent outcomes.

Figure 7. Histogram analysis of Cameraman image; (a) comparative histogram analysis between reference image, Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), and proposed method; (b) comparative histogram analysis between reference image, Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and proposed method.


In addition, an intensity profile analysis along a line is performed on the Barbara image for comparative study. Figure 8 shows the comparative intensity profile analysis among Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed method. The intensity profile analysis makes it evident that, among the compared methods, the intensity profile of the proposed approach is the closest to that of the reference image. The intensity profile analyses of Wang et al. (2022) and Perera et al. (2022a) likewise produce satisfactory outcomes; nevertheless, the proposed method produces very good outcomes. Figure 7 is the histogram analysis and Figure 8 is the intensity profile analysis; these two pixel-level analyses check despeckling at the minor and major levels. Histogram analysis checks the overall despeckling performance, while intensity profile analysis checks the despeckling performance along a straight pixel line.
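The two pixel-level analyses can be sketched as follows; this is a minimal illustration, assuming 8-bit grayscale images stored as NumPy arrays (the histogram-intersection score, the row index, and the bin count are illustrative choices, not taken from the paper):

```python
import numpy as np

def histogram_intersection(ref, test, bins=256):
    """Overall similarity of gray-level distributions (1.0 = identical histograms)."""
    h_ref, _ = np.histogram(ref, bins=bins, range=(0, 256))
    h_test, _ = np.histogram(test, bins=bins, range=(0, 256))
    h_ref = h_ref / h_ref.sum()      # normalize counts to probability distributions
    h_test = h_test / h_test.sum()
    return np.minimum(h_ref, h_test).sum()

def intensity_profile(img, row):
    """Pixel intensities along one straight horizontal line of the image."""
    return np.asarray(img)[row, :]

# Identical images have identical histograms and identical line profiles.
img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))
print(histogram_intersection(img, img))  # ~1.0 (up to floating-point rounding)
```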

Figure 8. Comparative analysis against intensity profile.


Numerous resources (Li et al. 2022; Mohanakrishnan et al. 2022; Wang et al. 2022; Dalsasso et al. 2022; Farhadiani et al. 2022) can be used to determine how effective a despeckled image is whenever a reference index is present. Performance metrics falling under this rubric make use of data from the accompanying reference image; the two images (the reference and the despeckled) serve as inputs for these metrics.

The mean squared error (MSE) (Singh, Diwakar, et al. 2021) compares the despeckled SAR image to a reference SAR image and provides an evaluation of the performance of the system, assessing the quality of the produced SAR image. Because it does not examine finer features, it cannot determine how well the image performs overall.

(11) MSE = (1/N) ∑_{j=0}^{N−1} (X_j − Y_j)²

where X and Y represent the speckle-free and original images, respectively, and N is the number of pixels.
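A direct NumPy sketch of Eq. (11), assuming both images are arrays of the same shape:

```python
import numpy as np

def mse(x, y):
    """Eq. (11): mean squared error between speckle-free image X and original Y."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return np.mean((x - y) ** 2)

print(mse([[1, 2], [3, 4]], [[1, 2], [3, 0]]))  # prints 4.0, i.e. (0+0+0+16)/4
```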

By comparing the despeckled image to the reference image, the SSIM provides an assessment of their degree of resemblance. Luminance, contrast, and structure are the determining elements, and all three terms are combined to form the SSIM:

(12) SSIM(a, b) = [(2 μ_a μ_b + P1)(2 σ_ab + P2)] / [(μ_a² + μ_b² + P1)(σ_a² + σ_b² + P2)]

where μ_a, μ_b, σ_a, σ_b, and σ_ab are the local means, standard deviations, and cross-covariance of images a and b; P1 = (0.01 L)² and P2 = (0.03 L)², where L is the specified dynamic-range value. According to the literature, SSIM ranges from 0 to 1.
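A global (single-window) sketch of Eq. (12) in NumPy; practical SSIM implementations average the index over local windows, so this simplified version only illustrates the formula itself:

```python
import numpy as np

def ssim_global(a, b, L=255):
    """Eq. (12) with mu/sigma taken over the whole image (single window)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    P1, P2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = np.mean((a - mu_a) * (b - mu_b))   # cross-covariance sigma_ab
    return ((2 * mu_a * mu_b + P1) * (2 * cov_ab + P2)) / (
        (mu_a ** 2 + mu_b ** 2 + P1) * (var_a + var_b + P2)
    )

x = np.array([[10.0, 20.0], [30.0, 40.0]])
print(ssim_global(x, x))  # ~1.0: an image is maximally similar to itself
```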

The SNR is a quantitative metric used to measure the sensitivity of an imaging system:

(13) SNR = 10 · log10(Var[g] / MSE)

where Var[g] is the reference-image variance.

In the field of denoising, PSNR is one of the most popular performance metrics. Outcomes are more promising when the PSNR is high. Specifically, PSNR is calculated by:

(14) PSNR = 10 · log10((255 × 255) / MSE)
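Eqs. (13) and (14) in NumPy, with MSE redefined locally so the snippet is self-contained; the constant 255 assumes 8-bit images:

```python
import numpy as np

def mse(x, y):
    return np.mean((np.asarray(x, dtype=np.float64) - np.asarray(y, dtype=np.float64)) ** 2)

def snr(ref, test):
    """Eq. (13): ratio of reference-image variance to MSE, in decibels."""
    return 10.0 * np.log10(np.var(np.asarray(ref, dtype=np.float64)) / mse(ref, test))

def psnr(ref, test):
    """Eq. (14): ratio of the squared maximum 8-bit value to MSE, in decibels."""
    return 10.0 * np.log10((255.0 ** 2) / mse(ref, test))

ref = np.zeros((4, 4))
noisy = np.full((4, 4), 255.0)
print(psnr(ref, noisy))  # prints 0.0: MSE equals 255^2, the worst case for 8-bit data
```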

The UIQI is determined by considering three separate criteria: the degree of linear correlation, the proximity of the mean brightness, and the image contrast. The values of all three factors lie between 0 and 1, so the UIQI settles within the range [0, 1]. UIQI values closer to one indicate better image quality, while values closer to zero indicate lower image quality.

(15) Q = (σ_xy / (σ_x σ_y)) · (2 x̄ ȳ / (x̄² + ȳ²)) · (2 σ_x σ_y / (σ_x² + σ_y²))
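Eq. (15) as a NumPy sketch, computing each of the three factors explicitly; population (biased) statistics are assumed:

```python
import numpy as np

def uiqi(x, y):
    """Eq. (15): product of correlation, luminance, and contrast factors."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = np.mean((x - mx) * (y - my))          # cross-covariance sigma_xy
    correlation = sxy / (sx * sy)
    luminance = 2 * mx * my / (mx ** 2 + my ** 2)
    contrast = 2 * sx * sy / (sx ** 2 + sy ** 2)
    return correlation * luminance * contrast

x = np.array([[10.0, 20.0], [30.0, 40.0]])
print(uiqi(x, x))  # ~1.0: a distortion-free image scores the maximum
```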

The results of the performance metrics are shown in Tables 4, 5, and 2. In Table 4, the PSNR values are shown at different noise levels for the existing methods and the proposed method. The PSNR values of Wang et al. (2022) and Perera et al. (2022a) at low noise levels are good, but the highest PSNR values are obtained by the proposed method; as the noise level increases, the PSNR values of the proposed method remain the highest. Therefore, from the PSNR values it can be concluded that the proposed method gives excellent results in most cases. Table 5 displays the SSIM values at various noise levels for both the existing and proposed approaches. The SSIM values produced by the proposed method are the highest, even though the SSIM values obtained by Wang et al. (2022) and Perera et al. (2022a) at low noise levels are good; the SSIM values of the proposed method also remain the highest as the noise level increases, so the proposed method produces high-quality outputs in most cases. Table 2 displays the UIQI values at varying noise levels for the existing methods and the proposed approach. The UIQI values of Wang et al. (2022) and Perera et al. (2022a) at low noise levels are satisfactory, but the proposed method produces the highest UIQI values, and these remain the highest even as the noise intensity rises. From the UIQI values, one can conclude that the proposed method, for the most part, produces outstanding outcomes.

Table 3. Performance assessment of despeckled SAR images (average outcomes over 117 images).

Table 4. PSNR of despeckled SAR images.

Table 5. SSIM of despeckled SAR images.

The experimental findings were assessed using MATLAB. The hardware and software configuration used is cited in the Section "Experimental outcomes and discussion"; documenting the system configuration is essential to ensure that the processing time of the proposed methodology is compared consistently with that of the other methodologies. As shown in Table 6, the proposed algorithm has a processing time of approximately 7.2679 seconds and outperforms all other assessed techniques in computational speed. The proposed methodology thus exhibits superior outcomes together with efficient computational performance; its efficacy is confirmed by the combination of minimal computational expenditure and the superior visual fidelity of the despeckled images.

Table 6. Comparison of execution time in seconds.

Merits, demerits, and future perspectives

The proposed methodology uses homomorphic filtering, which makes it possible to incorporate any other additive image-restoration model into this work. Due to the use of the Canny edge detector, fine details such as edges and object corners are well preserved in homogeneous and non-homogeneous areas. The blurring effect disappears entirely from the results, and no artifact generation is observed during the process. The use of method-noise thresholding in the proposed method delivers the highest level of speckle-noise reduction in the DOST domain. The only disadvantage observed in the proposed methodology is the high computational cost caused by the internal implementation of method-noise thresholding.
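The homomorphic first step can be sketched as follows. This is a minimal illustration only, assuming an intensity image in a NumPy array; the radius r and regularization eps are arbitrary placeholder values, and the paper's second step (DOST-domain method-noise thresholding with the bivariate shrinkage rule and Canny edges) is omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-2):
    """Self-guided filter (He et al. 2013): edge-preserving local linear smoothing."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # per-pixel linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def homomorphic_despeckle(img):
    """Log transform turns multiplicative speckle into additive noise, the guided
    filter smooths it, and exp maps the result back to the intensity domain."""
    log_img = np.log1p(np.asarray(img, dtype=np.float64))
    return np.expm1(guided_filter(log_img, log_img))
```

On a perfectly flat region the local variance is near zero, so a ≈ 0 and the filter returns the local mean; a constant image therefore passes through essentially unchanged, while strong edges (high local variance) are preserved.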

The proposed work is a hybrid homomorphic despeckling technique that involves several supporting methods, and various possibilities for improving it are discussed here. Instead of the Canny edge detector, other edge-detection methods could be evaluated and compared. The edges detected by the Canny operator are added to the residual image; this expression could be improved by using another mathematical operation instead of addition. Other advanced domains are available for image analysis and decomposition, such as the non-subsampled contourlet transform, the non-subsampled shearlet transform, the curvelet transform, and the ridgelet transform, and they could be used in place of the DOST. Similarly, the internal parameters can be changed and tested, and multiple other perspectives can be incorporated. There is always scope for improvement in the field of image enhancement and restoration. After SAR data restoration, the pre-processed SAR image can be used for urban object analysis and identification, urban disaster monitoring and change analysis, urban climate change and variation, and other urban remote-sensing applications.

Conclusion

The SAR image despeckling technique proposed in this paper is based on homomorphic filtering, which takes advantage of the additive restoration model. The proposed method is a two-step, edge-preserving, hybrid SAR image despeckling technique: the first step implements a guided filter, and the second step applies modified method-noise thresholding using the bivariate shrinkage rule and the Canny edge operator in the DOST domain. The first step is mainly responsible for speckle reduction with better edge preservation. The second step handles the unprocessed part of the despeckled image, filtering out the unfiltered components of the residual images, and delivers the highest level of speckle reduction in the results. Together, these steps help maintain uniformity in homogeneous areas. The average PSNR values of the Barbara and Cameraman images are 36.5698 and 34.6598, the average SSIM values are 0.9443 and 0.9281, and the average UIQI values are 0.8838 and 0.8489; all are averages calculated over noise variances ranging from 5 to 40%. In the without-reference case, the average NV value is 0.3915, the average MSE value is 881.1201, and the average ENL value is 2.9657. Based on the comparative qualitative and quantitative testing performed on the proposed work, the proposed method surpasses all the compared despeckling methods.

References

  • Anonymous. 2014. “Test images” 2014-03-13, Website: https://ccia.ugr.es/cvg/dbimagenes/.
  • Anonymous. 2023. “Dataset of standard 512X512 grayscale test images” 23/7/03, Website: https://ccia.ugr.es/cvg/CG/base.htm.
  • Bamler, R. 2000. “Principles of synthetic aperture radar.” Surveys in Geophysics, Vol. 21(No. 2/3):pp. 147–157. doi:10.1023/A:1006790026612.
  • Baraha, S., Sahoo, A.K., and Modalavalasa, S. 2022. “A systematic review on recent developments in nonlocal and variational methods for SAR image despeckling.” Signal Processing, Vol. 196: pp. 108521. doi:10.1016/j.sigpro.2022.108521.
  • Baraha, S., and Sahoo, A.K. 2022. “Restoration of speckle noise corrupted SAR images using regularization by denoising.” Journal of Visual Communication and Image Representation, Vol. 86: pp. 103546. doi:10.1016/j.jvcir.2022.103546.
  • Baraha, S., and Sahoo, A.K. 2020. “SAR image despeckling using plug‐and‐play ADMM.” IET Radar, Sonar & Navigation, Vol. 14(No. 9): pp. 1297–1309. doi:10.1049/iet-rsn.2019.0609.
  • Barman, T., Deka, B., and Mullah, H.U. 2023. “Edge-preserving single remote-sensing image super-resolution using sparse representations.” SN Computer Science, Vol. 4(No. 3): pp. 1–22. doi:10.1007/s42979-023-01764-7.
  • Yu, C., and Shin, Y. 2022. "SAR image despeckling based on U-shaped transformer from a single noisy image," 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), pp. 1738–1740, Jeju Island, Republic of Korea, IEEE Xplore doi:10.1109/ICTC55196.2022.9952991.
  • Dalsasso, E., Denis, L., Muzeau, M., and Tupin, F. 2022. “Self-supervised training strategies for SAR image despeckling with deep neural networks”. In EUSAR 2022; 14th European Conference on Synthetic Aperture Radar, pp. 1–6. Leipzig, Germany: IEEE Xplore.
  • Farhadiani, R., Homayouni, S., Bhattacharya, A., and Mahdianpari, M. 2022. “SAR despeckling based on CNN and Bayesian estimator in complex wavelet domain.” IEEE Geoscience and Remote Sensing Letters, Vol. 19: pp. 1–5. doi:10.1109/LGRS.2022.3185557.
  • Hiremath, B. 2021. “All you need to know about guided image filtering”, 22/12/2021. Website: https://analyticsindiamag.com/all-you-need-to-know-about-guided-image-filtering/.
  • Hong, H.P. 2021. “Response and first passage probability of linear elastic SDOF systems subjected to nonstationary stochastic excitation modelled through S-transform.” Structural Safety, Vol. 88: pp. 102007. doi:10.1016/j.strusafe.2020.102007.
  • Iqbal, M., Chen, J., Yang, W., Wang, P., and Sun, B. 2013. “SAR image despeckling by selective 3D filtering of multiple compressive reconstructed images.” Progress in Electromagnetics Research, Vol. 134: pp. 209–226. doi:10.2528/PIER12091504.
  • He, K., Sun, J., and Tang, X. 2013. “Guided image filtering.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35(No. 6): pp. 1397–1409. doi:10.1109/TPAMI.2012.213.
  • Kamath, P.R., Senapati, K., and Jidesh, P. 2021. “Despeckling of SAR images using shrinkage of two-dimensional discrete orthonormal S-transform.” International Journal of Image and Graphics, Vol. 21(No. 02): pp. 2150023. doi:10.1142/S0219467821500236.
  • Kamath, P. R. 2021. Some applications of S-transform and its modifications in signal and image processing. Doctoral dissertation. Surathkal: National Institute of Technology Karnataka.
  • Katunin, A. 2021. “Identification of structural damage using S-transform from 1D and 2D mode shapes.” Measurement, Vol. 173: pp. 108656. doi:10.1016/j.measurement.2020.108656.
  • Lapini, A., Bianchi, T., Argenti, F., and Alparone, L. 2014. “Blind speckle decorrelation for SAR image despeckling.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 52(No. 2): pp. 1044–1058. doi:10.1109/TGRS.2013.2246838.
  • Li, W., Pang, B., Xu, X., and Wei, B. 2022. “Multi-dictionary learning with superpixel-based clustering for SAR Image despeckling.” IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), pp. 390–393. doi:10.1109/IPEC54454.2022.9777399.
  • Li, J., Yu, W., Wang, Y., Wang, Z., Xiao, J., Yu, Z., and Zhang, D. 2023. “Guidance-aided triple-adaptive frost filter for speckle suppression in the synthetic aperture radar image.” Remote Sensing, Vol. 15(No. 3): pp. 551. doi:10.3390/rs15030551.
  • Liu, S., Pu, N., Cao, J., and Zhang, K. 2022. “Synthetic aperture radar image despeckling based on multi-weighted sparse coding.” Entropy, Vol. 24(No. 1): pp. 96. doi:10.3390/e24010096.
  • Esam El-Dine Atta, M., Ibrahim, D.K., and Gilany, M.I. 2021. “Broken bar faults detection under induction motor starting conditions using the optimized stockwell transform and adaptive time–frequency filter.” IEEE Transactions on Instrumentation and Measurement, Vol. 70: pp. 1–10. doi:10.1109/TIM.2021.3084301.
  • Mahajan, Parul. 2023. “Peak signal-to-noise ratio as an image quality metric”, Mar 23, 2023, Website: https://www.ni.com/en-in/shop/data-acquisition-and-control/add-ons-for-data-acquisition-and-control/what-is-vision-development-module/peak-signal-to-noise-ratio-as-an-image-quality-metric.html#:∼:text=The%20term%20peak%20signal%2Dto,the%20quality%20of%20its%20representation.
  • Mejjaoli, H. 2021. “Dunkl–Stockwell transform and its applications to the time–frequency analysis.” Journal of Pseudo-Differential Operators and Applications, Vol. 12(No. 2): pp. 1–59. doi:10.1007/s11868-021-00378-y.
  • Mohanakrishnan, P., Suthendran, K., Pradeep, A., and Yamini, A.P. 2022. “Synthetic aperture radar image despeckling based on modified convolution neural network.” Applied Geomatics, Vol. 14: pp. 1–12. doi:10.1007/s12518-022-00420-8.
  • Morteza, A., and Amirmazlaghani, M. 2022. “A Novel Gaussian-Copula modeling for image despeckling in the shearlet domain.” Signal Processing, Vol. 192: pp. 108340. doi:10.1016/j.sigpro.2021.108340.
  • Mv, S., and Mn, G. 2015. “A modified BM3D algorithm for SAR image despeckling.” Procedia Computer Science, Vol. 70: pp. 69–75. doi:10.1016/j.procs.2015.10.038.
  • Nabil, G., Azzedine, B., and Mustapha, B. 2023. “Fast and efficient variational method based on G 0 distribution for SAR image despeckling.” Multimedia Tools and Applications, Vol. 82(No. 4): pp. 5899–5922. doi:10.1007/s11042-022-13472-0.
  • Perera, M.V., Bandara, W.G.C., Valanarasu, J.M.J., and Patel, V.M. 2022a. “Transformer-based SAR image despeckling.” IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium, pp. 751–754. doi:10.1109/IGARSS46834.2022.9884596.
  • Perera, M.V., Bandara, W.G.C., Valanarasu, J.M.J., and Patel, V.M. 2022b. “SAR despeckling using overcomplete convolutional networks.” IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, pp. 401–404. doi:10.1109/IGARSS46834.2022.9884632.
  • Perera, M.V., Nair, N.G., Bandara, W.G.C., and Patel, V.M. 2023. “Sar despeckling using a denoising diffusion probabilistic model.” IEEE Geoscience and Remote Sensing Letters, Vol. 20: pp. 1–5. doi:10.1109/LGRS.2023.3270799.
  • Stockwell, R.G., Mansinha, L., and Lowe, R.P. 1996. “Localization of the complex spectrum: The S transform.” IEEE Transactions on Signal Processing, Vol. 44(No. 4): pp. 998–1001. doi:10.1109/78.492555.
  • Shree, R., Shukla, A.K., Pandey, R.P., Shukla, V., and Singh, P. 2020. “A critical review on despeckling methods in agricultural SAR image.” International Journal of Applied Exercise Physiology, Vol. 9(No. 7): pp. 258–266.
  • Singh, P., and Shree, R. 2020a. “A new homomorphic and method noise thresholding based despeckling of SAR image using anisotropic diffusion.” Journal of King Saud University – Computer and Information Sciences, Vol. 32(No. 1): pp. 137–148. doi:10.1016/j.jksuci.2017.06.006.
  • Singh, P., Shree, R., and Diwakar, M. 2021. “A new SAR image despeckling using correlation based fusion and method noise thresholding.” Journal of King Saud University - Computer and Information Sciences, Vol. 33(No. 3): pp. 313–328. doi:10.1016/j.jksuci.2018.03.009.
  • Singh, P., Diwakar, M., Shankar, A., Shree, R., and Kumar, M. 2021. “A review on SAR image and its despeckling.” Archives of Computational Methods in Engineering, Vol. 28(No. 7): pp. 4633–4653. doi:10.1007/s11831-021-09548-z.
  • Singh, P., and Shree, R. 2020b. “Impact of method noise on SAR image despeckling.” International Journal of Information Technology and Web Engineering, Vol. 15(No. 1): pp. 52–63. doi:10.4018/IJITWE.2020010104.
  • Singh, P., and Shree, R. 2016. “Speckle noise: Modelling and implementation.” International Journal of Control Theory Applications, Vol. 9(No. 17): pp. 8717–8727.
  • Singh, P., and Shree, R. 2017a. “A new computationally improved homomorphic despeckling technique of SAR images.” International Journal of Advanced Research in Computer Science, Vol. 8(No. 3): pp. 894–898. doi:10.26483/ijarcs.v8i3.3122.
  • Singh, P., and Shree, R. 2017b. “Statistical quality analysis of wavelet based SAR images in despeckling process.” Asian Journal of Electrical Sciences, Vol. 6(No. 2): pp. 1–18. doi:10.51983/ajes-2017.6.2.2001.
  • Singh, P., Shankar, A., Diwakar, M., and Khosravi, M.R. 2022. “MSPB: intelligent SAR despeckling using wavelet thresholding and bilateral filter for big visual radar data restoration and provisioning quality of experience in real-time remote sensing.” Environment, Development and Sustainability, pp. 1–31. doi:10.1007/s10668-022-02395-3.
  • Tufa, G.T., Andargie, F.A., and Bijalwan, A. 2022. “Acceleration of Deep Neural Network Training Using Field Programmable Gate Arrays.” Computational Intelligence and Neuroscience, Vol. 2022: pp. 8387364–11. doi:10.1155/2022/8387364.
  • Wang, C., Yin, Z., Ma, X., and Yang, Z. 2022. “SAR image despeckling based on block-matching and noise-referenced deep learning method.” Remote Sensing, Vol. 14(No. 4): pp. 931. doi:10.3390/rs14040931.
  • Wang, G., Bo, F., Chen, X., Lu, W., Hu, S., and Fang, J. 2022. “A collaborative despeckling method for SAR images based on texture classification.” Remote Sensing, Vol. 14(No. 6): pp. 1465. doi:10.3390/rs14061465.
  • Wu, F., Zhu, C., Xu, J., Bhatt, M.W., and Sharma, A. 2022. “Research on image text recognition based on canny edge detection algorithm and k-means algorithm.” International Journal of System Assurance Engineering and Management, Vol. 13(No. S1): pp. 72–80. doi:10.1007/s13198-021-01262-0.
  • Wu, W., Huang, X., Shao, Z., Teng, J., and Li, D. 2022. “SAR-DRDNet: a SAR image despeckling network with detail recovery.” Neurocomputing, Vol. 493: pp. 253–267. doi:10.1016/j.neucom.2022.04.066.
  • Yommy, A. S., Liu, R., and Wu, S. 2015. “SAR image despeckling using refined Lee filter”. 7th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, Vol. 2: pp. 260–265. China: IEEE Xplore
  • Zhou, P., Ye, W., Xia, Y., and Wang, Q. 2011. “An improved canny algorithm for edge detection.” Journal of Computational Information Systems, Vol. 7(No. 5): pp. 1516–1523.
  • Wang, Z., Bovik, A.C., Sheikh, H.R., and Simoncelli, E.P. 2004. “Image quality assessment: from error visibility to structural similarity.” IEEE Transactions on Image Processing, Vol. 13(No. 4): pp. 600–612. doi:10.1109/TIP.2003.819861.
  • Wang, Z., and Bovik, A.C. 2002. “A universal image quality index," in.” IEEE Signal Processing Letters, Vol. 9(No. 3): pp. 81–84. doi:10.1109/97.995823.
  • Zhu, J., Wen, J., and Zhang, Y. 2013. “A new algorithm for SAR image despeckling using an enhanced Lee filter and median filter”. In 2013 6th International congress on image and signal processing (CISP), Vol. 1, pp. 224–228. Hangzhou, China. IEEE Xplore doi:10.1109/CISP.2013.6743991.