The Limited-FCMSPCNN algorithm for image defogging

To address the problem of defogging foggy-scene images, this paper proposes a parameter-tunable image defogging algorithm based on the FC-MSPCNN model. First, we simplify the traditional PCNN model and propose the Limited-FCMSPCNN (LFC-MSPCNN) model. Second, we describe how the five key parameters of the model (α, β, V, G and a modulation parameter q) are set. Third, by comparison with the Dark Channel Prior algorithm, the Retinex algorithm and the FC-MSPCNN algorithm, we verify that the proposed algorithm defogs foggy-sky images effectively and that the LFC-MSPCNN method is robust.


Introduction
In recent years, with the rapid development of intelligent and information technology, images have come to play an essential role in conveying information in everyday life. In fields such as artificial intelligence and computer vision, the quality requirements placed on acquired image information are becoming increasingly demanding. However, during image acquisition and transmission, images containing interference can arrive at the receiving end owing to the transmission channel, the transmission medium, the images themselves and other unavoidable factors [1]. The main cause of reduced imaging quality is particles suspended in the atmosphere, such as water vapour and dust. These particles scatter the light reflected from the imaged object, weakening it, and scattered ambient light mixes into the imaging light. During acquisition, fine substances floating in the air, together with lighting and sensor factors, lower the contrast of the outdoor image received by the imaging device and blur its detail. Specific processing is therefore applied to reduce or eliminate the effect of such particles on the image, a process known as image defogging.
At present, image defogging methods fall into two main categories. The first is based on image restoration: a physically based approach that defogs the image by establishing an atmospheric scattering model and using relevant prior knowledge, for example the boundary-constrained image restoration method [2] and the dark channel prior [3]. The literature [2] proposes processing document images with simple boundary noise removal and enhancement, and the literature [3] proposes a method for foggy-sky images based on dark-prior theory. The second category is based on image enhancement: a non-physical-model approach that reduces the impact of fog on image quality through enhancement algorithms such as the wavelet transform [4], the Retinex algorithm [5] and image fusion [6]. The guided filtering method in the literature [4] sharpens the image by filtering the content image with a guidance image, subtracting the filtered result from the guidance image, and adding the difference back to the input; the improved Retinex defogging algorithm in the literature [5] enhances image quality using the Visual Servo (VS) process of Multi-Scale Retinex and then quantifies the images with the visual-quality metrics used within VS; the literature [6] proposes a multi-focus image fusion method that learns a Fully Convolutional Network (FCN) to detect focused regions at the pixel level, training the FCN on whole images. Despite the progress made by the above studies, they remain inadequate for images with heavy fog.
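To make the restoration-based family concrete: the dark channel prior [3] rests on the observation that in most haze-free outdoor patches at least one colour channel contains some very dark pixels, so a high dark-channel value signals fog. A minimal sketch of the dark-channel computation follows; the window size and the naive sliding-window minimum are illustrative choices, not the implementation of the cited work.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a local minimum filter.

    img: H x W x 3 float array in [0, 1]; patch: window size (illustrative).
    """
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)                 # minimum over the colour channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):                        # naive sliding-window minimum
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

In a full dark-channel defogger this map is used to estimate the transmission and atmospheric light before inverting the scattering model; only the prior itself is sketched here.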
In this paper, we propose a Limited FC-MSPCNN (LFC-MSPCNN) model based on Lian's Fire-Controlled MSPCNN (FC-MSPCNN) model [7], which contains four adaptive parameters α, β, V and G, a synaptic weight matrix W, and a modulation parameter q. Building on the MSPCNN parameter-setting method, we define β and the synaptic weight matrix W so that each neuron fires only once. To avoid a second firing of any neuron, the two parameters V and G are set according to the change in the internal activity. The comparison between the internal activity U and the dynamic threshold E is governed by the predetermined parameter q, which also adjusts the number of neuron firings within the effective pulse cycle. Moreover, compared with classical defogging algorithms, the model achieves excellent results on foggy-sky images.

The PCNN model
R. Eckhorn proposed the pulse-coupled neural network (PCNN) as a biologically inspired neural-network model of the visual cortex. Owing to its pulse modulation and coupled linking properties, this network is also known as the third-generation artificial neural network, and it is widely used in image-processing tasks such as image segmentation, image smoothing and edge detection. Its network structure is shown in Figure 1. In Figure 1, Y[n-1] is the previous output of the neuron and S is the external input. In the modulation coupling section, L is the linking input; the term (1 + βL) is multiplied by the feeding input F to obtain the internal activity U. In the pulse generation section, E is the dynamic threshold that regulates pulse generation; U and E jointly decide whether the neuron fires and produces the output Y.
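One iteration of the classical discrete PCNN neuron described above can be sketched as follows. All parameter values and the 3x3 kernel are illustrative placeholders, not values from the cited works; the neighbourhood sum stands in for the synaptic weight matrices of the feeding and linking channels.

```python
import numpy as np

def link_input(Y, K):
    """Weighted 3x3 neighbourhood sum of the previous firing map Y (zero boundary)."""
    h, w = Y.shape
    P = np.pad(Y, 1)
    out = np.zeros_like(Y)
    for a in range(3):
        for b in range(3):
            out += K[a, b] * P[a:a + h, b:b + w]
    return out

def pcnn_step(S, Y_prev, F_prev, L_prev, E_prev, *,
              alpha_F=0.1, alpha_L=1.0, alpha_E=1.0,
              V_F=0.5, V_L=0.2, V_E=20.0, beta=0.1):
    """One classical discrete PCNN iteration (illustrative parameter values)."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    link = link_input(Y_prev, K)                     # firing of the neighbourhood
    F = np.exp(-alpha_F) * F_prev + V_F * link + S   # feeding input
    L = np.exp(-alpha_L) * L_prev + V_L * link       # linking input
    U = F * (1.0 + beta * L)                         # modulation coupling
    Y = (U > E_prev).astype(float)                   # pulse generation
    E = np.exp(-alpha_E) * E_prev + V_E * Y          # dynamic threshold update
    return F, L, U, E, Y
```

Iterating this step over an image and accumulating Y gives the pulse maps that the segmentation and defogging variants below build on.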
Subsequently, various simplified PCNN models with automatically set parameters (SPCNN) were derived from the basic discrete PCNN model proposed by Lindblad et al. [8] and the classical SCM model [9], for example those in the literature [10] and [11]. The SPCNN model proposed by Chen in the literature [11] is given by Eqs. (1)-(5):

F_ij[n] = S_ij, (1)
L_ij[n] = V_L Σ_kl W_ijkl Y_kl[n-1], (2)
U_ij[n] = e^(-α_f) U_ij[n-1] + F_ij[n] (1 + β L_ij[n]), (3)
Y_ij[n] = 1 if U_ij[n] > E_ij[n-1], and 0 otherwise, (4)
E_ij[n] = e^(-α_e) E_ij[n-1] + V_E Y_ij[n]. (5)

Here F_ij[n] and L_ij[n] represent the feedback input and the linking input at the n-th iteration for the pixel at position (i, j); S_ij represents the external stimulus; U_ij[n] represents the internal activity; E_ij[n] represents the dynamic threshold; Y_ij[n] represents the output; β represents the link strength; V_L represents the amplitude of the linking input; V_E represents the amplitude of the dynamic threshold; W_ijkl represents the synaptic connection matrix of the linking input; and α_f and α_e represent the exponential decay constants of the feedback input and the dynamic threshold, respectively.
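The SPCNN update of Eqs. (1)-(5) can be sketched as follows. The parameter values and the 3x3 weight matrix W are illustrative placeholders; in practice they come from the adaptive settings (6)-(9).

```python
import numpy as np

def link_input(Y, W):
    """Weighted 3x3 neighbourhood sum of the previous firing map Y (zero boundary)."""
    h, w = Y.shape
    P = np.pad(Y, 1)
    out = np.zeros_like(Y)
    for a in range(3):
        for b in range(3):
            out += W[a, b] * P[a:a + h, b:b + w]
    return out

def spcnn_step(S, Y_prev, U_prev, E_prev, *,
               alpha_f=0.3, alpha_e=0.7, beta=0.4, V_L=1.0, V_E=20.0):
    """One SPCNN iteration, Eqs. (1)-(5) (illustrative parameter values)."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                        # synaptic weights W_ijkl
    F = S                                                  # (1) feedback input
    L = V_L * link_input(Y_prev, W)                        # (2) linking input
    U = np.exp(-alpha_f) * U_prev + F * (1.0 + beta * L)   # (3) internal activity
    Y = (U > E_prev).astype(float)                         # (4) pulse output
    E = np.exp(-alpha_e) * E_prev + V_E * Y                # (5) dynamic threshold
    return U, E, Y
```

Compared with the classical PCNN, the feeding channel collapses to the stimulus itself and only U and E carry state between iterations, which is the simplification the MSPCNN and FC-MSPCNN variants start from.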
According to the model in the literature [11], there are five adaptive parameters α_f, β, V_L, V_E and α_e, which are set as follows:

α_f = log(1/σ(S)), (6)
β = (S_max/S′ - 1)/(6 V_L), (7)
V_E = e^(-α_f) + 1 + 6 β V_L, (8)
α_e = ln( V_E / ( S′ (1 - e^(-3α_f))/(1 - e^(-α_f)) + 6 β V_L e^(-α_f) ) ). (9)

In (6)-(9), S′ and S_max denote the normalised Otsu threshold and the maximum pixel intensity of the image, respectively; V_L is generally taken as a constant; σ(S) denotes the standard deviation of the normalised sample image.
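A sketch of one common form of the SPCNN adaptive parameter computation (6)-(9), following Chen's setting in [11], is given below; the exact expressions should be checked against the original paper. The normalised Otsu threshold S′ is assumed to be precomputed, and V_L is fixed to 1 as is conventional.

```python
import numpy as np

def spcnn_params(S, S_otsu, V_L=1.0):
    """Adaptive SPCNN parameters, Eqs. (6)-(9), for a normalised image S in [0, 1].

    S_otsu: normalised Otsu threshold S' (assumed precomputed); V_L: constant.
    """
    S_max = S.max()
    alpha_f = np.log(1.0 / np.std(S))                    # (6) decay of U
    beta = (S_max / S_otsu - 1.0) / (6.0 * V_L)          # (7) link strength
    V_E = np.exp(-alpha_f) + 1.0 + 6.0 * beta * V_L      # (8) threshold amplitude
    M = (1.0 - np.exp(-3.0 * alpha_f)) / (1.0 - np.exp(-alpha_f))
    alpha_e = np.log(V_E / (S_otsu * M +                 # (9) decay of E
                            6.0 * beta * V_L * np.exp(-alpha_f)))
    return alpha_f, beta, V_E, alpha_e
```

All four values are derived purely from image statistics (standard deviation, maximum intensity, Otsu threshold), which is what makes the parameter setting automatic.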

LFC-MSPCNN model
Based on the favourable image-processing performance of the SPCNN model, and to further reduce its computational complexity, Lian proposed an improved SPCNN model (MSPCNN) in the literature [12]. However, because of the randomness and unpredictability Lian found when the model processes images, a Fire-Controlled MSPCNN model (FC-MSPCNN) based on MSPCNN was proposed in the literature [7]; its structure is given by Eqs. (10)-(12).
Eqs. (16) and (17) relate n, the desired number of firing iterations, to N, the total number of iterations. Nevertheless, the FC-MSPCNN model has limitations, so we propose a Limited-FCMSPCNN (LFC-MSPCNN) model based on FC-MSPCNN. First, the amplitude parameter V in the dynamic threshold is replaced by the variable parameter G, where G denotes the variable parameter and V the amplitude parameter. Second, since the attenuation parameter α varies over a wide range, we change the expression for α. Next, β denotes the link strength and W_ijkl denotes the synaptic weight matrix. Finally, the structure of the Limited-FCMSPCNN model is shown in Figure 2, and its corresponding mathematical equations are Eqs. (18)-(20).
The link strength β in the internal activity term produces more reasonable image segmentation results. To make the set values of the dynamic threshold and the internal activity term more sensible, the amplitude parameter V in the dynamic threshold is further simplified and a new variable parameter G is introduced; V and G are given by Eqs. (23) and (24).
The fading factor in FC-MSPCNN provides a reasonable parameter decay, but its value varies over a wide range, so this model instead adopts the MSPCNN method of setting α. The decay factor α in this model is given by Eq. (25), based on ln(1/σ(S)) and adjusted by the modulation parameter q, where q increases the diversity of variation in the values of the LFC-MSPCNN model.

Image processing experiments and comparative analysis of results
To verify the feasibility and effectiveness of the LFC-MSPCNN model proposed in this paper, we conducted experimental validation and analysis on the image data used by Kaiming He in the literature [13]. Our experimental platform is an Intel(R) Core(TM) i5-6300HQ CPU @ 2.3 GHz with 8.00 GB of RAM, running Windows 10 and MATLAB R2020a. The comparison algorithms used in the experiments are the Dark Channel Prior method [3], the Retinex algorithm [5], the FC-MSPCNN model [7] and the LFC-MSPCNN model proposed in this paper. Finally, we compared and analysed the original images and the processing results of the four algorithms using subjective and objective evaluation metrics; the image processing results are shown in Figure 4.
Images (a4)-(c4) show the results of the method in this paper. As Figure 4 shows, compared with the original foggy-sky images, the Retinex algorithm tends to cause abrupt colour changes and colour distortion in image regions, making the scenes look unrealistic, while the method in this paper is clearly better than the other three methods. Compared with the Dark Channel Prior method and the FC-MSPCNN method, our method defogs more effectively, rendering clearer scenes with natural colours and richer detail. To evaluate the four methods objectively, we introduce three image-evaluation metrics for quantitative analysis: information entropy, standard deviation and mean gradient. A higher information entropy indicates more information in the image; a higher standard deviation indicates better image quality; a higher average gradient indicates a clearer image. According to the data in the three tables, the method proposed in this paper scores higher on all three metrics, so the proposed algorithm performs better on realistic natural foggy scenes, with less noise and higher image quality.
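The three objective metrics can be computed as in the following sketch for a greyscale image in [0, 1]; note that the exact conventions for histogram bins and for the average gradient vary between papers, so this is one common formulation rather than the paper's exact code.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon information entropy of the grey-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_gradient(img):
    """Average gradient: mean magnitude of horizontal/vertical differences."""
    gx = np.diff(img, axis=1)[:-1, :]          # crop so gx and gy align
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def evaluate(img):
    """Return (information entropy, standard deviation, mean gradient)."""
    return entropy(img), float(np.std(img)), mean_gradient(img)
```

For all three metrics, higher values are read as better: more information, more contrast, and sharper detail, respectively.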
In conclusion, in both subjective and quantitative analyses, the proposed defogging algorithm in this paper provides better defogging treatment than the other three algorithms.

Conclusion
In this paper, we propose a parameter-tunable LFC-MSPCNN image defogging algorithm. The algorithm is based on five key parameters: α, β, V, G and q, where the modulation parameter q increases the variability of the model's values. Comparison with the Dark Channel Prior algorithm, the Retinex algorithm and the FC-MSPCNN algorithm shows that the proposed algorithm not only defogs foggy-scene images effectively but also outperforms the other three algorithms on three metrics, namely information entropy, standard deviation and mean gradient, confirming that the proposed defogging algorithm achieves good results in terms of the information content, quality and clarity of the image.
In future work, we will incorporate deep learning to further improve the visual quality of the algorithm's results.