Research on Multi-focal Image Fusion Based on Wavelet Transform

In recent years, wavelet transform theory has received increasing attention in image fusion. This paper first introduces the wavelet transform together with its basic principles and types. Then, for two images taken at different focal settings, a three-level discrete wavelet transform is applied, the low-frequency and high-frequency components are fused with different fusion rules, and the differences between the fusion strategies are compared and analyzed. Experimental results show that wavelet-transform fusion is convenient to apply, fast, and produces realistic images, and that it has good application prospects.


Introduction
When a scene is shot with an optical camera, the targets in the scene lie at different distances from the lens, so the focus changes when different targets are focused on, and the sharpness of the same target therefore varies between images. The resulting images are called multi-focus images.
Using image fusion technology, multiple images of the same scene taken at different focal settings can be fused into a single image that is clear everywhere. Traditional image fusion methods mostly perform arithmetic or statistical operations directly on the images and do not consider their frequency-domain representation. As a frequency-domain analysis method, the wavelet transform has matured in the fields of signal processing and image analysis. It decomposes an original image into low-frequency information, which reflects the overall contour of the image, and high-frequency information, which reflects its details. By fusing the high-frequency and low-frequency components separately, the details of different images can be effectively combined while the overall information of the images is preserved, yielding a fused image that is better suited to human visual perception.
At present, research on image fusion using wavelet transform is well established at home and abroad. Li Chunmei et al. fused images using wavelet decomposition and reconstruction based on the regional average gradient, identified the optimal decomposition level of the algorithm, and showed that the algorithm has certain advantages [1]; Wang Jianping and Zhang Jie analyzed the Mallat algorithm of the wavelet transform and, through simulations of image compression, denoising, enhancement and fusion, concluded that the wavelet transform is practical, simple and promising in the field of image processing [2]; Peng Renjie and Yao Yunxia applied multi-scale transformation to optical images, extracted wavelet coefficients at different frequencies, denoised the images and completed image fusion, obtaining clearer fused images [3].
In this paper, building on existing research, two multi-focus images are fused using a three-level discrete wavelet transform: the low-frequency and high-frequency components are fused with different fusion rules, producing a new image in which all targets are clearly in focus. The different fusion rules are then evaluated, thereby improving the ability to detect and discriminate targets.

Principle of Wavelet Transform
In time-frequency analysis, the traditional Fourier transform can only analyze a signal in the frequency domain and completely discards its time-domain information. For some signals (images) the time domain may carry important information, so the traditional Fourier transform needs to be extended. The short-time Fourier transform, developed on the basis of Fourier analysis, can characterize time-domain and frequency-domain information simultaneously to a certain extent, but it operates at a single fixed resolution, which remains a serious limitation.
The wavelet transform overcomes the single-resolution limitation of the short-time Fourier transform. It uses a waveform of finite length and zero mean, hence the name wavelet. By stretching or compressing (scaling) the wavelet, the frequency-domain content of the signal can be examined; by translating the wavelet to a position, the information of the signal around that point can be obtained. Wavelet processing therefore consists mainly of two operations: scaling and translation.
Through the wavelet transform we obtain the frequency-domain information of a signal while retaining good time resolution. In the high-frequency part of a signal, a lower frequency resolution can be used to obtain better time resolution; in the low-frequency part, where the signal is relatively stable, a lower time resolution can be used to obtain better frequency resolution. In this way the time and frequency information of the signal are taken into account simultaneously, and the different characteristic features of the signal can be extracted effectively. The wavelet transform has been widely used in signal analysis, image processing, classification and recognition, fault analysis and many other fields.
Wavelet transform is divided into continuous wavelet transform and discrete wavelet transform. The continuous wavelet transform (CWT), also known as the integral wavelet transform, uses a basic wavelet (or mother wavelet) ψ to expand an arbitrary function f(t) in the space L²(R); this expansion is called the continuous wavelet transform of f(t) with respect to the basis wavelet ψ. Its expression is

W_f(a, τ) = (1/√a) ∫ f(t) ψ*((t − τ)/a) dt    (1)

where a > 0 is the scale, t is time, τ is the time offset, and ψ(·) is the wavelet basis function. The basis wavelet must satisfy the admissibility (zero-mean) condition

∫ ψ(t) dt = 0.    (2)

In the continuous wavelet transform the parameters a and τ are continuous; to compute the transform on a computer they must first be discretized. Restricting the scale and offset of the wavelet basis function to discrete points, typically a = 2^j and τ = k·2^j, yields the discrete wavelet transform (DWT):

W_f(j, k) = 2^(−j/2) ∫ f(t) ψ*(2^(−j) t − k) dt.    (3)

In practical applications the discrete wavelet transform is more convenient to compute than the continuous one, and the discretization does not lose the important information of the signal, so it has been widely used. In this paper, a three-level discrete wavelet transform is used for image processing.
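To make the discretization concrete, here is a minimal Python sketch (a NumPy illustration, not the paper's MATLAB code) of a single-level discrete Haar transform: the approximation coefficients carry the low-frequency content, the detail coefficients the high-frequency content.

```python
import numpy as np

def haar_dwt(signal):
    """Single-level discrete Haar wavelet transform (orthonormal).

    Approximation = scaled pairwise sums (low frequency),
    detail = scaled pairwise differences (high frequency).
    """
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse single-level Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
cA, cD = haar_dwt(x)
assert np.allclose(haar_idwt(cA, cD), x)  # perfect reconstruction
```

The orthonormal scaling by 1/√2 makes the transform invertible without loss, which matches the statement above that discretization does not lose the important information of the signal.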

Strategy of Image Fusion Based on Wavelet Transform
The wavelet transform decomposes the original image into subimages in different frequency bands, each representing different feature components of the original image. As the number of decomposition levels increases, more and more image details can be displayed. In this paper, one level of the wavelet transform first yields the low-frequency information of the overall contour of the image and the high-frequency information in the horizontal, vertical and diagonal directions, giving the four subimages of a single-level wavelet transform; the low-frequency subimage is then decomposed again, and so on. After the wavelet decomposition is completed, different fusion methods can be adopted for the different feature components to achieve the best fusion effect. The image fusion strategy (method) is the core of image fusion, and the quality of the methods and rules directly affects the speed and quality of the fusion.
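The recursive scheme described above, decompose and then decompose the low-frequency subimage again, can be sketched in Python as follows (a NumPy illustration using the Haar wavelet; the orientation labels of the detail subbands follow one common convention):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: returns the low-frequency
    approximation and the three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    cA = (a + b + c + d) / 2   # overall contour (low frequency)
    cH = (a + b - c - d) / 2   # detail, vertical differences
    cV = (a - b + c - d) / 2   # detail, horizontal differences
    cD = (a - b - c + d) / 2   # detail, diagonal differences
    return cA, (cH, cV, cD)

def wavedec2(img, levels=3):
    """Multi-level decomposition: only the low-frequency subband is
    decomposed again at each level, as described in the text."""
    details = []
    cA = np.asarray(img, dtype=float)
    for _ in range(levels):
        cA, d = haar_dwt2(cA)
        details.append(d)
    return cA, details

img = np.random.rand(480, 640)
cA3, details = wavedec2(img, levels=3)
print(cA3.shape)  # (60, 80): the side lengths halve at each of the 3 levels
```

Because the transform is orthonormal, the total energy of the original image is exactly preserved across all subbands, which is what allows each subband to be fused independently and the image to be rebuilt afterwards.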
Specifically, the main effect of the fusion method is that if a target in image A is more salient than the same target in image B before fusion, the target from image A is retained after fusion; if the two are equally salient, either one can be selected or their average can be taken. In this way, the wavelet coefficients of salient objects in images A and B dominate at their respective resolution levels, so the prominent targets of both image A and image B are preserved in the final fused image.
At present, fusion rules in the wavelet domain are mainly divided into pixel-based fusion rules and region-based fusion rules (as shown in Figure 4). Pixel-based fusion rules require the images to be strictly registered before fusion, otherwise the results will be unsatisfactory, which increases the difficulty of preprocessing. Region-based fusion rules are more widely applicable in practice because they fully consider the relationship of a pixel with its neighbours and are less sensitive to edge changes. Common pixel-based fusion rules include:
b) taking the maximum absolute value: compare the wavelet coefficients of the two (or more) images and select the coefficient with the largest absolute value as the coefficient of the fused image;
c) weighted average: sum the wavelet coefficients of the two (or more) images with certain weighting coefficients.
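The two pixel-based rules can be written in a few lines of NumPy (an illustrative sketch; the equal-weight average below is one possible choice of weighting coefficients):

```python
import numpy as np

def fuse_max_abs(c1, c2):
    """Maximum absolute value: keep, at each position, the coefficient
    with the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_weighted(c1, c2, w=0.5):
    """Weighted average of corresponding wavelet coefficients."""
    return w * c1 + (1.0 - w) * c2

c1 = np.array([[3.0, -1.0], [0.5, -4.0]])
c2 = np.array([[-2.0, 2.0], [1.0, 3.0]])
print(fuse_max_abs(c1, c2))   # [[ 3.    2.  ] [ 1.   -4.  ]]
print(fuse_weighted(c1, c2))  # [[ 0.5   0.5 ] [ 0.75 -0.5 ]]
```

Note that the max-absolute-value rule keeps the sign of the winning coefficient, which matters because detail coefficients are signed differences.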

Fusion Rules Based on Regional Features
a) Gradient-based method: compare the gradients of corresponding regions and select the region with the larger gradient for the fused image. If the gradients of the two images differ little, the weighted average method is used. The regional average gradient can be computed as

G = (1/(M·N)) ∑_{p=1}^{M} ∑_{q=1}^{N} √[ (ΔI_x(p,q)² + ΔI_y(p,q)²) / 2 ]    (4)

where ΔI_x and ΔI_y are the first differences of the region in the horizontal and vertical directions.
b) Method based on local variance: compare the local variances of corresponding regions and select the region with the larger local variance for the fused image. If the local variances of the two images differ little, the weighted average method is used. The regional variance can be computed as

V = (1/(M·N)) ∑_{p=1}^{M} ∑_{q=1}^{N} [ I(p,q) − m ]²    (5)

where m is the local mean, defined as

m = (1/(M·N)) ∑_{p=1}^{M} ∑_{q=1}^{N} I(p,q).    (6)

c) Method based on local energy: compare the local energies of corresponding regions and select the region with the larger local energy for the fused image. If the local energies of the two images differ little, the weighted average method is used.
For an image, energy corresponds to the gray value: the higher the gray value, the greater the energy, and the "whiter" the corresponding pixel.
The regional energy can be computed as

E(i, j) = ∑_p ∑_q w(p, q) · |I(i + p, j + q)|²    (7)

where w is the weight matrix over the neighbourhood.
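The region-based rules above share a common pattern: compute a local statistic in a sliding window, pick the coefficient whose region scores higher, and fall back to averaging where the two scores are close. A Python sketch follows; the 3×3 uniform window w(p, q) = 1 and the 5% closeness threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def local_stat(coeff, size=3, stat="variance"):
    """Local variance (Eq. 5) or local energy (Eq. 7) of each
    size x size neighbourhood, with uniform weights and edge padding."""
    pad = size // 2
    c = np.pad(np.asarray(coeff, dtype=float), pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(c, (size, size))
    if stat == "variance":
        return windows.var(axis=(-2, -1))
    return (windows ** 2).sum(axis=(-2, -1))

def fuse_region(c1, c2, stat="variance", threshold=0.05):
    """Select the coefficient from the region with the larger statistic;
    fall back to the plain average where the two regions score similarly."""
    s1 = local_stat(c1, stat=stat)
    s2 = local_stat(c2, stat=stat)
    close = np.abs(s1 - s2) <= threshold * np.maximum(s1, s2)
    fused = np.where(s1 >= s2, c1, c2)
    return np.where(close, (c1 + c2) / 2, fused)
```

Swapping `stat` between `"variance"` and `"energy"` switches between rules b) and c); rule a) has the same structure with the average gradient of Eq. (4) as the statistic.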

Experimental Method
In this paper, two 480×640 images with different focal planes, shown in Figure 2, are selected for fusion by wavelet transform. The Haar wavelet is used as the basis function, and each image is decomposed by a three-level discrete wavelet transform into a low-frequency subimage and high-frequency subimages. The weighted-average and maximum-absolute-value strategies are applied to the low-frequency components, and the gradient-based, local-variance-based and local-energy-based strategies are applied to the high-frequency components; the fused image is then obtained by the inverse wavelet transform. The differences between the fusion rules in multi-focus image fusion are compared. The flow of the image fusion algorithm in this paper is shown in Figure 3.
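As an illustration of the overall flow in Figure 3, the following self-contained Python sketch fuses two images with a single-level Haar transform (the paper's experiments use MATLAB and three decomposition levels; extending the sketch means decomposing the low-frequency subband recursively). It uses the weighted average for the low-frequency components, one of the low-frequency rules above, and for brevity fuses the high-frequency subbands by maximum absolute value rather than the region-based rules of the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2,
            ((a + b - c - d) / 2, (a - b + c - d) / 2, (a - b - c + d) / 2))

def haar_idwt2(cA, details):
    """Inverse of haar_dwt2: rebuild the image from the four subbands."""
    cH, cV, cD = details
    h, w = cA.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (cA + cH + cV + cD) / 2
    img[0::2, 1::2] = (cA + cH - cV - cD) / 2
    img[1::2, 0::2] = (cA - cH + cV - cD) / 2
    img[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return img

def fuse(img1, img2):
    """Decompose both images, average the low frequencies, keep the
    larger-magnitude high-frequency coefficients, then invert."""
    cA1, d1 = haar_dwt2(np.asarray(img1, dtype=float))
    cA2, d2 = haar_dwt2(np.asarray(img2, dtype=float))
    cA = (cA1 + cA2) / 2
    d = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
    return haar_idwt2(cA, d)
```

Selecting the larger-magnitude detail coefficients is what transfers the in-focus (high-detail) regions of each source image into the fused result.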

Comparison of Effects of Different Fusion Strategies
The fused images obtained with the different fusion rules are all clear, and their quality is greatly improved compared with the original images, but the quality differences between the fusion rules are small.

Comparative Analysis of Different Fusion Evaluation Methods
In this paper, information entropy, standard deviation, clarity and spatial frequency are used to evaluate the fusion results objectively. The experimental results are shown in Table 1. The evaluation results differ little between the fusion rules. For the low-frequency components, the maximum-absolute-value method performs better than the weighted average, while for the high-frequency components the ranking of the fusion methods varies with the evaluation metric. On the whole, there is little difference among the three high-frequency fusion rules.
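The four objective metrics can be computed as follows (a Python sketch using common definitions of these metrics; the paper does not give its exact formulas, so the definitions below are assumptions):

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def clarity(img):
    """Average gradient ('clarity'): mean magnitude of the
    horizontal/vertical first differences."""
    f = img.astype(float)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def spatial_frequency(img):
    """Spatial frequency: RMS of the row and column first differences."""
    f = img.astype(float)
    rf = np.mean(np.diff(f, axis=1) ** 2)
    cf = np.mean(np.diff(f, axis=0) ** 2)
    return np.sqrt(rf + cf)

# Horizontal ramp image: 64 equally frequent gray levels.
ramp = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
print(entropy(ramp))            # 6.0 bits (log2 of 64 levels)
print(ramp.std())               # standard deviation of the gray levels
print(clarity(ramp))            # 1/sqrt(2): unit steps in x only
print(spatial_frequency(ramp))  # 1.0: unit row differences, zero column
```

Higher values of all four metrics are normally read as better fusion: more information retained, more contrast, and sharper detail.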

Summary
In this paper, several common wavelet-transform image fusion methods have been implemented in MATLAB experiments with good results, but some shortcomings remain. Although the overall quality of the fused image is much better than that of the two original images, some details are not fused well and are lost. Only the Haar basis function was used for the wavelet transform; other basis functions were not compared, and using them might yield fused images of higher quality. There is also a lack of correspondence between the evaluation criteria and the fusion methods, which affects the evaluation of image fusion to a certain extent. Finally, only common wavelet-transform fusion methods were compared, with little innovation. On the whole, however, the wavelet transform has unique advantages in image fusion, especially multi-focus image fusion, and deserves further and deeper research.