Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

Registration of simultaneous polarization images is the prerequisite for subsequent image fusion. However, during all-weather shooting the exposure time of the polarization camera must be kept constant, so polarization images captured under low illumination can be so dark that the SURF algorithm cannot extract feature points and the registration fails. This paper therefore proposes an improved SURF algorithm. First, a luminance operator is used to raise the overall brightness of the low illumination image. An integral image is then created, the Hessian matrix is used to extract interest points, the main direction of each feature point is determined, and the Haar wavelet responses in the X and Y directions are computed to obtain the SURF descriptor. RANSAC is then applied for precise matching; it eliminates wrong matching pairs and improves the matching accuracy. Finally, the brightness of the registered polarization images is restored, so the polarization information is not affected. Results show that the improved SURF algorithm performs well under low illumination conditions.


Introduction
Polarization imaging is a new technology in the field of optics, and its applications have received much attention both at home and abroad. Polarization imaging provides additional dimensions of target information and is a frontier technology of great application value [1]. Polarization can reduce the influence of cluttered backgrounds and helps identify targets effectively. Since a simultaneous polarization imaging system obtains polarization intensity images of the same scene in different polarization directions with a single exposure [2], it achieves high detection accuracy, is unaffected by the motion of the target or of the system itself, and offers fast response, high stability and high reliability; it is the inevitable development trend of polarization imaging systems.
Image registration is the process of aligning, by some algorithm, multiple images of the same scene captured by different cameras; it amounts to finding the affine transformation between the images to be registered and a reference image. It can effectively handle noise, scale change and relative rotation, and is widely used in image stitching, image fusion and other fields.
When four-angle polarization cameras acquire images of the same target, there are geometric differences between the images, such as translation and rotation. These differences cannot be removed by manually adjusting the camera positions, so image registration is required.
So far, image registration algorithms fall into three main categories: transform-domain algorithms, gray-level algorithms and feature-based algorithms, of which the feature-based category is the most widely used. David Lowe proposed the SIFT (Scale-Invariant Feature Transform) algorithm in 1999; it provides robust matching under illumination change, rotation and scaling, but computational complexity and time consumption are its disadvantages [3,4]. The SURF (Speeded-Up Robust Features) algorithm was proposed by Bay et al. in 2006. SURF is superior to SIFT in repeatability, distinctiveness and robustness; in addition, because SURF combines the integral image, the Hessian matrix and two-dimensional Haar wavelet responses, it is much faster, almost three times the speed of SIFT [5]. Therefore, the SURF algorithm is used to register the simultaneous polarization images in this paper.
However, during all-weather shooting the exposure time of the polarization camera must be kept constant, so polarization images captured under low illumination can be so dark that the SURF algorithm cannot extract feature points and the registration cannot be completed; this paper therefore proposes an improved SURF algorithm. In the imaging system, the incident light passes through the optical lens and is split by a general beam-splitter prism into a transmitted part and a reflected part. The transmitted part is divided by a polarizing beam splitter rotated by 45 degrees into two beams with mutually perpendicular polarization directions (45 and 135 degrees), which arrive at CCD0 and CCD1 respectively [6]. The reflected part is again split into transmitted and reflected components by a general beam-splitter prism: the transmitted component passes through a quarter-wave plate and a polarizer to obtain the 0 degree component, reaching CCD2, and the reflected component passes through a quarter-wave plate to obtain the 90 degree component, reaching CCD3. Polarization intensity images of the target in the four polarization directions 0, 45, 90 and 135 degrees are thus obtained simultaneously on the four detectors [7].

Brightness Adjustment
The experimental data are four polarization-angle images of 1392*1040 pixels, output in BMP format. The original polarization images obtained by the cameras are color images, but the algorithm operates on grayscale double-precision floating-point images, so the color images are first converted with the MATLAB functions rgb2gray and im2double to obtain low illumination grayscale images. Let I denote the low illumination image [8] to be processed, i.e. the darker image, and T the relatively bright image after processing; both T and I take values in [0, 1]. The idea of the algorithm is to increase the brightness of the image by a non-linear overlay. If the result is not satisfactory, the operation can be iterated several times; in this experiment one iteration achieves the desired effect. A control parameter K with values in [0, 1] can also be added to control the strength of the brightness increase.
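The paper does not reproduce the MATLAB code for this step, so the following Python/NumPy sketch is only one plausible reading: it assumes the non-linear overlay T = 1 - (1 - I)^2, which is the inverse of the recovery formula I' = 1 - sqrt(1 - T') given in the Brightness Recovery section, and treats the control parameter K as a linear blend between the original and boosted images (a hypothetical interpretation).

```python
import numpy as np

def enhance_brightness(I, k=1.0, iterations=1):
    """Non-linear overlay brightness boost (sketch).

    I : grayscale image as floats in [0, 1].
    k : strength in [0, 1]; k = 1 applies the full overlay T = 1 - (1 - I)^2.
    iterations : the overlay can be repeated if one pass is not bright enough.
    """
    T = I.astype(np.float64)
    for _ in range(iterations):
        boosted = 1.0 - (1.0 - T) ** 2       # equivalently T * (2 - T)
        T = (1.0 - k) * T + k * boosted      # hypothetical blend with strength k
    return T
```

Because the overlay maps [0, 1] to [0, 1] monotonically and always satisfies T >= I, dark pixels are lifted strongly while bright pixels are barely changed.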

Automatic Detection and Description of SURF
The SURF algorithm carries out registration based on interest feature points in scale space; the overall flow is shown in Fig. 2.

Create Integral Images
Completing the image convolutions by means of an integral image significantly improves the computation speed. The integral image allows fast evaluation of box-type convolution filters and is defined as follows: let X = (x, y) be a pixel of an image I; then the integral image I_Σ(X) represents the sum of all pixels in the rectangle whose diagonal vertices are the origin and the point X:

I_Σ(X) = Σ_{i=0}^{x} Σ_{j=0}^{y} I(i, j)
The biggest advantage of the integral image is that the image is traversed only once and the computation is small. Once the integral image has been calculated, the sum of intensities over any rectangular region can be obtained directly. Assume the four vertices of the rectangular region are A, B, C and D, as shown in Fig. 3; then, with that labeling, the sum of the intensity values over the region is

Σ = I_Σ(A) + I_Σ(D) − I_Σ(B) − I_Σ(C)

It should be noted that when a coordinate lies on the top or left border of the image, the corresponding integral value is taken to be zero.
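The two operations above (one-pass construction, constant-time box sums) can be sketched as follows; the function names are illustrative, not from the paper.

```python
import numpy as np

def integral_image(img):
    # I_sum(x, y) = sum of all pixels in the rectangle from the origin to (x, y)
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] in O(1) using the integral image.

    Integral values with a coordinate above the top or left of the image
    are taken as zero, as noted in the text.
    """
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

This constant cost per box sum, independent of box size, is what makes the box filters of the next section cheap at every scale.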

Fast-Hessian Matrix for Extracting Feature Points
Because the Hessian matrix offers good accuracy, it is used to detect the feature points. For a point X = (x, y) at scale σ, the Hessian matrix is defined as

H(X, σ) = [ L_xx(X, σ)  L_xy(X, σ) ; L_xy(X, σ)  L_yy(X, σ) ]

where L_xx(X, σ) is the convolution of the Gaussian second-order derivative with the image at point X, and similarly for L_xy and L_yy. In practice, however, the Gaussian filters must be discretized, and as the scale σ increases the details of the image are inevitably filtered out. Bay proposed approximating these filters with box filters, constructing a Fast-Hessian matrix. This improves computational efficiency, fundamentally because, thanks to the integral image, the cost of computing the box-filter convolutions is not affected by the filter size. Denoting the box-filter responses by D_xx, D_yy and D_xy, the determinant of the Fast-Hessian matrix is

det(H_approx) = D_xx · D_yy − (0.9 · D_xy)²

This determinant is used to detect extreme points: det(H_approx) represents the box-filter response in the neighborhood of point X. The trace of the Hessian (the Laplacian) is also computed and stored for later matching, and feature points are classified according to its sign. A point is considered a local extremum if and only if the determinant value is greater than zero, i.e. both eigenvalues have the same sign. In the three-dimensional space (x, y, σ), non-maximum suppression (NMS) is performed over each 3*3*3 region: only extreme points whose response is larger than that of all 26 neighbors are kept as candidate feature points, which yields stable SURF feature point positions and scale values. To localize the candidate feature points precisely (sub-pixel positioning), interpolation is carried out in image space and scale space to obtain the interpolation offset and the step distance between the filters; finally, the Laplacian response map is produced from the feature point information.
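The 3*3*3 non-maximum suppression step described above can be sketched as follows; the function name and the array layout (scale, y, x) are assumptions for illustration.

```python
import numpy as np

def nms_3x3x3(responses, threshold=0.0):
    """Keep points whose det(H_approx) exceeds all 26 neighbors.

    responses : 3-D array of Fast-Hessian determinant values,
                indexed as (scale, y, x).
    Returns a list of (scale, y, x) candidate feature points.
    """
    s, h, w = responses.shape
    keypoints = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                v = responses[k, i, j]
                if v <= threshold:
                    continue
                cube = responses[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                # strict maximum over the 26 neighbors in (x, y, sigma)
                if v >= cube.max() and (cube == v).sum() == 1:
                    keypoints.append((k, i, j))
    return keypoints
```

In a real implementation the surviving candidates would then be refined by the interpolation step mentioned in the text.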

Get the SURF Descriptor
The descriptor is computed in a square region centered on the feature point and aligned with the previously determined main direction. The concrete procedure is: first determine a square region of size 20s (s being the feature point's scale); to keep the extracted feature vector rotationally invariant, rotate the square region so that it is aligned with the main direction; then subdivide the square region into a 4*4 grid of sub-regions and, in each sub-region, sum the Haar wavelet responses in the X and Y directions together with their absolute values (∑dx, ∑dy, ∑|dx|, ∑|dy|).
The statistics are also weighted by a Gaussian function. In this way each sub-region yields a 4-dimensional descriptor [9]

V = (∑dx, ∑dy, ∑|dx|, ∑|dy|)    (8)

so a 4*4*4 = 64-dimensional descriptor vector is obtained. In fact, the 64-dimensional descriptor can be extended to 128 dimensions, making the SURF descriptor more distinctive, but at the expense of increased matching time; this trade-off must be weighed in practice.

RANSAC Removes Mismatching Points
After the above operations, the matched feature points often still include wrong pairs. The feature point descriptors are represented as matrices and sorted in ascending order of vector distance. To improve the matching accuracy, the RANSAC algorithm is used to optimize the matches and obtain exact matching points; finally, the affine transformation matrix is used to realize the registration. The general procedure is as follows: first, randomly select four points as the initial model, checking that no three of them are collinear and that the selected data have not been used before; then compute the affine transformation matrix to obtain a consensus set. The last step is to traverse all the feature points, compute the Euclidean distance for each, and check it against the error tolerance.
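A minimal sketch of this RANSAC loop for an affine model is given below. It differs from the paper's description in one detail for simplicity: it samples the three-point minimum needed to determine an affine transform (rather than four), rejects collinear samples via the determinant, and scores each hypothesis by the Euclidean reprojection error. All names are illustrative.

```python
import numpy as np

def ransac_affine(src, dst, n_iters=1000, tol=3.0):
    """Estimate a 2x3 affine matrix from matched points with RANSAC.

    src, dst : (N, 2) arrays of matched feature point coordinates.
    Returns (best 2x3 affine matrix, boolean inlier mask).
    """
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    best_M = None
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    for _ in range(n_iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = np.hstack([src[idx], np.ones((3, 1))])
        if abs(np.linalg.det(A)) < 1e-6:               # collinear sample: skip
            continue
        M = np.linalg.solve(A, dst[idx]).T             # exact fit, 2x3 matrix
        err = np.linalg.norm(src_h @ M.T - dst, axis=1)  # Euclidean distances
        inliers = err < tol                            # error-tolerance check
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_M = inliers, M
    return best_M, best_inliers
```

The inlier mask identifies the wrong matches to discard; the final affine matrix would then be applied to warp the image to be registered onto the reference.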

Brightness Recovery
The registered images are obtained through the improved SURF algorithm. Let T' denote the image after registration; since the SURF registration does not change brightness, T' is consistent in brightness with T, and T' likewise takes values in [0, 1]. Let I' denote the polarization image after restoration of the luminance, whose brightness is consistent with I. The restoration formula is

I' = 1 − sqrt(1 − T')    (10)
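As a Python/NumPy sketch of formula (10) (the paper gives it as MATLAB; the clipping is an added safeguard against values slightly outside [0, 1]):

```python
import numpy as np

def recover_brightness(T):
    # Inverse of the non-linear overlay: I' = 1 - sqrt(1 - T')
    return 1.0 - np.sqrt(1.0 - np.clip(T, 0.0, 1.0))
```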

Experimental Results
The hardware environment is an NI PMA-1115 with an embedded processor (model 8135), configured with an 8-core 2.3 GHz Intel i7 CPU. The software development tool is the LabVIEW programming environment, with MATLAB script nodes called during programming.
The four original low illumination polarization-angle grayscale images are shown in Fig. 4; as can be seen, they are almost entirely black. The brightness operator is added to the SURF algorithm to enhance the brightness; the results are shown in Fig. 5. The 0 degree polarization image is then used as the reference image, and the 45, 90 and 135 degree polarization-angle images are automatically registered to it with SURF. Fig. 7 shows a feature point matching graph. The final registration results are shown in Fig. 8; it is easy to see that, except for the reference image, the other three images have a black border, so cropping is required before proceeding to the next step; the imcrop function of MATLAB is used here. The results after cropping and brightness recovery are shown in Fig. 9.

Evaluation
After automatic registration, the images are consistent at each pixel. The image difference method is commonly used to evaluate the registration effect: the registered image is subtracted from the reference image to obtain a difference image, and the clearer the outlines in the difference image, the better the registration. This paper uses the DOP (degree of polarization) image as an indirect indicator, as shown in Fig. 10. As can be seen from Fig. 10, there is no ghosting, so the registration result is very good, and after processing, the contour of the target (two rectangular target plates) becomes very clear.

Conclusion
Experiments show that the algorithm also has good applicability to simultaneous polarization image processing under low illumination while meeting the accuracy requirements, with the advantages of a small amount of calculation and fast operation. The experiment requires three registration operations in total; the total registration time is always within 30 s, and registration is performed automatically, basically meeting real-time requirements. Future work will focus on further increasing the speed of the algorithm so that the improved SURF algorithm truly meets real-time requirements.