A Dynamic Target Recognition Method Based on Correlation Tracking Algorithm

Target recognition is an important part of the field of image processing: its quality directly determines the success or failure of subsequent processing. This paper proposes a dynamic target recognition method based on a correlation tracking algorithm. The correlation tracking algorithm only needs to match the raw grey-level data of the relevant images; no feature extraction or segmentation is required. All the information in the image sequence is therefore preserved, and the algorithm has further advantages: no complex pre-processing of the original images is required, and key points can be matched directly on the grey levels, which greatly improves the feasibility of implementation. Moreover, matching can still be carried out at a low signal-to-noise ratio, so there is no strict requirement on the quality of the image sequence, and the local anti-interference ability of the approach is strong. The correlation tracking algorithm can be used to predict the trajectory of a moving object in real scenes. The experimental results show that the proposed method can detect the objects in an image by rendering the object regions and the background in different colors. It can be concluded that the proposed method successfully tackles the object recognition task in most cases.


Introduction
Predicting the trajectory of a moving object means segmenting the changing region from the background of an acquired image sequence. The most basic task of trajectory prediction based on monocular vision is to detect the change information in a sequence of images and reduce it to digital signals, so as to realize specific recognition and tracking of the research object. Trajectory prediction and tracking of a moving object is also an important part of video processing and digital image technology, and the detection and segmentation of the moving object has a direct influence on the classification and tracking of the research object [1][2][3][4].
Monocular-vision measurement and tracking prediction of a moving object mean that a single vision sensor, such as a video camera or digital camera, is used to shoot images for purposeful measurement. The measured data are analyzed to obtain the trajectory of the moving object, so that its orientation at the next moment can be predicted from the movement trend. Image sequences of a moving object shot by a digital camera or video camera fall into two types. In the first, the background of the image sequence is relatively static. This is the common case, because the monocular vision measuring system is usually stationary during measurement, so the background of the shot sequence is static under normal circumstances, for instance a fixed camera shooting an intersection. In the second, the background of the image sequence is in relative motion, because the monocular vision measurement system itself moves during measurement; the background of the obtained sequence is then constantly changing, for example the road scene ahead of a vehicle shot by an on-board camera. For these different situations, the available algorithms for detecting moving objects include background subtraction, adjacent frame differencing, optical flow and so on [5].
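Among the algorithms listed above, adjacent frame differencing is the simplest to illustrate. The following is a minimal sketch (the function name, threshold value, and array shapes are illustrative assumptions, not part of the original method): pixels whose grey level changes by more than a threshold between consecutive frames are marked as belonging to the moving region.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Adjacent frame differencing: mark pixels whose grey level changed
    by more than `threshold` between two consecutive frames.
    Returns a binary mask (1 = moving region, 0 = static background)."""
    # Cast to a signed type so the subtraction cannot wrap around for uint8 input.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

This works only for the first situation described above (static background); for a moving camera, the background motion itself would be flagged as change.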
A method based on monocular vision requires only one vision sensor, which avoids the difficulty of stereo matching in stereo vision, its small field of view and other defects. In addition, monocular vision measurement is easy to operate and simple in structure, so it has attracted more and more researchers' attention. When monocular vision is used to measure the trajectory of an object, however, there are some difficulties: the processing speed is relatively slow, the focusing method places high demands on hardware, and accurate localization is hard to calibrate in the defocusing method. In a monocular vision measuring system for a moving object, because the relative positions of the lens and the object vary and the image sizes on the CCD vary with them, the image size alone cannot determine the actual size of the object. The image size, the distance between the object and the lens, and the focal length used in imaging must all be taken into account to determine the actual size of the object. Unless the size of the actual object is provided, the specific geometry relating the object to the monocular vision measurement system cannot be determined from the size of its image alone. So when the trajectory of a moving object is predicted, the actual size of the object to be measured must be provided first [6][7][8].

Optical Basis
The optical system of a video camera or digital camera images a physical point in space onto a visible plane, forming an image of the spatial object on a specific plane. For example, in Fig. 1, A′B′ serves as the image plane of AB, hereafter called the scene plane, and AB is called the datum plane, where point C is an arbitrary point in space; it can be calculated from the Gauss formula that C′ is the conjugate image point of the space point C. Therefore, when a beam of light is emitted from point C and passes through the lens, a diffuse light spot a′b′ is formed on the scene plane. It can be shown from the diffraction theory of physical optics that this diffuse light spot resembles an Airy disk: the intensity of the "Airy disk" reaches its maximum at the center and gradually decreases outward [9][10]. In practice, a space object to be detected can be regarded as a composition of object points; after passing through the optical system of the video camera or digital camera, a blurred image consisting of diffuse "Airy disks" is formed on the CCD plane. At the edge of the image, the different brightness of the light spots and the different grey levels provide the conditions for post-processing of image blur.
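The Gauss formula mentioned above relates object distance u, image distance v, and focal length f by 1/f = 1/u + 1/v (thin-lens convention with both distances positive). A small sketch of solving it for the image distance (the function name is an illustrative assumption; the figure's notation is not reproduced here):

```python
def image_distance(f, u):
    """Solve the Gaussian thin-lens formula 1/f = 1/u + 1/v for the image
    distance v, given focal length f and object distance u (same units)."""
    if u == f:
        raise ValueError("object at the focal plane: image forms at infinity")
    return 1.0 / (1.0 / f - 1.0 / u)
```

A point not on the datum plane has a conjugate image point off the scene plane, which is why it appears as a diffuse spot rather than a sharp point.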

Organization of the Text
This paper proposes a dynamic target recognition method based on a correlation tracking algorithm. The algorithm only needs to match the raw grey-level data of the relevant images; no feature extraction or segmentation is required.

Basic Principles
The actual prediction of a moving object consists in determining the specific location of the moving object in space at the corresponding time, which can be obtained through an image matching algorithm in the monocular vision measuring system. Image matching here means that a series of template images containing the target is matched against each frame according to a similarity criterion, so as to obtain the trajectory of the research object and its state at the next moment; this is the basic principle of the image matching algorithm.
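The matching step above can be sketched with normalized cross-correlation as the similarity criterion (one common choice; the paper does not fix a specific measure, so the function names and the exhaustive search below are illustrative assumptions):

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation between a template and an equally
    sized search window of grey levels; 1.0 means a perfect match."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return float((t * w).sum() / denom) if denom > 0 else 0.0

def match(template, image):
    """Slide the template over the whole image and return the top-left
    (row, col) of the best-matching window."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = -2.0, (0, 0)  # NCC lies in [-1, 1], so -2 is below any score
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = ncc(template, image[y:y + th, x:x + tw])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Note that the search is exhaustive over every window position, which foreshadows the computational cost discussed under the restrictions below.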

Realization Conditions
The correlation tracking algorithm only needs to match the raw grey-level data of the relevant images; no feature extraction or segmentation is required. All the information in the image sequence is therefore preserved, and the algorithm has several advantages: for example, no complex pre-processing of the original images is required, and key points can be matched directly on the grey levels, which greatly improves the feasibility of implementation. Moreover, matching can still be carried out at a low signal-to-noise ratio, so there is no strict requirement on the quality of the image sequence, and the local anti-interference ability of the approach is strong. The correlation tracking algorithm can therefore be used to predict the trajectory of a moving object in real scenes.
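A minimal sketch of tracking across a grey-level frame sequence, under the assumption (not stated in the paper) that the target moves little between frames, so each frame only needs a small local search around the previous position rather than a full-image scan:

```python
import numpy as np

def _ncc(a, b):
    """Normalized cross-correlation of two equally sized grey-level patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def track(frames, template, start, radius=4):
    """Follow the template through a frame sequence, searching only a
    (2*radius + 1)^2 neighbourhood around the previous position.
    Returns the list of (row, col) positions, one per frame."""
    th, tw = template.shape
    y, x = start
    path = [start]
    for frame in frames[1:]:
        ih, iw = frame.shape
        best, best_pos = -2.0, (y, x)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny <= ih - th and 0 <= nx <= iw - tw:
                    s = _ncc(template, frame[ny:ny + th, nx:nx + tw])
                    if s > best:
                        best, best_pos = s, (ny, nx)
        y, x = best_pos
        path.append(best_pos)
    return path
```

The local search window is one practical way to keep the matching cost manageable; enlarging `radius` trades speed for robustness to fast motion.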

Some of the Restrictions
Of course, the correlation tracking algorithm also has problems. A large amount of computation is required in the matching process, which places a high demand on the processor; in particular, when the search area is large, it may be impossible to obtain the ideal trajectory of the moving object or to achieve the ideal tracking effect. Moreover, during the flight of a moving object, the image acquisition equipment introduces errors of its own, which accumulate through the processing chain and lead to a discrepancy between the predicted and the actual trajectory. These unfavorable factors affect the stability of the correlation tracking algorithm.
A block diagram of the basic principle of the correlation tracking algorithm is shown in Fig. 2 below.

Results
All the experimental schemes above use quantitative analysis for performance evaluation; in the following, we provide several examples of the object recognition results of the proposed method in Fig. 3. It can be seen that the proposed method detects the objects in an image by rendering the object regions and the background in different colors. Combining the 18 images in Fig. 3, it can be concluded that the proposed method successfully tackles the object recognition task in most cases.

Conclusion
The trajectory is predicted from data, so in the absence of data the method operates in a blind zone and prediction relies only on the existing movement trend. In actual motion, if the angle between the velocity direction and the horizontal plane at the falling moment is not taken into account when the falling-point coordinates are calculated, the calculated results will be smaller than the actual values. Furthermore, during curve fitting, some key points are easily masked by ambiguous surrounding data, resulting in deviation.