Research on Fast Detection and Recognition of Object Features Based on Feature Line Matching
ICEMCE 2020, Journal of Physics: Conference Series 1601 (2020) 052013, doi:10.1088/1742-6596/1601/5/052013

As one of the most prominent and important features of visual information, feature lines exist widely in real objects and target scenes. In the proposed method, representation features are constructed for the pixels on each feature line, and the pixels are classified according to these feature values, so that the feature lines of different targets are separated. The local gradient information of the points on a straight line segment is extracted to form a midpoint descriptor, which allows the segment to be matched accurately. Edge detection is then performed, the edge lines of the object features are extracted, and the principal edge points are obtained according to a distance criterion. The research shows that the feature line matching method greatly reduces the computational cost of matching and markedly improves matching accuracy. Sample construction is flexible: sample forms of different levels can be selected according to the application requirements, and the matching and recognition are robust. The method is suitable for fast detection and recognition of object features, including product feature detection and recognition.


Introduction
In images with distinct features, points, textures, lines, and regions can all be used for image matching. Texture is an unstable image feature, because it is strongly influenced by external conditions such as illumination. Regions, as geometric features of an image, are not easily affected by environmental factors [1] and are comparatively stable. Feature detection plays an important role in image segmentation, computer vision, and pattern recognition; it is the first problem to be solved in image processing and pattern recognition [2]. Among the many features an image possesses, line features are undoubtedly an important clue for human visual perception. At the current stage of image processing and machine vision, the first basic processing step is line feature detection, which effectively preserves the structural information of the boundary shape of the target object [3]. Matching results become unstable when there are even slight differences between images; feature-based matching is therefore the most widely used approach. When the target undergoes changes in illumination, translation, and so on, the extracted features are affected to some extent. Hence, it is crucial to find a matching algorithm that is highly tolerant of noise, illumination, and viewpoint change [4].
Image features are attributes that can serve as markers in an image, and edges are one such feature. A discontinuity in image gray level is called a local edge or edge element [5]. Edge elements connected along the tangent direction form longer line segments, called edges or boundaries. Because edges reflect the physical boundary of an object or region in the image, edge detection has long received attention and study [6]. Different matching algorithms are adopted according to the requirements of the application. For example, the edge information of an object feature image alone can be used in correlation processing to detect an arbitrary two-dimensional object feature image [7]; this approach developed from complex convolution-based processing methods. Another common method for scene matching is based on the invariant moments of inertia of object feature images [8]; such moments are little influenced by external environmental factors and constitute a stable image feature. Corner features are relatively simple image features that are easy to obtain in practice, but their matching is complicated and difficult; in particular, when the image contains much noise, the matching effect is very poor [9]. In contrast, line features are more accurate and stable and are more representative of the information contained in an image; they have therefore long been used in image matching [10]. Existing methods for region feature matching are, moreover, rare: region feature extraction is more complicated than straight-line extraction, and the irregular shapes of regions also limit progress in region feature matching. Feature extraction determines the accuracy and reliability of feature matching [11].
In order to extract and classify feature lines in multi-target scenes, this paper focuses on template matching, the top-hat and bottom-hat transformations of mathematical morphology, and a classification method based on pixel representation features [12]. First, the image is pre-processed by filtering, denoising, and edge extraction; the results of these methods are then verified by experiments. A support vector machine does not require complex mathematical reasoning: it only needs to map a vector into a higher-dimensional feature space for classification. The key to implementing an SVM is choosing an appropriate kernel function. Low-dimensional vectors are often difficult to separate and generally need to be mapped into a high-dimensional space; this mapping increases computational complexity, but kernel functions resolve the problem [13]. Therefore, as long as an appropriate kernel function is selected, the classification function in the high-dimensional space can be obtained. Many methods detect points of high curvature on the edge curve. The curvature of a curve is the rate of rotation of the tangent direction with respect to arc length (defined by differentiation) and indicates the extent to which the curve deviates from a straight line [14]. Commonly used features include numeric or symbolic descriptions such as feature points (corner points, inflection points, edge points, etc.) and lines (generally boundary lines, including straight lines and curves) [15]. Feature extraction can be implemented by feature extraction operators, which can be divided into point feature operators and line feature operators [16].
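As an illustration of this curvature definition, the tangent-rotation rate can be approximated on a discrete edge curve. The sketch below is not the paper's implementation; `turning_curvature` is a hypothetical helper name.

```python
import math

def turning_curvature(points):
    """Approximate curvature at each interior point of a polyline as the
    change in tangent direction divided by the local arc length."""
    curvatures = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming tangent direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing tangent direction
        dtheta = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        ds = 0.5 * (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x1, y2 - y1))
        curvatures.append(dtheta / ds)
    return curvatures

# A straight edge has (near-)zero curvature; a right-angle corner does not.
straight = turning_curvature([(0, 0), (1, 0), (2, 0), (3, 0)])
corner = turning_curvature([(0, 0), (1, 0), (1, 1)])
```

Points whose curvature magnitude exceeds a threshold would then be reported as the high-curvature edge points mentioned above.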
The samples are all character-data samples: the amount of data is small, the storage space required is small, and the construction process is simple. Operation is efficient, sample construction is flexible, sample forms of different levels can be selected according to the application requirements, robustness is good, and erroneous detection and recognition rarely occur [17].
In this paper, we propose a feature line matching algorithm for fast detection and recognition of object features. In summary, our contributions are as follows:
1. A new algorithm based on feature line matching is proposed for fast detection and recognition of object features.
2. The algorithm is widely applicable in feature line matching environments, and the resulting fast detection and recognition method for object features has high applicability.
3. The algorithm offers wider applicability, higher recognition accuracy, and better visualization.

Materials and Methods
In this paper, the binary image produced by edge detection is tracked by chain code according to a chain-code criterion to obtain the initial edge feature lines. The basic idea of the chain code method is to analyze the directional relationship between adjacent points: points in different directions are coded with different values, and points on the same straight line share the same code value. In order to determine the structure of the target image, its relationship with the structural element must be examined point by point. The structural element is the "probe" used to collect information when inspecting the image; its size is generally much smaller than that of the target image. The gray values of a structural element are only 0 and 1, and it is recorded as a set S. Structural elements can have different shapes and sizes; typically they are rectangular, diamond-shaped, or circular. When detecting feature points, the extreme points across all DoG scales of the image are treated equally; consequently, when extracting features from target images with complex backgrounds, far more background information than target information is extracted. A feature extraction method based on saliency map detection is therefore proposed in this paper. Since saliency map detection is based on visual information, it can filter out a large amount of regular background during image processing. After line features and corner features are extracted, they must be organized reasonably to meet the needs of subsequent matching, so they need to be described to some extent. Here the relationships between line features must be described: the connections between the segments of the segmentation are constructed, and a relational description of the segments is built, which is essential for subsequent relational matching. The parameter matching of feature points is shown in Table 1.
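The chain-code criterion described above can be sketched with the standard 8-direction Freeman chain code; this minimal example is illustrative only, not the paper's exact tracking procedure.

```python
# 8-connected Freeman chain code: each step between adjacent edge pixels is
# coded by its direction, so collinear pixels produce a constant code value.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(path):
    """Encode an ordered list of adjacent (x, y) edge pixels as direction codes."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(path, path[1:])]

horizontal = chain_code([(0, 0), (1, 0), (2, 0)])   # constant code 0
diagonal = chain_code([(0, 0), (1, 1), (2, 2)])     # constant code 1
```

A run of identical codes marks pixels lying on one straight feature line, which is exactly the grouping criterion the chain-code tracking relies on.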
The characteristic point parameter matching diagram is shown in Fig. 1. After the edges of objects are extracted, practical straight lines cannot be obtained in one step: in a digital image, because of the discreteness of the image edge and the discontinuity of the edge line, a straight line is often broken into many short lines scattered along the trajectory of a longer line. The main idea of the hierarchical marking method is to select short line segments as primitives and exploit the characteristics of straight lines in the image. The characteristics of the line features used in object feature matching are shown in Table 2 and Fig. 2. The centroid and the maximum and minimum moment-of-inertia axes (principal axes) of the encoded object feature image are determined from the zeroth- and first-order moments of the image (and, for the principal axes, its second-order central moments). Directional run-length coding is then performed with the centroid as the starting point, the principal axis as the reference direction, and 360/n degrees as the interval angle. If the line segments connecting the endpoints are regarded as relational attributes between the primitives, a relational graph can be constructed for any segmented figure; at the same time, the relationships between primitives are expressed by the complete correlation matrix. To meet the needs of subsequent matching, primitive attributes and relational attributes must both be established. The description of primitive attributes is shown in Table 3 and Fig. 3. Then, by combining the results extracted in different directions, a complete feature line can be obtained. Structural elements in morphology can have different shapes, sizes, and orientations; in a sense, these structural elements are also templates. The core idea of repeated integration is to integrate the original signal while differentiating the kernel function, leaving the convolution result unchanged. After n+1 differentiations, the spline function becomes a sparse kernel.
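The centroid follows from the zeroth- and first-order moments, while the principal (moment-of-inertia) axis is conventionally obtained from the second-order central moments. The following is a minimal sketch on a set of foreground pixels, with illustrative names rather than the paper's code:

```python
import math

def centroid_and_axis(pixels):
    """Centroid from zeroth/first moments and principal-axis angle from
    second central moments of a set of foreground (x, y) pixels."""
    m00 = len(pixels)                                   # zeroth moment
    cx = sum(x for x, _ in pixels) / m00                # first moments / m00
    cy = sum(y for _, y in pixels) / m00
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)        # second central moments
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)     # principal-axis angle
    return (cx, cy), theta

# Pixels along the line y = x: centroid at (1, 1), axis at 45 degrees.
(cx, cy), theta = centroid_and_axis([(0, 0), (1, 1), (2, 2)])
```

The centroid and the principal-axis angle give exactly the starting point and reference direction needed by the directional run-length coding described above.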
Its effective size does not change with scale, so the convolution speed can be greatly increased by the repeated integration method; although the original integral image is used for spline filtering, the same applies to the spline differential kernel. When the features of different objects are similar or overlapping, the matched feature lines on different object features will correspondingly approach or join, which makes the feature lines of different object features hard to distinguish; as a result, feature lines belonging to different object features are often mistakenly connected. The purpose of a query is to retrieve the data point closest to the query point in the Kd-tree. The query starts from the root node of the binary tree and descends by comparing, in turn, the node of the partitioned region with the smallest distance. After the nearest node is found, the search path must be "traced back" to guarantee that the returned point is truly the nearest: during backtracking, a point closer to the query than the current candidate may be found, in which case it replaces the candidate and the backtracking continues from the new neighbor. In addition, the distinctiveness of the detected regions is important. This paper uses the matching score to test the distinctiveness of the regions detected by the proposed feature detector: the matching score is the ratio of the number of correctly matched local regions to the smaller number of local regions in the repeated regions of the two images. The criterion for matching two local regions is that the distance between them is the smallest. The relationship between "point" and "line" in the original image is nevertheless maintained, which we call a topological transformation; it is a powerful lever for abstracting practical problems into graph theory.
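The query-and-backtrack procedure can be sketched with a toy Kd-tree. This is the standard nearest-neighbour search under the pruning rule described above, not the paper's implementation:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively split 2-D points on alternating axes; each node stores
    its point, the split axis, and left/right subtrees."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Descend to the region containing the query, then backtrack: the far
    subtree is searched only if the splitting plane is closer than the
    current best neighbour, so a closer point could lie beyond it."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or math.dist(query, point) < math.dist(query, best):
        best = point
    diff = query[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < math.dist(query, best):      # backtrack across the plane
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

The backtracking test `abs(diff) < dist(query, best)` is what lets the search replace the current candidate with a closer point found on the other side of a splitting plane.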
Conversely, through this transformation, theoretical results from graph theory can be applied to practical problems, here to the centroid of the object feature image and its maximum and minimum moment-of-inertia axes. In the topological data structure, points are independent of each other, points connect into lines, and lines form surfaces. Each line starts at a starting node (FP), ends at a terminating node (EP), and is adjacent to a left and a right polygon (LPY and RPY). In this data structure, arcs are the basic objects of data organization: the arc-segment file is composed of arc-segment records, and each record includes the arc-segment identification code. The arc-segment file structure of the topological data structure is shown in Table 4, and the basic elements of the vector-structure graphics are shown in Fig. 4.
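Under this structure, one arc-segment record carries the identifier together with the FP, EP, LPY, and RPY fields. The sketch below assumes a four-arc square with polygon 0 standing for the exterior; the data and helper are hypothetical, not the layout of Table 4:

```python
from dataclasses import dataclass

@dataclass
class ArcRecord:
    """One record of the arc-segment file: the arc identifier, its start
    node (FP), end node (EP), and the adjacent left/right polygons."""
    arc_id: int
    fp: int    # starting node
    ep: int    # terminating node
    lpy: int   # left polygon
    rpy: int   # right polygon

# A square built from four arcs: polygon 1 inside, polygon 0 (exterior) on the right.
arc_file = [ArcRecord(1, 1, 2, 1, 0), ArcRecord(2, 2, 3, 1, 0),
            ArcRecord(3, 3, 4, 1, 0), ArcRecord(4, 4, 1, 1, 0)]

def polygon_arcs(arcs, poly_id):
    """Collect the arcs bounding a polygon, regardless of side."""
    return [a.arc_id for a in arcs if poly_id in (a.lpy, a.rpy)]
```

Traversing the records by FP/EP chains is how the line and surface topology is reconstructed from the arc file.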

Result Analysis and Discussion
Color is one of the surface features of the object corresponding to a region and has a global character. In general, color features are pixel-based, are not affected by the orientation of the image region, and are robust to noise, size, resolution, and direction. According to the actual needs of the object features and image features, appropriate matching primitives are selected and relevant constraints are formulated to control the matching, so as to obtain an optimized image matching algorithm. In fact, whatever type of matching algorithm is used, effective matching strategies must be added to limit the size of its solution space. The matching paths taken are shown in Table 5 and Fig. 5. When a target is matched by feature matching, a threshold on the correct matching rate of the features is first set; if the correct matching rate between the scene and a certain template is greater than the threshold, the target is determined to appear in the scene. The detected interest points are then relocated to the pixel closest to the extremum and interpolated again, until the deviation is less than 0.3 pixels; the position of the final interest point gives the offset of the current interest point. In this way, sub-pixel precision can be achieved, and the parameters of the global geometric transformation model can be calculated more accurately during matching. In addition, this representation mathematically supports a simple and direct similarity measure describing the correspondence of segments of the same name. Note that the endpoints defining segments of the same name need not themselves be points of the same name: the algorithm only requires that they lie on the same line in object space, which relaxes the requirements on the preceding line-segment extraction step.
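The iterative relocation described above typically rests on parabolic interpolation of the response around an extremum. Since the paper does not give the exact formula, the following 1-D sketch shows only the standard three-point scheme:

```python
def subpixel_peak(left, center, right):
    """Parabolic interpolation of a response maximum: fit a parabola through
    three neighbouring samples and return the offset of its peak from the
    centre sample, in pixels."""
    denom = left - 2.0 * center + right
    if denom == 0.0:
        return 0.0
    return 0.5 * (left - right) / denom

# Responses sampled from a parabola whose true maximum lies at x = 2.2:
samples = [-(x - 2.2) ** 2 for x in range(5)]
offset = subpixel_peak(samples[1], samples[2], samples[3])  # peak near index 2
```

If the interpolated offset exceeded the 0.3-pixel bound, the scheme would re-center on the neighboring pixel and interpolate again, as described above.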
According to the positioning accuracy formula, the positioning criterion amounts to finding a suitable filter function, from which the expected number of maxima within a region of the object feature's length is obtained. The saturation component is adjusted as the luminance component changes, and the output image is computed through the center-surround function F(x, y). In the mathematical description, the midpoint, length, and gradient direction of a straight line segment determine it exactly, but in actual image processing a straight line segment consists of discrete pixels, and the pixel information around the segment also affects the accuracy of segment matching; the attributes of the segment itself are therefore not sufficient to fully reflect the information the segment carries in the image. Large digitization errors can be handled by graphic editing or by redefining the matching limits. In addition, an arc may itself be a dangling chain that need not participate in the polygon topology; in this case it can be marked so that it does not take part in the subsequent stage of polygon building. Because high-order spline functions of the same order are used, the calculation speed of B-DoH-C is slightly slower than SURF but much faster than other detectors based on Gaussian scale space; moreover, B-DoH can be computed in parallel and is easy to implement in VLSI. The feature points are described by the feature line matching description operator to obtain the feature vectors to be matched; finally, a vector search algorithm is used to obtain the matching vector pairs, and the matching ratio is computed from the number of correct matches finally obtained.
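The segment attributes named above (midpoint, length, direction) can be computed directly from the endpoints. This is a minimal sketch with illustrative names; it uses the segment's own direction, whereas the gradient direction in the paper would come from the local image-gradient data:

```python
import math

def segment_attributes(p, q):
    """Midpoint, length, and direction of a line segment from endpoint p to q.
    These attributes determine the segment exactly in the continuous case,
    but in a digital image they are complemented by local gradient statistics."""
    (x0, y0), (x1, y1) = p, q
    midpoint = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    length = math.hypot(x1 - x0, y1 - y0)
    direction = math.atan2(y1 - y0, x1 - x0)
    return midpoint, length, direction

mid, length, direction = segment_attributes((0, 0), (3, 4))
```

A midpoint descriptor would then append local gradient statistics around `mid` to these three geometric attributes before matching.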
This matching method is little affected by the positions of isolated edge points, but it depends strongly on the chosen image feature extraction algorithm, so different feature extraction algorithms must be selected for different scenes. The classification of different target feature lines is then completed by integrating the gray value, the gray variance, and the RGB component values around each pixel. The line detection by pixel fitting is shown in Fig. 6. The experimental results show that, in cases of target separation and target occlusion, the proposed algorithm can classify the extracted feature lines and complete the classification of the feature lines of different targets. The recognition samples generated from an object's features should be robust, that is, the sample forms should cover multiple perspectives, and object feature recognition based on MORLC is insensitive to scale changes of the recognized feature image; the scale factor can be eliminated effectively by normalization. The feature lines of object feature edges can be regarded as a kind of stripe-like image, and the direction of an edge feature line is arbitrary, so it is difficult to extract complete feature line information from a single direction at a time. A template matching algorithm is therefore introduced into feature line matching detection: templates are constructed in different directions, feature lines are extracted from multiple directions, and the feature lines detected in each direction are fused into one complete feature line. The features here are not the global features of an image but local features that can represent an image or an object; global features generally refer to information such as the variance or histogram.
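The multi-direction template idea can be sketched with two of the classical line-detection templates and a pixel-wise maximum as the fusion step. The templates and the fusion rule here are standard illustrations, not the paper's exact templates:

```python
def correlate(image, kernel):
    """Valid-mode correlation of a 2-D list image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[a][b] * image[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

# Two directional line templates (horizontal and vertical); a full detector
# would also use the two diagonal ones.
H = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]
V = [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]]

def fuse(image, templates):
    """Pixel-wise maximum over the directional responses, merging the lines
    detected in each direction into one complete feature-line response."""
    responses = [correlate(image, t) for t in templates]
    return [[max(r[i][j] for r in responses)
             for j in range(len(responses[0][0]))]
            for i in range(len(responses[0]))]

# A horizontal feature line: the horizontal template responds strongly,
# and the fused map keeps that response.
image = [[0] * 5 for _ in range(5)]
image[2] = [1] * 5
fused = fuse(image, [H, V])
```

Taking the maximum over directions is one simple fusion choice; any rule that merges per-direction detections into a single response map fits the scheme described above.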
When occlusions and illumination changes occur in the object features of an image, the global features are greatly affected; in this case, local features can be used instead. Local features are stable features that generally have good distinguishability. In the matching process, the similarity measure and the matching strategy for straight line segments both affect matching performance. The simplest matching strategy is to set a threshold and compare the value computed by the similarity measure function against it to decide whether a pair matches. Although this threshold method is simple, its accuracy is not high; a matching criterion with left-right consistency achieves a higher correct matching rate. When searching for the basic match, the collinearity constraint and an improved least-squares template matching method are used, and finally the same-name edge in the right image is determined. The evaluation of the matching results is shown in Fig. 7.
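The left-right consistency criterion mentioned above is commonly realized as a mutual nearest-neighbour check. The sketch below uses made-up 2-D descriptors and is a generic version of the idea, not the paper's exact criterion:

```python
def best_match(desc, candidates):
    """Index of the candidate descriptor nearest to desc (squared Euclidean)."""
    return min(range(len(candidates)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(desc, candidates[j])))

def lr_consistent_matches(left, right):
    """Keep a pair (i, j) only when the left-to-right and right-to-left
    nearest matches agree, which filters out most one-sided mismatches."""
    matches = []
    for i, d in enumerate(left):
        j = best_match(d, right)
        if best_match(right[j], left) == i:
            matches.append((i, j))
    return matches

# Hypothetical descriptors: left[2] has no true counterpart and is rejected.
left = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
right = [(1.1, 0.9), (0.1, 0.0), (0.9, 1.1)]
matches = lr_consistent_matches(left, right)
```

Compared with a plain distance threshold, the mutual check discards matches that are nearest in only one direction, which is why it yields a higher correct matching rate.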

Conclusion
As an important mid-level descriptor of image information, the feature line has many advantages: it carries rich information, its features are stable and continuous, it is not easily disturbed by noise or illumination, and it is ubiquitous in real objects. It has therefore become a preferred primitive in image matching algorithms. In this paper, a fast detection and recognition method for object features based on feature line matching is studied. Based on the differences in the surroundings of the pixels on matched feature lines, a method for computing pixel representation features is proposed: for the pixels on each extracted feature line, feature values are calculated separately, the feature lines of different objects are distinguished by the ranges of these values, and the classification of matched feature lines in multi-target scenes is thus completed. After feature extraction, the extracted features are described and an association graph is constructed from the description, transforming the complex image matching problem into an association-graph isomorphism problem, with graph traversal from graph theory and the topological relationships constructed automatically as in GIS. This transformation simplifies the computation of image matching, with an obvious effect. Future work will include applying fast filtering algorithms to compute local features such as SIFT more quickly. Finally, the proposed method can be applied to various object feature recognition tasks, including object feature matching, object feature classification, and object feature detection.