Testability evaluation of radar equipment with an improved matter-element extension method

To address the many influencing factors and heterogeneous indexes involved in testability evaluation, a radar equipment testability assessment index system is constructed based on the characteristics of the equipment and the connotation requirements of testability. The Criteria Importance Through Intercriteria Correlation (CRITIC) method is improved by replacing the standard deviation with the coefficient of variation and by taking the absolute value of the correlation coefficient when computing the conflict coefficient, and the objective weights of the evaluation indexes are determined. An improved matter-element extension model is established, and the testability evaluation of different types of radar equipment is obtained by calculating the asymmetric closeness degree of the extension set. The effectiveness of the model is verified by an example analysis.


Introduction
As the battlefield becomes more complex, the integration of radars is increasing, and their performance and structure are becoming more complex, creating greater challenges for radar fault detection and isolation [1]. As far as radar equipment is concerned, good testability is of great significance in reducing the consumption of maintenance manpower and support resources, reducing life cycle costs, and improving the combat readiness and mission success of equipment [2]. Scientifically evaluating the testability level of radar equipment can check whether its testability design meets the specified testability requirements, identify the defects of equipment testability, and accurately grasp its performance status, which provides important decision support for the improvement of radar equipment testability.
At present, many scholars have adopted a variety of methods to evaluate the testability of related equipment in their respective fields and have achieved certain results. Liu [3] constructed six general quality-characteristic index libraries of complex products, including testability, and carried out a comprehensive evaluation based on the VIKOR method. Su et al. [4] took the fault detection rate as an example to study a testability evaluation method for aircraft prognostics and health management (PHM) systems based on outfield data. Li et al. [5] established a missile state assessment index system based on test data and proposed a missile state assessment and decision method combining extended TOPSIS with grey correlation.
According to this analysis, existing testability evaluation work focuses mainly on missiles, aviation, and other equipment, while testability evaluation studies on radar equipment are relatively few. Most of them either establish evaluation indexes by classifying test stages or rely on a few quantitative indexes, such as the fault detection rate and fault isolation rate, to evaluate the testability of equipment systems. Modern radar equipment comes in many types with complex systems, so it is of practical significance to establish a testability evaluation index system and model suitable for most radar equipment.
Based on the characteristics of radar equipment and the connotation requirements of testability, this paper constructs an evaluation index system for radar equipment testability and proposes an extension evaluation model of radar equipment testability weighted by the improved CRITIC method. The CRITIC method is an objective weighting method proposed by Diakoulaki et al. [6]; it considers both the degree of variation of each index and the conflict among indexes. In this paper, the coefficient of variation replaces the standard deviation to measure the contrast intensity between indexes, and the correlation coefficient is taken as an absolute value in the calculation of the conflict coefficient, so the CRITIC method is optimized to make the weight results more scientific and accurate. By normalizing the index assignments and replacing the maximum-membership-degree grade criterion, the shortcomings of the traditional matter-element extension model are effectively remedied and a more accurate testability evaluation grade is obtained, which provides a decision reference for improving the testability evaluation and testability design level of radar equipment.

Construction of Testability Evaluation Index System of Radar Equipment
The testability requirements of radar equipment specify the condition monitoring capabilities, fault diagnosis capabilities, and constraints that radar equipment should have, which are mainly divided into qualitative requirements and quantitative requirements [7]. Combining the characteristics of radar equipment itself, analyzing the connotation of testability and the specific content of the testability requirements, and considering the perspectives of condition monitoring, fault diagnosis, equipment design, and resource guarantee, a radar equipment testability evaluation index system is constructed.
Condition performance monitoring parameters are used to observe and measure the condition performance of radar equipment, to determine whether it meets the specified requirements, and to support fault diagnosis and trend analysis. The indexes include the parameter monitoring rate, the condition monitoring rate, and the station-level average manual parameter detection time.
Fault diagnosis parameters describe the process of detecting and isolating faults in radar equipment and are the concentrated expression of equipment testability. They are described by the fault detection rate, fault isolation rate, false alarm rate, fault detection time, fault isolation time, fault isolation ambiguity group size, mistaken dismantling rate, and BITE coverage rate.
A good testability design can effectively improve the efficiency of condition monitoring and fault diagnosis while saving costs, thus improving the operational readiness and mission reliability of radar equipment. The indexes of the testability design parameters are test controllability, test observability, UUT compatibility with external test equipment, rationality of structure design and function division, software testability, and test security.
Testability resource parameters refer to the resources planned and designed to ensure the testing and use of radar equipment. Scientific configuration of testability resources can improve the mission success and fault detection and isolation of radar systems and effectively reduce the cost and time of testing and diagnosis. The indexes are the quantity and quality of testers, the technical data integration rate, the common-use rate of accompanying test equipment, the integration rate of accompanying test equipment, and fault information integrity.

Improved CRITIC Method for Determining Index Weights
The basic steps of using the improved CRITIC method to determine the testability index weight of radar equipment are as follows.
1) Construct the original data matrix and normalize it. Assume there are m objects to be evaluated and n evaluation indexes; then x_ij (i = 1, 2, ⋯, m; j = 1, 2, ⋯, n) is the observed value of index j for the ith evaluation object, from which the original data matrix X = (x_ij)_{m×n} is established. To obtain the normalized matrix Y = (y_ij)_{m×n}, the different types of index data (benefit and cost) need to be normalized.
2) Calculate the coefficient of variation and conflict coefficient of the index.
The coefficient of variation v_j of the jth evaluation index is

v_j = σ_j / x̄_j

where x̄_j and σ_j are the mean and standard deviation of the jth index, respectively. The correlation coefficient r_jk between indexes j and k is

r_jk = cov(y_j, y_k) / (σ_j σ_k)

where cov(y_j, y_k) is the covariance of indexes j and k. The conflict coefficient R_j of index j is then

R_j = Σ_{k=1}^{n} (1 − |r_jk|).
3) Determine the weight of each index. The information content of index j is C_j = v_j R_j, and its objective weight is w_j = C_j / Σ_{k=1}^{n} C_k.
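The three steps above can be sketched in code. This is a minimal illustration of the improved CRITIC weighting, not the authors' implementation; the function name and the min-max normalization details are assumptions.

```python
import numpy as np

def improved_critic_weights(X, benefit_mask):
    """Improved CRITIC weights: the coefficient of variation replaces the
    standard deviation, and |r| is used in the conflict coefficient.

    X            : (m objects) x (n indexes) raw data matrix
    benefit_mask : boolean array of length n, True where larger is better
    """
    X = np.asarray(X, dtype=float)
    # 1) Min-max normalization, handling benefit and cost indexes.
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)          # guard zero-range columns
    Y = np.where(benefit_mask, (X - lo) / span, (hi - X) / span)
    # 2) Contrast intensity: coefficient of variation v_j = sigma_j / mean_j.
    mean = Y.mean(axis=0)
    sigma = Y.std(axis=0, ddof=1)
    v = np.divide(sigma, mean, out=np.zeros_like(sigma), where=mean > 0)
    # 3) Conflict coefficient R_j = sum_k (1 - |r_jk|), with absolute values.
    r = np.corrcoef(Y, rowvar=False)
    R = (1.0 - np.abs(r)).sum(axis=0)
    # 4) Information content C_j = v_j * R_j, normalized to weights.
    C = v * R
    return C / C.sum()
```

Replacing σ_j with v_j in step 2 removes the influence of index magnitude on the contrast intensity, and the absolute value in step 3 prevents strongly negative correlations from inflating the conflict measure.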

Improved Matter-Element Extension Evaluation Model
Extenics is a discipline founded by Professor Cai in 1983. Based on matter-element theory, it studies the extensibility and conjugacy of things with formal models and is used to solve contradictory problems [8]. The matter-element extension evaluation model has some limitations in application. First, when the evaluation indicators differ in type and dimension and a measured indicator value exceeds the section-domain range, the correlation values of the evaluation levels cannot be calculated. Second, the traditional model uses the maximum membership degree as the discriminant criterion and determines the assessment level by approximate processing [9], which can lose information about the object being assessed and reduce the validity of the results. This paper therefore addresses these two shortcomings by (1) normalizing the classical domain, the section domain, and the actual index values in the assessment model, and (2) replacing the maximum membership degree with the asymmetric closeness degree, which avoids the loss of index information that can invalidate the discrimination principle.
The concrete steps of the improved matter-element extension evaluation model are as follows.

1) Determine the classical domain, the section domain, and the matter element to be evaluated. The testability of the target radar is divided into m levels, where the jth evaluation level is denoted N_j (j = 1, 2, ⋯, m). There are n testability evaluation indexes c_1, c_2, ⋯, c_n. V_ji = ⟨a_ji, b_ji⟩ is the range of values of evaluation index c_i in the jth evaluation level, and R_j is the matter-element model of the jth level of radar equipment testability.
The section domain represents the total range of values of index c_i over all evaluation levels, and the section-domain matter-element matrix R_p is composed of the levels N_p, the indexes c_1, c_2, ⋯, c_n, and their section-domain intervals V_pi = ⟨a_pi, b_pi⟩. The testability matter-element matrix R_0 of the radar equipment to be evaluated is expressed analogously, with the measured value x_i of each index in place of the interval.

2) Normalization. The values of each index in the classical domain, the section domain, and the matter element to be evaluated are made dimensionless. The standardized formulas for benefit and cost indexes are

x′ = (x − a_pi) / (b_pi − a_pi)  (benefit index)
x′ = (b_pi − x) / (b_pi − a_pi)  (cost index)

where x is the magnitude of index c_i in the classical domain, the section domain, or the matter element to be evaluated, and a_pi and b_pi are the lower and upper limits of the section-domain range, respectively.
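The normalization step can be sketched as follows. The helper names are illustrative assumptions; the formulas are the benefit/cost min-max scalings over the section-domain limits described above.

```python
def normalize_value(x, a_p, b_p, benefit=True):
    """Map a raw value (or interval endpoint) into [0, 1] using the
    section-domain limits a_p (lower) and b_p (upper)."""
    if benefit:
        return (x - a_p) / (b_p - a_p)
    return (b_p - x) / (b_p - a_p)

def normalize_interval(a, b, a_p, b_p, benefit=True):
    """Normalize a classical-domain interval <a, b>; for a cost index the
    mapping reverses the endpoints, so re-order to keep the interval valid."""
    lo = normalize_value(a, a_p, b_p, benefit)
    hi = normalize_value(b, a_p, b_p, benefit)
    return (min(lo, hi), max(lo, hi))
```

Applying these functions to every interval of the classical and section domains and to every measured value puts all quantities on the common scale [0, 1], which removes the dimension differences noted as the first limitation of the traditional model.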
After standardization, the new classical-domain matter element R′_j, section-domain matter element R′_p, and matter element to be evaluated R′_0 are obtained.
3) Calculate the closeness degree. The closeness degree measures how close two fuzzy sets are, and thus avoids the limitations of applying the maximum membership degree in comprehensive evaluation. From the asymmetric closeness formula (λ = 1) [10], the closeness of the testability matter element R′_0 of the radar equipment to be evaluated with respect to each grade N_j is

N_j(R′_0) = 1 − (1 / (n(n + 1))) Σ_{i=1}^{n} w_i D_j(x′_i)

where D_j(x′_i) is the distance between the matter element R′_0 to be evaluated and the normalized classical domain of grade j, and w_i is the weight of evaluation index c_i.

Instance Analysis
Taking three types of radar equipment as examples, the testability evaluation indexes are weighted by the improved CRITIC method, and the testability evaluation is performed with the improved matter-element extension model. Based on the existing national and military standards for the testability of radar equipment [7], [11], [12], and through extensive surveys and expert consultation, the evaluation indexes were divided into four grades, {excellent, good, medium, poor}, and the grade standards of the evaluation indexes were assigned values as shown in Table 1. Among them, six of the indexes (those for which smaller values are better, such as the false alarm rate and the time-related indexes) are cost indexes, and the remaining indexes are benefit indexes.
Table 1. Radar equipment testability evaluation index grade standard.

Improved Matter-Element Extension Evaluation
The standard assignments of the radar equipment testability assessment levels and the actual measured index values in Table 1 and Table 2 are normalized according to Equation (9) to obtain the new classical domain R′_j, section-domain element R′_p, and element to be evaluated R′_0. Taking the radar equipment numbered 2 as an example, the distance between the object to be evaluated and the normalized classical domain is calculated according to Equation (11) and then substituted, together with the index weights, into Equation (10) to derive the closeness of this radar equipment's testability to each grade: N_j(R′_0) = (0.999716, 0.999945, 1.000021, 0.999694). Since max N_j(R′_0) = N_3(R′_0) = 1.000021, the testability evaluation grade of radar 2 is medium. By the same calculation, the testability grades of radars 1 and 3 are good. The measured values of most indexes of the type 2 radar are below the evaluation standard of the good grade, so the testability design of this type of radar equipment should be improved to raise its testability level, which is consistent with the actual investigation results. By contrast, the testability grades of all three radars obtained by the traditional matter-element extension method [13] are good; that evaluation process ignores the fuzziness of the testability of the equipment to be evaluated and loses some data information, leading to inaccurate results. The results obtained with the improved matter-element extension model thus verify the reasonableness and applicability of the evaluation model.

Conclusion
Based on the definition and requirements of radar equipment testability, this paper constructs a combined qualitative and quantitative radar equipment testability index system. By introducing the coefficient of variation and computing the conflict coefficient from the absolute value of the correlation coefficient, the traditional objective CRITIC weighting method is improved, making the weighting results more accurate and scientific. The improved matter-element extension model is then used to evaluate the testability of different types of radar equipment and to distinguish their strengths and weaknesses. Standardizing the matter-element domain values makes the grade criteria of the evaluation model more flexible and practical, and the closeness degree effectively compensates for the shortcomings of the maximum membership degree in discriminating assessment levels, making the results more accurate and reasonable. Application of the method can expose deficiencies in the testability of radar equipment and provide a basis for development units, manufacturers, and users to optimize the testability design of radar equipment in a targeted manner and to enhance its fault diagnosis capability and maintenance test efficiency.