Bottom-up evaluation of complex measurements in chemistry by the Monte Carlo simulation of sample preparation

The measurement uncertainty is required for the objective and sound interpretation of the measurement result. However, the evaluation of the uncertainty of complex measurements in chemistry, such as the analysis of trace levels of analytes in environmental or biological samples after complex sample preparation, is not trivial. Examples of complex sample preparations are acid digestions of the sample for elemental analysis or the solvent extraction of organic analytes. This communication presents a novel approach for the “subanalytical” evaluation of complex sample preparation that involves isolating these analytical steps and simulating their precision and trueness by the Monte Carlo method. The simulation is based on observed experimental performance, since the analytical efficiency frequently varies with uncontrolled characteristics of the analysed item.

The measurement uncertainty is required for the objective and sound interpretation of the measurement result. However, the evaluation of the uncertainty of complex measurements in chemistry, such as for the analysis of trace levels of analytes in environmental or biological samples after complex sample preparation, is not trivial. Some examples of complex sample preparations are acid digestions of the sample for the elemental analysis or the solvent extraction of organic analytes.
Different approaches to the evaluation of the measurement uncertainty are available. The most detailed and accurate evaluations are designated "bottom-up" or "sub-analytical". The pragmatic evaluation of the uncertainty from the dispersion of results produced by different laboratories is designated "top-down based on interlaboratory data" or "supralaboratorial" and generally overestimates the uncertainty. The simplified evaluation of the uncertainty from the intermediate precision and trueness determined during in-house procedure validation is designated "top-down based on intralaboratory data" or "supra-analytical"; it is less pessimistic about the uncertainty than the "supralaboratorial" approach [1,2]. Therefore, the evaluated uncertainty is a function of the measurement performance, the quality of the references used, the collected performance data and how the uncertainty is modelled from the available information [1].
For the more demanding measurement applications, such as the production of Certified Reference Materials, bottom-up uncertainty evaluation is more adequate since it allows producing reference values with lower uncertainty than those reported by routine laboratories.
The "sub-analytical" evaluation of the measurement uncertainty involves dissecting the measurement procedure into the precision and trueness components of all individual analytical steps and effects that contribute to the combined measurement uncertainty. For this detailed assessment, any relevant correlation between uncertainty components, such as the correlation of the values of calibrators prepared from the same stock solution, must be considered. The isolation and evaluation of the uncertainty of complex sample preparation are particularly demanding since they require experimental assessments involving additional analytical operations whose own performance can mask that of the sample preparation. Nevertheless, the most challenging part of these evaluations is the management of performance variation with sample matrix and analyte level, which makes accurate performance modelling particularly difficult.
This communication presents a novel approach for the "subanalytical" (bottom-up) evaluation of complex sample preparation that involves isolating these analytical steps and simulating their precision and trueness by the Monte Carlo method. The simulation is based on observed experimental performance, since the analytical efficiency frequently varies with mostly unknown and uncontrolled characteristics of the analysed sample.
The developed uncertainty evaluation method requires experimental data on the performance of the analytical method for samples similar to the unknown samples to be characterised. Typically, the replicate analysis of samples on different days is used to simulate the between-days component of measurement precision as the between-days precision component of the measurement error, b. The simulation of b requires the prior simulation of the repeatability of all analytical steps except the complex operations, so that b covers all between-days precision effects. The analysis of samples with a reference value, typically spiked samples or Certified Reference Materials, is used to model the mean analyte recovery, R̄. The models of b and R̄ obtained from the analysis of different items are pooled into complex performance models that take between-matrix effects into account.
For the simulation of b from the analysis of sample A on n different days (j = 1 to n), the measurement repeatability of each analysis day is simulated considering all components except the complex ones. An m × n matrix is built from these analyses, where m is the number of simulated values per analysis day (i = 1 to m). The difference between each simulated value and the mean of its simulation line (i.e. the mean of the simulated values, from the various days, in the same line i) simulates b. The m × n simulated values of b from the p studied items are pooled, and a kernel density plot is used to model the probability density of the between-days measurement error.
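As a minimal sketch of this step for a single item (the daily results, the repeatability standard deviation s_r and the simulation size m below are hypothetical placeholders, not data from the study):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical results of one sample analysed on n different days
daily_results = np.array([10.2, 9.8, 10.5, 9.9, 10.1])  # n = 5 days
s_r = 0.15   # assumed repeatability sd of the non-complex analytical steps
m = 10000    # simulated values per analysis day

# m x n matrix: each column resamples one day's result with repeatability noise
sims = daily_results + rng.normal(0.0, s_r, size=(m, daily_results.size))

# Between-days error b: deviation of each simulated value from the mean
# of its simulation line (row mean across the n days)
b = sims - sims.mean(axis=1, keepdims=True)

# Pool the simulated b (several items would be concatenated here) and model
# the probability density of the between-days error with a kernel density
b_density = gaussian_kde(b.ravel())
```

In a full evaluation, the arrays of simulated b from all p items would be concatenated before fitting the kernel density, so the pooled model reflects between-matrix variation.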
When samples with reference values are analysed, equivalent m × n matrices are built, but the ratio between the mean of each ith simulation line and the reference value is used to produce m simulations of the mean analyte recovery, R̄. The simulated R̄ from the analysis of the q reference materials are used to predict the impact of systematic effects on the measurements.
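A sketch of the recovery simulation for one reference material, under the same assumptions as above (all numbers are illustrative, not study data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spiked-sample study: daily results and the reference value
daily_results = np.array([9.6, 9.9, 9.7, 10.0])  # measured on n = 4 days
ref_value = 10.0                                  # reference (spiked) value
s_r = 0.12                                        # assumed repeatability sd
m = 10000                                         # simulations per day

# Equivalent m x n matrix of resampled daily results
sims = daily_results + rng.normal(0.0, s_r, size=(m, daily_results.size))

# Mean recovery R̄: ratio of each simulation line's mean to the reference value
R_bar = sims.mean(axis=1) / ref_value             # m simulated recoveries
```

The m simulated recoveries from the q reference materials would then be pooled into the performance model of systematic effects.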
For the analysis of unknown samples after measurement procedure validation, the model used to simulate the measurement repeatability of all analytical steps except sample preparation is used to simulate values, y, affected by only those components. The simulated repeatability is then combined with the simulated b and R̄, producing m simulations of the measured value carrying all uncertainty components and corrected for the analyte recovery: (y + b)/R̄ [3].
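The combination step can be sketched as follows. Here the draws of y, b and R̄ are placeholder normal samples; in the actual method, b and R̄ would be drawn from the pooled kernel density and recovery models described above:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 10000

# Placeholder draws (illustrative distributions, not the pooled models):
y = rng.normal(10.0, 0.1, size=m)       # repeatability-only simulated values
b = rng.normal(0.0, 0.2, size=m)        # between-days error draws
R_bar = rng.normal(0.98, 0.01, size=m)  # simulated mean recovery draws

# Recovery-corrected simulated results carrying all uncertainty components
corrected = (y + b) / R_bar

# A 95 % coverage interval for the measurand from the simulated distribution
lo, hi = np.percentile(corrected, [2.5, 97.5])
```

The reported value and its expanded uncertainty follow directly from the location and coverage interval of the simulated distribution.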
This methodology was successfully applied to the quantification of various elements in sediments by using operationally defined or rational procedures [4,5].
The developed models were cross-validated by randomly withholding analysis results from the measurement uncertainty evaluation and using the withheld data to assess the compatibility between estimated and reference values. The success rate of the metrological compatibility tests was consistent with the confidence level of the tests (95% and 99%).
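The exact form of the compatibility test is not detailed here; one common formulation checks whether the difference between the estimated and reference values exceeds k times its combined standard uncertainty (k ≈ 2 for approximately 95 % confidence). A sketch under that assumption:

```python
import math

def compatible(x, u_x, ref, u_ref, k=2.0):
    """Metrological compatibility: |x - ref| within k times the combined
    standard uncertainty of the difference."""
    return abs(x - ref) <= k * math.sqrt(u_x**2 + u_ref**2)

# Illustrative check of an estimated value against a reference value
print(compatible(10.1, 0.15, 10.0, 0.05))  # → True
```

With a valid uncertainty model, the fraction of withheld results passing this test should match the chosen confidence level, which is what the cross-validation verified.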

Conclusions
The developed methodology for the detailed evaluation of the uncertainty of measurements in chemistry involving complex sample preparations was successfully applied to the determination of various elements in sediments. The information on analyte recovery and sample preparation precision collected from the analysis of various spiked samples and certified reference materials was pooled, producing complex probability density models of the sample preparation and of the final result uncertainty. The developed methodology is applicable to other instrumental methods of analysis involving complex sample preparation.