Table of contents

Volume 53

Number 2, April 2016

Focus issue papers

S55
The following article is Open access

BIPM Workshop on Measurement Uncertainty

It could be argued that activity measurements of radioactive substances should be under statistical control, considering that the measurand is unambiguously defined, the radioactive decay processes are theoretically well understood and the measurement function can be derived from physical principles. However, comparisons invariably show a level of discrepancy among activity standardisation results that exceeds expectation from uncertainty evaluations. Decay characteristics of radionuclides determined in different experiments also show unexpected inconsistencies. Arguably, the problem lies mainly in incomplete uncertainty assessment. Of the various reasons for incomplete uncertainty assessment, ranging from human error to the limits of state-of-the-art knowledge, a selection of cases is discussed in which imperfections in the modelling of the measurement process can lead to unexpectedly large underestimations of uncertainty.

S65

BIPM Workshop on Measurement Uncertainty

The paper illustrates, by means of selected examples, the merits and the limits of the method for computing coverage intervals described in Supplement 1 to the GUM. Coverage intervals are assessed by evaluating their long-run success rate. Three pairs of examples are presented, corresponding to three different ways of generating incomplete knowledge about quantities: the toss of dice, the presence of additive noise, and quantization. In each pair, the first example results in a coverage interval with a long-run success rate equal to the coverage probability (set to 95%); the second instead yields an interval with a success rate near zero. The paper shows that the propagation mechanism of Supplement 1, while working well in certain special cases, yields unacceptable results in others, and that these problematic issues cannot be neglected. The conclusion is that, if a Bayesian approach to uncertainty evaluation is adopted, propagation is a particularly delicate issue.
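
A minimal numerical sketch of the long-run success-rate check described above, assuming a simple additive Gaussian noise model with illustrative values (this is not one of the paper's example pairs): an observation is simulated, a 95% coverage interval is built by Monte Carlo propagation in the spirit of Supplement 1, and the fraction of runs in which the interval contains the true value is counted.

```python
# Hedged sketch, not one of the paper's examples: long-run success rate of a
# 95 % coverage interval obtained by Monte Carlo propagation (in the spirit of
# GUM Supplement 1) for a simple additive Gaussian noise model.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1          # assumed known standard deviation of the additive noise
n_runs = 5000        # number of simulated measurements
n_mc = 2000          # Monte Carlo draws per measurement
hits = 0
for _ in range(n_runs):
    y_true = rng.uniform(0.0, 10.0)          # value of the measurand in this run
    x_obs = y_true + rng.normal(0.0, sigma)  # single observation
    # distribution attributed to the measurand given the observation
    y_draws = x_obs - rng.normal(0.0, sigma, n_mc)
    lo, hi = np.percentile(y_draws, [2.5, 97.5])   # 95 % coverage interval
    hits += (lo <= y_true <= hi)
print("long-run success rate:", hits / n_runs)     # ≈ 0.95 in this benign case
```

For this benign Gaussian case the success rate matches the 95% coverage probability; the paper's point is that other, equally legitimate, states of knowledge can make it collapse.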

S74

BIPM Workshop on Measurement Uncertainty

Conformity assessment of the distribution of the values of a quantity is investigated using a Bayesian approach, taking into account the effect of systematic, non-negligible measurement errors. The analysis is general: the probability distribution of the quantity can be of any kind, not only the ubiquitous normal distribution, and the measurement model function linking the measurand with the observable and non-observable influence quantities can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is compared directly with the Bayesian result, whose superiority is demonstrated. Application of these results to the derivation of the operating characteristic curves used in sampling plans for inspection by variables is also introduced.
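
A hedged sketch of the kind of computation involved, with purely illustrative numbers and a simple Gaussian model rather than the paper's general treatment: the conformity probability of a single item, given an observation affected by both a random error and a non-negligible systematic error with an assumed prior, evaluated by Monte Carlo.

```python
# Hedged sketch with purely illustrative numbers and a simple Gaussian model
# (not the paper's general treatment): Monte Carlo evaluation of the conformity
# probability of an item given an observation affected by both a random error
# and a non-negligible systematic error with an assumed prior.
import numpy as np

rng = np.random.default_rng(0)
x_obs = 9.95                       # observed value
u_random = 0.03                    # standard uncertainty of the random error
u_systematic = 0.05                # standard uncertainty assigned to the systematic error
tol_low, tol_high = 9.90, 10.10    # tolerance interval for conformity

n = 200_000
# distribution attributed to the measurand: observation minus both error terms
y = x_obs - rng.normal(0.0, u_random, n) - rng.normal(0.0, u_systematic, n)
p_conform = np.mean((y >= tol_low) & (y <= tol_high))
print(f"conformity probability ≈ {p_conform:.3f}")
```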

Short Communication

L1

A cryogenic fixed-point cell has been filled with high-purity (99.999%) sulfur hexafluoride (SF6) and measured in an adiabatic closed-cycle cryostat system. Temperature measurements of the SF6 melting curve were performed using a capsule-type standard platinum resistance thermometer (CSPRT) calibrated over the International Temperature Scale of 1990 (ITS-90) subrange from the triple point of equilibrium hydrogen to the triple point of water. The measured temperatures were corrected by 0.37 mK for thermometer self-heating, and the liquidus-point temperature was estimated by extrapolating a simple linear regression versus melted fraction F, fitted over the range F = 0.53 to 0.84, to F = 1. Based on this measurement, the temperature of the triple point of sulfur hexafluoride is found to be 223.555 23(49) K (k = 1) on the ITS-90. This value is in excellent agreement with the best prior measurements reported in the literature, but has considerably smaller uncertainty. An analysis of the detailed uncertainty budget suggests that, if the triple point of sulfur hexafluoride were included as a defining fixed point in the next revision of the International Temperature Scale, it could be realized with a total uncertainty of approximately 0.43 mK, slightly larger than the realization uncertainties of the defining fixed points of the ITS-90. Since the combined standard uncertainty of this determination is dominated by chemical impurity effects, further research into gas purification techniques and the influence of specific impurity species on the SF6 triple-point temperature may bring the realization uncertainty of SF6 as a fixed-point material into the range of the defining fixed points of the ITS-90.
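
A minimal sketch of the liquidus-point extrapolation step with synthetic numbers (not the measured data): a straight line is fitted to temperature versus melted fraction F over 0.53 to 0.84 and evaluated at F = 1.

```python
# Minimal sketch of the liquidus-point extrapolation with synthetic numbers
# (not the measured data): a straight line fitted to temperature versus melted
# fraction F over 0.53-0.84 is evaluated at F = 1.
import numpy as np

F = np.array([0.53, 0.60, 0.68, 0.76, 0.84])   # melted fractions (synthetic)
T = 223.55520 + 3.0e-5 * F                     # synthetic melting-curve temperatures / K
slope, intercept = np.polyfit(F, T, 1)         # simple linear regression T = slope*F + intercept
T_liquidus = slope * 1.0 + intercept           # extrapolation to F = 1
print(f"estimated liquidus-point temperature: {T_liquidus:.5f} K")
```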

Papers

743
The following article is Open access

We present a cryogenic method for the measurement of the total hemispherical emissivity and absorptivity of various materials at temperatures from 320 K down to ≈20 K. In absorptivity measurements the temperature of the examined sample is kept at ≈5 K–35 K. Radiative heat flow between two plane-parallel surfaces placed in a vacuum (40 mm diameter disks: the sample under test and a disk with a reference surface) is absorbed by the colder sample and sinks into a liquid-helium bath via a thermal resistor acting as a heat flow meter. The heat flow is measured by a substitution method, using the thermal output of an electrical heater to calibrate the heat flow meter. A great deal of attention is paid to the estimation of the uncertainties associated with this method. The capabilities of the instrument are demonstrated by absorptivity and emissivity measurements of a pure aluminium sample. The expanded fractional uncertainty (k = 2) in the emissivity ε = 0.0041 measured at ≈30 K for pure aluminium is less than 11%, and for emissivities ε > 0.0053 measured above 60 K the uncertainties are below 7%. The method was designed primarily for the measurement of highly reflective materials such as pure metals; nevertheless, the high emissivity of the reference sample also enables the measurement of non-metallic materials with reasonable accuracy.
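
For orientation, the textbook gray-body relation for net radiative exchange between two plane-parallel surfaces, from which a small sample emissivity can be inferred once the heat flow is measured; this is a hedged sketch with illustrative numbers, not the authors' full measurement model.

```python
# Hedged sketch using the textbook gray-body relation (not the authors' full
# measurement model, numbers illustrative): net radiative heat flow between two
# plane-parallel disks, from which a small sample emissivity can be inferred
# once the heat flow is measured.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant / W m^-2 K^-4

def parallel_plate_heat_flow(eps1, eps2, T1, T2, area):
    """Net radiative heat flow (W) between two parallel gray plates."""
    return area * SIGMA * (T1**4 - T2**4) / (1.0 / eps1 + 1.0 / eps2 - 1.0)

area = math.pi * 0.020**2   # 40 mm diameter disk
q = parallel_plate_heat_flow(eps1=0.9, eps2=0.0041, T1=300.0, T2=30.0, area=area)
print(f"net radiative heat flow ≈ {q*1e3:.2f} mW")
```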

754

Iteratively re-weighted least squares (IRLS) was used to simulate the Lp-norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed for different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the Lp-approximation with 3 < p < 4 was found to yield gravity estimates several times more precise than standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive-noise conditions.
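
A minimal sketch of the technique named in the abstract: an Lp-norm fit of the quadratic free-fall trajectory by IRLS with two re-weighting iterations. The trajectory, the noise level and the choice p = 3.5 are synthetic, for illustration only.

```python
# Minimal sketch of an Lp-norm (here p = 3.5) fit of the quadratic free-fall
# trajectory z = z0 + v0*t + g*t**2/2 by iteratively re-weighted least squares
# with two re-weighting iterations; trajectory, noise and p are synthetic choices.
import numpy as np

rng = np.random.default_rng(2)
g_true = 9.8123456
t = np.linspace(0.0, 0.2, 500)
z = 0.001 + 0.1 * t + 0.5 * g_true * t**2 + rng.uniform(-5e-9, 5e-9, t.size)

A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])   # design matrix for (z0, v0, g)
p = 3.5
beta = np.linalg.lstsq(A, z, rcond=None)[0]             # iteration 0: ordinary least squares
for _ in range(2):                                      # two IRLS iterations
    r = z - A @ beta
    w = np.abs(r) ** (p - 2)                            # Lp weights |r|^(p-2)
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)[0]
print(f"g estimate: {beta[2]:.7f} m/s^2")
```

Note that for p > 2 the weights grow with the residual magnitude, which is what makes the estimator favourable for platykurtic (short-tailed) noise.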

762

Conventional mass is a useful concept introduced to reduce the impact of the buoyancy correction in everyday mass measurements, thus avoiding in most cases its accurate determination, which is necessary in measurements of 'true' mass. Although the usage of conventional mass is universal and standardized, the concept is often considered a second-choice tool, to be avoided in high-accuracy applications. In this paper we show that this is a false belief, by elucidating the role played by the covariances between volume and mass and between volume and conventional mass at the various stages of the dissemination chain and in the relationship between the uncertainties of mass and conventional mass. We arrive at somewhat counter-intuitive results: the volume of the transfer standard plays a comparatively minor role in the uncertainty budget of the standard under calibration. In addition, conventional mass is preferable to mass in normal, in-air operation, as its uncertainty is smaller than that of mass if covariance terms are properly taken into account, and the uncertainty over-statement that (typically) results from neglecting them is less severe than that which (always) occurs with mass. The same considerations hold for force: we show that the associated uncertainty is the same using mass or conventional mass and, again, that the latter is preferable if covariance terms are neglected.
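
As a reminder of the convention being discussed, a short worked example of conventional mass per OIML D 28 (the mass of a hypothetical 8000 kg m−3 standard that balances the object in air of density 1.2 kg m−3 at 20 °C); the covariance analysis that is the subject of the paper is not reproduced here.

```python
# Worked reminder of the conventional-mass convention (OIML D 28): the
# conventional mass of a body is the mass of a hypothetical 8000 kg/m^3
# standard that balances it in air of density 1.2 kg/m^3 at 20 °C. The
# covariance analysis that is the subject of the paper is not reproduced here.
RHO_AIR_REF = 1.2      # kg/m^3, conventional reference air density
RHO_CONV = 8000.0      # kg/m^3, conventional reference density

def conventional_mass(m, rho):
    """Conventional mass (kg) of a body of true mass m (kg) and density rho (kg/m^3)."""
    return m * (1.0 - RHO_AIR_REF / rho) / (1.0 - RHO_AIR_REF / RHO_CONV)

# For a 1 kg stainless-steel standard (density ~7950 kg/m^3) the two values
# differ by roughly a milligram.
m_true = 1.0
m_conv = conventional_mass(m_true, 7950.0)
print(f"conventional mass - true mass = {(m_conv - m_true)*1e9:.0f} µg")
```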

770

A substitution method for measuring seawater density relative to pure-water density using vibrating-tube densimeters was realized and validated. Standard uncertainties of 1 g m−3 at atmospheric pressure, 10 g m−3 up to 10 MPa, and 20 g m−3 up to 65 MPa were achieved in the temperature range 5 °C to 35 °C and for salt contents up to 35 g kg−1. The realization was validated by comparison measurements with a hydrostatic weighing apparatus at atmospheric pressure. For high pressures, literature values of seawater compressibility were compared with substitution measurements of the realized apparatus.
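
A heavily simplified sketch of the substitution idea with synthetic calibration constants (the paper's actual procedure and corrections are more involved): the densimeter is read alternately on pure water and on the seawater sample, and the seawater density is reported as a reference pure-water density plus the difference of readings, so that most apparatus error cancels.

```python
# Heavily simplified sketch of the substitution idea with synthetic calibration
# constants (the actual procedure and corrections are more involved): the
# densimeter is read alternately on pure water and on the seawater sample, and
# the seawater density is reported as a pure-water reference density plus the
# difference of the two readings, so that most apparatus error cancels.
def density_reading(tau, A=2.6e8, B=-41.6):
    """Nominal densimeter reading (kg/m^3) from oscillation period tau (s)."""
    return A * tau**2 + B

rho_water_reference = 998.2071   # assumed reference density of pure water / kg m^-3
tau_water = 2.000e-3             # period measured with pure water (synthetic)
tau_seawater = 2.024e-3          # period measured with the seawater sample (synthetic)

rho_seawater = rho_water_reference + (density_reading(tau_seawater)
                                      - density_reading(tau_water))
print(f"seawater density ≈ {rho_seawater:.3f} kg/m^3")
```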

787

The Bureau International des Poids et Mesures has carried out calibrations of platinum–iridium kilogram mass standards with reference to the international prototype of the kilogram for the first time since the third periodic verification of national prototypes of the kilogram in 1988–92. This calibration campaign was designated the 'Extraordinary Calibrations'; in its second phase, two platinum–iridium kilogram mass standards of the National Metrology Institute of Japan were calibrated with a standard uncertainty of 3.5 μg. By adding these new calibration data to our data sets from 1991 onwards, we established our mass unit with a standard uncertainty of 3.3 μg by least-squares analysis using an exponential model, which is useful for compensating for the mass increase after cleaning the mass standards. Moreover, the mass unit established following the Extraordinary Calibrations was found to have shifted by −20.8 μg with respect to our previously maintained mass unit as of the beginning of 2015. Analysis with a linear model revealed that the rate of mass increase over time of some standards was significantly smaller than that suggested at the third periodic verification of national prototypes of the kilogram. Analysis with the exponential model gave an exponent of 0.217 with a standard uncertainty of 0.057, which suggests that the mass increase due to surface contamination cannot be explained by a diffusion-limited process.
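
A hedged sketch, with synthetic data, of the kind of model fit mentioned in the abstract: a mass gain after cleaning of the form m0 + a·t^b, where a diffusion-limited process would correspond to b ≈ 0.5 and the abstract reports b ≈ 0.217. The paper's actual model and data treatment may differ.

```python
# Hedged sketch with synthetic data (the paper's actual model and data
# treatment may differ): least-squares fit of a mass-gain-after-cleaning model
# of the form m0 + a*t**b. A diffusion-limited process would give b ≈ 0.5;
# the abstract reports b ≈ 0.217.
import numpy as np
from scipy.optimize import curve_fit

def mass_model(t, m0, a, b):
    return m0 + a * np.power(t, b)

t_days = np.array([1.0, 7.0, 30.0, 90.0, 365.0, 1500.0])   # days since cleaning
dm_ug = np.array([0.4, 0.65, 0.9, 1.2, 1.6, 2.2])          # synthetic mass gain / µg

popt, pcov = curve_fit(mass_model, t_days, dm_ug, p0=[0.3, 0.3, 0.3])
m0, a, b = popt
print(f"fitted exponent b = {b:.3f} ± {np.sqrt(pcov[2, 2]):.3f}")
```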

800

This article builds upon a previous work dealing with the uncertainty budget associated with our recent determination of the Boltzmann constant by means of Doppler-broadening thermometry. We report the outcomes of theoretical calculations and numerical simulations aimed at precisely quantifying the influence of the unresolved hyperfine structure of a given ortho component of the $\text{H}_{2}^{18}$ O spectrum at 1.4 μm on the measurement of the Doppler width of the line itself. We have found that, if the hyperfine structure of the ${{4}_{4,1}}\to {{4}_{4,0}}$ line of the ${{\nu}_{1}}+{{\nu}_{3}}$ band were ignored, the spectroscopic measurement of the Boltzmann constant would be affected by a relative systematic deviation of $4\cdot {{10}^{-8}}$.
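
For context, the standard Doppler-broadening thermometry relation (textbook form, not the authors' full line-shape model) shows why a small bias in the retrieved Doppler width matters:

```latex
% Standard Doppler-broadening thermometry relation (textbook form, not the
% authors' full line-shape model), with \sigma_D the standard deviation of the
% Gaussian Doppler profile of a line at frequency \nu_0 for molecules of mass m
% at temperature T:
\[
  \sigma_{\mathrm{D}} \;=\; \frac{\nu_0}{c}\,\sqrt{\frac{k_{\mathrm{B}}T}{m}}
  \qquad\Longrightarrow\qquad
  k_{\mathrm{B}} \;=\; \frac{m c^{2}}{T}\left(\frac{\sigma_{\mathrm{D}}}{\nu_0}\right)^{2} .
\]
% Since k_B scales with the square of the width, a relative bias \delta in the
% retrieved Doppler width produces a relative bias of about 2\delta in k_B,
% which is why an unresolved splitting at the 10^{-8} level of the width matters.
```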

805

A yoke-based permanent magnet, employed in many watt balances at national metrology institutes, is designed to generate a strong and uniform magnetic field in the radial direction in an air gap. In reality, however, the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component in the air gap, which should be either measured or modeled for optimization of the watt balance. A recent publication, Li et al (2015 Metrologia 52 445), presented a full field-mapping method which in principle supplies useful information for profile characterization and misalignment analysis. This article supplements Li et al (2015 Metrologia 52 445) by developing a different analytical algorithm to represent the 3D magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction, Br(z). The new algorithm is based on the electromagnetic nature of the magnet and achieves much better accuracy.
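
One ingredient such an analytical reconstruction can exploit, sketched here with a synthetic profile and emphatically not the authors' algorithm: in the current-free air gap the field is curl-free, so a single measured Br(z) profile already fixes the radial gradient of the vertical field component along the measurement line.

```python
# Hedged illustration with a synthetic profile (not the authors' algorithm):
# in the current-free air gap the field is curl-free, so dB_z/dr = dB_r/dz.
# A single measured profile B_r(z) therefore already fixes the radial gradient
# of the vertical field component along the measurement line, one ingredient
# of an analytical 3D reconstruction.
import numpy as np

z = np.linspace(-0.04, 0.04, 81)            # vertical position in the gap / m
B_r = 0.5 * (1.0 - (z / 0.06) ** 2)         # synthetic radial flux density / T

dBr_dz = np.gradient(B_r, z)                # numerical derivative along z
dBz_dr = dBr_dz                             # curl-free condition in the air gap
i = 60                                      # grid point at z = +0.02 m
print(f"dB_z/dr at z = {z[i]:.3f} m: {dBz_dr[i]:.2f} T/m")
```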

817
The following article is Open access

A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.

829

The combination of isotope dilution and mass spectrometry has become a ubiquitous tool of chemical analysis. Often perceived as one of the most accurate methods of chemical analysis, it is not without shortcomings. Current isotope dilution equations are not capable of fully addressing one of the key problems encountered in chemical analysis: the possible effect of the sample matrix on measured isotope ratios. The method of standard addition does compensate for the effect of the sample matrix by ensuring that all measured solutions have identical composition. While it is impossible to attain such a condition in traditional isotope dilution, we present equations which allow matrix-matching between all measured solutions by fusing the isotope dilution and standard addition methods.
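
For reference, a sketch of the classical single isotope dilution equation for a two-isotope element, with illustrative numbers; the paper's new equations fusing isotope dilution with standard addition are not reproduced here.

```python
# Sketch of the classical single isotope dilution equation for a two-isotope
# element, with illustrative numbers; the paper's new equations fusing isotope
# dilution with standard addition are not reproduced here.
def idms_amount_ratio(R_sample, R_spike, R_blend):
    """Amount ratio n_sample/n_spike from the three measured isotope ratios
    (each ratio = reference isotope / spike isotope, two-isotope element)."""
    return ((R_spike - R_blend) / (R_blend - R_sample)
            * (1.0 + R_sample) / (1.0 + R_spike))

n_spike = 1.00e-6   # amount of spike added / mol (assumed known)
ratio = idms_amount_ratio(R_sample=37.0, R_spike=0.03, R_blend=1.2)
print(f"amount of analyte in the sample ≈ {ratio * n_spike:.3e} mol")
```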

835
The following article is Open access

Gravity meters must be aligned with the local gravity vector at any location on the surface of the Earth in order to measure the full amplitude of the gravity vector. The gravitational force on the sensitive component of the gravity meter decreases with the cosine of the angle between the measurement axis and the local gravity vector. Most gravity meters incorporate two orthogonal horizontal levels to orient the gravity meter for a maximum gravity reading. In order to calculate a gravity correction, it is often necessary to estimate the overall angular deviation between the gravity meter and the local gravity vector from the two measured horizontal tilts. Typically this is done by assuming that the two horizontal angles are independent and that the product of the cosines of the horizontal tilts is equivalent to the cosine of the overall deviation. These approximations, however, break down at large angles. This paper derives analytic formulae to transform the angles measured by two orthogonal tilt meters into the vertical deviation of the third orthogonal axis. The equations can be used to calibrate the tilt sensors attached to the gravity meter or to correct a gravity meter used in an off-level condition.
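
A hedged sketch of the geometry, assuming each tilt sensor reports the angle whose sine is the gravity component along its (nominally horizontal) axis; the paper derives exact formulae for specific sensor arrangements. It compares the exact cosine of the vertical deviation with the product-of-cosines approximation, whose breakdown at large angles motivates the paper.

```python
# Hedged sketch, assuming each tilt sensor reports the angle whose sine is the
# gravity component along its (nominally horizontal) axis; the paper derives
# exact formulae for specific sensor arrangements. The exact cosine of the
# vertical deviation is compared with the product-of-cosines approximation,
# whose breakdown at large angles motivates the paper.
import numpy as np

def vertical_deviation(alpha, beta):
    """Exact angle (rad) between the instrument axis and the local gravity vector."""
    s = np.sin(alpha) ** 2 + np.sin(beta) ** 2
    return np.arccos(np.sqrt(1.0 - s))

for tilt_deg in (0.1, 1.0, 5.0, 15.0):
    a = b = np.radians(tilt_deg)
    exact = np.cos(vertical_deviation(a, b))
    approx = np.cos(a) * np.cos(b)          # common small-angle approximation
    print(f"tilt {tilt_deg:5.1f} deg per axis: exact cos = {exact:.8f}, "
          f"product of cosines = {approx:.8f}")
```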

840

The radian and steradian are unusual units within the SI, originally belonging to their own category of 'supplementary units', with this status changed to that of dimensionless 'derived units' in 1995. Recent papers have suggested that angles could be handled in two different ways within the SI, both differing from the present system. The purpose of this paper is to provide a framework for putting such suggestions into context, outlining the range of available options together with their advantages and disadvantages.

Although less rigorously logical than some alternatives, the present SI approach is generally supported, albeit with some changes to the SI brochure suggested to make the position clearer, in particular with regard to the designation of the radian and steradian as derived units.

846

Although the relativistic manifestations of gravitational fields in gravimetry were first studied 40 years ago, relativistic effects in free-fall absolute gravimeters have rarely been considered. In light of this, we present a general relativistic model for free-fall absolute gravimeters in a local Fermi coordinate system, focusing on effects related to the measuring devices: the relativistic transverse Doppler effect, the gravitational redshift and the Earth's rotation. Based on this model, a general relativistic expression for the measured gravity acceleration is obtained.

853

In response to strong demand for a total spectral radiant flux (TSRF) standard from domestic lighting manufacturers, such a scale has been realized in the visible range by means of a relative gonio-spectroradiometric method at the National Metrology Institute of Japan (NMIJ). Our gonio-spectroradiometric method employs a spectral irradiance standard as well as a luminous intensity standard as references. We investigated several models of quartz-halogen lamps from domestic manufacturers with respect to their stability and selected a set of reference standard lamps for TSRF. The carefully selected quartz-halogen lamps have sufficient stability as standard lamps for TSRF after a 100 h seasoning process. The relative expanded uncertainty (k = 2) for the realization of the TSRF scale is between 3.1% (visible region) and 4.1% (near-ultraviolet region). We evaluated uncertainties related to the characteristics of the array spectroradiometer using experimental results and found that some of them, such as the effect of the bandpass function, contributed noticeably to the total uncertainty.
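
A schematic sketch of the gonio-spectroradiometric principle with a synthetic irradiance distribution (not NMIJ's data handling): at each wavelength the total spectral radiant flux is the spectral irradiance integrated over a sphere surrounding the lamp.

```python
# Schematic sketch of the gonio-spectroradiometric principle with a synthetic
# irradiance distribution (not NMIJ's data handling): at each wavelength the
# total spectral radiant flux is the spectral irradiance integrated over a
# sphere of radius r surrounding the lamp.
import numpy as np

r = 2.0                                     # goniometer radius / m
theta = np.linspace(0.0, np.pi, 181)        # polar angles
phi = np.linspace(0.0, 2.0 * np.pi, 361)    # azimuthal angles
TH, PH = np.meshgrid(theta, phi, indexing="ij")

E = 1.0e-3 * (1.0 + 0.2 * np.cos(TH))       # synthetic spectral irradiance / W m^-2 nm^-1

dtheta = theta[1] - theta[0]
dphi = phi[1] - phi[0]
flux = r**2 * np.sum(E * np.sin(TH)) * dtheta * dphi
print(f"total spectral radiant flux ≈ {flux:.4f} W/nm")
```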

860

Thermal noise is a limiting factor in many high-precision optical experiments. A search is underway for novel optical materials with reduced thermal noise. One such pair of materials, gallium arsenide and aluminum-alloyed gallium arsenide (collectively referred to as AlGaAs), shows promise for its low Brownian noise when compared to conventional materials such as silica and tantala. However, AlGaAs has the potential to produce a high level of thermo-optic noise. We have fabricated a set of AlGaAs crystalline coatings, transferred to fused silica substrates, whose layer structure has been optimized to reduce thermo-optic noise by inducing coherent cancellation of the thermoelastic and thermorefractive effects. By measuring the photothermal transfer function of these mirrors, we find evidence that this optimization has been successful.

869

Sub-monolayer sensitivity to controlled gas adsorption and desorption is demonstrated using a double paddle oscillator (DPO) installed within an ultra-high vacuum (UHV) environmental chamber equipped with in situ film deposition, (multi-)gas admission and temperature control. This effort is intended to establish a robust framework for quantitatively comparing mass changes due to gas loading and unloading on different material systems selected or considered for use as mass artefacts. The in situ materials deposition enables the preparation of virgin surfaces that can be monitored during their initial exposure to gases of interest. These tools are designed to allow us to comparatively evaluate how different materials gain or lose mass due to precisely controlled environmental excursions, with the long-term goal of measuring changes in absolute mass. Herein, we provide a detailed experimental description of the apparatus, an evaluation of its initial performance, and demonstration measurements using nitrogen adsorption and desorption directly on the DPO.
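
A rough sketch of how a frequency reading converts to adsorbed mass, using the generic mass-loading relation for a resonator and assuming the adsorbed layer adds mass but negligible stiffness; the DPO analysis in the paper is more detailed and the numbers here are illustrative.

```python
# Rough sketch using the generic mass-loading relation for a resonator,
# delta_f/f ≈ -delta_m/(2*m_eff), assuming the adsorbed layer adds mass but
# negligible stiffness; the DPO analysis in the paper is more detailed and the
# numbers here are illustrative.
f0 = 5500.0         # resonance frequency of the paddle mode / Hz (illustrative)
m_eff = 2.0e-4      # effective oscillating mass / kg (illustrative)
delta_f = -2.0e-4   # observed frequency shift on gas adsorption / Hz (illustrative)

delta_m = -2.0 * m_eff * delta_f / f0
print(f"inferred adsorbed mass ≈ {delta_m * 1e12:.1f} ng")
```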

881

To keep national time accurately coherent with Coordinated Universal Time, many national metrology institutes (NMIs) use two-way satellite time and frequency transfer (TWSTFT) to continuously measure the time difference with other NMIs over an international baseline. Some NMIs have ultra-stable clocks with stabilities better than 10−16, yet current operational TWSTFT can only provide a frequency uncertainty of 10−15 and a time uncertainty of 1 ns, which is inadequate. The uncertainty is dominated by the short-term stability and the diurnals, i.e. measurement variations with a period of one day. The aim of this work is to improve the stability of operational TWSTFT systems without additional transmission, bandwidth or signal power. Using a software-defined receiver (SDR), comprising a high-resolution correlator and successive interference cancellation in an open-loop configuration, as the TWSTFT receiver reduces the time deviation from 140 ps to 73 ps at an averaging time of 1 h and occasionally suppresses the diurnals. To study the source of the diurnals, TWSTFT is performed using a 2 × 2 earth station (ES) array. Some ESs sensitive to temperature variation are thereby identified, and the diurnals are significantly reduced by employing the insensitive ESs. Hence, operational TWSTFT using the proposed SDR with insensitive ESs achieves a time deviation of 41 ps at 1 h and 80 ps for averaging times from 1 h to 20 h.
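
For orientation, the basic two-way combination that TWSTFT rests on (independent of the SDR processing described above): each station measures the arrival of the other station's signal against its own clock, and the half-difference cancels the reciprocal path delay to first order.

```python
# Reminder of the basic two-way combination that TWSTFT rests on (independent
# of the SDR processing described above): each station measures the arrival of
# the other station's signal against its own clock, and the half-difference
# cancels the reciprocal path delay to first order. Equipment delays and the
# Sagnac correction are neglected here.
def twstft_clock_difference(reading_at_A, reading_at_B):
    """Clock difference A - B (same units as the readings)."""
    return 0.5 * (reading_at_A - reading_at_B)

# readings = (arrival of remote signal minus local 1 PPS), synthetic, in seconds
print(twstft_clock_difference(reading_at_A=0.250000123, reading_at_B=0.250000087))
```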

891

Optical approaches for hydrophone calibration offer significant advantages over existing methods based on reciprocity. In particular, heterodyne and homodyne interferometry can accurately measure particle velocity and displacement at a specific point in space, enabling the acoustic pressure to be measured in an absolute, direct, assumption-free manner, with traceability to the SI definition of the metre. The calibration of a hydrophone can then be performed by placing the active element of the sensor at the point where the acoustic pressure field was measured and monitoring its electrical output. However, it is crucial to validate the performance and accuracy of such optical methods by direct comparison rather than through device calibration. Here we report the direct comparison of two such optical interferometers used in underwater acoustics and ultrasonics in terms of acoustic pressure estimation and the associated uncertainties in the frequency range 200 kHz–3.5 MHz, with results showing agreement better than 1% in pressure and typical expanded uncertainties better than 3% for both methods.
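
A minimal sketch of how an optically measured displacement converts to acoustic pressure, assuming an ideal plane progressive wave (the real fields and uncertainty treatment are more involved):

```python
# Minimal sketch, assuming an ideal plane progressive wave (the real fields and
# uncertainty treatment are more involved): acoustic pressure inferred from an
# optically measured displacement amplitude via v = 2*pi*f*xi and p = rho*c*v.
import math

rho_water = 1000.0   # density of water / kg m^-3
c_water = 1480.0     # speed of sound in water / m s^-1
f = 1.0e6            # acoustic frequency / Hz
xi = 5.0e-9          # optically measured displacement amplitude / m (illustrative)

v = 2.0 * math.pi * f * xi    # particle velocity amplitude
p = rho_water * c_water * v   # plane-wave acoustic pressure amplitude
print(f"acoustic pressure amplitude ≈ {p / 1000:.1f} kPa")
```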

899

We evaluate the microwave lensing frequency shift of the microgravity laser-cooled caesium clock PHARAO. We find microwave lensing frequency shifts of δν/ν = 11 × 10−17 to 13 × 10−17, larger than the shift of typical fountain clocks. The shift has a weak dependence on PHARAO parameters, including the atomic temperature, size of the atomic cloud, detection laser intensities, and the launch velocity. We also find the lensing frequency shift to be insensitive to selection and detection spatial inhomogeneities and the expected low-frequency vibrations. We conservatively assign a nominal microwave lensing frequency uncertainty of ±4 × 10−17.

908

We present a practical method for calibrating the detection efficiency (DE) of single-photon detectors (SPDs) over a wide wavelength range from 480 nm to 840 nm. The setup consists of a GaN laser diode emitting broadband luminescence, a tunable bandpass filter, a beam splitter, and a switched integrating amplifier that can measure photocurrents down to the 100 fA level. The SPD under test, with a fibre-coupled beam input, is compared directly with a reference photodiode without using any calibrated attenuator. The relative standard uncertainty of the DE of the SPD is evaluated to be between 0.8% and 2.2% (k = 1), varying with wavelength.
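
A hedged sketch of the basic relation underlying such a calibration (the actual beam-splitter comparison and corrections are more involved): the detection efficiency is the count rate divided by the photon rate inferred from the optical power reaching the detector.

```python
# Hedged sketch of the basic relation underlying such a calibration (the actual
# beam-splitter comparison and corrections are more involved): the detection
# efficiency is the count rate divided by the photon rate inferred from the
# optical power reaching the detector.
h = 6.62607015e-34   # Planck constant / J s
c = 2.99792458e8     # speed of light / m s^-1

def detection_efficiency(count_rate, power_w, wavelength_m):
    """SPD detection efficiency from count rate (s^-1), optical power (W) and
    wavelength (m); dead-time and background corrections are omitted."""
    photon_rate = power_w * wavelength_m / (h * c)
    return count_rate / photon_rate

print(detection_efficiency(count_rate=2.1e5, power_w=1.0e-13, wavelength_m=650e-9))
```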