Paper (open access)

PIV uncertainty quantification from correlation statistics

Published 5 June 2015 © 2015 IOP Publishing Ltd
Citation: Bernhard Wieneke 2015 Meas. Sci. Technol. 26 074002. DOI: 10.1088/0957-0233/26/7/074002


Abstract

The uncertainty of a PIV displacement field is estimated using a generic post-processing method based on statistical analysis of the correlation process using differences in the intensity pattern in the two images. First the second image is dewarped back onto the first one using the computed displacement field which provides two almost perfectly matching images. Differences are analyzed regarding the effect of shifting the peak of the correlation function. A relationship is derived between the standard deviation of intensity differences in each interrogation window and the expected asymmetry of the correlation peak, which is then converted to the uncertainty of a displacement vector. This procedure is tested with synthetic data for various types of noise and experimental conditions (pixel noise, out-of-plane motion, seeding density, particle image size, etc) and is shown to provide an accurate estimate of the true error.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

While PIV has been in use for more than 30 years, there is still no robust and accurate way to quantify the measurement uncertainty for real experimental data. Often a rough estimate of 0.05 to 0.1 pixel is quoted, or one relies, for example, on inspection of the variation of the displacement field over a spatial or temporal neighborhood where the flow field is expected to be smooth. Most work has been done on synthetic data simulating varying error sources, but this typically underestimates the errors in a real experiment. Only in recent years have there been efforts to derive the measurement uncertainty for a particular experimental setup and for every vector in the measured velocity fields.

In the 'uncertainty surface' method developed by Timmins et al (2012) the recorded image is analyzed for parameters that influence the error. At present, four parameters are examined: particle image size, particle density, displacements and shear. Initially the PIV algorithm has to be tested with synthetic images that vary these parameters, i.e. generating an uncertainty surface for a particular algorithm and selected processing options. By comparison with the measured parameters (particle size, etc) inside each interrogation window, an uncertainty measure can be assigned for each vector.

The 'peak ratio' method (Charonko and Vlachos 2013) assumes that the ratio between the highest correlation peak and the second highest correlation peak is a good measure of the uncertainty. An empirical relationship has been derived between the ratio and the most likely uncertainty of the displacement. The peak ratio has also been used by Persoons (2014) in conjunction with local displacement fluctuations over a spatial 5 × 5 and temporal 9-point kernel in time-resolved PIV. These fluctuations are a combination of physical turbulent fluctuations and measurement uncertainty and were shown to successfully guide the settings and processing of a variable pulse separation scheme in order to enhance the dynamic range of the PIV measurement. Similar advanced multi-pulse or multi-frame techniques (see e.g. the review by Westerweel et al 2013) would benefit in the same way from accurate uncertainty estimation.

In the 'image matching' or 'particle disparity' method (Sciacchitano et al 2013), the measured displacement field is used to dewarp back the second image (or both by half) to overlay on the first one. The position of the particles for both frames is computed in each interrogation window and the residual disparity in the position of matching particles leads to an estimate of the uncertainty of the displacement vector.

All uncertainty quantification methods have their strengths and weaknesses, differing in their ability to account for possible error sources.

In general, planar PIV relies on matching two images to compute a displacement field dx(x, y) as the best fit between the intensities in images I1 (x, y) and I2 (x + dx(x, y), y + dy(x, y)) = $I_{2}^{*}$ (x, y). Usually this involves maximizing the correlation given by the sum of $\left({{I}_{1}}I_{2}^{*}\right)$. Alternatively one can minimize the sum of the L2-norm or mean squared error ${{\left({{I}_{1}}-I_{2}^{*}\right)}^{2}}$, sometimes called minimum quadratic difference (MQD), least squares matching (LSM) or the sum of squared differences (SSD) method. It can be shown that the two methods are mathematically identical provided the intensity I is normalized by $\left(I-{{I}_{\text{avg}}}\right)/{{\left(\Sigma {{\left(I-{{I}_{\text{avg}}}\right)}^{2}}\right)}^{1/2}}$ (Medan et al 1991).
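As a quick numerical check of this equivalence (a minimal sketch, not taken from the paper; array sizes and random intensities are arbitrary), note that after the normalization above both images have unit norm, so the sum of squared differences reduces to $2-2\sum I_{1}I_{2}^{*}$:

```python
import numpy as np

rng = np.random.default_rng(0)
I1 = rng.random((32, 32))
I2 = rng.random((32, 32))

def normalize(I):
    # (I - I_avg) / sqrt(sum((I - I_avg)^2)), as given in the text
    J = I - I.mean()
    return J / np.sqrt((J ** 2).sum())

n1, n2 = normalize(I1), normalize(I2)
corr = (n1 * n2).sum()         # correlation term: sum(I1 * I2*)
ssd = ((n1 - n2) ** 2).sum()   # MQD / SSD term: sum((I1 - I2*)^2)

# With unit-norm images, sum((I1 - I2*)^2) = 2 - 2 * sum(I1 * I2*),
# so maximizing the correlation is the same as minimizing the SSD.
print(corr, ssd, 2 - 2 * corr)  # the last two values agree
```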

These image matching algorithms can be further differentiated into local and global regularization schemes. Global methods, like different types of optical flow (Horn and Schunck 1981, Lucas and Kanade 1981), iteratively optimize the whole displacement field at once, while local methods, more common in PIV, select a small interrogation window to be matched to the corresponding window in the second image and repeat the procedure for all windows in the image. Typically an iterative multi-pass predictor–corrector scheme is used (e.g. Schrijer and Scarano 2008). Due to the intermediate vector field regularization, including outlier removal, this can then be considered an intermediate local–global approach.

The focus here is not on a particular PIV algorithm, but on providing a generic uncertainty estimation method for any algorithm as a post-processing step once the displacement field has been calculated. It is an extension of the particle disparity method (Sciacchitano et al 2013) but instead of taking individual particle displacements it analyzes individual pixel contributions to the correlation peak.

2. Uncertainty estimation from correlation statistics

The general concept is presented in figure 1. For computing the uncertainty of a single vector inside an interrogation window one could in principle divide the interrogation window into smaller parts (figure 1(a)) where each sub-window corresponds to a displacement vector with higher noise level due to fewer pixels and fewer particles. So the standard deviation of these n × n vectors divided by $\sqrt{(n\times n)}$ is a rough estimate of the uncertainty of the vector computed from the complete interrogation window, given idealized circumstances with no outliers, no small-scale flow gradients, etc.

Figure 1. Principle of uncertainty estimation by (a) splitting into sub-windows, (b) particle disparity and (c) correlation statistics method.

The particle disparity method goes to smaller scales analyzing individual particles and their spread in displacements (figure 1(b)) and provides better statistics for the uncertainty estimation. The correlation statistics method presented here zooms in even further to individual pixels and their fluctuating contributions to the shape of the correlation peak from which an uncertainty estimate is derived (figure 1(c)). As shown later, despite analyzing single pixels and their neighbors, the effective local spatial scale is determined by the width of a spatial covariance matrix, which is again related to the average particle image size.

The correlation statistics method takes as input the two images to be matched and the displacement field computed by PIV. First image 2 is dewarped back onto image 1 using the displacement field u(x), i.e.

Equation (1): $I_{2}^{*}(x)=I_{2}\left(x+u(x)\right)$

requiring a sufficiently accurate high-order sub-pixel interpolation scheme (Astarita and Cardone 2005). To simplify the equations, this asymmetric dewarping scheme is used here keeping I1 constant. This is sometimes used for time-resolved PIV processing with subsequent Lagrangian analysis of fluid element trajectories. For standard double-frame PIV the symmetric dewarping of both frames with ±u/2 is preferred, applied here to process the data in section 3. Both methods have been implemented in the DaVis software and no difference in performance has been detected.
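The asymmetric dewarping of equation (1) can be sketched with off-the-shelf spline interpolation (a minimal illustration, assuming a dense per-pixel displacement field u, v; the spline order is an assumption and stands in for the high-order sub-pixel interpolation scheme of the actual software, which is not specified here):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp_second_image(I2, u, v, order=3):
    """Asymmetric dewarping I2*(x, y) = I2(x + u(x, y), y + v(x, y)).

    I2, u, v : 2D arrays of identical shape (u, v in pixels, dense field).
    order    : spline order of the sub-pixel interpolation (assumed here).
    """
    ny, nx = I2.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # sample I2 at the displaced positions (row index = y, column index = x)
    coords = np.array([y + v, x + u])
    return map_coordinates(I2, coords, order=order, mode='nearest')
```

For the symmetric variant, both frames would be dewarped with ±u/2 instead.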

The following equations are given only for the u-component in the x-direction, but apply equally to the v-component. The sums are evaluated over all $N=~{{n}^{2}}$ pixels of an interrogation window of linear dimension n, in general over a region related to the effective spatial resolution Lsr. This needs to be determined for every PIV algorithm and set of parameters, as discussed again in section 3.2. Depending on window overlap factor and details of the multi-pass scheme with intermediate vector regression filter, the effective spatial resolution can be quite different from n as described below.

Instead of a square window the sums can be evaluated over a somewhat larger Gaussian weighted interrogation window, where the diameter of the 2D Gaussian weighted curve is close to Lsr. In general, when computing the uncertainty one should use the same square, round or elliptical interrogation window, weighting function and sub-pixel interpolation function as in the underlying PIV algorithm.

It is assumed that the PIV algorithm has converged sufficiently such that for the computed displacement u the correlation function

Equation (2): $C(u)=\sum_{x}I_{1}(x)\,I_{2}(x+u)$

is at a maximum with zero slope dC/du. A small distance ±Δx away from u the correlation function ${{C}_{+}}=C(u+\Delta x)$ should therefore be equal to ${{C}_{-}}=C(u-\Delta x):$

Equation (3): $\Delta C=C_{+}-C_{-}=\sum_{x}I_{1}(x)\,I_{2}^{*}(x+\Delta x)-\sum_{x}I_{1}(x)\,I_{2}^{*}(x-\Delta x)=0$

Typically Δx will be one pixel, as discussed later. A non-zero ΔC indicates that the algorithm was not able to converge for some reason, and taking the three points ${{C}_{0}}=C(u)$, ${{C}_{-}}$ and ${{C}_{+}}$ one could calculate an improved optimal displacement u + δu, which would be equivalent to an extra iteration step of the PIV algorithm. As shown in figure 2, for non-zero ΔC, fitting a Gaussian curve through the three points leads to the residual displacement δu given by

Equation (4): $\delta u=\Delta x\,\frac{\log C_{+}-\log C_{-}}{4\left(\log C_{0}-\log C_{\pm}\right)}$

with $~{{C}_{\pm}}=\left({{C}_{+}}+{{C}_{-}}\right)/2.$
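For reference, the standard three-point Gaussian peak fit through $C_{-}$, $C_{0}$ and $C_{+}$ can be written as a small helper (a sketch of the fit described above in its usual logarithmic form; it is not claimed to reproduce the paper's exact notation):

```python
import numpy as np

def residual_displacement(C_minus, C0, C_plus, dx=1.0):
    """Three-point Gaussian fit through C(u - dx), C(u), C(u + dx).

    Returns the residual peak shift delta_u relative to the converged
    displacement u, assuming positive correlation values with C0 the maximum.
    """
    lm, l0, lp = np.log(C_minus), np.log(C0), np.log(C_plus)
    return dx * (lp - lm) / (2.0 * (2.0 * l0 - lp - lm))
```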

Figure 2. Correlation function C(u).

Equation (3) can be rewritten as

Equation (5): $\Delta C=\sum_{x}\left[I_{1}(x)\,I_{2}^{*}(x+\Delta x)-I_{1}(x+\Delta x)\,I_{2}^{*}(x)\right]=\sum_{i}\Delta C_{i}$

ignoring small differences at the window border when shifting the second term by Δx. Now every term $\Delta {{C}_{i}}$ represents the elemental contribution to the total correlation difference ΔC. All terms $\Delta {{C}_{i}}$ and the sum ΔC would be zero for perfect image matching, i.e. if ${{I}_{1}}=\text{const}\cdot I_{2}^{*}$. Then all pixel-wise contributions to the side lobes of the correlation peak are equal (figure 3, left).
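A minimal sketch of the pixel-wise terms $\Delta C_{i}$ for the x-component, following the form of equation (5); the last Δx columns are simply dropped, consistent with ignoring the window border:

```python
import numpy as np

def correlation_difference_field(I1, I2w, dx=1):
    """Pixel-wise contributions dC_i for the x-component (equation (5)).

    I1  : first image (interrogation region or whole frame)
    I2w : second image dewarped back onto the first one (I2*)
    dx  : shift Delta x in pixels (typically 1)

    dC_i(x) = I1(x) * I2*(x + dx) - I1(x + dx) * I2*(x)
    """
    return I1[:, :-dx] * I2w[:, dx:] - I1[:, dx:] * I2w[:, :-dx]
```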

Figure 3. Correlation function between I1 and $I_{2}^{*}$ for ideal noise-free image (left). With added noise this would lead to correlation peak asymmetry (middle). The PIV predictor–corrector scheme shifts the correlation peak back to 0, thus introducing a measurement error (right).

Due to various error sources, I1 and $I_{2}^{*}$ will not match perfectly even for the true displacement utrue. If I2 were dewarped by the true displacement utrue, the pixel-wise contributions to the side lobes of the correlation peak would be unequal, and the individual $\Delta {{C}_{i}}$ would add up in a random-walk fashion to a non-zero ΔC (figure 3, middle). This difference is optimized away by the PIV predictor–corrector scheme such that ΔC is zero again, leading to an erroneous measured displacement ${{u}_{\text{meas}}}={{u}_{\text{true}}}+\delta u$ (figure 3, right). Thus, from the known variability in $\Delta {{C}_{i}}$, an estimate of the standard deviation ${{\sigma}_{\Delta C}}$ of ΔC can be derived, which is then propagated by equation (4) to an uncertainty estimate of the displacement field.

In general, the standard deviation of $\Delta C=\overset{{}}{\sum} \Delta {{C}_{i}}$ is related to the sum of the covariance matrix of $\Delta {{C}_{i}}:$

Equation (6): $\sigma_{\Delta C}^{2}=\sum_{i}\sum_{j}\operatorname{cov}\left(\Delta C_{i},\Delta C_{j}\right)=\sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}\left\langle \Delta C(x_{1},y_{1})\,\Delta C(x_{2},y_{2})\right\rangle$

with the requirement that the $\Delta {{C}_{i}}$ have a zero mean $\left(\sum \Delta {{C}_{i}}=0\right)$. The auto-covariance terms are non-zero only inside a neighborhood (x1, y1 close to x2, y2) given by the particle image diameter. Outside, the pixel intensities become uncorrelated and the autocorrelation drops to zero. So the right-hand side with a 4D sum can be rewritten as a 2D sum over distances Δx, Δy:

Equation (7): $\sigma_{\Delta C}^{2}=\sum_{\Delta x,\Delta y}S_{\Delta x,\Delta y}\quad\text{with}\quad S_{\Delta x,\Delta y}=\sum_{x,y}\Delta C(x,y)\,\Delta C(x+\Delta x,y+\Delta y)$

In the case of complete independence between $\Delta {{C}_{i}}\ \text{and} \Delta {{C}_{j}},$ i.e. with only a single non-zero covariance term ${{\text{S}}_{0,0}}$ for $\Delta x=\Delta y=0,$ this reduces to the random walk equation

Equation (8): $\sigma_{\Delta C}^{2}=S_{0,0}=\sum_{i}\Delta C_{i}^{2}$

Unfortunately $\Delta {{C}_{i}}\ \text{and}\ \Delta {{C}_{j}}$ are not completely uncorrelated for i ≠ j. So the covariance sums ${{S}_{\Delta x,\Delta y}}$ need to be evaluated over a sufficiently large neighborhood $~\pm \Delta x,\pm \Delta y$ given roughly by the particle image size.

Finally, using equation (4) the uncertainty estimation of the displacement field can be computed as

Equation (9): $\sigma_{u}=\Delta x\,\frac{\log\left(C_{\pm}+\sigma_{\Delta C}/2\right)-\log\left(C_{\pm}-\sigma_{\Delta C}/2\right)}{4\left(\log C_{0}-\log C_{\pm}\right)}$

since the right-hand side is first-order linear in $~{{\sigma}_{\Delta C}}/{{C}_{-}}$ provided ${{\sigma}_{\Delta C}}$ is sufficiently smaller than ${{C}_{0}}-~{{C}_{\pm}},$ i.e. it is accurate for ${{\sigma}_{u}}\lesssim 0.3~\text{px}\text{.}$ This has been validated by Monte-Carlo simulations.
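Putting equations (5)–(9) together for a single interrogation window gives the following sketch (simplifying assumptions: square unweighted window, no smoothing of $\Delta C_{i}$, a fixed covariance neighborhood of ±4 px, and the logarithmic Gaussian-fit propagation in the form written above; it is meant as an illustration of the statistical procedure, not as the reference implementation):

```python
import numpy as np

def uncertainty_x(I1, I2w, dx=1, max_shift=4):
    """Estimate the x-uncertainty of one interrogation window (a sketch of
    the correlation statistics method, equations (5)-(9))."""
    # correlation values at the peak and one pixel to either side
    C0 = np.sum(I1 * I2w)
    Cp = np.sum(I1[:, :-dx] * I2w[:, dx:])    # C(u + dx)
    Cm = np.sum(I1[:, dx:] * I2w[:, :-dx])    # C(u - dx)
    Cpm = 0.5 * (Cp + Cm)

    # pixel-wise contributions dC_i, forced to zero mean
    dC = I1[:, :-dx] * I2w[:, dx:] - I1[:, dx:] * I2w[:, :-dx]
    dC = dC - dC.mean()

    # covariance sums S over a small neighborhood (equations (6) and (7))
    var = 0.0
    ny, nx = dC.shape
    for sy in range(-max_shift, max_shift + 1):
        for sx in range(-max_shift, max_shift + 1):
            a = dC[max(0, sy):ny + min(0, sy), max(0, sx):nx + min(0, sx)]
            b = dC[max(0, -sy):ny + min(0, -sy), max(0, -sx):nx + min(0, -sx)]
            var += np.sum(a * b)
    sigma_dC = np.sqrt(max(var, 0.0))

    # propagate to a displacement uncertainty via the Gaussian peak fit
    cp, cm = Cpm + 0.5 * sigma_dC, Cpm - 0.5 * sigma_dC
    if cm <= 0.0 or Cpm <= 0.0 or C0 <= Cpm:
        return np.nan   # outside the validity range of the estimate
    return dx * (np.log(cp) - np.log(cm)) / (4.0 * (np.log(C0) - np.log(Cpm)))
```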

For illustration the field of $\Delta {{C}_{i}}$ has been plotted in figure 4 for different types of noise. At the top the particles have a positional jitter, similar to Brownian motion in µPIV, of up to 0.1 px, which is barely visible. The correlation differences $\Delta {{C}_{i}}$ in the x-direction clearly show positive and negative shifts at the particle locations. This is also the basis for the particle disparity method of Sciacchitano et al (2013). In the middle, only camera pixel noise has been added, a background level of noise together with photon shot noise proportional to the square root of the intensities. The correlation difference field has a grainy appearance with higher noise at the particle positions.

Figure 4. First and second image frames (left and middle) and correlation difference $\Delta {{C}_{i}}$ (right) for Brownian type of particle jitter (top), camera pixel noise (middle) and out-of-plane motion (bottom). Bluish color indicates negative values, green to red positive values of $\Delta {{C}_{i}}.$

Finally, at the bottom the particles have a 10% out-of-plane motion within a light sheet with a Gaussian intensity profile. Where particles overlap, a pair changing from weak and bright to bright and weak as it moves within the light sheet introduces a strong error, clearly visible for example in the circled area with a virtual displacement in the negative x-direction. A weaker pattern of horizontally aligned blue-green or green-blue stripes indicates single particles changing in intensity, but the overall noise is dominated by overlapping particles.

2.1. Implementation details

First of all, the choice of Δx in equation (3) and the following is guided by the requirement that it should capture well the shape of the correlation function to be maximized, whose width is related to the particle image size. Therefore Δx must be smaller than the particle image size, but considerably larger than the computed ${{\sigma}_{u}}$. A natural choice is Δx = 1 pixel, which also requires only a single dewarped function $I_{2}^{*}(x)$ instead of three in equation (3) and the following. For very large particle image sizes it might be advantageous to use a larger Δx (e.g. 2), incidentally also for the PIV algorithm itself, but this has not been investigated.

For particle-related error sources, in particular out-of-plane motion or random particle motion due to e.g. Brownian motion in µPIV, the auto-covariance matrix ${{S}_{\Delta x,\Delta y}}$ is approximately a positive 2D Gaussian with a diameter roughly given by the average particle image size, as shown in figures 5(c)–(e). Here ${{S}_{\Delta x,\Delta y}}$ is first evaluated for all Δx, Δy < 5 px, but only the inner values are summed up until $S/{{S}_{0,0}}$ drops below 0.05, to avoid adding outer random and possibly negative covariance terms.

Figure 5. Typical autocorrelation matrix ${{S}_{\Delta x,\Delta y}}/{{S}_{0,0}}$ for pure pixel noise and particle image size of 2.5 px without (a) and with (b) smoothing of $\Delta {{C}_{i}},$ and for sizes of 1.0 px (c), 2.5 px (d) and 4.0 px (e) with out-of-plane motion of 5% and with $\Delta {{C}_{i}}$ smoothed.

In the case of pure camera noise, independent for each pixel, the covariance for $\Delta x=\pm 1,\ \Delta y=0$ is negative but relevant due to the inverse coupling between $I(x)$ and $I(x+1)$ in equation (5) (similarly for the y-component in the y-direction), as shown in figure 5(a), which makes it difficult to find a suitable criterion for limiting the summation of S. To simplify processing, all $\Delta {{C}_{i}}$ are first smoothed in the x-direction by a simple filter $\Delta C_{i}^{\prime}(x)=\left(\Delta {{C}_{i}}(x-1)+2\Delta {{C}_{i}}(x)+\Delta {{C}_{i}}(x+1)\right)/4$ (same in the y-direction for the y-component), which eliminates the negative covariance terms (figure 5(b)). This does not change any other property of the statistical analysis. For particle-related noise it only makes the S-matrix slightly wider in one direction.
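The [1, 2, 1]/4 smoothing of $\Delta C_{i}$ can be sketched as follows (edge padding at the borders is an implementation choice made here):

```python
import numpy as np

def smooth_dC_x(dC):
    """[1, 2, 1]/4 smoothing of dC_i along x, as described in the text."""
    p = np.pad(dC, ((0, 0), (1, 1)), mode='edge')  # edge padding (assumed)
    return 0.25 * (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:])
```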

The relevant values of C0, C+, C− and ${{S}_{\Delta x,\Delta y}}$ (Δx, Δy = 0–4) need to be evaluated as sums over the interrogation window, and in the case of Gaussian weighted windows as weighted sums over an area at least twice as large as the nominal interrogation window size. In particular for 75% overlap this can lead to processing times equivalent to 2–3 PIV iterations. A faster implementation consists of first computing the Cs and ${{S}_{\Delta x,\Delta y}}$ as fields over the whole image, one value per pixel. These fields are then spatially smoothed, equivalent to the weighted summing process, by a fast recursive 2D Gaussian filter (Lukin 2007) with a filter length equal to the effective interrogation window size, followed by multiplication by the filter length squared (the difference between smoothing and summing). Then at each vector location the local values of C0, C+, C− and ${{S}_{\Delta x,\Delta y}}$ are taken to compute ${{\sigma}_{u}}$. This reduces the processing time to less than one PIV iteration. Further implementation on a GPU has reduced the processing time by another order of magnitude. This scheme has also been successfully tested for PIV processing with square non-weighted windows, where due to the multi-pass PIV predictor–corrector scheme the effective filter shape is also closer to a Gaussian curve than to a top-hat function.
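The field-wise shortcut can be sketched as follows (a minimal illustration, assuming a plain Gaussian filter from scipy in place of the recursive filter, and an assumed relation between filter length and Gaussian sigma):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def windowed_sum(field, window_size):
    """Replace per-window summation by smoothing a per-pixel field and
    multiplying by the window area (the difference between smoothing and
    summing), mimicking the fast implementation described in the text."""
    sigma = window_size / 4.0   # assumed mapping of filter length to sigma
    return gaussian_filter(field, sigma) * window_size ** 2
```

For example, windowed_sum(I1 * I2w, 22)[yv, xv] would then play the role of C0 at a vector location (yv, xv) for an effective window size of 22 px.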

The above 1D derivation is done independently for the x- and y-components of the displacement field. Dependencies between x- and y-directions have not been observed yet for real PIV data, although, e.g. for strong astigmatism in oblique directions, one could expect some cross-terms. The discussion here is restricted to 2D displacements under various error sources.

3. Synthetic data

The procedure has been tested with synthetically generated images, varying parameters such as noise level, particle image size, seeding density and out-of-plane motion. The image size is 1000 × 1000 pixels. Default settings are a particle image size of 2.5 px, a seeding density of 0.1 particles per pixel (ppp), a peak particle intensity of 1000 counts, particles distributed in a Gaussian laser intensity profile, and a constant displacement of 0.6 pixel in the x-direction, 0.3 pixel in the y-direction and no out-of-plane motion. The particle image size is defined as 2σ with an intensity profile proportional to $\exp \left(-\left({{x}^{2}}+{{y}^{2}}\right)/2{{\sigma}^{2}}\right)$.
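A simplified generator for such images might look as follows (a sketch only: uniform light-sheet intensity, no out-of-plane motion and no noise; the parameters mirror the defaults above and the function name is hypothetical):

```python
import numpy as np

def synthetic_image(nx=256, ny=256, ppp=0.1, diameter=2.5, peak=1000.0,
                    shift=(0.0, 0.0), seed=1):
    """Render Gaussian particle images of size 2*sigma = diameter."""
    rng = np.random.default_rng(seed)
    n_part = int(ppp * nx * ny)
    xp = rng.uniform(0, nx, n_part) + shift[0]
    yp = rng.uniform(0, ny, n_part) + shift[1]
    sigma = diameter / 2.0
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    img = np.zeros((ny, nx))
    for px, py in zip(xp, yp):
        # render only a small patch around each particle for speed
        x0, x1 = int(max(0.0, px - 4 * sigma)), int(min(nx, px + 4 * sigma + 1))
        y0, y1 = int(max(0.0, py - 4 * sigma)), int(min(ny, py + 4 * sigma + 1))
        img[y0:y1, x0:x1] += peak * np.exp(
            -((x[y0:y1, x0:x1] - px) ** 2 + (y[y0:y1, x0:x1] - py) ** 2)
            / (2.0 * sigma ** 2))
    return img
```

Calling synthetic_image(seed=1) and synthetic_image(shift=(0.6, 0.3), seed=1) produces a matched pair with a constant displacement of 0.6/0.3 px, analogous to the default case above.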

Processing is done with LaVision DaVis 8.2 using 4 passes, 75% overlap, interrogation windows with a Gaussian weighting function, and nominal window sizes from 16 × 16 up to 64 × 64 pixels. Due to the intermediate vector regularization adjusting to shorter displacement wavelengths, for 32 × 32 windows the effective window size is e.g. 32/28/22 pixels for an overlap of 0%/50%/75%. The method used here to measure the effective spatial resolution is similar to Elsinga and Westerweel (2011).

Assuming a typical camera with a conversion rate of 4 e/count, a Gaussian background pixel noise of 16 e (4 counts) has been added to the images. In addition there is photon shot noise proportional to the square root of the number of photoelectrons, leading to noise of up to 16 counts for pixels with 1000 counts intensity.
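This noise model can be sketched in a few lines (a minimal illustration, not the image generator actually used; a Gaussian approximation of the shot noise is assumed):

```python
import numpy as np

def add_camera_noise(img_counts, gain_e_per_count=4.0, read_noise_e=16.0, seed=0):
    """Add background (read) noise and shot noise in the electron domain,
    then convert back to counts."""
    rng = np.random.default_rng(seed)
    electrons = img_counts * gain_e_per_count
    shot = rng.normal(0.0, np.sqrt(np.maximum(electrons, 1e-12)))  # ~sqrt(N_e)
    read = rng.normal(0.0, read_noise_e, img_counts.shape)
    return (electrons + shot + read) / gain_e_per_count
```

For a pixel at the peak intensity of 1000 counts this gives a shot noise of roughly 63 e, i.e. about 16 counts, consistent with the numbers above.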

In the following figures, the total uncertainty $\sigma =\sqrt{\sigma _{u}^{2}+\sigma _{v}^{2}}$ is combined into an average value by computing the root-mean-square (rms) of the uncertainties of all vectors (see appendix in Sciacchitano et al 2015). The total true error is computed as the rms of the differences of all vectors relative to the known displacement. It is usually dominated by the random part; a significant bias is only encountered for small particle image sizes ≤1 px ('peak locking'). For the range of tested parameters no outliers have been observed and no post-processing has been done.
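The aggregation used for the plots can be written explicitly (a small sketch; array names are placeholders):

```python
import numpy as np

def rms_uncertainty(sigma_u, sigma_v):
    """Rms over all vectors of the total uncertainty sqrt(sigma_u^2 + sigma_v^2)."""
    return np.sqrt(np.mean(sigma_u ** 2 + sigma_v ** 2))

def rms_true_error(u, v, u_true, v_true):
    """Rms of the differences to the known displacement field."""
    return np.sqrt(np.mean((u - u_true) ** 2 + (v - v_true) ** 2))
```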

Varying only the background pixel noise first, it is shown that the uncertainty can be accurately estimated for window sizes of 16 × 16 to 64 × 64 over a wide range of noise levels, given in percent relative to the particle peak intensity of 1000 counts (figure 6). Only for very large noise amplitudes do the uncertainties level off at about 0.3–0.4 px, as expected from the limitation of equation (9). The relevant error levels to be considered in PIV are typically 0.02 px to 0.2–0.3 px: they are rarely lower, due to all error sources combined, and if they are larger the correlation peak is less well defined with possibly many outliers, and one should consider improving the experimental setup or using larger interrogation windows.

Figure 6. Error as a function of Gaussian pixel noise.

The effect of varying the out-of-plane motion from 0 to 30% of the light sheet thickness is shown in figure 7 for particle image sizes of 1.5 px (left) and 2.5 px (right). The uncertainties are correctly calculated for window sizes of 32 × 32 and 64 × 64, but are again slightly underestimated for 24 × 24 and by 20% for 16 × 16 windows for a particle size of 2.5 px. The error in the case of out-of-plane motion is dominated by (possibly very few) overlapping particles which change their intensity from e.g. bright–weak to weak–bright as they move together through the light sheet, as mentioned before. Thus a strong virtual in-plane displacement is introduced. The error is smaller for a particle size of 1.5 px because the probability of particles overlapping is significantly smaller.

Figure 7. Error as a function of out-of-plane motion. Particle image size = 1.5 px (left) or 2.5 px (right).

As noted by Nobach and Bodenschatz (2009) this error level is quite independent of the seeding density. For low seeding densities there are fewer particles overlapping, but they contribute significantly to the error level since there are few particles in total. Conversely, for high seeding densities, almost all particles overlap and produce errors, but the information content is much higher, which reduces the error correspondingly.

This is confirmed in figure 8 for an out-of-plane motion of 10%, where the seeding density is varied from very low to very dense with almost constant error level. The uncertainty quantification works reasonably well with slight overestimation for 64 × 64 windows and underestimation for 16 × 16.

Figure 8. Error as a function of seeding density for out-of-plane motion of 10%.

In general, out-of-plane motion is often the dominant error source in PIV processing (Nobach and Bodenschatz 2009). The same effect is also introduced by a misalignment of the profile of the first and second laser pulses or unstable laser intensity profiles from shot to shot.

Another error source related to particle size is given by the random Brownian motion of each particle in µPIV. Here the error and estimated uncertainty become lower with higher seeding density due to the averaging effect of more particles (figure 9). Again the agreement between true error and uncertainty is good except for 16 × 16 windows, where the uncertainty is about 20–30% too low.

Figure 9. Error as a function of seeding density for random Brownian particle motion of ±0.1 px.

Varying the particle image size for constant out-of-plane motion of 10% shows again a reasonable agreement between the average true error and the uncertainty (figure 10). For sizes of <1 pixel the random and bias errors increase drastically ('pixel locking') due to unrecoverable loss of information. Closer inspection reveals that only the random part of the error can be estimated by the uncertainty quantification, while the systematic bias remains undetected. Errors for larger particles are slightly underestimated, especially for 16 × 16 windows.

Figure 10. Error as a function of particle image size for out-of-plane motion of 10%.

Finally, the effect of in-plane gradients is investigated by generating synthetic images with a width of 400 and a height of 1000 pixels and varying the in-plane shear from 0 to 30%, i.e. up to −60/+60 px displacement in y on the left and right of the image, respectively. This amounts to up to 10 px change inside a 32 × 32 interrogation window, which can only be handled by PIV algorithms using image deformation techniques. As shown in figure 11, the true error for a particle image size of 2 px and no out-of-plane motion stays below 0.01 px, which validates the accuracy of the synthetic image generator, the PIV algorithm and the dewarping function under such extreme conditions. Even an inaccuracy in a vector position of only 0.1 px would already lead to a bias error of 0.03 px.

Figure 11. Error as a function of shear rate for different particle image sizes and out-of-plane motions.

The computed uncertainty is overestimated at about 0.03 px for 30% shear, and even more for a particle image size of 4 px. This is probably due to the image dewarping shearing round particles into ellipses rotated in different directions in the first and second frames. While the PIV algorithm correlating two rotated ellipses still computes an accurate mean displacement, the uncertainty method assumes that these intensity differences lead to additional noise, not knowing that in the end the symmetrical contributions still cancel out to an accurate displacement vector.

Often such strong velocity gradients are accompanied by additional larger error sources like out-of-plane motion as shown in figure 11 for 10% out-of-plane motion. Again a good agreement between true error and uncertainties is achieved.

More complicated is the subject of second-order gradients, i.e. velocity fluctuations at small wavelengths, whose amplitude is reduced by the PIV algorithm due to the finite spatial resolution (Schrijer and Scarano 2008). A detailed analysis of the response function of the PIV algorithm and of the computed uncertainties is beyond the scope of this paper. It includes the basic question of whether such truncation errors should really be considered as 'errors', with the unrealistic expectation that uncertainty quantification methods should be able to estimate them, or whether the uncertainty should rather be quoted relative to the known spatial and temporal response function of the PIV measurement system.

3.1. Limitation of the uncertainty quantification

First of all, it should be noted that this uncertainty quantification method is not able to detect outliers. It is always assumed that outliers have been removed beforehand by some filter, e.g. a median filter, and that the investigated correlation peak is the true one. The uncertainty method gives no indication of whether the correlation peak found by the PIV processing routine is the correct one. In general, outliers will have larger uncertainties. This can help to detect them, e.g. by a weighted median filter with a weight inversely proportional to the uncertainty.

It is also important to note that the uncertainty estimation field shows quite some variability, of the order of 5–25%, even for perfect synthetic data with a constant displacement field and the same parameters everywhere, as shown in figure 12. This is partly due to the variability of the random pattern with a greater or lesser number of particles or overlapping particles in each interrogation window. But additionally there is the intrinsic random character of the uncertainty estimation due to the random-walk process (equations (6)–(8)). This standard deviation of the standard deviation ('SoS') has a relative variability of about $\pm 1/\sqrt{2N}$ given N independent events, which for e.g. 10 particles per interrogation window is already ±22%.

Figure 12. Typical uncertainty field with an average of 0.142 px and standard deviation of 0.018 px.

Another limitation of the above uncertainty quantification method is the case of insufficient independent events contributing to the statistical analysis, for example in the case of too few or too large particles in 16 × 16 windows. So far, correction terms have not been calculated for such small-number statistics. This may again require counting particles, as in the 'image matching' method (Sciacchitano et al 2013), but it is difficult to count e.g. overlapping particles, as would be required for errors dominated by out-of-plane motion.

Finally, the method mainly estimates the random part of the uncertainty. Bias errors can in principle be quantified by equation (4), but it is assumed that the PIV algorithm has converged sufficiently so that the correlation peak is symmetrical and there should be no bias per se. Unknown biases, as in strong peak locking, stay undetected and may be better quantified by the uncertainty surface method (Timmins et al 2012). The magnitude of the bias error is usually smaller than the random error/uncertainty even for strong peak locking, but may become dominant in derived statistical quantities.

In practice, small bias terms in equation (4) have been observed even for larger particle image sizes, e.g. due to the PIV algorithm not converging to a fixed value but oscillating somewhat from iteration to iteration. These small bias terms, not being related to true bias error sources, are therefore considered to be more or less of a random nature and, in the latest implementation (DaVis 8.2.2), are added to the random uncertainty by $\sigma _{\text{total}}^{2}=\sigma _{\text{bias}}^{2}+\sigma _{\text{random}}^{2}$.

3.2. Preparation for uncertainty propagation

Subsequent uncertainty propagation into derived quantities like vorticity, divergence, turbulent kinetic energy or Reynolds stresses requires knowledge of the spatial and/or temporal autocorrelation coefficients of the true errors as well as the correlation between the u- and v-error terms, and w for Stereo-PIV. The details of the uncertainty propagation will be the subject of future investigations (see also Wilson and Smith 2013a, 2013b). Here only the spatial auto- and cross-correlation coefficients are briefly investigated. This is independent of the type of uncertainty method; it is only a function of the image data and the PIV processing scheme.

For the synthetic data used for figures 6–11 no significant coupling has been observed between the true errors δu and δv, $|C(\delta u,\delta v)|\leq 0.05$, with the exception of data points with almost ideal image quality and errors below 0.01 px (e.g. figure 6, low noise). Here correlation values of the order of −0.1 to −0.3 have been measured. Since such low errors are typically not encountered in real experiments, this has not been investigated further. It may relate to some floor noise level of the PIV algorithm together with intricacies in the processing scheme.

For Stereo-PIV, on the other hand, due to the 3C reconstruction of (u, v, w) from (u1, v1, u2, v2) there is always a coupling in particular between u and w when e.g. the cameras are aligned along the x-axis, since both u and w depend on u1 and u2. These coupling terms, which can be calculated directly from the calibration function or measured using synthetic data, need to be taken into account in particular for the Reynolds stress Ruw and other quantities.

The average spatial autocorrelation function of the true error has been computed for the data in figures 6–10, as shown in figure 13 for the autocorrelation of the δu-component in the x-direction (AF(true error)), processing 32 × 32 interrogation windows with 75% overlap and a vector spacing of 8 pixels. Beyond dx = 32 px the values stay close to zero. The same drop in correlation has been measured for the correlation of δu in the y-direction as well as for δv in the x- and y-directions. This is directly related to the effective spatial resolution of the PIV algorithm. The plotted solid curve indicates the expected autocorrelation function if the PIV algorithm were equivalent to a Gaussian-shaped linear filter with a spatial resolution Lsr = 2σ of 18 px. This deviates slightly from Lsr = 22 px, which has been used for the computation of the uncertainties. Interestingly, the measured autocorrelation of the fluctuation in the uncertainty values corresponds closely to Lsr = 22 px. Further investigation is needed to understand these relationships.
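The autocorrelation curve of figure 13 corresponds to the following computation on the vector grid (a sketch; the lag is in units of the 8 px vector spacing and the normalization is the sample variance):

```python
import numpy as np

def error_autocorrelation_x(du, max_lag=8):
    """Normalized spatial autocorrelation of the error component du along x."""
    d = du - du.mean()
    var = np.mean(d ** 2)
    af = [1.0]
    for lag in range(1, max_lag + 1):
        af.append(np.mean(d[:, lag:] * d[:, :-lag]) / var)
    return np.array(af)
```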

Figure 13. Spatial autocorrelation of the true error δu in the x-direction.

The effective spatial resolution has been determined by a method similar to Elsinga and Westerweel (2011) using synthetic images with a step function in the displacement field. It can also be derived from the response of the PIV algorithm to sine-wave displacement fields of different wavelengths comparing to the sinc-response of a simple top-hat filter (e.g. Schrijer and Scarano 2008). In any case, the autocorrelation values needed for further uncertainty propagation can be taken from the average measured values, which for all types of synthetic data used here are highly constant with a standard deviation of less than 0.02 to 0.03.

4. Conclusion

The local measurement uncertainty of a PIV displacement field is estimated based on a post-processing of the differences between the two images to be matched. The standard deviation of the pixel-wise contributions of intensity differences to the shape of the correlation function is computed in a statistical way. This is then related to the random uncertainty of the displacement vectors.

This uncertainty quantification method has been tested with synthetic data varying e.g. random Gaussian noise, particle image size and density, and in-plane and out-of-plane motion, and shows good agreement with the true random error in most cases, slightly underestimating it for 16 × 16 window sizes. The method is not able to estimate bias errors, e.g. from peak locking or other systematic error sources.

The accuracy of this approach is further investigated in dedicated experiments discussed elsewhere (Neal et al 2015, Sciacchitano et al 2015) as part of an international collaborative effort that includes comparisons to other PIV uncertainty quantification methods; these studies confirm the validity of the uncertainty method presented here.
