Self-consistent autocorrelation for finite-area bias correction in roughness measurement

Scan line levelling, a ubiquitous and often necessary step in AFM data processing, can cause a severe bias in measured roughness parameters such as the mean square roughness or the correlation length. Although bias estimates have been formulated, they aimed mainly at assessing the severity of the problem for individual measurements. Practical bias correction methods are still missing. This work exploits the observation that the bias of the autocorrelation function (ACF) can be expressed in terms of the function itself, permitting a self-consistent formulation. From this, two correction approaches are developed, both aiming at convenient formulae which can be easily applied in practice. The first modifies standard analytical models of the ACF to incorporate, in expectation, the bias and thus actually match the data the models are used to fit. The second inverts the relation between the true and estimated ACF to realise a model-free correction. Both are tested using simulated and experimental data and found effective, reducing the total error of roughness parameters severalfold in typical cases.


Introduction
Recently a couple of works drew attention to how roughness measurements by atomic force microscopy (AFM) are impacted by levelling/background subtraction [1,2], in particular line levelling, a ubiquitous and often necessary step in AFM data processing [3][4][5][6]. The classic results for the effect of mean value subtraction on statistical quantities [7][8][9] were generalised in a theoretical framework covering many common levelling methods. The mean square roughness σ, as well as many other quantities, becomes biased. For 1D data and 1D scan line levelling the bias of the estimate σ̂² can be written (in expectation) as in formula (1), where G is the true autocorrelation function (ACF) of the roughness and C a complicated function capturing the correlation/spectral properties of the specific levelling method. The second term of (1) expresses the measurement bias, which can often be severe [1,2,8]. Explicit expressions are known for several common levelling methods and autocorrelation function forms [1]. It should be noted that it is more correct to call G the autocovariance function and reserve the term autocorrelation for the function normalised to variance, but both are commonly used. The bias problem is not unique to AFM and profilometry data levelling. Similar problems occur for autocovariance function estimation from locally smoothed (detrended) data [10,11].
Ultimately, the bias and variance depend on the ratio α = T/L of the correlation length T and scan line length L. The bias further increases with the 'aggressivity' of the levelling procedure [1,8]. The ratio α must be kept small for reliable results. If scan line levelling and similar 1D corrections are applied to images, the error is proportional to α (not α² as one might assume for 2D image data), which can be difficult to keep sufficiently small. Even when scan lines are not levelled explicitly, the computation of the 1D ACF imposes the condition of zero mean value on image rows, corresponding to degree-0 polynomial levelling. The length L is, sadly, also often not set deliberately but instead to what 'feels right' [2]. This then translates to an α which is far too large, sometimes well beyond what instrumental constraints would force. Reported results are then unnecessarily skewed. Bias estimation procedures have been proposed, either simple and coarse [2] or more detailed [1], allowing one to check whether the bias is within reasonable bounds. The simplest (and coarsest) estimate of the relative bias of σ is −nα, where n is the number of terms in the scan line levelling polynomial, usually equal to its degree plus one.
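The coarse −nα estimate is trivial to apply in practice. A minimal sketch in Python, with hypothetical parameter values chosen here only for illustration:

```python
def relative_sigma_bias(T, L, degree):
    """Coarsest estimate of the relative bias of sigma after scan line
    levelling: -n * alpha, where n = degree + 1 is the number of terms
    of the levelling polynomial and alpha = T / L."""
    n = degree + 1
    return -n * (T / L)

# Hypothetical example: correlation length 20 px, scan line 500 px,
# linear (degree 1) levelling gives a relative bias of about -8 %.
bias = relative_sigma_bias(T=20.0, L=500.0, degree=1)
```

Such a check can flag measurements where α is too large before any correction is attempted.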
Unfortunately, all the estimates suffer from a chicken and egg problem. They require knowing the correlation length T, or even the form of the ACF, which are not known a priori. They should be the outputs of our measurement. Therefore, they must be estimated from experimental data, and these estimates are again biased. The experimental T (denoted T̂) is underestimated because the entire ACF is affected in a similar manner as σ², as illustrated in figure 1. Consequently, although such estimates can help with judging the bias of a particular measurement or guide towards a better choice of scanning parameters, they cannot be applied in a logically consistent manner. They are thus of limited use for actual correction of the biased results. Clearly, the problem is not yet satisfactorily solved. In order to deal with the bias pervading all the roughness parameters we need a self-consistent method which does not require a priori knowledge of the result. It should also be convenient, to have practical impact and allow wide adoption. Here we aim to provide this missing piece.
The overall plan is fairly straightforward. We begin from the observation that the value of the ACF at zero is σ², that is σ² = G(0). Formula (1) can thus also be written in terms of G(0) (formula (2)). The second (bias) term is linear in G. Suppose an expression of the same form could be obtained for G as a whole (we show later that it is indeed the case),

E[Ĝ(τ)] = G(τ) − (RG)(τ). (3)

Figure 1. The effect of scan line polynomial levelling using polynomials of various degrees on the estimated ACF. The beginning of the curve (small distances) is biased, whereas for larger distances the ACF estimate is not converged and exhibits oscillations [8].
Here R is a linear operator expressing the bias, now of the entire ACF. It again captures the properties of the specific levelling procedure. Expression (3) ties together self-consistently the true and estimated ACF. We can say that the ACF knows about its own bias. The relation can be formally inverted, yielding the unbiased G from the biased estimate Ĝ (in expectation),

G = (1 − R)⁻¹ E[Ĝ]. (4)

This is the adventurous option; it is not immediately obvious that such an inversion would be numerically feasible. The conservative option is to employ expression (3) directly. Assume, for instance, that the roughness is Gaussian. The true ACF then has the form

G_Gauss(τ) = σ² exp(−τ²/T²). (5)

Conventionally, we fit the experimental ACF Ĝ(τ) with the ideal model G_Gauss(τ), with σ and T as free parameters. But it is clearly the wrong model. It does not describe the experimental ACF, which never conforms to the theoretical form. The correct model is

G^bias_Gauss(τ) = G_Gauss(τ) − (R G_Gauss)(τ) (6)

and can be obtained by applying R to G_Gauss(τ).
The questions are what the form of the operator R is, whether R and (1 − R)⁻¹ can be reasonably evaluated, and how well the bias correction works in practice. They are answered in the following sections. The general expression for R is derived in section 2, which the reader can skip on a first reading. Section 3 provides elementary formulae and procedures for practical bias correction and section 4 tests their effectiveness using simulations and real AFM data.

Bias of ACF after levelling
The calculation of R follows the general scheme and notation introduced in Ref. 1 (sections 3.1 and 3.3), including treating the data as continuous functions. Since scan line levelling is the dominant source of bias even for image data [2], we consider the 1D case. Denote by φ_j the orthonormal basis functions used for background subtraction by linear fitting, with j distinguishing the functions. If φ_j are polynomials then j ∈ {0, 1, 2, …, n − 1} is their degree, but the index may not be a simple integer in other cases. Summations over j are, therefore, written below only formally.
Levelled data are computed by subtracting the projection onto the span of φ_j,

z̃(x) = z(x) − Σ_j a_j φ_j(x), (7)

with coefficients a_j equal to the dot products a_j = ∫₀^L φ_j(x) z(x) dx. The ACF is estimated as

Ĝ(τ) = 1/(L − τ) ∫₀^{L−τ} z̃(x) z̃(x + τ) dx. (9)

Substituting expressions (7) into (9) gives expression (10) for Ĝ(τ), which can be expanded into four terms corresponding to the combinations of z and φ. Taking expectations yields (12), where we utilised the linearity of expectation and a relation holding for any a and b. In a similar manner as in formula (1) for the bias of σ², one term (here E[Ĝ₁]) gives the unbiased G(τ) and the remaining terms combine to give the bias RG(τ).

Linear operator R
In principle, formulae (12) can already be considered a representation of the operator R. However, it is more natural (and useful) to write it explicitly as an integral operator with kernel R(τ, u),

(RG)(τ) = ∫₀^L R(τ, u) G(u) du. (14)

This means that in E[Ĝ₂(τ)] we must set u = x′ − x + τ and transform the domain of integration (which splits the integral into three). The symmetry of G was utilised to ensure that its argument is always positive and thus from the interval [0, L]. Functions c_j again express the correlation properties of φ_j, in analogy to Ref. 1. However, as the various integrals run over different subintervals of [0, L], they are more complicated here, as defined in (16). Finally, in order to transform the expression to the form (14), we replace the integration limits for u using the indicator function. The term in square brackets of the resulting expression is one piece of R(τ, u) in the form required by (14), the one corresponding to Ĝ₂. The second piece, corresponding to Ĝ₃, is obtained using the same steps. The last piece contains integrals combining φ_j and φ_k for j ≠ k that cannot be expressed using (16). With a suitable definition of these mixed correlation functions, the final expression for R(τ, u) can be written as (21).

Polynomial levelling
A polynomial basis φ_j has symmetries which can be used to simplify R(τ, u) somewhat. We first note that the expression is not unwieldy because we failed to express it more elegantly; the operator is inherently complicated, with a number of discontinuities in the derivative. Even for mean value subtraction, when the single basis function φ₀ is a constant, we get the function illustrated in figure 2. Although only function values for small τ and u are important and some of the discontinuities do not affect expansions for small τ and u, R(τ, u) is not totally differentiable at (0, 0). A small-τ approximation of the entire integral in (14) is possible only because the integral is a smoother function than R(τ, u) itself.
For general polynomials, we note that Legendre polynomials P_n(x) on the interval [−1, 1] are either even or odd, P_n(−x) = (−1)^n P_n(x). For the orthonormal basis functions on [0, L] this translates to the corresponding symmetry about L/2. From this we can easily see that terms with j + k odd can be omitted as they mutually cancel. And for j + k even, only terms with j < k need to be kept, multiplied by 2. Together with the relation c_{j,j,b} = c_{j,[0,b]}, permitting rewriting of terms with j = k, these rules eliminate most of the terms in the second summation in (21). In fact, for degree 1 no such term remains. A similar simplification is possible for other bases formed by even and odd functions φ_j, for instance sines and cosines, although the indexing by j may differ (and sines and cosines are more natural to handle in the frequency domain). However, the small-(τ, u) expansion for a specific basis is still tedious and better evaluated using symbolic algebra software.
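The parity relation can be verified numerically; a small sketch using NumPy's Legendre module (the rescaling of the basis to [0, L] is omitted here):

```python
import numpy as np
from numpy.polynomial import legendre

# Check P_n(-x) = (-1)^n P_n(x) for the first few Legendre polynomials,
# the property that eliminates terms with j + k odd.
x = np.linspace(-1.0, 1.0, 11)
parity_holds = all(
    np.allclose(legendre.legval(-x, [0.0] * n + [1.0]),
                (-1) ** n * legendre.legval(x, [0.0] * n + [1.0]))
    for n in range(6)
)
```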
Maxima [12] was used to obtain the practical formulae summarised in the following section. The expansions were terminated at the α² terms. The first reason is that preliminary numerical experiments showed that the leading α¹ term is not always sufficient and that without the second term there is a tendency to overcorrection. The general form of the σ² bias for polynomial levelling contains only even-power terms after α² (equation (27) in Ref. 1). Therefore, there is no third order term in the expansion and higher powers are negligible. Finally, the low smoothness of R at zero means that more accurate expansions would not, in general, be Taylor-like and would have to include more complicated ACF-specific terms. For this reason it is advantageous to express analytical models of the ACF in terms of α = T/L and s = τ/T, as this makes them smoother functions. In model-free inversion there is no T. Therefore, the formulation has to be done in terms of t = τ/L instead of s.
For a polynomial with n terms (degree n − 1), the expansion up to second order in α is given by (26), with the auxiliary quantities defined in (27) and (28). The expressions in the following section are obtained by evaluating (26) for particular G.

Corrected models
The (biased) discrete estimate of the ACF of levelled data is

Ĝ_k = 1/(N − k) Σ_{m=0}^{N−1−k} z_m z_{m+k}, (29)

where τ = k∆x if ∆x is the sampling step. It is fitted by an ACF model function. Simple models have only two free parameters, σ and T. The classic Gaussian ACF model (5) and the analogous exponential model

G_exp(τ) = σ² exp(−τ/T) (30)

are replaced with the leading terms of (G − RG)(τ) expanded for small α and τ. In particular, the Gaussian model is replaced with (31) and the exponential model with (32), where s = τ/T, α = T/L and erf denotes the error function (the antiderivative of the Gaussian). If evaluation of special functions is not possible or desirable, erf can be replaced, for instance, by a Padé-style approximation, as it only occurs in the second order term. The intermediate Gaussian–exponential ACF model [1,8,13] is replaced analogously, with Γ denoting the gamma function and γ the lower incomplete gamma function.
The superscript bias indicates that the models are bias-corrected, i.e. they take into account the bias of the data (29) they are used to fit. Models (31) and (32) should be fitted from zero to approximately the first zero crossing, i.e. up to the first k for which Ĝ_k < 0. The biased models do not have any additional free parameters. Nevertheless, they contain two additional inputs, the profile or scan line length L and the number of terms n of the line levelling polynomial, which is one plus its degree. The full profile length must be entered as L, not the length of the ACF data, which are often cut to a shorter interval of τ.
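The fitting itself is ordinary non-linear least squares. A sketch in Python using SciPy, with the plain Gaussian model (5) as a stand-in; the bias-corrected models (31)/(32), whose explicit formulae are not reproduced here, would be passed as `model` instead:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_acf(tau, sigma, T):
    # Classic Gaussian ACF model (5): G(tau) = sigma^2 exp(-tau^2 / T^2).
    return sigma**2 * np.exp(-(tau / T)**2)

def fit_acf(tau, G, model=gaussian_acf):
    """Fit an ACF model to discrete ACF data, using only the points up
    to the first zero crossing, as recommended in the text."""
    neg = np.flatnonzero(G < 0)
    cut = neg[0] if neg.size else len(G)      # first k with G_k < 0
    p0 = (np.sqrt(max(G[0], 1e-30)), tau[cut - 1] / 3)
    popt, _ = curve_fit(model, tau[:cut], G[:cut], p0=p0)
    return popt                               # (sigma, T)
```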

Model-free inversion
For an unknown, but quickly decaying ACF, formulae (26)–(28) can be evaluated using the discrete values of the estimated Ĝ_k, leading to a set of equations of the form (35), where G^c_m with m = 0, 1, 2, …, K − 1 are the corrected ACF values and A is the matrix built from the expansion coefficients. Although (35) can be read as expressing the measured Ĝ_k using the true ACF G^c_m, we interpret it as a set of K linear equations for the corrected ACF G^c_m, with A being the matrix of the system. The number K is the cut-off after which the function is assumed to be negligible or the data not usable, i.e. again around the first zero crossing. The symbol χ(j < m) is 1 when j < m and 0 otherwise.
Matrix A is the sum of four simple matrices: the identity matrix, two rank-1 matrices and a lower triangular matrix (in this order). The most efficient solution may be to first solve a system containing only the first and last terms, since the corresponding matrix A′ is lower triangular and the equations can therefore be solved by back substitution. The Sherman–Morrison or Woodbury formula [14,15] is then used to perform low-rank updates of the solution to include the two rank-1 terms. However, the numerical stability of the update formulae is not well understood. Furthermore, the full system is well-conditioned and only moderately sized. Therefore, it can be easily solved using any standard linear algebra routine.
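Since the exact entries of A follow from the expansion coefficients, the sketch below only illustrates the matrix structure described above and the recommended direct solution; the numerical values of the rank-1 and triangular pieces are made up for the example:

```python
import numpy as np

def correct_acf(G_hat, A):
    """Model-free correction: solve A @ G_c = G_hat for the corrected
    ACF.  The system is small and well-conditioned, so a direct dense
    solver is entirely adequate; Sherman-Morrison updates are only an
    optional optimisation."""
    return np.linalg.solve(A, G_hat)

# Hypothetical matrix with the structure from the text: identity,
# two rank-1 terms and a strictly lower triangular part.
K = 6
u, v = np.full(K, 0.01), np.ones(K)
A = (np.eye(K) + np.outer(u, v) + np.outer(v, u)
     + np.tril(np.full((K, K), 0.005), k=-1))
G_hat = np.linspace(1.0, 0.1, K)
G_c = correct_acf(G_hat, A)
```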

Simulated data-Gaussian ACF
We first compare the performance of the standard and biased Gaussian roughness models (5) and (31) using simulated data. Synthetic rough Gaussian surfaces were generated using the spectral method with T = 20 px. The correlation length is in the typical range for real AFM images, regardless of the physical dimensions of the scanned area. The mean square roughness σ was set to 1 as it is only a scaling parameter. The image size varied from 100 px to 2000 px, corresponding to α from 0.01 to 0.2 (in the reverse order). The discrete ACF (29) was evaluated using the standard fast Fourier transform method, after levelling image rows using polynomials with degrees from 0 to 2. The polynomial levelling was, of course, not actually necessary here because the simulated data were ideal and had neither tilt nor bow. It simulated the effect of preprocessing that would be applied to measured data. Tilt or bow could be added beforehand, but it would be pointless; the levelling would subtract them again, together with a part of the roughness, which is the effect we are studying. The Levenberg–Marquardt algorithm was used for the non-linear least squares fitting to obtain σ and T. Both models were fitted to data up to the first zero crossing. The entire procedure was repeated with randomly generated Gaussian surfaces hundreds of times (with more repetitions for smaller images, for which the variances are larger). The means and standard deviations are plotted in figure 3.
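The 1D analogue of this pipeline can be sketched compactly. The generator below uses Gaussian filtering of white noise instead of the spectral method (the two are statistically equivalent for a Gaussian ACF); the levelling and FFT-based ACF steps follow the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_rough_profile(N, T, sigma=1.0):
    """Profile with Gaussian ACF G(tau) = sigma^2 exp(-tau^2 / T^2)
    (unit sampling step): white noise filtered with a Gaussian kernel
    of standard deviation T / 2."""
    pad = int(5 * T)
    noise = rng.standard_normal(N + 2 * pad)
    x = np.arange(-pad, pad + 1)
    kernel = np.exp(-x**2 / (2 * (T / 2)**2))
    z = np.convolve(noise, kernel, mode="valid")
    return sigma * z / np.sqrt(np.sum(kernel**2))

def level(z, degree):
    """Scan line levelling: subtract a least-squares polynomial."""
    x = np.arange(len(z))
    coeffs = np.polynomial.polynomial.polyfit(x, z, degree)
    return z - np.polynomial.polynomial.polyval(x, coeffs)

def acf(z):
    """Discrete ACF estimate (29) with 1/(N - k) normalisation,
    computed via zero-padded FFT."""
    N = len(z)
    f = np.fft.rfft(z, 2 * N)
    g = np.fft.irfft(f * np.conj(f))[:N]
    return g / (N - np.arange(N))
```

For a profile much longer than T, the resulting ACF estimate stays close to the theoretical Gaussian form; shrinking N towards a few T reproduces the bias studied here.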
The biased model (31) clearly succeeded at bias reduction. For both parameters and almost all image sizes the bias becomes so small that it is no longer an issue. The only exception is very small images which are only several correlation lengths large (α ≳ 1/10). Although the bias usually still decreases, it is at the cost of considerably increased variance. Too much roughness information is missing in such small areas. Using them for roughness evaluation is simply wrong and the correction cannot change that.
The correction generally trades bias for variance, i.e. the parameters have larger variances than for the standard model. For reasonable T/L the trade-off is advantageous. The total error (variance + bias²)^{1/2} decreases, as illustrated in the bottom row of figure 3. The improvement is more marked for σ, where it can be an order of magnitude, whereas for T it ranges from about 2× to 5×. The improvement is larger for higher polynomial degrees. This is because the bias is larger in absolute value, but of the same functional form. Hence, the same correction is able to deal with a larger bias.
Full circles in figure 3 correspond to the worst case scenario of a single-image roughness measurement. Multiple scans reduce the variance, the dominant contribution for the improved model, but do not help with the bias, the dominant contribution for the standard one. This is illustrated in the plot of total error reduction for five-image evaluation (open circles).

Simulated data-inversion
For model-free correction (inversion), random pyramidal surfaces with an unknown ACF were generated using the Gwyddion [16] Objects function, which generates surfaces by sequential 'extrusion' [17]. The pyramids were randomly oriented and the pattern was large-scale isotropic. The generated images were 8000 × 8000 pixels, corresponding to approximately 700 correlation lengths (α ≈ 0.0015). A small (512 × 512) part of one such image is shown in figure 4. Smaller images of various sizes were again cut from the large base image and used to estimate the ACF.
The corrected ACF was computed by cutting Ĝ_k slightly beyond the first zero crossing (10 % farther) and solving the linear system (35) as described in section 3.2. The roughness parameters σ and T were again evaluated from both the biased and the corrected ACF. In particular, σ was calculated from the relation σ² = G₀ and T as the distance at which the ACF first falls to G₀/e (e being Euler's number). The 1/e ≈ 0.368 threshold is consistent with the analytic models (5) and (30), although it should be noted that roughness measurement standards often set the threshold differently, 0.2 being a common choice [18].
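This parameter extraction is a few lines of code. In the sketch below, the linear interpolation between samples at the 1/e threshold is a small refinement not prescribed by the text:

```python
import numpy as np

def roughness_parameters(G, dx=1.0):
    """sigma from sigma^2 = G_0 and T as the first distance at which
    the ACF falls to G_0 / e.  Assumes the ACF actually drops below
    the threshold within the supplied data."""
    sigma = np.sqrt(G[0])
    thr = G[0] / np.e
    k = int(np.argmax(G < thr))       # first sample below the threshold
    # Linear interpolation between samples k - 1 and k.
    frac = (G[k - 1] - thr) / (G[k - 1] - G[k])
    return sigma, (k - 1 + frac) * dx
```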
The comparison also requires true values of σ and T. They were obtained using the angularly averaged 2D ACF, averaged over all generated images. The data were artificial and did not contain any tilt, bow, sample bending or other type of background. Therefore, the only preprocessing necessary before the computation of the 2D ACF was the subtraction of the mean value from the entire image. The relative bias introduced by this operation is of the order of α² [1,8], i.e. < 10⁻⁵, and thus negligible.
The results are plotted in figure 5. The overall trends are similar to those for the modified Gaussian model fitting. Conclusions concerning polynomial degree and multi-image analysis remain unchanged. The dependency on image size (or α) is flatter, especially for σ. Furthermore, T is not improved at all for tiny images and degree 0. Unlike for the explicit model, the correction in fact decreases the accuracy in this case. It must be, however, emphasised that T/L ratios around 0.1 or larger are not recommended, whether with correction or without.
We also tested how the correction depends on the ACF cut-off point by choosing the interval from 10% shorter to 40% longer than the first zero crossing. The effect can be assessed using the accuracy of σ and T or the differences between the corrected and true ACF curves. All the dependencies are generally quite flat and often without any clear trend. This is a reassuring result because it means the correction is not sensitive to the precise location of the cut-off point. As expected, for very tiny images (large α) shortening the interval improves the accuracy somewhat. For large images (small α) the trend was sometimes slightly opposite. Overall, however, cutting at the first zero crossing or moderately beyond it appeared to work well.

Rough thin film-inversion
A test with a real rough surface would ideally be done with a sample whose ACF is precisely known. However, even standard rough samples do not have the ACF specified. Furthermore, the objective is to verify that the bias caused by the limited area can be corrected, i.e. that the resulting ACF is close to the ACF which would be obtained by measuring a very large (or infinite) area. The same approach as in the previous section can thus be used. In fact, comparing measurements on small and huge areas allows us to study the effect in isolation, as opposed to comparison with a reference ACF, where any observed difference could have a variety of possible causes.
An SrO thin film, prepared by atomic layer deposition, with a large-scale uniformly and isotropically rough upper surface, was chosen for the demonstration (see figure 4). The texture is formed by nanocrystals and is clearly non-Gaussian. Images were acquired using a Bruker Dimension Icon atomic force microscope in ScanAsyst mode with a standard ScanAsyst-air probe and a scan rate of 0.2 Hz. In order to follow the 2D ACF route, a large image without scan line artefacts is necessary. The absence of scan line artefacts means 2D polynomial levelling is sufficient, leaving only a bias proportional to α² (or higher powers). A scan of area 12 × 12 µm² with pixel resolution of 3072 × 3072 was selected for the evaluation. The correlation length to scan size ratio was estimated as α ≈ 0.0037, meaning the relative bias following from background subtraction was < 10⁻³. The long scanning time resulted in drift, which was estimated from the acquired image using the Gwyddion Compensate drift function. Its primary effect on the ACF is a slight smearing along the abscissa as distances in the xy plane are distorted, in particular along the slow scanning axis. The relative changes were estimated to be below 1.5 × 10⁻³ and thus negligible. The resulting ACF is plotted in figure 6 (in each subplot) and separately also in figure 7. The large image was then cut into smaller images of various sizes and processed as above, assuming that subimages are reasonable approximations of measurements on smaller areas. The uncorrected (red) and corrected (green) ACF computed for each subimage are plotted in figure 6 for three selected sizes and all three polynomial degrees 0–2. The corrected ACF curves were extrapolated beyond the cut-off points by simply subtracting the last computed correction from all further data (cyan).
The correction is clearly effective. The green (corrected) curves, although spread slightly more than the red (uncorrected) ones, are centred on the best estimate ACF. Deviations are noticeable only for the highest degree and at the far ends of the curves, where there is a tendency to overcorrection.

Discussion
We first remark on the normalisation factor in (29), which is sometimes taken to be 1/N instead of 1/(N − k) because of positive definiteness and/or variance [7,19,20]. It corresponds to dividing the integrals in section 2 by L instead of L − τ. However, the estimator with the N − k denominator is unbiased, or at least it would be without background subtraction. Furthermore, a constant denominator N does not generalise to irregular regions and other cases where a varying amount of data is available for different distances τ [16]. Therefore, in this context N − k is the appropriate choice.
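The difference between the two conventions is easy to see on discrete data; a minimal comparison of both estimators:

```python
import numpy as np

def acf_unbiased(z):
    """ACF estimate with the 1/(N - k) denominator, as in (29):
    unbiased (absent background subtraction), but not positive
    semi-definite."""
    N = len(z)
    return np.correlate(z, z, mode="full")[N - 1:] / (N - np.arange(N))

def acf_constant_norm(z):
    """ACF estimate with the constant 1/N denominator: damped towards
    zero at large lags, but positive semi-definite."""
    N = len(z)
    return np.correlate(z, z, mode="full")[N - 1:] / N
```

The two differ by the factor (N − k)/N at lag k, which is exactly the damping of large lags introduced by the constant-denominator variant.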

Interpretation of results
Figure 6 almost looks too good to be true. One has to be careful with its interpretation. Everything was computed from the same large base image. The clustering of the green curves around the best estimate shows that we removed the bias tied to smaller scan areas. However, they do not necessarily cluster around the true ACF. In the examples with synthetic Gaussian and pyramidal data, the surfaces were uniform and could be made infinite for all practical purposes. But for real rough surfaces, the issues of uniformity, representativeness and the statistical character of roughness are much more tangled. It should also be noted that roughness measurement is affected by tip sharpness and probe-sample interaction in general [21][22][23][24], sampling step [22,25], calibration, scanning speed and feedback loop settings [23], defects, and other effects not analysed here, as we attempt to isolate those related to the finite area. The measurement of a neighbouring region (somewhat smaller, 8 × 8 µm²) results in a slightly different ACF, as illustrated in figure 7. Subimages taken from this scan yield curves centred on its own best-estimate ACF. The bias estimates for the two images are approximately 3 × 10⁻⁴ and 6 × 10⁻⁴. The relative standard deviations of Ĝ(0) = σ̂², which are proportional to α [8], were estimated as 2 × 10⁻³ and 3 × 10⁻³. They are all too small to explain the difference of almost 6 % between the two curves. The correlation length does not capture the scale at which real textures can be considered uniform. Although surface heights become uncorrelated for points considerably farther apart than T, the texture itself varies along the surface. The characteristic scale of these variations can be much longer than T, even if the texture is ultimately large-scale uniform. Scanning such large areas is seldom feasible and we have to rely on multiple independent scans.

Comparison with spectral density
Two other functions are commonly used to characterise the spatial properties of roughness, the height-height correlation function [8] (sometimes also called the structure function) and the power spectral density function (PSDF). The height-height correlation function H is directly related to the ACF by H(τ) + 2G(τ) = 2σ², so the results can be translated. The PSDF is the Fourier transform of the ACF and is probably the most commonly utilised function [22,26]. The effect of levelling is a suppression of low-frequency components [2].
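The translation between the two functions is a one-liner; a sketch assuming discrete ACF values with G[0] = σ²:

```python
import numpy as np

def hhcf_from_acf(G):
    """Height-height correlation function from the ACF using
    H(tau) = 2 * (sigma^2 - G(tau)) with sigma^2 = G(0)."""
    return 2.0 * (G[0] - G)
```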
The low-frequency components can be excluded from fitting, similarly to how the ends of the spectral range are avoided in PSDF stitching [19,22,27–29]. However, the peak around zero frequency is where almost all the spectral weight lies. It is also the region least affected by noise, discontinuities and smoothing effects such as tip convolution [22,23,26]. It is often critical in roughness analysis. However, it is also the region worst affected by levelling, and possibly in a non-trivial manner. In the case of the ACF, the worst affected region is far from the origin and is never used for analysis. Around the origin, levelling manifests itself as the subtraction of a slowly varying function. An approach similar to the one developed here can perhaps be formulated also for the PSDF; Ref. 1, for instance, gives hints at a spectral reinterpretation. However, what would be the equivalent of model-free correction for the PSDF is not clear.

Zero crossing
The model-free correction procedure relies on the true ACF monotonically and quickly decaying to zero. In particular, sums of discrete ACF values must give good approximations of the integrals (27) (or similar integrals, but up to L instead of infinity). Even though this is true for many types of real roughness, at least approximately, some violate this condition. For instance, if the surface is locally periodic/corrugated, the true ACF crosses zero, possibly many times. It may be possible to modify the correction procedure for this case, but likely at the cost of reliability. And although the approach of fitting Ĝ_k with a biased model remains intact in principle, the first zero crossing may no longer be a good choice of fitting cut-off.
All the procedures utilise the zero crossing for choosing the cut-off in some manner. Must there always be a zero crossing? Splitting the sum of z_m z_k over all m and k into triangular parts and correcting for the double-counted diagonal gives

0 = (Σ_m z_m)² = Σ_m z_m² + 2 Σ_{m<k} z_m z_k = N Ĝ₀ + 2 Σ_{k=1}^{N−1} (N − k) Ĝ_k.

The left hand side is zero since the mean value of z is zero. Therefore, the sum of (N − k)Ĝ_k over positive k equals −N Ĝ₀/2 < 0, and Ĝ_k must take both signs. As for the crossing location, the leading term approximation of the analytical models or (35) is a small constant (proportional to α). If the ACF decays quickly, the first zero crossing occurs when the true ACF is equal to this constant. And this is also when S₀[G] and S₁[G] can be assumed to give good approximations of the corresponding integrals. For biased model fitting, the heuristic zero-crossing rule is further supported by the following: • The rule is simple and easy to implement both manually and in code.
• Fitting only the data of the ACF apex at the origin is an ill-conditioned problem. The optimal bias-variance trade-off invariably includes the side slopes in the fit. Shortening the interval too much cannot be beneficial.
• Although fitting beyond the zero crossing may be beneficial, the ACF is often not converged in this region and it is difficult to tell where the useful data end.
• Numerical simulations support the zero crossing as a good choice (section 4.2).
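The existence argument above is also easy to check numerically: for any zero-mean data, the weighted sum of positive-lag ACF values must equal −N Ĝ₀/2, forcing the estimate to take both signs. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Any data with the mean removed (degree-0 levelling).
z = rng.standard_normal(256)
z -= z.mean()
N = len(z)

raw = np.correlate(z, z, mode="full")[N - 1:]   # (N - k) * G_hat_k
G = raw / (N - np.arange(N))

# (sum z)^2 = N*G_0 + 2 * sum_{k>=1} (N - k)*G_k must vanish, so the
# positive-lag terms sum to -N*G_0/2 and G_hat takes both signs.
identity = N * G[0] + 2.0 * raw[1:].sum()
```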
Choosing the cut-off based on the zero crossing of each data set contributes to the increased variance of bias-corrected results. When multiple ACF curves are evaluated it may be preferable to choose a single cut-off based on all the data and use it for all curves.

Conclusion
The goal of this work was to correct the finite-area bias in autocorrelation function (ACF) evaluation in roughness measurements, which includes the correction of parameters like the mean square roughness and correlation length. Starting from the observation that it should be possible to express the bias of the measured ACF in terms of the ACF itself, we developed a self-consistent formulation and used it to propose two types of bias correction. One was a modification of standard analytical ACF models to take into account the bias of the data they should fit. The other was a model-free correction procedure based on inverting the self-consistent relations by solving a set of linear equations. Their effectiveness was tested using simulated and measured data. The two corrections behave similarly. They appear most helpful in the cases where they are also needed the most, that is, for the common moderate scan line lengths, as data for too short scan lines are not salvageable and for very long profiles the bias may already be small. Furthermore, they are more beneficial for higher levelling polynomial degrees, for which the bias is worse. Both also trade bias for variance and thus the accuracy improvement is larger when multiple scans are evaluated. Modified (biased) analytical ACF models do not require any fundamental changes to the evaluation and can even be used to re-analyse existing raw ACF data. Based on the numerical results, for the measurement of Gaussian roughness fitting the experimental ACF with a modified model has substantial advantages and few downsides, and can probably be recommended quite universally. The model-free correction (inversion) procedure proposed for an ACF of unknown form is computationally efficient and worked surprisingly well in the selected test cases. A simple zero-crossing based criterion was proposed for choosing the subset of discrete ACF data to use in the inversion. However, an open question remains regarding the application of the procedure to ACFs of more complicated forms, as the simple criterion may then no longer be suitable. The second correction method should thus currently be considered more an interesting concept to explore in further work.

Figure 3 .
Figure 3. Comparison of fitting the biased Gaussian ACF model (31) with the standard one (5) (for a correlation length of 20 px). Error bars represent single-image standard deviations. Results for different polynomial degrees are slightly offset horizontally for visual clarity.

Figure 5 .
Figure 5. Comparison of roughness parameters σ and T obtained from uncorrected and model-free corrected ACF curves for a random pyramidal surface. Error bars represent single-image standard deviations. Results for different polynomial degrees are slightly offset horizontally for visual clarity.

Figure 6 .
Figure 6. Autocorrelation functions obtained by model-free correction for the rough SrO film surface, compared to the uncorrected ACF and the best-estimate ACF. Each curve corresponds to one subimage cut from the large base image.

Figure 7 .
Figure 7. Comparison of the best-estimate ACF obtained using independent scans of two different areas.