A stopping criterion to halt iterations in the Richardson-Lucy deconvolution of radiographic images

Radiographic images, like any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations needed to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative, within the data dispersion, reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.


Introduction
Experimentally acquired images are usually affected by agents which degrade their final quality. These spoiling agents fall into two main families: systematic and random. Random uncertainties can be reduced by cutting off the associated high-frequency contributions with a proper technique, such as low-pass filtering.
An agent of purely systematic character could in principle be eliminated if its features were known, which would then make it possible to retrieve the original image. In practice this never occurs, because the original matrices and their inverses are, or become, ill-conditioned, basically due to arithmetic truncation, thus precluding the solution of the necessary linear system of equations.
Like an ideal point source, an ideal flat surface source, i.e., a source emitting rays only perpendicularly to its surface, would produce no penumbra at all. Such a source would be the ultimate one for radiography, for, unlike the point source, it would cause no image amplification. Unfortunately, such a source does not exist in the real world. For neutron radiography using reactors, the usual approaches to reduce the penumbra are to increase the L/D ratio or to employ collimators. The first option is not always feasible due to space constraints, while the second yields a shadow of the collimator. Therefore, one has to deal with a flat surface source which also emits radiation in other directions, not necessarily with the same intensity as in the perpendicular one. Due to this possible anisotropy, the emission profile of every element of the source would not necessarily resemble a spherical calotte, but rather a bell. Borrowing a concept from X-ray diffraction, this profile can be described as a 3D rocking curve (RC), whose FWHM is a measure of the beam divergence. The cross-section of the interception of a vertical plane with the neutron beam would then define a virtual surface source exhibiting a certain divergence.
For a radiographic image acquired with a virtual surface source, such as the one available at the main port of a reactor, or with any extended source, the systematic error arises as an enlargement of the spatial resolution caused by the beam divergence, which produces penumbrae blurring the otherwise sharp edges. This divergence has been assessed by measuring the FWHM of the related rocking curve at the main port of the Argonauta reactor, where a value of 1° 16' has been found [1].
A proper way to specify this error is the Point Spread Function, or PSF for short. The FWHM of this bell-shaped surface represents the resolution w of the image acquisition system for a given geometrical arrangement. It is important to point out that, unlike the PSF, the RC does not change with the object-detector gap, for it is an intrinsic feature of the beam.
Once the PSF is known, the original image could theoretically be recovered, or at least improved, by applying an iterative unfolding procedure. However, the basic problem of when to stop the iterations remains open.
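As a concrete illustration of such an iterative unfolding, the following is a minimal one-dimensional Richardson-Lucy sketch in pure Python. It is not the paper's program, which is written in Fortran and works on 2D images; the function names, the flat initial guess, and the zero-padded convolution are illustrative choices.

```python
def convolve(signal, kernel):
    # "Same"-size linear correlation with zero padding; for the symmetric
    # kernels used here this coincides with convolution.
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * kernel[j]
        out.append(acc)
    return out


def richardson_lucy(observed, psf, iterations):
    # Classic multiplicative update: the estimate is repeatedly corrected
    # by the back-projected ratio between the observed data and the
    # re-blurred current estimate.
    estimate = [1.0] * len(observed)      # flat, non-negative initial guess
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [d / b if b > 1e-12 else 0.0
                 for d, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Deconvolving a blurred point source with the source's own PSF progressively concentrates the intensity back into a single pixel, which illustrates why the number of iterations must be chosen: the improvement per iteration shrinks as the process converges.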
Usually, a visual inspection, which is mostly regarded as the ultimate criterion, is required to decide whether the number of performed iterations is adequate. An excessive number not only wastes processing time but may also degrade the image improved in previous iterations.
Since the Richardson paper in 1972 [2], the first stopping criterion employed was the number of iterations. Besides the χ²-based tests used by Lucy and others [3][4][5], the Kolmogorov-Smirnov test and the Kuhn-Tucker conditions [6] have also been proposed later on. Another widely employed stopping criterion is the Mean Squared Error (MSE) [7]. More recently, Eichstädt et al. [8] proposed a technique based on the well-known observation that the first iterations improve the image substantially, whereas later iterations tend to fit the noise rather than the signal.
The variety of approaches embodied in these few examples points toward the difficulty of choosing an ultimate criterion, a constraint arising from the intrinsically asymptotic behavior of the iteration process. Indeed, when visual inspection is used as a tool, one cannot assert that some earlier or later iteration yields a poorer image than the chosen one.
In this work a novel procedure is proposed to estimate the best number of iterations, thus eliminating the cumbersomeness and processing time consumption of a pure visual inspection. No comparison is made with previous techniques, but only with visual inspection, which, in spite of its subjective character, cannot be denied as the most reliable one. The basic approach is to replace the comparison of images obtained after the iterations by a comparison of numbers, a much easier task for the human eye or for a computer program. These numbers arise from the overall difference G between the image histograms of two subsequent iterations.
As the iteration process proceeds, the 1st derivative of G with respect to the number of iterations, dG/dn, approaches zero until it reaches a region where the data dispersion overwhelms any further difference from zero. The program recognizes this region by checking for the point where the average of the last two derivatives is smaller than half the difference between them.
When this point is reached, the program halts or, if so instructed by the user, stores this information and continues running. The images processed around this point may then be visually analyzed, for the sake of cross-checking, in order to choose the best one. It is usually hard to recognize the feeble differences, if any, between neighboring images, and although the ultimate choice could, if desired, be made by the user, a lot of effort is saved if a program does the hard work.

Methodology
In order to accomplish the task proposed in this work, it was necessary to input an image and its spatial resolution, as required by the Richardson-Lucy procedure incorporated into the developed algorithm. For this purpose, a radiograph taken with the 2.23×10⁵ n·cm⁻²·s⁻¹ thermal neutron flux available at the main port of the Argonauta reactor at Instituto de Engenharia Nuclear - CNEN, Brazil, has been used as the original image. A neutron-sensitive imaging plate has been employed as detector and developed with a 0.050 mm resolution reader. A mechanical pocket chronometer, chosen for inspection because of its fine clockwork machinery, has been placed directly on the imaging plate and exposed for 4 minutes. After development in the imaging plate reader, the raw data are transferred to the free software ImageJ, which returns the pixel intensities and their coordinates. However, due to data compression, some pixel intensities and their coordinates are missing. A program developed for this purpose, DataProc, fills these voids with the average intensity of the neighboring pixels, weighted according to the inverse of their distances to the missing one. Once completed, the data set feeds the commercial software Origin8, which returns the data in txt format, as required by Deconvplot, a deconvolution program developed to embody a Richardson-Lucy algorithm. A data check can be done with DataProc, which, like Deconvplot, is capable of enhancing the dynamic range and plotting the images on a monitor. A summary of the whole procedure is depicted in figure 1.

Figure 1.
Prior to deconvolution the raw data arising from the Imaging Plate Reader is treated to match with the format used by the deconvolution program. For this purpose, a developed software DataProc is jointly used with ImageJ and Origin8.
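The void-filling step performed by DataProc can be sketched as follows. This is an illustrative Python reimplementation under assumed conventions (the actual DataProc is a separate program): missing pixels are marked as None, and only the known pixels of the surrounding 3×3 window contribute, weighted by the inverse of their distances.

```python
import math

def fill_missing(image):
    # Replace each missing pixel (None) by the average of its known
    # neighbours in a 3x3 window, weighted by the inverse of their
    # distances to the missing pixel.
    rows, cols = len(image), len(image[0])
    filled = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            if image[r][c] is not None:
                continue
            num = den = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= rr < rows and 0 <= cc < cols):
                        continue
                    v = image[rr][cc]
                    if v is not None:
                        w = 1.0 / math.hypot(dr, dc)   # inverse-distance weight
                        num += w * v
                        den += w
            filled[r][c] = num / den if den > 0.0 else 0.0
    return filled
```

Diagonal neighbours lie √2 pixels away and therefore receive a weight of 1/√2 relative to the four side neighbours, so the filled value leans toward the closest available intensities.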
Besides the divergence of the neutron beam, the spatial resolution depends upon the object-detector gap, as they both contribute to the penumbra that blurs the edges in the image. Since objects have a non-zero thickness, their different planes parallel to the detector exhibit different spatial resolutions.
Due to their fine granulometry, the contribution of imaging plates to the spatial resolution is ruled not by the plates themselves but by the spot size of the laser employed to read them. In this work, this contribution represents less than 10% of the total spatial resolution observed in the original image used as example.
Techniques have been developed to evaluate the spatial resolution [9][10], some of them based on a blind deconvolution approach which requires no information about the acquisition system other than the image itself. The spatial resolution of the radiograph used in this work had been determined earlier by another type of blind procedure [11] and is applied to carry out the deconvolution. Figure 2 shows an actual thermal neutron radiograph of the mechanical pocket chronometer and its related histogram. Three peaks are noticeable in the middle region of the histogram, corresponding to three regions of different gray tonality.
When the radiograph undergoes a deconvolution, the histogram changes, because an improved resolution is equivalent to an image with less blur. Since the area beneath the histogram should remain constant and equal to unity, within the statistical dispersion, some gray pixels constituting the original blur have their tonalities shifted toward brighter or darker regions, causing a concomitant improvement of the contrast.
This shifting process can be observed in figure 3, where the tails of the histogram after 10 iterations have been substantially raised with respect to those of the original image.

As the iterations proceed, the difference between two subsequent images becomes smaller, and so does the difference between their histograms. This expected asymptotic behavior of the histogram is the basis of the approach proposed in this work to halt the iteration process. For this purpose, two subsequent histograms are compared through a global difference G(n) between them, as follows:

G(n) = Σ_i |H_n(i) − H_{n−1}(i)|        (1)

where H_n(i) is the content of the i-th bin of the normalized histogram obtained after the n-th iteration. The summation in equation (1) aims at a global view of the absolute difference between two subsequent images, thus avoiding any bias toward a particular region. The use of absolute differences prevents deviations of opposite signs from canceling each other.

Figure 4 depicts the behavior of G(n) and its 1st derivative as the iterations progress. Although both graphs provide the same kind of information, the derivative is more convenient and suitable for a computer program to handle. Indeed, whereas the lower limit of G(n) is unknown, its asymptotic trend indicates that its 1st derivative should somewhere, within the data dispersion, reach zero.
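The global difference of equation (1) is straightforward to compute. The sketch below is an illustrative Python version under assumed conventions (pixel intensities normalized to [0, 1]; the paper's program is Fortran):

```python
def histogram(pixels, bins=256):
    # Normalised grey-level histogram of intensities in [0, 1];
    # the bin contents sum to unity.
    h = [0] * bins
    for p in pixels:
        h[min(int(p * bins), bins - 1)] += 1
    total = len(pixels)
    return [count / total for count in h]


def global_difference(h_prev, h_curr):
    # G(n): sum of the absolute bin-by-bin differences between the
    # histograms of two subsequent iterations.
    return sum(abs(a - b) for a, b in zip(h_prev, h_curr))
```

Because the histograms are normalized, G(n) is bounded above by 2 (two histograms with no overlapping bins), which is why the upper level of the plot is known in advance.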
Hence, it becomes easier for a program to choose the proper graphic scale, since the upper level is known in advance, and to plot G(n) as the subsequent iterations proceed.

Figure 4. The global difference G(n) exhibits a somewhat asymptotic trend (left). The 1st derivative (right) is a more suitable quantity for a computer program to analyze, because its maximum is known in advance.
In order to get a closer view of the asymptotic region of the graph, a zoom can be performed by rescaling the vertical axis. This is done by defining the first iteration to be plotted, a procedure that cuts off the points representing dG/dn for lower iterations, as shown in figure 4, where the lowest iteration plotted is 4.
Such a stretching of the vertical axis renders the data dispersion at the asymptotic region more visible. Due to the asymptotic behavior of dG/dn and its data dispersion, the curve will somewhere intercept or surpass the zero level. However, since a large dispersion would cause an early interception, resulting in an insufficient number of iterations, it is necessary to smooth the data before employing the zero-level interception as a criterion to stop the program. The data are smoothed by averaging the global difference over a moving window, as follows:

G̅(n) = [G(n−m+1) + … + G(n)] / m        (2)

where m is the number of iterations in the averaging window.

The above-mentioned interception criterion can then be properly applied by requiring that the average of the two last derivatives be greater (smaller in absolute value, for the derivatives are negative) than half of the difference between them:

[d(n−1) + d(n)] / 2 ≥ [d(n−1) − d(n)] / 2        (3)

where d(n) stands for dG̅/dn at the n-th iteration. This approach, which ensures that the dispersion is taken into account, is equivalent to requiring that the last derivative itself surpasses zero, d(n) ≥ 0, regardless of its previous value.

During the processing, the program continuously displays on the monitor a graph of dG/dn versus n, which is reproduced in figure 6 solely to show what the display looks like. After each iteration the deconvoluted image and its histogram are recorded for further analysis, as is the graph of dG/dn versus n, either when the maximal number of iterations previously specified by the user has been reached or when the program has been automatically interrupted by the embedded criterion.
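The smoothing and the interception test can be sketched together in a few lines of Python. The window size and function names are illustrative assumptions; the paper does not state how many points are averaged.

```python
def smooth(values, window=3):
    # Trailing moving average of the global difference; the window size
    # is an assumption, since the paper does not specify it.
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i - lo + 1))
    return out


def should_stop(g_smoothed):
    # Stop when the average of the two last derivatives of the smoothed
    # G is not smaller than half the difference between them, which is
    # algebraically equivalent to the last derivative reaching zero.
    if len(g_smoothed) < 3:
        return False
    d_prev = g_smoothed[-2] - g_smoothed[-3]
    d_last = g_smoothed[-1] - g_smoothed[-2]
    return (d_prev + d_last) / 2.0 >= (d_prev - d_last) / 2.0   # <=> d_last >= 0
```

Expanding the inequality confirms the equivalence claimed in the text: (d_prev + d_last)/2 ≥ (d_prev − d_last)/2 reduces to d_last ≥ 0, i.e. the smoothed derivative has stopped decreasing.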
The 896 × 847 pixel JPEG image used in this work has been deconvoluted with a previously determined point spread function having a FWHM of 8.5 pixels, equivalent to 0.54 mm. Further details can be found elsewhere [11]. The processing time required for each iteration depends upon the image size and its resolution; in this specific case it was circa 100 seconds with a 3.6 GHz processor.
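For illustration, a deconvolution kernel can be built from a measured FWHM as below, assuming a Gaussian-shaped PSF (the paper does not state the functional form used) and showing the 1-D analogue of the 2-D kernel a real image would need; the relation sigma = FWHM / (2√(2 ln 2)) ≈ FWHM / 2.355 holds for any Gaussian.

```python
import math

def gaussian_psf_1d(fwhm_px):
    # One-dimensional Gaussian kernel whose FWHM matches the measured
    # resolution: sigma = FWHM / (2*sqrt(2*ln 2)) ~ FWHM / 2.355.
    sigma = fwhm_px / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    half_width = int(math.ceil(3.0 * sigma))       # truncate at 3 sigma
    xs = range(-half_width, half_width + 1)
    kernel = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    norm = sum(kernel)
    return [k / norm for k in kernel]              # unit area
```

Normalizing the kernel to unit area is what keeps the total image intensity, and hence the histogram area, constant across iterations.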
Although the Richardson-Lucy procedure has been employed in this work, images arising from any iterative process can be analyzed with the same approach.

Figure 6.
Example of an actual output of the program Deconvplot during processing (left) and after its completion (right). Iterations can be stopped at any time by the user or when a specified criterion has been fulfilled. All images are automatically stored in sequentially numbered files.

Results
A set of deconvoluted neutron radiographs of the mechanical chronometer, including the original image and a conventional photograph in which the metallic case has been removed, is shown in figure 6. A fair correlation and consistency can be observed between the quality of the images and the values of dG/dn in the graph. Indeed, beyond 22 iterations, where dG/dn surpasses zero, it is hard to observe any substantial further improvement of the images.
One might notice a slight improvement in the images deconvoluted with 30 and 50 iterations, but some bright spots just beginning to appear in the earlier images also become better defined. The acceptance of these artifacts, which could not be forecast from the behavior of dG/dn, is a matter of trade-off between their presence and the overall image quality.

Conclusion
This work proposes a procedure to evaluate the best number of runs of an iterative process aimed at the improvement of images, thus eliminating the cumbersomeness and time consumption of visual inspection. Its basic idea is to replace the comparison of images by a comparison of numbers arising from the global difference G between subsequent image histograms. When the 1st derivative of G with respect to the iteration number, dG/dn, reaches zero within the data dispersion, the histograms and their related images are no longer changing substantially. Beyond this point further iterations would not produce any additional improvement, and the iterative process is interrupted. The developed technique incorporates a Richardson-Lucy algorithm into a Fortran program which plots the behavior of dG/dn on a monitor as the iterations proceed and stops them automatically when a criterion chosen by the user is reached. This criterion can be the maximum number of iterations or the point where dG/dn surpasses zero. The soundness and performance of the technique have been verified by treating the neutron radiograph of a pocket mechanical chronometer, with fair results. In this work the Richardson-Lucy algorithm has been employed as the unfolding procedure, but images improved by any iterative process can be analyzed with the same approach.