Correction of uneven illumination in microscopic images through robust brightness distribution estimation and deviation rectification

For diagnostic purposes, an automatic microscopic scanning system can scan and stitch multiple slide images together to produce a Whole Slide Image. This process provides a clear, high-resolution picture of the slide sample under the high-power objective lens of the microscope, enabling accurate diagnosis and analysis. However, uneven illumination affects every image acquired by a microscope, producing artifacts. In this paper, a novel retrospective approach based on robust brightness distribution estimation and deviation rectification is proposed. Least squares estimation and Cauchy estimation with a smooth regularization term are used to obtain an accurate estimate of the background illumination distribution. A deviation map is then derived from deviation functions along the two image axes and used for illumination correction. The method leverages the statistical properties of image sequences to enhance stability and robustness against biological image artifacts. Experimental validation shows that this approach successfully eliminates the visible edge artifacts that commonly appear between neighboring tiles and exhibits superior performance compared to other methods.


Introduction
Optical imaging is an essential tool in biomedical research. Currently, automatic microscopic scanning systems are widely utilized for their ability to quickly acquire a series of images and stitch them together to produce a Whole Slide Image (WSI) [1]. This technology enables researchers to examine the entire sample at a high magnification level, providing a comprehensive view of the tissue being analyzed. However, no optical system is ideal. Due to misalignment of optical devices, dust, and uneven light sources, every image obtained by a microscope suffers from uneven illumination [2]. When image processing software is used for quantitative analysis of the acquired images, especially when accurate measurement is required, illumination changes may introduce a certain degree of noise, confounding the object of the experiment [3]. The physical process of microscopic imaging can be approximated as a linear relation applied to the true image. The correspondence between the measured image I(x, y) and the undamaged true image U(x, y) at pixel position (x, y) is as follows [4]:

I(x, y) = S(x, y) · U(x, y) + D(x, y)  (1)

where the multiplicative term S(x, y) represents the change of effective illumination across the image, called the flat field, and the additive term D(x, y) is called the dark field. Currently, the widely investigated correction methods follow this physical model and reverse the image formation process of microscopic imaging; that is, they attempt to restore the true image from the degraded microscopic image according to the above formula.
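As a rough illustration (not the authors' code), the forward imaging model and its retrospective inversion can be sketched in Python. The function names and the synthetic vignetting-style flat field below are our own assumptions for demonstration:

```python
import numpy as np

def apply_imaging_model(true_img, flat_field, dark_field):
    """Forward model of equation (1): measured = flat * true + dark, per pixel."""
    return flat_field * true_img + dark_field

def invert_imaging_model(measured, flat_field, dark_field, eps=1e-8):
    """Retrospective correction: recover the true image from the measured one."""
    return (measured - dark_field) / (flat_field + eps)

# Round trip on synthetic data: a vignetting-like flat field that falls off
# radially from the image centre, as microscope illumination typically does.
true_img = np.full((4, 4), 100.0)
yy, xx = np.mgrid[0:4, 0:4]
flat = 1.0 - 0.05 * ((yy - 1.5) ** 2 + (xx - 1.5) ** 2) / 4.5
dark = np.full((4, 4), 2.0)
measured = apply_imaging_model(true_img, flat, dark)
recovered = invert_imaging_model(measured, flat, dark)
```

In practice S and D are unknown, which is exactly why the estimation methods discussed next are needed.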
Although this correction process seems simple at first glance, S(x, y) and D(x, y) cannot be obtained perfectly. As a result, numerous techniques have been developed to estimate the flat-field and dark-field components. Depending on whether a special reference image must be acquired for calibration during the imaging process, brightness correction methods for microscopic images can be divided into two categories: prospective correction and retrospective correction [5].
Prospective background correction methods utilize specific reference images obtained during the image acquisition process. These reference images are used to estimate the characteristics of the flat field and dark field, which are then employed to correct the brightness through the reverse imaging model. In contrast, retrospective methods correct the uneven illumination directly from the obtained microscopic images, without additional reference images for calibration [6]. Retrospective methods are more practical as they do not require the acquisition of extra reference images, which adds convenience and efficiency to the process.
Shariff [7] and Singh [8] proposed a straightforward approach in which the flat field is approximated by the mean pixel value of the image sequence, followed by median filtering for smoothing. This method is currently the default brightness correction module in the Cell Profiler software. Carpenter [9] proposed several methods to calculate the brightness background, including median filtering and polynomial or spline surface fitting; that research indicates the latter two are more reliable and less prone to overfitting. In recent years, notable advances have been made in applying deep convolutional neural networks to image illumination processing, such as low-light image enhancement. Convolutional neural network algorithms have likewise been employed to address uneven illumination in color images, for instance to correct uneven illumination in underwater images [10] and to remove uneven illumination in dermoscopy images [11].
These methods do not fully leverage the statistical properties of image sequences, leading to potential inaccuracies during correction. They require a substantial number of images to achieve stable performance and rely on manual parameter adjustment. Moreover, they exhibit limited robustness to the artifacts commonly found in biological images. Meanwhile, the convolutional-neural-network-based algorithms are usually tailored to specific natural image datasets and rarely achieve good results when transferred to new datasets. Given these limitations, there is a need for more effective approaches that harness the statistical properties of image sequences, improve stability, and enhance robustness against biological image artifacts.
We propose a novel retrospective approach that utilizes robust background estimation and deviation rectification. This method effectively harnesses the statistical properties of image sequences, enhancing stability and robustness against biological image artifacts. The proposed method was compared to three state-of-the-art retrospective methods, namely CIDRE [3], BASIC [12], and Cell Profiler [7][8]. Both CIDRE and BASIC offer plugins that can be used within ImageJ, while Cell Profiler in this paper refers to the default brightness correction method of the Cell Profiler open-source software. The results show that the proposed method achieves a 13.8% reduction in the Average Gradient of Illumination Channel (AGIC) and a 6.3% increase in Discrete Entropy (DE) relative to the best competing method. It specifically addresses edge artifacts in microscopic image stitching, significantly improving the appearance and accuracy of stitched images and downstream image processing.

Method
The proposed method consists of two essential components: robust brightness distribution estimation and deviation rectification. First, least squares estimation and Cauchy estimation with a smooth regularization term are used to obtain an accurate, smooth background brightness distribution, ensuring that the estimate captures the overall illumination pattern effectively. Second, deviation rectification is performed: deviation correction vectors are designed for both the vertical and horizontal directions, enabling global brightness correction of the entire image.
Crucially, this approach is resilient to errors in background estimation, as it focuses on deviation trends rather than requiring a precise estimate at each pixel position. The flow diagram of the image data processing is shown in Figure 1.

Robust brightness distribution estimation
Assuming that foreground objects appear at any position in the image with equal probability, a common background can be estimated from a series of acquired images.
Stated mathematically, for a set of n tiles T_1, T_2, ⋯, T_n, the goal is to obtain the target background image B(x, y). The first step is to convert each T_i to the HSL color space and extract its luminance component, denoted L_1, L_2, ⋯, L_n. The luminance component represents the brightness information of the pixels in the image. Next, the presence of adequate intensity information in each L_i is verified by examining each pixel position to ensure it exhibits a satisfactory range of intensity values. This step assesses the distribution and diversity of intensity across the image, facilitating further analysis and processing. If certain positions lack adequate intensity information, additional processing can be performed on them to enhance their quality.
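A minimal sketch of the luminance extraction step, assuming tiles stored as (H, W, 3) numpy arrays with RGB values in [0, 1]; the HSL lightness channel is computed directly from its definition rather than via a full color-space library:

```python
import numpy as np

def rgb_to_luminance(rgb):
    """HSL lightness channel: L = (max(R,G,B) + min(R,G,B)) / 2 per pixel.
    rgb is an (H, W, 3) float array with values in [0, 1]."""
    return (rgb.max(axis=2) + rgb.min(axis=2)) / 2.0

# Tiny demonstration tile: red, white, and (implicitly) black pixels.
tile = np.zeros((2, 2, 3))
tile[0, 0] = [1.0, 0.0, 0.0]   # pure red
tile[0, 1] = [1.0, 1.0, 1.0]   # white
lum = rgb_to_luminance(tile)
```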
The entropy of L_i is calculated by the following formula:

H = − ∑_{j=0}^{N−1} p_j log2 p_j  (2)

In this formula, p_j represents the probability of intensity level j in the luminance component, and N denotes the total number of gray levels. Regions of an image that exhibit low entropy tend to contain limited useful information. For instance, in low-concentration fluorescence images where there is no fluorescent medium, the background pixels have little impact on target acquisition. To address areas with insufficient intensity information, scale-space resampling is applied.
This method involves resampling the image at different scales to enhance the quality of these areas. For the original image L_i, it is downsampled to obtain an image sequence at different scales. Each image in this sequence is then restored to the original size by bilinear interpolation, and the restored images are averaged to obtain the resampled image. Through this scale-space resampling, information from the surrounding regions is incorporated into the background pixels. This enriches them with valuable details and effectively fills the information gap, enhancing the overall usefulness of the image.
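The entropy check and the scale-space resampling above can be sketched as follows. This is an illustrative simplification under our own naming: block-mean downsampling and nearest-neighbour upsampling stand in for the bilinear interpolation used in the paper, and image sizes are assumed divisible by the largest scale factor:

```python
import numpy as np

def entropy(channel, levels=256):
    """Shannon entropy (equation (2)) of an integer-valued luminance channel."""
    hist = np.bincount(channel.ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def scale_space_resample(img, n_scales=3):
    """Average of the image re-expanded from several coarser scales.
    Assumes img dimensions are divisible by 2**(n_scales - 1)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for s in range(n_scales):
        f = 2 ** s
        small = img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        out += np.repeat(np.repeat(small, f, axis=0), f, axis=1)
    return out / n_scales

flat_tile = np.full((4, 4), 7)                      # a flat region: no information
uniform_entropy = entropy(flat_tile)
resampled = scale_space_resample(np.full((4, 4), 5.0))
```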
After applying the processing steps above, let the brightness components be denoted L_1, L_2, ⋯, L_n. At each pixel position (x, y), the n intensity values are sorted in ascending order, creating a new stack Q that retains the same data but reordered along the third dimension. The underlying layer-by-layer intensity distribution b is then estimated as a robust mean of the intensity distribution; this n×1 vector represents the average brightness of each layer of Q. To obtain a more robust estimate and reduce sensitivity to abnormal distributions, each layer is averaged over a restricted set of pixel positions. First, the number of pixels q to be used is determined, for example q = 20% × H × W, where H and W are the numbers of rows and columns of pixels. Each layer mean b_k is then computed as the average intensity of layer k over these q pixel positions. To determine the horizontal index list X and vertical index list Y of these positions, the per-pixel average over the third dimension of Q is computed and sorted; the q position indices are taken from the center of the sorted distribution, and the corresponding row and column coordinates are recorded in Y and X, respectively.
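A compact sketch of the layer sorting and the robust layer-mean computation, with the 20% fraction as a parameter. Names are ours, and the exact index bookkeeping for X and Y in the paper may differ in detail:

```python
import numpy as np

def robust_layer_means(stack, frac=0.2):
    """stack: (n, H, W) array of luminance tiles. Sort intensities at each
    pixel position across the tile axis, then average each sorted layer over
    the q positions whose per-pixel mean sits nearest the centre of the
    sorted distribution of means (a trimmed, robust per-layer mean)."""
    n, h, w = stack.shape
    sorted_stack = np.sort(stack, axis=0)       # Q: layer-sorted intensities
    q = max(1, int(frac * h * w))               # number of positions to keep
    per_pixel_mean = sorted_stack.mean(axis=0).ravel()
    order = np.argsort(per_pixel_mean)          # positions sorted by mean
    start = max(0, len(order) // 2 - q // 2)    # take q indices from the centre
    keep = order[start:start + q]
    flat = sorted_stack.reshape(n, -1)
    return flat[:, keep].mean(axis=1)           # b: n-vector of layer means

# Three constant tiles, deliberately out of order: sorting recovers [1, 2, 3].
stack = np.stack([np.full((4, 4), v) for v in (3.0, 1.0, 2.0)])
b = robust_layer_means(stack)
```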
The next step is the initial estimation of the brightness distribution B(x, y). To initiate the estimation process, the median layer of Q is used as the initial value, chosen for its close resemblance to the actual brightness distribution of the image. Following the least-squares method, an objective function is designed for the estimation. For each pixel position (x, y), the observed value of layer k is denoted Q(x, y, k); the true value for layer k can theoretically be expressed as b_k · B(x, y). The energy term for the initial estimation is therefore:

E_1(B) = ∑_k ∑_{x,y} ( Q(x, y, k) − b_k · B(x, y) )²  (3)

This first optimization provides an initial estimate close to the true brightness distribution, but due to noise and outliers it may not be completely accurate. We therefore use it as a starting point and introduce a Cauchy estimator with a smooth regularization term for a second optimization that improves accuracy and robustness. The fitting energy with the regularization term is:

E_2(B) = ∑_k ∑_{x,y} log( 1 + ( Q(x, y, k) − b_k · B(x, y) )² / γ² ) + λ R(B)  (4)

where γ is the bandwidth of the Cauchy estimator and λ is the coefficient of the regularization term R(B). The bandwidth plays a crucial role in determining the fitting degree of each data point and the tolerance towards outliers. Here we select the mean square error (MSE) of the initial estimation as γ:

γ = (1 / (n·H·W)) ∑_k ∑_{x,y} ( Q(x, y, k) − b_k · B̂(x, y) )²  (5)

where B̂ denotes the initial estimate. The bandwidth of the Cauchy estimator should accurately reflect the data dispersion to ensure good parameter estimation properties; using the MSE as the bandwidth aligns with the actual data characteristics, since it is an objective measure of dispersion. Meanwhile, the spatial regularization R(B) is computed as:

R(B) = ∑_{s=1}^{S} ∑_{x,y} ( (h_s ∗ B)(x, y) )²  (6)

where S is the number of scales, h_s denotes the Laplacian of Gaussian (LoG) kernel at the s-th scale, and B(x, y) is the brightness estimate being solved. In general, the LoG kernel size is chosen as 7 or 13, and the sigma values used are typically 0.5, 1, and 2. This regularization term imparts a smoother structure to B(x, y) and minimizes discrepancies between neighboring pixels.
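The building blocks of the second optimization can be sketched as follows. The LoG kernel here is unnormalised (any constant factor is absorbed by the regularization weight λ), the Cauchy loss matches the robust fitting term up to constants, and all names are illustrative:

```python
import numpy as np

def log_kernel(size, sigma):
    """Unnormalised Laplacian-of-Gaussian kernel, shifted to zero sum so it
    responds to structure rather than mean brightness."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def cauchy_energy(residuals, gamma):
    """Cauchy (Lorentzian) robust loss: sum of log(1 + (r / gamma)^2).
    Grows logarithmically for large residuals, unlike the quadratic loss."""
    return float(np.sum(np.log1p((residuals / gamma) ** 2)))

k7 = log_kernel(7, 1.0)                         # one of the paper's kernel sizes
zero_loss = cauchy_energy(np.zeros(5), 1.0)
outlier_loss = cauchy_energy(np.array([10.0]), 1.0)
```

The comparison below illustrates why the Cauchy loss is preferred for the refinement: a residual of 10 contributes far less than its squared value of 100 would under least squares.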

Deviation rectification
After obtaining the estimated background image B(x, y), nonlinear deviation functions can be calculated in both the horizontal and vertical directions. These functions, denoted f_x and f_y, represent the brightness trends of the background image along the X and Y axes respectively.
To calculate f_x, the average of B(x, y) over the image height H is taken; similarly, f_y averages B(x, y) over the image width W:

f_x(y) = (1/H) ∑_x B(x, y),  f_y(x) = (1/W) ∑_y B(x, y)  (7)

To mitigate the impact of sudden local brightness changes on the overall result, a smoothing operation is applied to the nonlinear functions f_x and f_y. The smoothed functions are then divided by the average brightness of the entire background image. This process yields two vectors d_x and d_y that represent the relative deviations from the mean along each axis:

d_x = G(f_x) / μ_B,  d_y = G(f_y) / μ_B  (8)

where the average brightness μ_B of the background image B(x, y) is calculated as:

μ_B = (1 / (H·W)) ∑_x ∑_y B(x, y)  (9)

and G(·) represents a Gaussian filtering operation applied to smooth the profiles:

G(f)(t) = ∑_{m=−k}^{k} g(m) f(t − m)  (10)

Here g is a one-dimensional Gaussian filter of size 2k+1 with standard deviation σ. Typically k is set to 3 and σ to 2, giving a 7×1 Gaussian filter with standard deviation 2.
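A sketch of the deviation-vector computation, using the stated defaults k = 3 and σ = 2. The convolution's boundary handling is a simplification (the profile is implicitly zero-padded at the edges), and names are ours:

```python
import numpy as np

def gaussian_kernel_1d(k=3, sigma=2.0):
    """Normalised 1-D Gaussian filter of size 2k + 1."""
    x = np.arange(-k, k + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def deviation_vectors(background, k=3, sigma=2.0):
    """Column and row brightness profiles of the background estimate,
    smoothed with a (2k+1)-tap Gaussian and normalised by the global mean."""
    fx = background.mean(axis=0)              # profile along the X axis
    fy = background.mean(axis=1)              # profile along the Y axis
    g = gaussian_kernel_1d(k, sigma)
    mu = background.mean()                    # global mean brightness
    dx = np.convolve(fx, g, mode='same') / mu
    dy = np.convolve(fy, g, mode='same') / mu
    return dx, dy

bg = np.full((10, 10), 5.0)                   # a perfectly flat background
dx, dy = deviation_vectors(bg)
```

For a flat background the interior of both vectors is 1.0, i.e. no deviation from the mean; edge values differ only because of the simplified boundary handling.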
Finally, brightness correction is performed in the L channel of the original image, producing corrected luminance components L'_1, L'_2, ⋯, L'_n, where each component is given by:

L'_i(x, y) = L_i(x, y) / ( d_y(x) · d_x(y) )  (11)

that is, the luminance is divided element-wise by the outer product of the vertical and horizontal deviation vectors. After obtaining the corrected luminance L'_i using equation (11), it is combined with the H and S components of the input tile T_i to form a new image in the HSL color space. This synthesized image is then converted back to the RGB color space to produce the final corrected tile. The resulting tiles T'_1, T'_2, ⋯, T'_n are the output of our brightness correction algorithm, which effectively removes local brightness changes while preserving the overall contrast and visual appearance of the original images.
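Under this outer-product reading of equation (11), the correction step can be sketched as follows (function name and epsilon guard are ours):

```python
import numpy as np

def correct_luminance(lum, dx, dy, eps=1e-8):
    """Divide the luminance channel by the outer product of the vertical (dy)
    and horizontal (dx) deviation vectors, as in equation (11)."""
    deviation_map = np.outer(dy, dx)          # (H, W) multiplicative deviation
    return lum / (deviation_map + eps)

# Synthetic check: a luminance channel built as a pure deviation pattern times
# a constant should come back flat after correction.
dy = np.array([1.0, 2.0])
dx = np.array([1.0, 0.5])
lum = np.outer(dy, dx) * 4.0
corrected = correct_luminance(lum, dx, dy)
```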

Verification of correction effect
The background estimation and brightness correction algorithm were tested on two separate collections of microscopic images consisting of 36 and 255 tiles, respectively. These tiles were stitched together to form a Whole Slide Image, as shown in Figure 2(a) and Figure 2(d). The original stitched images displayed evident brightness artifacts, manifested as visible seams between the tiles. The robust background estimation method was applied to estimate the distribution of background brightness; the results are shown in Figure 2(b) and Figure 2(e). The intensity distributions follow a consistent pattern, whereby intensity gradually decreases radially from the image center towards the edges. This observation aligns with the inherent characteristics of optical imaging through the microscope's lenses, where energy concentration is higher at the center of the imaging field than at the outer edges.
After applying the brightness correction algorithm presented in this paper, the stitched images (Figure 2(c) and Figure 2(f)) exhibit significant improvements: brightness is evenly distributed and enhanced across the entire image. Notably, the artifacts that typically appear between adjacent tiles are effectively minimized or entirely eliminated, resulting in a visually enhanced output. The proposed method, along with three state-of-the-art methods, was then tested on two further collections of microscopic images: the first consisting of 30 tiles of gold-phase images, the second comprising 588 tiles of cervical cell images.

OCOIP-2023, Journal of Physics: Conference Series 2700 (2024) 012003, doi:10.1088/1742-6596/2700/1/012003

These tiles were stitched together to create Whole Slide Images. After processing, the AGIC and DE metrics were calculated for each tile, and the average values across all tiles are reported in Tables 1 and 2. On Image Collection 1, the proposed algorithm demonstrates a 9.8% reduction in AGIC and a 4.6% increase in DE relative to the best-performing competitor, CIDRE (Table 1). On Image Collection 2, it achieves a 13.8% reduction in AGIC and a 6.3% increase in DE, again relative to CIDRE (Table 2). These results suggest that the proposed correction algorithm is more effective at correcting illumination and preserving image details than the other methods. Among the three state-of-the-art methods, CIDRE yields the best results, so we further compare our proposed method with CIDRE in Figure 3. Our method removes the residual edge artifacts and yields a better appearance; therefore, compared to CIDRE, the proposed algorithm has a better brightness correction effect.

Conclusion
In conclusion, this paper introduces a novel algorithm for correcting brightness variations in microscopic images through robust background estimation and deviation rectification. By integrating least squares and Cauchy estimation alongside a smooth regularization term, the proposed method achieves precise estimation of the background illumination distribution. Additionally, the devised deviation rectification scheme effectively captures horizontal and vertical trends in brightness variation, enhancing the algorithm's resilience against potential inaccuracies in background estimation. As a result, the proposed algorithm addresses global illumination variations while minimizing the impact of background estimation errors. Experimental results demonstrate the successful elimination of artifacts between adjacent tiles. Furthermore, comparative analysis shows a significant improvement over three state-of-the-art retrospective methods, with a 13.8% reduction in AGIC and a 6.3% increase in DE compared to the best-performing CIDRE method. Overall, the proposed method presents a robust and effective solution for brightness correction in microscopic images.

Figure 1. The flow of brightness distribution estimation.

Figure 2. Results of original mosaic images, background estimation, and mosaic images with brightness correction. (a) Original mosaic image of 36 tiles. (b) Background estimation of (a). (c) Mosaic image of 36 tiles with brightness correction. (d) Original mosaic image of 255 tiles. (e) Background estimation of (d). (f) Mosaic image of 255 tiles with brightness correction.

Compare with other competitive methods
The performance of the proposed method is evaluated by comparing it with three state-of-the-art retrospective methods: CIDRE, BASIC, and Cell Profiler. The Average Gradient of Illumination Channel (AGIC) and Discrete Entropy (DE) metrics are utilized for this comparison. The AGIC is expressed as:

AGIC = (1 / (M·N)) ∑_{u=1}^{M} ∑_{v=1}^{N} G(u, v)  (12)

In this formula, M and N denote the number of image blocks along the horizontal and vertical directions, respectively, and u and v index the positions of the image blocks. The term G(u, v) is the maximum difference between adjacent blocks, given by:

G(u, v) = max_{k=1,2,…,8} | b(u, v) − b_k(u, v) |  (13)

Here, b(u, v) denotes the average brightness of an image block, b_k(u, v) represents the average brightness of the k-th of its 8 neighboring blocks, and the max function returns the maximum over the specified set of values. For images of the same slide, a smaller AGIC value indicates a more uniform light distribution across the slide. Discrete Entropy (DE) is mathematically defined as follows:

DE = − ∑_{g=0}^{255} p(g) log2 p(g)  (14)

In this formula, DE is the entropy value of a grayscale image, the summation runs over all grayscale levels g from 0 to 255, and p(g) denotes the probability of encountering grayscale level g within the image. A higher DE value indicates that the grayscale image contains more information.
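Illustrative implementations of the two metrics, under our own block-partitioning assumptions (fixed-size blocks, any remainder at the borders cropped); names and the default block size are not from the paper:

```python
import numpy as np

def discrete_entropy(gray):
    """DE of an 8-bit grayscale image: -sum p(g) log2 p(g) over levels 0..255."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def agic(image, block=8):
    """Average Gradient of the Illumination Channel: mean over all blocks of
    the maximum absolute difference between a block's mean brightness and
    the mean brightness of each of its (up to 8) neighbours."""
    h, w = image.shape
    m, n = h // block, w // block
    means = image[:m * block, :n * block].reshape(m, block, n, block).mean(axis=(1, 3))
    total, count = 0.0, 0
    for i in range(m):
        for j in range(n):
            diffs = [abs(means[i, j] - means[i + di, j + dj])
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if not (di == 0 and dj == 0)
                     and 0 <= i + di < m and 0 <= j + dj < n]
            total += max(diffs)
            count += 1
    return total / count

uniform = np.full((16, 16), 7.0)       # perfectly even illumination
uniform_agic = agic(uniform, block=8)
uniform_de = discrete_entropy(np.full((16, 16), 7))
```

On a perfectly uniform image both metrics reach their extremes: AGIC is zero (no brightness gradient between blocks) and DE is zero (a single gray level carries no information), which matches the interpretation given above.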

Figure 3. Results of original mosaic images and mosaic images with brightness correction by CIDRE and by our method. (a) Original mosaic image of 30 tiles. (b) Mosaic image of 30 tiles with brightness correction by CIDRE. (c) Mosaic image of 30 tiles with brightness correction by our method. (d) Original mosaic image of 588 tiles. (e) Mosaic image of 588 tiles with brightness correction by CIDRE. (f) Mosaic image of 588 tiles with brightness correction by our method. Figure 3(a) and Figure 3(d) depict the original mosaic images, while Figure 3(b) and Figure 3(e) show the images corrected by the CIDRE method, which exhibit slight edge artifacts. Figure 3(c) and Figure 3(f) represent the stitched images corrected by our algorithm, which essentially eliminates the edge artifacts.


Table 1. Comparison of quantitative metric results for different methods on Image Collection 1.