Paper

High-fidelity artifact correction for cone-beam CT imaging of the brain


Published 22 January 2015 © 2015 Institute of Physics and Engineering in Medicine
Citation: A Sisniega et al 2015 Phys. Med. Biol. 60 1415. DOI: 10.1088/0031-9155/60/4/1415


Abstract

CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30–50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening.

The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT.

Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.


1. Introduction

1.1. Motivation

Traumatic brain injury (TBI) is estimated to result in 1.7 million visits to Emergency Departments (Laker 2011) each year and is a contributing factor in 30% of injury-related death in the US (Langlois et al 2006). In the acute phase of TBI, multi-detector CT (MDCT) is the preferred imaging modality for determination of structural damage and intracranial hemorrhage (ICH) via non-contrast-enhanced (NCE) exams, showing clear delineation of fresh blood in the brain, including epidural, subdural, and intraparenchymal injury. MRI, on the other hand, is used extensively in the sub-acute / chronic phases for evaluation of subtle diffuse lesions (Maas et al 2008). Accurate diagnosis of acute TBI is extremely important in determining the appropriate course of treatment and would greatly benefit from availability of imaging at the point-of-care (e.g. ambulance, ICU, or urgent care center) or even directly in the environment where TBI occurs (e.g. athletic or war theater). Neither MDCT nor MRI is well suited to broad deployment in such venues given their high cost, large footprint, and lack of portability.

In recent years, flat-panel detector (FPD)-based cone-beam CT (CBCT) has emerged as a promising technology in a variety of specialty diagnostic applications [e.g. including extremity imaging (Prakash et al 2011, Zbijewski et al 2011, Koskinen et al 2013, Carrino et al 2014), breast imaging (Boone et al 2001, Yang et al 2007b), and otolaryngology imaging (Zoumalan et al 2009, Yu et al 2010, Xu et al 2012)] and image-guided procedures [e.g. surgery (Siewerdsen et al 2005, Fahrig et al 2006, Miracle and Mukherji 2009, Siewerdsen 2011) and radiation therapy (Jaffray et al 2002, Grills et al 2008)]. FPD-based CBCT could provide a valuable means for point-of-care imaging of TBI; however, the current generation of FPD-CBCT systems for head imaging is only well suited to bone imaging (e.g. maxillofacial bones, sinuses, and teeth) and challenged by poor contrast resolution in imaging of soft tissues, brain, and blood. Compared to MDCT, CBCT exhibits reduced CT number accuracy, lower image uniformity, higher noise, and a higher level of artifacts.

Assessment of acute TBI with MDCT involves detection of bleeds that range from >10 mm in size (e.g. severe TBI in the form of a peridural hemorrhage from blunt trauma) to very small (~1 mm) micro-hemorrhages associated with mild-to-moderate TBI and diffuse axonal injury (DAI) from inertial loading (Courville 1950, Gennarelli et al 1982, Adams et al 1985, McKee et al 2012). The contrast of fresh blood in the brain is reported to be 25–50 HU (Parizel et al 2001, Greenberg and Arredondo 2006), so reliable detection of small parenchymal bleeds presents a low-contrast imaging task requiring a high degree of image uniformity and ~3–5% contrast resolution.

1.2. Challenges to image quality: scatter

X-ray scatter has been identified among the main factors limiting image quality in CBCT. The large cone angle used in CBCT creates a large amount of scatter that results in a loss of contrast, decrease in contrast-to-noise ratio (CNR), and cupping and streak artifacts (Siewerdsen and Jaffray 2001). The deleterious effects of scatter are especially pronounced for compact CBCT geometries suitable for portable, point-of-care systems, since the scatter-to-primary ratio (SPR) increases sharply as the air gap (or axis-to-detector distance) decreases (Neitzel 1992, Siewerdsen and Jaffray 2001). SPR can be reduced using anti-scatter grids, but previous work shows that a compact head imaging system gains little or no improvement in CNR per unit dose when including a grid (Sisniega et al 2013). Furthermore, residual scatter artifacts persist even with a grid, and a scatter correction method is still likely required to obtain performance suitable for low-contrast soft-tissue imaging.

The strong influence of scatter on CBCT image quality has motivated numerous efforts to reduce its effects, typically achieved by experimental measurement or numerical estimation, followed by subtraction of the scatter estimate from the projections. A comprehensive overview of scatter correction methods can be found in (Ruhrnschopf and Klingenbeck 2011a, 2011b, Nuyts et al 2013). Experimental approaches for scatter estimation include measurements in the shadow of x-ray opaque blockers (Ning et al 2004, Liu et al 2005, Zhu et al 2005, 2009), semiopaque blocking patterns to modulate the primary radiation allowing for separation of scatter by Fourier techniques (Zhu et al 2006, Grimmer et al 2012), using the signal in the shadow of the x-ray collimator (Siewerdsen et al 2006), or an inverse approach in which scatter is estimated from a second scan with an array of pencil beams (Yang et al 2012). Numerical estimation of scatter ranges from simple forms that model scatter as a global constant fluence for the scan (Glover 1982) or an individual constant per projection (Bertram et al 2005), to more elaborate approaches that model scatter as a blurred version (Love 1987, Seibert 1988) of the projection data convolved with a scatter point-spread function obtained from measurements or Monte Carlo (MC) simulations (Maltz et al 2008, Sun and Star-Lack 2010).

MC simulations provide an alternative approach to scatter estimation that avoids many of the simplifications involved in the numerical methods described above. While MC scatter simulation offers high accuracy, its use has been limited due to its computational cost (Jarry et al 2006). Numerous acceleration approaches have been developed, and MC-based scatter correction is now becoming feasible in clinical CBCT applications. The acceleration methods involve the use of variance reduction (VR) techniques such as Woodcock tracking or photon splitting, achieving more than an order-of-magnitude reduction in computation time (Poludniowski et al 2009). Fast MC with a low number of tracked photons and reduced sampling in the projection and angular domain ('sparse MC') followed by de-noising and interpolation to exploit the relatively low-frequency nature of scatter distributions provides an additional means to reduce MC scatter estimation time (Colijn and Beekman 2004, Zbijewski and Beekman 2006, Mainegra-Hing and Kawrakow 2008, Poludniowski et al 2009, Bootsma et al 2013). Finally, implementation on graphics processing units (GPUs) leverages the parallel nature of the MC simulation to provide around 30 × reduction in computation time (Badal and Badano 2009, Jia et al 2012), which increases by an additional factor of 6 when combined with variance reduction (Sisniega et al 2013). Further acceleration is possible via a combination of analytical methods with coarse MC simulation (Kyriakou et al 2006, Baer and Kachelrieß 2012).

1.3. Challenges to image quality: lag

Another important factor that degrades FPD CBCT image quality is detector lag—i.e. contamination of a given projection by residual signal from previous exposures, predominantly caused by charge trapping in the a-Si:H detector elements (Siewerdsen and Jaffray 1999a). Ghosting associated with a change in detector gain from previous exposures imparts an analogous history-dependent signal response (Zhao et al 2005). Studies by Siewerdsen and Jaffray (1999b) showed lag to cause 'comet' and 'streak' artifacts in FPD CBCT for values of 1st frame residual signal as low as 2–3%. More recent studies (Mail et al 2008) showed such exposure history-dependent effects to manifest as skin-line artifacts that significantly degrade image uniformity with differences of 20–30 HU between bright and dark areas for large body sites.

Lag can be characterized in terms of the temporal impulse response function (IRF) of the detector. This temporal IRF can be measured directly (Siewerdsen and Jaffray 1999a, 1999b) or can be estimated from the response to rising or falling exposure steps—e.g. the rising step response function (RSRF) or the falling step response function (FSRF) (Mail et al 2008, Starman et al 2011, 2012a, Yang et al 2007a). Reduction of lag artifacts can be achieved using hardware solutions (Siewerdsen and Jaffray 1999b, Starman et al 2012b) or software-based methods involving an analytical model of detector signal decay (e.g. a polynomial (Weisfield et al 1999) or multi-exponential (Hsieh et al 2000a, Mail et al 2008) fit to the IRF) and temporal deconvolution with the projection data, assuming the system to be linear and time invariant (LTI) (Hsieh et al 2000a, Starman et al 2011). Recent work (Starman et al 2012a) introduced a correction method that considered non-LTI detector response and achieved similar or slightly better correction than LTI deconvolution.

1.4. Challenges to image quality: glare

Recent work shows veiling glare to be a significant contributor to FPD CBCT artifacts. Such effects are well known in the context of x-ray image-intensifiers (XRIIs), where glare results in a loss of contrast ('fog') in projection data (Carton et al 2009, Poludniowski et al 2011). Although glare is usually considered to be small in FPDs, it arises to a measurable degree from a number of sources, including: long-range dispersion of optical photons in the scintillator; scatter of x-rays in the scintillator and/or detector (or detector enclosure); and K-fluorescence in the scintillator or detector substrate. In each case, signal is generated at a position distant from the interaction of the x-ray. The long-range spread associated with veiling glare in FPD-CBCT degrades image uniformity, reduces contrast, and causes streak artifacts (Poludniowski et al 2011, Lazos and Williamson 2012), with non-uniformity of up to 67 HU in a 20 cm diameter water phantom (Lazos and Williamson 2012). The effect of veiling glare increases with the area of illumination (Dong et al 2012), and glare artifacts are therefore more conspicuous for large cone angles and large fields of view.

Long-range veiling glare can be measured as the tails of the detector PSF. Recent work has modeled veiling glare by fitting the PSF [or alternatively, the line-spread function (LSF)] to a multi-component model with one of the components specifically selected to fit the tails of the curve, yielding a glare spread function (GSF) (Poludniowski et al 2011, Lazos and Williamson 2012). Veiling glare artifacts in CBCT were shown to be reduced by spatial deconvolution of the projection data with the GSF (Poludniowski et al 2011).

1.5. Challenges to image quality: beam hardening

Similar to MDCT, CBCT is affected by artifacts arising from beam hardening, such as cupping and streaks, and numerous beam hardening artifact correction techniques have been proposed. Relatively simple methods include treatment of water-induced artifacts only (Kachelrieß et al 2006), two-stage correction of water- and bone-induced artifacts (Joseph and Spital 1978, Kijewski 1978, Nalcioglu and Lou 1979, Hsieh et al 2000b, Kyriakou et al 2010, Van Gompel et al 2011), and correction of artifacts generated by more than two materials (Joseph and Ruth 1997). Increasingly sophisticated methods include incorporation of a polychromatic beam model and object attenuation in model-based iterative reconstruction (De Man et al 2001, Elbakri and Fessler 2003). Each of these methods employs some form of prior information regarding the energy characteristics of the system (e.g. x-ray spectrum and/or detector response) usually obtained by calibration with known materials.

1.6. A comprehensive framework for artifact correction in FPD CBCT

Without compensation of such artifacts, FPD CBCT does not yield diagnostic quality imaging of the brain for detection of bleeds associated with TBI. In this work we present a computational artifact correction framework for FPD CBCT with the aim of enabling application in point-of-care imaging of TBI. The framework is also applicable to other low-contrast soft-tissue imaging tasks in the brain and is shown to provide a significant breakthrough in CBCT image quality. The principal scientific contribution is a novel, fast, and accurate MC-based scatter correction method based on a sparse, GPU-accelerated MC simulation with VR. The output of the MC simulation is processed with kernel smoothing for de-noising and interpolation of the missing projections. Since scatter correction alone is not sufficient to yield the desired soft-tissue image quality, the framework is complemented by corrections for lag, glare, and beam hardening (which are adapted from previously reported methods detailed below) to yield a fairly comprehensive methodology for achieving high quality images of the brain using FPD CBCT. So-called 'metal artifacts' (a loose catch-all referring to a combination of the effects mentioned above and photon starvation) are not treated explicitly in the current work except to the extent that such effects (with the exception of photon starvation) are handled naturally within the corrections of scatter, lag, glare, and beam hardening. The performance of the correction framework is quantified in experiments conducted using realistic phantoms specifically designed to emulate the imaging tasks relevant for evaluation of acute TBI. The effect of individual corrections, the interplay between various stages of the correction framework, and the overall performance in the context of TBI imaging is investigated.

2. Materials and methods

2.1. Experimental setup

CBCT data were acquired using the imaging bench shown in figure 1(a). The system included a rotating anode DU694 x-ray tube with 14° anode angle, enclosed in EA10 housing (Dunlee, Aurora, IL), and a PaxScan 4343R detector (Varian, Palo Alto, CA) with a 250 mg cm−2 CsI:Tl scintillator and 0.139  ×  0.139 mm2 native pixel size. Linear translation stages (406XR and HLE60SR) were used to position the source and detector, and the sample was rotated using a Dynaserv G3 servo drive with a DR1060CB motor, all driven by Compumotor 6k motion controllers (motion components by Parker Hannifin, Cleveland, OH). Custom C++ software was used for synchronization of motion, x-ray exposure, and data readout.

Figure 1.

Figure 1. Experimental methods. (a) X-ray bench. (b) Head phantom consisting of a natural skull filled with brain-mimicking gelatin, with spherical inserts simulating fresh bleeds and simulated ventricles shown in (c). Seven ROIs used for evaluation of artifact correction are indicated in a reconstruction of the phantom in (d) and (e).


Source-to-axis (SAD) and source-to-detector (SDD) distances were 580 and 800 mm, respectively, mimicking a typical configuration for compact head CBCT systems (Zbijewski and Stayman 2009). The current experiments did not use a bowtie filter or grid (as discussed below). Geometric calibration was performed using the method of (Cho et al 2005).

CBCT data were acquired at 100 kVp (+2 mm Al, +0.2 mm Cu) and 0.4 mAs per projection with 720 projections (0.5° angular step). The total dose was 24.8 mGy, measured at the center of a 16 cm CTDI phantom placed at isocenter. The detector was operated at 7.5 frames per second and 2  ×  2 binning, yielding a pixel size of 0.278  ×  0.278 mm2. Dark current (offset) correction involved subtraction of an average of 50 dark frames. Air scan (gain) correction was performed using the average of 50 flood-field frames. Processed projections were subsequently binned by a factor of two, for a final pixel size of 0.556  ×  0.556 mm2. CBCT reconstructions were carried out using a custom GPU-accelerated implementation of the FDK algorithm with a Hann windowed ramp filter. The cutoff frequency for the Hann window was 80% of the Nyquist frequency. The voxel size for the reconstructed data was 0.5  ×  0.5  ×  0.5 mm3. Reconstructions were performed on a Geforce GTX 780 Ti (Nvidia, Santa Clara, CA) GPU.
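The Hann-apodized ramp filter described above can be sketched as follows. This is a minimal illustrative construction (absolute scaling constants of the FDK weighting are omitted), assuming a frequency-domain implementation with the 0.556 mm binned pixel size and the stated 80% Nyquist cutoff; function names are ours, not from the authors' software.

```python
import numpy as np

def hann_ramp_filter(n_det, pixel_mm, cutoff_frac=0.8):
    """Frequency-domain ramp filter apodized by a Hann window.

    cutoff_frac is the Hann cutoff expressed as a fraction of the
    Nyquist frequency (0.8 here, matching the stated settings).
    Absolute FDK scaling constants are omitted in this sketch.
    """
    freqs = np.fft.fftfreq(n_det, d=pixel_mm)      # cycles/mm
    nyquist = 0.5 / pixel_mm
    ramp = np.abs(freqs)                           # |f| ramp filter
    f_c = cutoff_frac * nyquist
    # Hann window: 0.5*(1 + cos(pi f / f_c)) inside the cutoff, 0 outside
    hann = np.where(np.abs(freqs) <= f_c,
                    0.5 * (1.0 + np.cos(np.pi * freqs / f_c)),
                    0.0)
    return ramp * hann

def filter_projection_row(row, pixel_mm):
    """Apply the filter to one detector row (1D) via FFT."""
    H = hann_ramp_filter(row.size, pixel_mm)
    return np.real(np.fft.ifft(np.fft.fft(row) * H))
```

Because the ramp is zero at DC, a uniform (constant) detector row filters to zero, which is a quick sanity check of the construction.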

The accuracy of MC scatter estimation was validated in a configuration that minimized errors due to uncertainties in material composition or object segmentation using a cylindrical water phantom (120 mm diameter, 80 mm height). The phantom was imaged at four settings of longitudinal collimation (FOVz): 45, 30, 15 and 8 mm measured at isocenter. The detector signal at the central row measured as a function of FOVz was linearly extrapolated to 0 mm beam width to generate a virtual, scatter-free projection dataset. Although the phantom was slightly smaller than a standard head (and therefore reduced the potentially confounding effects of beam hardening), it still yielded a significant scatter fraction sufficient to validate the MC calculation.
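The extrapolation used to generate the virtual scatter-free dataset can be illustrated with a short sketch. The signal values below are hypothetical placeholders (the paper does not report them); only the FOVz settings and the linear-fit-to-zero-beam-width procedure come from the text.

```python
import numpy as np

# Longitudinal collimation settings (FOVz, mm at isocenter) from the text,
# paired with hypothetical central-row detector signals (arbitrary units).
fovz = np.array([45.0, 30.0, 15.0, 8.0])
signal = np.array([1.30, 1.21, 1.10, 1.05])   # illustrative values only

# Scatter grows approximately linearly with beam width, so a linear fit
# extrapolated to FOVz = 0 estimates the scatter-free (primary) signal.
slope, intercept = np.polyfit(fovz, signal, 1)
primary_estimate = intercept                  # signal at FOVz = 0
scatter_45mm = signal[0] - primary_estimate   # scatter at the widest setting
```

The intercept serves as the virtual scatter-free projection value; repeating the fit per detector pixel yields the full reference dataset.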

The main body of experiments involved a custom anthropomorphic head phantom (figure 1(b)) consisting of a natural skull in soft-tissue equivalent plastic (RandoTM, The Phantom Laboratory, Greenwich, NY). The contrast of fresh blood (~40–80 HU) to brain (~25 HU) (Schneider et al 2000) is ~15–55 HU. To achieve this contrast in simulated materials, the interior of the skull was filled with a mixture of gelatin and low-contrast (1.29 mg ml−1) iodinated contrast agent (Omnipaque, GE Healthcare, Little Chalfont UK) to create a simulated brain background of ~50 HU. Intraparenchymal hemorrhage was simulated by including spherical inserts made of acrylic (~100 HU) yielding the desired target contrast of ~50 HU, in reasonable agreement with the contrast reported for fresh bleeds (Parizel et al 2001, Greenberg and Arredondo 2006). The sphere diameters were 1.5, 3, 5, 8, 10 and 12 mm, encompassing a pertinent range of imaging tasks for detection of micro-hemorrhages in mild and moderate acute TBI. Four groups of spheres were included at three axial slices: two in a slice at the center of the brain (ZCtr = 0 mm), one near the top of the brain (ZSup = +40 mm), and one at the skull base area (ZInf = −40 mm). Each group consisted of six spheres (one for each diameter) as shown in figure 1(c). Additional spheres (12 mm diameter) were also placed directly on the interior of the skull to mimic epidural lesions. A realistic model of the ventricles was formed from a mixture of wax and iodinated contrast to give ~ −20 HU (figure 1(c)).

The head phantom was imaged at full beam width (FOVz = 320 mm) and with narrow beam collimation (FOVz = 10 mm), once with the head positioned with the central axial slice at ZCtr (central region of the head) and once with the central axial slice at ZSup (superior aspect of the head), each containing simulated bleeds. The narrow beam images provided reference datasets with reduced scatter. The nominal attenuation of water (μw) was estimated from an 8  ×  8 voxel region of interest (ROI) in the soft-tissue material outside the skull (ROI labeled μw in figure 1(d)) in the fully corrected volume. All reconstructions were converted to HU using the measured value, μw = 0.0216 mm−1.
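The conversion to HU from the measured water attenuation follows the standard Hounsfield definition; a minimal sketch using the measured μw = 0.0216 mm−1 from the text:

```python
import numpy as np

MU_WATER = 0.0216  # mm^-1, measured in the soft-tissue ROI (figure 1(d))

def mu_to_hu(mu):
    """Convert linear attenuation coefficients (mm^-1) to Hounsfield units
    using the standard definition HU = 1000 (mu - mu_w) / mu_w."""
    return 1000.0 * (np.asarray(mu, dtype=float) - MU_WATER) / MU_WATER
```

By construction, water maps to 0 HU and air (μ ≈ 0) to −1000 HU.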

2.2. The artifact correction framework

Figure 2 shows a flowchart of the framework, with the sequence of corrections generally following the order in which image degrading effects occur in the imaging chain. Dark and gain corrected projections are first processed to reduce the effects of lag and veiling glare (sections 2.2.1 and 2.2.2). These processed projections are then input to an iterative loop involving image reconstruction, estimation of scatter and beam hardening corrections, and application of the corrections to the projection data (sections 2.2.3 and 2.2.4).

Figure 2.

Figure 2. Flowchart of the artifact correction framework.


2.2.1. Lag correction.

Detector lag correction followed a temporal deconvolution approach (Hsieh et al 2000a, Starman et al 2011) with the temporal impulse response experimentally characterized in terms of the FSRF (Mail et al 2008). The FSRF was measured following 100 frames uniformly exposed at 100 kVp (+2 mm Al, +0.2 mm Cu), 0.4 mAs, and 7.5 pulses per second, corresponding to a steady-state signal of ~85% of detector saturation. The normalized FSRF was assumed to be independent of exposure level. The FSRF signal L(k) was calculated as the average residual intensity in a 100  ×  100 pixel ROI at the center of the detector measured in 100 frames acquired after termination of x-rays (with standard dark and gain correction).

This FSRF is the convolution of the sought impulse response function h(k) with a unit step function representing the falling edge of x-ray exposure:

L(k) = h(k) ∗ u(k) = Σ_{n=1}^{N} an exp(−bn k)    (1)

where k is the frame number, u(k) is the unit step function, N is the number of terms in the exponential model, and an and bn are lag parameters governing the magnitude and falloff of each exponential term. Equation (1) was fit to measured data using six exponential components (N = 6). Measurements and fits are shown in figure 3(a), showing the 1st frame lag for the FPD to be ~2%, the 10th frame lag ~0.5%, and the 100th frame lag ~0.1%. The lag parameters obtained from the fits were applied in lag correction via recursive deconvolution as in (Hsieh et al 2000a, Starman et al 2011).
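The fit-and-deconvolve procedure above can be sketched as follows. The lag parameters and signals here are synthetic, and the deconvolution is written as a direct recursive inversion of the causal convolution, a simplified stand-in for the recursive schemes of Hsieh et al (2000a) and Starman et al (2011); it is illustrative only, not the authors' implementation.

```python
import numpy as np

def multi_exp_irf(k, a, b):
    """Multi-exponential lag IRF h(k) = sum_n a_n * exp(-b_n * k)."""
    k = np.asarray(k, dtype=float)
    return sum(an * np.exp(-bn * k) for an, bn in zip(a, b))

def deconvolve_lag(y, h):
    """Recover the lag-free signal x from the measured signal y, where
    y = h * x (causal discrete convolution), by direct recursion:
        x[k] = (y[k] - sum_{j=1..k} h[j] x[k-j]) / h[0]
    """
    x = np.zeros(len(y))
    for k in range(len(y)):
        acc = y[k]
        for j in range(1, k + 1):
            acc -= h[j] * x[k - j]
        x[k] = acc / h[0]
    return x

# Synthetic check: contaminate a constant exposure with a two-term IRF
# (illustrative parameters, not the fitted six-component values), then invert.
a, b = [0.9, 0.1], [5.0, 0.5]
h = multi_exp_irf(np.arange(50), a, b)
x_true = np.ones(50)                    # constant exposure sequence
y = np.convolve(x_true, h)[:50]         # lag-contaminated signal
x_rec = deconvolve_lag(y, h)
```

The recursion is an exact inverse of the discrete convolution, so the recovered sequence matches the input to machine precision for any invertible IRF.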

Figure 3.

Figure 3. Empirical (i.e. measured) parameters associated with the artifact correction framework. (a) Detector lag characterization based on a (six-component) multi-exponential fit to the FSRF. (b) Measurement and fit of the detector LSF. The zoomed-in plot (c) shows the long-range tails of the LSF and the estimated glare LSF of equation (2). (d) Comparison between measured and analytically estimated exposure values for the x-ray beam attenuated by Al and PMMA as a function of attenuator thickness.


2.2.2. Veiling glare correction.

Correction of detector veiling glare was performed with a deconvolution approach similar to (Poludniowski et al 2011), where the detector LSF was modeled as a multi-component function consisting of a Gaussian for short-range blur and a Lorentzian representing the long-range veiling glare:

LSF(x) = c1 exp(−x2 / 2c22) + c3 / (x2 + c42)    (2)

where the c terms are fit parameters. The LSF was estimated from the edge spread function (ESF) measured using a polished tungsten edge at the center of the detector. The tungsten edge was positioned directly at the surface of the detector to minimize the impact of source-related effects (e.g., off-focal radiation) on the measured LSF. The oversampled ESF was generated from an angulated profile (Samei et al 1998), and the 3-point central difference was used to obtain the LSF from the ESF. The estimated LSF was fit to equation (2) as illustrated in figure 3(b). The Lorentzian glare component of the LSF (LSFG) is shown in figure 3(c). A Glare Spread Function (GSF) was computed from LSFG following (Poludniowski et al 2011), assuming the detector PSF to be spatially invariant and rotationally symmetric. The glare corrected image (ylg) was obtained by deconvolution of the (lag-corrected) projections with the GSF as in (Poludniowski et al 2011).
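A glare deconvolution of this kind can be sketched in the Fourier domain. The glare fraction and Lorentzian width below are hypothetical values (the paper does not report them here), and the detector response is modeled as (1 − g)·δ + g·GSF in the spirit of Poludniowski et al (2011); this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def lorentzian_gsf(shape, c4):
    """Rotationally symmetric long-range glare kernel with Lorentzian
    falloff 1/(r^2 + c4^2), normalized to unit sum."""
    ny, nx = shape
    y, x = np.ogrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    r2 = x.astype(float) ** 2 + y.astype(float) ** 2
    g = 1.0 / (r2 + c4 ** 2)
    return g / g.sum()

def deglare(proj, glare_fraction=0.05, c4=20.0):
    """Remove veiling glare by Fourier deconvolution, modeling the
    detector response as (1 - g)*delta + g*GSF (g and c4 hypothetical)."""
    gsf = np.fft.ifftshift(lorentzian_gsf(proj.shape, c4))
    H = (1.0 - glare_fraction) + glare_fraction * np.fft.fft2(gsf)
    return np.real(np.fft.ifft2(np.fft.fft2(proj) / H))
```

Since the glare fraction is small, |H| stays close to 1 and the division is well conditioned; forward-applying the same model and then calling deglare recovers the original projection.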

2.2.3. Beam hardening correction.

Beam hardening was addressed using the two-pass correction of water- and bone-induced artifacts proposed by Joseph and Spital (Joseph and Spital 1978). The calibration tables required by the algorithm (one relating the measured polyenergetic line integrals to water-equivalent path length, the other relating the thickness of bone traversed by a ray to its water-equivalent path length) were estimated from analytical simulation of polyenergetic attenuation and detection in the FPD. This approach avoids biases due to scatter and/or off-focal radiation inherent to calibrations based on experimental attenuation measurements. The simulations employed attenuation coefficients of water and bone obtained from the NIST database (Hubbell and Seltzer 1996), x-ray spectra estimated from the TASMICS model of tungsten anode x-ray spectra (Hernandez and Boone 2014), and detector response computed according to a previously validated cascaded systems model of the CsI:Tl scintillator and FPD (Prakash et al 2011).

The accuracy of the analytical beam hardening calibration was assessed by comparing the predicted exposure at the detector with the value measured using a multi-purpose x-ray exposure meter (Accu-Pro, Radcal Corp., Monrovia, CA) with a solid state Si diode (DDX6-W, Radcal Corp., Monrovia, CA) for a 100 kVp beam (+2 mm Al, +0.2 mm Cu) and 0.4 mAs per pulse attenuated by: i) Al plates of varying thickness (0–10 mm with 0.5 mm increments); and ii) PMMA plates of varying thickness (0–100 mm with 10 mm increments). For each thickness, 100 x-ray pulses were averaged and normalized by the exposure of the unattenuated beam. As shown in figure 3(d), the analytical model (continuous line) shows good agreement with measured data (black data points), illustrating the accuracy of the spectral model in the beam hardening correction.
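The water calibration table can be illustrated with a toy polyenergetic model. The three-bin spectrum, weights, and attenuation values below are purely illustrative stand-ins for the TASMICS spectrum, NIST coefficients, and cascaded detector model used by the authors; the point is the table construction and its inversion by interpolation.

```python
import numpy as np

# Illustrative 3-bin spectrum (keV), relative fluence weights, and water
# attenuation coefficients (mm^-1) -- NOT the TASMICS/NIST values.
energies = np.array([40.0, 60.0, 80.0])
weights = np.array([0.3, 0.5, 0.2])
mu_water = np.array([0.027, 0.021, 0.018])

def poly_line_integral(t_water_mm):
    """Polyenergetic line integral -ln(I/I0) after traversing
    t_water_mm of water (energy-weighted sum over the spectrum)."""
    I = np.sum(weights * np.exp(-mu_water * t_water_mm))
    return -np.log(I / weights.sum())

# Calibration table: measured polyenergetic value -> water-equivalent path.
thicknesses = np.linspace(0.0, 300.0, 301)
p_poly = np.array([poly_line_integral(t) for t in thicknesses])

def water_equivalent_thickness(p_measured):
    """Invert the table by interpolation (p_poly is monotonic in t)."""
    return np.interp(p_measured, p_poly, thicknesses)
```

Beam hardening appears in this model as a sublinear growth of p_poly with thickness: the effective attenuation at 300 mm is lower than the thin-object value, which is exactly the effect the table linearizes.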

2.2.4. Scatter correction.

Scatter correction involved a two-stage GPU-accelerated MC scatter estimation employing variance reduction, sparse MC simulations and de-noising to obtain a fast yet accurate estimate of scatter. As illustrated in figure 4, an initialization stage provides a pre-corrected volume via a simplified MC simulation to compute the mean scatter magnitude. The subsequent correction stage combines advanced segmentation and tissue density modeling with beam hardening correction and accelerated, high-fidelity MC scatter simulation.

Figure 4.

Figure 4. Flowchart of the combined beam hardening and scatter correction framework. The initialization stage produces an approximate correction based on the mean scatter. The subsequent 'correction loop' involves iterations of the water + bone JS beam hardening correction and high-fidelity scatter fluence estimation using sparse MC-GPU, VR, and kernel smoothing.


At the core of the correction method is a GPU-accelerated MC scatter simulator, based on an in-house extension (Sisniega et al 2013) of the publicly available MC-GPU package (Badal and Badano 2009). The modified MC-GPU implements a probabilistic model for tungsten x-ray sources with arbitrary filtration based on TASMICS as in section 2.2.3 (Hernandez and Boone 2014), and an analytical model of energy-dependent response of a CsI:Tl scintillator (as in section 2.2.3). The MC scatter simulator used below was shown previously to agree well with experimental measurements and a well-established MC simulation package (PENELOPE) (Badal and Badano 2009).

The MC simulation was accelerated using Variance Reduction (VR) involving a combination of interaction splitting and forced detection (Colijn and Beekman 2004, Mainegra-Hing and Kawrakow 2010) in which every interacting photon is split into several virtual photons and deterministically transported (ray-traced) to a random position (pixel) on the detector. To avoid bias in the scatter estimate, the virtual photons are weighted by: (i) the ratio of the Compton scatter cross-section for the solid angle covered by the detector pixel to the total Compton scatter cross-section, and (ii) by the inverse of the number of generated virtual photons. VR was validated against analog (unaccelerated) MC simulations and optimized for execution on GPU as described previously and achieved ~6 × reduction in noise at equal runtime compared to the original MC-GPU (Sisniega et al 2013). Simulation run times were measured for each phantom.
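The unbiasing weight described above (cross-section ratio times the inverse of the split count) can be made concrete for Compton scattering using the Klein-Nishina differential cross-section. This is a self-contained sketch of the weight computation only, not of the full forced-detection transport; the function names are ours.

```python
import numpy as np

R_E2 = 7.9408e-26  # classical electron radius squared (cm^2)

def klein_nishina_dcs(theta, E_keV):
    """Klein-Nishina differential cross-section dsigma/dOmega (cm^2/sr)
    for Compton scattering of a photon of energy E_keV at angle theta."""
    k = E_keV / 511.0
    P = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E'/E energy ratio
    return 0.5 * R_E2 * P**2 * (P + 1.0 / P - np.sin(theta) ** 2)

def total_cs(E_keV, n=10000):
    """Total Compton cross-section (cm^2) by trapezoidal integration of
    the differential cross-section over the full solid angle."""
    theta = np.linspace(0.0, np.pi, n)
    integrand = klein_nishina_dcs(theta, E_keV) * 2.0 * np.pi * np.sin(theta)
    dtheta = theta[1] - theta[0]
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dtheta)

def virtual_photon_weight(theta_pix, E_keV, pixel_solid_angle, n_split):
    """Unbiasing weight for one virtual photon forced toward a detector
    pixel: (i) probability of scattering into the pixel's solid angle,
    divided by (ii) the number of virtual photons generated."""
    return (klein_nishina_dcs(theta_pix, E_keV) * pixel_solid_angle
            / total_cs(E_keV)) / n_split
```

Averaged over many interactions, these weights reproduce the analog scatter fluence without bias, which is the property the validation against analog MC checks.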

In the initialization stage of the framework, a (lag- and glare-corrected) reconstruction was segmented into three materials (air, soft tissue, and bone) by simple thresholding (see figure 4: air threshold < − 500 HU, bone threshold > 300 HU). A nominal density was assigned to voxels belonging to a given tissue (0, 1.0, and 1.92 g cm−3 for air, soft tissue, and bone, respectively). A fast MC-GPU simulation (<1 min per scan) was then performed using a very coarse ('sparse') number of photons (10⁶ per frame) and angular sampling (simulating 1 projection every 4° over a 360° orbit) without VR. While such a coarse simulation is too noisy to yield useful scatter fluence maps, it is sufficient to estimate a mean scatter magnitude for each projection (computed from the average over 20  ×  20 pixels at the center of the detector, with bilinear interpolation of projection angles not included in the MC simulation). The estimated mean scatter magnitude was then subtracted from each projection and a pre-corrected volume was reconstructed. At the extremely short simulation times desired for the initialization stage, the scatter estimates obtained with VR tended to show slight bias ('clustering' of detected counts caused by concentration of the virtual particles around the few photons tracked in the simulation). This bias disappears with the increase in the number of particles in the VR simulation, but analog MC provided a more efficient estimation of mean scatter signal in the initialization stage.

The pre-corrected volume then entered the two-pass JS beam hardening correction (section 2.2.3) as illustrated in the flowchart of figure 4. The beam hardening corrected reconstruction provided input to a second-stage scatter estimation using an accurate, computationally efficient MC calculation of scatter fluence maps. For this MC simulation, the volume was segmented into three materials following the 'continuous' tissue model (figure 4) in which the density of each tissue depended linearly on HU. The slope of the linear relationship for each tissue was determined by the HU value assigned to the nominal density of the tissue (as in section 3.3.3).

The second-stage scatter estimate involved a combination of MC simulation with VR, sparse sampling of projection angles (1 projection every 2° over a 360° orbit), and a sparse number of simulated photons (10^5 photons per frame), yielding estimates with significantly lower noise than those obtained without VR in the initialization stage, but still too noisy to be applied directly in the correction. A post-processing step employing kernel smoothing (KS) was thus applied to the noisy scatter signal (SMC) to generate a high-fidelity scatter distribution for the correction (SMC–KS):

$$S_{\text{MC-KS}}(u, v, \theta) = \frac{\sum_i S_{\text{MC}}(u_i, v_i, \theta_i)\, K_{\text{KS}}(u - u_i,\, v - v_i,\, \theta - \theta_i)}{\sum_i K_{\text{KS}}(u - u_i,\, v - v_i,\, \theta - \theta_i)} \qquad (3)$$

where u and v are the horizontal and vertical detector coordinates, respectively, θ is the projection angle, and the index i denotes the discrete u, v, and θ locations at which SMC was obtained. KKS is a 3D Gaussian kernel, isotropic in the detector plane (σu = σv) and extended in the angular direction (σθ ≠ σu, σv), where σu and σv denote the kernel FWHM in u and v, respectively (in pixels), and σθ is the FWHM in the angular direction (in degrees). The optimal KKS for brain imaging was investigated by evaluating scatter estimation accuracy as a function of σu, σv, and σθ. Note that equation (3) estimates scatter values at locations (u, v, θ) different from the lattice used in the simulation (ui, vi, θi), avoiding explicit interpolation in the computation of the complete scatter sinogram from the angularly subsampled SMC.
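
The smoothing amounts to a kernel-weighted (Nadaraya–Watson) average of the sparse MC samples. A minimal sketch for a single query point, treating the Gaussian widths directly as standard deviations (the FWHM-to-sigma conversion is omitted for brevity):

```python
import numpy as np

def kernel_smooth(s_mc, u_i, v_i, th_i, u, v, th, sig_uv=30.0, sig_th=10.0):
    """Estimate the scatter signal at (u, v, th) from sparse MC samples located
    at (u_i, v_i, th_i), using an anisotropic 3D Gaussian kernel that is
    isotropic in the detector plane and has a separate width in angle."""
    w = np.exp(-0.5 * (((u - u_i) / sig_uv) ** 2
                       + ((v - v_i) / sig_uv) ** 2
                       + ((th - th_i) / sig_th) ** 2))
    return np.sum(w * s_mc) / np.sum(w)   # normalized kernel average
```

Because the estimate is evaluated directly at arbitrary (u, v, θ), no separate interpolation over the subsampled angles is needed, as noted above.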

The combined beam hardening and scatter correction process was repeated iteratively to take advantage of the improved segmentation and HU accuracy after each correction step. This iterative process is denoted as the 'correction loop' in figure 4. Nominally, 4 iterations of beam hardening and scatter correction were executed.
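
Schematically, the correction loop has the following control flow. All component corrections are passed in as stand-in callables, so only the ordering mirrors the framework; the names are illustrative, not the authors' API:

```python
def correction_loop(raw_proj, detector_fix, bh_fix, scatter_of, recon, n_iter=4):
    """Skeleton of the iterative (scatter + beam hardening) correction loop:
    detector (lag/glare) corrections are applied once, then each iteration
    reconstructs, applies beam hardening correction, and subtracts a fresh
    MC-based scatter estimate from the detector-corrected projections."""
    proj = detector_fix(raw_proj)            # lag + glare deconvolution (once)
    corrected = proj                         # (initialization pre-correction omitted)
    for _ in range(n_iter):
        vol = bh_fix(recon(corrected))       # beam hardening inside the loop
        corrected = proj - scatter_of(vol)   # smoothed MC scatter estimate
    return recon(corrected)
```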

2.3. Evaluation metrics

Difference images computed with and without a particular component of the artifact correction process provided quantitative evaluation (in HU) of the improvement in image uniformity and reduction of artifacts. Such difference images also clearly characterized the magnitude and spatial distribution of each artifact—e.g. beam hardening at the inner periphery of the skull and lag as an azimuthal streak (comet). Image non-uniformity (mostly caused by cupping) was quantified using seven 10 × 10 × 10 voxel ROIs, one placed at the center of the brain volume (ROIc) and six at the periphery (ROI1–6), as shown in figures 1(d) and (e). Non-uniformity was measured as the mean ($\overline{\text{NU}}$) and maximum (NUMax) difference between the average HU value in each of the peripheral ROIs and the average HU value in the central ROI:

$$\overline{\text{NU}} = \frac{1}{N}\sum_{i=1}^{N} \left| \overline{\text{HU}}_{\text{ROI}_i} - \overline{\text{HU}}_{\text{ROI}_{\text{c}}} \right| \qquad (4)$$

$$\text{NU}^{\text{Max}} = \max_{i} \left| \overline{\text{HU}}_{\text{ROI}_i} - \overline{\text{HU}}_{\text{ROI}_{\text{c}}} \right| \qquad (5)$$

with N = 6 and i = 1, ..., N.
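
These metrics reduce to simple operations on the ROI means; for example:

```python
import numpy as np

def non_uniformity(peripheral_means, central_mean):
    """Mean and maximum non-uniformity: absolute HU difference between each
    peripheral ROI mean and the central ROI mean (N = 6 peripheral ROIs here)."""
    d = np.abs(np.asarray(peripheral_means, dtype=float) - central_mean)
    return d.mean(), d.max()
```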

Image noise was estimated as the average standard deviation of HU values in the same seven ROIs. Contrast was evaluated as the difference in average voxel value between the center of the 12 mm sphere (in each of the four groups of simulated bleeds described in section 2.1) and the adjacent brain background in 8 × 8 × 8 voxel ROIs. The artifact corrections have little or no effect on spatial resolution (e.g. very slight improvement in modulation transfer function associated with the deconvolution steps in sections 2.2.1 and 2.2.2) and resolution measurements were not included in the results for the sake of brevity.

3. Results

The sections below are presented in a form that systematically isolates one or more of the individual artifacts detailed above. The effects of corrections are presented in an order that elucidates the performance of each correction (although different from the order in which they are applied) while minimizing the confounding effects of other artifacts whenever possible.

3.1. Lag correction

In figure 5, narrow beam data were used to isolate the effects of lag and beam hardening by minimizing the influence of scatter and glare through narrow collimation. The top row shows axial slices of reconstructions before (figure 5(a)) and after (figure 5(b)) lag correction. Uncorrected data show a clear 'comet' artifact that is noticeably reduced after lag correction. The difference image in figure 5(c) illustrates the improvement in image uniformity, with the comet artifact (magnitude ±8 HU, a total range of ~16 HU) largely removed. The residual artifact after correction in the strongest region of the comet is ~2 HU.


Figure 5. Lag and beam hardening artifacts. (a, b) Axial images in the superior (ZSup) portion of the head phantom before and after lag correction, with (c) difference image showing the magnitude (±8 HU) and distribution (comet) of lag artifacts. The notation in the title (L) denotes lag correction (only). (d) Axial reconstructions of narrow beam data at ZCtr following lag correction only (L). (e) The same following lag and beam hardening (BH) correction. (f) Difference image.


3.2. Beam hardening correction

Figures 5(d)–(f) investigate the effects of beam hardening correction. Figure 5(d) shows the image at ZCtr corrected for lag only, denoted (L) in the title. Figure 5(e) shows the image following lag and beam hardening correction (L + BH), with the difference image in figure 5(f). Beam hardening manifests as reduced HU (~35 HU magnitude), cupping (~16 HU), streaks (~30 HU, associated with thick and/or dense regions of bone), and blooming (~30 HU at the inner periphery of the cranium). The (L + BH) correction provides a strong reduction in artifacts spanning a range from −40 HU to +80 HU. In the full correction framework, beam hardening correction is applied after lag, veiling glare, and initial scatter correction, but it is presented first here to better isolate its effect.

3.3. Scatter correction

Estimation of scatter fluence is the most computationally challenging component of the correction framework, and careful selection of parameters governing segmentation, MC simulation, and de-noising is necessary to achieve high-fidelity scatter correction within a clinically acceptable runtime (targeting < 5 min). The inclusion of beam hardening correction within the scatter correction 'loop' as in figure 4 was found to improve the accuracy of scatter estimation. These different aspects are explored in the following sections.

3.3.1. Variance reduction and kernel smoothing.

Figure 6(a) shows a scatter fluence map for a PA projection of the head phantom computed using a finely sampled, long-runtime (i.e. gold-standard) MC-GPU simulation with no variance reduction or de-noising, obtained with 10^10 photons in 650 s. Figure 6(b) shows a scatter estimate obtained for the same projection view with VR, but only 10^5 photons and ~1 s runtime (sparse MC as in section 2.2.4). While this distribution is too noisy to be directly applied in the correction loop, it yields a high-fidelity scatter estimate after KS de-noising, as shown in figure 6(c).


Figure 6. Variance reduction and kernel smoothing. (a) Reference (gold-standard) scatter fluence map (PA projection view) obtained with MC-GPU and a large number of photons (10^10) without VR. (b) Sparse MC-GPU scatter estimate for the same projection view as (a) but computed with a lower number of photons (10^5), VR, and shorter runtime (1.2 s per projection). (c) The scatter estimate from (b) after KS. (d) Optimization of KS kernel size. The RMSE is computed with respect to (a) and minimized (RMSE ~0.001) at σu,v = 30 pixels and σθ = 10°.


The size of the de-noising kernel KKS affects the quality of scatter estimation: too narrow a kernel produces noisy estimates, and too wide a kernel results in oversmoothing. Optimization of the KKS parameters (σu,v and σθ) is illustrated in figure 6(d). Kernel smoothing was applied to a sparse MC dataset (180 out of 720 frames, 10^5 photons per frame), and the RMS error between the gold standard and the de-noised scatter estimate was computed for a set of 20 projections evenly distributed over the scan range. A minimum RMSE of ~0.001 was achieved using σu,v = 30 pixels and σθ = 10°. These optimal settings were used to obtain the de-noised scatter distribution in figure 6(c) and yield a 6× reduction in RMSE compared to the initial sparse MC estimate in figure 6(b). Visual comparison confirms agreement with the gold standard at ~500× shorter runtime.

3.3.2. Accuracy of scatter simulation.

The accuracy of scatter correction was evaluated first in the water cylinder phantom. CBCT images were acquired and reconstructed as a function of longitudinal collimation (FOVz), with reduced FOVz providing reduced scatter. Figure 7(a) shows a reconstruction of 'primary-only' projections obtained by extrapolation of projection data to FOVz = 0. Lag and beam hardening corrections were also applied as described above, resulting in a cupping-free image. The reconstruction of the full-FOVz projection data with lag and beam hardening correction, but without scatter correction (figure 7(b)), shows the significant level of cupping attributable to scatter. The reconstruction in figure 7(b) was segmented into PMMA and water with densities of 1.19 and 1.0 g cm^−3, respectively, referred to as a 'known material' segmentation believed to be highly accurate given the simple composition of the phantom. Sparse MC-GPU simulation followed by KS de-noising was performed on the segmented volume, and the resulting scatter distribution was subtracted from the full-FOVz projections, yielding a corrected reconstruction (figure 7(c)) with cupping reduced to a level equal to that of the primary-only reconstruction.
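
The extrapolation to FOVz = 0 can be sketched as a per-pixel linear fit of projection signal versus collimation; the linear dependence on FOVz is an assumption made for this illustration:

```python
import numpy as np

def primary_only_estimate(fovz_values, projections):
    """Per-pixel linear extrapolation of projection signal to FOVz = 0,
    approximating scatter-free ('primary-only') projections. `projections`
    is a stack with one projection per FOVz setting."""
    fovz = np.asarray(fovz_values, dtype=float)
    stack = np.asarray(projections, dtype=float)
    A = np.vstack([fovz, np.ones_like(fovz)]).T          # model: s = a*FOVz + b
    flat = stack.reshape(len(fovz), -1)
    coeffs, *_ = np.linalg.lstsq(A, flat, rcond=None)
    return coeffs[1].reshape(stack.shape[1:])            # intercept b = primary
```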


Figure 7. Accuracy of scatter correction. (a) Reconstruction of primary-only projections approximated by extrapolation to FOVz = 0. (b) Reconstruction of full-FOVz projections. (c) Same as (b) using MC scatter correction based on an idealized ('known material') segmentation of PMMA and water. (d) Same as (b) using the proposed MC scatter correction loop with continuous segmentation. (e) Central image profiles for the reconstructions in (a)–(d).


Figure 7(d) shows the scatter correction performed with the proposed continuous segmentation instead of the idealized known material segmentation. Four iterations of the correction loop were performed, and the cupping is again reduced to a level similar to that of the primary-only data (differences between the center and the periphery of 4–10 HU). The accuracy of scatter correction is further illustrated in central image profiles in figure 7(e).

3.3.3. Effects of segmentation and beam hardening on scatter correction.

CBCT of the head involves soft-tissue and bone areas of varying density which, combined with the presence of beam hardening artifacts, complicate the segmentation process compared to the simple water phantom of section 3.3.2. For such conditions, the continuous tissue model (figure 4) is important to achieve accurate scatter estimates.

HU inaccuracy caused by beam hardening, however, cannot be addressed by the continuous tissue model; it remains a source of segmentation error and therefore affects the quality of scatter correction. The effect of beam hardening correction on the quality of scatter correction is illustrated in figure 8. We define the 'nominal bone HU value' as the HU value assigned to the density of cortical bone in the continuous segmentation of figure 4. Ideally, it should be possible to analytically estimate this nominal bone HU (e.g. from the effective attenuation of cortical bone given by the slope of the logarithm of polyenergetic projection data, computed over a 0–1 mm range of bone thicknesses).
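
The analytic estimate can be illustrated as follows; the spectrum weights and attenuation coefficients here are placeholders, not measured values:

```python
import numpy as np

def effective_mu(spectrum_weights, mu_at_energies, thicknesses_cm):
    """Effective linear attenuation (1/cm): slope of -log(polyenergetic
    transmission) versus material thickness, fit over thin thicknesses
    (e.g. 0-0.1 cm of bone, as suggested above)."""
    w = np.asarray(spectrum_weights, dtype=float)
    w = w / w.sum()
    t = np.asarray(thicknesses_cm, dtype=float)
    trans = np.sum(w[:, None] * np.exp(-np.outer(mu_at_energies, t)), axis=0)
    return np.polyfit(t, -np.log(trans), 1)[0]

def nominal_hu(mu_eff, mu_eff_water):
    """Convert an effective attenuation to a nominal HU value."""
    return 1000.0 * (mu_eff - mu_eff_water) / mu_eff_water
```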


Figure 8. Beam hardening and scatter correction. (a) Reconstruction obtained with beam hardening correction included in the scatter correction loop and with a tissue model based on analytically estimated nominal bone HU. (b) Reconstruction obtained with beam hardening correction after the scatter correction loop for the same nominal bone HU as in (a).


Figure 8(a) shows a reconstruction corresponding to the proposed scatter correction framework in which beam hardening is handled within the correction loop (figure 4), and an analytically estimated nominal bone HU (2200 HU) is used [denoted (Scatter + BH) loop]. Figure 8(b) shows reconstruction with beam hardening correction not included in the loop, instead performed after scatter correction [denoted (Scatter loop) + BH] and employing the same nominal bone HU as in the (Scatter + BH) loop. Significant overcorrection is apparent in the reconstruction obtained with BH applied after scatter correction. A sweep of nominal bone HU values in the [(Scatter loop) + BH] approach (not shown here) revealed that even when this parameter is adjusted empirically, the performance of artifact correction remains inferior to that of the (Scatter + BH) loop, manifesting either as increased streaking or overcorrection, depending on the nominal bone HU.

3.3.4. Effect of scatter correction on image quality.

Figure 9 compares narrow beam data (figure 9(a), acquired with vanishingly small FOVz and assumed to be scatter-free) and full-beam data reconstructed without (figure 9(b)) and with (figures 9(c) and (d)) the proposed MC-based scatter correction. A set of 90 projections with 10^6 photons each (no VR) was simulated for the initialization stage of the MC correction (total runtime = 45 s). Subsequently, 180 projections with 10^5 photons each (with VR) were simulated for each iteration of the scatter correction loop. Total MC simulation time for one iteration of the loop was 216 s. Note that only lag correction was performed on the full-beam data in figure 9(b), whereas the corrected images in figures 9(c) and (d) involve the complete scatter and beam hardening correction loop.


Figure 9. Quality of scatter correction. (a) Narrow beam reconstruction compared to full beam reconstruction (b) without scatter correction and with scatter and beam hardening correction after 1 (c) and 4 (d) iterations of the correction loop. (e) Difference image between the narrow beam and full beam reconstructions without scatter correction. (f, g) Difference between the narrow beam data and the scatter-corrected full beam reconstruction for 1 (f) and 4 (g) iterations of the correction loop.


Visual inspection of the reconstructions indicates reduction in cupping and streak artifacts and improved image uniformity allowing clearer visualization of the simulated bleeds after application of the MC scatter correction. The difference image in figure 9(e) indicates cupping of ~250 HU in the full beam reconstruction with no scatter correction. A difference of more than 300 HU is found in bone areas, along with bright streaks (~50 HU) in the frontal region of the anatomy. After one iteration of scatter and beam hardening correction, the HU differences with the narrow-beam reconstruction were reduced to less than 60 HU in soft-tissue regions (see figure 9(f)) and streaks were noticeably mitigated. Increasing the number of iterations to four further reduced cupping, improved uniformity in soft-tissue regions, and yielded a difference with narrow beam data of less than 20 HU (figure 9(g)). Streak artifacts between bones were almost completely suppressed. Performing more than four iterations provided negligible further improvement.

After scatter correction, deviations up to 300 HU are still seen inside the bone, and a halo ('blooming') is observed adjacent to the skull, yielding approximately −30 HU difference with the narrow beam image. A slight overcorrection of cupping can also be observed at the center of the brain. These remaining differences are most likely due to veiling glare, as treated in the next section.

3.4. Veiling glare correction

Veiling glare has a larger impact on full beam data than on narrowly collimated data due to the long-range nature of the glare spread function. Since scatter (and beam hardening) artifacts can obscure those due to glare in full beam data, the effects of glare are analyzed in reconstructions after scatter correction (although glare correction is applied prior to scatter correction in the framework of figure 2).

Figure 10 shows slices at ZCtr before (figure 10(a)) and after (figure 10(b)) glare correction. Blooming artifacts around bone are reduced, yielding better visualization near the skull. The HU value for bone is increased by more than 40 HU, as illustrated in the difference image in figure 10(c), compensating for the reduced HU value for bone observed after scatter correction in figure 9(g).
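
A minimal sketch of glare correction by Fourier-domain deconvolution follows; the delta-plus-Gaussian model of the glare response and its parameters (`g`, `sigma`) are illustrative stand-ins for the measured glare spread function, not the paper's calibration:

```python
import numpy as np

def deconvolve_glare(proj, g=0.05, sigma=50.0):
    """Fourier-domain deconvolution of a veiling-glare response modeled as a
    delta plus a broad Gaussian carrying a fraction g of the signal."""
    ny, nx = proj.shape
    y = np.fft.fftfreq(ny)[:, None] * ny      # pixel offsets in wraparound order
    x = np.fft.fftfreq(nx)[None, :] * nx
    gauss = np.exp(-0.5 * (x**2 + y**2) / sigma**2)
    gauss /= gauss.sum()                      # long-range tail, unit area
    H = (1.0 - g) + g * np.fft.fft2(gauss)    # transfer function of the model GSF
    return np.real(np.fft.ifft2(np.fft.fft2(proj) / H))
```

Because the glare tail carries only a small fraction of the signal, |H| stays well away from zero and the division is numerically stable.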


Figure 10. Veiling glare correction. (a) Full beam image without glare correction (denoted L + BH + S for correction of other artifacts). (b) The same as (a) with glare correction (denoted L + BH + S + G). (c) Difference image.


3.5. Complete framework

A comparison between uncorrected and corrected full beam data is shown in figure 11. Images in the first row (uncorrected) are non-diagnostic due to strong shading and streak artifacts and a loss of contrast that precludes detection of simulated blood in the brain. The second row demonstrates major improvement in image quality throughout, including lower-attenuation regions (e.g. the superior aspect of the brain, figure 11(e)) and higher-attenuation regions (e.g. the skull base and temporal bones, figure 11(f)).


Figure 11. The artifact correction framework in application to a head phantom presenting simulated intracranial hemorrhage. (Top row) Uncorrected CBCT images. (Middle row) The same images resulting from the artifact correction framework. (Bottom row) Difference images illustrating the magnitude (~450 HU) and spatial distribution of artifacts.


Measurements of non-uniformity, contrast, and noise are provided in table 1. Non-uniformity is reduced to around 20% of its original value. The remaining non-uniformity is mainly caused by the superior region of the brain, which appears at a slightly lower HU value than the center, attributable at least in part to cone-beam artifact from the (flat) superior aspect of the skull. Contrast between simulated bleeds and brain background is significantly increased after artifact correction, with an average absolute improvement of 43 HU. The uniformity of the contrast values of the simulated bleeds is also improved, with a maximum difference of 7 HU, compared to 16 HU before correction. However, the improvement in image quality provided by artifact correction comes at the cost of increased image noise, from 12 HU in the original data to 26 HU after correction.

Table 1. Non-uniformity, noise, and contrast measurements before and after artifact correction.

|  | Non-uniformity |  | Noise | Contrast |  |  |  |
|---|---|---|---|---|---|---|---|
|  | $\text{N}{{\text{U}}^{\text{Max}}}$ | $\overline{\text{NU}}$ |  | C1 (z = 0) | C2 (z = 0) | C3 (z = +40) | C4 (z = −20) |
| Non-corrected | 108.0 HU | 73.5 HU | 12.0 HU | −3.5 HU | −6.1 HU | 12.4 HU | 4.1 HU |
| Corrected | 29.2 HU | 14.8 HU | 26.5 HU | 54.7 HU | 47.4 HU | 49.2 HU | 47.8 HU |

4. Discussion and conclusions

Uncorrected FPD-based CBCT data of the head showed severe degradation in image quality due to numerous artifacts that would confound diagnostic performance in imaging of TBI. Following the comprehensive artifact correction framework, image uniformity was substantially improved and the artifacts were reduced to a level that enables visualization of small bleeds (~1–3 mm) associated with acute mild TBI. Quantitative assessment of detectability under such circumstances is a subject of future observer studies. Non-uniformity in the corrected image was 20% of its original value, and the contrast (simulated blood in brain) was improved by 43 HU, yielding an average contrast value for the blood inserts of 49.7 HU, in agreement with the nominal contrast of 50 HU. While the noise was amplified by the corrections, the resulting average CNR was 1.91, compared to 0.54 for the original, uncorrected data.

As expected, scatter was the primary source of image non-uniformity (accounting for cupping of 250 HU at the center of the phantom) and of high-frequency streak artifacts near bones. The observed level of cupping agrees with previous results: for example, Sisniega et al (2013) reported cupping of 150 HU for a geometry similar to that considered here, using a slightly smaller phantom with less bone, for which cupping is expected to be less pronounced.

An interesting observation regarding the MC-based scatter correction concerns the segmentation method and the tissue models used to obtain the density and material distributions for the MC simulation from an initial reconstruction. A two-material model with a fixed density value for each material is sufficient only for a coarse approximation (such as that used in the initialization stage of the presented method); a continuous tissue model is needed for accurate estimation of the scatter distribution. Furthermore, as discussed in section 3.3.3, beam hardening artifacts have a non-negligible impact on the segmentation step of the MC scatter estimation, creating CT number inaccuracies that propagate into the scatter distribution and force empirical estimation of the nominal bone HU value. Including beam hardening in the scatter loop not only yields improved uniformity and reduced streaks compared to applying the beam hardening correction after scatter correction (likely due to improved HU uniformity of the beam hardening-corrected image), but also allows the use of an analytically estimated nominal bone HU value, thus simplifying design of the tissue model.

Both lag and detector veiling glare were found to appreciably degrade the quality of the reconstructions, the former mainly by introducing image non-uniformity within the brain ('comet' artifact), and the latter by causing blooming artifacts in areas adjacent to the skull, which are of crucial importance for the visualization of epidural bleeds. With the proposed corrections, the artifacts were significantly reduced. Further reduction of the artifacts is anticipated with improvements in the measurement techniques and models used to estimate the lag and glare response functions [e.g. to account for the dependence of FSRF on exposure (Siewerdsen and Jaffray 1999b, Starman et al 2011) or for the dependence of LSFG on the position on the detector].

The minor residual artifacts and HU inaccuracy found in the corrected reconstructions can be associated with a number of additional image-degrading factors that were not considered here. For example, off-focal radiation has been shown to create shading and halo artifacts in conventional CT (Hsieh 2009) similar to those generated by veiling glare; it could account for the remaining blooming and shading around the skull observed in the final corrected volume. Methods for reducing the effects of off-focal radiation have been proposed in the context of conventional CT, ranging from deconvolution of projection data (Luhta et al 2003) to more sophisticated and computationally demanding approaches based on iterative sinogram restoration (La Rivière et al 2006).

Another source of artifacts not included in the present framework is cone-beam effects (Tuy 1983). Such artifacts were visible, for example, in the region of the head in proximity to the (flat) superior aspect of the skull. Cone-beam artifacts can be reduced by projection ray weighting or by two-pass correction approaches in the analytical, FDK-like reconstruction framework (Zeng et al 2004, Tang et al 2005, Mori et al 2006), or they can be addressed by using iterative reconstruction methods that are less sensitive to cone-beam effects (Thibault et al 2007).

One undesired side-effect of the artifact correction framework is noise amplification. In the presented scenario for TBI imaging, the noise level was roughly two times larger after applying the corrections compared to the original uncorrected data (although the net benefit to CNR was an improvement by a factor of 3.5). Possible strategies for compensation of this increase in noise include statistically motivated iterative reconstruction methods that have been shown to improve image quality for noisy CBCT (Nuyts et al 2013, Wang et al 2014). The effect of the artifact corrections on the noise properties of the corrected projections will likely need to be considered in the design of such methods in order to achieve optimal performance—for example, modification of projection data weights in a penalized weighted least-squares (PWLS) method (Dang et al 2014).
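
For instance, a simple delta-method propagation suggests how scatter subtraction could enter PWLS weights; this is a sketch in the spirit of such methods, not the implementation of the cited work. Treating the smoothed scatter estimate s as approximately noiseless and the raw measurement y as Poisson-like (var ≈ y), the log line integral l = −log((y − s)/y0) has variance ≈ y/(y − s)^2, giving inverse-variance weights:

```python
import numpy as np

def pwls_weights(y, s):
    """Inverse-variance PWLS weights for scatter-subtracted line integrals:
    w = 1 / var(l) = (y - s)^2 / y, under the assumptions stated above."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    return (y - s) ** 2 / y
```

Scatter subtraction thus reduces the weight of heavily scatter-contaminated rays, consistent with their lower post-correction SNR.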

The present study used a fairly realistic phantom containing materials with attenuation properties approximating, but not perfectly matching, the attenuation of the simulated natural tissues. Both the background gelatin-iodine mixture (~50 HU) and the acrylic spheres (~100 HU) used as a surrogate for blood showed higher attenuation than their respective simulated tissues (~25 HU for brain background and 40–80 HU for blood). While the absolute attenuation values were larger, the simulated blood-to-brain contrast of 50 HU was consistent with that observed in brain hemorrhage, reported to be 15–55 HU (Parizel et al 2001). However, the simulated blood-to-brain contrast was at the upper end of the range associated with fresh blood in the brain, and lower values could be pertinent to detection of mild TBI. The performance of the artifact correction framework will be assessed under such circumstances in future work; extrapolation of the current results suggests CNR ~1 for 25 HU contrast, which may still be sufficient for reliable detection of large-area lesions (Tward et al 2007). The increased bulk attenuation of the gelatin background (and spheres) should not significantly affect the results reported here; in fact, it may challenge scatter and beam hardening effects to a greater degree than natural tissue, so the results may be considered conservative. Visual inspection of the corrected reconstructions indicates that bleeds down to ~3 mm diameter are detectable in the CBCT images. However, some hemorrhages associated with mild TBI may be as small as 1 mm.
With the benefit of the scatter correction methods shown here, ongoing work toward reliable detection of ~1 mm bleeds includes optimization of system geometry, source and detector configuration, and technique factors based on models of task-based image quality, as well as incorporation of statistical iterative reconstruction methods that improve the resolution–noise tradeoff in CBCT with artifact correction.

The clinical motivation for the current work is in enabling the application of CBCT in imaging of TBI, where the technology could be implemented as a compact, point-of-care solution for rapid assessment of acute disease. A dedicated CBCT scanner for this task is currently in progress, with design guided by a combination of image quality optimization [according to image quality models as in (Siewerdsen and Jaffray 2000, Gang et al 2010, Prakash et al 2011)] and clinical / logistical requirements and constraints. The results presented in this work were obtained using an imaging bench in close approximation to the anticipated system geometry. The setup was based on the configuration of existing compact head CBCT scanners, which have been designed for point-of-care imaging (typically in high-contrast applications) and can thus provide a platform for early deployment of a dedicated system for TBI imaging envisioned here. However, the methodology and results are not limited to CBCT for TBI, and the reduction in artifacts and improvement in contrast-to-noise ratio could enable other applications involving low-contrast soft-tissue CBCT imaging, such as detection of fresh intracranial bleeds. Such tasks include evaluation of hemorrhagic stroke (where rapid access to point-of-care CBCT could be greatly beneficial) and intraoperative detection of intracranial hemorrhages using C-arm CBCT in surgery.

In summary, we have presented a comprehensive framework for artifact correction based on an ultra-fast GPU-accelerated MC scatter correction method, combined with corrections for beam hardening, lag, and veiling glare. The total runtime of the correction method was <4 min per iteration, and significant reduction in artifacts was achieved after only one iteration. After four iterations (total time of 14 min), the image uniformity approached that of a narrow beam reconstruction, and streak artifacts were almost completely suppressed. The interdependence between scatter and beam hardening corrections was investigated, showing that better performance was achieved when beam hardening correction is included in the generation of the input to the MC scatter estimation. While the effects of lag and glare are less pronounced than those of scatter, correction of each effect was important for minimizing image non-uniformity and artifacts, especially in regions adjacent to the skull. The results suggest that the proposed framework yields CBCT images with quality sufficient for reliable imaging of acute TBI and motivate further development and translation of a dedicated system for this application.

Acknowledgments

This work was supported by academic-industry research collaboration with Carestream Health (Rochester NY). Collaboration and useful discussions regarding the MC-GPU package with Dr Andreu Badal and Dr Iacovos Kyprianou (US FDA CDRH, Bethesda MD) are gratefully acknowledged. The authors thank Joshua Levy and colleagues at The Phantom Laboratory (Greenwich NY) for development of the anthropomorphic head phantom and Dr Adam S Wang and Nicholas Uebele (Johns Hopkins University, Baltimore MD) for assistance with the low-contrast simulated brain and bleed formulations.
