
Fisher information analysis of list-mode SPECT emission data for joint estimation of activity and attenuation distribution


Published 20 August 2020 © 2020 IOP Publishing Ltd
Special Issue on Modern Challenges in Imaging. Citation: Md Ashequr Rahman et al 2020 Inverse Problems 36 084002. DOI: 10.1088/1361-6420/ab958b


Abstract

The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon is related to the scattering angle. Based on this premise, we investigated whether the SPECT emission data, including the scatter-window data, acquired in list-mode (LM) format and including the energy information, can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramér–Rao bound (CRB) for the task of jointly estimating the activity and attenuation distribution using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher information method was implemented on NVIDIA graphics processing units (GPUs) for acceleration. The method was applied to quantify the information content of SPECT LM emission data, which contains up to first-order scattered events, in a simulated SPECT system with parameters modeling a clinical system, using realistic computational studies with 2D digital synthetic and anthropomorphic phantoms. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true value of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients. Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scatter-window data, contains information to jointly estimate the activity and attenuation coefficients.


1. Introduction

In single-photon emission computed tomography (SPECT), a radiotracer that emits gamma-ray photons is injected into the patient. From the detected gamma-ray photons, the radiotracer distribution within the patient is reconstructed. However, a fraction of photons scatter as they propagate through the tissue. This leads to scatter and attenuation artifacts. Thus, compensation of scatter and the resultant attenuation, referred to as attenuation and scatter compensation (ASC), is required for reliable reconstruction. ASC is a prerequisite for absolute quantification of the tracer uptake and has been observed to benefit visual-interpretation tasks [1–4]. To perform ASC, an attenuation map of the patient is required. Conventional ASC methods obtain this map using a transmission scan, typically a CT scan of the patient. However, these CT-based ASC methods suffer from several issues, such as higher costs and increased radiation dose. Another issue is the possibility of misregistration between the SPECT and CT scans, which could lead to inaccurate diagnosis [1, 5–8]. Current commercial scanners that perform ASC are often dual-modality SPECT/CT systems, which are substantially more expensive than conventional SPECT systems and often require larger imaging rooms, additional shielding, and more complicated acquisition protocols. Further, a majority (around 80%) of the current SPECT market share is occupied by stand-alone SPECT systems [9]. Additionally, several emerging solid-state-detector-based SPECT systems, which have demonstrated the capability to provide images at low dose, do not have CT imaging capability [10–12]. For all these reasons, a method that estimates the attenuation map using only the SPECT emission data is poised to have a strong impact on the SPECT landscape [1]. Given this high significance, in this manuscript, we address the inverse problem of jointly estimating the activity and attenuation distribution using only the SPECT emission data.

Existing techniques for estimating the attenuation map from SPECT emission data can be divided into two classes. The first class of methods uses the data acquired in the scatter window to reconstruct attenuation images using simple methods such as filtered back-projection (FBP) [13–17]. These methods use the fact that Compton scattering is the dominant photon-interaction mechanism in soft tissue and that the probability of Compton scatter is directly proportional to the attenuation coefficient. Thus, the reconstructed images can show contrast between tissues with different attenuation coefficients. The different regions can be segmented in these images and pre-defined attenuation coefficients can be assigned to these regions. These methods work reasonably well when the activity is widely distributed, but have limitations when the activity is concentrated within a small region [1]. Further, assuming known attenuation coefficients can be inaccurate in organs such as the lungs, where the density varies depending on several factors, including disease state. The second class of methods estimates the attenuation coefficients directly from the emission data. These algorithms either perform iterative inversion of the forward mathematical model [18–21] or exploit consistency conditions based on the forward model [22–25]. However, most of these methods are slow and neglect scattered photons. These techniques have met with limited success [1].

A more recent study explored the potential of inverting the models used for scatter to estimate the attenuation distribution [26]. The study was limited in that it considered only two energy windows, binned data, and two-dimensional (2D) phantoms. However, even with these limitations, it was observed that different regions of attenuation could be distinguished for physical phantoms. The reconstruction results were not very accurate and the computation time was high but, as the authors commented, it was a promising first step. Most importantly, this study showed that inverting models used for scatter can help estimate the attenuation distribution. Similar inversion-based methods [27–29] have shown potential for using scattered data for simultaneous reconstruction of activity and attenuation coefficients.

In SPECT, for each detected photon, several attributes such as the position of interaction of the photon with the scintillator, energy deposited by the photon in the scintillator and the time of interaction can be estimated. The energy deposited by the scattered photon and the angular orientation of the detector can yield information about the location of scatter. To illustrate this intuitively in an idealized scenario, consider a 2D SPECT system that has perfect energy and position resolution and only allows photons perpendicular to the detector face (figure 1). In this case, for any detected photon, the energy and the direction of the detected photon will be precisely known. We know that there is a direct relationship between the angle of scatter and the energy deposited by the scattered photon. Thus, for an ideal system, the scatter angle can be precisely computed using the energy attribute of the scattered photon. In that case, if we assume that the emission source is a point source whose location is known, then we could precisely determine the location at which the photon scattered, as illustrated in figure 1. This information regarding scattering location can then help to estimate the attenuation coefficient from the detected LM events.
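
For reference, the relationship alluded to here is the standard Compton scattering formula (quoted for context; it is not reproduced as a numbered equation in this excerpt):

$E^{\prime} = \dfrac{E_0}{1 + \dfrac{E_0}{m_{\mathrm{e}}c^2}\left(1-\mathrm{cos}\enspace \theta \right)},$

where E0 is the energy of the incident photon, E′ is the energy of the scattered photon, θ is the scattering angle, and mec^2 ≈ 511 keV is the electron rest energy. Thus, for an ideal detector, measuring E′ of a once-scattered photon emitted with known energy E0 determines θ.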


Figure 1. A schematic illustrating the intuition behind how the energy of the scattered photon can help determine the scattering location. Due to the relation between the angle of scatter and energy of scattered photon, in a hypothetical scenario with known point source and ideal collimator and detector, the location of scatter can be determined.


The above-described methods to estimate the attenuation map from the SPECT emission data do not explicitly use the energy attribute of the detected photons. Further, for each detected photon, all the attributes can be stored in a list-mode (LM) format [30]. In the above-described methods, the attribute space is instead discretized into bins and a given photon is allotted to a bin based on its attribute value. As expected, the binning operation leads to information loss. The effect of this binning-related information loss, when the position and time-of-interaction attributes are binned, has been demonstrated on the null functions [31] and on quantification [32] in SPECT.

Our manuscript is motivated by previous studies on inverting the models used for scatter [26–29], by the information loss that is avoided by processing data in LM format [31, 32], and by the premise that the energy attribute contains information about the scattering coefficient. Based on these premises, we investigate whether the SPECT emission data, including the scatter-window data, processed in LM format and including the energy attribute, can be used to jointly estimate the activity and attenuation distribution by inverting the models used for scatter. For this purpose, we develop a novel Fisher-information-based method that quantifies the information content in LM SPECT emission data for jointly estimating the activity and attenuation distribution. The method requires processing SPECT emission data, including the scattered photons, in LM format. We propose a new path-based formalism for this purpose. Application of the proposed Fisher-information-based method to computational studies yields several novel insights about the information content in scattered photons in SPECT. Preliminary versions of the theoretical treatment described in this paper have been presented previously [33, 34].

2. Theory

2.1. Problem formulation and a path-based formalism to process scattered-photon data

Consider an SPECT system consisting of scintillation cameras that detect gamma-ray photons emitted by an object. This object consists of an activity distribution and an attenuation distribution, which parameterize the emission and attenuation of photons, respectively. Consider that the object being imaged is represented in a voxel basis consisting of N voxels, so that the activity and attenuation distributions are N-dimensional vectors, denoted by λ and μ, respectively. Denote the N elements of the activity and attenuation distribution vectors by $\left\{{\lambda }_{1},\dots ,{\lambda }_{N}\right\}$ and $\left\{{\mu }_{1},\dots ,{\mu }_{N}\right\}$, respectively, where λi is the mean number of photons emitted from the ith voxel per unit time and μi is the attenuation coefficient of the ith voxel. When a transmission scan is used to measure the attenuation distribution μ, the inverse problem is to estimate λ. However, in this paper, we consider that a transmission scan is unavailable, so the inverse problem is to jointly estimate λ and μ from only the SPECT emission data.

For each gamma-ray photon emitted from the object that interacts with the scintillator, the position of interaction of the gamma-ray photon with the crystal and the energy deposited at the interaction site are estimated and recorded. Denote the true and estimated attributes of the jth detected event by the attribute vectors Aj and ${\hat{\mathbf{A}}}_{j}$, respectively. The attribute vector is a q-dimensional vector, where q is the number of attributes in each LM event. We assume that the number of detected counts J is fixed, i.e. we have a preset-count system. Note that our analysis is general and can easily be extended to a preset-time system. Denote the full LM dataset of estimated attributes as $\hat{\mathcal{A}}=\left\{{\hat{\mathbf{A}}}_{j},j=1,2,\enspace \dots ,\enspace J\right\}$.

We can represent the LM data as an impulse-valued random process on attribute space [35], given by the generalized function:

Equation (1)

As mentioned above, this data can be binned and, for completeness, we present the mathematical representation of the binned data. A bin can be defined as a hyperrectangle in the attribute space, denoted by the function bm(A) for m = 1, ..., M. Here we assume that the binning functions represent non-overlapping hyperrectangles that encompass the full range of the attribute space. In that case, the measurement in the mth bin, denoted by gm, is given by [36]

Equation (2)

where the integral is over the attribute space. We observe that the binned data is the result of an additional binning function applied to the LM data and thus is intuitively expected to lead to an information loss, as also mentioned above. This has also been observed in previous studies [31, 32, 36]. Thus, we focus our analysis on jointly estimating the activity and attenuation distribution directly from the LM data.

More specifically, we intend to quantitatively determine if the LM data contains information to jointly estimate λ and μ. We use the widely used metric of Fisher information to quantify this information content [35, 37]. Deriving the Fisher information requires computing an expression for the likelihood. This is given by [30]

Equation (3)

where T is the acquisition time required for detecting J LM events. In the above equation, we have used the fact that the acquisition time and each of the list-mode events are independent of each other. Taking the logarithm of both sides of equation (3) yields the log-likelihood of the observed LM data, denoted by $\mathcal{L}\left(\boldsymbol{\lambda },\boldsymbol{\mu }\vert \hat{\mathcal{A}},T\right)$ and given by

Equation (4)
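
Although equations (3) and (4) are not reproduced in this excerpt, the stated independence of the acquisition time and the individual LM events implies a factorized likelihood of the form (a sketch consistent with the surrounding text, not necessarily the paper's exact notation):

$\mathrm{pr}\left(\hat{\mathcal{A}},T\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)=\mathrm{pr}\left(T\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)\prod _{j=1}^{J}\mathrm{pr}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right),\qquad \mathcal{L}\left(\boldsymbol{\lambda },\boldsymbol{\mu }\vert \hat{\mathcal{A}},T\right)=\mathrm{ln}\enspace \mathrm{pr}\left(T\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)+\sum _{j=1}^{J}\mathrm{ln}\enspace \mathrm{pr}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right).$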

To quantify the Fisher information in the LM data for jointly estimating λ and μ, the log-likelihood must be differentiated with respect to the elements of λ and μ. This requires obtaining an analytic expression for $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ and pr(T|λ, μ). For a fixed number of detected counts J, the acquisition time T follows an Erlang distribution with shape J and rate β, where β is the mean rate of photons detected by the detector [38], so that

Equation (5)
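
For completeness, the Erlang density with shape J and rate β referenced here has the standard form (a standard result, stated here since the paper's equation (5) is not reproduced in this excerpt):

$\mathrm{pr}\left(T\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)=\dfrac{{\beta }^{J}{T}^{J-1}\mathrm{exp}\left(-\beta T\right)}{\left(J-1\right)!},\qquad T{ >}0.$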

Obtaining a similar direct analytic expression for $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ is complicated. To address this issue, note that any LM event is the result of a photon emitted from a voxel, traveling in a certain direction, in some cases scattering in certain voxels, and finally being detected by the detector. In other words, any LM event is the result of a photon traveling within a discrete unit of space, which we refer to as a path. The concept of a path will be mathematically defined in the next section, but for now, suffice it to say that a path is a discrete variable, denoted by $\mathbb{P}$. The expressions for the PDF of the LM event given the path, $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$, and the probability mass function of the path, $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$, can be derived, as described later. We thus decompose $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ as a mixture model over all possible paths. For this purpose, we use the following identity:

Equation (6)

where x denotes a continuous random variable, and y and z denote discrete random variables. In the considered scenario, x, y and z correspond to the LM event attributes, emission and attenuation vectors and the path, respectively. Applying this identity yields the following mixture model for $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$:

Equation (7)
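
Combining the identity in equation (6) with the independence argument discussed next, the mixture model of equation (7) presumably takes the form (a sketch consistent with the surrounding text):

$\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)=\sum _{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P},\boldsymbol{\lambda },\boldsymbol{\mu }\right)\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)=\sum _{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right).$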

The components of the mixture model are the probabilities that an LM event occurs given that the photon traces a path, and the weight of each component is the probability of the considered path. Because the LM event has already occurred, the probability of the event given the path is independent of the activity and attenuation distribution, i.e. $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P},\boldsymbol{\lambda },\boldsymbol{\mu }\right)=\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$, provided the probability of the path accounts for the emission and attenuation processes, as will be the case in our treatment. Using equation (4), we can rewrite the log-likelihood of the data given the activity and attenuation distribution in terms of this mixture-model decomposition as

Equation (8)

To derive the expression for the Fisher information, analytic expressions for $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ and $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$ must be derived. These are the topics of the next two sub-sections.

2.2. Computing radiation transfer through a path

In this section, we derive the expression for $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$. We first mathematically define a path. A path is a discrete unit of space that connects different voxels through which photon radiation propagates. A path can be described in terms of a set of sub-paths, where a sub-path describes the unit of space through which radiation propagates between two voxels. To describe the radiation transfer through a sub-path, we use an approach similar to the discrete-ordinates method for solving the equation of radiative transport [39]. Each sub-path is defined in terms of a start location and a finite angular range. First, assume that the directional coordinates are discretized by dividing the angular space of 4π steradians into a finite number of solid-angle sub-domains, referred to as ordinates, each of size ΔΩ. Denote the 3D unit direction vector by $\hat{\boldsymbol{s}}$ and the discretized angular direction associated with the kth ordinate by ${\hat{\boldsymbol{s}}}_{k}$. Then the indicator function of the kth angular ordinate, ${\psi }_{k}\left(\hat{\boldsymbol{s}}\right)$, is defined as follows:

Equation (9)
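
Although equation (9) is not reproduced in this excerpt, an indicator function over the kth ordinate would take the form:

${\psi }_{k}\left(\hat{\boldsymbol{s}}\right)=\begin{cases}1, & \hat{\boldsymbol{s}}\enspace \text{within the }k\text{th solid-angle ordinate},\\ 0, & \text{otherwise}.\end{cases}$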

A sub-path ${\mathbb{S}}_{ik}$ is now defined as a cone with a vertex at the center of the ith voxel and an angular range given by ${\psi }_{k}\left(\hat{\boldsymbol{s}}\right)$. In our analysis, we consider a path between voxels with indices i0, i1, ..., in. This path can be considered to consist of n sub-paths, from i0 to i1, i1 to i2, and so on until in−1 to in. A schematic summarizing these and other notations is presented in figure 2.


Figure 2. A schematic summarizing the various notations used to describe radiation transfer through a path for a 2D setup.


The probability of a particular path $\mathbb{P}$, i.e. $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$, is the ratio of the flux incident on the detector through the path $\mathbb{P}$ to the flux incident on the detector through all possible paths. Let ${\Phi}\left(\mathbb{P}\right)$ denote the flux of photons incident on the detector through a path $\mathbb{P}$. Then,

Equation (10)
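
In other words, restating the definition above (the paper's exact equation (10) is not reproduced in this excerpt):

$\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)=\dfrac{{\Phi}\left(\mathbb{P}\right)}{\sum _{{\mathbb{P}}^{\prime }}{\Phi}\left({\mathbb{P}}^{\prime }\right)}.$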

To derive the expression for ${\Phi}\left(\mathbb{P}\right)$, we first define a few other radiometric quantities. Let $\mathcal{Q}$ denote the radiant energy, the 3D vector r denote a location in space and E denote energy. The fundamental radiometric quantity we use to describe the photon transport is the photon distribution function $w\left(\boldsymbol{r},\hat{\boldsymbol{s}},E\right)$, given by

Equation (11)

The quantity used to describe emission of photons is the source distribution function, denoted by ${\Xi}\left(\boldsymbol{r},\hat{\boldsymbol{s}},E\right)$, and defined as

Equation (12)

Finally, the radiant intensity, denoted by ${\Gamma}\left(\hat{\boldsymbol{s}}\right)$ is defined as

Equation (13)

Consider a point source at location rs in the voxel indexed by i0 emitting photons of energy E0 at a constant rate ${\lambda }_{{i}_{0}}$. Assuming isotropic emission, the source distribution along the sub-path ${\mathbb{S}}_{{i}_{0},{k}_{0}}$ is given by

Equation (14)

where δ(x) is the Dirac delta function. As photons travel from the voxel i0 to i1, a fraction of the photons scatter, leading to attenuation of photons along this path. The effect of the attenuation transform on the distribution function w is given by [35]

Equation (15)

where μ(r, E) denotes the attenuation coefficient at location r and energy E, and ${c}_{\mathrm{m}}$ denotes the speed of light. Applying the attenuation transform to the source distribution function (equation (14)) yields

Equation (16)

Now, it can be shown that [35]

Equation (17)

where

Equation (18)

Using the above relation and the sifting property of the delta function, the integral over l in equation (16) can be simplified to yield

Equation (19)

In the considered path, Compton scattering occurs at some location r within voxel i1. This operation can be described in terms of the scattering operator $\mathcal{K}$ as

Equation (20)

where $K\left(\hat{\boldsymbol{s}},{\hat{\boldsymbol{s}}}^{\prime },E,{E}^{\prime }\vert \boldsymbol{r}\right)$ denotes the scattering kernel and is given by

Equation (21)

where nsc(r) is the density of scatterers and is related to the scattering coefficient by

Equation (22)

where σsc is the scattering cross section. Also, θ is the angle at which the outgoing photon scatters relative to the incoming photon, so $\mathrm{cos}\enspace \theta =\hat{\boldsymbol{s}}\cdot {\hat{\boldsymbol{s}}}^{\prime }$. Finally, the differential scattering cross section $\frac{\partial {\sigma }_{\mathrm{s}\mathrm{c}}}{\partial {\Omega}}$ in equation (21) is given by the Klein–Nishina formula [17]. For notational simplicity, define ${K}_{\mathrm{m}\mathrm{a}\mathrm{g}}\left(\hat{\boldsymbol{s}},{\hat{\boldsymbol{s}}}^{\prime },E\vert \boldsymbol{r}\right)$ by

Equation (23)
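
For reference, the Klein–Nishina differential cross section invoked above is the standard expression (quoted here for context), with re the classical electron radius, E and E′ the incident and scattered photon energies, and θ the scattering angle:

$\dfrac{\partial {\sigma }_{\mathrm{s}\mathrm{c}}}{\partial {\Omega}}=\dfrac{{r}_{\mathrm{e}}^{2}}{2}{\left(\dfrac{{E}^{\prime }}{E}\right)}^{2}\left(\dfrac{{E}^{\prime }}{E}+\dfrac{E}{{E}^{\prime }}-{\mathrm{sin}}^{2}\enspace \theta \right).$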

Also, denote the path integral between any two locations rl and rm by the function γ(rl, rm, E), i.e.

Equation (24)

Substituting the expression for the attenuation transformed source distribution function from equation (19) into equation (20) and using the sifting property of the delta function in angular coordinates yields

Equation (25)

We now integrate this distribution function over all possible locations within the ${i}_{1}^{\text{th}}$ voxel and over all possible energies. This yields the radiant intensity along direction $\hat{\boldsymbol{s}}$ due to photons traveling through the ${\mathbb{S}}_{{i}_{0},{k}_{0}}$ sub-path and scattering within the ${i}_{1}^{\text{th}}$ voxel. Denote this radiant intensity by ${{\Gamma}}_{{i}_{0}{i}_{1}{k}_{0}}\left(\hat{\boldsymbol{s}}\right)$. Substituting the distribution function from equation (25) into equation (13) yields

Equation (26)

where ϕi(r) is the voxel basis function defined as below:

Equation (27)

Assuming that the functions $K\left(\hat{\boldsymbol{s}},{\hat{\boldsymbol{s}}}_{10},E,{E}_{0}\right)$ and $\mathrm{exp}\left[-\gamma \left(\boldsymbol{r},{\boldsymbol{r}}_{\mathrm{s}},{E}_{0}\right)\right]$ do not vary appreciably over locations within voxel i1, we evaluate them with r at the center of the ${i}_{1}^{\text{th}}$ voxel and rs at the center of the ${i}_{0}^{\text{th}}$ voxel. Denoting the centers of the i0 and i1 voxels by r0 and r1, respectively, and denoting the direction vector joining r0 and r1 by ${\hat{\boldsymbol{s}}}_{c10}$, we obtain

Equation (28)

Using the definition of ${\hat{\boldsymbol{s}}}_{10}$ from equation (18) and denoting |r − rs| by R, we perform the change of variables $\boldsymbol{r}-{\boldsymbol{r}}_{\mathrm{s}}=R{\hat{\boldsymbol{s}}}_{10}$ and replace the volume element ${\mathrm{d}}^{3}r$ by ${R}^{2}\mathrm{d}R\enspace \mathrm{d}{\hat{\boldsymbol{s}}}_{10}$. Simplifying further yields

Equation (29)

The integral over R is equal to the distance traversed by the ${\boldsymbol{r}}_{\mathrm{s}}+R{\hat{\boldsymbol{s}}}_{10}$ vector within the ${i}_{1}^{\text{th}}$ voxel, so this distance should vary with ${\hat{\boldsymbol{s}}}_{10}$. However, assuming that this variation is not substantial, we approximate it by the distance covered by the vector ${\boldsymbol{r}}_{0}+R{\hat{\boldsymbol{s}}}_{{k}_{0}}$ in the ${i}_{1}^{\text{th}}$ voxel, which we denote by ${{\Delta}}_{{i}_{1}}\left({\mathbb{S}}_{{i}_{0}{k}_{0}}\right)$. Note that ${\hat{\boldsymbol{s}}}_{{k}_{0}}$ is the discretized direction vector associated with the ${k}_{0}^{\text{th}}$ ordinate. Further, performing the integral over ${\hat{\boldsymbol{s}}}_{10}$ yields the following expression for ${{\Gamma}}_{{i}_{0}{i}_{1}{k}_{0}}\left(\hat{\boldsymbol{s}}\right)$:

Equation (30)

To proceed further, for mathematical tractability, assume that this entire radiant intensity is concentrated at the center of the ${i}_{1}^{\text{th}}$ voxel, i.e. at location r1 and divided uniformly over the sub-path from this voxel that includes the direction $\hat{\boldsymbol{s}}$. Denote this sub-path by ${\mathbb{S}}_{{i}_{1},{k}_{1}}$. Thus, the distribution function along this sub-path, denoted by ${w}_{1}\left(\boldsymbol{r},\hat{\boldsymbol{s}},E\right)$, is given by

Equation (31)

After this scattering operation, the photons along this path suffer from attenuation as they travel toward voxel i2.

This series of operations continues until the path intersects the detector. In the considered path, the last voxel where scattering occurs is voxel in, and the sub-path that connects this last voxel to the detector is denoted by ${\mathbb{S}}_{{i}_{n},{k}_{n}}$. Denote the coordinates on the front face of the detector by rd and the unit vector normal to the detector plane by $\hat{\boldsymbol{n}}$. Then the distribution function at the face of the detector, denoted by ${w}_{d}\left({\boldsymbol{r}}_{d},\hat{\boldsymbol{s}},E\right)$, is given by

Equation (32)

Now the plane of the front face of the detector is given by $\delta \left(p-{\boldsymbol{r}}_{d}\cdot \hat{\boldsymbol{n}}\right)$, where p is the perpendicular distance from the origin to the detector plane. Let the sensitivity of the collimator to a photon emitted from location r in direction $\hat{\boldsymbol{s}}$ be denoted by $t\left(\boldsymbol{r},\hat{\boldsymbol{s}}\right)$. Then, the flux detected through the considered path is given by

Equation (33)

Perform a change of variables by replacing rd − rn by $R\hat{\boldsymbol{s}}$, so that ${\mathrm{d}}^{3}{r}_{d}={R}^{2}\mathrm{d}R\mathrm{d}\hat{\boldsymbol{s}}$. Next, integrating over E and Ω using the sifting property of the delta function yields

Equation (34)

The above expression formalizes the radiation transfer through a path for a general SPECT system. We now derive the specific form of this expression for an SPECT system with a parallel-hole collimator. This collimator passes only photons that are incident within a small range of angles around $\hat{\boldsymbol{n}}$. Thus, assuming $\hat{\boldsymbol{n}}\cdot \hat{\boldsymbol{s}}\approx 1$ when $t\left(\boldsymbol{r},\hat{\boldsymbol{s}}\right){ >}0$, the integral over R yields

Equation (35)

where ${\boldsymbol{r}}_{d0}={\boldsymbol{r}}_{n}+\left(p-{\boldsymbol{r}}_{n}\cdot \hat{\boldsymbol{n}}\right)\hat{\boldsymbol{s}}$ is the coordinate on the detector plane where the gamma-ray photon is incident. Further, assuming the angular ordinates in the object space to be small compared to the angular range allowed by the collimator, we replace $\hat{\boldsymbol{s}}$ with ${\hat{\boldsymbol{s}}}_{{k}_{n}}$, where ${\hat{\boldsymbol{s}}}_{{k}_{n}}$ denotes the discretized direction vector corresponding to the ${k}_{n}^{\text{th}}$ ordinate. Then the integration over $\hat{\boldsymbol{s}}$ can be performed to yield

Equation (36)

where ${\boldsymbol{r}}_{dc0}={\boldsymbol{r}}_{n}+\left(p-{\boldsymbol{r}}_{n}\cdot \hat{\boldsymbol{n}}\right){\hat{\boldsymbol{s}}}_{{k}_{n}}$. To simplify the expression for ${\Phi}\left(\mathbb{P}\right)$, note that in our analysis, we need only consider the dependence of this expression on the emission and attenuation coefficients. Quantities in the above expression that do not depend on these parameters are denoted by ${\Lambda}\left(\mathbb{P}\right)$. Further, to simplify notation, we define ${s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ as

Equation (37)

This term denotes the effective sensitivity of the detector to the path $\mathbb{P}$. Further, denote the activity in the starting voxel of the path, i.e. ${\lambda }_{{i}_{0}}$, by $\lambda \left(\mathbb{P}\right)$. Equation (36) can then be rewritten as

Equation (38)

The attenuation coefficient in equation (37) is a function of energy, which does not lend itself to Fisher information matrix analysis with respect to the attenuation coefficient. To address this issue, based on the energy spectrum of common radionuclides such as Technetium-99m (Tc-99m), we model the energy dependence of the attenuation coefficient [40] as a linear function of energy, i.e.

Equation (39)
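
The exact parameterization of equation (39) is not reproduced in this excerpt; one linear form consistent with the description, stated here purely as an assumption, would be

$\mu \left(E\right)\approx \mu \left(aE+b\right),\qquad a{E}_{0}+b=1,$

so that the photo-peak value μ(E0) = μ is recovered at E = E0.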

Here μ denotes the attenuation coefficient at the photo-peak energy, i.e. μ = μ(E0). Also, a and b denote constants that can be computed from the spectrum. Finally, substituting the expression for ${\Phi}\left(\mathbb{P}\right)$ into equation (10) yields:

Equation (40)

This explicit modeling of the path probability in terms of λ and μ is then used to express the log-likelihood and perform the Fisher information analysis, as we describe later.

2.3. Computing the PDF $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$

The term $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$ denotes the PDF of the measured attribute vector ${\hat{\mathbf{A}}}_{j}$ given that the photon followed a particular path $\mathbb{P}$. This attribute vector, as mentioned above, consists of the position of interaction of the gamma-ray photon with the crystal, denoted by ${\hat{\boldsymbol{r}}}_{j}$, and the energy deposited by the photon in the crystal, denoted by ${\hat{E}}_{j}$. To model the randomness in estimating ${\hat{\mathbf{A}}}_{j}$ due to the uncertainty in the estimation process and the finite energy and spatial resolution of the detectors, we first write $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$ as

Equation (41)

To obtain the expression for $\mathrm{p}\mathrm{r}\left({\mathbf{A}}_{j}\vert \mathbb{P}\right)$, consider a path that connects several voxels, as in the section above. Consider a photon that propagates exactly between the centers of these voxels before reaching the detector. Denote the energy of the photon at the end of the path by ${E}_{\mathbb{P}}$ and the location where the photon interacts with the detector by ${\boldsymbol{r}}_{\mathbb{P}}$. Assuming that the paths are discretized finely enough, we can assume that all the photons within this path will have approximately the same energy and interact with the detector at the same location. Thus, we can write

Equation (42)

where ${\mathbf{A}}_{j}\left(\mathbb{P}\right)=\left\{{\boldsymbol{r}}_{\mathbb{P}},{E}_{\mathbb{P}}\right\}$.

Next, to derive the expression for the term $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert {\mathbf{A}}_{j}\right)$, assume that the finite spatial and energy resolution of the detector and the uncertainty in the estimation of the LM attributes can be modeled by a Gaussian distribution. This is a very reasonable assumption for most SPECT systems [31, 41]. Then, we can write

Equation (43)

where Kdet denotes the covariance matrix quantifying the variances and covariances in the energy and position estimates, and |Kdet| denotes the determinant of the matrix Kdet. Substituting equations (43) and (42) into equation (41) and using the sifting property of the delta function yields

Equation (44)
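
Putting the pieces together, with Aj(P) the noiseless attribute vector for path P and the Gaussian blur model of equation (43), equation (44) presumably takes the form (a sketch consistent with the surrounding text; q is the number of attributes):

$\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)=\dfrac{1}{{\left(2\pi \right)}^{q/2}{\vert {\mathbf{K}}_{\mathrm{d}\mathrm{e}\mathrm{t}}\vert }^{1/2}}\mathrm{exp}\left\{-\dfrac{1}{2}{\left[{\hat{\mathbf{A}}}_{j}-{\mathbf{A}}_{j}\left(\mathbb{P}\right)\right]}^{\mathrm{T}}{\mathbf{K}}_{\mathrm{d}\mathrm{e}\mathrm{t}}^{-1}\left[{\hat{\mathbf{A}}}_{j}-{\mathbf{A}}_{j}\left(\mathbb{P}\right)\right]\right\}.$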

We now have the expressions for $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ (equation (40)) and $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)$ (equation (44)) required to formalize the likelihood of the LM data. We proceed to derive the expression for the elements of the Fisher information matrix.

2.4. The Fisher information matrix (FIM) for the LM data

The general expression for the elements of an FIM is given by

Equation (45)
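
For context, the standard form of an FIM element, consistent with the description that follows, is

${F}_{q{q}^{\prime }}=\mathrm{E}\left[\dfrac{\partial \mathcal{L}}{\partial {\theta }_{q}}\dfrac{\partial \mathcal{L}}{\partial {\theta }_{{q}^{\prime }}}\right]=-\mathrm{E}\left[\dfrac{{\partial }^{2}\mathcal{L}}{\partial {\theta }_{q}\partial {\theta }_{{q}^{\prime }}}\right],$

with the expectation taken over the LM data and the acquisition time at the true parameter values.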

where θq and θq' denote the parameters we intend to estimate, which in our case are the activity or attenuation coefficients in the qth and q'th voxels of the object, and where $\mathcal{L}\left(\boldsymbol{\lambda },\boldsymbol{\mu }\vert \hat{\mathcal{A}},T\right)$ is the log-likelihood of the observed LM data (equation (8)). Substituting the expression for $\mathrm{Pr}\left(\mathbb{P}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ from equation (40) into equation (8), and further using equation (5), yields

Equation (46)

Here β, which is the mean rate of photons detected, is equivalently the total radiant flux over all possible paths. Thus, β is given by

Equation (47)

Using equation (47) to substitute for β and differentiating the log-likelihood with respect to the activity in the qth voxel λq yields

Equation (48)

where the summation in the numerator is only over the paths that start from voxel q, denoted by ${\mathbb{P}}_{q}$. Similarly, differentiating the log-likelihood (equation (46)) with respect to the attenuation coefficient in the qth voxel, i.e. μq, yields

Equation (49)

where ${\zeta }_{q}\left(\mathbb{P}\right)$ and ${{\Delta}}_{q}\left(\mathbb{P}\right)$ are the number of scatter events occurring in the qth voxel in the considered path and the distance that the considered path covers in the qth voxel, respectively. To derive the FIM elements, we must differentiate equations (48) and (49) further with respect to the activity and attenuation coefficients in the q'th voxel and then average over the observed LM data. The derivations are detailed in appendix A and the final expressions are as follows:

Equation (50)

Equation (51)

Equation (52)

Equation (53)

Since these expressions cannot be simplified further, we evaluate them using Monte Carlo integration over simulated LM data, thus yielding the elements of the FIM. The inverse of the FIM yields the Cramér–Rao bound (CRB), which is the lower bound on the variance of any unbiased estimate of the activity and attenuation coefficients from the SPECT emission data. Thus, using the CRB, we can quantify the information content of the SPECT emission data on the task of jointly estimating activity and attenuation.
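
Concretely, the bound reported in the results below is the diagonal of the inverse FIM (standard CRB usage; the notation is ours):

$\mathrm{Var}\left({\hat{\theta }}_{q}\right)\geqslant {\left[{\mathbf{F}}^{-1}\right]}_{qq},\qquad {\sigma }_{\mathrm{C}\mathrm{R}\mathrm{B},q}=\sqrt{{\left[{\mathbf{F}}^{-1}\right]}_{qq}},$

for any unbiased estimator ${\hat{\theta }}_{q}$ of the activity or attenuation coefficient in voxel q.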

3. Methods

3.1. SPECT imaging system and LM data acquisition

To evaluate the information content of the LM emission data for joint reconstruction, we simulated a clinical 2D parallel-hole SPECT system configuration similar to GE's Optima NM/CT 640 SPECT/CT system. Details of this SPECT system geometry are given in section 3.3. The geometric sensitivity of the detector to different paths, the finite extent and bore diameters of the collimator, and the finite energy and spatial resolution of the detectors were all modeled. For generating the LM data, we simulated the photon transport via a Monte Carlo-based approach in MATLAB. The scattering was modeled using the Klein–Nishina formula, normalized for a 2D system so that all scattering was in plane. While simulating the photon transport, we considered photons that scattered at most once or twice, depending on the experimental setup. This was done to reduce the computational requirements of the FIM computation code, as we discuss in the next section. For each detected photon, the 1D estimated position of interaction of the incident gamma-ray photon with the scintillator, the estimated energy of the gamma-ray photon, and the angular orientation of the detector were recorded in LM format.

3.2. Implementing the Fisher information approach

To compute the FIM using the proposed path-based approach, software was developed in the C programming language with GPU acceleration using the CUDA parallel computing platform. The first step was to implement the path-based formalism to describe the radiation transfer through different paths, as described in section 2.2. This formalism was validated through several studies by comparing it with the results obtained using the Monte Carlo approach described above (section 3.1). The results obtained with the two approaches, for example the photon flux, the energy spectrum, and the profiles of the projection data, were found to match. We do not show these results since our focus in this paper is on studying the information content of LM data.

The validated path-based formalism was then used to develop software to compute the FIM terms, as described by equations (50)–(53). For each detected LM event, we considered all the possible unscattered and scattered paths that the photon could have taken, regardless of the energy of the photon. In other words, photons were not labeled a priori as scattered or unscattered, since even a photon that falls in the photopeak (PP) window may have scattered. As is conventional, the PP window was defined to have a width equal to twice the full width at half maximum (FWHM), centered at the photopeak energy. Further, in all our experiments, the energy window between 90 keV and the lower threshold of the PP window was considered the scatter window. The FIM elements were computed at the true value of the parameters. These FIM terms were used to compute the CRB, which was then used to compute the lower bound on the standard deviations of the activity and attenuation coefficients in the different voxels.
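
As a worked example of these window definitions, using the 10% energy resolution at 140 keV specified in section 3.3 (the exact window edges are not stated here, so these numbers are illustrative):

$\mathrm{FWHM}=0.10\times 140\enspace \mathrm{keV}=14\enspace \mathrm{keV},\qquad \text{PP window}=140{\pm}14\enspace \mathrm{keV}=\left[126,154\right]\enspace \mathrm{keV},\qquad \text{scatter window}=\left[90,126\right]\enspace \mathrm{keV}.$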

A major challenge in the FIM computation was the large computational and memory requirements. These requirements were addressed using various algorithmic strategies. For example, to reduce the memory requirements, certain quantities that did not require large computation times were pre-computed and stored. This included quantities such as the radiological path between the different voxels, the distance covered by a sub-path inside a voxel, and the sensitivity of the collimator as a function of the angular and spatial voxel index. Further, the code was parallelized and executed on graphics processing units (GPUs) for faster execution. More specifically, quantities that require a sum over paths $\mathbb{P}$, such as $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ and $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)\left[-{{\Delta}}_{q}\left(\mathbb{P}\right)+\frac{{\zeta }_{q}\left(\mathbb{P}\right)}{{\mu }_{q}}\right]$, were computed in parallel and summed using a reduction algorithm. A more detailed procedure to compute the FIM terms, including pseudocode for the GPU-based implementation, is presented in appendix B.
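
To illustrate this parallelization strategy, the following is a minimal CUDA sketch, not the authors' code, of how a per-path quantity such as $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ could be evaluated in parallel over paths and accumulated with a block-level reduction; the kernel and variable names are hypothetical.

#include <cuda_runtime.h>

/* Hypothetical per-path term, e.g. pr(A_hat_j | path p) * s_eff(p).
   In practice these factors would be computed from pre-stored radiological
   paths, sub-path lengths and collimator sensitivities. */
__device__ double pathTerm(int p, const double* prA_given_path, const double* s_eff)
{
    return prA_given_path[p] * s_eff[p];
}

/* Each block accumulates a partial sum over a grid-stride range of paths in
   shared memory, performs a tree reduction (blockDim.x must be a power of two),
   and adds its partial sum to the global result with one atomicAdd per block
   (double-precision atomicAdd requires compute capability >= 6.0). */
__global__ void sumOverPaths(const double* prA_given_path, const double* s_eff,
                             int numPaths, double* result)
{
    extern __shared__ double sdata[];

    double local = 0.0;
    for (int p = blockIdx.x * blockDim.x + threadIdx.x; p < numPaths;
         p += gridDim.x * blockDim.x)
        local += pathTerm(p, prA_given_path, s_eff);

    sdata[threadIdx.x] = local;
    __syncthreads();

    /* Tree reduction within the block. */
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            sdata[threadIdx.x] += sdata[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        atomicAdd(result, sdata[0]);
}

/* Host-side launch sketch:
   sumOverPaths<<<numBlocks, 256, 256 * sizeof(double)>>>(d_prA, d_seff, numPaths, d_sum);
*/

In the actual implementation, such sums would be evaluated once per detected LM event and the resulting per-event quantities combined into the FIM elements of equations (50)–(53).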

However, even after these computational optimizations, we observed that the code took substantial time to execute. A major reason for this is that the code scales as the number of paths, P, which in turn scales as the square of the number of voxels and as the number of angular samples used to define a path. Further, if we consider photons that have scattered M times, the number of paths scales as P^M. To reduce this computational expense and conduct experiments within a practical time limit, we considered phantoms with 32 × 32 pixels. Further, for most of the experiments, we modeled photons that scattered at most once, although in a subset of experiments, photons that scattered twice were modeled. In particular, for phantoms that had activity only within a single pixel, we could computationally model the dual-scattered photons in the FIM computation code within a practical time limit. Thus, in summary, the code was implemented to model single- and dual-scattered events.

3.3. Experiments

In these experiments, our objective was to use the FIM computation framework to study the CRB of SPECT LM data for joint estimation of the activity and attenuation distribution in simulation studies. The emission source was assumed to be Tc-99m, one of the most commonly used SPECT tracers, emitting photons at 140 keV. Photons were acquired at 64 angular positions spaced uniformly over 360°. A low-energy high-resolution parallel-hole collimator, with specifications similar to the GE Optima NM/CT 640 SPECT/CT scanner, was simulated. The system yielded a resolution of 7.8 mm at 10 cm depth. The scintillation detector had an intrinsic resolution of 4 mm and a length of 35 mm. Further, the energy resolution of the scintillation detector was set to 10% at 140 keV.

The system was first used to image a set of synthetic phantoms. For the synthetic phantoms, a circular field of view with a diameter of 35 cm and 32 × 32 spatial pixels was considered. Two attenuation-map configurations were considered: a uniform attenuation map with a constant attenuation value of 0.15 cm−1 inside the field of view (figure 3(a)), and a non-uniform attenuation map (figure 3(b)) that simulated the cardiac region, with attenuation coefficients of 0.151 cm−1 and 0.056 cm−1 in the background and lung regions, respectively. Three activity distributions were considered, namely a phantom with radiotracer uptake confined to a single pixel (figure 4(a)), uptake in multiple isolated pixels (figure 4(d)), and uptake over a donut-shaped region (figure 4(g)), referred to as the single-pixel, multi-pixel, and donut phantom, respectively. Experiments with these synthetic phantoms were conducted to gain an understanding of how the activity distribution in uniform and non-uniform attenuation maps affects the information content of the scattered photons.


Figure 3. True (a) uniform and (b) non-uniform attenuation map.


Figure 4. (a) The single-pixel phantom with on-axis activity. (d) The multi-pixel phantom with activity in off-axis locations. (g) The donut-shaped phantom. For uniform attenuation map, as shown in figure 3(a), the normalized standard deviation of the estimate of the attenuation coefficients for the different pixels computed using the proposed approach for the (b) single-pixel phantom (e) multi-pixel phantom and (h) donut-shaped phantom. For non-uniform attenuation map (shown in figure 3(b)), the normalized standard deviation of the estimate of the attenuation coefficients for the different pixels computed using the proposed approach for the (c) single-pixel phantom (f) multi-pixel phantom and (i) donut-shaped phantom.


Two anthropomorphic phantoms, namely a cardiac and a brain phantom, were also imaged to provide more clinical realism to our studies. The cardiac activity (figure 5(a)) and attenuation (figure 5(b)) phantoms were generated using a 2D slice of the extended cardiac-torso (XCAT) phantom [42]. The brain activity (figure 5(d)) and attenuation (figure 5(e)) phantoms were generated using a 2D slice of the Zubal phantom [43]. For the study with the cardiac phantom, the SPECT system parameters were similar to those described above. For the study with the brain phantom, we simulated a DaTScan SPECT study [44] by modeling a system that imaged the ioflupane (I-123) tracer, emitting photons with an energy of 159 keV, and had a circular field of view with a diameter of 30 cm.


Figure 5. (a) True activity map, (b) true attenuation map and (c) normalized standard deviation of attenuation map for cardiac phantom. (d) True activity map, (e) true attenuation map and (f) standard deviation of attenuation map for brain phantom.


LM data for these phantoms were generated using the simulated SPECT system. In the first set of experiments, photons that scattered more than once were discarded. The FIM for the LM data was computed and used to determine the CRB for the activity and attenuation estimates in the different pixels. The first experiment was conducted with 4 × 10^5 detected LM events, which is a clinically realistic count level in cardiac SPECT studies for a 2D slice [45]. Next, the detected photon counts were reduced from 4 × 10^5 to 40 000 to study the CRB at different count levels. We also computed the CRB of the activity and attenuation coefficients when only events within the PP window were considered. This study was conducted to assess the impact of including the scattered photons on the CRB of these coefficients. Next, we studied the effect of varying the energy resolution of the system.

In the second set of experiments, we considered photons that scattered at most twice. These experiments were conducted for the single-pixel activity phantom with the non-uniform attenuation map (figure 3(b)). This dual scattering was modeled in the FIM code. Our objective was to assess the impact of including dual-scattered photons on estimating the attenuation coefficient. In the first experiment, we varied the number of detected LM events and computed the CRB for the attenuation coefficients. Our second experiment investigated the impact of changes in the attenuation coefficient on the CRB. The rationale behind this experiment was that an increase in the true value of the attenuation coefficient causes an increase in the number of scattered photons, but also increases the proportion of dual-scattered photons. An important question is whether, even in these scenarios, the scattered photons provide information to estimate the attenuation coefficients. In this experiment, we varied the attenuation coefficient in the lung region of the non-uniform attenuation map (figure 3(b)).

4. Results

The standard deviation values of the activity and attenuation coefficients obtained for each pixel were normalized by dividing by the true value of the activity and attenuation coefficient of that pixel, respectively. This was done for easier interpretation of the results, and in particular to assess whether the standard deviation is smaller than the true value. This normalization is especially helpful when analyzing results from phantoms with non-uniform attenuation maps. The computed normalized standard deviation of the attenuation coefficients for the different phantoms when 4 × 10^5 detected events were considered is shown in an image format in figures 4 and 5. In all the results, including those with the cardiac (figure 5(c)) and brain (figure 5(f)) phantoms, the standard deviation of the attenuation coefficient for all the pixels was lower than the true value of the attenuation coefficient. Further, the standard deviation of the attenuation coefficient was low in the pixels that had non-zero activity values. This observation is most apparent in the synthetic phantoms, in particular the single-pixel and multi-pixel phantoms. For example, in the single-pixel phantom, the standard deviation was lowest in the center pixel, the only pixel with activity (figures 4(b) and (c)). Also, the standard deviation increased radially as we moved away from this pixel for the uniform attenuation map (figure 4(b)). Further, pixels with uptake had a standard deviation much lower than the true value of the attenuation coefficient.
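
In equation form, the quantity displayed in these maps is (notation ours, consistent with the description above)

${\sigma }_{\mathrm{n}\mathrm{o}\mathrm{r}\mathrm{m},q}=\dfrac{\sqrt{{\left[{\mathbf{F}}^{-1}\right]}_{qq}}}{{\theta }_{q}},$

where θq is the true activity or attenuation coefficient in pixel q.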

The effect of varying the number of detected LM events on the CRB for the attenuation and activity coefficients is shown in figures 6 and 7, respectively. In these plots, we show the mean of the normalized standard deviation of the activity and attenuation coefficients over all pixels having non-zero activity and attenuation, respectively. We observe that the mean standard deviation values are lower than the true values for the synthetic and anthropomorphic phantoms at all count levels (figures 6(a) and (b), 7(a) and (b)). In contrast, when using only the data in the PP window (figures 6(c) and 7(c)), the standard deviations of the activity and attenuation coefficients were infinite for the single-pixel and multi-pixel phantoms in the uniform attenuation map, irrespective of the amount of activity. Thus, inclusion of scatter-window photons had a significant impact on the CRB for these phantoms. For the other phantoms too, while the PP data provided a finite CRB, it was much higher than the true value of the attenuation coefficient (figure 6(c)). When the scatter-window photons were included, the CRB was significantly lowered and became smaller than the true attenuation coefficient (figure 6(b)). All these results demonstrate that the scatter-window photons contain information to estimate the attenuation coefficient.


Figure 6. The mean of the normalized standard deviation of the attenuation coefficient as a function of the number of detected LM events for (a) synthetic phantoms in the uniform attenuation map, (b) anthropomorphic phantoms and synthetic phantoms in the non-uniform attenuation map, and (c) synthetic and anthropomorphic phantoms when only photo-peak events are considered.


Figure 7. The mean of the normalized standard deviation of the activity as a function of the number of detected LM events for (a) synthetic phantoms in the uniform attenuation map, (b) anthropomorphic phantoms and synthetic phantoms in the non-uniform attenuation map, and (c) synthetic and anthropomorphic phantoms when only photo-peak events are considered.


We observe in figures 6(a) and (b), 7(a) and (b) that as the number of detected counts increases, the standard deviation decreases for all the phantoms. An increase in the number of detected photons also corresponds to an increase in the number of scattered photons. This provides further evidence that scattered photons contain information about the attenuation coefficient.

In figure 8, the mean of the normalized standard deviation of the attenuation distribution is plotted as a function of the energy resolution of the SPECT system. We observed that as the energy resolution improved, the standard deviation of the attenuation coefficients decreased. This finding shows that the energy attribute of the photons contains information that facilitates estimating the attenuation distribution. Further implications of this result are considered in the discussions section.


Figure 8. The mean of the normalized standard deviation of the attenuation coefficients as a function of the energy resolution for the different synthetic phantoms with (a) uniform and (b) non-uniform attenuation maps and (c) anthropomorphic phantoms. The normalized standard deviation was calculated for 4 × 10^5 detected LM events.


The normalized standard deviation of the attenuation map when up to second-order scattered events are considered is shown in figure 9(a) for the single-pixel activity phantom in the non-uniform attenuation medium. We observe that the standard deviation values for all the pixels are lower than the true attenuation coefficient, similar to the single-scatter case. In fact, the standard-deviation map is very similar to the case where up to first-order scattered events were considered (figure 4(c)). This indicates that photons that have scattered at most twice also contain information to estimate the attenuation distribution. The mean of the normalized standard deviation of the attenuation coefficient as a function of the lung attenuation is shown in figure 9(b) for the dual-scatter case. The value decreases as the attenuation coefficient increases, for all count levels. These results indicate that even when the proportion of dual-scattered photons increases, the scattered-photon data contains information to estimate the attenuation distribution.


Figure 9. (a) Normalized standard-deviation map of the attenuation coefficients for the single-pixel phantom in the non-uniform medium (figure 3(b)) with 4 × 10^5 counts, where up to two scatter events are considered, and (b) the mean of the normalized CRB-derived standard deviation as a function of the lung attenuation for different photon counts in the non-uniform phantom for the dual-scatter case.


5. Discussions

Overall, the presented results show that photons that have scattered at most twice and are acquired in LM format contain information to jointly estimate the activity and attenuation coefficients. In particular, results in figures 4 and 5 show that for synthetic as well as anthropomorphic digital phantoms, the standard deviation of the attenuation coefficient is lower than the true value. Additionally, when considering up to first-order scatter, the CRB is typically lower in pixels having non-zero activity uptake, and increasing the number of detected counts improves the information content. Similar results were obtained in the case where second-order scattered photons were considered for a single-pixel phantom.

The results in figure 8 showed the importance of the energy attribute in estimating the attenuation coefficient. We also conducted a study to assess the impact of binning the energy attribute on the CRB of the attenuation coefficients. To study this effect explicitly, the scintillation detector was assigned a very high energy resolution of 0.5 keV. In the binning process, the entire LM dataset was binned into different numbers of bins based on the energy value. For each configuration, we set the ranges of the energy bins such that approximately equal numbers of photons were present in each bin. All photons with energies within a bin were assigned the energy at the center of the bin. The CRB for the binned and LM data were compared. The mean of the normalized standard deviation of the attenuation coefficient for the single-pixel phantom with LM data, and with data where the energy attribute was binned into different numbers of bins, is plotted in figure 10. We observed that as the number of energy bins increased, the standard deviation of the attenuation coefficient decreased. Most importantly, the LM data yielded a lower standard deviation for the attenuation coefficient in comparison to even the case where four energy bins were considered. These results are in agreement with other recently conducted studies [31, 32, 36, 46, 47], which have all shown that binning of LM attributes leads to loss of information.


Figure 10. The mean of the normalized standard deviation of the attenuation coefficient as a function of the number of LM events for the single-pixel phantom with (a) uniform and (b) non-uniform attenuation maps. The results are shown for LM emission data and for cases where the emission data was binned into different numbers of energy bins.


The results in figure 8 suggest that detectors with better energy resolution can improve the joint estimation of the activity and attenuation distribution. In this context, the emerging solid-state detectors such as cadmium–zinc–telluride (CZT) detectors currently provide an energy resolution of 6% and could theoretically provide up to 1.5% energy resolution [48]. Our results show that such an increase in energy resolution could improve the joint activity-attenuation estimation capability of these systems. This is highly significant since several CZT-based systems for cardiac imaging, such as those from Spectrum Dynamics (D-SPECT) and GE (NM 530c), typically have high sensitivity and high energy, temporal, and spatial resolution. Further, these systems have demonstrated the capability to obtain low-dose SPECT images. Some of these solid-state systems are also lightweight and portable, such as the Cardius XPO-M system from Digirad, enabling mobile SPECT in remote locations. However, these systems often do not have CT imaging capability. A recent study has shown that ASC leads to improved diagnostic accuracy with these solid-state-detector systems [11]. Thus, enabling ASC using only the SPECT emission data for these systems would have significant impact.

A limitation of our study is that we consider photons that scatter at most twice. While the theoretical formalism that we have developed provides the mathematical expressions to conduct such a study, we were limited by the computational requirements. The percentage of photons that scatter more than twice in clinical SPECT is relatively small [49, 50]. Nevertheless, evaluating the information content of these photons is an important research frontier. This also has applications in other imaging modalities where incoherent scatter occurs.

Another limitation of our study is that, while we studied the performance of our method with different kinds of phantoms, including anthropomorphic phantoms, in all cases the phantoms were discretized over a 32 × 32 grid. However, in SPECT, the reconstructed images are typically discretized over a 64 × 64 or 128 × 128 grid. This limitation arises due to the computational and memory requirements of the software. Advances in computational hardware provide a mechanism to address this challenge. At the same time, our results do indicate the possibility of reconstructing the attenuation distribution over a low-resolution 32 × 32 grid. Thus, one possibility is that this low-resolution attenuation map could be interpolated to a higher-resolution attenuation distribution, which could then be used for attenuation compensation. Evaluating the efficacy of such an approach would require studies that objectively assess the quality of the reconstructed activity images on clinical tasks. A third limitation is that the study was conducted for a 2D SPECT system with 2D phantoms. However, the theoretical treatment is completely general, and implementing this study for a 3D SPECT system is an important direction for future research. Finally, our method does not model a few processes, such as inter-septal and inter-crystal scatter. Modeling these additional processes would make the study even more realistic.

Results from this study motivate the application of this method to anthropomorphic physical-phantom studies and to patient data. These studies will provide further insights into the information content of SPECT emission data for joint reconstruction in clinically more realistic settings. The results from this study also motivate the development of methods to jointly estimate the activity and attenuation distributions using only the SPECT emission data, especially if the computational requirements of processing LM data can be reduced. Efforts in the direction of reducing these computational requirements are currently underway [51, 52].

6. Conclusions

We have investigated the information content of LM SPECT emission data, which includes the scatter-window data and the energy attribute for each detected photon, for the task of jointly estimating the activity and attenuation distributions. For this purpose, we developed a Fisher-information-based method that yielded the CRB for the activity and attenuation coefficients from SPECT LM data for the joint estimation task. In the process, we also proposed a path-based formalism to process the LM scattered-photon data. Computational experiments with a simulated 2D SPECT imaging system, and with synthetic and anthropomorphic digital phantoms at different photon-count levels, demonstrated that photons that had scattered at most once contain information about the attenuation coefficients. Similar results were observed when photons that had scattered at most twice were considered. The standard deviation of the attenuation coefficient was lower than the true attenuation-coefficient value for clinical count levels. Further, improving the energy resolution of the SPECT system yielded more information about the attenuation coefficients. Overall, the results provide promising evidence that LM SPECT emission data, including the energy attribute and the scatter-window data, contain information to jointly estimate the activity and attenuation distributions.

Acknowledgments

This work was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Grant Numbers R21-EB024647, R01-EB016231, and R01-EB000803, and by an SNMMI Bradley-Alavi Fellowship. Support from the NVIDIA GPU Grant is also acknowledged. The authors thank Drs Harrison H Barrett, Brian Hutton, and Jonathan Links for helpful discussions. The authors would also like to thank Paul Segars and Duke University for providing the XCAT phantom.

Appendix A.: Deriving elements of FIM

To derive the elements of the FIM, we start from equations (48) and (49). Differentiating equation (48) with respect to the activity in the q'th voxel, λq', yields

Equation (54)

Differentiating equation (49) with respect to μq' gives

Equation (55)

Similarly, differentiating equation (48) with respect to μq' gives

Equation (56)

Differentiating equation (49) with respect to λq' yields

Equation (57)

To obtain the elements of the FIM for a given value of the activity and attenuation coefficients, the quantities obtained in equations (54)–(57) must be averaged with respect to the observed LM data at those values of the activity and attenuation coefficients. Before averaging, note that

Equation (58)

where in the second step, we use the expressions from equation (40) and equation (47), while in the third step, the mixture-model definition (equation (7)) is used.

To evaluate the FIM elements with respect to the attenuation coefficients, start from equation (55) and consider the second term in this equation. Substitute equation (58) into the denominator of the second term in equation (55). Next, average over the LM attributes $\hat{\mathcal{A}}$. To perform the averaging operation over $\hat{\mathcal{A}}$, note that $\mathrm{p}\mathrm{r}\left(\hat{\mathcal{A}}\vert J,\boldsymbol{\lambda },\boldsymbol{\mu }\right)=\prod _{j=1}^{J}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$. Thus, $\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \boldsymbol{\lambda },\boldsymbol{\mu }\right)$ in the denominator cancels with the corresponding term in the expression for $\mathrm{p}\mathrm{r}\left(\hat{\mathcal{A}}\vert J,\boldsymbol{\lambda },\boldsymbol{\mu }\right)$ in the numerator. Marginalizing over the rest of the variables reduces the second term in equation (55) to

Equation (59)

The third term in equation (55) is independent of the event attributes. The measurement time T is an Erlang-distributed random variable with shape J, rate β, and mean $\frac{J}{\beta }$. Thus, after averaging, the second term is the negative of the third term in equation (55); the two cancel, leading to equation (50).
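For reference, the standard Erlang results used in this step (a textbook fact, not a result derived in this work) are

$$\mathrm{p}\mathrm{r}\left(T\vert J,\beta \right)=\frac{{\beta }^{J}{T}^{J-1}{\mathrm{e}}^{-\beta T}}{\left(J-1\right)!},\qquad \langle T\rangle ={\int }_{0}^{\infty }T\,\mathrm{p}\mathrm{r}\left(T\vert J,\beta \right)\mathrm{d}T=\frac{J}{\beta },$$

so that averaging any term that is linear in T over the distribution of T simply replaces T by J/β.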

Performing a similar analysis starting from equations (56) and (54) leads to equations (52) and (51), respectively.

Algorithm 1. GPU-accelerated algorithm for calculating FIM terms.

 Input: λ, μ (N-dimensional arrays), lmData (LM data containing attributes of J LM events)
 Data: sysParam (SPECT system geometry)
 Output: Fisher Information Matrix, FM of size 2N × 2N
1. Initialize GPU context
2. Calculate and store the intersection lengths and corresponding voxel indices for the different sub-paths in the arrays Δ and voxIndex, respectively
3. Calculate and store the radiological path, radPath (equation (24)), using μ, Δ and voxIndex
4. Calculate the number of voxels with non-zero activity, Nnz
5. Set appropriate grid, block, and shared-memory sizes for each kernel
6. /* Store ${\sum }_{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ for each j in the array sumAj */
sumAj[j] ← 0, for j = 0, 1, ..., J − 1
for i = 0 to Nnz − 1 do
   Call K_SumAjKernel<<<grid1, block1, shMem1>>>(λ, μ, radPath, voxIndex, Δ, i, sysParam, lmData, sumAj)
 end for
7. /* Calculate $prAjActTerm\left[q\right]\left[j\right]={\sum }_{{\mathbb{P}}_{q}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right),\quad j=0,1,\dots ,J-1,\enspace q=0,1,\dots ,N-1$ */
 prAjActTerm[q][j] ← 0, j = 0, 1, ..., J − 1, q = 0, 1, ..., N − 1
 for i = 0 to Nnz − 1 do
   Call K_AjActTermKernel<<<grid2, block2, shMem2>>>(λ, μ, radPath, voxIndex, Δ, i, sysParam, lmData, prAjActTerm)
 end for
8. /* Calculate $prAjAttnTerm\left[q\right]\left[j\right]={\sum }_{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)\left[-{{\Delta}}_{q}\left(\mathbb{P}\right)+\frac{{\zeta }_{q}\left(\mathbb{P}\right)}{{\mu }_{q}}\right],\quad j=0,1,\dots ,J-1,\enspace q=0,1,\dots ,N-1$ */
prAjAttnTerm[q][j] ← 0, j = 0, 1, ..., J − 1, q = 0, 1, ..., N − 1
for i = 0 to Nnz − 1 do
   Call K_AjAttnTermKernel<<<grid3, block3, shMem3>>>(λ, μ, radPath, voxIndex, Δ, i, sysParam, lmData, prAjAttnTerm)
 end for
9. /* Calculate the FIM matrix elements one block/chunk at a time */
FM[i][j] ← 0, i = 0, 1, ..., 2N − 1, j = 0, 1, ..., 2N − 1
for chunk = 0 to Nchunk − 1 do
   Call K_CalcFisherKernel<<<grid4, block4, shMem4>>>(chunk, sumAj, prAjActTerm, prAjAttnTerm, FM)
 end for

Appendix B.: Describing the GPU-based implementation to compute the FIM terms

The developed FIM-computation software reads the input system configuration and the measured LM data. For each event, we considered all emission and scattered paths, as mentioned earlier. Next, the radiological path (equation (24)) for each path and the values of ${{\Delta}}_{q}\left(\mathbb{P}\right)$ for each voxel index were computed and stored. The quantities ${\sum }_{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ were then computed and stored for each LM event. This term appeared in the denominator of equations (50)–(53) and was a function only of the LM event index j.
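For concreteness, a minimal device-side sketch of this radiological-path computation is given below (the function and argument names are hypothetical and not those of our actual kernels); it assumes that each path has already been decomposed into sub-segments with precomputed intersection lengths and voxel indices, as in steps 2 and 3 of algorithm 1.

// Radiological path along one path P (equation (24)): the attenuation-weighted
// sum of intersection lengths over the voxels traversed by P.
//   mu       : attenuation map (one value per voxel)
//   voxIndex : voxel index of each sub-segment of P
//   delta    : intersection length of each sub-segment of P
__device__ float radiologicalPath(const float* mu,
                                  const int* voxIndex,
                                  const float* delta,
                                  int numSegments)
{
    float path = 0.0f;
    for (int k = 0; k < numSegments; ++k)
        path += mu[voxIndex[k]] * delta[k];
    return path;
}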

The next step evaluated terms of the form ${\sum }_{{\mathbb{P}}_{q}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ for the qth voxel, which appeared in the numerator of equations (50) and (51). For simplicity, we denote this term as ${S}_{q,j}^{\left(1\right)}$; it depends on the paths that start from the qth voxel and on the attributes of the jth event. Thus, this term was calculated only for voxels with non-zero activity, and the sum was computed in parallel over only those paths that start from the qth voxel and result in detection at an angle exactly equal to the detector-angle attribute of the jth event. The quantity ${S}_{q,j}^{\left(1\right)}$ was then stored as a function of the LM event index j and the voxel index q.

To evaluate the FIM terms with respect to the activity coefficients, in accordance with equation (51), for each pair of voxel indices q and q' and for each LM event index j, the terms ${\sum }_{{\mathbb{P}}_{q}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$ and ${\sum }_{{\mathbb{P}}_{{q}^{\prime }}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$, which had been computed and stored in the previous operations as a function of the LM event index and the voxel index, were multiplied. The result was divided by the square of ${\sum }_{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$. Finally, these terms were summed over the LM events. This yielded the FIM terms with respect to the activity distribution. All of these operations were done in parallel for a subset of all combinations of q and q', where each subset can be defined as a block of elements in the original FIM. The FIM terms with respect to the attenuation coefficients and the FIM cross terms were computed following a very similar procedure, but in accordance with equations (50), (52) and (53), respectively. The pseudo-code of the implementation is given in algorithm 1.
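As an illustration of this final step, the following is a minimal CUDA sketch of the activity-activity block computation (the kernel name, array names, and memory layout are hypothetical and differ from our actual implementation). Each thread accumulates one FIM element by multiplying ${S}_{q,j}^{\left(1\right)}$ and ${S}_{{q}^{\prime },j}^{\left(1\right)}$, dividing by the square of ${\sum }_{\mathbb{P}}\mathrm{p}\mathrm{r}\left({\hat{\mathbf{A}}}_{j}\vert \mathbb{P}\right)\lambda \left(\mathbb{P}\right){s}_{\mathrm{e}\mathrm{ff}}\left(\mathbb{P}\right)$, and summing over the LM events, as described above.

// One thread per (q, q') element of an activity-activity block of the FIM.
//   S1    : N x J array (row-major); S1[q * J + j] stores S^(1)_{q,j}
//   sumAj : J-element array of the per-event denominators
//   FM    : N x N output array of activity-activity FIM elements
__global__ void calcActActFimBlock(const float* S1, const float* sumAj,
                                   float* FM, int N, int J)
{
    int q  = blockIdx.y * blockDim.y + threadIdx.y;
    int qp = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= N || qp >= N)
        return;

    float acc = 0.0f;
    for (int j = 0; j < J; ++j) {
        float d = sumAj[j];
        if (d > 0.0f)
            acc += S1[q * J + j] * S1[qp * J + j] / (d * d);
    }
    FM[q * N + qp] = acc;
}

The attenuation-attenuation and cross blocks follow the same pattern, with the prAjAttnTerm quantities replacing one or both of the ${S}^{\left(1\right)}$ factors, and, as in algorithm 1, the full 2N × 2N matrix is filled one such block (chunk) at a time.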
