3D dose computation algorithms

The calculation of absorbed dose within patients during external photon beam radiotherapy is reviewed. This includes the modelling of the radiation source, i.e. in most cases a linear accelerator (beam modelling), and examples of dose calculation algorithms applied within the patient, i.e. the dose engine. For the first part, beam modelling, the different sources in the treatment head, such as the target, filters and collimators, are discussed, as well as their importance for the photon and electron fluence reaching the patient. The consequences of removing the flattening filter, which several vendors have now made commercially available, are also shown. The pros and cons of different dose engines regarding their ability to consider density changes within the patient are covered (type a and type b models). Engines covered are, for example, pencil beam models, collapsed cone superposition/convolution models and combinations of these, as well as a glimpse of Monte Carlo methods for radiotherapy. The different models' ability to calculate dose to medium (tissue) and/or to water is discussed. Finally, the role of commissioning data, especially measurements, in today's model-based dose calculation is presented.


Introduction
Clinically used dose planning systems have for many years used calculation algorithms for X-rays and γ-beams that make use of empirically determined inhomogeneity corrections. These corrections have been applied either to measured dose distributions (using e.g. film) or to simple analytical models describing the distribution in homogeneous water. Modelling of the dose distribution has not relied on basic photon and electron interactions (basic principles). The methods in use today are often model-based algorithms, where one separates the modelling of the radiation source from the in-patient dose calculation, the so-called dose engine [1].

Historical algorithms
A short description of older algorithms seems in order, and we divide them according to their ability to model scattered radiation. These models can be summarised as correction models applied to the dose distribution in water.

Algorithms without Scatter Modelling
In this group of algorithms, e.g. effective attenuation, isodose shift, tissue air ratios (TAR) and effective source skin distance (SSD) [2], the only influence of the patient geometry is through a scaling of the radiological depth for the calculation of absorbed dose. More or less sophisticated versions exist: approximations of homogeneous media, regional corrections, or pixel-by-pixel corrections. These methods only account for inhomogeneities with respect to their density along the fan line. A better method was introduced with the algorithm by Batho, where the position of the inhomogeneity is also considered [3]. These methods were all, in principle, developed prior to the use of CT. The resulting dose distributions from these models can in some clinical cases differ by up to 20 % from measurements, especially in low density media irradiated with narrow (5 × 5 cm²) high energy photon beams [4]. Deviations of the same magnitude between measurements and calculations were found for a tangential beam geometry using the effective attenuation method. Of these methods, the Batho method estimates the absorbed dose with the highest accuracy. Lulu and Bjärngard have extended the Batho algorithm to account also for the lateral extension of inhomogeneities [5,6].
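The radiological depth scaling that all these methods share can be sketched as a cumulative sum of density-weighted step lengths along the fan line. The function name and the density values below are illustrative, not taken from any of the cited algorithms:

```python
import numpy as np

def radiological_depth(densities, step_cm):
    """Cumulative water-equivalent depth along a fan line.

    densities: relative (electron) densities of the voxels the ray
    traverses, in order from the surface; step_cm: voxel length along
    the ray. Hypothetical helper for illustration only.
    """
    return np.cumsum(densities) * step_cm

# 5 cm of water-like tissue, 10 cm of lung (rho ~ 0.25), then tissue again
rho = np.concatenate([np.ones(5), np.full(10, 0.25), np.ones(5)])
z_rad = radiological_depth(rho, step_cm=1.0)
# a geometric depth of 20 cm corresponds to a much shorter
# water-equivalent depth because of the low-density lung segment
```

The depth-dose curve measured in water is then simply looked up at `z_rad` instead of the geometric depth, which is why these methods ignore the position and lateral extent of the inhomogeneity.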

Algorithms with Scatter Modelling
The next group of methods all use the radiological depth to determine the primary dose component, but also account for scatter produced in the irradiated volume.
The equivalent tissue air ratio (ETAR) algorithm relies on a calculation of an effective homogeneous density, which is assigned to each point using weighting factors applied to the surrounding points (voxels) [7]. Geometric dimensions (e.g. depth, field size) are scaled from unit density to the effective density using O'Connor's theorem [8,9]. The weighting factors used are calculated using inelastic scattering cross sections for water, i.e. basic principles are applied. The agreement with measurements is better for this method than for the simple algorithms. An accuracy of ± 3 % can be expected in most cases [7]; however, in some situations, for example tangential beams, the deviation for the ETAR method can increase to 5 % [10].
An extension of the ETAR method using TARs calculated by Monte Carlo has been proposed [10]. The new TARs are more accurate, especially for small fields, where they fall to zero for zero field size. Therefore, the absence of lateral electronic equilibrium for narrow beams in low density media is modelled better. The final equation used is a superposition integral similar to the methods discussed by Mackie et al [11] and Mohan et al [12].
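The ETAR chain, an effective density feeding O'Connor's scaling of depth and field size, can be sketched as follows. The uniform weights and all numerical values are purely illustrative; the published method derives the weights from scattering cross sections:

```python
import numpy as np

def etar_effective_density(densities, weights):
    """ETAR-style effective density: a weighted mean of the densities
    of the voxels surrounding the calculation point (toy weights)."""
    return np.sum(weights * densities) / np.sum(weights)

def oconnor_scaled(depth_cm, field_radius_cm, rho_eff):
    # O'Connor's theorem: linear dimensions scale inversely with density,
    # so water-equivalent lengths are geometric lengths times rho_eff.
    return rho_eff * depth_cm, rho_eff * field_radius_cm

# half water, half lung-like voxels around the point, uniform weights
rho_eff = etar_effective_density(np.array([1.0, 0.5, 0.5, 1.0]), np.ones(4))
d_eq, r_eq = oconnor_scaled(depth_cm=10.0, field_radius_cm=5.0, rho_eff=rho_eff)
# the TAR is then evaluated at (d_eq, r_eq) instead of (10, 5)
```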
In general, none of the methods discussed deals with the lack of electronic equilibrium for narrow beams of high energy X-rays in low density media. All these models assume that the energy transferred to electrons is absorbed locally, i.e. that the collision kerma is equal to the absorbed dose.

Multi source models
The limited accuracy of the methods described above has initiated much work on methods based on physical principles. The approach mainly used today is fluence or energy fluence based multi-source models. These describe the radiation source, usually the treatment head of the accelerator, at varying levels of detail: for example, the target generating X-rays from the impinging electrons, which is considered the primary radiation beam, and scattered radiation from the primary collimator, beam flattening filter (if present), monitor chamber, multi-leaf collimator and conventional jaw system. A common simplification is to reduce all scatter sources to a single one, usually placed at the position of the flattening filter. This position is chosen since the flattening filter accounts for the majority of the scattered radiation from the treatment head.
Once the model of the radiation source is set up, one calculates its output as an energy fluence distribution. When Monte Carlo is used, this can be a phase space of the energy fluence, i.e. a description of the type of particles, their positions, directions, and energies. The energy fluence distribution is modulated by the field shaping parts of the treatment head. The sources' lateral distributions and relative magnitudes are usually adjusted based on conventional measurements, e.g. lateral profiles, depth doses, and output factors in air and water.

In-patient or dose engine
Dose calculation is based on the energy fluence described above impinging on and entering the patient. The patient is represented by a 3D voxel matrix where each voxel represents the density (relative physical or electron density, depending on the system). For some models, a medium may also be given for each voxel.

Point kernels
The energy fluence is ray traced through the patient matrix and attenuated according to the density of each voxel and, when appropriate, also the medium of the voxel. For each voxel, the TERMA (total energy released per unit mass) is calculated as the product of the energy fluence Ψ and the mass attenuation coefficient µ/ρ. The following equation gives the TERMA T, differential in energy E, at an arbitrary point in 3D space given by the vector r:

T_E(r) = (µ/ρ)(E, r) Ψ_E(r)

From the TERMA distribution one obtains the absorbed dose by convolution with a point kernel P describing the transport of energy by the released primary electrons (collision kerma) and by scattered photons:

D(r) = ∫ T(r′) P(r − r′) d³r′

(The differential in energy has been excluded for clarity; the integral has to be performed over all energies.) It is, however, quite common to perform the integral over energy before the convolution, so that only one TERMA matrix and one single point kernel are required. This time-consuming convolution can be solved using Fourier transforms, where the convolution integral is replaced by the inverse Fourier transform of the product of the transforms of T and P. This approach is fundamental in the work by Boyer, who used the Fast Fourier Transform (FFT) to solve the convolution of dose distribution kernels with photon fluence distributions to give the final dose in 3D [13,14]. However, if the kernel function P varies with position due to changes in density, an analytical superposition integral has to be solved instead.
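A minimal 1D sketch of the TERMA calculation and the FFT-based convolution, assuming a homogeneous medium so that a single, spatially invariant kernel applies (the condition under which the Fourier approach is valid). The attenuation coefficient and kernel shape are invented for illustration:

```python
import numpy as np

mu = 0.05                      # linear attenuation coefficient, 1/cm (illustrative)
z = np.arange(0.0, 30.0, 0.5)  # depth grid, cm
psi = np.exp(-mu * z)          # primary energy fluence, exponentially attenuated
terma = mu * psi               # T = (mu/rho) * Psi, with rho = 1 here

# toy point kernel: energy spread around an interaction at 2 cm depth offset
kernel = np.exp(-np.abs(z - 2.0))
kernel /= kernel.sum()

# convolution theorem: D = F^-1{ F{T} * F{P} }, zero-padded to avoid wrap-around
n = len(z)
dose_fft = np.fft.irfft(np.fft.rfft(terma, 2 * n) * np.fft.rfft(kernel, 2 * n),
                        2 * n)[:n]

# direct linear convolution for comparison
dose_direct = np.convolve(terma, kernel)[:n]
```

The FFT route and the direct convolution agree to numerical precision; the FFT version scales as O(n log n) instead of O(n²), which is the speed advantage exploited by Boyer's approach.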
The point dose kernels used, which include the electrons released in the first interaction as well as single and multiple scattered photons, are generally calculated using Monte Carlo simulations, but analytical methods are also applicable.

Convolution in inhomogeneous medium
Inhomogeneous media can be included in the convolution integral either by using a large number of dose distribution kernels, one per density, or by a single kernel combined with the following scaling theorems: "In a medium of a given composition exposed to a uniform flux of primary radiation (such as X-rays or neutrons) the flux of secondary radiation is also uniform and independent on the density of the medium as well as of the density variations from point to point." [15] "In a given irradiation system the ratio of scattered photons to primary photons remains unchanged when the density of the irradiated material is changed if all the linear dimensions of the system are altered in inverse proportions to the change in density." [8,9] Density in this context should be interpreted as interaction sites per unit volume, e.g. electron density for incoherent scattering of photons. It is also assumed that the amount of secondary radiation produced at a point is proportional to the local number of interaction sites.
Mackie et al used superposition kernels calculated for several densities, with interpolation and scaling between them [11]. A similar approach, using kernels for only a single density combined with the scaling theorems, has also been applied [12,16-18]. They all use ray tracing of scatter between the interaction and absorption points and employ the scaling theorem for inhomogeneities. Transport of energy by electrons is included in the kernels, so electronic disequilibrium is modelled.
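A schematic 1D version of such a superposition, with the water kernel evaluated at the density-scaled (radiological) distance between the interaction and deposition voxels. The kernel is a toy exponential and the density scaling of the kernel amplitude is simplified to a local density factor; the published methods include further normalisation:

```python
import numpy as np

def superpose(terma, rho, kernel_w, step_cm):
    """1D superposition: spread each voxel's TERMA through a water kernel
    evaluated at the radiological distance to the deposition voxel."""
    n = len(terma)
    dose = np.zeros(n)
    for i in range(n):                     # interaction voxel
        for j in range(n):                 # deposition voxel
            lo, hi = min(i, j), max(i, j)
            # density-scaled path length between interaction and deposition
            r_rad = np.sum(rho[lo:hi]) * step_cm
            dose[j] += terma[i] * rho[j] * kernel_w(r_rad)
    return dose

# unit TERMA in the central voxel of a homogeneous unit-density medium
terma = np.zeros(11)
terma[5] = 1.0
dose = superpose(terma, np.ones(11), lambda r: np.exp(-r), step_cm=1.0)
# the deposition pattern is symmetric about the interaction voxel
```

In a low-density region the radiological distances shrink, so the same kernel spreads energy over a geometrically larger volume, which is exactly how these models capture the broadened penumbra in lung.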

Pencil beam kernels
A method to increase the calculation speed enough to be clinically feasible was the introduction of pencil beam kernels, where the dose calculation problem is reduced from a 3D to a 2D convolution. By integrating the point kernel along the depth direction, we get a 2D description of the dose distribution. Pencil beams have been created in various ways, for example from measurements by radial differentiation of the relative dose on the central axis derived from broad beam dosimetric quantities [19]. Another method is the differentiation of radial beam data [20]. Monte Carlo methods have also been used to calculate pencil beam kernels in water [16,21,22]. Usually the pencil beams are parametrized, and the actual parameter values are found by fitting to measured data [22].
Applying the pencil beam convolution is described in principle by the following equation:

D(x, y, z) = ∫∫ Ψ(x′, y′) P(x − x′, y − y′, z) dx′ dy′

Here the energy fluence distribution Ψ in a specified plane is convolved with the pencil beam kernel P. In heterogeneous media, the pencil beam kernel is scaled along the propagation direction by replacing z with the radiological depth z_radiol, considering the density of the voxels along the ray. In the initial implementations, no scaling is performed perpendicular to the propagation direction; thus, the pencil beam convolution is a so-called type a algorithm [23]. There is, however, today one pencil beam model where a lateral scaling of the kernel has been added [24]; the commercial implementation is the analytical anisotropic algorithm (AAA) [25,26]. The performance of pencil beam algorithms has been reported in several studies [27-32].
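A toy 2D sketch of the pencil beam convolution at one depth plane, with the kernel evaluated at the radiological rather than geometric depth and no lateral scaling (the type a behaviour described above). The separable kernel form and all parameters are invented for illustration:

```python
import numpy as np

def pencil_beam_dose(psi, z_radiol, step_cm=0.5):
    """psi: 2D energy fluence in the plane; z_radiol: radiological depth (cm)."""
    # separable toy pencil kernel: exponential depth term x lateral Gaussian
    depth_term = np.exp(-0.05 * z_radiol)
    x = np.arange(-10, 11) * step_cm
    lateral = np.exp(-0.5 * (x / 0.6) ** 2)
    lateral /= lateral.sum()                      # unit lateral integral
    k2d = depth_term * np.outer(lateral, lateral)

    # 2D "same" convolution of fluence and kernel via zero-padded FFTs
    ny, nx = psi.shape
    ky, kx = k2d.shape
    fy, fx = ny + ky - 1, nx + kx - 1
    d = np.fft.irfft2(np.fft.rfft2(psi, (fy, fx)) * np.fft.rfft2(k2d, (fy, fx)),
                      (fy, fx))
    oy, ox = ky // 2, kx // 2
    return d[oy:oy + ny, ox:ox + nx]

# uniform open field: central-axis dose follows the depth term alone
dose_plane = pencil_beam_dose(np.ones((41, 41)), z_radiol=10.0)
```

Because only z is density-scaled, the lateral Gaussian keeps its water width in lung, which is precisely the approximation that makes type a models fail to broaden the penumbra in low density media.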

Collapsed cone convolution
Another approximation to increase speed (which is perhaps less of a problem today), as well as to manage inhomogeneities, is to discretize the point kernel. The kernel is discretized in about 100 directions, each direction representing a cone in which all energy is collapsed onto the central axis of the cone [33,34]. When the energy in the cone is collapsed onto the central ray, the inverse square dependence is removed. The distribution of cones is not isotropic; a higher density is used in the forward direction, because the majority of the energy in high energy beams is transported in this direction, while a few cones take care of the backscatter. The density of cones should be high enough, wherever a significant amount of energy is deposited, that each voxel is passed by one or more collapsed cones.
The convolution integral is commutative; changing the order of the functions does not change the result. This property can be interpreted in two ways. In the dose deposition view, energy or TERMA from the surrounding voxels is summed up at the deposition voxel; when all contributions are summed, the total dose in the voxel is determined. In the interaction view, the TERMA in a voxel is spread out to the surrounding voxels, and all contributions from the interaction points have to be covered. In principle, the first approach can be used to calculate the absorbed dose in a single point/voxel only. Implementations based on this approach utilize the possibility of a dose grid with varying spacing to speed up calculations: for example, a coarse grid can be used in areas with small gradients in TERMA or patient density, and a finer one in high gradient areas.
Another approach used to perform the convolution efficiently is to align the point kernels such that the cones in the same direction overlap. This makes it possible to ray trace through the TERMA matrix along each cone direction and recursively pick up energy from each traversed voxel, then transport, attenuate, and deposit that energy along the axis of the cone [33].
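The recursive pick-up/attenuate/deposit step along one cone axis can be sketched in 1D as follows; the effective attenuation coefficient and the per-step deposition fraction are illustrative, not values from the cited implementations:

```python
import numpy as np

def collapsed_cone_1d(terma, step_cm, mu_eff=0.3):
    """Recursive transport along one collapsed cone axis.

    Energy released in each voxel is added to the in-flight energy,
    a fraction is deposited locally, and the remainder is attenuated
    and carried to the next voxel.
    """
    att = np.exp(-mu_eff * step_cm)        # survival fraction per step
    dep = 1.0 - att                        # fraction deposited per step
    dose = np.zeros_like(terma)
    carried = 0.0                          # energy in flight along the axis
    for i in range(len(terma)):
        carried += terma[i]                # pick up energy released here
        dose[i] += carried * dep           # deposit part of it
        carried *= att                     # attenuate the remainder
    return dose, carried

# uniform TERMA: the deposited dose builds up toward an equilibrium value
dose, residual = collapsed_cone_1d(np.ones(50), step_cm=1.0)
```

Note that everything picked up is either deposited or still in flight at the exit, so energy is conserved along the axis; a full implementation repeats this sweep for each of the ~100 cone directions.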

Linear Boltzmann transport equation
A more direct way of performing in-patient dose calculations is to start from the linear Boltzmann transport equation (LBTE). For many years, Monte Carlo (MC) techniques have been used to solve the problem statistically [35-38]. MC methods use stochastic distributions describing the physics to simulate the transport of individual particles (photons and charged particles) through the density and medium matrix describing the patient geometry. The limitation, at least up to today, is the large number of particles required to obtain a sufficiently low uncertainty in the deposited dose in each voxel of interest. MC can be seen as a microscopic solution of the LBTE.
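The stochastic core of MC photon transport, sampling the distance to the next interaction from the exponential attenuation law, can be sketched as follows (the attenuation coefficient is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_path(mu, n):
    """Sample n photon free paths by inverting the CDF 1 - exp(-mu * s):
    s = -ln(xi) / mu, with xi uniform on (0, 1)."""
    return -np.log(rng.random(n)) / mu

# many histories: the sample mean converges to the mean free path 1/mu,
# with statistical uncertainty shrinking as 1/sqrt(n)
paths = sample_free_path(mu=0.05, n=200_000)
```

This 1/sqrt(n) convergence is exactly the limitation mentioned above: halving the statistical uncertainty in every voxel requires four times as many particle histories.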
Recently, analytical solutions of the LBTE have been applied to radiotherapy planning problems in a macroscopic approach, i.e. instead of individual particles, energy fluences are created, followed, and attenuated in a matrix describing the patient. An exact analytical solution is not possible, mainly due to its complexity, so a numerical method has to be used, e.g. the discrete ordinates method or grid-based solvers [39-42]. Evaluations of a commercial implementation have been presented by e.g. Fogliata et al [43,44] and Hoffman et al [45].
In short, the grid-based method can be described as follows: the properties of the radiation source and the absorbing medium (patient) are described; the multi-source model (e.g. via a phase space) is used to obtain the energy fluence reaching the patient, which is then ray traced through the irradiated volume (note that this is very similar to the point kernel methods discussed above). In the next step, the discrete ordinates method is applied to solve the LBTE, giving the photon energy fluence and the electron sources in each voxel. Finally, the electrons are transported to obtain the electron fluence distribution in each voxel, and by applying the mass stopping power the absorbed dose is determined. The energies of photons and electrons are discretized into 25 and 49 levels, respectively. Transport directions are discretized, depending on the energy involved, into between 32 and 512 directions (these data are from the Acuros implementation [46]).
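The final step, absorbed dose from the energy-discretized electron fluence weighted by the mass stopping power, can be sketched with toy data as follows. The 49 energy groups mirror the discretization quoted above, but the fluence spectrum and stopping power values are invented for illustration:

```python
import numpy as np

# 49 electron energy groups, as in the quoted discretization
n_groups = 49
E = np.linspace(0.1, 6.0, n_groups)        # MeV, group midpoints (illustrative)
dE = np.gradient(E)                        # group widths
phi_E = np.exp(-E)                         # toy electron fluence spectrum, per MeV
s_over_rho = 1.8 + 0.1 * np.log(E + 1.0)   # toy mass stopping power, MeV cm^2/g

# D = sum over groups of Phi_E(E_g) * (S/rho)(E_g) * dE_g  (one voxel)
dose = np.sum(phi_E * s_over_rho * dE)
```

Swapping `s_over_rho` for the water values instead of the medium values at this single step is what lets such solvers report either dose to medium or dose to water, as discussed below.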
It is interesting to note that both solutions to the LBTE show inherent errors: MC has stochastic uncertainties, while for discrete ordinate solvers the discretization may produce uncertainties. As always, this can be improved at the cost of longer computation times, by increasing the number of particles in the MC case and by using a finer discretization in the solver case.

Dose to medium or dose to water
For many years, there has been a discussion on how to report absorbed dose during radiotherapy procedures [47]. No real consensus exists on whether the dose should be reported as absorbed in water or in the actual medium. With new algorithms such as MC and grid-based LBTE solvers becoming more common, this problem is growing, since both approaches can report either.
The issue can be divided into two parts: a) whether the energy/TERMA during the ray trace considers the physical properties of the traversed voxels, or all media are considered to be water; and b) how the deposition of energy is accomplished. We introduce the formalism Dtransport,deposit, where transport and deposit denote the medium used in the respective step; each can take the value water (w) or medium (m). The kernel approaches, pencil beam and point kernels, are all determined in water, whether analytically, by MC, or from experiments. Looking in detail at pencil beam algorithms, scaling according to the traversed density (radiological depth scaling) assumes all media to be water, and no corrections are performed during dose deposition. Thus pencil beam models transport and deposit dose in water, Dw,w.
For the available point kernel models, no consistent handling exists. In principle, the TERMA or its equivalent can be ray traced considering the media's physical properties or not. The kernels are determined in water, but when dose deposition occurs, a correction using the mass stopping power ratio between medium and water may be applied. No single conclusion on what is reported can be given; we probably have one of the following situations: Dw,w, Dm,m or Dm,w.
MC models can transport in water or in medium, and the same holds for dose deposition. Most common is to transport in medium and deposit in medium. If the user wants dose to water, a mass stopping power ratio between water and medium is applied to each voxel [48]. Thus we have Dm,m or Dm,w. When applying the mass stopping power ratio, one usually has to use a macroscopic value for the energy deposited in the voxel, since the electron energy fluence differentiated in energy in the voxel is not known. Alternatively, this could be done microscopically, if done for each single energy deposition.
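The voxelwise macroscopic conversion with a stopping power ratio can be sketched as follows; the ratio values are invented for illustration, not tabulated data:

```python
import numpy as np

def dose_to_water(dose_medium, spr_w_m):
    """Convert dose to medium into dose to water per voxel:
    D_w = D_m * [(S/rho)_w / (S/rho)_m], the macroscopic correction."""
    return dose_medium * spr_w_m

dose_m = np.array([2.00, 1.50, 0.80])   # Gy, dose to medium in three voxels
spr = np.array([1.00, 0.99, 1.11])      # water, soft-tissue, bone-like (toy values)
dose_w = dose_to_water(dose_m, spr)
```

Because a single spectrum-averaged ratio is applied per voxel, this is the macroscopic variant described above; the microscopic alternative would weight each individual energy deposition instead.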
The discrete ordinate solvers for the LBTE determine the electron energy fluence differentiated in energy for each voxel, and either the mass stopping power of the medium or that of water is used to determine the absorbed dose. All transport is accomplished in the medium; thus we have Dm,m or Dm,w. The choice is user selectable in the only existing commercial implementation [49,50].

Summary
Today we have very accurate tools available for estimating the absorbed dose within the patient during radiotherapy [51-57]. We still have, however, several problems to solve. The models discussed here are in principle for static patients, i.e. without intrafractional and/or interfractional movements. The latter can probably be solved by daily imaging and adaptation of the plan of the day, accomplished by e.g. a library of plans or on-line re-optimisation and recalculation. For intrafractional movements, we have used PTV, ITV etc. and margin recipes to assure that the target gets the correct dose. This will in many cases lead to over-irradiation of the healthy tissues surrounding the tumour. Techniques synchronising the patient with the treatment delivery, e.g. gating and tracking, are close to becoming clinical routine at many departments.
Another problem noticed when introducing more accurate algorithms is a tendency of optimizers to put too much dose in the vicinity of the tumour to assure an adequate dose in the whole tumour. This is especially noted when working with the PTV concept for tumours in low density regions such as the lung. In these cases, one should probably use a combination of type a and type b algorithms, where optimisation is performed using a type a model and then followed by a recalculation with an appropriate type b model. More physically, this can be described as an optimisation based on energy fluence (pencil beam model) followed by a dose calculation with full lateral electron transport modelling.