Evolution of 3D X-Ray Dose Computation Algorithms

Radiation treatment planning of individual cancer patients relies on the accurate computation of dose distributions in irradiated tissue. Inaccurate dose maps have the potential to mislead clinical decision-making and compromise the balance between effective tumour control and side effects in surrounding normal tissue. In the context of this conference, 3D dosimetry is important for the experimental validation of computed dose distributions. Dose computation methods for external beams of high-energy x rays have evolved over the past decades, with computer simulation models becoming more closely aligned with the fundamental physics of x-ray scattering and absorption in heterogeneous tissue. In this article, we first present a historical review from a Canadian perspective, followed by an introductory, intuitive description of contemporary algorithms used in clinical treatment planning: (1) the convolution-superposition algorithm, fundamentally based on the Green's function method; (2) stochastic Monte Carlo simulation of x-ray interactions in tissue; and (3) deterministic numerical solution of a system of Boltzmann transport equations. In principle, all these methods solve the same problem of predicting x-ray scattering and absorption in heterogeneous tissue. However, the mathematical tools differ in their approach and approximations to achieve sufficient speed for routine clinical application. In the conclusion of this article, the evolution of 3D x-ray dose computation is summarized in terms of accuracy and computational speed.


Historical Perspective
Dose computations used for treatment planning have advanced with several "giant leaps" forward: (a) splitting of radiation dose into primary and scatter components (1960s); (b) incorporation of in vivo tissue densitometry by x-ray computed tomography (CT) (1970s); and (c) exponential gains in cost-effective computing power. The evolution of algorithms occurred within a backdrop of concurrent technological advances in medical physics, as illustrated in Figure 1. Early computer methods (1960s) focused on automating the treatment planning process through rapid recall of stored measured beam data, replacing the manual overlay of transparent isodose curve charts and dose calculations performed with a hand-held electronic calculator.
The decoupling of primary and secondary components of dose enabled better prediction over a wider range of beam configurations, beyond those that were measured. Semi-empirical models of the 1970s considered irregularly-shaped fields, including a basic form of beam intensity-modulation [1]. These fields were produced with shielding blocks and tissue-compensating filters, before the commercial advent of multi-leaf collimators (MLCs). Dose distributions were first computed on the assumption of a patient composed entirely of water-like tissue, and were then adjusted with secondary "inhomogeneity correction factors" for tissue density variations [2-5]. The tissue space was segmented by internal contours of regions with non-aqueous density, typically done in a single central slice of the patient (i.e., 2D planning). In the 1970s, the introduction of x-ray computed tomography (CT) and patient-specific tissue densitometry caused a sudden and major paradigm shift from 2D vector-based to 3D pixel-based dose computations [6-8]. Historically, it is interesting that a precursor pixel-based method [8] using differential Scatter-Air Ratios (SARs) [9] accounted for the influence of an isolated voxel on the scattered radiation field. Dr. John R.
("Jack") Cunningham was a key figure in such developments and their implementation on computer workstations (e.g., Theraplan, Oncentra); we dedicate this review article to his memory [10]. Later experiments measured the influence of an air-filled toroid on central-axis dose points in a water phantom [11,12]. Two counteracting radiation effects were observed: (a) reduced scattering from the air void per se, and (b) enhanced transmission through the toroid, causing greater intensity of scatter produced in underlying voxels. This observation accentuated the need to develop dose computation algorithms with full consideration of scattering effects in 3D [6,13-15]. During the era of Cobalt-60 machines, energy transferred from x rays to secondary knock-on electrons was generally assumed to be deposited on-the-spot, or equivalently, charged particle equilibrium (CPE) was assumed to be guaranteed at all dose voxel locations. However, this assumption weakened for the higher-energy x-ray beams delivered by the linear accelerators that soon replaced Cobalt units (1970s). The energetic x rays liberate electrons with longer ranges in porous tissue such as lung. Localized dose perturbations had also been observed experimentally in the buildup region of depth-dose curves, in beam penumbrae, and near tissue interfaces with metallic implants. It became clear that electron migration away from primary photon interaction sites had to be considered in the next generation of multi-particle algorithms [16]. The convolution method, introduced in the mid-1980s [17], was a major driving force in reawakening interest in photon-electron transport; it sparked the rapid deployment of a new generation of 3D computational methods [15,18-20].
This article focuses on algorithms (Types B and C, or 2nd and 3rd generation) that are used for contemporary clinical treatment planning in radiation oncology (Figure 2) [3,21]. These techniques solve a common problem: modeling the propagation of radiation through a heterogeneous tissue space. However, each technique approaches the situation from a different perspective, and applies unique mathematical tools and approximations during practical implementation: (a) convolution-superposition algorithms are based on Green's impulse-response concept; (b) Monte Carlo techniques simulate a large number of individual particle trajectories and score energy deposition in voxels in real space; and (c) deterministic techniques solve a system of coupled differential (Boltzmann transport) equations that constrain the flow and energy of particle cohorts across a multidimensional phase space. The goal of this article is to introduce the reader to these different methods in an intuitive manner. Mathematical details are found elsewhere [22-27].

Convolution & Superposition Methods
Convolution-superposition algorithms offer a good compromise between accurate dose modelling and efficient computation, and have become the most popular method in commercial treatment planning systems. However, implementations vary considerably, from 2D (i.e., pencil beams) to consideration of the full 3D shower of scattered radiation launched by primary photons. The concept is easy to comprehend intuitively: each photon interaction site becomes an epicentre (i.e., kernel) of nanoscale "explosions" that produce a debris field of scattered energy. The mathematical foundation of convolution is the Green's function method, which has a rich history well before applications to medical physics. The impulse-response approach uses a database of energy deposition kernels that describe the debris field. Kernels are pre-computed for a homogeneous water absorber exposed to various incident energies using a one-time Monte Carlo simulation [28]. X-ray interactions are artificially forced to occur at an isolated point in a large homogeneous water absorber, and the emerging particles create the scattered dose deposition pattern. In this approach, attenuation of primary radiation is cleanly decoupled from the spreading of energy by secondary particles. For an external beam of x rays, each kernel is weighted by the local energy released by primary photons at a grid of launch sites. The magnitude of each energy burst is quantified by a special quantity called TERMA (total energy released per unit mass) [29]. The attenuated TERMA is computed by raytracing through voxels lying along primary rays, from the radiation source to each interaction point in the patient space. Radiological pathlengths across heterogeneous tissue are determined from megavoltage linear attenuation coefficients at the treatment beam energy, inferred from CT image data usually obtained with kilovoltage beams. The kernels are intensity-modulated by the attenuated TERMA within the irradiated space, and each kernel disperses the released energy to surrounding voxels. The sum of all energy deposited in each dose voxel forms the total dose distribution. When the in-water kernel is applied "as is" throughout an irradiated water volume, the process is a convolution with a spatially-invariant kernel, as illustrated in Figure 3. This process resembles the method used for computing a brachytherapy dose distribution from a discrete array of radioactive sources placed within homogeneous tissue; the dose pattern around each source is the kernel.
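The invariant-kernel convolution described above can be sketched in a few lines of Python. All numbers here (attenuation coefficient, field size, Gaussian kernel width) are invented for illustration; a clinical implementation would convolve Monte Carlo-generated poly-energetic kernels in 3D.

```python
import numpy as np

def fft_convolve_same(a, k):
    """Zero-padded FFT convolution, cropped to the shape of `a` ("same" mode)."""
    s0, s1 = a.shape[0] + k.shape[0] - 1, a.shape[1] + k.shape[1] - 1
    spec = np.fft.rfft2(a, (s0, s1)) * np.fft.rfft2(k, (s0, s1))
    full = np.fft.irfft2(spec, (s0, s1))
    i0, i1 = (k.shape[0] - 1) // 2, (k.shape[1] - 1) // 2
    return full[i0:i0 + a.shape[0], i1:i1 + a.shape[1]]

# Attenuated TERMA for a 16-voxel-wide beam entering a 64x64 water grid.
mu = 0.05                                   # assumed attenuation coefficient (1/mm)
depth = np.arange(64)                       # depth along the beam axis (rows)
terma = np.zeros((64, 64))
terma[:, 24:40] = np.exp(-mu * depth)[:, None]

# Normalized Gaussian stand-in for the in-water energy deposition kernel.
y, x = np.mgrid[-10:11, -10:11]
kernel = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
kernel /= kernel.sum()                      # kernel deposits all released energy

dose = fft_convolve_same(terma, kernel)     # spatially-invariant convolution
```

Because the kernel is normalized to deposit all released energy, the integral dose closely matches the integral TERMA, apart from the small amount of energy carried beyond the grid borders; the dose also spreads laterally outside the geometric beam edge, as a real scatter kernel would produce.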
For heterogeneous tissue exposed to a more realistic poly-energetic divergent x-ray beam, the situation becomes more complex because the kernels must be modified at each launch point. Each kernel must be rotated slightly in order to align with divergent primary beam rays, and consideration must also be given to the local energy spectrum and tissue type, plus the external border of the patient [15,30]. The spectral changes are due to x-ray "beam hardening", but tissue inhomogeneity poses the biggest challenge. The calculation involves raytracing throughout tissue along pairs of interaction-deposition voxels within the kernel domain. Variations in electron density (electrons per cm³) alter the mean free path and range of Compton-scattered photons and Compton electrons, respectively. Dose spread kernels are available for a large all-water medium, categorized by each type of energy-spreading mechanism (i.e., primary electrons, first-scattered photons, multiply-scattered photons, radiative photons) [28]. The kernels for heterogeneous tissue, however, can only be approximated, assuming water-like atomic composition. For primary Compton electron and first-scattered photon kernels, the water-equivalent pathlength along fan lines from the kernel origin to each dose destination point is used to look up in-water kernel values. The kernels effectively become distorted by a radial expansion or contraction along these fan lines. This approximation assumes that primary electrons and first-scattered photons travel paraxially along these direct flight paths. Fortunately, the majority of dose is deposited by these two dominant kernels. For other kernels, the diversity of pathways requires density averaging over a wider volume [17,30]. When spatially-variant kernels become necessary, the convolution integral transforms into a superposition integral.
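The fan-line rescaling can be illustrated with a short sketch. The function names and the single-exponential test kernel are hypothetical; real implementations interpolate tabulated Monte Carlo kernels.

```python
import numpy as np

def water_equivalent_depth(rel_density, step_mm):
    """Cumulative radiological pathlength along one fan line.

    rel_density: electron density of each step, relative to water.
    Returns the water-equivalent distance at the far edge of each step.
    """
    return np.cumsum(np.asarray(rel_density) * step_mm)

def scaled_kernel_value(kernel_in_water, rel_density, step_mm):
    """Look up in-water kernel values at density-scaled radii.

    kernel_in_water: callable k(r_mm) giving the in-water kernel value.
    The kernel is effectively stretched in low-density tissue (e.g., lung)
    and compressed in high-density tissue.
    """
    r_weq = water_equivalent_depth(rel_density, step_mm)
    return np.array([kernel_in_water(r) for r in r_weq])

# Hypothetical exponential kernel evaluated in water vs. lung-like density.
water = scaled_kernel_value(lambda r: np.exp(-r), [1.0] * 4, 2.0)
lung = scaled_kernel_value(lambda r: np.exp(-r), [0.25] * 4, 2.0)
```

In unit-density tissue the water-equivalent distance equals the geometric distance, so the kernel is unchanged; in lung-like tissue the kernel values decay more slowly with geometric distance, reproducing the radial expansion described above.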
We now introduce the mathematical formalism for 3D convolution-superposition. Consider energy launch sites at positions s⃗; the vector r⃗ specifies the location of a dose-receiving point in 3D. The fraction of energy that is liberated and subsequently deposited is defined for each type of energy-spreading process (i). The kernels are normalized as fractions of the total energy released (i.e., TER = TERMA × mass) at location s⃗, then deposited in a surrounding voxel at location r⃗, per unit volume of the receiving voxel. Primary kernels are shaped by short-range secondary electrons (primarily Compton electrons for MV x rays), with a high concentration of distributed energy near the kernel origin. Other kernels reflect Compton-scattered or radiative photons that travel far from the interaction site with a much longer mean free path. The partial dose contribution is governed by the differential energy spectrum of the TERMA, T_E(s⃗), and the kernel k_i,E for the i-th type of interaction and incident photon energy E:

D_i(r⃗) = [ρ(r⃗)/ρ_w] ∫∫ T_E(s⃗) k_i,E(r⃗ − s⃗) d³s dE    (1)

The passage of energy between a pair of interaction-deposition voxels in an all-water medium is wholly defined by the vector line that joins these voxels directly (i.e., r⃗ − s⃗). The leading density term in equation (1) yields the dose absorbed in tissue of the receiving voxel, assuming water-like atomic composition.
As mentioned previously, realistic conditions of beam divergence, poly-energetic x-ray beams, tissue inhomogeneity, and finite borders of the patient negate the use of the idealistic convolution integral (Equation 1). This integral must then be replaced by the more general superposition integral.
The kernel value for a pair of interaction-deposition voxels considers radiological pathlengths along ray lines as previously described. The short-hand notation k_i,E(r⃗; s⃗) is intended to draw attention to the scattering entourage around r⃗ within the scope of the kernel placed with its origin at s⃗. The following superposition integral replaces the convolution integral of equation (1):

D_i(r⃗) = [ρ(r⃗)/ρ_w] ∫∫ T_E(s⃗) k_i,E(r⃗; s⃗) d³s dE    (2)

The total dose distribution is the sum of energy deposited by all energy-spreading mechanisms (i):

D(r⃗) = Σ_i D_i(r⃗)    (3)

Various strategies have been developed to alleviate the computational burden of superposition algorithms because they involve a large set of ray tracings for the primary TERMA calculation and for kernel reshaping in heterogeneous tissue. Pratx and Xing [31] described the use of graphical processor units (i.e., GPUs) in medical physics. GPU-aided superposition can be accomplished in 3D with impressive speed gains of approximately 50-fold for the TERMA calculation step and 940-fold for the dose calculation step [32]; 3D dose distributions are computed within seconds! Efficient software has also been developed based on the fast Fourier transform (FFT). Unfortunately, the spatially-variant kernel limits the practical application of this otherwise attractive strategy [33-35]. Reducing the dimension of scatter integration from 3D to 2D is another obvious accelerant, but this approach compromises the flexibility and accuracy of such algorithms (e.g., pencil beam, SVD (singular value decomposition), AAA (Anisotropic Analytical Algorithm)). This strategy can be applied during dose optimization cycles that necessitate high speed, but the inaccuracies could lead to suboptimal IMRT fields. The most practical compromise between dose accuracy and speed of computation has been achieved with the Collapsed Cone Convolution (CCC) approximation [36,37]. The technique is, in fact, a superposition rather than a convolution model. The energy released at a photon interaction point is propagated into a set of small cones and projected onto the cone axes (akin to the closure of an umbrella). This approximates the lateral transport of scattered energy, but greatly reduces the computational burden by limiting the ray tracing and capitalizing on an efficient recursive expression applied along the cone axes. The misplacement of lateral energy causes a moderate loss in dose accuracy for most clinical situations. The CCC algorithm has become the most prevalent algorithm on commercial workstations, with products such as Pinnacle (Philips), Tomotherapy (Accuray), RayStation (RaySearch Laboratories), Monaco (Elekta CMS), Oncentra (Nucletron), and Mobius3D (Mobius Medical Systems).
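A one-axis sketch of that recursive attenuate-and-deposit idea is shown below. The single effective deposition coefficient is an assumed stand-in for the exponential kernel parameters used in published CCC implementations; a full algorithm repeats this recursion along every cone axis through the 3D grid.

```python
import numpy as np

def transport_along_axis(released, mu_dep, step):
    """Recursively propagate released energy along one cone axis.

    released: energy released into this cone at each voxel along the axis.
    mu_dep:   effective linear deposition coefficient (1/mm); an assumed
              single-exponential stand-in for the CCC kernel parameters.
    step:     voxel spacing along the axis (mm).
    Returns (deposited energy per voxel, energy still carried at the end).
    """
    frac = 1.0 - np.exp(-mu_dep * step)   # fraction deposited per step
    carried = 0.0
    deposited = np.zeros_like(released, dtype=float)
    for i, q in enumerate(released):
        carried += q                      # pick up newly released energy
        deposited[i] = carried * frac     # deposit a fraction locally
        carried -= deposited[i]           # recursive remainder moves on
    return deposited, carried

# Hypothetical burst of 100 energy units released in the first voxel.
deposited, remainder = transport_along_axis(
    np.array([100.0, 0.0, 0.0, 0.0]), mu_dep=0.1, step=1.0)
```

The recursion needs only one pass per axis (no pairwise voxel-to-voxel ray tracing), which is the source of the CCC speed advantage; energy is conserved between what is deposited and what exits the axis.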
There are inherent assumptions and limitations of the superposition methods. The electron density used in calculating water-equivalent pathlengths for the primary TERMA calculation and kernel modifications assumes 100% Compton events and "soft" electron collisions in all tissues at the treatment beam energy; atomic number effects are not normally considered. If there are obvious regions of compact bone or foreign prosthetic materials observed in the CT image data, more accurate ad hoc attenuation coefficients can be "looked up" for the TERMA calculation. However, the approximate distortion of the kernels, using only electron density data, will produce less accurate dose at interfaces of tissues with different atomic composition. The following methods have the potential to account for atomic number effects more accurately.

Monte Carlo Simulation -Stochastic Model
The Monte Carlo method is the random sampling method named after the famous gambling casino in Monaco. The technique has been applied to a wide range of fields including mathematics, economics, simulation of games and lotteries, and the radiation sciences [38]. It has become an invaluable and flexible tool for uncovering weak (and often tacit) assumptions of more approximate dose computation models. In medical physics, the Monte Carlo simulation method has been regarded as the de facto gold standard technique for decades, with a strong record of corroborating experimental validation. The execution of this method can be viewed as a virtual radiation experiment evolving in slow motion. Furthermore, the experiment and internal physics can be manipulated to isolate the key physical effects most important to accurate dosimetry. The method has also been used to design experimental apparatus or to clarify the interpretation of signals generated in radiation dosimeters [39]. The method has yielded essential data on radiation emerging from linear accelerators [40] and the energy deposition kernels [28] shown in Figure 4. As a point of historical interest, these kernels were originally computed in 1987, running 1 million histories per kernel for a total of 0.5 years of computational time on a cluster of VAX-11 computers (Digital Equipment Corporation) across participating institutions in Canada. Today, the entire database can be regenerated in less than 1 hour using a personal computer!
The general applicability of stochastic solutions fundamentally stems from the ability to evaluate the integral of any function, including those that may be difficult to express analytically. The integral can be evaluated by counting the fraction of random points that land under the surface of a multi-dimensional integrand. The disadvantage of the Monte Carlo method is the computational burden of tracking millions or billions of individual particle "histories" to reach a required precision of a few percent in high-dose regions. The reduction in statistical uncertainty of dose values is notoriously sluggish because of an inverse square root relationship with the number of events contributing dose to scored voxels. The reader is reminded that precision does not guarantee accuracy. The accuracy of a Monte Carlo run is fundamentally dependent on the quality of the underlying radiation physics infrastructure: you cannot fool the physics! An advantage of the Monte Carlo concept is that it is also easy to understand intuitively. The processing of one photon at a time, and the single-event structure, facilitates graphical visualization [41] and debugging of software. The disadvantage is lengthy computation times, with a clear trade-off between dose noise level and spatial resolution (i.e., dose voxel size). In addition, wise choices of runtime parameters (e.g., energy or geometric cutoffs) and applying bias to promote more relevant radiation events can shorten simulation times; artificial bias can be corrected a posteriori. Efforts are ongoing to reduce the computation time using variance reduction strategies, highly specialized codes for treatment planning, and faster processors (i.e., CPUs or GPUs). The Monte Carlo method is very well suited to parallel processing because batches of particle histories or types of interactions can be distributed to sub-processors. Monte Carlo computer codes that are used clinically are summarized in Table 1.
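A minimal sketch of the stochastic idea, assuming a made-up attenuation coefficient: sample exponential photon free paths and score the fraction of histories whose first interaction lies beyond a chosen depth. The statistical error of the estimate shrinks as 1/√N, illustrating the sluggish convergence noted above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu = 0.005          # linear attenuation coefficient (1/mm), an assumed value
depth = 200.0       # scoring depth (mm)

def transmitted_fraction(n_histories):
    # Free path lengths follow an exponential distribution: s = -ln(U) / mu.
    s = -np.log(rng.random(n_histories)) / mu
    # Score photons whose first interaction lies beyond the scoring depth.
    return np.mean(s > depth)

estimate = transmitted_fraction(100_000)
exact = np.exp(-mu * depth)     # analytic attenuation for comparison
```

With 100,000 histories the standard error of this binomial score is about 0.15%, so the stochastic estimate lands close to the analytic exp(−μd); quadrupling the histories only halves the noise.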
The programs are classified according to their practical application: treatment planning, or quality assurance of simpler algorithms applied to unusual clinical cases [42]. The clinical programs are usually descendants of parent codes used previously in nuclear physics, such as DOSXYZnrc, coupled to the EGS code originally developed at Stanford University and later enhanced and repackaged for medical physics applications as EGSnrc by the National Research Council of Canada (NRC) [42].

Solving Boltzmann Transport Equations -Deterministically
The deterministic approach has been used to model many natural phenomena including electrodynamics, fluid dynamics, and radiation transport in outer space. In modelling dose distributions within the patient, significant progress has recently been made in developing efficient "solvers" of a large system of differential (Boltzmann transport) equations [23,26,43,44]. Boltzmann and Monte Carlo techniques can generally use similar input databases of atomic cross sections that prescribe the possible interactions of multi-particle systems. Dose distribution results calculated by either method are in excellent agreement, provided that the input physics, patient-related data, and run-time parameters are well matched. This is confirmed by several studies [45-49], including tests performed for difficult clinical situations such as pelvic regions with hip prostheses [50] and small lung lesions treated by stereotactic radiosurgery [51-53]. Boltzmann dose distributions are based on expectation values of dose, without the statistical noise associated with Monte Carlo results. The efficiency of deterministic problem solving has been compared with stochastic simulation. The relative speed is highly dependent upon the complexity of the problem (i.e., the "curse of dimensionality") [38]. With fast execution on a commercial software platform and evolving familiarity with the method in the medical physics community, clinical adoption of Boltzmann techniques is expected to rise in the coming decade.

Introduction to Phase Space
A significant shift in mentality is required to understand hyper-dimensional phase space [54-56]. Three coordinates specify voxel locations in 3D space, and two spherical coordinates (i.e., polar and azimuthal angles) specify particle direction. Adding the particle's instantaneous energy, the phase space becomes six-dimensional for each particle type (i.e., photon or electron). Before we develop a mathematical formulation, we define important quantities used in radiation transport theory (Table 2) [26,57]. Quantities on the left specify the density and flow rates for particles travelling with a specific direction (Ω̂) and energy (E) at a fixed 3D location (r⃗), sampled at a moment in time (t). The uppercase quantities in the middle column are the results of integrating left-side quantities over all particle directions and energies. Quantities on the right side are further integrated over the radiation exposure time. They then become scalar in nature, as more often seen in familiar radiation dosimetry.

Counting Particles in Phase Space
The book-keeping principle embodied in the Boltzmann equation is that mass-energy must be conserved as particles migrate across phase space "bins". Changes in bin occupancy (n) are cross-checked from two perspectives: (1) the net flow of particles through dose voxel borders, and (2) the particle interaction events that can occur within the voxel. The equations impose the equality of results obtained by both viewpoints, accounting for internal energy losses, and removal, creation, or mass-energy transformations of particle types (e.g., pair production). The particle occupancy in phase space bins evolves as particles enter, start/stop in, transform, or exit a phase space voxel. For example, particle fluences are reduced in a photon phase space bin by a photoelectric absorption. Scattering may yield the same type of emerging particle (e.g., Compton-scattered photons) with a switch to bins tagged by lower energy and new directional indices. In addition, interactions can spawn new particle types (e.g., photons releasing electrons). Bin counts are then transferred across the two particle phase spaces that must be tracked for the case of megavoltage photon-electron showers. For example, bremsstrahlung due to charged particles may spawn a fresh contribution to the photon phase space bins. Particles that pass straight through a voxel without interaction retain their incoming energy and orientation, and simply have an updated emerging 3D location. The Boltzmann transport equations enforce a strict auditing and cross-checking process as particle counts are transferred across phase space bins and domains.
In Figure 5, a simple numerical example introduces the reader to the accounting principle in a photon phase space. Imagine a voxel at 3D location r⃗ exposed to a bi-energetic, bi-directional fluence of photons over a fixed time. Each arrow in Figure 5 represents the flow of 10 photons over the irradiation period. The two starting populations (in blue and green) initially occupy different bins in phase space due to their inbound orientation and energy difference. Sixty (60) of the incoming photons have initial conditions (r⃗, Ω̂1, E1), the coordinates of the phase space bin of interest in this calculation. Another 20 photons have a higher inbound energy (E2 > E1) and a different orientation (Ω̂2). On passing through the voxel, 20 photons with incoming energy E1 undergo total photoelectric absorption, and another 10 photons are 'out-scattered' from their original phase space bin with a degraded energy E3 < E2 and re-orientation to Ω̂3. In this context, out-scattering refers to a photon count being transferred out of the phase space bin of interest (Ω̂1, E1). The remaining 30 photons with incoming energy E1 are transmitted without interacting in the voxel. Twenty (20) photons with the greater starting energy E2 are scattered into the phase space bin of interest. This coincidence was forced to illustrate the basic concept of accounting for 'in-scattering'. The book-keeping is shown in the inset of Figure 5, with a loss of 30 counts due to full absorption (−20) combined with out-scattering (−10), while the gain from in-scattering is 20. The net change in the count at phase space bin (Ω̂1, E1) is therefore −10. This change in occupancy is alternatively predictable from the photon divergence, which is the difference between incoming and outgoing particles: 60 photons entered the voxel with attributes (Ω̂1, E1) but only 50 exited with the same attributes, yielding a net loss (−10) in bin (Ω̂1, E1).

Table 2. Radiometric quantities used in transport theory. Particles with direction (Ω̂) and energy (E) are at location (r⃗) in 3D space for a moment in time (t) [26].

The equality between divergence and net losses
in a voxel forms the basis of the steady-state Boltzmann equation. On the left-hand side of equation (4), the leading divergence term is notated as a dot product; this term tracks the net passage of particle fluences through all voxel boundaries. All of the gains and losses of particles within the voxel are listed on the right-hand side of the equation. For a given type of particle (photon or electron), the following equality must hold true, ignoring internal radioactive sources:

Ω̂ · ∇Ψ(r⃗, Ω̂, E) = −Σ_t(r⃗, E) Ψ(r⃗, Ω̂, E) + ∫∫ Σ_s(r⃗, E′→E, Ω̂′→Ω̂) Ψ(r⃗, Ω̂′, E′) dΩ̂′ dE′    (4)

Rearranging terms yields the more common version of this equation [26]:

Ω̂ · ∇Ψ(r⃗, Ω̂, E) + Σ_t(r⃗, E) Ψ(r⃗, Ω̂, E) = ∫∫ Σ_s(r⃗, E′→E, Ω̂′→Ω̂) Ψ(r⃗, Ω̂′, E′) dΩ̂′ dE′    (5)

where Ψ(r⃗, Ω̂, E) is the flux density distribution integrated over the exposure time. The Σ_t term refers to the total macroscopic cross section for photon interactions that remove photon counts from a phase space bin through total absorption, out-scattering, or re-orientation. The Σ_s term refers to the scattering cross section that can convert an incoming photon's direction Ω̂′ → Ω̂ and energy E′ → E, causing an entrance move into the phase space bin of interest at (r⃗, Ω̂, E).
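The Figure 5 book-keeping can be cross-checked with a few lines of arithmetic; the counts below are exactly those of the worked example, tallied from both viewpoints.

```python
# Counts for the phase space bin of interest (omega_1, E_1) in Figure 5.
entered_bin = 60         # photons entering the voxel in the bin of interest
absorbed = 20            # photoelectric absorption: counts removed entirely
out_scattered = 10       # scattered out of (omega_1, E_1) to (omega_3, E_3)
in_scattered = 20        # scattered from (omega_2, E_2) into (omega_1, E_1)

# Viewpoint 1: interaction gains and losses inside the voxel.
net_change = -absorbed - out_scattered + in_scattered

# Viewpoint 2: divergence, i.e., outgoing minus incoming counts in the bin.
transmitted = entered_bin - absorbed - out_scattered
exited_bin = transmitted + in_scattered
divergence_change = exited_bin - entered_bin
```

Both tallies yield −10, the equality that the steady-state Boltzmann equation enforces for every bin.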
In practice, a large set of coupled Boltzmann transport equations must be solved to predict the flow of photons and charged particles across a 3D matrix of dose voxels. The ultimate solution is the charged particle fluence, Φ^(e)(r⃗), that reaches geometric voxels in real patient space coordinates. Once this fluence is determined, the dose can be calculated using the local mass collisional stopping power, (S/ρ)_col, specific to the voxel's atomic composition. The following equation is simplified for monoenergetic charged particles and assumes delta-ray equilibrium (δCPE) within a voxel:

D(r⃗) = Φ^(e)(r⃗) · (S/ρ)_col    (6)

In practice, a polyenergetic variant of this equation is actually used. Note that absorbed dose is the concentration of energy deposited in a specific absorber. The flippant use of the word "dose" without specifying the absorbing medium has led to ambiguity in dose results reported by different algorithms [58-60].
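A small numerical sketch of this fluence-to-dose step, with assumed values for the electron fluence and a soft-tissue-like stopping power; only the MeV/g-to-gray conversion factor is a physical constant.

```python
# D = fluence x mass collision stopping power (monoenergetic, delta-CPE).
MEV_PER_G_TO_GY = 1.602176634e-10   # 1 MeV/g = 1.602e-10 J/kg = 1.602e-10 Gy

fluence = 2.0e9      # charged particle fluence, electrons per cm^2 (assumed)
s_over_rho = 1.85    # mass collision stopping power, MeV cm^2/g (assumed,
                     # roughly water-like for ~1 MeV electrons)

dose_mev_per_g = fluence * s_over_rho          # energy concentration, MeV/g
dose_gy = dose_mev_per_g * MEV_PER_G_TO_GY     # absorbed dose in gray
```

Swapping in the stopping power of a different absorbing medium changes the reported dose for the same fluence, which is precisely the medium-specification ambiguity noted above.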
As an example of a nuclear physics code that was adapted for medical physics applications, the Attila code (http://silverfirsoftware.com/attilads.html) from the Los Alamos National Laboratory has been recast as the Acuros® XB algorithm on Varian's Eclipse treatment planning system [61]. Simplifications and assumptions are required to achieve practical turnaround times: (a) the normal Cartesian voxel space of the patient anatomy is restructured into a lattice of tetrahedrally-shaped finite elements; (b) scattering cross sections are approximated by a finite series of Legendre polynomials; (c) energy levels are quantized via the multi-group method; and (d) particle directions are quantized via discrete ordinates. The accuracy of dose results remains competitive with that of Monte Carlo simulations, and the process can be completed in practical times for routine radiotherapy planning.
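To make discretizations (b)-(d) concrete, the toy solver below applies discrete ordinates and source iteration to a one-group, one-dimensional slab with isotropic scattering. The geometry and cross sections are invented for illustration; this is a teaching sketch of the deterministic sweep, not the Acuros XB algorithm.

```python
import numpy as np

# One-group, 1D slab, S_8 discrete ordinates with isotropic scattering.
n_cells, width = 40, 10.0                 # 10 cm slab in 40 cells (assumed)
dx = width / n_cells
sigma_t, sigma_s = 0.5, 0.2               # total / scattering cross sections, 1/cm
mu, w = np.polynomial.legendre.leggauss(8)  # ordinate directions and weights

psi = np.zeros((8, n_cells))              # angular flux per ordinate
phi = np.zeros(n_cells)                   # scalar flux
for _ in range(200):                      # source iteration loop
    source = 0.5 * sigma_s * phi          # isotropic in-scatter source
    for n in range(8):
        if mu[n] > 0:                     # sweep left -> right
            inflow = 1.0                  # unit angular flux on the left face
            cells = range(n_cells)
        else:                             # sweep right -> left (vacuum boundary)
            inflow = 0.0
            cells = range(n_cells - 1, -1, -1)
        for i in cells:
            # Upwind differencing of: mu dpsi/dx + sigma_t psi = source
            psi[n, i] = (abs(mu[n]) / dx * inflow + source[i]) / (abs(mu[n]) / dx + sigma_t)
            inflow = psi[n, i]
    phi_new = w @ psi                     # quadrature over discrete ordinates
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break                             # converged: scattering source is stable
    phi = phi_new
```

Each outer pass re-evaluates the scattering source from the previous scalar flux, and each inner sweep marches cell by cell in the flow direction of one ordinate; the same structure, in tetrahedral geometry with multi-group energies and anisotropic (Legendre-expanded) scattering, underlies production deterministic solvers.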
To recapitulate, solving the linear Boltzmann equations with the boundary conditions of clinical x-ray beams incident on a finite heterogeneous patient space has now become practical. Advances in numerical methods and computer hardware have produced practical solutions for clinical applications of ever-increasing complexity. The biggest disadvantage at present is unfamiliarity with commissioning a new computational technique for clinical application. A primer on Boltzmann concepts and equations is available for readers facing the challenging learning curve [23].

Conclusion: State of the Art
The progression of algorithms has been driven by an interplay across the fields of nuclear physics, numerical computation, 3D medical imaging, and experimental dosimetry, the theme of this conference [62]. The accuracies of Monte Carlo and Boltzmann results are becoming difficult to distinguish because the results often lie within the range of experimental uncertainties. Table 3 summarizes the performance of various contemporary algorithms for typical clinical situations. Direct head-to-head comparison across algorithms is complicated by variations in computer hardware, runtime parameters, and debated tolerance metrics for judging acceptable clinical accuracy. Remarkable progress has been achieved in the development of stochastic and deterministic algorithms. Both methods have recently "caught up" with convolution-superposition algorithms in terms of speed of execution, and potentially offer better accuracy over a wider scope of clinical conditions. The next evolutionary step for these algorithms will likely be application to rapid dose optimization cycles and adaptive radiotherapy at the treatment machine console while the patient remains in the treatment position.
Progress in computer technology will continue to have an impact not only on treatment planning but also on multi-modality imaging, automated dose optimization, and on-line dose reconstruction for adaptive radiotherapy. The GPUMCD code was implemented on an NVIDIA GTX480 processor to achieve a speed gain on the order of 1000 over the general-purpose EGSnrc/DOSXYZ code running on a CPU (Intel i7 860) [63], reducing execution times to a matter of seconds. As another clinical example, a multi-core CPU-based system (PhiMC code, Xeon V3) produces clinical IMRT treatment plans, as shown in Figure 6, in about 10 seconds [64]. Cloud-based shared computing resources are also under investigation [65]. Dose accuracy will continue to evolve, with opportunities for interactive treatment re-optimization at the treatment console for dose-adaptive 4D radiotherapy [66-69]. With the assessment of acute and long-term clinical outcomes obtained under such tightly-controlled dosimetry, radiobiological models of human tumour and normal tissue response will take a quantum leap forward [70].

Figure 1 .
Figure 1. Chronology of major advances in medical physics and their influence on dose computation.

Figure 2 .
Figure 2. Categories of algorithms and corresponding generations. CPE denotes charged particle equilibrium. ρe denotes electron density (electrons per cm³). Z denotes consideration of atomic number effects; (Z) denotes partial consideration.

Figure 3 .
Figure 3. Convolving the attenuated TERMA distribution (left panel) with energy deposition kernels (middle panel) produces the blurred dose distribution (right panel).

Figure 4 .
Figure 4. Total kernels (points) and singly-scattered Compton photon contributions (dash-dot lines) for monoenergetic incident photon energies of 0.4, 1.25, and 10 MeV, from left to right. The kernel units are fractions of the energy released at the origin (TER) subsequently deposited in neighbouring voxels per unit volume. Reproduced with permission [29].

Figure 5 .
Figure 5. Voxel exposed to a fluence of bi-energetic, bi-directional photons. Book-keeping of the phase space bin counts is shown in the inset on the right side.

12th International Conference on 3D and Advanced Dosimetry, Journal of Physics: Conference Series 2630 (2023) 012008

Figure 6 .
Figure 6. High-resolution dose distributions with 1% precision for prostate (top panels) and head & neck (bottom panels) IMRT cases, produced with the Phi Monte Carlo code in 10 seconds [64]. Reproduced with permission under the Creative Commons license.