Efficiency enhancements of a Monte Carlo beamlet based treatment planning process: implementation and parameter study

Objective. The computational effort to perform beamlet calculation, plan optimization and final dose calculation of a treatment planning process (TPP) generating intensity modulated treatment plans is enormous, especially if Monte Carlo (MC) simulations are used for dose calculation. The goal of this work is to improve the computational efficiency of a fully MC based TPP for static and dynamic photon, electron and mixed photon-electron treatment techniques by implementing multiple methods and studying the influence of their parameters. Approach. A framework is implemented calculating MC beamlets efficiently in parallel on each available CPU core. The user can specify the desired statistical uncertainty of the beamlets, a fractional sparse dose threshold to save beamlets in a sparse format and minimal distances to the PTV surface from which 2 × 2 × 2 = 8 (medium) or even 4 × 4 × 4 = 64 (large) voxels are merged. The compromise between final plan quality and computational efficiency of beamlet calculation and optimization is studied for several parameter values to find a reasonable trade-off. For this purpose, four clinical and one academic case are considered with different treatment techniques. Main results. Setting the statistical uncertainty to 5% (photon beamlets) and 15% (electron beamlets), the fractional sparse dose threshold relative to the maximal beamlet dose to 0.1% and minimal distances for medium and large voxels to the PTV to 1 cm and 2 cm, respectively, does not lead to substantial degradation in final plan quality compared to using 2.5% (photon beamlets) and 5% (electron beamlets) statistical uncertainty and no sparse format nor voxel merging. Only OAR sparing is slightly degraded. Furthermore, computation times are reduced by about 58% (photon beamlets), 88% (electron beamlets) and 96% (optimization). Significance. 
Several methods are implemented improving computational efficiency of beamlet calculation and plan optimization of a fully MC based TPP without substantial degradation in final plan quality.


Introduction
A highly conformal dose distribution to the target volume is achieved with treatment plans for photon intensity modulated radiotherapy (IMRT). IMRT was enabled through the introduction of the multileaf collimator (MLC) (Convery and Rosenbloom 1992) and inverse planning strategies (Bortfeld 2006). To follow an inverse planning strategy for photon IMRT, the user sets up multiple treatment fields. Each field is discretized into hundreds of small beams (beamlets) and their dose distributions are calculated. These beamlets are described by a dose-influence matrix d_ij, where j is the beamlet index and i is the voxel index. A large-scale optimization follows to optimize the large number of parameters, such as leaf positions of the MLC or intensities, based on the information of the d_ij. Finally, the dose distribution of the optimized plan is re-calculated to accurately predict the delivered dose to the patient with algorithms that also consider the impact of the MLC, such as Monte Carlo (MC) simulations, solvers of the linear Boltzmann transport equation or convolution/superposition models. Final plan quality is less degraded from optimized to final dose distribution if the same accurate dose calculation algorithm is used for beamlet calculation as for final dose calculation (Jeraj 2002, Dogan et al 2006, Li et al 2015). However, computation time is increased. It is apparent that following an inverse planning strategy in the treatment planning process (TPP), with its beamlet calculations, optimization and final dose calculation, leads to an enormous computational effort.
For the development of novel treatment techniques and optimization strategies beyond standard IMRT, there is also a strong tendency towards increasing computational demands: • Dynamic treatment techniques such as volumetric modulated arc therapy (VMAT) (Yu 1995, Otto 2008) or dynamic trajectory radiotherapy (DTRT) (Smyth et al 2019), extending VMAT by dynamic table and collimator rotation, are described by dynamic paths discretized into control points. The beamlets are calculated for each of these control points.
• Treatment techniques using multiple beam energies, such as photon MLC based modulated electron radiotherapy (MERT) (Klein et al 2008, Salguero et al 2009, 2010) or mixed photon-electron beam radiotherapy (MBRT), require beamlet calculations for each beam energy.
• Robust optimization needs to calculate beamlets for each error scenario to be considered (Unkelbach et al 2018a).
• Optimization strategies such as multicriteria optimization and auto-planning run numerous optimizations with different objectives each dealing with a huge amount of beamlets.
• Many beam angle optimization strategies need to calculate the beamlets for any beam angle under consideration and need to run multiple optimizations to find a suitable set of beam angles (Haas 2003, Aleman 2007).
Thus, the computation time is a crucial aspect for the clinical introduction of novel techniques and hence of great importance. However, there usually exists a compromise between final plan quality and computational efficiency, as already pointed out with the example of the algorithm selection for beamlet calculation. There are multiple approaches to reduce the computation time of the TPP in general. Especially adaptations in the beamlet calculation are relevant for the TPP:
• Adaptations and suitable parameter selection of the beamlet calculation algorithm itself. In case of MC simulations, examples are variance reduction techniques (Fippel 2016) and the choice of the desired statistical uncertainty (Jeraj et al 2000, Ma et al 2005).
• Reduction of beamlet data leading to a faster plan optimization. An example is using a larger voxel size (De Smedt et al 2005).
• Perform beamlet calculation using faster hardware and especially using parallel execution (Neph et al 2019), because each beamlet can be calculated independent from the others.
The techniques in the first two categories have in common that they often lead to less accurate or less precise beamlets and therefore to a degradation in final plan quality. Thus, the impact of such solutions needs to be studied systematically to find a reasonable trade-off.
The goal of this work is to improve the computational efficiency of a fully MC based TPP. For this purpose, multiple approaches enhancing the efficiency of the beamlet calculation itself or reducing the beamlet data leading to reduced optimization times are implemented and their impact on final plan quality and computational efficiency is studied. Several intensity modulated photon, electron and mixed photon-electron beam treatment techniques are considered for this study.

Methods
In this work, whenever the term statistical uncertainty is used for both beamlet and final dose distributions, the mean statistical uncertainty (one standard deviation) over the voxels with dose values higher than 50% of the maximum dose is meant. The statistical uncertainty is always determined with the history by history method (Walters et al 2002).
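As a concrete illustration, this convention can be sketched in a few lines. The array names are illustrative; the variance estimator follows the history-by-history method of Walters et al (2002):

```python
import numpy as np

def mean_statistical_uncertainty(dose, dose_sq, n_hist):
    """Mean relative uncertainty (one standard deviation) over the
    voxels receiving more than 50% of the maximum dose.

    dose    : per-voxel sum of per-history dose deposits
    dose_sq : per-voxel sum of squared per-history dose deposits
    n_hist  : number of independent histories
    """
    mean = dose / n_hist
    # History-by-history variance of the mean (Walters et al 2002)
    var = (dose_sq / n_hist - mean**2) / (n_hist - 1)
    sigma = np.sqrt(np.maximum(var, 0.0))
    # Restrict the average to the high-dose volume (> 50% of max dose)
    mask = mean > 0.5 * mean.max()
    return float(np.mean(sigma[mask] / mean[mask]))
```

Note that voxels below the 50% threshold, where relative uncertainty is naturally high, do not enter the reported value.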

Treatment planning process
In recent work (Henzen et al 2014a, Mueller et al 2017, 2022), a fully MC based TPP as illustrated in figure 1(a) was developed to create treatment plans for IMRT, MERT, MBRT, VMAT, DTRT and dynamic mixed beam radiotherapy (DYMBER). These treatment plans are all deliverable in the developer mode of a TrueBeam system (Varian Medical Systems, Palo Alto, CA) equipped with a Millennium 120 MLC (Varian Medical Systems, Palo Alto, CA). The source-axis-distance (SAD) of the TrueBeam system is 100 cm. DYMBER is a combination of MERT and DTRT. Plans created by this TPP for MERT, MBRT and DYMBER collimate the electron beams with the photon MLC installed on the TrueBeam. Thus, no cut-out or applicator is used, but the patient is moved to a reduced source-to-surface distance (SSD) to reduce in-air scatter of electrons.
This TPP starts by importing the CT data into a research version of the Eclipse treatment planning system (TPS) (Varian Medical Systems, Palo Alto, CA) and contouring the structures including the PTV, OARs and normal tissue. Afterwards, the fields are defined in Eclipse TPS. This step is treatment technique dependent. All photon and electron fields to be delivered with intensity modulated step-and-shoot apertures from static beam directions (called static fields) are defined manually as single fields in Eclipse TPS. Similarly, VMAT arcs are defined manually in Eclipse TPS. To setup photon dynamic trajectories in Eclipse TPS, the external path finding procedure described in Fix et al (2018) is used. In this work, conventional arcs and dynamic trajectories are called dynamic fields.
The next step is to calculate the beamlets for each beamlet collection (static field or control point of a dynamic field) within the framework of the Eclipse interfaced Swiss Monte Carlo Plan (SMCP) (Fix et al 2007). For the purpose of this work, the part of the SMCP managing beamlet calculations was reimplemented in C++ to fulfill the following three functional requirements to steer the computational performance of beamlet calculation and plan optimization: • The user can specify the desired statistical uncertainty of the beamlets.
• The user can specify a sparse dose threshold setting dose values below this threshold to zero.
• The user can specify distances from voxels to the PTV surface from which voxels are merged to larger voxels.

Figure 1. (b) The inner loop iterating over all beamlets of a single field (orange background) is performed in parallel with as many threads as CPU cores available. Beamlet specific tasks are colored in magenta, tasks specific to a beamlet collection (static field or control point of a dynamic field) in purple and common tasks for the whole plan in dark red.
These parameters are described in more detail in sections 2.1.1-2.1.3. The beamlet calculation is designed as a single multi-threaded process calculating all beamlets of all fields assigned to a treatment plan. This enables performing common tasks for all beamlet collections, and common tasks for all beamlets of a specific beamlet collection, only once. Thus, computational overhead is reduced. The workflow is illustrated in figure 1(b). First, HU values of the CT data are transformed to physical density and material composition according to a dose scoring grid and CT calibration ramp. Next, as many photon and electron dose calculation engines are prepared as there are CPU cores available. Each engine is an instance of the Voxel Monte Carlo (VMC++) (Kawrakow and Fippel 2000) algorithm for photons or the electron Macro Monte Carlo (eMC) (Neuenschwander and Born 1992, Neuenschwander et al 1995, Fix et al 2013) algorithm for electrons. Afterwards, the contours of the structures are loaded and it is identified which voxels are merged together. A first loop then iterates over all beamlet collections of the fields assigned to the treatment plan, starting with the preparation of the beamlet collection. This includes the definition of the rectangular beamlet grid in the middle plane of the MLC. The beamlet grid is conformal to the PTV in beams eye view (static field) or the smallest possible opening encompassing all control point specific conformal openings (dynamic field). An additional beam modality and SSD dependent margin is added to the beamlet grid size to account for the beam penumbra. The beamlet size is 0.5 × 0.5 cm 2 for the inner 0.5 cm wide leaves and 0.5 × 1.0 cm 2 for the outer 1.0 cm wide leaves, specified in the plane of the isocenter. Furthermore, the preparation of the beamlet collection also calculates the transformation of the beamlet collection from the beam coordinate system to the patient coordinate system.
This is needed because the beamlet sources are patient-independent pre-calculated phase-spaces located at the treatment head exit plane and provided in the beam coordinate system, while the dose calculation engines simulate the particle transport in the patient coordinate system. Then, a second loop follows, iterating over all beamlets of the prepared beamlet collection in parallel using OpenMP with as many threads as CPU cores available. For each beamlet, first the number of particles to be simulated is estimated to reach the user-defined desired statistical uncertainty. Afterwards the phase-space is loaded and a third loop (not explicitly depicted as a loop in figure 1(b)) iterating over the particles is executed using a prepared dose calculation engine. The engine transports the particle and scores the dose values in the dose scoring grid (original voxel grid). Afterwards, the sparse dose threshold is applied, and then the dose is transformed to the prepared voxel merged format.
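The per-beamlet part of this workflow can be sketched as follows. The actual implementation is multi-threaded C++ with OpenMP; this Python analogue with a thread pool, a stub transport engine and a hard-coded 0.1% sparse threshold is purely illustrative (all names are assumptions, not the SMCP API):

```python
from concurrent.futures import ThreadPoolExecutor
import os

def simulate_beamlet(beamlet, engine):
    """Per-beamlet pipeline: estimate particle count, transport particles,
    apply the sparse dose threshold. The engine is a stand-in for VMC++/eMC."""
    n = beamlet["n_particles"]      # estimated from the desired uncertainty
    dose = engine(beamlet, n)       # MC transport + dose scoring (stub)
    # Example 0.1% fractional sparse dose threshold relative to the max dose
    dose = [d if d >= 0.001 * max(dose) else 0.0 for d in dose]
    return dose                     # would then be voxel-merged

def calculate_collection(beamlets, engine):
    """Inner loop over all beamlets of one prepared collection, run in
    parallel with as many workers as CPU cores (mirrors the OpenMP loop)."""
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(lambda b: simulate_beamlet(b, engine), beamlets))
```

Because each beamlet is simulated independently, this inner loop parallelizes without synchronization beyond joining the results.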
After beamlet calculation is completed, the treatment plan is optimized using a hybrid column generation and direct aperture optimization (H-DAO). The optimization relies solely on the beamlets, while other impact of the MLC such as particle scattering is not considered. Thus, a final MC dose calculation is performed within the SMCP framework re-calculating the dose distribution of each static aperture and each control point of dynamic fields depending on the field type (static beam direction or dynamic field). For the final dose calculation of photon beams, the source is a pre-simulated phase-space located above the secondary collimator jaws. The particles of the photon beams are simulated through the secondary collimator jaws and MLC using VMC++. For the electron beams, the multiple source model ebm70 (Henzen et al 2014b) is used. The same algorithms, VMC++ (photon beams) and eMC (electron beams), are used for the final dose calculation in the patient as for the beamlet calculation. Finally, the monitor units (MUs) of the static apertures and control points are re-optimized using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm with the same objectives as used for the H-DAO optimization. The purpose of this re-optimization is to mitigate the degradation from optimized to final dose distribution.

Statistical uncertainty
The user can specify the desired statistical uncertainty u_p separately for photon and electron beamlets, where p stands for the particle type x (photon beam) or e (electron beam). For each specific beamlet, the number of particles n to be simulated to reach the desired statistical uncertainty u_p is estimated as a function of the actual beamlet size b, the actual source-to-surface distance SSD and the actual voxel volume V (outlined here only for cubic voxels). For this purpose, pre-simulations of beamlets are performed to determine the number of particles n_ref,b needed to reach u_ref = 1% statistical uncertainty under defined reference conditions (see table 1), where b stands either for small (0.5 × 0.5 cm 2) or for large (0.5 × 1.0 cm 2) beamlets. The reference conditions are at SSD_ref = 95 cm for photon beams and SSD_ref = 80 cm for electron beams, with a reference voxel volume V_ref = 0.25 × 0.25 × 0.25 cm 3. The formula used to determine n for the specific beamlet is

n = n_ref,b · (u_ref / u_p)^2 · (SSD / SSD_ref)^2 · (V_ref / V)^(2/3)

The quadratic correction for the statistical uncertainty is motivated by the history-by-history approach, in which the statistic is taken over the number of independent histories (equal to n, as all sampled particles here are independent). The quadratic correction for the SSD is motivated by the inverse square law, and the correction for the voxel size is motivated by the change of the number of histories impinging on a cubic voxel (linear increase with the area of the voxel surface orthogonal to the beam direction).
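A minimal sketch of this estimation, using the photon-beam reference conditions (SSD_ref = 95 cm, 0.25 cm reference voxel side). Function and argument names are illustrative:

```python
def particles_needed(n_ref, u, ssd, voxel_side,
                     u_ref=0.01, ssd_ref=95.0, voxel_side_ref=0.25):
    """Estimate the number of histories n for one beamlet.

    n_ref      : histories for 1% uncertainty at reference conditions
                 (per beamlet size and beam modality, cf. table 1)
    u          : desired statistical uncertainty as a fraction (e.g. 0.05)
    ssd        : actual source-to-surface distance in cm
    voxel_side : side length of the cubic scoring voxel in cm
    """
    v_ratio = (voxel_side_ref / voxel_side) ** 3   # V_ref / V
    return (n_ref
            * (u_ref / u) ** 2          # uncertainty ~ 1/sqrt(n)
            * (ssd / ssd_ref) ** 2      # inverse square law
            * v_ratio ** (2.0 / 3.0))   # histories hitting a voxel ~ V^(2/3)
```

For example, doubling the desired uncertainty or doubling the voxel side each cuts the required histories to one quarter.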

Sparse format
The total beamlet data is very memory-inefficient, because many voxels receive zero dose or very low dose. Thus, the beamlet data can be enormously reduced by neglecting dose values below a certain sparse dose threshold. In the implemented beamlet calculation framework, the user can specify a fractional sparse dose threshold t (e.g. 1%) to determine an absolute sparse dose threshold d_t in cGy/MU according to

d_t = t · d_max,j

where d_max,j is the maximal dose value of beamlet j to any voxel in the whole body. Dose values in voxel i of beamlet j below this threshold (d_ij < d_t) are then set to zero. The optimization does not consider any d_ij = 0.0, leading to reduced beamlet data and reduced optimization time.
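A sketch of the thresholding step (names are illustrative; in practice the zeroed entries would simply be omitted from the sparse d_ij storage):

```python
import numpy as np

def apply_sparse_threshold(dose_j, t):
    """Zero out voxels of beamlet j whose dose is below t * d_max,j,
    so that the beamlet column can be stored in a sparse format."""
    d_t = t * dose_j.max()                 # absolute threshold in cGy/MU
    return np.where(dose_j < d_t, 0.0, dose_j)

def sparsity(dose_j):
    """Fraction of nonzero entries; the memory saving scales with 1 minus this."""
    return np.count_nonzero(dose_j) / dose_j.size
```

Because MC beamlets deposit small dose values over a large volume, even a low threshold such as t = 0.1% removes a large share of the entries.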

Voxel merging
A larger voxel size leads to less beamlet data but also to less accurate beamlets. A promising compromise is to increase the voxel size further away from the PTV, because a small voxel size is probably more important close to and in the PTV. This concept of distance dependent voxel merging is implemented in the beamlet calculation framework. The user can specify two distances d_m and d_l. First, all original voxels are grouped in large voxels of 4 × 4 × 4 = 64 original voxels. Those large voxels with a distance, determined from their center to the PTV surface, larger than d_l are kept as large voxels. All others are re-grouped in medium voxels of 2 × 2 × 2 = 8 original voxels. Those medium voxels with a distance to the PTV surface larger than d_m are kept as medium voxels and all others as original voxels. However, there is one exception: the original voxel size is always used for structures with a volume smaller than 500 original voxels, such that small structures like the chiasma or the lenses are represented by a reasonably large number of voxels. Voxels overlapping the PTV contours are always of original voxel size. The concept of voxel merging is illustrated in figure 2. The plan optimization accounts for the different voxel sizes and for the fact that a voxel of any size may only partly overlap with a structure. During optimization, each voxel is weighted by its volumetric overlap with a specific structure for the calculation of the objective function. Furthermore, any dosimetric evaluation during optimization (e.g. mean dose to a structure) also considers these volumetric overlaps of voxels and structures.
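The distance-dependent size selection and the merging of one block can be sketched as below. Averaging as the merge operation is an assumption (it conserves the mean dose for equally sized, equally weighted voxels); function names are illustrative:

```python
import numpy as np

def merge_level(dist_to_ptv, d_m, d_l):
    """Merge level for a voxel block given the distance (cm) from its
    center to the PTV surface: 0 = original voxels, 1 = medium (2x2x2),
    2 = large (4x4x4). Exceptions for small structures and voxels
    overlapping the PTV contour are omitted here."""
    if dist_to_ptv > d_l:
        return 2
    if dist_to_ptv > d_m:
        return 1
    return 0

def merge_block(dose_block):
    """Merge a cubic block of original voxels into one voxel by
    averaging, which preserves the mean dose over the merged volume."""
    return float(np.mean(dose_block))
```

With the selection-plan values d_m = 1 cm and d_l = 2 cm, a block centered 1.5 cm from the PTV surface would be stored as a single medium voxel.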

Parameter study
The compromise between final treatment plan quality and computational efficiency for the parameters listed in table 2 is studied for several parameter values. Each parameter is tested separately by running the whole TPP with different values of that parameter.

Table 1. Number of particles needed to reach 1% statistical uncertainty with small 0.5 × 0.5 cm 2 (middle column) and large 0.5 × 1.0 cm 2 (right column) beamlets in the reference conditions. These conditions are an SSD of 95 cm for photon beams and 80 cm for electron beams with the beam perpendicular to a 30 × 30 × 30 cm 3 water phantom with 0.25 × 0.25 × 0.25 cm 3 voxel size.

This parameter study is performed for five different treatment techniques, each planned for another clinical or academic case of another treatment site as listed in table 3. The original voxel size is always 0.25 × 0.25 × 0.25 cm 3. The academic case consists of a 30 × 30 × 30 cm 3 water phantom including contours of a wedge-shaped PTV and a single OAR surrounding the PTV. The PTV is located 0.5 cm from the body surface and the maximal depth from the closest body surface to the PTV is 5.5 cm.
The objective function, which is minimized by the H-DAO and also the MU re-optimization after final dose calculation, consists always of the same objectives per case. There are always a max-dose and min-dose objective for the PTV, a normal tissue objective penalizing dose values depending on their distance from the PTV and one generalized equivalent uniform dose (gEUD) (Niemierko 1999) objective for each OAR. The whole formalism of the objective function value is described in Mueller et al (2022). The value of the tissue-specific factor a s used in the gEUD formula is OAR specifically selected based on the work of Luxton et al (2008). The weights and dose or gEUD values of all the objectives are identified by iterative adjustments of the human planner.
The dependency of the final treatment plan quality on the single parameter values is evaluated with the objective function value, the dose homogeneity index HI in the PTV, the gEUD value averaged over all OARs (gEUD_OARs) using the tissue-specific factors a_s already used for the objectives, and the mean dose D_NT in normal tissue. All these quantities are evaluated after MU re-optimization of the final dose distribution, which always has a statistical uncertainty of 1% or lower. Furthermore, no sparse format or voxel merging is ever applied for the final dose distribution, and the original voxel resolution is the same as for beamlet calculation.
HI is expressed by

HI = (D_2% − D_98%) / D_p

where D_p is the prescribed dose and D_2% and D_98% are the dose received by at least 2% and 98%, respectively, of the PTV. gEUD_OARs is the gEUD averaged over all OARs:

gEUD_OARs = (1 / N_OARs) Σ_s gEUD_s with gEUD_s = (Σ_i v_i,s d_i^(a_s))^(1/a_s)

where N_OARs is the number of OARs, v_i,s is the fractional volume of voxel i belonging to OAR s and d_i is the dose in voxel i.

Figure 2. Illustration of the voxel merging approach implemented and investigated in this work on a two-dimensional plane of the CT. There are three sizes of voxels with the smallest being the original voxel size for beamlet calculation. The size is determined depending on the distance from the target to be treated.

Additionally, the electron contribution is evaluated for the mixed beam plans. It is calculated as the mean dose to the PTV delivered by the electron beams divided by the mean dose to the PTV of the whole plan.
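For uniformly sized voxels lying fully inside a structure, these evaluation quantities reduce to the following sketch (the volumetric-overlap weights used during optimization are omitted; names are illustrative):

```python
import numpy as np

def homogeneity_index(ptv_dose, d_p):
    """HI = (D2% - D98%) / Dp, where D2% and D98% are the doses received
    by at least 2% and 98% of the PTV volume, respectively."""
    d2 = np.percentile(ptv_dose, 98)    # dose exceeded by 2% of voxels
    d98 = np.percentile(ptv_dose, 2)    # dose exceeded by 98% of voxels
    return (d2 - d98) / d_p

def geud(oar_dose, a):
    """Generalized equivalent uniform dose (Niemierko 1999) for equally
    weighted voxels; a is the tissue-specific factor a_s."""
    return float(np.mean(oar_dose ** a) ** (1.0 / a))
```

For a perfectly uniform PTV dose, HI is 0, and for a = 1 the gEUD reduces to the mean dose of the structure.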

Additional parameter value selection
One more treatment plan (selection plan) is created with the same computer hardware for each case listed in table 3 with the following combined selection of parameter values: u x = 5%, u e = 15%, t = 0.1%, d m = 1 cm, d l = 2 cm. This plan is compared to the conservative plan with the most conservative values under consideration: u x = 2.5%, u e = 5%, t = 0.0%, no voxel merging. The parameter values of the selection plan aim to achieve a reasonable compromise between degradation in final plan quality and computational efficiency. They are motivated by the results of the parameter study described in section 2.2.

Parameter study
The dependency of the evaluation quantities on the values of the investigated parameters, namely statistical uncertainty of the photon beamlets and electron beamlets, fractional sparse dose threshold and distances for the voxel merging are shown in figures 3-6, respectively.
For each investigated case with photons, the statistical uncertainty of the photon beamlets has a large impact on the final treatment plan quality, as shown in figure 3. However, the degradation is low for 5% compared to 2.5% statistical uncertainty. The two cases which are studied with mixed beams are less affected. The increased electron contribution might explain the smaller impact, because electron beams can be used instead of photon beams to a certain degree. Beamlet computation time for the photon beamlets is substantially reduced by increasing the statistical uncertainty from 2.5% up to 7.5%, but for higher statistical uncertainties the computation time converges quickly. More surprisingly, the optimization memory usage and optimization time are also substantially reduced with higher statistical uncertainty of the photon beamlets, except for the breast case, where these evaluations are probably dominated by the electron beamlets.
The statistical uncertainty of the electron beamlets has a smaller impact on the final treatment plan quality compared to the photon beamlets, as shown in figure 4. However, there are still small discrepancies visible, especially in the normal tissue for the brain case, which are caused by the increased photon contribution. As for the photon beamlets, the beamlet calculation time, optimization time and optimization memory usage are substantially reduced by increasing the statistical uncertainty of the electron beamlets.
Any of the investigated fractional sparse dose thresholds leads to degradations in final treatment plan quality, especially in the sparing of OARs and normal tissue in general, as shown in figure 5. As expected, the beamlet calculation time is not substantially influenced, but the optimization time is reduced by at least 30% by using any of the investigated fractional sparse dose thresholds. The memory consumption of the optimization is also substantially reduced, by at least 70% except for the prostate case with only about 20%. However, increasing the fractional sparse dose threshold to more than 0.1% only leads to minor further reductions of optimization time and memory usage.

Table 3. The clinical and academic cases used for the investigations in this work. x and e stand for photons and electrons, respectively. The static beam directions are used for intensity modulated step-and-shoot fields with a certain user-defined number of apertures. Each static beam direction for electrons is prepared for six different beam energies (6, 9, 12, 15, 18 and 22 MeV).

Surprisingly, none of the investigated voxel merging distances leads to substantial degradation in final plan quality (see figure 6). Beamlet calculation time is, as expected, also not substantially affected, but optimization time and memory usage are enormously reduced with every shortening of the voxel merging distances. These observations are consistent over all cases.

Additional parameter value selection
The evaluation quantities are compared between the conservative and selection plan in table 4. Furthermore, DVHs of the two plans are compared in figure 7. According to the DVH comparison, dose homogeneity is nearly identical between the conservative and selection plan for all cases, while for the selection plan the sparing of the OARs is slightly worse for some cases (gEUD_OARs is at most 0.7 Gy higher). D_NT is similar between the conservative and selection plan for all cases, except for the brain case, for which the selection plan has a 0.4 Gy higher value.
Performance improvements from the conservative to the selection plans are substantial and consistent over all investigated cases. The absolute numbers vary enormously between the different cases because of the different numbers of fields. In more detail, computation time is reduced by about 52% to 60% for photon beamlets and by about 88% for electron beamlets. For the selection plans, the beamlet calculation time is between 0.2 and 8.6 min for the photons and between 4.5 and 131.3 min for the electrons. Optimization time and memory consumption are reduced even more for the selection plans, by about 88% to 97% and by about 95% to 98%, respectively. In absolute units, optimization time and memory consumption are 0.3 to 11.8 min and 0.4 to 38.2 GB, respectively.

Discussion
An MC beamlet calculation framework processing beamlets in parallel is successfully implemented, including the features to specify a desired statistical uncertainty for photon and electron beamlets, a fractional sparse dose threshold and minimal distances between voxel and PTV surface from which voxels are merged. These specifications have substantial influence on the computational efficiency of the beamlet calculation itself as well as on the plan optimization, the optimization memory usage and the final plan quality. However, the parameter study in this work shows that it is possible to reduce computation time enormously without substantial degradation in final plan quality compared to using conservatively low statistical uncertainty and no sparse reduction or voxel merging at all. The settings for the selection plans are demonstrated to fit many different treatment techniques. Furthermore, these parameters could be specifically tuned for treatment techniques or optimization strategies with a different end goal. An example is beam angle optimization, because there the goal is not primarily plan quality but to find a suitable set of beam directions. For this purpose, it might be suitable to use less conservative parameter values.
The degradation in final plan quality when using 5% statistical uncertainty for photon beamlets is considered acceptable, and higher statistical uncertainty does not lead to substantial improvements in computation time. This appears surprising, as substantially fewer particles need to be simulated for higher statistical uncertainty. However, for such a low number of particles, the MC transport is no longer the most time consuming task; instead, it is the transformation of the dose into the sparse voxel merged format. This step also includes the change of the unit from deposited energy per simulated particle to cGy/MU. For electron beamlets, a substantially higher statistical uncertainty of 15% seems acceptable. However, the number of particles to be simulated to reach this statistical uncertainty of 15% is still more than double the number of particles needed to reach 5% for photon beamlets (see table 1). The reason why the electron beamlets need many more particles than the photon beamlets to reach the same statistical uncertainty is their dose distribution, which is spread much more laterally to the beam direction. Thus, electrons in general deposit a lot of their energy outside of the high-dose volume (dose larger than 50% of the maximal dose) used to calculate the mean statistical uncertainty. However, when electron beamlets are added together during optimization, the statistical uncertainty of the summed dose distribution is lowered more drastically than for photon beamlets.

Table 4. Evaluated quantities of the conservative plan with the most conservative parameter values and the selection plan with selected parameter values to achieve a reasonable compromise between degradation in final plan quality and computational efficiency. The last column depicts the relative change from conservative to selection plan.
A technique that would enable a further increase of the statistical uncertainty is denoising with methods such as a median filter or deep convolutional neural networks (Neph et al 2021). This seems promising especially for electron beamlets, because the dose distribution of an electron beamlet is comparably smooth in all directions, i.e. there are no sharp dose changes over short distances, as is the case for photon beamlets with their steep penumbra. The results showed that a higher statistical uncertainty leads to less beamlet data and therefore reduced optimization memory usage. This is explained by the fact that MC calculated beamlets consider secondary particles leading to small dose values in a large volume of the patient, and that this volume increases with the number of simulated particles.
Summing beamlets together leads to a dose stacking effect, which is substantially affected when applying a high sparse dose threshold. Dosimetric characteristics of summed beamlets, such as the range in the depth dose curve, can drastically change. Hence, sparse reduction is a critical method. The PTV is generally less sensitively affected than OARs and normal tissue (see figure 5), because in the PTV the beamlets have rather high dose values. Nevertheless, even a sparse format with a low fractional sparse dose threshold decreases beamlet data and therefore optimization memory consumption drastically. This is prominent for MC calculated beamlets with small dose values in a large volume of the patient. Voxel merging, on the other hand, had nearly no impact on the final treatment plan quality but a very large influence on the optimization time. However, this might also be case specific, because depending on the case, OARs have different sizes and especially different distances to the PTV. The results of the voxel merging are very promising, such that it might even be worth introducing a third level of merged voxel size (e.g. 8 × 8 × 8 = 512 voxels).

Conclusion
This work successfully demonstrates several approaches improving the computational efficiency of beamlet calculation and plan optimization of a fully MC based TPP without substantial degradation in final plan quality. The absolute computation times shown in this work indicate that such a TPP with highly accurate dose calculations is clinically feasible. This eases the introduction of fully MC based TPPs and of novel treatment techniques and optimization strategies into the clinical workflow.

Figure 7. For each investigated case, DVH comparisons between the conservative plan with the most conservative parameter values and the selection plan with selected parameter values to achieve a reasonable compromise between degradation in final plan quality and computational efficiency.