Table of contents

Volume 67

Number 15, 7 August 2022


Topical Reviews

15TR01


In the artificial intelligence era, machine learning (ML) techniques have become increasingly important in the advanced analysis of medical images across several fields of modern medicine. Radiomics extracts a large number of medical imaging features that reveal key components of the tumor phenotype and can be linked to genomic pathways. The multi-dimensional nature of radiomics requires highly accurate and reliable machine-learning methods to create predictive models for classification or therapy response assessment.

Multi-parametric breast magnetic resonance imaging (MRI) is routinely used for dense breast imaging as well as for screening in high-risk patients, and has shown its potential to improve the clinical diagnosis of breast cancer. For this reason, the application of ML techniques to breast MRI, in particular to multi-parametric imaging, is rapidly expanding and is enhancing both diagnostic and prognostic power. In this review we focus on the recent literature on the use of ML in multi-parametric breast MRI for tumor classification and differentiation of molecular subtypes. Since different models and approaches have been employed for this task, we provide a detailed description of the advantages and drawbacks of each technique and a general overview of their performances.

15TR02
The following article is Open access


Helium ion beam therapy for the treatment of cancer was one of several particle treatments developed and studied in the 1950s, leading to clinical trials beginning in 1975 at the Lawrence Berkeley National Laboratory. The trial shutdown was followed by decades of research and clinical silence on the topic while proton and carbon ion therapy made their debuts at research facilities and academic hospitals worldwide. The lack of progress in understanding the principal facets of helium ion beam therapy in terms of physics, biological and clinical findings persists today, mainly attributable to its highly limited availability. Despite this major setback, there is an increasing focus on evaluating and establishing clinical and research programs using helium ion beams, with both therapy and imaging initiatives to supplement the clinical palette of radiotherapy in the treatment of aggressive disease and sensitive clinical cases. Moreover, due to its physical and radio-biological properties intermediate between proton and carbon ion beams, helium ions may provide a streamlined, economic stepping stone towards an era of widespread use of different particle species in light and heavy ion therapy. With respect to clinical proton beams, helium ions exhibit superior physical properties such as reduced lateral scattering and range straggling, with higher relative biological effectiveness (RBE) and dose-weighted linear energy transfer (LETd) ranging from ∼4 keV μm⁻¹ to ∼40 keV μm⁻¹. In the frame of heavy ion therapy using carbon, oxygen or neon ions, where LETd increases beyond 100 keV μm⁻¹, helium ions exhibit similar physical attributes such as a sharp lateral penumbra, however with reduced radio-biological uncertainties and without potentially spoiling dose distributions through the excess fragmentation of heavier ion beams, particularly at higher penetration depths. This roadmap presents an overview of the current state of the art and future directions of helium ion therapy: understanding physics and improving modeling, understanding biology and improving modeling, imaging techniques using helium ions, and refining and establishing clinical approaches and aims from experience learned with protons. These topics are organized and presented in three main sections, outlining current and future tasks in establishing clinical and research programs using helium ion beams: A. Physics, B. Biology and C. Clinical perspectives.

15TR03


Radiomics features extracted from medical images have been widely reported to be useful in patient-specific outcome modeling for a variety of assessment and prediction purposes. Successful application of radiomics features as imaging biomarkers, however, depends on the robustness of the approach to variation in each step of the modeling workflow. Variation in input image quality is one of the main sources that impacts the reproducibility of radiomics analysis when a model is applied to a broader range of medical imaging data. The quality of a medical image is generally affected by both scanner-related factors, such as image acquisition and reconstruction settings, and patient-related factors, such as patient motion. This article reviews the published literature in this field that reported the impact of various imaging factors on radiomics features through changes in image quality. The literature is categorized by imaging modality and also tabulated based on the imaging parameters and the class of radiomics features included in each study. Strategies for image quality standardization are discussed based on the relevant literature, and recommendations for reducing the impact of image quality variation on radiomics in multi-institutional clinical trials are summarized at the end of the article.

15TR04

Three dimensional (3D) printing technology has been widely evaluated for the fabrication of various anthropomorphic phantoms during the last couple of decades. The demand for such high quality phantoms is constantly rising and gaining ever-increasing interest. Although 3D printing has, in a short time, provided phantoms with more realistic features than previous conventional methods, several aspects remain to be explored. One of these is the further development of the current 3D printing methods and software devoted to radiological applications. Current 3D printing software and methods usually employ 3D models, whereas the direct association of medical images with the 3D printing process is needed to provide results of higher accuracy that are closer to the actual tissues' texture. Another aspect of high importance is the development of suitable printing materials. Ideally, those materials should be able to emulate the entire range of soft and bone tissues while still matching human anatomy. Five types of 3D printing methods have mainly been investigated so far: (a) solidification of photo-curing materials; (b) deposition of melted plastic materials; (c) printing paper-based phantoms with radiopaque ink; (d) melting or binding plastic powder; and (e) bio-printing. From the first and second categories, polymer jetting technology and fused filament fabrication (FFF), also known as fused deposition modelling (FDM), are the most promising technologies for fulfilling the requirements of realistic and radiologically equivalent anthropomorphic phantoms. Another interesting approach is the fabrication of radiopaque paper-based phantoms using inkjet printers. Although this may provide phantoms of high accuracy, the materials utilized during fabrication are restricted to inks doped with various contrast materials. A similar condition applies to polymer jetting technology which, despite being quite fast and very accurate, is restricted to materials capable of polymerization. The situation is better for FFF/FDM 3D printers, since various compositions of plastic filaments with external substances can be produced conveniently. Although the speed and accuracy of this 3D printing method are lower than those of the others, the relatively low cost, constantly improving resolution, sufficient printing volume and plethora of materials are quite promising for the creation of human-size heterogeneous phantoms and their adaptation to the treatment procedures of patients in current health systems.

Papers

155001


Objective. Reliable radionuclide production yield data are a prerequisite for positron-emission-tomography (PET) based in vivo proton treatment verification. In this context, activation data acquired at two different treatment facilities with different imaging systems were analyzed to provide experimentally determined radionuclide yields in thick targets, and were compared with each other to investigate the impact of the respective imaging technique. Approach. Homogeneous thick targets (PMMA, gelatine, and graphite) were irradiated with mono-energetic proton pencil-beams at two distinct energies. Material activation was measured (i) in-beam during and after beam delivery with a double-head prototype PET camera and (ii) offline shortly after beam delivery with a commercial full-ring PET/CT scanner. Integral as well as depth-resolved β⁺-emitter yields were determined for the dominant positron-emitting radionuclides ¹¹C, ¹⁵O, ¹³N and (in-beam only) ¹⁰C. In-beam data were used to investigate the qualitative impact of different monitoring time schemes on activity depth profiles and their quantitative impact on count rates and total activity. Main results. Production yields measured with the in-beam camera were comparable to or higher than the respective offline results. Depth profiles of radionuclide-specific yields obtained from the double-head camera showed qualitative differences from data acquired with the full-ring camera, with a more convex profile shape. A considerable impact of the imaging timing scheme on the activity profile was observed for gelatine only, with a range variation of up to 3.5 mm. Evaluation of the coincidence rate and the total number of observed events in the considered workflows confirmed a strongly decreasing rate in targets with a large oxygen fraction. Significance. The observed quantitative and qualitative differences between the datasets underline the importance of thorough system commissioning. Due to the lack of reliable cross-section data, in-house phantom measurements are still considered the gold standard for careful characterization of the system response and for ensuring reliable beam range verification.
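
For intuition on why the monitoring time scheme matters, note that the half-lives of the listed emitters span two orders of magnitude. A minimal sketch, assuming instantaneous production at beam-off (a simplification of a real irradiation) and standard half-life values:

```python
import numpy as np

# Approximate half-lives (s) of the dominant positron emitters named above.
HALF_LIFE = {"C-11": 20.36 * 60, "O-15": 122.2, "N-13": 9.97 * 60, "C-10": 19.3}

def window_fraction(nuclide: str, t_start: float, t_stop: float) -> float:
    """Fraction of the decays of an instantly produced nuclide that fall
    inside an imaging window [t_start, t_stop] after end of irradiation."""
    lam = np.log(2) / HALF_LIFE[nuclide]
    return np.exp(-lam * t_start) - np.exp(-lam * t_stop)

# In-beam-style window (0-120 s) versus a delayed offline window (300-1800 s).
for n in HALF_LIFE:
    print(f"{n}: in-beam {window_fraction(n, 0, 120):.3f}, "
          f"offline {window_fraction(n, 300, 1800):.3f}")
```

The short-lived ¹⁰C and most of the ¹⁵O signal are only accessible in-beam, consistent with ¹⁰C being reported for the in-beam camera only.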

155002
The following article is Open access


Objectives. Microdosimetry is proving to be a reliable and powerful tool for application in different fields such as radiobiology, radiation protection and hadron therapy. However, accepted standard protocols and codes of practice are still missing. In this regard, a systematic and methodical uncertainty analysis is fundamental to building an accredited uncertainty budget of practical use. This work studied the contribution of counting statistics (i.e. the number of events collected) to the final frequency-mean and dose-mean lineal energy uncertainties, aiming to provide guidelines for good experimental and simulation practice. The practical limitations of current technologies and the non-negligible probability of nuclear reactions require careful consideration and nonlinear approaches. Approach. Microdosimetric data were obtained by means of the particle-tracking Monte Carlo code Geant4. The uncertainty analysis was carried out using a Monte Carlo based numerical analysis, as suggested by the BIPM's 'Guide to the expression of uncertainty in measurement'. Final uncertainties were systematically investigated for proton, helium and carbon ions at an increasing number of detected events, for a range of clinically relevant beam energies. Main results. Rare events generated by nuclear interactions in the detector sensitive volume were found to massively degrade microdosimetric uncertainties unless very high statistics are collected. The study showed an increasing impact of such events for increasing beam energy and lighter ions. For instance, in the entrance region of a 250 MeV proton beam, about 5 × 10⁷ events need to be collected to obtain a dose-mean lineal energy uncertainty below 10%. Significance. The results of this study help define the conditions necessary to achieve appropriate statistics in computational microdosimetry, pointing out the importance of properly taking into account nuclear interaction events. Their impact on microdosimetric quantities and on their uncertainty is significant and cannot be overlooked, particularly when characterising clinical beams and radiobiological response. This work prepared the ground for deeper investigations involving dedicated experiments and for the development of a method to properly evaluate the counting-statistics uncertainty contribution in the uncertainty budget, whose accuracy is fundamental for the clinical transition of microdosimetry.
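
For reference, the two quantities studied are the frequency-mean lineal energy y_F = (1/N) Σ y_i and the dose-mean lineal energy y_D = Σ y_i² / Σ y_i. A minimal sketch of how rare high-y events (e.g. from nuclear reactions) dominate the counting-statistics uncertainty of y_D, using bootstrap resampling of a toy spectrum as a stand-in for the paper's Monte Carlo based analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def lineal_energy_means(y):
    """Frequency-mean y_F and dose-mean y_D from single-event lineal energies."""
    y = np.asarray(y, dtype=float)
    return y.mean(), (y ** 2).sum() / y.sum()

def bootstrap_sigma(y, n_boot=1000):
    """Counting-statistics uncertainty of (y_F, y_D) by resampling events."""
    stats = [lineal_energy_means(rng.choice(y, size=y.size, replace=True))
             for _ in range(n_boot)]
    return np.std(stats, axis=0)

# Toy spectrum: many low-y events plus rare high-y 'nuclear reaction' events.
y = np.concatenate([rng.gamma(2.0, 1.0, 10_000),   # bulk, ~2 keV/um
                    rng.gamma(5.0, 30.0, 10)])     # rare, ~150 keV/um
y_f, y_d = lineal_energy_means(y)
s_f, s_d = bootstrap_sigma(y)
print(f"y_F = {y_f:.2f} +/- {s_f:.2f} keV/um, y_D = {y_d:.2f} +/- {s_d:.2f} keV/um")
```

The relative uncertainty of y_D comes out far larger than that of y_F because the squared term is dominated by the few rare events.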

155003


Objective. Photon-counting CT (PCCT) has better dose efficiency and spectral resolution than energy-integrating CT, which is advantageous for material decomposition. Unfortunately, the accuracy of PCCT-based material decomposition is limited by spectral distortions in the photon-counting detector (PCD). Approach. In this work, we demonstrate a deep learning (DL) approach that compensates for spectral distortions in the PCD and improves the accuracy of material decomposition by using decomposition maps provided by high-dose multi-energy-integrating detector (EID) data as training labels. We use a 3D U-net architecture and compare networks with PCD filtered back projection (FBP) reconstruction (FBP2Decomp), PCD iterative reconstruction (Iter2Decomp), and PCD decomposition (Decomp2Decomp) as the input. Main results. We found that our Iter2Decomp approach performs best, but DL outperforms matrix-inversion decomposition regardless of the input. Compared to PCD matrix-inversion decomposition, Iter2Decomp gives 27.50% lower root mean squared error (RMSE) in the iodine (I) map and 59.87% lower RMSE in the photoelectric effect (PE) map. In addition, it increases the structural similarity (SSIM) by 1.92%, 6.05%, and 9.33% in the I, Compton scattering (CS), and PE maps, respectively. When taking measurements from iodine and calcium vials, Iter2Decomp provides excellent agreement with multi-EID decomposition. One limitation is some blurring caused by our DL approach, with a decrease from 1.98 line pairs/mm at 50% modulation transfer function (MTF) with PCD matrix-inversion decomposition to 1.75 line pairs/mm at 50% MTF with Iter2Decomp. Significance. Overall, this work demonstrates that our DL approach with high-dose multi-EID derived decomposition labels is effective at generating more accurate material maps from PCD data. More accurate preclinical spectral PCCT imaging such as this could support the development of nanoparticles that show promise in the field of theranostics (therapy and diagnostics).
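
The matrix-inversion decomposition used as the baseline above amounts to solving a small linear system that maps per-energy-bin attenuation to basis-material coefficients. A minimal least-squares sketch; the basis values below are placeholders, not a calibrated detector model:

```python
import numpy as np

# Hypothetical per-bin basis attenuation for two basis materials
# (e.g. iodine and a photoelectric-like basis); placeholder numbers.
F = np.array([[0.40, 0.22],
              [0.31, 0.12],
              [0.25, 0.07],
              [0.21, 0.04]])   # 4 energy bins x 2 basis materials

def decompose(mu_bins):
    """Least-squares ('matrix inversion') decomposition of per-bin
    attenuation values into basis-material coefficients."""
    coeffs, *_ = np.linalg.lstsq(F, mu_bins, rcond=None)
    return coeffs

true_coeffs = np.array([1.5, 0.8])
mu = F @ true_coeffs + np.random.default_rng(1).normal(0, 0.01, 4)  # noisy bins
print(decompose(mu))  # ~ [1.5, 0.8]
```

Spectral distortions in the PCD effectively corrupt F, which is the error source the DL mapping is trained to compensate.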

155004


Objective. Soft tissue phase aberration may be particularly severe for histotripsy due to large aperture and low f-number transducer geometries. This study investigated how phase aberration from human abdominal tissue affects focusing of a large, strongly curved histotripsy transducer. Approach. A computational model (k-Wave) was experimentally validated with ex vivo porcine abdominal tissue and used to simulate focusing a histotripsy transducer (radius: 14.2 cm, f-number: 0.62, central frequency fc: 750 kHz) through the human abdomen. Abdominal computed tomography images from 10 human subjects were segmented to create three-dimensional acoustic property maps. Simulations were performed focusing at 3 target locations in the liver of each subject with ideal phase correction, without phase correction, and after separately matching the sound speed of water and fat to non-fat soft tissue. Main results. Experimental validation in porcine abdominal tissue showed that simulated and measured arrival time differences agreed well (average error, ∼0.10 acoustic cycles at fc). In simulations with human tissue, aberration created arrival time differences of 0.65 μs (∼0.5 cycles) at the target and shifted the focus from the target by 6.8 mm (6.4 mm pre-focally along depth direction), on average. Ideal phase correction increased maximum pressure amplitude by 95%, on average. Matching the sound speed of water and fat to non-fat soft tissue decreased the average pre-focal shift by 3.6 and 0.5 mm and increased pressure amplitude by 2% and 69%, respectively. Significance. Soft tissue phase aberration of large aperture, low f-number histotripsy transducers is substantial despite low therapeutic frequencies. Phase correction could potentially recover substantial pressure amplitude for transabdominal histotripsy. Additionally, different heterogeneity sources distinctly affect focusing quality. The water path strongly affects the focal shift, while irregular tissue boundaries (e.g. fat) dominate pressure loss.

155005


Objective. This study aimed to produce a three-dimensional liver elasticity map using the finite element method (FEM) and respiration-induced motion captured by T1-weighted magnetic resonance images (FEM-E-map) and to evaluate whether FEM-E-maps can be an imaging biomarker comparable to magnetic resonance elastography (MRE) for assessing the distribution and severity of liver fibrosis. Approach. We enrolled 14 patients who underwent MRI and MRE. T1-weighted MR images were acquired during shallow inspiration and expiration breath-holding, and the displacement vector field (DVF) between two images was calculated using deformable image registration. FEM-E-maps were constructed using FEM and DVF. First, three Poisson's ratio settings (0.45, 0.49, and 0.499995) were validated and optimized to minimize the difference in liver elasticity between the FEM-E-map and MRE. Then, the whole and regional liver elasticity values estimated using FEM-E-maps were compared with those obtained from MRE using Pearson's correlation coefficients. Spearman rank correlations and chi-square histograms were used to compare the voxel-level elasticity distribution. Main results. The optimal Poisson's ratio was 0.49. Whole liver elasticity estimated using FEM-E-maps was strongly correlated with that measured using MRE (r = 0.96). For regional liver elasticity, the correlation was 0.84 for the right lobe and 0.82 for the left lobe. Spearman analysis revealed a moderate correlation for the voxel-level elasticity distribution between FEM-E-maps and MRE (0.61 ± 0.10). The small chi-square distances between the two histograms (0.11 ± 0.07) indicated good agreement. Significance. FEM-E-maps represent a potential imaging biomarker for visualizing the distribution of liver fibrosis using only T1-weighted images obtained with a common MR scanner, without any additional examination or special elastography equipment. However, additional studies including comparisons with biopsy findings are required to verify the reliability of this method for clinical application.

155006


Objective. Glioma is one of the most fatal cancers in the world; it is divided into low grade glioma (LGG) and high grade glioma (HGG), and its image-based grading has become a hot topic of contemporary research. Magnetic resonance imaging (MRI) is a vital diagnostic tool for brain tumor detection, analysis, and surgical planning. Accurate and automatic glioma grading is crucial for speeding up diagnosis and treatment planning. To address the problems of current deep learning based glioma grading algorithms, namely (1) a large number of parameters, (2) complex calculation, and (3) poor speed, this paper proposes a lightweight 3D UNet deep learning framework that improves classification accuracy in comparison with existing methods. Approach. To improve efficiency while maintaining accuracy, the existing 3D UNet has been restructured: depthwise separable convolution is applied to the 3D convolutions to reduce the number of network parameters, and a spatial and channel squeeze & excitation module is used to reweight the feature maps, reducing the influence of redundant parameters and strengthening the performance of the model. Main results. A total of 560 patients with glioma were retrospectively reviewed. All patients underwent MRI before surgery. The experiments were carried out on T1w, T2w, fluid attenuated inversion recovery, and CET1w images. Additionally, a way of marking the tumor area with a cube bounding box is presented, which shows no significant difference in model performance compared with the manually drawn ground truth. Evaluated on test datasets, the proposed model has shown good results (accuracy of 89.29%). Significance. This work achieves LGG/HGG grading with a simple, effective, and non-invasive diagnostic approach to provide diagnostic suggestions for clinical usage, thereby helping to hasten treatment decisions.
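
A minimal PyTorch sketch of the depthwise separable 3D convolution named above; the channel sizes are illustrative, and this is not the paper's full network:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Per-channel (depthwise) 3x3x3 convolution followed by a 1x1x1
    pointwise convolution. Weight count drops from c_in*c_out*27 to
    roughly c_in*27 + c_in*c_out."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.depthwise = nn.Conv3d(c_in, c_in, kernel_size=3,
                                   padding=1, groups=c_in)
        self.pointwise = nn.Conv3d(c_in, c_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 16, 32, 32, 32)                 # (batch, channels, D, H, W)
print(DepthwiseSeparableConv3d(16, 32)(x).shape)   # [1, 32, 32, 32, 32]
```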

155007


Objective. Sparse-view computed tomography (CT) reconstruction has been at the forefront of research in medical imaging. Reducing the total x-ray radiation dose to the patient while preserving reconstruction accuracy is a major challenge. The sparse-view approach is based on reducing the number of rotation angles, which leads to poor quality reconstructed images because it introduces several artifacts. These artifacts are most clearly visible with traditional reconstruction methods such as the filtered-backprojection (FBP) algorithm. Approach. Over the years, several model-based iterative and, more recently, deep learning-based methods have been proposed to improve sparse-view CT reconstruction. Many deep learning-based methods improve FBP-reconstructed images as a post-processing step. In this work, we propose a direct deep learning-based reconstruction that exploits the information from low-dimensional scout images to learn the projection-to-image mapping. This is done by concatenating FBP scout images at multiple resolutions in the decoder part of a convolutional encoder–decoder (CED). Main results. This approach is investigated on two different networks, based on Dense Blocks and U-Net, to show that a direct mapping can be learned from a sinogram to an image. The results are compared to two post-processing deep learning methods (FBP-ConvNet and DD-Net) and an iterative method that uses total variation (TV) regularization. Significance. This work presents a novel method that uses information from both the sinogram and low-resolution scout images for sparse-view CT image reconstruction. We also generalize this idea by demonstrating results with two different neural networks. This work is in the direction of exploring deep learning across the various stages of the image reconstruction pipeline, involving data correction, domain transfer and image improvement.

155008


Objective. Material decomposition (MD) evaluates the elemental composition of human tissues and organs via computed tomography (CT) and is indispensable in correlating anatomical images with functional ones. A major issue in MD is inaccurate elemental information about the real human body. To overcome this problem, we developed a virtual CT system model by which various reconstructed images can be generated based on ICRP110 human phantoms with information about six major elements (H, C, N, O, P, and Ca). Approach. We generated CT datasets labelled with accurate elemental information using the proposed generative CT model and trained a deep learning (DL) based model to estimate the material distribution with the ICRP110 based human phantom as well as the digital Shepp–Logan phantom. The accuracy in quad-, dual-, and single-energy CT cases was investigated. The influence of beam-hardening artefacts, noise, and spectrum variations was analysed with testing datasets including elemental density and anatomical shape variations. Main results. The results indicate that this DL approach can realise precise MD, even with single-energy CT images. Moreover, noise, beam-hardening artefacts, and spectrum variations were shown to have minimal impact on the MD. Significance. These results suggest that the difficulty of preparing a large CT database can be solved by introducing the virtual CT system, and that the proposed technique can be applied to clinical radiodiagnosis and radiotherapy.

155009


Objective. To demonstrate the benefits of a joint image reconstruction algorithm, based on List-Mode Maximum Likelihood Expectation Maximization, that combines events measured in different channels of information of a Compton camera. Approach. Both simulations and experimental data are employed to show the algorithm's performance. Main results. The joint images obtained present improved image quality and yield better estimates of the displacements of high-energy gamma-ray emitting sources. The algorithm also provides images that are more stable than any individual channel against the noisy convergence that characterizes Maximum Likelihood based algorithms. Significance. The joint reconstruction algorithm can improve the quality and robustness of Compton camera images. It is also highly versatile, as it can easily be adapted to any Compton camera geometry. It is thus expected to represent an important step in the optimization of Compton camera imaging.
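
The core of any MLEM-type reconstruction is the multiplicative update λ ← (λ / Aᵀ1) ⊙ Aᵀ(y / Aλ). A toy dense-matrix sketch with binned data for brevity; a joint reconstruction in the spirit described above would stack the system matrices and measured data of the individual Compton-camera channels row-wise:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Basic MLEM for a linear Poisson model y ~ Poisson(A @ lam).
    For a joint reconstruction, A and y are the row-wise concatenation
    of the per-channel system matrices and measurements."""
    sens = A.sum(axis=0)                # sensitivity image A^T 1
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ lam
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        lam *= (A.T @ ratio) / sens
    return lam

rng = np.random.default_rng(0)
A = rng.random((200, 40))               # toy system matrix
truth = np.zeros(40); truth[15:25] = 10.0
y = rng.poisson(A @ truth).astype(float)
print(np.round(mlem(A, y)[12:28], 1))   # activity recovered around bins 15-24
```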

155010


Objective. An ultrasound-based system capable of both imaging thrombi against a dark field and performing quantitative elastometry could allow for fast and cost-effective thrombosis diagnosis, staging, and treatment monitoring. This study investigates a contrast-enhanced approach for measuring the Young's moduli of thrombus-mimicking phantoms. Approach. Magnetomotive ultrasound (MMUS) has shown promise for lending specific contrast to thrombi by applying a temporally modulated force to magnetic nanoparticle (MNP) contrast agents and measuring resulting tissue displacements. However, quantitative elastometry has not yet been demonstrated in MMUS, largely due to difficulties inherent in measuring applied magnetic forces and MNP densities. To avoid these issues, in this work magnetomotive resonant acoustic spectroscopy (MRAS) is demonstrated for the first time in ultrasound. Main results. The resonance frequencies of gelatin thrombus-mimicking phantoms are shown to agree within one standard deviation with finite element simulations over a range of phantom sizes and Young's moduli with less than 16% error. Then, in a proof-of-concept study, the Young's moduli of three phantoms are measured using MRAS and are shown to agree with independent compression testing results. Significance. The MRAS results were sufficiently precise to differentiate between thrombus phantoms with clinically relevant Young's moduli. These findings demonstrate that MRAS has potential for thrombus staging.

155011


Objective. This paper proposes a novel approach for the longitudinal registration of PET images acquired for the monitoring of patients with metastatic breast cancer. Unlike other image analysis tasks, image registration has not seen its performance significantly improved by deep learning (DL). With this work, we propose a new registration approach to bridge the performance gap between conventional and DL-based methods: medical image registration method regularized by architecture (MIRRBA). Approach. MIRRBA is a subject-specific deformable registration method which relies on a deep pyramidal architecture to parametrize the deformation field. Diverging from the usual deep-learning paradigms, MIRRBA does not require a learning database, but only the pair of images to be registered, which is used to optimize the network's parameters. We applied MIRRBA to a private dataset of 110 whole-body PET images of patients with metastatic breast cancer. We used different architecture configurations to produce the deformation field and studied the results obtained. We also compared our method to several standard registration approaches: two conventional iterative registration methods (ANTs and Elastix) and two supervised DL-based models (LapIRN and Voxelmorph). Registration accuracy was evaluated using the Dice score, the target registration error, the average Hausdorff distance and the detection rate, while the realism of the obtained registration was evaluated using the determinant of the Jacobian. The ability of the different methods to shrink disappearing lesions was also quantified with the disappearing rate. Main results. MIRRBA significantly improved all metrics when compared to the DL-based approaches. The organ and lesion Dice scores of Voxelmorph improved by 6% and 52% respectively, while those of LapIRN increased by 5% and 65%. Compared with the conventional approaches, MIRRBA presented comparable results, showing the feasibility of our method. Significance. In this paper, we also demonstrate the regularizing power of deep architectures and present new elements for understanding the role of the architecture in DL methods used for registration.

155012
The following article is Open access


Objective. Online monitoring of the dose distribution in proton therapy is currently being investigated through the detection of prompt gamma (PG) radiation emitted from the patient during irradiation. The SiPM and scintillation Fiber based Compton Camera (SiFi-CC) setup is being developed for this aim. Approach. A machine learning approach to recognize Compton events is proposed for reconstructing the PG emission profile during proton therapy. The proposed method was verified on pseudo-data generated by a Geant4 simulation of a single proton beam impinging on a polymethyl methacrylate (PMMA) phantom. Three different models, a boosted decision tree (BDT), a multilayer perceptron (MLP) neural network, and k-nearest neighbours (k-NN), were trained using 10-fold cross-validation, and their performances were then assessed using receiver operating characteristic (ROC) curves. Subsequently, after event selection by the most robust model, software based on the List-Mode Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm was applied for the reconstruction of the PG emission distribution profile. Main results. It was demonstrated that the BDT model excels in signal/background separation compared to the other two. Furthermore, the reconstructed PG vertex distribution after event selection showed a significant improvement in the determination of the distal falloff position. Significance. A highly satisfactory agreement between the reconstructed distal edge position and that of the simulated Compton events was achieved. It was also shown that a position resolution of 3.5 mm full width at half maximum (FWHM) in distal edge position determination is feasible with the proposed setup.
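
A minimal scikit-learn sketch of the three-model comparison described above, with synthetic stand-in features in place of the simulated Compton-event observables:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for event features (deposited energies, interaction positions, ...).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "BDT":  GradientBoostingClassifier(),
    "MLP":  MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500),
    "k-NN": KNeighborsClassifier(n_neighbors=15),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")  # 10-fold CV
    print(f"{name}: ROC AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```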

155013


Objective. Soft-tissue sarcoma spreads preferentially along muscle fibers. We explore the utility of deriving muscle fiber orientations from diffusion tensor MRI (DT-MRI) for defining the boundary of the clinical target volume (CTV) in muscle tissue. Approach. We recruited eight healthy volunteers to acquire MR images of the left and right thigh. The imaging session consisted of (a) two MRI spin-echo-based scans, T1- and T2-weighted; (b) a diffusion weighted (DW) spin-echo-based scan using an echo planar acquisition with fat suppression. The thigh muscles were auto-segmented using a convolutional neural network. DT-MRI data were used as a geometry-encoding input to solve the anisotropic Eikonal equation with the Hamiltonian Fast-Marching method. The isosurfaces of the solution modeled the CTV boundary. Main results. The auto-segmented muscles of the thigh agreed with manual delineations, with Dice scores ranging from 0.8 to 0.94 for the different muscles. To validate our method of deriving muscle fiber orientations, we compared the anisotropy of the isosurfaces across muscles with different anatomical orientations within a thigh, between muscles in the left and right thighs of each subject, and between different subjects. The fiber orientations were identified reproducibly across all comparisons. We identified two controlling parameters, the distance from the gross tumor volume to the isosurface and the eigenvalue ratio, for tailoring the proposed CTV to the satisfaction of the clinician. Significance. Our feasibility study with healthy volunteers shows the promise of using muscle fiber orientations derived from DW MRI data for the automated generation of an anisotropic CTV boundary in soft tissue sarcoma. Our contribution is significant as it serves as a proof of principle for combining DT-MRI information with tumor spread modeling, in contrast to using moderately informative 2D CT planes for CTV delineation. Such improvements will positively impact cancer centers with a small volume of sarcoma patients.
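
A common statement of the anisotropic Eikonal problem referenced above is given below; exactly how the DT-MRI tensor enters the metric (and its scaling by the eigenvalue ratio) follows the paper and is only assumed here:

```latex
% T(x): arrival time from the GTV surface; D(x): DT-MRI derived tensor
% encoding the local fiber orientation. The CTV boundary is an isosurface of T.
\sqrt{\nabla T(\mathbf{x})^{\top}\, D(\mathbf{x})\, \nabla T(\mathbf{x})} = 1,
\qquad T\big|_{\partial\mathrm{GTV}} = 0
```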

155014


Objective. The overarching objective is to make the definition of the clinical target volume (CTV) in radiation oncology less subjective and more scientifically based. The specific objective of this study is to investigate similarities and differences between two methods that model tumor spread beyond the visible gross tumor volume (GTV): (1) the shortest path model, which is the standard method of adding a geometric GTV-CTV margin, and (2) the reaction-diffusion model. Approach. These two models to capture the invisible tumor 'fire front' are defined and compared in mathematical terms. The models are applied to example cases that represent tumor spread in non-uniform and anisotropic media with anatomical barriers. Main results. The two seemingly disparate models bring forth traveling waves that can be associated with the front of tumor growth outward from the GTV. The shape of the fronts is similar for both models. Differences are seen in cases where the diffusive flow is reduced due to anatomical barriers, and in complex spatially non-uniform cases. The diffusion model generally leads to smoother fronts. The smoothness can be controlled with a parameter defined by the ratio of the diffusion coefficient and the proliferation rate. Significance. Defining the CTV has been described as the weakest link of the radiotherapy chain. There are many similarities in the mathematical description and the behavior of the common geometric GTV-CTV expansion method, and the definition of the CTV tumor front via the reaction-diffusion model. Its mechanistic basis and the controllable smoothness make the diffusion model an attractive alternative to the standard GTV-CTV margin model.
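
The reaction-diffusion model referred to above is commonly written in Fisher–KPP form; a standard statement (the paper's exact parametrization is not reproduced here):

```latex
% u: normalized tumor cell density, D: diffusion coefficient (or tensor),
% rho: proliferation rate.
\frac{\partial u}{\partial t}
  = \nabla \cdot \big( D(\mathbf{x})\, \nabla u \big) + \rho\, u (1 - u)
% In a uniform medium the front travels at speed v = 2\sqrt{D\rho}, with a
% width proportional to \sqrt{D/\rho}, the diffusion-to-proliferation ratio
% that controls the smoothness of the front.
```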

155015


Objective. This study aims to quantify the lifetime attributable risk of secondary fatal cancer (LARFAC) to patients receiving adjuvant radiotherapy for thymoma, a neoplasm where cure rates and life expectancy are relatively high, patient age at presentation relatively low, and indications for radiotherapy controversial depending on the disease stage. Approach. An anthropomorphic phantom was scanned, organs were contoured, and a standard 6 MV 3DCRT treatment plan was produced for thymoma treatment. The phantom was loaded with thermoluminescent dosimeters (TLDs) and treated by a linear accelerator per the plan. The TLDs were subsequently read out for the out-of-field dose distribution, while the in-field dose distribution was obtained from the planning system. Sex- and age-specific lifetime radiogenic cancer risk was calculated as the sum of in-field risk and out-of-field risk. The latter was estimated using hybrid ICRP 103 (2007)-BEIR VII tables of organ-specific risks based on the linear no-threshold (LNT) model and applicable at low doses, while the former was estimated using mathematical risk models applicable at high doses. Main results. The LARFAC associated with a prescribed dose of 50 Gy to the target volume in 25 fractions was in the approximate range of 1%–3%. The risk was higher for young and female patients. By far the largest contributor to this risk was the lungs. Using the LNT model inappropriately to calculate risk at therapeutic (in-field) doses would overestimate the risk up to tenfold. Significance. The LARFAC to the patient from thymoma radiotherapy was quantified taking into consideration the inapplicability of the LNT model at therapeutic doses. The risk is not negligible; this information may be relevant to patients and clinicians.

155016


Objective. To develop and evaluate a deep learning based fast volumetric modulated arc therapy (VMAT) plan generation method for prostate radiotherapy. Approach. A customized 3D U-Net was trained and validated to predict initial segments at 90 evenly distributed control points of an arc, linked to our research treatment planning system (TPS) for segment shape optimization (SSO) and segment weight optimization (SWO). For 27 test patients, the VMAT plans generated based on the deep learning prediction (VMATDL) were compared with VMAT plans generated with a previously validated automated treatment planning method (VMATref). For all test cases, the deep learning prediction accuracy, plan dosimetric quality, and planning efficiency were quantified and analyzed. Main results. For all 27 test cases, the resulting plans were clinically acceptable. The V95% for the PTV2 was greater than 99%, and the V107% was below 0.2%. No statistically significant difference in target coverage was observed between the VMATref and VMATDL plans (P = 0.3243 > 0.05). The dose sparing effect on the OARs was similar between the two groups of plans; small differences were only observed in the Dmean of the rectum and anus. Compared to VMATref, VMATDL reduced the optimization time by 29.3% on average. Significance. A fully automated VMAT plan generation method may result in a significant improvement in prostate treatment planning efficiency. Given the clinically acceptable dosimetric quality and high efficiency, it could potentially be used for clinical planning and real-time adaptive therapy applications after further validation.

155017


Objective. Accurate segmentation of the pancreas from abdominal CT scans is highly desired for the diagnosis and treatment follow-up of pancreatic diseases. However, the task is challenged by large anatomical variations, low soft-tissue contrast, and the difficulty of acquiring a large set of annotated volumetric images for training. To overcome these problems, we propose a new segmentation network and a semi-supervised learning framework to alleviate the lack of annotated images and improve segmentation accuracy. Approach. In this paper, we propose a novel graph-enhanced pancreas segmentation network (GEPS-Net) and incorporate it into a semi-supervised learning framework based on iterative uncertainty-guided pseudo-label refinement. Our GEPS-Net plugs a graph enhancement module on top of the CNN-based U-Net to focus on spatial relationship information. For semi-supervised learning, we introduce an iterative uncertainty-guided refinement process that updates pseudo labels by removing low-quality and incorrect regions. Main results. Our method was evaluated on a public dataset with four-fold cross-validation and achieved a Dice coefficient (DC) of 84.22%, an improvement of 5.78% over the baseline. Further, the overall performance of our proposed method was the best compared with other semi-supervised methods trained with only 6 or 12 labeled volumes. Significance. The proposed method improved the segmentation performance of the pancreas in CT images under the semi-supervised setting. It will assist doctors in early screening and making accurate diagnoses, as well as in adaptive radiotherapy.

155018


Objective. In this paper, we focus on the dielectric and mechanical characterization of tissue-mimicking breast phantoms. Approach. Starting from recipes previously proposed by our research group, based on easy-to-handle, cheap and safe components (i.e. sunflower oil, deionized water, dishwashing liquid and gelatin), we produced and tested, both dielectrically and mechanically, more than 100 samples. The dielectric properties were measured from 500 MHz to 14 GHz, the Cole–Cole parameters were derived to describe the dielectric behaviour over a broader frequency range, and the results were compared with the dielectric properties of ex vivo human breast tissues up to 50 GHz. The macroscale mechanical properties were measured by means of unconfined compression tests, and the impact of the experimental conditions (i.e. preload and test speed) on the measured Young's moduli was analysed. In addition, the mechanical contrast between healthy- and malignant-tissue-like phantoms was evaluated. Main results. The results agree with the literature in the cases in which the experimental conditions are known, demonstrating the possibility of fabricating phantoms able to mimic both the dielectric and mechanical properties of breast tissues. Significance. In this work, for the first time, a range of materials reproducing all the categories of breast tissues was experimentally characterized from both a dielectric and a mechanical point of view. A large range of frequencies was considered for the dielectric measurements, and several combinations of experimental conditions were investigated in the context of the mechanical characterization. The proposed results can be useful in the design and testing of complementary or supplementary techniques for breast cancer detection based on micro/millimetre waves, possibly in connection with other imaging modalities.
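
The Cole–Cole model mentioned above gives the complex relative permittivity as ε(ω) = ε∞ + Δε / (1 + (jωτ)^(1−α)) + σs/(jωε0). A minimal single-pole sketch; the parameter values are placeholders of a plausible order of magnitude for a high-water-content mimic, not the fitted values from this work:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s):
    """Single-pole Cole-Cole complex relative permittivity."""
    w = 2 * np.pi * np.asarray(freq_hz)
    return (eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
            + sigma_s / (1j * w * EPS0))

f = np.array([0.5e9, 5e9, 14e9])  # the measured band reported above
eps = cole_cole(f, eps_inf=4.0, d_eps=40.0, tau=10e-12, alpha=0.1, sigma_s=0.7)
print(eps.real)                            # relative permittivity
print(-eps.imag * 2 * np.pi * f * EPS0)    # effective conductivity (S/m)
```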

155019


Small field dosimetry differs significantly from the dosimetry of broad beams due to the loss of electron side-scatter equilibrium, source occlusion, and effects related to the choice of detector. However, the use of small fields is increasing with the growing number of indications for intensity-modulated radiation therapy and stereotactic body radiation therapy, and thus the need for accurate dosimetry is ever more important. Here we propose to leverage machine learning (ML) strategies to reduce the uncertainties and increase the accuracy in determining small field output factors (OFs). Linac OFs from a Varian TrueBeam STx were either calculated by the treatment planning system (TPS) or measured with a W1 scintillator detector at various multi-leaf collimator (MLC) positions, jaw positions, and with and without the contribution from leaf-end transmission. The fields were defined by the MLCs with the jaws at various positions. Field sizes between 5 and 100 mm were evaluated. Separate ML regression models were generated based on the TPS-calculated and the measured datasets. Accurate predictions of small field OFs at different field sizes (FSs) were achieved independent of jaw and MLC position. The best-performing models based on the measured datasets showed mean and maximum relative errors of 0.38 ± 0.39% and 3.62%, respectively. The prediction accuracy was independent of the contribution from leaf-end transmission. Several ML models for predicting small field OFs were generated, validated, and tested. Incorporating these models into the dose calculation workflow could greatly increase the accuracy and robustness of dose calculations for any radiotherapy delivery technique that relies heavily on small fields.
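
A minimal sketch of an OF regression model of the kind described above; the field-geometry features and the OF target below are synthetic stand-ins for the TPS-calculated or W1-measured data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features: MLC-defined field size (mm), jaw opening (mm),
# and a leaf-end-transmission flag; synthetic OF target for illustration.
n = 500
fs = rng.uniform(5, 100, n)
jaw = fs + rng.uniform(0, 20, n)
leaf = rng.integers(0, 2, n).astype(float)
of = 1 - 0.4 * np.exp(-fs / 12) + 0.002 * leaf + rng.normal(0, 0.002, n)

X = np.column_stack([fs, jaw, leaf])
X_tr, X_te, y_tr, y_te = train_test_split(X, of, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rel_err = 100 * np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean |rel. error| = {rel_err.mean():.2f}%, max = {rel_err.max():.2f}%")
```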

155020


Objective. Complete time-of-flight (TOF) sinograms of state-of-the-art TOF PET scanners have a large memory footprint. Currently, they contain ∼4 × 10⁹ data bins, which amount to ∼17 GB in 32 bit floating point precision. Moreover, their size will continue to increase with advances in the achievable detector TOF resolution and increases in the axial field of view. Using iterative algorithms to reconstruct such enormous TOF sinograms becomes increasingly challenging due to the memory requirements and the computation time needed to evaluate the forward model for every data bin. This is especially true for more advanced optimization algorithms such as the stochastic primal-dual hybrid gradient (SPDHG) algorithm, which allows the use of non-smooth priors for regularization using subsets with guaranteed convergence. SPDHG requires the storage of additional sinograms in memory, which severely limits its application to data sets from state-of-the-art TOF PET systems on conventional computing hardware. Approach. Motivated by the generally sparse nature of TOF sinograms, we propose and analyze a new listmode (LM) extension of the SPDHG algorithm for image reconstruction of sparse data following a Poisson distribution. The new algorithm is evaluated on realistic 2D and 3D simulations, and on a real data set acquired on a state-of-the-art TOF PET/CT system. The performance of the newly proposed LM-SPDHG algorithm is compared against the conventional sinogram SPDHG and the listmode EM-TV algorithm. Main results. We show that the speed of convergence of the proposed LM-SPDHG is equivalent to that of the original SPDHG operating on binned data (TOF sinograms). However, we find that for a TOF PET system with 400 ps TOF resolution and a 25 cm axial FOV, the proposed LM-SPDHG reduces the required memory from approximately 56 GB to 0.7 GB for a short dynamic frame with 10⁷ prompt coincidences, and to 12.4 GB for a long static acquisition with 5 × 10⁸ prompt coincidences. Significance. In contrast to SPDHG, the reduced memory requirements of LM-SPDHG enable a pure GPU implementation on state-of-the-art GPUs—avoiding memory transfers between host and GPU—which will substantially accelerate reconstruction times. This in turn will allow the application of LM-SPDHG in routine clinical practice, where short reconstruction times are crucial.
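
The memory argument can be checked with back-of-the-envelope arithmetic; the bytes-per-event figure below is an assumption for illustration, not the event format of the scanner used in the paper:

```python
# Binned TOF sinogram versus listmode storage.
n_bins = 4.0e9      # TOF sinogram bins (~16 GB at 32-bit float, cf. ~17 GB above)
n_events = 1.0e7    # prompt coincidences in a short dynamic frame

sinogram_gb = n_bins * 4 / 1e9          # one float32 per bin
listmode_gb = n_events * 4 * 4 / 1e9    # assume four 32-bit words per event
print(f"sinogram: {sinogram_gb:.0f} GB, listmode: {listmode_gb:.2f} GB")
# Listmode memory scales with the number of counts, not with the scanner
# geometry or the TOF resolution.
```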

155021
The following article is Open access


Organ-specific PET scanners have been developed to provide both high spatial resolution and sensitivity, although the deployment of several dedicated PET scanners at the same center is costly and space-consuming. Active-PET is a multifunctional PET scanner design exploiting the advantages of two different types of detector modules and mechanical arm mechanisms enabling repositioning of the detectors to allow the implementation of different geometries/configurations. Active-PET can be used for different applications, including brain, axilla, breast, prostate, whole-body, preclinical and pediatric imaging, cell tracking, and image guidance for therapy. Monte Carlo techniques were used to simulate a PET scanner with two sets of high-resolution and high-sensitivity pixelated lutetium oxyorthosilicate (LSO(Ce)) detector blocks (24 for each group, 48 detector modules in total for each ring), one with large pixel size (4 × 4 mm²) and crystal thickness (20 mm), and another with small pixel size (2 × 2 mm²) and thickness (10 mm). Each row of detector modules is connected to a linear motor that can displace the detectors forward and backward along the radial axis to achieve a variable gantry diameter, in order to image the target subject at the optimal/desired resolution and/or sensitivity. At the center of the field of view, the highest sensitivity (15.98 kcps MBq⁻¹) was achieved by the scanner with a small gantry and high-sensitivity detectors, while the best spatial resolution was obtained by the scanner with a small gantry and high-resolution detectors (2.2 mm, 2.3 mm, and 2.5 mm FWHM in the tangential, radial, and axial directions, respectively). The large-bore configuration (combining high-resolution and high-sensitivity detectors) achieved better performance and provided higher image quality compared to the Biograph mCT, as reflected by the 3D Hoffman brain phantom simulation study. We introduce the concept of a non-static PET scanner capable of switching between large and small fields of view as well as between high-resolution and high-sensitivity imaging.

155022


Objective. Transit in vivo dosimetry methods verify that the dose distribution is delivered as planned. However, they have a limited ability to identify and quantify the cause of a given disagreement, especially those caused by position errors. This paper describes a proof of concept of a simple in vivo technique to infer a position error from a transit portal image (TPI). Approach. For a given treatment field, the impact of a position error is modeled as a perturbation of the corresponding reference (unperturbed) TPI. The perturbation model determines the patient translation, described by a shift vector, by comparing a given in vivo TPI to the corresponding reference TPI. Patient rotations can also be determined by applying this formalism to independent regions of interest over the patient. Eight treatment plans were delivered to an anthropomorphic phantom under a large set of couch shifts (<15 mm) and rotations (<10°) to experimentally validate this technique, which we have named Transit-Guided Radiation Therapy (TGRT). Main results. The root mean squared error (RMSE) between the determined and the true shift magnitudes was 1.0/2.4/4.9 mm for true shifts ranging between 0–5/5–10/10–15 mm, respectively. The angular accuracy of the determined shift directions was 12° ± 14°. The RMSE between the determined and the true rotations was 0.5°. The TGRT technique decoupled translations and rotations satisfactorily. In 96% of the cases, the TGRT technique decreased the existing position error. The detection threshold of the TGRT technique was around 1 mm, and it was nearly independent of the tumor site, delivery technique, beam energy and patient thickness. Significance. TGRT is a promising technique that not only provides reliable determinations of position errors without increasing the required equipment, acquisition time or patient dose, but also adds on-line correction capabilities to existing methods currently using TPIs.
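
The perturbation model above is specific to TPIs and is not reproduced here; as a generic illustration of recovering a shift vector from a reference/in vivo image pair, a phase-correlation sketch on hypothetical images:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(3)
reference = rng.random((256, 256))                 # reference TPI stand-in
in_vivo = nd_shift(reference, (4.2, -7.5))         # couch-shifted acquisition
in_vivo += rng.normal(0, 0.01, in_vivo.shape)      # measurement noise

# Sub-pixel shift that registers the in vivo image back onto the reference.
det_shift, _, _ = phase_cross_correlation(reference, in_vivo, upsample_factor=10)
print(det_shift)                                   # ~ [-4.2, 7.5]
```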

155023
The following article is Open access


Objective. Due to the radiosensitizing effect of biocompatible noble metal nanoparticles (NPs), their administration is considered a potential means of increasing tumor control in radiotherapy. The underlying physical, chemical and biological mechanisms of the NPs' radiosensitization, especially in interaction with proton radiation, are not conclusively understood. In the following work, the energy deposition of protons in matter containing platinum nanoparticles (PtNPs) is experimentally investigated. Approach. Surfactant-free monomodal PtNPs with a mean diameter of (40 ± 10) nm and a concentration of 300 μg ml⁻¹, demonstrably leading to a substantial production of reactive oxygen species (ROS), were homogeneously dispersed into cubic gelatin samples serving as tissue-like phantoms. Gelatin samples without PtNPs were used as controls. The samples' dimensions and the contrast of the PtNPs were verified in a clinical computed tomography scanner. Fields from a clinical proton machine were used for depth dose and stopping power measurements downstream of both sample types. These experiments were performed with a variety of detectors at a pencil beam scanning beam line as well as a passive beam line, with proton energies from about 56 MeV to 200 MeV. Main results. The samples' water-equivalent ratios in terms of proton stopping, as well as the mean proton energy deposition downstream of the samples with ROS-producing PtNPs compared to the samples without PtNPs, showed no differences within the experimental uncertainties of about 2%. Significance. This study serves as experimental proof that the radiosensitizing effect of biocompatible PtNPs is not due to a macroscopically increased proton energy deposition, but is more likely caused by a catalytic effect of the PtNPs. These experiments thus contribute to the highly discussed radiobiological question of proton therapy efficiency with noble metal NPs and provide initial evidence that dose calculation in treatment planning is straightforward and not affected by the presence of sensitizing PtNPs.