Organ-specific PET scanners have been developed to provide both high spatial resolution and sensitivity, although deploying several dedicated PET scanners at the same center is costly and space-consuming. Active-PET is a multifunctional PET scanner design that exploits the advantages of two different types of detector modules together with mechanical arm mechanisms that reposition the detectors to implement different geometries/configurations. Active-PET can be used for different applications, including brain, axilla, breast, prostate, whole-body, preclinical and pediatric imaging, cell tracking, and image guidance for therapy. Monte Carlo techniques were used to simulate a PET scanner with two sets of high-resolution and high-sensitivity pixelated lutetium oxyorthosilicate (LSO(Ce)) detector blocks (24 in each group, 48 detector modules per ring in total), one with large pixel size (4 × 4 mm²) and crystal thickness (20 mm), and another with small pixel size (2 × 2 mm²) and thickness (10 mm). Each row of detector modules is connected to a linear motor that can displace the detectors forward and backward along the radial axis to achieve a variable gantry diameter, so that the target subject can be imaged at the optimal/desired resolution and/or sensitivity. At the center of the field-of-view, the highest sensitivity (15.98 kcps MBq⁻¹) was achieved by the scanner with a small gantry and high-sensitivity detectors, while the best spatial resolution was obtained by the scanner with a small gantry and high-resolution detectors (2.2 mm, 2.3 mm and 2.5 mm FWHM in the tangential, radial and axial directions, respectively). The large-bore configuration (combining high-resolution and high-sensitivity detectors) achieved better performance and provided higher image quality than the Biograph mCT, as reflected by the 3D Hoffman brain phantom simulation study. We introduced the concept of a non-static PET scanner capable of switching between large and small field-of-view as well as between high-resolution and high-sensitivity imaging.
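To make the repositioning idea concrete, here is a minimal geometric sketch (not part of the published work): it places 48 detector modules on a ring and applies a common radial shift standing in for the linear-motor travel described above. The module count matches the abstract; the radii, shift values and function names are illustrative assumptions.

```python
import numpy as np

def module_centres(n_modules=48, base_radius_mm=400.0, radial_shift_mm=0.0):
    """Illustrative layout of one Active-PET-style ring: n_modules detector
    blocks on a circle whose radius is reduced by a common radial shift
    (a stand-in for the linear-motor travel described in the abstract)."""
    radius = base_radius_mm - radial_shift_mm      # smaller bore -> higher sensitivity
    angles = 2 * np.pi * np.arange(n_modules) / n_modules
    return np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)

# Example: a wide-bore whole-body layout vs. a compact brain/breast layout.
wide = module_centres(radial_shift_mm=0.0)
narrow = module_centres(radial_shift_mm=250.0)
print(wide[0], narrow[0])   # centre of the first module in each configuration
```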
The aim of the Institute of Physics and Engineering in Medicine (IPEM) is to promote the advancement of physics and engineering applied to medicine and biology for the public benefit. Its members are professionals working in healthcare, education, industry and research.
IPEM publishes scientific journals and books and organises conferences to disseminate knowledge and support members in their development. It sets and advises on standards for the practice, education and training of scientists and engineers working in healthcare to secure an effective and appropriate workforce.
ISSN: 1361-6560
The international journal of biomedical physics and engineering, published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM).
Amirhossein Sanaat et al 2022 Phys. Med. Biol. 67 155021
Stephen Joseph McMahon 2019 Phys. Med. Biol. 64 01TR01
The linear-quadratic model is one of the key tools in radiation biology and physics. It provides a simple relationship between cell survival and delivered dose, $S = e^{-(\alpha D + \beta D^{2})}$, and has been used extensively to analyse and predict responses to ionising radiation both in vitro and in vivo. Despite its ubiquity, there remain questions about its interpretation and wider applicability—Is it a convenient empirical fit or representative of some deeper mechanistic behaviour? Does a model of single-cell survival in vitro really correspond to clinical tissue responses? Is it applicable at very high and very low doses? Here, we review these issues, discussing current usage of the LQ model, its historical context, what we now know about its mechanistic underpinnings, and the potential challenges and confounding factors that arise when trying to apply it across a range of systems.
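As a quick orientation to the formula, a short worked example with assumed textbook parameters (α = 0.3 Gy⁻¹ and α/β = 10 Gy, i.e. β = 0.03 Gy⁻²; these are illustrative values, not values taken from the review):

```latex
% Illustrative arithmetic only; alpha and beta are assumed, not from the review.
S(D) = e^{-(\alpha D + \beta D^{2})}, \qquad
S(2\ \mathrm{Gy}) = e^{-(0.3 \times 2 \,+\, 0.03 \times 2^{2})} = e^{-0.72} \approx 0.49
```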
Wayne D Newhauser and Rui Zhang 2015 Phys. Med. Biol. 60 R155
The physics of proton therapy has advanced considerably since it was proposed in 1946. Today analytical equations and numerical simulation methods are available to predict and characterize many aspects of proton therapy. This article reviews the basic aspects of the physics of proton therapy, including proton interaction mechanisms, proton transport calculations, the determination of dose from therapeutic and stray radiations, and shielding design. The article discusses underlying processes as well as selected practical experimental and theoretical methods. We conclude by briefly speculating on possible future areas of research of relevance to the physics of proton therapy.
Conor K McGarry et al 2020 Phys. Med. Biol. 65 23TR01
Tissue mimicking materials (TMMs), typically contained within phantoms, have been used for many decades in both imaging and therapeutic applications. This review investigates the specifications typically used in the development of the latest TMMs. The imaging modalities investigated include CT, mammography, SPECT, PET, MRI and ultrasound. Therapeutic applications discussed within the review include radiotherapy, thermal therapy and surgical applications. A number of modalities were not reviewed, including optical spectroscopy, optical imaging and planar x-rays. The emergence of image-guided interventions and multimodality imaging has placed an increasing demand on the number of specifications for the latest TMMs. Material specification standards are available in some imaging areas such as ultrasound; it is recommended that this be replicated for other imaging and therapeutic modalities. Materials used within phantoms have been reviewed for a series of imaging and therapeutic applications, with the potential to become a testbed for cross-fertilization of materials across modalities. Deformation, texture, multimodality imaging and perfusion are common themes currently under development.
Mahdieh Dashtbani Moghari et al 2023 Phys. Med. Biol. 68 165005
Objective. Cerebral CT perfusion (CTP) imaging is most commonly used to diagnose acute ischaemic stroke and support treatment decisions. Shortening the CTP scan duration is desirable to reduce the accumulated radiation dose and the risk of patient head movement. In this study, we present a novel application of a stochastic adversarial video prediction approach to reduce CTP imaging acquisition time. Approach. A variational autoencoder and generative adversarial network (VAE-GAN) were implemented in a recurrent framework in three scenarios: to predict the last 8 (24 s), 13 (31.5 s) and 18 (39 s) image frames of the CTP acquisition from the first 25 (36 s), 20 (28.5 s) and 15 (21 s) acquired frames, respectively. The model was trained using 65 stroke cases and tested on 10 unseen cases. Predicted frames were assessed against ground truth in terms of image quality and haemodynamic maps, bolus shape characteristics and volumetric analysis of lesions. Main results. In all three prediction scenarios, the mean percentage error between the area, full-width-at-half-maximum and maximum enhancement of the predicted and ground-truth bolus curves was less than 4 ± 4%. The best peak signal-to-noise ratio and structural similarity of the predicted haemodynamic maps were obtained for cerebral blood volume, followed (in order) by cerebral blood flow, mean transit time and time to peak. For the three prediction scenarios, lesion volume was overestimated on average by 7%–15%, 11%–28% and 7%–22% for the infarct, penumbra and hypo-perfused regions, respectively, and the corresponding spatial agreement for these regions was 67%–76%, 76%–86% and 83%–92%. Significance. This study suggests that a recurrent VAE-GAN could potentially be used to predict a portion of CTP frames from truncated acquisitions, preserving the majority of the clinical content of the images and potentially reducing the scan duration and radiation dose by 65% and 54.5%, respectively.
Jia-wei Li et al 2023 Phys. Med. Biol. 68 23TR01
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Shaoyan Pan et al 2023 Phys. Med. Biol. 68 105004
Objective. Artificial intelligence (AI) methods have gained popularity in medical imaging research, but the training image datasets available for AI model development do not always have the desired size and scope. In this paper, we introduce a medical image synthesis framework aimed at addressing the challenge of limited training datasets for AI models. Approach. The proposed 2D image synthesis framework is based on a diffusion model using a Swin-transformer-based network. The model consists of a forward Gaussian noising process and a reverse denoising process using the transformer-based diffusion model. Training data include four image datasets: chest x-rays, heart MRI, pelvic CT, and abdomen CT. We evaluated the authenticity, quality, and diversity of the synthetic images using visual Turing assessments conducted by three medical physicists and four quantitative metrics: the Inception score (IS), the Fréchet Inception Distance (FID), a feature similarity score (FDS), and a diversity score (DS) between the synthetic and true images. To demonstrate the value of the framework for training AI models, we conducted COVID-19 classification tasks using real images, synthetic images, and mixtures of both. Main results. Visual Turing assessments showed an average accuracy of 0.64 (an accuracy converging to 0.5, i.e. chance level, indicates a more realistic visual appearance of the synthetic images), a sensitivity of 0.79, and a specificity of 0.50. Average quantitative scores across all datasets were IS = 2.28, FID = 37.27, FDS = 0.20, and DS = 0.86. For the COVID-19 classification task, the baseline network obtained an accuracy of 0.88 using a purely real dataset, 0.89 using a purely synthetic dataset, and 0.93 using a dataset mixing real and synthetic data. Significance. An image synthesis framework was demonstrated that can generate high-quality medical images of different imaging modalities to supplement existing training sets for AI model development. This method has potential applications in many areas of data-driven medical imaging research.
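For readers unfamiliar with the forward Gaussian noising process mentioned above, a minimal DDPM-style sketch follows. The beta schedule, step count and tensor shapes are illustrative assumptions; this is not the authors' Swin-transformer network.

```python
import torch

def forward_noise(x0, t, alpha_bar):
    """One closed-form step of the forward Gaussian noising process used by
    diffusion models: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    Generic sketch of the standard formulation, not the paper's model."""
    eps = torch.randn_like(x0)
    abar_t = alpha_bar[t].view(-1, 1, 1, 1)
    return abar_t.sqrt() * x0 + (1.0 - abar_t).sqrt() * eps, eps

# Illustrative linear beta schedule over 1000 steps (an assumption, not the paper's).
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

x0 = torch.randn(2, 1, 64, 64)                       # stand-in for a batch of 2D images
x_t, eps = forward_noise(x0, torch.tensor([10, 500]), alpha_bar)
```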
Mats Danielsson et al 2021 Phys. Med. Biol. 66 03TR01
The introduction of photon-counting detectors is expected to be the next major breakthrough in clinical x-ray computed tomography (CT). During the last decade, there has been considerable research activity in the field of photon-counting CT, in terms of both hardware development and theoretical understanding of the factors affecting image quality. In this article, we review the recent progress in this field with the intent of highlighting the relationship between detector design considerations and the resulting image quality. We discuss detector design choices such as converter material, pixel size, and readout electronics design, and then elucidate their impact on detector performance in terms of dose efficiency, spatial resolution, and energy resolution. Furthermore, we give an overview of data processing, reconstruction methods and metrics of imaging performance; outline clinical applications; and discuss potential future developments.
Stefan Gundacker and Arjan Heering 2020 Phys. Med. Biol. 65 17TR01
The silicon photomultiplier (SiPM) is an established device of choice for a variety of applications, e.g. in time of flight positron emission tomography (TOF-PET), lifetime fluorescence spectroscopy, distance measurements in LIDAR applications, astrophysics, quantum-cryptography and related applications as well as in high energy physics (HEP).
To fully utilize the exceptional performances of the SiPM, in particular its sensitivity down to single photon detection, the dynamic range and its intrinsically fast timing properties, a qualitative description and understanding of the main SiPM parameters and properties is necessary. These analyses consider the structure and the electrical model of a single photon avalanche diode (SPAD) and the integration in an array of SPADs, i.e. the SiPM. The discussion will include the front-end readout and the comparison between analog-SiPMs, where the array of SPADs is connected in parallel, and the digital SiPM, where each SPAD is read out and digitized by its own electronic channel.
For several applications, a further, complete phenomenological view of SiPMs is necessary, defining several intrinsic SiPM parameters, i.e. gain fluctuation, afterpulsing, excess noise, dark count rate, prompt and delayed optical crosstalk, single photon time resolution (SPTR), photon detection efficiency (PDE), etc. These qualities of SiPMs directly and indirectly influence the time and energy resolution, for example in PET and HEP. This complete overview of all parameters allows one to draw solid conclusions on how the best performance can be achieved for the various needs of the different applications.
Steven L Jacques 2013 Phys. Med. Biol. 58 R37
A review of reported tissue optical properties summarizes the wavelength-dependent behavior of scattering and absorption. Formulae are presented for generating the optical properties of a generic tissue with variable amounts of absorbing chromophores (blood, water, melanin, fat, yellow pigments) and a variable balance between small-scale scatterers and large-scale scatterers in the ultrastructures of cells and tissues.
Chengxiang Wang et al 2024 Phys. Med. Biol. 69 095019
Objective. Limited-angle x-ray computed tomography (CT) is a typical ill-posed inverse problem in which the incomplete projection data lead to artifacts in the reconstructed image. Most iterative CT reconstruction methods optimize a single objective. This paper explores a multi-objective optimization model and an interactive method based on multi-objective optimization to suppress the artifacts of limited-angle CT. Approach. The model includes two objective functions on the dual domain within the data consistency constraint. In the interactive method, the structural similarity index measure (SSIM) is first regarded as the value function of the decision maker (DM). Second, the DM orders the objective functions of the multi-objective optimization model according to their absolute importance. Finally, the SSIM and the simulated annealing (SA) method help the DM choose the desired reconstruction by improving the SSIM value during the iteration process. Main results. Simulation and real-data experiments demonstrate that the artifacts can be suppressed by the proposed method, and the results are superior to those of three other reconstruction methods in preserving the edge structure of the image. Significance. The proposed interactive method based on multi-objective optimization shows potential advantages over classical single-objective optimization methods.
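To illustrate how simulated annealing can steer the iteration toward higher SSIM, here is a generic Metropolis-style acceptance sketch; the temperature schedule and the toy SSIM values are assumptions, not the authors' algorithm.

```python
import math
import random

def accept(ssim_new, ssim_old, temperature):
    """Metropolis-style rule used in simulated annealing: always keep an
    iterate that improves SSIM, otherwise keep it with a probability that
    shrinks as the temperature decreases. Generic sketch, not the paper's code."""
    if ssim_new >= ssim_old:
        return True
    return random.random() < math.exp((ssim_new - ssim_old) / temperature)

# Toy annealing loop: the candidate SSIM stands in for one reconstruction iterate
# evaluated against the decision maker's reference image.
temperature, cooling = 1.0, 0.95
ssim_current = 0.70
for _ in range(5):
    ssim_candidate = ssim_current + random.uniform(-0.02, 0.05)
    if accept(ssim_candidate, ssim_current, temperature):
        ssim_current = ssim_candidate
    temperature *= cooling
print(ssim_current)
```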
A Kofler et al 2024 Phys. Med. Biol. 69 095022
Objective. Task-adapted image reconstruction methods using end-to-end trainable neural networks (NNs) have been proposed to optimize reconstruction for subsequent processing tasks, such as segmentation. However, their training typically requires considerable hardware resources and thus only relatively simple building blocks, e.g. U-Nets, are typically used, which, albeit powerful, do not integrate model-specific knowledge. Approach. In this work, we extend an end-to-end trainable task-adapted image reconstruction method for a clinically realistic reconstruction and segmentation problem of bone and cartilage in 3D knee MRI by incorporating statistical shape models (SSMs). The SSMs model the prior information and help to regularize the segmentation maps as a final post-processing step. We compare the proposed method to a simultaneous multitask learning approach for image reconstruction and segmentation (MTL) and to a complex SSM-informed segmentation pipeline (SIS). Main results. Our experiments show that the combination of joint end-to-end training and SSMs to further regularize the segmentation maps obtained by MTL substantially improves the results, especially in terms of mean and maximal surface errors. In particular, we achieve the segmentation quality of SIS and, at the same time, a substantial model reduction that yields a five-fold reduction in model parameters and a computational speedup of an order of magnitude. Significance. Remarkably, even for undersampling factors of up to R = 8, the obtained segmentation maps are of comparable quality to those obtained by SIS from ground-truth images.
Joshua Kirby and Katherine Chester 2024 Phys. Med. Biol. 69 095018
Objective. To use automation to facilitate monitoring of every treatment fraction with an electronic portal imaging device (EPID) based in vivo dosimetry (IVD) system, allowing optimisation of breast radiotherapy delivery for individual patients and cohorts. Approach. A suite of in-house software was developed to reduce the number of manual interactions with the commercial IVD system, Dosimetry Check. An EPID-specific pixel sensitivity map facilitated use of the EPID panel away from the central axis. The point dose difference and the change in standard deviation in dose were identified as useful dose metrics, with standard deviation used in preference to gamma in the presence of a systematic dose offset. Automated IVD was completed for 3261 fractions across 704 patients receiving breast radiotherapy. Main results. Multiple opportunities for treatment optimisation were identified for individual patients and across patient cohorts as a result of the successful implementation of automated IVD. 5.1% of analysed fractions were out of tolerance, with 27.1% of these considered true positives. True positive results were obtained on any fraction of treatment, and if IVD had only been completed on the first fraction, 84.4% of true positive results would have been missed. This was made possible by the automation, which saved over 800 h of manual intervention and stored data in an accessible database. Significance. An improved EPID calibration allowing off-axis measurement maximises the number of patients eligible for IVD (36.8% of patients in this study). We also demonstrate the importance of selecting context-specific assessment metrics and how these can lead to a manageable false positive rate. We have shown that fully automated IVD facilitates use on every fraction of treatment. This leads to the identification of areas for treatment improvement both for individuals and across a patient cohort, expanding the uses of IVD from simple gross error detection towards treatment optimisation.
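A rough sketch of the two reported dose metrics, the point dose difference and the standard deviation of the dose difference, computed between an EPID-reconstructed and a planned dose map. The array shapes, normalisation and names are assumptions for illustration, not the in-house software.

```python
import numpy as np

def ivd_metrics(measured, planned, point):
    """Illustrative IVD metrics: percentage point-dose difference at a chosen
    position and the standard deviation of the relative dose-difference map.
    A generic sketch of the metrics named in the abstract, not the authors' code."""
    rel_diff = (measured - planned) / planned.max() * 100.0   # % of planned maximum dose
    return rel_diff[point], rel_diff.std()

# Toy dose maps (Gy); in practice 'point' might be the isocentre pixel.
planned = np.full((10, 10), 2.0)
measured = planned * (1.0 + np.random.normal(0.0, 0.01, planned.shape))
print(ivd_metrics(measured, planned, (5, 5)))
```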
C Angla et al 2024 Phys. Med. Biol. 69 095017
Objective. To optimize and ensure the safety of ultrasound brain therapy, personalized transcranial ultrasound simulations are very useful: they allow the pressure field to be predicted as a function of the patient's skull and the probe position. Most transcranial ultrasound simulations are based on numerical methods that have a long computation time and a high memory usage. The goal of this study is to develop a new semi-analytical field computation method that combines realism and computation speed. Approach. Instead of classic ray tracing, the ultrasonic paths are computed by time-of-flight minimization. The pressure field is then computed using the pencil method, which requires a smooth and homogeneous skull model. The simulation algorithm, called SplineBeam, was numerically validated by comparison with existing solvers, and experimentally validated by comparison with hydrophone-measured pressure fields through an ex vivo human skull. Main results. SplineBeam-simulated pressure fields were close to the experimentally measured ones, with a focus position difference of the order of the positioning error and a maximum pressure difference lower than 6.02%. In addition, for those configurations, SplineBeam's computation time was lower than that of another simulation package, k-Wave, by two orders of magnitude, thanks to its capacity to compute the field only at the focal spot. Significance. These results show the potential of this new method to compute fast and realistic transcranial pressure fields. The combination of these two assets makes it a promising tool for real-time transcranial pressure field prediction during ultrasound brain therapy interventions.
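The time-of-flight minimization underlying the path computation follows Fermat's principle. Below is a brute-force, single-interface sketch; the sound speeds and geometry are chosen for illustration only, whereas the published method handles a full smooth skull model.

```python
import numpy as np

def travel_time(x_cross, src, rcv, interface_y, c1, c2):
    """Travel time of a ray from src to rcv crossing a flat interface at
    y = interface_y at horizontal position x_cross (speed c1 above, c2 below)."""
    p = np.array([x_cross, interface_y])
    return np.linalg.norm(p - src) / c1 + np.linalg.norm(rcv - p) / c2

# Fermat's principle by brute force: pick the crossing point with minimal time.
# Speeds are illustrative (water ~1500 m/s, skull bone ~2800 m/s), not SplineBeam's model.
src, rcv = np.array([0.0, 0.0]), np.array([0.03, -0.06])   # metres
xs = np.linspace(src[0], rcv[0], 2001)
times = [travel_time(x, src, rcv, -0.02, 1500.0, 2800.0) for x in xs]
x_best = xs[int(np.argmin(times))]
print(f"refraction point x = {x_best * 1e3:.2f} mm, t = {min(times) * 1e6:.2f} us")
```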
Marie-Luise Kuhlmann and Stefan Pojtinger 2024 Phys. Med. Biol. 69 095021
Objective. Personalized dose monitoring and risk management are of increasing significance with the growing number of computed tomography (CT) examinations. These require high-quality Monte Carlo (MC) simulations, which are of the utmost importance for new developments in personalized CT dosimetry. This work aims to extend the MC framework EGSnrc source code with a new particle source, which in turn allows CT-scanner-specific dose and image calculations for any CT scanner. The novel method can be used with all modern EGSnrc user codes, particularly for the simulation of the effective dose based on DICOM images and the calculation of CT images. Approach. The new particle source is driven by user-derived input data, which can be generated with a previously developed method for the experimental characterization of any CT scanner (doi.org/10.1016/j.ejmp.2015.09.006). Furthermore, the new particle source was benchmarked against air kerma measurements with an ionization chamber at a clinical CT scanner. For this, the simulated angular distribution and attenuation characteristics were compared to measurements to verify the source output free in air. In a second validation step, simulations of air kerma in a homogeneous cylindrical phantom and an anthropomorphic thorax phantom were performed and validated against experimentally determined results. A detailed uncertainty evaluation of the simulated air kerma values was developed. Main results. We successfully implemented a new particle source class for the simulation of realistic CT scans. The method can be adapted to any CT scanner. For the attenuation characteristics, there was a maximal deviation of 6.86% between measurement and simulation; the mean deviation over all tube voltages was 2.36% (σ = 1.6%). For the phantom measurements and simulations, all values agreed within 5.0%. The uncertainty evaluation resulted in an uncertainty of 5.5%.
Christian P Karger et al 2024 Phys. Med. Biol. 69 06TR01
Modern radiotherapy delivers highly conformal dose distributions to irregularly shaped target volumes while sparing the surrounding normal tissue. Because of the complexity of the planning and delivery techniques, dose verification and validation of the whole treatment workflow by end-to-end tests have become much more important, and polymer gel dosimeters are one of the few options for capturing the delivered dose distribution in 3D. The basic principles and formulations of gel dosimetry and its evaluation methods are described, and the available studies validating device-specific geometrical parameters as well as dose delivery by advanced radiotherapy techniques, such as 3D-CRT/IMRT and stereotactic radiosurgery treatments, the treatment of moving targets, online-adaptive magnetic resonance-guided radiotherapy, and proton and ion beam treatments, are reviewed. The present status and limitations as well as future challenges of polymer gel dosimetry for the validation of complex radiotherapy techniques are discussed.
Dong Sik Kim 2024 Phys. Med. Biol. 69 03TR01
Objective. The noise characteristics of digital x-ray imaging devices are determined by contributions such as photon noise, electronic noise, and fixed pattern noise, and can be evaluated by measuring the noise power spectrum (NPS), i.e. the power spectral density of the noise. Accurately measuring the NPS is therefore important when developing detectors for acquiring low-noise digital x-ray images. To make accurate measurements, it is necessary to understand the NPS, identify problems that may arise, and know how to process the acquired x-ray images. Approach. The basic concept of the NPS is first introduced with a periodogram-based estimate, and its bias and variance are discussed. For NPS measurements based on the IEC 62220 standards, various issues, such as fixed pattern noise, high-precision estimates, and lag corrections, are summarized with simulation examples. Main results. High-precision estimates can be obtained for an appropriate number of samples extracted from x-ray images, at the cost of spectral resolution. Depending on the medical imaging system, the influence of fixed pattern noise can be eliminated so that the NPS representing only photon and electronic noise is measured efficiently. For NPS measurements in dynamic detectors, an appropriate lag correction technique can be selected depending on the emitted x-rays and the image acquisition process. Significance. Various issues in measuring the NPS are reviewed and summarized for accurately evaluating the noise performance of digital x-ray imaging devices.
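For reference, the periodogram-averaged NPS estimate mentioned above can be sketched as follows. This simplified version omits the detrending, ROI overlap and binning details of the IEC 62220 procedure; the white-noise example is only a sanity check, and all parameter values are assumptions.

```python
import numpy as np

def nps_2d(rois, pixel_pitch_mm):
    """Periodogram-averaged 2D noise power spectrum from a stack of uniformly
    exposed ROIs (shape: n_rois x ny x nx), using the usual normalisation
    NPS = (dx*dy / (Nx*Ny)) * <|DFT(ROI - mean)|^2>. Simplified sketch only."""
    n_rois, ny, nx = rois.shape
    dx = dy = pixel_pitch_mm
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove per-ROI mean offset
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return np.fft.fftshift(spectra.mean(axis=0)) * dx * dy / (nx * ny)

# Sanity check with synthetic white noise: the NPS should be flat on average,
# with mean ~ sigma^2 * dx * dy = 25 * 0.15 * 0.15.
rois = np.random.normal(0.0, 5.0, size=(64, 128, 128))
print(nps_2d(rois, pixel_pitch_mm=0.15).mean())
```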
Lena Nenoff et al 2023 Phys. Med. Biol. 68 24TR01
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Tiziana Malatesta et al 2023 Phys. Med. Biol. 68 21TR01
This topical review focuses on patient-specific quality assurance (PSQA) approaches for stereotactic body radiation therapy (SBRT). SBRT requires stricter accuracy than standard radiation therapy because of the high dose per fraction and the limited number of fractions. The review considered the PSQA methods for SBRT reported in 36 articles published between January 2010 and July 2022. In particular, comparisons among devices (including devices designed specifically for SBRT), sensitivity and resolution, verification methodology, and gamma analysis were considered. The review identified a list of essential data needed to reproduce the results in other clinics, highlighted the data that are frequently missing from published papers, and formulated recommendations for the successful implementation of a PSQA protocol.
Open all abstracts, in this tab
Robert P Johnson 2024 Phys. Med. Biol.
Six decades after their conception, proton computed tomography (pCT) and proton radiography have yet to be used in medical clinics. However, good progress has been made on the relevant detector technologies in the past two decades, and a few prototype pCT systems now exist that approach the performance needed for a clinical device. The tracking and energy-measurement technologies in common use are described, as are the few pCT scanners that are in routine operation at this time. Most of these devices still look like detector R&D efforts rather than medical devices, are difficult to use, are at least a factor of five slower than desired for clinical use, and are too small to image many parts of the human body. Recommendations are made for what to consider when engineering a pre-clinical pCT scanner designed to meet clinical needs in terms of performance, cost, and ease of use.
Saad Shaikh et al 2024 Phys. Med. Biol.
Objective: The superior dose conformity provided by proton therapy relative to conventional x-ray radiotherapy necessitates more rigorous quality assurance (QA) procedures to ensure optimal patient safety. In practice, however, time constraints prevent comprehensive measurement of the proton range in water, a key parameter in ensuring accurate treatment delivery. Approach: A novel scintillator-based device for fast, accurate water-equivalent proton range QA measurements for ocular proton therapy is presented. Experiments were conducted using a compact detector prototype, the Quality Assurance Range Calorimeter (QuARC), at the Clatterbridge Cancer Centre (CCC) in Wirral, UK, for the measurement of pristine and spread-out Bragg peaks (SOBPs). The QuARC uses a stack of 14 optically isolated 100 × 100 × 2.85 mm polystyrene scintillator sheets, read out by a series of photodiodes. The detector system is housed in a custom 3D-printed enclosure mounted directly to the nozzle, and a numerical model was used to fit the measured depth-light curves and correct for scintillator light quenching. Main results: Measurements of the pristine 60 MeV proton Bragg curve showed that the QuARC can measure proton ranges accurate to 0.2 mm while reducing QA measurement times from several minutes to a few seconds. A new framework of the quenching model was deployed to successfully fit the depth-light curves of SOBPs with similar range accuracy. Significance: The speed, range accuracy and simplicity of the QuARC make the device a promising candidate for ocular proton range QA. Further work to investigate the performance of SOBP fitting at higher energies/greater depths is warranted.
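The scintillator light quenching mentioned above is commonly modelled with Birks' law. The sketch below illustrates the effect using a frequently quoted kB value for plastic scintillator; that value is an assumption here, not the QuARC's fitted parameter.

```python
import numpy as np

def birks_light_yield(dEdx_MeV_per_cm, kB_cm_per_MeV=0.0126, S=1.0):
    """Birks' law for scintillator quenching: dL/dx = S * dE/dx / (1 + kB * dE/dx).
    Generic illustration of why the light signal near the Bragg peak
    under-represents dose; kB is a commonly quoted plastic-scintillator value."""
    dEdx = np.asarray(dEdx_MeV_per_cm, dtype=float)
    return S * dEdx / (1.0 + kB_cm_per_MeV * dEdx)

# Stopping power rises steeply toward the Bragg peak, so the relative light
# output per MeV deposited drops there (illustrative dE/dx values in MeV/cm):
for dEdx in (5.0, 50.0, 300.0):
    print(dEdx, birks_light_yield(dEdx) / dEdx)
```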
Markus Piller et al 2024 Phys. Med. Biol.
Objective. The efficient use of prompt photons such as Cherenkov emission is of great interest for the design of next-generation, cost-effective, ultrahigh-sensitivity TOF-PET scanners. With custom, high-power-consumption readout electronics and fast digitization, the prospect of sub-300 ps FWHM timing with PET-sized BGO crystals has been demonstrated. However, these results are not scalable to a full system consisting of thousands of detector elements.
Approach. To pave the way toward a full TOF-PET scanner, we examine the performance of the FastIC ASIC with Cherenkov-emitting scintillators (BGO), together with one of the most recent SiPM detector developments based on metal trenching from FBK. The FastIC is a highly configurable ASIC with 8 input channels, a power consumption of 12 mW/channel and excellent linearity in the energy measurement. To put the timing performance of the FastIC into perspective, comparison measurements with high-power-consumption readout electronics are performed.
Main results. We achieve a best CTR FWHM of 330 ps for 2 × 2 × 3 mm³ and 490 ps for 2 × 2 × 20 mm³ BGO crystals with the FastIC. In addition, using 20 mm long LSO:CeCa crystals, CTR values of 129 ps FWHM have been measured with the FastIC, only slightly worse than the state-of-the-art 95 ps obtained with discrete HF electronics.
Significance. For the first time, the timing capability of BGO with a scalable ASIC has been evaluated. The findings underscore the potential of the FastIC ASIC in the development of cost-effective TOF-PET scanners with excellent timing characteristics.
Ripaud et al
Background: Breast background parenchymal enhancement (BPE) is correlated with the risk of breast cancer. BPE level is currently assessed by radiologists in contrast-enhanced mammography (CEM) using four classes, minimal, mild, moderate and marked, as described in the Breast Imaging Reporting and Data System (BI-RADS). However, BPE classification remains subject to intra- and inter-reader variability. Fully automated methods to assess BPE level have already been developed for breast contrast-enhanced MRI (CE-MRI) and have been shown to provide accurate and repeatable BPE level classification. However, to our knowledge, no BPE level classification tool is available in the literature for CEM.
Materials & Methods: A BPE level classification tool based on deep learning (DL) was trained and optimized on 7012 CEM image pairs (low-energy and recombined images) and evaluated on a dataset of 1013 image pairs. The impact of image resolution, backbone architecture and loss function was analyzed, as well as the influence of lesion presence and type on BPE assessment. Model performance was evaluated using different metrics, including 4-class balanced accuracy and mean absolute error. The results of the optimized model for a binary classification, minimal/mild versus moderate/marked, were also investigated.
Results: The optimized model achieved a 4-class balanced accuracy of 71.5% (95% CI: 71.2–71.9), with 98.8% of classification errors occurring between adjacent classes. For binary classification, the accuracy reached 93.0%. A slight decrease in model accuracy is observed in the presence of lesions, but it is not statistically significant, suggesting that our model is robust to the presence of lesions in the image for this classification task. Visual assessment also confirms that the model is more affected by non-mass enhancements than by mass-like enhancements.
Conclusion: The proposed BPE classification tool for CEM achieves results similar to those published in the literature for CE-MRI.
Yuan et al
Objective: Automatic and accurate airway segmentation is necessary for lung disease diagnosis. The complex tree-like structure leads to gaps between the different generations of the airway tree, so airway segmentation is also considered a multi-scale problem. In recent years, convolutional neural networks (CNNs) have facilitated the development of medical image segmentation; in particular, 2D CNNs and 3D CNNs extract features at different scales. Hence, we propose a two-stage, 2D+3D framework for multi-scale airway tree segmentation. Approach: In stage 1, we use a 2D Full Airway SegNet (2D FA-SegNet) to segment the complete airway tree. Multi-scale Atrous Spatial Pyramid (MASP) and Atrous Residual Skip connection (ARSc) modules are inserted to extract features at different scales. We designed a hard-sample selection strategy to increase the proportion of intrapulmonary airway samples in stage 2. A 3D Airway RefineNet (3D ARNet) forms stage 2 and takes the results of stage 1 as a priori information. Spatial information extracted by the 3D convolutional kernels compensates for the spatial information lost in 2D FA-SegNet. Furthermore, we added false positive and false negative losses to improve the segmentation performance for airway branches within the lungs. Main results: We performed data augmentation on the publicly available dataset of ISICDM 2020 Challenge 3 and evaluated our method on it. Comprehensive experiments show that the proposed method achieves the highest DSC of 0.931 and IoU of 0.871 for the whole airway tree, and a DSC of 0.699 and IoU of 0.543 for the intrapulmonary bronchial tree. In addition, the 3D ARNet proposed in this paper, cascaded with other state-of-the-art methods, increases DLR by up to 46.33% and DBR by up to 42.97%. Significance: The quantitative and qualitative evaluation results show that our proposed method performs well in segmenting the airway at different scales.
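For completeness, the two overlap metrics quoted above (DSC and IoU) can be computed as in this small sketch; the toy masks are placeholders for the predicted and reference airway trees.

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice similarity coefficient (DSC) and intersection-over-union (IoU)
    for binary segmentation masks, the overlap metrics reported in the abstract."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dsc = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return dsc, iou

# Toy example on small masks; real use would pass the 3D airway tree volumes.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_iou(pred, target))   # (0.666..., 0.5)
```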
Trending on Altmetric
Moritz Schneider et al 2024 Phys. Med. Biol. 69 095020
The evolution of radiotherapy necessitates innovative platforms for preclinical in vivo investigation, bridging the gap between bench research and clinical applications. Understanding the nuances of radiation response, specifically tailored to proton and photon therapies, is critical for optimizing treatment outcomes. Within this context, preclinical in vivo experimental setups incorporating image guidance for both photon and proton therapies are pivotal, enabling the translation of findings from small animal models to clinical settings. The SAPPHIRE project represents a milestone in this pursuit, presenting the installation of the small animal radiation therapy integrated beamline (SmART+ IB, Precision X-Ray Inc., Madison, Connecticut, USA) designed for preclinical image-guided proton and photon therapy experiments at University Proton Therapy Dresden. Through Monte Carlo simulations, low-dose on-site cone beam computed tomography imaging and quality assurance alignment protocols, the project ensures the safe and precise application of radiation, crucial for replicating clinical scenarios in small animal models. The creation of Hounsfield lookup tables and comprehensive proton and photon beam characterizations within this system enable accurate dose calculations, allowing for targeted and controlled comparison experiments. By integrating these capabilities, SAPPHIRE bridges preclinical investigations and potential clinical applications, offering a platform for translational radiobiology research and cancer therapy advancements.
David Stocker et al 2024 Phys. Med. Biol.
Optimizing complex imaging procedures in computed tomography, considering both dose and image quality, presents significant challenges amidst rapid technological advancements and the adoption of machine learning (ML) methods. A crucial metric in this context is the Difference-Detail Curve (DDC), which relies on human observer studies. However, these studies are labor-intensive and prone to both inter- and intra-observer variability. To tackle these issues, an ML-based model observer utilizing the U-Net architecture and a Bayesian methodology is proposed. In order to train a model observer unaffected by the spatial arrangement of low-contrast objects, the image preprocessing incorporates a Gaussian-process-based noise model. Additionally, Gradient-weighted Class Activation Mapping is utilized to gain insight into the model observer's decision-making process. By training on data from a diverse group of observers, well-calibrated probabilistic predictions that quantify observer variability are achieved. Leveraging the principles of Beta regression, the Bayesian methodology is used to derive a model observer performance metric, effectively gauging the model observer's strength in terms of an 'effective number of observers'. Ultimately, this framework makes it possible to predict the DDC distribution by applying thresholds to the inferred probabilities.
Justin Ellin et al 2024 Phys. Med. Biol.
Objective. Prompt gamma timing (PGT) uses the detection time of prompt gammas emitted along the proton track in proton radiotherapy to verify Bragg peak positioning. Cherenkov detectors offer the possibility of an enhanced signal-to-noise ratio (SNR) because Cherenkov physics favours the detection of high-energy prompt gamma rays relative to other induced, uncorrelated signals. In this work, the PGT technique was applied to three semiconductor materials that emit only Cherenkov light: a 3 × 3 × 20 mm³ TlBr, a 12 × 12 × 12 mm³ TlBr, and a 5 × 5 × 5 mm³ TlCl crystal. Approach. A polymethyl methacrylate (PMMA) target was exposed to a 67.5 MeV, 0.5 nA proton beam and shifted in 3 mm increments at the Crocker Nuclear Laboratory (CNL) in Davis, CA, USA. A fast plastic scintillator coupled to a PMT provided the start reference for the proton time of flight (TOF). TOF distributions were generated using this reference and the gamma-ray timestamp in the Cherenkov detector. Main results. The SNR of the proton-correlated peaks relative to the background was 20, 29, and 30 for the three samples, respectively. The upper limits of the position resolution with the TlCl sample were 2 mm, 3 mm, and 5 mm for 30k, 10k, and 5k detected events, respectively. The time distribution of events with respect to the reference clearly reproduced the periodicity of the beam, implying a very high SNR of the Cherenkov crystals for detecting prompt gammas. Background from the neutron-induced continuum, prompt gammas from deuterium, or positron activation was not observed. Material choice and crystal dimensions did not appear to significantly affect the results. Significance. These results show the high SNR of the pure Cherenkov emitters TlBr and TlCl for the detection of prompt gammas in a proton beam with a current of clinical significance, and their potential for verifying the proton range.
Yolanda Prezado et al 2024 Phys. Med. Biol.
Spatially fractionated radiation therapy (SFRT) is a therapeutic approach with the potential to disrupt the classical paradigms of conventional radiation therapy. The high spatial dose modulation in SFRT activates distinct radiobiological mechanisms which lead to a remarkable increase in normal tissue tolerances. Several decades of clinical use and numerous preclinical experiments suggest that SFRT has the potential to increase the therapeutic index, especially in bulky and radioresistant tumors. To unleash the full potential of SFRT a deeper understanding of the underlying biology and its relationship with the complex dosimetry of SFRT is needed. This review provides a critical analysis of the field, discussing not only the main clinical and preclinical findings but also analyzing the main knowledge gaps in a holistic way.
Adam George Tattersall et al 2024 Phys. Med. Biol.
Training deep learning models for image registration or segmentation of dynamic contrast enhanced (DCE)-MRI data is challenging. This is mainly due to the wide variations in contrast enhancement within and between patients. To train a model effectively, a large dataset is needed, but acquiring it is expensive and time consuming. Instead, style transfer can be used to generate new images from existing images.
 
In this study, our objective is to develop a style transfer method that incorporates spatio-temporal information to either add or remove contrast enhancement from an existing image.
 
We propose a Temporal Image-to-Image Style Transfer Network (TIST-Net), consisting of an auto-encoder combined with convolutional long short-term memory (LSTM) networks. This enables disentanglement of the content and style latent spaces of the time-series data, using spatio-temporal information to learn and predict key structures. To generate new images, we use deformable and adaptive convolutions, which allow fine-grained control over the combination of the content and style latent spaces. We evaluate our method using popular metrics and a previously proposed contrast-weighted structural similarity index measure (CW-SSIM). We also perform a clinical evaluation in which experts are asked to rank images generated by multiple methods.
 
Our model achieves state-of-the-art performance on three datasets (kidney, prostate and uterus), with SSIMs of 0.91±0.03, 0.73±0.04 and 0.88±0.04, respectively, when performing style transfer between a non-enhanced image and a contrast-enhanced image. Similarly, SSIM results for style transfer from a contrast-enhanced image to a non-enhanced image were 0.89±0.03, 0.82±0.03 and 0.87±0.03. In the clinical evaluation, our method was consistently ranked higher than other approaches.
 
TIST-Net can be used to generate new DCE-MRI data from existing images. In future, this may improve models for tasks such as image registration or segmentation by allowing small training datasets to be expanded.