Early molecular markers for retrospective biodosimetry and prediction of acute health effects

Radiation-induced biological changes occurring within hours and days after irradiation can potentially be used either for exposure reconstruction (retrospective dosimetry) or for the prediction of subsequently occurring acute or chronic health effects. The advantage of molecular protein or gene expression (GE) (mRNA) markers lies in their capability for early (1–3 days after irradiation), high-throughput and point-of-care diagnosis, as required for the prediction of the acute radiation syndrome (ARS) in radiological or nuclear scenarios. In most cases these molecular markers respond differently depending on exposure characteristics such as radiation quality, dose and dose rate, and most importantly over time. Changes over time are particularly challenging and demand dedicated strategies. With this review, we provide an overview, focusing on already identified and used mRNA GE and protein markers of the peripheral blood related to the ARS. These molecules are examined in light of the characteristics of an 'ideal' biomarker (e.g. easy accessibility, early response, signal persistency) and their degree of validation. Finally, we present strategies for the use of these markers, considering challenges such as their variation over time and future developments regarding e.g. the origin of samples, point-of-care and high-throughput diagnosis.


What is a biomarker?
The term 'biomarker' was introduced around 1980, and the number of publications has increased about ten-fold per decade (figure 1(A)). Since 2019, about 20 000 biomarker-related articles per year have been published. According to the WHO, a biomarker is 'any measurement reflecting an interaction between a biological system and an environmental agent, which may be chemical, physical or biological' [1]. The definition of the term 'biomarker' is constantly reviewed in fields as different as ecology/ecotoxicology, epidemiology, medicine, biodosimetry and radiobiology, and it has been suggested to discriminate it from other terms such as bioindicator or ecological indicator due to an overlap of definitions [2]. For instance, Peakall in 1994 used the term biomarker for both a 'biological response to … chemicals that gives a measure of exposure and sometimes also, of toxic effects' [2,3]. In the same year, others limited this to a definition of biomarkers as 'biochemical, physiological and morphological changes in plants to measure their exposure to chemicals' [2,4]. Searching PubMed for the terms biomarker or bioindicator reflects the synonymous [5–8] and dual use of both terms as markers for exposures and effects, since e.g. bioindicator species (e.g. the mussel watch programme in Norway) are used as part of monitoring programmes for exposure to and effects from contaminants in the environment and their impact on exposed species or populations [9,10]. These contaminants can be e.g. radionuclides [9], metals [11] or non-steroidal anti-inflammatory drugs [12].
Authors representing other scientific fields follow the dual use of biomarkers and specify the purpose [13,14]. Considering the causal pathway along which biological processes (intermediates) occur in response to an exposure, these biomarkers can be used for upstream reconstruction of the exposure as well as for downstream prediction of effects ([15–17], figure 1(B)). These effects can be clinical endpoints as diverse as sepsis [18], cancer [17,19] or acute radiation syndrome (ARS) [15]. Depending on the applicability, these biomarkers are called biomarkers for exposure reconstruction, including e.g. asbestos, food intake, fatty acids or radiation [13,14,20–25], or biomarkers of clinical endpoints, such as for diagnosis, prediction and prognosis (development) of diseases (so-called surrogate response markers in personalized medicine [17,19,26–28]), biomarkers for treatment tailoring to facilitate personalized medicine [29], as well as markers of susceptibility to certain exposures [13,14,24,28,30].
Figure 1. Radiation exposure results in health effects along a causal pathway (grey arrow). Biological processes thought of as intermediates comprise e.g. dicentric chromosomal aberrations (DIC), clinical parameters such as prodromi (e.g. vomiting, diarrhoea) and blood cell count changes as well as gene and protein expression changes. These biological changes can be used for both retrospective dosimetry purposes and prediction of later developing health effects. In this review only gene and protein expression are considered as biomarkers of exposure or biomarkers of effect prediction.
Digital biomarkers derived from sensors and mobile technologies (e.g. monitoring vital parameters such as heart rate) represent another class of biomarkers [28].
Acknowledging the predominance of the term biomarker over bioindicator and its dual use for exposure estimation as well as effect prediction, we follow the already suggested use of more precisely defined phrases such as 'biomarkers for exposure' or 'biomarkers for dose estimation' and 'biomarkers for effect prediction' or 'biomarkers of ARS prediction', and use the term in both directions in this review.

Retrospective biodosimetry or prediction of acute health effects: challenges and promises
Radiation exposure estimates represent a useful measure or surrogate for the medical management (diagnosis) of ARS, but converting dose into an easy-to-follow clinical application remains challenging [16,31–33]. For instance, providing a clinician with a dose estimate for a radiation-exposed individual is about as informative as providing the number of COVID-19 virus particles a patient absorbed. However, predicting the clinical follow-up of a radiation-exposed individual by providing a METREPOL ARS category is as informative as predicting the severity of a COVID-19 infection, so that clinicians know in advance about the development of a severe acute respiratory syndrome and the required clinical resources, such as an intensive care unit or an outpatient facility. Acute health effects after irradiation, summarized as the ARS, are mainly caused by sudden and massive cell death and corresponding functional organ deficits and organ system failure [34]. Radiation quality, dose rate, fractionation, partial-/whole-body exposure, homogeneity of exposure as well as external and internal radionuclide contamination are known to cause large differences in cell survival (summarized in [16], figure 2). Providing an absorbed dose only as a surrogate for acute health effects is insufficient, since many other exposure characteristics that are urgently required are not considered, thus challenging this approach. Even a single whole-body exposure in the 1–5 Gy dose band reveals difficulties in the estimation of acute health effects (see the article by Blakely et al in this issue and [16,31–33]).
Cells and tissues respond differently to the same radiation dose. In radiobiology, several important biological processes have been identified with a strong impact on cell survival or cell death, finally leading to the configuration of radiotherapy as it is used nowadays [34–36]. Hence, multiple biological processes such as differences in cell type and inter-individual radiosensitivity, cell-cycle phases and oxygen conditions affect and challenge the prediction of acute health effects (summarized in [16], figure 2).
Figure 2. Radiation exposure as well as biological processes consist of certain characteristics shown as bullet points. Biomarkers of acute health effects might integrate these characteristics, leading to a simplified prediction of later developing acute health effects.
Radiation-induced cytological or molecular biological changes lying downstream of the radiation exposure (occurring in genes, RNA species, proteins or metabolites) but preceding acute health effects (and causally or temporally related to them) might enable the integration of these exposure characteristics and biological processes. They are called biomarkers for acute health effect prediction herein (summarized in [16], figure 2).
How does the integration of radiation exposure characteristics and biological processes via biomarkers for the prediction of acute health effects work? For instance, biomarkers such as the reduction in lymphocyte counts and a transient increase and later decrease in granulocyte counts observed within the first 1–3 days after radiation exposure allow H-ARS severity prediction [37,38]. The altered peripheral blood cell counts (BCC) are caused by cell death in irradiated radiosensitive peripheral blood cells and bone marrow stem cells. If partial- rather than whole-body irradiation occurred, with either neutrons or gamma-rays, bone marrow stem cell death would be reduced. The same would be observed in the fraction of radiosensitive irradiated peripheral blood cells (e.g. lymphocytes) and would be directly proportional to the irradiated body area and the duration and homogeneity of the exposure. The number of surviving peripheral blood cells and bone marrow stem cells would directly affect peripheral BCC and the subsequent events of H-ARS severity, thus integrating differences in radiation exposure. Likewise, biological factors such as cell type or individual radiosensitivity would be covered (integrated, figure 2). This has recently been shown using gene expression (GE) analysis in the peripheral blood of animals ([15,39], reviewed here in section 4) and using proteomic and haematology biomarkers [40].
Interestingly, biomarkers of acute health effect prediction lying on the causal pathway bear the potential for therapeutic intervention (molecular targeting) due to their causal relationship to the later developing health effects [16,17].
Biomarkers of acute health effect prediction or radiation damage are preferably examined in plasma, serum or whole peripheral blood because of the easy access (see sections below). These biomarkers can originate from different irradiated organs or organ systems (e.g. the gastrointestinal tract), with molecules released and detected in the peripheral blood or blood components (e.g. citrulline, see sections below). This can occur e.g. via extracellular vesicles (EVs) comprising mRNA or miRNAs and shed from normal as well as malignantly transformed cells. EVs carry material specific to their parental cells and represent 'a snapshot of the cell status at the moment of release' [41]. Hence, the peripheral blood mirrors, to some extent, the health status of the whole body and represents a communication medium for a concerted response of the organism as a whole to radiation damage and other local or whole-body exposures. Hence, molecular markers can originate from targets other than the cytogenetic markers such as dicentric chromosomes or micronuclei examined in lymphocytes only.

Useful biomarkers-characteristics
Biomarkers for dose estimation as well as for ARS prediction are of clinical significance if present during early diagnosis or triage of ARS patients. This will probably occur within several days after a radiological or nuclear event, at a time when exposed individuals are in the prodromal or latency phase but not the ARS manifestation phase. Early diagnosis after radiation exposure improves the prognosis of later developing ARS considerably, in contrast to other exposures, e.g. nerve agents, which can cause early death within minutes due to loss of body control over respiratory and other muscles, leading to e.g. asphyxiation or cardiac arrest. Biomarkers should identify (1) individuals in need of early treatment and immediate hospitalization to improve their prognosis, (2) individuals with lower radiation exposures not requiring immediate care but surveillance due to an increased risk of late health effects, and (3) individuals believing themselves to be exposed but who are not, in order to save limited clinical resources. These considerations narrow the required biomarker characteristics as follows and are in line with already cited work [33,42–44]: Baseline; proteins and genes should be detectable in all samples. Inter- and intra-individual variance in unexposed samples has to be considered.
Radiation-induced fold-changes; radiation exposures should lead to fold-changes sufficient to deal with inter- and intra-individual baseline variance (for details see section 4.2) and the generation of calibration curves (highlighted for several genes and gene combinations in sections 4.1.1–4.1.8).
Early response; biomarker results must be present within hours or about 1–3 days after irradiation to facilitate hospitalization and medical treatment decisions.
Signal persistency; it would be preferable if biological changes persisted over hours and days. This would eliminate the dimension of time and simplify ARS diagnosis.
Validation status; examinations of biomarkers require a robust validation procedure, including the use of different methodologies, independent validation in different samples, inter-species comparisons, supporting and subsequent in vitro experiments, as well as examinations of human (patient) samples while considering the impaired health status.
Confounders and specificity; there is an ongoing debate on the impact of confounders such as diseases on certain protein or GE markers. The specificity of GE for radiation exposure is another important prerequisite; both topics are discussed in section 6.4.
Test characteristics; different research groups have already developed biomarker-based tools for dose estimation and ARS prediction. Corresponding quality criteria such as sensitivity, specificity, positive and negative predictive values and receiver operating characteristic (ROC) curves will be reported.
Further aspects such as the ease with which the markers can be assessed (blood or saliva samples) and the field deployability and high-throughput potential of early markers are addressed separately in section 7. The expense of gene and protein expression analysis in a radiological scenario represents another feature; it is platform dependent and difficult to assess. However, experience regarding the use of qRT-PCR and ELISA techniques in the context of the coronavirus pandemic provides an order of magnitude.
Biomarkers will be reviewed under these aspects in the following sections.

GE biomarkers for retrospective biodosimetry
Combining the terms 'gene expression and radiation and dosimetry' generated 2523 entries in PubMed (03/2021), of which 122 were selected for further examination, finally leaving 72 publications (based on the abstract) for detailed analysis. Among them, 45 publications were chosen for extracting exposure and methodological details to develop table 1, which aggregates the detailed description of these publications shown in supplemental table 1 (available online at stacks.iop.org/JRP/42/010503/mmedia). These citations are provided in a separate chapter of the references for the reader's convenience. This review does not cover all relevant articles, but it certainly provides an overview of and highlights important features of this topic.
Besides this gnostic approach, microarrays and bioinformatic analysis allowed the identification of unknown early response genes (agnostic approach), and several teams identified gene signatures comprising dozens of genes for discriminating e.g. exposed from unexposed groups or different exposure levels [53,55–59]. Other teams reduced the number of genes to the minimum required for dose estimation (below 10) [60–63]. With its invention, qRT-PCR became the gold standard for robust and sensitive quantitative GE analysis.
Table 1. Summary of the most frequently examined candidate genes for retrospective dosimetry and H-ARS (haematological acute radiation syndrome) prediction. The table is ordered per gene, and each row represents another radiation exposure quality (e.g. publications using x-ray, γ-ray, neutron or α-exposures). The last column depicts the corresponding number of publications, followed by references in parentheses and depicted in a separate chapter of the manuscript references. Abbreviations: FC, fold change relative to an unexposed reference; PPV, positive predictive value; NPV, negative predictive value; ROC, receiver operating characteristic curve. The references here can be found in a supplementary reference list.
The first published studies focused on GE changes at 24 h after irradiation, but this time window has widened, covering 2 h to several days (supplemental table 1). Radiation-induced GE changes represent a biological process requiring a certain biological response time (several hours, supplemental table 1), which was recently recognized in the context of the RENEB 2019 exercise [76]. Moreover, it is a temperature-sensitive process, and temperatures above or below 37 °C result in several-fold decreased GE of widely used biomarkers for dose estimation, which has to be considered in particular when performing field exercises [76].
Examinations of the association of GE with dose represent the topic of the majority of molecular radiobiological publications (see supplemental table 1, table 1). Dose rate dependent GE changes are less examined [78,79], and only a few publications have examined the impact of neutron (nuclear event [80–83]) or proton (mission to Mars) exposures on mRNA GE in peripheral blood cells [84–86].
Radiation-induced GE saturates at a certain upper dose, which appears to be about 5 Gy after x-ray or γ-exposure (see supplemental table 1, table 1). For the highest mixed x-ray/neutron exposure at 3 Gy, DDB2 GE changes of about seven-fold, comparable to 3 Gy X-irradiation, are reported [81]. These doses are sufficient for the detection of severe haematological ARS (H-ARS) requiring hospitalization, as discussed by Blakely et al in this issue. Several genes appear particularly sensitive to low radiation exposures; e.g. FDXR discriminated exposures of about 0.01–0.03 Gy (representative of a computed tomography) from unexposed individuals [87,88].
GE measurements, like all other biodosimetry assays, require a reference to be interpreted. Knowing the baseline expression and inter-individual variance of a certain gene provides a measure discriminating unexposed from exposed individuals without the need for individual pre-exposure controls, as has already been shown by different groups [55,74,89].
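The reasoning above can be sketched in a few lines. The following is a minimal, purely illustrative example (the baseline values and the two-standard-deviation cut-off are hypothetical, not taken from any cited study) of classifying a new measurement against a baseline distribution from unexposed donors:

```python
import statistics

# Hypothetical baseline expression values (log2-normalized copy numbers)
# of a radiation-responsive gene in unexposed donors -- illustrative only.
baseline = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
# Flag values beyond ~2 SD of the unexposed range (assumed cut-off).
threshold = mean + 2 * sd

def classify(log2_expression):
    """Return 'exposed' if the value exceeds the baseline-derived threshold."""
    return "exposed" if log2_expression > threshold else "unexposed"

print(classify(5.0))  # within baseline variance -> unexposed
print(classify(8.4))  # several-fold up-regulation -> exposed
```

In practice the baseline distribution, the appropriate cut-off and the direction of regulation are gene-specific and must be established per assay, as the cited studies emphasize.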
Suitable calibration curves are required in order to reflect the exposure characteristics and time after exposure of a certain RN event. Strategies to overcome the exposure dependency and the time dependency are discussed in sections 2, 4.2 and 6 of this review.
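As an illustration of how such a calibration curve might be used, the sketch below fits a straight line to hypothetical fold-change data for one gene at one fixed time point and inverts it to obtain a dose estimate. All numbers are invented for demonstration; real calibrations are gene-, time- and exposure-quality-specific:

```python
# Hypothetical calibration data: dose (Gy) vs measured log2 fold-change
# of a radiation-responsive gene at a fixed time point after exposure.
doses = [0.0, 0.5, 1.0, 2.0, 4.0]
log2_fc = [0.0, 0.9, 1.8, 3.5, 7.1]

# Ordinary least-squares fit of log2 fold-change against dose.
n = len(doses)
mx = sum(doses) / n
my = sum(log2_fc) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(doses, log2_fc))
         / sum((x - mx) ** 2 for x in doses))
intercept = my - slope * mx

def estimate_dose(fc):
    """Invert the calibration curve: dose estimate from a measured log2 fold-change."""
    return (fc - intercept) / slope

print(round(estimate_dose(3.5), 2))  # close to 2 Gy for this toy calibration
```

Note that such a linear inversion is only valid within the linear-dynamic range of the gene, i.e. below the saturation dose discussed above.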
International inter-assay studies allowed the comparison of (1) the mean absolute difference of estimated doses relative to the true doses, (2) the number of dose estimates outside the 0.5 Gy interval recommended for triage dosimetry, as well as (3) the examination of doses merged into binary dose categories of clinical relevance, for which the accuracy, sensitivity and specificity of the assays were compared. Regarding all three parameters, GE revealed a performance comparable to the micronucleus assay but below the dicentric assay, which is still considered to represent the gold standard in biodosimetry [90]. However, GE measurements have early and high-throughput characteristics, making them particularly attractive for RN scenarios involving hundreds or thousands of individuals. For instance, dose estimates can be provided from 1000 samples within 30 h [89] or 600 samples within 24 h [91].
Ex vivo whole blood experiments appear to reflect the in vivo situation for certain genes (e.g. FDXR, DDB2, WNT3 or POU2AF1), because irradiated T- and B-lymphocytes represent the origin of the measured radiation-induced GE changes [92]. Other radiation-responsive genes examined in the peripheral blood after in vivo irradiation (e.g. CCR7, ARG2, CD177) are not seen ex vivo, indicating that radiation-responsive targets other than the whole blood cells are involved. Hence, using ex vivo experiments requires in vivo models to estimate their significance.
Several groups have continuously examined almost a dozen genes over two decades. Details on this research are presented in supplemental table 1 and aggregated in table 1. Below, each of the genes is presented following the biomarker characteristics of section 3.

FDXR.
Represents one of the most frequently recognized genes used for retrospective biodosimetry. Most of the 27 publications examined employed X-irradiation (n = 21; 0–8 Gy), followed by γ-irradiation (n = 7; 0–8 Gy), neutron (n = 2, 0.1–3 Gy) or α-exposure (0–1.5 Gy). The examined post-exposure time frame covered up to 72 h, and the earliest biological responses were observed at 2 h after irradiation. FDXR could be detected in all samples due to high baseline copy numbers (where reported). Radiation-induced FDXR fold-changes (relative to unexposed) increased 1–3-fold after low (0.1–1 Gy) and 3–50-fold after >1–8 Gy X-irradiation in a time-dependent pattern, with copy numbers increasing up to 24 h and decreased copy numbers (3–30-fold) detected at the latest 72 h after irradiation. Five publications consistently report a 1.3–2-fold increase in FDXR GE 24 h or earlier after low radiation exposures covering 5–100 mGy, using different techniques such as NGS and qRT-PCR (supplemental table 1). This underlines its use for the identification of low or unexposed individuals. Fold-differences after neutron and α-exposures appeared about two-fold higher compared with x-ray exposure. Mostly, peripheral blood was irradiated ex vivo, and in about one-third of the publications examinations were performed in cancer patients suffering e.g. from leukaemia, prostate or breast cancer, or receiving a computed tomography (CT) scan. Two animal studies in baboons and rats are reported [54,66]. A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in the majority of studies, but an independent validation using different samples was reported for six studies. The major source of the FDXR response in the peripheral blood is T-lymphocytes [92]. A minority of studies examined confounders, and LPS (n = 3) was observed to decrease FDXR copy numbers in particular during the first 24 h after X-irradiation [65,93,94], which was also observed at temperatures below or above 37 °C [76].
About half a dozen studies report significant linear associations of FDXR with dose covering 0–4 Gy and the ability to discriminate categories of unexposed, low- and highly-exposed individuals based on FDXR GE values [74,77].

DDB2.
Represents another of the most frequently used genes for retrospective biodosimetry. Most of the 26 publications examined employed X-irradiation (n = 21; 0–4 Gy), with fewer using γ-irradiation (n = 6; 0–8 Gy), neutron (n = 4, 0–3 Gy) or α-exposure (n = 1, 0–1.5 Gy). The examined post-exposure time frame covered up to 72 h for x-rays and 48 h for γ-rays, but neutron exposure was examined at 24 h (as was α-exposure) and 168 h only. The earliest biological responses were observed at 3–4 h after irradiation, and DDB2 could be detected in all samples due to high baseline copy numbers (where reported). Radiation-induced DDB2 fold-changes (relative to unexposed) increased 1–7-fold after low (0.005–0.5 Gy) and 4–41-fold after 1–8 Gy X-irradiation in a time-dependent pattern, with copy numbers increasing up to 24 h and decreased copy numbers (3–19-fold) detected at the latest 72 h after irradiation. Two publications report a 1.5–3-fold increase in DDB2 GE 24 h after low radiation exposures covering 20–100 mGy using qRT-PCR (supplemental table 1). This underlines its use for the identification of low or unexposed individuals. Fold-differences after neutron and α-exposures in most cases resembled those after x-ray exposure. Mostly, peripheral blood was irradiated ex vivo, and three publications examined GE changes in X-irradiated cancer patients suffering e.g. from leukaemia, prostate or breast cancer, or after a CT scan. Five animal studies in baboons and rats are reported (supplemental table 1). A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in the majority of studies, but an independent validation using different samples was reported for three studies. The major source of the DDB2 response in the peripheral blood is T-lymphocytes [92]. A minority of studies examined confounders: LPS (n = 1) was reported to have no impact on DDB2 expression, but decreased DDB2 GE was observed at temperatures below or above 37 °C [76].
Several studies report significant linear associations of DDB2 with dose covering 0–4 Gy and the ability to discriminate categories of unexposed, low- and highly-exposed individuals based on DDB2 GE values [77].

CDKN1A (P21).
Altogether, 28 publications employing X-irradiation (n = 20; 0–10 Gy), γ-irradiation (n = 8; 0–12 Gy), neutron exposure (n = 4, 0.1–3 Gy) or α-irradiation (n = 1, 0.5–1.5 Gy) report increased GE values over up to 72 h, and in one study even over 7 days, after irradiation in a dose- and time-dependent way, followed by decreased values. Increases of 1.3- and 1.8-fold were observed after 50 mGy X-irradiation in two studies, indicating applicability for the identification of low or unexposed individuals. Several studies indicate a saturation of CDKN1A GE (5–6-fold increase over unexposed) occurring around 5 Gy X- or γ-irradiation. X-, γ- and α-irradiation appear comparably effective regarding radiation-induced CDKN1A GE, but neutron irradiation appeared up to two-fold more effective. The earliest biological responses were observed at 2 h after X- or γ-irradiation. CDKN1A could be detected in all samples due to high baseline copy numbers (own experience). Mostly (n = 19), human peripheral blood was irradiated ex vivo, and several mouse studies (n = 7) and examinations in patients (n = 4) were reported as well. A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in the majority of studies, but an independent validation using different samples was reported once. One study reported LPS-induced decreases in CDKN1A GE and gender-dependent differences in CDKN1A GE. Significant linear associations with dose were reported in three studies.

GADD45.
Shows features very similar to CDKN1A. Altogether, 21 publications employing X-irradiation (n = 15; 0–10 Gy), γ-irradiation (n = 6; 0–8 Gy), neutron exposure (n = 2, 0.1–4 Gy) or α-irradiation (n = 1, 0.5–1.5 Gy) report increased GE values over up to 48 h after irradiation in a dose- and time-dependent way, followed by decreased values. Increases of 1.3- and 1.8-fold were observed after 90–100 mGy X-irradiation in three studies, indicating applicability for the identification of low or unexposed individuals. Several studies indicate a saturation of GADD45 GE (5–7-fold increase over unexposed) occurring around 4–5 Gy X- or γ-irradiation. X-, γ- and α-irradiation appear comparably effective regarding radiation-induced GADD45 GE, but neutron irradiation appeared up to two-fold more effective for this gene as well. The earliest biological responses were observed at 2 h and 4 h after X- or γ-irradiation. GADD45 could be detected in all samples due to high baseline copy numbers (own experience). Mostly (n = 15), human peripheral blood or subpopulations thereof were irradiated ex vivo, and several mouse studies (n = 3) and examinations in patients (n = 3) were reported as well. A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in all except one study. One study indicated no GADD45 GE changes after LPS exposure. Significant linear associations with dose were reported in four studies.

BAX.
Altogether, 14 publications employing X-irradiation (n = 9; 0–10 Gy), γ-irradiation (n = 5; 0–12 Gy), neutron exposure (n = 2, 0.1–4 Gy) or α-irradiation (n = 1, 0.5–1.5 Gy) report increased GE values over up to 48 h after irradiation in a dose- and time-dependent way, followed by decreased values. A 2–4-fold increase was observed after 0.5 Gy X-irradiation in six studies. Several studies indicate a saturation of BAX GE (4–5-fold increase over unexposed) occurring around 3–4 Gy X-, γ-, α- or neutron irradiation, indicating a small linear-dynamic range. X-, γ-, α- and neutron irradiation appear comparably effective regarding radiation-induced BAX GE. The earliest biological responses were observed at 4 h and 3 h after X- or γ-irradiation, and BAX GE appears almost constant over 24–48 h and declines afterwards. Mostly (n = 12), human peripheral blood or subpopulations thereof were irradiated ex vivo, and one study each in mice and patients was reported. A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in all studies.

PCNA.
Altogether, 18 publications, with about half of them employing X-irradiation (n = 12; 0–10 Gy) and the other studies using γ-irradiation (n = 7; 0–8 Gy), neutron exposure (n = 3, 0.1–3 Gy) or α-irradiation (n = 1, 0.5–1.5 Gy), report increased GE values over up to 24 h or less in a dose- and time-dependent way, followed by decreased values. A 1.3–2-fold increase was observed after 0.1–0.5 Gy X-irradiation in nine studies, indicating applicability for the identification of low or unexposed individuals. Several studies indicate a saturation of PCNA GE (4–8-fold increase over unexposed) occurring around 4–6 Gy X-, γ- or neutron irradiation. X-, γ- and α-irradiation appear comparably effective regarding radiation-induced PCNA GE, but neutron irradiation appeared up to two-fold more effective. The earliest biological responses were observed at 4 h and 2 h after X- or γ-irradiation, respectively. PCNA could be detected in all samples due to high baseline copy numbers (where reported). Mostly (n = 11), human peripheral blood was irradiated ex vivo, and four animal studies (mice and baboons) and three examinations in patients were reported as well. A methodological validation on the same samples using microarrays and qRT-PCR combined was performed in the majority of studies, but an independent validation using different samples was reported once. LPS exposure did not alter PCNA GE, but gender-dependent differences in PCNA GE were reported in one study. A linear association with dose was reported in one study.

WNT3 and POU2AF1.
Were recognized as radiation-responsive genes in baboons in 2018 employing a whole-genome screening (agnostic approach using microarrays) and were methodologically (qRT-PCR) and independently validated in γ-irradiated baboons (supplemental table 1, table 1, four and seven publications). Further validation of WNT3 and POU2AF1 was performed using human and baboon ex vivo whole blood and whole blood from human patients suffering from cancer and receiving CT scans. POU2AF1 was validated in three further studies by other teams using ex vivo neutron-exposed whole blood and γ-exposed mice. WNT3 and POU2AF1 become down-regulated after high radiation exposures (2–10 Gy) and appear mostly unaltered after lower exposures. Examinations over 72 h indicate the earliest biological responses at 8 h and 4 h after irradiation, respectively, and a persistent down-regulation of about 5–20-fold and 5–10-fold, respectively, over 72 h, in some studies even covering 7 days. WNT3 has a baseline below FDXR and DDB2, and because of the radiation-induced down-regulation it can be detected in only about 80% of samples. That differs from POU2AF1, which is detectable in all samples (where reported). The major source of the WNT3 and POU2AF1 response in the peripheral blood is B-lymphocytes [92]. Confounders have not been examined so far. WNT3 test characteristics used for retrospective biodosimetry are less studied, but the association of these genes with H-ARS severity categories has been carefully examined in several studies, as outlined in section 4.2.

Gene signatures and combinations.
Many studies (n = 10 shown herein) comprising hundreds, dozens or fewer genes predominantly examined the capability of discriminating unexposed and exposed samples categorically (e.g. 0 vs 1.4 Gy), employing microarrays or, for fewer genes, qRT-PCR. Exposures included X-irradiation and γ-irradiation with dose bands covering 0–6.4 Gy and 0–10 Gy, respectively. The peripheral blood used originated either from healthy human donors (n = 5, irradiated ex vivo), mice/rats (n = 3) or irradiated patients (n = 2). Examinations were mainly performed in a 6–48 h time frame. Specific signatures were built for each dose comparison and time point. Promising test statistics with PPV, NPV, sensitivity, specificity and ROC area around 0.8–1.0 were reported in most of the studies. However, the gene combinations and employed bioinformatic models varied depending on the time after exposure and the categorical exposure comparison, although the signatures often included the genes described above, underlining their applicability for biodosimetry purposes.
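For readers less familiar with these test statistics, the following sketch shows how sensitivity, specificity, PPV and NPV are derived from a confusion matrix. The labels are hypothetical (1 = exposed, 0 = unexposed) and serve only to illustrate the arithmetic, not any reported classifier:

```python
# Hypothetical true exposure labels and classifier predictions
# (1 = exposed, 0 = unexposed) for ten samples.
true_labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
predictions = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

# Confusion-matrix cells.
tp = sum(t == 1 and p == 1 for t, p in zip(true_labels, predictions))
tn = sum(t == 0 and p == 0 for t, p in zip(true_labels, predictions))
fp = sum(t == 0 and p == 1 for t, p in zip(true_labels, predictions))
fn = sum(t == 1 and p == 0 for t, p in zip(true_labels, predictions))

sensitivity = tp / (tp + fn)  # fraction of exposed correctly identified
specificity = tn / (tn + fp)  # fraction of unexposed correctly identified
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(sensitivity, specificity, ppv, npv)
```

An ROC curve generalizes this by sweeping the classification threshold and plotting sensitivity against 1 − specificity; the reported ROC areas of 0.8–1.0 summarize that curve in a single number.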

GE biomarkers for ARS prediction
Examinations on the association of GE with health effects other than ARS are rather common outside the field of radiobiology and are used for clinical purposes (see above), including e.g. associations with atherosclerosis [95], cancer [96,97] and other clinical endpoints such as the discrimination of sporadic from radiation-associated thyroid or mammary cancer [98,99] or the identification of patients prone to developing late radiation toxicity [100,101]. Prediction of radiation toxicity (e.g. normal tissue response, lung fibrosis) due to radiotherapy using GE changes is an approach examined in the context of radiation oncology [102-106]. Notably, development of ARS is not observed even after whole-body irradiation treatment of leukaemia patients, leaving ARS largely unobserved from the clinical point of view.
The radiobiological community has a strong interface with biology and physics, but less so with the field of medical management of life-threatening acute radiation health effects, which might contribute to the research focus on GE associations with radiation exposure (see above). GE biomarkers for radiation injury, such as death/survival or ARS prediction, are less examined, with some exceptions. Recently, two groups correlated GE with radiation-induced organ damage and survival in two large animal models [107,108].
Discriminating ARS severity based on the clinical follow-up requires medical expertise. Diagnostic guidance and treatment protocols for ARS have been established and updated [109]. The Medical Treatment Protocols for Radiation Accident Victims (METREPOL) document serves as a resource for physicians. Hematologic changes, such as the development of severe immune deficiency (due to lymphocytopenia and granulocytopenia) or bleeding due to thrombocytopenia, represent one of the challenges in appropriate clinical management of H-ARS. METREPOL categorizes H-ARS into five severity degrees based on BCC changes in the weeks following exposure: no H-ARS (H0, referred to as baseline levels in METREPOL), low (H1), medium (H2), severe (H3) and fatal (H4) H-ARS. These severity degrees are associated with treatment decisions as outlined in METREPOL. Hence, predicting H-ARS severity will ultimately provide important inputs for clinical diagnosis and treatment decisions. Since H-ARS develops with a delay over time, for prediction purposes H-ARS should be categorized based on the entire clinical follow-up of BCC, spanning about 60 days. The alternative concept of correlating biomarkers with BCC changes measured at the same early time points after irradiation to build H-ARS categories has the disadvantage of not being predictive: it merely reflects the current clinical status, which at an early ARS stage (before manifestation of the disease) renders this kind of research uninformative.
Furthermore, several authors use dose bands as a surrogate for ARS severity (e.g. ARS II = 2-4 Gy), thus representing not a clinical but an exposure-related categorization [110,111] with associated limitations (see section 2).
H-ARS severity degrees used herein represent clinically relevant METREPOL-based categories. Using a baboon model combined with in vitro experiments and measurements on healthy donors and radiotherapy patients led to prediction of H-ARS with the following features (for details see supplemental table 1 and table 1): a combination of four genes was reported for early and high-throughput diagnosis of H-ARS severity [31]. This GE signature identifies (1) unexposed individuals (H-ARS 0), to conserve clinical resources for those requiring them, (2) low-exposed individuals (H-ARS 1), requiring surveillance (late health effects) but no hospitalization or early treatment, and (3) highly exposed individuals who will develop acute health effects of H-ARS 2-4 severity; the latter are in need of early intensive care. Merging the existing H-ARS categories into these three categories appears appropriate to address urgent clinical questions such as prioritizing hospitalization while considering restricted medical resources. Pairwise redundant genes (FDXR/DDB2 and WNT3/POU2AF1) showing the same association with H-ARS severity degrees were employed in order to increase the robustness of the gene combination (e.g. to address difficulties with low baselines, figure 3).
For identification of H-ARS 0 (unexposed individuals), none of the four radiation-induced GE changes relative to unexposed exceeds a fold change (FC) of 2; this is the FC considered to adjust for methodological variance.
For identification of H-ARS 1, a more than two-fold up-regulation of FDXR and DDB2 and no down-regulation of WNT3 and POU2AF1 beyond the methodological variance (two-fold) are expected.
For identification of H-ARS 2-4, a more than two-fold up-regulation of FDXR and DDB2 and a pronounced down-regulation of WNT3 and POU2AF1 are expected. Based on currently available data, a 10-fold down-regulation of WNT3 or POU2AF1 predicts H-ARS 2-4 with a positive predictive value (PPV) of around 100% [31,112]. A more than 2- but less than 10-fold down-regulation of WNT3/POU2AF1 reduces the PPV from 100% to about 90% [112].
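The decision rules above can be sketched as a small function (illustrative only; thresholds as reported in [31,112], with fold changes expressed relative to unexposed controls, so values below 1 denote down-regulation):

```python
def classify_hars(fc_fdxr, fc_ddb2, fc_wnt3, fc_pou2af1):
    """Sketch of the four-gene H-ARS rule described in the text.

    Fold changes (FC) are relative to unexposed controls; FC > 1 means
    up-regulation, FC < 1 down-regulation. FC 2 is the methodological
    variance threshold; a >= 10-fold down-regulation of WNT3/POU2AF1
    predicts H-ARS 2-4 with a PPV around 100% [31,112].
    """
    up = fc_fdxr > 2 and fc_ddb2 > 2          # FDXR/DDB2 clearly up-regulated
    down = min(fc_wnt3, fc_pou2af1)           # strongest down-regulation of the pair
    if not up and all(0.5 <= fc <= 2 for fc in (fc_fdxr, fc_ddb2, fc_wnt3, fc_pou2af1)):
        return "H-ARS 0"                      # all genes within two-fold variance
    if up and down > 0.5:
        return "H-ARS 1"                      # no WNT3/POU2AF1 down-regulation
    if up and down <= 0.5:
        return "H-ARS 2-4"                    # pronounced down-regulation
    return "indeterminate"                    # pattern not covered by the rules

# e.g. strong FDXR/DDB2 induction with ~10-fold WNT3 suppression
print(classify_hars(8.0, 6.5, 0.1, 0.15))  # H-ARS 2-4
```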

Protein expression biomarkers for retrospective biodosimetry
It is well known that radiation exposure causes changes in intracellular proteins and, as a result of radiation-induced injury to tissues, changes in blood and urine proteins [113]. Among radiation-induced intracellular proteins, γ-H2AX has been advocated for use in biodosimetry [114]. Alternatively, Roy and colleagues proposed the use of organ-specific biomarkers as a biochemical approach to assess radiation dose and injury [115]. Table 2 illustrates a panel of organ-specific radiation biomarkers that have been proposed for use as biochemical dosimeters. Discovery and validation efforts typically involve initially evaluating blood samples from studies using rodent radiation models, followed by studies using blood/plasma samples derived from nonhuman primate (NHP) radiation studies and human radiation-therapy patients. In selected cases these candidate organ-specific proteomic biomarkers have been used for dose and injury assessment in radiation accidents.

Hematopoietic system biomarkers.
Flt-3 ligand. Bertho and colleagues led the effort to use FMS-like tyrosine kinase 3 ligand (Flt-3 ligand) to assess bone marrow aplasia and radiation dose [147]. In an NHP radiation model, Flt-3 ligand changes occur as early as 2 days after exposure and dose dependency is observed by 5 days after irradiation [147]. Validation studies supporting the use of Flt-3 ligand for radiation dose assessment have been documented in rodent [136,137,139-141,148], NHP [146,147], baboon [149] and human radiation-therapy studies.
Table 2. Select list of candidate organ-specific radiation blood or plasma biomarkers: pathways, photon acute dose windows for radiation diagnosis and ARS sub-syndromes useful for diagnostic injury assessment.

Cutaneous system biomarkers.
Guipaud and colleagues have performed time-course radiation studies using mice and identified a panel of candidate serum proteomic changes of potential diagnostic utility [153]. Exposure to ionizing radiation results in the production of cytokines by the skin. In a review of the literature, Muller and Meineke report that the major cytokines produced in the response of skin cells to ionizing radiation include IL (interleukin)-1, IL-6, tumour necrosis factor (TNF)-α, transforming growth factor (TGF)-β, and the chemokines IL-8 and eotaxin [154].

DNA damage and repair pathway biomarkers.
Exposure to ionizing radiation causes DNA damage, and cells respond in a complex manner resulting in repair of DNA damage and delays in cell-cycle progression. Several proteins associated with the DNA damage response have been identified as candidate proteomic biomarkers in the exhaustive review by Marchetti and colleagues [113]. For example, ionizing radiation induces expression of the growth arrest and DNA damage (GADD45) gene, resulting in increases of the corresponding protein [155,156]. Ossetrova and Blakely reported a radiation-induced, dose-dependent increase in GADD45 in a mouse radiation study [135] (table 2).

Use of multiple proteomic biomarkers for dose assessment.
Radiation dose assessment with single biomarkers can show reasonable accuracy; however, the use of multiple proteomic biomarkers shows expanded diagnostic utility in the early phase (1-7 days) following a radiation exposure. While extensive reports exist of multiple protein changes after radiation exposure, Ossetrova and Blakely were the first to demonstrate the diagnostic utility of using multiple blood proteomic biomarkers for dose assessment, in a mouse radiation model system 1 day after exposure [135]. Table 3 illustrates the use of multiple blood protein biomarkers for radiation dose assessment in studies using rodent [135-137, 139, 141, 142, 157], Rhesus macaque [119-121, 144, 145, 158] and baboon [149] model systems.
The panel of three radiation-responsive biomarkers (i.e. AMY1, MCP1, Flt-3 ligand) used by Balog and colleagues has further been demonstrated in blood plasma samples derived from human radiation-therapy patients. Typically, the radiation-induced dynamic range of organ-based biomarkers is larger than that of biomarkers from the DNA damage and repair pathway. Selecting biomarkers from discrete tissues/pathways leads to a more robust panel for radiation dose assessment applications. Parallel studies are underway to identify and validate the induction of candidate protein biomarkers in human lymphocytes [159,160].

Protein expression biomarkers for ARS prediction
The ability to extend proteomic biomarker findings from animal radiation studies to predict ARS severity using the METREPOL system [161], developed for use with humans, necessitates the establishment of a similar ARS severity scoring system in the animal models. Animal-based ARS severity scoring systems have now been established in rodent [148], Rhesus macaque [162] and baboon [39] model systems.
Early-phase blood plasma proteomic biomarkers have been established for the assessment of ARS severity in mouse [148] and baboon [40] model systems. In the baboon radiation model three time-windows were established for the H-ARS severity predictive algorithms: model 1 (0-2 days) includes CRP, IL-13, and procalcitonin biomarkers; model 2 (2-7 days) includes CD27 (T and B cell surface biomarkers), Flt-3 ligand, SAA, and IL-6; and model 3 (7-28 days) includes CD27, SAA, EPO, and CD177 (neutrophil cell surface biomarkers) [40]. The use of proteomic-based ARS severity predictive algorithms can supplement clinical signs and symptoms, other biological dosimetry, and physical dosimetry endpoints to assess H-ARS risk severity.
Using data from an analysis of dosimetric and clinical data from a group of ARS patients exposed to γ- and neutron radiation or γ-rays alone, Azizova and colleagues showed an application of the METREPOL-based severity scoring system to provide early-phase diagnostic assessment [163]. The constellation of early-phase prodromal clinical signs and symptoms and early changes in blood cell counts (i.e. neutrophils, lymphocytes) can provide ARS severity assessment. In 2016, Blakely and colleagues recommended that the METREPOL scoring system be supplemented using plasma proteomic biomarkers [164]; see the updated figure 4.
Figure 4. Illustration of a proposed modification of the acute radiation syndrome severity scoring system, based on the medical treatment protocols (METREPOL), to incorporate a biochemical biodosimetry concept in the assessment of organ-based radiation injury (Friesecke et al [109]; Milner et al [164]).

Single genes or gene combinations used for dose estimation
Many laboratories worldwide have established their own gene sets comprising 1-4 genes for dose estimation using qRT-PCR [54,64-73]. Dose estimation requires a calibration curve comprising irradiated samples examined at the same time after irradiation as the potentially irradiated samples of radiation victims. Due to the high degree of automation, this extra burden can be handled sufficiently; hence, dose estimates can be reported within hours after sample delivery, as shown in several large-scale inter-laboratory comparison exercises [74-77]. Microarrays can also be used for this task. After selecting the blinded sample with the lowest GE value as the non-irradiated one, GE values of a seven-gene as well as a two-gene combination and their internal calibration curves were re-adjusted, and dose estimates on all blinded samples were performed [76]. Choosing the appropriate calibration curve (if available) in the absence of details regarding the radiation exposure reflects a difficulty of this approach. However, even in the absence of appropriate calibration curves, samples can be categorized based on their normalized GE within the same time interval. Baseline values (from healthy donor samples) would identify unexposed individuals; from there, samples could be put into categories considering fold changes (relative to unexposed) which, based on previous experience, reflect either high radiation exposure demanding hospitalization (acute severe health effects are expected) or lower radiation exposure without the need for hospitalization and early treatment, but requiring surveillance for late health effects (discussed in [31]).
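The calibration-curve step can be illustrated with a minimal sketch: fit the relationship between dose and log2 fold change from calibration samples measured at the matching post-exposure time point, then invert it for an unknown sample. All numbers are hypothetical; real calibration curves are gene-, time- and platform-specific and may be non-linear.

```python
import math

def fit_calibration(doses, fold_changes):
    """Least-squares fit of log2(FC) = a + b * dose from calibration samples
    irradiated and measured at the same time point as the unknown samples."""
    ys = [math.log2(fc) for fc in fold_changes]
    n = len(doses)
    mx, my = sum(doses) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(doses, ys)) \
        / sum((x - mx) ** 2 for x in doses)
    a = my - b * mx
    return a, b

def estimate_dose(fc, a, b):
    """Invert the calibration curve for a measured fold change."""
    return (math.log2(fc) - a) / b

# hypothetical calibration: FC doubles per Gy for an up-regulated gene
a, b = fit_calibration([0, 1, 2, 4], [1, 2, 4, 16])
print(round(estimate_dose(8, a, b), 2))  # 3.0 (Gy)
```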

Complex gene signatures used for dose estimation
As outlined in section 4.1.8, hundreds or dozens of genes are employed for discriminating unexposed and exposed samples categorically (e.g. 0 vs 1.4 Gy), with signatures being specific for each examined time point. This approach demands a large variety of established gene sets in order to select the appropriate one. So far, X- and γ-irradiation exposures are covered, and other radiation qualities or dose-rate-dependent examinations are largely missing. Although challenging, one team has introduced and continuously evaluated a high-throughput biodosimetry test system (REDI-Dx) since 2007 [57,77,90,91]. The signature consists of 15 radiation-responsive genes including e.g. FDXR, BAX, CDKN1A, PCNA and DDB2 [57,91]. The test system provides dose estimates covering 0-10 Gy from 24 h to 7 days after irradiation [91]. Recently, the authors examined the applicability of their tool for determining two dose categories of clinical significance: >2-6 Gy (hospitalization and intensive care such as ICU and cytokine treatment required) and >6 Gy (more aggravated H-ARS). The ROC area over four different time points (e.g. 24 h and 72 h) and both dose categories (relative to unexposed healthy donors) always ranged between 0.94 and 1.0 [91]. The authors examined different confounders/parameters in 331 of altogether 656 unirradiated human subjects: chronic conditions (e.g. allergy, asthma, diabetes, arthritis, pregnancy), skin burns (<10%, 10%-20%, >21% body surface), influenza, trauma (injury severity scale 10-14, 15-24 and >25), age (e.g. 18-21, 22-39, 40-64, >65), ethnicity and gender. They report negligible effects of these health conditions and demographic factors on the identification of unexposed samples. Such examinations are missing for a total of 344 whole-body-irradiated samples, which originated from NHP.
Also, exclusion factors for patients comprised treatments with an impact on whole BCC (and GE values), such as chemotherapy as well as cytokine inhibitors, inducers or cytokine therapy (e.g. G-CSF) given prior to irradiation. Hence, dose estimates in patients with these treatments will probably fail.

Gene combinations used for H-ARS prediction 1-3 days after irradiation
Section 4.2 describes a combination of four genes for early and high-throughput diagnosis of H-ARS severity [31]. Figure 3 provides the onset of the time window, which, depending on the gene, starts 2-8 h after irradiation. GE changes considerably over 3 days, but values have not returned to normal even three days after irradiation. Taking advantage of typically up-regulated (FDXR and DDB2) or down-regulated genes (WNT3 and POU2AF1), different H-ARS severity degrees can be discriminated based on the fold change during this three-day time frame, employing logistic regression analysis combined with ROC. Fold differences in GE measured at different time points translate into a probability, which corresponds to a PPV and NPV and a certain sensitivity and specificity [31]. Although validated in vitro, on healthy donors and on leukaemia patients, these data are based on only 17 irradiated baboons and 200 unirradiated human healthy donor samples [112]. Further validation of this approach on larger animal cohorts and more whole-body-irradiated leukaemia patients (with BCC changes similar to severe H-ARS) is required, while considering the underlying disease with its strong impact on baseline GE [66].
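The logistic-regression step that maps a measured fold difference to a probability can be sketched as follows. The coefficients here are purely hypothetical placeholders; in the cited work they are fitted per gene and per time point and combined with ROC analysis to derive PPV, NPV, sensitivity and specificity.

```python
import math

def hars_probability(log2_fc, intercept, slope):
    """Logistic model mapping a gene's log2 fold change at a given time
    point to a probability of severe H-ARS. Coefficients are hypothetical;
    in practice they are fitted from calibration data as in [31]."""
    z = intercept + slope * log2_fc
    return 1 / (1 + math.exp(-z))  # standard sigmoid

# hypothetical coefficients for a down-regulated gene (negative log2 FC):
# a ~10-fold down-regulation (log2 FC about -3.3) yields a high probability
p = hars_probability(log2_fc=-3.3, intercept=-2.0, slope=-1.5)
print(round(p, 2))  # 0.95
```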
Predicting H-ARS is an important step in the diagnosis of ARS, but other organ systems are also affected by ARS, such as the gastrointestinal or the neurovascular system; these are not covered by the diagnostic tool. However, the bone marrow is considered the most radiosensitive. Hence, identifying individuals who will develop H-ARS automatically results in hospitalization of individuals who may also develop the ARS-associated gastrointestinal or neurovascular syndrome or multi-organ failure.

Confounders and specificity of radiation-induced GE
Several publications report on the impact of e.g. LPS (bacterial infection) on the GE of certain genes (e.g. FDXR and CDKN1A), but not all (e.g. DDB2, GADD45, PCNA; see above). Also, demographic parameters such as ethnicity, age and gender could have an impact on GE and the associated allocation of unexposed or exposed individuals. Several publications report a negligible effect of these factors on the allocation of unexposed healthy donors (see section 6.2). This has also been examined on another 200 blood samples from 122 male and 88 female healthy donors, and gender as well as age were found to lie well within the two-fold differences in GE generally considered to represent control values [112]. Gene signatures may be particularly robust across diverse disease states, as shown by several groups [57,91,165]. Nevertheless, the association of single genes, as outlined above, with e.g. the TP53 signal transduction pathway makes them likely candidates for being altered by other exposures or diseases as well. For instance, CDKN1A acts as a negative regulator of the cell cycle. Its expression is increased in response to various intra- and extracellular stimuli to arrest the cell cycle and ensure genomic stability, and it is involved in diverse biological processes such as differentiation, cell migration, cytoskeletal dynamics, apoptosis, transcription, DNA repair, reprogramming of induced pluripotent stem cells, autophagy and the onset of senescence [166]. Furthermore, all treatments impacting the composition of peripheral BCC (see section 6.2) will change GE measurements. These considerations highlight the requirement of interpreting GE changes in context. In an RN scenario, clinical parameters (e.g. vomiting, diarrhoea, erythema), physical measurements and further biological changes are expected (for details see Blakely et al in this issue).
We believe it is this contextual view in its entirety, considering different aspects of the disease and not solely gene or protein expression changes, that finally results in the appropriate allocation of individuals to clinically relevant categories.

Protein expression
Ongoing efforts to advance the biochemical biodosimetry approach for radiation dose assessment have evaluated (a) the selection of a biomarker panel, (b) the measurement of population baseline levels and (c) the consideration of potential confounders [121]. A similar approach is needed for a proteomic biomarker panel applicable to the assessment of ARS severity. In both cases, special populations (i.e. children, immune-compromised individuals, etc) need to be addressed. It is recommended that an evidence-based review of this approach be performed to obtain international consensus on this approach as the standard of diagnostics for early-phase assessment of potentially life-threatening radiation exposures.

Current and future developments

Biofluids used for GE-related retrospective dose estimation or effect prediction
The use of peripheral blood as a convenient source for GE measurements is widely accepted (see table 1 and supplemental table 1). Isolation of RNA from whole saliva samples, as a non-invasive and easily accessible biofluid, could represent an attractive alternative to blood. Saliva is a blood plasma ultra-filtrate and as such is thought of as a 'mirror of the body' [167,168]. Although saliva has already been shown to contain RNA biomarkers for e.g. oral cancer diagnostics [169-171], its use appears challenging because of overwhelming bacterial contamination associated with a low fraction of human RNA, making a more complex workflow necessary for meaningful RNA analysis, which is currently in progress [172,173]. Only a few studies report on radiation-induced changes in urine, using e.g. mass spectrometry for metabolomic measurements in mice and NHP [174,175]. Another study reports on the stability of miRNAs in serial urine samples collected from patients with localized prostate cancer [176], indicating a developing field requiring further research.

High throughput analysis of GE biomarkers
In study scenarios where dozens or several hundred candidate genes and larger sample numbers are analysed, formats other than 96-well plates have to be utilized for GE analysis. So-called customized low-density arrays (LDA, Thermo Fisher Scientific) or recent developments such as the 12k open array format (OA, Thermo Fisher Scientific) can be applied. In the LDAs, primers and probes for each gene are filled and lyophilized in each well, ensuring single qRT-PCR reactions and identification of up to 384 different genes per LDA. The 12k OA includes four metal slides with about 3000 holes each, with primers and probes for each gene coated on the walls. The LDA proved to be reliable, but the 12k OA must be run in triplicate due to technical limitations inherent to this technology (company's recommendations and own experience). Besides these qRT-PCR-based high-throughput methodologies, NGS and nanopore sequencing based technologies provide high-throughput capabilities due to multiplexing of hundreds of samples in one reaction combined with targeted sequencing of selected genes [65,69,88,89].

Point-of-care (POC) diagnosis of GE biomarkers
Microfluidic technology and the concept of a 'lab on a chip' hold the promise of putting the complex laboratory workflow from the blood drop through RNA isolation, cDNA synthesis and qRT-PCR onto a microfluidic card to be used at the point of care, e.g. in hospitals, where potentially radiation-exposed individuals will arrive in a radiological or nuclear event. No established qRT-PCR-based microfluidic cards exist so far, but several reports dealing with microfluidic mixing processes and work on a module for the isolation of white blood cells from peripheral blood, as a prerequisite for the automation of GE assays on a microfluidic cartridge, indicate some movement in this direction [177,178]. Our own experience with this technology reflects challenges in RNA isolation, limitations in the linear dynamic range of GE values using one-step or two-step qRT-PCR, and miniaturization restrictions. After overcoming these challenges, a mobile platform will exist; however, expenses per card have to be considered and multiplexing will be unlikely.
Recently, nanopore sequencing was introduced as a rapid (3 min for a total of 50 000 reads) and portable real-time biodosimetry platform. It still requires improvements regarding sample processing and the bioinformatic pipeline for specific radiation-responsive transcript identification to finally realize the full potential of this portable and rapid technology [69].
Lateral-flow immunoassay technology for protein detection is well established [179,180]. Using this technology for nucleic acid detection is currently under development [181,182].
Employing machine learning (artificial intelligence) for the analysis of e.g. multi-omics biomarkers or medical imaging to improve clinical outcome predictions is one ongoing task [183-185]. In addition, there is ongoing activity using this technology for the integration of multiple input data, such as clinical signs and symptoms of ARS and radiation-induced cytogenetic and molecular data, for accelerated and improved prediction of acute radiation health effects.

Multiple proteomic biomarkers analysis
The proposed candidate proteomic biomarkers are all suitable for analysis using conventional enzyme-linked immunosorbent assay (ELISA)-based analysis. POC analysis using lateral-flow technology can support up to three targets from different organs, as shown in tables 2 and 3. Multiplex analysis supporting up to 8-10 targets can be measured in a single well [146,148]. Recent developments involving multiplex protein analysis using aptamer/SOMAmer reagents or proximity extension analysis show promise to extend these technologies to support biochemical biodosimetry analysis assessing both radiation dose and injury.

Summary
Measuring radiation-induced GE changes is recognized as a reliable, early and high-throughput tool for retrospective biodosimetry and prediction of H-ARS. Over about two decades, several tools consisting of complex gene signatures or combinations of a few genes have been established, studied by different groups, and validated in small and large animal models, on human samples in vitro and in patients. Their degree of maturation holds promise for applicability in an RN scenario. However, several aspects appear under-researched, including exposure characteristics such as dose-rate dependency or α-, neutron or proton exposures to simulate radionuclide incorporation, nuclear scenarios or space missions. Prediction of acute health effects and the associated clinical questions appear under-researched as well, and a more vivid interface between the radiobiological and the medical community is required to solve a clinical problem, namely the medical management of radiation-associated acute and late health effects. The use of proteomic (biochemical) biodosimetry has also emerged as a useful diagnostic approach for early-phase assessment of both radiation dose and injury (e.g. H-ARS severity).

Acknowledgments
This work was supported by the German Ministry of Defence and AFRRI's intramural protocol (AFR-B4-10971).

Disclaimers
Mention of any brand name products does not imply endorsement. The opinions and assertions expressed herein are those of the author(s) and do not necessarily reflect the official policy or position of the Bundeswehr, the Uniformed Services University or the United States Department of Defense. This work was prepared by a civilian employee of the US Government as part of the individual's official duties and therefore is in the public domain and does not possess copyright protection.