Perspective (Open access)

Use of models and observations in event attribution

Published 2 July 2015 © 2015 IOP Publishing Ltd
Citation: Gabriele C Hegerl 2015 Environ. Res. Lett. 10 071001. DOI: 10.1088/1748-9326/10/7/071001


Abstract

Research is pursued worldwide that aims to determine if a particular observed extreme event has become more or less likely due to climate change. A recent paper (King et al 2015 Environ. Res. Lett. 10 054002) uses two methods to quantify how much more likely a record hot year in Central England has become. One of the methods is based largely on climate modeling, the other on interpreting the observed record. This is an important step towards improving the reliability of event attribution results. Improved understanding and prediction of changes in extreme events is recognized as one of the 'grand challenges' in climate research.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Extreme weather and climate events demonstrate the vulnerability of society and ecosystems, and bring climate change to the public's attention far more than changes in global mean temperature do. Hence research that determines whether a particular observed extreme event has become more or less likely due to climate change, and by how much, is pursued worldwide (Peterson et al 2014).

However, quantifying how and why the frequency and intensity of high-impact events have changed is not easy. Observations provide only limited realizations of rare events, so establishing their changing frequency of occurrence is difficult, particularly for short records. It requires careful application of statistics, either estimating the shape of the tail of the distribution (Smith 1989) or statistically modeling record-setting events (Benestad 2003, Meehl et al 2009). Also, variations in the frequency of extreme events occur for many reasons, not just due to human influences. Use of climate model data is an attractive alternative, as much larger ensembles of events can be provided (e.g., Pall et al 2011, Otto et al 2012). The difference between large ensembles of model simulations with and without human influences makes it possible to quantify how the probability of an event has changed due to human influences (see Stott et al 2004). However, this leaves open the question of whether the climate model captures the event in question, as well as any changes in its amplitude and occurrence probability. Hence each individual event attribution approach is subject to uncertainty, an uncertainty that is arguably larger than when attributing recent mean climate change to human and natural influences (see Bindoff et al 2013), as the link between model results and data is less direct.
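To illustrate the ensemble-based logic in the simplest possible terms, the sketch below contrasts exceedance probabilities in two synthetic ensembles standing in for simulations with and without human influence; the ensemble sizes, threshold and distributions are illustrative assumptions, not values from any of the studies cited above.

```python
"""Illustrative sketch (not from the paper): how much more likely is it to
exceed a fixed temperature threshold in a 'factual' ensemble (all forcings)
than in a 'counterfactual' ensemble (natural forcings only)?  The ensembles
here are synthetic stand-ins for climate model output."""
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual-mean temperature anomalies (degC) from two large ensembles.
natural_only = rng.normal(loc=0.0, scale=0.5, size=5000)  # counterfactual world
all_forcings = rng.normal(loc=0.6, scale=0.5, size=5000)  # world with human influence

threshold = 1.0  # an "extreme" year, e.g. a record annual-mean anomaly

# Empirical exceedance probabilities in each ensemble.
p_nat = np.mean(natural_only > threshold)
p_all = np.mean(all_forcings > threshold)

# Probability (risk) ratio: how many times more likely the event is with human
# influence included.  FAR = 1 - p_nat/p_all is the fraction of attributable risk.
pr = p_all / p_nat
far = 1.0 - p_nat / p_all

print(f"P(natural) = {p_nat:.4f}, P(all forcings) = {p_all:.4f}")
print(f"Probability ratio = {pr:.1f}, FAR = {far:.2f}")
```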

King et al (2015) bring us a large step closer to robust results by using two complementary methods to estimate changes in the probability of anomalously warm Central England annual temperatures, such as occurred in 2014. The first method uses a multi-model ensemble to estimate how much the probability of a year hotter than 2006 has increased. The incidence of anomalously warm years is compared between model simulations driven with natural forcings only and simulations of the early 21st century driven with all historical forcings, including greenhouse gas and aerosol forcing. Structural deficiencies in a single model are overcome by using multiple state-of-the-art climate models. The authors find that the probability of an anomalously warm year in Central England is much larger in simulations including anthropogenic forcing than in runs with natural forcing only. The complementary approach relies on the observed distribution of annual Central England temperatures, assuming that the threshold for an unusual event shifts in proportion to greenhouse gas concentrations in the atmosphere. The long Central England time series then provides enough samples to fit a statistical distribution to extreme events. Based on this fitted distribution, the probability of annual mean temperatures exceeding the 2006 record is compared between the early 20th and early 21st century. Earlier studies have established that human influences contributed to the warming of Central England over that period (Karoly and Stott 2006) and support that it was human influences that shifted the probability of a warm year. Using the more conservative result of the two methods, the authors estimate that this shift has reduced the return time of a year warmer than 2006 by at least a factor of 13 (90% confidence). Both approaches yield similar results, although the observational fit renders the event more unusual during the early 20th century than 'naturally' forced climate model simulations do.
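As a rough sketch of the observation-based approach, the example below fits a simple extreme value distribution (a Gumbel, the zero-shape special case of the GEV, chosen here for simplicity) to a synthetic stand-in for a long annual temperature record, then compares the return period of exceeding a record value before and after an assumed warming shift. The record value, the 0.9 °C shift and the synthetic data are assumptions for illustration only; King et al (2015) work with the actual Central England series and account for the changing background climate as described above.

```python
"""Hedged, illustrative sketch only: fit a simple extreme value distribution
to a synthetic annual temperature record and compare return periods of a
record-warm year before and after an assumed warming shift.  All numbers are
made up for illustration; they are not values from King et al (2015)."""
import numpy as np
from scipy.stats import gumbel_r  # Gumbel: zero-shape special case of the GEV

rng = np.random.default_rng(0)

# Synthetic stand-in for early-20th-century annual-mean temperatures (degC).
early_20c = rng.normal(loc=9.5, scale=0.6, size=120)

# Fit the distribution to the annual values.
loc, scale = gumbel_r.fit(early_20c)

record = 10.9   # assumed record annual-mean temperature to be exceeded
warming = 0.9   # assumed shift of the distribution by the early 21st century

# Exceedance probability (and hence return period) before and after the shift.
p_early = gumbel_r.sf(record, loc=loc, scale=scale)
p_late = gumbel_r.sf(record, loc=loc + warming, scale=scale)

print(f"Return period, early 20th century: {1.0 / p_early:.0f} years")
print(f"Return period, early 21st century: {1.0 / p_late:.0f} years")
print(f"Change in likelihood (probability ratio): {p_late / p_early:.1f}")
```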

Anomalously warm years in Central England are a stepping stone towards robust event attribution of events that really matter. The Central England record is an excellent test case, as the data show a detectable response to radiative forcing. Its exceptional length provides strong constraints on climate model simulations of its variability (strong enough to reject several models as unrealistic) and is the basis of a well-constrained observed distribution. In most other regions records are shorter, and the link between shorter-lived events such as droughts, heavy rainfall, heat waves or cold spells and climate change is more uncertain. In such situations, providing observational constraints on the changing probability of extremes is both important and challenging. Station data can provide an evaluation of climate variability as simulated by models, even if the observed record may not always be sufficient to quantify the probability of rare extremes. Observations and reanalysis products document past circulation and can be used to quantify how the intensity of an event linked to a given synoptic situation has changed over time (e.g. Cattiaux et al 2010). Also, it is reassuring if events in models occur under the same synoptic conditions as they do in observations (e.g. Pall et al 2011, Krueger et al 2015). When changes in the frequency and intensity of extreme events are aggregated to large or global scales, detection and attribution methods can determine the human contribution to those changes and evaluate model-simulated change (see Bindoff et al 2013, Fischer and Knutti 2014).

Physical understanding of changes in climate dynamics, as well as of the local feedbacks that lead to extreme events, is key to reliably attributing changes in extremes and is an area of active research. The WCRP grand challenge on climate extremes (http://wcrp-climate.org/gc-extreme-events) aims to coordinate work towards improved understanding and prediction of changes in extreme events. It comprises four focus activities: improving data availability for observed extremes, understanding mechanisms and feedbacks, attributing extreme events, and improving simulations, including model evaluation (Zhang et al 2013). A collaborative effort by the community, making the most of recent modeling advances and confronting models with observations to the extent possible, will help to provide the robust attribution information that society needs. The present paper is a very useful step that gleans information on the changing probability of an extreme event from both models and observations in order to overcome the uncertainties of individual methods. I hope that more will follow.
