CliMathNet Conference 2013 Contributed talks

Chris Boulton (University of Exeter), P. Good, B. B. B. Booth & T. M. Lenton

Early Warning Signals of Simulated Amazon Dieback

A number of early warning indicators have been proposed to signal the approach of a system to a tipping point. Here we show the results when these methods are tested on Amazon rainforest vegetation time series, provided as outputs from an ensemble of the global climate model HadCM3. Due to the characteristics of the drivers of the system in the models, these ‘generic’ early warning indicators do not provide a robust indication that a tipping point is being approached, despite the system being stressed in all ensemble members and, in some cases, the tipping point having been passed. Because of this, we test ‘system-specific’ indicators, which are calculated in a similar way to the generic indicators but use information specific to the Amazon. One of these uses variables that have recently started to be monitored and so could potentially be used as an early warning signal in the real world.
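
For readers unfamiliar with the ‘generic’ indicators referred to above, the sketch below (not the authors' code) computes the two most common ones, sliding-window variance and lag-1 autocorrelation, on a synthetic record; the window length and the AR(1) toy data are illustrative assumptions.

```python
import numpy as np

def generic_ews(x, window=50):
    """Sliding-window lag-1 autocorrelation and variance: the usual
    'generic' early warning indicators of an approaching tipping point."""
    x = np.asarray(x, dtype=float)
    ar1, var = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        w = w - w.mean()                               # remove the local mean
        ar1.append(np.corrcoef(w[:-1], w[1:])[0, 1])   # lag-1 autocorrelation
        var.append(w.var())
    return np.array(ar1), np.array(var)

# Toy AR(1) series whose memory increases over time ("critical slowing down")
rng = np.random.default_rng(0)
phi = np.linspace(0.2, 0.95, 500)
x = np.zeros(phi.size)
for i in range(1, phi.size):
    x[i] = phi[i] * x[i - 1] + rng.normal()

ar1, var = generic_ews(x)
print(ar1[0], ar1[-1])      # both indicators trend upwards before a transition
print(var[0], var[-1])
```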

Aside from this work, we also measure the stability of the forest in each of our ensemble members using dry-season resilience. This uses information about forest cover across the whole of the tropics, along with climate conditions at the grid-point scale, to assess which conditions can sustain forest. This in turn allows us to determine how close the Amazon is in each case to becoming unsustainable under the changing climate of the 21st century. By observing the state of the climate at the end of the 21st century, we can also predict a ‘committed’ response of the forest to the climate change that has occurred. This committed change differs from the change observed by 2100 because the slow dynamics of the system mean that the full extent of the change has yet to be realised.

 

Philip Brohan (Met Office)

Dispersing the fog of ignorance

Reconstructions of historical climate, predictions of future weather, and longer-term climate projections are all done with a physical model of the climate system, imperfectly constrained by a set of observations. In each case an ensemble of model simulations provides an estimate of uncertainty in the calculated climate.

To improve the simulations, it is necessary to find uncertainty metrics which are powerful in distinguishing regions of confidence and ignorance in the simulations, and it's useful to visualise both the simulated climate and the uncertainty estimates. The Kullback-Leibler divergence between a simulation ensemble and an uninformative prior distribution is a powerful metric of simulation quality, and using fog as a visual metaphor to mask areas of low divergence is an effective way to illustrate how uncertainty behaves and develops. This approach works equally well for reconstructions and predictions, over a range of timescales.
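
As an illustration of the metric (not the Met Office implementation), the sketch below computes the Kullback-Leibler divergence between a per-gridpoint Gaussian fit to an ensemble and a broad ‘uninformative’ Gaussian prior, then masks low-divergence gridpoints with ‘fog’; the Gaussian approximation, prior width and threshold are assumptions made for the sketch.

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """Closed-form KL( N(mu1, var1) || N(mu2, var2) ) for Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# ens: ensemble of simulated fields, shape (members, ny, nx); synthetic here
rng = np.random.default_rng(1)
ens = rng.normal(loc=0.5, scale=0.3, size=(50, 20, 30))

mu_e, var_e = ens.mean(axis=0), ens.var(axis=0)
mu_0, var_0 = 0.0, 4.0          # broad prior standing in for "ignorance"

divergence = kl_gauss(mu_e, var_e, mu_0, var_0)
foggy = divergence < 1.0        # low divergence ~ little information: mask with fog
print(f"fraction of the map hidden by fog: {foggy.mean():.2f}")
```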

 

Kate Brown (Met Office), Simon Brown, Justin Krijnen, Rachel McInnes, Ed Pope and Katy Richardson

Quantifying Extreme Wind Events in a Changing Climate

Design specifications for large scale infrastructure projects often require resilience to extreme weather events going out as far as the 1-in-10,000-year event. In this presentation we describe the analysis of 10-metre winds in support of such a project, using observations from nearby meteorological sites up to the present day. Extreme value analysis and the fitting of extreme value distributions (EVDs) allow the estimation of the probability and severity of events that are more extreme than any in a given data series.

We analysed a number of wind-related parameters (e.g. gusts, 10-minute averages and hourly averages) to investigate the changes in extreme winds in specific areas of the UK. When fitting EVDs it is usually assumed that the data are stationary – that is, that the distribution does not change systematically with time. For some meteorological variables under a changing climate this is unlikely to be true, so we allowed the EVD parameters in our analysis to depend on global temperature as a proxy for climate change. In addition, it is known that multi-decadal oscillations such as the North Atlantic Oscillation (NAO) can have a profound impact on wind speed in the UK. As a result, we included additional covariates such as the NAO and wind direction to prevent confounding between multi-decadal oscillations and climate change.
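
A minimal sketch of the kind of non-stationary EVD fit described above, with the GEV location parameter depending linearly on global temperature and the NAO; the synthetic data, the linear form and the variable names are assumptions for illustration, not the analysis actually performed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_loglik(theta, z, tglob, nao):
    """Negative log-likelihood of annual wind maxima z under a GEV whose
    location depends linearly on global temperature and the NAO index."""
    b0, b1, b2, log_scale, shape = theta
    loc = b0 + b1 * tglob + b2 * nao
    # scipy parameterises the GEV shape as c = -xi
    ll = genextreme.logpdf(z, c=-shape, loc=loc, scale=np.exp(log_scale))
    return -ll.sum() if np.all(np.isfinite(ll)) else np.inf

# Synthetic annual maxima, covariates and parameters for illustration only
rng = np.random.default_rng(2)
n = 60
tglob = np.linspace(0.0, 0.8, n)        # global-mean temperature anomaly (K)
nao = rng.normal(size=n)                # winter-mean NAO index
z = genextreme.rvs(c=-0.1, loc=25 + 2 * tglob + nao, scale=3, size=n,
                   random_state=rng)

fit = minimize(neg_loglik, x0=[25.0, 0.0, 0.0, np.log(3.0), 0.1],
               args=(z, tglob, nao), method="Nelder-Mead")
print(fit.x)    # [b0, b1, b2, log(scale), shape]
```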

Fitting EVDs to observations from nearby meteorological stations allows calculation of return levels for those sites. However, obtaining return levels for the desired location requires location-specific information, and typically location-specific wind observations are not available. The Met Office's Virtual Met Mast (VMM) application has been developed to generate virtual time series of hourly wind speeds for the whole UK by using spatially complete numerical weather prediction model output at a grid resolution of 4 km. The combination of site-specific EV analysis at meteorological stations and VMM output suggests a method for providing site-specific estimates of extreme wind climatology for the UK in the absence of site-specific observations.

 

Simon Brown (Met Office), James Murphy, David Sexton and Glen Harris

Climate projections of future extreme events accounting for modelling uncertainties and historical simulation biases

A methodology is presented for providing projections of absolute future values of extreme weather events that takes into account the uncertainty in predicting future climate. It is used to calculate changes in 50-year return levels of summer maximum temperatures, summer daily rainfall and autumn five-day total rainfall for London in 2050 under the A1B future emissions scenario.

This is achieved by characterising both observed and modelled extremes with a single form of non-stationary extreme value (EV) distribution that depends on global mean temperature and which includes terms that account for model bias. 

Uncertainty in modelling future climate, arising from a wide range of atmospheric, oceanic, sulphur cycle and carbon cycle processes, is accounted for by using probabilistic distributions of future global temperature and EV parameters.  These distributions are generated by Bayesian sampling of emulators with samples weighted by their likelihood with respect to a set of observational constraints.  The emulators are trained on a large perturbed parameter ensemble of global simulations for present and doubled CO2 climate.

Emulated global EV parameters are converted to the relevant regional scale through downscaling relationships derived from a smaller perturbed parameter regional climate model ensemble.  The simultaneous fitting of the EV model to regional model data and observations allows the characterisation of how observed extremes may change in the future, in spite of biases that may be present in the regional models.

The impact of selected parameter perturbations will be discussed.

 

Charles Camp1, Alex M. Gerber1, Matthew J. Rodrigues1, Pamela A. Martin2

1Department of Mathematics, California Polytechnic State University, San Luis Obispo, CA, USA

2Departments of Earth Sciences and Geography, Indiana University–Purdue University, Indianapolis, IN, USA

Characteristics of the Mid-Pleistocene Transition as Revealed by Empirical Mode Decomposition

The time series records found in the study of climate are generally noisy, nonlinear and nonstationary. Classical techniques such as Fourier analysis are ill-suited for the study of such records due to assumptions of an a priori basis and the global or linear nature of the analysis. Newer techniques have been developed which are nonlinear, local and data adaptive; as such, they can provide opportunities to extract more meaningful information from climate records. Here, we provide a case study of the utility of such a technique, Empirical Mode Decomposition (EMD), in the analysis of paleoclimate records.

A consensus as to the characterization of Pleistocene climate with respect to Milankovitch theory (the forcing of climate by orbital dynamics) has remained elusive.  Recently, new compilations of published ocean sediment records have been constructed which are devoid of orbital assumptions. These “untuned” stacks provide unbiased datasets with which to explore classic Milankovitch theory and related hypotheses put forth over the last several decades. Using EMD, our analyses partition the records into distinct modes which exhibit variability on different coherent timescales. In particular, strong 100-kyr variability is seen to emerge abruptly at about 1.25 Ma, while reasonably steady 40-kyr variability is seen to persist throughout the Pleistocene. The 40-kyr mode is paced by obliquity and is consistent with a direct response to obliquity forcing.  However, while the 100-kyr mode is paced by eccentricity during the late Pleistocene, its amplitude is negatively correlated with the forcing. Combined with the lack of a strong 400-kyr signal in the climate records, these results suggest that the observed 100-kyr variability is not a direct response to eccentricity forcing; rather, they are consistent with the hypothesis that the sudden appearance of 100-kyr oscillations in the climate records is caused by the emergence of a new internal dynamical mode, phase-locked to weak eccentricity forcing.
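
For readers unfamiliar with EMD, the sketch below implements a crude version of the sifting procedure (cubic-spline envelopes of the extrema, subtraction of the mean envelope); the stopping rules and end treatments of a production EMD code are more careful than this, and the toy record is illustrative only.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def extract_imf(x, t, n_sift=10):
    """Crude extraction of one intrinsic mode function (IMF) by sifting:
    repeatedly subtract the mean of the upper and lower spline envelopes."""
    h = x.copy()
    for _ in range(n_sift):
        up, _ = find_peaks(h)
        lo, _ = find_peaks(-h)
        if len(up) < 3 or len(lo) < 3:        # too few extrema to build envelopes
            break
        upper = CubicSpline(t[up], h[up])(t)
        lower = CubicSpline(t[lo], h[lo])(t)
        h = h - 0.5 * (upper + lower)
    return h

def emd(x, t, max_imfs=6):
    """Decompose x into IMFs plus a slowly varying residual."""
    imfs, resid = [], x.copy()
    for _ in range(max_imfs):
        imf = extract_imf(resid, t)
        imfs.append(imf)
        resid = resid - imf
        if len(find_peaks(resid)[0]) < 3:     # residual is (nearly) monotone: stop
            break
    return imfs, resid

# Toy record: 41-kyr and 100-kyr "cycles" plus noise on a 1-kyr grid
t = np.arange(0.0, 2000.0, 1.0)
rng = np.random.default_rng(3)
x = (np.sin(2 * np.pi * t / 41) + 0.7 * np.sin(2 * np.pi * t / 100)
     + 0.2 * rng.normal(size=t.size))
imfs, resid = emd(x, t)
print(f"{len(imfs)} IMFs extracted")
```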

 

Coralia Cartis (University of Edinburgh)

Efficient optimization algorithms for nonlinear least-squares and inverse problems

We discuss various regularization and optimization techniques suitable for inverse problems. In particular, we present a new termination condition for nonlinear least-squares algorithms that can distinguish between the zero- and non-zero-error cases automatically and that gives the best known efficiency guarantees for some regularization-based methods. There has been much recent work on inverse problems that admit a sparse solution, namely, one with few significant entries. We outline a new average-case analysis that quantifies the level of undersampling that can be employed while still being able to recover a sparse solution. Finally, we discuss some derivative-free optimization techniques that we have successfully employed to tune some relevant parameters of the HadAM3 model so as to match radiation measurements. This work is joint with Nick Gould (RAL, UK), Philippe Toint (Namur, Belgium), Andrew Thompson (Duke, USA), and Mike Mineter and Simon Tett (Edinburgh GeoSciences).

 

Zaid Chalabi (London School of Hygiene and Tropical Medicine), Sari Kovats

A Decision Analytical Framework to Reduce the Health Impacts of Extreme Weather Events

By their nature, extreme weather events have a low probability of occurrence and high (or very high) impacts. This raises a difficult question for public health policy makers. Given competing demands and budgetary constraints on healthcare and social welfare resources, combined with the large uncertainty associated with extreme weather events in terms of their probability of occurrence and impacts, how should governments decide on appropriate protective measures for extreme events? Decision-support methods are widely used in the health sector for the allocation of scarce resources. They are normally based on maximising expected health benefits whilst satisfying budgetary and other constraints.

Standard decision rules based on expected utility theory are however inappropriate for handling decisions concerning extreme events because these methods are based on maximising expected utility and are therefore insensitive to low probability high impact events. Increasing interest in developing alternative decision rules for handling extreme events has led to new methods where decision rules are based on non-additive probability measures (capacities) and Choquet integrals. 

This paper outlines a new mathematical framework based on these axiomatic decision rules to help public health policy makers evaluate and compare alternative protective measures against extreme weather events when the evidence on the future occurrence of these events and their impacts is very uncertain. The framework takes into account the future time horizons over which the decision options are compared and the possible short- to medium-term opportunity costs of the decisions in terms of welfare capital forgone (which can have implications for inter-temporal equity). We will illustrate the theory using the example of heatwaves and cold spells in the UK. Evidence regarding the effectiveness of protective measures to reduce the health and social impacts is based on previous work at the London School of Hygiene and Tropical Medicine.
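
As a small illustration of the type of decision rule mentioned above (not the framework of the paper itself), the sketch below evaluates a discrete Choquet integral of outcomes with respect to a non-additive capacity; the capacity and the utilities are toy values.

```python
from itertools import combinations

def choquet(x, capacity):
    """Discrete Choquet integral of non-negative outcomes x[i] with respect to
    a capacity: a set function with v(empty set) = 0, v(all states) = 1 and
    monotone under inclusion, but not required to be additive."""
    order = sorted(range(len(x)), key=lambda i: x[i])      # ascending outcomes
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        upper = frozenset(order[k:])                       # states with outcome >= x[i]
        total += (x[i] - prev) * capacity[upper]
        prev = x[i]
    return total

# Toy example: utility of one protective measure under three scenarios
states = (0, 1, 2)
x = [0.2, 0.5, 0.9]
# A pessimistic, sub-additive capacity that down-weights uncertain events
capacity = {frozenset(s): (1.0 if len(s) == 3 else 0.25 * len(s))
            for n in range(4) for s in combinations(states, n)}
print(choquet(x, capacity))   # compare with the equal-probability expectation of about 0.53
```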

 

Peter Challenor (University of Exeter)

Uncertainty in Estimates of Extremes From Numerical Models

Extreme values (floods, heat waves, giant waves, …) are of great practical importance.  To estimate these extremes in the present day we can use data, but if we wish to predict extremes in the future we need to use models (simulators). Even for present-day extremes, the use of models can give much longer data series than our limited observational data sets. At present the uncertainty arising from the simulator is not included when estimating extremes. In this paper we examine two methods of approaching this problem. Because of the Fisher-Tippett limits we know the form of the distribution of extremes, but because extreme value distributions are very non-Normal, a Gaussian process emulator of the extremes themselves is not appropriate. We investigate two ways of producing emulators for extremes. The first is to use a multivariate Gaussian process for the parameters of the extreme value distribution; the second is to use a max-stable process. We will discuss the differences between these methods and how we should deal with the inherent stochastic nature of extremes and the sampling variability from the simulator.
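
A sketch of the first approach described above, emulating the parameters of an extreme value distribution as functions of simulator inputs; for simplicity it uses independent Gaussian processes per parameter rather than a full multivariate GP, and the design points and fitted GEV parameters are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import genextreme
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(4)

# Design points: simulator input settings (here, two physics parameters)
X = rng.uniform(0, 1, size=(30, 2))

# At each design point, block maxima from the simulator would be fitted to a
# GEV; the fitted (location, scale, shape) triples below are simply made up.
Y = np.column_stack([10 + 5 * X[:, 0],                  # location
                     1 + 2 * X[:, 1],                   # scale
                     -0.1 + 0.2 * X[:, 0] * X[:, 1]])   # shape

# One GP per GEV parameter: a simple stand-in for a full multivariate GP
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3])
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, j])
       for j in range(3)]

x_new = np.array([[0.5, 0.5]])
loc, scale, shape = (gp.predict(x_new)[0] for gp in gps)

# Emulated 100-year return level at the untried input setting
print(genextreme.ppf(1 - 1 / 100, c=-shape, loc=loc, scale=scale))
```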

 

Sandra Catherine Chapman1,4, David Stainforth 2,1, Nicholas Wynn Watkins 2,3,1

1. Physics, University of Warwick, Coventry, United Kingdom

2. London School of Economics, London, United Kingdom

3. British Antarctic Survey, Cambridge, United Kingdom

4. Department of Mathematics and Statistics, University of Tromso, Norway

An observationally-centred method to quantify local climate change as a distribution

For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires the quantification of trends at specific quantiles of the distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time, and the trends in the relevant quantiles may not simply follow any trend seen in the distribution mean. We present a method [1] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data [2], composed of time series of local daily temperature and precipitation from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables, and we thus obtain as outputs how the trend, or local sensitivity, varies with temperature (or occurrence likelihood) and with geographical location. These sensitivities are found to vary geographically across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We find as an output many regionally consistent patterns of response, of potential value in adaptation planning. For example, in temperature, robust signatures at specific quantiles and locations can be as much as 2-4 degrees of warming over a period in which the global mean temperature has increased by 0.5 degrees. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact.
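
The basic quantile-by-quantile comparison underlying this kind of analysis can be sketched as below: empirical quantiles of daily temperature from two periods are differenced to give a sensitivity as a function of quantile. The published method is considerably more careful about separating natural variability from secular change; the toy data here are illustrative.

```python
import numpy as np

def quantile_change(x_early, x_late, probs):
    """Difference of empirical quantiles between two periods: a crude measure
    of how different parts of the distribution have shifted."""
    return np.quantile(x_late, probs) - np.quantile(x_early, probs)

# Toy daily temperatures in which warm days warm faster than the median
rng = np.random.default_rng(5)
early = rng.normal(10.0, 5.0, size=30 * 365)
late = early + 0.5 + 0.15 * np.clip(early - 10.0, 0.0, None)

probs = np.arange(0.05, 1.0, 0.05)
print(np.round(quantile_change(early, late, probs), 2))
```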

 

Marianna Demetriou (University College London)

Combining information from multiple climate simulators to obtain estimates of global surface air temperature change, under a probabilistic Bayesian framework

To make projections of future climate, there is increasing use of “multi-model ensembles” (MMEs), in which information from many different climate simulators, such as Atmosphere-Ocean General Circulation Models, is combined. The question then arises as to how best to combine the information.  Issues to be considered include the fact that none of the currently available simulators can simulate the true climate perfectly, and that they do not cover the whole range of possible climate modelling decisions; moreover, different simulators have different credibility in representing different climate parameters. To address these issues, Chandler (2013) proposes a probabilistic, Bayesian framework for summarizing true climate, while explicitly quantifying uncertainty, using information from an MME and actual climate observations. Under the proposed framework, each simulator is weighted based on its internal variability, its consensus with the rest of the simulators, the internal variability of the true climate and the shared simulator discrepancies with the actual climate. Inference about the true climate is contained in the derived posterior distribution. The work presented here illustrates three implementations of the proposed framework, using information from observations along with projections of yearly mean global surface air temperature from a suite of climate simulators from the CMIP5 experiment. The first is a “poor man's” implementation, which provides a quick and easily computed approximation of the required posterior distribution; however, the approximations neglect part of the uncertainty. To fully capture the uncertainty, a computationally intensive fully Bayesian analysis must be carried out. The work here compares two implementations of this full analysis with the “poor man's” version, to obtain estimates of yearly mean global surface air temperature change. The focus is mainly on whether the simplified “poor man's” version yields adequate approximations to the posterior of interest. The uncertainty under the three implementations is expressed in the form of predictive distributions of yearly mean global surface air temperature, evaluated from the derived posterior under each implementation.

 

Theo Economou (University of Exeter), David Stephenson

A simple framework for multi-peril events in European windstorms

European windstorms are amongst the most damaging natural hazards, in terms of both loss of life and monetary value. There are multiple potentially damaging perils associated with intense European windstorms, and these perils are often associated with each other, implying that assuming peril independence when estimating storm-related risks may be unreasonable. We investigate the effect of multi-peril dependency (MPD) on the overall storm-related risk in a given time period (e.g. winter).

We propose a framework for quantifying and understanding MPD in simultaneously occurring events (multi-peril events), based on random sums.  In a given time period, the aggregate risk associated with different perils is represented as a random sum and we characterise the dependence between two such random sums.

We analyse storm track data from climate model output; the two storm-related perils are precipitation and wind speed, occurring in the extended winters October–March. We find that for high intensities of the two perils MPD disappears, whereas for low intensities MPD is positive due to the simultaneous occurrence of multi-peril events. No evidence of negative MPD was found.
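
A simulation sketch of the random-sum representation of aggregate seasonal risk for two perils, with dependence induced by a shared seasonal driver; the Poisson/lognormal choices and the parameter values are illustrative assumptions, not those of the analysis.

```python
import numpy as np

rng = np.random.default_rng(6)
n_winters = 20_000

# Shared large-scale driver (e.g. a storminess index), one value per winter
driver = rng.normal(size=n_winters)

# Number of storms per winter depends on the driver
counts = rng.poisson(lam=np.exp(1.5 + 0.3 * driver))

def aggregate(mean_log, sd_log, coef):
    """Random sum: seasonal total of per-storm severities for one peril,
    where the severities also depend on the shared driver."""
    totals = np.zeros(n_winters)
    for i, (n, d) in enumerate(zip(counts, driver)):
        if n:
            totals[i] = rng.lognormal(mean_log + coef * d, sd_log, size=n).sum()
    return totals

wind = aggregate(mean_log=0.0, sd_log=0.8, coef=0.2)
rain = aggregate(mean_log=-0.5, sd_log=0.6, coef=0.3)

# Dependence between the two aggregate (random-sum) risks
print(f"correlation of seasonal aggregates: {np.corrcoef(wind, rain)[0, 1]:.2f}")
```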

 

Christopher Ferro (University of Exeter)

Evaluating decadal hindcasts: why and how?

Extrapolating the performance of historical climate forecasts and hindcasts can be a poor guide to the performance of future climate predictions. Nevertheless, historical predictions do contain useful information about future performance. We propose a new approach to using this information to form quantitative judgments about future performance, thereby making explicit our answer to the question "how good are climate predictions?". We also discuss how to extract this information by evaluating hindcasts. In particular, we show how measures of performance can be chosen to (1) avoid spurious skill arising from time trends, (2) provide a fair evaluation of ensemble forecasts, and (3) describe how performance varies with the timescale of the predicted quantity.

 

Imogen Fletcher (University of Exeter)

A statistical approach to modelling the occurrence of tropical fires

Tropical forests play a crucial role in the carbon cycle, by sequestering vast quantities of carbon. Fires in these ecosystems therefore have the potential to drastically affect the carbon cycle by releasing much of this carbon back into the atmosphere. A changing climate is expected to cause increasingly frequent and severe droughts.

Combined with extensive deforestation and degradation, this makes extreme fire seasons more likely than ever before. However, tropical wildfires are poorly represented in Dynamic Global Vegetation Models, and so the magnitude of their effect on the global climate remains uncertain.

Current methods for predicting the occurrence of fires take a somewhat disjointed approach. Ignition sources are modelled as functions of lightning flashes and, in some cases, population density, though many models neglect to include anthropogenic ignitions. The effects of fuel availability and moisture are taken into account separately. As such, these estimates disregard the predilection of humans to set fires, primarily for deforestation purposes, in dry, fuel-abundant areas.

Additionally, since the definition of an ignition event includes all potential fires, and not only the fires that take hold, there are no adequate data with which to verify these models.

We propose a statistical method for identifying whether fire counts in a given tropical region are ultimately limited by excessive water, insufficient fuel, or an unfavourable population density. Since plant growth is driven primarily by water availability during the growing season, the first two of these limitations can both be represented with measures of moisture. The cumulative water deficit (CWD) of the dry season, equivalent to the accumulated precipitation minus evapotranspiration over the time period for which this difference is negative, is used to represent water availability. The wet season moisture (WSM), calculated as for the CWD but over the wet season, can be shown to be a suitable proxy for fuel availability. By using these two variables as drivers, the effects of not only the intensity of drought, but also the timing of changes in precipitation, are taken into consideration. Since lightning-caused fires are uncommon in tropical forests, their effect on fire counts is negligible, and this ignition source has therefore been omitted.

We fit three univariate models to the data, and then estimate the maximum potential number of fires in a given region as the minimum of the three fitted values. A probabilistic model based on this estimate provides an estimate of the actual number of fires that occur. Initial analyses show strong relationships between WSM, CWD, population density and maximum fire counts. Work on the probabilistic component of the model is ongoing.
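
A sketch of the limitation-based structure described above: three univariate curves give potential fire counts as functions of CWD, WSM and population density, their minimum gives the maximum potential count, and a Poisson observation model stands in for the probabilistic component. The functional forms and all coefficients are placeholders, not fitted values.

```python
import numpy as np

def potential_fires(cwd, wsm, pop):
    """Maximum potential fire count in a region as the minimum of three
    univariate limitation curves (water, fuel, population)."""
    water_lim = 100.0 / (1.0 + np.exp(-0.02 * (-cwd - 150.0)))     # drier dry season -> more fires
    fuel_lim = 100.0 / (1.0 + np.exp(-0.01 * (wsm - 400.0)))       # wetter wet season -> more fuel
    human_lim = 100.0 * np.exp(-(np.log10(pop + 1.0) - 1.2) ** 2)  # intermediate density peaks
    return np.minimum.reduce([water_lim, fuel_lim, human_lim])

rng = np.random.default_rng(7)
cwd = -rng.uniform(0.0, 400.0, 1000)     # cumulative water deficit (mm, <= 0)
wsm = rng.uniform(100.0, 1500.0, 1000)   # wet season moisture (mm)
pop = rng.lognormal(2.0, 1.0, 1000)      # population density (people per km^2)

lam = potential_fires(cwd, wsm, pop)
fires = rng.poisson(0.3 * lam)           # probabilistic model for realised counts
print(fires.mean(), lam.mean())
```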

 

Stephen Griffiths (University of Leeds)

Ocean tides and unresolved climate dynamics

Ocean tides are rarely simulated in global ocean circulation models, or climate models. Although this is justified for many applications, tides do play a role in global climate. In particular, it is well-known that small-scale tidally generated internal waves induce vertical mixing of density and heat in the stratified ocean interior, with implications for the large-scale temperature distribution of the ocean. Tides also interact with ice sheets and ice shelves in polar regions; indeed, amplified polar paleotides have been suggested as a possible catalyst of ice-sheet destabilisation and rapid climate change during the glacial cycles of the late Quaternary.

Such processes can only be understood using multiscale modelling accounting for both global tidal dynamics and smaller scale physics. Here, progress in developing such models is reported, which requires simplified mathematical formulations and efficient numerical schemes. These models can be used to investigate variations in paleotides (over glacial-interglacial cycles, for example), and possible changes in future tides and ocean mixing (driven by changes in ocean stratification, sea-level, and coastal ice sheet margins). Understanding interactions between tides and ice sheets requires mathematical modelling of grounding-line dynamics, which is an active area of research where further developments are needed.

 

Clare Hobbs (University of Exeter), Sebastian Wieczorek, Peter Cox

Rate-induced tipping

Climate systems subject to slowly varying external conditions can undergo unexpected rapid transitions from one stable state to another, a phenomenon known as tipping.  The well-established tipping mechanism is a bifurcation at some critical level of external conditions, where the stable state destabilises. Here, we discuss a different and less understood mechanism: rate-induced tipping.  In rate-induced tipping the system can be stable for any fixed level of external conditions, but if the external conditions are changed too fast, the system is unable to adapt to the changing stable state and tipping occurs.  Mathematically, the critical rates of external forcing define non-autonomous instabilities which cannot be captured by classical bifurcations.  We discuss how to obtain critical rates, both numerically and analytically, for canonical low-dimensional models, using an approach related to the validity boundary of geometric singular perturbation theory.  Our work has repercussions for climate change policy making: current policy focuses on critical levels of external conditions, and does not directly address critical rates of change as a potential trigger of tipping in the Earth system.
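
The canonical one-dimensional example often used to illustrate rate-induced tipping (not necessarily the models discussed in the talk) can be sketched as follows: dx/dt = (x + lambda)^2 - 1 with lambda ramped at rate r. In the co-moving variable y = x + lambda, dy/dt = y^2 - 1 + r, so a quasi-steady state exists only for r < 1 and the critical rate is r_c = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tips(r, t_end=50.0):
    """Integrate dx/dt = (x + lam)^2 - 1 with lam = r * t.  In the co-moving
    variable y = x + lam, dy/dt = y^2 - 1 + r, so a quasi-steady state exists
    only for r < 1: the critical rate is r_c = 1."""
    def rhs(t, x):
        return [(x[0] + r * t) ** 2 - 1.0]

    def escaped(t, x):                  # y = x + lam well past the unstable branch
        return (x[0] + r * t) - 5.0
    escaped.terminal = True

    sol = solve_ivp(rhs, (0.0, t_end), [-1.0], events=escaped, max_step=0.01)
    return sol.t_events[0].size > 0     # True if the state ran away (tipped)

for r in (0.5, 0.9, 1.05, 1.5):
    print(f"rate r = {r:4.2f}: {'tips' if tips(r) else 'tracks the moving state'}")
```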

 

Alasdair Hunter (University of Exeter), David Stephenson, Theodoros Economou, Ian Cook, Angelika Werner

Quantifying and understanding the collective risk from mid-latitude windstorms to society

Natural hazards such as windstorms, floods and hail can have serious consequences for society when they occur. It is necessary for decision makers to be able to quantify the risk from individual events as well as from multiple occurrences of the same type of hazard (collective risk).  A common assumption in hazard modelling is that the frequency of occurrence of events and the intensity of individual events are independent.  A deviation from that assumption of independence would have strong implications for the insurance/reinsurance industry. Here we show that, for some regions, European extratropical cyclones do not satisfy the independence assumption.

Time series of counts and mean vorticity of extratropical cyclones are constructed for the October-March extended winters using the NCEP-NCAR reanalysis. A positive correlation is found to exist between the winter counts and mean vorticity of extratropical cyclones over North West Europe, which is significant at the 95% level.  The observed correlation for extratropical cyclones over Europe is shown to be driven by joint forcing of the frequency and intensity by large-scale flow patterns.  The relationship between the frequency and intensity can then be reproduced by separately regressing them against suitable teleconnection patterns.

Having investigated the correlation between frequency and intensity for European windstorms, we further explore the implications for the collective risk.  Cantelli bounds are used to investigate the effect of including correlation on the extremes of the collective risk of extratropical cyclones.  Results suggest that assuming independence between the frequency and intensity underestimates the collective risk from extratropical cyclones by 20% or more in some regions.
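
A simulation sketch of how frequency-intensity correlation inflates the variance of the collective risk, together with the one-sided Cantelli bound computed from that mean and variance; the count and severity distributions here are illustrative assumptions.

```python
import numpy as np

def collective_risk(rho, n_winters=50_000, seed=8):
    """Simulate winter aggregate severity when storm counts and per-storm
    intensities share a common driver with strength rho."""
    rng = np.random.default_rng(seed)
    driver = rng.normal(size=n_winters)
    counts = rng.poisson(np.exp(2.0 + 0.4 * rho * driver))
    return np.array([rng.gamma(2.0, np.exp(0.3 * rho * d), size=n).sum() if n else 0.0
                     for n, d in zip(counts, driver)])

def cantelli(mu, var, threshold):
    """One-sided Chebyshev (Cantelli) bound: P(S >= mu + a) <= var / (var + a^2)."""
    a = threshold - mu
    return var / (var + a ** 2)

for rho in (0.0, 1.0):
    s = collective_risk(rho)
    mu, var = s.mean(), s.var()
    print(f"rho = {rho}: bound on P(S >= 3*mean) = {cantelli(mu, var, 3 * mu):.3f}, "
          f"empirical = {(s >= 3 * mu).mean():.4f}")
```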

 

Frank Kwasniok (University of Exeter)

Predicting critical transitions from time series using non-stationary modelling

Statistical techniques for predicting critical transitions in dynamical systems from time series are discussed. Firstly, a parametric model of the (marginal) probability density of a scalar variable is built from data, allowing for trends in the parameters to model a slowly evolving quasi-stationary probability density. These trends are then extrapolated to predict the nature and timing of structural changes in the probability density of the system. Secondly, a non-stationary stochastic dynamical model of the system is derived, incorporating trends in the drift and diffusion parameters. In the simplest case, this is noise-driven motion in a one-dimensional non-stationary potential landscape, but higher-dimensional reconstructions based on time-delay embeddings are also considered. Probabilistic predictions of future tipping of the system are made based on ensemble simulations with the estimated models.
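
A sketch of the simplest model class mentioned above: noise-driven motion in a one-dimensional potential whose shape drifts slowly in time, integrated with an Euler-Maruyama scheme. The estimation of the drift/diffusion trends from data, and the ensemble forecasting step, are not shown; all parameter values are illustrative.

```python
import numpy as np

def simulate(a_start=1.0, a_end=-0.5, sigma=0.25, dt=0.01, n=100_000, seed=9):
    """Euler-Maruyama integration of dx = -dU/dx dt + sigma dW for a slowly
    drifting double-well potential U(x, t) = x^4 / 4 - a(t) x^2 / 2.
    As a(t) decreases through zero the two wells merge into a single well."""
    rng = np.random.default_rng(seed)
    a = np.linspace(a_start, a_end, n)     # slow trend in the potential parameter
    x = np.empty(n)
    x[0] = -1.0                            # start in the left-hand well
    for i in range(1, n):
        drift = -(x[i - 1] ** 3 - a[i] * x[i - 1])       # -dU/dx
        x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

x = simulate()
# Early on the trajectory sits near x = -1; late on the well structure is gone
print(x[:1000].mean(), x[-1000:].mean(), x[-1000:].std())
```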

 

Valerie Livina (National Physical Laboratory), T. M. Lenton

Tipping points toolbox: anticipating, detecting and forecasting tipping points in climatic time series

We report the finalised framework of the tipping point toolbox, which serves for anticipation (early warning signals), detection (potential analysis) and potential forecasting of transitions and bifurcations in climatic records. The three modules were developed and reported in several publications in 2007-2013 [1-5], with applications to one-dimensional trajectories of dynamical systems in climatology, from paleorecords to historic, modelled and observed time series. Some of the results were independently confirmed by other research groups: the double-well-potential structure of Dansgaard-Oeschger events [6] and the bifurcation at 25 kyr BP in oxygen isotope series [7]. The methodology was also blind-tested [8] and applied in other branches of statistical physics [9] and ecology [10]. We demonstrate the application of the toolbox on Arctic sea-ice extent data, with projected disappearance of the summer sea ice in the mid-2030s, which is in agreement with modern GCM runs [11].
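
A sketch of the idea behind the potential-analysis module, following the construction described in the cited papers: an effective potential is reconstructed from the estimated probability density of the record, roughly U ≈ -(sigma^2/2) ln p, and the number of detected wells indicates the number of system states. The noise-level estimate and smoothing choices below are illustrative assumptions, not the toolbox's actual settings.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmin

def count_states(x, grid_size=200):
    """Reconstruct an effective potential U = -(sigma^2 / 2) * log(p) from a
    kernel density estimate of the record and count its local minima (wells).
    The noise level sigma is crudely estimated from one-step increments."""
    grid = np.linspace(x.min(), x.max(), grid_size)
    p = gaussian_kde(x)(grid)
    sigma2 = np.var(np.diff(x))
    U = -0.5 * sigma2 * np.log(p)
    wells = argrelmin(U, order=5)[0]
    return len(wells), grid[wells]

# Toy bimodal record standing in for a two-state climate variable
rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(-1.0, 0.3, 3000), rng.normal(1.0, 0.3, 3000)])
n_wells, locations = count_states(x)
print(n_wells, locations)
```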

[1] Livina and Lenton, GRL, 2007; [2] Livina et al., Climate of the Past, 2010; [3] Livina et al., Climate Dynamics, 2011; [4] Lenton et al., Phil Trans Royal Soc A, 2012; [5] Livina et al., arXiv:1212.4090; [6] Ditlevsen and Johnsen, GRL, 2012; [7] Cimatoribus, Climate of the Past, 2013; [8] Livina et al., Physica A, 2012; [9] Vaz Martins et al., Phys Rev E, 2010; [10] Dakos et al., PLoS One, 2012; [11] Livina and Lenton, Cryosphere, 2013.

 

Doug J. McNeall1, P.G. Challenor2, J.R. Gattiker3, E.J. Stone4

1. Met Office Hadley Centre, Exeter, UK

2. University of Exeter, Exeter, UK

3. Los Alamos National Laboratory, Los Alamos, NM, USA

4. University of Bristol, Bristol, UK

The potential of an observational data set for calibration of a computationally expensive computer model

We measure the potential of an observational data set to constrain a set of inputs to a complex and computationally expensive computer model. We use an ensemble of output from a computationally expensive model, corresponding to some observable part of a modelled system, as a proxy for an observational data set. We argue that our ability to constrain inputs to a model using its own output as data provides an upper bound on our ability to constrain the model inputs using observations of the real system.

The ensemble provides a set of known input and output pairs, which we use to build a computationally efficient statistical proxy for the full system, termed an emulator. We use the emulator to predict and rule out “implausible” values for the inputs of held-out ensemble members, given the output. As we have the true values of the inputs for the ensemble, we can compare our constraint of the model inputs with the true value of the input for any ensemble member. The measures have the potential to inform strategy for data collection campaigns, before any real-world data are collected, as well as acting as an effective sensitivity analysis.
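
A sketch of the implausibility calculation typically used in this style of history matching: an input is ruled out when the emulator prediction lies too many standard deviations from the held-out “observation”, with emulator, observation and discrepancy variances added in quadrature. The toy ensemble, kernel choices and the conventional threshold of 3 are assumptions, not the study's actual set-up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(11)

# Stand-in "ensemble": 5 input parameters -> one summary output
X = rng.uniform(0, 1, size=(250, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + 0.5 * X[:, 2] + 0.05 * rng.normal(size=250)

# Hold one member out and treat its output as the "observation"
X_train, y_train, x_true, z = X[:-1], y[:-1], X[-1], y[-1]

gp = GaussianProcessRegressor(RBF([0.5] * 5) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(X_train, y_train)

def implausibility(x_cand, z, var_obs=0.0, var_disc=0.0):
    """I(x) = |z - E[f(x)]| / sqrt(Var_emulator + Var_obs + Var_discrepancy)."""
    mean, sd = gp.predict(np.atleast_2d(x_cand), return_std=True)
    return np.abs(z - mean) / np.sqrt(sd ** 2 + var_obs + var_disc)

# Candidate inputs with I > 3 are ruled out as implausible
candidates = rng.uniform(0, 1, size=(5000, 5))
I = implausibility(candidates, z)
print(f"fraction of input space not ruled out: {(I <= 3).mean():.2f}")
print(f"implausibility at the true held-out input: {implausibility(x_true, z)[0]:.2f}")
```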

We use an ensemble of the ice sheet model Glimmer to demonstrate our metrics. The ensemble has 250 model runs with 5 uncertain input parameters, and an output variable representing the pattern of the thickness of ice over Greenland. We have an observation of historical ice sheet thickness that directly matches the output variable and offers an opportunity to constrain the model. We show that different ways of summarising our output variable (ice volume, ice surface area and maximum ice thickness) offer different potential constraints on individual input parameters. We show that combining the observational data gives increased power to constrain the model. We investigate the impact of uncertainty in observations or in model biases on our metrics, showing that even a modest uncertainty can seriously degrade the potential of the observational data to constrain the model.

 

Alex Megann (National Oceanography Centre)

Structural Uncertainty in Climate Projections – an ocean perspective

Much effort is currently being expended on tackling the uncertainty in climate projections associated with atmospheric forcing scenarios, as well as that arising from the fact that the optimum choice of model parameters is unknown (for example under the RAPID RAPIT project). At present, however, there is little understanding of the biases introduced by the intrinsic structure of these models.

Simulations of pre-industrial climates using two coupled climate models (HadCM3 and CHIME), which differ only in the vertical coordinate of their ocean component, will be compared. It will be shown that CHIME, which uses an isopycnic (constant density) vertical coordinate, has a superior representation of climatically important water masses such as North Atlantic Deep Water, Subantarctic Mode Water and Antarctic Intermediate Water, which by contrast are eroded by numerical diffusion in the z-coordinate ocean model of HadCM3. As a result of reduced spurious oceanic heat drawdown, CHIME has a warm sea surface bias, whereas HadCM3 is too cold at its surface, as are the Hadley Centre's other climate models.

In addition, we will show that the two models have different responses to increasing atmospheric CO2, as well as to increasing freshwater input. Finally, we will discuss the implications of structural differences for the interannual and decadal climate variability of the models, with a focus on the North Atlantic overturning circulation.

 

Isabelle Mirouze (Met Office), Dan Lea, Matt Martin, Adrian Hines

The challenges of weakly-coupled data assimilation systems

The Met Office is currently developing a weakly-coupled data assimilation system around the global coupled model HadGEM3 (Hadley Centre Global Environment Model, version 3). This model combines the atmospheric model UM (Unified Model) at 60 km horizontal resolution on 85 vertical levels, the ocean model NEMO (Nucleus for European Modelling of the Ocean) at 25 km (at the equator) horizontal resolution on 75 vertical levels, and the sea-ice model CICE at the same resolution as NEMO. The initial condition of the coupled model is corrected using two separate data assimilation systems: a 4D-Var for the atmosphere (VAR), and a 3DVar-FGAT for the ocean and sea ice (NEMOVAR). The background information in the data assimilation systems comes from a previous forecast of the coupled model.

Combining two different data assimilation systems, even weakly, raises certain scientific questions. For example, the ocean assimilation cycle has been reduced from 24 hours to 6 hours in order to comply with the atmosphere assimilation cycle: is this appropriate? Gravity waves are likely to be generated by the initialisation shock: how can such shocks be diagnosed and reduced? Coupling ocean and atmosphere requires including correlations between atmospheric and oceanic forecast errors: how should the error structure be characterised and modelled? Like any model, coupled models are imperfect: how can the model errors be quantified and estimated?

The weakly-coupled data assimilation system developed at the Met Office will be described, and results from some initial assessments will be presented, highlighting some of the challenges of coupled data assimilation.

 

Joe Osborne (University of Exeter)

The response of precipitation to sulphate aerosol forcing in the Northern Hemisphere mid-latitudes

Changes in the distribution of precipitation could have even greater socio-economic impacts than global warming. Less research has focused on precipitation change, however, because spatial coverage of the observed record is poor and physical understanding is weaker. The Northern Hemisphere mid-latitudes (NHMLs) are an ideal place to study land precipitation change, because this is where our longest and most reliable surface-based records exist. Developing previous theory, it is possible to identify a fingerprint of sulphate aerosol forcing in simulations of precipitation from general circulation models (GCMs). Such a fingerprint appears to be absent from the observed record.

Changes in global mean precipitation are constrained by the tropospheric energy budget. An increase in tropospheric latent heating associated with precipitation, L ΔP, must be compensated by tropospheric cooling through increases in the net radiative and sensible heat fluxes. Changes in cooling can be dependent on global mean surface temperature, k_T ΔT, or independent of it and directly due to forcing, ΔR_A. Therefore,

L ΔP ≈ k_T ΔT + ΔR_A,

with temperature-dependent effects, k_T ΔT, dominating, producing increases in precipitation of ~2% K^-1. Black carbon and greenhouse gas forcing contribute significant direct effects that stem from atmospheric absorption and oppose k_T ΔT, reducing total change. Because shortwave radiation is mainly absorbed at the surface, direct effects for sulphate aerosols are weak, meaning that sulphates produce larger changes in precipitation per unit temperature.

CMIP5 GCM global mean precipitation and temperature time series show a marked change in the precipitation-temperature relationship in the latter half of the twentieth century in many cases. Using a regression-based attribution technique, these differences in behaviour are shown to be due to sulphate aerosol forcing. This is in keeping with global mean precipitation being constrained by the tropospheric energy budget.

In the absence of global surface-based precipitation observations, the NHML region is considered. Here, the effect of sulphate aerosol forcing on precipitation, as well as on temperature, can be isolated in both simulations and observations. This aerosol fingerprint is compared to the strength of sulphate aerosol forcing in models, diagnosed from forcing time series. Robust relationships emerge in the GCMs between precipitation and sulphate aerosol forcing, and between temperature and sulphate aerosol forcing. However, observed temperature is consistent with GCMs incorporating strong sulphate aerosol forcing, while observed precipitation is consistent with those with weak sulphate aerosol forcing. There is a clear inconsistency. Further analyses of the precipitation records call their quality into question, with potentially spurious positive trends particularly evident in the autumn months.

 

Philip Sansom (University of Exeter), David B. Stephenson, Chris A. T. Ferro

On using emergent constraints to reduce structural uncertainty in climate change projections

When projecting century-scale climate change it is commonly assumed that the climate change response is independent of the climate mean state.  However, for some parts of the climate system feedback mechanisms exist that constrain the response to depend strongly on the mean state. Previous studies have quantified these emergent constraints using differences between models and used them to constrain the future climate change response.  We present a statistical framework for representing a multi-model ensemble that incorporates emergent constraints to help account for structural uncertainty in the climate change response. The statistical framework uses variations between different models and also between different runs of each model to estimate the emergent constraint, thus providing greater precision than existing methods. By specifying a model for the whole ensemble we are able to quantify both structural uncertainty and internal variability. Therefore, the projections include uncertainty from both sources and provide a more consistent assessment of the total uncertainty.  The modelling framework is applied to CMIP5 projections of cyclone frequency over the North Atlantic and Europe. The storm tracks simulated by the CMIP5 models are generally too zonal and extend too far into Europe.  On the flanks of the storm track, where the deviation from observations is greatest, the climate change response is found to depend strongly on the historical mean state of the storm track. Up to 50% of the structural uncertainty in the response in these regions can be accounted for by the historical mean states. Adjusted projections are presented based on a comparison with ERA-40 reanalysis data.
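
A much-simplified sketch of the core idea, not the hierarchical framework of the talk: the climate change response is regressed across models on the historical mean state, and the fitted relationship plus an observed or reanalysis mean state is used to adjust the projection. The full framework additionally separates internal variability using multiple runs per model; all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(12)
n_models = 20

# Per-model historical mean state and climate-change response (synthetic);
# the response depends on the mean state (the "emergent constraint")
hist_mean = rng.normal(5.0, 1.0, n_models)
response = -0.4 * (hist_mean - 5.0) + rng.normal(0.0, 0.15, n_models)

# Estimate the constraint by regression across the ensemble
slope, intercept = np.polyfit(hist_mean, response, deg=1)

# Adjust the multi-model-mean projection towards an observed mean state
obs_mean = 4.2                          # e.g. from a reanalysis
raw_projection = response.mean()
adjusted_projection = intercept + slope * obs_mean
print(f"raw: {raw_projection:+.2f}   adjusted: {adjusted_projection:+.2f}")
```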

 

Patrick Sessford (University of Exeter)

Quantifying sources of variation in multi-model ensembles: A process-based approach

The representation of physical processes by a climate model depends on its structure, numerical schemes, physical parameterisations and resolution, with initial conditions and future emission scenarios further affecting the output. The extent to which climate models agree is therefore of great interest, with greater confidence placed in results that are robust across models. This has led to climate model output being analysed as ensembles rather than in isolation, and quantifying the sources of variation across these ensembles is the aim of many recent studies. Statistical attempts to do this include the use of various variants of the mixed-effects analysis of variance or covariance (mixed-effects ANOVA/ANCOVA), usually focusing on identifying variation in a variable of interest due to model differences such as model structure or the carbon emissions scenario. Quantifying such variation is important in determining where models agree or disagree, but further statistical approaches can be used to diagnose the reasons behind the agreements and disagreements by representing the physical processes within the climate models. A process-based approach is presented that uses simulation with statistical models to quantify the sources of variation in multi-model ensembles. This approach is a general framework that can be used with any generalised linear mixed model (GLMM), which makes it applicable to statistical models designed to represent (sometimes complex) physical relationships within different climate models. The variation in the response variable can be decomposed into variance due to (1) variation in driving variables, (2) variation across ensemble members in the relationship between the response and the driving variables, and (3) variation unexplained by the driving variables. The method is demonstrated using vertical velocity and specific humidity as drivers to explain wet-day precipitation over the UK, using an ensemble from the UK Met Office Hadley Centre regional climate model. The variation in the precipitation is found to be due mainly to the variation in the driving variables rather than to variation across ensemble members in the relationship between the precipitation and the driving variables.
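
A toy sketch of the simulation-based decomposition for a linear mixed model: the variance of the response is split into a part from varying the drivers with the coefficients fixed at their ensemble means, a part from varying the member-specific coefficients with the drivers fixed, and an unexplained residual part. A fitted GLMM and the real drivers (vertical velocity, specific humidity) would replace the synthetic quantities below.

```python
import numpy as np

rng = np.random.default_rng(13)
n_members, n_days = 10, 2000

# Synthetic driver (e.g. vertical velocity) and member-specific relationships
x = rng.normal(size=(n_members, n_days))
slopes = 1.0 + 0.3 * rng.normal(size=n_members)      # relationship varies by member
intercepts = 2.0 + 0.2 * rng.normal(size=n_members)
resid_sd = 0.8
y = intercepts[:, None] + slopes[:, None] * x + resid_sd * rng.normal(size=x.shape)

# Simulation-based decomposition of Var(y):
# (1) vary the driver, hold the relationship at its ensemble mean
v_driver = np.var(intercepts.mean() + slopes.mean() * x)
# (2) vary the member-specific relationship, hold the driver at its mean
v_member = np.var(intercepts[:, None] + slopes[:, None] * np.full_like(x, x.mean()))
# (3) variance unexplained by the driver
v_resid = resid_sd ** 2

total = y.var()
for name, v in [("driver", v_driver), ("member", v_member), ("residual", v_resid)]:
    print(f"{name:8s}: {v / total:5.2f} of total variance")
```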

 

David A Stainforth1, Joseph D Daron2

1Grantham Research Institute on Climate Change and the Environment, London School of Economics

2Climate System Analysis Group, University of Cape Town

The role of initial condition ensembles in quantifying model climate under climate change

Can today's global climate model ensembles characterize the 21st century climate of their own “model-worlds”? This question is at the heart of how we design and interpret climate model experiments for both science and policy support. The role of initial condition (IC) uncertainty on multi-decadal timescales has to date received only limited exploration, due to the computationally expensive nature of atmosphere/ocean general circulation models (AOGCMs). Its potential significance, however, can be explored using simpler nonlinear systems with climate-like characteristics, using ensembles generated for climate-like situations. Here we present results from very large IC ensembles (up to 10,000 members) of the Lorenz ’84 model (Lorenz, 1984) with a seasonal cycle, coupled to the Stommel ’61 ocean (Stommel, 1961; van Veen et al., 2001). This system has several key characteristics of climate: rapidly varying atmosphere-like variables, slowly varying ocean-like variables, a seasonal cycle, and nonlinear behaviour. Having only five state variables, however, it is computationally fast to run and very large ensembles can be generated relatively easily.

We use this system to explore the consequences of having only small IC ensembles for the quantification of distributions representative of the system under stationary and non-stationary conditions. Such distributions can be considered the “climate” of the system. In non-stationary conditions the robustness of these distributions has direct relevance to how we address the climate prediction problem. Since “climate” is also often taken to be represented by means and variations over a 30-year period, the implications of constructing distributions over such fixed periods of time are also considered. Parallels are drawn with the use of AOGCMs to explore climate under pre-industrial and 21st century conditions.

Small ensembles are shown to be misleading in non-stationary conditions which parallel climate change, and sometimes also in stationary situations analogous to an unforced climate. The results suggest that ensembles of several hundred members may be required to robustly characterize a model’s climate. This raises questions about the degree to which today’s ensembles are able to quantify the differing contributions to prediction uncertainty from forcing, model and initial condition uncertainty (see Cox and Stephenson, 2007; Hawkins and Sutton, 2009). From a policy perspective, the results illustrate that relying on single simulations or small ensembles of climate model simulations is likely to lead to overconfidence and potentially poor decisions.
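
A sketch of a large initial-condition ensemble of the (uncoupled, constant-forcing) Lorenz ’84 atmosphere, showing how a distribution over the fast variable is built from many members; the system used in the talk adds a seasonal cycle and the Stommel ’61 ocean coupling, which are omitted here, and the parameter values are the commonly used ones.

```python
import numpy as np

def lorenz84(S, a=0.25, b=4.0, F=8.0, G=1.0):
    """Lorenz '84 low-order atmosphere: X is the strength of the westerly jet,
    Y and Z the phases of large-scale eddies (forcing F, G held constant here)."""
    X, Y, Z = S[..., 0], S[..., 1], S[..., 2]
    return np.stack([-Y ** 2 - Z ** 2 - a * X + a * F,
                     X * Y - b * X * Z - Y + G,
                     b * X * Y + X * Z - Z], axis=-1)

def integrate_ensemble(n_members=2000, dt=0.025, n_steps=4000, seed=14):
    """Fixed-step RK4 for an initial-condition ensemble started from tiny
    perturbations about a single state."""
    rng = np.random.default_rng(seed)
    S = np.array([1.0, 0.0, -0.75]) + 1e-3 * rng.normal(size=(n_members, 3))
    for _ in range(n_steps):
        k1 = lorenz84(S)
        k2 = lorenz84(S + 0.5 * dt * k1)
        k3 = lorenz84(S + 0.5 * dt * k2)
        k4 = lorenz84(S + dt * k3)
        S = S + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return S

final = integrate_ensemble()
# The spread of X across members approximates the model's "climate" distribution
print(final[:, 0].mean(), final[:, 0].std())
```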

 

Nick W. Watkins 1,2,3

1 Max Planck Institute for the Physics of Complex Systems, Dresden, Germany

2 Centre for the Analysis of Time Series, LSE

3 Centre for Fusion Space and Astrophysics, University of Warwick, UK.

Why wild events are like buses ... they're never along when you expect them ... 

I will talk about an important, and insufficiently appreciated, aspect of natural fluctuations, with obvious relevance to climate. This is the distinction between the relative frequency of occurrence of events (measured by the probability density function, or pdf) and their degree of sequential temporal dependence (measured by the autocorrelation function, or acf). This is the (slightly non-intuitive) distinction between the question “is this a 1 in 100 year event”, and the question “how long will we wait until the next one” …

The talk will complement my poster on Wednesday by drawing on my recent review paper [Watkins, GRL Frontiers, 2013, doi: 10.1002/grl.50103]. I will illustrate the point by drawing on several types of model for “bursty” time series, which can show both frequent high-amplitude “wild” events and long-range correlations between successive values.  The frequent large values due to the first of these effects can give rise to an “IPCC type I” burst composed of successive wild events. Conversely, long-range dependence, even in light-tailed models like Mandelbrot and van Ness’ fractional Brownian motion or Granger's FARIMA, can integrate “mild” events into an extreme “IPCC type III” burst.
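
A small sketch contrasting the two ingredients discussed: a heavy-tailed but independent series (wild pdf, negligible memory) versus a light-tailed FARIMA(0, d, 0) series (mild pdf, slowly decaying acf), the latter generated with the standard fractional-differencing expansion truncated at a finite number of weights; the Cauchy stand-in and the value of d are illustrative choices.

```python
import numpy as np

def farima(n, d, seed=15, trunc=2000):
    """FARIMA(0, d, 0): x_t = sum_k psi_k eps_{t-k}, using the fractional
    integration weights psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k,
    truncated after `trunc` terms."""
    rng = np.random.default_rng(seed)
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=n + trunc)
    return np.convolve(eps, psi, mode="valid")[:n]

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

n = 20_000
heavy = np.random.default_rng(16).standard_cauchy(n)    # wild pdf, independent values
lrd = farima(n, d=0.4)                                   # mild pdf, long memory

print("lag-100 acf:", round(acf(heavy, 100), 3), "(heavy-tailed iid) vs",
      round(acf(lrd, 100), 3), "(light-tailed FARIMA)")
```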

 

Danny Williamson (Durham University), Adam Blaker

Towards quantifying structural uncertainties in climate models

Uncertainty quantification for climate based on climate models requires treatment of a number of different sources of uncertainty. Statistical methods such as emulation, Bayesian calibration and history matching are available to aid the quantification of some of the better-known sources of uncertainty, such as that related to not knowing how the climate model will behave at untried settings of its parameters, and that related to not knowing which parameter settings lead to realistic climate simulations. An extremely important, yet more difficult, uncertainty to quantify is that of structural uncertainty or model discrepancy. This represents the extent to which missing processes and approximations in a climate model affect its ability to inform us about the real climate system. Currently, there has been little research into statistical methodology for the quantification of model discrepancy and it remains an avenue requiring a great deal of attention from statisticians and climate scientists.

In this talk I'll present an investigation into the structural uncertainty in the Atmosphere-Ocean General Circulation Model HadCM3. At a workshop in Durham in 2012 we identified a number of key processes that were said to have structural biases in HadCM3. These included the location and strength of the sub-polar and sub-tropical gyres, convection sites in the North Atlantic, the strength of the ACC transport, sea surface temperature patterns in the Southern Ocean, and North Atlantic sea ice extent. We explore these structural biases to see whether or not they really exist or whether they are artefacts of the choice of the model parameters. We use an unprecedented 20,000-member perturbed physics ensemble to identify models with more realistic representations of these key processes than the standard HadCM3, and use emulators to see whether there are parameter choices that would remove these biases altogether. We find that we can do better than the standard HadCM3 and that some of these “biases” are due to the parameter choices. We identify the extent of the remaining structural deficiencies and attempt to use the information from this study to quantify the model discrepancy for certain key outputs of the climate model.

 

Hugo Winter1, Jonathan Tawn1, Simon Brown2

1 STOR-i DTC, Lancaster University, UK

2 Met Office, UK

Modelling drought in Southern Africa with Extreme Value Theory

Natural disasters are rare events that can cause much social and economic damage. It is of interest to companies and decision makers when future natural disasters will occur and how severe they will be. Extreme Value Theory is an area of statistics that can be used to draw inferences about rare events. From an academic perspective, dependence between extreme values at different times and locations provides an interesting modelling challenge. This is especially so for environmental data, where several types of dependence can occur.  The work presented here aims to assess the spatial dependence of severe droughts using different approaches to modelling extremal dependence. A comparison will be made between the two most common methods, the joint tail approach of Ledford and Tawn (1997) and the conditional approach of Heffernan and Tawn (2004). Data used for this study are monthly rainfall values from the HadGEM2 global climate model. There will also be discussion of future extensions to the aforementioned models, with a view to modelling covariates and non-stationarity.
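
A sketch of the simple empirical diagnostic that often accompanies these modelling choices: the conditional exceedance probability chi(u) = P(V > u | U > u) on rank-transformed margins, whose limiting behaviour distinguishes asymptotic dependence from asymptotic independence. The fitted Ledford-Tawn and Heffernan-Tawn models themselves are not reproduced here, and the toy data are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def chi(u_probs, x, y):
    """Empirical chi(u) = P(V > u | U > u) after transforming both margins to
    uniform via ranks.  chi(u) -> 0 as u -> 1 suggests asymptotic independence;
    a positive limit suggests asymptotic dependence."""
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)
    return np.array([np.mean(v[u > q] > q) for q in u_probs])

# Toy "drought severity" at two neighbouring grid cells with a common signal
rng = np.random.default_rng(17)
common = rng.gamma(2.0, 1.0, 50_000)
site_a = common + rng.gamma(1.0, 1.0, 50_000)
site_b = common + rng.gamma(1.0, 1.0, 50_000)

print(np.round(chi([0.90, 0.95, 0.99], site_a, site_b), 3))
```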