
# CliMathNet Conference 2013 Poster abstracts

**Kopal Arora** (University of Exeter)

*Past and Future Variations in Tropical Cyclones*

Tropical cyclones are among the most destructive natural phenomena, causing great loss of life and property; cyclones have killed more people worldwide in the last fifty years than any other natural cataclysm. Despite their destructive nature, they play an important role in the stability of our tropical climate.

The present work focuses on detecting variations in tropical cyclone intensity by analysing observational and model data. Reanalysis data are considered over five ocean basins: the Indian Ocean, equatorial Atlantic, equatorial Pacific, North Pacific, and South Pacific. In each basin we look for recurring signals in the time series using spectral methods. After identifying definite cycles, we compare these oscillations with cycles found in other natural phenomena, for instance ENSO, to investigate any physical connection between the processes. Since these natural cycles are non-stationary, data analysis techniques such as wavelets suit the purpose. This work is then followed by an inspection of tropical cyclone intensity, and its destructive potential, under a climate change scenario.
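The spectral step can be sketched with a simple periodogram on synthetic data; the function name, the built-in 20-year cycle and all parameter values below are illustrative only, not taken from the study (a full analysis of non-stationary records would add wavelets, detrending and significance testing):

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period (in units of dt) with the largest periodogram power.

    Illustrative sketch only: a real analysis of non-stationary cyclone
    records would use wavelets and significance testing.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                      # remove mean so the zero bin is empty
    power = np.abs(np.fft.rfft(x)) ** 2   # raw periodogram via the FFT
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic annual index with a built-in 20-year cycle plus noise.
rng = np.random.default_rng(0)
years = np.arange(240)
index = np.sin(2 * np.pi * years / 20.0) + 0.3 * rng.standard_normal(240)
```

With 240 points the 20-year signal lands exactly on a Fourier frequency, so the estimate is exact here; real records are shorter and noisier, hence the wavelet methods mentioned above.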

We represent the climate change scenario in the model by doubling carbon dioxide (CO2) and varying other parameters. The climate model used for our studies is HadCM3. The perturbed physics ensemble (PPE) technique is applied to deal with parameter uncertainty in the model. Since we can change model parameters, we can pinpoint which ones contribute most to the destructive potential of cyclones. This method helps us to understand changes in tropical cyclone activity under climate change and to uncover any missing links between tropical cyclone formation and the natural variables influencing it.

On analysing reanalysis data we found an increase in the destructive potential of cyclones in the Indian Ocean of about 71.6%, the highest of all the ocean basins considered, over the period 1954 to 2011. The equatorial Pacific comes next, with an increase in the power dissipation index (PDI) of 70.3% over the same period, while the equatorial Atlantic shows a 59.9% increase. Interestingly, the North Pacific shows a smaller PDI rise than the South Pacific: the index rises by 21.4% in the North Pacific and by 31.5% in the South Pacific. Power spectrum analysis shows a significant cycle of 20-22 years in each PDI time series, indicating a connection between hurricane activity and the solar cycle. Other noticeable periodicities are 2, 5 and 6 years. Modelling studies show that the theoretical maximum achievable wind speed and minimum pressure levels point towards increasing destructiveness of the storms with increasing CO2 levels, and thus under a climate change scenario.
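The PDI comparisons can be written down compactly, assuming the standard definition of the power dissipation index as the time-integrated cube of maximum sustained wind speed (Emanuel, 2005); the function names and all numbers below are illustrative, not values from the study:

```python
import numpy as np

def power_dissipation_index(vmax, dt_hours=6.0):
    """PDI of a single storm: the integral of the cube of maximum sustained
    wind speed (m/s) over its lifetime, assuming the standard definition
    (Emanuel, 2005). Track points are assumed evenly spaced dt_hours apart.
    """
    v = np.asarray(vmax, dtype=float)
    return np.sum(v ** 3) * dt_hours * 3600.0   # convert hours to seconds

def percent_change(pdi_early, pdi_late):
    """Relative change in basin-total PDI between two periods, in percent."""
    return 100.0 * (pdi_late - pdi_early) / pdi_early
```

Summing storm PDIs per basin and period, and comparing with `percent_change`, is the kind of calculation behind figures such as the 71.6% Indian Ocean increase quoted above.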

**Sandra Catherine Chapman**^{1,3}, David Stainforth^{2,1}, Nicholas Wynn Watkins^{2,1}

1. Physics, University of Warwick, Coventry, United Kingdom

2. London School of Economics, London, United Kingdom

3. Department of Mathematics and Statistics, University of Tromso, Norway

*Quantifying local climate change at adaptation-relevant thresholds*

For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires the quantification of trends at specific quantiles of the distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time, and trends in the relevant quantiles may not simply follow any trend seen in the distribution mean. We present a method [1] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data [2], composed of time series of local daily temperature and precipitation from specific locations across Europe over the last 60 years. Our method uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to a combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction quantifies how different quantiles of the distributions are responding to changing climate. The responses are found to vary geographically across Europe, as one would expect given the different influences on local climate in, say, Western Scotland and central Italy. We also find many regionally consistent patterns of response, of potential value in adaptation planning. For example, in temperature, robust signatures at specific quantiles and locations can amount to as much as 2-4 degrees of warming over a period in which the global mean temperature has increased by 0.5 degrees. We discuss methods to quantify the robustness of these observed signatures and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impacts.
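The quantile-by-quantile comparison at the heart of the method can be illustrated on two observation periods. The function below is a hypothetical sketch only; the published method additionally separates natural variability from secular change and assesses the robustness of each signature:

```python
import numpy as np

def quantile_shifts(period_a, period_b, quantiles=(0.05, 0.5, 0.95)):
    """Change in selected quantiles of a local climate variable between two
    observation periods (period_b minus period_a).

    Sketch of the basic idea only; the quantile choices are illustrative.
    """
    qa = np.quantile(np.asarray(period_a, dtype=float), quantiles)
    qb = np.quantile(np.asarray(period_b, dtype=float), quantiles)
    return dict(zip(quantiles, qb - qa))
```

A uniform shift moves every quantile equally; warming concentrated in the warm tail would show up as a larger shift at the 0.95 quantile than at the median, which is exactly the kind of threshold-specific information adaptation planning needs.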

[1] Chapman, S.C., D.A. Stainforth and N.W. Watkins, 2013: On Estimating Local Long Term Climate Trends. Phil. Trans. R. Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287

[2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New, 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119, doi:10.1029/2008JD10201

**Andrew Ciavarella** (Met Office)

*Toward Operational Attribution of Extreme Events*

The attribution of the risk of extreme weather-related events to anthropogenic climate change is a subject of increasing interest to scientists, policy-makers and industry alike. By modelling the climate when influenced by the sum of anthropogenic and natural forcings versus the climate in the absence of anthropogenic influence we may estimate the changing odds of extreme weather and calculate the fraction of the risk of an event that is attributable to human activity using extreme value analysis.

We present the development of a new state-of-the-art system for the attribution of extreme climate events, based on the Met Office HadGEM3-A model. The system has already been applied to the study of a number of high-impact events, including the recent UK cold winters, the Moscow heat wave, African droughts and Australian floods, as part of the ACE (Attribution of Climate-related Events) international collaboration. Development will continue toward an operational system through the EU-funded EUCLEIA project, with a European focus but worldwide scope, issuing attribution products regularly in near-real time.

The system uses large ensemble runs, offering improved statistics of extremes, and focuses on the model's skill in reproducing regional events and climatological distributions through reliability diagrams and power spectra. Individual extreme events for which model reliability is high can be assigned a fraction of attributable risk (FAR). An upgrade in resolution from N96 L38 to N216 L85 is also planned, giving better resolution of weather systems and a better representation of the stratosphere, which together allow more, and smaller-scale, modes of variability to be represented. Robustness of the system to uncertainties introduced through inter-model variability is addressed by the inclusion of alternative realisations of the natural world produced by model-specific SST boundary conditions.
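The risk-based attribution logic can be sketched compactly, assuming the usual definition FAR = 1 - P0/P1 with P0 and P1 the event probabilities in the natural-only and all-forcings worlds; counting exceedances as below is a toy stand-in for the extreme value analysis the real system uses:

```python
def exceedance_probability(ensemble, threshold):
    """Empirical probability that an ensemble member exceeds the threshold.
    (An operational system would fit an extreme value distribution instead.)"""
    values = list(ensemble)
    return sum(x > threshold for x in values) / len(values)

def fraction_of_attributable_risk(p_nat, p_all):
    """FAR = 1 - P0/P1, where P0 is the exceedance probability in the
    natural-forcings-only ensemble and P1 in the all-forcings ensemble."""
    if p_all <= 0:
        raise ValueError("event has zero probability in the all-forcings world")
    return 1.0 - p_nat / p_all
```

For example, if an event is ten times more likely with anthropogenic forcing included, FAR = 0.9: nine tenths of the risk is attributable to human activity.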

**Laura Dawkins** (University of Exeter)

*Summary and Prediction of European Windstorm Footprint Characteristics*

This study has investigated how best to summarise windstorm footprints, and how well such summary statistics can be predicted from simpler track information. As part of the Extreme Wind-Storm (XWS) catalogue project, footprints of 43 historic extreme storms have been created using reanalysis-forced runs of the Met Office 25 km resolution regional model. The footprint is defined as the maximum 3-second gust at each grid point over Europe and the eastern Atlantic during a 72-hour period covering the passage of the storm.

Footprints have often been summarised using Storm Severity Indices (SSIs) defined as the cube of the excess wind speed above a threshold summed over the spatial domain. This study has found that for each of the 43 storms, the wind speed excesses at each grid point are well characterised by the 2-parameter Generalised Pareto distribution. These two parameters vary considerably from storm to storm and provide a good summary of the mean cubed wind speed excesses for each storm.
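As a sketch of summarising excesses with a two-parameter GPD, a method-of-moments fit is shown below on synthetic data. The fitting method here is an assumption for illustration; the study does not specify one, and a production analysis would more likely use maximum likelihood:

```python
import numpy as np

def gpd_fit_moments(excesses):
    """Method-of-moments fit of the 2-parameter Generalised Pareto
    distribution to threshold excesses.

    With sample mean m and variance v the estimators are
        shape  xi    = (1 - m**2 / v) / 2
        scale  sigma = m * (1 - xi)
    (valid for xi < 1/2, where the variance exists).
    """
    x = np.asarray(excesses, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

# Exponential excesses correspond to a GPD with shape xi = 0, scale 2.
rng = np.random.default_rng(42)
xi_hat, sigma_hat = gpd_fit_moments(rng.exponential(scale=2.0, size=200_000))
```

Storing just the fitted (shape, scale) pair per storm gives the compact two-parameter summary of the footprint wind-speed excesses described above.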

Linear regression has been used to try to predict SSI from simple track variables such as maximum wind speed and/or minimum pressure for each storm. The best predictor was found to be maximum wind speed over land, but it fails to account for much of the variance in SSI, most likely due to the spatial complexity of the footprints. Furthermore, a simple loss model involving population density in Europe shows that SSI alone is not a good predictor of loss, due to the large spatial variation in population density (i.e. a severe storm could easily miss heavily populated areas).

**Samantha Ferrett** (University of Exeter), Matthew Collins

*ENSO stability in a perturbed physics ensemble*

El Niño Southern Oscillation (ENSO) is a naturally occurring variation of the equatorial Pacific Ocean climate between abnormally cold conditions, termed a La Niña event, and abnormally warm conditions, referred to as an El Niño event. This fluctuation is known to affect extreme weather events worldwide, such as hurricanes, as well as ecosystems and more. It is currently not known how, or whether, ENSO will be affected by climate change, and many climate model studies have investigated this question. Many measures have been used in attempts to quantify changes in ENSO behaviour in models, one being the Bjerknes stability index (BJ index). Originally derived from a dynamical model of the tropical atmosphere-ocean system, the index consists of several distinct damping and amplifying feedbacks which, combined, represent the growth rate of the leading mode of variability. The index is calculated using the climate mean state and the strengths of simple linear relationships between ocean and atmospheric variables in the equatorial Pacific Ocean.

Here the index has been calculated for 33 members of a HadCM3 perturbed physics ensemble, featuring perturbations to ocean and atmosphere parameters, over various time periods in a climate change scenario. By determining any alterations in the behaviour of ENSO in the ensemble it is hoped a greater understanding of the influence that variations in processes and mean climate may have on ENSO can be gained.

For current climate conditions it is found that thermodynamic damping and the thermocline feedback dominate the index, although the ensemble shows a wide range of values for the BJ index. Under climate change most contributing components of the index show little change; however, thermodynamic damping shows large increases, leading to a decreased BJ index. There are also smaller increases in the zonal advective feedback and decreases in the thermocline feedback, suggesting a possible shift in the processes that dominate in triggering El Niño events, but no overall effect on stability.

**Glen R. Harris** (University of Exeter), David M. H. Sexton, James M. Murphy, Ben B. B. Booth, Mat Collins

*Probabilistic projections of transient climate change*

Direct application of a Bayesian framework (e.g., Rougier, 2007) to produce multivariate probability distribution functions (PDFs) for transient climate change is not yet practicable, due to insufficient computer resources to run the large ensembles of transient climate simulations, coupled to a dynamic ocean model, that would be required. We have therefore developed a tractable methodology (Harris et al., 2012) based upon sampling PDFs for the equilibrium response to doubled carbon dioxide and scaling by the global mean temperature predicted by a Simple Climate Model (SCM) in order to emulate the corresponding transient responses. The equilibrium PDFs, derived from a sampling of uncertainties in the modelling of surface and atmospheric processes in the Hadley Centre model HadSM3, are constrained by multiannual mean observations of recent climate. They include a contribution from structural uncertainty explored by the CMIP3 ensemble, in addition to the parameter uncertainty explored by the perturbed parameter ensemble (Sexton et al., 2012).

Since model variants that give good simulations of mean climate may not necessarily achieve the same level of skill with historical anthropogenic and natural climate forcing, the sampled transient projections are then reweighted, based upon the likelihood that they correctly replicate observed historical changes in surface temperature. The PDFs for transient response also account for modelling uncertainties associated with aerosol forcing, ocean heat uptake and the terrestrial carbon cycle, sampled using SCM configurations calibrated to the response of perturbed physics ensembles generated using the Hadley Centre model HadCM3, and other international climate model simulations.

The techniques described here formed the basis for UKCP09, a set of probabilistic projections for UK climate for the twenty-first century, and have been used for European climate impact studies in the ENSEMBLES project.

Here we summarize key steps in the implementation of our methodology, discuss the main caveats associated with our projections that arise from methodological assumptions made for tractability, and show how projections are constrained by the selected observational data. Despite these advances, the increasing spatial resolution and additional complexity associated with Earth System processes in contemporary climate models continue to make probabilistic climate projections a challenging prospect. We highlight this with results from a new Earth System ensemble in which, in the first experiment of this kind, parameters that control atmosphere, ocean, land carbon cycle and sulphur cycle processes in the HadCM3C Earth System model are simultaneously perturbed.

References:

Harris, G.R., D.M.H. Sexton, B.B.B. Booth, M. Collins and J.M. Murphy, 2012: Probabilistic projections of transient climate change. Clim. Dyn., doi:10.1007/s00382-012-1647-y

Rougier, J., 2007: Probabilistic inference for future climate using an ensemble of climate model evaluations. Clim. Change, 81, 247-264

Sexton, D.M.H., J.M. Murphy, M. Collins and M.J. Webb, 2012: Multivariate probabilistic projections using imperfect climate models part I: outline of methodology. Clim. Dyn., 38, 2513-2542, doi:10.1007/s00382-011-1208-9

**John Hemmings**^{1}, Peter Challenor^{2}

1. National Oceanography Centre, Marine Systems Modelling, UK

2. University of Exeter, College of Engineering, Mathematics and Physical Sciences, UK

*Using Eulerian and Lagrangian time-series data to improve models of ocean biogeochemistry*

Time-series data comprising physical and biogeochemical observations are essential for assessing the fidelity of biogeochemical model response to physical drivers. Models are typically semi-empirical constructs representing our understanding of dominant plankton ecosystem processes in the cycling of key elements such as carbon and nitrogen. Such models necessarily rely on many adjustable parameters to compensate for un-modelled biological complexity and incomplete ecological knowledge. Bio-Argo data promise to improve constraints on these parameters leading to more reliable predictions of environmental change. However, uncertainty in the physical environment to which the ecosystem responds raises particular challenges for model calibration and assessment, implying an external source of error in trial simulations. The expected simulation error must be taken into account in model-data comparison if reliable inferences are to be made.

Recent advances are enabled by the development of a new water-column simulation tool for plankton model analysis: the Marine Model Optimization Testbed (MarMOT). MarMOT supports multi-column ensemble simulations and parameter optimization in a realistic 3-D context. 1-D analysis allows computational effort to be focussed on data-rich locations such as time-series sites or Argo float tracks. MarMOT has been used to demonstrate, in experiments with synthetic data, that plausible patterns of environmental uncertainty can lead to strong temporal and spatial variability in the expected simulation error of observable biogeochemical properties. A new calibration scheme that uses this information in weighting residuals gives a marked improvement in parameter estimates when tested against established methods. The results motivate efforts to better characterize real-world uncertainties, including those associated with the effects of lateral advection or shear flow as well as those due to vertical transport processes.

A statistical modelling strategy for the environmental input data is described. Ideally, simulations for a trial model would represent its response to a perfect 3-D circulation so that advective tendencies depended on real-world currents and upstream tracer gradients for a hypothetical simulation. Gradient distributions can be provided by 3-D biogeochemical simulations and constrained by satellite SST gradients on the basis of modelled relationships. Satellite products also provide constraints for near-surface currents. In-situ measurements can be used to constrain physical properties of the water column. Where physical observations are unavailable, statistics must rely on model results and their uncertainties inferred from validation data. High levels of uncertainty mean that only weak parameter constraints can realistically be obtained from syntheses of data from individual sites or Argo floats. The size of the data set and its representativeness of the global ocean are key to obtaining good constraints for global models.

**Stephen Jewson** (Risk Management Solutions)

*Statistical Methods for Generating Climate Predictions based on Objective Probabilities*

In UKCP09, the Met Office used *subjective probabilities* to quantify uncertainty. A more scientific alternative is to use *objective probabilities*. Here's why.

1) One of the goals of science is to derive things in the most objective way possible, and to go beyond "trust me I'm an expert" wherever possible. Objective means: testable and based on evidence, as non-arbitrary as possible, and not based on personal judgement.

2) The UKCP subjective probabilities are based on subjective priors. Subjective priors are based on personal judgement. They are not testable, they are not based on evidence, and they are more or less arbitrary. You just have to believe the expert. A subset of statisticians like subjective probabilities because they are simple to use and self-consistent, and the Met Office has been strongly influenced by a particular small group of statisticians who like subjective probabilities. But given the controversy surrounding climate change, it makes sense to avoid subjective probabilities, and base climate predictions as much as possible on objective methods. A perfectly valid criticism of UKCP would be: "I don't believe the arbitrary choices you made with respect to your subjective priors", and, by definition of subjective probabilities, there is no defence against that criticism other than "They reflect the Met Office beliefs".

3) The alternative is to use objective probabilities, based on objective priors. Objective priors are determined by a criterion. There are two or three different criteria one could choose (such as matching probabilities, or maximising utility). For climate science I would propose that the best way to choose objective priors is to aim to make the predictions from climate models probability matching: that means they would be *reliable* in *perfect model tests*. In other words, the prior should be chosen so that if you test the model out of sample using its own data then 10% really means 10%. This method is objective, and has sensible properties.

4) In 2009 I wrote a paper (Jewson et al, 2009) explaining one method to generate an objective prior for climate models, for outputs that are multivariate normal. It's pretty simple, and involves calculating gradients of the output versus the climate model parameters. But I've recently figured out that the method I proposed in that paper is not guaranteed to give reliable forecasts in perfect model tests in all situations, especially if correlations between locations are unknown (although the method probably isn't too bad, and at least it is objective). Trevor Sweeting (retired stats professor from UCL, and expert on reliability of statistical models) has recently helped me dig deep into the statistics literature to find a slightly different method that gives reliable forecasts in perfect model tests in all multivariate normal situations. There are extensions to the non-normal case too.

5) Meanwhile Nic Lewis has, in parallel, published two papers (Lewis (2013) and Otto et al (2013)) introducing the idea of objective probabilities to the estimation of climate sensitivity.

**Jill Johnson** (University of Leeds), Lindsay Lee, Ken Carslaw and Zhiqiang Cui

*Exploring uncertainty in the cloud model MAC3*

The effect of global aerosols on clouds is one of the largest uncertainties in the radiative forcing of the climate. The complex and computationally expensive cloud model MAC3, with bin-resolved microphysics, can be used to simulate the formation of deep convective clouds (with liquid drops, ice crystals, graupel particles and snow) given a set of microphysical and atmospheric parameters, some of which are subject to a degree of uncertainty. Using MAC3, we want to explore the model parameters and processes of relevance to climate and identify the parameters that drive uncertainty in model outputs of interest. In particular, we look to quantify the cloud response to aerosol in the atmosphere and determine the factors that contribute most to it.

In order to understand how uncertainty in different cloud processes and parameters can impact on model outputs of interest, we need to explore these outputs given all possible input combinations. Unfortunately, the computationally intensive nature of the MAC3 model means that classical methods for uncertainty analysis involving direct Monte Carlo simulation are not feasible in real time. To overcome the computational barriers here, we adopt a strategy of using statistical emulation to explore the model uncertainty.

The fitted emulator model provides a statistical representation of the relationship between a set of uncertain inputs and a model output of interest, along with a representation of the uncertainty. The emulator is considerably quicker to evaluate than running the MAC3 model simulator itself. Once validated, the emulator can be used to explore the model output over the full input space defined by the set of uncertain model inputs, allowing for a variance-based sensitivity analysis to be performed and the leading causes of parametric uncertainty to be identified.
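The emulation idea can be sketched with a minimal Gaussian-process interpolator. Everything here (the kernel, fixed hyperparameters, and the sine function standing in for a MAC3 output) is a hypothetical illustration, not the emulator actually used:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance between two sets of input points."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPEmulator:
    """Minimal Gaussian-process emulator: fit to a small design of simulator
    runs, then predict the output (with uncertainty) elsewhere in input space.
    A real application would estimate hyperparameters and validate the fit.
    """

    def __init__(self, X, y, length_scale=1.0, jitter=1e-6):
        self.X = np.atleast_2d(np.asarray(X, dtype=float))
        self.y = np.asarray(y, dtype=float)
        self.ls = length_scale
        K = rbf_kernel(self.X, self.X, length_scale) + jitter * np.eye(len(self.y))
        self.L = np.linalg.cholesky(K)                 # K = L L^T
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, self.y))

    def predict(self, X_new):
        X_new = np.atleast_2d(np.asarray(X_new, dtype=float))
        Ks = rbf_kernel(X_new, self.X, self.ls)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = 1.0 - np.sum(v ** 2, axis=0)             # prior variance is 1
        return mean, np.sqrt(np.maximum(var, 0.0))

# Thirteen "simulator runs" of a cheap stand-in (sin) for the real model.
X_design = np.linspace(0.0, 6.0, 13).reshape(-1, 1)
emulator = GPEmulator(X_design, np.sin(X_design).ravel())
```

Once validated, `predict` can be evaluated huge numbers of times for a variance-based sensitivity analysis, at negligible cost compared with running the simulator itself.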

In this poster presentation, we determine the main sources of parametric uncertainty and the response to aerosol for a set of 12 outputs from the MAC3 cloud model, including particle masses, particle concentrations and precipitation rates.

This research is funded as part of the NERC project consortium ACID-PRUF.

**Nicholas Lewis**

*Objective Bayesian estimation of climate sensitivity and other key climate parameters*

A number of explicitly Bayesian studies have obtained estimated probability density functions (PDFs) for key climate system parameters by comparing simulations from adjustable-parameter climate models with observations. But the subjective Bayesian frameworks employed in such studies are generally not objectively valid, principally due to their use of prior distributions for the parameters that are highly informative and bias parameter estimation. In particular, use of uniform priors typically leads to the probability of high values for climate sensitivity being greatly overestimated.

The solution is not to abandon Bayesian methods, which offer the most coherent way to formulate probabilistic problems and to combine evidence from different sources, but to adopt an objective Bayesian approach. That involves using a noninformative prior for the parameters being estimated, rather than a subjective prior that has some arbitrary distribution or represents existing probabilistic estimates of the parameters. A noninformative prior reflects the statistical model and has no probabilistic interpretation: the idea is for it to allow the data to 'speak for itself' to the greatest extent possible. Noninformative priors are often judged by whether parameter PDFs estimated using them achieve at least approximate probability matching under repeated-sampling, and thereby enable valid confidence level statements.
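One standard route to a noninformative prior, stated here for context only (the study derives its prior differently, via a change of variables from the whitened observations), is Jeffreys' rule, which takes the prior proportional to the square root of the determinant of the Fisher information:

```latex
\pi(\theta) \;\propto\; \sqrt{\det \mathcal{I}(\theta)},
\qquad
\mathcal{I}_{ij}(\theta) \;=\; -\,\mathbb{E}_{y\mid\theta}\!\left[
  \frac{\partial^2 \log p(y \mid \theta)}{\partial \theta_i \, \partial \theta_j}
\right].
```

For a single parameter this prior is probability matching to first order, a classical result, which is the criterion for judging noninformative priors mentioned above.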

I have applied an objective Bayesian approach to reanalyse a well-known subjective Bayesian study (Forest et al., 2006). That study estimated equilibrium climate sensitivity, effective vertical ocean diffusivity and total aerosol forcing, using optimal fingerprints to compare multi-decadal observations with simulations by the MIT 2D climate model over a wide range of settings of the three climate parameters. The revised methodology uses Bayes' theorem to derive a PDF for the whitened observations (made independent using an optimal fingerprint transformation), for which a uniform prior is known to be noninformative. A dimensionally-reducing change of variables onto the parameter surface is then made, deriving an objective joint PDF for the climate parameters. The PDF conversion factor from the whitened variable space to the parameter surface represents a noninformative joint parameter prior, which is far from uniform. The noninformative prior prevents more probability being assigned to regions where the data respond little to parameter changes than the data uncertainty distributions warrant, producing better-constrained PDFs, particularly for climate sensitivity, than those obtained using uniform priors.

References:

Forest, C.E., P. H. Stone and A. P. Sokolov, 2006: Estimated PDFs of climate system properties including natural and anthropogenic forcings. Geophys. Res. Lett., doi:10.1029/2005GL023977.

Lewis, N, 2013: An Objective Bayesian improved approach for applying optimal fingerprint techniques to estimate climate sensitivity. J. Climate, doi: 10.1175/JCLI-D-12-00473.1 (in press).

**David Long** (University of Exeter), Mat Collins

*Investigating constraints on projections of future climate change*

We investigate the possibility of using simple theoretical models, based on fundamental principles of the climate system, to constrain and improve our quantitative understanding of future climate change. Representing the ocean as a one-dimensional heat diffusivity term, we construct a simple “one-box” heat balance model of the climate system to emulate the evolution of global annual mean surface temperature. It is shown that, for optimized values of ocean thermal diffusivity, our simple model is capable of accurately reproducing the temperature response of 16 CMIP5 models under historical and RCP8.5 forcing. Using the CMIP5 model ensemble we generate distributions of the parameters required as inputs for our simple model. Randomly sampling from these parameter distributions, we then produce 1000 separate emulations of the global annual mean surface temperature response for the historical and RCP8.5 experiments and create probability density functions of 21st century temperature change, defined as the difference between the average of the last 20 years of the 21st century and the average of years 1985-2005 of the historical period.
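A one-box model of this kind can be integrated in a few lines. The parameter values below are illustrative placeholders, not the optimized values from the study:

```python
import numpy as np

def one_box_temperature(forcing, heat_capacity=8.0, feedback=1.3, dt=1.0):
    """Forward-Euler integration of a one-box heat balance model,

        C dT/dt = F(t) - lambda * T,

    with T in K, forcing F in W m^-2, feedback lambda in W m^-2 K^-1,
    effective heat capacity C in W yr m^-2 K^-1 and dt in years.
    Parameter values are illustrative only.
    """
    T = np.zeros(len(forcing) + 1)
    for i, F in enumerate(forcing):
        T[i + 1] = T[i] + dt * (F - feedback * T[i]) / heat_capacity
    return T[1:]

# Response to an abrupt, sustained 2xCO2-like forcing of 3.7 W m^-2:
response = one_box_temperature(np.full(500, 3.7))
```

The equilibrium response is F/lambda (about 2.8 K with these placeholder values); fitting the heat capacity, or an equivalent ocean diffusivity, to each CMIP5 model's output is what tailors such an emulator to that model.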

Combining short and long wave feedback/forcing components at the top-of-atmosphere, estimated from idealized CO2 forcing experiments within the CMIP5 archive, with emulated temperature responses, under the assumption of global energy balance we can also emulate the short and long wave top-of-atmosphere response and precipitation response for the historical and RCP8.5 experiments. Using CMIP5 model output as perfect “truth”, we can weight each emulated response and construct a posterior probability density function of weighted 21st century change. Differences between the 21st century response of the CMIP5 models and weighted posterior distributions show the level of constraint the applied weight places on future climate projections.

We show that when weighting emulated temperature responses using linear trends over the observational period (1981-2010) there is a higher degree of constraint placed on 21st century change when using a multi-variate approach. There is also a noticeable increase in the constraint on 21st century temperature responses if the period of the linear trend weight is extended by 10 years (1981-2020), highlighting the importance of maintaining observational networks. Constraints on 21st century precipitation are shown to be considerably weaker than those on surface temperature, but strengthen when using a multi-variate approach and when the period over which the constraints are applied is extended into the near future.

**Thomas Mendlik** (University of Exeter), M. Collins, B. Bhaskaran

*Estimating climate change signals by accounting for multi-model dependencies*

In order to develop mitigation and adaptation strategies for a changing climate, decision makers often draw inference from an ensemble of climate simulations. However, different climate simulations tend to project quite distinct climates, so statistical tools seem to be a natural way for assessing this uncertainty.

But this is not an easy task, as ensembles of climate simulations usually violate the assumptions of standard statistical inference: climate models are not independent, nor can they be treated as drawn randomly and systematically from some population.

In this study we attempt to account for these shortcomings by incorporating assumptions of dependence into a hierarchical statistical model, estimating the climate change signal for temperature and precipitation over the Indian region. The data stem from regionalized HadCM3 climate simulations with perturbed parametrizations that systematically account for model uncertainties.

**Edward Pope** (Met Office)

*Application of deconvolution algorithms in climate data analysis*

Deconvolution is a technique for extracting information from blurred, noisy data, and can be applied to both spatial and time series data. There are two main forms of blurring: i) explicit smoothing, the result of intentional filtering operations in data analysis; ii) implicit smoothing, which can result from the limited performance of measuring instruments or, as investigated here, from the solution of differential equations on finite grids. In each case, smoothing modifies autocorrelations, moments and extremes in the data; however, the effect of implicit smoothing is notably difficult to quantify since the smoothing function is generally unknown. This is of particular concern when analysing and interpreting output produced by numerical models. Here, we explore the use of an Iterative Blind Deconvolution (IBD) technique as a way of 'unsmoothing' a time series of significant wave height produced by the Wave Watch 3 model, driven by the QUMP RCM ensemble. The aim of the approach is to extract a best estimate of the statistical properties of the unsmoothed data, and to use this information to perform a Peaks Over Threshold Extreme Value Analysis (EVA) that provides more realistic estimates of return levels of extreme waves. This effectively amounts to a statistical downscaling technique which can be applied even when there is no direct information about sub-grid processes. We find that performing an IBD on the simulation data increases estimates of the return levels of 1-in-100-year wave events by up to ~50%. The increase is attributed to the larger scale parameter of the Generalised Pareto distribution fitted to the unsmoothed data, which is consistent with theoretical predictions. This work suggests that deconvolution techniques can be a powerful tool in the analysis and interpretation of a wide variety of datasets, but they must be studied further to understand the full range of applications and uncertainties.
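The return-level step can be sketched using the standard peaks-over-threshold formula (Coles, 2001). All parameter values in the example assertions are hypothetical, not those fitted to the wave model output:

```python
import math

def gpd_return_level(u, sigma, xi, zeta_u, n_per_year, return_period):
    """N-year return level for a peaks-over-threshold model whose excesses
    over threshold u follow a GPD with scale sigma and shape xi:

        z_N = u + (sigma / xi) * ((N * n_per_year * zeta_u)**xi - 1)

    where zeta_u is the probability of an observation exceeding u and
    n_per_year is the number of observations per year (Coles, 2001, ch. 4).
    """
    m = return_period * n_per_year * zeta_u   # expected exceedances in N years
    if abs(xi) < 1e-12:                       # xi -> 0 limit is exponential
        return u + sigma * math.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)
```

The larger GPD scale parameter recovered after deconvolution feeds directly through this formula, which is how the 1-in-100-year estimates rise.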

**William Seviour** (University of Oxford), Lesley Gray, Dann Mitchell

*A simple geometrical method to classify stratospheric sudden warmings*

During winter, the Arctic stratosphere is dominated by a vortex of strong westerly winds. This vortex is highly variable and can break down rapidly in an event known as a stratospheric sudden warming (SSW). Such events usually occur in one of two manners: splits and displacements. During a split, the vortex separates into two daughter vortices, while during a displacement the vortex remains whole, but moves significantly away from the pole.

It has been known for about two decades that SSWs can be associated with anomalous surface weather patterns lasting up to two months. Recently, Mitchell et al. (2013) stressed the importance of differentiating between the two types of SSW by showing that surface anomalies following splits are much larger than those following displacements. It is therefore important to be able to objectively distinguish between these two types, particularly if we wish to study such events in climate models.

Despite this, there is no easily applicable method for determining vortex splits and displacements. Traditional methods rely on zonal-mean diagnostics, which fail to capture the asymmetry of the vortex, or on potential vorticity fields, which many models do not output. Here we outline a simple method based on a geometrical description of the vortex using geopotential height. It makes use of moment (or elliptical) diagnostics, which fit an ‘equivalent ellipse’ to the vortex, allowing properties such as the aspect ratio and centroid to be defined. The results of this method, including coupling to the surface, are presented for both reanalysis data and climate models.

References:

Mitchell, D.M., Gray, L.J., Anstey, J., Baldwin, M.P. and Charlton-Perez, A.J., 2013. The Influence of Stratospheric Vortex Displacements and Splits on Surface Climate. J. Climate.
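The moment (elliptical) diagnostics described above can be sketched in a few lines: the centroid comes from the first moments of the field above a threshold, and the aspect ratio from the eigenvalues of its second-moment (covariance) matrix. The field, threshold and grid below are invented for illustration, not taken from the study.

```python
import numpy as np

def equivalent_ellipse(field, threshold):
    """Fit an 'equivalent ellipse' to the region where field >= threshold,
    using excess-weighted first and second area moments."""
    y, x = np.nonzero(field >= threshold)
    w = field[y, x] - threshold  # excess above threshold as moment weight
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)
    # central second moments -> 2x2 covariance matrix of the region
    cov = np.cov(np.vstack([x - cx, y - cy]), aweights=w)
    evals = np.sort(np.linalg.eigvalsh(cov))
    aspect = np.sqrt(evals[1] / evals[0])  # major-to-minor axis ratio
    return (cx, cy), aspect

# Hypothetical "vortex": an elongated Gaussian blob on a 100x100 grid,
# centred at (60, 40), wider in x than in y.
yy, xx = np.mgrid[0:100, 0:100]
blob = np.exp(-(((xx - 60) / 20.0) ** 2 + ((yy - 40) / 8.0) ** 2))
(cx, cy), aspect = equivalent_ellipse(blob, threshold=0.3)
print(round(cx), round(cy), round(aspect, 1))
```

A split would show up as two separate above-threshold regions (or a very large aspect ratio just before separation), while a displacement appears as a centroid far from the pole with a modest aspect ratio.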

**David A. Stainforth**^{1,2}, Sandra Chapman^{2,4}, Nicholas Watkins^{1,3,2}

1 London School of Economics, Houghton Street, London, United Kingdom. d.a.stainforth@lse.ac.uk

2 Department of Physics, University of Warwick, United Kingdom.

3 British Antarctic Survey, Cambridge, United Kingdom.

4 University of Tromsø, Tromsø, Norway.

*Robustness and Relevance in Observed Changes in Climatic Distributions*

Observational time series of local weather variables can be processed into observations of the changing local climatic distributions of those variables (Chapman et al., in press). Such information provides a means of addressing how global-scale climate change has influenced, and arisen from, local changes. It also provides information directly relevant to many adaptation decisions, which are typically taken at local and regional scales.

The limited length of the observational time series and the non-stationary nature of the system, however, create a requirement to trade off the resolution of the climatic distributions at different points in time against the size of the observed change in distribution over time. The focus of this presentation will be on sensitivity analyses in this trade-off and the consequent conclusions which can be drawn regarding how the robustness of the results varies by geographical location and quantile of the distributions. A limited-confidence bounding-box approach is taken, akin to the “envelope of possibilities” discussed by Stainforth et al. (2007) for perturbed-physics ensembles, and having similarities to the interpretational approach considered in Otto et al. (2013, submitted). The connection between the observational analyses presented here and the model assessments in these previous studies relates to how they are interpreted as a guide for the future. For such purposes the role of epistemic uncertainty presents a significant barrier to more quantitative statistical interpretations of the data.

Sensitivity analyses will be presented for observed changes in daily maximum/minimum summer/winter temperatures across Europe. The appropriate interpretation of the bounding-box approach will be discussed in this context, along with the relevance of the information for adaptation decision makers.

References:

Chapman, S.C., Stainforth, D.A., Watkins, N.W., 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, in press.

Otto, F.E.L., Ferro, C.A.T., Fricker, T.E. and Suckling, E.B., 2013. On judging the credibility of climate projections. Climatic Change, submitted.

Stainforth, D.A., Downing, T.E., Washington, R., Lopez, A. and New, M., 2007. Issues in the interpretation of climate model ensembles to inform decisions. Philos. Trans. R. Soc. A, 365, 2163-2177.

**Zoe Thomas** (University of Exeter), Peter Cox, Frank Kwasniok, and Richard Jones

*Early Warnings of Abrupt Transitions in the East Asian Summer Monsoon over the Penultimate Glacial Cycle*

High-resolution speleothem δ18O records from China show evidence of abrupt changes in the East Asian summer monsoon (EASM). This paper tests the hypothesis of Schewe et al. (2012), who propose a bifurcation structure in the EASM. We look for early warning signals of a bifurcation in a speleothem record spanning the penultimate glacial cycle, applying a suite of analytical methods used in tipping-point research. We find no clear evidence of bifurcations in the abrupt monsoonal shifts between 230 and 150 ka BP; however, a clear signal of slowing down is found during the Weak Monsoon Interval (WMI) just before Termination II in two regionally distinct speleothem records. These results are supported by potential analysis, modulated by Northern Hemisphere summer insolation, which illustrates the changing shape of the potential. The two states become in turn very stable and then very unstable, directly forced by the insolation. However, flattening of the potential occurs during the WMI before Termination II, suggesting that a different mechanism operates during this period before a termination. These results indicate that an understanding of the underlying physical dynamics of the monsoon system is needed in order to obtain an early warning signal of a monsoon transition in the future.
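Slowing down of the kind reported here is commonly diagnosed as a rise in lag-1 autocorrelation within a sliding window. The sketch below applies that standard indicator to a synthetic AR(1) series with slowly increasing persistence; it is a generic stand-in, not the speleothem analysis itself.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a standard 'critical slowing down' indicator."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def sliding_indicator(series, window):
    """Lag-1 autocorrelation in a moving window along the series."""
    return np.array([lag1_autocorr(series[i:i + window])
                     for i in range(len(series) - window + 1)])

# Synthetic record: AR(1) noise whose persistence ramps up, mimicking the
# approach to a bifurcation (a toy stand-in for a proxy record).
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.95, n)  # slowly increasing AR(1) coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

ind = sliding_indicator(x, window=400)
print(round(ind[0], 2), round(ind[-1], 2))  # indicator rises along the series
```

In practice the record would first be detrended, and the window length and detrending bandwidth themselves become sensitivity parameters of the analysis.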

**Ben Timmermans** (National Oceanography Centre), Peter Challenor, Christine Gommenginger

*Probabilistic uncertainty analysis of predicted wave conditions from a wave model with spatial and temporal variability*

Wave models are employed for forecasting and hindcasting ocean surface wave conditions. They are nonlinear, rely on approximate physics that requires tuning, and take uncertain, high-dimensional input data such as forcing winds. It is therefore challenging to obtain quantitative measures of uncertainty about predictions. An uncertainty analysis could be carried out using large ensembles of runs, but this is often prohibitive for a wave model, even where large computational resources are available.

An approach to this problem is to run a designed computer experiment and interpolate the output using a statistical model. The statistical model, based on a Gaussian process, is computationally cheap and can be used to perform various uncertainty and sensitivity analyses. This method has so far proved effective in performing uncertainty analysis for simple wave model configurations using the state-of-the-art model Wavewatch III. Previously, analysis was performed to evaluate the effect of uncertainty about wave model "tuning" parameters on wave growth. The wave model configurations were highly simplified in that only a single grid cell was used, together with a constant wind.

This work builds upon previous findings by considering uncertainty in predictions in a more realistic case. The analysis was based upon waves generated by winds blowing over Lake Michigan. To perform the analysis we applied emulator methods and produced quantitative measures of uncertainty for predicted wave (average) height and period that account for variability in forcing winds, in addition to a number of uncertain model tuning parameters. The relative effects of the different sources of uncertainty were analysed and are discussed.
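The emulation idea can be sketched with a minimal Gaussian-process interpolator written from scratch. Everything below is an assumption for illustration: a one-dimensional "tuning parameter", a toy response function standing in for expensive simulator runs, and arbitrary kernel settings; it is not the Wavewatch III configuration.

```python
import numpy as np

def rbf(a, b, length, var):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_emulator(x_train, y_train, x_new, length=0.3, var=1.0, noise=1e-6):
    """GP interpolation of simulator output: posterior mean and variance at
    new inputs, conditioned on a small designed set of runs."""
    K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = rbf(x_new, x_train, length, var)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf(x_new, x_new, length, var) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Eight hypothetical design points of a toy "simulator" response.
x_design = np.linspace(0.0, 1.0, 8)
y_design = np.sin(2 * np.pi * x_design)
x_query = np.array([0.55])
mean, varp = gp_emulator(x_design, y_design, x_query)
print(round(float(mean[0]), 2), bool(varp[0] < 0.01))
```

The emulator's posterior variance is what makes the subsequent uncertainty and sensitivity analyses cheap: the expensive model is run only at the design points.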

**Nick W. Watkins**^{1,2,3}

1 Max Planck Institute for the Physics of Complex Systems, Dresden, Germany

2 Centre for the Analysis of Time Series, LSE

3 Centre for Fusion Space and Astrophysics, University of Warwick, UK

*Compound Extremes and Bunched Black (or Grouped Grey) Swans*

Observed “wild” natural fluctuations may differ substantially in their character. Some events may be genuinely unforeseen (and unforeseeable), as with Taleb’s “black swans”. These may occur singly, or may have their impact further magnified by being “bunched” in time, with accompanying issues about event identification.

Some others may, however, be rare extreme events taken from a light-tailed underlying distribution such as an exponential or Gaussian, that is either known a priori, or to be inferred. Studying their occurrence may then be tractable with the methods of extreme value theory [e.g. Coles, 2001], suitably adapted to cope with temporal dependence or spatial correlation if observed to be present.

This presentation, however, focuses on a third broad class [reviewed in Watkins, GRL Frontiers, 2013, doi: 10.1002/grl.50103]. Such “bursty” time series may show comparatively frequent high-amplitude “wild” events, and/or “slow” long-range correlations between successive values. The frequent large values due to the first of these effects, modelled in economics by Mandelbrot in 1963 using heavy-tailed probability distributions, can give rise to an “IPCC type I” burst composed of successive wild events. Conversely, long-range dependence, even in a light-tailed Gaussian model like Mandelbrot and van Ness’ fractional Brownian motion, or Granger’s FARIMA, can integrate “mild” events into an extreme “IPCC type III” burst.

I will show how a standard statistical time series model, linear fractional stable motion (LFSM), which descends from the two special cases advocated by Mandelbrot, allows these two effects to be varied independently, and will present results from a preliminary study of such bursts in LFSM. The consequences for burst scaling when low frequency effects due to dissipation (FARIMA models), and multiplicative cascades (such as multifractals) are considered will also be briefly discussed, as will be the physical assumptions and constraints associated with making a given choice of model.
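The separation between the "slow" (long-range dependent) and "wild" (heavy-tailed) ingredients can be played with in a FARIMA(0,d,0) toy model. The sketch below is a generic illustration using a truncated MA representation and Student-t innovations as the heavy-tailed stand-in; it is not the LFSM study itself.

```python
import numpy as np

def farima_weights(d, n):
    """MA(infinity) coefficients of FARIMA(0,d,0):
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return psi

def simulate_farima(d, n, rng, heavy=False):
    """Series with memory parameter d, by truncated MA filtering of Gaussian
    (light-tailed) or Student-t (heavy-tailed) innovations."""
    psi = farima_weights(d, n)
    eps = rng.standard_t(1.5, size=2 * n) if heavy else rng.standard_normal(2 * n)
    return np.convolve(eps, psi)[n - 1:2 * n - 1]

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
x_slow = simulate_farima(0.4, 2000, rng)              # long memory, light tails
x_wild = simulate_farima(0.0, 2000, rng, heavy=True)  # no memory, heavy tails

# Long memory shows up as slowly decaying autocorrelation; heavy tails as
# occasional huge excursions.
print(round(acf(x_slow, 5), 2), round(acf(x_wild, 5), 2))
```

Varying d and the tail index independently is exactly the freedom the abstract attributes to LFSM; this Gaussian/Student-t pairing only mimics the two limiting cases.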

**Sebastian Wieczorek** (University of Exeter), Bernard De Saedeleer & Michel Crucifix (Université Catholique de Louvain, Louvain-la-Neuve, Belgium)

*Is the astronomical forcing a reliable and unique pacemaker for Climate? A conceptual model study*

There is evidence that ice age cycles are paced by astronomical forcing, suggesting some kind of synchronisation phenomenon. Here, we identify the type of such synchronisation and explore systematically its uniqueness and robustness using a simple paleoclimate model and the concept of a pullback attractor. As the insolation is quite a complex quasiperiodic signal involving different frequencies, the traditional concepts used to define synchronisation to periodic forcing are no longer applicable. Instead, we explore a different concept of generalised synchronisation in terms of (coexisting) synchronised solutions for the forced system, their basins of attraction and instabilities. In this way, we uncover multistable synchronisation (reminiscent of phase- or frequency-locking to individual periodic components of the astronomical forcing) at low forcing strength, and monostable or unique synchronisation at stronger forcing. In the multistable regime, different initial conditions may lead to different paleoclimate histories. To study their robustness, we analyse Lyapunov exponents that quantify the rate of convergence towards each synchronised solution (local stability), and basins of attraction that indicate critical levels of external perturbations (global stability). We find that even though synchronised solutions are stable in the long term, there exist short episodes of desynchronisation where nearby climate trajectories diverge temporarily (for about 50 kyr). As the synchronised solution can sometimes lie close to the boundary of its basin of attraction, a small perturbation could quite easily make the climate jump between different histories, reducing predictability. Our study reveals a possibility for the climate system to wander among different climatic histories, related to preferential synchronisation regimes on obliquity, precession or combinations of both, over the history of the Pleistocene.

**Robin Williams** (University of Exeter)

*A comparison of ensemble post-processing methods for extreme events*

Ensemble post-processing methods are used in weather and climate prediction to form probability distributions that represent forecast uncertainty. Several such methods have been proposed in the literature, including logistic regression, ensemble dressing, Bayesian model averaging and nonhomogeneous Gaussian regression. We conduct an imperfect model experiment with the Lorenz 1996 model to investigate the performance of these methods, especially when forecasting the occurrence of rare, extreme events. We show how flexible bias-correction schemes can be incorporated into these post-processing methods, and that allowing the bias correction to depend on the ensemble mean can yield considerable improvements in skill when forecasting extreme events. In the Lorenz 1996 setting, we find that ensemble dressing, Bayesian model averaging and nonhomogeneous Gaussian regression perform similarly, while logistic regression performs less well.
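As a flavour of the post-processing setting, a minimal logistic regression of a binary "extreme event" on an ensemble-mean predictor can be sketched as follows. The synthetic forecast archive, threshold and noise level are invented for illustration and are far simpler than the Lorenz 1996 experiment in the abstract.

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """Logistic regression of a binary event on a single forecast predictor
    (e.g. the ensemble mean), fitted by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Synthetic archive: the event is the truth exceeding a threshold; the
# ensemble mean is the truth plus forecast error (a toy stand-in).
rng = np.random.default_rng(2)
truth = rng.standard_normal(5000)
ens_mean = truth + 0.5 * rng.standard_normal(5000)
event = (truth > 1.0).astype(float)

beta = fit_logistic(ens_mean, event)
p_hi = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * 2.0)))  # prob at ens mean 2
p_lo = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * 0.0)))  # prob at ens mean 0
print(round(p_hi, 2), round(p_lo, 2))
```

The ensemble-mean-dependent bias corrections discussed in the abstract amount to enriching the predictor side of such regressions; the fitting machinery stays the same.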

**Hugo Winter**^{1}, Jonathan Tawn^{1}, Simon Brown^{2}

1 STOR-i DTC, Lancaster University, UK

2 Met Office, UK

*Modelling drought in Southern Africa with Extreme Value Theory*

Natural disasters are rare events that can cause great social and economic damage. Companies and decision makers have an interest in when future natural disasters will occur and how severe they will be. Extreme Value Theory is an area of statistics that can be used to draw inferences about rare events. From an academic perspective, dependence between extreme values at different times and locations provides an interesting modelling challenge, especially for environmental data, where several types of dependence can occur. The presented work aims to assess the spatial dependence of severe droughts using different approaches to modelling extremal dependence. A comparison will be made between the two most common methods: the joint tail approach of Ledford and Tawn (1997) and the conditional approach of Heffernan and Tawn (2004). The data used for this study are monthly rainfall values from the HadGEM2 global climate model. Future extensions to the aforementioned models will also be discussed, with a view to modelling covariates and non-stationarity.

References:

[1] Heffernan, J.E., Tawn, J.A. (2004). A conditional approach for multivariate extreme values (with discussion), J. Roy. Statist. Soc. B 66, 497–546.

[2] Ledford, A.W., Tawn, J.A. (1997). Modelling dependence within joint tail regions. J. Roy. Statist. Soc. B 59, 475–499.
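A first diagnostic for the kind of extremal dependence at issue is the empirical measure chi(u) = P(Y > u | X > u) at a common marginal quantile: for asymptotically independent data (the situation the Ledford-Tawn model handles) it decays towards zero as the threshold is raised. The bivariate Gaussian sample below is a generic illustration, not the HadGEM2 data.

```python
import numpy as np

def chi_u(x, y, q):
    """Empirical chi(u) = P(Y > u_y | X > u_x), with u_x, u_y the common
    marginal quantile q. Decay towards 0 as q -> 1 suggests asymptotic
    independence; a positive limit suggests asymptotic dependence."""
    ux, uy = np.quantile(x, q), np.quantile(y, q)
    joint = np.mean((x > ux) & (y > uy))
    return joint / (1.0 - q)

# Hypothetical rainfall anomalies at two nearby locations: correlated
# Gaussians, which are asymptotically independent in the extremes.
rng = np.random.default_rng(3)
n = 200_000
z = rng.standard_normal(n)
x = z + 0.7 * rng.standard_normal(n)
y = z + 0.7 * rng.standard_normal(n)

c90, c99 = chi_u(x, y, 0.90), chi_u(x, y, 0.99)
print(round(c90, 2), round(c99, 2))  # chi shrinks as the threshold rises
```

Watching how such estimates behave as q approaches 1 is what motivates choosing between the joint tail and conditional modelling approaches compared in the abstract.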

**Andrew Zammit-Mangion** (University of Bristol)

*Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework*

Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has generally led to solutions that rely on strong prior assumptions or physically-based deterministic models to reduce the dimensionality of the problem. Here, we present a new approach for estimating the Antarctic contribution which incorporates only descriptive aspects of the physically-based models, and does so in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding, and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty and yields credible intervals on all estimated quantities. The sea-level rise contribution estimated using this approach corroborates that found using a statistically independent method.