CliMathNet Conference 2013 Plenary speakers

For schedule, see http://www.climathnet.org/conference2013/schedule/

 

Mat Collins (University of Exeter), Samantha Ferrett, David Long, Thomas Mendlik, Indrani Roy

Constraints on Future Climate Projections (Presentation)

Complex climate models may be used to make quantitative projections of future climate change, but different models produce different outputs. We are uncertain about the magnitude, and sometimes even the sign, of future climate change. We can use ensembles of models to search for relationships between past, verifiable climate, climate trends and climate variability on the one hand and future climate change on the other, in order to constrain projections and to quantify and reduce uncertainties. We describe research towards constraining projections of three climate variables: (i) the global hydrological cycle, which may be related to past trends in temperature and radiative fluxes; (ii) the El Niño Southern Oscillation, which can be quantified using linear stability theory; and (iii) the Indian Monsoon, which is partly constrained by global factors but is also affected by local sea-surface temperatures and land-surface conditions.
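
As a schematic illustration of how such a constraint can work (not the authors' method, and with entirely hypothetical numbers), an observable quantity and the projected change can be regressed against each other across an ensemble of models, and the real-world observation then used to narrow the projection:

    import numpy as np

    # Hypothetical ensemble: one entry per climate model.
    observable = np.array([1.2, 1.5, 0.9, 1.8, 1.1, 1.6])  # e.g. a verifiable past trend simulated by each model
    projection = np.array([2.1, 2.6, 1.7, 3.0, 2.0, 2.7])  # the future change projected by the same model

    # Fit the cross-model relationship (the constraint).
    slope, intercept = np.polyfit(observable, projection, 1)

    # Apply the real-world observation, with its uncertainty, to constrain the projection.
    rng = np.random.default_rng(0)
    obs_samples = rng.normal(1.3, 0.1, 10000)   # observed value and observational error (made up)
    constrained = slope * obs_samples + intercept
    print(constrained.mean(), constrained.std())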

 

Henk Dijkstra (Utrecht University)

Complex networks: a useful tool in climate research? (Presentation)

Analysis of the topology of interaction/recurrence networks reconstructed from climate time series has recently been proposed as an approach to understanding climate variability. In this talk a critical assessment is given of the usefulness of this approach. First, a short introduction to complex network theory, interaction/recurrence network reconstruction and the topological analysis techniques will be given. The focus will then be on three example problems: (i) network-based early warning indicators of bifurcation (tipping) points, (ii) network-based indicators of anomaly propagation and (iii) a possible hierarchical structure of the climate network.
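
A minimal sketch of one common way to reconstruct such a network, assuming links are defined by thresholding pairwise correlations between grid-point time series (other constructions, e.g. recurrence networks, differ in detail):

    import numpy as np

    def correlation_network(series, threshold=0.5):
        """series: array (n_nodes, n_times), one climate time series per grid point.
        Returns a binary adjacency matrix linking strongly correlated nodes."""
        corr = np.corrcoef(series)                   # pairwise Pearson correlations
        adj = (np.abs(corr) >= threshold).astype(int)
        np.fill_diagonal(adj, 0)                     # no self-links
        return adj

    # Example with random surrogate data; topological measures (degree, clustering, ...)
    # would then be computed on the adjacency matrix.
    rng = np.random.default_rng(0)
    adj = correlation_network(rng.standard_normal((50, 500)), threshold=0.3)
    print("node degrees:", adj.sum(axis=1))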

 

Peter Düben (University of Oxford), Tim Palmer, Hugh McNamara

The use of imprecise processing to improve accuracy in weather & climate prediction (Presentation)

The use of probabilistic methods in weather and climate models promises a reduction of long-term biases, an improved representation of sub-grid-scale variability, and more skilful estimates of uncertainty than deterministic methods provide. It might be useful to extend the application of probabilistic methods from the software level, such as the use of stochastic parametrisation, to the hardware level, for example by using imprecise computational methods that are not bit-reproducible. Giving up the requirement to calculate all operations correctly and at double precision promises large reductions in energy demand and therefore in computational cost. A reduction in computational cost would, in turn, allow simulations at higher resolution or with more ensemble members.

We investigate the use of imprecise computational methods for a dynamical core of a global atmospheric model that is based on spectral discretisation methods. While correct processing and high precision are retained for the large-scale dynamics, for which exactness is crucially important, we emulate imprecise processing for the small-scale dynamics. The small scales possess an inherent uncertainty due to limited grid resolution and parametrisation schemes, so precise methods appear to be over-engineered.

We study two approaches to imprecise methods in order to gain performance in atmospheric modelling: the use of stochastic processors and the use of very low real-number accuracy. Stochastic processors reduce accuracy, since they make occasional faulty calculations, but they offer a significant reduction in power consumption. The use of lower accuracy for real numbers can significantly increase the performance of the model simulations, since less storage is needed and more data fits into memory and cache. We emulate stochastic processors and very low real-number accuracy by randomly flipping bits at a prescribed error rate after each real-number operation. The results indicate that both approaches possess high potential for weather and climate models.
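
A minimal sketch of the bit-flip emulation, assuming IEEE double-precision numbers manipulated through their 64-bit integer representation (the emulator used in the study may differ in detail):

    import numpy as np

    def flip_random_bits(x, error_rate, rng):
        """Randomly flip one bit per affected element of a float64 array, at a prescribed
        per-element error rate, emulating a faulty processor after a real-number operation."""
        bits = x.copy().view(np.uint64)
        hit = rng.random(x.shape) < error_rate           # which elements suffer a fault
        positions = rng.integers(0, 64, size=x.shape)    # which bit is flipped
        bits[hit] ^= np.uint64(1) << positions[hit].astype(np.uint64)
        return bits.view(np.float64)

    rng = np.random.default_rng(1)
    a = np.linspace(0.0, 1.0, 8)
    print(flip_random_bits(a * a, error_rate=0.5, rng=rng))  # exaggerated error rate, for visible effect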

 

Michael Goldstein (Durham University)

Bayesian uncertainty analysis: linking simulators and climate (Presentation)

There is a growing area of methodology concerned with Bayesian uncertainty analysis for complex physical systems that are modelled by computer simulators. The methodology is general, but each application has characteristics arising from its own field. In this talk, I shall outline some of the general principles associated with this methodology and discuss their relevance to climate science, focusing both on the practical aspects of the analysis (how can we predict what climate is likely to be?) and on the foundational ones (why should our methods work and what do our answers mean?).

 

Peter Guttorp (University of Washington)

The Heat Is On! A statistical look at the state of the climate (Presentation)

For a statistician, climate is the distribution of weather and other variables that are part of the climate system. This distribution changes over time. From this point of view we will consider an ensemble of temperature simulations from CMIP5 and talk about how to compare them to data. We will also look at how climate change is evident in data and models using a statistical tool called the shift function. Looking at a finer scale of resolution, we compare minimum temperatures in data from southern Sweden to a regional model run.
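
A minimal sketch of the shift function, understood here as the difference between corresponding quantiles of two samples (a hypothetical illustration, not the speaker's code):

    import numpy as np

    def shift_function(sample_a, sample_b, probs=np.linspace(0.05, 0.95, 19)):
        """Difference between the quantiles of sample_b and sample_a; positive values
        show how far the distribution has shifted upward at each quantile."""
        return probs, np.quantile(sample_b, probs) - np.quantile(sample_a, probs)

    rng = np.random.default_rng(2)
    past = rng.normal(5.0, 3.0, 5000)      # e.g. daily minimum temperatures, earlier period
    present = rng.normal(6.5, 3.5, 5000)   # later period: warmer mean, wider spread
    probs, shift = shift_function(past, present)
    print(np.round(shift, 2))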

 

Peter Haynes (University of Cambridge), Fenwick Cooper

Calculating the forced response of the tropospheric circulation (Presentation)

Some aspects of the coupling between stratosphere and troposphere can be understood as a tropospheric response to stratospheric forcing, with the tropospheric circulation acting, through the coupled effects of eddies and mean flow, as an amplifier. The Fluctuation-Dissipation Theorem (FDT) is one theoretical tool available to predict the tropospheric response to forcing. The FDT provides an estimate of the linear operator relating forcing to response, based only on the statistics of the unforced tropospheric circulation, calculated from a suitable time series. The simplest prediction of the FDT, already exploited in the context of the response to stratospheric forcing, is that the response to forcing will be proportional to the longest correlation timescale in the unforced circulation. Potentially the FDT can provide more precise information on the structure and magnitude of the response to an arbitrary forcing. However, its usefulness is limited by (a) sampling issues (i.e. the accuracy of the prediction is limited by the length of the time series of the unforced circulation) and (b) the Gaussianity assumption in the traditional form of the FDT. This presentation will provide a quantitative analysis of (a), present a non-Gaussian extension of the traditional FDT, which we refer to as a 'non-parametric FDT', and discuss its usefulness as a predictor of changes in the tropospheric circulation.
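
For background, in its traditional quasi-Gaussian form the FDT estimates the operator mapping a steady forcing to the mean response as L ≈ ∫ C(τ) C(0)^-1 dτ, built entirely from lag-covariances of the unforced circulation. A minimal numerical sketch of that estimator (not the non-parametric extension presented in the talk) is:

    import numpy as np

    def fdt_response_operator(x, max_lag, dt=1.0):
        """Quasi-Gaussian FDT estimate from an unforced time series.
        x: array (n_times, n_vars) of anomalies; returns L such that the mean response
        to a steady forcing f added to the tendency is predicted to be roughly L @ f."""
        x = x - x.mean(axis=0)
        n = x.shape[0]
        c0_inv = np.linalg.inv(x.T @ x / n)              # inverse lag-0 covariance
        L = np.zeros((x.shape[1], x.shape[1]))
        for lag in range(max_lag + 1):
            c_lag = x[lag:].T @ x[:n - lag] / (n - lag)  # lag-covariance C(lag)
            L += c_lag @ c0_inv * dt                     # Riemann sum of C(tau) C(0)^-1
        return L

    # Synthetic red-noise (AR(1)) series standing in for model output.
    rng = np.random.default_rng(3)
    x = np.zeros((20000, 2))
    for t in range(1, 20000):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal(2)
    print(fdt_response_operator(x, max_lag=100))         # close to 10 times the identity here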

 

David Hendry (University of Oxford), Felix Pretis

Econometric modelling and forecasting of non-stationary time series (Presentation)

Non-stationarity has two main sources: stochastic trends and distributional shifts. The first can be addressed by differencing and cointegration, now well-established procedures in time-series analysis and forecasting. The second is more problematic, as the most pernicious changes, namely location shifts, are often unanticipated and lead to forecast failure, so a new approach to obtaining forecasts that are robust after such location shifts will be described. Recent developments to detect and remove location shifts during modelling include impulse-indicator saturation (IIS) and step-indicator saturation (SIS), where indicator variables are created for every observation (IIS) or cumulated (SIS) and added to the set of candidate variables for an automatic model search. Although there will then be more candidate variables (N) than observations (T), N > T is feasible with an extended block-search algorithm. The theory will be explained and illustrated, and has been applied to model anthropogenic influences on atmospheric CO2.
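
A minimal sketch of the indicator sets and of a crude split-half version of the block search (the operational algorithm uses iterative general-to-specific selection, so this is only illustrative):

    import numpy as np

    def impulse_indicators(T):
        """IIS: one dummy variable per observation."""
        return np.eye(T)

    def step_indicators(T):
        """SIS: cumulated impulses, i.e. a step starting at each observation."""
        return np.tril(np.ones((T, T)))

    def block_search(y, X, indicators, crit=3.0):
        """Split the saturating indicator set into blocks, regress y on X plus each block,
        and retain indicators whose |t|-value exceeds crit."""
        keep = []
        for block in np.array_split(np.arange(indicators.shape[1]), 2):
            Z = np.column_stack([X, indicators[:, block]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            sigma2 = resid @ resid / (len(y) - Z.shape[1])
            se = np.sqrt(sigma2 * np.diag(np.linalg.pinv(Z.T @ Z)))
            tvals = beta / se
            keep.extend(block[np.abs(tvals[X.shape[1]:]) > crit])
        return np.array(keep)

    # Toy series with outliers at observations 10 and 70.
    rng = np.random.default_rng(4)
    T = 100
    y = rng.standard_normal(T)
    y[10] += 8.0
    y[70] += 8.0
    print("retained impulse indicators:", block_search(y, np.ones((T, 1)), impulse_indicators(T)))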

 

Chris Jones (University of North Carolina, Chapel Hill)

Lagrangian data assimilation as a paradigm

Assimilating Lagrangian data into ocean models has proved very successful. Its importance derives from the fact that most subsurface observations are made by instruments that are carried by the flow itself. Its mathematical structure, however, is suggestive for climate problems more generally. It separates the system state in various ways: fast/slow; high/low-dimensional; relatively tame/chaotic; linearizable/highly nonlinear. Its success will be outlined and explained, and the possibility of its serving as a paradigm will be discussed.

 

Erland Källén (ECMWF)

Reasons for forecast skill improvements at ECMWF (Presentation)

The forecast skill at ECMWF has improved steadily over the past three decades, and today's 6-day forecasts are as skilful as 3-day forecasts were 30 years ago. The main reasons behind these skill improvements are threefold: improved accuracy of the forecast model, improved accuracy of the initial state and improved observational coverage. Recently an analysis has been made of the relative importance of these three factors. For 20 years an assessment of the forecast uncertainty has also been made using an ensemble method. The skill of the uncertainty estimate has progressed in line with the forecast skill improvements. Future developments of the forecast system, including the ensemble element, will be discussed.

 

Lindsay Lee (University of Leeds), Ken Carslaw, Carly Reddington, Kirsty Pringle, Graham Mann, Jill Johnson.

Sensitivity analysis and calibration of a global aerosol model (Presentation)

Aerosol effects on clouds have proved to be a persistent source of uncertainty in the calculation of climate radiative forcing, and as a result complex aerosol microphysical models have been developed. These microphysical models simulate the lifetime of global aerosols such as sulphate, sea salt and black carbon from emission through to deposition, via processes such as condensation, evaporation and chemical processing. The simulation of such processes within a computer code results in estimates of the global aerosol distribution that are subject to uncertainties, owing to the need to parameterise processes to account for incomplete knowledge or for unresolved model scales. The uncertainty in global aerosol models is currently estimated with multi-model ensembles, which show large diversity between models but give little information on the sources of this diversity. The calibration of global aerosol models has done little to reduce this model diversity, despite better agreement between individual models and observations. It is unclear whether models are right for the wrong reasons.

Here we use the GLOMAP global aerosol model developed in Leeds to investigate new methods of model calibration using statistical methods that take into account model uncertainty and its sources. Expert elicitation, statistical design, emulation and variance-based parametric sensitivity analysis have all been applied to GLOMAP, leading to a much more complete understanding of the model behaviour. We see that model sensitivity to its uncertain parameters is globally and temporally heterogeneous, but there are patterns of regional and seasonal similarity that can be understood in terms of aerosol science, giving us confidence in the model's ability to simulate global aerosol. We compare the emulated aerosol to observations, identify structural deficiencies present in the model, and attempt a calibration of the global aerosol model where no structural deficiencies are obvious. Sensitivity analysis indices and model response surfaces for each of the uncertain model parameters help us to assess the robustness of our calibration.
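
A minimal sketch of the variance-based part of such an analysis, assuming a cheap emulator has already been built for one model output; the response surface used below is entirely made up:

    import numpy as np

    def first_order_sobol(model, n_params, n_samples=4096, rng=None):
        """Estimate first-order Sobol sensitivity indices by the pick-freeze method:
        S_i = E[f(A) (f(AB_i) - f(B))] / Var(f), with AB_i taking column i from A."""
        rng = np.random.default_rng(0) if rng is None else rng
        A = rng.random((n_samples, n_params))
        B = rng.random((n_samples, n_params))
        fA, fB = model(A), model(B)
        total_var = np.var(np.concatenate([fA, fB]))
        indices = []
        for i in range(n_params):
            ABi = B.copy()
            ABi[:, i] = A[:, i]
            indices.append(np.mean(fA * (model(ABi) - fB)) / total_var)
        return np.array(indices)

    # Stand-in for an emulated aerosol response surface over three scaled parameters.
    emulator = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 1] * X[:, 2]
    print(first_order_sobol(emulator, n_params=3))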

This research is funded as part of the NERC research projects: AEROS and GASSP.

 

Tim Lenton (University of Exeter)

Early warning of climate tipping points

A ‘tipping point’ occurs when a small change in forcing triggers a strongly non-linear response in the internal dynamics of a system, qualitatively changing its future state. Large-scale ‘tipping elements’ have been identified in the Earth’s climate system that may pass a tipping point under human-induced global change this century. At the smaller scale of ecosystems, some critical regime shifts have already been observed, and more are anticipated in future. Such abrupt, non-linear changes are likely to have large impacts, but our capacity to forecast them has historically been poor. Recently, much excitement has been generated by the possibility that approaching tipping points carry generic early warning signals. I will critically examine the prospects for gaining early warning of approaching climate tipping points. Promising methods are based on detecting ‘critical slowing down’ in the rate at which a system recovers from small perturbations, and on characteristic changes in the statistical distribution of its behaviour (e.g. increasing variability). Early warning signals have been found in paleo-climate data approaching past abrupt transitions, and in models being gradually forced past tipping points. I will show new results from the analysis of recent observational climate data. In particular, we detect a tipping point in Arctic sea-ice cover around 2007. Furthermore, analysis of sea surface temperature datasets from the North Pacific shows strong slowing down in response to perturbations, which can only be partly explained by the observed deepening of the mixed layer.
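
A minimal sketch of the early-warning indicators mentioned above (sliding-window lag-1 autocorrelation and variance of a detrended series), offered only as an illustration of the general method:

    import numpy as np

    def early_warning_indicators(series, window):
        """Sliding-window lag-1 autocorrelation and variance of a linearly detrended series;
        a sustained rise in either is the signature of critical slowing down."""
        t = np.arange(window)
        ac1, var = [], []
        for start in range(len(series) - window):
            w = series[start:start + window]
            w = w - np.polyval(np.polyfit(t, w, 1), t)   # remove the local linear trend
            ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
            var.append(w.var())
        return np.array(ac1), np.array(var)

    # Synthetic example: an AR(1) process whose memory slowly increases towards 1.
    rng = np.random.default_rng(5)
    phi = np.linspace(0.2, 0.97, 2000)
    x = np.zeros(2000)
    for t in range(1, 2000):
        x[t] = phi[t] * x[t - 1] + rng.standard_normal()
    ac1, var = early_warning_indicators(x, window=400)
    print("lag-1 autocorrelation, first vs last window:", ac1[0], ac1[-1])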

 

Valerio Lucarini (University of Hamburg), Jeroen Wouters

A general statistical mechanical approach for deriving parametrizations

We consider the problem of deriving approximate autonomous dynamics for a number of variables of a dynamical system, which are weakly coupled to the remaining variables. Our findings have relevance for the construction of parametrizations of unresolved processes in many non-equilibrium systems, and most notably in geophysical fluid dynamics.  

We first propose a systematic way to construct a surrogate dynamics such that the expectation value of any observable agrees, up to second order in the coupling strength, with its expectation evaluated on the full dynamics. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parametrized as a stochastic forcing with given spectral properties. The other is a memory term, coupling the system of interest to its previous history through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system.
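
Schematically (in notation chosen here for illustration, not necessarily that of [1,2]), if the resolved variables x obey dx/dt = F(x) + ε Ψ(x, y) with weak coupling ε to the unresolved variables y, the surrogate dynamics described above has the form

    dx/dt = F(x) + ε M(x) + ε^2 [ σ(x, t) + ∫₀^∞ h(τ, x(t − τ)) dτ ],

where M is the first-order deterministic correction, σ is a stochastic forcing whose correlation properties are prescribed by the statistics of the fluctuating part of the coupling, and h is a memory kernel constructed from the lagged correlations of the unresolved variables.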

Furthermore, we show that such a surrogate dynamics agrees up to second order with an expansion of the Mori-Zwanzig projected dynamics. This implies that parametrizations of unresolved processes suited for prediction and those suited for the representation of long-term statistical properties are closely related, provided one takes into account, in addition to the widely adopted stochastic forcing, the often-neglected memory effects. We emphasize that our results do not rely on assuming a time-scale separation and, if such a separation exists, can be used equally well to study the statistics of the slow as well as the fast variables [1,2].

[1] J. Wouters and V. Lucarini, Disentangling multi-level systems: averaging, correlations and memory, J. Stat. Mech. P03003, doi:10.1088/1742-5468/2012/03/P03003 (2012)

[2] J. Wouters and V. Lucarini, Multi-level Dynamical Systems: Connecting the Ruelle Response Theory and the Mori-Zwanzig Approach, J. Stat. Phys., doi:10.1007/s10955-013-0726-8 (2013)

 

Beatrice Pelloni (University of Reading)

A model for large-scale atmospheric flow: the semigeostrophic system

In this talk, I will survey the reasons for considering the so-called semigeostrophic system for modelling large-scale atmospheric flows. I will explain why this system is also mathematically appealing, and give a summary of existing results as well as of work in progress.

 

Chiara Piccolo (Met Office), Mike Cullen

Evaluation of model errors using data assimilation techniques (Presentation)

We wish to demonstrate how data assimilation techniques can be used to quantify model errors. Model error on a case-by-case basis is unknowable, as the governing equations cannot be solved completely and some processes cannot be represented by equations. However, information about the statistics of model error can be inferred from the observations. In order to achieve this, we incorporate a stochastic representation of model error that can be calibrated. This can be physically based, for instance the stochastic backscatter scheme, or be represented by a simple covariance model as used in the operational data assimilation. We then carry out a stochastic data assimilation using an ensemble DA method, with the model in the ensemble augmented by the stochastic term. The ensemble is then run on into a forecast. The predictions are verified by testing the hypothesis that the truth is a member of the population represented by the ensemble. Results are shown using an ensemble of 4D-Vars, comparing the stochastic backscatter and other physically based model error representations with that given by a simple covariance model. These results can be used to inform the design of ensembles. They also give a measure of the quality of the deterministic model, because the size of the model error is estimated independently of the uncertainty in the initial data.
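
The hypothesis that the truth is a member of the population represented by the ensemble is commonly checked with rank (Talagrand) histograms; a minimal sketch of that check, given here only as an illustration and not necessarily the diagnostic used in the study:

    import numpy as np

    def rank_histogram(truth, ensemble):
        """truth: array (n_cases,); ensemble: array (n_cases, n_members).
        If the truth is statistically indistinguishable from the members, its rank
        within each sorted ensemble is uniformly distributed (a flat histogram)."""
        ranks = (ensemble < truth[:, None]).sum(axis=1)
        return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

    # Synthetic check: a well-calibrated 20-member ensemble gives a roughly flat histogram.
    rng = np.random.default_rng(6)
    truth = rng.normal(0.0, 1.0, 5000)
    ensemble = rng.normal(0.0, 1.0, (5000, 20))
    print(rank_histogram(truth, ensemble))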

 

Nigel Wood (Met Office)

GungHo into the future: developing a dynamical core for exascale machines

To continue improving the accuracy and reliability of our climate models requires us to harness the power of some of the world's largest supercomputers. To add to the challenge, supercomputers are entering a period of radical change in their architecture. It is predicted that by the early 2020s exascale machines (capable of performing 10^18 FLOPS) will become a reality. This will be achieved, though, by a huge increase in the number of processors; numbers in the region of a million processors seem likely.

Rising to this challenge is made more difficult by the fact that the UK Met Office's model (the MetUM) is unified in that the same dynamical core (and increasingly also the same physics packages and settings) is used for all our operational weather and climate predictions. The model therefore has to perform well across a wide range of both spatial scales [O(10^0)-O(10^4)km] and temporal scales [O(10^-1)-O(10^4) days] as well as a wide range of platforms.

This talk will outline the current status of the MetUM and then focus on GungHo!, the project that is redesigning the dynamical core of the model to radically improve its scalability whilst retaining good accuracy.