Phil Browne, University of Reading

Title: Particle filters: nonlinearity, the curse of dimensionality and the future.

Abstract: Particle filters are a class of Monte Carlo methods for performing data assimilation. Their key advantage is that they aim to represent the full posterior PDF of interest, without making simplifying (linear) assumptions. In the case of nonlinear model dynamics, non-Gaussian observations or non-Gaussian priors, such assumptions are not always justified.

One major drawback of a particle filter approach is filter degeneracy: the Monte Carlo method gives non-zero weight to only a very small subset of the ensemble members. Here I shall discuss the developments which have addressed the problem of filter degeneracy.
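
As a minimal illustration of the degeneracy problem (a sketch of the standard bootstrap importance-weight update in Python, not of any of the specific schemes discussed in the talk; the ensemble size and observation dimension below are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_obs = 100, 500          # illustrative sizes only

    # Particles drawn from the prior; observations are the truth plus unit-variance noise.
    truth = np.zeros(n_obs)
    particles = rng.normal(truth, 1.0, size=(n_particles, n_obs))
    y = truth + rng.normal(0.0, 1.0, size=n_obs)

    # Bootstrap weight update: w_i proportional to the Gaussian likelihood p(y | x_i).
    log_like = -0.5 * np.sum((y - particles) ** 2, axis=1)
    log_like -= log_like.max()             # stabilise before exponentiating
    weights = np.exp(log_like)
    weights /= weights.sum()

    # Effective sample size: near n_particles is healthy, near 1 is degenerate.
    ess = 1.0 / np.sum(weights ** 2)
    print(f"max weight = {weights.max():.3f}, effective sample size = {ess:.1f}")

With even a moderately large observation space, almost all of the weight collapses onto one or two particles, which is the degeneracy referred to above.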

The particle filters which have been designed to address filter degeneracy are not yet competitive with much more established methods in the Gaussian case. I will discuss other particle filters which have been developed in order to get the best possible result from data assimilation without addressing the degeneracy issue.

In this talk I will discuss current efforts to combine the methods from these two different particle filter approaches, and suggest several pressing open questions which must be addressed in order to arrive at a particle filter method applicable to numerical weather prediction.

 

Mike Cullen, Met Office

Title:  The physics of large-scale atmospheric circulations.

Abstract: Observations of the atmospheric circulation show a rather flat but non-stationary energy spectrum at wavelengths greater than about 4000 km, then a steep (-3) spectrum between wavelengths of 4000 and 500 km, and then a -5/3 spectrum at smaller scales. The boundary of the large-scale regime is primarily determined by the vertical structure of the zonal mean circulation, including the tropopause height and the reversal of the meridional temperature gradient above it, which means that disturbances reach their maximum amplitude at the tropopause. The large-scale circulation is close to geostrophic and hydrostatic balance, and its evolution can be described by the semi-geostrophic model introduced by Eliassen. This model is controlled by the thermodynamics, and the total atmospheric circulation is determined diagnostically by the requirement to maintain geostrophic and hydrostatic balance. It typically generates very stable anomalies, which explain large-scale weather variations. In the tropics, the weakening of the rotational constraint means that horizontal pressure and temperature gradients become weak and the large-scale regime only exists in a zonal mean sense. Even apparently small-scale circulations, such as convection, are constrained by this principle. The intermediate regime is governed by quasi-geostrophic turbulence, in which energy is cascaded up to the large-scale regime. These scales act as a ‘spectral gap’, in that the evolution of the large-scale circulations is only weakly coupled to the small-scale regime with wavelengths below 500 km. In the small-scale regime the vertical coupling resulting from rotation no longer applies, and the vertical scale can collapse, leading to stratified turbulence, a very intermittent regime with a -5/3 spectrum, as observed.
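
For reference, the geostrophic and hydrostatic balance relations referred to above can be written, in generic pressure-coordinate notation (f the Coriolis parameter, Phi the geopotential, T the temperature, R the gas constant; this is textbook notation, not the specific semi-geostrophic formulation used in the talk):

    \[
      f\,u_g = -\frac{\partial \Phi}{\partial y}, \qquad
      f\,v_g = \frac{\partial \Phi}{\partial x}, \qquad
      \frac{\partial \Phi}{\partial p} = -\frac{R T}{p}.
    \]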

In the talk the mathematics behind each of these regimes will be illustrated and examples from simulations shown.

 

Peter Dueben, University of Oxford

Title:  Inexact computers for more accurate forecasts: Are we over-engineering numerical precision in weather and climate models?

Abstract: In numerical atmosphere and ocean models, values of relevant physical parameters are often uncertain by more than 100%, and weather forecast skill decreases significantly after a couple of days. Still, numerical operations are typically calculated with 15 significant decimal digits. If we reduce numerical precision, we can reduce power consumption and increase computational performance significantly. This would allow an increase in resolution in weather and climate models and might be a short-cut to global cloud-resolving modelling.

In co-operation with groups in computer science, we study different approaches to the use of inexact hardware in the hierarchy of models (from Lorenz '95 all the way up to the IFS) and estimate the possible savings. Results show that numerical precision can be reduced significantly in simulations with a three-dimensional dynamical core of an atmosphere model, with no significant increase in model error for either weather-type or climate-type simulations. If computational cost is reduced through the use of inexact hardware, the possible increase in resolution reduces model error by far more than the reduced precision increases it.
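
As a hedged sketch of the kind of experiment described at the bottom of the model hierarchy (reduced precision emulated in software by truncating significand bits; the bit counts, time step and Lorenz '95 configuration below are illustrative, not those used in the study):

    import numpy as np

    def truncate_precision(x, sig_bits):
        """Crude software emulation of reduced precision: keep only
        sig_bits bits of the significand of each value (illustrative only)."""
        mantissa, exponent = np.frexp(x)
        mantissa = np.round(mantissa * 2.0 ** sig_bits) / 2.0 ** sig_bits
        return np.ldexp(mantissa, exponent)

    def lorenz95_tendency(x, forcing=8.0, sig_bits=52):
        """Lorenz '95 tendency dx/dt, with every intermediate result truncated."""
        t = truncate_precision
        adv = t((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1), sig_bits)
        return t(adv - x + forcing, sig_bits)

    # Compare (near) double precision against roughly 10 significand bits.
    x_full = np.full(40, 8.0)
    x_full[19] += 0.01
    x_low = x_full.copy()
    dt = 0.005
    for _ in range(2000):
        x_full = x_full + dt * lorenz95_tendency(x_full)
        x_low = truncate_precision(x_low + dt * lorenz95_tendency(x_low, sig_bits=10), 10)
    print("RMS difference after 2000 forward-Euler steps:",
          np.sqrt(np.mean((x_full - x_low) ** 2)))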

However, there is no doubt that rounding errors will influence numerical simulations. We study the impact of hardware errors on model dynamics at different spatial scales, as well as interactions between the rounding-error forcing and the stochastic forcings of stochastic parametrisation schemes. We find that a scale-selective approach, which introduces significant rounding errors into the computation of small-scale dynamics, has huge potential because of the high inherent uncertainty present at these scales due to viscosity, sub-grid-scale variability and parametrisation schemes. We also find that rounding errors can be hidden inside the distribution of the stochastic forcings of stochastic parametrisation schemes, and that rounding errors can be used to generate forecast ensembles of similar spread and quality to ensembles based on stochastic parametrisation schemes, at least in idealised test cases.

 

David Ham, Imperial College

Title: Automated code generation for atmospheric simulation

 

Abstract: How will we implement ever-more complex numerical schemes and parametrisations for ever-more complex and fine-grained parallel hardware? How will we analyse the petabytes of data which result?

In this talk I will discuss the prospects of dramatically simplifying the generation of complex high-performance simulations from a mathematical specification of the discrete problem formulation. I will discuss the particular challenges of representing and exploiting the columnar structure of atmospheric models, and of representing operations - such as parametrisations - which do not directly fit the pure PDE abstraction. Time permitting, I will further discuss the beginnings of work to automate the analysis of atmospheric simulation data.
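
A minimal sketch of the idea, assuming a finite-element code-generation toolchain such as Firedrake/UFL (named here purely for illustration; the abstract does not commit to a particular system): the discrete weak form of a Helmholtz-type problem is stated mathematically, and the low-level parallel assembly code is generated automatically from that specification.

    from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                           TestFunction, Function, Constant, inner, grad, dx, solve)

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "CG", 1)

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(1.0)

    # Weak form: find u in V such that a(u, v) = L(v) for all v in V.
    a = (inner(grad(u), grad(v)) + u * v) * dx
    L = f * v * dx

    u_h = Function(V)
    solve(a == L, u_h)   # code generation and parallel assembly happen here

The point of such an abstraction is that the same mathematical statement can be retargeted at different parallel hardware without rewriting the model by hand.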

 

Andrew Lorenc, Met Office

Title: Developments in hybrid ensemble-variational DA for NWP

Abstract: The short-period forecast from a modern high-resolution NWP model is often more accurate than an individual observation; of course, this is only the case because the data assimilation (DA) has extracted information from past observations. A successful DA scheme must combine observations distributed in time with forecasts, taking account of the likely errors in both. For a small linear forecast model and Gaussian errors, the Kalman filter is an optimal DA method. But it requires the forecast error covariance matrices, which for NWP are too large even to store, let alone estimate and manipulate.
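
For reference, in standard notation the Kalman filter analysis update whose covariance manipulations become impractical at NWP dimensions is

    \[
      \mathbf{x}^a = \mathbf{x}^b + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}^b),
      \qquad
      \mathbf{K} = \mathbf{B}\mathbf{H}^{\mathrm T}
        \bigl(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathrm T} + \mathbf{R}\bigr)^{-1},
    \]

where B is the forecast (background) error covariance referred to above, H the observation operator and R the observation error covariance.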

Two different approaches have been used to simplify NWP forecast error covariances: variational methods model their structure using dynamical insight, so that they can be approximated using a sequence of transforms and a manageable number of coefficients; ensemble methods estimate them directly from a small number of forecasts drawn from the error distribution.
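
A generic way of writing the hybrid combination of these two covariance estimates (textbook notation, not any particular operational configuration) is

    \[
      \mathbf{B}_{\mathrm{hybrid}}
        = \beta_c^2\,\mathbf{B}_c
        + \beta_e^2\,\bigl(\mathbf{C}_{\mathrm{loc}} \circ \mathbf{P}^f_e\bigr),
      \qquad
      \mathbf{P}^f_e = \frac{1}{N-1}\sum_{i=1}^{N}
        (\mathbf{x}^f_i - \bar{\mathbf{x}}^f)(\mathbf{x}^f_i - \bar{\mathbf{x}}^f)^{\mathrm T},
    \]

where B_c is the modelled (climatological) covariance, P^f_e the sample covariance of the N ensemble forecasts, C_loc a localisation matrix applied through the Schur (element-wise) product, and the beta^2 factors are the hybrid weights.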

There are also different approaches to coping with observations distributed in time: changes within a short time-window can be ignored; 4D-Variational DA uses a simplified linear model to propagate errors; 4D-Ensemble methods use the forecast trajectories directly.

This presentation will review recent developments aimed at combining the above approaches, leading to schemes such as hybrid ensemble-4D-Variational DA (hybrid-4DVar) and 4D-Ensemble-Variational DA (4DEnVar). The emphasis will be on methods expected to be practicable on the computers foreseen for the coming decade.

 

Marion Mittermaier, Met Office

Title: Limits to temperature forecast accuracy due to temporal sampling

Marion Mittermaier and David Stephenson 

Abstract: Synoptic observations are often treated as error-free representations of the true state of the real world. For example, when observations are used to verify Numerical Weather Prediction (NWP) forecasts, forecast-observation differences (the total error) are often entirely attributed to forecast inaccuracy. Such a simplification is no longer justifiable for short-lead-time forecasts made with increasingly accurate, higher-resolution models. For example, at least 25% of individual Met Office site-specific t+6h temperature forecasts now typically have total errors of less than 0.2 K, comparable to typical instrument measurement errors of around 0.1 K.

In addition to instrument errors, uncertainty is introduced by measurements not being taken concurrently with the forecasts. For example, synoptic temperature observations in the UK are typically taken 10 minutes before the hour, whereas forecasts are generally extracted as instantaneous values on the hour. 

This study develops a simple yet robust statistical modelling procedure for assessing how serially correlated sub-hourly variations limit the forecast accuracy that can be achieved. The methodology is demonstrated by application to synoptic temperature observations sampled every minute at several locations around the UK. Results show that sub-hourly variations lead to sizeable forecast errors of 0.16 to 0.44 K for observations taken 10 minutes before the forecast issue time. The magnitude of this error depends on spatial location and the annual cycle, with greater errors occurring in the warmer seasons and at inland sites. This important source of uncertainty consists of a bias due to the diurnal cycle, plus irreducible uncertainty due to unpredictable sub-hourly variations that fundamentally limit forecast accuracy.
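
A minimal sketch of how the sampling effect can be quantified from minute-resolution observations (the array name, synthetic data and 10-minute offset below are illustrative; the study's statistical model additionally separates the diurnal-cycle bias from the irreducible part):

    import numpy as np

    def sampling_error(temps_1min, offset_minutes=10):
        """Bias and spread of the difference between the on-the-hour value and
        the value observed offset_minutes earlier, for each complete hour."""
        n_hours = len(temps_1min) // 60
        on_the_hour = temps_1min[60:n_hours * 60:60]
        before_hour = temps_1min[60 - offset_minutes:n_hours * 60:60]
        diffs = on_the_hour - before_hour[:len(on_the_hour)]
        return diffs.mean(), diffs.std(ddof=1)

    # Hypothetical week of 1-minute data: diurnal cycle plus a slowly varying residual.
    rng = np.random.default_rng(1)
    minutes = np.arange(7 * 24 * 60)
    synthetic = (285.0 + 5.0 * np.sin(2 * np.pi * minutes / 1440.0)
                 + np.cumsum(rng.normal(0.0, 0.02, minutes.size)))
    bias, spread = sampling_error(synthetic)
    print(f"bias = {bias:.2f} K, spread = {spread:.2f} K")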

 

David Stephenson, University of Exeter

Title: On the predictability of extremes: Does the butterfly effect ever decrease?

David B. Stephenson, Alef E. Sterk, Mark Holland, Ken R. Mylne

Abstract: This study investigates whether or not predictability always decreases for more extreme events. Predictability is measured here by the Mean Squared Error (MSE) between pairs of ensemble forecasts, conditioned on one of the forecast variables (the “pseudo-observation”) exceeding a threshold.

Using an exchangeable linear regression model for pairs of forecast variables, we show that the MSE can be decomposed into the sum of three terms: a threshold-independent constant, a mean term that always increases with threshold, and a variance term that can either increase, decrease, or stay constant with threshold.
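
As a hedged sketch of how such a decomposition arises (the simplest exchangeable case, with marginal mean mu and variance sigma^2, conditional mean linear with slope rho and constant conditional variance; the notation is mine and the regression model in the talk is more general):

    \[
      \mathrm{MSE}(u) = \mathrm{E}\bigl[(X - Y)^2 \mid X > u\bigr]
        = \underbrace{\sigma^2 (1-\rho^2)}_{\text{constant}}
        + \underbrace{(1-\rho)^2 \bigl(m(u)-\mu\bigr)^2}_{\text{mean term}}
        + \underbrace{(1-\rho)^2\, v(u)}_{\text{variance term}},
    \]

where m(u) = E[X | X > u] and v(u) = Var(X | X > u) are the conditional mean and variance of the exceedances.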

Using the Generalised Pareto Distribution to model excesses, we show that the MSE always increases with threshold at sufficiently high thresholds. However, the MSE can be a decreasing function of threshold at lower thresholds, but only if the forecasts have finite upper bounds.

The methods are illustrated by application to daily wind speed forecasts for London made using the 24-member Met Office Global and Regional Ensemble Prediction System from 1 Jan 2009 to 31 May 2011. For this example, the mean term increases faster with threshold than the variance term decreases, and so predictability decreases for more extreme events.

 

Nils Wedi, ECMWF

Title:  How to afford future global numerical weather prediction?

Abstract: The biggest challenge for state-of-the-art NWP arises from the need to simulate complex, multi-scale physical phenomena within tight production schedules. The existing extreme-scale application software of weather and climate services is ill-equipped to adapt to rapidly evolving hardware. This is exacerbated by other drivers of hardware development, with processor arrangements not necessarily optimal for the range of weather and climate simulations. Moreover, with increasing resolution and model complexity in particular, there is a need to better understand and robustly handle the uncertainty of the underlying simulated processes, on hardware that is itself less reliable than today's or that even explicitly trades uncertainty for energy efficiency.

ECMWF is leading a programme of innovation actions for developing a holistic understanding of energy-efficiency for extreme-scale applications using heterogeneous architectures, accelerators and special compute units by (a) defining and encapsulating the fundamental algorithmic building blocks ("Weather & Climate Dwarfs") underlying weather and climate services; (b) combining frontier research on algorithm development for use in extreme-scale, high-performance computing applications, minimizing time- and energy-cost-to-solution; and (c) synthesizing the complementary skills in global NWP with leading European regional forecasting consortia, university research, experienced high-performance computing centres and hardware vendors.