Peter Ashwin (University of Exeter)

The Mid-Pleistocene transition as a generic bifurcation on a slow manifold (Presentation)

We discuss a conceptual model of the Pleistocene ice ages over the last 2000 kyr in terms of a relaxation oscillator subject to Milankovitch forcing. By considering a generic form of singularity of the slow manifold, we propose a possible explanation of the Mid-Pleistocene transition from approximately 41 kyr cycles to approximately 100 kyr cycles. The model explains the transition as a generic bifurcation of transcritical type of the slow manifold of the relaxation oscillator, occurring as a distinguished parameter is varied (joint work with Peter Ditlevsen).
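
For orientation, the setting can be summarised by a generic fast-slow (relaxation oscillator) system together with the textbook transcritical normal form; this is a schematic sketch, not the authors' specific model:

\[
\dot{x} = f(x, y; \mu) + F(t), \qquad \dot{y} = \epsilon\, g(x, y; \mu), \qquad 0 < \epsilon \ll 1,
\]

where x is the fast variable, y the slow variable, F(t) represents the Milankovitch forcing, and the slow manifold lies close to the set f = 0. The transcritical normal form,

\[
\dot{x} = \mu x - x^2,
\]

describes the generic situation in which two branches of equilibria cross and exchange stability as the distinguished parameter \mu passes through zero.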

 

 

Stuart Barber (University of Leeds), Robert G. Aykroyd

A statistical regression framework using localised frequency information

In many applications, we observe data on an evolving process along with several potential explanatory variables.  Many statistical methods are available which attempt to model the relationship between the explanatory variables and the outcome.  Typically, the information taken from each explanatory variable is simply the value it takes, but we can be more flexible about how we use the variables.

We propose a framework where statistics derived from multiple time series are synthesised into "activity measures".  These activity measures are used as candidate explanatory variables in a regression model.  The aims of this modelling process could be to understand the dominant frequencies driving the process, to diagnose the current state of the process, or to predict future values of the process.

We derive the localised frequency information from wavelets, a type of localised basis function.  Wavelets are able to represent information in both the time and frequency domains at the same time, enabling us to use both mean level and frequency information simultaneously. Wavelets are also highly computationally efficient, allowing for real-time application of our approach.
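
As a concrete illustration of the idea (a minimal sketch, not the authors' implementation), the fragment below derives simple wavelet-based activity measures from a single explanatory series and uses them as regression covariates. The wavelet family, window length and synthetic data are arbitrary choices, and the PyWavelets package is assumed to be available.

    # Sketch: localised band "activity" from an undecimated wavelet transform,
    # used alongside the raw series in an ordinary least-squares regression.
    import numpy as np
    import pywt  # PyWavelets (assumed available)

    rng = np.random.default_rng(0)
    n = 512
    x = rng.normal(size=n)                                   # explanatory time series
    y = np.convolve(x**2, np.ones(16) / 16, mode="same") + 0.1 * rng.normal(size=n)

    def rolling_energy(coeffs, window=32):
        # local mean of squared coefficients = localised band energy
        return np.convolve(coeffs**2, np.ones(window) / window, mode="same")

    bands = pywt.swt(x, "db4", level=4)                      # undecimated transform
    activity = np.column_stack([rolling_energy(cD) for _, cD in bands])
    X = np.column_stack([np.ones(n), x, activity])           # mean level + band activities
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(beta, 3))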

 

 

Phil Browne (University of Reading), P.J. van Leeuwen, S. Wilson, J. Robson, E. Hawkins, R. Sutton

Mathematical issues when applying a fully nonlinear particle filter to initialise a coupled ocean-atmosphere climate model

It is a widely held assumption that particle filters are not applicable in high-dimensional systems due to filter degeneracy, commonly called the curse of dimensionality.  However, the equivalent weights particle filter has been shown to perform particularly well on systems of dimension up to 2^16 ≈ 6.5 × 10^4 without suffering filter degeneracy.  In this talk we will present mathematical issues involved in the use of the equivalent weights particle filter in twin experiments with the global climate model HadCM3, and present the associated numerical results.
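
The degeneracy referred to here can be illustrated with a toy calculation (not the equivalent weights filter itself): as the number of independent observations grows, the effective sample size of a standard importance-weighted ensemble collapses. Particle numbers, dimensions and error variances below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 100
    for n_obs in (10, 100, 1000, 10000):
        # particles forecast the observed quantities with unit-variance errors
        innovations = rng.normal(size=(n_particles, n_obs))
        log_w = -0.5 * np.sum(innovations**2, axis=1)        # Gaussian log-likelihood
        log_w -= log_w.max()                                 # stabilise before exponentiating
        w = np.exp(log_w)
        w /= w.sum()
        ess = 1.0 / np.sum(w**2)                             # effective sample size
        print(f"{n_obs:6d} observations: effective sample size = {ess:6.1f}")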

The twin experiments consist of assimilating daily SST data over a 6 month period. The model has state dimension approximately 4 × 10^6 and approximately 4 × 10^4 observations per analysis step. This is 2 orders of magnitude more than has been achieved with a particle filter in the geosciences.

Using a fully nonlinear data assimilation technique to initialise a climate model gives us the possibility of finding non-Gaussian estimates for the current state of the climate, as well as giving estimates of the uncertainty in each variable. In doing so we may find that the same model demonstrates multiple likely scenarios for forecasts on a multi-annual/decadal timescale.

We will present our solutions to a number of mathematical and numerical issues which had to be overcome in order to apply the equivalent weights particle filter with HadCM3.

 

 

Chris Budd (University of Bath), Chiara Piccolo, Mike Cullen (Met Office) and Phil Browne (Reading)

Adaptive mesh methods and data assimilation

Data assimilation is the process of systematically including (often noisy) data into a forecast. It is now widely used in numerical weather prediction and its positive impact on the accuracy of weather forecasts is unquestionable.

Data assimilation is now an essential part of the Met Office forecasting procedures. However, a significant problem faced by the Met Office is that of assimilating data in the presence of atmospheric inversion layers or other fine structures. Misrepresenting these layers in the computations leads to spurious correlations between observed data and the underlying physical structures. This has a negative effect on the assimilation of data (for example from radiosondes) into the forecast, possibly degrading the forecast performance.

In this talk I will describe an adaptive mesh procedure based on moving a mesh to equidistribute a monitor function which aims to reduce these correlations by locally resolving the inversion layer.  This procedure can work in one, two or three spatial dimensions.
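
The core idea of equidistribution can be sketched in one dimension as follows (an illustrative fragment, not the Met Office implementation): mesh points are chosen so that the integral of a monitor function is the same between successive points, which clusters resolution where the monitor, and hence the layer, is sharp. The tanh profile and arc-length monitor below are invented for illustration.

    import numpy as np

    def equidistribute(monitor, a=0.0, b=1.0, n_cells=40, n_fine=2000):
        xf = np.linspace(a, b, n_fine)
        m = monitor(xf)
        # cumulative integral of the monitor function (trapezium rule)
        M = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))])
        targets = np.linspace(0.0, M[-1], n_cells + 1)
        return np.interp(targets, M, xf)       # invert the cumulative monitor

    profile = lambda x: np.tanh((x - 0.6) / 0.02)                        # sharp inversion-like layer
    monitor = lambda x: np.sqrt(1.0 + np.gradient(profile(x), x)**2)     # arc-length monitor
    mesh = equidistribute(monitor)
    print(np.round(mesh, 3))                   # points cluster near x = 0.6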

I will discuss this general process, and will then show how an implementation by Chiara Piccolo of a one-dimensional adaptive procedure into the Met Office operational data assimilation code has led to a measurable improvement in forecast accuracy.

 

 

Rob Chadwick (Met Office)

What causes uncertainty in future tropical rainfall projections?

Changes in the patterns of tropical rainfall under global warming could have large societal and environmental impacts, but are currently highly uncertain. The physical processes that drive the large spread in climate model rainfall projections are investigated using idealised modelling experiments and a novel theoretical framework. The dominant driver of uncertainty is the range of different spatial shifts in the regions of convection and convergence produced by models. Over the oceans much of this is associated with changes in the patterns of sea surface temperature. Over land, the largest cause of uncertainty is the shift in convective regions in response to a uniform global rise in sea surface temperatures.

 

 

Peter Challenor (University of Exeter and National Oceanography Centre), Danny Williamson (University of Exeter) and Adam Blaker (National Oceanography Centre)

Towards Reconstructing the Climate of the Recent Past from a Combination of Data and Models

If we are to successfully forecast future climate we need to understand the climate of the past. Although we have been taking measurements for the last century or so, until the last 20 years the data were scarce, and even in the modern era the system is seriously undersampled. However, we have a good understanding of the equations that govern fluid flow on a sphere, and these are implemented in a variety of numerical models. A reconstruction that is a solution to these equations conserves physical quantities through time and is referred to as ‘dynamically consistent’. To produce good simulations we need to run at high resolution, and hence the computational expense is high, so traditional calibration methods are infeasible. Data assimilation methods can be used but fail to produce dynamically consistent solutions. We show how methods developed for uncertainty quantification (emulators and history matching) can be used to combine data and models efficiently to produce both dynamically consistent reconstructions and calibrated models for prediction.
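
A minimal sketch of the history matching step (with an invented one-parameter "simulator" in place of an expensive ocean model, and scikit-learn's Gaussian process as the emulator) is given below: candidate inputs are ruled out when an implausibility measure, combining emulator, observation and structural variances, exceeds the usual cutoff of 3.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulator(x):                          # stand-in for an expensive model
        return np.sin(3.0 * x) + 0.5 * x

    X_train = np.linspace(0.0, 2.0, 12)[:, None]
    gp = GaussianProcessRegressor().fit(X_train, simulator(X_train[:, 0]))  # the emulator

    obs, var_obs, var_disc = 1.2, 0.01, 0.02   # observation, its variance, model discrepancy variance
    X_cand = np.linspace(0.0, 2.0, 200)[:, None]
    mean, sd = gp.predict(X_cand, return_std=True)
    implausibility = np.abs(mean - obs) / np.sqrt(sd**2 + var_obs + var_disc)
    not_ruled_out = X_cand[implausibility < 3.0, 0]
    print(f"{not_ruled_out.size} of {X_cand.shape[0]} candidate inputs retained")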

 

 

Fenwick Cooper (University of Oxford)

Optimisation of an idealised ocean model, stochastic parameterisation of sub-grid eddies

An optimisation scheme is developed to accurately represent the sub-grid scale forcing of a high-dimensional chaotic ocean system. Using a simple parametrisation scheme, the velocity components of a 30km resolution shallow water ocean model are optimised to have the same climatological mean and variance as that of a less viscous 7.5km resolution model. The 5 day lag-covariance is also optimised, leading to a more accurate estimate of the high resolution response to forcing using the low resolution model.

The system considered is an idealised barotropic double gyre that is chaotic at both resolutions. Using the optimisation scheme, we find and apply the constant in time, but spatially varying, forcing term that is equal to the time integrated forcing of the sub-mesoscale eddies. A linear stochastic term, independent of the large scale flow, with no spatial correlation but a spatially varying amplitude and time scale is used to represent the transient eddies. The climatological mean, variance and 5 day lag-covariance of the velocity from a single high resolution integration is used to provide an optimisation target. No other high resolution statistics are required. Additional programming effort, for example to build an adjoint model, is not required either.
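
Schematically, the optimisation target can be thought of as a mismatch between a small set of statistics of a candidate low-resolution run and the same statistics from the high-resolution run, as in the toy fragment below (random series stand in for velocity fields; this illustrates the kind of objective, not the paper's scheme).

    import numpy as np

    def statistics(u, lag=5):
        mean = u.mean(axis=0)
        anom = u - mean
        return mean, (anom**2).mean(axis=0), (anom[lag:] * anom[:-lag]).mean(axis=0)

    def mismatch(candidate, target, lag=5):
        # squared differences in mean, variance and lag covariance
        return sum(np.mean((a - b)**2) for a, b in zip(statistics(candidate, lag),
                                                       statistics(target, lag)))

    rng = np.random.default_rng(2)
    target = rng.normal(size=(5000, 4))                       # stand-in for high-resolution statistics
    candidate = 0.2 + 1.3 * rng.normal(size=(5000, 4))        # a low-resolution candidate run
    print(mismatch(candidate, target))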

The method can be applied to help understand and correct biases in the mean and variance of a more realistic coarse or eddy-permitting ocean model. The method is complementary to current parameterisations and can be applied at the same time without modification. For climate change experiments the parametrisation is expected to improve the accuracy of a climate model's response to forcing.

 

 

Rosie Eade (Met Office)

Do seasonal to decadal climate predictions underestimate the predictability of the real world?

Seasonal to decadal predictions are inevitably uncertain, depending on the size of the predictable signal relative to unpredictable chaos. Uncertainties are accounted for using ensemble techniques, permitting quantitative probabilistic forecasts. In a perfect system, each ensemble member would represent a potential realization of the true evolution of the climate system, and the predictable components in models and reality would be equal. However, we show that the predictable component is sometimes lower in models than observations, especially for seasonal forecasts of the North Atlantic Oscillation and multi-year forecasts of North Atlantic temperature and pressure. In these cases the forecasts are under-confident, with each ensemble member containing too much noise. Consequently, most deterministic and probabilistic measures under-estimate potential skill and idealized model experiments under-estimate predictability. However, skilful and reliable predictions may be achieved using a large ensemble to reduce noise and adjusting the forecast variance through a post-processing technique proposed here.
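
One simple form that such a variance adjustment can take is sketched below (synthetic data; an illustration of the general idea rather than the precise method of the talk): the ensemble mean is rescaled so that its variance matches the share of observed variance implied by its correlation skill, and departures from the ensemble mean are rescaled to supply the remaining, unpredictable variance.

    import numpy as np

    def adjust(ensemble, obs):
        """ensemble: (n_members, n_years); obs: (n_years,)."""
        mean = ensemble.mean(axis=0)
        r = np.corrcoef(mean, obs)[0, 1]                      # correlation skill
        new_mean = mean * (r * obs.std() / mean.std())        # predictable part
        resid = ensemble - mean
        target_noise_sd = obs.std() * np.sqrt(max(1.0 - r**2, 0.0))
        return new_mean + resid * (target_noise_sd / resid.std())

    rng = np.random.default_rng(3)
    signal = rng.normal(size=40)
    obs = signal + 0.5 * rng.normal(size=40)
    ens = signal + 2.0 * rng.normal(size=(20, 40))            # under-confident ensemble: too much noise
    print(round(ens.std(), 2), round(adjust(ens, obs).std(), 2), round(obs.std(), 2))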

 

 

Tamsin Edwards [1], Catherine Ritz [2], Gael Durand [2], Tony Payne [1], Vincent Peyaud [2] and Richard Hindmarsh [3]

1. University of Bristol, 2. LGGE, France, 3. British Antarctic Survey

Probabilistic projections of Antarctic ice sheet instability

Large parts of the Antarctic ice sheet lie below sea level and may be vulnerable to Marine Ice Sheet Instability (MISI), a positive feedback in which ice shelf collapse or ice sheet exposure to warmer water triggers self-sustaining ice loss at a rate independent of the original forcing. But reliable simulation of the response of the grounding line (which divides ice resting on bedrock from floating ice shelves and thus determines contribution to sea level) to the drivers of MISI is currently precluded by the computational expense of state-of-the-art numerical ice sheet models.  

We present a new approach that parameterises grounding line migration in a numerical model within a statistical framework to account for uncertain driving and limiting factors of MISI. We sample 16 uncertain model inputs (including the probability of triggering MISI) in a 3000 member ensemble of the GRISLI ice sheet model to simulate Antarctic present day and future dynamic contribution to sea level over the next 200 years. We calibrate the results in a Bayesian statistical framework using observations of present day mass loss in the Amundsen Sea region, where the grounding line is currently retreating.
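
The calibration step can be sketched as a simple weighting of ensemble members (invented numbers below; a schematic rather than the study's actual likelihood): members whose simulated present-day mass loss is far from the observed value receive little weight in the posterior projection.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 3000
    present_day = rng.gamma(shape=2.0, scale=0.05, size=n)               # simulated present-day mass loss
    projection = 0.05 + 0.5 * present_day + 0.02 * rng.normal(size=n)    # simulated 200-year sea level term

    obs, obs_sd = 0.08, 0.02                                             # illustrative observation
    w = np.exp(-0.5 * ((present_day - obs) / obs_sd) ** 2)               # Gaussian likelihood weights
    w /= w.sum()

    order = np.argsort(projection)
    posterior_median = projection[order][np.searchsorted(np.cumsum(w[order]), 0.5)]
    print(round(projection.mean(), 3), round(posterior_median, 3))       # prior mean vs posterior median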

The uncalibrated (prior) sea level projections are consistent with, or slightly lower than, previous estimates using extrapolation and kinematic constraints, but the calibrated (posterior) projections are very much lower. This is because they downweight ensemble members with greater present day mass losses than observed. The results are novel not only in the MISI parameterisation but also in their quantification of uncertainty for Antarctic ice sheet model projections.

 

 

Gavin Esler (UCL)

Adaptive Stochastic Trajectory Modelling

Lagrangian models of aerosol, trace gas and pollutant dispersion are tools of great importance in many branches of climate science. The effects of unresolved atmospheric (or oceanic) turbulence can be represented in such models by a stochastic term. The subject of this talk will be to investigate the extent to which, if such models are formulated as stochastic differential equations, techniques from mathematical finance and stochastic physics can be adapted to enhance their efficiency.

Here, a model transport problem in a chaotic advection flow is examined in detail, with the aim of finding an efficient and accurate method to calculate the total tracer transport between a source and a receptor in the case where the direct flow between the two locations is weak, causing a naive stochastic Lagrangian simulation to be prohibitively expensive. The method found to be most useful is a variant of Grassberger's `go-with-the-winners' branching process, which acts to remove particles unlikely to contribute to the net transport, and reproduces those that will contribute. The key to success is shown to be a new solution to the problem of defining a `winner', using results from the adjoint (or back-trajectory) calculation. It is demonstrated that the new method reduces the variance of an estimator of the total transport by several orders of magnitude compared with the naive simulation.  The result is an algorithm for an `adaptive' stochastic trajectory model.
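
A toy version of a `go-with-the-winners' branching step is sketched below (not the algorithm of the talk, and with an invented weight function standing in for the adjoint-based definition of a `winner'): lightly weighted particles are killed probabilistically while heavily weighted ones are split, in such a way that weighted averages are preserved in expectation.

    import numpy as np

    rng = np.random.default_rng(5)

    def branch(positions, weights):
        wbar = weights.mean()
        new_pos, new_w = [], []
        for x, w in zip(positions, weights):
            n_copies = int(w / wbar) + (rng.random() < (w / wbar) % 1.0)
            for _ in range(n_copies):          # zero copies = particle is killed
                new_pos.append(x)
                new_w.append(wbar)             # each copy carries the mean weight
        return np.array(new_pos), np.array(new_w)

    x = rng.normal(size=1000)
    w = np.exp(-4.0 * (x - 1.5) ** 2)          # e.g. likelihood of contributing to the transport
    print("weighted mean before:", np.average(x, weights=w))
    x2, w2 = branch(x, w)
    print("population after branching:", x2.size)
    print("weighted mean after:  ", np.average(x2, weights=w2))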

 

 

Alan Gadian (NCAS, University of Leeds)

Ralph Burton, James Groves, Alan Blyth and Chris Collier, NCAS, University of Leeds

Greg Holland, Cindy Bruyere, James Done, NESL, NCAR, USA

Jutta Thielen et al. Climate Risk Management Group, JRC, ISPRA, Italy

A Weather climate change Impact Study at Extreme Resolution (WISER)

Climate is now a weather-scale process problem, and simulation of weather processes is required to understand the rapidly changing climate.  The resolution required to include meso-scale features is still out of the reach of climate model resolution, and this project attempts to include the important meso-scale features. WISER (Weather climate change Impact Study at Extreme Resolution) is a regional climate study using a numerical weather model (WRF) in a channel formulation (68 degrees of latitude) at a resolution of 20 km at the equator reducing to 9 km. A nested regional model at a resolution of 3-4 km over Western Europe aims at resolving the larger convective-scale precipitation events statistically.

We compute model climatologies driven by:-

(a) ERA-Interim climate reanalysis for recent decades (e.g. 1989-2001)

(b) CESM/CAM climate data for the same recent decades, to obtain offset and bias corrections

(c) Future climate scenarios in a warmer climate, for decadal periods, 2020-2030 initially and later 2050-2060, with a clearer warming signal.

The overall aim is to examine predicted changes in

(a)  general precipitation over western Europe and the UK,

(b)  patterns of frontal storm tracks on decadal time scales and  the Atlantic storm tracks with associated occurrence of blocking in the North Atlantic,

(c) the quantity and frequency of severe and hazardous convective rainfall events.

This presentation will describe the approach, show initial results from the pdfs of precipitation in the inner and outer domains, and discuss the importance of bias corrections.

 

 

Céline Guervilly (University of Leeds), David Hughes & Chris Jones

Formation of large-scale vortices in rotating convection

Using numerical simulations of rotating Boussinesq convection in a Cartesian box, we study the formation of long-lived, large-scale, depth-invariant vortices. These vortices, which are always cyclonic in the explored parameter range, grow to the largest horizontal size permitted in the computational domain. We will discuss the domain of existence of these vortices, the possible mechanisms by which they form, and how they affect the heat transfer through the system.

 

 

Nili Harnik (Tel Aviv University),Chaim Garfinkel and Orli Lachmy

The influence of Jet stream structure and transitions on extreme weather events in a hierarchy of models and observations

This study aims to better understand how the spatial distribution and seasonal evolution of extreme events depend on the large-scale atmospheric flow characteristics, in particular on the type of jet stream, in idealized models and observations. In its basic form, the global circulation is a complex interaction between three components - the Hadley cell, midlatitude jet streams, and storms. This three-way interaction gives rise to multiple dynamical regimes, which greatly affect the spatial and temporal characteristics of the jet stream and synoptic storms. We find that the distribution of extreme events, in relation to the jet stream structure, is quite different between an eddy-driven jet and a subtropical jet. Correspondingly, the temporal evolution of extreme events is quite different between these dynamical regimes. The differences have to do both with the differences in the mean circulation and with the characteristics of the dominant wave modes. Wave breaking is found to play a major role in the formation of extreme events even in the most idealized model. A comparison of these results to observations will also be presented.

 

 

Alan Hewitt [1], Ben Booth [1], Chris Jones [1], Eddy Robertson [1], Andy Wiltshire [1], David Stephenson [2], Stan Yip [3]

[1] Met Office, [2] University of Exeter, [3] The Hong Kong Polytechnic University

Sources of uncertainty in future projections of the carbon cycle

The increased inclusion of carbon cycle processes within CMIP5 climate models provides a new framework to explore the relative importance of uncertainties in scenario and model representation to future land and ocean carbon fluxes. A two-way ANOVA approach, appropriate for an unbalanced design, was used to distinguish the relative importance of these uncertainties at different lead times. This study has found contrasting pictures for global ocean and land carbon fluxes. For global ocean fluxes, differences between atmospheric CO2 scenarios become more important than modelled differences around 2030 and completely dominate by 2100. In contrast, modelled differences in land carbon process representation remain the largest uncertainty source beyond 2100. This suggests that modelled processes that determine ocean fluxes are currently better constrained than those of land fluxes, thus we can be more confident in linking different future socio-economic pathways to consequences of ocean carbon uptake than for land carbon uptake.
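
The variance partition underlying such an analysis can be sketched for the simpler balanced case as follows (invented numbers; the study itself uses an approach appropriate for an unbalanced design):

    import numpy as np

    rng = np.random.default_rng(6)
    n_scen, n_mod = 4, 8
    scen_effect = np.array([0.0, 1.0, 2.0, 4.0])[:, None]      # illustrative scenario effects
    mod_effect = rng.normal(scale=0.8, size=(1, n_mod))        # illustrative model effects
    flux = scen_effect + mod_effect + 0.3 * rng.normal(size=(n_scen, n_mod))

    grand = flux.mean()
    row = flux.mean(axis=1, keepdims=True) - grand             # scenario main effects
    col = flux.mean(axis=0, keepdims=True) - grand             # model main effects
    inter = flux - grand - row - col
    ss = {"scenario": n_mod * np.sum(row**2),
          "model": n_scen * np.sum(col**2),
          "interaction + noise": np.sum(inter**2)}
    total = sum(ss.values())
    for name, value in ss.items():
        print(f"{name:20s} {100 * value / total:5.1f}% of spread")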

The apparent model agreement in atmosphere-ocean carbon fluxes, globally, masks strong model differences at a regional level. The North Atlantic and Southern Ocean are key regions, where differences in modelled processes represent an important source of uncertainty in projected regional fluxes. For the tropical land region, dependence on model-scenario interaction uncertainty was identified, linked to interactions between “anthropogenic land-use change” and “CO2 concentration” driven scenarios. This emphasises how increased complexity in the representation of Earth system processes leads to a more nuanced picture of anthropogenic influence in future scenarios, with the way we account for projected land cover change having a non-negligible impact on the projected spread in future carbon fluxes.

 

 

Jill Johnson (University of Leeds), Zhiqiang Cui, Ken Carslaw and Lindsay Lee

Exploring Uncertainty in a Cloud Microphysics Model

The effect of global aerosols on clouds is one of the largest uncertainties in the radiative forcing on the climate. Aerosol particles can serve as nuclei for cloud droplet and ice formation, influencing cloud properties, cloud dynamics, precipitation and the way a cloud interacts with solar radiation.

In this study, we use the Model of Aerosols and Chemistry in Convective Clouds (MAC3) to simulate the formation of a deep convective cloud in a continental environment given a set of microphysical and atmospheric parameters - some of which are subject to a degree of uncertainty. This model is complex and computationally expensive, with many calculations required to represent the cloud dynamics and microphysics as the cloud forms within the simulation. The model outputs are properties of the simulated cloud that are of relevance to climate, and we aim to identify the model processes and parameters that lead to uncertainty in these predicted outputs.

Classical methods for uncertainty and sensitivity analysis involving direct Monte Carlo simulation are not feasible when using the MAC3 model directly. To overcome this we use statistical emulation to construct surrogate representations for MAC3 that can be evaluated quickly and easily, and use these within a variance-based sensitivity analysis to evaluate the parametric uncertainty in this model. Training data for the emulation was obtained by running the cloud model at selected input combinations across the defined uncertain input space according to a space-filling maximin Latin hypercube design. Exploration of these training runs has revealed different regimes of cloud behaviour over the defined parametric uncertainty, and we explore the drivers of uncertainty in a selection of 12 cloud responses for each of these regimes as well as the full uncertain input space. In particular, we look to quantify the cloud response to aerosol in the atmosphere and determine the factors that most contribute to it.
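
The workflow can be sketched compactly as follows (a toy three-parameter function replaces MAC3, a plain Latin hypercube replaces the maximin design, and scikit-learn's Gaussian process serves as the emulator): design points are run through the "model", an emulator is fitted, and first-order (main effect) sensitivity indices are estimated from the emulator.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor

    def cloud_response(x):                     # stand-in for an expensive cloud-model output
        return np.sin(np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.05 * x[:, 2]

    d = 3
    design = qmc.LatinHypercube(d=d, seed=0).random(80)        # (the study uses a maximin design)
    emulator = GaussianProcessRegressor(normalize_y=True).fit(design, cloud_response(design))

    rng = np.random.default_rng(7)
    base = rng.random((20000, d))
    var_total = emulator.predict(base).var()
    grid = np.linspace(0.0, 1.0, 25)
    for j in range(d):
        # main effect of parameter j: variance of the conditional mean E[Y | x_j]
        cond_mean = [emulator.predict(np.column_stack([base[:, :j],
                                                       np.full(base.shape[0], g),
                                                       base[:, j + 1:]])).mean()
                     for g in grid]
        print(f"parameter {j}: main effect index = {np.var(cond_mean) / var_total:.2f}")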

This research is funded as part of the NERC project consortium ACID-PRUF.

 

 

Tom Kent (University of Leeds), Onno Bokhove, Steve Tobias

A modified shallow water model for investigating convective-scale data assimilation

I outline a modified rotating shallow water model to represent an idealised atmosphere with moist convection for use in inexpensive data assimilation experiments. By combining the non-linearity due to advection in the shallow water equations and the onset of precipitation, the proposed model captures two important dynamical processes of convecting and precipitating weather systems. The model is a valid non-conservative hyperbolic system of partial differential equations and is solved numerically using a shock-capturing finite volume/element framework which deals robustly with the high non-linearity and so-called non-conservative products.

The model will be used for investigating data assimilation schemes (ensemble and variational) at the convective-scale. We will conduct idealised twin experiments whereby the "truth" trajectory is determined by model simulations at a very high resolution and pseudo-observations are generated by randomly perturbing this "truth".   The "forecast" model is then run at a lower resolution in which gravity waves are: (i) not resolved, (ii) partially resolved, and (iii) fully resolved, in order to ascertain if and how small-scale dynamics affect data assimilation schemes.
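
The twin-experiment setup can be sketched as follows (invented one-dimensional fields and a trivial observation operator, purely to illustrate the structure described above): a high-resolution "truth" is sampled sparsely and perturbed with random noise to form pseudo-observations, while the "forecast" model lives on a coarser grid.

    import numpy as np

    rng = np.random.default_rng(8)
    n_truth = 512                                              # high-resolution grid
    x = np.linspace(0.0, 1.0, n_truth, endpoint=False)
    truth = np.sin(2 * np.pi * x) + 0.1 * np.sin(20 * np.pi * x)   # stand-in for a model field

    obs_idx = np.arange(0, n_truth, 16)                        # sparse observation locations
    obs_err = 0.05
    pseudo_obs = truth[obs_idx] + obs_err * rng.normal(size=obs_idx.size)

    coarse_truth = truth.reshape(-1, 8).mean(axis=1)           # factor-8 coarser "forecast" grid
    print(pseudo_obs.shape, coarse_truth.shape)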

The model is currently being integrated into the Met Office's Data Assimilation Modelling framework, a testbed for data assimilation research using idealised models.

 

 

Vladimir Lapin (University of Leeds)

Recovering accuracy of finite-difference ocean models with staircased boundaries

Many general circulation and climate models traditionally rely on finite-difference schemes for spatial discretization of the underlying primitive equations. They are simple to implement and some, in particular the staggered Arakawa C-grid, can yield computationally robust numerical schemes that respect the geostrophic balance and essential conservation laws. However, irregular topographic boundaries are crudely approximated with a sequence of steps, or staircases, which can introduce O(1) errors in the near-land velocity field and reduce the global accuracy of the scheme to first order in the grid spacing.

In this talk we describe an improved numerical implementation of the no normal-flow condition that reduces errors introduced by the staircase boundary to a desired order of accuracy, e.g. the order of the interior discretization scheme itself. Three toy problems of the linearized shallow water equations are used to benchmark the performance of this approach against a finite-element method of choice. Also, the impact of adopting these improved boundary conditions rather than standard staircase boundary conditions for coastal ocean dynamics is illustrated in a global tidal model.

 

 

Adam Lea (UCL) [1], Mark A. Saunders [1] and Richard E. Chandler [2]

(1) Department of Space and Climate Physics, University College London.

(2) Department of Statistical Science, University College London.

How well do ensemble forecasts of European windspeed represent true uncertainty?

Ensemble forecasts of European windspeed are used by industry to give probabilistic forecasts of impact. In general these probabilistic applications ‘assume’ that the ensemble members represent forecast uncertainty accurately. However, is this assumption justified? We examine it using ensemble forecasts of European windspeed made by eight state-of-the-art numerical weather prediction models archived in the TIGGE (THORPEX Interactive Grand Global Ensemble) database. These models possess between 14 and 50 ensemble members. Our assessment is made for the European region 35N-65N, 10W-30E. We consider ensemble predictions out to 10 days lead with updates either every 6 hrs or 12 hrs, and we include all forecasts between March 2011 and February 2014. Two different re-analysis datasets are employed for verification: ERA-Interim at a lat/long resolution of 0.75˚ x 0.75˚ and NASA-MERRA at a lat/long resolution of 0.5˚ x 0.667˚.

We employ two tools - verification rank (VR) histograms and the Reliability Index - to assess the calibration of ensemble predictions. VR histograms plot the distribution of the ranks of the verifications when pooled within the ordered ensemble predictions. A well calibrated ensemble forecast will produce a uniform, flat VR histogram. The Reliability Index (RI) is used to quantify the deviation of VR histograms from uniformity; a small RI shows good calibration while a large RI shows that uncertainty is poorly represented. Maps of RI have been made by grid cell across Europe for each forecast model, for four different lead times (24, 72, 144 and 240 hrs), for different seasons (annual, winter, spring, summer and autumn), and for both ERA-Interim and NASA-MERRA used as the verification.

Our findings show that ensemble forecasts of European windspeed are in general mis-calibrated and often poorly represent uncertainty; this mis-calibration increases as forecast lead time decreases. Our findings are consistent across Europe and for different seasons, and are repeatable for different re-analysis verification datasets. The ECMWF model stands out as providing ensemble forecasts which best represent forecast uncertainty.
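
The two diagnostics can be sketched with synthetic data as follows (the Reliability Index is computed here as the sum of absolute deviations of the rank-histogram frequencies from uniformity, which is one common definition; see the talk for the precise form used):

    import numpy as np

    rng = np.random.default_rng(9)
    n_fcst, n_members = 2000, 20
    signal = rng.normal(size=n_fcst)
    obs = signal + 1.0 * rng.normal(size=n_fcst)                             # verifying "truth"
    ensemble = signal[:, None] + 0.5 * rng.normal(size=(n_fcst, n_members))  # spread too small

    ranks = np.sum(ensemble < obs[:, None], axis=1)                       # rank of obs: 0 .. n_members
    hist = np.bincount(ranks, minlength=n_members + 1) / n_fcst           # verification rank histogram
    ri = np.sum(np.abs(hist - 1.0 / (n_members + 1)))                     # Reliability Index (0 = flat)
    print(np.round(hist, 3))
    print("RI =", round(ri, 3))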

 

 

Lindsay Lee (University of Leeds), Ken Carslaw.

Using sensitivity analysis to quantify the value of observations in reducing model uncertainty

Our ability to predict how atmospheric aerosol particles affect climate has remained the largest uncertainty in the causes of climate change through all Intergovernmental Panel on Climate Change (IPCC) reports since 2001. Aerosols mostly act to cool the climate, by reflecting solar radiation, potentially offsetting some of the global warming caused by greenhouse gases. Huge investment in atmospheric measurements and computer modelling has improved our understanding of aerosol-climate processes, but as new theory and processes are incorporated, model uncertainty increases. Our ability to constrain model uncertainty has thus remained almost unchanged for more than a decade.  Large model uncertainty limits our ability to make precise predictions of future climate change, leading to public doubt about what scientists really know and the validity of the broader climate change message.

Working with global aerosol modellers to apply statistical methods, we are able to understand the effect of uncertainty in a global aerosol model on its predictions. The focus here is the prediction of cloud condensation nuclei, a measurable, climate-relevant model output.  We use the main effect indices from a sensitivity analysis of the global aerosol model to 1) identify currently irreducible uncertainties; 2) identify regions of the world where observations will constrain similar model uncertainties; 3) quantify the value of a perfect observation in different regions of the world; and 4) identify the best regions to obtain observations for maximum reduction in model uncertainty.
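
Point (3) can be illustrated schematically as follows (all numbers invented): if a perfect observation in a region would constrain a single uncertain parameter, then the variance it can remove is bounded by that parameter's main-effect share of the output variance in that region.

    import numpy as np

    regions = ["Region A", "Region B", "Region C"]
    parameters = ["emission flux", "nucleation rate", "deposition rate"]
    # main effect indices (fraction of CCN variance); rows are regions -- illustrative values
    S = np.array([[0.45, 0.20, 0.10],
                  [0.10, 0.55, 0.15],
                  [0.30, 0.25, 0.25]])
    ccn_variance = np.array([400.0, 900.0, 250.0])             # illustrative output variances

    removable = S * ccn_variance[:, None]                      # variance removable by a perfect observation
    for i, region in enumerate(regions):
        best = np.argmax(removable[i])
        print(f"{region}: most valuable constraint is {parameters[best]} "
              f"(variance reduction up to {removable[i, best]:.0f})")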

 

 

Doug Parker (University of Leeds, Met Office), C. E. Birch, J. H. Marsham, and C. M. Taylor

A model analysis of the relationship between initiation of deep tropical convection and the low-level convergence field

Representing the initiation and development of tropical convection in weather and climate models is probably the greatest source of uncertainty in global atmospheric modelling. Since the 1980s there has been fundamental disagreement over the relationship of deep convection to the prevailing field of low-level convergence. Those from an operational forecasting background tend to argue that the presence of convergence is always necessary, to deliver a ‘trigger’ and to overcome convective inhibition. For these reasons convergence contributes to some convection parameterisation schemes. In contrast, theoreticians and global modellers tend to argue that, statistically, triggers are abundant, and the larger-scale field of convergence and vertical motion is always very weak in the tropics, so that from a large-scale perspective convergence does not initiate convection (although it may help destabilise the atmosphere for convection). Although these contrasting views are partly explained by the spatial scale considered, in that triggers are accepted to be necessary for individual clouds on the 10 km scale, the extent to which large-scale convergence is a necessary condition for convection remains uncertain.

To address the issue, this study has analysed convective initiation in convection-permitting simulations for West Africa, conducted as part of the Cascade project, in relation to the prevailing environment of low-level convergence. Several hundred storm initiation events have been identified from a 40-day simulation, and the prevailing conditions analysed. A measure of fractal dimension has been employed to describe the spatial organisation of the convergence patterns. Almost all events were associated with strong convergence on the local scale (60 km). The majority of convective events occur along convergence lines, consistent with observational statistics from the USA. Most of these lines are oriented along the prevailing wind direction. Relatively few convective events form over isolated convergence “hotspots”. On the large scale (300 km), most events occur in conditions of positive convergence, although a significant number (around 20%) occur under large-scale divergence.

These results imply that while large-scale convergence favours convection, it cannot be used as a necessary condition for convective initiation. Simulations with parameterised convection do not show the same behaviour; initiations are equally likely within large-scale divergence and convergence, and the local convergent ‘triggers’ are of a different form. Repeating the analysis to relate observations of convective initiation with convergence in global model analyses showed that the initiations are unrelated to model convergence on the 300 km scale. African forecasters should therefore not use the large-scale convergence field from a global forecast model to predict convection.

 

 

Leighton Regayre (University of Leeds)

Aerosol radiative forcing uncertainty in recent decades

This research uses Gaussian process emulation and variance-based sensitivity analysis to quantify temporal changes in the magnitude of the contributions from uncertain aerosol parametrisations to the radiative forcing of climate. The effect of atmospheric physics parametrisations on aerosol radiative forcing is analysed separately, for a contrasting perspective on climate parametric uncertainty.  Preliminary results from a simultaneous aerosol and atmospheric physics perturbed parameter ensemble reveal the relative magnitude of contributions to climate uncertainty from these sources.

 

 

Philip Sansom (University of Exeter), David Stephenson

 Emergent constraints and ensemble discrepancies: putting the pieces together

Emergent relationships between the climate responses and historical climates simulated by ensembles of climate models have the potential to constrain projections of future climate by comparison with observations of the recent climate. However, the relationship between climate model outputs and the Earth system is uncertain, and how emergent relationships constrain that uncertainty is not clear.

A new statistical framework is presented for making inferences about future climate change by combining climate model outputs with observations. The relationship between the ensemble of model outputs and the Earth system is represented by an uncertain discrepancy. Emergent relationships are interpreted as constraints on the expected discrepancy between the expected climate response of an ensemble of climate models and the expected climate response of the Earth system. It is shown that internal variability in the models must be accounted for in order to avoid biased estimates of emergent constraints. Measurement error and sampling uncertainty in the observations are shown to play a key role when projecting future climate change using emergent constraints.

The new framework is applied to the projection of Arctic winter near-surface temperature at the end of the 21st century using the CMIP5 multi-model ensemble.  The CMIP5 models tend to simulate Arctic winter temperatures that are too cold compared to observations. However, observation uncertainty in the Arctic is large.  The best estimate of recent Arctic temperature may be 0.5-1.0 K lower when climate model output is combined with observations than when observations are considered alone. An emergent relationship is expected wherever sea ice forms on a seasonal basis. The emergent constraint reduces projections of future Arctic warming by up to 2.0 K. The estimated reduction in warming is up to 0.5 K greater than estimated by existing methods using emergent constraints, due to the adjustment of the historical temperature.
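
For reference, the "standard" emergent-constraint calculation that the framework above generalises (by adding ensemble discrepancy, internal variability and observation error) amounts to a regression across models, read off at the observed value; the data below are invented.

    import numpy as np

    rng = np.random.default_rng(10)
    n_models = 30
    historical = rng.normal(250.0, 4.0, size=n_models)                    # simulated historical quantity
    response = 10.0 - 0.8 * (historical - 250.0) + rng.normal(0.0, 0.8, size=n_models)  # simulated warming

    slope, intercept = np.polyfit(historical, response, 1)                # emergent relationship
    obs, obs_sd = 248.5, 0.5                                              # observed value and uncertainty
    constrained = intercept + slope * obs
    constrained_sd = abs(slope) * obs_sd                                  # observation error propagated
    print(f"unconstrained ensemble mean: {response.mean():.1f}")
    print(f"constrained estimate: {constrained:.1f} +/- {constrained_sd:.1f}")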

 

 

Chris Smith (University of Leeds), Jamie Bright, Rolf Crook

The hourly distributions of solar irradiance transmission based on cloud fraction

The distributions of the ratio of solar irradiance to theoretical clear-sky solar irradiance obtained from long-term cloud statistics are described in this paper.  This has applications in areas such as solar energy resource modelling where hourly irradiance is of importance, but where high-quality irradiance measurements at the site of interest are unavailable. 

From a number of UK Met Office MIDAS stations, hourly values of cloud fraction measured in oktas (eighths) and irradiance were obtained for one year. Then, for the latitude, longitude and altitude of each MIDAS station, a theoretical clear-sky irradiance was computed using the DISORT radiative transfer algorithm for each hour of the same year, using the cosine-weighted average zenith angle for the hour. The clear-sky irradiance was computed without clouds, but including a climatological monthly aerosol and water vapour loading. The solar transmission due to clouds is then represented as a clear-sky index, which is the ratio of actual hourly irradiance recorded at the MIDAS site to the theoretical clear-sky value.  The variability in cloud optical thickness is implicitly taken care of by the clear-sky index. 

The clear-sky index for each hour is calculated and tabulated in a histogram for each cloud okta number.  It is found that the four-parameter skew-t distribution [1] provides a good model fit to the distributions of clear-sky index by cloud okta. The distributions of clear-sky index are approximately normal for oktas 4, 5 and 6 (partially cloudy), and approximately gamma for oktas 7 and 8 (overcast), but no satisfactory simpler distribution can be found for the okta 0-3 (clear and scattered cloud) cases.  For these cases, the clear-sky index shows a modal value close to 1 as expected, but there are several instances where the clear-sky index is significantly lower than 1, owing to particularly turbid atmospheric conditions or the sun being obscured by the cloudy portion of the sky.  Values of clear-sky index greater than 1 are occasionally observed even at high oktas, which can be explained by reflections from the sides of clouds when the sky is cloudy, or by exceptionally clear conditions when the sky is clear.
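
The tabulation step can be sketched as follows, with synthetic numbers standing in for the MIDAS measurements and the DISORT clear-sky values: compute the hourly clear-sky index and collect its distribution separately for each okta (fitting the skew-t distribution to each subset would then follow).

    import numpy as np

    rng = np.random.default_rng(11)
    n_hours = 8760
    clear_sky = 500.0 + 300.0 * rng.random(n_hours)            # stand-in for DISORT clear-sky irradiance
    okta = rng.integers(0, 9, size=n_hours)                    # 0 (clear) .. 8 (overcast)
    transmission = np.clip(1.0 - 0.1 * okta + 0.15 * rng.normal(size=n_hours), 0.05, 1.2)
    measured = transmission * clear_sky                        # stand-in for measured irradiance

    k = measured / clear_sky                                   # clear-sky index
    for o in range(9):
        sel = k[okta == o]
        print(f"okta {o}: mean k = {sel.mean():.2f}, P(k > 1) = {np.mean(sel > 1.0):.2f}")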

Global (diffuse plus direct) irradiance at a latitude, longitude, altitude and time of interest is retrieved by running DISORT to generate a clear-sky irradiance and multiplying this by a clear-sky index obtained from the distribution for the corresponding cloud okta. DISORT also provides the direct and diffuse components of the clear-sky irradiance. The cloud transmission relationship from Müller and Trentmann [2] can be used to calculate the direct irradiance transmission including the cloud effects.  Diffuse irradiance is then obtained by subtracting the direct irradiance due to clouds from the global irradiance calculated via the clear-sky index. This is necessary, for example, to calculate irradiance on an angled plane.

One application of this method is in a statistical weather generator, where an hourly time series of cloud coverage is obtained by a Markov chain and irradiance can be simulated from the distributions obtained. This is the subject of ongoing work [4]. We envisage extending the model to take 3-hourly or monthly cloud fraction from CMIP5 climate models as input, to provide realistic hourly distributions of solar irradiance for modelling solar energy resources under different climate experiments.

References

[1] A. Azzalini and A. Capitanio. Distributions generated by perturbation of symmetry with emphasis on a multivariate skew-t distribution. Journal of the Royal Statistical Society, series B, 65:367--389, 2003.

[2] R. Müller and J. Trentmann. Algorithm theoretical baseline document: Direct irradiance at surface. Technical report, EUMETSAT Satellite Application Facility on Climate Monitoring, 2010.

[3] A. Skartveit and J.A. Olseth. The probability density and autocorrelation of short-term global and beam irradiance. Solar Energy, 49(6): 477--487, 1992.

[4] J. Bright, C.J. Smith, and R. Crook. Stochastic generation of minutely irradiance time series, derived from hourly weather data. Article in press.

 

 

Steve Tobias (University of Leeds), Brad Marston (Brown)

Multiscale Approach to the Direct Statistical Simulation of Geophysical Flows

Statistics of model geophysical and astrophysical fluids may be directly accessed by solving the equations of motion for the statistics themselves, as proposed by Lorenz nearly 50 years ago. Here we introduce such Direct Statistical Simulation (DSS) via cumulant hierarchies. We generalise this approach by separating eddies by length scale and then discarding triads that involve only small-scale waves. Under such approximations the statistical equations close, and a generalisation (GCE2) of the second-order closure (CE2) based on zonal averaging is derived. The GCE2 approach is tested on two idealised models: a stochastically driven barotropic jet, and the two-layer primitive equations. Comparison with low-order statistics accumulated from numerical simulation finds GCE2 to be surprisingly accurate.
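
For orientation, the first two cumulants evolved by a zonally averaged closure of CE2 type can be written schematically as (a textbook form, not the specific GCE2 equations of the talk):

\[
c_1(x) = \langle q \rangle, \qquad
c_2(x_1, x_2) = \langle q'(x_1)\, q'(x_2) \rangle, \qquad q' = q - \langle q \rangle,
\]

where q denotes the dynamical fields and \langle \cdot \rangle a zonal average. CE2 closes the hierarchy by setting the third cumulant to zero (discarding eddy-eddy interactions entirely), whereas the generalisation described above discards only those triad interactions that involve small-scale waves alone.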

 

 

Jochen Voss (University of Leeds)

MAP estimators and 4DVAR

In a recent article (Dashti et al., 2013) we show how the maximum a posteriori (MAP) estimator can be applied to the problem of estimating an unknown function u from noisy measurements of a known, possibly nonlinear, map G applied to u.  Our result shows that the MAP estimator can be characterised as the minimiser of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior, thus leading to a variational problem.  In this talk we relate our results about MAP estimators to the four-dimensional variational assimilation (4DVAR) method for data assimilation.
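
In a common formulation of this setting (a sketch of the general shape, not the precise statement of the paper), one observes y = G(u) + \eta with Gaussian noise \eta \sim N(0, \Gamma) and places a Gaussian prior N(m_0, C) on u; the MAP estimator then minimises an Onsager-Machlup functional of the form

\[
I(u) \;=\; \tfrac{1}{2} \big\| \Gamma^{-1/2} \big( y - G(u) \big) \big\|^2 \;+\; \tfrac{1}{2} \big\| C^{-1/2} (u - m_0) \big\|^2 ,
\]

in which the second term is the Cameron-Martin norm of the prior. The 4DVAR cost function has the same structure, with u the initial condition, G the composition of the model dynamics with the observation operators, and the background covariance playing the role of C.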

 

 

Peter Watson (University of Oxford), Lesley J Gray

The stratospheric wintertime response to applied extratropical torques and its relationship with the annular mode (Presentation)

We present the response of the wintertime Northern Hemisphere (NH) stratosphere to applied extratropical zonally symmetric zonal torques in a primitive equation model of the middle atmosphere. This is relevant to understanding the effect of gravity wave drag (GWD) in models and the influence of natural forcings such as the quasi-biennial oscillation (QBO), El Nino-Southern Oscillation (ENSO), solar cycle and volcanic eruptions on the polar vortex. The steady state circulation response to torques applied at high latitudes is found to closely resemble the Northern annular mode (NAM) in perpetual January simulations. This behaviour is analogous to that shown by the Lorenz (1963) system and tropospheric models. Imposed westerly high-latitude torques lead counter-intuitively to an easterly zonal mean zonal wind (ū) response at high latitudes, due to planetary wave feedbacks. However, in simulations with a seasonal cycle, the feedbacks are qualitatively similar but weaker, and the long-term response is less NAM-like and no longer easterly at high latitudes. The wave feedbacks are consistent with ray theory, and their differences are due to the climatological ū differing between the two types of simulations. Our results suggest that dynamical feedbacks tend to make the long-term NH extratropical stratospheric response to arbitrary external forcings NAM-like, but only if the feedbacks are sufficiently strong. This may explain why the observed polar vortex responses to natural forcings such as the QBO and ENSO are NAM-like. The results imply that wave feedbacks must be understood and accurately modelled in order to understand and predict the influence of GWD and other external forcings on the polar vortex, and that biases in a model's climatology will cause biases in these feedbacks.

 

Toby Wood (University of Leeds)

The pseudo-incompressible approximation and Hamilton's principle

Most fluid flows of interest in engineering, geophysics, and even astrophysics can be regarded as "pseudo-incompressible", in the sense that acoustic waves carry only a tiny fraction of the total energy.  Numerical simulations of such flows therefore often use "wave-filtered" equations (e.g. incompressible, Boussinesq, anelastic, etc.) that are based on approximations to the fully compressible equations.  In this talk, we will describe how these approximations can be improved, and generalized, by using concepts from Lagrangian mechanics.

 

 

Guangzhi Xu (UEA)

Inter-annual variability of Pacific atmospheric moisture divergence: exploring non-linear behaviour of extreme El Nino events with Self Organizing Maps (SOM)

On seasonal and inter-annual time scales, column integrated atmospheric moisture divergence provides a useful measure of the atmospheric branch of the tropical hydrological cycle. It reflects the combined dynamical and thermodynamical effects, but bypasses possible uncertainty issues related to limitations and errors in observations of evaporation (E) minus precipitation (P). An Empirical Orthogonal Function (EOF) analysis of the tropical Pacific moisture divergence fields calculated from the ERA-Interim Reanalysis dataset reveals the dominant effects of the El Nino-Southern Oscillation (ENSO) phenomenon on inter-annual time scales. Two EOFs are necessary to capture the ENSO signature, and regression relationships between their Principal Components and indices of equatorial Pacific sea surface temperature (SST) demonstrate that the transition from strong La Nina through to extreme El Nino events is not a linear one. The largest deviation from linearity is for the strongest El Nino events, and we interpret that this arises at least partly because the EOF analysis cannot easily separate different patterns of hydrological cycle response if those patterns are not orthogonal to each other.
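
The EOF step can be sketched compactly via the singular value decomposition of the space-time anomaly matrix (synthetic anomalies stand in for the ERA-Interim moisture divergence below); the leading principal components are then regressed on an SST index, as in the analysis described above.

    import numpy as np

    rng = np.random.default_rng(12)
    n_months, n_grid = 420, 1000                       # ~35 years of monthly fields
    anomalies = rng.normal(size=(n_months, n_grid))    # rows: time, columns: grid points
    anomalies -= anomalies.mean(axis=0)                # remove the time mean

    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)                    # fraction of variance per EOF
    pcs = U[:, :2] * s[:2]                             # leading two principal components
    eofs = Vt[:2]                                      # corresponding spatial patterns
    sst_index = rng.normal(size=n_months)              # stand-in for a Nino SST index
    slope = np.polyfit(sst_index, pcs[:, 0], 1)[0]     # regression of PC1 on the index
    print(np.round(explained[:2], 3), round(slope, 3))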

To overcome the orthogonality constraints, a Self Organizing Map (SOM) analysis of the same moisture divergence fields was performed. The SOM analysis captures the range of responses to ENSO, including the distinction between the moderate and strong El Nino events identified by the EOF analysis. The work demonstrates the potential for the application of SOM to large-scale climatic analysis, by virtue of its easier interpretation, relaxation of orthogonality constraints and its versatility (such as combining different fields together to diagnose their covariances) as an alternative classification method. Both the EOF and SOM analyses suggest the separate classification of 'moderate' and 'extreme' El Nino events, because they differ not only in the magnitude of the hydrological cycle responses but also in their spatial patterns and evolutionary paths. Classification from the moisture divergence point of view shows consistency with results based on SST and other physical variables.

 

 

Hiroe Yamazaki (Imperial College)

Cartesian-grid modelling: An approach for handling of 3D complex Topography

With the rapid development of computer technology, the resolution of atmospheric numerical models has increased significantly. Consequently, steep gradients in mountainous terrain become resolved in high-resolution models, which leads models using the commonly used terrain-following coordinates to suffer from large truncation errors.

In this study, a new 3D nonhydrostatic atmospheric model is developed using Cartesian coordinates. A cut-cell representation of topography based on a finite-volume discretization is applied along with a cell-merging approach, in which small cut-cells are merged with neighbouring cells either vertically or horizontally. In addition, a block-structured Cartesian mesh-refinement technique achieves a variable resolution on the model grid that is fine close to the terrain surface.

The model successfully reproduces flows over a wide range of 3D slopes. The advantage of a locally refined grid around a 3D hill is also demonstrated with the use of cut-cells at the terrain surface.

 

 

Kuniko Yamazaki (University of Edinburgh), S. F. B.Tett (U of Edinburgh), M. J. Mineter (U of Edinburgh), C. Cartis (U of Oxford)

 Tuning HadAM3 using an optimisation method

Perturbed physics configurations of version 3 of the Hadley Centre Atmosphere Model (HadAM3) driven with observed sea surface temperatures (SST) and sea ice were tuned to outgoing radiation observations using a Gauss–Newton line search optimization algorithm to adjust the model parameters. Four key parameters that previous research found affected climate sensitivity were adjusted to several different target values including two sets of observations. The observations used were the global average reflected shortwave radiation (RSR) and outgoing longwave radiation (OLR) from the Clouds and the Earth’s Radiant Energy System instruments combined with observations of ocean heat content. Using the same method, configurations were also generated that were consistent with the earlier Earth Radiation Budget Experiment results. Many, though not all, tuning experiments were successful, with about 2500 configurations being generated and the changes in simulated outgoing radiation largely due to changes in clouds. Clear-sky radiation changes were small, largely due to a cancellation between changes in upper-tropospheric relative humidity and temperature. Changes in other climate variables are strongly related to changes in OLR and RSR, particularly on large scales. There appears to be some equifinality, with different parameter configurations producing OLR and RSR values close to observed values. These models have small differences in their climatology, with one group being similar to the standard configuration and the other group drier in the tropics and warmer everywhere.
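
The tuning step can be sketched as follows (a toy two-parameter "model" replaces HadAM3, and the Jacobian is estimated by finite differences; a line search, as used in the study, would scale the computed step):

    import numpy as np

    def simulate(p):                           # stand-in for (RSR, OLR) from a model run
        return np.array([240.0 - 3.0 * p[0] + 0.5 * p[1] ** 2,
                         100.0 + 2.0 * p[0] * p[1]])

    target = np.array([239.0, 101.5])          # illustrative target radiation values
    p = np.array([0.5, 0.5])                   # initial parameter values

    for _ in range(5):
        r = simulate(p) - target                               # residual
        J = np.empty((r.size, p.size))
        for j in range(p.size):                                # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = 1e-4
            J[:, j] = (simulate(p + dp) - simulate(p - dp)) / 2e-4
        p = p + np.linalg.lstsq(J, -r, rcond=None)[0]          # Gauss-Newton update
        print(np.round(p, 4), round(float(np.linalg.norm(r)), 4))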

Building on this methodology, we will present preliminary results from tuning experiments using a larger number of observations and parameters but the same approach. One issue with multiple parameters is the need to weight them. We aim to do this using a covariance based on estimated observational uncertainty and simulated internal variability.