A new operational ocean reanalysis system (ORAS4) has been implemented at ECMWF. It spans the period 1958 to the present. This article describes its main components and evaluates its quality. The adequacy of ORAS4 for the initialization of seasonal forecasts is discussed, along with the robustness of some prominent climate signals. ORAS4 has been evaluated using different metrics, including comparison with observed ocean currents, RAPID‐derived transports, sea‐level gauges, and GRACE‐derived bottom pressure. Compared to a control ocean model simulation, ORAS4 improves the fit to observations, the interannual variability, and seasonal forecast skill. Some problems have been identified, such as the underestimation of meridional overturning at 26°N, the magnitude of which is shown to be sensitive to the treatment of the coastal observations. ORAS4 shows a clear and robust shallowing trend of the Pacific Equatorial thermocline. It also shows a clear and robust nonlinear trend in the 0–700 m ocean heat content, consistent with other observational estimates. Some aspects of these climate signals are sensitive to the choice of sea‐surface temperature product and the specification of the observation‐error variances. The global sea‐level trend is consistent with the altimeter estimate, but the partition into volume and mass variations is more debatable, as inferred from discrepancies in the trend between ORAS4‐ and GRACE‐derived bottom pressure.
This article describes the UK Met Office Global Seasonal forecast system version 5 (GloSea5). GloSea5 upgrades include an increase in horizontal resolution in the atmosphere (N216, ∼0.7°) and the ocean (0.25°), and implementation of a 3D‐Var assimilation system for ocean and sea‐ice conditions. GloSea5 shows improved year‐to‐year predictions of the major modes of variability. In the Tropics, predictions of the El Niño–Southern Oscillation are improved with reduced errors in the West Pacific. In the Extratropics, GloSea5 shows unprecedented levels of forecast skill and reliability for both the North Atlantic Oscillation and the Arctic Oscillation. We also find useful levels of skill for the western North Pacific Subtropical High, which largely determines summer precipitation over East Asia.
Incomplete global coverage is a potential source of bias in global temperature reconstructions if the unsampled regions are not uniformly distributed over the planet's surface. The widely used Hadley Centre–Climatic Research Unit Version 4 (HadCRUT4) dataset covers on average about 84% of the globe over recent decades, with the unsampled regions being concentrated at the poles and over Africa. Three existing reconstructions with near‐global coverage are examined, each suggesting that HadCRUT4 is subject to bias due to its treatment of unobserved regions. Two alternative approaches for reconstructing global temperatures are explored, one based on an optimal interpolation algorithm and the other a hybrid method incorporating additional information from the satellite temperature record. The methods are validated on the basis of their skill at reconstructing omitted sets of observations. Both methods provide results superior to excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available. Temperature trends are compared for the hybrid global temperature reconstruction and the raw HadCRUT4 data. The widely quoted trend since 1997 in the hybrid global reconstruction is two and a half times greater than the corresponding trend in the coverage‐biased HadCRUT4 data. Coverage bias causes a cool bias in recent temperatures relative to the late 1990s, which increases from around 1998 to the present. Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997–1998, which also tends to suppress trends starting during those years.
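The coverage-bias mechanism described above can be illustrated with a minimal sketch (this is an illustration of the arithmetic, not the paper's reconstruction algorithms; the synthetic anomaly field and the 60° cut-off are assumptions): when unsampled cells are concentrated in rapidly warming high latitudes, an area-weighted mean over the sampled cells alone comes out cooler than the true global mean.

```python
import numpy as np

# Synthetic anomaly field: warming amplified toward the poles (hypothetical).
lats = np.linspace(-87.5, 87.5, 36)        # cell-centre latitudes
weights = np.cos(np.deg2rad(lats))         # area weighting on a sphere
anomaly = 0.5 + 0.5 * np.abs(lats) / 90.0  # anomaly in K, polar-amplified

# True global mean over all cells
full_mean = np.average(anomaly, weights=weights)

# Mimic incomplete coverage: drop all cells poleward of 60 degrees
mask = np.abs(lats) <= 60.0
masked_mean = np.average(anomaly[mask], weights=weights[mask])

# The masked mean is cooler than the full mean: a coverage-induced cool bias
print(full_mean - masked_mean)
```

Because the excluded polar cells carry the largest anomalies, the difference is positive; the same logic applied to trends explains why trends starting in 1997 or 1998 are particularly sensitive to coverage.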
This article describes the non‐hydrostatic dynamical core developed for the ICOsahedral Non‐hydrostatic (ICON) modelling framework. ICON is a joint project of the German Weather Service (DWD) and the Max Planck Institute for Meteorology (MPI‐M), targeting a unified modelling system for global numerical weather prediction (NWP) and climate modelling. Compared with the existing models at both institutions, the main achievements of ICON are exact local mass conservation, mass‐consistent tracer transport, a flexible grid nesting capability and the use of non‐hydrostatic equations on global domains. The dynamical core is formulated on an icosahedral‐triangular Arakawa C grid. Achieving mass conservation is facilitated by a flux‐form continuity equation with density as the prognostic variable. Time integration is performed with a two‐time‐level predictor–corrector scheme that is fully explicit, except for the terms describing vertical sound‐wave propagation. To achieve competitive computational efficiency, time splitting is applied between the dynamical core on the one hand and tracer advection, physics parametrizations and horizontal diffusion on the other hand. A sequence of tests with varying complexity indicates that the ICON dynamical core combines high numerical stability over steep mountain slopes with good accuracy and reasonably low diffusivity. Preliminary NWP test suites initialized with interpolated analysis data reveal that the ICON modelling system already achieves better skill scores than its predecessor at DWD, the operational hydrostatic Global Model Europe (GME), and at the same time requires significantly fewer computational resources.
Following previous work on an inherently mass‐conserving semi‐implicit (SI) semi‐Lagrangian (SL) discretization of the two‐dimensional (2D) shallow‐water equations and 2D vertical slice equations, that approach is here extended to the 3D deep‐atmosphere, non‐hydrostatic global equations. As with the reduced‐dimension versions of this model, an advantage of the approach is that it preserves the same basic structure as a standard, non‐mass‐conserving, SISL version of the model. Additionally, the model is simply switchable to hydrostatic and/or shallow‐atmosphere forms. It is also designed to allow simple switching between various geometries (Cartesian, spherical, spheroidal). The resulting mass‐conserving model is applied to a standard set of test problems for such models in spherical geometry and compared with results from the standard SISL version of the model.
We describe the development and testing of the hybrid ensemble/4D‐Var global data assimilation system that was implemented operationally at the Met Office in July 2011, giving an average reduction of RMS errors of just under 1%. The scheme uses the extended control variable technique to implement a hybrid background error covariance that combines the standard climatological covariance with a covariance derived from the 23‐member operational ensemble MOGREPS‐G. Unique features of the Met Office scheme include application of a horizontal ‘anti‐aliasing’ filter to the ensemble error modes, a vertical localization scheme based uniquely on a modification of the climatological stream function covariance, and inflation of the climatological covariance to maintain the analysis fit to observations. Findings during development include a significantly greater impact of the scheme in 3D‐Var than 4D‐Var, a clear positive impact from the combination of the anti‐aliasing filter and vertical localization, and a relatively small sensitivity to full coupling of the ensemble and 4D‐Var systems. Supplementary experiments suggest that the ability of the ensemble to capture coherent ‘Errors of the Day’ is key to the improvements in forecast skill. A particular problem encountered during development was significantly poorer tropical verification scores when measured against own analyses. In contrast, verification against independent (ECMWF) analyses gave scores that were much more consistent with those against observations.
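The core idea of the hybrid background-error covariance, combining a static climatological covariance with an ensemble-derived one, can be sketched as follows (a toy illustration under assumed dimensions and blending weights, not the Met Office's extended-control-variable implementation; the Gaussian localization function is also an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_ens = 8, 23                       # toy state size; 23 members as in MOGREPS-G

B_clim = np.eye(n)                     # placeholder static climatological covariance
X = rng.standard_normal((n, n_ens))    # toy ensemble of background states
Xp = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
B_ens = Xp @ Xp.T                      # raw sample covariance from the ensemble

# Schur-product localization to damp spurious long-range sample correlations
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.exp(-(dist / 3.0) ** 2)
B_ens_loc = L * B_ens

beta_c, beta_e = 0.8, 0.2              # hypothetical blending weights
B_hybrid = beta_c * B_clim + beta_e * B_ens_loc
```

Because both components are symmetric positive semi-definite and the weights are non-negative, the blended matrix remains a valid covariance; the weights control how strongly the flow-dependent "errors of the day" from the ensemble influence the analysis.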
This article describes the performance of the variational bias correction system for satellite radiance data in ERA-Interim, and considers implications for the representation of climate signals in reanalysis. We briefly review the formulation of the method and its ability to automatically develop bias estimates when radiance measurements from newly available satellite sensors are first introduced in the reanalysis. We then present several results obtained from the first 19 years (1989-2007) of ERA-Interim. These include the identification of Microwave Sounding Unit (MSU) instrument calibration errors, the response of the system to the Pinatubo eruption in 1991, and the detection of a long-term drift in biases of tropospheric AMSU-A data. We find that our results support the notion that global reanalysis provides an appropriate framework for climate monitoring. Copyright (C) 2009 Royal Meteorological Society
Demonstrating the effect that climate change is having on regional weather is a subject which occupies climate scientists, government policy makers and the media. After an extreme weather event occurs, the question is often posed, 'Was the event caused by anthropogenic climate change?' Recently, a new branch of climate science (known as attribution) has sought to quantify how much the risk of extreme events occurring has increased or decreased due to climate change. One method of attribution uses very large ensembles of climate models computed via volunteer distributed computing. A recent advancement is the ability to run both a global climate model and a higher resolution regional climate model on a volunteer's home computer. Such a set-up allows the simulation of weather on a scale that is of most use to studies of the attribution of extreme events. This article introduces a global climate model that has been developed to simulate the climatology of all major land regions with reasonable accuracy. This then provides the boundary conditions to a regional climate model (which uses the same formulation but at higher resolution) to ensure that it can produce realistic climate and weather over any region of choice. The development process is documented and a comparison to previous coupled climate models and atmosphere-only climate models is made. The system (known as weather@home) by which the global model is coupled to a regional climate model and run on volunteers' home computers is then detailed. Finally, a validation of the whole system is performed, with a particular emphasis on how accurately the distributions of daily mean temperature and daily mean precipitation are modelled in a particular application over Europe. This builds confidence in the applicability of the weather@home system for event attribution studies.
Recent observational and theoretical studies of the global properties of small-scale atmospheric gravity waves have highlighted the global effects of these waves on the circulation from the surface to the middle atmosphere. The effects of gravity waves on the large-scale circulation have long been treated via parametrizations in both climate and weather-forecasting applications. In these parametrizations, key parameters describe the global distributions of gravity-wave momentum flux, wavelengths and frequencies. Until recently, global observations could not define the required parameters because the waves are small in scale and intermittent in occurrence. Recent satellite and other global datasets with improved resolution, along with innovative analysis methods, are now providing constraints for the parametrizations that can improve the treatment of these waves in climate-prediction models. Research using very-high-resolution global models has also recently demonstrated the capability to resolve gravity waves and their circulation effects, and when tested against observations these models show some very realistic properties. Here we review recent studies on gravity-wave effects in stratosphere-resolving climate models, recent observations and analysis methods that reveal global patterns in gravity-wave momentum fluxes and results of very-high-resolution model studies, and we outline some future research requirements to improve the treatment of these waves in climate simulations. Copyright (C) 2010 Royal Meteorological Society and Crown in the right of Canada
This article describes an ensemble of ten atmospheric model integrations for the years 1899-2010, performed at the European Centre for Medium-Range Weather Forecasts (ECMWF). Horizontal spectral resolution is T159 (about 125 km), using 91 levels in the vertical from the surface up to 1 Pa, and a time step of 1 h. This ensemble, denoted by ERA-20CM, formed the first step toward a twentieth-century reanalysis within ERA-CLIM, a three-year European-funded project involving nine partners. Sea-surface temperature and sea-ice cover are prescribed by an ensemble of realizations (HadISST2), as recently produced by the Met Office Hadley Centre within ERA-CLIM. Variations in these realizations reflect uncertainties in the available observational sources on which this product is based. Forcing terms in the model radiation scheme follow CMIP5 recommendations. Any effect of their uncertainty is neglected. These terms include solar forcing, greenhouse gases, ozone and aerosols. Both the ocean surface and radiative forcing incorporate a proper long-term evolution of climate trends in the twentieth century, and the occurrence of major events, such as the El Niño–Southern Oscillation and volcanic eruptions. No atmospheric observations were assimilated. For this reason ERA-20CM is not able to reproduce actual synoptic situations. However, the ensemble is able to provide a statistical estimate of the climate. Overall, the temperature rise over land is in fair agreement with the CRUTEM4 observational product. Over the last two decades the warming over land exceeds the warming over sea, which is consistent with models participating in the CMIP5 project, as well as with the ECMWF ERA-Interim reanalysis. Some aspects of warming and of the hydrological cycle are discerned, and the model response to volcanic eruptions is qualitatively correct.
The data from ERA-20CM are freely available, embracing monthly-mean fields for many atmospheric and ocean-wave quantities, and synoptic fields for a small, essential subset.
Sub‐seasonal forecasts have been routinely produced at ECMWF since 2002 with reforecasts produced ‘on the fly’ to calibrate the real‐time sub‐seasonal forecasts. In this study, the skill of the reforecasts from April 2002 to March 2012 and covering a common set of years (1995 to 2001) has been evaluated. Results indicate that the skill of the ECMWF reforecasts to predict the Madden–Julian Oscillation (MJO) has improved significantly since 2002, with an average gain of about 1 day of prediction skill per year. The amplitude of the MJO has also become more realistic, although the model still tends to produce MJOs which are weaker than in the ECMWF re‐analysis. As a consequence, the ability of the ECMWF model to simulate realistic MJO teleconnections over the Northern and Southern Extratropics has improved dramatically over the 10‐year period. Forecast skill scores have also improved in the Extratropics. For instance, weekly mean forecasts of the North Atlantic Oscillation Index are more skilful in recent years than 10 years ago. A large part of this improvement seems to be linked to the improvements in the representation of the MJO. Skill to predict 2 m temperature anomalies over the Northern Extratropics has also improved almost continuously since 2002. Changes in the horizontal and vertical resolutions of the atmospheric model had only a small impact on the skill scores, suggesting that most of the improvements in the ECMWF sub‐seasonal forecasts were due to changes in model physics which were primarily designed to improve the model climate and medium‐range forecasts. The impact of changes in the data assimilation system and in the observing data has not been considered in this study, since all the reforecasts used for this study were initialized from the same re‐analysis over a common set of years.
Much of the atmospheric variability in the North Atlantic sector is associated with variations in the eddy-driven component of the zonal flow. Here we present a simple method to diagnose this component of the flow specifically, using the low-level wind field (925-700 hPa). We focus on the North Atlantic winter season in the ERA-40 reanalysis. Diagnostics of the latitude and speed of the eddy-driven jet stream are compared with conventional diagnostics of the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern. This shows that the NAO and the EA both describe combined changes in the latitude and speed of the jet stream. It is therefore necessary, but not always sufficient, to consider both the NAO and the EA in identifying changes in the jet stream. The jet stream analysis suggests that there are three preferred latitudinal positions of the North Atlantic eddy-driven jet stream in winter. This result is in very good agreement with the application of a statistical mixture model to the two-dimensional state space defined by the NAO and the EA. These results are consistent with several other studies which identify four European/Atlantic regimes, comprising three jet stream patterns plus European blocking events. Copyright (C) 2010 Royal Meteorological Society
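A jet diagnostic of the kind described above can be sketched in a few lines (a hedged illustration, not the authors' exact recipe: the function name, grid, and synthetic wind profile are assumptions): vertically average the zonal wind over 925-700 hPa, take the zonal mean over an Atlantic sector, then record the latitude and speed of the maximum.

```python
import numpy as np

def jet_lat_speed(u, lats):
    """Latitude and speed of the low-level jet maximum.

    u: zonal-mean, 925-700 hPa vertically averaged zonal wind (m/s) on `lats`.
    """
    i = int(np.argmax(u))
    return lats[i], u[i]

# Synthetic sector-mean wind profile: a Gaussian jet centred at 45N
lats = np.linspace(15.0, 75.0, 61)
u = 12.0 * np.exp(-((lats - 45.0) / 10.0) ** 2)

lat, speed = jet_lat_speed(u, lats)
print(lat, speed)   # prints 45.0 12.0
```

Time series of these two scalars, rather than fixed-pattern indices like the NAO and EA, is what allows the preferred jet positions to emerge directly, for example via a mixture model fitted to the jet-latitude distribution.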
A coupled data assimilation system has been developed at the European Centre for Medium‐Range Weather Forecasts (ECMWF), which is intended to be used for the production of global reanalyses of the recent climate. The system assimilates a wide variety of ocean and atmospheric observations and produces ocean–atmosphere analyses with a coupled model. Employing the coupled‐model constraint in the analysis implies that assimilation of an ocean observation has immediate impact on the atmospheric state estimate and, conversely, assimilation of an atmospheric observation affects the ocean state. This covariance between atmosphere and ocean induced by the analysis method is illustrated with simple numerical experiments. Realistic data assimilation experiments based on the global observing system are then used to assess the quality of the assimilation method. Comparison with an uncoupled system shows a mostly neutral impact overall, with slightly improved temperature estimates in the upper ocean and lower atmosphere. These preliminary results are considered of interest for the ongoing community efforts focusing on coupled data assimilation.
The convectively active part of the Madden–Julian Oscillation (MJO) propagates eastward through the warm pool, from the Indian Ocean through the Maritime Continent (the Indonesian archipelago) to the western Pacific. The Maritime Continent's complex topography means the exact nature of the MJO propagation through this region is unclear. Model simulations of the MJO are often poor over the region, leading to local errors in latent heat release and global errors in medium‐range weather prediction and climate simulation. Using 14 northern winters of TRMM satellite data it is shown that, where the mean diurnal cycle of precipitation is strong, 80% of the MJO precipitation signal in the Maritime Continent is accounted for by changes in the amplitude of the diurnal cycle. Additionally, the relationship between outgoing long‐wave radiation (OLR) and precipitation is weakened here, such that OLR is no longer a reliable proxy for precipitation. The canonical view of the MJO as the smooth eastward propagation of a large‐scale precipitation envelope also breaks down over the islands of the Maritime Continent. Instead, a vanguard of precipitation (anomalies of 2.5 mm day⁻¹ over 10⁶ km²) jumps ahead of the main body by approximately 6 days or 2000 km. Hence, there can be enhanced precipitation over Sumatra, Borneo or New Guinea when the large‐scale MJO envelope over the surrounding ocean is one of suppressed precipitation. This behaviour can be accommodated into existing MJO theories. Frictional and topographic moisture convergence and relatively clear skies ahead of the main convective envelope combine with the low thermal inertia of the islands, to allow a rapid response in the diurnal cycle which rectifies onto the lower‐frequency MJO. Hence, accurate representations of the diurnal cycle and its scale interaction appear to be necessary for models to simulate the MJO successfully.
This review assesses storm studies over the North Atlantic and northwestern Europe regarding the occurrence of potential long‐term trends. Based on a systematic review of available articles, trends are classified according to different geographical regions, datasets, and time periods. Articles that used measurement and proxy data, reanalyses, regional and global climate model data on past and future trends are evaluated for changes in storm climate. The most important result is that trends in storm activity depend critically on the time period analysed. An increase in storm numbers is evident for the reanalyses period for the most recent decades, whereas most long‐term studies show merely decadal variability for the last 100–150 years. Storm trends derived from reanalyses data and climate model data for the past are mostly limited to the last four to six decades. The majority of these studies find increasing storm activity north of about 55–60° N over the North Atlantic with a negative tendency southward. This increase from about the 1970s until the mid‐1990s is also mirrored by long‐term proxies and the North Atlantic Oscillation and constitutes a part of their decadal variability. Studies based on proxy and measurement data or model studies over the North Atlantic for the past which cover more than 100 years show large decadal variations and either no trend or a decrease in storm numbers. Future scenarios until about the year 2100 indicate mostly an increase in winter storm intensity over the North Atlantic and western Europe. However, future trends in total storm numbers are quite heterogeneous and depend on the model generation used.
Large‐eddy simulations (LES) with the new ICOsahedral Non‐hydrostatic atmosphere model (ICON) covering Germany are evaluated for four days in spring 2013 using observational data from various sources. Reference simulations with the established Consortium for Small‐scale Modelling (COSMO) numerical weather prediction model and further standard LES codes are performed for comparison. This comprehensive evaluation approach covers multiple parameters and scales, focusing on boundary‐layer variables, clouds and precipitation. The evaluation points to the need to work on parametrizations influencing the surface energy balance, and possibly on ice cloud microphysics. The central purpose for the development and application of ICON in the LES configuration is the use of simulation results to improve the understanding of moist processes, as well as their parametrization in climate models. The evaluation thus aims at building confidence in the model's ability to simulate small‐ to mesoscale variability in turbulence, clouds and precipitation. The results are encouraging: the high‐resolution model matches the observed variability much better at small‐ to mesoscales than the coarser resolved reference model. In its highest grid resolution, the simulated turbulence profiles are realistic and column water vapour matches the observed temporal variability at short time‐scales. Despite being somewhat too large and too frequent, small cumulus clouds are well represented in comparison with satellite data, as is the shape of the cloud size spectrum. Variability of cloud water matches the satellite observations much better in ICON than in the reference model. In this sense, it is concluded that the model is fit for the purpose of using its output for parametrization development, despite remaining room for improvement in some important aspects of processes that are also parametrized in the high‐resolution model.
Visible images of (top row) MODIS satellite (200 m resolution) and (middle row) synthetic radiances based on simulations with 156 m (625 m in the rightmost column) resolution using the new ICOsahedral Non‐hydrostatic (ICON) model for four simulated days in spring 2013. Bottom row: zoom into North Sea coastal region, 24 April (white dashed box in panel a). This is one of several approaches to evaluate the new ICON model using multiple observations with a focus on clouds and precipitation.