This website's content is no longer actively maintained, but the material has been kept on-line for historical purposes.
The page may contain broken links or outdated information, and parts may not function in current web browsers.


NASA Modeling Framework
Cloud Modeling and Analysis Initiative as Prototype

Basics

The most basic tools for development and testing of climate models are a set of observations and the model code. It is usually assumed that, somehow, looking at observations will suggest ways to represent processes in the model, that comparisons of the modeled climate state against the observed climate state will indicate how similar the model is to reality, and that discrepancies between the model and the observed climate state will suggest improvements that can be made to the model. In practice, these simplistic procedural ideas do not work very well because the climate and models of it are very complex systems. Instead, observations have to be analyzed in much more depth and detail to separate and elucidate processes and comparisons of much more than the state of the system are needed to diagnose problems and inspire improvements. The complexity of the climate system ensures that these activities can only be carried out by a large team of researchers, yet climate research is largely organized only as individuals or small teams within larger institutions. Even the government laboratories that conduct model-related work do not coordinate the efforts of very many people at one time. Thus, the number of researchers who can actually access sufficiently comprehensive data sets and conduct experiments with sophisticated models is too small.

Rationale for Designing a “Different” Modeling Activity

The very slow progress in developing climate models and demonstrating improvements in them may be attributed to the predominance of this overly simple approach, in part necessitated by a lack of coordination of a large enough group of researchers. To push forward more aggressively, we need a different mode of working that complements the usual “discovery-driven” research by individuals and small teams. This different mode must comprise a large enough team to cover the scope of the problem, a team large enough to provide the expertise and level of effort needed to analyze large datasets and investigate complicated code, a team that is nevertheless focused on a specific strategic plan that attacks the problems in a more deliberate and rational manner, and a set of procedures that lead to a systematic accumulation of progress.

Key elements missing from model-related research today are: (1) a framework that allows many more people to work on, and assemble results that accumulate in, the same model; such a framework would not only expand the opportunities for more researchers to be involved but also allow a broader range of expertise to be applied to the work than is currently possible (the current inaccessibility of comprehensive models to most researchers also applies to the larger datasets that are becoming available); (2) a method for identifying the "most important" problems; (3) a well-understood pathway from process-representation concept to implementation in the various models, with sufficient personnel working on each step; (4) a testing infrastructure that evaluates a process not only in stand-alone mode but also within the whole-system model; (5) a set of testing procedures that account for the effects of the coupling of processes; and (6) a method for evaluating the "sufficiency" of both the observations and the model (when are they "good enough"?).

Rationale for This Modeling Activity Being a NASA Program

The primary reason why NASA should try this “different” approach to climate model development is that the most crucial tests of the whole-system model that are required are those that necessarily employ long-term, global datasets that quantify the weather-scale variability, datasets that will come primarily from satellite observations. Thus, NASA is well-positioned to better integrate the model development with observational analyses.

Elements Necessary for Progress

In addition to supplying the missing elements, other specific developments that are needed include development of: (1) a common set of analyzed observations that describe the long-term, global behavior of the climate system and key relationships among the thermodynamic/dynamic state variables in sufficient detail to determine the basic energy cycle of the system, (2) techniques for identifying the model problems that most affect the fidelity of its representation of climate and these key relationships (also used to support program planning), (3) advanced, common analysis tools that emphasize diagnosis of the key relationships to be applied to observations to support development of physical parameterizations and to model output for testing, (4) a set of test metrics that provide a performance standard to measure progress, (5) a method to determine the key physics that all models must incorporate accurately rather than collecting a poorly understood “black box” of implementations of complex parameterizations and (6) a critical mass of scientists actually working on cloud parameterizations for each type of model.

The key to the design of this “different” approach is to provide procedures to accumulate progress, both in terms of demonstrating model improvements in a rigorous manner and in terms of possessing at least one climate model that embodies all of the state-of-the-art understanding of climate physics available at a given time.

The Modeling Framework

A. CMAI Framework

CMAI will be used as a prototype for this “different” type of modeling activity. The funded group of researchers will work as part of a team coordinated by the Program office and interacting with the two NASA-provided “facilities” discussed below.

The team members will be responsible for four developments: (1) determining observation needs and requirements, (2) developing common analysis tools for application to observations and model outputs, (3) developing and investigating new ideas for model representations of clouds and precipitation processes and (4) implementing these ideas within the NASA model(s).

B. Common Datasets and Analysis Tools

(DIME = new laboratory microphysical measurements, field studies and long-term site measurements, weather-scale variations of atmospheric state and motions, up to decadal-scale determinations of the energy and water cycle)

As the research proceeds, needed observational data products (if available) will be supplied through the DIME mechanism. This mechanism provides data products and analysis tools/results in a common framework for all research activities, so that other modeling studies can also benefit from these collections. Team members will need to identify useful datasets for their work and help develop useful analysis methods. Once identified, both the observations and the analysis tools will be provided through DIME; in some cases, the results of analysis will be provided as products. The team members will also help define more generally useful combinations of observations and analyses that can serve as metrics of model performance. In particular, application of key analyses to long-term global observations can be used to generalize results from specific case studies or field experiments.

C. Common Models

(parcel microphysics model, LES/CRM, WRF, fvGEOS, MMF, ModelE)

At least three of the needed model types are available for analysis, development and testing. The WRF model is a limited-domain model that can be used as a "cloud-resolving" model or as a regional weather model. The fvGEOS is a new modular weather AGCM. ModelE is a new modular coupled atmosphere-ocean climate model. The team members are expected to implement new physical process representations in these models with support from CMAI liaison staff at the GMAO units, hired specifically for this purpose. A state-of-the-art parcel microphysics model and an LES model incorporating state-of-the-art cloud physics still need to be selected or developed.

D. Metrics

Model performance metrics based on observations are needed for several purposes during the development of a model. Development involves a repeating cycle of evaluation and model modification. Metrics help evaluate the model performance with an emphasis on identifying the most important remaining problems. Metrics help evaluate whether a particular process representation change has been effective in reducing the problem. Metrics provide an overall measure of the model fidelity and reliability and help establish program goals. Sometimes a more physically realistic representation of a process can degrade model performance relative to a certain metric, which will require use of CMAI resources to investigate the interactions between cloud processes and other components of the system.
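
One simple way to make "is it better?" quantitative, sketched below under stated assumptions, is a distribution-overlap score comparing a modeled frequency distribution against its observed counterpart (1 = identical histograms, 0 = no overlap). The function name and binning choice are illustrative, not part of the CMAI plan:

```python
import numpy as np

def distribution_skill(model_vals, obs_vals, bins=20):
    """Overlap of modeled and observed frequency distributions.

    Returns 1.0 for identical histograms, 0.0 for disjoint ones.
    Illustrative metric; bin count is an arbitrary choice here.
    """
    lo = min(np.min(model_vals), np.min(obs_vals))
    hi = max(np.max(model_vals), np.max(obs_vals))
    edges = np.linspace(lo, hi, bins + 1)          # common bins for both datasets
    pm, _ = np.histogram(model_vals, bins=edges)
    po, _ = np.histogram(obs_vals, bins=edges)
    pm = pm / pm.sum()                             # normalize to relative frequencies
    po = po / po.sum()
    return float(np.minimum(pm, po).sum())         # shared area of the two histograms
```

A revised process representation would count as an improvement against this kind of metric if it raises the score for the target quantity without lowering it for the others being tracked.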

A specific goal of this program (as a prototype) will be to develop metrics that can be used to stimulate improvements in model representations of cloud and precipitation processes and to measure progress in understanding cloud/precipitation behavior within the weather-climate system.

Some Examples that Could be Used as Specifics for Focus

A. CMAI Framework (Elements of Proposals)

(1) Define observation needs and requirements through specific investigations involving both models and current observations. Examples are studies that identify observational tests of a model that discriminate between poorer and better representations of a process, and that determine a model's dependence on the specific processes that explain its behavior characteristics from weather scale to climate scale. Some examples are as follows. Model representations of cloud-precipitation processes imply many attributes beyond the average precipitation amount, including threshold cloud properties at the onset of different kinds of precipitation (different meteorology), frequency of occurrence of precipitation as a fraction of cloud occurrence frequency, frequency distributions of precipitation rate, and vertical profiles of latent heating and convective intensity. Model studies could identify the features of the model representation that control these particular attributes and then compare them with the observed properties. At weather scales, models predict a frequency distribution of storm (convective and baroclinic) strengths that can be observed; investigations could identify the factors controlling the model's prediction for comparison with observations. Climate model sensitivity can be related to particular choices of representation of cloud, precipitation and radiation processes; if the leading determinants are identified by model experiments, then the fidelity of these relationships can be checked as they operate on shorter, observable time scales (interannual to inter-decadal). Models can also be used to test the effects of different characteristics of the observing system; for example, how does limited space-time sampling or coverage, or differing sensitivity to specific cloud types, affect the conclusions?
Model studies can be conducted to provide tests that discriminate between different representations of the same process, either in the same kind of model, or across the different types of models used at different scales; such a discrimination must be defined in terms of or related to an observable quantity or behavior. Such tests must be devised for process-scale through decadal time scales.
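
As an illustration of the precipitation attributes listed above (precipitation occurrence as a fraction of cloud occurrence, and the frequency distribution of precipitation rate), the following sketch computes both from synthetic data; all variable names, thresholds and the random draws are hypothetical stand-ins for real model output or observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical hourly grid-cell diagnostics (illustrative only):
cloud_frac = rng.uniform(0.0, 1.0, size=n)            # cloud fraction [0, 1]
precip_rate = np.where(rng.uniform(size=n) < 0.3,     # rain ~30% of the time
                       rng.gamma(shape=0.5, scale=2.0, size=n),
                       0.0)                           # mm/hr, mostly zero

cloudy = cloud_frac > 0.5      # assumed "cloud occurrence" criterion
raining = precip_rate > 0.1    # assumed precipitation-detection threshold

# Frequency of precipitation as a fraction of cloud occurrence:
precip_given_cloud = (np.count_nonzero(raining & cloudy)
                      / np.count_nonzero(cloudy))

# Frequency distribution of precipitation rate (raining samples only),
# on logarithmic rate bins from 0.1 to 10 mm/hr:
hist, edges = np.histogram(precip_rate[raining], bins=np.logspace(-1, 1, 11))
```

Applied identically to model output and to observations, these two diagnostics expose biases that the mean precipitation amount alone would hide.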

(2) Develop common analysis tools for application to both observations and model outputs that determine relationships among key quantities from the process level to the whole-climate-system level. These analysis tools should not only quantify relationships among variables (multi-variate, non-linear) but also diagnose the higher-level workings of the climate system by determining exchanges and transports of energy, momentum and mass (including various tracers, especially water vapor) and by elucidating feedback relationships at time scales from the process scale to the decadal scale. Some examples are as follows. Available observations now provide quantitative information on the exchanges of energy, momentum and mass (water vapor) among the main components of the climate system, so that straightforward comparisons of model versions of these exchanges and their variations can be performed. Even more useful would be model studies identifying the key processes controlling these exchanges and transports in such a way as to suggest further analyses of the observations to confirm the importance of these processes to the observed exchanges. Statistical compositing of multi-variate relationships by meteorological state allows for testing of a model's equivalent statistics as well as of the model's representation of each state and its frequency of occurrence. Associated diabatic heating and atmospheric motions provide a statistical estimate of dynamical feedbacks by cloud processes at weather scales that can be observed and compared with the modeled relationships. Other advanced (multi-variate, non-linear) mathematical methods for the analysis of complex systems, available in other research fields but not yet applied in ours, could also be adapted and tested for use in elucidating these relationships.
Particularly important for weather and climate are analysis methods that consider relationships that are non-local in space and time and can account for both chaotic and periodic behavior.
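
The statistical compositing described in (2) can be sketched as follows: samples of a cloud variable are sorted into bins of a meteorological state variable (here a synthetic vertical-velocity proxy) and averaged within each bin, giving one composite value per dynamical regime. The regimes, variables and data below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical co-located samples (not a real dataset): a vertical-velocity
# proxy (hPa/day; negative = ascent) and a cloud-cover value that, by
# construction here, increases with ascent.
omega = rng.normal(0.0, 50.0, n)
cloud_cover = np.clip(0.5 - 0.004 * omega + rng.normal(0.0, 0.1, n), 0.0, 1.0)

# Composite cloud cover by dynamical regime: six equal omega bins.
edges = np.linspace(-150.0, 150.0, 7)
idx = np.digitize(omega, edges) - 1               # bin index 0..5 inside range
composite = np.array([cloud_cover[idx == k].mean()
                      for k in range(len(edges) - 1)])
```

Comparing the composite curve (and the frequency of occurrence of each regime) between model and observations separates errors in the cloud response to dynamics from errors in the simulated dynamics itself.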

(3) Develop ideas for new model representations of cloud-precipitation-aerosol processes and define metrics for discriminating tests at all scales of representation. The key feature of investigations of this type is that the representation of a process should be applicable to model runs of durations from seasonal/interannual to centennial/millennial on NASA computers and testable at more than one modeling scale, i.e., at both the individual cloud scale and the cloud ensemble scale of a GCM grid cell. Moreover, the investigation must develop observational tests for the new representation that discriminate it from previous representations. Some key examples are as follows. Present-day global models produce most of their precipitation from convective clouds treated in separate subroutines that do not (usually) include explicit water budget equations or aerosol effects, whereas cloud-resolving models do (though often crudely) represent water budgets for all cloud types. The goal is a single set of cloud process equations used for all cloud types, but the necessary differences in representation between different types of models need to be tested explicitly across the whole suite of model types and against observations. Forthcoming information from CloudSat/Calipso, together with the rest of the "A-train" constellation, provides new information about the details of cloud vertical structure and its variations with weather and seasons, so models will now have to be revised to represent such structure in more detail and tested against these observations.

(4) Implement and test existing parameterization ideas in the whole range of models. This type of study especially applies to the use of small-scale process models to test general circulation model representations directly. The latter usually involve approximations to account for the fact that the smaller-scale atmospheric motions are not explicitly represented (they, too, are parameterized), whereas the former determine these motions explicitly. This type of investigation is not a new idea, but the problems are that the small-scale models often represent other physics (including cloud microphysics and radiation) with less sophistication than the GCM, even though their dynamics is better determined, and that modelers working with one type of model are usually unable to work with the other type. This type of investigation requires multiple model types to be used together and must also define observational tests for comparing the process representation at all scales.

(5) Liaison teams at NASA modeling centers should be available to interact directly with other CMAI investigators. These people will be responsible for developing an overall understanding of the existing model physics and diagnostics, will assist CMAI team members in implementing new candidate physics into the NASA models, and will have prime responsibility for conducting parallel tests of the models that mimic observational analysis approaches developed by the CMAI investigators.

B. Common Datasets and Analysis Tools

(1) Key processes: (A) Parcel microphysics (especially for ice and mixed phases): laboratory work on the physics of ice particle formation and mixed-phase particle interaction; also laboratory work to support development of new in situ measurement tools for UAVs to measure ice and mixed phase cloud particles. (B) Boundary layer turbulence including shallow convection: field experiments specifically designed to test LES representations but with moist processes, 3D radiation and land surface inhomogeneity effects accounted for. More turbulence statistics may be obtained from properly instrumented long-term surface sites. (C) Deep convection: field experiments are limited in their ability to observe the interaction of deep convection with the larger-scale circulation, especially for mesoscale convective complexes, so special intensive datasets combining TRMM, CloudSat, Calipso with high resolution geostationary cloud measurements, surface precipitation radar networks and weather analyses are needed for true multi-scale studies.

(2) Key weather characteristics (TRMM – A-train with CloudSat/Calipso –> PPM/NPOESS): Diagnosis of the distribution of precipitation rate and cloud/precipitation vertical structure with variations of larger-scale meteorological state. Analyses of the variations of clouds, precipitation and radiation with weather state (storm strength).

(3) Key climate datasets: Characterization of the weather-to-decadal scale variations of the global atmospheric energy and water cycle: there are two directly forced modes of variability – diurnal and seasonal – and two hydrodynamical instability modes – convective and baroclinic – that must be characterized specifically. Longer-term modes that arise from the coupling of the atmosphere and ocean, such as the MJO, QBO, ENSO, AO and PDO, must all be characterized as well.

(4) Analysis tools: Multi-variate compositing by meteorological state or "event" type. Other advanced relational statistics (e.g., neural networks used as non-linear, multi-variate statistical fits).

(5) Specifics of DIME (data infrastructure): Foundation: meteorological state case studies providing model input-output datasets for "regional"-sized domains for time periods from days to seasons, along with climatological statistics for the same cloud-type cases. All of these datasets are online in a common (very simple) format. Links will be added to key field and surface-site collections and to satellite-based compilations or other combined-instrument analyses that provide comprehensive analyses or diagnoses of behavior. All such datasets and analysis tools needed by or produced by CMAI investigators will be supported to provide wider access. The idea is to identify key observational tests of model physics and performance: the datasets needed for such tests will be obtained and provided through the DIME online system.

C. Common Models

(1) Process-to-regional-weather-scale model (parcel microphysics, LES/CRM, WRF)

(2) Global-weather-to-short-term-climate-variation model (fvGEOS, MMF)

(3) Short-term-to-long-term-climate-variation model (ModelE)

D. Metrics (should answer the question – is it better?)

(1) Process representation metrics: (i) threshold cloud properties (average particle size, layer thickness, water path/content) for precipitation onset (liquid, mixed, ice), (ii) precipitation rate distribution as function of temperature and meteorology, (iii) threshold effective vertical velocity and humidity for cloud formation.
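
A minimal sketch of metric (i), assuming synthetic data: the threshold liquid water path (LWP) for precipitation onset is estimated as the first LWP bin in which the precipitation frequency exceeds 50%. The 150 g/m^2 transition and all names below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical cloud samples: LWP in g/m^2, plus a synthetic precipitation
# flag whose probability rises sharply above ~150 g/m^2 (by construction).
lwp = rng.uniform(0.0, 400.0, n)
p_rain = 1.0 / (1.0 + np.exp(-(lwp - 150.0) / 20.0))
raining = rng.uniform(size=n) < p_rain

# Precipitation frequency in 25 g/m^2 LWP bins:
edges = np.arange(0, 401, 25)
freq = np.array([raining[(lwp >= a) & (lwp < b)].mean()
                 for a, b in zip(edges[:-1], edges[1:])])

# Onset threshold: left edge of the first bin with frequency >= 50%.
onset_lwp = edges[:-1][np.argmax(freq >= 0.5)]
```

Applying the same estimator to model output and to satellite or surface-site retrievals yields directly comparable onset thresholds, stratified if desired by temperature or phase (liquid, mixed, ice) as metric (i) calls for.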

(2) Weather forecast metrics: (i) cloud water and precipitation distributions, (ii) storm strength evolution as function of humidity – diabatic heating rate distributions, (iii) frequency of occurrence of storms and precipitation events by strength, (iv) space-time distribution of cloud properties within storms, (v) heat and water exchanges with surface and within atmosphere.

(3) Climate representation metrics, both “performance (responses) & behavior (characteristics)” and key relationships (feedbacks and sensitivities) from control run: (i) diurnal, intraseasonal, seasonal and interannual variations of clouds, precipitation and radiation, (ii) statistics of convection by depth/strength (including associated cloud properties and diabatic heating), (iii) statistics of baroclinic eddies by strength (including associated cloud properties and diabatic heating), (iv) exchange rates of heat and water with surface and transports by atmosphere and ocean, (v) mean spatial distributions.

(4) Climate forecast metrics (determine sensitivity and leading feedbacks from state differences): (i) specified SST (impulsive) changes (±2 & 4 K, homogeneous and with a specified equator-to-pole gradient), (ii) ±2 & 4% solar constant (impulsive) change experiments, (iii) doubled & halved CO2 transient change experiments.


