NASA Modeling Framework
Cloud Modeling and Analysis Initiative as Prototype

Basics

The most basic tools for development and testing of climate models are a set of observations and the model code. It is usually assumed that, somehow, looking at (often averaged) observations will suggest ways to represent processes in the model, that comparisons of the modeled climate state against the observed climate state will indicate how similar the model is to reality, and that discrepancies between the two will suggest improvements that can be made to the model. In practice, these simplistic procedures do not work very well because the climate, and any model of it, is a very complex system. Instead, observations and model outputs have to be analyzed in much more depth and detail to separate and elucidate individual processes, and comparisons of much more than the mean state of the system are needed to diagnose problems and inspire improvements.

The complexity of the climate system ensures that these activities can only be carried out by a large team of researchers with a diverse range of expertise covering both data analysis and modeling, yet climate research is largely organized around individuals or small teams within larger institutions. Even in the few large modeling groups in the U.S. (NOAA NCEP and GFDL, NSF NCAR, NASA GMAO and GISS), work on parameterization development for a particular process, such as clouds and precipitation, rarely involves more than one or two people at a time. In other words, the number of researchers who can actually access and analyze sufficiently comprehensive datasets and conduct the needed range of experiments with sophisticated models of more than one type is too small.

Rationale for Designing a “Different” Modeling Activity

The very slow progress in developing specific process representations in climate models, and in demonstrating improvements to them, may be attributed to the predominance of this overly simple approach, necessitated in part by the absence of a critical mass of researchers within modeling groups focused on each key process and by a lack of direct involvement of the community that produces and analyzes observations. To push forward more aggressively, we need a different mode of working that complements the usual “discovery-driven” research by individuals and small teams. This different mode requires a team large enough to cover the scope of the problem and to provide the expertise and level of effort needed to analyze large datasets and to investigate and develop complicated code, a team that is nevertheless focused on a specific strategic plan that attacks the problems in a deliberate and rational manner, and a set of procedures that leads to a systematic accumulation of progress.

Key elements missing from climate-model-related research today are:

(1) a framework that allows many more people to work on, and assemble results from, a range of different model types (covering different space-time scales) and to accumulate the results in the same model. Such a framework would not only expand the opportunities for more researchers to be involved but also bring a broader range of expertise to the work than is currently possible; the current inaccessibility of comprehensive models to most researchers is mirrored by the inaccessibility of the larger datasets that are now becoming available.

(2) a tighter integration of observational analysis activities, both to inform the development and evaluation of the models and to exploit the models to refine observational requirements.

(3) a method for identifying the “most important” model problems.

(4) a well-understood pathway from process-representation concept to implementation in the various models covering the complete range of space-time scales, with sufficient personnel working on each step.

(5) a model-testing infrastructure that evaluates a process representation not only in stand-alone mode but also within the “whole-system” context where the process is coupled to many other processes, again employing a range of model types.

(6) a process to facilitate the transfer of new process representations and/or new modeling approaches among the range of model types.

(7) a set of testing procedures that accounts for the effects of the coupling of processes.

(8) a method for evaluating the “sufficiency” of both the observations and the model (how accurate are they, and when are they “good enough”?). Such benchmarks need to be explicitly tied to requirements for climate prediction accuracy that come from evaluations of the societal impacts of climate change.

Rationale for This “New” Modeling Activity Being a NASA Program

The primary reason why NASA should try this “different” approach to climate model development is that the most crucial tests of the whole-system model are those that necessarily employ long-term, global datasets quantifying weather-scale variability, datasets that will come primarily from satellite observations. NASA is well positioned to integrate model development more closely with observational analyses.

Elements Necessary for Progress

In addition to supplying the missing elements above, other specific developments are needed:

(1) a common set of analyzed observations that describes the long-term, global behavior of the climate system and the key relationships among the thermodynamic/dynamic state variables in sufficient detail to determine the basic energy cycle of the system,

(2) techniques for identifying the model problems that most affect the fidelity of its representation of climate and climate change, and the key relationships that can guide the definition of observation requirements (to support program planning),

(3) a full range of models, from cloud parcel physics to global climate models,

(4) advanced, common analysis tools that emphasize diagnosis of the key relationships, to be applied to observations and model output to support development of physical parameterizations and to model output for testing,

(5) a set of test metrics that provides a performance standard against which to measure progress,

(6) a method to determine the key physics that all models must incorporate accurately, rather than collecting a poorly understood “black box” of implementations of complex parameterizations, and

(7) a critical mass of scientists actually working on cloud parameterizations for each type of model.

The key to the design of this sophisticated approach is to provide procedures to accumulate progress, both in terms of demonstrating model improvements in a rigorous manner and in terms of possessing at least one climate model that embodies all of the state-of-the-art understanding of climate physics available at a given time.

The Modeling Framework

A. CMAI Framework

CMAI will be used as a prototype for this sophisticated type of modeling activity. The funded group of researchers will work as part of a team coordinated by the Program office and interacting with the two NASA-provided “facilities” discussed below.

The team members will be responsible for four activities: (1) determining observation needs and requirements, (2) developing common analysis tools for application to both observations and model outputs, (3) developing and investigating new ideas for model representations of cloud and precipitation processes, and (4) implementing these ideas within the NASA model(s).

B. Common Datasets and Analysis Tools

(DIME = new laboratory microphysical measurements, field studies and long-term site measurements, weather-scale variations of atmospheric state and motions, up to decadal-scale determinations of the energy and water cycle)

As the research proceeds, needed observational data products (if available) will be supplied through the DIME mechanism. This mechanism provides data products and analysis tools/results online in a common framework for all research activities, so that other modeling studies can also benefit from these collections. Team members will need to identify useful datasets for their work and help develop useful analysis methods. Once identified, both the observations and the analysis tools will be provided through DIME; in some cases, the results of analysis will be provided as products possibly residing in other NASA (and NOAA, DOE) centers but linked to the DIME website. The team members will also help define more generally useful combinations of observations and analyses that can serve as metrics of model performance. In particular, application of key analyses to long-term global observations can be used to generalize results from specific case studies or field experiments.

C. Common Models

(Parcel microphysics model, LES/CRM, WRF, Global CRM, fvGEOS, MMF, ModelE)

At least three of the needed model types are available for analysis, development and testing. The WRF model is a limited-domain model that can be used as a “cloud-resolving” model or as a regional weather model. A global CRM may become available soon. The fvGEOS is a new modular weather AGCM. MMF incorporates a CRM with the fv-GCM dynamics and is currently under development. ModelE is a new modular coupled atmosphere-ocean climate model. The team members are expected to implement new physical process representations in these models with support from CMAI liaisons at the GMAO, hired specifically for this purpose. A state-of-the-art parcel microphysics model and an LES model incorporating state-of-the-art cloud physics need to be selected or developed.

D. Metrics

Model performance metrics based on observations are needed for several purposes during the development of a model. Model development involves a repeating cycle of evaluation and modification. Metrics help evaluate model performance, with an emphasis on identifying the most important remaining problems; they help evaluate whether a particular change to a process representation has actually reduced a problem; and they provide an overall measure of model fidelity and reliability that helps establish program goals. Sometimes a more physically realistic representation of a process can degrade model performance relative to a particular metric, which will require the use of CMAI resources to investigate the interactions between cloud processes and other components of the system. At present, such metrics are poorly defined for the different types of models described above.

A specific goal of this program (as a prototype) will be to develop metrics that can be used to stimulate improvements in model representations of cloud and precipitation processes (including the role of aerosols) and to measure progress in understanding cloud/precipitation behavior within the weather-climate system.

Some Examples That Could Be Used as Specific Foci

A. CMAI Framework (Elements of Proposals)

(1) Define observation needs and requirements through specific investigations involving both models and current observations. Examples are studies that identify observational tests of a model that discriminate between poorer and better representations of a process, and that determine the model’s dependence on the specific processes that explain its behavior from weather scales to climate scales. Some examples follow. Model representations of cloud-precipitation processes imply many attributes beyond the average precipitation amount, including threshold cloud properties at the onset of different kinds of precipitation (in different meteorology), the frequency of occurrence of precipitation as a fraction of cloud occurrence frequency, frequency distributions of precipitation rate, and vertical profiles of latent heating and convective intensity; model studies could identify the features of the representation that control these particular attributes and then compare them with the observed properties. At weather scales, models predict a frequency distribution of storm (convective and baroclinic) strengths that can be observed; investigations could identify the factors controlling the model’s prediction for comparison with observations. Climate model sensitivity can be related to particular choices of representation of cloud, precipitation and radiation processes; if the leading determinants are identified by model experiments, then the fidelity of these relationships can be checked as they operate on shorter, observable time scales (interannual to inter-decadal). Models can also be used to test the effects of different characteristics of the observing system, for example, how limited space-time sampling or coverage, or differing sensitivity to specific cloud types, affects the conclusions. Model studies can be conducted to provide tests that discriminate between different representations of the same process, either in the same kind of model or across the different types of models used at different scales; such a discrimination must be defined in terms of, or related to, an observable quantity or behavior. Such tests must be devised for process-scale through decadal time scales.
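
As a concrete illustration of the kind of diagnostic envisioned here, the sketch below computes two of the attributes mentioned above, a precipitation-rate frequency distribution and the frequency of precipitation as a fraction of cloud occurrence, from co-located precipitation and cloud fields. The arrays, thresholds and bin choices are hypothetical placeholders, not a prescribed CMAI procedure; the same function would be applied identically to model output and to satellite retrievals.

    # Sketch: precipitation-rate frequency distribution and precipitating
    # fraction of cloudy scenes, applied identically to model and observations.
    # All inputs are hypothetical stand-ins for co-located gridded fields.
    import numpy as np

    def precip_statistics(precip_rate, cloud_flag, rain_threshold=0.1):
        """Return (bin edges, normalized rain-rate distribution when raining,
        fraction of cloudy scenes that are precipitating)."""
        raining = precip_rate > rain_threshold      # mm/hr threshold (an assumption)
        # Logarithmic bins capture the heavy tail of the rain-rate distribution.
        edges = np.logspace(-1, 2, 16)              # 0.1 to 100 mm/hr
        hist, _ = np.histogram(precip_rate[raining], bins=edges, density=True)
        cloudy = cloud_flag.astype(bool)
        frac = np.count_nonzero(raining & cloudy) / max(np.count_nonzero(cloudy), 1)
        return edges, hist, frac

    # Hypothetical usage with synthetic stand-in data.
    rng = np.random.default_rng(0)
    model_rr = rng.gamma(0.5, 2.0, size=100_000)    # stand-in rain rates (mm/hr)
    model_cld = rng.random(100_000) < 0.6           # stand-in cloud occurrence flag
    edges, pdf, frac = precip_statistics(model_rr, model_cld)
    print(f"precipitating fraction of cloudy scenes: {frac:.2f}")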

(2) Develop common analysis tools, for application to both observations and model outputs, that determine relationships among key quantities from the process level to the whole-climate-system level. These analysis tools should not only quantify relationships among variables (multi-variate, non-linear) but also diagnose the higher-level workings of the climate system by determining exchanges and transports of energy, momentum and mass (including various tracers, especially water vapor) and by elucidating feedback relationships at time scales from the process scale to the decadal scale. Some examples follow. Available observations now provide quantitative information on the exchanges of energy, momentum and mass (water vapor) among the main components of the climate system, so straightforward comparisons of model versions of these exchanges and their variations can be performed. Even more useful would be model studies identifying the key processes controlling these exchanges and transports in such a way as to suggest further analyses of the observations to confirm the importance of these processes to the observed exchanges. Statistical compositing of multi-variate relationships by meteorological state allows for testing of a model’s equivalent statistics, as well as of the model’s representation of each state and its frequency of occurrence. Associated diabatic heating and atmospheric motions provide a statistical estimate of dynamical feedbacks by cloud processes at weather scales that can be observed and compared with the modeled relationships. Other advanced (multi-variate, non-linear) mathematical analysis methods for complex systems, available in other research fields but not yet applied in ours, could also be adapted and tested for use in elucidating these relationships. Particularly important for weather and climate are analysis methods that consider relationships that are non-local in space and time and that can account for both chaotic and periodic behavior.
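
The compositing idea can be made concrete with a short sketch: below, a hypothetical cloud quantity is composited in bins of 500 hPa vertical velocity, a commonly used meteorological-state proxy. The variable names and synthetic samples are illustrative only; applying the same function to observed and modeled samples tests both the within-regime cloud statistics and the frequency of occurrence of each regime.

    # Sketch of statistical compositing by meteorological state: sort a cloud
    # quantity (a hypothetical cloud radiative effect, CRE) into bins of
    # omega500 and compare composites between observations and a model.
    import numpy as np

    def composite_by_state(state, quantity, edges):
        """Mean and sample count of `quantity` within each bin of `state`."""
        idx = np.digitize(state, edges)             # bin index for each sample
        means = np.full(len(edges) + 1, np.nan)
        counts = np.zeros(len(edges) + 1, dtype=int)
        for k in range(len(edges) + 1):
            sel = idx == k
            counts[k] = sel.sum()
            if counts[k] > 0:
                means[k] = quantity[sel].mean()
        return means, counts

    omega_edges = np.arange(-80, 81, 20)            # hPa/day; ascent < 0 < subsidence
    # Hypothetical samples; in practice these would be co-located reanalysis
    # omega500 and observed or modeled CRE on the same space-time grid.
    rng = np.random.default_rng(1)
    omega500 = rng.normal(0, 40, size=50_000)
    cre = -60.0 + 0.5 * omega500 + rng.normal(0, 15, size=50_000)
    means, counts = composite_by_state(omega500, cre, omega_edges)

Comparing a model’s `means` tests its cloud behavior within each regime, while comparing `counts` tests its representation of the frequency of occurrence of each state, exactly the two-part test described above.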

(3) Develop ideas for new model representations of cloud-precipitation-aerosol processes and define metrics for discriminating tests at all scales of representation. The key feature of investigations of this type is that the representation of a process should be applicable to model runs of durations from seasonal/interannual to centennial/millennial on NASA computers and testable at more than one modeling scale, i.e., at both the individual cloud scale and the cloud-ensemble scale of a GCM grid cell. Moreover, the investigation must develop observational tests for the new representation that discriminate it from previous representations. Some key examples follow. Present-day global models produce most of their precipitation from convective clouds treated in separate subroutines that do not (usually) include explicit water budget equations or aerosol effects, whereas cloud-resolving models do represent water budgets (though often crudely) for all cloud types; the goal is a single set of cloud process equations used for all cloud types, but the necessary differences in representation between different types of models need to be tested explicitly across the whole suite of model types and against observations. Forthcoming information from CloudSat/Calipso, together with the rest of the “A-train” constellation, provides new information about the details of cloud vertical structure and its variations with weather and season, so models will now have to be revised to represent such structure in more detail and be tested against these observations.

(4) Clouds are largely a direct result of the large-scale stratification and circulation variability on hourly to synoptic time scales. Thus, the value of direct comparisons between modeled and observed cloud occurrence and properties depends on knowledge of the atmospheric circulation. Global and regional weather analysis (and reanalysis) products are our best information about the atmospheric state and circulation, but their accuracy on the relevant time scales, particularly for the divergent component of the motions, is suspect in many regions that encompass important climate regimes. Without accurate analyses of the atmospheric circulation, direct comparisons of modeled and observed clouds are ambiguous because we do not know whether the fault lies with the circulation or with the cloud physics representation. It may still be possible to make statistical comparisons (indirect inferences), but it is a useful CMAI goal to establish, using advanced satellite sounders, the extent to which current weather analyses provide credible information on atmospheric stability and advective tendencies in regions where radiosondes are sparse or absent. For example, such studies might explore whether the analyses provide subsidence rates and inversion strengths in the marine stratocumulus regions that are realistic enough for a typical LES or CRM to produce realistic clouds when using the analysis as initialization and forcing. Another example is to explore whether the analysis results correctly indicate the observed transitions from suppressed to active deep convection in the tropics. Such successes in analysis might then serve as the basis for studies of how different types of models respond to the same advective forcing.
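
One simple, widely used measure of inversion strength that such a study might start from is the lower-tropospheric stability, LTS = θ(700 hPa) − θ(1000 hPa) (e.g., Klein and Hartmann 1993). The sketch below computes it from analysis temperatures; the input values are illustrative, not from any particular reanalysis.

    # Sketch: lower-tropospheric stability (LTS) as a check on whether an
    # analysis product yields realistic inversion strength in marine
    # stratocumulus regions. Input temperatures are hypothetical.
    KAPPA = 0.286  # R_d / c_p for dry air

    def potential_temperature(T, p_hPa, p0_hPa=1000.0):
        """Potential temperature (K) from temperature (K) and pressure (hPa)."""
        return T * (p0_hPa / p_hPa) ** KAPPA

    def lower_tropospheric_stability(T700, T1000):
        """LTS (K): theta at 700 hPa minus theta at 1000 hPa."""
        return potential_temperature(T700, 700.0) - potential_temperature(T1000, 1000.0)

    # A well-mixed ~290 K marine boundary layer under a warm free troposphere
    # gives an LTS near 20 K, typical of subtropical stratocumulus regimes.
    print(lower_tropospheric_stability(T700=280.0, T1000=290.0))   # ~20 K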

(5) Implement and test existing parameterization ideas across the whole range of models. This type of study especially applies to the use of small-scale process models to test general circulation model representations directly. The latter usually involve approximations to account for the fact that the smaller-scale atmospheric motions are not explicitly represented (they, too, are parameterized), whereas the former determine these motions explicitly. This type of investigation is not a new idea, but the problems are that the parameterizations (including cloud microphysics, turbulence and radiation) used in small-scale models are not directly validated with observations, even though their dynamics are better determined, and that modelers working with one type of model are usually unable to work with the other type. This type of investigation requires multiple model types to be used together and must also define observational tests for comparing the process representation at all scales. The Multi-scale Modeling Framework is one way to carry out this type of investigation.

(6) Liaison teams at NASA modeling centers should be created to interact directly with other CMAI investigators. These people will be responsible for developing an overall understanding of the existing model physics and diagnostics, will assist CMAI team members in implementing new candidate physics in the NASA models, and will have prime responsibility for conducting parallel tests of the models that mimic observational analysis approaches developed by the CMAI investigators. The liaison teams should include people responsible for each of the following tasks: (1) implementing analysis techniques that produce diagnostics from the model output analogous to those obtained from satellite (and other) observations and that have proven useful, (2) implementing new model physics that result from the data analysis and data-model comparisons, and (3) diagnosing the reasons for positive and negative changes in model performance when the new physics is implemented. The last of these tasks has often been overlooked but is crucial, because the cloud/convection parameterizations are coupled to other model processes and may be affected by them, even though those processes are not within the purview of CMAI. Since well-justified parameterization improvements may adversely affect model run time and negatively impact intended model applications, some work on optimization of parameterizations is also needed. Thus, CMAI activities will have to be coordinated with the PI of each MAP-funded model development to ensure that the outcomes are compatible with the PI’s plans for model usage.

In organizing the above framework, consideration must be given to programmatic links (the most important being with NEWS) and institutional links within NASA and between NASA and NOAA – most particularly CPT (possibly even international links through the GEWEX Cloud System Study).

B. Common Datasets and Analysis Tools

(1) Key processes at different cloud scales. (a) Parcel microphysics (especially for ice and mixed phases): laboratory work on the physics of ice particle formation and mixed-phase particle interaction; also laboratory work to support development of new in situ measurement tools for UAVs to measure ice and mixed phase cloud particles. (b) Boundary layer turbulence including shallow convection: field experiments specifically designed to test LES representations but with moist processes, 3D radiation and land surface inhomogeneity effects accounted for. More turbulence statistics may be obtained from properly instrumented long-term surface sites. (c) Deep convection: field experiments are limited in their ability to observe the interaction of deep convection with the larger-scale circulation, especially for mesoscale convective complexes, so special intensive datasets combining TRMM, CloudSat, Calipso with high resolution geostationary cloud measurements, surface precipitation radar networks and weather analyses are needed for true multi-scale studies.

(2) Key weather characteristics (TRMM → A-train with CloudSat/Calipso → GPM/NPOESS): Diagnosis of the distribution of precipitation rate and cloud/precipitation vertical structure with variations of the larger-scale meteorological state. Analyses of the variations of clouds, precipitation and radiation with weather state (storm strength, subsidence rate, static stability, wind shear, advective tendencies, etc.).

(3) Key climate datasets: Characterization of the weather-to-decadal-scale variations of the global atmospheric energy and water cycle. There are two directly forced modes of variability, diurnal and seasonal, and two hydrodynamical instability modes, convective and baroclinic, that must be characterized specifically. Longer-term modes arising from the coupling of the atmosphere and ocean, such as the MJO, QBO, ENSO, AO and PDO, must also be characterized.

(4) Analysis tools: Multi-variate compositing by meteorological state or “event” type. Other advanced relational statistics (e.g., neural networks used as non-linear, multi-variate statistical fits). Analysis of the processes that produce different cloud types, as defined by their physical properties, is also needed to stratify observations and model behavior by meteorological regime. Another possible analysis tool is a type of assimilation procedure that specifically diagnoses key processes and employs different kinds of constraints than those common in the weather-analysis version of this methodology.
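
As one illustration of the “neural networks as non-linear, multi-variate statistical fits” idea, the sketch below fits a small multi-layer perceptron that predicts a cloud quantity from several meteorological predictors. The predictors, synthetic data and network size are all assumptions for illustration; a real application would use co-located observations or model output.

    # Sketch: a neural network as a non-linear multi-variate statistical fit.
    # Synthetic placeholder data stand in for co-located observations.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n = 20_000
    X = np.column_stack([
        rng.normal(0, 40, n),    # hypothetical omega500 (hPa/day)
        rng.normal(20, 3, n),    # hypothetical lower-tropospheric stability (K)
        rng.normal(50, 20, n),   # hypothetical column relative humidity (%)
    ])
    # Synthetic non-linear "truth" standing in for an observed cloud quantity.
    y = (0.02 * X[:, 1] + 0.004 * X[:, 2] + 0.1 * np.tanh(X[:, 0] / 40.0)
         + rng.normal(0, 0.05, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = make_pipeline(
        StandardScaler(),        # scale predictors before the network fit
        MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     max_iter=2000, random_state=0))
    model.fit(X_tr, y_tr)
    print("out-of-sample R^2:", round(model.score(X_te, y_te), 3))

The out-of-sample skill of such a fit, applied separately to observed and modeled samples, is one way to compare the multi-variate relationships in the two without assuming linearity.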

(5) Specifics of DIME (data infrastructure): Foundation: meteorological-state case studies providing model input-output datasets for “regional”-sized domains, for time periods from days to seasons, along with climatological statistics for the same cloud-type cases. All of these datasets are online in a common (very simple) format. Links will be added to key field and surface-site collections and to satellite-based compilations or other combined-instrument analyses that provide comprehensive analyses or diagnoses of behavior. All such datasets and analysis tools needed by or produced by CMAI investigators will be supported to provide wider access. The idea is to identify key observational tests of model physics and performance; the datasets needed for such tests will be obtained and provided through the DIME online system.

C. Common Models

(1) Process-to-regional-weather-scale models (parcel microphysics, LES/CRM, WRF). To date, these types of models have been run primarily in case-study mode, which helps illuminate processes operating in a few specific situations but does not indicate whether the deduced behavior is sufficiently general to form the basis of a new parameterization in a global model. A notable exception is the MMF, which, in effect, runs a global collection of CRMs at each time step; its potential is not yet realized. To achieve the goal of the GEWEX Cloud System Study to have such models inform parameterization development, three things are needed: (a) improvement of the physics of this type of model (especially radiation and cloud microphysics), (b) application of such process models to a larger number of cases in multiple climate regimes, so that both the general aspects of cloud behavior and the dependence on regime-specific conditions become evident, and (c) a commitment to analyze the results of such models thoroughly at the domain-average or domain-aggregated level to make the link to larger-scale models more explicit. A proposal to create a “community” process-model-based product that makes high space-time resolution output from a large number of experiments available for analysis by other investigators would be one approach to realizing the mostly untapped potential of these types of models.
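
Point (c) above, analysis at the domain-aggregated level, amounts to coarse-graining high-resolution process-model output to the scale of a GCM grid cell. The sketch below shows the basic block-averaging step on a hypothetical 2-D field; the grid sizes and the variable are illustrative.

    # Sketch: block-average high-resolution CRM/LES output onto a coarse grid
    # comparable to a GCM grid cell, so process-model statistics can be compared
    # directly with a parameterization's grid-mean values. Inputs are synthetic.
    import numpy as np

    def block_average(field, factor):
        """Average a 2-D field over non-overlapping factor x factor blocks."""
        ny, nx = field.shape
        assert ny % factor == 0 and nx % factor == 0, "domain must tile evenly"
        return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

    def block_std(field, factor):
        """Within-block standard deviation: the sub-grid variability a
        grid-mean value discards and a parameterization must represent."""
        ny, nx = field.shape
        return field.reshape(ny // factor, factor, nx // factor, factor).std(axis=(1, 3))

    rng = np.random.default_rng(3)
    qc_highres = np.clip(rng.normal(0.05, 0.08, size=(256, 256)), 0, None)  # stand-in cloud water
    qc_gcm = block_average(qc_highres, factor=64)   # 4 x 4 coarse cells from 256 x 256 points
    print(qc_highres.shape, "->", qc_gcm.shape)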

(2) Global-weather-to-short-term-climate-variation models (fvGEOS, MMF, Global CRM). These models span the range of time scales for which most satellite datasets provide information, so they provide the most natural link to global datasets. Numerous examples of variability that long-term climate models have trouble simulating exist on the time scales appropriate to these types of models: (a) the diurnal cycle – for instance, most models predict the onset of precipitation over land hours earlier than observed, (b) the MJO – most global models produce tropical intraseasonal variability significantly weaker than observed and do not select the observed 30-60 day time scale to the extent that is observed, (c) ENSO – coupled atmosphere-ocean models produce a variety of ENSO-like SST anomalies, from too strong to too weak, and few faithfully reproduce the remote atmospheric response to ENSO. These important modes of variability might serve as an initial focus for exploring the extent to which a state-of-the-art model, combined with appropriately analyzed satellite data, might lead to advances in fundamental understanding, i.e., a set of physics concepts that all models can incorporate. The MMF is a potentially important testbed for such ideas, but the first order of business is to understand which aspects of the atmospheric physics it is suitable for (e.g., cumulus-scale dynamics in deep convection situations) and which cloud types it does not provide new useful information about (e.g., marine stratus, midlatitude frontal clouds).
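
A diurnal-cycle metric of the kind item (a) calls for can be as simple as the amplitude and phase of the first diurnal harmonic of a 24-hour precipitation composite; a model that rains too early over land then shows up directly as a phase error. The sketch below is a minimal version using a synthetic composite.

    # Sketch: first diurnal harmonic of a 24-hour precipitation composite.
    # The composite below is a synthetic placeholder peaking at 16 local time.
    import numpy as np

    def diurnal_harmonic(hourly_mean):
        """First-harmonic amplitude and hour of maximum for a 24-element composite."""
        t = np.arange(24)
        omega = 2 * np.pi / 24
        a = 2 * np.mean(hourly_mean * np.cos(omega * t))
        b = 2 * np.mean(hourly_mean * np.sin(omega * t))
        amplitude = np.hypot(a, b)
        peak_hour = (np.arctan2(b, a) / omega) % 24   # hour at which the harmonic peaks
        return amplitude, peak_hour

    t = np.arange(24)
    composite = 2.0 + 1.5 * np.cos(2 * np.pi * (t - 16) / 24)   # mm/hr stand-in
    amp, peak = diurnal_harmonic(composite)
    print(f"amplitude = {amp:.2f} mm/hr, peak at {peak:.1f} local time")
    # A model composite peaking near noon would show a ~4 h phase error here.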

(3) Short-term-to-long-term-climate-variation model (ModelE). Analysis of the higher-resolution global models described above can be used to develop a set of metrics for testing the fidelity of longer-term climate change simulation models, which usually employ coarser-resolution coupled atmosphere-ocean GCMs. Which metrics are useful predictors of climate change behavior has not been identified. A related question concerns the “non-universality” of parameterization improvements: there is a long history showing that advances developed by one modeling group do not translate into similar benefits when implemented in another model. This experience emphasizes the need to define progress in terms of physical process understanding rather than the development of code modules to be shared among all models. In turn, this requires model (and observation) analysis methods that diagnose the essential aspects of the processes, especially how they operate when coupled to other processes, and a set of performance metrics that all models must meet.

At least four of the needed model types are available for analysis, development and testing: WRF, fvGEOS, MMF and ModelE. A global CRM is “on the horizon.” CMAI team members are expected to work towards new physical parameterizations in these models with support from liaisons hired for each model for this purpose. State-of-the-art parcel microphysics and LES/CRM models need to be identified and/or developed.

D. Metrics (should answer the question – is it better?)

Climate prediction and observation metrics have evolved as a loose collection of intuitive measures of model performance against a set of signals in the climate system (e.g., diurnal cycle, MJO, seasonal cycle, QBO, ENSO, NAO, PDO, etc.) and mean climate state parameters (e.g., global distributions of temperature, humidity, winds). Climate models are also characterized by sensitivity tests – how the model responds to changes in parameterizations and how the model’s climate responds to specified (sometimes standard) forcing changes. These metrics have yet to provide a standard judgment of model performance, analogous to weather prediction skill scores, that can be used to measure model improvements.
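
For contrast, the sketch below shows the kind of standard score weather prediction does have: the anomaly correlation coefficient between a forecast and the verifying analysis, both taken as departures from a climatology. The fields are synthetic placeholders; the point is only that climate modeling currently lacks an agreed analogue of so simple a scalar score.

    # Sketch: anomaly correlation coefficient (ACC), a standard weather
    # prediction skill score. All fields below are hypothetical stand-ins.
    import numpy as np

    def anomaly_correlation(forecast, verification, climatology):
        """Centered anomaly correlation over a flattened field."""
        fa = (forecast - climatology).ravel()
        va = (verification - climatology).ravel()
        fa -= fa.mean()
        va -= va.mean()
        return float(fa @ va / (np.linalg.norm(fa) * np.linalg.norm(va)))

    rng = np.random.default_rng(4)
    clim = rng.normal(280, 10, size=(90, 180))        # stand-in climatology (K)
    truth = clim + rng.normal(0, 2, size=clim.shape)  # verifying analysis
    fcst = truth + rng.normal(0, 1, size=clim.shape)  # imperfect forecast
    print(f"ACC = {anomaly_correlation(fcst, truth, clim):.3f}")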

In contrast to weather prediction skill, which depends on the accuracy with which the initial state can be specified and the accuracy of the model physics (especially dynamics), climate prediction depends almost entirely on the boundary conditions and the accuracy of the model physics, especially those processes that control the global energy and water cycle. Also, assessing climate prediction skill can depend on the space-time scales of climate characteristics that are examined. The basic dilemma is that the available observation record is too short either to evaluate the effects of averaging to different space-time scales or to provide a direct comparative test of model performance. Consequently, we cannot directly determine the importance to climate predictions of different observations in Observing System Simulation Experiments nor have we identified the model aspects that are most critical to the accuracy of its climate prediction. Even the intrinsic limits to prediction accuracy are not known. Hence, developing a sufficient set of climate model metrics is indirect at best.

Suggested below are some examples of metrics, at a range of space-time scales, that might be relevant to evaluating how well a model represents the cloud processes that determine the impact of clouds on the global energy and water cycle and the role of cloud feedbacks in determining the climate’s natural variability and its sensitivity to forced changes.

(1) Process representation metrics: (i) threshold cloud properties (average particle size, layer thickness, water path/content) for precipitation onset (liquid, mixed, ice) – see the sketch after this list, (ii) precipitation rate distribution as a function of temperature and meteorology, (iii) threshold effective vertical velocity and humidity for cloud formation.

(2) Weather forecast metrics: (i) cloud water and precipitation distributions, (ii) storm strength evolution as function of humidity – diabatic heating rate distributions, (iii) frequency of occurrence of storms and precipitation events by strength, (iv) space-time distribution of cloud properties within storms, (v) heat and water exchanges with surface and within atmosphere.

(3) Climate representation metrics, both “performance (responses) & behavior (characteristics)” and key relationships (feedbacks and sensitivities) from control run: (i) diurnal, intraseasonal, seasonal and interannual variations of clouds, precipitation and radiation, (ii) statistics of convection by depth/strength (including associated cloud properties and diabatic heating), (iii) statistics of baroclinic eddies by strength (including associated cloud properties and diabatic heating), (iv) exchange rates of heat and water with surface and transports by atmosphere and ocean, (v) mean spatial distributions.

(4) Climate forecast metrics (determine sensitivity and leading feedbacks from state differences): (i) specified SST (impulsive) changes (±2 and 4 K, homogeneous and with a specified equator-to-pole gradient), (ii) ±2 and 4% solar constant (impulsive) change experiments, (iii) doubled and halved CO2 transient change experiments.
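
As an illustration of how metric (1)(i) might be made operational (the sketch referred to in the list above), the following code estimates a threshold liquid water path for precipitation onset as the point where the probability of precipitation in a water-path bin first exceeds 50%. The data, bins and 50% criterion are all assumptions for illustration, not a prescribed definition.

    # Sketch: threshold liquid water path (LWP) at precipitation onset from
    # hypothetical co-located LWP/rain-flag samples.
    import numpy as np

    def onset_threshold(lwp, raining, edges, pop_level=0.5):
        """LWP bin center where probability of precipitation first exceeds pop_level."""
        idx = np.digitize(lwp, edges) - 1
        centers = 0.5 * (edges[:-1] + edges[1:])
        for k, c in enumerate(centers):
            sel = idx == k
            if sel.any() and raining[sel].mean() >= pop_level:
                return c
        return np.nan

    rng = np.random.default_rng(5)
    lwp = rng.gamma(2.0, 60.0, size=50_000)           # g/m^2, stand-in retrievals
    # Synthetic "truth": POP rises smoothly with LWP around ~150 g/m^2.
    raining = rng.random(lwp.size) < 1 / (1 + np.exp(-(lwp - 150.0) / 30.0))
    edges = np.arange(0, 401, 25.0)
    print(f"onset LWP ~ {onset_threshold(lwp, raining, edges):.0f} g/m^2")

The same diagnostic applied to model output and to satellite retrievals yields a directly comparable scalar, which is the sense in which these process metrics can “answer the question – is it better?”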

CMAI Workshop Agenda

Day 1, Session 1: Describe purposes of meeting: (1) find out planned research, (2) discuss rationale and proposed framework for research outlined in white paper, (3) identify missing elements, (4) define specific activities.

Day 1, Session 2: PI descriptions of how proposed research fits within and benefits from proposed framework

Day 1, Session 3: PI descriptions of how proposed research fits within and benefits from proposed framework

Day 1, Session 4: PI descriptions of how proposed research fits within and benefits from proposed framework

Day 2, Session 1: PI descriptions of how proposed research fits within and benefits from proposed framework

Day 2, Session 2: Discussion of goals that might be achieved with this combination of work, identify missing activities

Day 2, Session 3: Discussion of Framework White Paper (Questions 1, 2 and 3)

Day 2, Session 4: Discussion of Framework White Paper (Questions 4 and 5)

Questions for Discussion at the CMAI Workshop

1. Does this White Paper identify the key obstacles to progress? If not, which should be removed, which added?

2. Does this White Paper suggest an approach that is both workable and likely to overcome these obstacles? If not, how should this framework be altered?

3. The CMAI PIs are not likely to exercise all of the elements of this framework in the first 3-yr period, so which ones should be given the most attention now?

4. If the CMAI PIs can see the benefit of working within this framework, what facilities and personnel need to be in place to make it work?

5. What are the specific activities that the PIs will undertake within this framework?
