CIRC Rationale

What is the rationale behind CIRC?

CIRC is intended as an evolving, regularly updated, and permanent reference source for the evaluation of GCM-type radiative transfer (RT) codes that will help in the continued improvement of RT parameterizations. CIRC seeks to establish itself as the standard against which to document code performance in scientific publications and in coordinated joint modeling activities such as GCM intercomparisons or ensemble climate change predictions. While it is understood that the CIRC reference calculations reflect current spectroscopic knowledge and may themselves be imperfect, our intent is to update them when algorithmic or database improvements become available and to expand them with new cases as part of future phases of the effort. Feedback from participants, users of the dataset, and the atmospheric RT modeling community at large is essential for maintaining and enhancing the continuous nature of the CIRC effort.

A feature that distinguishes CIRC from previous intercomparisons, such as the Intercomparison of Radiation Codes in Climate Models (ICRCCM), is that its pool of cases is largely based on observations. Atmospheric and surface input, as well as the radiative fluxes used for consistency checks, come primarily from measurements of the Atmospheric Radiation Measurement (ARM) Climate Research Facility and satellite observations that are compiled in the Broadband Heating Rate Profile (BBHRP) product. Additional datasets beyond BBHRP, such as measurements from ARM field campaigns and spectral radiances from the AERI instrument, are also used to complete the set of desired cases and to ensure the quality of the input. For Phase I, CIRC aims to assess the baseline errors of GCM RT codes and therefore provides test cases that evaluate performance under the least challenging conditions, i.e., well-understood clear-sky cases and homogeneous, single-layer overcast liquid cloud cases. Future phases will add greater variety and complexity to the atmospheric description, or (depending on the lessons learned from Phase I and feedback from participants) may even include simpler, perhaps synthetic, idealized experiments.

Model validation

The goal of CIRC at this first stage is to document the performance of the participating models relative to the LBL standards. Ultimately, however, model performance should be critically evaluated in terms of the accuracy needed to meet operational GCM requirements for current and future climate simulations and for comparisons with observations. As submissions come in and we gain a better understanding of how each algorithm has processed the provided input to perform the runs, we plan to define performance targets that modelers can use to assess their own codes. Since the participating algorithms may be tweaked to accommodate CIRC input and output requirements, performance evaluations may not reflect actual performance in a typical operational environment. It will therefore be valuable to receive multiple submissions in which different interpretations or processing of the provided input are used (see also our submission page).
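As an illustration of what documenting performance against the LBL reference might involve, the following minimal Python sketch computes the mean bias and root-mean-square difference of a participating code's broadband fluxes relative to the corresponding reference values for a set of cases. The procedure, variable names, and numbers below are purely hypothetical and are not prescribed by CIRC.

    # Illustrative sketch only; assumes one broadband flux value per case (W m^-2)
    # for both the participating code and the LBL reference.
    import numpy as np

    def flux_errors(model_flux, lbl_flux):
        """Return (mean bias, RMS difference) of model fluxes vs. LBL reference."""
        diff = np.asarray(model_flux, dtype=float) - np.asarray(lbl_flux, dtype=float)
        return diff.mean(), np.sqrt((diff ** 2).mean())

    # Hypothetical downwelling surface SW fluxes for three cases (W m^-2):
    model_sw_dn = [512.3, 488.7, 430.1]
    lbl_sw_dn   = [510.8, 491.2, 428.5]
    bias, rmse = flux_errors(model_sw_dn, lbl_sw_dn)
    print(f"bias = {bias:+.2f} W m^-2, rmse = {rmse:.2f} W m^-2")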


Comments?

Please send Lazaros Oreopoulos your ideas and suggestions on how to evaluate model performance with the CIRC dataset and how to make CIRC succeed in meeting RT modeling needs.