(p. A1) BOULDER, Colo.—For almost five years, an international consortium of scientists was chasing clouds, determined to solve a problem that bedeviled climate-change forecasts for a generation: How do these wisps of water vapor affect global warming?
They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements. They tested the equations, debugged them and tested again.
The scientists would find that even the best tools at hand can’t model climates with the sureness the world needs as rising temperatures impact almost every region.
When they ran the updated simulation in 2018, the conclusion jolted them: Earth’s atmosphere was much more sensitive to greenhouse gases than decades of previous models had predicted, and future temperatures could be much higher than feared—perhaps even beyond hope of practical remedy.
(p. A9) “We thought this was really strange,” said Gokhan Danabasoglu, chief scientist for the climate-model project at the Mesa Laboratory in Boulder, part of the National Center for Atmospheric Research, or NCAR.
. . .
As world leaders consider how to limit greenhouse gases, they depend heavily on what computer climate models predict. But as algorithms and the computers they run on become more powerful—able to crunch far more data and run more detailed simulations—that very complexity has left climate scientists grappling with mismatches among competing computer models.
While vital to calculating ways to survive a warming world, climate models are hitting a wall. They are running up against the complexity of the physics involved; the limits of scientific computing; uncertainties around the nuances of climate behavior; and the challenge of keeping pace with rising levels of carbon dioxide, methane and other greenhouse gases. Despite significant improvements, the new models are still too imprecise to be taken at face value, which means climate-change projections still require judgment calls.
“We have a situation where the models are behaving strangely,” said Gavin Schmidt, director of the National Aeronautics and Space Administration’s Goddard Institute for Space Studies, a leading center for climate modeling. “We have a conundrum.”
. . .
In its guidance to governments last year, the U.N. climate-change panel for the first time played down the most extreme forecasts.
Before making new climate predictions for policy makers, an independent group of scientists used a technique called “hindcasting,” testing how well the models reproduced changes that occurred during the 20th century and earlier. Only models that re-created past climate behavior accurately were deemed acceptable.
In the process, the NCAR-consortium scientists checked whether the advanced models could reproduce the climate during the last Ice Age, 21,000 years ago, when carbon-dioxide levels and temperatures were much lower than today. CESM2, the consortium’s updated model, and other new models projected temperatures much colder than the geologic evidence indicated. University of Michigan scientists then tested the new models against the climate of 50 million years ago, when greenhouse-gas levels and temperatures were much higher than today. The new models projected higher temperatures than the evidence suggested.
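The hindcasting check described above is, at bottom, a skill test: run a model over a period whose outcome is already known, score its output against the observed or reconstructed record, and keep only the models that score well. As a rough illustration of the idea (hypothetical data and acceptance threshold, not the consortium's actual validation workflow), a minimal Python sketch:

    # Illustrative sketch of a "hindcasting" skill test.
    # All data and the acceptance cutoff below are hypothetical.
    import numpy as np

    def hindcast_rmse(model_series, observed_series):
        """Root-mean-square error between a model hindcast and the observed record."""
        return float(np.sqrt(np.mean((model_series - observed_series) ** 2)))

    # Hypothetical 20th-century global temperature anomalies (deg C), 1900-1999.
    rng = np.random.default_rng(0)
    years = np.arange(1900, 2000)
    observed = 0.007 * (years - 1900) + rng.normal(0.0, 0.08, years.size)

    # Two hypothetical model hindcasts over the same period.
    model_a = 0.007 * (years - 1900) + rng.normal(0.0, 0.10, years.size)  # tracks the record
    model_b = 0.012 * (years - 1900) + rng.normal(0.0, 0.10, years.size)  # warms too fast

    THRESHOLD = 0.15  # hypothetical acceptance cutoff (deg C RMSE)
    for name, series in [("model_a", model_a), ("model_b", model_b)]:
        err = hindcast_rmse(series, observed)
        verdict = "acceptable" if err <= THRESHOLD else "rejected"
        print(f"{name}: RMSE = {err:.3f} deg C -> {verdict}")

A real evaluation compares many variables across many regions and eras; as the excerpt notes, a model can track the 20th century and still run too cold for the last Ice Age or too hot for the deep past.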
For the full story, see:
(Note: ellipses added.)
(Note: the online version of the story has the date February 6, 2022, and has the title “Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy.”)