As ice sheets lose mass at increasing rates, scientists are growing increasingly concerned that portions of these massive reservoirs of frozen water are poised to begin irreversibly retreating [Cornford et al., 2015; DeConto et al., 2021]. To adapt to the ensuing changes along shorelines, authorities responsible for coastal planning and climate mitigation efforts need actionable sea level rise projections. However, recent studies using climate and ice sheet models are, more and more often, coming to very different conclusions about future rates of sea level rise and even about the sensitivity of ice sheets to future warming [DeConto et al., 2021; Edwards et al., 2021].
How can climate scientists help decisionmakers navigate vague or conflicting information to develop practical response strategies in the face of large uncertainties? One solution that may provide needed clarity is to change our emphasis from what we do not know to what we do know.
Large discrepancies among model projections of long-term sea level rise have spawned calls among the scientific community for scientists to work on reducing uncertainty. However, focusing on uncertainty is a trap we must avoid. Instead, we should focus on the adaptation decisions we can already make on the basis of current models and on communicating and building confidence in models for longer-term decisions.
The Folly of Focusing on Uncertainty
Emphasizing uncertainty is misguided for two main reasons. First, a growing body of research shows that providing uncertainty estimates to decisionmakers actually decreases the usability of climate projections [Lemos and Rood, 2010]. This is partly because it isn’t always clear how best to incorporate uncertainty into planning. Do we plan for the most likely projection of sea level rise, knowing the protections we put in place may be inadequate, or do we plan for the most extreme sea level projection despite the additional cost to do so? The planning process is complex, with uncertainty in global sea level projections being just one of many factors decisionmakers must consider. For example, investing in protections against sea levels that won’t be experienced for 70 years may not seem pressing when people can’t leave their homes because of air quality concerns or can’t drink tap water because it is contaminated. Furthermore, future planning and infrastructure decisions must directly confront the inequitable practices that have long disadvantaged vulnerable and marginalized populations.
Second, although models provide a murky picture of the magnitude of sea level rise that will occur by the end of the century, estimates of what will happen in the next few decades are much clearer. This clarity is important because the most pressing adaptation decisions facing communities now—related to addressing both climate vulnerabilities and historical inequities—primarily reflect needs on decadal, not centennial, timescales. So rather than stressing distant targets that are elusive and evolving, communities need help to be successful in adapting to near-term climate risks.
Planning for shorter-term sea level rise doesn’t mean ignoring the specter of more substantial sea level rise farther down the road, and there is still a need for longer-term climate and sea level projections. For example, adaptation decisions such as where to place infrastructure designed to last more than a century (e.g., new sewer lines) involve significant immediate costs and call for information about long-term as well as short-term change.
But committing to adaptation measures across the board on the basis of unclear long-term projections is like planning a dinner party years in advance: It’s good to think ahead, but it might be premature to buy the groceries. Moreover, sea level rise is not like a tsunami that will suddenly inundate coastlines (although it may seem that way when sea level rise conspires with storm surges to flood communities). Rates of sea level rise, even at the extremely high end, are measured in centimeters per year. Given the reality that sea levels will rise in the near term, plans today can focus on changes expected over the next decade or two and can then be adapted as more nebulous longer-term changes come into focus.
Uncertainty, Confidence, and Skill
Climate and ice sheet model projections increasingly diverge from one another farther in the future—reflecting uncertainty—as physical processes and conditions that we have not observed before occur in climates far different from those we have experienced in modern times. The inherent problem with this divergence, however, is not the magnitude of the uncertainty, but the resulting lack of confidence that the models have the necessary skill to represent the underlying physics responsible for change, especially rapid change.
A common method for estimating uncertainty used by climate and ice sheet modelers is to examine the spread in sea level rise projections associated with a suite of different ice sheet models driven by the same input climate forcing. Each model simulates the same system but varies slightly in how it is constructed and initialized. This approach is a little like looking at the varying answers to an exam question by a group of students. The students may take different approaches to the question, resulting in a range of answers, although hopefully most of them get something close to the right answer.
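The ensemble approach described above can be sketched in a few lines. The model names and projection values below are purely illustrative placeholders, not results from any real ice sheet model; the point is only to show how an ensemble spread is computed from a suite of projections driven by the same forcing.

```python
import statistics

# Hypothetical end-of-century sea level rise contributions (meters)
# from a suite of ice sheet models run under identical climate forcing.
# All names and values are invented for illustration.
projections = {
    "model_A": 0.45,
    "model_B": 0.52,
    "model_C": 0.61,
    "model_D": 0.38,
    "model_E": 0.97,  # an outlier, e.g., a model with extra instability processes
}

values = list(projections.values())
mean_rise = statistics.mean(values)              # the "most likely" answer
spread = max(values) - min(values)               # crude uncertainty range
std_dev = statistics.stdev(values)               # sample standard deviation

print(f"Ensemble mean: {mean_rise:.2f} m")
print(f"Ensemble spread (max - min): {spread:.2f} m")
print(f"Standard deviation: {std_dev:.2f} m")
```

Note that a spread computed this way only measures disagreement among the models in the suite; as the surrounding text argues, it says nothing about processes all of the models might be missing.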
But what happens if the question asked involves material that wasn’t taught? Well, students (and models) can still figure out the right answer if it is based on physical principles that the students (and models) have learned. In the case of climate and ice sheet models, some of these principles, like conservation of mass and conservation of momentum, are established and always apply. But others are simply working hypotheses called parameterizations.
Parameterizations attempt to represent complex processes using simpler representations that rely on tunable numerical values (parameters) to define a system and how the system evolves. But many parameters can take on a range of values that are poorly constrained, giving rise to a wider spread of potential model outcomes. For example, part of the uncertainty around the projected fate of the Antarctic Ice Sheet involves a controversial recent hypothesis about a process called marine ice cliff instability, which suggests that ice cliffs that form where glaciers flow into the ocean can become structurally unstable if the cliffs grow too tall [Bassis et al., 2021; Bassis and Walker, 2012; DeConto and Pollard, 2016]. It’s thought that under certain conditions, such instability could cause a runaway domino effect of ice loss and rapid sea level rise. However, this process has not been observed in nature, and current models either do not include ice cliff failure at all or rely on empirical parameterizations based on modern Greenland glaciers [DeConto and Pollard, 2016].
Another way to estimate uncertainty is to explore the range of simulated model outcomes associated with different parameters or parameterizations. The challenge here is that parameterizations are often tuned to represent physical processes as they have been observed in modern times. As climate change continues to expose ice sheets to conditions outside the range for which we have modern observations, existing parameterizations may no longer realistically represent the intended process. Similarly, if processes that have yet to be observed, like marine ice cliff instability, become important, estimates of uncertainty from models may no longer represent reality. Crucially, including more processes in models—especially those for which we have limited observations—to increase model accuracy is likely to increase uncertainty in longer-term sea level rise projections, at least until the processes are better understood.
So how do we know when a model is complex enough in the physics it includes that we can rely on its projections of the future under conditions very different from today’s? Answering this question comes down to two related concepts: model confidence and model skill.
Confidence reflects an assessment (qualitative or quantitative) of whether we believe that the physics and hypotheses underpinning models are fundamentally correct. Beyond being correct, model hypotheses must be complete enough that the models still produce accurate outcomes even when pushed outside the set of conditions, or regime, for which they have been calibrated. For example, we must be able to predict reliably when and how quickly ice breaks and crumbles before we can confidently predict the role of marine ice cliff instability in future sea level rise. Building confidence in models thus requires using them to make—and then test—predictions. Model skill is a measure of how accurately models have predicted past changes. Higher model skill results in greater confidence, but boosting model skill is no easy feat.
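One common way to put a number on skill is a skill score that compares a model hindcast's errors against those of a naive reference forecast, such as climatology. The observations, hindcast values, and baseline below are made up for illustration; the skill-score formula itself (1 minus the ratio of mean squared errors) is a standard construction.

```python
# Illustrative skill score: 1 is a perfect hindcast, 0 means no better
# than the climatological baseline, and negative values are worse than it.
# All data values here are invented for the sketch.

def mse(pred, obs):
    """Mean squared error between predictions and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

observed  = [3.1, 3.4, 3.2, 3.8, 4.0]   # e.g., observed rates (mm/yr)
hindcast  = [3.0, 3.5, 3.3, 3.6, 4.1]   # model retrodiction of the same period
reference = [3.5] * len(observed)       # climatological mean as the baseline

skill = 1 - mse(hindcast, observed) / mse(reference, observed)
print(f"Skill score: {skill:.2f}")      # closer to 1 => more confidence
```

A high score on one regime (e.g., the modern satellite era) does not by itself establish confidence for very different regimes, which is exactly the gap the surrounding text highlights.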
Building Confidence Through Failure
Our record of modern observations of ice sheet change is relatively short, dating back to the beginning of the satellite era in the 1970s, and ice sheet models don’t have a long track record of predicting rapid changes. In 2002, the Larsen B ice shelf on the Antarctic Peninsula disintegrated in less than 6 weeks, an unprecedented—and unpredicted—pace [Banwell et al., 2013]. When this happened, the flow of the tributary glaciers that fed it accelerated, providing clear evidence that the ice shelf had been buttressing the grounded ice behind it [Berthier et al., 2012; Scambos et al., 2004] and proving that ice shelves play a critical role in regulating ice sheet discharge. At the time of the collapse, though, ice sheet modelers were still debating whether large-scale instabilities could occur, and the potential for such a rapid process was not accounted for in models [Hindmarsh and Le Meur, 2001].
Significant advances continue to be made in the capability of models to recreate past sea levels, but this capability by itself provides little guidance on whether the models fundamentally represent physical processes correctly. Building confidence in models—and showing that they have the skill needed to represent rapid ice sheet change accurately—will require synthesizing a wider set of observations (beyond just past sea levels) that allows us to test models’ abilities to represent key processes across many different regimes.
The growing catalog of observed Greenland glacier behavior, for example, can be used to test models [Catania et al., 2020]. Ongoing changes in Antarctica, like weakening of the Thwaites Ice Shelf and retreat of floating portions of the Pine Island glacier, may also provide opportunities to test whether current models can represent substantial ice retreat or collapse. Beyond the short modern observation period, paleorecords that show changes over much longer timescales can provide additional clues about past ice sheet instabilities and responses across a wide spectrum of climate forcing. Neither modern nor paleo data sets are sufficient by themselves, but piecing them together provides a richer, wider scope of conditions with which to test models and ferret out where they misbehave.
And the way to test models and increase confidence in their sea level projections is not to tune them to reproduce certain observations. It is instead—although this may sound paradoxical—to find examples where the models fail to reproduce observations. Identifying model failures is key to making improvements because it highlights processes that are either incorrectly represented or absent entirely in the models. Correcting these deficiencies results in a slow and steady march toward models—whether machine learning based or physics based—that accurately incorporate more of the fundamental physics involved in influencing climate, ice, and sea level. This approach of finding and fixing failures is necessary to build confidence that models will produce realistic predictions when challenged with conditions significantly different from today’s.
Confidence on a Practical Scale
Sea level rise projections that extend to the end of this century and into the next may be uncertain. But this uncertainty isn’t a bad thing for science or for adaptation planning. The current divergence among model predictions is actually a good sign because it means that scientists are probing different parameterizations, representations of processes, and hypotheses. Some of these may eventually be abandoned, but others will evolve and become widely adopted across models because of their improved predictive capabilities.
The skill of models in predicting sea level change on decadal timescales is high, and we already have actionable projections on these timescales. We should be emphasizing that fact in discussions with community members, stakeholders, and decisionmakers, so they can move ahead with important adaptation and mitigation planning. These adaptation decisions need to be initiated now while scientists simultaneously continue to work toward model improvements.
In the short term, making these improvements is likely to increase uncertainty in projections of future sea level rise as we probe a wider suite of processes and conditions. But the increase in uncertainty will come with increased confidence that models aren’t missing critical physics. And this increased confidence is far more useful in developing long-term strategies for adaptation than worrying about all the things we still don’t know.
Banwell, A. F., D. R. MacAyeal, and O. V. Sergienko (2013), Breakup of the Larsen B Ice Shelf triggered by chain reaction drainage of supraglacial lakes, Geophys. Res. Lett., 40(22), 5,872–5,876, https://doi.org/10.1002/2013GL057694.
Bassis, J. N., and C. C. Walker (2012), Upper and lower limits on the stability of calving glaciers from the yield strength envelope of ice, Proc. R. Soc. A, 468(2140), 913–931, https://doi.org/10.1098/rspa.2011.0422.
Bassis, J. N., et al. (2021), Transition to marine ice cliff instability controlled by ice thickness gradients and velocity, Science, 372(6548), 1,342–1,344, https://doi.org/10.1126/science.abf6271.
Berthier, E., T. A. Scambos, and C. A. Shuman (2012), Mass loss of Larsen B tributary glaciers (Antarctic Peninsula) unabated since 2002, Geophys. Res. Lett., 39, L13501, https://doi.org/10.1029/2012GL051755.
Catania, G. A., et al. (2020), Future evolution of Greenland’s marine-terminating outlet glaciers, J. Geophys. Res. Earth Surf., 125(2), e2018JF004873, https://doi.org/10.1029/2018JF004873.
Cornford, S. L., et al. (2015), Century-scale simulations of the response of the West Antarctic Ice Sheet to a warming climate, Cryosphere, 9(4), 1,579–1,600, https://doi.org/10.5194/tc-9-1579-2015.
DeConto, R. M., and D. Pollard (2016), Contribution of Antarctica to past and future sea-level rise, Nature, 531(7596), 591–597, https://doi.org/10.1038/nature17145.
DeConto, R. M., et al. (2021), The Paris climate agreement and future sea-level rise from Antarctica, Nature, 593(7857), 83–89, https://doi.org/10.1038/s41586-021-03427-0.
Edwards, T. L., et al. (2021), Projected land ice contributions to twenty-first-century sea level rise, Nature, 593(7857), 74–82, https://doi.org/10.1038/s41586-021-03302-y.
Hindmarsh, R. C. A., and E. Le Meur (2001), Dynamical processes involved in the retreat of marine ice sheets, J. Glaciol., 47(157), 271–282, https://doi.org/10.3189/172756501781832269.
Lemos, M. C., and R. B. Rood (2010), Climate projections and their impact on policy and practice, Wiley Interdiscip. Rev. Clim. Change, 1(5), 670–682, https://doi.org/10.1002/wcc.71.
Scambos, T. A., et al. (2004), Glacier acceleration and thinning after ice shelf collapse in the Larsen B embayment, Antarctica, Geophys. Res. Lett., 31, L18402, https://doi.org/10.1029/2004GL020670.
Jeremy Bassis (firstname.lastname@example.org), Department of Climate and Space Sciences, University of Michigan, Ann Arbor