For two decades, our best Earth system models have relied on a placeholder number that assumed forests had plenty of nitrogen to help them soak up carbon dioxide. That number, it turns out, was wrong. Models overestimated nitrogen in forests and underestimated it in agriculture. The totals looked right, so nobody checked the math, but the internal structure was fundamentally flawed. We have been banking on a phantom land carbon sink. Because forests actually have less nitrogen than modeled, they cannot absorb as much carbon dioxide as projected, meaning climate models may be overly optimistic about nature’s ability to slow the pace of warming.

This revelation comes from a new study that exposes a significant structural error in how we simulate the planet’s future. Science advances by refining estimates, yet sometimes a temporary number hardens into something that feels permanent. A cautious placeholder becomes a foundation for global projections. Its origins fade while its influence remains. It begins shaping the questions scientists ask and silencing the doubts it was meant to provoke.

We are now seeing the cost of one such hardened number. Kou-Giesbrecht and colleagues have shown that our reliance on a twenty-year-old estimate inflated the modeled nitrogen supply in forests. This concealed a crucial limit on the global carbon sink. We have been counting on a level of carbon absorption that natural ecosystems cannot provide. This matters because biological nitrogen fixation is a primary control on the climate system: in Earth system models it dictates how strongly land ecosystems can buffer future carbon dioxide emissions.

The researchers used new spatially explicit datasets to map the abundance of nitrogen-fixing plants across natural and agricultural systems. They rescaled global biological nitrogen fixation, the microbial conversion of atmospheric nitrogen into plant-usable forms. Their analysis revealed a major misallocation in nitrogen budgets that went unnoticed for decades. Earth system models have been overestimating natural nitrogen fixation by about thirty-five million metric tons of nitrogen per year while underestimating agricultural nitrogen fixation by about forty-six million metric tons per year.

The errors nearly canceled out numerically, which kept the global total looking correct. This is why the problem persisted. Once the totals added up there was little pressure to question whether the distribution was accurate. The flaw was invisible in the outputs because models that produce the right totals can still be built on incorrect internal assumptions.
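The near-cancellation is easy to see with rough numbers. The sketch below is a back-of-envelope check, not the study's bookkeeping: the component values are hypothetical placeholders, and only the two corrections (minus 35 million metric tons for natural systems, plus 46 for agriculture) come from the article.

```python
# Back-of-envelope check: compensating errors can leave a total looking right.
# Component values are illustrative placeholders, NOT the study's figures;
# only the corrections (-35 natural, +46 agricultural, in Mt N/yr) come from the text.

modeled = {"natural": 120, "agricultural": 50}   # hypothetical split of fixation
corrected = {
    "natural": modeled["natural"] - 35,          # natural was overestimated
    "agricultural": modeled["agricultural"] + 46 # agriculture was underestimated
}

old_total = sum(modeled.values())    # 170 with these placeholder inputs
new_total = sum(corrected.values())  # 181
print(f"total shifts by only {new_total - old_total} Mt N/yr, "
      f"while each component moves by 35-46 Mt N/yr")
```

With any placeholder split, the total moves by just 11 million metric tons, a few percent, while the individual components shift dramatically. A check on the total alone could never catch this.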

To understand how this happened, we need to return to 1999. That year, researchers published what became the definitive global synthesis of biological nitrogen fixation in natural ecosystems. Their work improved on earlier attempts by providing biome-specific averages rather than a single global mean. They gathered measurements from various locations and scaled them upward. Their method was careful, but the available data came from places where nitrogen fixers were easy to measure, locations where signals were strong but not necessarily representative of global behavior.

The original authors highlighted these uncertainties. The estimate was meant to serve as a starting point. But once published, the number was repeated and absorbed into global models. It began influencing nitrogen cycling routines and carbon sink projections. Models built on this estimate showed that there was plenty of nitrogen to fuel future carbon storage as carbon dioxide levels rose. Repeated often enough the provisional synthesis took on the appearance of a settled fact.


The problem emerged when the scientific community stopped asking how the number had been derived. The models’ coherence was arithmetic, not empirical. The illusion held together through compensating mistakes: natural ecosystems were assigned too much nitrogen, agriculture too little. Earth system models assumed there was enough nitrogen in forests to sustain a strong carbon dioxide fertilization effect. When the estimate is corrected, more than ten percent of the modeled land carbon sink disappears, because the supporting nitrogen was never there. This is a phantom sink, a structural overstatement of nature’s ability to absorb carbon.

Real-world measurements show that forest nitrogen inputs cannot sustain the carbon gains predicted by many models. The implications extend beyond overstated carbon sinks. The underestimation of nitrogen fixation in agriculture hides a large source of pollution: the models missed an uncounted nitrogen input of forty-six million metric tons per year, almost four times the nitrogen fertilizer applied annually in the United States.

This missing nitrogen drives environmental impacts that have been absent from climate simulations. A portion leaches into rivers. Another portion volatilizes and escapes as nitrous oxide. Nitrous oxide is a greenhouse gas with nearly three hundred times the warming power of carbon dioxide over a century. Using conservative emissions factors this missing nitrogen could account for roughly ten percent of global anthropogenic nitrous oxide emissions. If even a fifth of that nitrogen enters waterways it represents a large volume of untracked pollution.
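The "roughly ten percent" figure can be sanity-checked with a Fermi calculation. The sketch below assumes a direct emission factor of about one percent of applied nitrogen lost as nitrous oxide and a global anthropogenic nitrous oxide source of roughly 7 million metric tons of N per year; both are assumed round numbers for illustration, not figures from the study.

```python
# Fermi check on the nitrous oxide claim. The emission factor and the global
# anthropogenic total are assumed round numbers, not figures from the study.
missing_n = 46.0        # Mt N/yr, the uncounted agricultural fixation (from the article)
emission_factor = 0.01  # assumed: ~1% of nitrogen input lost directly as N2O-N
global_n2o_n = 7.0      # Mt N2O-N/yr, assumed global anthropogenic total

n2o_from_missing = missing_n * emission_factor  # ~0.46 Mt N2O-N/yr
share = n2o_from_missing / global_n2o_n         # fraction of the anthropogenic source
print(f"missing nitrogen implies ~{share:.0%} of anthropogenic N2O")
```

With these conservative assumptions the answer lands in the high single digits of percent, the same order of magnitude as the roughly ten percent cited above.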

The phantom carbon sink and the missing pollution reflect errors with real consequences. Earth system models guide emissions targets and shape nutrient policy. They support national adaptation plans. If these models misallocate nitrogen the mistakes move from technical issues to distortions of responsibility. Scaling is one of the most powerful acts in science because it determines which processes appear large and which processes disappear.

This is why scientists must defend their scaling choices. Approximation is not guesswork. It is a form of quantitative reasoning that exposes whether a variable matters enough to alter the system. These habits allow researchers to ask essential questions: if carbon enters plants, how much nitrogen is needed to support it, and where could that nitrogen come from?
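That question has a concrete stoichiometric form. As a hedged sketch, with assumed round numbers rather than the paper's figures: if the land sink stores on the order of 3 petagrams of carbon per year, and new biomass has a carbon-to-nitrogen ratio somewhere between 100:1 and 300:1, the implied nitrogen demand is a simple division.

```python
# Order-of-magnitude estimate of the nitrogen required to support a carbon sink.
# Both inputs are assumed illustrative values, not the study's numbers.
land_sink_pg_c = 3.0          # Pg C/yr, rough size of the land carbon sink (assumed)

for c_to_n in (100, 300):     # plausible C:N ratios for new biomass (assumed)
    n_demand_tg = land_sink_pg_c * 1000 / c_to_n   # 1 Pg = 1000 Tg
    print(f"C:N of {c_to_n}: ~{n_demand_tg:.0f} Tg N/yr of nitrogen needed")
```

The answer, roughly 10 to 30 teragrams of nitrogen per year, is the same scale as the 35-million-ton overestimate in natural fixation, which is exactly why the correction bites so hard on the modeled sink.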

This line of reasoning sharpens the problem by revealing what conditions must hold for the model to remain plausible. Estimation acts as a structural check. It tests whether assumptions translate into consequences that make sense. Scaling begins as a leap but becomes a long term obligation. Once a number is used it must be revisited as measurements improve.

Precise measurements can be ecologically hollow if they are not scaled correctly. A growth rate may describe an organism but until it is connected to land area and time it says nothing about global fluxes. Concepts without numbers remain hunches. Even a rough calculation provides boundaries that improve our understanding. Scientists must continually ask what their measurements would imply if extended across landscapes and time.
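The scaling step itself is a single multiplication; the point is that it must be made explicit so its assumptions can be challenged. A minimal sketch, with an assumed plot-scale fixation rate and a rough global forest area (neither figure is from the article):

```python
# Connect a local measurement to a global flux: rate times area.
# Both inputs are assumed round numbers for illustration only.
rate_kg_n_per_ha_yr = 10.0   # assumed plot-scale nitrogen fixation rate
forest_area_ha = 4.0e9       # ~4 billion hectares of global forest (rough)

global_flux_tg = rate_kg_n_per_ha_yr * forest_area_ha / 1e9   # kg -> Tg
print(f"implied global flux: {global_flux_tg:.0f} Tg N/yr")
```

If the plot happened to sit where nitrogen fixers are abundant, extrapolating its rate to every forested hectare produces exactly the kind of inflated global flux the article describes.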

When models simulate entire Earth systems the tools used to interrogate them must also scale upward. Simple checks remain necessary because they can reveal mismatches that complex models hide. Estimates remain alive as points of return, places where questions can be asked again. The new research shows how one such return exposes errors hidden by apparent coherence. The estimate from 1999 persisted because it fit the models. The totals aligned. The structure did not.

This discovery matters because the researchers tested multiple ways to represent nitrogen fixation in Earth system models. Each method creates a different amount of nitrogen available to plants as carbon dioxide rises. When models assume high nitrogen fixation in natural ecosystems plants show a strong response to elevated carbon dioxide. When fixation is limited by water or productivity the response weakens. This produces a forty percent swing in projected carbon dioxide fertilization based solely on modeling choices.
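The sensitivity to the fixation scheme can be shown with a toy calculation. Everything below is schematic: the potential uptake, the nitrogen supplies, and the carbon-to-nitrogen ratio are invented illustrative values, not the study's numbers. The idea is only that the available nitrogen caps how much extra carbon plants can actually store under elevated carbon dioxide.

```python
# Toy illustration of how the fixation scheme alters the CO2 fertilization
# response. All numbers are schematic assumptions, not model output.
def supportable_carbon(n_supply_tg, c_to_n=200):
    """Carbon storage a given nitrogen supply can support (Pg C/yr)."""
    return n_supply_tg * c_to_n / 1000   # Tg N -> Pg C at an assumed C:N ratio

potential_uptake = 4.0                   # Pg C/yr plants "want" under high CO2 (assumed)
schemes = {
    "abundant fixation": 30.0,           # Tg N/yr available under a generous scheme
    "limited fixation": 12.0,            # Tg N/yr under a water/productivity-limited scheme
}

for name, n_supply in schemes.items():
    realized = min(potential_uptake, supportable_carbon(n_supply))
    print(f"{name}: realized uptake {realized:.1f} Pg C/yr")
```

With these invented inputs, the limited scheme realizes 2.4 instead of 4.0 petagrams per year, a 40 percent reduction, the same scale of swing the study attributes to the choice of fixation scheme (though its actual numbers differ).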

To tighten the range of plausible outcomes the researchers used an emergent constraint by comparing model behavior with real world measurements. Models that overestimate nitrogen fixation tend to overestimate the land carbon sink. By comparing model spread with observational patterns they narrowed the future trajectories of carbon absorption.
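An emergent constraint works by finding, across a model ensemble, a relationship between a quantity observable today and a projected quantity, then using the real-world observation to pick out the plausible part of the model spread. The sketch below uses entirely synthetic "models" and an invented observational value; it illustrates the method, not the study's data.

```python
# Schematic emergent constraint: regress the projected sink on present-day
# nitrogen fixation across an ensemble, then evaluate at the observed value.
# Every number here is synthetic; this only illustrates the technique.
import numpy as np

# (present-day BNF in Tg N/yr, projected land sink in Pg C/yr) per "model"
bnf = np.array([80.0, 100.0, 120.0, 140.0, 160.0])
sink = np.array([2.1, 2.6, 3.2, 3.9, 4.4])  # more fixation -> bigger projected sink

slope, intercept = np.polyfit(bnf, sink, 1)  # linear fit across the ensemble
observed_bnf = 95.0                          # hypothetical observational estimate
constrained = slope * observed_bnf + intercept

print(f"unconstrained range: {sink.min():.1f}-{sink.max():.1f} Pg C/yr")
print(f"constrained estimate: {constrained:.1f} Pg C/yr")
```

Because models that overestimate present-day fixation also project larger sinks, anchoring the fit at the observed fixation value pulls the estimate toward the low end of the ensemble spread, exactly the narrowing described above.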

This correction to the global nitrogen budget exposes the cost of scientific inertia. Numbers repeated without reflection lose their connection to reality. They become fixtures in the system rather than questions to be tested. When the models inherit them the drift spreads across the entire structure. The new findings show the risk of trusting coherence over consequence. Estimates must be revisited because the stability of climate projections depends on their accuracy. We cannot plan for the future on numbers that balance mathematically while failing structurally.

Source:

Hungate, B. A. (2026). Phantom sinks and missing pollution: Legacies of a hardened number. PNAS.
https://doi.org/10.1073/pnas.2531787123
