The Big Stupid
The fun has not yet ended, and we will continue discussing “greenhouse defects”, but the time has come to get serious, dead serious, on the subject of ECS estimates. Insights have consequences, and all the mistakes “climate science” has made with the GHE translate into ill-fated climate projections. No supercomputer in this world can fix stupid, and so we are going to see how climate projections are off by large margins, as they are rotten at their core. Take this as an introduction.
Let me give you a few quotes to illustrate one important issue.
Huang et al (2016)1:
While current atmospheric general circulation models (GCMs) still treat the surface as a blackbody in their longwave radiation scheme, recent studies suggest the need for taking realistic surface spectral emissivity into account.
Depending on the blackbody surface assumptions used in the original calculation, the globally averaged difference caused by the inclusion of realistic surface emissivity ranges from −1.2 to −1.5 W m−2 for clear-sky OLR and from −0.67 to −0.94 W m−2 for all-sky OLR
..and Wijngaarden, Happer (2018)2:
Over most of the Earth’s surface the thermal infrared surface emissivity is observed to be in the interval [0.9 to 1]. Negligible error is introduced by setting surface emissivity = 1 in spectral regions of high atmospheric opacity..
Both quotes represent opposing positions, yet they also document how much common ground there is. The scientific discussion narrows in on ever more peculiar details, as the more significant parts of the science have all been “settled”. Surface emissivity is such a detail, one which until now has been largely ignored. And as the quotes above suggest, including accurate estimates would make some small, but essentially static, difference. Additional GHGs or rising temperatures might have some indirect effect on surface emissivity, and maybe that should be included in the models, but that is about it.
Imagine you tried to explain the dangers of radioactivity to a stone age man. He might ask: does it have claws and sharp teeth? You would say no. Well then, he would conclude, it cannot be so dangerous! But danger does not necessarily come from only one side, and there is a reason why most herbivores have a viewing angle of almost 360°. We have very similar issues in quality assurance, as things can fail in many different ways, usually many more than people can imagine.
Stalling, that is a loss of lift and the loss of control that follows from it, is a main concern in aviation, as it has caused a lot of fatal crashes. A flight software preventing it by pulling the nose down, even overruling the pilot's inputs, should make an aircraft a lot safer. What the Boeing engineers failed to see was that handing ultimate control to an equally fallible system created a far larger danger than safety benefit. It took two fatal crashes and 346 victims to learn this lesson. Such harsh evidence is hard to deny.
Now imagine a similar fatal flaw in “climate science”, where such evidence may never occur and the discussion of systemic problems is strictly suppressed, as “the science is settled” and everyone who disagrees is a “denier” who does not “believe” the “science”. From a quality assurance perspective such a situation is a disaster. Hygiene is a function of washing, not of the conviction not to stink. In fact, someone who is totally convinced he does not stink, so that he does not need to wash, will smell badly. Equally, when checks and balances are given up because “the science is settled”, it is a mandatory consequence that “the science” will stink. With this knowledge in mind, it was never a question to me whether “the science” was wrong, but only how.
Let us dive into the logic behind the Wijngaarden, Happer quote, as there are a couple of reasons why they think this way.
- Surface emissivity should only matter insofar as surface radiation goes right into space and is not absorbed anyway. This is true for the atmospheric window, which holds a relatively small share of the emission spectrum (25-30%).
- Such emission spectra usually show clear skies, while in reality clouds narrow the atmospheric window even further. Typical “radiation budget diagrams”, like the one by Kiehl/Trenberth, suggest only 40W/m2 of direct surface-to-space emission (though that figure is certainly too low). That makes only 16.7% of total TOA emissions (240W/m2).
- Surface emissivity is believed to be very close to 1 anyway, especially that of water.
- Combined, these views imply that any deviation from 1, say 0.95, would reduce total emissions by less than 1%.
- The status quo is naturally included in the satellite measurements made at TOA anyway.
- Finally, there is no solid reason to assume surface emissivity would change considerably due to global warming.
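The arithmetic behind this consensus view can be sketched in a few lines. A toy check, assuming (as the list above does) that emissivity only affects the ~40W/m2 escaping directly through the window, out of 240W/m2 total TOA emissions:

```python
# Toy check of the consensus view: if surface emissivity only matters for
# the ~40 W/m2 escaping directly through the atmospheric window, how much
# does a deviation from 1 change total TOA emissions?

OLR_TOTAL = 240.0   # total TOA emissions, W/m2 (figure from the text)
WINDOW = 40.0       # direct surface-to-space emission, W/m2 (Kiehl/Trenberth-style)

def toa_reduction(emissivity):
    """Reduction of TOA emissions if only the window term scales with emissivity."""
    return WINDOW * (1.0 - emissivity)

loss = toa_reduction(0.95)   # 2.0 W/m2
share = loss / OLR_TOTAL     # ~0.83%, i.e. "less than 1% all over"
print(f"loss = {loss:.1f} W/m2, i.e. {share:.2%} of TOA emissions")
```

This is exactly why, seen from TOA alone, the question looks negligible.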
With this perspective, surface emissivity really does look irrelevant for climate modelling or ECS estimates; it looks marginal three times over. Even if one of these assumptions were a little off, it should not make much of a difference overall. And so the typical and total ignorance of “climate scientists” on this question, which I have discussed here and here, becomes fairly understandable. Why would you waste time and resources on something that makes little difference in the end?
Of course the quotes above are nothing exceptional. Keep in mind that “climate scientists” communicate; they read each other's papers and align their positions, or should I say beliefs. Where there is dissent, it concerns only a few details, which may yet be significant; on 90%+ of the substance even critical “scientists” agree with the consensus, and this instance is an excellent example. Essentially everyone sees it that way, and even I have to agree: discussing surface emissivity has little merit with regard to emissions at TOA. But that is where my consent ends.
GHG forcings ARE a function of surface emissivity!
As should be obvious from the articles here, the role of GHGs is defined by the difference between emissions at the surface and at TOA. Any delta in surface emissions therefore has a leveraged effect on the GHE in total, and equally on that of individual GHGs. For instance, with emissions at TOA of 240W/m2 and surface emissions of 390W/m2, you have a GHE of 150W/m2. But if surface emissions are only 355W/m2, the GHE shrinks to 115W/m2. From this perspective surface emissivity is not dampened beyond relevance, but rather amplified: a 9% shrink of surface emissions results in a 23% shrink of the GHE!
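The leverage can be made explicit with a two-line calculation, using the figures from the paragraph above:

```python
# GHE = surface emissions minus TOA emissions. A modest change in surface
# emissions is amplified in the GHE, because the TOA side stays fixed.

OLR = 240.0  # TOA emissions, W/m2

def ghe(surface_emission):
    return surface_emission - OLR

g1 = ghe(390.0)                   # 150 W/m2
g2 = ghe(355.0)                   # 115 W/m2
drop_surface = 1 - 355.0 / 390.0  # ~9% shrink in surface emissions
drop_ghe = 1 - g2 / g1            # ~23% shrink in the GHE
leverage = drop_ghe / drop_surface  # = 390/150 = 2.6
print(f"surface -{drop_surface:.1%}, GHE -{drop_ghe:.1%}, leverage {leverage:.1f}x")
```

The leverage factor is simply the ratio of surface emissions to GHE, here 390/150 = 2.6.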
What is true for the “base magnitude” of any radiative effect of a GHG is necessarily equally true for the underlying function and its slope. Let me demonstrate this with the radiative effect of CO2, since I have discussed the non-overlapped, or net, size of its GHE in a previous article. Of course not only surface emissivity matters, but also the overlap with other GHGs and clouds.
In this way the base magnitudes, the GHEs of the respective GHGs, directly translate into the forcing (or feedback..) should their concentrations change. Equally, every flaw in the attribution of the GHE will cause an error in ECS estimates, and as I have shown, there are a lot of flaws. All those blunders of GHE theory have equivalents in climate models, unsurprisingly I should add. I mean, would anyone really believe it was possible to model the climate without understanding the basics?
The atmosphere as a whole only adds about 8K to the surface temperature. Sure enough this figure can change when GHG concentrations change, but the margin will naturally be very restricted. Then we have the original sin of “climate science”: ignoring real-life surface emissivity and overlap issues altogether. There are some instances where these questions get touched upon a little, like the attribution of the GHE to individual GH factors. But even then it is done inconsequentially and amateurishly.
For instance Gavin Schmidt et al (2010)3 start with a GHE of 155W/m2, then attribute only 22.4W/m2 of it to clouds and are left with 132.6W/m2 to distribute among the respective GHGs. Given an actual GHGE of only ~85W/m2, that is a 56% exaggeration. Interestingly the paper acknowledges the existence of a far larger gross CRE of 56W/m2, and thus a ratio of gross to net CRE of 2.5, which is not far off my estimate of 2.6. But since they assume such a small net CRE, while badly exaggerating GHGs, they equally avoid most of the overlap between clouds and GHGs.
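The bookkeeping here can be re-run directly. Note the ~85W/m2 net GHGE and the 2.6 ratio are this article's own earlier estimates, not figures from the paper:

```python
# Re-running the attribution arithmetic of Schmidt et al (2010) as quoted above.

TOTAL_GHE = 155.0   # W/m2, Schmidt et al starting point
NET_CRE   = 22.4    # W/m2, their net cloud contribution
GROSS_CRE = 56.0    # W/m2, gross cloud effect acknowledged in the paper
NET_GHGE  = 85.0    # W/m2, the author's own estimate of the actual net GHGE

ghg_share = TOTAL_GHE - NET_CRE       # 132.6 W/m2 left for GHGs
exaggeration = ghg_share / NET_GHGE - 1  # ~56% vs the ~85 W/m2 estimate
cre_ratio = GROSS_CRE / NET_CRE       # 2.5 gross-to-net, vs the author's 2.6
print(f"GHG share {ghg_share} W/m2, exaggeration {exaggeration:.0%}, CRE ratio {cre_ratio}")
```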
The next mistake is in the way they try to account for overlaps. Rather than acknowledging the redundant nature of such overdetermined effects, which do not allow for clean logical attribution, they simply split up causation.
“Following KT97, given an overlap between two absorbers, an obvious allocation is to split the difference, i.e., if 5% of the net LW radiation could be absorbed either by water vapor or CO2, then each is allocated 2.5%. For triple overlaps, a third is apportioned to each absorber.”
Since overlapped GHEs are not altered by changes in the quantity of one of the agents, such an approach can lead to substantial modelling errors. However, it does not really matter since, and that is the weird part, estimates of forcings, like that of 2xCO2, are obviously all based on the erroneous gross GHEs anyhow, instead of the correct net GHEs they should be based on. So the overlaps effectively get thrown out of the window.
We are going to deal with these questions in detail, but let me give a little preview. For instance, if CO2 doubles from 400 to 800ppm, this gives about 4W/m2 of forcing over a perfectly emitting surface, without considering overlaps. If you then assume a somewhat lower surface emissivity of 0.97, a quite common and yet of course wrong assumption, this figure drops to about 3.7W/m2. If you allow for the overlap of CO2 with vapor, like Wijngaarden, Happer (2018) did, the forcing drops further to only 3W/m24.
Regrettably you cannot cherry-pick which parts you are willing to include. Rather, it is mandatory to include ALL relevant factors to make a reasonable estimate, and to do so correctly. If we do that, if we take a realistic surface emissivity of 0.895 within the CO2 range and consider overlaps with vapor AND clouds, the 2xCO2 forcing is only about 2W/m2 (I am still working on specifying this figure). All other things held constant, this single detail cuts all ECS estimates in half, as all feedbacks work relative to this base magnitude. Or to put it alternatively, it is one single reason why ECS estimates run hot by a factor of 2. And I can already tell you it is NOT the only flaw pointing in this direction.
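The chain of corrections sketched above can be written as a sequence of steps. The intermediate figures are those quoted in the text, and the 3K consensus ECS is only an illustrative assumption; treat this as arithmetic bookkeeping, not a derivation:

```python
# Step-by-step reduction of the canonical 2xCO2 forcing, using the figures
# quoted in the text. Each value is taken as given, not derived here.

F0 = 4.0   # W/m2, 2xCO2 over a perfectly emitting surface, no overlaps
F1 = 3.7   # after assuming surface emissivity 0.97 (common, but wrong per the text)
F2 = 3.0   # after CO2/vapor overlap (Wijngaarden, Happer 2018)
F3 = 2.0   # after realistic emissivity 0.895 plus vapor AND cloud overlaps

# Since feedbacks work relative to this base magnitude, ECS scales by the same ratio:
scaling = F3 / F0                        # 0.5, i.e. "cut in half"
ecs_consensus = 3.0                      # K, a typical headline ECS (assumption)
ecs_corrected = ecs_consensus * scaling  # 1.5 K
print(f"forcing {F0} -> {F3} W/m2, ECS {ecs_consensus} -> {ecs_corrected} K")
```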
The funny part is, it is a dark secret hidden in plain sight. You can ask modtran what a doubling of CO2 will do to surface temperatures. Again, for approximation, just use “1976 U.S. Standard Atmosphere”, add some CRE as “Stratus… Top 2.0km”, double CO2 to 800ppm, check “relative humidity” to hold it constant, and add just enough temperature so that emissions are the same as before (242.91W/m2). You may want to try it for yourself, but I can tell you 0.78K will do the trick. That is 0.78K of warming for doubling CO2, including the vapor feedback, only about a quarter of average ECS estimates.
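As a back-of-the-envelope cross-check on that 0.78K, one can compare it with the no-feedback Planck response, ΔT ≈ F / (4σT_eff³), fed with the overlapped ~3W/m2 forcing quoted earlier. This is a toy estimate, not a MODTRAN run, and it lands in the same ballpark:

```python
# No-feedback Planck response as a sanity check on the MODTRAN result.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m2/K^4

OLR = 242.91             # W/m2, TOA emissions from the MODTRAN setup above
T_eff = (OLR / SIGMA) ** 0.25          # effective emission temperature, ~255.8 K
planck_lambda = 4 * SIGMA * T_eff**3   # ~3.8 W/m2 per K of warming

F = 3.0                  # W/m2, overlapped 2xCO2 forcing quoted earlier
dT = F / planck_lambda   # ~0.79 K, same ballpark as MODTRAN's 0.78 K
print(f"T_eff = {T_eff:.1f} K, lambda = {planck_lambda:.2f} W/m2/K, dT = {dT:.2f} K")
```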
Of course one could explain this away with “modtran is stupid”, which it is. And certainly it is not perfect; there are plenty of issues to be discussed. But “stupid” in this instance is not a bad thing, as modtran stupidly just adds up one and one, as it should. It is unable to add up overlapped forcings multiple times, as climate models wrongly do. Modtran is not perfectly right; for instance the uchicago version implicitly assumes a surface emissivity of 0.975, which cannot be altered. But unlike climate models, it is not perfectly wrong. And of course we understand why modtran behaves that way. It is not a mistake!
With this insight we can already conclude that the 1.5°C warming threshold of the Paris agreement can never be reached through CO2 emissions at all. No matter how much CO2 mankind emits, it will never have the power to warm the planet by 1.5°C. Also, to explain the warming of the last decades, we will have to find a different cause. More to come..
Climate science is based entirely on quantum physics.
Number one is black body radiation, defined by Planck's formula. Take a look at any textbook and you'll find two variants of Planck's formula: energy spectrum vs frequency and energy spectrum vs wavelength. Anybody can use Excel to plot black body radiation with both variants. Although the formulas are different, one would expect both functions to reach their maximum at the same place (frequency = light speed / wavelength). In fact the maxima differ by almost a factor of two. Which variant of Planck's formula is the correct one? Which one is in use by climate science? Just use the other formula and your result will be different.
Number two is the Stefan-Boltzmann law, which defines radiation energy vs temperature. The temperature of the Earth calculated by this formula is different from the real temperature; the difference is called the greenhouse effect. The Earth has an inner core with a temperature of 5700K. Where is the radiation energy from this core going? If we calculate the energy flux from the inner core using the Stefan-Boltzmann law, we get 2,200,000 Watt per square meter, compared to 240 from the Sun. Something is obviously terribly wrong here as well.
It is not a mistake but simply a perspective distortion. For instance, the wavenumber interval 500-600 cm⁻¹ represents wavelengths 16.67 to 20µm (or 10.8% of the radiation emitted @288K), while the interval 1000-1100 cm⁻¹ represents 9.09 to 10µm (5.9%), yet both have the same "width" in a wavenumber chart. Accordingly the first interval appears almost twice as tall in a wavenumber chart, while in a wavelength chart the peak indeed occurs at about 10µm.
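The factor-of-two puzzle raised in the comment is exactly this representation effect: the distributions are per unit wavelength vs per unit frequency, so their peaks (given by the two Wien displacement constants) land at different wavelengths for the same temperature. Neither variant is "wrong":

```python
# Wien displacement in the two representations of Planck's law.
# Peak of B_lambda: lambda_max = b_lambda / T, with b_lambda = 2.8978e-3 m K
# Peak of B_nu:     nu_max     = b_nu * T,     with b_nu     = 5.8789e10 Hz/K
C = 2.99792458e8   # speed of light, m/s
T = 288.0          # K, roughly the mean surface temperature

lam_peak = 2.897771955e-3 / T     # ~10.06 µm, peak of the wavelength form
nu_peak = 5.878925757e10 * T      # Hz, peak of the frequency form
lam_of_nu_peak = C / nu_peak      # ~17.7 µm, the frequency-form peak in wavelength terms
ratio = lam_of_nu_peak / lam_peak # ~1.76, the "almost two times" of the comment
print(f"{lam_peak*1e6:.2f} um vs {lam_of_nu_peak*1e6:.2f} um, ratio {ratio:.2f}")
```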
Carol McGrath at 23.08.2021