Over at the Watts Up With That blog they posted an article, Pocket-calculator climate model outperforms billion-dollar brains, that reminded me of how I tried to solve engineering problems in college. Before attempting a rigorous solution to a complex engineering problem, I was taught to quickly estimate the answer first. This estimate served as a guide to make sure the more rigorous, time-consuming solution was going in the right direction. It was really hard to argue with professors for partial credit if you did not check your work. So it is no surprise to me that earlier this year four scientists developed a simplified climate model that can be run on a pocket calculator, and it did a good job of predicting current climate conditions. Despite its greater accuracy, the climate alarmists complained that the simplified model was not back-tested. So the four scientists took up the challenge, plugged in historical data, and found that the pocket-calculator model was more accurate than the current climate models. Now they have written a paper about their findings. Hmm… all of this complicated, time-consuming climate modeling work, and you can get better results from a model that runs on a pocket calculator. As my engineering professors would undoubtedly say, “You should have checked your work!”
I was reading a post over at Fabius Maximus and could not restrain myself. Here was my comment.
My confidence in science is not crumbling, but it is shaken. I do not remember whether I was a science skeptic before college, but I definitely was a skeptic after getting my engineering degree. You had to be very, very careful to get the right answer in lab experiments. It was hard, tedious work. As a result I do not have the highs and lows being experienced by some other people. I have seen how easy it is to be wrong despite your best efforts. I believe what you are seeing as crumbling confidence in science is bad science being penalized for being wrong, and that is a good thing!
Climate science is an interesting example of science going off the rails. I find it amazing that even after all these years I still remember enough of a thermodynamics class I took 30 years ago to question the approach climate scientists are using to solve what I would call a heat transfer problem. I was not surprised to see climate scientists struggle to explain global warming with temperature graphs. If thermodynamics is settled science, why did they choose this alternate approach?
I think the scientists noticed that they could not unambiguously prove whether we are experiencing warming or cooling so they went with the political group with the most passion and money. There was a fifty-fifty chance they might get the science right without actually doing any science. All they needed was for Mother Nature to continue to do what she had been doing for a few more years. Unfortunately their prayers to Mother Nature went unanswered and the warming stopped. Now these researchers have to explain how they got it wrong. I think the worship of pagan goddesses took a real tumble when the climate scientists went back to doing real science again.
I hate to be picky, but vaccinations, global warming, and economic “science” are not even close to what I see as the most serious confidence problems in science. Frankly, I am not surprised that most people get economics wrong. I still hold to the belief that economics is not a science but a conspiracy to make weathermen look smart. In my mind I have been able to write off the climate science problems as problems to be solved by my son’s generation. Unfortunately my generation gets to deal with the false positive problems in health care. Getting this right is a matter of life and death for healthy people like me. The false positive problem in prostate and mammogram testing is severe enough that one part of the medical profession is recommending less testing. At the same time, another part of the medical profession thinks it is better to be safe than sorry, so the over-diagnosis of prostate and breast cancer is seen as a necessary evil. My inner engineer keeps wondering why doctors are advocating less testing rather than improving their tests. When did reducing false positives cease to be a noble scientific objective? So what is an otherwise healthy person to think in this environment? When I ask my doctor, he shrugs his shoulders.
I think the fundamental problems affecting my confidence in health care can best be explained with the ulcer example. Just two months ago I learned via the TodayIFoundOut podcast that ulcers should be treated with antibiotics. I am old enough to remember when the standard diagnosis was that ulcers were caused by stress and could only be solved by surgery. This was settled science, so it is not surprising that hospitals were the biggest force preventing ulcers from being treated with antibiotics. Ulcer surgery was a money maker for many doctors and hospitals regardless of its efficacy. From 1984 to the early 1990s Dr. Marshall and his long-time collaborator, Robin Warren, were thought to be quacks. Finally, in 1994, the National Institutes of Health (NIH) held a two-day summit to discuss the research, and the rest is history. In 2005 Dr. Marshall and Dr. Warren received the Nobel Prize in Medicine for their discovery of the bacterium that leads to peptic ulcers. From settled science to a different settled science in 21 years. Doctors were absolutely, positively sure of their diagnosis and treatment until they changed their minds to a completely different treatment. It sounds like a House script. Does anyone wonder why so many people have become born-again science skeptics?
So here is the bottom line if you are looking at prostate and mammogram testing. You are damned if you do, damned if you don’t, and hospitals make money on either outcome. Even if you opt for the safe rather than sorry route, the hospital still has a chance to collect on the daily double. My father went into the hospital after a fall and got an infection. He never left the hospital alive. I am guessing that Tricare/Medicare paid a quarter of a million dollars for this mishap. Sepsis is America’s dirty little secret. The NIH factsheet on sepsis says that the “Agency for Healthcare Research and Quality lists sepsis as the most expensive condition treated in U.S. hospitals, costing more than $20 billion in 2011.” It is the mistakes in health care that shake my confidence in science.
I was looking at this wonderful chart from Business Insider, built from data by Bloomberg LP Chief Economist Michael McDonough, and wondered who was the better forecaster, the economist or the climate scientist. As we can see from the chart, the GDP forecasts for the last several years are particularly bad. In four out of four years the forecasts start out 50% to 100% too high. That is impressive!
Here is my favorite chart from that other dismal science, climate science. Although this is not a fair comparison the climate scientists are wrong only 95% of the time! They win!
Yesterday I watched Dr. Richard Lindzen’s April 2014 lecture at EIKE in Germany, Models vs. Measurements. The folks at Watts Up With That had posted the lecture link at their site. The lecture, https://www.youtube.com/watch?v=7jOD4CK8MSM, primarily addresses these questions.
- What is the sensitivity of global mean temperature to increases in greenhouse gases?
- What, if any, connection is there between weather events and global mean temperature anomaly?
- Is our understanding of the greenhouse effect adequate?
- How relevant is the simplistic notion of global mean radiative imbalance driving global mean temperature to actual climate change?
His first point is that he does not dispute that global warming exists, but he does take exception to the quantitative estimates of the warming. He goes on to show that the temperature change due to a doubling of CO2 in the models is very dependent on your assumptions for aerosols.
His second point is that when we look at the radiative forcing effects of anthropogenic emissions and volcanoes, we see a climate system that responds to these events in a less sensitive manner than what is being used in the climate models. The volcano data implies the effect of a doubling of CO2 is closer to 0.75 to 1.0°C.
He next discusses the natural variability introduced by the oscillations in the Pacific and Atlantic oceans and argues that their effects are at least comparable in magnitude to AGW.
He finishes up his lecture with a short discussion that despite what the press says the IPCC reports do not attribute extreme weather to global warming. He reminds everyone that extreme weather depends on the temperature difference between the tropics and the high latitudes. In a warmer world, this difference is expected to decrease not increase.
I am skeptical of climate models since most of them have proven terrible at predicting anything. I have to admire the chutzpah of the EPA for advocating new regulations while ignoring the IPCC forecasting problem. This irony was not lost on the folks over at Watts Up With That, who wrote EPA leaves out the most vital number in their fact sheet and were more than willing to provide the “temperature change avoided” metric. It is hard for me to imagine how we can plan our work and work our plan without using the temperature change metric as one of our key goals. The problem is that it is only 0.018°C, and considering our measurement error, this goal is indistinguishable from zero. In the annals of government failure this is one of those times where we achieved our objective before we even started and are still going ahead with the plan.
The EPA highlighted what the plan would achieve in their “By the Numbers” Fact Sheet that accompanied their big announcement.
For some reason, they left off their Fact Sheet how much climate change would be averted by the plan. Seems like a strange omission since, after all, without the threat of climate change, there would be no one thinking about the forced abridgement of our primary source of power production in the first place, and the Administration’s new emissions restriction scheme wouldn’t even be a gleam in this or any other president’s eye.
But no worries. What the EPA left out, we’ll fill in.
Using a simple, publicly-available climate model emulator called MAGICC that was in part developed through support of the EPA, we ran the numbers as to how much future temperature rise would be averted by a complete adoption of and adherence to the EPA’s new carbon dioxide restrictions*.
The answer? Less than two one-hundredths of a degree Celsius by the year 2100.
0.018°C to be exact.
We’re not even sure how to put such a small number into practical terms, because, basically, the number is so small as to be undetectable.
This weekend I was reading the Watts Up With That article, NOAA shows ‘the pause’ in the U.S. surface temperature record over nearly a decade, and thought that the graph of the data from the U.S. Climate Reference Network sure looked like a decline. Since my anecdotal experience is that recent winters are colder than 2004, I was curious what the data would say. Is this one of those rare cases where anecdotal weather information has started to match climate data? So I went over to the NOAA page, downloaded the data, and ran it through R to create graphs of the average, maximum, and minimum temperature anomalies. When we talk about weather data, it is the high and low temperatures of the day that are reported in weather reports; I am still looking for that brave weather forecaster who will include the average temperature for the day. For a person already inclined to believe that it has gotten colder, it was not surprising that the minimum temperature anomaly showed the steepest decline. All three graphs show a decline, with the maximum anomaly showing the least. Since most people are sensitive to maximum temperatures during the summer and minimum temperatures during the winter, this explains why I think the winters have gotten colder. Without further ado, here are the graphs.
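The trend calculation behind those graphs is just an ordinary least-squares slope over time. The author did the work in R on the NOAA USCRN data, so the Python sketch below is only an illustrative stand-in, and the anomaly numbers in it are invented, not the actual record.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def trend_per_decade(years, anomalies):
    """Slope of an anomaly series expressed in degrees per decade."""
    return ols_slope(years, anomalies) * 10.0

# Invented anomaly values for 2004-2013, for illustration only.
years = list(range(2004, 2014))
anoms = [0.4, 0.3, 0.5, 0.2, 0.1, 0.0, 0.3, -0.1, 0.0, -0.2]
print(round(trend_per_decade(years, anoms), 2))  # prints -0.62, i.e. a decline
```

A negative slope on the minimum-temperature series is what would make the winters feel colder, which matches the anecdotal impression described above.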
Recently I have been pondering whether climate science is a valid science theory if it is consistently wrong. If we look at the definition of science from Wikipedia we get this definition.
When we look at Dr. Spencer’s graph shown on the right, we can see that over 95% of the climate models are wrong. If the late Nobel Prize-winning physicist Richard Feynman were looking at that chart, he would probably say this.
It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.
I will go one step further. You do not have a valid climate science theory until you successfully predict or explain something non-trivial about the global average temperature. Until then you are just a technician collecting data. The average person expects that when they are told a government policy is based on science, it is based on science that has successfully predicted something. In the case of climate science we find that government policies are being proposed because “ninety-seven percent of climate scientists agree that climate-warming trends over the past century are very likely due to human activities”. If 95% of the climate models are wrong, how can climate scientists say climate-warming trends are likely due to human activities? Where is the science that is successfully predicting something? Are we to infer that since the surface temperature stopped rising during the past decade, human activities stopped rising too? When you say that climate-warming trends are very likely due to human activities, you do not leave any wiggle room for www.climate.gov to argue that natural climate cycles slowed the rise during the past decade. Until climate scientists can predict something non-trivial about the global average temperature, it sure looks like we have put the cart before the horse.
In honor of President Obama’s speech on climate change I found this quote by Richard P. Feynman.
It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.
When I looked at my most recent electric bill I was dismayed to see how expensive it was. Most of the increase can be attributed to higher electrical rates, but a significant portion can be attributed to colder weather. With all of this talk about global warming, it is not showing up on my bottom line. That got me thinking. What is the trend for heating degree days, and is there a significant correlation between heating degree days and carbon dioxide?
To partly answer these questions I went back to NOAA’s Climate at a Glance site for heating degree day data. I downloaded the heating degree day data for December, January, and February and added the three months together as a numerical value for the heating season. Next I combined it with the CO2 data since 1958, ran it through R’s PerformanceAnalytics package, and got this chart. Like my previous chart for temperature and CO2, we can see that the correlation between CO2 and heating degree days is weak. If we look at the trend line for heating degree days in the bottom left-hand corner and compare it to the chart above it, we can see they are going in different directions. CO2 is definitely going up with time while the trend for heating degree days is flat. Once again the argument that CO2 is causing climate change looks weak.
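The two operations described above, summing the three winter months into a season total and correlating that total against CO2, can be sketched as follows. The author used R’s PerformanceAnalytics package on the real NOAA and Mauna Loa data; this Python version uses small made-up numbers purely to show the mechanics.

```python
def season_hdd(dec, jan, feb):
    """Combine December, January, and February heating degree days
    into one heating-season total, as described in the text."""
    return dec + jan + feb

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up illustration: CO2 rises steadily while the seasonal HDD total
# wanders with no trend, so the correlation comes out near zero.
co2 = [315, 320, 325, 330, 335, 340]
hdd = [season_hdd(800, 900, 700), season_hdd(850, 880, 690),
       season_hdd(790, 910, 720), season_hdd(810, 870, 700),
       season_hdd(840, 905, 695), season_hdd(800, 885, 710)]
print(round(pearson_r(co2, hdd), 2))
```

A steadily rising series correlated against a flat, noisy one yields a coefficient near zero, which is the “weak correlation” picture the chart shows.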
The question is whether CO2 is driving temperatures up. Burt Rutan started the discussion with this graph in the Watts Up With That post, Burt Rutan: ‘This says it all and says it clear’.
Chiefio liked a different graph from D. B. Stealey. It is a bit more dramatic.
D.B. Stealey CO2 vs USA Temperature Graph
I was curious whether the analysis would hold up if we normalized the variables, so I copied the data into an Excel spreadsheet. I was somewhat surprised to find that the NOAA CO2 data only goes back to 1958. I guess we are guessing at CO2 levels before 1958, so my inner engineer said to ignore them; extrapolations are just assumptions with a fancier name. I also decided to look at the temperature plots for January and July as approximate indicators of the highs and lows for the year. So I normalized the variables using 1958 as 1. It is interesting to note that the January data, represented by the pink line, has a greater slope (about 4x) and is a lot more variable than the July data, represented by the yellow line. This is more slope than can be accounted for by normalizing to a smaller value. If we are supposed to be heating up because of CO2, the July data seems somewhat impervious to the 22.43% buildup in CO2 since 1958. Here is the bottom line. Using the predicted values from the regressions, we can say that CO2 went up 22.43% compared to a 3.72% rise in January temperatures and a 1.85% rise in July temperatures. It sure does not look like there is much correlation, let alone causality, between these variables. Here is a really strange thought: according to the slopes calculated for the January and July temperature plots, the difference between the high and the low for the year is getting smaller?! Could the additional CO2 be moderating the magnitude of the annual temperature swings? Now that is counter-intuitive. Without much ado, here is my version.
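The normalization and percent-change arithmetic described above can be sketched in a few lines. The author did this in an Excel spreadsheet, so the Python version below, with invented numbers, is only a sanity check of the method (divide each series by its 1958 value, fit a regression, read the percent change off the fitted endpoints), not a reproduction of the actual figures.

```python
def normalize_to_first(values):
    """Rescale a series so its first value (here, the 1958 value) equals 1."""
    base = values[0]
    return [v / base for v in values]

def fitted_percent_change(xs, ys):
    """Percent change between the first and last fitted values of an
    ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    first = intercept + slope * xs[0]
    last = intercept + slope * xs[-1]
    return 100.0 * (last - first) / first

# Invented CO2-like series starting at an arbitrary 315 in "1958".
years = [1958, 1968, 1978, 1988]
co2 = normalize_to_first([315.0, 323.0, 335.0, 351.0])
print(round(fitted_percent_change(years, co2), 1))  # prints 11.5 (percent rise)
```

Comparing the fitted percent change of normalized CO2 against the fitted percent changes of the normalized January and July temperatures is exactly the 22.43% vs. 3.72% vs. 1.85% comparison made above.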