As most readers of this blog already know, the posting and revision of CO2 data on the Mauna Loa Observatory website over the past day has generated quite a lot of controversy. Now, after having been in touch with Dr. Pieter Tans at MLO through several emails, I hope to shed some light on what happened.
It all started Sunday, August 3rd, when a revision of the data was posted that showed a clear drop between January and July of this year. I did a story on the January-to-July trend reversal of CO2 at Mauna Loa, highlighting what the data published by MLO said at that time: a lower CO2 ppm value in July than had been measured in January, something never before seen in the history of the dataset.
Then yesterday, Monday August 4th, there was an abrupt change in the MLO data published on their website that very nearly erased the trend highlighted in the previous story, and there was no mention of the change on the NOAA web page for Mauna Loa Observatory. There still isn’t.
So I did another story using a blink comparator to highlight the change in the data and made note of the mystery hoping to get more info from the curator of the MLO CO2 dataset, Dr. Pieter Tans.
Meanwhile, quite a lot of speculation occurred, much of it critical of the entire process MLO used to publish and revise this data. There were also some commenters on this blog who looked at the change in the data to reverse-engineer what happened and figure out plausible reasons for it.
Early today, 08/05 8:55AM PST, I received my first communications from Dr. Tans on the subject:
Anthony,
We appreciate your interest in the CO2 data. The reason was simply that
we had a problem with the equipment for the first half of July, with the
result that the earlier monthly average consisted of only the last 10
days. Since CO2 always goes down fast during July the monthly average
came out low. I have now changed the program to take this effect into
account, and adjusting back to the middle of the month using the
multi-year average seasonal cycle. This change also affected the entire
record because there are missing days here and there. The other
adjustments were minor, typically less than 0.1 ppm.
Best regards,
Pieter Tans
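For readers wondering what a correction like that looks like in practice, here is a minimal sketch of the general idea, written by me as an illustration; it is not NOAA's actual program, and the file name, function, and numbers in it are hypothetical placeholders.

```python
# Rough sketch only: adjusting a partial-month mean back to a full-month value
# using a multi-year average seasonal cycle. My illustration, not NOAA's code.
import numpy as np

def adjust_partial_month(partial_mean, clim_daily, days_observed, days_full_month):
    """Estimate a full-month mean CO2 (ppm) from a mean over a partial month.

    clim_daily is a multi-year average seasonal cycle, one value per day of year;
    days_observed / days_full_month are 0-based day-of-year indices.
    """
    clim_observed = clim_daily[days_observed].mean()   # climatology over the days actually measured
    clim_full = clim_daily[days_full_month].mean()     # climatology over the whole month
    # If the measured days sit on the falling part of the seasonal cycle (late July),
    # this offset pulls the estimate back up toward the full-month level.
    return partial_mean + (clim_full - clim_observed)

# Example: a July mean built from only the last 10 days of the month
clim = np.loadtxt("mlo_daily_climatology.txt")   # hypothetical file, 366 daily values
july = np.arange(181, 212)                       # day-of-year indices for July (non-leap year)
full_month_estimate = adjust_partial_month(385.6, clim, july[-10:], july)
```

The same idea, applied month by month wherever days are missing, would also produce the small shifts elsewhere in the record that Dr. Tans mentions.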
That left more questions, most notably as to “what happened to the rest of the monthly data” and I followed up with a request for more information:
> Hello Pieter,
>
> Thank you very much for your prompt response. I appreciate you taking
> time from your busy schedule to answer.
>
> Can you elaborate on the problem with the equipment?
>
> And do you keep a public changelog or publish notices of such changes as
> occurred yesterday?
>
> Thank you for your consideration.
>
> Anthony Watts
To which he responded with a blunt one-liner:
From: “Pieter Tans” <Pieter.Tans@xxxx.xxx>
To: “Anthony Watts - TVWeather” <awatts@xxxxxxx.xxx>
Sent: Tuesday, August 05, 2008 9:30 AM
Subject: Re: question on ML CO2 monthly mean data change
The computer disc crashed…
When I read that, I was simply floored. Here we have what is considered the crown jewel of all surface based CO2 measurement stations suddenly missing 20 days of data, and it was all due to a hard disk crash. In this day and age of cheap storage and RAID systems it seemed unfathomable that such a thing could happen, especially to something so important as this data.
So I asked again:
> Thank you Dr. Tan for your forthright communications and willingness to
> explain.
>
> I am puzzled though, as to how a hard disk crash could permanently lose 20
> days of data. Surely with something so important that the whole world is
> watching, you have a backup of the data? Or even written lab notes?
> Collation forms? Data entry forms?
>
> Thank you for your consideration.
>
> Best Regards,
> Anthony Watts
To which he replied:
From: “Pieter Tans” <Pieter.Tans@xxxx.xxx>
To: “Anthony Watts- TVWeather” <awatts@xxxxxx.xxx>
Sent: Tuesday, August 05, 2008 12:47 PM
Subject: Re: THANK YOU Re: question on ML CO2 monthly mean data change
Anthony,
There are three independent backups of the MLO record: We also take
flask samples at MLO analyzed here in Boulder, and at Scripps Ralph
Keeling is continuing the record his father started with both continuous
analyzer measurements as well as flask samples taken at MLO and analyzed
at Scripps. Beyond that, we are monitoring CO2 at ~60 other places over
the globe, and many other scientists are as well, in yet other places.
Don’t ever make easy assumptions that we are cavalier about this,
keeping no records etc. We do not have infinite amounts of time and
money, however. Instruments tend to stay at a given station for a long
time. We are not allowed to lobby for additional support, as federal
scientists.
Pieter Tans
It started to become clear to me how this might occur, especially given his statement that “Instruments tend to stay at a given station for a long time”. If the data recorder is older, such things as mirrored RAID wouldn’t be a part of it, and indeed a single disk failure could render the entire measurement process void.
Siemens Ultramat 3 nondispersive infrared gas analyzer used at MLO, referenced here.
Compare to some more modern equipment by the same company
So I wrote back:
> Hello again Pieter,
>
> Thanks again for the reply. I do apologize if my response suggested that
> you are “cavalier”, that was not the intent. You must have had a trying
> couple of weeks. I’m sure that my probing didn’t help.
>
> It’s just that I was just a bit incredulous that 20 days of data could get
> lost in this day and age. But reading your response about “length of time
> equipment stays at a station”, I think I understand now how it could happen.
>
> Having worked for a state agency once, and knowing the procurement processes
> and pitfalls I’m guessing that you are operating an older data recorder,
> probably with a hard drive that is difficult to find these days, like an old
> ST225
>
> Knowing lab technology from the 70’s 80’s and 90’s, I could see how you
> could be going along with such devices thinking it was correctly recording
> the data, only to find later your dataset came up empty.
>
> Things like that have happened to me.
>
> My interest is making sure the data is right, whether it is up or down isn’t
> much consequence to me at this point, trust in the dataset is the most
> important issue.
>
> Initially it appeared that there was a drop in PPM from January to July,
> which was truly unique. Then the data changed suddenly. It was that abrupt
> change that raised concerns. While I’m sure you are removed from it,
> measurement quality control, data quality issues, and arcane unexplained
> adjustments have plagued the surface temperature record. Attempts to get
> answers have been stonewalled and met with hostility. Replication in science
> should never be met with hostility, in my view.
>
> Your dialog though has been a refreshing change from the way people like Jim
> Hansen treat people that ask honest questions like “why did the data change
> abruptly” and “what were the adjustment methods used”?
>
> I hope you won’t mind a suggestion that could prevent such problems.
>
> Based on my experience, one of the biggest favors you can do yourself would
> be to put up a section near the FTP links that is a running change log about
> the data. That way, if a change is needed for truly valid reasons (such as
> this) you have a way to notify the public.
>
> For example, if I had known that the data posted Sunday August 3rd, was
> missing 20 days for July, I never would have considered looking at it until
> that issue was resolved. The boilerplate caveats saying the data could be
> revised up to a year really don’t convey anything beyond a generic caution.
> In this case a specific caution would have helped, a lot.
>
> To not do so invites a lot of speculation, as you’ve probably noted. But as
> it was presented Sunday August 3rd, it appeared to be ready for primetime,
> and was absent such caveats.
>
> I’m satisfied with the answers you’ve provided, and If you’d like to write
> up a statement explaining the whole issue, beyond what has already been
> said, I’ll be happy to post it. That should quiet things down a bit. Absent
> that, perhaps due to restrictions you may be under, I’m prepared to put the
> issue to rest and write-up what I know based on our correspondence. I’ll
> even offer a preview if you like.
>
> Perhaps I can even help your mission. If it becomes clear public knowledge
> that you are operating with substandard equipment with data recording
> reliability issues, some procurement action could be taken in the future.
> You’d be surprised who reads my blog. Again my whole issue has been and
> always will be “the data should be right”.
>
> Again thank you for your willingness to discuss and communicate.
>
> Best Regards,
>
> Anthony Watts
To which he replied with a final note:
Anthony,
You asked for my comments on your previous email. I have a few. With
respect to the “drop” in (seasonally corrected) CO2 you mentioned, it
happens frequently. The annual increase has averaged about 2 ppm per
year recently, which equals 0.17 per month. The “noise” in monthly data
is larger than that. In 1994 there were three months in a row that CO2
went down, for example. I grant that the drop in July did look
suspicious, though. I knew that the direct monthly mean based on 10
days had to be corrected, and I had written a program to make that
correction, including also small corrections to other months, for the
same reason. It was unfortunate that the uncorrected July data did
appear on the web, which was not intended, and we corrected it the next
day.
We have thought about a change log, and may still do that, but thought
that it would be too much detail for almost everyone. Our methods have
been published, and our data are freely available on the web site, so
that anyone interested can do his/her own analysis. We are committed to
complete and prompt availability of our data because it is essential to
credibility and it improves the science. The promptness implies that we
are more likely to make a mistake in public now and then, but we take
that in stride. Please check out our CarbonTracker web site which
embodies the same philosophy. CarbonTracker “translates” observed CO2
patterns into an assessment of emissions/uptake of CO2 that is optimally
consistent with the observations. We are very much aware that in a time
when carbon dioxide emissions will cost a lot of money, there has to be
an objective and fully credible way to quantify emissions. Without
that, carbon markets cannot function efficiently, and policies cannot be
measured relative to their objectives. We think that the atmosphere
itself can provide objective quantification.
With respect to reliability, it is a fact that the equipment we use is
not good enough “off-the-shelf” to produce the measurement accuracy that
is needed. We have to build an entire control and gas handling system
around every analyzer to keep it in check. We control temperature,
pressure and flow rate, dry the air stream, and inject calibrated
reference gas mixtures at regular intervals, etc. Since very recently,
there are what appear to be much better instruments on the market,
fortunately. The last steps in quality control are the comparisons with
independent measurements I mentioned earlier, and scientific analysis of
the data.
I am not much of a blogger, but would appreciate a preview if you write
something about our correspondence.
Best regards,
Pieter Tans
In the meantime, some of the commenters on the blog called for significant scrutiny:
Basil (06:47:08) :
If any think that this is grasping at straws, just remember that this is a bellwether site for the AGW hypothesis. So it deserves all the scrutiny it gets, and has to live up to the strictest standards because of it.
And some did their own analysis; one notable example was Dee Norris, who examined the changes in the data.
=============
Dee Norris (11:42:47) :
The adjustments go both ways as seen on this plot: http://tinyurl.com/6qb3sg
The net gain is 0.19 ppmv over the entire 34 year record - this includes the July 2008 adjustment. If we back out the July 2008 adjustment of 0.67 ppmv, the gain becomes a decrease of 0.48 ppmv.
I really don’t see anyone here diddling with the data-set in order to amplify the AGW aspect.
and later she wrote:
Dee Norris (11:49:42) :
I have been having an ongoing email exchange with Dr Tans. In the last go round, I asked him to confirm my understanding of the nature of the adjustment.
I wrote:
“Am I correct that when you changed the program to account for the missing 20 days in July, there was a backward propagation of adjustments filling in for other missing days?”
Dr Tans replied:
“You are good.
When I was at it, I made another adjustment to the program. I used to fit 4 harmonics (sine, cosine with frequencies 1/year through 4/year) to describe the average seasonal cycle. I changed that to 6 harmonics.
Therefore, there will be small systematic differences as a function of time-of-year in the de-seasonalized trend. That will be on top of adjustments caused by months in the past during which there were a number of missing days not symmetrically distributed during that month.”
I think we are too conditioned to data getting Hansenized and may be jumping to conclusions. So far, unlike Hansen, Dr Tans has been forthright with communicating his approach.
=============
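As an aside, for anyone curious what fitting a seasonal cycle with harmonics involves, here is a minimal sketch of the general technique in Python. It is my own illustration, not NOAA's program, and the data file and column layout in the usage note are assumptions on my part.

```python
# Minimal sketch of a harmonic fit to the seasonal cycle: a linear trend plus
# sine/cosine pairs at 1/year up to n/year, solved by ordinary least squares.
# My illustration of the general technique, not NOAA's code.
import numpy as np

def fit_seasonal_cycle(t_years, co2_ppm, n_harmonics=6):
    """Return the fitted trend-plus-seasonal curve evaluated at t_years."""
    cols = [np.ones_like(t_years), t_years]            # constant and linear trend
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t_years))   # k cycles per year
        cols.append(np.cos(2 * np.pi * k * t_years))
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, co2_ppm, rcond=None)
    return X @ coeffs

# Hypothetical usage, assuming a text file with decimal-year and monthly-mean columns:
# t, co2 = np.loadtxt("co2_mm_mlo.txt", usecols=(2, 3), unpack=True)
# diff = fit_seasonal_cycle(t, co2, 6) - fit_seasonal_cycle(t, co2, 4)
# "diff" would show the small, time-of-year-dependent shifts Dr. Tans describes.
```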
Summary:
Unlike the seemingly random and cloaked adjustments we’ve seen from Hansen and GISS, the MLO adjustments used in this episode appear to have a purpose, and the result is that the data, while adjusted, doesn’t really change much at all, except where there is a missing data period. The results and explanation seem reasonable to me, and to others I’ve corresponded with about it.
There are, however, some significant remaining issues which I think need to be addressed, some of which have been raised by commenters on this blog.
1. From my perspective, a change log is needed for any public dataset like this, and especially one this important. As I mentioned in correspondence, had I known 20+ days were missing from the July 2008 dataset, I would not have even bothered to write about it. I think MLO erred in not making the state of the data known, both in the initial posting on August 3rd and in the “adjusted data” on August 4th. Both releases suffered from a lack of explanation, which invited speculation.
2. There appears to be a bit of confirmation bias going on. From Dr. Tans’ own writing to me:
I grant that the drop in July did look suspicious, though. I knew that the direct monthly mean based on 10 days had to be corrected, and I had written a program to make that
correction, including also small corrections to other months, for the same reason.
Thus it appears that the algorithm had not been used before. And many commenters have pointed out that there may have been peaks that weren’t caught in the past because, well, the conditioned expectation we’ve been exposed to is that “CO2 is going up”. So errors on the positive side may not have been caught, due to this human tendency, when inspecting the data. One commenter wrote:
Even if Dr. Tans’ adjustment is reasonable and defensible on its face, it is still a bit troubling. Would a similar adjustment have been made if the CO2 numbers were higher than expected? Somehow I doubt it. And if not, it has the potential to introduce a bias into the numbers.
One of the cardinal rules of statistical analysis is that you choose your criteria BEFORE you see the data. Otherwise it’s very easy to fool yourself (and others) into thinking your results are significant.
By analogy, Dr. Tans should have carefully chosen an averaging method IN ADVANCE and then stuck with it.
While the biases that may exist may be small, catching them on both sides of the zero anomaly line builds confidence in the dataset.
3. There’s a hole in the public process. This is public data, and thus under the auspices of the Data Quality Act. Altering data in a 24-hour window with no notice, and more specifically no review, no public comment, no comparative dataset logs, and no immediately posted public identification of the cause (except after prodding), is surely a fast track to a DQA violation. While I’m appreciative of Dr. Tans’ willingness to share information and converse, unlike some other publicly funded scientists, I’m also critical of the way it has been handled from the data-publishing side. This needs correction, as the public trust is at issue.
It is my opinion that the truly raw CO2 data should be published along with the adjusted data, so that there is complete transparency.
4. Finally there are a few questions that remain that perhaps we’ll get some answers to:
- How many times in the past have these adjustment algorithms been run?
- How long has Dr. Pieter Tans been responsible for adjustments?
- Did his predecessor instruct him in the necessity of these wholesale adjustments?
- Are there any records of previous adjustments?
- Did Dr. Keeling initiate a protocol that required regular adjustments?
All in all, while this episode produced, as Lucia described it, a “Kerfluffle”, it has had the positive effect of putting some needed scrutiny on a dataset that almost everyone, until now, had not significantly questioned. The errors at MLO that allowed an incomplete data set to be posted with no visible caveats for the public user have highlighted weaknesses in the system that need correction. I’m hopeful that this episode will bring about positive changes, especially in the reporting by MLO on the current state of the data set.
Dr Tans speaks of about 60 other sites measuring CO2 levels. Probably the one with the ‘purest’ air and most constant flow is that at Cape Grim, Tasmania. What a pity that the data from it can’t be seen at least on a monthly basis!
Great post, Anthony. I hope Dr. Tans appreciates the impartiality you have brought to this investigation. I know I do. A more rabid investigator, sure that there was a conspiracy afoot, might have driven him into the stonewalling tactics that have become so commonplace amongst many climatologists.
I’m impressed by the patience you (Anthony) and Dr Tans have shown in these circumstances. Sometimes a little patience is required to get to the truth.
Good work Anthony.
Thank you Anthony {and Dee and others posting comments and initiating queries} for all the work put into this issue.
After all the issues of transparency that have surfaced over the years, this exchange is one of the freshest and most open exchanges I’ve read in a long while. Whether an AGW proponent or skeptic, one’s belief system should not get in the way of obtaining accurate and reliable data. What one does with the data once it is available is, of course, another matter.
I can also sense Dr. Tans’ frustrations at the position he’s in and the funding constraints under which he doubtless operates.
Again, this has been one of the most enjoyable series of posts to follow with all the comments as well. Up to the norm of a science focus and less the sound and fury of politics.
It would seem to make sense to publish the daily raw data. (1) This would make sure the data would never be lost. (2) It would provide complete transparency.
REPLY: Given what little I know about the setup of that gas analyser, I suspect that may not be as easy as it sounds.
Looking forward to AIRS CO2 data to see how well it agrees with Mauna Loa for the last few years. Maybe someone can get us a sneak peek.
Mike
Thanks Anthony,
This should put the whole discussion to rest, although there needs to be a follow-up on the remaining remarks.
Only one remark about the raw (hourly average) data: these are available, but with a long delay (2006 is the last year published). See:
ftp://ftp.cmdl.noaa.gov/ccg/co2/in-situ/mlo/
I don’t know why that can’t be published faster (e.g. together with the monthly data, all flagged as preliminary pending manual quality control).
There are daily files with sub-hourly data available up to May 2008 from the Ameriflux network (used to measure CO2 fluxes in forests, e.g. in Flagstaff), so elsewhere it is not a problem to publish lots of data points quite quickly.
Further, it seems that in today’s “science” there is little interest in collecting field data, be it CO2 or tree rings or the greening of the earth, and that most of the subsidies go to the building of useless computer models. That Pieter Tans still needs to work with (too) old equipment doesn’t differ much from the struggle his predecessor, the late Charles D. Keeling, had during his whole career. See the fascinating autobiography of Keeling:
http://scrippsco2.ucsd.edu/publications/keeling_autobiography.pdf
@Anthony:
I am glad to see your emails with Dr Tans were similar to mine. Even though Dr Tans would appear to be in the AGW camp, it was good to see that science took precedence over beliefs. This is as it should be.
While we have all experienced the religious fanaticism of many of the AGWers and their unwillingness to even entertain facts contrary to their belief system, after reading some of the posts, I wonder if some skeptics are beginning to fall into the same trap.
Real science is agnostic in all things. The search to understand the nature of the universe, in both the macro and the micro, has to be indifferent to what is discovered or how that discovery is used.
I think we too often hear the words of the vocal minority of researchers who use science to push a political/social agenda, and overlook the thousands of scientists in the silent majority who chip away at the mysteries of reality, with the discovery of something new being its own reward.
I have an affinity for the position in which Dr Tans found himself. My first exposure to Atmospheric Science was back in the late 70s when I was in High School. I was selected by the Science Dept to take over the operation of an airborne particulate monitoring program the school had started several years earlier. Nothing glamorous. But the data we collected was included in the New York State public data sets.
Basically, every 6 days, a vacuum sampling unit would draw air through a filter for 24 hrs, and the difference in the before and after mass of the filter, divided by the total cubic meters of air drawn through the filter, would give the total particulate concentration in micrograms per cubic meter.
However, to get the correct weight of the filter before each weighing, it would have to be desiccated. In this case, I used calcium chloride. The filter would have to sit in a bell jar along with the CaCl2 for at least 48 hrs.
CaCl2 has to be kept tightly sealed if it is going to be used as a desiccant. I got distracted one day and left the lid off the container of CaCl2 when I left the school for the weekend. One of the teachers kindly replaced the lid at some subsequent point without alerting me to my error. Having then forgotten that I had left the CaCl2 exposed to the humid air of Long Island, I proceeded to use the now partially hydrated CaCl2 to desiccate the filter for the next run.
Naturally, the filter contained more water after having 24 hrs of moist air drawn through it, and consequently my results were skewed upwards due to the decreased effectiveness of the hydrated CaCl2. While I noticed the change in the data, I didn’t pursue the cause, to my eventual embarrassment.
On a monthly basis, I would call in my data to Nassau County who would then transmit the county-wide data to the state. About two weeks after I called in my numbers, I was asked to bring the vacuum sampling unit in for calibration as it appeared that my data was high when compared to the rest of the county. To the puzzlement of everyone, the unit tested fine on the bench. Needless to say, my data was still high the following month.
To make a long story much shorter, despite procedure reviews and further testing, this kept up until I finally opened a new container of CaCl2.
Stuff happens for the simplest of reasons. Not all data errors/adjustments are nefarious.
This thread demonstrates the quality of Anthony’s blog and explains the root of its great success.
As a former chemist and aborted global carbon cycle researcher (validating the usefulness of stable carbon isotope ratios as a bulk parameter in tracing carbon cycling through various, mainly marine, sources and sinks - in the ’70s), I was always puzzled about the continuing quasi linearity of the MLO data.
Does anybody have any suggestions how this correlates to the recently discussed large CO2 data sets put forward by Dr. Beck? With the MLO curve being the cornerstone of the ‘industrial age’ AGW consensus, I believe that the topic of true CO2 variations during the recent past deserves much more scrutiny!?
and it seems to me that this Blog is now the proper forum (at least for the English-speaking blogosphere)
ulrich lobsiger
Thanks to Anthony and Dr Tans
This blog topic gives one hope that perhaps there can be movement to rationality.
I used to do forensic accounting for a living. The questions and responses, especially about keeping such an important set of data sets transparent, are a breath of fresh air. Too often discussions break down through emo crap.
I agree that Anthony has the right to ask, especially since the main project of this blog is checking the US surface station measuring systems and trying to get the nation’s measuring systems up to their own standards.
In auditing of any nature, when a change is made to data that affects the end result or interpretation being relied upon to make a business case, a strategy, or a finance application, notes must be tendered for the changes. Otherwise the process is not audited, and a rider must be inserted saying exactly that: the accounts in the statements cannot be relied upon legally because, though the accounts as presented balance, the underlying data has not been audited.
I don’t think any government employee has a right to stonewall on publicly funded data. In Australia, with our version of NASA, the CSIRO, scientists with legitimate requests have come up against intellectual property and so-called privacy rights issues in requests for policy documents which government and business rely on to make long-term economic and financial decisions.
Regards, this blog is always a good read
The Beck paper was a real eye-opener; perhaps he (Beck) could get in touch with NASA and tell them how to measure CO2 the good old chemical-method way (accurate to within 3%).
Does anyone else think they are measuring CO2 in the wrong place? Surely the best place to measure CO2 is -not- out in Hawaii, where hardly anyone lives, but in a city, where the so-called “greenhouse effect” is supposed to affect us the most (by virtue of most of us living in cities!)
Hmm, yet again more questions than answers…
I have a problem with adjusted data being presented as data. If the data has holes, show me the holes. If one must adjust, then show the raw data and the adjusted data together.
I can tell you nobody in the corporate world accepts adjusted data w/o being able to see the unadjusted data. Adjustments are always treated with a healthy dose of skepticism.
When you get caught fudging the data and someone wants to look closer at your fudged data, having the unadjusted data disappear is very convenient. Incidentally, just because a hard disk crashes does not mean the data cannot be recovered. If the data was never recorded because of the crash, is he saying nobody noticed it had crashed for 20 days? That’s just unbelievable.
Also, if you only have 10 days data for July, why not compare it with the same 10 days in 2007 and present this, perhaps with a note explaining it.
These are issues people are making very big decisions on. Exposing that you have missed 20 days of data from a very key measuring site due to antiquated equipment or lack of funding would be a good way of raising a flag on the issue. Instead it was covered up. Maybe if people really knew how unreliable this data was they would wake up. It may be that they really do not want good data, since the unadjusted data exposes an Inconvenient Truth.
This is not an issue with Tans, of course; he is, like so many government scientists, between a rock and a hard place.
My skepticism is growing at an alarming rate.
Anthony
I have been commenting in many places about the importance of data archives, so here is another. All data and programs need to be defended. Archives of data sets and programs need to be named so that the elements can stand alone if possible, e.g. Mauna_LOA_CO2_ddmmyy.vvv; don’t be cheap in the naming. The archive needs to be able to answer who, what, why and when. This is like brushing your teeth: you just do it. Then when things happen, and they will, at worst only the data since your last check-in is lost. I recommend offsite copies as well. Some of my comments on this subject were made in advance of this event. This would not even have been a ripple on the pond if the above suggestions were in place; it would even have answered your question as to why the data was so different.
Terry
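REPLY: Agreed on all points. For what it’s worth, here is a rough sketch of the sort of archive-and-log step you describe; the file names and the log format are only examples I made up, not anything MLO actually uses.

```python
# Sketch of a "never publish without archiving and logging" step.
# Names and log format are examples only.
import os
import shutil
from datetime import datetime, timezone

def archive_and_log(src="co2_mm_mlo.txt", log="CHANGELOG.txt", who="", why=""):
    os.makedirs("archive", exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    dest = f"archive/Mauna_Loa_CO2_{stamp}.txt"
    shutil.copy2(src, dest)                                   # keep the prior version intact
    with open(log, "a") as f:
        f.write(f"{stamp}  {dest}  who={who}  why={why}\n")   # answers who, what, why, when
    return dest

archive_and_log(who="MLO", why="corrected partial-month July 2008 mean")
```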
Speaking of closely-followed data, RSS is in for July 2008. It’s 0.147, down 0.216 from July 2007. This is the 11th straight month of falling year-over-year temperatures. I stuck my neck out and put up my “predictions” (based on linear regression) for Hadley/GISS/UAH/RSS in the thread http://wattsupwiththat.wordpress.com/2008/08/01/award-winning-astronaut-slams-hansen-urges-nasa-to-debunk-the-current-hysteria-over-warming/#comments at 18:03:33 August 1st.
An interesting additional piece of info is that the MEI (Multivariate ENSO Index) at http://www.cdc.noaa.gov/people/klaus.wolter/MEI/mei.html has been at just above zero for MAY-JUN and JUN-JUL, whereas they were slightly below zero at the same times last year. The executive summary is that we’ve come out of La Nina to neutral conditions (slightly closer to El Nino than last year), and temperatures are *STILL* down year-over-year. I wonder what straw the AGW crowd is going to clutch at now, to explain the continuously falling temperatures.
Thanks Anthony and Dr Tans
Where is all the money going? Evidently it’s not going to upgrade equipment. This is another disgrace, along with the surface stations.
This whole “kerfuffle” was an interesting exercise in civil discourse (for the most part), but the bolded part of this comment of Dr. Tans’ really has no place in a scientific discussion:
We are very much aware that in a time when carbon dioxide emissions will cost a lot of money, there has to be an objective and fully credible way to quantify emissions. Without that, carbon markets cannot function efficiently, and policies cannot be measured relative to their objectives.
While I agree with the part about policies being measured relative to their objectives, I just can’t fathom a serious scientist being concerned about the effect the accuracy of his data will have on carbon markets. Like Old construction Worker, I also find it inexcusable, with all the nitwit pork projects that politicians squander our hard-earned tax dollars on, that MLO doesn’t have the most up-to-date equipment.
I’ll agree with Anthony that Dr Tans is more responsive than most other AGW advocates… but the idea that there isn’t a clipboard hanging around that has the month’s data for backup in the event of an equipment failure (especially since they happen so often) is mind-boggling… but then I forget, these are federal employees… who, by the way, can’t lobby for new equipment (let’s all forget there is a budget process for new equipment, etc.).
Anyway, back to the importance of quality data: in the years following Mt Pinatubo the increase in CO2 stopped (maybe because the globe cooled)… it brings up the question of exactly how much of the CO2 increase over the past 50 years is anthropogenic and how much is natural.
Stan, darn you to Haiti! You stole my thunder! Even with Dr. Tans’ apparent openness and congeniality, that single, solitary slip of the tongue should throw a HUGE red flag on any “adjustments” being made to any of the data that he’s responsible for. (No, I’m not insinuating that he’s conspiring with Hansen, but that’s a bit like saying that the bedrock samples from “the best site” needed to be adjusted, in order for the dam to be built on a particular landowner’s property, before all of the possible sites have been surveyed. As the Powers That Be™ at work always remind us lowly worker bees: “Perception is reality.”)
Perhaps one of the statisticians/programmers among Anthony’s readers could go over the AlGoreithm (Ha!) that Dr. Tans came up with and see if it’s truly “unbiased” and “objective”, when compared to
clean, unadjusted data sets collected over the years?
Anyway, keep up the great work, Anthony, and I hope the good Dr. Tans is able to upgrade his data collection and storage system from the Commodore 64® that he’s apparently stuck with. I’ve also worked in government, both federal and local, and know how incredibly difficult it is to get new, up-to-date equipment.
Old equipment and undocumented data changes are ripe for lawsuits, and now these issues are public information. Emission regulations that force a company to pay will end up being trotted into court and torn apart. If Al Gore really wants his company to get off the starting blocks, he’d better invest in top-quality equipment and controls, or else he and his carbon credit scheme will go down in history much like snake oil did. And if Congress wants to pass laws related to carbon credits, just wait for the stampede. Businesses will pour into the offices of senators and representatives with their lawyers in tow.
End of story, regardless of whether or not CO2 DOES harm the planet.
BC, yeah, that’s it, I can see it now…
Climate Scientist: “Mister Chairman, the foundation for the increase in energy costs of hundreds of billions of dollars is all of this extrapolated data that I have adjusted for, because my Commodore 64 broke down…”
Well, I have a Commodore 64 I could lend them. It worked the last time I turned it on, but that was a decade or two ago. Only one floppy drive, but I did use a Midi card as a serial interface to talk to an Apple ][ and then to a Heathkit CP/M system with a 10 MB hard disk. It cost me $3,000. For the disk. In 1980 dollars. Ah well….
At the very least, I suspect that Dr. Tans has a better appreciation of how many people are interested in the data he measures and the scrutiny it gets, whether CO2 concentration goes up, down, or sideways.
I find it troubling that Dr. Tans, as a public scientist, felt that he shouldn’t lobby for changes to his budget, but other public scientists feel free to lobby for public policy changes that have much more of an effect on the public/science relationship. If the standard is for no public lobbying for funds it should also be no lobbying for policy/political change. Let science be about science.
Is this the same Pieter Tans from Boulder who associates himself with the 9/11Truth movement?
http://www.dailycamera.com/news/2006/oct/13/letters-to-the-editor—oct-14/
http://archive.boulderweekly.com/102104/coverstory.html
Anthony,
I severed some nerves and have pretty much lost the use of my right hand for a while or maybe longer. So, until I develop my single hand typing skills, I can’t type as much as I would like.
I would like to write many words of thanks here for the work you do. Work, family, this blog and everything — don’t know how you do it all but I am very grateful you do.
REPLY: Thank you for the kind words, and for the extra effort. I once broke my right wrist in a fall off a bicycle. The saviour was a headset with voice recognition software; they get to be pretty good once tuned, which takes a couple of days. Get well soon. - Anthony
Anthony
The last paragraph above the word “summary” shouldn’t be italicised, I think. It makes it look as if Dr Tans is criticising Hansen.
REPLY: I agree, fixed.
The following is just Mauna Loa hourly data for 2006.
~12.5% of all 2006 hours have a -999.99 reading.
There is a pretty consistent -999.99 reading every 25 hours.
There are some odd “non-noise” type of events in the 2006 data.
For example:
A. Notice CO2 is trending slightly down toward the end of March 2006, around 382.25 ppm. On April 1st the CO2 machine stops taking data; on April 3rd the machine stops taking data again, and when the data starts again after a few hours it is reporting a +2 ppm CO2 increase, pretty steady-state at around 384.25 ppm.
B. Notice the end of May 2006: after a batch of lower CO2 readings around 383.3 ppm, the machine is again taken offline, and a roughly +1 ppm average starts again in June at around 384.4 ppm.
C. The first 6 months have data within a 2 ppm error band, not counting the jump mentioned in point A. During the last six months of 2006 they must have had a lot of measurement problems, because the error band widens to 5 ppm and contains lots of 5 ppm negative spikes.
D. The reported monthly averages are 0.5 ppm higher in August than the hourly data yields, so Dr. Tans is removing some data (perhaps the negative spikes?) from the calculation or doing something else with the data. The same is visually obvious for July and September as well.
E. I think the hourly data used here is averaged as well, so I’m not sure where to get the actual raw data.
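REPLY: For anyone who wants to repeat that kind of check on the published hourly files, here is a rough sketch; the column layout I assume below (year, month, day, hour, CO2, with -999.99 as the missing-data flag) is a guess on my part, so check the header of the actual file before trusting the output.

```python
# Rough sketch: monthly means from hourly MLO data, skipping -999.99 flags.
# The column order is assumed; verify it against the file header first.
import numpy as np

data = np.loadtxt("mlo_hourly_2006.txt", usecols=(0, 1, 2, 3, 4))
month, co2 = data[:, 1], data[:, 4]

valid = co2 > -999.0                        # drop the -999.99 missing-data flag
for m in range(1, 13):
    sel = valid & (month == m)
    missing = np.sum(~valid & (month == m))
    mean = co2[sel].mean() if sel.any() else float("nan")
    print(f"month {m:2d}: mean = {mean:7.2f} ppm, missing hours = {missing}")
```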
Perhaps someone could start a campaign to gather donations so that Dr Tans can purchase some better equipment, so this doesn’t happen again. We’re not talking about millions of dollars here.
Back when I was at school, we used to say:
“My dog ate it!”
Today they say:
“My harddisk crashed!”
Regarding Dr Tans’ comments on carbon markets: why are they not called Oxygen Markets, seeing that there are two oxygen atoms for every carbon one?
Deepslope and Bob,
I have had a lot of discussion with Ernst Beck about his historical data.
My conclusions:
Some historical measurements were made with equipment accurate to +/- 150 ppmv, most were within +/- 10 ppmv. Mauna Loa measurements are accurate to about +/- 0.1 ppmv.
Further, NEVER use measurements in the middle of forests or towns or anywhere near huge sources/sinks, unless you are interested in biological uptake/release of CO2 or car exhausts. That is the main problem with the historical data: only a few were taken at places without huge local sources/sinks in the neighbourhood.
See “where to measure” on my pages:
http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
The South Pole is the best place to measure (more or less) global CO2 levels, followed by other places, including Mauna Loa. But as Mauna Loa has the longest continuous historical record, these data are used as the reference.
So many adjustments, Anthony. So many pieces of climate-related measuring equipment are found to be biased or inaccurate as soon as they produce any data which disagrees with the computer climate models.
3000 Argo buoys are biased. EXpendable BathyThermograph (XBT) data is biased. The satellite temperatures are biased. The historic land-based temperatures apparently need adjusting downwards periodically, and the current temperature records get adjusted upwards because they are biased. CO2 data discs crash, so the reports are biased.
You can only sympathise with climate scientists when they work with such error-prone equipment.
Max
Hard drive crash, 10 days of lost data, and the crash was not noticed for 10 days; rather odd.
Recently I inadvertently double-formatted the wrong hard drive. All my AGW files were on that drive. I ran a recovery program on my second drive and recovered every single file on the formatted drive. The program used was just a bog-standard recovery program.
Anthony, perhaps you should take a little “working vacation” to Mauna Loa and take a camera to photograph the equipment that is used there, and do a report on how Dr. Tans and his staff measure CO2 at that location. If Dr. Tans is honest about the substandard equipment that he says limits his operation, then this report can be used as a basis for demanding that Congress provide sufficient resources for the accurate measurement and retention of data. If Dr. Tans is being less than forthright about the reason for his data manipulation, he should be exposed.
Stan, darn you to Haiti!
Harsh, B.C., really harsh! LOL.
Is this the same Pieter Tans from Boulder who associates himself with the 9/11Truth movement?
Steve, according to WhitePages.com, there is only one Pieter Tans in Boulder, CO.
Ferdinand Engelbeen: (02:26:17)
” See “where to measure” on my pages:
http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html ”
I had a quick look at that site - a valuable contribution to these discussions, especially using the various isotope ratios to establish origin.
Will study it more carefully
Adjustments. Didn’t Ptolemy call them epicycles?
Anthony,
This is BS and we are letting them off way too easy. Dr Tans states “There are three independent backups of the MLO record: We also take
flask samples at MLO analyzed here in Boulder, and at Scripps Ralph
Keeling is continuing the record his father started with both continuous
analyzer measurements as well as flask samples taken at MLO and analyzed
at Scripps.”
If this is true, then why would they ever have missing data? Simply take the flasks, rerun them for the CO2 reading, and keep the record whole and intact, period. No more questionable algorithms and smoothings, no more BS, just the data. If it causes more work for them, too bad, so sad; if equipment I work on goes down, I know I’ll have some long days and nights ahead.
This should be insisted on.
Hum
Steve N.
Boulder by way of Berkeley. Bio:
http://www.climatescience.gov/Library/sap/sap2-2/sap2-2prospectus-final-tans.htm
>Instruments tend to stay at a given station for a long
>time. We are not allowed to lobby for additional support, as federal
>scientists.
Working at a federal govt. scientific facility (wxsat operations), we are funded from a federal budget and we have to submit a budget request prior to new budgets being approved. He may not be allowed to ‘lobby’ for additional support, but I would think they could request the funds for up-to-date systems (especially computers, as cheap as they are today) with any annual budget request. You don’t get it if you don’t request it. I wouldn’t think a facility like that would be funded through annual grants from the NSF (National Science Foundation) or the like. Seems odd…
Jeff K
DSP (23:18:58) :
Yeah, but a single lawsuit involving a large power plant and the CO2 market will be millions, and any doubt cast upon a questionable data source could make for some interesting times.
Jim Hansen seems to have Congress’ ear; he’d be a good one to help free up federal (i.e. our) money for upgrades and duplication.
I am wondering if Al Gore can spare a bit of pocket change for Pieter Tans and his beleaguered lab on the forgotten wastes of Mauna Loa. While Tans does a yeoman’s task of monitoring CO2, Gore jets around the globe giving speeches, buying yachts, and making tens of millions trading carbon credits, based upon Tans’ data. Time to man up, Mr. Gore.
[...] readers may recall our conversations this week on the hiccup in CO2 data from the Mauna Loa Observatory. I’m pleased to announce that I received this encouraging [...]
How about a shorter post next time?
REPLY: Sorry you had to read so much, try this one -
http://wattsupwiththat.wordpress.com/2008/08/07/mauna-loa-to-improve-the-co2-data/
Stan Needham (19:25:48)
Stan, I think you’re on to something. When a scientist starts mentioning “carbon markets” a red flag should go up immediately. Where is the grant money coming from and going to?
Hi All,
Regardless of where we / you / I think it may / might go…
Raw data, change logs, fantastic result Anthony.
I hope your questions you end with are answered.
I doubt I am the only one awaiting with bated breath…
Regarding “the raw data is available”… in the past I’ve seen it posted that the files are too large to download on a normal PC.
I think I’ll go ask if the links to the raw data are now any better/different…
Unless anyone here knows better of course.
I noticed the rejection of my previous post. Perhaps an explanation of vog in Hawaii would help:
http://www.kitv.com/news/16850573/detail.html
http://www.msnbc.msn.com/id/24471365/
Either way congratulations to both Dr. Tans and Anthony for providing a clear and balanced resolution of this problem.
Well done, Anthony. Dr. Tans stated that he was not much of a blogger, but I think he will be a regular visitor here from now on; well, let’s hope so anyway.
Thanks