Real Climate has published something of a review of Montford’s book. When I read it (yes, I do read Real Climate, as much as I can – that is, I tend to have enough time, I just find it difficult to digest nonsense) it occurred to me that it was a perfect example of bad criticism, and thus – as I’ve mentioned before – something which I do think falls into an area where I have a little expertise.
I’m only going to explore one item in Tamino’s review; for more substantial responses go here and here; in brief, Tamino doesn’t engage with the substantive argument. No change there then.
First, it will be worth summarising one of the arguments that Montford makes. Rather helpfully for the statistically challenged, like myself, Montford takes time to explain what Principal Components analysis (PC analysis) actually does: it sifts raw statistical data in order to extract significant information (notable patterns). Crucially, each ‘sifting’ extracts less useful information than the last, so PC1 is very useful but each successive PC is less so. Montford: “while the PC1 might explain 60% of the total variance, by the time you get to PC4, you might be talking about only 6 or 7%. In other words, the PC4 is not telling you much of any significance at all”. Montford uses this very helpful analogy:
“The PCs are often described as being like the shadow cast by a three-dimensional object. Imagine you are holding an object, say a comb, up to the sunlight, and it is casting a shadow on the table in front of you. There are lots of ways you could hold the comb, each of which would cast a different shadow onto the table, but the one which tells you the most about the object is when you expose the face of the comb to the light. When you do this, the sun passes between the teeth and you can see all the individual points. You can tell from the shadow that what is being held up is a comb. This shadow is analogous to the first PC. Now rotate the comb through a right angle, so that you are pointing the long edge of the comb to the sun. If you do this, the shadow cast is just a long thin line. You can see from the shadow that you are holding a long thin object, but it could be just about anything. This would be the second PC. It tells us something about the object, but not as much as the first PC. You can rotate through a right angle again and let the sunlight fall on the short edge of the comb. Here the shadow is almost meaningless. You can tell that something is being held up, but it’s impossible to draw any meaningful conclusions from it. This then, is the third PC.”
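For anyone who prefers numbers to shadows, here is a rough sketch in Python of the same idea – the data, sizes and percentages are entirely made up for illustration (this is not anyone’s actual proxy network), but it shows how the explained variance falls away with each successive PC:

```python
import numpy as np

# Made-up "proxy network": 70 noisy series sharing one underlying signal,
# so the first PC should dominate. Purely illustrative numbers.
rng = np.random.default_rng(0)
n_years, n_series = 600, 70
signal = rng.normal(size=n_years).cumsum()                # one shared pattern
data = np.outer(signal, rng.uniform(0.5, 1.5, n_series))  # each series scales it
data += rng.normal(scale=5.0, size=(n_years, n_series))   # plus independent noise

# Standard PCA: centre each series on its full-period mean, then
# eigen-decompose the covariance matrix.
centred = data - data.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(centred, rowvar=False))[::-1]  # largest first
explained = eigvals / eigvals.sum()

for k in range(5):
    print(f"PC{k+1} explains {explained[k]:.1%} of the variance")
# The printout drops off sharply: PC1 carries most of the information,
# and by PC4 or PC5 there is very little left -- Montford's comb in numbers.
```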
This is how Tamino ‘responds’:
Principal Components
For instance: one of the proxy series used as far back as the year 1400 was NOAMERPC1, the 1st “principal component” (PC1) used to represent patterns in a series of 70 tree-ring data sets from North America; this proxy series strongly resembles a hockey stick. McIntyre & McKitrick (hereafter called “MM”) claimed that the PCA used by MBH98 wasn’t valid because they had used a different “centering” convention than is customary. It’s customary to subtract the average value from each data series as the first step of computing PCA, but MBH98 had subtracted the average value during the 20th century. When MM applied PCA to the North American tree-ring series but centered the data in the usual way, then retained 2 PC series just as MBH98 had, lo and behold — the hockey-stick-shaped PC wasn’t among them! One hockey stick gone.
Or so they claimed. In fact the hockey-stick shaped PC was still there, but it was no longer the strongest PC (PC1), it was now only 4th-strongest (PC4). This raises the question, how many PCs should be included from such an analysis? MBH98 had originally included two PC series from this analysis because that’s the number indicated by a standard “selection rule” for PC analysis (read about it here).
MM used the standard centering convention, but applied no selection rule — they just imitated MBH98 by including 2 PC series, and since the hockey stick wasn’t one of those 2, that was good enough for them. But applying the standard selection rules to the PCA analysis of MM indicates that you should include five PC series, and the hockey-stick shaped PC is among them (at #4). Whether you use the MBH98 non-standard centering, or standard centering, the hockey-stick shaped PC must still be included in the analysis.
[…snip…]
The truth is that whichever version of PCA you use, the hockey-stick shaped PC is one of the statistically significant patterns. There’s a reason for that: the hockey-stick shaped pattern is in the data, and it’s not just noise, it’s signal. Montford’s book makes it obvious that MM actually do have a selection rule of their own devising: if it looks like a hockey stick, get rid of it.
So – Tamino’s argument is that because the hockey-stick shape emerges with the fourth ‘cut’ it still counts as statistically significant. Although he accepts that the standard convention is to use just two passes (= PC1 and PC2) he goes on to say “applying the standard selection rules to the PCA analysis of MM indicates that you should include five PC series, and the hockey-stick shaped PC is among them (at #4)”. (Please shout if I’ve misunderstood the substantive point that Tamino is making here.)
Can people see why I find this an inadequate response to Montford? Montford explains PC analysis at length, and a significant element of the argument is that the #4 cut doesn’t give useful data. Tamino at first accepts this (with a link expanding the acceptance) but then seems to go back on himself by simply asserting that five series should be included, and that the hockey-stick shape (#4) is significant. Why? Where is the argument for this?
There are ways in which Montford could be shot down here – and I would imagine that a competent statistician, familiar with these issues, could do it quite swiftly _if_ Montford is wrong. My point is a broader one – purely as a matter of rhetoric, Montford has the more compelling argument. He makes a point and explains it in detail – I understand the argument that Montford is making and it seems coherent. Tamino’s response is very different: in effect it is merely an assertion, which we are to take ‘on authority’. As the authority of the realclimate site is – for me – completely shot, the argument falls.
If there is another place where realclimate defends the statistical usefulness of a PC4 analysis, I’d be interested to read it.
Principal Component Analysis is a means by which a set of multi-dimensional data is reduced to a smaller number of fixed patterns with varying weights. The classic focus for principal component analysis in climatology is a two-dimensional spatial dataset with time information. At the end of the analysis, you get a number of spatial patterns with time series of weights.
Imagine you did the same thing for Church of England Sunday attendance figures for as long as they had been kept. One of the principal components might show attendance declines in rural and urban areas relative to suburban ones. Another might show positive trends among churches that used the Alpha course. Just discussing the idea is tempting me to do the analysis if I could find the data and properly grid it. Each of these components is also associated with a number that tells you how much of the variance from the mean (a squared difference, so the sign of the difference doesn’t matter) is captured by that component.
Montford pretty much explains this when he says, “while the PC1 might explain 60% of the total variance, by the time you get to PC4, you might be talking about only 6 or 7%.” However, the example trend of the decline in explained variance is nicely cherry-picked. The first component could explain 25% of the variance, the next component 20%, the next component 15%, and the next component 10%. The nature of the dataset is important. Stratospheric temperature and ozone data can be broken into a relatively small number of principal components with high explained variance. So can ocean data in my experience. Tropospheric data generally has more components with 10% or so variance, so the first PC can explain only a few percent more than the fourth PC.
Therefore, what matters is not how much variance is explained but whether the variance explained by a given component is indistinguishable from the variance explained by a random spatial pattern correlated with the data. Tamino’s claim, which you read as an appeal to authority, is that the fourth PC in MM’s analysis explained more variance than a random spatial pattern would most of the time. Probably, he means the odds were worse than 10:1 or 20:1. If Tamino claims there is some sort of standard convention like MM describes, it’s not one I would admit as a reviewer. I know there’s a statistical test for PCA.
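To make that concrete, here is a rough sketch of the sort of Monte Carlo benchmark I have in mind – my own toy construction, not the procedure used by Tamino, MM or MBH98, and it assumes white rather than red noise:

```python
import numpy as np

def significant_pcs(data, n_trials=200, quantile=0.95, seed=0):
    """Toy Monte Carlo benchmark: flag PC k as 'significant' if it explains
    more variance than the k-th PC of pure noise does in `quantile` of the
    trials. A sketch of the idea only -- not the exact published test, and
    white noise (rather than red noise fitted to the data) is assumed."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    n_obs, n_var = data.shape

    def explained_variance(x):
        x = x - x.mean(axis=0)
        eig = np.linalg.eigvalsh(np.cov(x, rowvar=False))[::-1]  # largest first
        return eig / eig.sum()

    real = explained_variance(data)
    noise = np.array([explained_variance(rng.normal(size=(n_obs, n_var)))
                      for _ in range(n_trials)])
    threshold = np.quantile(noise, quantile, axis=0)  # per-rank noise benchmark
    return np.where(real > threshold)[0] + 1          # 1-based PC numbers

# e.g. significant_pcs(proxy_matrix) might return array([1, 2, 4]) -- i.e. a
# low-ranked PC can still beat the noise benchmark despite its small share
# of the total variance.
```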
Tamino’s other critique is that the data is improperly centered or pivoted. MM removed the 20th century mean rather than the data series mean. If MM had not removed any mean, the first principal component would be the average temperature, which has a nice latitudinal gradient that explains a great deal of variance. I have no idea what removing the 20th century mean does, but it would tend to distort any rule of thumb, based on canonically centered data, about how many PCs you should take.
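Here is a toy illustration of what the two centering conventions do mechanically – the series and dates are made up, and the 1902–1980 calibration window is my recollection of MBH98 rather than anything checked; nothing here settles whether the short centering matters statistically:

```python
import numpy as np

# Made-up annual series for 1400-1980, purely for illustration.
rng = np.random.default_rng(1)
years = np.arange(1400, 1981)
series = rng.normal(size=years.size).cumsum()

# Conventional centering: subtract the mean of the whole record.
full_centered = series - series.mean()

# "Short" centering: subtract the mean of the 20th-century calibration
# window only (1902-1980 in MBH98, if I recall correctly).
calib = (years >= 1902) & (years <= 1980)
short_centered = series - series[calib].mean()

# The two versions differ only by a constant offset, but a series whose
# 20th-century mean sits far from its long-term mean keeps that offset
# along its whole length -- which is what the argument about short
# centering favouring hockey-stick shapes turns on.
print(series.mean() - series[calib].mean())   # the offset between the two
```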
In the end, though, the centering is irrelevant, provided that the statistical test used to estimate the level at which explained variance has a high probability of arising at random accounts for it. It’s that statistical test that should be used, not any rule of thumb.
Thanks Caelius, that’s very helpful. I avoided talking about the centring because that’s one of the major bones of contention (I’m persuaded by Montford/McIntyre though). However, the main point in my post stands I think – Tamino doesn’t do more than make the assertion in the review (if someone knows of this issue being discussed in more detail at realclimate please shout). By the way, is the statistical test for PCA ‘R2’?
“By the way, is the statistical test for PCA ‘R2’?”
If you mean something that reads “R-squared”, no. That’s exactly the same as percentage of variance explained.
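For what it’s worth, a quick toy check of that equivalence (my own sketch, nothing more): reconstruct the data from a single PC and compare the R-squared of that reconstruction with the explained-variance fraction of the same component.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # toy correlated data
centered = data - data.mean(axis=0)

# Full PCA via the singular value decomposition.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)          # fraction of variance per component

# Reconstruct the data using PC1 alone and compute the R-squared of the fit.
recon = np.outer(u[:, 0] * s[0], vt[0])
r_squared = 1 - np.sum((centered - recon)**2) / np.sum(centered**2)

print(explained[0], r_squared)           # the two numbers are identical
```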
I think you should be able to access: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0493(1982)110%3C0001%3AASTFPC%3E2.0.CO%3B2
Or just google: A Significance Test for Principal Components Applied to a Cyclone Climatology, which outlines a plausible and well-cited method. I’m afraid tests like this are not used as much as they should be.
There are ways in which Montford could be shot down here – and I would imagine that a competent statistician, familiar with these issues, could do it quite swiftly _if_ Montford is wrong.
The Wegman Review showed that M&M were correct and that Mann, Bradley and Hughes were wrong. In what I found to be a devastating critique, the statisticians claimed that the palaeoclimatology community seemed to be unable to use advanced statistical techniques properly. Of course the committee could have been wrong and MBH could be well versed in statistical techniques. But if that is the case then they are guilty of cherry picking or outright academic fraud.
It does not help the AGW proponents when the papers that they hold in such esteem are full of simple and obvious errors that should have been caught by a competent review. We have seen proxies used upside down, proxies not corresponding to the proper areas, proxy samples that do not reflect the entire series, algorithms that turn random red noise into hockey sticks, etc. From what I can see, the debate is about over and the shortcomings of the warming literature are becoming apparent. Montford has written a great book that has exposed the AGW myth in ways that are easily understood by those willing to pay attention.
///Montford has written a great book that has exposed the AGW myth in ways that are easily understood to those willing to pay attention. ///
See, this is where I find Denialists so appalling. They use the same tactics as 9/11 “Troofers”. They congratulate an ‘attentive audience’, and by doing so take the first step in deceit. The sneaky lie that we fall for is that we are ‘smart enough’ to understand the arguments. A prepackaged bunch of lies are presented gift wrapped in the slogan, “You’re a logical person, why don’t you tell me what you see here? This is the hole in the Pentagon, and THIS is how wide a 737 jet is. Why didn’t the wings break these windows?” Wow. Maybe it WAS a missile that hit the Pentagon on 9/11. Except I’m NOT an expert in the physics of how a plane crashes into solid concrete objects, and so I don’t know the rest of the details. I don’t know that the wings of this jet were pushed back along the fuselage as the concrete folded them back, and that’s why the windows were left intact.
Many of us don’t have the kind of math to combat any misinformation put to us by Montford and his Denialist friends, nor do we have the general knowledge to combat the assumptions behind the book. For example, is the PRIMARY foundation of AGW past statistical trends, or the DEMONSTRABLE, TESTABLE physics and behaviour of CO2 in a lab?
In my experience just 30 minutes or so of reading usually nails any new Denialist claims. There are just too many strawman arguments and outright lies to take them seriously any more. Some people just seem wired to fight AGW, as if every Climate Scientist is actually a Communist hiding under the bed; as if McCarthyism were still in vogue. There comes a point in combating the peer-review process when ALL science is suddenly held suspect, and the Denialist wakes up fighting almost every news report, every paper, every politician. That’s when the Denialist has really crossed over from reasonable debate into stubborn ideology.
And it’s sad that so many otherwise reasonable Christians follow them. I can understand raving Creationists also condemning the peer-review process at this point; what’s new? But for people who otherwise respect science as ‘thinking God’s thoughts after him’, Denialism is just wrong. As Dr Andrew Cameron (ethics lecturer at Moore College) asks, “How sceptical is too sceptical?”
Eclipsenow (David?) – was this actually engaging with what I had written?