Thursday, September 25, 2014

ClimateBall at Climate Audit

There's a post at Climate Audit on Kevin O'Neill's comments exposing aspects of the Wegman report. I would like to respond there, but am currently not able to. All my comments go to spam, and at CA, they don't re-emerge.

I'll say a little about this situation. It affects my interaction with all Wordpress blogs. Last month I was temporarily banned at WUWT, in circumstances I describe here. The mechanism is that I was designated a spammer, and my comments went to spam. After a week or so, I tried commenting again, but same result. This apparently was picked up by Akismet, and my comments at CA started going into moderation, then into spam. Same at other Wordpress blogs.

I can comment using my Twitter ID, but CA does not allow that. WUWT nominally does, but my comment was removed because Twitter substitutes my Twitter address for the email address. So I'm shut out there too.

Anyway, back to CA. Back in 2010, DeepClimate noted some strange features of the Wegman report. There was much plagiarism, but the statistics also had some very odd features. One concerned the trumpeted claim that Mann's algorithm would create hockey sticks out of red noise input. Wegman showed a dozen profiles generated by red noise. He said in the caption to Fig 4.4:

"One of the most compelling illustrations that McIntyre and McKitrick have produced is created by feeding red noise [AR(1) with parameter = 0.2] into the MBH algorithm. The AR(1) process is a stationary process meaning that it should not exhibit any long-term trend. The MBH98 algorithm found ‘hockey stick’ trend in each of the independent replications."

As DC found, what they had actually done, using M&M's code, was to do 10000 runs with red noise input, select the top 100 by hockey stick index, and then select randomly from that 100. I described the consequences of this here. I showed, inter alia, that selecting that way gave hockey sticks whether you used Mann's off-centre PCA or centered PCA.
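To make the selection step concrete, here is a minimal R sketch - mine, not M&M's code - with plain AR(1) noise standing in for the PC1s of the MBH emulation (the point survives the simplification, since the sort guarantees the shape whatever generated the noise):

# Sketch of the 1%-selection step alone, on plain AR(1) red noise
set.seed(1)
n <- 581                        # years 1400-1980, as in MM05
nsim <- 10000
sims <- replicate(nsim, as.numeric(arima.sim(list(ar = 0.2), n)))

# Hockey stick index: mean of the closing segment (1902-1980) minus the
# whole-series mean, in units of the series standard deviation
hsi <- apply(sims, 2, function(x) (mean(x[503:581]) - mean(x)) / sd(x))

# Keep only the top 100 of 10000 by HSI, then draw 12 at random
top100 <- sims[, order(hsi, decreasing = TRUE)[1:100]]
picks  <- top100[, sample(100, 12)]

# Every panel comes out as an upright hockey stick, by construction
matplot(picks, type = "l", lty = 1, col = 1, ylab = "")

Because the sort is on the signed index, the top 1% are upright hockey sticks regardless of the PCA used; no re-orientation step is needed, and none appears in the code.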

Brandon Shollenberger responded by trying to move the goalposts. The selection by HS index used by Wegman had the incidental effect of orienting the profiles. That's how DC noticed it; even if Mann's algorithm did what Wegman claimed, the profiles should have shown both up and down shapes. Brandon demanded that, having removed the artificial selection, I should somehow tamper with the results to regenerate the uniformity of sign, even though many profiles had no HS shape on which to base such a reorientation. And so we see the pea moving; it's now supposed to be all about how Wegman shifted the signs. It isn't; it's all about how HS's were artificially selected. More recent stuff here.

So now Steve McIntyre at CA is taking the same line - that bloggers are complaining about sign selection: "While I’ve started with O’Neill’s allegation of deception and “real fraud” related to sign selection,...". No, sign selection is just the telltale giveaway. The issue is hockey-stick selection: 100 out of 10000, by HS index.

Update. It seems that if I disown my WP id and change my name slightly, I advance at CA from the spam bin to the moderation queue (probably as a first-time commenter). That can be a long wait too, but we'll see.
Update. In comments, Rachel from "Engineering Happiness" made a helpful suggestion about contacting Akismet. I followed advice, and someone emailed me. Not solved yet, but we're working on it. Thanks, Rachel.

70 comments:

  1. Notice the wording, Nick:

    > “[T]he last 100 values [was] higher on average than the remainder” (a positive HSI): [...]

Somehow, I don't think it's the same "100" as the one you were talking about, Nick.

    ReplyDelete
  2. Nick, sorry about that stupid moderation issue. This is a very bizarre problem, and Wordpress needs to come up with a solution to this. Is there anything we can do to help get this resolved?

    I don't think Brandon was moving a goalpost, I think he was just addressing a point you didn't want to address. That's a different thing.

You should probably mention that DeepClimate gave a precise description of how to reproduce Fig. 4.4. I haven't verified his description, but I'm willing to accept it as accurate; it amounts to this code:


hockeysticks <- read.table(file.path(url.source,
    "2004GL021750-hockeysticks.txt"), sep = "\t", skip = 1)
postscript(file.path(url.source, "hockeysticks.eps"),
    width = 8, height = 11,
    horizontal = FALSE, onefile = FALSE, paper = "special",
    family = "Helvetica", bg = "white", pointsize = 8)

# 12 panels: 6 rows by 2 columns
nf <- layout(array(1:12, dim = c(6, 2)), heights = c(1.1, rep(1, 4), 1.1))
...
# a random draw of 12 from the 100 archived series, commented out
# in favour of a hard-coded selection:
#index <- sample(100, 12)
index <- c(35, 14, 46, 100, 91, 81, 49, 4, 72, 54, 33)

plot(hockeysticks[, index[1]], axes = FALSE, type = "l", ylab = "",
    font.lab = 2, ylim = c(-.1, .03))
    I should point out that McIntyre's original code produced this figure. I do not know, without McIntyre explaining it (if he's explained it, I've missed the explanation), why he was generating this figure.

But whatever its purpose, I am almost certain that Wegman did not know how this figure was generated. I think he thought it was typical.

But this doesn't have any deep meaning anyway. It's just a throwaway figure generated by McIntyre's code.

If there is fraud here, it is in Wegman's misrepresentation of the amount of due diligence on his own part. As I mentioned on Brandon's post, I think this is a reasonable criticism of Wegman and his report. Wegman made certain claims about the level of scholarship in this document that do not stand up at all well to the light.

    So publishing a figure generated by somebody else's software isn't scientific fraud, unless you are intentionally misleading people about what it means. Since I think the odds are very high that Wegman had no idea what the figure really meant, it's incompetence you are looking at, but that is all.

I'm willing to accept that, within his area of expertise, Wegman is a very bright and competent professional. I think the issues look deceptively simple, but aren't really, and to make matters worse, there are people who want to play ClimateBall in the same room as people who just want to know what the truth is.

    Now the point that Brandon was addressing, and he's absolutely correct, is that the sign of the proxies doesn't matter for the reconstruction.

This means that if you want to visualize how the algorithm will work, you need to orient them the same way the algorithm will.

    This isn't a big deal, but it is a valid point. And I obviously don't see this as "moving goal posts", though like you, I think this should have been mentioned in the figure caption (though that requires Wegman to know what he was plotting).

    The question of how typical these hockey stick shapes are in the PC1s calculated using emulations of Mann's method is better addressed by McIntyre's Figure 2 of his 2005 GRL paper. I think the answer is "fairly typical".

    Hopefully McIntyre will be able to post on it.

    ReplyDelete
    Replies
    1. Carrick,
      "Wordpress needs to come up with a solution to this. Is there anything we can do to help get this resolved?"

Thanks. Wordpress and Akismet don't care. Their customers are the blog owners. There is basically no mechanism for commenters to communicate with them. Wordpress has a forum, but the response was: "This topic is closed for responses."

I tried a new account, but that didn't last long. It looks like my name is part of the blacklist, so losing a k and being anonymous seems to work, after a fashion. I had been hoping that Akismet would have a time limit on its ban, but apparently not.

      Delete
But this doesn't have any deep meaning anyway. It's just a throwaway figure generated by McIntyre's code.

      This seems strangely uncurious.

      Steve McIntyre apparently wrote code to generate 10,000 simulated "random" time-series and then throw out the 99% that didn't have a certain shape.

      That, it seems to me, is a very bizarre thing to do. In fact, I'm having trouble coming up with any non-dishonest reason for doing such a thing.

      If he needed time series of a certain shape for some bizarre testing purpose, he could just draw them that way. Why include the "randomness" and then discard all the random outcomes that don't match his preferred one specific shape? That kind of obliterates the point of using randomness, doesn't it?

      One hypothetical reason for doing that would be to portray the results as being "random" but without them actually being random. If there is any other plausible reason for doing it, I'm having trouble coming up with it. Maybe I just have insufficient imagination.

      Ned W.

      Delete
    3. "Steve McIntyre apparently wrote code to generate 10,000 simulated "random" time-series and then throw out the 99% that didn't have a certain shape."

      Yes, that puzzled me too. What is the legitimate point of sub-sampling from random data runs?

The 10000 are a by-product of what was done for Fig 4.2 (or Fig 2 in MM05). But I still can't see any justification for not just taking the first 12.

      Delete
    4. This has prompted me to look back at MM05. One thing I just noticed is that they say:

      a sample of 100 simulated ‘‘hockey sticks’’ [...] are provided in the auxiliary material

      and the next sentence talks about the 10,000 simulations.

      So is the "sample of 100 simulated hockey sticks" that they included in the SI a truly random sample? Or is it the same 100 generated by their 1% selection process that were the basis for Wegman's Figure 4.4? I would assume and hope the former. If it's the latter, I would find that very disturbing, enough to radically alter my opinion of McIntyre. The readme in the SI doesn't explain the process used to select their "sample of 100".

      Then there's Figure 1 from MM05. MM don't claim that Figure 1 is a *representative* example. But what's the value in generating an arbitrarily large number of random series and then showing the one series that most strongly confirms your point? To be honest, that rankles me. It smacks of propaganda, not science. If the paper had been sent to me for review, I would have recommended that they either drop Figure 1, or show an evenhanded group of examples (say, the three series at 5%, 50%, and 95% of the distribution on HS index).

      Ned W

      Delete
    5. Sorry for repeated posts. The SI from MM05 is online here: ftp://agu.org/gl/2004GL021750/
      The code that apparently generates the subsample is included in emulation.txt and as best I can tell, it does in fact sort the 10000 simulations by their "hockey-stick-index" prior to exporting the first 100 series.

      And ... DeepClimate already pointed that out. I should have read DeepClimate first and saved myself the time and embarrassment. Probably lots of other people have already pointed this out, and I'm just way behind on this. Argh.

      So I guess it's not just Wegman's figure 4.4 that was produced via this dodgy selection process. The simulation time-series posted in the SI on AGU's website, accompanying MM05, were also produced this way. Yikes. I find that quite disturbing. I'll be curious to see how McIntyre addresses this, since he says an explanation will be forthcoming.

      Ned W

      Delete
    6. Ned
      "So is the "sample of 100 simulated hockey sticks" that they included in the SI a truly random sample? Or is it the same 100 generated by their 1% selection process that were the basis for Wegman's Figure 4.4?"

      It's the latter. DeepClimate has set it out in detail; search for the part beginning:

      "However, the more interesting question is this: Exactly how was this sample of 100 hockey stick PC1s selected from the 10,000? That too is answered in the script code."

The plots in Wegman's graphs actually come from that set. He may have thought he was generating new data, but his version simply copied from that file. As DC identified, Fig 1 in MM05 and Fig 4.1 in the Wegman report are just #71 in that file of 100. And the panels of Wegman's Fig 4.4 are numbers 35,14,46,100,91,81,49,4,72,54,33. They are not independently calculated. They are taken from the file of 1% selectees.

      Delete
    7. Thanks, Nick. I realized after posting that (a) yes, the 100 "samples" posted in the Supplementary Information accompanying MM05 were the same radically biased set; and (b) that DeepClimate had already pointed that out and many other people were aware of it. Clearly I haven't been paying attention.

      A lot of the discussion about this seems to focus on Wegman. But I'm equally bothered by MM05 posting these in their SI on the AGU's site, with no documentation (beyond the code itself) of the rather extreme selection process.

      Ned W.

      Delete
  3. Carrick,
Yes. I think that MM05 GRL was submitted originally to Nature, and in that version it had the equivalent of Fig 4.4 (with 10 profiles). I don't know how it was described, or why it was dropped from the GRL version.

    I agree that Wegman probably didn't know about it. But he should have, if only from the uniformity of orientation. This was Wegman's report to Congress, not M&M's.

    ReplyDelete
  4. I completely agree that Wegman was simply incompetent in his work. I only threw 'fraud' in there because of JC's injudicious use of the term in her post. Hell, if a mistake or use of a suboptimal method is akin to fraud, then we're all frauds.

    ReplyDelete
  5. In McKitrick's "What is the Hockey Stick Debate about?" he says (page 10):

    "In 10,000 repetitions on groups of red noise, we found that a conventional PC algorithm almost never yielded a hockey stick shaped PC1, but the Mann algorithm yielded a pronounced hockey stick-shaped PC1 over 99% of the time." He then shows six of those hockey sticks alongside the "real" hockey stick (figure 7) to show how similar they are. Of course, hockey sticks as dramatic as those weren't produced "99% of the time" but, instead, 1% of the time.

    http://www.uoguelph.ca/~rmckitri/research/McKitrick-hockeystick.pdf

    ReplyDelete
  6. Orientation is one issue. Selection is another issue. Why conflate the two? If you want to show the effect of using a non-random sample independent of the orientation issue, why not show the effect of both issues separately?

    Discussing one issue while displaying the effect of two issues is only going to confuse and/or mislead people.

    ReplyDelete
    Replies
    1. Anon,
      "Orientation is one issue. Selection is another issue."
No, selection is the issue. If you select the top 100 by upright HS appearance, then it is a consequence that you'll get upright HS's. If you don't select, it's not obvious how to do orientation at all. Remember, the HS index is an MM05 creation. I could use it to invert curves of "wrong" sign (there's no provision in the code for doing that), and yes, it would make the curves look more HS-like. It's another way of putting a thumb on the scale to that effect. I could equally define a parabola index, and the curves would then look more like parabolae.
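As a sketch of that (the "parabola index" here is my own ad hoc definition, re-using the sims matrix of AR(1) series from the sketch in the post above):

# Ad hoc "parabola index": correlation of each series with a centred parabola
parab <- (seq_len(581) - 291)^2
pindex <- apply(sims, 2, function(x) cor(x, parab))

# The top 12 of 10000 by this index duly look parabolic
matplot(sims[, order(pindex, decreasing = TRUE)[1:12]],
        type = "l", lty = 1, col = 1, ylab = "")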

      Delete
    2. Suppose they had added, "All series displayed with the same orientation" to the figure. Would you still insist on showing negative hockey sticks?

      Delete
    3. > Orientation is one issue. Selection is another issue.

      I think it's the other way around.

      Selection is Nick's issue.

Orientation is the issue Brandon and the Auditor found so as not to discuss Nick's issue.

      The hint was in Nick's title, so it may have been hard to miss.

      ***

      Incidentally, the Auditor punts with:

      > Over the past year or so, Mann’s “ClimateBall” defenders have taken an increasing interest in trying to vindicate Mannian principal components, the flaws of which Mann himself has never admitted.

      I have no idea to whom the Auditor refers.

      Are you a vindicator of Mannian PCA, by any chance, Nick?

      Delete
    4. Anon,
      "Suppose they had added, "All series displayed with the same orientation" to the figure. Would you still insist on showing negative hockey sticks?"
Well, they'd have to explain what "orientation" meant. The HS index isn't a natural law. It's an M&M creation, and if I did re-orient, it would then fall to me to explain the index and what I was doing.

      Delete
I have no idea what Wegman did, but do realize that if red noise is parameterized as an Ornstein-Uhlenbeck process, the character of the red noise can be anything from a pure random walk (which can drift off to +/- infinity and thus create an apparent hockey stick in either direction) to a tightly bound random up-and-down flipping, which reverts to the mean (no hockey stick there). There is a continuum in between these extremes.
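To illustrate that continuum (a sketch, using AR(1) as the discrete-time analogue of an OU process; the parameter values are arbitrary):

# Character of AR(1) "red noise" across its parameter range:
# phi near 0 is tightly mean-reverting, phi near 1 approaches a random walk
set.seed(2)
par(mfrow = c(3, 1))
for (phi in c(0.2, 0.9, 0.999)) {
  plot(arima.sim(list(ar = phi), 581), ylab = "",
       main = paste("AR(1) with phi =", phi))
}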

Kudos for looking at this, because it must be horrible, like trying to grade poorly reasoned homework.



    ReplyDelete
  8. It might be interesting to modify the code to produce the 1% of "random" time series that are *least* like a hockey stick, and create an alternate version of the figure that way.

    Ned W

    ReplyDelete
  9. The only reason orientation came up was because *I* find it relevant to Wegman's due diligence. Any statistician looking at a 'random' sample would expect results in both directions. Seeing all results with one orientation and doing nothing to investigate the code to see why is an extremely poor and incurious way to go about your business.

    ReplyDelete
  10. Kevin,
    Indeed so. I tried to say that in a comment at CA here.

    ReplyDelete
  11. Nick,

    Please, please, O please stop racehorsing and tell me:

    Are you a vindicator of Mannian PCA, by any chance?

    What about you, Kevin?


    ReplyDelete
  12. This comment has been removed by the author.

    ReplyDelete
  13. Messed up a tag so deleted and reformatted:

Nick: "Orientation is one issue. Selection is another issue."
"No, selection is the issue."


Orientation came up because Kevin O'Neill brought that point up:

    Wegman deceptively displayed only upward-pointing ‘hockey sticks’ – though half of them would have had to be downward pointing.

    And while Wegman in the text acknowledges that half the results will be downward sloping all of his results show upward sloping hockey sticks. Why? Pretty obvious that a downward sloping hockey stick wouldn’t look like MBH.




    Ned W:

    Steve McIntyre apparently wrote code to generate 10,000 simulated "random" time-series and then throw out the 99% that didn't have a certain shape.

    That, it seems to me, is a very bizarre thing to do. In fact, I'm having trouble coming up with any non-dishonest reason for doing such a thing.


Why are you having so much trouble? It seemed really obvious to me, but then I do algorithm development, so the reason for this step is more obvious to me.

    McIntyre and McKitrick developed a metric called the "hockey stick index":

    For convenience, we define the ‘‘hockey stick index’’ of a series as the difference between the mean of the closing sub-segment (here 1902–1980) and the mean of the entire series (typically 1400–1980 in this discussion) in units of the long-term standard deviation (s), and a ‘‘hockey stick shaped’’ series is defined as one having a hockey stick index of at least 1 s. Such series may be either upside-up (i.e., the ‘‘blade’’ trends upwards) or upside-down.

Having developed a new metric, you want to test how good the metric is at characterizing "hockey-stick-ness". So you pick exemplars of hockey sticks and visually screen how effective the metric is at picking them out.
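In code, that definition is something like this (a sketch; the function names are mine, and the index positions assume a 1400-1980 annual series):

# The MM05 "hockey stick index": closing-segment mean minus whole-series
# mean, in units of the series standard deviation
hs_index <- function(x, close = 503:581) {   # 1902-1980 within 1400-1980
  (mean(x[close]) - mean(x)) / sd(x)
}

# MM05 call a series "hockey stick shaped" when |HSI| >= 1;
# the sign says whether the blade points up or down
hs_shape <- function(x) {
  h <- hs_index(x)
  if (abs(h) < 1) "no hockey stick"
  else if (h > 0) "upside-up"
  else "upside-down"
}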
    Ned W:

    It might be interesting to modify the code to produce the 1% of "random" time series that are *least* like a hockey stick, and create an alternate version of the figure that way.

    Why would that be interesting?

If you want to see how typically Mann's algorithm produces hockey-stick PC1s, you histogram the results of his non-centered PCA and compare them to centered PCA, as was done in Figure 2 of McIntyre & McKitrick 2005 GRL:

    http://hiizuru.files.wordpress.com/2014/08/8-6-mm05-figure-2.png?w=758&h=436

    See Wegman Figure 4.2 also.

Histograms are a much more powerful tool for examining the prevalence of large-magnitude hockey sticks than visually plotting curves. The latter is useful more as a check of your metric than as a diagnostic tool.

Curiously, the nearly meaningless Figure 4.4 gets mentioned on Wikipedia, but I think the descriptive text there is wrong. So much for the bias always being in one direction.

Nick Stokes: It's an M&M creation, and if I did re-orient, it would then fall to me to explain the index and what I was doing.
    And I agree with this point. I think it indicates a troubling lack of familiarity with the subject that Wegman wrote so much about (I would move the stick a bit beyond mere incompetency), but maybe I'm just a hard customer.

    ReplyDelete
    Replies
Having developed a new metric, you want to test how good the metric is at characterizing "hockey-stick-ness". So you pick exemplars of hockey sticks and visually screen how effective the metric is at picking them out.
      Well, that's an interesting explanation, but there's certainly nothing remotely resembling it in the text or SI of MM05.

      All they say about the origins of the 100 series is: "a sample of 100 simulated ‘‘hockey sticks’’ [...] are provided in the auxiliary material".

      Do you seriously not think the fact that this "sample" was selected to represent only the closest matches to a particular pattern is worth noting? I cannot imagine providing a "sample" of data as part of the Supplementary Information with a paper of mine, and somehow neglecting to inform the reader that this "sample" was specifically chosen to exclude all the data except the 1% that most closely conformed to the claims I was making. But YMMV, I suppose.

      Ned W.

      Delete
    2. Ned, are we moving goal posts now? Nick doesn't like that. ;-)

McKitrick has admitted this was an error and that the selection process should have been better explained.

But you asked whether there is a credible, legitimate reason for producing a figure like Figure 4.4.

There is, and it's an obvious one.

      Delete
    3. Actually, I'm not convinced that your speculative "legitimate reason" makes much sense.

      Let's assume for the sake of argument that I've just invented a new index to describe some otherwise-qualitative characteristic of time series. And let's say I then generate 10000 random time series, for each of which I calculate that index.

      You're suggesting that I should validate that index by selecting the 1% of the series that have the highest value in that index, and visually checking them to see if they seem to conform to that qualitative pattern.

      Do you really not see the flaw in that experimental design? Would you actually do that?

      In reality, I'm guessing you would also want to look at time-series with low values of your index. Or better yet, look at examples covering the complete range (say, series with indices at the 1st percentile, 25th percentile, 50th percentile, up to 99th percentile).

      Looking only at the cases where your index produces high values is not, in fact, a good way to test the validity of your index. I'm mildly astonished that I need to explain this.

      Ned W

      Delete
    4. In fact, Carrick, your proposed explanation makes so little sense that I'm inclined to rule it out. I'm particularly amused that in the same comment where you proposed it, you expressed surprise at the idea that there might be any reason to try reversing the selection and looking at just the 1% of series with the lowest HSI.

      Your proposed "explanation" would apply equally well (or equally poorly) to both versions. There is no reason why "visual verification that the top 100 values are strongly hockey-stickish" should be preferable to "visual verification that the bottom 100 values are un-hockey-stickish". As tests of the effectiveness of your index, the two approaches are equal. (And equally inadequate on their own).

      Or so it seems to me. I could be wrong.

      Ned W.

      Delete
    5. Ned:


      You're suggesting that I should validate that index by selecting the 1% of the series that have the highest value in that index, and visually checking them to see if they seem to conform to that qualitative pattern.

      Do you really not see the flaw in that experimental design? Would you actually do that?


It's not an experiment. It's part of the validation process for the "hockey stick index".

      In reality, I'm guessing you would also want to look at time-series with low values of your index. Or better yet, look at examples covering the complete range (say, series with indices at the 1st percentile, 25th percentile, 50th percentile, up to 99th percentile).


Certainly one should, but you don't know what McIntyre & McKitrick did or didn't do based on one code snippet.

      Looking only at the cases where your index produces high values is not, in fact, a good way to test the validity of your index.

Again, if you want an index that describes hockey-stick-like features, you need to verify that when the hockey-stick index is high, you only select data segments that have hockey-stick-like features.

      There is no reason why "visual verification that the top 100 values are strongly hockey-stickish" should be preferable to "visual verification that the bottom 100 values are un-hockey-stickish".

      The distribution is symmetric so if you look at the bottom 100, you'll see the same pattern, but flipped.

      Were I to plot representative samples (instead of exemplars, which is what was done here), I would orient all of them with the same sign, based on the same criterion used in MBH. This sign choice amounts to selection by correlation with temperature over the period 1901-1980.

      Delete
    6. The distribution is symmetric so if you look at the bottom 100, you'll see the same pattern, but flipped.

      Not if you look at the bottom 100 absolute values.

      Again, if you want to have an index that describes hockey-stick like features, you need to verify that when the hockey-stick index is high, you only select for data segments that have hockey-stick like features.

      And you also need to verify that when the hockey-stick index (or its absolute value) is low, you only select for data segments that don't have hockey-stick-like features. Otherwise, your index isn't a "hockey-stick index".

      But the SI for MM05 only includes the 1% of cases with the highest value. Basing your proposed "validation" on the sample would be more or less worthless.

      So, no, we still don't have a reasonable explanation for why they produced this absurdly biased sample. And, worse yet, no explanation for their failure to note that bias.

      If I were publishing a "sample" representing only 1% of my data set, with that 1% being the result of a selection process I'd developed to screen out everything except the handful of cases that most closely conformed to my claims, I would feel a certain ethical obligation to point that out to the reader.

      Ned

      Delete
    7. Ned W:

      Not if you look at the bottom 100 absolute values.

What do you think you would learn by doing this? More to the point, is there anything of value in doing this that would make publication of such a figure worthwhile?

      And you also need to verify that when the hockey-stick index (or its absolute value) is low, you only select for data segments that don't have hockey-stick-like features. Otherwise, your index isn't a "hockey-stick index".


You already know that it selects for hockey sticks when there are hockey sticks, using McIntyre's assay; that is, we have a low false-positive error rate. Looking at the other brackets would give us information about the false-negative error rate.

The fact that there is such a good separation between the centered and short-centered PCAs in the histograms is informative here. There is clearly feature selection at play in sorting by HSI.

      This method is much more informative than a visual inspection ever would be.

      But the SI for MM05 only includes the 1% of cases with the highest value. Basing your proposed "validation" on the sample would be more or less worthless.


We can eliminate false negatives using McIntyre's screening process. So "more or less worthless" … not even close.

      The histogram comparison of Figure 2 of the published paper is the key test in any case.

      So, no, we still don't have a reasonable explanation for why they produced this absurdly biased sample.

Yes, actually we do. The fact that you choose not to acknowledge it doesn't mean there isn't one. If you have a legitimate reason for rejecting the explanation, you've totally failed to provide a rational argument for it.

      And, worse yet, no explanation for their failure to note that bias.

      Errors happen. How the authors respond when mistakes are pointed out is at least as important as whether errors happened.

Given the storied history of hiding key results by principal researchers in paleoclimate, I find it a bit ironic that such a non-informative graph has drawn such heat.

      Delete
  14. McIntyre says he will post on the issue that Nick cares about as a separate topic. Figure 2 of M&M 2005 GRL is the figure to look at for that question.

    The short version is Mann's non-standard algorithm really butchers his PC1.

    ReplyDelete
    Replies
    1. I wouldn't be at all surprised if that future thread at CA does aggressively try to focus on what Mann did rather than what McIntyre did.

      But people on other blogs may choose to ask questions and draw conclusions about this, even if Steve wants to talk about something else instead.

      Ned

      Delete
    2. > I wouldn't be at all surprised if that future thread at CA does aggressively try to focus on what Mann did rather than what McIntyre did.

      Seems that the focus has turned on Nick, and on what he did not do:

      http://climateaudit.org/2014/09/27/what-nick-stokes-wouldnt-show-you/

      Delete
  15. What helps for me is not to link to my blog, but to my ID page at wordpress.

    Those historical hockey sticks. The climate "debate" is really something special.

    ReplyDelete
  16. Strike the last "an interpretation".

I might as well clarify that Nick was asking Brandon to quote him making an issue of flipping HSes.

    Hence the moot counterfactual to pull Nick in that blog post.

    ReplyDelete
  17. Nick,

    Mann's method automatically flips upside down hockey sticks.... no reason not to do that with synthetic hockey stick data.

Mann's method selects a few "important" proxy series (ones that are hockey-stick shaped, and so correlate with the instrumental temperature history). The selection rate is 3 or 4 series with meaningful correlation out of ~95. How is that so different from selecting the 1% of synthetic series (just pink noise) which happen to have a hockey stick shape, to demonstrate the potential for bias in Mann's methods?

    Really, I am not getting what you and others are so worked up about here; Mann et al's methods were subject to potential bias, which is what M&M were showing. Could Wegman have been more careful? Sure. But claims of fraud? I don't think so.

    Steve Fitzpatrick

    ReplyDelete
    Replies
    1. Steve,
      "Mann's method automatically flips upside down hockey sticks.... no reason not to do that with synthetic hockey stick data."
No, it doesn't. The correct statement is that the method is indifferent to sign. Another way of saying that is that there is no objective right way - it's a human construct. That's one reason why it isn't easy to comply with Brandon's demand, even if it were reasonable.

      "Mann's method selects a few "important" proxy series"
      This is a generic property of PCA. Mann didn't invent it - Fritts was using PCA back in 1971, and it has been standard since. The 100:1 has nothing to do with the decentering, which is supposed to be what Wegman is illustrating.

      I've responded at CA, but in moderation - I think for quoting the bits of your post that had the same effect.

      Delete
    2. Steve Fitzpatrick, 2014-09-26, 8:05, Australian time:

      > But claims of fraud? I don't think so.

      Kevin O'Neill, 2014-09-25, 11:48:

      > I completely agree that Wegman was simply incompetent in his work. I only threw 'fraud' in there because of JC's injudicious use of the term in her post. Hell, if a mistake or use of a suboptimal method is akin to fraud, then we're all frauds.

      You're welcome.

      Delete
  18. Willard asks: "Are you a vindicator of Mannian PCA, by any chance?"

    I doubt that anything I've ever written can be rightfully called a vindication of Mannian PCA - especially since I'm not sure what that exactly means. Besides, I don't have enough experience with EOFs to have an opinion on the technical merits - or at least not an opinion that anyone should give a damn about. Anything I've written has been written 100 times before by people with far more experience and knowledge than I.

    What irks me are posts like Judith Curry's "Fraudulent(?) hockey stick"
    or Brandon Shollenberger's "Michael Mann Committed Fraud"

    I'll stop there.





    ReplyDelete
    Replies
    1. Thank you, Kevin. Sometimes, it's useful to declare what one is purported to establish. But tribalism and all that jazz.

      Speaking of what irks you, here's the first comment by Brandon, on 2014-09-11 at 15:10 EDT (?):

      > We don’t have to call what Michael Mann did fraud if we don’t feel like focusing on the word, but if we are going to focus on the word, what Mann did was fraud.

      http://judithcurry.com/2014/09/11/fraudulent-hockey-stick/#comment-626854

      At 15:49 on the same day, the Auditor responds to this comment:

      > In response to Brandon’s point, [...]

      http://judithcurry.com/2014/09/11/fraudulent-hockey-stick/#comment-626882

      Now, what was Brandon's point again?

      A hint:

      > We don’t have to call what Michael Mann did fraud if we don’t feel like focusing on the word, but if we are going to focus on the word, what Mann did was fraud.

      http://hiizuru.wordpress.com/2014/09/11/michael-mann-committed-fraud/

      The Auditor appears in the comment thread.

      A flag might have been falling.

      Delete
  19. Rachel,
    Someone did contact me today, and we're working on it. Thanks.

    ReplyDelete
  20. If Carrick is an Auditor's sycophant (which I doubt, since Carrick has been fairly critical over the years), does it mean you belong to the class of "Mann’s “ClimateBall” defenders", metzomagic?

    That would be interesting to know, as I thought it was an empty class.

    It might not have been wise to introduce PCA stuff. You may soon discover why.

    Nevertheless, thanks for playing.

    ReplyDelete
    Replies
Personally, I'd like to see the PCA discussion, regardless of where it leads. I would bet Steve Fitzpatrick, to the extent he's interested, would too, regardless of outcome.

      Delete
  21. Whatever else Carrick may be, a sycophant he is not. You have not the faintest notion of what you are talking about.

    Steve Fitzpatrick

    ReplyDelete
  22. Carrick was part of the discussion at Brandon's where BS made a charge similar to SteveM's - that somehow my point #6 is the 'primary' or 'major' point in my original list I made at Climate Etc. I pointed out to BS that the idea was ludicrous. SteveM makes the same error.

It is apparent that BS and SM want to harp on the mathematical irrelevance of the sign (an argument I never made) while ignoring the arguments I did make. We KNOW that Wegman, in putting together the images for Fig 4.4, *never* had to reorient any of them. So that defense - whether it's proper, improper, OK with text explaining, etc., etc., ad nauseam - is MOOT. Wegman DIDN'T reorient them; he only saw 12 upward-sloping images. Carrick may not be a sycophant, for SteveM or anyone else for that matter, but he's avoided answering the question: what are the odds of that?

    As I made mention at Brandon's, I doubt Carrick would ever be so incurious in his own work. I see far murkier examples everyday and ask questions. The sign is a clue to the selection - unless you ignore the sign. If I flipped heads 12 times in a row I'd be telling stories to my grandkids about it. Apparently others just don't see it as unusual.

    ReplyDelete
    Replies
    1. No, I'm not incurious about this.

It's not a complicated problem. I think it's a matter of your personal spite towards McIntyre and Wegman blinding you to reason.

I don't see much point in arguing with irrational people, so you guys can have the floor. But you might want to think about the spectacle of your own behavior here.

      This is your version of professional?

      Delete
Carrick - spite? Really? Since when is pointing out an error in logic, or an error in judgment, due to spite? I'm sure Willard has a name for the ClimateBall move; me, I just think you're on the wrong end of the argument and don't have an answer.

      When I accused Gavin Schmidt of consistently misquoting me and answering arguments I never made - instead of the arguments I *did* make - was I acting out of spite?

      Perhaps I just don't like people misrepresenting the arguments I make. I'll take responsibility for the arguments I make, but have little patience for bad faith actors. Once again you put yourself on that side of the ledger.

      Your accusation of spite is simply wrong. You make a poor psychoanalyst. Keep your day job. And you have the chutzpah to speak of professionalism?

      Delete
BTW, I was intending to write a short letter to Edward Wegman this week to ask if he's ever spoken or written of the aftermath of the Wegman Report, the CDSA paper, and its criticisms and consequences. Frankly, I wasn't even sure if he was still alive. I was genuinely saddened to learn that his wife, I believe, passed away last month. I decided that an unsolicited intrusion bringing up what probably isn't a very happy chapter in his career, so soon after his wife's death, would be needlessly unkind.

Of course, I must be harboring this seething cauldron of spite towards the man. What are the odds of that?

      Delete
    4. From McIntyre's blog:


      Moreover, I think I have a [firm] grasp on why one would select for exemplars from a data set when studying the efficacy of a feature selection algorithm. It’s something I would do myself, even if I wouldn’t regard it as a “primary test”. [Again I think the PC1 histograms are much more informative in that respect.]


      I'm done trying to reason with you. Bye.

      Delete
Brandon Shollenberger September 26, 2014 at 6:07 PM

      I didn't intend to comment here, but I saw this comment by Kevin O'Neill and felt I had to respond. He claims:

      Carrick was part of the discussion at Brandon's where BS made a charge similar to SteveM's - that somehow my point #6 is the 'primary' or 'major' point in my original list I made at Climate Etc. I pointed out to BS that the idea was ludicrous. SteveM makes the same error.

I never said anything like what he claims. I chose to discuss one point he made because it was a point I felt like discussing. He apparently felt the point was important enough to explicitly accuse Wegman of dishonesty over it:

      Wegman deceptively displayed only upward-pointing ‘hockey sticks’ – though half of them would have had to be downward pointing.

      And I felt that accusation merited a response. Nothing about that claims "point #6 is the 'primary' or 'major' point in [his] original list." O'Neill has no basis for claiming I've charged what he claims I've charged.

      Delete
Brandon says now: "I never said anything like what he claims."
Brandon wrote on 14 September 2014: "It’s silly to base an accusation of dishonesty primarily upon a point you yourself admit is irrelevant."

      Of course I had never admitted the point was irrelevant.

Brandon also says now: "He apparently felt the point was important enough to explicitly accuse Wegman of dishonesty over it:"

'It' - singular, again. He is essentially repeating now the same claim he denies having made then.

      The basis is established. Brandon continues to reveal himself as a bad faith actor.

      BTW Brandon, your understanding of tree ring studies is comical :) It matters not that you won't show my comments. You and I both know what I said in them and I'll be sure to make them available to others so they can see :)

      Delete
    7. Kevin,

      Here's how Brandon introduced his choice to discuss "stuff":

      > I decided to write this after reading two comments by the user Kevin O’Neill at Judith Curry’s blog. The comments had a lot of incorrect, and even stupid, stuff in them, but the parts that stood out the most to me were: [...]

Followed by two instances of the same "outward" stuff, in (I believe) the same comments I quoted earlier, which the Auditor quoted in part without linking to them or paying due diligence to what you said at Brandon's, on a comment thread he may have read, since he cited it approvingly.

      Brandon also says:

      > If it was an important enough point for you to bring up in the past, even leveling accusations of dishonesty over, it is certainly a point people are free to discuss.

      http://hiizuru.wordpress.com/2014/09/14/dishonest/#comment-4250

In that instance, Brandon could have been talking "in general", but he was not, since he later reiterated that Nick made your argument, while failing to provide any evidence that he did. As you can see, Brandon considers anything you say and repeat as "important enough".

I don't think the rest of that discussion clarifies anything about this question - a question that was "important enough" for Brandon to come here and set the record straight, going against his policy.

      ***

      Here's how Brandon introduced his choice at Judy's:

      > There’s no way I’m going to try to explain everything intellectual uncurious/dishonest people say about the Wegman Report. I have, however, written a post responding to one of the dumber points Kevin O’Neill wrote. You can find it here.

      http://judithcurry.com/2014/09/11/fraudulent-hockey-stick/#comment-627991

      So Brandon declared he wrote a post against the "dumber points".

Of course, that he addressed one and only one point you made is inconsequential.

Perhaps you presumed that Brandon would choose the point he considers the most important. He simply picked one.

      It was 3 AM, after a dart tournament.

      ***

      Interestingly, here's a recent comment Brandon made on that thread:

      > this is one of the dumber comments I’ve seen you make:

      http://judithcurry.com/2014/09/11/fraudulent-hockey-stick/#comment-632872

      Delete
Re: fraud. At SteveM's, Carrick writes: "That term was used by Kevin O’Neill, who I think has since retracted it."
SteveM asks: "do you have a link to O’Neill’s retraction?"

    Which brings out guffaws from the lone occupant sitting in the peanut gallery. Me.

    SteveM is sitting on my comment at ClimateAudit where I quote myself - you know, the very quote Carrick referred to and SteveM is asking for a link to. I knew there was a reason I've made it a longstanding habit not to visit that place.

    ReplyDelete
    Replies
    1. This isn't an example of spite?

      What is it?

      Delete
    2. Carrick - how does my rarely visiting CA hurt Steve? How does my never enjoying my visits there hurt Steve?

If I were there every day making constant accusations, misquoting him, building strawman arguments, etc. - then you'd have a case. Instead I just avoid the place. Damn, I hope that hurts. It's late; your logic has already gone to sleep.

      Delete
  24. Metzomagic: Mann's de-centered PCA works just fine as long as you keep *all the significant PCs*, as the likes of tamino has ably and simply demonstrated:

    Heh.

    Perhaps you should read the comments to that now-deleted post.

    ReplyDelete
Nick, are *my* comments now going to a spam folder?

    ReplyDelete
  26. Kevin,
Not here. There was one you left a few days ago on the old "selection" post. I liberated that and commented on it a while ago. But I can't find anything now in spam or moderation.

    ReplyDelete
  27. > Perhaps you should read the comments to that now-deleted post.

    Firstly, tamino's post isn't deleted. Something bad happened a few years back to his Wordpress account and he lost all his old posts. But the Wayback Machine still has them.

Secondly, I'm well aware of what one of the fathers of PCA, Ian Jolliffe, had to say about de-centered PCA in response to tamino's post above. To me, that is just an old buck having a go at a young buck, as academics are wont to do. The AGW deniers milked that little exchange to the max, of course. tamino is a stats professional with some incisive peer-reviewed papers under his belt, and well able to stand on his own two feet.

    It is ludicrous that folks like Carrick and Steve Fitzpatrick are here trying to defend McI's 1:100 cherry-pick. It is indefensible. You do 10,000 simulation runs and only show the results of the top 100 of them that are selected to get the exact result you are looking for?! That is so disingenuous that we need a new word for it... oh wait, I know: how about 'fraud'? Meanwhile, I see Nick has another post about de-centered PCA up. I'll respond further up there, because the way McI generated his so-called 'trendless red noise' was fundamentally flawed as well - and for more than one reason.

    ReplyDelete
  28. > It is ludicrous that folks like Carrick and Steve Fitzpatrick are here trying to defend McI's 1:100 cherry-pick.

That's why they don't, Metzomagic. They'd rather "milk" that cow some more if you oblige them, and if you add chocolate, like credentials, into the milk, it will taste even sweeter. Sociologizing Jolliffe's response only adds marshmallows.

The most Carrick or SteveF have done so far is minimize its impact, ironize, or move the goalposts. Carrick already started at Brandon's, and continued today at Steve's.

There's a new thread about PCA right around the corner, if you'd like to show us your kung fu, Metzomagic.

    ReplyDelete
  29. > The most Carrick or SteveF did so far is to minimize its impact, ironize or move the goalposts.

    More like they are pretending not to understand what McI did, because it is downright embarrassing to the skeptic cause, and they wish it would somehow just go away. But it's there in the archived MM05 SI up on the AGU site for all to see:

    ftp://ftp.agu.org/apend/gl/2004GL021750/

    ...though I get a "550 Failed to change directory" error when I try to access it just now. But, no bother. I have all the files on my machine at home, as I'm sure do many others. It would be rather ironic if it was removed as a result of this old wound being re-opened. But, nah. I'm sure it's just either finger trouble on my part, or a temporary server problem :-)

I do admit, willard, that your inimitable Climateball™-styled commentary track does add much-needed levity to these proceedings. Do keep it up. I'm still at work here in jolly ol' Eire. Have to get back to it. Later, then...

    ReplyDelete
  30. OK, nothing nefarious to see here. AGU just re-organised their FTP site a bit, and removed the 'apend' dir. The SI from MM05 is still there:

    ftp://ftp.agu.org/gl/2004GL021750/

See y'all in the more recent thread if time permits.

    ReplyDelete
  31. "Secondly, I'm well aware of what one of the fathers of PCA, Ian Joliffe, had to say about de-centered PCA in response to tamino's post above. To me, that is just an old buck having a go at a young buck, as academics are wont to do. The AGW deniers milked that little exchange to the max, of course. tamino is a stats professional with some incisive peer-reviewed papers under his belt, and well able to stand on his own two feet."

Talk about rewriting history! Tamino actually claimed that Jolliffe's textbook on PCA supported Mann's obscure PCA methodology, and that critics of Mann's PCA were effectively going against known experts and well-understood science. Jolliffe posted, requesting a retraction of the claim, along with statements making it clear that Mann's methodology was specifically not supported by him, nor generally known in the world of PCA. The truth is that Mann et al at that time were scrambling around with statistical methods they did not really understand and were making mistakes using them. MBH98 was published the year Mann got his PhD; he was certainly not at that point an expert in PCA methods. Had his "innovative" PCA methodology really been valid, there would have been a discussion of it in the paper, but there was none. He thought he was using standard PCA.

    ReplyDelete
This whole nitpicking fest is now wholly moot as far as public debate goes, for we now have mathematician Mann exposed widely promoting, without any retraction, the worst example of a fake hockey stick of all time - just *last* year, not merely 16 years ago, back when the temperature outside had spiked, rather than holding steady as it has ever since. The authors created the blade as a pure artifact having nothing at all to do with real temperature, by re-dating some input proxy data to produce a sudden spurious data drop-off at the end, as by now you *all* are perfectly well aware. Without that utterly fake blade it would never have been published in the top journal Science. You are also all aware how the whole hockey stick team promoted this to the media: a coauthor described it, with a swoosh gesture, to NY Times reporter Revkin in an archived video interview as a "super hockey stick."

    http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg

It's a fantastic PR strategy though, to drag propeller-head skeptics into arcane minutiae that bury simpler, exposed fraud under thousands of words of jargon. However, I have posted this graphic exposure of fraud thousands of times now, on news sites and blogs, defeating your clear attempts to ignore it in favor of noisy ancient history. Those who called us climate model skeptics "deniers" now stand starkly exposed as willful supporters of fraud in the quest to genocidally ration world energy supplies. I cannot explain why you have tried to do this, but I do know that it is quite simply evil.

    ReplyDelete
If Wordpress has been told you're a spammer, so all your comments disappear, one thing you can try is to set up your own little Wordpress blog, comment there, and as the blog's owner mark that comment as not spam. This gets you back in Wordpress's good graces, at least for a while, until someone again declares you a spammer.

    Wordpress is definitely a pain in the a**, and their developers don't seem to care about this issue, or that of not being able to delete one's Wordpress ID.

    ReplyDelete
    Replies
    1. David,
      Thanks. Yes, the thought crossed my mind. I've been trying to work out Akismet's system as I bounce around in the doghouse. It seems currently set to heavily discount auguries of innocence.

I did have someone at Akismet contact me, and he said it was fixed. I had to report back, though, that it wasn't. That was near the weekend - we'll see what the new week brings.

      Delete
Actually, no, Wordpress is great; Akismet is flawed. Don't use it. Problem solved.

      Delete
    3. "Don't use it. Problem solved."
      Unfortunately, commenters don't have a choice. Do blog owners?

      Delete