Thursday, September 4, 2014

SST alarmism - seas are warm

I was reading an article by Bob Tisdale at WUWT, titled: "Alarmism Warning – Preliminary Monthly Global Sea Surface Temperatures at Record High Levels".
With explanation: "An “alarmism warning” indicates alarmism is imminent."
And continuing:
"We’re not just talking a record high for the month of August…we’re talking a record high for any month during the satellite era."

Now I'm always anxious to alert readers to imminent alarmism, so I advise reading Bob warily. But I thought I should find out more, and as usual, Bob has an impressive pile of graphs. Here's my take.

Bob breaks it down into regions, emphasising the North Pacific as the standout for warmth, with North Atlantic second. For good detail on that, Moyhu has a WebGL plot (choose your day), and for the N Pacific (and elsewhere), current movies. But I'm more interested in the global SST.

Actually, Bob's plots there are comprehensive. But I'd like to show longer and shorter scales, and a comparison with Hadcrut 4 surface temp and UAH lower troposphere. So here is a composite plot. You can switch between year ranges (1850-now, 1980-now, and 2005-now) and smoothing (none, running annual mean). For all but the longest range, NOAA SST means OI SST, anomaly relative to 1971-2000, and includes August 2014. For the long range, it is this file.

[Interactive plot: selectable year range and smoothing]

I have arbitrarily subtracted 0.2°C from the Hadcrut 4 anomalies, for better plot match. The idea is to show that SST is usually the leading indicator of a change, with GMST lagging, and TLT often the last.
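
For anyone wanting to reproduce that kind of comparison, the offset is just a constant shift applied at plot time. A minimal sketch in Python; the file names and column layout are hypothetical, not the actual data pipeline:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSVs with "year" and "anomaly" columns (not the actual sources).
sst = pd.read_csv("noaa_oi_sst.csv")    # NOAA OI SST, anomaly base 1971-2000
gmst = pd.read_csv("hadcrut4.csv")      # Hadcrut 4 global mean surface temperature
tlt = pd.read_csv("uah_tlt.csv")        # UAH lower troposphere

plt.plot(sst.year, sst.anomaly, label="NOAA SST")
plt.plot(gmst.year, gmst.anomaly - 0.2, label="Hadcrut 4 (shifted -0.2°C)")
plt.plot(tlt.year, tlt.anomaly, label="UAH TLT")
plt.ylabel("Anomaly (°C)")
plt.legend()
plt.show()
```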

In fact, GMST has been quite high (records in May and June) but drifting down lately; TLT (UAH and RSS) has not been very high, and shows little recent rising tendency (UAH went down in August). Details here.

So who knows? SST is certainly high, though. And if El Nino does come...

25 comments:

  1. The great thing about natural variation is the second it stops cooling to near-record warmth, it starts warming to near record warmth.

  2. Thanks for the graph, Nick. This looks like a pretty steady trend over the last 30 years: one would be hard put to declare a "pause".

    On a different matter - off topic, if you'll permit me - I noticed you commented on a particularly bizarre WUWT post by someone called Jean S, suggesting that Mann et al. had applied a "600 year" smoothing procedure to obtain their "hockey stick" back in 1999. "Jean S." provided precious little in the way of detail, on the grounds that to do so would bore his readers. You commented that this was simply not possible, which was my reaction as well - all the wiggles would surely be smoothed out. However, I'm no expert on this. I'd be interested in your further thoughts on this matter.

    1. I thought it was odd that Jean S made such a fuss without publishing an impulse response, so we could see exactly how much of a problem there was. He has now posted the response to a rectangular pulse, and although he has tried to exaggerate it by showing only half the y-axis, it isn't much.

      In filter design, there are always tradeoffs. FIRs have frequency domain sidelobes, so they pick out frequencies, as Mann's curves show. The Hamming window allows a small end discontinuity designed to minimise the first lobe, at the cost of having the other lobes decay more slowly. It looks like Mann has tried to counter that with an IIR correction. It probably works, at the cost of a small effect from endpoints. I think a simple Hamming would be fine, but Mann's filter is probably fine too.
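
      A quick way to see those tradeoffs is to plot the frequency response of a window-based FIR directly. A minimal sketch with scipy, standing in for (not reproducing) Mann's filter; the tap count and cutoff are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Illustrative 51-tap Hamming-window FIR low-pass (not the MBH filter).
taps = signal.firwin(51, cutoff=0.04, window="hamming")

# Magnitude response: a flat pass band, a strongly suppressed first
# sidelobe (the Hamming design goal), and later lobes that decay slowly.
w, h = signal.freqz(taps, worN=2048)
plt.semilogy(w / np.pi, np.abs(h))
plt.xlabel("frequency (fraction of Nyquist)")
plt.ylabel("|H|")
plt.show()
```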

    2. Even if we assume that Jean S is 100% correct in all of his claims, I think the real question is "why does it matter now?"

      Jean S and I had a bit of a kerfuffle on Jeff ID's blog at some point over this. Regardless of the claims and counter claims regarding the smoothing that was actually done, I don't see changing smoothing algorithms as mattering very much.

      It is cringe-worthy and wrong to pad with instrumental data, as Mann did, so discussing how the end-points should be treated is somewhat useful. But I don't see anything wrong with what Mann is doing these days with his Butterworth filter (including the end-point treatment). It'd be marginally interesting to see what happens to the "decline" near the end-point if we use the Butterworth there.

    3. Bill, my guess is it has more to do with Mann's new book than the trial.

      Nick, I don't actually think Jean S is claiming that Mann used a 600-year window. I originally thought so too, and I agree with you that this could not have been what was done in Mann's paper. In his post, Jean S says MBH98 uses a 50-year window. I'm pretty sure what Jean S is describing is just the Hamming window filter method.

      The only issue I have is with the end-effects and the excessive amount of padding needed. This isn't an issue at all if you can discard half the filter length near the end-points (where you're retaining a subsample of a longer recording, the technique is fine).

      Since we're interested in the late 20th century, this does visually affect the result. So as I described it on McIntyre's web site, this is clumsy, but not wrong.

      The newer method, used by Mann since circa 2004, is based on the forward-backward Butterworth filter with end-point reflection for padding, and it greatly reduces the end effects. I like this technique, with some caveats. Bernie Hutchins and I had some back and forth on McIntyre's site that may be of interest.
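
      Incidentally, the forward-backward Butterworth with reflection padding is essentially what scipy's filtfilt does by default. A minimal sketch on synthetic data (not Mann's code):

```python
import numpy as np
from scipy import signal

# Synthetic annual series standing in for a reconstruction.
rng = np.random.default_rng(42)
t = np.arange(600)
x = np.sin(2 * np.pi * t / 70) + 0.5 * rng.standard_normal(t.size)

# Zero-phase smoothing: filter forward, then backward. filtfilt pads the
# ends by (odd) reflection, which is the end-point treatment at issue.
b, a = signal.butter(4, 0.04)  # cutoff of roughly a 50-year period for annual data
smoothed = signal.filtfilt(b, a, x, padtype="odd")
```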

    4. Carrick,
      Yes, I think now that it is basically a 50 yr Hamming with some IIR correction to attenuate the higher frequency-domain side lobes. That's based on the rectangular pulse response that Jean S showed. It has a ringing peak of at most 9%, but that's within the range that would normally be cut off (20 yrs). After that there's a 5% dip and then some quite rapid roll-off. Peak-to-peak about 33 yrs.
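
      The pulse test is easy to redo in outline. A sketch with a stand-in filter; the 9%/5%/33-yr numbers above are read off Jean S's plot, not produced by this code:

```python
import numpy as np
from scipy import signal

# Stand-in Hamming FIR (illustrative, not the exact MBH filter). At this
# tap count and cutoff all taps are non-negative, so the bare Hamming FIR
# shows essentially no ringing on a rectangular pulse.
taps = signal.firwin(51, cutoff=0.04, window="hamming")

pulse = np.zeros(300)
pulse[100:200] = 1.0
response = np.convolve(pulse, taps, mode="same")
print("overshoot above pulse level:", response.max() - 1.0)  # ~0 here

# Raise the cutoff and negative sinc lobes enter the window, so ringing
# does appear:
taps2 = signal.firwin(51, cutoff=0.2, window="hamming")
response2 = np.convolve(pulse, taps2, mode="same")
print("overshoot with higher cutoff:", response2.max() - 1.0)
```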

    5. Obviously this is a different understanding than the people on WUWT took from it.

      But what do you mean exactly by "some IIR correction to attenuate the higher frequency-domain side lobes"? As far as I can see from his code, or from Jean S's implementation, it is just a standard taper-window-based filter method.

      Mann's code starts with the function named "lowpass" (it is a general window-based filter function).

      Note that the timestamp of Mann's code on his server is 2011-11-08, so who's to say this is even the same code used originally? That's the problem with arguing over these arcane issues more than a decade after the original paper.

    6. Carrick,
      I'm basing the IIR notion on Jean S' convolution with a rectangular pulse. It shows ringing. I don't think an ordinary Hamming should do that.

    7. As far as I can tell from the code, it is a standard Hamming low-pass filter. For small enough values of the fcut parameter, I think you should be able to produce ringing.

      BTW, the code can do other cases, like band-pass, high-pass and band-stop (it looks to me like the original code took two parameters, f1 and f2, which are now hard-coded).

      The source for the filter cited in Mann's code is Stearns and David (1996).

      Since that is MATLAB source, it is likely that Mann's code is a derivative of that original source.

      Am I wrong that Jean S is claiming he cannot find the method described anywhere in the literature?

  3. Nick: I have arbitrarily subtracted 0.2°C from the Hadcrut 4 anomalies, for better plot match. The idea is to show that SST is usually the leading indicator of a change, with GMST lagging, and TLT often the last.


    Have you thought about looking at the lagged correlation here?

    Here is what I get for HadSST vs CRUTEM:

    figure.

    Comparing ENSO3.4 to zonally averaged temperature is also interesting; I've done that too, and it shows that the correlation is confined to roughly ±30° in latitude.
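
    For anyone wanting to redo this, the lagged correlation itself is only a few lines. A sketch (loading HadSST and CRUTEM is omitted; the function and its name are mine):

```python
import numpy as np

def lagged_correlation(x, y, max_lag=24):
    """Correlation of x[t + lag] with y[t] for each lag in months."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# x and y would be equal-length monthly anomaly series, e.g. HadSST and CRUTEM.
```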

    1. Of course land temporally lags ocean as the circulation from El Nino travels toward land.

    2. Actually the lag can be many months, as you move farther away from the equator:

      Figure.

      Were I to do a regression to try to remove the ENSO effect from land temperature, I'd probably break down the temperature into zonally averaged temperature (that is, averaged over a band of latitudes), and use something like this rather than assuming a single constant at one latitude and a constant lag.
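
      Something along these lines, say (a rough sketch; the `bands` input and the fitting choices are hypothetical, not what I actually ran):

```python
import numpy as np

def enso_fit_by_band(bands, nino34, max_lag=12):
    """Per-latitude-band ENSO regression: pick the lag with the largest
    |correlation| against ENSO3.4, then take the regression slope at
    that lag. `bands` maps band labels to monthly anomaly arrays."""
    fits = {}
    for label, temp in bands.items():
        corrs = [np.corrcoef(temp[lag:], nino34[:len(nino34) - lag])[0, 1]
                 for lag in range(max_lag + 1)]
        lag = int(np.argmax(np.abs(corrs)))
        beta = corrs[lag] * np.std(temp[lag:]) / np.std(nino34[:len(nino34) - lag])
        fits[label] = {"lag_months": lag, "beta": beta}
    return fits
```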

    3. Hacked that up. Make the final part:

      "rather than assuming a single constant over all latitudes and one lag value"

    4. You are way behind the curve, Carrick. My CSALT model has a 6 month lag for ENSO.

      Why don't you do something this straightforward?

    5. You're assuming a single lag coefficient, which does not seem to be a legitimate assumption. Also 5 months is clearly too long.

      I also think regressing against global mean temperature grossly oversimplifies the physics.

      I know you like your model, but you don't seem to have many other buyers so far.


    6. Carrick said:
      "Also 5 months is clearly too long."

      Enough with the equivocating, Carrick. We can also read what you wrote before that:

      "Actually the lag can be many months, as you move farther away from the equator:"


      So "many months" is somehow different than "5 months"? In what way would that be?

      BTW, your "single lag coefficient" is my Maximum Entropy mix of discrete lags which follow an exponentially damped PDF. Do you not follow how stochastic analysis is done?

    7. Look at the figure - the region with the highest correlation only has a 2-month delay. The delay increases (not unexpectedly) as you move further from the equator. A five-month lag is simply not a tenable assumption. The data themselves show you to be wrong.

      You should have looked at the lagged correlation before trying to construct a model.

    8. Carrick, you evidently don't understand a damped exponential PDF of delays, p(tau) = lambda*exp(-lambda*tau), where tau is the delay.

      With a damped exponential, the most likely delay is 0 months. The mean delay is the characteristic time, 1/lambda, and the standard deviation is also 1/lambda. Negative delays are not allowed, as that would violate causality. This is not a bad model for explaining how an ENSO disturbance will propagate across the world.
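
      In code, that kernel and its effect on a series are straightforward (a sketch of the model as stated, not the actual CSALT implementation):

```python
import numpy as np

def exponential_lag(x, lam=1/6, max_lag=24):
    """Convolve a monthly series with the causal kernel
    p(tau) = lam*exp(-lam*tau). lam = 1/6 gives a mean (and standard
    deviation) delay of 6 months, with zero the most likely delay."""
    tau = np.arange(max_lag + 1)
    kernel = lam * np.exp(-lam * tau)
    kernel /= kernel.sum()                  # renormalize the truncated tail
    return np.convolve(x, kernel)[:len(x)]  # truncation keeps it causal
```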

      Uncertainty quantification is a requirement for scientific modeling.

    9. I suspect I know quite a bit more about this particular topic than you do. ;-)

      As to negative delays, yes you can get them, and they aren't unphysical when you are looking at responses to continuous signals. They don't actually imply violations of (speed of light) causality.

      See Nick's comments

      Some drawn out comments between Bart, Mark T and myself. Like you, they are rather difficult to reason with.

      Look, I was trying to provide some constructive criticism to help you improve your models. You can choose to ignore it.

      Perhaps you'll believe me if I say I don't have a vested interest in seeing any particular trend pop out of the data. Rather, I'd like to know what the real trend is. To that extent, I am interested in seeing somebody who is doing the legwork do it as well as possible.

      If you like what you've done, fine then.

      Cheers.

    10. See the summary here, Method II, item 8 in particular.

    11. WHUT, since I suspect you distrust me as "the enemy", I'll point you to this demonstration of negative delays in a real system. This was mentioned in the first link (to Nick's blog).

  4. What the heck are you bringing up negative delays for?

    I simply said that the negative part was zero probability so your kind of ankle-biter wouldn't come back and claim I forgot to specify it.

    As far as models of actual negative delay go, they have as much worth as someone who thinks Bose-Einstein statistics are applicable to the condensation and freezing nucleation of water.

    1. WHT: What the heck are you bringing up negative delays for?

      Hm... maybe it was this statement by you:

      And negative delays are not allowed as that would violate causality.

      There are physical systems with negative group delay.

      I simply said that the negative part was zero probability so your kind of ankle-biter wouldn't come back and claim I forgot to specify it.

      Even then, it's zero lag only if you're talking physical delays.

      When you are comparing the delay associated with continuous signals like ENSO or global mean temperatures, the correlation function has physically meaningful values for both positive and negative delays.

      If you don't believe me, read Nick's comments.
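
      A toy demonstration (a sketch; the sign convention for the lag is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# A smoothed (hence autocorrelated) "driver" and a response lagging it by 3.
x = np.convolve(rng.standard_normal(520), np.ones(12) / 12, mode="valid")
y = np.roll(x, 3) + 0.3 * rng.standard_normal(x.size)  # roll wraps; harmless here

for lag in (-6, -3, 0, 3, 6):
    a, b = (x[lag:], y[:y.size - lag]) if lag >= 0 else (x[:lag], y[-lag:])
    print(lag, round(float(np.corrcoef(a, b)[0, 1]), 2))
# The peak is at lag = -3 with this pairing (x leads y by 3 months), but
# because x is autocorrelated the correlation stays well above zero at
# lags of the opposite sign too, with no causality violation involved.
```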

  5. Sorry this got butchered:

    Even then, it's zero lag only if you're talking physical delays


    Crossed synapse there. I meant to say it's zero for negative lags only if you're talking about physical delays.

    1. ENSO has a significant correlation with drought, but delayed, according to the Palmer Drought Severity Index:
      PDSI

      "Also shown in the left-lower panel (red) is the Darwin mean sea level pressure shifted to the right by six months to obtain the maximum correlation (r=0.62). "

      There is that troubling 6-month lag again. I wonder why it is shifted like that? Could it be because this is a real physical delay as the ENSO propagates to the rest of the world?

      Some of us like to live in the real world, not in the fantasy world of half-baked theory, e.g. cloud formation based on Bose-Einstein statistics, yup.

