Friday, May 5, 2017

Nature paper on the "hiatus".

There is a new Nature paper getting discussed in various places. It is called Reconciling controversies about the 'global warming hiatus'. There is a detailed discussion in the LA Times. The Guardian chimes in. I got involved through a WUWT post on a GWPF paper. They seem to find support in it, but other skeptics seem to think the reconciliation was effective, and are looking for the catch.

I thought it was a surprisingly political article for Nature, in that it traces how the hiatus gained prominence through pressure from contrarians and right wing politics, and scientists gradually came to take it seriously. I think they are right, but the process should be resisted. There really isn't much there, and the fact that contrarians create a hullabaloo doesn't mean that it is worth serious study. I'll show why I think that.

I'm going to show plots of various data since 2001, which is the period quoted (eg by GWPF) because it excludes the 1998 El Nino. They weren't so scrupulous about that in the past, but now they want to exclude the recent warm years. Typically "hiatus" periods end about 2013. I recommend using the temperature trend viewer to see this in perspective. The most hiatus-prone of the surface datasets, by far, is HADCRUT (Cowtan and Way explain why). Here is the Viewer picture of HADCRUT 4 trends in the period:



Each dot represents a trend period, between the start year on the y-axis and the end year on the x-axis. It's a lot easier to figure out in the viewer, which has an active time series graph that shows, when you click, what is represented. If you cherry-pick well, you can find a 13-year period with zero slope, shown by the brown contour. And you'll see that the hiatus periods form two descending columns, headed by a blue blob. These are the periods which end in a fixed year (approx) on the x-axis - ie a dip. There are just two of them, and they are the La Nina years of 2008/9 and 2011/2. The location of those events determines the hiatus. If you look at other sets on the trend viewer, you'll see this much more weakly. At WUWT I listed the 2001-13 trends thus (error range converted to ±1σ):

Dataset     Trend (°C/century)
HADCRUT     0.063 ± 0.301
GISS        0.506 ± 0.367
NOAA        0.509 ± 0.326
BEST L/O    0.468 ± 0.432
C&Way       0.489 ± 0.391


All except HADCRUT are quite positive. People sometimes speak of a slowdown. Incidentally, in the triangle plot, there is a reddish horizontal bar, bottom left, that is almost as prominent as the "pause". It marks the strong positive trends that you can draw starting in 1999 - ie the 2001-6 warmth seen from the other end. I don't remember anyone getting excited about this feature.

I'd like to talk about the arithmetic of trends. Trend is a first central moment. It has a lot in common with moments of force, or torque. I think of it as a see-saw - a classic torque device. A heavy weight on the end has a lot of effect; in the middle, not much. And of course, it depends which end. Trend is an odd see-saw, because it has both weights (cold periods) and uplifts (warm). It also has a progression. Items come on at one end, and then progress across, exerting less and then opposite torque, until they drop off the other end (if you keep the period fixed). So there isn't actually a lot of the period that is determining the trend. It is predominantly the end forces.
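
To make the see-saw arithmetic concrete, here is a minimal sketch (Python, with made-up monthly anomalies; not code from the trend viewer) of the OLS trend written as a weighted sum of the data, with weights proportional to distance from the middle of the period, so the ends carry nearly all the leverage:

```python
import numpy as np

def trend_weights(t):
    """OLS slope weights: w_i = (t_i - tbar) / sum((t_j - tbar)^2).
    The fitted slope is sum(w_i * y_i); the weights are largest in
    magnitude at the two ends and pass through zero in the middle."""
    dt = np.asarray(t, float) - np.mean(t)
    return dt / np.sum(dt**2)

# Hypothetical monthly anomalies for 2001-2013
rng = np.random.default_rng(0)
t = np.arange(2001, 2014, 1/12)                      # decimal years
y = 0.0005 * (t - 2001) + 0.1 * rng.standard_normal(t.size)

w = trend_weights(t)
torque = w * y                 # each month's contribution to the slope
slope = torque.sum()           # °C per year
print(f"trend = {100 * slope:.3f} °C/century")
# A warm month near either end contributes a large |torque|; the same
# month near the middle of the period contributes almost nothing.
```

The point of the sketch is just that the slope is a sum of end-weighted contributions, which is why a couple of well-placed La Ninas (or one big El Nino) can dominate it.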

I'll illustrate that with this set of graphs (click the buttons below to see various datasets). It shows the mean (green) for 2001-2013 and colors the data (12-month running mean) as deviation from that value. The idea is that there has to be as much or more pulling the trend down as up, if it is to be negative - either blue at the right or red at the left.
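
For anyone wanting to reproduce that colouring, a small sketch (Python, with hypothetical numbers standing in for any of the datasets) of the 12-month running mean and its deviation from the 2001-2013 mean:

```python
import numpy as np

def running_mean(x, window=12):
    """Centred 12-month running mean; the series shortens by window-1 points."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Hypothetical monthly anomalies, 1999-2017
rng = np.random.default_rng(1)
t = np.arange(1999, 2017, 1/12)
anom = 0.015 * (t - 1999) + 0.15 * rng.standard_normal(t.size)

smooth = running_mean(anom)
t_s = t[6:6 + smooth.size]                     # rough centre alignment

mask = (t_s >= 2001) & (t_s < 2014)            # the 2001-2013 base period
mean_0113 = smooth[mask].mean()                # the green line in the plot
deviation = smooth - mean_0113                 # > 0 plots red, < 0 plots blue
```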



Now you can see that there aren't a lot of events that determine that. There is a red block from about 2001-6, which pulls the trend down. Then there are the two blue regions, the La Ninas of 2008/9 and 2011/12, which also pull it down. The La Nina of 2008 has small torque on this period, but would have been effective earlier. The 2011/12 event has the leverage, and so overcomes the sole uplift period of 2010.

That is just four periods, and it isn't hard to see how their effects can be chancy. It's really the 2001/6 warmth that is the anchor.

And then you see the big red period at the end, which overwhelms all this earlier stuff. GWPF and Co are keen to say that this is just a special case that should be excluded - something like: it wasn't caused by CO2. But the 2001-6 period is also just a natural excursion, and wasn't caused by CO2 either.

Basically the pause from 2001 won't come back until that big red is countered by a big blue. That would ensure that the trend returns close to that green line (extended). Of course, the red will be a powerful pauser for trends starting in 2015, and we'll hear about that soon enough.

Here is the same data colored by deviation from the trend from 2001 to present. We're still well on the red side of that too. The point here is that as long as new data lands above that line, it will be more red, and the trend will go up. It won't even reverse direction until you start seeing blue at that end. And if it did, there is a long way to go.



Now that the line has shifted, you can see how the blue periods would have destroyed such a trend earlier. But now, with their reduced leverage and the size of the red, that is where the trend ends up. For HADCRUT it's now 1.4°C/Cen (other surface indices are higher).

So my conclusion is that, just as contrarians protest (with some justice) that not too much should be made of the current strong warming trends, because they are influenced by a single event, so too should the much weaker hiatus be viewed with only modest interest, because it is the result of the concurrence of two weaker events, La Ninas, which get less notice because they are less prominent, but are equally chance occurrences.







45 comments:

  1. I was talking to some guy on Reddit who was convinced the temperature is exploding looking at the last 4 years. It fits a quadratic or exponential curve very well and may well be erroneously statistically significant. He also had an erroneous explanation: methane hydrates.

    That is the same line of thinking as the mitigation-sceptical movement and their "global warming stopped". Although they do not even offer an erroneous explanation. They are just happy with spreading doubt.

    ReplyDelete
  2. I do not think the current warming trend was caused by a single event. There was the cessation of Matthew England's anomalous, intensified trade winds. Then there was the switch to a positive phase of the PDO, and a change in low clouds in the Eastern Pacific. And finally, a moderately strong El Niño.

    In the Knutson paper on the possibility of the PAWS being prolonged, he modeled what he called a "springback" warming... very strong.

    April cooled... a lot. Looks like it is almost entirely continental cooling, and that it is quickly disappearing. Almost a week through the May threshold, and El Niño this year still looks possible.

    ReplyDelete
  3. In those data set trend figures you show, are the uncertainty bars 2-sigma? The variability looks large.

    PS you have a lot of patience commenting on WUWT, the trolling is obvious and rampant. Watts really needs to clean that blog up.

    ReplyDelete
    Replies
    1. Harry,
      Yes, they are 2σ. I should have said that. They are calculated using the AR(1) autocorrelation model.
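
      (For anyone wanting to reproduce numbers of this kind, here is a minimal sketch in Python of an AR(1)-adjusted trend CI via an effective sample size, with made-up data; it illustrates the general approach, not the code actually used for the table.)

      ```python
      import numpy as np

      def ar1_trend_ci(t, y, nsig=2.0):
          """OLS trend with standard error widened for lag-1 autocorrelation
          using an effective sample size (a common approximation)."""
          t, y = np.asarray(t, float), np.asarray(y, float)
          dt = t - t.mean()
          slope = np.sum(dt * (y - y.mean())) / np.sum(dt**2)
          resid = y - y.mean() - slope * dt
          r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
          n_eff = y.size * (1 - r1) / (1 + r1)            # effective sample size
          se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum(dt**2))
          return slope, nsig * se

      # Made-up monthly data for 2001-2013
      rng = np.random.default_rng(2)
      t = np.arange(2001, 2014, 1/12)
      y = 0.0005 * (t - 2001) + 0.12 * rng.standard_normal(t.size)
      slope, half = ar1_trend_ci(t, y)
      print(f"{100*slope:.3f} ± {100*half:.3f} °C/century (2σ)")
      ```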

      Delete
    2. I've converted to ±1σ, which is more conventional.

      Delete
    3. Thanks for your answer. 14 years is not a very long run of data. But I guess the intention is to show the data gives a warming trend as opposed to another trend certain people might dream up. And who knows, the real trend might be larger at those uncertainty levels.

      Delete
    4. There is a paper out that finds the AMO and the PDO have not caused warming since 1900 (not really a surprise), but I still think the PDO, a sort of proxy for the Eastern Pacific, has caused cooling since ~1985. Nobody seems to know what triggers it. The current warm phase, which constricted and swallowed the 2016 La Niña like a boa (the latest blue nothing on the trend viewer), could last from a couple more months to 20 to 30 years.

      Delete
  4. Nick wrote: "All [trends] except HADCRUT are quite positive." Although I agree with some of what you write, I don't think you should characterize 0.5 K/century (0.05 K/decade) as a strongly positive trend. In the big picture of GW, it is nearly a negligible trend that almost no one should care about. Especially when it comes with a 2σ confidence interval bigger than the trend.

    I think we should remember what we are trying to do with this data. As scientists, if we want to claim that it is warming over a period, the central estimate for the trend over that period needs to be positive and the full 95% ci (or alternative confidence interval) cannot contain zero. If we want to claim that it is not warming - which means that it is cooling - the central estimate and the full 95% ci need to be below zero. Our inability to demonstrate warming or cooling over a short period doesn't permit us to conclude anything about the absence of warming or cooling. The absence of unambiguous warming is not evidence FOR cooling. This is the big lie constantly being repeated at WUWT.

    It gets trickier when one wants to claim that temperature isn't changing. I think that requires a scientist to define what he means by "not changing". "Not changing" might be defined as less than 0.5 K/century, in which case the 95% ci must lie within +/- 0.5 K/century. Or one might define "not changing" as 50% or 25% or ?% of the multi-model mean trend projected or hindcast for the period.

    When you think about things in these terms, no one should be making any claims about the meaning of trends over short periods: about warming, not warming or remaining the same.

    The important claim that no one is focusing on is whether the trend agrees with projections.

    The other problem that is rarely discussed is that 2.5% of the time the measured trend will be below the 95% ci due to scatter in the data. On your trend viewer, you perhaps should erase (or fade out) the highest and lowest 2.5% of trends along any diagonal. For example, on the diagonal that represents 10-year periods, the highest and lowest 2.5% of the trends along that diagonal "can be attributed to chance". That might not be the right phrase to use.

    Frank

    ReplyDelete
    Replies
    1. "I don't think you should characterize 0.5 K/century (0.05 K/decade) as a strongly positive trend."

      Where does Nick Stokes use the word "strongly"? He says it is "quite positive", which it is.

      "In the big picture of GW, it is nearly a negligible trend that almost no one should care about."

      It is relevant to the subject of the article which is a discussion of the hiatus.

      "no one should be making any claims about the meaning of trends over short periods:"

      Who is making claims? The subject of the article is the trends in the temperature data, and how the short-term trends affect the longer-term trends.

      Delete
    2. Frank,
      The main thing I think about the "positive" trend is that it is not one that one would call a "hiatus". Perhaps, as said, a "slowdown". As to the CI's, I think one needs to think back to what CI's are for. They represent not uncertainty about the trend that happened, but the trend that might have been if there had been different weather. So if you want to say that
      GISS 0.506 ± 0.734 (that is the 2σ CI)
      is a pause because it could have been zero, the full statement is
      We actually had a trend of 0.506, but if there had been different weather there is about a 10% chance that there might have been a pause (zero trend).
      Not so convincing.

      That's why I think 95% CIs are often misused. Their purpose should be to say that the observations are significant enough that you can deduce some general principles. But no-one is trying to deduce a general principle from the interval 2001-2013.

      I agree that the right thing to test is the deviation of trend from projected, rather than zero, with due care for the fallacy of multiple testing. Lucia used to do that, although I didn't agree with her methods.

      Delete
    3. Nick wrote: "As to the CI's, I think one needs to think back to what CI's are for. They represent not uncertainty about the trend that happened, but the trend that might have been if there had been different weather."

      Can I refresh my understanding of CI's by questioning this view? The simplest way to measure warming is to subtract the initial "reading" from the final "reading". The readings could be the anomaly for a year or 3 years or 5 years or 10 years, depending on the nature of the noise. Then use the formula for the difference and standard deviation of two means.

      Since "depending on the nature of the noise" is an ambiguous phrase, we can instead apply the simplest model to the data - a linear trend. The standard deviation in this case samples the noise throughout the period (rather than just the noise near the ends) - and any other phenomena that reflect non-linearity. In theory we are supposed to examine the residuals to detect autocorrelation (which Nick corrects for) and non-linearity (which many probably ignore). For example, a linear model isn't very applicable for the whole 20th century, since the log of forcing hasn't increase linearly.

      Where you say "the trend that might have been", I say the "range of trends that are consistent with the data given the scatter in the data". Then I am going to multiply by the number of years to get a range for how much WARMER in K, not K/century. That turns out to be 0.06 K (-0.03 to 0.15) for 2001-2013 for GISS. Mathematically, there may not be any difference between "How much WARMER?" (an amount) and WARMING (a rate), but there sure is when you start considering periods of different length.

      Frank

      Delete
    4. Nick.

      I agree with your point about the CIs being misunderstood. I asked one of the people at RealClimate about it, and their suggestion was to think about Probability Distribution Functions instead (I think I have the term right).

      I use the term "hiatus" because it was used by the IPCC. I have since been told it is a term used in geology when talking about strata.

      Delete
    5. There is nothing wrong with the term "hiatus" by itself. The data shows one after WWII, mostly due to fast increases in air pollution and quite likely also because of unresolved measurement artefacts around WWII. That one was a few decades long.

      It is just that we did not have a "hiatus" recently. That was overconfidence in the data quality and really, really bad statistics.

      Delete
    6. One can judge data quality by comparing an index such as NINO34 against SOI, which are instrumentally independent measures (one on T and one on P). If you look at a sliding correlation coefficient of these two indices along the complete interval, you will see certain years that are poorly correlated. Impressively, these are the same years that give poor agreement against the ENSO model. What this tells us is the poorly correlated years are ones with poor signal-to-noise ratio.

      So I started to use a modification of a correlation coefficient called a weighted correlation coefficient, whereby the third parameter is a density function that remains near 1 where the signal-to-noise ratio is high and falls toward zero where the SNR is low. This allows the fit to concentrate on the intervals of strong SNR, thus reducing the possibility of over-fitting against noise.
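
      (A minimal sketch in Python of a weighted Pearson correlation of the kind described, with entirely hypothetical series and weights; the actual density function used is not specified here.)

      ```python
      import numpy as np

      def weighted_corr(x, y, w):
          """Weighted Pearson correlation: points with weight near 0 barely count."""
          w = np.asarray(w, float) / np.sum(w)
          mx, my = np.sum(w * x), np.sum(w * y)
          cov = np.sum(w * (x - mx) * (y - my))
          sx = np.sqrt(np.sum(w * (x - mx) ** 2))
          sy = np.sqrt(np.sum(w * (y - my) ** 2))
          return cov / (sx * sy)

      # Hypothetical stand-ins for NINO34 and SOI (roughly anti-correlated)
      rng = np.random.default_rng(3)
      nino34 = rng.standard_normal(240)
      soi = -nino34 + 0.5 * rng.standard_normal(240)
      weight = np.clip(1 - 0.5 * np.abs(nino34 + soi), 0, 1)   # crude SNR proxy
      print(weighted_corr(nino34, soi, weight))
      ```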


      Delete
    7. Frank,
      "Can I refresh my understanding of CI's by questioning this view?"
      Sorry I missed this. The basis for my view is that the CIs are based on a model of secular trend (climate change) plus random weather, expressed as an AR(1) distribution or whatever. The CIs are based on re-sampling with that distribution - ie running different weather. It's a point I emphasise because a lot of people think it is measurement uncertainty - we aren't sure of the observations. That isn't so - trend has a valid role as just a measure of the change that did occur, and the CIs exaggerate the uncertainty of that, by adding in (and being dominated by) the uncertainty of what different weather might have done. What the CIs are good for is suggesting how much the trend might be different in some future period, with weather we can't currently predict.
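
      (To make the "different weather" reading concrete, a small sketch in Python that resamples AR(1) noise around a fixed underlying trend and looks at the spread of the fitted trends; the parameters are illustrative, not fitted to any dataset.)

      ```python
      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(156) / 12.0            # 13 years of monthly data
      dt = t - t.mean()
      true_trend = 0.005                   # °C/yr, i.e. 0.5 °C/century
      phi, sigma = 0.6, 0.10               # assumed AR(1) weather parameters

      def ar1_noise(n):
          e = np.zeros(n)
          for i in range(1, n):
              e[i] = phi * e[i - 1] + sigma * rng.standard_normal()
          return e

      slopes = []
      for _ in range(2000):                # 2000 alternative runs of the weather
          y = true_trend * t + ar1_noise(t.size)
          slopes.append(np.sum(dt * y) / np.sum(dt**2))
      slopes = np.array(slopes)

      # The spread of these slopes is what the quoted CI describes: how different
      # the trend might have been with different weather, not how uncertain the
      # measurement of the trend that actually occurred is.
      print(f"2σ spread: ±{200 * slopes.std():.2f} °C/century")
      ```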

      Delete
    8. "The basis for my view is that the CIs are based on a model of secular trend (climate change) plus random weather, expressed as an AR(1) distribution or whatever. "

      ENSO is not weather, but a deterministic geophysical process related to tides. Like I stated above, the random or noisy part of ENSO can be isolated.
      http://contextearth.com/2017/05/15/enso-and-noise/

      Need to do this instead of expressing variability as a brain-dead AR(1) process. That will never give you an optimal estimator in the situation where the noise is not really noise. For example, when you are performing electrical signal analysis, you aren't going to treat a 60 Hz noise hum as AR(1) when you know it exists.

      Delete
    9. WHUT,
      I was explaining what the CIs calculated (by almost everyone) actually mean. If you think a better uncertainty estimate is possible using your knowledge of ENSO, I think you should explain how, and say what it is. That would be very helpful.

      Delete
    10. Hmmm, same way as the daily and yearly nuisance signals are removed?

      Delete
  5. why does anyone assume avg temperatures go up in straight lines anyway, if they did we would see a straight line from January to July in avg temps due to the ever increasing solar insolation (Northern Hemisphere)

    but we don't see that straight line, we see short term variation on a longer warming (rising) trend - February can be colder than January, and March can be colder than either - but we know July (large volcanic eruptions excepted) will be warmer than January

    and we see the same in the AGW signal, variation on an increasing warming trend - and it would actually be odd if we didn't

    it all seems pretty basic to me in my dunning krugeresque way

    ReplyDelete
    Replies
    1. Indeed, spring pauses are quite common, but summer still arrives.

      Delete
    2. In that vein, at WUWT they are certain fall has arrived; that the trend has leveled off since the 14-16 El Niño.

      But it has not. The big La Niña dips/little El Niño surges happen right after big drops in the PDO index, and the big EL Niño surges/little La Niña dips happen right after big rises in the PDO index.

      I believe this is the first time since 1980 that the PDO has remained positive after a big El Niño surge. It has not fallen back in the way it did from 1980 until 2013. It's not the old normal; remains to be seen whether or not it is the new normal, but since 1900 the PDO index has never remained positive for this many months in a row... and the April NOAA PDO just surged upward, so 48 straight months, 2015 to 2018, is looking to be very possible. A positive PDO spike appears to be followed by GMST spikes like 1998, 2003, 2005, 2007, 2010, 2014, 2015, 2016, and... very possibly 2017 and 2018.

      Delete
      Fall did arrive after the 97/98 El Nino in the form of a very strong La Nina in 99. That doesn't seem to have happened after the recent event. When Nick reminds us that trend is the first moment and functions like a teeter-totter, it is going to take many years for the recent El Nino to get far enough from one end of the teeter-totter to have less influence on the slope. For that reason, the Pause is unlikely to return to life until well after 2020. The 99 La Nina is why the lowest slopes for the hiatus period begin around 2001 instead of 1998. It is interesting to note that the trend for the 19 years before the 1998 El Nino is almost equal to the trend for the 19 years after that El Nino. The 95% ci for both periods includes zero, but not the ci for the combined 38 years. The 97/98 El Nino is in the center of the "teeter-totter" and affects the ci, but not the trend. We have the '82 and recent El Ninos balancing each other on the ends.

      Before the Karl adjustments, the hiatus did appear to be inconsistent with climate models - partly because of their higher climate sensitivity than energy balance models and partly because the models don't exhibit enough unforced variability. If Karl's adjustments are correct, the inconsistency is much less significant.

      Enough of my noise about noise.

      Frank

      Delete
    4. I was told repeatedly at various blogs that the 14-16 El Niño was going to be followed by a humdinger La Niña, and 99-01 was usually cited as proof. I did not believe that as I was pretty confident the PDO had flipped positive and that La Nina events would be few in number and less powerful. I suspect that the first 13 years of the 21st century will be seen as La Niña's last hurrah.

      If Karl's adjustments are correct...? Yes, you're right, they may have been too small!

      Delete
      Yup, ENSO has elements of a teeter-totter. If you look through the research literature, you will find numerous references to a hypothesized behavior that one annual peak is followed by a lesser peak the next year. Yet, there is no evidence that this strict biennial cycle is evident in the data -- it's more of a hand-wavy physical argument that this can or should occur.

      The way to model this teeter-totter behavior is via Mathieu equations and delay differential equations. Both of these provide a kind of non-linear modulation that can sustain a biennial feedback mechanism.

      The other ingredient is a forcing mechanism. The current literature appears to agree that this is due to prevailing wind bursts, which to me seems intuitive but doesn't answer what forces the wind in the first place. As it turns out, only two parameters are needed to force the DiffEq and these align EXACTLY with the primary lunar cycles that govern transverse and longitudinal directional momentum, the draconic and anomalistic months.

      http://contextearth.com/2017/05/01/the-enso-model-turns-into-a-metrology-tool/

      Note the table at the bottom ... having one of these values align may be coincidence, but having both combine with that kind of resolution is telling. At some point this will no longer be in my hands since the model is out in the open and it's straightforward to implement. It's really the equivalent of the initial discovery that ocean tides were aligned with the diurnal and semidiurnal lunar cycles. Once the cat is out of the bag or the toothpaste has left the tube, you can't put it back in.


      Delete
      It's not like these patterns are impossible to reveal. Occasionally it will take the persistence of a symbolic reasoner to extract the info. For example, the ENSO model above is related to the interesting pattern that a delay difference of 2 years in the (e.g.) Sydney SLH readings correlates very well with the ENSO SOI data, ie SOI = Tide(t) - Tide(t-2 years).
      http://contextearth.com/2016/04/13/seasonal-aliasing-of-tidal-forcing-in-mean-sea-level-height

      All these patterns are out there, not necessarily waiting to be discovered, but perhaps waiting to be acknowledged. Climate science is in dire need of the equivalent of a vaccine "herd effect" -- as long as there are these holes in the understanding of fundamental behaviors of climate science, crappy models by charlatans such as Tsonis and Curry will continue to infect the consensus. What we need is for a high-percentage of the models to become solid and thus immune to manipulation by the Lindzens of the world.

      Delete
  6. Nick: I think your trend viewer over-emphasizes the importance of noise and allows readers to cherry-pick. When I look at your triangles with color-coded trends inside, I'd like to know which periods of time have trends that are statistically significantly different from the overall trend shown in the lower right-hand corner. I'd like to be able to see and then discard the 2.5% of the data points one expects to disagree with the overall trend simply because of scatter in the data, and perhaps exclude periods shorter than 1 year or n years. It would be nice to pick my own confidence interval. The 95% ci for the difference of two means is a challenging standard. If the IPCC can report "likely" conclusions, it would seem appropriate to allow access to that standard. However, that means that 1/6 of the data points would be expected to disagree with the overall trend by chance.

    The goal would be to show how little of the noise in the blogosphere about "lack of statistically significant warming" is meaningful.

    Frank

    ReplyDelete
    Replies
    1. Frank,
      "allows readers to cherry-pick"
      My original post called it a cherrypicker's guide. My excuse was that it also let you see what the cherrypickers were doing. But I followed up with a plot that fades out trends that are not significantly different from zero (there are radio buttons on the viewer (left column) for that). And you can show the CIs independently. That is where you can check for statistically significant difference from prediction (strictly, model values, since it is past). I have labelled one such level, 1.7°C/Cen, with grey. So if you click "Upper CI trend" and look for the grey contours, within them is the region that is significantly (97.5%) below 1.7. It's quite interesting, because it is medium-term; generally ends in the La Nina of 2011/2, and tends to start in warm spells like 2001/3. But there isn't very much.

      Delete
    2. Frank,
      "It would be nice to pick my own confidence interval."
      The last of them is a t-value plot, although it relates to zero trend.

      Delete
  7. Nick: I wasn't very clear earlier. The "best answer" for the trend is almost always the answer from the "longest relevant period", the color in the lower right hand corner of your triangles. The only justification for considering any other trend would be because the trend for some sub-period is significantly different from the trend for the whole period. Let's assume that 1979 to present is the "longest relevant period" - the longest period when radiative forcing has been increasing at a relatively constant rate equivalent to about 2 ppm CO2/yr. Most global records have a trend for this period of about 1.7 K/century with a 95% ci of +/- 0.2 or 0.3 K/century. My suggestion is that this is the "best answer" for the rate of warming for any sub-period because it has the narrowest confidence interval - unless the trend for some other sub-period is significantly different from the whole period. In other words, there was no SLOWDOWN in warming unless the trend during the putative SLOWDOWN period was significantly different from 1.7 +/- 0.25 (using the standard method for calculating the confidence interval around the difference in two means). Now, I think it is only fair to allow skeptics to say that there "likely" (roughly 1σ) was a slowdown in the warming trend rather than always demanding a higher level of confidence.

    Why do I want to focus on the trend for the longest relevant period and insist that shorter periods be proven to be different before discussing their meaning? We have a trend of about 1.7+/-0.25 K/century for the last 38 years. If we break that into two 19-year periods, neither trend is significantly different from zero (at 2σ). One of those 19 year periods contains roughly 10 year periods with a best estimate for the trend that is less than zero. Add one El Nino and the trend is above 0.1 K/century. As best I can tell, it has always been warming at about 1.7 K/century since 1979 and EVERYTHING between is consistent (or not, if the proper analysis says no) with the large scatter in the data that is found throughout this period. Is a SLOWDOWN that reaches below 0 K/century for 8 or 10 years a meaningful slowdown or just noise? If the putative slowdown were to dip to 0.5 K/century for 15 years, would that putative slowdown be more worthy of our consideration than 8 or 10 years with no warming? IMO, any decrease in trend that is not inconsistent with 1.5-2.0 K/century isn't really meaningful.
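
    (A minimal sketch in Python of the kind of comparison being described: testing whether a sub-period trend differs from the long-period trend using the combined uncertainty of the two estimates. The numbers are made up, and it ignores both the overlap between the periods and the cherry-picking correction mentioned above.)

    ```python
    import numpy as np

    def trends_differ(trend_a, se_a, trend_b, se_b, nsig=2.0):
        """Crude test: do two trend estimates differ by more than nsig combined
        standard errors?  Treats the estimates as independent, which they are
        not when one period is a subset of the other."""
        diff = trend_a - trend_b
        bound = nsig * np.hypot(se_a, se_b)
        return abs(diff) > bound, diff, bound

    # Hypothetical numbers in K/century (1σ standard errors)
    long_term, se_long = 1.7, 0.125      # e.g. a ~38-year trend
    short, se_short = 0.5, 0.4           # a putative slowdown period
    print(trends_differ(short, se_short, long_term, se_long))
    ```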

    So what is "the longest relevant period"? CO2 has been increasing at about 0.5% per year for the past few decades and that exponential growth rate should be producing a roughly linear increase in temperature. So the longest relevant period might be back to a time when the rate of increase was 0.3% per year and the average for the whole period would be 0.4% per year (+/- one fourth that much). That should produce a fairly linear warming. Of course, we need to consider all GHGs and aerosols, not just GHGs, and use CO2-equivalents. When you get back into the 1960's, I assume that the % increase per year was half as big as it is today. If so, the 1960's would not be "relevant". The 1960-present trend is about 1.4 K/century, modestly below that for 1979-present.

    Frank

    ReplyDelete
    Replies
    1. Frank,
      I see trend as basically a derivative estimate of an underlying secular function. I've talked about that here, with links to earlier posts. It's the usual problem of differentiating in the presence of noise, which could include oscillations etc. You want to get a tight enough interval for the time scale of variation of the secular, but wide enough to damp out the other effects. Whether there is a satisfactory interval depends on the variation of the hypothesised secular (and the noise).

      That is why I think the triangular plots are useful. As you say, you can get a very smooth color bottom right. Then you can move up in the region of interest, and try to see if the variation you start to encounter is related to the secular, or is due to other processes. Something that helps there is that oscillations like ENSO make a pattern of vertical and horizontal bars, which you can trace to the originating event. If you can exclude that, you have a better chance of finding a pattern you think significant.

      My next post, probably after the TempLS result which should come through tomorrow morning (waiting on China) will be on interpretation of the tri-plots - an expansion of this post.

      Delete
    2. Again to continue the medical analogy -- you are treating the symptoms and not the underlying illness. By calling this noise and using palliative measures to account for it, you are not stopping the spread of misinformation by skeptics such as Frank. So until you start to analyze the physical mechanisms of ENSO, the armchair statisticians will have an equal footing simply by playing the uncertainty card.

      Delete
      Nick: I think the focus on whether the trend or its full confidence interval for a particular period is greater than zero is misplaced when the confidence interval for the trend is wide (say greater than +/-50% of the central estimate at the very most). The only thing we know with a traditional level of scientific confidence is the trend for a period of about 3 decades or longer. (About 1.7 K/century for GMST).

      My position is that skeptics who believe that the rate of warming has slowed below the reliable trend we have established has been occurring for at least the last three decades should be required to prove that an important difference exists (using the confidence interval for the difference in two means, corrected for cherry-picking).

      WHUT: FWIW, I abhor the practice of converting an inability to demonstrate statistically significant warming into evidence SUPPORTING a lack of warming. When we fail to reject a null hypothesis, we can't draw any other inferences from this failure.

      Frank

      Delete
      Frank, I don't know what the heck you are going on about, apart from excessive navel gazing. All we have to do is come up with a good model for ENSO and then we can start making rapid progress in climate science. I have never seen a model that explains the past for ENSO, so any projections you come across certainly won't have much value for predicting the future.

      It's odd how no one has picked up the lunisolar connection to ENSO, despite the fact that places like NASA JPL have released research proposals (albeit rejected) on this topic. It's probably a combination of lack of mathematical savvy in the earth sciences and too much me-too-ism among the ranks, as well as poor use of computational horsepower for approaches such as evolutionary search and symbolic regression to find patterns in the data.

      Shouldn't complain, as I have a likely 2-year head-start on ENSO modeling http://ContextEarth.com .

      Delete
    5. WHUT said: "excessive navel gazing".

      Perhaps. Now that I've thought the situation through, perhaps I can put it more succinctly.

      1) Too many people focus on the wrong question: Has it been warming for the last N years (trend and ci greater than zero)? There is often too much noise in the data to answer this question for N = 10 or even 20 (when you include autocorrelation).

      2) Based on the narrower 95% ci for the useful 40-year trend, we CAN say that it has been warming over this period. Forcing has increased at a relatively steady rate during this period.

      3) The right question to ask: Is there any evidence that the trend for any shorter period is statistically DIFFERENT from the 40-year trend? Can anyone demonstrate a significant SLOWDOWN for any sub-period?*

      4) Likely answer*: No. The "ten-year pause" is statistically consistent with the trend and autocorrelated noise we have observed for the past 40 years. Therefore, the best answer for the current warming rate (GMST) is the current 40-year trend - about 1.7 K/century.

      * Based on the ci for a difference of two means. Corrected for the likelihood of cherry-picking periods outside the ci.

      Frank

      Delete
    6. Frank, You are not interested in the science at all, that's plain to see.
      Anytime I see someone talk about autocorrelated noise, it's the sign of a poseur. Similar to a poseur such as Tsonis.

      Delete
      In that case I would like to go on record as a big fan of auto-correlated noise, at least as much as Count von Count loves numbers. As auto-correlated time series, as correlated non-normally distributed spatial field, or as 3-dimensional (cloud) field. With or without bats. Stochastic modelling is a pretty big field of mathematics and very important when it comes to nonlinear processes.

      Delete
    8. To someone like Tsonis, a sine wave would be classified as autocorrelated noise. The problem is that a sine wave is of course highly correlated, yet if you don't necessarily know the content of a time series as being filled with deterministic sine waves, you will end up giving up and calling it stochastic autocorrelated noise. Tsonis and Curry and that gang want to see stochastic properties everywhere so they can continue to ride their uncertainty train.

      What I want to do is derail their train by pointing out ENSO is entirely deterministic and governed almost completely by lunisolar forcing. It's really not that difficult to do, all on a spreadsheet.

      And it's not like I am not a fan of stochastic behavior, as I wrote an entire book on applications of Markov modeling for reliability analysis. What I can bring to the table is insight into what is and isn't stochastic behavior, because I have been riding that fine line my entire career.

      One thing for certain is that ENSO is not even close to having red noise properties. It only spectrally looks like red noise because there is enough period doubling, physical aliasing, and accumulation of beat frequencies that the spectral profile looks extremely noisy. No one knows how to do physical aliasing and so ENSO has always been assumed to be red noise.



      Delete
    9. Stuff from Nick's backyard
      http://contextearth.com/2017/05/10/enso-and-tidal-slh-a-biennial-connection/

      If one looks at tidal sea-level height (SLH) from places such as Sydney harbor you will find a strong statistical correlation between 2-year differences in the SLH data and the current SOI.

      This completely supports the lunisolar model of ENSO as it reveals the hidden biennial modulation in the Pacific ocean that scientists have been speculating about for years.


      Delete
    10. Seriously radical that I can take that lunisolar-forced model of ENSO and fit it to the NINO34 interval of 1880 to current day, and then back-project it in time. So if I test against the coral-ring-based universal ENSO proxy (UEP), and then apply the modern instrumental data as a training range, it will accurately fit the years 1650 to 1880.

      http://contextearth.com/2017/05/12/enso-proxy-validation/

      This is as solid in both theory and experimental agreement as any conventional ocean SLH tidal analysis.

      Delete
  8. Frank: "Now, I think it is only fair to allow skeptics to say that there "likely" (roughly 1o) was a slowdown in the warming trend rather than always demanding on a higher level of confidence."

    Those confidence intervals are for a random period, not for a period that is selected for its small trend. Thus it would not be justified to call it likely a slowdown. It is nearly always possible to find a recent period where the trend is less than the long-term trend.

    ReplyDelete
    Replies
    1. I think there is a simple and correct response to skeptics who like to cherry-pick. Explain that the cherry-picked period is included in the long-term trend, and the long-term trend is towards warming. Leave it at that.

      Delete
    2. In my opinion, when we complain about cherry-picking, it sounds too much like, "hey, no cheating!".

      Delete
    3. Victor: I agree. In fact, I had raised this issue earlier in the thread. If one is looking at a 95% confidence interval, the periods with the lowest and highest 2.5% of the trends on Nick's "trend triangles" are consistent with chance. In the ideal world, they would be identified and eliminated by software. If we are dealing with a "likely" slowdown, then one sixth of the periods with the lowest and highest trends should be eliminated.

      The focus on trends greater than zero is misplaced. The only thing we know for sure is the trend for periods long enough to have a usefully narrow confidence interval - certainly less than +/-50% of the central estimate for the trend. And the forcing for those periods needs to be somewhat uniform, so we are looking at the response to something that hasn't changed dramatically. 1900-present has dramatic changes in forcing. I don't fully understand how forcing evolved over Nick's 1960-present period. Given rising aerosols and only +1 ppm/yr of CO2 in the 1960's, I suspect the forcing experienced then doesn't have much in common with the last 40 years, which is what I personally believe best represents the current warming rate.

      Frank

      Delete
    4. The trend since 2007 is just over a decade long now, and it's around +0.35 C/decade. I haven't seen WUWT et al. posting much about a "surge" in global temperatures, even though they were happy to post endlessly on a period with a smaller trend.

      How strange.

      Delete