Friday, March 25, 2016

Climate and statistical forecasting

About a month ago, there was a minor kerfuffle when the GWPF released a report by econometrics professor Terence Mills, titled

"STATISTICAL FORECASTING
How fast will future warming be?"

It got a run in the Murdoch and allied press. I wrote about it here. But it was obvious nonsense, and the fuss died down pretty quickly. I had said I would write up some more analysis, but interest subsided so quickly that I put it off. The actual warming, as Gavin quoted, had already made the forecast look silly, and there was more to say about the recent data.

But these intrusions of misplaced statistical pontification are an intermittent feature of the climate distraction. We had Beenstock and Reingewertz, which didn't exactly forecast, but claimed time series modelling proved the death of AGW. There was Ludecke et al, which was mainly very bad Fourier analysis, but did forecast (naturally) an imminent cooling, based on the periodicity of the trig functions used. And there was Keenan, pulling strings at the House of Lords to promote his ARIMA(3,1,0) model to claim great uncertainty about trend.

So I want to talk more about the place, if any, of statistical time series forecasting here.

Mills type forecasting of a temperature series T(t) uses a model of the general form
P(B)T = F(t,b) + Q(B)ε
B is the backshift operator; B T is the series displaced one step back. ε denotes a series of iid (independent and identically distributed) random variables. P and Q are polynomials in B; successive applications of B commute, so it makes sense to do algebra with it. This goes back at least to Boole. And, critically, F(t,b) is a function of assumed form with fitted parameters. It's often constant or linear.
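
For concreteness, here is a minimal sketch of that setup in Python, using statsmodels on a synthetic series (it isn't HADCRUT data or anything from Mills's report; the numbers are made up purely to illustrate the model class):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic "temperature" series: a linear F(t,b) plus AR(1) noise
rng = np.random.default_rng(0)
n = 150
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.5 * noise[i - 1] + rng.normal(0, 0.1)
T = 0.01 * t + noise                      # F(t,b) = b0 + b1*t

# Fit P(B)T = F(t,b) + Q(B)eps with P, Q of order 1 and a linear F
# (trend="ct" asks for a constant plus linear deterministic part)
fit = ARIMA(T, order=(1, 0, 1), trend="ct").fit()
print(fit.summary())

# Forecast: the expected value of future eps is zero, so the forecast
# is dominated by the fitted F plus a decaying ARMA contribution
print(fit.get_forecast(steps=30).predicted_mean)
```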

The forecasting process is that, over an observed period, the coefficients (and order) of P and Q, along with the parameters b, are found by some kind of least squares fit. With that fitted model, expected values are computed for future times. The expected value of ε is zero, so the forecast is dominated by the behaviour of F. And as Mills says:
The central aim of this report is to emphasise that, while statistical forecasting appears highly applicable to climate data, the choice of which stochastic model to fit to an observed time series largely determines the properties of forecasts of future observations and of measures of the associated forecast uncertainty, particularly as the forecast horizon increases.
Well, he said it, but it wasn't the aspect that the GWPF and press emphasised. But it's true.

So the forecast then depends on P(B). Often its characteristic roots (the inverse roots of P) are all small compared with 1. Then the inverse of P is just a smoothing operation, and the ARIMA part dies away. The forecast is a smoothed, scaled F, and so depends almost entirely on what form is assumed. In other words, all the statistics is doing is estimating the parameters b of F, and smoothing a bit. Alternatively, it may happen that P has one or more roots close to 1 (a unit root). That means that it acts like a differencing (a discrete differentiation), and the forecast is the solution of a difference equation, but still critically dependent on the form of F.
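
A small numerical sketch of that point (Python again, with illustrative AR coefficients and synthetic data, not anything fitted by Mills): the characteristic roots can be checked directly, and when they are well inside the unit circle the forecast quickly settles onto the fitted F.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Example AR polynomial P(B) = 1 - 0.55*B - 0.05*B^2 (illustrative numbers):
# its characteristic roots are the roots of x^2 - 0.55*x - 0.05.
phi = np.array([0.55, 0.05])
print("characteristic roots:", np.abs(np.roots(np.r_[1, -phi])))  # well below 1

# Fit such a model to a synthetic linear-trend-plus-AR(1) series and forecast
rng = np.random.default_rng(1)
n = 200
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
T = 0.008 * np.arange(n) + noise
fit = ARIMA(T, order=(2, 0, 0), trend="ct").fit()

# The forecast increments settle onto the fitted slope of F: the ARMA
# part has died away, and the statistics has just estimated b and smoothed.
steps = np.diff(fit.get_forecast(steps=60).predicted_mean)
print("forecast increments at horizons 2, 10, 50:", steps[[0, 8, 48]])
```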

Now, you can assume various exotic forms for F. There is no constraint, unless you choose to invoke some physics (which econometricians usually don't). If you can find a straight-line fit, you can always find a segment of an exponential, or of a sinusoid, which will fit equally well, but with very different forecast behaviour.
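
A toy illustration of that (Python with scipy; the data are synthetic and the three functional forms are just the ones mentioned above): fit a line, an exponential segment and a sinusoid segment to the same short, roughly linear series, and compare what they say well outside the fitted span.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 60)                  # observed period
y = 0.8 * t + rng.normal(0, 0.05, t.size)      # roughly linear data

def line(t, a, b):    return a + b * t
def expo(t, a, b, c): return a + b * np.expm1(c * t)   # exponential segment
def sine(t, a, b, c): return a + b * np.sin(c * t)     # sinusoid segment

t_future = 3.0                                 # well beyond the data
for name, f, p0 in [("line", line, [0, 1]),
                    ("exp", expo, [0, 1, 0.5]),
                    ("sin", sine, [0, 1, 1.0])]:
    p, _ = curve_fit(f, t, y, p0=p0)
    rmse = np.sqrt(np.mean((f(t, *p) - y) ** 2))
    print(f"{name}: in-sample RMSE {rmse:.3f}, "
          f"value at t={t_future}: {f(t_future, *p):.2f}")
```

The in-sample errors come out nearly identical; the extrapolated values do not.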

As I described here, Mills chose for HADCRUT temperature either an IMA form, with F constant, or an AR form, with F linear. Each involves a future trend, but with uncertainty. Being uncertain of the future trend implies a wide scatter of forecasts. But in each case, he found that the range for trend (or drift) included zero, and so proceeded on the basis that it was zero. Thus he changed the structure of the forecast function F(t,b), with radical effects on the forecast. In fact, the forecast then just had to be constant. The model allowed no other.

Now statistically, this is very unsound. Saying that zero is within the range of a distribution doesn't mean zero has any preferred status. You could choose any of a large range of other numbers within the range. All you can say is that you can't discriminate (statistically). The effect of all those other choices should be reflected in the uncertainty of the forecast. But Mills calculated his uncertainty on the basis that the trend was certainly zero, and the scatter would just depend on the future ε.
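
A back-of-envelope sketch shows how much that assumption suppresses (Python; the innovation size and drift standard error are made-up numbers of roughly plausible magnitude, not Mills's fitted values). For a random-walk-with-drift model, the h-step forecast error variance is about h·σ² if the drift is pinned at zero (or treated as exactly known), but gains an extra (h·se(drift))² term if the drift uncertainty is carried through:

```python
import numpy as np

sigma = 0.10        # assumed one-step innovation s.d. (deg C) -- made up
se_drift = 0.009    # assumed standard error of the drift (deg C/yr) -- made up
h = np.array([10, 30, 85])          # forecast horizons in years

# Drift pinned at zero: spread comes only from the accumulated future eps
sd_fixed = sigma * np.sqrt(h)

# Drift uncertain: its error accumulates linearly with the horizon
sd_uncertain = np.sqrt(h * sigma**2 + (h * se_drift) ** 2)

for hh, a, b in zip(h, sd_fixed, sd_uncertain):
    print(f"h = {hh:2d} yr: s.d. = {a:.2f} (drift fixed at 0) "
          f"vs {b:.2f} (drift uncertain) deg C")
```

The drift term dominates at long horizons, which is exactly the uncertainty that vanishes if you declare the trend to be zero.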

Which comes back to the point, what is the basis for choosing the form of F? It does have to use some knowledge of physics. We are testing a theory that temperatures may be trending (or drifting) because of GHG. You have to test it with a form that at least allows that to happen. To the extent that you are uncertain of the trend, you are uncertain of the forecast.

I think of line fitting in Taylor series terms. If F is a secular function, it's reasonable to think that it may be a smooth function of time, corresponding to the incremental effect of the drivers. A linear assumption is a first-order Taylor approximation at a point, probably the end point. That will hold until the second and higher derivatives start to have a dominant effect. To the extent that we can establish that those are small, the forecast will work to a corresponding time into the future. That may be not very far, and linear trend extrapolation is not a very useful approach. And despite what contrarians sometimes think, it is very little used.
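
In symbols, with t0 the end of the observed period (this is just the standard expansion, not anything taken from the papers above):

```latex
F(t) \approx F(t_0) + F'(t_0)\,(t - t_0) + \tfrac{1}{2}F''(t_0)\,(t - t_0)^2 + \cdots
```

The linear extrapolation is only as good as the neglect of the quadratic and higher terms.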

Unit roots

A side story here is the use of ARIMA models with an I, as in Keenan's ARIMA(3,1,0). A unit root. This is the difference between a model that has a random perturbation about a mean value, and a random walk. Both Mills and Keenan approach this as just a matter to be resolved by goodness of fit. Beenstock and Reingewertz, cited above, make more of its significance, but their analysis is a total muddle.

But for the reasons described above, it isn't just a matter of goodness of fit. Again, there is an infinite range of possible P,F,Q functions to choose. And statistics can't decide that. The only basis for choice is that the function F (with P and Q), with parameters, is a reasonable model for the physics. And a random walk simply isn't because:
  • There are physical laws to be satisfied. Temperature determines the rate of outgoing infrared energy, and this in the medium term has to balance incoming. Now there are all sorts of short term variations (weather) that allow transient deviations, but they don't allow temperature just to drift unconstrained to a new level. A random walk does.
  • As a variant of that, we know that the history is of temperature bounded within ranges. Seas haven't boiled, or (generally) frozen. In fact the range has been quite narrow, up to the Ice Age range at most. That is inconsistent with a random walk, as the simulation sketch after this list illustrates.
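
Here is that sketch (Python; φ and σ are generic illustrative values, not fitted to any temperature series): a pure random walk wanders to arbitrary levels, while a mean-reverting AR(1) stays within a band set by its noise level and reversion strength, which is the bounded behaviour the energy-balance argument requires.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, phi = 10000, 0.1, 0.95     # illustrative values only

rw = np.zeros(n)     # random walk:    x[t] = x[t-1] + eps
ar = np.zeros(n)     # mean-reverting: x[t] = phi*x[t-1] + eps, |phi| < 1
for i in range(1, n):
    rw[i] = rw[i - 1] + rng.normal(0, sigma)
    ar[i] = phi * ar[i - 1] + rng.normal(0, sigma)

# The walk's spread grows like sqrt(t) without limit; the AR(1) spread
# saturates near sigma / sqrt(1 - phi^2), i.e. it stays bounded.
print("random walk range:   ", round(rw.min(), 2), "to", round(rw.max(), 2))
print("mean-reverting range:", round(ar.min(), 2), "to", round(ar.max(), 2))
print("stationary AR(1) s.d.:", round(sigma / np.sqrt(1 - phi**2), 2))
```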

I made this objection to Keenan's model, and some said - well, why couldn't it just have been a random walk for the period of observation only? But that comes back to the requirement that the P,F,Q model be an explanation for the process, and even one that can be relied on for forecast. Saying it is sometimes RW, sometimes not, just begs the questions, when and why? It's a model that makes no progress.



20 comments:

  1. The lines of evidence point toward determinism as driving the natural variability of many climate measures. It's actually misguided to apply Markov or random walk models to a behavior that is forced by a non-random physical process.

    "A side story here is the use of Arima models with an I, as in Keenan's (3,1,0). A unit root. This is the difference between a model that has a random perturbation about a mean value, and a random walk. "

    Lots of misconceptions about random walk. A pure random walk is a martingale process, and can wander infinitely far from the mean -- that is, given enough time. But there is a kind of random walk that has a reversion to the mean, as modeled by a potential well -- in physics this is called an Ornstein-Uhlenbeck process. And yes, this is categorized in the ARIMA class, but statisticians don't model physics and so use a generic name.

    Note that this is different than a random perturbation about a mean, which is white noise jitter and not a Markov process.
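
    To make that concrete, a small sketch (Python, illustrative numbers only): the O-U process dx = -theta*x dt + sigma dW, sampled at a fixed step, is exactly an AR(1) with coefficient exp(-theta*dt), which is why it sits inside the ARIMA class under a generic name while still reverting to the mean.

    ```python
    import numpy as np

    # Ornstein-Uhlenbeck sampled at step dt is exactly AR(1):
    #   x[t+dt] = phi * x[t] + eps,  phi = exp(-theta*dt),
    #   var(eps) = sigma^2 * (1 - exp(-2*theta*dt)) / (2*theta)
    theta, sigma, dt, n = 0.5, 0.2, 1.0, 5000    # illustrative values only
    phi = np.exp(-theta * dt)
    eps_sd = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))

    rng = np.random.default_rng(4)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0, eps_sd)

    print("implied AR(1) coefficient:", round(phi, 3))
    print("sample s.d.:", round(x.std(), 3),
          " theoretical sigma/sqrt(2*theta):", round(sigma / np.sqrt(2 * theta), 3))
    ```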

    There is actually no evidence that large scale processes are best modeled as a stochastic process.

    So for example, a process such as ENSO is closer to a deterministic forcing than it is to the red noise of an O-U process. And for the life of me, I can't figure out what the charade is over acting as if the QBO contains any randomness.

    IMO, it's just a matter of time before physical models applied to machine learning experiments will root out all the deterministic behaviors, and the stochastic crowd gets pushed to the corner on this topic.


    ReplyDelete
  2. Nick, Thanks for the summary and for the econometrics background to all this stochastic modelling. I have often wondered why, all those years ago, McIntyre and McKitrick got into using those strange red noise inputs in their Monte Carlo modelling. Now it's clear: it's become a standard tool for economists, presumably for modelling stock market fluctuations.

    It strikes me that economists would do well to give more account to external forcings, despite the supposed failure of deterministic models. For instance, the effect of prolonged drought in the Middle East on the Syrian economy and polity.

    ReplyDelete
    Replies
    1. BillH, By saying "economists would do well to give more account to external forcings", you have hit the nail on the head. When you see all these differential equations without a clear forcing function, you know that they are missing 50% of the problem formulation.

      Can you imagine an economist working on ocean tidal prediction? They would likely create a fancy autonomous differential equation which could adequately fit the observations. But then someone would come along and point out that it's just the sun and moon providing the forcings, thus collapsing their house of cards.

      If you can understand and appreciate that hypothetical situation, then consider that is what is happening right now with the modeling for QBO. Unfortunately, in this case, we can't just blame the economists, but climate scientists such as Richard Lindzen, who for whatever reason were never able to root out the clear external forcings.

      http://contextearth.com/2016/03/21/inferring-forced-response-from-qbo-wave-equation/


      Delete
    2. It doesn't add up... March 30, 2016 at 5:25 AM

      It seems I can imagine a "climate scientist" working on tidal predictions, using an inadequate model based simply on the gravitational effects of the sun and the moon, and then adjusting the data from tide gauges to fit the model. The realities of tidal prediction are actually far closer to the idea of curve fitting, being largely based on Fourier analysis of real data in order to capture the complexities induced by the real shape of the oceans and the oscillatory harmonics it induces; this far outperforms Laplace's work improving on the basic gravitational model.
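
      For what it's worth, the harmonic method is easy to sketch (Python; the M2 and S2 periods are standard, everything else here is made-up illustration): fit cosine/sine pairs at the known constituent frequencies by least squares and read off the amplitudes.

      ```python
      import numpy as np

      # Classical harmonic analysis: least-squares fit of cos/sin pairs at the
      # known tidal constituent frequencies (here just M2 and S2, cycles/hour).
      freqs = np.array([1 / 12.4206, 1 / 12.0])      # M2, S2 periods in hours
      t = np.arange(0.0, 24 * 30, 1.0)               # a month of hourly "data"
      rng = np.random.default_rng(5)
      h = (1.2 * np.cos(2 * np.pi * freqs[0] * t + 0.3)
           + 0.4 * np.cos(2 * np.pi * freqs[1] * t + 1.1)
           + rng.normal(0, 0.05, t.size))

      X = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs
                           for f in (np.cos, np.sin)])
      coef, *_ = np.linalg.lstsq(X, h, rcond=None)
      amps = np.hypot(coef[0::2], coef[1::2])
      print("recovered amplitudes (true 1.2 and 0.4):", amps.round(2))
      ```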

      The methodology was originally developed and incorporated in an analogue computer by the first Baron Kelvin, who was of course a physicist of the first rank, albeit he made a number of predictions that turned out to be radically wrong.

      Further work was done by George Darwin (a son of Charles), who was a mathematician and astronomer, before being further refined by A.T. Doodson.

      Delete
    3. I agree it doesn't add up, especially since the chief theorist behind the QBO, Richard Lindzen, had suggested very early on that lunisolar tides could have something to do with the behavior -- in his words, "Lunar tides are especially well suited to such studies since it is unlikely that lunar periods could be produced by anything other than the lunar tidal potential."

      Unfortunately, Lindzen could not detect the pattern, most likely because he didn't understand how the lunar gravitational cycles could be strongly aliased against the seasonal cycle, which can combine potential energy and thermal energy to get the QBO in motion.

      It's very easy to conceptually debunk the lunisolar cycle QBO model -- all you have to do is show that the lunar cycles get out-of-phase with the QBO measurements. I am not able to falsify the model because the fit works way too well and has stayed in phase with the moon for more than 60 years.

      Delete
    4. And you have to consider this model for ENSO to appreciate how red noise plays no role

      http://imagizer.imageshack.us/a/img923/219/OyG6rV.png

      Delete
    5. One major point of confusion looking at that is the negative spike for 1997/98. You've marked 'Phase Reversal' at the top - is this suggesting some major shift in the climate system so that processes which used to produce positive ENSO now produce negative ENSO?

      Delete
  3. Though I am anything but a mathematician, I had real pleasure in reading - starting from here, thanks Nick Stokes - lots of posts and comments around Mills' forecasting blind alley.

    Best of all was a hint by L. Hamilton in his comment:

    http://julesandjames.blogspot.com/2016/02/no-terence-mills-does-not-believe-his.html?showComment=1456358463898#c3485582121726390124

    where L. Hamilton reminds us of an earlier paper written by the same Mills

    http://link.springer.com/article/10.1007/s10584-008-9525-7#/page-2

    ' How robust is the long-run relationship between temperature and radiative forcing? '

    That paper's abstract ends with

    ' This result is robust across the sample period of 1850 to 2000, thus providing further confirmation of the quantitative impact of radiative forcing and, in particular, CO2 forcing, on temperatures. '

    Delicious...


    ReplyDelete
    Replies
    1. Well, the GWPF made the rather surprising admission that Mills was paid 3000 pounds for this work. As far as Mills was concerned he was asked to model some data assuming no deterministic inputs, did it, and took the money. Since this is the sort of thing he's probably been doing for much of his career (much of it looks cut and pasted) it looks like easy money. One wonders if he deliberately ran the model from the beginning of 2015, so that it would already be falsified (at 95% confidence) by the time it came out. Funny: you'd have thought all those illustrious professors would have checked up on this, especially the author of the introduction, McKitrick with his profound knowledge of climate science.

      Delete
  4. As all of us commenters know, Nick Stokes and Grant Foster are quite busy with lots of things.

    But when you see this surge of really poorly thought-out guest posts, e.g. at Climate Etc, claiming that "Temperatures do not add" or that "Inappropriate use of linear regression can produce spurious and significantly low estimations" etc., you truly hope for some genuinely scientific contribution to be published there!

    ReplyDelete
  5. I offer another jolly example of odd "statistics", by a few ex-NASA-Apollo guys who comprise "The Right Climate Stuff."

    1) See Hal Doiron in his 14-minute talk, in which he uses a simple climate model, starting with Ljungqvist (2010) and showing 1000-year and 62-year sine-wave cycles to prove global warming is no problem. Exactly where the 62-year cycle came from I'm not sure, and of course, using a 2000-year reconstruction of 30-90N (25% of Earth) seems a little chancy for establishing a 1000-year cycle. People will be pleased to know that the model shows flat to down temperatures for 2000-2030.

    2) If for some reason that isn't enough, there's a longer version by one of his team, Jim Peacock, in a lecture for Doctors for Disaster Preparedness in 2014, using some of the same slides. The first half is Apollo stories, then he talks on their climate models to prove that old Apollo mechanical engineers can do it better than climate scientists.

    Note of course, that these guys are a tiny fraction of NASA retirees, and the people at NASA, who are generally pretty competent, actually accept the science, use NOAA data, and model risks from sea level rise.

    ReplyDelete
    Replies
    1. John,
      The 62 year cycle is very popular, based on two apparent periods in the observed surface data. One, due to Akasofu, showed up at WUWT a few days ago. It's funny in context, because it's from 2008. With cycle predictions, if temps have been rising, the inevitable prediction is for a downturn, and that is what it shows. But of course, temps have gone up since 2008, and I think will align well with the rising IPCC prediction shown (and mocked). I was going to say so, but got sidetracked by an idea for a general graph superimposing gadget, which I needed and will blog about soon.

      Delete
    2. It's the Automatic Multi-excuse Oscillation: the AMO.

      Delete
  6. Hal Doiron and Norm Page are both Houston-based signers of the "300 scientists" list Will Happer collected to help Lamar Smith harass NOAA.
    For amusement, Page seems to have subscribed to the expanding Earth hypothesis, i.e., "radius of the Earth has increased by at least 33 percent since the Paleozoic".

    ReplyDelete
  7. Akasofu is a sad case, and I see his paper that Page referenced was in a SCIRP journal, one of Jeffrey Beall's "favorites." Oh, my.

    ReplyDelete
  8. I see the GWPF spin machine is hard at work on the Mills report. It's got a pliant "environment editor" at the London Times (Murdoch stable) to produce an article claiming that Mills predicts no global temperature rise right up to 2100, with a very clear graphic demonstrating exactly that. The GWPF then posts this article on its website:

    http://www.thegwpf.com/planet-is-not-overheating-says-uk-statistician/

    To think that the GWPF is run by the UK's former finance minister... (who also happens to be the father of Monckton's brother-in-law)

    ReplyDelete
  9. Nate Silver may have made stats sexy but Nick and Tamino have made it essential.

    ReplyDelete
    Replies
    1. To the survival of the species

      Delete
    2. To put it charitably, Nate Silver is ignorant when it comes to science. And I would also argue that statistics plays a minor role, as CO2 warming and ENSO are both deterministic and single-origin in nature. The only real statistical contributor to global temperature variations is volcanoes, and we haven't seen major eruptions recently.

      Delete