Wednesday, June 19, 2013

Better adjusted global temperatures for ENSO, Solar and volcanoes


This is a follow-up to this earlier post, which please see for details. I had run into some difficulty there using the R function nlm() to estimate both the regression parameters and the delay coefficients for each of the exogenous variables Vol, Sol and ENSO. The solar variable, which interacts most weakly, was apt to be assigned a zero or negative delay, which created constant or exponentially rising secular functions that the fit then exploited.

I could avoid this by constraining that parameter. But I think it is better to do as others have done and use a common delay for all three. There is reasonable physical justification for that, and it reduces overfitting.
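
In outline, the calculation now looks like this. This is a stripped-down sketch with made-up variable names, not the posted script (which estimates the delay and the regression parameters together with nlm()), but it uses the same smoothing recurrence:

smooth3 <- function(s, k) {
  # s is an n x 3 matrix of the exogenous series (Vol, Sol, ENSO);
  # k is the single smoothing (delay) coefficient shared by all three
  r <- s
  for (i in 2:nrow(s)) r[i, ] <- (1 - k) * r[i - 1, ] + k * s[i, ]
  r
}

ss_for_k <- function(k, y, s, t) {
  # for a trial k, regress temperature on the smoothed forcings plus
  # linear and quadratic time terms, and return the residual sum of squares
  r <- smooth3(s, k)
  sum(resid(lm(y ~ r + t + I(t^2)))^2)
}

# k can then be chosen by a one-dimensional search, e.g.
# optimize(ss_for_k, c(0.01, 1), y = temp, s = cbind(vol, sol, enso), t = time)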

The result is a much more stable trend pattern across the time intervals and data sets. The trends since 1997 are now mostly between 0.65 and 1.325 °C/century. This might still be seen as a slowdown, but surely a minor one. Oddly, the only exception is the case studied by SteveF, HADCRUT 4 with a linear secular term from 1950. The trend I got there was 0.117 °C/century, I think quite similar to his, as was the decay coefficient at 0.026 (cf. his 0.031).

I'll show below the revised table and images.

A zip file of R code and data is here.

Results


Here is the table. The results look better in several ways:
  • The coefficients are reasonably comparable across cases
  • Adding a quadratic term now always reduces the sum of squares
  • The post-1997 trends are fairly uniform
I have not normalised the units, so the actual magnitudes of the regression coefficients are not easy to interpret. I'd still discount the 1979 quadratic, although no problems are obvious.

Data      Start  Trend   1         Vol       Sol       ENSO     t        t^2       Delay    SS
HADCRUT4  1950   0.117    2e-04    -8.39135  -3e-05    0.42995  0.25141  NA        0.02584  10.079
HADCRUT4  1950   1.024   -0.07304  -2.27873   0.00048  0.14844  0.30742   0.21653  0.12729   8.254
HADCRUT4  1979   1.086   -0.01689  -2.72106   0.00069  0.13331  0.50485  NA        0.11706   3.909
HADCRUT4  1979   1.078   -0.00963  -2.82278   0.00069  0.13478  0.50155  -0.07812  0.11325   3.893
GISS      1950   0.787   -0.00116  -6.00112   5e-05    0.24419  0.34476  NA        0.04173  10.441
GISS      1950   1.246   -0.04836  -2.70426   0.00042  0.1288   0.37122   0.14273  0.11017   9.626
GISS      1979   1.325   -0.01623  -3.28644   0.00065  0.1334   0.49335  NA        0.09649   5.086
GISS      1979   1.315   -0.01142  -3.38601   0.00064  0.13513  0.48994  -0.05241  0.09342   5.079
NOAA      1950   0.65    -0.00121  -4.42051   0.00017  0.19036  0.33218  NA        0.06223   8.103
NOAA      1950   0.877   -0.04486  -2.39086   0.00042  0.13126  0.34807   0.13221  0.118     7.351
NOAA      1979   0.929   -0.01586  -2.81475   0.00059  0.12721  0.47137  NA        0.10492   3.581
NOAA      1979   0.913   -0.00629  -2.99252   0.00058  0.13034  0.46551  -0.10409  0.09849   3.554

Images


The images may be scanned in the viewer below. There are 24, and you can flip through them using the top buttons. But you can also subselect using the selection boxes. For example, if you choose GISS, you will then cycle through just the 8 GISS plots. If you also select plot type "components", you will cycle through the four component plots, and so on.



[Image viewer: selection boxes for Data, Start Year, Regressor and Plot type.]

SteveF has had difficulty getting comments through the system - I'm trying to find out why. Anyway, he sent these comments by email, and I'll just follow with a few points I made in reply - hope we can get comments working for him soon:

SteveF says:

I think I may have identified why the influence of the solar cycle increases in the 1979-to-present analyses compared to the 1950-to-present analyses: there is a coincidental congruence between the 1983 and 1991 volcanoes and the peak (or near peak) of the solar cycle. That is, the downward part of the solar cycle is aliased with the volcanic influence. In the longer series this influence is diluted in the regression, though it is certainly still influencing the results. The 1964 to 1970 eruptions, while weaker, do not coincide with the peak of the solar cycle. I have tried several things to straighten this out, so far without a lot of success. I will next try a regression from 1950 to 1975 to see if the solar cycle influence falls to almost nothing (or even negative!) as I suspect it will, and then try substituting a regression-specified quadratic or cubic secular function.

I think a defensible argument is, absent a good physical rationale, that the best approach is to simply sum the two radiative terms (both in W/m^2) and see what happens to the diagnosed "optimal lag". Any regression that reports a physically implausible lag constant for best fit seems to me very suspect. There have been multiple published reports with estimated lags for volcanoes; a tau value near 30-36 months for the decay of the response seems to be common, though the exact value depends weakly on the assumed climate sensitivity (I have independently verified this is correct). Perhaps the best thing is to just try a few lags in the credible range and see what the best-fit regression constants turn out to be. I also note that a very much higher solar than volcanic response (on a W/m^2 basis) is exactly the opposite of what I would expect on physical grounds; the slower solar cycle response ought to be lower, on a degrees per W/m^2 basis, than the volcanic, because the 11 year solar cycle 'sees' slower responding (deeper) parts of the oceans more than the shorter volcanic forcing.

If one accepts a large discrepancy between the measured changes in solar intensity over the cycle and the size of the temperature response, then some kind of 'amplification' of the solar cycle must be responsible (e.g. more low clouds at the minimum), and evidence for that seems lacking. Of course, even if one assumes the true solar cycle forcing (in W/m^2) is far higher than the measured changes in solar intensity, that doesn't mean there should not be considerable lag in the temperature response.

Finally, I tried a linear secular trend starting in 1964 (to include the earlier volcanoes); the fit is improved compared to starting in 1950 and the discrepancies between model and Hadley global temperatures are much reduced. Perhaps you could try that as well.
Nick says:

I can well imagine that solar cycles might get tangled with volcanoes. My experience was that the solar cycle, being weak, could be pushed around a lot by the optimisation process - both the linear and the lag fitting. It had a strong tendency to drift into zero or negative lag, which nlm() took advantage of to create spurious fitting functions. For the same reasons, I don't take too much notice of the amplitude that emerges from the regression. It could be just taking up a bit of the volcano signal.

My main concern at the moment is with the use of Nina34. I think it's a sensitive index, but it also includes the warming trend, and when you adjust with it, it takes out that part. I'm inclined now to switch to SOI, which may not be as good, but can't drift. A practical alternative is to detrend Nina34, but that is harder to justify, and the detrending would depend on the interval.
I'd be happy to try a 1964 start. Another option is to use GHG forcing as one of the regressor functions.

58 comments:

  1. Nick, looks more stable. Obviously having the same delay makes physical sense if all these are simplistically viewed as equivalent radiative forcings.

    I don't know if you're still watching Steve's thread at Lucia's. I have been discussing the implications of the analytical solution to the linear model IDE with Paul_K and picked up a small but very important error.

    The result confirms something that I have been pushing for a few years now, that this sort of fitting ( at least for short term series) should be fitted to dT/dt and NOT the time series T(t).

    That is quite important obviously, but I think it is rigorous in relation to the linear model and the solution to the differential equation.

    Check it out and see if you think it makes sense.

    Basically, it is dT/dt that is a power variable in terms of physical units. It is incorrect to try to fit T (which is energy not power) to a power forcing. The units just don't work.

    If you look at the thread I explain why the short term response should be fit to dT/dt in terms of the equations too.

    You're trying to fit apples to oranges.

    ReplyDelete
    Replies
    1. The fit of dT/dt instead of T makes sense for me. The units are more consistent, and the equation, based on a simple energy conservation principle, is sound: dT/dt=[Pin-Pout]/m/c, with Pin and Pout power and m and c mass and specific heat capacity. It seems reasonable to me, I agree with you.
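
      Spelling that out (λ here is my notation for the linearised feedback coefficient, not from the thread): if Pout is linearised about a reference as P0 + λT, that balance is just the first-order lag equation discussed further down:

      \[
      \frac{dT}{dt}=\frac{P_{in}-P_{out}}{m\,c},\qquad P_{out}\approx P_0+\lambda T
      \;\Rightarrow\;
      \frac{dT}{dt}+\frac{T}{\tau}=\frac{S}{\tau}F(t),\quad
      \tau=\frac{m\,c}{\lambda},\; S=\frac{1}{\lambda},\; F(t)=P_{in}-P_0 .
      \]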

      Delete
    2. Greg,
      The result confirms something that I have been pushing for a few years now, that this sort of fitting ( at least for short term series) should be fitted to dT/dt and NOT the time series T(t).


      The exponential smoothing is a step in that direction. If the coefficient goes to zero, it becomes a cumulative integral, which is then the integrated version of what you are after.

      Delete
    3. I see what you mean about the coeff of zero but I thought you were applying the exp kernel to all the inputs not to temp, as I understood Steve_L was doing.

      What I was suggesting was doing the mv regression using unmodified forcings against the first difference of the temp TS ie dT/dt.

      I think we are a bit at cross purposes.

      Does that make sense now?
      Greg

      Delete
    4. Oh, hang on. Are you saying integrating all the inputs is the same as diffing temps. Hmm. I'd rather do it directly.
      Greg

      Delete
    5. I've delayed only the exogenous variables, which is what I believe SteveF did too. So it would be temp vs integrated exog.

      Delete
  2. BTW would you like to post some R code of how to do the fit. I would not mind fiddling but I can't find the two days it will need to decrypt the doc and the arcane hieroglyphics needed to define a model to fit in R.

    ReplyDelete
    Replies
    1. Greg,
      Yes, I'll post a zipfile of code and data later today.

      Delete
    2. Greg,
      I've posted the code and data - see above

      Delete
    3. thanks, I'll check back. Doesn't seem to be any notification fn on blogspot.

      Delete
  3. Nick, I tried to load nlp.r and got two errors.

    load ("nlp.r")
    Error: bad restore file magic number (file may be corrupted) -- no data loaded
    In addition: Warning message:
    file 'nlp.r' has magic number '#Code'
    Use of save versions prior to 2 is deprecated

    I'm running: R version 2.15.1 (2012-06-22)



    Also one of the files you zipped was not the data you imagined:
    == hadcrut4.txt ==

    An error occurred while processing your request
    Reference...

    Greg

    ReplyDelete
    Replies
    1. Greg,
      load() is for binaries. You need 'source("nlp.r")'

      I'll fix the HAD thing. It's a nuisance that HADLEY and CRU use different formats for the same data.

      Delete
    2. Greg,
      I've put had4.txt in the zip

      Delete
    3. Thanks Nick, I unzipped that one and got 8 hadcrut4 graphs then it barfed. Graphs look superficially the same as what you posted so that's good.

      Any ideas what's broken?
      Greg


      > source("nlp.r")
      Read 3012 items
      Read 2976 items
      Read 806 items
      [,1]
      [1,] 10.07881
      [1] "HADCRUT4_1_4_.png"
      (Intercept) n
      -2.427327e-04 6.105980e-07
      [,1]
      [1,] 8.253907
      [1] "HADCRUT4_1_5_.png"
      (Intercept) n
      7.457594e-03 -2.029373e-05
      [,1]
      [1,] 3.908916
      [1] "HADCRUT4_349_4_.png"
      (Intercept) n
      0.0335411977 -0.0001654242
      [,1]
      [1,] 3.892844
      [1] "HADCRUT4_349_5_.png"
      (Intercept) n
      0.0333863135 -0.0001644248
      Error in nlm(res, p = hv[1:(M + 2)]) : non-finite value supplied by 'nlm'
      In addition: There were 16 warnings (use warnings() to see them)


      Warning messages:
      1: In nlm(res, p = hv[1:(M + 2)]) : NA/Inf replaced by maximum positive value
      2: In nlm(res, p = hv[1:(M + 2)]) : NA/Inf replaced by maximum positive value

      same thing 16 times.

      Delete
  4. Greg,
    It's finished the HADCRUT sequence, and then for some reason nlm() has failed to converge with GISS. It's right as far as it goes - the numbers printed are the SS. I've checked that it is the right data file. So I'm puzzled. Our R versions are the same.

    ReplyDelete
    Replies
    1. Strange. I ran what I think is identical code and data, and indeed for HADCRUT the SS was identical, but it completed. If you want to get it to complete with just HAD, go to line 54
      for(J in 1:3){#dataset
      and just change the 3 to a 1. Or
      for(J in c(1,3)){#dataset
      should skip GISS and go to NOAA.

      Delete
    2. Thanks. It still failed, so I had a look at the data and removed the "missing" data; that fixed it.

      Now I'm sure this is just one extra line of code to someone who is familiar with R but will take me all night to work out how to do.

      How can I take the first difference before doing the fit? (If you would be so kind ;) )

      thx, Greg

      Delete
  5. Greg,
    I'm glad it worked - I'm still not clear why it failed, but anyway.

    The line
    y=yy[n,J]
    takes the temp from the array. You can difference at this stage (add a line following):
    y=diff(c(NA,y))
    The initial NA is to keep the length of the vector right; if it gives trouble, substitute 0.
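
    As a toy illustration of the mechanics (made-up data, not the script itself):

    set.seed(1)
    t  <- 1:600                                    # months
    y  <- 0.001 * t + 0.1 * arima.sim(list(ar = 0.6), 600)   # toy "temperature"
    dy <- diff(c(NA, y)); dy[1] <- 0               # first difference, length preserved
    summary(lm(dy ~ t))                            # regress the differenced series instead of y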

    ReplyDelete
    Replies
    1. Thanks, looks like what I needed but it's not that simple. Output is flat lines, looks like more serious mods are required.

      Delete
    2. I have limited it to linear detrending (which should have minimal effect on dT/dt) and the "bits" graphs come out as perfectly straight horizontal lines. I had to replace the NA as you suggested.

      Even if the results were a mess I would expect this to do some kind of a fit. Can you see why this breaks?

      Greg.

      Delete
    3. Greg,
      Actually, I should have recommended
      y=c(0,diff(y))
      if you're replacing NA.

      Delete
  6. Where I'm hoping to go with all this is to combine the two, since the correct linear solution (as I'm discussing at Lucia's) includes both. A way to do the multivariate fit to both _may_ actually fit post 2000 without all the fudging.

    To do it properly requires another method, since the ratio between the in-phase response that you are fitting already and the dT/dt response I'm trying to do is not fixed. The ratio is a function of frequency.

    However, if we "filter" the combined response and simplify things, we can say LF dominates the in-phase term and HF dominates dT/dt

    ie we need to fit c1*T(t)+c2*(dT/dt)

    instead of the more correct 1/f weighting we approximate an average for each frequency band.

    Now I think that should be possible with simple changes to what you have already done.

    This should remove the need to "spread" the forcing terms and probably all the detrending as well. If there is a long term multi-century upward trend (and I think there is) that should be added as an explicit variable and not used as a pretext for removing variability from the time-series, much of which may be more properly attributable to the forcing variables we are trying to fit in the first place.

    If I was capable of parsing R code I would have done this already. Unfortunately R is alien to me and I don't have the time to invest in that particularly steep learning curve.

    I'm attacking this problem from a different angle using tools I already understand. Though I think adapting this method would be both simple to do and enlightening.



    ReplyDelete
  7. Nick, I've tried to extract the essentials of this dT/dt argument. You're pretty good on the maths, I'd value your opinion. Especially about the value of the cross-over point between the two regimes.

    http://climategrog.wordpress.com/?attachment_id=399

    If there are time constants like 20 years in the temp response and my estimations are correct, everything with periods under 60 years will be dominated by dT/dt.

    ReplyDelete
    Replies
    1. Greg,
      Firstly, I think for 20 yrs it should be 2*pi*20=126 years.

      But I can't see where it is going to lead to. As DeWitt keeps explaining, the CO2 response to current temp fluctuations is going to be very hard to measure on a time scale of more than a few years.

      Delete
    2. Perhaps forget "where it's going" for the moment. There's nothing about CO2 in what I linked there. It's purely about the maths.

      The point of what I've posted is to recognise what the correct _full_ response to the linear feedback model is. That is fundamental to what you are doing here because you are doing a linear regression of the supposed radiative forcings on T(t). That implies an implicit assumption that the response to the forcing is in phase with the forcing. It is not.

      I've shown that this is not the case: there are both in-phase and orthogonal (time derivative) responses. The latter dominates HF, the former LF. So determining the cross-over frequency/periodicity is an essential first step.

      If you can then show that the primary time constant of the system is such that you can ignore one or other of these terms for the data being considered, then a linear regression on either T(t) or dT/dt may be proposed.

      Your reply above shows that you have not taken the time to read and understand what I posted so I suggest that you take the time to digest it.

      This is not your error, it's just one you have picked up from loads of published climate science including the F&R.

      It seems to me that just about everyone is making the mistake of assuming that a "linear" model must have a linear response and then rushing off to do the regression fit.

      The key point is that it is a linear feedback model, not a linear model, and linear feedbacks have the response that Paul_K provided, which is far from linear.

      ωτ=1 is the cross-over between the two regimes, where the two coeffs are equal, so the first requirement is to assess what tau is for the system and to assess whether the data being considered is sufficiently devoid of frequencies on either side of that divide that part of the response equation can be ignored.

      If you see some fundamental flaw with that I'd be interested to know where I'm going wrong.

      Greg.

      Delete
  8. "That implies a implicit assumption that the response to the forcing is in phase with the forcing. It is not."
    It is not. The forcing is lagged.

    ReplyDelete
    Replies
    1. I think you need to be careful about this use of the term lagged. I was going to pick Steve up on this in his original post but thought it may be seen as pedantic nit-picking. It is not, and I now regret not having corrected it.

      Firstly, what you are doing is a convolution. This 'spreads' the duration of the various forcings, producing a later peak. But it does NOT lag the data.

      Also we need to clearly state what kind of lag we are referring to when it is truly a lag. A fixed temporal offset, or time lag, is fundamentally different to a phase lag.

      Now it may be that the convolution is providing the third term of the response equation (though I am just guessing that for the moment, if you can comment on the maths of that equivalence please do ).

      It seems that you are still lacking the derivative of the forcing which is an inescapable part of what you should be regressing.

      Delete
    2. "Now it may be that the convolution is providing the third term of the response equation (though I am just guessing that for the moment, if you can comment on the maths of that equivalence please do )."

      Greg, you have this the wrong way around. The exponential smoothing is exactly the solution of your linear equation, by Paul_K's integrating factor method or whatever. And yes, it is pedantic nit-picking to fuss about whether it is a lag. You have chosen to try to represent it via Fourier Transform and phases; you have to deal with the resulting problems. Your opinions are stronger than your technical skills.

      There's absolutely no reason to include the derivative of the forcing in the regression.

      Delete
    3. So you're saying the convolution of the forcing input with a suitable exponential decay is mathematically identical to the full solution of the linear feedback equation.

      It will be identical to Paul's solution summed over all frequencies.

      Is that correct?

      Greg.

      Delete
    4. Yes, that seems to check out with the Laplace transform of a negative feedback. So if you're applying the convolution theorem to find the response function why don't you say so clearly instead of this crap about introducing a "lag". Hardly pedantic is it.

      Delete
    5. BTW I still have not managed to reconcile the two approaches. Applying linearity and summing all components of the Fourier expansion fed into Paul_K's solution for a pure harmonic produces

      A * forcing – Aτ * d/dt(forcing) + Aωτ*exp(-t/τ) * FFT( forcing )

      The latter term seems to be the convolution but the other two terms are still there. You claim a better understanding; perhaps you can point out why the two are not the same?

      Delete
    6. Goodman,
      You can't be mixing frequency space and temporal space representations in the same equation.
      There is an FFT and d/dt simultaneously.

      This is really muddled analysis and perhaps you can start again from scratch.


      Delete
    7. Perhaps I should have written it more explicitly. Using S[] for the summation operator and F_n for the Fourier coeffs, the solution can be written:

      S[ F_n * sin( ω_n ] - tau* S[ F_n * d/dt sin( ω_n ] + τ*exp(-t/τ) * S[F_n *ω_n]

      That looks dimensionally correct and since we are applying the same summation to all terms I don't see the objection to the third term.

      This requires that the FT is defined, so it is not a fully generalised solution. But since all this is done with detrended inputs anyway, that does not add any further conditions.

      They may be equivalent, but this is not obvious. By linearity considerations, summing the solutions for all the individual Fourier components should give a valid solution.
      If they're different I'd like to know why, and find the error.

      Delete
    8. Greg,
      That makes it a bit clearer. I take it you're substituting the Fourier expansion for T. I can't see what has happened to the forcing. It would help if you wrote the t argument in the trigs.

      But the point is that d/dt sin( ω_n) is a cos, so you have crossover between sin and cos (phase shift). I think it would be simpler with exp(iωt)

      Delete
    9. "what has happened to the forcing?"

      The forcing is the first summation, its derivative is the second, and the third is the transient response, which can be got by applying the same FT coeffs as applied to the sine and cos terms to ω itself.

      The transient is dominated by HF and is damped by the exp decay. I don't know if there is a more intuitive way to express the FT(ω) sum. I agree with Telescope that it looked a bit like I was adding the FFT to the time series the way I originally wrote it, but in fact it is weighting the transient elements with the freq resp of the system, which is quite proper.

      So you see that, with the proviso of a stationary time series for which we can define the FT, the response = forcing + tau * d/dt forcing + exp transient.

      If the dominant tau is about 3.5 or so for climate, you can see why I wanted to regress d/dt(forcing)

      Now unless there's a slip-up in the maths the two solutions should be equivalent for inputs where the FT is definable. It's just that with this formulation we clearly see the presence of dF/dt, whereas in the convolution it is not apparent as such.

      In fact by 'lagging' the input with the convolution as per the Laplace solution you are effectively introducing all the cos terms to achieve the phase lag; it's just not obvious that that is what is happening.

      Does that make more sense to you now?

      Delete
    10. Greg,
      The equation seems to have got mixed up along the way. The equation PaulK solved
      here was
      dT/dt + T/τ = F(t) * S/τ
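
      Its solution by the integrating factor is just the exponentially weighted (convolved) forcing, which is what the smoothing computes:

      \[
      T(t)=e^{-t/\tau}\,T(0)+\frac{S}{\tau}\int_0^{t}e^{-(t-t')/\tau}F(t')\,dt'
      \]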

      I've no idea why you would be differentiating a forcing.

      I agree with Web. This has become hopelessly muddled. And I really can't see the point of it.

      Delete
    11. Hey, I'm not differentiating a forcing, I'm solving the ODE and observing the presence of the derivative.

      Calling it "hopelessly muddled" without any reasoned objection is pretty weak. You may as well say " I don't like it".

      I've laid out the maths pretty clearly. Can you see a mathematical error in breaking down F(t) by Fourier analysis, then using linearity to derive a solution to that equation from the individual solutions for each F component using the formal solution for sin(wt) ?



      I'm open to the idea that I may have made an error but calling it "muddled" does not cut much ice.

      You seem to be competent at maths, and it's not hard to follow. If there is an error, point to it.

      Delete
    12. Greg,
      It's scattered. Can you just write it somewhere as a logical sequence? Somewhere the equation has morphed from a DE in T to one in F.

      Delete
    13. Sorry, I thought you'd been following on Lucia's, where I'd posted a link.

      http://climategrog.wordpress.com/wp-admin/post.php?post=399&action=edit&message=1

      It's not as concise as it could be but I think it covers it.

      It has not 'morphed'; one equation is the linear feedback relationship that governs the system, the other is the system response as a function of an input.

      dT/dt + T/τ = F(t) * S/τ

      It would perhaps be rather surprising if this solved without any differentials of anything, which your comment seems to expect.

      Greg.

      Delete
    14. Greg,
      The thread at Lucia's is long and rambling. I tried your link above but it asked me for a password.

      Delete
    15. oops I posted you the edit link, sorry.
      http://climategrog.wordpress.com/?attachment_id=399

      Delete
    16. Well, Greg, I still can't follow it. This equation
      "So the full solution can also be expressed as follows:
      A * forcing – Aτ * d/dt(forcing) + Aωτ*exp(-t/τ) * transient"

      just comes from nowhere. And I can see no justification for differentiating the forcing. That is likely, as with volcanoes, to be spiky anyway. Temperature is a smoothed response.

      Delete
    17. "...just comes from nowhere. And I can see no justification for differentiating the forcing."

      It does not 'come from nowhere'; it comes from the sum of the individual sin(wt) solutions given by Paul_K's equation (or directly from Laplace if you want to check it).

      I don't need to 'justify' differentiating the forcing because I'm not doing so; I'm simply observing that the second term, Aωτ.cos(ωτ), IS the differential of Asin(ωτ) with a scalar multiplier, tau.

      You obviously find this result rather surprising and are therefore rejecting it. But what counts in maths is the maths , not our expectations.

      Note that A is not a scalar, it is frequency dependent; that is the red line in figure 1.

      Paul's equation is of the form sin + cos + exp; I'm just summing that across all frequencies. The maths is trivial. What bit exactly don't you follow? Is there some step that you think is incorrect?

      Greg

      Delete
    18. Greg,
      I can see what has gone wrong. Aωτ.cos(ωτ) is NOT the differential of Asin(ωτ). A is not constant.

      Aωτ.cos(ωτ) is an integrating operator, not a differentiating op. That's why I thought the appearance of a derivative of forcing was wrong. Aωτ is asymptotically 1/ωτ which is integrating behaviour.

      Delete
  9. Having identified tau as being not just some arbitrary "lag" fitting parameter to "smooth" the input, but the time constant of the climate system, leads to an important implication of your results that you have missed.

    Your regression fitted values of tau range between 0.25 and 0.5 whereas tau of climate is reckoned to be between 3 and 4.5 years.

    That would seem to have two interpretations.

    1. Current estimations of climate and model tau are off by an order of magnitude, in which case you need to discuss the implications for CS, which is inherently tied to tau.

    2. The fit produced by this method is spurious. If tau is off by an order of magnitude, what credibility can we give to the fit results in general, and specifically to the idea that it accounts for the recent 'plateau'?

    Given that none of these inputs could account for the 60y cycle, and your de-trending does not attempt to remove it either, I know where I'd put my money.

    Perhaps you should comment on that now that we know what tau is.

    ReplyDelete
    Replies
    1. Greg,
      I do not claim that this fitting is a good way of identifying the "tau of climate" (there isn't such a thing anyway - there are many time scales). It's a regression - it explains variance. Other values of tau might explain nearly as much. Exponentials are far from orthogonal.

      I can't tell what you've done in your previous comment. You have to be careful with transforming the derivative. But the Fourier transform tapers slowly with high freq - you're convolving with a cut-off exponential, which has 1/ω behaviour.

      Delete
    2. "It's a regression - it explains variance. "

      The process of regression does not 'explain' anything. That is what you have to attempt to do with the model you chose to fit. You are fitting what you say corresponds to a linear feedback response model with a single feedback. At the same time you say "there isn't such a thing anyway - there are many time scales".

      It seems clear that you don't know what it shows or why you are fitting this particular model, but you still try to interpret its results to determine what the true underlying rate of change is for a certain period.

      If the tau you are fitting is not the tau of the climate response, what is it and why are you fitting it?

      This whole exercise seems unfocused and arbitrary.

      Greg

      Delete
    3. "This is really muddled analysis and perhaps you can start again from scratch."

      For the reasons stated above I would have to say the same thing of what you are doing here.

      Greg

      Delete
  10. Greg,
    I think you've mistaken me for another commenter there.

    ReplyDelete
    Replies
    1. Indeed, I beg your pardon. However, my comment was still valid in the context.
      Greg.

      Delete
  11. "... to estimate both the regression parameters and the delay coefficients for each of the exogenous variables"

    Nick, I can't find the 'delay' in the R code nor the convolution. Are you doing this by hand outside the R regression or what ?

    If it's in the R code you posted could you point me to it?

    thx

    ReplyDelete
  12. OK, I've run this with some test data. A synthetic time series that looks "something" like global temps.

    Using that as the input "forcing", I've calculated the response by both addition of Fourier components and the Laplace convolution.

    They are identical.

    http://climategrog.wordpress.com/?attachment_id=402

    I used adjacent months for the diff, so there's a 0.5-month offset; I left this slight offset so we can see both. Using [-1, 0, +1] for the diff would make it centred.

    Here you can see that the in-phase component is dominated by the 60y cycle, the orthogonal by the 9y cycle, in line with what I said originally.

    If I have time later I may try to prove this formally by showing the kernels are equivalent.

    This way of viewing it gives more insight and may (finally) help you see the importance of the derivative term. It is interesting to see it as the weighted sum of forcing and its derivative , both passed through a low-pass filter.

    The exp term is just a spin up but gives a result for the early period that is not possible with the Laplace convolution.

    The input must be stationary for this to work accurately; any linear trends or periods longer than the data need to be removed. The Laplace method is fully generalised for all inputs.

    I'm not sure I can see a reason to prefer this method for fitting but having shown they are equivalent it gives a lot more insight into the relationship between input and output.

    That should provide some insights into both temperature and CO2.




    ReplyDelete
  13. Not so "muddled" after all, then.

    Could you reply to the query I posted above: I can't find the 'delay' in the R code nor the convolution. Are you doing this by hand outside the R regression or what ?

    If it's in the R code you posted could you point me to it?

    thanks.

    ReplyDelete
    Replies
    1. Greg,
      The convolution is in the function res(), which is what nlm operates on. It uses the recurrence relation of SteveF's original post. The lines are:
      k1=abs(b[2+M]);
      k2=1-k1; o=length(b)>10;
      for(i in n[-1])r[i,]=k2*r[i-1,]+k1*s[i,]
      s[] is the Mx3 matrix of the forcing series.
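
      A quick check (illustrative, not from the script) that this recurrence is the same thing as convolving s with a discrete exponential kernel k1*(1-k1)^j:

      k1 <- 0.1
      s  <- rnorm(200)
      r  <- numeric(200); r[1] <- k1 * s[1]   # the script's initialisation may differ slightly - only a decaying start-up transient either way
      for (i in 2:200) r[i] <- (1 - k1) * r[i - 1] + k1 * s[i]
      kern <- k1 * (1 - k1)^(0:199)
      conv <- sapply(1:200, function(i) sum(kern[1:i] * rev(s[1:i])))
      max(abs(r - conv))                      # agrees to rounding error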

      Delete
    2. Thanks Nick, comparing to the original feedback equation, it seems to have lost a linear sensitivity scaling factor. The scaling applied to T(t) is not the same as that for F(t)

      dT/dt + T/τ = F(t) * S/τ

      Maybe that could be added to s[] beforehand.

      Greg

      Delete
    3. Greg,
      No, it's a regression - no need to worry about scaling. If you scale up the forcing, the regression coef reduces by the same amount, and the fitted component is unchanged.

      Delete
    4. Nick, the point is the formula is wrong. If you let the regression 'fix' it, I suppose you could then scale back the fitted value by the sensitivity (K/W/m2).

      Anyway, it's all rather academic since the whole exercise is compromised until you take account of the 60 year cycle. Since you don't have a regression variable that can account for it, you need to remove it in the detrending stage, either by fitting a 60y cosine or by using a cubic rather than a quadratic, which could at least follow the general form. Extending the current code to do a cubic should be a doddle.

      If you pretend it is not there, you are trying to simulate it, incorrectly, with the regression variables that you do have. The result is necessarily spurious.

      BTW if you use a cubic, you will probably totally remove the downturn and end up with a pretty constant rise. That is the claimed object of the exercise.

      I don't know whether this will bring your fitted tau more into line with acceptable values, it may help.

      I still think the exercise is beggared by the 9y cycle Curry & BEST published on recently, which is a strong feature in many SST records. This will almost certainly end up spuriously attributed to the two volcanoes in that short record. That is the real reason why Grant "Tamino" Foster restricted his analysis to post-79.

      Have a look at my 9/22/60y model below. It really was just a lash-up to have some test data; I'm surprised how well it works with the 'lag'/filter provided by the linear feedback.





      Delete
  14. I just lashed up the 9/22/60y model, using equal weight for each, to have some test data. It turns out to be quite close to reproducing AMO when fed through a tau=3.5 linear feedback:

    http://climategrog.wordpress.com/?attachment_id=403

    It matches the 1974 dip better than most of what you did here and shows the supposed strong volcanic correlations may be spurious.
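
    For anyone wanting to reproduce the idea, something like this (a rough sketch, not the exact script):

    t_yr <- seq(1880, 2013, by = 1/12)                        # monthly, in years
    f    <- (sin(2*pi*t_yr/9) + sin(2*pi*t_yr/22) + sin(2*pi*t_yr/60)) / 3
    k    <- (1/12) / 3.5                                       # monthly step over tau = 3.5 yr
    resp <- stats::filter(k * f, 1 - k, method = "recursive")  # r[i] = (1-k)*r[i-1] + k*f[i]
    plot(t_yr, resp, type = "l", xlab = "Year", ylab = "Response")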


    greg.

    ReplyDelete