Monday, September 18, 2017

Grids, Platonic solids, and surface temperature (and GCMs)

This follows a series, predecessor here, in which I am looking at ways of dealing with surface variation of Earth's temperature, particularly global averaging. I have written a few posts on the cubed sphere, eg here. I have also shown some examples of using an icosahedron, as here. Clive Best has been looking at similar matters, including use of the icosahedron. For the moment, I'd like to write rather generally about grids and ways of mapping the sphere.

Why grids?

For surface temperature, grids are commonly used to form local averages from data, which can then be combined with area weighting to average globally. I have described the general considerations here. All that is really required is any subdivision of reasonable (compact) shapes. They should be small enough that the effect of variation within is minimal, but large enough that there is enough data for a reasonable estimate. So they should be of reasonably equal area.

The other requirement, important later for some purposes, is that any point on the sphere can be associated with the cell that contains it. For a regular grid like lat/lon, this is easy, and just involves conversion to integers. So if each data point is located, and each cell area is known, that is all that is needed. As a practical matter, once the cell locations are known for an inventory of stations, the task of integrating the subset for a given month is just a look-up, whatever the geometry.

I described way back a fairly simple subdivision scheme that works for this. Equal latitude bands are selected. Then each band is divided as nearly as possible into square elements (on the sphere). The formula for this division can then be used to locate arbitrary points within the cells. I think this is all that is required for surface averaging.
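
As a sketch of how such a scheme might look (illustrative only; `build_bands` and `locate` are my names here, not the actual code used):

```python
import math

def build_bands(nbands):
    """Number of cells in each of nbands equal latitude bands, chosen so
    cells are as nearly square as possible on the sphere."""
    h = math.pi / nbands                      # band height in radians
    ncells = []
    for i in range(nbands):
        mid = -math.pi / 2 + (i + 0.5) * h    # band mid-latitude
        circ = 2 * math.pi * math.cos(mid)    # circumference at mid-latitude
        ncells.append(max(1, round(circ / h)))
    return ncells

def locate(lat, lon, ncells):
    """Map a point (in degrees) to its (band, cell) indices."""
    nbands = len(ncells)
    band = min(int((lat + 90.0) / 180.0 * nbands), nbands - 1)
    cell = min(int((lon % 360.0) / 360.0 * ncells[band]), ncells[band] - 1)
    return band, cell

nc = build_bands(18)                  # 18 bands of 10 degrees each
print(nc[9], locate(0.0, 5.1, nc))    # 36 cells in the equator band; (9, 0)
```

The look-up is just arithmetic and truncation, which is the point: once a station's (band, cell) pair is computed, monthly integration is a table look-up.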

However, for anything involving partial differentiation, such as finite element or GCM modelling, more is required. Fluxes between cells need to be measured, so they have to line up. Nodes have to be related to each cell they abut. This suggests regular grids. In my case, I sometimes want to use a kind of diffusive process to estimate what is happening in empty cells. Again, regular is better.

Platonic solids

Something that looks a bit like a sphere and is easy to fit with a regular grid is a Platonic solid. There are five of them - I'll show the Wiki diagram:

Regular means that the faces are congruent regular polygons, with the same number meeting at each vertex. The reason why there are only five is seen if you analyse what has to happen at vertices (Wiki):

There are 8 shown, but the three with lilac background have defect zero, and are the polygons you can use to grid the plane. The rest correspond to what you would see if you unfolded the polyhedron at a vertex. The size of the defect indicates how much distortion is to be expected if you map the sphere onto the solid.

The simplest mapping is a projection. You can enclose any of these solids in a sphere that passes through all the vertices. A point on the solid maps to the sphere by following its radius until the sphere is reached, and the inverse mapping goes the other way. The relation between defect and distortion is that on the sphere the angles at each vertex are stretched to add to 360°. The effects of this can be seen on my cubed maps.

So the cube (4,3) has a moderate defect of 90°. The icosahedron (3,5) has a smaller (better) defect of 60°. The dodecahedron (5,3) is even better at 36°, but the resulting pentagons are harder to subdivide, which we'll want to do. The tetrahedron and octahedron have no advantage to compensate for their high defects.
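
These defects follow directly from the Schläfli symbols (p,q) quoted above: q regular p-gons meet at each vertex, each contributing an interior angle of 180(p−2)/p degrees. A quick check:

```python
# Angle defect at a vertex of a Platonic solid with Schlafli symbol (p, q):
# 360 degrees minus the q face angles that meet there.
def defect_deg(p, q):
    return 360.0 - q * 180.0 * (p - 2) / p

solids = {"tetrahedron": (3, 3), "cube": (4, 3), "octahedron": (3, 4),
          "dodecahedron": (5, 3), "icosahedron": (3, 5)}
for name, (p, q) in solids.items():
    print(name, defect_deg(p, q))
# cube 90, icosahedron 60, dodecahedron 36, tetrahedron 180, octahedron 120
```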

Subdividing and equal area

Even an icosahedron is likely too coarse a gridding on its own, even though the faces are guaranteed to be of equal area. The point of regular faces is that they can themselves be gridded, with squares for the cube, or triangles for the icosahedron. You can't grid a plane with pentagons, which is the problem for the dodecahedron. But it can be triangulated. So the inquiry now is as to which solid gives an equal area mapping, with an eye on the requirement that the mapping should also permit arbitrary points to be located in mapped cells.

There are actually two ways to proceed. The way I used in the cube data structure I posted here, and the way Clive Best divided the icosahedron as linked above, is to successively bisect, in a way shown below, projecting at each stage onto the sphere. In general that is the better way; I'll describe some particular issues below. The second is to divide the polygon faces as planes, and then project. That leads to more unequal areas, but it can be remapped. The second method simplifies a basic requirement, that an arbitrary point on the surface can have its enclosing cell identified (providing the remapping can be inverted).

A special case - the cylinder

Familiar flat earth projections are actually projections onto the cylinder. You can, at a stretch, regard this as a degenerate Platonic solid, in which the faces are two-sided polygons, ie lines (of longitude). The defect is 360°. The aim is to map onto a finite sheet of paper, so some mapping is always subsequently applied to the y-axis (y=tan(θ), with θ the latitude). The famous one is Mercator's; here the y axis is mapped by y=log(tan(π/4+θ/2)). This makes the mapping conformal - locally each feature is in proportion. But the area ratio varies greatly, so Greenland is huge. An equal area map has y=sin(θ); this minimises distortion (of which there is much) near the equator. You can scale to move this minimum zone to convenient latitudes.
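
The area behaviour is easy to check numerically (a sketch; function names are mine). For a cylindrical map with y = f(θ), the local map-to-sphere area ratio is f′(θ)/cos(θ):

```python
import math

def mercator_y(theta):
    """Mercator: y = log(tan(pi/4 + theta/2)); conformal."""
    return math.log(math.tan(math.pi / 4 + theta / 2))

def lambert_y(theta):
    """Cylindrical equal-area: y = sin(theta)."""
    return math.sin(theta)

def area_ratio(f, theta, d=1e-6):
    """Map-to-sphere area ratio f'(theta)/cos(theta), by finite difference."""
    fprime = (f(theta + d) - f(theta - d)) / (2 * d)
    return fprime / math.cos(theta)

theta = math.radians(70)               # Greenland latitudes
print(area_ratio(mercator_y, theta))   # ~8.5: Mercator inflates area by 1/cos^2
print(area_ratio(lambert_y, theta))    # ~1.0: equal area everywhere
```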

I've actually written these as modified mappings of θ, but you can regard them as projections and then re-mappings. I've introduced the idea because it can be applied to mappings onto solids too, also to improve area ratio. It's a more natural way of looking at it there, because the projection is already a reasonable mapping. I'll come back to this.

First subdivision step

I'm now thinking about cubes, dodecahedra and icosahedra. The following image shows how the bisection process described is initiated for the three solids one might consider:

The subdivision then proceeds by dividing triangles as shown for the icosahedron, or squares as shown for the cube. A key thing now is that while the subdivisions are equal area on the plane surface, they generally aren't as mapped onto the sphere. In the case of the icosahedron, in the first subdivision, the centre triangle projects to an area of 0.180 on the unit sphere, while the others are 0.149. Further subdivision creates further inequality, although the first step does about half the eventual damage.
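
Those areas can be verified numerically. Taking one icosahedron face with the standard (0, ±1, ±φ) style vertex coordinates, bisecting the edges, projecting onto the unit sphere, and computing spherical triangle areas via the solid-angle (Van Oosterom-Strackee) formula:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def midpoint(a, b):
    """Bisect the edge ab, then project back onto the unit sphere."""
    return normalize(tuple((x + y) / 2 for x, y in zip(a, b)))

def spherical_area(a, b, c):
    """Area (solid angle) of spherical triangle abc on the unit sphere."""
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return 2 * math.atan2(abs(det), 1 + dot(a, b) + dot(b, c) + dot(c, a))

# One face of the icosahedron, vertices of the form (0, +/-1, +/-phi) etc.
phi = (1 + math.sqrt(5)) / 2
A, B, C = (normalize(v) for v in [(0, 1, phi), (0, -1, phi), (phi, 0, 1)])
mAB, mBC, mCA = midpoint(A, B), midpoint(B, C), midpoint(C, A)

centre = spherical_area(mAB, mBC, mCA)
corner = spherical_area(A, mAB, mCA)
print(round(centre, 3), round(corner, 3))   # 0.18 0.149
```

The four sub-triangles tile the spherical face exactly, so centre + 3×corner recovers the face area 4π/20.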

However, the cube and dodecahedron have an interesting feature which, I think, more than compensates for their smaller number of faces. In the first subdivision step, the new polygons are assured equal area by symmetry. So at that stage the cube has 24 equal components, and the dodecahedron has sixty. I'll list the pros and cons of each:
  • Icosahedron - starts with 20 equilateral cells. When first divided, the cells are still equilateral, but unequal area on the sphere, ratio about 6:5.
  • Cube - after first subdivision, 24 "squares" of equal area, although the nodes are not quite coplanar. Squares are better than triangles because they are more compact. That means that if you have a number of points within a cell of given area, they are generally closer to each other, which is what gridding is trying to ensure. It also means that a cell of given area sits closer to the sphere surface, so there is less distortion on projecting.
  • Dodecahedron - after first subdivision, there are 60 cells of equal area. They aren't quite equilateral; they are isosceles, with angles on the surface of 72°, 60°, 60° (they don't have to add to 180°; in fact the excess equals the area). Thereafter subdivision proceeds much as for the icosahedron.
On this basis, I think the dodec wins, followed by cube. However, next comes the possibility of re-mapping.


I have spent an inordinate amount of time on this in recent weeks. You can re-map after the bisection as above, but it might as well be done by subdividing the plane surface, and then projecting at the end. The idea is that you divide, then develop a parametric scheme for re-mapping to equalise area. I use a Newton-Raphson iteration (actually secant) to do this. For the cube array, as I stored data and functions linked here, I used a polynomial form that preserved the required symmetries. For the icosahedron I reasoned thus. You can take just one face, and identify six axes of symmetry, dividing it into six equal sections. What you do to the nodes in just one of these determines what happens everywhere, and so solving for the permitted perturbations that nearly equalise area (least squares, basically Newton-Raphson) gives the required mapping.
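
As a toy illustration of the secant approach (not the actual polynomial scheme): along a cube-face centreline, remap the coordinate by u → tan(a·u)/tan(a) and solve for the a that makes the edge cell and the central cell subtend equal angles on the sphere. The answer is a = π/4, which makes the angular cells exactly uniform along that line:

```python
import math

def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant iteration: Newton-Raphson with a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

def width_mismatch(a, n=8):
    """Remap u -> tan(a*u)/tan(a) on [-1,1], project to angles atan(x),
    and compare the angular width of the edge cell with the central cell."""
    u = [-1 + 2 * k / n for k in range(n + 1)]
    ang = [math.atan(math.tan(a * ui) / math.tan(a)) for ui in u]
    w = [ang[k + 1] - ang[k] for k in range(n)]
    return w[0] - w[n // 2]

a = secant(width_mismatch, 0.5, 0.6)
print(a, math.pi / 4)   # converges to pi/4, the uniform-angle remap
```

The real problem is 2D and solved in a least squares sense, but the iteration is the same idea.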

There is still the pesky issue of how to put a point on the sphere into its cell. With the polynomial mapping, that can be inverted, usually also requiring Newton-Raphson. But there is another, more general method.

The more elementary question is, how do you locate a point in a regular grid? Say, (37.3, 124.4) in a 2x2 grid. It's just a matter of truncating each coordinate down to the nearest even integer below. Or to the nearest odd integer, if you mark the cell by its mid-point. It's easier to think about if you rescale to a unit grid. Triangles seem harder, but you can do the same in homogeneous coordinates. Simplest is to express everything in the homogeneous coords of one base cell. Then omitting the fractional parts tells you which cell it is in.
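
In code (a sketch; for the triangle case I show plain barycentric coordinates, the 2D version of the homogeneous coords):

```python
def locate_square(x, y, cell=2.0):
    """Locate a point in a regular square grid by truncation."""
    return int(x // cell), int(y // cell)

def barycentric(p, a, b, c):
    """Coordinates of point p in the affine frame of triangle abc.
    All three in [0, 1] means p lies inside the base triangle."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    l1 = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    l2 = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1 - l1 - l2, l1, l2

print(locate_square(37.3, 124.4))                       # (18, 62)
print(barycentric((0.2, 0.2), (0, 0), (1, 0), (0, 1)))  # approx (0.6, 0.2, 0.2)
```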

If the grid is re-mapped, you can proceed iteratively. Choose a base cell, and locate the point relative to that. Then re-do, with that cell as base. If it is still in, we have it, otherwise repeat going to the revised cell as base. Because the re-mappings are relatively minor changes, this converges quickly.


This was all meant to be preparatory to posting about the icosahedron in the same way I did for cube here, with data structures for six or so stages of bisection. Then I realised that dodecahedra actually looked better, and even cubes. I'll probably still post the icosahedron data, and maybe dodec.

I should add that, while one can seek to optimise equal area mapping, in practice any of these solids will be adequate for surface temperature averaging.


  1. Typos: "is that any point on the sphere can be associated with the sphere that contains it" - "with the cell that contains it"?

    "icosahedron (3,5) has a smaller (better) defect of 90°" - 60.

    1. William,
      Thanks. Yes, right on both. I've fixed the text.

  2. The surface temperature of the Earth roughly follows standing wave modes. Why this comes about is interesting. Consider that a natural mode of the Pacific ocean has a period of 2π(L/g)^0.5, which relates to the motion of a pendulum. This corresponds to a period of the order of 2 hours and explains why the lunar gravitational tidal pull has a strong collective effect on that time-scale.

    Now consider the pendulum of the ocean's thermocline. The effective gravity here is greatly reduced and is due to the slight difference in the density of the sea water above and below the thermocline. This difference in density could be on the order of 1/10000 or less, i.e. g'=g/10000, so period = 100*2π(L/g)^0.5. The effective resonant period now stretches from two hours to a week or longer. In turn, the standing wave mode of the ocean's thermocline is pumped by the nearly commensurate periods of the fortnightly and monthly tidal periods instead of the diurnal and semidiurnal periods for conventional tides.
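
In numbers (a quick check of this scaling, taking g = 9.81 m/s² and L chosen to give the 2 hour surface period):

```python
import math

# Choose L so the surface mode has a 2 hour period; reducing g by a
# factor of 10^4 then multiplies the period by 100.
g = 9.81                                   # m/s^2
L = g * (2 * 3600 / (2 * math.pi)) ** 2    # length scale for a 2 h period
surface_hours = 2 * math.pi * math.sqrt(L / g) / 3600
thermocline_days = 2 * math.pi * math.sqrt(L / (g / 1e4)) / (3600 * 24)
print(surface_hours, thermocline_days)     # 2.0 hours -> about 8.3 days
```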

    That's basic physics and the likely cause of the natural variability in temperature, as the massive thermocline oscillates toward the surface and back. I don't think a mesh is needed to estimate this effect to 1st-order. What's needed is the acknowledgment that this is a real effect.

    1. whut,
      One can point to the frequency, but the amplitude is harder. It seems to me that it would be a very low-Q resonance. And it won't have so much effect on average monthly temperatures. It might be more interesting to try to relate it to the daily NCEP/NCAR data.

    2. I meant to imply that it is not so much that it is a resonance, but that the inertial response window provides a pass-through to the forcing.

      Same thing happens with conventional tides. The two-hour resonance window provides a pass-through for the diurnal and semidiurnal tides. The periods equal the lunisolar cycles but have the magnitude they do because they reinforce the natural resonance under gravity.

Yet, at the reduced effective gravity of the thermocline layer, the conventional tidal response gets filtered out because the cycling is much too fast. So only the long-period tides will have an impact, and these will be on the order of time-scales of months to years, as the response gets aliased with strong seasonal impulses.

      So the exact value of the resonant frequency doesn't matter as that will never show up in the output, and it only supplies a pass-through window for potential forcings.

    3. Web,
      The thing is, though, that low-Q also reduces the selectivity of a pass-through filter. And the uncertainty about frequency probably means that it just isn't well characterised.

    4. I am confident that it is effectively a high-Q system.

      One point to consider is that for a higher Q, the narrower the bandwidth. So the fact that Q may not be exceedingly high gives it enough bandwidth to pass through a range of frequencies. That's true for a 2nd-order linear differential equation.

But with sloshing fluid dynamics, it's a nonlinear system which is often on the verge of sustaining a type of positive feedback. See e.g. the Mathieu equation. This can increase the Q while maintaining the bandwidth. So an inviscid system such as a stratified fluid with slight density differences is therefore effectively high Q.

      But I have never really thought that much about the Q factor, because the system never has shown any signs of damping -- and that, in turn, is indistinguishable from a forced response system driven by a continuous oscillation. Like conventional tides, it's really a matter of whether the energy supplied by the gravitational forcing can sustain the oscillation. And that's plausible because likely all of the ocean mixing and overturning is caused by tidal forces -- see Wunsch and Munk. It's not much of a stretch to apply that to the thermocline.

    5. Web,
      "So the fact that Q may not be exceedingly high gives it enough bandwidth to pass through a range of frequencies."
      Well, the logical limit of that is an all-pass filter, which is just a wire (ideal). The function of Q is to allow energy storage at selected frequencies, so amplitude can be built up. The thing is, for high Q, you need to be fairly sure what the frequency is; it doesn't make sense without. Or at least the system has to know. That is why diffuse boundaries like the thermocline can't really do much with building amplitude, especially with very long horizontal wavelength. It may build up energy in one band which in a different zone is dissipative (varying thermocline depth).

      Another aspect is that if you want to maintain a resonance with a wavelength that is long relative to the cross-section geometry, there will be innumerable cross-modes into which the energy can be transferred, and you are relying on reflection at the boundaries to prevent that by interference. A diffuse boundary can't do that. You can't knit a flute.

    6. Nick, What you are saying is at odds with the geometry that many of the ENSO researchers are using:

      This paper has the characteristic (resonant) period at 4.25 years

      No damping in that wave equation, so Q is infinite in this model.

      A. J. Clarke, S. Van Gorder, and G. Colantuono, “Wind stress curl and ENSO discharge/recharge in the equatorial Pacific,” Journal of physical oceanography, vol. 37, no. 4, pp. 1077–1091, 2007

      It comes down to forcing by wind or by tidal forces. And if tidal forces pace the winds, then it's essentially the tidal forces with seasonal modulation that establish the periods. Clarke talks about a curl wind force, but there is also a curl force from the nodal tidal cycle.

    7. Web
But that paper of Clarke et al is different from what you were describing. Oscillations are an exchange between two forms of energy, with the time scale given by the rate of conversion. Your shallow water wave exchanges between potential (gravitational) energy from elevated water and kinetic energy, and has an hourish timescale, as per a pendulum, which has the same exchange. But Clarke et al have an exchange between the depth of the 20°C isotherm and SSTA. That involves the timescales of heat transfer, which are very different, and possibly much slower.

    8. I have always been describing Figure 12 in the Clarke paper.

      There are 3 modes in which tidal forcing can impact the ocean.

      1. Conventional ocean tides (provably forced by moon + sun, with tiny but measurable amount by wind)
      2. Thermocline sloshing, i.e. ENSO, as per Clarke paper (believed to be forced by wind)
      3. Deep ocean mixing described by Wunsch & Munk (forced by mix of tides and wind)

      This is what I find perplexing. For some reason tidal energy is excluded from contributing to #2 even though the periods of the ENSO cycle are precisely commensurate with the lunar fortnightly and monthly long periods.

      The analogy is like saying that both rain and hail are caused by coalescence of water in the atmosphere, but snow isn't. I know that's an extreme analogy, but trying to get wide acknowledgment or to even get this mode considered will take a huge amount of effort.

    9. This is what slight angular momentum changes can do to a volume of water:

      Quake near Mexico City

    10. Even the "Einstein of the Oceans" thinks that lunar forcing has an impact on the pelagic zone

    11. Read how "pretentious blog lions" are arguing over the ENSO topic elsewhere: