Monday, 10 October 2016

A stable global climate reference network

Historical climate data contains inhomogeneities, for example due to changes in the instrumentation or the surroundings. Removing these inhomogeneities to get more accurate estimates of how much the Earth has actually warmed is a really interesting problem. I love the statistical homogenization algorithms we use for this; I am a sucker for beautiful algorithms. As an observationalist it is great to see the historical instruments and to read how scientists understood their measurements better and designed new instruments to avoid errors.

Still, for science it would be better if future climatologists had an easier task and could work with more accurate data. Let's design a climate-change-quality network that is as stable as we can humanly make it to study the ongoing changes in the climate.

Especially now that the climate is changing, it is important to accurately predict the climate for the coming season, year, decade and beyond at a regional and local scale. That is information that (local) governments, agriculture and industry need to plan, adapt, prepare and limit the societal damage of climate change.

Historian Sam White argues that the hardship of the Little Ice Age in Europe was not just about cold, but also about the turbulent and unpredictable weather. In the coming century, too, much hardship can be avoided with better predictions. To improve decadal climate predictions of regional changes and to understand the changes in extreme weather we need much better measurements. For example, with a homogenized radiosonde dataset, the improvements in the German decadal prediction system became much clearer than with the old dataset (Pattantyús-Ábrahám and Steinbrecht, 2015).

We are performing a unique experiment with the climate system and the experiment is far from over. It would also be scientifically unpardonable not to measure this ongoing change as well as we can. If your measurements are more accurate, you can see new things. Methodological improvements that lead to smaller uncertainties are one of the main factors that bring science forward.

A first step towards building a global climate reference network is agreeing on a concept. This modest proposal for preventing inhomogeneities due to poor observations from becoming a burden to future climatologists is hopefully a starting point for that discussion. Many other scientists are thinking about this. More formally, there are the Rapporteurs on Climate Observational Issues of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). One of their aims is to:
Advance specifications for Climate Reference Networks; produce a statement of guidance for creating climate observing networks or climate reference stations with aspects such as types of instruments, metadata, and siting;

Essential Climate Variables

A few weeks ago Han Dolman and colleagues wrote a call to action in Nature Geoscience titled "A post-Paris look at climate observations". They argue that while the political limits are defined for temperature, we need climate-quality observations for all essential climate variables listed in the table below.
We need continuous and systematic climate observations of a well-thought-out set of indicators to monitor the targets of the Paris Agreement, and the data must be made available to all interested users.
I agree that we should measure much more than just temperature. It is quite a list, but we need all of it to understand the changes in the climate system and to monitor the changes in the atmosphere, oceans, soil and biology we will need to adapt to. Not in this list, but also important, are biological changes; ecology especially needs support for long-term observational programs, because it lacks the institutional support the national weather services provide on the physical side.

Measuring multiple variables also helps in understanding measurement uncertainties. For instance, in the case of temperature measurements, additional observations of insolation, wind speed, precipitation, soil temperature and albedo are helpful. The US Climate Reference Network measures wind speed at the height of the instrument (and of humans) rather than at the meteorologically typical height of 10 metres.

Because of my work, I am mainly thinking of the land surface stations, but we need a network for many more observations. Please let me know where these ideas do not fit the other climate variables.

Table. List of the Essential Climate Variables; see original for footnotes.
Domain: Atmospheric (over land, sea and ice)
Surface: Air temperature, Wind speed and direction, Water vapour, Pressure, Precipitation, Surface radiation budget.
Upper-air: Temperature, Wind speed and direction, Water vapour, Cloud properties, Earth radiation budget (including solar irradiance).
Composition: Carbon dioxide, Methane and other long-lived greenhouse gases, Ozone and Aerosol, supported by their precursors.

Domain: Oceanic
Surface: Sea-surface temperature, Sea-surface salinity, Sea level, Sea state, Sea ice, Surface current, Ocean colour, Carbon dioxide partial pressure, Ocean acidity, Phytoplankton.
Sub-surface: Temperature, Salinity, Current, Nutrients, Carbon dioxide partial pressure, Ocean acidity, Oxygen, Tracers.

Domain: Terrestrial
River discharge, Water use, Groundwater, Lakes, Snow cover, Glaciers and ice caps, Ice sheets, Permafrost, Albedo, Land cover (including vegetation type), Fraction of absorbed photosynthetically active radiation, Leaf area index, Above-ground biomass, Soil carbon, Fire disturbance, Soil moisture.

Comparable networks

There are comparable networks and initiatives, which likely shape how people think about a global climate reference network. Let me thus describe how they fit into the concept and where they are different.

There is the Global Climate Observing System (GCOS), which is mainly an undertaking of the World Meteorological Organization (WMO) and the Intergovernmental Oceanographic Commission (IOC). They observe the entire climate system; the idea of the above list of essential climate variables comes from them (Bojinski and colleagues, 2014). GCOS and its member organizations are important for the coordination of the observations, for setting standards so that measurements can be compared, and for defending the most important observational capabilities against government budget cuts.

Especially important from a climatological perspective is a new program to ask governments to recognize centennial stations as part of the world heritage. If such long series are stopped or the station is forced to move, a unique source of information is destroyed or damaged forever. That is comparable to destroying ancient monuments.

A subset of the meteorological stations is designated as the GCOS Surface Network, measuring temperature and precipitation. These stations have been selected for the length of their records, their quality, and to cover all regions of the Earth. Their monthly data are automatically transferred to global databases.

National weather services normally take good care of their GCOS stations, but a global reference network would have much higher standards and would also provide data at better temporal resolutions than monthly averages, to be able to study changes in extreme weather and weather variability.

There is already a global radiosonde reference network, the GCOS Reference Upper-Air Network (GRUAN, Immler and colleagues, 2010). This network provides measurements with well characterized uncertainties, and they make extensive parallel measurements when they transition from one radiosonde design to the next. No proprietary software is used, to make sure it is known exactly what happened to the data.

Currently they have about 10 sites, a similar number is on the list to be certified, and the plan is to make this a network of about 30 to 40 stations; see map below. Especially welcome would be partners to start a site in South America.

The observational system for the ocean, Argo, is, as far as I can see, similar to GRUAN. It measures temperature and salinity (Roemmich and colleagues, 2009). If your floats meet the specifications of Argo, you can participate. Compared to land stations, the measurement environment is wonderfully uniform. The instruments typically work for a few years. Their life span is thus between that of a weather station and that of a one-way radiosonde ascent. This means that the instruments may deteriorate somewhat during their lifetimes, but maintenance problems are more important for weather stations.

A wonderful explanation for kids of how Argo works:

Argo has almost four thousand floats. They are working on a network with spherical floats that can go deeper.

Finally, there are a number of climate reference networks of land climate stations. The best known is probably the US Climate Reference Network (USCRN, Diamond and colleagues, 2013). It has 131 stations. Every station has three identical high-quality instruments, so that measurement problems can be detected and the outlier attributed to a specific instrument. To find these problems quickly, all data are relayed online and checked at their main office. Regular inspections are performed and everything is well documented.

The USCRN has selected new locations for its stations, which are expected to be free of human changes to the surroundings in the coming decades. This way it takes some time until the data become climatologically interesting, but they can already be compared with the normal network, and this gives some confidence that its homogenized data are okay for the national mean; see below. The number of stations was sufficient to compute a national average by 2005/2006.

Other countries, such as Germany and the United Kingdom, have opted to make existing stations into a national climate reference network. The UK Reference Climatological Stations (RCS) have a long observational record spanning at least 30 years and their distribution aims to be representative of the major climatological areas, while the locations are unaffected by environmental changes such as urbanisation.

The German Climate Reference Station founded in 1781 in Bavaria on the mountain Hohenpeißenberg. The kind of weather station photo WUWT does not dare to show.
In Germany the climate reference network consists of existing stations with a very long history. Originally they were the stations where conventional manual observations continued. Unfortunately, they will now also switch to automatic observations. Fortunately, only after making a long parallel measurement to see what this does to the climate record*.

An Indian scientist has proposed an Indian Climate Reference Network of about 110 stations (Jain, 2015). His focus is on precipitation observations. While temperature is a good way to keep track of the changes, most of the impacts are likely due to changes in the water cycle and storms. Precipitation measurements have large errors; it is very hard to make precipitation measurements with an error below 5%. When these errors change, that produces important inhomogeneities. Such jumps in precipitation data are hard to remove with relative statistical homogenization, because the correlations between stations are low. If there is one meteorological parameter for which we need a reference network, it is precipitation.

Network of networks

For a surface station Global Climate Reference Network, the current US Climate Reference Network is a good template when it comes to the quality of the instrumentation, management and documentation.

A Global Climate Reference Network does not have to do the heavy lifting all alone. I would see it as the temporally stable backbone of the much larger climate observing system. We still have all the other observations that help to make sampling errors smaller and provide the regional information you need to study how energy and mass move through the climate system (natural variability).

We should combine them in a smart way to benefit from the strengths of all networks.

The Global Climate Reference Network does not have to be large. If the aim is to compute a global mean temperature signal, we would need just about as many samples as we would need to compute the US mean temperature signal. That is on the order of 100 stations. Thus, on average, every country in the world would have one climate reference station.

The figure on the right from Jones (1994) compares the temperature signal from 172 selected stations (109 in the Northern Hemisphere, 63 in the Southern Hemisphere) with the temperature signal computed from all available stations. There is nearly no difference, especially with respect to the long-term trend.

Callendar (1961) used only 80 stations, but his temperature reconstruction fits quite well to the modern reconstructions (Hawkins and Jones, 2013).
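How well a small subset of stations can track the mean of a much larger network can be illustrated with a toy experiment. This is only a sketch with synthetic numbers (the station counts, noise levels and trend are made up for illustration), not real data: each station sees a common global signal plus its own local weather noise, and the average of a random 100-station subset is compared with the average of all stations.

```python
# Toy sketch (synthetic data): can ~100 stations reproduce the mean
# signal of a much larger network?
import numpy as np

rng = np.random.default_rng(42)

n_years, n_stations = 100, 5000
trend = np.linspace(0.0, 1.0, n_years)           # 1 degC warming over a century
variability = rng.normal(0.0, 0.2, n_years)      # common year-to-year variability
signal = trend + variability

# Each station sees the common signal plus local weather noise.
stations = signal[:, None] + rng.normal(0.0, 0.8, (n_years, n_stations))

full_mean = stations.mean(axis=1)
subset = rng.choice(n_stations, size=100, replace=False)
subset_mean = stations[:, subset].mean(axis=1)

# The two averages agree closely, especially in the long-term trend.
print(np.corrcoef(full_mean, subset_mean)[0, 1])  # close to 1
print(np.polyfit(np.arange(n_years), full_mean - subset_mean, 1)[0])  # tiny residual trend
```

The local noise averages out over the subset, so the residual trend between the two curves is tiny; real stations are spatially correlated, which is why they must also be spread over all climate regions.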

Beyond the global means

The number of samples/stations can be modest, but it is important that all climatic regions of the world are sampled; some regions warm or change faster than others. It probably makes sense to have more stations in especially vulnerable regions, such as mountains, Greenland and Antarctica. We really need a stable network of buoys in the Arctic, where changes are fast and where these changes also influence the weather in the mid-latitudes.

Crew members and scientists from the US Coast Guard icebreaker Healy haul a buoy across the sea ice during a deployment. In the lead a polar bear watcher and a rescue swimmer.
To study changes in precipitation we probably need more stations. Rare events contribute a lot to the mean precipitation rate. The threshold to get into the news seems to be a month's worth of rain falling on one day. Enormous downpours below that level are not even newsworthy. This makes the precipitation data noisy.

To study changes in extreme events we need more samples and might need more stations as well. How much more depends on how strong the synergy between the reference network and the other networks is and thus how much the other networks could then be used to produce more samples. That question needs some computational work.

The idea of using three redundant instruments in the USCRN is something we should also use in the GCRN, and I would propose to also create clusters of three stations. That would make it possible to detect and correct inhomogeneities by making comparisons. Even in a reference network there may still be inhomogeneities, due to (unnoticed) changes in the surroundings or the management.
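The triple-redundancy idea can be sketched in a few lines: with three simultaneous readings, a deviant sensor shows up as the one far from the median of the three. This is a hypothetical illustration, not the actual USCRN quality-control code, and the function name and tolerance are made up.

```python
# Hypothetical sketch of triple redundancy: compare each sensor with
# the median of the three to detect and attribute a deviant reading.
def flag_outlier(readings, tolerance=0.3):
    """Return the index of a deviating sensor, or None if all agree.

    readings  -- three simultaneous values from the redundant sensors
    tolerance -- maximum accepted deviation from the median (e.g. in degC)
    """
    assert len(readings) == 3
    median = sorted(readings)[1]
    deviations = [abs(r - median) for r in readings]
    worst = max(range(3), key=lambda i: deviations[i])
    return worst if deviations[worst] > tolerance else None

print(flag_outlier([21.1, 21.2, 21.1]))   # None: sensors agree
print(flag_outlier([21.1, 23.8, 21.2]))   # 1: second sensor deviates
```

With only two instruments you could detect a disagreement but not tell which sensor drifted; the third reading is what makes the attribution possible, and clusters of three stations extend the same logic to whole sites.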

We should also carefully study whether it might be a problem to use only pristine locations. That could mean that the network is no longer representative of the entire world. We should probably include stations in agricultural regions: they cover a large part of the land surface and may respond differently from natural regions. But agricultural practices (irrigation, plant types) will change.

Starting a new network at pristine locations has the disadvantage that it takes time until the network becomes valuable for climate change research. Thus I understand why Germany and the UK have opted for locations where there are already long historical observations. Because we only need 100+ stations, it may be possible to select existing locations, from the 30 thousand stations we have, that are pristine and likely to stay pristine in the coming century. If not, I would not compromise, and would use a new pristine location for the reference network.

Finally, when it comes to the number of stations, we probably have to take into account that, no matter how hard we try, some stations will become unsuitable due to war, land-use change and many other unforeseen problems. Just look back a century and consider all the changes we experienced; the network should be robust against such changes for the next century.

Absolute values or changes

Argo (ocean) and GRUAN (upper air) do not specify the instruments, but set specifications for the measurement uncertainties and their characterization. Instruments may thus change, and this change has to be managed. In the case of GRUAN, they perform many launches with multiple instruments.

For a climate reference land station I would prefer to keep exactly the same instrument design for the coming century.

To study changes in the climate, climatologists look at the local changes (compute anomalies) and average those. We have had a temperature increase of about 1°C since 1900 and are confident that it is warming, even though the uncertainty in the average absolute temperature is of the same order of magnitude. Determining changes directly is easier than first estimating the absolute level and then looking at whether it is changing. By keeping the instruments the same, you can study changes more easily.
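The anomaly idea can be made concrete with a small synthetic example. Three hypothetical stations (the offsets, noise level and reference period below are made up for illustration) have very different absolute temperatures, yet once each station's own reference-period mean is subtracted, the constant offsets drop out and the common warming remains.

```python
# Minimal sketch of why anomalies work: subtract each station's own mean
# over a reference period, then average the anomalies. Constant
# station-specific offsets (calibration, altitude) cancel out.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
warming = 0.01 * (years - 1900)                  # 1 degC per century

# Three hypothetical stations with very different absolute temperatures.
offsets = np.array([15.0, 2.5, 25.0])            # e.g. valley, mountain, tropics
temps = offsets[:, None] + warming + rng.normal(0, 0.3, (3, years.size))

reference = (years >= 1951) & (years <= 1980)    # a common reference period
anomalies = temps - temps[:, reference].mean(axis=1, keepdims=True)
global_anomaly = anomalies.mean(axis=0)

# The anomaly trend recovers the warming regardless of the absolute offsets.
print(np.polyfit(years, global_anomaly, 1)[0])   # about 0.01 degC per year
```

Averaging the absolute temperatures instead would carry the full uncertainty of each station's absolute level into the mean; the anomalies only carry the uncertainty of the changes.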

This is an extreme example, but how much thermometer screens weather and yellow before they are replaced depends on the material (and the climate). Even if we have better materials in the future, we had better keep the material the same for stable measurements.
For GRUAN, managing the change can solve most problems. Upper-air measurements are hard: the sun is strong, the air is thin (bad ventilation) and clouds and rain make the instruments wet. Because the instruments are only used once, they cannot be too expensive. On the other hand, starting each time with a freshly calibrated instrument makes the characterization of the uncertainties easier. Parallel measurements to manage changes are also likely more reliable up in the air than at the surface, where two instruments measuring side by side can legitimately measure a somewhat different climate. This is especially true for precipitation, where undercatchment strongly depends on the local wind, and for temperature, when cold air flowing at night hugs the terrain.

Furthermore, land observations are used to study changes in extreme weather, not just the mean state of the atmosphere. The uncertainty of the rain rate depends strongly on the rain rate itself, even in the laboratory, and likely more so outside, where the influence factors (wind, precipitation type) also depend on the rain rate. I see no way to keep undercatchment the same without at least specifying the outside geometry of the gauge and wind shield in minute detail.

The situation for temperature may be less difficult with high-quality instruments, but it is similar. When it comes to extremes, the response time (better: response function) of the instruments also becomes important, as does how much downtime the instrument experiences, which is often related to severe weather. It will be difficult to design new instruments that have the same response functions and the same errors over the full range of values. It will also be difficult to characterize the uncertainties over the full range of values and rates of change.

Furthermore, the instruments of a land station are used for a long time while not being watched. Thus weather, flora, fauna and humans become error sources. Instruments that have the same specifications in the laboratory may thus still perform differently in the field. Rain gauges may be more or less prone to getting clogged by snow or insects, or more or less attractive for drunks to pee in. Temperature screens may be more or less prone to being blocked by icing, or more or less attractive for bees to build their nest in. Weather stations may be more or less attractive to curious polar bears.

This is not a black-and-white situation. Which route to prefer will depend on the quality of the instruments. In the extreme case of an error-free measurement, there is no problem with replacing it with another error-free instrument. Metrologists in the UK are building an instrument that acoustically measures the temperature of the air, without needing a thermometer, which should have the temperature of the air but in practice never does. If new generations of instruments really are a lot better in 50 years and we exchanged them after 2 or 3 generations, that would still be a huge improvement over the current situation, with an inhomogeneity every 15 to 20 years.

The software of GRUAN is all open source, so that when we understand the errors better in the future, we know exactly what we did and can improve the estimates. If we specify the instruments, that would mean that we need open hardware as well. The designs would need to be open and specified in detail. Simple materials should be used, to be sure we can still obtain them in 2100. An instrument measuring humidity using the dew point of a mirror will be easier to build in 2100 than one using a special polymer film. These instruments can still be built by the usual companies.

If we keep the instrumentation of the reference network the same, the normal climate network, the GCOS network, will likely have better equipment in 2100. We will discover many ways to make more accurate observations, to cut costs and to make the management easier. There is no way to stop progress for the entire network, which in 2100 may well have over 100 thousand stations. But I hope we can stop progress for a very small climate reference network of just 100 to 200 stations. We should not see the reference network as the top of the hierarchy, but as the stable backbone that complements the other observations.


How do we make this happen? First the scientific community should agree on a concept and show how much the reference network would improve our understanding of the climatic changes in the 21st century. Hopefully this post is a step in this direction and there is an article in the works. Please add your thoughts in the comments.

With on average one reference station per country, it would be very inefficient if every country managed its own station. Keeping up the high metrological and documentation standards is an enormous task. Given that the network would be the same size as the USCRN, the GCRN could in principle be managed by one global organization, like the USCRN is managed by NOAA. It would, however, probably be more practical to have regional organizations, for better communication with the national weather services and to reduce travel costs for maintenance and inspections.


The funding of a reference network should be additional funding. Otherwise it will be a long, hard struggle in every country involved to build a reference station. In developing countries the maintenance of one reference station may well exceed the budget of their current network. We already see that some meteorologists fear that the centennial stations program will hurt the rest of the observational network. Without additional funding, there will likely be quite some opposition and friction.

In the Paris climate treaty, the countries of the world have already pledged to support climate science to reduce costs and damages. We need to know how close we are to the 2°C limit as feedback to the political process, and we need information on all other changes as well to assess the damages from climate change. Compared to the economic consequences of these decisions, the costs of a climate reference network are peanuts.

Thus my suggestion would be to ask the global climate negotiators to provide the necessary funding. If we go there, we should also ask the politicians to agree on the international sharing of all climate data. Restrictions on data are holding climate research and climate services back, and both are necessary to plan adaptation and to limit damages.

The World Meteorological Organization had its congress last year. The directors of the national weather services have shown that they are not able to agree on the international sharing of data. For weather services selling data is often a large part of their budget. Thus the decision to share data internationally should be made by politicians who have the discretion to compensate these losses. In the light of the historical responsibility of the rich countries, I feel a global fund to support the meteorological networks in poor countries would be just. This would compensate them for the losses in data sales and would allow them to better protect themselves against severe weather and climate conditions.

Let's make sure that future climatologists can study the climate in much more detail.

Think of the children.

Related information

Free our climate data - from Geneva to Paris

Congress of the World Meteorological Organization, free our climate data

Climate History Podcast with Dr. Sam White mainly on the little ice age

A post-Paris look at climate observations. Nature Geoscience (manuscript)

Why raw temperatures show too little global warming


Bojinski, Stephan, Michel Verstraete, Thomas C. Peterson, Carolin Richter, Adrian Simmons and Michael Zemp, 2014: The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-13-00047.1.

Callendar, Guy S., 1961: Temperature fluctuations and trends over the earth. Quarterly Journal Royal Meteorological Society, 87, pp. 1–12. doi: 10.1002/qj.49708737102.

Diamond, Howard J., Thomas R. Karl, Michael A. Palecki, C. Bruce Baker, Jesse E. Bell, Ronald D. Leeper, David R. Easterling, Jay H. Lawrimore, Tilden P. Meyers, Michael R. Helfert, Grant Goodge, Peter W. Thorne, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-12-00170.1.

Dolman, A. Johannes, Alan Belward, Stephen Briggs, Mark Dowell, Simon Eggleston, Katherine Hill, Carolin Richter and Adrian Simmons, 2016: A post-Paris look at climate observations. Nature Geoscience, 9, September, doi: 10.1038/ngeo2785. (manuscript)

Hawkins, Ed and Jones, Phil. D. 2013: On increasing global temperatures: 75 years after Callendar. Quarterly Journal Royal Meteorological Society, 139, pp. 1961–1963, doi: 10.1002/qj.2178.

Immler, F.J., J. Dykema, T. Gardiner, D.N. Whiteman, P.W. Thorne, and H. Vömel, 2010: Reference Quality Upper-Air Measurements: guidance for developing GRUAN data products. Atmospheric Measurement Techniques, 3, pp. 1217–1231, doi: 10.5194/amt-3-1217-2010.

Jain, Sharad Kumar, 2015: Reference Climate and Water Data Networks for India. Journal of Hydrologic Engineering, 10.1061/(ASCE)HE.1943-5584.0001170, 02515001. (Manuscript)

Jones, Phil D., 1994: Hemispheric Surface Air Temperature Variations: A Reanalysis and an Update to 1993. Journal of Climate, doi: 10.1175/1520-0442(1994)007<1794:HSATVA>2.0.CO;2.

Pattantyús-Ábrahám, Margit and Wolfgang Steinbrecht, 2015: Temperature Trends over Germany from Homogenized Radiosonde Data. Journal of Climate, doi: 10.1175/JCLI-D-14-00814.1.

Roemmich, D., G.C. Johnson, S. Riser, R. Davis, J. Gilson, W.B. Owens, S.L. Garzoli, C. Schmid, and M. Ignaszewski, 2009: The Argo Program: Observing the global ocean with profiling floats. Oceanography, 22, p. 34–43, doi: 10.5670/oceanog.2009.36.

* The transition to automatic weather stations in Germany happened to have almost no influence on the annual means, contrary to what Klaus Hager and the German mitigation sceptical blog propagandise based on badly maltreated data.

** The idea to illustrate the importance of smaller uncertainties by showing two resolutions of the same photo comes from metrologist Michael de Podesta.

Tuesday, 20 September 2016

Global Warming goes viral

NOAA's Climate Monitoring Chief Deke Arndt tweeted in May this year: "I look at this stuff every day and it still astonishes me" and showed this graph. Note the locally typical units.

Honey, I broke the graph. Again.

It is amazing how much the temperature has risen in the last few years.

The prediction for 2016 made this week by Gavin Schmidt of NASA GISS does not look good. With a bit of British understatement, he is 99% confident that 2016 will be a record year.

Attention grabbers

That temperature jump is one reason people have woken from their slumber. Another reason is that people have started visualising the global temperature increase in interesting new ways. Let me try to explain why they work. It all started with the animated spiral of the global temperature by Ed Hawkins that went viral.

The spiral went viral because it was a new way to present the data. It was also very well timed, because the spiral shows especially well how extraordinary the current temperature jump is.

The modern currency is attention.

So just after the Olympics I tried to top Ed Hawkins with this visualisation.

By my standards it went viral. It works because the visual connects global warming to the famous Olympic photo of Usain Bolt running so fast that he can afford to look sideways and smile at the camera.

I guess the virus did not spread beyond the few thousand people discussing climate change every day, because without the axes you need to know the temperature signal to get it. Adding axes would destroy the beauty.

In the latest episode of Forecast, Scott St George talks about his project to convert the climate signal into music, with different instruments representing different climate regions. At the time this creative idea generated a lot of media attention. It works much better on radio and TV than a static graph.

More regional detail can be seen in a so-called Hovmöller plot. The plot by Kevin Anchukaitis shows time on the horizontal axis and the colours indicate the average temperature over latitudinal bands. In the lower half of the figure you see the Southern Hemisphere, which warms less than the Northern Hemisphere at the top.

The additional energy that is available due to the stronger greenhouse effect can go into warming the air or into evaporating water. The Northern Hemisphere has much more land and is drier. Thus evaporation increases less there and warming more.

The front of the new State of the Climate also shows the observed temperature signal in red and brown.

Understanding climate change

Probably the most eye-opening graph for understanding the difference between short-term fluctuations and long-term trends in the temperature signal is this one. An important source of fluctuations is El Niño in the Pacific Ocean. By plotting years with El Niño, its counterpart La Niña and neutral conditions separately, you immediately see that they all have about the same long-term trend and that El Niño is mainly important for the short-term fluctuations. No need for statistics skillz.
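The idea behind this kind of plot can be sketched with synthetic data. Everything below is made up for illustration (the trend, the size of the ENSO offset, the noise): classify each year as El Niño, neutral or La Niña, fit a trend to each group separately, and the three trends come out roughly the same even though the ENSO state shifts individual years up or down.

```python
# Sketch (synthetic data) of the ENSO-separated trend plot: the ENSO
# state shifts single years, but all three groups share the same trend.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1960, 2021)
trend = 0.018 * (years - 1960)                    # long-term warming signal

# Randomly assign each year an ENSO state: -1 La Nina, 0 neutral, 1 El Nino.
enso_state = rng.choice([-1, 0, 1], size=years.size)
temps = trend + 0.15 * enso_state + rng.normal(0, 0.08, years.size)

slopes = {}
for state, name in [(1, "El Nino"), (0, "neutral"), (-1, "La Nina")]:
    mask = enso_state == state
    slopes[name] = np.polyfit(years[mask], temps[mask], 1)[0]
    print(f"{name:8s} trend: {slopes[name]:.3f} degC/yr")
```

Because the ENSO offset is roughly constant within each group, it shifts a group's level but not its slope, which is exactly why the three lines in the real plot run parallel.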

I realise that it is just a small tweak, but I like this graph by Karsten Haustein because it emphasises that the data in WWII is not very reliable. The next step would be to also give the decades around 1900 the colour of the orange menace. The data in this period also has some issues and it may well be warming.

This animation by John Kennedy shows the uncertainty in how much warming we had by displaying a large number of possible realisations. This is the uncertainty as estimated in the temperature dataset of the UK Hadley Centre (HadCRUT) due to instrumental changes; it does not include the uncertainty from not having measurements in some areas (such as the Arctic).

If you plot monthly temperatures rather than yearly averages, the warming graph becomes noisier. Mitigation sceptics like to plot the data that way; the trends naturally stay the same size, but the noise makes them seem smaller. This beautiful solution by John Kennedy plots every calendar month separately and thus shows that all months are warming, without the distracting noise.

You can also show the seasonal cycle like in this NASA example, or animate it.

The sun is the source of nearly all energy on Earth and naturally important for the climate, but is it also important for climate change? The sun was quite stable over the last century, and in the last decades it may even have become less bright. By plotting the sun and temperature together, Stefan Rahmstorf illustrates that the variations of the sun are too small to influence the climate much.

A longer perspective

Back in 2013 Jos Hagelaars combined the temperature reconstructions (from tree rings and other indirect information sources), the instrumentally measured temperatures and the temperature projection up to 2100 into one graph, also called The Wheelchair. It makes clear how large and fast the expected warming will be in a historical perspective.

The projected warming jumps up so fast in the graph of Hagelaars that you cannot see well how fast it is. Randall Munroe of the comic XKCD solved this by rotating the graph, so that the time axis runs down the page and the graph could be made much longer. To see how we are warming the planet, the graph had to become very long indeed. Happy scrolling. See you below.

I hope you did not miss the mouse-over text:
"[After setting your car on fire] Listen, your car's temperature has changed before."

Some complain that the temperature reconstructions are smoother than the instrumental data, even though this is explained in the comic itself. How much geekier can you get?

They want to suggest that a peak like the one we see now could thus be hidden in the reconstructions. That is theoretically possible, but there is no evidence for it. More importantly: the current warming is not a peak, it is a jump; it will still get a lot warmer and it will stay warm for a very long time. If anything like what we are doing to the climate now had happened in the past, it would jump out in the reconstructions.

Chris Colose solves the problem a little more technically and puts the current and future warming in the context of the period during which our civilization developed in this animation.

Mitigation sceptics

HotWhopper gathered and visualised some predictions by mitigation sceptics: pictures that stand in sharp contrast to scientific predictions and that, in a sane, rational world, would discredit their political movement forever.

David Archibald's prediction from 2006.
Based on solar maxima of approximately 50 for solar cycles 24 and 25, a global temperature decline of 1.5°C is predicted to 2020, equating to the experience of the Dalton Minimum.

Pierre Gosselin of the German blog No Truth Zone in 2008:
-2.5°C by 2020!...My prediction is we’ve started a nasty cold period that will make the 1960s look balmy. We’re about to get caught with our pants down. And a few molecules of CO2 is not going to change it.

Don Easterbrook in 2001.

Christopher Monckton's prediction in 2013.
A math geek with a track-record of getting stuff right tells me we are in for 0.5 Cº of global cooling. It could happen in two years, but is very likely by 2020. His prediction is based on the behavior of the most obvious culprit in temperature change here on Earth – the Sun.
Maybe the Lord's math geek got a minus sign wrong. It will be hard to get so much cooling by 2020, especially cooling that, as promised, comes from the sun.

Normal science

A normal scientific graph can be very effective. I loved how the audience cheered and laughed when, after having to endure the nonsense of Australian Senator [[Malcolm Roberts]], the physicist Brian Cox replied: "I brought the graph." A good sign that the public is fed up with the misinformation campaign of the mitigation sceptical movement.

The graph Cox showed was similar to this one of NASA-GISS.

You know what they say about laughing at geniuses. I hope.

Related information

Q&A smackdown: Brian Cox brings graphs to grapple with Malcolm Roberts

Temperature observation problems in WWII and the early instrumental period

Early global warming

A similar idea as the orchestra playing global warming is this tune based on the flow of the river Warta in Poland. The red noise of nature sounds wonderful.

* Temperature curve of XKCD used under a Creative Commons Attribution-NonCommercial 2.5 License.

Tuesday, 13 September 2016

Publish or perish is illegal in Germany, for good reason

Had Albert Einstein died just after his wonder year 1905, he would only have had a few publications on special relativity, the equivalence of mass and energy, Brownian motion and the photoelectric effect to his name and would nowadays be seen as a mediocre researcher. He got the Nobel Prize in 1921 "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect", not for relativity, not for Brownian motion. This illustrates how hard it is to judge scientific work, even more than a decade afterwards, let alone in advance.
Managing scientists is hard. It is nearly impossible to determine who will do a good job, who is doing a good job, and even whether someone did a good job in the past. In recent decades, science managers in most of the world have largely given up trying to assess how good a scientist is and instead count how many articles they write and how prestigious the journals are in which those articles appear.

Unsurprisingly, this has succeeded in increasing the number of articles scientists write. Especially in America scientists are acutely aware that they have to publish or perish.

Did this hurt scientific progress? It is unfortunately impossible to say how fast science is progressing and how fast it could progress; the work is, after all, about the stuff we do not understand yet. The big steps (evolution, electromagnetism, quantum mechanics) have become rare in recent decades. Maybe the low-hanging fruit is simply gone. Maybe it is also modern publish-or-perish management.

There are good reasons to expect publish-or-perish management to be detrimental.
1. The most basic reason: the time spent writing and reading the ever-increasing number of articles is not spent on doing research. (I hope no one is so naive as to think that the average scientist actually became several times more productive.)
2. Topics that quickly and predictably lead to publications are not the same topics that will bring science forward. I personally try to work on a mix, because only working on the riskier science you expect to be important is unfortunately too dangerous.
3. Carrot-and-stick management works for manual labour, but for creative, open-ended work it is often found to be detrimental. For creative work, mastery and purpose are the incentives.

German science has another tradition, trusting scientists more and focusing on quality. This is expressed in the safeguards for good scientific practice of the German Science Foundation (DFG). It explicitly forbids the use of quantitative assessments of articles.
Universities and research institutes shall always give originality and quality precedence before quantity in their criteria for performance evaluation. This applies to academic degrees, to career advancement, appointments and the allocation of resources. …

criteria that primarily measure quantity create incentives for mass production and are therefore likely to be inimical [harmful] to high quality science and scholarship. …

Quantitative criteria today are common in judging academic achievement at all levels. … This practice needs revision with the aim of returning to qualitative criteria. … For applications for academic appointments, a maximum number of publications should regularly be requested for the evaluation of scientific merit.
For a project proposal to the German Science Foundation this "maximum number" means that you are not allowed to list all your publications, but only your 6 best ones (for a typical project; for smaller projects even fewer).

[UPDATE. This limit has unfortunately now been increased to 10. They say the biologists are to blame.]

While reading the next paragraphs, please hear me screaming YES, YES, YES in your ear at an unbearable volume.
An adequate evaluation of the achievements of an individual or a small group, however, always requires qualitative criteria in the narrow sense: their publications must be read and critically compared to the relevant state of the art and to the contributions of other individuals and working groups.

This confrontation with the content of the science, which demands time and care, is the essential core of peer review for which there is no alternative. The superficial use of quantitative indicators will only serve to devalue or to obfuscate the peer review process.
I fully realize that actually reading someone’s publications is much more work than counting them and that top scientists spend a large part of their time reviewing. In my view that is a reason to reduce the number of reviews and trust scientists more. Hire people who have a burning desire to understand the world, so that you can trust them.

Sometimes this desire goes away when people get older. For the outside world this is most visible in some older participants of the climate “debate” who hardly produce new work trying to understand climate change, but use their technical skills and time to deceive the public. The most extreme example I know is a professor who was painting all day long, while his students gave his lectures. We should be able to get rid of such people, but there is no need for frequent assessments of people doing their job well.

You also see this German tradition in the research institutes of the Max Planck Society. The directors of these institutes are among the best scientists in the world and they can do whatever they think will bring their science forward. Max Planck Director Bjorn Stevens describes this system in the fourth and best episode of the podcast Forecast. The part on his freedom and the importance of trust starts at minute 27, but best listen to the whole inspiring podcast, about which I could easily write several blog posts.

Stevens started his scientific career in the USA, but talks about the German science tradition when he says:
I can think of no bigger waste of time than reviewing Chris Bretherton's proposals. I mean, why would you want to do that? The guy has shown himself to have good idea, after good idea, after good idea. At some point you say: go doc, go! Here is your budget, and let him go. This whole industry that develops to keep someone like Chris Bretherton on a leash makes no sense to me.
Compare scientists who set priorities within their own budgets with scientists who submit research proposals judged by others. If you have your own budget, you will only support what you think is really important; if you do A, you cannot do B. Many project proposals are written to fit into a research program or because a colleague wants to collaborate, and apart from the time wasted on writing them, there is no downside to asking for more funding. If you have your own budget, the person with the most expertise and the most skin in the game decides. Yet it is the project funding, where the deciders have no skin in the game, that is called "competitive". It is Soviet-style planning; that it works at all shows the dedication and altruism of the scientists involved. Those are scientists you could simply trust.

I hope this post will inspire the scientific community to move towards more trust in scientists, increase the fraction of unleashed researchers and reduce the misdirected quantitative micro-management. Please find below the full text of the safeguards of the German Science Foundation on performance evaluation; above I had to skip many worthwhile parts.

Recommendation 6: Performance Evaluation

Universities and research institutes shall always give originality and quality precedence before quantity in their criteria for performance evaluation. This applies to academic degrees, to career advancement, appointments and the allocation of resources.

For the individual scientist and scholar, the conditions of his or her work and its evaluation may facilitate or hinder observing good scientific practice. Conditions that favour dishonest conduct should be changed. For example, criteria that primarily measure quantity create incentives for mass production and are therefore likely to be inimical to high quality science and scholarship.

Quantitative criteria today are common in judging academic achievement at all levels. They usually serve as an informal or implicit standard, although cases of formal requirements of this type have also been reported. They apply in many different contexts: length of Bachelor, Master or PhD thesis, number of publications for the Habilitation (formal qualification for university professorships in German-speaking countries), as criteria for career advancements, appointments, peer review of grant proposals, etc. This practice needs revision with the aim of returning to qualitative criteria. The revision should begin at the first degree level and include all stages of academic qualification. For applications for academic appointments, a maximum number of publications should regularly be requested for the evaluation of scientific merit.

Since publications are the most important “product” of research, it may have seemed logical, when comparing achievement, to measure productivity as the number of products, i.e. publications, per length of time. But this has led to abuses like the so-called salami publications, repeated publication of the same findings, and observance of the principle of the LPU (least publishable unit).

Moreover, since productivity measures yield little useful information unless refined by quality measures, the length of publication lists was soon complemented by additional criteria like the reputation of the journals in which publications appeared, quantified as their "impact factor" (see section 2.5).

However, clearly neither counting publications nor computing their cumulative impact factors are by themselves adequate forms of performance evaluation. On the contrary, they are far removed from the features that constitute the quality element of scientific achievement: its originality, its “level of innovation”, its contribution to the advancement of knowledge. Through the growing frequency of their use, they rather run the danger of becoming surrogates for quality judgements instead of helpful indicators.

Quantitative performance indicators have their use in comparing collective activity and output at a high level of aggregation (faculties, institutes, entire countries) in an overview, or for giving a salient impression of developments over time. For such purposes, bibliometry today supplies a variety of instruments. However, they require specific expertise in their application.

An adequate evaluation of the achievements of an individual or a small group, however, always requires qualitative criteria in the narrow sense: their publications must be read and critically compared to the relevant state of the art and to the contributions of other individuals and working groups.

This confrontation with the content of the science, which demands time and care, is the essential core of peer review for which there is no alternative. The superficial use of quantitative indicators will only serve to devalue or to obfuscate the peer review process.

The rules that follow from this for the practice of scientific work and for the supervision of young scientists and scholars are clear. They apply conversely to peer review and performance evaluation:
  • Even in fields where intensive competition requires rapid publication of findings, quality of work and of publications must be the primary consideration. Findings, wherever factually possible, must be controlled and replicated before being submitted for publication.
  • Wherever achievement has to be evaluated — in reviewing grant proposals, in personnel management, in comparing applications for appointments — the evaluators and reviewers must be encouraged to make explicit judgements of quality before all else. They should therefore receive the smallest reasonable number of publications — selected by their authors as the best examples of their work according to the criteria by which they are to be evaluated.

Related information

Nature on new evaluation systems in The Netherlands and Ireland: Fewer numbers, better science

Episode 4 of Forecast with Max Planck Director Bjorn Stevens on clouds, aerosols, science and science management. Highly recommended.

Memorandum of the German Science Foundation: Safeguarding Good Scientific Practice. English part starts at page 61.

One of my first posts explaining why carrot-and-stick management makes productivity worse for cognitive tasks: Good ideas, motivation and economics

* Photo of Albert Einstein at the top is in the public domain.

Sunday, 4 September 2016

Believe me, the GOP needs to open itself to rational debate

Major Tom (Coming Home)
Peter Schilling

4, 3, 2, 1
Earth below us
Drifting, falling
Floating weightless
Calling, calling home

Second stage is cut
We're now in orbit
Stabilizers up,
Running perfect
Starting to collect
Requested data
What will it affect
When all is done?
Thinks Major Tom

Back at ground control,
There is a problem
Go to rockets full
Not responding
Hello Major Tom.
Are you receiving?
Turn the thrusters on
We're standing by
There's no reply

4, 3, 2, 1
Earth below us
Drifting, falling
Floating weightless
Calling, calling home

Across the stratosphere,
A final message
Give my wife my love
Then nothing more

Far beneath the ship
The world is mourning
They don't realize
He's alive
No one understands
But Major Tom sees
Now the light commands
This is my home
I'm coming home

Earth below us
Drifting, falling
Floating weightless
Coming home
Earth below us
Drifting, falling
Floating weightless
Coming, coming
Home, home

Much better German original
Major Tom (Völlig Losgelöst)
Peter Schilling

...Völlig losgelöst
Von der Erde
Schwebt das Raumschiff
Völlig schwerelos

Die Erdanziehungskraft
Ist überwunden
Alles läuft perfekt -
Schon seit Stunden

Doch was nützen die
Am Ende
Denkt sich Major Tom

Im Kontrollzentrum
Da wird man panisch
Der Kurs der Kapsel der
Stimmt ja gar nicht

"Hallo Major Tom
Können Sie hören
Woll'n Sie das Projekt
Denn so zerstören?"
Doch, er kann nichts hörn'
Er schwebt weiter...

...Völlig losgelöst
Von der Erde
Schwebt das Raumschiff
Völlig schwerelos

Die Erde schimmert blau
Sein letzter Funk kommt:
"Grüsst mir meine Frau!"
Und er verstummt

Unten trauern noch
Die Egoisten
Major Tom denkt sich
"Wenn die wüssten -
Mich führt hier ein Licht
Durch das All
Das kennt ihr noch nicht
Ich komme bald
Mir wird kalt."

Völlig losgelöst
Von der Erde
Schwebt das Raumschiff
Völlig schwerelos

The Grand Old Party has created a monster and now it has turned on them.

It would be tempting to simply call the monster Donald Trump, but the unnamed monster has many aspects: Trump, conservative media, anti-science, rejection of adult debate, corporate corruption, climate change denial, racism, fear.

One reason to call the monster Trump would be that Trump has taken the syndrome to such extremes. A second is that Trump made many prominent conservatives realize that the monster threatens their party. This threat becomes visible in the list of positions Trump is able to sell that are far from Republican; Trump has also started attacking conservative politicians directly.

To solve the problem the GOP will have to return to rational debate, rather than ending every second sentence with "believe me". Conservative readers, did you notice that "believe me" did not work when you thought you should believe me? It is just as stupid coming from your side. Give people reasons to accept what you are saying.

The GOP had rational debate in the past, like non-US conservative parties still do. What makes rational debate hard is that US politicians now take positions based on what their donors want. The incoherent mess a politician then has to defend cannot be defended rationally. Rational debate thus has to be replaced with misinformation. To make the misinformation palatable, politicians need to stoke fear to suppress critical thought and fuel tribalism.

John Ziegler, a nationally syndicated conservative talk show host, points to the role of conservative media in this. Initially, conservative media was a comfortable way for conservative politicians to spread their talking points without getting critical questions. To increase market share, conservative media has convinced its followers that other information sources are biased against conservatives. Ted Newton, former communications adviser to 2012 Republican presidential nominee Mitt Romney, said:
"What it became, essentially, was they were preaching this is the only place you can get news. This is the only place you can trust. All other media outlets are lying to you. So you need to come to us."
That now makes it hard for conservatives to contain Trump and point out his lies. The term "lie" may not even be appropriate for Trump: to lie you have to be aware that what you are saying is wrong, and for a conman like Trump right and wrong are irrelevant; what counts is whether a message sounds convincing.

When what Trump finds convincing does not fit the GOP platform, its politicians cannot point to fact-checkers or The New York Times, because conservatives have been convinced these sources are lying. This is reinforced by Trump tweeting about "The failing New York Times" or "the disgusting and corrupt media".

John Ziegler naturally searches for the problem in conservative media:
"We've ... reached the point, I say, we've left the gravitational pull of the rational Earth, where we are now in a situation where facts don't matter, truth doesn't matter, logic doesn't matter. ...

The conservative establishment that needs to be gotten rid of is the conservative media establishment. Sean Hannity needs to go. Bill O'Reilly needs to go. Sadly, Rush Limbaugh needs to go.

Here's what I'll be very disappointed in: If Trump does lose, as I am very confident that he will, and let's say it's not super close, if he loses by a significant margin and Sean Hannity and people like him have not experienced some significant career pain, if not destruction, because of their role, then it's over. It is over.

Because if there is no price to pay for conservative-media elements having sold out to Donald Trump, then guess what? It's going to happen again and again and again. ... If that doesn't happen, then I think we're done. It's over."
I am not sure how much purging specific persons would help. The system needs to change. The media is nowadays financed more and more per view, per click. This pushes the system towards scandal and rubbish, towards emotion, fear and exaggeration. Europe benefits enormously from a public media system that may be more boring, but normally gets the facts right, and this forces other media sources to also deliver higher quality.

The population also has an important role in keeping the media and politicians honest by doing their due diligence, giving feedback and selecting credible sources. I think twice before I click on a link to a Murdoch publication because every click converts misinformation and vitriol into cash. Due diligence is hard in a culture where people have little time for citizenship because of the stress society produces and a focus on working long hours over working effectively and creatively.

"I think the conservative media is the worst thing that has ever happened to the Republican Party on a national level,"
John Ziegler, conservative radio host

There is a movement towards newspapers, magazines and video news and entertainment that are supported by members. This will lead to more partisanship and a splintering of the media landscape. Still, members will likely be people interested in quality, so hopefully the partisanship will be limited to having a clear point of view and finding certain stories interesting, while the facts will be right. If the quality is right, that would be progress, and the splintering of the media would not have to lead to a splintering of society, because there would still be a common basis that makes communication possible.

Next to the [[Fourth Estate]], the media, also science is important for creating a foundation of knowledge that makes civilized debate possible. I would call Science the Fifth Estate. This role of science is as important as sending people to the Moon or designing a non-sticking frying pan. Physicist and philosopher of science John Ziman focuses on this aspect when he argues:
Objectivity is what makes science so valuable in society. It is the public guarantee of reliable disinterested knowledge. Science plays a unique role in settling factual disputes. This is not because it is particularly rational or because it necessarily embodies the truth: it is because it has a well-deserved reputation for impartiality on material issues. The complex fabric of democratic society is held together by trust in this objectivity, exercised openly by scientific experts. Without science as an independent arbiter, many social conflicts could be resolved only by reference to political authority or by a direct appeal to force.
I would expect that it is no coincidence that modern science and the nation state were born at about the same time, and that larger nation states only came up when science had spread. You need to be able to talk with each other.

Anti-science sentiments in the USA are thus worrying. But we should also not freak out. Scientists are still one of the most trusted professions, and even the enemies of science typically claim to be friends of Science. In Canada even literally. This shows how strong science still is culturally.

Still, when scientists speak truth to power, it is worrying how easy it is for US corporations to hit back, via think tanks, FOIA harassment and bribed politicians. Republican politicians are the best investment for corporations, because conservatives tend to follow the leader more. Corporations are not charities; those campaign contributions are investments with a high rate of return. Also for the sake of the economy, corporations need to compete on the market again.

The New York Times reports that congressional Republicans are unwilling to help communities in Florida cope with the consequences of sea level rise and block the Navy from adapting to the ongoing changes. When America is invaded during high tides, I hope that Republican congressman Buck of Colorado will repeat his claim that the military should not be distracted by a "radical climate change agenda". There was a time when national defense was one of the highest priorities of the Republican party; now corporate bribes make them weaken national defense and ignore communities in need. Even communities in swing states.

The good news is that Republican voters are just as fed up with the corrupting influence of money on politics as Democrats. The bad news is that the current politicians got into their positions because they are good at finding donors and do not want to disappoint them. Still, begging for money is no fun and many politicians got into politics for good reasons. So together with some people power, it should be possible to reduce the influence of money.

Accepting the money and misinforming your constituents gives a short-term boost, especially for incumbents. In the long term you cannot do anything without trust and you lose contact with the ground.

The American political system is more vulnerable to bribery because the voter does not have much choice. If a special interest that can convince party A can, with a generous contribution, also convince party B, there is no downside for either party.

Furthermore, because of the two-party system your vote almost never matters. That makes it less motivating to pay attention to what happens and whether politicians do a good job. If no one is looking, it is less dangerous to do the bidding of the donors rather than the voters. Here the crisis in the media also returns, because fewer journalists also means that fewer people are looking.

"Whenever the people are well-informed, they can be trusted with their own government; that, whenever things get so far wrong as to attract their notice, they may be relied on to set them to rights."
Thomas Jefferson

I prefer parliamentary democracies, but if you want to introduce more competition between the parties within the district system used in the USA, you could introduce preferential voting, like Australia does. There the voters are required to indicate their first preference, but they can also rank the other candidates in order of preference. Free-market Republicans could thus vote for Gary Johnson as their first preference and for Trump or Clinton as their second, to make sure their vote is not wasted. Similarly, Bernie Sanders supporters could rank Green party candidate Jill Stein first without the risk of causing a Trump catastrophe.
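The counting rule behind preferential voting (instant-runoff) is simple: repeatedly eliminate the candidate with the fewest first preferences and transfer those ballots to each voter's next remaining choice, until someone has a majority. A minimal sketch (the ballots are hypothetical, and real electoral rules, such as Australia's tie-breaking provisions, are considerably more involved):

```python
from collections import Counter

def instant_runoff(ballots):
    """Count ranked ballots: eliminate the candidate with the fewest
    first preferences until one candidate has a majority."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked remaining candidate;
        # ballots with no remaining candidate are exhausted and dropped.
        tally = Counter(
            next(c for c in b if c in candidates)
            for b in ballots
            if any(c in candidates for c in b)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        candidates.discard(min(tally, key=tally.get))

# Hypothetical ballots: Johnson voters fall back to their second preference.
ballots = [
    ("Johnson", "Trump"),
    ("Johnson", "Trump"),
    ("Trump",),
    ("Trump",),
    ("Clinton",),
    ("Clinton",),
    ("Clinton",),
]
print(instant_runoff(ballots))  # prints: Trump
```

Here Clinton leads on first preferences, but once Johnson is eliminated his ballots transfer to Trump, who then has a majority: the third-party votes are not wasted.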

Time is running out for the Republican party. If the best-case scenario for America and the world comes true and Trump is defeated, a discussion about the future of the Republican party will break out. NBC News sees four options:
1. The Republican Party remade in Trump's image,
2. Refined Trumpism (Trump without the bigotry),
3. The Party Establishment Wins and
4. The Stalemate.

Without a return to rationality, this process is going to be very messy. I will not call it likely, but I would not even be surprised if the Republican party fell apart like the [[Whig party]] did over slavery. The GOP has not had a real leader for years. The party spans a broad coalition, many groups of which radicalized under a black president; because their politics were limited to blocking everything, there was hardly any discussion about the political program. Add to this the frustration of losing and bad future prospects due to demographics, which Trump made a lot worse, and you get a dangerous situation, especially when you cannot negotiate and debate rationally.

The GOP's meddling in the private lives of consenting adults, its xenophobia and its anti-science stance will make its demographic problems with young people below 45, science enthusiasts and non-whites larger and larger. There is no need for that. What people do in their bedrooms could just as well be seen as a private affair rather than as something Washington should police. Immigrants are on average quite conservative and most would likely vote conservative if conservatives did not reject them. Conservative parties outside the USA embrace science, scientists used to be proud conservatives, and in Europe a large part of the faculty, if not the majority, is conservative.

A debate is not possible when the only response to inconvenient facts is "Lying Ted" and insults. The body language of the Trump supporter already indicates that he will defend Trump no matter the argument. I encounter the same attitude when I visit the mitigation sceptical blog Watts Up With That. They are determined not to have a productive debate.

Related reading

Joshua Green at Bloomberg wrote a very favorable bio on Breitbart's Steve Bannon: This Man Is the Most Dangerous Political Operative in America

Washington Post on the 2013 Republican National Committee’s Growth and Opportunity Project report: GOP autopsy report goes bold.

Media Matter on the new book "I’m Right and You’re an Idiot: The Toxic State of Public Discourse and How To Clean it Up": New Book Explains Media’s Role In Today’s Toxic State Of Public Discourse

Sykes on Morning Joe: GOP Made Itself "Hostage" to Trump

National Review: Conservative Scams Are Bringing Down the Conservative Movement

Experts worry Trump’s war on America’s democratic institutions could do long-term damage

Charlie Sykes: Have We Created This Monster? Talk radio and the rise of Donald Trump

Conservative media reaches a large audience: U.S. Media Publishers and Publications – Ranked for July 2016

* Caricature at the top, Donald Trump - Riding the Wrecking Ball by DonkeyHotey used under a Creative Commons Attribution-Share Alike 2.0 Generic (CC BY-SA 2.0) license.

Dinosaur Birthday Cupcakes by abakedcreation used under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.

Monday, 29 August 2016

Blair Trewin's epic journey to 112 Australian weather stations

Blair Trewin is a wonderful character and one of the leading researchers in the homogenization community. He works at the Australian Bureau of Meteorology (BOM) and created their high-quality homogenized datasets. He also developed a correction method for daily temperature observations that is probably the best we currently have. Fitting to his scientific love of homogenization, he has gone on a quest to visit all 112 weather stations that are used to monitor the Australian climate. Enjoy the BOM blog post on this "epic journey".

To Bourke and beyond: one scientist’s epic journey to 112 weather stations

There are 112 weather observation stations that feed into Australia’s official long-term temperature record—and Bureau scientist Blair Trewin has made it his personal mission to visit all of them! Having travelled extensively across Australia—from Horn Island in the north to Cape Bruny in the south, Cape Moreton in the east to Carnarvon in the west—Blair has now ticked off all but 11 of those sites.

Map: the 112 observation locations that make up Australia's climate monitoring network

Some of the locations are in or near the major cities, but many are in relatively remote areas and can be difficult to access. Blair says perhaps his most adventurous site visit was on the 2009 trip at Kalumburu, an Aboriginal community on the northernmost tip of the Kimberley, and two days’ drive on a rough track from Broome. ‘I asked the locals the wrong question—they said I’d be able to get in, but I didn’t ask them whether I could get back out again’. After striking trouble at a creek crossing leaving town, he spent an unplanned week there waiting for his vehicle to be put on a barge back to Darwin.

While these locations are remote now, in some ways they were even more remote in the past. These days you can get a signal for your mobile phone in Birdsville, Queensland, but as recently as the 1980s, the only means of rapid communication was often-temperamental radio relays through the Royal Flying Doctor Service. Today distance is no longer an issue; the majority of weather stations in the Bureau’s climate monitoring network—including Birdsville—are automated, with thermometers that submit the information electronically.

Photo: Blair Trewin at the weather observation station at Tarcoola, in the far north of South Australia. The Stevenson screen houses a resistance temperature device (thermometer) and a relative humidity probe

But, even some of the sites closer to home have posed a challenge for Blair’s mission. To get to Gabo Island in Victoria for example, you need to either fly or take a boat, and the runway is just a few hundred metres long, so it can only be used in light winds. ‘I spent two days in Mallacoota waiting for the winds to drop enough to get over there’.

Similarly, the site at the Wilsons Promontory lighthouse, if you don’t use a helicopter, is accessed through a 37 km return hike, which Blair did as a training run with one of his Victorian orienteering teammates.

You can read the rest of this adventure at the Blog of the Australian Bureau of Meteorology.