Sunday, 30 August 2015

Democracy is more important than climate change #WOLFPAC

I know, I know, this is comparing apples to oranges. This is a political post. I am thinking of a specific action I am enthusiastic about to make America a great democracy again: WOLFPAC is working to get a constitutional amendment to get money out of politics. If I had to choose between a mitigation-sceptical WOLFPAC candidate and someone who accepts climate change but is against this amendment, I would choose the mitigation sceptic.

Money is destroying American politics. Politicians need money for their campaigns. The politician with the most money nearly always wins. This goes both ways; bribing the winner is more effective, but money for headquarters and advertisements sure helps a lot to win. For the companies this is a good investment; the bribe is normally much smaller than the additional profit they make by getting contracts and law changes. Pure crony capitalism.

This is a cross-partisan issue. Republican presidential candidate Donald Trump boasted:
[W]hen you give, they do whatever the hell you want them to do. ... I will tell you that our system is broken. I gave to many people. Before this, before two months ago, I was a businessman. I give to everybody. When they call, I give. And you know what? When I need something from them, two years later, three years later, I call them. They are there for me. And that's a broken system.
For Democratic presidential candidate Bernie Sanders getting money out of politics is a priority issue. He will introduce "the Democracy Is for People constitutional amendment" and promises "that any Sanders Administration Supreme Court nominee will commit to overturning the disastrous Citizens United decision."

Bribery will not stop with an appeal to decency. It should be forbidden.

The WOLFPAC plan to get bribery forbidden sounds strong. They want to get a constitutional amendment that forbids companies from bribing politicians, and they want this amendment passed by the states, rather than Washington, because the federal politicians depend most on corporate funding. They believe that state legislators believe more strongly in their political ideals. This is also my impression from local politics as a student; even the politicians I did not agree with mostly seemed to believe in what they said. Once I even overheard a local politician passionately discussing a reorganization to improve services and employee morale, with his girlfriend, in a train on a Saturday afternoon.

In Washington it is harder to win against lobbies that have much more money. At the state level election campaigns are cheaper, which makes the voice of the people stronger and means a little money has more impact. This makes it easier for WOLFPAC to influence the elections: try to get rid of politicians who oppose the amendment, reward the ones that work for it.

Even at the federal level there may actually be some possibilities. Corporations also compete with each other. They are thus more willing to fund campaigns that help themselves than campaigns that help all companies. In the most extreme case, if only one company had to cough up all the money to keep money in politics, this company would be a lot less profitable than all the others that benefit from this "altruistic company". In other words, even if companies have a lot of money, you are not fighting against their entire war chest.

Almost all people are in favour of getting money out of politics. Thus a campaign in favour of it is much cheaper than one against. WOLFPAC was founded by the owner of The Young Turks internet news company, which has a reach that is comparable to the cable news channels. This guarantees that the topic will not go away and that time is on our side. Some politicians may like to ignore the amendment as long as they can, but will not dare to openly oppose such a popular proposal. With more and more states signing on, the movement becomes harder to ignore.

Wealthy individuals may well bribe politicians now, but be in favour of no one being able to do so. Just like someone can fly or drive a car while being in favour of changing the transport system so that this is no longer necessary.

It takes two thirds of the states (34) to call for a constitutional convention on a certain topic. The amendment that comes out of this then has to be approved by three quarters of the states. The beginning is hardest, but at the moment I am writing this, the hardest hurdle has already been taken: four states—Vermont, California, New Jersey and Illinois—have already called for a constitutional convention, see map at the top. In
Connecticut, Delaware, Hawaii, Maryland, Missouri and New Hampshire, the amendment already passed one of the houses. In many more the resolution has been introduced or approved in committees.



I would say this has a good chance of winning. It would feel so good to get this working. For America and for the rest of the world; given how dominant America is, a functioning US political system is important for everyone. It would probably also do a lot to heal the culture war in America, fuelled by negative campaigning. As such it could calm down the climate "debate", which is clearly motivated by politics and only pretends to worry about the integrity of science. The nasty climate "debate" is a social problem in the USA, which should be solved politically in the USA; no amount of science communication can do this.

A recent survey across 14 industrialised nations has found that Australia and Norway are the most mitigation-sceptical countries. This does not hurt Norway because it has a working political system. A Norwegian politician could not point to a small percentage of political radicals to satisfy his donors. In a working political system playing the fool seriously hurts your reputation; it would probably even work better to honestly say you do this because you support fossil fuel companies. The political radicals at WUWT & Co. will not go away, but it is not a law that politicians use them as an excuse.

Please have a look at the plan of WOLFPAC. I think it could work and that would be fabulous.


Monday, 24 August 2015

Karl Rove strategy #3: Accuse your opponent of your weakness

Quite often a mitigation skeptic will present an "argument" that would make sense if the science side made it, but makes no sense coming from their side. Classics would be dead African babies or being in it for the money.

A more personal example would be Anthony Watts, the host of the mitigation-skeptical blog WUWT, claiming that I have a WUWT fixation. This from fixation champion Anthony Watts, who incites hatred of Michael E. Mann on a weekly if not daily basis. That I write about his cesspit occasionally makes sense given that Watts claims to doubt the temperature trend from station measurements; that is my topic. WUWT is also hard to avoid: PR professional Watts calls his blog "The world's most viewed site on global warming and climate changes" to improve its standing with journalists, and since his blog is at least one of the larger ones, the immoral behavior of WUWT represents the mainstream of the political movement against mitigation.

You can naturally see this behavior as the psychological problem called [[projection]]:
Psychological projection is the act or technique of defending oneself against unpleasant impulses by denying their existence in oneself, while attributing them to others.

Political strategy

It is also a political strategy. One that works. It is strategy #3 on the list of USA Republican political strategist Karl Rove. If you see two groups basically making the same claim, it is hard to decide who is right. That requires going into the details, investing time and most people will not do that. They will simply select the version they like most and go on with their lives.

I must admit that I did not see a good way to respond to the #3 nonsense and typically simply ignored it, rationalizing that these people were too radical anyway and that communication with them is useless. However, that rationalization misses the point: communicating with the political extremists at WUWT & Co. was never about convincing them; that is futile. You communicate with these people for the lurkers (if there are normal people around). For the lurkers it may be less clear who is wrong and for the lurkers it may be less clear that this is a pattern, a strategy.

Thus I was happy to finally have found a suggestion for how to reply. Art Silverblatt—professor of Communication and Journalism—and colleagues have developed strategies to neutralize the strategies of Karl Rove. Their response strategy is to make clear to the public how strategy #3 works and to deflect it with humor.

For example, when Ronnie Earle was attacked by Tom DeLay using strategy #3, his response strategy was:
Earle put the attack into perspective for the public, saying, "I find they often accuse others of doing what they themselves do."

[Earle] chose to discuss the tactic in terms of how it denigrated the political process and, ultimately, the voters. "This is about protecting the integrity of our electoral system and I couldn't just ignore it."

Earle took a humorous approach, so that he wasn't thrown off-stride by the attacks.

"Being called vindictive and partisan by Tom DeLay is like being called ugly by a frog."
In the climate "debate" it is probably also a good idea to bring the discussion back to the facts. Our strong point is that we have science on our side. Try to make mitigations skeptics to stick to one point and debate this in detail, that exposes their weakest side. Point the lurkers to all their factual and thinking errors, debating tricks and attempts to change the topic.

Poor African babies

So how should you respond the next time someone claims that scientists doing their job to understand the climate system and how the climate is changing are killing African babies that need coal to survive? Explain that it is a typical strategy for political extremists to claim that other people do what they themselves do, and that they do this to confuse the audience. That this endangers our open democratic societies and, in the end, our freedom and prosperity.

That the opposition to mitigation is delaying solving the problem and that this will kill many vulnerable people. Unfortunately, we do not only have nice people on this world; for some, the impacts of climate change may be the reason to want to delay solving the problem. Thus it is likely good to also note that the largest economic damages will be in the industrialized countries. That the reason we are wealthy and powerful is our investment in capital. That these investments have been made for the climate of the past. That the high input of capital means that industrialized societies are highly optimized and more easily disrupted.



I would also explain that in the current phase the industrialized world needs to build up renewable energy systems to drive the costs down. That no one expects poor countries to do this. Because many African countries have a very low population density, centralized power plants would need expensive distribution systems, and ever-cheaper renewable energy is often a good choice, especially in combination with cell phones. Building up a renewable energy system in the industrialized world would also reduce demand for fossil fuels on the world market and lower the prices for the global poor.



Any ideas to put more humor in this response? I have been living in Germany for too long.



Related reading

Be aware that not everyone shares your values: Do dissenters like climate change?

How to talk with mitigation skeptics online: My immature and neurotic fixation on WUWT.

How to talk with someone you know in person about climate change: How to talk to uncle Bob, the climate ostrich.


* Solar power world wide figure by SolarGIS © 2011 GeoModel Solar s.r.o. This figure is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Tuesday, 11 August 2015

History of temperature scales and their impact on the climate trends

Guest post by Peter Pavlásek of the Slovak Institute of Metrology. Metrology, not meteorology: metrologists are the scientists who work on making measurements more precise by developing highly accurate standards, and thus on making experimental results better comparable.

Since the beginning of climate observations temperature has always been an important quantity that needed to be measured, as its values affect every aspect of human society. Therefore its precise and reliable determination has always been important. Of course the ability to precisely measure temperature strongly depends on the measuring sensor and method. To be able to determine how precisely a sensor measures temperature, it needs to be calibrated against a temperature standard. As science progressed, new temperature scales were introduced and the previous temperature standards naturally changed. In the following sections we will have a look at the importance of temperature scales throughout history and their impact on the evaluation of historical climate data.

The first definition of a temperature standard was created in 1889. At the time thermometers were ubiquitous, and had been used for centuries; for example, they had been used to document the ocean and air temperatures now included in historical records. Metrological temperature standards are based on state transitions of matter (under defined conditions and matter composition) that generate a precise and highly reproducible temperature value, for example the melting of ice, the freezing of pure metals, etc. Multiple standards can be used as the basis for a temperature scale by creating a set of defined temperature points along the scale. An early definition of a temperature scale was invented by the medical doctor Sebastiano Bartolo (1635-1676), who was the first to use melting snow and the boiling point of water to calibrate his mercury thermometers. In 1694 Carlo Renaldini, mathematician and engineer, suggested using the ice melting point and the boiling point of water, dividing the interval between these two points into 12 degrees, applying marks on a glass tube containing mercury. Réaumur divided the scale into 80 degrees, while the modern division into roughly 100 degrees was adopted by Anders Celsius in 1742. Common to all the scales was the use of phase transitions as anchor points, or fixed points, to define intermediate temperature values.

It was not until 1878 that the first sort of standardized mercury-in-glass thermometers were introduced, as an accompanying instrument for the metre prototype, to correct for thermal expansion of the length standard. These special thermometers were constructed to guarantee reproducibility of measurement to a few thousandths of a degree. They were calibrated at the Bureau International des Poids et Mesures (BIPM), established after the recent signing of the Convention du Mètre of 1875. The first reference temperature scale was adopted by the 1st Conférence générale des poids et mesures (CGPM) in 1889. It was based on constant-volume gas thermometry, and relied heavily on the work of Chappuis at the BIPM, who had used the technique to link the readings of the very best mercury-in-glass thermometers to absolute (i.e. thermodynamic) temperatures.

Meanwhile, the work of Hugh Longbourne Callendar and Ernest Howard Griffiths on the development of platinum resistance thermometers (PRTs) laid the foundations for the first practical scale. In 1913, after a proposal from the main institutes of metrology, the 5th CGPM encouraged the creation of a thermodynamic International Temperature Scale (ITS) with associated practical realizations, thus merging the two concepts. The development was halted by World War I, but the discussions resumed in 1923 when platinum resistance thermometers were well developed and could be used to cover the range from −38 °C, the freezing point of mercury, to 444.5 °C, the boiling point of sulphur, using a quadratic interpolation formula that included the boiling point of water at 100 °C. In 1927 the 7th CGPM adopted the International Temperature Scale of 1927, which even extended the use of PRTs down to −183 °C. The main intention was to overcome the practical difficulties of the direct realization of thermodynamic temperatures by gas thermometry, and the scale was a universally acceptable replacement for the various existing national temperature scales.

In 1937 the CIPM established the Consultative Committee on Thermometry (CCT). Since then the CCT has taken all initiatives in matters of temperature definition and thermometry, including, in recent years, issues concerning environment, climate and meteorology. It was in fact the CCT that in 2010, shortly after the BIPM-WMO workshop on “Measurement Challenges for Global Observing Systems for Climate Change Monitoring”, submitted the recommendation CIPM (T3 2010), encouraging National Metrology Institutes to cooperate with the meteorology and climate communities to establish traceability for those thermal measurements of importance for detecting climate trends.

The first revision of the 1927 ITS took place in 1948, when extrapolation below the oxygen point to −190 °C was removed from the standard, since it had been found to be an unreliable procedure. The IPTS-48 (with “P” now standing for “practical”) extended down only to −182.97 °C. It was also decided to drop the name "degree Centigrade" for the unit and replace it by degree Celsius. In 1954 the 10th CGPM finally adopted a proposal that Kelvin had made a century before, namely that the unit of thermodynamic temperature be defined in terms of the interval between absolute zero and a single fixed point. The fixed point chosen was the triple point of water, which was assigned the thermodynamic temperature of 273.16 °K, or equivalently 0.01 °C, and replaced the melting point of ice. Work continued on helium vapour pressure scales, and in 1958 and 1962 the efforts were concentrated on low temperatures below 0.9 K. In 1964 the CCT defined the reference function “W” for interpolating the PRT readings between all the new low-temperature fixed points, from 12 K to 273.16 K, and in 1966 further work on radiometric, noise, acoustic and magnetic thermometry led the CCT to prepare for a new scale definition.

In 1968 the second revision of the ITS was delivered: both the thermodynamic and practical units were defined to be identical and equal to 1/273.16 of the thermodynamic temperature of the triple point of water. The unit itself was renamed "the kelvin" in place of "degree Kelvin" and designated "K" in place of "°K". In 1976 further considerations and results at low temperatures between 0.5 K and 30 K were included in the Provisional Temperature Scale, EPT-76. Meanwhile several national metrology institutes continued the work to better define the fixed-point values and the characteristics of PRTs. The International Temperature Scale of 1990 (ITS-90) came into effect on 1 January 1990, replacing the IPTS-68 and the EPT-76, and is still used today to guarantee the traceability of temperature measurements. Among the main features of the ITS-90, with respect to the 1968 scale, are the use of the triple point of water (273.16 K), rather than the freezing point of water (273.15 K), as a defining point; closer agreement with thermodynamic temperatures; and improved continuity and precision.

It follows that any temperature measurement made before 1927 is impossible to trace to an international standard, except for a few nations with a well-defined national definition. Later on, during the evolution of both the temperature unit and the associated scales, changes have been introduced to improve the realization and measurement accuracy.

With each redefinition of the practical temperature scale since the original scale of 1927, the BIPM published official transformation tables to enable conversion between the old and the revised temperature scale (BIPM, 1990). Because of the way the temperature scales have been defined, they really represent an overlap of multiple temperature ranges, each of which may have its own interpolating instrument, fixed points or mathematical equations describing the instrument response. A consequence of this complexity is that no simple mathematical relation can be constructed to convert temperatures acquired according to older scales into the modern ITS-90 scale.
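To make the table-based conversion concrete, here is a minimal sketch in Python of how such a correction could be applied to a historical series. The correction values below are placeholders for illustration only; for real work the official transformation tables published by the BIPM have to be used, and the appropriate table depends on whether the data were recorded on the 1927, 1948 or 1968 scale.

```python
import numpy as np

# Placeholder correction table: temperature on the old scale (degC) versus the
# correction (degC) to add to obtain ITS-90 values. ILLUSTRATIVE VALUES ONLY,
# not the official BIPM transformation tables.
old_scale_temp = np.array([-50.0, -25.0, 0.0, 25.0, 50.0])
correction     = np.array([ 0.02,  0.01, 0.0, -0.01, -0.02])

def to_its90(temps_old_scale):
    """Convert temperatures from an older scale to ITS-90 by linear
    interpolation of the (placeholder) correction table."""
    return temps_old_scale + np.interp(temps_old_scale, old_scale_temp, correction)

# Example: a few daily mean temperatures recorded before 1990
daily_means = np.array([28.7, 30.1, -2.3])
print(to_its90(daily_means))
```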

As an example of the effect of temperature scale alterations, let us examine the correction of the daily mean temperature record at Brera, Milano in Italy from 1927 to 2010, shown in Figure 1. The figure illustrates the consequences of the temperature scale changes and the correction that needs to be applied to convert the historical data to the current ITS-90. The introduction of new temperature scales in 1968 and 1990 is clearly visible as discontinuities in the magnitude of the correction, with significantly larger corrections for data prior to 1968. As expected from Figure 1, the correction cycles with the seasonal changes in temperature: the higher summer temperatures require a larger correction.


Figure 1. Example corrections for the weather station at Brera, Milano in Italy. The values are computed for the daily average temperature. The magnitude of the correction cycles with the annual variations in temperature: the inset highlights how the warm summer temperatures are corrected much more (downward) than the cool winter temperatures.

For the same reason the corrections will differ between locations. The daily average temperatures at the Milano station typically approach 30 °C on the warmest summer days, while they may fall slightly below freezing in winter. In a different location with larger differences between typical summer and winter temperatures the corrections might oscillate around 0 °C, and a more stable climate might see smaller corrections overall: at Utsira, a small island off the south-western coast of Norway, the summertime corrections are typically 50% below the values for Brera. Figure 2 shows the magnitude of the corrections for specific historical temperatures.


Figure 2. The corrections in °C that need to be applied to historical temperatures in the range from −50 °C up to +50 °C, depending on the time period in which the historical data were measured.

The uncertainty in the temperature readings from any individual thermometer is significantly larger than the corrections presented here. Furthermore, even for the limited timespan since 1927 a typical meteorological weather station has seen many changes which may affect the temperature readings. Examples include instrument replacement; instrument relocations; screens may be rebuilt, redesigned or moved; the schedule for readings may change; the environment close to the station may become more densely populated and therefore enhance the urban heat island effect; and manually recorded temperatures may suffer from unconscious observer bias (Camuffo, 2002; Bergstrøm and Moberg, 2002; Kennedy, 2013). Despite the diligent quality control employed by meteorologists during the reconstruction of long records, every such correction also has an uncertainty associated with it. Thus, for an individual instrument, and perhaps even an individual station, the scale correction is insignificant.

On the other hand, more care is needed for aggregate data. The scale correction represents a bias which is equal for all instruments, regardless of location and use, and simply averaging data from multiple sources will not eliminate it. The scale correction is smaller than, but of the same order of magnitude as, the uncertainty components claimed for monthly average global temperatures in the HadCRUT4 dataset (Morice et al., 2012). To evaluate the actual value of the correction for the global averages would require a recalculation of all the individual temperature records. However, the correction does not alter the warming trend: if anything, it would strengthen it slightly. Time averaging or averaging over multiple instruments has been claimed to lower the temperature uncertainty to around 0.03 °C (for example in Kennedy (2013) for aggregate records of sea surface temperature). In our opinion, to be credible such claims for the uncertainty need to take the scale correction into account.
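A small numerical sketch (with made-up numbers) of this point: averaging many stations beats down independent random errors, but a scale correction common to every instrument survives the averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(42)

n_stations = 1000
true_temp = 15.0          # degC, what every station should report (illustrative)
random_error_sd = 0.5     # degC, independent error per station (illustrative)
common_scale_bias = 0.01  # degC, identical for all instruments (illustrative)

readings = true_temp + rng.normal(0.0, random_error_sd, n_stations) + common_scale_bias

print(f"Error of the {n_stations}-station average: {readings.mean() - true_temp:.4f} degC")
# The random part shrinks roughly as 0.5/sqrt(1000), about 0.016 degC,
# but the 0.01 degC common bias is not reduced by averaging at all.
```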

Scale correction for temperatures earlier than 1927 is harder to assess. Without an internationally accepted and widespread calibration reference it is impossible to construct a simple correction algorithm, but there is reason to suspect that the corrections become more important for older parts of the instrumental record. Quantifying the correction would entail close scrutiny of the old calibration practices, and hinges on available contemporary descriptions. Conspicuous errors can be detected, such as the large discrepancy which Burnette et al. (2010) found in the 1861 records at Fort Riley, Kansas. In that case the decision to correct the dubious values was corroborated by metadata describing a change of observer; however, this also illustrates the calibration pitfall when no widespread temperature standard was available. One would expect that many more instruments were slightly off, and the question is whether this introduced a bias or just random fluctuations which can be averaged away when producing regional averages.

Whether the relative importance of the scale correction increases further back in time remains an open question. Errors from other sources, such as the time schedule for the measurements, also become more important and harder to account for; one example is the transformation from old Italian time to modern western European time described by Camuffo (2002).

This brief overview of the history of temperature scales has shown what an impact these changes have on historical temperature data. As discussed earlier, the corrections originating from the temperature scale changes are small compared with other factors. But even though the values of the correction may be small, that does not mean they should be ignored, as their magnitude is far from negligible. More details on this issue, and the conversion equation that enables any historical temperature data from 1927 up to 1989 to be converted to the current ITS-90, can be found in the publication of Pavlásek et al. (2015).



Related reading

Why raw temperatures show too little global warming

Just the facts, homogenization adjustments reduce global warming

References

Camuffo, Dario, 2002: Errors in early temperature series arising from changes in style of measuring time, sampling schedule and number of observations. Climatic change, 53, pp. 331-352.

Bergstrøm, H. and A. Moberg, 2002: Daily air temperature and pressure series for Uppsala (1722-1998). Climatic change, 53, pp. 213-252.

Kennedy, John J., 2013: A review of uncertainty in in situ measurements and data sets of sea surface temperature. Reviews of Geophysics, 52, pp. 1-32.

Morice, C.P., et al., 2012: Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set. Journal of Geophysical Research, 117, pp. 1-22.

Burnette, Dorian J., David W. Stahle, and Cary J. Mock, 2010: Daily-Mean Temperature Reconstructed for Kansas from Early Instrumental and Modern Observations. Journal of Climate, 23, pp. 1308-1333.

Pavlasek P., A. Merlone, C. Musacchio, A.A.F. Olsen, R.A. Bergerud, and L. Knazovicka, 2015: Effect of changes in temperature scales on historical temperature data. International Journal of Climatology, doi: 10.1002/joc.4404.

Friday, 17 July 2015

Lakes are warming at a surprisingly fast rate


Map with lake temperature trends. As so often the trend is strongest in the mid-latitudes of the Northern Hemisphere. Two seasons are used to minimize cloud blockage: JAS (July, August & September) and JFM (January, February & March) for the dry season.

Many changes in the climate system go faster than expected, which fits my hunch that the station temperature trend may have a cooling bias. In the coming time I would like to blog about a few example changes. This first post is about lakes and rivers, their temperature changes and changes in the date they freeze and the date the ice breaks up. In this case it is hard to say whether this goes faster than expected, because there is not much research on this, but the temperature changes sure are surprisingly fast.

We will see more research on this in the future. There is now a Global Lake Temperature Collaboration (GLTC), which is collecting and analyzing lake temperatures. It is easy to complain about the weather services and how they keep on making changes to the meteorological networks and fail to share many important observational datasets (and I will keep complaining), but at least they have systematically made such historical observations. For other observations, such as lake temperatures and especially ecological datasets, it is much harder to obtain long and stable records without institutional support. Now, going into the century of climate change, this institutional failure becomes even more problematic. I feel it should be part of the international climate change treaties to set up organizations that can provide long-term, well-documented and stable climate-quality measurements of a large range of environmental systems.

One of the reasons to found the Global Lake Temperature Collaboration was a scientific article by Schneider and Hook in 2010. It analyzed the temperature trends of lakes using thermal infra-red satellite images ([[AVHRR]] and [[ATSR]]). Between 1985 and 2009 the satellite lake temperatures increased by 1.13°C, which they report is more than the regional air temperature increases. For comparison this amounts to 4.5°C per century; see graph below. This is stronger than the land temperatures, although one would expect less warming of the lakes. For the same period the Northern Hemisphere temperature of Berkeley Earth increased by 3.9°C per century.


Trend in lake temperature anomalies. This is an average over all 113 water bodies that were large enough and had at least 15 years of data. The trend over the period 1985 to 2009 is 4.5±1.1°C per century.
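For those who want to check the unit conversion: 1.13°C over the 25 years from 1985 to 2009 is 1.13/25 × 100 ≈ 4.5°C per century. Below is a minimal sketch of how such a trend can be estimated from a series of annual anomalies; the anomaly values are made up purely to illustrate the calculation.

```python
import numpy as np

# Made-up annual lake temperature anomalies (degC) for 1985-2009,
# only to illustrate the trend calculation.
years = np.arange(1985, 2010)
rng = np.random.default_rng(0)
anomalies = 0.045 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

# Ordinary least-squares trend in degC per year, converted to degC per century.
slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(f"Trend: {slope_per_year * 100:.1f} degC per century")
```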

Satellite data are great for their global overview (see graph at the top of this post), but are tricky when it comes to trends. Trend estimates from satellite data are difficult due to degradation of the instruments in a harsh space environment and changes in the orbit and height of the satellite, while at the same time the limited life span of satellites means that the instruments are regularly replaced and technological improvements often lead to new designs. All this happens while the small number of satellites means that there is minimal redundancy to study such data problems. Schneider and Hook (2010) did their best to study such data problems. They used data from seasons with few clouds, so that the satellite can see the lakes more often. They compared their data with surface observations and found only small biases between the satellites and no indications of trend biases. And they used night-time observations to reduce the influence of orbital drift.

Still, an astonishing outlier trend from satellite data calls for ground validation (in situ measurements). The GLTC now provides a dataset with both satellite and in-situ lake temperature measurements. The paper describing the dataset is out now (Sharma et al., 2015). The paper analyzing the trends has yet to be published, but Philipp Schneider of the GLTC wrote to me that the in-situ trend is similar. Further papers explaining the trends are in preparation.

At the moment it is not yet clear what the reason for the stronger increase is. Many lakes are close to the Arctic, thus it could be the ice-albedo feedback for northern lakes. The summer surface temperature of Lake Superior over the interval 1979 - 2006 has, for example, increased by 11±6°C per century, faster than regional atmospheric warming. This is thought to be due to a reduction in the albedo of the lakes caused by a reduction in ice cover (Austin and Colman, 2007). The air in the Arctic is also warming faster than elsewhere. Changes in the observations are naturally also possible — as always — and the land surface temperature trend might also be underestimated.

Stronger insolation may especially heat lakes, which typically reflect less solar radiation than the land surface. This may be particularly important for the recent decades in the industrialised countries, where air pollution has been reduced considerably.

Surface temperatures may also change due to less mixing with deep cold water: as the top of the lake warms more, the warm surface water mixes less well with the cold, dense deep water, and reductions in wind speed can reduce mixing further (Butcher et al., 2015). Changes in the transparency of the water can also influence where the solar warming ends up (Butcher et al., 2015).

In other words, no observation is ever completely straightforward, and the air temperature is just one factor influencing lake water temperatures. Just like with station measurements, you always need to study which part of the changes in the raw observations is the part you are interested in.

On the other hand, because over water more of the additional greenhouse warming goes into evaporation rather than into heating, the warming of the air over land is expected to be stronger than that of the lake water. Butcher and colleagues (2015) estimate that the surface water temperature increase is only about 77% of the increase in average air temperature. Previously, Schmid and colleagues (2014) estimated this to be between 70 and 85%.
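As a rough worked example of what these ratios imply (the air temperature trend here is made up for illustration): if the air over land warmed by 0.30°C per decade, the 77% of Butcher and colleagues would correspond to about 0.23°C per decade of lake surface warming, and the 70 to 85% range of Schmid and colleagues to roughly 0.21 to 0.26°C per decade.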

Freezing and melting of lakes and rivers

The lake temperature observations are unfortunately not very long. To put their warming into perspective there are also observations of the freezing and breakup dates of lakes and rivers. These sometimes go back many centuries. Magnuson and colleagues (2000) have gathered 39 observational datasets on lakes and rivers with more than 150 years of data. They found that all but one of them showed later freezing dates and earlier breakup dates. The freeze dates are 5.8 days per century later and the breakup dates are 6.5 days per century earlier. This is comparable to a warming of the regional air temperature of about 1.2°C (2°F) per century, but with a large confidence interval. For comparison, the Berkeley Earth dataset shows a warming of the NH land temperatures for the same period, 1846 to 1995, of 0.67°C per century. For this comparison it should be remembered that these rivers and lakes are in high-latitude regions that warm more.
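A back-of-the-envelope sketch of what these phenology trends mean for the length of the ice season, and how one could translate them into a temperature signal given an assumed sensitivity. The sensitivity value in the sketch is a placeholder chosen for illustration, not the relationship used by Magnuson and colleagues.

```python
# Trends reported by Magnuson et al. (2000)
freeze_later_days_per_century = 5.8
breakup_earlier_days_per_century = 6.5

# Total shortening of the ice-covered season
season_change = freeze_later_days_per_century + breakup_earlier_days_per_century
print(f"Ice season shortens by about {season_change:.1f} days per century")

# Translating this into a temperature trend needs a sensitivity (days of ice
# cover lost per degC of warming). The value below is a PLACEHOLDER; Magnuson
# et al. report a warming of roughly 1.2 degC per century for these series.
days_per_degC = 10.0
print(f"Implied warming: {season_change / days_per_degC:.1f} degC per century")
```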


Time series of freeze and breakup dates from selected Northern Hemisphere lakes and rivers (1846 to 1995). Data were smoothed with a 10-year moving average. Figure 1 from Magnuson (2002).

For one dataset they mention a small non-climatic influence (a power plant) and one dataset (a harbor) is excluded because of its much stronger trend. Magnuson and colleagues seem confident that warm waste water is not the reason for the trends in the other series, but do not explicitly write about that in their 4-page Science article.


Melting lakes showing a clear contrast between ice and water. This makes it relatively easy to use historical aircraft and satellite observations. Figure 5 from Duguay et al. (2003).
The advantage of such datasets is that they provide yearly information, that the observations are often long and that they are relatively precise. For the recent decades, the observations can be made for a large number of lakes from space. These freezing and breakup dates are also nicely determined by the temperature averaged over a longer period, which removes a lot of variability: Freezing is determined by the temperature in the last two months before the event.

What complicates matters is that snow insulates the ice and slows down freezing and thawing. Thus the ice breakup of lakes is also influenced by snow on top of the ice. The depth of the lake is also important; deeper lakes thaw later (Duguay and colleagues, 2003). The breakup date of rivers is additionally determined by the timing and amount of spring river run-off.

Concluding: lake temperatures have been rising fast over the last decades, likely faster than the air temperatures, while one would expect lakes to warm more slowly because more heat goes into evaporation. This could be in part due to an albedo feedback or due to more sunshine from reductions in air pollution during these decades. Furthermore, the length of the period that lakes and rivers are frozen has also been decreasing rapidly since 1850. Converting this into a temperature signal introduces a large uncertainty, but this temperature increase also seems to be faster than the current estimates of the land surface temperature increase.

Now if I were a political activist, I would call climate science a hoax or claim that all scientists are stupid. Just listen to lonely brilliant Galileo me, forget science, the oceans will boil soon.

But, well, I am sorry for being such a scientist; I would just say that we found something very interesting. Apparent discrepancies help one to understand a problem better. Only once we understand the warming of the lakes better can we know whether climate science was wrong.

As will be the case for most of this blog series on faster changes, the topic of this post goes beyond my expertise. Thus if anyone knows of good studies on this topic, I would be very grateful if you could leave a comment or write to me. Especially if anyone knows of comparisons between air and water temperature trends or between models and observations for lake and river temperatures or their ice cover. There are a number of interesting papers coming up, thus I will probably have to write an update in a few months.



Related resources

John Lenters (coordinator of the GLTC) at Nature's Scientific Data blog: Author’s Corner: Are lakes warming?

Global Lake Temperature Collaboration

Why raw temperatures show too little global warming

A recent study shows that since 1970 the ocean heat content of the upper 700m has increased 15% more than climate models have predicted.

Chris Mooney has an interesting piece in the Washington Post on related snow cover observations: Northern Hemisphere snow cover is near record lows. Here’s why that should worry you.

More scientific articles on lake ice by the group of John Magnuson.

The river and lake ice phenology database: Benson, B. and J. Magnuson. 2000, updated 2012. Global lake and river ice phenology database. Boulder, Colorado USA: National Snow and Ice Data Center. doi: 10.7265/N5W66HP8.

References

Austin, J. A. and S. M. Colman, 2007: Lake Superior summer water temperatures are increasing more rapidly than regional air temperatures: A positive ice-albedo feedback, Geophysical Research Letters, 34, art. no. L06604, doi: 10.1029/2006GL029021.

Butcher, Jonathan B., Daniel Nover, Thomas E. Johnson, and Christopher M. Clark, 2015: Sensitivity of lake thermal and mixing dynamics to climate change. Climatic Change, March 2015, 129, Issue 1-2, pp 295-305, doi: 10.1007/s10584-015-1326-1.

Duguay, Claude R., Greg M. Flato, Martin O. Jeffries, Patrick Ménard, Kim Morris, and Wayne R. Rouse, 2003: Ice-cover variability on shallow lakes at high latitudes: model simulations and observations. Hydrological Processes, 17, pp. 3465-3483, doi: 10.1002/hyp.1394.

Magnuson, John J., Dale M. Robertson, Barbara J. Benson, Randolf H. Wynne, David M. Livingstone, Tadashi Arai, Raymond A. Assel, Roger B. Barry, Virginia Card, Esko Kuusisto, Nick G. Granin, Terry D. Prowse, Kenton M. Stewart, and Valery S. Vuglinski, 2000: Historical trends in lake and river ice cover in the Northern Hemisphere. Science, 289, pp. 1743-1746, doi: 10.1126/science.289.5485.1743.

Schmid, Martin, Stefan Hunziker, and Alfred Wüest, 2014: Lake surface temperatures in a changing climate: a global sensitivity analysis. Climatic Change, 124, pp. 301–315, doi: 10.1007/s10584-014-1087-2.

Schneider, Philipp, and Simon J. Hook, 2010: Space observations of inland water bodies show rapid surface warming since 1985. Geophysical Research Letters, 37, art. no. L22405, doi: 10.1029/2010GL045059.

Schneider, Philipp and Simon J. Hook, 2012: Global Trends of Lake Surface Temperatures Observed From Space. Geophysical Research Abstracts, 14, EGU2012-2858, EGU General Assembly 2012.

Schneider, Philipp, Simon J. Hook, Derek K. Gray, Jordan S. Read, Stephanie E. Hampton, Catherine M. O’Reilly, Sapna Sharma, and John D. Lenters, 2013: Global lake warming trends derived from satellite and in situ observations. Geophysical Research Abstracts, 15, EGU2013-2235, EGU General Assembly 2013.

Wednesday, 24 June 2015

Overconfidence in the nut test


My apologies, yesterday I promoted an erroneous article on Twitter. Science journalist Dan Vergano wrote about his simple nut test, based on the hallmark of almost any nut: overconfidence. Overconfidence is also common among mitigation sceptics, who are quick to shout fraud rather than first trying to understand the science and asking polite, specific questions to clear up any misunderstandings.

Thus when Vergano explained his nut test, I accepted it as easily as the word of God is accepted by an elder, as the Dutch say. A typical case of confirmation bias. He writes:

"A decade after my first climate science epiphany, I was interviewing a chronic critic of global warming studies, particularly the 1998 “hockey stick” one that found temperatures in our century racing upward on a slope that mirrored a hockey blade pointed skyward. He argued vociferously that the study’s math was all messed up, and that this meant all of climate science was a sham.

I listened, and at the end of the interview, I gave him the nut test.

“What are the odds that you are wrong?” I asked, or so I remember.

“I’d say zero,” the critic replied. “No chance.”

That’s how you fail the nut test.

I had asked a climate scientist the same question on the phone an hour before.

“I could always be wrong,” the scientist said. Statistically, he added, it could be about a 20% to 5% chance, depending on what he might be wrong about.

That’s how you pass the nut test: by admitting you could be wrong.

And that’s how a climate denier finally convinced me, once and for all, that climate science was on pretty safe ground.
"

The problem with the test is that it is possible to be confident that a scientific statement is wrong. A scientific hypothesis should be falsifiable. One should mainly not be too confident that one is right. Making a positive claim about reality is always risky.



For example, you can be confident that someone cannot conclude from studying temperature changes in the distant past ("hockey stick") that all of climate science is a sham. That is a logical fallacy. The theory of global warming is not only based on the hockey stick, but also on our physical understanding of radiative transfer and the atmosphere, on our understanding of the atmospheres of the other planets and on global climate models. Science is confident it is warming not only because of the hockey stick, but also because of historical temperature measurements, other changes in the climate (precipitation, circulation), changes in ecosystems, warming of lakes and rivers, decreases of the snow cover and so on.

So in this case, the mitigation sceptic is talking nonsense, but theoretically it would have been possible that he was rightly confident that the maths was wrong. Just like I am confident that many of the claims on WUWT & Co. on homogenization are wrong. That does not mean that I am confident the data is flawless, but just that you should not get your science from WUWT & Co.

Three years ago Anthony Watts, host of WUWT, called a conference contribution a peer-reviewed article. I am confident that that is wrong. The abstract claimed, without argument, that half of the homogenization adjustments should go up and half should go down. I am confident that that assumption is wrong. The conference contribution offered a span of possible values. Anthony Watts put the worst extreme in his headline. That is wrong. Now, after three years with no follow-up, it is clear that the authors accept that the conference contribution contained serious problems.

Anthony Watts corrected his post and admitted that the conference contribution was not a peer-reviewed article. This is rare, and the other errors remain. Next to overconfidence, not admitting to being wrong is also common among mitigation sceptics. Anyone who is truly sceptical and follows the climate "debate", please pay attention: when a mitigation sceptic loses an argument, he ignores this and moves on to the next try. This is so common that one of my main tips on debating mitigation sceptics is to make sure you stay on topic and point out to the reader when the mitigation sceptic tries to change the topic. (And the reader is the person you want to convince.)

Not being able to admit mistakes is human, but also a sure way to end up with a completely wrong view of the world. That may explain this tendency of the mitigation sceptics. It is also possible that the mitigation sceptic knows from the start that his argument is bogus, but hopes to confuse the public. Then it is better not to admit to being wrong, because otherwise this mitigation sceptic runs the risk of being reminded of that the next time he tries his scam.

Less common, but also important, is the second-order nut test: people who promote obvious nonsense, or who claim not to know who is right, to give the impression that there is more uncertainty than there really is. Someone claiming to have doubts about a large number of solid results is a clear warning light, and something the above mitigation sceptic is apparently also guilty of ("chronic critic"). It needs a lot of expertise to find problems; it is not likely that some average bloke or even some average scientist pulls this off.

Not wanting to look like a nut, I make an explicit effort not only to talk about what we are sure about (it is warming, it is us, it will continue if we keep on using fossil fuels), but also about what we are not sure about (the temperature change up to the last tenth of a degree). To distinguish myself from the nuts, I try to apologize even when this is not strictly necessary. In this case, even though the nut test is quite useful and the above conclusions were probably right.



Related reading

Science journalist Dan Vergano wrote a nice article on his journey from conservative Catholic climate "sceptic" to someone who accepts the science (including the nut test): How I Came To Jesus On Global Warming.

The three year old conference contribution: Investigation of methods for hydroclimatic data homogenization.

Anthony Watts calls inhomogeneity in his web traffic a success.

Some ideas on how to talk with mitigation sceptics and some stories of people who managed to find their way back to reality.

Falsifiable and falsification in science. Falsifiable is essential. Falsification not that important nor straightforward.

Wednesday, 17 June 2015

Did you notice the recent anti-IPCC article?

You may have missed the latest attack on the IPCC, because the mitigation sceptics did not celebrate it. Normally they like to claim that the job of scientists is to write IPCC-friendly articles. Maybe because that is the world they know, that is how their think tanks function, that is what they would be willing to do for their political movement. The claim is naturally wrong and it illustrates that they are either willing to lie for their movement or do not have a clue how science works.

It is the job of a scientist to understand the world better and thus to change the way we currently see the world. It is the fun of being a scientist to challenge old ideas.

The case in point last week was naturally the new NOAA assessment of the global mean temperature trend (Karl et al., 2015). The new assessment only produced minimal changes, but NOAA made that interesting by claiming the IPCC was wrong about the "hiatus". The abstract boldly states:
Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC ...
The introduction starts:
The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report concluded that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years [1998-2012] than over the past 30 to 60 years.” ... We address all three of these [changes in the observation methods], none of which were included in our previous analysis used in the IPCC report.
Later Karl et al. write that they are better than the IPCC:
These analyses have surmised that incomplete Arctic coverage also affects the trends from our analysis as reported by IPCC. We address this issue as well.
To stress the controversy they explicitly use the IPCC periods:
Our analysis also suggests that short- and long-term warming rates are far more similar than previously estimated in IPCC. The difference between the trends in two periods used in IPCC (1998-2012 and 1951-2012) is an illustrative metric: the trends for these two periods in the new analysis differ by 0.043°C/dec compared to 0.078°C/dec in the old analysis reported by IPCC.
The final punchline goes:
Indeed, based on our new analysis, the IPCC’s statement of two years ago – that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years” – is no longer valid.
And they make the IPCC periods visually stand out in their main figure.


Figure from Karl et al. (2015) showing the trend difference for the old and new assessment over a number of periods, the IPCC periods and their own. The circles are the old dataset, the squares the new one and the triangles depict the new data with interpolation of the Arctic data gap.

This is a clear example of scientists attacking the orthodoxy because it is done so blatantly. Normally scientific articles do this more subtly, which has the disadvantage that the public does not notice it happening. Normally scientists would mention the old work casually; often they expect their colleagues to know which specific studies are (partially) criticized. Maybe NOAA found it easier to use this language this time because they did not write about a specific colleague, but about a group, and a strong group at that.


Figure SPM.1. (a) Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012 from three data sets. Top panel: annual mean values. Bottom panel: decadal mean values including the estimate of uncertainty for one dataset (black). Anomalies are relative to the mean of 1961−1990. (b) Map of the observed surface temperature change from 1901 to 2012 derived from temperature trends determined by linear regression from one dataset (orange line in panel a).
The attack is also somewhat unfair. The IPCC clearly stated that it is not a good idea to focus on such short periods:
In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability (see Figure SPM.1). Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to 0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)
What the IPCC missed in this case is that the problem goes beyond natural variability: another problem is whether the data quality is high enough to talk about such subtle variations.
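The IPCC's point about the sensitivity to the start date is easy to demonstrate. Below is a minimal sketch with made-up annual global mean anomalies: a trend computed over a short window that starts in an artificially warm "1998" differs a lot from the long-term trend and from the same short trend started one year later.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1951, 2013)
# Made-up anomalies: steady 0.012 degC/yr warming plus year-to-year noise,
# with an artificial extra-warm "El Nino" year in 1998 for illustration.
anoms = 0.012 * (years - 1951) + rng.normal(0.0, 0.1, years.size)
anoms[years == 1998] += 0.2

def trend(start, end):
    """Least-squares trend in degC per decade over the given years."""
    sel = (years >= start) & (years <= end)
    return np.polyfit(years[sel], anoms[sel], 1)[0] * 10

print(f"1951-2012: {trend(1951, 2012):.3f} degC/decade")
print(f"1998-2012: {trend(1998, 2012):.3f} degC/decade")
print(f"1999-2012: {trend(1999, 2012):.3f} degC/decade")  # start one year later
```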

The mitigation sceptics may have missed that NOAA attacked the IPCC consensus because the article also attacked the one thing they somehow hold dear: the "hiatus".

I must admit that I originally thought that the emphasis the mitigation sceptics put on the "hiatus" was because they mainly value annoying "greenies", and what better way to do so than to give your most ridiculous argument. Ignore the temperature rise over the last century, start your "hiatus" in a hot super El Niño year and stupidly claim that global warming has stopped.

But they really cling to it, they already wrote well over a dozen NOAA protest posts at WUWT, an important blog of the mitigation sceptical movement. The Daily Kos even wrote: "climate denier heads exploded all over the internet."

This "hiatus" fad provided Karl et al. (2015) the public interest — or interdisciplinary relevance as these journals call that — and made it a Science paper. Without the weird climate "debate", it would have been an article for a good climate journal. Without challenging the orthodoxy, it would have been an article for a simple data journal.

Let me close this post with a video of Richard Alley explaining, even more enthusiastically than usual, what drives (climate) scientists. Hint: it ain't parroting the IPCC. (Even if their reports are very helpful.)
Suppose Einstein had stood up and said, I have worked very hard and I have discovered that Newton is right and I have nothing to add. Would anyone ever know who Einstein was?







Further reading

My draft was already written before I noticed that at Real Climate Stefan Rahmstorf had written: Debate in the noise.

My previous post on the NOAA assessment asked the question whether the data is good enough to see something like a "hiatus" and stressed the need for climate data sharing and for building up a global reference network. It was frivolously called: No! Ah! Part II. The return of the uncertainty monster.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

How climatology treats sceptics. My experience fits to what you would expect.

References

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp, doi: 10.1017/CBO9781107415324.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Abert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

Saturday, 13 June 2015

Free our climate data - from Geneva to Paris

Royal Air Force- Italy, the Balkans and South-east Europe, 1942-1945. CNA1969

Neglecting to monitor the harm done to nature and the environmental impact of our decisions is only the most striking sign of a disregard for the message contained in the structures of nature itself.
Pope Francis

The 17th Congress of the World Meteorological Organization in Geneva ended today. After countless hours of discussions they managed to pass an almost completely rewritten resolution on sharing climate data in the last hour.

The glass is half full. On the one hand, the resolution clearly states the importance of sharing data. It demonstrates that data sharing is important to help humanity cope with climate change by making it part of the Global Framework for Climate Services (GFCS), which is there to help all nations adapt to climate change.

The resolution considers and recognises:
The fundamental importance of the free and unrestricted exchange of GFCS relevant data and products among WMO Members to facilitate the implementation of the GFCS and to enable society to manage better the risks and opportunities arising from climate variability and change, especially for those who are most vulnerable to climate-related hazards...

That increased availability of, and access to, GFCS relevant data, especially in data sparse regions, can lead to better quality and will create a greater variety of products and services...

Indeed free and unrestricted access to data can and does facilitate innovation and the discovery of new ways to use, and purposes for, the data.
On the other hand, if a country wants to, it can still refuse to share the most important datasets: the historical station observations. Many datasets will be shared: satellite data and products, ocean and cryosphere (ice) observations, and measurements of the composition of the atmosphere (including aerosols). However, information on streamflow and lakes and most of the climate station data are exempt.

The resolution does urge Members to:
Strengthen their commitment to the free and unrestricted exchange of GFCS relevant data and products;

Increase the volume of GFCS relevant data and products accessible to meet the needs for implementation of the GFCS and the requirements of the GFCS partners;
But there is no requirement to do so.

The most positive development is not on paper. Data sharing may well have been the main topic of discussion among the directors of the national weather services at the Congress. They got the message that many of their colleagues find this important, and they are likely to prioritise data sharing in future. I am grateful to the people at the WMO Congress who made this happen; you know who you are. Some directors really wanted a strong resolution as justification towards their governments for opening up their databases. There is already a trend of more and more countries opening up their archives, not only for climate data, but as part of a move towards open governance. Thus I am confident that many more countries will follow this trend after this Congress.

Also good about the resolution is that WMO will start monitoring data availability and data policies. This will make visible how many countries are already taking the high road and speed up the opening of the datasets. The resolution requests WMO to:
Monitor the implementation of policies and practices of this Resolution and, if necessary, make proposals in this respect to the Eighteenth World Meteorological Congress;
In a nice twist, the WMO calls the data to be shared "GFCS data", basically saying: if you do not share climate data, you are responsible for the national damages from climatic changes you could have adapted to, and you are responsible for the failed adaptation investments. The term "GFCS data" does, however, understate how important this data is for basic climate research, research that is needed to guide expensive political decisions on mitigation and, in the end, again adaptation and, ever more likely, geo-engineering.

If I may repeat myself, we really need all the data we can get for an accurate assessment of climatic changes; a few stations will not do:
To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurately we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging.
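To make the neighbour comparison concrete, here is a minimal sketch of the idea behind relative homogenization. It is not the algorithm of any particular homogenization package; the station series, the noise levels and the crude single-break test are illustrative assumptions only.

```python
import numpy as np

def difference_series(candidate, neighbours):
    """Candidate minus the mean of its neighbours; the shared climate signal
    largely cancels, leaving measurement errors and non-climatic breaks."""
    return candidate - np.nanmean(neighbours, axis=0)

def most_likely_break(diff):
    """Return the index where the mean of the difference series shifts most.
    A crude single-break test; real packages use more robust statistics."""
    best_idx, best_shift = None, 0.0
    for k in range(5, len(diff) - 5):          # require a few years on both sides
        shift = abs(np.nanmean(diff[k:]) - np.nanmean(diff[:k]))
        if shift > best_shift:
            best_idx, best_shift = k, shift
    return best_idx, best_shift

# Illustrative annual mean temperatures (°C): a 0.5 °C jump in 1990 at the candidate.
rng = np.random.default_rng(0)
years = np.arange(1961, 2011)
climate = 0.02 * (years - years[0])            # shared warming trend
candidate = 10 + climate + 0.5 * (years >= 1990) + rng.normal(0, 0.1, years.size)
neighbours = np.array([10 + climate + rng.normal(0, 0.1, years.size) for _ in range(5)])

idx, shift = most_likely_break(difference_series(candidate, neighbours))
print(f"Suspected break around {years[idx]}, size ~{shift:.2f} °C")
```

The denser the network, the better the shared climate signal cancels in the difference series, and the smaller the non-climatic breaks that can still be detected.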
The problem, as so often, is mainly money. Weather services earn some revenue from selling climate data. This revenue cannot be large compared to the impacts of climate change or compared to the investments needed to adapt, but relative to the budget of a weather service, especially in poorer countries, it does make a difference. At the very least the weather services will have to ask their governments for permission.

Thus we will probably have to up our game. The mandate of the weather services is not enough; we need to make clear to the governments of this world that sharing climate data is of huge benefit to every single country. Compared to the costs of climate change this is a no-brainer. Don Henry writes that "[The G7] also said they would continue efforts to provide US$100 billion a year by 2020 to support developing countries' own climate actions." The revenues from selling climate data are irrelevant compared to that number.

There is a large political climate summit coming up: the COP21 in Paris in December. This week there was a preparatory meeting in Bonn to work on the text of the climate treaty. The current proposal already has optional text about climate research:
[Industrialised countries] and those Parties [nations] in a position to do so shall support the [Least Developed Countries] in the implementation of national adaptation plans and the development of additional activities under the [Least Developed Countries] work programme, including the development of institutional capacity by establishing regional institutions to respond to adaptation needs and strengthen climate-related research and systematic observation for climate data collection, archiving, analysis and modelling.
An earlier climate treaty (COP4 from 1998) already speaks about the exchange of climate data (FCCC/CP/1998/16/Add.1):
Urges Parties to undertake free and unrestricted exchange of data to meet the needs of the Convention, recognizing the various policies on data exchange of relevant international and intergovernmental organizations;
"Urges" is not enough, but that is a basis that could be reinforced. With the kind of money COP21 is dealing with it should be easy to support weather services of less wealthy countries to improve their observation systems and make the data freely available. That would be an enormous win-win situation.

To make this happen, we probably need to show that the climate science community stands behind this. We would need a group of distinguished climate scientists from as many countries as possible to support a "petition" requesting better measurements in data-sparse regions and free and unrestricted data sharing.

To get heard, we would probably also need to write articles for national newspapers; to get them published, they would again have to be written by well-known scientists. To get attention, it would also be great if many climate blogs wrote about the action on the same day.

Maybe we could make this work. My impression was already that basically everyone in the climate science community finds the free exchange of climate data very important and sees the current situation as a major impediment to better climate research. After last week's article on data sharing the response was enormous and only positive. This may have been the first time that a blog post of mine that did not respond to something in the press got over 1000 views. It was certainly my first tweet to get over 13 thousand views and 100 retweets:


This action of my little homogenization blog was even at the top of the Twitter page on the Congress of the WMO (#MeteoWorld), right next to the photo of the newly elected WMO Secretary-General Petteri Taalas.



With all this internet enthusiasm and the dedication of the people fighting for free data at the WMO, and likely many more outside of the WMO, we may be able to make this work. If you would like to stay informed, please fill in the form below or write to me. If enough people show interest, I feel we should try. I also do not have the time, but this is important.






Related reading

Congress of the World Meteorological Organization, free our climate data

Why raw temperatures show too little global warming

Everything you need to know about the Paris climate summit and UN talks

Bonn climate summit brings us slowly closer to a global deal by Don Henry (Public Policy Fellow, Melbourne Sustainable Society Institute at University of Melbourne) at The Conversation.

Free climate data action promoted in Italian. Thank you Sylvie Coyaud.

If my Italian is good enough (that is, Google Translate), this post asks the Pope to put the sharing of climate data in his encyclical. Weather data is a common good.


* Photo at the top: By Royal Air Force official photographer [Public domain], via Wikimedia Commons

Tuesday, 9 June 2015

Comparing the United States COOP stations with the US Climate Reference Network

Last week the mitigation sceptics apparently expected climate data to be highly reliable and were complaining that an update led to small changes. Other weeks they expect climate data to be largely wrong, for example due to non-ideal micro-siting or urbanization. These concerns can be ruled out for the climate-quality US Climate Reference Network (USCRN). This is a guest post by Jared Rennie* introducing a recent study comparing USCRN stations with nearby stations of the historical network, to study the differences in the temperature and precipitation measurements.


Figure 1. These pictures show some of instruments from the observing systems in the study. The exterior of a COOP cotton region shelter housing a liquid-in-glass thermometer is pictured in the foreground of the top left panel, and a COOP standard 8-inch precipitation gauge is pictured in the top right. Three USCRN Met One fan-aspirated shields with platinum resistance thermometers are pictured in the middle. And, a USCRN well-shielded Geonor weighing precipitation gauge is pictured at the bottom.
In 2000 the United States started building a measurement network to monitor climate change, the so-called United States Climate Reference Network (USCRN). These automatic stations have been installed in excellent locations and are expected not to show the influence of changes in their direct surroundings for decades to come. To avoid loss of data, the most important variables are measured by three high-quality instruments. A new paper by Leeper, Rennie, and Palecki now compares the measurements of twelve station pairs of this reference network with nearby stations of the historical US network. They find that the reference network records slightly cooler temperatures and less precipitation, and that there are almost no differences in the temperature variability and trend.

COOP and USCRN

The detection and attribution of climate signals often rely upon long, historically rich records. In the United States, the Cooperative Observer Program (COOP) has collected many decades of observations for thousands of stations, going as far back as the late 1800s. While the COOP network has become the backbone of the U.S. climatology dataset, non-climatic factors have introduced systematic biases into the data, which require homogenization corrections before the data can be included in climatic assessments. Such factors include modernization of equipment, time-of-observation differences, changes in observing practices, and station moves over time. The subset of COOP stations with long observational records is known as the US Historical Climatology Network (USHCN), which is the default dataset for reporting on temperature changes in the USA.

Recognizing these challenges, the United States Climate Reference Network (USCRN) was initiated in 2000. Fifteen years after its inception, 132 stations have been installed across the United States with sub-hourly observations of numerous weather elements using state-of-the-art instrumentation calibrated to traceable standards. For high data quality, the temperature and precipitation sensors are well shielded, and for continuity the stations have three independent sensors, so that hardly any data loss is incurred. Because of these advances, no homogenization correction is necessary.
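As an aside on the triple redundancy, here is a hedged sketch of how three readings could be combined so that one failing sensor does not cause data loss. The actual USCRN processing chain has its own quality-control rules; the function and thresholds below are purely illustrative.

```python
from statistics import median

def combine_triple(readings, max_spread=0.3):
    """Combine up to three redundant temperature readings (°C). Illustrative only:
    take the median when all three agree within `max_spread`, otherwise fall back
    to the mean of the closest pair; return None if fewer than two sensors report."""
    valid = [r for r in readings if r is not None]
    if len(valid) < 2:
        return None
    if len(valid) == 3 and max(valid) - min(valid) <= max_spread:
        return median(valid)
    # Mean of the two readings that agree best with each other.
    pairs = [(abs(a - b), (a + b) / 2) for i, a in enumerate(valid) for b in valid[i + 1:]]
    return min(pairs)[1]

print(combine_triple([21.42, 21.44, 21.47]))   # -> 21.44
print(combine_triple([21.42, None, 21.47]))    # -> ~21.445
```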

Comparison

The purpose of this study is to compare observations of temperature and precipitation from closely spaced members of the USCRN and COOP networks. While the paired stations are close to each other, they are not adjacent. Determining the variations in data between the networks allows scientists to develop an improved understanding of the quality of weather and climate data, particularly as the period of overlap between the two networks lengthens.

To ensure observational differences are the result of network discrepancies, comparisons were only evaluated for station pairs located within 500 meters. The twelve station pairs chosen were reasonably dispersed across the lower 48 states of the US. Images of the instruments used in both networks are provided in Figure 1.
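A simple way to select such pairs is to compute the great-circle distance between every COOP and USCRN station and keep the pairs closer than 500 meters; the station names and coordinates below are made up for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points given in degrees."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical station coordinates (degrees): name -> (lat, lon)
coop  = {"COOP_A": (35.0012, -97.0005), "COOP_B": (40.10, -105.30)}
uscrn = {"CRN_1": (35.0010, -97.0010), "CRN_2": (41.00, -100.00)}

pairs = [(c, u, round(haversine_m(*coop[c], *uscrn[u])))
         for c in coop for u in uscrn
         if haversine_m(*coop[c], *uscrn[u]) <= 500]
print(pairs)  # only the COOP_A / CRN_1 pair is within 500 m
```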

The USCRN stations all have the same instrumentation: well-shielded rain gauges and mechanically ventilated temperature sensors. The COOP stations use two types of thermometers: modern automatic electrical sensors known as maximum-minimum temperature sensors (MMTS) and old-fashioned normal thermometers, which now have to be called liquid-in-glass (LiG) thermometers. Both COOP sensor types are naturally ventilated.

An important measurement problem for rain gauges is undercatchment: due to turbulence around the instrument, not all droplets land in the mouth. This is especially important in the case of high winds and for snow, and it can be reduced by wind shields. The COOP rain gauges are unshielded, however, and have been known to underestimate precipitation in windy conditions. COOP gauges also include a funnel, which can be removed before snowfall events. The funnel reduces evaporation losses on hot days, but can also get clogged by snow.

Hourly temperature data from USCRN were averaged into 24-hour periods to match daily COOP measurements at the designated observation times, which vary by station. Precipitation data were aggregated into precipitation events and matched with the respective COOP events.
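As a hedged sketch of this temperature-matching step (the 7 a.m. observation hour and the data layout are assumptions for illustration, not the exact procedure of the paper), hourly values can be grouped into 24-hour windows ending at the COOP observation hour:

```python
import numpy as np
import pandas as pd

def daily_at_obs_time(hourly, obs_hour=7):
    """Aggregate hourly temperatures (°C) into 24-hour max/min values for windows
    ending at the local COOP observation hour (illustrative). Each window is
    labelled with the calendar date on which it starts."""
    shifted = hourly.copy()
    shifted.index = shifted.index - pd.Timedelta(hours=obs_hour)
    return shifted.resample("D").agg(["max", "min"])

# Hypothetical hourly series for three days with a simple daily cycle.
idx = pd.date_range("2014-06-01 00:00", periods=72, freq="h")
hourly = pd.Series(20 + 8 * np.sin(2 * np.pi * (idx.hour - 9) / 24), index=idx)

print(daily_at_obs_time(hourly).head())
```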

Observed differences and their reasons

Overall, the COOP sensors in naturally ventilated shields reported warmer daily maximum temperatures (+0.48°C) and cooler daily minimum temperatures (-0.36°C) than the USCRN sensors, which have better solar shielding and fans to ventilate the instrument. The magnitude of the temperature differences was on average larger for stations operating LiG systems than for those with the MMTS system. Part of the reduction in network biases with the MMTS system is likely due to its smaller-sized shielding, which requires less surface wind speed to be adequately ventilated.

While the overall mean differences were in line with side-by-side comparisons of ventilated and non-ventilated sensors, there was considerable variability in the differences from station to station (see Figure 2). While all COOP stations observed warmer maximum temperatures, not all saw cooler minimum temperatures. This may be explained by differing meteorological conditions (surface wind speed, cloudiness), local siting (heat sources and sinks), and sensor and human errors (poor calibration, varying observation time, reporting errors). While all of these are important to consider, only the meteorological conditions were examined further, by categorizing the temperature differences by wind speed. The range of network differences for maximum and minimum temperatures seemed to decrease with increasing wind speed, although more so for maximum temperature, as the sensor shielding becomes better ventilated when the wind picks up. Minimum temperatures are strongly driven by local radiative and siting characteristics. Under calm conditions one might expect radiative imbalances between naturally and mechanically aspirated shields or between the COOP sensor types (LiG vs MMTS). These, along with local vegetation and elevation differences, may help to explain the minimum temperature differences.


Figure 2. USCRN minus COOP average minimum (blue) and maximum (red) temperature differences for collocated station pairs. COOP stations monitoring temperature with LiG technology are denoted with asterisks.

For precipitation, the COOP stations reported slightly more precipitation overall (1.5%). As with temperature, this difference was not uniform across the station pairs. Comparing by season, COOP reported less precipitation than USCRN during the winter months and more precipitation in the summer months. The drier wintertime COOP observations are likely due to the lack of gauge shielding, but may also be affected by the added complexity of observing solid precipitation. An example is removing the gauge funnel before a snowfall event and then melting the snow to calculate the liquid equivalent of the snowfall.

The wetter COOP observations in the warmer months may be associated with seasonal changes in gauge biases. For instance, observation errors related to gauge evaporation and the wetting factor are more pronounced in warmer conditions. Because of its design, the USCRN rain gauge is more prone to wetting errors (some precipitation sticks to the wall and is thus not counted). In addition, USCRN does not use an evaporative suppressant to limit gauge evaporation during the summer, which is not an issue for the funnel-capped COOP gauge. The combination of elevated USCRN biases through a larger wetting factor and enhanced evaporation could explain the wetter COOP observations. Another reason could be the spatial variability of convective activity. During the summer months, daytime convection can trigger unorganized thundershowers on a scale small enough that precipitation is recorded at one station but not at the other. For example, in Gaylord, Michigan, the COOP observer reported 20.1 mm more than the USCRN gauge 133 meters away. Rain radar estimates showed nearby convection over the COOP station, but not over the USCRN station, so the COOP observation was valid.


Figure 3. Event (USCRN minus COOP) precipitation differences grouped by prevailing meteorological conditions during events observed at the USCRN station. (a) event mean temperature: warm (more than 5°C), near-freezing (between 0°C and 5°C), and freezing conditions (less than 0°C); (b) event mean surface wind speed: light (less than 1.5 m/s), moderate (between 1.5 m/s and 4.6 m/s), and strong (larger than 4.6 m/s); and (c) event precipitation rate: low (less than 1.5 mm/hr), moderate (between 1.5 mm/hr and 2.8 mm/hr), and intense (more than 2.8 mm/hr).

Investigating further, precipitation events were categorized by air temperature, wind speed, and precipitation intensity (Figure 3). Comparing by temperature, the results were consistent with the seasonal analysis, showing lower COOP values (higher USCRN) in freezing conditions and higher COOP values (lower USCRN) in near-freezing and warmer conditions. Stratifying by wind conditions is also consistent, indicating that the unshielded COOP gauges do not catch as much precipitation as they should, resulting in higher USCRN values. On the other hand, COOP reports much more precipitation in lighter wind conditions, due to the higher evaporation rate of the USCRN gauge. For precipitation intensity, USCRN observed less than COOP in all categories.
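A minimal sketch of this kind of stratification, using the wind-speed classes of Figure 3; the event table and its column names are invented for illustration.

```python
import pandas as pd

# Hypothetical per-event data: precipitation totals (mm) from both gauges and
# the mean surface wind speed (m/s) at the USCRN station during the event.
events = pd.DataFrame({
    "uscrn_mm": [5.2, 12.0, 0.8, 22.4, 3.1, 15.5],
    "coop_mm":  [5.0, 12.6, 0.7, 21.0, 3.3, 14.9],
    "wind_ms":  [0.9, 2.2, 5.1, 1.1, 3.0, 4.8],
})

events["diff_mm"] = events["uscrn_mm"] - events["coop_mm"]
events["wind_class"] = pd.cut(
    events["wind_ms"],
    bins=[0, 1.5, 4.6, float("inf")],
    labels=["light", "moderate", "strong"],
)

# Mean USCRN-minus-COOP difference per wind class, as in Figure 3b.
print(events.groupby("wind_class", observed=True)["diff_mm"].mean())
```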


Figure 4. National temperature anomalies for maximum (a) and minimum (b) temperature between homogenized COOP data from the United States Historical Climatology Network (USHCN) version 2.5 (red) and USCRN (blue).
Comparing the variability and trends between USCRN and the homogenized COOP data from USHCN, we see that they are very similar for both maximum and minimum national temperatures (Figure 4).
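For readers who want to repeat such a comparison with published national series, here is a minimal sketch with made-up numbers: compute anomalies over the common period and fit a least-squares trend to each network.

```python
import numpy as np

def anomalies(series):
    """Anomalies relative to the mean over the full (common) period."""
    return series - np.mean(series)

def trend_per_decade(years, series):
    """Least-squares linear trend in °C per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

# Hypothetical annual national mean maximum-temperature series (°C), 2005-2014.
years = np.arange(2005, 2015)
uscrn = np.array([23.1, 23.4, 23.9, 23.0, 23.2, 24.0, 23.7, 24.4, 23.8, 23.9])
ushcn = np.array([23.2, 23.5, 23.9, 23.1, 23.3, 24.1, 23.8, 24.5, 23.9, 24.0])

print("correlation of anomalies:",
      np.corrcoef(anomalies(uscrn), anomalies(ushcn))[0, 1].round(3))
print("trend USCRN:", round(trend_per_decade(years, uscrn), 2), "°C/decade")
print("trend USHCN:", round(trend_per_decade(years, ushcn), 2), "°C/decade")
```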

Conclusions

This study compared two observing networks that will be used in future climate and weather studies. Using very different approaches in measurement technologies, shielding, and operational procedures, the two networks provided contrasting perspectives of daily maximum and minimum temperatures and precipitation.

The temperature differences between the stations of a local pair were partially attributed to local factors including siting (station exposure), ground cover, and geographical aspects (not fully explored in this study). These additional factors are thought to accentuate or reduce the anticipated radiative imbalances between the naturally and mechanically aspirated systems, which may also have resulted in seasonal trends. Additional analysis with more station pairs may be useful in evaluating the relative contribution of each local factor noted.

For precipitation, the network differences also varied with the seasonality of the respective gauge biases. Stratifying by temperature, wind speed, and precipitation intensity revealed these biases in more detail. COOP gauges recorded more precipitation in warmer conditions with light winds, where local summertime convection and evaporation in the USCRN gauges may be factors. On the other hand, COOP recorded less precipitation in colder, windier conditions, possibly due to observing errors and the lack of shielding, respectively.

It should be noted that all observing systems have observational challenges and advantages. The COOP network has many decades of observations from thousands of stations, but it lacks consistency in instrumentation type and observation time, in addition to its instrumentation biases. USCRN is very consistent in time and by sensor type, but as a new network it has much shorter station records and sparsely located stations. While observational differences between these two separate networks are to be expected, it may be possible to leverage the observational advantages of both. The use of USCRN as a reference network (consistency check) for COOP, along with more parallel measurements, may prove particularly useful in daily homogenization efforts, in addition to improving our understanding of weather and climate over time.




* Jared Rennie currently works at the Cooperative Institute for Climate and Satellites – North Carolina (CICS-NC), housed within the National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Information (NCEI), formerly known as the National Climatic Data Center (NCDC). He received his master's and bachelor's degrees in Meteorology from Plymouth State University in New Hampshire, USA, and currently works on maintaining and analyzing global land surface datasets, including the Global Historical Climatology Network (GHCN) and the International Surface Temperature Initiative’s (ISTI) Databank.

Further reading

Ronald D. Leeper, Jared Rennie, and Michael A. Palecki, 2015: Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison. Journal of Atmospheric and Oceanic Technology, 32, pp. 703–721, doi: 10.1175/JTECH-D-14-00172.1.

The informative homepage of the U.S. Climate Reference Network gives a nice overview.

A database with parallel climate measurements, which we are building to study the influence of instrumental changes on the probability distributions (extreme weather and weather variability changes).

The post, A database with daily climate data for more reliable studies of changes in extreme weather, provides a bit more background on this project.

Homogenization of monthly and annual data from surface stations. A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.

Previously I already had a look at trend differences between USCRN and USHCN: Is the US historical network temperature trend too strong?