Thursday, 11 February 2016

Early global warming

How much did the world warm during the transition to Stevenson screens around 1900?


Stevenson screen in Poland.

The main global temperature datasets show little or no warming in the land surface temperature and the sea surface temperature for the period between 1850 and 1920. I am wondering whether this is right or whether we do not correct the temperatures enough for the warm bias of screens that were used before the Stevenson screen was introduced. This transition mostly happened in this period.

This is going to be a long story, but it is worth it. We start with the current estimates of warming in this period. There is not much data on how large the artificial cooling due to the introduction of Stevenson screens is, so we need to understand why thermometers in Stevenson screens record lower temperatures than their predecessors to estimate how much warming this transition may have hidden. Then we compare this to the corrections NOAA makes for the introduction of the Stevenson screen. Other changes in the climate system also suggest there was warming in this period. It is naturally interesting to speculate about what this stronger early warming may mean for the causes of global warming.

No global warming in main datasets

The figure below with the temperature estimates of the four main groups shows no warming of the land temperature between 1850 and 1920. Only Berkeley and CRUTEM start in 1850; the other two start later.

If you look at the land temperatures plotted by Berkeley Earth themselves, there is actually a hint of warming. The composite figure below shows all four temperature estimates for their common area for the best comparison, while the Berkeley Earth figure is interpolated over the entire world and thus sees more of the Arctic warming, which was strong in this period, as it has again been in recent times. Thus there was likely some warming in this period, mainly due to the warming Arctic.


The temperature changes of the land according to the last IPCC report. I added the box to mark the period of interest.

In the same period the sea surface temperature was even cooling a little according to HadSST3 shown below.


The sea surface temperature of the four main groups and night marine air temperature from the last IPCC report. I added the red box to mark the period of interest.

Also the large ensemble of climate model runs produced by the Coupled Model Intercomparison Project (CMIP5), colloquially called IPCC models, does not show much warming in our period of interest.


CMIP5 climate model ensemble (yellow lines) and its mean (red line) plotted together with several instrumental temperature estimates (black lines). Figure from Jones et al. (2013) with our box added to emphasize the period.

Transition to Stevenson screens

In early times, temperature observations were often made in unheated rooms or in window screens mounted outside poleward-facing windows of such rooms. These window screens protected the expensive thermometers against the weather and increasingly also against direct sunlight, but a lot of sun could still get onto the instrument, or the sun could heat the wall beneath the thermometer so that warm air would rise up past it.


A Wild screen (left) and a Stevenson screen in Basel, Switzerland.
When it was realised that these measurements had a bias, a period of much experimentation ensued. Scientists tried stands (free-standing vertical boards with a little roof, which often had to be rotated to avoid the sun around sunrise and sunset), shelters of various sizes that were open to the pole-facing side and at the bottom, screens of various sizes, sometimes in the shade of a wall, but mostly in gardens, and pagoda huts that could have been used for a tea party.

The more open a screen is, the better the ventilation, which likely motivated the earlier, more open designs, but openness also leads to radiation errors. In the end the Stevenson screen became the standard; it protects the instrument from radiation on all sides. It is made of white-painted wood and has a measurement chamber mounted on a wooden frame; it typically has a double-board roof and double-louvred walls on all sides. Initially it sometimes had no bottom, but later versions had slanted boards at the bottom.

The first version of the Stevenson screen was crafted in 1864 in the UK, the final version designed in 1884. It is thought that most countries switched to Stevenson screens before 1920, but some countries were later. For example, Switzerland made the transition from Wild screens to Stevenson screens in the 1960s. The Belgian station Uccle changed its half-open shelter to a Stevenson screen in 1983; the rest of Belgium changed in the 1920s.


Open shelter (at the front) and two Stevenson screens (in the back) at the main office of the Belgian weather service in Uccle.

Radiation error

The schematic below shows the main factors influencing the radiation error. Solar radiation makes the observed maximum temperatures too warm. This can be direct radiation or radiation scattered via clouds or the (snow-covered) ground. The sun can also heat the outside of a not perfectly white screen, which then warms the air flowing in. Similarly, the sun can heat the ground, which then radiates towards the thermometer and screen. However, the lack of radiation shielding also makes the minimum temperature too low, because the thermometer radiates infrared radiation into the cold sky. This error is largest on dry, cloudless nights and small when the sky radiates back to the thermometer, which happens when the sky is cloudy or the absolute humidity is high; both reduce the net infrared radiative cooling. The radiation error is largest when there is little ventilation, which in most cases means little wind. The direct radiation effects are smaller for smaller thermometers.


Schematic showing the various factors that can influence the radiation error of a temperature sensor.

From our understanding of the radiation error, we would thus expect the bias in the daytime maximum temperature to be large where the sun is strong, the wind is calm, and the soil is dry and heats up fast. The minimum temperature at night has the largest cooling bias when the sky is cloudless and the air is dry.

This means that we expect the radiation errors for the mean temperature to be largest in the tropics (strong sun and high humidity) and subtropics (sun, hot soil), while it is likely smallest in the mid and high latitudes (not much sun, low specific humidity), especially near the coast (wind). Continental climates are the question mark; they have dry soils and not much wind, but also not as much sun and low absolute humidity.
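A simplified steady-state energy balance for the sensor illustrates these dependencies. This is my back-of-the-envelope sketch, not a formula from the measurement literature:

\Delta T \approx \frac{\alpha S - \varepsilon L_{\mathrm{net}}}{h(u)}

Here \Delta T is the radiation error relative to the true air temperature, \alpha S the absorbed solar radiation, \varepsilon L_{\mathrm{net}} the net infrared loss to the sky, and h(u) a heat-transfer coefficient that increases with the ventilation speed u. Strong sun makes \Delta T positive (daytime warm bias), a dry cloudless night lets the infrared term dominate (night-time cold bias), and more wind increases h(u) and shrinks both errors.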

Parallel measurements

These theoretical expectations fit the limited number of temperature differences found in the literature; see the table below. For the mid-latitudes, David Parker (1994) found that the difference was less than 0.2°C, but his data mainly came from maritime climates in north-west Europe. Other differences found in the mid-latitudes are about 0.2°C (Kremsmünster, Austria; Adelaide, Australia; Basel, Switzerland). In the sub-tropics we have one parallel measurement showing a difference of 0.35°C, and the two tropical parallel measurements show a difference of about 0.4°C. We are missing information from continental climates.

Table with the temperature differences found for various climates and early screens 1).

Region                    | Screen                                       | Temperature difference
North-West Europe         | Various; Parker (1994)                       | < 0.2°C
Basel, Switzerland        | Wild screen; Auchmann & Brönnimann (2012)    | ~0 (0.25)°C 1)
Kremsmünster, Austria     | North-wall window screen; Böhm et al. (2010) | 0.2°C
Adelaide, South Australia | Glaisher stand; Nicholls et al. (1996)       | 0.2°C
Spain                     | French screen; Brunet et al. (2011)          | 0.35°C
Sri Lanka                 | Tropical screen; in Parker (1994)            | 0.37°C
India                     | Tropical screen; in Parker (1994)            | 0.42°C

Most of the parallel measurements we have are in north-west Europe and do not show much bias. However, theoretically we would not expect much of a radiation error there. The small number of estimates showing large biases come from tropical and sub-tropical climates and may well be representative of large parts of the globe.

Information on continental climates is missing, while they also make up a large part of the Earth. The bias could be high here because of calm winds and dry soils, but the sun is on average not as strong and the humidity low.
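To get a feeling for how such regional biases could scale up, here is a minimal sketch in Python. The land-area fractions per climate zone are purely hypothetical placeholders (the continental value is a guess, since we have no parallel data there); the point is the weighting, not the numbers.

# Screen bias (°C) per climate zone, roughly following the table above.
# The land-area fractions are HYPOTHETICAL placeholders, not data.
bias = {
    "maritime mid-latitude": 0.1,
    "continental mid-latitude": 0.2,  # assumption: no parallel data yet
    "sub-tropical": 0.35,
    "tropical": 0.4,
}
area_fraction = {
    "maritime mid-latitude": 0.15,  # placeholder weights that sum to 1
    "continental mid-latitude": 0.35,
    "sub-tropical": 0.25,
    "tropical": 0.25,
}
global_land_bias = sum(bias[z] * area_fraction[z] for z in bias)
print(f"Illustrative global land bias: {global_land_bias:.2f} degC")

With these made-up weights the illustrative land bias comes out near 0.3°C, which is the order of magnitude discussed further below.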

Besides the climatic susceptibility to radiation errors, the design of the screens used before the Stevenson screen could also be important. In the numbers in the table we do not see much influence of the design, but maybe we will once we have more data.

Global Historical Climate Network temperatures

The radiation error, and thus the introduction of Stevenson screens, affected the summer temperatures more than the winter temperatures. It is thus interesting that over the period 1881-1920 the winter trend is three times stronger than the summer trend in the Northern Hemisphere (GHCNv3): 1.2°C per century in winter against 0.4°C per century in summer; see the figure below 2).

Even without measurement errors, the trend in winter is expected to be larger than in summer, because the enhanced greenhouse effect affects winter temperatures more. In the CMIP5 climate model average the winter trend is about 1.5 times the summer trend 3), but not 3 times.
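For readers who want to reproduce this kind of seasonal trend comparison, here is a minimal sketch. The series are synthetic stand-ins; in the real analysis they would be the GHCNv3 Northern Hemisphere winter and summer land means.

import numpy as np

years = np.arange(1881, 1921)
rng = np.random.default_rng(0)
# Synthetic stand-ins for the GHCNv3 NH winter and summer land means.
winter = 0.012 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)
summer = 0.004 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

for name, series in (("winter", winter), ("summer", summer)):
    slope = np.polyfit(years, series, 1)[0]  # least-squares trend, degC/year
    print(f"{name}: {100 * slope:+.2f} degC per century")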


Temperature anomalies in winter and summer over land in NOAA's GHCNv3. The light lines are the data, the thick dashed lines the linear trend estimates.

The adjustments made by the pairwise homogenization algorithm of NOAA for the study period are small. The left panel of the figure below shows the original and adjusted temperature anomalies of GHCNv3. The right panel shows the difference, which reveals adjustments in the 1940s and around 1970. The official GHCN global average starts in 1880; Zeke Hausfather kindly provided me with his estimate starting in 1850. During our period of interest the adjustments amount to about 0.1°C, a large part of which falls before 1880.

These adjustments are smaller than the jump expected from the introduction of the Stevenson screens. However, they should also be smaller, because many stations will have started as Stevenson screens. It is not known how large this percentage is, but the adjustments seem small and come early.
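The size of the adjustments over a period can be read directly off the difference between the adjusted and raw series. A minimal sketch, assuming the annual global means are already loaded as arrays:

import numpy as np

def adjustment_over_period(raw, adjusted, years, start=1850, end=1920):
    """Mean adjustment (adjusted minus raw, degC) over a period."""
    raw, adjusted, years = map(np.asarray, (raw, adjusted, years))
    mask = (years >= start) & (years <= end)
    diff = adjusted[mask] - raw[mask]
    return diff.mean()

# usage, with annual global means already loaded:
# print(adjustment_over_period(raw, adjusted, years))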



Other climatic changes

So much for the temperature record. What do other datasets say about warming in our period?

Water freezing

Lake and river freeze and breakup times have been observed for a very long time, and lakes and rivers are warming at a surprisingly fast rate. They show a clear shortening of the freezing period between 1850 and 1920: the freezing started later and the ice break-up earlier. The figure below shows that this was already going on in 1845.


Time series of freeze and breakup dates from selected Northern Hemisphere lakes and rivers (1846 to 1995). Data were smoothed with a 10-year moving average. Figure 1 from Magnuson et al. (2000).

Magnuson has updated his dataset regularly: when you take the current dataset and average over all rivers and lakes that have data over our period, you get the clear signal shown below.
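For transparency, a minimal sketch of this kind of averaging, assuming a table with hypothetical column names (one row per water body and year):

import pandas as pd

# Hypothetical columns: 'lake', 'year', 'freeze_doy' (day of year).
df = pd.read_csv("ice_phenology.csv")  # placeholder file name

# Keep only water bodies observed throughout 1850-1920, so the
# average is not distorted by series entering and leaving.
period = df[(df.year >= 1850) & (df.year <= 1920)]
n_years = period.groupby("lake").year.nunique()
keep = n_years[n_years == 71].index  # 71 years: 1850-1920 inclusive
subset = period[period.lake.isin(keep)].copy()

# Anomaly per lake, then the all-lake average per year.
subset["anom"] = subset.freeze_doy - subset.groupby("lake").freeze_doy.transform("mean")
print(subset.groupby("year").anom.mean())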


The average change in the freezing date in days and the ice break-up date (flipped) is shown as red dots and smoothed as a red line. The smoothed series for individual lakes and rivers freezing or breaking up is shown in the background as light grey lines.

Glaciers

Most of the glaciers for which we have data from this period show reductions in their lengths, which signals clear warming. Oerlemans (2005) used this information for a temperature reconstruction, which is tricky because glaciers respond slowly and are also influenced by precipitation changes.


Temperature estimate of Oerlemans (2005) from glacier data. (My red boxes.)

Proxies

Temperature reconstructions from proxies show warming. For example, the NTREND dataset, based on tree-ring proxies from the Northern Hemisphere, plotted below by Tamino.


Temperature reconstruction of the non-tropical Northern Hemisphere.

Paleo Model Intercomparison project

While the CMIP5 climate model runs do not show much warming in our period, the runs for the last millennium of the PMIP3 project do show some warming, although the amount strongly depends on the exact period; see below. The difference between CMIP5 and PMIP3 is likely that at the beginning of the 19th century there was much volcanic activity, which cooled the ocean to below its equilibrium temperature, and it took some decades to return to equilibrium. CMIP5 starts in 1850, and modelers try to start their models in equilibrium.


Simulated Northern Hemisphere mean temperature anomalies from PMIP3 for the last millennium. The CCSM4 simulation is shown as annual values in light gray and 30-yr Gaussian smoothed in black. For comparison, various smoothed reconstructions (colored lines) are included, which come from a variety of proxies, including tree-ring width and density, boreholes, ice cores, speleothems, documentary evidence, and coral growth.

Sea surface temperature

Land surface warming is important for us, but does not change the global mean temperature that much. The Earth is a blue dot; 70% of our planet is ocean. Thus if we had a bias of 0.3°C in the station data over our period, that would translate into a bias in the global mean temperature of only about 0.1°C (0.3°C times the 30% land fraction). However, a larger warming of the land temperatures is difficult to explain if the sea surface was not also warming, and currently the data shows a slight cooling over our period. I have no expertise here, but wonder whether such a large land-ocean difference would be physically reasonable.

Thus maybe we have also overlooked a source of bias in the sea surface temperature. It was a period in which sailing ships were replaced by steamships, which was a large change. The sea surface temperature was measured by hauling up a bucket of water and measuring its temperature. During the measurement, the water would evaporate and cool. On a steamship there is more wind than on a sailing ship and thus maybe more evaporation. The shipping routes also changed.

I must mention that it is a small scandal how few scientists work on the sea surface temperature: about a dozen, most of them only part-time. Not only does the ocean cover 70% of the Earth, the sea surface temperature is also often used to drive atmospheric climate models and to study climate modes. The group is small, while the detection of trend biases in the sea surface temperature is much more difficult than in station data, because moving ships cannot be compared with each other the way neighboring stations can to detect unknown changes. The maritime climate data community deserves more support. There are more scientists working on climate impacts for wine; this is absurd.


A French (Montsouris) screen and two Stevenson screens in Spain. The introduction of the Stevenson screen happened fast in Spain and was hard to correct using statistical homogenization alone. Thus a modern replica of the original French screen was built for an experiment, which was part of the SCREEN project.

Causes of global warming

Let's speculate a bit more and assume that the sea surface temperature increase was also larger than currently thought. Then it would be interesting to study why the models show less warming. An obvious candidate would be aerosols, small particles in the air, which have also increased with the burning of fossil fuels. Maybe models overestimate how much they cool the climate.

The figure from the last IPCC report below shows the various forcings of the climate system. These estimates suggest that in climate models the cooling by aerosols and the warming by greenhouse gases are of similar magnitude until 1900. However, with less influence of aerosols, the warming would start earlier.

Stevens (2015) argues that we have overestimated the importance of aerosols. I do not find Stevens' arguments particularly convincing, but everyone in the field agrees that there are at least huge uncertainties. The figure gives the error bars at the right, and it is within the confidence interval that there is effectively nearly no net influence of aerosols (ochre bar at the right).

There is direct cooling by aerosols due to scattering of solar radiation, indicated in red as "Aer-Rad int." This is uncertain because we do not have good estimates of the amount and size of the aerosols. Even larger uncertainties lie in how aerosols influence the radiative properties of clouds, marked in ochre as "Aer-Cld int."

Some of the warming in our period was also due to there being fewer natural volcanic aerosols towards its end. Their influence on the climate is also uncertain, because of the lack of observations of the size of the eruptions and the spatial pattern of the aerosols.


Forcing estimates from the IPCC AR5 report.

The article mentioned in the beginning (Jones et al., 2013), which showed that the CMIP5 global climate model ensemble with all forcings produces little warming in our period, also gives results for model runs that only include greenhouse gases; these show a warming of about 0.2°C; see below. If we interpret this difference as the influence of aerosols (there is also a natural part), then aerosols would be responsible for 0.2°C of cooling in our period in the current model runs. At the limit of the confidence interval where aerosols have no net influence, an additional warming of 0.2°C could thus be explained by aerosols.


CMIP5 climate model ensemble (yellow lines) and its mean (red line) plotted together with several instrumental temperature estimates (black lines). Figure from Jones et al. (2013) with our box added to estimate the temperature increase.

Conclusion on early global warming

Several lines of evidence suggest that the Earth’s surface actually was warming during this period. No single line of evidence is compelling by itself, but the consilience of evidence at least makes a good case for further research, and especially for revisiting the warming bias of early instrumental observations.

To make a good case, one would have to make sure that all datasets cover the same regions and locations. Given the modest warming during this period, the analysis has to be done very carefully. It would also need an expert for each of the different measurement types to understand the uncertainties in their trends. Anyone interested in making a real, publishable study out of this, please contact me.


Austrian Hann screen (a large screen built close to a north-facing wall) and a Stevenson screen in Graz, Austria.

Collaboration on studying the bias

To study the transition to Stevenson screens, we are collecting data from parallel measurements of early instrumentation with Stevenson screens.

We have located the data for the first seven sources listed below.

Australia, Adelaide, Glaisher stand
Austria, Kremsmünster, North Wall
Austria, Hann screen in Vienna and Graz
Spain, SCREEN project, Montsouris (French) screen in Murcia and La Coruña
Switzerland, Wild screen in Basel and Zurich
Northern Ireland, North wall in Armagh
Norway, North wall


Most are historical datasets, but there are also two modern experiments with historical screens (Spain and Kremsmünster). Such experiments with replicas are something I hope will be done more in the future. It could also be an interesting project for an enthusiastic weather observer with an interest in history.

From the literature we know of a number of further parallel measurements all over the world; listed below. If you have contacts to people who may know where these datasets are, please let us know.

Belgium, Uccle, open screen
Denmark, Bovbjerg Fyr, Skjoldnæs, Keldsnor, Rudkøbing, Spodsbjerg Fyr, Gedser Fyr, North wall.
France, Paris, Montsouris (French) screen
Germany, Hohenpeissenberg, North wall
Germany, Berlin, Montsouris screen
Iceland, 8 stations, North wall
Ireland, a thermograph in a North wall screen in Valentia
Norway, Fredriksberg observatory, Glomfjord, Dombas, North wall
Samoa, tropical screen
South Africa, Window screen, French and Stevenson screens
Sweden, Karlstadt, Free standing shelter
Sweden, Stockholm Observatory
UK, Strathfield Turgiss, Lawson stand
UK, Greenwich, London, Glaisher stand
UK, Croydon, Glaisher stand
UK, London, Glaisher stand


To get a good estimate of the bias we need many parallel measurements, from as many early screens as possible and from many different climatic regions, especially continental, tropical and sub-tropical climates. Measurements made outside of Europe are lacking most and would be extremely valuable.

If you know of any further parallel measurements, please get in touch. It does not have to be a dataset; a literature reference is also a great hint and a starting point for a search. If your Twitter followers or Facebook friends may have parallel datasets, please share this post.



Related reading

Parallel Observations Science Team (POST) of the International Surface Temperature Initiative (ISTI).

The transition to automatic weather stations. We’d better study it now.

Why raw temperatures show too little global warming.

Changes in screen design leading to temperature trend biases.

Notes


1) The difference in Basel is nearly zero if you use the local way to compute the mean temperature from fixed hour measurements, but it is about 0.25°C if you use the maximum and minimum temperature, which is mostly used in climatology.

2) Note that GHCNv3 only homogenizes the annual means, that is, every month gets the same corrections. Thus the difference in trends between summer and winter shown in the figure is the same as in the raw data.

3) The winter trend is 1.5 times the summer trend in the mean temperature of the CMIP5 ensemble for the Northern Hemisphere (ocean and land). The factor of three we found for GHCN was for land only. Thus a more careful analysis may find somewhat different values.


References

Auchmann, R. and S. Brönnimann, 2012: A physics-based correction model for homogenizing sub-daily temperature series. Journal of Geophysical Research: Atmospheres, 117, art. no. D17119, doi: 10.1029/2012JD018067.

Stevens, B., 2015: Rethinking the Lower Bound on Aerosol Radiative Forcing. Journal of Climate, 28, pp. 4794–4819, doi: 10.1175/JCLI-D-14-00656.1.

Böhm, R., P.D. Jones, J. Hiebl, D. Frank, et al., 2010: The early instrumental warm-bias: a solution for long central European temperature series 1760–2007. Climatic Change, 101, pp. 41–67, doi: 10.1007/s10584-009-9649-4.

Brunet, M., J. Asin, J. Sigró, M. Bañón, F. García, E. Aguilar, J. Esteban Palenzuela, T.C. Peterson, P. Jones, 2011: The minimization of the screen bias from ancient Western Mediterranean air temperature records: an exploratory statistical analysis. International Journal of Climatology, 31, 1879–1895, doi: 10.1002/joc.2192.

Jones, G. S., P. A. Stott, and N. Christidis, 2013: Attribution of observed historical near-surface temperature variations to anthropogenic and natural causes using CMIP5 simulations. Journal of Geophysical Research: Atmospheres, 118, 4001–4024, doi: 10.1002/jgrd.50239.

Magnuson, John J., Dale M. Robertson, Barbara J. Benson, Randolf H. Wynne, David M. Livingstone, Tadashi Arai, Raymond A. Assel, Roger B. Barry, Virginia Card, Esko Kuusisto, Nick G. Granin, Terry D. Prowse, Kenton M. Stewart, and Valery S. Vuglinski, 2000: Historical trends in lake and river ice cover in the Northern Hemisphere. Science, 289, pp. 1743-1746, doi: 10.1126/science.289.5485.1743

Nicholls, N., R. Tapp, K. Burrows, and D. Richards, 1996: Historical thermometer exposures in Australia. International Journal of Climatology, 16, pp. 705-710, doi: 10.1002/(SICI)1097-0088(199606)16:6<705::AID-JOC30>3.0.CO;2-S.

Oerlemans, J., 2005: Extracting a Climate Signal from 169 Glacier Records. Science, 308, no. 5722, pp. 675-677, doi: 10.1126/science.1107046.

Parker, D.E., 1994: Effects of changing exposure of thermometers at land stations. International Journal of Climatology, 14, pp. 1–31, doi: 10.1002/joc.3370140102.

Photo at the top: a Stevenson screen at the amateur weather station near Czarny Dunajec, Poland. Photographer: Arnold Jakubczyk.
Photos of Wild screen and Stevenson screen in Basel by Paul Della Marta.
Photo of open shelter in Belgium by the Belgian weather service.
Photo of French screen in Spain courtesy of SCREEN project.
Photo of Hann screen and Stevenson screen in Graz courtesy of the University of Graz.

Friday, 5 February 2016

Malcolm Turnbull, how should Australia adapt to climate change without science?

Adapting to climate change needs information on local changes in the mean, weather variability and extremes. Observed changes in the means are not enough.



If you don't like what #climate science is telling you, just fire all the climate scientists
Miles Grant

The "conservative" government of Australia plans to gut its climate research and kill the groups doing climate research at Australia's main research institute, CSIRO. Australia's opposition leader rightly said the Prime Minister Malcolm Turnbull should "hang his head in shame".

The destruction is not for lack of quality of the research. CSIRO's new chief Larry Marshall sent an email to its employees stating:
"CSIRO pioneered climate research ... Our climate models are among the best in the world and our measurements honed those models to prove global climate change.
From this the strange conclusion is drawn:
That question has been answered, and the new question is what do we do about it, and how can we find solutions for the climate we will be living with?"
[UPDATE. Judith Curry agrees with this strange sentiment: "Now that the UN’s community of nations has accepted consensus climate science to drive international energy and carbon policy, what is the point of heavy government funding of climate research, particularly global ­climate modelling?"]

Just because we know climate change is real does not mean that we understand everything. Projecting increases in the global mean temperature is easy. Saying something about the changes in the hydrological cycle is much harder. We know how much the global mean precipitation will increase, because we can estimate the additional evaporation and what goes up must come down, but saying where and how it comes down is hard. These assessments naturally have their uncertainties, and it certainly pays to reduce them to make better political decisions.

Much more important than the uncertainties in the changes in the global means: for "solutions for the climate we will be living with" (adapting to climate change) we will need local predictions. That is a lot harder and very uncertain. Locally the changes can be very different from the global change. As Roger Pielke Jr. writes about storms on the US East Coast: "So those who argue for a simple relationship between increasing water content of the atmosphere and storm strength, data do not support such a claim over this multi-decadal period, in this region." (my emphasis)

Open flames and smoke in a rural Texas landscape.

Much more important than the uncertainties in the changes in the global means: for adaptation we need information on changes in weather variability and extremes, especially for a country like Australia, which knows very large variations due, for example, to El Niño.

One of the strategies of the mitigation skeptics is to pretend that adaptation is straightforward and cheap: when the sea level goes up 1 mm, just make the dikes 1 mm higher. However, sea dikes break during a spring tide combined with a strong storm. Thus we also need to understand the storms to know how much stronger the dikes need to be. They will break during a once-in-a-century storm, or at least during what used to be a once-in-a-century storm. Try estimating from observations during a changing climate whether the 100-year storms are getting worse.


"With climate change, we can’t drive by looking in the rear view mirror. We’re in a new normal."
Climate scientists Berrien Moore and Katharine Hayhoe


Was the flooding of New Orleans due to Hurricane Katrina a unique event or the "new normal"? During the flooding last year in South Carolina, the rain in some locations amounted to a 1,000-year event (in a given year there is a 1 in 1,000 chance of observing rainfall of that magnitude or more). Does South Carolina have to adapt, because this will happen more often, or will this remain an outlier? Parts of the United Kingdom were hit three times by 100-year rain events in the last few years. How often will they have to suffer this before we know, from waiting and seeing, that the weather has changed, that people will have to move and that the infrastructure needs to be more robust?


"There's no point putting in flood defenses that respond to mean climate change if you haven't thought of what a one-in-a-hundred-year event will look like in a warmer world... They don't want to know what the climate will be like, they want to know what the weather will be like in 20, 30, 50 years time."


The same goes, less visibly, for droughts. When your farm takes a hit due to a drought, do you build it up again when the rain comes back, or is your land no longer profitable? Do you want to decide this blindly, or do you prefer some scientific guidance? For planning crops and managing reservoirs during droughts, seasonal and decadal climate predictions reduce costs and hardship. For planning new reservoirs and desalination plants, long-term climate projections give guidance.

Meteorologists and climatologists are building seamless prediction systems: from short-term weather predictions and nowcasting using observations during severe weather, to long-term weather predictions to prepare for bad weather, to seasonal and decadal predictions for planning, and climate projections for adaptation. In many wealthy countries, governments are setting up national climate service centers to help their societies adapt. The World Meteorological Organization is building a Global Framework for Climate Services (GFCS) to coordinate such efforts and to help poorer countries understand the changes their regions will see. Meanwhile, Australia sticks its head in the sand.

We will need very good science, a very good understanding of the coming climatic changes, to adapt. The Australian government is destroying climatology at the moment people, communities and companies need it most to adapt to the climatic changes we have set in motion. This is about as stupid as the US states where civil servants are no longer allowed to talk about climate change, which means that those communities will suffer the consequences without being prepared for the changes.

The same is true for (nearly?) every impact of climate change. In the past we could use long-term observations to determine what kind of extremes to expect. Now, after all the delays in solving the problem, humanity is becoming more and more dependent on climate science and climate models, the very models the mitigation skeptics who campaign for more global warming claim not to trust.

If you do not know which climatic changes you need to adapt to, you need to adapt to everything. Preparing for the worst case scenario in every direction is very expensive.


Never attribute to maladaptation that which can be adequately explained by stupidity.


When Australia notices what a blunder it is making, it will easily take over a decade until Australia's climate research is back where it started. It takes years to understand a climate model or a dataset well and to start being productive. Science is a social profession, and only once you are proficient can you start building your network. Then you notice the kinds of expertise still missing in your freshly built-up institute. Unfortunately, like trust, scientific expertise is lost much faster than it is built up.






Related reading

CSIRO boss’s failed logic over climate science could waste billions in taxes by Andy Pitman, Director of the Centre for Climate System Science.

The CSIRO and farming in a changing climate

'Misleading, inaccurate and in breach of Paris': CSIRO scientist criticises cuts. Stefan Rahmstorf​: "Closing down climate research capacity at a time of rapid global warming is not just short-sighted, it borders on the insane."

The Sydney Morning Herald: Climate science to be gutted as CSIRO swings jobs axe

Australia's CSIRO dims the lights on climate and environment

Thomas Peterson chair of WMO Commission on Climate explains the need for climate research by example how to deal with a drought.


Top photo. Severe suburban flooding in New Orleans, USA. Aftermath of Hurricane Katrina. Photo by Mark Moran, NOAA Corps, NMAO/AOC (CC BY 2.0)
Second photo. Flames burn out of control at Possum Kingdom Lake near Pickwick, TX, on April 15, 2011. Photo by Texas Military Forces, available through a CC license.
Last photo. Flash flooding stalls traffic on I-45 in Houston on May 26, 2015. Photo by Bill Shirley, available through a CC license.

Sunday, 31 January 2016

The difference between Bernie Sanders and Hillary Clinton on climate change? I don't care



Just two days ago 350 Action published a comparison of the plans of the presidential candidates to combat climate change. 350 Action is the political arm of climate action group 350.org, which was founded by Bill McKibben. They tried to ask all candidates 70 questions. A summary of the differences between Sanders and Clinton can be found above. They clearly found that Bernie Sanders plans to do more.

For the non-Americans reading this blog, let me add that on the Republican side they had "more luck eliciting declarations of climate denial and defenses of the fossil fuel industry than any significant evolution on the issue." The US Senate voted this week on whether human activity significantly contributes to climate change. A weird thing in itself, the more so in 2016! Of the 54 Republican senators, just five accepted that statement. Relative to that extremism, the differences between the Democrats are small.

In December, Think Progress also made a comparison of the three Democratic candidates, and they similarly found Bernie Sanders to have more positions favored by the environmental movement. For the record, Martin O’Malley scored even better.

Both organizations note that Hillary Clinton only improved her climate change plans after some time of campaigning against Bernie Sanders.

Bernie Sanders clearly came out as the favorite presidential candidate during a “Climate Emergency Caucus” of the environmental group The Climate Mobilization, which fights for zero greenhouse gas emissions by 2025. Sanders won 69 percent of the votes at their mock Democratic caucus; Clinton, O’Malley and "uncommitted" each got about 10 percent.



If you care about climate change, Bernie Sanders is clearly your man. However, even if Hillary Clinton had had better plans, I would still go for Sanders, because you also have to be able to execute the plans, and there are other important issues apart from climate change.

Democracy

In my assessment the main problem in America is the excessive influence of money. Everywhere rich people unfortunately have more influence, but the way corporations and billionaires determine US policies destroys the democratic heart of America. This is to a large part possible because of unlimited campaign contributions, which lead to legal bribery.

The oligarchy is first of all deeply undemocratic. Corporations have different interests than the people. It is amazing what kind of obvious, highly popular policies cannot pass Congress. Renewable energy is enormously popular with the public, but does not get much political support. People who are not allowed to fly because they are on the terrorist watch list can buy an automatic weapon; the US Congress explicitly voted against a bill fixing this problem. In 2008 the population had to bail out the banks, because they are too big to fail, to avoid an even larger depression. That is the end of the market mechanism: the upsides are private and the downsides get socialized. That invites taking too much risk, yet the banks are now bigger than in 2008. Because companies have legally bribed so many politicians, politics is not able to fix these obvious problems.

The money makes rational debate impossible. The politicians cannot negotiate and compromise because they have to do what their donors want. That is why we get the childish exchanges we see in the climate "debate": a real debate is not possible. It would be better if the donors sat at the table themselves, like in medieval times when the local warlords were "advising" the king.


"If government is to play its role in creating a successful economy, we must restore comity, compromise, openness to evidence"
Ben Bernanke


The bribed politicians also have a huge influence on public and published opinion. When crazy things are said in the media, such as James Inhofe calling climate change a hoax and Cruz calling climate science dogma, these kinds of statements start to sound acceptable. Most people do not take the time to carefully review the evidence; people are social animals, we normally negotiate our opinions by interacting with others, and the opinion of leaders is very influential. Especially authoritarians are susceptible to picking up the opinions of their leaders. If only a weather presenter from Chico, California, would blog daily about all those obvious problems with climate science that the experts do not see, the situation in the USA would be completely different. Money in politics is an important reason for the American exceptionalism in the climate "debate".

Bernie Sanders sees money in politics as the main problem that needs fixing. For that reason alone, I would vote for him if I could. Without fixing this, it is nearly impossible to fix other problems. Without fixing this, solving climate change is like running a marathon with a 50kg sack of rice on your back. First the weight needs to be removed.

Money became so dominant in large part due to disastrous Supreme Court decisions holding that money is speech and that corporations are people. One way to fix that is a better Supreme Court; the next president will nominate one to three Supreme Court justices. Executive action can make the money streams more transparent, which would likely reduce them and make them less influential. The president can press for a constitutional amendment. (Simultaneously, the people can try to get an amendment via the states.)

Winning

In national polls Clinton has more support among likely Democratic primary voters, but at this stage national polls are not very informative. Just imagine someone calling you up to ask about something you normally do not think much about. Would you like to carpet bomb Agrabah? Polls are very different from elections and referendums. National polling results at the moment largely reflect name recognition. Only when an election comes up do people start paying more attention and talking to each other; only then do polls start to have value.

National polls are especially not very informative yet because Clinton has a much higher name recognition than Sanders.

This Monday there is the first caucus, in Iowa, and after that New Hampshire. In New Hampshire, Sanders is well ahead by now. In Iowa, Clinton and Sanders are too close to call and the polls do not agree with each other. The main problem is determining who is a likely voter, especially whether young people will show up. Young people overwhelmingly support Sanders. Normally they hardly show up, except in 2008, when they thought they could get real change. I would expect this to happen again; this time it is worth it. Some polling organizations even classify only people who voted in previous caucuses as likely voters, which completely excludes young people.

[UPDATE after Iowa. The result was basically a tie between Clinton and Sanders. Clinton had 0.4% more "votes". Interestingly, it was a tie for almost all subgroups (income, education), except for young people, who support Sanders more, and women, who support Clinton more. It was even a tie for people who voted Clinton in 2008.

You can see this as a win for Clinton because Iowa is quite white and Clinton does better among people of color; thus you could argue that Sanders should have won. I do not find this argument very convincing. There is nothing special about Clinton's policies when it comes to minorities compared to Sanders'; that is a policy tie, and it can easily change.

I find it more convincing to say that Sanders won. He came from nothing; she was the clear favorite at the beginning of the campaign. Clinton has a lot of name recognition, the support of other (local) politicians, more money (from large donors), and had several years to prepare herself. It now becomes harder to ignore him in the media, where he has not been covered much up to now. And when people get to know him, they like him and his policies. So I would say: in Iowa a tie, nationally Sanders won.]

(For the same reason, polling results for Trump are unreliable, because many of his supporters normally do not go to caucuses, and one can only guess whether they will go to the Iowa caucus this time. [UPDATE. While winning in the polls, Trump lost and almost came in third. Not good for his image.])

When Iowa and New Hampshire go to Sanders, the primaries start to get interesting. That is when the corporations will start fighting back and when people will start to inform themselves about Sanders. That is when we will learn how he handles stress and whether he will do a good job in the general presidential election.

I expect he will, but then I am biased, as a European. The published opinion will try to convince the public that Sanders' plans are impossible. For me it is hard to imagine they can pull such nonsense off; most of what Sanders wants is completely mainstream in Europe. No matter how right-wing a European party is, I cannot imagine it accepting that people die because they waited too long to go to the doctor. That sounds as if death panels are okay as long as the hands of the panelists are invisible. It is mainstream in Europe that college is not only a personal benefit but contributes to society and prosperity as well, and that everyone who has the skills and the drive should be able to go to university.

I am afraid that after the first primaries is also when Clinton will show her inner Republican even more. In the last two weeks she already started deceiving the electorate to attack political opponents. I have no problem with playing hard, but I do like politics to be about ideas.

Winning the primaries also depends on whether the voters believe you can win the general election. That is hard to judge, but the evidence at this moment does not support Clinton's claim that she is more electable. There is no strong case yet that Sanders is more electable either, but his numbers are going up as people get to know him, while Clinton's numbers are stable or going down.

In match-up polls between one Democratic and one Republican candidate, Sanders on average performs better. Clinton wins over Donald Trump by +2.7%, but Sanders wins by +5.3%, and the two most recent polls are even above 10%.

Clinton versus Cruz would be won by Cruz by +1.3%, although Clinton wins the most recent poll. Sanders versus Cruz would be won by Sanders by 3.3%, although Cruz wins the most recent ones marginally.

Like normal polls, these match-ups do not say much. It is very hard for people to imagine the real choice they would have to make, and they hardly know the candidates yet. Thus, rather than looking at the current polls, we have to try to understand the dynamics of the campaigns. The Daily Kos writes:
[Bernie Sanders] has the overwhelming support of independents, whereas Hillary has lukewarm support from them at best, giving him a huge general election advantage. He also has crossover appeal to Republicans, earning up to 25% of their support in his home state. Already, numerous Republicans for Bernie have been documented. But Bernie is also best positioned to win because he will bring new voters to the polls, who are then likely to vote Democrat—the young, the poor, and the disillusioned.
I would expect the real difference between a Republican and any Democrat to be large in the end. Now the Republican candidates can hide their ignorance or lack of empathy in an enormous field, in debates that cannot go into depth. In the main campaign there will be only two candidates on the debate stage; an ignorant Republican outsider will be destroyed there. And if a debate contrasts Cruz or Rubio with a human being, they will look even more extreme and even less sympathetic.

No sitting Republican senator has endorsed either Trump or Cruz. Their celebrity couple name is Crump (ht Stephen Colbert). The deeply conservative magazine the National Review devoted an entire issue to being Against Trump.



There are many decent Republicans who will be put in a tough spot when one of the currently leading candidates becomes the official Republican candidate. I expect that easily 20% will not vote for one of these radicals. In case Hillary Clinton is the Democratic candidate, they will mostly stay at home; in case Bernie Sanders is the candidate, a considerable part will vote for him.


“It’s like being shot or poisoned… what does it really matter?”
Sen. Lindsey Graham on Ted Cruz and Donald Trump


Climate change will become an ever larger liability for the Republicans. In this primary they cannot soften on the issue, but in the general election they will look completely out of touch with reality. Even people who do not care about climate change itself will have some doubts about giving such people the nuclear codes. And that in a year that quite likely again becomes a record warm year, the third record in a row.



In Vermont, Sanders got a decent number of votes from Republicans; they hate money in politics as much as Democrats do. Independents like Sanders more than Clinton. He is sympathetic and trustworthy, with a very consistent voting record. People are fed up with the establishment, which you can see in the popularity of completely incompetent outsiders in the Republican primary. Sanders, who does not take money from the establishment and runs for real change, can distance himself from that.

In a race against Clinton, Trump can claim to be his own man, while Clinton has to do what her donors want. Against Sanders that line fails: Sanders can equally claim to be independent, and he actually wants to stop the legal bribery; Trump does not.

In the past, a candidate in the middle had an advantage: they would take some voters from the other party, and the wings of their own party were forced to vote for them to prevent worse. Nowadays, however, with only 50 to 60% of the electorate actually voting, the most important job of a candidate is to get their supporters to actually vote. I would expect Sanders to be able to generate more enthusiasm than Clinton; he has more supporters and larger rallies. Both are helped by a radicalized Republican party that makes clear to Democrats that they need to vote.

Last month, I made this prediction.


I am reasonably confident Sanders will win; naturally this is not science, just my personal assessment. The Democrats also winning both chambers is a more daring prediction. On the positive side, many more Republican seats are up for election. Let's concentrate on the House, which is more difficult than the Senate. Charlie Cook and David Wasserman:
Today, the Cook Political Report counts just 33 [House] seats out of 435 as competitive, including 27 held by Republicans and six held by Democrats.
Still, to win the House, the Democrats "would need to win as much as 55 percent of the popular vote, according to the Cook Political Report's David Wasserman". A ten percent margin is large, but it has been done before.

Making this happen will depend on turnout, and thus on enthusiasm and on the hope of finally transferring power back from the corporations to the citizens. This will not be easy, but it is easier with Sanders.

If Congress does not change color, the climate "debate" suggests to me that reaching out, Clinton's strategy, does not help one bit. We have seen how well it worked for Obama. The only thing that helps is pressure from the electorate. Without sticks, the carrots do not work. Without disinfecting sunlight on the ugly spots and fear of being unseated, nothing good will happen in Washington. If Congressmen expect that their donors can no longer help them (as much) in the next election, they may feel freer to actually do their job.

The problem that remains is getting money out of the media. Getting money out of politics partially solves that problem, as the media get a large part of that money from political ads. I sometimes wonder whether ads are there to influence consumers or the media. Getting all money out of the media is tough, because the freedom of the press should not be endangered in the process. Any suggestions?



Related reading

Democracy is more important than climate change #WOLFPAC

National Review: Conservatives against Trump (http://www.nationalreview.com/article/430126/donald-trump-conservatives-oppose-nomination)

In 50-49 vote, US Senate says climate change not caused by humans

On Climate Questions, Only One Candidate Has All the Right Answers

Voter's Guide: How the Candidates Compare on Climate and Energy

Thursday, 21 January 2016

Ars Technica: Thorough, not thoroughly fabricated: The truth about global temperature data

How thermometer and satellite data is adjusted and why it must be done.
published on Ars Technica by Scott K. Johnson - Jan 21, 2016 4:30pm CET


“In June, NOAA employees altered temperature data to get politically correct results.”

At least, that's what Congressman Lamar Smith (R-Tex.) alleged in a Washington Post letter to the editor last November. The op-ed was part of Smith's months-long campaign against NOAA climate scientists. Specifically, Smith was unhappy after an update to NOAA’s global surface temperature dataset slightly increased the short-term warming trend since 1998. And being a man of action, Smith proceeded to give an anti-climate change stump speech at the Heartland Institute conference, request access to NOAA's data (which was already publicly available), and subpoena NOAA scientists for their e-mails.

Smith isn't the only politician who questions NOAA's results and integrity. During a recent hearing of the Senate Subcommittee on Space, Science, and Competitiveness, Senator Ted Cruz (R-Tex.) leveled similar accusations against the entire scientific endeavor of tracking Earth’s temperature.

“I would note if you systematically add, adjust the numbers upwards for more recent temperatures, wouldn’t that, by definition, produce a dataset that proves your global warming theory is correct? And the more you add, the more warming you can find, and you don’t have to actually bother looking at what the thermometer says, you just add whatever number you want.”

There are entire blogs dedicated to uncovering the conspiracy to alter the globe's temperature. The premise is as follows—through supposed “adjustments,” nefarious scientists manipulate raw temperature measurements to create (or at least inflate) the warming trend. People who subscribe to such theories argue that the raw data is the true measurement; they treat the term “adjusted” like a synonym for “fudged.”

Peter Thorne, a scientist at Maynooth University in Ireland who has worked with all sorts of global temperature datasets over his career, disagrees. “Find me a scientist who’s involved in making measurements who says the original measurements are perfect, as are. It doesn’t exist,” he told Ars. “It’s beyond a doubt that we have to—have to—do some analysis. We can’t just take the data as a given.”

Speaking of data, the latest datasets are in and 2015 is (as expected) officially the hottest year on record. It's the first year to hit 1°C above levels of the late 1800s. And to upend the inevitable backlash that news will receive (*spoiler alert*), using all the raw data without performing any analysis would actually produce the appearance of more warming since the start of records in the late 1800s.

We're just taking the temperature—how hard can it be?

So how do scientists build datasets that track the temperature of the entire globe? That story is defined by problems. On land, our data comes from weather stations, and there’s a reason they are called weather stations rather than climate stations. They were built, operated, and maintained only to monitor daily weather, not to track gradual trends over decades. Lots of changes that can muck up the long-term record, like moving the weather station or swapping out its instruments, were made without hesitation in the past. Such actions simply didn’t matter for weather measurements.

The impacts of those changes are mixed in with the climate signal you’re after. And knowing that, it’s hard to argue that you shouldn’t work to remove the non-climatic factors. In fact, removing these sorts of background influences is a common task in science. As an incredibly simple example, chemists subtract the mass of the dish when measuring out material. For a more complicated one, we can look at water levels in groundwater wells. Automatic measurements are frequently collected using a pressure sensor suspended below the water level. Because the sensor feels changes in atmospheric pressure as well as water level, a second device near the top of the well just measures atmospheric pressure so daily weather changes can be subtracted out.

If you don't make these sorts of adjustments, you’d simply be stuck using a record you know is wrong.

You can continue reading at Ars Technica. The article goes on to explain several more reasons for inhomogeneities in the temperature observations, how they are removed with statistical homogenization methods, how these methods have been validated, the uncertainties in the sea surface temperature and in satellite estimates of the tropospheric temperature and why these are so hard to get right. It finishes with Lamar Smith's harassment campaign against NOAA over a minimal update.

Enjoy and bookmark it; it is a very thorough and accessible overview.

Saturday, 16 January 2016

The transition to automatic weather stations. We’d better study it now.

This is a POST post.

The Parallel Observations Science Team (POST) is looking across the world for climate records that simultaneously measure temperature, precipitation and other climate variables with a conventional sensor (for example, a thermometer) and with modern automatic equipment. You may wonder why we take the painstaking effort of locating and studying these records. The answer is easy: the transition from manual to automated records has an effect on climate series and on the analyses we do with them.

In recent decades we have seen a major transition of the climate monitoring networks from conventional manual observations to automatic weather stations. It is recommended to compare the old and new instruments with side-by-side measurements, which we call parallel measurements, before the substitution takes effect. Climatologists have also set up many longer experimental parallel measurements. These tell us that in most cases the two instruments do not measure the same temperature or collect the same amount of precipitation. A different temperature is not only due to the change of the sensor itself: automatic weather stations also often use a different, much smaller screen to protect the sensor from the sun and the weather, and their introduction is often accompanied by a change in location and siting quality.

From studies of single temperature networks that made this transition, we know that it can cause large jumps; the observed temperatures at a station can go up or down by as much as 1°C. Thus this transition can potentially bias temperature trends considerably. We are now trying to build a global dataset of parallel measurements to quantify how much the transition to automatic weather stations influences the global mean temperature estimates used to study global warming.
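The core calculation for a single pair is simple. A minimal sketch, assuming two aligned series of daily (or monthly) means from the parallel period:

import numpy as np
from scipy import stats

def transition_bias(aws, conventional):
    """Mean AWS-minus-conventional difference (degC) and p-value."""
    diff = np.asarray(aws, float) - np.asarray(conventional, float)
    diff = diff[~np.isnan(diff)]
    t_stat, p_value = stats.ttest_1samp(diff, 0.0)
    return diff.mean(), p_value

# Note: daily differences are autocorrelated, so the p-value is
# too optimistic; monthly means are safer for the test.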

Temperature

This study is led by Enric Aguilar and the preliminary results below were presented at the Data Management Workshop in Saint Gallen, Switzerland last November. We are still in the process of building up our dataset. Up to now we have data from 10 countries: Argentina (9 pairs), Australia (13), Brazil (4), Israel (5), Kyrgyzstan (1), Peru (31), Slovenia (3), Spain (46), Sweden (8), USA (6); see map below.


Global map in which we only display the 10 countries for which we have data. The left map is for the maximum temperature (TX) and the right for the minimum temperature (TN). Blue dots mean that the automatic weather station (AWS) measures cooler temperatures than the conventional observation; red dots mean the AWS is warmer. The size indicates how large the difference is; open circles denote differences that are not statistically significant.

The impact of automation can be assessed better in the box plots below.


The biases of the individual pairs are shown as dots and summarized per country with box plots. For countries with only a few pairs the box plots should be taken with a grain of salt. Negative values mean that the automatic weather stations are cooler. We have data for Argentina (AR), Australia (AU), Brazil (BR), Spain (ES), Israel (IL), Kyrgyzstan (KG), Peru (PE), Sweden (SE), Slovenia (SI) and the USA (US). Panels show the maximum temperature (TX), minimum temperature (TN), mean temperature (TM) and diurnal temperature range (DTR, TX-TN).

On average there are no real biases in this dataset. However, if you remove Peru (PE), the differences in the mean temperature are either small or negative. That a single country is so influential shows that our dataset is currently too small.

To interpret the results we need to look at the main causes of the differences. Important reasons are that Stevenson screens can heat up in the sun on calm days, while automatic sensors are sometimes mechanically ventilated. The automatic sensors are, furthermore, typically smaller and thus less affected by direct radiation hitting them than thermometers. On the other hand, in the case of conventional observations the maintenance of the Stevenson screens (cleaning and painting) and the detection of other problems may be easier, because the screens have to be visited daily. There are concerns that plastic screens become greyer over time and then heat up more in the sun. Stevenson screens also have more thermal inertia: they smooth fast temperature fluctuations and will thus show lower maxima and higher minima.

The location also often changes with the installation of automatic weather stations. The United States was one of the early adopters. The US National Weather Service installed analogue semi-automatic equipment (MMTS) that did not allow for long cables between the sensor and the display inside a building. Furthermore, the technicians only had one day per station, and as a consequence many of the MMTS systems were badly sited. Nowadays technology has advanced a lot and made it easier to find good sites for weather stations. This may even be easier now than it used to be for manual observations, because modern communication is digital and, if necessary, uses radio, making distance much less of a concern. The instruments can be powered by batteries, solar panels or wind, which frees them from the electricity grid. Some instruments store years of data and need only batteries.

In the analysis we thus need to consider whether the automatic sensors are placed in Stevenson screens and whether the automatic weather station is at the same location. Where the screen and the location did not change (Israel and Slovenia), the temperature jumps are small. Whether the automatic weather station reduces radiation errors by mechanical ventilation is likely also important. Because of these different categories, the number of datasets needed to get a good global estimate becomes larger. Up to now, these factors seem to be more important than the climate.

Precipitation

For most of these countries we also have parallel measurements for precipitation. The figure below was made by Petr Stepanek, who leads this part of the study.


Box plots for the differences in monthly precipitation sums due to automation. Positive values mean that the manual observations record more precipitation. Countries are: Argentina (AG), Brazil (BR), the Czech Republic (CZ), Israel (IS), Kyrgyzstan (KG), Peru (PE), Sweden (SN), Spain (SP) and the USA (US). The width of the box plots corresponds to the size of the given dataset.

For most countries the automatic weather stations record less precipitation. This is mainly due to smaller amounts of snow during the winter. Observers often put a snow cross in the gauge in winter to make it harder for snow to blow out of it again, and they simply melt the gathered snow in a pot to measure the precipitation. Early automatic weather stations, by contrast, did not handle snow well, and sticky snow piling up in the gauge may not be noticed. These problems can be solved by heating the gauge, but unfortunately the heater can also increase the amount of precipitation that evaporates before it can be registered. Such problems are known, and more modern rain gauges use different designs and likely have a smaller bias again.

Database with parallel data

The above results are very preliminary, but we wanted to show the promise of a global dataset with parallel data for studying biases in the climate record due to changes in observing practices. To proceed we need more datasets and better information on how the measurements were performed to make this study more solid.

In future we also want to look more at how the variability around the mean changes. We expect that changes in monitoring practices have a strong influence on the tails of the distribution and thus on estimates of changes in extreme weather. Parallel data offer a unique opportunity to study this otherwise hard problem.
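As a hypothetical sketch of what looking beyond the mean could involve, one can difference matched quantiles of the two parallel series; the synthetic data below only illustrates the mechanics.

```python
import numpy as np

# Synthetic parallel series; suppose the AWS smooths less and thus
# records a somewhat wider distribution than the conventional sensor.
rng = np.random.default_rng(1)
conventional = 15.0 + 5.0 * rng.standard_normal(3650)
aws = 15.0 + 5.5 * rng.standard_normal(3650)

# Differences of matched quantiles reveal changes in the tails that a
# simple difference of means would miss.
for q in (0.05, 0.50, 0.95):
    dq = np.quantile(aws, q) - np.quantile(conventional, q)
    print(f"quantile {q:.2f}: AWS - conventional = {dq:+.2f} °C")
```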

Most of the current data comes from Europe and South America. If you know of any parallel datasets, especially from Africa or Asia, please let us know. Up to now, the main difficulty for this study has been finding the people who know where the data is. Fortunately, data policies do not seem to be a problem: parallel data is mostly seen as experimental data. In some cases we “only” got a few years of data from a longer dataset that would otherwise be seen as operational data.

We would like to publish the dataset after publishing our papers about it. Again, this does not seem to lead to larger problems. Sometimes people prefer to first publish an article themselves, which causes some delays, and sometimes we cannot publish the daily data itself, but “only” monthly averages and extreme value indices. This makes the results less transparent, but these summary values contain most of the information.

Knowledge of the observing practices is very important in the analysis. Thus everyone who contributes data is invited to help in the analysis of the data and co-author our first paper(s). Our studies are focused on global results, but we will also provide everyone with results for their own dataset to gain a better insight into their data.

Most climate scientists would agree that it is important to understand the impact of automation on our records. So does the World Meteorological Organization. In case it helps you to convince your boss: the Parallel Observations Science Team is part of the International Surface Temperature Initiative (ISTI). It is endorsed by the Task Team on Homogenization (TT-HOM) of the World Meteorological Organization (WMO).

We expect that this endorsement and our efforts to raise awareness about our goals and their importance will help us to locate and study parallel observations from other parts of the world, especially Africa and Asia. We also expect to be able to get more data from Europe; the regional association for Europe of the WMO has designated the transition to automatic weather stations as one of its priorities and is helping us to get access to more data. We want to have datasets from all over the world to be able to assess whether the station settings (sensors, screens, data quality, etc.) have an impact, but also to understand whether different climates produce different biases.

If you would like to collaborate or have information, please contact me.



Related reading

The ISTI has made a series of brochures on POST in English, Spanish, French and German. If anyone is able to make further translations, that would be highly appreciated.

Parallel Observations Science Team of the International Surface Temperature Initiative.

Irrigation and paint as reasons for a cooling bias

Temperature trend biases due to urbanization and siting quality changes

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island

Thursday, 7 January 2016

Interesting EGU sessions and conferences in 2016

Just a quick post to advertise some interesting (new) EGU sessions and conferences this year.

At EGU there will be four interesting sessions that fit the topic of this blog. The abstract deadline is already next Wednesday, the 13th of January, at 13:00 CET. The conference takes place in mid-April in Vienna, Austria.

Climate Data Homogenization and Climate Trend and Variability Assessment
The main session for all things homogenization.

Taking the temperature of Earth: Variability, trends and applications of observed surface temperature data across all domains of Earth's surface
On measuring temperatures: the surface itself (skin temperature), surface air over land, sea surface temperature and marine air temperature, with a large range of observational methods, including satellites.

Transition into the Anthropocene – causes of climate change in the 19th and 20th century
A session on climate change in the very challenging early instrumental period, where the variability of station observations has large uncertainties. This session is new, as far as I can see; but EGU is big, so I hope I did not miss it last year.

Historical Climatology
Even further back in time is the session on historical climatology where people mainly look at non-instrumental evidence of climatic changes and their importance for human society.

Also this year there will be an EMS conference. Like every second year, in 2016 it will be combined with the European Conference on Applied Climatology (ECAC) and thus has more climate goodies than average. This year it takes place in mid-September in Trieste, Italy.

The main session for fans of homogenization is: Climate monitoring; data rescue, management, quality and homogenization.

Fans of variability may like the session on Spatial Climatology.

A conference I really enjoyed the last two times I was there is the International Meeting on Statistical Climatology. Its audience is half statisticians and half climatologists, and everyone loves beautiful statistical and methodological questions. Great!! This year it will be held in June in Canmore, Alberta, Canada.

It also has a session on homogenization: Climate data homogenization and climate trends/variability assessment.

If I missed any interesting sessions or conferences do let us know in the comments (also if it is your own).




Descriptions

Climate Data Homogenization and Climate Trend and Variability Assessment
Convener: Xiaolan L. Wang
Co-Conveners: Enric Aguilar, Rob Roebeling, and Petr Stepanek

The accuracy and homogeneity of climate data are indispensable for many aspects of climate research. In particular, a realistic and reliable assessment of historical climate trends and variability is hardly possible without a long-term, homogeneous time series of climate data. Accurate and homogeneous climate data are also indispensable for the calculation of related statistics that are needed and used to define the state of climate and climate extremes. Unfortunately, many kinds of changes (such as instrument and/or observer changes, and changes in station location and environment, observing practices and procedure, etc.) that took place in the period of data record could cause non-climatic changes (artificial shifts) in the data time series. Such artificial shifts could have huge impacts on the results of climate analysis, especially those of climate trend analysis. Therefore, artificial changes shall be eliminated, to the extent possible, from the time series prior to its application, especially its application in climate trends assessment.

This session calls for contributions that are related to bias correction and homogenization of climate data, including bias correction and validation of various climate data from satellite observations and from GCM and RCM simulations, as well as quality control/assurance of observations of various variables in the Earth system. It also calls for contributions that use high quality, homogeneous climate data to assess climate trends and variability and to analyze climate extremes, including the use of bias-corrected GCM or RCM simulations in statistical downscaling. This session will include studies that inter-compare different techniques and/or propose new techniques/algorithms for bias-correction and homogenization of climate data, for assessing climate trends and variability and analysis of climate extremes (including all aspects of time series analysis), as well as studies that explore the applicability of techniques/algorithms to data of different temporal resolutions (annual, monthly, daily) and of different climate elements (temperature, precipitation, pressure, wind, etc) from different observing network characteristics/densities, including various satellite observing systems.



Transition into the Anthropocene – causes of climate change in the 19th and 20th century
Convener: Gabriele Hegerl
Co-Convener: Stefan Brönnimann

This session focuses on the long view of climate variability and change as available from long records, reconstructions, reanalysis efforts and modelling, and we welcome analysis of temperature, precipitation, extreme events, sea ice, and ocean. Contributions are welcome that evaluate changes from historical data on the scale of large regions to the globe, analyse particular unusual climatic events, estimate interdecadal climate variability and climate system properties from long records, attribute causes to early observed changes and model or data assimilate this period. We anticipate that bringing observational, modelling and analysis results together will improve understanding and prediction of the interplay of climate variability and change.



Taking the temperature of Earth: Variability, trends and applications of observed surface temperature data across all domains of Earth's surface
See also their homepage.
Convener: Darren Ghent
Co-Conveners: Nick Rayner, Stephan Matthiesen, Simon Hook, G.C. Hulley, Janette Bessembinder


Surface temperature (ST) is a critical variable for studying the energy and water balances of the Earth surface, and underpinning many aspects of climate research and services. The overarching motivation for this session is the need for better understanding of in-situ measurements and satellite observations to quantify ST. The term "surface temperature" encompasses several distinct temperatures that differently characterize even a single place and time on Earth’s surface, as well as encompassing different domains of Earth’s surface (surface air, sea, land, lakes and ice). Different surface temperatures play inter-connected yet distinct roles in the Earth’s surface system, and are observed with different complementary techniques.

The EarthTemp network was established in 2012 to stimulate new international collaboration in measuring and better understanding ST across all domains of the Earth’s surface including air, land, sea, lakes, ice. New and existing international projects and products have evolved from network collaboration (e.g. ESA Climate Change Initiative SST project, EUSTACE, FIDUCEO, International Surface Temperature Initiative, ESA GlobTemperature, HadISST, CRUTEM and HadCRUT). Knowledge gained during this EarthTemp session will be documented and published as part of the user requirements exercises for such projects and will thus benefit the wider community. A focus of this session is the use of STs for assessing variability and long-term trends in the Earth system. In addition there will be opportunity for users of surface temperature over any surface of Earth on all space and timescales to showcase their use of the data and their results, to learn from each other's practice and to communicate their needs for improvements to developers of surface temperature products. Suggested contributions can include, but are not limited to, topics like:

* The application of ST in climate science
* How to improve remote sensing of ST in different environments
* Challenges from changes of in-situ observing networks over time
* Current understanding of how different types of ST inter-relate
* Nature of errors and uncertainties in ST observations
* Mutual/integrated quality control between satellite and in-situ observing systems.
* What do users of surface temperature data require in practical applications?



Historical Climatology
Convener: Stefan Grab
Co-Conveners: Rudolf Brazdil, David Nash, Georgina Endfield


Historical Climatology has gained momentum and worldwide recognition over the last couple of decades, particularly in the light of rapid global climate and environmental change. It is now well recognized that in order to better project future changes and be prepared for those changes, one should look to, and learn from, the past. To this end, historical documentary sources, in many cases spanning back several hundred years and far beyond instrumental weather records, offer detailed descriptive (qualitative) accounts of past weather and climate. Such documentary sources typically include, amongst others: weather diaries, ship log books, missionary reports and letters, historical newspapers, chronicles, accounting and government documents etc. Such proxies have particular advantages in that they in most cases offer details on the specific timing and placement of an event. In addition, valuable insights may be gained on environmental and anthropogenic consequences of and responses to specific weather events and climate anomalies. Similarly, oral history records, based on people’s personal accounts and experiences of past weather, offer important insights on perceptions of climate change, and details on past and sometimes ‘forgotten’ weather events and their consequences.

This session welcomes all studies using documentary, historical instrumental and oral history based approaches to: produce historical climate chronologies (multi-decadal to centennial scale), gain insights into past climatic periods or specific weather events, detail environmental and human consequences to past climate and weather, share people’s experiences and perceptions of past climate, weather events and climate change, and reflect on lessons learnt (coping and adaptation) from past climate and weather events. Whilst welcoming contributions from all global regions, we particularly appeal for contributions from Asia and the Middle East.



Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Hermann Mächel, Victor Venema, Ingeborg Auer, Dan Hollis


Robust and reliable climatic studies, particularly those assessments dealing with climate variability and change, greatly depend on availability and accessibility to high-quality/high-resolution and long-term instrumental climate data. At present, a restricted availability and accessibility to long-term and high-quality climate records and datasets is still limiting our ability to better understand, detect, predict and respond to climate variability and change at lower spatial scales than global. In addition, the need for providing reliable, opportune and timely climate services deeply relies on the availability and accessibility to high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:

• Climate monitoring, including early warning systems and improvements in the quality of the observational meteorological networks

• More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records

• Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability

• Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables

• Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms (either absolute or relative approaches), using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables

• Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets



Spatial Climatology
Convener: Ole Einar Tveito
Co-Conveners: Mojca Dolinar, Christoph Frei


Gridded representation of past and future weather and climate with high spatial and temporal resolution is getting more and more important for assessing the variability of and impact of weather and climate on various environmental and social phenomena. They are also indispensable as validation and calibration input for climate models. This increased demand requires new efficient methods and approaches for estimating spatially distributed climate data as well as new efficient applications for managing and analyzing climatological and meteorological information at different temporal and spatial scales. This session addresses topics related to generation and application of gridded climate data with an emphasis on statistical methods for spatial analysis and spatial interpolation applied on observational data.

An important aspect in this respect is the creation and further use of reference climatologies. The new normals calculated for the latest period, 1981-2010, are now recommended as the reference for assessments of regional and local climatologies. For this period new observation types (e.g. satellite and radar data) are available, and contributions taking advantage of multiple data sources are encouraged.

Spatial analysis using e.g. GIS is a very strong tool for visualizing and disseminating climate information. Examples showing developments, application and products of such analysis for climate services are particularly welcome.

The session intends to bring together experts, scientists and other interested people analyzing spatio-temporal characteristics of climatological elements, including spatial interpolation and GIS modeling within meteorology, climatology and other related environmental sciences.



Climate data homogenization and climate trends/variability assessment
Convener: Xiaolan Wang, Lucie Vincent, Markus Donat and Lisa Alexander

The accuracy and homogeneity of climate data are indispensable for many aspects of climate research. In particular, a realistic and reliable assessment of historical climate trends and variability is hardly possible without a long-term, homogeneous time series of climate data. Accurate and homogeneous climate data are also indispensable for the calculation of related statistics that are needed and used to define the state of climate and climate extremes. Unfortunately, many kinds of changes (such as instrument and/or observer changes, and changes in station location and exposure, observing practices and procedure, etc.) that took place in the period of data record could cause non-climatic sudden changes (artificial shifts) in the data time series. Such artificial shifts could have huge impacts on the results of climate analysis, especially those of climate trend analysis. Therefore, artificial changes shall be eliminated, to the extent possible, from the time series prior to its application, especially its application in climate trends assessment.

This session calls for contributions that are related to bias correction and homogenization of climate data, including bias correction and validation of various climate data from satellite observations and from GCM and RCM simulations, as well as quality control/assurance of observations of various variables in the Earth system. It also calls for contributions that use high quality, homogeneous climate data to assess climate trends and variability and to analyze climate extremes, including the use of bias-corrected GCM or RCM simulations in statistical downscaling.