Thursday, 30 June 2016

The EU, refugees and migration



Brexit, Geert Wilders, Nigel Farage, Marine Le Pen and thousands of refugees drowning in the Mediterranean. We should talk about the EU, refugees and migration.

Europe

The European Union started as a peace project, as a collaboration built around two industries that were crucial for war: coal and steel. It is often still sold as a peace project. That is certainly one aspect, but I think it is oversold when people point to Europe's violent past. The EU surely helps. However, the frequency of international conflicts is also decreasing outside the EU. The benefits of war have decreased: most capital nowadays resides in people and organisations and cannot easily be plundered. The costs of war have increased with nuclear and chemical weapons. The spread of democracy and the absence of war itself make war less likely.

If leaders and countries acted rationally, the EU might no longer be necessary for peace in Europe. Marine Le Pen in France, Geert Wilders in The Netherlands, Nigel Farage in England and Donald Trump in the USA make clear that we should not count on every leader making a cost-benefit analysis. The wars in Yugoslavia and Ukraine also warn us that war is possible in Europe. Peace is one of the benefits of the EU and one reason why right-wing extremists do not like it.



The main benefit of the EU is that it allows the citizens of Europe to collaborate and stand up against economic powers. Environmental problems belong to this category: powerful companies pollute to make more money, while people with less power have to deal with the consequences. This power abuse increases inequality. On a national level, it is the role of the government to solve such problems. The polluter can, however, threaten to move to another country. International collaboration on environmental standards makes such threats less credible and makes it easier for governments to serve their populations.

Many environmental problems are naturally also international, for instance the pollution of large rivers and acid rain, and are natural candidates for collaboration to avoid international conflicts. A study just out this week tried to estimate the impact of European political measures to reduce air pollutants. It found that:
The reduction in PM2.5 concentrations [very small particles in the air] is calculated to have prevented 80 000 (37 000–116 000, at 95% confidence intervals) premature deaths annually across the European Union, resulting in a perceived financial benefit to society of US$232 billion annually (1.4% of 2010 EU GDP).
Those 80 thousand bodies and 1.4% of GDP are for small particles alone. Add to this all the other pollutants, workers' rights and consumer protection. National laws would on average be less strict, because firms would have a stronger negotiating position at the national level. More people would die, more economic damage would be done. Socializing losses is a money-making machine: avoiding investments in cleaner technology or selling more cheap, lower-quality products increases private gains at our cost.

People need to collaborate to reduce tax competition between countries. The rich, and especially their money, are more mobile, and they can threaten democracies with paying their taxes elsewhere if the rates for the rich and large companies do not go down. That means that the lower 99%, you and me, have to pay more, which is why incomes did not increase for most groups in the last decades, while most of the new wealth went to the super-rich. The EU should coordinate taxes much more and especially get rid of national tax tricks that allow foreigners to pay less tax than local people. The EU does this too little, but without the EU we could stop dreaming of achieving more justice here.

What amazes me most about the Leave campaign in the UK is that they managed to portray the EU as the establishment and themselves as the defenders of the common man. In reality both campaigns had their elites behind them, and leaving the EU would make the UK establishment more powerful. Rupert Murdoch supported the Leave campaign because politicians in London do what he tells them to do. Last time I looked, Rupert Murdoch was a member of the establishment and not a working man trying to get by.


Rupert Murdoch supported the Leave campaign because politicians in London do what he tells them to do


Yes, the EU also does terrible things. A democratic institution will not always follow your preference; if you want that, try becoming a dictator. The establishment naturally also sees that the EU is its main opponent and lobbies to make the EU do what it would like. This is facilitated by the fact that the media do not report much on the EU, so much can be done behind the backs of the people, which means that politicians do not have to fear losing their jobs for doing the bidding of the establishment. We should pay more attention and organize to make sure that our lobbies are also in Brussels.

Well-known examples of terrible neo-liberal EU projects are the trade agreements TTIP and CETA, and the Euro. Europe is not a banana republic; our courts do their job well and there is no need for special private TTIP courts that would let corporations threaten governments that want to improve living conditions. It is an assault on our democracies. If the EU pushes through TTIP or CETA, I will stop being reasonable and from then on I will be anti-EU.

The Euro is a mess; it has many, many problems. We should slowly move out of it.

Refugees

The increase in the number of refugees is not just an EU problem. Improved information and travel possibilities mean that more people are travelling further to find a safe home.

It is not just an EU problem, but currently the consequences of the Bush-Blair war against Iraq are producing large refugee streams into the EU and tensions within the union. A closer EU foreign policy could have prevented this mess. (The misinformation campaign for the Iraq war is comparable to the Brexit campaign; in both cases the Anglo-American population did not do their due diligence.)

Most countries on Earth have signed the Geneva Convention relating to the Status of Refugees, which obliges them to act humanely and accept refugees. The duty to protect refugees is international law.

While most people have an empathetic side that wants to help people in need, we also have a tribal side, and many do not like too many people from other groups coming to stay with us. If you put yourself in the position of a Native American, that instinct can make sense. In retrospect it was a monumental mistake to let Columbus and Co. get away alive.

In response to the growing numbers, the refugee convention has been hollowed out by making it very difficult to enter a country and ask for asylum, as well as by the principle of safe third countries, which allows sending refugees away without investigating their case. As a consequence, many thousands of people drown in the Mediterranean trying to reach Europe, and Australia has set up a disgusting system of lawless concentration camps.

I would propose to add two principles to the convention:
1. That when a country helps refugees in the region they come from, they are no longer obliged to house them in their own country. But only then.
2. That refugees can also send a request for asylum by mail, so that people no longer die trying to cross the border.


The current refugee crisis started with insufficient aid for the refugee camps along the Syrian border, where people were literally going hungry. This new legal principle would make such cases of neglect less likely, because neglect would have consequences. If we do our part to help locally, it would be no problem that asylum can be requested by mail, because such requests could then be rejected.

Helping refugees locally is also better for them. We may be rich, but a refugee who is used to a normal culture will have a hard time getting accustomed to the cold and impersonal European societies, on top of all the other culture shocks. Even without considering cultural differences, staying in the region makes it easier to maintain social ties and to go back home when the problems are over.


Staying in the region makes it easier to maintain social ties and to go back home


We will not be able to help everyone locally, especially in the case of small groups or individuals. A gay man who is threatened in Russia is helped most easily by granting him asylum in Europe.

These two new principles would strengthen the right of asylum, help refugees better and reduce the number of refugees coming to Europe. Racists will not like this solution, but for the majority who experience a mix of empathy and concern, it should be a good compromise. For people in favour of a multicultural society it is also attractive, because it helps refugees better (and there will be enough diversity left).

Immigration

Refugees and migrants are often seen as one category, but they are fundamentally different. A refugee needs our help. Allowing the partner and children of a refugee to live with them is migration, but seems a no-brainer as well for anyone with some empathy.

Economic migration is a different case. Even when it is good for a country, it may not be good for all segments of society. One reason we have democracy is to make sure that all interests are represented.

For the elite, migration is mostly nice. By definition the migrants see migration as a benefit. And the elite has other options, so if they migrate it is normally because they see clear benefits. Even before EU citizens could work everywhere in Europe, it was normally possible for scientists to work elsewhere because of the importance of migration for science. Science is highly specialized; there is no labour market in Germany for my specialization.

Many other professions are similarly specialised, and professionals with high salaries were normally allowed to work in another country. A sufficiently wealthy pensioner, too, will be happy to be allowed to migrate to another (warmer) country. Living in another country for a few years can be very enriching.

If you are less well off, the possibility of migration of cheap labour can be used by firms to reduce your bargaining power, and you may end up with an even lower salary or without a job. The region the migrant comes from loses a valuable labourer. Migration can thus be used to increase inequality even more. Even for scientists from wealthy countries, migration weakens the negotiating position and thus worsens labour conditions. But it is good for science and for scientists from poorer countries.


Salaries are determined by bargaining power, not by productivity, which is undefined for individuals in nonlinear production processes


Especially within the EU, I would be in favour of allowing everyone to work where they would like to. Freedom should be our default. In return for this benefit, the elite should compensate the disadvantages for the rest of society by improving the negotiation position of workers. One may think of migration restrictions for some professions, stronger unions, redistribution of wealth, programs for retraining, job guarantees for the unemployed and humane treatment of unemployed people.

A new European Union

There are many benefits of collaboration. The people of Europe need to collaborate to have the power to stand up against ever larger economic powers. There is no reason why this collaboration needs to be so intensive that the EU would become a nation itself. People's interests and customs differ and power is best exercised close to the people. We should only collaborate on large scales where this has a clear benefit.

The Euro makes inequality worse and creates a lot of negative political energy in the EU that blocks other positive changes. Let's get rid of it. Now that the worst of the financial crisis is over, this is a good time to start a slow transition.

If we change the refugee convention to alternatively help refugees in the region they come from, we can help them better than now, fewer will die on their way to Europe, and another problem that creates bad blood in Europe would be gone.

Because refugees and migration are often seen as one problem, a reduction in the number of refugees may also reduce problems people have with migration. Still we should not be blind to the large difference in interests between the elites and the rest of society when it comes to migration. A compromise between the groups may be improving the negotiation position of workers.



Overarching it all: you are not more pro-Europe the more you would like the EU to replace the old nations. When Juncker sees Brexit as a great opportunity to build a European nation and force the fast introduction of the Euro in every country, he is pro-Europe. When I reject that uncreative vision of Europe and see the EU as a way for the people of Europe to collaborate, I am also pro-Europe. Just like people argue nationally about what the role of government is, we should have an open discussion in the EU about where collaboration is fruitful and possible given our differences.

For me, the most valuable innovation of the EU is actually that it is neither a nation nor a mere alliance, but a new form of collaboration in between. That is the reason why nations all over the world are building similar regional collaborations. They would not if the aim were the end of their nations and merely the building of a larger, more anonymous nation. Europe should be proud of its queer identity.



Related reading

Brexit is great news for the rest of the EU. Britain has not yet come to terms with its own irrelevance, and would only have got in the way of plans to create a more democratic pooling of sovereignty.


* Top photo: EU Grunge Flag, Attribution 2.0 Generic (CC BY 2.0).

Photo Auschwitz: Arbeit Macht Frei, Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Map of Regional Organizations: CC BY-SA 3.0.

Thursday, 23 June 2016

Four wonderful climate science podcasts you need to know

[UPDATE: How is this for a Buzzfeed headline?]

For some years I was a regular listener of EconTalk, where the economist Russ Roberts would interview a colleague, typically about a recent book or article. Roberts is a staunch libertarian and the interviews with fellow libertarians are worse than listening in on drunk men agreeing with each other in a bar at 4am, but many other interviews with economists who went out in the world and had studied reality were wonderful. I learned a lot about the world view of this special tribe and something about the limits on how we organize society.

Podcasts are a nice way to learn. The conversational format makes otherwise perhaps boring topics engaging, and you can listen while commuting, walking or doing household chores.

I tried to get a few people enthusiastic about doing such a podcast for climate science and in the end even considered doing it myself. But there is no need any more. Suddenly a wealth of really good climate science podcasts has sprung up.

Warm Regards

The newest podcast is by Eric Holthaus, journalist at Slate. It is called Warm Regards. He has as co-hosts climate scientist and Ice Age ecologist Jacquelyn Gill and New York Times science blogger Andrew Revkin. They are so new, in fact, that I have not been able to listen to their first episode yet: "How Do We Talk About Climate Change?" In a few days they will also be on iTunes.

[UPDATE. While preparing my dinner, I listened to the podcast. Really enjoyed it. Good voices. Good sound. Professionally made. They introduced themselves, what they work on and why they care about climate change. The main topic was science communication, and they emphasised that a good relationship with the listener is much more important than details that are quickly forgotten. Revkin liked talking to mitigation sceptics; the other two take the more productive route of trying to talk as much as possible to people who are willing to listen and consider the arguments. The app Block Together is a good way to keep lines of communication open with decent people on Twitter by very efficiently blocking harassing accounts. I also use it and can highly recommend it.

Personally I would add that we should not overestimate the importance of science communication. Outside of Anglo-America scientists are much less active in communicating climate change, but we have nearly no problems with mitigation-sceptical movements. The difference is a working political system and a better press. Talking about climate and science is what I do best, but if you have the option, it is probably better to invest your time in getting money out of US politics and building up a free and democratic press, for example by supporting member-supported media channels.]

Climate History Podcast

The Climate History Podcast is hosted by Dr. Dagomar Degroot, the founder of HistoricalClimatology.com and co-founder of the Climate History Network. Even if society has changed a lot, we can learn from how humans have responded to the small climatic changes of the past. This initiative is just three podcasts old, and the one I listened to, on the Little Ice Age, is really interesting. The first three titles are:

1. Climate Change and Crisis: Lessons from the Past
2. The History of Climate Change with Professor Sam White
3. Archaeology in the Arctic: Reconstructing the Consequences of Climate Change in the Far North

You can listen to it on iTunes or download the episodes at Podbay.

Mostly Weather

The UK Met Office produces the podcast Mostly Weather. Officially it is about weather, but most topics are actually dual-use science, important for both weather and climate. (Mitigation sceptics often do not seem to know that meteorology is bigger than climatology.) It is made with love by climate scientists Doug McNeall, Niall Robinson and Claire Witham.

It is aimed at a general audience, but even a scientist can still learn something. I learned from their first episodes on the history of weather forecasting that forecasting started over the ocean: there it is most important and easier to do. Other episodes were on the elements (clouds, snow, lightning) and on the structure of the atmosphere, and they just had a series on weather forecasting.

Forecast

For me as a scientist, the clear favorite is Forecast. It is made by Michael White, editor for climate topics at the scientific journal Nature. He makes it as a private project, but as editor he naturally has access to the best and the brightest and a good understanding of the climate system. This makes for in-depth interviews on the science, but he also talks a lot about the serendipitous personal and scientific histories of the scientists. I have the feeling many non-scientists will be able to understand the interviews, but I admit I am not a very good judge of this.

The last interview was with Gabi Hegerl, the woman who discovered climate change (the first to do an attribution study). Other names people may recognize are: Reto Knutti, astronaut Piers Sellers, Chris Field, Bjorn Stevens, Kim Cobb, Mat Collins. Oh and a modeller from NASA GISS. Gavin Schmidt.

Have fun listening. Let me know if I missed something and which podcasts you like most.


Sunday, 8 May 2016

Grassroots scientific publishing

These were the weeks of peer review. Sophie Lewis wrote her farewell to peer reviewing. Climate Feedback is making it easy for scientists to review journalistic articles with nifty new annotation technology. And Carbon Brief showed that while there is a grey area, it is pretty easy to distinguish between science and nonsense in the climate "debate", which is one of the functions of peer review. And John Christy and Richard McNider managed to get an article published which I would have advised rejecting as a reviewer. A little longer ago we had the open review of the Hansen sea level rise paper, where the publicity circus resulted in a-scientific elements spraying their graffiti on the journal wall.

Sophie Lewis writes about two recent reviews she was asked to make: one where the reviewers were negative, but the article was published anyway by the volunteer editor, and one where the reviewers were quite positive, but the manuscript was rejected by a salaried editor.

I have had similar experiences. As a reviewer you invest your time and heart in a manuscript and root for the ones you like to make it into print. Making the final decision is naturally the task of the editor, but it is very annoying as a reviewer to have the feeling your review was ignored. There are many interesting things you could have done in that time. At least nowadays you more often get to see the other reviews and hear the final decision, which is motivating.

The European Geosciences Union has a range of journals with open review, where you can see the first round of reviews and anyone can contribute reviews. This kind of open review could benefit from the annotation system used by Climate Feedback to review journalistic articles; it makes reviewing easier and the reader can immediately see the text the review refers to. The open annotation system allows you to add comments to any webpage, PDF article or manuscript. You can see it as an extra layer on top of the web.

The reviewer can select a part of the text and add comments, including figures and links to references. Here is an annotated article in the New York Times that Climate Feedback found to be scientifically very credible, where you can see the annotation system in action. You can click on the text with a yellow background to see the corresponding comment, or click on the small symbol at the top right to see all comments. (Examples of articles with low scientific credibility are somehow mostly pay-walled; one would think that the dark money behind these articles would want them to be read widely.)

I got to know annotation via Climate Feedback. We use the annotation system of Hypothes.is, which was actually not developed to annotate journalistic articles, but to review scientific articles.
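
As an aside for the technically curious: the public annotations are openly accessible, so you can also fetch them programmatically. A minimal sketch in Python; the article URL is a placeholder, and the exact Hypothes.is search endpoint and JSON field names should be treated as assumptions to check against their documentation:

    import requests

    # Look up the public annotations on one article via the Hypothes.is
    # search API and print who commented what.
    article_url = "https://example.com/some-climate-article"  # placeholder
    response = requests.get("https://api.hypothes.is/api/search",
                            params={"uri": article_url, "limit": 50})
    for row in response.json().get("rows", []):
        # Each annotation record carries the annotator and the comment text.
        print(row.get("user"), ":", (row.get("text") or "")[:100])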

The annotation system makes writing a review easier for the reviewer and makes it easier to read reviews. The difference between writing some notes on an article for yourself and a peer review becomes gradual this way. It cannot take away having to read the manuscript and trying to understand it. That takes the most time, but it is the fun part; reducing the time for the tedious part makes reviewing more attractive.

Publishing and peer review

Is there a better way to review and publish? The difficult part is no longer the publishing. The central part that remains is the trust of a reader in a source.

It starts to become ironic that the owners of the scientific journals are called "scientific publishers", because the main task of a publisher is no longer the publishing. Everyone can do that nowadays with a (free) word processor and a (free) web page. The publishers and their journals are mostly brands. The scientific publisher, the journal, is a trusted name. Trust is slow to build up (and easy to lose), producing huge barriers to entry and leading to near-monopoly profit margins of scientific publishing houses of 30 to 40%. That is tax-payer money that is not spent on science, and it sustains organizations that prefer to keep science unused behind pay-walls.

Peer review performs various functions. It helps to give a manuscript the initial credibility that makes people trust it, that makes people willing to invest time in studying its ideas. If the scientific literature were as abominable as the mitigation-sceptical blog Watts Up With That (WUWT), scientific progress would slow down enormously. At WUWT the unqualified readers are supposed to find out for themselves whether they are being conned. Even if they could: having every reader do a thorough review is wasteful; it is much more efficient to ask a few experts to vet manuscripts first.

Without peer review it would be harder for new people to get others to read their work, especially if they make a spectacular claim or use unfamiliar methods. My colleagues will likely be happy to read my homogenization papers without peer review, Gavin Schmidt's colleagues his climate modelling papers and Michael Mann's colleagues his papers on climate reconstructions. But for new people it would be harder to be heard, for me it would be harder to be heard if I published something on another topic, and for outsiders it would be harder to judge who is credible. The latter is increasingly important the more interdisciplinary science becomes.

Improving peer review

When I was dreaming of a future review system where scientific articles were all in one global database, I used to think of a system without journals or editors. The readers would simply judge the articles and comments, like on Ars Technica or Slashdot. The very active open science movement in Spain has implemented such a peer review system for institutional repositories, where the manuscripts and reviews are judged and reputation metrics are estimated. Let me try to explain why I changed my mind and how important editors and journals are for science.

One of my main worries for a flat database would be that there would be many manuscripts that never got any review. In the current system the editor makes sure that every reasonable manuscript gets a review. Without an editor explicitly asking a scientist to write a review, I would expect that many articles would never get a review. Personal relations are important.

Science is not a democracy, but a meritocracy. Just voting an article up or down does not do the job. It is important that this decision is made carefully. You could try to statistically determine which readers are good at predicting the quality of an article, where quality could be determined by later votes or citations. This would be difficult, however, because it is important that the assessment is made by people with the right expertise, often by people from multiple backgrounds; we have seen how much even something as basic as the scientific consensus on climate change depends on expertise. Try determining expertise algorithmically. The editor knows the reviewers.
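
To make concrete what such a statistical scheme could look like, and what it misses, here is a minimal sketch (purely illustrative, my own construction): reviewers are weighted by how well their past votes correlated with later measures of article quality. Note that nothing in it knows anything about expertise or topic, which is exactly the objection above.

    import numpy as np

    def reviewer_weights(past_votes, quality):
        """Weight each reviewer by how well their past votes predicted
        article quality (e.g. later votes or citations).
        past_votes: shape (reviewers, articles); quality: shape (articles,)."""
        weights = np.array([np.corrcoef(votes, quality)[0, 1]
                            for votes in past_votes])
        # Drop uninformative (NaN) and anti-predictive reviewers.
        return np.clip(np.nan_to_num(weights), 0.0, None)

    def weighted_score(new_votes, weights):
        """Aggregate the votes on a new article with the reputation weights."""
        return float(np.dot(weights, new_votes) / weights.sum())

    # Toy example: three reviewers, five past articles.
    past = np.array([[5, 4, 2, 1, 3],    # tracks quality well
                     [3, 3, 3, 3, 3],    # uninformative
                     [1, 2, 4, 5, 3]])   # anti-correlated
    quality = np.array([5, 4, 2, 1, 3])
    w = reviewer_weights(past, quality)
    print(weighted_score(np.array([4, 2, 5]), w))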

While it is not a democracy, the scientific enterprise should naturally be open. Everyone is welcome to submit manuscripts. But editors and reviewers need to be trusted and level-headed individuals.

More openness in publishing could in future come from everyone being able to start a "journal" by becoming an editor (or better, by organizing a group of editors) and trying to convince their colleagues that they do a good job. The fun thing about the annotation system is that you can demonstrate that you do a good job using existing articles and manuscripts.

This could provide real value for the reader. Not only would the reviews be visible, but it would also be possible to explain why an article was accepted: was it speculative but really interesting if true (something for experts), or was it simply solid (something for outsiders)? Which parts do the experts debate? The debate would also continue after acceptance.

The code and the data of every "journal" should be open, so that everyone can start a new "journal" with reviewed articles. So that when Heartland offers me a nice amount of dark money to start accepting WUWT-quality articles, a group of colleagues can start a new journal, fix my dark-money "mistakes" and otherwise have a complete portfolio from the beginning. If they had to start from scratch, that would be a large barrier to entry, which, as in the traditional system, encourages sloppy work, corruption and power abuse.

Peer review is not just for selecting articles, but also helps to make them better. Theoretically the author can also ask colleagues to do so, but in practice reviewers are better at finding errors. Maybe because the colleagues who will put in the most effort are your friends, who have the same blind spots? These improvements of the manuscript would be missing in a pure voting system of "finished" articles. Having a manuscript phase is helpful.

Finally, an editor makes anonymous reviews a lot less problematic, because the editor can delete comments where anonymity seduced people into inappropriate behavior. Anonymity could be abused to make false attacks with impunity. On the other hand, anonymity can also provide protection in case of large power differences when there are real problems.

The advantage of internet publishing is that there is no need for an editor to reject technically correct manuscripts. If the contribution to science is small, or if the result is very speculative and quite likely to be found wrong in the future, the manuscript can still be accepted but simply given a corresponding grade.

This also points to a main disadvantage of the current dead-tree-inspired system: you get either a yes or a no. There is a bit more information in the journal the author chooses, but that is about it. A digital system can communicate much more subtly with a prospective reader. A speculative article is interesting for experts, but may be best avoided by outsiders until the issues are better understood. Some articles mainly review the state-of-the-art, others provide original research. Some articles have a specific audience: for example the users of a specific dataset or model. Some articles are expected to be more important for scientific progress than others or discuss issues that are more urgent than others. And so on. This information can be communicated to the reader.
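
As a sketch of how such richer signals could be encoded, here is a hypothetical data structure; all field names and scales are my invention, just to make the idea tangible:

    from dataclasses import dataclass

    @dataclass
    class ArticleAssessment:
        """Hypothetical metadata a digital review system could attach to
        an article, instead of the yes/no of a paper journal."""
        solidity: int             # 1 (very speculative) to 5 (well established)
        article_type: str         # "original research", "review", ...
        audience: list            # e.g. ["experts", "users of dataset X"]
        expected_importance: int  # expected contribution to progress, 1 to 5
        urgency: int              # how pressing the topic is, 1 to 5

    # A speculative but potentially important result, flagged for experts:
    example = ArticleAssessment(solidity=2, article_type="original research",
                                audience=["experts"], expected_importance=4,
                                urgency=3)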

The nice thing about the open annotation system is that we can begin reviewing articles before authors start submitting them. We can simply review existing articles as well as manuscripts, such as the ones uploaded to ArXiv. The editors could reject articles that should not have been published in the traditional journals and accept manuscripts from archives. I would trust such an assessment by a knowledgeable editor (team) more than acceptance by a traditional journal.

In this way we can produce collections of existing articles. If the new system provides a better reviewing service to science, the authors at some moment can stop submitting their manuscripts to traditional journals and submit them directly to the editors of a collection. Then we have real grassroots scientific journals that serve science.

For colleagues in the communities it would be clear which of these collections have credibility. For outsiders, however, we would also need some system that communicates this, which would traditionally be the role of the publishing houses and the high barriers to entry. It could be assessed where collections overlap, preferably again by humans and not by algorithms. For some articles there may be legitimate reasons for differences (hard to assess, outside the topic of a collection); for others, an editor not having noticed problems may be a sign of bad editorship. This problem is likely not too hard; in a recent analysis of Twitter discussions on climate change there was a very clear distinction between science and nonsense.

There is still a lot to do, but with the ease of modern publishing and the open annotation system a lot of the software is already there. Larger improvements would be tools for editors to moderate review comments (or at least to collapse less valuable ones); Hypothes.is is working on this. A grassroots journal would need a grading system, standardized where possible. More practical tools would include help in tracking the manuscripts under review and sending reminders, and the editors of one collection should be able to communicate with each other. The grassroots journal should remain visible even if the editor team stops; that will need collaboration with libraries or science societies.

If we get this working
  • we can say goodbye to frustrated reviewers (well mostly),
  • goodbye to pay-walled journals in which publicly financed research is hidden for the public and many scientists alike and
  • goodbye to wasting limited research money on monopolistic profits by publishing houses, while
  • we can welcome better review and selection and
  • we are building a system that inherently allows for post-publication peer review.

What do you think?



Related reading

There is now an "arXiv overlay journal", Discrete Analysis. Articles are published and hosted by ArXiv; otherwise it has traditional peer review. The announcement mentions three software initiatives that make starting a digital journal easy: Scholastica, Episciences.org and Open Journal Systems.

Annotating the scholarly web

A coalition to Annotate All Knowledge: a new open layer is being created over all knowledge.

Brian A. Nosek and Yoav Bar-Anan describe a scientific utopia: Scientific Utopia: I. Opening scientific communication. I hope the ideas in the above post make this transition possible.

Climate Feedback has started a crowdfunding campaign to be able to review more media articles on climate science.

Farewell peer reviewing

7 Crazy Realities of Scientific Publishing (The Director's Cut!)

Mapped: The climate change conversation on Twitter

I would trust most scientists to use annotation responsibly, but it can also be used to harass vulnerable voices on the web. Genius Web Annotator vs. One Young Woman With a Blog

Nature Chemistry blog: Post-publication peer review is a reality, so what should the rules be?

Report from the Knowledge Exchange event: Pathways to open scholarship gives an overview of the different initiatives to make science more open.

Magnificent BBC Reith lecture: A question of trust

Sunday, 1 May 2016

Christy and McNider: Time Series Construction of Summer Surface Temperatures for Alabama

John Christy and Richard McNider have a new paper in the AMS Journal of Applied Meteorology and Climatology called "Time Series Construction of Summer Surface Temperatures for Alabama, 1883–2014, and Comparisons with Tropospheric Temperature and Climate Model Simulations". Link: Christy and McNider (2016).

This post gives just a few quick notes on the methodological aspects of the paper.
1. They select data with a weak climatic temperature trend.
2. They select data with a large cooling bias due to improvements in radiation protection of thermometers.
3. They developed a new homogenization method using an outdated design and did not test it.

Weak climatic trend

Christy and McNider wrote: "This is important because the tropospheric layer represents a region where responses to forcing (i.e., enhanced greenhouse concentrations) should be most easily detected relative to the natural background."

The trend in the troposphere should be a few percent stronger than at the surface, mainly in the tropics. However, it is telling that they see a strong trend as a reason to prefer tropospheric temperatures, because when it comes to the surface they select the period and variable with the smallest temperature trend: the daily maximum temperatures in summer.

The trend in winter due to global warming should be 1.5 times the trend in summer, and the trend in the night-time minimum temperatures is stronger than the trend in the daytime maximum temperatures, as discussed here. Thus Christy and McNider select the data with the smallest trend for the surface. Using their reasoning for the tropospheric temperatures, they should prefer night-time winter temperatures.

(And their claim about the tropospheric temperatures is not right, because whether a trend can be detected depends not only on the signal, but also on the noise. The weather noise due to El Nino is much stronger in the troposphere and the instrumental uncertainties are also much larger. Thus the signal-to-noise ratio is smaller for the tropospheric temperatures, even if the tropospheric record were as long as the surface observations.

Furthermore, I am somewhat amused that there are still people interested in the question of whether global warming can be detected.)

[UPDATE. Tamino shows that within the USA, Alabama happens to be the region with the least warming. The more so for the maximum temperature. The more so for the summer temperature.]

Cooling bias

Then they used data with a very large cooling bias due to improvements in the protection of the thermometer against (solar and infra-red) radiation. Early thermometers were not as well protected against solar radiation and typically recorded too high temperatures. Early thermometers also recorded too cool minimum temperatures; the thermometer should not see the cold sky, otherwise it radiates out to it and cools. The warming bias in the maximum temperature is larger than the cooling bias in the minimum temperature, thus the mean temperature still has some bias, but less than the maximum temperature.

Due to this reduction in the radiation error, summer temperatures have a stronger cooling bias than winter temperatures.

The warming effect of early measurements on the annual means is probably about 0.2 to 0.3°C. In the maximum temperature it will be a lot higher, and in the summer maximum temperature it will be higher still.

That is why most climatologists use the annual means. Homogenization can improve climate data, but it cannot remove all biases. Thus it is good to start with the data that has the least bias, much better than starting with a highly biased dataset as Christy and McNider did.

Statistical homogenization removes biases by comparing a candidate station to its neighbours. The stations need to be close enough together that the regional climate can be assumed to be similar at both stations. The difference between two stations is then weather noise plus inhomogeneities (non-climatic changes due to changes in the way temperature was measured).

If you want to be able to see the inhomogeneities, you thus need well-correlated neighbours with as little weather noise as possible. By using only the maximum temperature, rather than the mean temperature, you increase the weather noise. By using the monthly means in summer, rather than the annual means or at the very least the summer means, you increase the weather noise. By going back in time more than a century you increase the noise, because we had fewer stations to compare with at the time.

They keyed part of the data themselves, mainly for the period before 1900, from the paper records. It sounds as if they performed no quality control of these values (to detect measurement errors). This will also increase the noise.

With such a low signal-to-noise ratio (inhomogeneities that are small relative to the weather noise in the difference time series), the estimated dates of the breaks they did find will have a large uncertainty. It is thus a pity that they purposefully did not use information from station histories (metadata) to get the dates of the breaks right.

Homogenization method

They developed their own homogenization method and only tested it on a noise signal with one break in the middle. Real series have multiple breaks, in the USA typically every 15 years. Furthermore, the reference series also has breaks.

The method uses the detection equation from the Standard Normal Homogeneity Test (SNHT), but with different significance levels. Furthermore, for some reason it does not use the hierarchical splitting of SNHT to deal with multiple breaks, but detects on a window, in which it is assumed there is only one break. However, if you select the window too long it will contain more than one break, and if you select it too short the method will have no detection power. You would thus theoretically expect the use of a window for detection to perform very badly, and this is also what we found in a numerical validation study.
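
For readers who do not know the SNHT detection equation: it standardizes the candidate-minus-reference difference series and asks, for every possible break position, how strongly the means before and after differ. A minimal sketch of the classical test in Python (my simplified illustration of standard SNHT, not the Christy and McNider variant):

    import numpy as np

    def snht(difference_series):
        """Classical SNHT (Alexandersson, 1986) applied to a
        candidate-minus-reference difference series. Returns the test
        statistic T0 and the most likely break position."""
        q = np.asarray(difference_series, dtype=float)
        n = len(q)
        z = (q - q.mean()) / q.std(ddof=1)  # standardize the series
        t = np.empty(n - 1)
        for k in range(1, n):
            z1 = z[:k].mean()   # mean before the candidate break
            z2 = z[k:].mean()   # mean after the candidate break
            t[k - 1] = k * z1**2 + (n - k) * z2**2
        k_best = int(np.argmax(t)) + 1
        return t[k_best - 1], k_best

    # Toy example: 60 years of weather noise with a 0.5°C break in year 30.
    rng = np.random.default_rng(42)
    series = rng.normal(0.0, 0.3, 60)
    series[30:] -= 0.5
    t0, k = snht(series)  # T0 is compared to a length-dependent critical value
    print(f"T0 = {t0:.1f}, most likely break after year {k}")

The flatter this curve of T values, the more uncertain the detected break date. With the noisy summer maximum temperatures and the sparse early network, that uncertainty is exactly the problem described above.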

I see no real excuse not to use better homogenization methods (ACMANT, PRODIGE, HOMER, MASH, Craddock). These are built to take into account that the reference station also has breaks and that a series will have multiple breaks; no need for ad-hoc windows.

If you design your own homogenization method, it is good scientific practice to test it first, to study whether it does what you hope it does. There is, for example, the validation dataset of the COST Action HOME; using it immediately allows you to compare your method's skill with that of the other methods. Given the outdated design principles, I am not hopeful the Christy and McNider homogenization method would score above average.

Conclusions

These are my first impressions on the homogenization method used. Unfortunately I do not have the time at the moment to comment on the non-methodological parts of the paper.

If there are no knowledgeable reviewers available in the USA, it would be nice if the AMS would ask European researchers, rather than some old professor who once removed an inhomogeneity from his dataset in the 1960s. Homogenization is a specialization; it is not trivial to make data better, and it really would not hurt if the AMS asked for expertise from Europe when American experts are busy.

Hitler is gone. The EGU general assembly has a session on homogenization, the AGU does not. The EMS has a session on homogenization, the AMS does not. EUMETNET organizes data management workshops, a large part of which is about homogenization; I do not know of an American equivalent. And we naturally have the Budapest seminars on homogenization and quality control. Not Budapest, Georgia, nor Budapest, Missouri, but Budapest, Hungary, Europe.



Related reading

Tamino: Cooling America. Alabama compared to the rest of contiguous USA.

HotWhopper discusses further aspects of this paper and some differences between the paper and the press release. Why nights can warm faster than days - Christy & McNider vs Davy 2016

Early global warming

Statistical homogenisation for dummies

Tuesday, 26 April 2016

Climate scientists are now grading climate journalism



Guest post by Daniel Nethery and Emmanuel Vincent. Daniel Nethery is the associate editor and Emmanuel Vincent the founder of Climate Feedback. Climate Feedback is launching a crowdfunding campaign today.

The internet represents an extraordinary opportunity for democracy. Never before has it been possible for people from all over the world to access the latest information and collectively seek solutions to the challenges which face our planet, and not a moment too soon: the year 2015 was the hottest in human history, and the Great Barrier Reef is suffering the consequences of warming oceans right now.

Yet despite the scientific consensus that global warming is real and primarily due to human activity, studies show that only about half the population in some countries with among the highest CO2 emissions per capita understand that human beings are the driving force of our changing climate. Even fewer people are aware of the scientific consensus on this question. We live in an information age, but the information isn’t getting through. How can this be?

While the internet puts information at our fingertips, it has also allowed misinformation to sow doubt and confusion in the minds of many of those whose opinions and votes will determine the future of the planet. And up to now scientists have been on the back foot in countering the spread of this misinformation and pointing the public to trustworthy sources of information on climate change.

Climate Feedback intends to change that. It brings together a global network of scientists who use a new web-annotation platform to provide feedback on climate change reporting. Their comments, which bring context and insights from the latest research, and point out factual and logical errors where they exist, remain layered over the target article in the public domain. You can read them for yourself, right in your browser. The scientists also provide a score on a five-point scale to let you know whether the article is consistent with the science. For the first time, Climate Feedback allows you to check whether you can trust the latest breaking story on climate change.


An example of Climate Feedback in action. Scientists’ comments and ratings appear as a layer over the article. Text annotated with Hypothesis is highlighted in yellow in the web browser and scientists’ comments appear in a sidebar next to the article. Illustration: Climate Feedback

Last year the scientists looked at some influential content. Take the Pope’s encyclical, for instance. The scientists gave those parts of the encyclical relating to climate science a stamp of approval. Other “feedbacks,” as we call them, have made a lasting impact. When the scientists found that an article in The Telegraph misrepresented recent research by claiming that the world faced an impending ice age, the newspaper issued a public correction and substantially modified the online text.

But there’s more work to be done. Toward the end of the year the scientists carried out a series of evaluations of some of Forbes magazine’s reporting on climate change. The results give an idea of the scale of the problem we’re tackling. Two of the magazine’s most popular articles for 2015, one of which attracted almost one million hits, turned out to be profoundly inaccurate and misleading. Both articles, reviewed by nine and twelve scientists, unanimously received the lowest possible scientific credibility rating. This rarely occurs, and just in case you’re wondering, yes, the scientists do score articles independently: ratings are only revealed once all scientists have completed their review.

We argue that scientists have a moral duty to speak up when they see misinformation masquerading as science. Up to now scientists have however had little choice but to engage in time-consuming op-ed exchanges, which result in one or two high-profile scientists arguing against the views of an individual who may have no commitment to scientific accuracy at all. Climate Feedback takes a different approach. Our collective reviews allow scientists from all over the world to provide feedback in a timely, effective manner. We then publish an accessible synthesis of their responses, and provide feedback to editors so that they can improve the accuracy of their reporting.

We’ve got proof of concept. Now we need to scale up, and for that we need the support of everyone who values accuracy in reporting on one of the most critical challenges facing our planet. Climate Feedback won’t reach its full potential until we start measuring the credibility of news outlets in a systematic way. We want to be in a position to carry out an analysis of any influential internet article on climate change. We want to develop a ‘Scientific Trust Tracker’ – an index of how credible major news sources are when it comes to climate change.

We’re all increasingly relying on the internet to get our news. But the internet has engendered a competitive media environment where in the race to attract the most hits, sensational headlines can trump sober facts. We’re building into the system a new incentive for journalists with integrity to get ahead. Some journalists are already coming to us, asking our network of scientists to look at their work. We want readers to know which sources they can trust. We want editors to think twice before they publish ideological rather than evidence-based reporting on global warming.

On Friday 22 April 2016, more than 170 countries signed the Paris climate agreement. But this unprecedented international treaty will lead to real action only if the leaders of those countries can garner popular support for the measures needed to curb greenhouse gas emissions. The fate of the Paris deal lies largely in the hands of voters in democratic countries, and we cannot expect democracies to produce good policy responses to the challenges of climate change if voters have a confused understanding of reality.

Scientists from all over the world are standing up for better informed democracies. You can help them make their voices heard. We invite you to stand with us for a better internet. We invite you to stand with science.

Victor Venema: I am also part of the Climate Feedback community and have annotated several journalistic articles when they made claims about climate data quality. It is a very effective way to combat misinformation. Just click on the text and add a short comment; Climate Feedback will take care of spreading your contribution. If you are a publishing scientist, do consider joining.





* Photo at the top: severe suburban flooding in New Orleans, USA, in the aftermath of Hurricane Katrina. Photo by Mark Moran, NOAA Corps, NMAO/AOC (CC BY 2.0).

Tuesday, 29 March 2016

Upcoming meetings on climate data quality


It looks like 2016 will be a year full of interesting conferences. I already pointed you to EGU, IMSC and EMS before. Here is an update with three upcoming European meetings, including two close deadlines.

The marine climate data community will hold its main workshop this July (18 to 22). The deadline has just been extended to the 6th of April, Wednesday next week.

The metrologists (no typo) organize a meeting on climate data, MMC2016. It will take place from 26 to 30 September and will be organized together with the WMO TECO conference.

And naturally we will have the European Meteorological Society meeting in autumn. This year is an ECAC year (European Conference on Applied Climatology). The abstract submission deadline is in about three weeks, 21 April 2016, during EGU. So start writing soon. As always we will have a session on "Climate monitoring; data rescue, management, quality and homogenization" for the homogenization-addicted readers of this blog.

If you know of more interesting conferences, do add them in the comments.

MARCDAT-IV

The Fourth International Workshop on the Advances in the Use of Historical Marine Climate Data (MARCDAT-IV) will be held at the National Oceanography Centre, Southampton, UK between the 18th and 22nd July 2016. The workshop will be arranged around the following themes:
  • Data homogenization (benchmarking, bias adjustments, step change analysis, metadata)
  • Quantification and estimation of uncertainty
  • Data management, recovery and reprocessing (digitisation efforts and reprocessing of previously digitised data)
  • Reconstructing past climates
  • Integrating In-situ / satellite data sources
  • Consistency of the climate across domain boundaries (land, ocean, surface, subsurface, atmosphere)
  • The role of ICOADS and applications of marine climate data
  • Review of the 10-year action plan
This looks like an invitation for people working on land data to participate as well. I just asked some colleagues on the homogenization list, and it looks like there are a decent number of weather stations near the coast. We could compare them to marine observations; I would especially be interested in comparing sea surface temperature observations.

This workshop is free to attend, but participants must register.

Key Dates:
Abstract submission deadline: 8th April 2016
Registration closes: 31st May 2016

MMC-2016

International workshop on Metrology for Meteorology and Climate, in conjunction with the WMO TECO 2016 conference & Meteorological Technology World Expo 2016. It will be held in Madrid, Spain, from 26 to 30 September 2016.
In recent years an increasing collaboration has been established between the metrology and meteorology communities. EURAMET, the European association of metrology institutes, is funding several projects aiming to deliver results of valuable impact for meteorology and climatology. The key aspect of such projects is the traceability of measurements and the uncertainties of the measured physical and chemical quantities describing the Earth's atmosphere. The MMC conference aims to give these two communities an opportunity to present and discuss needs, methods, expertise and devices for cooperating in producing better data. The invitation is addressed to the metrology, meteorology and climate scientific communities and operators. After the first MMC 2014 in Brdo, Slovenia, MMC 2016 is organized in Madrid, Spain, in conjunction with the CIMO-TECO conference.
As far as I know the abstract deadline has not been determined yet, but write the dates in your agenda, bookmark the homepage and ask the organizers to notify you once the deadline is known.

EMS2016

The conference theme of the Annual Meeting of the European Meteorological Society is: Where atmosphere, sea and land meet: bridging between sciences, applications and stakeholders. It will be held from 12 to 16 September 2016 in Trieste, Italy. The abstract submission deadline is 21 April 2016, during EGU2016.

MC1 Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Hermann Mächel, Victor Venema, Ingeborg Auer, Dan Hollis 
Robust and reliable climatic studies, particularly those assessments dealing with climate variability and change, greatly depend on availability and accessibility to high-quality/high-resolution and long-term instrumental climate data. At present, a restricted availability and accessibility to long-term and high-quality climate records and datasets is still limiting our ability to better understand, detect, predict and respond to climate variability and change at lower spatial scales than global. In addition, the need for providing reliable, opportune and timely climate services deeply relies on the availability and accessibility to high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:
  • Climate monitoring , including early warning systems and improvements in the quality of the observational meteorological networks
  • More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records
  • Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability
  • Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables
  • Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms (either absolute or relative approaches), using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables
  • Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets
Next to this session, readers of this blog may also like: Climate change detection, assessment of trends, variability and extremes. And I personally like the session on Spatial Climatology, which has a lot to do with structure and variability.




Top photo by Martin Duggan (CC BY 2.0).

Monday, 21 March 2016

Cooling moves of urban stations



It has been studied over and over again, in very many ways: in global temperature datasets urban stations have about the same temperature trend as surrounding rural stations.

There is also massive evidence that urban areas are typically warmer than their surroundings. For large urban areas the Urban Heat Island (UHI) effect can increase the temperature by several degrees Celsius.

A constant higher temperature due to the UHI does not influence temperature changes. However, when cities grow around a weather station, this produces an artificial warming trend.
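
A toy calculation may make this distinction clearer. The sketch below (Python, with invented numbers, not data from any study) compares a rural series, a city series with a constant UHI offset, and a series whose UHI bias grows with the city.

    import numpy as np

    years = np.arange(1900, 2001)
    rng = np.random.default_rng(0)
    # Hypothetical regional climate: 0.7 degrees C warming per century plus weather noise.
    climate = 0.007 * (years - 1900) + rng.normal(0.0, 0.2, years.size)

    constant_uhi = climate + 2.0                                # city always 2 degrees C warmer
    growing_uhi = climate + np.linspace(0.0, 2.0, years.size)   # UHI grows with the city

    def trend(series):
        # Least-squares trend in degrees C per century.
        return 100.0 * np.polyfit(years, series, 1)[0]

    # The constant offset leaves the trend unchanged; the growing UHI
    # adds about 2 degrees C per century of artificial warming.
    print(trend(climate), trend(constant_uhi), trend(growing_uhi))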

Why don’t we see this in the urban stations of the global temperature collections? There are several reasons; the one I want to focus on in this post is that stations do not stay in the same place.

Urban stations are often relocated to better locations, more outside of town. It is common for urban stations to be moved to airports, especially when meteorological offices move there to assist with aviation safety. Also, when meteorological offices can no longer pay the rent in the city center, they are forced to move out and take the station with them. When urban development makes the surroundings unsuitable, or when a volunteer observer retires, the station has to move; it then makes sense to search for a better location, which will likely be in a less urban area.

In nearly every network studied, relocations are the most frequent cause of inhomogeneities. For example, Manola Brunet and colleagues (2006) write about Spain:
“Changes in location and setting are the main cause of inhomogeneities (about 56% of stations). Station relocations have been common during the longest Spanish temperature records. Stations were moved from one place to another within the same city/town (i.e. from the city centre to outskirts in the distant past and, more recently, from outskirts to airfields and airports far away from urban influence) and from one setting (roofs) to another (courtyards).”
Since relocations of that kind are likely to result in a cooling, the Parallel Observations Science Team (ISTI-POST) wants to have a look at how large this effect is. As far as we know there is no overview study yet, but papers on the homogenization of a station network often report on adjustments made for specific inhomogeneities.

We, that is mainly Jenny Linden of Mainz University, had a look in the scientific literature. Let’s start in China, where urbanization is strong and can be clearly seen in the raw data of many stations. China also has strong cooling relocations. The graph below from Wenhui Xu and colleagues (2013) shows the distribution of breaks that were detected (and corrected) with statistical homogenization and for which the station history indicated that they were caused by relocations. Both the minimum and the maximum temperature cool by a few tenths of a degree Celsius due to the relocations.


The distribution of the breaks that were due to relocations for the maximum temperature (left) and minimum temperature (right). The red line is a Gaussian distribution for comparison.


Going into more detail, Zhongwei Yan and colleagues (2010) studied two relocations in Beijing. They found that the relocations introduced breaks of −0.81°C and −0.69°C. Yuan-Jian Yang and colleagues (2013) found a cooling relocation of 0.7°C in the data of Hefei. Clearly, for individual urban stations relocations can have a large influence.

Fatemeh Rahimzadeh and Mojtaba Nassaji Zavareh (2014) homogenized the Iranian temperature observations and observed that relocations were frequent:
“The main non-climatic reasons for non-homogeneity of temperature series measured in Iran are relocation and changes in the measuring site, especially a move from town to higher elevations, due to urbanization and expansion of the city, construction of buildings beside the stations, and changes in vegetation.”
They show an example with five stations, where one station (Khoramabad) has a relocation in 1980 and another (Shahrekord) has two relocations, in 1980 and 2002. These relocations have a strong cooling effect of 1 to 3 degrees Celsius.


Temperature at five stations in Iran, including their adjusted series.


Relocations do not always have a strong effect, however. Margarita Syrakova and Milena Stefanova (2009) did not find any influence of the inhomogeneities on the annual mean temperature averaged over Bulgaria, even though: “Most of the inhomogeneities were caused by station relocations… As there were no changes of the type of thermometers, shelters and the calculation of the daily mean temperatures, the main reasons of inhomogeneities could be station relocations, changes of the environment or changes of the station type (class).”

In Finland, Norway, Sweden and the UK, relocations produced a cooling bias of −0.11°C and appear to be the most common cause of inhomogeneities (Tuomenvirta, 2001). The table below summarises the breaks that were found and, where known from the station histories, their causes. Tuomenvirta writes:
“[Station histories suggest] that during the 1930s, 1940s and 1950s, there has been a tendency to move stations from closed areas in growing towns to more open sites, for example, to airports. This can be seen as a counter-action to increasing urbanization.”


Table with the average bias of the inhomogeneities found in Finland, Sweden, Norway and the UK in winter (DJF), spring (MAM), summer (JJA) and autumn (SON), and in the yearly average. Changes in the surroundings, such as urbanization or micro-siting changes, made the temperatures higher. This was counteracted by more frequent cooling biases from changes in the thermometers and in the screens used to protect them, from relocations, and from changes in the formula used to compute the daily mean temperature.


Concluding: relocations are a frequent type of inhomogeneity and they produce a cooling bias. For urban stations the cooling can be very large. Averaged over a region the values are smaller, but especially because relocations are so common, they most likely have a clear influence on the global warming seen in raw temperature observations.

Future research

One problem with studying relocations is that they are frequently accompanied by other changes. Thus you can study them in two ways: study only relocations where you know that no other changes were made, or study all historical relocations, whether there was another change or not.

The first set-up allows us to characterize the relocations directly, to understand the physical consequences of moving a station from, for example, the center of a city or village to the airport. Because the differences are not subject to other changes specific to a network, the results can easily be compared between regions. The problem is that only part of the available parallel measurements satisfies these strict conditions.

Conversely, for the second design (taking all historical relocations, including those accompanied by other changes) the characterization of the bias is limited to the datasets studied, and we will need a large sample to say something about the global climate record. On the other hand, we can analyze more data this way.

There are also two possible sources of information. The studies above relied on statistical homogenization, comparing a candidate station to its neighbors; all you then need to know is which inhomogeneities belong to a relocation. A more direct way to study relocations is to use parallel measurements at both locations. These are especially helpful for studying changes in the variability around the mean and in weather extremes. That is where the Parallel Observations Science Team (ISTI-POST) comes into play.
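
As a rough sketch of these two information sources (the function names and set-up below are hypothetical illustrations, not the actual POST analysis): with a relocation date known from the metadata, the break size can be estimated from a candidate-minus-neighbour difference series, in which the regional weather largely cancels; with parallel measurements, it is simply the mean difference during the overlap period.

    import numpy as np

    def break_size_from_neighbour(candidate, neighbour, move_index):
        # Difference series with a nearby reference station; regional
        # weather variability largely cancels in the difference.
        diff = np.asarray(candidate) - np.asarray(neighbour)
        # Break size: mean difference after the documented move minus before.
        return diff[move_index:].mean() - diff[:move_index].mean()

    def break_size_from_parallel(old_site, new_site):
        # With overlapping parallel measurements, the break size is the
        # mean difference between the new and the old location.
        return float(np.mean(np.asarray(new_site) - np.asarray(old_site)))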

It is also possible to study specific relocations. The relocation of stations to airports was an important transition, especially around the 1940s. The associated temperature change is likely large, and this transition was frequent and well documented. One could also focus on urban stations or on village stations, rather than studying all stations.

One could make a classification of the micro- and macro-siting before and after the relocation. For micro-siting the Michel Leroy (2010) classification could be interesting; as far as I know this classification has not been validated yet: we do not know how large the biases of the five categories are, nor how well defined these biases are. Ian Stewart and Tim Oke (2012) have made a beautiful classification of the local climate zones of (urban) areas, which can also be used to classify the surroundings of stations.


Examples of various combinations of buildings and land use in the local climate zones of Stewart and Oke.


There are many options, and what we choose will also depend on what kind of data we can get. Currently our preference is to study parallel data with identical instrumentation at two locations, to understand the influence of the relocation itself as well as possible. In addition to studying the influence on the mean, we are gathering data on the break sizes found by statistical homogenization for breaks due to relocations. The station histories (metadata) are crucial here, in order to clearly assign breakpoints to relocations. It will also be interesting to compare these two information sources where possible. This may become one study or two, depending on how involved the analysis becomes.

This POST study is coordinated by Alba Guilabert; Jenny Linden and Manuel Dienst are very active. Please contact one of us if you would like to be involved in a global study like this and tell us what kind of data you have. Also, if anyone knows of more studies reporting the size of inhomogeneities due to relocations, please let us know. I have certainly seen more such tables at conferences, but they may not have been published.



Related reading

Parallel Observations Science Team (POST) of the International Surface Temperature Initiative (ISTI).

The transition to automatic weather stations. We’d better study it now.

Changes in screen design leading to temperature trend biases.

Early global warming.

Why raw temperatures show too little global warming.

References

Brunet, M., O. Saladie, P. Jones, J. Sigró, E. Aguilar, et al., 2006: The development of a new daily adjusted temperature dataset for Spain (SDATS) (1850–2003). International Journal of Climatology, 26, pp. 1777–1802, doi: 10.1002/joc.1338.
See also: a case-study/guidance on the development of long-term daily adjusted temperature datasets.

Leroy, M., 2010: Siting classifications for surface observing stations on land. In WMO Guide to Meteorological Instruments and Methods of Observation. "CIMO Guide", WMO-No. 8, Part I, Chapter 1, Annex 1B.

Rahimzadeh, F. and M.N. Zavareh, 2014: Effects of adjustment for non‐climatic discontinuities on determination of temperature trends and variability over Iran. International Journal of Climatology, 34, pp. 2079-2096, doi: 10.1002/joc.3823.

Stewart, I.D. and T.R. Oke, 2012: Local climate zones for urban temperature studies. Bulletin of the American Meteorological Society, 93, pp. 1879–1900, doi: 10.1175/BAMS-D-11-00019.1.
See also the World Urban Database.

Tuomenvirta, H., 2001: Homogeneity adjustments of temperature and precipitation series - Finnish and Nordic data. International Journal of Climatology, 21, pp. 495-506, doi: 10.1002/joc.616.

Xu, W., Q. Li, X.L. Wang, S. Yang, L. Cao, and Y. Feng, 2013: Homogenization of Chinese daily surface air temperatures and analysis of trends in the extreme temperature indices. Journal of Geophysical Research: Atmospheres, 118, doi: 10.1002/jgrd.50791.

Syrakova, M. and M. Stefanova, 2009: Homogenization of Bulgarian temperature series. International Journal of Climatology, 29, pp. 1835-1849, doi: 10.1002/joc.1829.

Yan, Z.W., Z. Li, and J.J. Xia, 2014: Homogenisation of climate series: The basis for assessing climate changes. Science China: Earth Sciences, 57, pp. 2891-2900, doi: 10.1007/s11430-014-4945-x.

* Photo at the top, "High Above Sydney" by Taro Taylor, used under an Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license.

Monday, 7 March 2016

Bernie Sanders is more electable



The bias in the American mass media is driving me crazy. So let me get onto my little soap box.

You can see the bias when they add the super-delegates to the pledged delegates to make Clinton’s lead look bigger than it is. If these super-delegates were actually to vote against the wishes of the primary voters, the disgust would be large enough to make sure that the Democrats lose.

You can see the bias in how little the media speak about money in politics, a main topic in both primaries.

What I want to talk about today is that the media generally assume that Hillary Clinton is more electable in the general presidential election than Bernie Sanders, often in the dismissive, implicit way of the establishment. That may be an honest opinion, but the evidence points in another direction.

Money in politics

I am happy to admit that I am also biased. For me Bernie Sanders is clearly the best candidate, mainly because he wants to get money out of politics. Money in politics is bad for the public debate, for democracy, for the environment and for the economy.

Politicians who have to do what their donors tell them have no flexibility to compromise and get things done; they have to defend indefensible positions. This leads to a childish political debate. Nearly all Republican representatives in Washington reject basic scientific findings on climate change. This is childish; it is equivalent to putting your fingers in your ears and singing la, la, la, la. It matches the donors’ preferences, but not those of their voter base: half of Republican voters support policies to reduce greenhouse gases.