Flat White

Taking temperatures

2 June 2017

1:03 PM

It started with a palm to the forehead. Then came the glass thermometer, the one in the metal case with the bubble of mercury at the end. If you were over two or three this went in your mouth, and you weren’t allowed to open your mouth until you were told. That is very hard if you have bronchitis or a blocked nose from a heavy cold.

Then we developed the digital thermometer, which is applied to the mouth, anus or vagina, and also to the armpit. Now the newest of these thermometers probe the ear canal to get a reading on body temperature.

All have problems getting the internal body temperature right: the mouth can register the effect of recent hot or cold food (or be compromised by coughing or a blocked nose); the rectal reading might carry the heat of diarrhoea or last night’s chilli; and the armpit reading the warmth of a recently removed jumper or the body heat of exercise.

I have a condition called Raynaud’s disease, which is why you will find me on a beach and not a ski slope. My peripheral blood vessels go into vasospasm in cold weather and my extremities go cold, then blue, white and numb. So the new technique of a thermometer in the ear will, in even slightly cold weather, record a lower body temperature in me due to my loss of peripheral circulation.

Compounding these variables, body temperature varies diurnally (falling at night and rising during the day), is affected by altitude, and rises when an infection or virus activates pyrogens that trigger a fever response – an attempt to ‘cook’ the virus or bacteria.

The body works to maintain a fluctuating ‘normal’ temperature of 36.5°C to 37.5°C (97.7°F to 99.5°F), but there is a time element involved while the body shuts down heat escape routes in cold weather, opens them up in hot weather, and responds when fever strikes. Many modifying factors are at work in what counts as a ‘normal’ or ‘abnormal’ human body temperature once time, activity and location are considered. Shouldn’t the same considerations of time, activity and place inform the temperature-taking of a body of over 500 million square kilometres?

If it is hard to get an accurate reading on the temperature of one discrete living body, even after hundreds of years of developing devices through to today’s more accurate thermometers, how can we think we can accurately take the temperature of a living and non-living system as large as our planet? Yet this is what we are told is possible by all the climate science institutes across the world.

First, let’s start with where we are taking the Earth’s temperature, how many thermometers are involved and of what type. In 2014 some 600 of the approximately 9,000 global temperature stations established and monitored by the National Oceanic and Atmospheric Administration (NOAA) were taken out of action. NOAA is the world’s leading temperature taker and works in collaboration with CRU (the Climatic Research Unit of the University of East Anglia) and GISS (the Goddard Institute for Space Studies).

The recently quoted number of land-based weather stations is slightly more than 4,000, with ocean-based measuring sites at approximately 3,000. That averages out to one land-based station per 37,000 km² and one ocean-based site per 120,000 km². Not only is this very sparse, the stations aren’t evenly distributed.
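As a back-of-envelope check on that arithmetic, here is a minimal sketch in Python, assuming the commonly quoted figures of roughly 149 million km² of land and 361 million km² of ocean (those area figures are my assumption, not from the article):

```python
# Rough sanity check of the station-density arithmetic above.
# Assumed figures (not in the article): land area ~149 million km²,
# ocean area ~361 million km².
LAND_AREA_KM2 = 149_000_000
OCEAN_AREA_KM2 = 361_000_000

land_stations = 4_000
ocean_sites = 3_000

print(f"One land station per {LAND_AREA_KM2 / land_stations:,.0f} km²")  # ~37,250 km²
print(f"One ocean site per {OCEAN_AREA_KM2 / ocean_sites:,.0f} km²")     # ~120,333 km²
```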

There are valid reasons for this – political barriers plus geographic isolation and the difficulty of establishing and maintaining equipment. On the political front, countries may be measuring their local daily temperature ranges but not sharing them with NOAA, or their instrumentation may be too inaccurate to use in a global database.

NOAA’s map of land-based weather stations shows heavy clusters near major cities and densely populated areas, which are well known ‘heat islands’. Designated ‘rural’ stations tend to be sited in easily accessible, vast cropland zones (once again, ‘hotter’ than natural landscapes). Outside these areas the global temperature grids rely on National Meteorological Services’ information where the readings are considered reliable.

In Australia NOAA identifies 28 to 30 stations, with only eight away from the coast in the massive centre of the continent. There are none shown in New Guinea, Vietnam, Cambodia or Laos. Similarly, none are shown in Taiwan, Pakistan or Afghanistan; there are none in northern Sweden and only three in southern Sweden, while Greenland has only three stations, all along its southern coast.

The biggest gap can be found on the African continent, with no stations in Libya, Sudan, South Sudan, Eritrea, Somalia, Nigeria, Guinea-Bissau, Sierra Leone or Liberia… and meagre scatterings elsewhere.

There is a whole lot of Earth, especially in its cooler latitudes and higher altitudes, that is not having its temperature taken, and yet we are delivered regular statements about what constitutes ‘global’ temperature.

Without handy orifices into which to insert a thermometer to take our planet’s temperature, we are left to put our trust in a preached averaging of the available readings and how these vary from a supposed ‘normal’.

This is not a 2,000-3,000-year-old belief system – this is a new one, dependent on a media-promoted liturgy of extrapolations based on averages and adjustments that second-guess ‘trends’ from data received from a fraction of our planet’s surface. Can we call this science?

So what is a ‘normal’ or ‘average’ air temperature for today or tomorrow where you live? Where does the ‘average temperature’ baseline start, given that we didn’t develop atmospheric thermometers until the mid-to-late 1700s? This was towards the end of several hundred years of bleak coldness now known as the Little Ice Age – the era of the Inquisition, the Plague and mass starvation from crop failure.

Do we set the ‘normal’ calibration point there, using what were clumsy early thermometers, or do we do as climate science likes to do and start at the 1880s, when we were still seesawing out of all that coldness? We know that our planet lurches between ice ages and warmings (aka interglacial periods), and we are lucky to be in a warming, as we naked apes are sun worshippers.

This orchestrated panic about the climate isn’t new: not so long ago, in the 1970s, there were predictions of a new ice age coming. The truth is, when a weather presenter talks about “two degrees above/below average”, the statement has no validity beyond fifty years or so. It means little because the climate is constantly changing, incrementally, over millennia.

Now we read scientific press releases full of qualifying words and caveats on the raw (empirical) data. Be alert to ‘mean’, ‘average’, ‘modelling’, ‘homogenised’, ‘likely’, ‘adjusted’, ‘trend’ and ‘prediction’ in anything to do with air temperature, as these translate to ‘hogwash’. So let’s construct a sentence without any reference to CO₂ or warming: ‘Scientists at (university/research body) predict the likelihood of increasing (something) based on recent modelling of software-adjusted data that compared the recorded 150-year average with the current mean of (whatever), indicating a trend over the next (10, 100, 1,000 years) somewhere, somehow, that will ultimately end mankind.’ Sound familiar? Time to panic?

The problem is that these press releases are swamping the internet, drowning out valid dissenters and bullying science journalists who have forgotten that their job is to investigate such claims and weigh them against counter-claims.

Next in this convoluted tale of woe about climate warming is the methodology and rigour used to a) take the global atmospheric temperature and b) collect and correlate it into a ‘grid’ and graphs. Human thermometers vary, and so do those measuring the temperature of the air. Some are semi-wireless, some completely wireless; some use cables, others radio waves. Some have their readings logged electronically, others by computers or even humans. And here’s the thing: each thermometer is accurate to itself but not to another, so two different thermometers measuring the same temperature at the same place may each vary either way by 0.5°C, making for a possible 1.0°C difference between them. This is about the size of the predicted increase we are meant to be in fear of.
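To make that arithmetic concrete, here is a minimal sketch of the worst case, using a hypothetical true temperature of 20°C and the ±0.5°C per-instrument accuracy quoted above:

```python
# Two thermometers, each accurate to ±0.5 °C, reading the same true temperature.
# Worst case: one sits at the top of its tolerance band, the other at the bottom.
TOLERANCE_C = 0.5        # per-instrument accuracy quoted above
true_temp_c = 20.0       # hypothetical true air temperature

reading_a = true_temp_c + TOLERANCE_C   # instrument A at its upper limit
reading_b = true_temp_c - TOLERANCE_C   # instrument B at its lower limit

print(f"A reads {reading_a:.1f} °C, B reads {reading_b:.1f} °C")
print(f"Maximum disagreement: {reading_a - reading_b:.1f} °C")   # 1.0 °C
```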

There are other variables to consider, such as humidity (which affects the reading), the positioning of the station (there has been uproar about some being on top of hot roofs, beside air-conditioning ducts, near hot machinery, using ships rather than buoys, and questionable geographical siting), wind chill and so on – all of which are potential distortions of the raw database of global temperatures taken at various scattered sites across our planet.

All this raw data is then gathered by various meteorological bodies, such as Australia’s Bureau of Meteorology, or directly by research bodies like NOAA, and is subjected to various collation techniques. For example, our BOM ‘homogenises’ its data; others take anomalous readings (those at the extremes), modify, average and adjust them, and feed them into modelling software to create sawtooth graphs in search of a median line that may indicate a ‘trend’; others deliver raw data.
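As a rough illustration of the kind of processing being described (not BOM’s or NOAA’s actual pipeline), here is a minimal sketch with made-up readings: raw values are turned into anomalies against a baseline mean, then a straight line is fitted through the sawtooth to extract a ‘trend’:

```python
# Minimal sketch of anomaly-plus-trend processing (made-up numbers throughout).
# Raw annual mean temperatures for one hypothetical station:
raw = [14.1, 13.8, 14.4, 14.0, 14.6, 14.3, 14.9, 14.5, 15.0, 14.7]

baseline = sum(raw[:5]) / 5            # the 'normal': mean of the first five years
anomalies = [t - baseline for t in raw]

# Ordinary least-squares slope: the 'trend' the sawtooth graph is searched for.
n = len(anomalies)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(anomalies) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, anomalies))
         / sum((x - x_mean) ** 2 for x in xs))

print(f"Baseline: {baseline:.2f} °C, trend: {slope:+.3f} °C per year")
```

Everything downstream of the raw list – the choice of baseline years, the anomaly conversion, the line-fitting – is a decision made by whoever runs the software, which is the point being made here.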

Here is a recounting by David Lister of CRU of how the data he gathers from different sources is put to bed in ‘grids’: “sometimes it’s just literally a few monthly values, sometimes they’ve revamped it because they run stuff through algorithms, through homogenisation software, etc. and so suddenly they might have had a big rework and so the series could be significantly changed”; “one reading might come off the instrument (but) the next reading through their Quality Control Software determines that this value is wrong”; “or there may be two or three different readings from the same instrument for the day… and there are flags, obviously, so you have to take the flag to know which of these values to use”. So who, or what, adds the ‘flags’?
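Nobody outside CRU can say exactly, but a minimal sketch of what flag-based selection might look like is below; the flag codes and priority rules are invented for illustration and are not CRU’s actual scheme:

```python
# Hypothetical flag-based selection: several values arrive for the same station
# and day, each tagged by quality-control software; the flag decides which to keep.
# These flag codes and their ranking are invented for illustration.
FLAG_PRIORITY = {"ok": 0, "estimated": 1, "suspect": 2, "failed_qc": 3}

def pick_value(readings):
    """From (value, flag) pairs, keep the best-flagged reading; drop QC failures."""
    usable = [r for r in readings if r[1] != "failed_qc"]
    if not usable:
        return None  # nothing passed quality control for this day
    return min(usable, key=lambda r: FLAG_PRIORITY[r[1]])[0]

day = [(21.4, "suspect"), (21.6, "ok"), (19.9, "failed_qc")]
print(pick_value(day))  # 21.6 – the value whose flag ranks best
```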

There have been two major exposures of obfuscation, research manipulation, flawed methodology and compromised software behind some of the most trusted climate research bodies we have: one in 2009 involving CRU, now known as Climategate, and one more recent (as of February 5 this year) concerning a rushed 2015 publication from NOAA that used poor methodology in places and suffered a software failure that made its findings unreplicable. That paper was used as documentation supporting the continued-warming hypothesis that underwrote the Paris Agreement and the $1 trillion now pledged – on evidence so inexact and so unempirical, with myriad opportunities for slips between cup and lip, that it is hard to think of it as a science. Well, not yet.

I’m with Trump. It is time to take our own pause on Climate Change.
