Australian extremes impossible without a climate change kick

It’s not news that Australia has endured blistering heat this summer. Starting in November 2018, when bushfires burned along coastal northern Queensland, conditions for large parts of Australia have been hot and dry for months.

With Canberra’s record run of days above 30°C, 35°C and 40°C, I’ve spent January in a state of heightened irritability (and sweat!).

Over the summer, I’ve read and heard lots of analysis and explanation about hot and dry air over the centre bringing hot conditions southward. But we often have these large-scale weather patterns in Australian summers, so why has 2018/2019 been so severe?

Back in 2013, we published the first quantitative attribution study on Australia’s record hot summer, finding that anthropogenic climate change made this record heat at least 5 times more likely.

Today, the Bureau of Meteorology released monthly climate data showing that January temperatures in many places were well above average. In NSW, mean temperatures were nearly 6°C above the 1961-1990 average.

If extreme heat is no longer new, then what is?

I’ve gone back and looked at the study we did in 2013 and seen what’s changed in the meantime. It turns out a lot.

By applying this global climate model-based event attribution framework (see details below or email for further info), we see rapid changes in the way climate change is impacting extremes in Australia.

The extremes in January in Victoria and NSW simply do not occur in climate model simulations without greenhouse gases. January temperatures in parts of Australia were not possible in the absence of anthropogenic global warming.

Natural climate variability always plays a role in extreme events, but this alone is not enough to generate the magnitude of the extremes we’ve experienced in Australia recently.

These results are echoed elsewhere. Attribution studies have found that large-scale temperature extremes including the 2016 global heat record, record heat across Asia and anomalously warm ocean waters off the coast of Alaska were likewise impossible without climate change.

Another remarkable feature of 2019 in Australia is the rate of warming. Back in 2013, we also looked at record January temperatures across Australia. We found that climate change made that event about 15 times more likely.

What about 2019? Well, this preliminary analysis suggests we are now at around a 50-fold increase in the likelihood of extreme January temperatures because of climate change. The precise values aren’t the key point here; it’s that the signal of climate change in our extreme weather and climate events is strengthening over time.

And what about 2020 and beyond? Under further global warming, we can expect more and more extreme temperature events like this: events of such high magnitude that they do not appear in climate model simulations without greenhouse gases.


Study details:

All-time record high mean temperatures were set in NSW and Victoria in 2019, which provide the case studies for this analysis. Temperature anomalies were calculated relative to a 1961-1990 climatology, and January monthly averages determined for NSW, Victoria and Australia.
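The anomaly calculation above can be sketched as follows. The temperatures here are synthetic and purely illustrative (the array shapes and values are my assumptions, not the study's actual observational inputs); only the climatology logic matters:

```python
import numpy as np

# Synthetic monthly-mean temperatures (°C), shaped (years, months);
# illustrative random data, not the observational record used in the study.
rng = np.random.default_rng(42)
years = np.arange(1910, 2020)
temps = 22.0 + rng.normal(0.0, 1.5, size=(years.size, 12))

# Climatology: the 1961-1990 mean for each calendar month.
base = (years >= 1961) & (years <= 1990)
climatology = temps[base].mean(axis=0)   # shape (12,)

# Anomalies relative to the 1961-1990 climatology.
anomalies = temps - climatology          # shape (110, 12)

# January monthly-mean anomaly series (column 0 is January).
january_anomaly = anomalies[:, 0]
```

By construction, the anomalies average to zero over the 1961-1990 base period, so a +6°C January anomaly reads directly as departure from that baseline.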

CMIP5 models were used for the analysis. For each available historical realisation, January temperature anomalies (relative to a 1961-1990 climatology) were calculated for land-based Australian gridboxes. Using the Perkins et al. (2007) skill score, the similarity of simulated and observed temperature distributions was assessed over the overlapping years 1951-2005. For models with high Perkins scores, we use simulations from the standard historical experiment (time-varying changes in well-mixed greenhouse gases, aerosols and ozone, with volcanic and solar forcings imposed), the historicalNat experiment (solar and volcanic forcings only), the piControl experiment, and the RCP8.5 experiment (the Representative Concentration Pathway with high emissions for the 21st century), for ensemble member r1i1p1.
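The Perkins score used for model selection measures the overlap between two empirical probability distributions. A minimal sketch of that score follows; the bin count is my assumption, as the study's exact binning isn't stated here:

```python
import numpy as np

def perkins_skill_score(obs, model, n_bins=50):
    """Perkins et al. (2007) skill score: the shared area under two
    empirical PDFs. 1.0 = identical distributions, 0.0 = no overlap."""
    lo = min(obs.min(), model.min())
    hi = max(obs.max(), model.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    f_obs, _ = np.histogram(obs, bins=edges)
    f_mod, _ = np.histogram(model, bins=edges)
    # Normalise counts to relative frequencies, then sum the bin-wise minima.
    return np.minimum(f_obs / f_obs.sum(), f_mod / f_mod.sum()).sum()
```

A score near 1 indicates the model reproduces the observed temperature distribution well and is retained; low-scoring models are excluded from the ensemble.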

Fraction of attributable risk (FAR) values quantify the fraction of the risk of a particular threshold being exceeded (i.e. an event) that can be attributed to a particular influence. FAR was calculated for each temporal and spatial focus using both the historicalNat (years 1951-2005) and piControl experiments as the reference state (probability P_NAT), RCP8.5 (years 2006-2035) as P_ALL, and the all-time observed record as the event threshold.

Probabilities P_ALL and P_NAT are calculated as the number of times the defined event threshold was exceeded in each relevant (RCP8.5/historicalNat) set of model years, relative to the total sample size.
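Putting the two probabilities together, the FAR and the associated risk ratio can be computed as below. The anomaly pools and threshold are invented numbers for illustration only, not the study's results:

```python
import numpy as np

def exceedance_probability(anoms, threshold):
    """Fraction of model years whose anomaly exceeds the event threshold."""
    anoms = np.asarray(anoms, dtype=float)
    return float((anoms > threshold).mean())

def fraction_of_attributable_risk(p_nat, p_all):
    """FAR = 1 - P_NAT / P_ALL."""
    return 1.0 - p_nat / p_all

# Illustrative anomaly pools (°C): 'nat' stands in for historicalNat/piControl
# years, 'all' for RCP8.5 2006-2035 years. Values are made up for the example.
nat_years = np.array([0.1, -0.4, 0.8, 1.2, -0.2, 0.5, 0.9, -1.1, 0.3, 0.6])
all_years = nat_years + 1.0          # a uniform 1 °C shift, for illustration
threshold = 1.0                      # hypothetical observed record anomaly

p_nat = exceedance_probability(nat_years, threshold)   # 0.1
p_all = exceedance_probability(all_years, threshold)   # 0.7
far = fraction_of_attributable_risk(p_nat, p_all)
risk_ratio = p_all / p_nat           # ~7-fold increase in likelihood
```

A FAR approaching 1 means essentially all of the event's risk is attributable to the forcing; when P_NAT is zero, the event simply does not occur in the natural-forcings-only world, which is the situation described above for January in Victoria and NSW.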

These assessments of changes in probability are model-based; different values might be obtained using different sets or types of models.

The details of this approach were developed with my colleague Sarah Perkins-Kirkpatrick whose ideas were essential for this analysis.

 

 
