What’s (not) happening with Algae Biofuels

Next-generation algae biofuel is a fuel derived from growing synthetic (genetically modified) algae and processing the biomass to extract oils that can substitute for conventional petroleum. It is (was?) envisioned principally as a fuel for vehicles and aircraft, and therefore as a possible replacement for gasoline and kerosene.

Before we go any further, any discussion of the viability of algae biofuels needs to be framed around these questions:

  • Can biofuels from algae compete on price with fossil-derived petroleum?
  • Is it carbon neutral, emitting only the CO2 that it first absorbs during growth?
  • Can it scale?
  • Does algae biofuel yield substantial energy relative to the energy inputs involved in its production? (Energy Return on Energy Invested)

Basically, you’re asking yourself: is it better than what’s already out there?
The answer is: Nope, at least for now.

The Value Proposition of Algae Biofuels

Potentially the most promising of biofuel technologies, algae set themselves apart from all other biofuel feedstocks for the following reasons:

  • Algae do not compete for farmland and water: Algae have been shown to thrive in polluted or salt water, deserts, and other inhospitable places, bypassing the age-old (and legitimate) problem that has plagued the development of conventional biofuels.
    Optimum land for growing biofuel sustainably
    Source: ATAG

    Circle sizes are estimates of potential locations for new-generation biofuel feedstock production.

  • They do not have an impact on food prices: Conventional biofuel feedstocks like corn and sugar are also food, and farmers can get better prices for their corn and sugar by selling them to biofuel refiners, which leads to volatile food prices. As Nobel Prize winner Hartmut Michel explains, when you have “energy plants” competing with food plants, we are all worse off.
  • They feed off CO2: According to Jansson, Wullschleger, Kalluri, & Tuskan, human activities are responsible for an annual emission of 9 gigatons of carbon (33 gigatons of CO2). Terrestrial and oceanic systems manage to absorb about 3 and 2 gigatons respectively, leaving the remaining 4 gigatons in the atmosphere. Algae’s appetite for CO2 makes them ideal for capturing carbon from concentrated sources like power plants.
  • Fast oil production rate: One of the biggest advantages of algae for oil production is the speed at which they can grow. Some studies estimated that algae produce up to 15 times more oil per square kilometer than other biofuel crops, and others pointed to strains that produce biomass very rapidly, with some species doubling in as few as 6 hours and many exhibiting two doublings per day (see the sketch after this list).
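To get a feel for what those doubling times imply, here is a minimal sketch of the compounding arithmetic. The 6-hour doubling time comes from the figures above; the starting mass, the 24-hour window, and the 30-day crop comparison are illustrative assumptions:

```python
def biomass_after(hours: float, doubling_time_h: float, initial_kg: float = 1.0) -> float:
    """Biomass after `hours` of unchecked exponential growth."""
    return initial_kg * 2 ** (hours / doubling_time_h)

# A strain doubling every 6 h grows 16-fold in a single day...
print(biomass_after(24, doubling_time_h=6))        # 16.0
# ...while a hypothetical crop "doubling" over ~30 days barely moves in that time.
print(biomass_after(24, doubling_time_h=30 * 24))  # ~1.02
```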

So the promise of algae oil is tantalizing: it sounds like a silver bullet.

In a nutshell: scientists were meant to identify a strain of algae that could be genetically modified to produce lipids (oils) very quickly while feeding on carbon dioxide from the atmosphere. The lipids would be harvested and converted into usable oil, drawing down carbon along the way. And all this was meant to be economical and scalable.

  🔥 From 2003-2012: The Hype Was On 🔥

Turning pond scum into a petroleum-like fuel is both laborious and expensive, yet the end goal was very alluring. As Eric Wesoff ironically puts it, “dozens of companies managed to extract hundreds of millions in cash from VCs in hopes of ultimately extracting fuel oil from algae”.

Researchers and algae oil companies were making huge claims about the promise of algae-based biofuels; the U.S. Department of Energy caught on early and was also making big bets through its Bioenergy Technologies Office; industry advocates claimed that commercial algae fuels were scalable in the near term, and investors jumped the gun.

In 2006, there was a meager handful of specialized companies devoted to commercializing algae biofuel. By 2008 there were over 200, most of which had been active for less than a year. While most of these were backed by angel investors or venture capital, some bigwigs also took a stab at developing algae biofuels, such as Shell, Johnson Matthey, General Atomics, Boeing, Honeywell, DuPont, BP, and others.

However, in spite of optimistic investors and bold promises, a few hard truths began to emerge. It became clear that whilst the technology was indeed “promising”, scaling it at an economic price, in a time frame relevant to our needs, was not within reach anytime soon.

Cracks began to show in investment patterns. Exxon Mobil invested $600 million in a joint venture with Craig Venter’s Synthetic Genomics for research into algal fuels, then quickly scaled it back to $300 million and later to $100 million. Another star player, Sapphire Energy, an algae biofuel start-up that raised over $100 million in venture capital (including from Bill Gates’ investment firm Cascade Investment), has pivoted away from algae fuel and now produces omega-3 oils and animal feed. And GreenFuel, the famous startup that grew out of Harvard and MIT research, went bust after blowing through $70 million.

In the meantime, the surviving algae oil companies have shifted their core business and “branched out” into more economically sustainable co-products, like supplements, algae cosmetic oils, pigments, animal feed, and products for the pharmaceutical and chemical industries. Here is a list of algae oil companies that have been forced to move away from algae oil.

 So what went wrong?

I think that many stakeholders were blinded by the great potential of the technology. To date, no company has been able to grow algae at a large enough scale to produce meaningful quantities of fuel affordably. While there are many hurdles, I identified these two main reasons:

  • The basic science behind its cultivation:
    • The first part of the algae’s life cycle turned out to be burdened by obstacles. As companies developing these algae attempted to scale up, problems emerged which they could not have anticipated: the emergence of competitor algae; predators such as microscopic animals that ate the algae; algae diseases in the form of bacteria and fungi; and temperature fluctuations, which could kill the algae.
    • Whilst algae can grow quickly, they can do so only in the presence of sufficient nutrients. Algae can obtain carbon, their primary nutrient, from atmospheric carbon dioxide, but the amount present in ambient air (roughly 0.04%) is far too low to promote the rapid growth observed under lab conditions; that requires feed streams with more than 10% CO2 concentration. In fact, some of the earliest attempts to grow algae as a fuel source were predicated upon the development of pervasive industrial carbon dioxide capture.

  • Energy Return on Energy Invested:
    Researchers who looked at life-cycle analyses and the EROI/EROEI found that algae biofuels would not have a positive energy balance; in other words, you’d have to put more energy into cultivating, harvesting, refining, and transporting the algae biofuel than you would get out of it once it’s burned.

    EROI is a straightforward concept to get your head around: it is defined as the energy contained in one unit of fuel divided by the total nonrenewable energy required to produce one unit of fuel. It’s a way of getting a handle on how energy-efficient your energy production is.

    The breakeven point is 1. When the EROI is 1, the fuel returns exactly as much energy as was invested in producing it, so there is no net gain. When the EROI ratio is higher, it signals that the energy from that source is easy to get and cheap. Conversely, when the number is small, the energy from that source is difficult to get and expensive (a toy sketch of the arithmetic follows below).

    Now, I will be the first to admit that there is no consensus on the methodology used to calculate either life-cycle analyses or EROI, which makes any such calculation somewhat abstract. In part, it depends on what one counts as an “input”, and neither energy companies nor biofuel producers report detailed information on their energy consumption, so researchers have to generate assumptions in order to calculate them. To estimate the energy input, researchers often work from the dollars spent on various processes and goods, which means that two reports calculating EROI will likely yield different results because of the different variables used.

    In spite of this ambiguity, the National Academy of Sciences (NAS 2012, Chapter 8) concludes: “An energy return on investment (EROI) of less than 1 is definitely unsustainable. An algal biofuel production system would have to have or at least show progress toward EROI within the range of EROI required for the sustainable production of any fuel (Pimentel and Patzek, 2005). Algal biofuels would have to return more energy in use than was required in their production to be a sustainable source of transportation. Microalgal fuels use high-value energy inputs such as electricity and natural gas. If these high-quality energy sources are downgraded in the production of algal fuels, it is certainly a sustainability concern that can only be truly understood through careful life-cycle analysis. EROI of 1, the breakeven point, is insufficient to be considered sustainable. However, the exact threshold for sustainability is not well defined. Hall (2011) proposed that EROI greater than 3 is needed for any fuels to be considered a sustainable source. EROI can be estimated with an LCA that tracks energy and material flow”.

    Here is a look at the available literature on Algae EROI:

    Source: Quantitative Uncertainty Analysis of Life Cycle Assessment for Algal Biofuel Production, 2012
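To make the EROI arithmetic concrete, here is a toy sketch in Python; every energy figure in it is a made-up placeholder, not a number from any of the studies charted above:

```python
def eroi(energy_out_mj: float, energy_in_mj: float) -> float:
    """Energy contained in the fuel divided by the nonrenewable energy
    required to produce it (cultivation, harvesting, refining, transport)."""
    return energy_out_mj / energy_in_mj

fuel_energy = 35.0  # MJ in one litre of algal fuel (illustrative assumption)
inputs_mj = {"cultivation": 18.0, "harvesting": 9.0, "refining": 10.0, "transport": 3.0}

print(f"EROI = {eroi(fuel_energy, sum(inputs_mj.values())):.2f}")
# EROI = 0.88 -> below the breakeven point of 1: a net energy loss
```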

The studies that gave algae biofuel a positive EROI depended on co-products (goods produced alongside the main product, of comparable value, sold into the pharmaceutical, animal-feed, and chemical industries) to tip the balance from a negative energy return to a positive one. But the Department of Energy pointed out that “if biofuel production is considered to be the primary goal, the generation of other co-products must be correspondingly low since their generation will inevitably compete for carbon, reductant, and energy from photosynthesis…and coal-fired power plant carbon dioxide”.

So far, nobody has been able to make fuel from algae for a cost anywhere close to cheap, let alone competitive. Companies are trying to overcome these problems, but we will not be seeing companies selling large amounts of algae biofuels anytime soon.

tl;dr -> Cool idea, but due to unreliable cultivation methods, large nutrient requirements (carbon, nitrogen, and phosphorus), low EROI, high capital costs, and competition from sub-$50-per-barrel petroleum, the technology isn’t close to being ready.

Reading up on EROI was very interesting, so you might be keen on it too. Check out the following sources:

Peakoil.com (2014). EROEI as a Measure of Biofuel Effectiveness.

EPA.gov (2010). Renewable Fuel Standard Program Regulatory Impact Analysis. Office of Transportation and Air Quality, U.S. Environmental Protection Agency.

Inman, M. (2014). Behind the Numbers on Energy Return on Investment.

Energy Transition: From Oil 🔜 Wind

It is often touted as a truism that, in the effort to migrate our energy production from fossil fuels to renewables, we will have to use natural gas (controversially argued to be the *least* polluting fossil fuel, ahead of coal and oil) as a bridge energy source while we develop the green infrastructure necessary to satisfy our consumption.

Be that as it may, I see a trend running in parallel to this. While cheap and abundant gas is readily available, oil companies are starting to challenge the biggest wind developers in the race to build offshore wind turbines in the North Sea.

North Sea: Wind is Eating into the Energy Market Share

Shell, Statoil, and Eni are three giants moving into multi-billion-dollar offshore wind farms in the North Sea. They’re starting to score victories against leading power suppliers, including Dong Energy (I wrote about them branching out of conventional oil exploration into offshore wind here) and Vattenfall, in competitive auctions for power purchase agreements (PPAs).

The idea seems to be to leverage the know-how gained from offshore oil into offshore wind. Irene Rummelhoff, EVP for New Energy Solutions at Statoil (see here), said she was convinced global warming was a very serious problem and her company wanted to help find a solution. “We strongly believe oil and gas will still be needed in future but we also know we have to do things differently and are working to reduce the carbon footprint of these operations,” she said.

“It makes sense to utilize our project-management skills from oil and gas to offshore wind which is why we are operating Sheringham Shoals and Dudgeon Sands off the UK. We are also looking at more carbon capture schemes and at solar worldwide.”

Even Exxon Mobil, in spite of its conservative reputation, has recently unveiled plans to investigate CCS more fully in a new partnership with a fuel cell company. It has also pledged millions of dollars to developing photosynthetic algae for transportation fuels.

Italian oil and gas major Eni has signed an agreement with General Electric (GE) to develop renewable energy projects and hybrid solutions with a focus on energy efficiency. Their objective is to jointly identify and develop large-scale power generation projects from renewable energy sources, covering innovative technologies such as onshore and offshore wind generation, solar power, hybrid gas-renewable projects, electrification of new and existing assets, waste-to-energy projects, the ‘green’ conversion of mature or decommissioned industrial assets, and the deployment of technologies developed by Eni’s R&D department.

Luca Cosentino, VP of energy solutions, had this to say, “It is certainly an area of interest for us because there are obvious synergies with the traditional oil and gas business…As the oil and gas industry we know, we cannot get stuck where we are and wait for someone else to take this leap.”

This shift in business can be attributed to many factors. Firstly, large oil companies have spent millions in R&D on building offshore oil projects, and now that that business is on its way out in areas where older fields have drained, it makes sense to shift from offshore oil to offshore wind. Returns from wind farms are also predictable: producers enter into long-term PPAs which reduce risk by pinning down government-regulated electricity prices. Oil prices, by contrast, are volatile; as the dramatic fall in the oil price from 2014 powerfully showed, the value of oil and gas assets is variable and uncertain.

Secondly, even as North Sea oil production has declined over the last 15 years, economic activity has been buoyed by offshore wind. The notorious North Sea winds which once threatened oil platforms have become a godsend for the new workers who install and maintain the turbines planted in the northern seabed. According to Bloomberg New Energy Finance, about $99 billion will have been invested in North Sea wind projects between 2000 and 2017. A decade ago, the industry had projects only a fraction of that size.

Circling the drain: The terminal decline of North Sea Oil

There is an evident trend going on in the North Sea, in spite of a slight resurgence we’ve seen in the last year.

North Sea oil production (data from EIA, via Investopedia)

Energy consultancy Wood Mackenzie said oil companies were likely to stop output at 140 offshore UK fields during the next five years, even if crude rebounded from $35 to $85 a barrel. According to the Financial Times, this compares with just 38 new fields that are expected to be brought on stream during the same period. Industry execs believe this will be good news for the decommissioning industry, still in its nascent phase. Shell is preparing to take apart the first of four platforms in its Brent field, while Riverstone-owned Fairfield is to abandon Dunlin. As the oil sector declines, service providers anticipate that decommissioning may help them plug the revenue gap left by diminishing exploration.

Oil Decommissioning Frenzy?

Thirty years ago, North Sea production helped shape the UK’s energy landscape in a way similar to what the shale boom has done for the United States. In the 1980s, offshore production propelled the UK to become a net exporter, first of crude oil and then of gas. But today it is a net importer of both, as North Sea fields have matured and their productivity has declined.

As assets reach the end of their useful lives, company resources will become increasingly drawn into the expensive and at times technically complex activities required to cease production, safely remove subsea and surface infrastructure, and ensure that wells are permanently and safely abandoned.

According to a 2017 KPMG report,

The decommissioning era has now dawned in mature oil and gas provinces such as the North Sea – worsening economics, deteriorating infrastructure, technical limits on further recovery and regulatory pressure will make change inevitable. Industry forecasts suggest an unprecedented scale and pace of decommissioning activity in the years ahead.

The North Sea strategy seemed to be to delay the decommissioning of many offshore platforms, preferring to continue squeezing out increasingly small amounts of oil and gas rather than incurring the massive costs of decommissioning and bringing that equipment back onshore.

But those decommissioning delays mean only that oil companies have been kicking the can down the road, setting up a more dramatic decline in North Sea production that will come even if prices increase.

This makes the shift into offshore wind all the more interesting.

The Energy Infrastructure That the U.S. Really Needs

A power grid is what transmits electricity from where it is made to our homes because electricity cannot be stored (efficiently…yet).

There are thousands of power plants that generate electricity using solar, wind, gas or coal. These generating stations produce electricity at a certain electrical voltage. Conventionally, this voltage is then “stepped up” (increased) to very high voltages to increase the efficiency of power transmission over long distances and minimize energy losses. Once this electricity gets near your town, the voltage is “stepped down” (decreased) in a utility substation to a lower voltage for distribution around town. As this electrical power gets closer to your home, it is stepped down by another transformer to the voltage you use in your home, and it enters your home through your electrical meter. All of this is very good, but given the evolution of energy production, the grid needs to modernize to meet consumer preferences and environmental requirements.
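To see why stepping the voltage up and down is worth the trouble, here is a back-of-the-envelope sketch; the delivered power and line resistance are illustrative assumptions, not real grid data:

```python
def line_loss_watts(power_w: float, volts: float, resistance_ohm: float) -> float:
    """Resistive loss on a line delivering `power_w` at `volts`.
    The current is I = P / V, and the loss is I^2 * R."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

P, R = 10e6, 5.0  # deliver 10 MW over a line with 5 ohms of resistance (assumed)
for v in (11_000, 110_000, 400_000):
    loss = line_loss_watts(P, v, R)
    print(f"{v / 1000:>5.0f} kV -> {loss / 1000:10,.1f} kW lost ({loss / P:.2%})")
# Raising the voltage 10x cuts resistive losses 100x, which is why grids
# step up for long-distance transmission and step down near the consumer.
```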

Enter the smart grid. The core premise of a smart grid is to add monitoring, analysis, control, and communication capabilities to the grid to maximize the throughput of the system while reducing energy consumption. A smart grid entails technology applications that allow easier integration and higher penetration of renewable energy, facilitating homeowners and businesses who wish to put their privately produced energy onto the grid. It will be essential for accelerating the development and widespread usage of plug-in hybrid electric vehicles (PHEVs) and their potential use as storage for the grid. Smart grids will allow utilities to move electricity around the system as efficiently and economically as possible.

Essential to the efficient use of smart grids are smart meters. Smart meters help utilities balance demand, reduce expensive peak power use, and provide better prices for consumers by allowing them to see and respond to real-time pricing information through in-home displays and smart thermostats. For example, you may want to run your dryer at 5 cents per kilowatt-hour at 22:00 instead of 17 cents per kilowatt-hour at 18:00, when demand (and price) is highest. Consumers will have the choice and flexibility to manage their electricity use while minimizing costs.
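Here is a quick sketch of that dryer example; the two tariffs are the ones quoted above, while the energy used per drying cycle is an assumed figure:

```python
DRYER_KWH = 3.0  # assumed energy for one drying cycle
TARIFF = {"18:00 (peak)": 0.17, "22:00 (off-peak)": 0.05}  # $/kWh, from the example

for slot, price in TARIFF.items():
    print(f"Run at {slot}: ${DRYER_KWH * price:.2f}")
# Run at 18:00 (peak): $0.51
# Run at 22:00 (off-peak): $0.15 -> the smart meter makes that choice visible.
```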

The need for a smart grid is increasingly recognized by US policymakers at all levels of government, as ways to improve the energy efficiency of producing and using electricity in our homes, businesses, and public institutions become an entrenched imperative. Many believe that a smart grid is a critical foundation for reducing greenhouse gas emissions and transitioning to a low-carbon economy. Certainly, PHEVs and renewable energy have been of great interest to Congress.

In light of this brief introduction, I came across Ethan Zindler’s prepared testimony before the Senate Committee on Energy and Natural Resources. Here is the meat of what he had to say:

Before I get to my main points, a quick note about “infrastructure”. In the current climate, this term has become a Rorschach test of sorts representing different things to different constituent groups. In the case of energy, infrastructure can encompass a broad scope, including, among other things, building power-generating facilities, expanding oil and gas distribution pipelines, or hardening local power grids.

Those topics are worthy of discussion and I know my fellow panelists will shed light on them. However, my testimony today will focus on the next generation of energy technologies and the infrastructure that will be critical to accommodate them.

The U.S. is transforming how it generates, delivers, and consumes energy. These changes are fundamentally empowering business and home owners, presenting them with expanded choices and control. Consumers today can, for instance, analyze and adjust their heating, air-conditioning, and electricity use over their smart phones thanks to smart meters and smart thermostats. And they can make efficiency improvements through advanced heating and cooling systems and innovative building materials and techniques.

Consumers in much of the country can choose their electricity supplier and may opt for “green choice” plans. They can produce power themselves with rooftop solar photovoltaic systems. They can even store it locally with new batteries.

Consumers can choose to drive vehicles propelled by internal combustion engines, electric motors, or some combination of both (hybrids). That car can be powered by gasoline, diesel, electricity, ethanol, or perhaps even methanol, natural gas, or hydrogen. And electric vehicle drivers who own homes can turn their garages into fueling stations simply by using the outlet on the wall.

Now, realistically speaking, few Americans today have the inclination or income to become high-tech energy geeks. But that is changing as prices associated with these technologies plummet. In the case of electric vehicles (EVs), such cars can be appealing simply because they perform better.

We at BNEF believe that further growth and eventual mass adoption of these technologies is not just possible, not just probable, but inevitable given rapidly declining costs.

For instance, the price of a photovoltaic module has fallen by 90 percent since 2008, to approximately $0.40 per watt today. For millions of U.S. businesses and homeowners, “going solar” is already an economic decision. Last year the U.S. installed far more solar generating capacity than it did any other technology.

By the end of the next decade, cost competitiveness for distributed solar will arrive in most places in the US, without the benefit of subsidies. We expect the current installed base of US solar to grow from approximately 3.6 percent of capacity to 13 percent by 2030, then to 27 percent by 2040.

Similarly, the value of contracts signed to procure U.S. wind power has dropped by approximately half as the industry has deployed larger, more productive turbines. We expect current wind capacity to at least double by 2030.

Many of these new energy technologies are, of course, variable (no wind, no wind power; no sun, no solar power). Thus the growth in these and other new energy technologies will be accompanied by unprecedented sales of new batteries of various shapes and sizes.

Utilities such as Southern California Edison Co and others have already begun piloting large-scale batteries in certain markets, while providers such as Stem Inc and Tesla Inc offer “behind-the-meter” storage for businesses and homeowners.

In the past five years, lithium-battery prices have fallen by at least 57 percent and we expect a further 60 percent drop by 2025. That will contribute to 9.5GWh/5.7GW of battery capacity in the U.S. by 2024, up from 1.7GWh/0.9GW today.

Continuing battery price declines will also make electric vehicles (EVs) for the first time a viable option for middle-class US consumers without the benefit of subsidies. Last year, EVs represented 0.8 percent of global vehicle sales. By 2030, we anticipate that growing to one in four vehicles sold.

The most popular place to fuel such cars could be augmented gasoline stations… or the local grocery store, or simply your garage.

The changes we’ve seen to date are giving U.S. energy consumers unprecedented opportunities to manage, store, distribute, and even generate energy. However, the new, empowered consumer poses inherent challenges to the traditional command-and-control / hub-and-spoke models of conventional power generation and power markets. Already, we have seen examples around the globe where incumbent utilities were caught flat-footed by rapid clean energy build-outs.

In some cases, it has been heavy subsidies for renewables that have catalyzed the change. But more recently, simple low costs are allowing wind and solar to elbow their way onto the grid.

So, where does “infrastructure” fit into this changing energy landscape?

First, conceptually, we must accept that the empowered consumer is here to stay. To some degree, this acceptance is already underway in the private sector where companies that once focused mainly on large-scale power generation are merging with consumer-facing utilities, or buying smaller solar installers and battery system providers.

Second, policy-makers should seek to promote infrastructure that accommodates a new, more varied, more distributed world of energy generation and consumption. Most immediately, this can mean supporting greater deployment of so-called smart meters. To date, the U.S. has installed almost 71 million of these devices, which enable better communication between energy consumers and utilities. Compare that to Italy where all consumers have such meters and are now receiving a second generation with more advanced functionality, or China which has installed 447 million units, across almost its entire urban population.

Policy-makers may also seek to facilitate the development of high-voltage transmission across state lines. It has long been an adage that the Great Plains states represent the “Saudi Arabia of wind”, given the exceptional resources there. To some degree, those states might as well be in Saudi Arabia, given the major challenges of building transmission that would move electrons generated there to more densely populated states in the east or west. The US has added approximately 1.5GW of high-voltage direct current transmission since 2010. By comparison, China has added 80GW over that time.

Investment is needed at lower voltages too. Our passive, one-directional, electricity distribution system is under strain as new distributed generation capacity comes online. In addition, policy-makers might also consider ways to expand support for EV charging stations. As sales of such cars grow, consumers are already putting greater pressure on certain distribution nodes around the country. Ensuring that EV “fuel” demand is managed in an orderly manner will be important.

Finally, the changes afoot and to come will require what might best be described as infrastructure “software”. Most importantly and pressingly, this must include the reform of electricity markets to take into account the new realities of 21st Century power supply and demand.

It may also include expanded programs to educate energy professionals on the new realities of modern energy markets. And, yes, it could include more software to improve energy monitoring and optimize system performance.

In closing, I would reiterate that none of this need be done at the exclusion of investing in traditional energy infrastructure where the needs are also pressing. However, any rational discussion about energy infrastructure investment today must do more than take into account the current situation. It must also consider where we will be tomorrow.

How to Handle a Climate Change Denier

Preamble: I would like to point out that I truly prefer not to engage in these types of discussions (read: I’m over it), because the sources of information that are available to me are available to everyone else. I also do not consider it my duty to educate every Tom, Dick, and Harry on climate change. However, in light of recent developments, we will probably be encountering a more energized brand of deniers, so here is a non-exhaustive list of answers I took from Robert Henson’s Rough Guide to Climate Change.

Since the days of Roger Revelle, the pioneering oceanographer whose body of work was instrumental to our understanding of the role of greenhouse gas emissions in our atmosphere, deniers have developed certain criticisms that are still popular today. I believe that these arguments will keep cropping up for as long as there is a “debate” on climate change, so it’s best that we equip ourselves with appropriate answers.

Taken to the extreme, anti-climate-change arguments can be summed up in the following quote:

“The atmosphere isn’t warming; and if it is, then it’s due to natural variation; and even if it’s not due to natural variation, then the amount of warming is insignificant; and if it becomes significant, then the benefits will outweigh the problems; and even if they don’t, technology will come to the rescue; and even if it doesn’t, we shouldn’t wreck the economy to fix the problem when many parts of the science are uncertain.”

Toles 2006, Washington Post

“But the atmosphere isn’t warming…”

According to an ongoing temperature analysis conducted by scientists at NASA’s Goddard Institute for Space Studies (GISS), the average global temperature on Earth has increased by a mean of about 0.8° Celsius (1.4° Fahrenheit) since 1880. Two-thirds of the warming has occurred since 1975, at a rate of roughly 0.15-0.20°C per decade.

This argument has seemingly been put to rest, yet deniers resist it, possibly because they do not think that a global mean warming of 0.8°C is a big deal. Here is a more vivid statistical illustration of what that means:

Dr. Arun Majumdar’s presentation, Michigan State University

This is a bell curve mapping the distribution of temperature anomalies over 60 years. To the left are temperatures colder than average, and to the right are temperatures hotter than average. The mean is shifting and the distribution is broadening rightwards. The right tail of the distribution now reaches 4 and 5 sigma, probabilities that were unheard of decades ago. The anomalies occurring at 4 and 5 sigma are (were) rare massive heatwaves, storms, and floods, which are becoming more common than ever.
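To see why 4- and 5-sigma anomalies are such a big deal, here is a minimal sketch of normal-distribution tail probabilities; the closing one-sigma shift is an illustrative assumption, not a figure from the presentation:

```python
from math import erf, sqrt

def tail_probability(sigma: float) -> float:
    """P(anomaly > sigma) for a standard normal distribution."""
    return 0.5 * (1 - erf(sigma / sqrt(2)))

for s in (3, 4, 5):
    print(f">{s} sigma: about 1 in {1 / tail_probability(s):,.0f}")
# >3 sigma: about 1 in 741
# >4 sigma: about 1 in 31,574
# >5 sigma: about 1 in 3.5 million
# If the whole distribution shifts right by one sigma (assumed here), yesterday's
# 5-sigma heatwave becomes today's 4-sigma event, roughly 100x more frequent.
```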

“Okay, but I still went skiing this winter…”

The weather and the climate are two different things; the difference between them is a measure of time. Weather is the condition of the atmosphere over a short period of time, while climate is how the atmosphere “behaves” over relatively long periods. We talk about climate change in terms of years, decades, and centuries. Weather is forecast 5 or 10 days ahead, but climate is studied across long periods of time to look for trends or cycles of variability, such as changes in wind patterns, ocean surface temperatures, and precipitation. Snow in skiing locations isn’t proof that climate change is not happening.

“The warming is due to natural variation…”

This is a very common argument: the denier does not dispute that the climate is changing, generously admitting the climate has *always* changed, but they do not believe that humans are responsible for it.

The IPCC has concluded that the warming of the last century, especially from the 1970s, falls outside the bounds of natural variability.

Variation of CO2 in the atmosphere, from 800,000 BC to today. Source: NOAA NCDC

Let’s walk down memory lane and look at what the IPCC has been saying to us for 26 years. And keep in mind that the IPCC reports are the most comprehensive, global, and peer-reviewed studies on climate change ever written by anyone, bringing together the work of over 800 scientists, more than 450 lead authors from more than 130 countries, and more than 2,500 expert reviewers. In short, the IPCC reports are humanity’s best attempt to date at getting the science right.

Over the last 800,000 years, Earth’s climate has been cooler than today on average, with a natural cycle between ice ages and warmer interglacial periods. Over the last 10,000 years (since the end of the last ice age) we have lived in a relatively warm period with stable CO2 concentration. Humanity has flourished during this period. Some regional changes have occurred – long-term droughts have taken place in Africa and North America, and the Asian monsoon has changed frequency and intensity – but these have not been part of a consistent global pattern.

The rate of CO2 accumulation due to our emissions in the last 200 years looks very unusual in this context (see chart above). Atmospheric concentrations are now well outside the 800,000-year natural cycle and temperatures would be expected to rise as a result.

Moreover, in 1995, the IPCC’s second assessment report included a sentence that hit the headlines worldwide:

“The balance of evidence suggests a discernible human influence on global climate”

By 2001, the IPCC’s third report was even clearer:

“There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.”

By 2007, in its fourth report, the IPCC spoke more strongly still:

“Human-induced warming of the climate system is widespread.”

In 2013, in the 5th Assessment Report, they stated,

“It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010”

Human activity has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years.

There is, therefore, a clear distinction to be made between what is “natural variability” and what is our contribution.

“The amount of warming is insignificant…”

The European Geosciences Union published a study in April 2016 that examined the impact of a 1.5°C vs. a 2.0°C (bear in mind we are at 0.8°C now, without the slightest chance of slowing down) temperature increase by the end of the century. It found that the jump from 1.5°C to 2°C, a third more of an increase, raises the impact by about that same fraction for most of the natural phenomena the study covered: heat waves would last around a third longer, rain storms would be about a third more intense, the rise in sea level would be that much higher, and the percentage of tropical coral reefs at risk of severe degradation would be roughly that much greater.

But in other cases, that extra increase in temperature makes things far more dire. At 1.5°C, the study found that tropical coral reefs stand a chance of adapting and reversing a portion of their die-off in the last half of the century. But at 2°C, the chance of recovery disappears: tropical corals are virtually wiped out by the end of the century.

With a 1.5°C rise in temperature, the Mediterranean area is forecast to have about 9% less fresh water available. At 2°C, that water deficit nearly doubles. So does the decrease in wheat and maize harvests in the tropics.

Bottom line: It may look small but it’s a huge deal.

“The benefits will outweigh the problems…”

When people talk of the alleged benefits of climate change, they are usually talking about agriculture. The argument is that increased concentrations of CO2 will give a boost to crop harvests, leading to larger yields.

This is laughable

Climate change will slow global yield growth because high temperatures result in shorter growing seasons. Shifting rainfall patterns can also reduce crop yields. Climate trends are already believed to be diminishing global yields of maize and wheat. These symptoms will only worsen as temperatures rise and extreme weather events become more common. If climate change is allowed to reach a point where the biophysical threshold is exceeded, as would be the case on current emission trajectories, then crop failure will become normal. Moreover, the severest risks are faced by countries with high existing poverty and dependence on agriculture for livelihoods. Even at “low” levels of warming, vulnerable areas will suffer serious impacts.

  • Sub-Saharan Africa, according to the World Economic Forum, would see about a 40% loss of maize cropping areas at 1.5°C of warming by 2030;
  • South East Asia, in a 2°C scenario, would experience unprecedented heat extremes across 60-70% of its area.

Agricultural productivity is at risk, not only in developing countries but also in breadbasket regions such as North and South America, the Black Sea and Australia.

Moreover, in October 2015, a study published in Nature estimated that the world could see a 23% drop in global economic output by 2100 due to a changing climate, compared to a world in which climate change is not taking place. A coauthor of the study had this to say:

“Historically, people have considered a 20% decline in global GDP to be a black swan: a low-probability catastrophe – Instead, we’re finding it’s more like the middle-of-the-road forecast.”

“Technology will come to the rescue…”

Deniers who make this case seemingly acknowledge climate change, yet they are optimistic believers in technology as the be-all and end-all, trusting that geo-engineering will save us from the clutches of global warming. There are two things I find problematic about this approach:

  1. I think this argument is akin to the “nuclear fusion is only 20 years away!” argument, which holds at any given point in time. It ignores the fact that we have not yet developed the appropriate technologies to “save” us from climate change, and that when we do, there will still be a maddening lag between innovation and deployment. Not to mention that we still have not identified which technologies can do the greatest good in the shortest time, so we cannot fly blind in a vague hope that tech will rescue us;
  2. Such an approach fights the “symptoms” of climate change, not the cause of it, meaning that it entrenches our extremely wasteful and inefficient ways that have brought on climate change in the first place.

None of this is to say that I do not believe technology will play a pivotal role in our transition. Of course it will! But we cannot afford to rely entirely on waiting for carbon capture and storage and the like to become a deployable and scalable economic reality.

“We shouldn’t wreck the economy to fix the problem when it’s still uncertain!”


When you really get down to it, people will just tell you their ultimate bottom line: if we don’t know with absolute confidence how much the planet will warm and what the local and regional impacts will be, perhaps we’d better not commit ourselves to costly reductions in greenhouse gas emissions.

I have written a post on the employment benefits tied to jobs in the renewable energy sector, and there is a plethora of studies pointing to the huge costs of climate change inaction. Amongst these, a new study by scholars from the LSE, published last year in Nature Climate Change, offers a daunting scenario.

They estimate that a business-as-usual emissions path would lead to expected warming of 2.5°C by 2100. Under that scenario, banks, pension funds, and investors could sacrifice up to $2.5 trillion in value of stocks, bonds, and other financial assets. The worst-case scenario, with a 1% chance of occurring, would put $24 trillion (about 17% of global financial assets) at risk. This is but one of the scenarios that have been studied that point to the huge costs of inaction.

Climate change can affect the economy in myriad ways, including the extent to which people can perform their jobs, how productive they are at work, and the effects of shifting temperatures and precipitation patterns on things like agricultural yields or manufacturing processes. These factors help determine our “economic output”: all the goods and services produced by an economy.

In spite of the fact that there is disagreement on exactly how much economies will be affected, we know the cost of inaction will be immense. With the information at our disposal, it would be foolish and dangerous to assume that reducing emissions will cost more than coping with a changing climate.

Good luck with your “debate” and let me know how it goes.