See the Tabbed Pages for links to video tutorials, and a linked list of post titles grouped by topic.

This blog is expressly directed to readers who do not have strong training or backgrounds in science, with the intent of helping them grasp the underpinnings of this important issue.  In an ongoing series of posts I will develop various aspects of the science of global warming, its causes, and possible methods for slowing its advance and at least partially overcoming its detrimental effects.

Each post will begin with a capsule summary. It will then proceed with captioned sections to amplify and justify the statements and conclusions of the summary. I'll present images and tables where helpful to develop a point, since "a picture is worth a thousand words".

Wednesday, March 27, 2013

Choose a Carbon Fee, Not a Cap and Trade Regime

Summary  Burning fossil fuels generates carbon dioxide as a waste product whose socioeconomic costs to humanity are not accounted for in the price of the fuel.  Efforts to reduce our dependence on fossil fuels and to mitigate emissions of greenhouse gases have led to two ways of placing a value on carbon: a direct fee, or a cap and trade mechanism; in either case the added cost works to limit consumption.  A carbon fee is easy to implement legislatively or administratively, and has been effective in reducing demand for fossil fuels.  Cap and trade regimes are in place in many jurisdictions around the world.  They are administratively complex and bureaucratically onerous, and can fail to curb fossil fuel use.  This post expands on these factors, and concludes that lowering the use of fossil fuels is best accomplished by imposing a fee on carbon.

Human activity generates waste.  Significantly, as we burn more and more fossil fuels to produce the energy that powers modern life, we emit more and more carbon dioxide into the atmosphere.  This substance, an important greenhouse gas, is being released as the waste product of our energy economy.
It is imperative to treat manmade carbon dioxide as a cost-bearing waste product because of the harmful effects of the global warming that it produces.  These harms carry enormous costs with them.  Properly accounting for these costs would make it more acceptable to make the investments needed to reduce greenhouse gas emissions.
Policy directed toward reducing dependence on fossil fuels and mitigating greenhouse gas (GHG) emissions has long grappled with two alternative approaches: imposing a carbon fee on fossil fuels, or creating a cap and trade regime.  These may be viewed as policies that act, respectively, on the demand for, and the supply of, fossil fuels.  A carbon fee levies an added cost on fossil fuels at the source; the fee is passed through to the consumer, constraining demand.  Cap and trade mechanisms, on the other hand, place upper limits on the emission of carbon dioxide (CO2).

This post reviews examples of both mitigation mechanisms.  Upon consideration we support the use of a carbon fee in preference to a cap and trade mechanism for mitigation.

A carbon fee is imposed on fossil fuels directly, in proportion to the amount of CO2 produced when they are burned.  The fee is imposed and collected at, or close to, the source of the fuel.  Its value is then passed along as the fuel is transformed (petroleum to gasoline, for example), and/or transported (all fuels), and is ultimately paid by the consumer.  The level of the fee is set by policymakers, and typically is envisioned to start low and rise periodically until it reaches an intended level.  A carbon fee is thus conceptually and operationally easy to implement, and it operates directly to constrain demand.
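The pass-through to the consumer can be illustrated with a small calculation.  In this sketch the fee level (US$25 per tonne of CO2) is a hypothetical example, and the emission factor (roughly 8.9 kg of CO2 per U. S. gallon of gasoline burned) is an approximate published figure:

```python
# Illustrative pass-through of a carbon fee to the pump price of gasoline.
# The fee level ($25/tonne CO2) is assumed for illustration; the emission
# factor (~8.9 kg CO2 per U.S. gallon burned) is approximate.
FEE_PER_TONNE_CO2 = 25.00   # US$ per tonne (1,000 kg) of CO2, assumed
KG_CO2_PER_GALLON = 8.9     # approx. kg of CO2 emitted per gallon burned

fee_per_gallon = FEE_PER_TONNE_CO2 * KG_CO2_PER_GALLON / 1000.0
print(f"Added cost at the pump: ${fee_per_gallon:.2f} per gallon")
# -> Added cost at the pump: $0.22 per gallon
```

A fee of this size adds only a couple of dimes per gallon at the outset; as the fee rate rises year by year, the added cost, and its dampening effect on demand, grow in proportion.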

Under cap and trade, major emitting facilities are allotted allowances, each of which licenses the release of a fixed amount, say 1 ton, of CO2 or other GHGs.  An administrative agency determines the total number of allowances (the cap) and the allotments for each period.  The cap is reduced year by year, thus constraining fuel consumption.  Ideally the emitters pay for the allowances, frequently through an auction, but at the outset in many regimes they are distributed at no charge.  In any case, as the program matures, markets are set up for auctioning annual allowances and for trading them, thereby establishing a price for emissions.  The market price on carbon established in this way deters fossil fuel use.

A cap and trade regime faces many problems that make it difficult to succeed.  For example, if the supply of allowances is too high or the market demand is too low, their price will fall and the incentive to reduce the rate of CO2 emissions will be weakened.  For these and other reasons discussed below, a cap and trade regime is complex, if not cumbersome, and top-heavy to administer.

Both cap and trade and a carbon fee assign a monetary value to the waste stream that emissions of CO2 and other greenhouse gases represent.  This has not been done historically; CO2 has not been considered to be a waste product of our energy economy whose disposal had to be priced into the cost of the fossil fuels.
Examples of using a carbon fee.
In Australia, Prime Minister Julia Gillard’s government enacted a carbon fee program in 2011.  Initially the fee was set at US$23.15 per tonne; much of the revenue is to be returned as compensation to businesses and consumers (“fee and rebate”).  After six months of operation, the electricity generation segment of Australia’s energy economy had reduced its carbon emissions rate by 8.6%.  Emissions were 7.5 million tonnes lower in the second half of 2012 than in the same period of 2011.  This arose from a decrease in demand, an increase in residential rooftop solar panel use, and improved energy efficiency.  Some coal-burning facilities ceased operating, while more power came from hydroelectric generation.  The long-term goal is to reduce emissions by 33 million tonnes per year by 2020.

Gasoline fees are very effective in affecting drivers’ travel habits.  The graphic below, characterizing how per capita fuel use reflects the size of the gasoline fee,

Sources: New York Times presenting data from the U. S. Department of Energy and the World Bank

shows that per capita use of fuel for driving in developed countries decreases as the amount of the gas fee increases.  The U. S. has the lowest gas fee, which is correlated with the highest amount of fuel used per capita (horizontal scale).  As the gas fee increases (vertical scale), most of the benefit appears to be attained by a fee level of about US$2.20 per U. S. gallon.  In Great Britain, where the gas fee is even higher, Ford, the American car maker, sells a model of its compact Focus whose efficiency is 72 miles per U. S. gallon.  In contrast, a Focus model sold in the U. S. gets only 33 miles per U. S. gallon.  Clearly, automakers already have the technology and capability to mass produce highly fuel-efficient cars; the current state of technology is sufficient to garner significant improvements today.

Gasoline prices affect consumption.   In the U. S. the price of gasoline has fluctuated considerably in recent years for reasons that do not include imposition of a carbon fee.  The Washington Post reported on April 17, 2012 that higher gas prices had led to reduced consumption, and to a move toward the purchase of more fuel-efficient vehicles.

A review of various studies of the interrelationship between price and consumption concluded that “we can be reasonably assured that a rise in gas fees, all else being equal, will cause consumption to decrease”. 

Examples of using cap and trade.
In the U. S. the Regional Greenhouse Gas Initiative (RGGI) encompasses nine northeastern states.  RGGI controls only emissions from fossil fuel plants that generate electricity, and affects only larger power plants.  RGGI created a CO2 cap and trade program, with the goal first of stabilizing and subsequently reducing the overall emissions from these plants.  Each state’s base emission amount was established at the outset, and remains fixed at that level from 2009 through 2014.  Starting in 2015, the allowances for each state are to be reduced by 2.5% per year, so that by 2018 emissions will be 10% below the starting level.  Auctions for emission allowances occur quarterly.  RGGI estimates that the auction price increases the cost of electricity to the consumer by only 0.4% to 1%.
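The cap schedule just described can be sketched in a few lines.  The starting cap of 100 units below is a placeholder, not an actual RGGI figure; the point is that four annual cuts of 2.5% of the starting level bring the 2018 cap to 10% below where it began:

```python
# Sketch of the RGGI cap schedule described above: flat from 2009 through
# 2014, then reduced by 2.5% of the starting level each year.
# START_CAP is an arbitrary placeholder, not an actual RGGI number.
START_CAP = 100.0

cap = {}
for year in range(2009, 2019):
    if year <= 2014:
        cap[year] = START_CAP
    else:
        cap[year] = START_CAP * (1 - 0.025 * (year - 2014))

print(f"2018 cap: {cap[2018]:.1f}")  # -> 2018 cap: 90.0 (10% below start)
```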

In its 19th auction, almost 38 million CO2 allowances were sold, garnering about US$106 million, or about US$2.80 per allowance.  The cumulative amount from all auctions is about US$1.2 billion.  The proceeds are used to rebate portions of electricity bills to consumers, to invest in the region’s renewable energy economy, including job training for environmental jobs, and for similar objectives.  RGGI has already invested in improvements that will produce significant reductions of CO2 emissions and avoid the need to generate large amounts of electricity, along with the thermal energy needed to drive the generators.

European Union (EU). Even before the Kyoto Protocol entered into force in 2005, the European Commission established its greenhouse gas emissions trading scheme (ETS) using a cap and trade market mechanism. Because the ETS is an accord intended to govern the operations of 27 sovereign nations, each country had to enact laws codifying the applicability of the ETS structure within its borders.

The ETS covers at least 11,000 individual emission sources across the EU, and is being implemented in three phases.

Phase 1, operating from 2005 to 2007, was characterized as a learning phase.  Its features included:
  • The level of the emissions cap was determined largely by each nation independently;
  • Coverage extended only to power plants with a capacity greater than 20 MW, and to other industrial facilities; these represented 42% of emissions; and
  • Allocations of emission allowances relied primarily on recent historical records; they were offered at no cost.

In Phase 2 (2008-2012), features that expanded on those of Phase 1 included:
  • The level of the emissions cap conformed to the limits of the Kyoto Protocol; and
  • Limits on emissions from air travel were to begin in 2012.

Phase 3 (2013-2020) departs from the earlier phases in important ways:
  • National emissions caps were to be replaced by a single EU-wide cap, decreasing by 1.74% per year starting in 2010 with the objective of delivering a 21% reduction, relative to 2005, by 2020;
  • 90% of the allowances will be sold by auction rather than being distributed free of cost.

The performance of the ETS is shown in the graphic below.  Emissions allowances in a cap-and-trade regime were already in use in the EU prior to 2005.  In Phase 1, for a variety of reasons, the auction market in these initial years established early prices as high as almost EUR30 (about US$39.20 at that time) per tonne of CO2 equivalents (blue and lavender lines; a tonne is a metric ton), which then fell to EUR0/tonne toward the end of Phase 1 (orange line; see the graphic).
CO2 price evolution in the EU from 2003 to 2009.  Each period’s price performance is color coded as shown.  The pale aqua line represents futures trading for (the lower number of) allowances to be granted beginning at the start of Phase 2.  The EU-wide number of allowances for Phase 2 was 11.8% lower than for Phase 1.  Once Phase 2 began in 2008, the actual allowance price and the futures trading for 2009 allowances followed essentially identical paths.
Source: Estimations of carbon price in Europe, Nicole Dellero (2008)

The fall of the allowance price to EUR0/tonne in 2007 has been attributed both to a glut of allowances and to the impending economic slowdown preceding the world financial crisis of the following years.  Of course, with allowances carrying no penalty value, emission sources were free to continue emitting “business-as-usual” rather than curtailing emissions.  On the other hand, when allowances had a significant price, businesses were able to pass corresponding price increases along to customers, which resulted in windfall profits.

The ETS had to cancel its most recent auction in March 2013 because the bids received were “significantly” below the actual market rate.  In 2013, the start of Phase 3, about 40% of newly issued carbon emission allowances are being sold at auction for the first time; the rest are still distributed at no charge.  The price had fallen by 5.6% to EUR3.73 (US$4.86) a metric ton, having reached a low of EUR3.42 on Jan. 31, 2013.

Longer term, the ETS price for emission allowances has fallen drastically, by 90%, over the last five years as demand for energy has fallen because of recessionary conditions among EU countries.  This has led to an oversupply of unused allowances.  The ETS is reevaluating its allocation of allowances in an attempt to rebalance the trading system and maintain a price on emissions.

The state of California enacted its Global Warming Solutions Act in 2006, establishing mitigation goals through 2020.  The governor at the time, Arnold Schwarzenegger, extended the Act by executive order, declaring further stringent mitigation objectives through 2050.  These actions are significant because California, in view of its large size and population, constitutes about 1/6 of the U. S. economy, and because the Act is the only economy-wide mitigation plan in the U. S.  Initially the Act covers most fixed point sources of emission, including electric generation plants and industrial facilities, beginning now (2012-2013).  It will extend to the refining and sale of transportation fuels (i.e., distributed sources) in 2015.  The mechanisms for pursuing its mitigation goals include a cap and trade system as well as continuing and expanding California’s historically successful energy efficiency programs.  State officials and advisors are undertaking to learn lessons from the experience of the European Union’s ETS, seeking to avoid its mistakes.

The state established a rigorous survey of emissions from every potential covered installation in order to allocate emission allowances.  In its first auctions California sold 23.1 million allowances at US$10.09 each, in Nov. 2012, and another 12.9 million allowances at US$13.62 each at the second auction in Feb. 2013.  This works out to revenue from the first two auctions of US$409 million.
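The US$409 million figure follows directly from the two auction results quoted above, as a quick check shows:

```python
# Arithmetic behind the California auction revenue quoted above:
# 23.1 million allowances at US$10.09, plus 12.9 million at US$13.62.
auctions = [
    (23_100_000, 10.09),   # Nov. 2012 auction
    (12_900_000, 13.62),   # Feb. 2013 auction
]

revenue = sum(count * price for count, price in auctions)
print(f"Total proceeds: ${revenue / 1e6:.0f} million")
# -> Total proceeds: $409 million
```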

Two major mechanisms have been devised to abate the emission of CO2, a major greenhouse gas (aside from the important contribution of increasing the efficiency of energy usage).  One, a cap and trade regime, operates primarily by capping the supply of energy.  (Of course the auction price imposed on allowances has the effect of raising the price of the energy purchased by the consumer, so cap and trade also has an element of lowering energy demand.)  The second, a carbon fee applied in proportion to the amount of CO2 emitted when the fossil fuel is burned, directly limits demand by raising the price paid for energy.

A cap and trade regime has many disadvantages in comparison to a carbon fee.  Some of these are apparent when considering the case of the European Union.  The factors, many of which are interrelated, include:
  • a need to account accurately for baseline emissions from each identified source prior to placing the regime in operation;
  • a continued need for monitoring emissions from each source as the regime operates;
  • a need for a mechanism to allot allowances both at the outset and in subsequent periods of operation;
  • a mechanism or rule for distributing allowances, including determining whether to grant or sell them;
  • monitoring use of energy offsets by those installations unable to comply with emissions limits; and
  • creating and maintaining the new administrative and bureaucratic offices needed to operate the regime.
It is seen from this incomplete list that a cap and trade regime presents many challenges, requires an extensive bureaucratic structure, and includes many opportunities for mistakes that defeat the objective of constraining emissions.

In contrast, a carbon fee is extraordinarily simple in its operating features and is easy to implement.  A fee rate is established at the outset, covering most or all sources of CO2 emissions.  The rate would optimally start low, then increase annually until it reaches a level at which it has a meaningful effect in reducing energy demand.  The gasoline fee graphic above provides ample evidence that a carbon fee is easy to apply, has a broad if not universal reach, and achieves its objective in proportion to its magnitude.  The simplicity and effectiveness of a carbon fee offer major advantages over a cap and trade regime.
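A ramp-up schedule of the kind described here is trivial to specify, which is part of the fee's administrative appeal.  All dollar figures in this sketch are invented for illustration:

```python
# Hypothetical carbon fee ramp: start low, rise each year, then hold at a
# level intended to dampen demand. All figures are assumed, not proposed.
START_FEE = 10.0     # US$ per tonne CO2 in year 1, assumed
ANNUAL_STEP = 10.0   # yearly increase, assumed
TARGET_FEE = 50.0    # level at which the fee is held, assumed

fee = START_FEE
for year in range(1, 8):
    print(f"Year {year}: ${fee:.0f}/tonne")
    fee = min(fee + ANNUAL_STEP, TARGET_FEE)
```

The entire "regime" is one published schedule; compare this with the bureaucratic machinery a cap and trade system requires.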

Many commentators have urged use of a carbon fee to mitigate emissions.  One of the most consistent over time has been Tom Friedman, columnist for the New York Times, most recently in this article.   His writing and that of others have considered the many uses to which the revenues from pricing carbon could be applied.  This post will not address that discussion; most alternatives are worthy ones.
The time to begin abating humanity’s emissions of CO2, a major greenhouse gas, is now.  The longer we wait, the more firmly we cement our dependence on fossil fuels, and the more CO2 accumulates in the atmosphere, exacerbating global warming and its damaging effects on human life and welfare.  The simplest, most direct, and highly effective mechanism for reducing dependence on fossil fuels and mitigating emissions of GHGs is to apply a carbon fee.
© 2013
Henry Auer

Thursday, March 14, 2013

An Earth System Model Envisions High Future Global Temperatures

Summary.  Prinn, Sokolov and a Massachusetts Institute of Technology research group have developed an Integrated Global System Model for future climate development.  The model constructs computational modules that describe interactions between the physical world, and human economic and social activity, in order to project future climate conditions using probabilistic methods.

Five scenarios are devised, ranging from the absence of any policy that mitigates greenhouse gas emissions to a stringent policy constraining the total atmospheric concentration of all greenhouse gases to a relatively low level by the year 2100.  The model projects probabilities for limiting further global temperature increases for each scenario.  For example, in the absence of any abatement policy temperatures are likely to increase by 3.5ºC to 7.4ºC above the level of 1981-2000 by the decade 2091 to 2100.  Emission limits of increasing stringency not only lower predicted mean temperature increases but also project decreased probabilities especially for the largest temperature changes.

Prinn further presents an economic risk analysis that shows that investing early in mitigation minimizes future economic harms arising from extreme weather and climate events that further warming generates.  The financial return on this class of investments is high.

Long-term increases in global temperatures originate from human activities in most developed and developing countries around the world.  All these countries should unite to adopt emission abatement policies to minimize further global warming and its harmful consequences.
Introduction.  Climate models have been used for several decades (see the previous post) as an important tool to make predictions concerning the expected behavior of the global climate.  The models include General Circulation Models, which seek to account for interactions of atmospheric currents above the earth’s surface and oceanic currents to project future climate development.  The important feature of these models is their incorporation of past and future increases in atmospheric greenhouse gases (GHGs), especially those emitted as a result of human activity.  Application of these models has resulted in predictions, under various emission scenarios, of increased global warming in future decades, and of the harmful climatic and meteorological effects arising from this warming over this time period.
A group working at the Massachusetts Institute of Technology and elsewhere has focused on the science and policy of global change.  One member of the group, Ronald Prinn, recently published “Development and Application of Earth System Models” (Proc. Natl. Acad. Sci., 2013, Vol. 110, pp. 3673-3680).  His article is a follow-up to, and builds on, earlier work from the same group (A. P. Sokolov et al. (including Prinn as a co-author), J. Climate, 2009, Vol. 22, pp. 5175-5204).  The present post summarizes the methods and results of Prinn and the MIT climate group.

Earth system models expand on general circulation models by incorporating many facets of worldwide human activity that impact on greenhouse gas emissions, the warming of the planet and the changes predicted as a result of these effects.  Prinn and the other authors in the MIT group, in developing their Integrated Global System Model (IGSM), seek to account for the growth in human population, the changes in their economic activity that will demand expanded energy supplies, and greenhouse gas emissions arising from these human activities.  The structure of their model is summarized at the end of this post in the Details section.

The IGSM results are expressed in terms of probabilities or probability distributions.  This is because the calculations incorporated into the model include starting values for both climatic and economic parameters that are selected by a random process; the calculations are then repeated several hundred times and the results are assembled into graphs of probabilities or frequencies of occurrence of a given value of the output.  (These are similar to histogram bar charts, which are used when the number of data points is small.  In the probability distributions each value, for example of temperature increase, has an associated frequency of occurrence, which varies as the temperature value moves across an essentially continuous range.)
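The repeated-runs procedure described above is a Monte Carlo calculation, and a toy version makes the idea concrete.  The `toy_model` function below merely stands in for the IGSM; its form and its parameter values are invented for illustration only:

```python
# Minimal sketch of the Monte Carlo procedure described in the text: run a
# model many times with randomly selected starting parameters, then bin the
# outputs into a frequency (probability) distribution.
# toy_model and all parameter values are invented stand-ins for the IGSM.
import random
from collections import Counter

def toy_model(climate_param, economic_param):
    """Stand-in for one IGSM run; returns a warming value in deg C."""
    return climate_param * economic_param

random.seed(42)
results = []
for _ in range(400):                  # several hundred runs, as in the text
    climate = random.gauss(3.0, 0.8)  # randomly selected climatic parameter
    economy = random.gauss(1.5, 0.3)  # randomly selected economic parameter
    results.append(toy_model(climate, economy))

# Assemble the outputs into a frequency distribution (0.5 deg C bins).
bins = Counter(round(2 * r) / 2 for r in results)
for warming in sorted(bins):
    print(f"{warming:4.1f} C: {bins[warming] / len(results):.1%}")
```

The printed table is the analogue of the probability distributions in the graphic below: each output value has an associated frequency of occurrence across the repeated runs.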

IGSM forecasts for temperature changes are shown in the graphic below for a “no mitigation policy” case (others call this “business-as-usual”), which leads to an atmospheric concentration of CO2 and equivalent contributions from other greenhouse gases of 1330 parts per million (ppm) CO2 equivalents (CO2-eq) in the decade 2091-2100.  The forecasts also include mitigation policies of increasing stringency which are modeled to constrain GHGs to 890, 780, 660 and 560 ppm CO2-eq. 

Probability distribution of the modeled increase in temperature from the baseline period 1981-2000 to the decade 2091-2100.  The caption inside the frame shows first the “no mitigation policy” case, then cases of increasingly rigorous mitigation policies; the probability distribution curves for the same cases proceed from right to left in the graphic.  The term “ppm-eq” is the same as the term “CO2-eq” defined in the text above.  Each distribution has associated with it a horizontal bar with a vertical mark near its center.  The vertical mark shows the median modeled temperature increase, whose value is shown immediately to the right of the legend line in the graphic (e.g. 5.1ºC for the no mitigation policy case).  The horizontal line designates values of the temperature increase for each model that range from a 5% probability of occurrence to a 95% probability, shown inside the parentheses in the graphic (e.g. 3.3-8.2ºC for the no mitigation policy case). 1ºC corresponds to 1.8ºF.

Source: Prinn , Proc. Natl. Acad. Sci., 2013, Vol. 110, pp. 3673-3680; 

Several features are noteworthy in the graphic above.  First, of course, more stringent abatement policies lead to lower stabilization temperature increases, because the atmosphere contains a lower concentration of GHGs than under more lenient policies.  Equally significant, the breadth of each frequency distribution narrows as the abatement policy becomes more stringent.  This means that within each frequency distribution, the likelihood of extreme deviations toward warmer temperatures is reduced as the stringency of the policies increases.  In other words, the 95% probability point (right end of each horizontal line) is further from the median (vertical mark) for the no mitigation policy case than for the others, and this difference gets smaller as the stringency increases from right to left in the graphic.  Furthermore, Prinn states “because the mitigating effects of the policy only appear very distinctly …after 2050, there is significant risk in waiting for very large warming to occur before taking action”.

The Intergovernmental Panel on Climate Change (IPCC) has promoted the goal of constraining the increase in the long-term global average temperature to less than 2ºC (3.6ºF), corresponding to about 450 ppm CO2-eq.  Prinn points out that because significant concentrations of GHGs have already accumulated, the effective increase in temperature is already 0.8ºC (1.4ºF) above the pre-industrial level.  His analysis shows it is virtually impossible to constrain the temperature increase within the 2ºC goal for the four least stringent policy cases by the 2091-2100 decade, and the policy limiting CO2-eq to 560 ppm has only a 20% likelihood of restricting the temperature increase to this value.

Economic costs of mitigation are estimated using the IGSM component that models human economic activity.  Prinn measures global welfare by the value of global consumption of goods and services, and estimates that it grows by 3% per year.  So, for example, if the welfare cost due to spending on mitigation is estimated at 3%, the growth of global welfare would be set back by one year.
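The "one year setback" reasoning can be checked with simple arithmetic, using the article's 3% figures:

```python
# Checking the "set back by one year" logic above: if welfare (global
# consumption) grows 3% per year, paying a 3% mitigation cost leaves
# welfare roughly where it stood one year earlier.
GROWTH = 0.03            # annual growth of global consumption (from text)
MITIGATION_COST = 0.03   # welfare cost of mitigation spending (from text)

welfare_next_year = 1.0 * (1 + GROWTH)                  # grows to 1.03
welfare_after_cost = welfare_next_year * (1 - MITIGATION_COST)

print(f"{welfare_after_cost:.4f}")  # -> 0.9991, essentially back to 1.0,
                                    #    i.e. a delay of about one year
```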

The economic cost of imposing mitigation policies is estimated using a cap and trade pricing regime, graduated with time.  To attain the goals discussed in Prinn’s article by the 2091-2100 decade, compared to economic activity for no mitigation policy, the two least stringent policies have very low probabilities for causing loss of global welfare greater than 1%; the probability of a welfare cost of 1% reaches 70% only for the most stringent policy, stabilization at 560 ppm.  The probability for exceeding 3% loss in welfare is essentially zero for the three least stringent cases, and even for the 560 ppm policy the probability is only 10%.  Thus foreseeable investments in mitigation lead to minimal or tolerable losses in global welfare.

Based on the graphic shown above and other information provided in the article that is not summarized here, Prinn implicitly infers that the increases in long-term global average temperatures foreseen carry with them sizeable worldwide socioeconomic harms.  For this reason, he concludes “[investment in mitigation] is a relatively low economic risk to take, given [that the most stringent mitigation policy of] 560 [CO2-eq] … substantially lower[s] the risk for dangerous amounts of global and Arctic warming”.  He emphasizes that this statement assumes imposition of an effective cap and trade regime as mentioned here.


The MIT Earth System Model.  The work of Prinn, Sokolov, and the rest of the MIT climate group is important for its integration of climate science and oceanography with human activity as represented by economic and agricultural trends.  In this way the prime driver of global warming, man-made emissions of GHGs, is accounted for both in the geophysical realm and in the anthropological realm.

Prinn’s work is cast in probabilistic terms, providing a sound understanding of likely temperature increase in five emissions scenarios.  Projections of future global warming are essentially descriptions of probabilities of occurrence of events.

Human activity is increasing atmospheric GHGs.  The atmospheric concentration of CO2 for more than 1,000 years before the industrial revolution was about 280 ppm.  Presently, because of mankind’s burning of fossil fuels, the concentration of CO2 has risen to greater than 393 ppm, and is increasing annually.  The IPCC has set a goal (which many now fear will not be met) of limiting the warming of the planet to less than 2ºC above the pre-industrial level, corresponding to a GHG level of about 450 ppm CO2-eq.  It is clear from Prinn’s article that this limit will most likely be exceeded by 2100.

Most of the other GHGs shown in the graphic in the Details section (see below) are solely man-made; they were nonexistent before the industrial revolution.  Methane (CH4) is the principal GHG other than CO2 that has natural origins.  Human use of natural gas, which is methane, and human construction of landfills, which produce methane, have led to increases in its atmospheric concentration.

Prinn’s temperature scenarios are already apparent in historical data.  Patterns shown in the graphic above for the modeled probability distribution of future temperature increases have already been found in recent observations.  Hansen and coworkers (Proc. Natl. Acad. Sci., Aug. 6, 2012) documented very similar shifts of decade-long global average temperatures toward larger increases in the decades preceding 2011 (see the graphic below).

Frequency distributions for each value of the variability (SD) found for successive ten-year global average temperatures.  The frequency is plotted on the vertical axis.  The temperature variability is plotted on the horizontal axis as the deviation from the mean of the black “bell-shaped curve”, in units of the statistical standard deviation.  These curves may be considered as highly compressed histograms showing the fractional occurrence for each value of SD.  (All curves sum to 1.000.)  The curve for a set of numbers that would be found from fully random variations about the average (“bell-shaped curve”) is shown in black.  Any deviation from the bell-shaped curve shows the existence of a bias in the distribution of the values.  Decades in the base period are: crimson, 1951-1961; yellow, 1961-1971; and green, 1971-1981.  Decades showing warming are aqua, 1981-1991; dark blue, 1991-2001; and magenta, 2001-2011.
Source: Proceedings of the [U.S.] National Academy of Sciences;
The graphic is presented in units of the standard deviation from the mean value, plotted along the horizontal axis.  The black curve shows the frequency distribution for purely random events.  Decadal average temperatures were evaluated for a large number of small grid areas on the earth’s surface.  The data for all the grid positions were aggregated to create the decadal frequency distributions.  Using rigorous statistical analysis the authors showed that, compared to the base period 1951-1980, the temperature variation for each of the decades 1981-1990, 1991-2000, and 2001-2011, shifted successively to higher temperatures.  The distributions for the most recent decades show that more and more points had decadal average temperatures that were much higher (shifted toward larger positive standard deviation values, to the right) compared to the distributions from the earlier base period.  The recent decades also deviate strikingly from the behavior expected for a random distribution (black curve).  Hansen’s analysis of historical grid-based data suggests that the warming of long-term global average temperatures projected by the work of Prinn and the MIT group is already under way.
Risk-benefit analysis supports investing in mitigation.  As global warming proceeds, the extremes of weather and climate it produces wreak significant harms to human welfare; these will continue to worsen if left unmitigated.  Prinn has used risk analysis to show that the economic loss arising from delayed welfare gains, due to investing in mitigation efforts, is far less than the economic damage inflicted by inaction.  In other words, according to Prinn’s analysis, investing in mitigation policies has a high economic return on investment.
Worldwide efforts to mitigate GHG emissions are needed.  GHGs once emitted are dispersed around the world.  They carry no label showing the country of origin.  The distress and devastation caused by the extreme events triggered by increased warming likewise occur with equal ferocity around the world.  Planetary warming is truly a global problem, and requires mitigating action as early as possible by all emitting countries worldwide.  As Prinn points out, humanity increases its risk of harm by waiting for very large warming to occur before embarking on mitigating actions.
The Integrated Global System Model (IGSM) is a large-scale computational system in which a macro-scale Earth System contains four interconnected submodules, each carrying out the computations for its own domain (see the graphic below and its legend).
Schematic depiction of the Integrated Global System Model.  The light gray rectangle, the Earth System, comprises the complete computational model system.  Within the Earth System are four computational submodels: the Atmosphere, the Urban module (accounting for the particulates and air pollution most prevalent in cities), the Ocean, and the Land.  These are computationally coupled together, accounting for climatic interactions among them, or they can be run independently as needed.  The Earth System receives inputs from and delivers outputs to Human Activity (at top); solid lines indicate coupling already included in the IGSM, and various dashed and dotted lines indicate effects remaining to be modeled computationally.  The heavy arrows on the right exemplify the ultimate results obtained by running the IGSM.  GDP, gross domestic product.
Source: Prinn, Proc. Natl. Acad. Sci., 2013, Vol. 110, pp. 3673-3680.
The IGSM used by Prinn is an updated and more comprehensive version of earlier ones, such as that described in A. P. Sokolov et al., J. Climate, 2009, vol. 22, pp. 5175-5204.  Human activities, including energy-dependent economic pursuits, changing patterns of land use that can store or release GHGs, and the harvesting of fossil fuels to furnish energy for the economy, are encompassed in a computational module external to the Earth System.
The Human Activity module, named Emissions Prediction and Policy Analysis (EPPA; see the schematic), computationally accounts for most human activities that produce GHGs and consume resources from the Earth System.  As shown in the graphic, EPPA delivers GHG emissions and an accounting of land use and its transformations to the Earth System, and receives outputs from it such as agriculture and forestry, precipitation and terrestrial water resources, and sea level rise, among others.
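The coupled structure described above can be sketched schematically in code.  The class and variable names here are hypothetical illustrations, not the actual IGSM software: the sketch simply shows an Earth System whose four submodules exchange state with one another while being driven by emissions from a stand-in for the Human Activity (EPPA) module.

```python
class Module:
    """One Earth System submodule with trivial placeholder dynamics."""
    def __init__(self, name):
        self.name = name
        self.state = 0.0

    def step(self, forcing):
        # Placeholder: each module relaxes halfway toward its forcing.
        self.state += 0.5 * (forcing - self.state)
        return self.state

class EarthSystem:
    def __init__(self):
        # The four coupled submodules named in the schematic.
        self.modules = [Module(n) for n in
                        ("Atmosphere", "Urban", "Ocean", "Land")]

    def step(self, emissions):
        # Each submodule sees the emissions forcing plus the mean state
        # of the others, standing in for their climatic interactions.
        states = [m.state for m in self.modules]
        outputs = {}
        for i, m in enumerate(self.modules):
            coupling = sum(s for j, s in enumerate(states) if j != i) / 3
            outputs[m.name] = m.step(emissions + coupling)
        return outputs

# EPPA stand-in: emissions grow with economic activity each "year";
# the Earth System's outputs would be fed back to Human Activity.
earth = EarthSystem()
for year in range(10):
    emissions = 1.0 + 0.1 * year
    outputs = earth.step(emissions)

print({name: round(v, 2) for name, v in outputs.items()})
```

The design point the sketch captures is the one the text emphasizes: the submodules can run coupled (as here) or be decoupled and run independently by dropping the `coupling` term.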
The graphic includes a complete listing of all important GHGs, as inputs from the Human Activity module to the Earth System.  CO2 is the principal, but not the only, GHG arising from human activity.  Many of the others are important because, although their concentrations in the atmosphere are relatively low, their ability to trap heat, molecule for molecule, is much greater than that of CO2, and, like CO2, they remain resident in the atmosphere for long periods.
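The outsized potency of the minor gases is commonly handled by weighting each gas with its global warming potential (GWP) to express a mixture of emissions as a single CO2-equivalent quantity.  A minimal sketch follows; the 100-year GWP values used are roughly those tabulated in the IPCC’s Fourth Assessment Report (and note that GWPs are defined per unit mass, a practical proxy for the per-molecule comparison in the text).

```python
# Approximate 100-year global warming potentials (IPCC AR4 values;
# later assessment reports revise these somewhat).
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_tonnes):
    """Convert a mix of GHG emissions (tonnes of each gas) into
    tonnes of CO2-equivalent, weighting each gas by its GWP."""
    return sum(GWP_100[gas] * tonnes
               for gas, tonnes in emissions_tonnes.items())

# Even small masses of the minor gases contribute heavily:
mix = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}
print(co2_equivalent(mix))  # 1000 + 250 + 298 = 1548.0
```

Here eleven tonnes of the two minor gases add more than half as much warming effect as a thousand tonnes of CO2, which is why the IGSM tracks them all.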
Overall, Prinn states that the full-scale computational system is too demanding for even the largest computers.  Therefore, depending on the objective of a given project, reduced versions of various modules are employed.  Each module has been independently tested and validated to the greatest extent possible before being used in a project calculation.
The IGSM computations are initiated with input values for important parameters chosen by random selection.  Ensembles of hundreds of such runs, each providing the output results sought for the project, are aggregated to yield probabilities for the outcomes.  Such probabilistic assessments of outcomes are a hallmark of contemporary climate projections; the IPCC, for example, likewise characterizes its projected climate properties in probabilistic terms.
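This ensemble procedure is a Monte Carlo approach, and its logic can be sketched briefly.  Everything below is a toy stand-in, not the IGSM itself: `run_model` collapses a full climate run into one formula, and the lognormal prior for the uncertain parameter is an assumption made for illustration.

```python
import random
import statistics

random.seed(42)

def run_model(climate_sensitivity):
    """Stand-in for one full IGSM run: a made-up formula mapping one
    uncertain input parameter to a warming outcome (degrees C)."""
    return 1.0 + 1.5 * climate_sensitivity

# Draw the uncertain parameter at random for each ensemble member,
# as the text describes, and collect the hundreds of run outputs.
ensemble = []
for _ in range(500):
    sensitivity = random.lognormvariate(0.0, 0.3)  # assumed prior
    ensemble.append(run_model(sensitivity))

# Aggregate the ensemble into probabilistic statements about outcomes,
# the form in which the IGSM (and the IPCC) reports projections.
median_warming = statistics.median(ensemble)
prob_above_3 = sum(1 for w in ensemble if w > 3.0) / len(ensemble)
print(round(median_warming, 2), round(prob_above_3, 2))
```

The point of the aggregation step is that no single run is the answer; the ensemble as a whole supplies statements such as "the median projected warming" or "the probability of exceeding 3 °C".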

© 2013 Henry Auer