
Dissertation

Benchmarking Climate Change Strategies


Under Constrained Resource Usage

in Fulfilment of the Requirements for the Degree of


Doctor of Philosophy

Presented to the Faculty of Engineering & Information Technology
of
The University of Technology, Sydney (Australia)

Stuart Nettleton

B.Eng. (Hons II, Division I), Sydney University, 1974


M.Eng.Sci., University of New South Wales, 1978
M.B.A. (Alan Knott University Medal), Macquarie University, 1982
Graduate Diploma, Australian Institute of Company Directors, 2005
Certified Practising Accountant & Fellow of CPA Australia

12 February 2010

Supervisor: Assoc. Prof. Deepak Sharma, PhD


Introductory remarks

Revision Number 330, published on 12 February 2010


© 2010

Licence. This work is licensed under the Creative Commons
Attribution-No Derivative Works 3.0 Unported License or any consistent later
version. To view a copy of this license, please visit
http://creativecommons.org/licenses/by-nd/3.0 or send a letter to Creative
Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105,
USA. Contact the author to request other uses if necessary.

Trademarks and service marks. All trademarks, service marks, logos and
company names mentioned in this work are the property of their respective owners.
They are protected under trademark law and unfair competition law.

Importance of the glossary. Before starting the first chapter, it is
recommended that the reader peruse the Glossary, which follows the Table of
Contents.
Certificate of Authorship/Originality

I certify that the work in this thesis has not previously been submitted for a
degree nor has it been submitted as part of the requirements for a degree
except as fully acknowledged within the text.

I also certify that the thesis has been written by me. Any help that I have
received in my research work and the preparation of the thesis itself has been
acknowledged. In addition, I certify that all information sources and literature
used are indicated in the thesis.

signature (Stuart Nettleton)

Sydney, 12 February 2010


Abstract

This doctoral dissertation presents evidence-based research into climate
change policy. The research technique of political economy is used to
investigate policy development. A major change in the Anglo-American growth
paradigm, from unconstrained to constrained growth, is identified, and the
implications of this change for climate policy are drawn out. The political
economy of climate change policies is expressed in a new Spatial Climate
Economic Policy Tool for Regional Equilibria (Sceptre). This is an innovative
benchmarking approach to computable general equilibrium (CGE) modelling
that provides a spatial analysis of geopolitical blocs and of industry groupings
within these blocs. It includes international markets for carbon commodities
and geophysical climate effects. It is shown that climate-constrained growth
raises local policy issues in managing technology diffusion and dysfunctional
resource-expansive specialisations exacerbated by the creation of global
carbon markets.
Acknowledgements

Discipline in the pursuit of enlightenment

Anon. Shinto Philosopher

A doctoral dissertation that covers so many interdisciplinary subjects and takes


several years to complete could not have been written without the support of many
colleagues at the University of Technology Sydney. I would like to thank Assoc.
Professor Deepak Sharma for his wise mentorship, perspicacity, candour and
friendship. Deepak introduced me to the world of Input Output analysis and
general equilibrium modelling, encouraged me in the pursuit of philosophy and
political economy, and kept me focused by regularly drawing the discussion
back to policy implications.

Deepak's Energy Policy Program has been a jewel of the Faculty of Engineering
and Information Technology for eighteen years. I was fortunate to participate
in the Energy Modelling module, which provided an excellent foundation in
Input Output analysis and resource infrastructure policy. This was
complemented by practical studies in Policy Research through the Australian
Technology Network (ATN) of Universities LEAP program.

Although my other mentors William Nordhaus and Thijs ten Raa would at
present know me only from published comments in the Mathematica support
group or through a few emails, they have had a profound influence on my
research. William Nordhaus' 2007 book A Question of Balance showed me best
practice in assessing climate change policies. Thijs ten Raa's 2006 book The
Economics of Input Output Analysis provided a refreshing and exciting
paradigm for computable general equilibrium modelling that elevated Input
Output Analysis to a completely new level. My head became filled with exciting
new ways to understand the world. This passion led me to an almost singular
obsession with implementing Thijs ten Raa's concepts in a Nordhaus-like
intertemporal economic-climate model.
Daniel Lichtblau at Wolfram is another dear mentor, who constantly amazed
me with his knowledge of Mathematica optimisation, his ability to obtain
answers to esoteric questions from the mysterious halls of Wolfram, and his
tireless perseverance in responding to me as I pushed many of the limits of
Mathematica and its algorithms.

My doctoral assessment examiners Emeritus Professor Rod Belcher, Professors


Stuart White and Joe Zhu, Dr. Gul Ismir and the Chair, Professor Jie Lu, provided
incisive and balanced comment at a critical stage in the formation of my
proposal. I am most grateful for their constructive comments on the scope of
this policy research and their support and guidance.

My colleague Ravin Bagia's systems dynamics perspective and always cheerful


disposition led to many interesting insights and new ideas. I sincerely hope
that Ravin and I can fulfil our plan to develop critical mass in funded policy
research using the methods shown in this dissertation.

Lastly, the understanding of my family has made this work possible. My thanks
go to my children: to my daughter, for deftly wielding her existential blade with
elegance and grace, and to my son, for his practical demonstration of that
secret of life that Herculean perseverance along with a good heart brings
extraordinary success.

Above all, the patient indulgence of my wonderful partner, Julie, has been
beyond the call of duty and has made this work worthwhile and enjoyable.
Julie was the first to read this dissertation and to see those things my eyes
passed over too quickly. I am truly grateful for, and humbled by, her confidence
and her ever-encouraging smile for me to persevere with yoga.
Table of Contents
Glossary................................................................................................................................... v
Preface.................................................................................................................................. xiii
Chapter 1 Introduction............................................................................................................ 1
1.1 Background.................................................................................................................. 1
1.2 Policy context................................................................................................................ 4
1.3 Equilibrium tools for policy research..........................................................................18
1.4 Research aims............................................................................................................. 31
1.5 Research methodology................................................................................................ 31
1.6 Scope of research....................................................................................................... 32
1.7 Significance of research............................................................................................. 35
1.8 Chapter references..................................................................................................... 38
Chapter 2 Political Economy of the Anglo-American economic world view...........................47
2.1 Origins of the American worldview.............................................................................47
2.2 International relations................................................................................................ 52
2.3 Precursors of the Anglo-American world view............................................................59
2.4 Unexpected failures in the Anglo-American world view.............................................88
2.5 Evolution of a new Anglo-American world view........................................................108
2.6 Threats to the evolution of a new Anglo-American world view.................................134
2.7 Conclusion................................................................................................................ 138
2.8 Chapter references................................................................................................... 143
Chapter 3 Political Economy of the Anglo-American world view of climate change............165
3.1 Background.............................................................................................................. 165
3.2 Climate change science development....................................................................... 167
3.3 Climate change policy development......................................................................... 186
3.4 American climate policy development...................................................................... 200
3.5 Models to manage the commons............................................................................... 221
3.6 Conclusion................................................................................................................ 233
3.7 Chapter references................................................................................................... 237
Chapter 4 Economic Models for Climate Change Policy Analysis........................................253
4.1 Survey of Computable General Equilibrium modelling literature.............................253
4.2 Survey of Input Output modelling............................................................................. 273
4.3 Survey of mathematical modelling platforms...........................................................287
4.4 Survey of data sources.............................................................................................. 294
4.5 Conclusion................................................................................................................ 297
4.6 Chapter references................................................................................................... 298
Chapter 5 A new spatial, intertemporal CGE policy research tool.......................................315
5.1 Sceptre model flowchart........................................................................................... 315
5.2 Model assumptions................................................................................................... 319
5.3 Comparison of Sceptre with physical modelling.......................................................370

5.4 Comparison of Sceptre and DICE............................................................................. 370
5.5 Conclusion................................................................................................................ 378
5.6 Chapter references................................................................................................... 379
Chapter 6 Assessment of changes in regional and industry performance under resource
constrained growth.............................................................................................................. 385
6.1 Policy investigation with the Sceptre tool.................................................................386
6.2 Conclusion................................................................................................................ 481
6.3 Chapter References.................................................................................................. 483
Chapter 7 Conclusion and Suggestions for Further Research.............................................485
7.1 Conclusion................................................................................................................ 485
7.2 Suggestions for further research.............................................................................. 490
7.3 Chapter References.................................................................................................. 495
Appendix 1 Climate change engagement in Australia.........................................................497
A1.1 Submission to Garnaut Review............................................................................... 497
A1.2 Australian climate change policy development.......................................................504
A1.3 Appendix References.............................................................................................. 515
Appendix 2 CGE Modelling.................................................................................................. 517
A2.1 Elementary CGE modelling.................................................................................... 517
A2.2 Economic Equivalence of Competitive Markets and Social Planning.....................526
A2.3 Appendix references............................................................................................... 533
Appendix 3 Input Output Tables.......................................................................................... 535
A3.1 Input Output tables from the Australian Bureau of Statistics.................................535
A3.2 Leontief Matrix....................................................................................................... 539
A3.3 World Multiregional Input Output Model...............................................................545
A3.4 Appendix references............................................................................................... 556
Appendix 4 Nordhaus DICE Model...................................................................................... 559
A4.1 Basic scientific model............................................................................................. 559
A4.2 Model equations..................................................................................................... 559
A4.3 Implementation issues............................................................................................ 566
A4.4 Appendix references............................................................................................... 584
Appendix 5 Acyclic Solver for Unconstrained Optimisation................................................587
A5.1 Overview................................................................................................................ 587
A5.2 Modelling factors that affect performance.............................................................587
A5.3 Phases of model development................................................................................ 597
A5.4 Appendix references............................................................................................... 613
Appendix 6 Benchmarking with Linear Programming.........................................................617
A6.1 Data envelopment analysis..................................................................................... 617
A6.2 Linear programming............................................................................................... 622
A6.3 Emission permits, amelioration and abatement......................................................633
A6.4 Intertemporal stocks and flows model....................................................................634
A6.5 Appendix references............................................................................................... 636
Appendix 7 Mining the GTAP Database............................................................................... 639

A7.1 Aggregating the GTAP7 database........................................................................... 639
A7.2 Creation of GTAP economic databases within Mathematica...................................645
A7.3 Data mining the GTAP database in Mathematica...................................................655
A7.4 Data mining Mathematica's country database for the population growth using GTAP
aggregations................................................................................................................... 661
A7.5 Appendix references............................................................................................... 662
Appendix 8 The Sceptre Model............................................................................................ 663
A8.1 Sceptre Model Flowchart....................................................................................... 663
A8.2 Sceptre Mathematica Code.................................................................................... 664
A8.3 Appendix references:.............................................................................................. 684
Colophon............................................................................................................................. 685
CD-ROM Attachment........................................................................................................... 687
Notes................................................................................................................................... 691

Glossary

Abbreviations

Acronym Meaning
ABARE Australian Bureau of Agricultural and Resource Economics
ABS Australian Bureau of Statistics
APEC Asia-Pacific Economic Cooperation
BAU Business As Usual
bbl Barrel of oil (159 litres)
BP Before (the) Present
Btu British Thermal Unit (about 1.06 x 10^3 Joules)
C (degrees) Celsius
CCS Carbon capture & storage
CDM Clean Development Mechanism
CEPII Centre d'Etudes Prospectives et d'Information Internationales
CFCs Chlorofluorocarbons
Computed General Equilibrium (CGE) “Computed” means “ascertained or
arrived at by calculation or computation; (also) performed or controlled by a
computer; computerized” (OED, 2009). A Computed General Equilibrium (CGE)
is the result, outcome, state or output of a “Computable General Equilibrium
(CGE) model” following calculation or computation. This dissertation draws no
distinction between models that are able to be computed and models that have
been computed and are represented by their computed state. Therefore the
terms “Computed General Equilibrium”, “Computable General Equilibrium”
and the acronym “CGE” have the same meaning.
Computable General Equilibrium (CGE) model “Computable” means “Capable
of being computed, calculable; solvable or decidable by (electronic)
computation” (OED, 2009). A Computable General Equilibrium (CGE) model is
“A general equilibrium model of the economy so specified that all equations in
it can be solved analytically or numerically. Computable general equilibrium
models are used to analyse the economy-wide effects of changes in particular
parameters or policies” (Black et al. 2009). See also Computed General
Equilibrium (CGE) above.
CH4 Methane
CO2 Carbon Dioxide
CO2e Carbon Dioxide equivalent see Gt CO2
Cognitive Behavioural Therapy (CBT) “A cognitive therapy that is combined
with behavioural elements (see behaviour therapy). The patient is encouraged
to analyse his or her specific ways of thinking around a problem. The therapist
then looks at the resulting behaviour and the consequences of that thinking
and tries to encourage the patient to change his or her cognition in order to
avoid adverse behaviour or its consequences. CBT is successfully used to treat
phobias, anxiety, and depression (it is among the recommended treatments
for anxiety and depression in the NICE guidelines)” (Martin, 2007)
COP Conference of the Parties (of the UNFCCC)
COP15 UNFCCC December 2009 meeting in Copenhagen, Denmark
CRS Constant returns to scale such that production can be increased or
decreased without affecting efficiency
CSIRO Australian Commonwealth Scientific and Industrial Research Organisation
Data Envelopment Analysis (DEA) A linear programming technique typically
used to measure the technical (in)efficiency of decision making units
compared to the units with best practice
DICE Dynamic Integrated Model of Climate and the Economy
DMU A decision making unit in Data Envelopment Analysis (DEA)
Effectiveness The extent to which outputs of service providers meet the objectives set
for them
Efficiency The degree to which the observed use of resources to produce outputs of a
given quality matches the optimal use of resources to produce outputs of a
given quality. This can be assessed in terms of technical efficiency
(conversion of physical inputs such as labour and materials into outputs),
allocative efficiency (whether inputs are used in the proportion which
minimises the cost of production) and dynamic efficiency (degree of
success in altering technology and products following changes in
consumer preferences or productive opportunities)
EU European Union
EU25 Twenty-five countries of the EU in 2004, prior to its 2007 expansion to
include Bulgaria and Romania
ETR Ecological/ Environmental Tax Reform
ETS Emissions trading scheme, which may be either a differential structure as
introduced in European Union countries, or an absolute structure where
emitters must purchase emissions permits in order to pollute the
atmosphere with greenhouse gases
g Grammes (grams)
G5 Major emerging economies, comprising Brazil, India, China, Mexico and
South Africa
G8 Group of 8, comprising Canada, France, Germany, Italy, Japan, Russia,
United Kingdom, United States
G20 Group of 20, comprising Argentina, Australia, Brazil, Canada, China,
France, Germany, India, Indonesia, Italy, Japan, Mexico, Russia, Saudi
Arabia, South Africa, South Korea, Turkey, United Kingdom, United States,
European Union. In September 2009, the G20 announced it would be the
world's peak economic policy body, replacing the G8.
gC Grams of Carbon (see GgC)
Gg Giga-grammes (grams)
GAMS General Algebraic Modelling System
GDP Gross Domestic Product
GHG Greenhouse Gases
Global warming potentials CO2 (1), CH4 (21, although a recently detected
reaction with aerosols now suggests 33), N2O (310), CF4 (6,500), C2F6 (9,200),
SF6 (23,900), HFC-23 (11,700), HFC-125 (2,800), HFC-134a (1,300), HFC-143a
(3,800)
GJ Gigajoules
Gt Gigatonnes
GtC Gigatonnes of Carbon
Gt CO2 Gigatonne of CO2 (3.67 Gt CO2 has the same carbon content as 1 GtC.
The factor of 3.67 represents the ratio of the molecular weight of CO2, which
is 44.009, to the atomic weight of carbon, which is 12.011; see Oak Ridge
National Laboratory (Carbon Dioxide Information Analysis Center 1990,
Table 3) and Clark (1982, p. 467))
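A worked check of this factor, using the weights quoted above (the
conventional figure of 3.67 follows from the rounder ratio 44/12):

\[ \frac{M_{\mathrm{CO_2}}}{M_{\mathrm{C}}} = \frac{44.009}{12.011} \approx 3.66, \qquad \frac{44}{12} \approx 3.67, \qquad \text{so } 1\ \mathrm{GtC} \leftrightarrow \text{about } 3.67\ \mathrm{Gt\ CO_2}. \]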
GTAP Global Trade Analysis Project (Purdue University)
GTEM Global Trade & Environment Model
HCFC Hydro-Chloro-Fluoro-Carbon
HFC Hydro-Fluoro-Carbon
IEA International Energy Agency
IAEA International Atomic Energy Agency
IMAGE Integrated Model to Assess the Greenhouse Effect
IPCC Intergovernmental Panel on Climate Change, based in Geneva,
Switzerland. In 2007, the IPCC and Al Gore shared the Nobel Peace Prize
IRIO Interregional Input Output Model (see also MRIO)
Kyoto Protocol The Kyoto Protocol stems from a 1992 United Nations
Conference on Environment and Development in Rio de Janeiro (Brazil), which
considered climate change regulations and produced the United Nations
Framework Convention on Climate Change the same year. In 1995, a
Conference of the Parties in Berlin (Germany) proposed a new protocol to
replace the ambiguous agreement reached in 1992. In 1997, at the 3rd session
of the Conference of the Parties to the United Nations Framework Convention
on Climate Change in Kyoto, Japan, the Berlin proposal became the Kyoto
Protocol. Its target was that by 2008–2012 the net emissions of 6 greenhouse
gases (CO2, CH4, N2O, HFC, PFC and SF6) would be reduced by 5.2% relative
to the 1990 emission levels of these gases. While each signatory to
the Kyoto Protocol decides how it will implement the agreements of the
Treaty, the Kyoto Protocol offers mechanisms to achieve targeted
reductions in greenhouse gases including international and local emissions
trading schemes (ETS), emissions sinks (the development and
management of forests and agricultural soils), joint implementations
(where one company invests in another's facility and shares reductions in
emissions), clean development mechanisms (where companies invest in
reducing greenhouse pollution in developing countries), bubbling
(collectively attaining targets), etc.
Linear program Algorithms to maximise or minimise an objective function
subject to a set of linear mathematical constraints
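In standard form (a generic statement, not tied to any particular model in this
dissertation), a linear program may be written

\[ \max_{x} \; c^{\top} x \quad \text{subject to} \quad A x \le b, \; x \ge 0, \]

where x is the vector of decision variables, c the objective coefficients and
A, b the constraint data; minimisation follows by negating c.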
MAD Mutually assured destruction
Mb millions of bytes of random access memory (RAM) or file size
MEF Major Economies Forum comprising Australia, Brazil, Canada, China,
Germany, the European Union, France, the United Kingdom, India,
Indonesia, Italy, Japan, Mexico, Russia, South Africa, South Korea and the
USA.
MJ Million (10^6) Joules or Mega Joules
MBTU Thousand (10^3) BTU (where M is the Roman Numeral for one thousand)
MMBTU Million (10^6) BTU
Moral Hazard “The observation that a contract which promises people payment on the
occurrence of certain events will cause a change in behaviour to make
these events more likely. For example, moral hazard suggests that if
possessions are fully insured, their owners are likely to take less good care
of them than if they were uninsured. The consequence is that insurance
companies cannot offer full insurance. Moral hazard results from
asymmetric information and is a cause of market failure” (Black et al.
2009)
MRIO Multiregional Input Output Model (see also IRIO)
NGO Non-Governmental Organisation
N2O Nitrous Oxide
NOAA United States of America National Oceanic & Atmospheric Administration
NOX Nitrogen Oxides
NPV Net Present Value
OECD Organisation for Economic Cooperation and Development
PCA Principal components analysis
PJ Peta Joule(s)
ppm Parts per million, used here as a measure of the concentration of
greenhouse gases in the atmosphere (1 ppm of CO2 in the atmosphere =
2.123 GtC in the atmosphere, which assumes an atmospheric mass of
5.137 × 10^18 kg; see references for “Gt CO2”)
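A worked derivation of this equivalence, assuming (in addition to the
atmospheric mass above) a mean molar mass of air of about 29 g/mol, a figure
not stated in the text:

\[ \frac{5.137 \times 10^{21}\ \mathrm{g}}{29\ \mathrm{g\ mol^{-1}}} \times 10^{-6} \times 12.011\ \mathrm{g\ mol^{-1}} \approx 2.13 \times 10^{15}\ \mathrm{g} \approx 2.13\ \mathrm{GtC}, \]

consistent with the quoted 2.123 GtC; the small difference reflects the precise
molar mass assumed.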
Principal-Agent Problem “The problem of how person A can motivate person B
to act for A's benefit rather than following self-interest. The principal, A, may
be an employer
and the agent, B, an employee, or the principal may be a shareholder and
the agent a director of a company. The problem is how to devise incentives
which lead agents to report truthfully to the principal on the facts they
face and the actions they take, and to act for the principal's benefit.
Incentives include rewards such as bonuses or promotion for success, and
penalties such as demotion or dismissal for failure to act in the principal's
interests” (Black et al. 2009)
Prisoner's dilemma “A two-player game that illustrates the conflict between
private and social incentives, and the gains that can be obtained from making
binding
commitments. The name originated from a situation of two prisoners who
must each choose between the strategies ‘Confess’ and ‘Don't confess’
without knowing what the other will choose. The important feature of the
game is that a lighter penalty follows for a prisoner who confesses when
the other does not. The game is summarized in the pay-off matrix where
the negative pay-offs can be interpreted as the disutility from
imprisonment” (Black et al. 2009)
Production frontier A curve plotting the minimum inputs required to produce a
given quantity of output
Productivity The ratio of physical output produced from the use of a quantity of inputs
(see also TFP)
quad Quadrillion BTU, equivalent to 1.055 x 10^18 Joules
quadrillion One thousand million million (10^15), i.e. Peta
R&D Research & Development
Sceptre model Spatial Climate Economic Policy Tool for Regional Equilibria
(the model developed in this doctoral research and described in this
dissertation)
Slacks In a linear program solution, the extra amounts by which an input (output)
can be reduced (increased) to attain technical efficiency after all inputs
(outputs) have been reduced (increased) in equal proportions to reach the
production frontier
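For example, in a standard input-oriented DEA envelopment program (a
generic formulation, not one drawn from this dissertation), the slacks appear
as the variables s- and s+:

\[ \min_{\theta, \lambda, s^{-}, s^{+}} \theta \quad \text{subject to} \quad X\lambda + s^{-} = \theta x_{0}, \quad Y\lambda - s^{+} = y_{0}, \quad \lambda,\ s^{-},\ s^{+} \ge 0, \]

where X and Y hold the inputs and outputs of all decision making units,
(x0, y0) is the unit under evaluation, θ is the equiproportional contraction of
inputs, and the remaining input excesses s- and output shortfalls s+ are the
slacks.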
SO2 Sulphur Dioxide
SRES United Nations' IPCC “Special Report on Emissions Scenarios”
t Tonnes
TJ Terajoules
TFP Total Factor Productivity is the ratio of the quantity of all outputs
(weighted by revenue shares) to the quantity of all inputs (weighted by
cost shares)
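Expressed symbolically (a generic restatement of the definition above):

\[ \mathrm{TFP} = \frac{\sum_{i} w_{i}\, y_{i}}{\sum_{j} v_{j}\, x_{j}}, \]

where the y_i are output quantities weighted by revenue shares w_i and the
x_j are input quantities weighted by cost shares v_j.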
UK United Kingdom
UKMO United Kingdom Meteorological Office
UN United Nations
UNFCCC United Nations Framework Convention on Climate Change, based in Bonn
USOSTP United States Office of Science and Technology Policy
USGCRP United States Global Change Research Program
USA or U.S. United States of America (America)
WHOSTP White House Office of Science and Technology Policy
WMO World Meteorological Organisation
WWF World Wildlife Fund

Mathematical Symbols

Symbol Meaning
∀ For each/all/any of
∂ Partial differential
∆ Difference
∈ Is an element of
∏ Cartesian product of
∑ Sum of
≤ Less than or equal to
≥ Greater than or equal to

Glossary references

Black, J., Hashimzade, N. & Myles, G., 2009. A Dictionary of Economics in


Economics & Business. In Oxford Reference Online Premium. Oxford
University Press. Available at:
http://www.oxfordreference.com.ezproxy.lib.uts.edu.au/views/BOOK_SE
ARCH.html?book=t19 [Accessed February 5, 2010].

Carbon Dioxide Information Analysis Center, 1990. Carbon Dioxide and


Climates (Glossary), Oak Ridge, Tennessee: Oak Ridge National
Laboratory. Available at: http://cdiac.ornl.gov/pns/convert.html
[Accessed June 19, 2009].

Clark, W.C. ed., 1982. Carbon dioxide review: 1982, Clarendon, United
Kingdom: Oxford University Press.

Martin, E.A. ed., 2007. Concise Medical Dictionary. In Oxford Reference Online
Premium. Oxford University Press. Available at:
http://www.oxfordreference.com.ezproxy.lib.uts.edu.au/views/BOOK_SE
ARCH.html?book=t60 [Accessed February 5, 2010].

OED, 2009. OED Online. In The Oxford Dictionary of English. Oxford: Oxford
University Press. Available at:
http://dictionary.oed.com.ezproxy.lib.uts.edu.au/entrance.dtl [Accessed
February 5, 2010].

Preface
You must become the change you wish to see in the world.

Mahatma Gandhi

Putting pen to paper on this dissertation in April 2008, it occurred to me that
there had never been a more poignant time for a comprehensive review of
climate change strategies, which is the subject of this dissertation. Oil had
reached US$120 per barrel on its way to US$150 per barrel; the contract price
of Australian coking coal for steel smelting had just tripled to US$305 per
tonne; steaming coal for power generation was up 130% to US$125 per tonne;
and iron ore was up 65% to US$120 per tonne, following a 35% rise the
previous year. Third world food riots were occurring due to rice and corn
production being diverted to ethanol for transport. American President George
W. Bush had just taken the unusual step of invalidating vehicle tailpipe
emissions controls in States like California and announced significantly lower
standards.

Three wildly divergent views about the future were circulating in Western
markets about the commodity price spiral. The first was that it was merely a
trading bubble due to speculators and it would burst. The second view was
that the emerging prosperous middle class in China and India had a ravenous
demand for animal protein and cars. This meant the world was now in a new
era of high resource demand and prices. However, the world would adapt, as
always. We could be sanguine because the world had previously coped with
similar dire exigencies and would produce the necessary resources, so
commodity prices would fall. The third scenario was that the world had
reached or passed peak-oil production and was in a new era of scarce food,
energy and metals, and this meant a continuation of rapidly increasing demand
and high prices.

Such questions about the impact of resource scarcity on world economic


growth had not been asked since I was an undergraduate in the 1970s. Adam
Smith's invisible hand had solved the crises. Western birth rates declined,
economic growth in Japan and Europe subsided and the Club of Rome's dire
Malthusian projections did not eventuate.

Yet in April 2008 a new dimension, climate change, had emerged as a major
market factor in my home country, Australia. On its first day in office,
Australia's new Rudd Labor Government had ratified the Kyoto Protocol.

In Australia, as in many Western countries, there is unquestioned support for


local protection of air and water quality, and the control of polluting goods
such as waste, noise and smoking. Support for regional level protection is
considerably less overwhelming because it usually implies trading off local
employment or amenity for the good of people elsewhere in
Australia. For example, management of the Darling, Murray and Snowy Rivers
means the environment gains and farmers lose water. Forestry workers lose
jobs when old growth forests are protected in Tasmania.

Notwithstanding the increasing attention to greenhouse gas pollution at the


Government level, apathy at best exists among industry and consumers. There
is little respect for the scientific evidence of the Intergovernmental Panel on
Climate Change, released in 2007, which shows that climate change threatens
the very basis of our civilisation. Nor is there respect for Al Gore's call to
action to avert it.

There appears to be an overwhelming lack of consensus in Australia, or in any


Anglo-American country, for carbon taxes or similar market mechanisms to
increase the price of goods and processes that contribute to global warming. In
fact, despite increasing evidence that Australia was in a long term drought
attributable to climate change, the former Prime Minister John Howard
claimed to be a “non-believer in climate change”. Perhaps even more
disturbing, he legitimated uncritical scepticism amongst Australia's political
conservatives and greatly empowered already strong industry lobby groups
like Clean Coal to argue against any tilt in policy toward the environment.

These examples suggest that as the issues become geographically and


culturally broader, citizens of Western democracies rapidly lose interest in
protecting the commons from being despoiled. As a result, any change to
environmental policies in Western countries is a very difficult thing. The vast
majority of individuals and businesses ask “Why me, why should I be charged
more and my business or livelihood be disadvantaged for some theoretical
concept called climate change?” Indeed, following a fuel revolt in 2000, the
Constitutional Court of France declared environmental taxes unconstitutional.
As a result, the European Union emissions trading scheme introduced in 2004
included neither a tax on carbon nor the requirement for companies to bid for
permits to pollute. To date, America has steadfastly refused to ratify the Kyoto
Protocol. At the time of concluding this thesis, its pending Waxman Markey Bill
is not as strong as legislation in the United Kingdom and European Union.

Nor is there understanding in Europe about combining fiscal and


environmental policy into a holistic solution where environmental tax revenues
are recycled into reduced labour and income taxes. Perhaps it is not surprising
that people living in areas of long term structural unemployment are
uninterested in the argument that recycling environmental taxes into lower
labour taxes increases economic growth and creates extra jobs. People in
Europe simply want fiscal and environmental policies kept separate.

In April 2008, I made a submission to the Garnaut Review suggesting a


moderate carbon tax on carbon-adders as the first stage of a market based
emissions trading scheme that would ultimately lead to a market for emissions
trading consistent with that of other major Western democracies (see Appendix
1). My submission also proposed that a carbon-added tax be imposed on
imports in the same way as a goods and services tax (GST) is applied. Between
April 2008 and October 2009, European policy has come full circle to align
with my proposal: France introduced a carbon tax and together with Germany
was planning duties on the untaxed carbon embedded in imports.

There remains much division over policies to ameliorate and abate the
consequences of global warming. As I was drawing this dissertation to a
conclusion in August 2009, the opposition party in Australia used its upper
house majority to vote down the Government's Carbon Pollution Reduction
Scheme.

The end of my dissertation has coincided with the 80th anniversary of the start
of the Great Depression. I reflect on the past few years and am amazed at the
dazzling panoply of world events in the period: China and India burst onto the
world stage as global leaders, America became a debtor nation with multi-
trillion dollar deficits, a global financial crisis of almost Great Depression
proportion came and went in just one year, as did a swine-flu pandemic.

Yet climate change policy remains in disarray as the United Nations' COP15
Copenhagen meeting approaches, Governments flounder and people remain
blasé or cognitively dissonant about scientific evidence. Climate sceptics
abound and mock the melting of the Arctic, Greenland and Antarctic ice-caps
that threatens many metres of sea level rise, a shift in the earth's axis of rotation,
widespread earthquakes, tsunamis and volcanic eruptions from shifting
tectonic plates and the release of methane deposits from the sea beds and
permafrost.

Every day, almost every newspaper carries the latest stories of my research
topic and the policy imbroglio in climate change. It is at the same time
satisfying and disturbing that my research into benchmarking climate change
policies remains poignant and needed.

Chapter 1 Introduction

1.1 Background
In the 200 years from the Industrial Revolution through to the 21st century,
technology, energy, the political economy of markets and democratic systems
delivered abundant food production, prosperity and growth in lifestyle. It has
truly been Homo sapiens' golden age of expansion. In Common Wealth:
Economics for a Crowded Planet (2008a), Jeffrey Sachs writes that following a
millennium of static productivity, output per person jumped one hundred-fold
while aggregate global output exploded from a negligible level to US$70
trillion in 2008.i

Facilitated by the plentiful availability of fossil fuels, rapid advances in


technology and stability within economic, social and political institutions,
policy makers have successfully advanced bedrock social objectives such as full
employment, industrial and security self-reliance, and improving living
standards. In 1800, ninety percent of world economies were subsistence.
Aristocrats and even royalty lived in conditions and health that today, we would
consider distasteful in many ways (Jeffrey Sachs observes that one need only
consider dentistry). Average per capita purchasing power has risen from
US$400-$500 in the 18th Century to a current world average of US$10,000 pa.
However, great disparities in wealth distribution have left one billion people
living in extreme poverty on a few hundred dollars a year while America,
Australia and other western nations enjoy an average income of US$50,000-
$60,000 pa.

Policy makers have never had an easy task in resolving competing priorities for
increased living standards against the backdrop of increasing population. In
1800, global population was about 1 billion. This grew to 2 billion in 1930 and,
notwithstanding World War II, to 3 billion by 1960. It then took just fourteen
years to reach 4 billion in 1974, with 5 billion by 1987, 6 billion by 1999 and
6.7 billion in 2008. The United Nations projects that world population will
exceed 9 billion by 2050.

Today, population and standards of living based on abundant energy from fossil
fuels, such as coal and oil, and its accompanying greenhouse gas pollution, are
leading to major world problems due to global warming. In November 2007,
scientists of the United Nations Intergovernmental Panel on Climate Change
(IPCC) confirmed that runaway greenhouse gas emissions had created a
situation where global warming was a real and pressing problem for the world
(IPCC 2007; Karoly 2007). The IPCC scientists warned of a 2°C to 6°C increase
in terrestrial atmospheric temperature between 2020 and 2080.

The unique imperative for climate policy and strategy is that decisions
implemented now will determine climate and environmental damage outcomes
in one hundred years' time, such as impacts on biodiversity, flooding and mass
human migration. These issues affect people of all nations, from those living in
the poor Bangladeshi river delta to rich financial hubs like New York, London
and Sydney, to name only a few.

Energy efficiency measures, which amount to doing more with less, have been
proposed as a solution: for example, more energy-efficient buildings and
voluntary simplicity in domestic consumption, such as fewer children, smaller
houses, smaller cars and smaller bellies. While better energy efficiency in
buildings is
fairly obvious, neither voluntary simplicity nor the central planning of private
lives has fared well in policy terms over the last century. Others feel that the
trend of employing energy to support advanced standards of living will not
change. Indeed, the demand for electricity is expected to double from 2009 to
2050. Advocates of increasing energy production argue that no change in
energy policy is necessary other than to price in the new externalities, such as
ameliorating greenhouse gas pollution.

It is realistic to hope that policy makers will deal with climate change. The
world has previously united to solve similar problems of chlorofluorocarbon
gases damaging the ozone layer and acid rain. In his advice to the United
Nations, Jeffrey Sachs outlined that our crises have solutions but require good
science and technology, population control and finding ways to live sustainably
with biodiversity and water production. He says (Sachs 2008b):

We have within reach solutions for all of these challenges, the irony
I should say …. is not that we are at an abyss …. its almost the
opposite, we've unlocked the ability to promote economic
development in all parts of the world, we have at our hand the
ability to end extreme poverty, we have before us …. technologies to
replace dirty fossil fuels …. we have these things, the question is
whether we can bring knowledge to bear on these solutions, and
then find a common purpose on the planet.

Unfortunately, until quite recently, the IPCC's message about climate change
fell on deaf ears. Politically conservative governments in America, Canada and
Australia continued to renege on their December 1997 commitments to the
Kyoto Protocol and were embarrassed by Russia's ratification of the treaty that
brought it into effect.ii In fact governments of most Anglo-American countries,
with the exception of the United Kingdomiii, actively discredited scientific
arguments about climate change and in some cases actively subverted action
(Ayres 2001; Hamilton 2006; Sachs 2008a).

New political parties have since been elected in Australia and America. It is
well known that Australia's Prime Minister Kevin Rudd, in his first act of office,
ratified the Kyoto Protocol on 3 December 2007. Furthermore, America's newly
elected President Obama appreciates that America is facing one of its greatest
ever challenges. His goal is to guide the American economy to sustainable
growth. President Obama unshackled the Environmental Protection Agency
(EPA) to deal with greenhouse gases as pollution and personally appealed to
American Congressmen to support the 2009 Waxman Markey Bill to mitigate
America's contribution to global warming.

With these political developments now behind us, the climate change debate
has moved from science into economics, technology and the competitive
strategies of nations, industries and businesses. If fossil fuel usage is
constrained by greenhouse gas emissions, all countries face the challenge of
using energy resources more efficiently. Industries and countries need to cope
with new commodity and factor substitutions between industries and between
countries. At the same time, national governments are still charged with
stewardship of their citizens' welfare. They need to balance policies for
improved standards of living, employment, utilisation of national endowments,
international trade, technology and security.

Climate change is therefore a major cross-disciplinary area of strategy and
policy. It involves macro and welfare economics, political economy, business
and industry strategies, security and warfare strategies, finance, valuation,
technology, climate science, operations research, game theory, philosophy,
sociology and psychology.

1.2 Policy context

Policy issues in climate change


The key climate change policy issues are mitigating the level of future adverse
effects and helping vulnerable communities cope with unavoidable
consequences. At present, storms, floods, rising sea levels, drought and
desertification affect approximately 211 million people. Many of these people
are already living in poverty, hunger, poor health, environmental decline and
insecurity. In the future, global warming will impact all countries both in
physical and economic terms.

The unavoidable consequences of global warming include extreme weather


events (storms, cyclones and heat waves), shifting rainfall patterns resulting in
disruption to crops and water supplies, environmental degradation, destruction
of ecosystems and extinction of biodiversity, melting of glaciers and polar ice
caps, acidification of oceans, rising sea levels, inundation of coastal cities and
low-lying regions, changing of sea currents (Gulf Stream and El Niño), water
table contamination, populations coming into contact with pesticides, arsenic,
cyanide and heavy metals from coastal sediments and human suffering from
water shortage, mass starvation, forced migration, mass sickness and
pandemics, political instability and armed conflicts.

Examples may include Hurricane Katrina, aggravated El Niño effects and the
Iraq and Somali wars. According to the IPCC, worldwide sea levels rose 17cm
over the 20th century and are projected to rise by another 18-59 cm by 2100. If
Antarctica and Greenland thaw, the sea level rise could be as large as 75
metres.

The magnitude of the problem has prompted calls for a comprehensive set of
sustainability policies to address key risks across the environment, social
equity, economic futures and national culture. Lowe (2009, pp.1-4 & 19-20)
identifies the risks by asking questions about resources and social stability, for
example:

Are we likely to run short of critical resources or energy for heating,


washing and cooking? Are we doing serious damage to the natural
systems that support us, for example clean air, potable water and
food? Is our society stable and equitable (implying the absence of
instability based on inequity between rich and poor)? Is our
economic activity viable and able to sustain our living standard
under the challenges of resources and globalisation? Are our
cultural and spiritual identities stable under the challenge of
globalisation and imported values?

In the United Kingdom, Nicholas Stern (2007) applied the techniques of


computable general equilibrium (CGE) modelling to show that the economic
cost of inaction on climate change is greater than the cost of action. In
Australia, Professor Ross Garnaut (Garnaut Climate Change Review 2008a;
2008b) and the Australian Treasury (2008) conducted investigations similar to
Stern's and found similar results.

Policy needs
Cochran & Malone (1995) define the essence of policy as: “Public policy
consists of political decisions for implementing programs to achieve societal
goals.”

While this definition has inherent verisimilitude, it is perhaps merely a


description rather than a theory of public policy because although it can be
used to explain much, in fact it has little value in predicting public policy
outcomes or development. Cochran et al. (1993) better captures the scale of
the behavioural complexity in policy making: “The term public policy always
refers to the actions of government and the intentions that determine those
actions …. Public policy is the outcome of the struggle in government over who
gets what.”

It is widely appreciated by the major economies of the world that they need to
join together with common policies to contain emissions. However, complexity
arises because equity is an important issue. Any policy response needs to
address a number of fundamental issues leading to different behaviour
amongst nations:

• developed economies, such as America, the European Union and


Australia, have very high per capita emissions and have caused most of
the current climate crisis. While the European Union has been actively
limiting emissions, America and Australia do not yet have legislated
climate policies. These have proven controversial and implementation
has been delayed. Industries and consumers in developed countries
need time to adapt, allow sunk investment to be amortised and reduce
aggregate emissions rather than per capita emissions. If lifestyle,
employment and security are threatened then developed countries such
as America almost certainly will not participate in climate change
amelioration
• developing economies, such as China and India, have not created the
current issue and have comparatively low emissions per capita to date.
However, China is rapidly growing and has one-fifth of the world's
population. It has become the world's second largest emitter.
Continuation of its present trajectory will endanger the world. For
example, over the past decade China has been commissioning one new
coal-fired power station each week. If growth in living standards and
employment is unreasonably impacted then developing countries will
not participate
• countries are coming together under the auspices of the United Nations,
which is an institution that has been seriously weakened in recent years
by the unilateralism of America, Australia and the United Kingdom
• at this point in time, just as climate change science and need for action
has been widely accepted, the world found itself in the 2008-9 Global
Financial Crisis and recession
• climate damages such as sea inundation and extreme weather will affect
the infrastructure and endowments of all countries, albeit in different
ways. For example, Australia's coal industry is threatened because coal
is the main polluting fossil fuel
• the world is now highly interlinked through trade. Exporting countries
such as China and Germany will suffer large loss of income if the
economies of their customers suffer climate change damage
• it is an unfortunate feature of international relations that countries often
cheat on their joint obligations. There needs to be effective auditing of
countries by the United Nations
• industries such as steel and aluminium production will move to wherever
they find the lowest cost of production. This is called “carbon
leakage.” Polluting industries may gravitate to those jurisdictions where
there is no carbon levy and continue with undiminished pollution
• there is a need to legally protect the biodiversity of the planet because a
loss of biodiversity will adversely affect all people in the long term
• there is a need to invest in technology to accelerate the development of
substitution technologies (such as electric cars), supplementary
technologies (such as carbon capture and storage) and new technologies
that will remove CO2 from the atmosphere.

Agreeing a set of policies to ameliorate climate change really means agreeing


on a model of how these policies will work. This is because climate change
treaties are in effect “alliance contracts,” where each participant shares the
profits and correspondingly shares the losses. Unfortunately, as countries are
well aware, immediately after an alliance contract is entered into, the
conditions that applied at the time will change. Through elections,
governments will change, wars will begin, countries will suffer unpredictable
earthquakes and tsunamis, and the effect of the contract will work out
differently than envisaged. Cultural differences will play a big part. For
example, in China a contract is mainly a statement of intentions at the time
and may be totally disregarded if new circumstances arise or new market
opportunities or alliances present themselves.

As an alliance contract, a climate change treaty needs to be sufficiently flexible


to cope with these changes. The people administering the alliance contract
need to be able to reappraise changing situations and find other ways to
achieve a win-win outcome for all stakeholders. This is not so easy to do when
various countries are proceeding to change their institutions on the basis of
previous commitments and arrangements.

The Indian lawyer Anuradha (2009) has identified policy and legal
requirements relating to infrastructure and technology transfer that will
enable developing countries such as India to join with industrialised countries
in addressing global warming:

(i) a framework for assessing economic costs of undertaking


emission reductions, while at the same time investing in economic
growth and development priorities

(ii) predicating emission reductions by developing countries on the


full adherence by developed countries to binding legal obligations
for financial and technical assistance and technology transfer

(iii) a mechanism for periodic assessment at the national level of


programmes and activities necessary for technical and financial
assistance, capacity building and technology transfer and its cost
implications

(iv) evolving clear benchmarks and criteria for monitoring and


evaluating whether implementation of obligations relating to
capacity building, technical and financial assistance and technology
transfer has been effective

(v) articulating any emission reduction targets as being conditional


on all of the foregoing.

Evidence-based policy
In The Policy Context for Research, Lorman & Van Groningen (2009) provide a
more comprehensive social and behavioural definition of public policy:

Public policy is about the arrangements for social and economic life
in our society. It comes out of the interaction between the different
interests of stakeholders who have different views about what
constitutes a problem that needs to be solved. These interactions
are mediated through an extensive set of institutional arrangements
…. This engagement is done within and between organisations with
different traditions, including international organisations and
processes. Within formal political processes the emphasis is on
values and ideas, sustained through alliances, brokerage and
compromise …. Public policy is made when people engage with
others, through their interests, commitments and paid occupations
in shaping social and economic arrangements. This capacity to
engage and shape arrangements is greatly influenced by their
command over resources. Those with the greatest command over
resources have the greatest potential to influence policy outcomes.

Policy makers use many techniques in the democratic process of shaping policy
amongst stakeholders. These include evidence-based policy, “dialogic” policy
development (multiple ongoing stakeholder dialogue) and Lindblom's (1959)
incrementalism or “muddling through” alternative to the rationalist model. As
Chapter 3, Political Economy of the Anglo-American world view of climate change,
demonstrates, the development of climate policy has been amongst the largest
evidence-based policy research projects ever undertaken. This dissertation
therefore uses evidence-based policy research as the underlying paradigm for
its climate-economic policy research framework. As will be discussed below,
the concept of evidence is more synonymous with an estimate than an
irrefutable truth.

Keane (2009) has discerned a trend towards “monitored democracies,” similar
to India, where government decisions will be increasingly monitored by many
non-governmental organisations. In a late 1990s example of this trend,
Prime Minister Tony Blair sought to reform United Kingdom government policy
and corporate governance with evidence-based policy (United Kingdom
Cabinet Office 1999; UK Hampel Committee 1998).

Although some would argue that Tony Blair's own Prime Minister's office did
not provide a very good example of transparency and evidence-based analysis,
the underlying assumption of evidence-based policy remains undisputed: better
policy is achieved with research and that better policy produces better
outcomes. In contrast, poor policy usually wastes money and fails its aims.

Influential deductivists such as Sir Karl Popper and Thomas Kuhn argue that
confidence can only be developed in a hypothesis by attempting to falsify it
through tests (see later in this Chapter). Deductivists would never agree that a
clinical trial is sufficient to be sure that a drug will cure the next person tested.
Popper's oft-quoted example is that no matter how many white swans are
observed, the absolute theory that all swans are white is never justified (Magee
1974, p.22). However, deductivists would agree that the more tests a drug
withstands without failure then the more robust is the efficacy hypothesis. iv

There are famous experiments where deductivists have developed tests of a
hypothesis, such as testing Einstein's theories that light bends and space-time
curves. However, in general it has proven extremely hard in practice to
progress science and society through a rigid deductivist discipline of public
criticism and falsification.

For example, modern political systems are not able to function by hypothesis
falsification. Many issues dominate politics. Neither politicians nor the
bureaucracy like to encourage negative criticism, even if rationality is
identified with the virtues of public criticism and falsification testing.
Unfortunately, people are not purely rational beings. They have emotions and
tend to respond poorly to falsification attacks. Pragmatism, realism, working
trade-offs and sub-optimisation are the norm rather than exception. Indeed,
the working assumptions of the bureaucracy are rarely, if ever, examined. This
is the reality of the messy social milieu for public policy formation.

Instead of deduction, society tends to work by induction, which is the process
of using bodies of well specified information, professional systematic practice
and human intuition and creativity to infer generalisations from specific
observations. This creative process is not uncontrolled but subject to various
measures of quality assurance such as peer and judicial review. The
requirement for coherence and believability of any form of induction is the
same as proving a hypothesis beyond reasonable doubt in a court of law. In this
tradition, policy makers approach issues as a “historian who sees common
tendencies in certain contexts, not a philosopher who seeks clear general
principles that apply across contexts” (Brooks 2009).

Policy feasibility

Evidence-based techniques place a high value on evidence being consistent
and rational in order for confidence to develop in the hypothesis. Historical
analysis is a very important part of establishing consistency because every
time more observations confirm a theory, the result becomes more believable.

For example, the scientific method has been exceptionally successful in
validating drugs through clinical trials. Leonhardt (2009) observes that policy
makers and doctors alike have become perplexed by the plethora of treatments
available and the lobbying groups for drug companies, device makers,
insurers, doctors and hospitals. He notes America's transition to evidence-based
care, where doctors and policy makers, working together across many nuanced
circumstances, are taking the next step of identifying the best treatment
practices among all the alternatives:

But there is one important way in which medicine never quite
adopted the scientific method. The explosion of medical research
over the last century has produced a dizzying number of treatments
for different ailments. For someone with heart disease, there is
bypass surgery, stenting or simply drugs and behavior changes. For
a man with early-stage prostate cancer, there is surgery, radiation,
proton-beam therapy or so-called watchful waiting. To enter
mainstream use, any such treatment typically needs to clear a high
bar. It will be subject to randomized trials, statistical-significance
tests, the peer-review process of academic journals and the scrutiny
of government regulators. Yet once a treatment enters the
mainstream — once we know whether it works in certain situations
— science is largely left behind. The next questions — when to use it
and on which patients — become matters of judgment, not
measurement. The decision is, once again, left to a doctor’s
informed intuition .... The human mind can sometimes do a better
job of piecing together amorphous bits of information — diagnosing
a disease, for example — than even the most powerful computer. On
the other hand, human beings can also be unduly influenced by just
a few experiences, like the treatment of an especially memorable
patient. As a result, different doctors frequently end up coming up
with different answers to the same question. Cardiologists in
Davenport, Iowa, are quick to insert stents; cardiologists in Iowa
City and Sioux City are not. They can’t both be right. Some people
with heart disease are getting the best treatment, and some are not.
The same is true of debilitating back pain, various cancers and even
pregnancy. .... The lobbying groups for drug companies, device
makers, insurers, doctors and hospitals have succeeded, so far, in
keeping big, systemic changes out of the bills. And yet the modern
history of medicine nonetheless offers reason for optimism.
Medicine has changed before, after all. When it did, government
policy played a role. But much of the impetus came from inside the
profession. Doctors helped change other doctors.

This type of evidence-based policy making that relies on inter-subjectivity is
called the “objective theory of evidence.” The name is somewhat controversial
because the words “objective,” “theory” and “evidence” have always been
challenged by one philosophical persuasion or another. For example, how could
something be simultaneously objective and subjective, or be a theory when it is
really an untestable hypothesis, or be evidence when it is really an observation
or estimate?

Sir Karl Popper (1972) proposed that this conundrum be solved by recognising
a “World III” of objective knowledge comprising statute and common laws,
scientific papers, textbooks, documented procedures etc. While both Popper's
“World III” and the “objective theory of evidence” remain controversial in
philosophical circles, these ideas have had a profound influence on normative
theories for practical professional practice as described above.

Evidence based processes are prima facie subjective Bayesian inductive
inference due to the subjective assignment of prior probabilities. However, the
“objective theory of evidence” holds that these processes have the nature of an
“objective theory” because professional researchers have a concern for
objectivity and independence in their work, such as undertaking professional
error-statistical practices as part of their methodologies (Mayo 1996; Mayo &
Spanos 2004; Staley & Cobb 2009); peer reviewers introduce an inter-
subjective due diligence layer because their concern for truth means that
research assumptions and results are subjected to informed criticism and
repeatability testing (Achinstein 1991; 2001; Rehg & Staley 2008); and
Bayesian inference conforms to the “likelihood principle” because it merely
depends on prior probabilities, which have nothing to do with the experiment
(Birnbaum 1962, p. 271; Sprenger 2008, pp 197 & 204).v Therefore, the results
emerging from a process that applies the scientific method are qualified to be
considered as part of an independent body of knowledge (which is Popper's
“World III”).

The “objective theory of evidence” relies on two primary concepts. The first is
that true and false are not absolute states. In A Treatise on Probability (Keynes
1921, Chapters 15 & 17), Keynes hypothesised a continuum between falsity
and truth. He suggested that intermediate points in this interval are associated
with probabilities of truth. The legal system accepts his proposition, for
example, requiring guilt to be proven beyond reasonable doubt in serious cases
and on the balance of probabilities in less serious cases.

The second concept in the objective theory of evidence is associated with
Thomas Bayes' theorem of conditional probability. This theorem states that the
probability that a hypothesis is true at a point in time, given certain evidence,
is the probability of the past evidence occurring when the hypothesis was true,
multiplied by the probability of the hypothesis being true in any case and
divided by the probability of the evidence occurring in any case. For example,
suppose a sports drug test is 95% accurate. Assume that 1% of sports people
have taken drugs. The probability that a positive result will occur in random
tests regardless of whether drugs have been taken or not is 5.9%. vi Therefore,
the Bayesian probability of a person having taken drugs, given that a test
result shows positive, is only 16.1%.vii If the accuracy of the drug test is
increased to 99%, the probability of a person having taken drugs given the test
result is positive rises to 50%.viii

Bayes' theory demonstrates the reason why policy makers seek confirmation
from economic modellers that a particular policy represents a scenario that is
at least feasible. In an example analogous to the one above, we assume that
economic modelling has a 90% probability of correctly showing a particular
policy is feasible, if indeed it is feasible, and that say 60% of all proposed
policies are feasible. The probability of economic modelling identifying that
policies are feasible, notwithstanding whether the policy is or is not, is 58%. ix
Therefore, the Bayesian probability that a policy is feasible given that
modelling shows that it is feasible, is a more impressive 93%. x
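
As an illustrative aside, both of the above calculations can be verified with a
few lines of Python. This is a minimal sketch of Bayes' theorem, not part of
the original analysis; the function name and figures simply restate the
examples above.

    # Bayes' theorem: P(hypothesis | positive evidence).
    def posterior(true_positive_rate, false_positive_rate, prior):
        # Probability of a positive result arising at all.
        p_positive = true_positive_rate * prior + false_positive_rate * (1.0 - prior)
        return true_positive_rate * prior / p_positive

    # Sports drug test: 95% accurate, 1% of sports people have taken drugs.
    print(posterior(0.95, 0.05, 0.01))   # ~0.161, i.e. 16.1%
    # At 99% accuracy the posterior rises to 50%.
    print(posterior(0.99, 0.01, 0.01))   # 0.5
    # Policy modelling: 90% accurate, 60% of proposed policies feasible.
    print(posterior(0.90, 0.10, 0.60))   # ~0.931, i.e. 93%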

Thus in this example economic modelling has improved a policy maker's
chances of a feasible policy from 60% to 93% by simply showing that the policy
is a feasible scenario. Note that in this example the policy maker is not
necessarily asking the economic modellers whether the particular policy is the
best policy, only whether it is feasible. However, if in addition to determining
feasibility the modellers can reliably discriminate between policies and develop
a better policy then the “value added” is correspondingly greater.

Policy risks

The Australian Securities and Investments Commission Practice Notes
expound on the presentation of risk in financial projections (ASIC). Many
elements have equal applicability in policy settings. There are two types of
risk: the systemic risk of being unable to fully represent the real world in a
model; and non-systemic or ordinary uncertainties associated with the
economic environment.

Systemic modelling risks

As for all professionals, policy modellers' prima facie duties include integrity,
objectivity, an absence of any conflict of interest, possession of the necessary
skills and competence, and processes for care and due diligence. The duty of
care includes an appreciation of misleading (or deceptive) assumptions,
including misleading by omission.

Wise hands in policy formation recognise first and foremost that any projection
or forecast is a matter of opinion and judgement. They look for reasoned and
sustainable assumptions and a systematic modelling process. For their part,
modellers need to appreciate that a reader's understanding of the assumptions
is essential for their proper assessment of the information contained in the
model. Therefore, specialists and experts preparing models need to take as
much care with the formation and publication of the assumptions as they do
with the model results.

Non-systemic modelling risks

As there is little verisimilitude to be found in any single-point or stand-alone
scenario, there is no point looking to one or other scenario as an immutable
outcome. Nevertheless, there is considerable value in understanding the
differences between scenarios in order to develop a feel for the patina of
intensity in economic responses to policy. This can be achieved with a narrow
range of scenarios that suffice to highlight risks. Modelling can become
meaningless if the range of scenarios is too wide or too narrow.

Communication risks

The way results are read by the intended audience is also important and there
is a risk that the presentation of results could be misleading. Due to a human
behavioural fallibility, many people act on the assumption that the middle value
of a table or range is the most likely value. Rather than extensive tables, it is
better to show the most probable outcomes and discuss the variables that have
a significant impact on these results.

Public exposure of policy

Evidence-based policy has obvious application in clinical testing and other
scientific experiments where the objective theory of evidence holds. It also has
significant benefit in other areas of society where the scenarios are less
experimentally clear, for example in policy areas such as economics, health,
education, law and defence. Modelling is particularly useful in these areas
because a great number of dependencies exist, variables can rarely be held
constant ceteris paribus as is done in controlled scientific experiments, and the
effluxion of short periods of time inevitably brings additional changes to the
basis of the policy research. A further key problem in macroeconomic or
climate research is that the sample size is just one.

From the above discussion it may be appreciated that a key requirement in the
process of evidence-based policy is that research is assembled that maximises
the probability of correctly determining that a proposed policy is both feasible
and the best policy. In this, consistency of the evidence is extremely important.
For example, developing a historical analysis of the political economy of the
policy area along with economic modelling for future scenarios of the policy.

As the great deductivists like Popper and Kuhn surmised, exposure of expertly
prepared evidence-based research to peer review and, ultimately, to an open
and transparent process of public criticism provides a diligent proving ground
for assuring that a proposed policy is both feasible and the best policy. In
particular, it is often only at the stage of public exposure that issues of social
equity and justice are appropriately weighed, for example, doing the most for
the majority while at the same time looking after the least well off as argued by
Rawls (1972).

Prior to public exposure, the process of developing expert opinion for evidence-
based policy usually relies on normative principles of systematic practice in the
respective profession, be it economics, law, engineering or another profession.
Ironically, while systematically applying inductivism throughout evidence-
based policy, enlightened professional practice complies with strict deductivist
principles in claiming only to represent current best working hypotheses and
shunning any ambit that these professional hypotheses be regarded as a
science of theories and laws.

Lastly, evidence based policy is always at risk of being subverted and the
“policy makers for policy making” need to be ever vigilant of degenerate
policy-driven evidence. This is selective or manipulated evidence provided to
justify or promote a particular policy. For example, Thomas Kuhn showed in
The Structure of Scientific Revolutions (1962) that vested interest groups will
invest large resources in defending the status quo. Bryson & Mobray (2005)
highlight the need for high level impartiality and a passion for diligent
governance to eliminate conflicts of interest.

Tools of policy research


One methodology rarely encompasses all that needs to be investigated in
policy research. Usually, the policy issue is deconstructed into smaller,
manageable pieces with an appropriate tool chosen for each research task.
Policy analysis becomes the insight developed through iteratively using each
tool and ensuring consistent answers.

Policy making may be understood from the research methods used for
economic decision making and allocation of resources. The main categories are
(expanding on Gruber 2007):

• political economy analysis to understand the fabric in which the
tâtonnement of marginal social benefit and marginal cost occurs.
Lindahl pricing techniques for public goods may be applied by
evaluating disclosed or expressed preferences of individuals against
their willingness to pay. Expressed preferences can be determined
through engaging with lobbyists, referenda and election mandates for
political parties
• mathematical microeconomic models such as indifference curve-budget
constraint graphs and equilibrium models for constrained utility
maximisation; and supply & demand diagrams for equilibrium and social
welfare efficiency
• empirical analysis of data using statistical methodologies that measure
the impact of government policy on individuals and markets. For
example, randomised trials and quasi experiments provided by
differential changes in the economic environment, time series analysis,
cross-sectional regression analysis (comparing many individuals at one
point in time) and structural modelling to determine underlying drivers
or factors
• budget analysis using cash and accrual accounting, static and dynamic
scoring, intertemporal accounting (generational accounting &
intergenerational equity), short-run, automatic and discretionary
stabilisation, and IS-LM tâtonnement
• cost-benefit analysis applying the theoretical tools of microeconomic
analysis (above) in the context of a host of evidence from surveys and
expert analysis
• ideological analysis in which conservative, liberal, radical and
alternative ideologies are considered. Point of view analysis is a similar
technique which analyses policies from prominent points of view
• influence analysis which measures the influence of the policy on target
groups and evaluates the effectiveness of reaching the target.

The importance of historical analysis in developing evidence for consistent and
believable hypotheses was referred to above. Discovery of the historical
background through political economy analysis has become de rigueur for
research in evidence based policy.

1.3 Equilibrium tools for policy research


From the above tools of policy research it may be noted that governments
often look to the discipline of economics to help them understand complex
policies. Classical economic theory and neoclassical economic models have
become one of the main ways of evaluating policy proposals.

Main mathematical microeconomic research tools for developing insights into
policy have included the modelling of equilibria in competitive commodity
markets and modelling that also includes the market for financial assets
(Ljungqvist & Sargent 2000).

Equilibrium modelling has two variants: partial and general equilibrium.
Marshall's famous microeconomic scissor curves for supply and demand are
the classic representation of partial equilibrium analysis (Marshall 1890).
Partial Equilibrium modelling focuses on a single commodity and assumes
ceteris paribus that the supply and demand curves are independent of each
other. This means that if demand for a commodity increases, it will not lead to
a change in the supply curve because for small changes (called perturbations)
supply is isolated from dynamic effects in other industries.

General equilibrium has the more ambitious goal of finding a commodity
market tâtonnement where partial equilibrium assumptions do not apply.
General equilibrium seeks to explain the price and quantity effects of whole
economies, which are composed of many individual commodity markets. As the
production of commodities is interlinked and the raw materials of each
production unit comprise the output commodities of other industries, the
demand of the downstream industries is the derived demand of the upstream
industry. Indeed, in practice many complex industrial feedback loops occur.
Furthermore, if raw materials, labour or capital are constrained then the
producers in each market need to compete for scarce resources and bid for
raw materials. The tétonnement of each market is contemporaneously settled

18
in concert with all of the other markets. This compound effect accounts for the
upward sloping supply curve.
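
The derived-demand mechanism can be made concrete with a small numerical
sketch. The two-industry Leontief coefficient matrix below is purely
hypothetical; it simply shows gross output exceeding final demand once
intermediate (derived) demand is settled.

    import numpy as np

    # A[i, j]: input of commodity i required per unit of output of industry j.
    A = np.array([[0.2, 0.3],
                  [0.4, 0.1]])
    final_demand = np.array([100.0, 50.0])

    # Gross output must cover final plus derived demand: x = A.x + f,
    # hence x = (I - A)^-1 . f
    gross_output = np.linalg.solve(np.eye(2) - A, final_demand)
    print(gross_output)   # [175., 133.33...] - well above final demand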

A review of the literature (Chapter 4 Economic models for climate change
policy analysis) suggests that general equilibrium seems to have become the
preferred “model of choice” for developing inputs required for climate change
policies and strategies. A computable general equilibrium (CGE) model
simulates the policy scenarios across commodity markets and consumers using
a system of equations that describe the economy, international trade,
technology and resource constraints of labour, capital and other critical limits
such as the capacity of oceans and atmosphere to absorb CO2. Economists,
engineers and industrial ecologists use CGE models to simulate policy options
by solving the complex interactions between different technological processes
and labour markets across wealthy, rapidly developing and poor regions.

The crux of the climate-CGE modelling approach can be deciphered from these
examples:

• William Nordhaus' DICE model was used in providing recommendations
to the American Administration regarding the Kyoto Protocol and policy
options for post 2012
• The United Kingdom's Stern Review relied on the PAGE2002 climate-
CGE model to investigate climate policy in the United Kingdom and the
European Union
• The Australian Treasury and Garnaut Review modelling for Australia's
Carbon Pollution Reduction Scheme (CPRS) used the Australian Bureau
of Agricultural and Resource Economics (ABARE) GAIM climate-CGE
model

General limitations of CGE tools


CGE is an internally consistent neoclassical paradigm. It assumes that
democracy and free markets are the best form of social organisation and that
at an aggregated level everyone has the same perception of utility in personal
consumption and will make rational decisions. One of the paradigm's
weaknesses is that the construction of synthetic market models and the
evidence used to do this is embedded with assumptions.

A neoclassical paradigm may easily be (or become) delaminated from reality.
Economics is not a science based on immutable laws. It can only ever be a
consistent discipline of practice with working assumptions that have proven
generally valid in the past. The past is not always a reliable guide to the future
(Popper 1959). Quite often assumptions become invalid and sometimes the
body of policy makers doesn't notice this happening. At this point the paradigm
diverges from reality. As we have seen from America's recent sub-prime credit
crisis and financial collapse, neither individual nor collective behaviour can be
fully predicted by sets of equations. Markets are subject to failure due to
behavioural factors such as the breakdown of enlightened self interest, which
is an article of faith in the dogma of self-regulation, and not being fully
accountable for the outcome of one's actions, which is called “moral hazard.”

One of the reasons that CGE modelling delaminates from reality is that its
assumptions about utility and profit maximisation are generalisations. At times
when individuals and, even more importantly, institutional stakeholders behave
in different ways then these assumptions can become unjustified. Any
numerical policy research needs to be supplemented with an understanding of
the values and ideas, alliances, brokerage and compromise of the strongly
competing stakeholder institutions that have large resources to influence
policy outcomes. It is necessary to evaluate the same policies with reference to
the tools of political economy, ideology, moral philosophy and influence
analysis as Lorman & Van Groningen (2009) note: “This capacity to engage and
shape arrangements is greatly influenced by their command over resources.
Those with the greatest command over resources have the greatest potential to
influence policy outcomes.”

The difficulty of achieving effective policy analysis may be gauged by the large
range of stakeholder institutions in policies with national and global
implications. These include international and global organisations such as the
United Nations, multinational corporations, international social movements,
and trade, aid and immigration policies within national political processes;
national governments and domestic institutions such as Ministers of
Parliament, Departments, courts, non-government organisations (NGOs) and
private sector industry organisations and companies (which are often striving
for self-regulation); the bureaucracy, which often features hierarchical control
and coordination; the professions, which are guided by principles of autonomy,
self-regulation and occupational control; and social movements which have
open and fluid structures, such as Greenpeace, the World Wildlife Fund,
German Watch and the David Suzuki Foundation.

Furthermore, the large range of institutions will often have just as large a
range of alternative agendas, different views on the importance of key issues
and even strong ideological and moral differences about the collective
behaviour of how societies work and individual behaviour, for example,
neoclassical rationalism, Keynesian, Monetarist, self-regulation, social
democracy, capitalism, “dry-liberal”, “wet-liberal”, welfare state, green and
radical views of all types. This can lead to highly contradictory contexts and
pragmatic tradeoffs in negotiating multiple and conflicting objectives.

For example, countries will be the primary stakeholders entering into
international treaties. However, in Western democracies, governments are
voted in and out according to how citizens see their quality of life and security
unfolding. As such, the ability of governments to exercise their social mandate
is highly constrained by voters' perception of their future welfare and security.

It is the citizens and powerful vested interests in the country that can be the
real stakeholders in international agreements. For example, oil, coal and gas
producers and users, such as power stations and motorists, would be
significantly affected by taxes or escalating emissions permit costs designed to
switch users from using fuels that pollute to clean fuels and technologies.

Traditionally, fossil fuel producer groups (or their industry associations) have
had exceptionally strong influence on governments. These producers are often
commercialising national endowments and in doing so bringing much needed
income, industry and prosperity to the country. Resource companies often
control commodity cashflows with such immense magnitudes that they are
singularly important to countries. Global producers, such as the “seven sisters” of
the oil industry, are bigger than most national governments and on an equal or
better footing in negotiating with governments. These companies can “play the
employment security card” with their employees by threatening job losses, for
example job losses if logging of forests, fishing or coal mining is restricted or
financially impaired in any way.

Governments are almost universally committed to economic growth. Lowe
(2009, pp.8 & 74) provides a good example of the popular radical belief that
the cult of economic growth is invalid: “The fundamental myth of modern
society, unlimited growth …. the ‘growth is good’ idea: that growth is either
inevitable or, at least, desirable as the bringer of wealth and happiness.
Challenging it is tantamount to heresy, so the benefits of growth are acclaimed
and the costs are ignored.”

This paradox of growth has led a number of authors from John Stuart Mill to
the present day to argue for a growth-less or steady state economy (Mill 1848;
Daly 1992; Hamilton 2006). Even the New Scientist editor writes in the
magazine's special issue The Folly of Growth: how to stop the economy killing
the planet: “Most economists care only about growth. Where resources come
from and where wastes go are largely irrelevant. If we are to leave any kind of
a planet to our children, this needs to change” (New Scientist 2008).

As already noted, CGE models are rationalist models that seek to maximise
growth, or at least welfare as measured by the expansion of consumption.
Those who criticise the paradigm of growth are equally scathing of CGE
models being tools of the cult of growth that conveniently justify growth
policies. However, criticism of neoclassical economics and CGE models as
promoting growth is largely misplaced. This is because constraints on resource
usage from natural endowment scarcity and specific policy implementation (for
example to control emissions) mean that the dual solution provides the very
efficiency in resource utilisation that Lowe and others seek.

Societies that restructure from unconstrained growth to constrained growth
can achieve the auto-stabilising goals sought by Lowe. However, CGE
modelling is a market tool that shows how this can be achieved through
democratic and market means rather than through quantitative regulation and
central planning mechanisms.

It can be seen that inherent conflicts in the outlooks and aims of individuals
and institutions necessitate policy implementation being fine-tuned through a
large number of potential instruments of intervention. Policy is often defined
by the instrument that is used: “It can express itself through the clarification of
public values and intentions; through commitments of money and services; by
the granting of rights and entitlements” (Considine 1994, p.3).

Such instruments of intervention range from “reasoned inaction” through
research and development, monitoring, communication and information flow,
education and moral persuasion, consultative mediation, self-regulation,
intergovernmental agreements and policies, new laws, and control regulations
and impact assessments to enforce standards and prohibit practices, to
institutional change and market price mechanisms.

The traditional process for policy is to set the agenda; formulate policy options;
select policy instrument; implement; monitor; evaluate; review; and terminate
(Sutcliffe & Court 2005, p.9; Lorman & Van Groningen 2009; Young & Quinn
2002, pp.13-4). The first phase of setting the agenda seeks to identify all
aspects of the issue. For example, the reasons why the issue is important,
competing definitions of the problem, potential policy instruments; the steps
ahead; and the power blocs and the stakeholder engagement required for
alliances, brokerage and compromise.

CGE analysis has its place in the second stage of the policy forming process,
namely, research. This phase encompasses the iterative research needed to
establish what needs to be done; identify potential intervention responses;
potential instruments; institutions that will implement the policy; individuals
and institutions that will be affected; and to provide information to help
achieve stakeholder institutions' support. The vast number of policy
instruments required for the fine tuning of policy implementation means that
high level policy research tools such as CGE models need to be carefully
finessed.

In order to fulfil this role, over the last four decades CGE researchers have
developed models for various influential institutional agendas and strategies.
For example, to take into account developments in instruments of intervention
such as carbon taxes and emissions trading.

The literature survey in Chapter 4 Economic models for climate change policy
analysis highlights that there are now many CGE models from policy
researchers' investigations into different dimensions of problems and exploring
advances in theory, techniques, data availability and computing power. From a
climate change perspective, these CGE models have evolved from economic
models into energy models, then economic-energy-emissions (E3) models and
now into economic-climate models.

Inadequacy of traditional CGE modelling for developing effective climate
change policies
As identified in the CGE literature study in Chapter 4 Economic models for
climate change policy analysis, the main weaknesses in traditional CGE models
for climate change policy analysis are: the difficulty of solving comprehensive
general equilibrium with spatial disaggregation; the computational complexity
in settling intertemporal CGE models, which are already optimisations, within
further overall climate damage and trade deficit feedback loops; including
emissions trading in each country and between countries; applying different
abatement regimes in each country, which is perhaps the most important
scenario outcome of an economic-climate model; and establishing the
redistribution of production between countries after differential carbon pricing
and abatement are introduced in each country.

The reason for this is that markets in CGE models are constructed with many
equations. This is quite onerous and imbued with many assumptions such as
elasticities and marginal productivities. When the number of countries and
commodities is expanded, the complexity of the task increases dramatically.
The rapidly multiplying assumptions become copious and manifold. The sheer
scope of addressing the huge set of exogenous variables means that detailed
due diligence of assumptions is difficult to complete. This compares to, say,
using data such as Input Output data at face value, and creating marketplaces
by virtue of the primal and dual formulations present in all optimisations. For
example, the “Main Theorem of Linear Programming” simultaneously maximises
an output isoquant while minimising resources. At the same time, the resource
marginal productivities are established endogenously, instead of exogenously
as in traditional CGE models.
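
A minimal sketch of this primal-dual mechanism, using an illustrative
two-activity, two-resource linear programme (the coefficients below are
invented for exposition only):

    from scipy.optimize import linprog

    # Maximise the value of two activities subject to resource limits.
    value = [3.0, 5.0]                 # value per unit of each activity
    usage = [[1.0, 2.0],               # labour used per unit of activity
             [3.0, 1.0]]               # capital used per unit of activity
    endowment = [40.0, 60.0]           # available labour and capital

    # linprog minimises, so the objective is negated to maximise.
    res = linprog(c=[-v for v in value], A_ub=usage, b_ub=endowment,
                  method="highs")

    print(res.x)                                # primal activity levels
    # Dual values on the resource constraints are endogenous shadow prices:
    # the marginal value of one extra unit of each resource.
    print([-m for m in res.ineqlin.marginals])  # [2.4, 0.2]

The dual solution prices labour and capital without any exogenous assumption
about their marginal productivities, which is the property referred to above.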

Government policy makers and private sector industry strategists need a
model where their own region, country and industry equilibrium is calculated
in a world context. All countries and industries want to understand their SWOT
profile of strengths, weaknesses, opportunities and threats, to appreciate how
this compares with that of their competitors and trading partners, and to
understand intertemporal tradeoffs such as how fast we need to change now as
compared to deferring action. Many strategic choices then have to be made
both locally and in response to changes in the relative competitive position of
nations. For example, the relocation of distribution warehouses away from
areas that may be impacted by climate change.

This highlights the primary limitation in current CGE models, which cannot
readily provide spatial disaggregation. For example, Australia's CGE models
are amongst the most sophisticated in the world, yet none of the Australian
CGE models could easily model Australia's climate change policy in the world
setting. The Garnaut and Australian Treasury policy analysts needed to manually
assemble a system of partial equilibria (i.e. manually create a synthetic
national equilibrium in a world context). The lack of modelling flexibility
appears to have been exceedingly frustrating, and the modelling team ran late
in its task.

A second major research gap in existing CGE models is the need to select a
production function, such as the Nordhaus DICE Cobb-Douglas function,
GTAP's Constant Elasticity function or a Translog function. It is difficult to
justify synthetic, econometrically-estimated production functions based on
calibration alone. Dale Jorgenson was the first person to use econometrics for
estimating American economic parameters, giving rise to the complex task of
econometric general equilibrium modelling (Johansen 1978; Hazilla & Kopp
1990).

A third research gap is present in Leontief, Nordhaus DICE and ten Raa's
modelling of intertemporal performance. In intertemporal models, population
and technology productivity are the only exogenous variables. Investment and
capital (i.e. accumulated and depreciated investment) are endogenous because
these factors have to be produced by the economy and the level of production
is determined by expectations of future consumer and industry demand.
Therefore, intertemporal models need a way of inherently controlling
investment in an industry.

In Leontief's B matrix approach, which is a type of Markov chain, the capital at
the end of the period is assumed to be zero. This implies that care needs to be
exercised in the use of Leontief B approaches to ensure that consumers and
producers do not consume the total capital base of the industries.

In the Nordhaus DICE model, which is investigated in Chapters 4 and 5 (and
Appendices 4 and 5), industry investment is any available excess of output over
consumption. Capital accumulation becomes an outcome. This is the reverse of
the actual situation where capital investment in industry needs to be
maintained and grow with output. To many, Nordhaus' approach of preferring
consumption over investment is unremarkable because it seems so much in
accord with consumption-led economics, which has been the pervasive Anglo-
American tenet of political economy.

In ten Raa's intertemporal formulation, which seeks equivalence with
Leontief's B matrix approach, capital is determined as a convolution of
investment.
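
As a minimal sketch, assuming a constant geometric depreciation rate (the rate
and investment path below are illustrative only), capital as a convolution of
investment can be computed as follows:

    import numpy as np

    delta = 0.05                                             # assumed depreciation rate
    investment = np.array([10.0, 12.0, 11.0, 14.0, 13.0])    # I_0 .. I_4

    T = len(investment)
    survival = (1.0 - delta) ** np.arange(T)    # (1 - delta)^age weights
    # K_t = sum over s <= t of (1 - delta)^(t - s) * I_s, a discrete convolution.
    capital = np.convolve(investment, survival)[:T]
    print(capital)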

A fourth research gap in traditional CGE models is communication. CGE
modelling is undertaken in batch processing environments where equations
are programmed in specialist modelling languages and presented to industrial
optimisation solvers. The results are returned as batch files of text. The lack of
a Graphical User Interface (GUI) for interactive model development, fast
turnaround and visualisation of results is a major disadvantage for research
productivity. Even more unsatisfactory is the difficulty in creating rich graphs
for presentation to policy makers, which often results in bland tables, minimal
graphics and poor communication. While enhanced graphics can be achieved
with supplementary tools, the lack of productivity due to double handling and
absence of early visualisation stifles agility and creativity.

Effect of inadequate policy tools on climate change policy makers
We are at a cusp in history that makes innovation in CGE climate modelling
important and timely. In Anglo-American nations, such as America and
Australia, major philosophical, economic, behavioural and security changes are
taking place. The great American dream of expansion and unlimited resources,
through force of arms if necessary, is evolving to a new type of sustainable
dream. New regionally disaggregate policy modelling platforms are needed for
these new times of heavily constrained and symbiotic global growth.

However, nations continue to vacillate about how long they can defer the
decision to switch from policies of unconstrained growth to policies of
constrained growth. This has led to dithering in international agreements, to
what some might call “policy paralysis”, and to the use of Prisoner's Dilemma
game theory strategies to minimise losses.

All of these elements were present at the UNFCCC Bonn meeting in June
2009, which failed to bring consensus to policy for 2012 and beyond. For
example, America determinedly sought China's agreement to targets such as
40% reduction in emissions by 2020 (compared to 1990 level). China
responded that this type of target is inappropriate but that China would be
cooperative in reducing emissions if America provides inexpensive green
technologies such as carbon capture and storage.

China's response has the merit of logic. A unique issue in climate policy is the
existence of a fixed tranche of emissions, beyond which global warming is
considered cataclysmic (see Chapter 3 Political economy of the Anglo-
American world view of climate change). This stark reality forces the
inescapable issue that either green technology is cheap and widely available or
countries will need to face large reallocations in their domestic production and
perhaps internationally. Many regions, for example the Spanish Asturias
(Arguelles et al. 2006), have expressed concern that limiting emissions will
have dire effects on their economies. Of course, making green technology
cheap and widely available leads to other issues such as minimal or no patent
protection for private technology developers.

Traditional CGE models have great difficulty in adequately coping with the
plurality of climate-economic policy constraints, which multiply the complexity
of models. For example, living within current income rather than borrowing to
maintain lifestyle, maintaining the purchasing power of the labour force,
managing energy requirements and greenhouse gas pollution, while achieving
social objectives such as expanding both population and the welfare of the
population.

A CGE framework for climate policy analysis is needed that captures the
background of changing Anglo-American, European, Chinese, Indian and other
world views, focusing on the various dimensions of the debate on climate
change and looking forward to satisfactory “win-win” solutions to the issues
that emerge from the interaction of such dimensions.

Resolving inadequacies in CGE climate policy modelling
A way of addressing the first and second shortcomings identified above could
be to utilise the Use (U) and Make (V) tables from national accounts instead of
the more synthesised Input Output tables or the equivalent Leontief (A) matrix.

The U, V and A matrices are related by the equation $U = A\,V^{T}$, where
$V^{T}$ is the transpose of the V matrix. As Gross Domestic Product is
$(V^{T} - U)$, many industrial relationships can be conveniently modelled by
retaining the U and V format. For example, pollution, emissions trading,
abatement and various energy sources can be directly modelled. Creating
Leontief's A matrix (and, for that matter, Leontief's B matrix) is useful for
many traditional analysis purposes but sacrifices information. In contrast to
utilising techniques associated with the Leontief A matrix, ten Raa's
$(V^{T} - U)\,s$ may be used as a straightforward production function, where
$s$ is the activity vector of the commodity production units.
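
A minimal numerical sketch of this identity, with hypothetical 2-industry,
2-commodity Use and Make tables (the figures are invented for exposition):

    import numpy as np

    # Make table V: row = industry, column = commodity produced.
    V = np.array([[90.0, 10.0],
                  [ 5.0, 80.0]])
    # Use table U: row = commodity, column = industry using it as an input.
    U = np.array([[20.0, 15.0],
                  [10.0, 25.0]])
    s = np.array([1.0, 1.0])     # activity vector of the production units

    net_output = (V.T - U) @ s   # ten Raa's (V' - U).s production function
    print(net_output)            # net commodity supply available for final use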

The Use-Make model implies a Leontief production function, which is a


constant return to scale formulation and special case of the Constant Elasticity
of Substitution model. Applied in a multi-industry model, there is substitution
between industries of the factors of production such as materials, labour and
capital. The optimisation process dual solution settles the market by balancing
marginal productivities and therefore marginal prices for tâtonnement. This
overcomes the usual objection to the Leontief production function that there is
no substitution of the factors of production within a single industry. Studies
comparing data envelopment analysis (DEA) and transcendental production
functions (Translog) demonstrate that there is little value in providing a more
advanced econometrically synthesised production function. The long use of
DEA in government and industry imparts confidence in the use of optimisation-
type production functions. Therefore, the use of Use and Make table
production functions within computable general equilibrium models appears to
be a prospective area for investigation.

ten Raa's benchmarking using $(V^{T} - U)\,s$ is in itself a highly efficient
production function across industry sectors, both for domestic substitution and
international substitution through bilateral trade flows.xi It models the trade-off
effects in policy scenarios across regions and industries. This analysis becomes
insightful when intertemporal outcomes are also constrained as in climate
modelling.xii However, the Armington assumption underlying all multiregional
input output models and CGE models is still applied: that commodities in the
same statistical class are substitutes, albeit imperfect substitutes (Armington
1969).xiii It applies to domestic industries as well as the international trade of
commodities.

ten Raa's benchmarking approach has the advantage that intertemporal
economic models can be readily, directly and transparently solved by fast
linear programming. In contrast to current CGE models, these benchmarking
models are holistic, comprehensive and highly flexible for testing new policy
formulations and the turnaround is very fast. Interior point nonlinear
programming brings these benchmarking models to the next level of
sophistication, for example, when nonlinear climate scientific equations are
used. While not nearly as fast as linear programming, nonlinear models remain
holistic and flexible.

Although traditional CGE models have a long heritage and are widely used,
there has been no definitive testing of whether benchmarking CGE models are
superior to traditional CGE models. This is because economics, strategy and
policy making are all disciplines of practice rather than sciences in the strict
sense of hypothesis testing. Only the use of both traditional CGE models and
benchmarking CGE models over a reasonably long period will develop a deeper
understanding of whether one or other formulation has compelling advantages.

The third research gap in intertemporal modelling, maintaining capital in an
industry as a competitive endogenous variable, can be bridged by using a
financial modelling technique where the ratio of sales to assets in each
industry is maintained. Provenance for such an approach may be
found in the use of Leontief stock coefficients to provide a relationship
between stocks and flows using a turnover period (Bródy 1974; 2004; ten Raa
2004; 2007).

DuPont Analysis deconstructs Return on Assets (all assets, including buildings,
machinery, inventories and debtors) with equation:xiv

$$\text{Return on Assets} = \frac{\text{Profit}}{\text{Assets}} = \frac{\text{Profit}}{\text{Sales}} \times \frac{\text{Sales}}{\text{Assets}}$$

For example, a certain level of assets is needed to support an expected
economic output or sales volume. For a manufacturer this might be twice sales,
while for a retailer it might be equal to sales. Therefore, capital formation in
intertemporal models can be satisfied with a constraint on future economic
output that limits future flows to the level of opening capital stock multiplied
by a Sales/Asset ratio.
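
A minimal sketch of such a constraint, with illustrative figures for a
manufacturer (assets roughly twice sales) and a retailer (assets roughly equal
to sales):

    import numpy as np

    opening_capital = np.array([200.0, 50.0])   # manufacturer, retailer
    sales_to_assets = np.array([0.5, 1.0])      # assumed turnover ratios

    # Future output in each industry is capped by opening capital stock
    # multiplied by its Sales/Assets ratio.
    output_ceiling = opening_capital * sales_to_assets
    print(output_ceiling)                       # [100., 50.]

    proposed_output = np.array([90.0, 55.0])
    print(proposed_output <= output_ceiling)    # second industry breaches its cap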

A way of addressing the fourth gap in CGE policy research, being a lack of
agility and poor communication, may be to use a modern mathematical
optimisation platform with a rich set of data visualisation functions. The last
decade has brought considerable advances in the development of advanced
optimisation techniques within visualisation environments.

1.4 Research aims
Against the above backdrop, the main aim of this research is to answer the
question:

What changes in regional and industry performance are implied by a
change in the Anglo-American world view from unconstrained to
climate-constrained resource usage?

The means of understanding this question is to examine climate-economic
policies through the lens of a new spatial, intertemporal CGE policy research
tool appropriate for situations where resources are limited by climate change.

1.5 Research methodology


The five-step methodology of achieving the research aim is:

1. Reviewing the history of the Anglo-American world view with a view to
understanding the confluence between economic growth, free markets,
energy security and domestic security, and global warming. The
evolution of the Anglo-American world view will be contrasted to the
European world view using literature survey and political commentary.
This analysis of political economy will particularly focus on the cusp of
change in the Anglo-American psyche from a determination to remain
unconstrained to an acceptance of economic and climate constraints.
2. Reviewing the political economy of climate change in Anglo-American
countries and the policy options under consideration with a view to
delineating the major underlying influences from science and ecology,
the policy making of major nations, environmental activism and the
dynamics of international treaties.
3. Examining the history of climate-CGE modelling and the main
methodological frameworks that have constituted the mainstay of the
analyses required to inform policies and strategies for containing
climate change. This part of the methodology will select a CGE
modelling paradigm, computing environment and data source.
4. Describing a new CGE modelling approach to achieve the research aim.
5. Demonstrating the appropriateness of this new CGE model in the context
of the research aim.

The research methodology is shown diagrammatically as follows:

Illustration 1: Research structure

1.6 Scope of research

Time frame
This research project has been conducted during a period of intense
international negotiations over climate change policy. Countries involved in
these negotiations experienced many domestic and international pressures,
and employed various intriguing game strategies. This dissertation investigates
the political economy of these negotiations and strategies as Anglo-American
countries face new constraints on both their economic growth and
unilateralism.

While the political process in international relations has been in-train for
thousands of years and presumably will continue for thousands of years more,
bringing this research to a conclusion necessarily requires that a time scope be
set. Therefore the time frame for this research is the period ending with the
UNFCCC's Bangkok talks on 10 October 2009.

Data scope
The wide variety of sources of data assembled and, where necessary,
purchased for this research reflects its multidisciplinary nature across
economics and technology.

Significant advances have been made in the standardisation and availability of
international economic data over the past fifteen years. The author personally
acquired the Global Trade Analysis Project (GTAP) economic database for this
research from the Purdue University Department of Agricultural Resources
(2008). The GTAP 7 database, published in December 2008, provides economic
data for 113 regions and 57 sectors for the base year 2004. The database is
updated every three years so the next update can be expected in 2011.

The economic data in the GTAP database comprises approximately 96% of
world GDP and 88% of world population. The remaining economic activity and the
other 12% of world population are included in aggregated regions. This database
includes 2004 country National Accounts Input Output tables, IMF and OECD
bilateral trade data, CEPII and UN FAO tariff data, and World Bank economic
data. The two unique and compelling advantages of the GTAP database are
that the data is fully reconciled and that specific regions can be investigated
with a Rest of World (ROW) sector, which is essential for regional studies using
multiregional Input Output models.

The GTAP 7 database provides harmonised energy and greenhouse emissions
data derived from the International Energy Agency's (IEA) extended energy
balances and data from the Asian Development Bank. Energy related CO2 emission
volumes are based on the Tier 1 method of the revised 1996 IPCC Guidelines,
with special treatment for non-emitting activities, country-specific sectoral
feedstock use ratios and energy transformation. For example, coal used to
produce coal products. In addition, GTAP is preparing to issue non-CO2
emission volumes for CH4, N2O and F-gases based on IPCC Tier 1 and
Tier 2 methods and mapping emission sources to GTAP sector activities.

While GTAP 7 does not provide population growth rates, the GTAP data can be
supplemented with a wide range of financial and resource data, for example
from Mathematica's Country Database, which provides population growth
rates for the year 2006.

The climate-economic feedback loops in climate CGE modelling require that
scientific data and physical relationships accepted by the United Nations'
scientific body for climate change, the Intergovernmental Panel on Climate
Change (IPCC), be mapped through to economic damage multipliers. William
Nordhaus' DICE model (2007; 2008) is a highly respected model and has been
drawn upon for these scientific-economic linkages.
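
The flavour of such a scientific-economic linkage can be sketched with a
DICE-style quadratic damage multiplier. The functional form follows Nordhaus'
DICE models, but the coefficients below are illustrative placeholders rather
than the calibrated DICE values:

    # Fraction of gross economic output lost at a given temperature rise.
    def damage_fraction(temp_rise_c, a1=0.0, a2=0.0028):
        omega = a1 * temp_rise_c + a2 * temp_rise_c ** 2
        return omega / (1.0 + omega)

    for t in (1.0, 2.0, 3.0, 4.0):
        print(t, round(damage_fraction(t), 4))   # damages grow nonlinearly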

Geographical scope
The research focus of this dissertation has been to understand the policy
challenges facing Anglo-American countries as they restructure from
unconstrained growth to an acceptance of climate change constraints. Policy
development in Anglo-American countries is contrasted to that in the European
Union and to BRIC countries (primarily Brazil, Russia, India and China), which
dominate the rest of the world category. Therefore, three regions of the world
have been modelled to understand policy development and outcomes with
reference to the two predominant trading blocs: the North Atlantic Free Trade
Association (NAFTA) and the European Union (EU of 25 countries). Countries
outside of these two trading blocs are aggregated into a Rest of World (ROW)
category.

Commodity scope
Three basic commodities are analysed in each region: food, manufactured
goods and services. The GTAP 7 economic and emissions databases are
aggregated for these commodities across the geographical scope of NAFTA,
EU and ROW. Additional emissions permits and carbon mitigation services
commodities are appended to enable climate change policies to be evaluated.
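
As a minimal sketch of the aggregation step (the region and sector codes and
the mapping below are illustrative, not the actual GTAP concordance):

    import pandas as pd

    flows = pd.DataFrame({
        "region": ["usa", "can", "deu", "chn"],
        "sector": ["gro", "mnf", "mnf", "srv"],
        "value":  [120.0, 340.0, 280.0, 150.0],
    })
    region_map = {"usa": "NAFTA", "can": "NAFTA", "deu": "EU", "chn": "ROW"}
    sector_map = {"gro": "Food", "mnf": "Manufactures", "srv": "Services"}

    flows["bloc"] = flows["region"].map(region_map)
    flows["commodity"] = flows["sector"].map(sector_map)
    print(flows.groupby(["bloc", "commodity"])["value"].sum())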

Intertemporal scope
The climate-CGE modelling tool developed in this dissertation facilitates the
study of climate change policies over projections of 130 years (13 decades)
from the data's base year of 2004. This is somewhat less than the full 60
decades of the Nordhaus DICE model. While the intertemporal scope is
sufficient, it is limited by the magnitude of the task in symbolically
representing the whole spatial and industry disaggregation model within
optimisation constraints combined with the operations research challenges of
nonlinear optimisation.

1.7 Significance of research


Any new strategy or business idea requires a business plan with market
analysis, economic/financial projections and risk analysis. It is no different for
government policy makers and private sector planners when new constraints
arise. The arrival of climate change imperatives is perhaps the first of many
such new conditions in a rapidly populating world that is perhaps facing
declining oil availability. Climate change brings with it new constraints and this
demands a whole new paradigm of resource-constrained modelling.

Anglo-American governments are facing the extraordinary challenge of shifting
their policy framework from unconstrained growth to constrained growth.
There is considerable trepidation about this new uncharted future and this
anxiety has been exacerbated further by the current financial crisis.

A literature review highlights the difficulty in using existing partial equilibrium
models to answer economic-climate questions in the light of changing Anglo-
American approaches to constraints on growth.

This research has led to an innovative methodological technique for
intertemporal computable general equilibrium in the presence of international
trade, emissions trading, emissions abatement and climate change resource
constraints.

In its review of the role of economic modelling in the global financial crisis,
The Economist (2009) concludes: “Economists need to reach out from their
specialised silos: macro-economists must understand finance, and finance
professors need to think harder about the context within which markets work.
And everybody needs to work harder on understanding asset bubbles and what
happens when they burst. For in the end economists are social scientists,
trying to understand the real world.”

Multidisciplinary groups are aware of this. For example Jan Oosterhaven,
President of the International Input Output Association, ends his address in the
Association's 2008 Annual Report (2007, pp.1-2): “IO [Input Output] analysis is
doing well because of the continuous extension of its fields of application. Also
it is doing well as judged by the intensive use of IO data, social accounting data
and all kind of linked satellite accounts. However, it might do better if we could
include the interaction of prices and quantities - between sectors, between
institutions, between regions and between countries.”

This research introduces techniques from finance and computable general
equilibrium (CGE) pricing into Input Output analysis, informed by a consistent
framework of political economy. A new type of neoclassical climate policy
model is developed, which has been called Sceptre.

Sceptre's application is in projecting, pricing and making the most of
constrained resources. From the perspective of policy analysis, it is a
comprehensive approach to globalised markets with full attention to
commodity production technologies and population labour dynamics. It has the
compelling advantages of consistency, flexibility, transparency and the
potential for ubiquity because of its underlying deployment platform.

This research has significant impact in delivering clear and compelling
outcomes from policy and strategy modelling in many different climate and
other scenarios. Organisations at all levels in the community are keenly
interested in the answer to the research question as part of formulating their
own policies and strategies.

From the literature survey we have seen the decades of effort that
international organisations, such as the United Nations, IPCC, IEA, IMF and World
Bank, and domestic organisations, such as the Australian Productivity
Commission, ABARE, Australian Treasury, CSIRO, Garnaut Review, Monash
University and others have devoted to the pursuit of better models. This
research is therefore timely in providing the first model of its type for
multiregional, intertemporal policy analysis in growth constrained by climate
change.

This research establishes a platform for further development to address
additional and more specific issues in the future. In particular, it addresses a
number of Australian National Research Priority Areas, including an
environmentally sustainable Australia (through developing strategies for
transforming existing industries, reducing and capturing emissions in
transport and generation, and responding to climate change and variability);
promoting and maintaining good health in strengthening Australia's social and
economic fabric; safeguarding Australia's critical infrastructure, including
our financial, energy, communications and transport systems; and
understanding our region and the world (societies, politics and cultures).

1.8 Chapter references

Achinstein, P., 1991. Particles and waves: Historical essays in the philosophy of
science, Oxford University Press, USA.

Achinstein, P., 2001. The Book of Evidence, New York: Oxford University Press.

Anuradha, R.V., 2009. Legalities of climate change: A recent climate change
declaration poses significant challenges--and opportunities--for India.
livemint.com & The Wall Street Journal. Available at:
http://www.livemint.com/2009/08/04221721/Legalities-of-climate-change.html
[Accessed August 13, 2009].

Arguelles, M., Benavides, C. & Junquera, B., 2006. The impact of economic
activity in Asturias on greenhouse gas emissions: consequences for
environmental policy within the Kyoto Protocol framework. Journal of
Environmental Management, 81(3), 249-264.

Armington, P.S., 1969. A Theory of Demand for Products Distinguished by Place
of Production (Une théorie de la demande de produits différenciés d'après
leur origine) (Una teoría de la demanda de productos distinguiéndolos según
el lugar de producción). Staff Papers - International Monetary Fund, 159-178.

ASIC, Practice Notes 42, 43, 74, 75 & 170, Canberra: Australian Securities and
Investments Commission.

Australian Treasury, 2008. Australia's Low Pollution Future: The Economics of
Climate Change Mitigation. Available at:
http://www.treasury.gov.au/lowpollutionfuture/ [Accessed April 21, 2009].

Ayres, R.U., 2001. How economists have misjudged global warming. World
Watch, 14(5), 12-25.

Birnbaum, A., 1962. On the foundations of statistical inference. Journal of the
American Statistical Association, 57(298), 269-306.

Bródy, A., 2004. Near Equilibrium: A Research Report on Cyclic Growth,
Budapest: Aula Könyvkiadó.

Bródy, A., 1974. Proportions, Prices and Planning: A Mathematical Restatement
of the Labor Theory of Value, Amsterdam: North-Holland.

Brooks, D., 2009. What Geithner Got Right. The New York Times. Available at:
http://www.nytimes.com/2009/11/20/opinion/20brooks.html?_r=1&th&emc=th
[Accessed November 20, 2009].

Bryson, L. & Mowbray, M., 2005. More Spray on Solution: Community, Social
Capital and Evidence Based Policy. Australian Journal of Social Issues,
40(1), 91-107.

Cochran, C.E. et al., 1993. American public policy: An introduction, New York,
NY: St. Martin's Press.

Cochran, C.L. & Malone, E.F., 1995. Public policy: perspectives and choices,
McGraw-Hill.

Considine, M., 1994. Public policy: A critical approach, Macmillan Education
Australia.

Daly, H., 1992. Steady State Economics, 2nd ed., London, UK: Earthscan.

Garnaut Climate Change Review, 2008a. Draft Report to the Commonwealth,
State and Territory Governments of Australia, Commonwealth of Australia.
Available at: http://www.telegraph.co.uk/earth/main.jhtml?view=DETAILS&grid=&xml=/earth/2008/07/14/scischiz114.xml
[Accessed July 21, 2008].

Garnaut Climate Change Review, 2008b. Supplementary Draft Report: Targets
and trajectories, Commonwealth of Australia. Available at:
http://www.garnautreport.org.au/ [Accessed September 7, 2008].

Gödel, K., 1931. Über formal unentscheidbare Sätze der Principia Mathematica
und verwandter Systeme I. Monatshefte für Mathematik, 38(1), 173-198.

Gruber, J., 2007. Public finance and public policy, 2nd ed., New York, NY: Worth
Publishers.

Hamilton, C., 2006. The Political Economy of Climate Change. Unpublished
paper delivered as the Milthorpe Lecture, Macquarie University, Sydney,
NSW, 8.

Hazilla, M. & Kopp, R.J., 1990. Social cost of environmental quality regulations:
A general equilibrium analysis. Journal of Political Economy, 853-873.

IPCC, 2007. Synthesis Report: an assessment of the Intergovernmental Panel on
Climate Change. Fourth Assessment Report. [Abdelkader Allali, Roxana
Bojariu, Sandra Diaz, Ismail Elgizouli, Dave Griggs, David Hawkins, Olav
Hohmeyer, Bubu Pateh Jallow, Lucka Kajfez-Bogataj, Neil Leary, Hoesung
Lee, David Wratt (eds.)], IPCC Plenary XXVII, Valencia, Spain: United
Nations. Available at: http://www.ipcc.ch/ipccreports/ar4-syr.htm
[Accessed April 5, 2008].

Johansen, L., 1978. On the theory of dynamic input-output models with different
time profiles of capital construction and finite life-time of capital
equipment. Journal of Economic Theory, 19(2), 513-533.

Karoly, D., 2007. Synthesis Report: an assessment of the Intergovernmental
Panel on Climate Change. Fourth Assessment Report: National Online Media
Briefing by Australian members of the Working Group. Available at:
http://www.aussmc.org/IPCC_Synthesis_online_briefing.php [Accessed April
5, 2008].

Keane, J., 2009. The life and death of democracy.

Keynes, J.M., 1921. A treatise on probability, Macmillan and Co., Limited.

Kuhn, T., 1962. The Structure of Scientific Revolutions, 2nd ed., Chicago:
University of Chicago Press.

Leonhardt, D., 2009. Making Health Care Better. The New York Times. Available
at: http://www.nytimes.com/2009/11/08/magazine/08Healthcare-t.html?_r=1
[Accessed November 24, 2009].

Lindblom, C.E., 1959. The Science of Muddling Through. Public Administration
Review, 19(Spring), 79-88.

Ljungqvist, L. & Sargent, T.J., 2000. Recursive macroeconomic theory, 1st ed.,
The MIT Press.

Lorman, D. & Van Groningen, J., 2009. Public Policy, Australia: Australian
Technology Network of Universities.

Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental
Crisis (first published 2005), Melbourne: Black Inc.

Magee, B., 1974. Popper, Routledge.

Marshall, A., 1890. Principles of economics, Macmillan and co.

Mayo, D.G., 1996. Error and the growth of experimental knowledge, Chicago: The
University of Chicago Press. Available at:
http://books.google.com.au/books?hl=en&lr=&id=FEsAh4L9r_EC&oi=fnd&pg=PR9&dq=%22Deborah+Mayo%22&ots=j9ccfxtlW5&sig=_N-Lacjg0yxbdrfhv0nNL2rNFxk.

Mayo, D.G. & Spanos, A., 2004. Methodology in practice: Statistical
misspecification testing. Philosophy of Science, 71(5), 1007-1025.

Mill, J.S., 1848. Principles of Political Economy, 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].

New Scientist, 2008. Editorial: Time to banish the god of growth. New Scientist,
199(2678), 5.

Nordhaus, W.D., 2008. A Question of Balance: Weighing the Options on Global
Warming Policies, Yale University Press.

Nordhaus, W.D., 2007. Notes on How to Run the DICE Model. Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm [Accessed June
26, 2008].

Oosterhaven, J. & Stelder, D., 2007. Regional and interregional IO analysis.

Popper, K., 1972. Objective knowledge, Oxford: Oxford University Press.

Popper, K., 1959. The logic of scientific discovery, London: Hutchinson.

Purdue University Department of Agricultural Economics, 2008. GTAP 7
Database. Available at:
https://www.gtap.agecon.purdue.edu/databases/v7/default.asp [Accessed
April 14, 2009].

ten Raa, T., 2007. Review of A. Bródy: Near Equilibrium - A Research Report on
Cyclic Growth (Near Equilibrium - A Research Report on Cyclic Growth,
András Bródy, Aula Publishing House, Budapest, 2005, vi + 137 pp., ISBN
963-9478-95-4). Economic Systems Research, 19(1), 111-3.

ten Raa, T., 2004. Structural Economics, London, UK: Routledge.

Rawls, J., 1972. A theory of justice, Oxford: Clarendon Press.

Rehg, W. & Staley, K., 2008. The CDF Collaboration and Argumentation Theory:
The Role of Process in Objective Knowledge. Perspectives on Science, 16(1),
1-25.

Sachs, J., 2008a. Common Wealth: Economics for a Crowded Planet, The Penguin
Press.

Sachs, J., 2008b. Public lecture. Available at:
http://www.usyd.edu.au/news/alumni/174.html?eventcategoryid=17&eventid=2903
[Accessed April 27, 2009].

Sprenger, J., 2008. Confirmation and Evidence: Inaugural-Dissertation zur
Erlangung der Doktorwürde der Philosophischen Fakultät, Rheinische
Friedrich-Wilhelms-Universität zu Bonn, Germany. Available at:
http://deposit.ddb.de/cgi-bin/dokserv?idn=990329739&dok_var=d1&dok_ext=pdf&filename=990329739.pdf.

Staley, K. & Cobb, A., 2009. Internalist and Externalist Aspects of Justification
in Scientific Inquiry.

Stern, N., 2007. Stern Review Report. Available at:
http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm
[Accessed April 17, 2008].

Sutcliffe, S. & Court, J., 2005. Evidence-based Policymaking: What is it? How
does it work? What relevance for Developing Countries?, London, UK:
Overseas Development Institute. [Accessed May 15, 2009].

The Economist, 2009. Economics: What went wrong with economics. The
Economist. Available at:
http://www.economist.com/printedition/displayStory.cfm?Story_ID=14031376
[Accessed July 22, 2009].

UK Hampel Committee, 1998. Committee on Corporate Governance (Final
Report), London: The Committee on Corporate Governance and Gee
Publishing Ltd. Available at:
http://www.econsense.de/_CSR_INFO_POOL/_CORP_GOVERNANCE/images/hampel_report.pdf.

United Kingdom Cabinet Office, 1999. Modernising Government - The White
Paper, London, UK: Parliament. Available at:
http://archive.cabinetoffice.gov.uk/moderngov/whtpaper/index.htm
[Accessed April 29, 2009].

Young, E. & Quinn, L., 2002. Writing Effective Public Policy Papers. Budapest:
Open Society Institute.

i Professor Jeffrey Sachs is Director of The Earth Institute; Quetelet Professor of
Sustainable Development; Professor of Health Policy and Management at
Columbia University; and Special Advisor to United Nations Secretary-General
Ban Ki-moon
ii On 18 November 2004, Russia ratified the Kyoto Protocol thereby satisfying the
condition precedent that at least 55% of 1990 Annex 1 country CO2 emissions
were encompassed by the treaty
iii The United Kingdom government is aware of many climate threats, such as the
flooding of London
iv Ironically, deductivism is itself a hypothesis that cannot be accorded the status
of a theory because any proof of deductivism is inconsistent if it relies on any
axiom or proof established using deductivist principles. Deductivism is caught
by the ultimate paradox that a consistent set of rules cannot establish the
validity of that same set of rules. This paradox is illustrated by Kurt Gödel's
incompleteness theorems in Mathematics. These incompleteness theorems
prove that it is not possible to find a formal theory (i.e. a set of axioms) that can
prove all theories and exclude all falsehoods; and that if a formal theory can be
proven consistent from within itself, then it must rely upon itself and therefore
be inconsistent (Gödel 1931, Theorems VI & XI). Despite vulnerability to these
tests, deductivism is widely regarded as a useful and powerful formulation for
rational scientific enquiry. The deductivist paradigm is not broken merely
because it is unable to establish itself. To reject it on that account would have many
unreasonable consequences. For example, the identity 1 + 1 = 2 is well
accepted as part of the body of common knowledge that every child learns at
school. Indeed, if the child does not know this rule then either the child's
learning or the school's teaching would be considered grossly deficient.
Nevertheless, Gödel's incompleteness theorems declare that this fundamental
identity is unprovable and, by extension, render flawed the whole set of
proofs, tests and practices that depend upon it
v Although Birnbaum's “likelihood principle” is highly regarded, it remains
controversial. It is stated as follows. The likelihood principle (L): “In an
experiment E with observed data x, all experimental information about v is
contained in the likelihood function v -> P(x|v). All other information can be
neglected. More precisely, if E and E' are two experiments and if the outcomes x
and x' generate the same likelihood function, then Ev(E,x) = Ev(E',x'), without
reference to the structure of E and E'.” Its corollary is that the probability of
results that could have been observed is irrelevant to the statistical inference.
The “Likelihood Principle” contains the principles of sufficiency (S) and
conditionality (C). Birnbaum notes: “The fact that relatively few statisticians
have accepted (L) as appropriate for purposes of informative inference, while
many are inclined to accept (S) and (C), lend interest and significance to the
result, provided herein, that (S) and (C) together are mathematically equivalent
to (L)” (Birnbaum 1962, p.271)
vi 5.9% is the probability of a correct positive identification plus the probability of
an incorrect positive identification (i.e. 95% x 1% + (100%-95%) x (100%-1%))
vii 16.1% is calculated as 95% probability of the test being positive if the person
has taken drugs x 1% probability of a person taking drugs regardless of other
factors / 5.9% probability of a positive result regardless of other factors
viii If the accuracy of the test remains at the new level of 99% and the probability of
drug use by sports people falls to just 0.1%, then the probability of drug use
given a positive test falls to only 9.02%. This illustration of Bayes' posterior
probability has been developed with reference to “Further Examples: Example 1
Drug Testing” at http://en.wikipedia.org/wiki/Bayes%27_theorem
ix 58% is calculated as 90% x 60% + (100%-90%) x (100%-60%)
x 93% is calculated as 90% probability that the modelling correctly shows a policy
is feasible if it is indeed feasible x 60% probability of a policy being feasible
regardless of other factors / 58% probability of a positive result regardless of
other factors
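The arithmetic in notes vi to x is a direct application of Bayes' theorem. A
minimal check in Python (the function and variable names are mine, not part of
the dissertation) reproduces the figures:

    def posterior(accuracy, prior):
        """P(condition | positive result), treating 1 - accuracy as the
        false-positive rate, as the notes above do."""
        p_positive = accuracy * prior + (1 - accuracy) * (1 - prior)
        return p_positive, accuracy * prior / p_positive

    # Drug testing example (notes vi-vii): 95% accuracy, 1% prevalence
    p_pos, post = posterior(0.95, 0.01)
    print(p_pos, post)   # 0.059 and about 0.161 (5.9% and 16.1%)

    # Policy modelling example (notes ix-x): 90% accuracy, 60% prior
    p_pos, post = posterior(0.90, 0.60)
    print(p_pos, post)   # 0.58 and about 0.93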
xi Without modification, ten Raa's linear programming benchmarking uses a
simple Leontief function for each industry, which implies a constant mix of
inputs. However, it provides full substitution to other industries in the world
with a better technology function and substitutes the factors of production
across domestic industries. The Leontief function can be modified by increasing
returns to scale and decreasing returns to scale if desired but in most situations
the simplicity of constant returns to scale is intuitive and appealing
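A stylised two-sector version of such a benchmarking linear program can be
written down directly. The sketch below maximises the uniform expansion of a
final demand bundle subject to Leontief commodity balances and a labour
endowment; all numbers are invented and scipy is an implementation choice,
not ten Raa's own software.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[0.2, 0.3],          # Leontief input coefficients
                  [0.1, 0.4]])
    f = np.array([10.0, 20.0])         # benchmark final demand bundle
    l = np.array([0.5, 0.8])           # labour per unit of gross output
    L = 40.0                           # labour endowment

    I = np.eye(2)
    # Variables: gross outputs x1, x2 and expansion factor e.
    # Maximise e  <=>  minimise -e.
    c_obj = np.array([0.0, 0.0, -1.0])
    A_ub = np.vstack([np.hstack([-(I - A), f.reshape(-1, 1)]),  # (I-A)x >= e*f
                      np.hstack([l, [0.0]])])                   # l.x <= L
    b_ub = np.array([0.0, 0.0, L])
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x)   # gross outputs and the feasible expansion factor

The shadow prices of the binding constraints of such a program serve as the
benchmark prices in the analysis.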
xii The 2009 case of Pacific Brands transferring its underwear factories from
Australia to Asia is an example of industrial production shifting from one
structure of labour and technology to another
xiii The Armington Assumption is a constant elasticity of substitution (CES)
aggregate assumption for international trade
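For concreteness, a CES aggregator of the Armington type can be sketched as
follows (parameter values are illustrative assumptions, not values used in the
dissertation):

    def ces_composite(quantities, shares, sigma):
        """Armington composite q = (sum_i a_i * x_i**rho)**(1/rho),
        with rho = (sigma - 1)/sigma and sigma != 1 the elasticity of
        substitution between domestic and imported varieties."""
        rho = (sigma - 1.0) / sigma
        return sum(a * x ** rho for a, x in zip(shares, quantities)) ** (1.0 / rho)

    # e.g. domestic supply 80, imports 20, weights 0.7/0.3, sigma = 2
    print(ces_composite([80.0, 20.0], [0.7, 0.3], 2.0))

As sigma approaches 1 the function approaches a Cobb-Douglas form, and as
sigma grows large the two varieties become near-perfect substitutes.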
xiv DuPont Analysis was developed by the American chemical conglomerate E. I. du
Pont de Nemours and Company

Chapter 2 Political Economy of the Anglo-American economic world view

The 2008-9 Global Financial Crisis, wars in Iraq and Afghanistan and deep
deficits in the American economy have brought challenges and emerging
changes to the Anglo-American world view. This Chapter investigates the
development of the Anglo-American economic world view to establish a
framework for understanding climate change policy. America's unique themes
of liberty and free markets are distinct, pervasive and dominant in Anglo-
American culture.

America is therefore used as a proxy for the Anglo-American group of nations,
which is defined widely to include the United Kingdom, Canada, Australia and,
on various dimensions, countries as diverse as Japan, South Korea, Denmark,
Poland and Georgia. Many of these countries look to a “special relationship”
with America. The sources of America's world view are traced from the time
when American society was formed through to almost the first anniversary of
President Barack Obama's election. America is compared and contrasted to
Europe and in particular with Germany. This analysis concludes with an
investigation of the preparedness of America to accept the decline of its
exceptionalism and new reality of resource constrained growth.

2.1 Origins of the American worldview


Walt Whitman recalled in Democratic Vistas (1888):

The old men, I remember as a boy, were always talking of American
independence. What is independence? Freedom from all laws or
bonds except those of one's own being, control'd by the universal
ones.

Around 17 February 1775, the great ambassador Benjamin Franklin had firmly
highlighted the primacy of freedom in the American psyche (B. Franklin & W. T.
Franklin 1818): “Those who desire to give up Freedom in order to gain
Security, will not have, nor do they deserve, either one.”

Perhaps the most fundamental of American character traits is the belief in
freedom. The tenet was bravely announced in The Unanimous Declaration of
The Thirteen United States of America adopted by the Congress of the United
States on July 4, 1776. It stated:

We hold these truths to be self-evident, that all men are created
equal, that they are endowed by their Creator with certain
unalienable Rights, that among these are Life, Liberty, and the
pursuit of Happiness. That to secure these rights, Governments are
instituted among Men, deriving their just powers from the consent
of the governed. That whenever any Form of Government becomes
destructive of these ends, it is the Right of the People to alter or to
abolish it, and to institute new Government, having its foundation on
such principles and organizing its powers in such form, as to them
shall seem most likely to effect their Safety and Happiness.

However, in drafting the American Constitution approved on 17 September
1787, the founding fathers decided to subjugate personal liberty to order. They returned
to the British concept of “order out of chaos”, legislating that the rule of law
and order of society were more important than individual liberty.i Two years
later ten amendments to the American Constitution, collectively known as the
Bill of Rights (1789), reinstated the importance of individual liberty in law.

Nearly fifty years later, the French lawyer Alexis de Tocqueville visited America
to critically appraise the emergent American democracy. For his pioneering
work of observational political sociology De la démocratie en Amérique (1835)
de Tocqueville was decorated as a chevalier de la Légion d'honneur (Knight of
the Legion of Honour), elected to the Académie des sciences morales et
politiques and subsequently to the Académie française.

De Tocqueville looked beyond the familiar elite of cosmopolitan cities of New
York, Boston, Washington and New England, recognising the parochial thinking
and fundamental religious values of Americans to the West. Of the business
fervour across America and the social conditions in the virgin territories of the
West, de Tocqueville writes (Volume 1, Chapter III, Social Condition of the
Anglo-Americans):

I know of no country, indeed, where the love of money has taken
stronger hold on the affections of men and where a profounder
contempt is expressed for the theory of the permanent equality of
property .... there are but few wealthy persons; nearly all Americans
have to take a profession …. in the Western settlements we may
behold democracy arrived at its utmost limits …. the population has
escaped the influence not only of great names and great wealth, but
even of the natural aristocracy of knowledge and virtue. None is
there able to wield that respectable power which men willingly
grant to the remembrance of a life spent in doing good before their
eyes. The new states of the West are already inhabited, but society
has no existence among them.

In American society, de Tocqueville identifies pervasive traits of pragmatism
and preoccupation with consumption. In Volume 2, Section 1, Influence of
Democracy on the Action of Intellect in The United States, Chapter X, Why the
Americans are more addicted to practical rather than theoretical science, de
Tocqueville locates these traits in America's engrossment with business:

In America the purely practical part of science is admirably
understood, and careful attention is paid to the theoretical portion
which is immediately requisite to application. On this head the
Americans always display a clear, free, original, and inventive power
of mind. But hardly anyone in the United States devotes himself to
the essentially theoretical and abstract portion of human knowledge.
In this respect the Americans carry to excess a tendency that is, I
think, discernible, though in a less degree, among all democratic
nations …. Everyone is in motion, some in quest of power, others of
gain. In the midst of this universal tumult, this incessant conflict of
jarring interests, this continual striving of men after fortune, where
is that calm to be found which is necessary for the deeper
combinations of the intellect? …. The man of action is frequently
obliged to content himself with the best he can get because he
would never accomplish his purpose if he chose to carry every detail
to perfection. He has occasion perpetually to rely on ideas that he
has not had leisure to search to the bottom; for he is much more
frequently aided by the seasonableness of an idea than by its strict
accuracy; and in the long run he risks less in making use of some
false principles than in spending his time in establishing all his
principles on the basis of truth. The world is not led by long or
learned demonstrations; a rapid glance at particular incidents, the
daily study of the fleeting passions of the multitude, the accidents of
the moment, and the art of turning them to account decide all its
affairs …. The greater part of the men who constitute these nations
are extremely eager in the pursuit of actual and physical
gratification. As they are always dissatisfied with the position that
they occupy and are always free to leave it, they think of nothing but
the means of changing their fortune or increasing it. To minds thus
predisposed, every new method that leads by a shorter road to
wealth, every machine that spares labor, every instrument that
diminishes the cost of production, every discovery that facilitates
pleasures or augments them, seems to be the grandest effort of the
human intellect. It is chiefly from these motives that a democratic
people addicts itself to scientific pursuits, that it understands and
respects them. In aristocratic ages science is more particularly
called upon to furnish gratification to the mind; in democracies, to
the body.

Perhaps with great foresight, de Tocqueville identifies systemic risks and
consequences in Americans' pragmatism and risk taking. In Volume 2, Section
3, Influence of Democracy on the Feelings of Americans, Chapter XIX, What
causes almost all Americans to follow industrial callings, he writes:

The Americans make immense progress in productive industry,
because they all devote themselves to it at once; and for this same
reason they are exposed to unexpected and formidable
embarrassments. As they are all engaged in commerce, their
commercial affairs are affected by such various and complex causes
that it is impossible to foresee what difficulties may arise. As they
are all more or less engaged in productive industry, at the least
shock given to business all private fortunes are put in jeopardy at
the same time, and the state is shaken. I believe that the return of
these commercial panics is an endemic disease of the democratic
nations of our age. It may be rendered less dangerous, but it cannot
be cured, because it does not originate in accidental circumstances,
but in the temperament of these nations.

Today, most Americans hold unquestioned the core beliefs of freedom,
independence and democracy. Walt Whitman's Democratic Vistas (1888),
previously mentioned, provides an eloquent statement of pride in the new
nation. Following is a short abridgement of Walt Whitman's first three
paragraphs, which confirms the American ideal of unfettered individual social,
economic and moral freedom from the State (advanced by John Stuart Mill in
his 1859 essay On Liberty) and the pride in a nation that is forever nascent and
supreme to all other social systems:

As the greatest lessons of Nature through the universe are perhaps
the lessons of variety and freedom, the same present the greatest
lessons also in New World politics and progress. If a man were
ask'd, for instance, the distinctive points ... he might find the amount
of them in John Stuart Mill's profound essay on Liberty in the future,
where he demands two main constituents, or sub-strata, for a truly
grand nationality -- 1st, a large variety of character -- and 2nd, full
play for human nature to expand itself in numberless and even
conflicting directions … America … counts, as I reckon, for her
justification and success ... almost entirely on the future. Nor is that
hope unwarranted. To-day, ahead, though dimly yet, we see, in
vistas, a copious, sane, gigantic offspring. For our New World I
consider far less important for what it has done, or what it is, than
for results to come. Sole among nationalities, these States have
assumed the task to put in forms of lasting power and practicality,
on areas of amplitude rivaling the operations of the physical kosmos,
the moral political speculations of ages, long, long deferr'd, the
democratic republican principle, and the theory of development and
perfection by voluntary standards, and self-reliance. Who else,
indeed, except the United States, in history, so far, have accepted in
unwitting faith, and, as we now see, stand, act upon, and go security
for, these things? …. I shall use the words America and democracy
as convertible terms … Not the least doubtful am I on any prospects
of their material success. The triumphant future of their business,
geographic and productive departments, on larger scales and in
more varieties than ever, is certain. In those respects the republic
must soon (if she does not already) outstrip all examples hitherto
afforded, and dominate the world …. I perceive clearly that the
extreme business energy, and this almost maniacal appetite for
wealth prevalent in the United States, are parts of amelioration and
progress, indispensably needed to prepare the very results I
demand. [Walt Whitman's emphasis] …. Political democracy, as it
exists and practically works in America, with all its threatening
evils, supplies a training-school for making first-class men. It is life's
gymnasium, not of good only, but of all.

2.2 International relations

National security
American shared beliefs uniquely shaped a foreign policy built on the dual
premises of America as a new promised land for a chosen people of God, and
an even more arrogant “bully-boy” attitude that “the most powerful player
makes the rules”. These attitudes were formalised as the Monroe Doctrine,
known more broadly as Manifest Destiny, that justified America attacking any
country in the world (Jensen 2000, pp.86-8; Perkins 2004, pp.69-70):

Manifest Destiny – the doctrine, popular with many Americans
during the 1840s, that the conquest of North America was divinely
ordained; that God, not men, has ordered the destruction of Indians,
forests, and buffalo, the draining of swamps and the channelling of
rivers, and the development of an economy that depends on the
continuing exploitation of labour and resources …. got me thinking
about my country's attitude toward the world. The Monroe Doctrine,
originally enunciated by President James Monroe in 1823, was used
to take Manifest Destiny a step further when, in the 1850s and
1860s, it was used to assert that the United States had special rights
all over the hemisphere, including the right to invade any nation in
central or South America that refused to back U.S. policies. Teddy
Roosevelt invoked the Monroe Doctrine to justify U.S. intervention
in the Dominican Republic, in Venezuela, and during the “liberation”
of Panama from Colombia. A string of U.S. Presidents – most notably
Taft, Wilson, and Franklin Roosevelt – relied on it to expand
Washington's Pan-American activities through the end of World War
II. Finally, during the latter half of the twentieth century, the United
States used the Communist threat to justify expansion of this
concept to countries around the globe, including Vietnam and
Indonesia.

A notable use of Manifest Destiny in its most extended form was the American
invasion of the sovereign nation Hawaii on 16 January 1893. Using a fabricated
excuse, American Marines invaded Hawaii and occupied Government buildings
and the Iolani Palace. On 18 December 1893, President Grover Cleveland
sought to redress the invasion with an impassioned plea to the Senate and
House of Representatives to not succumb to the wrongful acquisition of Hawaii
(Cleveland 1893):

Our country was in danger of occupying the position of having
actually set up a temporary government on foreign soil for the
purpose of acquiring through that agency territory which we had
wrongfully put in its possession. The control of both sides of a
bargain acquired in such a manner is called by a familiar and
unpleasant name when found in private transactions. We are not
without a precedent showing how scrupulously we avoided such
accusations in former days. After the people of Texas had declared
their independence of Mexico they resolved that on the
acknowledgement of their independence by the United States they
would seek admission into the Union. Several months after the
battle of San Jacinto, by which Texan independence was practically
assured and established, President Jackson declined to recognize it,
alleging as one of his reasons that in the circumstances it became us
"to beware of a too early movement, as it might subject us, however
unjustly, to the imputation of seeking to establish the claim of our
neighbors to a territory with a view to its subsequent acquisition by
ourselves". This is in marked contrast with the hasty recognition of a
government openly and concededly set up for the purpose of
tendering to us territorial annexation …. I believe that a candid and
thorough examination of the facts will force the conviction that the
provisional government owes its existence to an armed invasion by
the United States. Fair-minded people with the evidence before
them will hardly claim that the Hawaiian Government was
overthrown by the people of the islands or that the provisional
government had ever existed with their consent. I do not understand
that any member of this government claims that the people would
uphold it by their suffrages if they were allowed to vote on the
question …. But in the present instance our duty does not, in my
opinion, end with refusing to consummate this questionable
transaction. It has been the boast of our government that it seeks to
do justice in all things without regard to the strength or weakness of
those with whom it deals. I mistake the American people if they
favor the odious doctrine that there is no such thing as international
morality, that there is one law for a strong nation and another for a
weak one, and that even by indirection a strong power may with
impunity despoil a weak one of its territory …. a substantial wrong
has thus been done which a due regard for our national character as
well as the rights of the injured people requires we should endeavor
to repair.

President Cleveland indeed did “mistake the American people”. Instead of Hawaii
being returned to Queen Liliuokalani and her Government, President Cleveland's
successor, President William McKinley, annexed Hawaii through the Newlands
Joint Resolution of 7 July 1898. President McKinley justified his action as a
consequence of the Spanish-American War. One hundred years later, President
Clinton apologised to the nation of Hawaii (103rd Congress 1993):

The Congress … (3) apologizes to Native Hawaiians on behalf of the
people of the United States for the overthrow of the Kingdom of
Hawaii on January 17, 1893 with the participation of agents and
citizens of the United States, and the deprivation of the rights of
Native Hawaiians to self-determination; (4) expresses its
commitment to acknowledge the ramifications of the overthrow of
the Kingdom of Hawaii, in order to provide a proper foundation for
reconciliation between the United States and the Native Hawaiian
people.

Manifest Destiny continued to provide legitimacy for American incursions
across the world, in Vietnam, South America, Panama and ultimately in Iraq
(Perkins 2004, pp.181-2):

In November 1980, Carter lost the U.S. Presidential election to


Ronald Reagan ... A president whose greatest goal was world peace
and who was dedicated to reducing U.S. dependence on oil was
replaced by a man who believed that the United States' rightful
place was at the top of a world pyramid held up by military muscle,
and that controlling oil fields wherever they existed was part of our
Manifest Destiny. A president who installed solar panels on White
House roofs was replaced by one who, immediately upon occupying
the Oval Office, had them removed ... Reagan … was most definitely a
global empire builder, a servant of the corporatocracy ... He would
advocate what those men wanted: an America that controlled the
world and all its resources, a world that answered to the commands
of America, a U.S. military that would enforce the rules as they were
written by America, and an international trade and banking empire
that supported America as CEO of the global empire.

In a visionary speech on 4 June 2009, seeking a new beginning with Iran,
President Obama acknowledged America's role in the 1953 Iranian coup d'état
of the democratically elected Prime Minister Mohammed Mosaddeq (Obama
2009c) “In the middle of the Cold War, the United States played a role in the
overthrow of a democratically elected Iranian government.”

Self-authorised extraterritorial actions, justified by Manifest Destiny, including
assassinations, kidnapping (known as extraordinary rendition) and torture,
continued to be pervasive through George W. Bush's presidency (Schmitt &
Mazzetti 2008):

The United States military since 2004 has used broad, secret
authority to carry out nearly a dozen previously undisclosed attacks
against Al Qaeda and other militants in Syria, Pakistan and
elsewhere ... These military raids, typically carried out by Special
Operations forces, were authorized by a classified order that
Defense Secretary Donald H. Rumsfeld signed in the spring of 2004
with the approval of President Bush ... The secret order gave the
military new authority to attack the Qaeda terrorist network
anywhere in the world, and a more sweeping mandate to conduct
operations in countries not at war with the United States … the new
authority was spelled out in a classified document called “Al Qaeda
Network Exord,” or execute order.

Sachs (2008, p.10) notes that American failures, including the Bush
Administration's crude and violent unilateralism, are a legacy of ashes:

The CIA-led overthrows of several governments (Iran, Guyana,
Guatemala, South Vietnam, Chile), the assassinations of countless
foreign officials, and several disastrous unilateral acts of war (in
Central America, Vietnam, Cambodia, Laos, and Iraq). The United
States has thrown elections through secret CIA financing, put
foreign leaders on CIA payrolls, and supported violent leaders who
then came back to haunt the United States in a notorious
boomerang or “blowback” effect (including Saddam Hussein and
Osama bin Laden, both once on the CIA payroll) … Like the earlier
excesses during the Cold War era, the Bush administration's
excesses are rooted in a perverse belief system in which American
goodness can and must be defended against foreign evil by violent,
covert, and dishonest means. Both the Cold War and today's war
against Islamic fundamentalism are born of a messianism that sees
the world in black and white, and lacks the basic insight that all
parts of the world, including the Islamic world, breathe the same air
…. the United States has completely failed to recognize our common
links with these regions, and instead has carried on an utterly
destructive war on peoples and societies that we barely understand.

Resource wars
International policy is of course about a complex set of issues involving more
than wars between ideologies. Discussions of climate change and economic
growth cannot be divorced from the accompanying issue of energy security.
American consumer traits have, if anything, intensified further to the modern
day, demanding that troops be deployed to secure energy supplies for
American consumers.

Andrew Bacevich writes in The Limits of Power: The End of American
Exceptionalism, which reached 4th place on the New York Times bestseller list
(2008, pp.1-6):

In 1991, the US began two decades of unparalleled intervention that
shattered the Long Peace following the Cold War. The US invaded
Panama, Kuwait, Iraq, Bosnia and Haiti. It also attacked Kosovo,
Afghanistan and Sudan. The second invasion of Iraq in 2003 started
the Long War of global, open-ended war on terrorism …. Americans
remain convinced that they are benign and that the perpetual wars
they are involved in are not of their own making …. Instead the
arrogance and narcissism to believe in managing global order, the
sanctimonious conviction that American beliefs are universal, and
the paranoid fear of being attacked have been inculcated through
progressive administrations. …. During the 1990s, US became
convinced that it was an exceptional country with bountiful reserves
of economic, political, cultural and military power. Many Americans
firmly equated America's new position with God's divinely
predetermined plan for the world. It became widely believed in the
US that its dominance was indispensable to world democracy. As
such, the US was entitled to tend its new Pax Americana empire,
expand it through globalisation, regulate the new international
order through both persuasion and military force, and patrol the
perimeter of the empire. Few could argue with the apparently
unassailable situation, and if they did they were regarded as
unpatriotic and delaminated from reality …. As individuals,
Americans never cease to expect more of everything however, they
have never contributed less. Neoconservative Robert Kaplan wrote
after 9/11 that America did not change on September 11. It only
became more itself. Determined pursuit of life, liberty and happiness
through consumption, sanguine about their country's contempt for
international law; enthusiastic embrace of preventative war; and
dodging moral analysis. …. If one were to choose a single word to
characterise that US identity it would have to be “more”. For the
majority of contemporary Americans, the essence of life, liberty, and
the pursuit of happiness centres on a relentless personal quest to
acquire, to consume, to indulge, and to shed whatever constraints
might interfere with those endeavours ... oil dependence is key to
our weakness. America's imperial military overstretch since the
1980 promulgation of the Carter Doctrine – which holds that the
U.S. will defend vital interests in the Persian Gulf "by any means
necessary" – is a natural consequence of that oil dependency. Our
collective refusal to conserve oil, to learn to live more sensibly
within our means, requires an ever-growing military commitment to
the Middle East.

Others have supported Bacevich's views. In the January 2009 Darwin Day
Lecture to the British Humanist Association, Can British Science Rise to the
Challenges of the 21st Century?, former UK Chief Scientist Sir David Kingii
rejected government claims that America and the United Kingdom invaded
Iraq because of weapons of mass destruction or to topple President Saddam
Hussein. Sir David maintained that the invasion was solely to lessen American
reliance on foreign oil (Randerson 2009):

The Iraq war was just the first of this century's resource wars, in
which powerful countries use force to secure valuable commodities
… future historians might look back on our particular recent past
and see the Iraq war as the first of the conflicts of this kind …. [the
USA,] casting its eye around the world – [saw] there was Iraq [and
its immense oil reserves for the taking] …. it was certainly the view
that I held at the time, and I think it is fair to say a view that quite a
few people in government held …. Unless we get to grips with this
problem globally, we potentially are going to lead ourselves into a
situation where large, powerful nations will secure resources for
their own people at the expense of others.

Highlighting with grim irony the place of oil in America's war in Iraq, Stanford
University's Professor Gretchen Daily has rhetorically asked (Lowe 2009, p.22)
“How concerned would the US administration be about Iraq if it had 10% of
the world's broccoli?”

A pessimistic interpretation of the traditional American approach to foreign
policy is that there will be many more wars over important resources such as
oil and water.

2.3 Precursors of the Anglo-American world view


How did Americans develop such an unremitting, perhaps even unbalanced,
focus on consumption?

The founders of Western philosophy, Socrates (469–399 BCE) and Plato
(c. 428–348 BCE), first referred to the role of commerce in
organising society. However, it was Plato's student Aristotle (384–322 BCE)
who philosophically investigated commerce in the role of work.

As many men have done before and after him, Aristotle sought an explanation
of the meaning of life. Consciously or unconsciously, Aristotle subscribed to the
dominant Greek Stoic world view that all things had a purpose and the world
was happily harmonious and in order only when objects followed their innate
and predetermined purpose. The liberal Epicureans regarded humans as free
to some extent but their thoughts were not to become mainstream for 2,300
years, with the German philosophers Immanuel Kant, Arthur Schopenhauer,
Johann Gottlieb Fichte and Friedrich Nietzsche.

Aristotle's view that every object seeks its natural purpose or goal is called
“teleology” (Saunders 1974). Theists believed that God determined this
purpose for each object, while others ascribed it to nature. Whatever the
source of the belief, it was understood that when humans deviated from their
inherent purpose, through misfortune or lack of understanding, then they
became miserable and the world was in disharmony.

After considerable contemplation, Aristotle hypothesised that the inbuilt
purpose for humans was to do work and that humans were only happy when
doing work. Of course, happy people led to a happy society. This was fortunate
because society needed just this happy work to fulfil its consumption needs.
This serendipitous and internally consistent paradigm was seemingly verified
everywhere one looked (Aristotle 350BC, Book I):

Now of the Chief Good (i.e. of Happiness) men seem to form their
notions from the different modes of life, as we might naturally
expect: the many and most low conceive it to be pleasure, and hence
they are content with the life of sensual enjoyment. For there are
three lines of life which stand out prominently to view: that just
mentioned, and the life in society, and, thirdly, the life of
contemplation …. As for the life of money-making, it is one of
constraint, and wealth manifestly is not the good we are seeking,
because it is for use, that is, for the sake of something further: and
hence one would rather conceive the forementioned ends to be the
right ones, for men rest content with them for their own sakes ….
And now let us revert to the Good of which we are in search: what
can it be? for manifestly it is different in different actions and arts:
for it is different in the healing art and in the art military, and
similarly in the rest. What then is the Chief Good in each? Is it not
"that for the sake of which the other things are done?" and this in
the healing art is health, and in the art military victory, and in that of
house-building a house, and in any other thing something else; in
short, in every action and moral choice the End, because in all cases
men do everything else with a view to this. So that if there is some
one End of all things which are and may be done, this must be the
Good proposed by doing, or if more than one, then these …. Now
since the ends are plainly many, and of these we choose some with a
view to others (wealth, for instance, musical instruments, and, in
general, all instruments), it is clear that all are not final: but the
Chief Good is manifestly something final; and so, if there is some
one only which is final, this must be the object of our search: but if
several, then the most final of them will be it …. So then Happiness
is manifestly something final and self-sufficient, being the end of all
things which are and may be done …. But, it may be, to call
Happiness the Chief Good is a mere truism, and what is wanted is
some clearer account of its real nature. Now this object may be
easily attained, when we have discovered what is the work of man;
for as in the case of flute-player, statuary, or artisan of any kind, or,
more generally, all who have any work or course of action, their
Chief Good and Excellence is thought to reside in their work, so it
would seem to be with man, if there is any work belonging to him ….
we assume the work of Man to be life of a certain kind, that is to say
a working of the soul, and actions with reason, and of a good man to
do these things well and nobly, and in fact everything is finished off
well in the way of the excellence which peculiarly belongs to it: if all
this is so, then the Good of Man comes to be "a working of the Soul
in the way of Excellence," or, if Excellence admits of degrees, in the
way of the best and most perfect Excellence …. And we must add, in
a complete life; for as it is not one swallow or one fine day that
makes a spring, so it is not one day or a short time that makes a man
blessed and happy …. it is thus in fact that all improvements in the
various arts have been brought about, for any man may fill up a
deficiency …. Now with those who assert it to be Virtue
(Excellence), or some kind of Virtue, our account agrees: for
working in the way of Excellence surely belongs to Excellence ….
Why then should we not call happy the man who works in the way of
perfect virtue, and is furnished with external goods sufficient for
acting his part in the drama of life: and this during no ordinary
period but such as constitutes a complete life as we have been
describing it.

In formulating his hypothesis, Aristotle charged humans with the mission to be
useful to society in production in order to be happy. In more temperate words,
he declared man to be a factor of production. It was but a little further
extension to value a person's worth as the future value of his or her labour. It
did not concern Aristotle that his hypothesis was wholly unprovable in common
with all the big philosophical questions, such as “What is life?”

The concept that man's utility was his only value seemed appropriate in the
societies of ancient Greece and seventeenth-century colonial America, which
depended on the exploitation of slave labour. It also matched the power and
wealth structure of society, thereby justifying the implicit assumption that
there is a natural and
defensible hierarchy (Aristotle says “degrees”) in the society of man.

In France, François Quesnay (1758) developed the Tableau économique to
measure agriculture. His Physiocrat school of philosophy sought laissez-faire
regulation of agriculture at a time when the French monarchy was very
repressive. As agriculture was regarded as the only true production, land was
correspondingly the only scarce resource and was therefore considered the
most important form of wealth. From this perspective, extractive,
manufacturing and merchant services only convert material from one state to
another and are considered “sterile” of wealth creation.

In 1776, the Scotsman Adam Smith (1776) interpreted Aristotle's concept of
human value as being useful to society in producing goods for consumption or
export. However, Smith assumed that each individual was the best judge of his
own welfare (Book IV):

Every individual necessarily labours to render the annual revenue of
the society as great as he can. He generally indeed neither intends
to promote the public interest, nor knows how much he is promoting
it… . He intends only his own gain, and he is in this, as in many
other cases, led by an invisible hand to promote an end which was
no part of his intention.

The first extension of this principle was to a society of suppliers and
consumers. According to Smith, a market of individuals pursuing their own
best interest would reach the necessary equilibrium or tâtonnement at a price
to clear the market of all commodities. One further extension of the concept
led him to the magical process that converts observable factors of production
(being labour, money and land) into tangible goods for consumption. Famously,
Smith called his magical process the “invisible hand of capitalism”. As with
Quesnay, Smith could not see any role for the government in regulation.
Smith's work was the beginning of today's scholarly discipline of Classical
Economics, which considers only markets to the exclusion of government.

Smith was ready to accept that goods included more than Quesnay's strict limit
of agricultural production. It seemed obvious to Smith that the tangible goods
had a value that could be readily calculated from the constituent factors from
which the goods were made: land rent, labour cost, capital cost and the return
for taking risk. The return for taking this risk was called entrepreneurship and
had been investigated by philosophers such as David Hume (1752) and David
Ricardo of the Mercantile Trading school.

Adam Smith's “invisible hand of capitalism” became the fundamentalist,
unproven doctrine of American commerce and social structure as observed by
de Tocqueville. Smith's book The Wealth of Nations became America's bible of
business and philosophy. Unfortunately for many people in society, Smith
categorised certain occupations as unproductive services. He included the
Sovereign along with “churchmen, lawyers, physicians, men of letters of all
kinds, players, buffoons, musicians, and opera singers”.

Vargo and Lusch (2004), pioneers in the development of Service Sciences as an
academic discipline, highlight that the classical philosophers of economics,
politics and polity, Jean-Baptiste Say and John Stuart Mill, were early
dissenters to the concept that humans existed to produce goods for
consumption and found ultimate happiness in that work. In addition, while still
accepting the concept of utility, Say and Mill sought to broaden its limits to
include the poor churchmen, lawyers, physicians etc. that had been excluded
by Adam Smith.

Jean-Baptiste Say (1803) reasoned that production was the creation of utility
rather than the creation of matter or the growing of something new. For
example, a sword is still only iron ore so no matter has been created.
Therefore, he held, the human labour services of churchmen, lawyers, physicians
etc., as well as everyone else, are intangible products consumed at the time of
production. He developed his now famous Say's law that “Production generates
an equivalent demand that in turn generates employment in production”.

John Stuart Mill was prepared to go further. In Principles of Political Economy
(1848), Mill proposed the Aristotelian heresy that production is not the sole
purpose of human existence. He also moved on from Adam Smith's concept of
embedded value deriving from the factors of production to a quite
revolutionary concept that the value of production is not in the objects
themselves but in the attribute of their usefulness to the particular consumer.

Frédéric Bastiat's Essays on Political Economy (1848) quickly swept forward
with Mill's concepts to suggest that the value of a man's services is quite
independent of any tangible goods and furthermore is not just an attribute of
tangible goods as Say and Mill still accepted.

Bastiat hypothesised that the foundation of economics is that individuals who
have wants seek out satisfactions and the satisfactions are obtained through:
gratuitous utilities provided by Providence, such as air and water, and onerous
utilities purchased by trading effort through labour. He proposed a still
unprovable hypothesis as a great economic law:

The great economic law is this: Services are exchanged for
services .... It is trivial, very commonplace; it is, nonetheless, the
beginning, the middle, and the end of economic science .... Once this
axiom is clearly understood, what becomes of such subtle
distinctions as use-value, and exchange-value, material products and
immaterial products, productive classes and unproductive classes?
Manufacturers, lawyers, doctors, civil servants, bankers, merchants,
sailors, soldiers, artists, workers, all of us, such as we are, except
for the exploiters, render services. Now since these reciprocal
services alone are commensurate with one another, it is in them
alone that value resides, and not the gratuitous raw materials and in
the gratuitous natural resources that they put to work.

Bastiat's hypothesis has now become a widely accepted part of marketing
practice. For example, Philip Kotler (1994), a major voice in American
marketing education, writes that “The importance of physical products lies not
so much in owning them as obtaining the services they render”.

In his book The Structure of Scientific Revolutions (1962), Thomas Kuhn
demonstrates that theories and power structures change in waves. Those who
make up the system, which they see as the legitimate one, use all means at
their disposal to quash new competing forces. Change occurs when new eyes
come to look at the situation and see compelling reasons for change. So it is
that all fundamentalist paradigms require an enemy on which to sharpen
polemic and differentiate their arguments.

America's number one ideological enemy was Karl Marx, who maintained in his
book Das Kapital (1867) that the specialisation of labour would remove the
ownership of production from individuals and introduce monotony, thereby
depriving individuals of happiness in producing.iii To Marx, the deterministic
corollary of his theory of dialectical materialism would be that labour would
choose to move away from organisations employing specialisation to self-
producing communities. Violent revolutions occurred in his name in Russia and
China, although Marx did not specifically advocate such violence.

Ironically, Vladimir Lenin and Joseph Stalin introduced specialisation into
Soviet manufacturing. They were impressed with Frederick Winslow Taylor's
view that managers need to motivate workers with performance pay on
measured output. Taylor is known as the father of scientific management and
famous for stopwatch time & motion studies. He believed that managers
themselves stifled the productivity growth of workers: by repeatedly raising
the bar on worker performance until no incentive remained, they reinforced
the attitude that workers naturally slack off and must be forced to be
productive.

A second enemy of classical economics and its mathematical sibling,
Neoclassical Economics, was John Maynard Keynes, who published a new
macroeconomic theory in The General Theory of Employment, Interest and
Money (1936). Keynes foresaw a role for the government to invest to stimulate
economies when stagnation occurred in the regular economic cycles of boom
and bust. This pump-priming or demand stimulus was to get people working
again in order to both alleviate human misery and reboot the income-
consumption cycle. Keynes' theory appealed to governments as a way of
ending the Depression. Following World War II, governments of all
persuasions adopted Keynesian stimulation to successfully rebuild their
economies.iv Perhaps an equally impressive use of massive Keynesian stimulus
has been in assuaging the 2008-9 global financial crisis in just one year.

Implicit in Keynes' theories were two key arguments that upset classical
economists. The first was that the simplicity of classical economics could not
cope with economic cycles. The second was that an almost total lack of
government business regulation through the 1920s directly contributed to the
excesses of the decade, the 1929 Wall Street crash and the ensuing
Depression. At the time, as in 2009, many people lost confidence in the ability
of classical and neoclassical economics to predict or to fix the market failures.

Classical economics continues to rail against its Keynesian critics. The
Monetarists, Libertarians, the Chicago and Austrian Schools and more recently
Leo Strauss' neoconservative philosophy all fervently believe in minimal
government regulation. Monetarists are controversial for demanding that the
Government keep its hands off the economy and allow the market to heal itself;
cease all subsidies to agriculture, public housing and tax policy because these
have done more harm than good; leave business to its sole function of making
profits and require no ethical duty of corporations other than to obey the law.
Its leading proponent, Milton Friedman, proudly declared that “The social
responsibility of business is to increase its profits.”

Nowadays, the Gini Index measures inequality of income distribution. The
index ranges from 0 (no inequality) to 1. A small index number implies a more
equal distribution of income in a society. The Gini Index for Germany is only
0.28, compared to 0.45 for America.v
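
For concreteness, the Gini index can be computed directly from a list of
individual incomes. The sketch below is a minimal Python illustration using the
mean absolute difference formulation; the income figures are hypothetical and
are not drawn from any national statistics.

    def gini(incomes):
        """Gini coefficient via the sorted mean-absolute-difference formula."""
        xs = sorted(incomes)
        n, total = len(xs), sum(xs)
        # G = sum_i (2i - n + 1) * x_i / (n * total), for 0-indexed sorted x.
        weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
        return weighted / (n * total)

    equal = [50_000] * 10                  # perfectly equal incomes
    skewed = [20_000] * 9 + [500_000]      # one very high earner
    print(f"Equal:  {gini(equal):.2f}")    # 0.00
    print(f"Skewed: {gini(skewed):.2f}")   # ~0.64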

America's inequality of income distribution suggested to John Rawls, Professor
of Philosophy at Harvard University, that welfare in America lacked justice. In
A Theory of Justice (1972, p.152-7), Rawls rejects Adam Smith's concept that
the social welfare of an economy is simply the sum of the social welfare of each
citizen, which is the underlying utilitarian assumption of classical and
neoclassical economics. Arguing that the simple Pareto Optimum is unfair in
welfare terms, Rawls drills into the efficiency of the price mechanism that
determines producer and consumer surpluses. In order to maintain justice, he
argues for a von Neumann maximinvi type of social utility function for the
consumer, where governments look to maximise the welfare of the least well-
off persons (Rawls 1972, pp.273-7):

It is essential to distinguish between the allocative and distributive
efficiency of prices. The former is connected with their use to
achieve economic efficiency, the latter with their determining the
income to be received by individuals in return for what they
contribute …. The allocation branch [of government], for example, is
to keep the price system workably competitive and to prevent the
formation of unreasonable market power …. and correcting, say by
suitable taxes and subsidies and by changes in the definition of
property rights, the more obvious departures from efficiency caused
by the failure of prices to measure accurately social benefits and
costs … A competitive price system gives no consideration to needs
and therefore it cannot be the sole device of distribution …. It is
clear that the justice of distributive shares depends upon the
background institutions and how they allocate total income, wages
and other income plus transfers. There is with reason strong
objection to the competitive determination of total income, since this
ignores the claims of need and an appropriate standard of life ….
But once a suitable minimum is provided by transfers, it may be
perfectly fair that the rest of total income be settled by the price
system, assuming that it is moderately efficient and free from
monopolistic restrictions, and unreasonable externalities have been
eliminated.
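
To make the contrast concrete, the short Python sketch below compares the
utilitarian sum-of-utilities ranking, which Rawls rejects, with the maximin
ranking he advocates. The two allocations and their utility numbers are purely
hypothetical.

    def utilitarian_welfare(utilities):
        return sum(utilities)        # classical/utilitarian: sum of welfares

    def maximin_welfare(utilities):
        return min(utilities)        # Rawls: welfare of the least well-off

    allocation_a = [10, 10, 80]      # higher total utility, very unequal
    allocation_b = [25, 30, 35]      # lower total utility, far more equal

    for name, alloc in [("A", allocation_a), ("B", allocation_b)]:
        print(name, utilitarian_welfare(alloc), maximin_welfare(alloc))
    # The utilitarian rule ranks A (total 100) above B (total 90), whereas
    # the maximin rule ranks B (minimum 25) above A (minimum 10).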

Americans have machinated over the potent challenges from Marx, Keynes and
Rawls. In most cases, America has not responded with action but has used the
challenges to strengthen the defence of its core value system: American
democracy and capitalism, the unified concept of human existence and service
value, and minimal government regulation.

This core value system is embodied in the neoclassical paradigm, which
derives two major strengths from it. Firstly, that the paradigm is internally
consistent and, secondly, that society has been modified to fit the paradigm.
Therefore, the Neoclassical paradigm parallels the workings of Anglo-American
societies, except in exceptional circumstances.

In fact, a dramatic reversal of Marx's theory of dialectical materialism has
occurred in recent decades: the owners of capital have prospered to a far
greater degree than the owners of labour, with the proletariat meekly
accepting their deteriorating position, prospects and vulnerability. Direct
evidence of this can be found across the Anglo-American nations in the
declining labour share of GDP and sweep of income to the most wealthy. UBS
economist Martin Lueck comments “If you draw a line dividing the winners and
losers [of the past 20 years], it is not between US or UK economic systems and
Europe's, but rather the owners of capital vs. the owners of work. The losers
are the owners of work in all parts of the world, particularly Western countries.
The winners have been the owners of capital” (Herbst 2009).

In the 27 years from 1980 to 2007, Reaganomics delivered a 700% increase in
the real income of the top 0.01% of Americans compared to only a 22%
increase in median real income (Bucks et al. 2009; Krugman 2009a). The
increase in real median income was only one-third of its increase in the
previous 27 years and there was no increase at all in the otherwise golden
period from 2000-2007 as President George W. Bush pursued Reagan's policies
of supercharging the wealthy sector of the population.

Even proudly egalitarian nations such as Australia have seen inequality
strongly rising in the same period for the same reasons. As shown in the
illustrations below, Australian companies enjoy a very high and rapidly growing
share of factor income. As a result, they are able to pay considerably higher
dividends than companies in the rest of the world. However, this has resulted
in the labour share of factor income falling steadily over the last 30 years. This
has been exacerbated by the compulsory alienation of individual incomes for
retirement superannuation contributions.

Illustration 2: Australian Profit share of Factor Income (Source: Australian
Bureau of Statistics 5206.0 Australian National Accounts: National Income,
Expenditure and Product, Table 7. Income from Gross Domestic Product (GDP),
Current prices)

Illustration 3: Australian Labour share of Factor Income, before (blue) and net
of (purple) superannuation (Source: Australian Bureau of Statistics 5206.0
Australian National Accounts: National Income, Expenditure and Product,
Table 7. Income from Gross Domestic Product (GDP), Current prices)

It may be noted in the illustrations below that the Australian income share of
labour is significantly less than international benchmarks (Krämer 2008).

Illustration 4: Selected labour shares of advanced economies (Source: Krämer
2008)

Illustration 5: Selected labour shares of advanced economies (Source: Krämer
2008)

As in America, the depression of the labour share of income has been
accompanied by a sweep of income to the top 1%, as shown in the right hand
illustration (Atkinson & Leigh 2006). There could only be one result from the
pressure on labour share and sweep of income to the most wealthy. As in
America, easy money coupled with these financial pressures led to recurrent
living expenditure being financed from debt: Australian average private debt to
income has risen four-fold from 40% in 1980 to 160% in 2008.

Illustration 6: Top 1% share of Australian Income (Source: Atkinson & Leigh
2006, Appendix 6, Table 1)

Illustration 7: Australian average private debt to income (Source: Reserve Bank
of Australia, Statistical Bulletin B21 Household Finances Selected Ratios)

The phenomenon of ordinary people financing their current expenditure from
debt instead of income has led economists to conclude that this effect was one
of the largest contributors to the 2008 global financial crisis.

Peter Self, a trenchant critic of the American market system, summarises in his
book Rolling Back the Market (Self 2000, pp.xi, 6 & 12):

The prevailing market system is supported by a very influential set
of economic dogmas which have come to occupy a dominant place in
the lives of modern societies. These include the high importance
attached to market-led economic growth; the value of complete free
trade in money and capital as well as in goods and services; the
need to subordinate social welfare to market requirements; the
belief in cutting down or privatizing government functions; the
acceptability of profit as a test of economic welfare; and others as
well … Neoclassical economics provides a comprehensive model for
a market economy based on the exchanges between economic actors
to maximise their utilities … [However,] Neoclassical economics
cannot be subjected to Popperian falsification (Blaug 1992) because
a controlled experiment in human behaviour cannot be performed
with the holding of other factors constant (ceteris paribus) … [and]
is best understood as a Weberian ideal type theory.

While the neoclassical paradigm has served America's preoccupation with
consumption as the measure of happiness and has successfully repelled
external criticism, we will see in the next sections that the paradigm has aged
and come to face its greatest test from within. In the face of major and
unexpected systemic risks, it has delaminated from reality across many facets
of society. Like a patient lurching from spasm to spasm, economic doctors seek
to cure the symptoms by bandaging the wounds, while being unprepared and
mostly opposed to addressing the fundamental causes.

Philosophy & psychology diverge from the paradigm


Americans have been strongly attracted to Aristotle's notion that human
happiness is to be found in work. Government, business and social pressure
has reinforced this assumption. Perhaps with an economy of thinking,
individuals in Anglo-American societies have accepted the need to work hard,
notwithstanding their scepticism about the workplace as being the font of
happiness. This resigned perseverance is known as the Protestant Work Ethic.

However, European philosophers have been less convinced. Arthur
Schopenhauer observes that if human existence demonstrates a purpose then
for the vast majority of people in the world this purpose would be suffering,
woe and pain. His point raises the prospect that Aristotle's cult of happiness is
merely a social fiction of the elite, who have the means to afford it.

Another German philosopher, Friedrich Nietzsche, decisively rent the veil of
socially imposed value systems. Nietzsche identified the important principle
that humans are a “will to power,” which is the seemingly insatiable urge on
the part of some people to exert their will over others (Nietzsche 1887). Once
people understood this, they could reassert control over their own lives.

Max Weber, another important German philosopher, took this idea forward into
organisations, arguing that power structures take precedence over structures
of authority (Weber 1904). Uncloaked from its Aristotelian ideology of work as
a place of virtue, the workplace began to be perceived as a place of power
struggles.

Friedrich Nietzsche bluntly repeated Max Stirner's assertion that “God is
dead,” by which he meant there was no organising principle or “author” of the
universe (Nietzsche 1882; 1887).vii His classic statement of egotism invalidated
all inherent Aristotelian purpose in humans. Furthermore, a corollary of there
being no universal morality or purpose is that every value must be self-created.

The French existential philosopher Jean-Paul Sartre extended Nietzsche's
concept of self-creation. Sartre (1943) developed the theory that a person's
existence precedes essence, which means that humans are born in a biological
process and proceed to develop their being. This opposes the Judeo-Christian
dogma that humans enter the world with a soul.

Sartre perceived that a person's self-created values can lead to psychological
situations such as anxiety. He argued that the fundamental human condition is
freedom and that this is both the greatest prize and greatest burden of man:
“Man is nothing else but that which he makes of himself” (Sartre 1946).

Of course, Aristotle and pragmatic American philosophers would agree entirely
with this, the first of Sartre's principles of existentialism. Sartre rejects more
than the existence of God: he rejects any exogenous meaning or reason for
man's existence. Sartre argues that we all need to find our own reason for
existence and meaning. However, the path to defining one's own essence is
extraordinarily difficult.

Individuals make decisions in unpredictable ways, and some may feel personal
decisions lack rationality. For example, Brooks (2009) recently highlighted that
American economists have underestimated the complexity of human behaviour:

Reason is not like a rider atop a horse …. each person’s mind
contains a panoply of instincts, strategies, intuitions, emotions,
memories and habits, which vie for supremacy. An irregular,
idiosyncratic and largely unconscious process determines which of
these internal players gets to control behavior at any instant.
Context — which stimulus triggers which response — matters a lot
…. This mental chaos explains how people can respond so quickly
and intuitively to so many different circumstances. But it also entails
a decision-making process that is more complicated and messy than
previously thought.

The onerous task of making decisions and being fully accountable for the
outcome can be a lonely pursuit: each person must decide alone and live with
the results. At times when the very foundation of existence is challenged,
humans usually begin to contemplate their own finite mortality. Sartre writes
of this time when our values are disturbed (Sartre 1946, Chapter 4):

Everything is indeed permitted if God does not exist, and man is in
consequence forlorn, for he cannot find anything to depend upon
either within or outside himself. He discovers forthwith, that he is
without excuse. For if indeed existence precedes essence, one will
never be able to explain one's action by reference to a given and
specific human nature; in other words, there is no determinism--man
is free, man is freedom. Nor, on the other hand, if God does not
exist, are we provided with any values or commands that could
legitimise our behaviour. Thus we have neither behind us, nor before
us in a luminous realm of values, any means of justification or
excuse. We are left alone, without excuse. That is what I mean when
I say that man is condemned to be free. Condemned, because he did
not create himself; yet is nevertheless at liberty, and from the
moment that he is thrown into this world he is responsible for
everything he does.

Moral hazard is present where individuals are in a position to make decisions
for themselves, their economy and their country when they are not fully
accountable for the outcome. This concept is discussed later in this Chapter. In
one of the major differences between pragmatism and existentialism,
pragmatism accepts moral hazard as part of conservative moral philosophy
while existentialism emphasises taking personal responsibility for one's own
actions.

Perhaps the greatest anxiety in life comes from loneliness and the realisation
that one's assumptions are invalid. Sartre dealt with the anxiety of confronting
emptiness in his first novel Nausea (1938). He later explained the experience
of anxiety in an essay, The Look (1992, p.347), as follows:

What I apprehend immediately when I hear the branches crackling
behind me is not that there is someone there; it is that I am
vulnerable, that I have a body which can be hurt, that I occupy a
place in which I am without defence – in short, that I am seen.

Free choices are always prey to one's sense of angst and anxiety. The
loneliness and magnitude of the tension between freedom and responsibility
often leads to despair. Sartre argues that for the most part a person's
decisions are made in mauvaise foi (self-deception or bad faith) and lack
authenticity because of angst, an irrational anxiety over a perceived need for
security. Therefore, we give in and exchange our authenticity for things like
belongingness.

Sartre's solution is twofold: firstly, to simply accept the situation that existence
is absurd because there is no “big picture” that gives it meaning; secondly, to
get on with life and be as authentic as possible to oneself in choices. He says
“Man’s task in life is to authenticate his existence. Approach your existence
creatively, and do something with it.”

Sartre chose to balance his personal life on the fulcrum of disruption, rather
than succumb to a conventional life. He was convinced that to be conventional
was bad faith. Sartre believed his only authentic choice was to remain in the
state of uncertainty, perpetually at the point where a man was not only free but
conscious of his total freedom.

Another influential existential philosopher, Albert Camus, won the 1957 Nobel
Prize in Literature at the age of 44.viii Camus claimed to have found meaning
within himself as a great outcome from a bleak and stressful experience. He
wrote of it in his essay Return to Tipasa: “In the depth of winter, I finally
learned that within me there lay an invincible summer” (Camus 1952).

Camus employed the myth of Sisyphus to illustrate that Aristotelian happiness
in working is absurd and, by extension, that all purpose in life is absurd
(Camus 1942, Chapter 4):

The gods had condemned Sisyphus to ceaselessly rolling a rock to
the top of a mountain, whence the stone would fall back of its own
weight. They had thought with some reason that there is no more
dreadful punishment than futile and hopeless labour …. You have
already grasped that Sisyphus is the absurd hero. He is, as much
through his passions as through his torture. His scorn of the gods,
his hatred of death, and his passion for life won him that
unspeakable penalty in which the whole being is exerted toward
accomplishing nothing. This is the price that must be paid for the
passions of this earth …. The workman of today works every day in
his life at the same tasks, and his fate is no less absurd …. One does
not discover the absurd without being tempted to write a manual of
happiness. "What!---by such narrow ways--?" There is but one world,
however. Happiness and the absurd are two sons of the same earth.
They are inseparable …. One must imagine Sisyphus happy.

In 1974, the esteemed philosopher Isaiah Berlinix concurred with Sartre
(Saunders 2009a):

If you aren't fully responsible for your own acts, if you can say, “I am
as I am because my parents maltreated me; I am as I am because
the nature of the universe is such”, and then you put the
responsibility on the back of the universe and shuttle it off your own.
And people don't want to be all alone, lonely persons responsible for
their own actions, they want some justification of what they do from
the nature of something greater, more stable in a way than
themselves. And people can do all sorts of things in the name of
history, in the name of progress, in the name of “my class”, in the
name of the church, which they might hesitate to do if it was
entirely up to them individually.

In a perceptive reflection on the human condition, Harrison (2008 pp.111-2)
observes that Aristotle's virtues are but a single facet of a multidimensional
moral paradigm. He writes “vices are every bit as cultivatable as virtues. The
cultivation of envy, spite, pride, greed can be taken to exquisite levels. But this
does not transform those vices into virtues; on the contrary, by submitting
them to extremely regimented rules and protocols, it gives them a style that
renders them more sublime while leaving their vicious essence intact.”

Indeed, virtues are the least part of this moral paradigm. Drawing on his
unique perspectives from Italian Medieval and Renaissance literature,
Harrison concludes that the Western human condition is fundamentally
restless and disconsonant (pp.151-8). He compares the hero knights in
Ludovico Ariosto's Orlando Furioso (1516) to the pilgrims of Dante's Divine
Comedy (circa 1310), suggesting that the existential boredom and aimless path
of the former characterises the modern Western journey:

They wander the earth laterally in search of action and distraction.
Desire is a principle of motion with neither master plan nor final
destination. The knights merely desire “more”, more of their own
dynamism, an intoxication with and more of the same circulating
energies. They are archetypal modern consumers in a tumultuous
world of digressive compulsions where they court adventure, pursue
elusive erotic objects and strive to measure up to their rivals. It
feeds on its need for ever-new challenges and exploits. Remaining in
motion becomes an end in itself. There is neither a higher personal
or historical purpose nor a redemptive goal. The knights' craving for
action is at bottom a craving for distraction, what Blaise Pascal
called divertissement, without which the modern male (according to
Pascal) quickly succumbs to melancholy. That craving for diversion
arises from the pointlessness of their mode of being – the
pointlessness of being knights in a post-chivalric world, men of
action in an age when action has lost its normative or underlying
meaning …. The modern differential in Ariosto's knights is not so
much their aversion to [the peace of] Eden as their existential
boredom. Boredom indicates a certain deficiency or blockage of
care. Boredom can bring about the conditions for desperation and
lead to a constant search for diversion, a constant “turning-away”
from oneself … Orlando goes on to commit a mindless devastation of
what others have carefully cultivated, laying waste to farmers'
fields, the well-husbanded countryside, the quiet forests and rivers.
He particularly directs his rage against gardeners and shepherds.
This nihilistic vortex of pathological agitation and ravaging
destruction is the hero of the age of which he is the harbinger.
Herein lies the knights' quintessential and even contemporary
modernity, for this is precisely the spiritual condition of the age
today: driven and aimless, we are under the compulsion of an
unmastered will to destroy whatever lies in our way, even though we
have no idea where the way leads or what its end point may be.

When Orlando roams with such unpredictable intent, the safety of society is
compromised. In order to be happy, a society needs shared attitudes and
sanctions that promote trust. Weiner (2008, pp.234-6 & 405) found that the
deeper the trust ethos in society, the happier the society reports that it is:

Aristotle said more or less "Happiness is your state of mind and the
way you pursue that state of mind." How we pursue the goal of
happiness matters at least as much, perhaps more, than the goal
itself. The means and the end are the same. A virtuous life and a
happy life are the same thing. … Nietzsche says that a society
cannot avoid pain and suffering but the measure of a society is how
well it transforms this pain and suffering into something worthwhile
…. Trust - or to be more precise, a lack of trust - is why Moldova is
such an unhappy land .... Moldovians don't trust the products they
buy at the supermarket .... they don't trust their neighbours .... they
don't even trust their family members. ... For years, political
scientists assumed that people living under democracies were
happier than those living under any other form of government ... but
the collapse of the Soviet Union changed all that. Most (although
certainly not all) of these newly independent nations emerged as
quasi-democracies. Yet happiness levels did not rise. In some
countries they declined, and today the former Soviet republics are,
overall, the least happy places on the planet .... It is not that
democracy makes people happy but rather that happy people are
much more likely to establish a democracy .... The institutions are
less important than the culture. And what are the cultural
ingredients necessary for democracy to take root? Trust and
tolerance. Not only trust of those inside your group - family, for
instance - but external trust. Trust of strangers. Trust of your
opponents, your enemies, even. That way you feel you can gamble
on other people …. Money matters, but less than we think and not in
the way we think. Family is important. So are friends.

Harrison (2008 p.33) identifies the catalytic component of Weiner's ethos of
trust as Karel Čapek's basic ethical principle of proactive care that “you must
give more to the soil than you take away” (K. Capek et al. 2002 p.88). Harrison
extends this principle to “nations, institutions, marriages, friendships,
education, in short for human culture as a whole” and rejects Aristotle's grand
vision of work as a virtue (pp.166 & 170-1):

I have insisted throughout this study that human happiness is a
cultivated rather than a consumer good, that it is a question of
fulfilment more than of gratification. Neither consumption nor
productivity fulfils. Only caretaking does …. A gardener does not
exalt the work ethic …. He does not espouse the cause of labour. He
espouses the cause of what he cultivates …. The gardener is not a
labourer, regardless of how much real labour cultivation entails ….
The gardener, in short, is not committed to work, and even less to
“productivity”. He is committed to the welfare of what he nourishes
to life in his garden …. This self-extension of the gardener into care
is an altogether different ethic from the one that drives the present
age to crave more life and to escape what Heidegger calls the
emptiness of Being through a jacked-up productivity. Nothing is
further from the gardener's mind, nothing motivates him less than
self-perfection, the value of work, or the virtue of his deeds.

American psychologists Aaron Beck and Albert Ellis are each credited with
independently originating Cognitive Behavioural Therapy to help individuals
deal with distress about the vicissitudes of life, such as Harrison's
unpredictable Orlando and Sartre's angst. The therapy seeks to reorientate an
individual's thinking toward recognising and controlling their own
demandingness about needing happiness, authenticity and an environment of
trust, mutual care and peace. Over many decades the therapy has been very
successful in its objective of helping individuals think about their own thinking
and make accountable choices to move away from unremitting stressors.

Cognitive Behavioural Therapy is now highly influential and even the dominant
form of psychological therapy. It draws upon the same fundamental concepts
as does existentialism, for example, the unquestioned existential statement of
existence that “I am”. However, a key difference from existential philosophy
and therapy is that Cognitive Behavioural Therapy specifically circumvents the
major imponderables of life, such as whether life has meaning, if there is a God
etc., as answers are unlikely to be forthcoming. Existentialism emphatically
maintains that the answer to each of these questions is “no”. Cognitive
Behavioural Therapy also avoids Aristotle's idea that people find their
happiness solely or principally in work, or that human value or happiness can
be measured by work in any intrinsic way.

Although corporate human resources departments would prefer otherwise,
nowadays it is considered quaintly misplaced to equate happiness with
economic or work behaviour. Humans do not correspond to formulae, except
when they choose to. Individuals have the unique, nonlinear and disruptive
capacity to critically reflect by thinking about their own thinking. In this
respect we are children of Plato rather than Aristotle.

Furthermore, we commonly work in systems where power and lies coexist,
which was adroitly understood by the Renaissance political philosopher
Niccolò Machiavelli (1513) in his advice to Lorenzo Di Piero De’ Medici the
Magnificent. For example, in Chapter 15, Concerning things for which Men,
and especially Princes, are Praised or Blamed, he writes “It is necessary for a
prince wishing to hold his own to know how to do wrong, and to make use of it
or not according to necessity.”

Americans value leaders with proactive plans and an inner impetus or passion
to move forward. The American proclivity for actions over words is legendary.
Nike Inc. registered the ubiquitous slogan “just do it” as a trademark. In his
inauguration speech, President Obama sought to motivate Americans and draw
the nation together using the mantra of the cartoon character Bob the
Builder, “Yes we can”.

American business practices and character traits surprise Europeans who tend
to be more methodical. For example, Americans prefer “learning by doing” to
extended planning and specification. They have a greater respect for doing
than thinking, or action over words. This is sometimes expressed as the tracer
bullet strategy “ready, fire, aim”. For Americans, tracer bullets are cheap so
the best way of locating a target is just to start shooting. Feedback
mechanisms quickly correct mistakes to provide the way forward.

In business, this means that Americans prefer projects with small investments,
very short payback periods and near term exit strategies. When starting a
project they look to do a “half, not half-assed” job (37signals 2006, p.48). It
also means that instead of planning a comprehensive project that will provide
for contingencies and future growth, they prefer to limit a project to the
smallest essential element that will just satisfy current requirements. If
expansion is required, then it can be done as another project in the future. This
maximises value by creating “real options” for future stages. However, it can
also result in band-aid policies, shabby urban architecture and massive
cumulative liabilities for infrastructure refurbishment.x
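
A toy calculation illustrates the real options logic: staging a project defers
the expansion decision until uncertainty is resolved, so the downside branch is
simply not built. All cash flow figures below are hypothetical assumptions for
illustration only.

    p_success = 0.5                        # chance the market proves favourable
    full_cost = 100.0                      # commit to the whole project now
    full_payoff = {True: 180.0, False: 60.0}
    stage1_cost = 40.0                     # small first stage
    stage1_payoff = {True: 70.0, False: 30.0}
    stage2_cost, stage2_payoff = 70.0, 110.0   # expansion, only if favourable

    # Committing upfront is exposed to the downside as well as the upside.
    v_full = -full_cost + p_success * full_payoff[True] \
             + (1 - p_success) * full_payoff[False]

    # The staged plan holds an option: expand only when conditions turn out well.
    expand_value = max(0.0, stage2_payoff - stage2_cost)
    v_staged = -stage1_cost + p_success * (stage1_payoff[True] + expand_value) \
               + (1 - p_success) * stage1_payoff[False]

    print(f"Commit upfront: {v_full:+.1f}")    # +20.0
    print(f"Staged option:  {v_staged:+.1f}")  # +30.0

The staged plan is worth more here because the expansion outlay is avoided
whenever the market turns out badly; that difference is the value of the real
option.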

Perhaps above all, Americans respect leaders that develop bold strategies and
have the charismatic personality to carry them forward. They share the
reverence given to Homer's heroes and also the forgiveness given to the often
misplaced, reckless and capricious acts of the Greek Gods. However, these old
legends do not shape their future.

Each individual is free to choose their own path. It is a truism that each person
learns through their own mistakes. The French novelist Marcel Proust neatly
expressed this concept in Remembrance of Things Past, Volume II Within a
Budding Grove and Chapter IV Seascape, with a Frieze of Girls (1913) where
he writes “We don't receive wisdom; we must discover it for ourselves after a
journey that no one can take for us or spare us.”xi

Open and closed institutional philosophies
It wasn't only personal psychology and philosophy that were emerging from
the tyranny of top-down paradigms. Equivalent friction was taking place in
institutions, between the European tradition of open establishment groups in
Science and American closed groups.

Steve Fuller provides an interesting retrospective on the great Popper versus
Kuhn encounter sponsored by Imre Lakatos as part of the International
Colloquium in the Philosophy of Science at the former Bedford College,
University of London on 13 July 1965 (Fuller 2003, p.10). Five years later,
Lakatos wrote of the diametrically opposed views (Lakatos 1970):

The clash between Popper and Kuhn is not about a mere technical
point in epistemology. It concerns our central intellectual values,
and has implications not only for theoretical physics but also for the
underdeveloped social sciences and even moral and political
philosophy.

Thomas Kuhn

In his famous book already referred to above, The Structure of Scientific
Revolutions (1962), Thomas Kuhn examines the way science was conducted in
cold-war America. He characterises the dominant form of behaviour as a
heads-down or monkish approach.

Rather dispiritingly, heads-down describes organisations where any challenge
to orthodox views is anathema to the organisational culture. Therefore,
people are advised to keep their head down and focus on processing the work
at hand rather than promoting new ideas. If a person unwisely raises a
controversial idea, the organisation can be expected to eliminate the challenge
as quickly as possible.

It may take ten or even thirty years or more for a new paradigm to be accepted
in the scientific community. Perhaps, the person who had the original idea will
not even be alive to see its fruition. Kuhn found that it was wise not to raise
one's head before the time had come or else the person may be forever tarred
as a failure because of the idea (whether or not it subsequently turns out to be
a better theory) and many times a brilliant career could die with the idea.

This is analogous to the way a whistleblower may be treated today for
highlighting problems, injustices or fraud in a company. Kuhn’s organisation
man would be expected to deal with such organisational failures internally
within himself, without any upsetting publicity or revolution. Such failures are
expected to exist and even persist in the organisation for periods of time.
Therefore, Kuhn would maintain that failures are never fundamental and an
organisation is not considered broken just because it has such issues. While
the practices may not be acceptable, an effluxion of time and circumstance will
slowly remedy the situation.

It may even require generational change over twenty or thirty years to bring in
new people that are able to make the needed changes because they do not
have their careers and reputations invested in the old paradigm. This personal
interest factor is known as an “agency conflict” and arises from what Nietzsche
identified as the “will to power” (discussed above). All in all, it is expected that
the failures will be quietly corrected over time and there is no hurry because
institutions have plenty of that resource.

It may seem amazing that a new scientific idea could be placed in the same
category as an injustice or a fraud. Kuhn correctly identifies that the
establishment's ferocious defence leads to changes in the scientific paradigm
coming in waves, rather than linearly. It is thought that the paradigm will
naturally switch with new circumstances in the organisation.

Fortune magazine editor William H. Whyte Jr. was on the same track in his
book The Organization Man (1956). He identified a puzzling dichotomy
between conformity and individualism in 1950s American society. Whyte found
that corporation men willingly subordinated themselves to unquestioned
cooperation in exchange for the security of belongingness. They were prepared
to become “yes men,” leaving their personalities at the door as they entered
the office or factory.

Fifty years later, Ehrenhal wrote of the impact of Whyte's book: “By the
following spring, it was hard to find a college commencement speaker who
didn’t devote his remarks to the conformity crisis and its implications. ‘We
hope for nonconformists among you,’ the theologian Paul Tillich told one
audience of graduates, ‘for your sake, for the sake of the nation, and for the
sake of humanity.’ The president of Yale, A. Whitney Griswold, talked about a
‘nightmare picture of a whole nation of yes men’” (Ehrenhal 2006).

Ehrenhal goes on to observe that the waxing and waning of demographic
groups, such as the baby-boomers and the X- and Y-generations, has provided
no better understanding of the dichotomy: “The first decade of the 21st century
is now more than two-thirds over, and we are still waiting for a convincing
explanation of what it is all about ... It is the era of cell phones, BlackBerries
and iPods, and we sense that these technologies are changing the nature of
social interaction — but it seems too early to say exactly how.”

Fuller (2003, p.129) argues that Kuhn's findings arise from the three forms of
authority created by Roman Law, which operated until the twelfth century:
Gens, the transmission of the family status and wealth across generations;
Socius, goal based ventures such as business activities and military
expeditions, which were seen as temporary organisations for specific purposes;
Universitas, the enduring public service corporations of craft guilds,
universities, religious orders and city-states.

The important character of Universitas was that it gave certain groups niche
monopolies to perpetually decide what constitutes a worthy pursuit and who is
qualified to pursue it. These organisations are now the institutions of society.

For individuals in these institutions, the practicality is almost unchanged from
the days of Roman Law: that to be against the establishment is to be against
the activity – for example, to be against the position of the scientific
establishment on a particular theory is perceived as being against science
itself. A pervasive risk is that one's research funding will be cut. Therefore, the
enormous pressure to conform with the establishment and keep one's head-
down remains deeply entrenched.

Fuller (p.46) notes of Kuhn's findings that public institutions which manage
science are “a politically social formation that combined qualities of the Mafia,
a royal dynasty and a religious order. It lacked the constitutional safeguards
that we take for granted in modern democracies that regularly force politicians
to be accountable to more people than just themselves.”

Most people in organisations, whether Socius or Universitas, whether private
or public, see the autonomy of the institution as self-evident and are reluctant
to see any of its authority taken away. There is a strong belief that the
organisation will always do the right thing.xii

Any organisational failures are explained away on the grounds that leaders
need to take risks, and it is argued that in the absence of a pattern of fraud
they should be protected or indemnified from the consequences of these risks.
This is arguably one of the two key reasons for the slow and difficult
implementation of corporate accountability and systems of corporate
governance in the boardroom and at the level of government.

Sir Karl Popper

Sir Karl Popper's theory of falsification as the demarcation between science
and pseudo-science is set out in The Logic of Scientific Discovery (1959),
originally published in German in 1934 and translated into English by Popper
himself in 1959. Popper is regarded as one of the first existentialists in science.
In Britain, he was knighted for his liberalist, rational and anti-authoritarian
values.

Popper treats falsification as the cornerstone of the scientific ethic and
challenges scientists to test their theories by simultaneously making
predictions and undertaking empirical tests that actively seek to falsify their
own theories. Fuller (p.102) notes that Popper found Kuhn's heads-down model
abhorrent.

Popper maintains that the best theory is the one that has withstood the
greatest number of falsification attempts. According to Fuller (pp. 24-5) he
departs from the logical positivists on this very point: Popper requires that
logic be used to challenge rather than bolster scientific authority.

Of course, this is diametrically opposed to Kuhn's finding that scientific
institutions, far from submitting theories to falsification, go to extraordinary
lengths to defend their theories against falsification. Also, in the real world the
number of confirmations of success is regarded as more important than the
number of times a theory has failed or even survived falsification. For example,
an Australian Court of Law will accept widely used rules of thumb as
compelling evidence.

The approaches of Popper and Kuhn have been presented as being completely
opposed. However, in the 1965 debate, Popper readily accepted that Kuhn's
approach best described the way organisations operated and how science
advances in waves. Nevertheless, he held it to be an inferior system that should
be replaced by critical thinking; proactively falsifying theories; and passionately
providing new ideas for peer review, receiving positive criticism in return.
Furthermore, he argued that new ideas may die but the careers of the people
who have them should not. Indeed, an individual is even to be respected for
sensibly moving on to new and hopefully better ideas.

Critics of Popper's falsification theory argue that it is itself subject to
falsification, so it cannot be an absolute principle or scientific law warranting a
special position in the core of the scientific paradigm. For example, O'Hear
(1989) argues that the falsification tests are themselves just theories, so they
cannot be true tests of another theory. Curd & Cover (1998) explain the Quine-
Duhem Thesis (Duhem 1906) that it is impossible to isolate a single theory for
testing from the environment of theories that surround it. So if a cluster of
theories is falsified it is not possible to identify the defective element.

Ironically, this criticism turns Popper's favourite quote, from Xenophanes of
Colophon (570 – 480 BCE), against him:xiii

The gods did not reveal, from the beginning,
All things to us, but in the course of time
Through seeking we may learn and know things better.
But as for certain truth, no man has known it,
Nor shall he know it, neither of the gods
Nor yet of all things of which I speak.
For even if by chance he were to utter
The final truth, he would himself not know it:
For all is but a woven web of guesses.

Even more ironically for Popper, Kuhn's empirical research finding that in
practice science doesn't proceed by falsification became widely accepted as a
test that falsified Popper's theory. The highly regarded anarchist philosopher,
Feyerabend (1975), one of Popper's greatest critics, concluded that the
theories of both Popper and Kuhn had failed and this left only the pluralist
approach of “anything goes” in Science.

Unfortunately, the same ignominious fate awaited Popper's concept of objective
knowledge, which Popper calls World 3, a third dimension of existence
following the objective and the subjective (Popper 1972). Few were prepared
to admit that knowledge (for example, the knowledge within books contained
within a library) and institutional structures (for example, laws and the police
force) exist independently of the knowing subject.

Perhaps this is because Popper's World 3 breaks the simple Cartesian dualism
of matter and soulxiv and demands an answer to the old phenomenological
chestnut “Does a tree make a noise when it falls in a forest and there is no-one
to hear it?”

In some ways it is surprising that Popper's theory of falsification remains so
controversial, as the technique has always been used in academic peer review.
In addition, since 2000, falsification has become a fundamental principle of
software development. In test driven development, tests are written before the
application code is started and then only sufficient software code to pass the
test is actually prepared.
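
A minimal sketch of this test-first pattern is shown below, using Python's
standard unittest module. The pricing function and its behaviour are
hypothetical illustrations invented for this example, not part of any cited
system.

    import unittest

    # Step 1: the test is written first and acts as the falsification attempt.
    class TestCarbonPrice(unittest.TestCase):
        def test_price_rises_with_scarcity(self):
            self.assertGreater(carbon_price(supply=80, demand=100),
                               carbon_price(supply=120, demand=100))

    # Step 2: only enough code is then written to make the test pass.
    def carbon_price(supply, demand, base=20.0):
        """Toy pricing rule: price scales with the demand/supply ratio."""
        return base * demand / supply

    if __name__ == "__main__":
        unittest.main()

In Popper's terms, a passing test merely corroborates the code; any later
change that breaks the test is immediately falsified and must be repaired
before work proceeds.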

The following discussion of an instance where Popper's world view has been
implemented allows conclusions to be drawn about Popper's influence, from
the boardroom to international democracy.

Perhaps the only institutional environment where open criticism was
encouraged was the classical democracy of Athens, which was greatly
influenced by Solon (594 BCE). However, it is best known for its golden age
under the leader Pericles (c. 495 BCE - 429 BCE). The Athenian democracy
existed for at least 186 years from the time of Cleisthenes (508 BCE) until its
suppression by the Macedonians in 322 BCE.

Fuller (p. 105) notes that: “Athens expected its citizens to speak their minds.
Indeed, failing to speak was worse than failing to persuade.” Not only was
criticism encouraged in Athens, it was actively demanded to protect society
from political capriciousness, or stasis. This is the Athenian term for the
agency conflict between the public interest of politicians and officials in
positions of authority and their private interests of staying in power and
enriching themselves through their position. The duty of public criticism was
designed to empower citizens and remove the mythology, superstition and
institutionalised dogma that accompanies Platonic stratification of knowledge
and authority in a society.

Following Bergson (1932), Popper uses the term open society to describe a
classical democracy that is predicated on debate, accountability and the
testing of ideas (Popper 1945). Fuller (p. 160) explains that Popper was
particularly concerned with what is nowadays called the “spiral of silence.”
This is the tendency in democratic societies for politicians to allow public
opinion to drift towards a minority position that has repeated exposure and
little formal opposition, and for a culture of self-censorship to develop amongst
scientists, journalists and bureaucrats in order to avoid career victimisation.

In modern society it represents a failure by the media, which is expected to
represent the third estate, to give public expression to the majority view and
not assume it to be simply self-evident. The third estate of the realm in
medieval Europe comprised all members of society, excluding the first estate
(King and clergy) and the second estate (nobility). Today, it is mainly seen as
the public voice represented by investigative journalists.

However, Popper's principle of falsification can appear to be a paradigm that
encourages passionate individuals to rail against institutions. As Fuller (pp. 32
& 162) notes, active falsification was a complete about-face for the logical
positivists at the core of America's post World War II big-science phase and had
the alarming potential to cross the line from criticism of that pragmatist
approach to nihilism, which is Nietzsche's term for being without meaning,
purpose or value.

Undeniably, the ideal of falsification has a place in science, companies and
democracies. However, it appears to be an unattainable perfection, and the
dichotomy between open and closed organisations remains a continuing
tension. As the democracy of Athens proved to be too pure in its principles to
stand the test of time against powerful elites, so Popper's open society is
considered by the institutions of society to be insufficiently stable for social
cohesion.

Nevertheless, it appears to be a valid hypothesis that the more closely
falsification can be approached and the more transparency is valued, the more
open and successful economies and businesses will be. Thus organisations and
countries that strive for open principles will not just demonstrate their ethical
commitment but will maximise economic welfare in their society by providing
the conditions of transparency and trust in which people can make their
greatest achievements.

2.4 Unexpected failures in the Anglo-American world view

Agency conflict
Agency conflict and corporate excess in the 1980s became the first indication
that something was really wrong with post World War II Anglo-American
capitalism. Perhaps the major deficiency in an elementary paradigm of
competitive markets is that of principal-agent conflict. Much of the regulation
of markets has focused on the issues between shareholders and directors, and
between directors and management.xv

As a result of prominent failures in agency conflict, economic stability in
America and in the world became increasingly threatened by American
business practices. The American Congress and governments around the
world developed Corporate Governance as a major theme.

The need for specific Corporate Governance regulations arises because
directors do not have a legal responsibility to individual stakeholders such as
shareholders, creditors or employees.xvi Prior to the need for Corporate
Governance being recognised, much of directors' duties regulation was merely
to ensure that directors carried out their fiduciary duties honestly and in good
faith for the benefit of the shareholders as a whole. For example, a fiduciary
duty was described by the UK High Court in Aberdeen Railway Co v Blaikie
Bros. (1854, 1 Macq 461) as “A duty to act with fidelity and trust to another, to
act honestly, in good faith and to the best of one’s ability in the interest of the
company.” This simple fiduciary duty leads to imperfect accountability of
directors and managers, which has been exploited in every possible way.

Based on Nietzsche's analysis (above) we might expect that directors and chief
executive officers of companies and organisations would be reluctant to see
demands for accountability and governance impact on their personal “will to
power.”

Agency conflict is obviously a very big opportunity space for directors and
managers. They would prefer to leave it unresolved and flexible for
exploitation. For example, Duffner (2003, p.34) notes that an agent has the
opportunity to maximise their own utility by exploiting better information about
the business, and perhaps disregard obligations such as contracts, laws and
moral standards. Kaplan & Stromberg (2004) emphasise that principal-agent
conflicts are ever present due to these information asymmetries.

One attempt to address the moral hazard of imperfect accountability was to
appeal to the supposed enlightened self-interest that Adam Smith assumed to
be a prominent feature of capitalist behaviour. Smith assumed that capitalists
would readily respond to the demands of society to protect their extremely
valuable right to operate under the social mandate granted by society.
Directors associations therefore assiduously prepared Codes of Conduct that
exhorted directors and managers to “act honestly, in good faith and in the best
interest of the company as a whole; not make improper use of information
acquired as a director; and not allow personal interests, or the interests of any
associated person, to conflict with the interests of the Company.”

However, these Codes of Conduct could only provide unenforceable statements
of good intent, and the mission was flawed from the outset because the social
mandate extended by society was such a nebulous concept. In addition,
forfeiture of the mandate to operate is a very big issue, requiring such gravity
of circumstances, that it had rarely been invoked. Therefore, directors and
managers nodded in due deference to their vague accountability and
unenforceable obligations but were confident that in practice all these lofty
principles are inevitably subject to considerable interpretation in ambiguous
situations. As a result, Codes of Conduct did little to address moral hazard,
which remained a pressing ethical problem because exploitation of company
positions for personal advantage continued unabated.

In the late 1990s and early 2000s, a number of prominent American and
Australian companies began to fail after Corporate Governance abuse. More
than any other example, the American company Enron showed what happens
when Corporate Governance goes awry. Bala Dharan, Professor of Accounting
at Rice University, noted in his testimony to the American House of
Representatives Committee on Energy and Commerce that the Enron debacle
will rank as one of the largest securities fraud cases in history. He noted that
many people were confused as to how this tragedy could have happened while
the company’s management, board of directors and outside auditors were
supposedly watching out for employees and investors. Dharan testified
(Dharan 2002):

My analysis of the Enron debacle shows that Enron’s fall was
initiated by a flawed and failed corporate strategy, which led to an
astounding number of bad business decisions. But unlike other
normal corporate failures, Enron’s fall was ultimately precipitated
by the company’s pervasive and sustained use of aggressive
accounting tactics to generate misleading disclosures intended to
hide the bad business decisions from shareholders. The failure of
Enron points to an unparalleled breakdown at every level of the
usual system of checks that investors, lenders and employees rely on
– broken or missing belief systems and boundary systems to govern
the behaviour of senior management, weak corporate governance by
board of directors and its audit committee, and compromised
independence in the attestation of financial statements by external
auditor.

When gross deficiencies of Corporate Governance and breach of duty by
auditors such as Arthur Andersen were discovered, Congress took a firm black
letter approach to Corporate Governance in the Sarbanes-Oxley Act of 2002.
This regulated agency conflict and set standards in accountability and risk
management.xvii

In Australia, after failures of prominent companies such as HIH and OneTel,
Justice Owen was appointed to lead a Royal Commission into the failure of
insurance company HIH. For the purposes of the Royal Commission, Justice
Owen defined Corporate Governance as (Owen 2003, p.xxxiii) “The framework
of rules, relationships, systems and processes within and by which authority is
exercised and controlled in corporations.” Justice Owen held that Corporate
Governance encompasses the mechanisms by which companies, and those in
control, are held to account (Owen 2003a, p.2).

With the HIH Royal Commission underway, the Australian Stock Exchange's
Corporate Governance Council acted to address Corporate Governance. Sadly,
under pressure from a politically conservative government, it introduced an
undemanding and predominantly voluntary set of Corporate Governance
Guidelines (ASX Corporate Governance Council 2003; 2005; 2006; 2007). The
Committee persevered with the now defunct assumption that directors and
managers would step up to their responsibilities out of “enlightened self-
interest”. Perhaps predictably, the Governance Council was to be embarrassed
in its naive assumption by a lack of bona fide commitment: “Overall, the quality
of exception reporting in 2004 annual reports was lower than expected.
Motherhood statements were commonly used, providing insufficient disclosure
to investors” (ASX Corporate Governance Council 2005).

In 2006, the Committee was forced to move to the previously threatened
sanction of a black letter approach, similar to Sarbanes-Oxley (ASX Corporate
Governance Council 2006).

One might be forgiven for assuming that the moral hazard had finally been
addressed by tough Corporate Governance rules across Anglo-American
economies. Unfortunately, this is not the case.

Daily, Dalton & Cannella (2003, p.371) found that Corporate Governance has
degenerated into a set of check-the-box requirements that do not meet
expectations of bona fide behaviour change: "The field of corporate
governance is at a crossroads. Our knowledge of what we know about the
efficacy of corporate governance mechanisms is rivalled by what we do not
know."

In other words, measurements based on structure (such as the number and
diversity of directors, the mix of executive and independent directors, the
separation of chairman/chief executive officer roles) have run their course. Of
course, no-one in industry, academia or government doubts the value of
Corporate Governance in keeping agency conflicts at bay. However, the
problem is now board performance, which is behavioural and much harder to
quantify.

In What's wrong with corporate governance (2004), Leblanc concludes: "[the
link between financial performance and the board's strategic decision-making
effectiveness] cannot be measured from the outside, e.g., when a board says
"no" to a CEO, how do you measure this?" Leblanc suggests that the way
forward for research that connects with financial performance, rather than
compliance, is direct observation of board behaviour: "The only possible way to
know whether boards operate well is to observe them in action – to see and
understand the processes by which they reach decisions. The missing link in
establishing the relationship between board governance and corporate
performance may be an understanding of that elusive activity called board
process. Uncovering "how boards work" has tremendous practical significance.
We are just beginning a very important journey."

In What makes great boards great, Sonnenfeld characterises the ingredient
that continued to be missing as "the human side of governance" (Sonnenfeld
2004, p.109). Two years earlier, he had concluded that the key to strong
performance is a social attitude of accountability rather than compliance,
where the focus had been placed to date: "So if following good-governance
regulatory recipes doesn't produce good boards, what does? The key isn't
structural, it's social. The most involved, diligent, value-added boards may or
may not follow every recommendation in the good-governance handbook. What
distinguishes exemplary boards is that they are robust, effective social
systems" (Sonnenfeld 2002).

Petre (2003) found that exceptional organisations seek to become transparent
and heads-up in giving recognition at all levels for good ideas. The people with
the ideas are allowed to follow them through across multidisciplinary borders.
She also found that organisations that can't quite cope with multidisciplinary
management still pursue multidisciplinary projects, but each member of a
given domain (for example, mechanical engineers and industrial designers)
remains within their own department.

However, Petre noted that exceptional performance is still rare and that
corporate culture rests on a complex balance of contributing factors, leaving it
always at risk of subversive behaviour:

It should be remarked how fragile this cooperative, communicative
culture can be. It requires energetic, high-quality personnel, with
high levels of expertise and creativity, capable of assimilating and
evaluating high-quality information. It requires trust, sharing and
open-minded communication. It requires careful management of
resources, workload, practices, and team dynamics. It is a complex
system of factors, easily perturbed by a dissonant element or by a
lapse in momentum.

We may conclude that Governments were successful in extending directors'
fiduciary duties into formal Corporate Governance compliance. However, we
have seen that human decisions and behaviour remain discretionary, outside
the net of compliance, and subject to incomplete accountability. Therefore,
regulators failed to effectively address the moral hazard inherent in Anglo-
American capitalism. Shortly we will see that this culminated in dire
consequences in the 2008-9 global financial crisis.

Excessive speculation in markets
While attention had focused on out-of-control business practices, the rampant
speculation occurring in commodity markets had gone largely unexamined. By
2008, American investment practices had become a major issue in world
commodity prices, particularly oil and food prices.

Due to the large amount of surplus capital in America, investors looked to
hedge funds for high returns. Hedge funds, along with pension funds and
investment bank trading desks, found a source of abnormally high profits in the
commodity markets. They shifted their capital from share, bond and currency
markets, and from distressed or scarce agricultural real estate, to dabble in
commodities while high profits could be made from ramping the market. When
the market ultimately corrects and there is no longer abnormal profit to be
made, the hedge funds depart, leaving the production and consumption players
(farmers, miners, refineries, industry users such as airlines and, most
important of all, consumers) to lick their wounds.

Prior to 2000 there was negligible managed capital in commodity markets.
From 2000 to 2007, about US$200 billion in managed financial assets was
invested. A further US$30 billion was invested in the first four months of 2008.

Diana Henriques (2008) reviewed the increasing concern of the American
Congress and President George W. Bush's Administration about excessive
speculation in commodity markets exacerbating and even manipulating oil and
food prices. This speculation came at a time when commodity prices were
already under upward pressure from global supply and demand forces such as
unfavourable weather, the decline of the American dollar, economic growth in
the Indian and Chinese economies and increasing standards of living.

The fundamental dichotomy in commodity markets is the need for liquidity,
and therefore for speculators. The contrary view of the hundreds of billions of
dollars that flowed into the commodity markets from 2000 to 2007 is that,
without this capital, liquidity would have been far lower and prices may have
been far higher and more volatile than they are now.

On the other hand, too much money causing excessive speculation leads to
massive bubbles in the price of basic commodities, which hurts ordinary people
and the economy. Commodities market regulations to prevent excessive
speculation had been withdrawn in the final year of the George H. W. Bush
Administration.

The ability of firms to consistently make abnormally high profits in a market
usually has its roots in poor government policy. In America, the legal pursuit of
market profits had become a form of market manipulation. For example, new
speculators piling in on the buy-side in the belief that prices will rise is
self-reinforcing: the buying itself pushes prices higher.

Measures to curb excessive speculation range from outright bans to raising the
capital requirements for futures trades. For example, in America, futures
trading in onions has been banned since 1958. A Congressional report at the
time stated “Speculative activity in the futures markets causes such severe and
unwarranted fluctuations in the price of cash onions as to require complete
prohibition of onion futures trading in order to assure the orderly flow of
onions in interstate commerce.”

In July 2008, just before the global financial crisis, Congress contemplated
raising margin requirements. Following World War II, President Harry Truman
had raised the deposit on margin trades to an unprecedented level of 33% of
the contract value, saying "The cost of living in this country must not be a
football to be kicked about by gamblers." However, increasing margin
requirements may not be effective since prices do not appear to be affected by
margin requirements, although the volume of contracts certainly is.

Another bias in American markets is the well-known "Enron loophole," also
called the "investment bank loophole". Speculative investors such as
commodity index funds can dramatically increase the size of their commodity
bet in excess of normal limits by working with an investment bank. Operating
in a back-to-back way, both the investor and the investment bank avoid
regulation. For example, the investment bank sells a swap to the commodity
index fund for a commodity like corn. The investment bank then hedges the
swap in the commodity futures market, where its hedging is exempt from
position limits. This technique circumvents the limits intended to restrain
speculators.
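
A stylised calculation may clarify the mechanics. The sketch below is purely
illustrative: the position limit, contract size and swap size are hypothetical
numbers chosen for exposition, not figures drawn from any regulatory filing.

    # Hypothetical illustration of the "Enron loophole" mechanics (Python).
    # All figures are invented for exposition.
    POSITION_LIMIT = 600      # assumed speculative position limit, in contracts
    CONTRACT_SIZE = 5000      # assumed bushels of corn per futures contract

    # The index fund wants exposure far beyond the speculative limit,
    # so it buys an over-the-counter swap from an investment bank.
    swap_exposure_bushels = 20_000_000

    # The bank hedges its swap obligation in the futures market where,
    # as a hedger, it is not bound by the speculative position limit.
    hedge_contracts = swap_exposure_bushels / CONTRACT_SIZE
    print(hedge_contracts)                   # 4000.0 contracts
    print(hedge_contracts / POSITION_LIMIT)  # about 6.7x the speculative limit

In this stylised example the fund obtains roughly six to seven times the
exposure that the position limit would allow it to hold directly, while neither
party's books show a breach.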

In addition to this new speculation that tilts the market toward higher prices,
there is always secret and collusive trading activity to produce illegal profits.
Such activity often remains below the radar of the regulators due to the
massive volumes and sizes of transactions in the commodity market. However,
major commodity market manipulation scandals have become public, such as
J. R. Simplot's fixing of the Maine potato market, the Hunt brothers' 1979
manipulation of the silver market to over US$50 per ounce before it collapsed
to US$10.80, Enron in the California energy market and British Petroleum's
2007 settlement of charges that it rigged the propane market.

Collusive trading is not limited to covert communication. It can also occur
through open mechanisms where large institutional investors signal their
intentions through the press or take turns in increasing prices.

The immensity of hedge fund capital in a relatively small market has led to a
change in the nature of speculators, from commodity traders and facilitators
to massive financial betting institutions. These financial institutions have at
their disposal highly sophisticated techniques to achieve extraordinary profits
from price volatility. For example, they are able to react faster to new
information than everybody else. Futures trading is a zero-sum game: in
aggregate terms, the profit taken by speculators is a loss borne by the rest of
the market.

Two main issues seem to have been highlighted by the failure of the
commodities market due to excessive speculation. The first is that the
continued existence of loopholes allowing massive speculation is inappropriate.
A free economy needs to facilitate speculation in a way that serves the public
interest. Regulating markets at the point of market failure is not incompatible
with being pro-market. Secondly, in the end the market is just a mechanism
and is not a policy instrument. The market can only operate successfully in the
public interest where government provides a strong sustainable policy for the
commodities.

Sub-prime crisis
In October 2008, a decade of American consumers bingeing on Chinese imports
and gorging on Middle East oil came to a shuddering end as Americans could
not continue to borrow for their current consumption.xviii

According to Bacevich, the roots of the sub-prime credit crisis lie in the
Reagan era (Bacevich 2008, p.36):

Reagan portrayed himself as a conservative. He was, in fact, the
modern prophet of profligacy, the politician who gave moral sanction
to the empire of consumption. Beguiling his fellow citizens with his
talk of morning in America, the faux conservative Reagan added to
America's civic religion two crucial beliefs: Credit has no limits, and
the bills will never become due. Balance the books, pay as you go,
save for a rainy day – Reagan's abrogation of these ancient bits of
wisdom did as much to recast America's moral constitution as did
sex, drugs and rock and roll.

However, the sub-prime crisis was merely a symptom of many economic
exigencies. On 11 September 2001, when the American economy was on the
brink of recession, Al Qaeda attacked the New York World Trade Centre and
the Pentagon. The American Federal Reserve sprang into action, aggressively
easing monetary policy. It reduced interest rates from 6.5% in May 2000 to 1%
by June 2003. Already preparing for economic stimulus, the American
Government then started large public deficit spending to prop up the economy.

The Federal Reserve's low interest rates and vastly over-expanded money
supply fuelled a boom in property lending. Investment bankers, who used every
avenue of unfettered financial innovation to maximise profits, supercharged the
already huge volume of risky debt. One major innovation was bundling
mortgages into new unregulated collateralised debt obligations (CDOs). These
securitised debt products were sold to other banks, superannuation funds and
overseas investors. In this new model of business, banks transformed from
boring mortgage lenders into fee-for-service earners.

The new role of banks was to originate mortgages through their sales channels
and sell these mortgages in parcels, taking a fee for the transaction. Parcels of
mortgages had mixed credit quality, just as DeBeers has traditionally sold
parcels of diamonds of variable quality. Ultimately, banks ceased focusing on
the credit quality of the mortgages in the parcel, which was seen as a mortgage
insurer risk. In America, Fannie Mae, Freddie Mac and investment banks
provided trillions of dollars of mortgages. AIG insured many trillions of dollars
of these loans against credit default risk. Then the sub-prime mortgage crisis
began as Bear Stearns failed on 13 March 2008.xix

The American Government had begun to believe in its own illusion that there
could be a new world economic order of growth without savings. This became
accepted in many advanced economies such as Australia. A compliant
American government and Federal Reserve became very confident and
permitted high-risk lending to borrowers with doubtful credit histories. One of
the now infamous acronyms for such lending was the "NINJA loan," a loan to
people with "No Income, No Job, No Assets". In America, Australia and the
United Kingdom, there were regular advertisements offering 110% mortgages
and urging existing home owners to withdraw equity from the rise in the value
of their houses to spend on a car, boat or holiday.

Buyers with easy money chased properties, so house prices began rising in
2000. Continued appreciation of house prices ensured attractive returns for all
involved. In concert, equity prices continued their bull run as the bellboys
(people who heard rumours in the lift) were making big money. Wise heads
knew that this meant a recession, but the concerted and massive economic
stimulus meant that the recession just didn't come.

In 2006, China burst onto the world stage, supplying huge volumes of cheap
capital and goods to America and the world. This supercharged the already
overheated equity and commodity markets.

In June 2004, the American Federal Reserve became very concerned about
runaway inflation. It increased interest rates from 1% to 5.25% by June 2006.
This discouraged investors, who stopped investing in new mortgage loans, and
led to a build-up of unsold homes. The oversupply of houses led to a steep
collapse in prices from their peak in 2006. By November 2008, American
metropolitan house prices had fallen approximately 25%.

Bubbles are a massive Ponzi (pyramid) scheme. Each increase in the price of
houses or shares requires a new fool to believe that markets will rise further.
At a crucial point, for a myriad of reasons, expectations about future income
growth falter. At this so-called "Minsky moment," unbridled greed turns
abruptly to fear and the bubble collapses.xx

Moral hazard
The 2008 sub-prime debt crisis was a watershed in attitudes and a turning
point in history. Given the topicality of this section, perhaps it could have been
placed at the start of the chapter rather than here, in its linear place as part of
the development of shared American attitudes.

Over the past 200 years the USA has endured frequent recessions
accompanied by asset bubbles and banking failures.xxi Yet the 2008 recession
has been special: America forfeited much more than its global industrial
competitiveness, and it accumulated a huge foreign debt. As Paul Krugman,
winner of the 2008 Nobel Prize in Economics "for his analysis of trade patterns
and location of economic activity," writes: "The financial crisis has had many
costs. And one of those costs is the damage to America's reputation, an asset
we've lost just when we, and the world, need it most" (Krugman 2009b).

Along with abusing and subsequently forfeiting its most precious asset of all,
its reputation, America lost its preeminent position as the leader of the
Western world. For at least thirty years, American national arrogance (the
belief in being different and exceptional, led by God, and above the law)
justified increasing hubris and led to burgeoning moral hazard. The President
of France, Nicolas Sarkozy, concisely summarised the issue: "This crisis is not
the crisis of capitalism. On the contrary, it is the crisis of a system that has
drifted away from the most fundamental values of capitalism. It is the crisis of
a system that drove financial operators to be increasingly reckless in the risks
they took, that allowed banks to speculate instead of doing their proper
business of funding growth in the economy; a system, lastly, that tolerated a
complete lack of control over the activities of so many financial players and
markets" (Sarkozy 2009).

Moral hazard is the tendency to excess when a person is unaccountable, or
only partially accountable, for the consequences of their actions (for example,
in taking risks, consumption, borrowing and military and covert activities). In
this context, the terms moral hazard and systemic risk are being used in a
generic sense, which is broader than American Treasury Secretary Henry
Paulson's use of the terms. At the time of the Bear Stearns bailout, Paulson
used the term moral hazard in the context of directors, managers and
shareholders receiving the massive benefit of bailouts when their own choices
and actions had led to their predicament. The reason for the bailouts was to
avert systemic failure, by which Paulson was referring to the web of derivative
and credit insurance transactions that might fail if Bear Stearns failed to meet
its trillions of dollars of counterparty obligations.

However, in America, the moral hazard that led to the economic collapse of
2009 was not confined to any single sector or to consumers. Americans
engaged in it root and branch, as a people and as a nation, domestically and
internationally. Every institution of government, military, business and finance,
not excluding the Federal Reserve, was involved and culpable. The
pervasiveness of moral hazard in government was further compounded by
national and organisational psychopathy, deceit and agency conflict.

Other Anglo-American countries such as the United Kingdom and Australia had
enjoyed a Goldilocks decade of abundance rooted in America's intoxication with
consumption. These Anglo-American nations enthusiastically followed America
down the path of moral hazard. In July 2007, the new Governor of Australia's
Reserve Bank stated of the Australian economy (Stevens 2007, pp.3-4):

International financial markets remain remarkably supportive of
growth. Long-term interest rates are not far above their 50-year
lows of a few years ago, even though short-term rates have risen in
most countries to be much closer to normal levels, the main
exception being Japan. Share prices have been rising steadily,
appetite for risk is strong, and volatility in prices for financial
instruments has been remarkably subdued. To some extent, these
trends in financial pricing may well reflect a genuine decline in
some dimensions of underlying risk. Variability in economic activity,
and in inflation and interest rates, has clearly diminished over the
past 15 years in a number of countries, including Australia …. The
associated prolonged period of attractive, steady returns on equity
investment and low cost of long-term debt funding certainly seems
to have set the stage for a return to somewhat higher leverage in
the corporate sector. This is most prominent in the rise in merger
and acquisition activity and the re-emergence of leveraged buyouts
around the world. Corporate leverage had been unusually low after
the excesses of the 1980s, so some increase is probably manageable.
Nonetheless, after more than a decade in which the main action in
many countries has been in household balance sheets, this trend in
corporate leverage will bear watching. For the time being, at any
rate, financial conditions are providing ample support for both
corporate investment and household spending around the world.

By 2009, four of the six pillars of American capitalism, the American
investment banks, had collapsed. Taxpayer funds had been used to save many
banks and bankrupt companies like General Motors and Chrysler, formerly
doyens of America's industrial heartland.

America is arguably facing its greatest ever challenge. Ironically, its bailouts
and budget deficits have been funded from China's foreign reserves and from
the children and grandchildren of current American consumers. This has
confronted the hitherto undisputed dogmas of the market system: "The
prevailing market system is supported by a very influential set of economic
dogmas which have come to occupy a dominant place in the lives of modern
societies. These include the high importance attached to market-led economic
growth; the value of complete free trade in money and capital as well as in
goods and services; the need to subordinate social welfare to market
requirements; the belief in cutting down or privatising government functions;
the acceptability of profit as a test of economic welfare; and others as well"
(Self 2000, p.ix).

Americans had reduced Adam Smith's "invisible hand" of capitalism to little
more than a crude and unprovable, therefore both unchallengeable and
unjustifiable, excuse for ubiquitous greed and reckless risk-taking. Abroad, and
increasingly at home, the great American dream of unregulated capitalism
became hotly debated and even held in disdain.

At the 2009 G-20 London Summit, President Barack Obama quietly took
responsibility for the world's economic crisis (Hujer et al. 2009):

Something was missing and Italian Prime Minister Silvio Berlusconi
wasn't about to accept it .... Barack Obama, the president of the
United States of America, the most important man at the G-20
summit in London, had remained silent for some time now ….
Berlusconi now spoke to him directly: "I would like to extend my
congratulations to Barack Obama," he said, adding that the
economic crisis had begun in the US. "Now he has to address it," he
said and looked towards Obama. "We wish him all the best for the
citizens of the US and the entire world.".... [Barack Obama] then
lowered his voice: "It is true, as my Italian friend has said, that the
crisis began in the US. I take responsibility, even if I wasn't even
president at the time.” …. The others couldn't believe their ears.
Was that really a confession of guilt from the US? Was it a
translation error, or at least an inaccuracy? Afterwards, this
sentence fuelled long discussions among the members of the
German delegation. German Chancellor Angela Merkel was so
impressed by Obama's statement that she rushed to tell her finance
minister, Peer Steinbrück. Japanese Prime Minister Taro Aso reacted
immediately: The proposal to hold the next summit not in Japan, but
rather in the US, is something that he no longer rejects, he says,
"now that the US has shouldered responsibility." .... Obama's
confession may go down in world history as one of the greatest
statements ever made. The US president is accepting responsibility
for the beginning of one of the worst economic crises of the last
century. By doing so, he has admitted that one of the excesses of the
American way of life -- the insatiable craving for huge profits -- has
brought the world to the brink of disaster. The others may have
played their part, but the origins lie in the US. The fact that Obama
has now admitted this sends a strong signal of hope to the world,
perhaps the strongest to emerge from the G-20 summit in London
last Wednesday and Thursday. Such an admission could begin to
pave the way towards rectifying the situation.

Challenges to the legitimacy of neoclassical economics
Shouldering the responsibility is an important first step. However, the next
step is recognition that the underlying models are broken. Over the last 80
years, neoclassical economists have seemingly led the world into two serious
economic collapses. Many economists have asked whether rational frameworks
of policy testing and analysis are seriously flawed. Mark Dodgson & Eric
Beinhocker criticise the fundamental assumption of rationality in CGE models
(Slattery 2008):

The intellectual field of economics is on the cusp of a big
transformation. Mainstream economics is increasingly being seen to
be detached from reality. Its assumptions about equilibrium,
rationality in human behaviour and the primacy of market forces
that are mysteriously asocial make its predictive power extremely
limited. New approaches, such as evolutionary economics and the
study of economies as complex adaptive systems, are much more
useful in addressing big economic challenges of generating growth
and productivity through innovation in ways that are sustainable
and equitable. The discipline is suffering, in effect, from the
challenge to neoliberal economic doctrine brought on by the [2008]
sub-prime crisis in the U.S. and its repercussions across the global
financial system. Feeding the mood of despair across financial
markets is the perception that mainstream economics was unable to
predict the crisis, or to manage it, and has been intellectually
enfeebled by the Gordian knot of peak energy prices, planetary
overheating and global debt.

The financial crisis has also led David Brooks to the conviction that
neoclassical models are overly linear and rational, lacking psychological
dimensions. He writes (Brooks 2008):

Economic models and entire social science disciplines are premised
on the assumption that people are mostly engaged in rationally
calculating and maximizing their self-interest …. But during this
financial crisis, that way of thinking has failed spectacularly. As Alan
Greenspan noted in his Congressional testimony last week, he was
“shocked” that markets did not work as anticipated. “I made a
mistake in presuming that the self-interests of organizations,
specifically banks and others, were such as that they were best
capable of protecting their own shareholders and their equity in the
firms.” …. My sense is that this financial crisis is going to amount to
a coming-out party for behavioral economists and others who are
bringing sophisticated psychology to the realm of public policy. At
least these folks have plausible explanations for why so many people
could have been so gigantically wrong about the risks they were
taking.

Brooks continues his criticism of neoclassical economics (Brooks 2009):

Once, classical economics dominated policy thinking. The classical
models presumed a certain sort of orderly human makeup …. the
market rewards rational behavior …. The invisible hand forms a
spontaneous, dynamic order .... Economic behavior can be
accurately predicted through elegant models …. This view explains a
lot, but not the current financial crisis — how so many people could
be so stupid, incompetent and self-destructive all at once …. This
crisis represents a flaw in the classical economic model and its
belief in efficient markets …. For years, Republicans have been
trying to create a large investor class with policies like private
Social Security accounts, medical savings accounts and education
vouchers. These policies were based on the belief that investors are
careful, rational actors who make optimal decisions. There was little
allowance made for the frailty of the decision-making process, let
alone the mass delusions that led to the current crack-up ….
Democrats also have an unfaced crisis. Democratic discussions of
the stimulus package also rest on a mechanical, dehumanized view
of the economy. You pump in a certain amount of money and “the
economy” spits out a certain number of jobs …. But an economy is a
society of trust and faith .... This recession was caused by deep
imbalances and is propelled by a cascade of fundamental
insecurities …. The economic spirit of a people cannot be
manipulated in as simple-minded a fashion as the Keynesian
mechanists imagine …. Mechanistic thinkers on the right and left
pose as rigorous empiricists. But empiricism built on an inaccurate
view of human nature is just a prison.

Brooks has not dug down to the bedrock of the American economic paradigm
founded on Aristotle's analysis of human happiness. Nevertheless, his
questioning of the existing models is pertinent for a number of additional
reasons. The most important of these is that models based on consumption
growth as society's main goal do not react well in low or volatile growth
situations.

The once heretical school of behavioural economics offers an alternative path
to the "rational man" hypothesis of neoclassical economics. This approach was
championed by Daniel Kahneman, who shared the 2002 Nobel Prize in
Economics "for having integrated insights from psychological research into
economic science, especially concerning human judgement and decision-
making under uncertainty."

However valuable the insights from irrationalist theories, an implicit
reductionism to individuals does not lead to a social future. Perhaps both
Kahneman and Brooks are seeking assurances in the wrong place. What is
more likely is that individual human psychology, whether rational or irrational,
will not turn out to be a durable guide for government policy.

Indeed, it may be recalled from the discussion of Evidence Based Policy in
Chapter 1 Introduction that policy makers would prefer that economic models
are always completely correct. However, policy makers seek confirmation of
feasibility from modellers to improve the probability that their policy will be
feasible, not seer-like predictions of the future and iron-clad guarantees of
policy outcomes. Policy makers are well aware that the future will unfold quite
differently to that forecast in economic models. This is why policy makers
chuckle in good humour at John Kenneth Galbraith's quip that “economists
were invented to give fortune tellers a good name.”

In 2008, President Sarkozy of France commissioned eminent economists,
including Nobel Prize winners Joseph Stiglitz, Amartya Sen, Kenneth Arrow
and Daniel Kahneman, to "set aside the religion of figures" and investigate
whether there was a better measure of national welfare than growth in Gross
Domestic Product (Stiglitz et al. 2009). Like Aristotle, the economists
concluded that the best measure of welfare is an index of well-being, or the
aggregated individual happiness of populations across all aspects of life. The
Commission proposed a new index of Net National Product (NNP), which is
Gross Domestic Product less depletion of natural and human capital.
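
In stylised form (a sketch only; the subscripts and depletion terms below are
illustrative labels, not the Commission's published notation), the proposed
measure can be written as

    NNP_t = GDP_t - D_t^{natural} - D_t^{human}

where D_t^{natural} and D_t^{human} denote the period's depletion of natural
and human capital respectively.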

In highlighting the pluralist role of economics in policy, Krugman (2009c) sees
that "the more it changes, the more it stays the same," because only those
seeking deterministic mathematical solutions have lost. He concludes that
policy formation will always be "messy" rather than neat and mathematical:

As I see it, the economics profession went astray because
economists, as a group, mistook beauty, clad in impressive-looking
mathematics, for truth …. Until the Great Depression, most
economists clung to a vision of capitalism as a perfect or nearly
perfect system. That vision wasn’t sustainable in the face of mass
unemployment, but as memories of the Depression faded,
economists fell back in love with the old, idealized vision of an
economy in which rational individuals interact in perfect markets,
this time gussied up with fancy equations …. It’s much harder to say
where the economics profession goes from here. But what’s almost
certain is that economists will have to learn to live with messiness
…. In practical terms, this will translate into more cautious policy
advice — and a reduced willingness to dismantle economic
safeguards in the faith that markets will solve all problems …. flaws-
and-frictions economics will move from the periphery of economic
analysis to its center …. they'll have to do their best to incorporate
the realities of finance into macroeconomics …. It will be a long
time, if ever, before the new, more realistic approaches to finance
and macroeconomics offer the same kind of clarity, completeness
and sheer beauty that characterizes the full neoclassical approach.

It remains to be seen if policy analysis can be improved by innovations in
behavioural economics, the new index of happiness or reminders that policy
formation is a rough and tumble area of politics.

Despite deft oratory from President Obama and such profound reflection
amongst economists and policy makers, it seems that lessons may not have
been learned from the financial crisis. President Obama's most senior
economic adviser, former Harvard University President Lawrence Summers,
changed the definition of the crisis from moral hazard to over-exuberance and
over-confidence leading to too much debt. Astounding everyone at a June 2009
conference of Deutsche Bank's Alfred Herrhausen Society in Washington,
Summers' only solution was to rebuild confidence by making credit more
widely available (Steingart 2009a).

Perhaps even worse, two months later, pre-crash "casino capitalism" had
returned in America, the United Kingdom and Germany (Herbst 2009). German
Finance Minister Peer Steinbrück criticised exorbitant bonuses to bank
executives in the following terms “Some executives didn't hear the bang ….
They are responsible for the fact that approval of our system of doing business
is waning …. Taxpayers are continuing to completely finance big bonuses”
(Spiegel Online 2009b).

A loss of confidence in the American economy has been taking place since
2000 with the American dollar depreciating 40% against the Euro over the
period 2000-2009. Even before the 2008 financial crisis, fewer investors in
China and Japan were prepared to finance the growing American deficit. The
Federal Reserve's response over the three years to 2009 was to increase the
money supply by 45%. Repurchasing Government securities has flooded money
into the economy to finance consumption rather than productive assets.

The American consumptive binge is accelerating with the greying of the
population and its growing medical insurance needs. America's 2009 budget
forecasts US$9 trillion of additional debt for the decade 2010-2020. This
imbalance of wild growth in money supply to finance Americans living well
beyond their means is seen by many as a precursor to massive inflation and a
collapse of the dollar.

2.5 Evolution of a new Anglo-American world view

Decline of American exceptionalism

Sir David King questions whether we have seen the passing of the era of
consumerism: "Consumerism has been a wonderful model for growing up
economies in the 20th century. Is that model fit for purpose in the 21st century,
when resource shortage is our biggest challenge?" (Randerson 2009).

Andrew Bacevich's The Limits of Power: The End of American Exceptionalism
(2008) makes a compelling case that current American consumerism has
reached a crisis of profligacy (p.17), but only after having been doomed for
decades. He writes (p.22):

The virtuous cycle of abundance and expansion made the United
States the land of opportunity. From expansion came abundance;
from abundance came prosperity; from prosperity came substantive
freedom, the means to safeguard freedom and the means to secure
further abundance. The cycle of consumption and investment built a
prosperous society …. Frederick Jackson Turner wrote that
American democracy was possible due to not the Constitution, but
free land and an abundance of resources open to a fit people ….
The American dream was fulfilled. Unseen hands like self interest
and a free market that would efficiently settle or clear economic
utilities worked well in times of growth and prosperity …. At the end
of WWII, the USA was the strongest, richest and freest nation in the
world. It possessed two-thirds of the world's gold reserves, half the
world's manufacturing capacity. The US produced the most oil, steel,
aeroplanes, automobiles and electronics. The US exported one third
of all world trade and its exports were double imports. The Bretton
Woods agreement replaced the pound sterling with US dollar as
reserve currency and made the US the manager of the world's
money. The US had unquestioned air and sea superiority, a nuclear
monopoly. In 1948, US per capita income was four times the
combined sum of Britain, France, Germany and Italy.

America built success on success with huge patent and copyright empires
across pharmaceuticals, computers (Intel, AMD, IBM), software (Microsoft,
Oracle, Sun), music, movies, publishing, food and beverage (Coca-Cola,
McDonald's, Kentucky Fried Chicken) and many other industries. It scooped a
margin from the majority of third world development by a form of economic
extortion. America dominates the ownership of the IMF and World Bank, and it
used this dominance to control lending to third world countries, requiring that
these countries spend their loans to buy American manufactured equipment
and to employ American contractors (such as Bechtel). America also deployed
the CIA and Marines to coerce investment in American goods and services
where economics alone did not work.

However, this miracle was not to last. The extended patent empires have
matured, and third world countries such as Indonesia and those in South
America no longer need or want International Monetary Fund loans with
strings attached. Bacevich writes (p.29):

By 1950, the US had begun to import oil. Then came the crushing
defeat in Vietnam, oil shocks, a destabilised economy, inflation,
stagflation and currency devaluation. Following Vietnam, American
efforts to expand abundance and freedom have become increasingly
problematic … In the name of preserving the American way of life,
President Bush and his lieutenants committed the nation to a
breathtakingly ambitious project of near global domination. Hewing
to a tradition that extended at least as far back as Jefferson, they
intended to expand American power to further the cause of
American freedom. Freedom assumed abundance. Abundance
seemingly required access to large quantities of cheap oil.
Guaranteeing access to that oil demanded that the United States
remove all doubts about who called the shots in the Persian Gulf. It
demanded oil wars.

Bacevich's point is that America has reached a low point in becoming the
world's biggest debtor nation and demonstrated its moral bankruptcy in having
thousands of American soldiers die in Iraq merely to secure oil for profligate
American consumers (pp.62-3 & 155):

While soldiers fought, people consumed. With the United States
possessing less than 3 percent of the world's known oil reserves and
Americans burning one out of every four barrels of petroleum
produced worldwide, oil imports reached 60 percent of daily
national requirements and kept rising. The personal savings rate
continued to plummet. In 2005, it dropped below zero and remained
there. Collectively, Americans were now spending more than they
earned. By 2006, the annual trade imbalance reached a whopping
$818 billion. The following year, public debt topped $9 trillion, or
nearly 70% of gross national product …. In February 2006, the New
York Times Magazinexxii posed the question Is freedom just another
word for many things to buy? …. To anyone with a conscience,
sending soldiers back to Iraq or Afghanistan for multiple combat
tours while the rest of the country chills out can hardly seem an
acceptable arrangement. It is unfair, unjust and morally corrosive.

Bacevich concludes that the end of American exceptionalism has arrived.
Americans are now normal people, just like people in Europe and Japan. The
great challenge for Americans is that they now need to live within their means.

New international symbiosis

The Prisoner's Dilemma game is often treated as the archetypal game for
economic and political relationships. It combines neoclassical economics,
fierce competition and a primitive Darwinian survival of the fastest and fittest.
Such assumptions regularly occur in circumstances as diverse as nuclear
deterrence, the Tour de France bicycle race, project management and
cigarette advertising.

Antecedents for game theory can be detected in Niccolò Machiavelli's The
Prince (1513) and Sun Tzu's The Art of War, 500 BC (Tzu 2006).xxiii However,
Claude-Henri de Rouvroy, Comte de Saint-Simon, the father of Positivism, was
perhaps the first person to clearly perceive that economic progress would
change the world. He fervently believed that the future could be accurately
predicted by the application of sound mathematical principles. Following mixed
fortunes during the 1794 French Terror, Saint-Simon developed the seeds of
modern game theory (Strathern 2002, pp.142-3). He accurately foresaw that
humans would choose science to civilise society because this would minimise
our maximum loss and any other strategy would cause a greater loss.

In 1838, Antoine Augustin Cournot's Researches into the Mathematical
Principles of the Theory of Wealth (Cournot 1838) provided the first formal
proposition of game theory.

In 1944, at Princeton, John von Neumann formulated modern game theory
concepts (von Neumann & Morgenstern 1953). This followed von Neumann's
early work on minimax optimisation (von Neumann 1928). Von Neumann's
game theory was further developed at RAND by Merrill Flood & Melvin
Dresher in 1950 (Dresher 1981). Albert W. Tucker (1980) later introduced
prison sentence pay-offs, which led to the game's current name of the
"Prisoner's Dilemma".xxiv

In August 1949, during the early days of the Cold War, Russia detonated a
nuclear device that broke America's monopoly on nuclear weapons. America
saw itself facing the stark choice of being red or dead (Bacevich 2008, p.164).
America's xenophobia was exacerbated by Mao Zedong's Communist
Revolution of 1 October 1949. In what Dick Cheney later called the
"one-percent doctrine," America stood ready to protect its Manifest Destiny
against any tangible threat. The nuclear hawks sprang into action, claiming
that America could avoid the choice of "red or dead". Using his minimax game
theory, von Neumann vigorously lobbied Presidents Harry Truman and Dwight
D. Eisenhowerxxv to launch a first-strike nuclear conflagration at the Soviet
Union.

Von Neumann became a commissioner of the Atomic Energy Commission in
1954 and served until his death on 8 February 1957, aged 53. Following his
death, Life Magazine's obituary reported that von Neumann had said of his
1950 game theory strategy: "If you say why not bomb them tomorrow, I say
why not today? If you say today at five o'clock, I say why not one o'clock?"
(Blair 1957, p.96).

Unfortunately for von Neumann, President Truman's key adviser Paul Nitze
thought the argument for preventative war was absurd. In the top secret
National Security Council document NSC68, promulgated in early 1950, Nitze
wrote that the idea of preventative war was “repugnant and morally
corrosive”.

With the onset of the Korean War in June 1950, President Truman agreed to
NSC68's dogma of mutually assured destruction (ironically carrying the
acronym MAD) and to permanent investment in military capability. NSC68
optimistically claimed "The economic effects of the program might be to
increase the gross national product by more than the amount being absorbed
for additional military and foreign assistance purposes …. [such as] fomenting
and supporting unrest and revolt [in the Soviet bloc]" (Bacevich 2008,
pp.108-11). Since this time, weapons manufacture for defence and export has
underpinned America's economic growth.

At Princeton, von Neumann belittled John Nash's theory of equilibrium (Nash
1950) as merely a corollary of von Neumann's own theory. A Nash equilibrium
is a game solution, which may not be a Pareto optimum, in which all players
have perfect information and no player can gain by unilaterally changing their
strategy. For example, the dominant strategy in the simple two-player
Prisoner's Dilemma is for each player to distrust, and therefore betray, the
other. Of course, the Pareto optimal solution would be for each player to trust,
and therefore not betray, the other, leaving both better off. John Nash, John C.
Harsanyi and Reinhard Selten subsequently shared the 1994 Nobel Prize in
Economics "for their pioneering analysis of equilibria in the theory of
non-cooperative games."
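
The structure of this example can be checked mechanically. The short sketch
below (illustrative only; the pay-off numbers are conventional textbook values,
not figures from Nash's paper or this dissertation) enumerates a two-player
trust/betray game and confirms that mutual betrayal is the only Nash
equilibrium, even though mutual trust leaves both players better off:

    # Python sketch: locating the Nash equilibrium of a one-shot
    # Prisoner's Dilemma with conventional, illustrative pay-offs.
    from itertools import product

    MOVES = ("trust", "betray")
    # (row player's pay-off, column player's pay-off); higher is better.
    PAYOFFS = {
        ("trust", "trust"): (3, 3),    # Pareto optimal: mutual trust
        ("trust", "betray"): (0, 5),
        ("betray", "trust"): (5, 0),
        ("betray", "betray"): (1, 1),  # mutual betrayal
    }

    def is_nash(row, col):
        """True if neither player gains by unilaterally changing strategy."""
        row_pay, col_pay = PAYOFFS[(row, col)]
        row_ok = all(PAYOFFS[(r, col)][0] <= row_pay for r in MOVES)
        col_ok = all(PAYOFFS[(row, c)][1] <= col_pay for c in MOVES)
        return row_ok and col_ok

    for row, col in product(MOVES, repeat=2):
        print(row, col, "Nash equilibrium" if is_nash(row, col) else "-")
    # Only ("betray", "betray") is reported as a Nash equilibrium, although
    # ("trust", "trust") pays both players more: the dilemma in miniature.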

In 1968, Garrett Hardin (1968) proposed the "Tragedy of the Commons," the
situation in which each person maximising their own interest leads, in
aggregate, to people despoiling a commons, for example a common grazing
area shared by all.

Elinor Ostrom (1990) found Hardin's hypothesis to be true in many situations
of common property. She showed that the "Tragedy of the Commons" is a case
of multiple Prisoner's Dilemmas. Together with Edella Schlager, Ostrom
subsequently developed the concept of property rights (Schlager & E. Ostrom
1992; E. Ostrom & Schlager 1996).

Ostrom's work now provides the necessary conceptual framework for
managing international commons, such as the globally shared resources of
clean air and moderation of atmospheric temperature rise and ocean
acidification. Her work was recognised with the 2009 Nobel Prize in
Economics for "her analysis of economic governance, especially the
commons".xxvi The Royal Swedish Academy of Sciences (2009) noted:

Elinor Ostrom has demonstrated how common property can be
successfully managed by user associations …. [She] challenged the
conventional wisdom that common property is poorly managed and
should be either regulated by central authorities or privatized.
Based on numerous studies of user-managed fish stocks, pastures,
woods, lakes, and groundwater basins, Ostrom concludes that the
outcomes are, more often than not, better than predicted by
standard theories. She observes that resource users frequently
develop sophisticated mechanisms for decision-making and rule
enforcement to handle conflicts of interest, and she characterizes
the rules that promote successful outcomes.

Robert Aumann extended the Prisoner's Dilemma to a repeating game called
the "Iterated Prisoner's Dilemma," which is also known as the Peace-War
Game. Aumann showed that a cooperative outcome could be sustained
(Aumann & Shapley 1974). Robert Aumann and Thomas C. Schelling
subsequently shared the 2005 Nobel Prize in Economics "for having enhanced
our understanding of conflict and cooperation through game-theory analysis."

Robert Axelrod (1984) put Aumann's theory to the test in a world-wide
competition for computer-simulated Iterated Prisoner's Dilemma strategies.
Axelrod found that if the game has a defined end then the best strategy is to
cheat all the time. The logic behind this strategy is that if the best choice on
the last iteration is to cheat, then the best choice on the second-last iteration is
also to cheat, and so on back to the first iteration, which agrees with von
Neumann's analysis.

However, the outcome is different if the game continues with no foreseeable
end, such that there is no last iteration. In this case the best strategy was a
simple tit-for-tat rule submitted by Anatol Rapoport (Rapoport & Chammah
1965). Rapoport's tit-for-tat algorithm had just four lines. The rule is to trust
other players unless they cheat, in which case the next iteration is retaliation,
followed by forgiveness and a return to the trust rule. Furthermore, it was
found that the more that people play the Iterated Prisoner's Dilemma, the more
they recognise that trust strategies maximise everyone's welfare.
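
The rule is short enough to sketch in full. The following simulation (a sketch
only; the pay-off table and the ten-round horizon are illustrative assumptions,
not Rapoport's original submission) shows tit-for-tat sustaining mutual trust
against itself while containing its losses against a persistent cheat:

    # Python sketch of tit-for-tat in an Iterated Prisoner's Dilemma.
    def tit_for_tat(opponent_history):
        """Trust first; thereafter mirror the opponent's previous move."""
        return "trust" if not opponent_history else opponent_history[-1]

    def always_cheat(opponent_history):
        return "cheat"

    # Illustrative pay-offs per round: (player A, player B).
    PAYOFF = {("trust", "trust"): (3, 3), ("trust", "cheat"): (0, 5),
              ("cheat", "trust"): (5, 0), ("cheat", "cheat"): (1, 1)}

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each sees the other's history
            move_b = strategy_b(history_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))   # (30, 30): sustained mutual trust
    print(play(tit_for_tat, always_cheat))  # (9, 14): one loss, then retaliation

Note that retaliation here is implicit in the mirroring: after a single exploited
round, tit-for-tat cheats back every round, and it forgives as soon as the
opponent returns to trusting.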

There was another winning strategy, submitted by Professor Nicholas Jennings
of Southampton University. This was a multi-agent group predator strategy
using sixty players that could recognise each other through a little dance.
These players always trusted each other and cheated on the non-predators.
This multi-agent strategy easily defeated individuals and resulted in predators
taking the top three positions, while their sacrificed losers were at the bottom.

Axelrod's game shows that the way out of the Tragedy of the Commons and the
Prisoner's Dilemma is for people to raise themselves from risk, despoilment
and despair by banding together for the greater good, thereby achieving
increased individual welfare for all.

The symbiosis that creates this environment of trust is usually found by
creating a high-level institution with enforceable powers to sanction members
of the collective group. For example, English Common Law was originally
created to bring to an end a vicious tradition of blood feuds. This has evolved
into national systems of judiciary and police, as well as United Nations
agencies such as the International Monetary Fund and World Bank, and
institutions such as the International Criminal Court.

America has traditionally been a loner, selectively choosing the international
organisations and treaties that it will join. For example, it has chosen to
dominate economic agencies but has neither ratified the Kyoto climate change
agreement nor submitted to the jurisdiction of the International Criminal
Court, avoiding the latter in case Americans were tried for foreign war crimes.

In order to move forward in concert with other major economic blocs in a spirit
of trust, America is beginning to recognise that it must rely less on cherry-
picking international agreements and more on trust strategies for a democracy
of nations. Jeffrey Sachs points out that America's attitude following World War
II was exactly this, giving considerable guidance and hope to all peoples:
"Great acts of U.S. cooperative leadership include the establishment of the UN,
the IMF and World Bank, the promotion of an open global trading system, the
Marshall Plan to fund European reconstruction, the eradication of smallpox,
the promotion of nuclear arms control, and the elimination of ozone-depleting
chemicals" (2008, pp.8-10).

Sachs sees the finest hour of the American Presidency as October 1962, when
President John Kennedy led the Soviets and the world away from nuclear
Armageddon. In a secret agreement, America removed its nuclear missiles
from Turkey at the same time as Russia removed its missiles from Cuba.

In his now famous Peace Address at American University in June 1963,
Kennedy urged all nations to use peace as the process of finding solutions to
man-made problems:

Let us focus on a more practical, more attainable peace, based on …
a gradual evolution in human institutions – on a series of concrete
actions and effective agreements which are in the interests of all
concerned …. Genuine peace must be the product of many nations,
the sum of many acts. It must be dynamic, not static, changing to
meet the challenge of each generation. For peace is a process – a
way of solving problems …. So let us not be blind to our differences
– but let us also direct attention to our common interests and to
means by which those differences can be resolved. And if we cannot
end now our differences, at least we can make the world safe for
diversity. For, in the final analysis, our most basic common link is
that we all inhabit this planet. We all breathe the same air. We all
cherish our children's future. And we are all mortal.

Nikita Khrushchev, the Soviet Premier, responded that this was the finest
American Presidential speech since those of Franklin Roosevelt. Six weeks
later he joined America in the Partial Test Ban Treaty for nuclear weapons.

New constrained growth model

The foregoing suggests that America needs to transition to a new economic
model that remains consistent with its strong ideals of individual freedom and
democracy.

Joseph Stiglitz, a former Senior Vice President and Chief Economist of the
World Bank who shared the 2001 Nobel Prize in Economics with George
Akerlof and Michael Spence “for laying the foundations for the theory of
markets with asymmetric information,” writes that America needs to migrate
from failed economic models and look to successes like the German social
model (Stiglitz 2009):

For years the US was the economic powerhouse of the world. It
imported more goods from abroad than it exported, to the joy of
manufacturers in Asia or Europe. But this model no longer works.
The Americans are completely over-indebted. They can't increase
their consumption, instead they have to save. This is why other
global growth has to be increased …. The fall of the Berlin Wall
really was a strong message that communism does not work as an
economic system. The collapse of Lehman Brothers on September
15th again showed that unbridled capitalism doesn't work either ….
Besides the two extremes of communism and capitalism, there are
alternatives, such as Scandinavia or Germany. The Chinese model
has succeeded very well for their people, but at the price of
democratic rights. The German social model, however, has worked
very well. It could also be a model for the US administration.

The German social model was developed by Economics Minister (later
Chancellor) Ludwig Erhard, who guided Germany through a post-World War II
boom. He inculcated a culture of stability based on hard work, low gearing and
free markets. Where Franklin D. Roosevelt had expanded America's economy
with massive borrowing, Erhard kept Germany's gearing low. His success has
recently been reflected upon as follows (Steingart 2009b):

His plan shuns excessive debt. His argument was that people would
first make an effort when money became tight and, thus, more
valuable. You get the best results, he found, if, in the tried and true
manner of our forefathers, you work hard and don't forget to save.
"The state can't afford anything that doesn't come from the strength
of its own people," was the message. He also could have said: No
pain, no gain …. His key words were not consumption and credit,
but pay and performance. He insisted, practically to the point of
stubbornness, that work and only work is the foundation of
prosperity: "We must either make do with less or work more." He
felt that the third way, which leads to the vault of the next best bank,
was a dead end …. His record is impressive, even from today's
perspective. He gave Germany the longest economic boom in world
history, from 1949 to 1966. During this period, the country, still
recovering from the war, rose up to become a leading exporter. It
overtook first the French, then three years later the British and as of
1976 the Americans. Germany's currency remained stable and its
level of debt low. From the late 1950s onwards, there was full
employment in Germany …. Even the term "Wirtschaftswunder"
[economic miracle], coined by an admiring populace, was repugnant
to him. Anyone who used the expression in his presence was
snubbed. "There are no miracles," he liked to say.

However, there are many geopolitical and cultural differences between
Germany and America that militate against convergence of their economic or
political systems. For example, America lost interest in Germany after the fall
of the Berlin Wall, when the city ceased to be the front line of the Cold War.

Separation, rather than convergence, is accelerating. Europeans are beginning
to take note that America no longer wants the role of Europe's patron. The
European Council on Foreign Relations notes “We are now entering a 'post-
American world'. The Cold War is fading into history, and globalisation is
increasingly redistributing power to the South and the East. The United States
has understood this, and is working to replace its briefly held global

dominance with a network of partnerships that will ensure that it remains the
'indispensable nation' …. Seen from Washington, there is something almost
infantile about how European governments behave towards them -- a
combination of attention seeking and responsibility shirking” (Witney &
Shapiro 2009).xxvii The authors note that there are “no more special
relationships” and that “governments in the EU must shake off illusions about
the transatlantic relationship if they want to avoid irrelevance on the global
stage.”

America's lack of interest in Germany may be further distinguished from the
growing relationship between Russia and Germany. Nowadays, Russia's
biggest trading partner is Germany, and Germany depends on Russian gas for
its energy security. In recent times Germany has shown that it is more
interested in this relationship with Russia than in its dealings with the
European Union, NATO and America (Cohen 2009). Furthermore, Germans
now attribute the collapse of the Berlin Wall and Communism to German
détente rather than American force. Indeed, Germany and Russia have moved
beyond self-denial and see their new geopolitical alignment as underpinned by
common experiences and learning from equally horrendous mistakes in the
past. In marked contrast to American conservative beliefs, the humiliation of
Russia following the collapse of the Soviet Union is now seen as an American
error of judgement.

At a cultural level, Americans and Germans have different collective and primal
emotions (Malzahn 2009):

Today, we believe in Obama. We don't actually know what that means yet. What's interesting in this context is not so much the
nature of Obama and his administration but the nature of German
political beliefs, and how they have developed over time …. For
many Germans, the Americans have always been simply too
extreme. They are either too fat or too obsessed with exercise, too
prudish or too pornographic, too religious or too nihilistic. In terms
of history and foreign policy, the Americans have either been too
isolationist or too imperialistic. They simply go ahead and invade
foreign countries to only, in the end, abandon those countries the
way they did in Vietnam and will soon do in Iraq …. When Obama gave his speech at Berlin's Victory Column last summer, he talked
about the post-war airlift during the blockade of Berlin and about
the care packages the Candy Bombers distributed. And then he
asked, buried in a subordinate and somewhat cloudy clause of one of
his sentences, that Germans start thinking about how to pay back
this moral debt. However, if I know my countrymen, then this type of
nudging just isn't going to work …. When Obama says that the US is
about to change but that the U.S. cannot be the only one to change,
he should not overestimate the innate feelings of personal
responsibility in the German populace or assume that they will fill in
the unspoken subtext …. The difference being: Americans live in a
society which of course celebrates commerce and selfishness -- but
behind the bluster, a mere inch beneath the surface, there are often
huge reservoirs of idealism and selflessness in individual Americans.
We Germans, however, live in a world which in ways is much fairer
and more organized for the public good. Yet, so many of our
experiences from the Thirty Years War onwards have contributed to
a hard egotistical core which lurks just beneath the dutiful surface
of the national psyche.

The American consumer economy has a perverse economic feedback that Germans don't understand: consumption begets consumption. The robust saving and belt tightening so embedded in the German psyche has precisely the opposite effect in America. As the American savings rate rises, house prices fall, unemployment rises and the consumer sector cannot lift itself out of its lethargy. Therefore, the individual and national psyches of America and
Germany have profound differences.

This is evidenced in a fundamental difference between the American and German models of growth. Prima facie, both are identical in the policy to
maximise welfare, which is identified with consumption growth. The difference
is that the American model is one of unconstrained maximisation and
continued extension to economic growth while the German model is one of
constrained maximisation.

For the American consumer economy, the quality of that growth is not so
important. For example, short-term consumer growth through hollowing out of the manufacturing industry is just as valuable as any other sort of growth. It
matters little that national income inequality is extraordinarily high and huge
differentials in consumption exist between ethno-cultural groups.

We have seen above that America has traditionally removed constraints by active measures, including covert and military actions to secure resources. This includes the morally corrosive spectacle of American soldiers dying in Iraq to secure oil for American consumers who “chill out” at home.

In stark contrast to the Anglo-American model, the German model focuses on the quality of growth as much as, or more than, the quantum of growth.
Production growth must be knowledge based and export led. Consumption
growth must be based on a broad equality in income so all citizens more or less
equally share in the benefits of the good times and impacts of the bad times.

Prime in the German psyche are the constraints of money and resources. While
the German model still seeks to maximise consumption growth, this is subject
to two main constraints. The first is efficiently satisfying money and resource
limitations. Consistent with the previous discussion of Rawls' A Theory of Justice (1972), the second constraint is that society's stability be maintained by automatic stabilisers that protect the weakest members. For example, “Kurzarbeit,” the government subsidy that compensates employees when their hours are cut, is credited with preventing hundreds of thousands of job losses in the 2008 global financial crisis.
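
The distinction between the two growth models can be made precise in a stylised welfare-maximisation sketch. The functional forms and symbols below are illustrative assumptions chosen for exposition, not a formal model drawn from the literature:

\[
\text{Unconstrained (American):}\quad \max_{\{c_t\}} \sum_t \beta^t U(c_t)
\]

\[
\text{Constrained (German):}\quad \max_{\{c_t\}} \sum_t \beta^t U(c_t)
\quad \text{s.t.}\quad R(c_t) \le \bar{R}, \qquad c_t^{\mathrm{min}} \ge \underline{c}
\]

where \(U\) is welfare identified with consumption \(c_t\), \(R(c_t) \le \bar{R}\) captures the money and resource limits, and \(c_t^{\mathrm{min}} \ge \underline{c}\) is the Rawlsian stabiliser protecting the weakest members. Because the feasible set of the constrained problem is a subset of the unconstrained one, the maximised consumption path of the former can never exceed, and will generally fall below, that of the latter; this is the point made in the following paragraph.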

Constrained growth will often be lower than unconstrained growth. For example, the European Union introduced carbon trading to control emissions and global warming. This resulted in lower consumption growth over the last decade compared to America (excluding the 2008 Global Financial Crisis), as shown in the following illustration.

[Figure: comparative growth of Germany and America. Source: Mathematica Country Data, 9 April 2009. Note that the graph is denominated in American dollars, so the line for Germany incorporates all exchange rate variability.]

Eilenberger (2009) speculates that the world has seen the end of the old models of globalisation, including post-WWII American economic and military imperialism. He sees globalisation in a down-cycle, moving down from a world-historical optimum:

We are -- at this moment -- experiencing a European utopia that has been cultivated for millennia …. The dogma-free, democratic
marketplace of ideas, for which Socrates gave his life in Athens, is
today a communicative reality in which hundreds of millions of
citizens are actively taking part. The spirit of scientific methodology
and veracity embodied by Bacon, Descartes, and Newton as a
measure of the collective interpretation of the world is driving a
community of researchers that is unique in its diversity. The federal
confederacy based on fundamental human rights that Erasmus and
Kant envisaged as the "kingdom of ends" is now our political order.
The collective safeguarding of physical and intellectual basic rights
that Aristotle recognized as the foundation of every polity, and the
ethically concerned liberalism of Adam Smith are guiding the logic
of our economic activity. And finally, the vision of a secular, active,
multilingual life elevated by Shakespeare, Cervantes, and Goethe as
the core of what it means to be human accurately describes our
cultural existence today as nascent Europeans …. We are not
dealing here with poetry or philosophical pipe dreams, but rather an empirically demonstrable reality. The European Union in the year
2009 represents a world-historical optimum. Never before have 500
million people united under a single political order been better off.
Never before have they been as free, as healthy, or as well educated;
and never before have they been as peaceful. To be sure, it is the
systemic improbability of this state of affairs that lends a certain
credence to the current pessimism about the future.

Eilenberger hypothesises that the Global Financial Crisis of 2008-9 is merely a symptom of the new logic of scarce resources and a fundamental change in globalisation: in particular, a change away from the old British and European models of imperialism and the modern-day Anglo-American models of economic and military imperialism.

According to Eilenberger, the new paradigm is one where Europe and all
regions of the world will become inwardly focused: “The age of globalization is
over. The coming 30 years will be shaped by the logic of scarcity, resulting in a
turn away from global trade and the creation of self-reliant geopolitical zones.”

He sees the European tradition of wisdom as serendipitously being mature and capable of transferring proven modes of governance under resource scarcity to
Anglo-American nations and new regional unions, such as an enlarged
European Union including Russia, Turkey and Ukraine. He is less certain of the
path of China, Middle Eastern nations and India.

Interestingly, Eilenberger observes that the journey of industrialised democracies through internationalism, globalisation and finally to domestic
resilience and sustainability parallels the path of self-awareness and wisdom of
Voltaire's hero Candide in Candide, ou l'Optimisme (1759):

After the adventurous hero Candide, inspired by the notion that he lives in the best of all possible worlds, has circled the globe and thus
directly experienced the deep misère du monde in all its conceivable
forms, he returns to a fenced garden, the fruits of which at least
guarantee him and his own an agreeable livelihood. Now and again
dreadful news from other parts of the world penetrates the walls
and leads to discussion about responsibility and the possibility of a new departure, to which the now wise Candide responds, Cela est
bien dit, mais il faut cultiver notre jardin. (That is well said, but we
must cultivate our garden) …. Tending to one's own garden,
ensuring its sustainability, and continuing to cultivate it innovatively:
this is Europe's future -- behind walls.

These generic themes are suggested as potential applications for further policy research in Chapter 7, Conclusions and suggestions for further research.

Policy reboot
The Global Financial Crisis of 2008-9 may well mark the end of a 30-year bubble in finance and an impending transition of world governance from America to a group of major nations. Upon the election of Barack Obama in November 2008, President Nicolas Sarkozy of France, who held the European Union's rotating presidency, wrote to the President-elect requesting that the world governance granted to America at the end of World War II, at Bretton Woods, be redistributed to other countries including the European Union.

In a further serious challenge to America's waning economic leadership, Zhou Xiaochuan, Governor of China's central bank, suggested on 26 March 2009 that a supra-currency, such as IMF Special Drawing Rights (SDRs), replace the American dollar as the world's reserve currency (Xiaochuan 2009):

The outbreak of the current crisis and its spillover in the world have confronted us with a long-existing but still unanswered question, i.e.,
what kind of international reserve currency do we need to secure
global financial stability and facilitate world economic growth,
which was one of the purposes for establishing the IMF? There were
various institutional arrangements in an attempt to find a solution,
including the Silver Standard, the Gold Standard, the Gold
Exchange Standard and the Bretton Woods system. The above
question, however, as the ongoing financial crisis demonstrates, is
far from being solved, and has become even more severe due to the
inherent weaknesses of the current international monetary system
…. The acceptance of credit-based national currencies as major
international reserve currencies, as is the case in the current system, is a rare special case in history. The crisis again calls for
creative reform of the existing international monetary system
towards an international reserve currency with a stable value, rule-
based issuance and manageable supply, so as to achieve the
objective of safeguarding global economic and financial stability ….
The desirable goal of reforming the international monetary system,
therefore, is to create an international reserve currency that is
disconnected from individual nations and is able to remain stable in
the long run, thus removing the inherent deficiencies caused by
using credit-based national currencies …. The IMF also created the
SDR in 1969, when the defects of the Bretton Woods system initially
emerged, to mitigate the inherent risks sovereign reserve currencies
caused. Yet, the role of the SDR has not been put into full play due
to limitations on its allocation and the scope of its uses. However, it
serves as the light in the tunnel for the reform of the international
monetary system …. The basket of currencies forming the basis for
SDR valuation should be expanded to include currencies of all major
economies, and the GDP may also be included as a weight. The
allocation of the SDR can be shifted from a purely calculation-based
system to a system backed by real assets, such as a reserve pool, to
further boost market confidence in its value.
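
To make the mechanics of such a supra-currency concrete, the following minimal sketch values one unit of a hypothetical SDR-style basket from assumed currency amounts and exchange rates. The currency list, amounts and rates are invented for illustration only; they are not the IMF's actual SDR parameters.

    # Minimal sketch of valuing a supra-currency defined as a basket of
    # national currencies. All amounts and exchange rates below are
    # hypothetical, purely for illustration; they are not the IMF's
    # actual SDR parameters.

    basket_amounts = {   # units of each currency fixed per basket unit
        "USD": 0.60,
        "EUR": 0.40,
        "JPY": 12.0,
        "GBP": 0.09,
        "CNY": 1.00,  # Zhou's proposal: expand the basket to all major economies
    }

    usd_per_unit = {     # market exchange rates into the numeraire (here US dollars)
        "USD": 1.00,
        "EUR": 1.35,
        "JPY": 0.010,
        "GBP": 1.60,
        "CNY": 0.15,
    }

    def basket_value(amounts, rates):
        """Value of one basket unit expressed in the numeraire currency."""
        return sum(qty * rates[ccy] for ccy, qty in amounts.items())

    print(f"1 basket unit = {basket_value(basket_amounts, usd_per_unit):.4f} USD")

Because the basket averages over several national currencies, a fluctuation in any one of them moves the basket's value far less than it moves that currency alone, which is the stability property Zhou emphasises.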

President Barack Obama's inauguration speech of 20 January 2009 shifted America's lexicon from growth to renewal, humility and peace. He did not
allude to a new bigger brighter future. Instead, he spoke of Americans dusting
themselves off and setting about re-achieving domestic and global respect and
stability: “With a spirit of service in a new era of responsibility ….The world
has changed — and we must change with it …. [America must] play its role in
ushering in a new era of peace .… our power alone cannot protect us; nor does
it entitle us to do as we please …. our security emanates from the justness of
our cause, the force of our example, the tempering qualities of humility and
restraint.”

He rejected the pervasive Christian fundamentalism of the previous Administration and substituted a commitment to the rationality of sciences and
law: “[We are] a nation of Christians and Muslims, Jews and Hindus — and non-believers …. we will restore science to its rightful place .… we reject as false
the choice between our safety and our ideals.”

While reaffirming America's commitment to defeat those who seek to advance their aims by inducing terror, President Obama distanced tomorrow's America from President Bush's mantras such as “the war on terror.” In an olive branch to America's adversaries Iran, North Korea and the Taliban, he reiterated the
campaign idea that “We will extend a hand if you are willing to unclench your
fist.” On 8 March 2009, President Barack Obama declared that America was
not winning the Afghanistan war (Cooper & Stolberg 2009). Mirroring General
David H. Petraeus' rapprochement with Sunni militias in Iraq, he suggested a
new policy in both Afghanistan and Pakistan of America's military reaching out
to moderate elements of the Taliban.xxviii

In an address to the Congress on 24 February 2009, President Obama reaffirmed his commitment to excise the pervasive rot in America's leadership
that had led to broken promises, delayed reform and a culture of reckless
spending by Americans as a whole (Obama 2009a):

Now, if we're honest with ourselves, we'll admit that for too long we
have not always met these responsibilities, as a government or as a
people. I say this not to lay blame or to look backwards, but because
it is only by understanding how we arrived at this moment that we'll
be able to lift ourselves out of this predicament. The fact is, our
economy did not fall into decline overnight. Nor did all of our
problems begin when the housing market collapsed or the stock
market sank. We have known for decades that our survival depends
on finding new sources of energy, yet we import more oil today than
ever before. The cost of health care eats up more and more of our
savings each year, yet we keep delaying reform. Our children will
compete for jobs in a global economy that too many of our schools
do not prepare them for. And though all of these challenges went
unsolved, we still managed to spend more money and pile up more
debt, both as individuals and through our government, than ever
before. In other words, we have lived through an era where too
often short-term gains were prized over long-term prosperity, where we failed to look beyond the next payment, the next quarter, or the
next election. A surplus became an excuse to transfer wealth to the
wealthy instead of an opportunity to invest in our future.
Regulations - regulations were gutted for the sake of a quick profit
at the expense of a healthy market. People bought homes they knew
they couldn't afford from banks and lenders who pushed those bad
loans anyway. And all the while, critical debates and difficult
decisions were put off for some other time on some other day. Well,
that day of reckoning has arrived, and the time to take charge of our
future is here.

The Rev. Thomas Robert Malthus is mainly remembered for his courageous albeit erroneous conviction that population would grow in a geometric sequence and therefore overtake food production, which he thought could only grow in an arithmetic progression (Malthus 1798). His notable achievement of modelling with geometric and arithmetic progressions is seen as an important precursor to neoclassical economics.xxix While he was completely incorrect in his conclusions, it is ironic and at the same time very interesting that the base case of the Club of Rome's 1972 Malthus-like projections has indeed been borne out (Turner 2008).
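
Malthus' two progressions can be written out explicitly. With population growing geometrically at rate r from an initial level P_0, and food supply growing arithmetically by a constant increment a from F_0 (the symbols here are chosen for illustration and are not Malthus' own notation):

\[
P_t = P_0 (1+r)^t, \qquad F_t = F_0 + a\,t .
\]

Since an exponential term eventually dominates any linear term, the ratio \(P_t / F_t\) grows without bound for every \(r > 0\), which is why Malthus concluded that population must ultimately outrun subsistence. The error lay not in the arithmetic but in the premise that food production can only grow linearly.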

Modelling aside, Malthus' greatest contribution to economics was his proposition, in Principles of Political Economy (1820), that general gluts emerge periodically and that these gluts cannot be cleared by normal market mechanisms because the downward spiral of unemployment and falling consumption is too rapid. His views directly challenged Jean-Baptiste Say's assertion that gluts could only be local and temporary because a major virtue of capitalism was its automatic clearing of markets, formulated as Say's Law (1803): that supply creates its own demand.xxx Malthus saw that even if prices fall, the market may not clear because ordinary people have insufficient income to afford consumption. Prefiguring Keynes by a century, Malthus proposed that the government should intervene to fund consumption by landowners and the employment of the poor in roads and public works.

In an important change in American political philosophy, President Obama has acted on this Malthus-like insight. In continuing to outline his vision of reducing America's consumerism, notwithstanding that consumerism has been America's main source of economic growth, President Obama has strongly advocated substituting investment and saving for excess consumption, while simultaneously removing the policy distortions of previous Administrations, distortions that exacerbated America's inequality of income by diverting middle-class wealth to the rich and played a major part in America's financial collapse (Obama 2009e):

And most of all, I want every American to know that each action we
take and each policy we pursue is driven by a larger vision of
America's future – a future where sustained economic growth
creates good jobs and rising incomes; a future where prosperity is fuelled not by excessive debt, reckless speculation, and fleeting profits, but is instead built by skilled, productive workers; by sound
investments that will spread opportunity at home and allow this
nation to lead the world in the technologies, innovations, and
discoveries that will shape the 21st century. That is the America I
see. That is the future I know we can have …. Even as we clean up
balance sheets and get credit flowing; even as people start spending
and business start hiring – we have to realize that we cannot go
back to the bubble and bust economy that led us to this point …. It is
simply not sustainable to have a 21st century financial system that is
governed by 20th century rules and regulations that allowed the
recklessness of a few to threaten the entire economy. It is not
sustainable to have an economy where in one year, 40% of our
corporate profits came from a financial sector that was based too
much on inflated home prices, maxed-out credit cards, over-
leveraged banks and overvalued assets; or an economy where the
incomes of the top 1% have skyrocketed while the typical working
household has seen their income decline by nearly $2,000.

In the overview of the fiscal 2010 budget, A New Era of Responsibility: Renewing America’s Promise, the President was just as categorical (Obama 2009b):

This crisis is neither the result of a normal turn of the business cycle
nor an accident of history. We arrived at this point as a result of an
era of profound irresponsibility that engulfed both private and public institutions from some of our largest companies’ executive
suites to the seats of power in Washington, D.C. For decades, too
many on Wall Street threw caution to the wind, chased profits with
blind optimism and little regard for serious risks - and with even less
regard for the public good. Lenders made loans without concern for
whether borrowers could repay them. Inadequately informed of the
risks and overwhelmed by fine print, many borrowers took on debt
they could not really afford. And those in authority turned a blind
eye to this risk-taking; they forgot that markets work best when
there is transparency and accountability and when the rules of the
road are both fair and vigorously enforced. For years, a lack of
transparency created a situation in which serious economic dangers
were visible to all too few …. This irresponsibility precipitated the
interlocking housing and financial crises that triggered this
recession. But the roots of the problems we face run deeper.
Government has failed to fully confront the deep, systemic problems
that year after year have only become a larger and larger drag on
our economy. From the rising costs of health care to the state of our
schools, from the need to revolutionize how we power our economy
to our crumbling infrastructure, policymakers in Washington have
chosen temporary fixes over lasting solutions …. The time has come
to usher in a new era of responsibility in which we act not only to
save and create new jobs, but also to lay a new foundation of growth
upon which we can renew the promise of America …. This Budget is
a first step in that journey …. Our problems are rooted in past
mistakes, not our capacity for future greatness. We should never
forget that our workers are more innovative and industrious than
any on earth. Our universities are still the envy of the world. We are
still home to the most brilliant minds, the most creative
entrepreneurs, and the most advanced technology and innovation
that history has ever known. And we are still the Nation that has
overcome great fears and improbable odds. It will take time, but we
can bring change to America. We can rebuild that lost trust and
confidence. We can restore opportunity and prosperity. And we can
bring about a new sense of responsibility among Americans from
every walk of life and from every corner of the country.

However, the President was still appealing to the great American dream of unrestrained economic growth. In contrast, the European Union and Japan have shown that gains in sustainability and quality of life can occur without extraordinary growth in GDP. In addition to GDP growth, environmental and social considerations need to be included in all scenarios for sustainable development.

In April 2009, President Barack Obama surprised the world with his
understanding of the new realpolitik across international economic, security,
energy and climate relations (Scherer 2009):

Most of the hallmarks of the foreign policy of George W. Bush are gone. The old conservative idea of "American exceptionalism," which
placed the U.S. on a plane above the rest of the world as a unique
beacon of democracy and financial might, has been rejected ….
Obama has made clear that the U.S. is but one actor in a global
community. Talk of American economic supremacy has been
replaced by a call from Obama for more growth in developing
countries. Claims of American military supremacy have been
replaced with heavy emphasis on cooperation and diplomatic hard
labor …. after the G-20 summit ended ... two American reporters
asked Obama for his response to the claim by Brown that the
"Washington consensus is over." Obama all but agreed with Brown,
noting that the phrase had its roots in a significant set of economic
policies that had shown itself to be imperfect. He went on to talk
about the benefits of increasing economic competition with the U.S.
"That's not a loss for America," he said of the economic rise of other
powers. "It's an appreciation that Europe is now rebuilt and a
powerhouse. Japan is rebuilt, is a powerhouse. China, India — these
are all countries on the move. And that's good." …. At a town hall in
Strasbourg, France, Obama stood before an audience of mostly
French and German youth and admitted that the U.S. should have a
greater respect for Europe. "In America, there's a failure to
appreciate Europe's leading role in the world," he said before
offering other European critical views of his country. "There have
been times where America has shown arrogance and been dismissive, even derisive." …. French President Nicolas Sarkozy
addressed the issue directly, speaking through an interpreter. "It
feels really good to be able to work with a U.S. President who wants
to change the world and who understands that the world does not
boil down to simply American frontiers and borders," he said. "And
that is a hell of a good piece of news for 2009."

On 4 June 2009, President Obama made a visionary speech to the Muslim and
Jewish worlds about international ethics and peace (Obama 2009c). This
speech linked directly to President John F. Kennedy's transformative and
enduring Peace Speech at American University in 1963 (Kennedy 1993). It was
to prove just as historic. President Obama said:

So long as our relationship is defined by our differences, we will empower those who sow hatred rather than peace, those who
promote conflict rather than the cooperation that can help all of our
people achieve justice and prosperity. And this cycle of suspicion
and discord must end …. I've come here to Cairo to seek a new
beginning between the United States and Muslims around the
world, one based on mutual interest and mutual respect …. Unlike
Afghanistan, Iraq was a war of choice that provoked strong
differences in my country and around the world …. events in Iraq
have reminded America of the need to use diplomacy and build
international consensus to resolve our problems whenever possible.
Indeed, we can recall the words of Thomas Jefferson, who said: "I
hope that our wisdom will grow with our power, and teach us that
the less we use our power the greater it will be." …. In the middle of
the Cold War, the United States played a role in the overthrow of a
democratically elected Iranian government. Since the Islamic
Revolution, Iran has played a role in acts of hostage-taking and
violence against U.S. troops and civilians .... I've made it clear to
Iran's leaders and people that my country is prepared to move
forward …. No single nation should pick and choose which nation
holds nuclear weapons. And that's why I strongly reaffirmed
America's commitment to seek a world in which no nations hold
nuclear weapons. And any nation -- including Iran -- should have the
right to access peaceful nuclear power if it complies with its responsibilities under the nuclear Non-Proliferation Treaty …. all of
us must recognize that education and innovation will be the
currency of the 21st century and in too many Muslim communities,
there remains underinvestment in these areas …. On education, we
will expand exchange programs, and increase scholarships …. On
economic development, we will create a new corps of business
volunteers to partner with counterparts in Muslim-majority
countries …. On science and technology, we will launch a new fund
to support technological development in Muslim-majority countries,
and to help transfer ideas to the marketplace so they can create
more jobs. We'll open centers of scientific excellence in Africa, the
Middle East and Southeast Asia, and appoint new science envoys to
collaborate on programs that develop new sources of energy, create
green jobs, digitize records, clean water, grow new crops ….
eradicate polio …. And …. expand partnerships with Muslim
communities to promote child and maternal health …. Americans are
ready to join with citizens and governments; community
organizations, religious leaders, and businesses in Muslim
communities around the world to help our people pursue a better
life …. All of us share this world for but a brief moment in time. The
question is whether we spend that time focused on what pushes us
apart, or whether we commit ourselves to an effort - a sustained
effort - to find common ground, to focus on the future we seek for
our children, and to respect the dignity of all human beings.

Three months later, in September 2009, President Obama cancelled America's central European “Star Wars” missile shield project (Levy & Baker 2009).
President Obama's breathtakingly bold reversal of President George W. Bush's
security policy is analogous to President Kennedy pulling the world back from
the brink of nuclear war in the Cuban Missile Crisis. Prime Minister Vladimir V.
Putin responded to President Obama as his predecessor Nikita Khrushchev had
to President Kennedy, calling President Obama’s decision “correct and brave”.

President Obama's policy reversal could be said to be pragmatic. President Reagan's policy of intimidating Russia into reducing its nuclear arsenals had brought no success. Nor had George W. Bush's policy of “prodding the bear” by
intervening in Russia's sphere of influence across the former satellite states of Poland, Romania and the Czech Republic, and through the client war in Georgia (Spiegel Online 2009a).

However, this policy reversal is more than merely pragmatic on the one hand and a Kennedy-Obama vision of reducing nuclear weapons on the other. It puts in place a new platform for sweeping change to Anglo-American international relations. Firstly, it looks to a new international consensus across all aspects of security, including nuclear weapons, terrorism and climate change. Secondly, it de-escalates the harsh words and threats that create anxieties, impose high defence costs on every country and see weapons systems across the globe placed on hair triggers. Thirdly, it recognises the new financial reality that America is unable to remain the world's policeman. Fourthly, it reorients policy to domestic issues, such as health over military spending, or “butter instead of guns.”

United Nations Security Council Resolution 1887


In September 2009, President Obama became the first American President to
chair the United Nations Security Council (United Nations Security Council
2009). In his opening remarks to the 6191st meeting, President Obama pledged
that “the United States would host a Summit in early 2010 and pursue deeper
cuts in its nuclear arsenal, as well as agreements with the Russian Federation
towards the total elimination of nuclear weapons.”

The Council reaffirmed its strong support for the Treaty on the Non-
Proliferation of Nuclear Weapons by adopting Resolution 1887 (2009) to end
nuclear weapons proliferation. The Meeting also called on States parties “to
comply fully with their obligations and to set realistic goals to strengthen, at
the 2010 Review Conference, all three of the Treaty’s pillars - disarmament of
countries currently possessing nuclear weapons, non-proliferation to countries
not yet in possession, and the peaceful use of nuclear energy for all.”

In addressing the United Nations General Assembly on the previous day, President Obama made an extraordinary commitment to eradicate extreme
world poverty. The President said (Obama 2009d; Bono 2009):

We will support the Millennium Development Goals, and approach next year’s summit with a global plan to make them a reality. And we will set our sights on the eradication of extreme poverty in our
time.

2009 Nobel Peace Prize


In October 2009, President Obama was awarded the 2009 Nobel Peace Prize. The Norwegian Nobel Committee (2009) commented:

The Norwegian Nobel Committee has decided that the Nobel Peace
Prize for 2009 is to be awarded to President Barack Obama for his
extraordinary efforts to strengthen international diplomacy and
cooperation between peoples. The Committee has attached special
importance to Obama's vision of and work for a world without
nuclear weapons …. Obama has as President created a new climate
in international politics. Multilateral diplomacy has regained a
central position, with emphasis on the role that the United Nations
and other international institutions can play …. The vision of a world
free from nuclear arms has powerfully stimulated disarmament and
arms control negotiations. Thanks to Obama's initiative, the USA is
now playing a more constructive role in meeting the great climatic
challenges the world is confronting …. The Committee endorses
Obama's appeal that "Now is the time for all of us to take our share
of responsibility for a global response to global challenges."

Nobel Committee chairman Thorbjorn Jagland said after the announcement (Gibbs 2009):

It’s important for the Committee to recognize people who are struggling and idealistic but we cannot do that every year. We must
from time to time go into the realm of realpolitik. It is always a mix
of idealism and realpolitik that can change the world …. The
question we have to ask is who has done the most in the previous
year to enhance peace in the world …. and who has done more than
Barack Obama? …. There is great potential. But it depends on how
the other political leaders respond. If they respond negatively, one
might have to say he failed. But at least we want to embrace the
message that he stands for … [West Germany's Chancellor Willy] Brandtxxxi hadn’t achieved much when he got the prize, but a process
had started that ended with the fall of the Berlin Wall …. The same
thing is true of the prize to Mikhail Gorbachev in 1990, for
launching perestroika. One can say that Barack Obama is trying to
change the world, just as those two personalities changed
Europe.xxxii

The news of the award was applauded around the world. French President Nicolas Sarkozy congratulated President Obama, saying: “It sets the seal on
New York Times (2009) noted that President Obama's Nobel Peace Prize failed
to resonate in America:

Mr. Obama’s aides had to expect a barrage of churlish reaction, and they got it. The left denounced the Nobel committee for giving the
prize to a wartime president. The right proclaimed that Mr. Obama
sold out the United States by engaging in diplomacy. Members of
the dwindling band of George W. Bush loyalists also sneered — with
absolutely no recognition of their own culpability — that Mr. Obama
has not yet ended the wars in Afghanistan and in Iraq …. Americans
elected Mr. Obama because they wanted him to restore American
values and leadership — and because they believed he could. The
Nobel Prize, and the broad endorsement that followed, shows how
many people around the world want the same thing.

2.6 Threats to the evolution of a new Anglo-American world view
A new symbiosis of world nations across the multifaceted dimensions of economics, trade, security and energy requires Americans to discard prejudices entrenched over hundreds of years and to move on rapidly from the religious fundamentalism and racial rhetoric that characterised the era of George Bush, rhetoric that appeals to the worst of human bigotry.xxxiii
As the German Foreign Minister, Frank-Walter Steinmeier, said on 5 November
2008:

Don't expect a radical change in US climate policy after Barack
Obama takes over as president … America as a whole is not ready
for the contribution it needs to make in order to lessen the negative
effects of global warming … Washington will take pains to ensure
climate protection measures do not harm the US economy ... the
dominant issue in the US has always been energy security.

America's new direction under Democratic politics raises questions about the underlying attitudes of Americans and President Obama's ability to change the direction of policy. The discipline of moral psychology, established by Kohlberg (1969), seeks to understand the differences between conservative and liberal attitudes.

Haidt & Graham (2007) have provided a behavioural model based on five
underlying psychological factors that characterise emotional reactions in
politics. These are harm-care, fairness-reciprocity, ingroup-loyalty (i.e. protect
the group or traditions), authority-respect, and purity-sanctity (i.e. religion).

According to Haidt's model, political conservatives are broadly pluralist and eclectic across all these factors. Political liberalism exists in nations when the
social milieu permits a sort of switching off of the last three dimensions of
conservatism. This leaves the moral intuitions of political liberals mainly or
even solely based on harm-care and fairness-reciprocity.

Clarke (Saunders 2009b) explains that conservatives usually demand conformity and respect for authority. For example, many Americans who
endorsed George W. Bush did so because they believed he upheld morality.
These voters were completely oblivious to his immoral acts, such as torture.
Haidt believes that this is because political conservatives see no inconsistency
in decisions being made on intuition and implemented expediently with scant
regard for individual rights.

Kass (1997) explains why this occurs. He argues that political conservatives
rely on their gut feel for moral principles and identify any violation as
repugnant and, by extension, a pernicious threat to the establishment. This is
Kass' well-known “yuck factor.” Abortion, euthanasia and gay marriage are but a
few of the issues repugnant to political conservatives on the hard right. By further extension, individuals involved with a violation of the moral principles
are characterised as foul, sub-social individuals who deserve no rights and to
whom torture may be an appropriate response.xxxiv

Furthermore, political conservatives see no need to explicitly articulate their intuition and prejudices. Haidt notes that conservatives will often change the
subject to avoid explaining the basis of their views. Any debate is often
minimised by identifying their decisions with the will of God and requesting
prayer for God's help. George W. Bush's rhetoric ably exemplified this. Indeed,
Tony Blair and George W. Bush each claimed that God had spoken to them
recommending war on Iraq (a nation that had never threatened America).

In stark contrast to the attitudes of political conservatives, political liberals see no intrinsic importance in moral values such as loyalty, authority and sanctity.
Clarke explains of Haidt's ingroup-loyalty factor: “Conservative morality is the
default morality that occurs in most parts of the world …. so most people
consider patriotism to be a moral virtue, but a liberal will not consider
patriotism to be a moral virtue; they might concede it to be an interesting
character trait, but they don't consider it to be a natural morality.”

Political liberals call for freedom, autonomy and the right of individuals to
express their own preferences. In decision making, political liberals usually
seek principles of equality, natural justice, rational accountability and
transparent, evidence based debate.

Haidt suggests that conservatism is the default political attitude in the world. Conservative reasoning exists across a spectrum of ideological outlooks as diverse as conservative democracy, conservative Islam and conservative Marxism-Leninism. In this respect, American Republican democracy and conservative Islam have more in common than do the American Republicans and Democrats.

Arguably, throughout George W. Bush's presidency, Americans exhibited stronger fundamental political conservative values than at any other time in America's history and far in excess of any other Western democratic nation. Ebullient from the collapse of the Berlin Wall on 9 November 1989 and the breakdown of Soviet Communism in 1991, American conservatives turned their
energies toward environmental scientists, whom they believed to be the next
threat to Anglo-American sovereignty and unilateralism. In reviewing the
failure of the 1992 Rio Earth Summit, German Environment Minister Klaus Töpfer
noted “I am afraid that conservatives in the United States are picking
'ecologism' as their new enemy” (Greenhouse 1992).

In stark contrast to their European counterparts, Anglo-American conservatives continue to equate the right to pollute the atmosphere with
individualism and free markets (Conason 2009). The strength and resilience of
conservative political values in America represent a major weakness in
President Obama's strategy to re-engage with the world on major issues such
as resource scarcity and greenhouse gas pollution of the common atmosphere.

Waleed Aly, an Australian academic and columnist who was in Washington as a guest of the State Department for President Obama's inauguration, observed in a live television interview on the day that President Obama's technique is to provide themes from history and current affairs to illustrate his points and then leave the listener to fill in the tapestry for themselves. As a result, almost everyone hears what he or she wants to hear in his speeches. Many Americans would have heard references to “renewal” in President Barack Obama's Inauguration Speech as targets for a return to “growth” as it was from the 1950s to 2005.

While America allows its presidents considerable influence in international policy, it remains to be seen what the reaction will be to the realisation that
“renewal” is developing a viable lifestyle with greater responsibility in a
resource and growth curtailed world.

Tuckman's model of group development is known as “forming-storming-norming-reforming-performing” (Tuckman 1965). America's new
Administration is fresh from forming the new team and being mercilessly
buffeted by “storming” from vested interests across many issues such as the
economy, markets and regulation, health care and climate change. It is
questionable whether “norming”, which is consensus and mutual cooperation,
can take place in the first term of the Administration. This means that
“reforming” and “performing” may be some time in coming, if ever.

Yale historian Paul Kennedy (1993) sees America muddling through its
challenges but provides a word of caution. He observed that those people who
succeed in democratic political systems usually do so by managing to avoid
antagonising powerful interest groups. In this sense President Barack Obama
has a large challenge because, as we have seen above, the set of issues
confronting him and the schisms in political values are enormous. Perhaps the
American financial crisis and peak-oil realisation will serve in his favour.
However, it remains to be seen if the broad base of Americans will be capable
of accepting a new humility of sustainable living in which unilateral action to
secure resources is an international and punishable war crime.

2.7 Conclusion
The political analysis of the Anglo-American economic worldview in this
Chapter has identified a number of key themes.

Origins of the Anglo-American worldview are found in America's drive to protect individual freedom. Since the sixteenth century this has manifested
itself in the removal of obligations and constraints to independent action.

In the early nineteenth century, Alexis de Tocqueville commented on the other overwhelming preoccupations of Americans, those of making money and
conspicuous consumption. He identified that the lack of pluralism and diversity
in Americans' single-mindedness exposed their society to “unexpected and
formidable embarrassments.” One hundred and fifty years on, his observation
proved prophetic and the consequences are still playing out.

With virgin territories for the taking, America's population grew rapidly and
with it the wealth of the nouveau riche. It became a powerful society. As shown
in the analysis of America's national security, the fiction that its domestic
success had been divinely ordained and was part of a Manifest Destiny became
entrenched in its dealings with the world. Such idiosyncratic beliefs defy
testing and encourage polemic. Therefore, it is unsurprising that Americans
accepted their own predetermined destiny while expressing outrage at another
equally untestable vision of pre-determined economics, that of dialectical
materialism.

Notwithstanding this, in the name of Manifest Destiny Americans justified
resource grabs from Hawaii to Iraq. In the majority of instances, America's
interventions led to a legacy of ashes. However, as shown in the analysis of
resource wars, it might be expected that powerful nations such as America will
continue to use military force to secure resources.

In precursors of the Anglo-American world view, American beliefs in the concept of happiness through work and consumption were traced from
Aristotle's virtues through to classical and neoclassical economics. It was
shown that Adam Smith's “invisible hand of capitalism” explained how the equilibrium of markets is settled by everybody pursuing their own happiness in an Aristotelian paradigm. Aristotle's hypothesis, taken for a fundamental economic law or truth, has remained controversial since the late eighteenth century.
Great political economic philosophers including David Hume, David Ricardo,
John Stuart Mill, Jean-Baptiste Say and Frederic Bastiat refined the human
interface of markets and drew from it the principles of classical economics.
Their work is now recognised in the new discipline of service sciences.

However, these philosophers of political economy were unable to resolve the paradox that truly free markets fail in unexpected ways. It was shown that John
Maynard Keynes and “Keynesian” economists up to the present day remain at
loggerheads with fundamentalist free marketeers. Nevertheless, weaknesses in
a paradigm do not invalidate it and both classical economics and its
mathematical sibling, neoclassical economics, have served America's
preoccupation with consumption as the measure of happiness for over two
hundred years.

The systemic risk observed by Alexis de Tocqueville was to threaten the very
foundations of classical and neoclassical economics. Societies were rescued
from the Great Depression and the Global Financial Crisis of 2008-9 by
Keynesian lifelines. From this, behavioural economics has assumed great importance.

In “Philosophy and psychology diverge from the paradigm” it was shown that
behaviourist philosophers began to invalidate the Aristotelian hypothesis that humans find happiness through work. This started with the great European
philosophers Kant, Nietzsche, Stirner, Weber, Sartre and Camus. This bubbling
stream became a broad river with the engagement of the great deductivist
institutional philosophers, Popper and Kuhn.

The analysis of “Unexpected Failures in the Anglo-American world view” concluded that an unravelling of the classical and neoclassical economic
paradigms came firstly with the recognition of “agency conflict” in the 1980s.
After the gnashing of teeth for two decades, black letter law such as the
Sarbanes-Oxley Act emerged to set standards of accountability and thereby
regulate agency conflict. Unfortunately, the regulation did not achieve its
desired result and became an exercise in ticking boxes. The real problem was
in the heads of the company directors. Adam Smith's “enlightened self
interest” was merely a quaint fiction for directors of companies that were
driven to enrich themselves and did so by taking huge risks with other people's
money, completely disregarding their social mandate and seeking to profit from
every weakness in deregulated markets.

In “Excessive speculation in markets” it was shown that hedge funds extracted massive profits from commodity markets by financial manipulation. Commodity
markets are by definition thin because they involve real producers and
consumers in their daily business of satisfying society's needs for commodities
like food and energy. While markets benefit from a little speculation for
liquidity, they became easy fodder for hedge funds with overwhelming
resources. The global chaos that resulted from the Bush Administration's
policy of diverting grain from food to fuel was never resolved because it was
overtaken by the 2008-9 Global Financial Crisis. However, these failures of free
markets and classical economics provide a salutary lesson that any national or
global carbon commodity market will need appropriate regulation and not rely
on flawed concepts of totally deregulated markets.

In analysing the “Sub-prime crisis” it was concluded that this unexpected failure was due to a cumulative build-up in many critical factors. From 2000,
easy money, massive asset appreciation and gorging on cheap Chinese imports
were seen as rewards for American success. However, the economic success of
American society had long since sown its own seeds of failure. America had become corpulent and happier to be a financial dealer and profiteer than to
produce and innovate. The turning point had been passed sometime in the
1970s.

In discussing “Moral hazard,” it was found that America's unifying value system had failed. A system of ethics is necessary for the fabric and social cohesion of every society. America's loss of its value system was accompanied by the loss of its reputation and its leadership of the world.

“Challenges to the legitimacy of neoclassical economics” investigated whether neoclassical economics and the American policy founded upon it will emerge from the furnace of these challenges as a reborn phoenix with a new behavioural perspective. Indications are that it will continue as before. The fact that neoclassical economics has internal conflicts and occasional spectacular failures does not mean the paradigm is broken. In fact, as Thomas Kuhn showed, such anomalies are to be expected of any working paradigm.

The major investigation into “Evolution of a new Anglo-American world view” began with an appraisal of the “Decline of American exceptionalism”. It
concluded that the future outlook for Americans is that they will need to regain
economic and financial stability and then live within their means like
everybody else on the planet.

A “New international symbiosis” extended this concept beyond America's borders into its dealings with the rest of the world. Game theory was applied to
understand how America needs to become a normal responsible citizen of the
world and raise itself from the “Tragedy of the Commons.” The award of 2009
Nobel Prizes to President Barack Obama and Elinor Ostrom has sent a
message to the world that this is the new way multilateral cooperation will
evolve across issues of trade, nuclear proliferation and climate change.

The new economic model confronting America was investigated in a “New constrained growth model.” The American and German social models were compared and contrasted. It was concluded that there are fundamental differences in the national psyche that lead to a lack of optimism that America
will adopt a European Union governance model. In fact, the risk is that America and other Anglo-American countries will not rise to the opportunities
of international symbiosis but turn inward to “tend their own gardens.”

The fresh Presidency of Barack Obama, his candid mea culpa on behalf of
America and exhortations for America to engage with international symbiosis
are discussed in “Policy Reboot.” It was found that world nations have not been
comfortable with American policies since the days of President Reagan and
this reached its most objectionable apogee during the term of President
George W. Bush. However, world leaders of all persuasions, including Russia,
have expressed the desire to help President Obama rebuild America's standing
in the international community and as a valued member of a new power
sharing of nations. The investigation of “United Nations Security Council
Resolution 1887” on the elimination of nuclear weapons showed how this new
cooperation might operate.

However, the analysis of President Barack Obama's award of the 2009 Nobel
Peace Prize, with special reference to his multilateralism and policy of action
on nuclear non-proliferation, showed that President Obama does not carry the
goodwill of a large number of Americans and that this may defeat his attempts
to reconcile America with other leading nations and power blocs such as the
European Union, Russia, India, China and South America.

An investigation into “Threats to the evolution of a new Anglo-American world view” found that political conservatism in America is exceptionally strong and highly polarised. Very few political conservatives support President Obama's
domestic and international rebuilding. The far right continue to believe in the
traditional American world view of exceptionalism and America's Manifest
Destiny.

Many of America's future problems and opportunities have international dimensions, such as international competitiveness, trade, security, nuclear
proliferation and the environment. The polarisation between the old and new
world views has led to a gulf of credibility on the issue of whether America will
have the bipartisan political fortitude to make the transition to a new future of
international symbiosis, or if it will slide backwards into its traditional world
view of yesteryear.

The next Chapter examines Anglo-American political economy at the cusp of
change from unconstrained growth to climate constrained growth.

2.8 Chapter references



103rd Congress, 1993. Hawaii: the Apology. Available at: http://www.hawaii-nation.org/publawall.html [Accessed April 8, 2009].

37 signals, 2006. The smarter, faster, easier way to build a successful web
application, 37 signals.

Aristotle, 350BC. Ethics, Project Gutenberg EBook. Available at: http://www.gutenberg.org/dirs/etext05/8ethc10.txt [Accessed April 9, 2009].

ASX Corporate Governance Council, 2006. 2005 Analysis of corporate governance practice disclosure, Sydney, Australia: Australian Stock Exchange. Available at: http://www.asx.com.au/supervision/pdf/2005_analysis_cg_practice_disclosure.pdf.

ASX Corporate Governance Council, 2005. ASX Corporate Governance Council Implementation Review Group Principles of Good Corporate Governance and Best Practice Recommendations, Second Report to the ASX Corporate Governance Council, Sydney, Australia: Australian Stock Exchange.

ASX Corporate Governance Council, 2003. ASX Corporate Governance Principles, Sydney, Australia: Australian Stock Exchange.

ASX Corporate Governance Council, 2007. Corporate Governance Principles and Recommendations, Sydney, Australia: Australian Stock Exchange.

ASX Corporate Governance Council, 2006. Principles of Good Corporate Governance and Good Practice Recommendations - Exposure draft of changes (showing marked amendments), Sydney, Australia: Australian Stock Exchange. Available at: http://www.asx.com.au/supervision/pdf/asxcgc_marked_amended_principles_021106.pdf.

Atkinson, A.B. & Leigh, A., 2006. The Distribution of Top Incomes in Australia. SSRN eLibrary. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=891892 [Accessed August 25, 2009].

Aumann, R.J. & Shapley, L., 1974. Values of non-atomic games, Princeton, N.J.:
Princeton University Press.

Axelrod, R.M., 1984. The evolution of cooperation, New York: Basic Books.

Bacevich, A., 2008. The Limits of Power: The End of American Exceptionalism ,
Metropolitan Books.

Bastiat, F., 1848. Selected Essays on Political Economy: Second Letter trans.
Seymour Cain., Irvington-on-Hudson, NY: The Foundation for Economic
Education, Inc. Available at:
http://www.econlib.org/library/Bastiat/basEss6.html [Accessed April 8,
2009].

Bergson, H., 1932. The two sources of religion and morality, Paris: Librairie
Felix Alcan.

Blair, C., 1957. The Passing of a Great Mind. Life Magazine, (February 25).

Blaug, M., 1992. The methodology of economics: Or, how economists explain, Cambridge University Press.

Bono, 2009. Rebranding America. The New York Times. Available at:
http://www.nytimes.com/2009/10/18/opinion/18bono.html [Accessed
November 2, 2009].

Brooks, D., 2009. An Economy of Faith and Trust. The New York Times.
Available at: http://www.nytimes.com/2009/01/16/opinion/16brooks.html
[Accessed January 21, 2009].

Brooks, D., 2008. The Behavioral Revolution. The New York Times. Available
at: http://www.nytimes.com/2008/10/28/opinion/28brooks.html
[Accessed January 21, 2009].

Brown & Williamson, 1969. Smoking and Health Proposal, United States:
Brown & Williamson. Available at:
http://legacy.library.ucsf.edu/tid/rgy93f00 [Accessed September 30,
2009].

Bucks, B.K. et al., 2009. Changes in U.S. Family Finances from 2004 to 2007:
Evidence from the Survey of Consumer Finances. Federal Reserve
Bulletin, 95, A1-A55.

Camus, A., 1942. Le Mythe de Sisyphe (The Myth of Sisyphus), 1955 ed. Available at: http://www.sccs.swarthmore.edu/users/00/pwillen1/lit/msysip.htm [Accessed April 12, 2009].

Camus, A., 1952. Retour à Tipasa (Return to Tipasa). In The Myth of Sisyphus
and Other Essays. Vintage.

Capek, K., Klinkenborg, V. & Capek, J., 2002. The Gardener's Year, Modern
Library.

Clarke, J., 2005. Working with monsters: how to identify and protect yourself
from the workplace psychopath, Random House Australia.

Cleveland, G., 1893. President Grover Cleveland's Message to the Senate and
House of Representatives, Washington DC. Available at:
http://www.hawaii-nation.org/cleveland.html [Accessed April 8, 2009].

Cohen, R., 2009. Germany Unbound. The New York Times. Available at:
http://www.nytimes.com/2009/10/01/opinion/01iht-edcohen.html?
th&emc=th [Accessed October 1, 2009].

Conason, J., 2009. Look -- conservatives who believe in global warming! Salon.com. Available at: http://www.salon.com/opinion/conason/2009/09/25/global_warming_conservatives/ [Accessed September 27, 2009].

Cooper, H. & Stolberg, S.G., 2009. Obama Ponders Outreach to Elements of Taliban. The New York Times. Available at: http://www.nytimes.com/2009/03/08/us/politics/08obama.html?_r=2&th=&emc=th&pagewanted=print [Accessed March 19, 2009].

Cournot, A.A., 1838. Recherches sur les principes mathématiques de la théorie des richesses, Paris: M. Rivière & Cie.

Critchley, S., 2008. The book of dead philosophers, Melbourne: Melbourne University Press.

Curd, M. & Cover, J., 1998. Philosophy of Science, Section 3, The Duhem-Quine
Thesis and Under-determination, W.W. Norton & Company. The
Philosophical Review, 60.

Daily, C., Dalton, M. & Cannella, A.A., 2003. Corporate governance: decades of
dialogue and data. Academy of Management Review, 28(3), 371-382.

Dharan, B.G., 2002. Enron's accounting Issues - what can we learn to prevent
future Enrons, Prepared Testimony Presented to the US House Energy
and Commerce Committee's Hearings on Enron Accounting , Available
at:
http://www.ruf.rice.edu/~bala/files/dharan_testimony_enron_accounting.
pdf.

Dresher, M., 1981. The mathematics of games of strategy, Courier Dover Publications.

Duffner, S., 2003. Principal-agent problems in venture capital finance, Basel:
University of Basel.

Duhem, P., 1906. La théorie physique: son objet et sa structure (The aim and
structure of physical theory). Foreword by Prince Louis de Broglie.
Translated from the French by P.P. Wiener, 1954, Princeton, New Jersey:
Princeton University Press.

Ehrenhalt, A., 2006. How the Yes Man learned to say No. The New York Times. Available at: http://www.nytimes.com/2006/11/26/opinion/26ehrenhalt.html.

Eilenberger, W., 2009. Visions of Europe in 2030: Candide's Garden. SPIEGEL ONLINE - News - International. Available at: http://www.spiegel.de/international/europe/0,1518,637522,00.html [Accessed July 26, 2009].

Feyerabend, P., 1975. Against Method, London, UK.: New Left Books.

Franklin, B. & Franklin, W.T., 1818. Memoirs of the Life and Writings of Benjamin Franklin.

Fuller, S., 2003. Kuhn vs. Popper: the struggle for the soul of science,
Cambridge and Australia: Allen & Unwin.

Gibbs, W., 2009. From 205 Names, Panel Chose the Most Visible. The New
York Times. Available at:
http://www.nytimes.com/2009/10/10/world/10oslo.html?
_r=1&th&emc=th [Accessed October 11, 2009].

Greenhouse, S., 1992. A CLOSER LOOK; Ecology, the Economy and Bush. The
New York Times. Available at:
http://www.nytimes.com/1992/06/14/weekinreview/a-closer-look-ecology-
the-economy-and-bush.html?scp=1&sq=Klaus+Topfer+%22Ecology
%2C+the+Economy%2C+and+Bush%22&st=nyt [Accessed September
30, 2009].

Haidt, J. & Graham, J., 2007. When morality opposes justice: Conservatives
have moral intuitions that liberals may not recognize. Social Justice
Research, 20(1), 98-116.

Hardin, G., 1968. The tragedy of the commons. Science, 162(3859), 1243-1248.

Harrison, R.P., 2008. Gardens, University of Chicago Press.

Henriques, D.B., 2008. A bull market sees the worst in speculators. The New
York Times. Available at:
http://www.nytimes.com/2008/06/13/business/13speculate.html?
_r=1&pagewanted=2&th&emc=th&oref=slogin [Accessed June 15,
2008].

Herbst, M., 2009. Reversing the Economic Plunge: Will Germany Beat the US
to Recovery? SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/business/0,1518,643166,00.html#re
f=rss [Accessed August 20, 2009].

Hujer, M., Reuter, W. & Schwennicke, C., 2009. 'I Take Responsibility': Obama's
G-20 Confession. SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/world/0,1518,617639,00.html#ref=r
ss [Accessed April 7, 2009].

Hume, D., 1752. An inquiry concerning human understanding. Available at: http://www.gutenberg.org/etext/9662 [Accessed April 8, 2009].

Jensen, D., 2000. A language older than words, New York: Context Books.

Kaplan, S.N. & Stromberg, P., 2004. Characteristics, contracts, and actions:
Evidence from venture capitalist analyses. The Journal of Finance,
LIX(5), 2177-2210.

Kass, L.R., 1997. The Wisdom of Repugnance: Why we should ban the cloning
of humans. The New Republic, 216(22), 17-26.

Kennedy, P.M., 1993. Preparing for the twenty-first century, London, UK.:
Harper Collins.

Keynes, J.M., 1936. The General Theory of Employment, Interest and Money,
Macmillan Cambridge University Press, for Royal Economic Society.
Available at:
http://www.marxists.org/reference/subject/economics/keynes/general-
theory/ [Accessed April 9, 2009].

Kohlberg, L., 1969. Stage and sequence: The cognitive-developmental approach to socialization. Handbook of socialization theory and research, 347-480.

Kotler, P., 1994. Marketing management: analysis, planning, implementation, and control, 8th ed., Englewood Cliffs, NJ: Prentice Hall International.

Krämer, H., 2008. Developments and Determinants of the Labor Share of Income in Selected Advanced Economies, Karlsruhe, Germany: Karlsruhe University of Applied Sciences. Available at: http://www.boeckler.de/pdf/v_2008_10_31_kraemer.pdf.

Krugman, P., 2009a. All the President’s Zombies. The New York Times.
Available at:
http://www.nytimes.com/2009/08/24/opinion/24krugman.html?
_r=2&th&emc=th [Accessed August 24, 2009].

Krugman, P., 2009b. America the Tarnished. The New York Times. Available at:
http://www.nytimes.com/2009/03/30/opinion/30krugman.html?em
[Accessed March 31, 2009].

Krugman, P., 2009c. How Did Economists Get It So Wrong? The New York
Times. Available at:
http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?
_r=2&th&emc=th [Accessed September 7, 2009].

Kuhn, T., 1962. The Structure of Scientific Revolutions 2nd ed., Chicago:
University of Chicago Press.

Lakatos, I., 1970. Falsification and the methodology of scientific research programmes. In Criticism and the Growth of Knowledge, I. Lakatos & A. Musgrave, eds. Cambridge: Cambridge University Press, p. 135.

Leblanc, R.W., 2004. What's wrong with corporate governance. In Corporate Governance. Henley Management College.

Levy, C.J. & Baker, P., 2009. Russia’s Reaction on Missile Plan Leaves Iran Issue
Hanging. The New York Times. Available at:
http://www.nytimes.com/2009/09/19/world/europe/19shield.html?
_r=1&scp=1&sq=missile%20shiled&st=cse [Accessed September 20,
2009].

Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental Crisis, 2005 ed., Black Inc, Melbourne.

Machiavelli, N., 1513. The Prince 4th ed., Penguin Classics.

Malthus, T.R., 1798. An essay on the principle of population, as it affects the future improvement of society.

Malthus, T.R., 1820. Principles of political economy considered with a view to their practical application, 1836 ed.

Malzahn, C.C., 2009. Germany's Miracle Man. Spiegel Online - News -
International. Available at:
http://www.spiegel.de/international/world/0,1518,619431-2,00.html
[Accessed April 16, 2009].

Marx, K., 1867. Capital, 1887 ed., Moscow, USSR: Progress Publishers. Available at: http://www.marxists.org/archive/marx/works/1867-c1/index.htm [Accessed April 9, 2009].

Mill, J.S., 1848. Principles of Political Economy 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].

Nash, J.F.J., 1950. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36, 48-49.

von Neumann, J. & Morgenstern, O., 1953. Theory of games and economic
behavior 3rd ed., Princeton, New Jersey: Princeton University Press.

Nietzsche, F., 1882. The Gay Science (Die fröhliche Wissenschaft): With a Prelude in Rhymes and an Appendix of Songs, 1887 ed. Available at: http://www.textlog.de/nietzsche-wissen.html.

Nietzsche, F., 1887. Thus Spoke Zarathustra (Also sprach Zarathustra): A book for all and none, 1995 ed., New York: Modern Library.

Obama, B., 2009a. President Obama’s Address to Congress. The New York
Times. Available at:
http://www.nytimes.com/2009/02/24/us/politics/24obama-text.html?
_r=2&ref=opinion&pagewanted=all [Accessed March 3, 2009].

Obama, B., 2009b. President's Message: Fiscal Year 2010 Budget Overview
Document: A New Era of Responsibility: Renewing America's Promise,
Washington DC: United States Office of Management and Budget.
Available at: http://www.gpoaccess.gov/usbudget/ [Accessed March 3,
2009].

Obama, B., 2009c. Remarks by the President on a New Beginning, Cairo
University, Egypt: The White House Press Office. Available at:
http://www.whitehouse.gov/the_press_office/Remarks-by-the-President-
at-Cairo-University-6-04-09/ [Accessed June 5, 2009].

Obama, B., 2009d. Remarks by the President to the United Nations General
Assembly, Washington DC: The White House Press Office. Available at:
http://www.whitehouse.gov/the_press_office/remarks-by-the-president-
to-the-united-nations-general-assembly/ [Accessed November 2, 2009].

Obama, B., 2009e. Remarks on the Economy at Georgetown University. The New York Times. Available at: http://www.nytimes.com/2009/04/14/us/politics/14obama-text.html?_r=1&ref=weekinreview [Accessed April 19, 2009].

O'Hear, A., 1989. An introduction to the philosophy of science, Oxford: Oxford University Press.

Ostrom, E. & Schlager, E., 1996. The formation of property rights. Rights to
nature: Ecological, economic, cultural, and political principles of
institutions for the environment, 127–156.

Ostrom, E., 1990. Governing the commons: the evolution of institutions for
collective action, Cambridge; New York: Cambridge University Press.

Owen, J.N., 2003a. Corporate governance – Level upon layer. Melbourne: Commonwealth of Australia.

Owen, J.N., 2003b. The failure of HIH Insurance, Volume 1: A corporate collapse and its lessons, Australia: HIH Royal Commission.

Perkins, J., 2004. Confessions of an economic hit man, Berrett-Koehler Publishers.

Petre, M., 2003. Disciplines of innovation in engineering design. In Expertise in Design: Design Thinking Research Symposium 6, hosted by Creativity and Cognition Studios, Sydney, Australia: University of Technology, Sydney. Available at: http://research.it.uts.edu.au/creative/design/papers/16PertreDTRS6.pdf.

Popper, K., 1972. Objective knowledge, Oxford: Oxford University Press.

Popper, K., 1959. The logic of scientific discovery, London: Hutchinson.

Popper, K., 1945. The open society and Its enemies, London: Routledge.

Proust, M., 1913. À la recherche du temps perdu (Remembrance of Things Past).

Quesnay, F., 1758. Le Tableau économique, 1759 “third” edition, as reprinted in M. Kuczynski and R.L. Meek, eds., 1972, Quesnay's Tableau Économique, New York: A.M. Kelly.

Randerson, J., 2009. UK's ex-science chief predicts century of 'resource' wars.
The Guardian. Available at:
http://www.guardian.co.uk/environment/2009/feb/13/resource-wars-
david-king/print [Accessed February 16, 2009].

Rapoport, A. & Chammah, A.M., 1965. Prisoner's dilemma, University of Michigan Press.

Rawls, J., 1972. A theory of justice, Oxford: Clarendon Press.

Sachs, J., 2008. Common Wealth: Economics for a Crowded Planet, The Penguin Press.

Sarkozy, N., 2009. Failure Is not an Option -- History Would not Forgive Us.
SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/europe/0,1518,616713,00.html#ref
=rss [Accessed April 2, 2009].

Sartre, J., 1943. Being and Nothingness: An Essay on Phenomenological
Ontology Trans. Barnes H., New York: Philosophical Library.

Sartre, J., 1946. L'Existentialisme est un humanisme (Existentialism and Humanism), 1977 ed., Paris: Nagel.

Sartre, J., 1938. Nausea (La Nausée, originally called Melancholia), New
Directions Publishing Corporation, June 1969.

Sartre, J., 1992. The Look. In Being and Nothingness. New York: Washington
Square Press.

Saunders, A., 1974. A conversation with Isaiah Berlin - interview with John
Merson. Philosophers Zone. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2593244.htm
[Accessed June 15, 2009].

Saunders, A., 2009a. A tribute to Isaiah Berlin - Interview with John Gray,
Professor of European Thought at the London School of Economics.
Philosophers Zone. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2586694.htm#t
ranscript [Accessed June 15, 2009].

Saunders, A., 2009b. Governance and the Yuck Factor. Philosophers Zone.
Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2631260.htm#t
ranscript [Accessed August 13, 2009].

Say, J.B., 1803. A Treatise on Political Economy: or the production, distribution, and consumption of wealth, 6th ed., Philadelphia: Lippincott, Grambo & Co. Available at: http://www.econlib.org/library/Say/sayT.html [Accessed April 9, 2009].

Scherer, M., 2009. Barack Obama's New World Order. Time. Available at:
http://www.time.com/time/world/article/0,8599,1889512,00.html
[Accessed April 7, 2009].

Schlager, E. & Ostrom, E., 1992. Property-rights regimes and natural resources: a conceptual analysis. Land Economics, 249-262.

Schmitt, E. & Mazzetti, M., 2008. Secret Order Lets U.S. Raid Al Qaeda. The
New York Times. Available at:
http://www.nytimes.com/2008/11/10/washington/10military.html?
_r=2&th&emc=th&oref=slogin&oref=slogin [Accessed November 10,
2008].

Schwartz, B., Markus, H.R. & Snibbe, A.C., 2006. Is Freedom Just Another
Word for Many Things to Buy? The New York Times. Available at:
http://www.nytimes.com/2006/02/26/magazine/26wwln_essay.html
[Accessed April 15, 2009].

Self, P., 2000. Rolling back the market: economic dogma and political choice, Palgrave Macmillan.

Slattery, L., 2008. Abstract to application: after years of being ignored by the
money markets, academic economists are delighted as hard times and
complex problems mean demand for their skills. The Australian, 28-29.

Smith, A., 1776. An Inquiry into the Nature and Causes of the Wealth of
Nations Also known as: Wealth of Nations., Project Gutenberg.

Sonnenfeld, J.A., 2004. Good Governance and the Misleading Myths of Bad
Metrics. Academy of Management Executive, 18, 108-113.

Sonnenfeld, J.A., 2002. What makes great boards great. Harvard Business
Review, Article R0209H.

Spiegel Online, 2009a. EU Report: Independent Experts Blame Georgia for South Ossetia War. Spiegel Online - News - International. Available at: http://www.spiegel.de/international/world/0,1518,650228,00.html#ref=rss [Accessed September 22, 2009].

Spiegel Online, 2009b. 'Some Executives Didn't Hear Bang': Germany's Steinbrück Warns of Return of 'Casino Capitalism'. SPIEGEL ONLINE - News - International. Available at: http://www.spiegel.de/international/business/0,1518,640813,00.html [Accessed August 21, 2009].

Steingart, G., 2009a. Obama's Mistakes: Chancellor Merkel Visits the Debt
President. SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/world/0,1518,632494,00.html#ref=r
ss [Accessed June 26, 2009].

Steingart, G., 2009b. What Obama Could Learn from Germany. Spiegel Online International. Available at: http://www.spiegel.de/international/world/0,1518,605695,00.html [Accessed March 16, 2009].

Stevens, G., 2007. Statement to Parliamentary Committee: Opening Remarks by Mr Glenn Stevens, Governor, Perth: Australian House of Representatives Standing Committee on Economics, Finance and Public Administration. Available at: www.rba.gov.au/PublicationsAndResearch/Bulletin/bu_mar07/bu_0307_1.pdf [Accessed April 8, 2009].

Stiglitz, J., 2009. Interview with Economist Joseph Stiglitz: Government Stimulus Plans are 'Not Enough'. Spiegel Online - News - International. Available at: http://www.spiegel.de/international/world/0,1518,616743,00.html [Accessed April 2, 2009].

Stiglitz, J., Sen, A. & Fitoussi, J., 2009. Report by the Commission on the Measurement of Economic Performance and Social Progress, Paris: French Commission on the Measurement of Economic Performance and Social Progress. Available at: http://www.stiglitz-sen-fitoussi.fr/en/index.htm [Accessed September 22, 2009].

Strathern, P., 2002. A brief history of economic genius, Texere.

The New York Times, 2009. Editorial: The Peace Prize. The New York Times.
Available at: http://www.nytimes.com/2009/10/10/opinion/10sat1.html?
th&emc=th [Accessed October 11, 2009].

The Norwegian Nobel Committee, 2009. The Nobel Peace Prize 2009 - Press
Release. Nobelprize.org. Available at:
http://nobelprize.org/nobel_prizes/peace/laureates/2009/press.html
[Accessed October 10, 2009].

The Royal Swedish Academy of Sciences, 2009. The Prize in Economics 2009 -
Press Release. Nobelprize.org. Available at:
http://nobelprize.org/nobel_prizes/economics/laureates/2009/press.html
[Accessed October 12, 2009].

de Tocqueville, A., 1835. De la démocratie en Amérique (Democracy in America), 1st ed., Union générale d'éditions. Available at: http://xroads.virginia.edu/~HYPER/DETOC/toc_indx.html.

Tucker, A.W., 1980. On Jargon: The Prisoner’s Dilemma. UMAP Journal, 1, 101-
103.

Tuckman, B.W., 1965. Developmental sequence in small groups. Psychological Bulletin, 63(6), 384-399.

Turner, G.M., 2008. A comparison of The Limits to Growth with 30 years of reality. Global Environmental Change, 18(3), 397-411.

Tzu, S., 2006. The art of war, Penguin Group USA.

United Nations Security Council, 2009. 6191st Meeting (AM): Historic Summit
of Security Council Pledges Support for Progress on Stalled Efforts to
End Nuclear Weapons Proliferation: Resolution 1887 (2009) Adopted
with 14 Heads of State, Government Present, New York: United Nations.
Available at: http://www.un.org/News/Press/docs/2009/sc9746.doc.htm
[Accessed October 11, 2009].

Vargo, S.L. & Lusch, R.F., 2004. Evolving to a new dominant logic for
marketing. Journal of Marketing, 68(1), 1-17.

Voltaire, F.M.A., 1759. Candide ou l'optimisme; Traduit de L'Allemand De Mr. Le Docteur Ralph, The Online Library of Liberty. Available at: http://oll.libertyfund.org/index.php?option=com_staticxt&staticfile=show.php%3Ftitle=973&Itemid=27.

Von Neumann, J., 1928. Zur theorie der gesellschaftsspiele (Theory of Parlor
Games). Mathematische Annalen, 100(1), 295-320.

Weber, M.C.E., 1904. Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism). Available at: http://www.ne.jp/asahi/moriyuki/abukuma/weber/world/ethic/pro_eth_frame.html [Accessed April 12, 2009].

Weiner, E., 2008. The geography of bliss. One grump's search for the happiest
places in the world, New York.

Whitman, W., 1888. Democratic Vistas: And Other Papers, W. Scott; Toronto:
WJ Gage.

Whyte, W.H.J., 1956. The Organization Man, University of Pennsylvania Press. Available at: http://www.writing.upenn.edu/~afilreis/50s/whyte-main.html.

Witney, N. & Shapiro, J., 2009. Towards a post-American Europe: A Power
Audit of EU-US Relations, The European Council on Foreign Relations.
Available at: http://ecfr.eu/content/entry/towards_a_post-
american_europe_a_power_audit_of_eu-us_relations_shapiro_whi/
[Accessed November 2, 2009].

Xiaochuan, Z., 2009. Reform the International Monetary System, Beijing, China: People's Bank of China. Available at: http://news.xinhuanet.com/english/2009-03/26/content_11074507.htm [Accessed April 11, 2009].

Yang, Y. et al., 2005a. Prefrontal structural abnormalities in liars. British Journal of Psychiatry, 187(October), 320-325.

Yang, Y. et al., 2005b. Volume reduction in prefrontal gray matter in unsuccessful criminal psychopaths. Biological Psychiatry, 57(10), 1103-1108.

i Napoleon put into practice the principle of “order over chaos” on 5 October 1795 when he turned his cannons on a putsch of thirty thousand royalists marching to the Tuileries Palace to overturn the Convention of the revolutionary government. Napoleon's grape-shot killed more than 200 royalists, and its marks are still visible in the walls of the Church of St. Roch at 286 Rue St.-Honoré. In gratitude, the government promoted the twenty-six year old hero of the Revolution from brigadier general to commander-in-chief of the Army of the Interior.
Similarly, in 1968 violent student protests against the Vietnam War took place at the Paris Sorbonne. These protests erupted into France's second revolution as ten million French people went on strike over industrial conditions. In June, President de Gaulle called an election to establish a mandate for reform, and it was granted. However, instead of reforming, de Gaulle violently quashed the protests, and in response the people dismissed him in April 1969.
America experienced a similar incident. Following an April 1970 riot by reportedly six thousand students at Harvard Square in Massachusetts at the time of the Vietnam War, the American government became extremely nervous about anti-war protests. One month later, in May, a terrible incident occurred at Kent State University: National Guardsmen fired on demonstrators for 13 seconds, killing four students and badly wounding nine others.
ii Sir David King, director of the Smith School of Enterprise and the Environment at Oxford University, who was the British Government's Chief Scientific Adviser at the start of the Iraq war in March 2003.
iii In 2005, a BBC Radio 4 poll found Karl Marx to be the world's greatest
philosopher by a wide margin (Critchley 2008, p.212)
iv Examples following World War II include the Marshall Plan to rebuild Western
Europe and the rebuilding of Japan in the 1950s and 1960s

v Mathematica Country Database (accessed 10 April 2009). Gini Indexes for some
other countries are: Australia 0.305, Canada 0.321, China 0.47, Denmark 0.24,
France 0.28, Russia 0.413, United Kingdom 0.34
vi In 1928, at the age of 25, John von Neumann developed the minimax (and, equivalently, the converse maximin) as part of formulating game theories that would minimise one's maximum possible loss. Von Neumann said of his strategy that defeat is inevitable if you aim to win rather than to avoid losing.
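In modern notation (a compact textbook restatement, not von Neumann's original formulation), the minimax theorem states that for any finite two-player zero-sum game with payoff matrix A there exist mixed strategies x and y such that

\[ \max_{x} \min_{y} \; x^{\top} A y \;=\; \min_{y} \max_{x} \; x^{\top} A y \;=\; v, \]

where v is the value of the game: the most one player can guarantee to win is exactly the least the other can guarantee to lose.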
vii Nietzsche uses this statement in The Gay Science (1882): section 108 (New
Struggles), section 125 (The Madman) and section 343 (The Meaning of our
Cheerfulness). Max Stirner was born Johann Kaspar Schmidt
viii In 1960 Albert Camus tragically died in a car accident at the age of 47. He had earlier written that the point of life was to live and that he could conceive of no more meaningless death than in a car accident (Critchley 2008, p.262).
ix Isaiah Berlin (1909-1997) was an Oxford liberal humanist scholar of Russian-
Jewish descent
x For example, in 2005 the American Society of Civil Engineers calculated that
US$9.4 billion per year for 20 years is needed to refurbish America's collapsing
bridges
xi On ne reçoit pas la sagesse, il faut la découvrir soi-même après un trajet que personne ne peut faire pour nous, ne peut nous épargner (we do not receive wisdom; we must discover it for ourselves after a journey that no one can take for us or spare us).
xii Winston Churchill's humorous version applied to US foreign policy is that “the US always does the right thing, when all alternatives are exhausted”.
xiii Translated by Karl Popper.
xiv Cogito ergo sum (I am thinking, therefore I exist), which is the inverse of Existentialism's existence before essence.
xv For example, in Australia, Section 198A of the Corporations Act (2001) gives directors all powers in a company, except those reserved for a general meeting. As the principle of ultra vires (acting beyond mandate) has been removed, nothing is beyond the powers of the company, so a company may in fact do anything. The directors also have common law duties to act honestly, in good faith, and with due care and diligence. However, these common law duties are met by complying with the statutory Duty of Care and Due Diligence set out in Section 180(1) of the Corporations Act, which is commonly known as the “business judgement rule.” It requires that directors act in good faith and for proper purpose; have no personal interest in the outcome; take steps to inform themselves on all issues; and rationally believe their decision is in the best interests of the company.
xvi The United Kingdom case Percival vs. Wright (1902) established that directors do not have a general duty to individual shareholders. In Australia, the High Court decision of Spies vs. R (2000) decided that directors do not have a duty to creditors, except where a company is insolvent or nearly so. This also resolves the position in respect of employees and other stakeholders, to whom directors owe no duty except under specific laws that may apply. A similar principle was embodied in the UK Hampel Committee Report on Corporate Governance (1998), which is arguably the best encapsulation of the concept: the Committee recommended that directors be accountable to shareholders for preserving and enhancing the shareholders' investment and be responsible for relations with stakeholders as part of this accountability.
xvii The Sarbanes-Oxley Act of 2002 (Pub. L. No. 107-204, 116 Stat. 745, also known as the Public Company Accounting Reform and Investor Protection Act of 2002) is a United States federal law that addresses director responsibilities and criminal penalties.
xviii Bacevich (2008, p.181) says of Americans' future: They will guzzle imported oil, binge on imported goods, and indulge in imperial dreams.
xix As the traditional providers of mortgage finance became increasingly nervous through 2007, Bear Stearns continued to sell mortgages, providing the finance itself by rolling 24-hour borrowings with Federated, Fidelity Investments and European lenders. On 6 March 2008 the first European lender, Rabobank, said it would not renew its credit lines to Bear Stearns. On 11 March, ING followed. Finally, on 13 March, when Bear Stearns was seeking to roll over US$75 billion, Federated and Fidelity Investments said they would no longer accept the sub-prime mortgages as collateral security. The Federal Reserve requested J. P. Morgan to review Bear Stearns' accounts and on 14 March, J. P. Morgan used US$30 billion of Federal Reserve funds to provide Bear Stearns with unlimited credit to avoid meltdown of the financial system.
xx The “Minsky Moment” is named in honour of American economist Hyman Minsky. The term was inspired by the 1998 Russian sovereign debt default. The Minsky Moment is the point when investors doubt that cashflow can sustain debt obligations, which leads to a panic sell-off as investor greed abruptly turns to fear.
xxi American recession years: 1807, 1837, 1857, 1873, 1893, 1907, 1929, 1973, 1987, 2001 and 2009.
xxii See Schwartz et al. 2006.
xxiii Sun Tzu (544-496 BCE) wrote The Art of War, an immensely influential ancient Chinese book on military strategy. Sun Tzu argued strongly for military intelligence, claiming a general must have full knowledge of his own and the enemy's strengths and weaknesses. His book was known for thousands of years but a full copy was discovered only in 1972 on a set of bamboo engraved texts in a grave near Linyi in Shandong.
xxiv The Prisoner's Dilemma is a two person, non-zero sum game. Consider the oft-seen television police drama where two suspects are put in separate rooms for questioning. Each suspect knows full well that if they both remain silent then the police will have a hard task proving them guilty. In this case, the unsatisfactory police evidence means that each prisoner will receive a nominal sentence of only one year. However, the police keep fomenting the prisoners' anxiety to turn “Queen's evidence,” which will result in a reduced or commuted sentence in return for incriminating the accomplice. If only one confesses then that prisoner will escape sentence and the other will receive a sentence of ten years. If both prisoners confess, each will be sentenced to five years in prison. The old maxim of “no honour amongst thieves” mostly holds true because their agreements are neither binding nor enforceable, so each prisoner cannot trust the other to remain silent. Each prisoner therefore looks at the situation from a self-interested point of view and seeks to maximise his own benefit without regard for what the other may do. The dominant outcome is therefore for each to “rat” on the other in order to be released. However, because both do the same thing, the separate strategies result in a sentence of five years for each. They have foregone the Pareto Optimum outcome of trusting each other, which would have resulted in sentences of only one year each.
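The payoff structure in this note can be tabulated and checked mechanically. The following is a minimal sketch in Python (the sentences are those used above, expressed as years in prison; the dictionary layout and function names are illustrative only):

# Prisoner's Dilemma payoffs from the note above: years in prison for
# (prisoner A, prisoner B), so lower numbers are better.
PAYOFFS = {
    ("silent", "silent"): (1, 1),    # weak evidence: one year each
    ("silent", "confess"): (10, 0),  # A silent, B turns Queen's evidence
    ("confess", "silent"): (0, 10),  # A turns Queen's evidence, B silent
    ("confess", "confess"): (5, 5),  # both confess: five years each
}
STRATEGIES = ("silent", "confess")

def best_reply(opponent_move, player):
    # The move that minimises a player's own sentence, holding the
    # opponent's move fixed.
    if player == 0:
        return min(STRATEGIES, key=lambda s: PAYOFFS[(s, opponent_move)][0])
    return min(STRATEGIES, key=lambda s: PAYOFFS[(opponent_move, s)][1])

# A profile is a Nash equilibrium when each move is a best reply to the other.
for a in STRATEGIES:
    for b in STRATEGIES:
        if best_reply(b, 0) == a and best_reply(a, 1) == b:
            print("Nash equilibrium:", (a, b), "->", PAYOFFS[(a, b)])

The only profile printed is ("confess", "confess") with sentences of (5, 5), even though ("silent", "silent") with (1, 1) is the Pareto Optimum: precisely the divergence the note describes.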
xxv Dwight D. Eisenhower served as the 34th President from January 1953 to January 1961, when he was succeeded by John F. Kennedy.
xxvi Elinor Ostrom shared the 2009 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (often referred to as the Nobel Prize in Economics) with Oliver E. Williamson. Ostrom was cited "for her analysis of economic governance, especially the commons" and Williamson "for his analysis of economic governance, especially the boundaries of the firm".
xxvii The European Council on Foreign Relations is a pan-European think-tank established by George Soros.
xxviii While successful at the time, the Shiite Government's secret police, aided by the American military, arrested various members of the Sunni Awakening Councils in late March 2009, notwithstanding that these people were helping the American military.
xxix Economics was first formally taught as a discipline after the Great Depression.
xxx Instead of supply creating its own demand, supply is now seen to be a function of demand.

xxxi Referring to the controversial 1971 award of the Nobel Peace Prize to West Germany's Chancellor Willy Brandt for his “Ostpolitik” policy of reconciliation with Communist Eastern Europe.
xxxii Twenty-one prominent Americans have received the Nobel Peace Prize, including President Theodore Roosevelt (1906) for his role in international dispute arbitration, which led to peace between Russia and Japan; President Woodrow Wilson (1919) for his role in ending World War I, the Treaty of Versailles and facilitating the League of Nations; Martin Luther King (1964) for his commitment to non-violent protest for African American civil rights; former US Secretary of State Henry Kissinger (1973) for his role in negotiating a cease-fire that ended the Vietnam War; Jimmy Carter (2002) for tireless efforts to spread peace, democracy, human rights and development; and former Vice President Al Gore and the IPCC (2007) for climate change leadership.
xxxiii In politics, this is colloquially known as “playing the race card”, where race includes racial origin or colour of the skin, religious belief, sexual persuasion, physical or mental disability etc. Political parties usually tacitly agree not to “play the race card” in campaigns because it inflames the worst of human bigotry and often escalates to riots and murders.
xxxiv Authoritarian, oppressive regimes and conservative democracies alike can exhibit a kind of national psychopathy, for example, spying on their own citizens, imprisonment without charge, suspension of habeas corpus, torture and public lies. This can be extended to international relationships, for example, America's so-called Coalition of the Willing comprising thirty members including the United Kingdom, Australia and Denmark. Australia forfeited its proud innocence and its claim that it had never used military force except in self-defence.
On 4 September 2009, the Chinese Ambassador Zhang Junsai responded to Australia's criticism of China's military build-up with a diplomatic caution, reminding Australia that China had never occupied one inch of foreign territory. However, this assertion needs to be qualified by China's interpretation of occupation. China considers itself to be peaceful, humble and providing omnipresent rationality throughout its widespread provinces. Many in the West have the opposite view. China is perceived to be aggressively expansionist because of its 1950 so-called “peaceful liberation” of its province of Tibet, its 1962 war with India over China's strategic occupation of the uninhabited region of Aksai Chin, ever-present threats to reintegrate Taiwan by force, continued repression of nationalist Uighurs in its province of Xinjiang (East Turkestan or Uyghuristan) and human rights abuses.
Chapter 3 Political Economy of the Anglo-American world view of climate
change shows how China and India formed an uneasy alliance to successfully
resist America's “might is right” approach in climate change negotiations.
Organisational psychopaths, whether individuals in the office or national
leaders, are difficult to identify because they adopt an overtly "conservative"
disguise (Clarke 2005). They thrive on the excitement of the chase, seeing "who
blinks first" and they will throw any amount of other people's money at a
campaign or litigation to create hysteria, chaos and confrontation.
Often they will lie without compunction - truth, nonsense, disinformation and barefaced lies are all the same to them because the end justifies the means. One example is the Australian Government's infamous “children overboard” lie about refugee boat people. Usually, organisational psychopaths are not concerned in the least about being found out for their lies. In fact, their main distinguishing characteristic is an utter absence of remorse. They use accusations, lying, bluffing, bullying and character assassination to advance their aims.
They become expert at casting "Fear, Uncertainty and Doubt" (FUD) by offhandedly making allegations that are without merit, to divert attention from real issues and to put those seeking to root them out onto the defensive. A FUD smear tactic is often used where the initial publicity surrounding claims vastly overshadows any subsequent retraction, or where the assertions cannot be checked with the third parties to whom they are attributed.
Makers of Kool, Viceroy, Raleigh and Belair cigarettes, Brown & Williamson
(1969) elucidate the “FUD” attack: “We have chosen the mass public as our
consumer for several reasons: - This is where the misinformation about smoking
and health has been focused. - The Congress and federal agencies are already
being dealt with - and perhaps as effectively as possible - by the Tobacco
Institute. - It is a group with little exposure to the positive side of smoking and
health. - It is the prime force in influencing Congress and federal agencies -
without public support little effort would be given to a crusade against
cigarettes. Doubt is our product since it is the best means of competing with the
"body of fact" that exists in the mind of the general public. It is also the means
of establishing a controversy. Within the business we recognize that a
controversy exists. However, with the general public the consensus is that
cigarettes are in some way harmful to the health. If we are successful in
establishing a controversy at the public level, then there is an opportunity to put
across the real facts about smoking and health. Doubt is also the limit of our
"product". Unfortunately, we cannot take a position directly opposing the anti-
cigarette forces and say that cigarettes are a contributor to good health. No
information that we have supports such a claim.”
Organisational psychopaths also use any technique they can to create pressure, such as incessant delay, preventing routine things being finished and determinedly side-tracking discussions. Brown & Williamson (1969) also exemplify the often-used diversionary tactic of setting up “straw men”, or alternative subjects that are easily controlled: “Truth is our message because of its power to withstand a conflict and sustain a controversy. If in our pro-cigarette efforts we stick to well documented fact, we can dominate a controversy and operate with the confidence of justifiable self-interest …. we would want to be absolutely certain that there is no damage to our advertising or to the consumer acceptance of our brands. So the first step for the immediate future would be research. We are recommending basic research to unearth specific problems in smoking and health that we can deal directly with.”
It is estimated that 1% to 3% of adult males and 0.5% to 1% of women exhibit some degree of psychopathy. These people range from murderers, serial rapists and con artists to predators at work and in social situations. Unfortunately, they are attracted to positions of power in politics and public institutions, where they can rise through ruthlessness rather than leadership. However, all have the same profile of self-gratification, and excessive sexual promiscuity can be a strongly identifying trait. Many of the finest sportsmen and women are found to be at least mildly psychopathic.
There are common features for organisational psychopaths in political, social and corporate environments. An organisational psychopath is difficult to identify at first, but indications become increasingly clear: for example, inconsistent lies, amorality, defamation, enjoyment of ruthlessness and a total lack of remorse. They can be very difficult to ferret out because they employ multi-agent predator strategies to amplify their tactics (Axelrod 1984).
Psychopathy is not to be confused with merely subjective behavioural choices. Recent scientific evidence supports the view that pathological liars cannot control their habitual impulse to lie, cheat and manipulate others. Yang et al. (2005a; 2005b) of the University of Southern California showed that pathological liars had prefrontal cortex abnormalities. They found a 22% increase in white matter and a 14% decrease in grey matter compared to normal controls. Autistic children were found to have the opposite characteristics.
When people are asked to make moral decisions, they rely on the prefrontal cortex of the brain, which has long been associated with the ability in most people to feel remorse or learn moral behaviour. In normal people, it is the grey matter (the brain cells connected by the white matter) that helps to keep the impulse to lie in check. The results of this study are consistent with previous studies on autistic children, who find it extremely difficult to lie and have an opposite but complementary combination of white and grey matter. The University of Southern California researchers suggested that lying takes a lot of effort and that the 22% more white matter in the brains of pathological liars provides them with enhanced verbal skills to master the complex art of deceit. In addition, the 14% less grey matter means they do not have the same moral inhibition against misrepresentation that normal people have.
Yang et al. commented that there is quite a lot to do in suppressing the truth. Lying is almost mind reading, in so far as the liar needs to understand the mindset of the other person while suppressing his or her own emotions so as not to appear nervous. Their practical observations were that pathological liars could not always tell truth from falsehood and would contradict themselves in an interview; that they were manipulative and admitted to preying on people; and that they were very brazen in manner, yet very cool when talking about it. Aside from having histories of conning others or using aliases, habitual liars also admitted to malingering, or telling lies to obtain sickness benefits.
Whilst corporations and democracies ultimately recognise psychopathic behaviour for what it is, the very existence of organisational psychopaths is a classic case of drama in which society suffers a permanent loss. Everyone with whom an organisational psychopath comes into contact loses. A particular victim is the company or country that mistakenly supports the organisational psychopath.
As China chided Australia, in attacking Iraq, a country that had never threatened them, the Coalition of the Willing's Anglo-American societies of America, the United Kingdom and Australia conspired to violate international conventions on national sovereignty. In the process they brought shame on the institutions of Western democracy and lost their most valuable asset of all: their reputation.

Chapter 3 Political Economy of the Anglo-
American world view of climate change

3.1 Background
The previous chapter examined the changing Anglo-American world view and
emerging renaissance in constrained resource policy. This Chapter examines
the development of Anglo-American climate change policy and how the change
in Anglo-American world view is now influencing this policy.

To preface, some observations by William J. Antholis, Managing Director of The Brookings Institution, draw together world views with climate change policy. Antholis' fifteen years' experience in climate change negotiations led him to call for America and Europe to build a new bridge in time for the final climate change meeting of the UNFCCC's “Bali Roadmap”, CoP15, in Copenhagen from 7 to 18 December 2009 (Antholis 2009):

Political systems don’t account for all the difference between the
United States and Europe. European private citizens, NGOs, and
corporations also have moved the needle. These Europeans have not
viewed climate change as a technological or an economic issue.
They have viewed it as a matter of basic common sense morality,
politics, economics and culture …. In contrast to Europe, where the
political system has created an opening for activism on behalf of
protecting the climate, the structure of American politics has been
an obstacle to action on this issue. That is, our federal system - and
particularly the United States Senate - empowers minorities to block
action. Beyond that, or perhaps as a result, our politics tend to
prioritize economic performance - at times almost entirely to the
exclusion of other policy priorities. Moreover, “low-expectation
pragmatism” can lead to half-measures …. while it is clear to
everyone that Europe, in particular, has led America to the point of
passing a real climate change law [Waxman Markey], this will be
sold in the United States as an example of American leadership and
independence …. chances are good that the U.S. will live up to
Winston Churchill’s famous quip that "America can always be
counted on to do the right thing, after it has exhausted all other
possibilities." After a decade of learning, the upside of American
pragmatism appears to be rising …. The new bridge that the U.S.
and Europe need to build together .... must be built on Europe’s
historic role as a leader on the issue, and must take advantage of
the United States’ self-centered “following-by-not-following” conceit
…. In short, we need to combine forces. We need to mobilize
Europe’s leadership on the issue: its moral vision, its emphasis on
lifestyles, its empowered minorities, its two millennia of experience
in constitutional construction, its technological elegance, and its
long-standing ties in key places around the world - from Russia to
Africa to Latin America to Southeast Asia. We also need to mobilize
America’s entrepreneurialism, imagination, regulatory uniformity,
and complimentary long-standing ties in other key places around the
world, such as East Asia, South Asia and Latin America.

Through the challenges and confusion of climate change, futurists like Antholis are perceiving the gradual commencement of humanity's third industrial revolution. It is important to understand the political economy underpinning this momentous transition. McNeil, a scientific adviser to the Australian Government, writes in The Clean Industrial Revolution (2009, p.6):

Climate change has given all fossil fuels the knockout blow …. The
clean industrial revolution this century is one where the fuel is free
and infinite, and the materials grown or recycled. …. the power of
the sun, wind, ocean and earth is infinite …. Fostering clean
technological innovation and a low carbon economy cannot be
initiated from market forces alone because, for the time being, the
market doesn't account for the cost of carbon emissions or the
inevitable longer-term transition beyond fossil fuels. Slashing
greenhouse gas emissions by governments is needed to kick-start
the revolution.

3.2 Climate change science development

Precursors in science and policy

In 1859, Tyndall showed the world's natural greenhouse effect was due to CO2 and water vapour, which contributed 33°C of warming.
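The 33°C figure can be recovered from a standard zero-dimensional energy-balance sketch (a textbook calculation, not Tyndall's own derivation). With solar constant S ≈ 1361 W/m², planetary albedo α ≈ 0.3 and the Stefan-Boltzmann constant σ, the effective radiating temperature of an atmosphere-free earth would be

\[ T_e = \left( \frac{S\,(1-\alpha)}{4\sigma} \right)^{1/4} \approx 255\ \mathrm{K}, \]

against an observed mean surface temperature of about 288 K; the difference of roughly 33°C is the natural greenhouse effect.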

In 1957, Roger Revelle, a scientist at the Scripps Oceanographic Institute in La Jolla, California, linked increases in CO2 in the atmosphere with global warming and first referred to CO2 as a greenhouse gas (Revelle & Suess 2002).i

Another researcher at the Scripps Oceanographic Institute, Charles Keeling, recorded atmospheric CO2 and showed that about 50% of the CO2 emitted by humans remains in the atmosphere (Keeling & Whorf 2005).ii Keeping in mind the old maxim that “correlation doesn't mean causation,” the Keeling Curve is considered the most fundamental relationship of global warming. It shows an increase of about 22% in CO2 concentration, from 315 ppm in 1958 to 385 ppm in 2008, and the concentration continues rising at a rate of about 2 ppm per year.
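The arithmetic behind these figures is easily checked. A minimal sketch in Python (linear extrapolation only; if the growth rate continues to accelerate, this will overstate the time remaining):

# Keeling Curve figures quoted above: 315 ppm (1958) rising to 385 ppm (2008).
start_year, start_ppm = 1958, 315.0
end_year, end_ppm = 2008, 385.0

pct_increase = 100.0 * (end_ppm - start_ppm) / start_ppm     # about 22%
mean_rate = (end_ppm - start_ppm) / (end_year - start_year)  # 1.4 ppm/year
print(f"increase since 1958: {pct_increase:.0f}%; mean rate: {mean_rate:.1f} ppm/year")

# Extrapolating at the current rate of about 2 ppm/year quoted in the text,
# the 450 ppm level discussed later in this chapter would be reached around:
current_rate = 2.0
print(f"450 ppm around {end_year + (450 - end_ppm) / current_rate:.0f}")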

The characteristic wiggle on Keeling's trajectory is attributed to the seasonal growth and decay of vegetation, which absorbs about 8% of atmospheric CO2 every year and returns it to the atmosphere.

Sampling air trapped in Antarctic ice, Tripati et al. (2009) recently determined that the last time a 387 ppm atmospheric CO2 concentration had occurred was “during the Middle Miocene, when temperatures were [approximately] 3 to 6°C warmer and sea level 25 to 40 meters higher than present.” At this time there was no permanent Arctic ice-cap and only a thin Antarctic polar cap. Tripati et al. found the atmosphere's CO2 concentration decreased synchronously “with major episodes of glacial expansion during the Middle Miocene (~14 to 10 million years ago; Ma) and Late Pliocene (~3.3 to ~2.4 Ma)”.iii

Illustration 8: CO2 concentration over 20 million years up to the present (Source: Tripati 2009)

The Jason group

The Jason Group of scientists was established in 1960 by nuclear physicists who wanted to develop new national defence frameworks within the realm of classified information. Over the last 40 years, there have been only one hundred Jasons, including 11 Nobel prize winners and 43 U.S. National Academy of Sciences fellows. The group conducts a workshop for six weeks each year in San Diego, California.

Oreskes & Renouf (2008) describe two confidential Jason Group reports into
the effect of climate change on the planet and the fabric of society. These
reports were prepared for the U.S. Department of Defence and provided to
President Jimmy Carter.

In 1977 the Group focused on climate change. Drawing upon information from the National Centre for Atmospheric Research (NCAR) in Boulder, Colorado, the group developed a climate model called “Features of Energy-Budget Climate Models: An Example of Weather-Driven Climate Stability”.iv

In 1979, the Jason Group published a remarkably foresighted report, JSR-78-07 The Long Term Impact of Atmospheric Carbon Dioxide on Climate (MacDonald 1989). This report predicted that the atmospheric CO2 concentration would double by 2035 [which the IPCC now expects to occur by 2050]; the planet would warm by 2-3°C [which accords with the IPCC's projections]; polar regions could warm by up to 10-12°C and quickly melt; and the world's crop-producing capacity and productivity could significantly decline, particularly in marginal areas.

President Carter's Office of Science and Technology Policy sought a second opinion from the National Academy of Sciences' climate committee headed by the MIT meteorologist, Jule Charney. Charney's report confirmed the Jason Group's conclusions.

James Hansen
In 1988, James Hansen, Director of NASA's Goddard Institute for Space Studies, testified to the US Senate Committee on Energy and Natural Resources that CO2 pollution would lead to dramatic damage from global warming. He noted that the earth was warmer in 1988 than at any time in the 100 year history of measurement; that global warming could be ascribed to the greenhouse effect with 99% confidence; and that computer simulations showed the greenhouse effect would cause extreme climatic events such as summer heat waves. Hansen outlined three policy scenarios that he had modelled, ranging from "business as usual" to "draconian emission cuts" that would eliminate trace gas growth by 2000. Courageously, he predicted that over the period from May 1988 to May 2008, the earth would become "warmer than it has been in the past 100,000 years".

In the event, Hansen's prediction was technically wrong. With the benefit of hindsight it became apparent that Hansen's base year of 1988 was an anomaly in being extraordinarily warm, and several of the years immediately following it were cooler. Nevertheless, the substance of Hansen's prediction is correct: temperatures have been rising monotonically in trend, with a superimposed oscillation.v
In 2007, almost twenty years after Hansen's testimony, the world governments
of the Intergovernmental Panel on Climate Change (IPCC) concurred with him.

Hansen's current belief is that the only safe course of action is to urgently
lower the level of atmospheric CO2 from 385 ppm to 350 ppm (Pilkington
2008). Hansen is a member of the Tällberg Foundation, which published a full
page advertisement in the Financial Times, the International Herald Tribune
and the New York Times on 23 June 2008 to “Call upon all nations in the
ongoing climate negotiations to adopt 350 as the target to be reached
peacefully and deliberately, with all possible speed” (Tällberg Foundation
2008).

350 is the Tällberg Foundation's appellation: the atmospheric concentration of 350 ppm of CO2 that the Foundation believes is the upper limit of safe CO2 concentration in the earth's atmosphere. It is 35 ppm below the current level of 385 ppm and represents about 17 years at the current rate of increase of 2 ppm per year.
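The 17-year figure is simple arithmetic: the overshoot above the Foundation's ceiling, divided by the current growth rate, gives

\[ \frac{(385 - 350)\,\mathrm{ppm}}{2\,\mathrm{ppm\,yr^{-1}}} \approx 17.5\ \mathrm{years}, \]

that is, the excess already in the atmosphere equals roughly seventeen and a half years of accumulation at today's rate.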

The Tällberg Foundation claims that the current discussion target of 450 ppm
and global mean temperature rise to 2°C above pre-industrial levels is the
wrong target and will have truly terrible consequences:

The oft-stated goal to keep global warming less than two degrees
Celsius (3.6 degrees Fahrenheit) is a recipe for global disaster, not
salvation .... the simple, yes shocking, truth is that we have gone too
far. We are going in the wrong direction and we have put planetary
systems, all inhabitants and generations to come in grave peril. It is
uncertain how long the planet can remain above the level of 350
ppm CO2 before cascading catastrophic effects spin beyond all
human control .... therefore, we must go back. We must cut carbon
emissions and draw down CO2 below the level of 350 ppm. If we are
to preserve the planet upon which civilisation has developed, we
have no choice but to make bold decisions that will change the way
the world works – together .... to avoid a world at 450 ppm CO 2 is
the greatest challenge humanity has ever had to face.

In June 2008, on the 20th anniversary of his seminal 1988 testimony, the US
Senate Committee on Energy Independence and Global Warming again heard
James Hansen's testimony (2008a). Hansen criticised the goal to keep global
warming less than 2°C, saying:

Warming so far, about two degrees Fahrenheit over land areas,
seems almost innocuous, being less than day-to-day weather
fluctuations. But more warming is already “in-the-pipeline”, delayed
only by the great inertia of the world ocean. And climate is nearing
dangerous tipping points. Elements of a “perfect storm”, a global
cataclysm, are assembled .... The disturbing conclusion ... is that the
safe level of atmospheric carbon dioxide is no more than 350 ppm
(parts per million) and it may be less. Carbon dioxide amount is
already 385 ppm and rising about 2 ppm per year. Stunning
corollary: the oft-stated goal to keep global warming less than two
degrees Celsius (3.6 degrees Fahrenheit) is a recipe for global
disaster, not salvation.

In his testimony, Hansen furthermore called for:

• a moratorium on new coal-fired power plants that do not capture carbon, because ceasing coal burning is the primary requirement in solving global warming
• the introduction of a direct carbon tax at the first point of sale of coal, oil and gas, with 100% of the proceeds returned to consumers as equal monthly deposits into their bank accounts (see the illustrative sketch after this list).vi The tax would be increased as necessary to allow the market place to choose winners, simultaneously weaning energy users from the bad habits developed under cheap subsidised fossil fuels and promoting clean energy sources
• promotion of renewable energy generation by a grid of underground
cables across America, analogous to the interstate highway system. This
would allow America's western states to have clean energy by 2020 and
the rest of America by 2030
• where necessary, import duties to be placed on products from
uncooperative countries in order to level the playing field, with the
import tax added to the dividend pool returned to American consumers
• changed utility regulations to reward increased efficiency rather than
increased assets and sales
• changed building code and vehicle requirements to improve efficiency

• an end to using China and India as scapegoats for non-action. Western
countries still have by far the highest emissions per capita and have
been (again, by far) the greatest source of accumulated emissions
leading to the current climate exigencies.
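To illustrate the mechanics of the fee-and-dividend proposal in the second point above, a minimal sketch in Python (all figures are hypothetical round numbers chosen for easy arithmetic; they are not Hansen's):

# Hypothetical fee-and-dividend arithmetic (illustrative figures only).
# A carbon tax is collected at the first point of sale and 100% of the
# proceeds are returned to consumers as equal monthly deposits.
tax_per_tonne = 100.0      # assumed tax, US$ per tonne of CO2
annual_emissions = 6.0e9   # assumed taxed emissions, tonnes of CO2 per year
population = 300e6         # assumed number of eligible residents

annual_pool = tax_per_tonne * annual_emissions    # US$600 billion per year
monthly_dividend = annual_pool / population / 12  # equal per-capita share
print(f"monthly dividend per person: ${monthly_dividend:,.0f}")  # about $167

Because the dividend is uniform while the tax paid rises with fossil fuel use, anyone whose carbon footprint is below the per-capita average comes out ahead, which is the behavioural lever of the proposal.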

Hansen also expressed his personal opinion that the chief executives of large
energy companies such as Exxon Mobil and Peabody Coal should be tried for
crimes against humanity because they used disinformation to discredit the link
between global warming and burning fossil fuels. He likened this to the
disinformation campaign by tobacco companies such as R. J. Reynolds that
sought to bring the link between smoking and cancer into disrepute.

In March 2008, Hansen wrote in a personal capacity to the Prime Minister of Australia, Kevin Rudd,vii Angela Merkel, Barack Obama and other leaders, requesting their leadership in “Aggressive forward-looking actions to mitigate dangerous climate change …. [including a halt] to plans for continuing mining of coal, export of coal, and construction of new coal-fired power plants around the world, including in Australia .... that do not capture and sequester the CO2” (Hansen 2008b).

Al Gore
Former Vice President Al Gore (2008) tirelessly campaigns for the rise of another hero generation with a sense of historic mission to solve the climate crisis by changing the political will in America and laying a bright and optimistic future for the world. He sees the mission of this new generation of inspired activists as being as singularly momentous as the actions of the fathers of the Declaration of Independence, the people who ended slavery and the people who gave women the vote.

Unfortunately, the younger generation appears to suffer from collective cognitive dissonance on the issue. Gore is despondent that Americans who
have the opportunity to make a difference still see global warming as a low
priority. They have little sense of urgency about the planetary emergency. He
feels that people today have a culture of distraction and a sclerosis in good
citizenship. The result is that little is being done, despite two thirds of Americans accepting that human activity causes global warming and that the earth is heating up in a significant way.

Gore has concluded that the solution to climate change is a revenue neutral carbon emissions tax, with the proceeds replacing employment taxes of the kind first introduced in Germany by Bismarck in the nineteenth century.
James Hansen, Al Gore also flatly says “No new coal plants that do not capture
and store their own CO2.”

Gore says it is all well and good for a few concerned people to do things themselves, like changing light bulbs, driving hybrid vehicles, digging geothermal wells and installing photovoltaic panels on their roofs, but we need to change the laws and solve the democracy crisis in good citizenship behaviour.

He advocated the expanded use of large scale geothermal concentration, advanced photovoltaic technology and conservation, citing examples such as Germany's proposal for a super-grid of solar energy plants across Saharan Africa to supply Europe.

By July 2008, Gore had sharpened his focus even further. He boldly challenged his fellow Americans to take an environmentally radical perspective and become completely green in electricity generation: “I’m going to issue a strategic
challenge that the United States of America set a goal of getting 100 percent of
our electricity from renewable resources and carbon-constrained fuels within
10 years …. We need to make a big, massive, one-off investment to transform
our energy infrastructure from one that relies on a dirty, expensive fuel, to fuel
that is free” (Broder 2008; Herbert 2008). Gore also proposed that payroll tax
be cut to offset the inevitably higher prices for fuel and electricity.

Following Barack Obama's election as President, Gore reiterated his plan to produce 100% of American electricity from carbon-free sources within 10 years. Its five key features have been adopted by President Obama:

• incentives for the construction of concentrated solar thermal plants in the South-west deserts, wind farms in the corridor stretching from Texas to the Dakotas and advanced plants in geothermal hot spots

• a national smart grid for the transport of renewable electricity from
where it is generated to consumers in cities and smart ways for
consumers to control usage
• help for automakers to move production to plug-in hybrids
• the retrofitting of buildings with insulation and energy efficient windows and lighting
• a cap on emissions that puts a price on carbon.

Intergovernmental Panel on Climate Change


The United Nations Intergovernmental Panel on Climate Change (IPCC) concluded that global warming, if left unchecked, had the potential to materially impact the planet and the welfare of its peoples.viii It found that the acceleration of greenhouse gas emissions from 2001 to 2007 was in large part due to China's booming economy. The acceleration of economic growth and fossil fuel emissions has led to the very difficult situation that global emissions are tracking the worst-case estimates.

Illustration 9: IPCC Report Working Group III Figure SPM.8: Stabilisation scenario categories
Illustration 10: IPCC Report Working Group III Table SPM.5: Characteristics of post-TAR stabilisation scenarios

In May 2007, member governments of the IPCC permitted for the first time the display of a graph of greenhouse gases in parts per million versus expected temperature rise, as shown in Illustration 9 above (IPCC 2007). Coloured shading shows the concentration bands for stabilisation of greenhouse gases in the atmosphere corresponding to the stabilisation scenario categories in Illustration 10 above.

Illustration 10 summarises the actions and consequences of various CO2 reduction policies. For example, stabilisation scenario A2 from Illustration 9 implies a mean temperature rise of 2.4°C to 2.8°C and a reduction in emissions on 1990 levels of between 50% and 85%, with the CO2 concentration peaking between 490 ppm and 535 ppm sometime between 2000 and 2020.

The black line in the middle of the band in Illustration 9 is the best estimate climate sensitivity of 3°C. For example, a greenhouse gas concentration of 450 ppm will produce a mean temperature rise of about 2°C. The red line provides the upper bound of the likely range of climate sensitivity, 4.5°C, while the blue line shows the lower bound of 2°C.
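The mapping in Illustration 9 from concentration to equilibrium warming is approximately logarithmic. The following is a minimal sketch of that standard approximation, not the IPCC's full model; the functional form, the 280 ppm pre-industrial baseline and the function names are assumptions made here for illustration:

    import math

    def warming(concentration_ppm, sensitivity=3.0, preindustrial_ppm=280.0):
        # Equilibrium temperature rise (deg C), assuming warming scales with
        # the base-2 logarithm of the concentration ratio, where sensitivity
        # is the warming per doubling of greenhouse gas concentration.
        return sensitivity * math.log2(concentration_ppm / preindustrial_ppm)

    print(round(warming(450), 1))        # ~2.1 deg C at the 3 deg C best estimate
    print(round(warming(450, 2.0), 1))   # ~1.4 deg C at the lower-bound sensitivity
    print(round(warming(450, 4.5), 1))   # ~3.1 deg C at the upper-bound sensitivity

This reproduces the reading of the black, blue and red lines at a 450 ppm concentration described above.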

Based on the research of some 4,500 scientists and 2,500 peer reviewers, the IPCC agreed that limiting global temperature rise to 2°C (3.6°F) was necessary to avoid severe climate change damage. However, the IPCC conceded that we are already on a path that will cause more than 2°C warming, so policy might need to be set around a 3°C rise and be prepared for quite large consequences such as species loss, lack of rainfall in Australia, an increasing frequency of high force tornadoes in America and nonlinear feedback loops exacerbating global warming, such as methane release from ocean beds.

In April 2008, the U.S. National Oceanic and Atmospheric Administration (NOAA 2008) released measurements of greenhouse gas concentrations in the
atmosphere from 60 sites around the world. This data confirmed the IPCC's
conclusions that greenhouse gas concentrations are rising faster than initially
expected and the rate of increase is accelerating. In 2007, the concentration of
CO2 had already reached nearly 385 ppm, up 2.6 ppm from 2006.ix It attributed
the rise mainly to burning of coal, oil and gas.

The next IPCC Assessment Report (AR5) is due in 2014 and will consider risk-
reduction strategies. However, scientists met in Copenhagen in March 2009 to
undertake an interim update of the IPCC's 2007 Fourth Assessment Report
(AR4). The conference concluded that global warming is already 50% greater
than expected. This places the world on track for the worst-case scenario of
the Fourth Assessment Report and a global temperature rise of between 3°C
and 5°C is now expected.

Non-CO2 greenhouse gas pollution
CO2 emissions constitute only half of greenhouse gas emissions. The other half comprises black carbon soot, nitrous oxide, methane and man-made gases such as hydrofluorocarbons, perfluorocarbons and sulphur hexafluoride (SF6). The emission of many of these gases is easier and cheaper to control than the emission of CO2. As a fall-back position, should a comprehensive international agreement not be achieved in Copenhagen, policy makers see non-CO2 gases as a policy deliverable.

Black carbon soot is quite important because its particulates blacken snow and
ice, causing the surface to absorb radiation and directly contributing to
melting of glaciers and polar caps. In addition, millions of human deaths each
year are attributed to soot pollution. Fortunately, soot can be readily and
cheaply abated using diesel filters and more efficient cooking stoves.

Harmful man-made gas emissions are also relatively easy to abate. For example, under the Montreal Protocol to protect the ozone layer, America and industrialised countries have addressed 97% of chlorofluorocarbon emissions (see below). A major new challenge is hydrofluorocarbons (HFCs) from refrigerators and air conditioners, whose emissions continue to grow strongly and which have some 11,000 times the global warming effect of CO2.

Stockholm Network of Scientists


In June 2008, the Stockholm Network of Scientists investigated what would
happen if governments agreed to address global warming but deferred
effective action (Lynas 2008).

The scientists examined three policy scenarios for global temperature rise
using data from the United Kingdom's Met Office Hadley Centre:

• governments agree but ignore emissions until 2045x – this results in a temperature rise of 4.85°C by 2100. The consequences of such a rise are that much of the earth becomes uninhabitable, billions of people are displaced by desertification of the Mediterranean region, the oceans rise by 7 metres from the melting of the Greenland ice cap and 50%-80% of all species on the earth are rendered extinct

• Kyoto plus - a new round of Kyoto-type targets at Copenhagen in
December 2009 that leads to rising emissions until 2030 with a global
temperature rise of 3.31°C by 2100
• immediate strong policy measures including emissions permits subject to a cap set by the United Nationsxi - emissions would peak in 2017 and the temperature rise would be 2.89°C by 2100. This exceeds the European Union's target of 2°C, which has been set as the danger threshold for extreme weather conditions, floods, spreading of deserts, sea rise, and perhaps the release of methane from Siberian permafrost.

These results led many scientists to conclude that global temperature rise cannot be contained below 2°C even with the strongest policies.

American Association for the Advancement of Science
At the Association's Annual Meeting on 14 February 2009, scientists involved in the IPCC's investigations tabled recent observations that greenhouse gas emissions from burning fossil fuels in developed countries, coal in particular, had increased more quickly than expected in the IPCC's 2007 Reports (Lydersen 2009). They attributed this to higher temperatures that had triggered self-reinforcing feedback mechanisms, speeding up natural processes such as the unexpected release of hundreds of billions of tons of CO2 and CH4 from melting Arctic permafrost.

The scientists noted that earlier estimates of CO2 absorption by marine and terrestrial ecosystems are overly optimistic: oceans are becoming more acidic, and the deeper layers of water being exposed by stronger winds due to warmer weather are already saturated with carbon; Northern Hemisphere land is absorbing more heat than expected, which reduces CO2 sequestration by plants; and wildfire incidence is increasing significantly, contributing about a third as much carbon to the atmosphere as burning fossil fuels. In conclusion, the scientists suggested that the rate of global warming is likely to be much faster than recent predictions.

A fixed tranche of atmospheric emissions capacity
The IPCC concluded that atmospheric temperature rise needed to be limited to
2°C in order to minimise adverse effects of climate change. However, the
various feedback mechanisms of the carbon cycle governing the relationship
from emissions to carbon in the atmosphere and sea, atmospheric temperature
rise and thence to both physical and economic damages remain uncertain.

The atmospheric temperature rise from accumulating carbon emissions has been independently estimated by two teams (M. Meinshausen et al. 2009; Allen
et al. 2009). Their probabilistic models capture risks and uncertainties in
establishing limits to atmospheric emissions capacity. The results are
consistent and have exceedingly important ramifications for policy makers in
terms of concepts of natural justice and normative frameworks to deal with
global warming. These proved to be major topics of policy debate at the
UNFCCC Bonn meeting in June 2009 (below).

In the cross-compared studies of M. Meinshausen et al. and Allen et al., the
authors deal with two issues. The first is the tranche of emissions from pre-
industrial times that will cause 2°C temperature rise. The second is the
remaining part of this tranche available from 2000-2050.

Allen et al. (2009) found that 3,670 Gt CO2 (1,000 GtC) emissions from the time of the Industrial Revolution c1750 would lead to a 2°C rise in about 2070, assuming emissions peak in about 2020 at 44 Gt CO2 (12 GtC) per year and decline sufficiently to limit the atmospheric concentration of CO2 in 2100 and beyond to 490 ppm. The 2°C temperature rise occurs at 470 ppm and has a 5% to 95% confidence band of 1.3°C to 3.9°C.

Of the 3,670 Gt CO2 (1,000 GtC) aggregate, about 1,615 Gt CO2 (440 GtC or 44%) occurred before the year 2000. This led to CO2 attributable warming by the year 2000 of 0.85°C, with a 5–95% confidence range of 0.6°C to 1.1°C.

From 2000, a further 2,055 Gt CO2 (560 GtC or 56%) of emissions would lead to the IPCC limit of 2°C rise over the pre-industrial temperature. Of this post 2000 tranche of 2,055 Gt CO2, it is expected that only 1,550 to 1,950 Gt CO2 could be emitted over the years 2000 to 2049. Of the total 3,670 Gt CO2 from the time of the Industrial Revolution leading to a 2°C rise in temperature, 2,050 to 2,100 Gt CO2 emissions would occur after the year 2000.

In reviewing the dynamic performance of their model, the authors noted that
the relationship between cumulative emissions and peak warming is robust
and insensitive to the timing and rate of emissions. They suggest that policy
makers adopt Cumulative Warming Commitment (CWC) as a policy definition.
CWC is defined as the peak warming response to aggregate CO2 emissions and
has a normalised value of 1.9°C per TtC (i.e. per 1,000 GtC), with a 5–95%
confidence range of 1.4°C to 2.5°C per TtC.
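Because the discussion moves between Gt CO2 and GtC, a minimal sketch of the conversion (the 44/12 molecular mass ratio) and of applying the CWC coefficient quoted above may be useful; the function names are illustrative only:

    C_TO_CO2 = 44.0 / 12.0   # ~3.67 Gt CO2 per GtC (molecular mass ratio)

    def gtc_to_gtco2(gtc):
        # Convert gigatonnes of carbon to gigatonnes of carbon dioxide.
        return gtc * C_TO_CO2

    def peak_warming(total_emissions_gtc, cwc=1.9):
        # Peak warming implied by the Cumulative Warming Commitment,
        # using the normalised best estimate of 1.9 deg C per TtC (1,000 GtC).
        return cwc * total_emissions_gtc / 1000.0

    print(round(gtc_to_gtco2(1000)))   # ~3,670 Gt CO2, the Allen et al. aggregate
    print(peak_warming(1000))          # 1.9 deg C best-estimate peak warming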

In the second study, by M. Meinshausen et al. (2009), limiting the probability of a 2°C temperature rise to 25% requires emissions in the period 2000 to 2050 to be capped at 1,000 Gt CO2. This is the much discussed remaining tranche of 1,000 Gt of CO2. Emissions of 1,440 Gt CO2 lead to a 50% probability of temperature rise exceeding 2°C, which is similar to the 1,550 Gt CO2 found in the Allen study. Alternatively, if M. Meinshausen et al. allow for non-CO2 Kyoto gases, their 25% probability cap of 1,000 Gt CO2 increases to 1,500 Gt CO2 equivalent, which is almost the same as in the Allen study.

Of the remaining tranche of 1,000 Gt CO2, approximately 234 Gt CO2 has already been drawn in the period from 2000 to 2006 and about one third in the overlapping period from 2000 to 2008. Assuming flat emissions at the current rate of 36.3 Gt CO2 per year and probabilities of exceeding 2°C of 20%, 25% or 50%, the CO2 emission capacity of 1,000 Gt CO2 would be exhausted in 2024, 2027 or 2039 respectively.
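The 25% case can be checked with simple arithmetic under the flat-emissions assumption stated above (the 20% and 50% cases correspond to larger permissible tranches in the source, so only the 1,000 Gt figure is reproduced here):

    def exhaustion_year(budget_gt, used_gt, rate_gt_per_year, from_year):
        # Year in which a fixed emissions budget runs out at a constant rate.
        return from_year + (budget_gt - used_gt) / rate_gt_per_year

    # 1,000 Gt CO2 tranche for 2000-2050, ~234 Gt drawn by the end of 2006,
    # flat emissions of 36.3 Gt CO2 per year thereafter.
    print(round(exhaustion_year(1000, 234, 36.3, 2006)))   # ~2027, as in the text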

The study finds that the G8's vision of a 50% reduction in world emissions by
2050 (compared to 1990 levels) has a 12% to 45% probability of exceeding
2°C. If abatement is delayed such that in 2020 emissions remain more than
25% above 2000 levels, the probability of exceeding 2°C rises to 53% to 87%.

Drawing on Meinshausen et al. (2009), the German Government's independent Advisory Council on Global Change (WBGU) has since concluded that no more than 750 Gt CO2 may be emitted globally between 2009 and 2050 for a two-thirds probability of meeting a target of 2°C temperature rise (Schellnhuber et al. 2009; Spiegel Online 2009; Schwägerl 2009a). This declines to 600 Gt CO2 for a three-in-four chance.

Schellnhuber et al. (2009) propose that the 750 Gt CO2 “emissions resource” be allocated to countries on a per capita basis. The aggregate per capita entitlement would be about 110 tonnes of CO2 for the period 2010-2050. The following table sets out the CO2 budget by share of global population in 2010. The table also shows the number of years of “emission resource” that each nation would have at 2008 emissions levels.

                Population   Budget 2010–2050        Emissions    Years at
                share est.   Total       Per year    est. 2008    2008
                2010 (%)     (Gt CO2)    (Gt CO2)    (Gt CO2)     emissions
Germany            1.2          9          0.22        0.91          10
USA                4.6         35          0.85        6.1            6
China             20          148          3.6         6.2           24
Brazil             2.8         21          0.52        0.46          46
Burkina Faso       0.24         1.8        0.043       0.00062    2,890
Japan              1.8         14          0.34        1.3           11
Russia             2           15          0.37        1.6            9
Mexico             1.6         12          0.29        0.46          26
Indonesia          3.4         25          0.62        0.38          67
India             18          133          3.2         1.5           88
Maldives           0.0058       0.043      0.0011      0.00071       61
EU                 7.2         54          1.3         4.5           12
World            100          750         18          30             25
Australia*         0.32         2.4        0.06        0.83           3

Table: “Future responsibility”: the period 2010-2050, 67% probability of respecting the 2°C safety barrier (Source: Schellnhuber et al. 2009, p.28 Table 5.3.2 Option II; * appended from Australian Department of Climate Change 2009, Table ES.1xii)

It may be noted in the above table that the “emission resources” for Australia, America, Russia, Germany, Japan and the EU are only 3, 6, 9, 10, 11 and 12 years, respectively. China's “emission resources” and the World average are 24 and 25 years respectively, all far short of the 40 year period. Brazil, Indonesia and India have “emission resources” of 46, 67 and 88 years, respectively.
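The table's construction can be reproduced directly from the 750 Gt CO2 global budget; a minimal sketch using the German row (the population share and 2008 emissions are the table's own estimates):

    GLOBAL_BUDGET_GT = 750.0   # Gt CO2 available globally, 2010-2050

    def national_budget(population_share_pct):
        # A country's total budget is simply its share of world population.
        return GLOBAL_BUDGET_GT * population_share_pct / 100.0

    def years_remaining(population_share_pct, emissions_2008_gt):
        # Years of "emission resource" at constant 2008 emissions.
        return national_budget(population_share_pct) / emissions_2008_gt

    print(round(national_budget(1.2)))          # ~9 Gt CO2 for Germany
    print(round(years_remaining(1.2, 0.91)))    # ~10 years, matching the table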

The implications for high emissions countries are extraordinary. For example, Germany has a target of reducing emissions by 40% by 2020 (compared to 1990 levels). This has been considered exemplary, but to meet the above CO2 budget the target would need to be increased to a 60% reduction by 2020 (compared to 1990 levels), with a total emissions moratorium by 2030.

Economic damage
Climate change undermines economic progress through a feedback loop from economic activity and emissions to temperature rise, which causes socio-economic damages. However, the damage function is an area of major uncertainty due to our lack of previous experience and our limited understanding of other complex effects that act both directly and mutually.

The cost of climate change has historically been seen as deaths resulting from
extreme weather events, such as flooding and cyclones. This is because
approximately 97% of losses relate to weather events, whilst the other 3% of
losses are due to earthquakes, tsunamis or volcanic eruptions. Over the last
decade the countries affected by such extreme weather-related disasters have
included China, India, Bangladesh, Indonesia, Japan, Philippines, Dominica,
Vanuatu, Samoa and Myanmar.

President of the Geneva-based Global Humanitarian Forum, former United Nations Secretary-General Kofi Annan, estimates that climate change already affects 325 million people each year, costing US$125 billion and leading to the death of 315,000 people. The Forum estimates that by 2030 this will rise to 600 million people affected, US$340 billion in costs and 500,000 deaths.

Another consequence of climate change is the impact on water resources. The IPCC estimates that by 2020, water stress is expected to affect up to 1.2 billion people in Asia, 81 million people in Latin America and 250 million people in African countries.

The International Organisation for Migration forecasts that 200 million people
will be displaced by environmental pressures by 2050.

Climate change sceptics
This doctoral research assumes that climate change policies need to be
investigated because the governments of the world, under the auspices of the
United Nations Framework Convention on Climate Change (UNFCCC), have
agreed to address climate change. Underlying their decision is acceptance by
the Intergovernmental Panel on Climate Change (IPCC) of three decades of
scientific research demonstrating that global warming is predominantly a man-
made phenomenon.

Prior to the June 2009 UNFCCC meeting in Bonn, American President Obama
described the climate change situation as a “potentially cataclysmic disaster”
(see below). A report released by the United States White House and the United States Global Change Research Program shortly thereafter underscored the IPCC's conclusions: “Observations show that warming of the climate is unequivocal. The global warming observed over the past 50 years is due primarily to human-induced emissions of heat-trapping gases. Warming over this century is projected to be considerably greater than over the last century. The global average temperature since 1900 has risen by about 1.5°F. By 2100, it is projected to rise another 2 to 11.5°F in the U.S.” (Karl et al. 2009, Executive Summary).xiii

Additional necessary albeit not sufficient arguments for addressing climate change as an important area of policy relate to consistency with other inherently desirable goals, such as limiting pollution; increasing forests, decreasing desertification and protecting species; and fulfilling the 2000 Millennium goals of developed nations to assist nations in poverty.xiv

However, the science of climate change remains controversial. This has major
implications for policy makers who need to incur great cost and inconvenience
to fundamentally change the technologies of production and consumption in
economies. Therefore, the issue of climate scepticism is addressed here, in the
context of policy rather than in the context of a discussion about the scientific
basis of climate change.

Nierenberg report

Oreskes & Renouf (2008) have established the inception or birth date of climate scepticism.xv In 1980, President Ronald Reagan commissioned a third opinion from the U.S. National Research Council's Carbon Dioxide Assessment Committee (Nierenberg 1983), using Congressional funding appropriated in 1979. The chair of the committee was William Nierenberg, a member of President Reagan's transition team, director of the Scripps Institution of Oceanography and a member of the Jason Group. He had been part of the Manhattan Project team creating the atomic bomb.

The Nierenberg Report shunned the contributing researchers' scientific consensus in favour of Nierenberg's own conservative intuition, based on unarticulated assumptions. This deftly derailed the issue and delivered President Reagan's preferred policy outcome of “reasoned inaction”. The Report suggested that everyone should calm down and concentrate on research and monitoring: “[The] knowledge we can gain in coming years should be more beneficial than a lack of action will be damaging; a programme of action without a programme for learning could be costly and ineffective. [So] our recommendations call for research, monitoring, vigilance and an open mind [Report's emphasis]”.

To drive home the case, Nierenberg even argued that global warming was
benign and nothing new, that it would take many years to significantly affect
the planet, that humans had a successful capacity to adapt to new challenges
and that there was a good chance of finding new technological solutions.

His executive summary denies the importance of CO2-induced climate change and merely recommends more research and development into alternative fuels:

• Research and development should give some priority to the enhancement of long-term energy options that are not based on combustion of fossil fuels (Chapters 1, 2, 9)
• We do not believe, however, that the evidence at hand about CO2-induced climate change would support steps to change current fuel-use patterns away from fossil fuels. Such steps may be necessary or desirable at some time in the future, and we should certainly think carefully about costs and benefits of such steps; but the very near future would be better spent improving our knowledge (including knowledge of energy and other processes leading to creation of greenhouse gases) than in changing fuel mix or use (Chapters 1, 2, 9)
• It is possible that steps to control costly climate change should start
with non-CO2 greenhouse gases. While our studies focused chiefly on
CO2, fragmentary evidence suggests that non-CO2 greenhouse gases
may be as important a set of determinants as CO2 itself. While the costs
of climate change from non-CO2 gases would be the same as those from
CO2, the control of emissions of some non-CO2 gases may be more easily
achieved (Chapters 1, 2, 4, 9)

Nierenberg's report gave rise to the term “climate change sceptic.” One year
later Nierenberg cofounded the George C. Marshall Institute think tank, which
denies climate change is anything more than normal and natural fluctuation.
Nierenberg himself continued as an entrenched climate change critic.

Modern climate change denial

In a heavily polarised debate, the underlying science continues to be trenchantly disputed by contrarians called sceptics, or denialists, depending on which side of the debate one accepts. Amongst the sceptics are many in the
American Republican Party and an American group of 31,478 scientists who
signed a petition over the decade 1997-2007 urging the American government
to reject the basis of global warming. Prominent scientists refuting the IPCC's
scientific conclusions include Professors Richard Lindzen of MIT and Fred
Singer of the University of Virginia. In a good natured jest, their 30-scientist
team from 16 countries is referred to as the “Non-Governmental International
Panel on Climate Change.”

Criticisms and refutations from climate sceptics include:

• much of the global warming debate is alarmist because of its "tipping-point" focus
• there is considerable scepticism that human activity plays any role in global warming because human activity contributes only 5% of CO2 emissions

• James Hansen's 1988 prediction was technically wrong. There is no
evidence that emission of CO2 is driving up global temperatures.
Statistics from the Hadley Centre and University of East Anglia show
that carbon emissions have been rising while global temperatures have
been stable or trending down
• there is no way of knowing whether the temperature and economic
modelling outcomes are realistic because the changes are based on
assumptions having large differences to our direct experience. Thus
there is a great deal of uncertainty between cause and effect
• temperature rise models generally only extend to 2100 when other
dynamic effects may ameliorate the problem in longer time frames, such
as the sea absorbing CO2
• NASA's solar cycle 24 for increased sunspot activity has not commenced
as expected so the planet may face a cooling cycle (in which a bit of
human induced global warming would be appreciated)
• there is considerable scepticism that humans can do anything about global warming because most CO2 emissions have nothing to do with human activity.

In addition, many people not necessarily classified as “sceptics” hold divergent opinions such as:

• at the bottom of whatever problem that may exist is human population growth and not emissions. In support of this it may be noted that the IPCC's less calamitous forecasts in its later report were in part due to reduced United Nations estimates of human population
• while global warming is not proven, all agree pollution is bad.
Therefore, we should continue to clean up pollution on a regional level
but not require worldwide action or devote enormous resources to it
• we are addressing the wrong problem altogether. Oil will run out long
before climate change is a problem.xvi

The U.S. Chamber of Commerce has intensively lobbied against the Clean Air
Act, calling for climate change science to be put on trial.xvii Its strident
sceptical position led Exelon (America's largest nuclear utility), Pacific Gas &
Electric, PNM Resources (New Mexico's electricity utility) and Apple Computer to resign their memberships (Krauss & Galbraith 2009). In addition, General
Electric, Johnson & Johnson and Nike issued statements distancing themselves
from the Chamber's position.

In Australia, Donald Aitkin (2008) is among the prominent Australian scientists that reject the IPCC's hypothesis of global warming.xviii He claims quasi-
religious climate change activists have diverted Australians from the key issues
of water and being an energy-dependent society whose resources are
depleting. Aitkin claims that carbon trading will be futile, expensive and will
lead to rorts; that the European Union's attempts have been laughable; and
that China and India are unlikely to reduce their use of carbon fuels in any
case.

3.3 Climate change policy development

IPCC's Kyoto Protocol


In 1997, the member governments of the IPCC agreed the Kyoto Protocol
treaty, which commits all signatory countries to introduce policies by 2010 to
reduce greenhouse gases, limit greenhouse gas concentration to 450 ppm and
temperature rise to 2°C above pre-industrial levels. In particular, the Kyoto
Protocol binds its thirty-seven industrialised signatories to cut emissions by at
least 5% through 2008-2012 (below 1990 levels).

In a practical example of Pareto's 80/20 rule, the top 12 greenhouse gas emitters produce 82% of all CO2 emissions. America and China lead the list and together emitted about 40% of all greenhouse gases in 2009. The reduction of emissions by these two countries is of critical importance.

Australia is in a particularly exposed position because it exceeds America in terms of per capita emissions even though it ranks only 12th in total emissions with 1.3% to 1.5% of gases. Australia is able to meet its emissions reduction targets because of a special concession it received to cease land clearing. A précis of Australian climate change history is provided in Appendix 1, Climate change engagement in Australia.

However, in what is now seen as one of its key failings, the Kyoto Protocol avoided granting the same land conservation concession to other nations. As a consequence, about 30 million acres of rainforest continued to be cleared annually, which constitutes about 20% of all man-made emissions.xix European countries argued at the time that paying poor countries to refrain from rainforest deforestation was an improper way for wealthy countries to meet their climate change obligations.

The IPCC has emphasised that it is essential that by 2010 all Governments
introduce policies to reduce greenhouse emissions. The IPCC estimates that if
member governments do act quickly, climate change can be brought under
control at a reasonable cost. It suggests that this will require a high level of
energy conservation, active investment in renewable energy and new
technologies, and emissions trading with a price on carbon based energy of
US$20-50 per tonne of CO2.

United Kingdom's Climate Change Act


The United Kingdom's approach to climate policy is one aspect in which it is at
odds with the traditional generic Anglo-American world view. The Government
is quintessentially uncompromising on global warming abatement and its
impact on domestic consumption. In April 2001, the Government introduced a
climate change levy to help meet its interim target of 12.5% reduction in
emissions by 2010.xx

In December 2008, the United Kingdom Parliament passed the Climate Change
Act by 463 votes to three. The United Kingdom became the first country in the
world to unilaterally legislate to an 80% reduction in emissions by 2050
(compared to 1990 levels). An important feature of this legislation is that it
self-entrenches to irrevocably bind future governments.

An independent Climate Change Committee (CCC) will review progress and emerging climate science to advise the Government on the five yearly carbon
budgets. The CCC's first report recommends an increase of the 2020
greenhouse gas reduction target from 26% to 34%, and potentially to a 42%
reduction if a global agreement on climate change is reached. It also addresses
the sectors to be targeted for emission reductions and the technologies to use.

The CCC is seeking a 40% emission reduction by 2020 from the power sector,
using wind, nuclear, carbon capture and storage and increased energy
efficiency. It is widely understood that implicit in this is the complete
decarbonisation of electricity generation and switching large parts of the
economy such as cars and gas home heating to electricity.

European Union targets


Prior to the Kyoto Protocol, European and Scandinavian countries experienced
considerable constitutional difficulty with environmental taxes through the
1990s. Of key importance in these countries was the debate about revenue
neutral environmental taxes and double dividends. This is an issue still
dramatically shaping the carbon price debate in Anglo-American countries.

Revenue neutral environmental taxes and the double dividend hypothesis

Hourcade & Robinson (1996) first suggested that revenue neutral environmental taxes can have a “double dividend”. The first dividend is the
reduction in pollution. The second dividend arises out of the positive economic
and employment growth that may be expected from the reduction of
distortionary Bismarckian taxes on labour.

Theoretically, there is an optimal level of pollution where the marginal cost of pollution damage equals the marginal cost of avoiding pollution. Hourcade &
Robinson note (p.867) “We can say that a double dividend occurs when the
marginal distortionary effect of a carbon tax is lower than the distortionary
effect of the taxes for which it is substituted and when the amount of overall
fiscal burden remains constant. An important point is that the existence of
these conditions depends on parameters far beyond the energy field.”
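In symbols (the notation here is assumed for exposition, not the authors'), the optimal pollution level e* satisfies

    \[ MD(e^{*}) \;=\; MAC(e^{*}) \]

where MD is the marginal cost of pollution damage and MAC the marginal cost of avoiding pollution. The Hourcade & Robinson double dividend condition can then be written as

    \[ MEB(\tau_{CO_2}) \;<\; MEB(\tau_{replaced}) \quad \text{subject to} \quad R(\tau_{CO_2}) + R(\tau_{other}) = \bar{R} \]

that is, the marginal excess burden (distortion) of the carbon tax is lower than that of the tax it replaces, while the overall fiscal burden remains constant.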

Bosquet (2000) surveyed 139 models with environmental taxes and concludes
that in the short to medium term a double dividend can exist if emissions
reductions are significant and environmental tax revenues are used to reduce
distorting taxes such as payroll tax (and wage-price inflation is prevented).
Bosquet also found that energy-intensive industries may be impacted but this
is unavoidable if the environmental goals are to be achieved. Also, revenue
recycling and support of vulnerable elements of society are able to overcome harm to households that spend a greater share of their income on goods which
produce emissions.

Bayindir-Upmann & Raith (2003) found that only in low tax countries does a revenue neutral green tax reform yield the effects of better environmental quality and higher employment. In high tax countries, the positive economic effect that helps employment in turn leads to the loss of the environmental dividend component of the double dividend. The authors suggest that this may be addressed by abandoning revenue neutrality, pursuing more drastic tax reforms and using revenues for public works rather than reducing payroll and income taxes.

Using a computable general equilibrium model, McKibbin, Shackleton & Wilcoxen (1999) demonstrated that international trade and capital flows
significantly alter projections of the domestic effects of emissions mitigation
policy. The Ricardian comparative advantage of nations that underpins
international carbon trading is strongest if countries have different marginal
costs of abating carbon.

In 1809, David Ricardo proposed that the rent of a resource (such as a piece of
land or a person's labour) is equal to the economic value of the best use of that
resource compared to using the best rent-free resource for the same purpose.
In other words, the resource owner appropriates the value of any excess
production because of the more advantageous resource. For example, the
value of marginal land for agriculture would be nil so the rent of that land
would be nil. Rent would increase with the fertility of the soil, irrespective of
any contribution by the landowner.
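In symbols (the notation is assumed here for illustration): if a fertile plot produces output q_F and the best rent-free marginal plot produces q_M with the same labour and capital, then at price p the Ricardian rent of the fertile plot is

    \[ \text{Rent}_F \;=\; p\,(q_F - q_M) \]

so the rent on the marginal plot is nil and rent rises with fertility, irrespective of any contribution by the landowner.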

Bento & Jacobsen (2007) also disagree with the growing number of studies that suggest fiscally-neutral swaps of environmental taxes for labour taxes increase costs and eliminate the double dividend. These studies claim that the positive welfare effect of revenue-recycling (i.e. reducing marginal tax rates) does not offset the negative welfare effect arising from promoting alternative products with pre-existing labour taxes that already distort factor-markets.

The authors criticise the underlying assumptions in these models: that labour
is a unique input and the production of all goods has a constant return to scale.
They argue that it is well established in public finance literature that a uniform
commodity tax system fails to adequately tax the rents from a fixed factor of
production. The quantity of a fixed factor of production cannot be changed in
the short-run, for example, land & buildings, plant & equipment and key
personnel. In the long run there are no limitations on scale.

Bento & Jacobsen criticise simplistic models that do not take into account that
rents on fixed factors are not fully exhausted. They claim that these models are
in fact beginning with flawed, non-optimal tax systems. Contrary to these
models, the authors hypothesise that it is possible for a double dividend to
occur where there are partially untaxable Ricardian rents for fixed factors in
the production of dirty goods.

The authors generate Ricardian rents in a static model economy where residents allocate their time between leisure & labour supply. This labour is
used with the exhaustible natural resource coal to produce a dirty good. By
allowing the production of polluting goods to have a fixed-factor, the authors
find a double dividend of up to 11% of the reduction in pollution emissions and
conclude that environmental taxes both improve environmental quality and
increase the efficiency of the tax system.

However, the presence of the fixed factor means that part of the environmental
tax falls on the fixed factor so the price of the dirty good does not increase by
the whole of environmental tax. This reduces the welfare gain from improving
environmental quality. Fortunately, this reduction in benefit is mitigated by a
correspondingly lower tax-interaction effect because the environmental tax
moves the tax burden from labour to the fixed factor.

Traditional models suggest a second-best optimal environmental tax should be set below the Pigouvian (first-best) tax. Bento & Jacobsen found to the contrary
that an optimal environmental tax is greater than the Pigouvian tax. In contrast
to other studies, the authors also found very high cost savings from using
revenue neutral emissions taxes instead of non-auctioned pollution emissions
permits.

Experience with environmental taxes in Germany

Beuermann & Santarius (2006) find that five years after Germany introduced
ecological tax reform in 1999, Germans still regarded environmental policy and
economic policy as separate issues. There is both massive criticism and
unconditional support for environmental policies. As with all fiscally neutral
environmental taxes, the two virtuous macroeconomic effects were meant to
orient production towards energy efficiency and innovation and create
additional jobs due to reduced labour costs. Despite public concern that long
term unemployment was increasing, Germans neither understood nor
welcomed the linking of environmental taxes with employment objectives.
Germans' general distrust of politics and perceived information asymmetries led the coalition of the Social Democrats and Greens to stop increasing environmental tax rates beyond 2003.

Experience with environmental taxes in France

Deroubaix & Leveque (2006) investigated the political difficulty of introducing controversial environmental policy instruments. Using focus groups, they
sought to understand France's fuel revolt in 2000, which led the Constitutional
Court of France to declare the French Government's Ecological/ Environmental
Tax Reform (ETR) project unconstitutional in December of that year. The ETR
project had commenced in 1993 following a decade of failure in seeking to
limit industry's greenhouse gas emissions through voluntary agreements.

The ETR had sought to be fiscally neutral by recycling taxes on labour to taxes
on pollution. It had also sought to achieve the double dividend proposed by
Hourcade & Robinson (1996).

Deroubaix & Leveque found that the government did not disseminate
information and develop consensus to build acceptance among key groups.
They also found that the distributive effects of a tax such as the ETR led to
different perceptions in different groups. In unexpected outcomes, businesses
that received a net benefit from ETR were the ones not exposed to
environmental issues. These businesses remained uninformed and relatively
ambivalent. However, the industries that were required to pay the tax
strenuously objected to ETR. These were energy intensive companies and
those companies with small highly skilled workforces that would not benefit from lower labour taxes. The issue of whether or not to tax the energy used in industrial processes was never resolved and remained a highly contentious issue among the Government's own policy makers.

Deroubaix & Leveque's key finding on the acceptance of environmental tax economic redistribution was that (p.948):

There is outright hostility to economic instruments among the general public (independent of class and geography). The link
between environmental protection and employment incorporated in
the ETR concept was incomprehensible for the focus groups
participants …. Under these conditions, the quest for social
acceptability appears a false problem. The social acceptability issue
only makes sense in a rational choice paradigm, taking for granted
that every agent has an obvious perception of the signal price. On
the contrary, the analysis of policy implementation process shows
that there is no optimal tax design. There was no solution to the
paradox of the political feasibility of the tax.

Carbon leakage

Hill (2001) notes that environmental taxes encourage companies to relocate industrial facilities from Kyoto Protocol Annex 1 (developed) countries to non-Annex 1 (developing) countries. World Trade Organisation (WTO) rules would
then prevent countries using tariffs to protect their own industries against the
imports from these new offshore, lower cost, profligate greenhouse gas
producers. Hill also notes that it is uncertain whether the Kyoto Protocol could
provide access to the exceptions allowed for multilateral agreements.

“Carbon leakage” is the migration of carbon intensive production from a mitigating country to surrounding countries. It is calculated by taking the
increase in emissions in the surrounding countries divided by the reduction in
the emissions of the mitigating country. The IPCC's Third Assessment Report
(2001) suggested leakage rates of 5% to 20%, although its Second Assessment
Report (1995) proposed a wider range of 0% to 70%.
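The leakage rate described above can be written as (notation assumed here):

    \[ \text{leakage rate} \;=\; \frac{\Delta E^{\,rise}_{\,non\text{-}mitigating}}{\Delta E^{\,cut}_{\,mitigating}} \]

For instance, if a mitigating country cuts emissions by 100 Mt CO2 while production shifts raise surrounding countries' emissions by 15 Mt CO2, the leakage rate is 15%, within the Third Assessment Report's 5% to 20% range (the numbers are illustrative only).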

Environmental regulations create higher production costs for source emitters
of CO2 and other greenhouse gases. The usual assumption is that these higher
production costs will flow through into downstream producers in the form of
higher prices. However, the source emitters or the downstream direct and
indirect emitters may not be able to pass on price increases. This provides the
incentive for large industries to relocate to other jurisdictions where carbon
pollution is unregulated. This is called “carbon leakage migration.”

Asturias region of Spain

Arguelles, Benavides & Junquera (2006) studied the Asturias region of Spain
where Arcelor produces iron and steel using energy from coal-fired generation.
Low cost, coal-fired electricity has traditionally secured the region's position as
a low cost producer in the global iron and steel industry, which has been
suffering from competitiveness problems for many years.

A large part of Arcelor's production in Asturias is exported to the rest of Spain and to international markets. There is considerable concern that buyers could
turn to other sources or Arcelor could relocate its plants if costs increase in
Asturias. This could arise by the requirement to purchase emissions permits or
introduce emissions control measures when firms in non-signatory countries do
not face similar imposts; or if other regions or countries develop cheaper
electrical energy, for example, from gas-fired or renewable generation.

It is quite apparent that environmental policy has the potential to change the
comparative advantages of regions and nations. This will favour some nations
and industries and reduce, perhaps fatally, the competitiveness of others.
Governments are unable to stand in the way of very significant pressures such
as companies relocating internationally for lower cost production.

Where changes are not in the global or international theatre, national government policies can be effective in providing special treatment for sectors
that are impacted at a regional level. The principle embodied in nations
voluntarily adopting environmental policy is that companies should not be
subjected to undue economic or social hardship. So called “horizontal-actions”
are envisaged to provide special treatment where jobs are impacted.

Arguelles, Benavides & Junquera found that sector accountability for CO2 emissions is radically modified if Input-Output analysis is applied to allocate responsibility for direct, indirect and induced emissions. At the local scale, the authors found the anomaly that certain sectors will bear the economic costs of CO2 emissions while other sectors will be exempt, even though these downstream sectors use outputs from the sectors that are most affected by the regulations.
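A minimal sketch of the Input-Output reallocation idea (the two-sector coefficients below are invented purely for illustration): direct emission intensities are converted into total (direct plus indirect) intensities with the Leontief inverse, so that responsibility follows final demand rather than the smokestack:

    import numpy as np

    # Technical coefficients A[i, j]: input from sector i per unit output of sector j.
    A = np.array([[0.1, 0.3],     # power used by (power, steel)
                  [0.0, 0.1]])    # steel used by (power, steel)
    f = np.array([0.9, 0.2])      # direct emissions per unit output (t CO2)
    y = np.array([10.0, 5.0])     # final demand met by each sector

    L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse (I - A)^-1
    total_intensity = f @ L            # direct + indirect emission intensity
    print(total_intensity * y)         # emissions allocated to final demand, not production

Under this allocation, steel's final demand carries part of the power sector's smokestack emissions, which is the kind of reallocation the authors describe.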

Carbon leakage from the European Union

Barker et al. (2007) used computable general equilibrium (CGE) models of historical data in the decade to 2005 from six European Union member states
to undertake an ex-post study of emissions relocation. The authors found that
the member states probably recorded CO2 reductions but that output does not
appear to be relocating away except in highly competitive export driven
markets such as basic metals industries of Germany and the United Kingdom.
Otherwise, local production is favoured by transport costs and local market
conditions and customised products. Leakage is minor "and in some cases
negative". This is attributed to low energy taxes not significantly impacting
costs.

The authors confirm the empirical analysis in Sijm et al. (2004, Section 5.2.2, p.20), which found that environmental policies have, to date, not been influential motives for relocation of energy-intensive process investments like iron & steel plants to developing countries. Instead, factors like growth in regional demand and wage levels have been more important.

European Union targets: Kyoto and post 2012

On 31 May 2002, the European Union and its then fifteen member countries ratified the Kyoto Protocol. The European Union set a “20/20/20” target of reducing emissions by at least 20% by 2020 (compared to 1990 levels).xxi In addition, the European Union committed to increasing the proportion of energy from renewable sources (solar, wind, hydro and nuclear) from 8.5% to 20% and reducing energy consumption by 20%. The 2050 target for CO2 was a reduction of between 60% and 80% (compared to 1990 levels).

In December 2002, the European Union introduced an emissions trading
system with quotas across the six industries of energy, steel, cement, glass,
brick making and paper/cardboard. Chastened by the failure of French and
German environmental taxes, the European Union introduced a differential
system where emitters received emissions permits free of charge and could
sell surplus permits to those emitters which require additional permits. This led to widespread profiteering, with companies such as France's EdF and Germany's E.On passing on the price of permits to consumers regardless of the fact that the companies had received the permits for free.

A second major error was allowing too many permits in the earliest phase of the scheme, from 2005 to 2007. This led to a glut. The price of permits rose to €30 ($42) in May 2006 before crashing to 2 euro-cents (3¢) by the end of 2007. This eliminated all incentive for companies to ameliorate or abate their emissions. As a result, the policy miserably failed to achieve any of its objectives or reduce emissions over the first three-year phase. The only winners in the emissions trading scheme were banks such as Barclays and Goldman Sachs that traded CO2 permits in a market estimated at €62.7 billion (US$90 billion) in 2008 (Scott 2009).

A third major error in the scheme was to create a double cost for industries
such as metal makers, chemical plants and paper mills. These industries have
their own carbon quotas and in addition were forced to pay higher power
prices.

In November 2008, the European Union (which had now expanded to twenty-
seven countries) extended the same targets to post 2012, when the Kyoto
Protocol no longer applies. A major feature in achieving the “20/20/20”
objectives in 2013 and thereafter will be that the differential system where
emissions permits are granted for free becomes an absolute system with all
emissions permits auctioned.

The effect of auctioning permits on unemployment and carbon leakage (the loss of industries to less regulated countries) remains highly contentious. In order to achieve consensus, the European Union acceded to concessional exemptions demanded by Germany and Italy for their steel, chemicals, cement, aluminium and automobile manufacturing industries.

However, all industries in the European Union will need to reduce emissions
each year. Polluting power producers will receive subsidies and firms that face
international competition, which is estimated to be more than 90% of
European Union firms, will receive free emissions permits until 2020 if their
costs rise more than 5% due to buying permits.

The nine Eastern European countries that threatened to veto the post-Kyoto
“20/20/20” deal because of their highly polluting coal and lignite-fired power
stations were assuaged by free permits. When auctions commence in 2013,
countries with per capita income under half of the European Union average
and with more than 33% of their power from coal fired plants will receive free
permits equal to 70% of their average annual emissions from 2005-2007. This
will decline to zero by 2020.

The United Kingdom was compensated with an extra €3 billion for carbon
capture and storage development, increasing the total subsidy to €9 billion.

The European Union agreement also allows countries to earn emissions credits
by clean development mechanisms (CDMs), which are projects for emissions
amelioration in developing countries. This remains a controversial provision in
the lead-up to the UNFCCC's December 2009 meeting in Copenhagen.

The European Union also hopes that America will join with Europe to create a
global carbon market. As emissions permits in developing countries are
expected to be cheaper than in industrialised countries, European Union
members will be able to buy a proportion of their permits from foreign
countries. Those that meet the power and per capita income test will be able to buy a higher proportion of permits.

Non-Kyoto Protocol action by governments


Over the past two years, the world's largest polluters, America, China and India, which are not bound by Kyoto Protocol emissions targets, have discussed a new protocol to be agreed at the United Nations climate meeting in Copenhagen in December 2009.

APEC Sydney Conference


The indifference of politically conservative Anglo-American nations to climate
change was evident at the Asia Pacific Economic Forum (APEC) Conference in
Sydney in September 2007. The 21 member economies merely agreed to a non-
binding, so-called “aspirational goal” of slowing, stopping and eventually
reversing greenhouse gas emissions. They put aside Kyoto Protocol targets to
cut greenhouse gas emissions and merely undertook to plant more trees and
increase energy efficiency by one quarter between 2005 and 2030.

America's view, espoused at the Conference, was that this declaration represents the emerging parameters of a climate change arrangement to become effective when the Kyoto Protocol expires in 2012.

In placing the best spin on this lack-lustre outcome of APEC, the host nation's
then Prime Minister, John Howard, noted that it marked the first time that
large polluting countries such as the United States, Russia and China had
agreed that they each have to make commitments to stop human activity from
causing dangerous changes to the climate. Commentators noted that the
wealthy Anglo-American nations regarded climate change as a "hundred year
agenda" and so there was no imperative to do anything immediately.

The Sydney conference also nimbly sidestepped the growing divide between
wealthy and developing nations over the Kyoto Protocol. Wealthy nations like
the USA and Australia had not ratified the Kyoto Protocol, claiming possible
adverse effects on economic and social growth. Most wealthy countries that
did sign, such as Canada, have failed to meet their targets.

Developing countries such as China, Indonesia and poorer APEC members favour the Kyoto Protocol because it holds richer countries to a higher standard for minimising greenhouse gases and exempts developing countries
from emissions targets. Several smaller developing countries at the APEC
Forum reacted angrily to developed nation bullying to endorse a declaration
that would actually undermine the Kyoto Protocol (DeSouza 2007). Papua New Guinea's Prime Minister Sir Michael Somare told fellow leaders “While we
recognise that Kyoto Protocol has its flaws, it needs to be improved and
strengthened - not weakened.”

China remained strongly of the view that developing nations have a lesser role
to play and should be allowed to get on with economic growth and improve
lifestyle to Western standards. It has adopted targets, albeit rather low and
unclear, to reduce the energy intensity of economic activity by 20% by 2010
(compared to 2005 levels) and to sharply increase the contribution by
renewable energy to total energy supply.

China's President Hu Jintao chided developed nations over the need for them
to strictly abide by their targets under Kyoto to compensate for years of
booming economic activity that has produced copious CO2 emissions. Hu said
industrialised countries have polluted for longer and thus must take the lead in
cutting emissions and providing money and technology to help developing
countries clean up. He reminded the wealthy countries that (Yeoh & Gosh
2007) “In tackling climate change, helping others is helping oneself.”

Although the Anglo-American nations did not have the ears to hear, Hu's theme increasingly haunted them for almost another two years. America remained intransigent in its dogged insistence that China adopt binding targets equal to America's. The issue finally boiled over at the UNFCCC Bonn meeting in June 2009, resulting in American negotiators desperately seeking a face-saving solution to appease their own Senate.

Group of 8 Hokkaido meeting


In July 2008 the Group of 8 met in Hokkaido, Japan. The countries continued
their negotiating pressure on China. President George W. Bush dominated the
Group's communiqué, making it an agreement to a target of 50% reduction of
greenhouse gases by 2050 (base unstated) on the condition that China, India
and other developing nations participated (Stolberg 2008).

Environmentalists had hoped for an interim target of 25% reduction by 2020 (compared to the standard 1990 levels) to provide incentive for clean technologies. In the event, there was considerable disquiet amongst environmentalists that the lacklustre outcome signalled a lack of bona fide
intentions amongst developed nations. South Africa’s minister of
environmental affairs, Marthinus van Schalkwyk, observed “Without short-
term targets the long-term goal is an empty slogan.”

President George W. Bush also suggested a series of meetings with a group of major emitters he dubbed the “Outreach Five”: China, India, Brazil, South Africa and Mexico. The label Outreach Five became deprecated almost as soon as it was first mentioned.

China and other developing nations at a separate but concurrent meeting in Hokkaido again re-emphasised that America and other developed countries would need to first solve the problem they had created and then contribute capital to the developing nations.

The G8 nations, together with the so-called Outreach Five plus South Korea, Indonesia and Australia, subsequently issued a statement suggesting, rather self-evidently, that developed countries should share the biggest portion of the climate change burden.

United Nations 2008 Poznan climate change conference
During his Presidential election campaign, Senator Obama, outlined his belief
that: “None of the numbers on the table - the EU's 20% by 2020, the US return
to 1990 levels, the Chinese pledge of a 40% reduction in carbon intensity [the
amount of carbon produced per unit of Gross Domestic Product] - was enough
to stave off dangerous climate change.”

Consistent with this position, in November 2008 Senator John Kerry brought UN Secretary-General Ban Ki-moon the message from then President-elect Barack Obama that he would personally lead coordinated global action in Copenhagen.xxii Kerry also noted his personal view that “Without a new global
deal temperatures could be between 3°C and 5°C higher by mid-century than
they are now.”

With America intransigent on committing to any targets, the European Union,
China and India also declined to consider targets. As a result, the Poznan
conference became another vacuum in policy development. The leaders merely
deferred commitments until the December 2009 meeting scheduled for Copenhagen.

Nevertheless, upon learning of the contemporaneous European Union ratification of its “20/20/20” objectives for the period beyond 2012, when Kyoto will have expired, delegates expressed relief and immediately released funds to
assist poor nations protect themselves from the impact of climate change. The
Adaptation Fund is expected to provide US$300 million per year by 2012,
through a 2% levy on United Nations' green investments in developing nations.

At the Poznan conference, German Watch, a German environmental group, released the 2009 Climate Change Performance Index of 57 countries covering 90% of energy-related CO2 emissions (Burck et al. 2008). It found that no country engaged sufficiently in the battle against a 2°C temperature rise to win one of the top three positions. Sweden, the 2007 winner, ranked 4th, followed by Germany and France. The worst three countries were America, ranking 58th, followed by Canada and Saudi Arabia. China was 51st out of 60, though rising
due to its expanded environmental initiatives. In 2007, Australia ranked in the
bottom three.

3.4 American climate policy development


It is well known that President Bill Clinton never sent a Kyoto Climate Bill to
Congress for ratification due to its certain defeat and President George W.
Bush was openly contemptuous of the Kyoto Protocol.

Byrd-Hagel resolution


Jacoby & Reiner (2001, p.300) note that in advance of the 1997 UNFCCC Kyoto
meeting, the U.S. Senate had already passed, by a majority of 95-0, the non-
binding Byrd-Hagel resolution to oppose any climate treaty that would either
harm the American economy or omit matching commitments from developing
countries. The authors note that (pp 303-4): “The US Senate acts as a high
barrier to ratification of international treaties: not only is a two-thirds vote
required, but Senate rules and practices give blocking power to small coalitions (or even key individuals) ... the most visible Senate critics of Kyoto,
Senators Byrd and Hagel, a conservative Democrat and Republican respected
in foreign affairs, represent precisely those views that will have to be won over
to reach the two-thirds majority.”

William Nordhaus
William Nordhaus, a senior policy adviser to the American Government over
many years, takes an economist's approach to climate policy in his book A
Question of Balance: Weighing the Options on Global Warming Policies (2008).
He assumes the science of climate change and its long term consequences as given and focuses only on policies of resource allocation that maximise the financial benefit to the planet.

Nordhaus' model is called the Dynamic Integrated Model of Climate and Economy (DICE). It has two parts: the first part calculates the effect of reduced
emissions in ameliorating climate damage, and the second calculates the net
value of gains and losses to the world economy over 100 and 200 years,
discounted at a rate of 4%xxiii.
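
The weight that this discount rate places on the distant future largely drives the results below. A minimal sketch of the compound-interest arithmetic (illustrative only, not an output of DICE):

    # Present value of $1 of climate damage occurring 100 or 200 years
    # from now, discounted at Nordhaus' 4% per annum.
    for years in (100, 200):
        factor = 1.04 ** -years
        print(f"PV of $1 of damage in {years} years: ${factor:.4f}")

    # At 4%, damage a century away carries about 2 cents of present weight,
    # and damage two centuries away about 0.04 cents. This is why the choice
    # of discount rate dominates disputes such as Stern versus Nordhaus.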

Dyson (2008) summarises the six major global warming policy alternatives
examined by Nordhaus:

• “Business as usual”, which results in damage to the environment of US$23 trillion by 2100. US$23 trillion is approximately $70,000 per
capita of US population. This is the base case against which all other
policies are compared
• “Tax worldwide carbon emissions” at a rate gradually increasing with
time to provide the maximum aggregate economic gain. This is the
optimal policy according to Nordhaus. The net value of this over the
base case is US$3 trillion
• "Continue the Kyoto Protocol" with or without American participation.
The net value of this over the base case is US$1 trillion with American
participation and zero without
• "Sir Nicholas Stern's policy", which is Kyoto plus additional strict limits
on emissions. The net value of this over the base case is negative US$15 trillion. In other words, this case has an additional cost over the base
case of $15 trillion
• "Al Gore policy" of reducing emissions gradually to 10% of current levels
by 2050. The net value of this over the base case is negative US$21
trillion
• "Low cost backstop technology", which is a hypothetical atmosphere
scrubbing technology to sequester the Keeling carbon wiggle, such as
pyrolysis or genetically engineered carbon-eating trees, or a low-cost
solar or geothermal energy technology that at present might only be
imagined in the realm of science fiction. It might be noted that the IPCC
does not give any credence to such highly speculative miracle-
technologies. The net value of a low-cost backstop technological
breakthrough over the base case is US$17 trillion, which is almost the
equivalent of a free solution to global warming.

Nordhaus concluded that the Stern and Gore policies would be prohibitively
expensive, while the "low-cost backstop technology" is enormously attractive.
Other policies like taxing carbon emissions and continuing the Kyoto Protocol
(with or without American involvement) are similar to the base case of
"business as usual".

Based on these findings, Nordhaus strongly recommended against American ratification of the Kyoto Protocol and that America should actively avoid all
ambitious proposals such as those of Stern and Gore. The way ahead,
according to Nordhaus, was to vigorously pursue low-cost backstop
technologies and, as a safety-net for the planet, seek an international treaty
binding all nations to a progressively more expensive carbon tax.

Environmental taxes on imports


In September 2009, French President Sarkozy noted that France and Germany
were preparing a “border adjustment tax” on the assessed greenhouse gas
pollution content of goods imported from countries with climate control
measures that were inferior to the European Union (Butler 2009). India had
earlier observed that it could respond to any such duties with a 99% tax on
goods imported from countries that had created the CO2 pollution problem.

While America has been tardy in setting greenhouse gas pollution reduction
targets for industry, producers themselves had sought Government protection
against “carbon leakage”: the loss of emissions intensive industry to overseas locations and the resulting import of products formerly manufactured domestically.

In February 2008, the Environment and Public Works Committee passed a bill
called “America's Climate Security Act (S. 2191)” requiring importers of
emissions intensive goods such as steel and aluminium to provide the
Government with emissions credits.

The bill was supported by American Electric Power, together with the
International Brotherhood of Electrical Workers. Steel producer Nucor Corp. also proposed that a tariff be imposed on goods imported from countries with
no carbon cap.

US Congress House Energy and Air Quality Subcommittee
In April 2008 the Chairman of the U.S. Congress House Energy and Air Quality
Subcommittee, Richard Boucher, confirmed that the Committee was
developing legislation to reduce carbon emissions 60% to 80% by 2050
(compared to 2008 levels) (Boucher 2008). However, the key feature of this
target was that there would be no American action until 2025, followed by higher levels of emissions amelioration if and only if carbon capture and
storage (CCS) had become feasible.

Boucher noted that three carbon capture technologies were in development in America: integrated gasification combined cycle (IGCC), chilled ammonia
carbon capture application and combustion of coal in an oxygen rich
environment. He suggested that one of these could be commercially available
by 2025, although the integrity of storage locations for carbon dioxide
sequestration would need to be mapped and monitored.

Following these unsatisfactory revelations, in June 2008 environmentalists requested the Senate to address a proposal to cut greenhouse gases by 70% by 2050 (compared to 1990 levels). However, the Senate declined to address the
matter.

President Obama's emergent policy


Senator Obama's November 2008 election policies for climate change were:

• to reduce CO2 emissions by 80% by 2050 (compared to 1990 levels)
• to introduce a cap-and-trade scheme that would cap American CO2
emissions and require companies to buy permits to pollute at a specified
carbon price
• to invest US$150bn in renewable energy over ten years as part of plan
to reduce American dependence on foreign oil, tackle America's carbon
emissions and create jobs
• to increase production tax credits for the wind industry from one year to
seven years

In dealing with the global financial crisis that began shortly after his election,
President Obama included some of these climate change policies in the
American Clean Energy and Security Act of 2009. This was not universally well
received. Notwithstanding calls for a 40% cut in emissions by 2020 (compared
to 1990 levels), America's new clean energy project provides only for a
reduction in greenhouse emissions of between 6% and 7%. European Union
countries expressed dismay at this perceived lack of American leadership in
the lead-up to the United Nation's December 2009 climate change meeting in
Copenhagen.

President Obama later defended the American Clean Energy and Security Act
in a speech at Georgetown University (Obama 2009):

The investments we made in the Recovery Act will double this nation's supply of renewable energy in the next three years. And we
are putting Americans to work making our homes and buildings
more efficient so that we can save billions on our energy bills and
grow our economy at the same time …. But the only way to truly
spark this transformation is through a gradual, market-based cap on
carbon pollution, so that clean energy is the profitable kind of energy. Some have argued that we shouldn't attempt such a
transition until the economy recovers, and they are right that we
have to take the costs of transition into account. But we can no
longer delay putting a framework for a clean energy economy in
place. If businesses and entrepreneurs know today that we are
closing this carbon pollution loophole, they will start investing in
clean energy now. And pretty soon, we'll see more companies
constructing solar panels, and workers building wind turbines, and
car companies manufacturing fuel-efficient cars. Investors will put
some money into a new energy technology, and a small business will
open to start selling it. That's how we can grow this economy,
enhance our security, and protect our planet at the same time.

American Clean Air Act


In 2007, the U.S. Supreme Court ruled that Congress had intended the Clean
Air Act to cover greenhouse pollution and that the Environmental Protection
Agency (EPA) had a mandatory obligation to regulate greenhouse gas pollution.
However, the Bush Administration opposed this and stripped the EPA of this obligation.

President Obama subsequently expressed the opinion that it would be better for Congress to prepare custom regulation for emitters like electricity
generators and factories. Nevertheless, in a move that was seen as the first
step of the Obama Administration in taking global warming seriously and
building its credibility in preparation for the United Nations' Copenhagen
meeting in December 2009, President Obama released the EPA from its
previous administrative restraints.

On 17 April 2009, this resulted in the EPA issuing a report labelling CO2 and
five other greenhouse gases a significant threat to public health and therefore
subject to its regulation under the Clean Air Act.

American fuel standards


Despite a policy vacuum at the Federal level during the Bush Administration,
many American States adopted their own targets. For example, California legislated to return State emissions to the 1990 level by 2020 with an 80%
reduction by 2050 (compared to 1990 levels). California also required a
reduction in new vehicle emissions of 14% by 2011 and 30% by 2016
(compared to 2008). However, these regulations were immediately blocked by
the Bush Administration. On 3 July 2009, President Obama overturned this
situation with the Environmental Protection Agency granting California the
right to enforce its own standards.

Earlier, in May 2009, President Obama had introduced America's first ever
measure to reduce greenhouse gases by placing an obligation on automobile
manufacturers to increase car and light truck average fuel efficiency by 40%
from 25 miles per gallon to 35.5 miles per gallon by 2016 and to decrease
greenhouse gas emissions by approximately one-third by 2016. The measures
will commence in 2012 and be overseen by the Environmental Protection Agency and the Department of Transportation.

President Obama's initiative was widely seen as reversing the previous Administration's indifference to America's extraordinarily high level of oil
imports and its rebuff of California’s clean car program.

Washington Climate Summit April 2009


At the Washington Climate Summit, China and India reiterated their position
that industrialised nations needed to lead with major CO2 emissions reductions
or developing countries would not commit to any binding reductions.

America and Australia were discussing such small targets, in the order of 5% to 7% (compared to 1990 levels), that they could hardly criticise this policy stance
by developing countries. It was left to Russia, Japan and, ironically because of
its widely criticised duplicity, Canada to object that China and India were
already among the world's top emitters and should engage with the issue.

American recognition of domestic climate change impacts
As noted in the section on Climate Change Sceptics (above), in June 2009 the
White House Office of Science and Technology Policy and the United States Global Change Research Program issued a comprehensive report on how
global climate change was impacting America (Karl et al. 2009). Thirteen
Federal agencies contributed to the report.

This is notable as it was the first time American policy makers had moved the debate from hypothetical scientific confidence levels to declare, unequivocally, that climate change was already impacting America across food production, forests, coastlines and floodplains, water and energy supplies, transportation and human health. The principal editor of the report, Dr. Thomas Karl, emphasised the imperative for Americans to act quickly to reduce emissions or face severe damages and adaptation costs: “Our destiny is really in our hands ….
The size of those impacts is significantly smaller with appropriate controls.”

Waxman-Markey Bill
On 26 June 2009, the U.S. House of Representatives narrowly passed the
Waxman-Markeyxxiv Bill, called the “American Clean Energy and Security Act,
H.R. 2454” by 219 votes to 212, with 44 Democrats voting against it and 8
Republicans voting in favour. Democrats controlled 59% of the House, but the Bill secured only 51% of the vote. Had every Democrat voted for the Bill, it would have achieved 61% in favour.

The Waxman-Markey Bill includes a cap and trade system to reduce emissions 17% by 2020 (compared to the 2005 level), 42% by 2030 and 83% by 2050. It also has the aim of ending dependence on foreign oil, increasing the use of renewable energy, generating new clean-energy jobs and technology and setting efficiency
standards for buildings, lighting and industrial facilities.

The proposed reduction of 17% by 2020 (compared to 2005) is equivalent to only 4% by 2020 (compared to 1990). Although permits will be auctioned, it is
proposed that about 85% of permits would initially be given away free to
industry. Companies that need additional emissions permits can meet their
obligation by purchasing additional permits on an emissions exchange. As
discussed above, the European Union's similar differential approach of giving
away permits is now seen as a major error.
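
The base-year arithmetic behind this equivalence is straightforward. A hedged sketch (the 2005-to-1990 ratio below, roughly 1.16 for American emissions, is an assumption used for illustration only, not a figure from the Bill):

    # Converting a reduction target quoted against a 2005 base to the
    # conventional 1990 base.
    ratio_2005_to_1990 = 1.16    # assumed: 2005 emissions / 1990 emissions
    cut_vs_2005 = 0.17           # Waxman-Markey 2020 target
    remaining_vs_1990 = (1 - cut_vs_2005) * ratio_2005_to_1990
    print(f"Equivalent cut relative to 1990: {1 - remaining_vs_1990:.1%}")
    # prints roughly 3.7%, i.e. the "only 4%" figure quoted above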

The Bill also proposes a requirement that 20% of electricity be generated from
wind, solar and other renewable sources by 2020, including 5% from better
energy efficiency.

Although many manufacturers and generators supported the Bill, Republican critics claimed the Bill was “the biggest energy tax in history” and that it
would lead to major increases in energy prices and the loss of millions of jobs
(Broder 2009a). Democrats were quite nervous about this Bill because in 1993
President Clinton's proposed tax on all forms of energy was defeated and
widely seen as a factor in the Democrats subsequently losing control of Congress.

Environmental groups Greenpeace, Friends of the Earth and Public Citizen are
critical of the Bill because it allows too many free emissions permits for
polluting industries. They also claim it is risky because it relies on hypothetical
and perhaps unlikely reductions in emissions by developing countries.

An alternative Bill sponsored by Republicans, in the tradition of the former Bush Administration's “drill here, drill now” approach, proposes coastal
drilling on all America's coastline, including the Arctic Coastal Plain. There is
neither a cap on carbon, nor a mandate for electricity from renewable sources.
The only concession to the environment added “insult to injury”: investment in renewable energy research would be funded from future Arctic Coastal Plain oil and gas royalties.

Senate amendments to the Waxman-Markey Bill


Following its acceptance in the House of Representatives, the Waxman-Markey
Bill needs to be passed in the Senate. On 1 October 2009 Senators Barbara
Boxer and John Kerry introduced an amended Bill for the Clean Jobs and
American Power Act.

In matters of international treaties, the United States Constitution requires two-thirds in favour (Article II, Section II): “[The President] shall have Power,
by and with the Advice and Consent of the Senate, to make Treaties, provided
two thirds of the Senators present concur...” In October 2009, Democrats
controlled 59 of the 60 votes required for the Bill to pass.

The Boxer-Kerry Senate Bill differs from the Waxman-Markey Bill in three
ways. Firstly, it requires a 20% cut in emissions by 2020 (compared to 2005).
The Waxman-Markey Bill requires a cut of only 17% by 2020. This compares to
the cut of 14% originally proposed by President Obama. However, emissions
are already 8.8% lower than in 2005 due to the American recession. An 83%
cut by 2050 is the same in each case.

Secondly, while both Bills include an economy-wide emissions cap and trade
system, there is a major difference in the contentious issue of how emission
allowances will be distributed. The Waxman-Markey Bill provides for 85% of
emissions permits to be issued free of charge. However, the Senate bill does
not address the matter and leaves negotiations for later.

Lastly, the Waxman-Markey Bill restrains the Environmental Protection Agency (EPA) from regulating greenhouse gases under the existing Clean Air
Act. In contrast, the Senate Bill does not restrict the EPA.

America's engagement at the UNFCCC Bonn meeting


Publicly, America and China had hidden behind the other's intransigence on
climate change. Neither ratified the Kyoto Protocol. However, in July 2007,
Chinese and Bush Administration officials began secret negotiations for a
common approach.xxv Since that time, each nation has encountered domestic
economic and employment issues that have led to pressures not to deal with greenhouse gas pollution.

In June 2009, the UNFCCC's meeting in Bonn of 182 countries with 4,300
participants debated for the first time a draft Copenhagen Protocol to succeed
the Kyoto Protocol. The December 2009 meeting in Copenhagen (CoP15) is the culmination of the 2007 Bali Road Map.

This Bonn meeting was the first such conference in which America had fully
engaged. Although a 200-page document was compiled, the countries were
unable to reach agreement on world action to ameliorate climate change. At
the end of the meeting, the exasperated UNFCCC Executive Secretary Yvo de
Boer concluded that a worldwide anti-climate change pact was "physically

209
impossible." However, there was a substantial step forward in the grim
acknowledgement by all parties that an effective policy was urgently needed.

The United Nations presented three draft protocols. The draft promoted by
France and Germany set minimum reductions necessary to cope with climate
change as between 25% and 40% by 2020 (compared to 1990 levels) and
between 50% and 85% by 2050 (compared to 1990 levels). It was proposed
that countries such as America that may be unable to react sufficiently quickly
to meet their domestic targets by 2020 would be able to satisfy their
obligations by CDMs through financing sustainable activities in developing
nations. In addition, the draft provided for levies of approximately US$100
billion per annum for structural adjustment and protection of vulnerable
communities, and additional compensation for historical emissions.

Other draft objectives included peaking emissions by 2015 and then reducing emissions by 50% by 2050 (compared to 1990 levels) in order to limit temperature rise to 2°C, and reducing emissions to 2 tonnes per capita. This compares with 2006 per capita emissions of 19.78 tonnes in America, 7.99 tonnes in Europe and 4.58 tonnes in China.xxvi Another objective sought a
reduction in atmospheric CO2 from the current level of 385 ppm to 350 ppm.

At the start of the Bonn meeting, the Inter-Academy Panel, representing the
science academies of seventy countries including those of Australia, Britain,
France, Japan and America, implored world governments to take action to avoid
an underwater catastrophe from ocean acidification: “To avoid substantial
damage to ocean ecosystems, deep and rapid reductions of carbon dioxide
emissions of at least 50% (below 1990 levels) by 2050, and much more
thereafter, are needed.”xxvii However, all major countries including Japan and
Russia persistently resisted any commitment while they awaited a resolution
between America and China.

During the conference the world's 6th biggest emitter, Japan, announced an 8%
domestic reduction by 2020 (compared to 1990 levels)xxviii, which is 1% deeper
than its Kyoto Protocol undertaking.xxix Previously Japan had signalled a wide
range from a 4% increase (compared to 1990 levels) to a 25% decrease.
However, the UNFCCC and environmentalists were aghast at Japan's meagre new 8% target, with UNFCCC Executive Secretary Yvo de Boer venting his
frustration at the lacklustre support from developed countries: “For the first
time in my two and a half years in this job, I don't know what to say. We're still
a long way from the ambitious emission reduction scenarios that are a beacon
for the world”.xxx

In his contemporaneous visit to Dresden, President Obama had described the climate change situation as a “potentially cataclysmic disaster”:

I'm actually more optimistic than I was about America being able to
take leadership on this issue, joining Europe, which over the last
several years has been ahead of us on this issue …. Ultimately the
world is going to need targets that it can meet. It can't be general,
vague approaches …. We're going to have to make some tough
decisions and take concrete actions if we are going to deal with a
potentially cataclysmic disaster …. Unless the United States and
Europe, with our large carbon footprints, per capita carbon
footprints, are willing to take some decisive steps, it's going to be
very difficult for us to persuade countries that on a per capita basis
at least are still much less wealthy, like China or India, to take the
steps that they're going to need to take …. So we are very
committed to working together and hopeful that we can arrive in
Copenhagen having displayed that commitment in concrete ways.

However, American support for a new Copenhagen Protocol remains somewhat illusory because of the hurdle that Senate ratification of an international treaty
requires a two-thirds vote in support of the treaty. President Bill Clinton had
been unable to secure this level of support for the Kyoto Protocol.

Furthermore, the risks of failure are as considerable as President Obama's objectives are courageous; America had not yet done anything at all to reduce
emissions (and therefore, by definition, far less than China's energy efficiency
initiatives); the Republican Party, conservative Democrats and influential
climate change sceptics remained implacably opposed to any actions on
emissions; the cost of any climate change policy appears too enormous for
America; this comes at a time when America has huge deficits due to poor
economic management, the 2008-9 Global Financial Crisis, bank and industry bailouts and employment and health exigencies; and lastly, most Americans
remain highly sceptical that they could reduce emissions by even the United
Nations minimum goal of 25% by 2020 (compared to 1990 levels).

Nevertheless, America and China, who between them created 47% of world greenhouse gas emissions in 2009, recognised they were the most important participants in the approaching UNFCCC meeting in Copenhagen. In Bonn, they engaged in complex negotiations across the canvas of their financial, economic and climate change relationships. However, each country remained highly suspicious of the other's bona fide intentions, with both seeking an approach analogous to mutually assured nuclear disarmament: for example, one based on measurable, verifiable and reportable reductions and tit-for-tat retaliation for non-agreement or non-compliance (Broder & Ansfield 2009).

The Bonn meeting was also notable for its focus on historical accountability for
emissions, which gave rise to the dual concepts of current and historical
accountability for climate change debt. The first concept is well understood: it
is the liability of industrialised nations to redress the harm to developing
countries from changing climate patterns due to both the historical levels of
emissions and continuing emissions from industrialised countries. Everyone
expects that developing countries will incur significant expenses in adapting to
the physical effects of climate change.

However, the second notion of industrialised nation debt has only recently
become clear with the increased negotiating power of China and India. It is
reasoned that industrialised nations got to the “cookie jar” first and plundered
it, leaving developing nations with only crumbs.

The illustration below shows the historical accountability for emissions of 48 countries comprising 95% of accumulated emissions (EarthTrends 2007). Of the total 1066 Gt CO2, America accounts for 29.5%, Russia and Germany each 8.4%, and the United Kingdom 5.2%.

[Illustration 11: Historical accountability for emissions 1900-2004, by country. Horizontal bar chart of accumulated emissions (Gt CO2) for 48 countries, ranked from Iraq (smallest) to the United States (largest). Source: World Resources Institute EarthTrends 2007]

Now that industrialised nations are desperate for cooperation from developing countries, it is no longer a situation of “let's forgive, forget, move on and start anew”. Developing nations are seeking a kind of intergenerational and cross-geopolitical equity, demanding that withdrawals from the world's
fixed pool of emissions capacity be repaid in order that future generations in
developing countries are on an equal footing with citizens of developed
countries. Reduced to simple terms, China and India's position is that
industrialised countries would need to fix the whole global warming situation
before developing countries would join to go forward on an equal basis.

China re-emphasised that the policy remains "common but differentiated responsibility", following definitions developed by the UNFCCC's 1995 Berlin
meeting. The Berlin Mandate placed industrialised and developing countries
into different international regimes with differentiated accountability for accumulated emissions and responsibility for the costs of mitigation over
forthcoming decades. Indeed, the Berlin Mandate not only exempted developing countries from binding targets but prohibited them from entering into any. With the benefit of
perfect hindsight, it may have been preferable for the Berlin Mandate to have
included transition provisions for countries, such as China and India, that had
the potential to emerge as major industrial powers and polluters.

As at the 2007 APEC meeting in Sydney, industrialised countries once again reneged on the principle of "common but differentiated responsibility". They
endeavoured to lay off responsibility onto developing countries by setting
developing country emission quotas and not responding to suggestions of
providing funds and environmental technology transfer.

In the event, this strategy didn't work. Both China and India responded by
calling America's bluff. Firstly, they accused America and other developed
countries of not engaging with the long-established philosophy of "common but
differentiated responsibility". Secondly, China demanded performance from
America and other industrialised countries as a precondition of its own action.
For example, China declined to commit to target levels, while demanding that
developed nations including America reduce emissions by at least 40% by 2020
(compared to 1990 levels) and contribute at least 0.5% to 1% of their GDP to
help developing countries upgrade technology.xxxi These claims were in stark
contrast to the Waxman-Markey target of just 4% reduction by 2020 (compared
to 1990 levels) and America's foreign aid budget of 0.17% of GDP.

Thirdly, India demanded that developed nations reduce emissions by 79.2% by 2020 (compared to 1990 levels), by accepting responsibility for both
their current emissions and accumulated emissions in the atmosphere since
the Industrial Revolution c1750. Any compensation pursuant to this
accountability would need to be mandatory in nature and not part of the voluntary transfers usual in foreign aid and development cooperation. Industrialised
countries responded by arguing against any new institution for mandatory
payments and that climate settlements should remain part of development
cooperation.

Fourthly, China declined to further engage with America as a kind of leading
nations “G-2” to agree on emissions quotas. China said that it would only engage
with the wider United Nations process (Schwägerl 2009b).

Fifthly, China claimed that green technologies were far too expensive for it to
contemplate any emissions target. Developing countries are still smarting from
two decades of abrasive dealings with developed countries over the intellectual
property rights for new AIDS pharmaceuticals. In emulation of secondary pricing for AIDS pharmaceuticals, developing countries have called on
industrialised countries to require private technology developers to license
their intellectual property rights.

Even more disturbing for international amity, China demanded that America
provide the cheap carbon capture and storage (CCS) technologies that
industrialised countries such as America, Australia and the United Kingdom
have long touted as the “magic bullet” solution to reduce emissions.
Unfortunately, if the IPCC, Al Gore, James Hansen and many environmentalists
prove to be correct, Anglo-American nations may be “hoisted on their own
petard” by this challenge. Environmentalists refer to CCS as a “dirty lie”
because it has been used by the fossil fuel industry as a “red herring” to
absorb renewable energy research funding and deceptively mislead voters
about climate change. In late 2009, despite fledgling pilot projects, CCS still
appears to be merely hypothetical technology. Current indications are that CO2
collection would reduce boiler burning efficiency by 25% and correspondingly
increase the fuel required by 25% to 30%. Perhaps an even greater hurdle is
the well known issue of CO2 egress from storage. The risks of CCS remain
exceedingly high, it is unlikely to be commercial before 2025 and, at the
current point in time, CCS has a very small probability of ever being available.

Overall, the outcome of the Bonn meeting appeared to be quite counterproductive. America failed to shake off industrialised country liabilities for
climate change. America's efforts to avoid accountability meant that America
and China did not reach any form of mutual empathy. In the end, America was
forced to acknowledge that China (and India) would not agree to legally
binding targets. America therefore proposed a face-saving solution that would
see developing countries legally bound to take measurable action on a basis comparable with other countries but that this would not be enforceable.
However, neither China nor India responded to the suggested arrangement
(although, as discussed below, China subsequently softened its attitude).

Following the Bonn meeting, IPCC Chairman Rajendra Pachauri reflected on the futility of America's demand for China to cap its emissions (Whiteman
2009):

I don't think you'd expect any of the emerging markets to take an actual cut or even a commitment to reduce the rate of growth …. It
doesn't make sense to be tough because, let's face it, the developed
world really has not lived up to what was expected of them. I think
there's a far more productive strategy, a constructive approach
would be to first make a commitment to reduce emissions in the
developed world, get the emerging markets to take some fairly
ambitious action within their own territories, and then we move
from there onwards …. If you just keep pushing the Chinese that
they've got to make some kind of a commitment for cuts or
reductions in emissions intensity, you're not going to get anywhere.

These thoughts had already been foreshadowed by William J. Antholis, Managing Director of The Brookings Institution, a key American adviser whose
words commenced this Chapter. He adds (Antholis 2009):

We must understand that the developing world is a diverse place, with a wide range of challenges and opportunities, and hence
equities. The simple model of “north” and “south”, “industrial” and
“developing” no longer applies. Emerging markets blend first world
economic cores with still crude industrial development, with
rudimentary legal and regulatory frameworks, and with the most extreme of poverty. Even if there are still hundreds of millions of
very poor living in these nations, their central governments do have
some resources for addressing their plight …. So our effort to
engage with them should begin with the premise that each should
be taken at their own level of development, and their own level of
capacity for addressing the issues at hand. That means also
acknowledging and giving credit for actions that they already may be taking to address climate change. In the case of China, these are
already considerable, and are growing by the day …. Moreover,
working with China and India in particular (as well as Russia) is
critically important for how this issue connects to three other global
governance challenges: nuclear energy and non-proliferation, re-
energizing the global trade regime, and redrawing the scrambled
global financial architecture …. The other great challenge lies
beyond them, where the poorest are likely to suffer the most from
climate change, and also still lack capacity to adapt and respond.
Perhaps the most effective way to reach out to developing countries
and to the poorest nations is by focusing on real areas of
opportunity, where mitigation and adaptation can be addressed
simultaneously. This certainly applies in areas such as deforestation
and coastal preservation. But it also extends to infrastructure
development, especially power generation, transportation,
construction.

Pre-G8 Mexico City meeting


In late June 2009, nineteen countries and the European Union met in Mexico
City to repair the fragmentation of the Bonn meeting. Together these countries
accounted for over 80% of global greenhouse gas emissions.

Unfortunately, once more the meeting was to finish without consensus. American climate envoy Todd Stern dismissed calls for higher reduction commitments and appeared to be delaying negotiations. This led to
perceptions that President Barack Obama was resiling from the strength of
commitments in his recent speeches. The Minutes suggest that America, Japan,
Canada and Russia were leaning towards targets for 2050 rather than 2020.

As discussed in Chapter 2 Political economy of the Anglo-American economic world view, President Obama needs two-thirds support in the Senate for any
international treaty. The Senate sees itself as the defender of American
unilateralism. It has a history of not supporting any international treaty that
constrains America.

America is truly facing the major challenge identified at the end of Chapter 2
of whether Americans will accept a paradigm of constrained growth.

Major Economies Forum on Energy and Climate


On 9 July 2009, seventeen developed and developing economies met in
L'Aquila, Italy, to form a new global institution called the Major Economies
Forum on Energy and Climate (MEF). The MEF comprises Australia, Brazil,
China, India, Indonesia, Korea, Mexico, South Africa and the G-8 group of
nations (America, Canada, the European Union, France, Germany, Italy, Japan,
Russia and the United Kingdom).

The MEF's inaugural meeting declared that the increase in global average
temperatures above pre-industrial levels should not exceed 2°C and that both
developed and developing countries need to work towards this goal.

The Kyoto Protocol requires only developed countries to reduce emissions. The
MEF declaration restated the responsibility of industrialised countries to do
this and “Take the lead by promptly undertaking robust aggregate and
individual reductions …. [with] sustainable development, supported by
financing, technology and capacity building.”

An important new aspect of the MEF declaration was that developing countries
“agreed to agree.” The communiqué stated that developing countries would
“[Commit to] promptly undertake actions whose projected effects on emissions
represent a meaningful deviation from business as usual in the mid-term.”
Although it is well understood that any agreement to agree is unenforceable,
developed countries see this outcome as providing a faint glimmer of hope that
China and India may engage with the Copenhagen process.

However, any meaningful statement would have linked the responsibilities of each group of countries. The declaration fell short of this, so it is likely that
China and India see the communiqué as a mere place-filling political nicety
that has little more importance or moral underpinning than the thousands of
unenforceable Memorandums of Understanding they sign each year.

With the G20 assuming the mantle of the world's premier policy body in
September 2009, the MEF may become a redundant body.

Pre-UNFCCC Copenhagen meeting in Bangkok


On 16 October 2009, nation members of the UNFCCC concluded talks without
finding common ground for a draft Copenhagen Protocol. Whilst facing the
prospect of a U.S. Senate rejection of the Boxer-Kerry Bill "Clean Jobs and
American Power Act,” the Obama Administration supported Australia's weak
climate change policy proposal. The proposal sidelines the Kyoto Protocol and only requires that every country agree to set a best endeavours
target for emissions reduction, called Nationally Appropriate Mitigation
Actions (NAMAs), and report performance (Klein 2009).

Industrialised countries rallied to this new proposal and some began to resile from existing commitments. For example, the European Union
withdrew its pledge to contribute up to US$22 billion per annum in assistance
for developing country adaptation.xxxii This unusual and uncharacteristic step
by the European Union was later reversed in part (Kanter & Castle 2009).

China and its G-77 coalition of developing countries expressed outrage at the abrogation of responsibility by Kyoto Annex 1 industrialised nations and the introduction of deal breakers such as the cancelling of developing country adaptation funds (Pasternack 2009). It seems that this shocked indignation may have been
the very response sought by industrialised countries as they endeavour to
chasten China and India through playing-out a dire scenario in which no
country engages in effective emissions reductions. One may also speculate
about the UNFCCC's participation in the negotiations because at the end of the
talks Executive Secretary Yvo de Boer, arguably uncharacteristically and
prematurely, commented that the UNFCCC would now not ask for a new treaty
to replace the Kyoto Protocol (Ramanayake 2009).

Shortly after the Bangkok meeting, India's Environment Minister Jairam Ramesh responded to American and European Union hard-line demands
that India and China should accept internationally-binding caps on emissions,
saying “The voluntary actions of developing countries could not be equated
with the commitments of developed countries” (RTTNews 2009). Perhaps even more corrosive to multilateral co-operation, India and China signed a pact to
develop technology and reduce greenhouse gas emissions (BBC News 2009).
Their official statement noted "Internationally legally binding [greenhouse gas]
reduction targets are for developed countries and developed countries alone,
as globally agreed under the [2007] Bali action plan."

The penultimate drafting meeting before the December 2009 Copenhagen conference will be held in Barcelona in early November 2009. Little is expected to occur at this
meeting as various countries have expressed the view that final negotiating
positions will be reserved until the Copenhagen meeting.

NGO draft Copenhagen protocol


In October 2009, with UNFCCC member nations unable to agree on a draft protocol for the Copenhagen meeting in December 2009, Greenpeace, the
World Wildlife Fund, German Watch and the David Suzuki Foundation issued a
draft international treaty (Gupta 2009). As might be expected from such an
avant-garde policy group, their approach provided a clear and equitable path
forward without fear or favour and is reminiscent of the clarity in a High Court
judgement. Their “level playing field” carefully closes cherished loopholes and
places obligations on newly industrialised countries such as Saudi Arabia,
South Korea and Singapore. For this reason it may prove to be unpopular with
countries that expect to hold out for concessions and exemptions.

The key features of the draft protocol were equal per capita emissions
allowances for each country and a 95% reduction in emissions by 2050
(compared to 1990). Countries would secure their emissions reductions with
financial bonds, which would be forfeit should they fail to achieve their target.

Other important aspects of the proposal were enhanced reporting requirements and transparency; cooperative sharing of green technology
intellectual property; contributions by industrial countries of US$160 billion
per annum to assist poor countries adapt; limiting the potential rort of Clean
Development Mechanisms (CDMs) by restricting CDMs to the least developed
countries and small islands; and asymmetrically paying countries to reduce
deforestation and forest degradation while not paying countries to increase
their forest cover.

3.5 Models to manage the commons
In the tâtonnement of a microeconomic model of supply and demand, the supplier's welfare and the consumer's welfare are mutually and simultaneously maximised by the equilibrium process.
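
A minimal sketch may make the tâtonnement process concrete: the price rises where demand exceeds supply and falls where supply exceeds demand, until the market clears. The linear demand and supply schedules below are illustrative assumptions only, not drawn from the Sceptre model:

    def demand(p):
        return 10.0 - p            # quantity demanded at price p
    def supply(p):
        return 2.0 + p             # quantity supplied at price p

    p, step = 1.0, 0.1             # initial price and adjustment speed
    for _ in range(200):
        excess = demand(p) - supply(p)   # excess demand drives the price
        if abs(excess) < 1e-9:
            break
        p += step * excess               # Walrasian price adjustment
    print(f"equilibrium price ~ {p:.3f}, quantity ~ {demand(p):.3f}")
    # converges to p = 4, q = 6, where both sides of the market are satisfied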

In the case where the representative agent is the supplier, the model is one of
competitive markets. Where the representative agent is the consumer, the
model is that of a social planner. Appendix 2 CGE modelling provides Uhlig's proof that a priori there is no economic difference between competitive market optimisation and a social planner's optimisation. Uhlig summarises his findings as: “Whether one studies a competitive equilibrium or the social planner's problem, one ends up with the same allocation of resources” (Uhlig 1999).

The economic equivalence of competitive markets and social planning models has wide application in political economy, for example in concepts of private
property and strategies to manage common resources. According to ten Raa
(2005, p.139), pollution and over-exploitation of natural resources mainly occur
when resources don't belong to anyone. He reasons that where ownership
rights can be defined, resources will be properly managed because the owner
has an incentive to do so and violation can be sanctioned by fines.

Three basic policy frameworks emerge from the concepts of competitive markets, social planning and ownership rights. These have been used to protect natural commons such as air, water, forests and fishing stocks as follows (ten Raa 2005, pp.139-41; Sachs 2008, pp.37-41):

• quantitative limits through regulation, such as quotas and standards, or limits on the production of a “bad”
• taxing the “bad” to provide a price disincentive
• creating a property right for a “good”, such as clean air, and selling or
giving it to someone who will then price the “bad”

Often a policy response requires a complex blending of two or all three of these instruments. In recent times, quantitative limits have been successfully
implemented by world governments to ameliorate damage to the ozone layer.

Quantitative limits
In 1995 atmospheric scientists Paul Crutzen, Frank Sherwood Rowland and Mario Molina shared the Nobel Prize in Chemistry “for their work in atmospheric chemistry, particularly the formation and decomposition of ozone.” In the 1970s, these distinguished scientists discovered, almost by accident, that man-made nitrogen oxides and chlorinated fluorocarbons (CFCs)
were severely damaging the ozone layer and leading to acute health risks for
humans, livestock, crops and marine phytoplankton (Crutzen 1970; Crutzen
1973; Rowland & Molina 1975).

In order to address ozone depletion, the United Nations sponsored the 1985
Vienna Convention on the Protection of the Ozone Layer. Within two years,
governments began to actively phase out CFC usage pursuant to the United
Nations' Montreal Protocol of 16 September 1987.xxxiii In fact, the ozone layer
issue was ameliorated at an actual cost of one percent of the US$135bn
originally suggested by critics.

However, more extensive experience has shown that generic quantitative limits
are a form of social or central planning that lacks flexibility. Whilst absolute
prohibition is useful in an emergency, as was the case with ozone depletion,
this policy instrument is very blunt. The main issue with planned quantitative
limit policies that try to be more sensitive is that this policy approach has all
the deficiencies of an economic system where a government tries to pick
winning strategies, industries and firms. As governments of all persuasions
have discovered to their dismay, picking winners is fraught with danger.

The performance of production and consumption under any new constraint is not a static balance. It immediately becomes dynamic due to a multiplicity of
feedback loops, many of which are beyond the vision of any planner. A new
regulated regime rapidly becomes difficult to manage in any flexible way. For
example, after the system has commenced, it is almost impossible to alter
allocative decisions that have been made about which firms will receive quotas
and what amount each will receive.

In his influential 1940s books The Road to Serfdom (2001) and The Use of
Knowledge in Society (2005), Friedrich Hayek argues that central planners can never have sufficient information to make quantitative decisions such as the
optimal level of regulation. Hayek champions market price mechanisms for
self-organising societies, which he sees as even more essential to the human
condition than democracy (Hayek 1988). In 1974, Friedrich Hayek and Gunnar
Myrdal shared the Nobel Prize in Economics “for their pioneering work in the
theory of money and economic fluctuations and for their penetrating analysis
of the interdependence of economic, social and institutional phenomena.”

In America, the prospect of Environmental Protection Agency (EPA) regulation of emissions has been bitterly opposed within industry. Notwithstanding these fears and divisions, and the issues of political economy in regard to quantitative regulation, President Obama steeled Senators and industry for renewed action by the EPA in the event Senators failed to pass the Boxer-Kerry Bill "Clean Jobs and American Power
Act" (Broder 2009b). On the same day as the Boxer-Kerry Bill was introduced
into the Senate, 1 October 2009, President Obama authorised a controversial
but long anticipated EPA rule requiring the EPA to control the emissions of
plants that emit more than 25,000 tonnes of CO2 per annum. This covers
14,000 coal-fired electricity generators and big industrial plants that
collectively are responsible for nearly 70% of American emissions.

Taxation
From 2010, France will become the first country to tax carbon emitters (Butler
2009).xxxiv However, the initial tax rate of Euro 17 (US$25) per tonne of CO2 is
less than an estimated Euro 40 per tonne necessary to change consumer
behaviour. The tax is expected to rise to Euro 100-200 per tonne by 2020.
Currently France's electricity is excluded because 90% of France's generation
is from carbon-free nuclear and hydroelectric sources.xxxv France has the lowest
cost electricity in the European Union. Its competitiveness makes it a large net
exporter of electricity and nuclear technology. France stands to become highly
resource expansive as other nations increase prices on dirty power.

Taxes on “bads” require the polluter to pay for the damage caused. Economists favour a system of taxes on “bads” and negative taxes (i.e. subsidies) on “goods”. These are called Pigouvian taxes because they are set equal to the marginal productivities, marginal rates of substitution and shadow prices of the disutilities (Pigou 1920). Pigou, a pioneer of welfare economics, stressed that
market transactions produced externalities, which are indirect social costs and
benefits.
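
A toy numerical example may be useful here, under assumed functional forms that are not drawn from Pigou or ten Raa: marginal private benefit MB(q) = 10 - q, constant marginal private cost MC = 2, and marginal external damage MD(q) = 0.5q.

    def MB(q): return 10.0 - q     # marginal private benefit of output q
    MC = 2.0                       # constant marginal private cost
    def MD(q): return 0.5 * q      # marginal external damage (the "bad")

    # Unregulated market: MB(q) = MC  =>  q = 8, ignoring the externality.
    q_market = 10.0 - MC

    # Social optimum: MB(q) = MC + MD(q)  =>  10 - q = 2 + 0.5q  =>  q* = 16/3
    q_star = (10.0 - MC) / 1.5
    tax = MD(q_star)               # Pigouvian tax = marginal damage at q*

    print(f"market output {q_market:.2f}, efficient output {q_star:.2f}")
    print(f"Pigouvian tax (shadow price of the disutility): {tax:.2f}")

Levying the tax confronts producers with the full social cost, so the market settles at the efficient output of its own accord.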

Policy makers also favour Pigouvian taxes for simplicity and flexibility. Such
taxes are administratively straightforward, avoid allocative decisions, provide
price certainty, capture activities that cannot be controlled in other ways (such
as by higher level regulations), can be collected at low cost and do not require
expensive overheads (such as an expensive superstructure for a market in
tradeable permits). The tax rate can be raised or lowered to directly influence
prices across wide sectors, expanded or contracted in coverage, and balanced
with other taxes to achieve welfare objectives. They also reinforce other
policies and integrate agendas such as long term environmental objectives into
mainstream economic policy. The European experience with revenue neutral
taxes and double dividends has been discussed above.

Many prominent American economists, such as William Nordhaus, argue that taxation is the best approach for addressing global warming. James Hansen
succinctly puts the case for a pure revenue neutral environmental tax (Hansen
2009):

A carbon tax on coal, oil and gas is simple, applied at the first point
of sale or port of entry. The entire tax must be returned to the
public, an equal amount to each adult, a half-share for children. This
dividend can be deposited monthly in an individual’s bank account.
A carbon tax with a 100 percent dividend is non-regressive. On the
contrary, you can bet that low and middle income people will find
ways to limit their carbon tax and come out ahead. Profligate energy
users will have to pay for their excesses …. Demand for low-carbon
high-efficiency products will spur innovation, making our products
more competitive on international markets. Carbon emissions will
plummet as energy efficiency and renewable energies grow rapidly
…. Will the public accept a rising carbon fee? Surely – if the revenue
is distributed 100% to the public, and if the rationale has been well-
explained to the public. The revenue should not go to the
government to send to favored industries. Will the public just turn around and spend the dividend on the same inefficient vehicle, etc.?
Probably not for long, if there are better alternatives and if the
public knows the carbon price will continue to rise. And there will be
plenty of innovators developing alternatives.
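
A hedged sketch of the fee-and-dividend arithmetic Hansen describes follows. All figures (the fee, household emissions and the household mix) are invented for illustration, and Hansen's half-shares for children are omitted for brevity:

    fee = 25.0                                  # $ per tonne CO2 at first sale
    emissions = {"low-carbon": 8.0,             # household tonnes CO2 per year
                 "average": 20.0,
                 "profligate": 40.0}
    adults = {"low-carbon": 2, "average": 2, "profligate": 2}

    revenue = fee * sum(emissions.values())     # 100% of the fee is returned
    dividend = revenue / sum(adults.values())   # equal amount per adult

    for name in emissions:
        net = dividend * adults[name] - fee * emissions[name]
        print(f"{name} household: net ${net:+.0f} per year")
    # low and average emitters come out ahead; profligate users pay,
    # illustrating Hansen's claim that the scheme is non-regressive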

Property rights
In his book, The Fatal Conceit: The Errors of Socialism (1988), Hayek argues that the establishment of property rights was the seminal factor in the rise of civilisation. Creating a property right for a resource places its
economic exploitation into the hands of a profit maximising decision maker.
This may be a person or company, a community management organisation,
state authority or international authority, such as the United Nations. Prima
facie, the decision maker is expected to act rationally by moderating the
harvest of the resource to an economically sustainable level, thereby
continuously maximising profit.

In the same way as a Pigouvian tax is equal to the marginal productivity, marginal rate of substitution or shadow price of the disutility, so the price of a property right is an alternative measurement of the same disutility (ten Raa 2005, p.144; Baumol & Wolff 1981).
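
This equivalence can be stated compactly. A minimal sketch in standard textbook notation (assumed here, not taken from ten Raa or Baumol & Wolff): let B(q) be the private benefit of the polluting activity q and D(q) the external damage. The social problem is

\[ \max_q \; B(q) - D(q) \quad\Longrightarrow\quad B'(q^*) = D'(q^*) . \]

A Pigouvian tax set at \( t = D'(q^*) \) leads the polluter to choose \( q^* \) directly. Alternatively, capping the activity at \( \bar{q} = q^* \) and allowing permits to trade yields an equilibrium permit price equal to the marginal private benefit at the cap:

\[ p = B'(q^*) = D'(q^*) = t , \]

so the permit price and the Pigouvian tax are the same shadow price of the disutility.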

While in theory the two approaches of tax or property rights are equivalent,
there are at least six problems with creating property rights to address global
warming. The first is due to the uncertainties that society faces about the
marginal benefits and marginal costs of averting climate change. In this
respect, a tax on emissions has the economic advantage of certainty (United
States Congressional Budget Office 2009, p.4).

A second issue is social equity. Creating property rights has proven subject to
corruption and there is an ever present risk that scarce, public resources
might end up in the hands of powerful vested interests, who may exercise
monopoly power or disenfranchise the population. James Hansen is highly
critical of this risk (Hansen 2009):

Cap-and-trade is fraught with opportunities for special interests,
political trading, obfuscation from public scrutiny, accounting
errors, and outright fraud …. As with any law, caps can and will be
changed, many times, before 2050. The fact is that national caps
have been set and are widely rejected. When caps are accepted,
they are often set too high – as happened with Russia. If a complete
set of tight caps were achieved, global permit trading would likely
result in a Gresham’s Law effect – “bad money drives out good.”
Some countries will issue too many permits or fail to enforce
requirements. These permits, being cheapest, will find their way
into the world market and undermine the world cap. Caps are also
extremely hard to enforce, as demonstrated by the Kyoto Protocol.

As mentioned in Chapter 1 Introduction, Gruber (2007, p.253) agrees:

The government is assumed to be a benign actor that serves only to implement the optimal policies to address externalities, to provide
public goods and social insurance, and to develop equitable and
efficient taxation. In reality, however, the government is a collection
of individuals who have the difficult task of aggregating the
preferences of a large set of citizens …. The core model of
representative democracy suggests that governments are likely to
pursue the policies of the median voter, which in most cases should
fairly represent the demands of the society on average. Yet, while
that model has strong evidence to support it, there is offsetting
evidence that politicians have other things on their mind. In
particular, there are clear examples of government's failure to
maximise the well-being of its citizens, with potentially disastrous
implications for economic outcomes.

A third issue in creating property rights is that of externalities: for example, the destruction of the biodiversity of flora and fauna, clean water and
clean air. For example, enclosing land that is a migratory path damages fauna.
An owner looking only to profit will be unwilling to consider externalities. Only
in recent years has the price charged to an owner begun to reflect the social
value of species and a clean environment.

The fourth issue is that the private sector has a short term focus on profit and
this is reflected in high discount rates on future profits. It will harvest the “low
hanging fruit” while leaving higher cost resources to future owners. For
example, enjoying open cut above ground mines now, leaving troublesome and
expensive underground mines for the future; and avoiding slower growing
plants and animals because they are “poor investments” (Sachs 2008, p.40).
This means future consumers face a higher cost than current consumers and, all things being equal, intergenerational welfare will be distorted, with current consumers enjoying greater welfare than future consumers, who are not represented in the market today.

Fifthly, as Jeffrey Sachs points out, there is a “tyranny of the present over the
future” in consumption. Our societies are impatient to consume. The free
market is seen as a right to consume as much as is wanted, with no regard for
the future.

Lastly, the participation of non-industry sectors, such as speculators, brings advantages such as liquidity, but also weaknesses such as volatility and herd behaviour driven by greed and fear.

A number of the above problems with property rights are analogous to criticisms of unregulated markets, which became an article of reformist belief throughout the 1980s and 1990s. Professor John Freebairn says of
managing Australian water rights and other national resources through this
period (Slattery 2008):

Most of the economic successes of the '90s were owed to reforms
during the '80s, which were heavily run by economists working in
academe, business and government ... What economists have
worked out is that if you let the market go, properly define water
property rights, then the consumptive uses will get it moved around
between different types of uses ... At the same time we've
recognised that markets don't work for the environment so we do
have to have government intervention.

Lowe (2009, p.1) is far more trenchantly critical of free market dogma:

In 2005, an almost childlike belief in the magic of the market was
widespread. Otherwise intelligent observers and pragmatic
politicians abandoned their understanding of the complexity of
human society and the need for regulation, in favour of a touching
faith that the pursuit of self-interest and the application of market
forces would produce a better world. The weight of scientific
evidence was showing that both Australia and the world faced very
serious environmental problems that threatened our future. Despite
that, concerted responses were prevented by the prevailing
ideology, the extreme form of market economics.

Notwithstanding these potent criticisms of quantitative limits and property
rights, such policy instruments have been successfully applied, as explained
above, in ameliorating ozone depletion. Perhaps the greatest success in using
property rights to protect the commons has been in abating acid rain.

America's cap & trade system for sulphur dioxide (SO2)
In order to overcome some of the risks inherent in creating property rights, it
is possible for a government to combine property rights with quantitative limits
and market trading of the scarce permits. This is called a Coasian market after
Coase (1960) who formulated the theory that specifying and allocating
property rights for natural resources and other ecotypes leads to a price
mechanism and thence to the efficient and unique allocation of resources. The
second part of Coase's theorem states that the particular details of the
allocation of the property right are unimportant. However, the generality of
this second part remains controversial (Hurwicz 1995).

A Coasian market was used in America in 1990 to successfully abate SO2 pollution from coal-fired generation, which had been causing acid rain for a decade (Broder 2009c).xxxvi

Originally, environmentalists saw cap and trade for acid rain abatement as
merely a license to pollute because it freely gave valuable pollution permits to
powerful vested interests. However, arguably President George H. W. Bush's
Clean Air Act amendments have become the most successful domestic
environmental legislation ever enacted. According to the Environmental
Protection Agency (2004), there was close to complete compliance in achieving
a 50% reduction in pollution over the ensuing decade. In addition, the cost of
$1-$2 billion pa was significantly less than the EPA's original estimate of $2.7-
4.0 billion pa (Weiss 2008; Bohi & Burtraw 1997).

The proposed American cap and trade system for climate change amelioration
and abatement is similar to the current European differential model, which
runs to 2012. The American government will give all the emissions permits in
its treaty limit to large emitters. If emitters manage to increase efficiency and
thereby save permits then they may sell their unused permits on the market to
other emitters that need additional permits because they have over polluted.
Emitters know that the government supply of permits will be progressively
reduced and so they must move ahead of this market scarcity or face
potentially high market prices for permits.
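
The permit accounting described above can be sketched in a few lines. The sketch below is illustrative only: the two emitters, their allocations and the $50 permit price are invented, not taken from any proposed scheme.

```python
# A minimal sketch of cap-and-trade settlement: an emitter that abates below
# its free allocation sells the surplus to one that has over-polluted.
emitters = {
    "efficient": {"permits": 100.0, "emissions": 80.0},       # 20 t surplus
    "over-polluter": {"permits": 100.0, "emissions": 130.0},  # 30 t short
}
price = 50.0  # assumed market-clearing permit price, $/tonne

for name, e in emitters.items():
    surplus = e["permits"] - e["emissions"]
    cashflow = surplus * price  # positive = permit sales, negative = purchases
    print(f"{name}: surplus {surplus:+.0f} t, cashflow ${cashflow:+,.0f}")
```

As the government progressively reduces the allocations, surpluses shrink and the price of the remaining permits rises, which is the scarcity signal that emitters are expected to move ahead of.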

As a result of its success in abating SO2 pollution from coal-fired generation,
the use of cap and trade in reducing CO2 pollution is seen as an easier political
alternative than top-down regulation with quantity limits or a tax on fossil
fuels. However, many economists argue that cap and trade is merely a carbon
tax with an expensive superstructure.

Emissions trading between entities with comparative advantage

Microeconomics of Ricardian trading

David Ricardo (1817) developed his inspired theory that international trade
should be based on the relative or comparative advantage of each country's
commodity production rather than on absolute advantage. His theory
remains the fundamental principle of modern trade and a major argument
against protectionism.

From Ricardo's theory, the advantage of an emissions trading market mechanism derives from combining the supply curves of the organisations that trade to form an aggregate supply curve. Trading allows this combination to be achieved through horizontal aggregation.

The illustration below demonstrates how a market mechanism operates to reduce the cost for participants. It is based on the American proposal for emissions trading between nations as a condition precedent for America joining the Kyoto Protocol in 1997.xxxvii

The illustration shows hypothetical CO2 mitigation curves for America and Russia. The slope of the curve for America reflects the high cost of reducing emissions due to America's coal-fired generators. The curve for America shows a cost of $500 per tonne to reduce 400 tonnes of CO2.

Illustration 12: International trading lowers the combined cost curve

As a predominantly nuclear generator, Russia would have a lower-cost supply curve of, say, $20 per tonne to reduce its 240 tonnes of carbon emissions. Permitting America to buy carbon emission permits from Russia can be modelled by horizontally summing the supply curves, as shown in the right hand illustration. This leads to a combined supply curve of 640 tonnes (i.e. 400 for America plus 240 for Russia) at a price of $50 per tonne.

In this circumstance, America would reduce its carbon emissions by only 40 tonnes, which would be used for low value opportunities at a cost of $50 per tonne. Russia would reduce emissions by 600 tonnes at the same cost of $50 per tonne. America would pay Russia $50 per tonne for its increased emissions reductions of 360 tonnes.
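
The arithmetic of the illustration can be reproduced under the simplifying assumption of linear marginal abatement cost (MAC) curves through the origin, calibrated to the figures above; the function and variable names are merely illustrative.

```python
# A minimal sketch of horizontally aggregating two abatement supply curves.
def mac_slope(price, quantity):
    """Slope of a linear MAC curve passing through the point (quantity, price)."""
    return price / quantity

slope_us = mac_slope(500.0, 400.0)  # America: $500/t to abate 400 t
slope_ru = mac_slope(20.0, 240.0)   # Russia: $20/t to abate 240 t
target = 400.0 + 240.0              # combined abatement obligation, tonnes

# At a common permit price p, each party abates q_i = p / slope_i, so the
# market-clearing price solves p * (1/slope_us + 1/slope_ru) = target.
p_star = target / (1.0 / slope_us + 1.0 / slope_ru)
q_us, q_ru = p_star / slope_us, p_star / slope_ru

print(f"permit price: ${p_star:.0f} per tonne")                     # $50
print(f"America abates {q_us:.0f} t; Russia abates {q_ru:.0f} t")   # 40 t; 600 t
print(f"America buys {400.0 - q_us:.0f} t of permits from Russia")  # 360 t
```

The combined supply curve is flatter than either national curve, which is precisely why trading lowers the total cost of meeting the joint target.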

Problems with naked market mechanisms and where parties
have unequal power

Six major inherent difficulties in the market mechanism for emissions trading have been identified. The first is that the price of emissions permits rises under trading because it comes to include a premium for commodity risk, and an attendant volatility is brought into existence. The financial market for a commodity is often an order of magnitude larger than the physical market, so the usual volatility of the physical market due to factors such as seasonality and weather is exacerbated, and ultimately overshadowed, by the risk introduced through the speculation and gearing strategies of the traders who bear the price volatility.

Secondly, there is a large amount of equity invested in the market and it is
looking for a significant return. Recent problems in deregulated commodity
markets were discussed in Chapter 2 Political economy of the Anglo-American
economic world view. Trading in emissions permits will be subject to the same
pressures and inefficiencies, particularly as the scarcity of permits increases.

Krugman (2009) has addressed this market deficiency in writing of Goldman Sachs' meteoric success in the immediate aftermath of the global financial crisis: “The American economy remains in dire straits, with one worker in six
unemployed or underemployed. Yet Goldman Sachs just reported record
quarterly profits — and it’s preparing to hand out huge bonuses, comparable to
what it was paying before the crisis. What does this contrast tell us? …. First, it
tells us that Goldman is very good at what it does. Unfortunately, what it does
is bad for America …. Other banks invested heavily in the same toxic waste
they were selling to the public at large. Goldman, famously, made a lot of
money selling securities backed by subprime mortgages — then made a lot
more money by selling mortgage-backed securities short, just before their
value crashed. All of this was perfectly legal, but the net effect was that
Goldman made profits by playing the rest of us for suckers.”

Thirdly, a market has a large overhead cost for the public. This is in addition to
the exacerbated commodity risks that have been so damaging in food and oil in
recent years, the professional suckering that withdraws profits from
commodity markets, and the public's underwriting of the sector's losses
through “bail outs” that have received so much prominence in the 2008-9 Global Financial Crisis. Any market also carries a significant deadweight cost from its participants, including exchange operators, compliance regulators, policy makers and lawmakers.

Fourthly, there is the wealthy country effect: little incentive exists for a wealthy
country such as America or Australia to turn its attention to bona fide
reductions in CO2 emissions; remove obsolete processes or address the linked
interdependencies in its economy; develop new core competences in CO2
emissions reduction and equip local firms with these new technologies so they
can have higher productivity and lower emissions; develop new agility that
leads to new economies of scope and scale, and new synergies for reduction in
emissions; or develop new intellectual property in unexpected ways.

Fifthly, the noble objective of reducing emissions is vulnerable to subversion by the amorality of powerful business coalitions and wealthy nations. Reducing
everything to money has the potential to destroy a symbiosis for CO2 reduction
by materially damaging the reputation of the institutions established by the
United Nations to avert the “Tragedy of the Commons.” For example,
Monetarists argue that the only responsibility is to “make a profit” and that it is not incumbent upon Western governments and companies to apply the same ethical standards abroad as they would in their own country.xxxviii

Wealthy countries may well be condescending in their approach to purchasing emissions permits, arguing that whatever a smaller nation receives is better than what it received previously, which was nothing.xxxix Already developed countries have been exposed as using their financial position to avert bona fide action or even to cheat on their obligations through actions such as buying CO2 emissions permits from third world nations in lieu of foreign aid.xl

Lastly, a series of payments to a small nation through the emissions market is questionable because of the inequality of power between wealthy and poor third world nations. As has been repeatedly shown in South America and Africa, exploitation, corruption and covert actions make it almost impossible for these countries to receive a fair price for their commodities.xli

3.6 Conclusion
This Chapter reviewed the history of climate science and policy over the last
50 years with a detailed focus on measures to replace the UNFCCC Kyoto
Protocol, which is due to expire in 2012.

An analysis of “Climate change science development” examined the
contributions of the pioneers Roger Revelle and Charles Keeling, the Jason
Group of scientists and activists James Hansen and former American Vice
President Al Gore who brought the issue of global warming to public attention.
In 2007, the Intergovernmental Panel on Climate Change Assessment Report
AR4 concluded that limiting global atmospheric temperature rise to 2°C
(3.6°F) above the pre-industrial level was necessary to avoid severe climate
change damage. Since this time many scientific groups have highlighted that
emissions are tracking the IPCC's worst estimates.

It was found that in mid-2009, scientists recognised that the nations of the
world had only about 750 Gt CO2 emissions capacity remaining if atmospheric
global temperature rise was to be kept within 2°C with a two-thirds probability.
As the world emitted approximately 45% of this amount between 2000 and
2008, the critical nature of the issue was accepted by all United Nations
governments, including those of Anglo-American nations.

An analysis of “Climate change policy development” reviewed the Kyoto Protocol and ensuing actions by various countries, including the United Kingdom's 2001 climate levies and 2008 Climate Act, which legislated emissions reductions of up to 42% by 2020 and 80% by 2050. The European Union's approach to carbon levies was found to be heavily influenced by its difficult experiences with environmental taxes through the 1990s, including the Constitutional Court of France declaring the French Government's Ecological/Environmental Tax Reform (ETR) project unconstitutional in 2000. In addition, European countries
were concerned about “carbon leakage,” which is the migration of heavy
industry to countries with cheaper electricity due to a lack of carbon impost.
The Asturias region of Spain was reviewed in detail. It was concluded that
while the risks were real, other factors such as labour and transport costs were
at least as influential as carbon costs.

In 2002, the European Union adopted a “20/20/20” target for a 20% reduction
in emissions by 2020, an increase to 20% in the proportion of energy from
renewable sources, and reducing energy consumption by 20%. It was found
that the European Union emissions trading scheme designed to place a market
price on emissions failed due to profiteering by electricity producers and
financial institutions. Nevertheless, in late 2008, with the new approach of
auctioning permits, the European Union reconfirmed its 20/20/20 targets for
the post-Kyoto period commencing in 2013.

An examination of the “APEC Sydney Conference” of September 2007 showed that Anglo-American nations would not engage with climate change mitigation
unless China and India agreed to participate. It was found that this
commenced a period of testy relationships that were to continue with
frustratingly little variation throughout the next two years, notwithstanding the
election of Democrat President Obama: for example, the Group of 8 Hokkaido
meeting in July 2008, the UNFCCC Poznan meeting in November 2008, the
Washington Climate Summit in April 2009, the UNFCCC Bonn meeting and
pre-G8 Mexico City talks in June 2009, the Major Economies Forum in July
2009, the G20 Summit meeting in Pittsburgh in September 2009 and the
UNFCCC Bangkok talks in October 2009.

An investigation of “American climate policy development” has identified the
U.S. Senate's 1997 Byrd-Hagel resolution opposing any climate treaty that
would either harm the American economy or omit matching commitments from
developing countries as a constant theme of America's climate change
negotiations. A senior policy adviser to the American Government, William
Nordhaus, brought measure to the climate change debate through climate-
economic modelling. He proposed a carbon tax.

In early 2008, the U.S. Congress House Energy and Air Quality Subcommittee
began to develop emissions legislation based around carbon capture and
storage technology. At the same time, American industry called for tariffs on
goods and services from countries with no carbon cap.

However, the climate debate only began to move forward following President
Barack Obama's election in November 2008. He addressed America's
renewable energy sector as part of stimulating the American economy through
the Recovery Act and amended the American Clean Energy and Security Act to
include small reductions in emissions of between 6% and 7% by 2020.
President Obama also freed the EPA to regulate greenhouse gases, as the U.S. Supreme Court had ruled it should do in 2007. The EPA in turn permitted California and other States to regulate overall State emissions and new vehicle emissions.

It was found that these measures were consolidated as America began to
engage with the UNFCCC process for a new treaty to replace the Kyoto
Protocol when it expires in 2012. The White House Office of Science and
Technology Policy and the United States Global Change Research Program
reported on the impact of global warming on America and for the first time
declared climate change to be beyond scientific doubt and unequivocal.
This was followed with the Waxman Markey Bill and subsequently the Boxer
Kerry Senate Bill to reduce emissions 20% by 2020 (compared to 2005 levels).

America's first bona fide engagement with the international community on
climate change occurred at the UNFCCC Bonn meeting in June 2009. It was
shown that American negotiators continued the Byrd-Hagel demand for China
and India to commit to emissions targets, which they needed to convince the
U.S. Senate to pass the Waxman Markey Bill. The meeting collapsed with China
declining to form a kind of G-2 with America and demanding that America
should agree to provide it with unfettered access to green intellectual property.
America sought a face-saving solution where China would agree to
unenforceable targets, which China ignored.

It was shown that America again pressed for Byrd-Hagel conditions without
success at UNFCCC Bangkok talks in October 2009. In what had become a
somewhat desperate American negotiating strategy, the European Union joined
with America to show China a scenario where no nations agreed to reduce
emissions. These cliff-edge negotiations are expected to continue through to
the UNFCCC Copenhagen meeting in December 2009 as the Obama
Administration seeks Byrd Hagel concessions from China to fortify the passage
of its Boxer-Kerry Senate Bill through the U.S. Senate, which remains hostile
to limiting U.S. unilateralism in any way.

There has been considerable debate about models to manage the commons.
This Chapter investigated competitive market optimisation, social planners
optimisation and three policy instruments for managing “bads.” These policy
instruments were quantitative limits, taxation and property rights. It was found that all led to similar outcomes and that a mix is often the best policy solution. A number of issues with naked market mechanisms were identified.

Chapter 2 Political economy of the Anglo-American economic world view
showed that America is on the cusp of accepting its new reality of resource
constrained growth. While America's past behaviour was almost universally
unpopular, most world leaders see the future engagement of America across
multilateral issue integration and technological entrepreneurship as far
preferable to a “fortress America” situation. Nuclear non-proliferation has
been the first big issue, although this has not yet reached the stage of a U.S.
Senate vote. This Chapter has shown that the next big issue, protecting the
global commons from climate change, has reached the stage of a vote and the
Obama Administration faces a hostile Senate with little to show for twelve
years of negotiation to achieve the Byrd-Hagel objectives.

Chapter 2 Political economy of the Anglo-American economic world view also
investigated neoclassical economics as a paradigm for Anglo-American policy.
It found many strengths and weaknesses in neoclassical economics,
particularly that from time to time human behaviour led to spectacular market
failures requiring Keynesian lifelines. While recognising that policy formation
will always be a messy process, it found continuing relevance in the
neoclassical perspective. Drawing on the discussion of policy research in
Chapter 1, neoclassical policy research tools remain vitally important in
validating policy options.

This Chapter has further established the policy dimensions within which this doctoral CGE policy research will be framed. It has established a
policy Base Case of 2°C rise, consistent with geophysical modelling of a 750 Gt
CO2 carbon tranche. It has established the framework and risks for carbon
commodity markets (in both carbon permits and physical amelioration and
abatement). It has also established that a number of scenarios are important to
understanding the sensitivity of the Base Case. These scenarios include the
various points of view of the dominant groups in the climate change debate, the impact of increasingly severe climate change reduction targets, the strong faith in future technological solutions and the importance of technology
cost and availability, and the sensitivity of economic performance to
international carbon commodity trading.

3.7 Chapter references

Aitkin, D., 2008. A cool look at global warming, Canberra: Planning Institute of
Australia. Available at:
http://www.theaustralian.news.com.au/story/0,25197,23509775-
27703,00.html [Accessed April 14, 2008].

Allen, M.R. et al., 2009. Warming caused by cumulative carbon emissions
towards the trillionth tonne. Nature, 458(7242), 1163.

Antholis, W.J., 2009. U.S.-European Union Cooperation on Climate Change. In
Essen, Germany: The Brookings Institution. Available at:
http://www.brookings.edu/speeches/2009/0610_climate_antholis.aspx
[Accessed June 16, 2009].

Arguelles, M., Benavides, C. & Junquera, B., 2006. The impact of economic
activity in Asturias on greenhouse gas emissions: consequences for
environmental policy within the Kyoto Protocol framework. Journal of
Environmental Management, 81(3), 249-264.

Australian Department of Climate Change, 2009. National Greenhouse Gas
Inventory 2007: The Australian Government Submission to the UN
Framework Convention on Climate Change May 2009 (Volume 1),
Canberra: Australian Government. Available at:
http://www.climatechange.gov.au/inventory/2007/national-report.html
[Accessed September 19, 2009].

Barker, T. et al., 2007. Carbon leakage from unilateral environmental tax
reforms in Europe, 1995-2005. Energy Policy, 35(12), 6281-6292.

Baumol, W.J. & Wolff, E.N., 1981. Subsidies to New Energy Sources: Do They
Add to Energy Stocks? The Journal of Political Economy, 891-913.

Bayindir-Upmann, T. & Raith, M.G., 2003. Should high-tax countries pursue
revenue-neutral ecological tax reforms? European Economic Review,
47(1), 41-60.

BBC News, 2009. India-China climate change deal. BBC News. Available at:
http://news.bbc.co.uk/2/hi/south_asia/8318725.stm [Accessed November
2, 2009].

Bento, A.M. & Jacobsen, M., 2007. Ricardian rents, environmental policy and
the `double-dividend' hypothesis. Journal of Environmental Economics
and Management, 53(1), 17-31.

Beuermann, C. & Santarius, T., 2006. Ecological tax reform in Germany:
handling two hot potatoes at the same time. Energy Policy, 34(8), 917-
929.

Bohi, D.R. & Burtraw, D., 1997. SO2 Allowance Trading: How Experience and
Expectations Measure Up, Washington DC: Resources for the Future.
Available at: http://www.rff.org/Documents/RFF-DP-97-24.pdf.

Bosquet, B., 2000. Environmental tax reform: does it work? A survey of the
empirical evidence. Ecological Economics, 34(1), 19-32.

Boucher, R., 2008. Boucher to consider tariffs on goods from developing
countries. Platts Coal Outlook, 15.

Broder, J.M., 2009a. Climate Bill Clears Hurdle, but Others Remain. The New
York Times. Available at:
http://www.nytimes.com/2009/05/22/us/politics/22climate.html?_r=1
[Accessed May 31, 2009].

Broder, J.M., 2009b. E.P.A. Moves to Curtail Greenhouse Gas Emissions. The
New York Times. Available at:
http://www.nytimes.com/2009/10/01/science/earth/01epa.html?
_r=2&th&emc=th [Accessed October 1, 2009].

Broder, J.M., 2009c. From a Theory to a Consensus on Emissions. The New
York Times. Available at:
http://www.nytimes.com/2009/05/17/us/politics/17cap.html?
pagewanted=print [Accessed May 17, 2009].

Broder, J.M., 2008. Gore urges change to dodge an energy crisis. The New York
Times. Available at:
http://www.nytimes.com/2008/07/18/washington/18gore.html?fta=y
[Accessed July 21, 2008].

Broder, J.M. & Ansfield, J., 2009. China and U.S. Seek a Truce on Greenhouse
Gases. The New York Times. Available at:
http://www.nytimes.com/2009/06/08/world/08treaty.html?
_r=2&th&emc=th [Accessed June 10, 2009].

Burck, J., Bals, C. & Ackermann, S., 2008. Climate Change Performance Index
2009: A comparison of the 57 top CO2 emitting nations, Bonn, Germany:
Germanwatch & CAN Europe. Available at:
http://www.germanwatch.org/klima/ccpi.htm [Accessed April 18, 2009].

Butler, D., 2009. France unveils carbon tax: Nature talks to climatologist Jean
Jouzel about the plans. Nature News. Available at:
http://www.nature.com.ezproxy.lib.uts.edu.au/news/2009/090911/full/ne
ws.2009.905.html [Accessed September 12, 2009].

Coase, R.H., 1960. The problem of social cost. The Journal of Law and
Economics, 3(1), 1.

Crutzen, P., 1973. Photochemical reactions initiated by and influencing ozone in the troposphere.

Crutzen, P.J., 1970. The influence of nitrogen oxides on the atmospheric ozone
content. Quarterly Journal of the Royal Meteorological Society, 96(408).

Deroubaix, J. & Leveque, F., 2006. The rise and fall of French ecological tax
reform: social acceptability versus political feasibility in the energy tax
implementation process. Energy Policy, 34(8), 940-949.

DeSouza, M., 2007. Canada declares APEC success, developing countries say
they were bullied. Canada National Post. Available at:
http://www.nationalpost.com/news/story.html?id=ef4bb5b6-81dc-4d36-
a7bd-25d20fdc9462&k=52757 [Accessed April 27, 2008].

Dyson, F., 2008. The question of global warming. The New York Review of
Books, 55(10). Available at: http://www.nybooks.com/articles/21494
[Accessed June 4, 2008].

EarthTrends, 2007. CO2 Emissions: Cumulative CO2 emissions, 1900-2004.
World Resources Institute. Available at:
http://earthtrends.wri.org/searchable_db/index.php?
theme=3&variable_ID=779&action=select_countries [Accessed
September 7, 2009].

Gore, A., 2008. New thinking on the climate crisis. In TED Conferences, LLC.
Available at: http://www.ted.com/index.php/talks/view/id/243 [Accessed
April 11, 2008].

Gruber, J., 2007. Public finance and public policy 2nd ed., New York, NY: Worth
Publishers.

Gupta, J., 2009. 'Copenhagen treaty' drafted by NGOs matches Indian actions.
Samay Live. Available at: http://www.samaylive.com/news/copenhagen-
treaty-drafted-by-ngos-matches-indian-actions/660830.html [Accessed
October 8, 2009].

Hansen, J.E., 2009. Can We Reverse Global Climate Change? Part I: Carbon tax
and dividend is the solution. Available at:
http://yaleglobal.yale.edu/display.article?id=12371 [Accessed June 10,
2009].

Hansen, J.E., 2008a. Global Warming Twenty Years Later: Tipping Points Near:
Briefing before the Select Committee on Energy Independence and
Global Warming, U.S. House of Representatives, and also presented at
the National Press Club on June 23., Washington: Select Committee on
Energy Independence and Global Warming, US House of
Representatives. Available at: http://www.columbia.edu/~jeh1/
[Accessed June 25, 2008].

Hansen, J.E., 2008b. Request to Australian Prime Minister Kevin Rudd to halt
construction of coal-fired power plants. Available at:
http://www.columbia.edu/
%7Ejeh1/mailings/20080401_DearPrimeMinisterRudd.pdf [Accessed
June 23, 2008].

Hayek, F.A., 1988. The fatal conceit: The errors of socialism W. W. Bartley, ed.,
London: Routledge.

Hayek, F.A., 2001. The road to serfdom, Routledge.

Hayek, F.A., 2005. The Use of Knowledge in Society. New York University Journal of Law & Liberty, 1, 5.

Herbert, B., 2008. Yes we can. The New York Times. Available at:
http://www.nytimes.com/2008/07/19/opinion/19herbert.html?
_r=2&th&emc=th&oref=slogin&oref=slogin [Accessed July 21, 2008].

Hill, M.R., 2001. Sustainability, greenhouse gas emissions and international
operations management. International Journal of Operations &
Production Management, 21(12), 1503-1520.

Hourcade, J. & Robinson, J., 1996. Mitigating factors: Assessing the costs of
reducing GHG emissions. Energy Policy, 24(10-11), 863-873.

Hurwicz, L., 1995. What is the Coase theorem? Japan & The World Economy,
7(1), 49-74.

IPCC, 2001. Climate Change 2001: Synthesis Report. A Contribution of
Working Groups I, II, and III to the Third Assessment Report of the
Intergovernmental Panel on Climate Change [Watson, R.T. and the Core
Writing Team (eds.)]., Cambridge University Press, Cambridge, United
Kingdom, and New York, NY, USA. Available at:
http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].

IPCC, 1995. IPCC Second Assessment Climate Change 1995: A Report of the
Intergovernmental Panel on Climate Change, Geneva: IPCC. Available
at: http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].

IPCC, 2007. The Physical Science Basis. Contribution of Working Group I to
the Fourth Assessment Report of the Intergovernmental Panel on
Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis,
K.B. Averyt, M. Tignor and H.L. Miller (eds.)], Cambridge University
Press, Cambridge, United Kingdom and New York, NY, USA: United
Nations. Available at: http://www.ipcc.ch/ipccreports/ar4-wg1.htm
[Accessed April 5, 2008].

Jacoby, H.D. & Reiner, D.M., 2001. Getting climate policy on track after The
Hague. International Affairs (Royal Institute of International Affairs
1944-), 77(2), 297-312.

Kanter, J. & Castle, S., 2009. E.U. Reaches Funding Deal on Climate Change.
The New York Times. Available at:
http://www.nytimes.com/2009/10/31/science/earth/31iht-UNION.html?
_r=3&adxnnl=1&adxnnlx=1257202699-iy2yXmWTpZCJisFNGCA+aA
[Accessed November 2, 2009].

Karl, T.R., Melillo, J.M. & Peterson, T.C. eds., 2009. Global Climate Change
Impacts in the United States, Cambridge University Press. Available at:
http://www.globalchange.gov/publications/reports/scientific-
assessments/us-impacts [Accessed June 17, 2009].

Karoly, D., 2007. Synthesis Report: an assessment of the Intergovernmental


Panel on Climate Change. Fourth Assessment Report: National Online
Media Briefing by Australian members of the Working Group. Available
at: http://www.aussmc.org/IPCC_Synthesis_online_briefing.php
[Accessed April 5, 2008].

Keeling, C.D. & Whorf, T.P., 2005. Atmospheric CO2 records from sites in the
SIO air sampling network. Trends: A compendium of data on global
change, 16–26.

Klein, N., 2009. Obama isn't helping. At least the world argued with Bush. The
Guardian. Available at:
http://www.guardian.co.uk/commentisfree/cifamerica/2009/oct/16/obam
a-isnt-helping [Accessed October 17, 2009].

Krauss, C. & Galbraith, K., 2009. Climate Bill Splits Exelon and U.S. Chamber.
The New York Times. Available at:
http://www.nytimes.com/2009/09/29/business/energy-
environment/29chamber.html?_r=2&emc=tnt&tntemail1=y [Accessed
October 7, 2009].

Krugman, P., 2009. The Joy of Sachs. The New York Times. Available at:
http://www.nytimes.com/2009/07/17/opinion/17krugman.html?_r=1&em
[Accessed July 19, 2009].

Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental Crisis, 2005 ed., Melbourne: Black Inc.

Lydersen, K., 2009. Scientists: Pace of Climate Change Exceeds Estimates. The
Washington Post. Available at: http://www.washingtonpost.com/wp-
dyn/content/article/2009/02/14/AR2009021401757_pf.html [Accessed
February 18, 2009].

Lynas, M., 2008. Climate chaos is inevitable. We can only avert oblivion. The
Guardian. Available at:
http://www.guardian.co.uk/commentisfree/2008/jun/12/climatechange.sc
ienceofclimatechange [Accessed June 19, 2008].

MacDonald, G., 1989. The long term impact of atmospheric carbon dioxide on
climate.

McKibbin, W.J. & Wilcoxen, P.J., 1999. The theoretical and empirical structure
of the G-Cubed model. Economic modelling, 16(1), 123-148.

McNeil, B., 2009. The Clean Industrial Revolution: growing Australia's
prosperity in a greenhouse age, Sydney, Australia: Allen & Unwin.

Meinshausen, M. et al., 2009. Greenhouse-gas emission targets for limiting
global warming to 2 °C. Nature, 458(7242), 1158.

Nierenberg, W.A., 1983. Changing climate: Report of the Carbon Dioxide Assessment Committee, Board on Atmospheric Sciences and Climate, Commission on Physical Sciences, Mathematics, and Resources, National Research Council, Washington DC: US National Research Council: Carbon Dioxide Assessment Committee.

NOAA, 2008. National Oceanic and Atmospheric Administration Report:
Carbon dioxide, methane rise sharply in 2007. Available at:
http://www.noaanews.noaa.gov/stories2008/20080423_methane.html
[Accessed April 22, 2008].

Nordhaus, W.D., 2008. A Question of balance: weighing the options on global
warming policies, Yale University Press.

Obama, B., 2009. Remarks on the Economy at Georgetown University. The
New York Times. Available at:
http://www.nytimes.com/2009/04/14/us/politics/14obama-text.html?
_r=1&ref=weekinreview [Accessed April 19, 2009].

Oreskes, N. & Renouf, J., 2008. Jason and the secret climate change war -
Times Online. Available at:
http://www.timesonline.co.uk/tol/news/environment/article4690900.ece
[Accessed September 15, 2008].

Pascal, B., 1662. Pascal's Pensées, France: E. P. Dutton & Co., Inc. 1958.
Available at: http://www.gutenberg.org/files/18269/18269-h/18269-h.htm
[Accessed November 2, 2009].

Pasternack, A., 2009. Post-Bangkok Q&A with Antonio Hill, Oxfam's Climate
Envoy: We Need a "Major Turnaround". TreeHugger. Available at:
http://www.treehugger.com/files/2009/10/antonio-hill-oxfam-climate-
representative-bangkok-meeting.php [Accessed October 17, 2009].

Pigou, A.C., 1920. The economics of welfare, 2009 ed., New Brunswick, New
Jersey: Transaction Publishers.

Pilkington, E., 2008. Put oil firm chiefs on trial, says leading climate change
scientist. The Guardian. Available at:
http://www.guardian.co.uk/environment/2008/jun/23/fossilfuels.climatec
hange [Accessed June 24, 2008].

ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.

Ramanayake, W., 2009. Time runs out at Bangkok climate meeting. Climate
Change Media Partnership. Available at:
http://www.climatemediapartnership.org/reporting/stories/time-runs-
out-at-bangkok-climate-meeting/ [Accessed October 17, 2009].

Revelle, R. & Suess, H.E., 2002. Carbon dioxide exchange between atmosphere
and ocean and the question of an increase of atmospheric CO2 during
the past decades. Climate Change: Critical Concepts in the
Environment, 9(1), 92.

Ricardo, D., 1817. On the principles of political economy and taxation, 3rd ed., London: John Murray. Available at:
http://www.econlib.org/library/Ricardo/ricP.html [Accessed April 10,
2008].

Rowland, F.S. & Molina, M.J., 1975. Chlorofluoromethanes in the environment.
Reviews of Geophysics, 13(1).

RTTNews, 2009. India stands by Kyoto Protocol on climate change. RTT News.
Available at: http://www.rttnews.com/Content/GeneralNews.aspx?
Node=B1&Id=1097683 [Accessed November 2, 2009].

Sachs, J., 2008. Common Wealth: Economics for a Crowded Planet, The Penguin Press.

Schellnhuber, H.J. et al., 2009. Solving the climate dilemma: The budget
approach, Berlin: German Advisory Council on Global Change (WBGU).
Available at: http://www.wbgu.de/wbgu_sn2009_en.html [Accessed
September 5, 2009].

Schwägerl, C., 2009a. German Climate Adviser: 'Industrialized Nations Are
Facing CO2 Insolvency'. Spiegel Online - News - International. Available
at:
http://www.spiegel.de/international/germany/0,1518,646506,00.html#re
f=rss [Accessed September 5, 2009].

Schwägerl, C., 2009b. US Wants a 'Legally Binding Climate Agreement' -
interview with US deputy climate change envoy Jonathan Pershing.
Spiegel Online - News - International. Available at:
http://www.spiegel.de/international/world/0,1518,630073,00.html#ref=r
ss [Accessed June 15, 2009].

Scott, M., 2009. Can Washington Learn from Brussels' Mistakes?: Avoiding
Europe's Carbon Trading Missteps. SPIEGEL ONLINE - News -
International. Available at:
http://www.spiegel.de/international/business/0,1518,641053,00.html#re
f=rss [Accessed August 18, 2009].

Sijm, J.P.M. et al., 2004. Spillovers of climate policy - an assessment of the
incidence of carbon leakage and induced technological change due to
CO2 abatement measures, Available at:
http://demo.openrepository.com/rivm/handle/10029/8960 [Accessed
April 26, 2008].

Slattery, L., 2008. Abstract to application: after years of being ignored by the
money markets, academic economists are delighted as hard times and
complex problems mean demand for their skills. The Australian, 28-29.

Spiegel Online, 2009. Fighting Global Warming: German Scientists Call for
'World Climate Bank'. Spiegel Online - News - International. Available
at: http://www.spiegel.de/international/germany/0,1518,645996,00.html
[Accessed September 1, 2009].

Stolberg, S.G., 2008. Richest Nations Pledge to Halve Greenhouse Gas. The
New York Times. Available at:
http://www.nytimes.com/2008/07/09/science/earth/09climate.html?
_r=2&th=&oref=slogin&emc=th&pagewanted=print&oref=slogin
[Accessed July 9, 2008].

Tällberg Foundation, 2008. How on earth can we live together - 350: remember
this number for the rest of your life. Available at:
http://www.tallbergfoundation.org/T
%C3%84LLBERGINITIATIVES/350/tabid/429/Default.aspx [Accessed
July 13, 2008].

Tripati, A.K., Roberts, C.D. & Eagle, R.A., 2009. Coupling of CO2 and Ice Sheet
Stability Over Major Climate Transitions of the Last 20 Million Years.
Science, 1178296.

Uhlig, H., 1999. A toolkit for analyzing nonlinear dynamic stochastic models
easily. In R. Marimon & A. Scott, eds. Computational Methods for the Study of Dynamic Economies. Oxford and New York: Oxford University Press, pp. 30-61.

United States Congressional Budget Office, 2009. How CBO Estimates the
Costs of Reducing Greenhouse-Gas Emissions, Washington DC: The
Congress of the United States. Available at: http://cboblog.cbo.gov/?
p=238 [Accessed June 17, 2009].

United States Environmental Protection Agency, 2004. Cap and Trade: Acid
Rain Program Results, Available at: http://www.epa.gov/airmarkets/cap-
trade/docs/ctresults.pdf.

Weiss, D.J., 2008. Global Warming Solution Studies Will Overestimate Costs,
Underestimate Benefits. Net News Publisher. Available at:
http://www.netnewspublisher.com/global-warming-solution-studies-will-
overestimate-costs-underestimate-benefits/ [Accessed July 19, 2009].

Whiteman, H., 2009. Pachauri: Stern stance on China climate talks 'pragmatic'.
CNN.com/technology. Available at:
http://edition.cnn.com/2009/TECH/science/06/15/ipcc.pachauri.climate.c
hange/ [Accessed June 16, 2009].

Yeoh, E. & Gosh, A., 2007. Hu, on climate change, says rich nations should
clean up acts. Bloomberg: Asia. Available at:
http://www.bloomberg.com/apps/news?
pid=20601080&sid=amhVcqkG6sz8 [Accessed April 27, 2008].

i At Harvard, Al Gore became Revelle's student and research assistant
ii Ralph Keeling, Charles' son, continued this work when his father died in 2005
iii Tripati's CO2 data is provided at aradhna.tripati.googlepages.com/CO2recon.txt
iv Informally it was known as the Jason Model of the World
v The mean temperatures in the USA in the 1930s were very warm and similar to
the 1990s while globally the 1940s were warmer. However, the year 2006 was
the hottest on record and temperatures generally through 2000-2007 were
higher than any other recorded year by 0.4°C to 0.48°C. UAH report the 30 year
global trend as +0.13°C per decade, in line with IPCC projections. While it
cannot be argued that Hansen was unambiguously correct, it is incorrect to
extend this point to a conclusion that there has been zero net warming since
1988. The real problem in these "data arguments" is that there was no reliable
data until the 1980s. This is complicated by the fact that terrestrial monitors
produce different measurements to satellite monitors. Both climate change
advocates and climate change sceptics can choose periods and monitoring
technologies that make their case
vi Hansen does not favour a “cap and trade” emissions trading system.
Furthermore, a bill to implement a “cap and trade” emissions trading system
had recently failed to achieve US Senate support in June 2008
vii With similar letters to Australia's State Premiers
viii The IPCC's dire outlook is highlighted by Pascal's Wager. In finding the existence of God to be beyond reason, the French philosopher Blaise Pascal suggested that it would be wise to behave as if God existed because in doing so
one has everything to gain, and nothing to lose (Pascal 1662, Note 233). If
Pascal's logic is extended to climate change, then countries would best address
the issue of CO2 emissions because they have everything to gain and nothing to
lose. There are three obvious gains. The first gain is to avoid the extraordinarily
high risk of a catastrophic situation where the hypothesised outcomes from
global warming and sea acidification indeed take place. The second gain is that
addressing emissions will greatly reduce pollution. Experience has shown that
this is generally a good thing to do. The third gain is that accelerating total
factor productivity through rapid technological advances will bring much better
standards of living to a much broader base of the world's population. As regards
having nothing to lose, there is no cost on consumers of addressing emissions if
revenue neutral environmental taxes are used as the policy instrument
ix An increase of 38% since pre-1850 levels of 280 ppm
x “Agree and ignore” where governments are assumed to not act for many reasons
ranging from continuing scepticism, a desire to protect their industries, to more
complex geopolitical and "game theory" reasons such as free-riding. There is
also natural concern about the effects of massive change from voluntary
proactive action that has material adverse effects (particularly on powerful
groups such as energy companies, generators, automotive producers and petrol
consumers) when reasons have an ideological component (because in science
there is no absolute certainty), and the new costs and new problems in society
that will be exposed such as carbon-profiteering or one nation being advantaged
over another (as can occur in free-trade agreements)
xi "Step change" in which all governments respond to major climate change
disasters in 2009 and 2010 with strong policy measures. This scenario mirrors
acid rain, which is the only time that world governments have acted in concert.
The governments agreed to cooperate only when the devastating evidence of
pollution was obvious and compelling. In the "step change" scenario, an
international treaty would require all carbon producing companies (coal mines
and oil and gas wells) in all countries to bid for a limited and decreasing number
of carbon permits in a world carbon permit market. The UN would set the
"upstream cap". The price of permits would presumably soar because of their
scarcity and demand for emissions-intensive products would fall
commensurately. The trillions of dollars raised in auctioning permits would be
spent on offsetting these impacts on humanity and on the transition to the new low-carbon economy: for example, the relocation of nations like
Bangladesh, Kiribati and the Maldives; amelioration of drought in West Africa,
Somalia and Ethiopia; cushioning the effect of price rises on poor nations
xii In 2007, Australia's CO2 equivalent emissions declared under the Kyoto Protocol
were 825.9 million tonnes or 39.3 tonnes per capita. Schellnhuber's table draws
on these Kyoto Protocol declarations. Some countries benefit from “Land Use,
Land Use Change and Forestry” sinks. For example, America's CO2 net
emissions offset a sink benefit of 15% of the industrial total. Australia
experiences a “Land Use” source due to bushfires and land clearing. Excluding
“Land Use”, Australian emissions were 541.2 million tonnes or 25.8 tonnes per
capita, which is the figure publicised by the Australian Government. The
attractive concession granted uniquely to Australia so it would sign the Kyoto
Protocol, that “Land Use” from bushfires and clearing is offset by new growth,
has been actively debated with regard to many countries which would like the
same concession. The apparent symmetry in the assumption continues to be
seen as an error of logic and a glaring loophole. The current approach in the
lead-up to the Copenhagen meeting in December 2009 is that countries would
account for their “Land Use” but not be rewarded for new growth
xiii The “About This Report” preamble to the Global Climate Change Impacts in the
United States report notes: The USGCRP called for this report. An expert team
of scientists operating under the authority of the Federal Advisory Committee
Act, assisted by communication specialists, wrote the document. The report was
extensively reviewed and revised based on comments from experts and the
public. The report was approved by its lead USGCRP Agency, the National
Oceanic and Atmospheric Administration, the other USGCRP agencies, and the
Committee on the Environment and Natural Resources on behalf of the National
Science and Technology Council …. The report draws from a large body of
scientific information. The foundation of this report is a set of 21 Synthesis and
Assessment Products (SAPs), which were designed to address key policy-
relevant issues in climate science; several of these were also summarised in the
Scientific Assessment of the Effects of Climate Change on the United States
published in 2008. In addition, other peer-reviewed scientific assessments were
used, including those of the Intergovernmental Panel on Climate Change, the
U.S. National Assessment of the Consequences of Climate Variability and
Change, the Arctic Climate Impact Assessment, the National Research Council’s
Transportation Research Board's report on the Potential Impacts of Climate
Change on U.S. Transportation, and a variety of regional climate impact
assessments. These assessments were augmented with government statistics as
necessary (such as population census and energy usage) as well as publicly
available observations and peer-reviewed research published through the end of
2008
xiv Note President Obama's speech to the United Nations General Assembly on 23
September 2009 (refer to discussion in Chapter 2, United Nations Security
Council Resolution 1887)
xv Confirmed in 2008 by Dr. George M. Woodwell, one of the few members of that
committee still alive: “Yes, I remember well that committee and how it was
controlled and deflected by new economic influences as the environmental
issues appeared to become acute. The study was under the auspices of the
National Research Council of the National Academy of Sciences, not the
National Science Foundation. We resorted to individual papers because we
could not agree, or see any way to agree, on a single report. Even within my
own paper there was systematic pressure to dilute the statements and the
conclusions. I had previously written and signed along with Roger Revelle,
David Keeling, and Gordon MacDonald a stronger statement for the CEQ at the
end of the Carter administration. That statement was widely publicised by Gus
Speth, then Chairman of CEQ, and ultimately used in testimony in the Congress
and as background for the Global 2000 Report published by CEQ in 1980. As far
as the summary statement of the Report was concerned, as the Preface states:
there were "no major dissents". That means no one chose to fight with the
chairman. It was poor, sickly job, deliberately made so for political reasons
characteristic of the corruption of governmental purpose in the Reagan regime.
Naomi Oreskes has it right.” Private correspondence disclosed with permission
by John Mashey in a comment submitted on Wed, 2008-09-10 17:26 (Littlemore
2008)
xvi The present world consumption of oil is 300 billion barrels per decade. It is estimated that there is only 1.2 trillion barrels remaining. The IEA has forecast that the world needs to increase energy production by 50% from 14 terawatts (TW) in 2008 to 21 TW in 2030. The present mix is oil 5, coal 4, gas 3 and other (nuclear and renewable energy) 1.5-2.0 TW. With peak-oil threatening to
reduce the availability and percentage contribution of oil, there is an acute need
for both substitute energy sources and alternative liquid fuels from coal-to-oil
plants and shale oils. While wind, solar, tidal, geothermal, hydrogen and algae
are worthwhile technologies and need to be pursued to the utmost, only nuclear
fission (and ultimately fusion) has the ability to satisfy increasing demand for
electricity while reducing emissions. At present France generates 70% of its
power from nuclear. Japan also has a high nuclear component. The nuclear
threats are reactor meltdown (as with Chernobyl's graphite cooled reactor) and
weapons proliferation. The meltdown issue has been solved with new pebble-
bed reactors that do not have the negative coefficients of reactivity in water
cooled graphite reactors where a meltdown occurs if the cooling fails. When
pebble-bed reactors are turned-off, they simply cool down
xvii Similar to the Scopes trial in the 1920s, which was a clash of creationists and
evolutionists
xviii Donald Aitkin was formerly the vice-chancellor of the University of Canberra,
foundation chairman of the Australian Research Council and a researcher at the
Australian National University and Macquarie University
xix Deforestation of an acre of rainforest trees releases about 200 tonnes of carbon
xx Levied on the supply of fuels and electricity to industry, commerce, agriculture
and public administration
xxi By 30% if other developed countries commit themselves to comparable
reductions
xxii An interregnum in Washington existed when this message was delivered by
Barack Obama's informal emissary, John Kerry. President-elect Barack Obama's
20 January 2009 inauguration was still 6 weeks away
xxiii William Nordhaus uses a 4% discount rate, which is the same conservative rate that economists often use for long term projects. At 4%, a $1,000 benefit at the hundredth year would be discounted to just $20. The same $1,000 at the two-
hundredth year would be worth just 39c. However, global warming and long
term mitigation over 200 years is not a normal project. The term is considerably
longer than the projects to which 4% is usually applied. Sir Nicholas Stern
maintains that no discounting should be applied because discriminating
between current and future generations is unethical.
xxiv Representatives Henry A. Waxman of California and Edward J. Markey of
Massachusetts (Democrats)
xxv The meeting was at the Commune Hotel, located at the Great Wall
xxvi US Department of Energy statistics
xxvii Ocean acidification would prevent crustaceans forming their shells, dissolve
coral reefs, threaten food security, reduce coastal protection and damage local
economies. Acidification would be irreversible for thousands of years
xxviii Equivalent to a 14% reduction by 2020 (compared to 2005 levels)
xxix In an attempt to placate anger against Japan's 8% target, America's deputy
climate change envoy, Jonathan Pershing, noted Japan's new target was for
domestic reductions and compared favourably to the European Union's target of
20%, which allows for half of the reductions to be achieved through projects in
developing nations. However, the American support is misleading because the
important price effects of the two policies are not comparable
xxx During the G20's September 2009 Pittsburgh meeting, Japan's recently elected
Prime Minister Yukio Hatoyama expressed optimism that Japan would achieve a
full 25% reduction in emissions
xxxi China National Development and Planning Commission Climate Policy Paper,
21 May 2009
xxxii The European Union's September 2009 commitment to the United Nations
Adaption Fund is part of a total package from industrialised nations of US$33
billion to US$74 billion per annum
xxxiii 16 September is now designated World Ozone Day
xxxiv President Sarkozy has also noted that France and Germany may introduce a
“border adjustment tax” on the assessed CO2 pollution content of goods
imported from countries with inferior climate control measures. India noted that
it could respond with a 99% tax on goods imported from countries that have
created the CO2 pollution problem
xxxv In 2008, France generated 80% of its electricity from nuclear, compared to
23% from nuclear in Germany. The French say of nuclear electricity generation:
“No oil, no gas, no coal, no choice.” As well as being one of the largest net
exporters of electricity, France is a major exporter of nuclear technology
xxxvi Scrubbers mix lime with the flue gases from coal-fired power stations to form
calcium sulphate
xxxvii Notwithstanding the adoption of America's proposal for emissions trading between nations, America did not ratify the Kyoto Protocol
xxxviii There is a common attitude that it is up to the governments of the third
world countries to protect their citizens from practices such as sweatshops and
child labour. However, often these governments deliberately turn a blind eye to
these practices, or merely pay lip-service, in order to earn foreign currency or
because they receive incentive payments or even bribes
xxxix Analogous to wealthy nations buying products from nations with sweatshops,
child labour, slavery, abuse of human rights and abuse of the environment.
Commodities often associated with these types of practices are chocolate,
coffee, gold, diamonds, sports products, durable goods etc.
xl As Japan does to secure the votes of small nations in retaining loop-holes in the
moratorium on whaling. Another example is that in the period from 2001 to
2007, Australia began a deliberate policy of using foreign aid as a tool of
political intervention in surrounding countries
xli For example, a situation analogous to the stealthy take-over of a company on the
stock market without making a proper takeover offer including adequate
premium for control. Another example in the “property market” is where
Israelis acquired the homes of Palestinian families in Jerusalem for modest
prices, which was however part of an overall covert plan to remove Palestinian
families from areas of Jerusalem. The Government of Israel intervened and
declared this process to be unethical in disenfranchising a class of people of
their rights without adequate compensation

Chapter 4 Economic Models for Climate Change Policy Analysis
Chapter 1 Introduction identified the role of computable general equilibrium
modelling in policy research. Chapter 2 Political economy of the Anglo-
American economic world view determined that neoclassical economics and
modelling had strengths and weaknesses but remained a primary tool for
evaluating policies in Anglo-American economies. Chapter 3 Political economy
of the Anglo-American world view of climate change identified the key
elements to be considered in climate policy, including changing stakeholder
attitudes and technology concerns, the various ways targets might be framed
and policy instruments for achieving these targets. The objective of this
Chapter is to build on this fabric of change by identifying a suitable
computable general equilibrium modelling approach, mathematical platform
and data source to achieve the research aim.

4.1 Survey of Computable General Equilibrium modelling literature
A computable general equilibrium (CGE) model is a system of equations that
describe the economy, international trade, technology and, nowadays, climate
science. Economists, technologists and ecologists use these models to simulate
policy options by solving the complex interactions between different
technological processes and labour markets across wealthy, rapidly developing
and poor regions. For example, the position taken by America, the United
Kingdom and Australia at Kyoto and in Australia's recent Carbon Pollution
Reduction Scheme (CPRS) were formulated on the basis of CGE modelling.

Chapter 1 Introduction noted that CGE dependence on simplified assumptions in the internally consistent neoclassical paradigm is both a strength and
weakness. In Chapter 2 Political economy of the Anglo-American economic
world view it was noted that neither individual nor collective behaviour can be
fully predicted by sets of equations.

Science seeks to explain natural laws and the working of the universe through
testing theories in controlled experiments ceteris paribus.i In contrast, economic and socio-technical engineering problems are huge, holistic and
often pressing issues with many feedback loops and dependencies. These
problems are at the core of the fabric of society. They require practical
solutions with mathematical precision, while at the same time guarding against
misplaced confidence in apparently precise numbers and recognising that the
results are merely indicators of possible trends. Examples are Australia's
planned Carbon Pollution Reduction Scheme (CPRS), the re-engineering of
banking systems and major infrastructure development.

This Chapter briefly addresses the heritage of CGE economic climate models
that form the jewel of many public policy centres. ii

General equilibrium theory


The discipline of economics arose from attempts to understand changes to the
structure of society arising from the Industrial Revolution c1750. The seminal
work is Adam Smith's The Wealth of Nations (1776), which celebrates the
invisible hand of capitalism and in so doing gave birth to “classical
economics”.iii

Of course, there were many earlier kinds of invisible hands. One of the first
was Bernard Mandeville's The Fable of the Bees: Private Vices, Publick Benefits
(1723), in which he marvelled that private vices, which are publicly deplored,
such as greed, vanity and ambition, indeed lead to the public virtue of
prosperity.

Another invisible hand was the Physiocrats' le droit naturel, or natural order of
things, which governed economic and social equilibriums. A prominent
member of the Physiocrats, François Quesnay is remembered for developing
France's Le Tableau économique (1758). This was the world's first economic
input-output table.iv Today a form of Quesnay's table can be found at the core
of all systems of national accounts.

In the 1930s, Wassily Leontief developed Quesnay's Tableau into a systematic approach to economic analysis. This laid the foundation for modern computable general equilibrium models of economics and climate change (Wassily W.
Leontief 1955; ten Raa 2005). Leontief received the 1973 Nobel Prize “for the development of the input-output method and for its application to important
economic problems.” His major contribution in this area is Input-Output
Economics (1966). Nowadays, the classic textbook on this topic is Miller & Blair's Input-Output Analysis: Foundations and Extensions (1985).

Perhaps the earliest recognition of economics as a formal discipline and the role of “minimax” in social policy optimisation was by Claude-Henri de Rouvroy, Comte de Saint-Simon, after the 1794 Terror in France. Erratic but inspired, Saint-Simon was the first to maintain that mathematics would
determine economics and economics would determine the future history of the
world (Strathern 2002, pp.142-3).v

It was noted in Chapter 2 Political Economy of the Anglo-American economic world view that the Rev. Thomas Robert Malthus was instrumental in
preparing the first mathematical-economic model and refuting Jean-Baptiste
Say's assertion that market failure could not occur in capitalism (Malthus
1798; 1820).

Chapter 2 also discussed John Stuart Mill's philosophy that production was not
the sole purpose of human existence. His book Principles of Political Economy
(1848) became the primary nineteenth-century textbook on classical economics. It
provided the unique new insight that production and consumption (or what he
called distribution) were decoupled. However, it was not until Alfred Marshall
drew his masterful graph of microeconomic supply and demand scissor curves
that the ramifications of this were fully appreciated. Nevertheless, Mill did
appreciate that there was something akin to producer and consumer surplus
and that various moral policies (such as utilitarianism) could be applied to
consumption while not affecting production.

In Elements of Pure Economics (1877), Léon Walras developed the understanding of what constitutes a general equilibrium, or simultaneous
equilibrium and clearing of all market partial equilibriums in an economy.
Walras called his version of the hidden hand tâtonnement, which is the term still employed in all macroeconomic IS-LM models (where the price is the interest rate and the quantity is national income or money). Walras' understanding of
the interrelatedness of markets is crucial to the solution of major economic climate models where many equilibriums occur within industries in national
economies and commodity substitutions occur between industries in different
countries.

Walras reduced the general equilibrium to five equations that could not be
solved because the number of variables exceeded the number of equations. In formulating
his problem, Walras was perhaps one of the first people to understand how a
small number of economic equations can rapidly develop into a complex model
requiring the most capable methods of operations research for solution.

Alfred Marshall (1890) is credited with creating neoclassical economics, the mathematical cousin of classical economics. He believed his new discipline
would help in social reform. However, the new neoclassical economics could
not be raised to the status of a Science. It was unable to be tested for
falsification using controlled experiments in human behaviour (Blaug 1992).

Marshall's key innovation in classical economics was to extend Walras' general equilibrium by introducing his famous supply and demand curves for clearing
of markets at prices established through partial equilibria based on marginal
utility. Marginal utility was a psychological concept developed by William
Jevons (1871). Marshall's second innovation was to introduce the concept of
time where equilibria evolve with changes in technologies and consumer
preferences.

The paradigm of economic actors trading to maximise their utilities formulated by Marshall is comprehensive and internally consistent. However, its
simplifying assumptions lead to a number of weaknesses (Self 2000, p.6).
These include the assumption that all commodity prices are set by the microeconomic forces of supply and demand; the assumption of perfect competition, which ignores the presence of oligopolies, monopolies and price cartels; the assumption that all actors are rational and will continue to trade up to the point where their marginal gain is exhausted; the assumption that individual, social and other preferences are exogenous to the model; and the assumption that the distributions of wealth to different classes of individuals (entrepreneurs, landowners and labour) can be ignored.

Although Marshall had brought Walras' general equilibrium to maturity, there
was one last step. General equilibrium was thought to be merely a theoretical
construct. John von Neumann criticised its two weaknesses: that prices would
sometimes need to be negativevi and that the model was completely abstracted
from sociology, mechanical rather than human and social. In order to address
the first point, Von Neumann developed his own approach to general
equilibrium modelling (Von Neumann 1938; Champernowne 1945). Later he
developed “game theory” to address the behavioural weakness in general
equilibrium, which greatly enhanced Saint-Simon's tentative minimax social
optimisation (Von Neumann 1928; von Neumann & Morgenstern 1953).

Nowadays, we employ a number of von Neumann's modelling assumptions, for example, that capital investment can be accounted for by the accumulation of
commodities. Although Wassily Leontief had received a Nobel Prize for
thoroughly developing François Quesnay's ideas, John von Neumann was not
rewarded with the honour of a Nobel Prize for developing game theory from
the early thoughts of Saint-Simon, perhaps because von Neumann was such a
controversial person in other ways (Strathern 2002, pp.xiii-xxii & 275-89).

The real power of general equilibrium modelling arrived when Kenneth Arrow,
Gerard Debreu and Lionel McKenzie proved that a general equilibrium could
really exist in an economy (Arrow & Debreu 1954; Debreu 1959). The Arrow-
Debreu theory of general equilibrium showed that markets discount future
events including inventions that have not yet occurred. This led to the
widespread use of computable general equilibrium (CGE) models in policy
analysis. Arrow shared the 1972 Nobel Prize with John Hicks “for their
pioneering contributions to general economic equilibrium theory and welfare
theory.” In 1983, Gerard Debreu was also awarded the Nobel Prize “for having
incorporated new analytical methods into economic theory and for his rigorous
reformulation of the theory of general equilibrium.”

Computable General Equilibrium (CGE) modelling


The development of computing power since the 1970s has allowed policy
makers to test the feasibility of economic paradigms and potential
interventions in order to reduce the risk of policy failure. The abstract
Walrasian general-equilibrium structure has been enhanced to a degree where models for policy analysis have become quite realistic models of regional,
national and global economies. Appendix 2 CGE modelling describes at length
the techniques used in elementary CGE modelling.

Partial equilibrium models are suitable for most regional analysis. In partial
equilibrium analysis, major economic parameters such as economic growth are
provided exogenously and changes in resources are seen as perturbations to
the initial equilibrium. For example, changes to the demand curve do not affect
the supply curve.

Partial equilibrium models have the same consumer utility and production
functions, market clearance and resource constraints as generic general
equilibrium models. The one additional feature of a general equilibrium model is an income balance, where the prices of commodities multiplied by the commodity volumes are equal to (or less than) the prices of the resources multiplied by the volumes of resources. In the field of linear programming,
discussed later in this Chapter, this relationship is called the “Main Theorem of
Linear Programming.”
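
In stylised notation (a generic textbook statement of this duality, not a formulation drawn from any particular model in this dissertation), the relationship may be written as:

    \text{(primal)} \quad \max_{x \ge 0} \; p^{\top}x \quad \text{s.t.} \quad Ax \le b
    \text{(dual)} \quad \min_{y \ge 0} \; b^{\top}y \quad \text{s.t.} \quad A^{\top}y \ge p

Here x is the vector of commodity volumes, p their prices, b the resource endowments and y the resource prices. Weak duality gives p^{\top}x \le b^{\top}y for any feasible pair, and at the optimum the two values coincide: the value of commodities exactly exhausts the value of resources.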

The analysis of national and global affairs has increasingly required general
equilibrium models where growth is calculated endogenously, changes to the
demand curves of commodities have a major effect on the supply curves, and
the imports and exports of countries have a major effect on growth rates.

Most major countries in the world have developed models for World Trade Organisation and GATT negotiations, free trade agreements, economic integration,
taxation policies, public finance, development strategies, energy security and
greenhouse gas pollution policies.

Nevertheless, models are never complete and only ever a snapshot in the
journey of emulating the complex and changing marketplace of the globe.
There are many specialist mathematical algorithms and optimisation
limitations involved. Unless policy makers remain highly specialised, they can
rarely retain mastery of computable general equilibrium models as a practical
policy making tool. This means that the communication of results is always a

258
challenge from specialist researchers in policy at academic institutions to
policy makers in government and strategists in corporations.

Traditional computable general equilibrium (CGE) models are a set of simultaneous equations that can be solved to calculate the equilibrium balance
of an economy or set of economies. There are four main groups of equations:
prices and price elasticities, production and trade, economic actors
(households, enterprises, government, and a “rest of the world” institution)
and constraints for factors of production and commodities that have to be
satisfied for the system as a whole.

The economic equations and behaviour of actors are usually solved analytically
before being entered into the system of equations. These equations are then
solved simultaneously. Therefore, a traditional CGE model does not seek to
optimise any objective function. In practice, the equations are nonlinear and so cannot be solved by algebraic or linear techniques. Instead, an iterative solution-seeking algorithm adjusts prices until a solution to the model is found.
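
As a concrete illustration of this price-adjustment idea, the following minimal sketch solves a two-consumer, two-good exchange economy with Cobb-Douglas preferences by damped tâtonnement. All parameter values are invented for illustration; real CGE solvers are considerably more sophisticated.

    # Minimal tatonnement sketch for an invented two-good exchange economy
    import numpy as np

    alpha = np.array([0.3, 0.7])        # good-1 expenditure shares of consumers A, B
    endow = np.array([[1.0, 2.0],       # consumer A's endowment of goods 1, 2
                      [3.0, 1.0]])      # consumer B's endowment of goods 1, 2

    def excess_demand(p):
        """Market excess demand for good 1 at prices p = (p1, p2)."""
        income = endow @ p                    # value of each consumer's endowment
        demand = alpha * income / p[0]        # Cobb-Douglas demand for good 1
        return demand.sum() - endow[:, 0].sum()

    p = np.array([1.0, 1.0])                  # good 2 is the numeraire
    for _ in range(1000):
        z = excess_demand(p)
        if abs(z) < 1e-10:
            break
        p[0] *= 1 + 0.1 * z / endow[:, 0].sum()   # raise p1 where demand exceeds supply

    print(f"relative price p1/p2 = {p[0]:.4f}, residual excess demand = {z:.2e}")

By Walras' law, clearing the market for good 1 also clears the market for good 2, so adjusting a single relative price suffices in this two-good case.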

Before test policies are introduced, the equations are calibrated to explain the
payments recorded in national accounts, which are usually provided as an
Input-Output table or system of double entries within a Social Accounting
Matrix (SAM).

Computable General Equilibrium modelling strengths and weaknesses
The key strengths of CGE modelling are its robust consistency and use of real
world data. Neoclassical microeconomic and macroeconomic theory is well
developed and integrated into CGE models. The ability to endogenously model
consumer and producer behaviour endows CGE with the capacity to model
many different policies in the presence of inter-sectoral and intertemporal
effects, tax effects and changes to trade flows. It is possible to discriminate
between efficiency and distributional effects. In practice, CGE has proven to be
a reliable, flexible and readily extensible policy research tool.

Weaknesses in standard CGE formulations either relate to assumptions or
computational complexity. Even small CGE models with nonlinear formulations
rapidly become computationally complex and demanding (see Appendix 2 CGE
modelling).

Perhaps the major weakness in generic CGE models is the copious set of
assumptions involved. Firstly, markets are assumed to be in perfect
competition. It is assumed that both consumers and producers are respectively
rational utility and profit maximisers with the only determinant of their
behaviour being price. It is assumed that consumers are all price takers.

Secondly, equations are highly sensitive to many exogenous parameter assumptions, such as substitution, income and output elasticities. These are
often not well determined. For example, the consumer elasticity of substitution
assumes that everyone has the same rational behaviour and set of tradeoffs.

Functional forms developed through detailed engineering, industrial ecology and physical science provide major enhancement to model realism. However,
flexible functional forms such as Constant Elasticity of Substitution (CES) and
Transcendental Logarithmic (Translog) can be fitted to almost any situation. As
we have already observed, correlation doesn't mean causation. The ability to
force a fit by increasing the number of parameters isn't necessarily an
advantage. In fact, a surprising issue to many new researchers is how quickly
model complexity compounds when the researcher strives for realism by this
means. The more that a CGE model becomes complex and assumption infused,
the more it becomes a black box and loses meaning to everyone else. It is often
better to avoid increasing assumptions unnecessarily, and to actively reduce
assumptions, as will be discussed in regard to Occam's Razor in the next
Chapter.

Thirdly, models need to be calibrated. This requires more assumptions using a selected base year for data and analogues for extended data. In addition,
exogenous “macro closure” assumptions for government net surplus,
aggregate savings and investment, net exports and exchange rate are required
to fully determine the model. A key issue with CGE is that the “macro closure”
assumptions may come to dominate the performance of the model.

Fourthly, standard Cobb-Douglas and CES functional forms embody an
assumption of constant returns to scale. Therefore, standard formulations do
not provide for increasing or decreasing returns to scale.
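
The constant returns property can be seen directly in the standard functional forms (generic textbook notation, not a particular model's calibration):

    Y = A K^{\alpha} L^{1-\alpha}   (Cobb-Douglas)
    Y = A \left[ \delta K^{\rho} + (1 - \delta) L^{\rho} \right]^{1/\rho}   (CES, with elasticity of substitution \sigma = 1/(1-\rho))

Both forms are homogeneous of degree one, F(\lambda K, \lambda L) = \lambda F(K, L), so doubling all inputs exactly doubles output whatever the parameter values.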

Fifthly, international capital flows are not accommodated because there are no
international asset markets.

Sixthly, even though CGE equations are nonlinear, they are still linear in the
sense of a single non-discontinuous paradigm or frame of reference. CGE
models experience difficulty in migrating from one state to another, for
example, from an initial equilibrium to a new equilibrium in a dramatically
different paradigm.

One further weakness is common to all outputs from large, complex and
processing intensive models. Modellers need to be so diligent with
assumptions that they can fall into the trap of believing that the accuracy of
outputs, which is only an artefact of the technique, is or indeed should be
reality. However, the old maxim of “garbage in, garbage out” remains as valid
as ever and outputs are merely a function of the assumptions, equations and
numerical methodologies.

Of course, much work continues to improve CGE models. For example, the complexity of CGE models has been addressed in three ways. The first is specialist
modelling platforms such as GAMS and AMPL, which are discussed later in
this Chapter. Presolver eliminations and linearisation algorithms have
contributed greatly to computational feasibility. Lastly, new formats of
equations have been developed to simplify equation schemas, for example the Negishi (welfare optimum) format and the mixed complementarity problem (MCP) open economy format, which is well suited to econometric estimation (Ginsburgh &
Keyzer 1994, pp.93-7, 101-7 & 112-5). Stochastic programming has been
introduced to improve the understanding of risk.

Functional forms also have been extended for scale effects, monopolistic
competition, non-substitutable commodities and expanded product variety.
Different types of institutional behaviour have been modelled, for example, changing consumer preferences and intertemporal tradeoffs through different
discounting techniques.

Exogenous assumptions have been progressively reduced by endogenously determining investment and capital accumulation, technology innovation and
diffusion, changing labour force skill levels and population growth.

World class integrated assessment models


The first CGE model has been variously attributed to Ramsey/Cass/Koopmans (Ramsey 1928; Cass 1965; Koopmans 1965); Leontief's work for the American Government (1937; 1951; 1955); Johansen's important multi-sectoral study of economic growth (MSG) using input output analysis to dispute the Arrow-Debreu model (Johansen 1960); Leontief's student Hollis Chenery, who first computed the Arrow-Debreu model (Chenery & Uzawa 1958; Chenery & Raduchel 1969); Scarf's general equilibrium following Walras (Harberger 1962; Scarf 1967; Shoven & Whalley 1984); and Adelman and Robinson's work for the Korean Government (Adelman & Robinson 1978).vii

CGE energy models became popular following the 1973 and 1979 oil price crises; examples include the Ford Foundation's model (Hudson & Jorgenson 1974; Ford Foundation 1974) and Manne's ETA-MACRO model (Manne 1977). Mäler
(1974) is credited with the first CGE model encompassing public goods such as
environmental resources. However, it was not until the 1990s that energy CGE
models evolved into climate policy models, such as the OECD's global energy
and environment model GREEN (Burniaux et al. 1992).viii

World class American and European neoclassical optimal growth integrated assessment models now include Leontief's well-known environmental
extension to Input Output analysis (Wassily Leontief 1970), Dixon's ORANI
(Dixon 1975; Dixon et al. 1982), which became Hertel's GTAP model (1999),
and the European Union's JOULE Project model GEM-E3 (Capros et al. 1995).

It would be appropriate to include numerical assessment models such as William Nordhaus' global DICE model (Nordhaus 1979; Nordhaus & Yohe
1983; Nordhaus & Radetzki 1994; Nordhaus 2008) and Nordhaus' regional
RICE model (Nordhaus & Yang 1996; Nordhaus 2009). However, in a comprehensive classification of CGE models, Bergman (2005) argues that
Nordhaus' models should not be classified as CGE because they have no
industries to settle in equilibrium.ix

In addition, there are numerous other models such as MERGE (Manne et al.
1995; Kypreos 2005; 2006; 2007), DIAM3 (Ha-Duong & Grubb 1997), DIMITRI
(Annemarth M. Idenburg & Harry C. Wilting 2000; A. Faber et al. 2007; Harry
C. Wilting et al. 2004; 2008), Duchin's world trade model (Duchin et al. 2002;
Duchin & Steenge 2007), RESPONSE (Ambrosi et al. 2003), G-Cubed
(McKibbin & Wilcoxen 1999; 2004), ENTICE (Popp 2004; 2006; 2006), MIND
(Edenhofer et al. 2005), WIAGEM (Kemfert 2005), Lenzen's generalised Input-
Output (Gallego & Lenzen 2005), WITCH (Bosetti et al. 2006), a Japanese
information technology infused model DEARS (Homma et al. 2006), E3MG
(Köhler et al. 2006), GINFORS (the Global INterindustry FORecasting System)
(Meyer et al. 2007), IAM (Muller-Furstenberger & G. Stephan 2007), the World
Bank's ENVISAGE model (Bussolo et al. 2008) and the PAGE2002 model used
by the United Kingdom Stern Review (Hope 2006).

One of the remaining goals of CGE development is to endogenise the long term
propagation of technological change through industries. This is a somewhat
elusive aim because technological change tends to come in disruptive waves.
Stone's RAS bi-proportional matrix balancing and scaling approach has been
used for many years to introduce technological change into input output
analysis (Kruithof 1937; Deming & F. F. Stephan 1940; W. W. Leontief 1941;
Stone et al. 1942; Stone 1961; 1962; Stone & Brown 1962).x Appendix 3 Input
output tables provides the modern approach of Wilting et al (Harry C. Wilting
et al. 2004; 2008). Haoran Pan, ten Raa's former student and now research
collaborator introduced S-shaped logistic, Gompertz and Bass model
propagation curves (Pan 2006; Pan & Kohler 2007). As an alternative to these
methods, Goulder proposes that R&D be modelled as a traded commodity
(Goulder & Schneider 1999; Goulder & Mathai 2000).
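
To illustrate the S-shaped diffusion forms just mentioned, the sketch below evaluates the three curves. The parameter values are arbitrary, chosen only to give comparable curves over a fifty-year horizon, and are not drawn from Pan's work.

    # Illustrative logistic, Gompertz and Bass diffusion curves
    import numpy as np

    t = np.linspace(0, 50, 51)

    def logistic(t, m=1.0, k=0.25, t0=25.0):
        return m / (1 + np.exp(-k * (t - t0)))

    def gompertz(t, m=1.0, b=5.0, c=0.12):
        return m * np.exp(-b * np.exp(-c * t))

    def bass(t, m=1.0, p=0.01, q=0.35):
        # Cumulative adoption: innovation (p) plus imitation (q) effects
        e = np.exp(-(p + q) * t)
        return m * (1 - e) / (1 + (q / p) * e)

    for name, f in [("logistic", logistic), ("gompertz", gompertz), ("bass", bass)]:
        s = f(t)
        print(f"{name:9s} diffusion at t = 10, 25, 40: {s[10]:.2f}, {s[25]:.2f}, {s[40]:.2f}")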

World Bank & Global Trade Analysis Project (GTAP)


While many organisations such as the International Monetary Fund and
Australian Treasury are keen CGE modellers, two dominant CGE groups have
emerged in the world over the last decade. The first is the World Bank with the International Food Policy Research Institute (IFPRI).xi The second is Purdue
University's Global Trade and Analysis Project (GTAP) with Monash
University's Centre of Policy Studies (CoPS). Within each group, the
institutions regularly cross publish and swap staff and management.

Australian researchers have been most interested in the latter group. In 1993, the Australian Productivity Commission and Monash University assisted Thomas Hertel to establish the Global Trade Analysis Project (GTAP) at Purdue
University. Purdue accepted the Australian Productivity Commission's project
database and CGE model of the world economy, called the Sectoral Analysis of
Liberalising Trade in the East Asian Region (SALTER) (Jornini et al. 1994).xii

The Australian Bureau of Agricultural and Resource Economics (ABARE) became a founding member of GTAP and the Monash Centre for Policy Studies
(CoPS) contributed its CGE models ORANIxiii and IMPACTxiv, together with its Australian CGE databases. At GTAP, Monash's ORANI has been actively developed
as GTAP's CGE model. Although it is a static single period model, GTAP's CGE
model is widely used around the world, for example by Fondazione Eni Enrico Mattei (Eboli et al. 2008) and the Kiel Institute of the World Economy (Deke et
al. 2001). Monash CoPS continued to develop ORANI as a dynamic model,
which is now called MONASH.

In return for these Australian models, Purdue's GTAP undertook to invest in consistent Input Output tables with reconciled bilateral trade data. It now provides this data to all world modellers for a modest fee. The underlying strengths of GTAP's business model are its open source databases derived from many international and national agencies, an emphasis on quality of data
through full reconciliation, an active CGE development community and a
strong commitment to conferences and training.

Australian economic-climate modelling


When Australia came to investigate the impact of climate change, it already
had a vigorous thirty-year tradition of economic modelling at Melbourne, La
Trobe, Monash and Sydney Universities and in government departments such
as the Australian Bureau of Agricultural and Resource Economics and the
Australian Productivity Commission.

In its literature study for the Garnaut Climate Change Review, Frontier
Economics (2008, pp.2-3) listed the requirements for a CGE model that could
estimate the benefits as well as the costs of greenhouse emissions:

• capable of modelling economic shocks from climate damage feedback over long time periods, not just a snapshot for a particular time, because
the benefits of preventing climate change will occur in the future while
the costs of the policy to prevent climate change will occur at the
beginning
• the focus needs to be global, not just national, because all countries are
affected by aggregate emissions and the indirect effects on a country's
economy might be more important than the direct effects, for example,
trade flows and exchange rate
• sufficiently flexible to take into account the numerous uncertainties in
the science and economics of climate change through sensitivity
analysis and eventually probabilistic inputs.

Following a review of Australian and international models, Frontier Economics concluded (pp 1 & 4) “There are numerous published CGE-based Australian
studies of the costs of policies aimed at restricting Australia's greenhouse
emissions but only the ABARE GIAM project has modelled the economic impact
of climate change occurring …. we spent some time on GAIM because …. it
allowed us to explain the underlying structure of integrated assessment models
in general.”

The Global Integrated Assessment Model (GIAM) is a joint venture between ABARE and Australia's Government research organisation CSIRO (ABARE 2008). CSIRO's physical climate modelling of CO2-induced global warming is called Mk3L. Increases in atmospheric temperature are interpreted within
GIAM as a damage function, which is used to apply negative shocks to total
factor productivity. In this approach GIAM is similar to Nordhaus' DICE model.
Frontier Economics notes of the GIAM project (p8) “The GIAM Project is
innovative in the Australian context, is well documented (at least as far as its
GTEMxv sub-model is concerned) and certainly represents the type of structure that is appropriate for the modelling analysis that the [Garnaut] Review
requires.”

While the combination of GTEM and CSIRO's Mk3L facilitates a climate change feedback loop and spatial economic disaggregation, a major disadvantage is that GIAM remains two distinct sub-models. The coupling between climate and sectoral industry performance is therefore indirect and, together with the production functions within and between industries and countries, significantly impacts GIAM's dynamic performance.

On 3 October 2008, Professor Ross Garnaut discussed the Garnaut Climate Change Review Final Report at the Committee for Economic Development of
Australia (CEDA), in a talk titled Australia as a low-emissions economy
(Garnaut 2008). His description of the modelling process ably demonstrates
the complexity and cross-disciplinary economic, technological and scientific
nature of such modelling:

The story of the transition of the Australian economy to a low emissions economy is anchored in this modelling exercise ….
[which] involved some of the most complex modelling ever
undertaken in Australia …. we mapped structural change in the
economy out to 2100 …. Venturing into timeframes and levels of
mitigation not previously explored has had its challenges. You have
to make assumptions about the level of innovation you can expect to
see and in a standard technology case, which is the first step in the
modelling, I think we've got a set of reasonably cautious
assumptions, where improvements of technology at a steady rate
from bases that are known, has been assumed. But we modelled two
variations on that technology theme, apart from the standard
technology, which assumes best estimate improvements to known
technologies based on experience. The second case we modelled
was an enhanced technology scenario, which assumed
improvements on the standard scenario through greater energy
efficiency gains, faster learning by doing for electricity and
transport and the backstop technology in agriculture. And the third
variation, which we put in as an alternative to the second, was that at some time a backstop technology would emerge, which would at
some high cost, absorb emissions from the atmosphere and offset
emissions elsewhere and we assume that backstop technology would
come in at US$200; that’s about AU$250 today. At that point, on this
third assumption we assume that there would be a technological
breakthrough, that is, substantial costs would remove carbon
dioxide from the air for sequestration.

Shortly after this conference, the Australian Treasury published its modelling
report Australia's Low Pollution Future (Australian Treasury 2008). Appendix 1
of the report briefly outlines the models used (pp 203 & 218): “Treasury’s
climate change mitigation policy modelling includes three top-down,
computable general equilibrium (CGE) models developed in Australia: Global
Trade and Environment Model (GTEM)xvi; G-Cubed modelxvii; and the Monash
Multi-regional Forecasting (MMRF) modelxviii.”

At a Senate Estimates Enquiry, an Australian Treasury representative responded to a question from the Greens Party Senator for Tasmania, Christine
Milne, describing the difficulty being experienced in modelling Australia's
place in the world climate framework: “These are complex models with complex
exercises and take many days to solve. They are computationally very difficult
for all scenarios, whether they are deep cuts or not …. We are doing
simulations out over 100 years and these models are based on historical
relationships and views around the near term …. We have found it
computationally difficult. We have several different models that we are putting
together, and the complexity of the exercise is quite significant” (Australian
Standing Committee on Economics 2008).

The Australian Treasury further commented on their approach to assembling partial equilibrium models (Australian Treasury 2008, p.221):

Most Australian results are, in the first instance, from MMRF …. Since MMRF is a multi-sectoral general equilibrium model of
Australia, it takes world market conditions as given. This means that
it does not determine endogenously the prices Australia faces in the
world market, nor does it project the changes that may occur in
demand for Australian exports. GTEM determines such prices and quantities, which are aggregated over all other regions using ‘free
on board’ and ‘cost insurance freight’ value shares as weights. This
required careful linking to ensure that the world demand curve
determined within GTEM was inputted into MMRF in an appropriate
way …. A partial-equilibrium representation of the export demand
function faced by Australia for each GTEM commodity was derived.
Responsiveness of the export demand to world price changes were
estimated using GTEM parameters assuming that the rest of the
world does not respond to supply-side changes that occurred in
Australia. As the world economy responds to a given shock, such as
the imposition of an emission price, the export demand faced by
Australia shifts. A consistent measure of the shift in the export
demand functions was derived and used as input into the MMRF
model …. GTEM also determines the global emission price that
clears the global permit market. The equilibrium permit price
trajectory was used as input into the MMRF model.

Nordhaus DICE model


William Nordhaus, Sterling Professor of Economics at Yale University, has been
instrumental in advising the American Government on climate policy. His
Dynamic Integrated model of Climate and the Economy (DICE), a discrete mathematical model, has been continuously developed since 1974 to provide analysis of climate policy (Nordhaus 2008, pp.6&9):

The [DICE] model links the factors of economic growth, CO2 emissions, the carbon cycle, climate change, climatic changes and
climate-change policies. The equations for the model are taken from
different disciplines – economics, ecology, and the earth sciences.
They are then run using mathematical optimization software so that
the economic and environmental outcomes can be projected …. The
relationships that link economic growth, GHG emissions, the carbon
cycle, the climate system, impacts and damages, and possible
policies are exceedingly complex. It is extremely difficult to consider
how changes in one part of the system will affect other parts of the
system. For example, what will be the effect of higher economic growth on emissions and temperature trajectories? What will be the
effect of higher fossil-fuel prices on climate change? How will the
Kyoto Protocol or carbon taxes affect emissions, climate and the
economy? The purpose of integrated models like the DICE model is
not to provide definitive answers to these questions, for no definitive
answers are possible, given the inherent uncertainties about many
relationships. Rather, these models strive to make sure that the
answers at least are internally consistent and at best provide a
state-of-the-art description of the impact of different forces and
policies.

At the CEDA Conference on 3 October 2008, Professor Garnaut spoke about the William Nordhaus DICE model in response to my question about its
similarity to Garnaut and Treasury modelling. Professor Garnaut commented:

Bill Nordhaus at Yale did some very important pioneering work that
I've certainly learnt from as I was gearing up to this effort. I think
that our modelling is much more sophisticated on the structural side
than Nordhaus’. It takes the detail of changing technologies much
further than Nordhaus’ work, but his was very important pioneering
work. We come up with higher costs of mitigation and higher costs
of climate change than Nordhaus. Now, the biggest reasons for that
are not technological. The biggest reason for that is that having
reworked all the numbers on business as usual growth in emissions,
we've formed the confident view that business as usual growth in
emissions is far faster than Nordhaus assumed and the IPCC
assumed and Stern assumed, and that changes the outcome quite a
lot. Nordhaus took the view that we've got longer to deal with this
than our work shows that you have.

Nordhaus seeks to constantly update his model with the latest knowledge in
these areas and openly invites criticism of all his assumptions and
methodologies. To facilitate this he provides all his materials on his web site,
including laboratory notes of sub-models (Nordhaus 2007). An outline of the
DICE model is provided in Appendix 4 Nordhaus DICE model.

Nordhaus also details various shortcomings in his model as follows (Nordhaus
2008, pp.28, 34-5, 45, 53, 64-5, 193-4): whilst 600 years are projected, results
beyond 2050 become highly speculative because of expanding variances in
economic, scientific and technological factors; damage functions are a major
source of modelling uncertainty; the DICE model is global and aggregates
regional data sub models, which makes the model less useful for calculating
the costs and benefits of impacts and mitigation on specific regions and
countries (although a parallel effort called RICE is devoted to a multiregional
model); total factor productivity and carbon specific technological change are
exogenous rather than an endogenous variables because the robust modelling
of induced technological change has proven extremely difficult; the model has
no provision for ocean carbonate chemistry, which scientific models have
shown leads to reduced CO2 absorption over time; projecting in decades is
computationally efficient but leads to a loss of annual detail; and the CONOPT
optimisation solver is fast but at the expense of linearising DICE's nonlinear
climate equations (it also doesn't guarantee a global solution but this has not
been an issue).

Without detracting from William Nordhaus' exceptional accomplishment in building DICE, we may highlight four further limitations. The first is that
DICE's inability to address regions means the model does not allow for the
expansion of national and international trading, such as emissions permits
trading, and spatially disaggregated substitution between these activities.

Nordhaus (2009) recently addressed this issue with a model called RICE. This
model has 12 regions, including America, China, the European Union and Latin
America. However, each region is assumed to produce only a single commodity
and Bergman's criticism of the model not being a true CGE settlement of
industries, discussed above, continues to apply. Also, RICE remains in an
experimental form as Microsoft Excel spreadsheets.

The following illustrations show that geophysical outputs from RICE are quite
similar to those of DICE, while a higher carbon trading price (or tax) is
required due to changes in assumptions and higher growth in global output.

Illustration 13: RICE global temperature increase compared to previous models (Source: Nordhaus 2009, Figure 9)
Illustration 14: RICE carbon price compared to previous models (Source: Nordhaus 2009, Figure 10)

A second limitation in DICE is the Cobb-Douglas production function used to calculate economic output. Cobb-Douglas functions are widely considered to be
somewhat simplistic in comparison to Constant Elasticity of Substitution.

A third limitation, or at least a potential deficiency highlighted by Garnaut, is that climate impacts are too low because DICE uses a relatively low estimate of
population growth. Population is the main driver for consumption, economic
activity and emissions. Nordhaus models population saturating at 8.6 billion
people in 2050 (W. Lutz et al. 2008). This compares to the United Nations
median estimate of 9.15 billion, rising to a long term saturation level of 11.03
billion (United Nations 2009).

Lastly, DICE is a consumption-preferred model. Investment is the residual of production and consumption. In reality, the level of investment and
accumulated capital is an important factor in economic performance. Appendix
4 Nordhaus DICE model shows that for any period, the increments of
production, capital and consumption are all constant and predetermined by the
values of various factors and starting capital. As regions, industries and
commodity trade flows are not settled in equilibrium, DICE is essentially a
black box function where emissions or temperature rise constraints are met by
changing the emissions control rate.
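
That black box character is visible in a stylised sketch of the DICE accounting (simplified here for orientation; the exact functional forms and parameters are set out in Appendix 4 Nordhaus DICE model):

    Y_t = A_t K_t^{\gamma} L_t^{1-\gamma}   (gross output)
    E_t = \sigma_t (1 - \mu_t) Y_t   (industrial emissions, with control rate \mu_t)
    \Omega_t = 1 / (1 + \psi_1 T_t + \psi_2 T_t^2)   (damage factor at temperature rise T_t)
    Q_t = \Omega_t (1 - \Lambda(\mu_t)) Y_t = C_t + I_t   (net output split into consumption and investment)
    K_{t+1} = (1 - \delta) K_t + I_t   (capital accumulation)

Given the exogenous paths of A_t, L_t and \sigma_t, the emissions control rate \mu_t is the single lever by which emissions or temperature constraints are met.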

Recent innovations in integrated assessment models
The integrated assessment models above indicate that the main distinction
between modelling approaches has been whether the models are global,
multiregional or single region models.

The second dimension of classification is whether models are fully computable general equilibrium (CGE) optimisations (either static or intertemporal) or
more straightforward Leontief-type investigations into the input output table
technology matrix. Elementary Input Output analysis is suitable for
investigating economic interdependencies but is less so for research into
sustainable policies. This is because the basic limitation of Input Output
analysis is that a model is for a single country or region, a single period and
“open” with respect to international trade. There is no treatment of commodity
stocks and no way to ensure that prices are consistent with markets of
resources because consumption, investment and exports are specified outside
the model rather than endogenously determined within the model as virtual
marketplaces. For example, there is no way to ensure that provision of labour
and capital is consistent with returns on labour and capital, or that net exports
are consistent with the comparative advantage and competitiveness of
technology functions of the country or region.

Nevertheless, Input Output analysis remains popular because, in its own way,
the Leontief inverse is a straightforward form of optimisation. It is equivalent
to solving a system of linear equations for commodity flows. The analysis can
be enhanced by introducing production functions that closely match the
technology of the industry through engineering life cycle analysis.
Alternatively, a generic Transcendental Logarithmic (Translog) function can be
fitted to time series data using econometrics.
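
A minimal numerical sketch of that equivalence, using an invented three-sector technology matrix:

    # Leontief inverse as the solution of a linear system: x = (I - A)^(-1) f
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],   # inter-industry input coefficients
                  [0.15, 0.05, 0.25],
                  [0.05, 0.10, 0.10]])
    f = np.array([100.0, 50.0, 80.0])   # final demand by sector

    x = np.linalg.solve(np.eye(3) - A, f)   # gross output by sector
    print("gross output:", np.round(x, 1))
    print("final demand recovered:", np.round((np.eye(3) - A) @ x, 1))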

While the Leontief inverse remains at the centre of all CGE economic analysis,
to evolve toward full CGE status it needs a superstructure of objective
function, constraints and optimisation techniques.

CGE modellers generally use Input Output tables only for the data. Their
systems of equations then determine economic output using a range of
elasticities and production functions. Production functions can range from a relatively simple Cobb-Douglas multiplication and Constant Elasticity of
Substitution (CES) to quite complex Translog functions.
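
For reference, the generic Translog form referred to here is (textbook notation):

    \ln Y = \alpha_0 + \sum_i \alpha_i \ln x_i + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j, \quad \beta_{ij} = \beta_{ji}

It nests Cobb-Douglas as the special case in which all \beta_{ij} are zero, which is precisely why it can be fitted so flexibly, and why the earlier caution about forcing a fit applies.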

Professor of Structural Economics at Tilburg University, Thijs ten Raa, has developed linear programming benchmarking into a comprehensive CGE
approach for the Leontief modeller. This has built a much needed bridge
between traditional CGE optimisation and relatively static Leontief
investigations.

The advent of ten Raa's technique closes the Input Output model and thereby
obviates the hoary old chestnut of friction between CGE modellers and
Leontief modellers. It means that it is no longer necessary to classify models as
CGE or Leontief, as top down or bottom up, and to join one or other of the
camps. In any case, nobody could really decide whether Leontief input output
analysis was indeed top down or bottom up.

4.2 Survey of Input Output modelling


As briefly mentioned above, input output analysis in economics draws its
inspiration from François Quesnay's Tableau économique (1758). Wassily
Leontief won a Nobel Prize in Economics in 1973 for his models of the US economy and trade flows using U.S. Bureau of Labor Statistics data across 500 sectors, published as Studies in the Structure of the American Economy: Theoretical and Empirical Explorations in Input Output Analysis (Wassily W. Leontief 1955). Leontief also developed the linear activity model of general equilibrium for studies at a macro level. His major contribution in this area is Input Output Economics (1966).

Input Output tables are now widely used to predict flows between sectors of
the economy. There has been considerable work on disaggregating high level
inter-industry flows, for example in transportation, and investigating the effect
of industry investments on profits and trade flows.

Interregional and multiregional Input Output models capture complex bilateral trade flows between trading partners and provide reliable models of global
interactions. Appendix 3 Input output tables provides background to the Australian Input Output tables, Input Output mathematics and interregional and multiregional input output models.

However, after frantic development through the post-war period and a quieter
time in the 1990s, Augusztinovics (1995, p.275) announced the demise of Input
Output analysis: “Game theory and chaos [theory] have already established
themselves in economic model building. Young people, particularly, want
challenging problems and are eager to respond to the new type of demand,
coming mainly from the excessive financial superstructure. This is not to say
that there are no valuable new results in the input-output field. Interesting and
innovative papers are continuously being published that report on expansions
and new applications, address novel problems, extend the subject-matter and
polish the method. The heyday of Input-Output as a simple, transparent,
deterministic, static linear model are, however, certainly over.”

As Mark Twain was to wryly remark in the New York Journal on 2 June 1897, “The report of my death is an exaggeration”. A decade after Augusztinovics' courageous pronouncement, Input Output analysis saw a renaissance as an important analytical approach for understanding globalisation. Faye
Duchin, President of the Input Output Association from 2004 to 2006, noted of
its renaissance: “After a lapse of a quarter of a century, models of the world
economy are once again in demand in connection with prospects for improving
the international distribution of income and for reducing global pressures on
the environment. While virtually all empirical models of the world economy
make use of input-output matrices to achieve consistent sector-level
disaggregation, only input-output models make full use of sectoral
interdependence to determine production levels” (Duchin 2005, p.144).

The unique feature of Input Output analysis is that rather than a single data
processing technique or a mathematical formula, it provides a platform for
evolving and customising new solutions to new global problems such as those
involving CO2 emissions.

The OECD has recently provided harmonised Input Output tables and bilateral
trade data to support the growing interest in world models. Wixted et al.
(2006) have summarised the types of policy questions that can be addressed with this data, including world value chains, R&D and embodied technology,
productivity, growth, industrial ecology and sustainable development. Ahmad &
Wyckoff (2003) have already demonstrated the use of bilateral trade patterns
in analysing CO2 emissions embedded in trade.
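
The core of such embodied-emissions calculations can be sketched as follows: direct emission intensities are propagated through the Leontief inverse so that emissions are attributed to final (including exported) demand. The two-sector numbers below are invented for illustration and are not Ahmad & Wyckoff's data.

    # Embodied CO2 in exports: total intensities e(I - A)^(-1) applied to exports
    import numpy as np

    A = np.array([[0.2, 0.1],             # domestic technology matrix
                  [0.3, 0.2]])
    e = np.array([0.9, 0.2])              # direct CO2 per unit of gross output
    exports = np.array([40.0, 25.0])      # exported final demand by sector

    L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse
    intensity = e @ L                     # direct plus indirect CO2 intensities
    print("embodied CO2 by sector:", np.round(intensity * exports, 1))
    print("total embodied CO2:", round(float(intensity @ exports), 1))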

However, many economists continue to criticise input output models because the inter-industry, trade and final consumption flows are purely in money
values and important factor inputs such as energy and water are reduced to
the “value added” sum of wages, rent, interest and profit.

Nowadays, scientists, engineers, industrial ecologists, economists and policy makers need greater flexibility in their models to incorporate physical material flows of commodities, constraints on variables such as SO2 and CO2 emissions
and assumptions such as peak oil.

Investigating equilibria with Input Output analysis


The relationship between an additional unit of final demand for an industry's output and the total effects generated across the economy is called the “multiplier effect of the industry.” The study of multipliers is called “impact analysis.”

Multiplying the row of technical coefficients and the column of interdependence coefficients provides partial multipliers, for example to show
how the balance of trade is affected by changes in import requirements in the
commodities for final consumption. Partial multipliers can also be used to
evaluate changes in indirect taxes, employment, capital, depreciation and
subsidies.

Partial multipliers are always less than one because household income is
exogenous to the input-output table. Complete Keynesian multipliers can be
determined by bringing household income into the intermediate matrix. This is
called “closing the matrix”.

As Input Output tables are static, it is only possible to solve for the endogenous
variables in one equilibrium at a time. Investigation of the shift between
equilibria with different sets of values of parameters and exogenous variables
is known as “comparative statics”.

Comparative static analysis may be extended to dynamic analysis by taking
into account the process in moving from one equilibrium state to another and
investigating stability. This can be extended to dynamic optimisation where a maximum or minimum is sought by setting the first derivative to zero.

World multiregional input output modelling


A major reason for the renaissance in Input Output modelling over the last
decade has been its ability to address spatial general equilibrium. For example,
the IPCC's Special Report on Emissions Scenarios (SRES) is based on
distinguishing policy in two dimensions: from efficiency to equity and from
regional to global (IPCC 1995; IPCC 2001; IPCC 2007).

Wilting et al. (2004; 2008) classify the SRES scenarios as follows:

A1 Efficiency & globalisation, market economy solutions for a convergent, globalised, interactive world
A2 Efficiency & regionalisation, market economy solutions but in
heterogeneous local areas for self-reliance
B1 Equity & globalisation, where local identity is important and the
government generally takes a larger role to focus on resilience,
robustness and ecology within a convergent, globalised, interactive world
B2 Equity & regionalisation, where local identity is important and the
government generally takes a larger role to focus on resilience,
robustness and ecology in heterogeneous local solutions and self-reliance

Using their DIMITRI demand driven Input Output model, Wilting et al.
investigate these IPCC policy scenarios in the Netherlands for the period 2000-2030, including the effect of technological change. The authors found that
current environmental pollution is in many cases due to non-sustainable
production and consumption (A. Faber et al. 2007). They also concluded that
technology changes this pattern but leads to unanticipated side effects.

Spatial Input Output analysis can be based on either a full specification of interregional trade flows or multi-regional flows. Leontief's Interregional
(IRIO) Input Output model and the Harvard Economic Research Project's Multiregional (MRIO) Input Output model (Polenske 1980) are each described
in detail in Appendix 3 Input output tables.

Miller & Blair (1985, pp.69-73) differentiate between IRIO and MRIO.
Theoretically, IRIO is superior to MRIO because it incorporates all inter-
industry flows whereas MRIO uses some averages. However, IRIO requires
significantly more data. MRIO only needs the standard format of bilateral trade
input data, where the declaring importer country identifies its own importing
industry and the partner exporting country. MRIO models are nevertheless
quite difficult to prepare. Miller & Blair outline the issues in data handling and in correcting conflicting and missing data.

In contrast, IRIO is even more fine grained. It requires the additional identification of a partner country's export industry. Unfortunately, the latter is not collected by declaring countries' customs agencies and it cannot be reconciled with declaring countries' records of exports.

Linear programming in input output analysis


Duchin's World Trade Model is an input output model employing linear
programming to minimise the use of factor inputs like water and land,
replacing international trade coefficients with the cost structure of countries
and “closing” or endogenising international trade: “The values of endogenous
variables – output, exports, imports, factor scarcity rents for each region, and
world prices for traded goods – are determined through production
assignments for all goods that are made according to comparative advantage”
(Duchin 2005, p.142).

Duchin's World Trade Model identifies optimal resources uses for a given
(exogenous) final demand under radical policy scenarios of sustainability
rather than the incremental scenarios usual in traditional CGE models.
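
The flavour of such a linear programme can be conveyed with a deliberately stripped-down, single-region sketch (invented numbers; the World Trade Model itself adds regions, trade and multiple factor constraints):

    # Minimise factor use subject to meeting a given final demand d
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[0.2, 0.1],
                  [0.3, 0.2]])          # technology matrix
    d = np.array([50.0, 30.0])          # exogenous final demand
    labour = np.array([0.6, 0.9])       # labour required per unit of output

    # State (I - A) x >= d in linprog's A_ub @ x <= b_ub form
    res = linprog(c=labour, A_ub=-(np.eye(2) - A), b_ub=-d, bounds=(0, None))
    print("optimal outputs:", np.round(res.x, 1))
    print("minimum labour use:", round(res.fun, 1))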

The key features of Duchin's model are combining both price and quantity
input-output models (the price model has both resource prices and product
prices with flows in the quantity Input Output model stated in physical units);
mapping “value-added” from a monetary concept to payments for the factors of
production; and in addition to flows, including factor stocks and extending the usual linear framework with nonlinear production functions that allow
substitution of factors.

Duchin's claim of minimising factor use for a given consumption, rather than
maximising consumption for a given factor use, needs some explanation. While
this is an advantage over traditional CGE models, the primal and dual models inherent in all constrained Negishi-format welfare optimising models contemporaneously solve both the output maximising and input minimising formulations.

Benchmarking

Standard Costing

Managers have long endeavoured to drill into organisational performance using variance analysis. This has usually been the comparison of actual against
budget performance.

Following World War II, simple variance analysis evolved into a large schema of
standard costing with detailed drill-downs of production performance. The
factory was seen as a “cost centre” that needed to be micromanaged across
overheads, labour and materials. Each of these was finely divided into
spending, efficiency and volume variances.

Unfortunately, this perfect mathematical approach to micromanaging the factory led to unexpected behavioural modifications in managers and
employees.xix The first issue was “Who set the standards?” In the zeal to bear
down on costs, standards were usually tightened each year to create stretch
budgets. The result was that managers and employees had a high probability
of delivering negative variances, which was found to be very stressful and
demotivating. Managers responded with strategies such as buying cheaper
materials to maximise their divisional gross profit, which led to quality
problems occurring in downstream manufacturing or service divisions.

In the 1980s, standard costing was heavily criticised for its distortions that led
managers to make decisions that did not reduce costs or maximise profits. Standard costing became regarded as mostly suitable for mass-production
industries with large variable costs (such as labour) compared to fixed costs.

Standard costing was seen as inappropriate for technologically advanced or service companies with low direct labour content and multiple products
sharing expensive machinery. As a result, standard costing was seen to have
less relevance for the emerging service economy and custom manufacturers.

Activity Based Costing

In response to the criticism of standard costing, the management accounting profession developed new approaches that it hoped would be more logical for
management behaviour. One of the most important of these was Activity Based
Costing (ABC). ABC identifies “activity centres” and assigns the costs in these
centres to products on the basis of cost drivers. ABC has the great advantage
that products are not loaded with overheads they don't use. It appears to be
beyond the controversial question of “Who sets the standards?” Unfortunately,
human behaviour being what it is, managers squabbled just as much over the
cost drivers in order to minimise their own cost allocations. Experience using
ABC then showed that maintaining the system required extensive accounting
capabilities like Enterprise Resource Planning (ERP). After all this was implemented, the only benefit of ABC was the self-evident outcome that low
volume products are more costly to produce than high volume products. It was
possible to obtain the same results from simpler costing approaches.

Strategic Management Accounting

Management accountants then turned to strategic management accounting.xx By this time, Master of Business Administration students had been learning
about financial, performance and strategic analysis for many decades, mainly
using ratios such as sales and profit growth, market share and expenses per
employee. In order to help managers focus on late twentieth century concerns,
such as attracting customers, retaining customers and repelling competitors,
management accountants introduced key performance indicators (KPIs). These
included non-financial ratios such as customer satisfaction, quality and
personnel commitment.

However, the use of KPIs did not solve the basic problems in measuring
performance. It was still not possible to provide unbiased answers to important
performance questions (ten Raa 2008). For example, “What should be done if
different companies, divisions, industries or even countries scored differently
on the various ratios? What does one do with a business that scores well on
one dimension and poorly in another? Which division should get the capital or
the new business?”

There are various strategies for dealing with a business that has mixed
performance ratios. One is to direct the management of the business to excel
on all ratios. Another is to bring in expertise to assist the managers to do
better where they are weak. A third strategy is to permit the business to
continue specialising in its strengths and to remove the causes of weak
performance.

However, any change brings major issues with it. Doing anything always
affects something else because of dependencies. One of Donald Rumsfeld's
more memorable quotes was: “There are the known knowns, the known
unknowns and the unknown unknowns.” Changes always lead to expected as
well as unexpected trade-offs in price and quality, such as lower profits from
the reallocation of overheads and higher wages for more specialised staff.

With KPIs not providing sufficient guidance, management turned to various
other techniques, such as quality management (ISO 9000/BS 5750 and TQM),
Kaplan & Norton's Balanced Scorecard, Value-based Management and the
Business Excellence Model. While all of these techniques have proven valuable
in their own way, the above questions still cannot be answered definitively.

Principal Components Analysis

Organisations increasingly realised that they needed to reframe the questions
in new ways that captured the concept of efficiently using resources. This is very
important because it changes the focus from output alone to maximising the
value of assets, people and other organisational resources. The substituted
questions became: “Which divisions should receive the allocation of resources
because their managers have done well with what they have? What level of
output has been achieved for the level of inputs, and is this the most efficient?”

Statistical analysis contributed the technique of Principal Components Analysis
(PCA), which regresses performance against input parameters to determine a
production function. Residual errors from the regression line are analysed as
inefficiency. It is assumed that these inefficiencies are observed in the
presence of statistical noise having a normal distribution. Inefficiency is
therefore the non-noise component of the error. It is expected that inefficiency
will have a one-sided normal, exponential or gamma distribution.
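
A minimal sketch of this regression view of efficiency (in Python, with simulated data rather than any real production units) fits a log-linear production function and reads the residuals as noise plus a one-sided inefficiency term:

import numpy as np

rng = np.random.default_rng(0)
n = 50
labour = rng.uniform(1.0, 10.0, n)
capital = rng.uniform(1.0, 10.0, n)
noise = rng.normal(0.0, 0.05, n)            # two-sided statistical noise
ineff = rng.exponential(0.1, n)             # one-sided inefficiency
log_y = 0.3 + 0.6 * np.log(labour) + 0.4 * np.log(capital) + noise - ineff

X = np.column_stack([np.ones(n), np.log(labour), np.log(capital)])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
residuals = log_y - X @ beta
# The most negative residuals flag the least efficient units.
print("estimated elasticities:", beta[1:].round(2))
print("least efficient unit:", residuals.argmin())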

Data Envelopment Analysis

In order to answer these same questions, M.J. Farrell developed a markedly
different approach to rank production units in an unbiased way. Farrell
outlined the foundations of what became data envelopment analysis (DEA) in
The Measurement of Productive Efficiency (1957). Appendix 6 Benchmarking with linear
programming provides an outline of using DEA with both constant and variable
returns to scale.

DEA is now widely used in management and operations research to identify
inefficiency and to suggest strategies for maximising output while minimising
input. It is sometimes called “frontier analysis.” A guide to DEA practice is
provided by the Australian Steering Committee for the Review of Government
Service Provision (1997).

DEA uses linear programming to locate piecewise linear planes or facets of the
production function that sit at the outer envelope of the observations, where
the greatest efficiency occurs. This technique assumes that at least some of the production
units are successfully maximising efficiency, while others may not be doing so.
Implicitly, the method creates a best virtual proxy on the efficient frontier for
each producer. By computing the distance of these latter units from their best
virtual proxy frontier and partitioning inefficiency among the inputs, strategies
are suggested to make the sub-optimally performing production units more
efficient.
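
A minimal sketch of the envelopment calculation (in Python with scipy, using three hypothetical production units and a constant returns to scale, input-oriented formulation) solves one small linear programme per unit; the efficiency score is the factor by which the unit's inputs could be shrunk while a weighted combination of peers still covers its output:

import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0],    # input 1 used by units A, B, C
              [5.0, 2.0, 4.0]])   # input 2
Y = np.array([[1.0, 1.0, 1.0]])   # single output, normalised to 1

def ccr_efficiency(o):
    """min theta s.t. peer-weighted inputs <= theta * own inputs
    and peer-weighted outputs >= own outputs."""
    m, n = X.shape
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    A_in = np.c_[-X[:, [o]], X]                   # sum(lam*x) - theta*x_o <= 0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]  # -sum(lam*y) <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o, name in enumerate("ABC"):
    print(name, round(ccr_efficiency(o), 3))

In this toy data, units A and B define the frontier and score 1.0, while unit C scores below 1.0; the associated weights describe C's best virtual proxy on the frontier.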

DEA production function, marginal productivity & marginal prices

In contrast to standard costing where the budget prices are set by assumptions
and managerial agendas, DEA relies on Marshall's basic tenet of classical
economics that scarce resources are priced according to their marginal
productivities. Organisations bid for labour and commodities until supply and
demand is satisfied, which is the equilibrium where a price is set for the
resource.

The prices of resources, whether value-added or not, are determined by their
underlying resource content. As is shown in the discussion of CGE modelling,
this is in turn determined by the technology used to manufacture the
resources.

For example, air is free because there is no constraint on its availability. To
date, the right to pollute air with CO2 has been free as well. This is because no-
one owns the air and so there is no property right that requires payment of a
rent to pollute the air. If the world's nations reach an international agreement
binding countries to reduce atmospheric emissions then the right to pollute
suddenly becomes scarce and binding on production. This results in the
emergence of a price for the right to pollute. The price depends upon the
degree of scarcity. The price also depends upon the ability of industry and
consumers to reorganise themselves away from this new cost (i.e.
amelioration) and the price of backstop technology services to remove
emissions from industrial processes so emissions permits are not required (i.e.
abatement).

How are these underlying prices set? The major feature in solving a problem of
constrained resources is that shadow or accounting prices are automatically
calculated by the dual solution to the linear programming primal problem
(Hotelling 1932; Samuelson 1953; Houthakker 1960). These shadow prices are
the Lagrange multipliers, denoted by the Greek letter lambda (λ) in honour of
Joseph Louis Lagrange (1736-1813).xxi These prices are also the same as the
marginal productivities of the resources. Free market prices of resources
usually directly reflect these marginal productivities. Indeed, the difference
between market prices and shadow prices provides a penetrating analytical
technique to investigate the inefficiency of monopolies and oligopolies.
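
A hedged sketch of this duality (in Python with scipy's HiGHS interface, version 1.7 or later; the two-resource numbers are invented) reads the shadow prices off the constraint marginals and verifies that priced resources exhaust the output, anticipating the Main Theorem derived below:

from scipy.optimize import linprog

# Maximise output a1*x1 + a2*x2; linprog minimises, so negate.
c = [-3.0, -5.0]                 # a1 = 3, a2 = 5
A = [[1.0, 2.0],                 # labour used per unit of x1, x2
     [3.0, 1.0]]                 # capital used per unit
b = [14.0, 18.0]                 # endowments b1 (labour), b2 (capital)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Marginals give d(minimised objective)/d(b); negate for the max problem.
lam = [-m for m in res.ineqlin.marginals]
print("outputs:", res.x)                       # optimal x1, x2
print("shadow prices (lambda):", lam)          # wage and rental rates
print(sum(l * bi for l, bi in zip(lam, b)), -res.fun)  # equal values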

ten Raa (2008) shows that shadow prices can be derived from the Lagrange
multipliers. For example, in the illustrations below, production unit B can
increase its output by adopting best practice from A and
C. Unit A might be a firm using labour to best advantage, while C might be
using capital to best advantage.

Illustration 15: DEA: Production Unit B can expand to B' using the best practices of A & C
Illustration 16: DEA: Corresponding vectors and normals

Building on this example, we can see that the isoquant is the weighted average
of constraints from A and B, and the vectors likewise:

$(a_1, a_2) = \lambda_1 (c_{11}, c_{12}) + \lambda_2 (c_{21}, c_{22})$

where $\lambda_1$ and $\lambda_2$ are accounting prices set by the market and $\lambda_1, \lambda_2 \ge 0$.
If constraint A is labour, then $\lambda_1$ is the wage rate.
If constraint B is capital, then $\lambda_2$ is the interest rate to rent capital.

The primal linear programming and Lagrangian dual formulations of this DEA
problem are:

Maximise the isoquant $a_1 x_1 + a_2 x_2$ subject to:
$c_{11} x_1 + c_{12} x_2 \le b_1$
$c_{21} x_1 + c_{22} x_2 \le b_2$

The equivalent Lagrange formulation:
$a_1 = \lambda_1 c_{11} + \lambda_2 c_{21}$
$a_2 = \lambda_1 c_{12} + \lambda_2 c_{22}$
$\lambda_1, \lambda_2 \ge 0$

The “Theorem of Complementary Slackness” provides that if a constraint is
non-binding and slack exists, then the Lagrange multiplier is zero for that
constraint, i.e. $\lambda = 0$; and if a constraint is binding, then the Lagrange
multiplier is non-zero, i.e. $\lambda \ne 0$.

Therefore, from the “Theorem of Complementary Slackness”, the following
Lagrangian equations can be prepared for each constraint, where in each
equation either the multiplier or the bracketed slack term must be zero:

$\lambda_1 [b_1 - (c_{11} x_1 + c_{12} x_2)] = 0$
$\lambda_2 [b_2 - (c_{21} x_1 + c_{22} x_2)] = 0$

So we can directly sum these equations, which gives:

$\lambda_1 [b_1 - (c_{11} x_1 + c_{12} x_2)] + \lambda_2 [b_2 - (c_{21} x_1 + c_{22} x_2)] = 0$

Upon expanding and rearranging:

$\lambda_1 b_1 - \lambda_1 c_{11} x_1 - \lambda_1 c_{12} x_2 + \lambda_2 b_2 - \lambda_2 c_{21} x_1 - \lambda_2 c_{22} x_2 = 0$

$\lambda_1 b_1 + \lambda_2 b_2 = \lambda_1 c_{11} x_1 + \lambda_2 c_{21} x_1 + \lambda_1 c_{12} x_2 + \lambda_2 c_{22} x_2$
$\lambda_1 b_1 + \lambda_2 b_2 = (\lambda_1 c_{11} + \lambda_2 c_{21}) x_1 + (\lambda_1 c_{12} + \lambda_2 c_{22}) x_2$
$\lambda_1 b_1 + \lambda_2 b_2 = a_1 x_1 + a_2 x_2$

This provides the “Main Theorem of Linear Programming”: the prices $\lambda_1$
and $\lambda_2$ measure the marginal productivities of the constrained resources $b_1$
and $b_2$, and the prices multiplied by the quantities of the input resources
equal the output:

$\lambda_1 b_1 + \lambda_2 b_2 = a_1 x_1 + a_2 x_2$

Shadow prices exist even when market prices do not. It is this unique feature
that allows organisations to be readily studied using DEA even when there are
no market prices for an organisation's inputs and outputs, for example
government departments and non-profit organisations.

In the above example on clean air, the second point was the ability of industry
and consumers to reorganise themselves to minimise the new cost of emissions
permits. This focus on reorganisation and reallocation underlies the continued
evolution of CGE benchmarking out of the DEA benchmarking paradigm. xxii

CGE Benchmarking

From a Service Sciences perspective, the idea of CGE benchmarking is that a
decision making unit is efficient if it cannot expand its output by changing its
practices. In other words, the decision making unit is operating at a Nash
Equilibrium.

In CGE benchmarking there is simply one number, usually denoted by the
Greek letter gamma (γ), that is the expansion factor achievable by using all means
possible to reach maximum output, while complying with all constraints.

In the economic context, benchmarking is the process of maximising the
Negishi welfare of a multi-economy system using the utilitarian assumption
that the maximum welfare is the sum of the national expansions in per capita
consumption. This is discussed at length in the next Chapter.xxiii
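
As an illustrative sketch of the expansion-factor idea (in Python with scipy; the two-sector technology, consumption bundle and labour endowment are invented), the benchmarking problem maximises a single γ such that net output covers γ times the final consumption bundle within the labour constraint:

import numpy as np
from scipy.optimize import linprog

A = np.array([[0.2, 0.3],          # intermediate input i per unit of output j
              [0.4, 0.1]])
L = np.array([0.5, 0.6])           # labour per unit of gross output
c_final = np.array([10.0, 8.0])    # benchmark final consumption bundle
labour = 20.0                      # labour endowment

# Variables [gamma, x1, x2]: maximise gamma (minimise -gamma) subject to
# gamma*c_final - (I - A)x <= 0 and L.x <= labour.
I_minus_A = np.eye(2) - A
A_ub = np.vstack([np.c_[c_final, -I_minus_A], np.r_[0.0, L][None, :]])
b_ub = np.r_[0.0, 0.0, labour]
res = linprog([-1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3)
print("expansion factor gamma:", round(-res.fun, 3))  # > 1: slack to expand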

In 1932, von Neumann wrote: “We are interested in those states where the
whole economy expands without change of structure, i.e. where the ratios of
the intensities $x_1 : \dots : x_m$ remain unchanged, although $x_1, \dots, x_m$ themselves
may change. In such a case they are multiplied by a common factor $\alpha$ per
unit of time. This factor is the coefficient of expansion of the whole economy”
(Von Neumann 1938, p.3).

In an intertemporal context, this becomes the discounted value of the sum to
infinity of the national expansions. Here we are using expansion in
consumption as a proxy for the real problem of contemporaneously using all
resources in the most efficient way. In an optimisation production function, the
two objectives are the same, respectively the primal and dual formulations of
the problem.

The mathematical calculation of the objective function with its utilitarian
assumption is only of limited usefulness in comparing strategies and policies.
Of much greater importance is the behaviour of shadow prices, the local and
international substitution of labour and commodities and, in the case of climate
models, the rate of switching from financial payment for emissions permits to
paying for backstop abatement technology services to remove emissions. For
example, after industry and consumers have reorganised themselves nationally
and internationally as much as possible in response to price signals, it is the
absolute reduction in emissions that is the important factor in ameliorating
climate change.

As a result, most of the interest in benchmarking is in the constraints rather
than in the objective function. Unfortunately, constraints are the most
computationally expensive area. It is also where the complexity of the
economic model shows itself. While the objective function may be relatively
simple, each constraint in each time period is an exceedingly long symbolic
equation containing the whole of the accumulated model of the economy and
the climate change science equations.

Benchmarking an intertemporal multiregional input output model adds a
significant layer of complexity. Firstly, consumption demand and labour supply
in each country is a function of population growth. Secondly, there is a
substitution of labour between industries of a country as well as the mutual
substitution of commodity production with other countries, which all use
different technologies. Thirdly, investment becomes an endogenous variable.
Finally, accumulating climate factors become a major feedback issue.

To add even more complexity to the task, the climate equations are non-linear.
This means that heavy duty non-linear optimisation techniques, such as
modern interior point optimisation, need to be called upon instead of the usual
fast linear programming algorithms, such as Simplex.

ten Raa input output models


In the 1970s, Thijs ten Raa was one of Wassily Leontief's research assistants.
ten Raa recognised the potential to use primary national accounts Make
and Use tables for economic analysis instead of Leontief's input output
tables.xxiv Leontief encouraged ten Raa in this alternative perspective. In 1993
it became possible for ten Raa to apply his new methodology when nations
began to implement the United Nations' revised System of National Accounts,
SNA93 (United Nations 1993). A decade on, ten Raa's methodology has begun
to emerge as the bridge between Leontief analysis and CGE models as
mentioned above.

In addition to his reformulation of Leontief's work in terms of Make and Use
tables, ten Raa's second main innovation has been the application of
benchmarking in national and multiregional efficiency analysis. These
innovations have been communicated through his seminal papers, which have
been integrated into textbooks, such as The Economics of Input Output
Analysis (2005) and The Economics of Benchmarking: Measuring Performance
for Competitive Advantage (2008). A practical example of these new
techniques being used that shows the extraordinary scope and impact of such
studies is Competitive pressures on China: Income inequality and migration
(ten Raa & Pan 2005).

However, the key advantage of ten Raa's substitution of Make and Use tables
for the Leontief A matrix does not lie in the modelling of economic flows. It
lies in intertemporal models, where investment and capital are endogenously
calculated.xxv
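
As a hedged sketch of what working directly with Make and Use tables involves (a Python toy under the textbook commodity-technology assumption, A = U(V')⁻¹, with invented two-commodity data; ten Raa's own formulation is richer than this), commodity-by-commodity coefficients fall out of the two primary tables without first compiling a Leontief table:

import numpy as np

U = np.array([[10.0,  4.0],    # Use: commodity i absorbed by industry j
              [ 6.0, 12.0]])
V = np.array([[40.0,  2.0],    # Make: industry i's output of commodity j
              [ 3.0, 30.0]])

# Commodity technology model: U = A V', hence A = U (V')^{-1}.
A = U @ np.linalg.inv(V.T)
print(A.round(3))              # commodity-by-commodity coefficients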

4.3 Survey of mathematical modelling platforms


The main programs used today for solving linear and nonlinear equation
systems in engineering and economics are the World Bank's General Algebraic
Modeling System (GAMS) and Bell Laboratories' A Mathematical
Programming Language (AMPL).

The author is also aware of the requirement in economic modelling for acyclic
network solvers, having previously implemented such solvers in projects
involving production scheduling for ambulance and special vehicle
manufacture, an accounting system and in the creation of a modelling
language similar to Decision Support System (DSS).

For the purpose of building a new type of CGE model, the author conducted a
survey of algebraic modelling packages using as evaluation criteria the
functionality of the development packages in acyclic solvers, operations
research and data visualisation.

Issues with GAMS and AMPL


Aside from the usual matters of proprietary software being expensive to
licence and subject to restrictions, there are a number of issues with GAMS
and AMPL programs arising from their development heritage, which stretches
back to the 1970s. Firstly, the use of these packages demands a significant
amount of file handling. Data is returned in files that then need to be re-read
into packages such as R or Mathematica for further processing (e.g.
data envelopment analysis, input-output analysis or graphing).

Secondly, development times can be enormously extended due to the extra
time spent in pre-processing and post-processing for what is essentially a one-
off research implementation. Furthermore, the locked-down environment can
lead to additional features being difficult or costly to implement.

Lastly, linking the packages to various commercial and open source solvers
requires understanding of the various platform specifications.

Proprietary closed-source solvers


Large scale modelling programs such as GAMS and AMPL depend upon
separate solvers for which extra license fees need to be paid. These
proprietary solvers include CONOPT, KNITRO, MINOS and SNOPT.xxvi

The key issue with these older solvers is that they do not assume convexity and
seek only a local minimum. However, a non-convex objective function can have
several local minima, rather than a unique minimum given a set of constraints.
For example, the energy reference system optimisation model Markal uses the
MINOS solver.xxvii MINOS in turn uses a quasi-Newton method that
approximates the Hessian (the matrix of second derivatives). This makes no
reference to convexity, so if the minimum it finds happens to be the global
minimum, the outcome is pure chance.

An issue with both a nonlinear objective and nonlinear constraints is that the
problem becomes more complex in terms of convexity. CONOPT and SNOPT
are usually used in this circumstance. For example, Nordhaus uses CONOPT
with GAMS.

BARON, an award-winning “solver-of-solvers”, calls on other solvers for the
individual sub-problems in its search for global minima.xxviii

Programming environments
The author investigated and experimented with a number of programming
environments as shown in the following table:

Environment Description
Algencan Stand-alone non-linear solver with integration to various
languagesxxix
AMPL Student version of commercial package with limited number
of variables & constraints. Designed for large economic
models but no graphics.xxx
Ascend Open source computer algebra environment, with the
extraordinary advantage of generously granted access to the
CONOPT solverxxxi
Axiom (also FriCAS, Open source equivalent of Mathematica
OpenAxiom)
Dr AMPL Open source AMPL model checking in preparation for
submission to NEOS serverxxxii
Galahad Stand-alone non-linear solver used by NEOS and Dr AMPL.
Includes the general nonlinear solver Lancelot. Requires
programs to be prepared in AMPL or Standard Input Format
(SIF)
GAMS New student version of commercial package with limited
number of variables and constraints. Designed for large
economic models but no graphicsxxxiii
IPOPT Stand-alone non-linear solver with integration in GAMS,
Neos and ascend
Maple Mathematica equivalent - literature research only
Mathematica UTS Enterprise Licence. Exceptional symbolic and
functional processing, LISP list management, Prolog pattern
management, graph processing, graphics and exception
optimisation functions including an implementation of the
most advanced interior point solver (IPOPT) and augmented
Lagrangian techniques. Unique advantage is ability for
“whole of model” symbolic constraints in optimisation.
Graphics output is an important feature with major
advantages for communication with policy makers. While
Lagrange multipliers are provided by the
DualLinearProgramming function, unfortunately access is
not provided for the KKT multipliers in nonlinear analysis.xxxiv
Also tested Culoili KKT and Loehle solvers.
Matlab UTS Enterprise License. Procedural processing primarily for
matrix manipulation and inferior graphics to Mathematica
Maxima (Macsyma) Open source equivalent of Mathematica
MuPAD Literature research only
NEOS Server Comprehensive solver service for no charge to run GAMS
and AMPL models.xxxv Requires either GAMS or AMPL to
design programs. Batch processing rather than interactive
and no graphics.
Ocaml Open source symbolic processor similar to Mathematica,
significantly faster due to compilation but limited
functionality and lacking ease of use
Octave Open source equivalent of Matlab
OpenOpt Open source Python framework for accessing solversxxxvi
Pyneos Open source python connector to NEOSxxxvii
R Open source statistical package based on S.xxxviii This
includes network (Carter Butts' R package for graph theory),
mathgraph and genopt (Patrick Burns' R packages for graph
theory and genetic non-linear solver from S Poetryxxxix),
solver packages BB and Rdonlp2
Reduce Literature research only
Sage Open source equivalent of Mathematica - literature research
only
Scilab Open source equivalent of Matlab
yacas Open source equivalent of Mathematicaxl

Toolboxes investigated in survey:

Toolboxes Description
Nordhaus Equations for climate change policy modelling in GAMS xli
perturbationAIM Eric Swanson's Mathematica toolbox for stochastic
perturbation modellingxlii
Stochastic 4 Uhlig's Matlab/Octave toolbox and associated equation
generator for stochastic modelling
CUTEr Fortran procedures providing the low level functionality
required by industrial solvers. Requires programming in
Standard Input Format (SIF)

This hands-on survey concluded that Mathematica has many significant
advantages as a development environment for complex models. Firstly, it is an
“all-in” environment where all functionality is available. Secondly, it has
extensive database capabilities and includes Mathematica's Country Databases
with extensive economic data sets.

Thirdly, Mathematica is an algebraic symbolic processor rather than a
numeric processor like Matlab. This high-productivity agile environment allows
development in Mathematica's functional forms, similar to LISP list structures
and Prolog pattern handling. In contrast to procedural programming in C++,
Fortran, Basic or Matlab, development in Mathematica is quick
and iterative, at a very high level of abstraction. The ability to hold constraints
in symbolic form is a very important advantage in complex optimisation
models.

Lastly, a key advantage for rapid development is Mathematica's exceptionally
robust operations research functions, particularly the function FindMinimum,
which incorporates the COIN Project's highly regarded interior point optimiser
IPOPT. Mathematica's data visualisation functions also far exceed the
capability of other applications.

Acyclic processing
Many people involved with the development of solvers see an equation to be
optimised as something like the polynomial $f(x) = a x + b x^2$, or a set of
simultaneous equations like:

$f(x) = b x_1 + b x_2^2$
$g(x) = c x_1 + d x_2^2$

Solver developers rarely envisage the more complex case of recursion, for
example $f(t) = a\,f(t-1) + b\,g(t)$. Recursive equations require a higher
level of analysis using graph theory to topologically sort equations and
constraints into a solvable stream.

It may be helpful to describe the problem with the analogy of a spreadsheet for
those not familiar with recursive computer algebra. Spreadsheet cell
connections create a geographical connection between cells. If there is a time
dimension in the columns, for example 2009 to 2012, then the intersection of
the column 2010 with a row, say Revenue, may have a formula that calculates
Revenue(2010) as, say,

$Revenue(2010) = Revenue(2009) \times (1 + inflation)$

Thus recursion exists and is mapped to the geography (or topology) of the
spreadsheet. From this geography, the spreadsheet algorithm calculates a
network which, continuing with our example, identifies that Revenue(2009)
must be calculated before Revenue(2010) can be calculated.

This is called an acyclic network. Sometimes the spreadsheet algorithm will
not be able to calculate an acyclic network because a circular reference exists.
Using our example, a circular reference would be generated if, somewhere in
our mass of equations, Revenue(2009) depended on Revenue(2010). Circular
references are often found in calculating loans and interest.

Algebraic equations with time recursion like $f(t) = a\,f(t-1) + b\,g(t)$
therefore lead to a whole spreadsheet of variables over the domain of time.
Each variable is analogous to a cell in a spreadsheet. Say there are 100 periods
and 20 equations, then 2,000 variables exist. These need to be processed
through a transformation stage to compute the acyclic network for the
topology of the problem. Thus there are additional matrices of pointers to the
next variable to be solved.
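
As a minimal sketch of this ordering step (in Python, with the equation names and periods invented for illustration), the standard library's graphlib can expand the recursion into one node per variable and period and topologically sort the cells so that each is computed after its inputs; a circular reference surfaces as an error:

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# f(t) = a*f(t-1) + b*g(t): one node per (variable, period), each
# mapped to the set of cells it depends upon.
deps = {("f", 2008): set()}               # initial condition
for t in range(2009, 2013):
    deps[("g", t)] = set()                # exogenous series
    deps[("f", t)] = {("f", t - 1), ("g", t)}

# static_order() lists predecessors before the cells that use them;
# a circular reference would raise graphlib.CycleError here.
order = list(TopologicalSorter(deps).static_order())
print(order)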

Therefore, using a non-linear solver (or optimiser) is quite difficult when a
small number of equations generates a very large “spreadsheet” with a
complex structure. This is where major algebraic environments like GAMS and
AMPL first found their market niche.

Notwithstanding that acyclic graphs are at the heart of Mathematica's own
structure and optimisation functions such as “Solve”, the survey was unable to
locate a compatible acyclic solver, or a solver with an intermediate acyclic
layer, for use in Mathematica. While many of Mathematica's functions include
acyclic solvers internally, this does not facilitate large scale
external networks. Perhaps it is an oversight that the required functionality
had not yet been developed in either the R or the Mathematica environment,
particularly given that many people could find this functionality useful and
Fortran network algorithms have been around for forty years.

The stochastic toolbox “perturbationAIM” by Swanson, Anderson & Levin
(2005) provides a starting point to build a Mathematica acyclic processor.
Using techniques drawn from this package together with Mathematica's graph
processing package “Combinatorica,” the author prepared an acyclic solver for
Mathematica. This allows objective functions to be stated as an acyclic network
and facilitates the use of Brent's powerful non-derivative method for evaluation
of unconstrained problems such as the basic Nordhaus model. Appendix 5
Acyclic solver for unconstrained optimisation provides the derivation of this
acyclic processor.

At present, large scale acyclic processing is unsuitable for constrained
problems. This is because modern constrained solvers use first and second
derivatives. The development of an acyclic solver for constraints within
Mathematica's “FindMinimum” solver would provide an increase in flexibility.

For example, the economic-climate model developed in this dissertation has
the whole of the intertemporal economic model embedded via the constraints
rather than in the objective function. The Net Present Value objective function
is comparatively simple. If the value of acyclic processing in unconstrained
optimisation is a good guide, the development of an acyclic solver for
constraints having significant complexity would greatly enhance
Mathematica's optimisation effectiveness. It is recognised that embedding an
acyclic processor for constraints could be a very large and challenging task
because the development of modern Interior Point techniques (for example,
IPOPT) has taken three decades to reach standard solvers.

4.4 Survey of data sources


In addition to reviewing potential CGE modelling approaches and algebraic
development platforms, development of a CGE policy tool requires an
understanding of the structure and extent of available data through
investigation and hands-on experimentation where feasible. The four sources
of data investigated were National Accounting Matrices including
Environmental Accounts (NAMEAs), the OECD Input Output tables and
bilateral trade data, the Global Trade Analysis Project (GTAP) and EXIOPOL.

NAMEA
National Accounting Matrices including Environmental Accounts (NAMEAs) are
national accounts of environmental emissions of 10 to 15 gases. De Haan &
Keuning (1996) describe how NAMEAs provide the direct contributions of
individual industries to environmental pressures, in both absolute and relative
terms. For example, ores, biomass, CO2, CO, N2O, NH3, NOx, SO2, CH4,
NMVOC, Pb, PM10, nutrient pollutants, value added and full-time-equivalent
jobs produced per tonne of mineral consumed. Input Output analysis of
NAMEA data reconstructs the production chain, notwithstanding it may not be
homogeneous.

The submission of NAMEAs by European Union member countries is voluntary,
which contrasts with the requirement of ESA95 to submit an input output table
every five years and annual Source and Use tables.

NAMEA matrices are used in Input Output analysis for evaluating efficiencies
and targeting environmental policies. However, according to Tukker (2008),
the information within NAMEAs is merely sufficient to analyse global warming
impact and perhaps acidification but not the range of analysis required for
external costs, total material requirements and ecological footprints.

OECD
The OECD Input Output and bilateral trade databases have been mentioned
above in regard to input output models. In November 2007, the OECD released
its 2006 edition of harmonised Input Output tables and bilateral trade data
(OECD 2007a; 2007b).xliii These Input Output tables cover 28 OECD countries
(all members except Iceland and Mexico) and 10 non-member countries
(Argentina, Brazil, China, Chinese Taipei, India, Indonesia, Israel, Russia and
South Africa). This has increased from 18 OECD countries and 2 non-OECD
countries (Brazil and China) in the previous edition.

The OECD's data is insufficient for comprehensive global models. However, it
has become an important foundation of all world economic databases, such as
the Global Trade Analysis Project (GTAP).

GTAP
The Global Trade Analysis Project (GTAP) Version 7 database of national input
output models, trade data and energy data has 2004 data for 57 sectors and
113 regions. It relies heavily on OECD's harmonised input output and STAN
bilateral trade data and on IEA's energy data.

GTAP's focus on the factors of production and its world economy MRIO table
are exceedingly useful in analysis. Hertel and Walmsley (2008, Chapter 1, 1.1.2)
note that:

Due to its economy-wide coverage, GTAP is particularly useful for
analyzing issues that cut across many diverse sectors. This data
base is particularly popular with researchers analyzing the potential
impact of: (a) global trade liberalization under a future WTO round,
(b) regional trade agreements, (c) economic consequences of
attempts to reduce carbon dioxide (CO2) emissions via carbon taxes,
and (d) domestic impacts of economic shocks in other regions (e.g.,
the Asian financial crisis, or rapid growth in China). Sector-by-sector
analyses of these questions can provide a valuable input into studies
of these issues. However, by their very nature, these shocks affect
all sectors and many regions of the world, so there is no way to
avoid employing a data base which is exhaustive in its coverage of
commodities and countries. The Global Trade Analysis Project is
designed to facilitate such multi-country, economy-wide analyses.

EXIOPOL
Tukker (2008) describes “A New Environmental Accounting Framework Using
Externality Data and Input-Output Tools for Policy Analysis” (EXIOPOL). This is
a Euro 5 million collaborative project of 37 institutes funded by the European
Union with the 2010 objective of building a world multiregional Input Output
model (MRIO) from officially reported data as well as OECD and GTAP data.

Environmental themes will be linked to the MRIO model, including the
interactions and spillovers between countries of global warming, acidification,
eutrophication and photochemical oxidants. The results will be used to
estimate the external costs of environmental impacts and to apply these
results to major policy questions.

The EXIOPOL project expects to unify current work in IO analysis (IOA),
material flow analysis (MFA) and life cycle assessment of products (LCA) at the
company (or micro) level.

It also hopes to contribute new insights on cost-effectiveness and cost-benefit
analysis to many EU policy fields including, inter alia, a policy for integrated
products, strategy for natural resources, action plans for environmental
technologies, sustainable consumption and production. This will involve
scenario-analysis at regional (or meso-) level and national or world (macro)
level using input output analysis (given exogenous technology, emission and
demand scenarios), CGE models and macro econometric models.

Data Survey Results


NAMEAs and OECD data are excellent advances but lack integration and have
limited scope. The compelling advantage of GTAP data is its availability,
consistency, geographic coverage and linkage with World Bank, OECD and IEA
data. While commodity classification has some inconsistencies with the OECD,
GTAP is progressively resolving these issues. In the future EXIOPOL may
provide valuable enhancements to the GTAP database.

4.5 Conclusion
This Chapter investigates computable general equilibrium (CGE) theory and
models, mathematical platforms and data sources in order to establish how the
quality of climate-economic policy research might be improved by bringing
together recent developments in CGE policy research techniques and assets.
The Chapter extends the policy framework of Chapter 1 Introduction and the
analysis of political economy set out in Chapters 2 and 3.

A consistent thread of classical market economics and neoclassical modelling
is found. This extends from Bernard Mandeville's The Fable of the Bees,
Private Vices, Public Virtues (1723), to Léon Walras's identification of
mathematical equilibrium (1877), the commencement of neoclassical
economics with Alfred Marshall's microeconomic supply and demand curves
for partial equilibrium (1890) and to modern CGE techniques authenticated by
the work of Kenneth Arrow, Gerard Debreu and Lionel McKenzie (1954).

A requirement for general equilibrium policy modelling in national and global
affairs is identified, with bilateral trade allowing countries to change their
competitive positions. The general limitations of CGE tools were discussed
from a policy perspective in Chapter 1 Introduction. This Chapter elaborates
on the strengths and weaknesses of CGE modelling from a technical
perspective.

World class integrated assessment models are reviewed. Recent best practice
climate-economic modelling by the Australian Garnaut Review and Australian
Treasury is closely examined. A major issue in calculating and communicating
regional and commodity spatial results was identified.

Chapter 3 Political economy of the Anglo-American world view of climate
change identified the policy research of the American Government's adviser
William Nordhaus. This Chapter evaluates Nordhaus' Dynamic Integrated
model of Climate and the Economy (DICE) and his recent regional RICE
model. These models are found to have great value in understanding climate
change policy, even though their equilibria are mostly a function of the
emissions control rate and do not settle across both regions and commodities.

This Chapter focuses on the renaissance of Input Output analysis in
multiregional economic and technology modelling. It finds that a new form of
Input Output modelling by Thijs ten Raa brings together relatively static
Leontief Input Output analysis with traditional CGE optimisation.

The evolution of benchmarking from “standard costing” through to Data
Envelopment Analysis is reviewed. The use of benchmarking techniques that
use linear programming as a CGE market pricing technique is found to provide
a new and compelling theoretical approach for intertemporal CGE modelling.

Mathematical platforms are reviewed for their ability to support benchmarking
of intertemporal multiregional Input Output models and to facilitate agility and
communication of spatial results through advanced data visualisation.
Mathematica is found to be the most appropriate programming environment.
This was verified by solving William Nordhaus' DICE model with an acyclic
processor using the theory of graphs.

Data sources suitable for a benchmarking multiregional Input Output model
were reviewed. The Global Trade Analysis Project (GTAP) database was
found to be the most consistent and comprehensive data platform.

4.6 Chapter references

ABARE, 2008. Australian Bureau of Agricultural and Resource Economics


Report: Global Integrated Assessment Model: a new analytical tool for
assessing climate change risks and policies. Australian Commodities,
15(1), 195-216.

Adelman, I. & Robinson, S., 1978. Income distribution policy in developing


countries: A case study of Korea, Stanford University Press.

Ahmad, N. & Wyckoff, A., 2003. Carbon dioxide emissions embodied in
international trade of goods, OCED Directorate for Science, Technology
and Industry. Available at:
http://www.olis.oecd.org/olis/2003doc.nsf/43bb6130e5e86e5fc12569fa00
5d004c/7f6eecff40a552d7c1256dd30049b5ba/$FILE/JT00152835.PDF.

Ambrosi, P. et al., 2003. Optimal control models and elicitation of attitudes


towards climate damages. Environmental Modeling and Assessment,
8(3), 133-147.

Arrow, K.J. & Debreu, G., 1954. Existence of an Equilibrium for a Competitive
Economy. Econometrica: Journal of the Econometric Society, 265-290.

Augusztinovics, M., 1995. What Input-Output is About. Structural Change
and Economic Dynamics, 6(3), 271-277.

Australian Standing Committee on Economics, 2008. Estimates Transcripts -


Treasury greenhouse modelling, Spokesperson Christine Milne,
Available at: http://christine-
milne.greensmps.org.au/content/transcript/estimates-hearings-treasury-
greenhouse-modelling [Accessed April 25, 2009].

Australian Steering Committee for the Review of Government Service


Provision, 1997. Data Envelopment Analysis: A technique for measuring
the efficiency of government service delivery, Canberra: AGPS. Available
at: http://www.pc.gov.au/gsp/reports/research/dea/ [Accessed June 13,
2008].

Australian Treasury, 2008. Australia's Low Pollution Future: The Economics of


Climate Change Mitigation. Available at:
http://www.treasury.gov.au/lowpollutionfuture/ [Accessed April 21,
2009].

Bergman, L., 2005. CGE modeling of environmental policy and resource


management. Handbook of Environmental Economics, 3, 1273–1306.

Blaug, M., 1992. The methodology of economics: Or, how economists explain,
Cambridge University Press.

Bosetti, V. et al., 2006. A world induced technical change hybrid model. Energy
Journal, 27(Hybrid Modeling of Energy-Environment), 13-37.

Burniaux, J.M. et al., 1992. The costs of reducing CO2 emissions: evidence
from GREEN, OECD.

Bussolo, M. et al., 2008. Global Climate Change and its Distributional Impacts.
In Eleventh Annual Conference on Global Economic Analysis, Helsinki,
June. (Available at https://www.gtap.agecon.purdue.edu/).

Capros, P. et al., 1995. GEM-E3. Computable General Equilibrium Model for


Studying Economy-Energy-Environment Interactions. European
Commission, EUR 16714 EN.

Cass, D., 1965. Optimum growth in an aggregative model of capital


accumulation. The Review of Economic Studies, 233-240.

Champernowne, D.G., 1945. A Note on J. v. Neumann’s Article on ‘A Model of


Economic Equilibrium’. Review of Economic Studies, 13, 10-18.

Chenery, H.B. & Raduchel, W.J., 1969. Substitution in Planning Models,


Cambridge, MA: Harvard University, Center for International Affairs.

Chenery, H.B. & Uzawa, H., 1958. Non-linear programming in economic


development. Studies in Linear and Non-Linear Programming , 203-229.

Debreu, G., 1959. Theory of value: An axiomatic analysis of economic


equilibrium 1972nd ed., New Haven & London: Yale University Press.
Available at: http://cowles.econ.yale.edu/P/cm/m17/index.htm.

De Haan, M. & Keuning, S.J., 1996. Taking the Environment into Account: The
NAMEA Approach. Review of Income & Wealth, 42(2), 131-148.

Deke, O. et al., 2001. Economic impact of climate change: Simulations with a
regionalized climate-economy model, Kiel Institute of World Economics.

Deming, W.E. & Stephan, F.F., 1940. On a least square adjustment of a sampled
frequency table when the expected marginal totals are known. Annals of
Mathematical Studies, 11, 427-444.

Dixon, P.B., 1975. The theory of joint maximization, North-Holland.

Dixon, P.B. et al., 1982. ORANI: a Multisectoral Model of the Australian


Economy, Amsterdam: North Holland.

Duchin, F., 2005. A world trade model based on comparative advantage with m
regions, n goods, and k factors. Economic Systems Research, 17(2), 141-
162.

Duchin, F. et al., 2002. Scenario Models of the World Economy. Cuadernos del
Fondo de Investigación Richard Stone, 7.

Duchin, F. & Steenge, A.E., 2007. Mathematical Models in Input-Output


Economics, New York: Department of Economics, Rensselaer
Polytechnic Institute. Available at:
http://econpapers.repec.org/paper/rpirpiwpe/0703.htm [Accessed
November 6, 2008].

Eboli, F., Parrado, R. & Roson, R., 2008. Climate change feedback on economic
growth: exploration with a dynamic general equilibrium model. In
Eleventh Annual Conference on Global Economic Analysis, Helsinki,
June, available at www.gtap.agecon.purdue.edu.

Edenhofer, O., Bauer, N. & Kriegler, E., 2005. The impact of technological
change on climate protection and welfare: Insights from the model
MIND. Ecological Economics, 54(2-3), 277-292.

Faber, A., Idenburg, A. & Wilting, H., 2007. Exploring techno-economic


scenarios in an input-output model. Futures, 39(1), 16-37.

Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the
Royal Statistical Society: Series A (Statistics in Society) , 120(3), 253-82.

Ford Foundation, 1974. A Time to Choose: America’s Energy Future. Final


Report of the Energy Policy Project of the Ford Foundation., Cambridge,
MA: Ballinger.

Frontier Economics, 2008. Modelling Climate Change Impacts using CGE


Models: a Literature Review, Melbourne: Garnaut Climate Change
Review.

Gallego, B. & Lenzen, M., 2005. A consistent input–output formulation of


shared producer and consumer responsibility. Economic Systems
Research, 17(4), 365-391.

Garnaut, R., 2008. Australia as a low-emissions economy, Sydney: Committee


for Economic Development Australia. Available at:
http://www.garnautreview.org.au/CA25734E0016A131/WebObj/TRANSC
RIPT-FinalReport-CEDAaddress-QandA-voteofthanks-
3oct08/$File/TRANSCRIPT%20-%20Final%20Report%20-%20CEDA
%20address%20-%20Q%20and%20A%20-%20vote%20of%20thanks
%20-%203oct08.pdf.

Ginsburgh, V. & Keyzer, M., 1994. The structure of applied general equilibrium
models, Cambridge, Massachusetts & London, England: The MIT Press.

Goulder, L.H. & Mathai, K., 2000. Optimal CO2 abatement in the presence of
induced technological change. Journal of Environmental Economics and
Management, 39(1), 1-38.

Goulder, L.H. & Schneider, S.H., 1999. Induced technological change and the
attractiveness of CO 2 abatement policies. Resource and Energy
Economics, 21(3-4), 211-253.

Ha-Duong, M. & Grubb, M., 1997. Influence of socioeconomic inertia and
uncertainty in optimal CO2-emission abatement. Nature, 390(6657),
270.

Harberger, A.C., 1962. The incidence of the corporation income tax. The
Journal of Political Economy, 215-240.

Hertel, T. & Walmsley, T.L., 2008. GTAP: Chapter 1: Introduction, Center for
Trade Analysis, Purdue University. Available at:
https://www.gtap.agecon.purdue.edu/databases/v7/v7_doco.asp
[Accessed November 14, 2008].

Hertel, T.W., 1999. Global Trade Analysis: Modeling and Applications ,


Cambridge University Press.

Homma, T. et al., 2006. Evaluation of global warming mitigation policies with a


dynamic world energy-economic model considering changes in
industrial structures by IT penetration. In Japan.

Hope, C., 2006. The marginal impact of CO 2 from PAGE2002: An integrated


assessment model incorporating the IPCC's five reasons for concern.
Integrated Assessment Journal, 6(1), 19-56.

Hotelling, H., 1932. Edgeworth's taxation paradox and the nature of demand
and supply functions. The Journal of Political Economy, 577-616.

Houthakker, H.S., 1960. Additive preferences. Econometrica: Journal of the


Econometric Society, 244-257.

Hudson, E.A. & Jorgenson, D.W., 1974. US energy policy and economic growth,
1975-2000. The Bell Journal of Economics and Management Science,
461-514.

Idenburg, A.M. & Wilting, H.C., 2000. Dimitri: a dynamic input-output model to
study the impacts of technology related innovations. In University of
Macerata, Italy. Available at: http://www.iioa.org/pdf/13th
%20conf/Idenburg&Wilting_DMITRI.pdf.

IPCC, 2001. Climate Change 2001: Synthesis Report. A Contribution of


Working Groups I, II, and III to the Third Assessment Report of the
Intergovernmental Panel on Climate Change [Watson, R.T. and the Core
Writing Team (eds.)]., Cambridge University Press, Cambridge, United
Kingdom, and New York, NY, USA. Available at:
http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].

IPCC, 1995. IPCC Second Assessment Climate Change 1995: A Report of the
Intergovernmental Panel on Climate Change, Geneva: IPCC. Available
at: http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].

IPCC, 2007. Synthesis Report: an assessment of the Intergovernmental Panel


on Climate Change. Fourth Assessment Report. [Abdelkader Allali,
Roxana Bojariu, Sandra Diaz, Ismail Elgizouli, Dave Griggs, David
Hawkins, Olav Hohmeyer, Bubu Pateh Jallow, Lucka Kajfez-Bogataj, Neil
Leary, Hoesung Lee, David Wratt (eds.)], IPP Plenary XXVII Valencia,
Spain: United Nations. Available at: http://www.ipcc.ch/ipccreports/ar4-
syr.htm [Accessed April 5, 2008].

Jevons, W.S., 1871. The Theory of Political Economy 3rd ed., London:
MacMillan & Co. Available at:
http://www.econlib.org/library/YPDBooks/Jevons/jvnPE3.html#Chapter
%203 [Accessed April 20, 2009].

Johansen, L., 1960. A multi-sectoral study of economic growth 2nd ed.,


(Amsterdam, Oxford, New York): North-Holland Pub. Co., American
Elsevier. Available at: http://openlibrary.org/b/OL5070373M/multi-
sectoral-study-of-economic-growth [Accessed April 23, 2009].

Jornini, P. et al., 1994. The SALTER Model of the World Economy: Model
Structure, Database and Parameters, Canberra: Industry Commission.
Available at:
http://www.pc.gov.au/ic/research/models/saltermodel/workingpaper24
[Accessed May 5, 2009].

Kemfert, C., 2005. Induced technological change in a multi-regional, multi-


sectoral, integrated assessment model (WIAGEM): Impact assessment of
climate policy strategies. Ecological Economics, 54(2-3), 293-305.

Köhler, J. et al., 2006. Combining energy technology dynamics and


macroeconometrics: The E3MG model. Energy Journal, 27(Hybrid
Modeling of Energy-Environment), 113-133.

Koopmans, T.C., 1965. On the concept of optimal economic growth, Cowles


Foundation for Research in Economics at Yale University.

Kruithof, J., 1937. Calculation of telephone traffic. De Ingenieur, 52(8).

Kypreos, S., 2007. A Merge model with endogenous technological change and
the cost of carbon stabilization. Energy Policy, 35(11), 5327-5336.

Kypreos, S., 2005. Modeling experience curves in MERGE (model for


evaluating regional and global effects). Energy, 30(14), 2721-2737.

Kypreos, S., 2006. Modeling experience curves in MERGE (model for
evaluating regional and global effects). Fuel and Energy Abstracts,
47(2), 141.

Leontief, W., 1970. Environmental Repercussions and the Economic Structure:


An Input-Output Approach. The Review of Economics and Statistics,
52(3), 262-271.

Leontief, W.W., 1966. Input-Output Economics, New York: Oxford University


Press.

Leontief, W.W., 1955. Input-Output analysis and economic structure: Studies in
the structure of the American economy: Theoretical and Empirical
Explorations in Input-Output Analysis. The American Economic Review,
45(4), 626-636.

Leontief, W.W., 1951. The structure of the American economy, 1919-1939 2nd
ed., White Plains, NY: International Arts and Sciences Press.

Leontief, W.W., 1937. Interrelation of prices, output, savings, and investment.


The Review of Economic Statistics, 109-132.

Leontief, W.W., 1941. The structure of American economy, 1919-1929: An


empirical application of equilibrium analysis, Harvard University Press.

Lutz, W., Sanderson, W. & Scherbov, S., 2008. IIASA’s 2007 Probabilistic World
Population Projections, Vienna, Austria: International Institute of
Applied Systems Analysis. Available at:
http://www.iiasa.ac.at/Research/POP/proj07/index.html?sb=5.

Maler, K.G., 1974. Environmental economics: a theoretical inquiry, (Baltimore


& London): The John Hopkins University Press.

Malthus, T.R., 1798. An essay on the principle of population, as it affects the


future improvement of society,

Malthus, T.R., 1820. Principles of political economy considered with a view to


their practical application 1836th ed.,

Mandeville, B., 1723. The fable of the bees, or Private vices, publick benefits:
with an essay on charity and charity-schools, and a search into the
nature of society 1724th ed., printed for J. Tonson.

Manne, A., Mendelsohn, R. & Richels, R., 1995. A model for evaluating
regional and global effects of GHG reduction policies. Energy policy,
23(1), 17-34.

Manne, A.S., 1977. ETA-MACRO: A model of energy-economy interactions.

Marshall, A., 1890. Principles of economics, Macmillan and co.

McKibbin, W.J. & Wilcoxen, P.J., 2004. Estimates of the costs of Kyoto:
Marrakesh versus the McKibbin-Wilcoxen blueprint. Energy Policy,
32(4), 467-479.

McKibbin, W.J. & Wilcoxen, P.J., 1999. The theoretical and empirical structure
of the G-Cubed model. Economic modelling, 16(1), 123-148.

Meyer, B., Lutz, C. & Wolter, I., 2007. The Global Multisector/Multicountry 3-E
Model GINFORS. A Description of the Model and a Baseline Forecast for
Global Energy Demand and CO2 Emissions, to be published in Journal of
Sustainable Development.

Miller, R.E. & Blair, P., 1985. Input output analysis: foundations and extensions,
Englewood Cliffs, N.J.: Prentice-Hall.

Mill, J.S., 1848. Principles of Political Economy 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].

Muller-Furstenberger, G. & Stephan, G., 2007. Integrated assessment of global


climate change with learning-by-doing and energy-related research and
development. Energy Policy, 35(11), 5298-5309.

von Neumann, J. & Morgenstern, O., 1953. Theory of games and economic
behavior 3rd ed., Princeton, New Jersey: Princeton University Press.

Nordhaus, W.D. & Radetzki, M., 1994. Managing the global commons: the
economics of climate change, MIT press Cambridge.

Nordhaus, W.D. & Yang, Z., 1996. A regional dynamic general-equilibrium


model of alternative climate-change strategies. The American Economic
Review, 86(4), 741-765.

Nordhaus, W.D., 2009. Alternative Policies and Sea-Level Rise in the RICE-
2009 Model, New Haven, CT: Cowles Foundation, Yale University.
Available at: http://econpapers.repec.org/paper/cwlcwldpp/1716.htm
[Accessed September 10, 2009].

Nordhaus, W.D., 2008. A Question of balance: weighing the options on global


warming policies, Yale University Press.

Nordhaus, W.D., 2007. Notes on how to run the DICE model, Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
[Accessed June 26, 2008].

Nordhaus, W.D., 1979. The efficient use of energy resources, New Haven,
Conn.: Yale University Press.

Nordhaus, W.D. & Yohe, G.W., 1983. Future carbon dioxide emissions from
fossil fuels. Changing Climate: Report of the Carbon Dioxide
Assessment Committee, 87.

OECD, 2007a. The OECD 2006 Input Output Tables, Paris: OECD Directorate
for Science, Technology and Industry. Available at:
http://www.oecd.org/document/3/0,3343,en_2649_34445_38071427_1_1
_1_1,00.html.

OECD, 2007b. The OECD 2006 STAN: Bilateral Trade Database, Paris: OECD
Directorate for Science, Technology and Industry. Available at:
http://www.oecd.org/document/3/0,3343,en_2649_34445_38071427_1_1
_1_1,00.html.

Pan, H., 2006. Dynamic and endogenous change of input-output structure with
specific layers of technology. Structural Change and Economic
Dynamics, 17(2), 200-223.

Pan, H. & Kohler, J., 2007. Technological change in energy systems: Learning
curves, logistic curves and input-output coefficients. Ecological
Economics, 63(4), 749-758.

Polenske, K.R., 1980. US multiregional input-output accounts and model,


United States: D.C. Heath and Company,Lexington, MA.

Popp, D., 2006. Entice-BR: The effects of backstop technology R&D on climate
policy models. Energy Economics, 28(2), 188-222.

Popp, D., 2004. Entice: endogenous technological change in the DICE model of
global warming. Journal of Environmental Economics and Management ,
48(1), 742-768.

Popp, D., 2006. Innovation in climate policy models: Implementing lessons


from the economics of R&D. Energy Economics, 28(5-6), 596-609.

Quesnay, F., 1758. Le Tableau économique 1759 “third” edition, as reprinted in


M. Kuczynski and R.L. Meek, 1972, editors, Quesnay’s Tableau
Économique, New York: A.M.Kelly.

ten Raa, T., 2008. The Economics of Benchmarking: Measuring Performance


for Competitive Advantage, Palgrave MacMillan. Available at:
http://www.palgrave.com/products/title.aspx?PID=327768 [Accessed
April 23, 2009].

ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.

ten Raa, T. & Pan, H., 2005. Competitive pressures on China: Income inequality
and migration. Regional Science and Urban Economics, 35(6), 671-699.

Ramsey, F.P., 1928. A mathematical theory of saving. The Economic Journal,


38(152), 543-559.

Samuelson, P.A., 1953. Prices of Factors and Goods in General Equilibrium. The
Review of Economic Studies, 1-20.

Scarf, H., 1967. On the computation of equilibrium prices, Cowles Foundation


for Research in Economics at Yale University.

Self, P., 2000. Rolling back the market: economic dogma and political choice,
Palgrave MacMillan.

Shoven, J.B. & Whalley, J., 1984. Applied general-equilibrium models of


taxation and international trade: an introduction and survey. Journal of
Economic Literature, 1007-1051.

Smith, A., 1776. An Inquiry into the Nature and Causes of the Wealth of
Nations Also known as: Wealth of Nations., Project Gutenberg.

Stone, R., 1961. Input-output and national accounts, Organisation for


European Economic Co-operation.

Stone, R., 1962. Multiple classifications in social accounting. Bulletin de


l’Institut International de Statistique, 39(3), 215-33.

Stone, R. & Brown, A., 1962. A Computable Model of Economic Growth (A


Programme for Growth), London, UK.: Chapman and Hall.

Stone, R., Champernowne, D.G. & Meade, J.E., 1942. The precision of national
income estimates. The Review of Economic Studies, 111-125.

Strathern, P., 2002. A brief history of economic genius, Texere.

Swanson, E.T., Anderson, G. & Levin, A., 2005. Higher-Order ‘Perturbation’


Solutions to Dynamic, Discrete-Time Rational Expectations Models ,
USA. Available at: http://www.ericswanson.us/perturbation.html
[Accessed July 7, 2008].

Tukker, A., 2008. EXIOPOL: towards a global Environmentally Extended Input-
Output Table. In Helsinki, Finland. Available at:
https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=2702 [Accessed November 14, 2008].

United Nations, 1993. Revised System of National Accounts,


(Brussels/Luxembourg, New York, Paris, Washington, D.C.): United
Nations Statistical Division. Available at:
http://unstats.un.org/unsd/sna1993/foreword.asp.

United Nations, 2009. World Population Prospects: The 2008 Revision


(Highlights), New York: Population Division of the Department of
Economic and Social Affairs of the United Nations Secretariat.

Von Neumann, J., 1938. A model of general economic equilibrium. Readings in


Welfare Economics, 13(1945), 1-9.

Von Neumann, J., 1928. Zur theorie der gesellschaftsspiele (Theory of Parlor
Games). Mathematische Annalen, 100(1), 295-320.

Walras, L., 1877. Elements of pure economics 2003rd ed., Routledge.

Wilting, H.C., Faber, A. & Idenburg, A.M., 2004. Exploring technology


scenarios with an input-output model. In Brussels, Belgium, p. 19.
Available at:
www.ecomod.net/conferences/iioa2004/iioa2004_papers/wilting.pdf
[Accessed April 9, 2008].

Wilting, H.C., Faber, A. & Idenburg, A.M., 2008. Investigating new


technologies in a scenario context: description and application of an
input-output method. Journal of Cleaner Production, 16(1, Supplement
1), 102-112.

Wixted, B., Yamano, N. & Webb, C., 2006. Input-output analysis in an


increasingly globalised world: applications of OECD's harmonised
international tables, OECD Directorate for Science, Technology and
Industry. Available at:
http://www.oecd.org/dataoecd/46/46/37587419.pdf.

i ceteris paribus: all other factors being equal


ii For example, Monash University Faculty of Business & Economics Centre for
Policy Studies; Massachusetts Institute of Technology Joint Program on the
Science and Policy of Global Change; Purdue University Agriculture Centre for
Policy Studies
iii Although many of Smith's insights were presented in his first work, The
Theory of Moral Sentiments (1759)
iv Of course Francois Quesnay was not the first to create national accounts,
a distinction attributed to William Petty in 1664
v Saint-Simon was also to become the father of positivism, which is a philosophy
that accepts the existence of only positive facts
vi In fact prices can sometimes be negative, as in the case of a negative real
interest rate
vii Specifying who developed the first computable general equilibrium (CGE) model
is not straightforward because CGE is not a closed class of model. Traditionally,
CGEs have been static single period models, run for one period or concatenated
into a few periods. Since the 1990s, CGE models have become intertemporal,
which is fully dynamic over multiple periods. The common characteristics of
CGE models are: multiple economies with multiple sectors; the technology in
each industry exhibits constant returns to scale; all markets are perfectly
competitive; preferences for production and consumption are based on the
single criterion of price (i.e. homothetic with no econometric estimation of
supply and demand elasticities); firms seek to maximise profit and consumers
seek to maximise utility; prices clear product and factor markets; there is no
storage of product or factors (except capital); and there is no long term
borrowing. The latter characteristic is because CGE models deal with the real
economy in contrast to microeconomic models that encompass financial assets.
CGE models may have technology coefficients determined by relative prices,
in contrast to Leontief input-output models, which have fixed technology
coefficients.
viii GREEN became the MIT-EPPA model
ix Nordhaus responded to this criticism by settling DICE (2007) as an equilibrium.
x Stone's acronym RAS for the biproportional adjustment technique is seemingly
his two initials R & S, with A inserted for the Leontief A direct requirements
matrix. R & S were Stone's pre- and post-multiplying matrices for the A matrix
xi The World Bank's model is called ENVISAGE. It is based on its global trade
model called LINKAGE and run in conjunction with the Bank's GIDD simulation
model for personal income distribution
xii The Australian Productivity Commission developed SALTER to support APEC
trade negotiations. ABARE also joined GTAP as a founding consortium member.
xiii The ORANI model is based on Leif Johansen's multisectoral study of economic
growth (MSG) model. Monash CoPS has since transformed the static ORANI
into a dynamic intertemporal model called MONASH. ORANI-G remains a
template for single country models
xiv IMPACT is the CGE model of the Australian Industry Commission's IMPACT
Project. Monash CoPS currently hosts the IMPACT Project. The IMPACT Project
has assisted Monash develop ORANI, MONASH and the GEMPACK modelling
environment, which is used around the world
xv GTEM is in turn a variant of the GTAP model
xvi Australian Treasury 2008 (p203): GTEM is a recursively dynamic general
equilibrium model developed by ABARE to address policy issues with long-term
global dimensions, such as climate change mitigation costs. It is derived from

the MEGABARE model and the static GTAP model. The dimension of GTEM used
in this report represents the global economy through 13 regions (including
Australia, the United States, China and India) each with 19 industry sectors and
a representative household (for society). The regions are linked by trade and
investment. Government policies are represented by a range of taxes and
subsidies. The model also disaggregates three energy-intensive sectors into
specific technologies: electricity generation, transport, and iron and steel. Some
modifications have been made as part of the Treasury modelling program.
xvii Australian Treasury 2008 (p209): G-Cubed models the global economy and is
designed for climate change mitigation policy analysis. An important
characteristic of G-Cubed is that economic agents are partly forward-looking:
they make decisions based not only on the present day economic situation, but
also on expectations of the future. G-Cubed has limited detail on
technologies. Modelling using the G-Cubed model was conducted in conjunction
with the Centre for Applied Macroeconomic Analysis (CAMA) and the Treasury. A
report from CAMA covering the joint modelling work is available on the
Treasury website.
xviii Australian Treasury 2008 (p211): The Monash Multi-Regional Forecasting
(MMRF) model is a detailed model of the Australian economy developed by the
Centre of Policy Studies (CoPS) at Monash University. MMRF has rich industry
detail (with 58 industrial sectors) and provides results for all eight states and
territories. It is also dynamic, employing recursive mechanisms to explain
investment and sluggish adjustment in factor markets.
xix Frederick Winslow Taylor, Father of Scientific Management, identified this
around 1900
xx Along with strategic management accounting came life cycle analysis,
competitor accounting (i.e. hypothesising the performance and costs of
competitors), marginal costing and target costing. In target costing, the future
selling price in the market was estimated and the designers and engineers were
instructed to reduce costs to the market price less the profit margin.
xxi Lagrange multipliers occur in linear programming. In non-linear programming,
the correct terminology is Karush-Kuhn-Tucker (KKT) multipliers.
xxii Rather than comparing an individual industry to its peers, generic
benchmarking for CGE switches some or all of production to the most efficient
industry.
xxiii This embodies the assumption that national expansions can only be greater
than or equal to 1
xxiv The United Nations System of National Accounts (SNA93) requires data to be
measured in make (also called "source") tables and use tables. A Leontief
input-output table can be calculated from the make-use format. The equations
that connect the two formulations are: A = U.Transpose[V] and
x = Transpose[V].s, where V and U are the make and use tables respectively, A
is the Leontief technology matrix, x is the commodity volume and s is the
activity of the production sector that produces the commodity. In the
commodity-technology model using make-use tables, consumption Y is determined
by the equation Y = (Transpose[V] − U).s. This is similar to the Leontief
material balance where x = A.x + y
xxv Personal communications with Thijs ten Raa in January 2009 and with ten
Raa's former PhD student and now research collaborator, Haoran Pan, suggest
that ten Raa's work will address this further
xxvi CONOPT solver by ARKI Consulting & Development A/S, Bagsvaerd, Denmark
(www.conopt.com). KNITRO solver by Ziena Optimization Inc. (www.ziena.com).
MINOS solver by Stanford Business Software, Inc.
(sbsi-sol-optimize.com/asp/sol_product_minos.htm). SNOPT solver by Philip
Gill, Walter Murray and Michael Saunders, available through Stanford Business
Software, Inc. (sbsi-sol-optimize.com/asp/sol_product_snopt.htm)

xxvii MARKAL by the International Energy Agency (IEA) Energy Technology
Systems Analysis Programme (ETSAP) (www.etsap.org/markal/main.html)
xxviii The Branch And Reduce Optimization Navigator (BARON) by The Sahinidis
Optimization Group of Carnegie Mellon University Department of Chemical
Engineering (www.andrew.cmu.edu/user/ns1b/baron/baron.html)
xxix See www.ime.usp.br/~egbirgin/tango/index.php
xxx AMPL and GAMS employ pre-solvers to detect redundancy, determining the
values of some variables before applying the algorithm and so eliminating
variables and constraints. The pre-solve phase also determines whether the
problem is feasible. Corresponding to the pre-solve phase, a post-solver is
required to restore the original problem and variables
xxxi See ascendwiki.cheme.cmu.edu
xxxii See www.gerad.ca/~orban/drampl/
xxxiii See the AMPL and GAMS footnote above
xxxiv Private communication with Mathematica's developers suggests that,
following major enhancements in the optimisation functions, further
functionality will not be possible until new developers are appointed
xxxv See neos.mcs.anl.gov/neos/
xxxvi See scipy.org/scipy/scikits/wiki/OpenOptInstall
xxxvii See www.gerad.ca/~orban/pyneos/pyneos.py
xxxviii See cran.r-project.org
xxxix See www.burns-stat.com
xl See code.google.com/p/ryacas
xli See www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
xliiSee www.ericswanson.us/perturbation.html
xliii "Harmonised" means that the OECD input output tables use common industry
definitions with the OECD's STAN Industry Database (STAN), Business R&D
Expenditures by Industry (ANBERD) and Bilateral Trade Database (BTD). All
industry classification is based on ISIC Revision 3 (OECD Input-Output Database
edition 2006 - STI Working Paper 2006/8). The OECD estimates that between
85% and 95% of world trade is covered in its Bilateral Trade Database

Chapter 5 A new spatial, intertemporal CGE
policy research tool
Chapter 4 Economic models for climate change policy analysis identified a
suitable computable general equilibrium modelling approach, mathematical
platform and data source to achieve the research aim. The objective of this
Chapter is to describe and validate a benchmarking model that achieves the
research aim. The new model is called Sceptre, which is an acronym for Spatial
Climate Economic Policy Tool for Regional Equilibria.

5.1 Sceptre model flowchart


The flowchart in the illustration below is an abridged version of that provided
along with the Mathematica code in Appendix 8 The Sceptre model:

Illustration 17: Abridged Sceptre model flowchart

It may be noted that the above flowchart has three vertical swim-lanes and an
optimisation pool. The first swim-lane contains those activities concerned with
mining GTAP's economic and emissions data. The second swim-lane calculates
exogenous climate equations and builds an endogenous symbolic model for
Nordhaus' DICE model economic-climate equations. These scientific equations
become a climate feedback loop within the constraints. The third swim-lane
builds a multiregional input output model in symbolic form, which becomes the
economic model embedded within the constraints. The optimisation pool draws
upon these models to interpret the optimisation constraints in terms of the
most fundamental or “minimum set” of input variables of the underlying
models.

The schematic structure of the model is generally as follows:

Illustration 18: Model schematic diagram

In this schematic diagram, multiple globes represent the intertemporal nature


of the model. The atmosphere and oceans are geophysical carbon sinks.

Three regions are shown, which are bilaterally interconnected through trade.
Trade deficits of each are controlled such that unrealistic global imbalances do
not occur.

In addition, the regions are subject to an economic damage function from the
common effect of carbon emissions induced global warming. A Total Factor
Productivity function offsets the damage function in each region.

In each region the economy comprises aggregated food, manufacturing and
services sectors together with carbon abatement and permit markets. Small
blue arrows represent permit markets evolving to abatement markets as the
price of carbon rises. The endogenous rate at which each commodity market
evolves is indicated as μ_1 to μ_3.

The regional matrices represent tabular production functions for each
commodity, with s_1 to s_5 being the activity levels of the respective
commodity production sectors. L_1 to L_5 represent labour constraints in
each commodity sector. sa_1 to sa_5 represent sales-to-asset ratios that
mediate the relationship between the stocks and flows of each commodity.

The overall use of commodity production comprises Investment, Consumer and
Government consumption and Net Exports, together with industrial uses of
commodities (represented by the orange arrows). The Purchasing Power links
between labour and the Consumer and Government consumption vector
indicate the closure of the model for households.

The complete mathematical model is shown in the following problem


specification:


Maximise NPV(Σ α / pop):
    where NPV is the discounted net present value of the simple sum of
    regional indexes of consumption per capita, calculated as the index of
    expansion of consumption in each regional economy (α), compared to the
    initial period, divided by the index of population growth in each
    region (pop)

over s, z, i, α, inv, μ

Subject to:

Commodity flows balance:
    (V^T − U)∗s∗TFP∗dam − (1 + α)∗y_0 − inv∗i − exim∗z = 0
    where V^T is the Make matrix, U is the Use matrix, s is industry
    activity, TFP is Total Factor Productivity, dam is the fractional
    economic damage feedback multiplier due to global temperature rise,
    y_0 is the initial consumption vector, inv is the investment vector
    with activity i, and exim is net exports with activity z

Sales being limited by assets:
    V^T∗s ≤ s2a · closewdv_{t−1}
    where s2a is the sales to assets ratio and closewdv_{t−1} is the
    previous period closing written down value of assets

Maintenance of consumption per capita:
    ypc_{t−1} ≤ ypc_t
    where ypc_t is per capita consumption at time t

Final period investment:
    inv_{n−1} ≤ inv_n
    where inv_n is investment in the final period

Closing model for trade:
    Deficit_t ≤ Deficit_0
    where Deficit_0 is the initial Balance of Payments trade deficit

Labour endowment:
    Σ L_sector∗s ≤ N
    a region's aggregate utilisation of labour is constrained by the
    total labour endowment of the region

Closing for households:
    Σ L_sector∗s ≥ initial labour employed ∗ (1 + α)
    aggregate region workforce wages need to increase at the same rate
    as the consumption vector

Industrial emissions amelioration & abatement:
    physical emissions = s∗(1 − μ)∗emissions_0
    where emissions_0 is the initial level of industrial emissions and μ
    is the engineering control rate of emissions, which incurs a regional
    backstop technology cost dependent upon both μ and time

Emission permits market:
    emission permits = s∗(1 − μ)∗emissions_0
    permits are a commodity required for carbon emitting production

Economic damage function:
    dam = nonlinear DICE function of cumulative emissions
    in the DICE geophysical model, solar radiation absorbed by carbon in
    the atmosphere heats the atmosphere and ocean reservoirs

Constraint on atmospheric temperature:
    temp rise ≤ 2 degC
    (temp rise = nonlinear DICE function of cumulative emissions)
    an example of a consensus international constraint to limit prospective
    atmospheric temperature rise, economic damages and adverse social impacts

The assumptions used in developing these relationships and components are


now discussed.

5.2 Model assumptions

Data
The GTAP economic and greenhouse gas emissions databases and acyclic
processing are discussed in Chapter 4 Economic models for climate change
analysis. A detailed procedure for aggregating GTAP data and preparing the
data for generic economic modelling is provided in Appendix 7 Mining the
GTAP database. The Appendix also shows how the data is cross-checked by
rationalising it to GTAP's Social Accounting Matrix.

The above illustrations show how the aggregated regions of the European
Union, NAFTA and Rest of World (ROW) compare in terms of Gross Domestic
Product and population. It may be noted that the three aggregated regions
have approximately the same share of Gross Domestic Product.

Key parameters
Four important control parameters in Sceptre are the number of periods in the
projection, social discount rate, depreciation rate and labour endowment
unemployment rate.

Projection periods

Mention is made a number of times above of symbolic models in the flowchart.


As constraints are expressed symbolically in terms of underlying models and
these models are cumulative, each additional period doubles the computing
time required to prepare and optimise the policy scenario. The number of
periods in a projection is therefore a balance between demonstrating long term
economic and climate effects, the mathematical complexity of optimising the
system, processing power and system memory.

Two issues exacerbate the problem of large models. The first is nonlinearity
because it obviates the use of fast linear programming solvers. The
introduction of a nonlinear economic-climate feedback loop means all linear
constraints are interpreted through a nonlinear framework. This makes it
much harder to satisfy constraints and leads to performance issues. For
example, the linear programming optimisation of a 90 period Multi-regional
Input Output (MRIO) model might take only three hours to process on a high
power research computer node. Introduction of a nonlinear economic-climate

equation means it becomes increasingly difficult to satisfy all constraints and
necessary to find the best solution by scrubbing away at the constraints over
2,000 iterations. The capacity of the MRIO model drops to 13 periods and even
this takes 15 hours to compute.i

As mentioned in the discussion of acyclic solvers in Chapter 4, at this point of


time a sequential topological processor for constraints is not available.
Therefore, extended symbolic complexity in constraints is dealt with by
modelling decades rather than single years. This is consistent with Nordhaus'
DICE model, which projects in decades, and facilitates the use of his calibrated
economic-climate equations without modification.

Discount rate

Long term discount rates have received considerable attention amongst


economists. In A Mathematical Theory of Saving (1928, p.553), Ramsey
assumed that:

The rate of discounting future utilities must, of course, be


distinguished from the rate of discounting future sums of money. If I
can borrow or lend at a rate r I must necessarily be equally pleased
with an extra £1 now and an extra £(1+r) in a year's time, since I
could always exchange the one for the other. My marginal rate of
discount for money is, therefore, necessarily r, but my rate of
discount for utility may be quite different, since the marginal utility
of money to me may be varying by my increasing or decreasing my
expenditure as time goes on.

Ramsey derived the relationship that the rate at which consumption is
discounted is equal to the sum of the rate of interest on savings and the
percentage change in marginal utility times the growth of consumption. This
equation, sometimes called the "Ramsey equation", provides the real rate of
return on capital r, which is also called the social discount rate:

r = ρ + η·g

where:
    ρ is the pure rate of time preference
    η is the marginal elasticity of utility
    g is the rate of growth of consumption per generation

Ramsey argued that ρ should be zero because:

Discounting of future utilities is ethically indefensible and arises


purely from a weakness of the imagination.

Sir Nicholas Stern (2007), Nordhaus (2008, pp.10 & 61; 2009) and The
Garnaut Climate Change Review (2008, p28) all use the Ramsey equation,
albeit with quite different parameters:

Parameter Stern Nordhaus DICE Garnaut


& RICE
Exogenous pure 0.1% 1.5% 0.5%
rate of time

preference 
Exogenous 1 2 1&2
marginal
elasticity of

utility 
Endogenously 1.3% Average of 2% for Average of 1.3%
calculated first 50 years and for period 2003-

growth g 1.25% over 100 2100


years
Social discount 1.4% (Nordhaus Average of 5.5% Average of 1.35%
rate for savings notes that this first 50 years and & 2.65%,
and investment doesn't match 4% over 100 years corresponding to

r historical the two values for


performance of 
2.7%)
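As a quick cross-check of the table, the Ramsey equation can be evaluated
directly for each study's central parameters. This is a minimal sketch in
Mathematica (the platform used for Sceptre); the function name is
illustrative, and the growth rates are those quoted in the table:

    (* Ramsey equation r = rho + eta g, all figures in percent per annum *)
    ramseyRate[rho_, eta_, g_] := rho + eta g

    ramseyRate[0.1, 1, 1.3]    (* Stern:    1.4 *)
    ramseyRate[1.5, 2, 2.0]    (* Nordhaus: 5.5, first 50 years *)
    ramseyRate[0.05, 2, 1.3]   (* Garnaut:  2.65, with eta = 2 *)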

Nordhaus suggests that high social discount rates reflect the real situation,
because entrepreneurs need to create new technology with returns
commensurate with other high technology investments, for example the
returns from genetically modified crops.

However, the situation is more complex than Ramsey's equation (above)
suggests. For example, traditional intertemporal CGE models employ a
consumption welfare function embodying Arrow-Pratt's constant relative risk
aversion (CRRA) criterion. This provides a constant elasticity of
intertemporal substitution σ:

σ = 1/η

u(c) = c^{1−η} / (1−η)

where:
    σ is the constant elasticity of intertemporal substitution
    η is the marginal elasticity of utility
    ρ is the pure rate of time preference
    u is welfare utility
    c is per capita consumption

This welfare function is often modified by subtracting one from the numerator
in order to simplify the welfare function to the log utility ln(c) for the
special case of η = 1. By applying L'Hopital's Rule (Rudin 1976, p.109):

u(c)|_{η=1} = lim_{η→1} (c^{1−η} − 1) / (1−η) = ln(c)

The effective discount rate, analogous to r in Ramsey's equation, can be
calculated for the unmodified welfare function by determining its net present
value and comparing this to the net present value of an equivalent standard
function for growth to perpetuity:

NPV(Σ_{t=1}^{∞} u(c)) = Σ_{t=1}^{∞} [c(1+g)^{t−1}]^{1−η} / [(1−η)(1+r)^{t−1}]

                      ≡ Σ_{t=1}^{∞} c(1+g)^{t−1} / (1+r)^{t−1} = c(1+r) / (r−g)
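The growth-to-perpetuity closed form on the right-hand side can likewise be
checked symbolically; a minimal sketch, assuming r > g > 0 for convergence:

    (* Net present value of a stream growing at rate g, discounted at rate r *)
    Assuming[r > g > 0,
     FullSimplify[Sum[c (1 + g)^(t - 1)/(1 + r)^(t - 1), {t, 1, Infinity}]]]
    (* should simplify to c (1 + r)/(r - g) *)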

The result of this comparison is that the effective discount rate r is a
nonlinear function of the level of consumption c and the parameters
{ρ, η, g}.ii

In the case of η = 1, this relationship simplifies to the well-known result
that the discount rate is equal to the growth rate of the economy, r = g.
However, in other cases the effective discount rate r is a function both of
the level of consumption c and the Ramsey parameters {ρ, η, g}. As a
consequence, the respective equivalent discount rates for the Stern, Nordhaus
and Garnaut studies vary widely with consumption per capita c (except in the
special case marked with an asterisk where η = 1):

Social Discount Rate    Stern     Nordhaus         Garnaut
Ramsey (as above)       1.3%*     5.5% & 4%        1.3%* & 2.65%
With c = 0.5            1.3%*     2.9% & 1.9%      1.3%* & 1.8%
With c = 1              1.3%*     5.6% & 4%        1.3%* & 3.1%
With c = 2              1.3%*     1.8% & 13.4%     1.3%* & 9.0%
With c = 3              1.3%*     47.1% & 33.7%    1.3%* & 20.6%

While the Stern, Nordhaus and Garnaut CGE models endogenously calculate the
real discount rate, this is based on four independent assumptions with
non-diversified cumulative errors. Two assumptions, {c, g}, vary within and
across cases, and the other two, {ρ, η}, are not well understood at all. For
example, Heal (2005) notes that the utility discount rate reflects ethical
judgements and its relationship to the social discount rate requires a wide
understanding of political economy issues such as preferences,
complementarities and substitutabilities.

Similarly, Weitzman (2001) reminds us that this is particularly poignant in
the case of the marginal elasticity of future utilities η, which is an
assumption that cannot be fully validated:

Economic opinion is divided on a number of fundamental aspects,


including what is the appropriate value of an uncertain future
“marginal product of capital” …. which depends, after all, on the
ultimately unpredictable rate of technological progress.

Occam's Razor, or law of parsimony, suggests Entia non sunt multiplicanda
praeter necessitatem, which approximately translates to Entities should not be
multiplied more than necessary.iii As one of the major weaknesses in traditional
CGE modelling is the copious number of assumptions, restricting the number
of assumptions in the Sceptre model has been one of its guiding design
principles. In regard to consumption and production functions, this means a
simpler explanation is better than a complex one.

As the benchmarking of economic expansion does not require a welfare utility
function with constant elasticity of utility, there is no need to be anything
other than parsimonious with this assumption. The rationale for this decision
is that a benchmarking model seeks to reorganise the factors of production to
expand an economy by using all available resources more efficiently while
keeping the basket of consumed commodities in constant proportions. This
contrasts with a welfare model that seeks to maximise aggregate consumption.
The difference between these methods is analogous to the complementary
techniques of benchmarking using Data Envelopment Analysis (DEA) and, say,
using Principal Components with the Translog production function.

In an earlier survey of empirical practice, Weitzman (1998) found that the


future real discount rates being used by practitioners had a mode of 2% pa,
median of 3% pa and mean of 4% pa. In 2001, Weitzman suggested that
economists “should” be using a schedule for real discount rates of 4% for 1 to
5 years, 3% for 6-25 years, 2% for 26-75 years and 1% for 75-300 years.

Discounting long term financial returns is relatively uncontroversial amongst


equities analysts and project finance credit analysts. These finance sector
analysts consider that the perpetuity growth rate of company earnings trends
to the historical long term sustainable growth in Gross Domestic Product.
Avoiding the recent decade of extraordinary economic stimulus and leverage,
the historical median growth rates in Retained Earnings for the S&P was about
4% pa from 1960 to 1995 (Penman 2001, p.188).

A discount rate of 4%pa appears justified from the review of Anglo-American


political economy in Chapters 2 & 3. A rate of 4%pa is consistent with
Nordhaus' application of the Ramsey discount rate. Furthermore, in response
to Weitzman's warning above, it is apparent that Anglo-Americans hold a
shared and pervasive belief in the virtue of markets and technology. It is

regarded as a truism of markets that future problems will elicit entrepreneurial
technological innovation to solve those problems. This belief is also expressed
as a strong preference for current consumption over future consumption, given
that the welfare of people in the future can be “dismissed” because they will be
better off due to technological progress.

A constant real discount rate of 4% pa is utilised in this research in
recognition of Anglo-American confidence in economic growth through
technological innovation; the preference for current consumption over future
consumption; a desire for consistency with Nordhaus' economic-climate model;
consistency with other researchers and the financial industry; and a desire
to avoid introducing unnecessary variables.

Depreciation rate

In accounting, depreciation is usually treated as either straight line or


declining balance. The latter, declining balance, calculates depreciation in a
year as the opening net balance multiplied by a depreciation rate. The base
depreciation rate is assumed to be 4% pa, which is the same as GTAP's default
rate. This default rate is modified to the lower of the default rate and
investment as detailed below.

Unemployment rate

The labour employed in each industry may be aggregated to a labour resource.


However, if the labour endowment is set to the aggregated labour resource
then the constraint will be immediately binding. In an analysis of the Dutch
economy, ten Raa (2005, pp.121-2) shows that the labour constraint needs to
be released or freed-up by setting the labour endowment to a higher figure by
compensating the resource for the unemployment rate.

In developed countries, the unemployment rate can be a slippery figure


because of factors such as differences in employment participation,
substitution of part-time for full-time jobs, government work programs and
differential unemployment in socio-economic or demographic groups, for
example, youth unemployment. The situation in developing economies is even
more fluid and a term like “unemployment rate” has no meaning.

Therefore, the labour constraint is de-bound by allowing for a notional
unemployment rate of 6.5% in the calculation of labour endowment. This is not
a critical assumption because the labour constraint is rarely binding for two
reasons. The first reason is that labour is assumed to grow with regional
population. The second reason is that the climate and asset constraints bind
before the labour constraint, except in extreme policy scenarios that force the
economies to contract. If policy scenarios binding labour are to be investigated
then the nature of the labour endowment in each region may need to be
researched in more detail.

Exogenous population growth


As discussed in Chapter 4 Economic models for climate change policy analysis ,
the United Nations median estimate of population is 9.15 billion by 2050,
rising to a long term saturation level of 11.03 billion (United Nations 2009). In
contrast, Nordhaus assumes population saturating at 8.6 billion people in
2100, based on the forecasts by the International Institute of Applied Systems
Analysts (Lutz et al. 2008).

For consistency with Nordhaus' model, the assumption for the time being
within Sceptre is that population saturates at 8.6 billion in 2100. Following the
effluxion of 10 decades, the actual year of population saturation would be
2104.

The population profile in each region is calculated from GTAP's data of


regional population in 2004. This is increased by a regional population growth
rate, commencing at an aggregated rate derived from Mathematica's Country
Database, which is for the 2006 year. The common uniform exponential
deceleration factor in the population growth rate is manually balanced to
saturate global population after 10 decades. The population profiles are shown
in the following illustrations:

It may be seen that the population saturates with EU25 nations growing from
458 million people to 489 million, NAFTA growing from 433 million people to
559 million and the Rest of the World (ROW) growing from 5.5 billion to 7.6
billion. The absolute increases over the 130 year projection period are EU25
6.7%, NAFTA 29.1% and ROW 38.1%.
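The balancing of the deceleration factor can be sketched as follows. This is
a simplified, self-contained illustration rather than Sceptre's own code: the
initial decadal growth rates are rough placeholders (not the Mathematica
Country Database figures), and each region's growth rate is assumed to shrink
geometrically by a common factor d per decade.

    (* Balance a common deceleration factor d so that world population *)
    (* saturates near 8.6 billion after ten decades *)
    pop0 = {0.458, 0.433, 5.5};    (* billions: EU25, NAFTA, ROW (GTAP 2004) *)
    gdec = {0.020, 0.105, 0.138};  (* placeholder initial decadal growth rates *)
    world[d_?NumericQ] :=
      Total[pop0 Table[Product[1 + gdec[[i]] d^k, {k, 0, 9}], {i, 3}]];
    FindRoot[world[d] == 8.6, {d, 0.7}]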

Aggregation of GTAP U and V matrices


Mining the GTAP database for U and V matrices has been discussed at the
beginning of this Chapter. It only remains to note that various commodities are
treated in different ways. For example, GTAP's services commodity, which
includes electricity generation and distribution and water reticulation, is
identified as a commodity that is not traded between regions. It is recognised
that a small proportion of services are internationally traded. However any
error effect from this assumption will be small. Dealing with it this way is
preferable to allowing services to trade and introducing an additional set of
constraints to restrict the traded proportion.

International trading of the emission permits commodity and the amelioration
or abatement commodity could be restricted in this way if desired. However,
the only policy scenario modelled in Sceptre assumes that these CO2
pollution commodities are not internationally traded.

Inclusion of carbon trading & abatement


If the only carbon pollution policy option was to place a quantity limit on CO 2
emissions, then emission permits would be subject to a simple resource
constraint in the same way as labour has a labour endowment.

However, the introduction of a new commodity of CO2 amelioration or
abatement means that both emissions permits and amelioration or abatement
need to be modelled as substitutable commodities. The new amelioration or
abatement commodity includes higher cost energy sources such as solar or
nuclear power, consumer ameliorations such as house insulation and electric
cars, and abatement services such as CO2 collection and sequestration (CCS).

Another significant advantage of the Use-Make (U, V^T) format is that
additional commodities and industries can be directly appended to the
matrices. To the Use matrix, two rows are appended with GTAP's reformatted
IEA industrial emissions data. The only difference between the two rows is
that the first additional row (for abatement) is multiplied by μ while the
second additional row (for emissions permits) is multiplied by (1 − μ). The
parameter μ is exceedingly important: it is the proportion of carbon
emissions actually eliminated by carbon pollution policies. Optimisation
determines the optimal rate of switching from the financial solution of
emissions permits to the hard task of becoming a low carbon emissions
economy. The consumption and investment vectors are similarly augmented. Of
course, each of the ameliorations or abatements has a cost, and this is
included in the ameliorations and abatement column as a projected backstop
technology cost. This backstop technology cost is a function of μ because the
first units of ameliorations and abatements will be relatively inexpensive
and the last units will have a very high price.

If the Use matrix is augmented then the Make matrix needs to be likewise
augmented. There is an industry producing amelioration and abatement
services and another producing emissions permits, albeit the latter is most
likely run by the government. The diagonal elements of the Make matrix are
set to the sum of the corresponding uses plus net exports, thereby facilitating
international trade.
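A sketch of the Use matrix augmentation for one region follows; the Make
matrix is augmented analogously. The names here are hypothetical: use is a
commodities × industries Use matrix and emissions a vector of base industrial
emissions by industry, split between the abatement row (weighted by μ) and
the permits row (weighted by 1 − μ) as described above.

    (* Append an abatement row (mu share) and an emission permits row *)
    (* ((1 - mu) share) to a regional Use matrix *)
    augmentUse[use_?MatrixQ, emissions_?VectorQ, mu_] :=
      Join[use, {mu emissions, (1 - mu) emissions}];

    augmentUse[{{80.}}, {5.}, 0.3]
    (* -> {{80.}, {1.5}, {3.5}} : 30% abated, 70% requiring permits *)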

Trade deficit and taxation bias

Trade deficits

ten Raa's approach of limiting national trade deficits to a maximum of the


current deficit is outlined in Appendix 6 Benchmarking with linear
programming. For a single period multi-regional input output (MRIO) model,
ten Raa ensures that countries do not increase their trade deficits by a
secondary optimisation that scans the primary solutions of a linear
programming maximisation of economic expansion.

Sceptre adopts ten Raa's approach. However, both external optimisation of a
linear programming formulation and nonlinear optimisation of a combined
model were tested. Given the need for additional nonlinear climate
constraints, the latter was found to be more convenient.

Valuation and supranational freight

Leontief's multiregional input output (MRIO) model is provided as Appendix 3


Input output tables. The Leontief model ignores the source of the imported
inputs and assumes a constant product-mix in order to manageably manipulate
the Leontief A matrix of technical coefficients. In this product mix approach,
inputs per unit of output are assumed to be constant across regions and the
input coefficient matrix of regions is assumed to be the average of the detailed
coefficients of the supra-entity (in our case the world) weighted by the
proportions of sub-sector outputs to total sector output in each region.

ten Raa's MRIO method utilises net exports (exports less imports) and so
ignores the source of imported inputs in the same way as Leontief. However,
the use of U and V matrices means that the somewhat unrealistic product-mix
assumptions can be relaxed. This method is described in Appendix 6
Benchmarking with linear programming.

Traditional CGE modelling of international trade flows can be quite complex.
Friot (2007, pp.14-6) discusses the need to address the valuation effects of
import and export taxes and of freight in the supranational trade sector in
CGE models using GTAP data.

When compared to traditional CGE modelling, ten Raa's MRIO method
introduces a significant advantage and some disadvantages. The significant
advantage is that internal exports and imports within aggregated GTAP regions
are inherently eliminated by the use of net exports. This removes the need to
distribute exports and imports within the domestic intermediary and final
demand and supply matrices on some arbitrary basis, such as proportional
allocation.

However, valuation issues are not as straightforward. If the U and V matrices
are considered to be commodity volumes (values divided by common commodity
prices), then taxation and freight issues in international trade introduce
distortions into the export-import (exim) data.

The approach taken in Sceptre is to value exports and imports at world prices
and treat the difference of freight and taxes as a constant bias. This approach
is detailed in Appendix 7 Mining the GTAP database. The calculated bias
becomes less appropriate if the trade in particular commodities significantly
rises or falls, or reverses. However, the alternative is to introduce trade
multipliers on net exports. This also has problems. For example, as the mix of
imports and exports changes then the multiplier becomes inappropriate. It may
even be the case that import taxes become applied to exports. In addition, such
multipliers cannot be easily implemented in a linear programming schema,
which is the overall controlling paradigm for both linear and nonlinear
formulations.

Initial assets, depreciation & Sales/Assets ratios

Accounting stocks and flows model

The economy's commodity accounts can be modelled using normal accounting
techniques. (V^T − U)·s is the productive gross margin, and this is spent on
investment and consumption. Consumption is analogous to the payment of a
dividend.

Profit & Loss
Gross Profit V T −U ⋅s t
Depreciation − ninvt t −1
Dividend − a t
Increase of Retained Earnings  R Et
where :
s t is the activity vector
ninvt t−1 is the net investment at the end of period t−1
invt t is the investment for period t
 is the depreciation rate
 is the economic expansion factor
a t is the consumption vector

For simplicity in explanation, it is assumed here that the trade vector for net
exports is part of the consumption vector. The accompanying Cash Flow is:

Cash Flow
    Gross Profit       (V^T − U)·s_t
    Investment         − invest_t
    Dividend           − (1 + α)·a_t
    Increase of Cash   ΔCash_t

Since all value added is used for investment or consumption (here the
Dividend), ΔCash_t = 0. This provides a commodity "flows model" analogous
to standard accounting principles, where the cash flows model is:

(V^T − U)·s_t − invest_t − (1 + α)·a_t = 0

The commodity “stocks model” requires additional discussion. If the profit of


the economy is analogous to Gross National Income (GNI) less depreciation of
capital, and consumption and net exports are together analogous to a dividend,
then investment is analogous to retained earnings.iv

Again by period, country and commodity, the stocks equation is:

closing capital = opening capital − depreciation + investment

The net investment in a production unit comprises both inventory and fixed
capital investment. It is quite clear that inventory of a commodity is the
accumulation of the commodity. In addition, the fixed capital investment and
depreciation of this investment can be modelled as accumulations of the
commodity. In 1932, Von Neumann observed the remarkable duality between
monetary variables and technical variables such as commodity production
intensity. He concluded that money could be eliminated leaving only
commodities in economic models (Von Neumann 1938, p.1; Champernowne
1945, p.13).

Von Neumann also noted that household consumption and investment are each
parcels of commodities and that wear and tear (i.e. depreciation of the net
investment in the production process) could also be treated as a commodity.
Ultimately all net investment in a production process is absorbed into the
commodities produced by the production process itself. Depreciation is the
annual quantum of this depreciation. Therefore, von Neumann's assumptions
implicitly assume that commodities are equivalently bartered at fair value to
achieve the mix of commodities required for fixed capital equipment.

Applying von Neumann's assumptions, the “stocks model” for the production
unit is:

ninvt_t = ninvt_{t−1} + invest_t − δ·ninvt_{t−1}

where:
    ninvt_t is the net investment at the end of the current period
    ninvt_{t−1} is the net investment at the end of the previous period
    invest_t is the new investment for the current period
    δ is the depreciation rate
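The recursion is straightforward to implement; a minimal sketch with
illustrative numbers (4% declining-balance depreciation and a constant
investment of 10 units per period):

    (* Stocks model: ninvt_t = (1 - delta) ninvt_{t-1} + invest_t *)
    nextNinvt[prev_, invest_, delta_] := (1 - delta) prev + invest;

    FoldList[nextNinvt[#1, #2, 0.04] &, 100., {10., 10., 10., 10.}]
    (* -> {100., 106., 111.76, 117.29, 122.6} (approx.) *)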

Apart from net investment in a production unit, surplus commodities do not


exist in the commodity model of an economy. Therefore the economy's Balance
Sheet has Cash of zero at each of time t−1 and time t and the Balance
Sheet at each time period is given by:

Balance Sheet at time:     t−1              t
    Cash                   0                0
    Net Investment         ninvt_{t−1}      ninvt_{t−1} + invest_t − δ·ninvt_{t−1}
    Total Assets           ninvt_{t−1}      (1−δ)·ninvt_{t−1} + invest_t
    Retained Earnings      RE_{t−1}         RE_{t−1} + ΔRE_t

Total Assets and Retained Earnings are balanced at time t because:

RE_t = RE_{t−1} + ΔRE_t
     = ninvt_{t−1} + invest_t − δ·ninvt_{t−1}
     = (1−δ)·ninvt_{t−1} + invest_t

So the Balance Sheet indeed balances:

Balance Sheet at time:     t−1              t
    Cash                   0                0
    Net Investment         ninvt_{t−1}      ninvt_{t−1} + invest_t − δ·ninvt_{t−1}
    Total Assets           ninvt_{t−1}      (1−δ)·ninvt_{t−1} + invest_t
    Retained Earnings      RE_{t−1}         (1−δ)·ninvt_{t−1} + invest_t

ten Raa has developed an alternative approach to stocks and flows based on
convolution dispersions. This is described in Appendix 6 Benchmarking with
linear programming. However, after modelling both accounting and dispersion
models, the accounting model was found to be simpler to implement.

If we were only to model stocks and flows to this stage then the model would
be unstable. This is because the drive to maximise consumption will
cannibalise capital by sending investment negative. Whilst a constraint can be
set to ensure investment is not less than zero, this is not sufficient because
maximising total consumption will still set investment to zero and depreciation
will relentlessly cannibalise accumulated capital to zero.

It is necessary to ensure investment remains sufficient for the needs of each


economy. In finance, DuPont analysis has been used for many years to
investigate trends in return on capital.v The Sales/Assets ratio is one of the key
indicators in this analysis. Sales/Assets ratios have the advantage of remaining

stable for long periods, so much so that rules of thumb are often used. For
example, the Sales/Asset ratio is typically 1 for manufacturers and close to 2
for retailers.

For the Sceptre model, a Sales/Assets constraint can be readily calculated


using the Make matrix as a proxy for Sales, analogous to Bródy's approach
(1974; 2004). Corresponding Assets are available from an economic database.
The equation for each commodity in each country in each period is:

V^T·s_t ≤ ninvt_{t−1} × (Sales / Assets)

For intertemporal models, this dynamic constraint takes the place of a static
material balance.

Depreciation rate

A single year depreciation rate for GTAP 2004 data is calculated for each
commodity in each country. The maximum depreciation rate of 4%pa was
discussed above. However, in the industries of some countries the annual
investment can be less than the net accumulated investment multiplied by the
default depreciation rate. In this case, the single year depreciation rate is set
to equal the annual investment divided by the net accumulated investment.

As the Sceptre model is expressed in decades, the depreciation rate for a


single year is compounded for ten years to provide the depreciation rate
appropriate for a decade. It is recognised that aggregating measured annual
depreciation across a decade would be a better approach. However, the latter
approach is not feasible on a consistent basis given available data.
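A minimal helper for this compounding, assuming declining balance at a
constant annual rate:

    (* Compound a single-year declining-balance rate to a decade *)
    decadeRate[annual_] := 1 - (1 - annual)^10;
    decadeRate[0.04]   (* -> 0.335167, about 33.5% per decade *)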

Sales to Asset Ratio

An analogous approach to that for the depreciation rate is taken. A single
year of Sales to Assets is readily calculated for GTAP's 2004 year. Sales for
2004 is then multiplied by a decade compounding factor where the proxy for
annual growth is the rate of population growth for each particular country.
The rate of population growth is derived from Mathematica's Country database
as described in Exogenous population growth (above).

Sales to Assets ratios tend to remain stable over long periods. A comparison
over a period of 7 years may be calculated using GTAP data sets for the base
years of 1997 and 2004.vi The Sales to Assets ratios for each commodity and
region are shown in the following illustration.

Illustration 19: Comparison of Sales to Assets Ratios 1997 (GTAP5) and
2004 (GTAP7)

Performance of the elementary economic model

The utilisation of DuPont sales-to-assets ratios as resource limit inequalities
mediates flows by stocks, thereby bringing realism to the performance of the
economic model while retaining the elegance of tableau production functions.
The model's useful and lively economic environment is demonstrated in this
section.

The following specification of a simple optimisation problem has two


constraints, one for the commodity balance and the other to mediate stocks
and flows by sales to assets ratios. The objective function maximises welfare
defined as the Net Present Value of Consumption per capita.

Maximise NPV(ypc):  where ypc is consumption per capita

over s, y, inv

Subject to:

Commodity flows balance:
    (V^T − U)∗s − y − inv = 0   (∗ signifies the convolution product)
    where V^T is the Make matrix, U is the Use matrix, s is the industry
    activity, y is the consumption vector and inv is the investment vector

Sales being limited by assets:
    s2a · closewdv ≥ V^T∗s
    where s2a is the sales to assets ratio and closewdv is the closing
    written down value of assets

In the above specification, the closing written down value of assets closewdv
is calculated as the convolution over time of investment with the per unit
depreciation profile used to write down the value of accumulated investment.
For example, with say 10% depreciation on a declining balance basis, the
profile for writing down assets would be {1, 0.9, 0.81, … etc}. Accumulated
depreciation is the difference between accumulated investment to the end of a
period and the closing written down value of assets at the end of that period.
Annual depreciation may then be calculated from this series as required.
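A small sketch of this convolution, with hypothetical names and a 10%
declining-balance profile:

    (* Closing written-down value as the convolution of the investment *)
    (* series with the declining-balance profile {1, 0.9, 0.81, ...} *)
    closewdv[invests_List, delta_] :=
      Module[{n = Length[invests], profile},
       profile = (1 - delta)^Range[0, n - 1];
       Table[invests[[1 ;; t]] . Reverse[profile[[1 ;; t]]], {t, n}]];

    closewdv[{10., 10., 10.}, 0.1]
    (* -> {10., 19., 27.1} *)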

It may be noted that the above sales to assets constraint uses the closewdv
of the current period rather than the prior period (as in the specification of
Sceptre). The current period is demonstrated here as the most computationally
difficult situation because of the circular dependence of sales on assets, assets
on investment, and investment (and consumption) on sales.

However, the problem specification above has an infinite number of solutions


since industrial production activity is variable. Traditionally the main
constraint on industry activity has been labour resource. With service
economies this has become less compelling albeit the constraint still applies in
many circumstances. For example, my own country Australia traditionally
solves its labour constraints by expanding immigration to rapidly increase the
labour resource. During a recent national manpower shortage arising out of a
mining boom, migration was increased to the highest level ever experienced.vii

It is useful to consider the economic model's performance in a Negishi welfare
maximising mode. More simply, to see how much growth can be driven through
the model when it is restrained merely by sales-to-asset infrastructure limits.

However before this can be implemented, Game Theory compensation needs to


be introduced. The first Game Theory effect relates to investment and
accumulated assets. The essence of free markets is that demand evokes
production. As a result, present consumers have no concern to ensure future
production, which is assumed to arise through the invisible hand of enterprise.
As a result, consumers that are not wealth limited will maximise their utility by
consuming all production and even cannibalising assets. The reason that this is
referred to here as a Game Theory effect is that it is analogous to finite game
behaviour in elementary Game Theory. In a game with a defined end, the most
profitable strategy on the last iteration of the game is to consume the most
without compunction (i.e. “cheat” if desired). Furthermore, if that strategy is
pursued, then the next most profitable strategy is to cheat on the penultimate
iteration. Then on the iteration before that … and so on. In other words, a
consumer who has no imperative to care for the future will take the maximum
possible at every iteration.

A second Game Theory issue is that the model requires a system of inter-period
equity or else the model will simply place consumption where it is maximised
and not where it is needed for the real welfare needs of society.

Traditionally CGE models have applied a saturating demand characteristic to


control consumers' avariciousness within the model. However, in the current
specification having constant returns to scale production, consumption and
investment are expected to grow monotonically over time. Therefore, it is
possible to minimally regulate the elementary model by constraining both
consumption and investment to be the same as the previous period or grow.

This constraint on investment removes the “finite game” limitation, which


mirrors the real world situation where businesses will use their ability to
generate high returns on investment to ensure their need for resources is met.
Furthermore, Government policies actively support the flow of financial and

commodity resources to firms in order to ensure social stability through
continuity of the virtuous cycle that converts labour to consumption.

The following model specification includes the new monotonic constraints on


consumption and investment, while the objective function continues to
maximise the Net Present Value of Consumption per capita.

Maximise NPV(ypc):  where ypc is consumption per capita

over s, y, inv

Subject to:

Commodity flows balance:
    (V^T − U)∗s − y − inv = 0   (∗ signifies the convolution product)
    where V^T is the Make matrix, U is the Use matrix, s is the industry
    activity, y is the consumption vector and inv is the investment vector

Sales being limited by assets:
    s2a · closewdv ≥ V^T∗s
    where s2a is the sales to assets ratio and closewdv is the closing
    written down value of assets

Maintenance of consumption per capita:
    ypc_t ≥ ypc_{t−1}
    where ypc_t is consumption per capita at time t

Maintenance of investment:
    inv_t ≥ inv_{t−1}
    where inv_t is investment at time t

The model problem remains linear and may be quickly solved using a Simplex
or Revised Simplex method. In the illustration below, computed results are
shown for simplified example inputs. These inputs are a commodity V matrix
with a single element having value 100, the corresponding single element in
the U matrix having a value of 80, initial consumption of 11, a sales-to-assets
ratio of 1.0, a written down value of assets brought forward of 100, a
depreciation rate of 4% (declining balance basis), a population growth rate
of 2% and a discount rate of 4%.
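A toy rendering of this linear program in Mathematica reproduces the
behaviour shown in the illustration below. It is a sketch under the stated
inputs, with illustrative names, three periods rather than the full horizon,
the prior-period written-down value mediating sales, and total consumption
(rather than consumption per capita) held monotonic as a simplification:

    (* Toy intertemporal model: single commodity, three periods *)
    vv = 100.; uu = 80.; s2a = 1.0; delta = 0.04;
    disc = 0.04; popg = 0.02; wdv0 = 100.; y0 = 11.; nper = 3;
    wdvPrev[t_] := If[t == 1, wdv0, wdv[t - 1]];
    yPrev[t_] := If[t == 1, y0, y[t - 1]];
    invPrev[t_] := If[t == 1, 0, inv[t - 1]];
    vars = Flatten@Table[{s[t], y[t], inv[t], wdv[t]}, {t, nper}];
    cons = Flatten@Table[{
        (vv - uu) s[t] == y[t] + inv[t],            (* commodity balance *)
        vv s[t] <= s2a wdvPrev[t],                  (* sales limited by assets *)
        wdv[t] == (1 - delta) wdvPrev[t] + inv[t],  (* asset accumulation *)
        y[t] >= yPrev[t],                           (* monotonic consumption *)
        inv[t] >= invPrev[t],                       (* monotonic investment *)
        s[t] >= 0}, {t, nper}];
    npv = Sum[y[t]/((1 + disc)^t (1 + popg)^t), {t, nper}];
    Maximize[{npv, cons}, vars]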

Illustration 20: Model performance with an objective function to maximise
the Net Present Value of Consumption per capita

Notwithstanding the discounting of future consumption, the model minimises


(i.e. maintains constant) consumption for around 90% of the projection period
in favour of accumulating assets, which builds a prodigious industrial asset
base. In the final years this production base is applied to a monumental
consumption “binge” as shown above.

Of course, this “consumption party” could not occur in a model closed for
households for two reasons. The first is that the very large increase in industry
activity would increase the quantum of wages, with both the workforce and the
level of wages increasing. Both would lead to increased consumption. Secondly,
the massive consumption vector in the final years could not be afforded by
consumers in those years. Closing for households will stabilise the model and
is used within the final Sceptre model.

Before turning to labour and other exogenous constraints to stabilise the


model, it is insightful to investigate how the model might be endogenously
controlled. It has just been demonstrated that discounting future consumption
in the objective function is insufficient to stabilise the model's dysfunctional
preference for asset creation over consumption. Previously it was mentioned
that demand elasticity has often been used to force demand saturation in each
period. However, the basis of collective demand saturation has always
remained somewhat questionable and risks expanding the number of
assumptions for the sake of it.

Another form of endogenous stabilisation is possible. Businesses and
governments look for monotonic growth over quite long periods. While
payback periods as short as one to three years are applied to incremental
investment, the business plans that underpin infrastructure investment
decisions usually range across the period of debt repayment. This is three to
five years as a minimum and might be as long as ten to fifteen years. Some
infrastructure facilities in the resource industry would be evaluated over
production lifetimes of twenty to fifty years. Long term business plans seek to
maximise growth in every year. Expressed as a single number objective, such
business plans seek to achieve the maximum throughput in the present as well
as in the future by maximising the minimum annual growth, which is a
Minimax function.

Here a Minimax objective function, similar to the functions proposed by von
Neumann and Rawls, may be used in the model specification to maximise the
minimum annual growth of Consumption per capita. This also requires the
constraint for Growth of Consumption per Capita to be suitably modified. One
feature of this formulation is that the minimum annual growth rate may indeed
be negative. This implicitly relaxes the requirement that consumption per
capita be maintained or monotonically increase.

The following model specification introduces the Minimax objective function to


maximise the minimum Growth of Consumption per capita:

Maximise ygr:  where ygr is the minimum growth of consumption per capita
in any period

over s, y, inv, ygr

Subject to:

Commodity flows balance:
    (V^T − U)∗s − y − inv = 0   (∗ signifies the convolution product)
    where V^T is the Make matrix, U is the Use matrix, s is the industry
    activity, y is the consumption vector and inv is the investment vector

Sales being limited by assets:
    s2a · closewdv ≥ V^T∗s
    where s2a is the sales to assets ratio and closewdv is the closing
    written down value of assets

Growth of consumption per capita:
    ypc_t ≥ (1 + ygr)·ypc_{t−1}
    where ypc_t is consumption per capita at time t

Maintenance of investment:
    inv_t ≥ inv_{t−1}
    where inv_t is investment at time t

This Minimax formulation is still a linear model that can be solved by Simplex
methods but it takes quite a long time to find the Minimax solution. It may be
noted that consumption per capita at 100 years is approximately 6 on the log
scale, compared to 14 in the consumption party example above.
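Because ygr multiplies the previous period's consumption, the growth
constraint is bilinear in the decision variables. One way to keep every solve
a pure linear program, consistent with the Simplex approach described here,
is to bisect on the trial growth rate; the sketch below reuses the toy
model's definitions from above.

    (* Minimax growth by bisection: each trial rate g yields an LP *)
    (* feasibility problem in the remaining variables *)
    feasible[g_?NumericQ] := FindInstance[
        Flatten@Table[{
           (vv - uu) s[t] == y[t] + inv[t],
           vv s[t] <= s2a wdvPrev[t],
           wdv[t] == (1 - delta) wdvPrev[t] + inv[t],
           y[t] >= (1 + g) yPrev[t],
           inv[t] >= invPrev[t], s[t] >= 0}, {t, nper}],
        vars, Reals] =!= {};

    minimaxGrowth[lo_, hi_, tol_] := Module[{a = lo, b = hi, m},
       While[b - a > tol, m = (a + b)/2.;
        If[feasible[m], a = m, b = m]]; a];

    minimaxGrowth[0., 0.5, 0.001]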

Illustration 21: Model specification with Minimax objective function

In this example the Minimax solution is a minimum annual growth rate of 3%.
As the "wavy" lines in the figure above show, growth in other years is variable
but all rates of growth are higher than in the minimum year. Overall, the
compound constant growth rate corresponds to a fairly robust 3.4% pa. This
equivalent constant growth rate is not itself a feasible solution because the
dynamics of the model require negative investment (i.e. cannibalised assets) in
many years. Indeed, if the growth rate is constrained to be constant, the
maximum growth rate is a much more moderate 1.2% pa. This leads to a
significant reduction in economic performance, as shown in the following
figure.

Illustration 22: Minimax growth compared to Maximised fixed growth

To conclude this discussion of the characteristics of the elementary


intertemporal economic model it is noted that the behaviour of the model is
highly realistic given appropriate stabilisation.

Climate feedback loop and damage function


Nordhaus' DICE model and equations are provided as Appendix 4 Nordhaus
DICE model. Industrial emissions lead to rises in atmospheric and ocean
temperatures and ultimately to an economic damage function. This damage
feedback increases the inputs required for production. However, technological
change acts in the opposite direction, reducing production inputs through
growth in Total Factor Productivity. These effects are used to modify the Use
matrix.

Nordhaus' basic scientific model is detailed in Appendix 4 and included here as
an illustration for reference:

Illustration 23: Schematic implementation of Nordhaus climate equations

The main economic-climate equations from Nordhaus' DICE model used in this
research are shown below. The definitions of parameters can also be found in
Appendix 4. However, the reader is referred to the specific implementation
within Sceptre. This is provided in Appendix 8 The Sceptre model and further
referred to in the discussion of assumptions below.

A.04  $Q_t = \Omega_t\,[1 - \Lambda_t]\,A_t K_t^{\gamma} L_t^{1-\gamma}$   (industrial output)

A.05  $\Omega_t = \dfrac{1}{1 + \psi_1 T_{AT,t} + \psi_2 T_{AT,t}^2}$   (economic damage function)

A.06  $\Lambda_t = \pi_t\,\theta_{1,t}\,\mu_t^{\theta_2}$   (abatement cost function)

A.12  $E_t = E_{ind,t} + E_{land,t}$   (total emissions)

A.13  $M_{AT,t} = E_t + \phi_{11} M_{AT,t-1} + \phi_{21} M_{UP,t-1}$   (atmospheric carbon conc.)

A.14  $M_{UP,t} = \phi_{12} M_{AT,t-1} + \phi_{22} M_{UP,t-1} + \phi_{32} M_{LO,t-1}$   (upper oceans carbon conc.)

A.15  $M_{LO,t} = \phi_{23} M_{UP,t-1} + \phi_{33} M_{LO,t-1}$   (lower oceans carbon conc.)

A.16  $F_t = \eta \log_2\left[ M_{AT,t} / M_{AT,1750} \right] + F_{EX,t}$   (radiative forcing function)

A.17  $T_{AT,t} = T_{AT,t-1} + \xi_1 \left\{ F_t - \xi_2 T_{AT,t-1} - \xi_3 [T_{AT,t-1} - T_{LO,t-1}] \right\}$   (atmospheric temperature rise)

A.18  $T_{LO,t} = T_{LO,t-1} + \xi_4 [T_{AT,t-1} - T_{LO,t-1}]$   (lower oceans temperature rise)

A.19  $\pi_t = \varphi_t^{1-\theta_2}$   (participation cost markup)
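As a sketch of how the carbon cycle and temperature equations advance one decadal step, the following Mathematica function implements A.13 to A.18 directly. The transfer coefficients (phi11 and so on), the forcing parameters eta, fex and mat1750, and the temperature coefficients xi1 to xi4 are assumed to be supplied from the Nordhaus calibration; the names used here are placeholders.

  (* One decadal step of the geophysical model: the state is
     {M_AT, M_UP, M_LO, T_AT, T_LO} and e is the decade's total emissions. *)
  climateStep[{mat_, mup_, mlo_, tat_, tlo_}, e_] :=
   Module[{matNew, mupNew, mloNew, f, tatNew, tloNew},
    matNew = e + phi11 mat + phi21 mup;                   (* A.13 *)
    mupNew = phi12 mat + phi22 mup + phi32 mlo;           (* A.14 *)
    mloNew = phi23 mup + phi33 mlo;                       (* A.15 *)
    f = eta Log[2, matNew/mat1750] + fex;                 (* A.16 *)
    tatNew = tat + xi1 (f - xi2 tat - xi3 (tat - tlo));   (* A.17 *)
    tloNew = tlo + xi4 (tat - tlo);                       (* A.18 *)
    {matNew, mupNew, mloNew, tatNew, tloNew}]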

Method of including CO2 and non-CO2 emissions


Nordhaus (2008, pp. 35 & 43) describes how CO2 and non-CO2 greenhouse gas
emissions are included in the DICE model:

In the DICE-2007 model, the only GHG that is subject to controls is industrial CO2. This reflects the view that CO2 is the major contributor to global warming and that other GHGs are likely to be controlled in different ways (chlorofluorocarbons are a useful example). Other GHGs are included as exogenous trends in radiative forcing: these include primarily CO2 emissions from land use changes, other well mixed GHGs, and aerosols …. Equation (A.12) provides the relationship between economic activity and greenhouse-gas emissions. In the DICE-2007 model, only industrial CO2 emissions are endogenous. The other GHGs (including CO2 arising from land use changes) are exogenous and are projected on the basis of studies by other modelling groups.

While Nordhaus demonstrates that DICE outputs are consistent with other physical modelling, there is a significant difference between industrial CO2 emissions and the equivalent global warming potential from all six greenhouse gases (CO2, CH4, N2O, PFC, HFC, SF6), as shown in the table below (Baumert et al. 2009).

2005 MtCO2e     CO2 emissions   Five Other GHG   Total
Energy          26,372          2,036            28,407
Industrial      1,154           712              1,866
Agriculture                     6,075            6,075
Other           959             1,419            2,378
LUCF2000        7,619                            7,619
Total           36,103          10,241           46,345

Global CO2 and other Greenhouse Gas Emissions. Year 2000 Land Use Change & Forestry (LUCF2000) has been used as this is the latest figure in the database (Source: Baumert et al. 2009)

In the above table, the global warming potential of CO2 gas emissions represents only 78% of all six greenhouse gases combined. In other words, total global warming potential is 28% greater than that from CO2 gas emissions alone. A large part of this increase comes from non-CO2 emissions in Agriculture.

Prima facie agricultural CH4 and N2O emissions might be expected to rise
proportionately with food production. This suggests that the DICE approach of
treating CO2 as the sole element could be improved in a spatial model.
Notwithstanding the potential issues arising from the balance of fixed and
variable emissions, this dissertation adopts the same approach as Nordhaus
DICE in order to remain consistent with the geophysical model.

Data consistency is one of the key features of the GTAP database. Utilising this
advantage, energy related CO2 emissions have been matched to the economic
structure of the database (Lee 2008).

CO2 gas MtCO2e                   Declared 2004   GTAP 2004 Energy Related   GTAP 2004 Adjusted   DICE 2005 Total
EU-25                            4,264           3,840                      3,840
NAFTA                            6,720           7,050                      7,050
ROW                              24,209          15,140                     15,140
Subtotal                         35,192          26,030                     26,030               27,276
DICE Eland                                       4,037                                           4,037
Industrial (incl. above)                                                    1,092
Bunkering (incl. above)                                                     910
Land Use Chg & F (incl. above)                                              7,619
World                            35,192          30,067                     35,651               31,313

Comparison of 2004 Declared CO2 Gas Emissions with GTAP energy-related and adjusted CO2 gas emissions (Sources: Baumert et al. 2009, Lee 2008 & DICE 2008 model equations)

The table above compares the regional aggregations of GTAP energy related
CO2 emissions used in this dissertation with declared CO2 gas emissions from
energy, industrial, international bunkering and Land Use Changes and Forestry
(LUCF) that have been similarly aggregated.

This dissertation uses GTAP 2004 Energy Related emissions, with the DICE
“eland” adjustment. The reasons for this are:

• total DICE 2005 CO2 gas emissions of 31,313 Mt is significantly less than declared emissions of 35,192 Mt (2004) and 36,103 Mt (2005)
• total energy-related GTAP 2004 CO2 gas emissions of 30,067 Mt (adjusted with DICE eland) differs by only 4% from the 31,313 Mt of DICE 2005, which is not significant. Taking into account the 911 Mt difference between declared emissions in 2004 and 2005, this difference is 1%
• the Land Use Change & Forestry (LUCF) component varies considerably
as a global aggregate and is highly volatile at the regional level
• there is no consistent basis for selecting all or part of industrial,
bunkering and LUCF emissions, for classifying any arbitrary trimming
amount into regional and commodity aggregates, or for choosing which
components are fixed and variable with commodity activity
• the variance in CO2 gas emissions is small when compared to the
variance in the global warming impact of the other five non-CO2
greenhouse gases, which is dealt with through exogenous trends in
radiative forcing (as described above)
• the DICE model is calibrated in decades. It is difficult to represent any
level of emissions as unambiguously accurate with intra-decade
emissions increasing considerably
• in a policy context, the main purpose of modelling is to determine policy
feasibility and analyse the differences between sensitivity cases. In
achieving this purpose it is essential to maintain a strong methodology
and recognise that all assumptions have variability. This is far preferable
to meeting particular numbers by modifying equations with extra trim
factor assumptions (Occam's razor was discussed earlier).

It may be concluded from the analysis in this section that there is an element of "modeller's art" in incorporating the global warming potential of CO2 and non-CO2 gas emissions into geophysical models. This means that element-by-element comparisons are not always straightforward and validity needs to be established with outputs rather than inputs. For many decades William Nordhaus has demonstrated that the results from DICE are consistent with those from researchers with other approaches and with the linear development of his model over time.

Given the importance of precedence in evidence based policy research, the approach adopted in this dissertation is to retain consistency with DICE's
geophysical model while developing a new benchmarking approach to the
economic model. This retains comparability with DICE. An overall check on
model “output” emissions is undertaken in Section 5.3 of this Chapter,
Comparison of Sceptre with physical modelling. Following GTAP's release of
non-CO2 data rationalised to the GTAP7 database it may be possible to move
forward to refine the whole of the geophysical model while keeping the new
economic framework constant.

Optimisation variables
The minimum set of optimisation variables comprises the input variables for
the acyclic topological structure of the economic model. This can be
investigated in two ways. The first is by using the topological processor
developed in this research for serial processing of the objective function in the
Nordhaus DICE model. The second method is to manually use Mathematica's
Solve function to determine the input variables. The difference is that the topological processor uses the initial processing order of the equation set to
make choices of input variables. Mathematica's Solve can be iteratively
customised to take advantage of consistent patterns in the MRIO model.

Objective function
The traditional CGE welfare function has been discussed in relation to discount
rate (above). The consumer welfare function was:

$$u(c) = \frac{c^{1-\alpha}}{1-\alpha}$$

where:
$u$ is welfare utility
$c$ is per capita consumption
$\alpha$ is the marginal elasticity of consumption utility

While it is possible to implement this utility function as Sceptre's objective
function, this has not been done for two reasons. The first reason is identified
in the previous discussion of discount rate. The second reason is that people in
various regions of the world have very different marginal elasticities of utility
for their next dollar of consumption.

Nor does Sceptre use consumption as in the traditional CGE welfare function. While the utility function is suitable for a partial equilibrium study of a single region, in a general equilibrium the function is weighted toward large economies. For example, a 1% expansion of the American economy would be valued at many thousands of times a 1% expansion of an African economy.

In Sceptre, the objective function is simply net present value of the sum of the
annual per capita economic expansion of each country. The economic
expansion factor for a country is the multiplier of the GTAP 2004 data
consumption vector for the country divided by the index of population for that
year. Discounting expansion per capita means that all regions in the model are
evaluated in an unbiased way. For example, a 1% increase in the per capita
welfare of an American or an Australian has the same merit as a 1% increase in
the welfare of, say, a Chinese, Indian or African person.
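A minimal sketch of this objective in Mathematica follows, assuming hypothetical accessor functions gamma[r, t] for the expansion factor of region r's GTAP 2004 consumption vector, pop[r, t] for the population index and a discount rate rho; none of these names come from the Sceptre source.

  (* Net present value of annual per-capita economic expansion,
     summed over all regions and periods. *)
  objective[rho_, regions_List, tmax_] :=
   Sum[(gamma[r, t]/pop[r, t]) (1 + rho)^-t, {r, regions}, {t, 1, tmax}]

Normalising by population before discounting is what gives a 1% per capita expansion the same weight in every region.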

The economic expansion factor for an economy applies to the whole
consumption vector and therefore implies a constant mix or bundle of
commodities is consumed over time. In other words, the amount of
manufacturing, food, services and emissions (or substituted emissions
abatement) remains in a constant proportion. While this is patently unrealistic
over long periods such as 100 years, the assumption serves well in evaluating
policies ceteris paribus.ix

In the conclusion to his analysis of intertemporal modelling, ten Raa writes (2005, Chapter 13: Dynamic Modelling, pp. 174-5):

Prescribing desired proportions on the household stock we could maximize its level, subject to material balances and a labor constraint. The imposition of desired proportions is troublesome in a dynamic context. Food may not be a substitute for a car, but a car now is certainly a substitute for a car tomorrow. The fixed proportions are therefore dropped in intertemporal settings. In fact, it is standard to go to the other extreme, to model current and future consumption as perfect substitutes by entering them into a linear function, where the coefficients are the discount factors.

To avoid specialization in resource-extensive commodities, a non-linear contemporaneous utility function is used. Commodities will not be wasted when reasonable utility functions are maximized. Consequently, the material balances will be binding and activity levels will depend on the final demand path of the economy …. Capacities are fully utilized …. [which is] an easy way to raise the standard of living.

In designing Sceptre, this research investigated various nonlinear contemporaneous utility functions, for example of the forms:

$$a e^{-k\gamma}, \qquad a - \frac{b}{\gamma} \qquad\text{and}\qquad a\,\gamma^{1-b}$$

where $\gamma$ is the economic expansion factor.

These alternative functions were found to extend the time in locating an equilibrium solution and not to provide any noticeable improvement in
avoiding specialisation. As discussed in regard to discount rate (above), a
decision was made to limit the number of additional assumptions and apply
simple discounting of the normalised economic expansion factors.

Augmented consumption, investment and U & V matrices

Augmentation for emissions

The ensuing discussion refers to the following illustration of the emissions relationships between the V and U tables:

Illustration 24: Mapping of Production and Use of Emissions

Use matrix rows

Each of the matrices U and V, and the vectors for consumption and investment
are derived from GTAP data as described in above and in Appendix 7 Mining
the GTAP database.

The matrices U and V are square matrices with the rows and columns equal to
the number of aggregated commodities. In this policy research, there are three
commodities {food, mnfc, services}, which form 3x3 matrices for each of the three regions {NAFTA, EU25, ROW}. The consumption and investment vectors for each of these regions are single column vectors of the three commodities {food, mnfc, services}.

As discussed above, the U and V matrices are augmented with two rows and
two columns, for amelioration and abatement services and for emission
permits trading:

gaml   CO2 amelioration / abatement services
gtra   CO2 emission permits

The difference between the terms amelioration and abatement is merely one of form rather than function. Abatement of emissions in power generation might be a particular service such as carbon sequestration. In contrast, amelioration achieves a similar effect by replacing the facility, for example retrofitting a coal-fired generation plant with a nuclear boiler.

Following the augmentation of U and V by the creation of rows and columns for {gaml, gtra}, the commodity set becomes {food, mnfc, services, gaml, gtra}. The last two rows of the U matrix have
formulae in the cells, not unique data. Emissions are read from the GTAP
industrial emissions database for each production unit in each country. A
production unit is a column in the U matrix (and in the transposed V matrix).
Emissions are treated as a new industrial input requirement for permits rather
than a production output of a “bad” from the process.

Emissions trading row

In each column of the U matrix, the final row for emissions permits, gtra, has the formula of emissions of the production unit (converted to carbon instead of CO2 because Nordhaus' climate equations are based on carbon emissions) multiplied by $(1 - \mu)$, where $\mu$ is the emissions control rate, which is the proportion of emissions physically ameliorated or abated.

Amelioration or abatement row

Correspondingly, the penultimate row of the U matrix, representing amelioration and abatement gaml, is emissions multiplied by the emissions control rate $\mu$.
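Taken together, the two augmented rows of a Use-matrix column can be sketched as below. It is assumed that e is the production unit's CO2 emissions from the GTAP emissions database and mu its emissions control rate; the 12/44 factor is the standard molecular-weight conversion from CO2 to carbon, reflecting the carbon basis of the Nordhaus equations.

  (* Returns {gaml row entry, gtra row entry} for one column of U. *)
  augmentedRows[e_, mu_] :=
   With[{carbon = e*12/44},
    {mu*carbon,           (* penultimate row: amelioration/abatement *)
     (1 - mu)*carbon}]    (* final row: emission permits required *)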

Additional rows are added to the consumption vector in the same way. The only difference is that the multipliers in the penultimate and final rows are $\mu_a$ and $(1 - \mu_a)$ respectively, where $\mu_a$ is the proportion ameliorated or abated for the consumption vector.

Use matrix columns

In Sceptre, it is assumed that the production of emissions permits and amelioration or abatement services requires neither material inputs nor labour. However, the production of each of these commodities does consume an equivalent volume of emissions. Should national accounts come to address labour in this sector, this assumption can be reassessed.

Amelioration or abatement column

The penultimate column in the U matrix uses resources to produce the amelioration and abatement commodity gaml. There is a tangible cost to producing amelioration or abatement services, which is the backstop technology cost per tonne of carbon multiplied by the number of tonnes of carbon.

The resources to produce amelioration and abatement services need to be purchased from the commodity sectors {food, mnfc, services}. Of course, the resources could all be purchased from, say, the services sector. This would be an oversimplification because there could be considerable manufacturing mnfc and conceivably even food content. An insight on how to proceed is provided by von Neumann's assumption that investment can be represented by commodities both directly and through barter at fair value. A sound working assumption is to purchase the same proportion of resources from a commodity sector as that sector consumes of the amelioration or abatement service.

In Nordhaus' DICE model (Nordhaus 2008, pp. 41-3, 52 & 77-9), the adjusted cost of backstop technology per tonne of carbon $\theta$ is:

$$\theta = \frac{pback}{\theta_2} \cdot \frac{backrat - 1 + e^{-gback\,(t-1)}}{backrat}$$

where:
$\theta_2$ is the abatement cost exponent (2.8)
$pback$ is the maximum marginal backstop cost per tonne of carbon (1.7)
$backrat$ is the ratio of backstop technology final cost / initial cost (2)
$gback$ is the rate of decline of backstop technology cost per decade (0.05)

The pback value of 1.7 means that the last unit of amelioration or abatement in the most value-adding industries, such as jet fuel or plastics, has a cost of US$1,700 in 2005 dollars.
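The profile of this adjusted cost is easy to tabulate. The sketch below uses the parameter values quoted above (theta2 = 2.8, pback = 1.7, backrat = 2, gback = 0.05 per decade), with t counting decades; the function name is hypothetical.

  (* Adjusted backstop cost per tonne of carbon in decade t. *)
  theta[t_] :=
   With[{theta2 = 2.8, pback = 1.7, backrat = 2, gback = 0.05},
    (pback/theta2) (backrat - 1 + Exp[-gback (t - 1)])/backrat]

  Table[theta[t], {t, 1, 13}]   (* declining cost profile over 13 decades *)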

The profile of average backstop technology cost with time, assuming full
amelioration or abatement of emissions is shown in the following illustration:

Illustration 25: Backstop Technology Cost

However, the amelioration or abatement cost increases with the proportion of emissions controlled $\mu$:

$$\text{abatement cost} = \theta \, \pi[t] \, \mu[t]^{\theta_2 - 1}$$

where:
$\pi[t]$ is the ratio of abatement cost with participation $\varphi < 1$ divided by cost with $\varphi = 1$

The following illustration shows how abatement cost varies with time and the control rate $\mu$:

Illustration 26: Variation of Abatement Cost with time and controlled emissions

It may be seen that the average price of abatement can be as low as a few US
dollars per tonne of carbon. This low level of cost applies to the “low hanging
fruit” of amelioration and abatement opportunities. However, low costs have
also been suggested for large-scale geoengineering abatement, which is nowadays known as "climate engineering". While geoengineering technology providing such a low cost does not yet exist, it may include, for example, large shades in space to block the sun's radiation, spraying seawater into clouds to make them reflective, seeding clouds with aerosols that reflect shortwave radiation, and the air-capture of carbon dioxide.
Charles (2009) estimates the cost of abating emissions from coal-fired power
stations as being about US$60/tonne for carbon capture and storage, albeit
still a hypothetical technology.

An emissions control rate $\mu$ is calculated for each commodity in each region. This is used for an industry specific abatement cost, since each regional commodity will have different sets of amelioration and abatement opportunities. However, there is also an argument to use a regional average $\mu$ for pan-regional amelioration or abatement.

Emissions trading column

The final column in the U matrix has the resource purchases for the production of emissions permits gtra. All these cells are zero since the Government has no cost in issuing emissions permits.

End usage vectors

In the same way as the U and V matrices were augmented, additional rows are also appended to the consumption, investment and net exports vectors. However, GTAP does not provide emissions data for investment and net exports. In the investment vector, the cells are simply zero.

In the net exports vector, synthetic entries are made to facilitate international
emissions trading. These synthetic entries need to be small in order to not
disturb the material balance and initially sum to zero for the country.
Therefore, in examining trade flows in emission permits and amelioration and
abatement services, the interpretation of emissions permits traded by each
country will need to be divided by the vector of synthetic emissions used to
seed the international trading.

Make matrix rows

The illustration of V and U above shows sales from the $V^T$ matrix. The GTAP $V^T$ matrix is diagonal. Augmentation commodities become further diagonal elements for amelioration or abatement {gaml} and emission permit sales {gtra}. Sales of each of these commodities are the sum of the respective commodity demand including industrial uses, investment, consumption and net exports.

The proceeds from the sale of emission permits need to be returned to consumers in one way or another. This may be in the form of reductions in other taxes, as investigated in the discussion of environmental taxes in Chapter 3 Political Economy of the Anglo-American world view of climate change.

Sceptre has been structured to evaluate policy responses to various constraints on atmospheric temperature rise. The emissions control rate $\mu$ begins to rise when the atmospheric temperature rise constraint becomes binding. The backstop technology cost then generates a price for amelioration and abatement. It can be expected that emission permits will trade at the marginal cost of amelioration and abatement.

Once the government settles on a policy to limit atmospheric temperature rise,
the government may then introduce quantity limits to create a profile of
scarcity and stimulate a price on emission permits. Sceptre can also be
operated in this mode, where a resource limit is placed on emission permits so
a price is generated.

Modification of U matrix for economic damage

Nordhaus' DICE model uses a Cobb-Douglas production function, where the economic output of the world without climate impacts is adjusted by applying multipliers for economic damage and total factor productivity. In this policy research, the required damage can be implemented either in the U matrix or in the MRIO equations. The former is preferred following trials of both methods.

In U, V terms, the impaired output following economic damage is $(V^T - U)\,\Omega_0$, where $\Omega_0$ is the damage multiplier. If this economic impairment is represented by adjusting the original U matrix $U_0$ to a new U matrix $U_{observed}$, then:

$$V^T - U_{observed} = (V^T - U_0)\,\Omega_0 \qquad\text{or}\qquad \frac{V^T - U_{observed}}{\Omega_0} = V^T - U_0$$

Given a new level of damages $dam$, the revised U matrix is $U'$:

$$V^T - U' = (V^T - U_0)\,dam$$

Substituting the equations above:

$$U' = V^T - \frac{(V^T - U_{observed})}{\Omega_0}\,dam$$

So the revised $U'$ is given by:

$$U' = V^T - (V^T - U_{observed}) \cdot \frac{dam}{\Omega_0}$$

Modification of U matrix for total factor productivity

Total factor productivity $al$ is exogenously calculated and introduced into the U matrix in the same way as economic damage (above):

$$U' = V^T - (V^T - U_{observed}) \cdot al$$

where $al$ is the index of total factor productivity.

Combining the effect of total factor productivity with economic damage, the resulting matrix is:

$$U' = V^T - (V^T - U_{observed}) \cdot \frac{dam}{\Omega_0} \cdot al$$

Neither economic damages nor total factor productivity benefits are applied to
amelioration or abatement services and emissions permits.

Nordhaus' DICE model assumes increasing energy efficiency. In contrast, Sceptre's approach is that energy efficiency is part of amelioration and
abatement. This is because an energy efficiency multiplier would double count.
For example, one unit of industry activity produces an increased amount of
output due to the rise in Total Factor Productivity. For the same level of
industry output, a lower level of industry activity is required. Therefore a lower
level of energy is required and less emissions are produced. If, in addition,
energy efficiency is introduced, the same level of output would require
significantly lower energy, reduced by both Total Factor Productivity and
energy efficiency.

Labour factor productivity

Total Factor Productivity (or Multifactor Productivity) is the residual growth in gross value added after accounting for changes in factors such as labour and capital.

Labour productivity is the single factor or partial productivity with respect to labour. It is the change in gross value added divided by labour hours, $(V^T - U)\,/\,\text{labour hours}$.

The illustration below shows that labour factor productivity in America and Australia has grown by about 2%-3% pa over the last three decades (RBA 2009). This is equivalent to about 32% and 36% per decade respectively.

Illustration 27: Labour productivity (Source: RBA 2009)

Spatially disaggregated labour productivity varies considerably by commodity and region. In addition, there will be different relationships between Labour
Productivity and Total Factor Productivity across different commodities and
regions.

However, Hicks (1932) and subsequently Solow (1957) suggest that production
functions be characterised with a constant relationship between the factors
and that Total Factor Productivity is independent of the factors. It is assumed
that the marginal rates of substitution of the factors remains constant and the
proportional balance of labour and other factors in a production function
remains unchanged notwithstanding an increase in economic output
occasioned by technological progress. This is discussed in Appendix 3 Input
Output Tables, in regard to Solow's variable for technological change A .

In addition to the assumption that technological change is exogenous and independent, Solow also assumes constant returns to scale.

Sceptre similarly employs a Leontief-type $V^T - U$ tableau with constant returns to scale, a fixed factor relationship to labour and an exogenous and independent Total Factor Productivity $A$.

As a consequence of these three assumptions, labour per unit of industry activity $L/s$ remains unchanged when total output $A\,(V^T - U)\,s$ is increased through an improvement in Total Factor Productivity $A$. The improvement in Total Factor Productivity leads to an implicit improvement in Labour Factor Productivity that is proportional to the growth in Total Factor Productivity, as shown in the following example.

A2 V T −U  s 2 A1 V T −U  s 1

Growth in the Partial L2 L1
=
Productivity of Labour A1 V T −U  s 1
L1
A2 s 2 A s
− 1 1
L2 L1
=
A1 s1
L1
L L
[ 1 / 2 ] A2 − A1
s1 s2
=
A1
L1 L
With constant returns to scale   =  2 ,
s1 s2
Growth in the Partial A2 − A1
=
Productivity of Labour A1
= Growth in Total
Factor Productivity

The increasing dominance of services in developed economies supports the assumption of constant returns to scale and a proportional improvement in Labour Productivity with Total Factor Productivity.

The Economist defines a service as "anything sold in trade that cannot be dropped on your foot." In making use of this rough but effective definition, references to the services and manufacturing sectors in the following discussion are generic rather than specific to Sceptre's commodities.

Services include every activity except agriculture, fishing, manufacturing, construction and mining. By this definition, the services sector is by far the largest sector in the world economy. As shown in the illustration below, changes in technology have led to approximately 40% of all jobs globally being in service related areas (Morris 2007). This rises to 80% in advanced Western economies. The service sector is now twice as large as the manufacturing sector.

Illustration 28: It's a services world (Source: Morris 2007)

There are three considerations in comparing Labour Productivity to Total Factor Productivity. Firstly, Total Factor Productivity improvement is primarily
due to rents on capital, not on labour (ten Raa & Mohnen 2008). Secondly,
while manufacturing labour productivity has improved significantly over recent
years, services productivity has not (ten Raa & Wolff 2001). A large part of
manufacturing labour productivity improvement has been due to outsourcing
those tasks where it is difficult to improve labour productivity. For example,
outsourcing transport, cleaning, IT and professional services.

Thirdly, improvements in labour productivity have derived from the manufacturing sector while the wage benefits have been enjoyed by both the manufacturing and services sectors (Baumol 1967). Productivity growth in
manufacturing and services sectors (Baumol 1967). Productivity growth in
services is very difficult to achieve, for example the same student to lecturer
ratios and the same time for a cleaner to vacuum a carpet. A new academic
discipline called Service Sciences has arisen to address the intransigence of
services productivity by applying new multidisciplinary approaches across IT
architectures, engineering systems and behavioural psychology (Spohrer et al.
2007; Chesbrough & Spohrer 2006; Morris 2007).

Notwithstanding the differences in labour productivity between the manufacturing and service sectors, competitive markets for labour are heavily influenced by the sector with the highest capacity to pay. As manufacturing has the highest marginal productivity, it often sets wages. The increase of wages in the services sector without a corresponding increase in productivity is known as Baumol's disease (ten Raa & Wolff 2001).

Intertemporal MRIO symbolic model with carbon trading & abatement

Basic economic model

The flow equations for each commodity in each country in each time period are
the aggregate of the following items, which sum to zero:

• the consumption vector a multiplied by the benchmarking efficiency expansion factor $\gamma$. Consumption is the sum of consumer and government consumption
• the net industrial consumption vector (which will be negative numbers) multiplied by the optimal activity vector s. This is equal to the Use matrix less the transpose of the Make matrix, $(U - V^T)$. It is also equal to the negative of Gross National Income (GNI)
• net exports multiplied by the optimal export vector z. As net exports are used, trade between countries of the same aggregated region is inherently eliminated
• investment multiplied by the investment activity vector i
• a bias created by adjusting net exports to world prices, representing net export and import taxes

The illustration below shows this in a linear programming schema:

Illustration 29: MRIO model linear programming schema. The schema's rows are the five commodities (food c1, manufacturing c2, services c3, CO2 permits c4, CO2 amelioration c5), each row summing to zero, plus a labour hours row constrained to be less than or equal to N. Its columns are the consumption expansion $\gamma$ (with consumption entries a1 to a5), the industry activities s1 to s5 (through which $U - V^T$ enters), the export activities z1 to z5, the investment activities i1 to i5, and the bias column.

It may be seen in the above illustration that the labour used by industry is
constrained to be less than the labour endowment, N. The labour endowment
is usually calculated as the sum of the labour hours divided by one minus the
unemployment rate. The unemployment rate assumption has been discussed
above. When industry activities vary, labour hours are redistributed across the
industries.

If the model was static then a capital constraint would be present with a
limiting endowment M. However, capital is dynamically calculated in an
intertemporal model.

Constraints
While the objective function and its relationship to discount factor has been
extensively addressed above, the heart of a benchmarking model is in the
constraints. The theorem of complementary slackness and the main theory of
linear programming were discussed in Chapter 4 Economic models for climate
change policy analysis. These constraints make the commodity and factor
markets through the Dual formulation. The Lagrange multipliers are the prices
of the constraint resources.

In Sceptre, a nonlinear constraint schema is constructed by specifying constraints at a high level of abstraction, and then substituting constraint variables with symbolic solutions to the combined MRIO and climate feedback model. This results in the constraints being expressed in equations comprising only the most fundamental input variables to the MRIO model.

For example, Appendix 6 Benchmarking with linear programming shows how constraint schemas are designed for multi-regional input output models. In the single period model described there, the linear equations for material balance are relatively simple and can be analytically expressed. When the model becomes multi-period intertemporal, there is a rolling forward of single period models. Each successive phase of the model comprises all the symbolic equations of the antecedent models. The process relies on powerful symbolic processing in Mathematica and results in extremely long, complicated and highly nonlinear equations.

An advantage of this approach is that, at the abstract level of description, an intertemporal MRIO climate model has a relatively small number of inequality and equality constraints.

Inequality Constraints

MRIO inequality constraints

• Sales/Asset ratio: net investment in the previous period × sales to asset ratio ≥ V · sector activity. As discussed in the accounting stocks and flows model (above), the material balance is brought into the optimisation model through the Sales to Assets assumption. Sales in the current period, represented by the V matrix multiplied by the activity vector, must be less than or equal to the assets in the previous period multiplied by the Sales to Assets ratio. In dynamic input-output modelling this is known as "closing the model for investment"; this constraint also forms part of "closing the model for trade". The Sales to Asset constraint is very important and a major part of Sceptre's innovation, because it substitutes a dynamic material balance for ten Raa's static material balance. The Main Theory of Linear Programming is therefore able to form a series of dynamic markets that maximise outputs while minimising inputs. Furthermore, using Sales to Assets ratios is a stable approach because these ratios tend to be stable over medium term time frames; the ratios have therefore not been changed over time. (A sketch of this constraint and the labour constraint follows the table.)

• Final period investment: current period investment ≥ previous period investment. Accumulated investment cannot be cannibalised for consumption (except through depreciation). As current period production is divided between investment, consumption and net exports, the simplest assumption to achieve this anti-cannibalisation outcome is to require each industry's investment to be maintained in the final period.

• Country deficit limit: exim · export activity ≥ deficit. As discussed above, net exports multiplied by the activity vector must be less than or equal to the country's actual GTAP 2004 deficit. The deficit is a negative number. This constraint is part of what is known as "closing the model for trade" in input-output modelling.

• Labour constraint: labour endowment ≥ vector of labour in sector · vector of activity of sector. Each country's labour endowment is assumed to rise with its population growth. The labour used in a country is the sum of the labour used in each sector multiplied by the activity of the sector, and must be less than the country's labour endowment. All countries are assumed to have 6.5% unemployment in 2004, such that the initial labour endowment of a country is the total labour used in 2004 divided by (1 − unemployment rate).

• Purchasing power constraint: vector of labour in sector · vector of activity of sector ≥ labour employed × economic expansion. As the labour force purchases the commodities that constitute final demand, the labour used must be greater than or equal to the initial labour employed in a country multiplied by the country's economic expansion. This constraint is equivalent to "closing the model for households" in input-output analysis, where employment and consumption are linked.

• Non-negativity: investment ≥ 0, sector activity ≥ 0, economic expansion ≥ 0. Investment, sector activity and economic expansion must all be non-negative.

• Control rate bounds: 1 ≥ μ ≥ 0 and 1 ≥ μa ≥ 0. The proportion of substitution of amelioration or abatement services for emissions permits must be between 0 and 1, for both industry (μ) and consumers (μa).

• Limits on international trading of emissions permits: may be introduced here but have not been applied in this policy research.

• Limits on national emissions: quantitative limits on national emissions trading may be introduced here but have not been applied in this policy research.
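As promised in the table, the following minimal sketch shows how two of these constraints might be written for one region in period t. Every name is hypothetical: v is the Make matrix, s[t] the activity vector, close[t - 1] the prior period's closing written-down asset values by industry, s2a the sales-to-assets ratio, lab the per-sector labour vector and endow[t] the labour endowment.

  (* Sales in period t cannot exceed prior assets times the
     sales-to-assets ratio, industry by industry. *)
  salesToAssets[t_] := Thread[Transpose[v].s[t] <= s2a*close[t - 1]];

  (* Total labour used cannot exceed the endowment. *)
  labourLimit[t_] := lab.s[t] <= endow[t];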

Climate inequality constraints

Climate constraints will reflect the policy feasibility being investigated. For
example, in limiting the temperature rise to 2°C in 100 years time:

• Temperature ceiling: 2°C ≥ temperature rise at period 10. The temperature rise in 100 years cannot exceed 2°C.

• Following period 10: previous period temperature rise ≥ current period temperature rise. Following the maximum temperature rise, the temperature rise must remain stable or decline.

• Following period 10: previous period emissions ≥ current period emissions. If emissions are not controlled in addition to temperature, the end effect of the model will be to accelerate emissions. Therefore, following the maximum temperature rise, industrial emissions must remain stable or decline.

Equality constraints

There are two types of equality constraints. The first are called boundary
conditions such as x = 4 , which is a light imposition on optimisation and
normally eliminated by the in-built pre-solver. However, a second type of
equality constraint heavily encumbers the solution. These are equalities of
endogenous variables that lead to internal feedback loops.

• Damage feedback: the damage function active in the current period = the damage function resulting from the period. Economic damage increases resource usage and increases emissions. Increased emissions cause increased temperature rise and increased economic damage; a feedback loop therefore exists. The initial economic damage needs to be settled in general equilibrium with the resulting economic damage, as they are the same number. This is how the nonlinear climate equations enter the intertemporal MRIO model.

Optimisation
A number of factors need to be considered in nonlinear optimisation. Prime amongst these are the trade-offs between global and local minimisation, methods of solution, and accuracy and iterations.

Global and local optimisation

Global optimisers seek to find the best solutions in the presence of saddle-
points, where two or more optima may exist. Nordhaus (2008, p.45) notes that
the DICE model uses the local optimiser CONOPT. Experience with the DICE
model over many decades has not indicated any issue arising from saddle-
points.

The Mathematica package has both global and local optimisers. Use of these
packages in this current policy research confirms the robust nature of the
optimisation and that faster local optimisers can be confidently used.

Methods of solution

As mentioned above, Nordhaus' DICE model employs the CONOPT solver, which linearises equations and solves the approximated model quickly with a linear program.

Mathematica's solver FindMinimum provides many different methods of solution, but for nonlinear constraints only interior point is available. In Chapter 4 Economic models for climate change policy analysis, the survey of programming environments found that the interior point algorithm is based on the COIN Project's IPOPT solver, which is regarded as a very fast nonlinear solver. Nevertheless, it is significantly slower than CONOPT. The key advantage of IPOPT over CONOPT is that the interior point solution finds a full nonlinear equilibrium rather than an approximated linear solution.
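A sketch of how such a search might be invoked follows, assuming obj holds the negated welfare measure (FindMinimum minimises), cons the symbolic constraint list and vars the optimisation variables with starting points. Method and MaxIterations are standard FindMinimum options; the particular settings are illustrative.

  (* Interior point search for the constrained equilibrium. *)
  FindMinimum[{obj, cons}, vars,
   Method -> "InteriorPoint", MaxIterations -> 2000]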

An in-depth discussion of methods of solution, including a detailed outline of global and local optimisation and interior point, is provided as Appendix 5 Acyclic solver for unconstrained optimisation.

Accuracy and iterations

The difficulty of finding equilibrium in the presence of nonlinear constraints was discussed above. For example, a base case projection of just 13 periods (130 years) involves 926 nonlinear constraints, 429 independent optimisation variables and 1089 unique variables and parameters in total.

Locating an equilibrium within a reasonable time frame involves many
computing issues such as the internal working precision; the desired accuracy
in locating variables and satisfying the objective function; and the number of
hours or days of computing time involved. In this research it has been found
that about 2,000 iterations is a convenient control parameter because the
calculation of an equilibrium for 13 periods takes about 15 hours and the
outcome has a good degree of accuracy. The number of iterations is increased
to 4,000 or 6,000 if additional accuracy is required.

Constraint slacks
In cases where Mathematica's FindMinimum function cannot return an optimisation result accurate to, say, 6 decimal places, it returns the best solution found together with a message indicating residuals. For example:x

Illustration 30: FindMinimum return message when constraints not fully satisfied

In this example a Karush-Kuhn-Tucker (KKT) residual of 0.000473744 means that the sum of the errors is $4.7 \times 10^{-4}$ and has not converged to the solver's default accuracy of $4.8 \times 10^{-6}$. However, it may be noted that the accuracy is
still excellent and perhaps would be acceptable in other circumstances. The
reason that the criterion for solver completion is manually set with iterations is
that with some 1,000 constraints, not all may be decisively satisfied. A model's
best fit needs to be discovered by diligent residual minimisation, rather than
arbitrarily reducing the requested accuracy to achieve a more timely solution.

The source of the inaccuracy may be inspected by printing out the unsatisfied constraints having non-zero slacks and observing the magnitude of the slacks that are unsatisfied. It is assumed in this model that slacks greater than $1 \times 10^{-4}$ merit investigation. In the Base Case model with 4,000 iterations, there are 13 unsatisfied constraints but none are material, as shown by the output slacks:

Illustration 31: Evaluation of unsatisfied constraints

With fewer iterations, there is more chance of unsatisfied slacks. For example, a message of the following form is produced with 2,000 iterations:

Illustration 32: Materiality of unsatisfied constraints

It may be noted in the above illustration that the slack is very small, and even more so when considered as a proportion of the magnitude of the variables. In the last line, the slack of $-4.16 \times 10^{-4}$ results from the difference of very large numbers having magnitudes of $10^6$ and $10^7$.
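The inspection described above can be sketched as follows, assuming cons is a list of inequalities of the form lhs >= rhs and sol is the rule list returned by FindMinimum.

  (* Slack of each constraint at the solution; negative means violated. *)
  slacks = (cons /. HoldPattern[a_ >= b_] :> a - b) /. sol;

  (* Flag violations beyond the 10^-4 materiality tolerance. *)
  Select[Transpose[{cons, slacks}], #[[2]] < -10.^-4 &]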

Mathematica's linear optimiser DualLinearProgramming conveniently provides the Lagrange multipliers and slacks. In a nonlinear context, KKT multipliers are equivalent to Lagrange multipliers. Unfortunately, at this stage, Mathematica's nonlinear optimiser FindMinimum does not expose its Karush-Kuhn-Tucker multipliers even though it uses these multipliers and calculates a KKT residual as shown in the message above.xi

Data and graphical output

Along with Mathematica's powerful symbolic processing capability, its other compelling advantage is a rich set of graphical functions that can be used for agile development and instant communication. Overall, Sceptre within Mathematica provides a high productivity development platform for policy investigation.

5.3 Comparison of Sceptre with physical modelling
Chapter 2 Political Economy of the Anglo-American world view of climate
change introduced the concept of a fixed tranche of atmospheric emissions
capacity for a 2°C temperature rise.

As shown in the table below, the results of the Sceptre model developed in this research compare favourably with physical climate change modelling by Allen et al. (2009) and Meinshausen et al. (2009) using a linear extrapolation of emissions.

Gigatonnes of CO2 in period 2000-2050 for 2°C temperature rise

Allen et al.:        1,550-1,990 from 2000-50; 2,055 in total from 2000
Meinshausen et al.:  1,000 for 25% probability; same as Allen if non-CO2 gases are included
Sceptre Model:       1,409 from 2004-2054; 2,194 from 2004-2134

For a 50% probability of exceeding 2°C, Meinshausen's tranche of CO2 emissions rises to 1,440 Gt. The authors note that including non-CO2 gases in their defining tranche of 1,000 Gt CO2 (for a 25% probability of exceeding 2°C) provides the same result as Allen et al.

The Sceptre model is consistent with both sets of results. As discussed earlier
in this Chapter, minor differences are expected because the geophysical
framework deals with non-CO2 gas emissions through a combination of fixed
emissions and trends in radiative forcing.xii

5.4 Comparison of Sceptre and DICE

The climate-unconstrained case for both Sceptrexiii and Nordhaus' DICE modelxiv is provided in the following sets of illustrations "Comparison of Sceptre and DICE" (below). Climate-unconstrained "business as normal" means that economic expansion is not constrained by global warming factors, for example limits on emissions or temperature rise. The illustrations for Sceptre and DICE are not symmetric because DICE does not produce the same spatial and industry information as Sceptre.

Economic expansion

It is immediately apparent in comparing the first illustrations for economic expansion that Sceptre and DICE are fundamentally different models.

[Side-by-side illustrations: economic expansion, Sceptre Normal Case vs DICE Business as Usual]

After 6 decades, Sceptre's economic expansion saturates at 1.1x for the European Union 25 country group (EU25), 1.38x for the NAFTA trade zone and 1.46x for the Rest of the World (ROW). In contrast, DICE's economic expansion is 4.6x in the same 6 decade period, and it continues to rise strongly to 11.3x by decade 13. These results are presented in terms of expansions from the current situation, which is consistent with the principles of benchmarking discussed in Chapter 4 Economic Models for Climate Change Policy Analysis.

Prima facie there is quite a dramatic contrast between Sceptre and DICE. This is especially so considering that these projections are in real dollars rather than nominal dollars that take account of inflation. One would not intuitively expect real income to increase in a J-curve.

One reason for the startling difference between DICE and Sceptre is to be
located in the difference between unconstrained and constrained models. In
DICE, the economic model underpins the objective function rather than the
constraints. In contrast, the economic model and climate damage feedback
loop in Sceptre appears within the constraints which is computationally a
much more expensive situation.

Most climate-economic modellers such as Garnaut are happy with a 100-year
time horizon. Indeed, Nordhaus notes that it would be unwise to rely on more
than the first 50 years. However, Nordhaus extends DICE to 60 decades (600
years) to show how the climate-economic ecosystem responds in the long term.
Operating experience with Sceptre has shown that a time-frame of at least 13
decades is required so performance up to 10 decades is unaffected by end
effects.

Nordhaus implements DICE's "business as usual" model using inequality constraints. However, the use of constraints is not essential because the paradigm is objective function-centric. Projecting 13 decades directly from the equations is straightforward. However, the end effects in DICE are extraordinarily significant, requiring projection over 25 decades for a clear observation of 13 decades. As there is only one regional economy for the whole globe, there are only 25 region-periods to project. In the business as usual case, there is no binding constraint so there are only 25 constraint-region-periods and 50 optimisation variables in total.xv

Constraint-centric models like Sceptre are significantly more complex. For example, the spatial and commodity disaggregation in Sceptre means that there are three regions, each with a five-by-five matrix of commodities and
industries. Thus, for 13 periods there are 975 region-periods to project. In fact,
each matrix of commodities and industries is in reality more complicated
because the Use and Make matrices together provide different production
technologies for each of the commodities (particularly as some of the Use rows
are calculated). This has already been examined (above) so consideration of
the additional information in these detailed Leontief production functions will
not be pursued in this brief comparison of model complexity in terms of
constraints, regions and periods.

Sceptre has 919 symbolic constraints compared to zero in DICE's business as
usual case. Each of Sceptre's constraints is embedded with up to the entire
975 region-periods. This provides 68,925 constraint-region-periods. Some of
Sceptre's symbolic constraints are very long and use all of the 429 optimisation
variables. This is because the constraints progressively compound all the
foregoing periods of regional performance.

Although both models share Nordhaus' scientific-economic equations, it may be seen that Sceptre is optimising in a different way to DICE. Sceptre is constrained optimisation compared to DICE's unconstrained "business as usual"
case. In Sceptre, the consumption expansion in each region is constrained by
the natural endowments of labour and capital, although capital is
endogenously calculated. In addition, consumption is constrained by three
other important factors. These are the purchasing power of labour, a limit on
trade deficits and by the preference given to investment.

In DICE, none of these constraints apply. The most important of all is DICE's
preference for consumption over investment, which arises because
consumption is maximised with respect to capital. This is discussed in
Appendix 4 Nordhaus DICE model and other aspects of DICE performance are
discussed in Appendix 5 Acyclic solver for unconstrained optimisation .

Amelioration and abatement

The DICE unconstrained model is geared to high consumption, high emissions
and high emissions control. While economic projections are “apples and
oranges” for the reasons highlighted above, DICE and Sceptre have similar
geophysics outcomes due to DICE's high emissions control. For example, DICE
maximises economic expansion with 33% participation in emissions control by
decade 6, rising to 63% by decade 13.

Illustration 33: DICE business as usual emissions control rate

In contrast, Sceptre shows 5.5% amelioration and abatement in food, 4% in
manufacturing and 7% in services. Sceptre's emissions control rate and price
are shown in the following illustrations:

[Side-by-side illustrations: emissions control rate and amelioration and abatement price, Sceptre Normal Case vs DICE Business as Usual]

Due to its high participation rate, the price of amelioration and abatement in
DICE rises to US$142/tonne at decade 6 and US$390/tonne at decade 13. This
is significantly higher than Sceptre's amelioration and abatement cost of a few
dollars per tonne.

Industrial Emissions

Sceptre shows industrial emissions rising quickly over 1 decade from about 70 GtC to 80 GtC and then slowly stabilising at about 90 GtC. In contrast, DICE's very high projection of production and consumption causes industrial emissions to rise to 91 GtC after one decade and stabilise 40% higher at 128 GtC in decade 9, before slowly decreasing to 115 GtC at decade 13.

[Side-by-side illustrations: industrial emissions, Sceptre Normal Case vs DICE Business as Usual]

Over the first five decades, total CO2 emissions are 1515 Gt and 1785 Gt for
Sceptre and DICE respectively.

Temperature rise

Initially, both models have similar atmospheric and sea temperature rise
profiles although DICE is more aggressive.

[Side-by-side illustrations: temperature rise, Sceptre Normal Case vs DICE Business as Usual]

DICE's atmospheric temperature rise increases from the present 0.8°C to 1.0°C over 1 decade and then doubles from the present to 1.65°C over 4 decades. The same doubling in Sceptre occurs after 5 decades. With a similar difference, the atmospheric temperature rise at the end of the projection period of 13 decades is 3.2°C for DICE and 2.8°C for Sceptre. It will be shown in the next section, for the Base Case, that such a difference in temperature rise has extraordinary consequences for environmental cost.

Atmospheric carbon concentration and radiative forcing

Atmospheric carbon concentration causes temperature rise and therefore shows the same pattern of difference as temperature rise. Over the 13 decade projection period, DICE shows 680 ppm compared to 610 ppm for Sceptre.

[Side-by-side illustrations: atmospheric carbon concentration and radiative forcing, Sceptre Normal Case vs DICE Business as Usual]
Radiative forcing also mirrors CO2 concentration. DICE reaches 5.2 Watts/m² after 13 periods and continues to accelerate. Sceptre reaches 4.5 Watts/m² while flattening.

Damage multiplier

As would be expected from similar temperature rises, the damage multipliers


of 0.971 and 0.977 after 13 periods for DICE and Sceptre, respectively, are
comparable. This is equivalent to economic output declining by 2.3%.

[Side-by-side illustrations: damage multiplier, Sceptre Normal Case vs DICE Business as Usual]

Investment and capital

Sceptre shows investment rising to US$500 trillion per decade after 6 decades and to US$1,200 trillion per decade at the end of the projection period. DICE investment per decade is similar, at US$512 trillion after 6 decades and US$1,254 trillion at decade 13.

[Side-by-side illustrations: investment per decade, Sceptre Normal Case vs DICE Business as Usual]

However, net accumulated investment or capital in Sceptre is considerably higher, at US$700 trillion after 6 decades and US$2,500 trillion by the end of the projection period. The lower investment in DICE results in much the same capital of US$733 trillion after 6 decades but only US$1,813 trillion, or about 72% of Sceptre's capital, at the end of the projection period.

[Side-by-side illustrations: net accumulated capital, Sceptre Normal Case vs DICE Business as Usual]

5.5 Conclusion
This Chapter presented a new intertemporal computable general equilibrium
(CGE) model applying the Service Sciences technique of benchmarking to
multiregional Input Output modelling. Major design assumptions have been set
out and discussed. Key amongst these were the net present value discount
rate, population growth, climate scientific-economic equations, a new method of intertemporal modelling using accounting stocks, flows and Sales/Assets
ratios, and the selection of an objective function.

Make and Use table augmenting methods have also been presented in regard
to carbon commodities (carbon permits and amelioration and abatement
services), impairing economic output for climate damage and enhancing output
for total factor productivity.

Technical optimisation issues have been evaluated. Foremost amongst these were methods developed for working with marginally satisfied constraints that occur in real world problems.

The model developed in this Chapter was validated with recent geophysical
research and found to be consistent. The model was also compared to the
William Nordhaus DICE model using a Normal case where output is maximised
without a climate change constraint. This is a “business as usual case” with
economic damages occurring as a result of global warming and with carbon
markets responding to this damage in order to maximise output.

It was found that the Nordhaus DICE model is a high growth, high emissions control model. This contrasts with the benchmarking model developed in this Chapter, which has lower growth and a correspondingly lower emissions control regime.

In comparison with the Nordhaus DICE model, Sceptre proves to be stabilised by the usual neoclassical labour resource constraint, a labour purchasing power constraint to close the model for households, a cap on trade deficits plus a new form of intertemporal capital constraint. This new capital constraint is a substantial stocks and flows model that governs the relationship between stocks and flows through Sales/Assets rules.

5.6 Chapter references

Allen, M.R. et al., 2009. Warming caused by cumulative carbon emissions towards the trillionth tonne. Nature, 458(7242), 1163.
Baumert, K.A., Herzog, T. & Markoff, M., 2009. The Climate Analysis Indicators
Tool (CAIT), Washington DC: World Resources Institute. Available at:
http://cait.wri.org/cait.php [Accessed November 7, 2009].

Baumol, W.J., 1967. Macroeconomics of unbalanced growth: the anatomy of urban crisis. The American Economic Review, 415-426.

Bródy, A., 2004. Near Equilibrium: A Research Report on Cyclic Growth, Budapest: Aula Könyvkiadó.

Bródy, A., 1974. Proportions, Prices and Planning: A Mathematical Restatement of the Labor Theory of Value, Amsterdam: North-Holland.

Champernowne, D.G., 1945. A Note on J. v. Neumann's Article on 'A Model of Economic Equilibrium'. Review of Economic Studies, 13, 10-18.

Charles, D., 2009. ENERGY RESEARCH: Stimulus Gives DOE Billions for
Carbon-Capture Projects. Science, 323(5918), 1158.

Chesbrough, H. & Spohrer, J., 2006. A research manifesto for services science.

Friot, D., 2007. Tracking environmental impacts of consumption: an economic-ecological model linking OECD and developing countries. In Geneva: Geneva International Academic Network. Available at: http://www.ruig-gian.org/conferences/conference.php?ID=37 [Accessed November 7, 2008].

Garnaut Climate Change Review, 2008. Supplementary Draft Report: Targets and trajectories, Commonwealth of Australia. Available at: http://www.garnautreport.org.au/ [Accessed September 7, 2008].

Heal, G., 2005. Chapter 21: Intertemporal Welfare Economics and the
Environment. In Handbook of Environmental Economics. North
Holland.

Hicks, J.R., 1932. The Theory of Wages, Princeton, N.J.: MacMillan.

Lee, H., 2008. An Emissions Data Base for Integrated Assessment of Climate Change Policy Using GTAP, Center for Global Trade Analysis. Available at: https://www.gtap.agecon.purdue.edu/resources/res_display.asp?RecordID=1143 [Accessed June 26, 2009].

Lutz, W., Sanderson, W. & Scherbov, S., 2008. IIASA’s 2007 Probabilistic World Population Projections, Vienna, Austria: International Institute for Applied Systems Analysis. Available at: http://www.iiasa.ac.at/Research/POP/proj07/index.html?sb=5.

Meinshausen, M. et al., 2009. Greenhouse-gas emission targets for limiting global warming to 2 °C. Nature, 458(7242), 1158.

Morris, R.J.T., 2007. Services Research at IBM.

Nordhaus, W.D., 2009. Alternative Policies and Sea-Level Rise in the RICE-
2009 Model, New Haven, CT: Cowles Foundation, Yale University.
Available at: http://econpapers.repec.org/paper/cwlcwldpp/1716.htm
[Accessed September 10, 2009].

Nordhaus, W.D., 2008. A Question of balance: weighing the options on global warming policies, Yale University Press.

Penman, S.H., 2001. Financial statement analysis and security valuation, Boston, Mass.: McGraw-Hill/Irwin.

ten Raa, T., 2005. The Economics of Input Output Analysis, New York: Cambridge University Press. Available at: www.cambridge.org/9780521841795.

ten Raa, T. & Mohnen, P., 2008. Competition and performance: The different
roles of capital and labor. Journal of Economic Behavior & Organization,
65(3-4), 573-584.

ten Raa, T. & Wolff, E.N., 2001. Outsourcing of Services and the Productivity
Recovery in U.S. Manufacturing in the 1980s and 1990s. Journal of
Productivity Analysis, 16(2), 149-165.

Ramsey, F.P., 1928. A mathematical theory of saving. The Economic Journal, 38(152), 543-559.

RBA, 2009. Chart Pack: A Collection of Graphs on the Australian Economy and
Financial Markets, Canberra: Reserve Bank of Australia. Available at:
http://www.rba.gov.au/ChartPack/index.html [Accessed September 28,
2009].

Rudin, W., 1976. Principles of Mathematical Analysis, International Series in Pure and Applied Mathematics, 3rd ed., New York, NY: McGraw-Hill.

Solow, R.M., 1957. Technical change and the aggregate production function.
The Review of Economics and Statistics, 39(3), 312-320.

Spohrer, J. et al., 2007. Steps toward a science of service systems. Computer, 40(1), 71-77.

Stern, N., 2007. Stern Review Report, Available at: http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/stern_review_report.cfm [Accessed April 17, 2008].

United Nations, 2009. World Population Prospects: The 2008 Revision (Highlights), New York: Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat.

Von Neumann, J., 1938. A model of general economic equilibrium. Review of Economic Studies, 13(1945), 1-9.

Weitzman, M., 1998. Gamma discounting for global warming. In First World
Congress of Environmental and Resource Economists. pp. 25–27.

Weitzman, M.L., 2001. Gamma discounting. American Economic Review, 91(1), 260-271.

i One model was run continuously for over 5 weeks on a high speed research
computing node in an unsuccessful test of ultimate constraint satisfaction
ii Fortunately, calculated very quickly using Mathematica's symbolic processing
iii Attributed to the Franciscan friar William of Ockham (1285-1349)
iv In this example, it may help to think of consumption plus net imports, where net
imports is just a negative number for net exports
v It is interesting to note that the use of DuPont analysis completes a full circle in
Leontief and CGE modelling. The Physiocrat Pierre Samuel du Pont de Nemours,
who became a prominent American industrialist, advocated low tariffs and free
trade
vi As there have been changes in the collection and classification of data between
GTAP5 and GTAP7, a more reliable analysis would require extended
econometric analysis using supplementary data sources
vii For example, Australia's net migration was 285,000 in 2009, compared to a
more normal level of 90,000 per annum
viii DICE 2005 emissions are calculated from the equation for industrial emissions:
eind(0) = 10 σ(0) (1 – μ(0)) ygr(0) + eland(0)
        = 10 × 0.13418 × (1 – 0.005) × 55.667 + 11
        = 85.3205 GtC per decade
Converting this equation into MtCO2 per annum:
eind(0) = 0.13418 × (1 – 0.005) × 55.667 × 3.67 × 1000 + 11/10 × 3.67 × 1000
        = 27,276 + 4,037
        = 31,313 MtCO2 per annum
ix ceteris paribus: other things being held constant
x This example is drawn from the file m12_13p_2C_100.nb
xi Personal communication with Wolfram indicates that this issue will be
addressed in a future release of Mathematica
xii GTAP's future release of a mapped non-CO2 gas emissions database will
facilitate further improvement in the geophysical model
xiii m12_13p_normal.nb
xiv topo_test12_comp_sceptre.nb
xv DICE “business as usual” has various miscellaneous non-binding constraints

Chapter 6 Assessment of changes in regional
and industry performance under resource
constrained growth
The foregoing Chapters have established the framework for a new lens through
which climate-economic policies may be analysed to address the research
question of identifying the regional and industry effects where resources are
limited by climate change. Chapter 5 A new spatial, intertemporal CGE policy
research tool described a new intertemporal, multiregional CGE model called
Sceptre, which is an acronym for Spatial Climate Economic Policy Tool for
Regional Equilibria. The objective of this Chapter is to use this new lens for
policy research to address an example of climate policy.

The Base Case adopted for this policy investigation is that the increase in
global average temperatures above pre-industrial levels should not exceed 2°C
and that both developed and developing countries need to work towards this
goal. This policy was accepted by the Major Economies Forum at its July 2009
inaugural meeting in L'Aquila, Italy (see Chapter 3 Political Economy of the
Anglo-American world view of climate change). This objective is consistent
with the IPCC's recommendations to ameliorate global warming and is
supported by the vast majority of scientists. By September 2009, 133 countries and the European Union had accepted the proposed 2°C limit.

The multiplicity of results from spatial models is often celebrated and lamented
in rapid succession. Fortunately, Mathematica's rich data visualisation
capabilities allow the communication of the results to be relatively enjoyable
or, if not, then at least bearable.

Chapter 1 Introduction discussed the value of policy modelling: firstly for the
ability of modelling to test feasibility and secondly to provide an appreciation
of risks through the differences between scenarios. It was noted that other
modelling techniques would supplement CGE and ultimately public pluralist
processes, such as forums for stakeholder debate, would determine policy
decisions. So the aim in using the Sceptre policy investigation tool is to
contribute a reference position to the process of policy formation.

The 2°C Base Case is important in its own right. However, for the reasons of
systemic modelling risk discussed in Chapter 1 Introduction it is not an
immutable outcome of the policy. With this caveat, features of the Base Case
are discussed in this part of the Chapter. In the ensuing sensitivity cases, the
Base Case is used to contrast sensitivity scenarios for Point of View Analysis,
Constraint Severity Analysis, Technology Cost Analysis and Impaired Sales to
Asset Ratio Analysis.

The results are presented in terms of expansions from the current situation,
which is consistent with both the language and the mathematics of
benchmarking discussed in Chapter 4 Economic Models for Climate Change
Policy Analysis. Aggregate investment, accumulated capital and carbon
commodities are presented in absolute terms. These absolute values need to be
approached with the usual caveat concerning apparently accurate numbers in
projections.

6.1 Policy investigation with the Sceptre tool

6.1.1 Base Case results and analysis

Economic expansion

In 2004, the regions NAFTA (the United States, Canada and Mexico), the European
Union (25 countries) and the Rest of the World (ROW) had Gross Domestic
Products as shown in the following table:

Gross Domestic Product, 2004 (US$ trillion)
EU25 13.3
NAFTA 12.8
ROW 14.8
Total 41

Illustration BC01 (below) shows the regional expansion of consumption. It may be seen that there is a marked difference between regions. The EU25 has subdued performance. Its economic expansion starts with a 2% increase in the first decade and saturates at about 14%. This compares to a 6.7% increase in population, as shown in Chapter 5 Sceptre model development.

Illustration 34: BC01 Base Case economic expansion factors

NAFTA's economic expansion jumps 10% in the first decade and saturates at
about a 38% increase. This compares to a 29.1% increase in population. The
Rest of the World (ROW) sector expands 12% in the first decade. This saturates
toward a 48% expansion, which compares to an increase in population of
38.1%.

Since these increases derive only from trade and production efficiency and from the growth of labour availability, they represent a significant increase in output in real terms. The average increase in living standards at the end of the projection is the same in each case, at about 6.95% in real terms. This reflects the objective function, which equally weights per capita increases in welfare in all regions.
Proportion of emissions ameliorated or abated

The control profile is the proportion of emissions actively ameliorated or abated, in comparison to being satisfied by the purchase of emissions permits. It may be noted that after an interregnum of 6 decades, control requirements rapidly increase in order to achieve the 2°C temperature rise constraint.

The illustrations BC02 to BC04 below show the emissions control profile for
the production of food, manufactured goods and services respectively.
Illustration BC05 shows the control profile for consumer generated emissions.

[Illustrations BC02 to BC05: Base Case emissions control rate by commodity]

The following table summarises the saturation emissions control levels in each
country and industry.

Base Case saturation emission control rates
Emissions controlled EU25 NAFTA ROW
Food 14% 9% 18%
Manufacturing 21% 33% 83%
Services 79% 100% 99%
End Consumption 34% 50% 66%

It may be seen in the above table that the control requirements for food are relatively modest. However, the high figures for ROW manufacturing and end consumption show how energy and emissions intensive these sectors are across the ROW region. It may be noted that for services production, which includes electricity production, very high or complete control is required in all regions. This demonstrates the crucial importance of controlling emissions from electricity generation.

Price of amelioration and abatement

Anuradha's (2009) identification of contract conditions that would enable developing countries to join with industrialised countries is discussed in Chapter 1 Introduction. A paramount issue is the cost and availability of green infrastructure and technology. The Sceptre policy tool may be employed in developing a policy response to the technology factor. Sceptre is able to exemplify the potential cost of amelioration or abatement where the emissions control rate varies across regions and industries. This is shown in the following illustrations for the Base Case of a maximum 2°C rise at 100 years.

The illustrations BC06 to BC09 below show the average price of amelioration and abatement based on the above control rates.

[Illustrations BC06 to BC09: Base Case emissions control prices by commodity]

The saturation prices for each commodity in each region are shown in the following table:

Emissions control technology prices (US$ per tC)
 EU25 NAFTA ROW
Food 9 5 14
Manufacturing 19 44 233
Services 209 322 320
Consumption 46 94 153

In an international market, emission permits could be expected to trade at the marginal cost of the next unit of amelioration and abatement. In the table above, emission permits would trade at US$322 per tC, the highest marginal cost shown.

While costs of amelioration or abatement are relatively low in the food industry, an exceedingly high cost of adjustment may be seen in ROW manufacturing, comprising mainly developing countries. In the services sector, which includes electricity generation, the amelioration/abatement cost is high for all countries. This demonstrates that developing countries are very exposed to the cost of green technology and infrastructure. However, under this Base Case scenario, these high costs do not become an imperative until mid-century.

Industrial emissions

Illustration BC10 shows land clearing emissions in purple and industrial emissions in blue. Total emissions are the sum of these two components.

Illustration 35: BC10 Base Case emissions from industrial activities and land clearing

In order to meet the 2°C temperature rise constraint while maximising welfare, industrial emissions show an increasing profile for 5 decades to a maximum of 80 GtC/decade. This is 8 GtC per annum, which is 38% higher than the 1990 level of 5.81 GtC.

After reaching the 80 GtC/decade maximum, emissions must drop by 88% to 9.4 GtC/decade after 9 decades. This level is equivalent to 0.94 GtC per annum, which is an 83% reduction compared to the 1990 level.

This shows that the various widely discussed objectives of a 20% or 40% reduction by 2020 (compared to 1990 levels) and a 50%, 60% or 80% reduction by 2050 may not be fully consistent with maximising economic welfare. They do, however, represent a progressive approach to controlling emissions that mitigates the risk of needing to reduce emissions by 88% in just one decade.

Temperature rise and economic damage function

Illustration BC11 shows how the 2°C limit on atmospheric temperature rise is approached after 8 decades and then stabilises. There is also a strong, albeit delayed, rise in ocean temperature, whose effects are yet to be fully appreciated. Illustrations BC12 and BC13 show the associated concentration of carbon and radiative forcing.

[Illustrations BC11 to BC14: Base Case geophysical parameters]

The second most important illustration is BC14, which is the economic damage feedback multiplier. This is a function of atmospheric temperature rise and asymptotically approaches 0.989, which is a reduction of economic output of about 1.1%.
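The quoted asymptote is consistent with a DICE-style quadratic damage multiplier. As a check, using the DICE coefficient (an assumption here; Sceptre's own calibration is given in Chapter 5):

$$\Omega(T) = \frac{1}{1+\psi T^{2}}, \qquad \psi \approx 0.0028388 \;\Rightarrow\; \Omega(2) = \frac{1}{1+0.0114} \approx 0.989$$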

Industry activities

Illustrations BC17 to BC21 show the level of industry activity by commodity by region. These are complemented by Illustrations BC22 to BC24, which cross-tabulate to show industry activity by region by commodity.

[Illustrations BC17 to BC24: Base Case industry activity expansion by commodity and by region]

Specialisation

A major feature of the industry activity illustrations, for example in BC17 (above), is that for a time the EU25 becomes a food bowl for the Rest of the World (ROW). The activity of the sector is very strong, increasing from 1 to 4.5 times over six decades. It also exhibits a volatile profile, dropping to 0.65 at decade 9 and then returning to 2 times by decade 13.

Specialisation is not the result of a fixed input-output coefficient schema. It occurs because of trade substitution in resource extensive sectors of factor abundance, guided by the general equilibrium that maximises value-added per unit of labour resource (ten Raa 2005, pp.48-9, 110-1 & 127-8). Production is switched to the most viable location until this process becomes limited by a binding constraint. Higher cost sectors are deactivated. This follows from the Theorem of Complementary Slackness: sectors are either active, with zero slack and positive shadow prices for inputs, or closed, with positive slack and zero input prices (see Chapter 4 Economic Models for Climate Change Policy Analysis and Appendix 6 Benchmarking with Linear Programming).
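Stated compactly for a linear program in standard notation (not the thesis's own symbols): for $\max\{c^{\top}x : Ax \le b,\; x \ge 0\}$ with dual prices $y$, complementary slackness requires

$$y_{i}\,\bigl(b_{i}-(Ax)_{i}\bigr)=0 \quad\text{and}\quad x_{j}\,\bigl((A^{\top}y)_{j}-c_{j}\bigr)=0,$$

so an active sector ($x_{j}>0$) exactly covers its shadow-priced input cost, while a resource with slack ($(Ax)_{i}<b_{i}$) carries a zero price ($y_{i}=0$).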

The presence of specialisation in Sceptre's super-free trade model is not regarded as a weakness but as a generic issue inherent in neoclassical modelling, and one that becomes starkly apparent following optimisation. It is not a matter of suppressing specialisation. Indeed, the well-known Ricardian benefits that derive from multiplying the volume of free trade are due to a general equilibrium optimisation of bilateral specialisation with trade partners (Romer 1994). This has been observed in the off-shoring of Anglo-American jobs to Asia and China. The real issue is when and how to control specialisation into a practical range.

The only approach taken in Sceptre is to limit trade deficits. It is acknowledged that this is less than perfect because specialisation may still occur in one commodity if production of another is relinquished.

In cases where policy studies have specific requirements it will be necessary to better control specialisation. Saturating consumer utility before too much specialisation occurs is a synthetic method of achieving this. A carefully constructed nonlinear contemporaneous utility function is required (ten Raa 2005, p.175). Various non-linear objective functions were evaluated in the course of Sceptre's development. However, a simple yet effective general purpose saturating utility function that addressed excessive EU25 food production was not forthcoming. Ultimately, other social welfare considerations led to the selection of Sceptre's objective function, as set out in Chapter 5 Sceptre model development.

Two better methods for controlling specialisation are to employ additional engineering or ecological infrastructure constraints and to use differential technology propagation. Infrastructure constraints are specific to the specialised commodity. For example, food production in the European Union would be limited by the availability of arable land. Such a constraint may be implemented in the same way as a labour constraint, with resource data drawn from GTAP's land use database or Mathematica's Country database, as sketched below. In other regions or countries where farming is on marginally viable land, such as Australia or China, a better constraint may be water resources.
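A minimal sketch of such a constraint in Mathematica follows. The variable names (foodActivity, landPerUnit, arableLand) are illustrative rather than Sceptre's own, and the CountryData property name "ArableLandArea" is an assumption:

    (* Illustrative arable-land constraint, analogous to Sceptre's labour constraint. *)
    (* Endowments from Mathematica's CountryData; GTAP land use data could be         *)
    (* substituted. CountryData["EuropeanUnion"] lists the member countries.          *)
    arableLand["EU25"] =
      Total[CountryData[#, "ArableLandArea"] & /@ CountryData["EuropeanUnion"]];
    arableLand["NAFTA"] =
      Total[CountryData[#, "ArableLandArea"] & /@ {"UnitedStates", "Canada", "Mexico"}];

    (* One constraint per region: land implied per unit of base-year food output,     *)
    (* times the food activity multiplier, must not exceed the regional endowment.    *)
    landConstraint[r_] := landPerUnit[r] foodActivity[r] <= arableLand[r]

    (* These would simply be appended to the constraint list already passed to        *)
    (* FindMinimum, in the same way as the labour constraints.                        *)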

Differential technology propagation would change value-added functions and the pattern of substitution. Chapter 3 Political Economy of the Anglo-American world view of climate change showed that the differential propagation of HIV pharmaceutical technology and future green energy technologies are major concerns of developing countries. In relation to limiting EU food specialisation, it might be that genetically modified crops in NAFTA and the Rest of the World would act to reduce the resource extensibility of EU food production.

Further investigation into Sceptre's objective function, engineering and ecological infrastructure constraints, and technology propagation remains a policy research opportunity, as set out in Chapter 7 Conclusions and Suggestions for Further Research.

Total Factor Productivity

Illustration BC18 (above) shows manufacturing industry activity in all regions rapidly increasing and then declining. The rapid increase is due to growing output for all regions, while the decline is due to technological progress through increased factor productivity, leading to more output for the same amount of input and industry activity.

Illustration BC19 (above) shows the Service Industry greatly prospering in NAFTA and ROW, while initially sluggish in EU25. This sluggishness is due to the EU's specialisation in food production, as discussed above.

Carbon sector activity

Illustrations BC20 and BC21 (above) show the outputs of the augmented carbon sectors: the amelioration and abatement sector and the emission permits trading sector respectively. It may be noted that in decade 6 the trading of emissions permits switches over to physical amelioration and abatement. A feature of the illustrations is the strong growth in EU25 emissions (for the reasons discussed above) and the region's equally strong amelioration and abatement. Total emissions dealt with by both processes rise from 78 GtC in the first decade to 99 GtC in decade 13.

Commodity Export

Illustrations BC25 to BC29 show the export outputs for each commodity. A
positive amount is a net import while a negative amount is a net export.
Illustration BC27 shows no export activity because Services has been defined
as a nil-export commodity.

[Illustrations BC25, BC26, BC28 and BC29: Base Case international trade in food, manufactured and carbon commodities. BC25 food trade; BC26 manufactured commodities trade; BC28 carbon amelioration services trade; BC29 carbon emissions permits trade]

It may be noted in illustration BC25 that EU25 is a net exporter of food to NAFTA and ROW, as has been recognised in its specialisation. The EU25 is quiescent in the export of manufactures. Illustration BC26 shows NAFTA exporting manufactured products to ROW.

Illustration BC28 shows that, in order to achieve its food expansiveness, the EU25 imports permits from NAFTA and, after decade 6, begins to import significant permits from the ROW. However, the dominant feature of illustrations BC28 and BC29 is that after decade 6 the EU25 imports large amounts of both amelioration and abatement services and emissions permits.

Aggregate investment and capital accumulation

Illustrations BC15 and BC16 show aggregate investment and capital in absolute terms, which are mainly used for comparisons across scenarios.

[Illustrations BC15 and BC16: Base Case aggregate investment and capital accumulation]

The table below compares the Normal or “business as usual” scenario with no climate constraint (see Chapter 5) to the 2°C Base Case. It may be seen that the 2°C limit reduces accumulated capital in decade 13 by 11%, or US$280 trillion.

Investment and capital accumulation in decade 13 (2004 US$ trillion)
 Business as usual Base Case 2°C rise
Investment per decade 1,229 1,214
Accumulated capital 2,424 2,143

Illustrations BC30 to BC33 (below) show the investment activity for each
region by commodity. This is a plot of the multipliers of the existing investment
vectors. Cross-tabs of investment activity for each commodity by region are
shown in illustrations BC33 to BC35.

These activities are the multiple of existing investment vectors, which are:

Initial investment per decade (2004 US$ trillion)
 Food Manufacturing Services
EU25 0.07 9.3 11.3
NAFTA 0.32 10.3 14.6
ROW 0.24 12.6 23.0

[Illustrations BC30 to BC35: Base Case disaggregated investment by region and by commodity]

In illustration BC30, the EU's small investment vector is increased by very large multiples for its specialisation in food. Illustrations BC31 and BC32 show that NAFTA and ROW also grow their food investment, by more than 25-fold and 50-fold respectively.

There is only sustaining investment in manufacturing in all regions. However, investment in services in both NAFTA and ROW grows the same 25-fold as NAFTA's food investment.

Illustrations BC36 to BC38 show the net accumulated investment in each region by commodity. As expected, illustration BC36 shows EU25 accumulated investment in the food industry is high.

[Illustrations BC36 to BC38: Base Case disaggregated accumulated investment by region]

Illustrations BC36, BC37 and BC38 exhibit the feature that investment in services rises strongly due to the demands of amelioration and abatement.

Marginal cost of global economic expansion

Pursuant to the Theorem of Complementary Slackness, each binding constraint has a resource productivity and zero slack, while the opposite is true for each non-binding constraint.

The following constraints in the Base Case meet the first definition of the Theorem of Complementary Slackness, that the slack is zero when a constraint is binding.i From the original 996 constraints, only 20 constraints have slack of zero. These are shown in the following illustration.

Illustration 36: Binding Constraints, KKT multipliers and Residual Slacks for Base Case

It may be noted from the table above that excluding the binding constraints for
the damage feedback function and emissions control rate, the only two binding
constraints remaining are for temperature rise and the EU25 food commodity.

As discussed in Chapter 4 Economic models for climate change policy analysis, Mathematica's FindMinimum function (in Version 7.01) does not expose the Karush-Kuhn-Tucker (KKT) multipliers from its underlying C++ code. For Sceptre's large scale optimisations, the current lack of direct access to KKT results means that multipliers need to be calculated from first principles using the results of the optimisation. This task has two disadvantages. Firstly, the solution of the KKT set of simultaneous equations may not be unique, so it is not certain that the KKT multipliers obtained are identical to those implicit in the results from Mathematica's FindMinimum function. Secondly, the calculation can be quite long in duration.
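A sketch of the first-principles calculation in Mathematica, under assumed names (obj, the objective written for maximisation; bindingCons, the left-hand sides of the binding constraints; vars, the decision variables; sol, the solution rules returned by FindMinimum):

    (* Gradients of the objective and of each binding constraint at the optimum. *)
    gradF = (D[obj, #] & /@ vars) /. sol;
    gradG = Table[(D[g, #] & /@ vars) /. sol, {g, bindingCons}];

    (* Stationarity requires gradF == Transpose[gradG].lambda with lambda >= 0.  *)
    (* With more variables than binding constraints there are more equations     *)
    (* than multipliers, so the system is solved here in the least-squares       *)
    (* sense; non-uniqueness appears as a non-trivial null space of              *)
    (* Transpose[gradG].                                                         *)
    lambda = LeastSquares[Transpose[gradG], gradF];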

A guide to the non-uniqueness of KKT multipliers can be gauged from the first binding constraint, which is the main constraint of 2°C rise at 100 years (i.e. the tenth decade). This constraint shows a KKT multiplier of 168/5 = 33.6. However, this is the lowest multiplier of the set of possible solutions, as shown in the illustration below:

The KKT multiplier for a constraint represents the productivity of the resource,
which is the change in the objective function for a unit change in the resource
of the constraint. As Sceptre's objective function is the net present value of the
unbiased or unweighted sum of country expansion factors, the KKT multipliers
or shadow prices are given in terms of Net Present Value of economic
expansion rather than in dollars.

A KKT multiplier of 33.6 for the above constraint implies that a 33.6 increase
in the value of the objective function will result if the temperature rise is
relaxed by one unit, from 2°C at decade 10 to 3°C at decade 11.

However, the shadow price is strictly applicable as a differential only at the one point of {2°C, decade 10} and will vary through the unit rise of 1°C. So it is usual to express prices in terms of incremental increases. For example, an increase of one-hundredth of a unit of the resource, 0.01°C or 0.5%, leads the objective function to rise by 0.336. This is a 5.11% rise compared to the optimisation value of 6.574. Therefore, the output elasticity is approximately 10 (5.11% / 0.5%).
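Writing the calculation out, with $Z = 6.574$ the optimised objective value, $\lambda = 33.6$ the KKT multiplier and $T = 2$°C the constrained temperature rise (symbols assumed here for illustration):

$$\Delta Z = \lambda\,\Delta T = 33.6 \times 0.01 = 0.336, \qquad \varepsilon = \frac{\Delta Z / Z}{\Delta T / T} = \frac{0.336/6.574}{0.01/2} = \frac{5.11\%}{0.5\%} \approx 10$$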

To convert shadow prices given in terms of population adjusted expansion factors to absolute prices in dollars requires the objective function to be mapped to one where expansion factors are multiplied by the weighted proportions of consumption in each country. This provides the following conversion:

Objective Function (see Chapter 5)
Raw Expansion Factor Basis Dollar Weighted Equivalent
6.57 US$759 trillion
1 (or per unit) US$116 trillion

From the table, it may be noted that the dollar value of the objective function is about US$116 trillion per unit of expansion value. Therefore a relaxation of the temperature constraint by 0.01°C, and the consequent increase of 0.336 in the NPV of the expansion factors, is worth about US$38.8 trillion. This is almost equal to a single year of global GDP, US$40.97 trillion (2004).

6.1.2 Point of view analysis


Chapter 1 Introduction highlighted the importance of appreciating diverse
points of view of various prominent stakeholder groups when investigating
policy. Many dimensions are needed to capture the diversity of views in society.
In addition, there is a range of views emanating from other national
governments, global corporations and community action groups.

While there may be a preferred point of view, there is no such thing as a “best”
one. Usually each point of view has a prominent and unique perspective. Often,
these points of view are orthogonal, that is, coming from different
philosophical or ideological bases and so are not strictly comparable.

For example, in the climate change debate, national and supra-national polity
need to engage with a range of views from sceptics to environmental radicals,
which represent two rather public clusters in climate change. As different as
the dichotomy of these two views may be, they are united in the
uncompromising demand that society adopt fundamental positions and accept
large risks. For example, sceptics shrug off the risk of a climate change induced collapse of civilisation. Radicals equally shrug off the social risk associated with mass dislocation of employment.

Although uncomfortable for many policy makers, extreme views have a place because they stretch the debate. However, as discussed in Chapter 2 Political economy of the Anglo-American world view, the plurality of fundamentalist views can be breathtaking. These range through such diverse approaches as free-market, conservative, liberal, evangelical and Marxist-Leninist perspectives. Even within establishment views, great rifts exist. Krugman (2009) notes that the global financial crisis has reignited irreconcilable differences between American Keynesian and Monetarist philosophies in establishment macroeconomics.ii

These multidimensional perception spaces offer many interesting pathways for additional research to achieve a fine reduction and classification of policy understanding. For reasons of expediency and policy making pragmatism, it is assumed in this research that Pareto's Rule applies, so that 80% of the desired point of view analysis can be understood through examining 20% of the views, subject to these being sufficiently diverse.

The points of view selected for analysis include the two extreme positions of
sceptic and radical. Somewhere in the multidimensional space in between are
points of view for Laissez-faire free markets and Government-regulated
markets. These latter two points of view roughly correspond to American free-
enterprise individualism and European free-market social democracy.

Sceptical point of view

Climate change Sceptics do not subscribe to the assumption that industrial emissions are causing climate change. This group has three sub-clusters: those who deny the existence of global warming; those who claim the effects of global warming will be mainly beneficent and profess to eagerly anticipate its benefits; and those who believe that global warming is beyond the influence of human activity and are ambivalent or diffident about any scientific evidence.

It needs to be kept in mind in analysing this point of view (as in the ones below) that the analysis seeks to model the underlying assumptions of the view itself. For example, a representative person might say “I feel that this will be the outcome”. In the case of a Sceptic, it might be “I don't acknowledge any global warming so I feel that there will be no climate induced effects to look at.”

The point of view analysis does not endeavour to criticise these assumptions, nor does it seek to demonstrate whether or not the point of view is logically consistent. Neither does a point of view sensitivity seek to project a realistic outcome. In other words, point of view projections take the representative view at face value.

The assumptions used to model the Sceptic point of view are that there are no constraints on emissions, no carbon trading is required (in Sceptre, all emissions are ameliorated at no cost) and there is no climate-economic damage function.

The difference between this scenario and the Normal case (in Chapter 5) is
that in the Normal case the climate-economic damages mechanisms are
operating and, although there is no climate constraint, the model draws on
amelioration and abatement services in order to improve its economic
expansion. In this Sceptic point of view, it is never necessary to draw on
amelioration and abatement services as emissions have no impact on climate
change or economic performance.

Radical point of view

Radicals generally have a narrow focus, for example on the environment or on single issues such as the flora or fauna at a specific location. Their chosen opposition parties are often global companies, big business or developers. For example, in Australia the unlikely hero of planning ethics was Jack Mundey. Now a distinguished Australian, in 1969 he led the Building Labourers' Federation to impose highly controversial Green Bans on the redevelopment of heritage and naturally significant sites.

A representative Radical approach to climate change might be “now that we know about the effect of greenhouse gas pollution, to continue is wanton destruction of the planet and the people doing this are criminals”. James Hansen's testimony to Congress, discussed in Chapter 3 Political Economy of the Anglo-American world view of climate change, is an eloquent example. Radicals believe that there can be only one logical corollary: all pollution must cease forthwith and sanctions be applied to any business that wilfully continues to pollute.

The assumptions used in Sceptre to model the Radical perspective are that all emissions must be immediately ameliorated or abated in full, and the climate-economic damage function operates even though the low emissions prevent the damage multiplier from having significant effect.

Laissez-faire free market point of view

Market systems form the middle ground. The first point of view investigated in
market systems is Laissez-faire free market dogma. This is often identified with
unfettered Anglo-American capitalism and often called neoliberalism. With
regard to climate change, its underlying assumption is that any climate
induced economic damage will become priced in the market. The invisible
hand of capitalism will silently move to evoke entrepreneurial technologies to
solve any problem, if indeed there is money to be made in solving it. This
means “business as usual” and managers acting with self-enlightenment if it
suits them and is earnings accretive. The subject of market failures is met with
complete cognitive dissonance. For example, the Great Depression was merely
people choosing to have a holiday rather than being willing to work for lower
wages (Krugman 2009).

This point of view can be modelled in two ways. The first is the Normal
“business as usual” scenario presented at the beginning of this Chapter as a
comparison with Nordhaus' DICE model. It was seen there that Sceptre's
projection of temperature rise was increasing strongly through 2.5°C at
decade 10.

The Laissez-faire free market view would be that if temperature is rising strongly, then this is a situation people are happy with, otherwise business would have been paid to arrest the rise. It is phenomenological, positivist and optimistic. While there may be a climate-economic damage function, there is no temperature rise until it occurs. Looking on the bright side, temperature rise may never happen if the sceptics are right, so why fix something that ain't broken?

Therefore, the representative outlook or point of view is that there are neither
constraints on emissions nor any need for emission permits trading or
amelioration and abatement services. Optimistically all emissions will be dealt
with and a reasonable scenario will unfold. Therefore Sceptre's assumptions
are no constraints on emissions, no carbon trading (all emissions ameliorated
at no cost) but a climate-economic damage function is operating.

Government-regulated market point of view

The second market related point of view investigated here is one where
governments intervene to address potential or actual market failures. Its
underlying assumption is that Laissez-faire free markets have many
advantages over planned economies but that free markets do not work in
regard to commons, such as the environment. The planned adjustments are
designed to ensure sustainability. Chapter 2 Political economy of the Anglo-
American world view discussed the European Union's market system with
particular reference to Germany. Chapter 3 Political Economy of the Anglo-
American world view of climate change placed this discussion in the context of
climate change.

The United Nations was formed in 1945 to replace the League of Nations, which America had never joined. Both organisations represent the type of supranational symbiotic community that countries need in order to take ownership of the international commons and protect it. In the climate change policy debate, the UNFCCC and its IPCC scientific panel represent the supra-national body. The IPCC has recommended a maximum post-industrial temperature rise of 2°C. As there was no discernible temperature rise in the period from 1750 to 1900, the 2°C temperature rise effectively applies post 1900.

Therefore, this Government-regulated market point of view is represented by the Base Case of a 2°C limit at 100 years, emission permits trading and amelioration at full cost, with a climate-economic damage function operating.

Results and analysis of alternative points of view

These four points of view were modelled in the Sceptre model with the following assumptions. The illustrations of the results are shown overleaf.

Climate Change Sceptic (iii): no constraints on emissions, no carbon trading (all emissions ameliorated at no cost) and no climate-economic damage function.
Laissez-faire Free Market (iv): no constraints on emissions, no carbon trading (all emissions ameliorated at no cost) but with a climate-economic damage function.
Maximum 2°C rise @ 100 years (v): 2°C limit at 100 years with carbon trading and amelioration at cost, with a climate-economic damage function.
Radical Planet Protection (vi): all emissions ameliorated at full cost and with a climate-economic damage function.

The Sceptic, Laissez-faire and Radical perspectives all lead to similar outcomes
because each assumes the outcome will be fine (see results in the next section
of this Chapter). However, all three scenarios differ materially from the Base
Case. For example, Sceptic, Laissez-faire and Radicals all believe that
temperature rise will continue to hover at 0.8°C, in comparison to the Base
Case where it rises to 2°C.

A comparison of the objective functions of the Radical and the Base Cases shows the extra Net Present Value cost of the Radical case to be US$3.8 trillion (2004 dollars) (cf. the previous Base Case analysis for the method of estimation). The Sceptic and Laissez-faire cases, which are not meaningful comparisons, show savings over the Base Case of US$75 billion and US$14 billion respectively.

The similarity between these points of view has an interesting precedent. In August 2009, the Australian Parliament provided confirmation of this unlikely congruence. The Liberal Party and the Greens joined to vote down the Government's Carbon Pollution Reduction Scheme (CPRS) legislation. The reason for the unusual alliance between Greens and Liberals, comprising Sceptics and Laissez-faire free marketeers, was that each remained intransigent in the belief that their preferred path would be the only means of achieving a benign future. Indeed, perceptions of the benign future were identical but the means of getting there were dramatically different.

i For this purpose, an arbitrary chop of 10⁻⁶ is applied to slacks. This means that slacks smaller than 10⁻⁶ are considered to be zero
ii Krugman (2009) refers to Keynesians as “saltwater economists” because they tend to live on the East or West coast and Monetarists (or Chicago School) as “freshwater economists” because they tend to live inland

iii m12_13p_full_amel_no_cost_no_dam.nb
iv m12_13p_full_amel_no_cost.nb
v m12_13p_2C_100.nb
vi m12_13p_full_amel.nb

Point of view simulation results

[Illustrations: point of view simulation results presented in four columns (Climate change sceptic, Laissez-faire markets, Base Case 2°C rise, Radical sensitivity) across the full set of output charts]
6.1.3 Atmospheric concentration constraint severity
In Chapter 3 Political Economy of the Anglo-American world view of climate
change it was noted that the Tällberg Foundation, Al Gore, James Hansen and
others emphatically seek an atmospheric concentration of 350 ppm compared
to 380 ppm in 2009. Until recently the IPCC and member governments
concurred with a 400 ppm or 450 ppm limit. However, with this target
becoming frustrated, MEF governments adopted a 2°C rise limit in lieu.

Constraint severity analysis seeks to identify the spatial regional-commodity trends arising from various atmospheric concentrations of carbon. The assumptions used in the sensitivity analysis are set out in the following table:

350 ppm (vii): atmospheric concentration of carbon maximum of 350 ppm, with emissions declining after 100 years.
450 ppm (viii): atmospheric concentration of carbon maximum of 450 ppm, with emissions declining after 100 years.
550 ppm (ix): atmospheric concentration of carbon maximum of 550 ppm, with emissions declining after 100 years.
Base Case 2°C (x): 2°C limit at 100 years with carbon trading and amelioration at cost, with a climate-economic damage function.

Results and analysis

The values of the objective function for the three sensitivity scenarios of 350,
450 and 550 ppm show that these constraints impose an increased cost on the
economy compared to the Base Case of a 2°C temperature rise. However, this
increased cost is only in the order of US$15-20 billion, which is far less than
US$3.8 trillion for the Radical point of view discussed above.

Trends across the severity scenarios show that the control rate for amelioration and abatement dramatically declines and is strongly delayed as the atmospheric tolerance increases to 550 ppm. The 2°C Base Case has a delayed requirement for emissions control but otherwise is similar to the 450 ppm case. The manufacturing emissions control rates are shown below for each sensitivity:

[Illustrations: Manufacturing emissions control rate for Base Case 2°C, 350, 450 & 550 ppm]

However, because the emissions control begins immediately for 450 ppm
mitigation, the emissions profile approaches half that of the Base Case. After
10 decades the profile for 350 ppm begins to decrease for EU and NAFTA,
while the emissions control requirements for 450 and 550 ppm mitigation rise.

These trends are reflected in the temperature rise illustrations (below), which show 2°C for the Base Case but only 1°C for 350 ppm, 1.7°C (rising) for 450 ppm at decade 10 and 2.2°C (strongly rising) for 550 ppm.

[Illustrations: Atmospheric temperature rise for Base Case 2°C, 350, 450 & 550 ppm]

Another notable feature is global accumulated capital. This is only US$1,200 trillion (2004 dollars) at decade 10 for the 350 ppm case, compared to US$1,700 trillion for 450 ppm and US$1,500 trillion for both 550 ppm and the Base Case.

[Illustration: Accumulated capital for Base Case, 350, 450 & 550 ppm]

As shown in the illustrations below, a limit of 350 ppm limits EU25's resource expansive food production. This restriction is removed once the atmospheric concentration is relaxed, and EU25 food production increases markedly in the 450 ppm and 550 ppm cases and the Base Case.

[Illustrations: Food industry activity level for Base Case, 350, 450 & 550 ppm]

The disaggregated results in the following section of this Chapter provide many other insights for analysis into regional industry activity, trade and investment.

vii m12_13p_350_100.nb
viii m12_13p_450_100.nb
ix m12_13p_550_100.nb
x m12_13p_2C_100.nb

Atmospheric concentration constraint severity sensitivity analysis

[Illustrations: constraint severity sensitivity results presented in four columns (Base Case 2°C @ 100 years, 350 ppm Sensitivity, 450 ppm Sensitivity, 550 ppm Sensitivity) across the full set of output charts]
6.1.4 Technology cost sensitivity
In Chapter 3 Political Economy of the Anglo-American world view of climate
change it was noted that many developing countries including China and India
fear that ameliorating emissions will seriously retard economic growth. One
concern is that intellectual property royalties for green technologies will lead
to major transfer payments from developing economies to industrialised
economies. Poor and developing countries know that intellectual property
matters are difficult to resolve, as they have found in the ongoing imbroglio
over the supply of anti-retroviral (HIV) drugs.

Intellectual property concerns aside, there are situations where the abatement
and amelioration task retards economic growth. This is particularly the case
for developing countries, due to rapidly rising standards of living and, in many cases, rapid population growth.

Results and analysis

The sensitivity scenarios are set out below:

Base Case 2°C rise (xi): Nordhaus DICE backstop technology cost.
2x technology cost (xii): twice the Nordhaus DICE cost.
10x technology cost (xiii): 10 times the Nordhaus DICE cost.
20x technology cost (xiv): 20 times the Nordhaus DICE cost.

In each case it is found that increased technology costs lead to only a small decrease in the value of the objective function, of the order of US$18 billion (2004 dollars) (cf. the Base Case analysis for the method of calculation).

As might be expected, emissions and temperature rise are relatively unaffected by technology cost. The main effect of increasing technology cost is to depress the emissions control rate. However, this proves to be inelastic and the changes are only moderate given the large magnitude of increase in amelioration and abatement prices.

[Illustrations: manufacturing emissions control rate (left) and manufacturing emissions control technology price (right) for the Base Case and for 2x, 10x and 20x increases in technology cost]

Economic investment and capital are sensitive to backstop technology cost:

[Illustrations: aggregate investment (left) and aggregate accumulated capital (right) for the Base Case and for 2x, 10x and 20x increases in technology cost]

As may be seen in the above illustrations, global investment fractures, falling 25% from US$800 billion at decade 10 in the Base Case to US$600 billion for the 10x technology cost case. The fall is 50% for the 20x technology cost case. There is a similar effect on accumulated capital, which falls from US$1,500 trillion at decade 10 to US$1,100 trillion for 10x cost and US$800 trillion for 20x cost.

The following section in this Chapter provides the disaggregated results. It may be noted that the main effects continue to be in investment and capital rather than industry activity and trade. The sensitivity of economic performance to backstop technology cost suggests that there will be different stresses on different economies, arising solely as a function of differential technology propagation.

Technology risk

The scale of the emissions reduction task for developing economies, together with their usual lack of primary access to technology intellectual property rights, suggests that developing countries face the greatest technology risk. Developing countries have seen this sort of risk before, for example in HIV medication.

It is also apparent from these projections that industrialised countries need developing countries to participate in ambitious targets for amelioration and abatement. For example, the requirement for ROW participation is considerably in excess of the challenge for NAFTA. This leads to a double risk for developed nations. The first risk is to their own performance. The second is a derivative risk in the performance of developing nations, to whose failure they are exposed.

The world view of China and India was discussed in Chapter 3 Political Economy of the Anglo-American world view of climate change. This suggests that industrialised nations will need to resolve developing nations' uncertainty about technology availability, and their concern about being exploited by technology providers, before the latter are ready to engage in a common goal.

xi m12_13p_2C_100.nb
xii m12_13p_2C_100_tcx2.nb
xiii m12_13p_2C_100_tcx10.nb
xiv m12_13p_2C_100_tcx20.nb

Abatement technology cost sensitivity analysis

[Illustrations: technology cost sensitivity results presented in four columns (Base Case 2°C rise, 2x cost of technology, 10x cost of technology, 20x cost of technology) across the full set of output charts]
6.1.5 Effect of climate damage on Sales to Assets
ratios
Sceptre has been run with the climate damage function impairing both
industrial output and Sales to Asset ratios. This provides a new perspective on
climate-economic analysis.

It is reasonable to expect that additional assets are required in each industry as economic-climate damage occurs. This is in addition to the effect of the damage function on production. The first role of these assets is to “proof” the industry against higher levels of climate stress; the second is to deal with the extra risk or volatility associated with climate. More assets for the same amount of sales means that the Sales to Assets ratio decreases. A decrease in the Sales/Assets ratio will divert more resources to accumulated capital.

A Sales to Assets ratio for the single year of 2004, which is the base year of the GTAP data, is calculated by dividing the V matrix by the opening assets for the 2004 year. The Sales for a decade is calculated from the single year figure by applying a multiplier comprising the sum of the population index.
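Written out under assumed symbols ($V$ the Make matrix, $A_{2004}$ opening assets, $p_{t}$ the population index in year $t$ of a decade; the thesis's own notation is given in Chapter 5):

$$\left(\frac{S}{A}\right)_{2004} = \frac{V}{A_{2004}}, \qquad S_{\text{decade}} = S_{2004}\sum_{t=1}^{10} p_{t}$$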

Impaired Sales to Assets ratios are shown in the following illustrations for each
commodity of food, manufacturing and services.

[Illustrations: Sales to Asset Ratio impairment xv]

Results and analysis

Impairing the Sales/Assets ratio reduces the value of the objective function by about US$180 billion (2004 dollars) (cf. the previous Base Case analysis for the method of calculation). As might be expected, the small changes to the Sales/Assets ratio result in only minor changes to the disaggregated results (see the next section of this Chapter).

It is notable that the emissions control rate for consumption increases, while
that for food and manufacturing decreases slightly. The control requirement for
services remains at the maximum. Although the increased responsibility of
consumers to ameliorate and abate is only an indicative trend, it demonstrates
that as industry needs increased assets to produce the same output then
consumers start to bear a greater burden to directly control their emissions.

[Illustrations: Base Case vs Impaired Sales/Asset ratio]

EU and NAFTA decrease investment in manufacturing, while ROW decreases investment in food production.

[Illustrations: disaggregated investment by region, Base Case vs Impaired Sales/Asset ratio]

Other sector activities vary in small ways that would be meaningful to investigate for particular regional performance. Fully disaggregated results are provided in the next section of this Chapter.

xv m12_13p_2C_100_s2a.nb

Sensitivity with impaired Sales/Asset ratios

[Illustrations: impaired Sales/Asset ratio sensitivity results presented in two columns (Base Case 2°C @ 100 years, Impaired Sales/Asset Ratio) across the full set of output charts]
6.1.6 Effect of carbon commodity trading
In all previous sensitivities, unrestricted international trading in carbon
commodities is assumed. This sensitivity case removes the international
arbitrage of emission permits and amelioration and abatement services. It is
included for the case where international trade in these commodities is limited
or absent.

Limited trading of emission permits and amelioration and abatement services is a real scenario. For example, Australia's proposed policy is that no more than 5% of emissions permits may be imported, so it would be useful to model Australia's performance with limited trading of permits.

Results and analysis

The overall net benefit of international trade in carbon commodities does not
appear to be very large. Indeed, there is a negligible US$2 billion (2004
dollars) gain in the objective function if trade is prevented (cf. Base Case
analysis for method of calculation).

The geophysics of the environment is adequately managed, so emissions and temperature rise are unchanged from the Base Case. However, the effects of zero trade in carbon commodities are insightful. The control of emissions from food and manufacturing declines for all regions; services are unaffected; and the demand for consumers to control emissions increases.

[Illustrations: emissions control rates, Base Case vs no carbon commodity trading]

However, global capital increases significantly. At decade 10, Base Case global capital accumulation of US$1,500 trillion (2004 dollars) rises to US$2,000 trillion in the case of no international carbon commodity trading.

[Illustrations: capital accumulation, Base Case vs no carbon commodity trading]

This is reflected in increased accumulated capital profiles at a disaggregated level, as shown in the following illustrations.

[Illustrations: disaggregated capital accumulation, Base Case vs no carbon commodity trading]

NAFTA and ROW lift their food production significantly and the EU25 resource
expansive food production is less pervasive. From this it may be noted that
EU25 expansive food production is actually a function of trading carbon
commodities.

[Figure: food industry activity level, Base Case vs No carbon commodity trading]

Perhaps the most important effect of all is that zero international carbon
commodity trading means that the amelioration and abatement task of ROW
rises to almost 60 GtC per decade, which is nearly three times that of NAFTA
and five times that of EU25.

[Figure: amelioration and abatement, Base Case vs No carbon commodity trading]

This analysis demonstrates the importance of regional aggregations at the
country level. It provides insight into why the presence of emissions
management in each country returns the focus of economic policy to regions.

Sensitivity for no international trading in carbon commodities

[Figures: fully disaggregated results, Base Case 2°C rise vs No international trading of carbon commodities]
6.2 Conclusion
Based on the results of the political economy analysis, a new benchmarking
type of CGE model has been developed and used to investigate a climate-
economic Base Case and discriminate five categories of sensitivities as shown
in the following table.

Climate Policy Sensitivity Analysis

Point of View    Climate Constraint   Backstop           Sales to Asset     No Bilateral Carbon
                 Severity             Technology Cost    Ratio Impairment   Commodity Trading
Sceptic          350 ppm              Base Case          Base Case          Base Case
Laissez faire    Base Case            2x                 Impaired S/A       No Trading
Base Case        450 ppm              10x
Radical          550 ppm              20x

The Base Case shows that the IPCC, European Union, Major Economies Forum
and G20 policy of limiting temperature rise from pre-industrial times to a
maximum of 2°C is feasible.

The Point of View sensitivity demonstrates that the Base Case costs little more
than the Sceptic and Laissez-faire scenarios, so controlling emissions for the
safety of the globe does not incur a prohibitive cost. Indeed, the Radical
ultra-risk-averse policy option of controlling emissions to 350 ppm has a
relatively small net present value premium of US$3.8 trillion over the Base
Case. On this basis governments may be advised to reconsider the Radical
perspective of strongly limiting emissions through a mix of quantitative
regulation, taxes and property rights.

The climate constraint sensitivity shows that the three sensitivity scenarios of
350, 450 and 550 ppm do not have a significant cost over the Base Case. The
450 ppm case and the Base Case are similar, as the IPCC found, although the
earlier control of emissions in the 450 ppm case results in a lesser temperature
rise of 1.7°C for 450 ppm at decade 10 compared to 2°C for the Base Case.

Increasing the cost of backstop technology ultimately leads to a fracturing of
economic performance. While this commences at 20 times current estimates of
the backstop technology cost, it is important to note that current cost and
availability estimates remain highly speculative. In addition, as has been the
case with HIV pharmaceuticals, there may be a disproportionately large risk
for countries that do not hold intellectual property rights. The political
economy analysis showed that this has led to a situation of considerable
anxiety for China, India and other newly developed and developing countries.
It has been a key reason that these countries have declined to engage in
binding emissions reduction targets. In order to minimise the significant
technology risk shown by this sensitivity analysis, governments would be
advised to implement strong quantitative limits in concert with robust market
price signals. These measures will minimise the market risk from technology
development business plans and catalyse immediate technology development.
It is unlikely that continuing the current policy of research subsidies for
far-off technologies like carbon capture and storage can adequately address the
technology cost and availability risk.

This model is the first of its type known to use Sales/Assets ratios (instead of
resource limits) to mediate capital accumulation in the underlying economic
model and price resources. The impairment of Sales to Asset ratios has a
subtle influence on the Base Case. As industry struggles with needing more
assets for the same output, consumers are also exposed to a greater burden for
directly ameliorating or abating their emissions.
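
As a minimal sketch of this stock-flow mechanism (the symbols below are
illustrative rather than taken from the Sceptre source), a Sales/Assets ratio
\sigma_t bounds the sales flow S_t that an asset stock K_t can support,
investment I_t links the stocks across periods, and impairment scales the
ratio down:

\[ S_t \le \sigma_t K_t, \qquad K_{t+1} = (1-\delta)\,K_t + I_t, \qquad \sigma_t^{\mathrm{impaired}} = (1-\iota)\,\sigma_t \]

With an impairment factor \iota > 0, the same sales S_t require a larger asset
stock K_t, which is precisely the sense in which industry needs more assets for
the same output.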

The sensitivity of prohibiting international carbon commodity trading
demonstrates that countries become more self-sufficient in their own
commodity production. In the past, India has shown how broad-based
resilience is derived from an open but self-sufficient economy. Conversely,
France is seeking exceptional competitiveness in exports as other countries
remain entrenched with dirty electricity generation and resist the green
revolution. In 2009, 90% of France's electricity generation was from carbon-
free sources such as hydro and nuclear. It implemented a carbon tax to give
certainty to French industry and spur technological development.

The combination of Base Case and sensitivity analyses, using the new spatial
benchmarking CGE tool and informed by a deep investigation of political
economy, provides a range of policy insights at the global, regional and
commodity level. It demonstrates that this tool is appropriate for climate-
economic policy research.

6.3 Chapter References



Anuradha, R.V., 2009. Legalities of climate change: A recent climate change
declaration poses significant challenges--and opportunities--for India.
livemint.com & The Wall Street Journal. Available at:
http://www.livemint.com/2009/08/04221721/Legalities-of-climate-change.html
[Accessed August 13, 2009].

Charles, D., 2009. ENERGY RESEARCH: Stimulus Gives DOE Billions for
Carbon-Capture Projects. Science, 323(5918), 1158.

Haidt, J. & Graham, J., 2007. When morality opposes justice: Conservatives
have moral intuitions that liberals may not recognize. Social Justice
Research, 20(1), 98-116.

Kohlberg, L., 1969. Stage and sequence: The cognitive-developmental
approach to socialization. Handbook of socialization theory and
research, 347-480.

Krugman, P., 2009. How Did Economists Get It So Wrong? The New York
Times. Available at:
http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?
_r=2&th&emc=th [Accessed September 7, 2009].

ten Raa, T., 2005. The Economics of Input-Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.

Romer, P.M., 1994. New goods, old theory, and the welfare costs of trade
restrictions. Journal of Development Economics, 43, 5-38.

Saunders, A., 2009. Governance and the Yuck Factor. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2631260.htm#transcript
[Accessed August 13, 2009].

Chapter 7 Conclusion and Suggestions for Further Research

7.1 Conclusion
Chapter 1 Introduction discussed policy issues in climate change and the way
that the evidence-based policy methodology may help address those issues. It
identified that policy makers look to evidence-based techniques to confirm
policy and instrument feasibility and to understand the sensitivity of proposed
policy solutions. Systematic, non-systematic and communication risks of
modelling in the policy research process were investigated. The tools of policy
research were addressed and it was concluded that for issues with national or
global significance policy makers look to computable general equilibrium
(CGE) modelling in policy research.

The inadequacies of CGE tools were discussed in general and with reference to
developing effective climate change policies. Four research gaps were
identified: the difficulty of solving comprehensive general equilibrium with
spatial aggregation; the choice of production function; intertemporal
consistency; and communication of results. It was proposed that national
accounting Use and Make tables could resolve the first shortcoming,
benchmarking techniques the second, linking flows to stocks through
Sales/Assets ratios the third, and modern data visualisation could address the
last gap.

This analysis led to the research aim of this dissertation, which is to answer
the question “What changes in regional and industry performance are implied
by a change in the Anglo-American world view from unconstrained to climate-
constrained resource usage?” The means of achieving this was to develop a
new lens through which to understand the spatial and intertemporal effects of
climate policy on regions and commodities linked through trade.

A research methodology was proposed that addressed two important issues of
evidence-based policy. Firstly, that the political economy of the research
question was fully researched. Secondly, that the underlying principles of the
new CGE tool, or lens, would have provenance in the political economy of the
world view being addressed and, in particular, with regard to the specific
policy area being investigated, in this case climate-economic policy.

Chapter 1 Introduction also set out the scope of the research and showed that
it was a subject of wide interest to national and international governmental
and non-governmental organisations and addressed a number of Australian
National Research Priority Areas.

Chapter 2 Political Economy of the Anglo-American economic world view found
that the Anglo-American world view is premised on the drive to protect freedom
and unilateralism. It was concluded that American foreign policy remains in
tension, unlike the United Kingdom, which has judged that its long term
welfare is inextricably linked to multilateral cooperation across the global
commons of trade, nuclear non-proliferation, security and the environment.

It was also found that a number of unexpected failures in the Anglo-American
world view, such as “agency conflict,” “moral hazard” and the Global Financial
Crisis of 2008-9, have exacerbated America's declining domestic and
international competitiveness. The causes of these challenges were traced to
the dominant Anglo-American world view, which finds expression in classical
economics and its neoclassical sibling. Importantly for the development of the
neoclassical model in this doctoral research, it was concluded that the classical
and neoclassical paradigms may need to adapt but they are not invalidated by
internal conflicts and occasional spectacular failures.

The main adjustment to be made is by policy makers who seek ideological
assurances from such concepts and models. The unexpected failures have
brought the realisation that policy makers need to reconnect to the
understanding that beautiful neoclassical solutions based on elegant fictions
(such as completely deregulated markets, “enlightened self-interest” and
trickle-down economics) are merely points of view. The reality is that the
greater interconnectedness of the world and increased monitoring of
government decisions have led to both policy making and regulation becoming
even more complex, messy and visible processes.

Chapter 2 Political Economy of the Anglo-American economic world view also
found that President Obama has recognised that America's competitiveness
and financial position require immediate action and its future is linked to
multilateral cooperation. It was concluded that America may be on the cusp of
accepting its new reality of resource constrained growth but is not yet out of
the “storming” phase. Plans to reform America may be thwarted by the
political conservative psyche, which continues to be driven by dreams of
exceptionalism and is ideologically committed to unfettered American
unilateralism. The direction America ultimately takes will determine both its
future and that of the Anglo-American cohort.

Chapter 3 Political Economy of the Anglo-American world view of climate
change investigates climate science and policy and finds overwhelming
scientific, United Nations and national government support for measures to
limit global atmospheric temperature rise to 2°C (3.6°F) above the pre-
industrial level. However, climate change sceptics remain influential and the
tension between industrialised and developing nations is palpable. In Hamlet
Act 2, Scene 2, Shakespeare's hero soliloquises “the play's the thing wherein
I'll catch the conscience of the king.” It is a play within a play, which is very
like the intriguing drama of climate change policy formation unfolding before
all the world. Although America began to engage with climate change policy in
June 2009, the unstable American economic world view and the previous U.S.
Senate Byrd-Hagel resolution continue to render America's commitment to the
2°C objective tantalisingly close but still beyond reach.

Chapter 3 takes forward the investigation of the neoclassical paradigm begun
in Chapter 2 Political economy of the Anglo-American economic world view to
establish the policy dimensions on which this doctoral research in CGE policy
research has been framed. It has established a policy Base Case of a 2°C rise,
consistent with geophysical modelling of a 750 Gt CO2 carbon tranche.

Chapter 3 also investigates the three main policy instruments for reducing CO2
emissions, namely quantitative limits, taxation and property rights. It finds that
while all have the same theoretical outcome, in practice each has strengths
and weaknesses. It is concluded that with adequate regulatory protections
against market abuse and market failure, the introduction of property rights is
a feasible and attractive way of pricing pollution and mobilising capital. From
this analysis it is concluded that carbon commodity trading is an appropriate
means of including amelioration and abatement measures in the policy
research model developed in this dissertation.

A policy research CGE tool is not merely a set of equations or optimisations. It
is a compound technical solution in which the nature of the model, the
computing environment, and the nature and structure of the source data are
all matched to achieve the research aim. Chapter 4 Economic models for
climate change policy analysis determined that a new Service Sciences
benchmarking type of neoclassical, intertemporal, multiregional and multi-
commodity CGE model, using GTAP Input Output data and expressed in
Mathematica, would provide the most appropriate expression for the
requirements established in Chapters 1, 2 and 3.

The blueprint for a new CGE model is described in Chapter 5 A new spatial,
intertemporal CGE policy research tool. The model is called “Sceptre,” which is
an acronym for Spatial Climate Economic Policy Tool for Regional Equilibria. It
unites CGE modelling with Input Output modelling by generating resource
pricing through an optimisation dual solution. This is made possible through
recent innovations in nonlinear interior-point techniques. The model employs
Thijs ten Raa's approach to using the Make and Use tables of national accounts
for benchmarking economies using Input Output data. In order to place this in
an intertemporal context, a new approach is introduced to link stocks and
flows through Sales/Assets ratios. This both creates a strong underlying
intertemporal economic framework for the constraints and allows resource
pricing to be generated through these dynamic resource constraints, rather
than through static or exogenous commodity resource limits. New commodities
are introduced for international carbon trading of permits and amelioration
and abatement services. Geophysical feedback is implemented using William
Nordhaus' technology functions and proven climate-economic equations. The
model was validated using the results of recent geophysical modelling and by
comparison with the William Nordhaus DICE model.
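
The following Wolfram Language fragment is a minimal sketch of the
benchmarking idea rather than the Sceptre source: it maximises a uniform
expansion factor c of final demand subject to a Make/Use commodity balance
and a labour endowment, with all numbers hypothetical.

    (* Hypothetical two-industry, two-commodity benchmark in the style of
       ten Raa. s1, s2 are industry activity levels; c scales final demand. *)
    make = {{100, 0}, {0, 80}};   (* Make table V: industry x commodity *)
    use  = {{30, 10}, {20, 25}};  (* Use table U: commodity x industry *)
    f    = {40, 30};              (* final demand by commodity *)
    lab  = {5, 4};                (* labour used per unit of activity *)
    net  = Transpose[make].{s1, s2} - use.{s1, s2};  (* net output *)
    Maximize[{c,
      Join[Thread[net >= c f],
       {lab.{s1, s2} <= 9, s1 >= 0, s2 >= 0, c >= 0}]}, {s1, s2, c}]

The shadow prices of the commodity balance and labour constraints, read from
the dual of this linear program, are what play the role of endogenous resource
prices in the full model.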

Sceptre's suitability for policy research was investigated in Chapter 6
Assessment of changes in regional and industry performance under resource
constrained growth. A Base Case of limiting atmospheric temperature rise to a
2°C maximum was formulated from the political economy analysis of Chapters 2
& 3. The regional and commodity effects are investigated in detail. In terms of
policy makers' expectations for CGE modelling in confirming viability, the Base
Case policy is found to be “feasible.” A notable outcome is the degree to which
the European Union becomes resource expansive under the Base Case policy
constraint. This commodity specialisation is an example of the neoclassical
model applying knife-edge pricing, which may be unachievable if realistic land
use or regional self-sufficiency political constraints were included.

Additional risk appraisal was undertaken through sensitivity cases. Point of
view analysis found that the Base Case costs little more than the Sceptic and
Laissez-faire scenarios, which might be expected since environmental taxes are
prima facie revenue neutral. Even the Radical ultra-risk-averse policy option of
immediately eliminating emissions has a relatively small net present value
premium of US$3.8 trillion over the Base Case. Consistent with this, climate
constraint severity sensitivities of 350, 450 and 550 ppm impose little cost over
the Base Case and would be selected for policy reasons based on political
rather than economic objectives.

Technology cost sensitivity scenarios demonstrated that the anxiety of China,
India and other newly developed and developing countries about a mismatched
risk between targets and technology availability is not misplaced. Third world
experience with HIV pharmaceuticals has demonstrated the disproportionately
large risk for countries that do not hold intellectual property rights.

In addition to impairing economic value added for climate damage, Sales/Asset
ratios may also be impaired. This means that industry requires more assets for
the same output. It was found that this effect, although subtle, also increased
the requirements for consumer emissions control.

In a sensitivity of economic growth without international trade in carbon
commodities it was found that climate-constrained resource expansiveness, for
example of European Union food production, is significantly reduced. This
increased the requirement for regional self-sufficiency. The implication of
resource expansiveness emerging with climate constraints is a real issue that
may pose significant challenges for the industry policy of countries and regions
that are struggling to maintain self-sufficiency or an independent industrial
base. For example, the political economy analysis showed that countries such
as France are well positioned and keen to capitalise on the resource expansive
growth. France already derives 90% of its electricity from zero emission
sources and is hurrying to make the transition to a fully green economy with
measures such as a carbon tax and mandating plug-in hybrid cars. It has
recognised that the new climate constraints will provide a magnificent one-
time opportunity to use its resource expansive competitiveness to seize global
market share.

This dissertation has addressed the research aim of answering the question
“What changes in regional and industry performance are implied by a change
in the Anglo-American world view from unconstrained to climate-constrained
resource usage?” This has been achieved through developing a new CGE policy
tool, or lens, through which to undertake policy research in both sustainability
and international symbiosis for managing the commons across trade, security
and the environment.

7.2 Suggestions for further research


Some suggestions for future climate policy research using the Sceptre model
include:

Investigating alternative social policy scenarios

Globalisation

The CGE policy research model developed in this dissertation is a unique
spatial policy tool for investigating globalisation risks and the sensitivity of
economies, societies and political structures to rapid change. It is possible to
investigate policies with different aggregations of countries: for example,
those subject to sea rise, desertification, crop changes, net food importers,
mobility of dislocated peoples, new global trading blocs, different ethno-
cultural groups, and perhaps different classifications of moral philosophy such
as conservative and liberal.

The model developed in this dissertation may also be used to understand the
effect of emerging, binding constraints of scarcity as they replace relative
abundance: for example, the transition away from dependence on oil. Other
fruitful areas of research may be new security zones, autarkies established to
guard primary resources such as food and water, and new multipolar
superpower equilibria. Perhaps these new equilibria may be based on
enlightened democracies or on game theory's mutually assured destruction
framework.

Other dimensions of geopolitical research may include a reorientation of
emphasis from globalisation toward internal self-reliance, resilience and
sustainability of economies. This could include China relaxing its one-child
policy, Russia's expanding link with Germany or joining the European Union, or
modelling of potential North-South economic alliances such as America uniting
with South America, Russia formalising its long-standing relationship with
India or perhaps the surprising scenario of a Sino-Australian trade bloc.

Associated with these geopolitical scenarios is policy research into the future
of overpopulated Middle Eastern regions that may become “lost in the middle”
once their oil revenues decline. Nations in this position include Egypt, Syria,
Iraq, Iran and Saudi Arabia.

Industry policy

Investigate local industry policies in the presence of climate constrained
specialisation and competitive advantage. Different industry aggregations
could include various forms of transport, electricity generation, automobile
manufacture, water resources and military security.

Various forms of utility function

Various social policies may be tested using different forms of utility functions:
for example, the recently proposed Net National Product (NNP), which is GDP
less the depletion of natural and human capital (Stiglitz et al. 2009). In
addition, the interface of production specialisation and consumer employment
could be investigated. This would be of the greatest interest for those
countries seeking self-sufficiency in various commodities.

Multiple objective and minimax programming

The exploration of alternative objective functions may be extended to multiple
objective optimisation in order to evaluate different social objectives. Multiple
objective optimisation involves minimising several objective functions under
the expectation that no unique optimal solution will exist, because a solution
that optimises one function often will not optimise the others at the same
time. Multiple objective programming would allow optimisation for a range of
objectives, such as the United Nations Millennium Development Goals relating
to poverty, hunger, education, health etc.

Von Neumann and Rawls' minimax problem is a particular form of multiple
objective optimisation in which the objective is to maximise the minimum (or,
equivalently, minimise the maximum) of several functions. For example,
performance of the elementary economic model in Chapter 5 “A new spatial,
intertemporal CGE policy research tool” demonstrated that one effect of a
minimax objective function was to bring stability to economic performance.
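
As a toy illustration of the maximin reformulation, with purely hypothetical
payoffs: introducing an auxiliary variable t turns “maximise the minimum of
several objectives” into a standard single-objective program that an interior
point solver can handle.

    (* Allocate a budget of 10 between two regions so as to maximise the
       worst-off region's payoff; the Log payoffs are illustrative only. *)
    NMaximize[{t,
      t <= Log[1 + x] && t <= Log[1 + (10 - x)] && 0 <= x <= 10},
     {x, t}]
    (* the symmetric split x = 5 maximises the minimum payoff *)

The same epigraph device extends to any number of regional objectives, which
is how a Rawlsian welfare criterion could be introduced into the existing
optimisation.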

Supplementing data for additional functionality

Include non-CO2 greenhouse gases

GTAP expects to release data for non-CO2 greenhouse gas emissions. This data
will improve the modelling of climate feedbacks with greater detail for these
non-CO2 emissions.

Include Land Use data

GTAP's land use database and Mathematica's Country database provide the
opportunity to investigate other factors: for example, the nexus between
economic performance and commodities or factors such as water, fuels,
minerals, arable land, crop yield, forests, erosion and changes in biodiversity.

Refine economic damage functions

A generic climate-economic damage function has been applied to economic
output in the Make Use format by adjusting the Use table. Additional
understanding of the effect of climate damage functions on each of the Use
and Make matrices separately would be highly insightful. Engineering,
industrial ecology and physical science analysis in the next IPCC Assessment
Report (AR5), due in 2014, can be expected to provide major advances in
realism. In addition to better understanding the effect of damage on Make and
Use tables, specific country and industry risk analysis could be undertaken to
develop localised climate damage functions.
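
For reference, the generic form in question follows the quadratic damage
factor of Nordhaus' DICE-type models; this is a sketch of that standard form
rather than the exact Sceptre calibration. Gross output Y is scaled by a factor
that declines with the temperature rise T, with \psi_1 and \psi_2 as
calibration coefficients:

\[ \Omega(T) = \frac{1}{1 + \psi_1 T + \psi_2 T^2}, \qquad Q = \Omega(T)\, Y \]

Localisation along the lines suggested above would amount to estimating
country- and industry-specific values of \psi_1 and \psi_2 and applying the
factor to the relevant rows or columns of the Use and Make tables.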

Refine technology costs

The amelioration and abatement cost used in this policy research is a function
of the proportion of emissions ameliorated or abated. At present, Nordhaus'
technology cost profile remains speculative. Technology costs will become
better known with the commercialisation of geoengineering, geosequestration,
wind, solar, hydrogen and nuclear projects. Engineering cost functions may be
embedded in the abatement cost function.
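
In DICE-type models this cost profile takes a power-law form; the following is
a sketch of that standard shape rather than the Sceptre calibration. The cost
of controlling a fraction \mu_t of emissions, expressed as a share of output, is

\[ \Lambda(\mu_t) = \theta_{1,t}\, \mu_t^{\theta_2}, \qquad 0 \le \mu_t \le 1, \]

where \theta_{1,t} declines over time with the backstop technology cost and
the exponent \theta_2 (around 2.8 in DICE-2007) makes deep abatement
disproportionately expensive. Embedding engineering cost functions would
replace this power law with curves fitted to project-level data.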

Empirical studies of Sales to Assets ratios

Improved data on historical Sales to Assets ratios, appraisal of the new risks
and volatility that climate damage brings to industrial production, and
estimates of future Sales to Assets ratios would materially improve the
reliability of the model for government policy makers and industry strategists.

Expand the use of data in physical units

Input Output tables have the advantage of being clear and consistent. The
material balance of commodities based on Input Output table monetary data is
common to traditional CGE and benchmarking models. However, the
relationship with physical material flows or ecological flows is more tenuous.
Commodities are assumed to be homogeneous but are only artificial categories,
and many assumptions are made in mapping resources to commodities. The
availability of integrated data through the EXIOPOL project will allow realism
to be improved by substituting key rows and columns with data in physical
units such as tonnes of a commodity.

Improve the quality of existing data

Industrial greenhouse gas emissions are already in physical units. However,
this emissions data is derived from International Energy Agency estimates. As
greenhouse gas emissions begin to be measured more accurately around the
globe, actual data may be substituted in lieu of the IEA's estimated data to
improve realism.

Furthermore, the characteristics of regional labour endowments might be
empirically investigated in order to better understand labour constrained
growth.

Improving analysis techniques

Improve treatment of trade taxes & freight

The use of net exports has many advantages, but alterations in trade flows
lead to mismatches with taxes and international freight. Further research into
modelling trade taxes in Sceptre would enhance the trade realism of the
model.

Introduce more complex forms of production function

Sceptre's carbon commodities are computed with detailed technology
functions. The economic commodities of food, manufacturing and services are
optimised through a Leontief-type Make Use tableau as the production
function. It would be insightful to augment the Leontief tableau with functions
having an engineering or ecological foundation for fine grained analysis of
specific commodities at country or local levels. As Occam's Razor militates
against additional complexity and assumptions, the alternative generic
approaches of Constant Elasticity of Substitution (CES) and Transcendental
Logarithmic (Translog) functions may not be so worthwhile.
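
For comparison, the two-factor CES family mentioned here nests the Leontief
form as a limiting case, which is one reason its extra flexibility may not repay
the additional parameters. This is the standard textbook form rather than
Sceptre notation:

\[ Y = A\left(\alpha K^{\rho} + (1-\alpha)\,L^{\rho}\right)^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho} \]

where \sigma is the elasticity of substitution. As \rho \to -\infty (so
\sigma \to 0), the CES function converges to the fixed-proportions Leontief
form Y = A\,\min(K, L), which is the behaviour already embodied in the Make
Use tableau.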

Endogenous technological change

Implement the propagation of technological change through Use and Make
matrices, extending the work of Wilting et al. (2004; 2008) and Pan (2006; Pan
& Kohler 2007) in technology diffusion.

Develop acyclic topological processing for nonlinear constraints

An acyclic processor for constraints would significantly enhance interior point
optimisation and materially expand the scope of Sceptre.

7.3 Chapter References
Pan, H., 2006. Dynamic and endogenous change of input-output structure with
specific layers of technology. Structural Change and Economic
Dynamics, 17(2), 200-223.

Pan, H. & Kohler, J., 2007. Technological change in energy systems: Learning
curves, logistic curves and input-output coefficients. Ecological
Economics, 63(4), 749-758.

Stiglitz, J., Sen, A. & Fitoussi, J., 2009. Report by the Commission on the
Measurement of Economic Performance and Social Progress, Paris:
French Commission on the Measurement of Economic Performance and
Social Progress. Available at: http://www.stiglitz-sen-
fitoussi.fr/en/index.htm [Accessed September 22, 2009].

Wilting, H.C., Faber, A. & Idenburg, A.M., 2004. Exploring technology
scenarios with an input-output model. In Brussels, Belgium, p. 19.
Available at:
www.ecomod.net/conferences/iioa2004/iioa2004_papers/wilting.pdf
[Accessed April 9, 2008].

Wilting, H.C., Faber, A. & Idenburg, A.M., 2008. Investigating new
technologies in a scenario context: description and application of an
input-output method. Journal of Cleaner Production, 16(1, Supplement
1), 102-112.

Appendix 1 Climate change engagement in Australia

A1.1 Submission to Garnaut Review


Stuart J. Nettleton
FCPA, MBA, MEngSci, BEng(Hons), GradDipAICD
Faculty of Engineering
University of Technology, Sydney
7 Broadway, Ultimo 2007

18 April, 2008
Submission to ETS Discussion Paper
Garnaut Climate Change Review Secretariat
Level 2, 1 Treasury Place
East Melbourne, Victoria 3002
By email: [email protected]

Dear Professor Garnaut,

I am a senior lecturer and climate change researcher in the Faculty of
Engineering at the University of Technology, Sydney.

The key points of my submission are:

1. Until the USA commits to an ETS, it may be too early for Australia to do
so.
2. Australia can immediately commence reducing emissions through price
mechanisms by implementing a moderate carbon tax applied on a
carbon-added basis.

Let me more fully explain the rationale for these points.

Until the USA commits to an ETS, it may be too early for Australia to do
so.

The European Union implemented its ETS as a differential or relative model in
order to avoid an absolute carbon price or tax. A major reason for this was that
the concept of an environmental tax had been determined unconstitutional in
France. Therefore, the UN and EU sought a self-regulating means to reduce
emissions by using market forces and the profit motive. The differential
scheme introduced by the EU provides for the emissions of firms to be
assessed on a case-by-case basis, quotas determined and carbon permits
granted free for these quotas. Approximately 10,000 steel factories, power
plants, oil refineries, paper mills, and glass and cement installations were
involved, representing approximately half of the EU's emissions. Initially,
aluminium producers, the chemicals industry and the transport sector were not
included. Through the ETS, firms can sell surplus emissions permits, or
conversely buy permits to offset excess emissions above quota.

The ETS proposed in the Discussion Paper is an absolute scheme quite
different to the EU model in at least two respects. Firstly, in being an absolute
scheme, each firm is required to purchase sufficient permits through the ETS
to acquit all emissions produced by that firm. There is no valuable surplus of
permits arising as the bounty of innovation, as in the EU model, that may be
traded for profit. Secondly, the Discussion Paper ETS causes the market to put
a price on carbon, presumably through an ASX-type auction and equilibrium
price or through authorised dealers tendering for permits from the proposed
Carbon Bank and selling these through the ETS in smaller denominations to
the firms that need to purchase permits.

Given the difference in the EU and Australian schemes, linking them together
as raised on page 69 of the Discussion Paper could prima facie expose the
Australian economy and tax base to great risks. While the EU proposes to
auction permits in the future, due to the failure of enlightened self-interest
amongst generators (leading to high electricity prices for customers because
the benefits of the free permits were not passed through to customers), it is by
no means certain that an auctioning of permits will be constitutionally possible
(Deroubaix & Leveque 2006).

The fabric of an Australian ETS would have a very large cost base, including
the Carbon Bank (acting as a Reserve Bank in permits), operators like the ASX,
regulators like ASIC, primary dealers, distribution brokers, etc. This fabric
would mean the ETS commences its existence from a position of considerable
negative value to the Australian economy.

The introduction of an ETS will bring significant risk for industry and
consumers. Following considerable debate at the time of the last Federal
election, most Australian stakeholders are expecting a “price on carbon”. The
ETS model in the Discussion Paper does not provide such a price. Instead, a
price is set by the market. As for all commodity spot and futures markets, the
price will be extremely volatile as traders and speculators are driven to hoard
and liquidate by the usual emotions of greed and fear.

Certainty and stability are key issues. The ability of firms to plan ahead with
certainty will be impaired unless they commence sophisticated hedging
strategies using futures. Requiring thousands of emitters to establish financial
departments to manage hedging portfolios would impose a burden of financial
sophistication on firms that is unwarranted, and otherwise costly to have
independently managed.

Given the significant costs to efficiency of an Australian ETS and uncertainty
about whether the USA will introduce an ETS and its model of doing so, it is
submitted that it is premature to implement a particular ETS model in
Australia at this time.

Australia can commence reducing emissions using price mechanisms by
implementing a moderate carbon tax applied on a carbon-added basis

While the word “tax” is anathema to most Australians, there is a strong sense
of equity in the argument that those who pollute the commons should pay a
price for doing so. As Australian firms are accustomed to completing monthly
or quarterly BAS statements, it would take very little additional effort to
include a carbon-added tax, and relatively few companies would need to
complete this
part of the statement. This would be efficient to administer by standard
Australian Tax Office online procedures.

The concept of a carbon-adder very subtly changes the focus from emitters to
those firms that extract carbon from the earth or import carbon into Australia.
If a company, for example a coal company, sells coal it would need to pay the
Government a carbon-added tax. The generator that buys that coal does not
need an emission permit or to pay tax. It merely needs to pay the higher cost of
the coal including both GST and carbon-added tax.
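
A purely illustrative calculation of the pass-through, in the Mathematica
idiom used elsewhere in this research; all figures are assumptions for the
sketch, not proposed rates.

    (* Hypothetical carbon-added tax flowing through a coal sale. *)
    emissionFactor = 2.4;  (* t CO2 per tonne of coal when burnt, assumed *)
    taxRate        = 20;   (* A$ per t CO2, assumed *)
    coalPrice      = 100;  (* A$ per tonne at the mine gate, assumed *)
    carbonAddedTax   = emissionFactor*taxRate       (* A$48 per tonne *)
    priceToGenerator = coalPrice + carbonAddedTax   (* A$148 per tonne *)

The generator simply faces a higher fuel price; the tax administration touches
only the upstream coal seller.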

As upstream companies usually deal in large quantities of commodities, the
Australian Tax Office would find it efficient to deal with the limited number of
carbon-added firms. Contrast this with the task of dealing with the enormously
larger number of emitters in electricity, iron & steel, aluminium, transport,
agriculture etc.

One may ask whether the ability to readily pass on higher costs in the form of
higher prices to consumers will reduce incentive for generators to seek lower
carbon sources of supply. If the National Electricity Market continues to be
regulated then reductions in carbon will come from generators seeking
cheaper fuel sources and new entrants to the market with lower source costs.

As with GST, in the case where a user is able to successfully capture and store
carbon, then the user could claim back the tax on the captured carbon in the
same way GST is claimed back. Mining companies already have the necessary
expertise for storing carbon. Therefore, carbon storage is naturally a task for
the coal miners rather than the generators. As a consequence, coal companies
would both pay the carbon tax and claim the carbon tax offset. Presumably,
this sort of technically advanced coal company would have skilled financial and
accounting personnel and be aware of the risk of re-incurring the tax liability if
the sequestered carbon under its stewardship was inadvertently vented to the
atmosphere.

Export sales and imports could also operate on the same basis as the GST.
Carbon-added tax would not apply to exports as the carbon implications of
trade are for the receiving country to deal with. If that country is a signatory to
the Kyoto Protocol, then it may in its own discretion charge for embedded
emissions. Fortunately from Australia's perspective, having no carbon-tax on
exports obviates the need to differentiate between signatory and non-signatory
end destinations, which is a task inevitably complicated by transhipment. It is
also likely to be politically more palatable than the Government compensating
coal and metals exporting companies in cash or otherwise for the cost of their
direct and indirect emissions in producing the exports.

In regard to imports, all products brought into Australia would attract
carbon-added tax. With scientific and economic assistance, formulae for
taxable embedded carbon can be readily determined by the Australian Tax
Office in conjunction with Customs & Excise. Techniques such as input output
analysis are available to model the flow of carbon emissions through the
economy and therefore the embedding in various products.
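
As a minimal sketch of that input output technique, with hypothetical
coefficients: the total (direct plus indirect) emissions embodied in each
commodity follow from the Leontief inverse.

    (* Embodied emissions intensities e = d.(I - A)^-1, hypothetical numbers. *)
    a = {{0.2, 0.1}, {0.3, 0.25}};        (* technical coefficients matrix A *)
    d = {0.8, 0.3};                       (* direct t CO2 per $ of gross output *)
    e = d.Inverse[IdentityMatrix[2] - a]  (* ~ {1.21, 0.56} t CO2 per $ of
                                             final demand for each commodity *)

An import of commodity j would then be taxed on e[[j]] multiplied by its value,
with the coefficients estimated for the exporting economy.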

Government revenues from a carbon tax will be very large indeed. It is
generally argued that environmental taxes should be fiscally neutral.
Therefore, these tax revenues would, in the main, be returned into the
economy through reducing nineteenth-century Bismarckian labour taxes like
payroll tax and personal income tax. This return mechanism can produce an
additional benefit, which is indirectly alluded to in the Discussion Paper. The
first dividend of the environmental tax is reduced pollution through using the
price mechanism to switch consumer preferences. The second dividend is
economic growth, particularly through enhancing the competitiveness of
labour and increasing the buying power of consumers. Therefore, in contrast to
the heavy burden of the fabric of an ETS, a carbon-added tax may well “start in
front” due to the positive effect of this fortuitous natural double dividend.
Bento & Jacobsen (2007) demonstrate that in real world situations fiscally
neutral environmental taxes can produce a double dividend of up to 11% of the
saving in pollution.

Availability of carbon, even at escalated prices, will be far less frightening to
firms than the prospect of being denied a future supply of permits, due to both
the reduction of sales by a Carbon Bank and unpredictable hoarding by traders
and speculators. Any potential of being denied a future supply of permits to
operate would cause firms to react very negatively to an ETS.

A firm can plan for price but not for being unpredictably denied a key
requirement for production. In order to provide certainty to industry, the
Government could introduce a carbon tax with a rising profile. For example,
this may begin at a moderately low rate and increase over 8-12 years to a high
rate. Such a scenario would provide plenty of certainty to firms and give them
the time and incentive to innovate in their production processes to reduce
costs. In addition, a rising profile would provide the opportunity for the
Government to slowly learn about this new paradigm of carbon-added tax and
to change the rate as necessary.

Therefore, a carbon tax as the first stage of a market-based scheme is quite
different in scale, risk, control and cost to a big bang approach of launching a
full ETS as set out in the Discussion Paper. The former would be simple to
implement and would not commit Australians to substantial investment in the
fabric and operation of an ETS without the certainty of first knowing where
this very new concept fits into the Australian and international paradigms,
with key stakeholder perceptions in the Australian economy based on real
experience. I refer again to Deroubaix & Leveque (2006) for an investigation of
the importance of bringing stakeholders along in this quite controversial
process.

Nevertheless, a carbon tax would be quite straightforward to reformulate into
any internationally agreed ETS scheme with a different mechanism for pricing
carbon. Perhaps this could be seen as no more difficult than floating the
Australian dollar.

Whatever the nature of an international scheme, Australia would be at the
forefront through being experienced, proactive and positioned to flexibly
adjust to any new scheme. It might be noted that the air quality regulator in
the San Francisco Bay Area, with a population of 7.2 million and CO2
emissions of 85.4 million tonnes per year, is currently introducing a small
carbon tax on emissions of CO2 in order to learn about the effect of carbon
taxes and to recoup the costs of registering and controlling emissions
(Barringer 2008).

It is therefore submitted that the first stage of an Australian market-based
emissions reduction scheme would be a moderate tax applied on a carbon-
added basis. Stage 2 of the market-based scheme would be an ETS developed
as greater certainty evolves about the nature of an international model.
Further consideration of an Australian ETS could be deferred for a short period
of, say, three years.

I would be pleased to expand on any of the points above at your convenience.

Yours faithfully,

(signed)

Stuart J Nettleton

A1.2 Australian climate change policy development

Tim Flannery
In his speech at the University of Technology's 20th year celebration dinner
in May 2008, the indefatigable climate change campaigner and 2007
Australian of the Year, Professor Tim Flannery, noted that the climate change
problem is bigger and more urgent than currently being addressed (Flannery
2008). He predicted some form of emissions trading scheme proceeding in
Australia but did not hold much expectation of this reducing emissions.

Flannery's reasons were, firstly, that the standard of living of highly populous
nations such as China and India is rapidly rising with attendant energy
requirements. Developing nations are not interested in anything to do with
carbon taxes: they claim that their per-capita emissions are very low, that they
will not sacrifice economic growth over this issue, and that the problem was
created by the West, so the West should pay to fix it.

Secondly, approximately fifty percent (50%) of the world's carbon dioxide
emissions are from coal-fired power stations. There is a massive installed base
and annual increase in coal-fired generation. For example, China has been
commissioning around one new 1 gigawatt coal-fired power station per week.
Only the so-called clean coal technology of carbon capture and storage (CCS)
can reduce the carbon emissions. However, CCS is not currently available and
may never be available. The pressurised storage of carbon dioxide is also very
dangerous because the gas must be stored forever and we cannot be certain
that storage will remain secure. Even though CCS has so many disadvantages
and risks, Professor Flannery called for Australia's coal industry to be heavily
subsidised at 10 times the current investment to achieve successful CCS so it
can be urgently given to China and India.

Thirdly, nuclear-fired generation is probably a key technology that is more
certain than CCS. It is only used on a small scale at present (for example, 2%
of China's generation is nuclear); however, its share could accelerate strongly.
Intrinsically safe nuclear is becoming feasible, so there may be no more
Chernobyl-style meltdowns from new reactors, but future generations will
inherit the growing problem of safely storing nuclear waste and controlling
the proliferation of nuclear weapons.

Finally, according to Professor Flannery, the only reasonably foreseeable
method of actively reducing atmospheric carbon dioxide is wide scale pyrolytic
combustion of crop biomass. This has the potential to reduce the absolute
amount of carbon dioxide in the atmosphere by approximately 5% per annum.
Biomass such as wheat and corn stalks can be combusted in the absence of
oxygen to produce fuel oil while sequestering the carbon as charcoal. This
charcoal may be ploughed back into fields where it will be stable for thousands
of years and contribute many beneficial properties to the soils.

Election of Prime Minister Kevin Rudd


It is well known that Australia's Prime Minister Kevin Rudd, in his first act of
office, ratified the Kyoto Protocol on 3 December 2007. Kevin Rudd's 2007
election commitment was to introduce an Australian greenhouse gas reduction
target of 60% by 2050 (compared to the 2000 level).

Garnaut Climate Change Review

Garnaut Report

The Garnaut Climate Change Review was established by the State and
Territory Governments of Australia in July 2007. The Commonwealth
Department of Climate Change joined the work in January 2008.

In the interim report (Garnaut Climate Change Review 2008b), Professor Ross
Garnaut recommended that emissions and climate change should be decoupled
from world economic growth because high world growth is driving Australia
and all the world towards high cost downside risks and that this is happening
more rapidly than commonly appreciated.

The major recommendation of the interim report is that Australia should press
for the strongest possible outcomes in global mitigation. Garnaut saw this as
being in Australia's self-interest in avoiding unacceptable levels of risk of
dangerous climate change effects on Australia's fragile land, biodiversity and
dry climate.

Garnaut says Australia should pursue deeper 2008, 2020 and 2050 emissions
cuts than the Rudd Government's single target of a 60% reduction on 2000
levels by 2050: “Waiting until 2020 would be to abandon hope of achieving
climate stabilisation at moderate levels.”

Garnaut highlighted that Australia could not remain complacent about being a
low emitter relative to the USA and China. Under convergent-contraction
principles being developed by the United Nations to ensure developing
countries join the reduction program, all reduction targets will switch from the
relative basis of improving on current emissions to an absolute target of
emissions on a per capita basis. This would greatly impact Australia because it
has one of the highest per capita rates of emissions in the world.

Garnaut also noted that Australia was an emerging world leader and role
model in setting the post-Kyoto framework of global objectives, greenhouse gas
stabilisation, emissions budgets and the principles for allocation of global
emissions among countries.

Garnaut et al. (2008) expand on the need for urgency in addressing the climate
change phenomenon. The reason for Professor Garnaut's continued emphasis
on urgency is his hope to justify political expediency in accepting his somewhat
utopian concept of an all-encompassing emissions trading scheme, where all
emissions permits need to be purchased. His ultimately unfulfilled hope was
that a cloak of urgency would generate sufficient groundswell to sweep aside
all the arguments of equity that have constrained debate in the European
Union and America and resulted in inferior forms of emissions schemes.

Emissions Trading Scheme

In the Interim Report (Garnaut Climate Change Review 2008b), Professor
Garnaut proposed an emissions trading scheme (ETS). More detail of the
proposed scheme was provided in an Emissions Trading Scheme Discussion
Paper (Garnaut Climate Change Review 2008a).

The European Union has already implemented a regional ETS. Garnaut
proposes that Australia do likewise with a national scheme. A tradeable permit
would provide for a capped quantity of total emissions for a specific time. It
could be used immediately or hoarded indefinitely. The Australian
Government would progressively reduce the volume of permits available to
the ETS as Australia's global emissions budget reduces.

Garnaut noted that emissions trading schemes have an implicit assumption
that the world can tolerate a certain level of emissions. Therefore, this scarce
resource of tolerable emissions needs to be allocated in some way across
countries and across emitters within countries.

Garnaut also subscribes to a rather straightforward view of geopolitical equity.
He sees that the basis of allocating quotas for emissions needs to be equitable
to all countries, taking into account population, the need to adjust from current
emissions and past emissions, and sufficient time for adjustment etc. This is
because all major emitters, both current and future, must take on their
obligations in order for the sum of the measures to be sufficient for effective
global mitigation.

There are two levels in international ETS schemes. The first is for countries to
trade emissions permits. The second is a market that connects local ETS
markets and facilitates arbitrage and fungibility.

Garnaut (p35) makes the point that an international ETS that connects local
markets is a long way off: “Only a few countries have proposed national
targets, and fewer still have sought to ground their targets in a framework
based on global emissions budgets derived from explicit mitigation objectives
.... All developing countries reject binding targets.”

Connecting local ETS markets has many implications. Firstly, the linked
national schemes need to define a carbon unit in the same way and agree on
what constitutes a tradeable surplus. Secondly, price and volume fluctuations
in one market immediately cause price and volume changes in the other.
Therefore regulators in each market need to monitor and enforce minimum
standards.

507
For a local ETS, the first issue is the formula on which quotas are allocated.
This is a major issue given countries such as America, China, India and
Bangladesh are highly diverse in their life-style, population, degree of
industrialisation, current level of emissions, exposure to the effects of climate
change and the impact on industry and jobs of compliance with the quotas.

A second issue is the level of flexibility permitted to individual governments to
respond to the challenge of managing within the quotas. Some argue that each
government should then be free to decide how to bring down its own emissions
to meet the quota. Others seek a comprehensive prescription on the approach,
for example, specification of a common rate of carbon tax.

However, in perhaps his most controversial point, Garnaut argues against the
European Union's differential form of ETS where permits are granted free of
cost to emitters. Emitters receive the value of the scarcity of the permits,
which is not necessarily passed on to the end consumer. Indeed, in a failure of
enlightened self-interest, generator profits increased by the amount of the
windfall permits so households and people on low incomes suffered
considerable injury from the higher prices.

Garnaut's most contentious and perhaps disputed point is that the Government
would auction emissions permits and thus the market would set the price on
emissions. He envisages that the Government would apply the proceeds in the
same way the proceeds of a revenue neutral environmental tax would be used
to reduce labour or other taxes and increase public expenditure.

Garnaut (p45) says of his proposed ETS:

An ETS relies more completely on market processes, and if properly
designed, and allowed to play its role without extraneous
interventions to vary the budget or control the price, would be the
more direct instrument for securing the Australia's emissions
budget. [grammar as in original] .... It is likely that the closest
comparator would be the gold market …. The market would set the
rate at which Australia's emissions budget was utilised …. If there
were high expectations of future progress with new low emissions
technologies, the market would set a relatively low price curve,
allowing relatively high use of Australia's emissions budget in the
early years, followed by later rapid reductions in emissions. Low
expectations of emissions would generate a higher price curve, a
faster decline in emissions in the early years, and a more gradual
reduction in later years. Any new information that increased
optimism about new, lower-emissions ways of producing some
product, whether they were expected to become available
immediately or in the future, would shift downwards the whole
structure of carbon prices, spot and forward. Any new information
that lowered expectations about the future availability of low-
emissions alternative technologies would raise the whole structure
of carbon prices, spot and forward .... It is important to allow
permits to be used when they have greatest value to market
participants, to the extent that this is consistent with taking account
of any additional climate impacts of early use of permits and with
emerging international agreements. The practical way to achieve
the desired outcome would be for the Government to define an
optimum path for use of permits - ideally based on analysis of the
minimum cost path of emissions reduction within the total emissions
budget - and to issue permits over time in line with this trajectory of
emissions reduction. The fixed schedule for release of permits could
then be accompanied by provision for banking permits in excess of
current economic use, and borrowing from the future allocations
when the value of current relative to future use suggested it. The
banking and borrowing would allow the market to modify the rate at
which permits were used in a way that minimised the cost of
mitigation. It would allow the market to shape and reshape the
“depletion curve” in response to new information about emissions-
related technology or practices.

Garnaut has also accepted the recommendations in the Report by the Task
Group on Emissions Trading (Australian Department of the Prime Minister and
Cabinet 2007), established by former Prime Minister Howard, which
recommends Government interventions in the ETS to support governance and
to ameliorate market failures in innovation, R&D, demand-side energy use and
the provision of network infrastructure.

The Garnaut Review recommends a form of Reserve Bank to issue and monitor
the use of permits:

In addition, the independent authority could be given the roles of
ensuring that Australia met its obligations under international
agreements to reach emissions targets (for example, to buy permits
on the international market when the private sector was a net
borrower from the authority in a year in which Australia was
required to meet an international target); and to assess and make
payments related to incentives for operation of trade-exposed,
emissions-intensive industries.

Garnaut (p50) argues that other firms that suffer because of higher prices on
inputs or on what they supply would not be compensated. He says there is no
tradition in Australia of compensating other firms for losses associated with
economic reforms, particularly because the business community has been able
to anticipate the risks of carbon pricing for many years. However, Garnaut
does make the case for assistance to workers and communities who are
adversely affected by environmental reforms. He notes:

Desirably, and typically, these take the form of assistance in
preparation for new employment: retraining of workers (as with
textile and steel workers in the 1980s after reduction in protection);
grants to communities to support improvements in infrastructure
that would be helpful to the attraction of alternative industries (the
steel towns in the 1980s); or assistance to parts of the industry that
have opportunities for survival and expansion in the new, more
competitive circumstances (design and export assistance to the
passenger motor industry following reductions in protection in the
1980s and 1990s).

The essence of the problem with Garnaut's proposed Australian ETS perhaps
lies in the above point. The very reason emissions permits were given to firms
in the European Union was to avoid charging for the permits, which would
have constituted an environmental tax of the form found unpopular in Germany
and determined by the French Constitutional Court to be unconstitutional
(Deroubaix & Leveque 2006).

At the heart of addressing greenhouse gas emissions is the principle that the
cost of adjusting to climate change should not fall on individual countries,
firms or individuals. Garnaut's ETS proposal of auctioning permits is prima
facie inequitable because it leads to differential damage to firms. Firms and
indeed end consumers will have plenty of reason to object to such damage.
They have the right to ask “Why me? Why should I be sacrificed for the good of
the planet?” Garnaut's policy of not compensating firms and individuals who
suffer has not proven to be a point easily accepted. As in France, inequalities
of this nature mean the policy requires a supra-approval under the
Constitution's international treaty provisions.

The Australian Government immediately reacted to Garnaut's interim report in
ways that Garnaut had recommended against. For example, the Government
announced that it was considering excluding petrol from any ETS and
eventually proposed that every increase in permits cost would be offset by
decreases in excise duty. Garnaut responded immediately, rejecting this type of
compensation and arguing “The broader the coverage, the lower the overall
cost to the economy.”

The Treasurer of the New South Wales (NSW) State Government, Michael
Costa, in the process of privatising NSW's power stations, also reacted
immediately to the issue. He said the National Generators Forum was seeking
either free emissions permits or A$20 billion compensation from the proceeds
of an auction of emissions permits.

In an indication of the direction of debate on this issue, the National
Generators Forum issued a polemical statement criticising Garnaut's proposals:
“It is of serious concern for the security of the future electricity supply in
Australia, that for the second time in a month, Professor Ross Garnaut has
released a report which demonstrates a fundamental lack of understanding of
how Australia’s energy market operates” (Boshier 2008).

That the National Generators Forum continues to be perplexed by what it
regards as Professor Garnaut's simplistic views of such a complex area is an
indication of the difficulty of achieving national consensus in Australia's
transition to a low carbon economy.

Lastly, due to its unusual structure, another point in Garnaut's proposal has the
potential to become a major controversy. Garnaut (p48) says that firms
such as coal, iron ore and metals exporters, which may not be able to pass on
price increases, would receive special treatment in the form of cash subsidies:

For the most part, the distinction is between firms selling into the
non-traded domestic sector, which will mostly be in a position
largely to pass on the permit price, and firms in the trade-exposed,
emissions-intensive sector, which mostly will not be able to pass on
the price of permits (in part or in whole) unless and until relevant
competitors in global markets are in a comparable position .... In
Australia, industries included in this category may include non-
ferrous metals smelting, iron and steel-making, and cement ....
There are environmental and economic reasons for establishing
special arrangements for highly emissions-intensive industries that
are trade-exposed and at risk during the transition to effective
global carbon pricing arrangements. The case for special
arrangements is based on efficiency in international resource
allocation. All other factors being equal, if such enterprises were
subject to a higher emissions price in Australia than in competitor
countries, there could be sufficient reason for relocation of
emissions-intensive activity to other countries. The relocation may
not reduce, and in the worst case may increase, global emissions.
The economic costs to Australia and the lack of a global
environmental benefit of such relocation of industry are obvious.

Although this point is analogous to exporters not charging goods and services
tax (GST) and reclaiming from the Government any GST paid on inputs, the
concept of a subsidy to extremely wealthy multinational resource companies is
on the face of it electorally unpalatable.

Australian Whitepaper & Carbon Pollution Reduction
Scheme
In December 2008, the Australian Government responded to the Garnaut
Review with a White Paper. The key features of the White Paper were
confirmation of a 60% reduction in emissions by 2050 (compared to 2000
levels); a unilateral 5% reduction by 2020 (compared to 2000 levels) and up to
15% if necessary to join with other nations in global action to limit CO2-
equivalent emissions to 450 ppm or lower by 2050; 20% of Australia's energy
being produced from renewable sources by 2020; and an Emissions Trading
Scheme (ETS) to operate from 1 July 2011.

The Government noted that a 15% reduction by 2020 (compared to 2000
levels) is equivalent to 27% per capita (or 34% per capita from 1990) because
Australia's population is projected to grow by 45% over the same period.

The Whitepaper was subsequently embodied in a Bill called the Carbon
Pollution Reduction Scheme (CPRS), which had insufficient support for either
its initial passage in the Australian Senate or a second vote in August 2009.

While the CPRS appears to be well designed, it is subject to ongoing political
negotiations that will variously exempt, advantage and disadvantage various
industries and groups in society. For example, exporters and energy intensive
industries subject to import competition have been exempted. The owners of
coal fired electricity generation plants have been compensated for the loss of
value of their plant. Motorists have been compensated for their extra costs.
The deficiency of the scheme is therefore apparent when contrasted to
Garnaut's framework. Many believe that the Government will face a century or
more of unremitting lobbying from powerful stakeholders.

Lowe (2009, p.48) was particularly dismayed at the Government's response to
the Garnaut Review:

Australia's carbon-dioxide emissions from energy use are now about
40 per cent above the 1990 figure and spiralling out of control. The
emissions trading scheme put forward by the Rudd Government in
December 2008 – for our pollution levels in 2020 to be 5 per cent
less than they were in 2000, possibly up to 15 per cent should a
global agreement be reached – will not be adequate to promote
changes to the way we live and do business. On the contrary, the
government has proposed concessions to households and high-
emissions industries to ensure that their levels of consumption and
pollution remain unaffected by the scheme!

And he sees the magnitude of the task to be daunting (p89): “A rough
calculation shows that the eventual carbon budget for each Australian will be
about 5 per cent of the present level of emissions.”

The 5% unilateral and 15% negotiable targets proved to be very controversial
with environmental groups. In May 2009, under parliamentary pressure from
the Green Party, the negotiable cap was increased to a range between 15% and
25%; the ETS was delayed one year until 1 July 2012; and an interim price of A$10
per emissions permit was fixed.

Minister for Climate Change & Water, Penny Wong, noted that the Government
would meet the maximum 25% target through the CPRS, the 20% renewable
energy target and from 2015 by purchasing international credits for up to 5%
of the target.

Australian Renewable Energy Target


The Australian Government's 20% renewable energy target (RET) will be
achieved by substantial investment in renewable energy, energy efficiency and
carbon capture and storage (CCS). In August 2009, the RET Bill was finally
passed by the Australian Parliament after the Government agreed to separate
it from the controversial Carbon Pollution Reduction Scheme.

However, it was a foregone conclusion in any case. The Council of Australian
Governments (COAG) had already given its support for the new RET to take
over from the existing Mandatory Renewable Energy Target (MRET), which
runs from 2001 to 2010. The MRET requires wholesale purchasers of
electricity to proportionally contribute to an additional 9,500 GWh of
renewable energy per year by 2010. The RET's expanded target is 45,000 GWh
by 2020. The new RET now absorbs all existing and proposed state and
territory renewable energy schemes.

A1.3 Appendix References


Australian Department of the Prime Minister and Cabinet, 2007. Prime
Ministerial Task Group on Emissions Trading - Final Report, Canberra:
Australian Government.

Barringer, F., 2008. Businesses in Bay Area may pay fee for emissions. The
New York Times. Available at:
http://www.nytimes.com/2008/04/17/us/17fee.html?
_r=3&th&emc=th&oref=slogin&oref=slogin&oref=slogin [Accessed
April 18, 2008].

Bento, A.M. & Jacobsen, M., 2007. Ricardian rents, environmental policy and
the 'double-dividend' hypothesis. Journal of Environmental Economics
and Management, 53(1), 17-31.

Boshier, J., 2008. Media Release: Garnaut gets it wrong again, National
Generators Forum. Available at: http://www.ngf.com.au/html//index.php?
option=com_remository&Itemid=32&func=fileinfo&id=262 [Accessed
April 26, 2008].

Deroubaix, J. & Leveque, F., 2006. The rise and fall of French ecological tax
reform: social acceptability versus political feasibility in the energy tax
implementation process. Energy Policy, 34(8), 940-949.

Flannery, T., 2008. Sustainability - issues facing world cities and world city
universities (University of Technology, Sydney Inaugural Anniversary
Address on its 20th Anniversary), The University of Technology, Sydney.
Available at: http://www.twenty.uts.edu.au/streaming/vod-tf/ [Accessed
June 1, 2008].

Garnaut Climate Change Review, 2008a. Emissions Trading Scheme:
Discussion Paper, Available at:
http://www.garnautreview.org.au/CA25734E0016A131/WebObj/ETSdiscussionpaper-March2008/$File/ETS%20discussion%20paper%20-%20March%202008.pdf.

Garnaut Climate Change Review, 2008b. Interim Report to the Commonwealth,
State and Territory Governments of Australia, Commonwealth of
Australia. Available at:
http://www.garnautreview.org.au/CA25734E0016A131/WebObj/GarnautClimateChangeReviewInterimReport-Feb08/$File/Garnaut%20Climate%20Change%20Review%20Interim%20Report%20-%20Feb%2008.pdf.

Garnaut, R. et al., 2008. Emissions in the platinum age: the implications of
rapid development for climate change mitigation. In Oxford Review of
Economic Policy. Australian National University. Available at:
http://www.garnautreview.org.au/CA25734E0016A131/pages/reports,-
papers-and-specialist-submissions [Accessed May 13, 2008].

Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental
Crisis, 2005 ed., Black Inc, Melbourne.

i Nevertheless, it is possible for the West to impute a carbon tax on imports from
countries that do not levy emissions.
ii Fungibility is the ability to trade a permit in different markets, for example, the
ready sale of an American emission permit on the Australian ETS market.
iii Garnaut dismisses outright a carbon tax or a capped price at which the Australian
Government would sell any number of permits, which Garnaut says is the same
as a carbon tax.

Appendix 2 CGE Modelling

A2.1 Elementary CGE modelling


Embedded within every CGE model are neoclassical consumer utility and
production functions.

Consumer utility function


Chapter 5 Sceptre model development discussed the consumer utility function
often used in general equilibrium studies and its interrelationship with the
pure intertemporal discount rate. The single commodity consumer utility
function described there is:

$$u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$

where $c$ is per capita consumption and the constant elasticity of
intertemporal substitution is $\sigma = 1/\eta$.

However, there are many approaches to multi-commodity utility functions.
There are three main types of neoclassical utility function, in order of
ascending flexibility: Cobb-Douglas, Constant Elasticity of Substitution (CES)
and transcendental logarithmic (Translog) with coefficients estimated through
econometrics.

The simplest form of all utility functions that satisfies conditions for regularity
(i.e. monotonicity and convexity) is the Cobb-Douglas (1928) or log-linear
function

$$U = \prod_{i=1}^{n} q_i^{\alpha_i}$$

where $U$ is utility in pure units and $q_i$ is the $i$th commodity with a share
factor of $\alpha_i$.

The log-linear form of the utility function is (Chung 1994, p.8):

$$u = \ln U = \sum_{i=1}^{n} \alpha_i \ln q_i$$

Criticisms of the log-linear function are that partial elasticity of substitution is
unity for all pairs of commodities and the shares of each commodity in the
consumer's budget are independent of the size of the budget. These
assumptions of additivity and homotheticity introduce distortions and limit the
value of log-linear utility functions for empirical studies.

The constant elasticity of substitution (CES) model is a generalisation of the
log-linear function, which removes some of the restrictive assumptions. Chung
(1994, p.58) defines the CES utility function as:

$$u = \left[\sum_{i=1}^{n} \alpha_i q_i^{-\rho}\right]^{-1/\rho}$$

where $u$ is utility in pure units, $q_i$ is the $i$th commodity with a share
factor of $\alpha_i$, and $\rho$ is related to the elasticity of substitution by
$\sigma = \frac{1}{1+\rho}$.

While the CES function remains highly popular, it is limited by the assumption
of constant elasticities of substitution and it cannot model inferior goods. The
transcendental logarithmic (Translog) functions for price and quantity
developed by Christensen et al. (1975) provide a model free of these
restrictions. However, there are still deficiencies. The price and quantity
functions are approximated to the second order. Furthermore, demand
functions fitted to time-series data are not homogeneous and probably not
symmetric (Chung 1994, p.76 & 81). A Translog function can become unstable
if it takes a homothetic and separable form, whereupon it collapses to a Cobb-
Douglas function of Translog sub-aggregates (or the reverse).

Production function
Analogous to the three forms of utility function, there are three main types of
neoclassical production functions: Cobb-Douglas, Constant Elasticity of
Substitution (CES), transcendental logarithmic (Translog). In addition, the
Leontief Input-Output table of proportions is a special form or schema for a
neoclassical production function.

The most commonly used production function in computable general
equilibrium modelling is Constant Elasticity of Substitution (CES). Following
identification of the CES function by Arrow et al. (1961), the CES production
function is often shown as:

$$U(x_1, x_2) = A\left(\delta x_1^{-\rho} + (1-\delta)\,x_2^{-\rho}\right)^{-1/\rho}$$

where, for example, $x_1$ may be capital, $x_2$ labour and $A$ the factor
productivity (i.e. the technology multiplier). The elasticity of substitution
between $x_1$ and $x_2$ is $\sigma = \frac{1}{1+\rho}$ or, alternatively,
$\rho = \frac{1-\sigma}{\sigma}$.

It may be noted that it is possible to keep $U(x_1, x_2)$ at a fixed amount (an
“isoquant”) using different proportions of $x_1$ and $x_2$. The degree to which
one input can substitute for the other is governed by either of the parameters
$\{\sigma, \rho\}$.

The CES production function is often nested so that pairs of composite inputs
(goods or factors), prices and conditional demand functions lead to composite
outputs. For example, labour and capital produce value added, and the
combination of value-added with commodities A & B produces commodity C.
Commodities A & B may have both been produced by other processes. The
same sort of nesting is used for consumer utility: commodities X & Y are
consumed, and this consumption together with savings produces the consumer
utility.

The deficiencies of the CES form are similar to those discussed above in
relation to utility. These are that the factor shares do not vary with total output
and the elasticity of substitution is the same for all input pairs (Chung 1994,
p.110).

The CES production function has three special cases, where the elasticity of
substitution $\sigma$ approaches one, zero or infinity.

As $\sigma \to 1$, $\rho \to 0$ and the CES function becomes the Cobb-Douglas
production function:

$$U = A\, x_1^{\delta}\, x_2^{1-\delta}$$

When 0 the CES function becomes the Leontief function of perfect
complements, where factors are contemporaneously used in fixed proportions
{a , b} . No substitution is possible. Therefore, an isoquant q is L-shaped
and the bottom left-hand corner of the isoquant is the minimum resource usage
of each input to achieve the output level q .

x1 x 2
U = Min [ , ]=q
a b

When $\sigma \to \infty$, the CES production function becomes a simple linear
formulation with substitution remaining feasible, albeit rarely observed in
reality:

$$U = A \sum_{i=1}^{n} x_i$$
For example, the producer's behaviour is to minimise the total cost of inputs
subject to the constraint of achieving a minimum output of q (the isoquant):

$$\min\; p_1 x_1 + p_2 x_2 \quad \text{subject to} \quad A\left(\delta x_1^{-\rho} + (1-\delta)\,x_2^{-\rho}\right)^{-1/\rho} = q$$

where $p_1$ and $p_2$ are the prices of the respective inputs.

Let $\delta_1 = \delta$ and $\delta_2 = 1 - \delta$.

Taking logarithms of each side of the constraint to facilitate analysis, the
Lagrangian equation becomes:

$$L = (p_1 x_1 + p_2 x_2) - \lambda\left[\log(q/A) + \tfrac{1}{\rho}\log\left(\delta_1 x_1^{-\rho} + \delta_2 x_2^{-\rho}\right)\right]$$

Setting the partial differentials $\left\{\frac{\partial L}{\partial x_1} = 0,\; \frac{\partial L}{\partial x_2} = 0,\; \frac{\partial L}{\partial \lambda} = 0\right\}$ provides the first order
conditions. From the first two equations, the ratio of prices can be calculated as:

$$\frac{p_1}{p_2} = \frac{\delta_1 x_1^{-\rho-1}}{\delta_2 x_2^{-\rho-1}}$$

Upon rearranging into equations for $\{x_1, x_2\}$:

$$x_1 = x_2\left[\frac{\delta_1 p_2}{\delta_2 p_1}\right]^{\frac{1}{1+\rho}}, \qquad x_2 = x_1\left[\frac{\delta_2 p_1}{\delta_1 p_2}\right]^{\frac{1}{1+\rho}}$$

Substituting each of these equations for $\{x_1, x_2\}$ in the CES constraint
$A\left(\delta_1 x_1^{-\rho} + \delta_2 x_2^{-\rho}\right)^{-1/\rho} = q$ and solving for the other provides the solutions:

$$x_1 = \frac{q}{A}\, k^{1/\rho}\left(\frac{\delta_1}{p_1}\right)^{\sigma}, \qquad x_2 = \frac{q}{A}\, k^{1/\rho}\left(\frac{\delta_2}{p_2}\right)^{\sigma}$$

where:

$$k = \delta_1^{\sigma} p_1^{1-\sigma} + \delta_2^{\sigma} p_2^{1-\sigma}$$

The CES function exhibits constant returns to scale, therefore a single unit
numéraire cost function $c$ for demand of 1 unit can be defined as the ratio
of input value to output quantity. Here it is given at the minimum value:

$$c = \frac{p_1 x_1 + p_2 x_2}{q}$$

Substituting the equations above for $\{x_1, x_2\}$ in the unit cost function
provides:

$$c = \frac{1}{A}\, k^{\frac{1}{1-\sigma}}$$

Therefore, the factor $k^{1/\rho}$ appearing in the equations for $\{x_1, x_2\}$ is given
by:

$$k^{1/\rho} = (A c)^{\sigma}$$

Substituting this in the conditional demand equations for $\{x_1, x_2\}$ and using
$\sigma = \frac{1}{1+\rho}$ for the elasticity of substitution between $x_1$ and $x_2$, the
conditional demand equations $\{x_i\}$ become functions of $\left[\frac{\delta_i c}{p_i}\right]$:

$$x_1 = q\, A^{\sigma-1}\left[\frac{\delta_1 c}{p_1}\right]^{\sigma}, \qquad x_2 = q\, A^{\sigma-1}\left[\frac{\delta_2 c}{p_2}\right]^{\sigma}$$

Generalised multi-input CES production function
This provides a specification for the multi-input CES function used in many
CGE models, where the share parameters $\{\delta_i\}$ are defined slightly
differently to facilitate their removal from the power function, for example with
$\{\delta_1^{1/\sigma} = \delta,\; \delta_2^{1/\sigma} = 1-\delta\}$:

$$q = A\left(\sum_{i=1}^{n} \delta_i^{1/\sigma} x_i^{-\rho}\right)^{-1/\rho} \quad\text{or}\quad q = A\left(\sum_{i=1}^{n} \delta_i^{1/\sigma} x_i^{\frac{\sigma-1}{\sigma}}\right)^{\frac{\sigma}{\sigma-1}}$$

The numéraire unit cost function $c$ for demand of 1 unit is:

$$c = \frac{1}{A}\left(\sum_{i=1}^{n} \delta_i\, p_i^{1-\sigma}\right)^{\frac{1}{1-\sigma}}$$

and the conditional demand function $x_i$ for relative prices $p_i$ is:

$$x_i = q\, A^{\sigma-1}\, \delta_i\left[\frac{c}{p_i}\right]^{\sigma}$$

Mathematica CGE model


Noguchi (1991; 1992) provides an elementary single period, multi-sector,
multi-factor CGE model in Mathematica. The model has ten equations, which
are retained at a high level for flexibility rather than being analytically reduced
for efficiency. A modified version of Noguchi's model is provided below. It is
possible to select either a Cobb-Douglas or CES for each of the consumer
utility function and production function. There is also the facility for
intermediate products.

There are two conditions for equilibrium. The first is that the total
consumption of each commodity equals the total production:

$$C_i = X_i - \sum_{k=1}^{n} X_k\, m_{i,k}$$

where $m_{i,k}$ is the proportion of the $i$th commodity required for producing the
$X_k$ product.

The second is that the marginal rates of substitution of each commodity in
consumption and production are identical, which means the prices of
consumed commodities equal the prices of produced commodities.

Clear[equations, Assign, Equilibrium, X, w, U, p, Y, Co, vp];


TSectors = 2; TFactors = 2;
(**Production Function**)
(*Cobb Douglas Prodn*)
Do[X[i] = A[i] Product[L[i, j]^a[i, j], {j, TFactors}], {i, TSectors}];
(*CES Prodn*)
(*Do[X[i]=A[i] Sum[a[i,j]L[i,j]^-a[i,TFactors+1],{j,1,TFactors}]^(-
1/a[i,TFactors+1]),{i,TSectors}];*)
Do[w[i, j] = D[X[i], L[i, j]], {i, TSectors}, {j, TFactors}];

(**Utility Function**)
(*Cobb Douglas*)
U = Product[Co[i]^s[i], {i, TSectors}];
(*CES*)
(*U=Sum[s[i]Co[i]^-s[TSectors+1],{i,1,TSectors}]^(-1/s[TSectors+1]);*)
(**Price Functions**)
Do[p[i] = D[U, Co[i]]/D[U, Co[1]], {i, TSectors}];
Y = Sum[p[i] Co[i], {i, TSectors}];

(**No intermediate products**)


Do[Co[i] = X[i], {i, TSectors}];
Do[vp[i] = w[i, 1] p[i], {i, TSectors}];

(**Intermediate products present**)


(*Table[Co[i]=X[i]-Sum[X[k] ic[i,k],{k,1,TSectors}],{i,TSectors}];
Table[vp[i]=w[i,1]( p[i]-Sum[p[k] ic[k,i],{k,TSectors}]),{i,TSectors}];*)
(**Constraints**)
equations = Flatten[{
(**Resource Limits**)
Table[Ltot[j] == Sum[L[i, j], {i, TSectors}], {j, TFactors}],
(**Production Equilibrium Condition (relative marginal productivities
are equal**)
Table[w[1, 1] w[i, j] == w[1, j] w[i, 1], {i, 2, TSectors}, {j, 2,
TFactors}],

(**Marginal Value Productivities (same across all sectors**)


Table[vp[1] == vp[i] , {i, 2, TSectors}]
}];

(**Allocate Parameters**)
Assign[{unitspars_, prodpars_, intcoffs_, utilpars_, extpars_}] :=
Join[
Thread[Array[A, TSectors] -> unitspars],
(*TFactors increased by 1 in prodpars for Production X CES
elasticities*)

Thread[Flatten[Array[a, {TSectors, TFactors + 1}]] -> Flatten[prodpars]],
Thread[Flatten[Array[ic, {TSectors, TSectors}]] -> Flatten[intcoffs]],

(*TSectors increased by 1 in utilpars for Utility CES elasticity*)

Thread[Array[s, TSectors + 1] -> utilpars],
Thread[Array[Ltot, TFactors] -> extpars]
];

(**Equilibrium Function**)
Equilibrium[pars_] := Solve[equations /. Assign[pars]];

(**Execute Equilibrium**)
pars = {{1, 1}, {{0.8, 0.2, 0.3}, {0.2, 0.8, 0.5}}, {{.1, .3}, {.4, .1}},
{0.6, 0.4, 0.5}, {400, 600}};
Assign[pars]
Equilibrium[pars]

Using Cobb-Douglas for each of the consumer utility and production functions,
the output with no intermediate inputs becomes:

(**The result of Assign[pars] – dummy parameters have been removed**)


{A[1] → 1, A[2] → 1, a[1, 1] → 0.8, a[1, 2] → 0.2, a[2, 1] → 0.2, a[2, 2]
→ 0.8, s[1] → 0.6, s[2] → 0.4, Ltot[1] → 400, Ltot[2] → 600}

(**The result of Equilibrium[pars]**)


{{L[1, 1] → 342.857, L[1, 2] → 163.636, L[2, 1] → 57.1429, L[2, 2] →
436.364, vp[1] → 0.689991, vp[2] → 0.689991, Co[2] → 290.584, Co[1] →
295.71}}

The above formulation processes very quickly when both the consumer utility
and producer production functions have the Cobb-Douglas form and there are
no intermediate inputs. However, if CES functions are used or intermediate
inputs are allowed, then execution becomes laborious.

Linearisation of conditional demand functions


As noted in the previous section, nonlinear CGE equations become very
difficult to solve with direct optimisation techniques. Therefore, industrial
models such as GTAP usually linearise the equations using a presolver
algorithm. Gohin & Hertel (2003, p.5-10) show how linearisation rules may be

used in such an algorithm with proportional changes of the form  = dA :


A
A


AB  B
= A

A/ B  B
= A−

B 
= B
 A  B 
AB = A B
AB AB

For example, the analytically reduced nonlinear equation for $x_1$ in the CES
function derived above is:

$$x_1 = q\, A^{\sigma-1}\left[\frac{\delta_1 c}{p_1}\right]^{\sigma}$$

Upon transformation using the linearisation rules, this becomes the simple
linear equation:

$$\hat{x}_1 = \hat{q} + \sigma(\hat{c} - \hat{p}_1) + \sigma\hat{\delta}_1 + (\sigma - 1)\hat{A}$$

It is interesting to note that this equation shows four dynamic effects on
demand:

$\hat{q}$ : expansion effect (change of output level)
$\sigma(\hat{c} - \hat{p}_1)$ : substitution effect (change of relative prices)
$\sigma\hat{\delta}_1$ : factor biased technological change
$(\sigma - 1)\hat{A}$ : neutral technological change

A2.2 Economic Equivalence of Competitive Markets and Social Planning

Sargent (1987) and Ljungqvist & Sargent (2000) have described the discrete
time analysis of non-linear stochastic neoclassical growth in great detail. Uhlig
(1999, pp.7-12) uses this benchmark model to demonstrate that the same
allocation of resources occurs under competitive equilibrium and under a
social planner's welfare optimisation.

Uhlig's formulation is elegant, consisting of preferences, technologies,
endowments and information, as follows.

Preferences
In the neoclassical growth model, utility of the representative agent is a time
discounted function of the expectation of consumption in the presence of risk
aversion $\eta$:

$$U = E\left[\sum_{t=0}^{\infty} \beta^t\, \frac{C_t^{1-\eta} - 1}{1-\eta}\right]$$

where:
$C_t$ = consumption i
$\beta$ = the time discount factor
$\eta$ = the coefficient of risk aversion

Technologies
Technology is represented with a Cobb-Douglas production function as follows:

$$C_t + K_t = Z_t K_{t-1}^{\theta} N_t^{1-\theta} + (1-\delta)K_{t-1}$$

where:
$K_t$ = capital
$N_t$ = labour
$\theta$ = share of capital, with $0 < \theta < 1$
$\delta$ = depreciation rate, with $0 < \delta < 1$
$Z_t$ = total factor productivity

with $Z_t$ evolving according to the equation:

$$\log Z_t = (1-\psi)\log\bar{Z} + \psi\log Z_{t-1} + \epsilon_t$$

where:
$\epsilon_t$ = i.i.d. $N(0, \sigma_\epsilon^2)$
$\psi$ = parameter, with $0 \leq \psi < 1$
$\bar{Z}$ = parameter

Endowments
The representative agent is endowed with:

$N_t$ : each period has one unit of time, so $N_t = 1$
$K_{-1}$ : the initial capital of the time period before $t = 0$
Information
The variables $C_t$, $N_t$ and $K_t$ are chosen according to information available
at each time period $t$.

Social planner's problem
The social planner's objective function is maximisation of the Preferences
utility function subject to the Technologies constraint, with the consumer as
representative agent:

$$\max_{\{C_t, K_t\}_{t=0}^{\infty}} E\left[\sum_{t=0}^{\infty} \beta^t\, \frac{C_t^{1-\eta}-1}{1-\eta}\right]$$

subject to $K_{-1}$, $Z_0$ and:

$$C_t + K_t = Z_t K_{t-1}^{\theta} N_t^{1-\theta} + (1-\delta)K_{t-1}$$
$$\log Z_t = (1-\psi)\log\bar{Z} + \psi\log Z_{t-1} + \epsilon_t$$

which has the Lagrangian function:

$$\mathcal{L} = \max_{\{C_t, K_t\}_{t=0}^{\infty}} E\left[\sum_{t=0}^{\infty} \beta^t\left(\frac{C_t^{1-\eta}-1}{1-\eta} - \lambda_t\left(C_t + K_t - Z_t K_{t-1}^{\theta} N_t^{1-\theta} - (1-\delta)K_{t-1}\right)\right)\right]$$

The partial differential Euler equations:

$$\frac{\partial\mathcal{L}}{\partial\lambda_t}:\quad 0 = C_t + K_t - Z_t K_{t-1}^{\theta} N_t^{1-\theta} - (1-\delta)K_{t-1}$$

$$\frac{\partial\mathcal{L}}{\partial C_t}:\quad 0 = C_t^{-\eta} - \lambda_t$$

$$\frac{\partial\mathcal{L}}{\partial K_t}:\quad 0 = -\lambda_t + \beta\, E_t\left[\lambda_{t+1}\left(\theta Z_{t+1} K_t^{\theta-1} + 1 - \delta\right)\right]$$

provide first order condition approximations. The Kuhn-Tucker limiting
condition is introduced by summing to $T$ rather than infinity and
substituting $C_t$ (using $N_t = 1$) with:

$$C_t = Z_t K_{t-1}^{\theta} + (1-\delta)K_{t-1} - K_t$$

A transversality condition prevents unstable solutions. This is obtained by
setting the differential of the Kuhn-Tucker limiting condition to zero, as
follows:

$$0 = \lim_{T\to\infty} E_0\left[\beta^T C_T^{-\eta} K_T\right]$$

In order to provide a set of equations from which a steady state solution can be
determined, Lucas' asset pricing equation can be used (Lucas 1978):

$$1 = E_t\left[\beta\left(\frac{C_t}{C_{t+1}}\right)^{\eta} R_{t+1}\right]$$

where $R_{t+1}$ is the return on the capital in purchasing an additional unit of
next year's resources.

Collecting the equations for a steady state solution:

$$C_t = Z_t K_{t-1}^{\theta} + (1-\delta)K_{t-1} - K_t$$
$$R_t = \theta Z_t K_{t-1}^{\theta-1} + 1 - \delta$$
$$1 = E_t\left[\beta\left(\frac{C_t}{C_{t+1}}\right)^{\eta} R_{t+1}\right]$$
$$\log Z_t = (1-\psi)\log\bar{Z} + \psi\log Z_{t-1} + \epsilon_t$$

which can be rearranged and restated without time indices as:

$$\bar{C} = \bar{Z}\bar{K}^{\theta} + (1-\delta)\bar{K} - \bar{K}$$
$$\bar{R} = \theta\bar{Z}\bar{K}^{\theta-1} + 1 - \delta$$
$$1 = \beta\bar{R}$$

or alternatively as:

$$\bar{R} = \frac{1}{\beta}$$
$$\bar{K} = \left(\frac{\theta\bar{Z}}{\bar{R} - 1 + \delta}\right)^{\frac{1}{1-\theta}}$$
$$\bar{Y} = \bar{Z}\bar{K}^{\theta}$$
$$\bar{C} = \bar{Y} - \delta\bar{K}$$

which may further be reduced to just one equation in $K_t$ or to a popular
formulation in two variables $C_t$ and $K_{t-1}$.
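The closed-form steady state above is straightforward to evaluate numerically.
The following standalone Mathematica sketch uses assumed parameter values
that are illustrative only and are not calibrated to anything in this
dissertation:

(* Steady state of the benchmark model with assumed parameters *)
beta = 0.96; theta = 0.36; delta = 0.1; Zbar = 1.;
Rbar = 1/beta;                 (* from 1 = beta Rbar *)
Kbar = (theta Zbar/(Rbar - 1 + delta))^(1/(1 - theta));
Ybar = Zbar Kbar^theta;
Cbar = Ybar - delta Kbar;
{Rbar, Kbar, Ybar, Cbar}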

Competitive equilibrium
Analogous to the social planner's objective function, competitive equilibrium
in markets needs to be defined in terms of market prices. For example,
competitive equilibrium is the sequence:

$$\{C_t, N_t, K_t, R_t, W_t\}_{t=0}^{\infty}$$

where, in addition to the previous definitions, $W_t$ is market wages and $R_t$
is returns.

The representative agent maximises the same Preferences utility equation as
the social planner, albeit from a supplier's perspective, so the superscript $(s)$
is used in $K_t^{(s)}$ and $N_t^{(s)}$:

$$\max_{\{C_t, K_t^{(s)}\}_{t=0}^{\infty}} E\left[\sum_{t=0}^{\infty}\beta^t\,\frac{C_t^{1-\eta}-1}{1-\eta}\right]$$

subject to $N_t^{(s)}$ and:

$$C_t + K_t^{(s)} = W_t N_t^{(s)} + R_t K_{t-1}^{(s)}$$

and the intertemporal budgetary restraint that, over time, returns will pay for
capital (and any borrowings will be paid for from returns) such that at time
$\infty$ there is neither surplus capital nor borrowings (known as the “no-Ponzi-
game condition”):

$$0 = \lim_{t\to\infty} E_0\left[\left(\prod_{s=0}^{t} R_s^{-1}\right) K_t\right]$$

The representative agent, the firm, demanding labour will pay wages and
receive returns in the equilibrium sequence $\{W_t, R_t\}_{t=0}^{\infty}$. Using the
superscript $(d)$ for demand as we did $(s)$ for supply:

$$\max_{K_{t-1}^{(d)},\, N_t^{(d)}}\; Z_t\left(K_{t-1}^{(d)}\right)^{\theta}\left(N_t^{(d)}\right)^{1-\theta} + (1-\delta)K_{t-1}^{(d)} - W_t N_t^{(d)} - R_t K_{t-1}^{(d)}$$

where $Z_t$ is exogenous:

$$\log Z_t = (1-\psi)\log\bar{Z} + \psi\log Z_{t-1} + \epsilon_t, \qquad \epsilon_t \sim \text{i.i.d. } N(0, \sigma_\epsilon^2)$$

Markets clear as follows, although by Walras' Law only two of these three
equations are needed:

$$N_t^{(d)} = N_t^{(s)} = N_t \quad \text{in the labour market}$$
$$K_{t-1}^{(d)} = K_{t-1}^{(s)} = K_{t-1} \quad \text{in the capital market}$$
$$C_t + K_t = Z_t K_{t-1}^{\theta} N_t^{1-\theta} + (1-\delta)K_{t-1} \quad \text{in the goods market}$$

The demand curves for wages and capital return as first order approximations
are:

$$W_t = (1-\theta)\, Z_t\left(K_{t-1}^{(d)}\right)^{\theta}\left(N_t^{(d)}\right)^{-\theta}$$
$$R_t = \theta\, Z_t\left(K_{t-1}^{(d)}\right)^{\theta-1}\left(N_t^{(d)}\right)^{1-\theta} + 1 - \delta$$

The “no-Ponzi-game” condition can be shown to be equivalent to the social
planner's transversality condition.

Dropping the $(d)$ superscript, the Cobb-Douglas function is:

$$Y_t = Z_t K_{t-1}^{\theta} N_t^{1-\theta}$$

Applying the Cobb-Douglas function to the Euler equations above:

$$W_t N_t = (1-\theta)Y_t$$
$$R_t K_{t-1} = \theta Y_t + (1-\delta)K_{t-1}$$

Therefore, the income share of labour is just wages and the income share of
capital is the return on capital plus depreciation.

The rate of return $r_t$ is given by:

$$r_t = R_t - 1 = \theta\frac{Y_t}{K_{t-1}} - \delta$$

For the representative agent the Lagrangian is:

$$\mathcal{L} = \max_{\{C_t, K_t\}_{t=0}^{\infty}} E\left[\sum_{t=0}^{\infty}\beta^t\left(\frac{C_t^{1-\eta}-1}{1-\eta} - \lambda_t\left(C_t + K_t - W_t N_t - R_t K_{t-1}\right)\right)\right]$$

Again, the partial differential Euler equations provide first order condition
approximations:

$$\frac{\partial\mathcal{L}}{\partial\lambda_t}:\quad 0 = C_t + K_t - W_t N_t - R_t K_{t-1}$$

$$\frac{\partial\mathcal{L}}{\partial C_t}:\quad 0 = C_t^{-\eta} - \lambda_t$$

$$\frac{\partial\mathcal{L}}{\partial K_t}:\quad 0 = -\lambda_t + \beta\, E_t\left[\lambda_{t+1} R_{t+1}\right]$$

Collecting the equations and substituting for $W_t$ and $R_t$ provides the same
equations as for the social planner's problem:

$$C_t = Z_t K_{t-1}^{\theta} + (1-\delta)K_{t-1} - K_t$$
$$R_t = \theta Z_t K_{t-1}^{\theta-1} + 1 - \delta$$
$$1 = E_t\left[\beta\left(\frac{C_t}{C_{t+1}}\right)^{\eta} R_{t+1}\right]$$
$$\log Z_t = (1-\psi)\log\bar{Z} + \psi\log Z_{t-1} + \epsilon_t$$

From the preceding analysis, Uhlig (1999, p.13) concludes:

These are the same equations as for social planners problem! Thus,
whether one studies a competitive equilibrium or the social planners
problem, one ends up with the same allocation of resources.

A2.3 Appendix references
Arrow, K.J. et al., 1961. Capital-labor substitution and economic efficiency. The
Review of Economics and Statistics, 225-250.

Christensen, L.R., Jorgenson, D.W. & Lau, L.J., 1975. Transcendental
logarithmic utility functions. The American Economic Review, 367-383.

Chung, J.W., 1994. Utility and Production Functions: Theory and Applications,
Oxford UK and Cambridge USA: Blackwell.

Cobb, C.W. & Douglas, P.H., 1928. A theory of production. The American
Economic Review, 139-165.

Gohin, A. & Hertel, T., 2003. A Note on the CES Functional Form and Its Use in
the GTAP Model, Purdue University. Available at:
https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=1370 [Accessed November 7, 2008].

Ljungqvist, L. & Sargent, T.J., 2000. Recursive macroeconomic theory 1st ed.,
The MIT Press.

Lucas, R.E.J., 1978. Asset prices in an exchange economy. Econometrica, 46,
1429-1445.

Noguchi, A., 1992. General Equilibrium Models. In H. R. Varian, ed. Economic
and financial modeling with Mathematica. Telos Pr, pp. 104-123.

Noguchi, A., 1991. The two sector general equilibrium model: numerical and
graphical representation of an economy. The Mathematica Journal, 1(3,
Winter), 96-103.

Sargent, T.J., 1987. Dynamic Macroeconomic Theory, Harvard University Press.

Uhlig, H., 1999. A toolkit for analyzing nonlinear dynamic stochastic models
easily. In R. Marimon and A. Scott, eds. Computational Methods for the
Study of Dynamic Economies. Oxford and New York: Oxford University
Press, pp. 30-61.

i Uhlig uses capital letters to denote variables and small letters to denote log-
deviations. This notation is pursued in this analysis but is different to the usual
use of capital letters to represent aggregate variables and small letters to
represent individual variables.

Appendix 3 Input Output Tables

A3.1 Input Output tables from the Australian Bureau of Statistics
Input output tables can be compiled as either industry-by-industry or product-
by-product. The Australian Bureau of Statistics (2000, Chapter 9; 2008) prefers
the former because detailed information on inputs is not normally available for
products; the assumption that goods have the same input structure wherever
they are produced can be expensive to resolve, as SNA93 recommends; and the
difference from a product-by-product table arises only from any secondary
production. Therefore, industry-by-industry tables are suitable for the analysis
of changes in factor costs, productivity, taxes and imports.

Illustration 37: Australian Bureau of Statistics Input Output Table. Source:
Table 9.1 Industry-by-Industry Matrix, ABS 2000, Paragraph 9.23, with
direct allocation of imports.

The Illustration above shows an industry-by-industry input output matrix.
Coefficients taken by row represent the distribution of an industry's output.
Columns provide the sources of inputs for an industry. The total of outputs in a
row is equal to the sum of its inputs in a column, including gross operating
surplus.

Quadrant 1 is called the inter-industry quadrant because it shows the
intermediate goods and services traded between industries. Each coefficient
represents the proportion of industry i's output used by industry j for its
current domestic production.

Quadrant 2 shows the distribution of output for consumption by the public and
private sector and individuals. It also includes changes in inventories.
Quadrants 1 and 2 together show the total usage of the goods and services
supplied by each industry, which is also equal to total supply.

Quadrant 3, the primary inputs, represents employee wages, profits and
imports, which are not part of the output of current domestic production in the
imports, which are not part of the output of current domestic production in the
same way as intermediate goods and services traded by industries. The sum of
the inputs in Quadrants 1 and 3 produce the total outputs, that is the total
supply, of each industry.

In the Illustration, imports are shown as a distinct row in the Value Added area
across Quadrants 3 and 4. This is called a direct allocation of imports. It
assumes that each using sector draws on imports and domestic production in
the average proportions established for the total supply of each product.

Technology matrix
It is also possible to have an indirect allocation of imports where the total
output from each industry includes both Australian and imported content.
Imports are recorded as adding to the supply of the sector to which they are
primary and then this supply is allocated along the corresponding row of the
table.

This means that the coefficients reflect both domestic and imported supply. It
permits substitution between imports and domestic production without
affecting the size of the coefficients.

As materials coming into the system must be equal to the flows of materials out
of the system (plus any material accumulated within the system during the
period) then the law of conservation of mass is met.

Therefore, coefficients built from total dollar requirements also reflect the
actual technological relationship between industries. The same applies to
energy intensities and greenhouse emissions. For this reason, an input output
table with indirect allocation of imports is called a technology matrix.

However, the technology assumption implicitly requires that, in the short run:
products of the same type are homogeneous, with the same input structure
wherever produced; there are no changes in relative input prices (unless
specific behavioural models are included to separately modify the coefficients);
technological structures are fixed; output is a linear function of inputs, so
there are neither increasing returns to scale nor other constraints in the
system; and products are made in fixed proportions to each other.

Symmetric input output tables


Symmetric Leontief-type input output matrices are produced from the above
Sources and Uses tables, as either product by product or industry by industry
tables.

Direct Requirements Coefficients

The matrix of Leontief $A = (a_{ij})$ coefficients is called the direct requirements
coefficients matrix. It can be used to calculate input requirements for any
given output of an industry. In all Australian Bureau of Statistics input output
tables, 100% always represents total Australian production. This is
notwithstanding whether imports are allocated directly or indirectly.

Total Requirements Coefficients

The matrix of Leontief Inverse $(I - A)^{-1}$ coefficients is called the total
requirements coefficients matrix. Each coefficient represents the units of
industry $i$'s output required both directly and indirectly for industry $j$ to
produce 100 units of output. It needs to be remembered that the answers
obtained by applying these coefficients are in terms of the output of industries
and include the flows of products not primary to these industries.

It is also important to recognise the way imports have been allocated. With
direct allocation of imports the total requirements coefficients in Quadrant 1
refer only to the domestic production. Any use of the total requirements matrix
necessarily has the caveat assumption that imports are unchanged.

Indirect allocation of competing imports means that the total requirements
coefficients of Quadrant 1 implicitly include the usage of both imported and
domestically produced products. Therefore, substitution can take place
domestically produced products. Therefore, substitution can take place
between imports and domestic production without affecting the size of the
coefficients. However, the implicit assumption is that the usage of a product by
a particular industry remains unchanged. There is also a need to complete a
separate assessment of the proportion satisfied by imports.

Primary data tables of Sources and Uses


The input data from which symmetric input output matrices are derived come
from Sources and Uses tables for the economy as a whole. These tables show the
total resources in terms of domestic output and imports, and the uses of goods
and services in terms of intermediate consumption, final consumption, gross
capital formation and exports. They also provide information on the generation
of income from production.

Supply Table

Supply x Product x Industry & Imports

The columns represent output of domestic industries and imports. Rows
contain the output of products primary to these industries. Typically the matrix
is predominantly diagonal because industries mainly produce those products
primary to them.
Table 1 of Australian Bureau of Statistics cat. no. 5209.0.55.001 Australian
National Accounts: Input-Output Tables - Electronic Publication provides
supply by product group by industry and imports.

Use Table

Input x Industry & Final Use Category & Supply x Product Group

Rows contain product groups and primary inputs, whether locally produced or
imported. Rows designated by prefix ‘P’ show the primary inputs which have
been purchased by industries and by final demand.

Columns show the composition of intermediate and primary inputs into each
industry and final demand category.

ABS cat. no. 5209.0.55.001 Australian National Accounts: Input-Output Tables -
Electronic Publication provides the Use Table as “Table 2”. This table
comprises indirect allocation of imports, basic prices and records intra-
comprises indirect allocation of imports, basic prices and records intra-
industry flows across 109 industries. As imports are neither directly nor
indirectly allocated, it is not suitable for calculating Leontief and Leontief
Inverse matrices.

Imports table

This table is used to reallocate imports, which may be substituted from
domestic production, into the columns to which they would have been primary
if they were produced in Australia. These are called competing imports.

Imports that are not produced in Australia, called complementary imports, are
recorded in separate columns. Coffee and natural rubber are examples of
complementary imports. Imports for re-export are treated the same way.

Margins table

This table relates the basic price and purchasers’ price of all flows in the use
table.

A3.2 Leontief Matrix

Mathematical Derivation

The output of the Selling Sectors $X_1, X_2, \ldots, X_n$ is given by:

$$X_i = z_{i1} + z_{i2} + \cdots + z_{in} + Y_i$$

where the variables are defined as:

$X_i$ = output of Selling Sector $i$
$z_{ij}$ = output of Selling Sector $i$ becoming inflow of Purchasing Sector $j$
$Y_i$ = final demand of Selling Sector $i$

In addition to purchasing goods from the Selling Sectors, the Purchasing
Sectors also buy imports and value adding sources as follows:

$L$ = labour services
$N$ = government services (paid for as taxes) + capital costs (interest
payments) + land (rental payments) + entrepreneurship (profits)
$M$ = imports

So the Purchasing Sectors have total inflows of:

$$P_j = z_{1j} + z_{2j} + \cdots + z_{nj} + L_j + N_j + M_j$$

where $P_j$ is the Total Australian Production (after value added items).

The Leontief “Direct Requirement Coefficients” $a_{ij}$ are given by:

$$a_{1j} = z_{1j}/P_j \;\text{ etc., so }\; a_{ij} = z_{ij}/P_j$$

where:

$P_j$ = input of Purchasing Sector $j$
$L_j$ = labour of Purchasing Sector $j$
$N_j$ = other value added services
$M_j$ = imports of Purchasing Sector $j$

Since inflows = outflows, $X_i = P_j$, and therefore over all rows $i$ and columns $j$:

$$z_{i1} + z_{i2} + \cdots + z_{in} + Y_i = z_{1j} + z_{2j} + \cdots + z_{nj} + L_j + N_j + M_j$$

The $z_{ij}$ elements cannot be eliminated against the reverse $z_{ji}$ elements
because $z_{ij} \neq z_{ji}$. For example, the value of steel that goes into a car is not
equal to the value of cars that go to make steel. However, from the definition of
gross profit, $GP = \text{Sales} - \text{Raw Materials}$, we know that the value of all
materials purchased by a firm for its output differs from the sales value only by
gross profit (which, in turn, represents the value added of labour + overheads +
profit). Therefore the sum of the products $z_{i1} + z_{i2} + \cdots + z_{in}$ is logically
equal to $z_{1j} + z_{2j} + \cdots + z_{nj}$. So we can eliminate each side respectively,
leaving:

$$Y_1 = L_1 + N_1 + M_1$$

and upon rearranging and eliminating subscripts to indicate aggregates:

$$L + N = Y - M$$

and since:

$$Y = C + I + G + E$$

then:

$$L + N = C + I + G + (E - M)$$

where:

$C$ = final consumer consumption
$I$ = investment consumption
$G$ = government consumption
$E$ = exports

which means:

{Factor payments in the economy for labour, rent, interest, profit, indirect
taxes, etc.} = {Total spent on consumption, investment and net exports}

In other words,

Gross National Income = Gross National Product

and

{Gross National Product at Factor Prices} = {Gross National Product at Market
Prices including Indirect Taxes}

Leontief Inverse
On a per unit basis:

$$a_{ij} = z_{ij}/P_j$$

where:

$P_j$ = the column total for Australian Production after value added items such
as wages, taxes & profits
$a_{ij}$ = technical coefficients

so:

$$X_j = a_{j1}X_1 + a_{j2}X_2 + \cdots + a_{jn}X_n + Y_j$$

and therefore:

$$X_j - a_{j1}X_1 - a_{j2}X_2 - \cdots - a_{jj}X_j - \cdots - a_{jn}X_n = Y_j$$

upon rearranging:

$$-a_{j1}X_1 - a_{j2}X_2 - \cdots + (1 - a_{jj})X_j - \cdots - a_{jn}X_n = Y_j$$

to provide the matrix equation:

$$(I - A)X = Y$$

and the Leontief Inverse is given by:

$$X = (I - A)^{-1}Y$$

$A^{-1}$ can be rather tediously calculated using the formula:

$$A^{-1} = \frac{1}{|A|}\,\text{adj}\,A$$

where:

$A_{ij}$ = the independence coefficients (cofactors) of element $a_{ij}$, with
$A_{ij} = (-1)^{i+j}|a_{ij}|$
$|A|$ = the determinant of $A$
adj $A$ = the adjoint, whose element $(i, j)$ is the cofactor of the element $(i, j)$
of the transpose of $A$, where the minor $|a_{ij}|$ is the determinant of the square
matrix that remains when row $i$ and column $j$ are removed

To avoid this calculation, a computer can be used or $(I - A)^{-1}$ approximated
by using the first few elements of a power series:

$$(I - A)^{-1} = I + A + A^2 + A^3 + \cdots$$

As all elements of $A$ are less than 1, the elements of the power series quickly
approach zero (for example $0.3^2 = 0.09$), so it is usually only necessary to
iterate three times to capture most of the effects.
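A short standalone sketch with an assumed 2x2 coefficient matrix illustrates
how the truncated power series approaches the exact Leontief Inverse:

(* Power series approximation of the Leontief Inverse, with an assumed A *)
A = {{0.2, 0.3}, {0.4, 0.1}};
exact = Inverse[IdentityMatrix[2] - A];
approx = IdentityMatrix[2] + A + A.A + A.A.A;  (* first four terms of the series *)
{exact, approx}  (* the truncated series already captures most of the effect *)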

The Leontief Inverse derived above:

$$X = (I - A)^{-1}Y$$

can be interpreted as:

$$X_i = A_{i1}Y_1 + A_{i2}Y_2 + \cdots + A_{ij}Y_j + \cdots + A_{in}Y_n$$

or, alternatively, as:

$$X_i = f(Y_1, Y_2, \ldots, Y_j, \ldots, Y_n)$$

Therefore, every one dollar of final demand for industries $Y_1, Y_2, \ldots, Y_n$
leads to the output of each sector $i$, $X_i$, in proportion to the $A_{ij}$
independence coefficients.

Calculation of the Leontief and Leontief Inverse matrices
Table 5 of Australian Bureau of Statistics cat. no. 5209.0.55.001 Australian
National Accounts: Input-Output Tables - Electronic Publication provides an
Industry-by-Industry flow table with direct allocation of imports and basic
prices across 109 industries. The technical coefficients (Leontief A) matrix can
be derived from it by dividing each number in the table by the column total of
Australian Production (T2), which is prior to imports (P2). The Leontief Inverse
matrix $(I - A)^{-1}$ can be derived from A by dividing each coefficient of A by
100 and calculating the inverse $(I - A)^{-1}$.i

In the case of direct allocation of imports, the Leontief A matrix derived from
Table 5 can be compared to ABS Table 6 and the Leontief Inverse to ABS Table
7. In addition, the output vector Y is provided by the column Final Uses (T5).
From this, the input vector X can be calculated as the matrix multiplication of
the total requirements coefficients matrix $(I - A)^{-1}$ and the output vector Y.ii
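The procedure can be sketched in a few lines of standalone Mathematica. The
three-industry flow matrix, production totals and final demands below are
assumed for illustration and are not ABS data:

(* Derive the Leontief A and total output X from an assumed flow table *)
Z = {{10., 20., 5.}, {15., 5., 25.}, {5., 10., 10.}};  (* z_ij, seller i to purchaser j *)
P = {100., 120., 80.};               (* Australian Production column totals *)
A = Transpose[Transpose[Z]/P];       (* a_ij = z_ij / P_j *)
Y = {60., 70., 50.};                 (* assumed final uses *)
X = Inverse[IdentityMatrix[3] - A].Y (* X = (I - A)^-1 Y *)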

Similar tables to the foregoing are provided for the main case of indirect
allocation of imports. Table 8 provides an Industry-by-Industry flow table with
indirect allocation of imports and basic prices, across 109 industries. The
Leontief A matrix can be compared to ABS Table 9 and the Leontief Inverse to
ABS Table 10.iii

It needs to be noted that the Leontief A matrix covers only Industry Uses in
Tables 6 and 9, for direct and indirect allocation of imports respectively. The
Leontief A matrix excludes the Intermediate Input rows in the ABS tables,
which aggregate with the Leontief A matrix of Industry Uses to produce
Australian Production: Compensation of employees (P1), Gross operating
surplus & mixed income (P2), Taxes less subsidies on products (P3) and Other
taxes less subsidies on production (P4).

The Leontief A matrix also excludes the following Final Use columns in the ABS
tables, which aggregate with the Leontief matrix of Industry Uses to produce
total supply: Final consumption expenditure of the household (Q1) and
government (Q2) sectors, Gross fixed capital formation of the private (Q3),
public enterprise (Q4) and general government (Q5) sectors, Changes in
inventories (Q6) and Exports (Q7).

A3.3 World Multiregional Input Output Model


The presentation of an IRIO model is quite different to that of an MRIO model.
An IRIO model is conceptually an expansion of the regional Z matrix. For
example, a two region model (L & M) would have money flows represented as:

$$Z = \begin{bmatrix} Z^{LL} & Z^{LM} \\ Z^{ML} & Z^{MM} \end{bmatrix}$$

Analogously, the IRIO coefficient matrix is:

$$A = \begin{bmatrix} A^{LL} & A^{LM} \\ A^{ML} & A^{MM} \end{bmatrix}$$

where, as usual:

$$a_{ij}^{LM} = \frac{z_{ij}^{LM}}{X_j^M} \quad\text{and}\quad X = (I - A)^{-1}Y$$

In contrast to the full integration of an IRIO model, the MRIO approach seeks
to simplify the modelling paradigm by representing data in regional tables and
interregional trade tables.

MRIO Regional Model

In lieu of IRIO's regional input coefficient matrix $A^{LL}$, an MRIO model uses a
regional technical coefficients matrix $A^L$. In essence, this ignores the source
of the imported inputs. MRIO makes the assumption that inputs per unit of
output are constant across regions at a fine level of industrial classification
(called the “product-mix approach”).
The input coefficient matrix of regions is assumed to be the average of the
detailed coefficients of the supra-entity (in our case the world) weighted by the
proportions of sub-sector outputs to total sector output in each region.

Miller & Blair (1985, Appendix 3.2, pp.91-3) describe the method to build trade
tables as follows:

Total supply of commodity $i$ in region $M$ is:

$$T_i^M = Z_i^{1M} + Z_i^{2M} + \cdots + Z_i^{LM} + \cdots + Z_i^{pM}$$

$$T_i^M = \sum_{\substack{L=1 \\ L\neq M}}^{p} z_i^{LM} + z_i^{MM}$$

where the total production of commodity $i$ by region $L$ is:

$$X_i^L = \sum_{M=1}^{p} z_i^{LM}$$

and the amount supplied from within the region is $z_i^{MM}$.

Since the interregional trade coefficients are defined as the proportion of all
commodity $i$ used in $M$ that comes from $L$:

$$c_i^{LM} = \frac{z_i^{LM}}{T_i^M}$$

then rearranging and substituting $z_i^{LM}$ into the above leads to:

$$X_i^L = \sum_{M=1}^{p} c_i^{LM}\, T_i^M$$

As the demand for commodity $i$ in region $M$ is:

$$T_i^M = \sum_{j=1}^{n} a_{ij}^M X_j^M + Y_i^M,$$

substituting $T_i^M$ into $X_i^L$ leads to:

$$X_i^L = \sum_{M=1}^{p} c_i^{LM}\left(\sum_{j=1}^{n} a_{ij}^M X_j^M + Y_i^M\right)$$

Using matrix notation:

$$X^L = \sum_{M=1}^{p} C^{LM}\left(A^M X^M + Y^M\right)$$

$$X = C(AX + Y)$$

so:

$$X = (I - CA)^{-1}CY$$

where the expanded matrices are defined as:

$$A = \begin{bmatrix} A^1 & \cdots & 0 & \cdots & 0 \\ \vdots & & \vdots & & \vdots \\ 0 & \cdots & A^M & \cdots & 0 \\ \vdots & & \vdots & & \vdots \\ 0 & \cdots & 0 & \cdots & A^p \end{bmatrix}, \qquad A^M = \begin{bmatrix} a_{11}^M & \cdots & a_{1n}^M \\ \vdots & & \vdots \\ a_{n1}^M & \cdots & a_{nn}^M \end{bmatrix}$$

$$C = \begin{bmatrix} C^{11} & \cdots & C^{1M} & \cdots & C^{1p} \\ \vdots & & \vdots & & \vdots \\ C^{L1} & \cdots & C^{LM} & \cdots & C^{Lp} \\ \vdots & & \vdots & & \vdots \\ C^{p1} & \cdots & C^{pM} & \cdots & C^{pp} \end{bmatrix}, \qquad C^{LM} = \begin{bmatrix} c_1^{LM} & 0 & \cdots & 0 \\ 0 & c_2^{LM} & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & c_n^{LM} \end{bmatrix}$$

$$c_i^{LM} = \frac{z_i^{LM}}{T_i^M}, \qquad X^L = \begin{bmatrix} X_1^L \\ \vdots \\ X_n^L \end{bmatrix}, \qquad X^M = \begin{bmatrix} X_1^M \\ \vdots \\ X_n^M \end{bmatrix}, \qquad Y^M = \begin{bmatrix} Y_1^M \\ \vdots \\ Y_n^M \end{bmatrix}$$
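A two-region, two-sector standalone sketch (all data assumed for illustration)
shows how the block matrices assemble and how $X = (I - CA)^{-1}CY$ is
evaluated. Note that the assumed trade shares for each commodity and
destination region sum to one across source regions:

(* A minimal MRIO sketch: two regions, two sectors, assumed data *)
A1 = {{0.2, 0.1}, {0.3, 0.2}}; A2 = {{0.15, 0.2}, {0.25, 0.1}};
A = ArrayFlatten[{{A1, 0}, {0, A2}}];  (* block-diagonal regional technology *)
c11 = DiagonalMatrix[{0.8, 0.7}]; c12 = DiagonalMatrix[{0.2, 0.3}];
c21 = DiagonalMatrix[{0.2, 0.3}]; c22 = DiagonalMatrix[{0.8, 0.7}];
cMat = ArrayFlatten[{{c11, c12}, {c21, c22}}];  (* interregional trade shares *)
Y = {50., 40., 30., 60.};                       (* assumed final demands *)
X = Inverse[IdentityMatrix[4] - cMat.A].(cMat.Y) (* X = (I - CA)^-1 C Y *)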

Technological change

Solow's technological change

Robert M. Solow (1957) separated the effects of technological change and
capital in the aggregate production function of the United States. In 1987,
Solow was awarded the Nobel Prize in Economics "for his contributions to the
theory of economic growth".

Solow found that aggregate American data for 1909-1949 demonstrated that:
technical change in the period was on average neutral; the production function
shifted upwards at 0.5% p.a. for the first two decades and 2% p.a. for the
second two decades; gross output per man doubled over the period, with
87.5% attributable to technological change and 12.5% due to increased use of
capital; and, after correcting for technological change, the aggregate
production function suggests diminishing returns.

Solow assumed that factors are paid their marginal products such that:

$$Q = f(K, L; t)$$

He further assumed that technical change $A$ is neutral, because marginal
rates of substitution remain unaltered even though the production function
shifts; then:

$$Q = A f(K, L)$$

Differentiating with respect to time and dividing by $Q$:

$$\dot{Q} = \dot{A}f(K, L) + A\dot{f}(K, L) = \dot{A}f(K, L) + A\left(\dot{K}\frac{\partial f}{\partial K} + \dot{L}\frac{\partial f}{\partial L}\right)$$

$$\frac{\dot{Q}}{Q} = \frac{\dot{A}}{A} + \frac{A}{Q}\left(\dot{K}\frac{\partial f}{\partial K} + \dot{L}\frac{\partial f}{\partial L}\right) = \frac{\dot{A}}{A} + w_k\frac{\dot{K}}{K} + w_l\frac{\dot{L}}{L}$$

where:

$$w_k = \frac{A K}{Q}\frac{\partial f}{\partial K}, \qquad w_l = \frac{A L}{Q}\frac{\partial f}{\partial L}$$

Furthermore, dividing by labour $L$ and simplifying provides:

$$\frac{\dot{q}}{q} = \frac{\dot{A}}{A} + w_k\frac{\dot{k}}{k}$$

where:

$$q = \frac{Q}{L}, \qquad k = \frac{K}{L}, \qquad w_l = 1 - w_k$$

Solow assumed that constant returns to scale was unavoidable. This being the
case:

$$Q = A f(K, L), \qquad q = A f(k, 1)$$

Using labour and capital statistics from The Economic Almanac, Solow
reconstructed $A(t)$ by replacing time derivatives with year-to-year changes as
follows:

$$\frac{\Delta A(t)}{A(t)} = \frac{\Delta q}{q} - w_k\frac{\Delta k}{k}$$

From his empirical investigation, Solow determined that a Cobb-Douglas log-
linear function best fitted the plot of data:

$$\log q = \alpha + \beta\log k$$

In order to determine a generic relationship, the static aggregate Cobb-
Douglas production function is developed into a continuous function and
thence to a discrete function:

$$y = \prod_i (A_i x_i)^{\beta_i} = (A_1 x_1)^{\beta_1}(A_2 x_2)^{\beta_2}(A_3 x_3)^{\beta_3}\cdots$$

where:
$y$ = output
$x_i$ = input
$A_i$ = productivity effect that augments $x_i$
$\sum_i \beta_i = 1$

Taking the natural logarithm:

$$\ln y = \ln(A_1 x_1)^{\beta_1} + \ln(A_2 x_2)^{\beta_2} + \ln(A_3 x_3)^{\beta_3} + \cdots = \sum_i \beta_i\ln A_i + \sum_i \beta_i\ln x_i$$

According to Solow (1957, p.313) the learning effect is reflected with the
exponential:

$$A_i = e^{r_i t}$$

This may be approximated by the discrete function $A_i = (1 + r_i)^t$, so:

$$\sum_i \beta_i\ln A_i = \sum_i \beta_i r_i t = t\sum_i \beta_i r_i = t\,\beta_0$$

where:

$$\beta_0 = \sum_i \beta_i r_i$$

is a measure of the growth of technology, i.e. technological progress.

Substituting in the above equation for $\ln y$ leads to the log-linear identity:

$$\ln y = t\,\beta_0 + \sum_i \beta_i\ln x_i$$

Differentiating with respect to time leads to the following growth frontier:

$$\frac{\dot{y}}{y} = \beta_0 + \sum_i \beta_i\frac{\dot{x}_i}{x_i}$$

where:

$$z = \ln y, \qquad \dot{z} = \frac{dz}{dy}\cdot\frac{dy}{dt} = \frac{dy/dt}{y}, \qquad \dot{x} = \frac{dx}{dt}, \qquad \dot{y} = \frac{dy}{dt}$$

Extension with Total Factor Productivity

If it is assumed that there is only a single compound factor of production $F$
(Denny et al. 1981; Diewert 1981; Sengupta 2004), then the growth frontier
can be expressed in terms of the output cost elasticity $\epsilon$:

$$\frac{\dot{y}}{y} = \beta_0 + \epsilon^{-1}\frac{\dot{F}}{F}$$

So:

$$\beta_0 = \frac{\dot{y}}{y} - \epsilon^{-1}\frac{\dot{F}}{F}$$

where:

$F$ = the compound factor of production, with growth $\dfrac{\dot{F}}{F} = \sum_i \dfrac{w_i x_i}{c}\,\dfrac{\dot{x}_i}{x_i}$
$w_i$ = the price of input $x_i$
$c = \sum_i w_i x_i$, the total cost of inputs
$\epsilon = \dfrac{\partial c / c}{\partial y / y}$, the output cost elasticity

Now considering Total Factor Productivity $TFP$ as a function of output $y$
and the compound factor of production $F$:

$$TFP = \frac{y}{F}$$

$$\dot{TFP} = \frac{d\,TFP}{dt} = \frac{\dot{y}}{F} - y F^{-2}\dot{F}$$

$$\frac{\dot{TFP}}{TFP} = \frac{\dfrac{\dot{y}}{F} - y F^{-2}\dot{F}}{\dfrac{y}{F}} = \frac{\dot{y}}{y} - \frac{\dot{F}}{F}$$

So:

$$\frac{\dot{y}}{y} = \frac{\dot{TFP}}{TFP} + \frac{\dot{F}}{F}$$

Substituting in the above equation for $\beta_0$:

$$\beta_0 = \frac{\dot{y}}{y} - \epsilon^{-1}\frac{\dot{F}}{F} = \frac{\dot{TFP}}{TFP} + \frac{\dot{F}}{F} - \epsilon^{-1}\frac{\dot{F}}{F} = \frac{\dot{TFP}}{TFP} + \left(1 - \epsilon^{-1}\right)\frac{\dot{F}}{F}$$

For Solow's assumption of constant returns to scale, $\epsilon = 1$:

$$\beta_0 = \frac{\dot{TFP}}{TFP} + \left(1 - \epsilon^{-1}\right)\frac{\dot{F}}{F} = \frac{\dot{TFP}}{TFP}$$

It may be seen that, for the case of constant returns to scale, technological
progress $\beta_0$ is simply the proportional rate of change in Total Factor
Productivity.

The parameterised formulation for constant returns to scale is finalised by
substituting $\beta_0$ in the growth frontier:

$$\frac{\dot{y}}{y} = \beta_0 + \sum_i \beta_i\frac{\dot{x}_i}{x_i} = \frac{\dot{TFP}}{TFP} + \sum_i \beta_i\frac{\dot{x}_i}{x_i}$$

Incorporation of technological change in input output models
One criticism of input output models is that this technique is only suitable for
short run studies. This is because technology is already installed (i.e. ex-post)
and does not allow substitution among inputs. This may be corrected in two
ways. The first is to incorporate production functions where future choices are
made between several technologies that may be installed (i.e. ex-ante). The
second way is to complement this by generally allowing substitution of inputs
between industries, which is discussed in Chapter 4 Economic Models for
Climate Change Policy Analysis and Appendix 6 Benchmarking with Linear
Programming.

In each sector, there is a relationship between production, consumption and
emissions. The input output coefficients represent technology. In order to
project, the technology coefficients need to be constructed for future periods.
This has the effect of influencing production, balance of trade and emissions in
future years.

In an input output context there are two ways to propagate changes in
technology. The first is to extrapolate past trends into the future. While this is
relatively easy to do, it is little better than guessing. Using the popular
management example, the best place to find a drunk in a cornfield is to look in
the place where he was left. Analogously, the best estimate of future
technology is today's best technology. Investigations in the United Kingdom
and the Netherlands have found innovative changes in environmental
technology are very difficult to project.

Wilting et al. (2004; 2008) point out that traditional approaches, such as
Miller & Blair's (1985) use of marginal input coefficients and the University of
Maryland's extension of logistic growth curves in its INFORUM model, fare no
better.

The second approach for including technological change is to construct future
technical coefficients based on expert knowledge of future technologies (Rose
1984). For example, it may be assumed that today's best practice becomes
tomorrow's mid-level performance.
It is important to focus on the underlying causes of technology change, rather
than on its symptoms, which are the changes in the technical coefficients.

Wilting et al. (2004; 2008) use trend extrapolation to generate an autonomous
reference path. They combine this with expert analysis of specific technology
life cycles along this reference path.

a R p = a00  a  R
where :
p = the number of projection periods
R = the Reference scenario
a  R p = final year technical coefficient based on reference scenario R
a 00 = original technical coefficient
a  R = absolute change of the technical coefficient across projection horizon
= a00   R

The authors suggest there are two types of change that lead to technology
diffusion through the technical coefficients. The first is changes to primary
production processes, for example lower demand for herbicides due to a change
from conventional to organic agriculture. The second is more general
technological change due to better information and communication, or
substitutions between inputs.

Wilting et al. prepare future technical coefficients by surveying the mix of
existing primary technologies that underlie the current coefficients of input
output tables and by developing new coefficients for alternative technologies.
The difference between these coefficient sets is projected as a changing mix of
technologies independent of the existing primary technology.

The original mix of existing technologies is given by:

$$a_{00} = \sum_i \lambda_{i,00}\, a_{i,00}$$

subject to:

$$\sum_i \lambda_{i,00} = 1$$

where:

$a_{00}$ = input output technical coefficient

$a_{i,00}$ = original technical coefficient for technology $i$

$\lambda_{i,00}$ = original proportion of technology $i$

All coefficients are then related to the new technologies. The implicit
assumptions are that the pace of technological change in all industries is the
same and that the share of each technology in total production remains
constant. The ratio of the coefficients for technologies $i$ and $1$ is
constant, as follows:

$$a_{i,00} = \rho_i\, a_{1,00}$$

where:

$\rho_i$ = ratio of the technical coefficients of technologies $i$ and $1$

Therefore:

$$a_{00} = \sum_i \lambda_{i,00}\, \rho_i\, a_{1,00}$$

$$a_{1,00} = \frac{a_{00}}{\sum_i \lambda_{i,00}\, \rho_i}$$

Changes in non-primary technologies are assumed to lead to an improvement
factor $\omega$:

$$a(S)_{i,p} = \omega(S)\, a(R)_{i,p}$$

where:

$\omega(S)$ = general change coefficient of scenario $S$

Therefore:

$$a(S)_p = \sum_i \lambda(S)_{i,p}\, a(S)_{i,p}
        = \sum_i \lambda(S)_{i,p}\, \omega(S)\, a(R)_{i,p}
        = \sum_i \lambda(S)_{i,p}\, \omega(S)\, \rho_i\, a(R)_{1,p}
        = \sum_i \lambda(S)_{i,p}\, \omega(S)\, \rho_i \left[a_{1,00} + \Delta a(R)_1\right]$$

where:

$\lambda(S)_{i,p}$ = technology $i$'s share of sector total production for
scenario $S$, with $\sum_i \lambda(S)_{i,p} = 1$

The diffusion changes in labour, capital and emissions can be carried out in the
same way as this projection of technical change.
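A small numerical sketch may make the mechanics clearer. The following
Mathematica fragment applies the relations above to a hypothetical sector with
two technologies; all numbers are illustrative assumptions, not values from
Wilting et al., and the reference-path change $\Delta a(R)$ is held at zero so
that only the changed technology mix and the general improvement factor act:

(* Hypothetical two-technology example of projecting a technical coefficient *)
lambda00 = {0.8, 0.2};   (* original technology shares, summing to 1 *)
rho = {1.0, 0.6};        (* coefficient ratios ρi relative to technology 1 *)
a00 = 0.30;              (* observed input output coefficient *)
a100 = a00/Total[lambda00 rho];   (* implied coefficient of technology 1 *)
ai00 = rho a100;                  (* implied coefficients of both technologies *)
lambdaSp = {0.5, 0.5};   (* assumed future technology shares under scenario S *)
omegaS = 0.95;           (* assumed general improvement factor ω(S) *)
aSp = omegaS Total[lambdaSp ai00]  (* projected coefficient a(S)p ≈ 0.248 *)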

A3.4 Appendix references


Australian Bureau of Statistics, 2000. Australian National Accounts: Concepts,
Sources and Methods, Canberra: Australian Government. Available at:
http://www.abs.gov.au/Ausstats/abs@.nsf/0/D3E4A0528E3A0863CA2569A400061666?opendocument
[Accessed July 23, 2009].

Australian Bureau of Statistics, 2008. Australian National Accounts:
Input-Output Tables - Electronic Publication, 2004-05 Final, Canberra:
Australian Government. Available at:
http://www.abs.gov.au/AUSSTATS/abs@.nsf/mf/5209.0.55.001 [Accessed July 23,
2009].

Denny, M., Fuss, M. & Waverman, L., 1981. The measurement and interpretation of
total factor productivity in regulated industries. In T. Cowing & L. Waverman,
eds. Productivity Measurement in Regulated Industries. Academic Press, pp.
35-49.

Diewert, W., 1981. The theory of total factor productivity measurement in
regulated industries. In T. Cowing & L. Waverman, eds. Productivity Measurement
in Regulated Industries. Academic Press, pp. 81-92.

Miller, R.E. & Blair, P., 1985. Input Output Analysis: Foundations and
Extensions, Englewood Cliffs, N.J.: Prentice-Hall.

Rose, A., 1984. Technological change and input output analysis: an appraisal.
Socio-Economic Planning Sciences, 18, 305-18.

Sengupta, J.K., 2004. Estimating technical change by nonparametric methods.
Applied Economics, 36(5), 413-420.

Solow, R.M., 1957. Technical change and the aggregate production function. The
Review of Economics and Statistics, 39(3), 312-320.

Wilting, H.C., Faber, A. & Idenburg, A.M., 2004. Exploring technology scenarios
with an input-output model. In Brussels, Belgium, p. 19. Available at:
www.ecomod.net/conferences/iioa2004/iioa2004_papers/wilting.pdf [Accessed April
9, 2008].

Wilting, H.C., Faber, A. & Idenburg, A.M., 2008. Investigating new technologies
in a scenario context: description and application of an input-output method.
Journal of Cleaner Production, 16(1, Supplement 1), 102-112.

i With each element multiplied by 100 to comply with ABS scaling
ii With each element divided by 100 to comply with ABS scaling
iii As a “round trip” check on ABS data and computation, process the Table 9
Leontief matrix to the Leontief Inverse and compare the result with Table 10.
R code for this comparison is:

# load the ABS Table 9 Leontief dataset (technical coefficients)
# into matrix L_ABS:
L_ABS <- read.table("table09_L_indirect_imports.csv", header=FALSE,
    sep=",", na.strings="", strip.white=TRUE)
# load the ABS Table 10 Leontief Inverse dataset (total requirements
# coefficients) into matrix LI_ABS:
LI_ABS <- read.table("table10_LI_indirect_imports.csv", header=FALSE,
    sep=",", na.strings="", strip.white=TRUE)
# create the Leontief inverse (total requirements coefficients) by using
# solve to invert the identity matrix less A
# note that L_ABS and LI_ABS both arrive scaled up by 100
LI_CALC <- solve(diag(nrow(L_ABS)) - L_ABS/100)
# compare the calculated Leontief Inverse with ABS Table 10:
LI_CALC - LI_ABS/100

Appendix 4 Nordhaus DICE Model

A4.1 Basic scientific model


William Nordhaus' (2008, Appendix A, pp.205-8) scientific equations are based
on the United Nations IPCC Report The Physical Science Basis (2007). Damage
feedback increases the inputs required for production, while technological
change acts in the opposite direction, reducing production inputs through
growth in Total Factor Productivity.

Illustration 38: Schematic Implementation of Nordhaus Climate Equations

A4.2 Model equations


The equations for the economic and climate models are:

$$\begin{aligned}
\text{A.01}\quad & W = \sum_{t=1}^{T_{max}} U[c_t, L_t]\, R_t \\
\text{A.02}\quad & R_t = (1+\rho)^{-t} \\
\text{A.03}\quad & U[c_t, L_t] = L_t\, \frac{c_t^{1-\alpha}}{1-\alpha} \\
\text{A.04}\quad & Q_t = \Omega_t\, [1-\Lambda_t]\, A_t K_t^{\gamma} L_t^{1-\gamma} \\
\text{A.05}\quad & \Omega_t = \frac{1}{1 + \psi_1 T_{AT,t} + \psi_2 T_{AT,t}^2} \\
\text{A.06}\quad & \Lambda_t = \pi_t\, \theta_{1,t}\, \mu_t^{\theta_2} \\
\text{A.07}\quad & Q_t = C_t + I_t \\
\text{A.08}\quad & c_t = \frac{C_t}{L_t} \\
\text{A.09}\quad & K_t = I_t + (1-\delta_K)\, K_{t-1} \\
\text{A.10}\quad & E_{ind,t} = \sigma_t\, [1-\mu_t]\, A_t K_t^{\gamma} L_t^{1-\gamma} \\
\text{A.11}\quad & CCum \ge \sum_{t=0}^{T_{max}} E_{ind,t} \\
\text{A.12}\quad & E_t = E_{ind,t} + E_{land,t} \\
\text{A.13}\quad & M_{AT,t} = E_t + \phi_{11} M_{AT,t-1} + \phi_{21} M_{UP,t-1} \\
\text{A.14}\quad & M_{UP,t} = \phi_{12} M_{AT,t-1} + \phi_{22} M_{UP,t-1} + \phi_{32} M_{LO,t-1} \\
\text{A.15}\quad & M_{LO,t} = \phi_{23} M_{UP,t-1} + \phi_{33} M_{LO,t-1} \\
\text{A.16}\quad & F_t = \eta \log_2\!\left[\frac{M_{AT,t}}{M_{AT,1750}}\right] + F_{EX,t} \\
\text{A.17}\quad & T_{AT,t} = T_{AT,t-1} + \xi_1 \left[F_t - \xi_2 T_{AT,t-1} - \xi_3 (T_{AT,t-1} - T_{LO,t-1})\right] \\
\text{A.18}\quad & T_{LO,t} = T_{LO,t-1} + \xi_4 \left[T_{AT,t-1} - T_{LO,t-1}\right] \\
\text{A.19}\quad & \pi_t = \kappa_t^{\,1-\theta_2}
\end{aligned}$$

Where:

$$\begin{aligned}
\phi_{11} &= 1 - \phi_{12} \\
\phi_{21} &= \frac{587.473\,\phi_{12}}{1143.894} \\
\phi_{22} &= 1 - \phi_{21} - \phi_{23} \\
\phi_{32} &= \frac{1143.894\,\phi_{23}}{18340} \\
\phi_{33} &= 1 - \phi_{32} \\
\xi_2 &= \frac{\eta}{t2xco2} \\
gfacpop_t &= \frac{e^{pop_g (ord_t - 1)} - 1}{e^{pop_g (ord_t - 1)}} \\
l_t &= pop_0\,(1 - gfacpop_t) + gfacpop_t\, pop_a \\
ga_t &= ga_0\, e^{-dela \cdot 10 \cdot (ord_t - 1)} \\
a_{t+1} &= \frac{a_t}{1 - ga_t}, \qquad a_{t=1} = a_0
\end{aligned}$$

where $ord_t$ is the ordinate of $t$:

$$\begin{aligned}
g\sigma_t &= g\sigma_0\, e^{-d\sigma_1 \cdot 10 \cdot (ord_t - 1) - d\sigma_2 \cdot 10 \cdot (ord_t - 1)^2} \\
\sigma_{t+1} &= \frac{\sigma_t}{1 - g\sigma_{t+1}}, \qquad \sigma_{t=1} = \sigma_0 \\
\theta_{1,t} &= \frac{pback\,\sigma_t}{\theta_2} \cdot \frac{backrat - 1 + e^{-gback\,(ord_t - 1)}}{backrat} \\
E_{land,t} &= E_{land,0}\, (1 - 0.1)^{ord_t - 1} \\
r_t &= (1 + \rho)^{-10\,(ord_t - 1)} \\
fex_t &= \text{If } ord_t \ge 12 \text{ then } fex_0 + 0.36 \text{ else } fex_0 + 0.1\,(fex_1 - fex_0)(ord_t - 1) \\
\kappa_t &= \text{If } ord_t < 25 \text{ then } \kappa_{21} + (\kappa_2 - \kappa_{21})\, e^{-d\kappa\,(ord_t - 2)} \text{ else } \kappa_{21} \\
\kappa_{t=1} &= \kappa_1 \\
\pi_t &= \kappa_t^{\,1-\theta_2}
\end{aligned}$$

Supplementary model equations:

$$\begin{aligned}
ygross_t &= a_t\, k_t^{\gamma}\, l_t^{1-\gamma} \\
damages_t &= ygross_t\, (1 - \Omega_t) \\
ynet_t &= ygross_t\, \Omega_t \\
abate_t &= ygross_t\, \Lambda_t \\
cpc_t &= \frac{1000\, c_t}{l_t} \\
y_t &= \Omega_t\, (1 - \Lambda_t)\, ygross_t \\
ypc_t &= \frac{1000\, q_t}{l_t} \\
s_t &= \frac{i_t}{q_t + 0.001} \\
ri_t &= \frac{\gamma\, q_t}{k_t} - \frac{1 - (1 - \delta_K)^{10}}{10}
\end{aligned}$$

Boundary conditions:

$$\begin{aligned}
& k.lo_t = 100, \quad mat.lo_t = 10, \quad mup.lo_t = 100, \quad mlo.lo_t = 1000, \quad c.lo_t = 20, \\
& tlo.up_t = 20, \quad tlo.lo_t = -1, \quad tat.up_t = 20, \\
& mu.up_t = \mu_{limit}, \quad mu.fx_{t=1} = \mu_0, \quad ccum.up_{t=tlast} = ccumm
\end{aligned}$$

Preferences:

$\alpha = 2.0$, elasticity of marginal utility of consumption
$\rho = 0.015$, pure rate of social time preference per unit time

Population and technology:

$pop_0 = 6514$, world population 2005, millions
$pop_g = 0.35$, population growth rate per decade
$pop_a = 8600$, asymptotic population (was popasym)
$a_0 = 0.02722$, initial level of total factor productivity
$ga_0 = 0.092$, initial growth rate for technology per decade
$dela = 0.001$, decline rate of technological change per decade
$\delta_K = 0.10$, capital depreciation rate per period
$\gamma = 0.30$, elasticity of output with respect to capital (a pure number)
$q_0 = 61.1$, initial world gross output, trillions of 2005 US dollars
$k_0 = 137$, initial capital, trillions of 2005 US dollars

Emissions:

$\sigma_0 = 0.13418$, initial CO2-equivalent emissions to GNP ratio, 2005
$g\sigma_0 = -0.0730$, initial growth of sigma per decade
$d\sigma_1 = 0.003$, decarbonisation decline rate per decade
$d\sigma_2 = 0.0$, decarbonisation quadratic term
$eland_0 = 11.0$, land clearing carbon emissions 2005, GtC per decade

Carbon cycle:

$mat_{1750} = 596.4$, atmospheric concentration in 1750, GtC
$mat_0 = 808.9$, atmospheric concentration 2005, GtC
$mup_0 = 1255$, upper ocean concentration 2005, GtC
$mlo_0 = 18365$, lower ocean concentration 2005, GtC

Carbon cycle transfer parameters:

$\phi_{11} = 0.810712$, $\phi_{12} = 0.189288$, $\phi_{21} = 0.097212$,
$\phi_{22} = 0.852787$, $\phi_{23} = 0.05$, $\phi_{32} = 0.003119$,
$\phi_{33} = 0.996881$

Climate model:

$t2xco2 = 3$, equilibrium temperature impact of CO2 doubling, deg C
$\eta = 3.8$, estimated forcing of equilibrium CO2 doubling, watts per square metre
$fex_0 = -0.06$, estimate of 2000 forcings of non-CO2 GHGs
$fex_1 = 0.30$, estimate of 2100 forcings of non-CO2 GHGs
$tlo_0 = 0.0068$, 2000 lower ocean temperature change, deg C since 1900
$tat_0 = 0.7307$, 2000 atmospheric temperature change, deg C since 1900
$\xi_1 = 0.220$, $\xi_3 = 0.300$, $\xi_4 = 0.050$, parameters of the climate equation, flows per period

Climate damage function parameters, calibrated for a quadratic at 2.5 deg C in 2105:

$\psi_1 = 0.0000$, $\psi_2 = 0.0028388$, $\psi_3 = 2.00$

Abatement cost parameters:

$\theta_2 = 2.8$, control cost function exponential parameter
$pback = 1.17$, cost of backstop technology 2005, thousands of US dollars per tC
$backrat = 2$, ratio of initial to final backstop cost
$gback = 0.05$, initial cost decline of the backstop, per cent per decade
$\mu_{limit} = 1$, upper limit on the control rate

Participation parameters:

$\kappa_0 = 0.25372$, initial value of $\kappa$
$\kappa_1 = 1$, fraction of emissions under the control regime, 2005
$\kappa_2 = 1$, fraction of emissions under the control regime, 2015
$\kappa_{21} = 1$, fraction of emissions under the control regime, 2205
$d\kappa = 0$, participation decline rate
$\mu_0 = 0.005$, initial value of $\mu$, determined by the Kyoto Protocol
$ccumm = 6000$, maximum cumulative consumption of fossil fuels, GtC

Objective function scaling coefficients:

$scale1 = 194$, $scale2 = 381800$

Other parameters:

$a_t$: total factor productivity, units of productivity
$eland_t$: land clearing emissions of carbon, GtC per period
$fex_t$: exogenous radiative forcing of other greenhouse gases, watts per square metre since 1900
$ga_t$: productivity growth rate up to period $t$
$gfacpop_t$: population growth factor
$gl_t$: labour growth rate up to period $t$
$g\sigma_t$: energy efficiency cumulative improvement
$\theta_{1,t}$: adjustment cost for the backstop technology, a parameter of the abatement cost function
$\kappa_t$: participation rate, the controlled fraction of emissions, i.e. the proportion of emissions included by policy
$\xi_2$: climate model parameter
$l_t$: population and labour inputs, millions
$\pi_t$: participation cost markup, abatement cost with incomplete participation as a proportion of abatement cost with complete participation
$r_t$: average utility social time preference discount factor per time period
$\sigma_t$: ratio of uncontrolled industrial CO2-equivalent emissions to output, metric tons of carbon per unit of output in 2005 prices
$mlo_t$: mass of carbon in the lower ocean reservoir, GtC at the beginning of the period
$\mu_t$: emissions control rate, the proportion of uncontrolled emissions
$mup_t$: mass of carbon in the upper (shallow) ocean reservoir, GtC at the beginning of the period
$\Omega_t$: damage function, climate damages as a proportion of world output
$q_t$: gross world product, output of goods and services net of damages and abatement costs, trillions of 2005 US dollars
$ri_t$: real interest rate
$s_t$: gross savings rate as a fraction of gross world product
$tat_t$: global mean surface temperature, deg C increase since 1900
$tlo_t$: global mean lower ocean temperature, deg C increase since 1900
$u_t$: instantaneous utility function, utility per period
$w$: objective function, present value of utility, pure units
$y_t$: gross world product net of abatement and damages
$ygross_t$: gross world product gross of abatement and damages
$ynet_t$: output net of damages
$ypc_t$: income per capita, thousands of US dollars

Variables:

$abate_t$: abatement cost
$c_t$: consumption of goods and services, trillions of 2005 US dollars
$ccum_t$: cumulative emissions
$cpc_t$: per capita consumption of goods and services, thousands of 2005 US dollars
$damages_t$: damages
$e_t$: total of industrial and land CO2-equivalent emissions, GtC per period
$eind_t$: industrial carbon emissions, GtC per period
$f_t$: total radiative forcing, watts per square metre since 1900
$i_t$: investment, trillions of 2005 US dollars
$k_t$: capital stock, trillions of 2005 US dollars
$\Lambda_t$: abatement cost function, cost of emissions reductions as a proportion of world output
$mat_t$: mass of carbon in the atmosphere reservoir at the beginning of the period, GtC
$matav_t$: average atmospheric concentration, GtC
$mlo_t$: mass of carbon in the lower ocean reservoir at the beginning of the period, GtC
$\mu_t$: emissions control rate, the proportion of uncontrolled emissions
$mup_t$: mass of carbon in the upper (shallow) ocean reservoir at the beginning of the period, GtC
$\Omega_t$: damage function, climate damages as a proportion of world output
$q_t$: gross world product, output of goods and services net of abatement and damages, trillions of 2005 US dollars
$ri_t$: real interest rate
$s_t$: gross savings rate as a fraction of gross world product
$tat_t$: global mean surface temperature, deg C increase since 1900
$tlo_t$: global mean lower ocean temperature, deg C increase since 1900
$u_t$: instantaneous utility function, utility per period
$y_t$: gross world product net of abatement and damages
$ygross_t$: gross world product gross of abatement and damages
$ynet_t$: output net of damages
$ypc_t$: income per capita, thousands of US dollars
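To illustrate how the carbon cycle equations A.13 to A.15 propagate the
reservoir stocks, the following Mathematica sketch advances the three
reservoirs one decade from the 2005 values above; the emissions figure is an
assumption for illustration only:

(* One decadal step of the three-reservoir carbon cycle, equations A.13-A.15 *)
{mat0, mup0, mlo0} = {808.9, 1255., 18365.};
(* rows give outflows from atmosphere, upper ocean and lower ocean; *)
(* each row sums to one, so carbon is conserved in the transfers *)
phi = {{0.810712, 0.189288, 0.},
       {0.097212, 0.852787, 0.05},
       {0., 0.003119, 0.996881}};
e = 100.;  (* assumed total emissions over the decade, GtC *)
Transpose[phi].{mat0, mup0, mlo0} + {e, 0, 0}
(* gives {mat1, mup1, mlo1} at the beginning of the next period *)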

A4.3 Implementation issues

Preference for consumption over investment


The most important implementation issue is DICE's preference for
consumption over investment, which arises because consumption is maximised
with respect to capital. This can be seen by examining the equations for the
Cobb-Douglas production function, consumption and capital:

$$\begin{aligned}
\text{A.04}\quad & Q_t = \Omega_t\, [1-\Lambda_t]\, A_t K_t^{\gamma} L_t^{1-\gamma} \\
\text{A.07}\quad & Q_t = C_t + I_t \\
\text{A.09}\quad & K_t = I_t + (1-\delta_K)\, K_{t-1}
\end{aligned}$$

For a particular period $t$, the term $\Omega_t [1-\Lambda_t] A_t L_t^{1-\gamma}$
is an equilibrium settled factor with both exogenous and endogenous components,
having only a second order feedback effect and no effect at all if the
emissions control rate $\mu$ is constant. The depreciated capital from the
previous period, $(1-\delta_K) K_{t-1}$, is constant because $K_{t-1}$ is
predetermined by time $t$. Therefore, the three equations above may be
simplified by substituting the constants $\{a, b\}$ for these two terms
respectively:

$$\begin{aligned}
\text{A.04}\quad & Q_t = a K_t^{\gamma} \\
\text{A.07}\quad & Q_t = C_t + I_t \\
\text{A.09}\quad & K_t = I_t + b
\end{aligned}$$

Rearranging and simplifying:

$$C_t = a K_t^{\gamma} - K_t + b$$

Maximising $C_t$ by differentiating with respect to $K_t$ and equating to zero
provides the maximum condition:

$$a \gamma K_t^{\gamma - 1} - 1 = 0$$
$$K_t = (a\gamma)^{1/(1-\gamma)} = \bar{k}$$

From this, it may be noted that $K_t$ is just the constant $\bar{k}$.
Substituting back for the values of $C_t$ and $Q_t$ at the maximum, we find
that each is also just a constant:

$$C_t = a \bar{k}^{\gamma} - \bar{k} + b$$
$$Q_t = a \bar{k}^{\gamma}$$

For the period, the increments of production, capital and consumption
$\{Q_t, K_t, C_t\}$ are all constant and predetermined by the values of the
various factors and starting capital. Indeed, if the emissions control rate
$\mu$ is constant and does not give rise to changes in abatement and economic
damages $\{\Lambda_t, \Omega_t\}$ respectively, then the whole economic model
is also predetermined.
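The first order condition above is easily verified symbolically; a minimal
Mathematica sketch, with a and b standing for the two constants assumed in the
text:

(* Check that C = a K^γ − K + b is maximised at K = (a γ)^(1/(1−γ)) *)
c[k_] := a k^γ - k + b;
kstar = (a γ)^(1/(1 - γ));
Simplify[D[c[k], k] /. k -> kstar, Assumptions -> a > 0 && 0 < γ < 1]
(* returns 0, confirming the first order condition *)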

However, it is not the case that the emissions control rate $\mu$ is a
constant. Therefore, over the intertemporal space of the projection period
DICE optimises with respect to $\mu$ as the primary optimisation factor and
capital $K_t$ as an important, albeit now secondary, factor. Other aspects of
DICE performance are discussed in Appendix 5 Acyclic solver for unconstrained
optimisation.

Equation A.9

Nordhaus' implementation of his economic-climate model in the General
Algebraic Modelling System (GAMS) has a number of aberrations from the
equations presented above.

The equation for $K_t$ is specified as a function of $I_t$:

$$\text{A.09}\quad K_t = I_t + (1-\delta_K)\, K_{t-1}$$

However, equation A.9 is implemented with $I_{t-1}$ as follows:

$$\text{A.09}\quad K_t = I_{t-1} + (1-\delta_K)\, K_{t-1}$$

This alternative formulation changes the interpretation of $K_t$ from the
capital at the end of period $t$ to the capital at the beginning of period
$t$. This is an inconsistent treatment between the normal interpretation of the
equations and the manner in which they are implemented.
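The effect of the timing change can be seen with a toy calculation; the
following Mathematica sketch compares the two recursions under an assumed
investment path (the numbers are illustrative only, not model values):

(* End-of-period versus beginning-of-period capital recursions *)
inv[t_] := 10 + t;    (* assumed investment path *)
kEnd[0] = 137; kBeg[0] = 137;
kEnd[t_] := kEnd[t] = inv[t] + 0.9 kEnd[t - 1];      (* K_t = I_t + (1−δ)K_(t−1) *)
kBeg[t_] := kBeg[t] = inv[t - 1] + 0.9 kBeg[t - 1];  (* K_t = I_(t−1) + (1−δ)K_(t−1) *)
Table[{t, kEnd[t], kBeg[t]}, {t, 1, 3}]
(* the series differ because capital embodies investment one period apart *)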

Utility function

The utility function formula $U_t$ is different in two ways:

(a) Population $L_t$ is used in the instantaneous utility function $U_t$
rather than in the summation objective function A.01. However, this is merely
a rearrangement and has little effect on the outcome.

For example, the equations A.01 and A.03 are provided above as:

$$\text{A.01}\quad W = \sum_{t=1}^{T_{max}} U[c_t, L_t]\, R_t$$

$$\text{A.03}\quad U[c_t, L_t] = L_t\, \frac{c_t^{1-\alpha}}{1-\alpha}$$

However, the implementation is in the form:

$$\text{A.01}\quad W = \sum_{t=1}^{T_{max}} L_t\, U[c_t, L_t]\, R_t$$

$$\text{A.03}\quad U[c_t, L_t] = \frac{c_t^{1-\alpha}}{1-\alpha}$$

(b) There is an extra term in equation A.03, which is implemented as:

$$\text{A.03}\quad U[c_t, L_t] = \frac{c_t^{1-\alpha} - 1}{1-\alpha}$$

As shown in Chapter 5 Model development, the reason often given for
introducing this additional term is that the function reduces to $\ln(c)$ in
the special case of $\alpha = 1$. Nordhaus uses $\alpha = 2$ and so does not
make this simplification. However, the discussion in Chapter 5 also shows that
the effective discount rate is relatively unstable in the initial formulation
and it is likely that this supplementary form of the welfare function provides
more stability. The stability could be assessed in the same way that the
initial formulation was investigated in Chapter 5.
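That the extra term is innocuous for the optimum can also be seen directly: it
shifts instantaneous utility by a constant, which enters the objective only
through terms independent of the decision variables. A quick Mathematica
sketch for Nordhaus' value $\alpha = 2$:

(* The implemented utility differs from the printed form by a constant *)
u1[c_] := c^(1 - 2)/(1 - 2);         (* A.03 as printed *)
u2[c_] := (c^(1 - 2) - 1)/(1 - 2);   (* A.03 as implemented *)
Simplify[u2[c] - u1[c]]              (* returns 1, independent of c *)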

Equations A.11 and A.12

These equations are specified as:

$$\text{A.11}\quad CCum \ge \sum_{t=0}^{T_{max}} E_{ind,t}$$

$$\text{A.12}\quad E_t = E_{ind,t} + E_{land,t}$$

There are two differences in the implemented model. The first difference is
that equation A.11 is specified with $E_{ind,t}$, whereas the model
implementation uses $E_t$:

$$\text{A.11}\quad CCum = \sum_{t=0}^{T_{max}} E_t$$

$$\text{A.12}\quad E_t = E_{ind,t} + E_{land,t}$$

While the output $CCum$ is used only in the sense of setting a maximum
constraint, there is a major inconsistency in subjecting total emissions
(industrial plus land) to the maximum conceived for industrial emissions
alone. Secondly, the use of $CCum$ in further discussion and analysis is
inconsistent and likely to be highly confusing.

The second difference is an alternative formulation that regards $CCum_{t+1}$
as the carried forward amount in period $t$, and equivalently $CCum_t$ as the
brought forward amount in period $t$:

$$\text{A.11}\quad CCum_{t+1} = E_t + CCum_t$$

Normally, this would be implemented in a more straightforward manner using the
sum of current period emissions and the accumulated emissions at the end of
the previous period:

$$\text{A.11}\quad CCum_{t+1} = E_{t+1} + CCum_t$$

There is an inconsistency in Nordhaus' interpretation of $CCum$ because the
constraint of a maximum is applied to the brought forward quantity rather than
the usual understanding of the carried forward quantity.
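The bookkeeping difference is easy to see numerically; a Mathematica sketch
with an assumed constant emissions path (illustrative values only):

(* Brought-forward (as implemented) versus carried-forward accumulation *)
e[t_] := 50;   (* assumed emissions, GtC per period *)
ccumB[1] = 0;                                     (* brought forward at t = 1 *)
ccumB[t_] := ccumB[t] = e[t - 1] + ccumB[t - 1];
ccumC[1] = e[1];                                  (* carried forward at t = 1 *)
ccumC[t_] := ccumC[t] = e[t] + ccumC[t - 1];
Table[{t, ccumB[t], ccumC[t]}, {t, 1, 4}]
(* a cap applied to ccumB binds one period later than the same cap on ccumC *)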

Equation A.13

This equation is specified as:

$$\text{A.13}\quad M_{AT,t} = E_t + \phi_{11} M_{AT,t-1} + \phi_{21} M_{UP,t-1}$$

However, it is implemented inconsistently with $E_{t-1}$ instead of $E_t$, as
follows:

$$\text{A.13}\quad M_{AT,t} = E_{t-1} + \phi_{11} M_{AT,t-1} + \phi_{21} M_{UP,t-1}$$

Nordhaus DICE Brief (GAMS code version)


The following Nordhaus DICE model is translated from William Nordhaus'
GAMS model (Nordhaus 2007). The model uses the Phase III acyclic
topological processor described in Appendix 5 Acyclic solver for
unconstrained optimisation.

(* Nordhaus Brief Climate Change Policy Model May 2008 *)


(* Note: periods are decades, with the decade to 2005 being period zero
*)
(* Stuart Nettleton Topological Model September 2008 *)
(* Nordhaus Brief Code equations modified only in respect of rendering
acyclic *)
(* Note: Nordhaus Brief Code differs from the Nordhaus Book Code *)

starttime=AbsoluteTime[];
periods = 5; (* projection periods *)
optimpenalty=0; (* optimisation return if iteration non-real *)
<<Combinatorica`

(* objective function *)
(* this program always minimises, so negative for maximisation *)
obj = {-cumu[periods]};

(* optimisation variables: topology start nodes ... *)


(* .. to have FindMinimum use the fast & robust Brent's Method by *)
(* default, which avoids the use of derivatives, make sure there *)
(* are no constraints and set two start variables. If possible *)
(* make sure there is one on either side of the zero crossing. *)
(* If constraints are present, FindMinimum will use the Interior *)
(* Point Method and only one start variable should be present. *)
(* This should be your best estimate of the solution. While *)
(* Brent's Method is much faster than Interior Point, both execute *)
(* much faster and use considerably less memory than NMinimize. *)
(* Note that if an optimisation variable is set here but later *)
(* is defined as an initial variable, the latter is used. *)

opt ={
(* emissions control rate, fraction of uncontrolled emissions *)
{ μ[t],0.01,0.5},
(* capital stock *) {k[t],300, 2000}
};

(* exogenous parameters *)
exogparams={
(* population 2005 millions *) pop0 → 6514,
(* population growth rate per decade *) popg → 0.35,
(* population asymptote *) popa → 8600,
(* technology growth rate per decade *) ga0 → 0.092,
(* technology depreciation rate per decade *) dela → 0.001,
(* equivalent carbon growth parameter *) gσ0 → -0.0730,
(* decline rate of decarbonisation per decade parameters*) dσ1 → 0.003,
dσ2 → 0.000,
(* backstop technology cost per tonne of carbon 2005 *) pback → 1.17,
(* backstop technology, final to initial cost ratio *) backrat → 2,
(* backstop technology, rate of decline in cost *) gback → 0.05,
(* pure rate of social time preference *) ρ → 0.015,
(* radiative forcing of non-carbon gases in 2000 & 2001 *) fex0 → -0.06,
fex1 → 0.30,
(* emissions in control regime parameters for 2005, 2015, 2205 *) κ1->1,
κ2 → 1, κ21 → 1,
(* emissions in control regime decline rate*) dκ → 0,
(* abatement cost control parameter *) θ → 2.8,
(* carbon emissions from land use 2005 *) eland0 → 11};

(* initial exogvars *)
exoginitial ={
gfacpop[1] → 0,gfacpop[0] → 0,
ga[0] → ga0,gσ[0] → gσ0,
a[1] → 0.02722, a[0] → 0.02722,
σ[1] → 0.13418, σ[0] → 0.13418,
eland[0] → eland0,
fex[0] → fex0, fex[1] → fex1,
κ[1] → 0.25372, κ[0] → 0.25372
};

(* exogenous equations *)
exogeqns ={
(* population growth factor *) gfacpop[t]==(Exp[popg*(t-1)]-
1)/Exp[popg*(t-1)],
(* population level *) l[t]== pop0*(1-gfacpop[t])+gfacpop[t]*popa,
(* productivity growth rate *) ga[t] == ga0*Exp[-dela*10*(t-1)],
(* total factor productivity *) a[t] ==a[t-1]/(1-ga[t-1]),
(* energy efficiency cumulative improvement *) gσ[t] ==gσ0*Exp[-
dσ1*10*(t-1)-dσ2*10*(t-1)^2],
(* carbon emissions output ratio *) σ[t] == σ[t-1]/(1-gσ[t]),
(* backstop technology adjusted cost *) Θ[t] ==(pback* σ[t]/
θ)*((backrat-1+Exp[-gback*(t-1)])/backrat),
(* carbon emissions from land use sources *) eland[t] ==eland0*(1-
0.1)^(t-1),
(* social time preference discount factor *) r[t] ==1/(1+ ρ)^(10*(t-1)),
(* radiative forcing of other greenhouse gases *) fex[t] ==fex0 + If[ t
≤ 12,0.36, 0.1*(fex1-fex0)*(t-1)],
(* fraction of emissions in control regime *) κ[t] ==If[ t ≥ 25, κ21,
κ21 + ( κ2- κ21)*Exp[-dκ*(t-2)]],
(* ratio of abatement cost with incomplete participation to that with
complete participation *) Π[t] == κ[t]^(1- θ)
};

(* endogenous parameters *)

endogparams={
(* elasticity of marginal utility of consumption *) α → 2.0,
(* elasticity of output with respect to capital in production function *)
γ → 0.30,
(* depreciation rate of capital *) δ → 0.1,
(* temperature forcing parameter *) η → 3.8,
(* temperature change with carbon doubling *) t2xco2 → 3,
(* damage function parameters *) ψ1 → 0, ψ2 → 0.0028388, ψ3 → 2,
(* climate equation parameters *) ξ1 → 0.22, ξ2 → η/t2xco2, ξ3 → 0.3, ξ4
→ 0.05,
(* carbon cycle parameters *) φ11 → 1- φ12a, φ12 → 0.189288, φ12a →
0.189288,
(* carbon cycle parameters *) φ21 → 587.473* φ12a/1143.894, φ22 → 1- φ21
– φ23a,
(* carbon cycle parameters *) φ23 → 0.05, φ23a → 0.05, φ32 → 1143.894*
φ23a/18340, φ33 → 1- φ32,
(* mass of carbon in atmosphere, pre-industrial *) mat1750 → 596.4,
(* μlim → 1, *)
(* ceindlim → 6000, *)
(* scaling factor *) scale1 → 194
};

(* endogenous initial *)
endoginitial = {
y[0] → 61.1, c[0] → 30,
inv[0] → 31.1,
k[1] → 137, k[0] → 137,
ceind[1] → 0, ceind[0] → 0,
Λ[1] → 0.66203, Λ[0] → 0.66203,
Ω[1] → 0.99849, Ω[0] → 0.99849,
mat[1] → 808.9, mat[0] → 808.9,
mup[1] → 1255, mup[0] → 1255,
mlo[1] → 18365, mlo[0] → 18365,
tat[1] → 0.7307, tat[0] → 0.7307,
tlo[1] → 0.0068, tlo[0] → 0.0068,
μ[1] → 0.005, μ[0] → 0.005,
cumu[1] → 381800, cumu[0] → 381800
};

(* endogenous variables *)
(* sn modifications of Nordhaus to render acyclic *)
endogeqns={
(* net present value of utility, the objective function *) cumu[t] ==
cumu[t-1]+(l[t]*u[t]*r[t]*10)/scale1,
(* utility function *) u[t] == ((c[t]/l[t])^(1- α)-1)/(1- α),
(* consumption of goods and services *) c[t] == y[t]-inv[t],
(* output of goods and services, net of abatement and damages *) y[t] ==
Ω[t]*(1- Λ[t])*ygr[t],
(* ratio of abatement to world output *) Λ[t] == Π[t] * Θ[t] * μ[t]^ θ,
(* output of goods and services, gross *) ygr[t] == a[t]* k[t]^ γ
*l[t]^(1- γ),
(* ratio of climate damages to world output *) Ω[t] == 1/(1+ ψ1*tat[t]+
ψ2*(tat[t]^ ψ3)),
(* global mean terrestrial temperature *) tat[t] == tat[t-1]+
ξ1*(for[t]- ξ2*tat[t-1]- ξ3*(tat[t-1]-tlo[t-1])),
(* global mean lower ocean temperature *) tlo[t] ==tlo[t-1]+ ξ4*(tat[t-
1]-tlo[t-1]),
(* radiative forcing total *) for[t] == η*Log[2,((mat[t]+mat[t-
1])/2)/mat1750]+fex[t],
(* mass of carbon in atmosphere *) mat[t] == eind[t] + φ11*mat[t-1] +
φ21*mup[t-1],
(* carbon emissions *) eind[t] == 10 * σ[t] *(1- μ[t]) *ygr[t]
+eland[t],

(* mass of carbon in lower oceans *) mlo[t] == φ23*mup[t-1]+ φ33*mlo[t-
1],
(* mass of carbon in upper oceans *) mup[t] == φ12*mat[t-1]+ φ22*mup[t-
1] + φ32*mlo[t-1],
(* carbon emissions cumulative *) ceind[t] == eind[t]+ceind[t-1],
(* capital stock as function of investment *) k[t] == 10*inv[t]+((1-
δ)^10)*k[t-1]
(* (* climate damages, gross *) dam[t] == ygr[t]*(1- Ω[t]),*)
(* (* savings ratio *) s[t] == inv[t]/y[t],*)
(* (* interest rate *) ri[t] == γ*y[t]/k[t] -(1-(1- δ)^10)/10,*)
(* (* consumption of goods and services, per capita *) cpc[t] ==
c[t]*1000/l[t],*)
(* (* output of goods and services, net per capita *) pcy[t] ==
y[t]*1000/l[t],*)
};

(* endogenous constraints *)
endogcons={(*
k[t] ≤ 10*inv[t]+((1- δ)^10)*k[t-1],
0.02*k[periods] ≤ inv[periods],
100 ≤ k[t],
20 ≤ c[t],
0 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t]<= 20,
tat[t] ≤ 20,
ceind[t] ≤ ceindlim,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],
0 ≤ eind[t],
0 ≤ μ[t] ≤ μlim *)
};

(* solve topological equations *)

toponodes[eqns_]:=Module[
{eqnvars,flatvars,eqnlist,mysource,mysink,edges1,edges2,edges3,edges,vert
ices2,vertices,forwardgraph,networkflows,forwardflows,forwardedges,revise
dedges,revisedgraph,toposort,sortedequations,
sortedvertices,posfirstequation,startvertices},
eqnvars=Map[Cases[eqns[[#]],_Symbol[_Integer],Infinity]&,Range[Length[eqn
s]]];
flatvars=Union[Flatten[eqnvars]];
eqnlist=Range[Length[eqns]];
f1[a_,b_]:={a,b};
edges1=Map[f1[mysource,flatvars[[#]]]&,Range[Length[flatvars]]];
edges2=Flatten[Map[Outer[f1,eqnvars[[#]],{eqnlist[[#]]}]&,eqnlist],2];
edges3=Map[f1[eqnlist[[#]],mysink]&,eqnlist];
edges=Join[edges1,edges2,edges3];
vertices2= Join[flatvars,eqnlist];
vertices=Join[{mysource},vertices2,{mysink}] ;
forwardgraph=MakeGraph[vertices,(MemberQ[edges,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[forwardgraph],Print["*** ERROR: FORWARD GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]];
networkflows=NetworkFlow[forwardgraph,1,Length[vertices],Edge];
forwardflows=Cases[networkflows[[All,1,All]],
{x_/;x>1,y_/;y<Length[vertices]}];
forwardedges = Map[vertices[[#]]&,forwardflows];
revisededges =
Join[Complement[edges2,forwardedges],Map[Reverse,forwardedges]];

revisedgraph=MakeGraph[vertices2,(MemberQ[revisededges ,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[revisedgraph],Print["*** ERROR: REVISED GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"];Exit[]];
(* Print[ShowGraph[revisedgraph]];*)
toposort=TopologicalSort[revisedgraph];
(*Print[toposort];*)
sortedvertices=Cases[vertices2[[toposort]],_Symbol[_Integer],1];
sortedequations = Cases[vertices2[[toposort]],_Integer,1];
posfirstequation=Apply[Plus,First[Position[vertices2[[toposort]],_Integer
,1]]];
startvertices = vertices2[[toposort[[Range[posfirstequation-1]]]]];
(*startvertices
=vertices2[[Select[vertices,InDegree[revisedgraph,#]==0&]]];*)
Return[{sortedequations,sortedvertices, startvertices}]
];

(* calculate exogenous variables *)


exoginitialextended = Cases[Union[Flatten[Map[exoginitial/.t → # &,
Range[periods]]]]//.exogparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
exogextended= Cases[Union[Flatten[Map[exogeqns/.t → # &,
Range[periods]]]]//.Join[exogparams, exoginitialextended]
/.x_Symbol[i_Integer /;i < 0] → 0 ,Except[False|True|Null]];

exogtoposolver[equations_]:=Module[
{eqnorder,soleqn,solvar,outputs={},soltest1,soltest2},
eqnorder = toponodes[equations/.Equal->Subtract][[1]];
For[i=1,i<=Length[eqnorder],i++,
soleqn =equations[[eqnorder[[i]]]]//.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING EXOGENOUS CALCULATIONS A VARIABLE HAD NO
SOLUTION ***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];

exogaugmented=Join[exoginitialextended,exogtoposolver[exogextended]];
Print["The exogenous variables calculate as: ", Sort[exogaugmented]];

(* calculate endogenous variables *)


interimparams = Join[exogparams, exogaugmented, endogparams];
endoginitialextended=endoginitial//.interimparams ;
allparams = Join[interimparams , endoginitialextended];
endogextended= Cases[Union[Flatten[Map[endogeqns/.t -> # &,
Range[periods]]]]//.allparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
endogtoponodes= toponodes[endogextended/.Equal->Subtract];
endogeqnorder =endogtoponodes[[1]];
Print["Directed acyclic graphs and topological processing have been
completed.... optimisation commencing..."];
(*Print["Topological order of variables:" ,endogtoponodes[[2]]];*)

Print["Please note that start vertices of the endogenous equation
topology have not been automatically included as optimisation variables.
This is for flexibility as you may wish to use a surrogate based on your
observation of an alternative topological sort order. So please check the
endogenous start vertices here to confirm that these variables (or your
surrogates) have been included with optimisation variables at the start:
",
endogtoponodes[[3]]
];
lenendogeqnorder=Length[endogeqnorder];
endogextendedordered= endogextended[[endogeqnorder]];

(* calculate endogenous constraints *)


endogconextended= Cases[Union[Flatten[Map[endogcons/.t → # &,
Range[periods]]]]//.allparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
endogconextvars= Union[Cases[endogconextended,_Symbol[_Integer]
,Infinity]];

(* calculate objective vars *)


objvar= Cases[Union[Flatten[Map[obj/.t → # &,
Range[periods]]]]//.allparams/.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];

(* prepare the independent optimising variables *)


optimous=Union[Cases[Apply[List,Map[opt /.t → # //.allparams
&,Range[periods]]],
{_Symbol[_Integer],_Integer|_Real}|
{_Symbol[_Integer],_Integer|_Real,_Integer|_Real}|
{_Symbol[_Integer],_Integer|_Real,_Integer|_Real,_Integer|_Real},
Infinity]];
optimousvars= Union[Cases[optimous//.allparams,_Symbol[_Integer]
,Infinity]];

(* include any additional optimising variables arising from the leaves


and endogenous constraints *)
variables=Union[Join[optimous,Partition[Complement[endogconextvars,optimo
usvars],1]]];
Print["The optimising variables being used are: ",variables];
finalvars=Union[Cases[variables,_Symbol[_Integer] ,Infinity]];

(* commence solve *)
(* objective function ... *)
endogoptimsolver[nmvars_]:=Module[
{soleqn,solvar,outputs={},soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** Warning: during optimisation ",solvar," became complex or null
in the equation ",soleqn," so the specified optimisation penalty of
",optimpenalty," was applied ***"];Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];

Apply[Plus,objvar/.outputs]
]/; VectorQ[nmvars,NumberQ];

(* optimisation phase ... *)


(* ... use FindMinimum to optimise the endogenous equations .. NMinimize
exhausts 16Gb of memory *)
endogoptimsolution=If[(Length[endogconextvars]==0),
FindMinimum[endogoptimsolver[finalvars],variables],
FindMinimum[{endogoptimsolver[finalvars],endogconextended},variables]
];

Print["The solution to the endogenous optimising variables is: ",


endogoptimsolution];

(* calculate final outputs by back substitution *)


endogoutputsolver[nmvars_]:=Module[
{soleqn,solvar,outputs=nmvars, soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING BACKSUBSTITUTION A VARIABLE HAD NO SOLUTION
***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];

endogaugmented =
Join[endoginitialextended,endogoutputsolver[endogoptimsolution[[2]]]];
Print["The final outputs of the endogenous equations are: "
,Sort[endogaugmented]];

Print["Execution time: ",Round[N[AbsoluteTime[]-starttime]/60,0.01],"


minutes using ", Round[N[MaxMemoryUsed[]/10^6],0.01]," Mb; with ",
Length[finalvars]," optimising variables and ",
Length[exogaugmented]+Length[endogaugmented], " final variables in total;
",
Length[exogparams]," exogenous parameters; ",
Length[exoginitial]," exogenous initial variables; ",
Length[exogeqns]," exogenous equations; ",
Length[exogaugmented]," final exogenous variables; ",
Length[endogparams]," endogenous parameters; ",
Length[endoginitial]," endogenous initial variables; ",
Length[endogeqns]," endogenous equations; ",
Length[endogaugmented], " final endogenous variables"
];

Nordhaus DICE Brief (Book version)


The following Nordhaus DICE model is built from equations in A Question of
Balance: Weighing the Options on Global Warming Policies (2008). The model
uses the Phase III acyclic topological processor described in Appendix 5
Acyclic solver for unconstrained optimisation.

(* Nordhaus Brief Climate Change Policy Model May 2008 *)


(* Note: periods are decades, with the decade to 2005 being period zero
*)
(* Stuart Nettleton September 2008 *)
(* Nordhaus Book equations modified only in respect of rendering acyclic
*)
(* Note: Nordhaus Book differs from the Nordhaus Brief Code *)

starttime=AbsoluteTime[];
periods = 4; (* projection periods *)
optimpenalty=0; (* optimisation return if iteration non-real *)
<<Combinatorica`

(* objective function *)
(* this program always minimises, so negative for maximisation *)
obj = {-cumu[periods]};

(* optimisation variables: topology start nodes ... *)


(* .. to use the fast & robust Brent's Method by default, *)
(* which avoids the use of derivatives, make sure there *)
(* are no constraints and set two start variables. If possible *)
(* make sure there is one on either side of the zero crossing. *)
(* If formal constraints are present, FindMinimum will use the Interior
*)
(* Point Method and only one start variable should be present. *)
(* This should be your best estimate of the solution. While *)
(* Brent's Method is much faster than Interior Point, both execute *)
(* much faster and use considerably less memory than NMinimize. *)
(* Note that if an optimisation variable is set here but later *)
(* is defined as an initial variable, the latter is used. *)
opt ={
(* emissions control rate, fraction of uncontrolled emissions *)
{ μ[t],0.05,0.2},
(* capital stock *) {k[t],80, 150}
};
(* exogenous parameters *)
exogparams={
(* population 2005 millions *) pop0 → 6514,
(* population growth rate per decade *) popg → 0.35,
(* population asymptote *) popa → 8600,
(* technology growth rate per decade *) ga0 → 0.092,
(* technology depreciation rate per decade *) dela → 0.001,
(* equivalent carbon growth parameter *) gσ0 → -0.0730,
(* decline rate of decarbonisation per decade parameters*) dσ1 → 0.003,
dσ2 → 0.000,
(* backstop technology cost per tonne of carbon 2005 *) pback → 1.17,
(* backstop technology, final to initial cost ratio *) backrat → 2,
(* backstop technology, rate of decline in cost *) gback → 0.05,
(* pure rate of social time preference *) ρ → 0.015,
(* radiative forcing of non-carbon gases in 2000 & 2001 *) fex0 → -0.06,
fex1 → 0.30,
(* emissions in control regime parameters for 2005, 2015, 2205 *) κ1 → 1,
κ 2->1, κ21->1,
(* emissions in control regime decline rate*) dκ → 0,
(* abatement cost control parameter *) θ → 2.8,
(* carbon emissions from land use 2005 *) eland0 → 11
};

(* initial exogvars *)
exoginitial ={
gfacpop[1] → 0,gfacpop[0] → 0,
ga[0] → ga0, gσ[0] → gσ0,
a[1] → 0.02722, a[0] → 0.02722,
σ[1] → 0.13418, σ[0] → 0.13418,
eland[0] → eland0,
fex[0] → fex0, fex[1] → fex1,
κ[1] → 0.25372, κ[0] → 0.25372
};
(* exogenous equations *)
exogeqns ={
(* total factor productivity *) a[t] ==a[t-1]/(1-ga[t-1]),
(* social time preference discount factor *) r[t] == 1/(1+ ρ)^(10*(t-1)),
(* carbon emissions from land use sources *) eland[t] == eland0*(1-
0.1)^(t-1),
(* radiative forcing of other greenhouse gases *) fex[t] == fex0 + If[ t
≤ 12, 0.36, 0.1*(fex1-fex0)*(t-1)],
(* ratio of abatement cost with incomplete participation to that with
complete participation *) Π[t] == κ[t]^(1-θ),
(* population growth factor *) gfacpop[t]==(Exp[popg*(t-1)]-
1)/Exp[popg*(t-1)],
(* population level *) l[t] == pop0*(1-gfacpop[t])+gfacpop[t]*popa,
(* productivity growth rate *) ga[t] == ga0*Exp[-dela*10*(t-1)],
(* energy efficiency cumulative improvement *) gσ[t] == gσ0*Exp[-
dσ1*10*(t-1)-dσ2*10*(t-1)^2],
(* carbon emissions output ratio *) σ[t] == σ[t-1]/(1-gσ[t]),
(* backstop technology adjusted cost *) Θ[t] ==(pback*σ[t]/θ)*((backrat-
1+Exp[-gback*(t-1)])/backrat),
(* fraction of emissions in control regime *) κ[t] ==If[t ≥ 25, κ21, κ21
+ (κ2- κ21)*Exp[-dκ*(t-2)]]
};
(* endogenous parameters *)
endogparams={
(* elasticity of marginal utility of consumption *) α →2.0,
(* elasticity of output with respect to capital in production function *)
γ → 0.30,
(* depreciation rate of capital *) δ → 0.1,
(* temperature forcing parameter *) η → 3.8,
(* temperature change with carbon doubling *) t2xco2->3,
(* damage function parameters *) ψ1 → 0, ψ2 → 0.0028388, ψ3 → 2,
(* climate equation parameters *) ξ1 → 0.22, ξ2 → η/t2xco2, ξ3 → 0.3, ξ4
→ 0.05,
(* carbon cycle parameters *) φ11 → 1- φ12a, φ12 → 0.189288, φ12a →
0.189288,
(* carbon cycle parameters *) φ21 → 587.473* φ12a/1143.894, φ22 → 1- φ21
– φ23a,
(* carbon cycle parameters *) φ23 → 0.05, φ23a → 0.05, φ32 → 1143.894*
φ23a/18340, φ33 → 1- φ32,
(* mass of carbon in atmosphere, pre-industrial *) mat1750->596.4,
(* scaling factor *) scale1 → 194
};

(* endogenous initial *)
endoginitial = {
y[0] → 61.1, c[0] → 30, inv[0] → 31.1,
ygr[1] → 55.667, ygr[0] → 55.667,
k[1] → 137, k[0] → 137,
ceind[1] → 0, ceind[0] → 0,
Λ[1] → 0.66203, Λ[0] → 0.66203,
Ω[1] → 0.99849, Ω[0] → 0.99849,
mat[1] → 808.9, mat[0] → 808.9,
mup[1] → 1255, mup[0] → 1255,

mlo[1] → 18365, mlo[0] → 18365,
tat[1] → 0.7307, tat[0] → 0.7307,
tlo[1] → 0.0068,tlo[0] → 0.0068,
μ[1] → 0.005, μ[0] → 0.005,
cumu[1] → 381800, cumu[0] →381800
};
(* endogenous variables *)
(* sn modifications of Nordhaus to render acyclic *)
endogeqns={
(* utility function *) u[t] == l[t]*((c[t] / l[t])^(1- α))/(1- α),
(* capital stock as function of investment *) k[t] == 10*inv[t]+((1-
δ)^10)*k[t-1],
(* output of goods and services, net of abatement and damages *) y[t] ==
Ω[t]*(1- Λ[t])*ygr[t],
(* output of goods and services, gross *) ygr[t] == a[t]* (k[t]^γ)
*(l[t]^(1- γ)),
(* ratio of climate damages to world output *) Ω[t] == 1/(1+ ψ1*tat[t]+
ψ2*(tat[t]^ ψ3)),
(* ratio of abatement to world output *) Λ[t] == Π[t] * Θ[t] * μ[t]^ θ,
(* consumption of goods and services *) c[t] == y[t]-inv[t],
(* carbon emissions from industrial sources *) eind[t] == 10 * σ[t] *(1-
μ[t]) *ygr[t],
(* carbon emissions from industrial sources cumulative *) (*ceind[t] ==
eind[t]+ceind[t-1],*)
(* carbon emissions total *) e[t]==eind[t]+eland[t],
(* mass of carbon in atmosphere *) mat[t] == e[t] + φ11*mat[t-1] +
φ21*mup[t-1],
(* mass of carbon in upper oceans *) mup[t] == φ12*mat[t-1]+ φ22*mup[t-1]
+ φ32*mlo[t-1],
(* mass of carbon in lower oceans *) mlo[t] == φ23*mup[t-1]+ φ33*mlo[t-
1],
(* radiative forcing total *) for[t] == η*Log[2,mat[t]/mat1750]+fex[t],
(* global mean terrestrial temperature *) tat[t] == tat[t-1]+ ξ1*(for[t]-
ξ2*tat[t-1]- ξ3*(tat[t-1]-tlo[t-1])),
(* global mean lower ocean temperature *) tlo[t] ==tlo[t-1]+ ξ4*(tat[t-
1]-tlo[t-1]),
(* net present value of utility, the objective function *) cumu[t] ==
cumu[t-1]+(u[t]*r[t]*10)/scale1
};
posteqns={
(* climate damages, gross *) dam[t] == ygr[t]*(1- Ω[t]),
(* savings ratio *) s[t] == inv[t]/y[t],
(* interest rate *) ri[t] == γ*y[t]/k[t] -(1-(1- δ)^10)/10,
(* consumption of goods and services, per capita *) cpc[t] ==
c[t]*1000/l[t],
(* output of goods and services, net per capita *) pcy[t] ==
y[t]*1000/l[t]
};
(* endogenous constraints *)
endogcons={(*
k[t] ≤ 10*inv[t] + ((1- δ)^10)*k[t-1],
0.02*k[periods] ≤ inv[periods],
100 ≤ k[t],
20 ≤ c[t],
0 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t] ≤ 20,
tat[t] ≤ 20,
ceind[t] ≤ 6000,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],

0 ≤ eind[t],
0 ≤ μ[t] ≤ 1 *)
};

(* solve topological equations *)


toponodes[eqns_]:=Module[
{eqnvars,flatvars,eqnlist,mysource,mysink,edges1,edges2,edges3,edges,vert
ices2,vertices,forwardgraph,networkflows,forwardflows,forwardedges,revise
dedges,revisedgraph,toposort,sortedequations,
sortedvertices,posfirstequation,startvertices},
eqnvars=Map[Cases[eqns[[#]],_Symbol[_Integer],Infinity]&,Range[Length[eqn
s]]];
flatvars=Union[Flatten[eqnvars]];
eqnlist=Range[Length[eqns]];
f1[a_,b_]:={a,b};
edges1=Map[f1[mysource,flatvars[[#]]]&,Range[Length[flatvars]]];
edges2=Flatten[Map[Outer[f1,eqnvars[[#]],{eqnlist[[#]]}]&,eqnlist],2];
edges3=Map[f1[eqnlist[[#]],mysink]&,eqnlist];
edges=Join[edges1,edges2,edges3];
vertices2= Join[flatvars,eqnlist];
vertices=Join[{mysource},vertices2,{mysink}] ;
forwardgraph=MakeGraph[vertices,(MemberQ[edges,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[forwardgraph],Print["*** ERROR: FORWARD GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]];
networkflows=NetworkFlow[forwardgraph,1,Length[vertices],Edge];
forwardflows=Cases[networkflows[[All,1,All]],
{x_/;x>1,y_/;y<Length[vertices]}];
forwardedges = Map[vertices[[#]]&,forwardflows];
revisededges =
Join[Complement[edges2,forwardedges],Map[Reverse,forwardedges]];
revisedgraph=MakeGraph[vertices2,(MemberQ[revisededges ,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[revisedgraph],Print["*** ERROR: REVISED GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]; Exit[]];
(* Print[ShowGraph[revisedgraph]];*)
toposort=TopologicalSort[revisedgraph];
(*Print[toposort];*)
sortedvertices=Cases[vertices2[[toposort]],_Symbol[_Integer],1];
sortedequations = Cases[vertices2[[toposort]],_Integer,1];
posfirstequation=Apply[Plus,First[Position[vertices2[[toposort]],_Integer
,1]]];
startvertices = vertices2[[toposort[[Range[posfirstequation-1]]]]];
(*startvertices
=vertices2[[Select[vertices,InDegree[revisedgraph,#]==0&]]]; *)
Return[{sortedequations,sortedvertices, startvertices}]
];
(* calculate exogenous variables *)
exoginitialextended = Cases[Union[Flatten[Map[exoginitial/.t → # &,
Range[periods]]]]//.exogparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
exogextended= Cases[Union[Flatten[Map[exogeqns/.t → # &,
Range[periods]]]]//.Join[exogparams, exoginitialextended]
/.x_Symbol[i_Integer /;i < 0] → 0 ,Except[False|True|Null]];

exogtoposolver[equations_]:=Module[
{eqnorder,soleqn,solvar,outputs={},soltest1,soltest2},
eqnorder = toponodes[equations/.Equal->Subtract][[1]];
For[i=1,i<=Length[eqnorder],i++,
soleqn =equations[[eqnorder[[i]]]]//.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,

soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING EXOGENOUS CALCULATIONS A VARIABLE HAD NO
SOLUTION ***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];

exogaugmented=Join[exoginitialextended,exogtoposolver[exogextended]];
Print["The exogenous variables calculate as: ", Sort[exogaugmented]];

(* calculate endogenous variables *)


interimparams = Join[exogparams, exogaugmented, endogparams];
endoginitialextended=endoginitial//.interimparams ;
allparams = Join[interimparams , endoginitialextended];
endogextended= Cases[Union[Flatten[Map[endogeqns/.t → # &,
Range[periods]]]]//.allparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
endogtoponodes= toponodes[endogextended/.Equal → Subtract];
endogeqnorder =endogtoponodes[[1]];
Print["Directed acyclic graphs and topological processing have been
completed.... optimisation commencing..."];
(*Print["Topological order of variables:" ,endogtoponodes[[2]]];*)
Print["Please note that start vertices of the endogenous equation
topology have not been automatically included as optimisation variables.
This is for flexibility as you may wish to use a surrogate based on your
observation of an alternative topological sort order. So please check the
endogenous start vertices here to confirm that these variables (or your
surrogates) have been included with optimisation variables at the start:
",
endogtoponodes[[3]]
];
lenendogeqnorder=Length[endogeqnorder];
endogextendedordered= endogextended[[endogeqnorder]];

(* calculate endogenous constraints *)


endogconextended= Cases[Union[Flatten[Map[endogcons/.t → # &,
Range[periods]]]]//.allparams /.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];
endogconextvars= Union[Cases[endogconextended,_Symbol[_Integer]
,Infinity]];

(* calculate objective vars *)


objvar= Cases[Union[Flatten[Map[obj/.t → # &,
Range[periods]]]]//.allparams/.x_Symbol[i_Integer /;i < 0] → 0
,Except[False|True|Null]];

(* prepare the independent optimising variables *)


optimous=Union[Cases[Apply[List,Map[opt /.t → # //.allparams
&,Range[periods]]],
{_Symbol[_Integer],_Integer|_Real}|
{_Symbol[_Integer],_Integer|_Real,_Integer|_Real}|
{_Symbol[_Integer],_Integer|_Real,_Integer|_Real,_Integer|_Real},
Infinity]];

optimousvars= Union[Cases[optimous//.allparams,_Symbol[_Integer]
,Infinity]];

(* include any additional optimising variables arising from the leaves


and endogenous constraints *)
variables=Union[Join[optimous,Partition[Complement[endogconextvars,optimo
usvars],1]]];
Print["The optimising variables being used are: ",variables];
finalvars=Union[Cases[variables,_Symbol[_Integer] ,Infinity]];

(* commence solve *)
(* objective function ... *)
endogoptimsolver[nmvars_]:=Module[
{soleqn,solvar,outputs={},soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar] ≠ 0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** Warning: during optimisation ",solvar," became complex or null
in the equation ",soleqn," so the specified optimisation penalty of
",optimpenalty," was applied ***"];Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#) >0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
Apply[Plus,objvar/.outputs]
]/; VectorQ[nmvars,NumberQ];

(* optimisation phase ... *)


(* ... use FindMinimum to optimise the endogenous equations .. NMinimize
exhausts 16Gb of memory *)
endogoptimsolution=If[(Length[endogconextvars]==0),
FindMinimum[endogoptimsolver[finalvars],variables],
FindMinimum[{endogoptimsolver[finalvars],endogconextended},variables]
];

Print["The solution to the endogenous optimising variables is: ",


endogoptimsolution];

(* calculate final outputs by back substitution *)


endogoutputsolver[nmvars_]:=Module[
{soleqn,solvar,outputs=nmvars, soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING BACKSUBSTITUTION A VARIABLE HAD NO SOLUTION
***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];

];
];
];
outputs
];

endogaugmented =
Join[endoginitialextended,endogoutputsolver[endogoptimsolution[[2]]]];
Print["The final outputs of the endogenous equations are: "
,Sort[endogaugmented]];

Print["Execution time: ",Round[N[AbsoluteTime[]-starttime]/60,0.01],"


minutes using ", Round[N[MaxMemoryUsed[]/10^6],0.01]," Mb; with ",
Length[finalvars]," optimising variables and ",
Length[exogaugmented]+Length[endogaugmented], " final variables in total;
",
Length[exogparams]," exogenous parameters; ",
Length[exoginitial]," exogenous initial variables;",
Length[exogeqns]," exogenous equations; ",
Length[exogaugmented]," final exogenous variables; ",
Length[endogparams]," endogenous parameters; ",
Length[endoginitial]," endogenous initial variables; ",
Length[endogeqns]," endogenous equations; ",
Length[endogaugmented], " final endogenous variables"
];

A4.4 Appendix references


Baumert, K.A., Herzog, T. & Markoff, M., 2009. The Climate Analysis Indicators
Tool (CAIT), Washington DC: World Resources Institute. Available at:
http://cait.wri.org/cait.php [Accessed November 7, 2009].

IPCC, 2007. The Physical Science Basis. Contribution of Working Group I to the
Fourth Assessment Report of the Intergovernmental Panel on Climate Change
[Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor
and H.L. Miller (eds.)], Cambridge, United Kingdom and New York, NY, USA:
Cambridge University Press. Available at:
http://www.ipcc.ch/ipccreports/ar4-wg1.htm [Accessed April 5, 2008].

Nordhaus, W.D., 2008. A Question of Balance: Weighing the Options on Global
Warming Policies, Yale University Press.

Nordhaus, W.D., 2007. Notes on How to Run the DICE Model, Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm [Accessed June 26,
2008].

Appendix 5 Acyclic Solver for Unconstrained
Optimisation

A5.1 Overview
In order to undertake the research in this dissertation, a new flexible model
for optimising systems of nonlinear equations was developed using Mathematica.
The achievements of this model are:

• Climate change models: a new platform for investigating climate change


policy based on a widely available computer algebra system and
therefore opening up the area to a wider body of researchers than those
with specialised software such as GAMS and AMPL
• Climate change policy: Nordhaus' equations modelled in an environment
with a modern interior point solver, a directed acyclic model and the
removal of constraints. Nordhaus uses GAMS with the CONOPT solver. CONOPT
approximates nonlinear functions with straight line segments and then
uses the simplex method for linear functions. Miller (2000, Section
11.3.1, pp. 595-602) demonstrates this solution method for separable
functions, which are functions with multiple variables that can be
deconstructed into a number of single variable functions. A solution can
be found very quickly but it is only an approximate solution to the
original problem. Nordhaus has confidence in CONOPT through long usage,
but it may not achieve the exact solution
• Operations research perspective: a new platform for solving nonlinear
discrete period optimisation problems employing recent Mathematica
interior point optimisation and Combinatorica combinatorial geometry.
Secondary but important advantages include being able to define
problems using Greek characters and that solutions remain in the
Mathematica environment for visualisation and post-processing.

A5.2 Modelling factors that affect performance


Arguably, nonlinear optimisation is as much an art as a science because of the
considerable number of factors that need to be controlled to achieve a
successful result. These factors include:

• the solution algorithm: for example Nelder-Mead, differential evolution,
Brent's method or interior point are discussed below
• the complexity of the equations and treatment of roots: the Fundamental
Theorem of Algebra holds that any non-zero polynomial of degree n always
has at least one root. This may be a real number or a conjugate pair of
complex numbers. Usually only real roots are of interest, and minimising
functions like NMinimize and FindMinimum declare an error when the
objective evaluates to a complex outcome. The advantage of a topological
method is that learning may be introduced through a penalty function,
which moves the optimiser away from complex roots
• starting points for the optimising variables: may be more or less
appropriate for the optimisation
• selection of the best optimisation variables: prima facie it is tempting to
set the optimisation variables as the starting vertices of the DAG, as
suggested by the topological sort. For example, in solving a model with
the equation $ygr[t] = fn(k[t]^{0.3})$ the topological sort may suggest that
$ygr[1], ygr[2], ygr[3]$ etc. are the starting vertices. However, it can be
observed that an inverse function also exists, $k[t] = ifun(ygr[t]^{3.33})$,
and a small change in $ygr[t]$ can produce a very large change in $k[t]$.
Although $ygr[t]$ has become an unruly optimising variable,
optimisations are possible given sufficient control of the starting
estimates. Tight control of a small number of starting vertices, such as
$ygr[1], ygr[2] \ldots ygr[10]$, is certainly feasible using closely positioned
constants. Unfortunately, as the number of optimising variables
expands, say to $ygr[1], ygr[2] \ldots ygr[60]$, it becomes difficult to know in
advance how to customise the starting estimates. If tight starting
conditions cannot be maintained for an unruly starting variable like
$ygr[t]$, then stability in the optimisation is materially enhanced by
using $k[t]$ as a desensitised optimisation variable
• hardware factors: the CPU clock speed, amount of RAM, operating
system (Linux or Windows) and 32-bit or 64-bit processing are well
known parameters affecting performance.i However, there are other
issues to consider that make comparing absolute execution times on
various machines somewhat tenuous. For example whether multiple

588
processors have only one memory bus and calculations are CPU-bound
or memory-bound

Mathematica optimisation algorithms


Numerical algorithms for constrained nonlinear optimisation either use direct
search or gradient search. Direct search is often used for global optimisation
and gradient search for local optimisation.

Direct search methods for global optimisation

Direct search methods include Nelder & Mead (1965), genetic algorithm,
differential evolution and simulated annealing. A “simplex” of values of the
objective function is kept for each iteration of optimising variables. The data
set is interpreted in order to “roll downhill” to the optimal solution. While
tolerant to noise in the objective function and constraints, steepest descent is a
strategy that tends to converge relatively slowly and the method is at the same
time very expensive in memory because each n-dimensional iteration
maintains n+1 points. The method is sometimes called the “downhill
simplex” and is unrelated to George Dantzig's well known simplex method
(Dantzig 2002).

By default Mathematica's NMinimize uses the Nelder-Mead method for
problems requiring a global minimum. However, NMinimize reverts to the
Differential Evolution algorithm if necessary. As described above, Nelder-Mead
is computationally intense and therefore slow and memory intensive. Although
more robust, Differential Evolution is even slower.
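
To illustrate, both direct search methods can be requested explicitly. The
following is a minimal sketch on a hypothetical two-variable test function
(the quoted names are standard NMinimize method options):

(* hypothetical test function; compare the two direct search methods *)
obj = (x - 1)^2 + 100*(y - x^2)^2;
NMinimize[{obj, -5 ≤ x ≤ 5, -5 ≤ y ≤ 5}, {x, y}, Method -> "NelderMead"]
NMinimize[{obj, -5 ≤ x ≤ 5, -5 ≤ y ≤ 5}, {x, y}, Method -> "DifferentialEvolution"]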

Even though NMinimize is nominally a global minimising function, it may only
find a local minimum unless the objective function and constraints are linear.
Other issues suggest that it may be more effective to directly use a fast local
optimising function, such as FindMinimum. For example, NMinimize usually
requires the problem domain to be bounded with constraints, which makes the
problem computationally intensive; a starting interval may need to be specified
to help achieve a better local minimum; and NMinimize often needs to call a
local minimising function in any case, in order to polish the end result.
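
The contrast can be seen in a minimal sketch with a hypothetical one-variable
objective: NMinimize needs the domain bounded by constraints, whereas
FindMinimum needs only a starting point near the desired minimum:

NMinimize[{Sin[x] + x^2/10, -10 ≤ x ≤ 10}, x]
FindMinimum[Sin[x] + x^2/10, {x, 1}]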

Gradient search methods for local minimisation

Gradient search methods include sequential quadratic programming (SQP), the
augmented Lagrangian, and the modern interior point. The method may
employ first derivatives or second derivatives, the matrices of which are
called Hessians.

The local optimisation function FindMinimum is significantly faster than
NMinimize, particularly for large problems with few local minima such as
climate change policy equations.

FindMinimum's specific settings for Method are Brent's principal axis,
Gradient, Conjugate Gradient, Levenberg-Marquardt, Newton, Quasi-Newton,
Interior Point and Linear Programming.

In the case where the method is left to default, FindMinimum selects a
different method based on whether constraints are present (the interior point
method is selected); there is one starting value for each variable (the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, with a limited
memory variant for large systems); or there are two starting values given for
the optimising variables and the objective function is real (Brent's principal
axis method).
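
Each dispatch case can be reproduced in a minimal sketch (the objectives are
hypothetical):

(* constraints present: the interior point method is selected *)
FindMinimum[{x^2 + y^2, x + y ≥ 1}, {x, y}]
(* one starting value per variable: quasi-Newton (BFGS) *)
FindMinimum[(x - 2)^2, {x, 0}]
(* two starting values and a real objective: Brent's principal axis *)
FindMinimum[(x - 2)^2, {x, 0, 1}]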

Line search

Local minimising functions are based on quadratic models. ii

qk  p  = f  x k   ∇ f  x k T  1 /2 p T B k p
where k is the kth iterative step
and the step is x k1 = x k s k
which is guaranteed to converge to a local minimum
if x k is sufficiently close to a local minimum

Newton's method uses the exact Hessian:

B_k = ∇²f(x_k)
with the step x_{k+1} = x_k + s_k
and is guaranteed to converge to a local minimum
if x_k is sufficiently close to a local minimum

However, the method is valid only insofar as the Newton quadratic model
reflects the function. Where the Hessian is not explicitly known, the system of
linear equations is solved by numerical approximation:

B_k s_k = −∇f(x_k)
where s_k is a trial step.

Usually B_k is an inaccurate approximation to the Hessian and the starting
value x_k is rarely close enough to guarantee convergence.

Line search and trust region are two methods to improve the rate of
optimisation convergence and chance of success by controlling the sequence of
steps. The idea of a line search is to use the direction of the chosen step, but to
control the length, by solving a one-dimensional problem at each step s_k of
x_{k+1} = x_k + α_k s_k such that certain optimisation conditions are satisfied.
Mathematica uses Wolfe's conditions to require sufficient decrease in the value
and slope of f(x_{k+1}), which guarantees the convergence of B_k.iii

Brent's principal axis method

Brent's derivative-free univariate method seeks a minimum regardless of the
decrease and curvature factors. The first phase is bracketing the root. The
second phase is combining interpolation and golden section to find the
minimum. The advantage of this line search is that it does not require, as the
other two methods do, that the step be in a descent direction, since it will look
in both directions in an attempt to bracket the minimum.

In essence it is a safeguarded secant method, which keeps a point where the
function is positive and one where it is negative so that the root is always
bracketed. This special geometry of the zero-axis crossing means that at each
step, FindMinimum chooses between an interpolated (secant) step and a
bisection to ensure convergence. This makes Brent's method a very robust
algorithm, which provides a good estimate even where functions are very
steep at the zero crossing or perhaps even discontinuous.

Brent's principal axis method uses the two starting points u_1 and u_2 to
commence a line search.iv Starting at a point x_0, the algorithm undertakes a
line search from a point x_{i−1} to a point x_i that minimises the objective
function along the search directions u_1, u_2, ..., u_n. Then u_i is replaced
with u_{i+1} and, at the end, u_n is replaced with x_0 − x_n. Brent's method
then realigns the values (which are assumed to be not entirely independent)
to the principal directions for the local quadratic model. For efficiency, Brent
uses singular value decomposition of the matrix (u_1, u_2, ..., u_n) instead of
resolving eigenvalues. The resulting u_i can then be used for the next iteration.

Computing derivatives with finite differences is disadvantaged by significant
computing overhead and reduces the reliability of the derivatives. Where
symbolic derivatives are not available, the alternative is to build a model using
only values from function evaluations.

With FindMinimum, it is advisable to provide two start estimates for the
optimising variables that (ideally) bracket the root, i.e. one starting value
gives a positive value and the other a negative value. Whether or not these
two starting values do bracket the root, the fact that there are two starting
values means that FindMinimum will use Brent's method by default.

Starting estimates of optimising variables are, for example, provided as
follows:

opt = {{μ[t], 0.1, 0.2}, {k[t], 300, 1000}};

Start estimates automatically provide scoping for the variables; facilitate
compact, high performance unconstrained optimisation; and facilitate removal
of constraints that might otherwise have been needed to position the
optimising process (as usually required with NMinimize), for example
constraints such as constraints = {0 ≤ μ[t] ≤ 1, k[t] ≥ 100}.

Nevertheless, if a constraint such as μ[t] ≤ 1 is violated in execution, then a
penalty function can be used to teach the optimiser to seek in the correct
range. For example, when μ[t] > 1 the following term penalises the objective
function: −Clip[μ[t] - 1, {0, 1}]*10^6.
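
A minimal sketch of this penalty idea on a hypothetical one-variable welfare
function (the function and its coefficients are illustrative only; because the
sketch minimises negative welfare, the penalty term is added):

(* hypothetical welfare objective; the Clip term penalises μ > 1 *)
welfare[μ_?NumberQ] := -(10*μ - 6*μ^2) + Clip[μ - 1, {0, 1}]*10^6;
FindMinimum[welfare[μ], {μ, 0.1, 0.2}]

With two start estimates, FindMinimum applies Brent's derivative-free method,
so the penalised objective only needs to return a (large) real number in the
infeasible range.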

Although Brent's algorithm is efficient in terms of its quadratic convergence
rate, it is quite expensive because of the derivative-free line search, which
requires a substantial number of function evaluations. The directions given to
the line search (especially at the beginning) are not necessarily descent
directions, so the line search has to be able to search in both directions. For
problems with many variables, the individual line searches in all directions
become very expensive, so this method is typically better suited to problems
without too many variables.

To make effective use of Brent's method it is necessary to have a way of
reducing the number of optimising variables. A major advantage of the
topological method described later in this appendix is that the number of
variables is significantly reduced to a small number of input vertices of the
network of equations. For example, a network that adds 30 new nodes per
period may have only two new input vertices per period requiring optimisation.

Interior point

Over recent decades, interior point development has generated considerable
excitement in operations research because it permits nonlinear optimisation
comparable with Dantzig's (2002) extraordinarily efficient “simplex” method
used for linear programming. Dantzig's “simplex” method works around the
surfaces bounding the problem. In contrast, interior point seeks to pass
through the solid defined by the problem. It does this by constructing a
sequence of strictly feasible points lying in the solid interior that converge to
the solution.

Precedents for interior point are found as early as the 1960s in the use of
barrier functions. However, the method was not formalised until Karmarkar
(1984) and most modern implementations use the Mehrotra (1992) predictor-
corrector technique. Mehrotra's interior point method generally converges in
polynomial time, which is similar to George Dantzig's simplex method
(although both can become exponential under certain conditions).

Commencing with Mathematica version 6.0 (2006), the only method for
constrained optimisation in Mathematica's FindMinimum function is interior
point. It is based on the COIN Project IPOPT optimiser. In Mathematica 5.2 and
earlier, there were no standard functions for nonlinear constrained optimisation,
although some functionality was possible with the older approach of using
penalty functions to enforce constraints.

FindMinimum requires the first and second derivatives of the objective and
constraints. The second derivative (or Hessian) permits Newton's method to be
employed, a convergence strategy that is much faster than using
first-derivative downhill methods alone.

As its first approach, FindMinimum seeks to symbolically differentiate the
objective function and constraints. If this fails, it calculates derivatives by
finite differences. While Newton's method may take fewer steps due to the
information it has about the curvature of the function, the execution time can
be longer because the symbolic Hessian is re-evaluated at each step.
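
Where symbolic differentiation is curtailed (for example by a _?NumberQ
pattern), the gradient can be supplied explicitly through FindMinimum's
Gradient option rather than left to finite differences. A minimal sketch with a
hypothetical numeric-only objective:

(* the numeric-only definition blocks symbolic differentiation *)
f[x_?NumberQ] := (x - 3.)^2 + 1.;
FindMinimum[f[x], {x, 0.}, Gradient :> {2.*(x - 3.)}]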

One issue with interior point is that it may be unable to converge if the first
derivative at the optimal point is not continuous.

Over the last decade, large advances in nonlinear optimisation have been
achieved with the interior point method. The industrial solver IPOPT is
available as open source, but the only convenient interfaces remain in AMPL
and GAMS.

Mathematica 6 has benefited from the commoditisation of formerly proprietary
operations research optimisation knowhow.v The interior point method solves a
constrained optimisation by forming a barrier function from the constraints
and the objective function. Miller (2000, Section 9.1-9.5, pp494-529) explains
the interior point method in considerable detail:

Minimise f  x
subject to: h x  = 0
for x ≥ 0

becomes:

594
Minimise  x  −  ∑ ln x i 
i
subject to h  x = 0
where  ≥ 0 is a barrier function

and such that the Karush-Kuhn-Tucker boundary condition is:

∇β − y^T A(x) = 0, where A(x) = (∇h_1(x), ..., ∇h_m(x))^T is of dimension m×n.

This can be summarised as:

g(x) − μ X^{-1} e − y^T A(x) = 0
h(x) = 0
Z X e = μ e

where g(x) = ∇f(x), X and Z are the diagonal matrices formed from the
vectors x and z, and e is a vector of ones.

Newton's Method can be used to solve this nonlinear system:

L(x, y) = f(x) − h(x)^T y
H(x, y) = ∇²L(x, y) = ∇²f(x) − ∑_{i=1}^{m} y_i ∇²h_i(x)

and the Jacobi (Newton) system is:

H(x, y) Δx − A(x)^T Δy − Δz = −d, where d = g(x) − z − y^T A(x)
−A(x) Δx = d_h, where d_h = h(x)
Z Δx + X Δz = −d_xz, where d_xz = Z X e − μ e

As

Δz = −X^{-1}(Z Δx + d_xz)

then:

(H(x, y) + X^{-1} Z) Δx − A(x)^T Δy = −d − X^{-1} d_xz

so:

(H(x, y) + X^{-1} Z) Δx − A(x)^T Δy = −(d + X^{-1} d_xz) = −(g(x) − y^T A(x) − μ X^{-1} e)
−A(x) Δx = h(x)

Therefore the nonlinear equations can be solved iteratively with:

x := x + Δx, y := y + Δy, z := z + Δz

and the search direction is given by solving the Jacobi system as:

(Δx, Δy, Δz)

The augmented Lagrangian merit function is defined as:

φ(x, μ) = f(x) − μ ∑_i ln(x_i) − h(x)^T y + ρ ||h(x)||²

where μ > 0 is the barrier parameter and ρ > 0 is a penalty parameter.

The following matrix is positive definite:

N(x, y) = H(x, y) + X^{-1} Z

so the search direction given by solving the Jacobi system is a descent
direction for the Lagrangian merit function. This means (x, y, z) satisfies the
Karush-Kuhn-Tucker (KKT) condition, which is a necessary condition for
nonlinear optimality (Karush 1939; Kuhn & Tucker 1951; Miller 2000, Section
4.4.5, pp 210-9)

While the constraints are positive, a line search can be commenced along the
initial search direction with a step of 1. A backtracking procedure is then used
until the merit function satisfies the Armijo condition:vi

φ(x + t Δx, μ) ≤ φ(x, μ) + γ t ∇φ(x, μ)^T Δx, with γ ∈ (0, 1/2).

Convergence is given by:

||g(x) − z − y^T A(x)|| + ||h(x)|| + ||Z X e − μ e|| ≤ tol

where tol is set by default to 10^(−MachinePrecision/3).

Both the accuracy condition and number of iterations are critical in finding a
solution to problems with significant complexity.

A5.3 Phases of model development

Phase I Model
In the first phase of developing an acyclic solver an abstraction layer was used
for the equations with direct and simultaneous optimisation of all independent
and dependent variables. While the “blunt instrument” approach of optimising
every variable simultaneously is perfectly suitable for small problems, it is
rather naïve to believe it can scale to thousands of variables and equations.
Indeed, a high performance cluster node with 4Gb RAM (Orion) exhausts
memory after just 9 periods and one with 16Gb RAM (Titan) fails after 14
periods, both falling far short of the 60 period goal. Projections of increasing
RAM to 64Gb indicated that the additional memory would only achieve one or
two more periods.

In the simplest presentation, each of the exogenous variables (scalars),
endogenous variables (model equations) and constraints are elements of a list:

(* Exogenous variables *)
equations = {
gfacpop[t] == (Exp[popg*(t - 1)] - 1)/Exp[popg*(t - 1)],
l[t] == pop0*(1 - gfacpop[t]) + gfacpop[t]*popa,
ga[t] == ga[0]*Exp[-dela*10*(t - 1)],
a[t] == If[t == 1, a[0], a[t - 1]/(1 - ga[t - 1])],
gσ[t] == gσ[0]*Exp[-dσ1*10*(t - 1) - dσ2*10*(t - 1)^2],
σ[t] == If[t == 1, σ[0], σ[t - 1]/(1 - gσ[t])],
Θ[t] == (pback*σ[t]/θ)*((backrat - 1 + Exp[-gback*(t - 1)])/backrat),
eland[t] == eland[0]*(1 - 0.1)^(t - 1),
r[t] == 1/(1 + ρ)^(10*(t - 1)),
fex[t] == fex0 + If[t < 12, 0.1*(fex1 - fex0)*(t - 1), 0.36],
κ[t] == If[t == 1, κ[0], If[t ≥ 25, κ21, κ21 + (κ2 - κ21)*Exp[-dκ*(t - 2)]]],
Π[t] == κ[t]^(1 - θ),
s[t] == sr,
(* Endogenous variables and constraints *)
ceind[t] == eind[t - 1] + ceind[t - 1],
k[t] ≤ 10*inv[t - 1] + ((1 - δ)^10)*k[t - 1],
0.02*k[periods] ≤ inv[periods],
eind[t] == 10*σ[t]*(1 - μ[t])*ygr[t] + eland[t],
for[t] == η*(Log[(matav[t] + 0.000001)/mat1750]/Log[2]) + fex[t],
mat[t] == eind[t - 1] + φ11*mat[t - 1] + φ21*mup[t - 1],
matav[t] == (mat[t] + mat[t + 1])/2,
mlo[t] == φ23*mup[t - 1] + φ33*mlo[t - 1],
mup[t] == φ12*mat[t - 1] + φ22*mup[t - 1] + φ32*mlo[t - 1],
tat[t] == tat[t - 1] + ξ1*(for[t] - ξ2*tat[t - 1] - ξ3*(tat[t - 1] - tlo[t - 1])),
tlo[t] == tlo[t - 1] + ξ4*(tat[t - 1] - tlo[t - 1]),
ygr[t] == a[t]*k[t]^γ*l[t]^(1 - γ),
dam[t] == ygr[t]*(1 - 1/(1 + ψ1*tat[t] + ψ2*(tat[t]^ψ3))),
Λ[t] == ygr[t]*Π[t]*Θ[t]*μ[t]^θ,
y[t] == ygr[t]*(1 - Π[t]*Θ[t]*μ[t]^θ)/(1 + ψ1*tat[t] + ψ2*(tat[t]^ψ3)),
s[t] == inv[t]/(0.001 + y[t]),
ri[t] == γ*y[t]/k[t] - (1 - (1 - δ)^10)/10,
c[t] == y[t] - inv[t],
(*cpc[t] == c[t]*1000/l[t],*)
(*pcy[t] == y[t]*1000/l[t],*)
u[t] == ((c[t]/l[t])^(1 - α) - 1)/(1 - α),
cumu[t] == cumu[t - 1] + (l[t]*u[t]*r[t]*10)/scale1,
100 ≤ k[t],
20 ≤ c[t],
10 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t] ≤ 20,
tat[t] ≤ 20,
ceind[t] ≤ ceindlim,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],
0 ≤ eind[t],
0 ≤ matav[t],
0 ≤ μ[t] ≤ μlim
};

The solution algorithm is very simple:

eqextended = Cases[Union[Flatten[Map[Join[objvars, equations] /. t → # &,
  Range[periods]]]] //. parametervals //. initialvalues
  /. x_Symbol[i_Integer /; i ≤ 0] -> 0, Except[False | True]];
variables = Union[Cases[eqextended, _Symbol[_Integer], Infinity]];
soln = NMinimize[eqextended, variables]

Phase II model
The second phase of the acyclic modeller used symbolic recursion of equations
as functions and direct optimisation of the resultant independent variables. A
recursed approach is far more elegant than using NMinimize (or
FindMinimum) as a blunt instrument for solving thousands of equations and
variables.

Recursion is not an abstraction structure. Instead it directly employs the
equations as active functions that form an auto-topology. This means the
optimising function need only solve for the independent variables, which can
either be specified exogenously or be calculated automatically by Mathematica
using symbolic algebra.

The equations are given as functions, with scalars having a memory function,
shown in the exogenous equations. Endogenous functions (model equations)
are each optimised so cannot have a memory function in the same way as
scalars. Starting variables are associated with each function as a limit values of
the function:

(* exogenous equality constraints *)

gfacpop[t_] := gfacpop[t] = (Exp[popg*(t - 1)] - 1)/Exp[popg*(t - 1)];
l[t_] := l[t] = pop0*(1 - gfacpop[t]) + gfacpop[t]*popa;
ga[t_] := ga[t] = ga[0]*Exp[-dela*10*(t - 1)]; ga[0] = 0.092;
a[t_] := a[t] = a[t - 1]/(1 - ga[t - 1]); a[1] = a[0] = 0.02722;
gσ[t_] := gσ[t] = gσ[0]*Exp[-dσ1*10*(t - 1) - dσ2*10*(t - 1)^2]; gσ[0] = -0.0730;
σ[t_] := σ[t] = σ[t - 1]/(1 - gσ[t]); σ[1] = σ[0] = 0.13418;
Θ[t_] := Θ[t] = (pback*σ[t]/θ)*((backrat - 1 + Exp[-gback*(t - 1)])/backrat);
eland[t_] := eland[t] = eland[0]*(1 - 0.1)^(t - 1); eland[0] = 11;
r[t_] := r[t] = 1/(1 + ρ)^(10*(t - 1));
fex[t_] := fex[t] = fex0 + If[t < 12, 0.1*(fex1 - fex0)*(t - 1), 0.36];
κ[t_] := κ[t] = If[t ≥ 25, κ21, κ21 + (κ2 - κ21)*Exp[-dκ*(t - 2)]]; κ[1] = κ[0] = 0.25372;
Π[t_] := Π[t] = κ[t]^(1 - θ);
s[t_] := s[t] = sr;

(* endogenous equality constraints *)

ceind[t_] := eind[t - 1] + ceind[t - 1]; ceind[1] = ceind[0] = ceind0;
eind[t_] := 10*σ[t]*(1 - μ[t])*ygr[t] + eland[t];
for[t_] := η*(Log[(matav[t] + 0.000001)/mat1750]/Log[2]) + fex[t];
mat[t_] := eind[t - 1] + φ11*mat[t - 1] + φ21*mup[t - 1]; mat[1] = mat[0] = mat0;
matav[t_] := (mat[t] + mat[t + 1])/2;
mlo[t_] := φ23*mup[t - 1] + φ33*mlo[t - 1]; mlo[1] = mlo[0] = mlo0;
mup[t_] := φ12*mat[t - 1] + φ22*mup[t - 1] + φ32*mlo[t - 1]; mup[1] = mup[0] = mup0;
tat[t_] := tat[t - 1] + ξ1*(for[t] - ξ2*tat[t - 1] - ξ3*(tat[t - 1] - tlo[t - 1])); tat[1] = tat[0] = tat0;
tlo[t_] := tlo[t - 1] + ξ4*(tat[t - 1] - tlo[t - 1]); tlo[1] = tlo[0] = tlo0;
ygr[t_] := a[t]*k[t]^γ*l[t]^(1 - γ);
dam[t_] := ygr[t]*(1 - 1/(1 + ψ1*tat[t] + ψ2*(tat[t]^ψ3)));
Λ[t_] := ygr[t]*Π[t]*Θ[t]*μ[t]^θ;
y[t_] := ygr[t]*(1 - Π[t]*Θ[t]*μ[t]^θ)/(1 + ψ1*tat[t] + ψ2*(tat[t]^ψ3)); y[0] = y0;
inv[t_] := (y[t] + 0.001)*s[t];
k[t_] := 10*inv[t - 1] + ((1 - δ)^10)*k[t - 1]; k[1] = k[0] = k0;
ri[t_] := γ*y[t]/k[t] - (1 - (1 - δ)^10)/10;
c[t_] := y[t] - inv[t]; c[0] = 30;
cpc[t_] := c[t]*1000/l[t];
pcy[t_] := y[t]*1000/l[t];
u[t_] := ((c[t]/l[t])^(1 - α) - 1)/(1 - α);
cumu[t_] := cumu[t - 1] + (l[t]*u[t]*r[t]*10)/scale1; cumu[1] = cumu[0] = cumu0;
μ[1] = μ[0] = μ0;

Solution is quite straightforward, using symbolic or numeric evaluation of the
objective function and optimisation with NMinimize or FindMinimum. If the
functions are restricted to numerical evaluation then the functions need to
have a ?NumberQ filter to curtail symbolic evaluation, for example:

ceind[t_?NumberQ] := eind[t - 1] + ceind[t - 1];

Notwithstanding its impressive “grunt” in processing recursed equations,
Mathematica eventually fails in the same way as other algebraic processors
when dealing with recursion. Recursion memory/stack space issues are
well documented.

Projection    Compilation    Execution      Memory
Periods       Time (mins)    Time (mins)    Usage (Mbytes)
5             1.0 sec        1.2            34
6             2.4 secs       2.4            535
7             8.9 secs       24.8**         1234
8             0.5            12.8           572
9             2.3            40.9           943
10            9.4            66.3           8908
11            33.5*          NA             Exhausted 16Gb RAM

Notes:
* 10.5Gb used in compilation phase;
** this result looks odd but it was retested and is therefore due to the
shape of the curve.

Scalar exogenous variables that are not optimisation variables may be
precalculated rather than left to be calculated in the recursion process. This
creates significant time savings in compilation prior to optimisation. It takes
only 0.4 seconds and 6Mb to precalculate these scalars for up to 60 periods.
However, memory remains a limiting factor in the optimising phase. Ten
periods is the maximum projection that can be evaluated in 16Gb.
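
A minimal sketch of the precalculation step, assuming the memoised exogenous
definitions above are in scope (evaluating each scalar once caches its value, so
the optimiser later reads stored constants instead of recursing):

(* force evaluation of the memoised exogenous scalars for all periods *)
Do[{gfacpop[t], l[t], ga[t], a[t], gσ[t], σ[t], Θ[t], eland[t], r[t],
  fex[t], κ[t], Π[t]}, {t, 1, 60}];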

Comparison statistics for recursed and precalculated scalars are:

Exogenous Variable      Projection    Compilation    Execution      Memory
Calculation Approach    Periods       Time (mins)    Time (mins)    Usage (Mbytes)
Recursed                10            8.3            66.8           8901
Pre-calculated          10            6              21.8           9051
Pre-calculated          11            23.9           NA             Exhausted 16Gb RAM

One way of scaling up the recursion approach while containing memory usage
is to alternately store and clear intermediate iteration variables. The three key
limitations to pursuing this approach are:

• memory usage: the memory usage of this technique in storing a set of
parameters for each instance of the optimisation variables can be
overwhelming. In addition, there is always a very high memory
usage associated with NMinimize because it stores a vast simplex of
values to create its downhill roll. For the latter reason, I have found
myself moving toward using FindMinimum and particularly to Brent's
method (which is derivative-free, like NMinimize's Nelder-Mead). In fact,
while the topological method can now solve very large problems of 40
periods (and I am testing more), the use of NMinimize causes machines
to run out of memory with only 4 periods!
• platform structure: it is quite important not only to solve the problem
but to separate the model from the solver code in order to have a
repeatable system for solving different sets of equations. Hard coded
customised methods can make it very hard to rapidly change the model
once it moves into policy analysis. For example, Nordhaus proceeds to
solve many different configurations of his equations
• complex roots: when the number of periods is large, the model can stray
into complex numbers. This is a real “model wrecker”. It is necessary to
intricately customise the model by modifying equations with powers and
logarithms that can experience negative bases. The model is then
customised to avoid complex numbers, selecting only the real part of
complex numbers and introducing additional constraints as necessary

Phase III model approach


A third, more poised and erudite approach uses neither massive optimisation
nor recursion. Using graph theory to produce a topological set of variables that
can be solved in sequence avoids the “one big objective calculation” of massive
optimisation or recursion by substituting an ordered set of incremental
calculations. This model relies on an abstraction layer for the equations and a
solver using directed acyclic graphs, a topological sort and a learning function.

Directed Acyclic Graphs

Graph theory formally commenced with Euler's (1736) solution of the puzzling
Königsberg Bridge Problem. A directed graph, or “digraph”, is one in which
each graph edge is directional between two vertices. If there are no internal
cycles in the graph (a cycle being a directed path that returns to its start
vertex), the graph is known as a directed acyclic graph or a “DAG” (Weisstein
2008).

Each vertex has a number of directed edges arriving and a number of directed
edges leaving. These are called “degrees” or “valencies”. The number of
directed edges arriving is the indegree and the number leaving is the
outdegree. A vertex with an indegree of zero and any non-zero outdegree is
one of the DAG's start vertices, analogous to a leaf of a tree.

DAGs can be sorted in a topological way to provide a sequence of vertices that
can be visited in order to ensure that the requirements of each vertex have
been satisfied before the vertex is evaluated. This adopts the strengths and
deftly avoids the weaknesses of the previous two methods. For example,
limiting optimisation variables only to the start nodes is the same as in the
recursion method; and keeping the equations to be solved at an abstraction
level is similar in approach to the massive optimisation method.

The illustrations below show that the topological sorts involve quite complex
directed acyclic graphs for even three periods:

Illustration 39: Exogenous equations
Illustration 40: Endogenous equations

Non-acyclic system and circular references

Part of the difficulty in solving Nordhaus' equations using Mathematica lies in
the structure of the equations, which are not directionally acyclic and require
many constraints to condition the solver.

A directed acyclic graph can be best understood by analogy to a spreadsheet
that has a logical cell by cell layout. For example, the cell formula c1 = a1 + b1
means that cells a1 and b1 need to be found before c1 can be determined. Of
course, associating this logical layout with a geographical layout has proven to
be a major feature in the adoption of spreadsheets. This is because it uses
behavioural conventions and cultural intuitions to keep things clear to the user.
For example, in Western cultures, printed words and logic progress to the
right. Using the same example, a1 and b1 come before c1 and so the
logical layout is clarified by the geographical layout.

Most users of spreadsheets have also experienced circular references, for
example, in the calculation of interest on a loan or overdraft. It is apparently
logical to calculate interest as the interest rate times the average of the
opening balance of the loan and the closing balance. Novice analysts do not
realise that the closing balance is dependent upon the cash flow, which is in
turn dependent upon the interest paid.

Circular references can sometimes be solved by immense iteration.
Nevertheless, it cannot be guaranteed that the output is indeed the same
solution that would be achieved if the equations were to be better structured.

Circular references need to be “deconstructed” so that the circular reference
is removed. In the case of interest on a loan, this can be done by calculating
the cash flow and refinancing the loan each month or quarter.

In graph theory, circular references are referred to as cycles and a graph with
cycles as non-acyclic. A DAG cannot have any internal cycles and graphs can
be topologically processed only if they are DAGs. While cycles can be removed
with graphical techniques, the system of underlying equations means that it is
better to manually resolve any circular references.

Using the new topological model it has been possible to check for the DAG
property in Nordhaus' equations and to rationalise where necessary to render
the model acyclic. This has also facilitated the removal of constraints, which
are very expensive in computing time.

Mathematica's Combinatorica is a modern and highly efficient graph package.
The code developed for pre-solving is complex but its implementation belies
this complexity.

It may be seen in the program code that the topological presolver requires two
DAGs, a network flow analysis and a topological sort. The technique has been
investigated since Dinic (1970) developed an algorithm for maximum flow in a
network.

Subsequently, groups at McGill University pursued the implementation of
algorithms with causality assignment for the µModelica, δModelica and MuPAD
languages, described by Xu (2005), Indrani (2003) and Casey (2008a; 2008b).

Casey provides the causality assignment implementation of Dinic's algorithm
in µ- and δModelica as:

• create a vertex for each variable, each equation, the source and sink
• add an edge from the source to each equation
• add an edge from each equation to each variable it contains
• add an edge from each variable to the sink
• assign unit weight to each edge
• find the maximum network flow using Dinic’s breadth-first search to
determine the path from the source to sink. If such a path exists, each
edge in the path is reversed and repeat this step
• topologically sort the causally assigned dependency graph using a
double depth-first search to produce a topologically sorted list of
strongly-connected components and sets of internal cycles where
equations have circular dependencies
• solve for each variable using the topological sort order. Where a circular
dependency exists, the equations are solved simultaneously rather than
sequentially.

The method developed and implemented in this dissertation reverses the
direction of flow in order to use the standard Combinatorica functions in
Mathematica for network flow and topological sort (a minimal sketch of these
Combinatorica calls follows the list):
• create a vertex for each variable, together with a source and a sink
• add an edge from the source to each variable
• create a vertex for each equation
• add an edge from each variable in an equation to the equation
• add an edge from each equation to the sink
• prepare a directed forward graph and check the forward graph is
acyclic
• determine the edges that have positive flow in the maximum flow from
source to sink
• prepare a revised graph excluding the source and sink and with the
direction of the positive flow edges reversed
• check the revised graph is acyclic
• topologically sort the revised graph
• select the independent variables, which are those in the topological sort
order before the occurrence of the first equation vertex
• provide the independent variables with values. Substitute these
independent variables as they occur in all succeeding equations
• proceeding by topological sequence, solve each equation for the
dependent variable implicit within it. Substitute the newly determined
variable as they occur in all succeeding equations.
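
A minimal sketch of the Combinatorica calls this procedure relies on, using a
hypothetical four-vertex DAG (1→2, 1→3, 2→4, 3→4):

<< Combinatorica`
edges = {{1, 2}, {1, 3}, {2, 4}, {3, 4}};
g = MakeGraph[Range[4], MemberQ[edges, {#1, #2}] &, Type -> Directed];
AcyclicQ[g]                 (* True, so the graph may be topologically sorted *)
TopologicalSort[g]          (* a valid visiting order, e.g. {1, 2, 3, 4} *)
NetworkFlow[g, 1, 4, Edge]  (* the maximum flow expressed on the edges *)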

Only acyclic graphs (that is, graphs with no internal cycles) may be
topologically sorted. Therefore the researcher needs to manually edit
equations having internal cycles to eliminate these circular references. This is
an accepted procedure for those familiar with spreadsheets.

Learning

As the number of variables approaches thousands, with hundreds of optimising
variables, the search travels into complex numbers. Returning a complex
number as the result of an objective function causes an error in the solver. As
previously explained, this causes major issues for recursive solvers.

A topological model has the major advantage of being able to observe the
status of each intermediate variable during the evaluation of the objective
function. A penalty function can be used to return a real value when a complex
number is encountered.

This penalty function communicates "don't go there" to the solver. In the
current structure of equations the solver is seeking a minimum at
approximately -250,600. Therefore, the return value of the penalty function is
set to zero when a complex number is encountered. The FindMinimum function
does indeed learn and a solution is found.
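
The same idea in a minimal, self-contained sketch (the objective is
hypothetical): Sqrt[x - 2] is complex for x < 2, so a large real value is
returned there instead of raising an error, and the two start estimates keep
FindMinimum on Brent's derivative-free method:

obj[x_?NumberQ] := Module[{v = (x - 4.)^2 + Sqrt[x - 2.]},
  If[FreeQ[v, Complex], v, 10.^6]];
FindMinimum[obj[x], {x, 3., 3.5}]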

Complex roots require the use of numerical rather than symbolic variable
evaluation

With up to approximately twenty periods, a very fast solution can be achieved
using Mathematica's symbolic solvers, such as Solve and Reduce, that use fast
evaluation with techniques like the Gröbner Basis. However, it is not possible
to detect complex outcomes with Solve because it can return roots that are
symbolic (neither real nor complex) and there can be more than one root
provided as an OR alternative that cannot be further processed.

Reduce and FindInstance allow domains to be controlled. For example, a root
can be requested in the domain of Reals. These functions fail if no real root
actually exists.

Mathematica's NSolve function is an efficient numerical solver, whose output
can be tested for complex variables and for multiple real roots. This obviates
the need for domain control and allows positive roots to be selected over
negative roots.
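
For example, a minimal sketch of testing NSolve output and preferring a real
positive root (mirroring the Select and FreeQ pattern used in the package
below):

roots = NSolve[x^3 - 2*x - 5 == 0, x];
realroots = Select[roots, FreeQ[x /. #, Complex] &];
posroots = Select[realroots, (x /. #) > 0 &]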

Topological processor for Phase III model

The topological processor was developed as a stand-alone package in
Mathematica and relies extensively on Combinatorica graph processing
(Pemmaraju & Skiena 2003).

BeginPackage["Topofunctions`",{"Combinatorica`"}]
toponodes::usage = "toponodes provides sequence of nodes."
optimsolver::usage="optimsolver solves systems of equations."
outputsolver::usage="outputsolver performs backsubstitution."

Begin["`Private`"]
toponodes[eqns_]:=
Module[{eqnvars,eqnvarsninvt,invt,flatvars,eqnlist,mysource,mysink,edges1
,edges2,
edges3,edges,vertices2,vertices,forwardgraph,networkflows,forwardflows,
forwardedges,revisededges,revisedgraph,toposort,sortedequations,
sortedvertices, posfirstequation,startvertices,f1},
eqnvars=Map[Cases[eqns[[#]],x_Symbol[_Integer..],Infinity]&,Range[Length[
eqns]]];
(*Print[eqnvars];*)
flatvars=Union[Flatten[eqnvars]];
eqnlist=Range[Length[eqns]];
f1[a_,b_]:={a,b};
edges1=Map[f1[mysource,flatvars[[#]]]&,Range[Length[flatvars]]];
edges2=Flatten[Map[Outer[f1,eqnvars[[#]],{eqnlist[[#]]}]&,eqnlist],2];
edges3=Map[f1[eqnlist[[#]],mysink]&,eqnlist];
edges=Join[edges1,edges2,edges3];
vertices2= Join[flatvars,eqnlist];
vertices=Join[{mysource},vertices2,{mysink}] ;
(*Print[vertices];*)
forwardgraph = MakeGraph[vertices, (MemberQ[edges,{#1,#2}])&, Type →
Directed, VertexLabel → True];
(* ShowGraph[forwardgraph]; *)
If[!AcyclicQ[forwardgraph],Print["*** ERROR: FORWARD GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]];
networkflows=NetworkFlow[forwardgraph,1,Length[vertices],Edge];
forwardflows=Cases[networkflows[[All,1,All]],
{x_/;x>1,y_/;y<Length[vertices]}];
forwardedges = Map[vertices[[#]]&,forwardflows];
revisededges =
Join[Complement[edges2,forwardedges],Map[Reverse,forwardedges]];
revisedgraph= MakeGraph[vertices2, (MemberQ[revisededges ,{#1,#2}])&,
Type → Directed, VertexLabel->True];

(*ShowGraph[revisedgraph]*)
If[!AcyclicQ[revisedgraph], Print["*** ERROR: REVISED GRAPH IS NOT
ACYCLIC SO CHECK THE EQUATIONS ***"]; (*Print[ShowGraph[revisedgraph]];*)
Return[{{},{},{}}]];
(*ShowGraph[revisedgraph];*)
toposort=TopologicalSort[revisedgraph];
sortedvertices=Cases[vertices2[[toposort]],x_Symbol[_Integer..],1];
(*Print[vertices2[[toposort]]];*)
sortedequations = Cases[vertices2[[toposort]],_Integer,1];
posfirstequation=Apply[Plus,First[Position[vertices2[[toposort]],_Integer
,1]]];
startvertices = vertices2[[toposort[[Range[posfirstequation-1]]]]];
(*startvertices
=vertices2[[Select[vertices,InDegree[revisedgraph,#]==0&]]];*)
Return[{sortedequations,sortedvertices, startvertices}]
];

optimsolver[nmvars_,objtopo_,eqnordered_,leneqnorder_,optimpenalty_]:=
Module[{soleqn,solvar,outputs={},soltest1,soltest2,optimout},
For[i=1,i<=leneqnorder,i++,
soleqn =eqnordered[[i]]/.outputs;
solvar = Cases[soleqn,x_Symbol[_Integer..],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** infomessage: optimpenalty applied with ",soleqn," ***"];
Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
optimout=objtopo/.outputs;
If[NumericQ[optimout],Return[optimout]]
];
Return[optimout]
]/; VectorQ[nmvars,NumberQ];

outputsolver[nmvars_,eqnordered_,leneqnorder_]:=
Module[ {soleqn,solvar,outputs=nmvars, soltest1,soltest2},
For[i=1,i<=leneqnorder,i++,
soleqn =eqnordered[[i]]/.outputs;
solvar = Cases[soleqn,x_Symbol[_Integer..],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0, Print["*** ERROR: DURING BACKSUBSTITUTION A
VARIABLE HAD NO SOLUTION ***"];Return[{}],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];

End[]

EndPackage[]
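
A minimal usage sketch for the package (a hypothetical two-equation system,
assuming the package file is on the Mathematica path):

<< Topofunctions`
eqns = {y[1] == 5*k[1]^0.3, c[1] == y[1] - inv[1]};
{sortedequations, sortedvertices, startvertices} = toponodes[eqns]

toponodes returns the topologically sorted equation order, the sorted
variables and the start vertices; optimsolver then evaluates the objective by
solving each equation in that order, and outputsolver back-substitutes the
optimised start values to recover every intermediate variable.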

Phase III model performance

The Nordhaus DICE models incorporating the topological processor are
provided in Appendix 4 Nordhaus DICE model. Its exceptional performance is
shown by the following table:

Periods    Minutes    Mbytes    Variables
5          0.28       14        172
10         2.5        15        322
20         28         20        622
30         53         28        922
60         339        68        1822
100        1855       159       3022

This model is very successful, as 60 periods solves in just 339 minutes
(approximately 5.5 hours) with 120 optimising variables, which is quite a task.
The increase of calculation time with periods modelled is shown in the
following log-log graph.

Comparison of DICE Models

The results of Nordhaus' GAMS/CONOPT non-acyclic/constrained approach
and the Mathematica acyclic/unconstrained topological approach are
compared below.

Constrained μ

The optimising variables, k and μ, are the independent variables in the
model. The nonlinear solver adjusts the two variables in order to maximise the
objective function of cumulative social welfare. In a way, the final value of
these variables is the major “output” of the optimisation.

Mathematica's capital formation is almost double that of GAMS/CONOPT. The
optimised emissions control rate variable is similar in each model.

Firstly, the optimised value of the objective function, cumulative social welfare,
for each method is significantly different.

GAMS/CONOPT: 150,240
Mathematica constrained μ: 212,611
Mathematica unconstrained μ: 212,614

Endogenous variables are the variables determined in the model, directly
or indirectly dependent upon the optimising variables. These variables
illuminate the environmental, economic and technological ecosystem and
provide the rich meaning of the model. The effect of differences between the
GAMS/CONOPT and Mathematica optimisation approaches on the intermediate
variables is illustrated below:

Unconstrained μ

Relaxing the constraint of μ less than 1 indicates the importance of finding
a means to remove CO2 from the atmosphere. The overshoot of μ at the critical
point, when it would otherwise level off, has materially positive effects on
temperature reduction, net output of goods and services (that is, net of
abatement costs and damages) and social welfare.

In the Mathematica model, capital stock increases to almost twice
GAMS/CONOPT's level. The “drop off” at the sixtieth decade is due to it being
the last year of the model and is the same in a hundred decade model.
Mathematica's unconstrained emissions control rate rises with Nordhaus but
then remains above 1 for 6 decades, suggesting a period of over-control (i.e.
removing carbon) is required for maximum welfare.

Mathematica radiative forcing drops quickly after 20 decades and reaches
1900 levels by the 30th decade. In contrast, GAMS/CONOPT forcing declines
slowly. Mathematica abatement costs are marginally higher than
GAMS/CONOPT.

As with radiative forcing, the remodelled global mean temperature falls
quickly after 20 decades. Both models show a maximum surface temperature
rise of almost 3.5C. Nordhaus' sustained radiative forcing and terrestrial
temperatures drive the lower ocean temperature to the significantly greater
level of 2.4C compared to the remodelled 2.0C.

A5.4 Appendix references


Casey, A., 2008a. Non-causal equations in µModelica, Canada: McGill
University.

Casey, A., 2008b. Solving Dynamic Non-Causal Algebraic Equation Sets,
Canada: McGill University.

Dantzig, G.B., 2002. Linear Programming. Operations Research, 50(1), 42-47.

Dinic, E.A., 1970. Algorithm for solution of a problem of maximum flow in a
network with power estimation. Dokl. Akad. Nauk SSSR Soviet Math.
Dokl. Tom 194 (1970), 4(11).

Euler, L., 1736. Solutio problematis ad geometriam situs pertinentis (solution
of a problem relating to the geometry of position). Commetarii
Academiae Scientiarum Imperialis Petropolitanae, 8.

Indrani, A.M., 2003. Some issues concerning computer algebra in AToM
(Appendix A), Montreal, Quebec, Canada: Modelling, Simulation and
Design Lab, School of Computer Science, McGill University. Available at:
http://moncs.cs.mcgill.ca/people/indrani/publications.dtml.

Karmarkar, N., 1984. A new polynomial-time algorithm for linear programming.
Combinatorica, 4, 373-395.

Karush, W., 1939. Minima of functions of several variables with inequalities as
side constraints. M.Sc. Dissertation, Dept. of Mathematics, Univ. of
Chicago, Chicago, Illinois. Available at:
http://wwwlib.umi.com/dxweb/details?doc_no=7371591.

Kuhn, H.W. & Tucker, A.W., 1951. Nonlinear programming. In Proceedings of
the 2nd Berkeley Symposium. Berkeley: University of California Press, pp.
481-489.

Mehrotra, S., 1992. On the implementation of a primal-dual interior point
method. SIAM J. Optimization, 2, 575-601.

Miller, R.E., 2000. Optimization: Foundations and Applications, New York:
John Wiley & Sons, Inc.

Nelder, J.A. & Mead, R., 1965. A simplex method for function minimization.
Comp. J., 7, 308-313.

Pemmaraju, S. & Skiena, S., 2003. Computational Discrete Mathematics:
Combinatorics and Graph Theory with Mathematica, Cambridge
University Press.

Weisstein, E.W., 2008. Acyclic Digraph. MathWorld--A Wolfram Web Resource.
Available at: http://mathworld.wolfram.com/AcyclicDigraph.html.

Xu, W., 2005. The design and implementation of the µModelica compiler.
School of Computer Science, McGill University, Montreal, Canada.
i The topological model calculations were completed on UTS' Orion high
performance cluster of 16 nodes running Red Hat Enterprise Linux 5 (64bit)
with the following specifications: 2.93GHz 4MB Cache X6800 Core 2 Extreme
(dual core) with 1066MHz FSB, 4GB 667MHz DDR2-RAM, 2x 80GB 7,200 RPM
SATA II Hard Drives (raid 0). Calculation on other memory intensive models
were completed on UTS' Titan cluster of 8 nodes running Red Hat Enterprise
Linux 5 (64bit) with the following specifications: 2 x 3.16GHz 2x6MB Cache
Xeon X5460 (quad core) with 1333MHz FSB, 16GB 667MHz DDR2-RAM, 2 x
300GB 15,000 RPM SAS Hard Drive (Raid 0)
ii http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationIntroduction.html#509267359 © 2008 Wolfram Research, Inc.
iii http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationLineSearchMethods.html © 2008 Wolfram Research, Inc.
iv http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationPrincipalAxisMethod.html © 2008 Wolfram Research, Inc.
v http://reference.wolfram.com/mathematica/tutorial/ConstrainedOptimizationLocalNumerical.html#85183321 © 2008 Wolfram Research, Inc.
vi http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationLineSearchMethods.html © 2008 Wolfram Research, Inc.

Appendix 6 Benchmarking with Linear
Programming

A6.1 Data envelopment analysis

Joseph Farrell (1957) developed the method of data envelopment analysis
(DEA) to rank the efficiency of production units in an unbiased way. His
method uses linear programming to locate piecewise linear planes, or facets,
of the production function that sit at the outer envelope of the observations,
where the greatest efficiency occurs.

This technique assumes that at least some of the production units are
successfully maximising efficiency, while others may not be doing so. Implicitly,
the method creates a best virtual proxy on the efficient frontier for each real
producer. By computing the distance of these latter units from their best
virtual proxy frontier and partitioning inefficiency among the inputs, strategies
are suggested to make the sub-optimally performing production units more
efficient.

In contrast to PCA's statistical techniques, DEA's formulation of the production
function does not rely on probability distributions. For this reason, it is called a
non-parametric method.

DEA Advantages

The main advantages of DEA derive from its ability to reveal sensitivity data
and returns to scale that are not evident in PCA. For example, an input
minimising formulation provides additional information for each production
unit in direct relation to its peers on theta (θ) and iota (ι). Theta is the
proportion of inefficiency that could be eliminated by the proportional
reduction in inputs in order to obtain the projected input values. Iota (ι) is the
total amount of inefficiency, equal to the total weighted distance between
observed and projected points, standardised by inputs.

The DEA formulation to maximise the efficiency of a production unit, which
Farrell calls a “decision making unit” (DMU), is stated as:

Maximise: aggregate outputs divided by aggregate inputs for each
production unit by finding output and input coefficients (u_r, v_i)
that minimise the distance between each production unit and the
efficient frontier:

E_n = ∑_r u_r y_rn / ∑_i v_i x_in

where:
E_n = efficiency of production unit n
n = index of the production unit

by varying:
u_r = weight, shadow price or coefficient of output r that maximises E_n
v_i = weight, shadow price or coefficient of input i that maximises E_n
where u_r, v_i ≥ 0

Constraint: subject to the same ratio for the other units not exceeding unity
(which is the maximum efficiency):

∑_r u_r y_rj / ∑_i v_i x_ij ≤ 1

where:
y_rn = output r of production unit n
x_in = input i of production unit n
j = index of production units, which ranges from 1 to n
r = index of outputs, which ranges from 1 to m (the number of outputs)
i = index of inputs, which ranges from 1 to s (the number of inputs)

Every DEA computation may be formulated as either a primal output
maximising problem, as shown above, or the Lagrange multiplier solution,
which is input minimising. This input minimising approach is known as the
“dual” solution.

Charnes, Cooper & Rhodes (1978) observed that Farrell's non-linear and
computationally complex fractional objective function could be converted to an
ordinary linear programming problem. Their model assumed constant
returns to scale such that production can be increased or decreased without
affecting efficiency. This work led to the widespread uptake of DEA. The
seminal textbook on DEA is now Cooper, Seiford & Tone (2007).

The key assumptions in DEA are: at least some of the production units are
successfully maximising efficiency, while others may not be doing so; the best
producers can be used as a virtual proxy for the efficient frontier for each real
producer; inefficiency can be partitioned among the inputs based on the
distances; strategies are suggested to make the production units more
efficient; returns to scale are constant such that production can be increased
or decreased without affecting efficiency.i

Charnes, Cooper & Rhodes (1978, p.429) suggest that the usefulness of DEA
analysis is enhanced by the fact that inputs need only be ordinal amounts, for
example, psychometric or management performance factors. This allows the
inefficiency analysis to be examined with various partitions of inputs, which is
highly fertile for new management strategies.

Leibenstein & Maital (1992) suggest other advantages accrue because there is
no restriction on the form of the production function and it does not need to be
fully specified for the analysis to be successful; it is unbiased in that there is a
priori no priority given to any input or output over another; the technology can
be analysed to see if the production function should be forced through the
origin to model constant returns to scale (A. Charnes et al. 1978) or allowed to
exhibit variable returns to scale by not passing through the origin (Banker et
al. 1984); and organisations can be readily studied even if their inputs and
outputs are not subject to the market.

DEA disadvantages

Various authors note that DEA is less suited to a small number of production
units (William W. Cooper et al. 2007);ii DEA shows only relative inefficiency
rather than the potential for all production units (including those with best
practice) to perform much better; DEA uses extreme points of efficiency as
benchmarks but their peers may be unable to emulate these for various
reasons; and a small change to one of the best practice units can lead to large
changes in the analysis (William W. Cooper et al. 2007; Ahn & L. M. Seiford
1992; Leibenstein & Maital 1992).

DEA returns to scale

The simplest assumption in using DEA is that returns to scale are constant, as
formulated in the illustration below. This means that production can rise or fall
with the same mix of inputs. Therefore all apparent inefficiencies are due to
management practices.

Minimise E_n over w_1, ..., w_N, E_n
Subject to:
∑_{j=1}^{N} w_j y_ij − y_in ≥ 0,  i = 1, ..., I
∑_{j=1}^{N} w_j x_kj − E_n x_kn ≤ 0,  k = 1, ..., K
w_j ≥ 0,  j = 1, ..., N
where:
N = number of organisations
I = number of different outputs y_in
K = number of different inputs x_kn
w_j = weights applied across N organisations
E_n = efficiency score of the nth organisation

Illustration 41: DEA Constant Returns to Scale Formulation (E_n)
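
A minimal sketch of this constant returns formulation using Mathematica's
LinearProgramming function, with hypothetical data (four units, one input x
and one output y). The decision vector is {w_1, ..., w_N, E_n}, and each
right-hand side entry is paired with 1 for a ≥ constraint or -1 for a ≤
constraint:

(* hypothetical data: one output, one input, four production units *)
xdata = {2., 3., 6., 9.}; ydata = {1., 4., 6., 7.}; n = 3;
cvec = Append[ConstantArray[0., 4], 1.];      (* minimise E_n *)
mrows = {Append[ydata, 0.],                   (* Σ w_j y_j ≥ y_n *)
  Append[xdata, -xdata[[n]]]};                (* Σ w_j x_j - E_n x_n ≤ 0 *)
bvec = {{ydata[[n]], 1}, {0., -1}};
sol = LinearProgramming[cvec, mrows, bvec];
efficiency = Last[sol]                        (* 0.75 for unit 3 with this data *)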

Some production plants are constrained by being too small and are therefore
inefficient. In other cases a production plant can be far too large for its current
throughput and so can increase production without adding capacity. In the
business world, there is a remorseless endeavour to introduce flexibility into
production functions. Mergers, takeovers and rationalisation tend to resolve
situations where returns to scale are permanently mismatched and not tuned
into a relatively constant band of operation. The marginal production function
in DEA may be adjusted for variable rather than constant returns to scale. The
constant returns to scale formulation is reformulated with an additional
constraint that the weights w_j must sum to 1. This fits a tighter frontier to
the data. The following linear programs are used for variable and
non-increasing returns to scale:

Minimise S_n over w_1, ..., w_N, S_n
Subject to:
∑_{j=1}^{N} w_j y_ij − y_in ≥ 0,  i = 1, ..., I
∑_{j=1}^{N} w_j x_kj − S_n x_kn ≤ 0,  k = 1, ..., K
∑_{j=1}^{N} w_j = 1
w_j ≥ 0,  j = 1, ..., N
where:
N = number of organisations
I = number of different outputs y_in
K = number of different inputs x_kn
w_j = organisation weights
S_n = efficiency of the nth organisation

Illustration 42: DEA variable returns to scale (S)

Minimise R_n over w_1, ..., w_N, R_n
Subject to:
∑_{j=1}^{N} w_j y_ij − y_in ≥ 0,  i = 1, ..., I
∑_{j=1}^{N} w_j x_kj − R_n x_kn ≤ 0,  k = 1, ..., K
∑_{j=1}^{N} w_j ≤ 1
w_j ≥ 0,  j = 1, ..., N
where:
N = number of organisations
I = number of different outputs y_in
K = number of different inputs x_kn
w_j = organisation weights
R_n = efficiency of the nth organisation

Illustration 43: DEA non-increasing returns to scale (R)

Scale Efficiency (SE) is calculated as the ratio of efficiency with Constant
Returns (CR) to efficiency with Variable Returns (Illustration 42), i.e.
SE = E_n / S_n. If the value of this ratio is 1, then the production unit is
operating at optimal scale; if less than 1, it is not operating at optimum scale.

Where SE is less than 1, it is necessary to calculate another ratio to determine
whether a production unit is above or below its optimum scale: the ratio of
efficiency with Constant Returns to efficiency with Non-increasing Returns to
Scale. If the ratio E_n / R_n is equal to 1, then organisation n has increasing
returns to scale and needs to increase its size to achieve optimum scale.
Conversely, if E_n / R_n is less than 1, then organisation n is subject to
decreasing returns to scale and is considered too large relative to its optimum
size, therefore needing to reduce its size.
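
A worked example of these scale tests, with hypothetical efficiency scores:

(* hypothetical scores for organisation n *)
en = 0.72; sn = 0.90; rn = 0.72;
se = en/sn   (* 0.8 < 1, so the unit is not operating at optimal scale *)
en/rn        (* 1, so increasing returns: the unit should increase its size *)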

A6.2 Linear programming

Primal and dual formulations


Although the intertemporal model developed in this dissertation employs
nonlinear programming, it is useful to understand how simple single period
models can be built with linear programming.

Linear programming in benchmarking was discussed in Chapter 4 Economic
models for climate change policy analysis. It was shown that the Theory of
Complementary Slackness is important in presenting an optimal solution to the
dual formulation of a primal problem. The dual solution is the value of the
Lagrange multipliers, which are the marginal productivities of the resources
and equal to the shadow prices of the resources. In all cases where an optimal
solution for the primal problem is feasible, it will be possible to find an optimal
solution to the dual formulation.

From first principles, it can be shown that the monetary output of the economy
is the price vector p multiplied by the quantity y of commodities (ten
Raa 2005). Therefore, an economy seeking to maximise welfare measured as
consumption will maximise p y . However, this maximisation will be subject
to constraints of labour and capital, and perhaps energy and pollution.

The primal maximisation problem is therefore:

Max p y : A x + y ≤ x, k x ≤ M, l x ≤ N, x ≥ 0

Where the constraints represent:

Quantity: A x + y ≤ x, where x is total output units, y is demand units
and A is Leontief's technical coefficient matrix
Capital: k x ≤ M, where k is the capital required per unit of output and
M is the available capital stock
Labour: l x ≤ N, where l is the labour required per unit of output and
N is the available labour stock

Mathematica implements this with DualLinearProgramming, returning a vector
of x-values, shadow prices, lower bound and upper bound slacks:

Primal: Min c^T x : A_1 x = b_1, A_2 x ≥ b_2, l ≤ x ≤ u
Dual: Max b^T y + l^T z − u^T w : A^T y + z − w = c, y_2 ≥ 0, z, w ≥ 0

DualLinearProgramming returns the vector {x, y, z, w}.

If both problems are feasible, the solution is the same and two equations apply:

(A_2 x − b_2)^T y_2 = 0
(l − x)^T z = (u − x)^T w = 0

Gross National Product


For the primal maximisation problem above, which in shorter form is:

Max p y : A x y ≤ x , k x ≤ M , l x ≤ N , x ≥ 0

The constraints can be shown using matrix notation as:

(A − I) x + I y ≤ 0
k x ≤ M
l x ≤ N
−x ≤ 0

or C (x, y)^T ≤ (0, M, N, 0)^T, where C is the stacked coefficient matrix with
block rows (A − I, I), (k, 0), (l, 0) and (−I, 0).

With the objective function:

[0  p] (x, y)^T or a (x, y)^T

Using these conventions, the specification of the linear program becomes:

Max a (x, y)^T : C (x, y)^T ≤ (0, M, N, 0)^T

The Lagrangian shadow prices are:

λ = [p  r  w  σ]

Where:
p = commodity price
r = rate for rental of capital
w = wage rate
σ = the slack

Following Schrijver (1986, pp.90-6) the Lagrangian equation is given by:

[p  r  w  σ] C = [0  p]

which rearranges to two equations having the following meanings:

Equation                      Meaning
p = p                         shadow prices are the same as real world prices
p = p A + r k + w l − σ       shadow prices are the aggregate of factor input prices

Now, the primal and the dual solutions are linked by the Main Theorem of
Linear Programming, a x = λ b, so:

[0  p] (x, y)^T = [p  r  w  σ] (0, M, N, 0)^T

which provides the well-known macroeconomic value equation:

p y = r M + w N
or National Income = National Product

Make and Use tables


The traditional approach to input output modelling is to use Wassily Leontief's
technical coefficients (A) matrix where the feasibility of industrial production
and bill of final goods is assessed with a “quantity system” and prices are
determined by a separate “price system”.

However, Leontief's A matrix is derived from the basic national accounts of
each country, standardised as Use and Make tables pursuant to the UN System
of National Accounts 1993 (SNA93). The Use or U matrix records the
commodities demanded by industries for production. The V matrix records the
production of commodities by industries.

A Make table V lists all the commodity outputs per production unit. It is called
a “pure Make table” if there is just one commodity per production unit and
every commodity is produced by a production unit. The Australian and GTAP
input output tables are prepared on this basis.

The difference V − U provides the net output of each commodity. In commodity
terms (V^T − U) is the Gross Domestic Product of the economy. In money terms
p (V^T − U) is the value-added by the economy, which is the Gross Domestic
Income. These are the same as the final macroeconomic equation of value
derived above:

p y = r M + w N
or National Income = National Product

Since National Product is the sum of Consumer demand, Government demand
and Net Export demand, then if s is the level of activity of the production
units in the economy:

V^T·s = U·s + Y + G + (E − M)
or
(V^T − U)·s = Y + G + E − M

which is analogous to the Leontief formulation:

A·x + Y + G + E − M = x
or
(1 − A)·x = Y + G + E − M

Equating the UV and Leontief formulations:

V T – U ⋅s = 1 – A⋅x

and since x = V T⋅s then the U, V and A matrices are related by the
equations:

U = A⋅V T , or U⋅V = A
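A quick numerical check of this relation (toy Make and Use tables, chosen only for illustration) mirrors the aregion function in the data-mining code of Appendix 7, which computes A as U.Inverse[Transpose[V]]:

V = {{100, 0}, {5, 80}};   (* Make table: production units x commodities *)
U = {{20, 10}, {8, 15}};   (* Use table: commodities x production units *)
A = U.Inverse[Transpose[V]];
s = {6/5, 9/10};           (* any activity level *)
(Transpose[V] - U).s == (IdentityMatrix[2] - A).(Transpose[V].s) (* True *)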

Under optimisation, competitive equilibrium occurs by maximising the
objective function and determining the shadow prices $\lambda$. Industry activities $s$
vary, causing labour and capital resources to substitute between production
sectors.

The substitution between production sectors depends upon the price of the
inputs, which is the assumption of the Transcendental Production function.
In turn, the price of inputs responds to microeconomic supply and demand.

Primal and dual expressed as a UV formulation

ten Raa assumes that the criterion of economic policy is to maximise domestic
absorption. Technological constraints are provided by the UV material balance.
Resource constraints are provided by the usage of factor inputs of labour and
capital compared to endowments.

The primal becomes:

Maximise $Y$ subject to the constraints:

Material balance: $(U - V^{T})\, s + Y + G + (E - M) \le 0$
Labour endowment: $l \cdot s \le N$
Capital endowment: $k \cdot s \le K$
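A minimal sketch of this primal with Mathematica's LinearProgramming (two production units and two commodities; G and E − M folded to zero for brevity; all data illustrative):

(* variables v = {s1, s2, Y1, Y2}; LinearProgramming minimises, so the
   absorption objective enters with negative signs *)
V = {{100, 0}, {5, 80}}; U = {{20, 10}, {8, 15}};
l = {3, 2}; k = {4, 6};             (* factor use per unit of activity *)
Nend = 400; Kend = 700;             (* endowments *)
c = {0, 0, -1, -1};                 (* maximise Y1 + Y2 *)
m = Join[
    Join[Transpose[V] - U, -IdentityMatrix[2], 2], (* (V'-U)s - Y >= 0 *)
    {Join[-l, {0, 0}]},                            (* l.s <= N *)
    {Join[-k, {0, 0}]}];                           (* k.s <= K *)
b = {0, 0, -Nend, -Kend};
LinearProgramming[c, m, b]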

The dual of the linear program provides Lagrange multipliers and resource
slacks. As we have seen in Chapter 4 Economic models for climate change
policy analysis, the Lagrange multipliers represent the shadow prices
associated with the constraints, which are also the factor productivities.

The sorts of questions ten Raa has addressed with the UV technique are:

• How much can the level of final demand be raised if the economy is
made more efficient?
• What is the comparative advantage of the economy and best
composition of imports and exports?
• Is structural/technical/efficiency change or business cycle change
responsible for a rise in standard of living?
• Are competition and performance positively or negatively related?
• What is the increase in commodity prices with a new tax?
• What is the increase in employment if government expenditures
increase?
• What are the engines of growth in an economy, when productivity spills
over to other industries?
• Can services increase productivity? Have increases in manufacturing
productivity been due to eliminating (outsourcing) low productivity
service activities?

Production function of the economy with trade

The net output of the economy is domestic demand plus net exports, so:

$$(V^{T} - U)\, e = a + \begin{bmatrix} d \\ 0 \end{bmatrix}$$

Where the value of net exports is bounded below by the negative of the trade
deficit, $\bar p \cdot d \ge -D$, and domestic demand includes investment, which in
competitive economies is the Net Present Value of future consumption
(Weitzman 1976).

The constrained formulation for maximisation of final domestic demand is:

$$\max \; e^{T} a c \;:\; (V^{T} - U)\, s \ge a c + \begin{bmatrix} z \\ 0 \end{bmatrix},\quad K s \le M,\quad L s \le N,\quad \bar p\, z \ge -D,\quad s \ge 0$$

Where:

$z$ — new export vector
$c$ — a scalar expansion factor
$1/c$ — efficiency of the economy, measured as actual output / potential output

The usual primal and dual formulations are:

$$\max \; a x \;:\; C x \le b$$
$$\min \; \lambda b \;:\; \lambda C = a,\; \lambda \ge 0$$

For which we have the primal schema:

$$\max \; [\,0 \;\; e^{T}a \;\; 0\,] \begin{bmatrix} s \\ c \\ z \end{bmatrix} \;:\;
\begin{bmatrix}
U - V^{T} & a & \begin{bmatrix} I \\ 0 \end{bmatrix} \\
K & 0 & 0 \\
L & 0 & 0 \\
0 & 0 & -\bar p \\
-I & 0 & 0
\end{bmatrix}
\begin{bmatrix} s \\ c \\ z \end{bmatrix} \le \begin{bmatrix} 0 \\ M \\ N \\ D \\ 0 \end{bmatrix}$$

and the dual schema:

$$\min \; [\,p \;\; r \;\; w \;\; \varepsilon \;\; \sigma\,] \begin{bmatrix} 0 \\ M \\ N \\ D \\ 0 \end{bmatrix} \;:\;
[\,p \;\; r \;\; w \;\; \varepsilon \;\; \sigma\,]
\begin{bmatrix}
U - V^{T} & a & \begin{bmatrix} I \\ 0 \end{bmatrix} \\
K & 0 & 0 \\
L & 0 & 0 \\
0 & 0 & -\bar p \\
-I & 0 & 0
\end{bmatrix}
= [\,0 \;\; e^{T}a \;\; 0\,]$$

The dual reduces to:

$$\min_{r,\,w,\,\varepsilon \,\ge\, 0} \; r M + w N + \varepsilon D \;:\; p\,(V^{T} - U) \le r K + w L,\quad p\,a = e^{T} a,\quad p_{T} = \varepsilon\,\bar p$$

Where:

$p_{T}$ = vector of tradeable commodity prices
$\bar p$ = terms of trade (world trade currency, US$)
$p$ = vector of prices (local currency)
and
$\varepsilon$ = exchange rate
= shadow price of the deficit constraint
= increase of final demand per dollar of international debt

Where two countries trade, the material balances of the two economies need to
be jointly balanced. There is only one level of imports and one international
shadow price for each traded commodity that satisfies the pooled material
balance, notwithstanding the direction of trade (a tilde denoting the second
economy):

$$(V^{T} - U)\, s + (\tilde V^{T} - \tilde U)\, \tilde s \ge a c + \tilde a \tilde c$$

Secondly, the net exports of each country need to be controlled so that the
pooled material balance does not run away in favour of one country due to
better terms of trade as exports increase. This would lead to final demand in
one economy being maximised in the presence of massive production, while
production sectors in the other economy are shut down (with demand satisfied
by imports).

Therefore economies that experience a virtuous increase in terms of trade
achieve, for the same value of exports, a much higher attainable domestic
demand. The reverse occurs if the level of exports increases through an
expansion of volume and a reduction in terms of trade.

Where two countries engage in trade, the final demand vector $c$ is
maximised with (the trade vector $z$ entering the tradeable rows of the two
material balances with opposite signs):

$$\max \; c \;:\;
\begin{bmatrix}
U - V^{T} & a & 0 & \begin{bmatrix} I \\ 0 \end{bmatrix} \\
K & 0 & 0 & 0 \\
L & 0 & 0 & 0 \\
-I & 0 & 0 & 0 \\
0 & \tilde a & \tilde U - \tilde V^{T} & \begin{bmatrix} -I \\ 0 \end{bmatrix} \\
0 & 0 & \tilde K & 0 \\
0 & 0 & \tilde L & 0 \\
0 & 0 & -I & 0
\end{bmatrix}
\begin{bmatrix} s \\ c \\ \tilde s \\ z \end{bmatrix}
\le
\begin{bmatrix} 0 \\ M \\ N \\ 0 \\ 0 \\ \tilde M \\ \tilde N \\ 0 \end{bmatrix}$$

and $\varepsilon$ is optimised to the trade balance, subject to $p_{T}\, z = p_{T}\, d$. The dual
is:

$$\min \; [\,p \;\; r \;\; w \;\; \sigma \;\; \tilde p \;\; \tilde r \;\; \tilde w \;\; \tilde\sigma\,]
\begin{bmatrix} 0 \\ M \\ N \\ 0 \\ 0 \\ \tilde M \\ \tilde N \\ 0 \end{bmatrix} \;:\;
[\,p \;\; r \;\; w \;\; \sigma \;\; \tilde p \;\; \tilde r \;\; \tilde w \;\; \tilde\sigma\,]
\begin{bmatrix}
U - V^{T} & a & 0 & \begin{bmatrix} I \\ 0 \end{bmatrix} \\
K & 0 & 0 & 0 \\
L & 0 & 0 & 0 \\
-I & 0 & 0 & 0 \\
0 & \tilde a & \tilde U - \tilde V^{T} & \begin{bmatrix} -I \\ 0 \end{bmatrix} \\
0 & 0 & \tilde K & 0 \\
0 & 0 & \tilde L & 0 \\
0 & 0 & -I & 0
\end{bmatrix}
= [\,0 \;\; 1 \;\; 0 \;\; 0\,]$$

In a perfect world, an $\varepsilon$ would be sought that brings net exports to zero.
However, this is unrealistic and so the observed commodity trade vector is
used instead:

$$p_{T} \cdot z = p_{T} \cdot d$$

The location of comparative advantages in a system of more than two
economies requires the vector scanner, $\varepsilon$, in a nonlinear maximisation to
find the value such that the consequent vector of national surpluses for all
economies but one is mapped into the observed surpluses. Walras' law takes
care of the remaining economy.

MRIO formulation

For a two-country multiregional IO model, the LinearProgramming schema is:

$$\max \; a_1 c_1 + a_2 c_2 \;:\;
\begin{bmatrix}
a_1 & 0 & U_1 - V_1^{T} & 0 & \text{Rect1} \\
0 & a_2 & 0 & U_2 - V_2^{T} & \text{Rect1} \\
0 & 0 & K_1 & 0 & 0 \\
0 & 0 & 0 & K_2 & 0 \\
0 & 0 & L_1 & 0 & 0 \\
0 & 0 & 0 & L_2 & 0 \\
0 & 0 & 0 & 0 & \text{Rect2} \\
0 & 0 & 0 & 0 & \text{Square}
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ s_1 \\ s_2 \\ z \end{bmatrix}
\;\;
\begin{matrix}
\le \text{VertVector}[0] \\
\le \text{VertVector}[0] \\
\le M_1 \\
\le M_2 \\
\le N_1 \\
\le N_2 \\
\le \text{VertVector}[E] \\
= \text{VertVector}[0]
\end{matrix}$$
0 0 0 0 Square =VertVector [0 ]

Where Rect1 is a matrix with rows equal to the number of commodities and
columns equal to countries × countries × commodities. The matrix expresses that each
commodity can be exported to the same commodity line of another country
(and indeed to itself, although this is constrained to zero in the trade
equivalences matrix). With $I$ denoting the commodities × commodities
identity block:

exporting →     country 1      country 1      country 2      country 2
importing →     country 1      country 2      country 1      country 2
                commodities    commodities    commodities    commodities
commodities     I              I              I              I

Rect2 is a matrix of prices, with the number of rows equal to the number of
countries, each non-zero element being −1, and the right-hand vector of Total
Net Exports for each country equal to VertVector[E] (as above):

exporting →     country 1      country 1      country 2      country 2
importing →     country 1      country 2      country 1      country 2
                commodities    commodities    commodities    commodities
country 1       −1 ⋯ −1        −1 ⋯ −1        0              0
country 2       0              0              −1 ⋯ −1        −1 ⋯ −1

so that:

$$\text{Rect2} \cdot \begin{bmatrix} z_{cou1,\,cou1,\,commodities} \\ z_{cou1,\,cou2,\,commodities} \\ z_{cou2,\,cou1,\,commodities} \\ z_{cou2,\,cou2,\,commodities} \end{bmatrix} \le \begin{bmatrix} \text{Total Net Exports}_{cou1} \\ \text{Total Net Exports}_{cou2} \end{bmatrix}$$

Square is a (countries × countries × commodities)-square matrix of trade
equivalences such that the trade of a country with itself is constrained to zero
and the trade of each commodity, between each pair of trading countries, is
constrained such that total world trade flows net to zero:

If $cou2 = cou1$ then $z_{cou2,\,cou1,\,comm} = 1$ (this is really an identity matrix)
If $cou2 \ne cou1$ then $z_{cou1,\,cou2,\,comm} = -1$

Iterating through {cou, cou, commodities} with the last dimension changing
the most frequently, creating a new line in the z-equivalence matrix with each
iteration …

exporting →      country 1      country 1      country 2      country 2
importing →      country 1      country 2      country 1      country 2
                 commodities    commodities    commodities    commodities
cou1 cou1 comm   I              0              0              0
cou1 cou2 comm   0              I              −I             0
cou2 cou1 comm   0              −I             I              0
cou2 cou2 comm   0              0              0              I

$$\text{Square} \cdot \begin{bmatrix} z_{cou1,\,cou1,\,commodities} \\ z_{cou1,\,cou2,\,commodities} \\ z_{cou2,\,cou1,\,commodities} \\ z_{cou2,\,cou2,\,commodities} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

(where $I$ is again the commodities × commodities identity block)

In addition to constraining trade variables with the above z-equivalence matrix,
non-traded commodities need to be further constrained such that there is zero
trade. This is achieved within the limits for each variable specified for the
LinearProgramming function. Limits on variables in the vector of the objective
function are (see the sketch following this list):

c[cou]            {0, ∞}     for each country's domestic demand multiplier
s[U columns]      {0, ∞}     for each sector activity level
z[cou, cou, com]  {−∞, ∞}    for a traded commodity
z[cou, cou, com]  {0, 0}     for a non-traded commodity

where:
cou = number of countries
com = number of commodities
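A minimal sketch of these conventions (placeholder data; the variable order {c1, c2, z1, z2} with z2 non-traded is hypothetical): LinearProgramming takes the {lower, upper} limits as its fourth argument, and a right-hand entry {b_i, -1} marks a "≤" row while {b_i, 0} marks an equality row, so the inequality and equivalence blocks above can be stacked in a single call:

c = {-1, -1, 0, 0};                       (* maximise c1 + c2 *)
m = {{1, 1, 0, 0},                        (* placeholder "<=" row *)
     {0, 0, 1, 1}};                       (* placeholder "=" equivalence row *)
b = {{10, -1}, {0, 0}};
bounds = {{0, Infinity}, {0, Infinity},   (* c[cou] *)
          {-Infinity, Infinity},          (* z: traded commodity *)
          {0, 0}};                        (* z: non-traded commodity *)
LinearProgramming[c, m, b, bounds]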

A6.3 Emission permits, amelioration and abatement


Greenhouse gas pollution can be modelled as a “good” or “bad” commodity.
There are various ways of implementing each.

Modelling pollution as a “bad”


If a quantity constraint is placed on the emission of a “bad” then the constraint
is treated in the same way as a labour or capital constraint. Alternatively, the
“bad” can be modelled with an extra account in both the U and V matrices and
treated in the same way as other commodities. The key difference is that
“bads” are modelled with the inequality reversed relative to the normal
situation of a “good” commodity.
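As a one-row illustration of the quantity-constraint treatment (illustrative coefficients, following the small LP conventions of the sketches above), the emissions cap simply joins the labour and capital rows:

(* minimal sketch: an emissions cap as one extra "<=" row *)
m = {{1, 2},     (* labour per unit of activity *)
     {3, 1},     (* capital per unit of activity *)
     {30, 12}};  (* emissions per unit of activity *)
b = {{80, -1}, {100, -1}, {40, -1}};  (* endowments and the cap *)
LinearProgramming[{-1, -12/10}, m, b] (* maximise value of activity *)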

However, this dissertation implements pollution as a “good” rather than a
“bad”.

Modelling pollution as a “good”


A “bad” such as greenhouse gas pollution can be treated in the same way as a
“good” by redefining emissions as a new commodity requirement for emissions
permits. An extra account is added to both the U and V matrices and then
emissions permits can be treated as a traded market in the same way as other
commodities.
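A sketch of the bookkeeping (toy matrices as in the earlier sketches; the permit-issuing account and all coefficients are purely illustrative): a permit row is appended to U recording permit use per production unit, and a permit-issuing unit is appended to V carrying the permit endowment, after which $V^{T} - U$ treats permits like any other commodity:

U = {{20, 10}, {8, 15}}; V = {{100, 0}, {5, 80}};
permitUse = {{30, 12}};     (* permits required per production unit *)
permitIssue = {{40}};       (* permits created by the new account *)
Uext = ArrayFlatten[{{U, {{0}, {0}}}, {permitUse, {{0}}}}];
Vext = ArrayFlatten[{{V, {{0}, {0}}}, {{{0, 0}}, permitIssue}}];
Transpose[Vext] - Uext      (* last row: net permit supply *)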

It may also be useful to create an additional account in each of the U and V


matrices for abatement services.

A6.4 Intertemporal stocks and flows model


ten Raa (ten Raa 2005, pp.166-75) derives a dynamic intertemporal model
from spatial distributions (convolutions) of stocks and flows. The primary
purpose of this analysis was to demonstrate equivalence of a UV dynamic
model with Leontief's dynamic model (A and B matrices). This was successfully
achieved and confirmed Brody's condition for the Leontief dynamic model.

Assuming that the trade vector $Z$ is part of the consumption vector $Y$, the
"stocks" equation is production $V * s$ equals uses $U * s$ plus consumption
$Y$, where each of production and uses is convoluted with the activity level in
each time period:

$$V * s = U * s + Y$$

which upon differentiating becomes the flows equation:

$$\partial(V * s) = \partial(U * s) + \partial Y$$

The differential of a convolution product may be applied to either of the
operators, and this is done variously for V and U:

$$\partial V * s = U * \partial s + \partial Y$$

The change in V is the depreciation $\partial V = -\delta \cdot V$.

Adjusting for zero elements in the convolution, the flow equation becomes:

$$s_t - \delta \cdot (V * s) = U * \partial s + \partial Y$$

Substituting the first equation for stock balance, $V * s = U * s + Y$:

$$s_t - \delta \cdot (U * s + Y) = U * \partial s + \partial Y$$

Leontief's assumption of instantaneous production means:

$$U * s = U_0 \cdot s_t \quad\text{and correspondingly}\quad U * \partial s = U_0 (s_{t+1} - s_t)$$

so

$$s_t - \delta \cdot (U_0 \cdot s_t + Y_t) = U_0 (s_{t+1} - s_t) + (Y_{t+1} - Y_t)$$

This leads to the important material balance:

$$[1 + (1 - \delta) \cdot U_0] \cdot s_t - U_0 \cdot s_{t+1} - Y_{t+1} + (1 - \delta) \cdot Y_t = 0$$

The static equation $(U - V)\,s_t + Y_t + I = 0$ from above can be substituted into
this equation:

$$U_0 \cdot s_t - V_0 \cdot s_t + Y_t + I = 0 \quad\text{and, upon rearranging,}\quad U_0 \cdot s_t + Y_t = V_0 \cdot s_t - I.$$

Upon rearranging, this provides the final material balance:

$$U_0 \cdot s_{t+1} + Y_{t+1} = s_t + (1 - \delta)(U_0 \cdot s_t + Y_t)$$
or
$$U_0 \cdot s_{t+1} + Y_{t+1} = s_t + (1 - \delta)(V_0 \cdot s_t - I)$$

After investigating this dispersion method and discussing its application with
ten Raa, this dissertation research uses an alternative intertemporal
formulation based on standard accounting principles for stocks and flows.

A6.5 Appendix references

Ahn, T. & Seiford, L.M., 1992. Sensitivity of DEA to models and variable sets in
hypothesis test setting: the efficiency of university operations. In
Creative and Innovative Approaches to the Science of Management.
New York: Quorum.

Banker, R.D., Charnes, R.F. & Cooper, W.W., 1984. Some models for estimating
technical and scale inefficiencies in data envelopment analysis.
Management Science, 30, 1078-92.

Charnes, A., Cooper, W.W. & Rhodes, E., 1978. Measuring the efficiency of
decision making units. European Journal of Operational Research, 2,
429-44.

Cooper, W.W., Seiford, L.M. & Tone, K., 2007. Data envelopment analysis: a
comprehensive text with models, applications, references and DEA-solver
software. 2nd ed., New York: Springer.

Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the
Royal Statistical Society: Series A (Statistics in Society), 120(3), 253-82.

Leibenstein, H. & Maital, S., 1992. Empirical estimation and partitioning of
x-inefficiency: A data envelopment approach. The American Economic
Review, 82(2), 428-433.

ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.

Schrijver, A., 1986. Theory of linear and integer programming, Wiley.

Weitzman, M.L., 1976. On the welfare significance of national product in a
dynamic economy. The Quarterly Journal of Economics, 156-162.

i Techniques for including decreasing returns to scale and increasing returns to
scale have since been developed to relax the assumption of constant returns to
scale.
ii Since, if the number of inputs and outputs is large compared to the number of
production units, some production units may be wrongly rated as efficient.
Appendix 7 Mining the GTAP Database

A7.1 Aggregating the GTAP7 database

Region and commodity aggregations


The GTAP 7 database (Hertel 1999; Hertel & Walmsley 2008) may be
aggregated using GTAP utility functions in the GTAPAgg package.i

An aggregation scenario may be prepared using “aggedit.exe” to produce an


“agg” specification file, say “sntest01.agg”. In this research, the regions
defined in this file are:

Region No. 1 — Code: NAFTA (North America)
Regions comprising: can (Canada), usa (United States of America), mex (Mexico)

Region No. 2 — Code: EU25 (European Union 25 countries)
Regions comprising: aut (Austria), bel (Belgium), cyp (Cyprus), cze (Czech Republic), dnk (Denmark), est (Estonia), fin (Finland), fra (France), deu (Germany), grc (Greece), hun (Hungary), irl (Ireland), ita (Italy), lva (Latvia), ltu (Lithuania), lux (Luxembourg), mlt (Malta), nld (Netherlands), pol (Poland), prt (Portugal), svk (Slovakia), svn (Slovenia), esp (Spain), swe (Sweden), gbr (United Kingdom)

Region No. 3 — Code: ROW (Rest of the World)
Regions comprising: aus (Australia), nzl (New Zealand), xoc (Rest of Oceania), chn (China), hkg (Hong Kong), jpn (Japan), kor (Korea), twn (Taiwan), xea (Rest of Asia), khm (Cambodia), idn (Indonesia), lao (Lao People's Democratic Republic), mmr (Myanmar), mys (Malaysia), phl (Philippines), sgp (Singapore), tha (Thailand), vnm (Vietnam), xse (Rest of South East Asia), bgd (Bangladesh), ind (India), pak (Pakistan), lka (Sri Lanka), xsa (Rest of South Asia), xna (Rest of North America), arg (Argentina), bol (Bolivia), bra (Brazil), chl (Chile), col (Colombia), ecu (Ecuador), pry (Paraguay), per (Peru), ury (Uruguay), ven (Venezuela), xsm (Rest of South America), cri (Costa Rica), gtm (Guatemala), nic (Nicaragua), pan (Panama), xca (Rest of Central America), xcb (Caribbean), che (Switzerland), nor (Norway), xef (Rest of EFTA), alb (Albania), bgr (Bulgaria), blr (Belarus), hrv (Croatia), rou (Romania), rus (Russian Federation), ukr (Ukraine), xee (Rest of Eastern Europe), xer (Rest of Europe), kaz (Kazakhstan), kgz (Kyrgyzstan), xsu (Rest of former Soviet Union), arm (Armenia), aze (Azerbaijan), geo (Georgia), irn (Islamic Republic of Iran), tur (Turkey), xws (Rest of Western Asia), egy (Egypt), mar (Morocco), tun (Tunisia), xnf (Rest of North Africa), nga (Nigeria), sen (Senegal), xwf (Rest of Western Africa), xcf (Central Africa), xac (South Central Africa), eth (Ethiopia), mdg (Madagascar), mwi (Malawi), mus (Mauritius), moz (Mozambique), tza (Tanzania), uga (Uganda), zmb (Zambia), zwe (Zimbabwe), xec (Rest of Eastern Africa), bwa (Botswana), zaf (South Africa), xsc (Rest of South Africa Customs Union)

The commodity classifications are also aggregated, as follows:

Generic description: food (agriculture and food processing)
Commodities comprising: pdr (paddy rice), wht (wheat), gro (cereal grains nec), v_f (vegetables, fruit, nuts), osd (oil seeds), c_b (sugar cane, sugar beet), pfb (plant-based fibres), ocr (crops nec), ctl (bovine cattle, sheep and goats, horses), oap (animal products nec), rmk (raw milk), wol (wool, silk-worm cocoons), frs (forestry), fsh (fishing), cmt (bovine cattle, sheep and goat meat products), omt (meat products), vol (vegetable oils and fats), mil (dairy products), pcr (processed rice), sgr (sugar), ofd (food products nec), b_t (beverages and tobacco products)

Generic description: mnfc (manufacturing)
Commodities comprising: coa (coal), oil (oil), gas (gas), omn (minerals nec), tex (textiles), wap (wearing apparel), lea (leather products), lum (wood products), ppp (paper products, publishing), p_c (petroleum, coal products), crp (chemical, rubber, plastic products), nmm (mineral products nec), i_s (ferrous metals), nfm (metals nec), fmp (metal products), mvh (motor vehicles and parts), otn (transport equipment nec), ele (electronic equipment), ome (machinery and equipment nec), omf (manufactures nec)

Generic description: serv (services)
Commodities comprising: ely (electricity), gdt (gas manufacture, distribution), wtr (water), cns (construction), trd (trade), otp (transport nec), wtp (water transport), atp (air transport), cmn (communication), ofi (financial services nec), isr (insurance), obs (business services nec), ros (recreational and other services), osg (public administration and defence, education, health), dwe (ownership of dwellings)

The factor aggregations are:

Generic factor: land — comprising LAN (land), NTR (NatRes, natural resources)
Generic factor: labour — comprising ULA (UnSkLab, unskilled labour), SLA (SkLab, skilled labour)
Generic factor: capital — comprising Capital

The “agg” file needs to be copied to a “txt” file, for example “sntest01.txt”. The
database is aggregated by running “data-agg.bat sntest01”, where the
specification file is “sntest01.txt”. The aggregation function produces six
output files in a directory of the same name, “sntest01”. The files are in “har”
format, which is a proprietary GEMPACK format but may be viewed with
“viewhar.exe”:ii

• gdat.har (main economic data file)


• gpar.har (parameter file for GTAP CGE model)
• gset.har (definitions file)
• gtax.har (calculated tax rates)
• gview.har (additional data file)
• gvole.har (energy volumes Mtoe)

There are a number of methods of transforming the data in “har” files for use
in other database systems. Perhaps the most convenient is to generate a
standard “sql script” file from each “har” file using “seehar.exe” in the
Flexagg7 package of files. When “seehar.exe” initially executes, the following
sequence of commands achieves an sql-script file in the same directory
“sntest01”:

• type “sql” as the option and Carriage Return (Enter)
• type Carriage Return (Enter) to leave the options menu
• type the complete file location of the “har” file to be processed and
Carriage Return (Enter) to continue (it may help to put the full address
in Notepad and copy/paste it as the required location – then only the har
file name needs to be appended)
• press Carriage Return (Enter) to accept the default output file
• press Carriage Return (Enter) to continue
• type "r" as the option and then Carriage Return (Enter) to output the
data as an sql script file

The “sql” file can then be executed from an HSQLDB Database Engine or
within Mathematica to create a standalone HSQLDB database corresponding
to the “har” file. It will be necessary to remove some inconvenient
apostrophes from the sql using a text editor (i.e. change Firms' to Firms and
Agents' to Agents) and to change the table names to avoid conflicts (i.e. edit
basedata.sql and change HEADLIST, SETLIST and RARRAY to, say,
HEADLISTBD, SETLISTBD and RARRAYBD).
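A minimal sketch of the Mathematica route (the DatabaseLink connection name "gtap3" and the file path are hypothetical; the naive split on ";" assumes the script contains no semicolons inside string literals):

Needs["DatabaseLink`"];
conn = OpenSQLConnection["gtap3"];          (* hypothetical connection *)
script = Import["/path/to/sntest01/basedata.sql", "Text"];
stmts = Select[StringTrim /@ StringSplit[script, ";"], # =!= "" &];
Scan[SQLExecute[conn, #] &, stmts];         (* run each statement in turn *)
CloseSQLConnection[conn];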

In addition to the standard GTAP7 database files, GTAP also provides
consistent data for greenhouse gas emissions from fossil fuel combustion in Gg
CO2 (gigagrams of CO2) in a supplementary file “gtap_co2_v7.har”, which
corresponds to the energy volumes data “gsdvole.har” (Lee 2008).

Social Accounting Matrix


Once the stand alone database has been created then data may be selected
from various tables. These tables have been arranged for GTAP's CGE model
and require considerable interpretation. The best guide to interpreting these
tables is with GTAP's own reconciliation to a Social Accounting Matrix (SAM),
which is shown in the diagram below (McDonald & Patterson 2004, p.6):

Illustration 44: Social Accounting Matrix (Source: McDonald & Patterson 2004)

The SAM equations can be rationalised with the GTAP database tables as set
out below.

Deriving commodity relationships from the SAM


Equations for the rows and columns:

Sales (rows 1 & 2) = Purchases (columns 1 & 2):

$$\{VIAM + VDAM\} + \{VIPM + VDPM\} + \{VIGM + VDGM\} + \{VIIM + VDIM\} + VST + VXWD + VIWS - VTWR \\ = VOM + \{VIMS - VIWS\} + \{VXWD - VXMD\} + VTWR$$

Using:

U = VIAM + VDAM
C = VIPM + VDPM
G = VIGM + VDGM
I = VIIM + VDIM

and simplifying, the above equation becomes:

$$U + C + G + I + VST = VOM + VIMS - VXMD$$
$$U + C + G + I + VST - VOM - VIMS + VXMD = 0$$

However, $V = VOM$ and since $VOM = VOA + OUTTAX$ we need to include
taxes. Therefore $VXWD = VXMD + XTAX$ and, since we need to use world
prices rather than market prices:

$$VXMD = VXWD - XTAX \quad\text{and}\quad VIMS = VIWS + MTAX.$$

Therefore:

$$U + C + I + G + VST - V - (VIWS + MTAX) + (VXWD - XTAX) = 0$$

and rearranging to the material balance of the economy:

$$U - V + C + I + G + (VXWD - VIWS) - (MTAX + XTAX) + VST = 0$$

Also, the net output of the economy is $(V - U)$, so:

$$(V - U) = C + I + G + (VXWD - VIWS) - (MTAX + XTAX) + VST$$

where $(VXWD - VIWS) = \text{Net Exports at World Prices}$.

Therefore:

$$GNP = (V - U) = C + I + G + \text{Net Exports at World Prices} - (MTAX + XTAX) + VST$$

where:

VST = Export transport margins at Market Prices
VXMD = Bilateral Exports at Market Prices
VXWD = Bilateral Exports at World Prices
VIMS = Bilateral Imports at Market Prices
VIWS = Bilateral Imports at World Prices

Material Balance

The material balance equation is:

$$U - V + C + I + G + \text{Net Exports at World Prices} - (MTAX + XTAX) + VST \le 0$$
$$U - V + C + I + G + \text{Net Exports at World Prices} - Bias \le 0$$
$$U - V + C + I + G + \text{Net Exports at World Prices} \le Bias$$

where $Bias = (MTAX + XTAX) - VST$.

Therefore,

$$Bias = U - V + C + I + G + \text{Net Exports at World Prices}$$

where Bias is the difference of the rows of $U - V + C + I + G + X - M$ from
zero, due to taxes $(MTAX + XTAX)$ and export transport margins $(VST)$.

It might be noted that there is no column balance between the net exports of
various countries unless taxes are included. Therefore, the column balance
needs to be performed manually.
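For reference, the Bias vector can be pulled straight from the mining functions of section A7.3 below (assuming the "gtap3res" connection that the package opens is configured), since biregion implements the row sums of U − V + C + I + G + X − M:

Needs["Gtapfunctions`"];   (* the package listed in section A7.3 *)
biregion["NAFTA"]          (* Bias rows for the NAFTA aggregation *)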

A7.2 Creation of GTAP economic databases within Mathematica
File: gtap_make_mathematica_db_03.nb

(* Open Connection *)
<< DatabaseLink`
conn = OpenSQLConnection[]

Clear[as, varray, vselect, vsumdomimp];


as[a_] := If[a == {}, {0}, a];
varray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name];
(*vdpm=varray["VDPM"][[All,{1,3,4}]];*)

vsource = varray["GVIEWRA", "CM04"];


vrows = Union[varray["GVIEWRA", "CM04"][[All, 3]]];
vselect[array_, region_, component_] := Select[array, #[[4]] == region &&
#[[5]] == component &][[All, {1, 3}]];
vsumdomimp[region_] := vselect[vsource, region, "prodrev"];
vsumdomimp["NAFTA"] // MatrixForm;
vregion[region_] := DiagonalMatrix[vsumdomimp[region][[All, 1]]];
vNAFTA = vregion["NAFTA"];

Print["vNAFTA : ", TableForm[vNAFTA,
TableHeadings → {vrows, vrows}]];

Clear[uarray, uselect, usumdomimp, fsumdomimp];


uarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
usource = uarray["GVIEWRA", "SF01"];
uselect[array_, region_, source_, component_, urows_, ucols_] :=
Select[array, MemberQ[urows, #[[3]]] && MemberQ[ucols, #[[4]]] &&
#[[5]] == region && #[[6]] == source && #[[7]] == component &]
[[All, {1, 3, 4}]];
usumdomimp[region_] := Transpose[{ uselect[usource, region, "domestic",
"mktexp", vrows, vrows][[All, 1]] /. x_ /; x -> as[x] + uselect[usource,
region, "imported", "mktexp", vrows, vrows][[All, 1]] /. x_ /; x ->
as[x], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
2]], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
3]]}];
usumdomimp["NAFTA"] // MatrixForm;
uregion[region_] := Transpose[Partition[usumdomimp[region][[All, 1]],
Length[vrows]]];
uNAFTA = uregion["NAFTA"];
Print["uNAFTA : ", TableForm[uNAFTA,
TableHeadings → {vrows, vrows}]];

frows = Complement[Union[varray["GVIEWRA", "SF01"][[All, 3]]], vrows];


fsumdomimp[region_] := Transpose[{ uselect[usource, region, "domestic",
"mktexp", frows, vrows][[All, 1]] /. x_ /; x -> as[x] + uselect[usource,
region, "imported", "mktexp", frows, vrows][[All, 1]] /. x_ /; x ->
as[x], uselect[usource, region, "domestic", "mktexp", frows, vrows][[All,
2]], uselect[usource, region, "domestic", "mktexp", frows, vrows][[All,
3]]}];
fsumdomimp["NAFTA"] // MatrixForm;
fregion[region_] := Transpose[Partition[fsumdomimp[region][[All, 1]],
Length[frows]]];
fNAFTA = fregion["NAFTA"];
Print["fNAFTA factor inputs : ", TableForm[fNAFTA,
TableHeadings -> {frows, vrows}]]

gnpregion[region_] := uregion[region] - Transpose[vregion[region]];


gnpNAFTA = gnpregion["NAFTA"];
Print["uNAFTA - Inv_vNAFTA_Transpose : ", TableForm[gnpNAFTA,
TableHeadings -> {vrows, vrows}]]

aregion[region_] := uregion[region].Inverse[Transpose[vregion[region]]];
aNAFTA = aregion["NAFTA"];
Print["aNAFTA technical matrix : ", TableForm[aNAFTA,
TableHeadings -> {vrows, vrows}]]

Clear[yarray, yselect, ysumdomimp];


yarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
ysource = yarray["GVIEWRA", "SF02"];
yselect[array_, region_, source_, component_, yrows_] := Select[array,
MemberQ[yrows, #[[3]]] && #[[4]] == region && #[[5]] == source && #[[6]]
== component &][[All, {1, 3}]];
ysumdomimp[region_] := Transpose[{ yselect[ysource, region, "domestic",
"mktexp", vrows][[All, 1]] /. x_ /; x -> as[x] + yselect[ysource, region,
"imported", "mktexp", vrows][[All, 1]] /. x_ /; x -> as[x],
yselect[ysource, region, "domestic", "mktexp", vrows][[All, 2]]}];
ysumdomimp["NAFTA"] // MatrixForm;

yregion[region_] := ysumdomimp[region][[All, 1]];
yNAFTA = yregion["NAFTA"];
Print["yNAFTA : ", TableForm[yNAFTA,
TableHeadings -> {vrows}]];

Clear[garray, gselect, gsumdomimp];


garray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
gsource = garray["GVIEWRA", "SF03"];
gselect[array_, region_, source_, component_, yrows_] := Select[array,
MemberQ[yrows, #[[3]]] && #[[4]] == region && #[[5]] == source && #[[6]]
== component &][[All, {1, 3}]];
gsumdomimp[region_] := Transpose[{ gselect[gsource, region, "domestic",
"mktexp", vrows][[All, 1]] /. x_ /; x -> as[x] + gselect[gsource, region,
"imported", "mktexp", vrows][[All, 1]] /. x_ /; x -> as[x],
gselect[gsource, region, "domestic", "mktexp", vrows][[All, 2]]}];
gsumdomimp["NAFTA"] // MatrixForm;
gregion[region_] := gsumdomimp[region][[All, 1]];
gNAFTA = gregion["NAFTA"];
Print["gNAFTA : ", TableForm[gNAFTA,
TableHeadings -> {vrows}]];

Clear[exarray, exselect, exsumdomimp, exsumdomimp];


exarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
exsource = exarray["GVIEWRA", "BI01"];
tocous = Union[exsource[[All, 5]]];
exselect[array_, region_, urows_, toreg_] := Select[array, MemberQ[urows,
#[[3]]] && #[[4]] == region && #[[5]] == toreg && #[[6]] == "exprev" &]
[[All, {1, 3}]];
exsumdomimp[region_] := Transpose[{ Apply[Plus, Map[exselect[exsource,
region, vrows, #][[All, 1]] /. x_ /; x -> as[x] &, tocous]],
exselect[exsource, region, vrows, region][[All, 2]]}];
exregion[region_] := {Transpose[ Partition[exsumdomimp[region][[All,
1]], Length[vrows]]]}
exsumdomimp["NAFTA"] // MatrixForm;
exNAFTA = exregion["NAFTA"];
Print["exNAFTA : ", TableForm[exNAFTA,
TableHeadings -> {{" "}, vrows}]]

Clear[imarray, imselect, imsumdomimp, imsumdomimp];


imarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
imsource = imarray["GVIEWRA", "BI02"];
fromcous = Union[imsource[[All, 5]]];
imselect[array_, region_, urows_, toreg_] := Select[array, MemberQ[urows,
#[[3]]] && #[[4]] == region && #[[5]] == toreg && #[[6]] == "impcost" &]
[[All, {1, 3}]];
imsumdomimp[region_] := Transpose[{ Apply[Plus, Map[imselect[imsource,
region, vrows, #][[All, 1]] /. x_ /; x -> as[x] &, fromcous]],
imselect[imsource, region, vrows, region][[All, 2]]}];
imregion[region_] := {Transpose[ Partition[imsumdomimp[region][[All,
1]], Length[vrows]]]}
imsumdomimp["NAFTA"] // MatrixForm;
imNAFTA = imregion["NAFTA"];
Print["imNAFTA : ", TableForm[imNAFTA,
TableHeadings -> {{" "}, vrows}]]

Clear[ygsum];

ygsum[region_] := Transpose[{ysumdomimp[region][[All, 1]] +
gsumdomimp[region][[All, 1]], vrows}];
ygsum["NAFTA"] // MatrixForm;
ygregion[region_] := ygsum[region][[All, 1]];
ygNAFTA = ygregion["NAFTA"];
Print["ygNAFTA : ", TableForm[ygNAFTA,
TableHeadings -> {vrows}]];

Clear[csarray, csselect, cssumdomimp];


csarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT1"] -> "Ascending"}];
cssource = csarray["GVIEWRA", "AG06"];
csselect[array_, region_] := Select[array, #[[3]] == region &][[All, {1,
3}]];
cssumdomimp[region_] := Transpose[{ csselect[cssource, region][[All,
1]] /. x_ /; x -> as[x], csselect[cssource, region][[All, 2]]}];
cssumdomimp["NAFTA"] // MatrixForm;
csregion[region_] := cssumdomimp[region][[All, 1]];
csNAFTA = csregion["NAFTA"];
Print["csNAFTA : ", TableForm[csNAFTA,
TableHeadings -> {{"cap "}, {""}}]];

Clear[parray, pselect, psumdomimp];


parray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT1"] -> "Ascending"}];
psource = parray["GDATRA", "POP"];
pselect[array_, region_] := Select[array, #[[3]] == region &][[All, {1,
3}]];
psumdomimp[region_] := Transpose[{ pselect[psource, region][[All, 1]] /.
x_ /; x -> as[x], pselect[psource, region][[All, 2]]}];
psumdomimp["NAFTA"] // MatrixForm;
pregion[region_] := psumdomimp[region][[All, 1]];
pNAFTA = pregion["NAFTA"];
Print["pNAFTA : ", TableForm[pNAFTA,
TableHeadings -> {{"pop "}, {""}}]];

Comparison of U & V matrices with SAM for Mathematica
File: gtap_comparison_uv_amatrix.nb

(* Open Connection *)
<< DatabaseLink`
conn = OpenSQLConnection[]
Clear[as, varray, vselect, vsumdomimp];
as[a_] := If[a == {}, {0}, a];
varray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name];
(*vdpm=varray["VDPM"][[All,{1,3,4}]];*)
vsource = varray["GVIEWRA", "CM04"];
vrows = Union[varray["GVIEWRA", "CM04"][[All, 3]]];
vselect[array_, region_, component_] := Select[array, #[[4]] == region &&
#[[5]] == component &][[All, {1, 3}]];
vsumdomimp[region_] := vselect[vsource, region, "prodrev"];
vsumdomimp["NAFTA"] // MatrixForm;
vNAFTA = DiagonalMatrix[vsumdomimp["NAFTA"][[All, 1]]];
Print["vNAFTA : ", TableForm[vNAFTA,
TableHeadings -> {vrows, vrows}]];
Clear[uarray, uselect, usumdomimp, fsumdomimp];

uarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name,
SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
usource = uarray["GVIEWRA", "SF01"];
uselect[array_, region_, source_, component_, urows_, ucols_] :=
Select[array, MemberQ[urows, #[[3]]] && MemberQ[ucols, #[[4]]] &&
#[[5]] == region && #[[6]] == source && #[[7]] == component &]
[[All, {1, 3, 4}]];
usumdomimp[region_] := Transpose[{ uselect[usource, region, "domestic",
"mktexp", vrows, vrows][[All, 1]] /. x_ /; x -> as[x] + uselect[usource,
region, "imported", "mktexp", vrows, vrows][[All, 1]] /. x_ /; x ->
as[x], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
2]], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
3]]}];
usumdomimp["NAFTA"] // MatrixForm;
uNAFTA = Transpose[Partition[usumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["uNAFTA : ", TableForm[uNAFTA,
TableHeadings -> {vrows, vrows}]];

frows = Complement[Union[varray["GVIEWRA", "SF01"][[All, 3]]], vrows];


fsumdomimp[region_] := Transpose[{ uselect[usource, region, "domestic",
"mktexp", frows, vrows][[All, 1]] /. x_ /; x -> as[x] + uselect[usource,
region, "imported", "mktexp", frows, vrows][[All, 1]] /. x_ /; x ->
as[x], uselect[usource, region, "domestic", "mktexp", frows, vrows][[All,
2]], uselect[usource, region, "domestic", "mktexp", frows, vrows][[All,
3]]}];
fsumdomimp["NAFTA"] // MatrixForm;
fNAFTA = Transpose[ Partition[fsumdomimp["NAFTA"][[All, 1]],
Length[frows]]];
Print["fNAFTA factor inputs : ", TableForm[fNAFTA,
TableHeadings -> {frows, vrows}]]

gnpNAFTA = uNAFTA - Transpose[vNAFTA];


Print["uNAFTA - Inv_vNAFTA_Transpose : ", TableForm[gnpNAFTA,
TableHeadings -> {vrows, vrows}]]

aNAFTA = uNAFTA.Inverse[Transpose[vNAFTA]];
Print["aNAFTA technical matrix : ", TableForm[aNAFTA,
TableHeadings -> {vrows, vrows}]]

Clear[yarray, yselect, ysumdomimp];


yarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
ysource = yarray["GVIEWRA", "SF02"];
yselect[array_, region_, source_, component_, yrows_] := Select[array,
MemberQ[yrows, #[[3]]] && #[[4]] == region && #[[5]] == source && #[[6]]
== component &][[All, {1, 3}]];
ysumdomimp[region_] := Transpose[{ yselect[ysource, region, "domestic",
"mktexp", vrows][[All, 1]] /. x_ /; x -> as[x] + yselect[ysource, region,
"imported", "mktexp", vrows][[All, 1]] /. x_ /; x -> as[x],
yselect[ysource, region, "domestic", "mktexp", vrows][[All, 2]]}];
ysumdomimp["NAFTA"] // MatrixForm;
yNAFTA = ysumdomimp["NAFTA"][[All, 1]];
Print["yNAFTA : ", TableForm[yNAFTA,
TableHeadings -> {vrows}]];

Clear[garray, gselect, gsumdomimp];

garray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
gsource = garray["GVIEWRA", "SF03"];
gselect[array_, region_, source_, component_, yrows_] := Select[array,
MemberQ[yrows, #[[3]]] && #[[4]] == region && #[[5]] == source && #[[6]]
== component &][[All, {1, 3}]];
gsumdomimp[region_] := Transpose[{ gselect[gsource, region, "domestic",
"mktexp", vrows][[All, 1]] /. x_ /; x -> as[x] + gselect[gsource, region,
"imported", "mktexp", vrows][[All, 1]] /. x_ /; x -> as[x],
gselect[gsource, region, "domestic", "mktexp", vrows][[All, 2]]}];
gsumdomimp["NAFTA"] // MatrixForm;
gNAFTA = gsumdomimp["NAFTA"][[All, 1]];
Print["gNAFTA : ", TableForm[gNAFTA,
TableHeadings -> {vrows}]];

Clear[ygsum];
ygsum[region_] := Transpose[{ysumdomimp[region][[All, 1]] +
gsumdomimp[region][[All, 1]], vrows}];
ygsum["NAFTA"] // MatrixForm;
ygNAFTA = ygsum["NAFTA"][[All, 1]];
Print["ygNAFTA : ", TableForm[ygNAFTA,
TableHeadings -> {vrows}]];

Clear[csarray, csselect, cssumdomimp];


csarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT1"] -> "Ascending"}];
cssource = csarray["GVIEWRA", "AG06"];
csselect[array_, region_] := Select[array, #[[3]] == region &][[All, {1,
3}]];
cssumdomimp[region_] := Transpose[{ csselect[cssource, region][[All,
1]] /. x_ /; x -> as[x], csselect[cssource, region][[All, 2]]}];
cssumdomimp["NAFTA"] // MatrixForm;
csNAFTA = cssumdomimp["NAFTA"][[All, 1]];
Print["csNAFTA : ", TableForm[csNAFTA,
TableHeadings -> {{"cap "}, {""}}]];

Clear[parray, pselect, psumdomimp];


parray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT1"] -> "Ascending"}];
psource = parray["GDATRA", "POP"];
pselect[array_, region_] := Select[array, #[[3]] == region &][[All, {1,
3}]];
psumdomimp[region_] := Transpose[{ pselect[psource, region][[All, 1]] /.
x_ /; x -> as[x], pselect[psource, region][[All, 2]]}];
psumdomimp["NAFTA"] // MatrixForm;
pNAFTA = psumdomimp["NAFTA"][[All, 1]];
Print["pNAFTA : ", TableForm[pNAFTA,
TableHeadings -> {{"pop "}, {""}}]];

(* CHECK ON A MATRIX *)

Clear[vdarray, vdselect, vdsumdomimp, vdsumdomimp];


vdarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
vdsource = vdarray["GDATRA", "VDFM"];
vdselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[All, {1, 3,
4}]];
vdsumdomimp[region_] := Transpose[{ vdselect[vdsource, region, vrows,
vrows][[All, 1]] /. x_ /; x -> as[x], vdselect[vdsource, region, vrows,

vrows][[All, 2]], vdselect[vdsource, region, vrows, vrows][[All,
3]]}];
vdsumdomimp["NAFTA"] // MatrixForm;
vdNAFTA = Transpose[ Partition[vdsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["vdNAFTA : ", TableForm[vdNAFTA,
TableHeadings -> {vrows, vrows}]]

Clear[viarray, viselect, visumdomimp, visumdomimp];


viarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
visource = viarray["GDATRA", "VIFM"];
viselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[All, {1, 3,
4}]];
visumdomimp[region_] := Transpose[{viselect[visource, region, vrows,
vrows][[All, 1]] /. x_ /; x -> as[x], viselect[visource, region, vrows,
vrows][[All, 2]], viselect[visource, region, vrows, vrows][[All,
3]]}];
visumdomimp["NAFTA"] // MatrixForm;
viNAFTA = Transpose[ Partition[visumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["viNAFTA : ", TableForm[viNAFTA,
TableHeadings -> {vrows, vrows}]]

Clear[vdvisum];
vdvisum[region_] := Transpose[{vdsumdomimp[region][[All, 1]] +
visumdomimp[region][[All, 1]], vdsumdomimp[region][[All, 2]],
vdsumdomimp[region][[All, 3]]}];
vdvisum["NAFTA"] // MatrixForm;
vdviNAFTA = Transpose[Partition[vdvisum["NAFTA"][[All, 1]],
Length[vrows]]];
Print["vdviNAFTA : ", TableForm[vdviNAFTA,
TableHeadings -> {vrows, vrows}]];

Clear[enarray, enselect, ensumdomimp, ensumdomimp];


enarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
ensource = enarray["GDATRA", "VFM"];
enselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[All, {1, 3,
4}]];
ensumdomimp[region_] := Transpose[{ enselect[ensource, region, frows,
vrows][[All, 1]] /. x_ /; x -> as[x], enselect[ensource, region, frows,
vrows][[All, 2]], enselect[ensource, region, frows, vrows][[All, 3]]}];
ensumdomimp["NAFTA"] // MatrixForm;
enNAFTA = Transpose[ Partition[ensumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["enNAFTA : ", TableForm[enNAFTA,
TableHeadings -> {frows, vrows}]]

Clear[fbarray, fbselect, fbsumdomimp, fbsumdomimp];


fbarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
fbsource = fbarray["GDATRA", "FBEP"];

fbselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[All, {1, 3,
4}]];
fbsumdomimp[region_] := Transpose[{fbselect[fbsource, region, frows,
vrows][[All, 1]] /. x_ /; x -> as[x], fbselect[fbsource, region, frows,
vrows][[All, 2]], fbselect[fbsource, region, frows, vrows][[All, 3]]}];
fbsumdomimp["NAFTA"] // MatrixForm;
fbNAFTA = Transpose[ Partition[fbsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["fbNAFTA : ", TableForm[fbNAFTA,
TableHeadings -> {frows, vrows}]]

Clear[ftarray, ftselect, ftsumdomimp, ftsumdomimp];


ftarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
ftsource = ftarray["GDATRA", "FTRV"];
ftselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[ All, {1, 3,
4}]];
ftsumdomimp[region_] := Transpose[{ftselect[ftsource, region, frows,
vrows][[All, 1]] /. x_ /; x -> as[x], ftselect[ftsource, region, frows,
vrows][[All, 2]], ftselect[ftsource, region, frows, vrows][[All, 3]]}];
ftsumdomimp["NAFTA"] // MatrixForm;
ftNAFTA = Transpose[ Partition[ftsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["ftNAFTA : ", TableForm[ftNAFTA,
TableHeadings -> {frows, vrows}]]

Clear[isarray, isselect, issumdomimp, issumdomimp];


isarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
issource = isarray["GDATRA", "ISEP"];
isselect[array_, region_, urows_, ucols_, source_] := Select[array,
MemberQ[urows, #[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &&
#[[6]] == source &][[All, {1, 3, 4, 6}]];
issumdomimp[region_] := Transpose[{ isselect[issource, region, vrows,
vrows, "domestic"][[All, 1]] /. x_ /; x -> as[x] + isselect[issource,
region, vrows, vrows, "imported"][[All, 1]] /. x_ /; x -> as[x],
isselect[issource, region, vrows, vrows, "domestic"][[All, 2]],
isselect[issource, region, vrows, vrows, "domestic"][[All, 3]]}];
issumdomimp["NAFTA"] // MatrixForm;
isNAFTA = Transpose[ Partition[issumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["isNAFTA : ", TableForm[isNAFTA,
TableHeadings -> {vrows, vrows}]]

(*Clear[osarray,osselect,ossumdomimp,ossumdomimp];
osarray[array_,name_]:=SQLSelect[conn,array,SQLColumn["HEADNAME"]==
name,SortingColumns->{SQLColumn["ELEMENT2"]-
>"Ascending",SQLColumn[ "ELEMENT1"]->"Ascending"}];
ossource=ftarray["GDATRA","OSEP"];
osselect[array_,region_,urows_]:=Select[array,MemberQ[urows,#[[3]]]&&#[[4
]]==region&][[All,{1,3}]];
ossumdomimp[region_]:=Transpose[{osselect[ossource,region,vrows]
[[All,1]]/.x_/;x->as[x],osselect[ossource,region,vrows][[All,2]]}];
ossumdomimp["NAFTA"]//MatrixForm;
osNAFTA={Transpose[Partition[ossumdomimp["NAFTA"]
[[All,1]],Length[vrows]]]};

Print["osNAFTA :
",TableForm[osNAFTA,TableHeadings->{{" "},vrows}]]*)

Clear[tfarray, tfselect, tfsumdomimp, tfsumdomimp];


tfarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name,
SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
tfsource = tfarray["GDATRA", "TFRV"];
fromcous = Union[tfsource[[All, 5]]];
tfselect[array_, region_, urows_, fromreg_] := Select[array,
MemberQ[urows, #[[3]]] && #[[4]] == region && #[[5]] == fromreg &][[All,
{1, 3}]];
tfsumdomimp[region_] := Transpose[{ Apply[Plus, Map[tfselect[tfsource,
region, vrows, #][[All, 1]] /. x_ /; x -> as[x] &, fromcous]],
tfselect[tfsource, region, vrows, region][[All, 2]]}];
tfsumdomimp["NAFTA"] // MatrixForm;
tfNAFTA = {Transpose[ Partition[tfsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]]};
Print["tfNAFTA : ", TableForm[tfNAFTA,
TableHeadings -> {{" "}, vrows}]]

Clear[vsarray, vsselect, vssumdomimp, vssumdomimp];


vsarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name,
SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
vssource = vsarray["GDATRA", "VTWR"];
fromcous = Union[vssource[[All, 6]]];
vsselect[array_, region_, urows_, ucols_, fromreg_] := Select[array,
MemberQ[urows, #[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &&
#[[6]] == fromreg &][[All, {1, 3}]];
vssumdomimp[region_] := Transpose[{ Apply[Plus, Map[vsselect[vssource,
region, vrows, vrows, #][[All, 1]] /. x_ /; x -> as[x] &, fromcous]],
vsselect[vssource, region, vrows, vrows, region][[All, 2]]}];
vssumdomimp["NAFTA"] // MatrixForm;
vsNAFTA = {Transpose[ Partition[vssumdomimp["NAFTA"][[All, 1]],
Length[vrows]]]};
Print["vsNAFTA : ", TableForm[vsNAFTA,
TableHeadings -> {{"serv"}, vrows}]]

Clear[totasum];
dim = {Max[Length[vrows]*Length[vrows], Length[frows]*Length[vrows]], 1};
totasum[region_] := Transpose[{Flatten[ SparseArray[Band[{1, 1}] ->
Thread[{vdvisum[region][[All, 1]]}], dim] + SparseArray[ Band[{1, 1}] ->
Thread[{ensumdomimp[region][[All, 1]]}], dim] + SparseArray[ Band[{1, 1}]
-> Thread[{fbsumdomimp[region][[All, 1]]}], dim] + SparseArray[ Band[{1,
1}] -> Thread[{ftsumdomimp[region][[All, 1]]}], dim] +
SparseArray[ Band[{1, 1}] -> Thread[{issumdomimp[region][[All, 1]]}],
dim] + (*SparseArray[Band[{1,1}]->Thread[{ossumdomimp[region][[All,
1]]}],dim]+*) SparseArray[ Band[{1, 1}] -> Thread[{tfsumdomimp[region]
[[All, 1]]}], dim] + SparseArray[ Band[{1, 1}] ->
Thread[{vssumdomimp[region][[All, 1]]}], dim]], ensumdomimp[region][[All,
2]], ensumdomimp[region][[All, 3]]}];
totasum["NAFTA"] // MatrixForm;
totaNAFTA = {Total[ Transpose[ Partition[totasum["NAFTA"][[All, 1]],
Length[vrows]]]]};

Print["totaNAFTA : ", TableForm[totaNAFTA,


TableHeadings -> {{" "}, vrows}]];

dataaNAFTA = vdviNAFTA.Inverse[DiagonalMatrix[Flatten[totaNAFTA]]];
Print["calc NAFTA technical matrix : ", TableForm[dataaNAFTA,
TableHeadings -> {vrows, vrows}]];

Print["compare aNAFTA technical matrix : ", TableForm[aNAFTA,


TableHeadings -> {vrows, vrows}]]

Greenhouse gas aggregation and creation of database within Mathematica
File: eghg_aggregate.nb

<< DatabaseLink`
conn1 = OpenSQLConnection["gtap3eghg"]
conn2 = OpenSQLConnection["gtap3res"]
Clear[mapping, positiona, positionb, from, to, map, mapuc];
istream = OpenRead["/home/stuart/Documents/gtap/GTPAg7/sntest01.agg"];
records = Select[ReadList[istream, Record, RecordSeparators -> "= "],
StringFreeQ[#, "!"] &];
mapping[n_] := Rest[StringSplit[records[[n]]]];
positiona[n_] := Flatten[Position[mapping[n], "&"]];
positionb[n_] := Rest[RotateRight[Join[{-1}, positiona[n]]]];
to[n_] := mapping[n][[positiona[n] + 1]];
from[n_] := mapping[n][[positionb[n] + 2]];
map[n_] := Thread[from[n] -> to[n]];
mapuc[n_] := Thread[ToUpperCase[from[n]] -> to[n]];
produnitmap = map[2];
regionmap = mapuc[4];
factormap = map[6];
othermap = {"HH" -> "demand", "Govt" -> "demand", "CGDS" -> "invest"};
remap = Join[produnitmap, regionmap, factormap, othermap];
mappedarray = SQLSelect[conn1, "RARRAY"] /. remap
ghg = Union[mappedarray[[All, 3]]];
commodities = Union[mappedarray[[All, 4]]];
produnits = Union[mappedarray[[All, 5]]];
regions = Union[mappedarray[[All, 6]]];
(*Total[Select[mappedarray,
#[[3]]=="CO2"&&#[[4]]=="ecoa"&&#[[5]]=="food"&&#[[6]]=="EU25"&]
[[All,1]]];*)

SQLDropTable[conn2, "EGHG"];
SQLCreateTable[conn2, SQLTable["EGHG"], {SQLColumn["RVALUE", DataTypeName
-> "FLOAT"], SQLColumn["HEADNAME", DataTypeName -> "VARCHAR", DataLength
-> 10], SQLColumn["ELEMENT1", DataTypeName -> "VARCHAR", DataLength ->
10], SQLColumn["ELEMENT2", DataTypeName -> "VARCHAR", DataLength -> 10],
SQLColumn["ELEMENT3", DataTypeName -> "VARCHAR", DataLength -> 10]
}];

SQLDelete[conn2, "EGHG"];
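(* aggregate: one row per (ghg, commodity, production unit, region)
   cell, totalling all mapped source records for that cell *)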
For[l = 1, l <= Length[regions], l++,
For[k = 1, k <= Length[produnits], k++,
For[j = 1, j <= Length[commodities], j++,
For[i = 1, i <= Length[ghg], i++,
SQLInsert[conn2, "EGHG", SQLColumnNames[conn2, SQLTable["EGHG"]]
[[All, 2]], {Total[Select[mappedarray, #[[3]] == ghg[[i]] &&
#[[4]] == commodities[[j]] && #[[5]] == produnits[[k]] && #[[6]] ==
regions[[l]] &][[All, 1]]], ghg[[i]], commodities[[j]], produnits[[k]],
regions[[l]]}]
];];];];

A7.3 Data mining the GTAP database in Mathematica
File: Gtapfunctions.m

BeginPackage["Gtapfunctions`",{"DatabaseLink`"}]
vrows::usage="vrows gives the commodity rows of the matrix."
frows::usage="frows gives the factor rows of the matrix."
regions::usage="regions gives the regions in the dataset."
vregion::usage="vregion[n] gives the V matrix."
uregion::usage="uregion[n] gives the U matrix."
iregion::usage="iregion[n] gives the Investment matrix."
fregion::usage="fregion[n] gives the Factor matrix."
gnpregion::usage="gnpregion[n] gives the U-Transpose[V] matrix."
aregion::usage="aregion[n] gives the A matrix."
imregion::usage="imregion[n] gives the Import matrix."
exregion::usage="exregion[n] gives the Export matrix."
txregion::usage="txregion[n] gives the export transport margins."
yregion::usage="yregion[n] gives the Household demand matrix."
gregion::usage="gregion[n] gives the Government demand matrix."
ygregion::usage="ygregion[n] gives the combined Household & Government
demand matrix."
biregion::usage="biregion[n] gives the bias of U-V+C+I+G+X-M."
csregion::usage="csregion[n] gives the Capital Stock."
pregion::usage="pregion[n] gives the Population."
eyregion::usage="eyregion[n] gives the combined Household energy demand
matrix."
eexregion::usage="eexregion[n] gives the Energy bilateral trade matrix."
euregion::usage="euregion[n] gives the firms' purchases of Energy."
gfgregion::usage="gfgregion[n] gives the firms' production of greenhouse
gases."
gygregion::usage="gygregion[n] gives the combined Household & Government
production of greenhouse gases."
gigregion::usage="gigregion[n] gives the investment production of
greenhouse gases."

Begin["`Private`"]
conn=OpenSQLConnection["gtap3res"];
Clear[varray,vselect,vsumdomimp];
(*as[a_]:=If[a=={},{0},a];*)
(* the varray is different to others because V needs to be at market
prices, including output taxes *)
varray[array_,name_]:=SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name];
(*vdpm=varray["VDPM"][[All,{1,3,4}]];*)
vsource=varray["GVIEWRA","CM04"];
regions=Union[varray["GVIEWRA","CM04"][[All,4]]];
vrows=Union[varray["GVIEWRA","CM04"][[All,3]]];
vselect[array_,region_,component_]:= Select[array,#[[4]]==region &&
#[[5]] == component&][[All,{1,3}]];
vsumdomimp[region_]:= Transpose[{ vselect[vsource,region,"prodrev"]
[[All,1]] + vselect[vsource,region,"outtax"][[All,1]],
vselect[vsource,region,"prodrev"][[All,2]]}];
vregion[region_]:= DiagonalMatrix[vsumdomimp[region][[All,1]]];
(*vsumdomimp["NAFTA"]//MatrixForm
vNAFTA=vregion["NAFTA"];
Print["vNAFTA : ",TableForm[vNAFTA,TableHeadings
→ {vrows,vrows}]]; *)

Clear[uarray,uselect,usumdomimp];
uarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",

SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
usource=uarray["GVIEWRA","SF01"];
uselect[array_,region_,source_,component_,urows_,ucols_]:=
Select[array,MemberQ[urows,#[[3]]] && MemberQ[ucols,#[[4]]]&&#[[5]] ==
region&&#[[6]] == source&&#[[7]] == component&][[All,{1,3,4}]];
usumdomimp[region_]:=
Transpose[{ uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,1]] + uselect[usource,region,"imported","mktexp",vrows,vrows]
[[All,1]], uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,2]], uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,3]]}];
uregion[region_]:= Transpose[Partition[usumdomimp[region]
[[All,1]],Length[vrows]]];
(* usumdomimp["NAFTA"]//MatrixForm;
uNAFTA= uregion["NAFTA"];
Print["uNAFTA : ",TableForm[uNAFTA,TableHeadings
→ {vrows,vrows}]]; *)

Clear[fsumdomimp];
frows = Complement[Union[varray["GVIEWRA","SF01"][[All,3]]],vrows];
fsumdomimp[region_]:=Transpose[{ uselect[usource,region,"domestic","mktex
p",frows,vrows][[All,1]] +
uselect[usource,region,"imported","mktexp",frows,vrows][[All,1]],
uselect[usource,region,"domestic","mktexp",frows,vrows][[All,2]],
uselect[usource,region,"domestic","mktexp",frows,vrows][[All,3]]}];
fregion[region_]:= Transpose[Partition[fsumdomimp[region]
[[All,1]],Length[frows]]];
(* fsumdomimp["NAFTA"]//MatrixForm;
fNAFTA=fregion["NAFTA"];
Print["fNAFTA factor inputs : ",TableForm[fNAFTA,TableHeadings
→ {frows,vrows}]] *)

gnpregion[region_]:=uregion[region]-Transpose[vregion[region]];
(* gnpNAFTA=gnpregion["NAFTA"];
Print["uNAFTA - Inv_vNAFTA_Transpose :
",TableForm[gnpNAFTA,TableHeadings → {vrows,vrows}]] *)

aregion[region_]:=uregion[region].Inverse[Transpose[vregion[region]]];
(* aNAFTA=aregion["NAFTA"];
Print["aNAFTA technical matrix : ",TableForm[aNAFTA,TableHeadings
→ {vrows,vrows}]] *)

Clear[txarray,txselect,txsumdomimp];
txarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"]-
>"Ascending"}];
txsource=txarray["GVIEWRA","CM01"];
txselect[array_,region_,urows_]:= Select[array,MemberQ[urows,
#[[3]]]&&#[[5]] == region&&#[[4]] == "trans"&][[All,{1,3}]];
txsumdomimp[region_]:=Transpose[{ txselect[txsource,region,vrows]
[[All,1]], txselect[txsource,region,vrows][[All,2]]}];
txregion[region_]:=txsumdomimp[region][[All,1]];

Clear[exarray,exselect,exsumdomimp];
(* the exarray is different to others because it needs to be at world
prices, including export taxes & transport so use the CIF disposition *)
exarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
exsource=exarray["GVIEWRA","BI03"];

tocous=Union[exsource[[All,5]]];
exselect[array_,region_,urows_,toreg_,component_]:=
Select[array,MemberQ[urows,#[[3]]]&&#[[4]] == region&&#[[5]] ==
toreg&&#[[6]] == component&][[All,{1,3}]];
exsumdomimp[region_]:=Transpose[{ Apply[Plus,Map[exselect[exsource,region
,vrows,#,"fob"][[All,1]]&,tocous]] +
Apply[Plus,Map[exselect[exsource,region,vrows,#,"trans"]
[[All,1]]&,tocous]], exselect[exsource,region,vrows,region,"fob"]
[[All,2]]}];
exregion[region_]:= Flatten[Transpose[Partition[exsumdomimp[region]
[[All,1]],Length[vrows]]]];
(*exsumdomimp["NAFTA"]//MatrixForm;
exNAFTA=exregion["NAFTA"];
Print["exNAFTA :
",TableForm[exNAFTA,TableHeadings → {{" "},vrows}]]*)

Clear[imarray,imselect,imsumdomimp];
imarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
imsource=imarray["GVIEWRA","BI02"];
fromcous=Union[imsource[[All,4]]];
imselect[array_,region_,urows_,toreg_]:=
Select[array,MemberQ[urows,#[[3]]]&&#[[5]] == region&&#[[4]] ==
toreg&&#[[6]] == "impcost"&][[All,{1,3}]];
imsumdomimp[region_]:=
Transpose[{ Apply[Plus,Map[imselect[imsource,region,vrows,#]
[[All,1]]&,fromcous]], imselect[imsource,region,vrows,region][[All,2]]}];
imregion[region_]:= Flatten[Transpose[Partition[imsumdomimp[region]
[[All,1]],Length[vrows]]]];
(*imsumdomimp["NAFTA"]//MatrixForm;
imNAFTA=imregion["NAFTA"];
Print["imNAFTA :
",TableForm[imNAFTA,TableHeadings → {{" "},vrows}]]*)

Clear[yarray,yselect,ysumdomimp];
yarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
ysource = yarray["GVIEWRA","SF02"];
yselect[array_,region_,source_,component_,yrows_]:=
Select[array,MemberQ[yrows, #[[3]]] && #[[4]] == region&&#[[5]] ==
source&&#[[6]] == component&][[All,{1,3}]];
ysumdomimp[region_]:=Transpose[{ yselect[ysource,region,"domestic","mktex
p",vrows][[All,1]] + yselect[ysource,region,"imported","mktexp",vrows]
[[All,1]], yselect[ysource,region,"domestic","mktexp",vrows][[All,2]]}];
yregion[region_]:=ysumdomimp[region][[All,1]];
(* ysumdomimp["NAFTA"]//MatrixForm;
yNAFTA=yregion["NAFTA"];
Print["yNAFTA : ",TableForm[yNAFTA,TableHeadings
→ {vrows}]]; *)

Clear[garray,gselect,gsumdomimp];
garray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
gsource=garray["GVIEWRA","SF03"];
gselect[array_,region_,source_,component_,yrows_]:=
Select[array,MemberQ[yrows, #[[3]]]&&#[[4]] == region&&#[[5]] ==
source&&#[[6]]==component&][[All,{1,3}]];

gsumdomimp[region_]:=Transpose[{ gselect[gsource,region,"domestic","mktex
p",vrows][[All,1]] + gselect[gsource,region,"imported","mktexp",vrows]
[[All,1]], gselect[gsource,region,"domestic","mktexp",vrows][[All,2]]}];
gregion[region_]:=gsumdomimp[region][[All,1]];
(* gsumdomimp["NAFTA"]//MatrixForm;
gNAFTA=gregion["NAFTA"];
Print["gNAFTA : ",TableForm[gNAFTA,
TableHeadings → {vrows}]]; *)

Clear[ygsum];
ygsum[region_]:= Transpose[{ysumdomimp[region][[All,1]] +
gsumdomimp[region][[All,1]],vrows}];
ygregion[region_]:= ygsum[region][[All,1]];
(* ygsum["NAFTA"]//MatrixForm;
ygNAFTA=ygregion["NAFTA"];
Print["ygNAFTA : ",TableForm[ygNAFTA,
TableHeadings → {vrows}]]; *)

Clear[csarray,csselect,cssumdomimp];
csarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT1"] → "Ascending"}];
cssource=csarray["GVIEWRA","AG06"];
csselect[array_,region_]:= Select[array,#[[3]] == region&][[All,{1,3}]];
cssumdomimp[region_]:= Transpose[{ csselect[cssource,region][[All,1]],
csselect[cssource,region][[All,2]]}];
csregion[region_]:= cssumdomimp[region][[All,1]];
(* cssumdomimp["NAFTA"]//MatrixForm;
csNAFTA=csregion["NAFTA"];
Print["csNAFTA :
",TableForm[csNAFTA,TableHeadings → {{"cap "},{""}}]]; *)

Clear[parray,pselect,psumdomimp];
parray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT1"] → "Ascending"}];
psource=parray["GDATRA","POP"];
pselect[array_,region_]:= Select[array,#[[3]] == region&][[All,{1,3}]];
psumdomimp[region_]:= Transpose[{ pselect[psource,region][[All,1]],
pselect[psource,region][[All,2]]}];
pregion[region_]:= psumdomimp[region][[All,1]];
(*psumdomimp["NAFTA"]//MatrixForm;
pNAFTA=pregion["NAFTA"];
Print["pNAFTA : ",TableForm[pNAFTA,TableHeadings
→ {{"pop "},{""}}]];*)

Clear[iarray,iselect,isumdomimp];
iarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
isource= iarray["GVIEWRA","SF01"];
iselect[array_,region_,source_,component_,urows_,ucols_]:=
Select[array,MemberQ[urows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]] ==
region && #[[6]] == source && #[[7]] == component&][[All,{1,3,4}]];
isumdomimp[region_]:=Transpose[{ iselect[isource,region,"domestic","mktex
p",vrows,{"CGDS"}][[All,1]] +
iselect[isource,region,"imported","mktexp",vrows,{"CGDS"}][[All,1]],
iselect[isource,region,"domestic","mktexp",vrows,{"CGDS"}][[All,2]],
iselect[isource,region,"domestic","mktexp",vrows,{"CGDS"}][[All,3]]}];
(*iregion[region_]:= Transpose[Partition[isumdomimp[region][[All,1]],
Length[vrows]]];*)
iregion[region_]:= isumdomimp[region][[All,1]];
(*iNAFTA=iregion["NAFTA"]
isumdomimp["NAFTA"]//MatrixForm

Print["iNAFTA : ",TableForm[iNAFTA,TableHeadings
→ {vrows,"CGDS"}]];*)

Clear[biregion];
biregion[region_]:= Total[uregion[region]-Transpose[vregion[region]],{2}]
+ ygregion[region] + iregion[region] + exregion[region] -
imregion[region];
(*biregion["NAFTA"]*)

Clear[eyarray,eyselect,eysumdomimp];
eyarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
eysource = eyarray["GVOLERA","EVH"];
erows = Union[eyarray["GVOLERA","EVH"][[All,3]]];
eyselect[array_,region_,yrows_]:= Select[array,MemberQ[yrows,#[[3]]] &&
#[[4]] == region&][[All,{1,3}]];
eysumdomimp[region_]:= eyselect[eysource,region,erows];
(*the following form is required to cope with null values *)
eyregion[region_]:= Table[Apply[Plus,Select[eysumdomimp[region],#[[2]] ==
i&][[All,1]]],{i,erows}];
(*eyregion["NAFTA"]//MatrixForm
eyNAFTA=eyregion["NAFTA"];
Print["eyNAFTA :
",TableForm[eyNAFTA,TableHeadings → {erows}]];*)

Clear[eexarray,eexselect,eexsumdomimp];
(* the eexarray *)
eexarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
eexsource=eexarray["GVOLERA","EVT"];
tocous=Union[eexsource[[All,5]]];
eexselect[array_,region_,yrows_,tocous_]:=
Select[array,MemberQ[yrows,#[[3]]] && #[[4]] == region && #[[5]] ==
tocous&][[All,{1,3}]];
eexsumdomimp[region_]:=Transpose[{ Apply[Plus,Map[eexselect[eexsource,reg
ion,erows,#][[All,1]]&,tocous]], eexselect[eexsource,region,erows,region]
[[All,2]]}];
(*the following form is required to cope with null values *)
eexregion[region_]:= Table[Apply[Plus,Select[eexsumdomimp[region], #[[2]]
== i&][[All,1]]], {i,erows}];
(*eexsumdomimp["NAFTA"]//MatrixForm
eexNAFTA=eexregion["NAFTA"];
Print["eexNAFTA :
",TableForm[eexNAFTA,TableHeadings → {erows,{" "}}]]*)

Clear[eimarray,eimselect,eimsumdomimp];
(* the eimarray *)
eimarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
eimsource = eimarray["GVOLERA","EVT"];
fromcous= Union[eimsource[[All,4]]];
eimselect[array_,region_,yrows_,fromcous_]:=
Select[array,MemberQ[yrows,#[[3]]] && #[[5]] == region && #[[4]] ==
fromcous&][[All,{1,3}]];
eimsumdomimp[region_]:=
Transpose[{Apply[Plus,Map[eimselect[eimsource,region, erows,#]
[[All,1]]&,fromcous]], eimselect[eimsource,region,erows,region]
[[All,2]]}];

(*the following form is required to cope with null values *)
eimregion[region_]:= Table[Apply[Plus,Select[eimsumdomimp[region],
#[[2]]==i&][[All,1]]],{i,erows}];
(*eimsumdomimp["NAFTA"]//MatrixForm
eimNAFTA=eimregion["NAFTA"];
Print["eimNAFTA :
",TableForm[eimNAFTA,TableHeadings → {erows,{" "}}]]*)

Clear[euarray,euselect,eusumdomimp];
euarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
eusource=euarray["GVOLERA","EVF"];
euselect[array_,region_,erows_,ucols_]:=
Select[array,MemberQ[erows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]] ==
region&][[All,{1,3,4}]];
eusumdomimp[region_]:= euselect[eusource,region,erows,vrows];
(*the following form is required to cope with null values *)
euregion[region_]:=
Transpose[Table[Apply[Plus,Select[eusumdomimp[region], #[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,vrows},{j,erows}]];
(*eusumdomimp["NAFTA"]//MatrixForm
euNAFTA=euregion["NAFTA"];
Print["euNAFTA :
",TableForm[euNAFTA,TableHeadings → {erows,vrows}]];*)

Clear[gfgarray,gfgselect,gfgsumdomimp];
gfgarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
gfgsource = gfgarray["EGHG","CO2"];
ghgrows = Union[gfgarray["EGHG","CO2"][[All,3]]];
gfgselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]]
== region&][[All,{1,3,4}]];
gfgsumdomimp[region_]:= gfgselect[gfgsource,region,ghgrows,vrows];
(*the following form is required to cope with null values *)
gfgregion[region_]:=
Transpose[Table[Apply[Plus,Select[gfgsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,vrows},{j,ghgrows}]];
(*gfgsumdomimp["NAFTA"]//MatrixForm
gfgNAFTA=gfgregion["NAFTA"];
Print["gfgNAFTA :
",TableForm[gfgNAFTA,TableHeadings → {ghgrows,vrows}]];*)

Clear[gygarray,gygselect,gygsumdomimp];
gygarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
gygsource = gygarray["EGHG","CO2"];
gygcols = {"demand"};
gygselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]]
== region&][[All,{1,3,4}]];
gygsumdomimp[region_]:= gygselect[gygsource,region,ghgrows,gygcols];
(*the following form is required to cope with null values *)
gygregion[region_]:=
Flatten[Table[Apply[Plus,Select[gygsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,gygcols},{j,ghgrows}]];
(*gygsumdomimp["NAFTA"]//MatrixForm

gygNAFTA=gygregion["NAFTA"];
Print["gygNAFTA :
",TableForm[gygNAFTA,TableHeadings → {ghgrows,gygcols}]];*)

Clear[gigarray,gigselect,gigsumdomimp];
gigarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
gigsource=gigarray["EGHG","CO2"];
gigcols={"invest"};
gigselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] &&
#[[5]]==region&][[All,{1,3,4}]];
gigsumdomimp[region_]:= gigselect[gigsource,region,ghgrows,gigcols];
(*the following form is required to cope with null values *)
gigregion[region_]:=
Flatten[Table[Apply[Plus,Select[gigsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,gigcols},{j,ghgrows}]];
(*gigsumdomimp["NAFTA"]//MatrixForm
gigNAFTA=gigregion["NAFTA"];
Print["gigNAFTA :
",TableForm[gigNAFTA,TableHeadings → {ghgrows,gigcols}]];*)

End[]
EndPackage[]

A7.4 Data mining Mathematica's country database for population growth
using GTAP aggregations

File: Gtapaggregation.m

BeginPackage["Gtapaggregation`"]
aggregions::usage="aggregions gives the input and output regions."
wgtpopgrowth::usage="wgtpopgrowth gives weighted population growth of
regions in the aggregation file."
Begin["`Private`"]
Clear[mapping,positiona,positionb,from,to,map,mapuc,mapuc2,regionmap,popu
lation,popgrowth,wgtpopgrowth];
istream=OpenRead["/home/stuart/Documents/gtap/GTPAg7/sntest01.agg"];
records=Select[ReadList[istream,Record,RecordSeparators->" = "],
StringFreeQ[#,"!"]&];
mapping[n_]:=Rest[StringSplit[records[[n]]]];
positiona[n_]:=Flatten[Position[mapping[n],"&"]];
positionb[n_]:=Rest[RotateRight[Join[{-1},positiona[n]]]];
to[n_]:=mapping[n][[positiona[n]+1]];
from[n_]:=mapping[n][[positionb[n]+2]];
(*map[n_]:=Thread[from[n]->to[n]];
mapuc[n_]:=Thread[ToUpperCase[from[n]]->to[n]];
produnitmap=map[2];regionmap=mapuc[4];factormap=map[6];*)
mapuc2[n_]:=Thread[{ToUpperCase[from[n]],to[n]}];
regionmap[n_]:=Select[mapuc2[4],#[[2]]==n&][[All,1]];
aggregions=Map[{#,regionmap[#]}&,Union[to[4]]];
population[r_,n_]:=CountryData[aggregions[[r,2,n]],"Population"];
popgrowth[r_,n_]:=CountryData[aggregions[[r,2,n]],"PopulationGrowth"];

wgtpopgrowth[m_]:= Sum[population[m,n]*popgrowth[m,n],
{n,Length[aggregions[[m,2]]]}] / Sum[population[m,n],
{n,Length[aggregions[[m,2]]]}];
nonrowcou=Quiet[Thread[CountryData[Flatten[Map[regionmap[#]&,Rest[RotateR
ight[Union[to[4]]]]]]]]];
rowcou1=Complement[CountryData["Countries"],nonrowcou];
rowcou2=Map[{#,CountryData[#,"PopulationGrowth"]}&,rowcou1];
rowcou3=Select[rowcou2,NumericQ[#[[2]]]&][[All,1]];
rowpop=Total[Map[CountryData[#,"Population"]&,rowcou3]];
rowwgt=Total[Map[CountryData[#,"Population"]*CountryData[#,"PopulationGro
wth"]&,rowcou3]];
wgtpopgrowth[Length[aggregions]]:=rowwgt/rowpop;
End[]
EndPackage[]
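
A minimal usage sketch of this package's exports follows. It is illustrative
only: it assumes the aggregation file path hard-coded above is valid and that
Mathematica's CountryData is available.

<< Gtapaggregation.m
aggregions[[All, 1]]          (*names of the aggregate output regions*)
(1 + wgtpopgrowth[1])^10 - 1  (*decade-equivalent weighted growth of region 1*)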

A7.5 Appendix references


Hertel, T. & Walmsley, T.L., 2008. GTAP: Chapter 1: Introduction, Center for
Global Trade Analysis, Purdue University. Available at:
https://www.gtap.agecon.purdue.edu/databases/v7/v7_doco.asp
[Accessed November 14, 2008].

Hertel, T.W., 1999. Global Trade Analysis: Modeling and Applications,
Cambridge University Press.

Lee, H., 2008. An Emissions Data Base for Integrated Assessment of Climate
Change Policy Using GTAP, Center for Global Trade Analysis. Available
at: https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=1143 [Accessed June 26, 2009].

McDonald, G.W. & Patterson, M.G., 2004. Ecological footprints and
interdependencies of New Zealand regions. Ecological Economics, 50(1-2),
49-67.

i GTAPAgg was contributed to GTAP by Mark Horridge, Centre of Policy Studies,
Monash University, Melbourne, Australia.
ii GEMPACK is general equilibrium modelling software developed at the Centre
of Policy Studies, Monash University, Melbourne, Australia.

Appendix 8 The Sceptre Model
Sceptre is an acronym for Spatial Climate-Economic Policy Tool for Regional
Equilibria.

A8.1 Sceptre Model Flowchart

Illustration 45: Sceptre model flowchart

A8.2 Sceptre Mathematica Code
File: m12_13p_2C_100.nb

Read data mining, acyclic processor and graphical utilities
<< Topofunctions.m
<< Gtapfunctions.m
<< Gtapaggregation.m
<< PlotLegends`
(*DECADE MODEL*)
(*greenhouse gas emissions are included as an additional production
sector with amelioration cost based on backstop technology assumption*)
(*this version includes mitigated ghg as well as pure permit trading*)
(*This model returned to full non linear optimisation in one step using
the ability to quickly solve the Model with defined parameters. A
nonlinear constraint is also used for the damages feedback loop.*)

Clear arrays & set key control parameters


Clear[pop, popg, pops, u, v, inv, a, kap, lab, exim, deficit, ivector,
bias, kendowment, labempl, lendowment, \[Gamma]vector, svector, zvector,
dvector, \[Mu]vector, invest, investv, ninvvec, \[Delta]1yr, \[Delta],
s2a1yr, s2avect, mvector, tatvec, alvect, utilfn, pre1, pre2, pre3, obj1,
obj2, obj3, model1, model2, model3, post1, post2, post3, ineqcons1,
ineqcons2, ineqcons3, eqcons1, eqcons2, eqcons3];
periods = 13; \[Rho] = (1 + 0.04)^10 - 1; \[Delta]1yrav = 0.04; unemp = 0.065;
(*to convert Gg (=1,000 tonnes) of CO2 equivalent to gigatonnes of
carbon equivalent, divide by gg2gtc*)
gg2gtc = (44/12)*(10^9/10^3);
Project exogenous population growth


(*initial population*)
pop[0, n_] := pop[0, n] = pregion[regions[[n]]];
(*initial population growth per decade*)
popg[0, n_] := popg[0, n] = (1 + wgtpopgrowth[n])^10 - 1;
popg[m_, n_] := popg[0, n]*Exp[-0.341 m];
(*calibrated to reduce growth rates for asymptotic population of 8.6
billion in 10 decades*)
(*pop[m_,n_]:=pop[m-1,n]*(1+popg[m,n]);popt[m_]:=popt[m]=Sum[pop[m,n],
{n,cou}];popt[10]*)
pops[m_, n_] := pops[m, n] = pops[m - 1, n]*(1 + popg[m, n]);
pops[0, n_] := pops[0, n] = 1;
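
The exponential damping of popg above slows growth towards the calibrated
asymptote. A toy check, assuming a hypothetical initial decade growth rate
of 12 per cent (the names popgToy and popsToy are illustrative, not part of
the model):

popgToy[m_] := 0.12*Exp[-0.341 m];
popsToy[0] = 1;
popsToy[m_] := popsToy[m] = popsToy[m - 1]*(1 + popgToy[m]);
popsToy[10]  (*cumulative growth factor after ten decades, roughly 1.3*)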

Read aggregated GTAP U and V matrices


(*include "gaml" for greenhouse gas amelioration and "gtra" for ghg
carbon permit trading*)
vrows2 = Flatten[Append[vrows, {"gaml", "gtra"}]]; ucols =
Length[vrows2]; urows = Length[vrows2]; vcols = urows; cou =
Length[regions];

Include carbon trading & abatement in data arrays
mnfccom = Flatten[Position[vrows2, "mnfc"]];
nontradcom = Flatten[Position[vrows2, "serv"]];
gamlcom = Flatten[Position[vrows2, "gaml"]];
gtracom = Flatten[Position[vrows2, "gtra"]];
a[0, n_] := a[0, n] = PadRight[ygregion[regions[[n]]], ucols]*10;
inv[0, n_] := inv[0, n] = PadRight[iregion[regions[[n]]], ucols]*10;
u[0, n_] := u[0, n] = PadRight[uregion[regions[[n]]], {urows, ucols}]*10;
v[0, n_] := v[0, n] = PadRight[vregion[regions[[n]]], {Length[vrows2],
vcols}]*10;

Read aggregated GTAP endowments


kap[0, n_] := kap[0, n] = PadRight[fregion[regions[[n]]][[1]], ucols]*10;
lab[m_, n_] := lab[m, n] = PadRight[fregion[regions[[n]]][[2]],
ucols]*10;

Read aggregated GTAP bilateral trade (net exim)


exim[0, n_] := exim[0, n] = PadRight[MapAt[0 &, exregion[regions[[n]]] -
imregion[regions[[n]]], nontradcom], {ucols}]*10;
exim[m_, n_] := exim[m, n] = ReplacePart[exim[0, n], {gamlcom -> If[n ==
1, cou - 1, -1], gtracom -> If[n == 1, cou - 1, -1]}];
ivector[m_, n_] := Table[1, {p, urows}];

Calculate trade deficit and taxation bias


deficit[m_, n_] := deficit[m, n] = Min[Total[exim[m, n]]*pops[m, n], 0];
bias[m_, n_] := bias[m, n] = (u[0, n] - Transpose[v[0, n]]).ivector[0, n]
+ a[0, n] + inv[0, n] + exim[0, n];
(*note that a bias is required for financial balance and represents
import less export taxes*)
kendowment[0, n_] := kendowment[0, n] = Total[kap[0, n]];
labempl[m_, n_] := labempl[m, n] = Total[lab[m, n]] ;
lendowment[m_, n_] := lendowment[m, n] = labempl[m, n] pops[m, n]/(1 -
unemp);

Arrays for optimisation parameters including damages & abatement proportion
\[Gamma]vector[m_, n_] := Table[\[Gamma][m, n], {urows}];
svector[m_, n_] := Table[s[m, n, p], {p, urows}];
zvector[m_, n_] := Table[z[m, n, p], {p, urows}];
(*MapAt[0&,Table[z[m,n,p],{p,urows}],nontradcom]*)
dvector[m_, n_] := ReplacePart[ivector[m, n]*dam[m], {gamlcom -> 1,
gtracom -> 1}];
\[Mu]vector[m_, n_] := ReplacePart[Table[\[Mu][m, n, p], {p, urows}],
{gamlcom -> 0, gtracom -> 0}];
investv[m_, n_] := ReplacePart[Table[invest[m, n, p], {p, urows}],
{gamlcom -> 0, gtracom -> 0}];(*set investv=0 for greenhouse gases*)

Calculate initial assets, depreciation & sales-to-assets ratios
investv[0, n_] := investv[0, n] = ReplacePart[ivector[0, n], {gamlcom ->
0, gtracom -> 0}];
ninvvec[m_, n_] := ReplacePart[Table[ninv[m, n, p], {p, urows}], {gamlcom
-> 0, gtracom -> 0}];(*set ninvvec=0 for greenhouse gases*)
ninvvec[0, n_] := ninvvec[0, n] = Total[csregion[regions[[n]]]] * kap[0,
n]/ kendowment[0, n];(*single year figure*)
(*the following is required because sometimes depreciation is more than
investment:
ninvvec[0]=ninvvec[-1](1-\[Delta])+10*inv[0](1-\[Delta]/2), where
ninvvec[0]=ninvvec[-1] and ninvvec[-1] is eliminated*)
\[Delta]1yr[0, n_] := \[Delta]1yr[0, n] = Map[Min[#, \[Delta]1yrav] &, inv[0, n]/
(10*ninvvec[0, n] + inv[0, n]/2 + 10^-6)];
\[Delta][m_, n_] := \[Delta][m, n] = (1 + \[Delta]1yr[0, n])^10 - 1 (*10 \[Delta]1yr[0,n]*);
(*s2avect[m_,n_]:=(Transpose[v[0,n]].ivector[0,n])*(ivector[m,n]-\[Delta][m,n])/
(ninvvec[0,n]-inv[0,n]*(ivector[m,n]-\[Delta][m,n]/2));*)
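
A numeric sketch of the depreciation calibration under assumed magnitudes
(invToy and ninvToy are hypothetical): decade investment of 1,000 against a
single-year capital stock of 2,000 hits the 4 per cent annual cap, giving
about 48 per cent depreciation per decade.

invToy = 1000; ninvToy = 2000;
d1yrToy = Min[invToy/(10*ninvToy + invToy/2 + 10^-6), 0.04];
(1 + d1yrToy)^10 - 1  (*approximately 0.48 per decade after the cap*)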

(*PRE PROCESSING: exogenous variables*)


s2a1yr[0, n_] := s2a1yr[0, n] = Transpose[v[0, n]].ivector[0, n]*
(ivector[0, n] - \[Delta]1yr[0, n])/(10*ninvvec[0, n] - inv[0, n]*(ivector[0, n]
- \[Delta]1yr[0, n]/2) + 10^-6);
s2avect[m_, n_] := s2avect[m, n] = s2a1yr[0, n]*Sum[(1 + ((1 + popg[0,
n])^0.1 - 1))^i, {i, 1, 10}];(**)

Read Nordhaus' exogenous climate parameters


initialvals = {
...(*to convert GtC carbon content of the atmosphere to CO2 ppm, divide
by*) convppm -> 2.123,
(*initial growth rate of technology per decade*)ga0 -> 0.092,
(*initial growth rate of technology per decade*)ga[0] -> 0.092,
(*initial level of total factor productivity*)al[0] -> 1,
(*decline rate of technological change per decade*)dela -> 0.001,
(*estimated radiative forcing of non-carbon gases in 2000 & 2100*)
fex0 -> -0.06, fex1 -> 0.30,
(*emissions in control regime parameters for 2005, 2015, 2205*)
(*carbon emissions from land deforestation 2005 in GtC per decade*)
eland0 -> 11,
(*equivalent carbon growth parameter per year*) g\[Sigma]0 -> -0.0730,
(*decarbonisation per decade linear parameter*) d\[Sigma]1 -> 0.003,
(*decarbonisation per decade quadratic parameter*) d\[Sigma]2 -> 0.00,
(*fraction of emissions in control regime in 2005, 2015 & 2205*)
\[CurlyPhi]1 -> 1, \[CurlyPhi]2 -> 1, \[CurlyPhi]21 -> 1,
(*decline rate of participation in control regime*)\[CurlyPhi]d -> 0,
(*abatement cost control exponent*) \[Theta] -> 2.8,
(*backstop technology cost $'000 per tonne of carbon 2005*) pback ->
1.17,
(*backstop technology, final to initial cost ratio*) backrat -> 2,
(*backstop technology, initial rate of decline in cost per decade*)
gback -> 0.05,
(*CO2-equivalent emissions to GNP ratio 2005*) \[Sigma][0] -> 0.13418,
(*damage intercept calibrated for quadratic at 2.5 ˚C in 2105*) \[Psi]1 ->
0,
(*damage quadratic calibrated for quadratic at 2.5 ˚C in 2105*) \[Psi]2 ->
0.0028388,
(*damage exponent calibrated for quadratic at 2.5 ˚C in 2105*) \[Psi]3 ->
2,
(*pre-industrial (1750) carbon content of the atmosphere in GtC*)
mat1750 -> 596.4,
(*estimated forcing resulting from equilibrium CO2 doubling in watts
per m2*) \[Eta] -> 3.8,
(*CO2 concentration in atmosphere 2005 in GtC*) mat[0] -> 808.9,
(*CO2 concentration in upper strata of oceans 2005 in GtC*) mup[0] ->
1255,
(*CO2 concentration in lower strata of oceans 2005 in GtC*) mlo[0] ->
18365,
(*atmospheric temp change in ˚C from 1900 to 2000*) tat[0] -> 0.7307,
(*ocean lower strata temperature change in ˚C from 1900 to 2000*)
tlo[0] -> 0.0068,
(*climate-equation coefficient for upper level*) \[Xi]1 -> 0.22,
(*transfer coefficient upper to lower ocean stratum*) \[Xi]3 -> 0.3,
(*transfer coefficient for lower level of ocean*) \[Xi]4 -> 0.05,
(*equilibrium temperature impact of double CO2 ˚C*) t2xco2 -> 3,
(*damages at base year*) \[CapitalOmega]0 -> 0.99849,
\[Mu]0 -> 0.005 (*,[0]->0.66203,dam[1]->1*)};

Prepare climate Markov transformation functions


mvector[m_] := {mat[m], mup[m], mlo[m]};
mtransform = {{1, \[Phi]11, \[Phi]21, 0}, {0, \[Phi]12, \[Phi]22, \[Phi]32},
{0, 0, \[Phi]23, \[Phi]33}} /. Flatten[Solve[{\[Phi]12 == 0.189288,
\[Phi]23 == 0.05, \[Phi]11 == 1 - \[Phi]12,
\[Phi]21 == 587.473*\[Phi]12/1143.894, \[Phi]22 == 1 - \[Phi]21 - \[Phi]23,
\[Phi]32 == 1143.894*\[Phi]23/18340, \[Phi]33 == 1 - \[Phi]32}]];
(*note that the above matrix rows & columns are the reverse of the
indices, for consistency with DICE*)
tatvec[m_] := {tat[m], tlo[m]};
tatransform = {{1 - \[Xi]1 (\[Eta]/t2xco2 + \[Xi]3), \[Xi]1 \[Xi]3},
{\[Xi]4, 1 - \[Xi]4}} /. initialvals;
taforcing = {\[Xi]1, 0} /. initialvals;
alvect[m_, n_] := alvect[m, n] = ReplacePart[ivector[m, n]*al[m],
{gamlcom -> 1, gtracom -> 1}] /. initialvals;
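
A self-contained sketch of one decade of the three-reservoir carbon cycle,
using the same transfer coefficients solved above; the starting stocks match
the 2005 values while the decade emissions figure is illustrative.

phi12 = 0.189288; phi23 = 0.05; phi11 = 1 - phi12;
phi21 = 587.473*phi12/1143.894; phi22 = 1 - phi21 - phi23;
phi32 = 1143.894*phi23/18340; phi33 = 1 - phi32;
mt = {{1, phi11, phi21, 0}, {0, phi12, phi22, phi32}, {0, 0, phi23, phi33}};
state0 = {808.9, 1255, 18365};  (*GtC in atmosphere, upper & lower ocean*)
etotToy = 100;                  (*illustrative emissions in GtC per decade*)
mt.Prepend[state0, etotToy]     (*reservoir stocks one decade later*)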

Solve Nordhaus' exogenous equations


(*PRE PROCESSING*)
pre1[m_, n_] := {};(**)
pre2[m_] := {
(*exogenous radiative forcing per year for other ghg*)
fex[m] == fex0 + If[m < 12, 0.1*(fex1 - fex0)*(m - 1), 0.36],
(*emissions from deforestation land changes*)
eland[m] == eland0*(1 - 0.1)^m,
(*cumulative improvement in energy efficiency*)
g\[Sigma][m] == g\[Sigma]0*Exp[-d\[Sigma]1*10*(m - 1) - d\[Sigma]2*10*(m - 1)^2],
(*partfract, fraction of emissions in control regime, suspect WN
condition for stability \[CurlyPhi][1]=0.25372*)
\[CurlyPhi][m] == If[m == 1, 1, If[m < 25, \[CurlyPhi]21 + (\[CurlyPhi]2 -
\[CurlyPhi]21)*Exp[-\[CurlyPhi]d*(m - 2)], \[CurlyPhi]21]],
(*growth rate of productivity from start to period*)
ga[m] == ga0*Exp[-dela*10*(m - 1)],
(*total factor productivity*)
al[m] == al[m - 1]/(1 - ga[m]),
(*ratio of abatement cost with incomplete participation to that with
complete participation*)
\[CapitalPi][m] == \[CurlyPhi][m]^(1 - \[Theta]),
(*adjusted cost of backstop technology $'000/tC*)
\[CapitalTheta][m] == (pback(*/\[Theta]*))*(backrat - 1 + Exp[-gback*(m - 1)])/backrat
(*CO2 equivalent emissions output ratio*)
(*\[Sigma][m]==\[Sigma][m-1]/(1-g\[Sigma][m])*)
(*\[Theta]1[m]==\[CapitalTheta][m]*\[Sigma][m]*\[CurlyPhi][m]^\[Theta]*)
};
pre3 = {};
preext = Simplify[Flatten[{
Array[pre1, {periods, cou}],
Array[pre2, periods],
pre3}] /. initialvals];
If[Length[preext] == 0, Print["*** error in PARAMETER PRE PROCESSING
***"]];
prelhs = preext /. {a_ == b_ -> a}; prerhs = preext /. {a_ == b_ -> b};
prerhsresult = prerhs //. Thread[prelhs -> prerhs];
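
The prelhs/prerhs device above resolves an acyclic chain of definitions by
repeated substitution rather than a simultaneous solve. A minimal sketch of
the same idiom, with illustrative names:

eqsToy = {p[1] == 2, p[2] == p[1] + 3, p[3] == p[2]^2};
lhsToy = eqsToy /. {a_ == b_ -> a}; rhsToy = eqsToy /. {a_ == b_ -> b};
rhsToy //. Thread[lhsToy -> rhsToy]  (*returns {2, 5, 25}*)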

Combine solutions to Nordhaus exogenous parameters & equations
initialvals = Join[initialvals, Thread[prelhs -> prerhsresult]];
NotebookDelete[printtemp];
printtemp = If[Length[initialvals] == 0, PrintTemporary["**Error in
Parameter Pre Processing **"], PrintTemporary["Pre Processing
completed....commencing Objective solve...."]];

Prepare eliminated set of optimisation variables


(* MACROECONOMIC MODEL*)

modelinpvars = Complement[Union[Cases[Flatten[{
Array[svector, {periods, cou}],
Array[\[Gamma]vector, {periods, cou}],
Array[zvector, {periods, cou}],
Array[\[Mu]vector, {periods, cou}],
Array[\[Mu]a, {periods, cou}],
Array[\[Mu]i, {periods, cou}],
Array[dvector, {periods, cou}]
}], x_Symbol[_Integer ..], Infinity]],
Flatten[Map[Table[s[m, 1, #], {m, periods}] &, Flatten[{1,
gamlcom}]]],
Flatten[Table[z[m, 1, p], {m, periods}, {p, urows}]],
Flatten[Map[Table[z[m, n, #], {m, periods}, {n, 2, cou}] &,
Flatten[{nontradcom, gamlcom, gtracom}]]]];
(*remove \[Mu]i from the optimisation variables because the values are zero*)
optimvars = Complement[modelinpvars, Flatten[Array[\[Mu]i, {periods, cou}]]];

Prepare objective function as net present value of expansion factors
(*OBJECTIVE: endogenous variables, written as equations that equal zero.
Note that vector form is used*)
utilfn[x_] := x; utilpars = {};
(*utilfn[x_]:=1-Chop[ua Exp[-uk x],10^-5];utilpars=FindFit[{{0.85,0.5},
{1.5,0.85}},utilfn[x],{ua,uk},x];*)
(*utilfn[x_]:=ua-ub/(uc+x);utilpars=FindFit[{{0.5,-10},{1,1},{2,
1.15}},utilfn[ux],{ua,ub,uc},ux];*)
(*utilfn[x_]:=ua (1-ub/x);utilpars=FindFit[{{0.5,-1000},
{1,1}},utilfn[ux],{ua,ub},ux];*)
obj1[m_, n_] :=(*utility normalised by population growth, use for all
periods*){
utility[m, n] - (utilfn[\[Gamma][m, n]/pops[m, n]] /. {utilpars})};
obj2[m_] :=(*net present value of utility*){
npvutility[m] - (npvutility[m + 1] + Sum[utility[m, n], {n, cou}])/(1
+ \[Rho])};
obj3 = {npvutility[periods + 1] - Sum[utility[periods, n], {n, cou}]/\[Rho]};
objext = Select[Simplify[Flatten[{
Array[obj1, {periods, cou}],
Array[obj2, {periods}],
obj3} /. initialvals]], ! NumericQ[#] &];
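
The recursion in obj2 discounts each decade's world utility at the decade
rate \[Rho], and obj3 closes the horizon with a perpetuity. A toy check
assuming unit utility per period (rhoToy and npvToy are illustrative names):

rhoToy = (1 + 0.04)^10 - 1;      (*decade discount rate, about 0.48*)
npvToy[periods + 1] = 1/rhoToy;  (*terminal perpetuity*)
npvToy[m_] := (npvToy[m + 1] + 1)/(1 + rhoToy);
npvToy[1]                        (*net present value over all periods*)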

Optionally check acyclic structure of objective function
(*USE THE FOLLOWING THREE LINES TO CHECK THE TOPOLOGICAL STRUCTURE OF THE
OBJECTIVE FUNCTION*)
(*objtopo=toponodes[objext/.Equal->Subtract/.Thread[modelinpvars->1]]; *)
(*Print["residual input variables: ",objtopo[[3]]];*)
(*If[Length[objinpvars]>0,printemp=PrintTemporary["Objective directed
acyclic graphs completed...."]];*)

objoutvars = Union[Cases[objext, x_Symbol[_Integer ..], Infinity]];


objvar = npvutility[1];
objcalcvars = Complement[objoutvars, modelinpvars];
objsolns = Flatten[Solve[Thread[objext == 0], objcalcvars]];
objfn = -objvar /. objsolns;
NotebookDelete[printtemp];
printtemp = If[Length[objsolns] == 0, PrintTemporary["**Error in
Objective Solve **"], PrintTemporary["Objective solve
completed....solving macroeconomic model...."]];

Generate functions for augmented consumption, investment, U & V matrices
(*PREPARE MACROECONOMIC MODEL MATRICES*)
a[m_, n_] := a[m, n] = ReplacePart[a[0, n], Flatten[{
gamlcom -> (Total[gygregion[regions[[n]]], 2]*10/gg2gtc)*\[Mu]a[m, n],
gtracom -> (Total[gygregion[regions[[n]]], 2]*10/gg2gtc)*(1 - \[Mu]a[m, n])
}]] /. initialvals;
inv[m_, n_] := inv[m, n] = ReplacePart[inv[0, n], Flatten[{
gamlcom -> (Total[gigregion[regions[[n]]], 2]*10/gg2gtc)*\[Mu]i[m, n],
gtracom -> (Total[gigregion[regions[[n]]], 2]*10/gg2gtc)*(1 - \[Mu]i[m, n])
}]] /. Thread[Flatten[Array[\[Mu]i, {periods, cou}]] -> 0] /.
initialvals;(*remove \[Mu]i since there is no data for investment ghg*)
u[m_, n_] := u[m, n] =
(*this replacement sets up the cost of ameliorating greenhouse gases by
industry. It assumes that the same output v is available but with
increased u due to climate damage: v-ku=(v-u)dam/\[CapitalOmega]0, so
ku=v-(v-u)dam/\[CapitalOmega]0. This assumption implies constant returns
to scale. Note also that labour is not subject to environmental damages*)
ReplacePart[
Transpose[v[0, n]] - (Transpose[v[0, n]] - u[0, n])*alvect[m, n]*
dvector[m, n]/\[CapitalOmega]0, Flatten[{
Map[{ Flatten[{#, gamlcom}] -> Total[(\[CapitalPi][m]*\[CapitalTheta][m]*
\[Mu][m, n, #]^(\[Theta] - 1))*(10^3 10^9/10^6)*
(Total[gfgregion[regions[[n]]]*10/gg2gtc][[#]]*\[Mu][m, n, #] +
a[m, n][[gamlcom]]*a[0, n][[#]]/Total[a[0, n], 2] + inv[m, n]
[[gamlcom]]*inv[0, n][[#]]/Total[inv[0, n], 2]), 2],
Flatten[{gamlcom, #}] -> (Total[gfgregion[regions[[n]]]]
[[#]]*10/gg2gtc)*\[Mu][m, n, #],
Flatten[{gtracom, #}] -> (Total[gfgregion[regions[[n]]]]
[[#]]*10/gg2gtc)*(1 - \[Mu][m, n, #])
} &, Complement[Range[urows], gamlcom, gtracom]]}]] /.
initialvals;
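
The gamlcom entries above price abatement at the backstop-based marginal
cost \[CapitalPi]\[CapitalTheta]\[Mu]^(\[Theta]-1). A numeric sketch under
assumed values (full participation, first period, 20 per cent abatement;
all names illustrative):

thetaToy = 2.8; pbackToy = 1.17; backratToy = 2; gbackToy = 0.05;
bigThetaToy = pbackToy*(backratToy - 1 + Exp[-gbackToy*(1 - 1)])/backratToy;
1*bigThetaToy*0.2^(thetaToy - 1)*10^3  (*about US$65 per tonne of carbon*)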

v[m_, n_] := v[m, n] = ReplacePart[v[0, n], Flatten[{


Flatten[{gamlcom, gamlcom}] -> Total[u[m, n][[gamlcom, All]], 2] +
Total[a[m, n][[gamlcom]], 2] + Total[inv[m, n][[gamlcom]], 2] +
Total[exim[m, n][[gamlcom]], 2],
Flatten[{gtracom, gtracom}] -> Total[u[m, n][[gtracom, All]], 2] +
Total[a[m, n][[gtracom]], 2] + Total[inv[m, n][[gtracom]], 2] +
Total[exim[m, n][[gtracom]], 2]
}]] /. initialvals;

Generate intertemporal MRIO symbolic model with carbon trading & abatement
(*MACROECONOMIC MODEL*)

model1[m_, n_] :=(*flows balance, use for all periods*){
(u[m, n] - Transpose[v[m, n]]).svector[m, n] + a[m, n]*\[Gamma]vector[m, n] +
inv[m, n]*investv[m, n] + exim[m, n]*zvector[m, n] - bias[m, n],
ninvvec[m, n] - ninvvec[m - 1, n]*(ivector[m, n] - \[Delta][m, n]) - inv[m,
n]*investv[m, n]*(ivector[m, n] - \[Delta][m, n]/2),
Map[zvector[m, n][[#]] &, nontradcom](*nontraded commodities: forces
the trade to zero*)
(*Map[investv[m,n][[#]]&, Flatten[{gamlcom,gtracom}]]*)(*greenhouse
gases: forces investment to zero*)
};(**)
model2[m_] :=(*net zero trade between countries*){Sum[
exim[m, n]*zvector[m, n], {n, cou}]};
model3 = {};
modelext = Select[Simplify[Flatten[{
Array[model1, {periods, cou}],
Array[model2, periods],
model3} /. initialvals]], ! NumericQ[#] &];
modeloutvars = Select[Union[Cases[modelext, x_Symbol[_Integer ..],
Infinity]],
!NumericQ[#] &];
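
model1 requires that net outputs plus final demand reproduce the base-year
bias vector. A toy two-sector sketch of the same balance (all numbers
hypothetical, names illustrative):

uToy = {{5, 2}, {1, 7}}; vToy = {{10, 0}, {0, 12}}; aToy = {3, 2};
biasToy = (uToy - Transpose[vToy]).{1, 1} + aToy;
Solve[Thread[(uToy - Transpose[vToy]).{s1, s2} + aToy - biasToy == 0],
 {s1, s2}]  (*returns s1 -> 1, s2 -> 1: base activity reproduces the base*)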

Optionally check acyclic structure of MRIO model


(*USE THE FOLLOWING FOUR LINES TO LOOK AT THE MODEL'S TOPOLOGICAL
STRUCTURE OF THE MACROECONOMIC MODEL*)
(*modeltopo=toponodes[modelext/.Equal->Subtract];*)
(*modelinpvars=modeltopo[[3]]*)
(*If[Length[modelinpvars]>0,printtemp=PrintTemporary["Model directed
acyclic graphs completed...."]];*)

Combine symbolic MRIO model with solutions to exogenous equations
modelcalcvars = Complement[modeloutvars, modelinpvars];
modelsolns = Flatten[Solve[Thread[modelext == 0], modelcalcvars]];

Optionally check endogenous model variables
(*USE THE FOLLOWING EIGHT LINES TO CHECK THE ENDOGENOUS MODEL VARIABLES*)
(*Expect the first Solve to return an svars error. The second Solve
should be error free.*)
(*modeltest=Select[modelext/.Thread[Cases[Array[\[Mu]vector,
{periods,cou}],x_Symbol[_Integer..],Infinity]-
>0.005]/.Thread[Flatten[Array[\[Mu]a,{periods,cou}]]-
>0.005]/.Thread[Flatten[Array[dam,periods]]->1],!NumericQ[#]&];
modelsolnstest=Flatten[Solve[Thread[modeltest==0]]];
modelsolnstestallvars=Union[Flatten[Map[Cases[modelsolnstest[[#]],x_Symbo
l[_Integer..],Infinity]&,Range[Length[modelsolnstest]]]]];
modelsolnstestoutvars=Sort[Map[First[Cases[modelsolnstest[[#]],x_Symbol[_
Integer..],Infinity]]&,Range[Length[modelsolnstest]]]];
modelsolntestinpvars=Complement[modelsolnstestallvars,modelsolnstestoutva
rs]
optimvarsadjusted=Select[optimvars/.Thread[Cases[Array[\[Mu]vector,
{periods,cou}],x_Symbol[_Integer..],Infinity]-
>0.005]/.Thread[Flatten[Array[\[Mu]a,{periods,cou}]]-
>0.005]/.Thread[Flatten[Array[dam,periods]]->1],!NumericQ[#]&]
modeltestchecksolve=Solve[Thread[modeltest==0],optimvarsadjusted];
modeltestexcessvars=Complement[modelsolntestinpvars,modelinpvars]
modeltestcheckvars=Complement[optimvarsadjusted,modelsolntestinpvars]*)

NotebookDelete[printtemp];
printtemp = If[Length[modelsolns] == 0, PrintTemporary["** error in Model
Solve **"], PrintTemporary["Macroeconomic model solve completed....post
processing...."]];

Generate Nordhaus endogenous symbolic climate model
(*POST PROCESSING DICE CLIMATE EQUATIONS*)
post1[m_, n_] := {};(**)

post2[m_] := {
(*eind is CO2-equivalent emissions GtC*)
eind[m] == Total[Sum[((Transpose[v[m, n]].svector[m, n]))[[gtracom]],
{n, cou}], 2],
(*etot is the sum of industrial and deforestation emissions*)
etot[m] == eland[m] + eind[m],
(*mat, mup & mlo are the Gt of carbon in the atmosphere and ocean strata*)
Thread[mvector[m] == mtransform.Prepend[mvector[m - 1], etot[m]]],
(*rforcing is radiative forcing in watts per m2*)
rforcing[m] == fex[m] + \[Eta] Log[2, (mat[m - 1] + mat[m])/(2*mat1750)],
(*tat and tlo are ˚C temperature changes in the atmosphere & lower
ocean*)
Thread[tatvec[m] == tatransform.tatvec[m - 1] +
taforcing*rforcing[m]],
(*\[CapitalOmega] is the damages multiplier of GNP*)
\[CapitalOmega][m] == 1/(1 + \[Psi]1*tat[m] + \[Psi]2*tat[m]^\[Psi]3)
};
post3 = {};
postext = Select[Flatten[{
Array[post1, {periods, cou}],
Array[post2, periods],
post3} /. initialvals], FreeQ[#, True | False] &];
postlhs = postext /. {a_ == b_ -> a};
postrhs = postext /. {a_ == b_ -> b};
postrhsresult = postrhs //. Thread[postlhs -> postrhs];

postrhsresult = postrhsresult //. modelsolns;
If[Length[postrhsresult] == 0, Print["*** error in POST PROCESSING
***"]];

Combine Nordhaus symbolic climate model with symbolic MRIO model & other
solutions
modelsolns = Join[modelsolns, Thread[postlhs -> postrhsresult]];
NotebookDelete[printtemp]; printtemp = PrintTemporary["Post processing
completed....preparing constraints"];

Prepare symbolic models for inequality & equality constraints
(*INEQUALITY CONSTRAINTS: note that the functions need to be threaded if
they are vectors*)
(*IMPORTANT NOTE: write all inequality constraints as m.x>=0 and omit
>=0*)
(*balance of payments, labour & sales2assets constraint*)
(*the first constraint places a price on carbon and the "-ve" sign is to
make ">=0" *)
(*note that the sales to assets constraint with s2avect is not applied to
the greenhouse gas equation where there is no stock of greenhouse
permits*)

ineqcons1[m_, n_] := {
Map[(ninvvec[m - 1, n]* s2avect[m, n] - (Transpose[v[m, n]].svector[m,
n]))[[#]] &, Complement[Range[urows], gamlcom, gtracom]],
exim[m, n].zvector[m, n] - deficit[m, n],
lendowment[m, n] - lab[m, n].svector[m, n],
lab[m, n].svector[m, n] - labempl[m, n]*\[Gamma][m, n],
Map[investv[m, n][[#]] &, Complement[Range[urows], gamlcom, gtracom]],
svector[m, n],
\[Mu]vector[m, n], 1 - \[Mu]vector[m, n],
\[Mu]a[m, n], 1 - \[Mu]a[m, n],
\[Gamma]vector[m, n]
};
(*Map[-(((u[m,n]-Transpose[v[m,n]]).svector[m,n])*dvector[m,n]*al[m]+
a[m,n]*\[Gamma]vector[m,n]+inv[m,n]*investv[m,n]+exim[m,n]*zvector[m,n]-
bias[m,n])[[#]]&,Flatten[{gamlcom,gtracom}]],*)
(*the first constraint following is to manage the ending inventories of
commodities except carbon*)
ineqcons2[n_] := {
Map[(investv[periods, n]*inv[periods, n] - investv[periods - 1,
n]*inv[periods - 1, n])[[#]] &, Complement[Range[urows], gamlcom,
gtracom]]
};
ineqcons3[m_] := If[m > 10, {tat[m - 1] - tat[m], eind[m - 1] - eind[m]},
{}];
(*After 100 years, temperature and emissions must continue to decline*)
ineqcons4 = {2.0 - tat[10]}; (*At 100 years, the temperature rise must not
exceed 2 ˚C*)
ineqconsext = Union[Flatten[{
Array[ineqcons1, {periods, cou}],
Array[ineqcons2, cou],
Array[ineqcons3, periods],
ineqcons4
}]];
NotebookDelete[printtemp]; printtemp =
PrintTemporary["Inequality constraints completed...."];
(*EQUALITY CONSTRAINTS*)
(*IMPORTANT NOTE: write all equality constraints as m.x==0 and omit ==0*)
eqcons1[m_, n_] := {};(**)
eqcons2[n_] := {};(**)
eqcons3[m_] = {\[CapitalOmega][m] - dam[m]}; (*calculated damage multiplier must equal
assumed parameter. Note that both \[CapitalOmega] and dam are internal endogenous
variables and not optimisation variables*)
eqconsext = Union[Flatten[{
Array[eqcons1, {periods, cou}],
Array[eqcons2, cou],
Array[eqcons3, periods]}]];
NotebookDelete[printtemp];
printtemp = PrintTemporary["Equality constraints completed...."];
constraintsorig = Select[Join[Thread[ineqconsext >= 0], Thread[eqconsext
== 0]] /.initialvals, FreeQ[#, True | False] &];
constraints = Select[constraintsorig //. modelsolns /. initialvals,
FreeQ[#, True | False] &];
(*Print["Ready for optimisation..."];*)

Optionally test optimisation with fixed parameters


(*OPTIMISATION*)
NotebookDelete[printtemp]; printtemp =
PrintTemporary["Optimising...."];
(*TEST OPTIMISATION WITH FIXED \[Mu] AND \[Mu]a*)
If[False, initvars2 = Join[
Thread[{Cases[optimvars, x_s | x_\[Gamma] | x_z], 1}],
Thread[{Cases[optimvars, x_dam], 0}]] /.initialvals;
constraints2 = Select[constraints /.Thread[Flatten[Array[\[Mu], {periods,
cou, urows}]] -> 0]/.Thread[Flatten[Array[\[Mu]a, {periods, cou}]] ->
0]/.initialvals, FreeQ[#, True | False] &];
optim = FindMinimum[{objfn, constraints2 //. modelsolns}, initvars2 (*,
MaxIterations->1000*)] // Timing;
Print[optim]
];

Optimise economic expansion under climate constraints
(*ITERATION VARIABLES*)
(*initvars=Thread[{optimvars,optimvars/.optimfinal}],*)
initvars = Join[
Thread[{Cases[optimvars, x_s], 1}],
Thread[{Cases[optimvars, x_\[Mu] | x_z], 1}],
Thread[{Cases[optimvars, x_dam], 0}],
Thread[{Cases[optimvars, x_\[Gamma] | x_\[Mu]a], 0}]] /.initialvals;

(*USE THE FOLLOWING THREE LINES FOR INITIAL CONSTRAINT CHECKING*)


If[False,
initrepl = Thread[initvars[[All, 1]] -> initvars[[All, 2]]];
NotebookDelete[printtemp];
printtemp = Print["initial unsatisfied constraints: ",
Length[Cases[constraints /. initrepl, False]], " out of ",
Length[constraints]];
If[False, Print["initial constraints: ", constraints]]];

(*MAIN NON LINEAR OPTIMISATION*)


If[True,
itercount = 0;

objfncurr = 0;
optimaccuracygoal = 4;
optimmaxiterations = 2000;
NotebookDelete[printtemp];
printtemp = PrintTemporary["iter ... " <> ToString[Length[constraints]]
<> " constraints with accuracy goal of " <> ToString[10^-
optimaccuracygoal // N] <> " & max iterations " <>
ToString[optimmaxiterations]];
(*optim=Monitor[FindMinimum[{objfn,constraints},initvars, AccuracyGoal-
>optimaccuracygoal, MaxIterations-
>optimmaxiterations,StepMonitor:>(itercount++;
objfnprev=objfncurr;objfncurr=objfn;)],
{itercount,ScientificForm[objfncurr ],ScientificForm[objfncurr-
objfnprev], Count[constraints, False]}]//Timing;*)

optim = Monitor[
FindMinimum[{objfn, constraints}, initvars, (*AccuracyGoal →
optimaccuracygoal,*) MaxIterations -> optimmaxiterations,
StepMonitor :> itercount++],
itercount] // Timing;
resultvars = optim[[2, 2]];
(*update initvars for manual repeat calculations if required*)
initrepl = Thread[initvars[[All, 1]] -> initvars[[All, 2]]];
initvars = Thread[{optimvars, optimvars /. resultvars}];
(*NOTIFY OUTPUT OF OPTIMISATION*)
NotebookDelete[printtemp];
Print["Nonlinear optimisation in ", Round[optim[[1]], 1], " seconds in
", Round[MaxMemoryUsed[]*10^-6], "mb memory with objective function
result of ", objfn /. resultvars // Short];
Print["Maximum iterations set to ", optimmaxiterations, " with ",
itercount, " used. Optimisation accuracy set to ", 10^-
optimaccuracygoal // N , "."];
Print["There are ", Length[resultvars] + Length[modelsolns], "
variables in total (or ", Length[resultvars] + Length[modelsolns] +
Length[initialvals], " with parameters)."];
Print["The ", Length[resultvars], " optimisation variables are: ",
If[False, resultvars, resultvars // Short]]
];

Optionally examine slacks


(*USE THE FOLLOWING LINES FOR FINAL CONSTRAINT CHECKING*)
If[True,
consvaluepre = constraints /. initrepl;
consvaluepost = constraints /. resultvars;
unsatcons = Flatten[Position[consvaluepost, False]];
slacks = constraints[[unsatcons]] /. {a_ >= b_ -> (a - b), a_ <= b_ ->
(b - a), a_ == b_ -> (a - b)} /. resultvars;
slackcutoff = 10^-4;
slackszero = Flatten[Position[Chop[slacks, slackcutoff], 0]];
slacksnz = Complement[Range[Length[slacks]], slackszero];
slackskey = unsatcons[[slacksnz]];
slackskeyvals = constraints[[slackskey]] /. {a_ >= b_ -> (a - b), a_ <=
b_ -> (b - a), a_ == b_ -> (a - b)} /. resultvars;
Print["There are ", Length[constraints], “ constraints. Following
optimisation ",
Length[Cases[consvaluepost, False]], " remain unsatisfied compared to ",
Length[Cases[consvaluepre, False]], " prior to optimisation."];
If[False,
Print["The slacks of the ", Length[Cases[consvaluepost, False]], "
unsatisfied constraints are ", slacks]];
If[Length[constraints] == Length[constraintsorig],

If[Length[slackskey] > 0,
If[Length[slackskey] == 1,
Print["The only key unsatisfied constraint with slack > ",
slackcutoff // N, " is ", Flatten[Thread[{constraintsorig[[slackskey]],
slackskeyvals}]]],
Print["The ", Length[slackskey], " key unsatisfied constraints with
slacks > ", slackcutoff // N, " are ",
Thread[{constraintsorig[[slackskey]], slackskeyvals}]]],
Print["All ", Length[Cases[consvaluepost, False]] " of the unsatisfied
constraints have slacks < ", slackcutoff // N]
],
Print["Cannot identify constraints with slacks because constraint
lengths vary ", Length[constraintsorig], " ", Length[constraints]]];
Table[Beep[]; Pause[0.5], {i, 5}];
Speak["Stuart, I now have the results you asked for, so come over here
and give me a hug!"]]
FrontEndExecute[FrontEndToken["Save"]]
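
The slack calculations above turn each relation into a signed difference by
pattern replacement. A minimal sketch of the idiom:

{x >= 2, y == 3} /. {a_ >= b_ -> (a - b), a_ == b_ -> (a - b)} /.
 {x -> 2.5, y -> 3}  (*returns {0.5, 0}: slack of 0.5, then exactly binding*)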

Data and graphical output


(*PLOT GRAPHS*)
modelsolns1 = modelsolns /. resultvars;
lhside = Map[
First[Cases[modelsolns1[[#]], x_Symbol[_Integer ..], Infinity]] &,
Range[Length[modelsolns1]]];
rhside = lhside /. modelsolns1;
modelsolns2 = Thread[lhside -> rhside];
optimfinal = Sort[Join[resultvars, modelsolns2]];

thiscase = "Max 2C @100yrs: ";


ListLinePlot[Transpose[Array[\[Gamma], {periods, cou}]] /. resultvars,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Expansion", "multiplier"]},
PlotLabel ->
Style[Framed[thiscase <> "expansion \[Gamma]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[\[Mu][m, n, 2], {m, periods}, {n, cou}]] /.
resultvars, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "Mfg abate \[Mu]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[Transpose[Array[\[Mu]a, {periods, cou}]] /. resultvars,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "amelioration \[Mu]a"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[{Array[eind, periods], Array[eland, periods]} //.
modelsolns2 /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Emissions", ""]},

PlotLabel ->
Style[Framed[thiscase <> ": emissions eind,eland"], Blue,
Background -> LightYellow], PlotLegend -> {"eind", "eland"},
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
Print[Round[Sum[eind[i], {i, 1, 5}]*3.67 /. optimfinal,
1], " Gt CO2 2000-2050 "]
Print[Round[Sum[eind[i], {i, 1, periods}]*3.67 /. optimfinal,
1], " Gt CO2 ", periods, " periods"]
ListLinePlot[{Array[tat, periods], Array[tlo, periods]} //.
modelsolns2, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Temperature", "rise \[Degree]C"]},
PlotLabel ->
Style[Framed[thiscase <> "temperature rise"], Blue,
Background -> LightYellow], PlotLegend -> {"tat", "tlo"},
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[{Array[mat, periods]/convppm} //. modelsolns2 /.
initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"], "ppm"},
PlotLabel ->
Style[Framed[thiscase <> "carbon in atmosphere (mat) ppm"], Blue,
Background ->
LightYellow](*,PlotLegend->{"mat"},LegendSize->{0.4,0.2},\
LegendShadow->{.02,-.02},LegendPosition->{-.7,-.1}*)]
ListLinePlot[{Array[rforcing, periods] //. modelsolns2},
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Watts/sqm", "from 1900"]},
PlotLabel ->
Style[Framed[thiscase <> ": radiative forcing"], Blue,
Background -> LightYellow]]
ListLinePlot[Array[\[CapitalOmega], {periods}] //. modelsolns2,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Damages", "multiplier"]},
PlotLabel ->
Style[Framed[thiscase <> "damages \[CapitalOmega]"], Blue,
Background -> LightYellow]]
optimfinal
FrontEndExecute[FrontEndToken["Save"]]

(*Spatial Plots*)
output=optimfinal;
ListLinePlot[
Transpose[Table[\[Mu][m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "Food abate \[Mu]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[Table[\[Mu][m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "Services abate \[Mu]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],

LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]

ListLinePlot[
Table[Total[Table[investv[m, n]*inv[0, n], {n, cou}], 2]/1000000, {m,
periods}] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "$trillion/decade"]},
PlotLabel ->
Style[Framed[thiscase <> "Investment"], Blue,
Background -> LightYellow]]
ListLinePlot[
Table[Total[Table[ninv[m, n, p], {p, urows - 2}, {n, cou}], 2]/
1000000, {m, periods}] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Capital", "$trillion"]},
PlotLabel ->
Style[Framed[thiscase <> "Capital"], Blue,
Background -> LightYellow]]

ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 1]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Food amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 2]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Mfg amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 3]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Services amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu]a[m, n]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},

PlotLabel ->
Style[Framed[thiscase <> "Consumpt. amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]

ListLinePlot[
Transpose[Table[s[m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, n, 2], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[
Table[v[m, n][[4, 4]]*s[m, n, 4] /. output, {m, periods}, {n,
cou}]], Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "amelioration & abatement"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[v[m, n][[5, 5]]*s[m, n, 5] /. output, {m, periods}, {n,
cou}]], Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "emission permits traded"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[Table[s[m, 1, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,

AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[1]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, 2, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[2]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, 3, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[3]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[
Table[(z[m, n, 1]*exim[m, n][[1]]/1000) /.
z[m, 1, 1] -> -Sum[
z[m, i, 1]*exim[m, i][[1]]/exim[m, 1][[1]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 2]*exim[m, n][[2]]/1000) /.
z[m, 1, 2] -> -Sum[
z[m, i, 2]*exim[m, i][[2]]/exim[m, 1][[2]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 3]*exim[m, n][[3]]/1000) /.
z[m, 1, 3] -> -Sum[
z[m, i, 3]*exim[m, i][[3]]/exim[m, 1][[3]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->

Style[Framed[thiscase <> "z " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[
Table[(z[m, n, 4]*exim[m, n][[4]]) /.
z[m, 1, 4] -> -Sum[
z[m, i, 4]*exim[m, i][[4]]/exim[m, 1][[4]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC amel", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[4]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 5]*exim[m, n][[5]]) /.
z[m, 1, 5] -> -Sum[
z[m, i, 5]*exim[m, i][[5]]/exim[m, 1][[5]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC permits", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[5]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[Table[invest[m, 1, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[1]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, 2, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[2]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, 3, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[3]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[Table[invest[m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, n, 2], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]

ListLinePlot[
Transpose[
Table[ninv[m, 1, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[1]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[ninv[m, 2, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[2]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[ninv[m, 3, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[3]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},

LegendPosition -> {-.7, -.3}]
FrontEndExecute[FrontEndToken["Save"]]

(*USE THE FOLLOWING LINES FOR BINDING CONSTRAINT CHECKING*)


consvaluepost = constraints /. resultvars;
satcons = Flatten[Position[consvaluepost, True]];
slacks = constraints[[satcons]] /. {a_ >= b_ -> (a - b),
a_ <= b_ -> (b - a), a_ == b_ -> (a - b)} /. resultvars;
slackcutoff = 10^-4;
slackszero = Flatten[Position[Chop[slacks, slackcutoff], 0]];
slackskey = satcons[[slackszero]];
slackskeyvals =
constraints[[slackskey]] /. {a_ >= b_ -> (a - b),
a_ <= b_ -> (b - a), a_ == b_ -> (a - b)} /. resultvars;
If[Length[constraints] == Length[constraintsorig],
Print["The binding constraints with slack below +/- ", slackcutoff,
" and slacks are: ",
Flatten[Thread[{constraintsorig[[slackskey]], slackskeyvals}]]
],
Print["Cannot identify constraints with slacks because constraint \
lengths vary ", Length[constraintsorig], " ", Length[constraints]]
];
FrontEndExecute[FrontEndToken["Save"]]

KKT multipliers
(*DUAL SOLUTION: using Karush-Kuhn-Tucker (KKT) conditions (Taha, 1982,
pp. 769-773)*)
(*This code is designed to cope with large scale optimisation results*)
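
Before the large-scale machinery below, a toy illustration of the KKT
stationarity condition it implements (lamToy is an illustrative name):
minimising x^2 + y^2 subject to x + y >= 1 gives x = y = 1/2, where the
gradient of the objective equals the multiplier times the constraint
gradient.

gradfToy = D[x^2 + y^2, {{x, y}}] /. {x -> 1/2, y -> 1/2};
gradgToy = D[x + y - 1, {{x, y}}];
Solve[Thread[gradfToy == lamToy*gradgToy], lamToy]  (*lamToy -> 1*)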
Clear[, limitfn, limit2, gradg2, h];
gradf = SparseArray[D[objfn, {optimvars}] /. output];
outputres = Thread[optimvars -> (optimvars /. output)];
outputnonres = Complement[output, outputres];
limit0 = Simplify[
constraints /. {a_ >= b_ -> (a - b), a_ <= b_ -> (b - a),
a_ == b_ -> (a - b)} /. outputnonres];
limit1 = Simplify[limit0 /. outputres];
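(*limit2 re-evaluates the constraint slacks after perturbing a single
decision variable by h, using the options list of the dummy symbol
limitfn as a mutable copy of the solution point*)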
limit2[z_] := Module[{},
Options[limitfn] = outputres;
SetOptions[limitfn,
optimvars[[z]] -> OptionValue[limitfn, optimvars[[z]]] + h];
Return[limit0 /. Options[limitfn]]
];
(*Since the objective's integrals may be non-analytic, use the limit
definition of the derivative*)
gradg2[z_] :=
SparseArray[
Limit[Chop[limit2[z] - limit1]/h,
h -> 0] /. {∞ -> 0, -∞ -> 0}];
(*Solving 500,000 derivative equations takes about 20 hours on 4 cores,
so use parallel processing*)
DistributeDefinitions[optimvars, outputres, limit0, limit1, limit2, gradg2]
gradg = Parallelize[Table[gradg2[z], {z, Length[optimvars]}]];
If[False, Print["gradg"]; Print[Normal[gradg]]];
(*The UnitStep is inserted because some constraints have small negative
slacks and are therefore set as binding*)
(*kkt=Chop[Flatten[FindInstance[Flatten[{
Thread[gradf-Array[,Length[constraints]].Transpose[\
SparseArray[gradg]]==0],
Thread[Pick[Array[,Length[constraints]],constraints/.{a_>=b_-\
>True,a_==b_->False}]>=0],

682
Thread[Chop[limit1]*UnitStep[Chop[limit1]]*Array[,Length[\
constraints]]==0]
}],Array[,Length[constraints]],Reals]]]*)
kktzerosub =
  Flatten[Solve[
    Thread[Chop[limit1, 10^-5] UnitStep[Chop[limit1, 10^-5]]*
       Array[λ, Length[constraints]] == 0]]];
kktnonzeros =
  Cases[Array[λ, Length[constraints]] /. kktzerosub,
   x_Symbol[_Integer], Infinity];
kktnonzero = FindInstance[Select[Flatten[{
Thread[
gradf - Array[, Length[constraints]].Transpose[
SparseArray[gradg]] == 0],
Thread[
Pick[(Array[, Length[constraints]]),
constraints /. {a_ >= b_ -> True, a_ == b_ -> False}] >= 0]
}] /. kktzerosub, FreeQ[#, True]], kktnonzeros, Reals, 20]
kkt = Union[kktzerosub, kktnonzero[[1]]];
(*Print KKT multipliers*)

If[True, Print["KKT multipliers: ", kkt]];


(*Print constraints with non-zero KKT multipliers*)
If[True,
If[Length[constraintsorig] == Length[constraints],
If[Length[kkt] > 0,
Print["Constraints with non-zero KKT multipliers: ",
  Cases[
   Table[{constraintsorig[[i]], (λ[i] /. kkt),
     limit1[[i]]}, {i, Length[constraintsorig]}], {a_, b_, c_} /;
    b != 0]
],
Print[
"Have not printed Constraints with non-zero KKT multipliers \
because the length of constraints vector is not equal to length of \
original constraints vector."
]]]];
(*Print Amelioration & Abatement KKT multipliers*)
If[True,
If[Length[constraintsorig] == Length[constraints],
If[Length[kkt] > 0,
Print["Amelioration & Abatement constraints & KKT multipliers: ",
  Cases[
   Table[{constraintsorig[[i]], (λ[i] /. kkt)}, {i,
     Length[constraintsorig]}], {a_, b_} /;
    Length[Select[Cases[a, x_Symbol[_Integer ..], Infinity],
       MemberQ[Complement[Flatten[Array[s, {periods, cou, 4}]],
          Flatten[Array[s, {periods, cou, 3}]]], #] &]] > 0]],
Print[
"Have not printed Amelioration & Abatement constraints & KKT \
multipliers because the length of constraints vector is not equal to \
length of original constraints vector."]
]]];
(*Print Emissions Permits KKT multipliers*)
If[True,
If[Length[constraintsorig] == Length[constraints],
If[Length[kkt] > 0,
Print["Emissions Permits constraints & KKT multipliers: ",
  Cases[
   Table[{constraintsorig[[i]], (λ[i] /. kkt)}, {i,
     Length[constraintsorig]}], {a_, b_} /;
    Length[Select[Cases[a, x_Symbol[_Integer ..], Infinity],
       MemberQ[Complement[Flatten[Array[s, {periods, cou, 5}]],
          Flatten[Array[s, {periods, cou, 4}]]], #] &]] > 0]],
  Print[
   "Have not printed Emissions Permits constraints & KKT multipliers \
because the length of constraints vector is not equal to length of \
original constraints vector."]
]]];
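(*The Complement[...] construction above isolates the decision
variables whose final index is exactly 5 (emissions permits), i.e.
those present in Array[s, {periods, cou, 5}] but absent from
Array[s, {periods, cou, 4}]. A small demonstration with illustrative
dimensions:
Complement[Flatten[Array[s, {2, 2, 3}]], Flatten[Array[s, {2, 2, 2}]]]
returns {s[1,1,3], s[1,2,3], s[2,1,3], s[2,2,3]}*)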
FrontEndExecute[FrontEndToken["Save"]]

A8.3 Appendix references:


Taha, H.A., 1987. Operations Research: An Introduction, 4th ed. Macmillan Publishing Company.

Colophon
This dissertation was prepared with OpenOffice 3.2.0.5 (the Novell Inc. build,
©Sun Microsystems Inc.) on openSUSE 11.2 ©Novell Inc. with the KDE 4.3.4
“release 2” Desktop ©KDE e.V. The font is Bitstream Vera Serif ©Bitstream
Inc., released under an open source agreement with the GNOME Foundation. The
literature research was undertaken through the University of Technology,
Sydney databases and other resources. References were recorded and inserted
using Zotero 2.0.rc5 ©Center for History and New Media, George Mason
University.

CD-ROM Attachment
The paper version of this thesis contains a CD-ROM with the following files.

stuart_nettleton_dissertation_files/
  gtap_specification_files/
    ☑ sntest01.agg  Aggregation specification
    ☑ sntest01.txt  Output of aggregation
  mathematica_utility_files/
    gtap_make_mathematica_db_03.nb  Make Mathematica database from GTAP
    gtap_comparison_uv_amatrix.nb  Due diligence functions
    eghg_aggregate.nb  Emissions aggregation functions
    Gtapfunctions.m  Database mining functions
    Gtapaggregation.m  Database aggregation functions
    Topofunctions.m  Acyclic processor functions
    gtap3res.script & gtap3res.m  GTAP aggregated database
    gtap3eghg.script & gtap3eghg.m  GTAP emissions database
    ☑ readme_utility_files.txt  Notes for placing database resources
  mathematica_model_files/
    m12_13p_2C_100.nb  Base Case of 2°C rise at 100 years
    m12_13p_2C_100_no_gaml_no_gtra.nb  Base Case with no emission permits or amelioration/abatement trading
    m12_13p_2C_100_s2a.nb  Base Case with impaired Sales/Asset
    m12_13p_2C_100_tcx2.nb  Base Case with 2x technology cost
    m12_13p_2C_100_tcx10.nb  Base Case with 10x increase in technology cost
    m12_13p_2C_100_tcx20.nb  Base Case with 20x increase in technology cost
    m12_13p_350_100.nb  Hansen/Gore/Tällberg 350 ppm Case
    m12_13p_450_100.nb  Previous world target of 450 ppm
    m12_13p_550_100.nb  Expected 550 ppm case
    m12_13p_full_amel.nb  Radical perspective case
    m12_13p_full_amel_no_cost.nb  Laissez faire case
    m12_13p_full_amel_no_cost_no_dam.nb  Sceptic Case
    m12_13p_normal.nb  Normal or “business as usual” case for comparison with Nordhaus' DICE
    topo_test12_comp_sceptre.nb  Nordhaus' DICE business as usual case
    ☑ readme_model_files.txt  Notes for running model files

Stuart Nettleton

Benchmarking
Climate Change Strategies
Under Constrained Resource Usage

University of Technology, Sydney

Dissertation Model Files v. m12


26 October 2009

