Sobre El Clima
Stuart Nettleton
12 February 2010
Trademarks and service marks. All trademarks, service marks, logos and
company names mentioned in this work are the property of their respective
owners. They are protected under trademark law and unfair competition law.
I certify that the work in this thesis has not previously been submitted for a
degree nor has it been submitted as part of the requirements for a degree
except as fully acknowledged within the text.
I also certify that the thesis has been written by me. Any help that I have
received in my research work and the preparation of the thesis itself has been
acknowledged. In addition, I certify that all information sources and literature
used are indicated in the thesis.
Deepak's Energy Policy Program has been a jewel of the Faculty of Engineering
and Information Technology for eighteen years. I was fortunate to participate
in the Energy Modelling module, which provided an excellent foundation in
Input Output analysis and resource infrastructure policy. This was
complemented by practical studies in Policy Research through the Australian
Technology Network (ATN) of Universities LEAP program.
Although my other mentors William Nordhaus and Thijs ten Raa would at
present know me only from published comments in the Mathematica support
group or through a few emails, they have had a profound influence on my
research. William Nordhaus' 2007 book A Question of Balance showed me best
practice in assessing climate change policies. Thijs ten Raa's 2006 book The
Economics of Input Output Analysis provided a refreshing and exciting
paradigm for computable general equilibrium modelling that elevated Input
Output Analysis to a completely new level. My head became filled with exciting new
ways to understand the world. My passion to do this led me to an almost
singular obsession in implementing Thijs ten Raa's concepts in a Nordhaus-like
intertemporal economic-climate model.
Daniel Lichtblau at Wolfram is another dear mentor who constantly amazed me
with his knowledge of Mathematica optimisation, his ability to obtain answers to
esoteric questions from the mysterious halls of Wolfram and his tireless
perseverance in responding to me as I pushed many of the limits of
Mathematica and its algorithms.
Lastly, the understanding of my family has made this work possible. My thanks
go to my children. To my daughter for deftly wielding her existential blade with
elegance and grace and to my son for his practical demonstration of that secret
of life that Herculean perseverance along with a good heart brings
extraordinary success.
Above all, the patient indulgence of my wonderful partner, Julie, has been
beyond the call of duty and has made this work worthwhile and enjoyable.
Julie was the first to read this dissertation and see those things my eyes passed
over too quickly. I am truly grateful and humbled by her confidence and ever
encouraging smile for me to persevere with yoga.
Table of Contents
Glossary................................................................................................................................... v
Preface.................................................................................................................................. xiii
Chapter 1 Introduction............................................................................................................ 1
1.1 Background.................................................................................................................. 1
1.2 Policy context................................................................................................................ 4
1.3 Equilibrium tools for policy research..........................................................................18
1.4 Research aims............................................................................................................. 31
1.5 Research methodology................................................................................................ 31
1.6 Scope of research....................................................................................................... 32
1.7 Significance of research............................................................................................. 35
1.8 Chapter references..................................................................................................... 38
Chapter 2 Political Economy of the Anglo-American economic world view...........................47
2.1 Origins of the American worldview.............................................................................47
2.2 International relations................................................................................................ 52
2.3 Precursors of the Anglo-American world view............................................................59
2.4 Unexpected failures in the Anglo-American world view.............................................88
2.5 Evolution of a new Anglo-American world view........................................................108
2.6 Threats to the evolution of a new Anglo-American world view.................................134
2.7 Conclusion................................................................................................................ 138
2.8 Chapter references................................................................................................... 143
Chapter 3 Political Economy of the Anglo-American world view of climate change............165
3.1 Background.............................................................................................................. 165
3.2 Climate change science development....................................................................... 167
3.3 Climate change policy development......................................................................... 186
3.4 American climate policy development...................................................................... 200
3.5 Models to manage the commons............................................................................... 221
3.6 Conclusion................................................................................................................ 233
3.7 Chapter references................................................................................................... 237
Chapter 4 Economic Models for Climate Change Policy Analysis........................................253
4.1 Survey of Computable General Equilibrium modelling literature.............................253
4.2 Survey of Input Output modelling............................................................................. 273
4.3 Survey of mathematical modelling platforms...........................................................287
4.4 Survey of data sources.............................................................................................. 294
4.5 Conclusion................................................................................................................ 297
4.6 Chapter references................................................................................................... 298
Chapter 5 A new spatial, intertemporal CGE policy research tool.......................................315
5.1 Sceptre model flowchart........................................................................................... 315
5.2 Model assumptions................................................................................................... 319
5.3 Comparison of Sceptre with physical modelling.......................................................370
5.4 Comparison of Sceptre and DICE............................................................................. 370
5.5 Conclusion................................................................................................................ 378
5.6 Chapter references................................................................................................... 379
Chapter 6 Assessment of changes in regional and industry performance under resource
constrained growth.............................................................................................................. 385
6.1 Policy investigation with the Sceptre tool.................................................................386
6.2 Conclusion................................................................................................................ 481
6.3 Chapter References.................................................................................................. 483
Chapter 7 Conclusion and Suggestions for Further Research.............................................485
7.1 Conclusion................................................................................................................ 485
7.2 Suggestions for further research.............................................................................. 490
7.3 Chapter References.................................................................................................. 495
Appendix 1 Climate change engagement in Australia.........................................................497
A1.1 Submission to Garnaut Review............................................................................... 497
A1.2 Australian climate change policy development.......................................................504
A1.3 Appendix References.............................................................................................. 515
Appendix 2 CGE Modelling.................................................................................................. 517
A2.1 Elementary CGE modelling.................................................................................... 517
A2.2 Economic Equivalence of Competitive Markets and Social Planning.....................526
A2.3 Appendix references............................................................................................... 533
Appendix 3 Input Output Tables.......................................................................................... 535
A3.1 Input Output tables from the Australian Bureau of Statistics.................................535
A3.2 Leontief Matrix....................................................................................................... 539
A3.3 World Multiregional Input Output Model...............................................................545
A3.4 Appendix references............................................................................................... 556
Appendix 4 Nordhaus DICE Model...................................................................................... 559
A4.1 Basic scientific model............................................................................................. 559
A4.2 Model equations..................................................................................................... 559
A4.3 Implementation issues............................................................................................ 566
A4.4 Appendix references............................................................................................... 584
Appendix 5 Acyclic Solver for Unconstrained Optimisation................................................587
A5.1 Overview................................................................................................................ 587
A5.2 Modelling factors that affect performance.............................................................587
A5.3 Phases of model development................................................................................ 597
A5.4 Appendix references............................................................................................... 613
Appendix 6 Benchmarking with Linear Programming.........................................................617
A6.1 Data envelopment analysis..................................................................................... 617
A6.2 Linear programming............................................................................................... 622
A6.3 Emission permits, amelioration and abatement......................................................633
A6.4 Intertemporal stocks and flows model....................................................................634
A6.5 Appendix references............................................................................................... 636
Appendix 7 Mining the GTAP Database............................................................................... 639
A7.1 Aggregating the GTAP7 database........................................................................... 639
A7.2 Creation of GTAP economic databases within Mathematica...................................645
A7.3 Data mining the GTAP database in Mathematica...................................................655
A7.4 Data mining Mathematica's country database for the population growth using GTAP
aggregations................................................................................................................... 661
A7.5 Appendix references............................................................................................... 662
Appendix 8 The Sceptre Model............................................................................................ 663
A8.1 Sceptre Model Flowchart....................................................................................... 663
A8.2 Sceptre Mathematica Code.................................................................................... 664
A8.3 Appendix references:.............................................................................................. 684
Colophon............................................................................................................................. 685
CD-ROM Attachment........................................................................................................... 687
Notes................................................................................................................................... 691
Glossary
Abbreviations
Acronym Meaning
ABARE Australian Bureau of Agricultural and Resource Economics
ABS Australian Bureau of Statistics
APEC Asia-Pacific Economic Cooperation
BAU Business As Usual
bbl Barrel of oil (159 litres)
BP Before (the) Present
Btu British Thermal Unit (about 1.06 x 10³ Joules)
C (degrees) Celsius
CCS Carbon capture & storage
CDM Clean Development Mechanism
CEPII Centre d'Etudes Prospectives et d'Information Internationales
CFCs Chlorofluorocarbons
Computed General Equilibrium (CGE): “Computed” means “ascertained or arrived at by calculation or computation; (also) performed or controlled by a computer; computerized” (OED, 2009). A Computed General Equilibrium (CGE) is the result, outcome, state or output of a “Computable General Equilibrium (CGE) model” following calculation or computation. This dissertation draws no distinction between models that are able to be computed and models that have been computed and are represented by their computed state. Therefore the terms “Computed General Equilibrium”, “Computable General Equilibrium” and the acronym “CGE” have the same meaning.
Computable General Equilibrium (CGE) model: “Computable” means “Capable of being computed, calculable; solvable or decidable by (electronic) computation” (OED, 2009). A Computable General Equilibrium (CGE) model is “A general equilibrium model of the economy so specified that all equations in it can be solved analytically or numerically. Computable general equilibrium models are used to analyse the economy-wide effects of changes in particular parameters or policies” (Black et al. 2009). See also Computed General Equilibrium (CGE) above.
CH4 Methane
CO2 Carbon Dioxide
CO2e Carbon Dioxide equivalent see Gt CO2
Cognitive Behavioural Therapy (CBT): “A cognitive therapy that is combined with behavioural elements (see behaviour therapy). The patient is encouraged to analyse his or her specific ways of thinking around a problem. The therapist then looks at the resulting behaviour and the consequences of that thinking and tries to encourage the patient to change his or her cognition in order to avoid adverse behaviour or its consequences. CBT is successfully used to treat phobias, anxiety, and depression (it is among the recommended treatments for anxiety and depression in the NICE guidelines)” (Martin, 2007)
COP Conference of the Parties (of the UNFCCC)
COP15 UNFCCC December 2009 meeting in Copenhagen, Denmark
CRS Constant returns to scale such that production can be increased or
decreased without affecting efficiency
CSIRO Australian Commonwealth Scientific and Industrial Research Organisation
Data Envelopment Analysis (DEA): A linear programming technique typically used to measure the technical (in)efficiency of decision making units compared to the units with best practice
DICE Dynamic Integrated Model of Climate and the Economy
DMU A decision making unit in Data Envelopment Analysis (DEA)
Effectiveness The extent to which outputs of service providers meet the objectives set
for them
Efficiency The degree to which the observed use of resources to produce outputs of a
given quality matches the optimal use of resources to produce outputs of a
given quality. This can be assessed in terms of technical efficiency
(conversion of physical inputs such as labour and materials into outputs),
allocative efficiency (whether inputs are used in the proportion which
minimises the cost of production) and dynamic efficiency (degree of
success in altering technology and products following changes in
consumer preferences or productive opportunities)
EU European Union
EU25 The twenty-five member countries of the EU in 2004, prior to its 2007 expansion to include Bulgaria and Romania
ETR Ecological/ Environmental Tax Reform
ETS Emissions trading scheme, which may be either a differential structure as
introduced in European Union countries, or an absolute structure where
emitters must purchase emissions permits in order to pollute the
atmosphere with greenhouse gases
g Grammes (grams)
G5 Major emerging economies, comprising Brazil, India, China, Mexico and
South Africa
G8 Group of 8, comprising Canada, France, Germany, Italy, Japan, Russia,
United Kingdom, United States
G20 Group of 20, comprising Argentina, Australia, Brazil, Canada, China,
France, Germany, India, Indonesia, Italy, Japan, Mexico, Russia, Saudi
Arabia, South Africa, South Korea, Turkey, United Kingdom, United States,
European Union. In September 2009, the G20 announced it would be the
world's peak economic policy body, replacing the G8.
gC Grams of Carbon (see GgC)
Gg Giga-grammes (grams)
GAMS General Algebraic Modelling System
GDP Gross Domestic Product
GHG Greenhouse Gases
Global warming potentials: CO2 (1), CH4 (21, although a recently detected reaction with aerosols now suggests 33), N2O (310), CF4 (6,500), C2F6 (9,200), SF6 (23,900), HFC-143a (3,800), HFC-23 (11,700), HFC-125 (2,800), HFC-134a (1,300), HFC-143a (3,800)
GJ Gigajoules
Gt Gigatonnes
GtC Gigatonnes of Carbon
Gt CO2 Gigatonne of CO2 (3.67 Gt CO2 has the same carbon content as 1 GtC. The
factor of 3.67 represents the ratio of the molecular weight of CO2 , which
is 44.009, to the atomic weight of carbon, which is 12.011, see Oak Ridge
National Laboratory (Carbon Dioxide Information Analysis Center 1990,
Table 3) and Clark (1982, p467))
GTAP Global Trade Analysis Project (Purdue University)
GTEM Global Trade & Environment Model
HCFC Hydro-Chloro-Fluoro-Carbon
HFC Hydro-Fluoro-Carbon
IEA International Energy Agency
IAEA International Atomic Energy Agency
IMAGE Integrated Model to Assess the Greenhouse Effect
IPCC Intergovernmental Panel on Climate Change, based in Geneva,
Switzerland. In 2007, the IPCC and Al Gore shared the Nobel Peace Prize
IRIO Interregional Input Output Model (see also MRIO)
Kyoto Protocol The Kyoto Protocol stems from a 1992 United Nations Conference on
Environment and Development in Rio de Janeiro (Brazil), which considered
climate change regulations and produced the United Nations Framework Convention
on Climate Change in the same year. In 1995, a
Conference of the Parties in Berlin proposed a new protocol to replace the
ambiguous agreement reached in 1992. In 1997, at the 3rd session of the
Conference of the Parties to the United Nations Framework Convention on
Climate Change in Kyoto, Japan, the Berlin proposal became the Kyoto
Protocol. Its target was that by 2008–2012 the net emissions of 6
greenhouse gases (CO2, CH4, N2O, HFC, PFC and SF6) would be reduced by
5.2% below the 1990 emission levels of these gases. While each signatory to
the Kyoto Protocol decides how it will implement the agreements of the
Treaty, the Kyoto Protocol offers mechanisms to achieve targeted
reductions in greenhouse gases including international and local emissions
trading schemes (ETS), emissions sinks (the development and
management of forests and agricultural soils), joint implementations
(where one company invests in another's facility and shares reductions in
emissions), clean development mechanisms (where companies invest in
reducing greenhouse pollution in developing countries), bubbling
(collectively attaining targets), etc.
Linear program: Programming algorithms to maximise or minimise an objective function subject to a set of linear mathematical constraints
MAD Mutually assured destruction
Mb millions of bytes of random access memory (RAM) or file size
MEF Major Economies Forum comprising Australia, Brazil, Canada, China,
Germany, the European Union, France, the United Kingdom, India,
Indonesia, Italy, Japan, Mexico, Russia, South Africa, South Korea and the
USA.
MJ Million (10⁶) Joules or Mega Joules
MBTU Thousand (10³) BTU (where M is the Roman numeral for one thousand)
MMBTU Million (10⁶) BTU
Moral Hazard “The observation that a contract which promises people payment on the
occurrence of certain events will cause a change in behaviour to make
these events more likely. For example, moral hazard suggests that if
possessions are fully insured, their owners are likely to take less good care
of them than if they were uninsured. The consequence is that insurance
companies cannot offer full insurance. Moral hazard results from
asymmetric information and is a cause of market failure” (Black et al.
2009)
MRIO Multiregional Input Output Model (see also IRIO)
NGO Non-Governmental Organisation
N2O Nitrous Oxide
NOAA United States of America National Oceanic & Atmospheric Administration
NOX Nitrogen Oxides
NPV Net Present Value
OECD Organisation for Economic Cooperation and Development
PCA Principal components analysis
PJ Peta Joule(s)
ppm Parts per million, used here as a measure of the concentration of
greenhouse gases in the atmosphere (1 ppm of CO2 in the atmosphere =
2.123 GtC in the atmosphere, which assumes an atmospheric mass of
5.137 × 10¹⁸ kg, see references for “Gt CO2”)
Principal-Agent Problem: “The problem of how person A can motivate person B to act for A's benefit rather than following self-interest. The principal, A, may be an employer
and the agent, B, an employee, or the principal may be a shareholder and
the agent a director of a company. The problem is how to devise incentives
which lead agents to report truthfully to the principal on the facts they
face and the actions they take, and to act for the principal's benefit.
Incentives include rewards such as bonuses or promotion for success, and
penalties such as demotion or dismissal for failure to act in the principal's
interests” (Black et al. 2009)
Prisoner's dilemma: “A two-player game that illustrates the conflict between private and social incentives, and the gains that can be obtained from making binding
commitments. The name originated from a situation of two prisoners who
must each choose between the strategies ‘Confess’ and ‘Don't confess’
without knowing what the other will choose. The important feature of the
game is that a lighter penalty follows for a prisoner who confesses when
the other does not. The game is summarized in the pay-off matrix where
the negative pay-offs can be interpreted as the disutility from
imprisonment” (Black et al. 2009)
Production frontier: A curve plotting the minimum inputs required to produce a given quantity of output
Productivity The ratio of physical output produced from the use of a quantity of inputs
(see also TFP)
quad Quadrillion BTU, equivalent to 1.055 x 10¹⁸ Joules
quadrillion One thousand million million (10¹⁵), i.e. Peta
R&D Research & Development
Sceptre model Spatial Climate Economic Policy Tool for Regional Equilibria (the model of
this doctoral research and described in this dissertation)
Slacks In a linear program solution, the extra amounts by which an input (output)
can be reduced (increased) to attain technical efficiency after all inputs
(outputs) have been reduced (increased) in equal proportions to reach the
production frontier
SO2 Sulphur Dioxide
SRES United Nations' IPCC “Special Report on Emissions Scenarios”
t Tonnes
TJ Terajoules
TFP Total Factor Productivity is the ratio of the quantity of all outputs
(weighted by revenue shares) to the quantity of all inputs (weighted by
cost shares)
UK United Kingdom
UKMO United Kingdom Meteorological Office
UN United Nations
UNFCCC United Nations Framework Convention on Climate Change, based in Bonn
USOSTP United States Office of Science and Technology Policy
USGCRP United States Global Change Research Program
USA or U.S. United States of America (America)
WHOSTP White House Office of Science and Technology Policy
WMO World Meteorological Organisation
WWF World Wildlife Fund
Mathematical Symbols
Symbol Meaning
∀ For each/all/any of
∂ Partial differential
∆ Difference
∈ Is an element of
∏ Cartesian product of
∑ Sum of
≤ Less than or equal to
≥ Greater than or equal to
Glossary references
(Carbon Dioxide Information Analysis Center 1990) (Clark 1982) (Black et al. 2009) (OED 2009) (Martin
2007)
Clark, W.C. ed., 1982. Carbon dioxide review: 1982, Clarendon, United
Kingdom: Oxford University Press.
Martin, E.A. ed., 2007. Concise Medical Dictionary. In Oxford Reference Online
Premium. Oxford University Press. Available at:
http://www.oxfordreference.com.ezproxy.lib.uts.edu.au/views/BOOK_SE
ARCH.html?book=t60 [Accessed February 5, 2010].
OED, 2009. OED Online. In The Oxford Dictionary of English. Oxford: Oxford
University Press. Available at:
http://dictionary.oed.com.ezproxy.lib.uts.edu.au/entrance.dtl [Accessed
February 5, 2010].
Preface
You must become the change you wish to see in the world.
Mahatma Gandhi
Three wildly divergent views about the future of the commodity price spiral were
circulating in Western markets. The first was that it was merely a trading bubble
driven by speculators and would burst. The second was that an emerging, prosperous
middle class in China and India had a ravenous demand for animal protein and cars.
This meant the world was now in a new era of high resource demand and prices.
However, the world would adapt, as
always. We could be sanguine because the world had previously coped with
similar dire exigencies and would produce the necessary resources, so
commodity prices would fall. The third scenario was that the world had
reached or passed peak-oil production and was in a new era of scarce food,
energy and metals, and this meant a continuation of rapidly increasing demand
and high prices.
Yet in April 2008 a new dimension, climate change, had emerged as a major market
factor in my home country, Australia. On its first day in office, Australia's new
Rudd Labor Government had ratified the Kyoto Protocol.
more and my business or livelihood be disadvantaged for some theoretical
concept called climate change?” Indeed, following a fuel revolt in 2000, the
Constitutional Court of France declared environmental taxes unconstitutional.
As a result, the European Union emissions trading scheme introduced in 2004
included neither a tax on carbon nor the requirement for companies to bid for
permits to pollute. To date, America has steadfastly refused to affirm the Kyoto
Protocol. At the time of concluding this thesis, its pending Waxman Markey Bill
is not as strong as legislation in the United Kingdom and European Union.
There remains much division over policies to ameliorate and abate the
consequences of global warming. As I was drawing this dissertation to a
conclusion in August 2009, the opposition party in Australia used its upper
house majority to vote down the Government's Carbon Pollution Reduction
Scheme.
The end of my dissertation has coincided with the 80th anniversary of the start
of the Great Depression. I reflect on the past few years and am amazed at the
dazzling panoply of world events in the period: China and India burst onto the
world stage as global leaders, America became a debtor nation with multi-
trillion dollar deficits, a global financial crisis of almost Great Depression
proportion came and went in just one year, as did a swine-flu pandemic.
Yet climate change policy remains in disarray as the United Nations' COP15
Copenhagen meeting approaches, Governments flounder and people remain
blasé or cognitively dissonant about scientific evidence. Climate sceptics
abound, mocking the melting of the Arctic, Greenland and Antarctic ice-caps
that threatens many metres of sea-level rise, a shift in the earth's axis of rotation,
widespread earthquakes, tsunamis and volcanic eruptions from shifting
tectonic plates and the release of methane deposits from the sea beds and
permafrost.
Every day, almost every newspaper carries the latest stories of my research
topic and the policy imbroglio in climate change. It is at the same time
satisfying and disturbing that my research into benchmarking climate change
policies remains poignant and needed.
Chapter 1 Introduction
1.1 Background
In the 200 years from the Industrial Revolution through to the 21st century,
technology, energy, the political economy of markets and democratic systems
delivered abundant food production, prosperity and rising standards of living. It has
truly been Homo sapiens' golden age of expansion. In Common Wealth:
Economics for a Crowded Planet (2008a), Jeffrey Sachs writes that following a
millennium of static productivity, output per person jumped one hundred-fold
while aggregate global output exploded from a negligible level to US$70
trillion in 2008.i
Policy makers have never had an easy task in resolving competing priorities for
increased living standards against the backdrop of increasing population. In
1800, global population was about 1 billion. It reached 2 billion by 1930 and,
notwithstanding World War II, 3 billion by 1960. It then took only 14 years to reach
4 billion in 1974, followed by 5 billion in 1987, 6 billion in 1999 and 6.7
billion in 2008. The United Nations projects that world population will exceed
9 billion by 2050.
Today, population and standards of living based on abundant energy from fossil
fuels, such as coal and oil, and its accompanying greenhouse gas pollution, are
leading to major world problems due to global warming. In November 2007,
scientists of the United Nations Intergovernmental Panel on Climate Change
(IPCC) confirmed that runaway greenhouse gas emissions had created a
situation where global warming was a real and pressing problem for the world
(IPCC 2007; Karoly 2007). The IPCC scientists warned of a 2°C to 6°C increase
in terrestrial atmospheric temperature between 2020 and 2080.
The unique imperative for climate policy and strategy is that decisions
implemented now will determine climate and environmental damage outcomes
in one hundred years' time, such as impacts on biodiversity, flooding and mass
human migration. These issues affect people of all nations, from those living in
the poor Bangladeshi river delta to rich financial hubs like New York, London
and Sydney, to name only a few.
It is realistic to hope that policy makers will deal with climate change. The
world has previously united to solve similar problems of chlorofluorocarbon
gases damaging the ozone layer and acid rain. In his advice to the United
Nations, Jeffrey Sachs outlined that our crises have solutions but require good
science and technology, population control and finding ways to live sustainably
with biodiversity and water production. He says (Sachs 2008b):
We have within reach solutions for all of these challenges, the irony
I should say …. is not that we are at an abyss …. it's almost the
opposite, we've unlocked the ability to promote economic
development in all parts of the world, we have at our hand the
ability to end extreme poverty, we have before us …. technologies to
replace dirty fossil fuels …. we have these things, the question is
whether we can bring knowledge to bear on these solutions, and
then find a common purpose on the planet.
Unfortunately, until quite recently, the IPCC's message about climate change
fell on deaf ears. Politically conservative governments in America, Canada and
Australia continued to renege on their December 1997 commitments to the
Kyoto Protocol and were embarrassed by Russia's ratification of the treaty that
brought it into effect.ii In fact governments of most Anglo-American countries,
with the exception of the United Kingdomiii, actively discredited scientific
arguments about climate change and in some cases actively subverted action
(Ayres 2001; Hamilton 2006; Sachs 2008a).
New governments have since been elected in Australia and America. It is
well known that Australia's Prime Minister Kevin Rudd, in his first act of office,
ratified the Kyoto Protocol on 3 December 2007. Furthermore, America's newly
elected President Obama appreciates that America is facing one of its greatest
ever challenges. His goal is to guide the American economy to sustainable
growth. President Obama unshackled the Environmental Protection Agency
(EPA) to deal with greenhouse gases as pollution and personally appealed for
American Congressmen to support the 2009 Waxman Markey Bill to mitigate
America's contribution to global warming.
With these political developments now behind us, the climate change debate
has moved from science into economics, technology and the competitive
strategies of nations, industries and businesses. If fossil fuel usage is
constrained by greenhouse gas emissions, all countries face the challenge of
using energy resources more efficiently. Industries and countries need to cope
with new commodity and factor substitutions between industries and between
countries. At the same time, national governments are still charged with
stewardship of their citizens' welfare. They need to balance policies for
improved standards of living, employment, utilisation of national endowments,
international trade, technology and security.
Climate change is therefore a major cross-disciplinary area of strategy and
policy. It involves macro and welfare economics, political economy, business
and industry strategies, security and warfare strategies, finance, valuation,
technology, climate science, operations research, game theory, philosophy,
sociology and psychology.
Examples may include Hurricane Katrina, aggravated El Niño effects and the
Iraq and Somali wars. According to the IPCC, worldwide sea levels rose 17cm
over the 20th century and are projected to rise by another 18-59 cm by 2100. If
Antarctica and Greenland thaw, the sea level rise could be as large as 75
metres.
The magnitude of the problem has prompted calls for a comprehensive set of
sustainability policies to address key risks across the environment, social
equity, economic futures and national culture. Lowe (2009, pp.1-4 & 19-20)
identifies the risks by asking questions about resources and social stability, for
example:
Policy needs
Cochran & Malone (1995) define the essence of policy as: “Public policy
consists of political decisions for implementing programs to achieve societal
goals.”
It is widely appreciated by the major economies of the world that they need to
join together with common policies to contain emissions. However, complexity
arises because equity is an important issue. Any policy response needs to
address a number of fundamental issues leading to different behaviour
amongst nations:
ways. For example, Australia's coal industry is threatened because coal
is the main polluting fossil fuel
• the world is now highly interlinked through trade. Exporting countries
such as China and Germany will suffer large loss of income if the
economies of their customers suffer climate change damage
• it is an unfortunate feature of international relations that countries often
cheat on their joint obligations. There needs to be effective auditing of
countries by the United Nations
• industries such as steel and aluminium production will move to wherever
they find the lowest cost of production. This is called “carbon
leakage.” Polluting industries may gravitate to those jurisdictions where
there is no carbon levy and continue with undiminished pollution
• there is a need to legally protect the biodiversity of the planet because a
loss of biodiversity will adversely affect all people in the long term
• there is a need to invest in technology to accelerate the development of
substitution technologies (such as electric cars), supplementary
technologies (such as carbon capture and storage) and new technologies
that will remove CO2 from the atmosphere.
various countries are proceeding to change their institutions on the basis of
previous commitments and arrangements.
The Indian lawyer Anuradha (2009) has identified policy and legal
requirements relating to infrastructure and technology transfer that will
enable developing countries such as India to join with industrialised countries
in addressing global warming:
Evidence-based policy
In The Policy Context for Research, Lorman & Van Groningen (2009) provide a
more comprehensive social and behavioural definition of public policy:
Public policy is about the arrangements for social and economic life
in our society. It comes out of the interaction between the different
interests of stakeholders who have different views about what
constitutes a problem that needs to be solved. These interactions
are mediated through an extensive set of institutional arrangements
…. This engagement is done within and between organisations with
different traditions, including international organisations and
processes. Within formal political processes the emphasis is on
values and ideas, sustained through alliances, brokerage and
compromise …. Public policy is made when people engage with
others, through their interests, commitments and paid occupations
in shaping social and economic arrangements. This capacity to
engage and shape arrangements is greatly influenced by their
command over resources. Those with the greatest command over
resources have the greatest potential to influence policy outcomes.
Policy makers use many techniques in the democratic process of shaping policy
amongst stakeholders. These include evidence-based policy, “dialogic” policy
development (multiple ongoing stakeholder dialogue) and Lindbloom's (1959)
incrementalism or “muddling through” alternative to the rationalist model. As
Chapter 3 Political economy of Anglo-American world view of climate change
demonstrates, the development of climate policy has been amongst the largest
evidence based policy research projects ever undertaken. This dissertation
therefore uses evidence based policy research as the underlying paradigm for
its climate-economic policy research framework. As will be discussed below,
the concept of evidence is more synonymous with an estimate than an
irrefutable truth.
Although some would argue that Tony Blair's own Prime Minister's office did
not provide a very good example of transparency and evidence-based analysis,
the underlying assumption of evidence-based policy remains undisputed: better
policy is achieved with research, and better policy produces better
outcomes. In contrast, poor policy usually wastes money and fails its aims.
Influential deductivists such as Sir Karl Popper and Thomas Kuhn argue that
confidence can only be developed in a hypothesis by attempting to falsify it
through tests (see later in this Chapter). Deductivists would never agree that a
clinical trial is sufficient to be sure that a drug will cure the next person tested.
Popper's oft-quoted example is that no matter how many white swans are
observed, the absolute theory that all swans are white is never justified (Magee
1974, p.22). However, deductivists would agree that the more tests a drug
withstands without failure, the more robust the efficacy hypothesis becomes.iv
For example, modern political systems are not able to function by hypothesis
falsification. Many issues dominate politics. Neither politicians nor the
bureaucracy like to encourage negative criticism, even if rationality is
identified with the virtues of public criticism and falsification testing.
Unfortunately, people are not purely rational beings. They have emotions and
tend to respond poorly to falsification attacks. Pragmatism, realism, working
trade-offs and sub-optimisation are the norm rather than exception. Indeed,
the working assumptions of the bureaucracy are rarely, if ever, examined. This
is the reality of the messy social milieu for public policy formation.
Policy feasibility
with different answers to the same question. Cardiologists in
Davenport, Iowa, are quick to insert stents; cardiologists in Iowa
City and Sioux City are not. They can’t both be right. Some people
with heart disease are getting the best treatment, and some are not.
The same is true of debilitating back pain, various cancers and even
pregnancy. .... The lobbying groups for drug companies, device
makers, insurers, doctors and hospitals have succeeded, so far, in
keeping big, systemic changes out of the bills. And yet the modern
history of medicine nonetheless offers reason for optimism.
Medicine has changed before, after all. When it did, government
policy played a role. But much of the impetus came from inside the
profession. Doctors helped change other doctors.
Sir Karl Popper (1972) proposed that this conundrum be solved by recognising
a “World III” of objective knowledge comprising statute and common laws,
scientific papers, textbooks, documented procedures etc. While both Popper's
“World III” and the “objective theory of evidence” remain controversial in
philosophical circles, these ideas have had a profound influence on normative
theories for practical professional practice as described above.
repeatability testing (Achinstein 1991; 2001; Rehg & Staley 2008); and
Bayesian inference conforms to the “likelihood principle” because it merely
depends on prior probabilities, which have nothing to do with the experiment
(Birnbaum 1962, p. 271; Sprenger 2008, pp 197 & 204).v Therefore, the results
emerging from a process that applies the scientific method are qualified to be
considered as part of an independent body of knowledge (which is Popper's
“World III”).
The “objective theory of evidence” relies on two primary concepts. The first is
that true and false are not absolute states. In A Treatise on Probability (Keynes
1921, Chapters 15 & 17), Keynes hypothesised a continuum between falsity
and truth. He suggested that intermediate points in this interval are associated
with probabilities of truth. The legal system accepts his proposition, for
example, requiring guilt to be proven beyond reasonable doubt in serious cases
and on the balance of probabilities in less serious cases.
Bayes' theory demonstrates the reason why policy makers seek confirmation
from economic modellers that a particular policy represents a scenario that is
at least feasible. In an example analogous to the one above, we assume that
economic modelling has a 90% probability of correctly showing a particular
policy is feasible, if indeed it is feasible, and that say 60% of all proposed
policies are feasible. The probability of economic modelling identifying that
policies are feasible, notwithstanding whether the policy is or is not, is 58%. ix
Therefore, the Bayesian probability that a policy is feasible given that
modelling shows that it is feasible, is a more impressive 93%. x
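The arithmetic behind these figures can be reproduced in a few lines of Mathematica. This is a minimal sketch only: the 90% hit rate and the 60% base rate come from the example above, while the 10% false-positive rate is an assumption implied by the 58% figure rather than a value stated in the text.

pFeasible = 0.60; (* prior: share of proposed policies that are feasible *)
pShowGivenFeasible = 0.90; (* modelling correctly shows a feasible policy to be feasible *)
pShowGivenInfeasible = 0.10; (* assumed false-positive rate, implied by the 58% figure *)
pShow = pShowGivenFeasible pFeasible + pShowGivenInfeasible (1 - pFeasible)
(* 0.58, the unconditional probability that modelling shows feasibility *)
pFeasibleGivenShow = (pShowGivenFeasible pFeasible)/pShow
(* 0.93, the Bayesian probability that the policy is feasible given the modelling result *)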
Policy risks
As for all professionals, policy modellers' prima facie duties include integrity,
objectivity, an absence of any conflict of interest, possession of the necessary
skills and competence, and processes for care and due diligence. The duty of
care includes an appreciation of misleading (or deceptive) assumptions,
including misleading by omission.
Wise hands in policy formation recognise first and foremost that any projection
or forecast is a matter of opinion and judgement. They look for reasoned and
sustainable assumptions and a systematic modelling process. For their part,
modellers need to appreciate that a reader's understanding of the assumptions
is essential for their proper assessment of the information contained in the
model. Therefore, specialists and experts preparing models need to take as
much care with the formation and publication of the assumptions as they do
with the model results.
Non-systemic modelling risks
Communication risks
The way results are read by the intended audience is also important and there
is a risk that the presentation of results could be misleading. Due to a human
behavioural fallibility, many people act on the assumption that the middle value
of a table or range is the most likely value. Rather than extensive tables, it is
better to show the most probable outcomes and discuss the variables that have
a significant impact on these results.
From the above discussion it may be appreciated that a key requirement in the
process of evidence-based policy is that research is assembled that maximises
the probability of correctly determining that a proposed policy is both feasible
and the best policy. In this, consistency of the evidence is extremely important.
For example, a historical analysis of the political economy of the policy area
should be developed alongside economic modelling of future policy scenarios.
As the great deductivists like Popper and Kuhn surmised, exposure of expertly
prepared evidence-based research to peer review and, ultimately, to an open
and transparent process of public criticism provides a diligent proving ground
for assuring that a proposed policy is both feasible and the best policy. In
particular, it is often only at the stage of public exposure that issues of social
equity and justice are appropriately weighed, for example, doing the most for
the majority while at the same time looking after the least well off as argued by
Rawls (1972).
Prior to public exposure, the process of developing expert opinion for evidence-
based policy usually relies on normative principles of systematic practice in the
respective profession, be it economics, law, engineering or another profession.
Ironically, while systematically applying inductivism throughout evidence-
based policy, enlightened professional practice complies with strict deductivist
principles in claiming only to represent current best working hypotheses and
shunning any ambit that these professional hypotheses be regarded as a
science of theories and laws.
Lastly, evidence based policy is always at risk of being subverted and the
“policy makers for policy making” need to be ever vigilant of degenerate
policy-driven evidence. This is selective or manipulated evidence provided to
justify or promote a particular policy. For example, Thomas Kuhn showed in
The Structure of Scientific Revolutions (1962) that vested interest groups will
invest large resources in defending the status quo. Bryson & Mobray (2005)
highlight the need for high level impartiality and a passion for diligent
governance to eliminate conflicts of interest.
Policy making may be understood from the research methods used for
economic decision making and allocation of resources. The main categories are
(expanding on Gruber 2007):
The importance of historical analysis in developing evidence for consistent and
believable hypotheses was referred to above. Discovery of the historical
background through political economy analysis has become de rigueur for
research in evidence based policy.
in concert with all of the other markets. This compound effect accounts for the
upward sloping supply curve.
The crux of the climate-CGE modelling approach can be deciphered from these
examples:
A neoclassical paradigm may easily be (or become) delaminated from reality.
Economics is not a science based on immutable laws. It can only ever be a
consistent discipline of practice with working assumptions that have proven
generally valid in the past. The past is not always a reliable guide to the future
(Popper 1959). Quite often assumptions become invalid and sometimes the
body of policy makers doesn't notice this happening. At this point the paradigm
diverges from reality. As we have seen from America's recent sub-prime credit
crisis and financial collapse, neither individual nor collective behaviour can be
fully predicted by sets of equations. Markets are subject to failure due to
behavioural factors such as the breakdown of enlightened self interest, which
is an article of faith in the dogma of self-regulation, and not being fully
accountable for the outcome of one's actions, which is called “moral hazard.”
One of the reasons that CGE modelling delaminates from reality is that its
assumptions about utility and profit maximisation are generalisations. When
individuals and, even more importantly, institutional stakeholders behave
in different ways, these assumptions can become unjustified. Any
numerical policy research needs to be supplemented with an understanding of
the values and ideas, alliances, brokerage and compromise of the strongly
competing stakeholder institutions that have large resources to influence
policy outcomes. It is necessary to evaluate the same policies with reference to
the tools of political economy, ideology, moral philosophy and influence
analysis as Lorman & Van Groningen (2009) note: “This capacity to engage and
shape arrangements is greatly influenced by their command over resources.
Those with the greatest command over resources have the greatest potential to
influence policy outcomes.”
The difficulty of achieving effective policy analysis may be gauged by the large
range of stakeholder institutions in policies with national and global
implications. These include international and global organisations such as the
United Nations, multinational corporations, international social movements,
and trade, aid and immigration policies within national political processes;
national governments and domestic institutions such as Ministers of
Parliament, Departments, courts, non-government organisations (NGOs) and
private sector industry organisations and companies (which are often striving
for self-regulation); the bureaucracy, which often features hierarchical control
and coordination; the professions, which are guided by principles of autonomy,
self-regulation and occupational control; and social movements which have
open and fluid structures, such as Greenpeace, the World Wildlife Fund,
German Watch and the David Suzuki Foundation.
Furthermore, the large range of institutions will often have just as large a
range of alternative agendas, different views on the importance of key issues
and even strong ideological and moral differences about the collective
behaviour of how societies work and individual behaviour, for example,
neoclassical rationalism, Keynesian, Monetarist, self-regulation, social
democracy, capitalism, “dry-liberal”, “wet-liberal”, welfare state, green and
radical views of all types. This can lead to highly contradictory contexts and
pragmatic tradeoffs in negotiating multiple and conflicting objectives.
It is the citizens and powerful vested interests in the country that can be the
real stakeholders in international agreements. For example, oil, coal and gas
producers and users, such as power stations and motorists, would be
significantly affected by taxes or escalating emissions permit costs designed to
switch users from polluting fuels to clean fuels and technologies.
Traditionally fossil fuel producer groups (or their industry associations) have
exceptionally strong influence on governments. These producers are often
commercialising national endowments and in doing so bringing much needed
income, industry and prosperity to the country. Resource companies often
control commodity cashflows with such immense magnitudes that they are
singularly important to countries. Global producers, such as the “six-sisters” of
the oil industry, are bigger than most national governments and on an equal or
better footing in negotiating with governments. These companies can “play the
employment security card” with their employees by threatening job losses, for
example, if logging of forests, fishing or coal mining is restricted or
financially impaired in any way.
This paradox of growth has led a number of authors from John Stuart Mill to
the present day to argue for a growth-less or steady state economy (Mill 1848;
Daley 1992; Hamilton 2006). Even the New Scientist editor writes in the
magazine's special issue The Folly of Growth: how to stop the economy killing
the planet: “Most economists care only about growth. Where resources come
from and where wastes go are largely irrelevant. If we are to leave any kind of
a planet to our children, this needs to change” (New Scientist 2008).
As already noted, CGE models are rationalist models that seek to maximise
growth, or at least welfare as measured by the expansion of consumption.
Those who criticise the paradigm of growth are equally scathing of CGE
models being tools of the cult of growth that conveniently justify growth
policies. However, criticism of neoclassical economics and CGE models as
promoting growth is largely misplaced. This is because constraints on resource
usage from natural endowment scarcity and specific policy implementation (for
example, to control emissions) mean that the dual solution provides the very
efficiency in resource utilisation that Lowe and others seek.
It can be seen that inherent conflicts in the outlooks and aims of individuals
and institutions necessitate policy implementation being fine-tuned through a
large number of potential instruments of intervention. Policy is often defined
by the instrument that is used: “It can express itself through the clarification of
public values and intentions; through commitments of money and services; by
the granting of rights and entitlements” (Considine 1994, p.3).
The traditional process for policy is to set the agenda; formulate policy options;
select policy instrument; implement; monitor; evaluate; review; and terminate
(Sutcliffe & Court 2005, p.9; Lorman & Van Groningen 2009; Young & Quinn
2002, pp.13-4). The first phase of setting the agenda seeks to identify all
aspects of the issue. For example, the reasons why the issue is important,
competing definitions of the problem, potential policy instruments; the steps
ahead; and the power blocs and the stakeholder engagement required for
alliances, brokerage and compromise.
CGE analysis has its place in the second stage of the policy forming process,
namely, research. This phase encompasses the iterative research needed to
establish what needs to be done; identify potential intervention responses;
potential instruments; institutions that will implement the policy; individuals
and institutions that will be affected; and to provide information to help
achieve the support of stakeholder institutions. The vast number of policy
instruments required for the fine tuning of policy implementation means that
high level policy research tools such as CGE models need to be carefully
finessed.
In order to fulfil this role, over the last four decades CGE researchers have
developed models for various influential institutional agendas and strategies.
For example, to take into account developments in instruments of intervention
such as carbon taxes and emissions trading.
The literature survey in Chapter 4 Economic models for climate change policy
analysis highlights that there are now many CGE models from policy
researchers' investigations into different dimensions of problems and exploring
advances in theory, techniques, data availability and computing power. From a
climate change perspective, these CGE models have evolved from economic
models into energy models, then economic-energy-emissions (E3) models and
now into economic-climate models.
The reason for this is that markets in CGE models are constructed with many equations. This is quite onerous and laden with many assumptions, such as elasticities and marginal productivities. When the number of countries and commodities is expanded, the complexity of the task increases dramatically and the assumptions multiply rapidly. The sheer scope of addressing the huge set of exogenous variables means that detailed due diligence of the assumptions is difficult to complete. This compares with, say, using data such as Input Output data at face value and creating marketplaces by virtue of the primal and dual formulations present in all optimisations. For example, the “Main Theory of Linear Programming” simultaneously maximises an output isoquant while minimising resources. At the same time, the resource marginal productivities are established endogenously, instead of exogenously as in traditional CGE models.
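The role of the dual can be illustrated with a deliberately small linear programme. The following sketch is purely illustrative and is not the model developed in this dissertation: the two industries, their unit values, input requirements and resource endowments are invented for the example, and SciPy's linprog is used simply as a convenient solver. The primal maximises the value of output subject to resource endowments, while the explicitly constructed dual minimises the imputed value of those endowments; the dual solution delivers the resource shadow prices (marginal productivities) endogenously, with both objectives coinciding at the optimum.

import numpy as np
from scipy.optimize import linprog

# Toy economy: two industries and two primary resources (labour, capital). All numbers assumed.
p = np.array([3.0, 5.0])        # value of one unit of output from each industry
A = np.array([[1.0, 2.0],       # labour required per unit of output of each industry
              [3.0, 1.0]])      # capital required per unit of output of each industry
b = np.array([40.0, 60.0])      # endowments of labour and capital

# Primal: maximise p.x subject to A x <= b, x >= 0 (linprog minimises, hence the sign change).
primal = linprog(-p, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual: minimise b.y subject to A'y >= p, y >= 0; y are the shadow prices of the resources.
dual = linprog(b, A_ub=-A.T, b_ub=-p, bounds=[(0, None)] * 2, method="highs")

print("Primal activity levels:", primal.x)
print("Dual shadow prices:    ", dual.x)
print("Objectives coincide:   ", -primal.fun, dual.fun)   # strong duality: both equal 108 here

In this toy marketplace the shadow prices play the role of the endogenous marginal productivities described above: they are produced by the optimisation itself rather than supplied as exogenous assumptions.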
This highlights the primary limitation of current CGE models: they cannot readily provide spatial disaggregation. For example, Australia's CGE models are amongst the most sophisticated in the world, yet none of them could easily model Australia's climate change policy in a world setting. The Garnaut and Australian Treasury policy analysts needed to manually assemble a system of partial equilibria (i.e. manually create a synthetic national equilibrium in a world context). The lack of modelling flexibility appears to have been exceedingly exasperating: the modelling team ran late in its task.
A second major research gap in existing CGE models is the need to select a production function, such as the Nordhaus DICE Cobb-Douglas function, GTAP's Constant Elasticity of Substitution function or a Translog function. It is difficult to justify synthetic, econometrically estimated production functions on the basis of calibration alone. Dale Jorgenson was the first to use econometrics to estimate American economic parameters, giving rise to the complex task of econometric general equilibrium modelling (Johansen 1978; Hazilla & Kopp 1990).
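For concreteness, the three candidate forms can be written down for a two-factor case. The following sketch is purely illustrative: the parameter values are arbitrary placeholders, whereas in practice each form would be calibrated or econometrically estimated rather than assumed.

import numpy as np

def cobb_douglas(K, L, A=1.0, alpha=0.3):
    # Y = A * K^alpha * L^(1 - alpha); unit elasticity of substitution between factors
    return A * K**alpha * L**(1 - alpha)

def ces(K, L, A=1.0, alpha=0.3, rho=0.5):
    # Constant Elasticity of Substitution form; sigma = 1 / (1 + rho)
    return A * (alpha * K**(-rho) + (1 - alpha) * L**(-rho)) ** (-1.0 / rho)

def translog(K, L, b0=0.1, bK=0.3, bL=0.6, bKK=0.02, bLL=0.02, bKL=-0.04):
    # Transcendental logarithmic form: quadratic in the logarithms of the inputs,
    # a flexible local approximation to an arbitrary technology
    lnK, lnL = np.log(K), np.log(L)
    lnY = (b0 + bK * lnK + bL * lnL
           + 0.5 * bKK * lnK**2 + 0.5 * bLL * lnL**2 + bKL * lnK * lnL)
    return np.exp(lnY)

K, L = 100.0, 200.0     # capital and labour inputs, arbitrary units
print(cobb_douglas(K, L), ces(K, L), translog(K, L))

Each form embodies different substitution and curvature assumptions, which is precisely why the choice is difficult to justify from calibration alone.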
A third research gap is present in Leontief, Nordhaus DICE and ten Raa's
modelling of intertemporal performance. In intertemporal models, population
and technology productivity are the only exogenous variables. Investment and
capital (i.e. accumulated and depreciated investment) are endogenous because
these factors have to be produced by the economy and the level of production
is determined by expectations of future consumer and industry demand.
Therefore, intertemporal models need a way of inherently controlling
investment in an industry.
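The point can be made concrete with a minimal capital accumulation identity. The following sketch is a generic perpetual-inventory relation rather than the specific equations of Leontief, DICE or ten Raa's models: the depreciation rate, decade time step and investment path are illustrative assumptions, and in a genuine intertemporal model the investment path would be chosen endogenously by the optimisation rather than given.

# Capital accumulation: K[t+1] = (1 - delta)^step * K[t] + step * I[t]
delta = 0.05                                # annual depreciation rate (assumed)
step = 10                                   # decade time step (assumed)
K = 100.0                                   # initial capital stock, arbitrary units
investment_path = [8.0, 9.0, 10.0, 11.0]    # assumed for illustration only

for t, I in enumerate(investment_path):
    K = (1 - delta) ** step * K + step * I
    print(f"decade {t + 1}: capital stock = {K:.1f}")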
with supplementary tools, the lack of productivity due to double handling and
absence of early visualisation stifles agility and creativity.
However, nations continue to vacillate about how long they can defer the
decision to switch from policies of unconstrained growth to policies of
constrained growth. This has led to dithering in international agreements, to what some might call “policy paralysis”, and to the use of Prisoner's Dilemma game theory strategies to minimise losses.
All of these elements were present at the UNFCCC Bonn meeting in June 2009, which failed to bring consensus on policy for 2012 and beyond. For example, America determinedly sought China's agreement to targets such as a 40% reduction in emissions by 2020 (compared with the 1990 level). China responded that this type of target was inappropriate but that it would cooperate in reducing emissions if America provided inexpensive green technologies such as carbon capture and storage.
China's response has the merit of logic. A unique issue in climate policy is the existence of a fixed tranche of emissions, beyond which global warming is considered cataclysmic (see Chapter 3 Political economy of the Anglo-American world view of climate change). This stark reality forces the inescapable conclusion that either green technology becomes cheap and widely available or countries will need to face large reallocations in their domestic production, and perhaps internationally. Many regions, for example the Spanish Asturias (Arguelles et al. 2006), have expressed concern that limiting emissions will have dire effects on their economies. Of course, making green technology cheap and widely available leads to other issues, such as minimal or no patent protection for private technology developers.
Traditional CGE models have great difficulty in adequately coping with the plurality of climate-economic policy constraints, which multiply the complexity of models: for example, living within current income rather than borrowing to maintain lifestyles; maintaining the purchasing power of the labour force; managing energy requirements and greenhouse gas pollution; and achieving social objectives such as expanding both the population and its welfare.
A CGE framework for climate policy analysis is needed that captures the
background of changing Anglo-American, European, Chinese, Indian and other
world views, focusing on the various dimensions of the debate on climate
change and looking forward to satisfactory “win-win” solutions to the issues
that emerge from the interaction of such dimensions.
of Substitution model. Applied in a multi-industry model, there is substitution between industries of the factors of production such as materials, labour and capital. The dual solution of the optimisation process settles the market by balancing marginal productivities, and therefore marginal prices, for tâtonnement. This overcomes the usual objection to the Leontief production function, namely that there is no substitution of the factors of production within a single industry. Studies comparing data envelopment analysis (DEA) and transcendental production functions (Translog) demonstrate that there is little value in providing a more advanced, econometrically synthesised production function. The long use of DEA in government and industry imparts confidence in the use of optimisation-type production functions. Therefore, the use of Use and Make table production functions within computable general equilibrium models appears to be a promising area for investigation.
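The Leontief form referred to here can be stated in a few lines. The following sketch is purely illustrative, with invented input coefficients; its point is simply that output is limited by the scarcest input in fixed proportions, so there is no substitution between factors within the single industry, while the substitution discussed above occurs across industries in the benchmarking optimisation.

def leontief_output(inputs, coefficients):
    # Leontief technology: y = min_i(x_i / a_i), where a_i is the quantity of factor i
    # required per unit of output; fixed proportions and constant returns to scale
    return min(x / a for x, a in zip(inputs, coefficients))

a = [0.2, 0.5, 1.5]            # materials, labour and capital required per unit of output (assumed)
x = [10.0, 20.0, 30.0]         # inputs available to this industry (assumed)
print(leontief_output(x, a))   # 20.0: capital is the binding input (30 / 1.5)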
Although traditional CGE models have a long heritage and are widely used, there has been no definitive testing of whether benchmarking CGE models are superior to traditional CGE models. This is because economics, strategy and policy making are all disciplines of practice rather than sciences in the strict sense of hypothesis testing. Only the use of both traditional CGE models and benchmarking CGE models over a reasonably long period will develop a deeper understanding of whether one or the other formulation has compelling advantages. A way of addressing the fourth gap in CGE policy research, namely a lack of agility and poor communication, may be to use a modern mathematical optimisation platform with a rich set of data visualisation functions. The last decade has brought considerable advances in optimisation techniques embedded within visualisation environments.
1.4 Research aims
Against the above backdrop, the main aim of this research is to answer the
question:
The research methodology is shown diagrammatically as follows:
Time frame
This research project has been conducted during a period of intense
international negotiations over climate change policy. Countries involved in
these negotiations experienced many domestic and international pressures,
and employed various intriguing game strategies. This dissertation investigates
the political economy of these negotiations and strategies as Anglo-American
countries face new constraints on both their economic growth and
unilateralism.
While the political process in international relations has been in train for thousands of years and presumably will continue for thousands of years more, bringing this research to a conclusion necessarily requires that a time scope be
set. Therefore the time frame for this research is the period ending with the
UNFCCC's Bangkok talks on 10 October 2009.
Data scope
The wide variety of sources of data assembled and, where necessary,
purchased for this research reflects its multidisciplinary nature across
economics and technology.
While GTAP 7 does not provide population growth rates, the GTAP data can be
supplemented with a wide range of financial and resource data, for example
from the Mathematica's Country Database, which provides population growth
rates for the year 2006.
Geographical scope
The research focus of this dissertation has been to understand the policy
challenges facing Anglo-American countries as they restructure from
unconstrained growth to an acceptance of climate change constraints. Policy
development in Anglo-American countries is contrasted to that in the European
Union and to BRIC countries (primarily Brazil, Russia, India and China), which
dominate the rest of the world category. Therefore, three regions of the world
have been modelled to understand policy development and outcomes with
reference to the two predominant trading blocs: the North American Free Trade Agreement (NAFTA) and the European Union (EU of 25 countries). Countries
outside of these two trading blocs are aggregated into a Rest of World (ROW)
category.
Commodity scope
Three basic commodities are analysed in each region: food, manufactured
goods and services. The GTAP 7 economic and emissions databases are
aggregated for these commodities across the geographical scope of NAFTA,
EU and ROW. Additional emissions permits and carbon mitigation services
commodities are appended to enable climate change policies to be evaluated.
Intertemporal scope
The climate-CGE modelling tool developed in this dissertation facilitates the
study of climate change policies over projections of 130 years (13 decades)
from the data's base year of 2004. This is somewhat less than the full 60
decades of the Nordhaus DICE model. While the intertemporal scope is
sufficient, it is limited by the magnitude of the task in symbolically
representing the whole spatial and industry disaggregation model within
optimisation constraints combined with the operations research challenges of
nonlinear optimisation.
In its review of the role of economic modelling in the global financial crisis,
The Economist (2009) concludes: “Economists need to reach out from their
specialised silos: macro-economists must understand finance, and finance
professors need to think harder about the context within which markets work.
And everybody needs to work harder on understanding asset bubbles and what
happens when they burst. For in the end economists are social scientists,
trying to understand the real world.”
From the literature survey we have seen the decades of effort that
international organisations, such as United Nations, IPCC, IEA, IMF and World
Bank, and domestic organisations, such as the Australian Productivity
Commission, ABARE, Australian Treasury, CSIRO, Garnaut Review, Monash
University and others have devoted to the pursuit of better models. This
research is therefore timely in providing the first model of its type for
multiregional, intertemporal policy analysis in growth constrained by climate
change.
1.8 Chapter references
Achinstein, P., 1991. Particles and waves: Historical essays in the philosophy of
science, Oxford University Press, USA.
Achinstein, P., 2001. The Book of Evidence, New York: Oxford University Press.
Arguelles, M., Benavides, C. & Junquera, B., 2006. The impact of economic
activity in Asturias on greenhouse gas emissions: consequences for
environmental policy within the Kyoto Protocol framework. Journal of
Environmental Management, 81(3), 249-264.
ASIC, Practice Notes 42, 43, 74, 75 & 170, Canberra: Australian Securities and
Investment Commission.
Ayres, R.U., 2001. How economists have misjudged global warming. World
Watch, 14(5), 12-25.
Birnbaum, A., 1962. On the foundations of statistical inference. Journal of the
American Statistical Association, 57(298), 269-306.
Brooks, D., 2009. What Geithner Got Right. The New York Times. Available at:
http://www.nytimes.com/2009/11/20/opinion/20brooks.html?
_r=1&th&emc=th [Accessed November 20, 2009].
Bryson, L. & Mowbray, M., 2005. More Spray on Solution: Community, Social
Capital and Evidence Based Policy. Australian Journal of Social Issues,
40(1), 91-107.
Cochran, C.L. & Malone, E.F., 1995. Public policy: perspectives and choices,
McGraw-Hill.
Daly, H., 1992. Steady State Economics, 2nd ed., London: Earthscan.
Garnaut Climate Change Review, 2008b. Supplementary Draft Report: Targets
and trajectories, Commonwealth of Australia. Available at:
http://www.garnautreport.org.au/ [Accessed September 7, 2008].
Gödel, K., 1931. Über formal unentscheidbare Sätze der Principia Mathematica
und verwandter Systeme I. Monatshefte für Mathematik, 38(1), 173-
198.
Gruber, J., 2007. Public finance and public policy 2nd ed., New York, NY: Worth
Publishers.
Hazilla, M. & Kopp, R.J., 1990. Social cost of environmental quality regulations:
A general equilibrium analysis. Journal of Political Economy, 853-873.
Keane, J., 2009. The life and death of democracy.
Kuhn, T., 1962. The Structure of Scientific Revolutions 2nd ed., Chicago:
University of Chicago Press.
Leonhardt, D., 2009. Making Health Care Better. The New York Times.
Available at:
http://www.nytimes.com/2009/11/08/magazine/08Healthcare-t.html?
_r=1 [Accessed November 24, 2009].
Ljungqvist, L. & Sargent, T.J., 2000. Recursive macroeconomic theory 1st ed.,
The MIT Press.
Lorman, D. & Van Groningen, J., 2009. Public Policy, Australia: Australian
Technology Network of Universities.
Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental Crisis, Melbourne: Black Inc.
Mayo, D.G., 1996. Error and the growth of experimental knowledge, Chicago:
The University of Chicago Press. Available at:
http://books.google.com.au/books?
hl=en&lr=&id=FEsAh4L9r_EC&oi=fnd&pg=PR9&dq=
%22Deborah+Mayo%22&ots=j9ccfxtlW5&sig=_N-
Lacjg0yxbdrfhv0nNL2rNFxk.
Mayo, D.G. & Spanos, A., 2004. Methodology in practice: Statistical
misspecification testing. Philosophy of Science, 71(5), 1007-1025.
Mill, J.S., 1848. Principles of Political Economy 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].
New Scientist, 2008. Editorial: Time to banish the god of growth. New
Scientist, 199(2678), 5.
Nordhaus, W.D., 2007. Notes on how to run the DICE model, Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
[Accessed June 26, 2008].
ten Raa, T., 2007. Review of A. Brody: Near Equilibrium - A Research Report on Cyclic Growth. (Review of the book Near Equilibrium - A Research Report on Cyclic Growth, András Bródy, Aula Publishing House, Budapest, 2005, vi + 137 pp., ISBN 963-9478-95-4). Economic Systems Research, 19(1), 111-3.
Rawls, J., 1972. A theory of justice, Oxford: Clarendon Press.
Rehg, W. & Staley, K., 2008. The CDF Collaboration and Argumentation
Theory: The Role of Process in Objective Knowledge. Perspectives on
Science, 16(1), 1-25.
Sachs, J., 2008a. Common Wealth: Economics for a Crowded Planet, The Penguin Press.
Staley, K. & Cobb, A., 2009. Internalist and Externalist Aspects of Justification
in Scientific Inquiry.
Sutcliffe, S. & Court, J., 2005. Evidence‐based policymaking: What is it? How
does it work? What relevance for Developing Countries?, London, UK.:
Overseas Development Institute. Available at: [Accessed May 15, 2009].
The Economist, 2009. Economics: What went wrong with economics. The
Economist. Available at:
http://www.economist.com/printedition/displayStory.cfm?
Story_ID=14031376 [Accessed July 22, 2009].
UK Hampel Committee, 1998. Committee on Corporate Governance (Final
Report), London: The Committee on Corporate Governance and Gee
Publishing Ltd. Available at:
http://www.econsense.de/_CSR_INFO_POOL/_CORP_GOVERNANCE/ima
ges/hampel_report.pdf.
Young, E. & Quinn, L., 2002. Writing Effective Public Policy Papers. Budapest:
Open Society Institute.
reference to the structure of E and E'”. Its corollary is that the probability of results that could have been observed, but were not, is irrelevant to the statistical inference. The “Likelihood Principle” (L) contains the principles of sufficiency (S) and conditionality (C). Birnbaum notes: “The fact that relatively few statisticians have accepted (L) as appropriate for purposes of informative inference, while many are inclined to accept (S) and (C), lend interest and significance to the result, provided herein, that (S) and (C) together are mathematically equivalent to (L)” (Birnbaum 1962, p.271).
vi 5.9% is the overall probability of a positive test: the probability of a correct positive identification plus the probability of an incorrect positive identification (i.e. 95% x 1% + (100%-95%) x (100%-1%)).
vii 16.1% is calculated as the 95% probability of the test being positive if the person has taken drugs x the 1% probability of a person taking drugs regardless of other factors / the 5.9% probability of a positive result regardless of other factors.
viii If the accuracy of the test remains at the new level of 99% and the probability of drug use by sports people falls to just 0.1%, then the probability of drug use given a positive test falls to only 9.02%. This illustration of Bayes' posterior probability has been developed with reference to “Further Examples: Example 1 Drug Testing” at http://en.wikipedia.org/wiki/Bayes%27_theorem.
ix 58% is calculated as 90% x 60% + (100%-90%) x (100%-60%).
x 93% is calculated as the 90% probability that the modelling correctly shows a policy is feasible if it is indeed feasible x the 60% probability of a policy being feasible regardless of other factors / the 58% probability of a positive result regardless of other factors (a short numerical sketch of the calculations in notes vi to x follows these notes).
xi Without modification, ten Raa's linear programming benchmarking uses a simple Leontief function for each industry, which implies a constant mix of inputs. However, it provides full substitution to other industries in the world with a better technology function and substitutes the factors of production across domestic industries. The Leontief function can be modified for increasing or decreasing returns to scale if desired, but in most situations the simplicity of constant returns to scale is intuitive and appealing.
xii The 2009 case of Pacific Brands transferring its underwear factories from Australia to Asia is an example of industrial production shifting from one structure of labour and technology to another.
xiii The Armington Assumption is a constant elasticity of substitution (CES) aggregate assumption for international trade.
xiv DuPont Analysis was developed by the American chemical conglomerate E. I. du Pont de Nemours and Company.
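The arithmetic in notes vi to x can be reproduced in a few lines. The following sketch is illustrative only; it simply restates Bayes' theorem with the probabilities assumed in those notes.

def posterior(sensitivity, false_positive_rate, prior):
    # Bayes' theorem: returns P(condition | positive result) and P(positive result)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive, p_positive

# Drug-testing illustration (notes vi and vii): 95% sensitivity, 5% false positives, 1% prevalence.
post, p_pos = posterior(0.95, 0.05, 0.01)
print(f"P(positive) = {p_pos:.3f}; P(drug use | positive) = {post:.3f}")    # 0.059 and roughly 0.161

# Note viii: 99% accuracy with 0.1% prevalence gives a posterior of about 9%.
print(f"{posterior(0.99, 0.01, 0.001)[0]:.4f}")                             # roughly 0.0902

# Policy-modelling analogue (notes ix and x): 90% reliable modelling, 60% prior feasibility.
post, p_pos = posterior(0.90, 0.10, 0.60)
print(f"P(feasible result) = {p_pos:.2f}; P(feasible | modelled feasible) = {post:.2f}")   # 0.58 and 0.93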
Chapter 2 Political Economy of the Anglo-
American economic world view
The 2008-9 Global Financial Crisis, wars in Iraq and Afghanistan and deep
deficits in the American economy have brought challenges and emerging
changes to the Anglo-American world view. This Chapter investigates the
development of the Anglo-American economic world view to establish a
framework for understanding climate change policy. America's unique themes
of liberty and free markets are distinct, pervasive and dominant in Anglo-
American culture.
Around 17 February 1775, the great ambassador Benjamin Franklin firmly highlighted the primacy of freedom in the American psyche (B. Franklin & W. T. Franklin 1818): “Those who desire to give up Freedom in order to gain Security, will not have, nor do they deserve, either one.”
Perhaps the most fundamental of American character traits is the belief in
freedom. The tenet was bravely announced in The Unanimous Declaration of
The Thirteen United States of America adopted by the Congress of the United
States on July 4, 1776. It stated:
Nearly fifty years later, the French lawyer Alexis de Tocqueville visited America
to critically appraise the emergent American democracy. For his pioneering
work of observational political sociology De la démocratie en Amérique (1835)
de Tocqueville was decorated as a chevalier de la Légion d'honneur (Knight of
the Legion of Honour), elected to the Académie des sciences morales et
politiques and subsequently to the Académie française.
I know of no country, indeed, where the love of money has taken
stronger hold on the affections of men and where a profounder
contempt is expressed for the theory of the permanent equality of
property .... there are but few wealthy persons; nearly all Americans
have to take a profession …. in the Western settlements we may
behold democracy arrived at its utmost limits …. the population has
escaped the influence not only of great names and great wealth, but
even of the natural aristocracy of knowledge and virtue. None is
there able to wield that respectable power which men willingly
grant to the remembrance of a life spent in doing good before their
eyes. The new states of the West are already inhabited, but society
has no existence among them.
frequently aided by the seasonableness of an idea than by its strict
accuracy; and in the long run he risks less in making use of some
false principles than in spending his time in establishing all his
principles on the basis of truth. The world is not led by long or
learned demonstrations; a rapid glance at particular incidents, the
daily study of the fleeting passions of the multitude, the accidents of
the moment, and the art of turning them to account decide all its
affairs …. The greater part of the men who constitute these nations
are extremely eager in the pursuit of actual and physical
gratification. As they are always dissatisfied with the position that
they occupy and are always free to leave it, they think of nothing but
the means of changing their fortune or increasing it. To minds thus
predisposed, every new method that leads by a shorter road to
wealth, every machine that spares labor, every instrument that
diminishes the cost of production, every discovery that facilitates
pleasures or augments them, seems to be the grandest effort of the
human intellect. It is chiefly from these motives that a democratic
people addicts itself to scientific pursuits, that it understands and
respects them. In aristocratic ages science is more particularly
called upon to furnish gratification to the mind; in democracies, to
the body.
nations of our age. It may be rendered less dangerous, but it cannot
be cured, because it does not originate in accidental circumstances,
but in the temperament of these nations.
geographic and productive departments, on larger scales and in
more varieties than ever, is certain. In those respects the republic
must soon (if she does not already) outstrip all examples hitherto
afforded, and dominate the world …. I perceive clearly that the
extreme business energy, and this almost maniacal appetite for
wealth prevalent in the United States, are parts of amelioration and
progress, indispensably needed to prepare the very results I
demand. [Walt Whitman's emphasis] …. Political democracy, as it
exists and practically works in America, with all its threatening
evils, supplies a training-school for making first-class men. It is life's
gymnasium, not of good only, but of all.
National security
American shared beliefs uniquely shaped a foreign policy built on the dual
premises of America as a new promised land for a chosen people of God, and
an even more arrogant “bully-boy” attitude that “the most powerful player
makes the rules”. These attitudes were formalised as the Monroe Doctrine,
known more broadly as Manifest Destiny, which justified America attacking any
country in the world (Jensen 2000, pp.86-8; Perkins 2004, pp.69-70):
in the Dominican Republic, in Venezuela, and during the “liberation”
of Panama from Colombia. A string of U.S. Presidents – most notably
Taft, Wilson, and Franklin Roosevelt – relied on it to expand
Washington's Pan-American activities through the end of World War
II. Finally, during the latter half of the twentieth century, the United
States used the Communist threat to justify expansion of this
concept to countries around the globe, including Vietnam and
Indonesia.
A notable use of Manifest Destiny in its most extended form was the American invasion of the sovereign nation of Hawaii on 16 January 1893. Using a fabricated excuse, American Marines invaded Hawaii and occupied Government buildings and the Iolani Palace. On 18 December 1893, President Grover Cleveland sought to redress the invasion with an impassioned plea to the Senate and House of Representatives not to succumb to the wrongful acquisition of Hawaii
(Cleveland 1893):
thorough examination of the facts will force the conviction that the
provisional government owes its existence to an armed invasion by
the United States. Fair-minded people with the evidence before
them will hardly claim that the Hawaiian Government was
overthrown by the people of the islands or that the provisional
government had ever existed with their consent. I do not understand
that any member of this government claims that the people would
uphold it by their suffrages if they were allowed to vote on the
question …. But in the present instance our duty does not, in my
opinion, end with refusing to consummate this questionable
transaction. It has been the boast of our government that it seeks to
do justice in all things without regard to the strength or weakness of
those with whom it deals. I mistake the American people if they
favor the odious doctrine that there is no such thing as international
morality, that there is one law for a strong nation and another for a
weak one, and that even by indirection a strong power may with
impunity despoil a weak one of its territory …. a substantial wrong
has thus been done which a due regard for our national character as
well as the rights of the injured people requires we should endeavor
to repair.
President Cleveland indeed did “mistake the American people”. Instead of Hawaii being returned to Queen Liliuokalani and her Government, President Cleveland's successor, President William McKinley, annexed Hawaii through the Newlands
Joint Resolution of 7 July 1898. President McKinley justified his action as a
consequence of the Spanish-American War. One hundred years later, President
Clinton apologised to the nation of Hawaii (103rd Congress 1993):
Manifest Destiny continued to provide legitimacy for American incursions
across the world, in Vietnam, South America, Panama and ultimately in Iraq
(Perkins 2004, pp.181-2):
The United States military since 2004 has used broad, secret
authority to carry out nearly a dozen previously undisclosed attacks
against Al Qaeda and other militants in Syria, Pakistan and
elsewhere ... These military raids, typically carried out by Special
Operations forces, were authorized by a classified order that
Defense Secretary Donald H. Rumsfeld signed in the spring of 2004
with the approval of President Bush ... The secret order gave the
military new authority to attack the Qaeda terrorist network
anywhere in the world, and a more sweeping mandate to conduct
operations in countries not at war with the United States … the new
authority was spelled out in a classified document called “Al Qaeda
Network Exord,” or execute order.
Sachs (2008, p.10) notes that American failures, including the Bush
Administration's crude and violent unilateralism, are a legacy of ashes:
Resource wars
International policy is of course about a complex set of issues involving more
than wars between ideologies. Discussions of climate change and economic
growth cannot be divorced from the accompanying issue of energy security.
American consumer traits have, if anything, intensified to the present day, with demands that troops be deployed to secure energy supplies for American consumers.
characterise that US identity it would have to be “more”. For the
majority of contemporary Americans, the essence of life, liberty, and
the pursuit of happiness centres on a relentless personal quest to
acquire, to consume, to indulge, and to shed whatever constraints
might interfere with those endeavours ... oil dependence is key to
our weakness. America's imperial military overstretch since the
1980 promulgation of the Carter Doctrine – which holds that the
U.S. will defend vital interests in the Persian Gulf "by any means
necessary" – is a natural consequence of that oil dependency. Our
collective refusal to conserve oil, to learn to live more sensibly
within our means, requires an ever-growing military commitment to
the Middle East.
Others have supported Bacevich's views. In the January 2009 Darwin Day Lecture to the British Humanist Society, Can British Science Rise to the Challenges of the 21st Century?, former UK Chief Scientist Sir David Kingii
rejected government claims that America and the United Kingdom invaded
Iraq because of weapons of mass destruction or to topple President Saddam
Hussein. Sir David maintained that the invasion was solely to lessen American
reliance on foreign oil (Randerson 2009):
The Iraq war was just the first of this century's resource wars, in
which powerful countries use force to secure valuable commodities
… future historians might look back on our particular recent past
and see the Iraq war as the first of the conflicts of this kind …. [the
USA,] casting its eye around the world – [saw] there was Iraq [and
its immense oil reserves for the taking] …. it was certainly the view
that I held at the time, and I think it is fair to say a view that quite a
few people in government held …. Unless we get to grips with this
problem globally, we potentially are going to lead ourselves into a
situation where large, powerful nations will secure resources for
their own people at the expense of others.
Highlighting with grim irony the place of oil in America's war in Iraq, Stanford
University's Professor Gretchen Daily has rhetorically asked (Lowe 2009, p.22)
“How concerned would the US administration be about Iraq if it had 10% of
the world's broccoli?”
A pessimistic interpretation of the traditional American approach to foreign
policy is that there will be many more wars over important resources such as
oil and water.
The founders of Western philosophy, Socrates (469–399 BCE) and Plato (c. 428–348 BCE), first referred to the role of commerce in organising society. However, it was Plato's student Aristotle (384–322 BCE) who philosophically investigated commerce and the role of work.
As many men have done before and after him, Aristotle sought an explanation of the meaning of life. Consciously or unconsciously, Aristotle subscribed to the
dominant Greek Stoic world view that all things had a purpose and the world
was happily harmonious and in order only when objects followed their innate
and predetermined purpose. The liberal Epicureans regarded humans as free
to some extent but their thoughts were not to become mainstream for 2,300
years, with the German philosophers Immanuel Kant, Arthur Schopenhauer,
Johann Gottlieb Fichte and Friedrich Nietzsche.
Aristotle's view that every object seeks its natural purpose or goal is called
“teleology” (Saunders 1974). Theists believed that God determined this
purpose for each object, while others ascribed it to nature. Whatever the
source of the belief, it was understood that when humans deviated from their
inherent purpose, through misfortune or lack of understanding, then they
became miserable and the world was in disharmony.
Now of the Chief Good (i.e. of Happiness) men seem to form their
notions from the different modes of life, as we might naturally
expect: the many and most low conceive it to be pleasure, and hence
they are content with the life of sensual enjoyment. For there are
three lines of life which stand out prominently to view: that just
mentioned, and the life in society, and, thirdly, the life of
contemplation …. As for the life of money-making, it is one of
constraint, and wealth manifestly is not the good we are seeking,
because it is for use, that is, for the sake of something further: and
hence one would rather conceive the forementioned ends to be the
right ones, for men rest content with them for their own sakes ….
And now let us revert to the Good of which we are in search: what
can it be? for manifestly it is different in different actions and arts:
for it is different in the healing art and in the art military, and
similarly in the rest. What then is the Chief Good in each? Is it not
"that for the sake of which the other things are done?" and this in
the healing art is health, and in the art military victory, and in that of
house-building a house, and in any other thing something else; in
short, in every action and moral choice the End, because in all cases
men do everything else with a view to this. So that if there is some
one End of all things which are and may be done, this must be the
Good proposed by doing, or if more than one, then these …. Now
since the ends are plainly many, and of these we choose some with a
view to others (wealth, for instance, musical instruments, and, in
general, all instruments), it is clear that all are not final: but the
Chief Good is manifestly something final; and so, if there is some
one only which is final, this must be the object of our search: but if
several, then the most final of them will be it …. So then Happiness
is manifestly something final and self-sufficient, being the end of all
things which are and may be done …. But, it may be, to call
Happiness the Chief Good is a mere truism, and what is wanted is
some clearer account of its real nature. Now this object may be
easily attained, when we have discovered what is the work of man;
for as in the case of flute-player, statuary, or artisan of any kind, or,
more generally, all who have any work or course of action, their
Chief Good and Excellence is thought to reside in their work, so it
would seem to be with man, if there is any work belonging to him ….
we assume the work of Man to be life of a certain kind, that is to say
a working of the soul, and actions with reason, and of a good man to
do these things well and nobly, and in fact everything is finished off
well in the way of the excellence which peculiarly belongs to it: if all
this is so, then the Good of Man comes to be "a working of the Soul
in the way of Excellence," or, if Excellence admits of degrees, in the
way of the best and most perfect Excellence …. And we must add, in
a complete life; for as it is not one swallow or one fine day that
makes a spring, so it is not one day or a short time that makes a man
blessed and happy …. it is thus in fact that all improvements in the
various arts have been brought about, for any man may fill up a
deficiency …. Now with those who assert it to be Virtue
(Excellence), or some kind of Virtue, our account agrees: for
working in the way of Excellence surely belongs to Excellence ….
Why then should we not call happy the man who works in the way of
perfect virtue, and is furnished with external goods sufficient for
acting his part in the drama of life: and this during no ordinary
period but such as constitutes a complete life as we have been
describing it.
The concept that man's utility was his only value seemed appropriate in the societies of ancient Greece and seventeenth-century America, which depended on the exploitation of slave labour. It also matched the power and wealth structure of society, thereby justifying the implicit assumption that there is a natural and defensible hierarchy (Aristotle says “degrees”) in the society of man.
regulation of agriculture at a time when the French monarchy was very
repressive. As agriculture was regarded as the only true production, land was
correspondingly the only scarce resource and was therefore considered the
most important form of wealth. From this perspective, extractive,
manufacturing and merchant services only convert material from one state to
another and are considered “sterile” of wealth creation.
Smith was ready to accept that goods included more than Quesnay's strict limit
of agricultural production. It seemed obvious to Smith that the tangible goods
had a value that could be readily calculated from the comprising factors from
which the goods were made: land rent, labour cost, capital cost and the return
for taking risk. The return for taking this risk was called entrepreneurship and
had been investigated by philosophers such as David Hume (1752) and David
Ricardo of the Mercantile Trading school.
Adam Smith's “invisible hand of capitalism” became the fundamentalist,
unproven doctrine of American commerce and social structure as observed by
de Tocqueville. Smith's book The Wealth of Nations became America's bible of
business and philosophy. Unfortunately for many people in society, Smith
categorised certain occupations as unproductive services. He included the
Sovereign along with “churchmen, lawyers, physicians, men of letters of all
kinds, players, buffoons, musicians, and opera singers”.
Jean-Baptiste Say (1803) reasoned that production was the creation of utility
rather than the creation of matter or the growing of something new. For
example, a sword is still only iron ore, so no matter has been created. Therefore, he held, the human labour services of churchmen, lawyers, physicians
etc. as well as everyone else are intangible products consumed at the time of
production. He developed his now famous Say's law that “Production generates
an equivalent demand that in turn generates employment in production”.
Frederic Bastiat's Essays on Political Economy (1848) quickly swept forward
with Mill's concepts to suggest that the value of a man's services is quite
independent of any tangible goods and furthermore is not just an attribute of
tangible goods as Say and Mill still accepted.
America's number one ideological enemy was Karl Marx, who maintained in his book Das Kapital (1867) that the specialisation of labour would remove the ownership of production from individuals and introduce monotony, thereby depriving individuals of happiness in producing.iii To Marx, the deterministic
corollary of his theory of dialectical materialism would be that labour would
choose to move away from organisations employing specialisation to self-
producing communities. Violent revolutions occurred in his name in Russia and
China, although Marx did not specifically advocate such violence.
Implicit in Keynes' theories were two key arguments that upset classical
economists. The first was that the simplicity of classical economics could not
cope with economic cycles. The second was that an almost total lack of
government business regulation through the 1920s directly contributed to the
excesses of the decade, the 1929 Wall Street crash and the ensuing
Depression. At the time, as in 2009, many people lost confidence in the ability
of classical and neoclassical economics to predict or to fix the market failures.
contribute …. The allocation branch [of government], for example, is
to keep the price system workably competitive and to prevent the
formation of unreasonable market power …. and correcting, say by
suitable taxes and subsidies and by changes in the definition of
property rights, the more obvious departures from efficiency caused
by the failure of prices to measure accurately social benefits and
costs … A competitive price system gives no consideration to needs
and therefore it cannot be the sole device of distribution …. It is
clear that the justice of distributive shares depends upon the
background institutions and how they allocate total income, wages
and other income plus transfers. There is with reason strong
objection to the competitive determination of total income, since this
ignores the claims of need and an appropriate standard of life ….
But once a suitable minimum is provided by transfers, it may be
perfectly fair that the rest of total income be settled by the price
system, assuming that it is moderately efficient and free from
monopolistic restrictions, and unreasonable externalities have been
eliminated.
Americans have machinated over the potent challenges from Marx, Keynes and Rawls. In most cases, America has not responded with action but has used the challenges to strengthen the defence of its core value system: for example, the defence of American democracy and capitalism, the unified concept of human existence and service value, and minimal government regulation.
economist Martin Lueck comments “If you draw a line dividing the winners and
losers [of the past 20 years], it is not between US or UK economic systems and
Europe's, but rather the owners of capital vs. the owners of work. The losers
are the owners of work in all parts of the world, particularly Western countries.
The winners have been the owners of capital” (Herbst 2009).
It may be noted in the illustrations below that the Australian income share of
labour is significantly less than international benchmarks (Krämer 2008).
The phenomenon of ordinary people financing their current expenditure from debt instead of income has led economists to conclude that this effect was one of the largest contributors to the 2008 global financial crisis.
Peter Self, a trenchant critic of the American market system, summarises in his
book Rolling Back the Market (Self 2000, pp.xi, 6 & 12):
individuals in Anglo-American societies have accepted the need to work hard,
notwithstanding their scepticism about the workplace being the font of
happiness. This resigned perseverance is known as the Protestant Work Ethic.
Max Weber, another important German philosopher, took this idea forward into
organisations, arguing that power structures take precedence over structures
of authority (Weber 1904). Uncloaked from its Aristotelian ideology of work as
a place of virtue, the workplace began to be perceived as a place of power
struggles.
freedom and that this is both the greatest prize and greatest burden of man:
“Man is nothing else but that which he makes of himself” (Sartre 1946).
The onerous task of making decisions and being fully accountable for the outcome can be a lonely pursuit, because each person must decide individually and live with the results. At times when the very foundation of existence is challenged, humans usually begin to contemplate their own finite mortality. Sartre writes of this time when our values are disturbed (Sartre 1946, Chapter 4):
specific human nature; in other words, there is no determinism--man
is free, man is freedom. Nor, on the other hand, if God does not
exist, are we provided with any values or commands that could
legitimise our behaviour. Thus we have neither behind us, nor before
us in a luminous realm of values, any means of justification or
excuse. We are left alone, without excuse. That is what I mean when
I say that man is condemned to be free. Condemned, because he did
not create himself; yet is nevertheless at liberty, and from the
moment that he is thrown into this world he is responsible for
everything he does.
Perhaps the greatest anxiety in life comes from loneliness and the realisation
that one's assumptions are invalid. Sartre dealt with the anxiety of confronting
emptiness in his first novel Nausea (1938). He later explained the experience
of anxiety in an essay, The Look (1992, p.347), as follows:
Free choices are always prey to one's sense of angst and anxiety. The loneliness and magnitude of the tension between freedom and responsibility often lead to despair. Sartre argues that, for the most part, a person's decisions are made in mauvaise foi (self-deception or bad faith) and lack authenticity because of angst, an irrational anxiety over a perceived need for security. Therefore, we give in and exchange our authenticity for things like belongingness.
Sartre's solution is twofold: firstly, to simply accept the situation that existence
is absurd because there is no “big picture” that gives it meaning; secondly, to
get on with life and be as authentic as possible to oneself in choices. He says
“Man’s task in life is to authenticate his existence. Approach your existence
creatively, and do something with it.”
Sartre chose to balance his personal life on the fulcrum of disruption, rather
than succumb to a conventional life. He was convinced that to be conventional
was bad faith. Sartre believed his only authentic choice was to remain in the
state of uncertainty, perpetually at the point where a man was not only free but
conscious of his total freedom.
Another influential existential philosopher, Albert Camus, won the 1957 Nobel
Prize in Literature at the age of 44.viii Camus claimed to have found meaning
within himself as a great outcome from a bleak and stressful experience. He
wrote of it in his essay Return to Tipasa: “In the depth of winter, I finally
learned that within me there lay an invincible summer” (Camus 1952).
The gods had condemned Sisyphus to ceaselessly rolling a rock to
the top of a mountain, whence the stone would fall back of its own
weight. They had thought with some reason that there is no more
dreadful punishment than futile and hopeless labour …. You have
already grasped that Sisyphus is the absurd hero. He is, as much
through his passions as through his torture. His scorn of the gods,
his hatred of death, and his passion for life won him that
unspeakable penalty in which the whole being is exerted toward
accomplishing nothing. This is the price that must be paid for the
passions of this earth …. The workman of today works everyday in
his life at the same tasks, and his fate is no less absurd …. One does
not discover the absurd without being tempted to write a manual of
happiness. "What!---by such narrow ways--?" There is but one world,
however. Happiness and the absurd are two sons of the same earth.
They are inseparable …. One must imagine Sisyphus happy.
If you aren't fully responsible for your own acts, if you can say, “I am
as I am because my parents maltreated me; I am as I am because
the nature of the universe is such”, and then you put the
responsibility on the back of the universe and shuttle it off your own.
And people don't want to be all alone, lonely persons responsible for
their own actions, they want some justification of what they do from
the nature of something greater, more stable in a way than
themselves. And people can do all sorts of things in the name of
history, in the name of progress, in the name of “my class”, in the
name of the church, which they might hesitate to do if it was
entirely up to them individually.
In a perceptive reflection on the human condition, Harrison (2008 pp.111-2)
observes that Aristotle's virtues are but a single facet of a multidimensional
moral paradigm. He writes “vices are every bit as cultivatable as virtues. The
cultivation of envy, spite, pride, greed can be taken to exquisite levels. But this
does not transform those vices into virtues; on the contrary, by submitting
them to extremely regimented rules and protocols, it gives them a style that
renders them more sublime while leaving their vicious essence intact.”
Indeed, virtues are the least part of this moral paradigm. Drawing on his unique perspectives from Italian Medieval and Renaissance literature, Harrison concludes that the Western human condition is fundamentally restless and disconsonant (pp.151-8). He compares the hero knights of Ludovico Ariosto's Orlando Furioso (1516) to the pilgrims of Dante's Divine Comedy (circa 1310), suggesting that the existential boredom and aimless path of the former characterise the modern Western journey:
lead to a constant search for diversion, a constant “turning-away”
from oneself … Orlando goes on to commit a mindless devastation of
what others have carefully cultivated, laying waste to farmers'
fields, the well-husbanded countryside, the quiet forests and rivers.
He particularly directs his rage against gardeners and shepherds.
This nihilistic vortex of pathological agitation and ravaging
destruction is the hero of the age of which he is the harbinger.
Herein lies the knights quintessential and even contemporary
modernity, for this is precisely the spiritual condition of the age
today: driven and aimless, we are under the compulsion of an
unmastered will to destroy whatever lies in our way, even though we
have no idea where the way leads or what its end point may be.
When Orlando roams with such unpredictable intent, the safety of society is compromised. In order to be happy, a society needs shared attitudes and sanctions that promote trust. Weiner (2008, pp.234-6 & 405) found that the
deeper the trust ethos in society, the happier the society reports that it is:
Aristotle said more or less "Happiness is your state of mind and the
way you pursue that state of mind." How we pursue the goal of
happiness matters at least as much, perhaps more, than the goal
itself. The means and the end are the same. A virtuous life and a
happy life are the same thing. … Nietzsche says that a society
cannot avoid pain and suffering but the measure of a society is how
well it transforms this pain and suffering into something worthwhile
…. Trust - or to be more precise, a lack of trust - is why Moldova is
such an unhappy land .... Moldovians don't trust the products they
buy at the supermarket .... they don't trust their neighbours .... they
don't even trust their family members. ... For years, political
scientists assumed that people living under democracies were
happier than those living under any other form of government ... but
the collapse of the Soviet Union changed all that. Most (although
certainly not all) of these newly independent nations emerged as
quasi-democracies. Yet happiness levels did not rise. In some
countries they declined, and today the former Soviet republics are,
overall, the least happiest places on the planet .... It is not that
democracy makes people happy but rather that happy people are
much more likely to establish a democracy .... The institutions are
less important than the culture. And what are the cultural
ingredients necessary for democracy to take root? Trust and
tolerance. Not only trust of those inside your group - family, for
instance - but external trust. Trust of strangers. Trust of your
opponents, your enemies, even. That way you feel you can gamble
on other people ….Money matters, but less than we think and not in
the way we think. Family is important. So are friends.
American psychologists Aaron Beck and Albert Ellis are each credited with independently originating Cognitive Behavioural Therapy to assist individuals in dealing with distress about the vicissitudes of life, such as Harrison's
unpredictable Orlando and Sartre's angst. The therapy seeks to reorientate an
individual's thinking toward recognising and controlling their own
demandingness about needing happiness, authenticity and an environment of
trust, mutual care and peace. Over many decades the therapy has been very
successful in its objective of helping individuals think about their own thinking
and make accountable choices to move away from unremitting stressors.
Cognitive Behavioural Therapy is now highly influential and even the dominant
form of psychological therapy. It draws upon the same fundamental concepts
as does existentialism, for example, the unquestioned existential statement of
existence that “I am”. However, a key difference from existential philosophy
and therapy is that Cognitive Behavioural Therapy specifically circumvents the
major imponderables of life, such as whether life has meaning, if there is a God
etc., as answers are unlikely to be forthcoming. Existentialism emphatically
maintains that the answer to each of these questions is “no”. Cognitive
Behavioural Therapy also avoids Aristotle's idea that people find their
happiness solely or principally in work, or that human value or happiness can
be measured by work in any intrinsic way.
Americans value leaders with proactive plans and an inner impetus or passion
to move forward. The American proclivity for actions over words is legendary.
Nike Inc. registered the ubiquitous slogan “just do it” as a trademark. In his
inauguration speech, President Obama sought to motivate Americans and draw the nation together using the mantra of the cartoon character Bob the Builder, “Yes we can”.
American business practices and character traits surprise Europeans who tend
to be more methodical. For example, Americans prefer “learning by doing” to
extended planning and specification. They have a greater respect for doing
than thinking, or action over words. This is sometimes expressed as the tracer
bullet strategy “ready, fire, aim”. For Americans, tracer bullets are cheap so
the best way of locating a target is just to start shooting. Feedback
mechanisms quickly correct mistakes to provide the way forward.
In business, this means that Americans prefer projects with small investments,
very short payback periods and near term exit strategies. When starting a
project they look to do a “half, not half-assed” job (37 signals 2006, p.48). It
also means that instead of planning a comprehensive project that will provide
for contingencies and future growth, they prefer to limit a project to the
smallest essential element that will just satisfy current requirements. If
expansion is required, then it can be done as another project in the future. This
maximises value by creating “real options” for future stages. However, it can
also result in band-aid policies, shabby urban architecture and massive
cumulative liabilities for infrastructure refurbishment.x
Perhaps above all, Americans respect leaders who develop bold strategies and
have the charismatic personality to carry them forward. They share the
reverence given to Homer's heroes and also the forgiveness given to the often
misplaced, reckless and capricious acts of the Greek Gods. However, these old
legends do not shape their future.
Each individual is free to choose their own path. It is a truism that each person
learns through their own mistakes. The French novelist Marcel Proust neatly
expressed this concept in Remembrance of Things Past, Volume II Within a
Budding Grove and Chapter IV Seascape, with a Frieze of Girls (1913) where
he writes “We don't receive wisdom; we must discover it for ourselves after a
journey that no one can take for us or spare us.”xi
Open and closed institutional philosophies
It was not only personal psychology and philosophies that were emerging from the tyranny of top-down paradigms. There was equivalent friction taking place in institutions, between the European tradition of open establishment groups in science and American closed groups.
The clash between Popper and Kuhn is not about a mere technical
point in epistemology. It concerns our central intellectual values,
and has implications not only for theoretical physics but also for the
underdeveloped social sciences and even moral and political
philosophy.
Thomas Kuhn
It may take ten or even thirty years or more for a new paradigm to be accepted in the scientific community. Perhaps the person who had the original idea will not even be alive to see its fruition. Kuhn found that it was wise not to raise one's head before the time had come, or else the person might be forever tarred as a failure because of the idea (whether or not it subsequently turned out to be a better theory), and many times a brilliant career could die with the idea.
It may even require generational change over twenty or thirty years to bring in new people who are able to make the needed changes because they do not have their careers and reputations invested in the old paradigm. This personal interest factor is known as an “agency conflict” and arises from what Nietzsche identified as the “will to power” (discussed above). All in all, it is expected that the failures will be quietly corrected over time and that there is no hurry, because institutions have plenty of time.
Of course, one may well be amazed that a new scientific idea could be placed in the same category as an injustice or a fraud. Kuhn correctly identifies that the establishment's ferocious defence leads to changes in the scientific paradigm coming in waves, rather than linearly. It is thought that the paradigm will naturally switch with new circumstances in the organisation.
Fortune magazine editor William H. Whyte Jr. was on the same track in his
book The Organization Man (1956). He identified a puzzling dichotomy
between conformity and individualism in American 1950s society. Whyte found
that corporation men willingly subordinated themselves to unquestioned cooperation in
exchange for the security of belongingness. They were prepared to become
“yes men,” leaving their personalities at the door as they entered the office or
factory.
Fifty years later, Ehrenhal wrote of the impact of Whyte's book: “By the
following spring, it was hard to find a college commencement speaker who
didn’t devote his remarks to the conformity crisis and its implications. “We
hope for nonconformists among you,” the theologian Paul Tillich told one
audience of graduates, “for your sake, for the sake of the nation, and for the
sake of humanity.” The president of Yale, A. Whitney Griswold, talked about a
“nightmare picture of a whole nation of yes men” (Ehrenhal 2006).
Fuller (2003, p.129) argues that Kuhn's findings arise from the three forms of
authority created by Roman Law, which operated until the twelfth century:
Gens, the transmission of the family status and wealth across generations;
Socius, goal based ventures such as business activities and military
expeditions, which were seen as temporary organisations for specific purposes;
Universitas, the enduring public service corporations of craft guilds,
universities, religious orders and city-states.
The important character of Universitas was that it gave certain groups niche
monopolies to perpetually decide what constitutes a worthy pursuit and who is
qualified to pursue it. These organisations are now the institutions of society.
Fuller (p. 46) notes of Kuhn's findings that the public institutions which manage
science are “A politically social formation that combined qualities of the Mafia,
a royal dynasty and a religious order. It lacked the constitutional safeguards
that we take for granted in modern democracies that regularly force politicians
to be accountable to more people than just themselves.”
Any organisational failures are explained away on the grounds that leaders need to take risks, and it is argued that, in the absence of a pattern of fraud, they should be protected or indemnified from the consequences of those risks. This is arguably one of the two key reasons for the slow and difficult implementation of corporate accountability and systems of corporate governance, both in the boardroom and at the level of government.
Popper maintains that the best theory is the one that has withstood the
greatest number of falsification attempts. According to Fuller (pp. 24-5) he
departs from the logical positivists on this very point: Popper requires that
logic be used to challenge rather than bolster scientific authority.
Of course, this is diametrically opposed to Kuhn's finding that scientific
institutions, far from submitting theories to falsification, go to extraordinary
lengths to defend their theories against falsification. Also, in the real world the
number of confirmations of success is regarded as more important than the
number of times a theory has failed or even survived falsification. For example,
an Australian Court of Law will accept widely used rules of thumb as
compelling evidence.
The approaches of Popper and Kuhn have been presented as being completely
opposed. However, in the 1965 debate, Popper readily accepted that Kuhn's
approach best described the way organisations operate and how science advances in waves. Nevertheless, he argued that it is an inferior system that should be replaced by critical thinking: proactively falsifying theories, passionately offering new ideas for peer review, and accepting constructive criticism in return. Furthermore, new ideas may die, but the careers of the people who hold them should not. Indeed, an individual is even to be respected for sensibly moving on to new and, hopefully, better ideas.
Ironically, this criticism turns Popper's favourite quote from Xenophanes of Colophon (570 – 480 BCE) against him:xiii
Nor yet of all things of which I speak.
For even if by chance he were to utter
The final truth, he would himself not know it:
For all is but a woven web of guesses.
Even more ironically for Popper, Kuhn's empirical research finding that in
practice science doesn't proceed by falsification became widely accepted as a
test that falsified Popper's theory. The highly regarded anarchist philosopher,
Feyerabend (1975), one of Popper's greatest critics, concluded that the
theories of both Popper and Kuhn had failed and this left only the pluralist
approach of “anything goes” in Science.
Perhaps this is because Popper's World 3 breaks the simple Cartesian dualism
of matter and soulxiv and demands an answer to the old phenomenological
chestnut “Does a tree make a noise when it falls in a forest and there is no-one
to hear it?”
The following discussion of an instance where Popper's world view has been
implemented allows conclusions to be drawn on Popper's influence from the
boardroom to international democracy.
influenced by Solon (594 BCE). However, it is best known for its golden age
under the leader Pericles (c. 495 BCE - 429 BCE). The Athenian democracy
existed for at least 186 years from the time of Cleisthenes (508 BCE) until its
suppression by the Macedonians in 322 BCE.
Fuller (p. 105) notes that: “Athens expected its citizens to speak their minds.
Indeed, failing to speak was worse than failing to persuade.” Not only was
criticism encouraged in Athens, it was actively demanded to protect society
from political capriciousness, or stasis. This is the Athenian term for the
agency conflict between the public interest of politicians and officials in
positions of authority and their private interests of staying in power and
enriching themselves through their position. The duty of public criticism was
designed to empower citizens and remove the mythology, superstition and
institutionalised dogma that accompanies Platonic stratification of knowledge
and authority in a society.
Following Bergson (1932), Popper uses the term open society to describe a
classical democracy that is predicated on debate, accountability and the
testing of ideas (Popper 1945). Fuller (p. 160) explains that Popper was
particularly concerned with what is nowadays called the “spiral of silence.”
This is the tendency in democratic societies for politicians to allow public
opinion to drift towards a minority position that has repeated exposure and
little formal opposition and for a culture of self-censorship to develop amongst
scientists, journalists and bureaucrats in order to avoid career victimisation.
positivists at the core of America's post World War II big-science phase and had
the alarming potential to cross the line from criticism of that pragmatist approach to nihilism, Nietzsche's term for a state without meaning, purpose or value.
Agency conflict
Agency conflict and corporate excess in the 1980s became the first indication
that something was really wrong with post World War II Anglo-American
capitalism. Perhaps the major deficiency in an elementary paradigm of
competitive markets is that of principal-agent conflict. Much of the regulation of markets has focused on the issues between shareholders and directors, and between directors and management.xv
The need for specific Corporate Governance regulations arises because
directors do not have a legal responsibility to individual stakeholders such as
shareholders, creditors or employees.xvi Prior to the need for Corporate
Governance being recognised, much of directors' duties regulation was merely
to ensure that directors carried out their fiduciary duties honestly and in good
faith for the benefit of the shareholders as a whole. For example, a fiduciary
duty was described by the House of Lords in Aberdeen Railway Co v Blaikie Bros. (1854, 1 Macq 461) as “A duty to act with fidelity and trust to another, to
act honestly, in good faith and to the best of one’s ability in the interest of the
company.” This simple fiduciary duty leads to imperfect accountability of
directors and managers, which has been exploited in every possible way.
Based on Nietzsche's analysis (above) we might expect that directors and chief
executive officers of companies and organisations would be reluctant to see
demands for accountability and governance impact on their personal “will to
power.”
Agency conflict is obviously a very big opportunity space for directors and
managers. They would prefer to leave it unresolved and flexible for
exploitation. For example, Duffner (2003, p.34) notes that an agent has the
opportunity to maximise their own utility, utilising better information about the business, and perhaps to set aside obligations such as contracts, laws and moral
standards. Kaplan & Stromberg (2004) emphasise that principal-agent
conflicts are ever present due to these information asymmetries.
However, these Codes of Conduct could only provide unenforceable statements of good intent, and the mission was flawed from the outset because the social mandate extended by society was such a nebulous concept. In addition, forfeiture of the mandate to operate is such a grave step that it has rarely been invoked. Therefore, directors and managers nodded in due deference to their vague accountability and unenforceable obligations, confident that in practice all these lofty principles would be subject to considerable interpretation in ambiguous situations. As a result, Codes of Conduct did little to address moral hazard, and the exploitation of company positions for personal advantage continued unabated as a major ethical problem.
In the late 1990s and early 2000s, a number of prominent American and
Australian companies began to fail after Corporate Governance abuse. More
than any other example, the American company Enron showed what happens
when Corporate Governance goes awry. Bala Dharan, Professor of Accounting
at Rice University, noted in his testimony to the American House of
Representatives Committee on Energy and Commerce that the Enron debacle
will rank as one of the largest securities fraud cases in history. He noted that
many people were confused as to how this tragedy could have happened while
the company’s management, board of directors and outside auditors were
supposedly watching out for employees and investors. Dharan testified
(Dharan 2002):
board of directors and its audit committee, and compromised
independence in the attestation of financial statements by external
auditor.
With the HIH Royal Commission underway, the Australian Stock Exchange's
Corporate Governance Council acted to address Corporate Governance. Sadly,
under pressure from a politically conservative government, it introduced an
undemanding and predominantly voluntary set of Corporate Governance
Guidelines (ASX Corporate Governance Council 2003; 2005; 2006; 2006;
2007). The Committee persevered with the now defunct assumption that
directors and managers would step-up to their responsibilities out of
“enlightened self-interest”. Perhaps predictably, the Governance Council's naive assumption was to be embarrassed by a lack of bona fide commitment: “Overall, the quality of exception reporting in 2004 annual reports was lower
than expected. Motherhood statements were commonly used, providing
insufficient disclosure to investors” (ASX Corporate Governance Council 2005).
One might be forgiven for assuming that the moral hazard had finally been
addressed by tough Corporate Governance rules across Anglo-American
economies. Unfortunately, this is not the case.
Daily, Dalton & Cannella (2003, p.371) found that Corporate Governance has
degenerated into a set of check-the-box requirements that do not meet
expectations of bona fide behaviour change: “The field of corporate
governance is at a cross roads. Our knowledge of what we know about the
efficacy of corporate governance mechanisms is rivalled by what we do not
know.”
In What Makes Great Boards Great, Sonnenfeld characterises the ingredient that
continued to be missing as “the human side of governance” (Sonnenfeld 2004,
p.109). Two years earlier, he had concluded that the key to strong performance
is a social attitude of accountability rather than the compliance on which the focus had been placed to date: “So if following good-governance regulatory recipes
doesn’t produce good boards, what does? The key isn’t structural, it’s social.
The most involved, diligent, value-added boards may or may not follow every
recommendation in the good-governance handbook. What distinguishes
exemplary boards is that they are robust, effective social systems”
(Sonnenfeld 2002).
However, Petre noted that exceptional performance is still rare and corporate
culture has a complex balance of contributing factors, which means it is always at risk of subversive behaviour:
Excessive speculation in markets
While business practices had been out of control, attention had not been
focused on the rampant speculation occurring in commodity markets. By 2008,
American investment practices had become a major issue in world commodity
prices, particularly oil and food prices.
The fundamental dichotomy in commodity markets is the need for liquidity, and therefore for speculators. The contrary view of the hundreds of billions of dollars that flowed into the commodity markets from 2000 to 2007 is that, without this capital, liquidity would have been far lower and prices may have been far higher and more volatile than they are now.
On the other hand, too much money causing excessive speculation leads to
massive bubbles in the price of basic commodities, which hurts ordinary people
and the economy. Commodity market regulations to prevent excessive speculation were withdrawn in the final year of the George H. W. Bush Administration.
Measures to curb excessive speculation range from outright bans to raising the
capital requirements for futures trades. For example, in America, futures
trading in onions has been banned since 1958. A Congressional report at the
time stated “Speculative activity in the futures markets causes such severe and
unwarranted fluctuations in the price of cash onions as to require complete
prohibition of onion futures trading in order to assure the orderly flow of
onions in interstate commerce.”
In July 2008, just before the global financial crisis, Congress contemplated
raising margin requirements. Following World War II, President Harry Truman
had raised the deposit on margin trades to an unprecedented level of 33% of
the contract value saying “The cost of living in this country must not be a
football to be kicked about by gamblers.” However, increasing margin
requirements may not be effective since prices do not appear to be affected by
margin requirements, although volume of contracts certainly is.
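The leverage arithmetic behind this observation can be made concrete. The short Python sketch below is purely illustrative: the pool of speculative capital and the margin rates are hypothetical figures chosen for exposition, not data from the sources cited in this chapter. It simply shows how sharply the notional volume of contracts a given amount of capital can control falls as the deposit requirement rises.

```python
# Illustrative only: how a margin (deposit) requirement limits the notional
# contract value that a fixed pool of speculative capital can control.
def max_notional(capital, margin_rate):
    """Largest contract value controllable when each contract requires
    margin_rate (a fraction of contract value) to be deposited."""
    return capital / margin_rate

if __name__ == "__main__":
    capital = 1_000_000  # hypothetical speculative capital (dollars)
    for rate in (0.05, 0.10, 0.33):  # 33% echoes the Truman-era deposit level
        print(f"margin {rate:4.0%}: controls about "
              f"${max_notional(capital, rate):,.0f} of contracts")
```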
Another bias in American markets is the well-known “Enron loophole,” also
called the “investment bank loophole”. Speculative investors such as
commodity index funds can dramatically increase the size of their commodity
bet in excess of normal limits by working with an investment bank. Operating in a back-to-back way, both the investor and the investment bank avoid regulation. For example, the investment bank sells a swap to the commodity index fund for a commodity like corn. The investment bank then hedges the swap, many times over, in the commodity futures market. This technique circumvents the position limits intended to restrain speculators.
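A minimal sketch of this back-to-back arrangement follows. The position limit, the contract counts and the simple comparison of “counted” against “true” exposure are assumptions made for illustration only; the sketch does not describe any actual exchange rule, merely the accounting gap the loophole exploits.

```python
# Illustrative only: exposure taken directly in futures is tested against a
# speculative position limit, while exposure routed through a swap with an
# investment bank (which hedges in the futures market itself) is not.
SPECULATIVE_POSITION_LIMIT = 10_000  # hypothetical cap, in contracts

def fund_exposure(direct_contracts, swap_contracts_via_bank):
    counted = min(direct_contracts, SPECULATIVE_POSITION_LIMIT)
    total = direct_contracts + swap_contracts_via_bank
    return counted, total

if __name__ == "__main__":
    counted, total = fund_exposure(direct_contracts=10_000,
                                   swap_contracts_via_bank=40_000)
    print(f"counted against the limit: {counted:,} contracts; "
          f"true economic exposure: {total:,} contracts")
```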
In addition to this new speculation that tilts the market toward higher prices,
there is always secret and collusive trading activity to produce illegal profits.
These things are often below the radar of the regulators due to the massive
volumes and sizes of transactions in the commodity market. However, major
commodity market manipulation scandals have become public, such as J. R.
Simplot's fixing of the Maine potato market, William Herbert Hunt's 1979
manipulation of the silver market to over US$50 per ounce before it collapsed
to US$10.80, Enron in the California energy market and British Petroleum's
2007 settlement of charges that it rigged the propane market.
The immensity of the hedge fund capital in a relatively small market has led to
a change in the nature of speculators from commodity traders and facilitators
to massive financial betting institutions. These financial institutions have at
their disposal highly sophisticated techniques to achieve extraordinary profits
from price volatility. For example, they are able to react faster to new
information than everybody else. It is a zero-sum game. In aggregate terms,
the profit taken by speculators is a major loss to the market.
Two main issues seem to have been highlighted by the failure of the
commodities market due to excessive speculation. The first is that the
continued existence of loopholes allowing massive speculation is inappropriate.
A free economy needs to facilitate speculation in a way that is pro-public
interest. Regulating markets at the point of market failure is not incompatible
with being pro-market. Secondly, in the end the market is just a mechanism
and is not a policy instrument. The market can only operate successfully in the
public interest where government provides a strong sustainable policy for the
commodities.
Sub-prime crisis
In October 2008, a decade of American consumers bingeing on Chinese imports and gorging on Middle East oil came to a shuddering end, as Americans could no longer borrow to fund their current consumption.xviii
According to Bacevich, the roots of the sub-prime credit crisis lie in the Reagan era (Bacevich 2008, p.36):
The Federal Reserve's low interest rates and vastly over-expanded money
supply fuelled a boom in property lending. Investment bankers who used every
avenue of unfettered financial innovation to maximise profits supercharged the
already huge volume of risky debt. One major innovation was bundling
mortgages into new unregulated collateralised debt obligations (CDOs). These
securitised debt products were sold to other banks, superannuation funds and
overseas investors. In this new model of business, banks transformed from
boring mortgage lenders into fee-for-service earners.
The new role of banks was to originate mortgages through their sales channels
and sell these mortgages in parcels, taking a fee for the transaction. Parcels of
mortgages had mixed credit quality, just as De Beers has traditionally sold parcels of diamonds of variable quality. Ultimately, banks
ceased focusing on the credit quality of the mortgages in the parcel, which was
seen as a mortgage insurer risk. In America, Fannie Mae, Freddie Mac and
investment banks provided trillions of dollars of mortgages. AIG insured many
trillions of dollars of these loans against credit risk default. Then the sub-prime
mortgage crisis began as Bear Stearns failed on 13 March 2008. xix
The American Government had begun to believe in its own illusion that there
could be a new world economic order having growth without savings. This
became accepted in many advanced economies such as Australia. A compliant
American government and Federal Reserve became very confident and
permitted high risk lending to borrowers with doubtful credit histories. One of
the now infamous acronyms for such lending was the “NINJA loan,” a loan to people with “No Income, No Job, No Assets”. In America, Australia and the United Kingdom, there were regular advertisements offering 110% mortgages and urging existing home owners to withdraw the equity created by the rise in the value of their house to spend on a car, boat or holiday.
Buyers with easy money chased properties so house prices began rising in
2000. Continued appreciation of house prices ensured attractive returns for all
involved. In concert, equity prices continued their bull run as the bellboys (people who heard rumours in the lift) were making big money. Wise heads
knew that this meant a recession but the concerted and massive economic
stimulus meant that the recession just didn't come.
In 2006, China burst onto the world stage, supplying huge volumes of cheap
capital goods to America and the world. This supercharged the already
overheated equity and commodity markets.
In June 2004, the American Federal Reserve became very concerned about
runaway inflation. It began to increase interest rates from 1% to 5.25% by June
2006. This discouraged investors who stopped investing in new mortgage loans
and led to a build up in unsold homes. The oversupply of houses led to a steep
collapse in prices from their peak in 2006. By November 2008, American
metropolitan city house prices had fallen approximately 25%.
Moral hazard
The 2008 sub-prime debt crisis was a watershed in attitudes and a turning
point in history. Given the topicality of this section, perhaps it could have been placed at the start of the chapter rather than here, in its linear place as part of the development of shared American attitudes.
Over the past 200 years the USA has endured frequent recessions
accompanied by asset bubbles and banking failures.xxi Yet the 2008 recession has been special because America forfeited much more than its global industrial competitiveness and accumulated a huge foreign debt. As Paul Krugman, winner of the 2008 Nobel Prize in Economics “for his analysis of trade patterns and location of economic activity,” writes: “The financial crisis
has had many costs. And one of those costs is the damage to America’s
reputation, an asset we’ve lost just when we, and the world, need it most”
(Krugman 2009b).
Having abused and subsequently forfeited its most precious asset of all, its reputation, America lost its preeminent position as the leader of the Western world. For at least thirty years, American national arrogance in being different and exceptional, led by God, and above the law, justified increasing hubris and
led to burgeoning moral hazard. The President of France, Nicolas Sarkozy,
concisely summarised the issue as “This crisis is not the crisis of capitalism. On
the contrary, it is the crisis of a system that has drifted away from the most
fundamental values of capitalism. It is the crisis of a system that drove
financial operators to be increasingly reckless in the risks they took, that
allowed banks to speculate instead of doing their proper business of funding
growth in the economy; a system, lastly, that tolerated a complete lack of
control over the activities of so many financial players and markets” (Sarkozy
2009).
However, in America, the moral hazard that led to the economic collapse of
2009 was not confined to any single sector or to consumers. Americans were engaged in it root and branch, as a people and as a nation, domestically and internationally. Every institution of government, military, business and finance, not excluding the Federal Reserve, was involved and culpable. The pervasiveness of moral hazard in government was further compounded by national and organisational psychopathy, deceit and agency conflict.
Other Anglo-American countries such as the United Kingdom and Australia had
enjoyed a goldilocks decade of abundance rooted in America's intoxication with
consumption. These Anglo-American nations enthusiastically followed America
down the path of moral hazard. In July 2007, the new Governor of Australia's
Reserve Bank stated of the Australian economy (Stevens 2007, pp.3-4):
growth. Long-term interest rates are not far above their 50-year
lows of a few years ago, even though short-term rates have risen in
most countries to be much closer to normal levels, the main
exception being Japan. Share prices have been rising steadily,
appetite for risk is strong, and volatility in prices for financial
instruments has been remarkably subdued. To some extent, these
trends in financial pricing may well reflect a genuine decline in
some dimensions of underlying risk. Variability in economic activity,
and in inflation and interest rates, has clearly diminished over the
past 15 years in a number of countries, including Australia …. The
associated prolonged period of attractive, steady returns on equity
investment and low cost of long-term debt funding certainly seems
to have set the stage for a return to somewhat higher leverage in
the corporate sector. This is most prominent in the rise in merger
and acquisition activity and the re-emergence of leveraged buyouts
around the world. Corporate leverage had been unusually low after
the excesses of the 1980s, so some increase is probably manageable.
Nonetheless, after more than a decade in which the main action in
many countries has been in household balance sheets, this trend in
corporate leverage will bear watching. For the time being, at any
rate, financial conditions are providing ample support for both
corporate investment and household spending around the world.
America is arguably facing its greatest ever challenge. Ironically, its bailouts and budget deficits have been funded from China's foreign reserves and by the children and grandchildren of current American consumers. This has
confronted the undisputed dogmas of the market system: “The prevailing
market system is supported by a very influential set of economic dogmas which
have come to occupy a dominant place in the lives of modern societies. These
include the high importance attached to market-led economic growth; the
value of complete free trade in money and capital as well as in goods and
services; the need to subordinate social welfare to market requirements; the
belief in cutting down or privatising government functions; the acceptability of
profit as a test of economic welfare; and others as well” (Self 2000, p.ix).
At the 2009 G-20 London Summit, President Barack Obama quietly took
responsibility for the world's economic crisis (Hujer et al. 2009):
century. By doing so, he has admitted that one of the excesses of the
American way of life -- the insatiable craving for huge profits -- has
brought the world to the brink of disaster. The others may have
played their part, but the origins lie in the US. The fact that Obama
has now admitted this sends a strong signal of hope to the world,
perhaps the strongest to emerge from the G-20 summit in London
last Wednesday and Thursday. Such an admission could begin to
pave the way towards rectifying the situation.
enfeebled by the Gordian knot of peak energy prices, planetary
overheating and global debt.
The financial crisis has also led David Brooks to the conclusion that neoclassical models are overly linear and rational, lacking psychological dimensions. He
writes (Brooks 2008):
careful, rational actors who make optimal decisions. There was little
allowance made for the frailty of the decision-making process, let
alone the mass delusions that led to the current crack-up ….
Democrats also have an unfaced crisis. Democratic discussions of
the stimulus package also rest on a mechanical, dehumanized view
of the economy. You pump in a certain amount of money and “the
economy” spits out a certain number of jobs …. But an economy is a
society of trust and faith .... This recession was caused by deep
imbalances and is propelled by a cascade of fundamental
insecurities …. The economic spirit of a people cannot be
manipulated in as simple-minded a fashion as the Keynesian
mechanists imagine …. Mechanistic thinkers on the right and left
pose as rigorous empiricists. But empiricism built on an inaccurate
view of human nature is just a prison.
Brooks has not dug down to the bedrock of the American economic paradigm
founded on Aristotle's analysis of human happiness. Nevertheless, his
questioning of the existing models is poignant for a number of additional
reasons. The most important of these is that models based on consumption
growth as society's main goal do not react well in low or volatile growth
situations.
are always completely correct. However, policy makers seek confirmation of
feasibility from modellers to improve the probability that their policy will be
feasible, not seer-like predictions of the future and iron-clad guarantees of
policy outcomes. Policy makers are well aware that the future will unfold quite
differently to that forecast in economic models. This is why policy makers
chuckle in good humour at John Kenneth Galbraith's quip that “economists
were invented to give fortune tellers a good name.”
safeguards in the faith that markets will solve all problems …. flaws-
and-frictions economics will move from the periphery of economic
analysis to its center …. they'll have to do their best to incorporate
the realities of finance into macroeconomics …. It will be a long
time, if ever, before the new, more realistic approaches to finance
and macroeconomics offer the same kind of clarity, completeness
and sheer beauty that characterizes the full neoclassical approach.
Despite deft oratory from President Obama and such profound reflection
amongst economists and policy makers, it seems that lessons may not have
been learned from the financial crisis. President Obama's most senior economic adviser, former Harvard University President Lawrence Summers, redefined the crisis from moral hazard to over-exuberance and over-confidence leading to too much debt. Astounding everyone at a June 2009 conference of Deutsche Bank's Alfred Herrhausen Society in Washington, Summers offered as his only solution the rebuilding of confidence by making credit more widely available (Steingart 2009a).
Perhaps even worse, two months later pre-crash "casino capitalism" had
returned in America, the United Kingdom and Germany (Herbst 2009). German
Finance Minister Peer Steinbrück criticised exorbitant bonuses to bank
executives in the following terms “Some executives didn't hear the bang ….
They are responsible for the fact that approval of our system of doing business
is waning …. Taxpayers are continuing to completely finance big bonuses”
(Spiegel Online 2009b).
A loss of confidence in the American economy has been taking place since
2000 with the American dollar depreciating 40% against the Euro over the
period 2000-2009. Even before the 2008 financial crisis, fewer investors in
China and Japan were prepared to finance the growing American deficit. The
Federal Reserve's response over the three years to 2009 was to increase the
money supply by 45%. Repurchasing Government securities has flooded money
into the economy to finance consumption rather than productive assets.
The American consumptive binge is accelerating with the greying and medical
insurance needs of the population. America's 2009 budget forecasts US$9
trillion additional debt for the decade 2010-2020. This imbalance of wild
growth in money supply to finance Americans living well beyond their means is
seen by many as a precursor to massive inflation and a collapse in the dollar.
Woods agreement replaced the pound sterling with US dollar as
reserve currency and made the US the manager of the world's
money. The US had unquestioned air and sea superiority, a nuclear
monopoly. In 1948, US per capita income was four times the
combined sum of Britain, France, Germany and Italy.
America built success on success with huge patent and copyright empires
across pharmaceuticals, computers (Intel, AMD, IBM), software (Microsoft,
Oracle, Sun), music, movies, publishing, food and beverage (Coca Cola,
McDonald's, Kentucky Fried Chicken) and many other industries. It scooped a
margin from the majority of third world development by a form of economic
extortion. America owns 40% of the IMF and World Bank, and it used its dominant
ownership to control lending to third world countries, requiring that these
countries spend their loans to buy American manufactured equipment and to
employ American contractors (such as Bechtel). America also deployed the CIA
and Marines to coerce investment in American goods and services if economics
didn't work.
However, this miracle was not to last. The extended patent empires have matured, and third world countries such as Indonesia and those in South America no longer need or want International Monetary Fund loans with strings attached.
Bacevich writes (p. 29):
By 1950, the US had begun to import oil. Then came the crushing
defeat in Vietnam, oil shocks, a destabilised economy, inflation,
stagflation and currency devaluation. Following Vietnam, American
efforts to expand abundance and freedom have become increasingly
problematic … In the name of preserving the American way of life,
President Bush and his lieutenants committed the nation to a
breathtakingly ambitious project of near global domination. Hewing
a tradition that extended at least as far back as Jefferson, they
intended to expand American power to further the cause of
American freedom. Freedom assumed abundance. Abundance
seemingly required access to large quantities of cheap oil.
Guaranteeing access to that oil demanded that the United States
remove all doubts about who called the shots in the Persian Gulf. It
demanded oil wars.
Bacevich's point is that America has reached a low point in becoming the world's biggest debtor nation and has demonstrated its moral bankruptcy in having thousands of American soldiers die in Iraq merely to secure oil for profligate American consumers (pp. 62-3 & 155):
perhaps the first person to clearly perceive that economic progress would
change the world. He fervently believed that the future could be accurately
predicted by the application of sound mathematical principles. Following mixed
fortunes during the 1794 French Terror, Saint-Simon developed the seeds of
modern game theory (Strathern 2002, pp. 142-3). He accurately foresaw that
humans would choose science to civilise society because this would minimise
our maximum loss and any other strategy would cause a greater loss.
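The minimax-loss reasoning sketched here can be illustrated with a toy decision problem. The strategies and loss figures in the Python fragment below are invented for exposition only and are not drawn from Strathern or any other source cited in this thesis; the point is simply that a minimax decision-maker selects the strategy whose worst case is least bad.

```python
# Illustrative loss matrix: rows are strategies open to a society, columns are
# possible states of the world, entries are losses (higher is worse).
LOSSES = {
    "embrace science":  [1, 2, 3],
    "ignore science":   [0, 4, 9],
    "suppress science": [2, 5, 12],
}

def minimax_choice(losses):
    """Return the strategy whose worst-case (maximum) loss is smallest."""
    return min(losses, key=lambda strategy: max(losses[strategy]))

if __name__ == "__main__":
    best = minimax_choice(LOSSES)
    print(f"Minimax choice: {best} (worst-case loss {max(LOSSES[best])})")
```

With these assumed figures, embracing science has the smallest worst-case loss, so it is the minimax choice even though another strategy offers a better best case.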
In August 1949, during the early days of the Cold War, Russia detonated a nuclear device that broke America's monopoly on nuclear weapons. America saw itself facing the stark choice of being red or dead (Bacevich 2008, p.164). America's xenophobia was exacerbated by Mao Zedong's Communist Revolution of 1 October 1949. In
what Dick Cheney later called the “one-percent doctrine,” America stood ready
to protect its Manifest Destiny against any tangible threat. The nuclear hawks
sprung into action claiming that America could avoid the choice of “red or
dead”. Using his minimax game theory, Von Neumann vigorously lobbied
Presidents Harry Truman and Dwight D Eisenhowerxxv to launch a first strike
nuclear conflagration at the Soviet Union.
Von Neumann was appointed a commissioner of the Atomic Energy Commission in 1954 and served until his death on 8 February 1957, aged 53. Following his death, Life Magazine's obituary reported that von Neumann had said of his 1950 game theory strategy “If
you say why not bomb them tomorrow, I say why not today? If you say today at
five o' clock, I say why not one o' clock?” (Blair 1957, p.96).
Unfortunately for von Neumann, President Truman's key adviser Paul Nitze
thought the argument for preventative war was absurd. In the top secret
National Security Council document NSC68, promulgated in early 1950, Nitze
wrote that the idea of preventative war was “repugnant and morally
corrosive”.
With the onset of the Korean War in June 1950, President Truman agreed to
NSC68's dogma of mutually assured destruction (ironically carrying the acronym MAD) and to permanent investment in military capability. NSC68 optimistically
claimed “The economic effects of the program might be to increase the gross
national product by more than the amount being absorbed for additional
military and foreign assistance purposes …. [such as] fomenting and
supporting unrest and revolt [in the Soviet bloc]” (Bacevich 2008, pp.108-11).
Since this time, weapons manufacture for defence and export has underpinned
America's economic growth.
Elinor Ostrom (1990) found Hardin's hypothesis to be true in many situations of common property. She showed that the “Tragedy of the Commons” is a case of multiple Prisoner's Dilemmas. Together with Edella Schlager, Ostrom
subsequently developed the concept of property rights (Schlager & E. Ostrom
1992; E. Ostrom & Schlager 1996).
the last iteration is to cheat, then the best alternative on the second last
iteration is also to cheat, which agrees with von Neumann's analysis.
Axelrod's game shows that the way out of the Tragedy of the Commons and the Prisoner's Dilemma is for people to raise themselves from risk, despoilment and despair by banding together for the greater good, and thereby achieving increased individual welfare for all.
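Axelrod's result can be reproduced in miniature. The payoff values (temptation 5, reward 3, punishment 1, sucker's payoff 0), the two strategies and the 200-round horizon in the sketch below are conventional textbook assumptions rather than figures from Axelrod or from this thesis; the simulation simply shows that a pair of conditional cooperators does far better over repeated play than any pairing involving an unconditional defector.

```python
# Illustrative iterated Prisoner's Dilemma with conventional payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, other_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not other_history else other_history[-1]

def always_defect(own_history, other_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

if __name__ == "__main__":
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
```

Under these assumptions, mutual tit-for-tat earns 600 points each over 200 rounds, while the pairing with an unconditional defector leaves both players with roughly a third of that total.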
International Criminal Court, avoiding the latter in case Americans were tried
for foreign war crimes.
In order to move forward in concert with other major economic blocs in a spirit
of trust, America is beginning to recognise that it must rely less on cherry-
picking international agreements and more on trust strategies for a democracy
of nations. Jeffrey Sachs points out that America's attitude following World War
II was just this, which gave considerable guidance and hope to all peoples:
“Great acts of U.S. cooperative leadership include the establishment of the UN,
the IMF and World Bank, the promotion of an open global trading system, the
Marshall Plan to fund European reconstruction, the eradication of smallpox,
the promotion of nuclear arms control, and the elimination of ozone-depleting
chemicals” (2008, pp.8-10).
Sachs sees the finest hour of the American Presidency as October 1962, when
President John Kennedy led the Soviets and the world away from nuclear
Armageddon. In a secret agreement, America removed nuclear missiles from
Turkey at the same time as Russia removed its missiles from Cuba.
Nikita Khrushchev, the Soviet Premier, responded that this was the finest American Presidential speech since those of Franklin Roosevelt. Six weeks later he joined America in the Partial Test Ban Treaty for nuclear weapons.
Joseph Stiglitz, a former Senior Vice President and Chief Economist of the
World Bank who shared the 2001 Nobel Prize in Economics with George
Akerlof and Michael Spence “for laying the foundations for the theory of
markets with asymmetric information,” writes that America needs to migrate
from failed economic models and look to successes like the German social
model (Stiglitz 2009):
economy with massive borrowing, Erhard kept Germany's gearing low. His
success has recently been reflected upon as follows (Steingart 2009b):
His plan shuns excessive debt. His argument was that people would
first make an effort when money became tight and, thus, more
valuable. You get the best results, he found, if, in the tried and true
manner of our forefathers, you work hard and don't forget to save.
"The state can't afford anything that doesn't come from the strength
of its own people," was the message. He also could have said: No
pain, no gain …. His key words were not consumption and credit,
but pay and performance. He insisted, practically to the point of
stubbornness, that work and only work is the foundation of
prosperity: "We must either make do with less or work more." He
felt that the third way, which leads to the vault of the next best bank,
was a dead end …. His record is impressive, even from today's
perspective. He gave Germany the longest economic boom in world
history, from 1949 to 1966. During this period, the country, still
recovering from the war, rose up to become a leading exporter. It
overtook first the French, then three years later the British and as of
1976 the Americans. Germany's currency remained stable and its
level of debt low. From the late 1950s onwards, there was full
employment in Germany …. Even the term "Wirtschaftswunder"
[economic miracle], coined by an admiring populace, was repugnant
to him. Anyone who used the expression in his presence was
snubbed. "There are no miracles," he liked to say.
dominance with a network of partnerships that will ensure that it remains the
'indispensable nation' …. Seen from Washington, there is something almost
infantile about how European governments behave towards them -- a
combination of attention seeking and responsibility shirking” (Witney &
Shapiro 2009).xxvii The authors note that there are “no more special
relationships” and that “governments in the EU must shake off illusions about
the transatlantic relationship if they want to avoid irrelevance on the global
stage.”
At a cultural level, Americans and Germans have different collective and primal
emotions (Malzahn 2009):
gave his speech at Berlin's Victory Column last summer, he talked
about the post-war airlift during the blockade of Berlin and about
the care packages the Candy Bombers distributed. And then he
asked, buried in a subordinate and somewhat cloudy clause of one of
his sentences, that Germans start thinking about how to pay back
this moral debt. However, if I know my countrymen, then this type of
nudging just isn't going to work …. When Obama says that the US is
about to change but that the U.S. cannot be the only one to change,
he should not overestimate the innate feelings of personal
responsibility in the German populace or assume that they will fill in
the unspoken subtext …. The difference being: Americans live in a
society which of course celebrates commerce and selfishness -- but
behind the bluster, a mere inch beneath the surface, there are often
huge reservoirs of idealism and selflessness in individual Americans.
We Germans, however, live in a world which in ways is much fairer
and more organized for the public good. Yet, so many of our
experiences from the Thirty Years War onwards have contributed to
a hard egotistical core which lurks just beneath the dutiful surface
of the national psyche.
For the American consumer economy, the quality of that growth is not so
important. For example, short term consumer growth through hollowing out of
the manufacturing industry is just as valuable as any other sort of growth. It
matters little that national income inequality is extraordinarily high and huge
differentials in consumption exist between ethno-cultural groups.
Prime in the German psyche are the constraints of money and resources. While
the German model still seeks to maximise consumption growth, this is subject
to two main constraints. The first is efficiently satisfying money and resource
limitations. Consistent with the previous discussion of Rawls' A Theory of
Justice, (1972) the second constraint is that society's stability be maintained by
automatic stabilisers that protect the weakest members. For example, the government subsidy that compensates employees when their hours are cut, called “Kurzarbeit,” is credited with preventing hundreds of thousands of job losses in the 2008 global financial crisis.
[Figure not reproduced] Source: Mathematica Country Data, 9 April 2009 (the graph is denominated in American dollars, so the blue line for Germany incorporates all exchange rate variability).
Eilenberger (2009) speculates that the world has seen the end of old models of
globalisation, including post-WWII American economic and military
imperialism. He sees globalisation in a down-cycle, moving down from a world-historical optimum:
empirically demonstrable reality. The European Union in the year
2009 represents a world-historical optimum. Never before have 500
million people united under a single political order been better off.
Never before have they been as free, as healthy, or as well educated;
and never before have they been as peaceful. To be sure, it is the
systemic improbability of this state of affairs that lends a certain
credence to the current pessimism about the future.
According to Eilenberger, the new paradigm is one where Europe and all
regions of the world will become inwardly focused: “The age of globalization is
over. The coming 30 years will be shaped by the logic of scarcity, resulting in a
turn away from global trade and the creation of self-reliant geopolitical zones.”
new departure, to which the now wise Candide responds, Cela est
bien dit, mais il faut cultiver notre jardin. (That is well said, but we
must cultivate our garden) …. Tending to one's own garden,
ensuring its sustainability, and continuing to cultivate it innovatively:
this is Europe's future -- behind walls.
These generic themes are suggested as potential applications for further policy
research in Chapter 7 Conclusions and suggestions for further research.
Policy reboot
The Global Financial Crisis of 2008-9 may well mark the end of a 30-year
bubble in finance and the impending transition of world governance from America to a group of major nations. Upon the election of Barack Obama in November
2008, President Nicolas Sarkozy of France, who held the European Union's
rotating presidency, wrote to Barack Obama requesting that the world
governance granted to America at the end of World War II, at Bretton Woods,
be redistributed to other countries including the European Union.
The outbreak of the current crisis and its spillover in the world have
confronted us with a long-existing but still unanswered question, i.e.,
what kind of international reserve currency do we need to secure
global financial stability and facilitate world economic growth,
which was one of the purposes for establishing the IMF? There were
various institutional arrangements in an attempt to find a solution,
including the Silver Standard, the Gold Standard, the Gold
Exchange Standard and the Bretton Woods system. The above
question, however, as the ongoing financial crisis demonstrates, is
far from being solved, and has become even more severe due to the
inherent weaknesses of the current international monetary system
…. The acceptance of credit-based national currencies as major
international reserve currencies, as is the case in the current
system, is a rare special case in history. The crisis again calls for
creative reform of the existing international monetary system
towards an international reserve currency with a stable value, rule-
based issuance and manageable supply, so as to achieve the
objective of safeguarding global economic and financial stability ….
The desirable goal of reforming the international monetary system,
therefore, is to create an international reserve currency that is
disconnected from individual nations and is able to remain stable in
the long run, thus removing the inherent deficiencies caused by
using credit-based national currencies …. The IMF also created the
SDR in 1969, when the defects of the Bretton Woods system initially
emerged, to mitigate the inherent risks sovereign reserve currencies
caused. Yet, the role of the SDR has not been put into full play due
to limitations on its allocation and the scope of its uses. However, it
serves as the light in the tunnel for the reform of the international
monetary system …. The basket of currencies forming the basis for
SDR valuation should be expanded to include currencies of all major
economies, and the GDP may also be included as a weight. The
allocation of the SDR can be shifted from a purely calculation-based
system to a system backed by real assets, such as a reserve pool, to
further boost market confidence in its value.
believers …. we will restore science to its rightful place .… we reject as false
the choice between our safety and our ideals.”
Now, if we're honest with ourselves, we'll admit that for too long we
have not always met these responsibilities, as a government or as a
people. I say this not to lay blame or to look backwards, but because
it is only by understanding how we arrived at this moment that we'll
be able to lift ourselves out of this predicament. The fact is, our
economy did not fall into decline overnight. Nor did all of our
problems begin when the housing market collapsed or the stock
market sank. We have known for decades that our survival depends
on finding new sources of energy, yet we import more oil today than
ever before. The cost of health care eats up more and more of our
savings each year, yet we keep delaying reform. Our children will
compete for jobs in a global economy that too many of our schools
do not prepare them for. And though all of these challenges went
unsolved, we still managed to spend more money and pile up more
debt, both as individuals and through our government, than ever
before. In other words, we have lived through an era where too
often short-term gains were prized over long-term prosperity, where
we failed to look beyond the next payment, the next quarter, or the
next election. A surplus became an excuse to transfer wealth to the
wealthy instead of an opportunity to invest in our future.
Regulations - regulations were gutted for the sake of a quick profit
at the expense of a healthy market. People bought homes they knew
they couldn't afford from banks and lenders who pushed those bad
loans anyway. And all the while, critical debates and difficult
decisions were put off for some other time on some other day. Well,
that day of reckoning has arrived, and the time to take charge of our
future is here.
The Rev. Thomas Robert Malthus is mainly remembered for his courageous albeit erroneous conviction that population would grow in a geometric sequence and therefore overtake food production, which he thought could only grow in an arithmetic progression (Malthus 1798). His notable achievement of modelling with geometric and arithmetic progressions is seen as an important precursor to neoclassical economics.xxix While he was completely
incorrect in his conclusions, it is ironic and at the same time very interesting
that the base case of the Club of Rome's 1972 Malthus-like projections has
indeed been borne out (Turner 2008).
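The contrast Malthus drew can be reproduced in a few lines. The starting levels, growth rate and food increment in the sketch below are arbitrary illustrative assumptions; the only point demonstrated is that a geometric progression eventually overtakes an arithmetic one, which is the formal core of his argument.

```python
# Illustrative only: geometric population growth versus arithmetic growth in
# food production, with arbitrary starting values and rates.
def crossover(pop0=1.0, food0=2.0, growth_rate=0.03, food_increment=0.05):
    """Return the first period in which population exceeds the food supply."""
    population, food, period = pop0, food0, 0
    while population <= food:
        period += 1
        population *= 1 + growth_rate   # geometric progression
        food += food_increment          # arithmetic progression
    return period

if __name__ == "__main__":
    print("Population overtakes food supply in period", crossover())
```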
main source of economic growth, President Obama has strongly advocated the
importance of substituting investment and saving for excess consumption,
while simultaneously removing the policy distortions of previous
Administrations that have exacerbated America's inequality of income by
diverting middle class wealth to the rich and played a major part in America's
financial collapse (Obama 2009e):
And most of all, I want every American to know that each action we
take and each policy we pursue is driven by a larger vision of
America's future – a future where sustained economic growth
creates good jobs and rising incomes; a future where prosperity is fuelled not by excessive debt, reckless speculation, and fleeting profits, but is instead built by skilled, productive workers; by sound
investments that will spread opportunity at home and allow this
nation to lead the world in the technologies, innovations, and
discoveries that will shape the 21st century. That is the America I
see. That is the future I know we can have …. Even as we clean up
balance sheets and get credit flowing; even as people start spending
and business start hiring – we have to realize that we cannot go
back to the bubble and bust economy that led us to this point …. It is
simply not sustainable to have a 21st century financial system that is
governed by 20th century rules and regulations that allowed the
recklessness of a few to threaten the entire economy. It is not
sustainable to have an economy where in one year, 40% of our
corporate profits came from a financial sector that was based too
much on inflated home prices, maxed-out credit cards, over-
leveraged banks and overvalued assets; or an economy where the
incomes of the top 1% have skyrocketed while the typical working
household has seen their income decline by nearly $2,000.
This crisis is neither the result of a normal turn of the business cycle
nor an accident of history. We arrived at this point as a result of an
era of profound irresponsibility that engulfed both private and
public institutions from some of our largest companies’ executive
suites to the seats of power in Washington, D.C. For decades, too
many on Wall Street threw caution to the wind, chased profits with
blind optimism and little regard for serious risks - and with even less
regard for the public good. Lenders made loans without concern for
whether borrowers could repay them. Inadequately informed of the
risks and overwhelmed by fine print, many borrowers took on debt
they could not really afford. And those in authority turned a blind
eye to this risk-taking; they forgot that markets work best when
there is transparency and accountability and when the rules of the
road are both fair and vigorously enforced. For years, a lack of
transparency created a situation in which serious economic dangers
were visible to all too few …. This irresponsibility precipitated the
interlocking housing and financial crises that triggered this
recession. But the roots of the problems we face run deeper.
Government has failed to fully confront the deep, systemic problems
that year after year have only become a larger and larger drag on
our economy. From the rising costs of health care to the state of our
schools, from the need to revolutionize how we power our economy
to our crumbling infrastructure, policymakers in Washington have
chosen temporary fixes over lasting solutions …. The time has come
to usher in a new era of responsibility in which we act not only to
save and create new jobs, but also to lay a new foundation of growth
upon which we can renew the promise of America …. This Budget is
a first step in that journey …. Our problems are rooted in past
mistakes, not our capacity for future greatness. We should never
forget that our workers are more innovative and industrious than
any on earth. Our universities are still the envy of the world. We are
still home to the most brilliant minds, the most creative
entrepreneurs, and the most advanced technology and innovation
that history has ever known. And we are still the Nation that has
overcome great fears and improbable odds. It will take time, but we
can bring change to America. We can rebuild that lost trust and
confidence. We can restore opportunity and prosperity. And we can
bring about a new sense of responsibility among Americans from
every walk of life and from every corner of the country.
However, the President was still appealing to the great American dream of
unrestrained economic growth. In contrast, the European Union and Japan have shown that sustainability and quality of life can grow without extraordinary growth in GDP. In addition to GDP growth, environmental and social considerations need to be included in all scenarios for sustainable development.
In April 2009, President Barack Obama surprised the world with his
understanding of the new realpolitik across international economic, security,
energy and climate relations (Scherer 2009):
dismissive, even derisive." …. French President Nicolas Sarkozy
addressed the issue directly, speaking through an interpreter. "It
feels really good to be able to work with a U.S. President who wants
to change the world and who understands that the world does not
boil down to simply American frontiers and borders," he said. "And
that is a hell of a good piece of news for 2009."
On 4 June 2009, President Obama made a visionary speech to the Muslim and
Jewish worlds about international ethics and peace (Obama 2009c). This
speech linked directly to President John F. Kennedy's transformative and
enduring Peace Speech at American University in 1963 (Kennedy 1993). It was
to prove just as historic. President Obama said:
responsibilities under the nuclear Non-Proliferation Treaty …. all of
us must recognize that education and innovation will be the
currency of the 21st century and in too many Muslim communities,
there remains underinvestment in these areas …. On education, we
will expand exchange programs, and increase scholarships …. On
economic development, we will create a new corps of business
volunteers to partner with counterparts in Muslim-majority
countries …. On science and technology, we will launch a new fund
to support technological development in Muslim-majority countries,
and to help transfer ideas to the marketplace so they can create
more jobs. We'll open centers of scientific excellence in Africa, the
Middle East and Southeast Asia, and appoint new science envoys to
collaborate on programs that develop new sources of energy, create
green jobs, digitize records, clean water, grow new crops ….
eradicate polio …. And …. expand partnerships with Muslim
communities to promote child and maternal health …. Americans are
ready to join with citizens and governments; community
organizations, religious leaders, and businesses in Muslim
communities around the world to help our people pursue a better
life …. All of us share this world for but a brief moment in time. The
question is whether we spend that time focused on what pushes us
apart, or whether we commit ourselves to an effort - a sustained
effort - to find common ground, to focus on the future we seek for
our children, and to respect the dignity of all human beings.
Poland, Romania, the Czech Republic and the client war in Georgia (Spiegel
Online 2009a).
However, this policy reversal is more than merely pragmatic on the one hand and a Kennedy-Obama vision of reducing nuclear weapons on the other. It puts in place a new platform for sweeping change to Anglo-American international relations. Firstly, it looks to a new international consensus across all aspects of security, including nuclear weapons, terrorism and climate change. Secondly, it de-escalates the harsh words and threats that lead to anxieties and high defence costs for every country and that see weapons systems across the globe placed on hair triggers. Thirdly, it recognises the new financial reality that America is unable to remain the world's policeman. Fourthly, it reorients policy toward domestic issues, such as health over military spending, or “butter instead of guns.”
The Council reaffirmed its strong support for the Treaty on the Non-
Proliferation of Nuclear Weapons by adopting Resolution 1887 (2009) to end
nuclear weapons proliferation. The Meeting also called on States parties “to
comply fully with their obligations and to set realistic goals to strengthen, at
the 2010 Review Conference, all three of the Treaty’s pillars - disarmament of
countries currently possessing nuclear weapons, non-proliferation to countries
not yet in possession, and the peaceful use of nuclear energy for all.”
we will set our sights on the eradication of extreme poverty in our
time.
The Norwegian Nobel Committee has decided that the Nobel Peace
Prize for 2009 is to be awarded to President Barack Obama for his
extraordinary efforts to strengthen international diplomacy and
cooperation between peoples. The Committee has attached special
importance to Obama's vision of and work for a world without
nuclear weapons …. Obama has as President created a new climate
in international politics. Multilateral diplomacy has regained a
central position, with emphasis on the role that the United Nations
and other international institutions can play …. The vision of a world
free from nuclear arms has powerfully stimulated disarmament and
arms control negotiations. Thanks to Obama's initiative, the USA is
now playing a more constructive role in meeting the great climatic
challenges the world is confronting …. The Committee endorses
Obama's appeal that "Now is the time for all of us to take our share
of responsibility for a global response to global challenges."
Brandtxxxi hadn’t achieved much when he got the prize, but a process
had started that ended with the fall of the Berlin Wall …. The same
thing is true of the prize to Mikhail Gorbachev in 1990, for
launching perestroika. One can say that Barack Obama is trying to
change the world, just as those two personalities changed
Europe.xxxii
The news of the award was applauded around the world. French President,
Nicolas Sarkozy, congratulated President Obama saying: “It sets the seal on
America's return to the heart of all the world's peoples.” The Editor of The
New York Times (2009) noted that President Obama's Nobel Peace Prize failed
to resonate in America:
Don't expect a radical change in US climate policy after Barack
Obama takes over as president … America as a whole is not ready
for the contribution it needs to make in order to lessen the negative
affects of global warming … Washington will take pains to ensure
climate protection measures do not harm the US economy ... the
dominant issue in the US has always been energy security.
America's new direction under Democratic politics raises questions about the underlying attitudes of Americans and about President Obama's ability to change the direction of policy. The discipline of moral psychology, established by Kohlberg (1969), seeks to understand the difference between these conservative and liberal attitudes.
Haidt & Graham (2007) have provided a behavioural model based on five
underlying psychological factors that characterise emotional reactions in
politics. These are harm-care, fairness-reciprocity, ingroup-loyalty (i.e. protect
the group or traditions), authority-respect, and purity-sanctity (i.e. religion).
Kass (1997) explains why this occurs. He argues that political conservatives
rely on their gut feel for moral principles and identify any violation as
repugnant and, by extension, a pernicious threat to the establishment. This is
Kass' well-known “yuck factor.” Abortion, euthanasia and gay marriage are but a few of the issues repugnant to political conservatives on the hard right. By further extension, individuals involved with a violation of the moral principles
are characterised as foul, sub-social individuals who deserve no rights and to
whom torture may be an appropriate response.xxxiv
Political liberals call for freedom, autonomy and the right of individuals to
express their own preferences. In decision making, political liberals usually
seek principles of equality, natural justice, rational accountability and
transparent, evidence based debate.
Haidt suggests that conservatism is the default political attitude in the world. Conservative reasoning exists across a wide spectrum of ideological outlooks as diverse as conservative democracy, conservative Islam and conservative Marxism-Leninism. In this respect, American Republican democracy and conservative Islam have more in common than do the American Republicans and Democrats.
down of Soviet Communism in 1991, American conservatives turned their
energies toward environmental scientists, whom they believed to be the next
threat to Anglo-American sovereignty and unilateralism. In reviewing the
failure of the 1992 Rio Earth Summit, German Environment Minister Klaus Topfer noted “I am afraid that conservatives in the United States are picking 'ecologism' as their new enemy” (Greenhouse 1992).
Yale historian, Paul Kennedy (1993) sees America muddling through its
challenges but provides a word of caution. He observed that those people who
succeed in democratic political systems usually do so by managing to avoid
antagonising powerful interest groups. In this sense President Barack Obama
has a large challenge because as we have seen above, the set of issues
confronting him and the schisms in political values are enormous. Perhaps the
American financial crisis and peak-oil realisation will serve in his favour.
However, it remains to be seen if the broad base of Americans will be capable
of accepting a new humbleness of sustainable living where unilateral action to
secure resources is an international and punishable war crime.
2.7 Conclusion
The political analysis of the Anglo-American economic worldview in this
Chapter has identified a number of key themes.
With virgin territories for the taking, America's population grew rapidly and
with it the wealth of the nouveau riche. It became a powerful society. As shown
in the analysis of America's national security, the fiction that its domestic
success had been divinely ordained and was part of a Manifest Destiny became
entrenched in its dealings with the world. Such idiosyncratic beliefs defy
testing and encourage polemic. Therefore, it is unsurprising that Americans
accepted their own predetermined destiny while expressing outrage at another
equally untestable vision of pre-determined economics, that of dialectical materialism.
Notwithstanding this, in the name of Manifest Destiny Americans justified
resource grabs from Hawaii to Iraq. In the majority of instances, America's
interventions led to a legacy of ashes. However, as shown in the analysis of
resource wars, it might be expected that powerful nations such as America will
continue to use military force to secure resources.
The systemic risk observed by Alexis de Tocqueville was to threaten the very foundations of classical and neoclassical economics. Societies were rescued from the Great Depression and the Global Financial Crisis of 2008-9 by Keynesian lifelines. From this, behavioural economics has assumed great importance.
In “Philosophy and psychology diverge from the paradigm” it was shown that
behaviourist philosophers began to invalidate the Aristotelian hypothesis that
humans find happiness through work. This started with the great European
philosophers Kant, Nietzsche, Stirner, Weber, Sartre and Camus. This bubbling
stream became a broad river with the engagement of the great deductivist
institutional philosophers, Popper and Kuhn.
become corpulent and happier to be a financial dealer and profiteer than to
produce and innovate. The turning point had been passed sometime in the
1970s.
America and other Anglo-American countries will not rise to the opportunities
of international symbiosis but turn inward to “tend their own gardens.”
The fresh Presidency of Barack Obama, his candid mea culpa on behalf of
America and exhortations for America to engage with international symbiosis
are discussed in “Policy Reboot.” It was found that world nations have not been
comfortable with American policies since the days of President Reagan and
this reached its most objectionable apogee during the term of President
George W. Bush. However, world leaders of all persuasions, including Russia's, have expressed the desire to help President Obama rebuild America's standing in the international community as a valued member of a new power sharing among nations. The investigation of “United Nations Security Council Resolution 1887” on the elimination of nuclear weapons showed how this new cooperation might operate.
However, the analysis of President Barack Obama's award of the 2009 Nobel
Peace Prize, with special reference to his multilateralism and policy of action
on nuclear non-proliferation, showed that President Obama does not carry the
goodwill of a large number of Americans and that this may defeat his attempts
to reconcile America with other leading nations and power blocs such as the
European Union, Russia, India, China and South America.
The next Chapter examines Anglo-American political economy at the cusp of
change from unconstrained growth to climate constrained growth.
37 signals, 2006. The smarter, faster, easier way to build a successful web
application, 37 signals.
ASX Corporate Governance Council, 2006. Principles of Good Corporate
Governance and Good Practice Recommendations - Exposure draft of
changes (showing marked amendments), Sydney, Australia: Australian
Stock Exchange. Available at:
http://www.asx.com.au/supervision/pdf/asxcgc_marked_amended_princip
les_021106.pdf.
Atkinson, A.B. & Leigh, A., 2006. The Distribution of Top Incomes in Australia.
SSRN eLibrary. Available at: http://papers.ssrn.com/sol3/papers.cfm?
abstract_id=891892 [Accessed August 25, 2009].
Aumann, R.J. & Shapley, L., 1974. Values of non-atomic games, Princeton, N.J.:
Princeton University Press.
Axelrod, R.M., 1984. The evolution of cooperation, New York: Basic Books.
Bacevich, A., 2008. The Limits of Power: The End of American Exceptionalism ,
Metropolitan Books.
Bastiat, F., 1848. Selected Essays on Political Economy: Second Letter trans.
Seymour Cain., Irvington-on-Hudson, NY: The Foundation for Economic
Education, Inc. Available at:
http://www.econlib.org/library/Bastiat/basEss6.html [Accessed April 8,
2009].
Bergson, H., 1932. The two sources of religion and morality, Paris: Librairie
Felix Alcan.
Blair, C., 1957. The Passing of a Great Mind. Life Magazine, (February 25).
Blaug, M., 1992. The methodology of economics: Or, how economists explain ,
Cambridge University Press.
Bono, 2009. Rebranding America. The New York Times. Available at:
http://www.nytimes.com/2009/10/18/opinion/18bono.html [Accessed
November 2, 2009].
Brooks, D., 2009. An Economy of Faith and Trust. The New York Times.
Available at: http://www.nytimes.com/2009/01/16/opinion/16brooks.html
[Accessed January 21, 2009].
Brooks, D., 2008. The Behavioral Revolution. The New York Times. Available
at: http://www.nytimes.com/2008/10/28/opinion/28brooks.html
[Accessed January 21, 2009].
Brown & Williamson, 1969. Smoking and Health Proposal, United States:
Brown & Williamson. Available at:
http://legacy.library.ucsf.edu/tid/rgy93f00 [Accessed September 30,
2009].
Bucks, B.K. et al., 2009. Changes in U.S. Family Finances from 2004 to 2007:
Evidence from the Survey of Consumer Finances. Federal Reserve
Bulletin, 95, A1-A55.
Camus, A., 1942. Le Mythe de Sisyphe (The Myth of Sisyphus) 1955th ed.,
Available at:
http://www.sccs.swarthmore.edu/users/00/pwillen1/lit/msysip.htm
[Accessed April 12, 2009].
Camus, A., 1952. Retour à Tipasa (Return to Tipasa). In The Myth of Sisyphus
and Other Essays. Vintage.
Capek, K., Klinkenborg, V. & Capek, J., 2002. The Gardener's Year, Modern
Library.
Clarke, J., 2005. Working with monsters: how to identify and protect yourself
from the workplace psychopath, Random House Australia.
Cleveland, G., 1893. President Grover Cleveland's Message to the Senate and
House of Representatives, Washington DC. Available at:
http://www.hawaii-nation.org/cleveland.html [Accessed April 8, 2009].
Cohen, R., 2009. Germany Unbound. The New York Times. Available at:
http://www.nytimes.com/2009/10/01/opinion/01iht-edcohen.html?
th&emc=th [Accessed October 1, 2009].
Curd, M. & Cover, J., 1998. Philosophy of Science, Section 3, The Duhem-Quine
Thesis and Under-determination, W.W. Norton & Company. The
Philosophical Review, 60.
Daily, C., Dalton, M. & Cannella, A.A., 2003. Corporate governance: decades of
dialogue and data. Academy of Management Review, 28(3), 371-382.
Dharan, B.G., 2002. Enron's accounting Issues - what can we learn to prevent
future Enrons, Prepared Testimony Presented to the US House Energy
and Commerce Committee's Hearings on Enron Accounting , Available
at:
http://www.ruf.rice.edu/~bala/files/dharan_testimony_enron_accounting.
pdf.
Duffner, S., 2003. Principal-agent problems in venture capital finance, Basel:
University of Basel.
Duhem, P., 1906. La théorie physique: son objet et sa structure (The aim and
structure of physical theory). Foreword by Prince Louis de Broglie.
Translated from the French by P.P. Wiener, 1954, Princeton, New Jersey:
Princeton University Press.
Ehrenhalt, A., 2006. How the Yes Man learned to say No. The New York Times.
Available at:
http://www.nytimes.com/2006/11/26/opinion/26ehrenhalt.html.
Feyerabend, P., 1975. Against Method, London, UK.: New Left Books.
Franklin, B. & Franklin, W.T., 1818. Memoirs of the Life and Writings of
Benjamin Franklin,
Fuller, S., 2003. Kuhn vs. Popper: the struggle for the soul of science,
Cambridge and Australia: Allen & Unwin.
Gibbs, W., 2009. From 205 Names, Panel Chose the Most Visible. The New
York Times. Available at:
http://www.nytimes.com/2009/10/10/world/10oslo.html?
_r=1&th&emc=th [Accessed October 11, 2009].
Greenhouse, S., 1992. A CLOSER LOOK; Ecology, the Economy and Bush. The
New York Times. Available at:
http://www.nytimes.com/1992/06/14/weekinreview/a-closer-look-ecology-
the-economy-and-bush.html?scp=1&sq=Klaus+Topfer+%22Ecology
%2C+the+Economy%2C+and+Bush%22&st=nyt [Accessed September
30, 2009].
Haidt, J. & Graham, J., 2007. When morality opposes justice: Conservatives
have moral intuitions that liberals may not recognize. Social Justice
Research, 20(1), 98-116.
Hardin, G., 1968. The tragedy of the commons. Science, 162(3859), 1243-1248.
Henriques, D.B., 2008. A bull market sees the worst in speculators. The New
York Times. Available at:
http://www.nytimes.com/2008/06/13/business/13speculate.html?
_r=1&pagewanted=2&th&emc=th&oref=slogin [Accessed June 15,
2008].
Herbst, M., 2009. Reversing the Economic Plunge: Will Germany Beat the US
to Recovery? SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/business/0,1518,643166,00.html#re
f=rss [Accessed August 20, 2009].
Hujer, M., Reuter, W. & Schwennicke, C., 2009. 'I Take Responsibility': Obama's
G-20 Confession. SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/world/0,1518,617639,00.html#ref=r
ss [Accessed April 7, 2009].
Jensen, D., 2000. A language older than words, New York: Context Books.
Kaplan, S.N. & Stromberg, P., 2004. Characteristics, contracts, and actions:
Evidence from venture capitalist analyses. The Journal of Finance,
LIX(5), 2177-2210.
Kass, L.R., 1997. The Wisdom of Repugnance: Why we should ban the cloning
of humans. The New Republic, 216(22), 17-26.
Kennedy, P.M., 1993. Preparing for the twenty-first century, London, UK.:
Harper Collins.
Keynes, J.M., 1936. The General Theory of Employment, Interest and Money,
Macmillan Cambridge University Press, for Royal Economic Society.
Available at:
http://www.marxists.org/reference/subject/economics/keynes/general-
theory/ [Accessed April 9, 2009].
Krugman, P., 2009a. All the President’s Zombies. The New York Times.
Available at:
http://www.nytimes.com/2009/08/24/opinion/24krugman.html?
_r=2&th&emc=th [Accessed August 24, 2009].
Krugman, P., 2009b. America the Tarnished. The New York Times. Available at:
http://www.nytimes.com/2009/03/30/opinion/30krugman.html?em
[Accessed March 31, 2009].
Krugman, P., 2009c. How Did Economists Get It So Wrong? The New York
Times. Available at:
http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?
_r=2&th&emc=th [Accessed September 7, 2009].
Kuhn, T., 1962. The Structure of Scientific Revolutions 2nd ed., Chicago:
University of Chicago Press.
Levy, C.J. & Baker, P., 2009. Russia’s Reaction on Missile Plan Leaves Iran Issue
Hanging. The New York Times. Available at:
http://www.nytimes.com/2009/09/19/world/europe/19shield.html?
_r=1&scp=1&sq=missile%20shiled&st=cse [Accessed September 20,
2009].
Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental
Crisis 2005th ed., Black Inc, Melbourne.
Malzahn, C.C., 2009. Germany's Miracle Man. Spiegel Online - News -
International. Available at:
http://www.spiegel.de/international/world/0,1518,619431-2,00.html
[Accessed April 16, 2009].
Marx, K., 1867. Capital 1887th ed., Moscow, USSR: Progress Publishers.
Available at: http://www.marxists.org/archive/marx/works/1867-
c1/index.htm [Accessed April 9, 2009].
Mill, J.S., 1848. Principles of Political Economy 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].
von Neumann, J. & Morgenstern, O., 1953. Theory of games and economic
behavior 3rd ed., Princeton, New Jersey: Princeton University Press.
Nietzsche, F., 1882. The Gay Science (Die fröhliche Wissenschaft): With a
Prelude in Rhymes and an Appendix of Songs 1887th ed., Available at:
http://www.textlog.de/nietzsche-wissen.html.
Nietzsche, F., 1887. Thus Spoke Zarathustra (Also sprach Zarathustra): A book
for all and none 1995th ed., New York: Modern Library.
Obama, B., 2009a. President Obama’s Address to Congress. The New York
Times. Available at:
http://www.nytimes.com/2009/02/24/us/politics/24obama-text.html?
_r=2&ref=opinion&pagewanted=all [Accessed March 3, 2009].
Obama, B., 2009b. President's Message: Fiscal Year 2010 Budget Overview
Document: A New Era of Responsibility: Renewing America's Promise,
Washington DC: United States Office of Management and Budget.
Available at: http://www.gpoaccess.gov/usbudget/ [Accessed March 3,
2009].
Obama, B., 2009c. Remarks by the President on a New Beginning, Cairo
University, Egypt: The White House Press Office. Available at:
http://www.whitehouse.gov/the_press_office/Remarks-by-the-President-
at-Cairo-University-6-04-09/ [Accessed June 5, 2009].
Obama, B., 2009d. Remarks by the President to the United Nations General
Assembly, Washington DC: The White House Press Office. Available at:
http://www.whitehouse.gov/the_press_office/remarks-by-the-president-
to-the-united-nations-general-assembly/ [Accessed November 2, 2009].
Ostrom, E. & Schlager, E., 1996. The formation of property rights. Rights to
nature: Ecological, economic, cultural, and political principles of
institutions for the environment, 127–156.
Ostrom, E., 1990. Governing the commons: the evolution of institutions for
collective action, Cambridge; New York: Cambridge University Press.
Petre, M., 2003. Disciplines of innovation in engineering design, Expertise in
Design Design Thinking Research Symposium 6 hosted by Creativity
and Cognition Studios, Sydney, Australia: University of Technology,
Sydney. Available at:
http://research.it.uts.edu.au/creative/design/papers/16PertreDTRS6.pdf.
Popper, K., 1945. The open society and Its enemies, London: Routledge.
Randerson, J., 2009. UK's ex-science chief predicts century of 'resource' wars.
The Guardian. Available at:
http://www.guardian.co.uk/environment/2009/feb/13/resource-wars-
david-king/print [Accessed February 16, 2009].
Sachs, J., 2008. Common Wealth: Economics for a Crowded Planet, Penguin
Press HC, The.
Sarkozy, N., 2009. Failure Is not an Option -- History Would not Forgive Us.
SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/europe/0,1518,616713,00.html#ref
=rss [Accessed April 2, 2009].
Sartre, J., 1943. Being and Nothingness: An Essay on Phenomenological
Ontology Trans. Barnes H., New York: Philosophical Library.
Sartre, J., 1938. Nausea (La Nausée, originally called Melancholia), New
Directions Publishing Corporation, June 1969.
Sartre, J., 1992. The Look. In Being and Nothingness. New York: Washington
Square Press.
Saunders, A., 1974. A conversation with Isaiah Berlin - interview with John
Merson. Philosophers Zone. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2593244.htm
[Accessed June 15, 2009].
Saunders, A., 2009a. A tribute to Isaiah Berlin - Interview with John Gray,
Professor of European Thought at the London School of Economics.
Philosophers Zone. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2586694.htm#t
ranscript [Accessed June 15, 2009].
Saunders, A., 2009b. Governance and the Yuck Factor. Philosophers Zone.
Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2631260.htm#t
ranscript [Accessed August 13, 2009].
Scherer, M., 2009. Barack Obama's New World Order. Time. Available at:
http://www.time.com/time/world/article/0,8599,1889512,00.html
[Accessed April 7, 2009].
Schmitt, E. & Mazzetti, M., 2008. Secret Order Lets U.S. Raid Al Qaeda. The
New York Times. Available at:
http://www.nytimes.com/2008/11/10/washington/10military.html?
_r=2&th&emc=th&oref=slogin&oref=slogin [Accessed November 10,
2008].
Schwartz, B., Markus, H.R. & Snibbe, A.C., 2006. Is Freedom Just Another
Word for Many Things to Buy? The New York Times. Available at:
http://www.nytimes.com/2006/02/26/magazine/26wwln_essay.html
[Accessed April 15, 2009].
Self, P., 2000. Rolling back the market: economic dogma and political choice ,
Palgrave MacMillan.
Slattery, L., 2008. Abstract to application: after years of being ignored by the
money markets, academic economists are delighted as hard times and
complex problems mean demand for their skills. The Australian, 28-29.
Smith, A., 1776. An Inquiry into the Nature and Causes of the Wealth of
Nations Also known as: Wealth of Nations., Project Gutenberg.
Sonnenfeld, J.A., 2004. Good Governance and the Misleading Myths of Bad
Metrics. Academy of Management Executive, 18, 108-113.
Sonnenfeld, J.A., 2002. What makes great boards great. Harvard Business
Review, Article R0209H.
http://www.spiegel.de/international/world/0,1518,650228,00.html#ref=r
ss [Accessed September 22, 2009].
Steingart, G., 2009a. Obama's Mistakes: Chancellor Merkel Visits the Debt
President. SPIEGEL ONLINE - News - International. Available at:
http://www.spiegel.de/international/world/0,1518,632494,00.html#ref=r
ss [Accessed June 26, 2009].
Steingart, G., 2009b. What Obama Could Learn from Germany. Spiegel Online
International. Available at:
http://www.spiegel.de/international/world/0,1518,605695,00.html
[Accessed March 16, 2009].
Stiglitz, J., Sen, A. & Fitoussi, J., 2009. Report by the Commission on the
Measurement of Economic Performance and Social Progress, Paris:
French Commission on the Measurement of Economic Performance and
Social Progress. Available at: http://www.stiglitz-sen-
fitoussi.fr/en/index.htm [Accessed September 22, 2009].
The New York Times, 2009. Editorial: The Peace Prize. The New York Times.
Available at: http://www.nytimes.com/2009/10/10/opinion/10sat1.html?
th&emc=th [Accessed October 11, 2009].
The Norwegian Nobel Committee, 2009. The Nobel Peace Prize 2009 - Press
Release. Nobelprize.org. Available at:
http://nobelprize.org/nobel_prizes/peace/laureates/2009/press.html
[Accessed October 10, 2009].
The Royal Swedish Academy of Sciences, 2009. The Prize in Economics 2009 -
Press Release. Nobelprize.org. Available at:
http://nobelprize.org/nobel_prizes/economics/laureates/2009/press.html
[Accessed October 12, 2009].
Tucker, A.W., 1980. On Jargon: The Prisoner’s Dilemma. UMAP Journal, 1, 101-
103.
United Nations Security Council, 2009. 6191st Meeting (AM): Historic Summit
of Security Council Pledges Support for Progress on Stalled Efforts to
End Nuclear Weapons Proliferation: Resolution 1887 (2009) Adopted
with 14 Heads of State, Government Present, New York: United Nations.
Available at: http://www.un.org/News/Press/docs/2009/sc9746.doc.htm
[Accessed October 11, 2009].
Vargo, S.L. & Lusch, R.F., 2004. Evolving to a new dominant logic for
marketing. Journal of Marketing, 68(1), 1-17.
Von Neumann, J., 1928. Zur theorie der gesellschaftsspiele (Theory of Parlor
Games). Mathematische Annalen, 100(1), 295-320.
Weber, M.C.E., 1904. Die protestantische Ethik und der Geist des Kapitalismus
(The Protestant Ethic and the Spirit of Capitalism). In Die protestantische Ethik
und der Geist des Kapitalismus. Available at:
http://www.ne.jp/asahi/moriyuki/abukuma/weber/world/ethic/pro_eth_fra
me.html [Accessed April 12, 2009].
Weiner, E., 2008. The geography of bliss. One grump's search for the happiest
places in the world, New York.
Whitman, W., 1888. Democratic Vistas: And Other Papers, W. Scott; Toronto:
WJ Gage.
Witney, N. & Shapiro, J., 2009. Towards a post-American Europe: A Power
Audit of EU-US Relations, The European Council on Foreign Relations.
Available at: http://ecfr.eu/content/entry/towards_a_post-
american_europe_a_power_audit_of_eu-us_relations_shapiro_whi/
[Accessed November 2, 2009].
i Napoleon put into practice the principle of “order over chaos” on 5 October 1795 when he turned his cannons on a putsch of thirty thousand royalists marching to the Tuileries Palace to overturn the Convention of the revolutionary government. Napoleon's grape-shot killed more than 200 royalists, and its damage is still visible in the walls of the Church of St. Roch at 286 Rue St.-Honore. In gratitude, the government promoted the twenty-six year old hero of the Revolution from brigadier general to commander-in-chief of the Army of the Interior.
Similarly, in 1968 violent student protests against the Vietnam War took place at
the Paris Sorbonne. These protests erupted into France’s second revolution as
ten million French people went on strike over industrial conditions. In June,
President de Gaulle called an election to establish a mandate for reform and it
was granted. However, instead of reform, de Gaulle violently quashed the
protest and in response the people dismissed him in April 1969.
America experienced a similar incident. Following an April 1970 riot by
reportedly six thousand students at Harvard Square in Massachusetts at the
time of the Vietnam War, the American government became extremely nervous
about anti-war protests. One month later, in May, a terrible incident occurred at
Kent State University. National Guardsmen fired on demonstrators for 13
seconds, killing four students and badly wounding nine others
ii Sir David King, director of the Smith School of Enterprise and the Environment at Oxford University, who was the British Government's Chief Scientific Adviser at the start of the Iraq war in March 2003
iii In 2005, a BBC Radio 4 poll found Karl Marx to be the world's greatest
philosopher by a wide margin (Critchley 2008, p.212)
iv Examples following World War II include the Marshall Plan to rebuild Western
Europe and the rebuilding of Japan in the 1950s and 1960s
v Mathematica Country Database (accessed 10 April 2009). Gini Indexes for some
other countries are: Australia 0.305, Canada 0.321, China 0.47, Denmark 0.24,
France 0.28, Russia 0.413, United Kingdom 0.34
vi In 1928, at the age of 25, John von Neumann developed the minimax (and, equivalently, the converse maximin) as part of formulating game theories that would minimise one's maximum possible loss. Von Neumann said of his strategy that defeat is inevitable if you aim to win rather than to avoid losing
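Von Neumann's result can be stated compactly. The display below is a generic textbook statement of the minimax theorem for two-player zero-sum games in mixed strategies, added purely as an illustrative gloss on this footnote; the payoff matrix A and the strategy sets are standard notation rather than anything taken from the sources cited here:

\[ \max_{x}\ \min_{y}\ x^{\top} A y \;=\; \min_{y}\ \max_{x}\ x^{\top} A y \]

where A is the payoff matrix to the first player and x and y range over the players' mixed strategies. The common value is the value of the game: the first player can guarantee at least this payoff and the second player can guarantee conceding no more, which is the formal sense in which a player minimises the maximum possible loss.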
vii Nietzsche uses this statement in The Gay Science (1882): section 108 (New
Struggles), section 125 (The Madman) and section 343 (The Meaning of our
Cheerfulness). Max Stirner was born Johann Kaspar Schmidt
viiiIn 1960 Albert Camus tragically died in a car accident at the age of 46. He had earlier written that the point of life was to live and that he could conceive of no more meaningless death than in a car accident (Critchley 2008, p.262)
ix Isaiah Berlin (1909-1997) was an Oxford liberal humanist scholar of Russian-
Jewish descent
x For example, in 2005 the American Society of Civil Engineers calculated that
US$9.4 billion per year for 20 years is needed to refurbish America's collapsing
bridges
xi On ne reçoit pas la sagesse, il faut la découvrir soi-même après un trajet que personne ne peut faire pour nous, ne peut nous épargner (we do not receive wisdom; we must discover it for ourselves after a journey that no one can make for us or spare us)
xii Winston Churchill's humorous version applied to US foreign policy is that “the US always does the right thing, when all alternatives are exhausted”
xiiiTranslated by Karl Popper
xivcogito ergo sum (I am thinking therefore I exist, which is the inverse of Existentialism's principle of existence before essence)
xv For example, in Australia, Section 198A of the Corporations Act (2001) gives
directors all powers in a company, except those reserved for a general meeting.
As the principle of ultra vires (acting beyond mandate) has been removed,
nothing is beyond the powers of the company so a company may in fact do
anything. The directors also have common law duties to act honestly, in good faith and with care and due diligence. However, these common law duties are
met by complying with the statutory Duty of Care and Due Diligence set out in
Section 180(1) of Corporations Law, which is commonly known as the “business
judgement rule.” It requires that directors act in good faith and for proper
purpose; have no personal interest in the outcome; take steps to inform
themselves on all issues; and rationally believe their decision is in the best
interests of the company
xviThe United Kingdom case Percival vs. Wright (1902) established that directors
do not have a general duty to individual shareholders. In Australia, the High
Court decision of Spies vs. R (2000) decided the issue that directors do not have
a duty to creditors, except where a company is insolvent or near-so. This also
resolves the position in respect to employees and other stakeholders, to whom
directors owe no duty except subject to specific laws that may apply. A similar
principle was embodied in the UK Hampel Committee Report on Corporate
Governance (1998), which is arguably the best encapsulation of the concept: the
Committee recommended that directors be accountable to shareholders for
preserving and enhancing the shareholders' investment and be responsible for
relations with stakeholders as part of this accountability
xviiThe Sarbanes-Oxley Act of 2002 (Pub. L. No. 107-204, 116 Stat. 745, also
known as the Public Company Accounting Reform and Investor Protection Act of
2002) is a United States federal law that addresses director responsibilities and
criminal penalties
xviiiBacevich (2008, p181) says of Americans' future: They will guzzle imported oil,
binge on imported goods, and indulge in imperial dreams
xixAs the traditional providers of mortgage finance became increasingly nervous through 2007, Bear Stearns continued to sell mortgages, providing the finance itself by rolling 24-hour borrowings with Federated, Fidelity Investments and European lenders. On 6 March 2008 the first European lender, Rabobank, said it would not renew its credit lines to Bear Stearns. On 11 March, ING followed. Finally, on 13 March, when Bear Stearns was seeking to roll over US$75 billion, Federated and Fidelity Investments said they would no longer accept the sub-prime mortgages as collateral security. The Federal Reserve requested J. P. Morgan to review Bear Stearns' accounts and on 14 March, J. P. Morgan used US$30 billion of Federal Reserve funds to provide Bear Stearns with unlimited credit to avoid meltdown of the financial system
xx The “Minsky Moment” is named in honour of American economist Hyman Minsky.
The term was inspired by the 1998 Russian sovereign debt default. The Minsky
Moment is the point when investors doubt that cashflow can sustain debt
obligations, which leads to a panic sell-off as investor greed abruptly turns to
fear
xxiAmerican recession years: 1807, 1837, 1857, 1873, 1893, 1907, 1929, 1973,
1987, 2001 and 2009
xxiiSee Schwartz et al. 2006
xxiiiSun Tzu (544 – 496 BCE) wrote The Art of War, an immensely influential
ancient Chinese book on military strategy. Sun Tzu argued strongly for military
intelligence, claiming a general must have full knowledge of his own and the enemy's strengths and weaknesses. His book was known for thousands of years
but a full copy was discovered only in 1972 on a set of bamboo engraved texts in
a grave near Linyi in Shandong
xxivThe Prisoner's Dilemma is a two person, non-zero sum game. Consider the oft
seen television police drama where two suspects are put in separate rooms for
questioning. Each suspect knows full well that if they both remain silent then
the police will have a hard task proving them guilty. In this case, the
unsatisfactory police evidence means that each prisoner will receive a nominal
sentence of only one year. However, the police keep fermenting the prisoners'
anxiety to turn “Queen's evidence.” This will result in a reduced or commuted
sentence in return for incriminating the accomplice. If only one confesses then
that prisoner will escape sentence and the other will receive a sentence of ten
years. If both prisoners confess, each will be sentenced to five years in prison.
The old maxim of “no honour amongst thieves” mostly holds true because their
agreements are neither binding nor enforceable. So each prisoner cannot trust
the other to remain silent. Each prisoner therefore looks at the situation from a
self-interested point of view and seeks to maximise his own benefit without
regard for what the other may do. Therefore, the dominant outcome is for each
to “rat” on the other in order to be released. However, because both do the
same thing, the separate strategies have the effect of resulting in a sentence of
five years for each. They have foregone the Pareto Optimum outcome of trusting
each other, which would have resulted in sentences of only one year each
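The dominance reasoning in this footnote can be checked mechanically. The short Python sketch below is an illustrative aid only, not material from the cited sources; it encodes the sentences described above and confirms that confessing is each prisoner's best response whatever the other prisoner does:

# A minimal sketch of the Prisoner's Dilemma described in this footnote,
# using the sentences above (years in prison, so lower is better).
sentences = {
    ("silent", "silent"):   (1, 1),    # weak evidence: one year each
    ("silent", "confess"):  (10, 0),   # the confessor goes free
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),    # both "rat": five years each
}
strategies = ["silent", "confess"]

def best_response(other_choice):
    # Prisoner A's sentence is the first element of each payoff pair; choose
    # the strategy that minimises it, holding the other prisoner's choice fixed.
    return min(strategies, key=lambda s: sentences[(s, other_choice)][0])

for other in strategies:
    print(f"Other prisoner: {other} -> best response: {best_response(other)}")

# The script prints "confess" in both cases, so (confess, confess) is the
# dominant-strategy equilibrium, even though (silent, silent) would leave
# both prisoners better off, as the footnote explains.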
xxvDwight D Eisenhower served as the 34th President from January 1953 to
January 1961, when he was succeeded by John F. Kennedy
xxviElinor Ostrom shared the 2009 Sveriges Riksbank Prize in Economic Sciences
in Memory of Alfred Nobel (often referred to as the Nobel Prize in Economics)
with Oliver E. Williamson "for his analysis of economic governance, especially
the boundaries of the firm"
xxviiThe European Council on Foreign Relations is a pan-European think-tank
established by George Soros
xxviiiWhile successful at the time, Shiite Government secret police, aided by the
American military, arrested various Sunni Members of the Sunni Awakening
Councils in late March 2009, notwithstanding that these people were helping
the American military
xxixEconomics was first formally taught as a discipline after the Great Depression
xxxInstead of supply creating its own demand, supply is now seen to be a function
of demand
xxxiReferring to the controversial 1971 award of the Nobel Peace Prize to West
Germany's Chancellor Willy Brandt for his “Ostpolitik” policy of reconciliation
with Communist Eastern Europe
xxxiiTwenty-one prominent American recipients of the Nobel Peace Prize include
President Theodore Roosevelt (1906) for his role in international dispute
arbitration, which led to peace between Russia and Japan; President Woodrow
Wilson (1919) for his role in ending World War I, the Treaty of Versailles and
facilitating the League of Nations; Martin Luther King (1964) for his
commitment to non-violent protest of African American civil rights; former US
Secretary of State Henry Kissinger (1973) for his role in negotiating a cease-fire
that ended the Vietnam War; Jimmy Carter (2002) for tireless efforts to spread
peace, democracy, human rights and development; and former Vice President Al
Gore and the IPCC (2007) for climate change leadership
xxxiiiIn politics, this is colloquially known as “playing the race card”, where race
includes racial origin or colour of the skin, religious belief, sexual persuasion,
physical or mental disability etc. Political parties usually tacitly agree not to
“play the race card” in campaigns because it inflames the worst of human
bigotry and often escalates to riots and murders
xxxivAuthoritarian, oppressive regimes and conservative democracies alike can
exhibit a kind of national psychopathy, for example, spying on their own citizens,
imprisonment without charge, suspension of habeas corpus, torture and public
lies. This can be extended to international relationships, for example, America's
so-called Coalition of the Willing comprising thirty members including United
Kingdom, Australia and Denmark. Australia forfeited its proud innocence and its claim that it had never used military force except in self-defence.
On 4 September 2009, the Chinese Ambassador Zhang Junsai responded to
Australia's criticism of China's military build-up with a diplomatic caution,
reminding Australia that China had never occupied one inch of foreign territory.
However, this assertion needs to be qualified by China's interpretation of
occupation. China considers itself to be peaceful, humble and providing
omnipresent rationality throughout its widespread provinces. Many in the West
have the opposite view. China is perceived to be aggressively expansionist
because of its 1950 so-called “peaceful liberation of its province of Tibet”,
1962 war with India over China's strategic occupation of the uninhabited region
of Aksai Chin, ever-present threats to reintegrate Taiwan by force, continued
repression of nationalist Uighurs in its province of Xinjiang Uyghur (East
Turkestan or Uyghuristan) and human rights abuses.
Chapter 3 Political Economy of the Anglo-American world view of climate
change shows how China and India formed an uneasy alliance to successfully
resist America's “might is right” approach in climate change negotiations.
Organisational psychopaths, whether individuals in the office or national
leaders, are difficult to identify because they adopt an overtly "conservative"
disguise (Clarke 2005). They thrive on the excitement of the chase, seeing "who
blinks first" and they will throw any amount of other people's money at a
campaign or litigation to create hysteria, chaos and confrontation.
Often they will lie without compunction - truth, nonsense, disinformation and
barefaced lies are all the same because the end justifies the means. For
example, the Australian Government's infamous “children overboard” lie about
refugee boat people. Usually, organisational psychopaths are not concerned in
the least about being found out for their lies. In fact, their main distinguishing
characteristic is an utter absence of remorse. They use accusations, lying,
bluffing, bullying and character assassination to advance their aims.
They become expert at casting "Fear, Uncertainty and Doubt" (FUD) by offhandedly making allegations that are without merit to divert attention from real issues and to put those seeking to root them out onto the defensive. A FUD smear tactic is often used where the initial publicity surrounding claims vastly overshadows any subsequent retraction or where the assertions cannot be checked with the third parties to whom they are attributed.
Makers of Kool, Viceroy, Raleigh and Belair cigarettes, Brown & Williamson
(1969) elucidate the “FUD” attack: “We have chosen the mass public as our
consumer for several reasons: - This is where the misinformation about smoking
and health has been focused. - The Congress and federal agencies are already
being dealt with - and perhaps as effectively as possible - by the Tobacco
Institute. - It is a group with little exposure to the positive side of smoking and
health. - It is the prime force in influencing Congress and federal agencies -
without public support little effort would be given to a crusade against
cigarettes. Doubt is our product since it is the best means of competing with the
"body of fact" that exists in the mind of the general public. It is also the means
of establishing a controversy. Within the business we recognize that a
controversy exists. However, with the general public the consensus is that
cigarettes are in some way harmful to the health. If we are successful in
establishing a controversy at the public level, then there is an opportunity to put
across the real facts about smoking and health. Doubt is also the limit of our
"product". Unfortunately, we cannot take a position directly opposing the anti-
cigarette forces and say that cigarettes are a contributor to good health. No
information that we have supports such a claim.”
Organisational psychopaths also use any technique they can to create pressure,
such as incessant delay, preventing routine things being finished and
determinedly side-tracking discussions. Brown & Williamson (1969) also
exemplify the often used diversionary tactic of setting up “straw men”, or
alternative subjects that are easily controlled: “Truth is our message because of
its power to withstand a conflict and sustain a controversy . If in our pro-
cigarette efforts we stick to well documented fact, we can dominate a
controversy and operate with the confidence of justifiable self-interest …. we
would want to be absolutely certain that there is no damage to our advertising
or to the consumer acceptance of our brands . So the first step for the
immediate future would be research . We are recommending basic research to
unearth specific problems in smoking and health that we can deal directly with.”
It is estimated that 1% to 3% of adult males and 0.5% to 1% of women exhibit
some degree of psychopathy. These people range from murderers, serial rapists,
con artists to predators at work and in social situations. Unfortunately, they are
attracted to positions of power in politics and public institutions, where they can
rise through ruthlessness rather than leadership. However, all share the same profile of self-gratification, and excessive sexual promiscuity can be a strongly identifying trait. Many of the finest sportsmen and women are found to be at
least mildly psychopathic.
There are common features for organisational psychopaths in political, social and corporate environments. An organisational psychopath is difficult to identify at first, but the indications become increasingly clear: for example, inconsistent lies, amorality, defamation, enjoyment of ruthlessness and a total lack of remorse. They can be very difficult to ferret out because they employ multi-agent predator strategies to amplify their tactics (Axelrod 1984).
Psychopathy is not to be confused with merely subjective behavioural choices by
people. Recent scientific evidence supports the fact that pathological liars
cannot control their habitual impulse to lie, cheat and manipulate others. Yang
et al. (2005a; 2005b) of the University of Southern California showed that
pathological liars had prefrontal cortex abnormalities. They found a 22%
increase in white matter and 14% decrease in grey matter compared to normal
controls. Autistic children were found to have the opposite characteristics.
When people are asked to make moral decisions, they rely on the prefrontal cortex of the brain, which has long been associated with the ability in most people to feel remorse or learn moral behaviour. In normal people, it is the grey matter (the brain cells connected by the white matter) that helps to keep the
impulse to lie in check. The results of this study are consistent with previous
studies on autistic children, who find it extremely difficult to lie and have an
opposite but complementary combination of white and grey matter. The
University of Southern California researchers suggested that lying takes a lot of
effort and the 22% more white matter in the brains of pathological liars provides
them with enhanced verbal skills to master the complex art of deceit. In
addition, the 14% less grey matter means they don't have the same moral
disinhibition as normal people do for misrepresentation.
Yang et al. commented that suppressing the truth takes considerable effort. Lying is almost mind reading, in so far as the liar needs to understand the mindset of the other person while suppressing his or her own emotions so as not to appear nervous. Their practical observations were that pathological liars could not always tell truth from falsehood and would contradict themselves in an interview; that they were manipulative and admitted to preying on people; and that they were very brazen in manner, yet very cool when talking about it. Aside from having histories of conning others
or using aliases, habitual liars also admitted to malingering, or telling lies to
obtain sickness benefits.
Whilst corporations and democracies ultimately recognise the psychopathic behaviour for what it is, the very existence of organisational psychopaths is a classic drama in which society suffers a permanent loss. Everyone with whom an organisational psychopath comes into contact loses. A particular victim is the company or country that mistakenly supports the organisational psychopath.
As China chided Australia, in attacking Iraq, a country that had never threatened them, the Anglo-American societies of America, the United Kingdom and Australia in the Coalition of the Willing conspired to violate international conventions on national sovereignty. In the process they brought shame on the institutions of Western democracy and lost their most valuable asset of all: their reputation.
Chapter 3 Political Economy of the Anglo-
American world view of climate change
3.1 Background
The previous chapter examined the changing Anglo-American world view and
emerging renaissance in constrained resource policy. This Chapter examines
the development of Anglo-American climate change policy and how the change
in Anglo-American world view is now influencing this policy.
Political systems don’t account for all the difference between the
United States and Europe. European private citizens, NGOs, and
corporations also have moved the needle. These Europeans have not
viewed climate change as a technological or an economic issue.
They have viewed it as a matter of basic common sense morality,
politics, economics and culture …. In contrast to Europe, where the
political system has created an opening for activism on behalf of
protecting the climate, the structure of American politics has been
an obstacle to action on this issue. That is, our federal system - and
particularly the United States Senate - empowers minorities to block
action. Beyond that, or perhaps as a result, our politics tend to
prioritize economic performance - at times almost entirely to the
exclusion of other policy priorities. Moreover, “low-expectation
pragmatism” can lead to half-measures …. while it is clear to
everyone that Europe, in particular, has led America to the point of
passing a real climate change law [Waxman Markey], this will be
sold in the United States as an example of American leadership and
independence …. chances are good that the U.S. will live up to
Winston Churchill’s famous quip that "America can always be
counted on to do the right thing, after it has exhausted all other
possibilities." After a decade of learning, the upside of American
pragmatism appears to be rising …. The new bridge that the U.S.
and Europe need to build together .... must be built on Europe’s
historic role as a leader on the issue, and must take advantage of
the United States’ self-centered “following-by-not-following” conceit
…. In short, we need to combine forces. We need to mobilize
Europe’s leadership on the issue: its moral vision, its emphasis on
lifestyles, its empowered minorities, its two millennia of experience
in constitutional construction, its technological elegance, and its
long-standing ties in key places around the world - from Russia to
Africa to Latin America to Southeast Asia. We also need to mobilize
America’s entrepreneurialism, imagination, regulatory uniformity,
and complimentary long-standing ties in other key places around the
world, such as East Asia, South Asia and Latin America.
Through the challenges and confusion of climate change, futurists like Antholis
are perceiving the gradual commencement of humanity's third industrial
revolution. It is important to understand the political economy underpinning
this momentous transition. McNeil, a scientific adviser to the Australian
Government, writes in The Clean Industrial Revolution (2009, p.6):
Climate change has given all fossil fuels the knockout blow …. The
clean industrial revolution this century is one where the fuel is free
and infinite, and the materials grown or recycled. …. the power of
the sun, wind, ocean and earth is infinite …. Fostering clean
technological innovation and a low carbon economy cannot be
initiated from market forces alone because, for the time being, the
market doesn't account for the cost of carbon emissions or the
inevitable longer-term transition beyond fossil fuels. Slashing
greenhouse gas emissions by governments is needed to kick-start
the revolution.
3.2 Climate change science development
Reconstructing past atmospheric CO2 concentrations from marine sediment records, Tripati et al. (2009) recently determined that the last time a 387 ppm atmospheric CO2 concentration had occurred was "during the Middle Miocene, when temperatures were [approximately] 3 to 6°C warmer and sea level 25 to 40 meters higher than present." At this time there was no permanent Arctic ice-cap and only a thin Antarctic polar cap. Tripati et al. found the atmosphere's CO2 concentration decreased synchronously "with major episodes of glacial expansion during the Middle Miocene (~14 to 10 million years ago; Ma) and Late Pliocene (~3.3 to ~2.4 Ma)".iii
Oreskes & Renouf (2008) describe two confidential Jason Group reports into
the effect of climate change on the planet and the fabric of society. These
reports were prepared for the U.S. Department of Defence and provided to
President Jimmy Carter.
In 1977 the Group focused on climate change. Drawing upon information from
the National Centre for Atmospheric Research (NCAR) in Boulder, Colorado,
the group developed a climate model called “Features of Energy-Budget
Climate Models: An Example of Weather-Driven Climate Stability”. iv
In 1979, the Jason Group published a remarkably foresighted report JSR-78-07
The Long Term Impact of Atmospheric Carbon Dioxide on Climate (MacDonald
1989). This report predicted that the atmospheric CO2 concentration would double by 2035 [which the IPCC now expects to occur by 2050]; the planet
would warm by 2-3°C [which accords with the IPCC's projections]; polar
regions could warm by up to 10-12°C and quickly melt; the world’s crop-
producing capacity and productivity could significantly decline, particularly in
marginal areas.
James Hansen
In 1988, James Hansen, Director of NASA's Goddard Space Center, testified to
the US Senate Committee on Energy and Natural Resources that CO2 pollution would lead to dramatic damage from global warming. He noted that the earth was warmer in 1988 than at any time in the 100-year history of measurement; that global warming could be ascribed to the greenhouse effect with 99% confidence; and that computer simulations showed the greenhouse effect would cause extreme climatic events such as summer heat waves. Hansen outlined
three policy scenarios that he had modelled, ranging from "business as usual"
to "draconian emission cuts" that would eliminate trace gas growth by 2000.
Courageously, he predicted that over the period from May 1988 to May 2008,
the earth would become "warmer than it has been in the past 100,000 years".
In the event, Hansen's prediction was technically wrong. With the benefit of hindsight it became apparent that Hansen's base year of 1988 was anomalously warm, and a number of subsequent years were cooler than it. Nevertheless, the substance of Hansen's prediction is correct and temperatures have continued to rise with a superimposed oscillation. v
In 2007, almost twenty years after Hansen's testimony, the world governments
of the Intergovernmental Panel on Climate Change (IPCC) concurred with him.
Hansen's current belief is that the only safe course of action is to urgently
lower the level of atmospheric CO2 from 385 ppm to 350 ppm (Pilkington
2008). Hansen is a member of the Tällberg Foundation, which published a full
page advertisement in the Financial Times, the International Herald Tribune
and the New York Times on 23 June 2008 to “Call upon all nations in the
ongoing climate negotiations to adopt 350 as the target to be reached
peacefully and deliberately, with all possible speed” (Tällberg Foundation
2008).
The Tällberg Foundation claims that the current discussion target of 450 ppm
and global mean temperature rise to 2°C above pre-industrial levels is the
wrong target and will have truly terrible consequences:
The oft-stated goal to keep global warming less than two degrees
Celsius (3.6 degrees Fahrenheit) is a recipe for global disaster, not
salvation .... the simple, yes shocking, truth is that we have gone too
far. We are going in the wrong direction and we have put planetary
systems, all inhabitants and generations to come in grave peril. It is
uncertain how long the planet can remain above the level of 350
ppm CO2 before cascading catastrophic effects spin beyond all
human control .... therefore, we must go back. We must cut carbon
emissions and draw down CO2 below the level of 350 ppm. If we are
to preserve the planet upon which civilisation has developed, we
have no choice but to make bold decisions that will change the way
the world works – together .... to avoid a world at 450 ppm CO2 is
the greatest challenge humanity has ever had to face.
In June 2008, on the 20th anniversary of his seminal 1988 testimony, the US
Senate Committee on Energy Independence and Global Warming again heard
James Hansen's testimony (2008a). Hansen criticised the goal to keep global
warming less than 2°C, saying:
Warming so far, about two degrees Fahrenheit over land areas,
seems almost innocuous, being less than day-to-day weather
fluctuations. But more warming is already “in-the-pipeline”, delayed
only by the great inertia of the world ocean. And climate is nearing
dangerous tipping points. Elements of a “perfect storm”, a global
cataclysm, are assembled .... The disturbing conclusion ... is that the
safe level of atmospheric carbon dioxide is no more than 350 ppm
(parts per million) and it may be less. Carbon dioxide amount is
already 385 ppm and rising about 2 ppm per year. Stunning
corollary: the oft-stated goal to keep global warming less than two
degrees Celsius (3.6 degrees Fahrenheit) is a recipe for global
disaster, not salvation.
• an end to using China and India as scapegoats for non-action. Western
countries still have by far the highest emissions per capita and have
been (again, by far) the greatest source of accumulated emissions
leading to the current climate exigencies.
Hansen also expressed his personal opinion that the chief executives of large
energy companies such as Exxon Mobil and Peabody Coal should be tried for
crimes against humanity because they used disinformation to discredit the link
between global warming and burning fossil fuels. He likened this to the
disinformation campaign by tobacco companies such as R. J. Reynolds that
sought to bring the link between smoking and cancer into disrepute.
Al Gore
Former Vice President Al Gore (2008) tirelessly campaigns for the rise of
another hero generation with a sense of historic mission to solve the climate
crisis by changing the political will in America and laying a bright and
optimistic future for the world. He sees the mission of this new generation of inspired activists as being as singularly momentous as the actions of the fathers of the Declaration of Independence, the people that ended slavery and the people that gave women the vote.
Americans accepting that human activity causes global warming and that the
earth is heating up in a significant way.
Gore has concluded that the solution to climate change is a revenue-neutral
carbon emissions tax, with the proceeds replacing employment taxes (a form of
taxation first introduced in Germany by Bismarck in the nineteenth century).
Like James Hansen, Al Gore flatly says: “No new coal plants that do not
capture and store their own CO2.”
Gore says that a few concerned people doing things for themselves, such as
changing light bulbs, driving hybrid vehicles, digging geothermal wells and
installing photovoltaic panels on their roofs, is all well and good, but that
we also need to change the laws and solve the crisis in democratic,
good-citizenship behaviour.
By July 2008, Gore had sharpened his focus even further. He boldly challenged
his fellow Americans to take an environmentally radical perspective and become
completely green in electricity generation: “I’m going to issue a strategic
challenge that the United States of America set a goal of getting 100 percent of
our electricity from renewable resources and carbon-constrained fuels within
10 years …. We need to make a big, massive, one-off investment to transform
our energy infrastructure from one that relies on a dirty, expensive fuel, to fuel
that is free” (Broder 2008; Herbert 2008). Gore also proposed that payroll tax
be cut to offset the inevitably higher prices for fuel and electricity.
• a national smart grid for the transport of renewable electricity from
where it is generated to consumers in cities and smart ways for
consumers to control usage
• help for automakers to move production to plug-in hybrids
• the retrofit of buildings with insulation and energy-efficient windows and
lighting
• a cap on emissions that puts a price on carbon.
Illustration 9: IPCC Report Working Group III Figure SPM.8: Stabilisation scenario categories
Illustration 10: IPCC Report Working Group III Table SPM.5: Characteristics of post-TAR stabilisation scenarios
In May 2007, member governments of the IPCC permitted for the first time the
display of a graph of greenhouse gas concentrations in parts per million versus
expected temperature rise, as shown in Illustration 9 above (IPCC 2007). Coloured
shading shows the concentration bands for stabilisation of greenhouse gases in
the atmosphere corresponding to the stabilisation scenario categories in
Illustration 10 above.
Illustration 10 summarises the actions and consequences of various CO2
reduction policies. For example, stabilisation scenario A2 from Illustration 9
implies a mean temperature rise of 2.4°C to 2.8°C, with a reduction in
emissions of between 50% and 85% on 1990 levels, CO2 emissions peaking
sometime between 2000 and 2020, and the CO2 concentration stabilising between
490 ppm and 535 ppm. The black line in the middle of the band in Illustration 9
is the best-estimate climate sensitivity of 3°C; for example, a greenhouse gas
concentration of 450 ppm will produce a mean temperature rise of about 2°C.
The red line provides the upper bound of the likely range of climate
sensitivity at 4.5°C, while the blue line shows the lower bound at 2°C.
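The mapping from concentration to eventual warming implied by these sensitivity
lines can be reproduced with the standard logarithmic forcing approximation. The
functional form, the 280 ppm pre-industrial baseline and the Python sketch below
are illustrative assumptions of mine, not calculations taken from the IPCC report.

    from math import log

    def equilibrium_warming(concentration_ppm, sensitivity_per_doubling=3.0,
                            preindustrial_ppm=280.0):
        """Approximate equilibrium warming (deg C) for a CO2-equivalent
        concentration, using the common logarithmic forcing approximation."""
        return sensitivity_per_doubling * log(concentration_ppm / preindustrial_ppm) / log(2.0)

    # Best-estimate sensitivity of 3 deg C per doubling gives roughly 2 deg C at 450 ppm,
    # broadly consistent with the black line described above.
    print(round(equilibrium_warming(450), 2))         # ~2.05
    print(round(equilibrium_warming(450, 4.5), 2))    # upper bound of the likely range, ~3.08
    print(round(equilibrium_warming(450, 2.0), 2))    # lower bound, ~1.37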
Based on the research of some 4,500 scientists and 2,500 peer reviewers, the
IPCC agreed that limiting global temperature rise to 2°C (3.6°F) was necessary
to avoid severe climate change damage. However, the IPCC conceded that we
are already on a path that will cause more than 2°C warming so policy might
need to be set around a 3°C rise, with preparation for quite large
consequences such as species loss, reduced rainfall in Australia, an increasing
frequency of high-force tornadoes in America and nonlinear feedback loops
exacerbating global warming, such as methane release from ocean beds.
The next IPCC Assessment Report (AR5) is due in 2014 and will consider risk-
reduction strategies. However, scientists met in Copenhagen in March 2009 to
undertake an interim update of the IPCC's 2007 Fourth Assessment Report
(AR4). The conference concluded that global warming is already 50% greater
than expected. This places the world on track for the worst-case scenario of
the Fourth Assessment Report, and a global temperature rise of between 3°C
and 5°C is now expected.
Non-CO2 greenhouse gas pollution
CO2 emissions constitute only half of greenhouse gas emissions. The other
greenhouse gases are black soot, nitrous oxide, methane and man-made gases
such as hydrofluorocarbons, perfluorocarbons and sulphur hexafluoride (SF6).
The emission of many of these gases is easier and cheaper to control than the
emission of CO2. As a fall-back position, should a comprehensive international
agreement not be reached in Copenhagen, policy makers see non-CO2 gases as a
policy deliverable.
Black carbon soot is quite important because its particulates blacken snow and
ice, causing the surface to absorb more radiation and directly contributing to
the melting of glaciers and polar ice caps. In addition, millions of human deaths each
year are attributed to soot pollution. Fortunately, soot can be readily and
cheaply abated using diesel filters and more efficient cooking stoves.
Harmful man-made gas emissions are also relatively easy to abate. For
example, under the Montreal Protocol to protect the ozone layer, America and
other industrialised countries have addressed 97% of chlorofluorocarbon
emissions (see below). A major new challenge is the emission of
hydrofluorocarbons (HFCs) from refrigerators and air conditioners, which
continues to grow strongly and has 11,000 times the global warming effect of CO2.
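Comparisons of this kind rest on global warming potential (GWP) factors, which
convert a mass of any greenhouse gas into CO2-equivalent tonnes by simple
multiplication. The sketch below takes the 11,000 figure quoted above at face
value; the nitrous oxide value is an approximate, commonly cited GWP added for
illustration only.

    def co2_equivalent(tonnes, gwp):
        """Convert tonnes of a greenhouse gas to tonnes of CO2-equivalent
        using its global warming potential (GWP)."""
        return tonnes * gwp

    print(co2_equivalent(1, 11_000))   # 1 t of the HFC quoted above ~ 11,000 t CO2-e
    print(co2_equivalent(1, 300))      # 1 t of nitrous oxide ~ 300 t CO2-e (approximate GWP)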
The scientists at the March 2009 Copenhagen meeting examined three policy
scenarios for global temperature rise, using data from the United Kingdom's
Met Office Hadley Centre:
• Kyoto plus - a new round of Kyoto-type targets at Copenhagen in
December 2009 that leads to rising emissions until 2030 with a global
temperature rise of 3.31°C by 2100
• immediate strong policy measures including emissions permits subject
to a cap set by the United Nationsxi - emissions would peak in 2017 and
the temperature rise would be 2.89°C by 2100. This exceeds the
European Union's target of 2°C, which has been set as the danger
threshold for extreme weather conditions, floods, the spread of deserts,
sea level rise and perhaps the release of methane from Siberian permafrost.
These results led many scientists to conclude that global temperature rise
cannot be contained even with the strongest policies.
The scientists noted that earlier estimates of CO2 absorption by marine and
terrestrial ecosystems are overly optimistic: oceans are becoming more acidic,
and the deeper layers of water being exposed by stronger winds in warmer
weather are already saturated with carbon; Northern Hemisphere land is
absorbing more heat than expected, which reduces CO2 sequestration by plants;
and wildfire incidence is increasing significantly, contributing about a third
as much carbon to the atmosphere as burning fossil fuels. In conclusion, the
scientists suggested that the rate of global warming is likely to be much
faster than recent predictions.
A fixed tranche of atmospheric emissions capacity
The IPCC concluded that atmospheric temperature rise needed to be limited to
2°C in order to minimise the adverse effects of climate change. However, the
various feedback mechanisms of the carbon cycle governing the chain from
emissions, to carbon in the atmosphere and ocean, to atmospheric temperature
rise, and thence to both physical and economic damages remain uncertain.
In the cross-compared studies of M. Meinshausen et al. and Allen et al., the
authors deal with two issues. The first is the tranche of emissions from pre-
industrial times that will cause a 2°C temperature rise. The second is the
remaining part of this tranche available over 2000-2050.
Allen et al. (2009) found that 3,670 Gt CO2 (1,000 GtC) of emissions from the
time of the Industrial Revolution c.1750 would lead to a 2°C rise in about 2070,
assuming emissions peak in about 2020 at 44 Gt CO2 (12 GtC) per year and
decline sufficiently to limit the atmospheric concentration of CO2 in 2100 and
beyond to 490 ppm. The 2°C temperature rise occurs at 470 ppm and has a 5%
to 95% confidence band of 1.3°C to 3.9°C.
Of the 3,670 Gt CO2 (1,000 GtC) aggregate, about 1,615 Gt CO2 (440 GtC or
44%) occurred before the year 2000. This led to CO2-attributable warming by
the year 2000 of 0.85°C, with a 5-95% confidence range of 0.6°C to 1.1°C.
From 2000, a further 2,055 Gt CO2 (560 GtC or 56%) of emissions would lead
to the IPCC limit of a 2°C rise over the pre-industrial temperature. Of this
post-2000 tranche of 2,055 Gt CO2, it is expected that only 1,550 to 1,950 Gt
CO2 could be emitted over the years 2000 to 2049. Of the total 3,670 Gt CO2
from the time of the Industrial Revolution leading to a 2°C rise in temperature,
2,050 to 2,100 Gt CO2 of emissions would occur after the year 2000.
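The figures above mix GtC and Gt CO2. The conversion is the ratio of molecular
weights (44/12, about 3.67), and the tranche shares follow directly; the short
sketch below simply re-derives the quoted numbers.

    C_TO_CO2 = 44.0 / 12.0       # molecular weight ratio: 1 GtC is about 3.67 Gt CO2

    total_gtc = 1000.0           # cumulative emissions associated with ~2 deg C peak warming
    pre_2000_gtc = 440.0         # emitted from c.1750 to 2000
    post_2000_gtc = total_gtc - pre_2000_gtc

    print(round(total_gtc * C_TO_CO2))      # ~3667 Gt CO2 (quoted as 3,670)
    print(round(pre_2000_gtc * C_TO_CO2))   # ~1613 Gt CO2 (quoted as 1,615), i.e. 44%
    print(round(post_2000_gtc * C_TO_CO2))  # ~2053 Gt CO2 (quoted as 2,055), i.e. 56%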
In reviewing the dynamic performance of their model, the authors noted that
the relationship between cumulative emissions and peak warming is robust
and insensitive to the timing and rate of emissions. They suggest that policy
makers adopt Cumulative Warming Commitment (CWC) as a policy definition.
CWC is defined as the peak warming response to aggregate CO2 emissions and
has a normalised value of 1.9°C per TtC (i.e. per 1,000 GtC), with a 5–95%
confidence range of 1.4°C to 2.5°C per TtC.
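The CWC figure amounts to a linear rule of thumb: peak warming is proportional
to cumulative carbon emitted. A minimal sketch, using only the values quoted
above:

    CWC_BEST = 1.9                # deg C of peak warming per TtC (1,000 GtC)
    CWC_LOW, CWC_HIGH = 1.4, 2.5  # 5-95% confidence range

    def peak_warming(cumulative_gtc, cwc=CWC_BEST):
        """Peak warming (deg C) implied by cumulative carbon emissions."""
        return cwc * cumulative_gtc / 1000.0

    # The full 1,000 GtC tranche from pre-industrial times:
    print(peak_warming(1000))                                         # 1.9 deg C best estimate
    print(peak_warming(1000, CWC_LOW), peak_warming(1000, CWC_HIGH))  # 1.4 to 2.5 deg C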
The study finds that the G8's vision of a 50% reduction in world emissions by
2050 (compared to 1990 levels) has a 12% to 45% probability of exceeding
2°C. If abatement is delayed such that in 2020 emissions remain more than
25% above 2000 levels, the probability of exceeding 2°C rises to 53% to 87%.
al. 2009; Spiegel Online 2009; Schwägerl 2009a). This declines to 600 Gt CO2
for a three-in-four chance.
Schellnhuber et al. (2009) propose that the 750 Gt CO2 “emissions resource”
be allocated to countries on a per capita basis. The aggregate per capita
entitlement would be 110 tonnes of CO2 for the period 2010-2050. The following
table sets out a CO2 budget by share of global population in 2010. The table
also shows the number of years of “emission resource” that each nation would
have at 2008 emissions levels.
Table: “Future responsibility”: the period 2010-2050, with a 67% probability of
respecting the 2°C safety barrier (Source: Schellnhuber et al. 2009, p.28,
Table 5.3.2, Option II; * appended from Australian Department of Climate Change
2009, Table ES.1xii)
It may be noted in the above table that the “emission resources” for Australia,
America, Russia, Germany, Japan and the EU are only 3, 6, 9, 10, 11 and 12
years, respectively. China's “emission resources” and the World average are 24
and 25 years respectively, both far short of the 40-year period. Brazil,
Indonesia and India have “emission resources” of 46, 67 and 88 years,
respectively.
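The allocation mechanics behind these figures are straightforward: a country's
budget is its share of 2010 population multiplied by the 750 Gt CO2 global
budget, and its “years of emission resource” is that budget divided by its 2008
emissions. The sketch below reproduces the arithmetic with hypothetical inputs
rather than the Schellnhuber et al. data.

    GLOBAL_BUDGET_GT = 750.0   # Gt CO2 for 2010-2050, 67% chance of staying below 2 deg C
    # Spread over roughly 6.9 billion people, this is about 110 t CO2 per person.

    def national_budget(population_share):
        """Country budget in Gt CO2, allocated purely per capita."""
        return GLOBAL_BUDGET_GT * population_share

    def years_of_resource(population_share, annual_emissions_gt):
        """Years the national budget lasts at constant 2008 emission levels."""
        return national_budget(population_share) / annual_emissions_gt

    # Hypothetical country: 0.3% of world population, 0.55 Gt CO2 emitted in 2008.
    print(round(national_budget(0.003), 2))           # ~2.25 Gt CO2
    print(round(years_of_resource(0.003, 0.55), 1))   # ~4.1 years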
The implications for high-emissions countries are extraordinary. For example,
Germany has a target of reducing emissions by 40% by 2020 (compared to
1990 levels). This has been considered exemplary, but to meet the above CO2
budget the target would need to be increased to a 60% reduction by 2020
(compared to 1990 levels), with a total emissions moratorium by 2030.
Economic damage
Climate change undermines economic progress through a feedback loop from
economic activity and emissions to temperature rise, which causes socio-
economic damages. However, the damage function is an area of major
uncertainty due to our lack of previous experience and a limited understanding
of other complex effects that act both directly and in combination with one another.
The cost of climate change has historically been seen as deaths resulting from
extreme weather events, such as flooding and cyclones. This is because
approximately 97% of losses relate to weather events, whilst the other 3% of
losses are due to earthquakes, tsunamis or volcanic eruptions. Over the last
decade the countries affected by such extreme weather-related disasters have
included China, India, Bangladesh, Indonesia, Japan, Philippines, Dominica,
Vanuatu, Samoa and Myanmar.
The International Organisation for Migration forecasts that 200 million people
will be displaced by environmental pressures by 2050.
Climate change sceptics
This doctoral research assumes that climate change policies need to be
investigated because the governments of the world, under the auspices of the
United Nations Framework Convention on Climate Change (UNFCCC), have
agreed to address climate change. Underlying their decision is acceptance by
the Intergovernmental Panel on Climate Change (IPCC) of three decades of
scientific research demonstrating that global warming is predominantly a man-
made phenomenon.
Prior to the June 2009 UNFCCC meeting in Bonn, American President Obama
described the climate change situation as a “potentially cataclysmic disaster”
(see below). A report released by the United States White House and the United
States Global Change Research Program shortly thereafter underscored the
IPCC's conclusions “Observations show that warming of the climate is
unequivocal. The global warming observed over the past 50 years is due
primarily to human-induced emissions of heat-trapping gases. Warming over
this century is projected to be considerably greater than over the last century.
The global average temperature since 1900 has risen by about 1.5°F. By 2100,
it is projected to rise another 2 to 11.5°F in the U.S.” (Karl et al. 2009,
Executive Summary).xiii
However, the science of climate change remains controversial. This has major
implications for policy makers who need to incur great cost and inconvenience
to fundamentally change the technologies of production and consumption in
economies. Therefore, the issue of climate scepticism is addressed here, in the
context of policy rather than in the context of a discussion about the scientific
basis of climate change.
Nierenberg report
Oreskes & Renouf (2008) have established the inception or birth date of
climate scepticism.xv In 1980, President Ronald Reagan commissioned a third
opinion from the U.S. National Research Council's Carbon Dioxide Assessment
Committee (Nierenberg 1983), using Congressional funding appropriated in 1979.
The chair of the committee was William Nierenberg, a member of President
Reagan's transition team, director of the Scripps Institution of Oceanography
and a member of the Jason Group. He had been part of the Manhattan
Project team that created the atomic bomb.
To drive home the case, Nierenberg even argued that global warming was
benign and nothing new, that it would take many years to significantly affect
the planet, that humans had a successful capacity to adapt to new challenges
and that there was a good chance of finding new technological solutions.
would be better spent improving our knowledge (including knowledge of
energy and other processes leading to creation of greenhouse gases)
than in changing fuel mix or use (Chapters 1, 2, 9)
• It is possible that steps to control costly climate change should start
with non-CO2 greenhouse gases. While our studies focused chiefly on
CO2, fragmentary evidence suggests that non-CO2 greenhouse gases
may be as important a set of determinants as CO2 itself. While the costs
of climate change from non-CO2 gases would be the same as those from
CO2, the control of emissions of some non-CO2 gases may be more easily
achieved (Chapters 1, 2, 4, 9)
Nierenberg's report gave rise to the term “climate change sceptic.” One year
later Nierenberg cofounded the George C. Marshall Institute think tank, which
denies climate change is anything more than normal and natural fluctuation.
Nierenberg himself continued as an entrenched climate change critic.
• James Hansen's 1988 prediction was technically wrong. There is no
evidence that emission of CO2 is driving up global temperatures.
Statistics from the Hadley Centre and University of East Anglia show
that carbon emissions have been rising while global temperatures have
been stable or trending down
• there is no way of knowing whether the temperature and economic
modelling outcomes are realistic, because the models rest on assumptions
far removed from our direct experience; thus there is a great deal of
uncertainty between cause and effect
• temperature rise models generally only extend to 2100 when other
dynamic effects may ameliorate the problem in longer time frames, such
as the sea absorbing CO2
• NASA's solar cycle 24 of increased sunspot activity has not commenced
as expected, so the planet may face a cooling cycle (in which a bit of
human-induced global warming would be appreciated)
• there is considerable scepticism that humans can do anything about
global warming, given the scale of CO2 emissions that have nothing to do
with human activity.
The U.S. Chamber of Commerce has intensively lobbied against the Clean Air
Act, calling for climate change science to be put on trial.xvii Its strident
sceptical position led Exelon (America's largest nuclear utility), Pacific Gas &
Electric, PNM Resources (New Mexico's electricity utility) and Apple Computer
to resign their memberships (Krauss & Galbraith 2009). In addition, General
Electric, Johnson & Johnson and Nike issued statements distancing themselves
from the Chamber's position.
However, in what is now seen as one of its key failings, the Kyoto Protocol
avoided granting the same land conservation concession to other nations. As a
consequence, about 30 million acres of rainforest continued to be cleared
annually, which constitutes about 20% of all man-made emissions.xix European
countries argued at the time that paying poor countries to refrain from
rainforest deforestation was an improper way for wealthy countries to meet
their climate change obligations.
The IPCC has emphasised that it is essential that by 2010 all Governments
introduce policies to reduce greenhouse emissions. The IPCC estimates that if
member governments do act quickly, climate change can be brought under
control at a reasonable cost. It suggests that this will require a high level of
energy conservation, active investment in renewable energy and new
technologies, and emissions trading with a price on carbon-based energy of
US$20-50 per tonne of CO2.
In December 2008, the United Kingdom Parliament passed the Climate Change
Act by 463 votes to three. The United Kingdom became the first country in the
world to unilaterally legislate for an 80% reduction in emissions by 2050
(compared to 1990 levels). An important feature of this legislation is that it
self-entrenches to irrevocably bind future governments.
The CCC is seeking a 40% emission reduction by 2020 from the power sector,
using wind, nuclear, carbon capture and storage and increased energy
efficiency. It is widely understood that implicit in this is the complete
decarbonisation of electricity generation and switching large parts of the
economy such as cars and gas home heating to electricity.
Bosquet (2000) surveyed 139 models with environmental taxes and concluded
that in the short to medium term a double dividend can exist if emissions
reductions are significant and environmental tax revenues are used to reduce
distorting taxes such as payroll tax (and wage-price inflation is prevented).
Bosquet also found that energy-intensive industries may be impacted, but this
is unavoidable if the environmental goals are to be achieved. Revenue
recycling and support for vulnerable elements of society are able to overcome
harm to households that spend a greater share of their income on goods which
produce emissions.
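The revenue-recycling mechanism behind the double dividend can be reduced to a
single budget-neutrality condition: carbon tax receipts finance an equal-sized
reduction in a distorting tax such as payroll tax. The figures in the sketch
below are hypothetical and purely illustrative.

    def payroll_rate_after_recycling(carbon_tax_per_t, emissions_gt, wage_bill, payroll_rate):
        """Payroll tax rate after recycling carbon tax revenue,
        holding total government revenue constant (revenue neutrality)."""
        carbon_revenue = carbon_tax_per_t * emissions_gt * 1e9   # $/t times Gt gives $
        return payroll_rate - carbon_revenue / wage_bill

    # Hypothetical economy: $25/t carbon tax, 0.5 Gt CO2, $1 trillion wage bill, 10% payroll tax.
    print(round(payroll_rate_after_recycling(25.0, 0.5, 1.0e12, 0.10), 4))
    # 0.0875 -- the payroll tax rate falls from 10% to 8.75% with no net revenue change.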
Bayindir-Upmann & Raith (2003) found that only in low-tax countries does a
revenue-neutral green tax reform yield both better environmental quality and
higher employment. In high-tax countries, the positive economic effect that
helps employment in turn leads to the environmental dividend component of the
double dividend being lost. The authors suggest that this may be addressed by
abandoning revenue neutrality,
pursuing more drastic tax reforms and using revenues for public works rather
than reducing payroll and income taxes.
In 1809, David Ricardo proposed that the rent of a resource (such as a piece of
land or a person's labour) is equal to the economic value of the best use of that
resource compared to using the best rent-free resource for the same purpose.
In other words, the resource owner appropriates the value of any excess
production because of the more advantageous resource. For example, the
value of marginal land for agriculture would be nil so the rent of that land
would be nil. Rent would increase with the fertility of the soil, irrespective of
any contribution by the landowner.
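Ricardo's rule reduces to a one-line calculation: rent is the surplus a resource
earns over the best rent-free (marginal) alternative. A small sketch with
hypothetical crop values:

    def ricardian_rent(value_on_resource, value_on_marginal_resource):
        """Rent of a resource: its output value less what the best rent-free
        (marginal) resource would yield in the same use."""
        return max(0.0, value_on_resource - value_on_marginal_resource)

    print(ricardian_rent(500, 300))   # 200 -- the landowner appropriates the $200/ha surplus
    print(ricardian_rent(300, 300))   # 0   -- marginal land earns no rent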
Bento & Jacobsen (2007) also disagree with the growing number of studies
that suggest fiscally-neutral swaps of environmental taxes for labour taxes
increase costs and eliminate the double dividend. These studies claim that the
positive welfare effect of revenue recycling (i.e. reducing marginal tax rates)
does not offset the negative welfare effect arising from promoting alternative
products in the presence of pre-existing labour taxes that already distort
factor markets.
The authors criticise the underlying assumptions in these models: that labour
is a unique input and that the production of all goods exhibits constant returns to scale.
They argue that it is well established in public finance literature that a uniform
commodity tax system fails to adequately tax the rents from a fixed factor of
production. The quantity of a fixed factor of production cannot be changed in
the short-run, for example, land & buildings, plant & equipment and key
personnel. In the long run there are no limitations on scale.
Bento & Jacobsen criticise simplistic models that do not take into account that
rents on fixed factors are not fully exhausted. They claim that these models are
in fact beginning with flawed, non-optimal tax systems. Contrary to these
models, the authors hypothesise that it is possible for a double dividend to
occur where there are partially untaxable Ricardian rents for fixed factors in
the production of dirty goods.
However, the presence of the fixed factor means that part of the environmental
tax falls on the fixed factor, so the price of the dirty good does not increase
by the whole of the environmental tax. This reduces the welfare gain from improving
environmental quality. Fortunately, this reduction in benefit is mitigated by a
correspondingly lower tax-interaction effect because the environmental tax
moves the tax burden from labour to the fixed factor.
Experience with environmental taxes in Germany
Beuermann & Santarius (2006) find that five years after Germany introduced
ecological tax reform (ETR) in 1999, Germans still regarded environmental
policy and economic policy as separate issues. There was both massive criticism
of and unconditional support for environmental policies. As with all fiscally neutral
environmental taxes, the two virtuous macroeconomic effects were meant to
orient production towards energy efficiency and innovation and create
additional jobs due to reduced labour costs. Despite public concern that long
term unemployment was increasing, Germans neither understood nor
welcomed the linking of environmental taxes with employment objectives.
Germans' general distrust of politics and perceived information asymmetries
led the coalition of the Social Democrats and Greens to stop increasing
environmental tax rates beyond 2003.
The ETR had sought to be fiscally neutral by shifting the tax burden from
labour to pollution. It had also sought to achieve the double dividend proposed by
Hourcade & Robinson (1996).
Deroubaix & Leveque found that the government did not disseminate
information and develop consensus to build acceptance among key groups.
They also found that the distributive effects of a tax such as the ETR led to
different perceptions in different groups. In unexpected outcomes, businesses
that received a net benefit from ETR were the ones not exposed to
environmental issues. These businesses remained uninformed and relatively
ambivalent. However, the industries that were required to pay the tax
strenuously objected to ETR. These were energy intensive companies and
those companies with small highly skilled workforces that would not benefit
from lower labour taxes. The issue of whether or not to tax the energy used in
industrial processes was never resolved and remained highly contentious among
the Government's own policy makers.
Carbon leakage
Environmental regulations create higher production costs for source emitters
of CO2 and other greenhouse gases. The usual assumption is that these higher
production costs will flow through into downstream producers in the form of
higher prices. However, the source emitters or the downstream direct and
indirect emitters may not be able to pass on price increases. This provides the
incentive for large industries to relocate to other jurisdictions where carbon
pollution is unregulated. This relocation is called “carbon leakage”.
Arguelles, Benavides & Junquera (2006) studied the Asturias region of Spain
where Arcelor produces iron and steel using energy from coal-fired generation.
Low cost, coal-fired electricity has traditionally secured the region's position as
a low cost producer in the global iron and steel industry, which has been
suffering from competitiveness problems for many years.
It is quite apparent that environmental policy has the potential to change the
comparative advantages of regions and nations. This will favour some nations
and industries and reduce, perhaps fatally, the competitiveness of others.
Governments are unable to stand in the way of very significant pressures such
as companies relocating internationally for lower cost production.
Arguelles, Benavides & Junquera found that sector accountability for CO2
emissions is radically modified if Input-Output analysis is applied to allocate
responsibility for direct, indirect and induced emissions. At the local scale,
the authors found the anomaly that certain sectors will bear the economic costs
of CO2 emissions while other sectors will be exempt, even though these
downstream sectors use outputs from the sectors that are most affected by the
regulations.
The authors confirm the empirical analysis in Sijm et al. (2004, Section 5.2.2,
p.20), which found that environmental policies have, to date, not been
influential motives for the relocation of energy-intensive process investments,
like iron & steel plants, to developing countries. Instead, factors like growth
in regional demand and wage levels have been more important.
On 31 May 2002, the European Union and its then fifteen member countries
ratified the Kyoto Protocol. The European Union set a “20/20/20” target of
reducing emissions by at least 20% by 2020 (compared to 1990 levels).xxi In
addition, the European Union committed to increasing the proportion of
energy from renewable sources (solar, wind, hydro and nuclear) from 8.5% to
20% and reducing energy consumption by 20%. The 2050 target for CO2 was a
reduction of between 60% and 80% (compared to 1990 levels).
In December 2002, the European Union introduced an emissions trading
system with quotas across the six industries of energy, steel, cement, glass,
brick making and paper/cardboard. Chastened by the failure of French and
German environmental taxes, the European Union introduced a differential
system where emitters received emissions permits free of charge and could
sell surplus permits to those emitters which require additional permits. This
led to widespread profiteering, with companies such as France's EDF and
Germany's E.On passing on the price of permits to consumers regardless of the
fact that the companies had received the permits for free.
A second major error was allowing too many permits in the earliest phase of
the scheme from 2005 to 2007. This led to a glut. The price of permits
initially rose to €30 ($42) in May 2006 before crashing to 2 euro-cents (3¢) by
the end of 2007. This eliminated all incentive for companies to ameliorate or
abate their emissions. As a result, the policy miserably failed to achieve any
of its objectives or reduce emissions over the first three-year phase. The only
winners in the emissions trading scheme were banks such as Barclays and
Goldman Sachs that traded CO2 permits in a market estimated at €62.7 billion
(US$90 billion) in 2008 (Scott 2009).
A third major error in the scheme was to create a double cost for industries
such as metal makers, chemical plants and paper mills. These industries have
their own carbon quotas and in addition were forced to pay higher power
prices.
In November 2008, the European Union (which had now expanded to twenty-
seven countries) extended the same targets to post 2012, when the Kyoto
Protocol no longer applies. A major feature in achieving the “20/20/20”
objectives in 2013 and thereafter will be that the differential system where
emissions permits are granted for free becomes an absolute system with all
emissions permits auctioned.
exemptions demanded by Germany and Italy for their steel, chemicals, cement,
aluminium and automobile manufacturing industries.
However, all industries in the European Union will need to reduce emissions
each year. Polluting power producers will receive subsidies and firms that face
international competition, which is estimated to be more than 90% of
European Union firms, will receive free emissions permits until 2020 if their
costs rise more than 5% due to buying permits.
The nine Eastern European countries that threatened to veto the post-Kyoto
“20/20/20” deal because of their highly polluting coal- and lignite-fired power
stations were assuaged by free permits. When auctions commence in 2013,
countries with per capita income under half of the European Union average
and with more than 33% of their power from coal-fired plants will receive free
permits equal to 70% of their average annual emissions from 2005-2007. This
will decline to zero at 2020.
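This concession amounts to a declining free allocation between 2013 and 2020.
The sketch below assumes a straight-line phase-out from 70% to zero, which is
one plausible reading of “decline to zero at 2020” rather than the scheme's
published schedule.

    def free_permit_share(year, start_year=2013, end_year=2020, start_share=0.70):
        """Share of average 2005-2007 emissions granted as free permits under the
        concession, assuming a straight-line phase-out (an assumption, not the EU rulebook)."""
        if year < start_year or year >= end_year:
            return 0.0
        return start_share * (end_year - year) / (end_year - start_year)

    for y in (2013, 2016, 2019, 2020):
        print(y, round(free_permit_share(y), 2))   # 0.7, 0.4, 0.1, 0.0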
The United Kingdom was compensated with an extra €3 billion for carbon
capture and storage development, increasing the total subsidy to €9 billion.
The European Union agreement also allows countries to earn emissions credits
by clean development mechanisms (CDMs), which are projects for emissions
amelioration in developing countries. This remains a controversial provision in
the lead-up to the UNFCCC's December 2009 meeting in Copenhagen.
The European Union also hopes that America will join with Europe to create a
global carbon market. As emissions permits in developing countries are
expected to be cheaper than in industrialised countries, European Union
members will be able to buy a proportion of their permits from foreign
countries. Those that meet the power and per capita income test will be able
to buy a higher proportion of permits.
protocol to be agreed at the United Nations climate meeting in Copenhagen in
December 2009.
In placing the best spin on this lack-lustre outcome of APEC, the host nation's
then Prime Minister, John Howard, noted that it marked the first time that
large polluting countries such as the United States, Russia and China had
agreed that they each have to make commitments to stop human activity from
causing dangerous changes to the climate. Commentators noted that the
wealthy Anglo-American nations regarded climate change as a "hundred year
agenda" and so there was no imperative to do anything immediately.
The Sydney conference also nimbly sidestepped the growing divide between
wealthy and developing nations over the Kyoto Protocol. Wealthy nations like
the USA and Australia had not ratified the Kyoto Protocol, claiming possible
adverse effects on economic and social growth. Most wealthy countries that
did sign, such as Canada, have failed to meet their targets.
Papua New Guinea's Prime Minister Sir Michael Somare told fellow leaders “While we
recognise that Kyoto Protocol has its flaws, it needs to be improved and
strengthened - not weakened.”
China remained strongly of the view that developing nations have a lesser role
to play and should be allowed to get on with economic growth and improve
lifestyle to Western standards. It has adopted targets, albeit rather low and
unclear, to reduce the energy intensity of economic activity by 20% by 2010
(compared to 2005 levels) and to sharply increase the contribution by
renewable energy to total energy supply.
China's President Hu Jintao chided developed nations over the need for them
to strictly abide by their targets under Kyoto to compensate for years of
booming economic activity that has produced copious CO2 emissions. Hu said
industrialised countries have polluted for longer and thus must take the lead in
cutting emissions and providing money and technology to help developing
countries clean up. He reminded the wealthy countries that (Yeoh & Gosh
2007) “In tackling climate change, helping others is helping oneself.”
Although the Anglo-American nations didn't have the ears to hear, Hu's theme
increasingly haunted them for almost another two years. America remained
intransigent in its dogged insistence that China adopt binding targets equal
to America's. The issue finally boiled over at the UNFCCC Bonn meeting
in June 2009, resulting in American negotiators desperately seeking a face-
saving solution to appease their own Senate.
environmentalists that the lacklustre outcome signalled a lack of bona fide
intentions amongst developed nations. South Africa’s minister of
environmental affairs, Marthinus van Schalkwyk, observed “Without short-
term targets the long-term goal is an empty slogan.”
The G8 nations, together with the so-called Outreach Five plus South Korea,
Indonesia and Australia, subsequently issued a statement suggesting, rather
self-evidently, that developed countries should share the biggest portion of
the climate change burden.
Consistent with this position, in November 2008 Senator John Kerry brought
UN Secretary-General Ban Ki-moon the message from then President-elect
Barack Obama that he would personally lead coordinated global action in
Copenhagen.xxii Kerry also noted his personal view that “Without a new global
deal temperatures could be between 3°C and 5°C higher by mid-century than
they are now.”
With America intransigent on committing to any targets, the European Union,
China and India also declined to consider targets. As a result, the Poznan
conference became another vacuum in policy development. The leaders merely
deferred commitments until the December 2009 meeting scheduled for
Copenhagen.
coalitions (or even key individuals) ... the most visible Senate critics of Kyoto,
Senators Byrd and Hagel, a conservative Democrat and a Republican respected
in foreign affairs, represent precisely those views that will have to be won over
to reach the two-thirds majority.”
William Nordhaus
William Nordhaus, a senior policy adviser to the American Government over
many years, takes an economist's approach to climate policy in his book A
Question of Balance: Weighing the Options on Global Warming Policies (2008).
He assumes the science of climate change and its long-term consequences as
given and focuses only on policies of resource allocation that maximise the
financial benefit to the planet.
Dyson (2008) summarises the six major global warming policy alternatives
examined by Nordhaus:
trillion. In other words, this case has an additional cost over the base
case of $15 trillion
• "Al Gore policy" of reducing emissions gradually to 10% of current levels
by 2050. The net value of this over the base case is negative US$21
trillion
• "Low cost backstop technology", which is a hypothetical atmosphere
scrubbing technology to sequester the Keeling carbon wiggle, such as
pyrolation or genetically engineered carbon eating trees, or a low-cost
solar or geothermal energy technology that at present might only be
imagined in the realm of science fiction. It might be noted that the IPCC
does not give any credence to such highly speculative miracle-
technologies. The net value of a low-cost backstop technological
breakthrough over the base case is US$17 trillion, which is almost the
equivalent of a free solution to global warming.
Nordhaus concluded that the Stern and Gore policies would be prohibitively
expensive, while the "low-cost backstop technology" is enormously attractive.
Other policies like taxing carbon emissions and continuing the Kyoto Protocol
(with or without American involvement) are similar to the base case of
"business as usual".
While America has been tardy in setting greenhouse gas pollution reduction
targets for industry, producers themselves had sought Government protection
against “carbon leakage”. This is the loss of emissions-intensive industry to
overseas locations, resulting in the import of formerly domestically
manufactured products.
In February 2008, the Environment and Public Works Committee passed a bill
called “America's Climate Security Act (S. 2191)” requiring importers of
emissions intensive goods such as steel and aluminium to provide the
Government with emissions credits.
The bill was supported by American Electric Power, together with the
International Brotherhood of Electrical Workers. Steel producer Nucor Corp.
also proposed that a tariff be imposed on goods imported from countries with
no carbon cap.
2050 (compared to 1990 levels). However, the Senate declined to address the
matter.
In dealing with the global financial crisis that began shortly after his election,
President Obama included some of these climate change policies in the
American Clean Energy and Security Act of 2009. This was not universally well
received. Notwithstanding calls for a 40% cut in emissions by 2020 (compared
to 1990 levels), America's new clean energy project provides only for a
reduction in greenhouse emissions of between 6% and 7%. European Union
countries expressed dismay at this perceived lack of American leadership in
the lead-up to the United Nation's December 2009 climate change meeting in
Copenhagen.
President Obama later defended the American Clean Energy and Security Act
in a speech at Georgetown University (Obama 2009):
energy. Some have argued that we shouldn't attempt such a
transition until the economy recovers, and they are right that we
have to take the costs of transition into account. But we can no
longer delay putting a framework for a clean energy economy in
place. If businesses and entrepreneurs know today that we are
closing this carbon pollution loophole, they will start investing in
clean energy now. And pretty soon, we'll see more companies
constructing solar panels, and workers building wind turbines, and
car companies manufacturing fuel-efficient cars. Investors will put
some money into a new energy technology, and a small business will
open to start selling it. That's how we can grow this economy,
enhance our security, and protect our planet at the same time.
On 17 April 2009, this resulted in the EPA issuing a report labelling CO2 and
five other greenhouse gases a significant threat to public health and therefore
subject to its regulation under the Clean Air Act.
legislated to return State emissions to the 1990 level by 2020 with an 80%
reduction by 2050 (compared to 1990 levels). California also required a
reduction in new vehicle emissions of 14% by 2011 and 30% by 2016
(compared to 2008). However, these regulations were immediately blocked by
the Bush Administration. On 3 July 2009, President Obama overturned this
situation with the Environmental Protection Agency granting California the
right to enforce its own standards.
Earlier, in May 2009, President Obama had introduced America's first ever
measure to reduce greenhouse gases by placing an obligation on automobile
manufacturers to increase car and light truck average fuel efficiency by 40%
from 25 miles per gallon to 35.5 miles per gallon by 2016 and to decrease
greenhouse gas emissions by approximately one-third by 2016. The measures
will commence in 2012 and be overseen by the Environmental Protection
Agency and the Department of Transportation.
America and Australia were discussing such small targets in the order of 5% to
7% (compared to 1990 levels) that they could hardly criticise this policy stance
by developing countries. It was left to Russia, Japan and, ironically because of
its widely criticised duplicity, Canada to object that China and India were
already among the world's top emitters and should engage with the issue.
Global Change Research Program issued a comprehensive report on how
global climate change was impacting America (Karl et al. 2009). Thirteen
Federal agencies contributed to the report.
This is notable as it is the first time American policy makers have moved the
debate from hypothetical scientific confidence levels to declare, unequivocally,
that climate change was already impacting America across food production,
forests, coastlines and floodplains, water and energy supplies, transportation
and human health. The principal editor of the report, Dr. Thomas Karl,
emphasised the imperative for Americans to act quickly to reduce emissions or
face severe damages and adaptation costs: “Our destiny is really in our hands ….
The size of those impacts is significantly smaller with appropriate controls.”
Waxman-Markey Bill
On 26 June 2009, the U.S. House of Representatives narrowly passed the
Waxman-Markeyxxiv Bill, called the “American Clean Energy and Security Act,
H.R. 2454” by 219 votes to 212, with 44 Democrats voting against it and 8
Republicans voting in favour. Democrats control 59% of the House, but the
vote in favour of the Bill was only 51%. If all Democrats had voted for the
Bill, it would have achieved 61% in favour.
The Waxman-Markey Bill includes a cap and trade system to reduce emissions
by 17% by 2020 (compared to the 2005 level), 42% by 2030 and 83% by 2050. It
also has the aim of ending dependence on foreign oil, increasing the use of
renewable energy, generating new clean-energy jobs and technology and setting
efficiency standards for buildings, lighting and industrial facilities.
The Bill also proposes a requirement that 20% of electricity be generated from
wind, solar and other renewable sources by 2020, including 5% from better
energy efficiency.
Environmental groups Greenpeace, Friends of the Earth and Public Citizen are
critical of the Bill because it allows too many free emissions permits for
polluting industries. They also claim it is risky because it relies on hypothetical
and perhaps unlikely reductions in emissions by developing countries.
The Boxer-Kerry Senate Bill differs from the Waxman-Markey Bill in three
ways. Firstly, it requires a 20% cut in emissions by 2020 (compared to 2005).
The Waxman-Markey Bill requires a cut of only 17% by 2020. This compares to
the cut of 14% originally proposed by President Obama. However, emissions
are already 8.8% lower than in 2005 due to the American recession. An 83%
cut by 2050 is the same in each case.
Secondly, while both Bills include an economy-wide emissions cap and trade
system, there is a major difference in the contentious issue of how emission
allowances will be distributed. The Waxman-Markey Bill provides for 85% of
emissions permits to be issued free of charge. However, the Senate bill does
not address the matter and leaves negotiations for later.
In June 2009, the UNFCCC's meeting in Bonn of 182 countries with 4,300
participants debated for the first time a draft Copenhagen Protocol to succeed
the Kyoto Protocol. The December 2009 meeting in Copenhagen (CoP15) is the
culmination of the 2007 Bali Road Map.
This Bonn meeting was the first such conference in which America had fully
engaged. Although a 200-page document was compiled, the countries were
unable to reach agreement on world action to ameliorate climate change. At
the end of the meeting, the exasperated UNFCCC Executive Secretary Yvo de
Boer concluded that a worldwide anti-climate change pact was "physically
impossible." However, there was a substantial step forward in the grim
acknowledgement by all parties that an effective policy was urgently needed.
The United Nations presented three draft protocols. The draft promoted by
France and Germany set minimum reductions necessary to cope with climate
change as between 25% and 40% by 2020 (compared to 1990 levels) and
between 50% and 85% by 2050 (compared to 1990 levels). It was proposed
that countries such as America that may be unable to react sufficiently quickly
to meet their domestic targets by 2020 would be able to satisfy their
obligations by CDMs through financing sustainable activities in developing
nations. In addition, the draft provided for levies of approximately US$100
billion per annum for structural adjustment and protection of vulnerable
communities, and additional compensation for historical emissions.
Other draft objectives included peaking emissions by 2015 and then reducing
emissions by 50% by 2050 (compared to 1990 levels) in order to limit
temperature rise to 2°C, and reducing emissions to 2 tonnes per capita. This
compares with 2006 per capita emissions of 19.78 tonnes in America, 7.99
tonnes in Europe and 4.58 tonnes in China.xxvi Another objective sought a
reduction in atmospheric CO2 from the current level of 385 ppm to 350 ppm.
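The 2-tonne per capita objective implies very different reduction burdens given
the 2006 figures quoted above; the sketch below simply computes the percentage
cut each quoted figure would require.

    TARGET_T_PER_CAPITA = 2.0

    def required_cut(current_t_per_capita, target=TARGET_T_PER_CAPITA):
        """Percentage reduction needed to reach the per capita target."""
        return 100.0 * (1.0 - target / current_t_per_capita)

    # 2006 per capita emissions quoted in the draft protocol discussion above:
    for region, tonnes in [("America", 19.78), ("Europe", 7.99), ("China", 4.58)]:
        print(region, round(required_cut(tonnes), 1))   # ~89.9, ~75.0, ~56.3 (% cuts)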
At the start of the Bonn meeting, the Inter-Academy Panel, representing the
science academies of seventy countries including those of Australia, Britain,
France, Japan and America, implored world governments to take action to avoid
an underwater catastrophe from ocean acidification: “To avoid substantial
damage to ocean ecosystems, deep and rapid reductions of carbon dioxide
emissions of at least 50% (below 1990 levels) by 2050, and much more
thereafter, are needed.”xxvii However, all major countries, including Japan and
Russia, persistently resisted any commitment while they awaited a resolution
between America and China.
During the conference the world's 6th biggest emitter, Japan, announced an 8%
domestic reduction by 2020 (compared to 1990 levels)xxviii, which is 1% deeper
than its Kyoto Protocol undertaking.xxix Previously Japan had signalled a wide
range from a 4% increase (compared to 1990 levels) to a 25% decrease.
However, the UNFCCC and environmentalists were aghast at Japan's meagre
new 8% target, with UNFCCC Executive Secretary Yvo de Boer venting his
frustration at the lacklustre support from developed countries: “For the first
time in my two and a half years in this job, I don't know what to say. We're still
a long way from the ambitious emission reduction scenarios that are a beacon
for the world”.xxx
I'm actually more optimistic than I was about America being able to
take leadership on this issue, joining Europe, which over the last
several years has been ahead of us on this issue …. Ultimately the
world is going to need targets that it can meet. It can't be general,
vague approaches …. We're going to have to make some tough
decisions and take concrete actions if we are going to deal with a
potentially cataclysmic disaster …. Unless the United States and
Europe, with our large carbon footprints, per capita carbon
footprints, are willing to take some decisive steps, it's going to be
very difficult for us to persuade countries that on a per capita basis
at least are still much less wealthy, like China or India, to take the
steps that they're going to need to take …. So we are very
committed to working together and hopeful that we can arrive in
Copenhagen having displayed that commitment in concrete ways.
bailouts and employment and health exigencies; and lastly, most Americans
remain highly sceptical that they could reduce emissions by even the United
Nations minimum goal of 25% by 2020 (compared to 1990 levels).
Nevertheless, America and China, which between them created 47% of world
greenhouse gas emissions in 2009, recognised that they are the most important
participants in the approaching UNFCCC meeting in Copenhagen. In Bonn,
they engaged in complex negotiations across the canvas of their financial,
economic and climate change relationships. However, each country remained
highly suspicious of the other's bona fide intentions, with both seeking an
approach analogous to mutually assured nuclear disarmament, based, for
example, on measurable, verifiable and reportable reductions and tit-for-tat
retaliation for non-agreement or non-compliance (Broder & Ansfield 2009).
The Bonn meeting was also notable for its focus on historical accountability for
emissions, which gave rise to the dual concepts of current and historical
accountability for climate change debt. The first concept is well understood: it
is the liability of industrialised nations to redress the harm to developing
countries from changing climate patterns due to both the historical levels of
emissions and continuing emissions from industrialised countries. Everyone
expects that developing countries will incur significant expenses in adapting to
the physical effects of climate change.
However, the second notion of industrialised nation debt has only recently
become clear with the increased negotiating power of China and India. It is
reasoned that industrialised nations got to the “cookie jar” first and plundered
it, leaving developing nations with only crumbs.
[Chart: country ranking, apparently by accumulated CO2 emissions, running from
Iraq (lowest shown) up through India, Canada, France, Japan, the United
Kingdom, Germany, China and Russia to the United States (highest); only the
country-name axis labels are recoverable from the source.]
Now that industrialised nations are desperate for cooperation from developing
countries, it is no longer a situation of “let's forgive, forget, move on and
start anew”. Developing nations are seeking a kind of intergenerational and
cross-geopolitical equity in demanding that withdrawals from the world's fixed
pool of emissions capacity be repaid, in order that future generations in
developing countries are on an equal footing with citizens of developed
countries. Reduced to simple terms, China and India's position is that
industrialised countries would need to fix the whole global warming situation
before developing countries would join to go forward on an equal basis.
accumulated emissions and responsibility for the costs of mitigation over
forthcoming decades. Indeed, the Berlin Mandate exempted and prohibited
developing countries from entering into binding targets. With the benefit of
perfect hindsight, it may have been preferable for the Berlin Mandate to have
included transition provisions for countries, such as China and India, that had
the potential to emerge as major industrial powers and polluters.
In the event, this strategy didn't work. Both China and India responded by
calling America's bluff. Firstly, they accused America and other developed
countries of not engaging with the long-established philosophy of "common but
differentiated responsibility". Secondly, China demanded performance from
America and other industrialised countries as a precondition of its own action.
For example, China declined to commit to target levels, while demanding that
developed nations including America reduce emissions by at least 40% by 2020
(compared to 1990 levels) and contribute at least 0.5% to 1% of their GDP to
help developing countries upgrade technology.xxxi These claims were in stark
contrast to the Waxman-Markey target of just 4% reduction by 2020 (compared
to 1990 levels) and America's foreign aid budget of 0.17% of GDP.
Fourthly, China declined to further engage with America as a kind of leading
nations “G-2” to agree emissions quotas. China said that it would only engage
with the wider United Nations process (Schwägerl 2009b).
Fifthly, China claimed that green technologies were far too expensive for it to
contemplate any emissions target. Developing countries are still smarting from
two decades of abrasive dealings with developed countries over the intellectual
property rights for new AIDS pharmaceuticals. In emulation or secondary
pricing for AIDS pharmaceuticals, developing countries have called on
industrialised countries to require private technology developers to license
their intellectual property rights.
Even more disturbing for international amity, China demanded that America
provide the cheap carbon capture and storage (CCS) technologies that
industrialised countries such as America, Australia and the United Kingdom
have long touted as the “magic bullet” solution to reduce emissions.
Unfortunately, if the IPCC, Al Gore, James Hansen and many environmentalists
prove to be correct, Anglo-American nations may be “hoisted on their own
petard” by this challenge. Environmentalists refer to CCS as a “dirty lie”
because it has been used by the fossil fuel industry as a “red herring” to
absorb renewable energy research funding and deceptively mislead voters
about climate change. In late 2009, despite fledgling pilot projects, CCS still
appears to be merely hypothetical technology. Current indications are that CO2
collection would reduce boiler burning efficiency by 25% and correspondingly
increase the fuel required by 25% to 30%. Perhaps an even greater hurdle is
the well-known issue of CO2 egress from storage. The risks of CCS remain
exceedingly high; it is unlikely to be commercial before 2025 and, at the
current point in time, CCS has a very small probability of ever being available.
comparable with other countries but that this would not be enforceable.
However, neither China nor India responded to the suggested arrangement
(although, as discussed below, China subsequently softened its attitude).
be taking to address climate change. In the case of China, these are
already considerable, and are growing by the day …. Moreover,
working with China and India in particular (as well as Russia) is
critically important for how this issue connects to three other global
governance challenges: nuclear energy and non-proliferation, re-
energizing the global trade regime, and redrawing the scrambled
global financial architecture …. The other great challenge lies
beyond them, where the poorest are likely to suffer the most from
climate change, and also still lack capacity to adapt and respond.
Perhaps the most effective way to reach out to developing countries
and to the poorest nations is by focusing on real areas of
opportunity, where mitigation and adaptation can be addressed
simultaneously. This certainly applies in areas such as deforestation
and coastal preservation. But it also extends to infrastructure
development, especially power generation, transportation,
construction.
America is truly facing the major challenge identified at the end of Chapter 2
of whether Americans will accept a paradigm of constrained growth.
The MEF's inaugural meeting declared that the increase in global average
temperatures above pre-industrial levels should not exceed 2°C and that both
developed and developing countries need to work towards this goal.
The Kyoto Protocol requires only developed countries to reduce emissions. The
MEF declaration restated the responsibility of industrialised countries to do
this and “Take the lead by promptly undertaking robust aggregate and
individual reductions …. [with] sustainable development, supported by
financing, technology and capacity building.”
An important new aspect of the MEF declaration was that developing countries
“agreed to agree.” The communiqué stated that developing countries would
“[Commit to] promptly undertake actions whose projected effects on emissions
represent a meaningful deviation from business as usual in the mid-term.”
Although it is well understood that any agreement to agree is unenforceable,
developed countries see this outcome as providing a faint glimmer of hope that
China and India may engage with the Copenhagen process.
With the G20 assuming the mantle of the world's premier policy body in
September 2009, the MEF may become a redundant body.
Industrialised countries rallied to this new proposal and some began to resile
from their existing commitments. For example, the European Union
withdrew its pledge to contribute up to US$22 billion per annum in assistance
for developing country adaptation.xxxii This unusual and uncharacteristic step
by the European Union was later reversed in part (Kanter & Castle 2009).
China and its G-77 coalition of developing countries expressed outrage at the
abrogation of responsibility by Kyoto Annex 1 industrialised nations and at the
introduction of deal breakers such as the cancellation of developing country
adaptation funds (Pasternack 2009). It seems that this shocked indignation may
have been the very response sought by industrialised countries as they
endeavoured to chasten China and India by playing out a dire scenario in which
no country engages in effective emissions reductions. One may also speculate
about the UNFCCC's participation in the negotiations because at the end of the
talks Executive Secretary Yvo de Boer, arguably uncharacteristically and
prematurely, commented that the UNFCCC would now not ask for a new treaty
to replace the Kyoto Protocol (Ramanayake 2009).
more fractious to multilateral co-operation, India and China signed a pact to
develop technology and reduce greenhouse gas emissions (BBC News 2009).
Their official statement noted "Internationally legally binding [greenhouse gas]
reduction targets are for developed countries and developed countries alone,
as globally agreed under the [2007] Bali action plan."
The penultimate drafting meeting before the December 2009 Copenhagen conference will be held in Barcelona in early November 2009. Little is expected to occur at this
meeting as various countries have expressed the view that final negotiating
positions will be reserved until the Copenhagen meeting.
The key features of the draft protocol were equal per capita emissions
allowances for each country and a 95% reduction in emissions by 2050
(compared to 1990). Countries would secure their emissions reductions with
financial bonds, which would be forfeit should they fail to achieve their target.
3.5 Models to manage the commons
In the tâtonnement of a microeconomic model of supply and demand, the supplier's welfare and the consumer's welfare are mutually and simultaneously maximised by the equilibrium process.
In the case where the representative agent is the supplier, the model is one of competitive markets. Where the representative agent is the consumer, the model is that of a social planner. Appendix 2 CGE modelling provides Uhlig's proof that a priori there is no economic difference between competitive market optimisation and a social planner's optimisation. Uhlig summarises his findings
as: “Whether one studies a competitive equilibrium or the social planners
problem, one ends up with the same allocation of resources” (Uhlig 1999).
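This equivalence can be illustrated numerically. The following minimal sketch assumes a hypothetical one-consumer, one-firm Robinson Crusoe economy with Cobb-Douglas technology and log-utility (all parameter values are invented for illustration and are not drawn from the models in this thesis); it computes the social planner's allocation and then verifies that it satisfies the first-order conditions of a competitive, price-taking equilibrium:

```python
# A minimal Robinson Crusoe sketch of the planner/competitive-market equivalence.
# All parameter values are hypothetical and chosen only for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

A, alpha, beta = 1.0, 0.6, 0.4       # technology scale, output elasticity, taste for leisure

def welfare(L):
    """Social planner's objective: log-utility of consumption plus weighted leisure."""
    c = A * L**alpha                 # all output is consumed
    return np.log(c) + beta * np.log(1.0 - L)

# Social planner: choose labour L in (0, 1) to maximise welfare directly.
res = minimize_scalar(lambda L: -welfare(L), bounds=(1e-6, 1 - 1e-6), method="bounded")
L_star = res.x
c_star = A * L_star**alpha

# Decentralised check: with the good as numeraire, a price-taking firm sets the wage
# equal to the marginal product of labour, and the consumer's optimum requires that
# the marginal rate of substitution between leisure and consumption equal the wage.
wage = alpha * A * L_star**(alpha - 1.0)
mrs = beta * c_star / (1.0 - L_star)

print(f"planner labour L* = {L_star:.4f} (analytic alpha/(alpha+beta) = {alpha/(alpha+beta):.4f})")
print(f"wage = MPL = {wage:.4f}, consumer MRS = {mrs:.4f} -> competitive conditions hold")
```

The planner's labour choice coincides with the competitive allocation because the wage equals both the marginal product of labour and the consumer's marginal rate of substitution, which is the essence of Uhlig's result.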
Quantitative limits
In 1995 atmospheric scientists Paul Crutzen, Frank Sherwood Rowland and
Mario Molina shared the Nobel Prize in Chemistry “for their work in
atmospheric chemistry, particularly the formation and decomposition of
ozone.” In the 1970s, these distinguished scientists discovered, almost by
accident, that man-made nitrogen oxides and chlorinated fluorocarbons (CFCs)
were severely damaging the ozone layer and leading to acute health risks for
humans, livestock, crops and marine phytoplankton (Crutzen 1970; Crutzen
1973; Rowland & Molina 1975).
In order to address ozone depletion, the United Nations sponsored the 1985
Vienna Convention on the Protection of the Ozone Layer. Within two years,
governments began to actively phase out CFC usage pursuant to the United
Nations' Montreal Protocol of 16 September 1987.xxxiii In fact, the ozone layer
issue was ameliorated at an actual cost of one percent of the US$135bn
originally suggested by critics.
However, more extensive experience has shown that generic quantitative limits
are a form of social or central planning that lacks flexibility. Whilst absolute
prohibition is useful in an emergency, as was the case with ozone depletion,
this policy instrument is very blunt. The main issue with quantitative limit policies that attempt finer targeting is that they carry all the deficiencies of an economic system in which a government tries to pick
winning strategies, industries and firms. As governments of all persuasions
have discovered to their dismay, picking winners is fraught with danger.
In his influential 1940s works The Road to Serfdom (2001) and The Use of Knowledge in Society (2005), Friedrich Hayek argues that central planners can
never have sufficient information to make quantitative decisions such as the
optimal level of regulation. Hayek champions market price mechanisms for
self-organising societies, which he sees as even more essential to the human
condition than democracy (Hayek 1988). In 1974, Friedrich Hayek and Gunnar
Myrdal shared the Nobel Prize in Economics “for their pioneering work in the
theory of money and economic fluctuations and for their penetrating analysis
of the interdependence of economic, social and institutional phenomena.”
Taxation
From 2010, France will become one of the first large economies to tax carbon emitters (Butler 2009).xxxiv However, the initial tax rate of Euro 17 (US$25) per tonne of CO2 is less than an estimated Euro 40 per tonne necessary to change consumer behaviour. The tax is expected to rise to Euro 100-200 per tonne by 2020. Electricity is currently excluded because 90% of France's generation is from carbon-free nuclear and hydroelectric sources.xxxv France has the lowest cost electricity in the European Union. Its competitiveness makes it a large net exporter of electricity and nuclear technology. France therefore stands to gain further as other nations raise the price of carbon-intensive power.
Taxes on "bads" require the polluter to pay for the damage caused. Economists favour a system of taxes on "bads" and negative taxes (i.e. subsidies) on "goods". These are called Pigouvian taxes because they are set equal to the marginal external damage, that is, the shadow prices of the disutilities (Pigou 1920). Pigou, a pioneer of welfare economics, stressed that market transactions produce externalities, which are indirect social costs and benefits.
Policy makers also favour Pigouvian taxes for simplicity and flexibility. Such
taxes are administratively straightforward, avoid allocative decisions, provide price certainty, capture activities that cannot be controlled in other ways (such as by higher-level regulations), can be collected at low cost and do not require expensive overheads (such as the superstructure for a market in tradeable permits). The tax rate can be raised or lowered to directly influence
prices across wide sectors, expanded or contracted in coverage, and balanced
with other taxes to achieve welfare objectives. They also reinforce other
policies and integrate agendas such as long term environmental objectives into
mainstream economic policy. The European experience with revenue neutral
taxes and double dividends has been discussed above.
A carbon tax on coal, oil and gas is simple, applied at the first point
of sale or port of entry. The entire tax must be returned to the
public, an equal amount to each adult, a half-share for children. This
dividend can be deposited monthly in an individual’s bank account.
A carbon tax with a 100 percent dividend is non-regressive. On the
contrary, you can bet that low and middle income people will find
ways to limit their carbon tax and come out ahead. Profligate energy
users will have to pay for their excesses …. Demand for low-carbon
high-efficiency products will spur innovation, making our products
more competitive on international markets. Carbon emissions will
plummet as energy efficiency and renewable energies grow rapidly
…. Will the public accept a rising carbon fee? Surely – if the revenue
is distributed 100% to the public, and if the rationale has been well-
explained to the public. The revenue should not go to the
government to send to favored industries. Will the public just turn
around and spend the dividend on the same inefficient vehicle, etc.?
Probably not for long, if there are better alternatives and if the
public knows the carbon price will continue to rise. And there will be
plenty of innovators developing alternatives.
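The arithmetic of such a 100 percent dividend is straightforward. The sketch below distributes a hypothetical fee revenue with a full share per adult and a half-share per child; every figure is an invented illustration rather than an estimate from Hansen or from this thesis:

```python
# Illustrative fee-and-dividend arithmetic; every input below is a hypothetical
# round number, not an estimate from Hansen or from this thesis.
emissions_t_co2 = 6.0e9        # tonnes of CO2 covered by the fee each year (assumed)
fee_per_tonne   = 25.0         # carbon fee in dollars per tonne CO2 (assumed)
adults          = 230e6        # adults receiving a full share (assumed)
children        = 75e6         # children receiving a half share (assumed)

revenue = emissions_t_co2 * fee_per_tonne     # total fee collected, returned in full
shares  = adults + 0.5 * children             # full-share equivalents
dividend_per_adult = revenue / shares

print(f"revenue ${revenue/1e9:,.0f}bn -> ${dividend_per_adult:,.0f} per adult per year "
      f"(about ${dividend_per_adult/12:,.0f} per month), half that per child")
```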
Property rights
In his book The Fatal Conceit: The Errors of Socialism (1988), Hayek argues that the establishment of property rights was the seminal factor in the rise of civilisation. Creating a property right for a resource places its
economic exploitation into the hands of a profit maximising decision maker.
This may be a person or company, a community management organisation,
state authority or international authority, such as the United Nations. Prima
facie, the decision maker is expected to act rationally by moderating the
harvest of the resource to an economically sustainable level, thereby
continuously maximising profit.
While in theory the two approaches of tax or property rights are equivalent,
there are at least six problems with creating property rights to address global
warming. The first is due to the uncertainties that society faces about the
marginal benefits and marginal costs of averting climate change. In this
respect, a tax on emissions has the economic advantage of certainty (United
States Congressional Budget Office 2009, p.4).
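The certainty argument can be sketched in the spirit of the classic prices-versus-quantities comparison: with an uncertain marginal abatement cost curve, a tax fixes the price and lets the abatement quantity vary, while a cap fixes the quantity and lets the price vary. The slopes, tax rate and cap below are purely hypothetical:

```python
# Prices versus quantities under an uncertain marginal abatement cost curve,
# MC(q) = slope * q.  The tax, cap and slope range are purely hypothetical.
import random

random.seed(1)
tax = 30.0      # carbon tax, dollars per tonne (assumed)
cap = 300.0     # quantity cap expressed as required abatement, tonnes (assumed)

for _ in range(3):
    slope = random.uniform(0.05, 0.20)     # realised marginal-cost slope (uncertain)
    q_tax = tax / slope                    # under a tax, firms abate until MC = tax
    p_cap = slope * cap                    # under a cap, the permit price clears at MC(cap)
    print(f"slope {slope:.2f}: tax -> price ${tax:.0f}/t fixed, abatement {q_tax:.0f} t; "
          f"cap -> abatement {cap:.0f} t fixed, price ${p_cap:.0f}/t")
```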
A second issue is social equity. Creating property rights has proven subject to
corruption and there is an ever present risk that scarce, public resources
might end up in the hands of powerful vested interests, who may exercise
monopoly power or disenfranchise the population. James Hansen is highly
critical of this risk (Hansen 2009):
Cap-and-trade is fraught with opportunities for special interests,
political trading, obfuscation from public scrutiny, accounting
errors, and outright fraud …. As with any law, caps can and will be
changed, many times, before 2050. The fact is that national caps
have been set and are widely rejected. When caps are accepted,
they are often set too high – as happened with Russia. If a complete
set of tight caps were achieved, global permit trading would likely
result in a Gresham’s Law effect – “bad money drives out good.”
Some countries will issue too many permits or fail to enforce
requirements. These permits, being cheapest, will find their way
into the world market and undermine the world cap. Caps are also
extremely hard to enforce, as demonstrated by the Kyoto Protocol.
The fourth issue is that the private sector has a short term focus on profit and
this is reflected in high discount rates on future profits. It will harvest the “low
hanging fruit” while leaving higher cost resources to future owners. For
example, enjoying open cut above ground mines now, leaving troublesome and
expensive underground mines for the future; and avoiding slower growing
plants and animals because they are “poor investments” (Sachs 2008, p.40).
This means future consumers face a higher cost than current consumers and, all things being equal, intergenerational welfare will be distorted, with current consumers enjoying greater welfare than future consumers, who are not represented in the market today.
Fifthly, as Jeffrey Sachs points out, there is a “tyranny of the present over the
future” in consumption. Our societies are impatient to consume. The free
market is seen as a right to consume as much as is wanted, with no regard for
the future.
Lowe (2009, p.1) is far more trenchantly critical of free market dogma:
Originally, environmentalists saw cap and trade for acid rain abatement as
merely a license to pollute because it freely gave valuable pollution permits to
powerful vested interests. However, arguably President George H. W. Bush's
Clean Air Act amendments have become the most successful domestic
environmental legislation ever enacted. According to the Environmental
Protection Agency (2004), there was close to complete compliance in achieving
a 50% reduction in pollution over the ensuing decade. In addition, the cost of
$1-$2 billion pa was significantly less than the EPA's original estimate of $2.7-
4.0 billion pa (Weiss 2008; Bohi & Burtraw 1997).
The proposed American cap and trade system for climate change amelioration
and abatement is similar to the current European differential model, which
runs to 2012. The American government will give all the emissions permits in
its treaty limit to large emitters. If emitters manage to increase efficiency and
thereby save permits then they may sell their unused permits on the market to
other emitters that need additional permits because they have over-polluted.
Emitters know that the government supply of permits will be progressively
reduced and so they must move ahead of this market scarcity or face
potentially high market prices for permits.
David Ricardo (1817) developed his inspired theory that international trade
should be based on the relative or comparative advantage of each country's
commodity production rather than on the absolute advantage. His theory
remains the fundamental principle of modern trade and a major argument
against protectionism.
trade to form an aggregate supply curve. Trading allows this combination to be
achieved through horizontal aggregation.
The illustration shows the hypothetical CO2 mitigation curves for America and Russia. The slope of the curve for America reflects the high cost of reducing emissions due to America's coal-fired generators. The curve for America shows a cost of $500 per tonne to reduce 400 tonnes of CO2.
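The gains from this horizontal aggregation can be sketched with two hypothetical linear marginal abatement cost curves; America's slope is chosen only to echo the $500-per-tonne figure above, and Russia's flatter slope and the 600-tonne joint target are assumptions for illustration:

```python
# Horizontal aggregation of two hypothetical linear marginal abatement cost (MAC) curves.
# America's slope is chosen so that $500/t corresponds to 400 t abated, as in the text;
# Russia's flatter slope and the 600 t joint target are assumptions for illustration.
slope_us, slope_ru = 500.0 / 400.0, 0.5    # $ per tonne of CO2, per tonne abated
target = 600.0                             # joint reduction target, tonnes

# Trading equalises marginal cost p across both parties, with q_i = p / slope_i,
# which is exactly the horizontal (quantity-wise) aggregation of the two curves.
p = target / (1.0 / slope_us + 1.0 / slope_ru)
q_us, q_ru = p / slope_us, p / slope_ru

def cost(slope, q):
    """Total abatement cost: area under a linear MAC curve."""
    return 0.5 * slope * q**2

with_trading = cost(slope_us, q_us) + cost(slope_ru, q_ru)
equal_split  = cost(slope_us, target / 2) + cost(slope_ru, target / 2)
print(f"permit price ${p:.0f}/t; abatement: America {q_us:.0f} t, Russia {q_ru:.0f} t")
print(f"total cost with trading ${with_trading:,.0f} versus equal split ${equal_split:,.0f}")
```

Because trading equalises marginal costs, the joint target is met at a lower total cost than an equal split of the reduction burden.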
Problems with naked market mechanisms and where parties
have unequal power
Six major inherent difficulties in the market mechanism for emissions trading have been identified. The first is that the price of emissions permits rises under trading due to a price premium, which arises because the price of emissions permits includes commodity risk. The second is the attendant volatility that is brought into existence. The size of a financial market for a commodity is often an order of magnitude greater than that of the physical market. The usual volatility of the physical market, due to factors such as seasonality and weather, is therefore exacerbated and completely overshadowed by the risk introduced through the speculation and gearing strategies of the traders who bear the price volatility.
Thirdly, a market imposes a large overhead cost on the public. This is in addition to the exacerbated commodity risks that have been so damaging in food and oil in recent years, the professional suckering that withdraws profits from commodity markets, and the public's underwriting of the sector's losses through "bail outs", which received so much prominence in the 2008-9 Global Financial Crisis. Any market also carries a significant deadweight cost from its participants, including exchange operators, compliance regulators, policy makers and lawmakers.
The fourth difficulty is the wealthy country effect. There is little incentive for a wealthy
country such as America or Australia to turn its attention to bona fide
reductions in CO2 emissions; remove obsolete processes or address the linked
interdependencies in its economy; develop new core competences in CO2
emissions reduction and equip local firms with these new technologies so they
can have higher productivity and lower emissions; develop new agility that
leads to new economies of scope and scale, and new synergies for reduction in
emissions; or develop new intellectual property in unexpected ways.
3.6 Conclusion
This Chapter reviewed the history of climate science and policy over the last
50 years with a detailed focus on measures to replace the UNFCCC Kyoto
Protocol, which is due to expire in 2012.
It was found that in mid-2009, scientists recognised that the nations of the
world had only about 750 Gt of CO2 emissions capacity remaining if the rise in global average temperature was to be kept within 2°C with a two-thirds probability.
As the world emitted approximately 45% of this amount between 2000 and
2008, the critical nature of the issue was accepted by all United Nations
governments, including those of Anglo-American nations.
In 2007, the European Union adopted its "20/20/20" targets: a 20% reduction in emissions by 2020, an increase to 20% in the share of energy from renewable sources, and a 20% reduction in energy consumption. It was found
that the European Union emissions trading scheme designed to place a market
price on emissions failed due to profiteering by electricity producers and
financial institutions. Nevertheless, in late 2008, with the new approach of
auctioning permits, the European Union reconfirmed its 20/20/20 targets for
the post-Kyoto period commencing in 2013.
In early 2008, the U.S. Congress House Energy and Air Quality Subcommittee
began to develop emissions legislation based around carbon capture and
storage technology. At the same time, American industry called for tariffs on
goods and services from countries with no carbon cap.
However, the climate debate only began to move forward following President Barack Obama's election in November 2008. He addressed America's renewable energy sector as part of stimulating the American economy through the Recovery Act and amended the American Clean Energy and Security Act to include small reductions in emissions of between 6% and 7% by 2020. President Obama also freed the EPA to regulate greenhouse gases, as the U.S. Supreme Court had ruled it should do in 2007. The EPA in turn allowed California and other States to regulate overall State emissions and new vehicle emissions.
It was shown that America again pressed for Byrd-Hagel conditions without success at the UNFCCC Bangkok talks in October 2009. In what had become a
somewhat desperate American negotiating strategy, the European Union joined
with America to show China a scenario where no nations agreed to reduce
emissions. These cliff-edge negotiations are expected to continue through to
the UNFCCC Copenhagen meeting in December 2009 as the Obama
Administration seeks Byrd-Hagel concessions from China to fortify the passage of its Kerry-Boxer Bill through the U.S. Senate, which remains hostile
to limiting U.S. unilateralism in any way.
There has been considerable debate about models to manage the commons.
This Chapter investigated competitive market optimisation, the social planner's optimisation and three policy instruments for managing "bads." These policy
instruments were quantitative limits, taxation and property rights. It was found
that all lead to similar outcomes and a mix is often the best policy solution. A
number of issues with naked market mechanisms were identified.
This Chapter has further established the policy dimensions within which this doctoral CGE policy research will be framed. It has established a
policy Base Case of 2°C rise, consistent with geophysical modelling of a 750 Gt
CO2 carbon tranche. It has established the framework and risks for carbon
commodity markets (in both carbon permits and physical amelioration and
abatement). It has also established that a number of scenarios are important to
understanding the sensitivity of the Base Case. These scenarios include the
various points of view of the dominant groups in the climate change debate, the impact of increasingly severe climate change reduction targets, the
strong faith in future technological solutions and the importance of technology
cost and availability, and the sensitivity of economic performance to
international carbon commodity trading.
Aitkin, D., 2008. A cool look at global warming, Canberra: Planning Institute of
Australia. Available at:
http://www.theaustralian.news.com.au/story/0,25197,23509775-
27703,00.html [Accessed April 14, 2008].
Arguelles, M., Benavides, C. & Junquera, B., 2006. The impact of economic
activity in Asturias on greenhouse gas emissions: consequences for
environmental policy within the Kyoto Protocol framework. Journal of
Environmental Management, 81(3), 249-264.
Baumol, W.J. & Wolff, E.N., 1981. Subsidies to New Energy Sources: Do They
Add to Energy Stocks? The Journal of Political Economy, 891-913.
BBC News, 2009. India-China climate change deal. BBC News. Available at:
http://news.bbc.co.uk/2/hi/south_asia/8318725.stm [Accessed November
2, 2009].
Bento, A.M. & Jacobsen, M., 2007. Ricardian rents, environmental policy and
the `double-dividend' hypothesis. Journal of Environmental Economics
and Management, 53(1), 17-31.
Bohi, D.R. & Burtraw, D., 1997. SO2 Allowance Trading: How Experience and
Expectations Measure Up, Washington DC: Resources for the Future.
Available at: http://www.rff.org/Documents/RFF-DP-97-24.pdf.
Bosquet, B., 2000. Environmental tax reform: does it work? A survey of the
empirical evidence. Ecological Economics, 34(1), 19-32.
Broder, J.M., 2009a. Climate Bill Clears Hurdle, but Others Remain. The New
York Times. Available at:
http://www.nytimes.com/2009/05/22/us/politics/22climate.html?_r=1
[Accessed May 31, 2009].
Broder, J.M., 2009b. E.P.A. Moves to Curtail Greenhouse Gas Emissions. The
New York Times. Available at:
http://www.nytimes.com/2009/10/01/science/earth/01epa.html?
_r=2&th&emc=th [Accessed October 1, 2009].
Broder, J.M., 2008. Gore urges change to dodge an energy crisis. The New York
Times. Available at:
http://www.nytimes.com/2008/07/18/washington/18gore.html?fta=y
[Accessed July 21, 2008].
Broder, J.M. & Ansfield, J., 2009. China and U.S. Seek a Truce on Greenhouse
Gases. The New York Times. Available at:
http://www.nytimes.com/2009/06/08/world/08treaty.html?
_r=2&th&emc=th [Accessed June 10, 2009].
Burck, J., Bals, C. & Ackermann, S., 2008. Climate Change Performance Index
2009: A comparison of the 57 top CO2 emitting nations, Bonn, Germany:
Germanwatch & CAN Europe. Available at:
http://www.germanwatch.org/klima/ccpi.htm [Accessed April 18, 2009].
Butler, D., 2009. France unveils carbon tax: Nature talks to climatologist Jean
Jouzel about the plans. Nature News. Available at:
http://www.nature.com.ezproxy.lib.uts.edu.au/news/2009/090911/full/ne
ws.2009.905.html [Accessed September 12, 2009].
Coase, R.H., 1960. The problem of social cost. The journal of Law and
Economics, 3(1), 1.
Crutzen, P.J., 1970. The influence of nitrogen oxides on the atmospheric ozone
content. Quarterly Journal of the Royal Meteorological Society, 96(408).
Deroubaix, J. & Leveque, F., 2006. The rise and fall of French ecological tax
reform: social acceptability versus political feasibility in the energy tax
implementation process. Energy Policy, 34(8), 940-949.
DeSouza, M., 2007. Canada declares APEC success, developing countries say
they were bullied. Canada National Post. Available at:
http://www.nationalpost.com/news/story.html?id=ef4bb5b6-81dc-4d36-
a7bd-25d20fdc9462&k=52757 [Accessed April 27, 2008].
Dyson, F., 2008. The question of global warming. The New York Review of
Books, 55(10). Available at: http://www.nybooks.com/articles/21494
[Accessed June 4, 2008].
Gore, A., 2008. New thinking on the climate crisis. In TED Conferences, LLC.
Available at: http://www.ted.com/index.php/talks/view/id/243 [Accessed
April 11, 2008].
Gruber, J., 2007. Public finance and public policy 2nd ed., New York, NY: Worth
Publishers.
Gupta, J., 2009. 'Copenhagen treaty' drafted by NGOs matches Indian actions.
Samay Live. Available at: http://www.samaylive.com/news/copenhagen-
treaty-drafted-by-ngos-matches-indian-actions/660830.html [Accessed
October 8, 2009].
Hansen, J.E., 2009. Can We Reverse Global Climate Change? Part I: Carbon tax
and dividend is the solution. Available at:
http://yaleglobal.yale.edu/display.article?id=12371 [Accessed June 10,
2009].
Hansen, J.E., 2008a. Global Warming Twenty Years Later: Tipping Points Near:
Briefing before the Select Committee on Energy Independence and
Global Warming, U.S. House of Representatives, and also presented at
the National Press Club on June 23., Washington: Select Committee on
Energy Independence and Global Warming, US House of
Representatives. Available at: http://www.columbia.edu/~jeh1/
[Accessed June 25, 2008].
Hansen, J.E., 2008b. Request to Australian Prime Minister Kevin Rudd to halt
construction of coal-fired power plants. Available at:
http://www.columbia.edu/
%7Ejeh1/mailings/20080401_DearPrimeMinisterRudd.pdf [Accessed
June 23, 2008].
Hayek, F.A., 1988. The fatal conceit: The errors of socialism W. W. Bartley, ed.,
London: Routledge.
Hayek, F.A., 2005. The Use of Knowledge in Society. New York University Journal of Law & Liberty, 1, 5.
Herbert, B., 2008. Yes we can. The New York Times. Available at:
http://www.nytimes.com/2008/07/19/opinion/19herbert.html?
_r=2&th&emc=th&oref=slogin&oref=slogin [Accessed July 21, 2008].
Hourcade, J. & Robinson, J., 1996. Mitigating factors : Assessing the costs of
reducing GHG emissions. Energy Policy, 24(10-11), 863-873.
Hurwicz, L., 1995. What is the Coase theorem? Japan & The World Economy,
7(1), 49-74.
IPCC, 1995. IPCC Second Assessment Climate Change 1995: A Report of the
Intergovernmental Panel on Climate Change, Geneva: IPCC. Available
at: http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].
Jacoby, H.D. & Reiner, D.M., 2001. Getting climate policy on track after The
Hague. International Affairs (Royal Institute of International Affairs
1944-), 77(2), 297-312.
Kanter, J. & Castle, S., 2009. E.U. Reaches Funding Deal on Climate Change.
The New York Times. Available at:
http://www.nytimes.com/2009/10/31/science/earth/31iht-UNION.html?
_r=3&adxnnl=1&adxnnlx=1257202699-iy2yXmWTpZCJisFNGCA+aA
[Accessed November 2, 2009].
Karl, T.R., Melillo, J.M. & Peterson, T.C. eds., 2009. Global Climate Change
Impacts in the United States, Cambridge University Press. Available at:
http://www.globalchange.gov/publications/reports/scientific-
assessments/us-impacts [Accessed June 17, 2009].
Keeling, C.D. & Whorf, T.P., 2005. Atmospheric CO2 records from sites in the
SIO air sampling network. Trends: A compendium of data on global
change, 16–26.
Klein, N., 2009. Obama isn't helping. At least the world argued with Bush. The
Guardian. Available at:
http://www.guardian.co.uk/commentisfree/cifamerica/2009/oct/16/obam
a-isnt-helping [Accessed October 17, 2009].
Krauss, C. & Galbraith, K., 2009. Climate Bill Splits Exelon and U.S. Chamber.
The New York Times. Available at:
http://www.nytimes.com/2009/09/29/business/energy-
environment/29chamber.html?_r=2&emc=tnt&tntemail1=y [Accessed
October 7, 2009].
Krugman, P., 2009. The Joy of Sachs. The New York Times. Available at:
http://www.nytimes.com/2009/07/17/opinion/17krugman.html?_r=1&em
[Accessed July 19, 2009].
Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental Crisis, 2005 ed., Melbourne: Black Inc.
Lydersen, K., 2009. Scientists: Pace of Climate Change Exceeds Estimates. The
Washington Post. Available at: http://www.washingtonpost.com/wp-
dyn/content/article/2009/02/14/AR2009021401757_pf.html [Accessed
February 18, 2009].
Lynas, M., 2008. Climate chaos is inevitable. We can only avert oblivion. The
Guardian. Available at:
http://www.guardian.co.uk/commentisfree/2008/jun/12/climatechange.sc
ienceofclimatechange [Accessed June 19, 2008].
MacDonald, G., 1989. The long term impact of atmospheric carbon dioxide on
climate,
McKibbin, W.J. & Wilcoxen, P.J., 1999. The theoretical and empirical structure
of the G-Cubed model. Economic modelling, 16(1), 123-148.
Oreskes, N. & Renouf, J., 2008. Jason and the secret climate change war -
Times Online. Available at:
http://www.timesonline.co.uk/tol/news/environment/article4690900.ece
[Accessed September 15, 2008].
Pascal, B., 1662. Pascal's Pensées, France: E. P. Dutton & Co., Inc. 1958.
Available at: http://www.gutenberg.org/files/18269/18269-h/18269-h.htm
[Accessed November 2, 2009].
Pasternack, A., 2009. Post-Bangkok Q&A with Antonio Hill, Oxfam's Climate
Envoy: We Need a "Major Turnaround". TreeHugger. Available at:
http://www.treehugger.com/files/2009/10/antonio-hill-oxfam-climate-
representative-bangkok-meeting.php [Accessed October 17, 2009].
Pigou, A.C., 1920. The economics of welfare, 2009 ed., New Brunswick, New Jersey: Transaction Publishers.
Pilkington, E., 2008. Put oil firm chiefs on trial, says leading climate change
scientist. The Guardian. Available at:
http://www.guardian.co.uk/environment/2008/jun/23/fossilfuels.climatec
hange [Accessed June 24, 2008].
ten Raa, T., 2005. The Economics of Input Output Analysis, New York: Cambridge University Press. Available at: www.cambridge.org/9780521841795.
Ramanayake, W., 2009. Time runs out at Bangkok climate meeting. Climate
Change Media Partnership. Available at:
http://www.climatemediapartnership.org/reporting/stories/time-runs-
out-at-bangkok-climate-meeting/ [Accessed October 17, 2009].
Revelle, R. & Suess, H.E., 2002. Carbon dioxide exchange between atmosphere
and ocean and the question of an increase of atmospheric CO2 during
the past decades. Climate Change: Critical Concepts in the
Environment, 9(1), 92.
Ricardo, D., 1817. On the principles of political economy and taxation, 3rd ed.,
London: John Murray. Available at:
http://www.econlib.org/library/Ricardo/ricP.html [Accessed April 10,
2008].
RTTNews, 2009. India stands by Kyoto Protocol on climate change. RTT News.
Available at: http://www.rttnews.com/Content/GeneralNews.aspx?
Node=B1&Id=1097683 [Accessed November 2, 2009].
Sachs, J., 2008. Common Wealth: Economics for a Crowded Planet, The Penguin Press.
Schellnhuber, H.J. et al., 2009. Solving the climate dilemma: The budget
approach, Berlin: German Advisory Council on Global Change (WBGU).
Available at: http://www.wbgu.de/wbgu_sn2009_en.html [Accessed
September 5, 2009].
Scott, M., 2009. Can Washington Learn from Brussels' Mistakes?: Avoiding
Europe's Carbon Trading Missteps. SPIEGEL ONLINE - News -
International. Available at:
http://www.spiegel.de/international/business/0,1518,641053,00.html#re
f=rss [Accessed August 18, 2009].
Slattery, L., 2008. Abstract to application: after years of being ignored by the
money markets, academic economists are delighted as hard times and
complex problems mean demand for their skills. The Australian, 28-29.
Spiegel Online, 2009. Fighting Global Warming: German Scientists Call for
'World Climate Bank'. Spiegel Online - News - International . Available
at: http://www.spiegel.de/international/germany/0,1518,645996,00.html
[Accessed September 1, 2009].
Stolberg, S.G., 2008. Richest Nations Pledge to Halve Greenhouse Gas. The
New York Times. Available at:
http://www.nytimes.com/2008/07/09/science/earth/09climate.html?
_r=2&th=&oref=slogin&emc=th&pagewanted=print&oref=slogin
[Accessed July 9, 2008].
Tällberg Foundation, 2008. How on earth can we live together - 350: remember this number for the rest of your life. Available at:
http://www.tallbergfoundation.org/T
%C3%84LLBERGINITIATIVES/350/tabid/429/Default.aspx [Accessed
July 13, 2008].
Tripati, A.K., Roberts, C.D. & Eagle, R.A., 2009. Coupling of CO2 and Ice Sheet
Stability Over Major Climate Transitions of the Last 20 Million Years.
Science, 1178296.
Uhlig, H., 1999. A toolkit for analyzing nonlinear dynamic stochastic models
easily. In R. Marimon and A. Scott: Computational Methods for the
Study of Dynamic Economies. Oxford and New York: Oxford University
Press, pp. 30-61.
United States Congressional Budget Office, 2009. How CBO Estimates the
Costs of Reducing Greenhouse-Gas Emissions, Washington DC: The
Congress of the United States. Available at: http://cboblog.cbo.gov/?
p=238 [Accessed June 17, 2009].
United States Environmental Protection Agency, 2004. Cap and Trade: Acid
Rain Program Results, Available at: http://www.epa.gov/airmarkets/cap-
trade/docs/ctresults.pdf.
Weiss, D.J., 2008. Global Warming Solution Studies Will Overestimate Costs,
Underestimate Benefits. Net News Publisher. Available at:
http://www.netnewspublisher.com/global-warming-solution-studies-will-
overestimate-costs-underestimate-benefits/ [Accessed July 19, 2009].
Whiteman, H., 2009. Pachauri: Stern stance on China climate talks 'pragmatic'.
CNN.com/technology. Available at:
http://edition.cnn.com/2009/TECH/science/06/15/ipcc.pachauri.climate.c
hange/ [Accessed June 16, 2009].
Yeoh, E. & Gosh, A., 2007. Hu, on climate change, says rich nations should
clean up acts. Bloomberg: Asia. Available at:
http://www.bloomberg.com/apps/news?
pid=20601080&sid=amhVcqkG6sz8 [Accessed April 27, 2008].
technologies that make their case
vi Hansen does not favour a “cap and trade” emissions trading system.
Furthermore, a bill to implement a “cap and trade” emissions trading system
had recently failed to achieve US Senate support in June 2008
vii With similar letters to Australia's State Premiers
viii The IPCC's dire outlook is highlighted by Pascal's Wager. In finding the existence of God to be beyond reason, the French philosopher Blaise Pascal suggested that it would be wise to behave as if God existed because in doing so one has everything to gain and nothing to lose (Pascal 1662, Note 233). If
Pascal's logic is extended to climate change, then countries would best address
the issue of CO2 emissions because they have everything to gain and nothing to
lose. There are three obvious gains. The first gain is to avoid the extraordinarily
high risk of a catastrophic situation where the hypothesised outcomes from
global warming and sea acidification indeed take place. The second gain is that
addressing emissions will greatly reduce pollution. Experience has shown that
this is generally a good thing to do. The third gain is that accelerating total
factor productivity through rapid technological advances will bring much better
standards of living to a much broader base of the world's population. As regards
having nothing to lose, there is no cost on consumers of addressing emissions if
revenue neutral environmental taxes are used as the policy instrument
ix An increase of 38% since pre-1850 levels of 280 ppm
x “Agree and ignore” where governments are assumed to not act for many reasons
ranging from continuing scepticism, a desire to protect their industries, to more
complex geopolitical and "game theory" reasons such as free-riding. There is
also natural concern about the effects of massive change from voluntary
proactive action that has material adverse effects (particularly on powerful
groups such as energy companies, generators, automotive producers and petrol
consumers) when reasons have an ideological component (because in science
there is no absolute certainty), and the new costs and new problems in society
that will be exposed such as carbon-profiteering or one nation being advantaged
over another (as can occur in free-trade agreements)
xi "Step change" in which all governments respond to major climate change
disasters in 2009 and 2010 with strong policy measures. This scenario mirrors
acid rain, which is the only time that world governments have acted in concert.
The governments agreed to cooperate only when the devastating evidence of
pollution was obvious and compelling. In the "step change" scenario, an
international treaty would require all carbon producing companies (coal mines
and oil and gas wells) in all countries to bid for a limited and decreasing number
of carbon permits in a world carbon permit market. The UN would set the
"upstream cap". The price of permits would presumably soar because of their
scarcity and demand for emissions-intensive products would fall
commensurately. The trillions of dollars raised in auctioning permits would be
spent on offsetting these impacts on humanity and on the transition to the new low-carbon economy, for example, the relocation of nations like
Bangladesh, Kiribati and the Maldives; amelioration of drought in West Africa,
Somalia and Ethiopia; cushioning the effect of price rises on poor nations
xii In 2007, Australia's CO2 equivalent emissions declared under the Kyoto Protocol
were 825.9 million tonnes or 39.3 tonnes per capita. Schellnhuber's table draws
on these Kyoto Protocol declarations. Some countries benefit from “Land Use,
Land Use Change and Forestry" sinks. For example, America's net CO2 emissions are offset by a sink benefit of 15% of the industrial total. Australia
experiences a “Land Use” source due to bushfires and land clearing. Excluding
“Land Use”, Australian emissions were 541.2 million tonnes or 25.8 tonnes per
capita, which is the figure publicised by the Australian Government. The
attractive concession granted uniquely to Australia so it would sign the Kyoto
Protocol, that “Land Use” from bushfires and clearing is offset by new growth,
has been actively debated with regard to many countries which would like the
same concession. The apparent symmetry in the assumption continues to be
seen as an error of logic and a glaring loophole. The current approach in the
lead-up to the Copenhagen meeting in December 2009 is that countries would
account for their “Land Use” but not be rewarded for new growth
xiiiThe “About This Report” preamble to the Global Climate Change Impacts in the
United States report notes: The USGCRP called for this report. An expert team
of scientists operating under the authority of the Federal Advisory Committee
Act, assisted by communication specialists, wrote the document. The report was
extensively reviewed and revised based on comments from experts and the
public. The report was approved by its lead USGCRP Agency, the National
Oceanic and Atmospheric Administration, the other USGCRP agencies, and the
Committee on the Environment and Natural Resources on behalf of the National
Science and Technology Council …. The report draws from a large body of
scientific information. The foundation of this report is a set of 21 Synthesis and
Assessment Products (SAPs), which were designed to address key policy-
relevant issues in climate science; several of these were also summarised in the
Scientific Assessment of the Effects of Climate Change on the United States
published in 2008. In addition, other peer-reviewed scientific assessments were
used, including those of the Intergovernmental Panel on Climate Change, the
U.S. National Assessment of the Consequences of Climate Variability and
Change, the Arctic Climate Impact Assessment, the National Research Council’s
Transportation Research Board's report on the Potential Impacts of Climate
Change on U.S. Transportation, and a variety of regional climate impact
assessments. These assessments were augmented with government statistics as
necessary (such as population census and energy usage) as well as publicly
available observations and peer-reviewed research published through the end of
2008
xivNote President Obama's speech to the United Nations General Assembly on 23
September 2009 (refer to discussion in Chapter 2, United Nations Security
Council Resolution 1887)
xv Confirmed in 2008 by Dr. George M. Woodwell, one of the few members of that
committee still alive: “Yes, I remember well that committee and how it was
controlled and deflected by new economic influences as the environmental
issues appeared to become acute. The study was under the auspices of the
National Research Council of the National Academy of Sciences, not the
National Science Foundation. We resorted to individual papers because we
could not agree, or see any way to agree, on a single report. Even within my
own paper there was systematic pressure to dilute the statements and the
conclusions. I had previously written and signed along with Roger Revelle,
David Keeling, and Gordon MacDonald a stronger statement for the CEQ at the
end of the Carter administration. That statement was widely publicised by Gus
Speth, then Chairman of CEQ, and ultimately used in testimony in the Congress
and as background for the Global 2000 Report published by CEQ in 1980. As far
as the summary statement of the Report was concerned, as the Preface states:
there were "no major dissents". That means no one chose to fight with the
chairman. It was poor, sickly job, deliberately made so for political reasons
characteristic of the corruption of governmental purpose in the Reagan regime.
Naomi Oreskes has it right.” Private correspondence disclosed with permission
by John Mashey in a comment submitted on Wed, 2008-09-10 17:26 (Littlemore
2008)
xvi The present world consumption of oil is 300 billion barrels per decade. It is estimated that only 1.2 trillion barrels remain. The IEA has forecast that the world needs to increase energy production by 50%, from 14 terawatts (TW) in 2008 to 21 TW in 2030. The present mix (in TW) is oil 5, coal 4, gas 3 and other (nuclear and renewable energy) 1.5-2.0. With peak-oil threatening to
reduce the availability and percentage contribution of oil, there is an acute need
for both substitute energy sources and alternative liquid fuels from coal-to-oil
plants and shale oils. While wind, solar, tidal, geothermal, hydrogen and algae
are worthwhile technologies and need to be pursued to the utmost, only nuclear
fission (and ultimately fusion) has the ability to satisfy increasing demand for
electricity while reducing emissions. At present France generates 70% of its
power from nuclear. Japan also has a high nuclear component. The nuclear
threats are reactor meltdown (as with Chernobyl's graphite-moderated, water-cooled reactor) and weapons proliferation. The meltdown issue has been addressed with new pebble-bed reactors, which have strongly negative coefficients of reactivity, unlike water-cooled graphite reactors, where a meltdown can occur if the cooling fails. When pebble-bed reactors are turned off, they simply cool down
xviiSimilar to the Scopes trial in the 1920s, which was a clash of creationists and
evolutionists
xviiiDonald Aitkin was formerly the vice-chancellor of the University of Canberra,
foundation chairman of the Australian Research Council and a researcher at the
Australian National University and Macquarie University
xixDeforestation of an acre of rainforest trees releases about 200 tonnes of carbon
xx Levied on the supply of fuels and electricity to industry, commerce, agriculture
and public administration
xxiBy 30% if other developed countries commit themselves to comparable
reductions
xxiiAn interregnum in Washington existed when this message was delivered by
Barack Obama's informal emissary, John Kerry. President-elect Barack Obama's
20 January 2009 inauguration was still 6 weeks away
xxiii William Nordhaus uses a 4% discount rate, which is the same conservative rate that economists often use for long term projects. At 4%, a $1,000 benefit in the hundredth year would be discounted to about $20 ($1,000/1.04^100), and the same $1,000 in the two-hundredth year would be worth just 39c. However, global warming and long term mitigation over 200 years is not a normal project. The term is considerably longer than the projects to which 4% is usually applied. Sir Nicholas Stern maintains that no discounting should be applied because discriminating between current and future generations is unethical.
xxivRepresentatives Henry A. Waxman of California and Edward J. Markey of
Massachusetts (Democrats)
xxvThe meeting was at the Commune Hotel, located at the Great Wall
xxviUS Department of Energy statistics
xxviiOcean acidification would prevent crustaceans forming their shells, dissolve
coral reefs, threaten food security, reduce coastal protection and damage local
economies. Acidification would be irreversible for thousands of years
xxviiiEquivalent to a 14% reduction by 2020 (compared to 2005 levels)
xxixIn an attempt to placate anger against Japan's 8% target, America's deputy
climate change envoy, Jonathan Pershing, noted Japan's new target was for
domestic reductions and compared favourably to the European Union's target of
20%, which allows for half of the reductions to be achieved through projects in
developing nations. However, the American support is misleading because the
important price effects of the two policies are not comparable
xxxDuring the G20's September 2009 Pittsburgh meeting, Japan's recently elected
Prime Minister Yukio Hatoyama expressed optimism that Japan would achieve a
full 25% reduction in emissions
xxxiChina National Development and Planning Commission Climate Policy Paper,
21 May 2009
xxxii The European Union's September 2009 commitment to the United Nations Adaptation Fund is part of a total package from industrialised nations of US$33
billion to US$74 billion per annum
xxxiii16 September is now designated World Ozone Day
xxxivPresident Sarkozy has also noted that France and Germany may introduce a
“border adjustment tax” on the assessed CO2 pollution content of goods
imported from countries with inferior climate control measures. India noted that
it could respond with a 99% tax on goods imported from countries that have
created the CO2 pollution problem
xxxvIn 2008, France generated 80% of its electricity from nuclear, compared to
23% from nuclear in Germany. The French say of nuclear electricity generation:
“No oil, no gas, no coal, no choice.” As well as being one of the largest net
exporters of electricity, France is a major exporter of nuclear technology
xxxvi Scrubbers mix lime with the flue gases from coal-fired power stations to form
calcium sulphate
xxxvii Notwithstanding the adoption of America's proposal for emissions trading
between nations, America did not ratify the Kyoto Protocol
xxxviiiThere is a common attitude that it is up to the governments of the third
world countries to protect their citizens from practices such as sweatshops and
child labour. However, often these governments deliberately turn a blind eye to
these practices, or merely pay lip-service, in order to earn foreign currency or
because they receive incentive payments or even bribes
xxxixAnalogous to wealthy nations buying products from nations with sweatshops,
child labour, slavery, abuse of human rights and abuse of the environment.
Commodities often associated with these types of practices are chocolate,
coffee, gold, diamonds, sports products, durable goods etc.
xl As Japan does to secure the votes of small nations in retaining loop-holes in the
moratorium on whaling. Another example is that in the period from 2001 to
2007, Australia began a deliberate policy of using foreign aid as a tool of
political intervention in surrounding countries
xli For example, a situation analogous to the stealthy take-over of a company on the
stock market without making a proper takeover offer including adequate
premium for control. Another example in the “property market” is where
Israelis acquired the homes of Palestinian families in Jerusalem for modest
prices, which was however part of an overall covert plan to remove Palestinian
families from areas of Jerusalem. The Government of Israel intervened and
declared this process to be unethical in disenfranchising a class of people of
their rights without adequate compensation
Chapter 4 Economic Models for Climate
Change Policy Analysis
Chapter 1 Introduction identified the role of computable general equilibrium
modelling in policy research. Chapter 2 Political economy of the Anglo-
American economic world view determined that neoclassical economics and
modelling had strengths and weaknesses but remained a primary tool for
evaluating policies in Anglo-American economies. Chapter 3 Political economy
of the Anglo-American world view of climate change identified the key
elements to be considered in climate policy, including changing stakeholder
attitudes and technology concerns, the various ways targets might be framed
and policy instruments for achieving these targets. The objective of this
Chapter is to build on this fabric of change by identifying a suitable
computable general equilibrium modelling approach, mathematical platform
and data source to achieve the research aim.
Science seeks to explain natural laws and the working of the universe through
testing theories in controlled experiments ceteris paribus.i In contrast,
economic and socio-technical engineering problems are huge, holistic and
often pressing issues with many feedback loops and dependencies. These
problems are at the core of the fabric of society. They require practical
solutions with mathematical precision, while at the same time guarding against
misplaced confidence in apparently precise numbers and recognising that the
results are merely indicators of possible trends. Examples are Australia's
planned Carbon Pollution Reduction Scheme (CPRS), the re-engineering of
banking systems and major infrastructure development.
This Chapter briefly addresses the heritage of CGE economic climate models
that form the jewel of many public policy centres. ii
Of course, there were many earlier kinds of invisible hands. One of the first was Bernard Mandeville's The Fable of the Bees: Private Vices, Publick Benefits (1723), in which he marvelled that private vices, which are publicly deplored, such
such as greed, vanity and ambition, indeed lead to the public virtue of
prosperity.
Another invisible hand was the Physiocrats' le droit naturel, or natural order of
things, which governed economic and social equilibriums. A prominent
member of the Physiocrats, François Quesnay, is remembered for developing Le Tableau économique (1758). This was the world's first economic
input-output table.iv Today a form of Quesnay's table can be found at the core
of all systems of national accounts.
development of the input-output method and for its application to important
economic problems.” His major contribution in this area is Input-Output
Economics (1966). Nowadays, the classical textbook on this topic is Miller &
Blair's Input output analysis: foundations and extensions (1985).
Chapter 2 also discussed John Stuart Mill's philosophy that production was not
the sole purpose of human existence. His book Principles of Political Economy (1848) became the primary nineteenth-century textbook on classical economics. It
provided the unique new insight that production and consumption (or what he
called distribution) were decoupled. However, it was not until Alfred Marshall
drew his masterful graph of microeconomic supply and demand scissor curves
that the ramifications of this were fully appreciated. Nevertheless, Mill did
appreciate that there was something akin to producers' and consumers' surplus
and that various moral policies (such as utilitarianism) could be applied to
consumption while not affecting production.
climate models where many equilibriums occur within industries in national
economies and commodity substitutions occur between industries in different
countries.
Walras reduced the general equilibrium to five equations that could not be solved because the number of variables exceeded the number of equations. In formulating
his problem, Walras was perhaps one of the first people to understand how a
small number of economic equations can rapidly develop into a complex model
requiring the most capable methods of operations research for solution.
Although Marshall had brought Walras' general equilibrium to maturity, there
was one last step. General equilibrium was thought to be merely a theoretical
construct. John von Neumann criticised its two weaknesses: that prices would
sometimes need to be negativevi and that the model was completely abstracted
from sociology, mechanical rather than human and social. In order to address
the first point, Von Neumann developed his own approach to general
equilibrium modelling (Von Neumann 1938; Champernowne 1945). Later he
developed “game theory” to address the behavioural weakness in general
equilibrium, which greatly enhanced Saint-Simon's tentative minimax social
optimisation (Von Neumann 1928; von Neumann & Morgenstern 1953).
The real power of general equilibrium modelling arrived when Kenneth Arrow,
Gerard Debreu and Lionel McKenzie proved that a general equilibrium could
really exist in an economy (Arrow & Debreu 1954; Debreu 1959). The Arrow-
Debreu theory of general equilibrium showed that markets discount future
events including inventions that have not yet occurred. This led to the
widespread use of computable general equilibrium (CGE) models in policy
analysis. Arrow shared the 1972 Nobel Prize with John Hicks “for their
pioneering contributions to general economic equilibrium theory and welfare
theory." In 1983, Gerard Debreu was also awarded the Nobel Prize "for having
incorporated new analytical methods into economic theory and for his rigorous
reformulation of the theory of general equilibrium.”
models for policy analysis have become quite realistic models of regional,
national and global economies. Appendix 2 CGE modelling describes at length
the techniques used in elementary CGE modelling.
Partial equilibrium models are suitable for most regional analysis. In partial
equilibrium analysis, major economic parameters such as economic growth are
provided exogenously and changes in resources are seen as perturbations to
the initial equilibrium. For example, changes to the demand curve do not affect
the supply curve.
Partial equilibrium models have the same consumer utility and production
functions, market clearance and resource constraints as generic general
equilibrium models. The one additional feature of general equilibrium model is
an income balance where the prices of commodities multiplied by the
commodity volumes is equal to (or less than) the prices of the resources
multiplied by the volumes of resources. In the field of linear programming,
discussed later in this Chapter, this relationship is called the “Main Theorem of
Linear Programming.”
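This income balance is the strong duality property of linear programming, and it can be checked directly on a small, hypothetical two-commodity, two-resource example (a sketch only; the prices, technology and endowments are invented):

```python
# Strong duality check for a tiny, hypothetical production LP: maximise the value of
# two commodities subject to two resource constraints (all numbers are invented).
import numpy as np
from scipy.optimize import linprog

prices    = np.array([3.0, 5.0])          # commodity prices
A         = np.array([[1.0, 2.0],         # resource requirements per unit of output
                      [3.0, 1.0]])
resources = np.array([40.0, 60.0])        # resource endowments

# Primal: linprog minimises, so negate the objective to maximise revenue.
primal = linprog(-prices, A_ub=A, b_ub=resources, bounds=[(0, None)] * 2, method="highs")

# Dual: minimise the value of resources at shadow prices y, subject to A'y >= prices.
dual = linprog(resources, A_ub=-A.T, b_ub=-prices, bounds=[(0, None)] * 2, method="highs")

print(f"value of commodities produced = {-primal.fun:.2f}")
print(f"value of resources at shadow prices = {dual.fun:.2f}  (equal at the optimum)")
```

At the optimum the value of output equals the value of the resource endowments at their shadow prices, which is the income balance stated above.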
The analysis of national and global affairs has increasingly required general
equilibrium models where growth is calculated endogenously, changes to the
demand curves of commodities have a major effect on the supply curves, and
the imports and exports of countries have a major effect on growth rates.
Most major countries have developed models for World Trade Organisation and GATT negotiations, free trade agreements, economic integration, taxation policies, public finance, development strategies, energy security and
greenhouse gas pollution policies.
Nevertheless, models are never complete and only ever a snapshot in the
journey of emulating the complex and changing marketplace of the globe.
There are many specialist mathematical algorithms and optimisation
limitations involved. Unless policy makers remain highly specialised they can
rarely retain mastery of computable general equilibrium models as a practical
policy making tool. This means that communicating results from specialist policy researchers at academic institutions to policy makers in government and strategists in corporations is always a challenge.
The economic equations and behaviour of actors are usually solved analytically
before being entered into the system of equations. These equations are then
solved simultaneously. Therefore, a traditional CGE model does not seek to
optimise any objective function. In practice the equations are nonlinear, so they cannot be solved by algebraic or linear techniques. Instead, an iterative solution-seeking algorithm adjusts prices until a solution to the model is found.
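A stylised sketch of such a price-seeking iteration is shown below; the iso-elastic demand and supply functions and the adjustment step are assumptions for illustration rather than any particular model used in this thesis:

```python
# Stylised price-seeking iteration (tatonnement) for a single market: the price is
# adjusted in proportion to excess demand until the market clears.  The demand and
# supply functions and the step size are assumptions for illustration only.
def demand(p):
    return 10.0 * p**-0.5        # iso-elastic demand (assumed)

def supply(p):
    return 2.0 * p**0.7          # upward-sloping supply (assumed)

p, step = 1.0, 0.2
for iteration in range(1000):
    excess = demand(p) - supply(p)
    if abs(excess) < 1e-8:       # market clears: stop adjusting
        break
    p += step * excess           # raise the price if demand exceeds supply, else lower it

print(f"clearing price {p:.4f} found after {iteration} iterations; quantity {demand(p):.4f}")
```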
Before test policies are introduced, the equations are calibrated to explain the
payments recorded in national accounts, which are usually provided as an
Input-Output table or system of double entries within a Social Accounting
Matrix (SAM).
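Under an assumed Cobb-Douglas specification, for example, calibration largely amounts to reading value shares off the SAM so that benchmark prices and quantities reproduce the recorded payments. The two-sector SAM below is hypothetical:

```python
# Calibrating Cobb-Douglas value shares from a hypothetical two-sector SAM:
# the exponents are simply the observed cost shares, so the benchmark payments
# are reproduced exactly.  The payments below ($bn) are invented.
sam_payments = {
    "agriculture": {"labour": 30.0, "capital": 20.0},
    "industry":    {"labour": 45.0, "capital": 105.0},
}

calibrated = {}
for sector, payments in sam_payments.items():
    total = sum(payments.values())                            # benchmark output value
    shares = {factor: v / total for factor, v in payments.items()}
    calibrated[sector] = {"benchmark_output": total, **shares}

for sector, params in calibrated.items():
    print(sector, {k: round(v, 3) for k, v in params.items()})
```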
Weaknesses in standard CGE formulations either relate to assumptions or
computational complexity. Even small CGE models with nonlinear formulations
rapidly become computationally complex and demanding (see Appendix 2 CGE
modelling).
Perhaps the major weakness in generic CGE models is the copious set of
assumptions involved. Firstly, markets are assumed to be in perfect
competition. It is assumed that both consumers and producers are respectively
rational utility and profit maximisers with the only determinant of their
behaviour being price. It is assumed that consumers are all price takers.
Fourthly, standard Cobb-Douglas and CES functional forms embody an
assumption of constant returns to scale. Therefore, standard formulations do
not provide for increasing or decreasing returns to scale.
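The constant-returns property follows directly from these functional forms (standard algebra, reproduced here only for reference):

```latex
% Constant returns to scale for the Cobb-Douglas and CES forms (standard algebra).
\begin{align*}
  f(K,L) = A K^{\alpha} L^{1-\alpha}
    \;&\Rightarrow\;
  f(\lambda K,\lambda L) = A\,\lambda^{\alpha}K^{\alpha}\,\lambda^{1-\alpha}L^{1-\alpha}
    = \lambda f(K,L)\\
  g(K,L) = \bigl(\delta K^{\rho} + (1-\delta)L^{\rho}\bigr)^{1/\rho}
    \;&\Rightarrow\;
  g(\lambda K,\lambda L)
    = \bigl(\lambda^{\rho}\bigl(\delta K^{\rho}+(1-\delta)L^{\rho}\bigr)\bigr)^{1/\rho}
    = \lambda\, g(K,L)
\end{align*}
```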
Fifthly, international capital flows are not accommodated because there are no
international asset markets.
Sixthly, even though CGE equations are nonlinear, they are still linear in the
sense of a single non-discontinuous paradigm or frame of reference. CGE
models experience difficulty in migrating from one state to another, for
example, from an initial equilibrium to a new equilibrium in a dramatically
different paradigm.
One further weakness is common to all outputs from large, complex and
processing intensive models. Modellers need to be so diligent with assumptions that they can fall into the trap of believing that the apparent precision of outputs, which is only an artefact of the technique, is, or indeed should be, reality. However, the old maxim of "garbage in, garbage out" remains as valid
as ever and outputs are merely a function of the assumptions, equations and
numerical methodologies.
Of course, much work continues to improve CGE models. For example, the computational complexity of CGE models has been addressed in three ways. The first is specialist
modelling platforms such as GAMS and AMPL, which are discussed later in
this Chapter. Presolver eliminations and linearisation algorithms have
contributed greatly to computational feasibility. Lastly, new formats of
equations have been developed to simplify equation schemas, for example the
Negishi (welfare optimum) format and mixed complementarity open economy
(MCP) format that is well suited for econometric estimation (Ginsburgh &
Keyzer 1994, pp.93-7, 101-7 & 112-5). Stochastic programming has been
introduced to improve the understanding of risk.
Functional forms also have been extended for scale effects, monopolistic
competition, non-substitutable commodities and expanded product variety.
Different types of institutional behaviour have been modelled, for example,
changing consumer preferences and intertemporal tradeoffs through different
discounting techniques.
CGE energy models became popular following the 1973 and 1979 oil price crises; examples include the Ford Foundation's model (Hudson & Jorgenson 1974; Ford Foundation 1974) and Manne's ETA-MACRO model (Manne 1977). Mäler
(1974) is credited with the first CGE model encompassing public goods such as
environmental resources. However, it was not until the 1990s that energy CGE
models evolved into climate policy models, such as the OECD's global energy
and environment model GREEN (Burniaux et al. 1992).viii
In a comprehensive classification of CGE models, Bergman (2005) argues that
Nordhaus' models should not be classified as CGE because they have no
industries to settle in equilibrium.ix
In addition, there are numerous other models such as MERGE (Manne et al.
1995; Kypreos 2005; 2006; 2007), DIAM3 (Ha-Duong & Grubb 1997), DIMITRI
(Annemarth M. Idenburg & Harry C. Wilting 2000; A. Faber et al. 2007; Harry
C. Wilting et al. 2004; 2008), Duchin's world trade model (Duchin et al. 2002;
Duchin & Steenge 2007), RESPONSE (Ambrosi et al. 2003), G-Cubed
(McKibbin & Wilcoxen 1999; 2004), ENTICE (Popp 2004; 2006; 2006), MIND
(Edenhofer et al. 2005), WIAGEM (Kemfert 2005), Lenzen's generalised Input-
Output (Gallego & Lenzen 2005), WITCH (Bosetti et al. 2006), a Japanese
information technology infused model DEARS (Homma et al. 2006), E3MG
(Köhler et al. 2006), GINFORS (the Global INterindustry FORecasting System)
(Meyer et al. 2007), IAM (Muller-Furstenberger & G. Stephan 2007), the World
Bank's ENVISAGE model (Bussolo et al. 2008) and the PAGE2002 model used
by the United Kingdom Stern Review (Hope 2006).
One of the remaining goals of CGE development is to endogenise the long term
propagation of technological change through industries. This is a somewhat
elusive aim because technological change tends to come in disruptive waves.
Stone's RAS bi-proportional matrix balancing and scaling approach has been
used for many years to introduce technological change into input output
analysis (Kruithof 1937; Deming & F. F. Stephan 1940; W. W. Leontief 1941;
Stone et al. 1942; Stone 1961; 1962; Stone & Brown 1962).x Appendix 3 Input
output tables provides the modern approach of Wilting et al. (Harry C. Wilting et al. 2004; 2008). Haoran Pan, ten Raa's former student and now research collaborator, introduced S-shaped logistic, Gompertz and Bass model propagation curves (Pan 2006; Pan & Kohler 2007). As an alternative to these
methods, Goulder proposes that R&D be modelled as a traded commodity
(Goulder & Schneider 1999; Goulder & Mathai 2000).
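As background to Stone's RAS approach mentioned above, the sketch below shows the core bi-proportional balancing step with invented numbers: a base-year flow matrix is alternately row- and column-scaled until it matches new row and column margins. This is only the textbook iteration, not the extended procedure of Wilting et al.

```python
import numpy as np

def ras_balance(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Bi-proportionally scale A0 until its row and column sums match the targets."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        A *= (row_targets / A.sum(axis=1))[:, None]   # scale each row
        A *= (col_targets / A.sum(axis=0))[None, :]   # scale each column
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# Invented base-year inter-industry flows and new-year margins (totals must agree)
A0 = np.array([[10.0, 5.0],
               [4.0, 6.0]])
A1 = ras_balance(A0, row_targets=np.array([18.0, 12.0]),
                 col_targets=np.array([16.0, 14.0]))
print(A1.round(3))
```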
International Food Policy Research Institute (IFPRI).xi The second is Purdue University's Global Trade Analysis Project (GTAP) with Monash
University's Centre of Policy Studies (CoPS). Within each group, the
institutions regularly cross publish and swap staff and management.
Australian researchers have been most interested in the latter group. In 1993, the Australian Productivity Commission and Monash University assisted Thomas Hertel to establish the Global Trade Analysis Project (GTAP) at Purdue
University. Purdue accepted the Australian Productivity Commission's project
database and CGE model of the world economy, called the Sectoral Analysis of
Liberalising Trade in the East Asian Region (SALTER) (Jornini et al. 1994).xii
In its literature study for the Garnaut Climate Change Review, Frontier
Economics (2008, pp.2-3) listed the requirements for a CGE model that could
estimate the benefits as well as the costs of greenhouse emissions:
that is appropriate for the modelling analysis that the [Garnaut] Review
requires.”
While the combination of GTEM and CSIRO's Mk3L facilitates a climate change feedback loop and spatial economic disaggregation, a major disadvantage is that GIAM remains two distinct sub-models. The coupling between climate and sectoral industry performance is therefore indirect and, together with the production functions within and between industries and countries, significantly impacts GIAM's dynamic performance.
at some time a backstop technology would emerge, which would at
some high cost, absorb emissions from the atmosphere and offset
emissions elsewhere and we assume that backstop technology would
come in at US$200; that’s about AU$250 today. At that point, on this
third assumption we assume that there would be a technological
breakthrough, that is, substantial costs would remove carbon
dioxide from the air for sequestration.
Shortly after this conference, the Australian Treasury published its modelling
report Australia's Low Pollution Future (Australian Treasury 2008). Appendix 1
of the report briefly outlines the models used (pp 203 & 218): “Treasury’s
climate change mitigation policy modelling includes three top-down,
computable general equilibrium (CGE) models developed in Australia: Global
Trade and Environment Model (GTEM)xvi; G-Cubed modelxvii; and the Monash
Multi-regional Forecasting (MMRF) modelxviii.”
quantities, which are aggregated over all other regions using ‘free
on board’ and ‘cost insurance freight’ value shares as weights. This
required careful linking to ensure that the world demand curve
determined within GTEM was inputted into MMRF in an appropriate
way …. A partial-equilibrium representation of the export demand
function faced by Australia for each GTEM commodity was derived.
Responsiveness of the export demand to world price changes were
estimated using GTEM parameters assuming that the rest of the
world does not respond to supply-side changes that occurred in
Australia. As the world economy responds to a given shock, such as
the imposition of an emission price, the export demand faced by
Australia shifts. A consistent measure of the shift in the export
demand functions was derived and used as input into the MMRF
model …. GTEM also determines the global emission price that
clears the global permit market. The equilibrium permit price
trajectory was used as input into the MMRF model.
growth on emissions and temperature trajectories? What will be the
effect of higher fossil-fuel prices on climate change? How will the
Kyoto Protocol or carbon taxes affect emissions, climate and the
economy? The purpose of integrated models like the DICE model is
not to provide definitive answers to these questions, for no definitive
answers are possible, given the inherent uncertainties about many
relationships. Rather, these models strive to make sure that the
answers at least are internally consistent and at best provide a
state-of-the-art description of the impact of different forces and
policies.
Bill Nordhaus at Yale did some very important pioneering work that
I've certainly learnt from as I was gearing up to this effort. I think
that our modelling is much more sophisticated on the structural side
than Nordhaus’. It takes the detail of changing technologies much
further than Nordhaus’ work, but his was very important pioneering
work. We come up with higher costs of mitigation and higher costs
of climate change than Nordhaus. Now, the biggest reasons for that
are not technological. The biggest reason for that is that having
reworked all the numbers on business as usual growth in emissions,
we've formed the confident view that business as usual growth in
emissions is far faster than Nordhaus assumed and the IPCC
assumed and Stern assumed, and that changes the outcome quite a
lot. Nordhaus took the view that we've got longer to deal with this
than our work shows that you have.
Nordhaus seeks to constantly update his model with the latest knowledge in
these areas and openly invites criticism of all his assumptions and
methodologies. To facilitate this he provides all his materials on his web site,
including laboratory notes of sub-models (Nordhaus 2007). An outline of the
DICE model is provided in Appendix 4 Nordhaus DICE model.
Nordhaus also details various shortcomings in his model as follows (Nordhaus
2008, pp.28, 34-5, 45, 53, 64-5, 193-4): whilst 600 years are projected, results
beyond 2050 become highly speculative because of expanding variances in
economic, scientific and technological factors; damage functions are a major
source of modelling uncertainty; the DICE model is global and aggregates
regional data sub models, which makes the model less useful for calculating
the costs and benefits of impacts and mitigation on specific regions and
countries (although a parallel effort called RICE is devoted to a multiregional
model); total factor productivity and carbon specific technological change are
exogenous rather than an endogenous variables because the robust modelling
of induced technological change has proven extremely difficult; the model has
no provision for ocean carbonate chemistry, which scientific models have
shown leads to reduced CO2 absorption over time; projecting in decades is
computationally efficient but leads to a loss of annual detail; and the CONOPT
optimisation solver is fast but at the expense of linearising DICE's nonlinear
climate equations (it also does not guarantee a global solution, but this has not been an issue).
Nordhaus (2009) recently addressed this issue with a model called RICE. This
model has 12 regions, including America, China, the European Union and Latin
America. However, each region is assumed to produce only a single commodity
and Bergman's criticism of the model not being a true CGE settlement of
industries, discussed above, continues to apply. Also, RICE remains in an
experimental form as Microsoft Excel spreadsheets.
The following illustrations show that geophysical outputs from RICE are quite
similar to those of DICE, while a higher carbon trading price (or tax) is
required due to changes in assumptions and higher growth in global output.
Illustration 13: RICE global temperature increase compared to previous models (Source: Nordhaus 2009 Figure 9)
Illustration 14: RICE carbon price compared to previous models (Source: Nordhaus 2009 Figure 10)
Recent innovations in integrated assessment models
The integrated assessment models above indicate that the main distinction
between modelling approaches has been whether the models are global,
multiregional or single region models.
Nevertheless, Input Output analysis remains popular because, in its own way,
the Leontief inverse is a straightforward form of optimisation. It is equivalent
to solving a system of linear equations for commodity flows. The analysis can
be enhanced by introducing production functions that closely match the
technology of the industry through engineering life cycle analysis.
Alternatively, a generic Transcendental Logarithmic (Translog) function can be
fitted to time series data using econometrics.
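As a minimal illustration of the equivalence noted above, the Leontief quantity model amounts to solving the linear system (I − A)x = y for gross output x given final demand y; the three-sector coefficients below are invented.

```python
import numpy as np

A = np.array([[0.20, 0.30, 0.10],    # hypothetical technical coefficients
              [0.10, 0.05, 0.25],
              [0.15, 0.10, 0.05]])
y = np.array([100.0, 50.0, 75.0])    # final demand by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
x = L @ y                            # gross output required to deliver y
x_direct = np.linalg.solve(np.eye(3) - A, y)   # same answer, solved as linear equations

print(np.round(x, 2), np.allclose(x, x_direct))
```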
While the Leontief inverse remains at the centre of all CGE economic analysis,
to evolve toward full CGE status it needs a superstructure of objective
function, constraints and optimisation techniques.
CGE modellers generally use Input Output tables only for the data. Their
systems of equations then determine economic output using a range of
elasticities and production functions. Production functions can range from a
relatively simple Cobb-Douglas multiplication and Constant Elasticity of
Substitution (CES) to quite complex Translog functions.
The advent of ten Raa's technique closes the Input Output model and thereby resolves the hoary old chestnut of friction between CGE modellers and Leontief modellers. It means that it is no longer necessary to classify models as CGE or Leontief, or as top-down or bottom-up, and to join one or other of the camps. In any case, nobody could ever really decide whether Leontief input output analysis was top-down or bottom-up.
Input Output tables are now widely used to predict flows between sectors of
the economy. There has been considerable work on disaggregating high level
inter-industry flows, for example in transportation, and investigating the effect
of industry investments on profits and trade flows.
Australian Input Output tables, Input Output mathematics and interregional
and multiregional input output models
However, after frantic development through the post-war period and a quieter
time in the 1990s, Augusztinovics (1995, p.275) announced the demise of Input
Output analysis: “Game theory and chaos [theory] have already established
themselves in economic model building. Young people, particularly, want
challenging problems and are eager to respond to the new type of demand,
coming mainly from the excessive financial superstructure. This is not to say
that there are no valuable new results in the input-output field. Interesting and
innovative papers are continuously being published that report on expansions
and new applications, address novel problems, extend the subject-matter and
polish the method. The heyday of Input-Output as a simple, transparent,
deterministic, static linear model are, however, certainly over.”
As Mark Twain wryly remarked in the New York Journal on 2 June 1897, “The report of my death is an exaggeration”. A decade after Augusztinovics' courageous pronouncement, Input Output analysis saw a renaissance as an important means of understanding globalisation. Faye
Duchin, President of the Input Output Association from 2004 to 2006, noted of
its renaissance: “After a lapse of a quarter of a century, models of the world
economy are once again in demand in connection with prospects for improving
the international distribution of income and for reducing global pressures on
the environment. While virtually all empirical models of the world economy
make use of input-output matrices to achieve consistent sector-level
disaggregation, only input-output models make full use of sectoral
interdependence to determine production levels” (Duchin 2005, p.144).
The unique feature of Input Output analysis is that rather than a single data
processing technique or a mathematical formula, it provides a platform for
evolving and customising new solutions to new global problems such as those
involving CO2 emissions.
The OECD has recently provided harmonised Input Output tables and bilateral
trade data to support the growing interest in world models. Wixted et al.
(2006) have summarised the types of policy questions that can be addressed
with this data, including world value chains, R&D and embodied technology,
productivity, growth, industrial ecology and sustainable development. Ahmad &
Wyckoff (2003) have already demonstrated the use of bilateral trade patterns
in analysing CO2 emissions embedded in trade.
Partial multipliers are always less than one because household income is
exogenous to the input-output table. Complete Keynesian multipliers can be
determined by bringing household income into the intermediate matrix. This is
called “closing the matrix”.
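A hedged sketch of "closing the matrix" with invented coefficients: a household income row and a household consumption column are moved into the intermediate matrix, so that the closed-model (Type II) multipliers capture the induced household effects that the open-model (Type I) multipliers omit.

```python
import numpy as np

A_open = np.array([[0.15, 0.25],               # two hypothetical industries
                   [0.20, 0.10]])
type1 = np.linalg.inv(np.eye(2) - A_open).sum(axis=0)   # open-model column sums

labour_row = np.array([0.30, 0.35])            # household income per unit of output
consumption_col = np.array([0.40, 0.30])       # household spending per unit of income
A_closed = np.zeros((3, 3))
A_closed[:2, :2] = A_open                      # original inter-industry flows
A_closed[2, :2] = labour_row                   # households as a "row" (income)
A_closed[:2, 2] = consumption_col              # households as a "column" (consumption)
type2 = np.linalg.inv(np.eye(3) - A_closed).sum(axis=0)[:2]

print(type1.round(3), type2.round(3))          # the closed-model multipliers are larger
```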
As Input Output tables are static, it is only possible to solve for the endogenous
variables in one equilibrium at a time. Investigation of the shift between
equilibria with different sets of values of parameters and exogenous variables
is known as “comparative statics”.
Comparative static analysis may be extended to dynamic analysis by taking
into account the process in moving from one equilibrium state to another and
investigating stability. This can be extended to dynamic optimisation where a
maximum or minimum is sought by setting the first derivative to zero.
Using their DIMITRI demand-driven Input Output model, Wilting et al. investigate these IPCC policy scenarios in the Netherlands for the period 2000-2030, including the effect of technological change. The authors found that
current environmental pollution is in many cases due to non-sustainable
production and consumption (A. Faber et al. 2007). They also concluded that
technology changes this pattern but leads to unanticipated side effects.
The Interregional (IRIO) Input Output model and the Multiregional (MRIO) Input Output model (Polenske 1980) are each described in detail in Appendix 3 Input output tables.
Miller & Blair (1985, pp.69-73) differentiate between IRIO and MRIO.
Theoretically, IRIO is superior to MRIO because it incorporates all inter-
industry flows whereas MRIO uses some averages. However, IRIO requires
significantly more data. MRIO only needs the standard format of bilateral trade
input data, where the declaring importer country identifies its own importing
industry and the partner exporting country. MRIO models are nevertheless
quite difficult to prepare. Miller & Blair outline the issues in data handling,
correcting conflicting and missing data.
Duchin's World Trade Model identifies optimal resource uses for a given
(exogenous) final demand under radical policy scenarios of sustainability
rather than the incremental scenarios usual in traditional CGE models.
The key features of Duchin's model are combining both price and quantity
input-output models (the price model has both resource prices and product
prices with flows in the quantity Input Output model stated in physical units);
mapping “value-added” from a monetary concept to payments for the factors of
production; and in addition to flows, including factor stocks and extending the
usual linear framework with nonlinear production functions that allow
substitution of factors.
Duchin's claim of minimising factor use for a given consumption, rather than
maximising consumption for a given factor use, needs some explanation. While
this is an advantage over traditional CGE models, the primal and dual models
inherent in all constrained Negishi-format welfare optimising models
simultaneously solve both the output-maximising and input-minimising formulations.
Benchmarking
Standard Costing
Following World War II, simple variance analysis evolved into a large schema of
standard costing with detailed drill-downs of production performance. The
factory was seen as a “cost centre” that needed to be micromanaged across
overheads, labour and materials. Each of these was finely divided into
spending, efficiency and volume variances.
In the 1980s, standard costing was heavily criticised for its distortions that led
managers to make decisions that did not reduce costs or maximise profits.
Standard costing became regarded as mostly suitable for mass-production
industries with large variable costs (such as labour) compared to fixed costs.
However the use of KPIs did not solve the basic problems in measuring
performance. It was still not possible to provide unbiased answers to important
performance questions (ten Raa 2008). For example, “What should be done if
different companies, divisions, industries or even countries scored differently
on the various ratios? What does one do with a business that scores well on
one dimension and poorly in another? Which division should get the capital or
the new business?”
There are various strategies for dealing with a business that has mixed performance ratios. One is to direct the management of the business to excel on all ratios. Another is to bring in expertise to assist the managers to do better where they are weak. A third is to permit the business to continue specialising in its strengths and to remove the causes of weak performance.
However, any change brings major issues with it. Doing anything always
affects something else because of dependencies. One of Donald Rumsfeld's
more memorable quotes was: “There are the known knowns, the known
unknowns and the unknown unknowns.” Changes always lead to expected as
well as unexpected tradeoffs in price and quality, and things like lower profits
from the reallocation of overheads and higher wages for more specialised staff.
Statistical analysis contributed the technique of Principal Components Analysis
(PCA), which regresses performance against input parameters to determine a
production function. Residual errors from the regression line are analysed as
inefficiency. It is assumed that these inefficiencies are observed in the
presence of statistical noise having a normal distribution. Inefficiency is
therefore the non-noise component of the error. It is expected that inefficiency
will have a one-sided normal, exponential or gamma distribution.
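A minimal sketch of the regression view of inefficiency described above, implemented here in a corrected ordinary least squares style on simulated data (the Cobb-Douglas data-generating process and every parameter are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
labour = rng.uniform(1, 10, n)
capital = rng.uniform(1, 10, n)
inefficiency = rng.exponential(0.2, n)            # one-sided, always >= 0
noise = rng.normal(0, 0.05, n)                    # symmetric statistical noise
log_output = 0.5 + 0.6 * np.log(labour) + 0.3 * np.log(capital) - inefficiency + noise

# Fit the average production function by ordinary least squares
X = np.column_stack([np.ones(n), np.log(labour), np.log(capital)])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
residuals = log_output - X @ beta

# Shift the fitted function up to the largest residual so every unit lies on or
# below the frontier; the remaining gap is read as inefficiency
estimated_inefficiency = residuals.max() - residuals
print(beta.round(3), round(estimated_inefficiency.mean(), 3))
```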
DEA uses linear programming to locate piecewise linear planes or facets of the
production function that sit at the outer boundary of the observations where the greatest
efficiency occurs. This technique assumes that at least some of the production
units are successfully maximising efficiency, while others may not be doing so.
Implicitly, the method creates a best virtual proxy on the efficient frontier for
each producer. By computing the distance of these latter units from their best
virtual proxy frontier and partitioning inefficiency among the inputs, strategies
are suggested to make the sub-optimally performing production units more
efficient.
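A minimal sketch of the DEA linear programme described above, using an input-oriented, constant-returns envelopment form with invented data (an illustration only, not ten Raa's or any particular study's specification):

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0, 4.0, 5.0],     # input 1 (e.g. labour) for 5 units
              [5.0, 3.0, 2.0, 6.0, 4.0]])    # input 2 (e.g. capital)
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])    # a single, normalised output

def dea_score(o):
    """Minimise theta subject to X.lam <= theta * x_o and Y.lam >= y_o, lam >= 0."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                              # objective: theta
    A_ub = np.vstack([np.c_[-X[:, [o]], X],                  # input constraints
                      np.c_[np.zeros((Y.shape[0], 1)), -Y]]) # output constraints
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                           # 1.0 means on the frontier

print([round(dea_score(o), 3) for o in range(X.shape[1])])
```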
In contrast to standard costing where the budget prices are set by assumptions
and managerial agendas, DEA relies on Marshall's basic tenet of classical
economics that scarce resources are priced according to their marginal
productivities. Organisations bid for labour and commodities until supply and demand are balanced, which is the equilibrium at which a price is set for the resource.
How are these underlying prices set? The major feature in solving a problem of
constrained resources is that shadow or accounting prices are automatically
calculated by the dual solution to the linear programming primal problem
(Hotelling 1932; Samuelson 1953; Houthakker 1960). These shadow prices are
the Lagrange multipliers, conventionally denoted by the Greek letter lambda (λ) and named in honour of Joseph Louis Lagrange (1736-1813).xxi These prices are also the same as the
marginal productivities of the resources. Free market prices of resources
usually directly reflect these marginal productivities. Indeed, the difference
between market prices and shadow prices provides a penetrating analytical
technique to investigate the inefficiency of monopolies and oligopolies.
ten Raa (2008) shows that shadow prices can be derived from the Lagrange
multipliers. For example, in the illustrations below, production unit B can increase its output by adopting best practice from A and
C. Unit A might be a firm using labour to best advantage, while C might be
using capital to best advantage.
Illustration 15: DEA: Production Unit B can expand to B' using the best practices of A & C
Illustration 16: DEA: Corresponding vectors and normals
Building on this example, we can see that the isoquant is the weighted average of the constraints from A and B, and the vectors likewise:

(a₁, a₂) = λ₁(c₁₁, c₁₂) + λ₂(c₂₁, c₂₂)

where λ₁ and λ₂ are accounting prices set by the market and λ₁, λ₂ ≥ 0. If constraint A is labour, then λ₁ is the wage rate; if constraint B is capital, then λ₂ is the interest rate to rent capital.
The primal linear programming and Lagrangian dual formulations of this DEA problem are to maximise a₁x₁ + a₂x₂ subject to c₁₁x₁ + c₁₂x₂ ≤ b₁, c₂₁x₁ + c₂₂x₂ ≤ b₂ and x₁, x₂ ≥ 0 (the primal), and to minimise λ₁b₁ + λ₂b₂ subject to c₁₁λ₁ + c₂₁λ₂ ≥ a₁, c₁₂λ₁ + c₂₂λ₂ ≥ a₂ and λ₁, λ₂ ≥ 0 (the dual).

Therefore, the following Lagrangian complementary slackness conditions can be prepared for each constraint:

λ₁[b₁ − c₁₁x₁ − c₁₂x₂] = 0
λ₂[b₂ − c₂₁x₁ − c₂₂x₂] = 0

This provides the “Main Theorem of Linear Programming”: the prices λ₁ and λ₂ measure the marginal productivities of the constrained resources b₁ and b₂, and the prices multiplied by the quantities of the input resources equal the value of the output:

λ₁b₁ + λ₂b₂ = a₁x₁ + a₂x₂
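The theorem can be checked numerically. With invented coefficients, the sketch below solves the primal (maximise the value of output subject to the resource constraints) and the dual (minimise the value of the resource endowment); the dual solution gives the shadow prices λ and the two optimal values coincide, so λ₁b₁ + λ₂b₂ = a₁x₁ + a₂x₂.

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 5.0])                 # output coefficients a1, a2 (invented)
C = np.array([[1.0, 2.0],                # resource use per unit of each activity
              [3.0, 1.0]])
b = np.array([14.0, 18.0])               # resource endowments b1, b2

# Primal: maximise a.x subject to C x <= b, x >= 0 (linprog minimises, so negate a)
primal = linprog(-a, A_ub=C, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual: minimise b.lam subject to C'.lam >= a, lam >= 0 (the shadow prices)
dual = linprog(b, A_ub=-C.T, b_ub=-a, bounds=[(0, None)] * 2, method="highs")

x, lam = primal.x, dual.x
print(x.round(3), lam.round(3), np.isclose(lam @ b, a @ x))
```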
Shadow prices exist even when market prices may not exist. It is this unique
feature of DEA that allows organisations to be readily studied even
when there are no market prices for an organisation's inputs and outputs, for
example government departments and non-profit organisations.
In the above example on clean air, the second point was the ability of industry
and consumers to reorganise themselves to minimise the new cost of emissions
permits. This focus on reorganisation and reallocation underlies the continued
evolution of CGE benchmarking out of the DEA benchmarking paradigm. xxii
CGE Benchmarking
In 1932, von Neumann wrote “We are interested in those states where the
whole economy expands without change of structure, i.e. where the ratios of
The mathematical calculation of the objective function with its utilitarian
assumption is only of limited usefulness in comparing strategies and policies.
Of much greater importance is the behaviour of shadow prices, the local and
international substitution of labour and commodities and, in the case of climate
models, the rate of switching from financial payment for emissions permits to
paying for backstop abatement technology services to remove emissions. For
example, after industry and consumers have reorganised themselves nationally
and internationally as much as possible in response to price signals, it is the
absolute reduction in emissions that is the important factor in ameliorating
climate change.
To add even more complexity to the task, the climate equations are non-linear.
This means that heavy duty non-linear optimisation techniques, such as
modern interior point optimisation, need to be called upon instead of the usual
fast linear programming algorithms, such as Simplex.
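A hedged sketch of the contrast described above, assuming a stylised two-variable welfare problem with an invented nonlinear "climate" constraint. The nonlinear constraint rules out a plain linear programming routine, so a general nonlinear method is used; here scipy's trust-constr solver, which applies an interior-point style treatment of inequality constraints, stands in for industrial solvers such as IPOPT.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def negative_welfare(c):
    # maximise log welfare over two consumption variables (coefficients invented)
    return -(np.log(c[0]) + 0.9 * np.log(c[1]))

# Nonlinear "climate" constraint: a quadratic emissions burden must stay within a budget
climate = NonlinearConstraint(lambda c: 0.5 * c[0] ** 2 + 0.8 * c[1] ** 2, -np.inf, 10.0)

result = minimize(negative_welfare, x0=[1.0, 1.0], method="trust-constr",
                  constraints=[climate], bounds=[(1e-6, None)] * 2)
print(result.x.round(3), round(-result.fun, 3))
```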
tables.xxiv Leontief encouraged ten Raa in this alternative perspective. In 1993
it became possible for ten Raa to apply his new methodology when nations
began to implement the United Nations' revised System of National Accounts,
SNA93 (United Nations 1993). A decade on, ten Raa's methodology has begun
to emerge as the bridge between Leontief analysis and CGE models as
mentioned above.
However, it is not so much in ten Raa's substitution of Make and Use tables for the Leontief A matrix in economic flows that Make and Use tables have their key advantage. It is in intertemporal models, where investment and capital are endogenously calculated.xxv
The author is also aware of the requirement in economic modelling for acyclic
network solvers, having previously implemented such solvers in projects
involving production scheduling for ambulance and special vehicle
manufacture, in an accounting system and in the creation of a modelling language similar to a Decision Support System (DSS).
For the purpose of building a new type of CGE model, the author conducted a
survey of algebraic modelling packages using as evaluation criteria the
functionality of the development packages in acyclic solvers, operations
research and data visualisation.
Lastly, linking the packages to various commercial and open source solvers
requires understanding of the various platform specifications.
The key issue with these older solvers is that they do not assume convexity and seek only a local minimum. However, a non-convex objective function can have several local minima rather than a unique minimum for a given set of constraints. For example, the energy reference system optimisation model Markal uses the MINOS solver.xxvii MINOS in turn uses a quasi-Newton approximation to the Hessian (the matrix of second derivatives). This makes no reference to convexity, so if it finds a minimum and this happens to be the global minimum then the outcome is pure chance.
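A minimal sketch of the point above: a quasi-Newton routine started from different points converges to different local minima of a non-convex function (the function is invented for illustration), so finding the global minimum is indeed a matter of the starting point.

```python
from scipy.optimize import minimize

# A non-convex objective with two local minima of different depths
f = lambda x: (x[0] ** 2 - 4) ** 2 + 0.5 * x[0]

for start in (-3.0, 3.0):
    res = minimize(f, x0=[start], method="BFGS")   # quasi-Newton, approximated Hessian
    print(f"start {start:+.1f} -> x* = {res.x[0]:+.3f}, f(x*) = {res.fun:.3f}")
# The two runs stop at different local minima; neither run can certify which one
# is global without convexity or a global search.
```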
An issue with both a nonlinear objective and nonlinear constraints is that the
problem becomes more complex in terms of convexity. CONOPT and SNOPT
are usually used in this circumstance. For example, Nordhaus uses CONOPT
with GAMS.
Programming environments
The author investigated and experimented with a number of programming
environments as shown in the following table:
Environment Description
Algencan Stand-alone non-linear solver with integration to various
languagesxxix
AMPL Student version of commercial package with limited number
of variables & constraints. Designed for large economic
models but no graphics.xxx
Ascend Open source computer algebra environment, with the
extraordinary advantage of generously granted access to the
CONOPT solverxxxi
Axiom (also FriCAS, OpenAxiom) Open source equivalent of Mathematica
Dr AMPL Open source AMPL model checking in preparation for
submission to NEOS serverxxxii
Galahad Stand-alone non-linear solver used by NEOS and Dr AMPL.
Includes the general nonlinear solver Lancelot. Requires
programs to be prepared in AMPL or Standard Input Format
(SIF)
GAMS New student version of commercial package with limited
number of variables and constraints. Designed for large
economic models but no graphicsxxxiii
IPOPT Stand-alone non-linear solver with integration in GAMS,
NEOS and Ascend
Maple Mathematica equivalent - literature research only
Mathematica UTS Enterprise Licence. Exceptional symbolic and
functional processing, LISP list management, Prolog pattern
management, graph processing, graphics and exceptional optimisation functions, including an implementation of the
most advanced interior point solver (IPOPT) and augmented
Lagrangian techniques. A unique advantage is the ability to express “whole of model” symbolic constraints in optimisation.
Graphics output is an important feature with major
advantages for communication with policy makers. While
Lagrange multipliers are provided by the
DualLinearProgramming function, unfortunately access is
not provided for the KKT multipliers in nonlinear analysis. xxxiv
Also tested Culoili KKT and Loehle solvers.
Matlab UTS Enterprise License. Procedural processing primarily for
matrix manipulation and inferior graphics to Mathematica
Maxima (Macsyma) Open source equivalent of Mathematica
MuPAD Literature research only
NEOS Server Comprehensive solver service for no charge to run GAMS
and AMPL models.xxxv Requires either GAMS or AMPL to
design programs. Batch processing rather than interactive
and no graphics.
Ocaml Open source symbolic processor similar to Mathematica,
significantly faster due to compilation but limited
functionality and lacking ease of use
Octave Open source equivalent of Matlab
OpenOpt Open source Python framework for accessing solversxxxvi
Pyneos Open source python connector to NEOSxxxvii
R Open source statistical package based on S.xxxviii This
includes network (Carter Butts' R package for graph theory),
mathgraph and genopt (Patrick Burns' R packages for graph
theory and genetic non-linear solver from S Poetryxxxix),
solver packages BB and Rdonlp2
Reduce Literature research only
Sage Open source equivalent of Mathematica - literature research
only
Scilab Open source equivalent of Matlab
yacas Open source equivalent of Mathematicaxl
Toolboxes Description
Nordhaus Equations for climate change policy modelling in GAMS xli
perturbationAIM Eric Swanson's Mathematica toolbox for stochastic
perturbation modellingxlii
Stochastic 4 Uhlig's Matlab/Octave toolbox and associated equation
generator for stochastic modelling
CUTEr Fortran procedures providing the low level functionality
required by industrial solvers. Requires programming in
Standard Input Format (SIF)
Acyclic processing
Many people involved with the development of solvers see an equation as being of a simple algebraic form, for example:

f(x) = b·x₁ + b·x₂²
g(x) = c·x₁ + d·x₂²
Solver developers rarely envisage the more complex case of recursion, for example f(t) = a·f(t−1) + b·g(t). Recursive equations require a higher level of analysis using graph theory to topologically sort equations and constraints into a solvable stream.
It may be helpful to describe the problem with the analogy of a spreadsheet for those not familiar with recursive computer algebra. Spreadsheet cell connections create a geographical connection between cells. If there is a time dimension in the columns, for example 2009 to 2012, then the intersection of the column 2010 with a row, say Revenue, may have a formula that calculates Revenue 2010 from Revenue 2009 (for example, Revenue 2009 multiplied by a growth factor). Thus recursion exists and is mapped to the geography (or topology) of the spreadsheet. From this geography, the spreadsheet algorithm calculates a network which, continuing with our example, identifies that Revenue 2009 must be calculated before Revenue 2010. Circular references are often found in calculating loans and interest.
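A minimal sketch of the topological sorting step, using the spreadsheet analogy above and the Python 3.9+ standard library module graphlib (the cell names are hypothetical); a circular reference is reported rather than silently mis-ordered.

```python
from graphlib import TopologicalSorter, CycleError

# Each cell lists the cells it depends on
deps = {
    "Revenue2010": {"Revenue2009", "Growth2010"},
    "Revenue2011": {"Revenue2010", "Growth2011"},
    "Profit2011":  {"Revenue2011"},
}
print(list(TopologicalSorter(deps).static_order()))
# leaves such as Revenue2009 appear first, then Revenue2010, Revenue2011, Profit2011

deps["Growth2010"] = {"Profit2011"}       # introduce a circular reference
try:
    list(TopologicalSorter(deps).static_order())
except CycleError as err:
    print("circular reference:", err.args[1])
```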
complex structure. This is where major algebraic environments like GAMS and
AMPL first found their market niche.
because the development of modern Interior Point techniques (for example,
IPOPT) has taken three decades to reach standard solvers.
NAMEA
National Account Matrices including Environmental Accounts (NAMEAs) are
national accounts of environmental emissions of 10 to 15 gases. De Haan &
Keuning (1996) describe how NAMEAs provide the direct contributions of
individual industries to environmental pressures, in both absolute and relative
terms. For example, ores, biomass, CO2, CO, N2O, NH3, NOx, SO2, CH4,
NMVOC, Pb, PM10, nutrient pollutants, value added and full-time-equivalent
jobs produced per tonne of mineral consumed. Input Output analysis of
NAMEA data reconstructs the production chain, notwithstanding it may not be
homogeneous.
NAMEA matrices are used in Input Output analysis for evaluating efficiencies
and targeting environmental policies. However, according to Tukker (2008),
the information within NAMEAs is merely sufficient to analyse global warming
impact and perhaps acidification but not the range of analysis required for
external costs, total material requirements and ecological footprints.
OECD
The OECD Input Output and bilateral trade databases have been mentioned
above in regard to input output models. In November 2007, the OECD released
its 2006 edition of harmonised Input Output tables and bilateral trade data
(OECD 2007a; 2007b).xliii These Input Output tables cover 28 OECD countries
(all members except Iceland and Mexico) and 10 non-member countries
(Argentina, Brazil, China, Chinese Taipei, India, Indonesia, Israel, Russia and
South Africa). This has increased from 18 OECD countries and 2 non-OECD
countries (Brazil and China) in the previous edition.
GTAP
The Global Trade Analysis Project (GTAP) Version 7 database of national input
output models, trade data and energy data has 2004 data for 57 sectors and
113 regions. It relies heavily on OECD's harmonised input output and STAN
bilateral trade data and on IEA's energy data.
GTAP's focus on the factors of production and a world economy MRIO table are
exceedingly useful in analysis. Hertel and Walmsley (2008, Chapter 1, 1.1.2)
note that:
avoid employing a data base which is exhaustive in its coverage of
commodities and countries. The Global Trade Analysis Project is
designed to facilitate such multi-country, economy-wide analyses.
EXIOPOL
Tukker (2008) describes “A New Environmental Accounting Framework Using
Externality Data and Input-Output Tools for Policy Analysis” (EXIOPOL). This is
a Euro 5 million collaborative project of 37 institutes funded by the European
Union with the 2010 objective of building a world multiregional Input Output
model (MRIO) from officially reported data as well as OECD and GTAP data.
GTAP is progressively resolving these issues. In the future EXIOPOL may
provide valuable enhancements to the GTAP database.
4.5 Conclusion
This Chapter investigates computable general equilibrium (CGE) theory and
models, mathematical platforms and data sources in order to establish how the
quality of climate-economic policy research might be improved by bringing
together recent developments in CGE policy research techniques and assets.
The Chapter extends the policy framework of Chapter 1 Introduction and the
analysis of political economy set out in Chapters 2 and 3.
World class integrated assessment models are reviewed. Recent best practice
climate-economic modelling by the Australian Garnaut Review and Australian
Treasury is closely examined. A major issue in calculating and communicating
regional and commodity spatial results was identified.
change policy, even though their equilibriums are mostly a function of the
emissions control rate and do not settle across both regions and commodities.
Ahmad, N. & Wyckoff, A., 2003. Carbon dioxide emissions embodied in
international trade of goods, OCED Directorate for Science, Technology
and Industry. Available at:
http://www.olis.oecd.org/olis/2003doc.nsf/43bb6130e5e86e5fc12569fa00
5d004c/7f6eecff40a552d7c1256dd30049b5ba/$FILE/JT00152835.PDF.
Arrow, K.J. & Debreu, G., 1954. Existence of an Equilibrium for a Competitive
Economy. Econometrica: Journal of the Econometric Society, 265-290.
Blaug, M., 1992. The methodology of economics: Or, how economists explain ,
Cambridge University Press.
Bosetti, V. et al., 2006. A world induced technical change hybrid model. Energy
Journal, 27(Hybrid Modeling of Energy-Environment), 13-37.
Burniaux, J.M. et al., 1992. The costs of reducing CO2 emissions: evidence
from GREEN, OECD.
Bussolo, M. et al., 2008. Global Climate Change and its Distributional Impacts.
In Eleventh Annual Conference on Global Economic Analysis, Helsinki,
June. Available at: https://www.gtap.agecon.purdue.edu/.
De Haan, M. & Keuning, S.J., 1996. Taking the Environment into Account: The
NAMEA Approach. Review of Income & Wealth, 42(2), 131-148.
Deke, O. et al., 2001. Economic impact of climate change: Simulations with a
regionalized climate-economy model, Kiel Institute of World Economics.
Deming, W.E. & Stephan, F.F., 1940. On a least square adjustment of a sampled
frequency table when the expected marginal totals are known. Annals of
Mathematical Studies, 11, 427-444.
Duchin, F., 2005. A world trade model based on comparative advantage with m
regions, n goods, and k factors. Economic Systems Research, 17(2), 141-
162.
Duchin, F. et al., 2002. Scenario Models of the World Economy. Cuadernos del
Fondo de Investigación Richard Stone, 7.
Eboli, F., Parrado, R. & Roson, R., 2008. Climate change feedback on economic
growth: exploration with a dynamic general equilibrium model. In
Eleventh Annual Conference on Global Economic Analysis, Helsinki,
June. Available at: www.gtap.agecon.purdue.edu.
Edenhofer, O., Bauer, N. & Kriegler, E., 2005. The impact of technological
change on climate protection and welfare: Insights from the model
MIND. Ecological Economics, 54(2-3), 277-292.
Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the
Royal Statistical Society: Series A (Statistics in Society) , 120(3), 253-82.
Ginsburgh, V. & Keyzer, M., 1994. The structure of applied general equilibrium
models, Cambridge, Massachusetts & London, England: The MIT Press.
Goulder, L.H. & Mathai, K., 2000. Optimal CO2 abatement in the presence of
induced technological change. Journal of Environmental Economics and
Management, 39(1), 1-38.
Goulder, L.H. & Schneider, S.H., 1999. Induced technological change and the
attractiveness of CO 2 abatement policies. Resource and Energy
Economics, 21(3-4), 211-253.
Ha-Duong, M. & Grubb, M., 1997. Influence of socioeconomic inertia and
uncertainty in optimal CO2-emission abatement. Nature, 390(6657),
270.
Harberger, A.C., 1962. The incidence of the corporation income tax. The
Journal of Political Economy, 215-240.
Hertel, T. & Walmsley, T.L., 2008. GTAP: Chapter 1: Introduction, Center for
Trade Analysis, Purdue University. Available at:
https://www.gtap.agecon.purdue.edu/databases/v7/v7_doco.asp
[Accessed November 14, 2008].
Hotelling, H., 1932. Edgeworth's taxation paradox and the nature of demand
and supply functions. The Journal of Political Economy, 577-616.
Hudson, E.A. & Jorgenson, D.W., 1974. US energy policy and economic growth,
1975-2000. The Bell Journal of Economics and Management Science,
461-514.
Idenburg, A.M. & Wilting, H.C., 2000. Dimitri: a dynamic input-output model to
study the impacts of technology related innovations. In University of
Macerata, Italy. Available at: http://www.iioa.org/pdf/13th
%20conf/Idenburg&Wilting_DMITRI.pdf.
IPCC, 1995. IPCC Second Assessment Climate Change 1995: A Report of the
Intergovernmental Panel on Climate Change, Geneva: IPCC. Available
at: http://www.ipcc.ch/ipccreports/tar/vol4/english/index.htm [Accessed
April 26, 2008].
Jevons, W.S., 1871. The Theory of Political Economy 3rd ed., London:
MacMillan & Co. Available at:
http://www.econlib.org/library/YPDBooks/Jevons/jvnPE3.html#Chapter
%203 [Accessed April 20, 2009].
Jornini, P. et al., 1994. The SALTER Model of the World Economy: Model
Structure, Database and Parameters, Canberra: Industry Commission.
Available at:
http://www.pc.gov.au/ic/research/models/saltermodel/workingpaper24
[Accessed May 5, 2009].
Kypreos, S., 2007. A Merge model with endogenous technological change and
the cost of carbon stabilization. Energy Policy, 35(11), 5327-5336.
Leontief, W.W., 1955. Input-Output analysis and economic structure: Studies in
the structure of the American economy: Theoretical and Empirical
Explorations in Input-Output Analysis. The American Economic Review,
45(4), 626-636.
Leontief, W.W., 1951. The structure of the American economy, 1919-1939 2nd
ed., White Plains, NY: International Arts and Sciences Press.
Lutz, W., Sanderson, W. & Scherbov, S., 2008. IIASA’s 2007 Probabilistic World
Population Projections, Vienna, Austria: International Institute of
Applied Systems Analysis. Available at:
http://www.iiasa.ac.at/Research/POP/proj07/index.html?sb=5.
Mandeville, B., 1723. The fable of the bees, or Private vices, publick benefits:
with an essay on charity and charity-schools, and a search into the
nature of society 1724th ed., printed for J. Tonson.
Manne, A., Mendelsohn, R. & Richels, R., 1995. A model for evaluating
regional and global effects of GHG reduction policies. Energy policy,
23(1), 17-34.
Manne, A.S., 1977. ETA-MACRO: A model of energy-economy interactions.
McKibbin, W.J. & Wilcoxen, P.J., 2004. Estimates of the costs of Kyoto:
Marrakesh versus the McKibbin-Wilcoxen blueprint. Energy Policy,
32(4), 467-479.
McKibbin, W.J. & Wilcoxen, P.J., 1999. The theoretical and empirical structure
of the G-Cubed model. Economic modelling, 16(1), 123-148.
Meyer, B., Lutz, C. & Wolter, I., 2007. The Global Multisector/Multicountry 3-E
Model GINFORS. A Description of the Model and a Baseline Forecast for
Global Energy Demand and CO2 Emissions, to be published in Journal of
Sustainable Development.
Miller, R.E. & Blair, P., 1985. Input output analysis: foundations and extensions,
Englewood Cliffs, N.J.: Prentice-Hall.
Mill, J.S., 1848. Principles of Political Economy 7th ed., London: Longmans,
Green and Co. Available at: http://www.econlib.org/library/Mill/mlP.html
[Accessed April 9, 2009].
von Neumann, J. & Morgenstern, O., 1953. Theory of games and economic
behavior 3rd ed., Princeton, New Jersey: Princeton University Press.
Nordhaus, W.D. & Radetzki, M., 1994. Managing the global commons: the
economics of climate change, MIT press Cambridge.
Nordhaus, W.D., 2009. Alternative Policies and Sea-Level Rise in the RICE-
2009 Model, New Haven, CT: Cowles Foundation, Yale University.
Available at: http://econpapers.repec.org/paper/cwlcwldpp/1716.htm
[Accessed September 10, 2009].
Nordhaus, W.D., 2007. Notes on how to run the DICE model, Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
[Accessed June 26, 2008].
Nordhaus, W.D., 1979. The efficient use of energy resources, New Haven,
Conn.: Yale University Press.
Nordhaus, W.D. & Yohe, G.W., 1983. Future carbon dioxide emissions from
fossil fuels. Changing Climate: Report of the Carbon Dioxide
Assessment Committee, 87.
OECD, 2007a. The OECD 2006 Input Output Tables, Paris: OECD Directorate
for Science, Technology and Industry. Available at:
http://www.oecd.org/document/3/0,3343,en_2649_34445_38071427_1_1
_1_1,00.html.
OECD, 2007b. The OECD 2006 STAN: Bilateral Trade Database, Paris: OECD
Directorate for Science, Technology and Industry. Available at:
http://www.oecd.org/document/3/0,3343,en_2649_34445_38071427_1_1
_1_1,00.html.
Pan, H., 2006. Dynamic and endogenous change of input-output structure with
specific layers of technology. Structural Change and Economic
Dynamics, 17(2), 200-223.
Pan, H. & Kohler, J., 2007. Technological change in energy systems: Learning
curves, logistic curves and input-output coefficients. Ecological
Economics, 63(4), 749-758.
Popp, D., 2006. Entice-BR: The effects of backstop technology R&D on climate
policy models. Energy Economics, 28(2), 188-222.
Popp, D., 2004. Entice: endogenous technological change in the DICE model of
global warming. Journal of Environmental Economics and Management ,
48(1), 742-768.
ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.
ten Raa, T. & Pan, H., 2005. Competitive pressures on China: Income inequality
and migration. Regional Science and Urban Economics, 35(6), 671-699.
Samuelson, P.A., 1953. Prices of Factors and Goods in General Equilibrium. The
Review of Economic Studies, 1-20.
Self, P., 2000. Rolling back the market: economic dogma and political choice,
Palgrave MacMillan.
Smith, A., 1776. An Inquiry into the Nature and Causes of the Wealth of
Nations Also known as: Wealth of Nations., Project Gutenberg.
Stone, R., Champernowne, D.G. & Meade, J.E., 1942. The precision of national
income estimates. The Review of Economic Studies, 111-125.
Tukker, A., 2008. EXIOPOL: towards a global Environmentally Extended Input-
Output Table. In Helsinki, Finland. Available at:
https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=2702 [Accessed November 14, 2008].
Von Neumann, J., 1928. Zur theorie der gesellschaftsspiele (Theory of Parlor
Games). Mathematische Annalen, 100(1), 295-320.
Industry. Available at:
http://www.oecd.org/dataoecd/46/46/37587419.pdf.
the MEGABARE model and the static GTAP model. The dimension of GTEM used
in this report represents the global economy through 13 regions (including
Australia, the United States, China and India) each with 19 industry sectors and
a representative household (for society). The regions are linked by trade and
investment. Government policies are represented by a range of taxes and
subsidies. The model also disaggregates three energy-intensive sectors into
specific technologies: electricity generation, transport, and iron and steel. Some
modifications have been made as part of the Treasury modelling program.
xviiAustralian Treasury 2008 (p209) G-Cubed models the global economy and is
designed for climate change mitigation policy analysis. An important
characteristic of G-Cubed is that economic agents are partly forward-looking:
they make decisions based not only on the present day economic situation, but
also based on expectations of the future. G-Cubed has limited detail on
technologies. Modelling using the G-Cubed model was conducted in conjunction
with the Centre for Applied Macroeconomic Analysis (CAMA) and the Treasury. A
report from CAMA covering the joint modelling work is available on the
Treasury website.
xviiiAustralian Treasury 2008 (p211): The Monash Multi-Regional Forecasting
(MMRF) model is a detailed model of the Australian economy developed by the
Centre of Policy Studies (CoPS) at Monash University. MMRF has rich industry
detail (with 58 industrial sectors) and provides results for all eight states and
territories. It is also dynamic, employing recursive mechanisms to explain
investment and sluggish adjustment in factor markets.
xixFrederick Winslow Taylor, Father of Scientific Management, identified this
around 1900
xx Along with strategic management accounting came life cycle analysis,
competitor accounting (i.e. hypothesising the performance and costs of
competitors), marginal costing and target costing. In target costing, the future
selling price in the market was estimated and the designers and engineers were
instructed to reduce costs to the market price less the profit margin.
xxiLagrange multipliers occur in linear programming. In non-linear programming,
the correct terminology is Karush Kuhn Tucker (KKT) multipliers.
xxiiRather than comparing an individual industry to its peers, generic
benchmarking for CGE switches some or all of production to the most efficient
industry.
xxiiiThis embodies the assumption that national expansions can only be greater
than or equal to 1
xxivThe United Nations System of National Accounts (SNA93) requires data to be
measured in make (also called “source”) tables and use tables. A Leontief input-
output table can be calculated from the make-use format. The equations that
connect the two formulations are: A = U . Transpose[V] and x = Transpose[V] .
s, where V and U are the make and use tables, respectively, A is the Leontief
technology matrix, x is the commodity volume and s is the activity of the
production sector that produces the commodity. In the Commodity-technology
model using make use tables, consumption Y is determined by the equation Y =
(Transpose[V] – U) . s This is similar to the Leontief material balance where x
=a.x+y
xxvPersonal communications with Thijs ten Raa in January 2009 and with ten Raa's
former PhD student and now research collaborator, Haoran Pan , suggest that
ten Raa's work will address this further
xxviCONOPT solver by ARKI Consulting & Development A/S, Bagsvaerd, Denmark
(www.conopt.com). KNITRO solver by Ziena Optimization Inc. (www.ziena.com). MINOS solver by Stanford Business Software, Inc. (sbsi-sol-optimize.com/asp/sol_product_minos.htm). SNOPT solver by Philip Gill, Walter
Murray and Michael Saunders, available through Stanford Business Software,
Inc. (sbsi-sol-optimize.com/asp/sol_product_snopt.htm)
xxviiMARKAL by the International Energy Agency (IEA) Energy Technology
Systems Analysis Programme (ETSAP) (www.etsap.org/markal/main.html)
xxviiiThe Branch And Reduce Optimization Navigator (BARON) by The Sahinidis
Optimization Group of Carnegie Mellon University Department of Chemical
Engineering (www.andrew.cmu.edu/user/ns1b/baron/baron.html)
xxixSee www.ime.usp.br/~egbirgin/tango/index.php
xxxAMPL and GAMS employ pre-solvers to detect redundancy, determining the
values of some variables before applying the algorithm and so eliminate
variables and constraints. The pre-solve phase also determines if the problem is
feasible. Corresponding to the pre-solve phase, a post-solver is required to
restitute the original problem and variables
xxxiSee ascendwiki.cheme.cmu.edu
xxxiiSee www.gerad.ca/~orban/drampl/
xxxiiiSee AMPL and GAMS footnote above
xxxivPrivate communication with Mathematica suggests that following major
enhancements in the optimisation functions, further functionality will not be
possible until new developers are appointed
xxxvSee neos.mcs.anl.gov/neos/
xxxviSee scipy.org/scipy/scikits/wiki/OpenOptInstall
xxxviiSee www.gerad.ca/~orban/pyneos/pyneos.py
xxxviiiSee cran.r-project.org
xxxixSee www.burns-stat.com
xl See code.google.com/p/ryacas
xli See www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
xliiSee www.ericswanson.us/perturbation.html
xliii“Harmonised” means that the OECD input output tables use common industry
definitions with the OECD's STAN Industry Database (STAN), Business R&D
Expenditures by Industry (ANBERD) and Bilateral Trade Database (BTD). All
industry classification is based on ISIC Revision 3 (OECD Input-Output Database
edition 2006 - STI Working Paper 2006/8). The OECD estimates that between
85% and 95% of world trade is covered in its Bilateral Trade Database
Chapter 5 A new spatial, intertemporal CGE
policy research tool
Chapter 4 Economic models for climate change policy analysis identified a
suitable computable general equilibrium modelling approach, mathematical
platform and data source to achieve the research aim. The objective of this
Chapter is to describe and validate a benchmarking model that achieves the
research aim. The new model is called Sceptre, which is an acronym for Spatial
Climate Economic Policy Tool for Regional Equilibria.
It may be noted that the above flowchart has three vertical swim-lanes and an
optimisation pool. The first swim-lane contains those activities concerned with
mining GTAP's economic and emissions data. The second swim-lane calculates
exogenous climate equations and builds an endogenous symbolic model for
Nordhaus' DICE model economic-climate equations. These scientific equations
become a climate feedback loop within the constraints. The third swim-lane
builds a multiregional input output model in symbolic form, which becomes the
economic model embedded within the constraints. The optimisation pool draws
upon these models to interpret the optimisation constraints in terms of the
most fundamental or “minimum set” of input variables of the underlying
models.
Three regions are shown, which are bilaterally interconnected through trade.
Trade deficits of each are controlled such that unrealistic global imbalances do
not occur.
In addition, the regions are subject to an economic damage function from the
common effect of carbon emissions induced global warming. A Total Factor
Productivity function offsets the damage function in each region.
evolves is indicated as 1 to 3 .
Maximise NPV over the choice variables {s, z, i, μ, inv}, where NPV is the discounted net present value of the simple sum of regional indexes of consumption per capita, calculated as the index of expansion of consumption in each regional economy compared to the initial period, divided by the index of population growth (pop) in each region.

Subject to:
Industrial Emissions Amelioration & Abatement: physical emissions = s · (1 − μ) · emissions0, where emissions0 is the initial level of industrial emissions and μ is the engineering control rate of emissions, which incurs a regional backstop technology cost dependent upon both μ and time.
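A hedged restatement of the constraint above as code. The emissions identity follows the reconstructed equation; the backstop cost curve is a purely hypothetical placeholder, since Sceptre's actual cost function is not reproduced in this section.

```python
def industrial_emissions(s, mu, emissions0):
    """Physical emissions after abatement: s * (1 - mu) * emissions0,
    where mu is the engineering control rate (0 = no control, 1 = full abatement)."""
    return s * (1.0 - mu) * emissions0

def backstop_cost(mu, t, p0=1.2, decline=0.02):
    """Hypothetical regional backstop technology cost, rising in mu and
    declining over time t; the functional form and parameters are assumptions."""
    return p0 * (1.0 - decline) ** t * mu ** 2

print(industrial_emissions(s=1.0, mu=0.25, emissions0=100.0),
      round(backstop_cost(mu=0.25, t=10), 4))
```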
Data
The GTAP economic and greenhouse gas emissions databases and acyclic
processing are discussed in Chapter 4 Economic models for climate change
analysis. A detailed procedure for aggregating GTAP data and preparing the
data for generic economic modelling is provided in Appendix 7 Mining the
GTAP database. The Appendix also shows how the data is cross-checked by
rationalising it to GTAP's Social Accounting Matrix.
The above illustrations show how the aggregated regions of the European
Union, NAFTA and Rest of World (ROW) compare in terms of Gross Domestic
Product and population. It may be noted that the three aggregated regions
have approximately the same share of Gross Domestic Product.
Key parameters
Four important control parameters in Sceptre are the number of periods in the
projection, social discount rate, depreciation rate and labour endowment
unemployment rate.
Projection periods
Two issues exacerbate the problem of large models. The first is nonlinearity
because it obviates the use of fast linear programming solvers. The
introduction of a nonlinear economic-climate feedback loop means all linear
constraints are interpreted through a nonlinear framework. This makes it
much harder to satisfy constraints and leads to performance issues. For
example, the linear programming optimisation of a 90 period Multi-regional
Input Output (MRIO) model might take only three hours to process on a high
power research computer node. Introduction of a nonlinear economic-climate
equation means it becomes increasingly difficult to satisfy all constraints and
necessary to find the best solution by scrubbing away at the constraints over
2,000 iterations. The capacity of the MRIO model drops to 13 periods and even
this takes 15 hours to compute.i
Discount rate
r = ρ + η·g

where:
ρ is the pure time rate of preference
η is the marginal elasticity of utility
g is the rate of growth of consumption per generation
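A quick sketch of the equation above; the parameter values are illustrative assumptions only and are not those of any particular study.

```python
def ramsey_discount_rate(rho, eta, g):
    """Ramsey equation r = rho + eta * g, with all rates as decimal fractions per year."""
    return rho + eta * g

# Illustrative combinations: a low pure time preference with eta = 1, and a
# higher pure time preference with eta = 2
print(f"{ramsey_discount_rate(rho=0.001, eta=1.0, g=0.013):.2%}")
print(f"{ramsey_discount_rate(rho=0.015, eta=2.0, g=0.020):.2%}")
```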
Sir Nicholas Stern (2007), Nordhaus (2008, pp.10 & 61; 2009) and The Garnaut Climate Change Review (2008, p.28) all use the Ramsey equation, albeit with quite different parameters. The pure time rate of preference and the marginal elasticity of utility are set exogenously in each study, with η of 1 for Stern, 2 for Nordhaus and both 1 and 2 for Garnaut. The rate of growth of consumption g is endogenously calculated: 1.3% for Stern, an average of 2% for the first 50 years for Nordhaus, and an average of 1.3% for the period from 2003 for Garnaut.
Nordhaus suggests that high social discount rates reflect the real situation, because entrepreneurs need to create new technology having returns commensurate with other high-technology investments, for example the returns from genetically modified crops.
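As a purely illustrative calculation, using the parameter values most commonly quoted for the Stern and Nordhaus positions (these specific numbers are assumptions adopted for the example rather than values drawn from the comparison above), the Ramsey equation gives:

$r_{Stern} \approx 0.1\% + 1 \times 1.3\% = 1.4\%$
$r_{Nordhaus} \approx 1.5\% + 2 \times 2\% = 5.5\%$

A difference of this size compounds dramatically over a century, which is why the choice of $\rho$ and $\eta$ dominates the policy conclusions of these studies.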
However, the situation is more complex than Ramsey's equation (above) suggests. For example, traditional intertemporal CGE models employ a consumption welfare function embodying the Arrow-Pratt constant relative risk aversion (CRRA) criterion. This provides a constant elasticity of intertemporal substitution of $\sigma = 1/\eta$:

$u(c) = \dfrac{c^{1-\eta}}{1-\eta}$

where:
$\sigma$ is the constant elasticity of intertemporal substitution
$\eta$ is the marginal elasticity of utility
$\rho$ is the pure time rate of preference
$u$ is welfare utility
$c$ is per capita consumption
This welfare function is often modified by subtracting one from the numerator in order to simplify it to the log utility $\ln c$ in the special case of $\eta = 1$. By applying L'Hôpital's Rule (Rudin 1976, p.109):

$u(c) = \left[\dfrac{c^{1-\eta}-1}{1-\eta}\right]_{\eta\to 1} = \lim_{(1-\eta)\to 0}\dfrac{c^{1-\eta}-1}{1-\eta} = \ln c$
The result of this comparison is that the effective discount rate r is:ii
In the case of $\eta = 1$, this equation simplifies to the well-known relationship that the discount rate is equal to the growth rate of the economy, $r = g$. However, in other cases the effective discount rate $r$ is a function both of the level of consumption $c$ and the Ramsey parameters $\{\rho, \eta, g\}$. As a consequence, the respective equivalent discount rates for the Stern, Nordhaus and Garnaut studies vary widely with consumption per capita $c$ (except in the special case marked with an asterisk where $\eta = 1$):
While the Stern, Nordhaus and Garnaut CGE models endogenously calculate the real discount rate, this is based on four independent assumptions with non-diversified cumulative errors. Two of the assumptions, $\{c, g\}$, vary within and across cases, and the other two, $\{\rho, \eta\}$, are not well understood at all. For example, Heal (2005) notes that the utility discount rate reflects ethical judgements and that its relationship to the social discount rate requires a wide understanding of political economy issues such as preferences, complementarities and substitutabilities.
Ockham's razor holds that entities should not be multiplied more than necessary.iii As one of the major weaknesses in traditional CGE modelling is the copious number of assumptions, restricting the number of assumptions in the Sceptre model has been one of the guiding principles in its design. In regard to consumption and production functions, this means a simpler explanation is better than a complex one.
It is widely regarded as a truism of markets that future problems will elicit entrepreneurial technological innovation to solve those problems. This belief is also expressed as a strong preference for current consumption over future consumption, given that the welfare of people in the future can be “dismissed” because they will be better off due to technological progress.
Depreciation rate
Unemployment rate
Therefore, the labour constraint is relaxed by allowing for a notional
unemployment rate of 6.5% in the calculation of the labour endowment. This is not
a critical assumption because the labour constraint is rarely binding for two
reasons. The first reason is that labour is assumed to grow with regional
population. The second reason is that the climate and asset constraints bind
before the labour constraint, except in extreme policy scenarios that force the
economies to contract. If policy scenarios binding labour are to be investigated
then the nature of the labour endowment in each region may need to be
researched in more detail.
For consistency with Nordhaus' model, the assumption for the time being within Sceptre is that population saturates at 8.6 billion in 2100. Since the model steps forward in whole decades from the 2004 base year, the actual year of population saturation is 2104.
It may be seen that the population saturates with EU25 nations growing from
458 million people to 489 million, NAFTA growing from 433 million people to
559 million and the Rest of the World (ROW) growing from 5.5 billion to 7.6
billion. The proportional increases over the 130-year projection period are EU25
6.7%, NAFTA 29.1% and ROW 38.1%.
However, the introduction of a new commodity of CO2 amelioration or abatement means that both emissions permits and amelioration or abatement need to be modelled as substitutable commodities. The new amelioration or abatement commodity includes higher-cost energy sources such as solar or nuclear power, consumer ameliorations such as house insulation and electric cars, and abatement services such as CO2 capture and sequestration (CCS).
If the Use matrix is augmented then the Make matrix needs to be likewise
augmented. There is an industry producing amelioration and abatement
services and another producing emissions permits, albeit the latter is most
likely run by the government. The diagonal elements of the Make matrix are
set to the sum of the corresponding uses plus net exports, thereby facilitating
international trade.
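A minimal Wolfram Language sketch of the augmentation mechanics is given below. The 3x3 matrices and the zero padding are placeholders intended only to illustrate the shape of the operation; the formulae actually entered into the new cells are those described in this section and in Appendix 7.

(* Sketch: augmenting 3x3 U and V^T matrices with rows and columns for  *)
(* amelioration/abatement (gaml) and emissions permits (gtra).          *)
(* Placeholder data only; the new cells are later filled with the       *)
(* formulae described in the text.                                      *)
u3 = RandomReal[{0, 50}, {3, 3}];
vT3 = DiagonalMatrix[RandomReal[{100, 200}, 3]];  (* GTAP V^T is diagonal *)
pad[m_] := ArrayPad[m, {{0, 2}, {0, 2}}];         (* two extra zero rows and columns *)
u5 = pad[u3];
vT5 = pad[vT3];
commodities = {"food", "mnfc", "services", "gaml", "gtra"};

The diagonal elements of the padded Make matrix for the two new commodities would then be set to the sum of the corresponding uses plus net exports, as described above.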
Trade deficit and taxation bias
Trade deficits
ten Raa's MRIO method utilises net exports (exports less imports) and so
ignores the source of imported inputs in the same way as Leontief. However,
the use of U and V matrices means that the somewhat unrealistic product-mix
assumptions can be relaxed. This method is described in Appendix 6
Benchmarking with linear programming.
When compared to traditional CGE modelling, ten Raa's MRIO method
introduces a significant advantage and some disadvantages. The significant
advantage is that internal exports and imports within aggregated GTAP regions
are inherently eliminated by the use of net exports. This removes the need to
distribute exports and imports within the domestic intermediary and final
demand and supply matrices on some arbitrary basis, such as proportional
allocation.
The approach taken in Sceptre is to value exports and imports at world prices
and treat the difference of freight and taxes as a constant bias. This approach
is detailed in Appendix 7 Mining the GTAP database. The calculated bias
becomes less appropriate if the trade in particular commodities significantly
rises or falls, or reverses. However, the alternative is to introduce trade
multipliers on net exports. This also has problems. For example, as the mix of
imports and exports changes, the multiplier becomes inappropriate. It may
even be the case that import taxes become applied to exports. In addition, such
multipliers cannot be easily implemented in a linear programming schema,
which is the overall controlling paradigm for both linear and nonlinear
formulations.
Profit & Loss
  Gross Profit                        $(V^T - U)\cdot s_t$
  Depreciation                        $-\delta\, ninvt_{t-1}$
  Dividend                            $-\gamma\, a_t$
  Increase of Retained Earnings       $\Delta RE_t$

where:
$s_t$ is the activity vector
$ninvt_{t-1}$ is the net investment at the end of period $t-1$
$invest_t$ is the investment for period $t$
$\delta$ is the depreciation rate
$\gamma$ is the economic expansion factor
$a_t$ is the consumption vector

For simplicity in explanation, it is assumed here that the trade vector for net exports is part of the consumption vector. The accompanying Cash Flow is:

Cash Flow
  Gross Profit                        $(V^T - U)\cdot s_t$
  Investment                          $-invest_t$
  Dividend                            $-\gamma\, a_t$
  Increase of Cash                    $\Delta Cash_t$

Since all value added is used for investment or consumption (here the consumption vector includes net exports), the increase of cash is zero:

$(V^T - U)\cdot s_t - invest_t - \gamma\, a_t = 0$
The net investment in a production unit comprises both inventory and fixed
capital investment. It is quite clear that inventory of a commodity is the
accumulation of the commodity. In addition, the fixed capital investment and
depreciation of this investment can be modelled as accumulations of the
commodity. In 1932, Von Neumann observed the remarkable duality between
monetary variables and technical variables such as commodity production
intensity. He concluded that money could be eliminated leaving only
commodities in economic models (Von Neumann 1938, p.1; Champernowne
1945, p.13).
Von Neumann also noted that household consumption and investment are each
parcels of commodities and that wear and tear (i.e. depreciation of the net
investment in the production process) could also be treated as a commodity.
Ultimately all net investment in a production process is absorbed into the commodities produced by the production process itself, and annual depreciation is the quantum of this absorption attributed to each year. Therefore, von Neumann implicitly assumes that commodities are equivalently bartered at fair value to achieve the mix of commodities required for fixed capital equipment.
Applying von Neumann's assumptions, the “stocks model” for the production
unit is:
Balance Sheet at time:      $t-1$                  $t$
  Cash                      0                      0
  Net Investment            $ninvt_{t-1}$          $ninvt_{t-1} + invest_t - \delta\, ninvt_{t-1}$
  Total Assets              $ninvt_{t-1}$          $(1-\delta)\, ninvt_{t-1} + invest_t$

$RE_t = RE_{t-1} + \Delta RE_t = ninvt_{t-1} + invest_t - \delta\, ninvt_{t-1} = (1-\delta)\, ninvt_{t-1} + invest_t$
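The balance sheet recursion can be checked numerically. In the following sketch the depreciation rate, opening stock and investment series are arbitrary illustrative values.

(* Numerical check of ninvt[t] = (1 - delta) ninvt[t-1] + invest[t] *)
delta = 0.04; ninvt0 = 100.;
invest = {10., 12., 9., 11.};
FoldList[(1 - delta) #1 + #2 &, ninvt0, invest]
(* approximately {100., 106., 113.76, 118.21, 124.48}: the opening stock followed by four closing stocks *)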
ten Raa has developed an alternative approach to stocks and flows based on
convolution dispersions. This is described in Appendix 6 Benchmarking with
linear programming. However, after modelling both accounting and dispersion
models, the accounting model was found to be simpler to implement.
If we were only to model stocks and flows to this stage then the model would
be unstable. This is because the drive to maximise consumption will
cannibalise capital by sending investment negative. Whilst a constraint can be
set to ensure investment is not less than zero, this is not sufficient because
maximising total consumption will still set investment to zero and depreciation
will relentlessly cannibalise accumulated capital to zero.
Sales to Assets ratios of real businesses are stable for long periods, so much so that rules of thumb are often used. For example, the Sales/Assets ratio is typically 1 for manufacturers and close to 2 for retailers.
$V^T \cdot s_t \;\le\; ninvt_{t-1} \times \dfrac{Sales}{Assets}$

For intertemporal models, this dynamic constraint takes the place of a static material balance.
Depreciation rate
A single year depreciation rate for GTAP 2004 data is calculated for each
commodity in each country. The maximum depreciation rate of 4%pa was
discussed above. However, in the industries of some countries the annual
investment can be less than the net accumulated investment multiplied by the
default depreciation rate. In this case, the single year depreciation rate is set
to equal the annual investment divided by the net accumulated investment.
Sales to Assets ratios tend to remain stable over long periods. A comparison
over a period of 7 years may be calculated using GTAP data sets for the base
years of 1997 and 2004.vi The Sales to Assets ratios for each commodity and
region are shown in the following illustration.
Maximise, over $\{s, y, inv\}$:   $NPV(ypc)$, where $ypc$ is consumption per capita

Subject to:

Commodity flows balance:   $(V^T - U) * s - y - inv = 0$   (where $*$ signifies the convolution product), with $V^T$ the Make matrix, $U$ the Use matrix, $s$ the industry activity, $y$ the consumption vector and $inv$ the investment vector.
In the above specification, the closing written down value of assets closewdv
is calculated as the convolution over time of investment with the per unit
depreciation profile used to write down the value of accumulated investment.
For example, with, say, 10% depreciation on a declining balance basis, the profile for writing down assets would be $\{1, 0.9, 0.81, \ldots\}$. Accumulated
depreciation is the difference between accumulated investment to the end of a
period and the closing written down value of assets at the end of that period.
Annual depreciation may then be calculated from this series as required.
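A minimal sketch of this calculation is given below, using an illustrative 10% declining-balance profile and an arbitrary investment series; the closing written down value and the accumulated depreciation follow directly from the convolution described above.

(* Closing written down value as the convolution of investment with the      *)
(* declining-balance profile {1, 0.9, 0.81, ...}; illustrative values only.  *)
rate = 0.10; periods = 5;
profile = (1 - rate)^Range[0, periods - 1];
invest = {20., 15., 10., 12., 8.};
closewdv = Table[Sum[invest[[k]] profile[[t - k + 1]], {k, 1, t}], {t, 1, periods}];
accumulatedDepreciation = Accumulate[invest] - closewdv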
It may be noted that the above sales to assets constraint uses the closewdv
of the current period rather than the prior period (as in the specification of
Sceptre). The current period is demonstrated here as the most computationally
difficult situation because of the circular dependence of sales on assets, assets
on investment, and investment (and consumption) on sales.
It is useful to consider the economic model's performance in a Negishi welfare
maximising mode. More simply, to see how much growth can be driven through
the model when it is restrained merely by sales-to-asset infrastructure limits.
A second Game Theory issue is that the model requires a system of inter-period
equity or else the model will simply place consumption where it is maximised
and not where it is needed for the real welfare needs of society.
commodity resources to firms in order to ensure social stability through
continuity of the virtuous cycle that converts labour to consumption.
The model problem remains linear and may be quickly solved using a Simplex
or Revised Simplex method. In the illustration below, computed results are
shown for simplified example inputs. These inputs are a commodity V matrix
with a single element having value 100, the corresponding single element in
the U matrix having a value of 80, initial consumption 11, sales-to-asset ratio of
1.0, written down value of assets brought forward of 100, depreciation rate of
4% (declining balance basis), population growth rate of 2% and discount rate
4%.
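The following Wolfram Language sketch reproduces the spirit of this example in a deliberately stripped-down form. It is not the Sceptre code: the ten-period horizon, the closed-form expression for assets and the use of NMaximize rather than a dedicated Simplex call are simplifications adopted only to keep the sketch short.

(* Stripped-down single-commodity model: V = 100, U = 80, sales/assets = 1,      *)
(* opening assets 100, 4% declining-balance depreciation, 2% population growth,  *)
(* 4% discount rate. Illustrative only; not the Sceptre implementation.          *)
v = 100; u = 80; ratio = 1.0; delta = 0.04; a0 = 100; y0 = 11; g = 0.02; r = 0.04; T = 10;
vars = Flatten[Table[{s[t], y[t], inv[t]}, {t, 1, T}]];
assets[t_] := (1 - delta)^t a0 + Sum[(1 - delta)^(t - k) inv[k], {k, 1, t}];
constraints = Flatten[Table[
    {(v - u) s[t] == y[t] + inv[t],   (* commodity flow balance *)
     v s[t] <= ratio assets[t],       (* sales limited by accumulated assets *)
     s[t] >= 0, y[t] >= 0, inv[t] >= 0}, {t, 1, T}]];
npv = Sum[(y[t]/y0)/((1 + g)^t (1 + r)^t), {t, 1, T}];  (* discounted consumption per capita index *)
NMaximize[{npv, And @@ constraints}, vars]

As in the discussion below, this structure pushes consumption towards the later periods unless the model is closed for households or final-period investment is constrained.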
Illustration 20: Model Performance with an
objective function to maximise the Net Present
Value of Consumption per capita
Of course, this “consumption party” could not occur in a model closed for
households for two reasons. The first is that the very large increase in industry
activity would increase the quantum of wages, with both the workforce and the
level of wages increasing. Both would lead to increased consumption. Secondly,
the massive consumption vector in the final years could not be afforded by
consumers in those years. Closing for households will stabilise the model and
is used within the final Sceptre model.
Another form of endogenous stabilisation is possible. Businesses and
governments look for monotonic growth over quite long periods. While
payback periods as short as one to three years are applied to incremental
investment, the business plans that underpin infrastructure investment
decisions usually range across the period of debt repayment. This is three to
five years as a minimum and might be as long as ten to fifteen years. Some
infrastructure facilities in the resource industry would be evaluated over
production lifetimes of twenty to fifty years. Long term business plans seek to
maximise growth in every year. Expressed as a single number objective, such
business plans seek to achieve the maximum throughput in the present as well
as in the future by maximising the minimum annual growth, which is a
Minimax function.
Maximise, over $\{s, y, inv, ygr\}$:   $ygr$, where $ygr$ is the minimum growth of consumption per capita in any period

Subject to:

Commodity flows balance:   $(V^T - U) * s - y - inv = 0$   (where $*$ signifies the convolution product), with $V^T$ the Make matrix, $U$ the Use matrix, $s$ the industry activity, $y$ the consumption vector and $inv$ the investment vector.
This Minimax formulation is still a linear model that can be solved by Simplex
methods but it takes quite a long time to find the Minimax solution. It may be
noted that consumption per capita at 100 years is approximately 6 on the log
scale, compared to 14 in the consumption party example above.
In this example the Minimax solution is a minimum annual growth rate of 3%.
As the “wavy” lines in the figure above show, growth in other years is variable
but all rates of growth are higher than in the minimum year. Overall, the
compound constant growth rate corresponds to a fairly robust 3.4% pa. This
equivalent constant growth rate is not itself a feasible solution because the
dynamics of the model require negative investment (i.e. cannibalised assets) in
many years. Indeed, if the growth rate is constrained to be constant, the
maximum growth rate is a much more moderate 1.2% pa. This leads to a
significant reduction in economic performance, as shown in the following
figure.
Nordhaus' basic scientific model is detailed in Appendix 4 and included here as
an illustration for reference:
The main economic-climate equations from Nordhaus' DICE model used in this
research are shown below. The definitions of parameters can also be found in
Appendix 4. However, the reader is referred to the specific implementation
within Sceptre. This is provided in Appendix 8 The Sceptre model and further
referred to in the discussion of assumptions below.
A.04   $Q_t = \Omega_t\,[1 - \Lambda_t]\,A_t\,K_t^{\gamma}\,L_t^{1-\gamma}$   (industrial output)

A.05   $\Omega_t = \dfrac{1}{1 + \psi_1 T_{AT,t} + \psi_2 T_{AT,t}^{2}}$   (economic damage function)

A.06   $\Lambda_t = \pi_t\,\theta_{1,t}\,\mu_t^{\theta_2}$   (abatement cost function)

A.12   $E_t = E_{ind,t} + E_{land,t}$   (total emissions)

A.13   $M_{AT,t} = E_t + \phi_{11} M_{AT,t-1} + \phi_{21} M_{UP,t-1}$   (atmospheric carbon concentration)

A.14   $M_{UP,t} = \phi_{12} M_{AT,t-1} + \phi_{22} M_{UP,t-1} + \phi_{32} M_{LO,t-1}$   (upper oceans carbon concentration)

A.15   $M_{LO,t} = \phi_{23} M_{UP,t-1} + \phi_{33} M_{LO,t-1}$   (lower oceans carbon concentration)

A.16   $F_t = \eta\,\log_2\!\left(\dfrac{M_{AT,t}}{M_{AT,1750}}\right) + F_{EX,t}$   (radiative forcing function)

A.17   $T_{AT,t} = T_{AT,t-1} + \xi_1\left(F_t - \xi_2\,T_{AT,t-1} - \xi_3\,[T_{AT,t-1} - T_{LO,t-1}]\right)$   (atmospheric temperature rise)
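The recursive structure of equations A.13 to A.17 is easy to see in code. The sketch below iterates the carbon cycle and atmospheric temperature for ten decades at a constant emissions rate. The transfer coefficients, initial stocks and the simple lower-ocean temperature update are round illustrative numbers, not Nordhaus' calibration; the calibrated values are given in Appendix 4.

(* Sketch of the DICE-style geophysics, equations A.13 to A.17. All parameter *)
(* values are round illustrative numbers, not the DICE calibration, and the   *)
(* lower-ocean temperature update is a simple assumption.                     *)
phi = {{0.81, 0.19, 0.}, {0.10, 0.85, 0.05}, {0., 0.003, 0.997}};
eta = 3.8; mat1750 = 596.; xi1 = 0.22; xi2 = 1.3; xi3 = 0.3; fex = 0.3;
step[{mat_, mup_, mlo_, tat_, tlo_}, e_] := Module[{matN, mupN, mloN, f, tatN},
  matN = e + phi[[1, 1]] mat + phi[[2, 1]] mup;                (* A.13 *)
  mupN = phi[[1, 2]] mat + phi[[2, 2]] mup + phi[[3, 2]] mlo;  (* A.14 *)
  mloN = phi[[2, 3]] mup + phi[[3, 3]] mlo;                    (* A.15 *)
  f = eta Log2[matN/mat1750] + fex;                            (* A.16 *)
  tatN = tat + xi1 (f - xi2 tat - xi3 (tat - tlo));            (* A.17 *)
  {matN, mupN, mloN, tatN, tlo + 0.05 (tat - tlo)}];
FoldList[step, {810., 1255., 18365., 0.8, 0.01}, ConstantArray[80., 10]]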
While Nordhaus demonstrates that DICE outputs are consistent with other
physical modelling, there is a significant difference between industrial CO2
emissions and the equivalent global warming potential from all six greenhouse
gases (CO2, CH4, N2O, PFC, HFC, SF6) as shown in the table below (Baumert et
al. 2009).
Prima facie agricultural CH4 and N2O emissions might be expected to rise
proportionately with food production. This suggests that the DICE approach of
treating CO2 as the sole element could be improved in a spatial model.
Notwithstanding the potential issues arising from the balance of fixed and
variable emissions, this dissertation adopts the same approach as Nordhaus
DICE in order to remain consistent with the geophysical model.
Data consistency is one of the key features of the GTAP database. Utilising this
advantage, energy related CO2 emissions have been matched to the economic
structure of the database (Lee 2008).
The table above compares the regional aggregations of GTAP energy related
CO2 emissions used in this dissertation with declared CO2 gas emissions from
energy, industrial, international bunkering and Land Use Changes and Forestry
(LUCF) that have been similarly aggregated.
This dissertation uses GTAP 2004 Energy Related emissions, with the DICE
“eland” adjustment. The reasons for this are:
It may be concluded from the analysis in this section that there is an element
of “modeller's art” in incorporating the global warming potential of CO2 and
non-CO2 gas emissions into geophysical models. This means that element-by-
element comparisons are not always straightforward and validity needs to be
established with outputs rather than inputs. For many decades William
Nordhaus has demonstrated that the results from DICE are consistent with
those from researchers with other approaches and with the linear development
of his model over time.
Optimisation variables
The minimum set of optimisation variables comprises the input variables for
the acyclic topological structure of the economic model. This can be
investigated in two ways. The first is by using the topological processor
developed in this research for serial processing of the objective function in the
Nordhaus DICE model. The second method is to manually use Mathematica's
Solve function to determine the input variables. The difference is that the
topological processor uses the initial processing order of the equation set to
make choices of input variables. Mathematica's Solve can be iteratively
customised to take advantage of consistent patterns in the MRIO model.
Objective function
The traditional CGE welfare function has been discussed in relation to discount
rate (above). The consumer welfare function was:
$u(c) = \dfrac{c^{1-\eta}}{1-\eta}$

where:
$u$ is welfare utility
$c$ is per capita consumption
$\eta$ is the marginal elasticity of consumption utility
Sceptre does not use consumption directly as in the traditional CGE welfare function. The utility function above is suitable for a partial equilibrium study of a single region. However, in a general equilibrium the function is weighted
toward large economies. For example, a 1% expansion of the American
economy would be valued at many thousands of times a 1% expansion of an
African economy.
In Sceptre, the objective function is simply net present value of the sum of the
annual per capita economic expansion of each country. The economic
expansion factor for a country is the multiplier of the GTAP 2004 data
consumption vector for the country divided by the index of population for that
year. Discounting expansion per capita means that all regions in the model are
evaluated in an unbiased way. For example, a 1% increase in the per capita
welfare of an American or an Australian has the same merit as a 1% increase in
the welfare of, say, a Chinese, Indian or African person.
In the conclusion to his analysis of intertemporal modelling, ten Raa writes
(2005, Chapter 13: Dynamic Modelling, pp.174-5):
−k b 1 − b
ae , a− and a
where :
is the economic expansion factor
Augmented consumption, investment and U & V
matrices
Each of the matrices U and V, and the vectors for consumption and investment
are derived from GTAP data as described in above and in Appendix 7 Mining
the GTAP database.
The matrices U and V are square matrices with the rows and columns equal to
the number of aggregated commodities. In this policy research, there are three
commodities $\{food, mnfc, services\}$, which form 3×3 matrices for each of the three regions $\{NAFTA, EU25, ROW\}$. The consumption and investment vectors for each of these regions are single column vectors of the three commodities $\{food, mnfc, services\}$.
As discussed above, the U and V matrices are augmented with two rows and
two columns, for amelioration and abatement services and for emission
permits trading:
The difference between the terms amelioration and abatement is merely one of form rather than function. Abatement of emissions in power generation might be a
particular service such as carbon sequestration. In contrast, amelioration
achieves a similar effect by replacing the facility, for example retrofitting a
coal-fired generation plant with a nuclear boiler.
Following the augmentation of U and V by the creation of rows and columns for
$\{gaml, gtra\}$, the commodity set becomes $\{food, mnfc, services, gaml, gtra\}$. The last two rows of the U matrix have
formulae in the cells, not unique data. Emissions are read from the GTAP
industrial emissions database for each production unit in each country. A
production unit is a column in the U matrix (and in the transposed V matrix).
Emissions are treated as a new industrial input requirement for permits rather
than a production output of a “bad” from the process.
In each column of the U matrix, the final row for emissions permits gtra , has
the formula of emissions of the production unit (converted to carbon instead of
CO2 because Nordhaus climate equations are based on carbon emissions)
multiplied by $1-\mu$, where $\mu$ is the emissions control rate, which is the proportion of emissions physically ameliorated or abated.
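In symbols, and assuming the standard mass conversion factor of 12/44 between CO2 and carbon (an assumption about the implementation detail rather than a statement of the Sceptre code), the permit requirement entered in the final row of the U matrix for production unit $j$ can be sketched as

$U_{gtra,\,j} = \tfrac{12}{44}\,E^{CO_2}_j\,(1-\mu_j)$

where $E^{CO_2}_j$ is the GTAP CO2 emissions of production unit $j$ and $\mu_j$ is its emissions control rate.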
Additional rows are added to the consumption vector in the same way. The only
difference is that multipliers in the penultimate and final rows are:
$\{1-\mu_a,\ \mu_a\}$

where:
$\mu_a$ is the proportion ameliorated or abated for the consumption vector $a$
In Nordhaus' DICE model (Nordhaus 2008, pp.41-3, 52 & 77-9), the adjusted
cost of backstop technology per tonne of carbon is:
$\theta_{1,t} = \dfrac{pback}{\theta_2}\cdot\dfrac{backrat - 1 + e^{-gback\,(t-1)}}{backrat}$

where:
$\theta_2$ is the abatement cost exponent, 2.8
$pback$ is the maximum marginal backstop cost per tonne of carbon, 1.7
$backrat$ is the ratio of backstop technology final cost to initial cost, 2
$gback$ is the rate of decline of the backstop technology cost per decade, 0.05
The pback value of 1.7 means the last unit of amelioration or abatement in
the most value-adding industries, such as jet fuel or plastics has a cost of
US$1,700 in 2005 dollars.
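A short sketch of the resulting cost path is given below. The final definition assumes, consistent with the abatement cost function A.06, that the cost rises with the control rate as $\theta_{1,t}\,\mu^{\theta_2}$; the absolute scaling against industry output is handled inside Sceptre and is not reproduced here.

(* Decline of the adjusted backstop cost over time, using the parameter   *)
(* values quoted above; illustrative of the shape of the cost path only.  *)
pback = 1.7; backrat = 2; gback = 0.05; theta2 = 2.8;
adjustedCost[t_] := (pback/theta2) (backrat - 1 + Exp[-gback (t - 1)])/backrat;
abatementCost[t_, mu_] := adjustedCost[t] mu^theta2;
TableForm[Table[{t, adjustedCost[t], abatementCost[t, 0.2], abatementCost[t, 1.]}, {t, 1, 13}],
  TableHeadings -> {None, {"decade", "adjusted cost", "cost at mu = 0.2", "cost at mu = 1"}}]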
The profile of average backstop technology cost with time, assuming full
amelioration or abatement of emissions is shown in the following illustration:
The following illustration shows how abatement cost varies with time and the
control rate :
Illustration 26: Variation of Abatement Cost with time
and controlled emissions
It may be seen that the average price of abatement can be as low as a few US
dollars per tonne of carbon. This low level of cost applies to the “low hanging
fruit” of amelioration and abatement opportunities. However, low costs have
also been suggested for large-scale geoengineering abatement, which is
nowadays known as “climate engineering”. While this geoengineering
technology providing such a low cost does not yet exist, it may include, for
example, large shades in space to block the sun's radiation, spraying seawater
into clouds to make them reflective, seeding clouds with aerosols that reflect
shortwave radiation and the air-capture of carbon dioxide. For comparison,
Charles (2009) estimates the cost of abating emissions from coal-fired power
stations as being about US$60/tonne for carbon capture and storage, albeit
still a hypothetical technology.
The final column in the U matrix has the resource purchases for the production
centre of emissions permits gtra . All these cells are zero since the
Government has no cost in issuing emissions permits.
End usage vectors
In the same way as the U and V matrices were augmented, additional rows are
also appended to the consumption, investment and net exports vector.
However, GTAP does not provide emissions data for investment and net exports.
In the investment vector, the cells are simply zero.
In the net exports vector, synthetic entries are made to facilitate international
emissions trading. These synthetic entries need to be small in order to not
disturb the material balance and initially sum to zero for the country.
Therefore, in examining trade flows in emission permits and amelioration and
abatement services, the interpretation of emissions permits traded by each
country will need to be divided by the vector of synthetic emissions used to
seed the international trading.
The illustration of V and U above shows sales from the $V^T$ matrix. The GTAP $V^T$ matrix is diagonal. The augmentation commodities become further diagonal elements for amelioration or abatement $\{gaml\}$ and emissions permit sales $\{gtra\}$. Sales of each of these commodities are the sum of the respective
commodity demand including industrial uses, investment, consumption and net
exports.
Once the government settles on a policy to limit atmospheric temperature rise,
the government may then introduce quantity limits to create a profile of
scarcity and stimulate a price on emission permits. Sceptre can also be
operated in this mode, where a resource limit is placed on emission permits so
a price is generated.
$V^T - U_{observed} = (V^T - U_0)\cdot dam_0 \qquad\text{or}\qquad V^T - U_0 = \dfrac{V^T - U_{observed}}{dam_0}$

$(V^T - U)' = (V^T - U_0)\cdot dam$

Substituting the equations above:

$U' = V^T - \dfrac{V^T - U_{observed}}{dam_0}\cdot dam$

So the revised $U'$ is given by:

$U' = V^T - (V^T - U_{observed})\cdot\dfrac{dam}{dam_0}$
$U' = V^T - (V^T - U_{observed})\cdot al$

where:
$al$ is the index of total factor productivity
Combining the effect of total factor productivity with economic damage, the resulting matrix is:

$U' = V^T - (V^T - U_{observed})\cdot\dfrac{dam}{dam_0}\cdot al$
Neither economic damages nor total factor productivity benefits are applied to
amelioration or abatement services and emissions permits.
Labour factor productivity is measured as $(V^T - U)\,/\,\text{Labour hours}$.
The illustration below shows that labour factor productivity in America and
Australia has grown by about 2%-3% pa over the last three decades (RBA
2009). On a per decade basis, this is equivalent to about 32% and 36% per
decade respectively.
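The annual-to-decadal conversion is a simple compounding; for example, assuming underlying annual rates of roughly 2.8% and 3.1%, which are consistent with the quoted decadal figures:

$(1 + 0.028)^{10} - 1 \approx 0.32 \qquad (1 + 0.031)^{10} - 1 \approx 0.36$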
However, Hicks (1932) and subsequently Solow (1957) suggest that production
functions be characterised with a constant relationship between the factors
and that Total Factor Productivity is independent of the factors. It is assumed
that the marginal rates of substitution of the factors remains constant and the
proportional balance of labour and other factors in a production function
remains unchanged notwithstanding an increase in economic output
occasioned by technological progress. This is discussed in Appendix 3 Input
Output Tables, in regard to Solow's variable for technological change, $A$.
As a consequence of these three assumptions, labour per unit of industry activity remains constant, and the growth in the partial productivity of labour equals the growth in Total Factor Productivity:

$\text{Growth in the Partial Productivity of Labour} = \dfrac{\dfrac{A_2 (V^T-U) s_2}{L_2} - \dfrac{A_1 (V^T-U) s_1}{L_1}}{\dfrac{A_1 (V^T-U) s_1}{L_1}} = \dfrac{\dfrac{A_2 s_2}{L_2} - \dfrac{A_1 s_1}{L_1}}{\dfrac{A_1 s_1}{L_1}} = \dfrac{\left[\dfrac{L_1}{s_1} \Big/ \dfrac{L_2}{s_2}\right] A_2 - A_1}{A_1}$

With constant returns to scale, $\dfrac{L_1}{s_1} = \dfrac{L_2}{s_2}$, so that

$\text{Growth in the Partial Productivity of Labour} = \dfrac{A_2 - A_1}{A_1} = \text{Growth in Total Factor Productivity}$
Changes in technology have led to approximately 40% of all jobs globally being in service-related areas (Morris 2007). This rises to 80% in advanced
Western economies. The service sector is now twice as large as the
manufacturing sector.
Notwithstanding the differences in labour productivity between the
manufacturing and service sectors, competitive markets for labour are heavily
influenced by the sector with the highest capacity to pay. As manufacturing has
the highest marginal productivity, it often sets the pace of wages. The increase of wages in the services sector without a corresponding increase in productivity is known as the Baumol disease (ten Raa & Wolff 2001).
The flow equations for each commodity in each country in each time period are
the aggregate of the following items, which sum to zero:
Illustration 29: MRIO model linear programming schema. The schema has blocks of columns for the industry activities $\{s_1,\dots,s_5\}$, the export activities $\{z_1,\dots,z_5\}$ and the investment activities $\{i_1,\dots,i_5\}$, together with the consumption vector scaled by the expansion factor $\gamma$, a bias column and a total column. Its rows are the five commodities, food ($c_1$), manufacturing ($c_2$), services ($c_3$), CO2 permits ($c_4$) and CO2 amelioration ($c_5$), whose flows are built around the $U - V^T$ block and the consumption entries $a_1,\dots,a_5$ and each total zero, plus a labour hours row constrained to be $\le N$.
It may be seen in the above illustration that the labour used by industry is
constrained to be less than the labour endowment, N. The labour endowment
is usually calculated as the sum of the labour hours divided by one minus the
unemployment rate. The unemployment rate assumption has been discussed
above. When industry activities vary, labour hours are redistributed across the
industries.
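With the notional 6.5% unemployment rate discussed earlier, the initial labour endowment scales the observed 2004 labour hours as shown below and is then assumed to grow with each region's population index:

$N_0 = \dfrac{\text{labour hours}_{2004}}{1 - 0.065} \approx 1.07 \times \text{labour hours}_{2004}, \qquad N_t = N_0\,\dfrac{pop_t}{pop_0}$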
If the model were static then a capital constraint would be present with a
limiting endowment M. However, capital is dynamically calculated in an
intertemporal model.
Constraints
While the objective function and its relationship to discount factor has been
extensively addressed above, the heart of a benchmarking model is in the
constraints. The theorem of complementary slackness and the main theory of
linear programming were discussed in Chapter 4 Economic models for climate
change policy analysis. These constraints form the commodity and factor
markets through the Dual formulation. The Lagrange multipliers are the prices
of the constraint resources.
For a single-period model, the constraints are relatively simple and can be analytically expressed. When the model
becomes multi-period intertemporal, there is a rolling forward of single period
models. Each successive phase of the model comprises all the symbolic
equations of the antecedent models. The process relies on powerful symbolic
processing in Mathematica and results in extremely long, complicated and
highly nonlinear equations.
Inequality Constraints

Sales/Asset ratio (net investment in the previous period × Sales to Assets ratio ≥ V · sector activity): As discussed in the accounting stocks and flows model (above), the material balance is brought into the optimisation model through the Sales to Assets assumption. Sales in the current period, represented by the V matrix multiplied by the activity vector, must be less than or equal to the assets in the previous period multiplied by the Sales to Assets ratio. In dynamic input-output modelling this is known as “closing the model for investment”. This constraint also forms part of “closing the model for trade”. The Sales to Assets constraint is very important and a major part of Sceptre's innovation, because it substitutes a dynamic material balance for ten Raa's static material balance. Therefore, the Main Theory of Linear Programming is able to form a series of dynamic markets that maximise outputs while minimising inputs. Furthermore, using Sales to Assets ratios is a stable approach because these ratios tend to be stable over medium term time frames. Therefore the ratios have not been changed over time.

Final period investment (current period investment ≥ previous period investment): Accumulated investment cannot be cannibalised for consumption (except through depreciation). As production is divided between investment, consumption and net exports, the simplest assumption to achieve the anti-cannibalism outcome is to require that each industry's investment be maintained in the final period.

Country deficit limit (exim · export activity ≥ deficit): As discussed above, net exports multiplied by the activity vector must be less than or equal to the country's actual GTAP 2004 deficit. The deficit is a negative number. This constraint is part of what is known as “closing the model for trade” in input-output modelling.

Labour constraint (labour endowment ≥ vector of labour in sector · vector of activity of sector): Each country's labour endowment is assumed to rise with its population growth. The labour used in a country is the sum of the labour used in each sector multiplied by the activity of the sector. The labour used in a country must be less than the country's labour endowment. All countries are assumed to have 6.5% unemployment in 2004, such that the initial labour endowment of a country is the total labour used in 2004 divided by (1 − unemployment rate).

Purchasing power constraint (vector of labour in sector · vector of activity of sector ≥ labour employed × economic expansion): As the labour force purchases the commodities that constitute final demand, the labour used must be greater than or equal to the initial labour employed in a country multiplied by the country's economic expansion. This constraint is equivalent to “closing the model for households” in input-output analysis, where employment and consumption are linked.

Non-negativity (investment ≥ 0, sector activity ≥ 0, economic expansion ≥ 0): Investment, sector activity and economic expansion must all be non-negative.

Control rate bounds (1 ≥ μ ≥ 0, 1 ≥ μa ≥ 0): The proportion of substitution of amelioration or abatement services for emissions permits must be between 0 and 1, for both industry (μ) and consumers (μa).

Limits on international trading of emissions permits: Limits on international emissions trading may be introduced here but have not been applied in this policy research.

Limits on national emissions: Quantitative limits on national emissions trading may be introduced here but have not been applied in this policy research.
Climate inequality constraints

Climate constraints will reflect the policy feasibility being investigated. For example, in limiting the temperature rise to 2°C in 100 years' time:

2°C ≥ temperature rise at period 10: The temperature rise in 100 years cannot exceed 2°C.

Following period 10, previous period temperature rise ≥ current period temperature rise: Following the maximum temperature rise, the temperature rise must remain stable or decline.

Following period 10, previous period emissions ≥ current period emissions: If emissions are not controlled in addition to temperature, the end effect of the model will be to accelerate emissions. Therefore, following the maximum temperature rise, industrial emissions must remain stable or decline.
Equality constraints
There are two types of equality constraints. The first are called boundary
conditions such as x = 4 , which is a light imposition on optimisation and
normally eliminated by the in-built pre-solver. However, a second type of
equality constraint heavily encumbers the solution. These are equalities of
endogenous variables that lead to internal feedback loops.
Damage function active in the current period = damage function resulting from the period: Economic damage increases resource usage and increases emissions. Increased emissions cause an increased temperature rise and increased economic damage. Therefore, a feedback loop exists. The initial economic damage needs to be settled in general equilibrium with the resulting economic damage, as they are the same number. This is how the nonlinear climate equations enter the intertemporal MRIO model.
Optimisation
A number of factors need to be considered in nonlinear optimisation. Prime
amongst these are the trade-offs between global and local minimisation,
methods of solution, and accuracy and iterations.
Global optimisers seek to find the best solutions in the presence of saddle-
points, where two or more optima may exist. Nordhaus (2008, p.45) notes that
the DICE model uses the local optimiser CONOPT. Experience with the DICE
model over many decades has not indicated any issue arising from saddle-
points.
The Mathematica package has both global and local optimisers. Use of these
packages in this current policy research confirms the robust nature of the
optimisation and that faster local optimisers can be confidently used.
Methods of solution
Accuracy and iterations
Constraint slacks
In cases where Mathematica's FindMinimum function cannot return an
optimisation result accurate to say 6 decimal places, it returns the best
solution found together with a message indicating residuals. For example: x
Illustration 30: FindMinimum return message when constraints not fully satisfied
The source of the inaccuracy may be inspected by printing out the unsatisfied
constraints having non-zero slacks and observing the magnitude of the slacks
that are unsatisfied. It is assumed in this model that slacks greater than $1\times10^{-4}$
merit investigation. In the Base Case model with 4,000 iterations, there are 13
unsatisfied constraints but none are material as shown by the output slacks:
With fewer iterations, there is more chance of unsatisfied slacks. For example,
a message of the following form is produced with 2,000 iterations:
It may be noted in the above illustration that the slack is very small, and even
more so when considered as a proportion of the magnitude of the variables. In
the last line, the slack of $-4.16\times10^{-4}$ results from the difference of very large numbers having magnitudes of $10^6$ and $10^7$.
5.3 Comparison of Sceptre with physical modelling
Chapter 2 Political Economy of the Anglo-American world view of climate
change introduced the concept of a fixed tranche of atmospheric emissions
capacity for a 2°C temperature rise.
As shown in the table below, the results of the Sceptre model developed in this
research compare favourably with physical climate change modelling by Allen
et al. (2009) and Meinshausen, M. et al. (2009) using a linear extrapolation of
emissions.
The Sceptre model is consistent with both sets of results. As discussed earlier
in this Chapter, minor differences are expected because the geophysical
framework deals with non-CO2 gas emissions through a combination of fixed
emissions and trends in radiative forcing.xii
Economic expansion
Prima facie there is quite a dramatic contrast between Sceptre and DICE. This
is especially so considering that these projections are in real dollars rather than nominal dollars that would include inflation. One would not intuitively
expect real income to increase in a J-curve.
One reason for the startling difference between DICE and Sceptre lies in the difference between unconstrained and constrained models. In
DICE, the economic model underpins the objective function rather than the
constraints. In contrast, the economic model and climate damage feedback loop in Sceptre appear within the constraints, which is computationally a much more expensive situation.
Most climate-economic modellers such as Garnaut are happy with a 100-year
time horizon. Indeed, Nordhaus notes that it would be unwise to rely on more
than the first 50 years. However, Nordhaus extends DICE to 60 decades (600
years) to show how the climate-economic ecosystem responds in the long term.
Operating experience with Sceptre has shown that a time-frame of at least 13
decades is required so performance up to 10 decades is unaffected by end
effects.
Although both models share Nordhaus' scientific-economic equations, it may
be seen that Sceptre is optimising in a different way to DICE. Sceptre is a constrained optimisation compared to DICE's unconstrained “business as usual”
case. In Sceptre, the consumption expansion in each region is constrained by
the natural endowments of labour and capital, although capital is
endogenously calculated. In addition, consumption is constrained by three
other important factors. These are the purchasing power of labour, a limit on
trade deficits and by the preference given to investment.
In DICE, none of these constraints apply. The most important of all is DICE's
preference for consumption over investment, which arises because
consumption is maximised with respect to capital. This is discussed in
Appendix 4 Nordhaus DICE model and other aspects of DICE performance are
discussed in Appendix 5 Acyclic solver for unconstrained optimisation .
In contrast, Sceptre shows 5.5% amelioration and abatement in food, 4% in
manufacturing and 7% in services. Sceptre's emissions control rate and price
are shown in the following illustrations:
Sceptre Normal Case DICE Business as Usual
Due to its high participation rate, the price of amelioration and abatement in
DICE rises to US$142/tonne at decade 6 and US$390/tonne at decade 13. This
is significantly higher than Sceptre's amelioration and abatement cost of a few
dollars per tonne.
Industrial Emissions
Sceptre shows industrial emissions rising quickly over 1 decade from about 70
GtC to 80 GtC and then slowly stabilising at about 90 GtC. In contrast, DICE's
very high projection of production and consumption cause industrial emissions
to rise to 91 GtC after one decade and stabilise 40% higher at 128 GtC in
decade 9, before slowly decreasing to 115 GtC at decade 13.
Over the first five decades, total CO2 emissions are 1515 Gt and 1785 Gt for
Sceptre and DICE respectively.
Temperature rise
Initially, both models have similar atmospheric and sea temperature rise
profiles although DICE is more aggressive.
DICE's atmospheric temperature rise increases from the present 0.8°C to 1.0°C over 1 decade and then doubles from the present level to 1.65°C over 4
decades. The same doubling in Sceptre occurs after 5 decades. With a similar
difference, the atmospheric temperature rise at the end of the projection
period of 13 decades is 3.2°C for DICE and 2.8°C for Sceptre. It will be shown
in the next section for the Base Case that such a difference in temperature rise
has extraordinary consequences for environmental cost.
Sceptre Normal Case DICE Business as Usual
Radiative forcing also mirrors CO2 concentration. DICE reaches 5.2 Watts/m2
after 13 periods and continues to accelerate. Sceptre reaches 4.5 Watts/m2 while flattening.
Damage multiplier
Sceptre shows investment rising to US$500 trillion per decade after 6 decades
and to US$1200 trillion per decade at the end of the projection period. By comparison, DICE investment per decade is similar at US$512 trillion after 6
decades and US$1254 trillion at decade 13.
Sceptre Normal Case DICE Business as Usual
5.5 Conclusion
This Chapter presented a new intertemporal computable general equilibrium
(CGE) model applying the Service Sciences technique of benchmarking to
multiregional Input Output modelling. Major design assumptions have been set
out and discussed. Key amongst these were the net present value discount
rate, population growth, climate scientific-economic equations, a new method
of intertemporal modelling using accounting stocks, flows and Sales/Assets
ratios, and the selection of an objective function.
Make and Use table augmenting methods have also been presented in regard
to carbon commodities (carbon permits and amelioration and abatement
services), impairing economic output for climate damage and enhancing output
for total factor productivity.
The model developed in this Chapter was validated with recent geophysical
research and found to be consistent. The model was also compared to the
William Nordhaus DICE model using a Normal case where output is maximised
without a climate change constraint. This is a “business as usual case” with
economic damages occurring as a result of global warming and with carbon
markets responding to this damage in order to maximise output.
It was found that the Nordhaus DICE model is a high growth, high emissions
control model. This contrasts with the benchmarking model developed in this Chapter, which has lower growth and a correspondingly less stringent emissions control regime.
Baumert, K.A., Herzog, T. & Markoff, M., 2009. The Climate Analysis Indicators
Tool (CAIT), Washington DC: World Resources Institute. Available at:
http://cait.wri.org/cait.php [Accessed November 7, 2009].
Charles, D., 2009. ENERGY RESEARCH: Stimulus Gives DOE Billions for
Carbon-Capture Projects. Science, 323(5918), 1158.
Chesbrough, H. & Spohrer, J., 2006. A research manifesto for services science.
Heal, G., 2005. Chapter 21: Intertemporal Welfare Economics and the
Environment. In Handbook of Environmental Economics. North
Holland.
Lee, H., 2008. An Emissions Data Base for Integrated Assessment of Climate
Change Policy Using GTAP, Center for Global Trade Analysis. Available
at: https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=1143 [Accessed June 26, 2009].
Lutz, W., Sanderson, W. & Scherbov, S., 2008. IIASA’s 2007 Probabilistic World
Population Projections, Vienna, Austria: International Institute of
Applied Systems Analysis. Available at:
http://www.iiasa.ac.at/Research/POP/proj07/index.html?sb=5.
Nordhaus, W.D., 2009. Alternative Policies and Sea-Level Rise in the RICE-
2009 Model, New Haven, CT: Cowles Foundation, Yale University.
Available at: http://econpapers.repec.org/paper/cwlcwldpp/1716.htm
[Accessed September 10, 2009].
ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.
ten Raa, T. & Mohnen, P., 2008. Competition and performance: The different
roles of capital and labor. Journal of Economic Behavior & Organization,
65(3-4), 573-584.
ten Raa, T. & Wolff, E.N., 2001. Outsourcing of Services and the Productivity
Recovery in U.S. Manufacturing in the 1980s and 1990s. Journal of
Productivity Analysis, 16(2), 149-165.
RBA, 2009. Chart Pack: A Collection of Graphs on the Australian Economy and
Financial Markets, Canberra: Reserve Bank of Australia. Available at:
http://www.rba.gov.au/ChartPack/index.html [Accessed September 28,
2009].
Solow, R.M., 1957. Technical change and the aggregate production function.
The Review of Economics and Statistics, 39(3), 312-320.
Weitzman, M., 1998. Gamma discounting for global warming. In First World
Congress of Environmental and Resource Economists. pp. 25–27.
Weitzman, M.L., 2001. Gamma discounting. American Economic Review, 260-
271.
i One model was run continuously for over 5 weeks on a high speed research
computing node in an unsuccessful test of ultimate constraint satisfaction
ii Fortunately, calculated very quickly using Mathematica's symbolic processing
iii Attributed to the Franciscan friar William of Ockham (1285-1349)
iv In this example, it may help to think of consumption plus net imports, where net
imports is just a negative number for net exports
v It is interesting to note that the use of DuPont analysis completes a full circle in
Leontief and CGE modelling. The Physiocrat Pierre Samuel du Pont de Nemours,
who became a prominent American industrialist, advocated low tariffs and free
trade
vi As there have been changes in the collection and classification of data between
GTAP5 and GTAP7, a more reliable analysis would require extended
econometric analysis using supplementary data sources
vii For example, Australia's net migration was 285,000 in 2009, compared to a
more normal level of 90,000 per annum
viii DICE 2005 emissions are calculated from the equation for industrial emissions:
eind(0) = 10 σ(0) (1 – μ(0)) ygr(0) + eland(0)
= 10 x 0.13418 x (1 – 0.005) x 55.667 + 11
= 85.3205 GtC per decade
Converting this equation into MtCO2 per annum:
eind(0) = 0.13418 x (1 – 0.005) x 55.667 x 3.67 x 1000 + 11/10 x 3.67 x 1000
= 27,276 + 4,037
= 31,313 MtCO2 per annum
ix ceteris paribus: other things being held constant
x This example is drawn from the file m12_13p_2C_100.nb
xi Personal communication with Wolfram indicates that this issue will be
addressed in a future release of Mathematica
xii GTAP's future release of a mapped non-CO2 gas emissions database will
facilitate further improvement in the geophysical model
xiii m12_13p_normal.nb
xiv topo_test12_comp_sceptre.nb
xv DICE “business as usual” has various miscellaneous non-binding constraints
Chapter 6 Assessment of changes in regional
and industry performance under resource
constrained growth
The foregoing Chapters have established the framework for a new lens through
which climate-economic policies may be analysed to address the research
question of identifying the regional and industry effects where resources are
limited by climate change. Chapter 5 A new spatial, intertemporal CGE policy
research tool described a new intertemporal, multiregional CGE model called
Sceptre, which is an acronym for Spatial Climate Economic Policy Tool for
Regional Equilibria. The objective of this Chapter is to use this new lens for
policy research to address an example of climate policy.
The Base Case adopted for this policy investigation is that the increase in
global average temperatures above pre-industrial levels should not exceed 2°C
and that both developed and developing countries need to work towards this
goal. This policy was accepted by the Major Economies Forum at its July 2009
inaugural meeting in L'Aquila, Italy (see Chapter 3 Political Economy of the
Anglo-American world view of climate change). This objective is consistent
with the IPCC's recommendations to ameliorate global warming and is
supported by the vast majority of scientists. By September 2009, 133 countries
and the European Union had accepted the proposed 2°C limit.
The multiplicity of results from spatial models is often celebrated and lamented
in rapid succession. Fortunately, Mathematica's rich data visualisation
capabilities allow the communication of the results to be relatively enjoyable
or, if not, then at least bearable.
Chapter 1 Introduction discussed the value of policy modelling: firstly for the
ability of modelling to test feasibility and secondly to provide an appreciation
of risks through the differences between scenarios. It was noted that other
modelling techniques would supplement CGE and ultimately public pluralist
processes, such as forums for stakeholder debate, would determine policy
decisions. So the aim in using the Sceptre policy investigation tool is to
contribute a reference position to the process of policy formation.
The 2°C Base Case is important in its own right. However, for the reasons of
systemic modelling risk discussed in Chapter 1 Introduction it is not an
immutable outcome of the policy. With this caveat, features of the Base Case
are discussed in this part of the Chapter. In the ensuing sensitivity cases, the
Base Case is used to contrast sensitivity scenarios for Point of View Analysis,
Constraint Severity Analysis, Technology Cost Analysis and Impaired Sales to
Asset Ratio Analysis.
The results are presented in terms of expansions from the current situation,
which is consistent with both the language and the mathematics of
benchmarking discussed in Chapter 4 Economic Models for Climate Change
Policy Analysis. Aggregate investment, accumulated capital and carbon
commodities are presented in absolute terms. These absolute values need to be
approached with the usual caveat concerning apparently accurate numbers in
projections.
Economic expansion
In 2004, the regions NAFTA (the United States, Canada and Mexico), the European
Union (25 countries) and the Rest of the World (ROW) had Gross Domestic
Products as shown in the following table:
EU25's economic expansion rises in the first decade and saturates at about 14%. This compares to a 6.7% increase in population as shown in Chapter 5 Sceptre model development.
NAFTA's economic expansion jumps 10% in the first decade and saturates at
about a 38% increase. This compares to a 29.1% increase in population. The
Rest of the World (ROW) sector expands 12% in the first decade. This saturates
toward a 48% expansion, which compares to an increase in population of
38.1%.
Given that they are due only to trade and production efficiency and the growth of labour
availability, these increases suggest a significant increase in output in real
terms. The average increase in living standard at the end of the projection is
the same in each case at about 6.95% in real terms. This reflects the objective
function that equally weights per capita increases in welfare in all regions.
The illustrations BC02 to BC04 below show the emissions control profile for
the production of food, manufactured goods and services respectively.
Illustration BC05 shows the control profile for consumer generated emissions.
Base Case emissions control rate by commodity
BC02 BC03
BC04 BC05
The following table summarises the saturation emissions control levels in each
country and industry.
It may be seen in the above table that the control requirements for food are
relatively modest. However, the high figure for ROW manufacturing and end
consumption shows how energy and emissions intensive these sectors are
across the ROW region. It may be noted that for services production, which
includes electricity production, very high or complete control is required in all
regions. This demonstrates the crucial importance of controlling emissions
from electricity generation.
Price of amelioration and abatement
The illustrations BC06 to BC09 below show the average price of amelioration
and abatement based on the above control rates.
BC06 BC07
BC08 BC09
The saturation prices for each commodity in each region are shown in the
following table:
Emissions control technology prices
US$ per tC EU25 NAFTA ROW
Food 9 5 14
Manufacturing 19 44 233
Services 209 322 320
Consumption 46 94 153
Industrial emissions
In order to meet the 2°C temperature rise constraint while maximising welfare,
industrial emissions show an increasing profile for 5 decades to a maximum of
80 GtC/decade. This is 8 Gt per annum, which is 38% higher than the 1990
level of 5.81 GtC.
After reaching the 80GtC/decade maximum, emissions must drop by 88% to 9.4
GtC/decade after 9 decades. This level is equivalent to 0.94GtC per annum,
which is an 83% reduction compared to the 1990 level.
This shows that various widely discussed objectives for a 20% or 40%
reduction by 2020 (compared to 1990 levels) and 50%, 60% or 80% reduction
by 2050 may not be fully consistent with maximising economic welfare but do
represent a progressive approach to controlling emissions that mitigates the
risk of needing to reduce emissions 88% in just one decade.
Illustration BC11 shows how the 2°C limit on atmospheric temperature rise is
approached after 8 decades and then stabilises. There is also a strong, albeit
delayed rise in ocean temperature, where the effects are yet to be fully
appreciated. Illustrations BC12 and BC13 show the associated concentration of
carbon and radiative forcing.
BC11 BC12
BC13 BC14
The second most important illustration is BC14, which is the economic damage
feedback multiplier. This is a function of atmospheric temperature rise and
asymptotically approaches 0.989, which is a reduction of economic output of
about 1.1%.
Industry activities
BC17 BC18
BC19 BC20
BC21 BC22
Base Case industry activity expansion by commodity and by region
BC23 BC24
Specialisation
Such specialisation has been observed in the off-shoring of Anglo-American jobs to Asia and China. The real issue is when and how to confine specialisation to a practical range.
Further investigation into Sceptre's objective function, engineering and
ecological infrastructure constraints, and technology propagation remain
policy research opportunities and are set out in Chapter 7 Conclusions and
Suggestions for Further Research.
Illustrations BC20 and BC21 (above) show the outputs of the augmented
carbon sectors, the amelioration and abatement sector and emission permits
trading sector respectively. It may be noted that in decade 6 the trading of
emissions permits switches over to physical amelioration and abatement. A
feature of the illustration is the strong growth in EU25 emissions (for the reasons discussed above) and the region's equally strong amelioration and abatement. Total emissions dealt with by both processes rise from 78 GtC in
the first decade to 99 GtC in decade 13.
Commodity Export
Illustrations BC25 to BC29 show the export outputs for each commodity. A
positive amount is a net import while a negative amount is a net export.
Illustration BC27 shows no export activity because Services has been defined
as a nil-export commodity.
Base Case international trade in food, manufactured and carbon commodities: BC28 (carbon amelioration services trade), BC29 (carbon emissions permits trade)
Illustration BC28 shows that, in order to achieve its food expansiveness, the
EU25 imports permits from NAFTA and after decade 6 begins to import
significant permits from the ROW. However, the dominant feature in
illustrations BC28 and BC29 is that after decade 6, EU25 imports large
amounts of both amelioration and abatement services and emissions permits.
Illustrations BC15 & BC16 show aggregate investment and capital in absolute
terms, which are mainly used for comparisons across scenarios.
Base Case aggregate investment and capital accumulation (illustrations BC15 and BC16)
The table below compares the Normal or “business as usual” scenario with no
climate constraint (see Chapter 5) to the 2°C Base Case. It may be seen that
the 2°C limit reduces accumulated capital in decade 13 by 11% or US$280
trillion.
Illustrations BC30 to BC33 (below) show the investment activity for each
region by commodity. This is a plot of the multipliers of the existing investment
vectors. Cross-tabs of investment activity for each commodity by region are
shown in illustrations BC33 to BC35.
These activities are expressed as multiples of the existing investment vectors.
Base Case disaggregated investment by region and by commodity (illustrations BC30 to BC35)
Illustrations BC36 to BC38 show the net accumulated investment in each
region by commodity. As expected, illustration BC36 shows EU25 accumulated
investment in the food industry is high.
(Illustrations BC36 to BC38)
Illustrations BC36, BC37 and BC38 show that investment in services
rises strongly due to the demands of amelioration and abatement.
The following constraints in the Base Case meet the first definition of the
Theorem of Complementary Slackness that the slack is zero when a constraint
is binding.i From the original 996 constraints, only 20 constraints have slack of
zero. These are shown in the following illustration.
Illustration 36: Binding Constraints, KKT multipliers and Residual Slacks for
Base Case
It may be noted from the table above that excluding the binding constraints for
the damage feedback function and emissions control rate, the only two binding
constraints remaining are for temperature rise and the EU25 food commodity.
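In standard form, complementary slackness requires that each inequality constraint with multiplier λ_i satisfies

\[ \lambda_i \ge 0, \qquad b_i - g_i(x^{\ast}) \ge 0, \qquad \lambda_i \left( b_i - g_i(x^{\ast}) \right) = 0, \]

so a constraint can carry a non-zero KKT multiplier only when its slack b_i − g_i(x*) is zero (numerically, below the 10⁻⁶ chop applied to slacks, noted in footnote i).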
The non-uniqueness of KKT multipliers can be gauged from the first binding
constraint, which is the main constraint of a 2°C rise at 100 years (i.e. the
tenth decade). This constraint shows a KKT multiplier of 168/5 = 33.6.
However, this is the lowest multiplier of the set of possible solutions, as
shown in the illustration below:
The KKT multiplier for a constraint represents the productivity of the resource,
which is the change in the objective function for a unit change in the resource
of the constraint. As Sceptre's objective function is the net present value of the
unbiased or unweighted sum of country expansion factors, the KKT multipliers
or shadow prices are given in terms of Net Present Value of economic
expansion rather than in dollars.
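Formally, writing f* for the optimal value of the objective as a function of the bound b_i on constraint i, the multiplier acts as a shadow price,

\[ \lambda_i = \frac{\partial f^{\ast}}{\partial b_i}, \]

measured here in units of the NPV of summed expansion factors per unit of the constrained resource, for example per degree of permitted temperature rise.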
A KKT multiplier of 33.6 for the above constraint implies that a 33.6 increase
in the value of the objective function will result if the temperature rise is
relaxed by one unit, from 2°C at decade 10 to 3°C at decade 11.
From the table, it may be noted that the dollar value of the objective function is
about 100 trillion times the expansion value. Therefore a relaxation of the
temperature constraint by 0.01°C and consequent increase of 0.336 in the NPV
of the expansion factors is worth about US$38.8 trillion. This is almost equal
to the single-year GDP of US$40.97 trillion (2004).
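The logic behind these figures can be illustrated with a toy problem (an illustrative sketch only, not the Sceptre model; the coefficients and names are arbitrary): relax the binding constraint slightly, re-solve, and the ratio of the change in the optimum to the size of the relaxation approximates the KKT multiplier.

  (* toy problem: maximise a linear "welfare" function subject to a single binding resource limit b *)
  solveToy[b_] := First@NMaximize[{3 x + 2 y, x + y <= b && x >= 0 && y >= 0}, {x, y}]

  base    = solveToy[10.];         (* optimum with the resource limit at 10 *)
  relaxed = solveToy[10.01];       (* relax the binding constraint by 0.01 units *)
  shadow  = (relaxed - base)/0.01  (* finite-difference estimate of the multiplier, here 3 *)

Applied to Sceptre's temperature constraint, the same arithmetic gives 33.6 × 0.01 = 0.336 expansion units for a 0.01°C relaxation, which the preceding paragraph converts into dollars.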
While there may be a preferred point of view, there is no such thing as a “best”
one. Usually each point of view has a prominent and unique perspective. Often,
these points of view are orthogonal, that is, coming from different
philosophical or ideological bases and so are not strictly comparable.
For example, in the climate change debate, national and supra-national polities
need to engage with a range of views from sceptics to environmental radicals,
which represent two rather public clusters in climate change. As different as
the dichotomy of these two views may be, they are united in the
uncompromising demand that society adopt fundamental positions and accept
large risks. For example, sceptics shrug-off the risk of a climate change
induced collapse of civilisation. Radicals equally shrug-off the social risk
associated with mass dislocation of employment.
Although uncomfortable for many policy makers, extreme views have a place
because they stretch the debate. However, as discussed in Chapter 2 Political
economy of the Anglo-American world view, the plurality of fundamentalist
views can be breathtaking. These range through such diverse approaches as
free-market, conservative, liberal, evangelical and Marxist-Leninist
perspectives. Even in establishment views great rifts exist. Krugman (2009)
notes that the global financial crisis has reignited irreconcilable differences
between American Keynesian and Monetarist philosophies in establishment
macroeconomics.ii
These multidimensional perception spaces offer many interesting pathways for
additional research to achieve a fine reduction and classification of policy
understanding. For reasons of expediency and policy-making pragmatism, it is
assumed in this research that Pareto's Rule applies, so that 80% of the desired
insight into points of view can be obtained by examining 20% of the views,
subject to these being sufficiently diverse.
The points of view selected for analysis include the two extreme positions of
sceptic and radical. Somewhere in the multidimensional space in between are
points of view for Laissez-faire free markets and Government-regulated
markets. These latter two points of view roughly correspond to American free-
enterprise individualism and European free-market social democracy.
It needs to be kept in mind, in analysing each of these points of view, that
point of view analysis seeks to model the underlying assumptions of the point
of view itself. For example, a representative person might say "I feel that
this will be the outcome”. In the case of a Sceptic, it might be “I don't
acknowledge any global warming so I feel that there will be no climate induced
effects to look at.”
The point of view analysis does not endeavour to criticise these assumptions.
Nor does it seek to demonstrate whether or not the point of view is logically
consistent. Nor does a point of view sensitivity seek to project a realistic
outcome. In other words, point of view projections take the representative
view at face value.
The assumptions used to model the Sceptic point of view are that there will be
no constraints on emissions, no carbon trading is required (in Sceptre, all
emissions will be ameliorated at no cost) and there will be no climate-economic
damage function.
The difference between this scenario and the Normal case (in Chapter 5) is
that in the Normal case the climate-economic damages mechanisms are
operating and, although there is no climate constraint, the model draws on
amelioration and abatement services in order to improve its economic
expansion. In this Sceptic point of view, it is never necessary to draw on
amelioration and abatement services as emissions have no impact on climate
change or economic performance.
The assumptions used in Sceptre to model the radical perspective are that all
emissions must be immediately ameliorated or abated in full and the climate-
economic damage function operates, even though the low emissions prevent the
damage multiplier from having any significant effect.
Laissez-faire free market point of view
Market systems form the middle ground. The first point of view investigated in
market systems is Laissez-faire free market dogma. This is often identified with
unfettered Anglo-American capitalism and often called neoliberalism. With
regard to climate change, its underlying assumption is that any climate
induced economic damage will become priced in the market. The invisible
hand of capitalism will silently move to evoke entrepreneurial technologies to
solve any problem, if indeed there is money to be made in solving it. This
means “business as usual”, with managers acting out of enlightened self-interest
only if it suits them and is earnings accretive. The subject of market failures is met with
complete cognitive dissonance. For example, the Great Depression was merely
people choosing to have a holiday rather than being willing to work for lower
wages (Krugman 2009).
This point of view can be modelled in two ways. The first is the Normal
“business as usual” scenario presented at the beginning of this Chapter as a
comparison with Nordhaus' DICE model. It was seen there that Sceptre's
projection of temperature rise was increasing strongly through 2.5°C at
decade 10.
Therefore, the representative outlook or point of view is that there are neither
constraints on emissions nor any need for emission permits trading or
amelioration and abatement services. Optimistically all emissions will be dealt
with and a reasonable scenario will unfold. Therefore Sceptre's assumptions
are no constraints on emissions, no carbon trading (all emissions ameliorated
at no cost) but a climate-economic damage function is operating.
Government-regulated market point of view
The second market related point of view investigated here is one where
governments intervene to address potential or actual market failures. Its
underlying assumption is that Laissez-faire free markets have many
advantages over planned economies but that free markets do not work in
regard to commons, such as the environment. The planned adjustments are
designed to ensure sustainability. Chapter 2 Political economy of the Anglo-
American world view discussed the European Union's market system with
particular reference to Germany. Chapter 3 Political Economy of the Anglo-
American world view of climate change placed this discussion in the context of
climate change.
The United Nations was formed in 1945 to replace the League of Nations,
which America had never joined. Both organisations represent the type of
supranational symbiotic community that countries need to take ownership of
the international commons and protect it. In the climate change policy debate,
the UNFCCC and its IPCC scientific panel represent the supra-national body.
The IPCC has recommended a maximum post-industrial temperature rise of
2°C. As there was no discernible temperature rise in the period from 1750 to
1900, the 2°C temperature rise effectively applies post 1900.
These four points of view were modelled in the Sceptre model with the
following assumptions. The illustrations of the results are shown below:
Climate Change Sceptic (iii): no constraints on emissions, no carbon trading (all emissions ameliorated at no cost) and no climate-economic damage function.
Laissez-faire Free Market (iv): no constraints on emissions, no carbon trading (all emissions ameliorated at no cost) but with a climate-economic damage function.
Maximum 2°C rise @ 100 years (v): 2°C limit at 100 years with carbon trading and amelioration at cost, with a climate-economic damage function.
Radical Planet Protection (vi): all emissions ameliorated at full cost and with a climate-economic damage function.
The Sceptic, Laissez-faire and Radical perspectives all lead to similar outcomes
because each assumes the outcome will be fine (see results in the next section
of this Chapter). However, all three scenarios differ materially from the Base
Case. For example, Sceptic, Laissez-faire and Radicals all believe that
temperature rise will continue to hover at 0.8°C, in comparison to the Base
Case where it rises to 2°C.
A comparison of the objective functions of the Radical and the Base Cases
shows the extra Net Present Value cost of the Radical case to be US$3.8
trillion (2004 dollars) (cf. Previous Base Case analysis for method of
estimation). The Sceptic and Laissez-faire cases, which are not meaningful
comparisons, show savings over the Base Case of US$75 billion and US$14
billion respectively.
i For this purpose, an arbitrary chop of 10⁻⁶ is applied to slacks. This means
that slacks smaller than 10⁻⁶ are considered to be zero.
ii Krugman (2009) refers to Keynesians as “saltwater economists” because they
tend to live on the East or West coast and Monetarists (or Chicago School) as
“freshwater economists” because they tend to live inland.
iii m12_13p_full_amel_no_cost_no_dam.nb
iv m12_13p_full_amel_no_cost.nb
v m12_13p_2C_100.nb
vi m12_13p_full_amel.nb
Point of view simulation results
(Illustrations comparing the Climate change sceptic, Laissez-faire markets, Base Case 2°C rise and Radical sensitivity scenarios side by side.)
6.1.3 Atmospheric concentration constraint severity
In Chapter 3 Political Economy of the Anglo-American world view of climate
change it was noted that the Tällberg Foundation, Al Gore, James Hansen and
others emphatically seek an atmospheric concentration of 350 ppm compared
to 380 ppm in 2009. Until recently the IPCC and member governments
concurred with a 400 ppm or 450 ppm limit. However, with this target
becoming frustrated, Major Economies Forum (MEF) governments adopted a 2°C rise limit in lieu.
The values of the objective function for the three sensitivity scenarios of 350,
450 and 550 ppm show that these constraints impose an increased cost on the
economy compared to the Base Case of a 2°C temperature rise. However, this
increased cost is only in the order of US$15-20 billion, which is far less than
US$3.8 trillion for the Radical point of view discussed above.
Trends across the severity scenarios show that the control rate for
amelioration and abatement dramatically declines and is strongly delayed as
the atmospheric tolerance increases towards 550 ppm. The 2°C Base Case has a
delayed requirement for emissions control but otherwise is similar to the 450
ppm case. The manufacturing emissions control rates are shown below for
each sensitivity:
Manufacturing emissions control rate for Base Case 2°C, 350, 450 & 550 ppm
However, because the emissions control begins immediately for 450 ppm
mitigation, the emissions profile approaches half that of the Base Case. After
10 decades the profile for 350 ppm begins to decrease for EU and NAFTA,
while the emissions control requirements for 450 and 550 ppm mitigation rise.
Atmospheric temperature rise for Base Case 2°C, 350, 450 & 550 ppm
Accumulated capital for Base Case, 350, 450 & 550 ppm
As shown in the illustrations below, a 350 ppm limit restricts EU25's resource
expansive food production. This restriction is removed once the atmospheric
concentration constraint is relaxed, and EU25 food production increases
markedly in the 450 ppm and 550 ppm cases and the Base Case.
Food industry activity level for Base Case, 350, 450 & 550 ppm
vii m12_13p_350_100.nb
viii m12_13p_450_100.nb
ix m12_13p_550_100.nb
x m12_13p_2C_100.nb
Atmospheric concentration constraint severity sensitivity analysis
(Illustrations comparing the Base Case 2°C @ 100 years with the 350 ppm, 450 ppm and 550 ppm sensitivities side by side.)
6.1.4 Technology cost sensitivity
In Chapter 3 Political Economy of the Anglo-American world view of climate
change it was noted that many developing countries including China and India
fear that ameliorating emissions will seriously retard economic growth. One
concern is that intellectual property royalties for green technologies will lead
to major transfer payments from developing economies to industrialised
economies. Poor and developing countries know that intellectual property
matters are difficult to resolve, as they have found in the ongoing imbroglio
over the supply of anti-retroviral (HIV) drugs.
Intellectual property concerns aside, there are situations where the abatement
and amelioration task retards economic growth. This is particularly the case
for developing countries due to rapidly rising standards of living and in many
cases, rapid population growth.
(Illustrations: Manufacturing emissions control rate; Manufacturing emissions control technology price; Aggregate investment; Aggregate accumulated capital.)
As may be seen in the above illustrations, global investment fractures, falling
25% from the Base Case level of US$800 billion at decade 10 to US$600 billion
for the 10x technology cost case. The fall is 50% for the 20x technology cost
case. There is a similar effect on accumulated capital, which falls from
US$1500 trillion at decade 10 to US$1100 trillion for 10x cost and US$800
trillion for 20x cost.
Technology risk
The scale of the task facing developing economies in reducing emissions,
together with the fact that they usually do not have primary access to
technology intellectual property rights, suggests that developing countries face
the greatest technology risk. Developing countries have seen this sort of risk
before, for example in HIV medication.
The world view of China and India was discussed in Chapter 3 Political
Economy of the Anglo-American world view of climate change. This suggests
that industrialised nations will need to resolve the uncertainty about
technology availability, and the concern about being exploited by technology
providers, before these countries are ready to engage in a common goal.
xi m12_13p_2C_100.nb
xii m12_13p_2C_100_tcx2.nb
xiii m12_13p_2C_100_tcx10.nb
xiv m12_13p_2C_100_tcx20.nb
Abatement technology cost sensitivity analysis
(Illustrations comparing the Base Case 2°C rise with the 2x, 10x and 20x cost of technology sensitivities side by side.)
6.1.5 Effect of climate damage on Sales to Assets ratios
Sceptre has been run with the climate damage function impairing both
industrial output and Sales to Asset ratios. This provides a new perspective on
climate-economic analysis.
A Sales to Assets ratio for the single year of 2004, which is the base year of
GTAP data, is calculated by dividing the V matrix by the opening assets for the
2004 year. The Sales for a decade are calculated from the single-year figure by
applying a multiplier comprising the sum of the population index.
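A minimal sketch of this calculation, with purely hypothetical figures standing in for the GTAP-derived V matrix and asset data, might look as follows:

  (* hypothetical 3-industry Make (V) matrix and opening assets for 2004; all values are illustrative *)
  vMatrix2004       = {{900., 50., 10.}, {30., 1200., 80.}, {20., 100., 1500.}};
  openingAssets2004 = {400., 1500., 2500.};
  populationIndex   = Table[1. + 0.01 k, {k, 0, 9}];  (* assumed annual population index over one decade *)

  salesToAssets = vMatrix2004/openingAssets2004;       (* each industry's sales row divided by its opening assets *)
  decadeSales   = Total[populationIndex] vMatrix2004;  (* single-year sales scaled by the sum of the population index *)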
Impaired Sales to Assets ratios are shown in the following illustrations for each
commodity of food, manufacturing and services.
Sales to Asset Ratio impairment
Impairing the Sales/Assets ratio reduces the value of the objective function by
about US$180 billion (2004 dollars) (cf. Previous Base Case analysis for
method of calculation). As might be expected, the small changes to
Sales/Assets ratio result in only minor changes to disaggregated results (see
next section of this Chapter).
It is notable that the emissions control rate for consumption increases, while
that for food and manufacturing decreases slightly. The control requirement for
services remains at the maximum. Although the increased responsibility of
consumers to ameliorate and abate is only an indicative trend, it demonstrates
that, as industry needs more assets to produce the same output, consumers
start to bear a greater burden of directly controlling their emissions.
(Illustrations: Base Case and Impaired Sales/Asset ratio, disaggregated investment by region, side by side.)
Sensitivity with impaired Sales/Asset ratios
(Illustrations comparing the Base Case 2°C @ 100 years with the Impaired Sales/Asset Ratio sensitivity side by side.)
6.1.6 Effect of carbon commodity trading
In all previous sensitivities, unrestricted international trading in carbon
commodities is assumed. This sensitivity case removes the international
arbitrage of emission permits and amelioration and abatement services. It is
included for the case where international trade in these commodities is limited
or absent.
The overall net benefit of international trade in carbon commodities does not
appear to be very large. Indeed, there is a negligible US$2 billion (2004
dollars) gain in the objective function if trade is prevented (cf. Base Case
analysis for method of calculation).
(Illustrations: Base Case and No carbon commodity trading, emissions control rate, side by side.)
However, global capital increases significantly. At decade 10, Base Case global
capital accumulation of US$1,500 trillion (2004 dollars) rises to US$2,000
trillion in the case of no international carbon commodity trading.
(Illustrations: Base Case and No carbon commodity trading, capital accumulation, side by side.)
(Illustrations: Base Case and No carbon commodity trading, disaggregated capital accumulation, side by side.)
NAFTA and ROW lift their food production significantly and the EU25 resource
expansive food production is less pervasive. From this it may be noted that
EU25 expansive food production is actually a function of trading carbon
commodities.
Perhaps the most important effect of all is that zero international carbon
commodity trading means that the amelioration and abatement task of ROW
rises to almost 60 GtC per decade, which is nearly three times that of NAFTA
and five times that of EU25.
(Illustrations: Base Case and No carbon commodity trading, amelioration and abatement, side by side.)
Sensitivity for no international trading in carbon commodities
(Illustrations comparing the Base Case 2°C rise with the no-international-carbon-trading sensitivity side by side.)
6.2 Conclusion
Based on the results of the political economy analysis, a new benchmarking
type of CGE model has been developed and used to investigate a climate-
economic Base Case and discriminate five categories of sensitivities as shown
in the following table.
The Base Case shows that the IPCC, European Union, Major Economies Forum
and G20 policy of limiting temperature rise from pre-industrial times to a
maximum of 2°C is feasible.
The Point of View sensitivity demonstrates that the Base Case costs little more
than the Sceptic and Laissez-faire scenarios, so controlling emissions for the
safety of the globe does not incur a prohibitive cost. Indeed, the Radical ultra-
risk averse policy option of controlling emissions to 350 ppm has a relatively
small net present value premium over the Base Case of US$3.8 trillion. On this
basis governments may be advised to reconsider the Radical perspective of
strongly limiting emissions through a mix of quantitative regulation, taxes and
property rights.
The climate constraint sensitivity shows that the three sensitivity scenarios of
350, 450 and 550 ppm do not have a significant cost over the Base Case. The
450 ppm case and the Base Case are similar, as the IPCC found, although the
earlier control of emissions in the 450 ppm case results in a lesser temperature
rise of 1.7°C for 450 ppm at decade 10 compared to 2°C for the Base Case.
Increasing the cost of backstop technology ultimately leads to a fracturing of
economic performance. While this commences at 20 times current estimates of
the backstop technology cost, it is important to note that current cost and
availability estimates remain highly speculative. In addition, as has been the
case with HIV pharmaceuticals, there may be a disproportionately large risk
for countries that do not hold intellectual property rights. The political
economy analysis showed that this has led to a situation of considerable
anxiety for China, India and other newly developed and developing countries.
It has been a key reason that these countries have declined to engage in
binding emissions reduction targets. In order to minimise the significant
technology risk shown by this sensitivity analysis, governments would be
advised to implement strong quantitative limits in concert with robust market
price signals. These measures will minimise the market risk from technology
development business plans and catalyse immediate technology development.
It is unlikely that continuing the current policy of research subsidies for far
away technologies like carbon capture and storage can adequately address the
technology cost and availability risk.
This model is the first of its type known to use Sales/Assets ratios (instead of
resource limits) to mediate capital accumulation in the underlying economic
model and price resources. The impairment of Sales to Asset ratios has a
subtle influence on the Base Case. As industry struggles with needing more
assets for the same output, consumers are also exposed to a greater burden for
directly ameliorating or abating their emissions.
The combination of Base Case and sensitivity analyses, using the new spatial
benchmarking CGE tool and informed by a deep investigation of political
economy, provides a range of policy insights at the global, regional and
commodity level. It demonstrates that this tool is appropriate for climate-
economic policy research.
Charles, D., 2009. ENERGY RESEARCH: Stimulus Gives DOE Billions for
Carbon-Capture Projects. Science, 323(5918), 1158.
Haidt, J. & Graham, J., 2007. When morality opposes justice: Conservatives
have moral intuitions that liberals may not recognize. Social Justice
Research, 20(1), 98-116.
Krugman, P., 2009. How Did Economists Get It So Wrong? The New York
Times. Available at:
http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html?_r=2&th&emc=th
[Accessed September 7, 2009].
ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.
Romer, P.M., 1994. New goods, old theory, and the welfare costs of trade
restrictions. Journal of Development Economics, 43, 5-38.
Saunders, A., 2009. Governance and the Yuck Factor. Available at:
http://www.abc.net.au/rn/philosopherszone/stories/2009/2631260.htm#transcript
[Accessed August 13, 2009].
Chapter 7 Conclusion and Suggestions for
Further Research
7.1 Conclusion
Chapter 1 Introduction discussed policy issues in climate change and the way
that the evidence-based policy methodology may help address those issues. It
identified that policy makers look to evidence-based techniques to confirm
policy and instrument feasibility and to understand the sensitivity of proposed
policy solutions. Systematic, non-systematic and communication risks of
modelling in the policy research process were investigated. The tools of policy
research were addressed and it was concluded that for issues with national or
global significance policy makers look to computable general equilibrium
(CGE) modelling in policy research.
The inadequacies of CGE tools were discussed in general and with reference to
developing effective climate change policies. Four research gaps were
identified: the difficulty of solving comprehensive general equilibrium with
spatial aggregation; the choice of production function; intertemporal
consistency; and communication of results. It was proposed that national
accounting Use and Make tables could resolve the first shortcoming,
benchmarking techniques the second, linking flows to stocks through
Sales/Assets ratios the third, and modern data visualisation could address the
last gap.
This analysis led to the research aim of this dissertation, which is to answer
the question “What changes in regional and industry performance are implied
by a change in the Anglo-American world view from unconstrained to climate-
constrained resource usage?” The means of achieving this was to develop a
new lens through which to understand the spatial and intertemporal effects of
climate policy on regions and commodities linked through trade.
This new CGE tool, or lens, would have provenance in the political economy of the
world view being addressed and, in particular, with regard to the specific
policy area being investigated, in this case climate-economic policy.
Chapter 1 Introduction also set out the scope of the research and showed that
it was a subject of wide interest to national and international governmental
and non-governmental organisations and addressed a number of Australian
National Research Priority Areas.
Chapter 2 Political Economy of the Anglo-American economic world view also
found that President Obama has recognised that America's competitiveness
and financial position require immediate action and its future is linked to
multilateral cooperation. It was concluded that America may be on the cusp of
accepting its new reality of resource constrained growth but is not yet out of
the “storming” phase. Plans to reform America may be thwarted by the
political conservative psyche, which continues to be driven by dreams of
exceptionalism and is ideologically committed to unfettered American
unilateralism. The direction America ultimately takes will determine both its
future and that of the Anglo-American cohort.
Chapter 3 also investigates the three main policy instruments for reducing CO2
emissions, namely quantitative limits, taxation and property rights. It finds that
while all have the same theoretical outcome, in practice each has strengths
and weaknesses. It is concluded that with adequate regulatory protections
against market abuse and market failure, the introduction of property rights is
a feasible and attractive way of pricing pollution and mobilising capital. From
this analysis it is concluded that carbon commodity trading is an appropriate
means of including amelioration and abatement measures in the policy
research model developed in this dissertation.
The blueprint for a new CGE model is described in Chapter 5 A new spatial,
intertemporal CGE policy research tool. The model is called “Sceptre,” which is
an acronym for Spatial Climate Economic Policy Tool for Regional Equilibria. It
unites CGE modelling with Input Output modelling by generating resource
pricing through an optimisation dual solution. This is made possible through
recent innovations in nonlinear interior-point techniques. The model employs
Thijs ten Raa's approach to using the Make and Use tables of national accounts
for benchmarking economies using Input Output data. In order to place this in
an intertemporal context, a new approach is introduced to link stocks and
flows through Sales/Assets ratios. This creates both a strong underlying
intertemporal economic framework for the constraints and allows resource
pricing to be generated through these dynamic resource constraints, rather
than through static or exogenous commodity resource limits. New commodities
are introduced for international carbon trading of permits and amelioration
and abatement services. Geophysical feedback is implemented using William
Nordhaus' technology functions and proven climate-economic equations. The
model was validated using the results of recent geophysical modelling and by
comparison with the William Nordhaus DICE model.
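As a rough indication of the benchmarking structure described above, and with no claim to reproduce Sceptre itself, a two-sector ten Raa-style linear programme maximises an expansion factor c of final demand subject to the net output implied by the Make and Use tables and a factor constraint. The data and names below are invented for illustration:

  (* illustrative 2-sector Use (U) and Make (V) tables, final demand f and a labour constraint *)
  use    = {{10., 30.}, {20., 15.}};   (* commodity inputs (rows) used by each industry (columns) *)
  make   = {{90., 5.}, {10., 80.}};    (* industries (rows) producing commodities (columns) *)
  f      = {40., 45.};                 (* final demand by commodity *)
  labour = {25., 35.}; labourAvailable = 60.;

  netOutput = Transpose[make] - use;   (* net commodity output per unit of industry activity *)
  {cMax, sol} = NMaximize[{c,
      And @@ Thread[netOutput.{s1, s2} >= c f] &&
        labour.{s1, s2} <= labourAvailable && s1 >= 0 && s2 >= 0 && c >= 0},
    {c, s1, s2}]

In Sceptre the analogous, much larger and intertemporal, programme is solved with nonlinear interior-point methods and the duals of the binding constraints supply the resource prices; in this toy version a dual could be recovered by the perturbation approach sketched in Chapter 6.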
Chapter 6 then applied the new model to the question of climate-constrained
growth. A Base Case of limiting atmospheric temperature rise to a maximum of
2°C was formulated from the political economy analysis of Chapters 2
& 3. The regional and commodity effects are investigated in detail. In terms of
policy makers' expectations for CGE modelling in confirming viability, the Base
Case policy is found to be “feasible.” A notable outcome is the degree to which
the European Union becomes resource expansive under the Base Case policy
constraint. This commodity specialisation is an example of the neoclassical
model applying knife-edge pricing, which may be unachievable if realistic land
use or regional self-sufficiency political constraints were included.
Such specialisation may pose significant challenges for the industry policy of countries and regions
that are struggling to maintain self-sufficiency or an independent industrial
base. For example, the political economy analysis showed that countries such
as France are well positioned and keen to capitalise on the resource expansive
growth. France already derives 90% of its electricity from zero emission
sources and is hurrying to make the transition to a fully green economy with
measures such as a carbon tax and mandating plug-in hybrid cars. It has
recognised that the new climate constraints will provide a magnificent one-
time opportunity to use its resource expansive competitiveness to seize global
market share.
This dissertation has addressed the research aim of answering the question
“What changes in regional and industry performance are implied by a change
in the Anglo-American world view from unconstrained to climate-constrained
resource usage?” This has been achieved through developing a new CGE policy
tool, or lens through which to undertake policy research in both sustainability
and international symbiosis for managing the commons across trade, security
and the environment.
Globalisation
The model developed in this dissertation may also be used to understand the
effect of emerging, binding constraints of scarcity as they replace relative
abundance, for example the transition away from dependence on oil. Other
fruitful areas of research may be new security zones, autarchies established to
guard primary resources such as food and water, and new multipolar
superpower equilibriums. Perhaps these new equilibriums may be based on
enlightened democracies or on game theory's mutually assured destruction
framework.
Industry policy
Various social policies may be tested using different forms of utility
functions, for example the recently proposed Net National Product (NNP),
which is GDP less depletion of natural and human capital (Stiglitz et al. 2009). In addition,
the interface of production specialisation and consumer employment could be
investigated. This would be of the greatest interest for those countries seeking
self-sufficiency in various commodities.
Multiple objective and minimax programming
GTAP expects to release data for non-CO2 greenhouse gas emissions. This data
will improve the modelling of climate feedbacks with greater detail for these
non-CO2 emissions.
GTAP's land use database and Mathematica's Country database provide the
opportunity to investigate other factors, for example the nexus between
economic performance and commodities or factors such as water, fuels,
minerals, arable land, crop yield, forests, erosion and changes in biodiversity.
The improved ecology and physical science analysis in the next IPCC
Assessment Report (AR5), which is due in 2014, can be expected to provide
major advances in realism. In addition, to better understand the effect of
damage on Make and
Use tables, specific country and industry risk analysis could be undertaken to
develop localised climate damage functions.
The amelioration and abatement cost used in this policy research is a function
of the proportion of emissions ameliorated or abated. At present, Nordhaus'
technology cost profile remains speculative. Technology costs will become
better known with the commercialisation of geoengineering, geosequestration,
wind, solar, hydrogen and nuclear projects. Engineering cost functions may be
embedded in the abatement cost function.
Improved data on historical Sales to Assets ratios, appraisal of the new risks
and volatility that climate damage brings to industrial production, and estimates
of future Sales to Assets ratios would materially improve the reliability of the
model for government policy makers and industry strategists.
Input Output tables have the advantage of being clear and consistent. The
material balance of commodities based on Input Output table monetary data is
common to traditional CGE and benchmarking models. However, the
relationship with physical material flows or ecological flows is more tenuous.
Commodities are assumed to be homogeneous but are only artificial categories
and there are many assumptions made in mapping resources to commodities.
The availability of integrated data through the EXIPOL project will allow
realism to be improved by substituting key rows and columns with data in
physical units such as tonnes of a commodity.
As better data become available across the globe, actual data may be
substituted in lieu of the IEA's estimated data to improve realism.
The use of net exports has many advantages, but alterations in trade flows
lead to mismatches with taxes and international freight. Further research into
modelling trade taxes in Sceptre would enhance the trade realism of the
model.
7.3 Chapter References
Pan, H., 2006. Dynamic and endogenous change of input-output structure with
specific layers of technology. Structural Change and Economic
Dynamics, 17(2), 200-223.
Pan, H. & Kohler, J., 2007. Technological change in energy systems: Learning
curves, logistic curves and input-output coefficients. Ecological
Economics, 63(4), 749-758.
Stiglitz, J., Sen, A. & Fitoussi, J., 2009. Report by the Commission on the
Measurement of Economic Performance and Social Progress, Paris:
French Commission on the Measurement of Economic Performance and
Social Progress. Available at: http://www.stiglitz-sen-fitoussi.fr/en/index.htm
[Accessed September 22, 2009].
Appendix 1 Climate change engagement in
Australia
18 April, 2008
Submission to ETS Discussion Paper
Garnaut Climate Change Review Secretariat
Level 2, 1 Treasury Place
East Melbourne, Victoria 3002
By email: [email protected]
1. Until the USA commits to an ETS, it may be too early for Australia to do
so.
2. Australia can immediately commence reducing emissions through price
mechanisms by implementing a moderate carbon tax applied on a
carbon-added basis.
Until the USA commits to an ETS, it may be too early for Australia to do
so.
The European Union implemented its ETS as a differential or relative model in
order to avoid an absolute carbon price or tax. A major reason for this was that
the concept of an environmental tax had been determined unconstitutional in
France. Therefore, the UN and EU sought a self-regulating means to reduce
emissions by using market forces and the profit motive. The differential
scheme introduced by the EU provides for the emissions of firms to be
assessed on a case-by-case basis, quotas determined and carbon permits
granted free for these quotas. Approximately 10,000 steel factories, power
plants, oil refineries, paper mills, and glass and cement installations were
involved, representing approximately half of the EU's emissions. Initially,
aluminium producers, the chemicals industry and the transport sector were not
included. Through the ETS, firms can sell surplus emissions permits, or
conversely buy permits to offset excess emissions above quota.
Given the difference in the EU and Australian schemes, linking them together
as raised on page 69 of the Discussion Paper could prima facie expose the
Australian economy and tax base to great risks. While the EU proposes to
auction permits in the future, due to the failure of enlightened self-interest
amongst generators (leading to high electricity prices for customers because
the benefits of the free permits were not passed through to customers), it is by
no means certain that an auctioning of permits will be constitutionally possible
(Deroubaix & Leveque 2006).
The fabric of an Australian ETS would have a very large cost base, including
the Carbon Bank (acting as a Reserve Bank in permits), operators like the ASX,
regulators like ASIC, primary dealers, distribution brokers, etc. This fabric
would mean the ETS commences its existence from a position of considerably
negative value to the Australian economy.
With the introduction of the ETS, significant risk will be introduced for
industry and for consumers. Following considerable debate at the time of the
last Federal election, most Australian stakeholders are expecting a “price on
carbon”. The ETS Discussion model above does not provide such a price.
Instead, a price is set by the market. As for all commodity spot and futures
markets, the price will be extremely volatile as traders and speculators are
driven to hoard and liquidate by the usual emotions of greed and fear.
Certainty and stability are key issues. The ability of firms to plan ahead with
certainty will be impaired unless they commence sophisticated hedging
strategies using futures. This need for thousands of emitters to have new
financial departments to manage hedging portfolios could impose a burden
of financial sophistication on firms that is unwarranted, or otherwise costly to
have independently managed.
part of the statement. This would be efficient to administer by standard
Australian Tax Office online procedures.
The concept of a carbon-adder very subtly changes the focus from emitters to
those firms that extract carbon from the earth or import carbon into Australia.
If a company, for example a coal company, sells coal it would need to pay the
Government a carbon-added tax. The generator that buys that coal does not
need an emission permit or to pay tax. It merely needs to pay the higher cost of
the coal including both GST and carbon-added tax.
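A simple worked example may clarify the mechanics; the tax rate and emissions factor below are purely illustrative assumptions, not proposals. With a rate of A$20 per tonne of CO2-e and an emissions factor of 2.4 tonnes of CO2-e per tonne of coal:

\[ \text{carbon-added tax} = 20 \times 2.4 = \text{A\$48 per tonne of coal,} \]

which the coal company remits to the Government and recovers in its sale price. A user that later captured and stored, say, 90% of that embedded carbon could claim back 0.9 × A$48 = A$43.20 per tonne, in the same way that GST input credits are claimed.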
One may ask whether the ability to readily pass on higher costs in the form of
higher prices to consumers will reduce incentive for generators to seek lower
carbon sources of supply. If the National Electricity Market continues to be
regulated then reductions in carbon will come from generators seeking
cheaper fuel sources and new entrants to the market with lower source costs.
As with the GST, where a user is able to successfully capture and store
carbon, the user could claim back the tax on the captured carbon in the
same way GST is claimed back. Mining companies already have the necessary
expertise for storing carbon. Therefore, carbon storage is naturally a task for
the coal miners rather than the generators. As a consequence, coal companies
would both pay the carbon tax and claim the carbon tax offset. Presumably,
this sort of technically advanced coal company would have skilled financial and
accounting personnel and be aware of the risk of re-incurring the tax liability if
the sequestered carbon under its stewardship was inadvertently vented to the
atmosphere.
Export sales and imports could also operate on the same basis as the GST.
Carbon-added tax would not apply to exports as the carbon implications of
trade are for the receiving country to deal with. If that country is a signatory to
the Kyoto Protocol, then it may in its own discretion charge for embedded
emissions. Fortunately from Australia's perspective, having no carbon-tax on
exports obviates the need to differentiate between signatory and non-signatory
end destinations, which is a task inevitably complicated by transhipment. It is
also likely to be politically more palatable than the Government compensating
coal and metals exporting companies in cash or otherwise for the cost of their
direct and indirect emissions in producing the exports.
In regard to imports, all products brought into Australia would need to pay
carbon-added tax. With scientific and economic assistance, formulae for
taxable embedded carbon can be readily determined by the Australian Tax
Office in conjunction with Customs & Excise. Techniques such as Input output
analysis are available to model the flow of carbon emissions through the
economy and therefore the vesting in various products.
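For instance, a standard environmentally extended Input Output calculation propagates direct emission intensities through the Leontief inverse to give the carbon embodied in each product; the two-sector coefficients below are invented for illustration:

  (* hypothetical technical coefficients and direct emission intensities *)
  a = {{0.2, 0.3}, {0.1, 0.4}};  (* inputs of each commodity per unit of output *)
  e = {0.5, 1.2};                (* direct emissions per dollar of output, kg CO2-e *)
  embodied = e.Inverse[IdentityMatrix[2] - a]  (* total (direct plus indirect) emissions embodied per dollar of final product *)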
A firm can plan for price but not for being unpredictably denied a key
requirement for production. In order to provide certainty to industry, the
Government could introduce a carbon tax with a rising profile. For example,
this may begin at a moderately low rate and increase over 8-12 years to a high
rate. Such a scenario would provide plenty of certainty to firms and give them
the time and incentive to innovate in their production processes to reduce
costs. In addition, a rising profile would provide the opportunity for the
Government to slowly learn about this new paradigm of carbon-added tax and
to change the rate as necessary.
It is therefore submitted that the first stage of an Australian market-based
emissions reduction scheme would be a moderate tax applied on a carbon-
added basis. Stage 2 of the market-based scheme would be an ETS developed
as greater certainty evolves about the nature of an international model.
Further consideration of an Australian ETS could be deferred for a short period
of, say, three years.
Yours faithfully,
(signed)
Stuart J Nettleton
A1.2 Australian climate change policy development
Tim Flannery
In his speech at the University of Technology's 20th year celebration dinner
in May 2008, the indefatigable climate change campaigner and 2007
Australian of the Year, Professor Tim Flannery, noted that the climate change
problem is bigger and more urgent than currently being addressed (Flannery
2008). He predicted some form of emissions trading scheme proceeding in
Australia but did not hold much expectation of this reducing emissions.
Flannery's reasons were, firstly, that the standard of living of highly populous
nations such as China and India is rapidly rising with attendant energy
requirements. Developing nations are not interested in anything to do with
carbon taxes: they claim their per-capita emissions are very low, they will not
sacrifice economic growth over this issue, and the problem was created by the
West so the West should pay to fix it.i
growing problem of safely storing nuclear waste and controlling the
proliferation of nuclear weapons.
Garnaut Report
The Garnaut Climate Change Review was established by the State and
Territory Governments of Australia in July 2007. The Commonwealth
Department of Climate Change joined the work in January 2008.
In the interim report (Garnaut Climate Change Review 2008b), Professor Ross
Garnaut recommended that emissions and climate change should be decoupled
from world economic growth because high world growth is driving Australia
and all the world towards high cost downside risks and that this is happening
more rapidly than commonly appreciated.
The major recommendation of the interim report is that Australia should press
for the strongest possible outcomes in global mitigation. Garnaut saw this as
being in Australia's self-interest in avoiding unacceptable levels of risk of
dangerous climate change effects on Australia's fragile land, biodiversity and
dry climate.
Garnaut says Australia should pursue deeper 2008, 2020 and 2050 emissions
cuts than the Rudd Government's single target of a 60% reduction on 2000
levels by 2050: “Waiting until 2020 would be to abandon hope of achieving
climate stabilisation at moderate levels.”
Garnaut highlighted that Australia could not remain complacent about being a
low emitter relative to the USA and China. Under convergent-contraction
principles being developed by the United Nations to ensure developing
countries join the reduction program, all reduction targets will switch from the
relative basis of improving on current emissions to an absolute target of
emissions on a per capita basis. This would greatly impact Australia because it
has one of the highest per capita rates of emissions in the world.
Garnaut also noted that Australia was an emerging world leader and role
model in setting the post-Kyoto framework of global objectives, greenhouse gas
stabilisation, emissions budgets and the principles for allocation of global
emissions among countries.
Garnaut et al. (2008) expand on the need for urgency in addressing the climate
change phenomena. The reason for Professor Garnaut's continued emphasis on
urgency is his hope to justify political expediency in accepting his somewhat
utopian concept of an all encompassing emissions trading scheme, where all
emissions permits need to be purchased. His ultimately unfulfilled hope was
that a cloak of urgency would generate sufficient groundswell to sweep aside
all the arguments of equity that have constrained debate in the European
Union and America and resulted in inferior forms of emissions schemes.
Each emissions permit would provide for a capped quantity of total emissions
for a specific time. It could be used immediately or hoarded indefinitely. The
Australian Government would progressively reduce the volume of permits
available to
the ETS as Australia's global emissions budget reduces.
There are two levels in international ETS schemes. The first is for countries to
trade emissions permits. The second is a market that connects local ETS
markets and facilitates arbitrage and fungibility.ii
Garnaut (p35) makes the point that an international ETS that connects local
markets is a long way off: “Only a few countries have proposed national targets,
and fewer still have sought to ground their targets in a framework based on
global emissions budgets derived from explicit mitigation objectives .... All
developing countries reject binding targets.”
Connecting local ETS markets has many implications. Firstly, the linked
national schemes need to define a carbon unit in the same way and agree on
what constitutes a tradeable surplus. Secondly, price and volume fluctuations
in one market immediately cause price and volume changes in the other.
Therefore regulators in each market need to monitor and enforce minimum
standards.
For a local ETS, the first issue is the formula on which quotas are allocated.
This is a major issue given countries such as America, China, India and
Bangladesh are highly diverse in their life-style, population, degree of
industrialisation, current level of emissions, exposure to the effects of climate
change and the impact on industry and jobs of compliance with the quotas.
However, in perhaps his most controversial point, Garnaut argues against the
European Union's differential form of ETS where permits are granted free of
cost to emitters. Emitters receive the value of the scarcity of the permits,
which is not necessarily passed on to the end consumer. Indeed, in a failure of
enlightened self-interest, generator profits increased by the amount of the
windfall permits so households and people on low incomes suffered
considerable injury from the higher prices.
Garnaut's most contentious and perhaps disputed point is that the Government
would auction emissions permits and thus the market would set the price on
emissions.iii He envisages that the Government would apply the proceeds in the
same way the proceeds of a revenue neutral environmental tax would be used
to reduce labour or other taxes and increase public expenditure.
technologies, the market would set a relatively low price curve,
allowing relatively high use of Australia's emissions budget in the
early years, followed by later rapid reductions in emissions. Low
expectations of emissions would generate a higher price curve, a
faster decline in emissions in the early years, and a more gradual
reduction in later years. Any new information that increased
optimism about new, lower-emissions ways of producing some
product, whether they were expected to become available
immediately or in the future, would shift downwards the whole
structure of carbon prices, spot and forward. Any new information
that lowered expectations about the future availability of low-
emissions alternative technologies would raise the whole structure
of carbon prices, spot and forward .... It is important to allow
permits to be used when they have greatest value to market
participants, to the extent that this is consistent with taking account
of any additional climate impacts of early use of permits and with
emerging international agreements. The practical way to achieve
the desired outcome would be for the Government to define an
optimum path for use of permits - ideally based on analysis of the
minimum cost path of emissions reduction within the total emissions
budget - and to issue permits over time in line with this trajectory of
emissions reduction. The fixed schedule for release of permits could
then be accompanied by provision for banking permits in excess of
current economic use, and borrowing from the future allocations
when the value of current relative to future use suggested it. The
banking and borrowing would allow the market to modify the rate at
which permits were used in a way that minimised the cost of
mitigation. It would allow the market to shape and reshape the
“depletion curve” in response to new information about emissions-
related technology or practices.
Garnaut has also accepted the recommendations in the Report by the Task
Group on Emissions Trading (Australian Department of the Prime Minister and
Cabinet 2007), established by former Prime Minister Howard, which
recommends Government interventions in the ETS to support governance and
to address market failures and weaknesses in innovation, R&D, demand-side
energy use and the provision of network infrastructure.
The Garnaut Review recommends a form of Reserve Bank to issue and monitor
the use of permits:
Garnaut (p50) argues that other firms that suffer because of higher prices on
inputs or on what they supply would not be compensated. He says there is no
tradition in Australia for compensating other firms for losses associated with
economic reforms, particularly because the business community has been able
to anticipate the risks of carbon pricing for many years. However, Garnaut
does make the case for assistance to workers and communities who are
adversely affected by environmental reforms. He notes:
The essence of the problem with Garnaut's proposed Australian ETS perhaps
lies in the above point. The very reason emissions permits were given to firms
in the European Union was to avoid charging for the permits, which would
have constituted an environmental tax of the form found unpopular in Germany
and determined by the French Constitutional Court to be unconstitutional
(Deroubaix & Leveque 2006).
At the heart of addressing greenhouse gas emissions is the principle that the
cost of adjusting to climate change should not fall on individual countries,
firms or individuals. Garnaut's ETS proposal of auctioning permits is prima
facie inequitable because it leads to differential damage to firms. Firms and
indeed end consumers will have plenty of reason to object to such damage.
They have the right to ask “Why me? Why should I be sacrificed for the good of
the planet?” Garnaut's policy of not compensating firms and individuals who
suffer has not proven to be a point easily accepted. As in France, inequalities
of this nature mean the policy requires a supra-approval under the
Constitution's international treaty provisions.
The Treasurer of the New South Wales (NSW) State Government, Michael
Costa, in the process of privatising NSW's power stations, also reacted
immediately to the issue. He said the National Generators Forum was seeking
either free emissions permits or A$20 billion compensation from the proceeds
of an auction of emissions permits.
The National Generators Forum continues to be perplexed by the simplistic views
that Professor Garnaut espouses of such a complex area. It is an indication of
the difficulty of reaching national consensus on Australia's transition to a
low carbon economy.
Lastly, due to its unusual structure, another point in Garnaut's proposal has the
potential to become a major controversial issue. Garnaut (p48) says that firms
such as coal, iron ore and metals exporters, which may not be able to pass on
price increases, would receive special treatment in the form of cash subsidies:
For the most part, the distinction is between firms selling into the
non-traded domestic sector, which will mostly be in a position
largely to pass on the permit price, and firms in the trade-exposed,
emissions-intensive sector, which mostly will not be able to pass on
the price of permits (in part or in whole) unless and until relevant
competitors in global markets are in a comparable position .... In
Australia, industries included in this category may include non-
ferrous metals smelting, iron and steel-making, and cement ....
There are environmental and economic reasons for establishing
special arrangements for highly emissions-intensive industries that
are trade-exposed and at risk during the transition to effective
global carbon pricing arrangements. The case for special
arrangements is based on efficiency in international resource
allocation. All other factors being equal, if such enterprises were
subject to a higher emissions price in Australia than in competitor
countries, there could be sufficient reason for relocation of
emissions-intensive activity to other countries. The relocation may
not reduce, and in the worst case may increase, global emissions.
The economic costs to Australia and the lack of a global
environmental benefit of such relocation of industry are obvious.
Although this point is analogous to exporters not charging goods and services
tax (GST) and reclaiming from the Government any GST paid on inputs, the
concept of a subsidy to extremely wealthy multinational resource companies is
on the face of it electorally unpalatable.
Australian Whitepaper & Carbon Pollution Reduction
Scheme
In December 2008, the Australian Government responded to the Garnaut
Review with a White Paper. The key features of the White Paper were
confirmation of a 60% reduction in emissions by 2050 (compared to 2000
levels); a unilateral 5% reduction by 2020 (compared to 2000 levels) and up to
15% if necessary to join with other nations in global action to limit atmospheric
CO2-equivalent concentrations to 450 ppm or lower by 2050; 20% of Australia's
energy being produced from renewable sources by 2020; and an Emissions Trading
Scheme (ETS) to operate from 1 July 2011.
December 2008 – for our pollution levels in 2020 to be 5 per cent
less than they were in 2000, possibly up to 15 per cent should a
global agreement be reached – will not be adequate to promote
changes to the way we live and do business. On the contrary, the
government has proposed concessions to households and high-
emissions industries to ensure that their levels of consumption and
pollution remain unaffected by the scheme!
Minister for Climate Change & Water, Penny Wong, noted that the Government
would meet the maximum 25% target through the CPRS, the 20% renewable
energy target and from 2015 by purchasing international credits for up to 5%
of the target.
by 2020. The new RET now absorbs all existing and proposed state and
territory renewable energy schemes.
Barringer, F., 2008. Businesses in Bay Area may pay fee for emissions. The
New York Times. Available at:
http://www.nytimes.com/2008/04/17/us/17fee.html?
_r=3&th&emc=th&oref=slogin&oref=slogin&oref=slogin [Accessed
April 18, 2008].
Bento, A.M. & Jacobsen, M., 2007. Ricardian rents, environmental policy and
the `double-dividend' hypothesis. Journal of Environmental Economics
and Management, 53(1), 17-31.
Boshier, J., 2008. Media Release: Garnaut gets it wrong again, National
Generators Forum. Available at: http://www.ngf.com.au/html//index.php?
option=com_remository&Itemid=32&func=fileinfo&id=262 [Accessed
April 26, 2008].
Deroubaix, J. & Leveque, F., 2006. The rise and fall of French ecological tax
reform: social acceptability versus political feasibility in the energy tax
implementation process. Energy Policy, 34(8), 940-949.
Flannery, T., 2008. Sustainability - issues facing world cities and world city
universities (University of Technology, Sydney Inaugural Anniversary
Address on its 20th Anniversary), The University of Technology, Sydney.
Available at: http://www.twenty.uts.edu.au/streaming/vod-tf/ [Accessed
June 1, 2008].
http://www.garnautreview.org.au/CA25734E0016A131/WebObj/ETSdiscu
ssionpaper-March2008/$File/ETS%20discussion%20paper%20-
%20March%202008.pdf.
Lowe, I., 2009. A Big Fix: Radical Solutions for Australia's Environmental
Crisis, Black Inc, Melbourne.
i Nevertheless, it is possible for the West to impute a carbon tax on imports from
countries that do not levy emissions.
ii Fungibility is the ability to trade a permit in different markets, for example the
ready sale of an American emission permit on the Australian ETS market.
iii Garnaut dismisses outright a carbon tax, or a capped price at which the Australian
Government would sell any number of permits, which Garnaut says is the same
as a carbon tax.
Appendix 2 CGE Modelling
u(c) = ( c^(1−η) − 1 ) / (1−η)
The simplest form of all utility functions that satisfies conditions for regularity
(i.e. monotonicity and convexity) is the Cobb-Douglas (1928) or log-linear
function
U = Π_{i=1}^n q_i^(α_i)
where α_i is the share parameter of good i.
The log-linear form of the utility function is (Chung 1994, p.8):
u = ln U = Σ_{i=1}^n α_i ln q_i
The Constant Elasticity of Substitution (CES) form of the utility function is:
u = [ Σ_{i=1}^n δ_i q_i^(−ρ) ]^(−1/ρ)
where the elasticity of substitution is σ = 1/(1+ρ); the Cobb-Douglas form is the
limiting case σ = 1.
While the CES function remains highly popular, it is limited by the assumption
of constant elasticities of substitution and it cannot model inferior goods. The
transcendental logarithmic (Translog) functions for price and quantity
developed by Christensen et al. (1975) provide a model free of these
restrictions. However, there are still deficiencies. The price and quantity
functions are approximated to the second order. Furthermore, demand
functions fitted to time-series data are not homogeneous and probably not
symmetric (Chung 1994, p.76 & 81). A Translog function can become unstable
if it takes a homothetic and separable form, whereupon it collapses to a Cobb-
Douglas function of Translog sub-aggregates (or the reverse).
Production function
Analogous to the three forms of utility function, there are three main types of
neoclassical production functions: Cobb-Douglas, Constant Elasticity of
Substitution (CES) and transcendental logarithmic (Translog). In addition, the
Leontief Input-Output table of proportions is a special form or schema for a
neoclassical production function.
The two-input CES production function is:
U(x1, x2) = A ( δ x1^ρ + (1−δ) x2^ρ )^(1/ρ)
The elasticity of substitution between x1 and x2 is σ = 1/(1−ρ) or, alternatively,
ρ = (σ−1)/σ.
The CES production function is often nested so that pairs of composite inputs
(goods or factors), prices and conditional demand functions lead to composite
outputs. For example, labour and capital produce value added, and the
combination of value-added with commodities A & B produces commodity C.
Commodities A & B may have both been produced by other processes. The
same sort of nesting is used for consumer utility: commodities X & Y are
consumed, and this consumption together with savings produces the consumer
utility.
The deficiencies of the CES form are similar to those discussed above in
relation to utility. These are that the factor shares do not vary with total output
and the elasticity of substitution is the same for all input pairs (Chung 1994,
p.110).
The CES production function has three special cases, where the elasticity of
substitution approaches one, zero or infinity.
When σ → 1 the CES function reduces to the Cobb-Douglas function:
U = A x1^δ x2^(1−δ)
When σ → 0 the CES function becomes the Leontief function of perfect
complements, where factors are contemporaneously used in fixed proportions
{a, b}. No substitution is possible. Therefore, an isoquant q is L-shaped
and the bottom left-hand corner of the isoquant is the minimum resource usage
of each input to achieve the output level q:
U = Min[ x1/a, x2/b ] = q
When σ → ∞ the CES function becomes the linear function of perfect substitutes:
U = A Σ_{i=1}^n x_i
For example, the producer's behaviour is to minimise the total cost of inputs
subject to the constraint of achieving a minimum output of q (the isoquant):
Min p1 x1 + p2 x2
subject to:
A ( δ1 x1^ρ + δ2 x2^ρ )^(1/ρ) = q
where p1 and p2 are the prices of the respective inputs. The Lagrangian is:
L = p1 x1 + p2 x2 − λ [ ln(q/A) − (1/ρ) ln( δ1 x1^ρ + δ2 x2^ρ ) ]
Setting the partial differentials { ∂L/∂x1 = 0, ∂L/∂x2 = 0, ∂L/∂λ = 0 } to zero provides
the equations:
{ p1 − λ δ1 x1^(ρ−1) / ( δ1 x1^ρ + δ2 x2^ρ ) = 0,
  p2 − λ δ2 x2^(ρ−1) / ( δ1 x1^ρ + δ2 x2^ρ ) = 0,
  − ln(q/A) + (1/ρ) ln( δ1 x1^ρ + δ2 x2^ρ ) = 0 }
From the first two equations, the ratio of prices can be calculated as:
p1 / p2 = δ1 x1^(ρ−1) / ( δ2 x2^(ρ−1) )
so that:
{ x1 = x2 [ δ1 p2 / (δ2 p1) ]^(1/(1−ρ)),  x2 = x1 [ δ2 p1 / (δ1 p2) ]^(1/(1−ρ)) }
Substituting back into the output constraint gives the conditional factor demands:
{ x1 = (q/A) (δ1/p1)^(1/(1−ρ)) k^(−1/ρ),  x2 = (q/A) (δ2/p2)^(1/(1−ρ)) k^(−1/ρ) }
Where:
k = δ1^(1/(1−ρ)) p1^(−ρ/(1−ρ)) + δ2^(1/(1−ρ)) p2^(−ρ/(1−ρ))
The CES function exhibits constant returns to scale, therefore a single unit
(numéraire) cost function c for demand of 1 unit can be defined as the ratio
of input value to output quantity. Here it is given at the minimum value:
c = ( p1 x1 + p2 x2 ) / q
{ c = (1/A) k^(−(1−ρ)/ρ) }
Therefore, the factor k^(−1/ρ) appearing in the equations for {x1, x2} is given
by:
k^(−1/ρ) = ( A c )^(1/(1−ρ))
so that:
{ x1 = q A^(ρ/(1−ρ)) [ δ1 c / p1 ]^(1/(1−ρ)),  x2 = q A^(ρ/(1−ρ)) [ δ2 c / p2 ]^(1/(1−ρ)) }
Using σ = 1/(1−ρ) for the elasticity of substitution between x1 and x2:
{ x1 = q A^(σ−1) [ δ1 c / p1 ]^σ,  x2 = q A^(σ−1) [ δ2 c / p2 ]^σ }
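The closed-form demands above can be verified numerically. The following Mathematica sketch is illustrative only; the parameter values a0, d1, d2, ρ, p1, p2 and q0 are assumptions chosen for the example and are not used elsewhere in this work. It minimises input cost subject to the CES output constraint and confirms the first order condition on the price ratio:
ClearAll[a0, d1, d2, ρ, p1, p2, q0, x1, x2, sol];
a0 = 1; d1 = 0.4; d2 = 0.6; ρ = 0.5; p1 = 2; p2 = 3; q0 = 10;
sol = NMinimize[{p1 x1 + p2 x2,
    a0 (d1 x1^ρ + d2 x2^ρ)^(1/ρ) >= q0 && x1 > 0 && x2 > 0}, {x1, x2}];
(* at the minimum the price ratio equals δ1 x1^(ρ-1) / (δ2 x2^(ρ-1)) *)
{p1/p2, (d1 x1^(ρ - 1))/(d2 x2^(ρ - 1)) /. sol[[2]]}
The two numbers returned should agree, confirming the derivation.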
Generalised multi-input CES production function
This provides a specification for the multi-input CES function used in many
CGE models, where the share parameters {α_i} are defined slightly differently to
facilitate their removal from the power function, for example with
{ α1 = δ1^σ, α2 = δ2^σ }:
q = A [ Σ_{i=1}^n α_i^(1/σ) x_i^((σ−1)/σ) ]^(σ/(σ−1))
(equivalently, with δ_i = α_i^(1/σ), q = A [ Σ_{i=1}^n δ_i x_i^((σ−1)/σ) ]^(σ/(σ−1)) ).
The unit cost function is:
c = (1/A) [ Σ_{i=1}^n α_i p_i^(1−σ) ]^(1/(1−σ))
and the conditional demand function x_i at relative prices p_i is:
x_i = q A^(σ−1) α_i [ c / p_i ]^σ
There are two conditions for equilibrium. The first is that the total
consumption of each commodity equals the total production:
C_i = X_i − Σ_{k=1}^n X_k m_{i,k}
where m_{i,k} is the proportion of the ith commodity requisite for producing the
X_k product.
(**Utility Function**)
(*Cobb Douglas*)
U = Product[Co[i]^s[i], {i, TSectors}];
(*CES*)
(*U=Sum[s[i]Co[i]^-s[TSectors+1],{i,1,TSectors}]^(-1/s[TSectors+1]);*)
(**Price Functions**)
Do[p[i] = D[U, Co[i]]/D[U, Co[1]], {i, TSectors}];
Y = Sum[p[i] Co[i], {i, TSectors}];
(**Allocate Parameters**)
Assign[{unitspars_, prodpars_, intcoffs_, utilpars_, extpars_}] :=
Join[
Thread[Array[A, TSectors] -> unitspars],
(*TFactors increased by 1 in prodpars for Production X CES
elasticities*)
Thread[Array[s, TSectors + 1] -> utilpars],
Thread[Array[Ltot, TFactors] -> extpars]
];
(**Equilibrium Function**)
Equilibrium[pars_] := Solve[equations /. Assign[pars]];
(**Execute Equilibrium**)
pars = {{1, 1}, {{0.8, 0.2, 0.3}, {0.2, 0.8, 0.5}}, {{.1, .3}, {.4, .1}},
{0.6, 0.4, 0.5}, {400, 600}};
Assign[pars]
Equilibrium[pars]
Using Cobb-Douglas for each of the consumer utility and production functions,
the output with no intermediate inputs becomes:
The above formulation processes very quickly when both the consumer utility and
producer production functions have the Cobb-Douglas form and there are no
intermediate inputs. However, if CES functions are used or intermediate inputs
are allowed then execution becomes laborious.
The nonlinear equations may be transformed with the following linearisation
rules, writing X̂ for the logarithmic deviation of a variable X from its
reference value:
(A B)^ = Â + B̂
(A / B)^ = Â − B̂
(B^α)^ = α B̂
(A + B)^ = ( A Â + B B̂ ) / ( A + B )
For example, the analytically reduced nonlinear equation for x1 in the CES
function derived above is:
x1 = q A^(σ−1) [ δ1 c / p1 ]^σ
Upon transformation using the linearisation rules, this becomes the simple
linear equation:
x̂1 = q̂ + σ ( ĉ − p̂1 ) + (σ−1) Â
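As a check on this linearisation, the elasticities of the nonlinear demand function can be computed directly. The sketch below is illustrative only and uses locally defined symbols (a for the efficiency parameter):
ClearAll[x1, q, a, σ, δ1, c, p1];
x1 = q a^(σ - 1) (δ1 c/p1)^σ;
Simplify[{q D[Log[x1], q], c D[Log[x1], c], p1 D[Log[x1], p1], a D[Log[x1], a]}]
(* expected output: {1, σ, -σ, -1 + σ}, matching x̂1 = q̂ + σ(ĉ - p̂1) + (σ-1)Â *)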
Preferences
In the neoclassical growth model, utility of the representative agent is a time
discounted function of the expectation of consumption in the presence of risk
aversion:
U = E[ Σ_{t=0}^∞ β^t C_t^(1−η) / (1−η) ]
where:
C_t  consumption i
β  the time discount factor
η  the coefficient of risk aversion
Technologies
Technology is represented with a Cobb-Douglas production function as follows:
C_t + K_t = Z_t K_{t−1}^θ N_t^(1−θ) + (1 − δ) K_{t−1}
where:
K_t  capital
N_t  labour
θ  share of capital, with 0 < θ < 1
δ  depreciation rate, with 0 < δ < 1
Z_t  total factor productivity, which evolves as
log Z_t = (1 − ψ) log Z̄ + ψ log Z_{t−1} + ε_t
where:
ε_t  i.i.d. N(0; σ_ε²)
ψ  persistence parameter, with 0 < ψ < 1
Z̄  steady-state level of total factor productivity
Endowments
The representative agent is endowed with:
Information
The social planner chooses {C_t, K_t}_{t=0}^∞ to solve:
max E[ Σ_{t=0}^∞ β^t ( C_t^(1−η) − 1 ) / (1−η) ]
s.t. K_{−1}, Z_0 given,
C_t + K_t = Z_t K_{t−1}^θ N_t^(1−θ) + (1 − δ) K_{t−1}
log Z_t = (1 − ψ) log Z̄ + ψ log Z_{t−1} + ε_t
The Lagrangian is:
L = E[ Σ_{t=0}^∞ β^t { ( C_t^(1−η) − 1 ) / (1−η) − λ_t ( C_t + K_t − Z_t K_{t−1}^θ N_t^(1−θ) − (1 − δ) K_{t−1} ) } ]
Setting the partial differentials to zero provides the first order conditions:
∂L/∂λ_t :  0 = C_t + K_t − Z_t K_{t−1}^θ N_t^(1−θ) − (1 − δ) K_{t−1}
∂L/∂C_t :  0 = C_t^(−η) − λ_t
∂L/∂K_t :  0 = − λ_t + β E_t[ λ_{t+1} ( θ Z_{t+1} K_t^(θ−1) + 1 − δ ) ]
so that, with labour supplied inelastically (N_t = 1), consumption is:
C_t = Z_t K_{t−1}^θ + (1 − δ) K_{t−1} − K_t
A transversality condition prevents unstable solutions. This is obtained by
setting the differential of the Kuhn-Tucker limiting condition to zero, as
follows:
0 = lim_{T→∞} E_0[ β^T C_T^(−η) K_T ]
In order to provide a set of equations from which a steady state solution can be
determined, Lucas' asset pricing equation can be used (Lucas 1978):
1 = E_t[ β ( C_t / C_{t+1} )^η R_{t+1} ]
where R_{t+1} is the return on the capital in purchasing an additional unit of
next year's resources.
C_t = Z_t K_{t−1}^θ + (1 − δ) K_{t−1} − K_t
R_t = θ Z_t K_{t−1}^(θ−1) + 1 − δ
1 = E_t[ β ( C_t / C_{t+1} )^η R_{t+1} ]
log Z_t = (1 − ψ) log Z̄ + ψ log Z_{t−1} + ε_t
In the steady state (denoted by bars, with ε_t = 0):
C̄ = Z̄ K̄^θ − δ K̄
R̄ = θ Z̄ K̄^(θ−1) + 1 − δ
1 = β R̄
or alternatively:
β = 1 / R̄
K̄ = [ θ Z̄ / ( 1/β − 1 + δ ) ]^(1/(1−θ))
Ȳ = Z̄ K̄^θ
C̄ = Ȳ − δ K̄
This system may further be reduced to just one equation in K_t or to a popular log-linear approximation.
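The steady-state relations can be evaluated numerically. In the following sketch the values of β, δ, θ and Z̄ are illustrative assumptions only, not calibrated values used in this work:
ClearAll[β, δ, θ, zbar, rbar, kbar, ybar, cbar];
β = 0.96; δ = 0.1; θ = 0.36; zbar = 1.0;
rbar = 1/β;                                   (* from 1 = β R̄ *)
kbar = (θ zbar/(rbar - 1 + δ))^(1/(1 - θ));   (* from R̄ = θ Z̄ K̄^(θ-1) + 1 - δ *)
ybar = zbar kbar^θ;                           (* Ȳ = Z̄ K̄^θ *)
cbar = ybar - δ kbar;                         (* C̄ = Ȳ - δ K̄ *)
{kbar, ybar, cbar}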
Competitive equilibrium
Analogous to the social planner's objective function, competitive equilibrium in
markets needs to be defined in terms of market prices. For example, competitive
equilibrium is the sequence:
{ C_t, N_t, K_t, R_t, W_t }_{t=0}^∞
such that the representative household, supplying labour N_t^s and capital K_t^s,
solves:
max E[ Σ_{t=0}^∞ β^t ( C_t^(1−η) − 1 ) / (1−η) ]
s.t. N_t^s ≤ 1,
C_t + K_t^s = W_t N_t^s + R_t K_{t−1}^s
and the intertemporal budgetary restraint that, over time, returns will pay for
capital (and any borrowings will be paid for from returns) such that as t → ∞
there is neither surplus capital nor borrowings (known as the "no-Ponzi-
game condition"):
0 = lim_{t→∞} E_0[ R^(−t) K_t ]
The representative agent, the firm, demanding labour will pay wages and
receive returns in the equilibrium function { W_t, R_t }_{t=0}^∞. Using the
superscript d for demand as we did s for supply:
max_{K_{t−1}^d, N_t^d}  Z_t (K_{t−1}^d)^θ (N_t^d)^(1−θ) + (1 − δ) K_{t−1}^d − W_t N_t^d − R_t K_{t−1}^d
where Z_t is exogenous:
log Z_t = (1 − ψ) log Z̄ + ψ log Z_{t−1} + ε_t,  ε_t i.i.d. N(0; σ_ε²)
Markets clear as follows, although by Walras' Law only two of these three
equations are needed:
The firm's first order conditions give the demand curves for labour and capital
(the wage and the return on capital):
W_t = (1 − θ) Z_t (K_{t−1}^d)^θ (N_t^d)^(−θ)
R_t = θ Z_t (K_{t−1}^d)^(θ−1) (N_t^d)^(1−θ) + 1 − δ
Y_t = Z_t K_{t−1}^θ N_t^(1−θ)
W_t N_t = (1 − θ) Y_t
R_t K_{t−1} = θ Y_t + (1 − δ) K_{t−1}
Therefore, the income share of labour is just wages and the income share of
capital is the return on capital plus depreciation.
r_t = R_t − 1 = θ Y_t / K_{t−1} − δ
L = E[ Σ_{t=0}^∞ β^t { ( C_t^(1−η) − 1 ) / (1−η) − λ_t ( C_t + K_t − W_t − R_t K_{t−1} ) } ]
Again, the partial differential Euler equations provide first order condition
approximations:
∂L/∂λ_t :  0 = C_t + K_t − W_t − R_t K_{t−1}
∂L/∂C_t :  0 = C_t^(−η) − λ_t
∂L/∂K_t :  0 = − λ_t + β E_t[ λ_{t+1} R_{t+1} ]
Collecting the equations and substituting for W_t and R_t provides the same
equations as for the social planner's problem:
C_t = Z_t K_{t−1}^θ + (1 − δ) K_{t−1} − K_t
R_t = θ Z_t K_{t−1}^(θ−1) + 1 − δ
1 = E_t[ β ( C_t / C_{t+1} )^η R_{t+1} ]
log Z_t = (1 − ψ) log Z̄ + ψ log Z_{t−1} + ε_t
These are the same equations as for the social planner's problem! Thus,
whether one studies a competitive equilibrium or the social planner's
problem, one ends up with the same allocation of resources.
A2.3 Appendix references
Arrow, K.J. et al., 1961. Capital-labor substitution and economic efficiency. The
Review of Economics and Statistics, 225-250.
Chung, J.W., 1994. Utility and Production Functions: Theory and Applications,
Oxford UK and Cambridge USA: Blackwell.
Cobb, C.W. & Douglas, P.H., 1928. A theory of production. The American
Economic Review, 139-165.
Gohin, A. & Hertel, T., 2003. A Note on the CES Functional Form and Its Use in
the GTAP Model, Purdue University. Available at:
https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=1370 [Accessed November 7, 2008].
Ljungqvist, L. & Sargent, T.J., 2000. Recursive macroeconomic theory 1st ed.,
The MIT Press.
Noguchi, A., 1991. The two sector general equilibrium model: numerical and
graphical representation of an economy. The Mathematica Journal, 1(3,
Winter), 96-103.
Uhlig, H., 1999. A toolkit for analyzing nonlinear dynamic stochastic models
easily. In R. Marimon & A. Scott (eds), Computational Methods for the
Study of Dynamic Economies. Oxford and New York: Oxford University
Press, pp. 30-61.
i Uhlig uses capital letters to denote variables and small letters to denote log-
deviations. This notation is pursued in this analysis but is different to the usual
use of capital letters to represent aggregate variables and small letters to
represent individual variables.
Appendix 3 Input Output Tables
The Illustration above shows an industry-by-industry input output matrix.
Coefficients taken by row represent the distribution of an industry's output.
Columns provide the sources of inputs for an industry. The total of outputs in a
row is equal to the sum of its inputs in a column, including gross operating
surplus.
Quadrant 2 shows the distribution of output for consumption by the public and
private sector and individuals. It also includes changes in inventories.
Quadrants 1 and 2 together show the total usage of the goods and services
supplied by each industry, which is also equal to total supply.
In the Illustration, imports are shown as a distinct row in the Value Added area
across Quadrants 3 and 4. This is called a direct allocation of imports. It
assumes that each using sector draws on imports and domestic production in
the average proportions established for the total supply of each product.
Technology matrix
It is also possible to have an indirect allocation of imports where the total
output from each industry includes both Australian and imported content.
Imports are recorded as adding to the supply of the sector to which they are
primary and then this supply is allocated along the corresponding row of the
table.
This means that the coefficients reflect both domestic and imported supply. It
permits substitution between imports and domestic production without
affecting the size of the coefficients.
As materials coming into the system must be equal to the flows of materials out
of the system (plus any material accumulated within the system during the
period) then the law of conservation of mass is met.
Therefore, coefficients built from total dollar requirements also reflect the
actual technological relationship between industries. The same applies to
energy intensities and greenhouse emissions. For this reason, an input output
table with indirect allocation of imports is called a technology matrix.
However, the technology assumption implicitly requires that, in the short run:
products of the same type are homogeneous, with the same input structure
wherever produced; there are no changes in relative input prices (unless
specific behavioural models are included to separately modify the coefficients);
technological structures are fixed; output is a linear function of inputs, so
there are neither increasing returns to scale nor other constraints in the
system; and products are made in fixed proportions to each other.
The matrix of Leontief Inverse (I − A)^(−1) coefficients is called the total
requirements coefficients matrix. Each coefficient represents the units of
industry i's output required both directly and indirectly for industry j to
produce 100 units of output. It needs to be remembered that the answers
obtained by applying these coefficients are in terms of the output of industries
and include the flows of products not primary to these industries.
It is also important to recognise the way imports have been allocated. With
direct allocation of imports the total requirements coefficients in Quadrant 1
refer only to the domestic production. Any use of the total requirements matrix
necessarily has the caveat assumption that imports are unchanged.
Supply Table
Use Table
Rows contain product groups and primary inputs, whether locally produced or
imported. Rows designated by prefix ‘P’ show the primary inputs which have
been purchased by industries and by final demand.
Columns show the composition of intermediate and primary inputs into each
industry and final demand category.
Imports table
Imports that are not produced in Australia, called complementary imports, are
recorded in separate columns. Coffee and natural rubber are examples of
complementary imports. Imports for re-export are treated the same way.
Margins table
This table relates the basic price and purchasers’ price of all flows in the use
table.
Mathematical Derivation
X_i = z_i1 + z_i2 + ⋯ + z_in + Y_i
P_j = z_1j + z_2j + ⋯ + z_nj + L_j + N_j + M_j
a_1j = z_1j / P_j etc., so a_ij = z_ij / P_j
Where:
L = labour services
N = government services paid for as taxes + capital costs (interest payments)
    + land rental payments + entrepreneurship (profits)
M = imports
Since inflows = outflows, X_i = P_j and therefore over all rows i and columns j:
z_i1 + z_i2 + ⋯ + z_in + Y_i = z_1j + z_2j + ⋯ + z_nj + L_j + N_j + M_j
This equality cannot be simplified term by term because z_ij ≠ z_ji. For example,
the value of steel that goes into a car is not equal to the value of cars that go
to make steel. However, from the definition of gross profit, GP = Sales − Raw
Materials, we know that the value of all materials purchased by a firm for its
output is only different to the sales value by gross profit (which, in turn,
represents the value added of labour + overheads + profit). Therefore the sum of
the products z_i1 + z_i2 + ⋯ + z_in is logically equal to z_1j + z_2j + ⋯ + z_nj.
So we can eliminate each side respectively, leaving:
Y_1 = L_1 + N_1 + M_1
L + N = Y − M
and since:
Y = C + I + G + E
then:
L + N = C + I + G + E − M
Where:
which means:
{ Factor payments in the economy for labour, rent, interest, profit,
  indirect taxes, etc. } = { Total spent on consumption, investment
  and net exports }
In other words:
{ Gross National Product at Factor Prices including Indirect Taxes }
  = { Gross National Product at Market Prices }
Leontief Inverse
On a per unit basis:
a_ij = z_ij / P_j
Where:
P_j = the column total for Australian Production after adding the value added
      items such as wages, taxes & profits
a_ij = technical coefficients
so
X_j = a_j1 X_1 + a_j2 X_2 + ⋯ + a_jn X_n + Y_j
and therefore:
X_j − a_j1 X_1 − a_j2 X_2 − ⋯ − a_jj X_j − ⋯ − a_jn X_n = Y_j
upon rearranging:
− a_j1 X_1 − a_j2 X_2 − ⋯ + (1 − a_jj) X_j − ⋯ − a_jn X_n = Y_j
( I − A ) X = Y
X = ( I − A )^(−1) Y
where:
adj A = the adjoint, whose element (i, j) is the cofactor of the element (i, j) of
the transpose of A, where the minor |a_ij| is the determinant of the square
matrix that remains when row i & column j are removed
( I − A )^(−1) = I + A + A² + A³ + ⋯
As all parts of A are less than 1, the elements of the power series quickly
approach zero (for example 0.3² = 0.09), so it is usually only necessary to
iterate three times to capture most of the effects.
X = ( I − A )^(−1) Y
can be interpreted as:
X_i = A_i1 Y_1 + A_i2 Y_2 + ⋯ + A_ij Y_j + ⋯ + A_in Y_n
X_i = f( Y_1, Y_2, ⋯, Y_j, ⋯, Y_n )
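The rapid convergence of the power series can be illustrated with a small hypothetical coefficient matrix; the numbers below are invented for illustration and are not ABS data:
ClearAll[A, exact, approx];
A = {{0.20, 0.30}, {0.10, 0.25}};
exact = Inverse[IdentityMatrix[2] - A];
approx = IdentityMatrix[2] + A + A.A + A.A.A;   (* direct plus three rounds of indirect requirements *)
exact - approx                                  (* the residual is already small *)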
In the case of direct allocation of imports, the Leontief A matrix derived from
Table 5 can be compared to ABS Table 6 and the Leontief Inverse to ABS Table
7. In addition, the output vector Y is provided by the column Final Uses (T5).
From this, the input vector X can be calculated as the matrix multiplication of
the total requirements coefficients matrix (I − A)^(−1) and the output vector Y.ii
Similar tables to the foregoing are provided for the main case of indirect
allocation of imports. Table 8 provides an Industry-by-Industry flow table with
indirect allocation of imports and basic prices, across 109 industries. The
Leontief A matrix can be compared to ABS Table 9 and the Leontief Inverse to
ABS Table 10.iii
It needs to be noted that the Leontief A matrix covers only Industry Uses in
Tables 6 and 9, for direct and indirect allocation of imports respectively. The
Leontief A matrix excludes the primary input rows in the ABS tables, which
aggregate with the Industry Uses rows to produce Australian Production:
Compensation of employees (P1), Gross operating surplus & mixed income (P2),
Taxes less subsidies on products (P3) and Other taxes less subsidies on
production (P4).
The Leontief A matrix also excludes the following Final Use columns in the ABS
tables, which aggregate with the Industry Uses columns to produce total supply:
Final consumption expenditure of the household (Q1) and government (Q2)
sectors, Gross fixed capital formation of the private (Q3), public enterprise
(Q4) and general government (Q5) sectors, Changes in inventories (Q6) and
Exports (Q7).
Z = [ Z^LL  Z^LM ; Z^ML  Z^MM ]
A = [ A^LL  A^LM ; A^ML  A^MM ]
where, as usual:
a_ij^LM = z_ij^LM / X_j^M   and   X = ( I − A )^(−1) Y.
In contrast to the full integration of an IRIO model, the MRIO approach seeks
to simplify the modelling paradigm by representing data in regional tables and
interregional trade tables.
Miller & Blair (1985, Appendix 3.2, pp.91-3) describe the method to build trade
tables as follows:
T_i^M = z_i^1M + z_i^2M + ⋯ + z_i^LM + ⋯ + z_i^pM
T_i^M = Σ_{L=1, L≠M}^p z_i^LM + z_i^MM
X_i^L = Σ_{M=1}^p z_i^LM
Since the interregional trade coefficients are defined as the proportion of all
commodity i used in M that comes from L :
c_i^LM = z_i^LM / T_i^M
then rearranging and substituting z_i^LM into the above leads to:
X_i^L = Σ_{M=1}^p c_i^LM T_i^M
T_i^M = Σ_{j=1}^n a_ij^M X_j^M + Y_i^M
substituting T_i^M into X_i^L leads to:
X_i^L = Σ_{M=1}^p c_i^LM ( Σ_{j=1}^n a_ij^M X_j^M + Y_i^M )
X^L = Σ_{M=1}^p C^LM ( A^M X^M + Y^M )
X = C ( A X + Y )
so:
X = ( I − C A )^(−1) C Y
where:
A = [ A^1 0 ⋯ 0 ; 0 A^M 0 ; 0 ⋯ 0 A^p ],
A^M = [ a_11^M ⋯ a_1n^M ; ⋮ ⋱ ⋮ ; a_n1^M ⋯ a_nn^M ],
C = [ C^11 ⋯ C^1M ⋯ C^1p ; ⋮ ⋱ ⋮ ; C^L1 ⋯ C^LM ⋯ C^Lp ; ⋮ ⋱ ⋮ ; C^p1 ⋯ C^pM ⋯ C^pp ],
C^LM = diag( c_1^LM, c_2^LM, ⋯, c_n^LM ),
c_i^LM = z_i^LM / T_i^M,
X^L = ( X_1^L, ⋯, X_n^L )′,  X^M = ( X_1^M, ⋯, X_n^M )′,  Y^M = ( Y_1^M, ⋯, Y_n^M )′
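A toy two-region, one-commodity example illustrates the MRIO solution X = (I − CA)^(−1) C Y; all numbers below are invented for illustration:
ClearAll[a, c, y, x];
a = {{0.30, 0.00}, {0.00, 0.20}};   (* block-diagonal A: each region's own technical coefficient *)
c = {{0.80, 0.40}, {0.20, 0.60}};   (* C: column M holds the source shares of commodity supplied to region M *)
y = {100., 50.};                    (* regional final demands Y *)
x = LinearSolve[IdentityMatrix[2] - c.a, c.y]   (* solves (I - CA) X = C Y *)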
Technological change
Solow assumed that factors are paid their marginal products such that:
Q = f( K, L; t )
Q = A f( K, L )
Q̇ = Ȧ f( K, L ) + A ḟ( K, L )
  = Ȧ f( K, L ) + A ( K̇ ∂f/∂K + L̇ ∂f/∂L )
Q̇/Q = Ȧ/A + (A/Q) ( K̇ ∂f/∂K + L̇ ∂f/∂L )
Q̇/Q = Ȧ/A + w_k K̇/K + w_l L̇/L
where:
w_k = ( A K / Q ) ∂f/∂K
w_l = ( A L / Q ) ∂f/∂L
q̇/q = Ȧ/A + w_k k̇/k
where:
q = Q/L
k = K/L
w_l = 1 − w_k
Solow assumed that constant returns to scale was unavoidable. This being the
case:
Q = A f( K, L )
q = A f( k, 1 )
Using labour and capital statistics from The Economic Almanac , Solow
reconstructed A by replacing time derivatives with year-to-year changes as
follows:
ΔA_t / A_t = Δq_t / q_t − w_k Δk_t / k_t
y = Π_i ( A_i x_i )^(β_i)
  = ( A_1 x_1 )^(β1) ( A_2 x_2 )^(β2) ( A_3 x_3 )^(β3)
where:
y = output
x_i = input
A_i = productivity effect that augments x_i
Σ β_i = 1
ln y = β1 ln( A_1 x_1 ) + β2 ln( A_2 x_2 ) + β3 ln( A_3 x_3 )
     = β1 ln A_1 + β2 ln A_2 + β3 ln A_3 + β1 ln x_1 + β2 ln x_2 + β3 ln x_3
     = Σ β_i ln A_i + Σ β_i ln x_i
According to Solow (1957, p.313) the learning effect is reflected with the
exponential:
A_i = e^(r_i t)
This may be approximated by the discrete function A_i = 1 + r_i t, so:
Σ β_i ln A_i = Σ β_i r_i t = t Σ β_i r_i = t β0
where:
β0 = Σ β_i r_i
   = measure of the growth of technology, i.e. technological progress
Substituting in the above equation for ln y leads to the log linear identity:
ln y = t β0 + Σ β_i ln x_i
Differentiating with respect to time:
ẏ/y = β0 + Σ β_i ẋ_i/x_i
where:
z = ln y,  ż = (dz/dy)(dy/dt) = (dy/dt)/y
ẋ = dx/dt
ẏ = dy/dt
where:
F = the compound factor of production, whose growth rate is the cost-share
weighted growth of the inputs:
Ḟ/F = Σ w_i ẋ_i/x_i
w_i = p_i x_i / c, the cost share of input x_i, where p_i is the price of input x_i
c = Σ p_i x_i, the total cost of inputs
ε = ( ∂c/∂y ) / ( c/y ), the output cost elasticity
Total factor productivity is the ratio of output to the compound factor:
TFP = y / F
dTFP/dt = ẏ/F − y F^(−2) Ḟ
(dTFP/dt)/TFP = ( ẏ/F − y F^(−2) Ḟ ) / ( y/F )
              = ẏ/y − Ḟ/F
So:
ẏ/y = (dTFP/dt)/TFP + Ḟ/F
β0 = ẏ/y − ε^(−1) Ḟ/F
(dTFP/dt)/TFP = β0 + ( ε^(−1) − 1 ) Ḟ/F
β0 = (dTFP/dt)/TFP + ( 1 − ε^(−1) ) Ḟ/F
which, for constant returns to scale ( ε = 1 ), reduces to:
β0 = (dTFP/dt)/TFP
It may be seen that for the case of constant returns to scale technological
progress satisfies:
ẏ/y = β0 + Σ β_i ẋ_i/x_i
(dTFP/dt)/TFP = ẏ/y − Σ β_i ẋ_i/x_i = β0
In an input output context there are two ways to propagate changes in
technology. The first is to extrapolate past trends into the future. While this is
relatively easy to do, it is little better than guessing. Using the popular
management example, the best place to find a drunk in a cornfield is to look in
the place where he was left. Analogously, the best estimate of future
technology is today's best technology. Investigations in the United Kingdom
and the Netherlands have found innovative changes in environmental
technology are very difficult to project.
Wilting et al. (2004; 2008) are critical that traditional approaches such as
Miller & Blair's (1985) use of marginal input coefficients and the University of
Maryland's extension of logistic growth curves in its INFORUM model fare no
better.
Wilting et al. (2004; 2008) use trend extrapolation to generate an autonomous
reference path. They combine this with expert analysis of specific technology
life cycles along this reference path.
a^R(p) = a_00 + Δa^R
where:
p = the number of projection periods
R = the reference scenario
a^R(p) = final year technical coefficient based on reference scenario R
a_00 = original technical coefficient
Δa^R = absolute change of the technical coefficient across the projection horizon
The authors suggest there are two types of change that lead to technology
diffusion through technical coefficients. The first is changes to primary
production processes, for example lower demand for herbicides due to a
change from common to organic agriculture. The second is more general
technological change due to better information and communication, or
substitutions between inputs.
Wilting et al. prepare future technical coefficients by surveying the mix of
existing primary technologies that provide the current coefficients of input
output tables and developing new coefficients for alternative technologies. The
difference between these coefficient sets is projected as a changing mix of
technologies independent of the existing primary technology.
All coefficients are then related to the new technologies. The implicit
assumptions are that the pace of technological change in all industries is the same
and that the share of each technology in total production remains constant.
The ratio of coefficients for technologies i and 1 is constant as follows:
a_i,00 = γ_i a_1,00
where:
γ_i = ratio of the technical coefficients of technologies i and 1
Therefore:
a_1,00 = a_00 / Σ_i π_i,00 γ_i
Changes in non-primary technologies are assumed to lead to an improvement
factor:
a_i^S(p) = ζ^S a_i^R(p)
where:
ζ^S = general change coefficient of scenario S
Therefore:
a^S(p) = Σ_i π_i^S(p) a_i^S(p)
       = Σ_i π_i^S(p) ζ^S a_i^R(p)
       = Σ_i π_i^S(p) ζ^S γ_i a_1^R(p)
where:
π_i^S(p) = technology i's share of sector total production for scenario S
Σ_i π_i^S(p) = 1
The diffusion changes in labour, capital and emissions can be carried out in the
same way as this projection of technical change.
http://www.abs.gov.au/AUSSTATS/[email protected]/mf/5209.0.55.001 [Accessed
July 23, 2009].
Denny, M., Fuss, M. & Waverman, L., 1981. The measurement and
interpretation of total factor productivity unregulated industries. In
Productivity Measurement in Regulated Industries (Eds) T. Cowing and
L. Waverman. Academic Press, pp. 35-49.
Miller, R.E. & Blair, P., 1985. Input output analysis: foundations and extensions,
Englewood Cliffs, N.J.: Prentice-Hall.
Rose, A., 1984. Technological change and input output analysis: an appraisal.
Socioecon Plann Sci, 18, 305-18.
Solow, R.M., 1957. Technical change and the aggregate production function.
The Review of Economics and Statistics, 39(3), 312-320.
i With each element multiplied by 100 to comply with ABS scaling
ii With each element divided by 100 to comply with ABS scaling
iii As a “round trip” check on ABS data and computation, process the Table 9
Leontief matrix to the Leontief Inverse and compare the result with Table 10.
R-package code for this comparison is:
# load the ABS Table 9 Leontief dataset (technical coefficients) into matrix L_ABS:
L_ABS <- as.matrix(read.table("table09_L_indirect_imports.csv", header=FALSE,
    sep=",", na.strings="", strip.white=TRUE))
# load the ABS Table 10 Leontief Inverse dataset (total requirements coefficients)
# into matrix LI_ABS:
LI_ABS <- as.matrix(read.table("table10_LI_indirect_imports.csv", header=FALSE,
    sep=",", na.strings="", strip.white=TRUE))
# create the Leontief Inverse (total requirements coefficients) by using
# solve() to invert the identity matrix less A;
# note that L_ABS and LI_ABS both arrive scaled up by 100
LI_CALC <- solve(diag(nrow(L_ABS)) - L_ABS/100)
# compare the calculated Leontief Inverse with ABS Table 10:
LI_CALC - LI_ABS/100
Appendix 4 Nordhaus DICE Model
A.01  W = Σ_{t=1}^{Tmax} u[c(t), L(t)] R(t)
A.02  R(t) = (1 + ρ)^(−t)
A.03  U[c(t), L(t)] = L(t) c(t)^(1−α) / (1−α)
A.04  Q(t) = Ω(t) [1 − Λ(t)] A(t) K(t)^γ L(t)^(1−γ)
A.05  Ω(t) = 1 / [ 1 + ψ1 T_AT(t) + ψ2 T_AT(t)² ]
A.06  Λ(t) = π(t) θ1(t) μ(t)^θ2
A.07  Q(t) = C(t) + I(t)
A.08  c(t) = C(t) / L(t)
A.09  K(t) = I(t) + (1 − δK) K(t−1)
A.10  E_ind(t) = σ(t) [1 − μ(t)] A(t) K(t)^γ L(t)^(1−γ)
A.11  CCum = Σ_{t=0}^{Tmax} E_ind(t)
Where the carbon cycle and climate parameters are related by:
φ11 = 1 − φ12
φ21 = 587.473 φ12 / 1143.894
φ22 = 1 − φ21 − φ23
φ32 = 1143.894 φ23 / 18340
φ33 = 1 − φ32
ξ2 = η / t2xco2
and the exogenous drivers evolve as:
gfacpop(t) = ( e^(gpop (ord(t)−1)) − 1 ) / e^(gpop (ord(t)−1))
σ(t) = σ(t−1) / ( 1 − gσ(t) ),  gσ(t) = gσ0 e^(−dσ1·10(t−1) − dσ2·10(t−1)²)
θ1(t) = ( pback σ(t) / θ2 ) ( backrat − 1 + e^(−gback (ord(t)−1)) ) / backrat
E_land(t) = E_land(0) (1 − 0.1)^(ord(t)−1)
π(1) = 1,  π(t) = κ(t)^(1−θ2)
The reporting variables are:
ygross(t) = A(t) K(t)^γ L(t)^(1−γ)
damages(t) = ygross(t) ( 1 − Ω(t) )
ynet(t) = ygross(t) Ω(t)
abate(t) = ygross(t) Λ(t)
y(t) = Ω(t) ( 1 − Λ(t) ) ygross(t)
cpc(t) = 1000 c(t) / l(t)
ypc(t) = 1000 q(t) / l(t)
s(t) = i(t) / ( q(t) + 0.001 )
ri(t) = γ q(t) / k(t) − ( 1 − (1 − δK)^10 ) / 10
Boundary conditions:
k.lo(t) = 100
mat.lo(t) = 10
mup.lo(t) = 100
mlo.lo(t) = 1000
c.lo(t) = 20
tlo.up(t) = 20
tlo.lo(t) = −1
tat.up(t) = 20
mu.up(t) = mulimit
mu.fx(t=1) = μ0
ccum.up(t=tlast) = ccum_max
Preferences:
Emissions:
Carbon cycle:
mat(1750) = 596.4  atmospheric concentration in 1750 (GtC)
mat(0) = 808.9  atmospheric concentration in 2005 (GtC)
mup(0) = 1255  upper ocean concentration in 2005 (GtC)
mlo(0) = 18365  lower ocean concentration in 2005 (GtC)
φ11 = 0.810712
φ12 = 0.189288
φ21 = 0.097212
φ22 = 0.852787
φ23 = 0.05
φ32 = 0.003119
φ33 = 0.996881
Climate model:
tlo(0) = 0.0068  2000 lower ocean temperature change (deg C since 1900)
tat(0) = 0.7307  2000 atmospheric temperature change (deg C since 1900)
ξ1 = 0.220  parameter of the climate equation (flows per period)
ξ3 = 0.300  parameter of the climate equation (flows per period)
ξ4 = 0.050  parameter of the climate equation (flows per period)
ψ1 = 0.0000
ψ2 = 0.0028388
ψ3 = 2.00
θ2 = 2.8  control cost function exponential parameter
pback = 1.17  cost of backstop technology (2005 US$ '000 per tC)
backrat = 2  ratio of initial/final backstop cost
gback = 0.05  initial cost decline of backstop (percent per decade)
mulimit = 1  upper limit on control rate
Participation parameters:
scale1 = 194
scale2 = 381800
Other parameters:
θ1(t)  adjustment cost for backstop technology, parameter of the abatement cost function
κ(t)  participation rate = controlled fraction of emissions
      = proportion of emissions included by policy
ξ2  climate model parameter
l(t)  population and labour inputs (millions)
π(t)  participation cost markup: abatement cost with incomplete participation
      as a proportion of abatement cost with complete participation
r(t)  average utility social time preference discount factor per time period
σ(t)  ratio of uncontrolled industrial CO2-equivalent emissions to output
      (metric tons of carbon per unit of output, 2005 prices)
mlo(t)  mass of carbon in the lower ocean reservoir (GtC at beginning of period)
μ(t)  emissions control rate = proportion of uncontrolled emissions
mup(t)  mass of carbon in the upper (shallow) ocean reservoir (GtC at beginning
      of period)
Ω(t)  damage function: climate damages as a proportion of world output
q(t)  gross world product: output of goods and services net of damages and
      abatement costs (2005 US trillion dollars)
ri(t)  real interest rate
s(t)  gross savings rate as a fraction of gross world product
Variables:
Λ(t)  abatement cost function: cost of emissions reductions as a proportion of
      world output
mat(t)  mass of carbon in the atmosphere reservoir at beginning of period (GtC)
matav(t)  average atmospheric concentration (GtC)
mlo(t)  mass of carbon in the lower ocean reservoir at beginning of period (GtC)
μ(t)  emissions control rate = proportion of uncontrolled emissions
mup(t)  mass of carbon in the upper (shallow) ocean reservoir at beginning of
      period (GtC)
Ω(t)  damage function: climate damages as a proportion of world output
q(t)  gross world product: output of goods and services net of abatement and
      damages (trillions of 2005 US dollars)
A.04  Q(t) = Ω(t) [1 − Λ(t)] A(t) K(t)^γ L(t)^(1−γ)
A.07  Q(t) = C(t) + I(t)
A.09  K(t) = I(t) + (1 − δK) K(t−1)
For a particular period t, Ω(t) [1 − Λ(t)] A(t) L(t)^(1−γ) is an equilibrium settled
factor with both exogenous and endogenous components but having only a
second order feedback effect, and no effect at all if the emissions control rate
is constant. Denoting this settled factor Φ, the equations collapse to:
A.04  Q(t) = Φ K(t)^γ
A.07  Q(t) = C(t) + I(t)
A.09  K(t) = I(t)
C(t) = Φ K(t)^γ − K(t)
Maximising consumption with respect to capital gives Φ γ K(t)^(γ−1) − 1 = 0, so that:
K(t) = { Φ γ }^(1/(1−γ)) = k̄
where:
k̄ = { Φ γ }^(1/(1−γ))
C(t) = Φ k̄^γ − k̄
Q(t) = Φ k̄^γ
In that case capital, consumption and output would settle at constants and
would not give rise to changes in abatement and economic output.
However, it is not the case that the emissions control rate is a constant.
Therefore, over the intertemporal space of the projection period DICE
optimises with respect to μ(t) as the primary optimisation factor, together with
capital K(t).
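The per-period simplification can be checked numerically; in the sketch below the values of the settled factor Φ and of γ are illustrative assumptions only:
ClearAll[Φ, γ, kbar, k];
Φ = 2.0; γ = 0.3;
kbar = (Φ γ)^(1/(1 - γ));                      (* K = {Φ γ}^(1/(1-γ)) *)
{kbar, NMaximize[{Φ k^γ - k, k > 0}, k]}       (* the maximiser of C = Φ K^γ - K coincides with kbar *)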
Equation A.9
Nordhaus' implementation of his economic-climate model in the General
Algebraic Modelling System (GAMS) has a number of aberrations from the
equations presented above.
A.09 (as presented above)  K(t) = I(t) + (1 − δK) K(t−1)
A.09 (as implemented)      K(t) = I(t−1) + (1 − δK) K(t−1)
Utility function
(a) Population L(t) is used in the instantaneous utility function U(t) rather
than in the summation objective function A.01. However, this is merely a
rearrangement and has little effect on the outcome.
For example, the equations A.01 and A.03 are provided above as:
A.01  W = Σ_{t=1}^{Tmax} u[c(t), L(t)] R(t)
A.03  U[c(t), L(t)] = L(t) c(t)^(1−α) / (1−α)
whereas they are implemented as:
A.01  W = Σ_{t=1}^{Tmax} L(t) u[c(t), L(t)] R(t)
A.03  U[c(t), L(t)] = c(t)^(1−α) / (1−α)
In addition, the implemented instantaneous utility subtracts one in the numerator:
A.03  U[c(t), L(t)] = ( c(t)^(1−α) − 1 ) / (1−α)
There are two differences in the implemented model. The first difference is
that Equation A.11 uses E ind , t , whereas the model implementation uses
Et :
A.11  CCum = Σ_{t=0}^{Tmax} E(t)
A.12  E(t) = E_ind(t) + E_land(t)
While the output CCum is used only in the sense of setting a maximum
constraint, there is a major inconsistency in subjecting maximum total
emissions (of industrial and land) to the maximum conceived for industrial
emissions. Secondly, the use of CCum in further discussion and analysis is
inconsistent and likely to be highly confusing.
Equation A.13
This equation is specified as:
starttime=AbsoluteTime[];
periods = 5; (* projection periods *)
optimpenalty=0; (* optimisation return if iteration non-real *)
<<Combinatorica`
(* objective function *)
(* this program always minimises, so negative for maximisation *)
obj = {-cumu[periods]};
opt ={
(* emissions control rate, fraction of uncontrolled emissions *)
{ μ[t],0.01,0.5},
(* capital stock *) {k[t],300, 2000}
};
(* exogenous parameters *)
exogparams={
(* population 2005 millions *) pop0 → 6514,
(* population growth rate per decade *) popg → 0.35,
(* population asymptote *) popa → 8600,
(* technology growth rate per decade *) ga0 → 0.092,
(* technology depreciation rate per decade *) dela → 0.001,
(* equivalent carbon growth parameter *) gσ0 → -0.0730,
(* decline rate of decarbonisation per decade parameters*) dσ1 → 0.003,
dσ2 → 0.000,
(* backstop technology cost per tonne of carbon 2005 *) pback → 1.17,
(* backstop technology, final to inital cost ratio *) backrat → 2,
(* backstop technology, rate of decline in cost *) gback → 0.05,
(* pure rate of social time preference *) ρ → 0.015,
(* radiative forcing of non-carbon gases in 2000 & 2001 *) fex0 → -0.06,
fex1 → 0.30,
(* emissions in control regime parameters for 2005, 2015, 2205 *) κ1->1,
κ2 → 1, κ21 → 1,
(* emissions in control regime decline rate*) dκ → 0,
(* abatement cost control parameter *) θ → 2.8,
(* carbon emissions from land use 2005 *) eland0 → 11};
(* initial exogvars *)
exoginitial ={
gfacpop[1] → 0,gfacpop[0] → 0,
ga[0] → ga0,gσ[0] → gσ0,
a[1] → 0.02722, a[0] → 0.02722,
σ[1] → 0.13418, σ[0] → 0.13418,
eland[0] → eland0,
fex[0] → fex0, fex[1] → fex1,
κ[1] → 0.25372, κ[0] → 0.25372
};
(* exogenous equations *)
exogeqns ={
(* population growth factor *) gfacpop[t]==(Exp[popg*(t-1)]-
1)/Exp[popg*(t-1)],
(* population level *) l[t]== pop0*(1-gfacpop[t])+gfacpop[t]*popa,
(* productivity growth rate *) ga[t] == ga0*Exp[-dela*10*(t-1)],
(* total factor productivity *) a[t] ==a[t-1]/(1-ga[t-1]),
(* energy efficiency cumulative improvement *) gσ[t] ==gσ0*Exp[-
dσ1*10*(t-1)-dσ2*10*(t-1)^2],
(* carbon emissions output ratio *) σ[t] == σ[t-1]/(1-gσ[t]),
(* backstop technology adjusted cost *) Θ[t] ==(pback* σ[t]/
θ)*((backrat-1+Exp[-gback*(t-1)])/backrat),
(* carbon emissions from land use sources *) eland[t] ==eland0*(1-
0.1)^(t-1),
(* social time preference discount factor *) r[t] ==1/(1+ ρ)^(10*(t-1)),
(* radiative forcing of other greenhouse gases *) fex[t] ==fex0 + If[ t
≤ 12,0.36, 0.1*(fex1-fex0)*(t-1)],
(* fraction of emissions in control regime *) κ[t] ==If[ t ≥ 25, κ21,
κ21 + ( κ2- κ21)*Exp[-dκ*(t-2)]],
(* ratio of abatement cost with incomplete participation to that with
complete participation *) Π[t] == κ[t]^(1- θ)
};
(* endogenous parameters *)
endogparams={
(* elasticity of marginal utility of consumption *) α → 2.0,
(* elasticity of output with respect to capital in production function *)
γ → 0.30,
(* depreciation rate of capital *) δ → 0.1,
(* temperature forcing parameter *) η → 3.8,
(* temperature change with carbon doubling *) t2xco2 → 3,
(* damage function parameters *) ψ1 → 0, ψ2 → 0.0028388, ψ3 → 2,
(* climate equation parameters *) ξ1 → 0.22, ξ2 → η/t2xco2, ξ3 → 0.3, ξ4
→ 0.05,
(* carbon cycle parameters *) φ11 → 1- φ12a, φ12 → 0.189288, φ12a →
0.189288,
(* carbon cycle parameters *) φ21 → 587.473* φ12a/1143.894, φ22 → 1- φ21
– φ23a,
(* carbon cycle parameters *) φ23 → 0.05, φ23a → 0.05, φ32 → 1143.894*
φ23a/18340, φ33 → 1- φ32,
(* mass of carbon in atmosphere, pre-industrial *) mat1750 → 596.4,
(* μlim → 1, *)
(* ceindlim → 6000, *)
(* scaling factor *) scale1 → 194
};
(* endogenous initial *)
endoginitial = {
y[0] → 61.1, c[0] → 30,
inv[0] → 31.1,
k[1] → 137, k[0] → 137,
ceind[1] → 0, ceind[0] → 0,
Λ[1] → 0.66203, Λ[0] → 0.66203,
Ω[1] → 0.99849, Ω[0] → 0.99849,
mat[1] → 808.9, mat[0] → 808.9,
mup[1] → 1255, mup[0] → 1255,
mlo[1] → 18365, mlo[0] → 18365,
tat[1] → 0.7307, tat[0] → 0.7307,
tlo[1] → 0.0068, tlo[0] → 0.0068,
μ[1] → 0.005, μ[0] → 0.005,
cumu[1] → 381800, cumu[0] → 381800
};
(* endogenous variables *)
(* sn modifications of Nordhaus to render acyclic *)
endogeqns={
(* net present value of utility, the objective function *) cumu[t] ==
cumu[t-1]+(l[t]*u[t]*r[t]*10)/scale1,
(* utility function *) u[t] == ((c[t]/l[t])^(1- α)-1)/(1- α),
(* consumption of goods and services *) c[t] == y[t]-inv[t],
(* output of goods and services, net of abatement and damages *) y[t] ==
Ω[t]*(1- Λ[t])*ygr[t],
(* ratio of abatement to world output *) Λ[t] == Π[t] * Θ[t] * μ[t]^ θ,
(* output of goods and services, gross *) ygr[t] == a[t]* k[t]^ γ
*l[t]^(1- γ),
(* ratio of climate damages to world output *) Ω[t] == 1/(1+ ψ1*tat[t]+ ψ2*(tat[t]^ ψ3)),
(* global mean terrestrial temperature *) tat[t] == tat[t-1]+
ξ1*(for[t]- ξ2*tat[t-1]- ξ3*(tat[t-1]-tlo[t-1])),
(* global mean lower ocean temperature *) tlo[t] ==tlo[t-1]+ ξ4*(tat[t-
1]-tlo[t-1]),
(* radiative forcing total *) for[t] == η*Log[2,((mat[t]+mat[t-
1])/2)/mat1750]+fex[t],
(* mass of carbon in atmosphere *) mat[t] == eind[t] + φ11*mat[t-1] +
φ21*mup[t-1],
(* carbon emissions *) eind[t] == 10 * σ[t] *(1- μ[t]) *ygr[t]
+eland[t],
(* mass of carbon in lower oceans *) mlo[t] == φ23*mup[t-1]+ φ33*mlo[t-
1],
(* mass of carbon in upper oceans *) mup[t] == φ12*mat[t-1]+ φ22*mup[t-
1] + φ32*mlo[t-1],
(* carbon emissions cumulative *) ceind[t] == eind[t]+ceind[t-1],
(* capital stock as function of investment *) k[t] == 10*inv[t]+((1-
δ)^10)*k[t-1]
(* (* climate damages, gross *) dam[t] == ygr[t]*(1- Ω[t]),*)
(* (* savings ratio *) s[t] == inv[t]/y[t],*)
(* (* interest rate *) ri[t] == γ*y[t]/k[t] -(1-(1- δ)^10)/10,*)
(* (* consumption of goods and services, per capita *) cpc[t] ==
c[t]*1000/l[t],*)
(* (* output of goods and services, net per capita *) pcy[t] ==
y[t]*1000/l[t],*)
};
(* endogenous constraints *)
endogcons={(*
k[t] ≤ 10*inv[t]+((1- δ)^10)*k[t-1],
0.02*k[periods] ≤ inv[periods],
100 ≤ k[t],
20 ≤ c[t],
0 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t]<= 20,
tat[t] ≤ 20,
ceind[t] ≤ ceindlim,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],
0 ≤ eind[t],
0 ≤ μ[t] ≤ μlim *)
};
toponodes[eqns_]:=Module[
{eqnvars,flatvars,eqnlist,mysource,mysink,edges1,edges2,edges3,edges,vert
ices2,vertices,forwardgraph,networkflows,forwardflows,forwardedges,revise
dedges,revisedgraph,toposort,sortedequations,
sortedvertices,posfirstequation,startvertices},
eqnvars=Map[Cases[eqns[[#]],_Symbol[_Integer],Infinity]&,Range[Length[eqn
s]]];
flatvars=Union[Flatten[eqnvars]];
eqnlist=Range[Length[eqns]];
f1[a_,b_]:={a,b};
edges1=Map[f1[mysource,flatvars[[#]]]&,Range[Length[flatvars]]];
edges2=Flatten[Map[Outer[f1,eqnvars[[#]],{eqnlist[[#]]}]&,eqnlist],2];
edges3=Map[f1[eqnlist[[#]],mysink]&,eqnlist];
edges=Join[edges1,edges2,edges3];
vertices2= Join[flatvars,eqnlist];
vertices=Join[{mysource},vertices2,{mysink}] ;
forwardgraph=MakeGraph[vertices,(MemberQ[edges,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[forwardgraph],Print["*** ERROR: FORWARD GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]];
networkflows=NetworkFlow[forwardgraph,1,Length[vertices],Edge];
forwardflows=Cases[networkflows[[All,1,All]],
{x_/;x>1,y_/;y<Length[vertices]}];
forwardedges = Map[vertices[[#]]&,forwardflows];
revisededges =
Join[Complement[edges2,forwardedges],Map[Reverse,forwardedges]];
revisedgraph=MakeGraph[vertices2,(MemberQ[revisededges ,{#1,#2}])&,Type-
>Directed,VertexLabel->True];
If[!AcyclicQ[revisedgraph],Print["*** ERROR: REVISED GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"];Exit[]];
(* Print[ShowGraph[revisedgraph]];*)
toposort=TopologicalSort[revisedgraph];
(*Print[toposort];*)
sortedvertices=Cases[vertices2[[toposort]],_Symbol[_Integer],1];
sortedequations = Cases[vertices2[[toposort]],_Integer,1];
posfirstequation=Apply[Plus,First[Position[vertices2[[toposort]],_Integer
,1]]];
startvertices = vertices2[[toposort[[Range[posfirstequation-1]]]]];
(*startvertices
=vertices2[[Select[vertices,InDegree[revisedgraph,#]==0&]]];*)
Return[{sortedequations,sortedvertices, startvertices}]
];
exogtoposolver[equations_]:=Module[
{eqnorder,soleqn,solvar,outputs={},soltest1,soltest2},
eqnorder = toponodes[equations/.Equal->Subtract][[1]];
For[i=1,i<=Length[eqnorder],i++,
soleqn =equations[[eqnorder[[i]]]]//.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING EXOGENOUS CALCULATIONS A VARIABLE HAD NO
SOLUTION ***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];
exogaugmented=Join[exoginitialextended,exogtoposolver[exogextended]];
Print["The exogenous variables calculate as: ", Sort[exogaugmented]];
Print["Please note that start vertices of the endogenous equation
tolopogy have not been automatically included as optimisation variables.
This is for flexibility as you may wish to use a surrogate based on your
observation of an alternative topological sort order. So please check the
endogenous start vertices here to confirm that these variables (or your
surrogates) have been included with optimisation variables at the start:
",
endogtoponodes[[3]]
];
lenendogeqnorder=Length[endogeqnorder];
endogextendedordered= endogextended[[endogeqnorder]];
(* commence solve *)
(* objective function ... *)
endogoptimsolver[nmvars_]:=Module[
{soleqn,solvar,outputs={},soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** Warning: during optimisation ",solvar," became complex or null
in the equation ",soleqn," so the specified optimisation penalty of
",optimpenalty," was applied ***"];Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
Apply[Plus,objvar/.outputs]
]/; VectorQ[nmvars,NumberQ];
endogaugmented =
Join[endoginitialextended,endogoutputsolver[endogoptimsolution[[2]]]];
Print["The final outputs of the endogenous equations are: "
,Sort[endogaugmented]];
The following listing uses the Phase III acyclic topological processor described
in Appendix 5, Acyclic Solver for Unconstrained Optimisation.
starttime=AbsoluteTime[];
periods = 4; (* projection periods *)
optimpenalty=0; (* optimisation return if iteration non-real *)
<<Combinatorica`
(* objective function *)
(* this program always minimises, so negative for maximisation *)
obj = {-cumu[periods]};
(* initial exogvars *)
exoginitial ={
gfacpop[1] → 0,gfacpop[0] → 0,
ga[0] → ga0, gσ[0] → gσ0,
a[1] → 0.02722, a[0] → 0.02722,
σ[1] → 0.13418, σ[0] → 0.13418,
eland[0] → eland0,
fex[0] → fex0, fex[1] → fex1,
κ[1] → 0.25372, κ[0] → 0.25372
};
(* exogenous equations *)
exogeqns ={
(* total factor productivity *) a[t] ==a[t-1]/(1-ga[t-1]),
(* social time preference discount factor *) r[t] == 1/(1+ ρ)^(10*(t-1)),
(* carbon emissions from land use sources *) eland[t] == eland0*(1-
0.1)^(t-1),
(* radiative forcing of other greenhouse gases *) fex[t] == fex0 + If[ t
≤ 12, 0.36, 0.1*(fex1-fex0)*(t-1)],
(* ratio of abatement cost with incomplete participation to that with
complete participation *) Π[t] == κ[t]^(1-θ),
(* population growth factor *) gfacpop[t]==(Exp[popg*(t-1)]-
1)/Exp[popg*(t-1)],
(* population level *) l[t] == pop0*(1-gfacpop[t])+gfacpop[t]*popa,
(* productivity growth rate *) ga[t] == ga0*Exp[-dela*10*(t-1)],
(* energy efficiency cumulative improvement *) gσ[t] == gσ0*Exp[-
dσ1*10*(t-1)-dσ2*10*(t-1)^2],
(* carbon emissions output ratio *) σ[t] == σ[t-1]/(1-gσ[t]),
(* backstop technology adjusted cost *) Θ[t] ==(pback*σ[t]/θ)*((backrat-
1+Exp[-gback*(t-1)])/backrat),
(* fraction of emissions in control regime *) κ[t] ==If[t ≥ 25, κ21, κ21
+ (κ2- κ21)*Exp[-dκ*(t-2)]]
};
(* endogenous parameters *)
endogparams={
(* elasticity of marginal utility of consumption *) α →2.0,
(* elasticity of output with respect to capital in production function *)
γ → 0.30,
(* depreciation rate of capital *) δ → 0.1,
(* temperature forcing parameter *) η → 3.8,
(* temperature change with carbon doubling *) t2xco2->3,
(* damage function parameters *) ψ1 → 0, ψ2 → 0.0028388, ψ3 → 2,
(* climate equation parameters *) ξ1 → 0.22, ξ2 → η/t2xco2, ξ3 → 0.3, ξ4
→ 0.05,
(* carbon cycle parameters *) φ11 → 1- φ12a, φ12 → 0.189288, φ12a →
0.189288,
(* carbon cycle parameters *) φ21 → 587.473* φ12a/1143.894, φ22 → 1- φ21
– φ23a,
(* carbon cycle parameters *) φ23 → 0.05, φ23a → 0.05, φ32 → 1143.894*
φ23a/18340, φ33 → 1- φ32,
(* mass of carbon in atmosphere, pre-industrial *) mat1750->596.4,
(* scaling factor *) scale1 → 194
};
(* endogenous initial *)
endoginitial = {
y[0] → 61.1, c[0] → 30, inv[0] → 31.1,
ygr[1] → 55.667, ygr[0] → 55.667,
k[1] → 137, k[0] → 137,
ceind[1] → 0, ceind[0] → 0,
Λ[1] → 0.66203, Λ[0] → 0.66203,
Ω[1] → 0.99849, Ω[0] → 0.99849,
mat[1] → 808.9, mat[0] → 808.9,
mup[1] → 1255, mup[0] → 1255,
mlo[1] → 18365, mlo[0] → 18365,
tat[1] → 0.7307, tat[0] → 0.7307,
tlo[1] → 0.0068,tlo[0] → 0.0068,
μ[1] → 0.005, μ[0] → 0.005,
cumu[1] → 381800, cumu[0] →381800
};
(* endogenous variables *)
(* sn modifications of Nordhaus to render acyclic *)
endogeqns={
(* utility function *) u[t] == l[t]*((c[t] / l[t])^(1- α))/(1- α),
(* capital stock as function of investment *) k[t] == 10*inv[t]+((1-
δ)^10)*k[t-1],
(* output of goods and services, net of abatement and damages *) y[t] ==
Ω[t]*(1- Λ[t])*ygr[t],
(* output of goods and services, gross *) ygr[t] == a[t]* (k[t]^γ)
*(l[t]^(1- γ)),
(* ratio of climate damages to world output *) Ω[t] == 1/(1+ ψ1*tat[t]+
ψ2*(tat[t]^ ψ3)),
(* ratio of abatement to world output *) Λ[t] == Π[t] * Θ[t] * μ[t]^ θ,
(* consumption of goods and services *) c[t] == y[t]-inv[t],
(* carbon emissions from industrial sources *) eind[t] == 10 * σ[t] *(1-
μ[t]) *ygr[t],
(* carbon emissions from industrial sources cumulative *) (*ceind[t] ==
eind[t]+ceind[t-1],*)
(* carbon emissions total *) e[t]==eind[t]+eland[t],
(* mass of carbon in atmosphere *) mat[t] == e[t] + φ11*mat[t-1] +
φ21*mup[t-1],
(* mass of carbon in upper oceans *) mup[t] == φ12*mat[t-1]+ φ22*mup[t-1]
+ φ32*mlo[t-1],
(* mass of carbon in lower oceans *) mlo[t] == φ23*mup[t-1]+ φ33*mlo[t-
1],
(* radiative forcing total *) for[t] == η*Log[2,mat[t]/mat1750]+fex[t],
(* global mean terrestrial temperature *) tat[t] == tat[t-1]+ ξ1*(for[t]-
ξ2*tat[t-1]- ξ3*(tat[t-1]-tlo[t-1])),
(* global mean lower ocean temperature *) tlo[t] ==tlo[t-1]+ ξ4*(tat[t-
1]-tlo[t-1]),
(* net present value of utility, the objective function *) cumu[t] ==
cumu[t-1]+(u[t]*r[t]*10)/scale1
};
posteqns={
(* climate damages, gross *) dam[t] == ygr[t]*(1- Ω[t]),
(* savings ratio *) s[t] == inv[t]/y[t],
(* interest rate *) ri[t] == γ*y[t]/k[t] -(1-(1- δ)^10)/10,
(* consumption of goods and services, per capita *) cpc[t] ==
c[t]*1000/l[t],
(* output of goods and services, net per capita *) pcy[t] ==
y[t]*1000/l[t]
};
(* endogenous constraints *)
endogcons={(*
k[t] ≤ 10*inv[t] + ((1- δ)^10)*k[t-1],
0.02*k[periods] ≤ inv[periods],
100 ≤ k[t],
20 ≤ c[t],
0 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t] ≤ 20,
tat[t] ≤ 20,
ceind[t] ≤ 6000,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],
0 ≤ eind[t],
0 ≤ μ[t] ≤ 1 *)
};
exogtoposolver[equations_]:=Module[
{eqnorder,soleqn,solvar,outputs={},soltest1,soltest2},
eqnorder = toponodes[equations/.Equal->Subtract][[1]];
For[i=1,i<=Length[eqnorder],i++,
soleqn =equations[[eqnorder[[i]]]]//.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** ERROR: DURING EXOGENOUS CALCULATIONS A VARIABLE HAD NO
SOLUTION ***"];Exit[],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];
exogaugmented=Join[exoginitialextended,exogtoposolver[exogextended]];
Print["The exogenous variables calculate as: ", Sort[exogaugmented]];
optimousvars= Union[Cases[optimous//.allparams,_Symbol[_Integer]
,Infinity]];
(* commence solve *)
(* objective function ... *)
endogoptimsolver[nmvars_]:=Module[
{soleqn,solvar,outputs={},soltest1,soltest2},
For[i=1,i<=lenendogeqnorder,i++,
soleqn =endogextendedordered[[i]]/.outputs;
solvar = Cases[soleqn,_Symbol[_Integer],Infinity];
If[Length[solvar] ≠ 0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** Warning: during optimisation ",solvar," became complex or null
in the equation ",soleqn," so the specified optimisation penalty of
",optimpenalty," was applied ***"];Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#) >0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
Apply[Plus,objvar/.outputs]
]/; VectorQ[nmvars,NumberQ];
];
];
];
outputs
];
endogaugmented =
Join[endoginitialextended,endogoutputsolver[endogoptimsolution[[2]]]];
Print["The final outputs of the endogenous equations are: "
,Sort[endogaugmented]];
Nordhaus, W.D., 2007. Notes on how to run the DICE model, Available at:
http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm
[Accessed June 26, 2008].
Appendix 5 Acyclic Solver for Unconstrained
Optimisation
A5.1 Overview
In order to undertake the research in this dissertation a new flexible model for
optimising systems of nonlinear equations was developed using Mathematica.
The achievements in this model are:
• the solution algorithm: for example Nelder-Mead, differential evolution,
Brent's method or interior point, which are discussed below
• the complexity of the equations and treatment of roots: the Fundamental
Theorem of Algebra states that any non-zero polynomial of degree n always
has at least one root, which may be a real number or a conjugate pair of
complex numbers. Usually only real roots are of interest, and minimising
functions like NMinimize and FindMinimum declare an error when the
objective evaluates to a complex value. The advantage of a topological method
is that learning may be introduced through a penalty function, which
moves the optimiser away from complex roots
• starting points for the optimising variables: these may be more or less
appropriate for the optimisation
• selection of the best optimisation variables: prima facie it is tempting to
set the optimisation variables as the starting vertices of the DAG, as
suggested by the topological sort. For example, in solving a model with
the equation ygr[t] = fn(k[t]^0.3) the topological sort may suggest that
ygr[1], ygr[2], ygr[3] etc are the starting vertices. However, it can be
observed that an inverse function also exists, k[t] = ifun(ygr[t]^3.33), and
processors have only one memory bus and calculations are CPU-bound
or memory-bound
Direct search methods include Nelder & Mead (1965), genetic algorithm,
differential evolution and simulated annealing. A “simplex” of values of the
objective function is kept for each iteration of optimising variables. The data
set is interpreted in order to "roll downhill" to the optimal solution. While
tolerant of noise in the objective function and constraints, this downhill
strategy tends to converge relatively slowly, and the method is at the same
time expensive in memory because each iteration in n dimensions maintains
n+1 points. The method is sometimes called the "downhill simplex" and is
unrelated to George Dantzig's well known simplex method (Dantzig 2002).
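As a minimal illustration of selecting a direct search (the objective function and bounds here are illustrative only, not part of the model), Mathematica allows the method to be requested explicitly:
(* direct search with the Nelder-Mead downhill simplex *)
NMinimize[{(x - 1)^2 + 100*(y - x^2)^2, -5 <= x <= 5, -5 <= y <= 5},
 {x, y}, Method -> "NelderMead"]
(* Method -> "DifferentialEvolution", "SimulatedAnnealing" or "RandomSearch"
   select the other direct searches *)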
Gradient search methods for local minimisation
Line search
$q_k(p) = f(x_k) + \nabla f(x_k)^T p + \tfrac{1}{2} p^T B_k p$
where $k$ is the $k$th iterative step
and the step is $x_{k+1} = x_k + s_k$,
which is guaranteed to converge to a local minimum
if $x_k$ is sufficiently close to a local minimum.
Newton's method takes
$B_k = \nabla^2 f(x_k)$
with the step $x_{k+1} = x_k + s_k$,
and is guaranteed to converge to a local minimum
if $x_k$ is sufficiently close to a local minimum.
However, the method is valid only insofar as the Newton quadratic model
reflects the function. Where the Hessian is not explicitly known, the system of
linear equations is solved by numerical approximation:
$B_k s_k = -\nabla f(x_k)$
where $s_k$ is a trial step.
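For example (an illustrative smooth objective, not a model equation), FindMinimum can be asked to use the Newton quadratic model directly; the gradient and Hessian are derived symbolically:
(* Newton's method on a smooth two-variable function *)
FindMinimum[Cos[x^2 - 3 y] + Sin[x^2 + y^2], {{x, 1}, {y, 1}}, Method -> "Newton"]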
Line search and trust region are two methods to improve the rate of
optimisation convergence and the chance of success by controlling the sequence
of steps. The idea of a line search is to use the direction of the chosen step, but
to control the length of the step by searching along that direction for an
acceptable point.
Brent's principal axis method uses the two starting points $u_1$ and $u_2$ to
range. For example, when $\mu[t] > 1$ the following penalises the objective
Interior point
Precedents for interior point are found as early as the 1960s in the use of
barrier functions. However, the method was not formalised until Karmarkar
(1984) and most modern implementations use the Mehrotra (1992) predictor-
corrector technique. Mehrotra's interior point method generally converges in
polynomial time, which is similar to George Dantzig's simplex method
(although both can become exponential under certain conditions).
Commencing with Mathematica version 6.0 (2006), the only method for
constrained optimisation in Mathematica's FindMinimum function is interior
point. It is based on the COIN Project IPOPT optimiser. In Mathematica 5.2 and
earlier, there are no standard functions for nonlinear constrained optimisation,
although some functionality was possible with the older approach of using
penalty functions to enforce constraints.
FindMinimum requires the first and second derivatives of the objective and
constraints. The second derivative (or Hessian) permits Newton's method to be
employed, a convergence strategy that is much faster than methods using only
first-derivative, downhill information.
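A minimal sketch of a constrained call (the objective and constraints are illustrative only); when constraints are supplied in this form the interior point algorithm is selected:
(* constrained local minimisation from the starting point (1, 1) *)
FindMinimum[{x^2 + y^2, x + y >= 2, x >= 0, y >= 0}, {{x, 1}, {y, 1}}]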
One issue with interior point is that it may be unable to converge if the first
derivative at the optimal point is not continuous.
Over the last decade, large advances in nonlinear optimisation have been
achieved with the interior point method. An industrial solver IPOPT is available
as open source but the only convenient interfaces remain in AMPL and GAMS.
Minimise $f(x)$
subject to: $h(x) = 0$
for $x \ge 0$
becomes the barrier problem:
Minimise $f(x) - \mu \sum_i \ln x_i$
subject to $h(x) = 0$
where $\mu \ge 0$ is a barrier parameter. The first-order conditions are
$g(x) = \nabla f(x) - \mu X^{-1} e - A(x)^T y = 0$
$h(x) = 0$
$Z X e = \mu e$
where $X = \mathrm{diag}(x)$, $Z = \mathrm{diag}(z)$ with $z = \mu X^{-1} e$, $e$ is a vector of ones and $A(x)$ is the Jacobian of $h(x)$. The Lagrangian and its Hessian are
$L(x, y) = f(x) - h(x)^T y$
$H(x, y) = \nabla^2 L(x, y) = \nabla^2 f(x) - \sum_{i=1}^{m} y_i \nabla^2 h_i(x)$
A Newton step $(\Delta x, \Delta y, \Delta z)$ solves
$\begin{pmatrix} H(x,y) & -A(x)^T & -I \\ -A(x) & 0 & 0 \\ Z & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} = -\begin{pmatrix} \nabla f(x) - z - A(x)^T y \\ -h(x) \\ Z X e - \mu e \end{pmatrix} = -\begin{pmatrix} d \\ -d_h \\ d_{xz} \end{pmatrix}$
As
$\Delta z = -X^{-1}(Z \Delta x + d_{xz})$
then:
$(H(x,y) + X^{-1} Z)\,\Delta x - A(x)^T \Delta y = -d - X^{-1} d_{xz}$
so:
$\begin{pmatrix} H(x,y) + X^{-1} Z & -A(x)^T \\ -A(x) & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = -\begin{pmatrix} d + X^{-1} d_{xz} \\ -d_h \end{pmatrix} = -\begin{pmatrix} \nabla f(x) - A(x)^T y - \mu X^{-1} e \\ -h(x) \end{pmatrix}$
The iterates are updated as $x := x + \Delta x$, $y := y + \Delta y$, $z := z + \Delta z$, with the search direction $(\Delta x, \Delta y, \Delta z)$ given by solving this Jacobian system. With the merit function
$\phi(x, \mu) = f(x) - \mu \sum_i \ln x_i - h(x)^T y + \rho \lVert h(x)\rVert$
and
$N(x, y) = H(x, y) + X^{-1} Z$
the search direction given by solving the Jacobian is a descent direction for
the Lagrangian merit function. This means $(x, y, z)$ satisfies the Karush-
Kuhn-Tucker (KKT) condition, which is a necessary condition for nonlinear
optimality (Karush 1939; Kuhn & Tucker 1951; Miller 2000, Section 4.4.5, pp
210-9).
While the constraints remain positive, a line search can be commenced along the
initial search direction with a step of 1. A backtracking procedure is then used
until the merit function satisfies the Armijo condition.vi The accuracy condition
for convergence is
$\lVert \nabla f(x) - z - A(x)^T y \rVert + \lVert h(x) \rVert + \lVert Z X e - \mu e \rVert \le \mathrm{tol}$
Both the accuracy condition and number of iterations are critical in finding a
solution to problems with significant complexity.
Phase I Model
In the first phase of developing an acyclic solver an abstraction layer was used
for the equations with direct and simultaneous optimisation of all independent
and dependent variables. While the “blunt instrument” approach of optimising
every variable simultaneously is perfectly suitable for small problems, it is
rather naïve to believe it can scale to thousands of variables and equations.
Indeed, a high performance cluster node with 4GB RAM (Orion) runs out of
memory after just 9 periods and one with 16GB RAM (Titan) fails after 14
periods, both falling far short of the 60 period goal. Projections indicated that
increasing RAM to 64GB would only achieve one or two more periods.
(* Exogenous variables *)
equations = {
gfacpop[t] == (Exp[popg*(t - 1)] - 1)/Exp[popg*(t - 1)],
l[t] == pop0*(1 - gfacpop[t]) + gfacpop[t]*popa,
ga[t] == ga[0]*Exp[-dela*10*(t - 1)],
a[t] == If[t == 1, a[0], a[t - 1]/(1 - ga[t - 1])],
gσ[t] == gσ[0]*Exp[-dσ1*10*(t - 1) – dσ2*10*(t - 1)^2],
σ[t] == If[t == 1, σ[0], σ[t - 1]/(1 – gσ[t])],
Θ[t] == (pback* σ[t]/θ)*((backrat - 1 + Exp[-gback*(t - 1)])/backrat),
eland[t] == eland[0]*(1 - 0.1)^(t - 1),
r[t] == 1/(1 + ρ)^(10*(t - 1)),
fex[t] == fex0 + If[ t < 12, 0.1*(fex1 - fex0)*(t - 1), 0.36],
κ[t] == If[ t == 1, κ[0], If[ t ≥ 25, κ21, κ21 + ( κ2 – κ21)*Exp[-dκ*(t –
2)]]],
Π[t] == κ[t]^(1 – θ),
s[t] == sr,
(* Endogenous variables and constraints *)
ceind[t] == eind[t - 1] + ceind[t - 1],
k[t] ≤ 10*inv[t - 1] + ((1 – δ)^10)*k[t - 1],
0.02*k[periods] ≤ inv[periods],
eind[t] == 10 * σ[t] *(1 – μ[t]) *ygr[t] + eland[t],
for[t] == η*(Log[(matav[t] + 0.000001)/mat1750]/Log[2]) + fex[t],
mat[t] == eind[t - 1] + φ11*mat[t - 1] + φ21*mup[t - 1],
matav[t] == (mat[t] + mat[t + 1])/2,
mlo[t] == φ23*mup[t - 1] + φ33*mlo[t - 1],
mup[t] == φ12*mat[t - 1] + φ22*mup[t - 1] + φ32*mlo[t - 1],
tat[t] == tat[t - 1] + ξ1*(for[t] – ξ2*tat[t - 1] – ξ3*(tat[t - 1] -
tlo[t - 1])),
tlo[t] == tlo[t - 1] + ξ4*(tat[t - 1] - tlo[t - 1]),
ygr[t] == a[t]* k[t]^ γ *l[t]^(1 – γ),
dam[t] == ygr[t]*(1 - 1/(1 + ψ1*tat[t] + ψ2*(tat[t]^ ψ3))),
Λ[t] == ygr[t] * Π[t] * Θ[t] * μ[t]^ θ,
y[t] == ygr[t]*(1 – Π[t]* Θ[t]* μ[t]^ θ)/(1 + ψ1*tat[t] + ψ2*(tat[t]^
ψ3)),
s[t] == inv[t]/(0.001 + y[t]),
ri[t] == γ*y[t]/k[t] - (1 - (1 – δ)^10)/10,
c[t] == y[t] - inv[t],
(*cpc[t] == c[t]*1000/l[t],*)
(*pcy[t] == y[t]*1000/l[t],*)
u[t] == ((c[t]/l[t])^(1 – α) - 1)/(1 – α),
cumu[t] == cumu[t - 1] + (l[t]*u[t]*r[t]*10)/scale1,
100 ≤ k[t],
20 ≤ c[t],
10 ≤ mat[t],
100 ≤ mup[t],
1000 ≤ mlo[t],
-1 ≤ tlo[t] ≤ 20,
tat[t] ≤ 20,
ceind[t] ≤ ceindlim,
0 ≤ q[t],
0 ≤ inv[t],
0 ≤ ygr[t],
0 ≤ eind[t],
0 ≤ matav[t],
0 ≤ μ[t] <= μlim
};
Phase II model
The second phase of the acyclic modeller used symbolic recursion of the
equations as functions and direct optimisation of the resultant independent
variables. A recursed approach is far more elegant than using NMinimize (or
FindMinimum) as a blunt instrument for solving thousands of equations and
variables.
Recursion is not an abstraction structure. Instead it directly employs the
equations as active functions that form an auto-topology. This means the
optimising function need only solve for the independent variables, which can
either be specified exogenously or calculated automatically by Mathematica
using symbolic algebra.
The equations are given as functions, with scalars having a memory function,
as shown in the exogenous equations. Endogenous functions (model equations)
are each optimised and so cannot have a memory function in the same way as
scalars. Starting values are associated with each function as limit values of
the function:
ri[t_] := γ*y[t]/k[t] - (1 - (1 – δ)^10)/10;
c[t_] := y[t] - inv[t]; c[0] = 30;
cpc[t_] := c[t]*1000/l[t];
pcy[t_] := y[t]*1000/l[t];
u[t_] := ((c[t]/l[t])^(1 – α) - 1)/(1 – α);
cumu[t_] := cumu[t - 1] + (l[t]*u[t]*r[t]*10)/scale1; cumu[1] = cumu[0] =
cumu0;
μ[1] = μ[0] = μ0;
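The memory function for an exogenous scalar series is the standard Mathematica memoisation idiom. A minimal sketch, using the productivity recurrence from the model (the parameter symbols ga0, dela and a0 stand for numeric values supplied elsewhere):
(* memoised exogenous series: each a[t] is computed once and then stored *)
ga[t_] := ga0*Exp[-dela*10*(t - 1)];
a[t_] := a[t] = a[t - 1]/(1 - ga[t - 1]); a[1] = a0;
(* evaluating a[20] walks the recurrence once and caches every intermediate value *)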
Comparison statistics for recursed and precalculated scalars are:
customised to avoid complex numbers, selecting only the real part of
complex numbers and introducing additional constraints as necessary
Graph theory formally commenced with Euler's (1736) solution of the puzzling
Königsberg Bridge Problem. A directed graph, or "digraph", is one in which
each graph edge is directional between two vertices. If there are no internal
cycles in the graph, that is, no directed path by which one can return to the
start vertex, the graph is known as a directed acyclic graph or "DAG" (Weisstein
2008).
Each vertex has a number of directed edges arriving and a number of directed
edges leaving. These are called "degrees" or "valencies". The number of
directed edges arriving is the indegree and the number leaving is the outdegree.
A vertex with an indegree of zero and any non-zero outdegree is one of the DAG's
start vertices, analogous to a leaf of a tree.
The illustrations below show that the topological sorts produce quite complex
directed acyclic graphs for even three periods:
Illustration 39: Exogenous equations
Illustration 40: Endogenous equations
Circular references can sometimes be solved by intensive iteration.
Nevertheless, it cannot be guaranteed that the output is indeed the same
solution that would be achieved if the equations were better structured.
In graph theory, circular references are referred to as cycles and a graph with
cycles as cyclic. A DAG cannot have any internal cycles, and graphs can
be topologically processed only if they are DAGs. While cycles can be removed
with graphical techniques, the structure of the underlying system of equations
means that it is better to resolve any circular references manually.
Using the new topological model it has been possible to check for the DAG
property in Nordhaus' equations and to rationalise them where necessary to
render the model acyclic. This has also facilitated the removal of constraints,
which are very expensive in computing time.
It may be seen in the program code that the topological presolver requires two
directed graphs, a network flow analysis and a topological sort. The technique
has been investigated since Dinic (1970) developed an algorithm for maximum
flow in a network.
• create a vertex for each variable, each equation, the source and the sink
• add an edge from the source to each equation
• add an edge from each equation to each variable it contains
• add an edge from each variable to the sink
• assign unit weight to each edge
• find the maximum network flow using Dinic's breadth-first search to
determine a path from the source to the sink. If such a path exists, reverse
each edge in the path and repeat this step
• topologically sort the causally assigned dependency graph using a
double depth-first search to produce a topologically sorted list of
strongly-connected components and sets of internal cycles where
equations have circular dependencies
• solve for each variable using the topological sort order. Where a circular
dependency exists, the equations are solved simultaneously rather than
sequentially
• proceeding by topological sequence, solve each equation for the
dependent variable implicit within it, substituting each newly determined
variable as it occurs in all succeeding equations.
Only acyclic graphs (that is, graphs with no internal cycles) may be
topologically sorted. Therefore the researcher needs to manually edit
equations having internal cycles to eliminate these circular references. This is
an accepted procedure for those familiar with spreadsheets.
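A minimal sketch of the idea on a toy system (not the model equations): once the equations are in topological order, each one is solved for its single unknown with NSolve and the result is substituted into every succeeding equation:
(* toy acyclic system already in topological order: x feeds y, y feeds z *)
eqns = {y == 2 x + 1, z == y^2 - 3};
known = {x -> 1.5};
Fold[Join[#1, First[NSolve[#2 /. #1]]] &, known, eqns]
(* gives {x -> 1.5, y -> 4., z -> 13.} *)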
Learning
A topological model has the major advantage of being able to observe the
status of each intermediate variable during the evaluation of the objective
function. A penalty function can be used to return a real value when a complex
number is encountered.
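A minimal sketch of such a penalty (the penalty value itself is arbitrary), using the same FreeQ test as the solver code in this appendix:
(* return a large real penalty whenever an intermediate value is complex or non-numeric *)
optimpenalty = 10.^6;
safeValue[v_] := If[NumericQ[v] && FreeQ[v, Complex], v, optimpenalty];
safeValue[Sqrt[-2.]] (* complex, so the penalty is returned *)
safeValue[Sqrt[2.]]  (* real, so the value itself is returned *)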
Reduce and FindInstance allow domains to be controlled. For example, a root
can be requested in the domain of Reals. These functions fail if no real root
actually exists.
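For example (illustrative equations only):
Reduce[x^3 - 2 x + 4 == 0, x, Reals]   (* returns the single real root x == -2 *)
FindInstance[x^2 + 1 == 0, x, Reals]   (* returns {} because no real root exists *)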
BeginPackage["Topofunctions`",{"Combinatorica`"}]
toponodes::usage = "toponodes provides sequence of nodes."
optimsolver::usage="optimsolver solves systems of equations."
outputsolver::usage="outputsolver performs backsubstitution."
Begin["`Private`"]
toponodes[eqns_]:=
Module[{eqnvars,eqnvarsninvt,invt,flatvars,eqnlist,mysource,mysink,edges1
,edges2,
edges3,edges,vertices2,vertices,forwardgraph,networkflows,forwardflows,
forwardedges,revisededges,revisedgraph,toposort,sortedequations,
sortedvertices, posfirstequation,startvertices,f1},
eqnvars=Map[Cases[eqns[[#]],x_Symbol[_Integer..],Infinity]&,Range[Length[
eqns]]];
(*Print[eqnvars];*)
flatvars=Union[Flatten[eqnvars]];
eqnlist=Range[Length[eqns]];
f1[a_,b_]:={a,b};
edges1=Map[f1[mysource,flatvars[[#]]]&,Range[Length[flatvars]]];
edges2=Flatten[Map[Outer[f1,eqnvars[[#]],{eqnlist[[#]]}]&,eqnlist],2];
edges3=Map[f1[eqnlist[[#]],mysink]&,eqnlist];
edges=Join[edges1,edges2,edges3];
vertices2= Join[flatvars,eqnlist];
vertices=Join[{mysource},vertices2,{mysink}] ;
(*Print[vertices];*)
forwardgraph = MakeGraph[vertices, (MemberQ[edges,{#1,#2}])&, Type →
Directed, VertexLabel → True];
(* ShowGraph[forwardgraph]; *)
If[!AcyclicQ[forwardgraph],Print["*** ERROR: FORWARD GRAPH IS NOT ACYCLIC
SO CHECK THE EQUATIONS ***"]];
networkflows=NetworkFlow[forwardgraph,1,Length[vertices],Edge];
forwardflows=Cases[networkflows[[All,1,All]],
{x_/;x>1,y_/;y<Length[vertices]}];
forwardedges = Map[vertices[[#]]&,forwardflows];
revisededges =
Join[Complement[edges2,forwardedges],Map[Reverse,forwardedges]];
revisedgraph= MakeGraph[vertices2, (MemberQ[revisededges ,{#1,#2}])&,
Type → Directed, VertexLabel->True];
(*ShowGraph[revisedgraph]*)
If[!AcyclicQ[revisedgraph], Print["*** ERROR: REVISED GRAPH IS NOT
ACYCLIC SO CHECK THE EQUATIONS ***"]; (*Print[ShowGraph[revisedgraph]];*)
Return[{{},{},{}}]];
(*ShowGraph[revisedgraph];*)
toposort=TopologicalSort[revisedgraph];
sortedvertices=Cases[vertices2[[toposort]],x_Symbol[_Integer..],1];
(*Print[vertices2[[toposort]]];*)
sortedequations = Cases[vertices2[[toposort]],_Integer,1];
posfirstequation=Apply[Plus,First[Position[vertices2[[toposort]],_Integer
,1]]];
startvertices = vertices2[[toposort[[Range[posfirstequation-1]]]]];
(*startvertices
=vertices2[[Select[vertices,InDegree[revisedgraph,#]==0&]]];*)
Return[{sortedequations,sortedvertices, startvertices}]
];
optimsolver[nmvars_,objtopo_,eqnordered_,leneqnorder_,optimpenalty_]:=
Module[{soleqn,solvar,outputs={},soltest1,soltest2,optimout},
For[i=1,i<=leneqnorder,i++,
soleqn =eqnordered[[i]]/.outputs;
solvar = Cases[soleqn,x_Symbol[_Integer..],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0,
Print["*** infomessage: optimpenalty applied with ",soleqn," ***"];
Return[optimpenalty],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
optimout=objtopo/.outputs;
If[NumericQ[optimout],Return[optimout]]
];
Return[optimout]
]/; VectorQ[nmvars,NumberQ];
outputsolver[nmvars_,eqnordered_,leneqnorder_]:=
Module[ {soleqn,solvar,outputs=nmvars, soltest1,soltest2},
For[i=1,i<=leneqnorder,i++,
soleqn =eqnordered[[i]]/.outputs;
solvar = Cases[soleqn,x_Symbol[_Integer..],Infinity];
If[Length[solvar]!=0,
soltest1 =Select[Chop[NSolve[soleqn,solvar]],(FreeQ[solvar/.#,Complex] )
&];
If[Length[soltest1]==0, Print["*** ERROR: DURING BACKSUBSTITUTION A
VARIABLE HAD NO SOLUTION ***"];Return[{}],
soltest2 = Select[soltest1,(solvar/.#)>0 &];
If[Length[soltest2]==0,
outputs=Join[outputs,First[Sort[soltest1,solvar/.# &]]],
outputs=Join[outputs,Last[Sort[soltest2,solvar/.# &]]]
];
];
];
];
outputs
];
End[]
EndPackage[]
Constrained μ
Firstly, the optimised value of the objective function, cumulative social welfare,
differs significantly between the methods:
GAMS/CONOPT: 150,240
Mathematica constrained: 212,611
Mathematica unconstrained: 212,614
Endogenous variables are the variables determined within the model, whether
directly or indirectly dependent upon the optimising variables. These variables
illuminate the environmental, economic and technological ecosystem and
provide the rich meaning of the model. The effect of the differences between the
GAMS/CONOPT and Mathematica optimisation approaches on the intermediate
variables is illustrated below:
Unconstrained μ
temperature reduction, net output of goods and services (that is, net of
abatement costs and damages) and social welfare.
Mathematica radiative forcing drops quickly after 20 decades and reaches 1900
levels by 30 decades. In contrast, GAMS/CONOPT forcing declines slowly.
Mathematica abatement costs are marginally higher than GAMS/CONOPT.
As with radiative forcing, the remodelled global mean temperature falls quickly
after 20 decades. Both models show a maximum surface temperature rise of
almost 3.5°C.
Nordhaus' sustained radiative forcing and terrestrial temperatures drive the
lower ocean temperature to the significantly greater level of 2.4°C compared to
the remodelled 2.0°C.
Euler, L., 1736. Solutio problematis ad geometriam situs pertinentis (solution
of a problem relating to the geometry of position). Commentarii
Academiae Scientiarum Imperialis Petropolitanae, 8.
Nelder, J.A. & Mead, R., 1965. A simplex method for function minimization.
Comp. J., 7, 308-313.
Xu, W., 2005. The design and implementation of the µModelica compiler.
School of Computer Science, McGill University, Montreal, Canada.
i The topological model calculations were completed on UTS' Orion high
performance cluster of 16 nodes running Red Hat Enterprise Linux 5 (64bit)
with the following specifications: 2.93GHz 4MB Cache X6800 Core 2 Extreme
(dual core) with 1066MHz FSB, 4GB 667MHz DDR2-RAM, 2x 80GB 7,200 RPM
SATA II Hard Drives (RAID 0). Calculations for other memory-intensive models
were completed on UTS' Titan cluster of 8 nodes running Red Hat Enterprise
Linux 5 (64bit) with the following specifications: 2 x 3.16GHz 2x6MB Cache
Xeon X5460 (quad core) with 1333MHz FSB, 16GB 667MHz DDR2-RAM, 2 x
300GB 15,000 RPM SAS Hard Drive (Raid 0)
ii http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationI
ntroduction.html#509267359 © 2008 Wolfram Research, Inc.
iii http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationL
ineSearchMethods.html © 2008 Wolfram Research, Inc.
iv http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationP
rincipalAxisMethod.html © 2008 Wolfram Research, Inc.
v http://reference.wolfram.com/mathematica/tutorial/ConstrainedOptimizationLoc
alNumerical.html#85183321 © 2008 Wolfram Research, Inc.
vi http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationL
ineSearchMethods.html © 2008 Wolfram Research, Inc.
Appendix 6 Benchmarking with Linear
Programming
This technique assumes that at least some of the production units are
successfully maximising efficiency, while others may not be doing so. Implicitly,
the method creates a best virtual proxy on the efficient frontier for each real
producer. By computing the distance of these latter units from their best
virtual proxy frontier and partitioning inefficiency among the inputs, strategies
are suggested to make the sub-optimally performing production units more
efficient.
DEA Advantages
The main advantages of DEA derive from its ability to reveal sensitivity data
and returns to scale that are not evident in PCA. For example, an input
minimising formulation provides additional information for each production
unit in direct relation to its peers on theta (θ) and iota (ι). Theta is the
proportion of inefficiency that could be eliminated by the proportional
reduction in inputs in order to obtain the projected input values. Iota ( ι) is the
total amount of inefficiency, equal to the total weighted distance between
observed and projected points standardised by inputs.
Maximise: aggregate outputs divided by aggregate inputs for each production unit n:
$E_n = \dfrac{\sum_r u_r\, y_{rn}}{\sum_i v_i\, x_{in}}$
where:
$E_n$ = efficiency of production unit n
n = index of the production unit under evaluation
by varying:
$u_r$ = weight, shadow price or coefficient of output r that maximises $E_n$
$v_i$ = weight, shadow price or coefficient of input i that maximises $E_n$
where $u_r, v_i \ge 0$
Constraint: subject to the same ratio for the other units not exceeding unity
(which is the maximum efficiency):
$\dfrac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \le 1$
where:
$y_{rj}$ = output r of production unit j
$x_{ij}$ = input i of production unit j
j = index of production units, which ranges from 1 to n
r = index of outputs, which ranges from 1 to m (the number of outputs)
i = index of inputs, which ranges from 1 to s (the number of inputs)
Charnes, Cooper & Rhodes (1978) observed that Farrell's non-linear and
computationally complex objective function could be converted from its
fractional form into an ordinary linear programming problem. Their model
assumed constant returns to scale, such that production can be increased or
decreased without affecting efficiency. This work led to the widespread uptake
of DEA. The seminal textbook on DEA is now Cooper, Seiford & Tone (2007).
The key assumptions in DEA are: at least some of the production units are
successfully maximising efficiency, while others may not be doing so; the best
producers can be used as a virtual proxy for the efficient frontier for each real
producer; inefficiency can be partitioned among the inputs based on the
distances; strategies are suggested to make the production units more
efficient; returns to scale are constant such that production can be increased
or decreased without affecting efficiency.i
Charnes, Cooper & Rhodes (1978, p.429) suggest that the usefulness of DEA
analysis is enhanced by the fact that inputs need only be ordinal amounts, for
example, psychometric or management performance factors. This allows the
inefficiency analysis to be examined with various partitions of inputs, which is
highly fertile ground for new management strategies.
Leibenstein & Maital (1992) suggest other advantages accrue because there is
no restriction on the form of the production function and it does not need to be
fully specified for the analysis to be successful; it is unbiased in that there is a
priori no priority given to any input or output over another; the technology can
be analysed to see if the production function should be forced through the
origin to model constant returns to scale (A. Charnes et al. 1978) or allowed to
exhibit variable returns to scale by not passing through the origin (Banker et
al. 1984); and organisations can be readily studied even if their inputs and
outputs are not subject to the market.
DEA disadvantages
Various authors note that DEA is less suited to a small number of production
units (William W. Cooper et al. 2007);ii DEA shows only relative inefficiency
rather than the potential for all production units (including those with best
practise) to perform much better; DEA uses extreme points of efficiency as
benchmarks but it's peers may be unable to emulate this for various reasons;
and a small change to one of the best practise units can lead to large changes
in analysis (William W. Cooper et al. 2007; Ahn & L. M. Seiford 1992;
Leibenstein & Maital 1992).
DEA returns to scale
The simplest assumption in using DEA is that returns to scale are constant, as
formulated in the illustration below. This means that production can rise or fall
with the same mix of inputs. Therefore all apparent inefficiencies are due to
management practices.
$\min_{w_1, \ldots, w_N,\, E_n} E_n$
Subject to:
$\sum_{j=1}^{N} w_j\, y_{ij} - y_{in} \ge 0 \qquad i = 1, \ldots, I$
$\sum_{j=1}^{N} w_j\, x_{kj} - E_n\, x_{kn} \le 0 \qquad k = 1, \ldots, K$
$w_j \ge 0 \qquad j = 1, \ldots, N$
where:
N = number of organisations
I = number of different outputs $y_{in}$
K = number of different inputs $x_{kn}$
$w_j$ = weights applied across the N organisations
$E_n$ = efficiency score of the nth organisation
Illustration 41: DEA Constant Returns to Scale Formulation ($E_n$)
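A minimal numeric sketch of this formulation (three hypothetical organisations with one input and one output; the data are invented for illustration):
(* DEA constant returns to scale: efficiency of organisation 3 *)
xdata = {2., 4., 5.}; ydata = {3., 5., 4.};
ws = {w1, w2, w3};
Minimize[{en,
  ws.ydata - ydata[[3]] >= 0,      (* proxy output at least that of unit 3 *)
  ws.xdata - en*xdata[[3]] <= 0,   (* proxy input at most en times unit 3's input *)
  w1 >= 0, w2 >= 0, w3 >= 0},
 {w1, w2, w3, en}]
(* the efficiency score en of unit 3 is about 0.53 relative to its peers *)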
Some production plants are constrained by being too small and are therefore
inefficient. In other cases a production plant can be far too large for its current
throughput and so can increase production without adding capacity. In the
business world, there is a remorseless endeavour to introduce flexibility into
production functions. Mergers, takeovers and rationalisation tend to resolve
situations where returns to scale are permanently mismatched and not tuned
into a relatively constant band of operation. The marginal production function
in DEA may be adjusted for variable rather than constant returns to scale. The
constant returns to scale formulation is reformulated with an additional
constraint that the weights $w_j$ must sum to 1. This fits a tighter frontier to
the data. The following linear programming problem is used for variable returns
to scale:
For variable returns to scale:
$\min_{w_1, \ldots, w_N,\, S_n} S_n$
Subject to:
$\sum_{j=1}^{N} w_j\, y_{ij} - y_{in} \ge 0 \qquad i = 1, \ldots, I$
$\sum_{j=1}^{N} w_j\, x_{kj} - S_n\, x_{kn} \le 0 \qquad k = 1, \ldots, K$
$\sum_{j=1}^{N} w_j = 1$
$w_j \ge 0 \qquad j = 1, \ldots, N$
where N = number of organisations, I = number of different outputs $y_{in}$, K = number of different inputs $x_{kn}$, $w_j$ = organisation weights and $S_n$ = efficiency of the nth organisation.
Illustration 42: DEA variable returns to scale (S)
For non-increasing returns to scale the weight constraint becomes an inequality:
$\min_{w_1, \ldots, w_N,\, R_n} R_n$
Subject to:
$\sum_{j=1}^{N} w_j\, y_{ij} - y_{in} \ge 0 \qquad i = 1, \ldots, I$
$\sum_{j=1}^{N} w_j\, x_{kj} - R_n\, x_{kn} \le 0 \qquad k = 1, \ldots, K$
$\sum_{j=1}^{N} w_j \le 1$
$w_j \ge 0 \qquad j = 1, \ldots, N$
where N, I, K and $w_j$ are as above and $R_n$ = efficiency of the nth organisation.
Illustration 43: DEA non-increasing returns to scale (R)
A6.2 Linear programming
From first principles, it can be shown that the monetary output of the economy
is the price vector p multiplied by the quantity y of commodities (ten
Raa 2005). Therefore, an economy seeking to maximise welfare measured as
consumption will maximise p y . However, this maximisation will be subject
to constraints of labour and capital, and perhaps energy and pollution.
Max p y : A x y ≤ x , k x ≤ M , l x ≤ N , x ≥ 0
Mathematica implements this with DualLinearProgramming, returning a vector
of x-values, shadow prices, lower bound and upper bound slacks:
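A minimal sketch of the call on a toy problem (the data are illustrative; note that DualLinearProgramming works in the canonical form of minimising c.x subject to m.x ≥ b with x ≥ 0, so a maximisation such as the one above is passed by negating the objective):
(* minimise 2 x1 + 3 x2 subject to x1 + x2 >= 4 and x1 + 3 x2 >= 6 *)
{x, y, z, w} = DualLinearProgramming[{2, 3}, {{1, 1}, {1, 3}}, {4, 6}]
(* x is the primal solution {3, 1}; y holds the shadow prices of the two
   constraints; z and w relate to the slacks on the lower and upper variable bounds *)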
If both problems are feasible, the solution is the same and two equations apply:
$(A_2 x - b_2)^T y_2 = 0$
$(l - x)^T z = (u - x)^T w = 0$
$\text{Max } p \cdot y : \; A x + y \le x,\; k x \le M,\; l x \le N,\; x \ge 0$
In matrix form the constraints become
$\begin{pmatrix} A - I & I \\ k & 0 \\ l & 0 \\ -I & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \le \begin{pmatrix} 0 \\ M \\ N \\ 0 \end{pmatrix} \quad\text{or}\quad C \begin{pmatrix} x \\ y \end{pmatrix} \le \begin{pmatrix} 0 \\ M \\ N \\ 0 \end{pmatrix}$
and the objective
$\begin{pmatrix} 0 & p \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad\text{or}\quad a \begin{pmatrix} x \\ y \end{pmatrix}$
Using these conventions, the specification of the linear program becomes:
$\text{Max } a \begin{pmatrix} x \\ y \end{pmatrix} : \; C \begin{pmatrix} x \\ y \end{pmatrix} \le \begin{pmatrix} 0 \\ M \\ N \\ 0 \end{pmatrix}$
with dual variables $\begin{pmatrix} p & r & \omega & \sigma \end{pmatrix}$
Where:
p = commodity price
r = rate for rental of capital
$\omega$ = wage rate
$\sigma$ = the slack (the multiplier on the non-negativity constraint)
The dual constraint is
$\begin{pmatrix} p & r & \omega & \sigma \end{pmatrix} \begin{pmatrix} A - I & I \\ k & 0 \\ l & 0 \\ -I & 0 \end{pmatrix} = \begin{pmatrix} 0 & p \end{pmatrix}$
Equation — Meaning
$p = p$ — shadow prices are the same as real world prices
$p = p A + r k + \omega l - \sigma$ — shadow prices are the aggregate of factor input prices
Now, the primal and the dual solutions are linked by the Main Theorem of
Linear Programming, $a \binom{x}{y} = \lambda b$, so:
$\begin{pmatrix} 0 & p \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} p & r & \omega & \sigma \end{pmatrix} \begin{pmatrix} 0 \\ M \\ N \\ 0 \end{pmatrix}$
$p \cdot y = r M + \omega N$
or National Income = National Product
A Make table V lists all the commodity outputs per production unit. It is called
a "pure Make table" if there is just one commodity per production unit and
every commodity is produced by a production unit. The Australian and GTAP
input output tables are prepared on this basis.
The difference V−U provides the net output of each commodity. In commodity
terms:
$V^T \cdot s = U \cdot s + Y + G + E - M$
or
$(V^T - U) \cdot s = Y + G + E - M$
$A \cdot x + Y + G + E - M = x$
or
$(1 - A) \cdot x = Y + G + E - M$
$(V^T - U) \cdot s = (1 - A) \cdot x$
and since $x = V^T \cdot s$, the U, V and A matrices are related by the
equations:
$U = A \cdot V^T$, or $U \cdot (V^T)^{-1} = A$
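A minimal numeric check of this relationship (a hypothetical 2×2 Use and Make table, not GTAP data):
(* U: intermediate use; V: make (outputs) *)
u = {{4, 1}, {2, 3}};
v = {{10, 1/2}, {1, 8}};
a = u.Inverse[Transpose[v]];   (* technical coefficients A = U.(V^T)^-1 *)
u == a.Transpose[v]            (* True: recovers U = A.V^T *)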
The substitution between production sectors depends upon the price of the
inputs, which is the assumption of the Transcendental Production function.
Also, the price of inputs responds to microeconomic supply & demand.
Maximise Y
subject to the constraints:
Material balance: $(U - V^T) \cdot s + Y + G + E - M \le 0$
Labour endowment: $l \cdot s \le N$
Capital endowment: $k \cdot s \le K$
The dual of the linear program provides Lagrange multipliers and resource
slacks. As we have seen in Chapter 4 Economic models for climate change
policy analysis, the Lagrange multipliers represent the shadow prices
associated with the constraints, which are also the factor productivities.
The sorts of questions ten Raa has addressed with the UV technique are:
• How much can the level of final demand be raised if the economy is
made more efficient?
• What is the comparative advantage of the economy and best
composition of imports and exports?
• Is structural/technical/efficiency change or business cycle change
responsible for a rise in standard of living?
• Are competition and performance positively or negatively related?
• What is the increase in commodity prices with a new tax?
• What is the increase in employment if government expenditures
increase?
• What are the engines of growth in an economy, when productivity spills-
over to other industries?
• Can services increase productivity? Have increases in manufacturing
productivity been due to eliminating (outsourcing) low productivity
service activities?
$(V^T - U) \cdot e = a + \begin{pmatrix} d \\ 0 \end{pmatrix}$
Where the value of net exports is the negative of the trade deficit, $p \cdot d \ge -D$,
and domestic demand includes investment, which in competitive economies is
the Net Present Value of future consumption (Weitzman 1976).
$\text{Max } e^T a\, c : \; (V^T - U) \cdot s \ge a\, c + \begin{pmatrix} z \\ 0 \end{pmatrix},\; K s \le M,\; L s \le N,\; p \cdot z \ge -D,\; s \ge 0$
Where the general primal-dual pair is
$\text{Max } a \cdot x : \; C x \le b$
$\text{Min } \lambda \cdot b : \; \lambda C = a,\; \lambda \ge 0$
the primal in block form, with decision variables s, c and z, is
$\text{Max } \begin{pmatrix} 0 & e^T a & 0 \end{pmatrix} \begin{pmatrix} s \\ c \\ z \end{pmatrix} : \quad \begin{pmatrix} U - V^T & a & \binom{I}{0} \\ K & 0 & 0 \\ L & 0 & 0 \\ 0 & 0 & -p \\ -I & 0 & 0 \end{pmatrix} \begin{pmatrix} s \\ c \\ z \end{pmatrix} \le \begin{pmatrix} 0 \\ M \\ N \\ D \\ 0 \end{pmatrix}$
and the dual, with multipliers $\begin{pmatrix} p & r & \omega & \varepsilon & \sigma \end{pmatrix}$, is
$\text{Min } \begin{pmatrix} p & r & \omega & \varepsilon & \sigma \end{pmatrix} \begin{pmatrix} 0 \\ M \\ N \\ D \\ 0 \end{pmatrix} : \quad \begin{pmatrix} p & r & \omega & \varepsilon & \sigma \end{pmatrix} \begin{pmatrix} U - V^T & a & \binom{I}{0} \\ K & 0 & 0 \\ L & 0 & 0 \\ 0 & 0 & -p \\ -I & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & e^T a & 0 \end{pmatrix}$
The dual reduces to:
$\text{Min}_{r,\, \omega,\, \varepsilon \ge 0} \; r M + \omega N + \varepsilon D : \quad p (V^T - U) \le r K + \omega L,\; p \cdot a = e^T a,\; p_T = \varepsilon p$
Where r is the rental rate of capital, $\omega$ the wage rate, $\varepsilon$ the multiplier on
the trade deficit constraint and $p_T$ the shadow prices of the tradeable
commodities.
Where two countries trade, the material balances of the two economies need to
be jointly balanced. There is only one level of imports and one international
shadow price for each traded commodity that satisfies the pooled material
balance, notwithstanding the direction of trade.
Secondly, the net exports for each country need to be controlled so that the
pooled material balance does not run away in favour of one country due to
better terms of trade as exports increase. This would lead to final demand in
one economy being maximised in the presence of massive production, while
production sectors in the other economy are shut down (with demand satisfied
by imports).
$\text{Max } c : \quad \begin{pmatrix} U_1 - V_1^T & a_1 & 0 & \binom{I}{0} \\ K_1 & 0 & 0 & 0 \\ L_1 & 0 & 0 & 0 \\ -I & 0 & 0 & 0 \\ 0 & a_2 & U_2 - V_2^T & \binom{-I}{0} \\ 0 & 0 & K_2 & 0 \\ 0 & 0 & L_2 & 0 \\ 0 & 0 & -I & 0 \end{pmatrix} \begin{pmatrix} s_1 \\ c \\ s_2 \\ z \end{pmatrix} \le \begin{pmatrix} 0 \\ M_1 \\ N_1 \\ 0 \\ 0 \\ M_2 \\ N_2 \\ 0 \end{pmatrix}$
$\text{Min } \begin{pmatrix} p_1 & r_1 & \omega_1 & \sigma_1 & p_2 & r_2 & \omega_2 & \sigma_2 \end{pmatrix} \begin{pmatrix} 0 \\ M_1 \\ N_1 \\ 0 \\ 0 \\ M_2 \\ N_2 \\ 0 \end{pmatrix} : \quad \begin{pmatrix} p_1 & r_1 & \omega_1 & \sigma_1 & p_2 & r_2 & \omega_2 & \sigma_2 \end{pmatrix} \begin{pmatrix} U_1 - V_1^T & a_1 & 0 & \binom{I}{0} \\ K_1 & 0 & 0 & 0 \\ L_1 & 0 & 0 & 0 \\ -I & 0 & 0 & 0 \\ 0 & a_2 & U_2 - V_2^T & \binom{-I}{0} \\ 0 & 0 & K_2 & 0 \\ 0 & 0 & L_2 & 0 \\ 0 & 0 & -I & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}$
pT⋅z = pT⋅d
MRIO formulation
$\begin{pmatrix} a_1 & 0 & U_1 - V_1^T & 0 & \mathrm{Rect1} \\ 0 & a_2 & 0 & U_2 - V_2^T & \mathrm{Rect1} \\ 0 & 0 & K_1 & 0 & 0 \\ 0 & 0 & 0 & K_2 & 0 \\ 0 & 0 & L_1 & 0 & 0 \\ 0 & 0 & 0 & L_2 & 0 \\ 0 & 0 & 0 & 0 & \mathrm{Rect2} \\ 0 & 0 & 0 & 0 & \mathrm{Square} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ s_1 \\ s_2 \\ z \end{pmatrix} \;\; \begin{matrix} \le 0 \\ \le 0 \\ \le M_1 \\ \le M_2 \\ \le N_1 \\ \le N_2 \\ \le E \\ = 0 \end{matrix}$
Where Rect1 is a matrix with rows equal to the number of commodities and
columns equal to countries × countries × commodities. The matrix expresses that
each commodity can be exported to the same commodity line of another country
(and indeed to itself, although this is constrained to zero in the trade
equivalences matrix).
The columns of the trade matrices are indexed as (exporting country, importing
country, commodities), for example:
exporting → country 1 | country 1 | country 2 | country 2
importing → country 1 | country 2 | country 1 | country 2
(commodities within each block)
with blocks of −1 entries linking each traded commodity to the corresponding
commodity row and zeros elsewhere. Continuing, the net export variables are
aggregated by country:
$\begin{pmatrix} z_{cou1,cou1,com} \\ z_{cou1,cou2,com} \\ z_{cou2,cou1,com} \\ z_{cou2,cou2,com} \end{pmatrix} \le \begin{pmatrix} \text{Total Net Exports}_{cou1} \\ \text{Total Net Exports}_{cou2} \end{pmatrix}$
Iterating through {cou, cou, commodities}, with the last dimension changing the
most frequently, creates a new line in the z-equivalence matrix with each
iteration …
$\begin{pmatrix} z_{cou1,cou1,com} \\ z_{cou1,cou2,com} \\ z_{cou2,cou1,com} \\ z_{cou2,cou2,com} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$
where:
cou = number of countries
com = number of commodities
However, this dissertation implements pollution as a “good” rather than a
“bad”.
Assuming that the trade vector Z is part of the consumption vector Y, the
"stocks" equation is that production $V * s$ equals uses $U * s$ plus consumption
Y, where each of production and uses is convoluted with the level of activity in
each time period:
$V * s = U * s + Y$
∂ V ∗s = U∗∂ s ∂ Y
Adjusting for zero elements in the convolution the flow equation becomes:
s t − ⋅V ∗s = U ∗∂ s ∂Y
s t − ⋅U ∗s Y = U ∗ ∂ s ∂ Y
U 0 ⋅s t Y t = V 0 ⋅ s t − I .
U 0 ⋅s t1 Y t 1 = s t 1 − U 0 ⋅ s t Y t or
U 0 ⋅s t1 Y t 1 = s t 1 − V 0 ⋅s t − I
After investigating this dispersion method and discussing its application with
ten Raa, this dissertation research uses an alternative intertemporal
formulation based on standard accounting principles for stocks and flows.
Banker, R.D., Charnes, R.F. & Cooper, W.W., 1984. Some models for estimating
technical and scale inefficiencies in data envelopment analysis.
Management Science, 30, 1078-92.
Charnes, A., Cooper, W.W. & Rhodes, E., 1978. Measuring the efficiency of
decision making units. European Journal of Operational Research, 2,
429-44.
Cooper, W.W., Seiford, L.M. & Tone, K., 2007. Data envelopment analysis: a
comprehensive text with models, applications, references and DEA-solver
software. 2nd ed., New York: Springer.
ten Raa, T., 2005. The Economics of Input Output Analysis, New York:
Cambridge University Press. Available at:
www.cambridge.org/9780521841795.
Weitzman, M.L., 1976. On the welfare significance of national product in a
dynamic economy. The Quarterly Journal of Economics, 156-162.
Appendix 7 Mining the GTAP Database
Region No. | Code | Regions comprising
(Norway), xef (Rest of EFTA), alb (Albania), bgr (Bulgaria),
blr (Belarus), hrv (Croatia), rou (Romania), rus (Russian
Federation), ukr (Ukraine), xee (Rest of Eastern Europe),
xer (Rest of Europe), kaz (Kazakhstan), kgz (Kyrgyzstan),
xsu (Rest of former Soviet Union), arm (Armenia), aze
(Azerbaijan), geo (Georgia), irn (Islamic Republic of Iran),
tur (Turkey), xws (Rest of Western Asia), egy (Egypt), mar
(Morocco), tun (Tunisia), xnf (Rest of North Africa), nga
(Nigeria), sen (Senegal), xwf (Rest of Western Africa), xcf
(Central Africa), xac (South Central Africa), eth (Ethiopia),
mdg (Madagascar), mwi (Malawi), mus (Mauritius), moz
(Mozambique), tza (Tanzania), uga (Uganda), zmb (Zambia),
zwe (Zimbabwe), xec (Rest of Eastern Africa), bwa
(Botswana), zaf (South Africa), xsc (Rest of South Africa
Customs Union)
Generic Description | Commodities Comprising
serv ely (electricity), gdt (gas manufacture, distribution), wtr
(services) (water), cns (construction), trd (trade), otp (transport nec),
wtp (water transport), atp (air transport), cmn
(communication), ofi (financial services nec), isr
(insurance), obs (business services nec), ros (recreational
and other services), osg (public administration and defence,
education, health), dwe (ownership of dwellings)
The “agg” file needs to be copied to a “txt” file, for example “sntest01.txt”. The
database is aggregated by running “data-agg.bat sntest01” where the
specification file is “sntest01.txt”. The aggregation function produces six
output files in a directory of the same name, "sntest01". The files are in "har"
format, which is a proprietary GEMPACK format but may be viewed with
“viewhar.exe”:ii
There are a number of methods of transforming the data in “har” files for use
in other database systems. Perhaps the most convenient is to generate a
standard “sql script” file from each “har” file using “seehar.exe” in the
Flexagg7 package of files. When “seehar.exe” initially executes, the following
sequence of commands achieves an sql-script file in the same directory
“sntest01”:
• type “sql” as the option and Carriage Return (Enter)
• type Carriage Return (Enter) to leave the options menu
• type the complete file location of the “har” file to be processed and
Carriage Return (Enter) to continue (it may help to put the full address
in Notepad and copy/paste it as the required location – then only the har
file name needs to be appended)
• press Carriage Return (Enter) to accept the default output file;
• Carriage Return (Enter) to continue
• type "r" as the option and then Carriage Return (Enter) to output the
data as an sql script file
The "sql" file can then be executed from an HSQLDB Database Engine or
within Mathematica to create a standalone HSQLDB database corresponding
to the "har" file. It will be necessary to remove some inconvenient
apostrophes from the sql using a text editor (i.e. change Firms' to Firms and
Agents' to Agents) and to change the table names to avoid conflicts (i.e. edit
basedata.sql and change HEADLIST, SETLIST and RARRAY to, say,
HEADLISTBD, SETLISTBD and RARRAYBD).
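A minimal sketch of running such a script from within Mathematica (the file path and the named HSQLDB connection are illustrative): the script is read as text, split naively on semicolons and executed statement by statement with SQLExecute:
<< DatabaseLink`
conn = OpenSQLConnection["sntest01"];
script = Import["/home/stuart/Documents/gtap/sntest01/basedata.sql", "Text"];
statements = Select[StringSplit[script, ";"], StringLength[StringTrim[#]] > 0 &];
SQLExecute[conn, #] & /@ statements;
(* the naive split assumes no semicolons occur inside quoted strings *)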
Illustration 44: Social Accounting Matrix (Source: McDonald & Patterson 2004)
The SAM equations can be rationalised with the GTAP database tables as set
out below.
{ } { }
VIAM VDAM VIPM VDPM VOM VIMS − VIWS
VIGM VDGM VIIM VDIM = VXWD − VXMD VTWR
VST VXWD VIWS − VTWR
Using:
U = VIAM + VDAM
C = VIPM + VDPM
G = VIGM + VDGM
I = VIIM + VDIM
However, V = VOM and, since VOM = VOA + OUTTAX, we need to include
taxes. Therefore VXWD = VXMD + XTAX and, since we need to use world
prices rather than market prices:
GNP = V − U = C + I + G + {Net Exports at World Prices} − MTAX + XTAX + VST
where:
Material Balance
The material balance equation is:
Therefore,
It might be noted that there is no column balance between the net exports of
various countries unless taxes are included. Therefore, the column balance
needs to be performed manually.
(* Open Connection *)
<< DatabaseLink`
conn = OpenSQLConnection[]
Print["vNAFTA : ", TableForm[vNAFTA,
TableHeadings → {vrows, vrows}]];
aregion[region_] := uregion[region].Inverse[Transpose[vregion[region]]];
aNAFTA = aregion["NAFTA"];
Print["aNAFTA technical matrix : ", TableForm[aNAFTA,
TableHeadings -> {vrows, vrows}]]
yregion[region_] := ysumdomimp[region][[All, 1]];
yNAFTA = yregion["NAFTA"];
Print["yNAFTA : ", TableForm[yNAFTA,
TableHeadings -> {vrows}]];
Clear[ygsum];
ygsum[region_] := Transpose[{ysumdomimp[region][[All, 1]] +
gsumdomimp[region][[All, 1]], vrows}];
ygsum["NAFTA"] // MatrixForm;
ygregion[region_] := ygsum[region][[All, 1]];
ygNAFTA = ygregion["NAFTA"];
Print["ygNAFTA : ", TableForm[ygNAFTA,
TableHeadings -> {vrows}]];
(* Open Connection *)
<< DatabaseLink`
conn = OpenSQLConnection[]
Clear[as, varray, vselect, vsumdomimp];
as[a_] := If[a == {}, {0}, a];
varray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name];
(*vdpm=varray["VDPM"][[All,{1,3,4}]];*)
vsource = varray["GVIEWRA", "CM04"];
vrows = Union[varray["GVIEWRA", "CM04"][[All, 3]]];
vselect[array_, region_, component_] := Select[array, #[[4]] == region &&
#[[5]] == component &][[All, {1, 3}]];
vsumdomimp[region_] := vselect[vsource, region, "prodrev"];
vsumdomimp["NAFTA"] // MatrixForm;
vNAFTA = DiagonalMatrix[vsumdomimp["NAFTA"][[All, 1]]];
Print["vNAFTA : ", TableForm[vNAFTA,
TableHeadings -> {vrows, vrows}]];
Clear[uarray, uselect, usumdomimp, fsumdomimp];
uarray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name,
SortingColumns -> {SQLColumn["ELEMENT3"] -> "Ascending",
SQLColumn["ELEMENT2"] -> "Ascending", SQLColumn["ELEMENT1"] ->
"Ascending"}];
usource = uarray["GVIEWRA", "SF01"];
uselect[array_, region_, source_, component_, urows_, ucols_] :=
Select[array, MemberQ[urows, #[[3]]] && MemberQ[ucols, #[[4]]] &&
#[[5]] == region && #[[6]] == source && #[[7]] == component &]
[[All, {1, 3, 4}]];
usumdomimp[region_] := Transpose[{ uselect[usource, region, "domestic",
"mktexp", vrows, vrows][[All, 1]] /. x_ /; x -> as[x] + uselect[usource,
region, "imported", "mktexp", vrows, vrows][[All, 1]] /. x_ /; x ->
as[x], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
2]], uselect[usource, region, "domestic", "mktexp", vrows, vrows][[All,
3]]}];
usumdomimp["NAFTA"] // MatrixForm;
uNAFTA = Transpose[Partition[usumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["uNAFTA : ", TableForm[uNAFTA,
TableHeadings -> {vrows, vrows}]];
aNAFTA = uNAFTA.Inverse[Transpose[vNAFTA]];
Print["aNAFTA technical matrix : ", TableForm[aNAFTA,
TableHeadings -> {vrows, vrows}]]
garray[array_, name_] := SQLSelect[conn, array, SQLColumn["HEADNAME"] ==
name, SortingColumns -> {SQLColumn["ELEMENT2"] -> "Ascending",
SQLColumn["ELEMENT1"] -> "Ascending"}];
gsource = garray["GVIEWRA", "SF03"];
gselect[array_, region_, source_, component_, yrows_] := Select[array,
MemberQ[yrows, #[[3]]] && #[[4]] == region && #[[5]] == source && #[[6]]
== component &][[All, {1, 3}]];
gsumdomimp[region_] := Transpose[{ gselect[gsource, region, "domestic",
"mktexp", vrows][[All, 1]] /. x_ /; x -> as[x] + gselect[gsource, region,
"imported", "mktexp", vrows][[All, 1]] /. x_ /; x -> as[x],
gselect[gsource, region, "domestic", "mktexp", vrows][[All, 2]]}];
gsumdomimp["NAFTA"] // MatrixForm;
gNAFTA = gsumdomimp["NAFTA"][[All, 1]];
Print["gNAFTA : ", TableForm[gNAFTA,
TableHeadings -> {vrows}]];
Clear[ygsum];
ygsum[region_] := Transpose[{ysumdomimp[region][[All, 1]] +
gsumdomimp[region][[All, 1]], vrows}];
ygsum["NAFTA"] // MatrixForm;
ygNAFTA = ygsum["NAFTA"][[All, 1]];
Print["ygNAFTA : ", TableForm[ygNAFTA,
TableHeadings -> {vrows}]];
(* CHECK ON A MATRIX *)
vrows][[All, 2]], vdselect[vdsource, region, vrows, vrows][[All,
3]]}];
vdsumdomimp["NAFTA"] // MatrixForm;
vdNAFTA = Transpose[ Partition[vdsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["vdNAFTA : ", TableForm[vdNAFTA,
TableHeadings -> {vrows, vrows}]]
Clear[vdvisum];
vdvisum[region_] := Transpose[{vdsumdomimp[region][[All, 1]] +
visumdomimp[region][[All, 1]], vdsumdomimp[region][[All, 2]],
vdsumdomimp[region][[All, 3]]}];
vdvisum["NAFTA"] // MatrixForm;
vdviNAFTA = Transpose[Partition[vdvisum["NAFTA"][[All, 1]],
Length[vrows]]];
Print["vdviNAFTA : ", TableForm[vdviNAFTA,
TableHeadings -> {vrows, vrows}]];
fbselect[array_, region_, urows_, ucols_] := Select[array, MemberQ[urows,
#[[3]]] && MemberQ[ucols, #[[4]]] && #[[5]] == region &][[All, {1, 3,
4}]];
fbsumdomimp[region_] := Transpose[{fbselect[fbsource, region, frows,
vrows][[All, 1]] /. x_ /; x -> as[x], fbselect[fbsource, region, frows,
vrows][[All, 2]], fbselect[fbsource, region, frows, vrows][[All, 3]]}];
fbsumdomimp["NAFTA"] // MatrixForm;
fbNAFTA = Transpose[ Partition[fbsumdomimp["NAFTA"][[All, 1]],
Length[vrows]]];
Print["fbNAFTA : ", TableForm[fbNAFTA,
TableHeadings -> {frows, vrows}]]
(*Clear[osarray,osselect,ossumdomimp,ossumdomimp];
osarray[array_,name_]:=SQLSelect[conn,array,SQLColumn["HEADNAME"]==
name,SortingColumns->{SQLColumn["ELEMENT2"]-
>"Ascending",SQLColumn[ "ELEMENT1"]->"Ascending"}];
ossource=ftarray["GDATRA","OSEP"];
osselect[array_,region_,urows_]:=Select[array,MemberQ[urows,#[[3]]]&&#[[4
]]==region&][[All,{1,3}]];
ossumdomimp[region_]:=Transpose[{osselect[ossource,region,vrows]
[[All,1]]/.x_/;x->as[x],osselect[ossource,region,vrows][[All,2]]}];
ossumdomimp["NAFTA"]//MatrixForm;
osNAFTA={Transpose[Partition[ossumdomimp["NAFTA"]
[[All,1]],Length[vrows]]]};
Print["osNAFTA :
",TableForm[osNAFTA,TableHeadings->{{" "},vrows}]]*)
Clear[totasum];
dim = {Max[Length[vrows]*Length[vrows], Length[frows]*Length[vrows]], 1};
totasum[region_] := Transpose[{Flatten[ SparseArray[Band[{1, 1}] ->
Thread[{vdvisum[region][[All, 1]]}], dim] + SparseArray[ Band[{1, 1}] ->
Thread[{ensumdomimp[region][[All, 1]]}], dim] + SparseArray[ Band[{1, 1}]
-> Thread[{fbsumdomimp[region][[All, 1]]}], dim] + SparseArray[ Band[{1,
1}] -> Thread[{ftsumdomimp[region][[All, 1]]}], dim] +
SparseArray[ Band[{1, 1}] -> Thread[{issumdomimp[region][[All, 1]]}],
dim] + (*SparseArray[Band[{1,1}]->Thread[{ossumdomimp[region][[All,
1]]}],dim]+*) SparseArray[ Band[{1, 1}] -> Thread[{tfsumdomimp[region]
[[All, 1]]}], dim] + SparseArray[ Band[{1, 1}] ->
Thread[{vssumdomimp[region][[All, 1]]}], dim]], ensumdomimp[region][[All,
2]], ensumdomimp[region][[All, 3]]}];
totasum["NAFTA"] // MatrixForm;
totaNAFTA = {Total[ Transpose[ Partition[totasum["NAFTA"][[All, 1]],
Length[vrows]]]]};
dataaNAFTA = vdviNAFTA.Inverse[DiagonalMatrix[Flatten[totaNAFTA]]];
Print["calc NAFTA technical matrix : ", TableForm[dataaNAFTA,
TableHeadings -> {vrows, vrows}]];
<< DatabaseLink`
conn1 = OpenSQLConnection["gtap3eghg"]
conn2 = OpenSQLConnection["gtap3res"]
Clear[mapping, positiona, positionb, from, to, map, mapuc];
istream = OpenRead["/home/stuart/Documents/gtap/GTPAg7/sntest01.agg"];
records = Select[ReadList[istream, Record, RecordSeparators -> "= "],
StringFreeQ[#, "!"] &];
mapping[n_] := Rest[StringSplit[records[[n]]]];
positiona[n_] := Flatten[Position[mapping[n], "&"]];
positionb[n_] := Rest[RotateRight[Join[{-1}, positiona[n]]]];
to[n_] := mapping[n][[positiona[n] + 1]];
from[n_] := mapping[n][[positionb[n] + 2]];
map[n_] := Thread[from[n] -> to[n]];
mapuc[n_] := Thread[ToUpperCase[from[n]] -> to[n]];
produnitmap = map[2];
regionmap = mapuc[4];
factormap = map[6];
othermap = {"HH" -> "demand", "Govt" -> "demand", "CGDS" -> "invest"};
remap = Join[produnitmap, regionmap, factormap, othermap];
mappedarray = SQLSelect[conn1, "RARRAY"] /. remap
ghg = Union[mappedarray[[All, 3]]];
commodities = Union[mappedarray[[All, 4]]];
produnits = Union[mappedarray[[All, 5]]];
regions = Union[mappedarray[[All, 6]]];
(*Total[Select[mappedarray,
#[[3]]=="CO2"&&#[[4]]=="ecoa"&&#[[5]]=="food"&&#[[6]]=="EU25"&]
[[All,1]]];*)
SQLDropTable[conn2, "EGHG"];
SQLCreateTable[conn2, SQLTable["EGHG"], {SQLColumn["RVALUE", DataTypeName
-> "FLOAT"], SQLColumn["HEADNAME", DataTypeName -> "VARCHAR", DataLength
-> 10], SQLColumn["ELEMENT1", DataTypeName -> "VARCHAR", DataLength ->
10], SQLColumn["ELEMENT2", DataTypeName -> "VARCHAR", DataLength -> 10],
SQLColumn["ELEMENT3", DataTypeName -> "VARCHAR", DataLength -> 10]
}];
SQLDelete[conn2, "EGHG"];
For[l = 1, l <= Length[regions], l++,
For[k = 1, k <= Length[produnits], k++,
For[j = 1, j <= Length[commodities], j++,
For[i = 1, i <= Length[ghg], i++,
SQLInsert[conn2, "EGHG", SQLColumnNames[conn2, SQLTable["EGHG"]]
[[All, 2]], {Total[Select[mappedarray, #[[3]] == ghg[[i]] &&
#[[4]] == commodities[[j]] && #[[5]] == produnits[[k]] && #[[6]] ==
regions[[l]] &][[All, 1]]], ghg[[i]], commodities[[j]], produnits[[k]],
regions[[l]]}]
];];];];
A7.3 Data mining the GTAP database in Mathematica
File: Gtapfunctions.m
BeginPackage["Gtapfunctions`",{"DatabaseLink`"}]
vrows::usage="vrows gives the commodity rows of the matrix."
frows::usage="frows gives the factor rows of the matrix."
regions::usage="regions gives the regions in the dataset."
vregion::usage="vregion[n] gives the V matrix."
uregion::usage="uregion[n] gives the U matrix."
iregion::usage="iregion[n] gives the Investment matrix."
fregion::usage="fregion[n] gives the Factor matrix."
gnpregion::usage="gnpregion[n] gives the U-Transpose[V] matrix."
aregion::usage="aregion[n] gives the A matrix."
imregion::usage="imregion[n] gives the Import matrix."
exregion::usage="exregion[n] gives the Export matrix."
txregion::usage="txregion[n] gives the export transport margins."
yregion::usage="yregion[n] gives the Household demand matrix."
gregion::usage="gregion[n] gives the Government demand matrix."
ygregion::usage="ygregion[n] gives the combined Household & Government
demand matrix."
biregion::usage="biregion[n] gives the bias of U-V+C+I+G+X-M"
csregion::usage="csregion[n] gives the Capital Stock."
pregion::usage="pregion[n] gives the Population."
eyregion::usage="eyregion[n] gives the combined Household energy demand
matrix."
eexregion::usage="eexregion[n] gives the Energy bilateral trade matrix."
euregion::usage="euregion[n] gives the firms' purchases of Energy."
gfgregion::usage="gfgregion[n] gives the firms production of greenhouse
gases."
gygregion::usage="gygregion[n] gives the combined Household & Government
production of greenhouse gases."
gigregion::usage="gigregion[n] gives the investment production of
greenhouse gases."
Begin["`Private`"]
conn=OpenSQLConnection["gtap3res"];
Clear[varray,vselect,vsumdomimp];
(*as[a_]:=If[a=={},{0},a];*)
(* the varray is different to others because V needs to be at market
prices, including output taxes *)
varray[array_,name_]:=SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name];
(*vdpm=varray["VDPM"][[All,{1,3,4}]];*)
vsource=varray["GVIEWRA","CM04"];
regions=Union[varray["GVIEWRA","CM04"][[All,4]]];
vrows=Union[varray["GVIEWRA","CM04"][[All,3]]];
vselect[array_,region_,component_]:= Select[array,#[[4]]==region &&
#[[5]] == component&][[All,{1,3}]];
vsumdomimp[region_]:= Transpose[{ vselect[vsource,region,"prodrev"]
[[All,1]] + vselect[vsource,region,"outtax"][[All,1]],
vselect[vsource,region,"prodrev"][[All,2]]}];
vregion[region_]:= DiagonalMatrix[vsumdomimp[region][[All,1]]];
(*vsumdomimp["NAFTA"]//MatrixForm
vNAFTA=vregion["NAFTA"];
Print["vNAFTA : ",TableForm[vNAFTA,TableHeadings
→ {vrows,vrows}]]; *)
Clear[uarray,uselect,usumdomimp];
uarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
usource=uarray["GVIEWRA","SF01"];
uselect[array_,region_,source_,component_,urows_,ucols_]:=
Select[array,MemberQ[urows,#[[3]]] && MemberQ[ucols,#[[4]]]&&#[[5]] ==
region&&#[[6]] == source&&#[[7]] == component&][[All,{1,3,4}]];
usumdomimp[region_]:=
Transpose[{ uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,1]] + uselect[usource,region,"imported","mktexp",vrows,vrows]
[[All,1]], uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,2]], uselect[usource,region,"domestic","mktexp",vrows,vrows]
[[All,3]]}];
uregion[region_]:= Transpose[Partition[usumdomimp[region]
[[All,1]],Length[vrows]]];
(* usumdomimp["NAFTA"]//MatrixForm;
uNAFTA= uregion["NAFTA"];
Print["uNAFTA : ",TableForm[uNAFTA,TableHeadings
→ {vrows,vrows}]]; *)
Clear[fssumdomimp];
frows = Complement[Union[varray["GVIEWRA","SF01"][[All,3]]],vrows];
fsumdomimp[region_]:=Transpose[{ uselect[usource,region,"domestic","mktex
p",frows,vrows][[All,1]] +
uselect[usource,region,"imported","mktexp",frows,vrows][[All,1]],
uselect[usource,region,"domestic","mktexp",frows,vrows][[All,2]],
uselect[usource,region,"domestic","mktexp",frows,vrows][[All,3]]}];
fregion[region_]:= Transpose[Partition[fsumdomimp[region]
[[All,1]],Length[frows]]];
(* fsumdomimp["NAFTA"]//MatrixForm;
fNAFTA=fregion["NAFTA"];
Print["fNAFTA factor inputs : ",TableForm[fNAFTA,TableHeadings
→ {frows,vrows}]] *)
gnpregion[region_]:=uregion[region]-Transpose[vregion[region]];
(* gnpNAFTA=gnpregion["NAFTA"];
Print["uNAFTA - Inv_vNAFTA_Transpose :
",TableForm[gnpNAFTA,TableHeadings → {vrows,vrows}]] *)
aregion[region_]:=uregion[region].Inverse[Transpose[vregion[region]]];
(* aNAFTA=aregion["NAFTA"];
Print["aNAFTA technical matrix : ",TableForm[aNAFTA,TableHeadings
→ {vrows,vrows}]] *)
Clear[txarray,txselect,txsumdomimp];
txarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"]-
>"Ascending"}];
txsource=txarray["GVIEWRA","CM01"];
txselect[array_,region_,urows_]:= Select[array,MemberQ[urows,
#[[3]]]&&#[[5]] == region&&#[[4]] == "trans"&][[All,{1,3}]];
txsumdomimp[region_]:=Transpose[{ txselect[txsource,region,vrows]
[[All,1]], txselect[txsource,region,vrows][[All,2]]}];
txregion[region_]:=txsumdomimp[region][[All,1]];
Clear[exarray,exselect,exsumdomimp];
(* the exarray is different to others because it needs to be at world
prices, including export taxes & transport so use the CIF disposition *)
exarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
exsource=exarray["GVIEWRA","BI03"];
tocous=Union[exsource[[All,5]]];
exselect[array_,region_,urows_,toreg_,component_]:=
Select[array,MemberQ[urows,#[[3]]]&&#[[4]] == region&&#[[5]] ==
toreg&&#[[6]] == component&][[All,{1,3}]];
exsumdomimp[region_]:=Transpose[{ Apply[Plus,Map[exselect[exsource,region
,vrows,#,"fob"][[All,1]]&,tocous]] +
Apply[Plus,Map[exselect[exsource,region,vrows,#,"trans"]
[[All,1]]&,tocous]], exselect[exsource,region,vrows,region,"fob"]
[[All,2]]}];
exregion[region_]:= Flatten[Transpose[Partition[exsumdomimp[region]
[[All,1]],Length[vrows]]]];
(*exsumdomimp["NAFTA"]//MatrixForm;
exNAFTA=exregion["NAFTA"];
Print["exNAFTA :
",TableForm[exNAFTA,TableHeadings → {{" "},vrows}]]*)
Clear[imarray,imselect,imsumdomimp];
imarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT3"] →
"Ascending",SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
imsource=imarray["GVIEWRA","BI02"];
fromcous=Union[imsource[[All,4]]];
imselect[array_,region_,urows_,toreg_]:=
Select[array,MemberQ[urows,#[[3]]]&&#[[5]] == region&&#[[4]] ==
toreg&&#[[6]] == "impcost"&][[All,{1,3}]];
imsumdomimp[region_]:=
Transpose[{ Apply[Plus,Map[imselect[imsource,region,vrows,#]
[[All,1]]&,fromcous]], imselect[imsource,region,vrows,region][[All,2]]}];
imregion[region_]:= Flatten[Transpose[Partition[imsumdomimp[region]
[[All,1]],Length[vrows]]]];
(*imsumdomimp["NAFTA"]//MatrixForm;
imNAFTA=imregion["NAFTA"];
Print["imNAFTA :
",TableForm[imNAFTA,TableHeadings → {{" "},vrows}]]*)
Clear[yarray,yselect,ysumdomimp];
yarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
ysource = yarray["GVIEWRA","SF02"];
yselect[array_,region_,source_,component_,yrows_]:=
Select[array,MemberQ[yrows, #[[3]]] && #[[4]] == region&&#[[5]] ==
source&&#[[6]] == component&][[All,{1,3}]];
ysumdomimp[region_]:=Transpose[{ yselect[ysource,region,"domestic","mktex
p",vrows][[All,1]] + yselect[ysource,region,"imported","mktexp",vrows]
[[All,1]], yselect[ysource,region,"domestic","mktexp",vrows][[All,2]]}];
yregion[region_]:=ysumdomimp[region][[All,1]];
(* ysumdomimp["NAFTA"]//MatrixForm;
yNAFTA=yregion["NAFTA"];
Print["yNAFTA : ",TableForm[yNAFTA,TableHeadings
→ {vrows}]]; *)
Clear[garray,gselect,gsumdomimp];
garray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name,SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
gsource=garray["GVIEWRA","SF03"];
gselect[array_,region_,source_,component_,yrows_]:=
Select[array,MemberQ[yrows, #[[3]]]&&#[[4]] == region&&#[[5]] ==
source&&#[[6]]==component&][[All,{1,3}]];
gsumdomimp[region_]:=Transpose[{ gselect[gsource,region,"domestic","mktex
p",vrows][[All,1]] + gselect[gsource,region,"imported","mktexp",vrows]
[[All,1]], gselect[gsource,region,"domestic","mktexp",vrows][[All,2]]}];
gregion[region_]:=gsumdomimp["NAFTA"][[All,1]];
(* gsumdomimp["NAFTA"]//MatrixForm;
gNAFTA=gregion["NAFTA"];
Print["gNAFTA : ",TableForm[gNAFTA,
TableHeadings → {vrows}]]; *)
Clear[ygsum];
ygsum[region_]:= Transpose[{ysumdomimp[region][[All,1]] +
gsumdomimp[region][[All,1]],vrows}];
ygregion[region_]:= ygsum[region][[All,1]];
(* ygsum["NAFTA"]//MatrixForm;
ygNAFTA=ygregion["NAFTA"];
Print["ygNAFTA : ",TableForm[ygNAFTA,
TableHeadings → {vrows}]]; *)
Clear[csarray,csselect,cssumdomimp];
csarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT1"] → "Ascending"}];
cssource=csarray["GVIEWRA","AG06"];
csselect[array_,region_]:= Select[array,#[[3]] == region&][[All,{1,3}]];
cssumdomimp[region_]:= Transpose[{ csselect[cssource,region][[All,1]],
csselect[cssource,region][[All,2]]}];
csregion[region_]:= cssumdomimp[region][[All,1]];
(* cssumdomimp["NAFTA"]//MatrixForm;
csNAFTA=csregion["NAFTA"];
Print["csNAFTA :
",TableForm[csNAFTA,TableHeadings → {{"cap "},{""}}]]; *)
Clear[parray,pselect,psumdomimp];
parray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT1"] → "Ascending"}];
psource=csarray["GDATRA","POP"];
pselect[array_,region_]:= Select[array,#[[3]] == region&][[All,{1,3}]];
psumdomimp[region_]:= Transpose[{ pselect[psource,region][[All,1]],
pselect[psource,region][[All,2]]}];
pregion[region_]:= psumdomimp[region][[All,1]];
(*psumdomimp["NAFTA"]//MatrixForm;
pNAFTA=pregion["NAFTA"];
Print["pNAFTA : ",TableForm[pNAFTA,TableHeadings
→ {{"pop "},{""}}]];*)
Clear[iarray,iselect,isumdomimp];
iarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
isource= iarray["GVIEWRA","SF01"];
iselect[array_,region_,source_,component_,urows_,ucols_]:=
Select[array,MemberQ[urows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]] ==
region && #[[6]] == source && #[[7]] == component&][[All,{1,3,4}]];
isumdomimp[region_]:=Transpose[{ iselect[isource,region,"domestic","mktex
p",vrows,{"CGDS"}][[All,1]] +
iselect[isource,region,"imported","mktexp",vrows,{"CGDS"}][[All,1]],
iselect[isource,region,"domestic","mktexp",vrows,{"CGDS"}][[All,2]],
iselect[isource,region,"domestic","mktexp",vrows,{"CGDS"}][[All,3]]}];
(*iregion[region_]:= Transpose[Partition[isumdomimp[region][[All,1]],
Length[vrows]]];*)
iregion[region_]:= isumdomimp[region][[All,1]];
(*iNAFTA=iregion["NAFTA"]
isumdomimp["NAFTA"]//MatrixForm
Print["iNAFTA : ",TableForm[iNAFTA,TableHeadings
→ {vrows,"CGDS"}]];*)
Clear[biregion];
biregion[region_]:= Total[uregion[region]-Transpose[vregion[region]],{2}]
+ ygregion[region] + iregion[region] + exregion[region] -
imregion[region];
(*biregion["NAFTA"]*)
Clear[eyarray,eyselect,eysumdomimp];
eyarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT2"] →
"Ascending",SQLColumn["ELEMENT1"] → "Ascending"}];
eysource = eyarray["GVOLERA","EVH"];
erows = Union[eyarray["GVOLERA","EVH"][[All,3]]];
eyselect[array_,region_,yrows_]:= Select[array,MemberQ[yrows,#[[3]]] &&
#[[4]] == region&][[All,{1,3}]];
eysumdomimp[region_]:= eyselect[eysource,region,erows];
(*the following form is required to cope with null values *)
eyregion[region_]:= Table[Apply[Plus,Select[eysumdomimp[region],#[[2]] ==
i&][[All,1]]],{i,erows}];
(*eyregion["NAFTA"]//MatrixForm
eyNAFTA=eyregion["NAFTA"];
Print["eyNAFTA :
",TableForm[eyNAFTA,TableHeadings → {erows}]];*)
Clear[eexarray,eexselect,eexsumdomimp];
(* the eexarray *)
eexarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
eexsource=eexarray["GVOLERA","EVT"];
tocous=Union[eexsource[[All,5]]];
eexselect[array_,region_,yrows_,tocous_]:=
Select[array,MemberQ[yrows,#[[3]]] && #[[4]] == region && #[[5]] ==
tocous&][[All,{1,3}]];
eexsumdomimp[region_]:=Transpose[{ Apply[Plus,Map[eexselect[eexsource,reg
ion,erows,#][[All,1]]&,tocous]], eexselect[eexsource,region,erows,region]
[[All,2]]}];
(*the following form is required to cope with null values *)
eexregion[region_]:= Table[Apply[Plus,Select[eexsumdomimp[region], #[[2]]
== i&][[All,1]]], {i,erows}];
(*eexsumdomimp["NAFTA"]//MatrixForm
eexNAFTA=eexregion["NAFTA"];
Print["eexNAFTA :
",TableForm[eexNAFTA,TableHeadings → {erows,{" "}}]]*)
Clear[eimarray,eimselect,eimsumdomimp];
(* the eimarray *)
eimarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
eimsource = eimarray["GVOLERA","EVT"];
fromcous= Union[eimsource[[All,4]]];
eimselect[array_,region_,yrows_,fromcous_]:=
Select[array,MemberQ[yrows,#[[3]]] && #[[5]] == region && #[[4]] ==
fromcous&][[All,{1,3}]];
eimsumdomimp[region_]:=
Transpose[{Apply[Plus,Map[eimselect[eimsource,region, erows,#]
[[All,1]]&,fromcous]], eimselect[eimsource,region,erows,region]
[[All,2]]}];
(*the following form is required to cope with null values *)
eimregion[region_]:= Table[Apply[Plus,Select[eimsumdomimp[region],
#[[2]]==i&][[All,1]]],{i,erows}];
(*eimsumdomimp["NAFTA"]//MatrixForm
eimNAFTA=eimregion["NAFTA"];
Print["eimNAFTA :
",TableForm[eimNAFTA,TableHeadings → {erows,{" "}}]]*)
Clear[euarray,euselect,eusumdomimp];
euarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
eusource=euarray["GVOLERA","EVF"];
euselect[array_,region_,erows_,ucols_]:=
Select[array,MemberQ[erows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]] ==
region&][[All,{1,3,4}]];
eusumdomimp[region_]:= euselect[eusource,region,erows,vrows];
(*the following form is required to cope with null values *)
euregion[region_]:=
Transpose[Table[Apply[Plus,Select[eusumdomimp[region], #[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,vrows},{j,erows}]];
(*eusumdomimp["NAFTA"]//MatrixForm
euNAFTA=euregion["NAFTA"];
Print["euNAFTA :
",TableForm[euNAFTA,TableHeadings → {erows,vrows}]];*)
Clear[gfgarray,gfgselect,gfgsumdomimp];
gfgarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
gfgsource = gfgarray["EGHG","CO2"];
ghgrows = Union[gfgarray["EGHG","CO2"][[All,3]]];
gfgselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]]
== region&][[All,{1,3,4}]];
gfgsumdomimp[region_]:= gfgselect[gfgsource,region,ghgrows,vrows];
(*the following form is required to cope with null values *)
gfgregion[region_]:=
Transpose[Table[Apply[Plus,Select[gfgsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,vrows},{j,ghgrows}]];
(*gfgsumdomimp["NAFTA"]//MatrixForm
gfgNAFTA=gfgregion["NAFTA"];
Print["gfgNAFTA :
",TableForm[gfgNAFTA,TableHeadings → {ghgrows,vrows}]];*)
Clear[gygarray,gygselect,gygsumdomimp];
gygarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending", SQLColumn["ELEMENT1"] →
"Ascending"}];
gygsource = gygarray["EGHG","CO2"];
gygcols = {"demand"};
gygselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] && #[[5]]
== region&][[All,{1,3,4}]];
gygsumdomimp[region_]:= gygselect[gygsource,region,ghgrows,gygcols];
(*the following form is required to cope with null values *)
gygregion[region_]:=
Flatten[Table[Apply[Plus,Select[gygsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,gygcols},{j,ghgrows}]];
(*gygsumdomimp["NAFTA"]//MatrixForm
gygNAFTA=gygregion["NAFTA"];
Print["gygNAFTA :
",TableForm[gygNAFTA,TableHeadings → {ghgrows,gygcols}]];*)
Clear[gigarray,gigselect,gigsumdomimp];
gigarray[array_,name_]:= SQLSelect[conn,array,SQLColumn["HEADNAME"] ==
name, SortingColumns → {SQLColumn["ELEMENT3"] → "Ascending",
SQLColumn["ELEMENT2"] → "Ascending",SQLColumn["ELEMENT1"] →
"Ascending"}];
gigsource=gigarray["EGHG","CO2"];
gigcols={"invest"};
gigselect[array_,region_,ghgrows_,ucols_]:=
Select[array,MemberQ[ghgrows,#[[3]]] && MemberQ[ucols,#[[4]]] &&
#[[5]]==region&][[All,{1,3,4}]];
gigsumdomimp[region_]:= gigselect[gigsource,region,ghgrows,gigcols];
(*the following form is required to cope with null values *)
gigregion[region_]:=
Flatten[Table[Apply[Plus,Select[gigsumdomimp[region],#[[2]] == j &&
#[[3]] == i&][[All,1]]],{i,gigcols},{j,ghgrows}]];
(*gigsumdomimp["NAFTA"]//MatrixForm
gigNAFTA=gigregion["NAFTA"];
Print["gigNAFTA :
",TableForm[gigNAFTA,TableHeadings → {ghgrows,gigcols}]];*)
End[]
EndPackage[]
BeginPackage["Gtapaggregation`"]
aggregions::usage="aggregions gives the input and output regions."
wgtpopgrowth::usage="wgtpopgrowth gives weighted population growth of
regions in the aggregation file."
Begin["`Private`"]
Clear[mapping,positiona,positionb,from,to,map,mapuc,mapuc2,regionmap,popu
lation,popgrowth,wgtpopgrowth];
istream=OpenRead["/home/stuart/Documents/gtap/GTPAg7/sntest01.agg"];
records=Select[ReadList[istream,Record,RecordSeparators->" = "],
StringFreeQ[#,"!"]&];
mapping[n_]:=Rest[StringSplit[records[[n]]]];
positiona[n_]:=Flatten[Position[mapping[n],"&"]];
positionb[n_]:=Rest[RotateRight[Join[{-1},positiona[n]]]];
to[n_]:=mapping[n][[positiona[n]+1]];
from[n_]:=mapping[n][[positionb[n]+2]];
(*map[n_]:=Thread[from[n]->to[n]];
mapuc[n_]:=Thread[ToUpperCase[from[n]]->to[n]];
produnitmap=map[2];regionmap=mapuc[4];factormap=map[6];*)
mapuc2[n_]:=Thread[{ToUpperCase[from[n]],to[n]}];
regionmap[n_]:=Select[mapuc2[4],#[[2]]==n&][[All,1]];
aggregions=Map[{#,regionmap[#]}&,Union[to[4]]];
population[r_,n_]:=CountryData[aggregions[[r,2,n]],"Population"];
popgrowth[r_,n_]:=CountryData[aggregions[[r,2,n]],"PopulationGrowth"];
wgtpopgrowth[m_]:= Sum[population[m,n]*popgrowth[m,n],
{n,Length[aggregions[[m,2]]]}] / Sum[population[m,n],
{n,Length[aggregions[[m,2]]]}];
nonrowcou=Quiet[Thread[CountryData[Flatten[Map[regionmap[#]&,Rest[RotateR
ight[Union[to[4]]]]]]]]];
rowcou1=Complement[CountryData["Countries"],nonrowcou];
rowcou2=Map[{#,CountryData[#,"PopulationGrowth"]}&,rowcou1];
rowcou3=Select[rowcou2,NumericQ[#[[2]]]&][[All,1]];
rowpop=Total[Map[CountryData[#,"Population"]&,rowcou3]];
rowwgt=Total[Map[CountryData[#,"Population"]*CountryData[#,"PopulationGro
wth"]&,rowcou3]];
wgtpopgrowth[Length[aggregions]]:=rowwgt/rowpop;
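wgtpopgrowth is a population-weighted average of national growth rates, applied both to the named aggregates and to the rest-of-world residual. A sketch with hypothetical figures (the package itself draws Population and PopulationGrowth from CountryData):
(* illustration only: two hypothetical countries in one aggregate region *)
pops = {300.*10^6, 30.*10^6};
growth = {0.009, 0.012};
Total[pops*growth]/Total[pops] (* -> 0.00927, dominated by the larger country *)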
End[]
EndPackage[]
Lee, H., 2008. An Emissions Data Base for Integrated Assessment of Climate
Change Policy Using GTAP, Center for Global Trade Analysis. Available
at: https://www.gtap.agecon.purdue.edu/resources/res_display.asp?
RecordID=1143 [Accessed June 26, 2009].
Appendix 8 The Sceptre Model
Sceptre is an acronym for Spatial Climate-Economic Policy Tool for Regional
Equilibria.
A8.2 Sceptre Mathematica Code
File: m12_13p_2C_100.nb
Include carbon trading & abatement in data arrays
mnfccom = Flatten[Position[vrows2, "mnfc"]];
nontradcom = Flatten[Position[vrows2, "serv"]];
gamlcom = Flatten[Position[vrows2, "gaml"]];
gtracom = Flatten[Position[vrows2, "gtra"]];
a[0, n_] := a[0, n] = PadRight[ygregion[regions[[n]]], ucols]*10;
inv[0, n_] := inv[0, n] = PadRight[iregion[regions[[n]]], ucols]*10;
u[0, n_] := u[0, n] = PadRight[uregion[regions[[n]]], {urows, ucols}]*10;
v[0, n_] := v[0, n] = PadRight[vregion[regions[[n]]], {Length[vrows2],
vcols}]*10;
Calculate initial assets, depreciation & Sales/Assets
ratios
investv[0, n_] := investv[0, n] = ReplacePart[ivector[0, n], {gamlcom ->
0, gtracom -> 0}];
ninvvec[m_, n_] := ReplacePart[Table[ninv[m, n, p], {p, urows}], {gamlcom
-> 0, gtracom -> 0}];(*set ninvvec=0 for greenhouse gases*)
ninvvec[0, n_] := ninvvec[0, n] = Total[csregion[regions[[n]]]] * kap[0,
n]/ kendowment[0, n];(*single year figure*)
(*the following is required because sometimes depreciation is more than
investment.
ninvvec[0]=ninvvec[-1](1-\[Delta])+10*inv[0](1-\[Delta]/2) where ninvvec[0]=ninvvec[-1]
and ninvvec[-1] is eliminated*)
\[Delta]1yr[0, n_] := \[Delta]1yr[0, n] = Map[Min[#, \[Delta]1yrav] &, inv[0, n]/
(10*ninvvec[0, n] + inv[0, n]/2 + 10^-6)];
\[Delta][m_, n_] := \[Delta][m, n] = (1 + \[Delta]1yr[0, n])^10 - 1 (*approximately 10 \[Delta]1yr[0,n]*);
(*s2avect[m_,n_]:=(Transpose[v[0,n]].ivector[0,n])*(ivector[m,n]-[m,n])/
(ninvvec[0,n]-inv[0,n]*(ivector[m,n]-[m,n]/2));*)
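The compounding step above converts an implied annual depreciation rate into a decadal one; the bracketed comment notes that ten times the annual rate is only an approximation. A worked example, assuming an annual rate of 4 per cent:
(* worked example only: compounded decadal depreciation versus the 10x approximation *)
dannual = 0.04;
(1 + dannual)^10 - 1 (* -> 0.480 *)
10*dannual (* -> 0.400 *)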
(*pre-industrial (1750) CO2 in the atmosphere in GtC*) mat1750 ->
596.4,
(*estimated forcing from an equilibrium CO2 doubling in watts per m2*) \[Eta] -> 3.8,
(*CO2 concentration in atmosphere 2005 in GtC*) mat[0] -> 808.9,
(*CO2 concentration in upper strata of oceans 2005 in GtC*) mup[0] ->
1255,
(*CO2 concentration in lower strata of oceans 2005 in GtC*) mlo[0] ->
18365,
(*atmospheric temp change in ˚C from 1900 to 2000*) tat[0] -> 0.7307,
(*ocean lower strata temperature change in ˚C from 1900 to 2000*)
tlo[0] -> 0.0068,
(*climate-equation coefficient for upper level*) \[Xi]1 -> 0.22,
(*transfer coefficient upper to lower ocean stratum*) \[Xi]3 ->
0.3,
(*transfer coefficient for lower level of ocean*) \[Xi]4 -> 0.05,
(*equilibrium temperature impact of double CO2 ˚C*) t2xco2 -> 3,
(*damages multiplier at base year*) \[CapitalOmega]0 -> 0.99849,
0 -> 0.005 (*,[0]->0.66203,dam[1]->1*)};
(*note that the above matrix rows & columns are the reverse of the \
indices, for consistency with DICE*)
tatvec[m_] := {tat[m], tlo[m]};
tatransform = {{1 - \[Xi]1 (\[Eta]/t2xco2 + \[Xi]3), \[Xi]1 \[Xi]3}, {\[Xi]4,
1 - \[Xi]4}} /.initialvals;
taforcing = {\[Xi]1, 0} /. initialvals;
alvect[m_, n_] := alvect[m, n] = ReplacePart[ivector[m, n]*al[m],
{gamlcom -> 1, gtracom -> 1}] /. initialvals;
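The transition matrix and forcing vector above implement the DICE-style two-box temperature update: atmospheric and lower-ocean temperatures are advanced by a fixed 2x2 matrix plus a forcing term. The following sketch restates one decade step with plain stand-in names, because the Greek parameter names in the listing did not survive extraction; the coefficient values 0.22, 0.3, 0.05 and t2xco2 = 3 are those given above, while fco2x = 3.8 (the forcing of a CO2 doubling) and a forcing of 2 W/m2 are assumed purely for illustration:
(* illustration only: one decade step of the two-box climate module *)
xi1 = 0.22; xi3 = 0.3; xi4 = 0.05; t2xco2v = 3.; fco2x = 3.8;
trans = {{1 - xi1 (fco2x/t2xco2v + xi3), xi1 xi3}, {xi4, 1 - xi4}};
force = {xi1, 0};
temps = {0.7307, 0.0068}; (* tat, tlo in the base year *)
forcing = 2.0; (* assumed W/m2 *)
trans.temps + force*forcing (* -> approximately {0.92, 0.04} *)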
(*[m]==[m-1]/(1-g[m])*)
(*[m]==[m]*[m][m]^*)
};
pre3 = {};
preext = Simplify[Flatten[{
Array[pre1, {periods, cou}],
Array[pre2, periods],
pre3}] /. initialvals];
If[Length[preext] == 0, Print["*** error in PARAMETER PRE PROCESSING
***"]];
prelhs = preext /. {a_ == b_ -> a}; prerhs = preext /. {a_ == b_ -> b};
prerhsresult = prerhs //. Thread[prelhs -> prerhs];
modelinpvars = Complement[Union[Cases[Flatten[{
Array[svector, {periods, cou}],
Array[vector, {periods, cou}],
Array[zvector, {periods, cou}],
Array[vector, {periods, cou}],
Array[a, {periods, cou}],
Array[i, {periods, cou}],
Array[dvector, {periods, cou}]
}], x_Symbol[_Integer ..], Infinity]],
Flatten[Map[Table[s[m, 1, #], {m, periods}] &, Flatten[{1,
gamlcom}]]],
Flatten[Table[z[m, 1, p], {m, periods}, {p, urows}]],
Flatten[Map[Table[z[m, n, #], {m, periods}, {n, 2, cou}] &,
Flatten[{nontradcom, gamlcom, gtracom}]]]];
(*remove i in the optimisation variables because the values are zero*)
optimvars = Complement[modelinpvars, Flatten[Array[i, {periods, cou}]]];
obj2[m_] :=(*net present value of utility*){
npvutility[m] - (npvutility[m + 1] + Sum[utility[m, n], {n, cou}])/(1
+ \[Rho])};
obj3 = {npvutility[periods + 1] - Sum[utility[periods, n], {n, cou}]/\[Rho]};
objext = Select[Simplify[Flatten[{
Array[obj1, {periods, cou}],
Array[obj2, {periods}],
obj3} /. initialvals]], ! NumericQ[#] &];
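obj2 and obj3 define the net present value of utility recursively: each period's utility and the following period's NPV are discounted back one period, and the terminal NPV is treated as a perpetuity of the final period's utility. The discount symbol in the listing did not survive extraction, so the sketch below uses r as a stand-in rate; for a constant utility stream the recursion reproduces the perpetuity value exactly:
(* illustration only: constant utility stream with stand-in discount rate r *)
r = 0.05; tper = 5; uconst = 1.;
npv[tper + 1] = uconst/r; (* terminal value as a perpetuity *)
npv[m_] := npv[m] = (uconst + npv[m + 1])/(1 + r);
npv[1] (* -> 20., the perpetuity fixed point *)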
Flatten[{gamlcom, #}] -> (Total[gfgregion[regions[[n]]]]
[[#]]*10/gg2gtc)*\[Mu][m, n, #],
Flatten[{gtracom, #}] -> (Total[gfgregion[regions[[n]]]]
[[#]]*10/gg2gtc)*(1 - \[Mu][m, n, #])
} &, Complement[Range[urows], gamlcom, gtracom]]}]] /.
initialvals;
Optionally check endogenous model variables
(*USE THE FOLLOWING EIGHT LINES TO CHECK THE ENDOGENOUS MODEL VARIABLES*)
(*Expect the first Solve to return an svars error. The second Solve
should be error free.*)
(*modeltest=Select[modelext/.Thread[Cases[Array[vector,
{periods,cou}],x_Symbol[_Integer..],Infinity]-
>0.005]/.Thread[Flatten[Array[a,{periods,cou}]]-
>0.005]/.Thread[Flatten[Array[dam,periods]]->1],!NumericQ[#]&];
modelsolnstest=Flatten[Solve[Thread[modeltest==0]]];
modelsolnstestallvars=Union[Flatten[Map[Cases[modelsolnstest[[#]],x_Symbo
l[_Integer..],Infinity]&,Range[Length[modelsolnstest]]]]];
modelsolnstestoutvars=Sort[Map[First[Cases[modelsolnstest[[#]],x_Symbol[_
Integer..],Infinity]]&,Range[Length[modelsolnstest]]]];
modelsolntestinpvars=Complement[modelsolnstestallvars,modelsolnstestoutva
rs]
optimvarsadjusted=Select[optimvars/.Thread[Cases[Array[vector,
{periods,cou}],x_Symbol[_Integer..],Infinity]-
>0.005]/.Thread[Flatten[Array[a,{periods,cou}]]-
>0.005]/.Thread[Flatten[Array[dam,periods]]->1],!NumericQ[#]&]
modeltestchecksolve=Solve[Thread[modeltest==0],optimvarsadjusted];
modeltestexcessvars=Complement[modelsolntestinpvars,modelinpvars]
modeltestcheckvars=Complement[optimvarsadjusted,modelsolntestinpvars]*)
NotebookDelete[printtemp];
printtemp = If[Length[modelsolns] == 0, PrintTemporary["** error in Model
Solve **"], PrintTemporary["Macroeconomic model solve completed....post
processing...."]];
post2[m_] := {
(*eind is CO2-equivalent emissions GtC*)
eind[m] == Total[Sum[((Transpose[v[m, n]].svector[m, n]))[[gtracom]],
{n, cou}], 2],
(*etot is the sum of industrial and deforestation emissions*)
etot[m] == eland[m] + eind[m],
(*mat, mup & mlo are the Gt of carbon in the atmosphere*)
Thread[mvector[m] == mtransform.Prepend[mvector[m - 1], etot[m]]],
(*rforcing is radiative forcing in watts per m2*)
rforcing[m] == fex[m] + \[Eta] Log[2, (mat[m - 1] + mat[m])/(2*mat1750)],
(*tat and tlo are ˚C temperature changes in the atmosphere & lower
ocean*)
Thread[tatvec[m] == tatransform.tatvec[m - 1] +
taforcing*rforcing[m]],
(*\[CapitalOmega] is the damages multiplier of GNP*)
\[CapitalOmega][m] == 1/(1 + \[Psi]1*tat[m] + \[Psi]2*tat[m]^3)
};
post3 = {};
postext = Select[Flatten[{
Array[post1, {periods, cou}],
Array[post2, periods],
post3} /. initialvals], FreeQ[#, True | False] &];
postlhs = postext /. {a_ == b_ -> a};
postrhs = postext /. {a_ == b_ -> b};
postrhsresult = postrhs //. Thread[postlhs -> postrhs];
postrhsresult = postrhsresult //. modelsolns;
If[Length[postrhsresult] == 0, Print["*** error in POST PROCESSING
***"]];
ineqcons1[m_, n_] := {
Map[(ninvvec[m - 1, n]* s2avect[m, n] - (Transpose[v[m, n]].svector[m,
n]))[[#]] &, Complement[Range[urows], gamlcom, gtracom]],
exim[m, n].zvector[m, n] - deficit[m, n],
lendowment[m, n] - lab[m, n].svector[m, n],
lab[m, n].svector[m, n] - labempl[m, n]*[m, n],
Map[investv[m, n][[#]] &, Complement[Range[urows], gamlcom, gtracom]],
svector[m, n],
vector[m, n], 1 - vector[m, n],
a[m, n], 1 - a[m, n],
vector[m, n]
};
(*Map[-(((u[m,n]-Transpose[v[m,n]]).svector[m,n])*dvector[m,n]*al[m]+
a[m,n]*vector[m,n]+inv[m,n]*investv[m,n]+exim[m,n]*zvector[m,n]-
bias[m,n])[[#]]&,Flatten[{gamlcom,gtracom}]],*)
(*the first constraint following is to manage the ending inventories of
commodities except carbon*)
ineqcons2[n_] := {
Map[(investv[periods, n]*inv[periods, n] - investv[periods - 1,
n]*inv[periods - 1, n])[[#]] &, Complement[Range[urows], gamlcom,
gtracom]]
};
ineqcons3[m_] := If[m > 10, {tat[m - 1] - tat[m], eind[m - 1] - eind[m]},
{}];
(*After 100 years, temperature and emissions must continue to decline*)
ineqcons4 = {2.0 - tat[10]}; (*At 100 years, the temperature rise must be
no more than 2˚C*)
ineqconsext = Union[Flatten[{
Array[ineqcons1, {periods, cou}],
Array[ineqcons2, cou],
Array[ineqcons3, periods],
ineqcons4
}]];
NotebookDelete[printtemp]; printtemp =
PrintTemporary["Inequality constraints completed...."];
(*EQUALITY CONSTRAINTS*)
(*IMPORTANT NOTE: write all equality constraints as m.x==0 and omit ==0*)
eqcons1[m_, n_] := {};(**)
eqcons2[n_] := {};(**)
eqcons3[m_] = {\[CapitalOmega][m] - dam[m]}; (*calculated damage multiplier must equal the
assumed parameter. Note that both \[CapitalOmega] and dam are internal endogenous
variables and not optimisation variables*)
eqconsext = Union[Flatten[{
Array[eqcons1, {periods, cou}],
Array[eqcons2, cou],
Array[eqcons3, periods]}]];
NotebookDelete[printtemp];
printtemp = PrintTemporary["Equality constraints completed...."];
constraintsorig = Select[Join[Thread[ineqconsext >= 0], Thread[eqconsext
== 0]] /.initialvals, FreeQ[#, True | False] &];
constraints = Select[constraintsorig //. modelsolns /. initialvals,
FreeQ[#, True | False] &];
(*Print["Ready for optimisation..."];*)
objfncurr = 0;
optimaccuracygoal = 4;
optimmaxiterations = 2000;
NotebookDelete[printtemp];
printtemp = PrintTemporary["iter ... " <> ToString[Length[constraints]]
<> " constraints with accuracy goal of " <> ToString[10^-
optimaccuracygoal // N] <> " & max iterations " <>
ToString[optimmaxiterations]];
(*optim=Monitor[FindMinimum[{objfn,constraints},initvars, AccuracyGoal-
>optimaccuracygoal, MaxIterations-
>optimmaxiterations,StepMonitor:>(itercount++;
objfnprev=objfncurr;objfncurr=objfn;)],
{itercount,ScientificForm[objfncurr ],ScientificForm[objfncurr-
objfnprev], Count[constraints, False]}]//Timing;*)
optim = Monitor[
FindMinimum[{objfn, constraints}, initvars, (*AccuracyGoal →
optimaccuracygoal,*) MaxIterations -> optimmaxiterations,
StepMonitor :> itercount++],
itercount] // Timing;
resultvars = optim[[2, 2]];
(*update initvars for manual repeat calculations if required*)
initrepl = Thread[initvars[[All, 1]] -> initvars[[All, 2]]];
initvars = Thread[{optimvars, optimvars /. resultvars}];
(*NOTIFY OUTPUT OF OPTIMISATION*)
NotebookDelete[printtemp];
Print["Nonlinear optimisation in ", Round[optim[[1]], 1], " seconds in
", Round[MaxMemoryUsed[]*10^-6], "mb memory with objective function
result of ", objfn /. resultvars // Short];
Print["Maximum iterations set to ", optimmaxiterations, " with ",
itercount, " used. Optimisation accuracy set to ", 10^-
optimaccuracygoal // N , "."];
Print["There are ", Length[resultvars] + Length[modelsolns], "
variables in total (or ", Length[resultvars] + Length[modelsolns] +
Length[initialvals], " with parameters)."];
Print["The ", Length[resultvars], " optimisation variables are: ",
If[False, resultvars, resultvars // Short]]
];
If[Length[slackskey] > 0,
If[Length[slackskey] == 1,
Print["The only key unsatisfied constraint with slack > ",
slackcutoff // N, " is ", Flatten[Thread[{constraintsorig[[slackskey]],
slackskeyvals}]]],
Print["The ", Length[slackskey], " key unsatisfied constraints with
slacks > ", slackcutoff // N, " are ",
Thread[{constraintsorig[[slackskey]], slackskeyvals}]]],
Print["All ", Length[Cases[consvaluepost, False]] " of the unsatisfied
constraints have slacks < ", slackcutoff // N]
],
Print["Cannot identify constraints with slacks because constraint
lengths vary ", Length[constraintsorig], " ", Length[constraints]]];
Table[Beep[]; Pause[0.5], {i, 5}];
Speak["Stuart, I now have the results you asked for, so come over here
and give me a hug!"]]
FrontEndExecute[FrontEndToken["Save"]]
PlotLabel ->
Style[Framed[thiscase <> ": emissions eind,eland"], Blue,
Background -> LightYellow], PlotLegend -> {"eind", "eland"},
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
Print[Round[Sum[eind[i], {i, 1, 5}]*3.67 /. optimfinal,
1], " Gt CO2 2000-2050 "]
Print[Round[Sum[eind[i], {i, 1, periods}]*3.67 /. optimfinal,
1], " Gt CO2 ", periods, " periods"]
ListLinePlot[{Array[tat, periods], Array[tlo, periods]} //.
modelsolns2, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Temperature", "rise \[Degree]C"]},
PlotLabel ->
Style[Framed[thiscase <> "temperature rise"], Blue,
Background -> LightYellow], PlotLegend -> {"tat", "tlo"},
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[{Array[mat, periods]/convppm} //. modelsolns2 /.
initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"], "ppm"},
PlotLabel ->
Style[Framed[thiscase <> "carbon in atmosphere (mat) ppm"], Blue,
Background ->
LightYellow](*,PlotLegend->{"mat"},LegendSize->{0.4,0.2},\
LegendShadow->{.02,-.02},LegendPosition->{-.7,-.1}*)]
ListLinePlot[{Array[rforcing, periods] //. modelsolns2},
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Watts/sqm", "from 1900"]},
PlotLabel ->
Style[Framed[thiscase <> ": radiative forcing"], Blue,
Background -> LightYellow]]
ListLinePlot[Array[\[CapitalOmega], {periods}] //. modelsolns2,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Damages", "multiplier"]},
PlotLabel ->
Style[Framed[thiscase <> "damages \[CapitalOmega]"], Blue,
Background -> LightYellow]]
optimfinal
FrontEndExecute[FrontEndToken["Save"]]
(*Spatial Plots*)
output=optimfinal;
ListLinePlot[
Transpose[Table[\[Mu][m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "Food abate \[Mu]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[Table[\[Mu][m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Proportion", "Abated"]},
PlotLabel ->
Style[Framed[thiscase <> "Services abate \[Mu]"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Table[Total[Table[investv[m, n]*inv[0, n], {n, cou}], 2]/1000000, {m,
periods}] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "$trillion/decade"]},
PlotLabel ->
Style[Framed[thiscase <> "Investment"], Blue,
Background -> LightYellow]]
ListLinePlot[
Table[Total[Table[ninv[m, n, p], {p, urows - 2}, {n, cou}], 2]/
1000000, {m, periods}] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Capital", "$trillion"]},
PlotLabel ->
Style[Framed[thiscase <> "Capital"], Blue,
Background -> LightYellow]]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 1]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Food amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 2]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Mfg amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu][m, n, 3]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Services amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[
Table[(\[CapitalPi][m]*\[CapitalTheta][
m]*\[Mu]a[m, n]^(\[Theta] - 1))*10^3, {m, periods}, {n,
cou}]] /. output /. initialvals, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$ per", "tonne"]},
PlotLabel ->
Style[Framed[thiscase <> "Consumpt. amel/abate price"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.1}]
ListLinePlot[
Transpose[Table[s[m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, n, 2], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[v[m, n][[4, 4]]*s[m, n, 4] /. output, {m, periods}, {n,
cou}]], Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "amelioration & abatement"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[v[m, n][[5, 5]]*s[m, n, 5] /. output, {m, periods}, {n,
cou}]], Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "emission permits traded"], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, 1, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[1]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, 2, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[2]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[s[m, 3, p], {m, periods}, {p, urows - 2}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "s " <> aggregions[[All, 1]][[3]]], Blue,
Background -> LightYellow], PlotLegend -> vrows2,
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 1]*exim[m, n][[1]]/1000) /.
z[m, 1, 1] -> -Sum[
z[m, i, 1]*exim[m, i][[1]]/exim[m, 1][[1]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 2]*exim[m, n][[2]]/1000) /.
z[m, 1, 2] -> -Sum[
z[m, i, 2]*exim[m, i][[2]]/exim[m, 1][[2]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 3]*exim[m, n][[3]]/1000) /.
z[m, 1, 3] -> -Sum[
z[m, i, 3]*exim[m, i][[3]]/exim[m, 1][[3]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$billion", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 4]*exim[m, n][[4]]) /.
z[m, 1, 4] -> -Sum[
z[m, i, 4]*exim[m, i][[4]]/exim[m, 1][[4]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC amel", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[4]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[(z[m, n, 5]*exim[m, n][[5]]) /.
z[m, 1, 5] -> -Sum[
z[m, i, 5]*exim[m, i][[5]]/exim[m, 1][[5]], {i, 2, cou}], {m,
periods}, {n, cou}]] /. output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["GtC permits", "per decade"]},
PlotLabel ->
Style[Framed[thiscase <> "z " <> vrows2[[5]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, 1, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[1]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, 2, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[2]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, 3, p], {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Activity", "Industry"]},
PlotLabel ->
Style[Framed[thiscase <> "invest " <> aggregions[[All, 1]][[3]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, n, 1], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[1]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, n, 2], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[2]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[Table[invest[m, n, 3], {m, periods}, {n, cou}]] /. output,
Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["Investment", "Activity"]},
PlotLabel ->
Style[Framed[thiscase <> "investment " <> vrows2[[3]]], Blue,
Background -> LightYellow], PlotLegend -> aggregions[[All, 1]],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[ninv[m, 1, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[1]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[ninv[m, 2, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[2]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
ListLinePlot[
Transpose[
Table[ninv[m, 3, p]/1000000, {m, periods}, {p, urows - 2}]] /.
output, Filling -> Axis,
AxesLabel -> {Labeled["Decades", "from 2004"],
Labeled["US$trillion", "Capital"]},
PlotLabel ->
Style[Framed[thiscase <> "ninv " <> aggregions[[All, 1]][[3]]],
Blue, Background -> LightYellow], PlotLegend -> Take[vrows2, 3],
LegendSize -> {0.4, 0.2}, LegendShadow -> {.02, -.02},
LegendPosition -> {-.7, -.3}]
FrontEndExecute[FrontEndToken["Save"]]
KKT multipliers
(*DUAL SOLUTION: using the Karush-Kuhn-Tucker (KKT) conditions (Taha, 1982,
pp. 769-773)*)
(*This code is designed to cope with large scale optimisation results*)
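The recovery of dual values below rests on the two standard KKT conditions: stationarity of the Lagrangian (the objective gradient is a non-negative combination of the gradients of the binding constraints) and complementary slackness (a multiplier can be nonzero only where its constraint binds). A self-contained toy example, separate from the Sceptre model, showing both conditions:
(* toy problem: minimise x^2 subject to x - 1 >= 0 *)
ftoy = xtoy^2; gtoy = xtoy - 1;
soltoy = FindMinimum[{ftoy, gtoy >= 0}, {xtoy, 2}]; (* -> {1., {xtoy -> 1.}} *)
xstar = xtoy /. soltoy[[2]];
(* stationarity: D[ftoy] == lambda D[gtoy]; complementary slackness: lambda*(xstar - 1) == 0 *)
lambdatoy = (D[ftoy, xtoy]/D[gtoy, xtoy]) /. xtoy -> xstar (* -> 2., the constraint binds *)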
Clear[\[Lambda], limitfn, limit2, gradg2, h];
gradf = SparseArray[D[objfn, {optimvars}] /. output];
outputres = Thread[optimvars -> (optimvars /. output)];
outputnonres = Complement[output, outputres];
limit0 = Simplify[
constraints /. {a_ >= b_ -> (a - b), a_ <= b_ -> (b - a),
a_ == b_ -> (a - b)} /. outputnonres];
limit1 = Simplify[limit0 /. outputres];
limit2[z_] := Module[{},
Options[limitfn] = outputres;
SetOptions[limitfn,
optimvars[[z]] -> OptionValue[limitfn, optimvars[[z]]] + h];
Return[limit0 /. Options[limitfn]]
];
(*Since integrals may be non-analytic use the general definition of an
integral*)
gradg2[z_] :=
SparseArray[
Limit[Chop[limit2[z] - limit1]/h,
h -> 0] /. {∞ -> 0, -∞ -> 0}];
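gradg2 forms each constraint gradient from first principles, as the limit of a difference quotient in the perturbation h, rather than by symbolic differentiation of the substituted constraints. The same idea on a single hypothetical function, unrelated to the model's constraints:
(* illustration only: difference-quotient derivative at x = 2 *)
gtest[x_] := x^3 - 4 x;
Limit[(gtest[2 + hh] - gtest[2])/hh, hh -> 0] (* -> 8, agreeing with D[gtest[x], x] /. x -> 2 *)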
(*Solving 500,000 derivative equations takes about 20 hours on 4 cores \
so use parallel processing*)
DistributeDefinitions[optimvars, \
outputres, limit0, limit1, limit2, gradg2]
gradg = Parallelize[Table[gradg2[z], {z, Length[optimvars]}]];
If[False, Print["gradg"]; Print[Normal[gradg]]];
(*The UnitStep is inserted because some constraints have small negative
slacks and are therefore set as binding*)
(*kkt=Chop[Flatten[FindInstance[Flatten[{
Thread[gradf-Array[,Length[constraints]].Transpose[\
SparseArray[gradg]]==0],
Thread[Pick[Array[,Length[constraints]],constraints/.{a_>=b_-\
>True,a_==b_->False}]>=0],
Thread[Chop[limit1]*UnitStep[Chop[limit1]]*Array[,Length[\
constraints]]==0]
}],Array[,Length[constraints]],Reals]]]*)
kktzerosub =
Flatten[Solve[
Thread[Chop[limit1, 10^-5] UnitStep[Chop[limit1, 10^-5]]*
Array[\[Lambda], Length[constraints]] == 0]]];
kktnonzeros =
Cases[Array[\[Lambda], Length[constraints]] /. kktzerosub,
x_Symbol[_Integer], Infinity];
kktnonzero = FindInstance[Select[Flatten[{
Thread[
gradf - Array[\[Lambda], Length[constraints]].Transpose[
SparseArray[gradg]] == 0],
Thread[
Pick[(Array[\[Lambda], Length[constraints]]),
constraints /. {a_ >= b_ -> True, a_ == b_ -> False}] >= 0]
}] /. kktzerosub, FreeQ[#, True]], kktnonzeros, Reals, 20]
kkt = Union[kktzerosub, kktnonzero[[1]]];
(*Print KKT multipliers*)
Table[{constraintsorig[[i]], (\[Lambda][i] /. kkt)}, {i, Length[constraintsorig]}]
Colophon
This dissertation was prepared with OpenOffice 3.2.0.5 created by Novell Inc.
©Sun Microsystems Inc. on openSUSE 11.2 ©Sun Microsystems Inc. with KDE
4.3.4 “release 2” Desktop © The Regents of the University of California. The
font is Bitstream Vera Serif ©Bitstream Inc. released under an open source
agreement with the GNOME Foundation. The literature research was
undertaken through the University of Technology, Sydney databases and other
resources. References were recorded and inserted using Zotero 2.0.rc5 ©
Centre for History and New Media, George Mason University.
CD-ROM Attachment
The paper version of this thesis contains a CD-ROM with the following files.
stuart_nettleton_dissertation_files
gtap_specification_files
☑ sntest01.agg Aggregation specification
☑ sntest01.txt Output of aggregation
mathematica_utility_files
gtap_make_mathematica_db_03.nb Make Mathematica database from GTAP
gtap_comparison_uv_amatrix.nb Due diligence functions
eghg_aggregate.nb Emissions aggregation functions
Gtapfunctions.m Database mining functions
Gtapaggregation.m Database aggregation functions
Topofunctions.m Acyclic processor functions
gtap3res.script & gtap3res.m GTAP aggregated database
gtap3eghg.script & gtap3eghg.m GTAP emissions database
☑ readme_utility_files.txt Notes for placing database resources
mathematica_model_files
m12_13p_2C_100.nb Base Case of 2°C rise at 100 years
m12_13p_2C_100_no_gaml_no_gtra.nb Base Case with no emission permits or
amelioration/abatement trading
m12_13p_2C_100_s2a.nb Base Case with impaired Sales/Assets ratio
m12_13p_2C_100_tcx2.nb Base Case with 2x increase in technology cost
m12_13p_2C_100_tcx10.nb Base Case with 10x increase in technology
cost
m12_13p_2C_100_tcx20.nb Base Case with 20x increase in technology
cost
m12_13p_350_100.nb Hansen/Gore/Tällberg 350 ppm Case
m12_13p_450_100.nb Previous world target of 450 ppm
m12_13p_550_100.nb Expected 550 ppm case
m12_13p_full_amel.nb Radical perspective case
m12_13p_full_amel_no_cost.nb Laissez faire case
m12_13p_full_amel_no_cost_no_dam.nb Sceptic Case
m12_13p_normal.nb Normal or “business as usual” case for
comparison with Nordhaus' DICE
topo_test12_comp_sceptre.nb Nordhaus' DICE business as usual case
☑ readme_model_files.txt Notes for running model files