
Mathematics of Climate Change
A New Discipline for an Uncertain Century

Dana Mackenzie
2007

Published by Mathematical Sciences Research Institute
Berkeley, CA

Foreword
In the spring of 2007 MSRI organized a large public event, Climate Change: From
Global Models to Local Action, to examine some political and economic aspects
of climate: what we know, what we guess, how and how much our society can and
should respond to what we are learning. We chose the date to coincide with a visit by
Congressman Jerry McNerney, whose background, as both a mathematician and an
advocate for alternative energy sources, made him a model participant for an event
that would combine the perspectives of several disciplines.
The public panel discussion was followed on the next two days by a scientific symposium, in which mathematicians from many different fields mixed with economists, climate modelers and others who have already been working on the many questions involved. This booklet is a record of some of the discussions and ideas in those meetings.
The purpose of these events was to connect the mathematical community with the best current research and thinking about climate change, and to point out the many different kinds of mathematical challenges that are presented by this issue. Society needs to know more, and more accurately, about what is happening with the earth's climate, and to prepare for whatever action is necessary and practical to undertake. Mathematics and statistics already play a central role in this, as in any sort of modeling effort. Likewise, computer science must have a say in the effort to simulate Earth's environment on the unprecedented scale of petabytes. With a problem of this complexity, new mathematical tools will undoubtedly be needed to organize and simplify our thinking. Thus it seemed to us at MSRI important to encourage direct discussions between those already in the field and the many mathematicians whose skills, and whose students' skills, can bring new insights.
As Director of MSRI I organized the conference, but as a non-expert I relied on a number of others to make sure that the important scientific aspects were well covered, and to make sure that the conference would represent the best current science in the field. I am particularly grateful to Inez Fung, Bill Collins and Chris Jones for their scientific advice, and to Orville Schell for his advice and help in arranging the public event. Nat Simons provided expert suggestions as well as enthusiasm and great support throughout; without him the event could never have happened.
David Eisenbud
Director, Mathematical Sciences Research Institute, 1997-2007
Inez Fung (left) and David Eisenbud (right),
co-organizers of the MSRI symposium on
climate change.

Contents

Foreword
Introduction
The MSRI Symposium on Climate Change
Climate Change Mitigation
Climate Change Modeling
Research Topics in Climate Change
Opportunities and Challenges for Mathematical Sciences
Conclusions and Recommendations
Appendix

Case Studies
How Do We Know? The Evidence for Climate Change
The IPCC Report: A Bleak Future
Energy Economics
Mathematics and Renewable Energy
The Sea Ice Conundrum
Climate and the Indian Rice Crop
Rainfall: Beyond "It's Warmer, So It's Moister"
© 2008 Mathematical Sciences Research Institute
17 Gauss Way, Berkeley, CA 94720-5070
Telephone 510 642 0143
www.msri.org

Dana Mackenzie
[email protected]

Introduction
When the history of climate change is written, the years 2006 and 2007 may be seen as a turning point: a time when climate change ceased to be seen as a "green" issue and became an everyone issue. In 2006, Al Gore's movie An Inconvenient Truth placed global warming on America's movie screens. In October 2006, the British government released the Stern Review, a first attempt to quantify the economic costs of climate change. Over a period of four months, from February to May 2007, the Intergovernmental Panel on Climate Change (IPCC) released its fourth report on climate change, which attracted much more publicity than the previous three. In April 2007, the United States Supreme Court ruled that the Environmental Protection Agency has the authority to regulate carbon dioxide and other greenhouse gases. In October 2007, Gore and the IPCC shared the Nobel Peace Prize for "their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change."

The increase in public discussion may reflect an increasing comprehension that the scientific debate over the reality of global warming has ended. (See sidebar, How Do We Know?) The IPCC's fourth assessment stated that warming of the climate is "unequivocal" and that it was "very likely" (meaning more than 90 percent likely) that most of the warming is anthropogenic. (See Figure 1.)
There are many uncertainties, however, in the specifics of climate change and its impact. Climate models tend to agree on twenty-year projections, which are not very sensitive either to variations in the model physics or to different emissions scenarios.[1] Disagreement arises when projections are carried out to the end of the century. For example, the equilibrium response to a hypothetical scenario, involving an immediate doubling of carbon dioxide, leads to varying predictions of warming from 1 degree Centigrade to a truly staggering 12 degrees. (Note that these should not be interpreted as literal forecasts, because an overnight doubling is impossible.) The difference arises primarily from uncertainties in the climatic feedback processes represented in the models, which tend to amplify the direct effects by two or three times.
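A back-of-the-envelope sketch in Python shows how feedbacks produce this amplification. The direct, no-feedback warming from doubled carbon dioxide is commonly estimated near 1.2 degrees Centigrade; that value, and the feedback factors below, are illustrative assumptions, not numbers from the IPCC report. A net feedback factor f amplifies the direct response by 1/(1 - f):

```python
# Minimal sketch of feedback amplification in the equilibrium climate response.
# delta_T0 is the direct (no-feedback) warming from doubled CO2; 1.2 C is a
# commonly quoted estimate, used here purely for illustration.
delta_T0 = 1.2  # degrees Centigrade

for f in (0.0, 0.4, 0.5, 0.65):  # hypothetical net feedback factors
    delta_T = delta_T0 / (1.0 - f)  # standard linear-feedback formula
    print(f"feedback factor {f:.2f} -> equilibrium warming {delta_T:.1f} C")
```

With f between about 0.5 and 0.65, the direct effect is amplified by a factor of two to three, and small uncertainties in f translate into the wide range of predictions quoted above.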
Other aspects of climate change are even harder to predict accurately
than temperature. We can be certain that precipitation patterns
will change, and all the models indicate that some subtropical and
tropical regions will experience severe droughts.
But the models give contradictory predictions of where the droughts are likely to occur. As
another example, scientists reported in early 2007 that glaciers in Greenland are melting
faster than any of the models in the IPCC report had predicted. Clearly, there are processes
going on that we do not understand. Yet the extent of the polar ice caps is a critical variable
in climate models, because it triggers a feedback loop: the more the ice melts, the more
sunlight is absorbed by Earth (instead of being reflected into space by the ice). This leads to an increase in temperature, which in turn stimulates more melting of the ice cap. This melting is of concern even to people who live far away, because the melting of glaciers on
land is a major contributor to rising sea levels.
[1] Different assumptions about emissions of greenhouse gases can be used as inputs into the models.
Figure 1: Radiative Forcing Components. Anthropogenic (human-induced) contributions to global climate change are measured in watts per square meter; in other words, the increase in solar radiation that would produce an equivalent warming effect. Some contributions (e.g., the greenhouse effect) are positive, and others (e.g., aerosols) are negative. However, the net anthropogenic effect since 1750, 1.6 watts per square meter, is unambiguously positive, and also significantly greater than the amount of warming due to natural fluctuations in the sun's brightness (0.12 watts per square meter). Image from Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change.
The climate models used to make the IPCC projections are very complex systems of nonlinear equations solved on large computers. The IPCC relied on 24 different climate models in its report. Many of them have been developed under the auspices of national meteorological offices. Though some models are superior to others, identifying them publicly is a ticklish political issue. Because the models have different assumptions, we run the risk of comparing apples to oranges. In many of its scenarios, the IPCC report simply averages all the models together equally. It is not at all clear that this methodology is an optimal or even sound way to integrate the data.
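The methodological question can be made concrete with a small sketch. Suppose, hypothetically, that five models give different projections and that we also have some measure of each model's skill; equal averaging, as in the IPCC report, corresponds to uniform weights. All the numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical end-of-century warming projections (degrees Centigrade)
# from five models, and made-up skill scores; none of these numbers
# come from the IPCC report.
projections = np.array([2.1, 2.8, 3.0, 3.4, 4.2])
skill = np.array([0.9, 0.7, 0.8, 0.5, 0.3])

print(f"equal-weight mean:   {projections.mean():.2f} C")
print(f"skill-weighted mean: {np.average(projections, weights=skill):.2f} C")
```

How the weights should be chosen, and whether any weighted mean is even the right summary statistic, is exactly the kind of open question the symposium aimed to raise.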
A serious limitation of the current models is their coarse scale. At present, even the highest-resolution models chop up the world into pieces that are 10 to 50 kilometers wide. This resolution is not fine enough to capture important details of topography, such as mountain ranges, and it is also not fine enough to model individual clouds, which play a complex and important role in the climate system.[2] (See Figure 2.) In practice, model parameters, especially those that represent turbulent or fine-scale processes, are optimized or tuned in order to match available observations. For example, the effect of clouds has to be added to the model as an aggregate term, with all the uncertainties that implies. If the climate models could be improved to 1-kilometer resolution, then clouds and finer topography could be built into them; however, it has been estimated that this would require a 10-petaflop computer with 20 million core processors. That kind of computing power is on its way, but it is not here yet. Even when it arrives, it's questionable whether climate modelers can take full advantage of it. Many models are legacy codes of a half million lines or so that are not optimized for massively parallel computation.
Finally, the mathematics of dynamical systems has taught us that uncertainty is an inevitable part of predictions based on nonlinear physical models. This irreducible imprecision requires us to use a variety of models, and run them with a diverse set of parameters, in order to capture the real range of uncertainty in the climate system. It also means that climate modelers must take care to communicate to policy makers that uncertainty is part of the story. As models improve and more information becomes available, the model forecasts may change, and this could lead to frustration among those needing to make decisions based on their predictions. This frustration might be avoided if the original predictions are presented as a range of possibilities rather than a single magic number.
Be that as it may, accurate and reliable prediction of global climate change is a key to policy
making. It is clear that policies should be based on predictions that are built on a sound
foundation. Mathematical scientists need to get involved, because the central questions facing
this research are mathematical in nature.
[2] For example, clouds provide an important negative feedback mechanism that could reduce global warming. As the moisture in the atmosphere builds up due to warming, it could create more clouds, which would reflect more sunlight back into space. However, this effect is by no means automatic; it depends on where the cloud is. High-altitude clouds radiate to space at a colder temperature and actually produce a net warming.
Figure 2: Gridding. Climate models divide the world's atmosphere and oceans up into a very coarse grid. At present, even the best models do not have meshes fine enough to simulate individual tropical cyclones or the effect of mountain ranges. Future models may incorporate adaptive refinements of the mesh size, as shown on the right.
The MSRI Symposium on Climate Change

From April 11 to April 13, 2007, the Mathematical Sciences Research Institute (MSRI) convened a symposium, sponsored by the Sea Change Foundation, to assess how mathematicians can address the broader issues of climate change and the narrower issues of methodology lying behind the climate models. The symposium consisted of two parts. Several leading politicians, business people and academic experts on energy and climate convened for a panel discussion (see Figure 3) at San Francisco's Palace of Fine Arts Theater on April 11, which drew a crowd of more than 300 people. On the following two days, approximately 80 mathematicians and scientists attended a scientific symposium at the MSRI headquarters in Berkeley. (See Appendix B.)
Inez Fung, the co-director of the Berkeley Institute for the Environment and one of the authors of the IPCC report, started off the public event with a brief overview of the evidence for global warming and the current state of knowledge about what will happen next. She characterized the IPCC report, which acknowledges that climate change has been caused by anthropogenic effects, as a bittersweet victory, because "we've been saying the same thing for 20 years." She outlined the reasons why we know that the climate is warming (see Sidebar, How Do We Know?), and she discussed the main forecasts from the IPCC report (see Sidebar, A Bleak Future).
After Fung's introduction, MSRI director David Eisenbud introduced Congressman Jerry McNerney (see Figure 4) and California Assembly Member Ira Ruskin (see Figure 5), who represents Silicon Valley. After brief remarks by McNerney and Ruskin, Eisenbud summoned onto the stage a panel of experts, which consisted of Daniel Kammen, professor of energy at the University of California at Berkeley; Severin Borenstein, director of the University of California Energy Institute; Nancy McFadden, senior vice president of public affairs for PG&E Corporation; Doug Ogden, executive vice president of the Energy Foundation in San Francisco; Michael Peevey, president of the California Public Utilities Commission; and Inez Fung, who had already been introduced. The legislators were given an opportunity to pose questions to the experts, and then the floor was opened to questions from the audience. The following section is based in large part on the questions and answers that ensued.
How Do We Know?
The Evidence for Climate Change

Climate models and their projections for the future, especially extended out to 2100, are subject to a variety of uncertainties. These include imperfections in the climate models, the limitations of our computing power, and the inherently unpredictable nature of nonlinear equations. These uncertainties must not be allowed to obscure the central facts emphasized in this year's IPCC report: Climate change is happening, human activities are responsible for most of the change, and the evidence indicates that it is accelerating.
The basic facts that lead to this
conclusion are the following:
1. Carbon dioxide levels (and levels
of other greenhouse gases, such
as methane) have been rising
for at least half a century. In fact,
they have risen by as much since
1960 as they did between the
last Ice Age and 1960. (See Figure
1.1.) The current concentration of
carbon dioxide in the atmosphere,
380 parts per million, is greater
than it has been at any time in
the last 650,000 years, according
to ice cores that contain trapped
bubbles of earlier atmospheres.
2. Carbon from fossil fuels is being
added to the atmosphere. We
know this because fossil fuels
contain a lower ratio of the
isotope carbon-13 to carbon-12
than the atmosphere as a whole
does, because they are derived
from plant matter and plants have
a preference for the lighter isotope
of carbon. Tree-ring and ice-core
data show that the 13C:12C ratio
began to decrease just at the same
time the overall levels of carbon
dioxide began to increase.

Figure 1.1: The concentration of carbon dioxide in the atmosphere over the last 10,000 years (main figure) and over the last 250 years (inset). Data from the last 50 years (pink and red) are based on direct measurement, and earlier concentrations are inferred from ice cores. At right, the concentrations are converted to an equivalent increase in solar radiation (using the year 1750 as a baseline). Image from Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change.
Figure 3: The panel at MSRI's public symposium on climate change. Front row, left to right: Nancy McFadden, Doug Ogden, Michael Peevey, Inez Fung. Back row, left to right: Daniel Kammen, Severin Borenstein, Jerry McNerney, Ira Ruskin, David Eisenbud.
3. Evidence from ice cores shows a
very strong correlation between
carbon dioxide levels and global
temperatures. (See Figure 1.2.)
When carbon dioxide levels go up,
so does the temperature.
4. The physics behind the
greenhouse effect is not in
dispute. It has been known
for more than a century that
gases such as carbon dioxide,
methane, and water vapor absorb
infrared radiation coming from
Earth (which would otherwise
escape to space) and re-radiate
some of its energy back toward
Earth. Therefore an increase in
greenhouse gases must lead to an
increase in temperature, unless
some other process comes along
to prevent it.
5. Finally, Earth's surface temperature has increased sharply in recent years, just as one would expect. The observed warming trend over the last 100 years was 0.74 degrees per century, but over the last 50 years the rate of increase has nearly doubled, to 1.3 degrees per century. The six hottest years on record occurred in 1998 (an El Niño year), 2002, 2003, 2004, 2005, and 2006. The warming effect is now too large to be explained as a statistical aberration. (See Figure 1.3.)
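The trend figures quoted in item 5 are slopes from ordinary least-squares fits over different windows of the temperature record. A small sketch with synthetic data (illustrative numbers, not the observational record) shows the computation:

```python
import numpy as np

# Synthetic annual temperature anomalies: roughly 0.5 C/century before
# 1957 and 1.3 C/century after, plus noise. Illustrative values only.
rng = np.random.default_rng(0)
years = np.arange(1907, 2007)
increments = np.where(years < 1957, 0.005, 0.013)   # degrees per year
anomaly = np.cumsum(increments) + rng.normal(0.0, 0.08, years.size)

def trend_per_century(y, temps):
    """Least-squares slope, converted from degrees/year to degrees/century."""
    return np.polyfit(y, temps, 1)[0] * 100.0

print(f"trend over the last 100 years: {trend_per_century(years, anomaly):.2f} C/century")
print(f"trend over the last 50 years:  {trend_per_century(years[-50:], anomaly[-50:]):.2f} C/century")
```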
Climate Change Mitigation

What actions is Washington taking to reduce global warming?

At present, the U.S. government is trailing both public opinion in the U.S. and many other world governments in addressing the climate-change problem. Nevertheless, there are some grounds for optimism. At the United Nations Climate Change Conference in Bali, in December 2007, the U.S. agreed to a roadmap for future negotiations that did not set specific emissions targets. Also, in that same month Congress passed and President Bush signed into law the Energy Independence and Security Act of 2007, which will increase automobile mileage standards to 35 miles per gallon by 2020 (the first change in the standards in more than 30 years).
As of July 2007, one hundred bills related to climate change
had been introduced in the current session of Congress. For
example, H.R. 2809, the New Apollo Energy Act, would set a
target of decreasing greenhouse gas emissions to 80 percent
below 1990 levels by the year 2050. It would institute a carbon
cap-and-trade program, commit $49 billion in Federal loan
guarantees for the development of clean energy technologies,
offer tax incentives for consumers to purchase plug-in hybrid
vehicles, increase funding for research and development of
clean energy technologies, and create a venture capital fund to
help move new technologies to market.
It is also important for the U.S. government to make climate
change a part of its foreign policy, because climate change is
a problem of unprecedented international scope. H.R. 2420, the International Climate Cooperation Re-engagement Act, would create an Office on Global Climate Change within the State Department and commit the U.S. to sending high-level diplomats to future international conferences on climate change. Though proposals like H.R. 2809 and H.R. 2420 did not become
law this year, they represent an increased awareness of the
climate change issue on Capitol Hill.
How is Sacramento addressing global warming?
The panelists emphasized that California can do little on its own to solve the
climate change problem, because it is global in scope. Nevertheless, in the absence
of concerted Federal action, California has played and can continue to play an
important role as a model for other states and even other countries.

Figure 1.2: Over the last 400,000 years, global greenhouse gas concentrations (top) and estimated temperatures (bottom) have been extremely tightly synchronized.
Figure 1.3: Direct measurements of global mean temperature leave little doubt that a warming trend exists, and is accelerating. The average trends over the last 150 years (red), the last 100 years (blue), the last 50 years (orange) and the last 25 years (yellow) have gotten progressively steeper.
Figure 4 (left): U.S. Congressman Jerry McNerney at the MSRI symposium.

Figure 5 (right): California Assembly Member Ira Ruskin speaking at the MSRI symposium.
The IPCC Report: A Bleak Future
In 2007, the Intergovernmental Panel
on Climate Change (IPCC) released
its Fourth Assessment Report on the
world's climate. The report has been
released in three sections, produced
by three separate working groups. A
final synthesis report was released
in November.
Working Group I focused on the
physical indicators of climate change
and projections of key climate
variables into the future. This group
used the combined output of 24
climate models to project surface
temperatures, precipitation, and sea
level changes out to the last decade
of this century, under six different
emissions scenarios. The best
estimates of the temperature change
range from 1.8 degrees Centigrade,
in the most optimistic case (a world
with high priority on sustainable
development) to 4.0 degrees in the
most pessimistic case (a business-
as-usual world with intensive fossil-
energy use). For comparison, the report also includes one "too good to be true" scenario, in which carbon
emissions stay constant at 2000
levels. This scenario represents the
minimum amount of climate change
to which we are already committed:
about 0.6 degrees Centigrade.
The IPCC report specifically avoids
any sort of doomsday scenario
involving a widespread breakdown
of social institutions (though such a
scenario might have made for juicier
headlines).
As explained elsewhere in this
report, individual numbers do
not adequately summarize the
complexity of climate models. For
instance, the likely range for the
business-as-usual scenario is from
2.4 to 6.4 degrees Centigrade. This
translates to a 2/3 probability that the actual temperature increase would lie within the stated range, and a 1/3 probability that it would be greater or less. The temperature increase is
In particular, the California
assembly last year passed
Assembly Bill 32, the Global
Warming Solutions Act of 2006,
which committed California
to reducing its greenhouse gas
emissions in 2020 to 1990 levels.
The governor has proposed an allocation of $36 million to create the new positions required to implement the act; for example, to determine what exactly is meant by "1990 levels." Panelist Borenstein commented that we should focus on ways of meeting this target that are not idiosyncratic to California, but can be exported to the rest of the world.
On this year's docket, the California legislature is considering bills to create green building standards for the state government (A.B. 35); to provide funding for alternative fuel research (A.B. 118); and to create rebates on the cleanest new cars and surcharges on the dirtiest ones (A.B. 1493). The latter bill was voted down between the time of the symposium and the writing of this document.
What are the most promising technologies for mitigation of climate change?

Panelist Kammen commented that we should not look for a single magic bullet, but should look to a variety of technologies. At present, Germany, Spain, and Denmark are leading the way in wind energy, but the U.S. has a large untapped potential (see Figure 6) and placed more new wind generation capacity in service than any other nation in 2006. The photovoltaic industry is still trying to reduce costs, but on a hot summer day solar power can be generated more cheaply than the spot market price of energy. Recently, scientists invented spray-on solar panels, a material like spray paint that can generate electricity from infrared light. Biofuels continue to be a major area of research, as scientists try to find crops that can be harvested to produce fuel more efficiently than corn. Energy efficiency is also an important field of research, with hybrid vehicles, plug-in hybrids, and all-electric vehicles leading the way. (See Figure 7.)

A serious issue for these new technologies, Kammen said, is how to move past the so-called "valley of death" of tiny market share and high cost. Policy-makers need to set aside money for these emerging technologies not only in the research stage, but also in the stage of moving them to market. The New Apollo Energy Act would be a step in that direction.

Figure 6: Map of installed wind-energy capacity in the United States in 2007, in megawatts. Wind energy potential is greatest in the Great Plains and Rocky Mountain states, but so far the exploitation of this resource has been very uneven.

Figure 7: The Tesla Roadster has been cited often as a model for high-performance all-electric vehicles. As of 2007, no Roadsters are yet available for purchase, but reservations are being taken.
Will our efforts to reduce greenhouse gas emissions be overwhelmed by
the increasing emissions from China?
Yes, but that doesn't give America an excuse for inaction.

China is very dependent on coal energy, and built 92,000 megawatts of new coal-fired plants in 2006: enough in one year to cancel the entire greenhouse gas reductions pledged by European nations under the Kyoto protocol. As long as the U.S. does not observe the Kyoto treaty, China will be able to hide behind America.
Even so, China has made major commitments to improve its energy efficiency. The eleventh Five-Year Plan calls for a 20 percent increase in energy efficiency by 2010. All of China's leading 1000 enterprises, which together account for one-third of the country's energy usage, have made a commitment to increase their energy efficiency by 20 percent. China has also pledged to derive 15 percent of its energy from alternative fuel by 2020.

Although China has recently passed America as the world's largest emitter of greenhouse gases, panelist Ogden said that it is important to realize that it also has a much larger population base. China's per capita energy use is still only an eighth of ours. It is difficult to tell the Chinese that they must cut back when they have not yet reached the standard of living and energy use that Americans enjoy.
Can renewable energy sources be integrated with the
rest of the power grid even though several of them are
intermittent, i.e., not always available?
Wind and solar energy, of course, are not under our control. The wind doesn't blow when we tell it to, and the sun shines only during the daytime (and even then it may be obscured by clouds). Fortuitously, the time of peak availability of solar energy coincides with the time of peak demand. Wind energy can be load-shaped, by using natural gas, for example, to fill in gaps in availability. Also, pricing schemes called demand response programs can help shift the demand from peak hours to other times of day. Battery storage may make it possible to distribute energy availability more evenly. Finally, some alternative energy sources, such as geothermal and biomass, do not have any intermittency problems.
Panelist Peevey noted that the California Public Utilities Commission is committed by
statute to obtain 20 percent of our energy from renewable sources by 2010, and committed
by policy to obtain 33 percent from renewables by 2020. He expects that the state will in
fact meet or come very close to the former target. Recent news reports, however, indicate
that California is still well below the target, with 12 percent of its energy coming from
renewables.
What kinds of governmental regulation would PG&E like to see?
Panelist McFadden said there was no doubt that we need caps on carbon production.
However, she felt that it would be a heavy lift to get such caps passed at a national level in the short term. As an intermediate target, she suggested that the rest of the country should improve its energy efficiency as much as California has. The per capita usage of energy in California has remained constant in recent years, while increasing 50 percent in the United States as a whole. Because California is a bellwether for the nation, California should continue doing more to improve its energy efficiency.
not uniformly distributed (see Figure
2.1) but is greater over land and
much greater in the Arctic. Increases
in precipitation are very likely in
polar regions and droughts are likely
in subtropical regions.
The consequences of
these climate changes
were explored in the
Working Group II
report. Many biological
effects are already
apparent, such as
earlier spring blooming
and shifts in the range
of species. Under even
the optimistic scenario,
the report states that
about 20 to 30 percent
of plant and animal
species are likely
to be at increased
risk of extinction. A
modest amount of global warming (less than 3 degrees Centigrade) would be favorable for global
food production. However, greater
temperature increases would have
a negative effect, and in arid and
tropical regions, even a small rise
in temperature is expected to
decrease crop productivity. Extreme
weather events, such as floods
and hurricanes, will become more
common.
Finally, Working Group III reported
on the potential of mitigation
efforts to reduce greenhouse gas
emissions. It concluded that options with net negative costs (those that save more money over the long run than they cost) can already reduce emissions by 7 to 10 percent, compared to the business-as-usual scenario. Further reductions, up to
46 percent, can be achieved with
strong enough incentives, in the
form of carbon trading and carbon
taxes or fees.

Figure 2.1: The IPCC's climate change simulations for the decade 2020-2029 (left) and 2090-2099 (right), under three different scenarios. Note the uneven distribution of temperature change, with especially dramatic increases in polar regions. Image from Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change.
Is ethanol from corn a boondoggle?
This question elicited some disagreement. Panelist Kammen said that his studies show that ethanol derived from corn is marginally more efficient than gasoline. Kammen went on to say, though, that it would be surprising if corn, which has been developed for generations as a food crop, happened to also be optimal for use as a fuel. (See Figure 8.) Other options, including switchgrass or landfill waste, will probably turn out to be better. Also, the net emissions effect of any biofuel will improve dramatically if the distillery runs on a cleaner energy source. There would be no point in building an ethanol distillery and powering it with a dirty coal-fired generator.

Panelist Borenstein remained skeptical. "Better ways to produce ethanol also cost a lot," he argued. He felt that the excitement over ethanol is motivated primarily by the economic self-interest of the Midwestern states.

Figure 8: E85 gas pumps sell a blend of 85 percent ethanol and 15 percent gasoline. Ethanol has been highly touted in some places as a fuel that can reduce greenhouse gas production. However, the technology is problematic at present, because the distillation of ethanol itself requires energy that may come from a greenhouse gas-producing power plant. Photo: John Cross, Minnesota State University.
Besides carbon dioxide, methane has also been implicated as a greenhouse
gas. How serious a problem is it?
Molecule for molecule, the greenhouse effect of methane is 20 times stronger than that of carbon dioxide, but it is not as big a problem for several reasons. First, the absolute levels of methane in the atmosphere are much lower than those of carbon dioxide (though they, too, are rising fast). Second, methane remains active as a greenhouse gas for only about ten years before chemical reactions in the atmosphere break it down. Carbon dioxide, on the other hand, is effectively immortal.

Finally, methane is harder to regulate than carbon dioxide, because much of it comes from agricultural sources, such as cattle and rice paddies. However, in this country, the main sources of methane are landfills and leakage from coal mines and gas pipelines. Therefore, an opportunity exists to control it, simply by reducing the amount of leakage and the amount of waste we put in landfills.
What are the prospects for nuclear power?
Surely one of the most controversial outcomes of climate change has been the rehabilitation of nuclear power. In the audience for this symposium, opinions were deeply divided, reflecting the ambivalence of society as a whole toward nuclear power. (See Figure 9.)

In California, at least, the legal status of nuclear power is clear. Under state law, no new nuclear power plants can be built in California unless and until the state government certifies that there is a safe way to dispose of the waste. With the fate of the Yucca Mountain nuclear repository still in limbo, it is clear that there will be no new investment in nuclear power in California for the foreseeable future.
Economically, nuclear power is less well understood than any other energy source.

Energy Economics
Electric fuel costs for different
energy sources are difficult to
compare. For natural gas, the
greatest expense is the fuel itself.
For nuclear power, the cost of fuel
is relatively small but the cost of
building the plant, running it safely,
and decommissioning it is much
higher. In addition, the costs may
depend on location; natural gas,
for instance, is cheaper in Texas,
and geothermal energy is not even
available in New England. The
capital costs for nuclear power are
particularly uncertain because no
new plants have been ordered since
1977.
Nevertheless, it can be useful to
compare the levelized cost of
electricity, which amortizes the
cost of an electric plant over its
entire (estimated) lifetime. The
Department of Energy estimates the
following costs for new plants that
would come online in 2015 and in
2030 (costs are given in cents per
kilowatt-hour):
Year            2015    2030
Coal             5.6     5.4
Natural gas      5.5     5.7
Wind             6.9     6.3 (*)
Nuclear          6.3     5.9
Biomass          6.4 (*)
Solar thermal   13.1 (*)
Geothermal       6.1 (*)

(*) These figures are specifically for a plant located in the Northwest.
Source: Annual Energy Outlook 2007, Figures 56 and 62.
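To make the notion of levelized cost concrete, here is a small sketch in the textbook style, with entirely hypothetical plant parameters (the DOE's actual methodology is more detailed):

```python
def levelized_cost(capital, annual_om, fuel_per_kwh, annual_kwh,
                   lifetime_years, discount_rate):
    """Levelized cost of electricity, in cents per kilowatt-hour.

    Amortizes the capital cost over the plant's lifetime with a
    capital recovery factor, then adds yearly operating and fuel costs.
    """
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    annual_cost = capital * crf + annual_om + fuel_per_kwh * annual_kwh
    return 100.0 * annual_cost / annual_kwh       # dollars -> cents

# Hypothetical plant: $2.5 billion to build, 40-year life, 8 percent
# discount rate, $60 million/year operations, 1.5 cents/kWh fuel, and
# 7.9 billion kWh/year output (about 1 GW at 90 percent capacity factor).
print(f"{levelized_cost(2.5e9, 6.0e7, 0.015, 7.9e9, 40, 0.08):.1f} cents/kWh")
```

The sketch makes clear why capital-heavy sources like nuclear are so sensitive to the assumed discount rate and construction cost, while fuel-heavy sources like natural gas are sensitive to the fuel price.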
As these figures show, the costs of
several renewable energy sources
are expected to come down to the
point where they are nearly, but not
quite, competitive with conventional
sources. However, solar energy
remains prohibitively expensive.
In the reference scenario of the
Annual Energy Outlook report,
renewable energy sources will not
gain any ground as a percentage of
the market between now and 2030.
(See Figure 3.1.) They provided 9
percent of the U.S. overall output of
energy in 2005, and they are forecast
to provide 9 percent in 2030 as well.
Coal power is projected to increase
from 50 to 57 percent of the market.
Meanwhile, the total amount of
energy sold will increase from 3660
billion kWh to 5168 billion. Thus the
total output of energy from coal will increase by 60 percent: an especially worrisome outcome for the climate, because coal plants produce the most greenhouse gases.
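The 60 percent figure follows directly from the shares and totals just quoted:

```python
# Checking the coal projection from the market shares and totals above.
total_2005, total_2030 = 3660e9, 5168e9   # kWh sold per year
coal_2005 = 0.50 * total_2005             # 50 percent market share in 2005
coal_2030 = 0.57 * total_2030             # projected 57 percent share in 2030
print(f"coal output grows by {coal_2030 / coal_2005 - 1:.0%}")  # about 61%
```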
As noted in the report, changes
in fuel prices or in environmental
policies could affect all of these
projections.
The costs of waste management and protection against terrorism need to be factored into the cost of nuclear power, but no one has any
idea how large these costs will be. Also, the
nuclear industry gets a subsidy from the
government, in the form of protection from
insurance claims resulting from a catastrophic accident. Depending on your point of view,
the value of this subsidy may be anywhere from zero to infinity.
Even putting aside these unknowns, nuclear energy has an uncommonly large range of costs. (See Sidebar, Energy Economics.) The cost of nuclear power presently ranges from 3 cents to 12 cents per kilowatt-hour. If America is going to embark on an ambitious new program of nuclear construction, we need to understand the reasons for this broad range of economies, standardize the designs, and choose designs that are cheaper and safer.

All in all, nuclear power is back on the table. But it seems unlikely that America, after shunning it for more than 20 years, is ready for the kind of huge ramp-up that would be required to have a significant impact on greenhouse gas emissions. The problems of safety and waste disposal are not mere public relations.
What is the status of carbon sequestration?
Sequestration refers to the process of burying carbon in the ground, the ocean, or in vegetation and soils. One method of sequestration involves injecting pressurized carbon dioxide into an oil field. This procedure can help companies extract more oil from it, so the process is sometimes called "enhanced oil recovery." Once injected, the carbon dioxide will hopefully remain isolated indefinitely from the atmosphere. Whether this is true in fact remains an open scientific question.

Carbon sequestration is attractive to large oil companies because it requires a minimal change from business as usual (and, in fact, can be seen as improving business). Several sequestration projects are already in place, in Texas, in the North Sea, and in Norway. BP has recently announced plans for a new "clean energy" plant in California, which would separate petroleum coke (a very dirty fuel that is currently shipped to China for burning) into hydrogen compounds and carbon dioxide. The hydrogen compounds would be burned cleanly, while the carbon dioxide would be sequestered.
What are the prospects for carbon cap-and-trade agreements and
carbon taxes?
Although carbon cap-and-trade agreements may be a useful and even essential mechanism,
panelist Borenstein said that he does not consider them a solution by themselves to the
problem of greenhouse gases. Somebody, somewhere, has to cut back on the production of
carbon. Another unresolved question is how to enforce the agreements so that no one can
cheat on them.

Figure 3.1: Sources of United States electric power, historically and projected through 2030. (Note that energy used for transportation or for heating does not appear in this figure.)
Figure 9: The Three Mile Island nuclear power plant in Pennsylvania is a symbol of nuclear energy's troubled past. For many years, no new nuclear plants have been built in the U.S. because of concerns about safety and storage of spent fuel. With climate change now looming as a greater threat, even some former opponents of nuclear power are beginning to reconsider this carbon-neutral energy option.
Chart: Electricity generation by fuel, 1980-2030, in billion kilowatt-hours, with history and projections for coal, natural gas, nuclear, renewables, and petroleum; electricity demand grows from 2,094 billion kilowatt-hours in 1980 to a projected 5,478 billion in 2030. Source: Annual Energy Outlook 2007.
Mathematics and Renewable Energy

How can mathematics contribute to the development of renewable or alternative energy sources? This question was not discussed specifically at the symposium. However, some areas where mathematicians are currently making contributions include:

Fuel cells. The membranes in a fuel cell are made of a porous Teflon-like material, which allows ions to pass through. The process of pore formation involves differential equations that have not been solved before. A good mathematical model of the pores might bring down the cost of fuel cells, by reducing the amount of platinum required as a catalyst.[1]

Wind energy. The mathematical problems in designing a wind turbine are similar to those in designing airplane wings. However, to maximize energy efficiency, these wings have to push the limits of size and weight. Large, lightweight wings tend to flutter, so engineers need methods to predict and automatically compensate for this behavior.[2]

Carbon sequestration. Mathematical models of porous media are used to predict how long carbon dioxide will remain underground. One recent study showed that abandoned oil wells may compromise the ability of an oil field to store carbon dioxide.[3]

Nuclear energy. Mathematicians are helping to design the next generation of reactors. For example, researchers use computational fluid dynamics to model the flow of coolant past a fuel pin. They have shown that wrapping wire around the pins, like a stripe on a barber pole, can improve the mixing of coolant and bring down the temperature of the pins.[4]

Wave energy. Harnessing energy from ocean waves is still a technology in its infancy. Engineers used nonlinear optimization, a mathematical technique, to design a generator that produces energy from the relative oscillation between two floats. The product is expected to go on the market in 2010.[5]
[1] Keith Promislow, NSF Award Abstract #0708804.
[2] A. Balakrishnan, NSF Award Abstract #0400730.
[3] Barry Cipra, "Geosciences Conference Tackles Global Issues," SIAM News, June 2007.
[4] P. Fischer et al., "Large Eddy Simulation of Wire-Wrapped Fuel Pins I: Hydrodynamics in a Periodic Array," Joint American Topical Meeting on Mathematics and Computation and Supercomputing in Nuclear Applications, 2007.
[5] Scott Beatty, "Capturing wave energy off the coast of BC: a profile of an intern," MITACS Connections, May 2007. The product is the SyncWave Power Resonator.
Nevertheless, carbon cap-and-trade agreements are popular in Washington because they use market forces. In the present political environment, Congressman McNerney said, it is simply impossible to talk about carbon taxes, even if they are called "fees." The minute he arrived in Washington, his opponents began painting him as an advocate of carbon taxes, even though McNerney had never advocated them. Assembly Member Ruskin strongly echoed this last point. He recalled a conversation with an environmentalist in Europe who said that his country had wasted ten years debating a carbon tax. "We need to debate things that are possible," Ruskin concluded.
How will climate change affect developing countries?
It seems certain that some of the effects of climate change will hit developing countries hardest. For example, subtropical and tropical regions are more likely to be subjected to drought. Low-lying island nations will be threatened by rising sea levels. Most importantly, poorer countries will not have the resources to adapt to climate change, while wealthier countries will. For all of these reasons, plus simple cost-effectiveness, investing in energy efficiency is the fairest and most universal approach to mitigating climate change.
What can individuals do about climate change?
For individuals as for countries, the most cost-effective solution is to reduce consumption through energy efficiency; for example, changing from incandescent to compact fluorescent lightbulbs. Several California communities provide good examples of action at a local level. For example, Palm Desert has reduced its energy usage by 30 percent.

While individual and local conservation efforts are important, panelist Fung noted that there is one other remedy that citizens should be ready to use: the vote. "The problem is so large that we need state and government-level action. That means voting," Fung said. Congressman McNerney noted that the League of Conservation Voters' "Dirty Dozen" list had proven very effective in the 2006 election. Assembly Member Ruskin added that McNerney was being too modest, because he had personally defeated one of the "Dirty Dozen" incumbents in 2006.

Climate Change Modeling

The scientific workshop portion of the MSRI Symposium on Climate Change convened in Berkeley on April 12 and 13. (See Figure 10.) In this session, the focus shifted from local action to global models, and from energy policy to climate issues.

The symposium was organized into six groups of lectures (see Appendix A), which provide the source material for this section. In addition, discussion groups were formed to identify research problems in climate models that would be amenable to mathematical research. The following two sections, Research Topics in Climate Change and Opportunities and Challenges for Mathematical Sciences, are based in part on the reports of the discussion groups.

The questions below give a representative, though not exhaustive, sample of the issues discussed in the lectures.
What goes into a climate model?

The main components of a climate model are the atmosphere, the ocean, land, and ice. As shown in Figure 11, the atmosphere model incorporates four main differential equations, which relate the motion of air to the physical inputs. First, the momentum equation relates the acceleration of any parcel of air to the forces on it: the pressure gradient, gravity, and friction. This equation also includes the Coriolis force, from Earth's rotation, and a nonlinear inertial term. The conservation of mass equation says that matter is neither created nor destroyed. The energy equation says that the energy of a unit of atmosphere can change in two ways: by changing the temperature or by advection (conveying the warm or cold air somewhere else). The net of these two effects is governed by four energy inputs: short-wave radiation from the Sun, long-wave radiation from Earth, sensible heat, and latent heat (the heat stored or released in water when it changes phase). Finally, a separate water vapor equation says that the amount of water in the atmosphere changes by advection as well as by evaporation or condensation. This equation determines the water vapor content of the atmosphere, which in turn affects its density and pressure, and in this way feeds back into the momentum and mass equations.
Uncertainties in these equations enter on the physics side. How much energy is coming in from the sun? How much is reflected into space by clouds or aerosols? What is involved in the turbulent mixing in the atmosphere, which governs the formation of clouds? The effect of convective mixing is added in as an extra term in the momentum, energy, and water vapor equations. Every climate model does this differently.
The ocean models likewise contain equations for momentum, mass, and energy, plus a fourth equation describing the salinity. The ocean exchanges momentum, energy, and water with the atmosphere, so these equations are linked to the previous four. Salinity affects the ocean in much the same way that water content affects the atmosphere: it changes the water's density, which in turn changes the pressure gradient. Two very important parts of the ocean model are the wind-driven ocean currents at the surface, and the thermohaline circulation, which takes place deep in the ocean. (See Figure 12.) The "thermo" part of this word reflects the fact that cool water tends to sink, and warm water tends to rise. The "haline" part refers to salinity, and the fact that saltier, denser water tends to sink, while less dense fresh water tends to rise. The interplay of these two factors creates a worldwide "conveyor belt" of water that redistributes heat from the equator to the poles, and is believed to have a strong moderating effect on our climate.
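The way temperature and salinity jointly set density can be sketched with a linearized equation of state. The coefficient values below are representative mid-range numbers chosen for illustration; real ocean models use a full nonlinear equation of state.

```python
def seawater_density(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                     alpha=1.7e-4, beta=7.6e-4):
    """Linearized equation of state for seawater, in kg per cubic meter.

    alpha is the thermal expansion coefficient and beta the haline
    contraction coefficient; both are representative values only.
    """
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

# Cold, salty water is denser and sinks; warm, fresh water rises:
print(seawater_density(T=2.0, S=35.5))    # wintertime subpolar surface water
print(seawater_density(T=25.0, S=34.0))   # tropical surface water
```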
Like the atmosphere models, the ocean models are complicated by convective mixing. They also have to deal with the complicated geometry of coastlines and the ocean floor. The rearrangement of continents has had a huge effect on ancient climates. However, on a time scale of hundreds or thousands of years, the arrangement of land masses can be assumed constant.
Figure 11 (Atmosphere): Equations of a typical climate model. The first differential equation reflects conservation of momentum, and the second expresses the conservation of mass. This equation is coupled to the first by the ideal gas law (line 3). The third differential equation (line 4) models the energy flux, with short-wave radiation (SW) and long-wave radiation (LW). The latter term includes the effects of carbon dioxide (CO2) and other greenhouse gases (GHG). The final differential equation tracks the motion of water in the atmosphere, and must be coupled to an ocean model. Several of these equations also include terms (red boxes in the original figure) that model convection in the atmosphere, which is still poorly understood because it occurs at such a small scale (the scale of individual clouds).

Figure 10: MSRI's scientific symposium on climate change brought about 80 mathematicians and climate researchers to Chern Hall. Christopher Jones (far left, facing camera) was instrumental in organizing the two-day symposium.

\[
\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} + 2\,\boldsymbol{\Omega}\times\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + g\,\mathbf{k} + \mathbf{F} + \kappa(\mathbf{u})
\]
\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0
\]
\[
p = \rho R T, \qquad \rho = \rho(T, q)
\]
\[
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  = SW + LW + SH + LH + \kappa(T)
\]
\[
SW = f(\text{clouds, aerosols}, \ldots), \qquad
LW = f(T, q, \mathrm{CO_2}, \mathrm{GHG}, \ldots)
\]
\[
\frac{\partial q}{\partial t} + \mathbf{u}\cdot\nabla q
  = \text{Evap} - \text{Condensation} + \kappa(q)
\]
Here the $\kappa(\cdot)$ terms represent convective mixing.
Figure 13 represents in schematic form the various processes that enter into a climate
model. As described above, of the processes illustrated here, the most challenging for
modelers to get right are the clouds (which are too small-scale for the current generation
of models to describe accurately) and the ocean currents, including the vertical motions.
The modeling of the solid phase of water presents its own peculiar problems. (See Sidebar, The Sea Ice Conundrum.)

Finally, there are significant unanswered questions about the amount of incoming solar radiation (the "solar constant," currently estimated at 1362 watts per square meter) and how constant it really is. The total amount of anthropogenic forcing of the climate since 1800 is estimated at 1.6 watts per square meter. Thus, even a tenth of one percent variation in the solar constant would equal the entire human impact on the world climate. At present, there is no evidence that the solar constant has varied that much in the last 200 years. However, it may have varied by that much in the past. What will happen to it in the future is beyond the expertise of climate modelers, who have to ask solar physicists for the answer.
What is left out of climate models?

Figure 13 omits one important ingredient in the climate: the entire carbon cycle. In fact, this figure represents the status of climate models about 5 years ago, when the work behind the fourth IPCC Assessment was being done. At that time, the concentrations of greenhouse gases like carbon dioxide and methane had to be added in as an exogenous forcing term. Newer models are beginning to incorporate the carbon cycle: the effects of plants and animals, the effects of fossil fuel burning, and the dozens of chemical reactions that convert one form of carbon to another in the ocean.

Another omission will be even more challenging to repair: None of the models contain any humans. Again, in the IPCC simulations the results of human activities (primarily the production of greenhouse gases) are simply added in by fiat. However, such an approach is not completely satisfactory. Even in the absence of deliberate governmental policies, the change in climate will produce changes in human behavior. Different crops will be planted, different regions of the world will become suitable or unsuitable for agriculture, and so on. A truly integrated model should include these effects (see Sidebar, Climate and the Indian Rice Crop).

The Sea Ice Conundrum
One of the most dramatic yet least
understood effects of global warming is
taking place in the Arctic Ocean, where
both observational data and climate models
point to a rapid melting of the polar ice
cap. The extent of the ice cap at the peak of its summer melting has been decreasing by 8 percent per decade since 1979. The
area covered by sea ice in the winter has
not decreased as rapidly, because the ice
pack tends to recover during that season.
However, the thickness of the ice cap in
winter is decreasing. As the amount of
recovery during the winter decreases, the
extent of the ice pack in summer will also
tend to decrease.
It is well known that melting sea ice causes
an amplifying feedback loop, called the ice-
albedo feedback, which tends to exacerbate
global warming. Melting ice leaves more
open water exposed, which in turn absorbs
more solar energy rather than reflecting
it into space. All of the climate models
incorporate this feedback loop, and as a
result they predict much steeper temperature
increases in the Arctic than worldwide (see Figure 4.1).
Unfortunately, sea
ice is also one of the
least well-understood
ingredients in the
climate change puzzle.
Not only is the amount
of warming expected in
the Arctic greater than
the rest of the world,
but the uncertainty
in this forecast is also
greater. In the IPCC
climate models, while
the equatorial regions
face a 2 to 4-degree increase by the end
of the century, the North Pole region is
predicted to warm up by 4 to 12 degrees.
And the different models for the extent of
sea ice vary extravagantly (see Figure 4.2).
Some of them show the summer ice pack
virtually disappearing in the Arctic Ocean
by mid-century, while others predict only
a moderate decrease. This figure is more a
confession of our ignorance than a reliable
prediction. (Actual observations in this
figure are shown by the heavy red line.)
Figure 4.1: Predicted winter temperature increase by mid-century (2040-2059 against 1980-1999). Winter warming in the Arctic is at least double the global mean and peaks at more than 16 degrees Centigrade in some places.
Figure 4.2: Projected summer ice extent in the Arctic, in the business-as-usual scenario. The heavy red line represents observed data; the remaining lines represent 18 different climate models used by the IPCC. Different models disagree widely, due to different assumptions about sea ice physics. However, the rate of retreat of ice in the 21st century is significantly correlated with the mean ice extent in the late 20th century. Image from Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change.
Furthermore, climate modelers realize that in the new environment, decision makers will be consulting their models more and more frequently, and they will ask different sorts of questions. Instead of "How much will the temperature rise?" they will ask "How much will it cost?" or "What are the impacts?" In other words, climate variables will eventually have to be restated in economic or social-justice terms. Some preliminary efforts to do this have been made. For example, the Stern Review in the U.K. was an attempt to delineate the economic impacts of climate change. Another example, presented at this meeting, was Max Auffhammer's study of the effects of climate change on agriculture. Nevertheless, a true integration of climate and economic models remains in the future.
How might economics enter into
climate change models and strategies?
In order to formulate the results of climate
change in economic terms, modelers will have
to learn from the great advances economists
have made in quantifying risk. However, some conference attendees expressed concern at a too narrow, market-centric approach to defining risk. First, such an approach might not give adequate weight to the interests of people who do not participate in financial markets, such as native peoples, developing countries, or unborn generations.[3] Also, a purely economic approach might downplay the importance of outcomes such as species extinctions.
Possibly a separate issue, but nevertheless important, is the question of how we can
economically reach a desired emissions target. Can we get there using market forces and
cap-and-trade agreements? Do we need a carbon tax? Some attendees suggested using
game-theory approaches to design agreements that would be self-enforcing; in other words, to give both parties an economic incentive to abide by the agreement.
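A toy example, with entirely made-up payoffs, shows what "self-enforcing" means in game-theoretic terms: an agreement is self-enforcing when abiding by it is a Nash equilibrium, so that neither party gains by unilaterally cheating.

```python
import itertools

# Toy two-country emissions game with hypothetical payoffs (higher is
# better). Each country chooses to "abide" by the agreement or "cheat".
payoffs = {  # (A's action, B's action) -> (payoff to A, payoff to B)
    ("abide", "abide"): (8, 8),
    ("abide", "cheat"): (2, 10),
    ("cheat", "abide"): (10, 2),
    ("cheat", "cheat"): (4, 4),
}

def is_nash(a, b):
    """True if neither side gains by unilaterally switching its action."""
    pa, pb = payoffs[(a, b)]
    other_a = "cheat" if a == "abide" else "abide"
    other_b = "cheat" if b == "abide" else "abide"
    return payoffs[(other_a, b)][0] <= pa and payoffs[(a, other_b)][1] <= pb

for a, b in itertools.product(("abide", "cheat"), repeat=2):
    print(a, b, "<- Nash equilibrium" if is_nash(a, b) else "")
```

With these prisoner's-dilemma payoffs, only (cheat, cheat) is an equilibrium; the design problem is to restructure the payoffs, for example with penalties or side payments, until abiding becomes the equilibrium.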
How does a climate model differ from a weather model?

The physical equations in a climate model are similar to those in a weather model, and some speakers argued that there is no real difference between them. However, the time scales involved are vastly different, and the nature of the questions asked of them is different as well. Weather models track the evolution of weather systems, and lose
[3] The Stern Review took a very hard-line position on this issue, arguing that all generations should be treated equally, which implies a discount rate of 0 percent. Other economists have questioned this assumption and argued that it leads to an unrealistically high estimate of the current cost of climate change.
The models perform so erratically for
several reasons. First, they vary up to 50
percent in their estimates of cloudiness.
That translates to a variation of 40 watts
per square meter in the amount of
solar energy reaching the surface: an
uncertainty that swamps the greenhouse
gas effect. (Remember that the
anthropogenic change in carbon dioxide
accounts for 1.6 watts per square meter.)
In addition, none of the models treat sea
ice in a physically realistic way. They have
just begun to incorporate the thickness of
ice as well as its extent, and they do not
yet include an estimate of the floe size
distribution. As floes get smaller, there is
more contact between ice and water and
hence more rapid melting in summer (and
freezing in winter). In general, climate
models treat sea ice as a homogeneous
and continuous medium, but both
assumptions are wrong. Sea ice varies in
thickness and composition, and it is highly
fractured.
Why is it important to model sea ice
correctly? First, the regional impacts are
huge. The opening of the long-sought
Northwest Passage in the Arctic Ocean
could be an economic boon to Canada, or
an ecological nightmare. The lifestyles of
native populations would be threatened
by the retreat of the ice. Species like the
polar bear, which depends on the ice,
would suffer even more.
In addition, the extent of sea ice has
global ramifications. No, the melting of
sea ice does not raise the sea level (a
popular misconception), because the
ice and water are already in hydrostatic
balance. But the melting of land ice would
cause sea levels to rise. The increase in
temperatures caused by the ice-albedo
feedback affects glaciers on land, too
and indeed, observations show that the
ice on Greenland is melting even faster
than predicted. Finally, melting of sea
ice also reduces the salinity of the ocean,
an important ingredient in all climate
models. An extreme possibility would be
a shutdown of the ocean's temperature-
and salinity-driven circulation. This is the
circulation that brings warmth from the
tropics to the mid-latitudes, by powering
ocean currents such as the Gulf Stream.
Such a shutdown would produce a
negative feedback, and would alter the
climate dramatically.
FIGURE 12
The thermohaline circulation of sea water, driven by
differences in temperature and salinity, has a major impact
on world climate. The shutdown of this circulation is often
cited as a tipping point that could lead to dramatic global
cooling.
FIGURE 13
This diagram illustrates the variety and complexity of interactions that enter into the current
generation of global climate models. Note that the current models omit one key ingredient: the
feedback between the climate and human activities.
accuracy after four or five days, and there is no real point in running them beyond a few months.
On the other hand, for climate predictions we are interested in running the models decades, even
100 years into the future.
Given the difference in time scales, an uncharitable observer might wonder whether a climate
model is anything more than an expensive random number generator. The answer is yes and no,
because the type of question one asks of a climate model is different.
Climate is, by definition, the statistics of the weather of an area over a long period of time,
including the long-term mean, the variability, and the extremes. The mean climate is what
remains of weather after you average out all the fluctuations. Numerical weather forecasting
is, by contrast, all about the fluctuations: how the weather yesterday is going to change today,
tomorrow, and so on. The goal of a climate model is to capture the average state, and even
a probability distribution of deviations from the average, not to predict, say, the wind in
Edinburgh on December 13, 2080. If one imagines boiling water in a pot, weather prediction
is analogous to describing the location of the bubbles, while climate describes the temperature
in the pot. From another perspective, weather prediction is an initial-value problem, whereas
initialization is much less important in climate. Predicting tomorrow's weather is based on
information about today's. Climate prediction, by contrast, is about seeing what happens
under different forcing scenarios. For instance, how will the climate system respond to a
doubling of CO2, or a change in the amount of energy from the Sun?
However, there remain some serious issues with climate models that make them a good deal
less predictable than the temperature of the heated water (in the analogy above). First, climate
models cannot be completely tested and validated, while weather models are validated every
day. The climate forcing of a century ago is poorly known, and observations of what actually
happened are sparse. Climate models are assessed by plugging past forcing data (e.g., aerosols
from volcanic eruptions) into them and comparing the predicted climate states with available
observations.
Unfortunately, though, some models are already using the observations to estimate model
parameters. That makes it impossible to validate the model independently with past data. Moreover,
even if a model tested out satisfactorily against the more static, pre-1950 climate, it would not
necessarily give correct answers for the changing climate of today, because the processes that
are important for the future climate, such as sea ice and glacial dynamics, may not have been
operating in the same way in the static early 20th century. Another difference between weather and
climate models is that weather models are constantly assimilating new data to update the initial
conditions for the next prediction. Tomorrow's forecast will be based on a combination of today's
forecast and the new observations accumulated over the next 24 hours by weather instruments
and satellites. This allows the weather models to get back on track quickly after an unsuccessful
prediction. On the other hand, if a climate model gets off track by 2030, its predictions for 2100
may be completely invalid.
Why do different climate models disagree?
First, it is worth pointing out that there are significant areas of agreement. All the climate models
agree that global warming is a reality, and their predictions for 2030 are also in rough agreement.
Their predictions for 2100, however, span a wide range.
One reason for the wide range is that the models prioritize differently the processes on the
physical side of the equations, particularly the processes that are not well understood, such
as convective mixing in the atmosphere and ocean, and the formation of clouds, and hence
represent them differently. To some extent, this divergence among models is a good thing. Most
CLIMATE CHANGE AND THE RICE HARVEST IN INDIA
A recent paper published in the Proceedings of the National Academy of Sciences1 exemplifies
the insights that can be obtained from an integrated model that combines climate and
economy. Global climate models show that the effect of greenhouse gases is reduced, to some
extent, by industrial haze in the atmosphere. Aerosols absorb and scatter solar radiation,
returning part of it to space and thus reducing the energy that reaches Earth's surface from
the sun.
The PNAS study highlights an economic system where greenhouse gases and aerosols have a
complementary, not offsetting, impact: the Indian rice market (see Figure 5.1). Rice grows
better when nighttime temperatures are cool, which suggests that greenhouse gases would
reduce rice output, while the Indo-Asian haze would increase it. On the other hand, rice
requires plenty of rain during the monsoon season. But the Indo-Asian haze tends to reduce
rainfall, by reducing the temperature gradient between the southern and northern Indian
Ocean. Thus a purely climatic viewpoint leads to ambiguous conclusions about the effect of
aerosols.
1 M. Auffhammer, V. Ramanathan, and J. Vincent, "Integrated model shows
that atmospheric brown clouds and greenhouse gases have reduced rice
harvests in India," Proc. Natl. Acad. Sci. 103 (2006), no. 52, 19668-19672.
Figure 5.1 Rice harvest, Kashmir, Pahalgam, India.
climate modelers agree that there is no such thing as a "best" model, and it is useful to have a
variety of models to sample the space of different possibilities. In fact, when weather models
are run on the time scale of months, to make seasonal predictions, an ensemble of several
models will usually perform better than any individual one.
In addition, a certain amount of tuning of the models is standard practice. Ideally, this is
justified on the grounds that climate scientists can use the observations of different climate
variables (e.g., cloud top height and sea surface temperature) to deduce the best parametric
relationships linking them. But in practice, tuning is, as one participant said, a "subterranean
process where all that's reported is the outcome." Some models are tuned to the point where
they actually violate well-known laws of physics. It might be desirable to discard or discount
models that are known to be less trustworthy, but politically this is hardly feasible.
Finally, another reason that models differ is that the climate system itself is inherently
unpredictable. Precipitation is especially difficult to forecast accurately (see Sidebar, "Rainfall:
Beyond 'It's Warmer, So It's Moister'"). Even a mathematically exact model started from two
slightly different initial conditions may not issue similar precipitation forecasts a
season ahead, because precipitation processes, such as evaporation and condensation, are
inherently nonlinear. At best, it would offer a range of possibilities and a most likely case,
and indeed, this is the way that the IPCC presents its model results. The chaotic dynamics
within the climate system make it impossible to do better.
This inherent uncertainty may explain why a suite of models will outperform a single model.
A well-designed ensemble might be able to sample different parts of parameter space or
model space and in this way more clearly outline the uncertainties in the climate forecast.
There was a very strong consensus at the symposium that communicating the uncertainty
in the model predictions was a difficult and important challenge for climate modelers. One
speaker worried that as the models improve, they will inevitably give slightly different answers
from the old ones, and it will look to the public as if the climate modelers are changing their
minds, when in fact the new predictions may lie within the error bars of the old predictions.
This is not merely an academic concern, as proved by some of the press coverage of the
fourth IPCC report. The media made a fuss over the fact that the predicted rise in sea levels
was not as great as in the third IPCC assessment. Did this mean that global warming was not
going to be as bad as predicted? Not at all. It meant that the estimate of uncertainty had been
improved, and in fact the modelers had been more honest: they no longer attempted to quantify
the uncertainty in ice-sheet melting, because the process is not well enough understood. An
improved product turned into a black eye for the modelers, as they were forced to explain
that they weren't backing down on the dire consequences of global warming.
How can we combine the results of different models?
In general, the IPCC averages the outcomes of the different models and reports an ensemble
mean, along with error bars representing a 66 percent confidence interval. Such an approach
would be statistically valid if the models represented independent random samples from a
However, a combined climatic-economic analysis tells a different story. When early-season
rainfall falls short, farmers respond by shifting the acreage planted in rice to other crops. In
this way, economic factors enhanced the impact of the aerosols. The article concluded that,
over the period from 1985 to 1998, aerosols led to a 10.6 percent reduction in the rice harvest,
compared to the harvest in a simulated climate without aerosols. The combination of aerosols
and greenhouse gases reduced the rice harvest by 14.4 percent over the same period of time.
These results coincided with a period when India's rice production, which had grown rapidly
in the 1970s and early 1980s, began to grow more slowly and eventually leveled off. The study
suggests that the increasing levels of aerosols and greenhouse gases in the atmosphere were
responsible.
The interaction between climate and human behavior, driven by economic factors, was crucial
for understanding the effects of the aerosols. "Most of the effect isn't on the plants themselves,
but on the farmers shifting to other crops," Auffhammer said. In spite of the title's description
of an integrated model, the interaction between climate and economy in his paper was fairly
simple: Auffhammer took the outputs from a climate model and plugged them into a regression
equation to predict the farmers' response. In the future, he says, climate scientists and
economists should work together on the same model. "Instead of merely downloading data, we
need a spirit of true collaboration across disciplines," he said.
Figure 6.1 Precipitation changes for the decade 2090-2099,
relative to 1980-1999. Business-as-usual scenario, December-
February (left) and June-August (right). White regions indicate
where fewer than two-thirds of the climate models used for
the IPCC report agreed on the direction of change; shading
indicates where more than 90 percent of them agreed.
Image from Climate Change 2007: The Physical Science Basis, Intergovernmental
Panel on Climate Change.
single model space. However, the existing 24 models used in the IPCC report are in no
way a rationally designed, systematic exploration of model space. They are a sample of
convenience. Moreover, as some participants pointed out, there is a danger of throwing
out the physical baby with the statistical bathwater. Differences between models may result
from physical phenomena that are represented correctly in one model and incorrectly in
another. Obviously, it will be a challenge to modelers to try to distinguish chance effects
from differences with real, physical causes.
One speaker illustrated the problem with averaging by a colorful parable. Three
statisticians are asked whether it is safe to cross a river. They construct separate models
of the river, and each one finds that the river is deeper than 6 feet in some place. But they
disagree on where. So they average their models, and find that in the ensemble mean,
the river never gets deeper than 3 feet. As one might guess, the parable ends with the
statisticians drowning. (See Figure 14.)
All in all, there must be a better way than taking a mean. A weighted average, which takes
into account each model's strengths and weaknesses, might be an improvement. Even
better would be a Bayesian (machine-learning) approach, described by one speaker. In this
approach, one model is omitted from the ensemble, and then treated as a new model that
changes the a posteriori probability of various climate outcomes. Then a different model
is left out of the ensemble, and the process is repeated. After this process is repeated many
times, one can bootstrap up to a reasonable weighting of the different models.
How can we downscale global models in order to obtain local predictions?
How can we upscale local effects to incorporate them in global models?
Several modelers felt that the issue of unresolved processes or "sub-grid" processes
was crucial. They are besieged with questions like "What will happen to this species?"
or "How will this affect the water supply in that state?" For elected officials, what really
matters is what will happen in their community or their constituency. If the climate
modelers shrug their shoulders and say they don't know, they will lose credibility (even
if that's the honest answer).
The one obvious solution is more computing power, in order to resolve the models down
to smaller and smaller grid sizes. As computers have steadily increased in power, the
resolution of climate models has improved as well. For example, as seen in Figure 15, the
grid sizes in the IPCC's four assessment reports, over a period of less than two decades,
have shrunk from 500 kilometers to 110 kilometers. Even so, the grids of all global models
are too coarse to resolve individual clouds, or even to represent a hurricane realistically.
Besides increasing computer power, there are several other options for modeling subgrid
processes. One is adaptive mesh refinement, in which the size of the grid is reduced in
regions that require more detail, say, a storm system or a mountain
RAINFALL: BEYOND "IT'S WARMER, SO IT'S MOISTER"
The public's attention in discussions of climate change has always tended to focus on the
increase in temperature. Indeed, the most popular term for many years was not "climate
change" but "global warming." However, some of the most disruptive effects of climate
change are likely to involve precipitation: severe storms, floods, or droughts.
It makes sense that an increase in global temperatures should lead to an increase in global
precipitation. Warmer air can hold more water vapor, and with more water vapor in the
atmosphere there should be more clouds and eventually more rainfall. However, common
sense can be misleading. Where water vapor is concerned, it's not necessarily true that
what goes up must come down. The warmer air could simply hold onto the extra water.
For this reason, the IPCC report predicts only a 1 to 3 percent increase in global
precipitation per degree of global warming. However, satellite observations disagree: Over
the last 20 years, the precipitation increase has been closer to 7 percent per degree of
warming.1
Precipitation also has a much more
complex pattern of local and regional
effects than temperature. Indeed, it is
hard to find any place in the world that
will have a decrease in temperature
between now and 2100. But
precipitation will decrease dramatically
in some places, while increasing in
others (see Figure 6.1, page 17). Even
under the conservative assumptions
of the climate models, many areas are
predicted to have precipitation changes
well over 20 percent.
Unfortunately, the different climate
models used for the IPCC report
disagree strongly on the regional
details (see Figure 6.2). Given the extent
of disagreement, can we say anything
solid about rainfall?
1 F. J. Wentz et al., "How Much More Rain Will Global Warming Bring?" Science 317 (2007),
233-235.
Figure 6.2 Two of the models that were used in the IPCC forecast, NCAR PCM1 and HadCM3,
disagree on the precise location and magnitude of precipitation increases or decreases.
Nevertheless, the overall message of the models is fairly consistent, with increased
precipitation in the tropics and decreased precipitation in the subtropics. (Units in the
figure are 0.1 mm of rain per day, with increases in green and decreases in red.)
Image from Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change.
range. As one speaker pointed out, this has to be done with caution because it can lead to
unrealistic artifacts along grid boundaries. Methods do exist for understanding what causes
these artifacts and controlling them.
Another speaker discussed Lévy noise, which would allow for a more realistic depiction of
atmospheric turbulence.
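The appeal of Lévy noise is its heavy tails. A short illustration (generic, not the
speaker's actual scheme) compares Gaussian and Lévy-stable increments: the latter produce
occasional very large jumps of the kind associated with intermittent turbulent transport.

```python
# Gaussian vs. heavy-tailed Levy-stable noise: count extreme increments.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n = 100_000
gauss = rng.normal(size=n)
levy = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=0)

for name, z in (("Gaussian", gauss), ("Levy (alpha=1.5)", levy)):
    frac = np.mean(np.abs(z) > 5.0)     # jumps beyond 5 scale units
    print(f"{name:>17s}: fraction of extreme jumps = {frac:.2e}")
# The Gaussian fraction is essentially zero; the Levy fraction is not.
```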
Everybody is looking forward to petaflop computers, which might bring cloud-scale processes
into the picture for the first time. However, as explained next, Moore's Law has some caveats.
What kind of results can we
anticipate from next-generation
computers?
In a word, the world is going parallel.4 Moore's Law (which says that the number of
transistors on a chip doubles every year and a half) is still going strong, but clock speeds
are not keeping up, because the heat density inside today's chips is getting too high.
Parallel processing is a way to compensate for the lack of improvement in speed. The most
powerful processors today are, ironically, made for computer games, and they typically have
eight cores. Programming for these machines is not easy, and it may not become easy until
new languages are invented.
It is not clear that climate modelers are ready for the new computing environment. Their
programs typically have half a million lines of code, and it will be a non-routine task to
convert them to work on parallel processors. Climate modelers will have to think about
what algorithms can work efficiently on parallel processors. For example, adaptive mesh
refinement, though it is desirable for other reasons, is very tricky to implement on a
parallel machine. In all likelihood, it is not the climate modelers who will have to solve
these problems but the postdocs and graduate students whom they hire. But this talent will not
4 This section is based on a presentation by Kathy Yelick, "Architectural Trends and Programming Model
Strategies for Large-Scale Machines."
In fact, according to David Neelin, the situation is not as bad as it looks. The predictions
do follow a pattern that makes physical sense, which he calls the "rich-get-richer" model
of precipitation. The regions that will see the greatest rainfall increase are precisely the
ones that get the most rainfall now, the tropical latitudes. And the big decreases in
rainfall will occur at the edge of those regions, where increased advection will bring dry
weather in from the subtropics. Thus the models agree on the physical processes. They
disagree on the precise location of the wet and dry spots because of differences in wind
circulation from model to model.
In a few regions the models did produce consistent predictions. In the Caribbean, nine or
even all ten of the ten models in Neelin's survey agreed that there will be a more than 20
percent drop in precipitation. And indeed, 50-year precipitation records in the Caribbean
already show a pronounced decrease (see Figure 6.3). Neelin concluded that these regions
need to take the climate forecasts very seriously.
Climate modelers do need a better understanding of the convective threshold, the point
where a moist column of air starts to precipitate. The onset of convection is usually
described by quasi-equilibrium models, but according to Neelin, these make the process
appear too smooth. The result is too many gentle rain showers and not enough extreme
weather events. He presented an alternative model, developed in conjunction with Ole
Peters, which describes convection in a similar way to other threshold phenomena in
statistical mechanics. A rainstorm is like an avalanche, with a slow buildup and a fast
release, so the statistical frequency of mild and intense rainfalls should resemble that
of small and large avalanches. Though still relatively untested, Neelin and Peters's
interdisciplinary approach might find a place in future climate models.
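The statistical-mechanics analogy can be illustrated with a classic toy, shown below: a
one-dimensional stochastic sandpile (the Manna model), standing in for moisture columns
that rain out and destabilize their neighbors once a threshold is crossed. This is a
caricature of the idea, not Neelin and Peters's actual model; the point is the broad,
avalanche-like distribution of event sizes it produces.

```python
# Slow drive, fast release: a 1-D Manna sandpile as a caricature of
# threshold convection.  "Moisture" is added one unit at a time; any
# site holding 2 or more units topples, sending each unit to a random
# neighbor; units crossing the boundary are lost ("rained out").
import numpy as np

rng = np.random.default_rng(1)
L, zc = 30, 2
h = np.zeros(L, dtype=int)
sizes = []

for step in range(4000):
    h[rng.integers(L)] += 1                   # slow drive
    size = 0
    unstable = np.flatnonzero(h >= zc)
    while unstable.size:                      # fast release
        for i in unstable:
            h[i] -= 2
            size += 1                         # event size = number of topplings
            for _ in range(2):
                j = i + (1 if rng.random() < 0.5 else -1)
                if 0 <= j < L:
                    h[j] += 1
        unstable = np.flatnonzero(h >= zc)
    if size:
        sizes.append(size)

s = np.array(sizes[len(sizes) // 4:])         # discard the start-up transient
print("events:", s.size)
print("size percentiles (50/90/99):", np.percentile(s, [50, 90, 99]))
# Many small showers, a few huge storms: a broad, avalanche-like spread.
```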
Figure 6.3 One region where the IPCC models
substantially agree is the Caribbean Sea, where
precipitation records over the last 50 years
already show a significant decrease in rainfall,
which is expected to continue. Red shading
indicates the observed amount of decrease
over the past 50 years, measured in units
of 0.1 mm per day.
Parable of the statisticians (after Lenny Smith). Three statisticians independently
forecast that the river is unsafe to cross, but the average of their profiles of the river
bottom (red dashes) indicates that it is safe. Smith told this story to illustrate the
dangers of relying on ensemble means, instead of critically examining each model on its
own merits.
FIGURE 14
come cheap. Climate modelers will have to compete with the big money being offered to
these programmers by game companies.
To run a global circulation model with a 1-kilometer grid size, which would be detailed
enough to allow for the modeling of clouds, a back-of-the-envelope calculation suggests
that climate scientists will need a 10-petaflop computer, with 100 terabytes of memory
and 20 million processors. Both IBM and Japan have set targets of developing a 10-petaflop
computer by 2012.
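The scaling behind such estimates is easy to reproduce (a rough back-of-the-envelope
under standard assumptions, not the speakers' actual calculation): refining the
horizontal grid multiplies the number of columns quadratically and, through the CFL
stability condition, shrinks the allowable time step in proportion.

```python
# Rough cost scaling for refining a global model from 110 km to 1 km.
# Cost ~ (number of grid columns) x (number of time steps); two
# horizontal dimensions plus a CFL-limited time step give ratio**3.
current_dx_km, target_dx_km = 110.0, 1.0
ratio = current_dx_km / target_dx_km
print(f"grid refinement factor: {ratio:.0f}")
print(f"approximate cost increase: {ratio**3:.2e}x")   # about 1.3 million times
```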
What ideas can mathematicians contribute that climate modelers don't even know about yet?
The main purpose of this workshop was to inform mathematicians about climate modeling,
not vice versa. It is hoped that the active participation of mathematicians will lead to
new insights or new ways of doing business that the climate scientists have not
anticipated. The two following sections contain a much more thorough discussion of the
possible role of mathematicians. However, two points may be worth mentioning here because
they came up repeatedly at the symposium:
Mathematicians like to work from simpler models to more complex ones. Many of the
mathematicians in the audience expressed serious reservations about being able to carry
out serious mathematical investigations on models of such complexity. Mathematicians
should not try to re-invent the wheel by designing their own models. However,
climatologists do use smaller models, so-called Earth models of intermediate complexity,
for intuition-building. They also use simpler models or "process studies" aimed at
revealing phenomena and cause-effect relations in more localized settings. For
mathematicians interested in working on climate change, these models may make a good
entry point.
Informal discussions seemed to show that climate modelers have a few misconceptions
about dynamical systems. Even if a dynamical system is inherently unpredictable because
of chaos, some aspects of its behavior, such as the probability distribution of its
states, may in fact be tractable. The technology transfer of ideas from stochastic
dynamical systems to climate models has not happened yet.
RESEARCH TOPICS IN CLIMATE CHANGE
A partial list of active research areas in climate change is given below.
Climate modeling
A large amount of effort continues to go into the modeling of climate. The
Each generation of climate models has used finer and finer grids. (The labels FAR, SAR,
TAR, and 4AR refer to the first, second, third, and fourth assessment reports of the
IPCC.) The next generation may finally be able to model individual storms. However, to
achieve this level of refinement, climate scientists will have to adapt their programs to
run in a new, massively parallel computing environment.
FIGURE 15
motion of fluid is described by partial differential equations (PDEs), using the framework
of continuum mechanics. Forcing terms, or external inputs, for the fluid equations come
from the physics, chemistry, and biology of the atmosphere, ocean, land, and cryosphere.
The main questions are what effects to include in the model, how to include them accurately
and efficiently (when their effects range over several orders of magnitude), and how much
of an impact they will have on the prediction. It is important to note that there is no
clear consensus on what needs to be included in the model.
It is not out of the realm of possibility to remove Newtonian physics from the equations
entirely. This radical proposal has a precedent in molecular biology, where the most
successful models of protein folding do not model the protein molecules from first
principles, but instead use an empirical approach based on learning from data.
A particular challenge lies in the fact that the continuum model response, and the forcing
terms going into the model, vary on time scales of hours and days, while the predictions
we need involve the coarse behavior in time windows of decades. Many established climate
models have their origin as numerical weather prediction models, which do remarkably well
at short-term prediction. However, models for long-term prediction need to include slow
processes, for which there are few observations.
Analysis of data
An enormous amount of climate data continues to be collected at a wide range of locations,
from diverse platforms, and using different methods. They need to be synthesized into
coherent frameworks and linked to standard climate variables. For example, work needs to
be done to determine how satellite measurements, which integrate over a column or "noodle"
of atmosphere, correspond to events at the surface. The data should guide modelers in
deriving the proper representation of climate processes, and the models should indicate
what other measurements should be collected to gain further insight into the system. In
this way, a mutually beneficial feedback would occur between models and theory. Another
active
research area is to use data, possibly in conjunction with models, to "fingerprint" the
different factors that contribute to climate change.
Computational methods and platforms
Once a model is chosen, and initial and boundary conditions are specified, numerical
methods are used to simulate the climate. There is a wide variety of computational
approaches to integrating time-dependent partial differential equations. This is an active
area of research, as climate modelers strive to balance efficiency and accuracy, while
being concerned with the stability of computational schemes.
Because numerical climate models usually involve a very large number of operations,
computational scientists need to design algorithms that exploit the capabilities of
available computing platforms. At present, high-end computing is moving more and more
toward parallel and multi-core processors, and climate models need to take advantage of
that fact. It seems certain that ever-increasing computational capability and resources
will be required for climate modeling, as the models move toward finer and finer
resolution. However, finer resolution and bigger computing platforms should not become an
end in themselves, but instead should be guided by concrete needs as well as evidence that
the increased power will actually improve model performance.
Predictions from models and quantification of uncertainty
Each climate model is based on slightly different assumptions and therefore requires
different specifications of initial conditions and forcing terms. This fact, together with
the fact that the forcing terms themselves are known only approximately, leads to
predictions that can be quite different from model to model. Researchers need to be
careful to distinguish between variations due to chance and those that have identifiable
physical or parametric causes.
Statistical techniques are used to assimilate the information from various models and
synthesize these projections. Reporting a standard deviation of model results, as in the
IPCC report, is simple to describe but may not be the most informative technique. Better
alternatives include weighted averages or Bayesian algorithms. This is an active area of
research, but is potentially controversial if it is viewed as ranking the quality of the
models.
Inverse problems and data assimilation
Available data can be used in conjunction with a model to extract information about model
parameters that cannot be directly measured. This approach has been used very effectively
in short-term weather prediction. It is a research area that has the potential to
contribute to the development of better climate models, and in turn, better predictions.
It can also be used as a platform for validating a model and for studying the sensitivity
of various factors in a model.
Economic concerns and effective policies
Quantitative methods are being applied to study the economic impact of climate change,
for example on crop yields. The approach is to use the prediction provided by the climate
models together with a simple model of its impact on agriculture to understand the
economic costs of warming. Such analysis could also be applied to risk assessment and the
economic benefits of mitigation policies. At present, economic models are not well
integrated with climate models, and this is a problem that requires attention. Furthermore,
uncertainties in climate model projections should be included in economic or impact
models, and metrics designed to compare the costs and benefits of various policy decisions.
Research is also being conducted into developing mechanisms for curbing the emission of
greenhouse gases that will be effective on a political level (e.g., cap-and-trade
agreements). Agreements of this type are necessarily multinational, and each player will
operate with very different objectives and constraints. The challenge is to develop a
policy such that it will be to the benefit of each country to comply with the policy, and
still achieve global reductions in greenhouse gas emissions.
OPPORTUNITIES AND CHALLENGES FOR THE MATHEMATICAL SCIENCES
Climate change provides mathematical scientists with a broad range of challenging research
problems whose solutions could have a large societal impact. Several mathematical research
topics are listed below, along with the areas of mathematics that might contribute to
resolving the problems.
HIGH-DIMENSIONAL DYNAMICAL SYSTEMS
There is a pressing need to formulate a language and theory of transient dynamics for the
classes of systems that arise in climate science, including notions of stability for
transient dynamics. The current use of the language of chaos and attractors leads to
confusion for transient dynamics because these terms are imprecise in that context, and
therefore mean different things to different people. The existing vocabulary, designed for
phenomena that occur over extremely long time periods, should be replaced by terms that
describe the transitory response of a high-dimensional nonlinear system. These may include
transitions between local attractors, or transient dynamics due to external forcing.
Computing bifurcations in very high-dimensional systems is likely to be helpful here. It
will also be helpful to find criteria under which the dynamics of low-dimensional systems
remain robust when translated back into the high-dimensional setting.
Relevant mathematics: Dynamical systems, nonlinear ordinary differential equations,
nonlinear partial differential equations, global analysis.
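A time-honored entry point for this kind of analysis is the zero-dimensional energy-balance
model with ice-albedo feedback, a textbook toy rather than one of the symposium models. The
sketch below locates its equilibria and shows how their number changes as the insolation
parameter is varied, which is exactly what computing a bifurcation means in this setting.

```python
# Budyko-type energy-balance toy: equilibria appear and disappear as
# the insolation Q varies (a fold bifurcation / tipping point).
import numpy as np

A, B = 202.0, 1.9        # outgoing longwave: A + B*T (W/m^2, T in deg C)

def albedo(T):           # smooth ramp: warm planet 0.3, ice-covered 0.6
    return 0.45 - 0.15 * np.tanh(T / 10.0)

def imbalance(T, Q):     # absorbed minus emitted radiation
    return Q * (1.0 - albedo(T)) - (A + B * T)

T = np.linspace(-60.0, 80.0, 200_000)
for Q in (320.0, 342.0, 450.0):
    f = imbalance(T, Q)
    roots = T[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
    print(f"Q = {Q:5.0f} W/m^2 -> equilibria near {np.round(roots, 1)} deg C")
# At Q = 342 there are three equilibria (warm, unstable, ice-covered);
# lowering or raising Q makes one of the stable branches vanish.
```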
INTERPRETING AND TUNING MODELS
Climate scientists more or less agree that there is no such thing as a "best" climate
model. Given an ensemble of models with different strengths and weaknesses, we need tools
to address the quality and relevance of models. For instance, one model may be better at
representing a given variable under 20th-century conditions, but another may represent the
stratosphere better. It is an open question how well the current suite of climate models
covers the space of models relevant to the Earth's climate system.
Methods need to be developed to evaluate the impact of missing processes, and to estimate
the spatial and temporal scales over which we can make a meaningful interpretation of a
particular model. Shadowing experiments have been suggested as a useful approach to the
latter problem.
A very serious question of quality control arises from the tuning of climate models. One
unintended result can be that models no longer obey the laws of physics that are supposedly
programmed into them. Tuning needs, first of all, to become an open rather than clandestine
practice. Second, the mathematical foundations should be clarified so that climate modelers
can have an optimal, or at least systematic, way to explore parameter space.
Relevant mathematics: Dynamical systems, nonlinear differential equations, statistics,
knowledge discovery.
MODEL REDUCTION
Climate models are inherently very complicated and difficult to analyze. Climate modelers
themselves do not rely only on state-of-the-art models for gaining insight into the climate
system. They also use simple conceptual models as well as Earth Models of Intermediate
Complexity, which can be roughly described as climate models that are a few generations
old. Mathematicians can experiment with these models and attempt to determine how well
they mimic the dynamics of larger models. A systematic and mathematically justified model
reduction method is needed in order to simplify models so that they are amenable to
mathematical analysis and computation. Using such an approach, mathematical ideas can be
developed on relatively simple models and then tested in the full models. The use of a
hierarchy of models is critical to the productive involvement of mathematics.
Relevant mathematics: Differential equations, model reduction, asymptotic analysis,
mathematical and multiphysics modeling.
STOCHASTIC MODELING
While much of the modeling process is physically based and well understood, some
components in the forcing terms are inherently stochastic. Therefore, there is a need for
understanding the dynamics of climate models under stochastic forcing. In addition, some
phenomena that are deterministic in the short term may become effectively stochastic in
the long term. In the context of model reduction, it may be useful to replace the
deterministic equations with stochastic ones.
Relevant mathematics: Stochastic processes, stochastic PDEs.
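The simplest instance of this idea, going back to Hasselmann's picture of slow climate
variability driven by fast weather, replaces a deterministic forcing term with white noise.
Here is a minimal Euler-Maruyama sketch with illustrative parameter values:

```python
# A linear energy-balance anomaly model dT = -lam*T dt + eta dW:
# fast "weather" enters as white noise and produces slow, red-spectrum
# temperature variability (an Ornstein-Uhlenbeck process).
import numpy as np

rng = np.random.default_rng(0)
lam, eta, dt, n = 1.0, 0.3, 0.01, 200_000  # damping, noise amplitude, step, steps

T, traj = 0.0, np.empty(n)
for i in range(n):
    T += -lam * T * dt + eta * rng.normal(scale=np.sqrt(dt))
    traj[i] = T

print("sample variance:", traj.var())
print("theory variance:", eta**2 / (2.0 * lam))  # stationary OU variance
```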
MULTISCALE COMPUTATIONS
There has been considerable progress in the development of computational methods that
obtain coarse-scale behavior when the problem involves multiple, finer scales. This
strategy has been particularly successful in the area of mathematical materials science.
Climate modeling could also benefit from these developments. In climate modeling,
multiscale phenomena occur both in space and time.
Any complete treatment of multiscale behavior also needs to address phenomena that occur
at the sub-grid level, such as turbulence, ice dynamics, and clouds. There are also
practical reasons for paying attention to sub-grid phenomena. Many communities working on
subsystems in areas such as hydrology, agricultural production, and ecosystems need to
predict how their subsystem will respond to a change in climate. Mathematicians must
provide the tools to answer such questions. Just as important, they need to delineate the
limitations of the models and what constitutes appropriate use of them.
Relevant mathematics: Multiscale methods, asymptotic analysis, fluid dynamics, stochastic
processes.
NUMERICAL AND COMPUTATIONAL ASPECTS
There has been tremendous progress in the numerical solution of PDEs that has not made
its way into climate modeling. A careful study of the trade-off between efficiency and
accuracy for the purpose of climate modeling remains to be done.
This area of opportunity overlaps with multiscale computations, because one approach to
multiscale problems is adaptive mesh refinement. Simple schemes for mesh refinement lead
to ill-posed problems and to artifacts in the models. Therefore, careful thought should be
devoted to developing algorithms that minimize these effects on a sphere. Furthermore, the
appropriate meshes for the atmosphere, land, and oceans may differ, and techniques need to
be developed to couple them.
The convergence of numerical methods should be investigated, because many climate models
may be operating outside the domain in which approximations can be expected to converge
to the real solution of the PDEs. This factor may contribute to the "irreducible
imprecision" of climate models.
Relevant mathematics: Numerical analysis, spherical geometry, computational science,
computer science.
DATA ASSIMILATION
Data assimilation has proven to be effective in short-term weather prediction. With the
abundance of measured data and well-understood models, an effort can be mounted to use
techniques from data assimilation to obtain better estimates of model parameters and
forcing terms.
The challenge lies in the complexity and type of models in climate science. Techniques
for linear or near-linear systems are well established and relatively effective. Many of
these Kalman-based filters and variational methods either break down or become
computationally infeasible in models with the high degree of nonlinearity typical of the
climate.
The fusion of data into models that incorporate a diverse array of physical, chemical, and
biological processes presents particular obstacles. The data can come in different forms,
and targeted techniques that exploit the particular nature of the data will prove very
useful.
Relevant mathematics: Numerical analysis, optimization methods, filtering methods.
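As a concrete illustration of the ensemble flavor of these methods, here is a minimal
stochastic ensemble Kalman filter analysis step for a toy two-variable state observed in
one component. It is a generic textbook sketch with invented numbers, not taken from any
symposium presentation.

```python
# One EnKF analysis step: the ensemble covariance supplies the Kalman
# gain, and each member is nudged toward a perturbed observation.
import numpy as np

rng = np.random.default_rng(42)
n_ens, obs_var = 50, 0.1**2

truth = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])                    # observe the first variable only
y = H @ truth + rng.normal(scale=0.1)         # one noisy observation

X = truth + rng.normal(scale=0.5, size=(n_ens, 2))   # forecast ensemble

P = np.cov(X, rowvar=False)                   # ensemble covariance estimate
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var * np.eye(1))

y_pert = y + rng.normal(scale=0.1, size=(n_ens, 1))  # perturbed observations
X_a = X + (y_pert - X @ H.T) @ K.T            # analysis ensemble

print("forecast mean:", X.mean(axis=0))
print("analysis mean:", X_a.mean(axis=0))     # pulled toward the observation
print("truth        :", truth)
```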
UNCERTAINTY QUANTIFICATION
The validation of individual models involves questions of how to integrate observations
into the model and how to translate the uncertainty in the observations into uncertainty
of model output. Also, not all model error is due to chance, and thus a purely statistical
approach may miss more fundamental issues. Tools need to be developed to understand the
reasons why a particular model is good or bad, with a focus on the validation of physical
processes.
Many climate predictions combine the results of several models that have undergone
validation separately. The big question is how to combine the different predictions, along
with their individual uncertainties, into a single prediction with confidence, given that
each model may do particularly well in modeling different aspects of the climate. What
statistical foundations and assumptions are necessary to make inferences from this diverse
ensemble of models?
Attribution is a relatively novel procedure in climate science, by which data, combined
with models, are used to identify causes that produce observed effects. Statistical
inference has the potential to provide a framework for such analysis. In general, climate
modelers and mathematicians should make an effort to understand and import approaches and
methodology from other fields in the natural sciences that are both highly data- and
model-driven, such as genomics, particle physics, and nuclear testing.
Relevant mathematics: Statistics, inverse problems, learning theory.
ECONOMICS AND SOCIETAL ASPECTS
Research is needed to quantify the economic risks associated with various actions in
response to global climate change. Mathematics from option theory, econometrics, and game
theory can have a role in this aspect.
On the international level, economic incentives play a big role in whether a country is
motivated to enter into an agreement and then abide by it. Here game theory for multiple
players with diverse objectives can be used. On a national level, work needs to be done on
ways to use market mechanisms to curb carbon production. Economic modelers need to
understand how cap-and-trade agreements will affect greenhouse gas production, and how
that will in turn affect the climate. They should also investigate the ramifications of
proposals that are not politically feasible in the United States at the moment, such as
carbon taxes. The political climate could change, and such mechanisms could become
feasible in the future.
Once policies to curb carbon dioxide emission have been put in place, their effects will
feed back into the climate. Integrated economic and climate models, perhaps of an
intermediate complexity, should be developed to make this feedback loop easier to
understand. Modelers should strive to make these models easy to use, so that policy makers
can ask specific "what if" questions and receive answers. Perhaps even more interesting,
and challenging, is to view the process as a control problem, and thus to design controls,
i.e., policies, that force the system to achieve a certain desirable state. At a smaller
scale, the auction of emission allowances also poses interesting mathematical problems of
what economists call mechanism design.
Relevant mathematics: Probability, statistics, stochastic processes, econometrics, game
theory, operations research, optimization, control theory, financial mathematics,
intergenerational economics.
CONCLUSIONS AND RECOMMENDATIONS
The symposium on climate change has
demonstrated that there are many opportunities for collaboration between mathematicians and
climate scientists. It is encouraging to see that climate modelers are aware of this fact and are
soliciting help from mathematicians. On the other hand, the mathematical community has, for the
most part, not yet awakened to the role it can play in informing climate models and policy. While
the thrust of the motivation outlined here comes from the need to understand our changing climate
better, a sentiment also emerged from the symposium that the application of mathematics in this area
will likely result in new and unexpected mathematical ideas and thus will enrich the mathematical
sciences.
Each of the Opportunities and Challenges identified in the previous section carries with it an
implicit recommendation: These are areas and problems that the mathematical and climate modeling
communities should devote their attention to. The question then arises of what concrete steps can be
taken to encourage this result.
First, mathematical institutes and organizations need to engage their members in climate change
research through targeted workshops. Already there are plans to build on the progress made in this
symposium. The Joint Mathematics Meetings (January 2008) will include a plenary talk and several
special sessions on climate change. Some of these sessions include economists and policy makers
as well as geoscientists. As a follow-on to this symposium, MSRI may host a summer workshop in
2008. It has also been suggested that one of the mathematical sciences institutes, such as SAMSI (the
Statistical and Applied Mathematical Sciences Institute) or the IMA (the Institute for Mathematics and
its Applications), might hold a special year on climate research. The Newton Institute in the UK has
also indicated interest in a six-month or year-long program on the topic. The organizers of these
events should work together to achieve a cumulative effect.
There has, as yet, been no formal discussion of establishing permanent institutions devoted to
mathematical aspects of climate change (comparable to, say, the Mathematical Biosciences Institute
at Ohio State). However, a permanent institute would be the logical culmination of three trends we
expect to continue: the increasing severity of climate change, increasing public awareness and
governmental and non-governmental support for research on climate change, and increasing
legitimacy of climate change research within the mathematical community.
Efforts should be made to train a new generation of mathematical scientists to do research in climate
change. Some possible mechanisms are summer schools, specialized postdoctoral programs (possibly
jointly with organizations such as NCAR and NOAA), and visiting positions. Mathematical educators
should be trained and encouraged to introduce climate-related examples into the classroom. Research
experiences for undergraduates (REUs) should include realistic opportunities for climate research.
This broad range of training could be facilitated by common digital repositories of data, models, and
software that afford easy access to research tools. (See the web portal mentioned below.)
Much of the work of stimulating interest in this field will have to be done at the local level.
Mathematics departments are encouraged to reach out to their colleagues in other disciplines to
develop collaborative efforts in climate change. This could start out modestly by organizing joint
seminars. Departments potentially ripe for engagement include environmental sciences, atmospheric
sciences, geology, oceanography, economics, ecology, natural resources, and behavioral sciences.
Climate modelers, too, can play a role in making this field of research more hospitable to
mathematicians. First, they should realize that mathematicians generally work from simplicity to
complexity, starting out with the simplest possible models that display the behavior that needs to be
understood. Climate modelers could make readily available a hierarchy of models, ranging from the
simplest to the most complex. This might be achieved by devoting one issue of an applied mathematics
journal (SIADS, for example) to climate-related models of the simplest and intermediate complexities.
The hidden process of tuning the models should also be made more transparent, a change that would,
in any event, be beneficial to the integrity of the field.
Ideally, a web portal should be developed that would be useful to all participants in mathematical
climate research. This could be a repository of mathematical problems; a source for simple and
intermediate-complexity climate models; a match-making mechanism for interdisciplinary teams; and
a resource that non-scientists (such as businesspeople or policy makers) could turn to for objective
information on climate change. It is not clear yet who would host or maintain such a portal.
Mathematicians should explore existing funding opportunities for research on climate change,
not only at the National Science Foundation (NSF) but also at the Department of Defense, the
Department of Energy, and private foundations. One particular source of new money at the NSF
is the Cyber-Enabled Discovery and Innovation (CDI) initiative, which will start in 2008 with a
first-year budget of $52 million. The themes of CDI (knowledge extraction, interacting elements,
computational experimentation, virtual environments, and educating researchers and students in
computational discovery) seem to mesh well with the objectives of climate research.
At the same time, mathematicians should explore the possibilities for creating new funding
opportunities. The NSF's Collaboration in Mathematical Geosciences (CMG) provides a model
for such a funding mechanism. Research in climate change will involve several other disciplines
with different cultures and languages, such as economics and behavioral sciences. Therefore it is
imperative that collaborative mechanisms be well-designed.
Finally, both mathematicians and climate experts will need to communicate with the public, with
elected officials, and with the media in a way that emphasizes the robust aspects as well as the
uncertainties of climate predictions. They should be frank and forthright about the complexity of the
climate system and the limitations inherent in any approximation of it. To the extent possible, they
should educate the public to the fact that even the best model will produce a range of possibilities,
not a single forecast for the future. They should prepare the public to understand that the range of
possibilities may change as the models improve and as we get new data about the Earth system. This
will not mean that scientists are changing their minds or contradicting themselves; it is part of the
normal process of science. The very real uncertainties in the projections of the future should not be
allowed to obscure the fact that climate change is occurring and cannot be ignored any longer.
APPENDIX A
Program of the Scientific Workshop
Thursday, April 12, 2007
Background and Impact
Inez Fung (University of California at Berkeley) Issues in Climate Change
Cecilia Bitz (University of Washington) Sea ice cover in a changing climate
Uncertainty, Risks, and Decisions
Lisa Goldberg (MSCI Barra, Inc.) Forecasting the Risk of Extreme Events
Max Auffhammer (University of California at Berkeley) Impact of Aerosols on Rice Production in India Through Local Climate
Lenny Smith (London School of Economics) Seeing Through Climate Models
Identifying Climate Change
Ben Santer (Lawrence Livermore National Laboratory) Detection and Attribution of Climate Change
Statistical Issues and Incorporating Data
Claudia Tebaldi Future Climate Projections from Multi-Model Ensembles: Motivation,
(National Center for Atmospheric Research) Challenges, Approaches and Ways Forward
Jef Anderson Using Observations to Estimate Climate Model Parameters
(National Center for Atmospheric Research)
Friday, April 13, 2007
Computational Issues
Phil Colella (Lawrence Berkeley National Laboratory) Algorithms for the Study of Climate Change
Kathy Yelick (University of California at Berkeley) Architectural Trends and Programming Model Strategies for
Large-Scale Machines
New Modeling Challenges
David Neelin (University of California at Los Angeles) Precipitation Change and the Challenges of Modeling
Cecile Penland When We Can't Keep Track of Everything: On Diffusion Processes and
(National Oceanic and Atmospheric Administration) Lévy Flights in Climate Modeling
Jim McWilliams Irreducible Imprecision in Atmospheric and Oceanic Simulation
(University of California at Los Angeles)
Future Directions
Bill Collins (National Center for Atmospheric Research) Where do we go from here?
Videotapes of these lectures may be found at the MSRI website, www.msri.org.
Attendees of the Scientific Workshop
Climate Change: From Global Models to Local Action
Mathematical Sciences Research Institute, April 12-13, 2007
Rafael Abramov, University of Illinois
Malcolm Adams, University of Georgia
Jeff Anderson, National Center for
Atmospheric Research
Ari Ariyawansa, Washington State
University
Max Auffhammer, University of California
at Berkeley
George Avalos, University of Nebraska
Nathaniel Berkowitz, no affiliation given
Bjorn Birnir, University of California
at Santa Barbara
Cecilia Bitz, University of Washington
Jonathan Block, University of Pennsylvania
Robert Bryant, Duke University
Alin Carsteanu, Cinvestav
Bem Cayco, San Jose State University
Fatih Celiker, Wayne State University
Bill Collins, National Center for
Atmospheric Research
Louis Crane, Kansas State University
Forrest DeGroff, no affiliation given
Maarten v. de Hoop, Purdue University
Eric DeWeaver, University of Wisconsin
Frank Drost, University of New
South Wales
Philip Duffy, no affiliation given
Bahman Engheta, University of California
at Riverside
Greg Eyink, Johns Hopkins University
Yue Fang, University of California
at Berkeley
Inez Fung, University of California
at Berkeley
Jimmy Fung, Hong Kong University
of Science and Technology
Ashis Gangopadhyay, Boston University
Daryl Neil Geller, Stony Brook University
Serge Guillas, Georgia Institute of
Technology
Dan Gunter, Lawrence Berkeley National
Laboratory
A. G. Helmick, North Carolina State
University
Robert Higdon, Oregon State University
Matthew Hoffman, University of Maryland
Masaki Iino, University of Utah
Larens Imanyuel, Technominiaturization
Project
John Kahl, University of Missouri at
Columbia
Hans Kaper, National Science Foundation
Allan Kaufman, Lawrence Berkeley
National Laboratory
Boualem Khouider, University of Victoria
Eric Kostelich, Arizona State University
Charlie Koven, University of California
at Berkeley
Hugo Lambert, University of California
at Berkeley
Chloe Lewis, University of California
at Berkeley
Wing Suet Li, Georgia Institute of
Technology
Yi Li, University of Iowa
Regina Y. Liu, Rutgers University
Douglas Lind, University of Washington
Frank Ling, University of California
at Berkeley
David Lobell, Lawrence Livermore National
Laboratory
John MacDonald, University of British
Columbia
Brian Maurizi, Washington University in
St. Louis
Richard McGehee, University of Minnesota
Jim McWilliams, University of California
at Los Angeles
Robert Megginson, University of Michigan
Juan Meza, Lawrence Berkeley National
Laboratory
Norman Miller, Lawrence Berkeley
National Laboratory
John Moussouris, MicroUnity
David Neelin, University of California
at Los Angeles
Douglas Nychka, Institute for Mathematics
Applied to Geosciences
Myunghyun Oh, University of Kansas
Sergei Ovchinnikov, San Francisco
State University
Hyo-Seok Park, University of California
at Berkeley
Cecile Penland, National Oceanic and
Atmospheric Administration
Alexandra Piryatinska, San Francisco
State University
Dimitris Politis, University of California
at San Diego
Serge Preston, Portland State University
Renny Rueda, University of Externado
(Colombia)
Ben Santer, Lawrence Livermore National
Laboratory
Fadil Santosa, University of Minnesota
Eric Schechter, Vanderbilt University
Brad Shelton, University of Oregon
Emily Shuckburgh, University of
Cambridge
Lenny Smith, London School of Economics
Hartland Snyder, no affiliation given
A. Spehr, no affiliation given
Dave Stainforth, University of Oxford
Alexander Stine, University of California
at Berkeley
Andrea Taschetto, University of New
South Wales
Claudia Tebaldi, Athene Software, Inc.
Bruce Turkington, University of
Massachusetts at Amherst
Chunming Wang, University of Southern
California
Shouhong Wang, Indiana University
Michael Wehner, National Energy
Research Scientific Computing Center
Chris Wiggins, Columbia University
Peter Wolenski, Louisiana State University
Carol S. Wood, Wesleyan University
Katherine Yelick, University of California
at Berkeley
Mary Lou Zeeman, Bowdoin College

APPENDIX B
The Mathematical Sciences Research Institute (MSRI), located in
Berkeley, California, fosters mathematical research by bringing
together foremost mathematical scientists from around the
world in an environment that promotes creative and effective
collaboration.
MSRI's research extends through pure mathematics into computer
science, statistics, and applications to other disciplines, including
engineering, physics, biology, chemistry, medicine, and finance.
Primarily supported by the US National Science Foundation, the
Institute is an independent nonprofit corporation that enjoys
academic affiliation with ninety leading universities and support
from individuals, corporations, foundations, and other private
and governmental organizations.
MSRI's major programs, its postdoctoral training program, and
workshops draw together the strongest mathematical scientists
with more than 1,700 visits over the course of a year; at any
time about eighty-five are in residence for extended stays. Public
outreach programs and the largest mathematical streaming
video archive in the world ensure that many others interact with
MSRI throughout the year.
Main Office 510-642-0143 Fax 510-642-8609
Mailing Address:
Shiing-Shen Chern Hall
17 Gauss Way Berkeley, CA 94720-5070
www.msri.org
