MATHEMATICS OF CLIMATE CHANGE
A New Discipline for an Uncertain Century

Published by the Mathematical Sciences Research Institute, Berkeley, CA, 2007
In the spring of 2007 MSRI organized a large public event, Climate Change: From
Global Models to Local Action, to examine some political and economic aspects
of climate: what we know, what we guess, how and how much our society can and
should respond to what we are learning. We chose the date to coincide with a visit by
Congressman Jerry McNerney, whose background, as both a mathematician and an
advocate for alternative energy sources, made him a model participant for an event
that would combine the perspectives of several disciplines.
The public panel discussion was followed on the next two days by a scientific
symposium, in which mathematicians from many different fields mixed with
economists, climate modelers and others who have already been working on the
many questions involved. This booklet is a record of some of the discussions and
ideas in those meetings.
The purpose of these events was to connect the mathematical community
with the best current research and thinking about climate change, and
to point out the many different kinds of mathematical challenges that are
presented by this issue. Society needs to know more, and more accurately,
about what is happening with the earth's climate and to prepare for whatever
action is necessary and practical to undertake. Mathematics and statistics
already play a central role in this as in any sort of modeling effort. Likewise,
computer science must have a say in the effort to simulate Earth's environment
on the unprecedented scale of petabytes. With a problem of this complexity,
new mathematical tools will undoubtedly be needed to organize and simplify
our thinking. Thus it seemed to us at MSRI important to encourage direct
discussions between those already in the field and the many mathematicians
whose skills, and whose students' skills, can bring new insights.
As Director of MSRI I organized the conference, but as a non-expert I relied on
a number of others to make sure that the important scientific aspects were well
covered, and that the conference would represent the best current science in the
field. I am particularly grateful to Inez Fung, Bill Collins and Chris Jones for
their scientific advice, and to Orville Schell for his advice and help in arranging
the public event. Nat Simons provided expert suggestions as well as enthusiasm
and great support throughout; without him the event could never have happened.
David Eisenbud
Director, Mathematical Sciences Research Institute, 1997-2007
FOREWORD
Inez Fung (left) and David Eisenbud (right),
co-organizers of the MSRI symposium on
climate change.
CONTENTS

Foreword
Introduction
The MSRI Symposium on Climate Change
Climate Change Mitigation
Climate Change Modeling
Research Topics in Climate Change
Opportunities and Challenges for Mathematical Sciences
Conclusions and Recommendations
Appendix

Case Studies
How Do We Know? The Evidence for Climate Change
The IPCC Report: A Bleak Future
Energy Economics
Mathematics and Renewable Energy
The Sea Ice Conundrum
Climate and the Indian Rice Crop
Rainfall: Beyond "It's Warmer, So It's Moister"
© 2008 Mathematical Sciences Research Institute
17 Gauss Way, Berkeley, CA 94720-5070
Telephone 510 642 0143
www.msri.org
Dana Mackenzie
[email protected]
INTRODUCTION
When the history of climate change is written, the years 2006 and 2007 may be
seen as a turning point: a time when climate change ceased to be seen as a "green" issue and
became an everyone issue. In 2006, Al Gore's movie An Inconvenient Truth placed global
warming on America's movie screens. In October 2006, the British government released
the Stern Review, a first attempt to quantify the economic costs of climate change. Over a
period of four months, from February to May 2007, the Intergovernmental Panel on Climate
Change (IPCC) released its fourth report on climate change, which attracted much more
publicity than the previous three. In April 2007, the United States Supreme Court ruled that
the Environmental Protection Agency has the authority to regulate carbon dioxide and other
greenhouse gases. In October 2007, Gore and the IPCC shared the Nobel Peace Prize for
"their efforts to build up and disseminate greater knowledge about man-made climate change,
and to lay the foundations for the measures that are needed to counteract such change."
The increase in public discussion may reflect an increasing comprehension that the scientific
debate over the reality of global warming has ended. (See sidebar, How Do We Know?) The
IPCC's fourth assessment stated that warming of the climate is "unequivocal" and that
it was "very likely" (meaning more than 90 percent likely) that most of the warming is
anthropogenic. (See Figure 1.)
There are many uncertainties, however, in the specifics of climate change and its impact.
Climate models tend to agree on the twenty-year projections, with regard both to their
sensitivity to variations in model physics and to different emissions scenarios.¹
Disagreement arises when projections are carried out to the end of the century. For
example, the equilibrium response to a hypothetical scenario, involving an immediate
doubling of carbon dioxide, leads to varying predictions of warming from 1 degree
Centigrade to a truly staggering 12 degrees. (Note that these should not be interpreted
as literal forecasts, because an overnight doubling is impossible.) The difference arises
primarily from uncertainties in the climatic feedback processes represented in the
models, which tend to amplify the direct effects by two or three times.
Other aspects of climate change are even harder to predict accurately
than temperature. We can be certain that precipitation patterns
will change, and all the models indicate that some subtropical and
tropical regions will experience severe droughts.
But the models give contradictory predictions of where the droughts are likely to occur. As
another example, scientists reported in early 2007 that glaciers in Greenland are melting
faster than any of the models in the IPCC report had predicted. Clearly, there are processes
going on that we do not understand. Yet the extent of the polar ice caps is a critical variable
in climate models, because it triggers a feedback loop: the more the ice melts, the more
sunlight is absorbed by Earth (instead of being reflected into space by the ice). This leads
to an increase in temperature, which in turn stimulates more melting of the ice cap. This
melting is of concern even to people who live far away, because the melting of glaciers on
land is a major contributor to rising sea levels.
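The amplifying effect of such feedback loops can be illustrated with a standard textbook gain calculation (the numbers below are illustrative, not taken from the report): if a direct forcing produces an initial warming of dT0 degrees, and each degree of warming feeds back an additional fraction f of a degree, the total warming converges to dT0 / (1 - f).

```python
def feedback_warming(dT0, f, iterations=100):
    """Iterate a feedback loop: direct forcing plus a fraction f of current warming."""
    total = 0.0
    for _ in range(iterations):
        total = dT0 + f * total  # each pass adds the feedback on the warming so far
    return total

# With an illustrative direct warming of 1.2 degrees and feedback fraction 0.6,
# the loop converges to 1.2 / (1 - 0.6) = 3.0 degrees: a threefold amplification,
# consistent with feedbacks amplifying the direct effect by two or three times.
print(feedback_warming(1.2, 0.6))  # ~3.0
```

The closed-form gain 1/(1 - f) is simply the sum of the geometric series 1 + f + f² + …, which the iteration reproduces.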
1. Different assumptions about emissions of greenhouse gases can be used as inputs into the
models.
FIGURE 1: Radiative Forcing Components
Anthropogenic (human-induced) contributions to global
climate change are measured in watts per square meter;
in other words, the increase in solar radiation that would
produce an equivalent warming effect. Some contributions
(e.g., the greenhouse effect) are positive, and others (e.g.,
aerosols) are negative. However, the net anthropogenic effect
since 1750, 1.6 watts per square meter, is unambiguously
positive, and also significantly greater than the amount of
warming due to natural fluctuations in the sun's brightness
(0.12 watts per square meter).
Image from Climate Change 2007: The Physical Science Basis,
Intergovernmental Panel on Climate Change.
The climate models used to make the IPCC projections are very complex systems of nonlinear
equations solved on large computers. The IPCC relied on 24 different climate models in its
report. Many of them have been developed under the auspices of national meteorological
offices. Though some models are superior to others, identifying them publicly is a ticklish
political issue. Because the models have different assumptions, we run the risk of comparing
apples to oranges. In many of its scenarios, the IPCC report simply averages all the models
together equally. It is not at all clear that this methodology is an optimal or even sound way to
integrate the data.
A serious limitation of the current models is their coarse scale. At present, even the
highest-resolution models chop up the world into pieces that are 10 to 50 kilometers wide.
This resolution is not fine enough to capture important details of topography,
such as mountain ranges, and it is also not fine enough to model individual
clouds, which play a complex and important role in the climate system.²
(See Figure 2.) In practice, model parameters, especially those that represent
turbulent or fine-scale processes, are optimized or "tuned" in order to match
available observations. For example, the effect of clouds has to be added to
the model as an aggregate term, with all the uncertainties that implies. If the
climate models could be improved to 1-kilometer resolution, then clouds and
finer topography could be built into them; however, it has been estimated that
this would require a 10-petaflop computer with 20 million core processors.
That kind of computing power is on its way, but it is not here yet. Even when
it arrives, it's questionable whether climate modelers can take full advantage of it. Many models
are legacy codes of a half million lines or so that are not optimized for massively parallel
computation.
Finally, the mathematics of dynamical systems has taught us that uncertainty is an inevitable
part of predictions based on nonlinear physical models. This irreducible imprecision requires
us to use a variety of models, and run them with a diverse set of parameters, in order to capture
the real range of uncertainty in the climate system. It also means that climate modelers must
take care to communicate to policy makers that uncertainty is part of the story. As models
improve and more information becomes available, the model forecasts may change, and this
could lead to frustration among those needing to make decisions based on their predictions.
This frustration might be avoided if the original predictions are presented as a range of
possibilities rather than a single "magic number."
Be that as it may, accurate and reliable prediction of global climate change is a key to policy
making. It is clear that policies should be based on predictions that are built on a sound
foundation. Mathematical scientists need to get involved, because the central questions facing
this research are mathematical in nature.
2. For example, clouds provide an important negative feedback mechanism that could reduce global warming. As
the moisture in the atmosphere builds up due to warming, it could create more clouds, which would reflect more
sunlight back into space. However, this effect is by no means automatic; it depends on where the cloud is. High-
altitude clouds radiate to space at a colder temperature and actually produce a net warming.
FIGURE 2: Gridding
Climate models divide the world's atmosphere and oceans
up into a very coarse grid. At present, even the best models
do not have meshes fine enough to simulate individual
tropical cyclones or the effect of mountain ranges. Future
models may incorporate adaptive refinements of the mesh
size, as shown on the right.
From April 11 to April 13, 2007, the Mathematical Sciences Research Institute (MSRI)
convened a symposium, sponsored by the Sea Change Foundation, to assess how
mathematicians can address the broader issues of climate change and the narrower
issues of methodology lying behind the climate models.
The symposium consisted of two parts. Several leading politicians, business people
and academic experts on energy and climate convened for a panel discussion (see
Figure 3) at San Francisco's Palace of Fine Arts Theater on April 11, which drew a
crowd of more than 300 people. On the following two days, approximately
80 mathematicians and scientists attended a scientific symposium at the MSRI headquarters
in Berkeley. (See Appendix B.)
Inez Fung, the co-director of the Berkeley Institute for the Environment and one of the
authors of the IPCC report, started off the public event with a brief overview of the evidence
for global warming and the current state of knowledge about what will happen next. She
characterized the IPCC report, which acknowledges that climate change has been caused by
anthropogenic effects, as a bittersweet victory, "because we've been saying the same thing for
20 years." She outlined the reasons why we know that the climate is warming (see Sidebar,
How Do We Know?), and she discussed the main forecasts from the IPCC report (see Sidebar,
A Bleak Future).
After Fung's introduction, MSRI director David Eisenbud introduced Congressman Jerry
McNerney (see Figure 4, page 7) and California Assembly Member Ira Ruskin (see Figure
5, page 7), who represents Silicon Valley. After brief remarks by McNerney and Ruskin,
Eisenbud summoned onto the stage a panel of experts, which consisted of Daniel Kammen,
professor of energy at the University of California at Berkeley; Severin Borenstein, director
of the University of California Energy Institute; Nancy McFadden, senior vice president of
public affairs for PG&E Corporation; Doug Ogden, executive vice president of the Energy
Foundation in San Francisco; Michael Peevey, president of the California Public Utilities
Commission; and Inez Fung, who had already been introduced. The legislators were given
an opportunity to pose questions to the experts, and then the floor was opened to questions
from the audience. The following section is based in large part on the questions and answers
that ensued.
How Do We Know?
The Evidence for Climate Change
Climate models and their projections
for the future, especially extended
out to 2100, are subject to a
variety of uncertainties. These
include imperfections in the climate
models, the limitations of our
computing power, and the inherently
unpredictable nature of nonlinear
equations. These uncertainties
must not be allowed to obscure
the central facts emphasized in this
year's IPCC report: Climate change
is happening, human activities are
responsible for most of the change,
and the evidence indicates that it is
accelerating.
The basic facts that lead to this
conclusion are the following:
1. Carbon dioxide levels (and levels
of other greenhouse gases, such
as methane) have been rising
for at least half a century. In fact,
they have risen by as much since
1960 as they did between the
last Ice Age and 1960. (See Figure
1.1.) The current concentration of
carbon dioxide in the atmosphere,
380 parts per million, is greater
than it has been at any time in
the last 650,000 years, according
to ice cores that contain trapped
bubbles of earlier atmospheres.
2. Carbon from fossil fuels is being
added to the atmosphere. We
know this because fossil fuels
contain a lower ratio of the
isotope carbon-13 to carbon-12
than the atmosphere as a whole
does, because they are derived
from plant matter and plants have
a preference for the lighter isotope
of carbon. Tree-ring and ice-core
data show that the 13C:12C ratio
began to decrease just at the same
time the overall levels of carbon
dioxide began to increase.
The equations of a climate model (cf. Figure 13), in schematic form:

\[
\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} + 2\,\boldsymbol{\Omega}\times\mathbf{u}
  = -\frac{1}{\rho}\nabla p + g\mathbf{k} + \mathbf{F} + D(\mathbf{u})
\]
\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0
\]
\[
p = \rho R T, \qquad \rho = \rho(T, q)
\]
\[
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  = \mathrm{SW} + \mathrm{LW} + \mathrm{SH} + \mathrm{LH} + D(T)
\]
\[
\mathrm{SW} = f(\text{clouds}, \text{aerosols}, \ldots), \qquad
\mathrm{LW} = f(T, q, \mathrm{CO_2}, \text{other GHGs}, \ldots)
\]
\[
\frac{\partial q}{\partial t} + \mathbf{u}\cdot\nabla q
  = \text{Evap} - \text{Condensation} + D(q) + \text{convective mixing}
\]
Figure 13 represents in schematic form the various processes that enter into a climate
model. As described above, of the processes illustrated here, the most challenging for
modelers to get right are the clouds (which are too small-scale for the current generation
of models to describe accurately) and the ocean currents, including the vertical motions.
The modeling of the solid phase of water presents its own peculiar problems. (See
Sidebar, The Sea Ice Conundrum.)
Finally, there are significant unanswered questions about the amount of incoming solar
radiation (the "solar constant," which is currently estimated at 1362 watts per square
meter) and how constant it really is. The total amount of anthropogenic forcing of the
climate since 1800 is estimated at 1.6 watts per square meter. Thus, even a tenth of one
percent variation in the solar constant would equal the entire human impact on the
world climate. At present, there is no evidence that the solar constant has varied that
much in the last 200 years. However, it may have varied by that much in the past. What
will happen to it in the future is beyond the expertise of climate modelers, who have to
ask solar physicists for the answer.
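Taking the figures quoted above at face value, the arithmetic behind this comparison is simply

\[
0.001 \times 1362\ \mathrm{W/m^2} \approx 1.4\ \mathrm{W/m^2},
\]

which is of the same order as the estimated net anthropogenic forcing of $1.6\ \mathrm{W/m^2}$ since 1800.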
What is left out of climate models?
Figure 13 omits one important ingredient
in the climate: the entire carbon cycle.
In fact, this figure represents the status
of climate models about 5 years ago,
when the work behind the fourth IPCC
Assessment was being done. At that time,
the concentrations of greenhouse gases
like carbon dioxide and methane had
to be added in as an exogenous forcing
term. Newer models are beginning to
incorporate the carbon cycle: the effects
of plants and animals, the effects of fossil
fuel burning, and the dozens of chemical
reactions that convert one form of carbon
to another in the ocean.
Another omission will be even more
challenging to repair: None of the models
contain any humans. Again, in the
IPCC simulations the results of human
activities (primarily the production of
greenhouse gases) are simply added in
by fiat. However, such an approach is
not completely satisfactory. Even in the
absence of deliberate governmental policies, the change in climate will produce changes
in human behavior. Different crops will be planted, different regions of the world will
become suitable or unsuitable for agriculture, and so on. A truly integrated model should
include these effects (see Sidebar, Climate and the Indian Rice Crop, page 16).
RESEARCH TOPICS IN CLIMATE CHANGE

A partial list of active research areas in climate change is given below. One
research area is to use data, possibly in conjunction with models, to
"fingerprint" the different factors that contribute to climate change.
Computational methods and platforms
Once a model is chosen, and initial and boundary conditions
are specified, numerical methods are used to simulate the
climate. There is a wide variety of computational approaches
to integrating time-dependent partial differential equations.
This is an active area of research, as climate modelers strive to
balance efficiency and accuracy, while being concerned with
the stability of computational schemes.
Because numerical climate models usually involve a very large
number of operations, computational scientists need to design
algorithms that exploit the capabilities of available computing
platforms. At present, high-end computing is moving more
and more toward parallel and multi-core processors, and
climate models need to take advantage of that fact. It seems
certain that ever-increasing computational capability and
resources will be required for climate modeling, as the
models move toward finer and finer resolution. However,
finer resolution and bigger computing platforms should not
become an end in themselves, but instead should be guided
by concrete needs as well as evidence that the increased power
will actually improve model performance.
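The stability concern can be made concrete with a classic toy example (generic numerical-analysis material, not taken from any climate code): an explicit upwind discretization of the one-dimensional advection equation du/dt + c du/dx = 0 is stable only when the CFL number c*dt/dx is at most 1.

```python
def upwind_step(u, c, dt, dx):
    """One explicit upwind step for du/dt + c*du/dx = 0 (c > 0), periodic domain."""
    cfl = c * dt / dx
    if cfl > 1.0:
        raise ValueError("unstable: CFL number %.2f exceeds 1" % cfl)
    # u[i-1] with i = 0 wraps to u[-1], giving periodic boundary conditions.
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(len(u))]

# Advect a single bump around a periodic domain at CFL = 0.5 (stable).
u = [0.0] * 10
u[3] = 1.0
for _ in range(20):
    u = upwind_step(u, c=1.0, dt=0.05, dx=0.1)
print(max(u))  # below the initial 1.0: the scheme is stable but diffusive
```

The total mass (the sum of u) is conserved exactly on a periodic domain, but the bump spreads out, illustrating the trade-off between stability and accuracy mentioned above.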
Predictions from models and quantification of uncertainty
Each climate model is based on slightly different assumptions
and therefore requires different specifications of initial
conditions and forcing terms. This fact, together with
the fact that the forcing terms themselves are known only
approximately, leads to predictions that can be quite different
from model to model. Researchers need to be careful to
distinguish between variations due to chance and those that
have identifiable physical or parametric causes.
Statistical techniques are used to assimilate the information
from various models and synthesize these projections.
Reporting a standard deviation of model results, as in the
IPCC report, is simple to describe but may not be the most
informative technique. Better alternatives include weighted
averages or Bayesian algorithms. This is an active area of
research, but is potentially controversial if it is viewed as
ranking the quality of the models.
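One simple weighted-average alternative to the equal-weight ensemble mean can be sketched as follows (an illustrative toy, not the IPCC's procedure or any operational method): weight each model by the inverse of its squared error against historical observations, so that models which reproduced the past poorly count for less.

```python
def weighted_projection(projections, hindcast_errors):
    """Combine model projections, weighting each by 1 / (hindcast error)^2."""
    weights = [1.0 / e ** 2 for e in hindcast_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, projections)) / total

# Three hypothetical models project 2.0, 3.0 and 4.5 degrees of warming;
# their (hypothetical) errors against the historical record are 0.2, 0.4, 0.8.
models = [2.0, 3.0, 4.5]
errors = [0.2, 0.4, 0.8]
print(weighted_projection(models, errors))  # pulled toward the best-performing model
print(sum(models) / len(models))            # plain equal-weight average, for comparison
```

Choosing the weights is precisely where the controversy lies: any such scheme implicitly ranks the models.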
Inverse problems and data assimilation
Available data can be used in conjunction with a model to
extract information about model parameters that cannot
be directly measured. This approach has been used very
effectively in short-term weather prediction. It is a research
area that has the potential to contribute to the development of
better climate models and, in turn, better predictions. It
can also be used as a platform for validating a model and for
studying the sensitivity of various factors in a model.
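In its simplest form, the idea can be illustrated by fitting an unobservable parameter so that a model reproduces observed data (a deliberately minimal toy, far from operational data assimilation): here a decay rate k in the model y(t) = y0*exp(-k*t) is recovered by searching a grid of candidate values for the smallest data misfit.

```python
import math

def misfit(k, times, observations, y0=1.0):
    """Sum-of-squares mismatch between the model y0*exp(-k*t) and the data."""
    return sum((y0 * math.exp(-k * t) - obs) ** 2
               for t, obs in zip(times, observations))

def estimate_k(times, observations, candidates):
    """Pick the candidate parameter value with the smallest misfit."""
    return min(candidates, key=lambda k: misfit(k, times, observations))

# Synthetic "observations" generated with a true decay rate of 0.5.
times = [0.0, 1.0, 2.0, 3.0]
obs = [math.exp(-0.5 * t) for t in times]
candidates = [0.1 * i for i in range(1, 21)]  # 0.1, 0.2, ..., 2.0
print(estimate_k(times, obs, candidates))     # recovers 0.5
```

Real assimilation schemes replace the grid search with sequential statistical updates (e.g., Kalman-type filters) and handle noisy, incomplete observations, but the inverse-problem structure is the same.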
Economic concerns and effective policies
Quantitative methods are being applied to study the economic
impact of climate change, for example on crop yields. The
approach is to use the prediction provided by the climate
models together with a simple model of its impact on
agriculture to understand the economic costs of warming.
Such analysis could also be applied to risk assessment and the
economic benefits of mitigation policies. At present, economic
models are not well integrated with climate models, and this is
a problem that requires attention. Furthermore, uncertainties
in climate model projections should be included in economic
or impact models, and metrics designed to compare the costs
and benefits of various policy decisions.
Research is also being conducted into developing
mechanisms for curbing emissions of greenhouse gases
that will be effective on a political level (e.g., cap-and-trade
agreements). Agreements of this type are necessarily
multinational, and each player will operate with very different
objectives and constraints. The challenge is to develop a
policy such that it will be to the benefit of each country to
comply with the policy, and still achieve global reductions in
greenhouse gas emissions.
HIGH-DIMENSIONAL DYNAMICAL SYSTEMS
There is a pressing need to formulate a
language and theory of transient dynamics
for the classes of systems that arise in
climate science, including notions of
stability for transient dynamics. The
current use of the language of chaos
and attractors leads to confusion for
transient dynamics because these terms
are imprecise in that context, and therefore
mean different things to different people.
The existing vocabulary, designed for
phenomena that occur over extremely long
time periods, should be replaced by terms
that describe the transitory response of a
high-dimensional nonlinear system.
These may include transitions between
local attractors, or transient dynamics due
to external forcing. Computing bifurcations
in very high-dimensional systems is likely
to be helpful here. It will also be helpful
to find criteria under which the dynamics
of low-dimensional systems remain robust
when translated back into the high-dimensional setting.
Relevant mathematics: Dynamical
systems, nonlinear ordinary differential
equations, nonlinear partial differential
equations, global analysis.
INTERPRETING AND TUNING MODELS
Climate scientists more or less agree that
there is no such thing as a "best" climate
model. Given an ensemble of models
with different strengths and weaknesses,
we need tools to address the quality and
relevance of models. For instance, one
model may be better at representing a given
variable under 20th-century conditions,
but another may represent the stratosphere
better. It is an open question how well the
current suite of climate models covers the
space of models relevant to the Earth's
climate system.
Methods need to be developed to evaluate
the impact of missing processes, and to
estimate the spatial and temporal scales
over which we can make a meaningful
interpretation of a particular model.
Shadowing experiments have been
suggested as a useful approach to the latter
problem.
A very serious question of quality control
arises from the tuning of climate models.
One unintended result can be that models
no longer obey the laws of physics that
are supposedly programmed into them.
Tuning needs, first of all, to become an
open rather than clandestine practice.
Second, the mathematical foundations
should be clarified so that climate modelers
can have an optimal, or at least systematic,
way to explore parameter space.
Relevant mathematics: Dynamical
systems, nonlinear differential equations,
statistics, knowledge discovery.
MODEL REDUCTION
Climate models are inherently very
complicated and difficult to analyze.
Climate modelers themselves do not
rely only on state-of-the-art models for
gaining insight into the climate system.
They also use simple conceptual models
as well as Earth Models of Intermediate
Complexity, which can be roughly
described as climate models that are a few
generations old. Mathematicians can
experiment with these models and attempt
to determine how well they mimic the
dynamics of larger models. A systematic
and mathematically justified model
reduction method is needed in order to
simplify models so that they are amenable
to mathematical analysis and computation.
Using such an approach, mathematical
ideas can be developed on relatively simple
models and then tested in the full models.
The use of a hierarchy of models is critical
to the productive involvement of mathematics.
Relevant mathematics: Differential
equations, model reduction, asymptotic
analysis, mathematical and multiphysics
modeling.
STOCHASTIC MODELING
While much of the modeling process is
physically based and well understood,
some components in the forcing terms are
inherently stochastic. Therefore, there is
a need for understanding the dynamics of
climate models under stochastic forcing.
In addition, some phenomena that are
deterministic in the short term may
become effectively stochastic in the long
term. In the context of model reduction, it
may be useful to replace the deterministic
equations with stochastic ones.
Relevant mathematics: Stochastic
processes, stochastic PDEs.
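As a minimal illustration of deterministic dynamics under stochastic forcing (a textbook toy, not a climate model), consider a linear relaxation equation with noise, dT = -lam*T dt + sigma dW, simulated with the standard Euler-Maruyama method:

```python
import random

def euler_maruyama(T0, lam, sigma, dt, steps, seed=0):
    """Simulate dT = -lam*T dt + sigma dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    T = T0
    path = [T]
    for _ in range(steps):
        dW = rng.gauss(0.0, dt ** 0.5)  # Brownian increment: mean 0, variance dt
        T = T + (-lam * T) * dt + sigma * dW
        path.append(T)
    return path

# Without noise (sigma = 0) the anomaly simply decays toward zero; with noise
# it fluctuates around zero instead of settling there.
deterministic = euler_maruyama(1.0, lam=0.5, sigma=0.0, dt=0.01, steps=1000)
noisy = euler_maruyama(1.0, lam=0.5, sigma=0.3, dt=0.01, steps=1000)
print(deterministic[-1])  # small: the deterministic anomaly has decayed
```

Replacing unresolved deterministic detail by a noise term of this kind is exactly the kind of substitution contemplated above in the context of model reduction.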
MULTISCALE COMPUTATIONS
There has been considerable progress
in the development of computational
methods that obtain coarse-scale behavior
when the problem involves multiple, finer
scales. This strategy has been particularly
successful in the area of mathematical
materials science. Climate modeling could
also benefit from these developments. In
climate modeling, multiscale phenomena
occur both in space and time.
Any complete treatment of multiscale
behavior also needs to address phenomena
that occur at the sub-grid level, such as
turbulence, ice dynamics, and clouds.
There are also practical reasons for paying
attention to sub-grid phenomena. Many
communities working on subsystems
in areas such as hydrology, agricultural
production, and ecosystems need to predict
how their subsystem will respond to a
change in climate. Mathematicians must
provide the tools to answer such questions.
Just as important, they need to delineate
the limitations of the models and what
constitutes appropriate use of them.
Relevant mathematics: Multiscale
methods, asymptotic analysis, fluid
dynamics, stochastic processes.
APPENDIX A
Program of the Scientific Workshop

Thursday, April 12, 2007
Background and Impact
Inez Fung (University of California at Berkeley), Issues in Climate Change
Cecilia Bitz (University of Washington), Sea Ice Cover in a Changing Climate
Uncertainty, Risks, and Decisions
Lisa Goldberg (MSCI Barra, Inc.), Forecasting the Risk of Extreme Events
Max Auffhammer (University of California at Berkeley), Impact of Aerosols on Rice Production in India Through Local Climate
Lenny Smith (London School of Economics), Seeing Through Climate Models
Identifying Climate Change
Ben Santer (Lawrence Livermore National Laboratory), Detection and Attribution of Climate Change
Statistical Issues and Incorporating Data
Claudia Tebaldi (National Center for Atmospheric Research), Future Climate Projections from Multi-Model Ensembles: Motivation, Challenges, Approaches and Ways Forward
Jeff Anderson (National Center for Atmospheric Research), Using Observations to Estimate Climate Model Parameters

Friday, April 13, 2007
Computational Issues
Phil Colella (Lawrence Berkeley National Laboratory), Algorithms for the Study of Climate Change
Kathy Yelick (University of California at Berkeley), Architectural Trends and Programming Model Strategies for Large-Scale Machines
New Modeling Challenges
David Neelin (University of California at Los Angeles), Precipitation Change and the Challenges of Modeling
Cecile Penland (National Oceanic and Atmospheric Administration), When We Can't Keep Track of Everything: On Diffusion Processes and Lévy Flights in Climate Modeling
Jim McWilliams (University of California at Los Angeles), Irreducible Imprecision in Atmospheric and Oceanic Simulation
Future Directions
Bill Collins (National Center for Atmospheric Research), Where Do We Go from Here?

Videotapes of these lectures may be found at the MSRI website, www.msri.org.
Attendees of the Scientific Workshop
Climate Change: From Global Models to Local Action
Mathematical Sciences Research Institute, April 12-13, 2007
Rafael Abramov, University of Illinois
Malcolm Adams, University of Georgia
Jeff Anderson, National Center for
Atmospheric Research
Ari Ariyawansa, Washington State
University
Max Auffhammer, University of California
at Berkeley
George Avalos, University of Nebraska
Nathaniel Berkowitz, no affiliation given
Bjorn Birnir, University of California
at Santa Barbara
Cecilia Bitz, University of Washington
Jonathan Block, University of Pennsylvania
Robert Bryant, Duke University
Alin Carsteanu, Cinvestav
Bem Cayco, San Jose State University
Fatih Celiker, Wayne State University
Bill Collins, National Center for
Atmospheric Research
Louis Crane, Kansas State University
Forrest DeGroff, no affiliation given
Maarten v. de Hoop, Purdue University
Eric DeWeaver, University of Wisconsin
Frank Drost, University of New
South Wales
Philip Duffy, no affiliation given
Bahman Engheta, University of California
at Riverside
Greg Eyink, Johns Hopkins University
Yue Fang, University of California
at Berkeley
Inez Fung, University of California
at Berkeley
Jimmy Fung, Hong Kong University
of Science and Technology
Ashis Gangopadhyay, Boston University
Daryl Neil Geller, Stony Brook University
Serge Guillas, Georgia Institute of
Technology
Dan Gunter, Lawrence Berkeley National
Laboratory
A. G. Helmick, North Carolina State
University
Robert Higdon, Oregon State University
Matthew Hoffman, University of Maryland
Masaki Iino, University of Utah
Larens Imanyuel, Technominiaturization
Project
John Kahl, University of Missouri at
Columbia
Hans Kaper, National Science Foundation
Allan Kaufman, Lawrence Berkeley
National Laboratory
Boualem Khouider, University of Victoria
Eric Kostelich, Arizona State University
Charlie Koven, University of California
at Berkeley
Hugo Lambert, University of California
at Berkeley
Chloe Lewis, University of California
at Berkeley
Wing Suet Li, Georgia Institute of
Technology
Yi Li, University of Iowa
Regina Y. Liu, Rutgers University
Douglas Lind, University of Washington
Frank Ling, University of California
at Berkeley
David Lobell, Lawrence Livermore National
Laboratory
John MacDonald, University of British
Columbia
Brian Maurizi, Washington University
in St. Louis
Richard McGehee, University of Minnesota
Jim McWilliams, University of California
at Los Angeles
Robert Megginson, University of Michigan
Juan Meza, Lawrence Berkeley National
Laboratory
Norman Miller, Lawrence Berkeley
National Laboratory
John Moussouris, MicroUnity
David Neelin, University of California
at Los Angeles
Douglas Nychka, Institute for Mathematics
Applied to Geosciences
Myunghyun Oh, University of Kansas
Sergei Ovchinnikov, San Francisco
State University
Hyo-Seok Park, University of California
at Berkeley
Cecile Penland, National Oceanic and
Atmospheric Administration
Alexandra Piryatinska, San Francisco
State University
Dimitris Politis, University of California
at San Diego
Serge Preston, Portland State University
Renny Rueda, University of Externado
(Colombia)
Ben Santer, Lawrence Livermore National
Laboratory
Fadil Santosa, University of Minnesota
Eric Schechter, Vanderbilt University
Brad Shelton, University of Oregon
Emily Shuckburgh, University of
Cambridge
Lenny Smith, London School of Economics
Hartland Snyder, no affiliation given
A. Spehr, no affiliation given
Dave Stainforth, University of Oxford
Alexander Stine, University of California
at Berkeley
Andrea Taschetto, University of New
South Wales
Claudia Tebaldi, Athene Software, Inc.
Bruce Turkington, University of
Massachusetts at Amherst
Chunming Wang, University of Southern
California
Shouhong Wang, Indiana University
Michael Wehner, National Energy
Research Scientific Computing Center
Chris Wiggins, Columbia University
Peter Wolenski, Louisiana State University
Carol S. Wood, Wesleyan University
Katherine Yelick, University of California
at Berkeley
Mary Lou Zeeman, Bowdoin College
APPENDIX B
The Mathematical Sciences Research Institute (MSRI), located in
Berkeley, California, fosters mathematical research by bringing
together foremost mathematical scientists from around the
world in an environment that promotes creative and effective
collaboration.

MSRI's research extends through pure mathematics into computer
science, statistics, and applications to other disciplines, including
engineering, physics, biology, chemistry, medicine, and finance.
Primarily supported by the US National Science Foundation, the
Institute is an independent nonprofit corporation that enjoys
academic affiliation with ninety leading universities and support
from individuals, corporations, foundations, and other private
and governmental organizations.

MSRI's major programs, its postdoctoral training program, and
workshops draw together the strongest mathematical scientists
with more than 1,700 visits over the course of a year; at any
time about eighty-five are in residence for extended stays. Public
outreach programs and the largest mathematical streaming
video archive in the world ensure that many others interact with
MSRI throughout the year.

Main Office 510-642-0143, Fax 510-642-8609
Mailing Address:
Shiing-Shen Chern Hall
17 Gauss Way, Berkeley, CA 94720-5070
www.msri.org