The Seven Measures of the World
Piero Martin
Translated from the Italian by Gregory Conti
Published with assistance from the foundation established in memory of Calvin Chapin of the
Class of 1788, Yale College.
Originally published in Italy by Editori Laterza as Le 7 misure del mondo. Copyright © 2021,
Gius. Laterza & Figli. All rights reserved.
English translation copyright © 2023 by Yale University. All rights reserved.
This book may not be reproduced, in whole or in part, including illustrations, in any form
(beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except
by reviewers for the public press), without written permission from the publishers.
Yale University Press books may be purchased in quantity for educational, business, or
promotional use. For information, please e-mail [email protected] (U.S. office) or
[email protected] (U.K. office).
A catalogue record for this book is available from the British Library.
To Paolo, student and mentor
Contents
Introduction
ONE
The Meter
TWO
The Second
THREE
The Kilogram
FOUR
The Kelvin
FIVE
The Ampere
SIX
The Mole
SEVEN
The Candela
Epilogue
Measures for Measure
Acknowledgments
Suggestions for Further Reading
Index
Introduction
That night, at Große Freiheit 64, Hamburg, the doors of the Indra
Musikclub opened at the usual time. It was August 17, 1960, and during the
night, the temperature would dip down below 10°C (50°F). Summer was
drawing to a close, Elvis Presley was at the top of the charts all over the
world with “It’s Now or Never,” his cover of the Neapolitan standard “O
Sole Mio,” while in Germany, the French songstress Dalida scored a big hit
with a German cover of “Milord,” a song recorded a year earlier by Edith
Piaf.
For the young women and men waiting outside the Indra on that late
summer night, it would have been difficult, well-nigh impossible, to
imagine that the virtually unknown band they were about to listen to was on
the verge of taking the entire world of popular music on a rollicking
decade-long joyride. Equally in the dark about the impact those young
rockers were going to have on their company were the top managers of
Electric and Musical Industries, otherwise known as EMI.
Founded in London some thirty years earlier, following the merger of the
Columbia Graphophone Company and the Gramophone Company, famous
for its historic record label “His Master’s Voice,” EMI was an important
player in the music industry. In 1931, one of the company’s engineers, Alan
Blumlein, had patented the invention of stereophonic recording and
reproduction. In the 1960s, EMI produced successful records and created a
flourishing research program in the field of electronics. But on August 17,
when John Lennon, Paul McCartney, and George Harrison launched into
their opening number, things started to change for EMI, too. Harrison,
Lennon, and McCartney—together with Pete Best and Stuart Sutcliffe, later
to be replaced by Ringo Starr—had just recently become the Beatles, and
Hamburg was their first overseas gig. They played at the Indra for forty-
eight nights, and then worldwide for another nine years until January 30,
1969, their last live performance on the roof of No. 3 Savile Row, London.
What happened in between is legendary.
At the end of World War II, EMI’s expertise in electronics was largely in
products and instruments used by the military, which it then started
marketing to the civilian sector. But the company’s economic success came
with the rock and pop music explosion of the fifties and sixties. The
acquisition of the American company Capitol Records, the success of its
artists, and above all the contract it signed in 1962 with the Beatles brought
EMI considerable fame and remarkable profits. The projects that EMI
engineers were working on in the sixties included the pioneering effort to
develop computed tomography in medicine, better known as CT. Today the
CT scan is a fundamental instrument in medical diagnostics, making it
possible to create high-definition images of the inside of the human body.
Its practical application was developed in the EMI laboratories by one of
the company’s engineers, Godfrey Hounsfield, who applied the theoretical
work of the South African physicist Allan MacLeod Cormack.
Hounsfield and Cormack were awarded the Nobel Prize in Medicine in
1979. For years, rumor had it that a major contribution to the discovery of
this extraordinary diagnostic tool had been provided by the Fab Four from
Liverpool—though this was never claimed by the Beatles themselves.
Supposedly, the enormous profits EMI had taken in thanks to their songs
had, in part, been used to finance research on CT. In reality, according to an
article published by Canadian scientists Zeev Maizlin and Patrick Vos in the
Journal of Computer Assisted Tomography in 2012, the financial
contribution of the British Department of Health and Social Security to the
development of the CT scanner was significantly bigger than that of EMI.
Nevertheless, the Beatles’ colossal contribution to modern culture is
plain for all to see (and hear), as is the fact that medicine, thanks to the
laboratories of EMI, now possesses an essential diagnostic tool, which
literally helps to save human lives every day. CT is a device that measures
the amount of X-radiation emitted by a source and transmitted through the
human body and uses that data to reconstruct detailed high-definition
images. CT is one of the many examples of how a measurement can
provide information about ourselves. This is also the case with
measurements of our body temperature, our blood pressure, and the
frequency of our heartbeat. To arrive at each of these measurements, we
link a number or a set of numbers with a physical quantity, or with the
properties of a phenomenon, or with some aspect of nature. This numerical
value is obtained by using appropriate tools to compare the physical
quantity in question with another physical quantity, known as a unit of
measurement. In the case of body temperature, the tool is the thermometer
and the unit of measurement is the degree, Celsius or Fahrenheit.
Humans have measured the world from the very beginning. We measure
it to learn about it and explore it, to inhabit it, to live together with our
fellow humans, to bestow and obtain justice, to relate to divinity. From
ancient times on, measuring has interwoven the fabric of human life—just
think of the measurement of time and its relationship to our lives, how it
shapes our interactions with nature and with the supernatural. Humans
measure the world to know our own past, understand the present, and plan
the future.
Humans measure using tools that are the fruit of their creativity. In nature,
there are many recurring phenomena, like the alternation of day and night
or the cycle of the seasons. There are also natural objects whose shape and
weight are especially regular, like carob seeds, for example. Thirty thousand
years ago, a human living in what is now France engraved an ivory tablet
made from a mammoth tusk with what is believed to be a register of the
phases of the Moon over the period of a year—a sort of pocket calendar
ante litteram. It is human ingenuity that took these phenomena and objects as the
inspiration for sundials, scales, and yardsticks. Nature, obviously, works
perfectly well even without measurements.
It comes as no surprise, then, that at the dawn of civilization, the first
measurements were made by using something that was universally
available, something that everyone always has with them: their own bodies.
Arms, legs, fingers, and feet are easily accessible tools, and, albeit with a
certain variability, they are all more or less of the same size. Five spans of
cloth measured by an adult man or woman are about a yard, or a meter, in
any part of the world.
Units of measure associated with body parts are found just about
everywhere. The cubit, for example, is the distance from the point of the
elbow (cubitus) to the fingertips, about a foot and a half. It was used by
many cultures throughout the Mediterranean basin: Egyptian, Hebrew,
Sumerian, Roman, and Greek. The foot is found in China, in ancient
Greece, and in Roman culture. In ancient Rome, distance was also
measured by the pace, 1,000 of which (milia passuum) gave us the Roman
mile. And it was also in the Eternal City that Marcus Vitruvius Pollio, who
lived from 80 to 20 BCE, wrote De architectura, his encyclopedic work
dedicated to architecture. In the first chapter of book 3, Vitruvius wrote
about symmetry: “The design of a temple is based on symmetry, whose
principles must be observed with great care by the architect. They are
rooted in proportion.” He connected architectural proportion with the
proportions of the human body:
Because the human body is designed by nature so that the face, from the chin to the top of the
forehead and to the lowest roots of the hair, is one-tenth of the entire height of the body. . . .
The length of the foot is one-sixth of the height of the body; the forearm one-quarter; the
width of the chest is also one-quarter. The other parts of the body also have their own
symmetrical proportions, and it was by employing them that the famous painters and
sculptors of ancient times achieved great and infinite fame.
The week of October 10, 1960, witnessed two important debuts. The second
one took place on Saturday, October 15, at Kirchenallee 57, Hamburg.
There John, Paul, George, and Ringo made their first recording together at
the Akustik Studio, playing the classic tune “Summertime” by George
Gershwin. The first happened on Wednesday, October 12, in Paris, at the
opening of the 11th General Conference on Weights and Measures. During
this meeting, the international system of units of measurement (abbreviated
as SI) was defined, the first truly universal system. The long and winding
road of measures finally achieved a fundamental goal. Right in the middle
of the Cold War, when the political boundaries between nations were
becoming more rigid, the boundaries between measurement systems were
eliminated. Although many people might think that the history of the
twentieth century was influenced more by that week’s second debut, it is
actually the first that has radically changed our dialogue with the universe.
Originally, the international system was composed of six units of
measure: the meter for length, the second for time, the kilogram for mass,
the ampere for electric current, the kelvin for temperature, and the candela
for luminous intensity. In 1971, the mole would be added as the base unit of
amount of substance, chiefly for use in chemistry. At last humanity
had a coherent architecture for measurement, whose seven basic units
defined a complete and universal language for measuring not only our own
small world but all of nature, from the most obscure subatomic recesses to
the boundaries of the universe.
Modern society, science, and technology simply could not exist without
measurement. Time, length, distance, velocity, direction, weights, volumes,
temperature, pressure, force, energy, luminous intensity, power: these are
just some of the physical properties that are the daily objects of accurate
measurements.
Measuring is an activity that permeates every aspect of our lives. We
typically take it for granted, only to realize just how crucial it is when our
instruments of measurement do not work correctly or are unavailable.
Without the measurement of time, there would be no alarm clocks to wake
us in the morning. Without the measurement of volume, we wouldn’t know
how much fuel is left in our gas tank. Without the measurement of position
or velocity, our trains and planes would never arrive safely. Without the
measurement of our bodily functions, our health would soon be at risk.
Without the measurement of electricity, none of our electronic devices
would work.
Science and technology have taken giant steps forward since the French
revolutionaries defined the decimal metric system. Today, an amazing
number of highly precise measurements allow us to verify and assess new
theories. They are the tools of Nobel Prize winners—for example, the
measurement of the Higgs boson or the detection of gravitational waves.
They are indispensable for research on the cutting edge in all fields of
science. They have allowed us to fight back against the Covid-19 pandemic,
and they make modern technologies work, whether they be satellites in
orbit or the smartphones in our back pockets.
These measurements are based on the international system, whose units
have a physical reference in prototypical objects or phenomena, that is, in
something that is accessible to everyone. We have seen, for example, that
the meter was originally defined as one ten-millionth of the distance from
the North Pole to the equator. For practical reasons, it was redefined in 1889
as the distance between two notches engraved on a bar of platinum–iridium
deposited at the International Bureau of Weights and Measures in Sèvres.
This bar was meant to serve as the standard against which to compare every
other meter produced on Earth.
The second was initially defined as a fraction of the period of the Earth’s
rotation, the average solar day. In 1960, however, scientists realized that this
definition was not sufficiently precise, given that the length of a day varies
over time, so the second was redefined in terms of the revolution of the
Earth around the Sun. Just a few years after that, the second was again
redefined, to make it even more precise, as a multiple of the period of the
radiation corresponding to the transition between the two hyperfine levels of
the unperturbed ground state of the cesium-133 atom.
Sèvres is also home to the prototype kilogram, the International
Prototype Kilogram (IPK), a cylinder composed of 90 percent platinum and
10 percent iridium, about four centimeters high and wide. This prototype
replaced the original French definition of the kilogram, equal to the weight
of a liter of distilled water at a temperature of 4°C (39.2°F).
Despite the care with which they are conserved, bars and cylinders,
being pieces of metal, change over time. The standard kilogram was made
in 1889, together with five other identical copies. Compared to the copies,
in little more than a century the original lost 50 millionths of a gram. That
might seem like a pittance, more or less the weight of a grain of salt. But if
you consider the precision required by modern science, as well as the fact that
the kilogram enters into the definition of derived units such as those of
force and energy, this variation jeopardizes the entire international system.
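For readers who want to check the scale of the problem, here is a quick sketch of the arithmetic in Python (the comparison with modern requirements at the end is indicative only):

    # Relative size of the prototype kilogram's drift (illustrative sketch).
    drift_g = 50e-6        # the ~50 millionths of a gram lost in about a century
    mass_g = 1000.0        # one kilogram, expressed in grams
    print(f"relative drift: {drift_g / mass_g:.0e}")   # 5e-08, i.e., 5 parts in 100 million

    # Modern mass metrology aims at uncertainties of a few parts in 10^8 or
    # better, so a standard that wanders by that much undermines the system.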
The deterioration of artifacts, albeit philosophically in line with the
ephemeral nature of their human makers, is incompatible with the
universality and certainty required by modern science. There was a risk,
therefore, of falling back into a new scientific dark age with no certainty of
measurement.
This risk was averted by scientists, who, on November 16, 2018,
decided to redefine the units of the international system. No longer would
these be based on material objects or events; instead, they would use
universal physical constants, such as the speed of light in a vacuum or
Planck’s constant—constants that belong to fundamental physical laws and
theories. The speed of light, for example, is crucial for electromagnetism
and the theory of relativity, while Planck’s constant is central to quantum
mechanics.
This redefinition was a true Copernican revolution. In the past, these
fundamental constants were determined by measurements using the units of
the international system, based in turn on material prototypes. In November
2018, it was decided to reverse this procedure and to rely on fundamental
physical constants, to fix their values in immutable terms, and to define the
units of measure of the international system in reference to the constants
themselves. Describing the international system of measurements by relying
on fundamental physical constants, rather than material prototypes,
amounted to an affirmation that the natural laws governing the universe are
immutable. These constants can serve as the basis of a system of
measurements that is much more solid than systems based on objects or
events that we can see and touch. This was an earthshaking revolution for
science but also for humanity—a revolution that is still not very well known
and that we are about to discover.
Seven units of measure, a hymn to nature.
ONE
The Meter
The distance from 112 Mercer Street in Princeton, New Jersey, to the
physics department at Lincoln University in Pennsylvania is about 148,000
meters. Put that way, it seems like a very long trek. But if we translate it
into 148 kilometers, it is decidedly less imposing. Today, Google Maps
informs us that, by car, the drive takes an hour and forty minutes, but in
1946 the trip was much more daunting, especially if the person traveling
was in their seventies and had health problems. Considering that the reason
for the trip was the conferral of an honorary degree—the kind of event that
Albert Einstein did not find appealing, in view of the customary pomp and
circumstance—it would not have been strange had the father of the theory
of relativity declined the invitation. Lincoln, at the time, was a tiny
university with an enrollment of just over 250 students.
Nevertheless, Professor Einstein accepted, and gladly so. Because—and
these are his exact words, “That visit on May 3, 1946, was for a worthy
cause”—Lincoln University’s renown belied its small size. It was the first
American university to confer a college degree on an African American
student. Founded in 1854, it was nicknamed the Black Princeton for the ties
between its founders and first faculty members and the much more famous
university in New Jersey and because it was a touchstone for African
American university students.
In the post–World War II years, racial segregation still plagued the Black
community. Although the great majority of white Americans stubbornly
refused to recognize the problem, the voice of Albert Einstein rang out loud
and clear. Already in 1937, his attitude was unmistakable. He had offered
hospitality to Marian Anderson, one of the most famous opera singers of the
twentieth century, when she came to Princeton to sing and was denied a
room in a local hotel because of the color of her skin. In 1946, in an article
for the magazine Pageant—read primarily by a white audience—Einstein
had this to say about segregation: “The more I feel an American, the more
this situation pains me,” adding, “I can escape the feeling of complicity in it
only by speaking out.” On that May 3, the Nobel Prize winner—whose
emaciated face and simple ways, as a student at the time recalls, made him
look almost like a figure out of the Bible—attended the ceremony at
Lincoln University and gave a thank-you speech that would become
famous. He spoke harshly against racism and racial segregation, calling it,
“not a disease of colored people, but a disease of white people. I do not
intend to be quiet about it.”
It would take another nine years before the few meters that separate the
front of the bus from the back of the bus—the former reserved for whites
and the latter for Blacks—would mark the beginning of the modern civil
rights movement, thanks to the courage of Rosa Parks, who refused to walk
them. That day was December 1, 1955, and Einstein did not get to see it. He
died on April 18 of that year, having played the leading role in one of the
greatest revolutions of modern physics. With his theory of relativity,
Einstein had reshaped not only his own discipline but the whole of human
knowledge. He inspired, right up to the present, the recipients of a long
series of Nobel Prizes for research originating from his theory (for which,
irony of ironies, he himself was not awarded the Nobel) and became a
reference point for artists, philosophers, and intellectuals, as well as a pop
icon of physics.
It is only natural, therefore, that Einstein’s revolution has made itself felt
strongly also with regards to units of measurement. The theory of relativity
describes not a specific phenomenon but rather the environment in which
all physical phenomena take place: space-time. The theory does not merely
write one part of the big story that nature tells; it establishes the general
rules for storytelling. Relativity is a theory about space and time, and as
such, it takes precedence over all the others, which must be consistent with
it.
For millennia, humanity has attempted to create for itself a global and
universal system of units of measurement in order to describe and
understand the world and the nature that surrounds us; a system that
transcends borders and sovereignties and belongs to everyone. It is not
surprising that the theory of relativity has become a milestone on the way
toward the universalization of the meter, the unit with which, in this chapter,
we begin our journey of discovery of the measures of the world.
The name meter itself symbolizes the principle of measurement, both for
its etymology—from the Greek μέτρον (metron, measure)—and for having
lent its name to the first international treaty on units of measure, the Metre
Convention, signed in Paris by seventeen nations in 1875. This was an
event that rarely appears in history books but that established a first firm
foothold in the millennia-long march that started at the dawn of civilization.
They said that this king [Senwosret] distributed the land among all the Egyptians, giving each
an equal square lot, and that on the basis of this subdivision, he procured revenues, having
imposed a payment of an annual tribute. If the river swept away a part of the farmland, the
owner, having gone to the king, reported what happened. The king then sent his functionaries
to inspect and measure how much smaller the farmland had become, so that in the future the
owner paid a proportion of the tribute. I believe that, as a result of this, geometry was
invented and from there it passed on to Greece.
Allons, Enfants
Let’s pause for a moment and think about the importance of the message
communicated by those numbers engraved on the stones. What seem to us
today to be merely practical indications useful for organizing a trip were
instead a masterful message of power and inclusion, a way, perhaps as
effective as sending an army, for the central government to make its
presence known. Even in the most remote corners of the empire, the
measure of the distance from Rome intimated not only who was in
command—and who could use those roads to arrive with arms, if necessary
—but also that Rome was a place where, at least ideally, anyone could go,
even if they were not Romans. Rome was a center of power that was
accessible to all. The milestones indicated that the government was present
and that it took care of its territories. Those Roman numerals made it clear
where power was located but, at the same time, gave everyone the chance to
read them in the same way, no matter where they happened to be.
With the fall of the Roman Empire, the unifying force it had imposed
faded away, and, not surprisingly, this had repercussions for units of
measurement. For many centuries, measures of distance, among others,
became more or less a local affair. Every community had its own units,
which were often displayed on stone plaques in public gathering places.
Many of these still survive today. In Italy, we can see them in Padua,
Senigallia, Salò, and Cesena, just to give a few examples. In one of the
central squares of Padua, for example, there is a stone plaque bearing the
date 1277 on which the measurement units for flour, grains, bricks, and
fabric are engraved. These measurements were used as standards in case of
disagreements between buyers and sellers. Curiously, the location was
called, in the local dialect, “Cantoon dee busie,” or corner of lies: if a trader
was trying to cheat somebody, the fraud would be discovered there. In
France alone, there were an estimated 250,000 different units of
measurement.
The new units were not all that imaginative. Many were still based on
parts of the human body, typically that of the local ruler: arms, palm, feet,
and so on. Naturally, the territorial extension of the validity of such
measurements was much more limited than it was in ancient Egypt, and this
“metric sovereignty” caused more than a few problems. Imagine what it
was like for an itinerant seller of fabric or ropes. Today, we take all this so
much for granted that we never think about it. The price is by the meter, and
if we want a certain length of a certain fabric from a certain producer, we
pay the same amount whether we buy it in one city or another or on the
internet. But in those times, sellers had to recalculate the price in every
village or town, and if they were less than totally honest, they had ample
opportunities to cheat their customers. In general, the lack of common
standards for measuring made commerce extremely difficult and often left
the weaker strata of society at the mercy of the prevailing power brokers,
particularly for such things as the measurement of terrain and real estate.
A cardinal element of a fully realized democracy is equal access to
scientific ideas. Or at least, this should be the case: in view of much recent
experience, the conditional is obligatory. Recent events have highlighted just
how crucial such access is. Certainly, the revolution that began with Galileo
and the subsequent propagation of the scientific method were, if only
indirectly, an essential component of the definition of a universal system of
units of measurement. This system safeguards the interests of everyone,
regardless of their status or power. Over time, the scientific community
increasingly felt the need for a system that would allow for the comparison
and reproducibility of the results of their experiments and observations,
which, from Galileo onward, came to be recognized as the engine of
scientific progress.
Several centuries were to pass, however, before the French Revolution,
with its universalistic and antiaristocratic aspirations, pushed for a radical
change in the system of measurement. From local—and thus not very
standardized—systems, which, in commerce, often favored the few who
managed them and who reaped lavish rewards from the generalized
confusion, the revolutionaries wished to establish a universal system that
was the same for everyone. It is not a coincidence that in the last decade of
the eighteenth century, the decimal metric system, precursor of the current
international system, was born in Paris.
The revolutionaries wanted to free themselves completely from the old
temporal and religious powers. They attempted to introduce a decimal
calendar that made it difficult to keep track of religious holidays and
Sundays in particular. This experiment was not all that successful. On the
other hand, two other revolutionary novelties not only survived but became
fundamental to what was to become the current metric system: the kilogram
and the meter. The meter was defined at the National Assembly session of
March 30, 1791, as one ten-millionth of the distance from the North Pole to
the equator, measured along the meridian that passes through Paris. Theory
soon became practice. Two scientists, Jean-Baptiste Delambre and Pierre
Méchain, were charged with measuring physically an arc of the terrestrial
meridian passing through Paris. They chose the arc between Dunkirk,
France, and Barcelona, Spain, equal to approximately one tenth of the
distance from the North Pole to the equator. This section of the meridian
has the great advantage of being for the most part flat. The two scientists set
out on their expedition in 1792. Delambre measured from Dunkirk all the
way to the cathedral of Rodez, France, while Méchain started from Rodez
and arrived in Barcelona. They thought they would be able to do it in one
year, but it took them six, in an enterprise that sometimes reached epic
proportions and that was made even more difficult by its having to be
carried out in a Europe turned topsy-turvy by the revolution.
In 1798, they brought their results to Paris, where they then became the
basis for the definition of the length of the meter. This was given material
form in a platinum bar called the mètre des archives. The bar was deposited
in the National Archives on June 22, 1799, as the standard. A number of
copies were made for practical use. To make the new unit of measure
familiar to the population, samples of the meter were fixed in various places
throughout Paris. Today it is still possible to see two of them, one at 36, rue
de Vaugirard and one at 13, place de Vendôme.
Old habits die hard, and the introduction of the new system met more
than a little resistance from the people, who continued to use the old units.
Finally, in 1812, Napoleon repealed the law that imposed the metric system.
In an odd parallel with the fate of the powerful, after the fall of Napoleon,
the traditional measuring system was replaced in 1837 by a law that went
into effect in 1840 and returned the metric system to the fore in France.
However, it would take until the middle of the nineteenth century for the
metric system to establish itself firmly in France and begin its conquest of
the rest of Europe.
On July 28, 1861, with passage of Law 132, the decimal metric system
was introduced in the Kingdom of Italy. There, too, its application was far
from immediate, and the central government pressured mayors to promote
the implementation of the new system by local populations. Specially
designed plaques with tables of equivalence were displayed in public
places. Public schools also played an interesting role in promoting metric
literacy. A glance at the elementary school curriculum for 1860, for
example, reveals instructions such as the following: “To these notions the
teacher shall add a brief explanation of the metric system, teaching pupils
the names of the new measures, explaining in detail what is meant by a
meter, how all the other units of measure derive from it, and what the value
is of each.” And “Teachers of the fourth year and of preceding years are
reminded that the most important subjects of elementary instruction are the
catechism and religious history, Italian grammar and composition,
arithmetic and the decimal metric system. To these subjects, therefore, they
shall direct most of their attention and consecrate most of the time at their
disposal in the school.”
By now, the road ahead was clear, and step by step (or rather meter by
meter), the revolutionary dream came true. On May 20, 1875, in Paris,
seventeen nations signed the Convention du Mètre, the treaty that
established a permanent organizational structure that allowed contracting
countries to act in common accord on all questions related to units of
measure. Furthermore, the treaty also instituted the General Conference on
Weights and Measures (Conférence générale des poids et mesures, CGPM)
as an international diplomatic organization responsible for the maintenance
of an international system of units of measure in harmony with the progress
of science and industry.
That same period saw the institution of an intergovernmental
organization known as the International Bureau of Weights and Measures
(Bureau international des poids et mesures, BIPM). Located just outside
Paris, in Sèvres, the BIPM is the body through which contracting states
act on questions of metrological importance. The bureau also serves as both
the scientific and literal custodian of the international system of units.
Indeed, the international prototype meter is kept on the premises of the
BIPM: a bar of platinum–iridium with a special X-shaped cross-section (the
Tresca section, named for its inventor, Henri Tresca) to resist possible
distortion. In 1889, the length of the meter was defined as the distance
between two lines engraved in the bar (the so-called line standard) to avoid
problems owing to possible wear on the ends. Thus, the bar itself was
longer than a meter.
This piece of metal would constitute the standard against which all
meters around the world would be calibrated. Of course, this process
required intermediaries: accurate copies of the prototype meter in Sèvres
were distributed to each contracting state of the Metre Convention. The
American copy is bar number 27, received by President Benjamin Harrison
on January 2, 1890.
A New Relativity
After his visit to Lincoln University, Einstein lived only nine more years.
He died in 1955, and fate denied him the chance to see the invention of one
of the premier scientific instruments for the study of light: the laser. The
first prototype was made in 1960. The laser produces a well-collimated and
monochromatic beam of light, which is to say that the light emitted is of a
precise color. In more technical terms, all of the electromagnetic energy that
composes the beam has the same well-defined frequency and therefore the
same energy, and this allows it to be monitored with accuracy. Even when it
makes a round-trip voyage to the Moon.
The laser’s monochromatic property and the fact that the speed of light is
a universal constant are the basis of one of the laser’s first spectacular
applications: establishing with precision the distance between the Earth and
its satellite. This enterprise was accomplished in 1962 by an Italian
physicist, Giorgio Fiocco, who at the time was working at MIT. In essence,
Fiocco shot a laser beam toward the Moon and measured the light that came
back after being reflected off its surface. With great experimental tenacity,
Fiocco and his colleague, Louis Smullin, searched for the packets of light
that were reflected directly off of the lunar surface. Given the extremely
weak intensity of the reflected light, this was not an easy task; but between
May 9 and May 11, 1962, they succeeded, and their results were published
on June 30, 1962, in Nature. They paved the way to a multitude of other
applications. Since the speed of light is well known and constant, by
measuring how much time light takes to return to Earth after being reflected
off the Moon—for the record, about 2.5 seconds—you obtain a remarkably
accurate measurement of the distance between the Earth and the Moon, on
average a distance of 384,400 kilometers.
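The calculation just described fits in a few lines. A minimal sketch in Python, taking a round-trip time of about 2.56 seconds (the text's 2.5 seconds, slightly refined to match the average distance):

    # Earth-Moon distance from the laser round-trip time: d = c * t / 2 (sketch).
    c = 299_792_458                 # speed of light in m/s
    round_trip_s = 2.564            # assumed round-trip time in seconds
    distance_m = c * round_trip_s / 2
    print(f"distance: {distance_m / 1000:,.0f} km")     # ~384,000 km

    # A 1 cm uncertainty over that distance is a relative error of ~2.6e-11,
    # comfortably below one ten-billionth:
    print(f"relative error: {0.01 / distance_m:.1e}")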
Fiocco’s measurement is still carried out today to track the
Moon’s distance from Earth, but with the assistance of instruments left
there by Apollo astronauts, in an experiment called Lunar Laser
Ranging. Given the ratio between its relative simplicity and the
huge amount of information it has produced, it has been called the most
successful experiment of the Apollo 11 mission. The component transported
to and left on the Moon is, in fact, a mirror: a square panel with sides
measuring about 50 centimeters oriented toward Earth. Fixed to this panel
are a hundred or so retroreflectors, special reflectors able to reflect light
with great efficiency back in the same direction it comes from (the same
principle applied by reflectors on bicycle wheels). The light in question is
the light “shot” from Earth with a laser, and the mirror functions as the
target.
Shooting from the Earth and hitting an object not much bigger than a
pizza box placed on the Moon may seem like something out of science
fiction. Yet the scientists from the Lick Observatory in California could
already do that, with the aid of a powerful telescope, just a few days after
the landing on the Moon. This was no mean feat, given the light’s voyage of
more than 700,000 kilometers and especially considering that the light
beam reaches the Moon with a diameter of about four kilometers and that
only the light that hits the little mirror is useful for the measurement. No
wonder scientists have written that aiming at a mirror on the Moon is like
using a rifle to hit a coin at a distance of three kilometers! With Lunar Laser
Ranging, the distance from Earth to Moon was determined with a precision
on the order of one centimeter, a margin of error of less than one ten-
billionth. The quality of the light emitted by the laser makes it possible to
reach levels of precision that were previously unimaginable, even—as we
have just seen—for the measurement of length. So it was the laser that
brought about, in 1983, the most recent modification of the definition of the
meter, probably the last for a long time. It was the first of the changes
that led, in 2018, to the complete redefinition of the
international system of measurement based on universal physical constants.
“c” as in Universal
“c”—a simple letter that holds within itself a universal property of nature. A
property of everyone and for everyone, intangible and immutable, totally
exempt from the inevitable decay of human experience. Is it any wonder
that it was precisely this property that was chosen for a universal definition
of the symbolic unit of the decimal metric system?
Before we arrive at the last act of the millennia-long history of the unit of
length, it is worth making a small digression to recount the origins of c.
Indeed, it is quite legitimate to ask why “c” and not “a,” “b,” or something
else. Both Maxwell and Einstein, in his first article in 1905, had used “V,”
for example. Other physicists used “c,” however, and it was this definition
that took hold, so that even Einstein himself converted in 1907. In reality,
there is no clear answer to this curious question. One school of thought
ascribes the origin of the choice of “c” to constant, based on the
universality of its value. Another explanation ties the choice to the Latin
term celeritas (speed). As of today, despite numerous studies, the
ambiguity is still with us. All things considered, a little aura of mystery
about a celebrity like c isn’t such a bad thing.
The long history of the meter concludes, at least for now, in 1983, when
the last act was staged. At the 17th General Conference on Weights and
Measures, it was determined that “the present definition does not allow a
sufficiently precise realization of the meter for all requirements” and “that
progress made in the stabilization of lasers allows radiations to be obtained
that are more reproducible and easier to use than the standard radiation
emitted by a krypton 86 lamp” (which was the basis of the redefinition in
1960). Above all, however, it was noted that “progress made in the
measurement of the frequency and wavelength of these [laser-produced
electromagnetic] radiations has resulted in concordant determinations of the
speed of light whose accuracy is limited principally by the realization of the
present definition of the meter” and that “there is an advantage, notably for
astronomy and geodesy, in maintaining unchanged the value of the speed of
light recommended in 1975 by the 15th CGPM in its Resolution 2 (c =
299,792,458 meters per second).”
In other words, this means that between the meter of French
revolutionary days and the speed of light, science holds the latter to be
more reliable. Another great tribute to the genius of Einstein. The
exigencies of modern science and technology require levels of precision in
the measurement of length that even the meter defined in terms of the
radiation emitted by krypton can no longer ensure. Rather than chasing after
new definitions that might allow ever more precise determinations of c, the
1983 convention preferred to establish a fixed point. The speed of light is
defined, once and for all, at the precise value known at the time,
299,792,458 meters per second, and the meter is defined indirectly in
relation to c and the definition of the second.
Since velocity corresponds to the ratio between the measure of the length
and that of the time employed to cover it, the length of a meter is defined,
quite simply, as the distance traveled by light in a fraction of a second
equal to 1/299,792,458. The definition of the meter is, therefore, indirect,
and is based on the definition of the second, which, as we shall see, is
measured, thanks to atomic clocks, with much greater precision than that
with which the meter could be measured directly.
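Stated as arithmetic, the 1983 definition is almost tautological; a two-line sketch makes the point:

    # The meter from the fixed speed of light (sketch of the 1983 definition).
    c = 299_792_458                     # m/s, exact by definition since 1983
    t = 1 / 299_792_458                 # the fraction of a second in the definition
    print(f"{c * t:.12f} m")            # 1.000000000000 -> one meter, by construction
    print(f"{1e9 / c:.2f} ns")          # ~3.34 ns for light to cross one meter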
The era of definitions based on human artifacts came to an end for the
meter in 1983 and has begun to end for all the other units of measure. With
the meter, humanity has given its initial approval to a system that does not
depend on physical objects but is based entirely on the speed of light and
other universal constants of physics. These constants are fundamental for a
series of well-consolidated scientific principles, and they represent the
backbone of our continuously growing knowledge of the laws of nature.
A system of measurement that could be, finally and truly, for all time
and for all people.
TWO
The Second
A Moment of Madness
There are those who do it because a moment of madness can happen to
anyone. For the more fortunate among us, such a moment occurs while
they’re sleeping, but to some it happens when they’re awake, inside a store.
For others, it’s a matter of a thirst for revenge, to strike back for a suffered
wrong. Or mere masochism. Or even a flash of lucid cruelty. Whatever the
motivation, there are lots of us who purchase, and who then, perhaps, give
as a present to friends or enemies, one of the world’s most feared
ornamental trinkets: a snow globe, the mythical glass ball that, when you
shake it, gives you the impression that snow is falling inside it. The
operating principle is always the same: the sphere is full of a transparent
liquid surrounding a small three-dimensional artifact that reproduces a
landscape. The scenes are often related to Christmas, but not always. Some
represent famous monuments, puppets, cartoon characters, or religious
scenes. All that beauty is the fruit of the ingenuity of Erwin Perzy, a
Viennese manufacturer of surgical instruments, who made the first snow
globe in 1900. Inside was a reproduction of the Mariazell Basilica, with
snowflakes made from microscopic fragments of grated rice. Today there is
even a museum in Vienna in commemoration of the inventor that houses a
collection of his most prized pieces.
A smidgen of hypocrisy prevents many of us from admitting that we
have at least been tempted to buy a snow globe. It prompts others to
embellish the purchase with a cultural flourish, citing the scene from the
Orson Welles film Citizen Kane dedicated to our little knickknack.
Nevertheless, the success of this souvenir is confirmed by the data.
According to a study conducted a few years ago and reported in numerous
newspapers, the snow globe is the most frequently confiscated object at
security checkpoints at London City Airport. Many of them, in fact,
contain a quantity of liquid in excess of the amount permitted for carry-on
luggage, and so they end their journeys ingloriously in the hands of security
personnel, saving from harm, in all likelihood, an old friendship or a
nascent romance. Other, much more banal, items fill out the roster of items
seized at the London airport: cosmetics, bottles of alcoholic beverages,
tennis rackets, handcuffs (!). Not even one atomic clock, it seems.
In 1971, by contrast, Joseph Hafele and Richard Keating, traveling in an
era when airport controls were far more relaxed, had no problem carrying
an atomic clock onto their commercial airline flight. Pictures from the time
show the clock as a rather cumbersome parallelepiped, about the size of an
average refrigerator, that managed to make more than one around-the-world
voyage in the company of its two fellow travelers.
Hafele and Keating were a physicist and an astronomer, respectively, and
the airplane voyage of the atomic clock was a crucial experiment for
verifying with macroscopic clocks the modification of time predicted by
Einstein’s theory of relativity, whether caused by movement—in the special
relativity theory—or by gravity, in the general theory. As reported in the
abstract of the famous article they published immediately after their voyage
in the prestigious journal Science, the experiment was a success:
Four cesium beam clocks flown around the world on commercial jet flights during October
1971, once eastward and once westward, recorded directionally dependent time differences
which are in good agreement with predictions of conventional relativity theory. Relative to
the atomic time scale of the U.S. Naval Observatory, the flying clocks lost 59 ± 10
nanoseconds [billionths of a second] during the eastward trip and gained 273 ± 7 nanoseconds
during the westward trip, where the errors are the corresponding standard deviations. These
results provide an unambiguous empirical resolution of the famous clock “paradox” with
macroscopic clocks.
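The pattern of those numbers, slow eastward and fast westward, can be reproduced with a deliberately crude model: a clock circling the equator, compared with one fixed on the rotating ground. The sketch below is not the analysis in the Science paper (which followed the actual flight paths); the speed, altitude, and airborne time are assumptions chosen only to show the order of magnitude.

    import math

    # Toy Hafele-Keating estimate (sketch; equatorial route, assumed parameters).
    c = 299_792_458                      # m/s
    g = 9.81                             # m/s^2, for the gravitational term
    R = 6.371e6                          # Earth's radius, m
    omega = 2 * math.pi / 86164          # Earth's rotation rate (sidereal day)
    v_ground = R * omega                 # ~465 m/s, clock fixed at the equator

    v_air, h, t = 230.0, 9000.0, 40 * 3600   # assumed speed, altitude, airborne time

    for name, v in [("eastward", v_air), ("westward", -v_air)]:
        kinematic = -((v_ground + v) ** 2 - v_ground ** 2) / (2 * c ** 2)
        gravitational = g * h / c ** 2
        print(f"{name}: {(kinematic + gravitational) * t * 1e9:+.0f} ns")
    # -> about -72 ns eastward and +270 ns westward: the same sign and size
    #    as the measured -59 +/- 10 ns and +273 +/- 7 ns.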
The demand for accuracy went largely unanswered. On the contrary,
with the fall of the Roman Empire, the evolution of timekeeping in
Europe essentially stalled until the Middle Ages, when
the formation of new communities rekindled the need to measure time.
Mechanical clocks, installed in the towers of public buildings, became
instruments of community identity. Among the most famous was the Clock
Tower overlooking Saint Mark’s Square in Venice, commissioned in 1493.
A true revolution in the measurement of time, however, would have to wait
for the dawn of modern physics.
Liquefying Time
Legend has it that, at the end of an interview with a by-then elderly Pablo
Picasso, a journalist asked the master if he could have a sketch as a souvenir
of their conversation. Picasso grabbed a pencil and a notebook and made a
drawing. The journalist asked him, “Do you realize that it took you only a
few seconds to make this sketch, and now I could sell it for thousands of
pounds?” Picasso responded, “It didn’t take a few seconds to make this
drawing, it took eighty years.”
It took Michelangelo four years to paint the Sistine Chapel, but other
famous paintings took much less time. Salvador Dalí himself recounted
that, to paint his famous painting The Persistence of Memory, it took him
only a couple of hours, the time it took for his wife, Gala, to go to the
cinema to see a movie that he had skipped because of a headache.
That painting, from 1931, depicts a landscape on the Costa Brava
featuring some melted, almost liquefied, pocket watches. It is a reflection
on time, the aseptic time tracked by the watches but also the time of human
experience. With the melting of the watches, objective time becomes
flexible, subjective, personal. Relative. It is not surprising that many critics
have suggested that Dalí was strongly influenced by Einstein’s theory of
relativity, which was much talked about at the time. Recall the fanfare
surrounding the experimental confirmation of the theory achieved by the
British astronomer Arthur Eddington, which we will address in the chapter
on the kilogram. Having escaped from the academy, relativity became a
topic of conversation and a leading player in cultural debate. In 1929, just
two years before Dalí’s painting, the New York Times attributed to Einstein
the following remark: “When you sit with a nice girl for two hours you
think it’s only a minute. But when you sit on a hot stove for a minute you
think it’s two hours. That’s relativity.”
We don’t know if Einstein was really the author of this quip (which,
obviously, would be equally valid even if the roles were reversed or if the
protagonists were of the same gender). It is certainly true, however, that his
theory of relativity revolutionized the concept of time by denying its
absoluteness. Time is no longer an absolute concept. It flows in different
ways in frames of reference that move at different velocities. Two events
that are simultaneous for an immobile observer may not be for an observer
in motion. Less than three centuries after Galileo, time experienced another
revolution.
Up until the end of the eighteenth century, in fact, the basic elements of
mechanics were the Galilean principle of relativity and the concept of
absolute time. Then came the development of electromagnetism, the science
that described the electric and magnetic phenomena that were increasingly a
part of society, of the economy, of daily life. Just think of the revolutionary
developments of electric light, Marconi’s first transoceanic transmissions,
the first electric motors. In 1892, between Rome and Tivoli, in the nearby
Sabine Hills, the first experimental line for the transmission of electricity
went into active service. In short, the practical application of
electromagnetism was in full swing. So it was a real problem when
physicists noticed that the equations that describe it, Maxwell’s laws, were
not consistent with Galilean relativity. Maxwell’s equations are not
invariant under Galilean transformation. Electric and magnetic phenomena
in a moving frame of reference are different from those in a stationary one.
A real problem!
A problem rooted in a simple fact. Do you remember Superman? “Faster
than the speed of light!” . . . Well, as we have seen in the preceding chapter,
that is a flight of fancy.
Relative Present
“Tell me, what is the use of these experiments of yours?” Supposedly, one
day the English chancellor of the exchequer asked this question of Michael
Faraday. An insidious question, given that in the mid-nineteenth century,
Faraday was a scientist working in Her Majesty’s service and was therefore
rather sensitive to the finance minister’s opinion of his work. Faraday,
however, was not cowed: “I can’t tell you exactly, but one day you can tax
it.” And he was right. What he was working on was the experiment that
would demonstrate the possibility of transforming mechanical energy into
electrical energy thanks to the movement of a conductor in a magnetic field.
In essence, the prototype of modern generators of electricity. A glance at the
taxes on your electric bill confirms that Faraday had a good imagination.
Those were years of great ferment for electromagnetism, which on the
one hand was finding more and more practical applications, while on the
other hand was being given a complete theoretical apparatus by Maxwell’s
equations, developed in part based on the work of Faraday. However,
Maxwell’s equations, which describe the behavior of the electric and
magnetic field, have a precise consequence. Think of a light source, a light
bulb, for example. The light it emits always moves at the same constant
velocity, 299,792,458 meters per second, regardless of the velocity of the
source that emits it. In other words, as fast as we might be moving, light is
always faster than us by exactly the same amount: 299,792,458 meters per
second. Light always moves at the same speed in whatever frame of
reference.
This was a big problem for Galilean invariance. Einstein gets the credit
for the solution. He started from two givens:
1. All the laws of physics remain the same in all frames of reference that
move at constant velocity with respect to each other. In other words, no
law of physics can reveal whether a frame is in uniform motion, nor can
any law measure its absolute velocity.
2. The speed of light is the same in all inertial frames of reference,
regardless of their motion.
Let’s suppose that astronauts have gone on an ambitious space mission and
have arrived on Proxima Centauri, one of the stars closest to Earth. In fact,
this star is about four light-years away from our planet, about 40 trillion
kilometers. This means that light takes four years to travel to Proxima
Centauri from Earth, and that is also the minimum time that a signal takes
to travel from that star to us.
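The 40 trillion kilometers follow directly from the definition of a light-year; a short check:

    # Four light-years, in kilometers (sketch).
    c_km_per_s = 299_792.458
    light_year_km = c_km_per_s * 365.25 * 86400     # ~9.46e12 km
    print(f"{4 * light_year_km:.1e} km")            # ~3.8e13, about 40 trillion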
On landing, the astronauts open a chat on some social network—yes,
they too have friends they want to impress—and during the chat they start
transmitting a live (for them) video to show us how things are going. Those
images will reach us four years after they send them. The whole telecast of
their mission will be delayed by four years.
“Here, now you can see some inhabitants of Proxima Centauri moving
toward me,” one of the astronauts tells us in the video, zooming in on the
natives. But the astronaut’s “now” has a meaning very different from our
“now.”
The “now” that reaches us by way of the telecast refers to events four
years in our past on the star. A present, a universal “now” that separates past and future,
does not exist. We have no idea what is happening on Proxima Centauri in
this moment. We cannot know now if the natives offered the astronauts
some coffee or if the astronauts less amiably transformed the natives into
fuel for their spaceship. We will find out in four years. And if you are
reading this book by the light of the Sun, you do not know whether our little
private star is still shining now. The Sun could actually have gone dark, and
we would not realize it until eight minutes later, which is how much time it
takes for its light to reach us here on Earth. The James Webb Space
Telescope, which recently started producing its beautiful images thanks to a
partnership among NASA, the European Space Agency, and the Canadian
Space Agency, exploits this physics process to explore the origin of our
universe. The light measured today was in fact emitted by galaxies in the
early universe. As the Webb website says, “Webb will directly observe a
part of space and time never seen before. Webb will gaze into the epoch
when the very first stars and galaxies formed, over 13.5 billion years ago.”
“Now” is not something that is physically observable, because in the
case of Proxima Centauri it takes us, here on Earth, four years to measure it.
On that star, there are a series of events that have already happened—which
without doubt belong to our past—and that are before four years ago. Then
there are other events, which will belong to our future—and which will
happen in no less than four years. Yet there is also an undefined period of
eight years, which belongs neither to our past nor to our future. Eight years
of events, which, on the one hand, we do not yet know about and which, on
the other, we cannot influence. If our supercomputer were to predict that six
years from now it will rain on Proxima Centauri, we could send a message
to our astronauts and influence the future (they could buy umbrellas and
avoid getting wet). But if we predict that it will rain only three years from
now, there is nothing we can do to warn them.
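The umbrella argument reduces to a single comparison: a light-speed message is useful only if the predicted event lies further in the future than the signal's travel time. A sketch, with the four light-years from the text:

    # Can a warning sent now arrive before the predicted event? (sketch)
    def can_warn(event_in_years: float, distance_ly: float = 4.0) -> bool:
        # A light-speed signal needs distance_ly years to reach the star.
        return event_in_years > distance_ly

    print(can_warn(6))   # True: the message lands two years before the rain
    print(can_warn(3))   # False: the rain falls a year before the message lands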
The past is that set of events that can send light signals to observers and
thus influence them. The future is that set of events to which observers can
send signals of light and that, in principle, can then be influenced by the
observers themselves. Then there is a new series of events in space-time
that we cannot influence now from where we are, and that cannot influence
us here and now, because nothing can travel faster than light. This new set
of events represents a sort of extended present, neither past nor future, a
consequence of Einsteinian relativity. The duration of this extended present
depends on position: sixteen minutes in the case of the Sun, eight years in
the case of Proxima Centauri. Whereas before Einstein, space and time
were clearly distinct entities, now they blend. They must inevitably be
considered together and become space-time. It is not something that is easy
to accept. The notion of absolute time is deeply rooted in our experience.
In the words of Hermann Minkowski, a mathematician who was one of
Einstein’s professors in Zurich, “Henceforth space by itself, and time by
itself, are doomed to fade away into mere shadows, and only a kind of
union of the two will preserve an independent reality, space-time.”
Everyday Relativity
Time is also influenced by the mass of objects in the vicinity. This is the
result of the general theory of relativity, the extension and completion of the
special theory of relativity.
With his general relativity, Einstein put together his principle of
relativity with Newton’s universal law of gravity, another of the
fundamental laws of physics, which describes how two masses interact with
each other by way of an attractive force. Space-time is enriched by gravity,
this force at a distance that controls the movement of the planets. Space-
time is no longer something empty, rigid, but becomes a flexible entity, a
sort of network whose webbing intersects with the lines along which the
force of gravity operates. A network that bends in proportion to objects with
mass—the greater the mass the greater the bend—just as a mattress forms a
hollow equivalent to the size of the person who sits on it.
Massive objects bend space, and the curvature attracts other bodies
toward them. The Earth revolves around the Sun, thanks to its velocity, just
as a cyclist does in a track race. Can you envision those inclined tracks they
use in the races that are televised during the Olympics? The cyclists remain
up high on the track only as long as they are racing. When they stop racing,
they inevitably drift down toward the bottom of the track, like coins tossed
into those funnel-type things often seen in museums. If it slowed down, the
Earth would be attracted by and crash into the Sun. The general theory of
relativity helps us understand, for example, black holes, extremely
“massive” objects that attract toward themselves everything around them,
including light, which then never manages to come back out.
The general theory of relativity also contributes to the further
modification of our concept of time. Indeed, time is modified not only by
movement but also by gravity, by the presence of masses. Time loses
another part of its “absoluteness”—if there was any left after special
relativity—and now it flows at different speeds depending on the masses
that are present in the vicinity.
Time passes more slowly near a mass, and therefore, on Earth it goes
by faster up high—more distant from the center of the Earth—than down
low. As if to say that tall people age faster than short people. On a human
scale, the effects are extremely small but measurable, as demonstrated by
the experiment of Hafele and Keating, mentioned at the beginning of this
chapter. As recently as 2018, a transportable atomic clock was taken to a
laboratory located in the Fréjus Road Tunnel (at an altitude of about 1,200
meters) in the Alps by researchers from the Italian National Metrological
Research Institute, who verified that time flowed faster on the mountain
compared to their headquarters in Turin, situated at an altitude of about 200
meters.
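The size of the effect those researchers measured can be estimated with the standard weak-field formula, in which two clocks separated by a height difference Δh tick at rates differing by a fraction g·Δh/c². A back-of-the-envelope sketch (our own rough estimate, ignoring latitude and velocity corrections):

```python
# Gravitational time dilation between two clocks at different altitudes:
# fractional rate difference ~ g * delta_h / c^2 (weak-field approximation).

G_ACCEL = 9.81       # m/s^2, standard gravity
C = 299_792_458.0    # speed of light, m/s

delta_h = 1200 - 200                     # meters: Frejus lab minus Turin
fraction = G_ACCEL * delta_h / C**2      # dimensionless rate difference
ns_per_day = fraction * 86_400 * 1e9     # nanoseconds gained per day up high

print(f"fractional shift: {fraction:.2e}")             # ~1.1e-13
print(f"higher clock gains ~{ns_per_day:.1f} ns/day")  # ~9.4 ns/day
```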
One of the most striking experimental demonstrations of general
relativity is the recent measurement of gravitational waves. These are very
weak wrinkles in space-time, generated by modifications in the distribution
of masses on a cosmic scale, such as the collision between two black holes.
These wrinkles extend outward like waves in the ocean. But the
modification of time described by the theory of relativity also has a much
more practical application, one that we even carry in our pockets. By now,
all of our cell phones have an incorporated GPS (Global Positioning
System). Well, if the GPS did not correct for the modifications in the flow
of time owing to relativity, it would make errors of whole kilometers in the calculation of positions.
A GPS satellite orbits at about 20,000 kilometers from the surface of the
Earth, and it moves at a velocity of about 14,000 kilometers per hour with
respect to Earth. If we run the numbers, we find that just the effect of
special relativity would cause the clocks of the GPS to slow down by about
seven microseconds per day. To be sure, we are talking about a few
millionths of a second. Yet if the GPS clock did not take this into account,
since the electromagnetic waves with which the satellites send signals to
our receivers on Earth cover about 30 centimeters in a nanosecond, seven
microseconds would correspond to an error of two kilometers! Moreover, if
we add in the effect of gravity, given the distance of 20,000 kilometers
between the satellites and the receivers, the positioning error would increase
to 18 kilometers! In other words, if it hadn’t been for Einstein, we would
never have found that marvelous country house nestled sweetly in the hills
of . . . .
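The figures in this paragraph are easy to verify. A minimal back-of-the-envelope sketch (our own, using the rounded orbit numbers quoted above) of the special-relativistic part of the effect:

```python
# Special-relativistic slowdown of a GPS clock and the resulting ranging error.

C = 299_792_458.0          # speed of light, m/s
V = 14_000 / 3.6           # orbital speed: 14,000 km/h expressed in m/s

# A clock moving at speed v runs slow by a fraction ~ v^2 / (2 c^2).
fraction = V**2 / (2 * C**2)
us_per_day = fraction * 86_400 * 1e6
print(f"clock slowdown: about {us_per_day:.1f} microseconds per day")  # ~7

# Light covers ~30 cm per nanosecond, so an uncorrected clock error dt
# becomes a position error of c * dt.
km_error = us_per_day * 1e-6 * C / 1000
print(f"daily positioning error: about {km_error:.1f} km")             # ~2
```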
THREE
The Kilogram
Letters
Dear Edoardo,
Mein Führer!
It is hard to imagine that two letters with these salutations, one written by
hand to a friend, the other typewritten and sent to Adolf Hitler, might have
something in common. Yet there is actually a lot that unites the two
missives. For starters, the dates they were written, the first on August 15
and the second on October 25, just a few weeks apart in the year 1944.
Then the anxiety for the fate of loved ones—a father-in-law and a son—the
passion for the writers’ own work that can be read between the lines, and
the drama of a time oppressed by horrible dictatorships, fascism, and
Nazism that would do away with both of those loved ones. Above all, that
the authors of the letters were two of the most famous physicists of all time,
Enrico Fermi and Max Planck.
The summer of 1944 was a period of change for Enrico Fermi. The letter
was written in Chicago, but Fermi was about to leave, on his way to Los
Alamos, New Mexico. A victim of Benito Mussolini’s racial laws (Fermi’s
wife, Laura, was Jewish), he had left Italy in 1938, when he was awarded
the Nobel Prize. He went to Stockholm to receive it, and from there, after a
brief stop in Copenhagen for a visit with Niels Bohr, he embarked for the
United States. His American period began at Columbia University, in New
York, and then he moved on to the University of Chicago. There, in 1942,
Fermi built the first atomic pile and produced the first fission chain
reaction, an experiment that threw open the doors to the exploitation of
nuclear energy. Then, in 1944, he was called to Los Alamos by Robert
Oppenheimer to work on the Manhattan Project, which would produce the
first American atomic bombs, later dropped on Hiroshima and Nagasaki.
The addressee of Fermi’s letter was Edoardo Amaldi, one of the
youngest of the group of scientists in Rome known as the Via Panisperna
Boys. Rome had just been liberated. American troops, under the command
of Gen. Mark Wayne Clark, had entered the city on June 4, and Fermi took
advantage of the reopening of communications to write to his colleague and
friend. “Dear Edoardo, I had some recent news of you from Fubini1 on his
return from Italy. Now that postal communications with Rome have been
officially reopened, I hope this letter has a good chance of reaching you.”
Then he immediately writes about Augusto, father of his wife, Laura
(“Lalla”) Capon. “As you can imagine, Lalla has been very upset by the
news about her father. The uncertainty about his fate is much worse than
knowing he is dead.” Augusto Capon, a Jew, was a prominent admiral in
the Italian Royal Navy and a friend of Mussolini. Until 1938, he was the
head of the navy’s Secret Information Service. That was not enough to save
him when, on October 16, 1943, Italian and German soldiers combed the
city searching for Jews. That very day, Capon wrote in his diary, “Incredible
things are happening in Rome: this morning groups of fascists, they say
together with some German soldiers, have picked up Jews of any age and
gender and taken them to some unknown place. That this happened is
certain, how is not.” Capon died in Auschwitz the following week.
Later in the letter, Fermi’s passionate concern for the fate of physics in
his country comes to the fore. After years of darkness and the dissolution of
Italian research, Fermi hints at a note of optimism. “I was very pleased to
hear that you and Wick2 are hoping you will soon be able to get back to
your scientific work, and that you are looking to the future with a certain
degree of optimism. Judging the situation from this side of the Atlantic, I
sometimes hope that the reconstruction of Italy may be less difficult than
that of other European countries. Certainly fascism fell in such a miserable
fashion that it doesn’t seem possible it has left any regrets.”
It is to Adolf Hitler, who was at the center of the horrors of the war and
the extermination of the Jews, that Max Planck, the eminent German
physicist, winner of the Nobel Prize in 1918, and one of the fathers of
quantum mechanics, addresses his letter. He does so to plead for mercy for
his son Erwin.
Planck had already met Hitler personally in 1933, just after his rise to
power. At the time, the 75-year-old Planck was probably the most
authoritative scientific personality in Germany and president of the
prestigious Kaiser Wilhelm Society for the Advancement of Science, which
led German research. In that capacity, Planck requested a formal audience
with the new chancellor, who had taken office just a few months before.
The purpose of the audience was for him to pay his respects, but Planck—
who, unlike many of his colleagues, was never to leave Germany,
remaining loyal to his country despite his disagreement with the madness of
Nazi policies—took advantage of the occasion to ask the Führer’s clemency
on behalf of Jewish scientists. In those very months, Planck’s Jewish
colleagues were beginning to suffer the initial consequences of the racial
laws and were being dismissed from their positions. Among them was
Planck’s friend Fritz Haber, winner of the Nobel Prize in Chemistry in
1918, who had also distinguished himself (so to speak) as the mastermind
of chemical weapons used in World War I. Whether out of conviction, lack
of courage, or just plain realism, Planck did not express to Hitler his
disapproval of the racial laws as such. If he had done so, he probably
wouldn’t have made it back home. Yet he did try pragmatically to convince
him that it would be self-destructive for Germany to deprive itself of the
talents of its many Jewish intellectuals. He recalled how, without Haber’s
patriotic (and terrible) scientific contribution, Germany would probably
have been defeated very early on in that first world war and that many
eminent German scientists were Jews.
Hitler would have none of it. “I have nothing against Jews in themselves.
But the Jews are all Communists, and it is the latter who are my enemies; it
is against them that I am fighting,” he responded to Planck, mocking him—
it seems—with a flat “I guess that means we’ll do without science for a few
years.” Considering the number and the caliber of the physicist fugitives
from Nazi-fascism who contributed to the development of the atomic bomb,
this was certainly not a renunciation without consequences. The
conversation quickly degenerated into a rabidly irrational monologue, in the
face of which there was nothing for Planck to do but remain silent and
withdraw.
When Planck tried to contact the Führer a second time, this time in
writing, 11 years had passed since their first encounter. Hitler had dragged
the world into war, and by now, the tide had turned against Nazi Germany.
Planck was old and wearied by life. His scientific success had been
accompanied by continual family dramas. In 1909, he had lost his first wife.
His firstborn son, Karl, was killed in the Battle of Verdun during World War
I. His twin daughters, Grete and Emma, both died in childbirth, in 1917
and 1919, respectively. In 1944, his home in Berlin was destroyed in a
bombardment. His second son, Erwin, was taken prisoner in 1914 but had
managed to return home. After the war, he occupied various positions in
government, rising to the level of secretary of state under Chancellors Franz
von Papen and Kurt von Schleicher. When the latter resigned in 1933 and
Hitler took power, Erwin resigned his office and devoted himself to
business, all the while maintaining a strong interest in politics with views
increasingly critical of Hitler.
In the closing months of 1943, Erwin joined with the conspirators who
were plotting Operation Valkyrie, a coup attempt to overthrow Hitler and
negotiate a peace treaty with the Allies. The brain behind the operation was
army colonel Claus von Stauffenberg. The plan was to assassinate Hitler
with a bomb inside his general headquarters, known as the Wolf’s Lair, in
Rastenburg (today Kętrzyn, Poland). On July 20, 1944, the bomb was
placed in a briefcase and left by von Stauffenberg under the table in the
conference room, near Hitler, during a meeting of the general staff. Owing
to a series of coincidences—only one of the detonators worked and an
officer happened to move the briefcase with his foot just before the
explosion—the bomb exploded and caused a lot of damage, but Hitler was
only slightly wounded. Soon afterward, the conspirators and thousands of
other people were arrested. Among them was Erwin Planck, picked up by
the Gestapo and condemned to death forthwith.
His octogenarian father tried desperately to save his son from the
gallows by exploiting his contacts and his fame as a great scientist. On
October 25, he wrote to Hitler:
My Führer!
I am most deeply shaken by the message that my son Erwin has been sentenced to death
by the People’s Court.
The recognition of my achievements in service of our fatherland, which you, my Führer,
have expressed towards me repeatedly and in the most honoring way, makes me confident
that you will lend your ear to an imploring 87-year-old.
As the gratitude of the German people for my life’s work, which has become an
everlasting intellectual patrimony of Germany, I am pleading for my son’s life.
Max Planck
In this case, too, as with Fermi’s letter, the passion for physics stands out.
Planck proudly recalls his contribution, which increasingly would become
the heritage not only of his homeland but of all humankind. He, Nobel Prize
winner, loyal to his country right to the end, begs for pity. An act of
desperation, and perhaps, as he is writing, Planck remembers his encounter
of 11 years before, and he knows that, just as Hitler had no pity for the Jews
then, he would not have pity for Planck’s family now. Could science, even
for a minute, dent the delirium of Nazi fanaticism?
Erwin Planck was hanged on January 23, 1945. Four days later, the Red
Army liberated the extermination camp at Auschwitz and the world
discovered the horror of the Shoah.
Arthur Eddington was a great British scientist who lived at the turn of the twentieth century. An astronomer and physicist, he was the author of
pioneering studies on the behavior of the stars. He was the first, for
example, to hypothesize that nuclear fusion is a fundamental process in the
dynamics of stellar energy. Eddington was also a great admirer of Einstein,
and he tried to overcome the isolation of German-language scientists during
and right after World War I by publicizing the general theory of relativity in
the English-speaking world. Yet he didn’t stop there. He was also the first to
demonstrate the theory experimentally, using the mass of the Sun. Our
private star, which from the Earth looks like a small disc—a little bigger at
sunset, when it is lower on the horizon—is actually a rather massive body.
Indeed, its mass described in kilograms is a number with 31 figures, about
330,000 times bigger than the mass of the Earth.
The general theory of relativity was revolutionary and mathematically
complicated, and it was not accepted by everybody right away. Einstein
himself was aware of the need for an experimental demonstration and
suggested measuring the deflection of light coming from the stars caused by
the mass of the Sun, as predicted by his theory. The brightness of the Sun,
however, made direct observation impossible, and it was certainly not possible to turn it off: except—and this is where intuition comes in—during a total eclipse, which would allow someone to photograph the stars near the Sun and to verify whether their apparent positions had shifted compared to when the same stars were observed with the Sun far away in the sky.
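What Eddington had to measure was astonishingly small. The text does not give the formula, but the standard prediction of general relativity for a ray of light grazing the edge of the Sun is a deflection of 4GM/(c²R), roughly 1.75 seconds of arc, twice the value a purely Newtonian calculation would give. A quick sketch of the arithmetic with standard solar values:

```python
import math

# Deflection of starlight grazing the Sun, as predicted by general
# relativity: delta = 4 G M / (c^2 R).

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.963e8     # solar radius, m
C = 299_792_458.0   # speed of light, m/s

deflection_rad = 4 * G * M_SUN / (C**2 * R_SUN)
arcseconds = math.degrees(deflection_rad) * 3600
print(f"predicted deflection: {arcseconds:.2f} arcseconds")  # ~1.75
```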
The first to take up the challenge was Erwin Freundlich, an enthusiastic
astronomer in Berlin. Freundlich had scheduled his wedding for the summer
of 1913 and planned his honeymoon in the Swiss Alps so that he could
meet Einstein in Zurich and talk about his experiment. We do not have any
direct evidence about the reaction of Mrs. Freundlich. The meeting did take
place, however, and gave rise to a plan for an expedition to Crimea, led by
Freundlich, on the occasion of the eclipse forecast to take place on August
21, 1914. But Freundlich was unlucky. Just when he arrived in Crimea,
World War I broke out in Europe. On August 1, Germany declared war on
Russia. When the Russians stopped Freundlich on their soil—a scientist
from an enemy power armed with binoculars and telescopes—they were not
inclined to believe that he was there to measure the deflection of starlight.
He was arrested and his equipment seized. One month later, he was freed
thanks to a prisoner exchange.
The baton was passed to Eddington, who proposed to take advantage of
the eclipse of May 29, 1919. Given the times, it was not such a simple idea.
Great Britain and Germany were still shedding each other’s blood, and for
the British there was little appeal in organizing an expedition to
demonstrate the validity of a theory produced by a German scientist.
Nevertheless, Eddington managed to pull it off, and as he later wrote, “By
testing the ‘enemy’ theory our national observatory kept alive the finest
traditions of science and the lesson is perhaps still needed today.”
To observe the eclipse, Eddington’s team split into two groups.
Eddington and a few colleagues went to the island of Principe, off the west
coast of Africa. The others went to Sobral, Brazil. That day on Principe, the
sky was cloudy and the bad weather could have sent months of preparation
up in smoke, but Eddington, unlike Freundlich, was a lucky guy. Just as the
eclipse was about to begin, the blanket of clouds opened up. This allowed
him to photograph some of the stars in the cluster of the Hyades. When the
measurements were interpreted, they confirmed the theory of general
relativity. On November 6, 1919, the results were presented to the Royal
Astronomical Society. The news—until then circumscribed within the
narrow circle of physicists—immediately bounced from one corner of the
world to the other. Einstein’s fame went global. The Times of London
proclaimed “REVOLUTION IN SCIENCE / NEW THEORY OF THE UNIVERSE /
NEWTONIAN IDEAS OVERTHROWN.” The headline in the New York Times was
slightly more sensationalistic: “Lights All Askew in the Heavens: Einstein’s
Theory Triumphs.” Partial justification for the American daily may be that,
since they had no scientific correspondent in London, the story was
reported by a golf correspondent. . . .
We often hear lamentations that the coal stored up in the earth is wasted by the present
generation without any thought of the future, and we are terrified by the awful destruction of
life and property which has followed the volcanic eruptions of our days. We may find a kind
of consolation in the consideration that here, as in every other case, there is good mixed with
the evil. By the influence of the increasing percentage of carbonic acid in the atmosphere, we
may hope to enjoy ages with more equable and better climates, especially as regards the
colder regions of the earth, ages when the earth will bring forth much more abundant crops
than at present, for the benefit of rapidly propagating mankind.
Let’s say that there are unfortunately many other negative consequences
that Arrhenius failed to foresee. . . .
Getting back to that December 10, Arrhenius opened his presentation of
Einstein by recalling that “there is probably no physicist living today whose
name has become so widely known as that of Albert Einstein. Most
discussion [about him] centers on his theory of relativity.” Then, however,
he changed course and spoke of other things. Indeed, as strange as it might
seem, Einstein won the 1921 Nobel Prize not for the theory of relativity but
rather for another discovery undoubtedly less well known to the general
public: the theoretical description of the photoelectric effect, another
milestone in the story of quantum mechanics.
As with thermal radiation, in this case, too, our experience of daily life is
a useful aid. Quantum mechanics sometimes shows up where you least
expect it, even in the elevator. The photoelectric effect is commonly used in
photocells, like the ones that keep elevator doors from closing when there is
something, or someone, in the way. The effect happens when a metal surface is struck by ultraviolet light and emits electrons; these detach from the material, can be measured, and produce an electrical signal that,
in this case, blocks the closing of the door. For there to be an emission of
electrons, the light must be ultraviolet. With visible or infrared light, the
photoelectric effect does not occur. This is impossible to explain with the
classic wave theory of light.
To resolve the impasse, in 1905, Einstein, like Planck, set aside the
tradition of classical physics and hypothesized that energy in the
electromagnetic field was quantized. In addition, he introduced the concept
of “quantum of light.” In his paper of March 1905 in the Annalen der
Physik, Einstein wrote: “The energy [of a beam of light] is not distributed
continuously over ever-increasing spaces, but consists of a finite number of
energy quanta that are localized in points in space, move without dividing,
and can be absorbed or generated only as a whole.” The quantum of energy
is a photon, which resolves the contradictions between the photoelectric
effect experiment and classical theory. Einstein assigned to the photon the
energy hf, the same quantum that Planck had found for the thermal radiation
of a black body. With this intuition, he formulated the theory that fully
explained the photoelectric effect. Now the framework was finally
complete. Not only is electromagnetic radiation produced in packets, as
Planck had understood, but it can also propagate itself as a particle, that is, a
photon.
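The threshold behavior that defeated the wave theory falls straight out of E = hf. A small sketch (the work function below is a typical value for a metal, our own assumption, not a number from the text):

```python
# Photoelectric effect: an electron is emitted only if the photon energy
# E = h * f = h * c / wavelength exceeds the metal's work function.

H = 6.62607015e-34      # Planck's constant, J s
C = 299_792_458.0       # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

WORK_FUNCTION_EV = 4.5  # assumed: a typical order of magnitude for a metal

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for label, wl in [("ultraviolet", 250), ("visible green", 550)]:
    e = photon_energy_ev(wl)
    print(f"{label} ({wl} nm): {e:.2f} eV -> emission: {e > WORK_FUNCTION_EV}")
```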
“Owing to these studies by Einstein,” Arrhenius concluded his speech,
“the quantum theory has been perfected to a high degree and an extensive
literature grew up in this field whereby the extraordinary value of this
theory was proved.”
Identity Crisis
Perhaps never in the history of physics has there been a cluster of new
discoveries as abundant as the one in the decades around the turn of the
twentieth century. Experiments that cast doubt on centuries of accumulated
knowledge, new theories that revolutionized the description of the universe.
The world of science was in continuous ferment, an ebullition of ideas that
undoubtedly influenced the young Louis de Broglie, scion of the French
nobility, who suddenly, after taking a degree in history, decided to make a
180-degree turn and devote himself to science, and specifically to physics.
He began to practice the discipline during World War I, when he worked on
the development of a system of radio communications for submarines. That
was certainly not what made him end up in the history books, which he had
so prematurely abandoned. Instead, it was his doctoral dissertation
presented at the University of Paris in 1924.
De Broglie was fascinated by the recent findings of Einstein and Arthur
Compton, which proved the corpuscular nature of light, and thus, its being
simultaneously wave and particle. He hypothesized that the wave-particle
duality could also be applied to matter. At that time, to associate wavelike
properties with something as solid as matter was literally something out of
science fiction. In effect, his thesis, though received with interest, was
considered to be of little practical import. Yet no more than two years had
gone by when, in 1926, two series of experiments confirmed de Broglie’s
hypothesis, for which he was awarded the Nobel Prize in 1929.
In line with quantum mechanics, de Broglie theorized a grand symmetry
of nature. In sum, the universe is composed of matter and radiation, and
both can behave either as waves or as particles.
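De Broglie’s relation, λ = h/p, makes the symmetry quantitative and also explains why we never notice the wave nature of everyday objects. A minimal sketch (the speeds and the ball’s mass are illustrative values of our own choosing):

```python
# de Broglie wavelength: lambda = h / (m * v).

H = 6.62607015e-34   # Planck's constant, J s

def de_broglie_m(mass_kg: float, speed_ms: float) -> float:
    """Matter wavelength in meters for a body of given mass and speed."""
    return H / (mass_kg * speed_ms)

electron = de_broglie_m(9.109e-31, 1.0e6)   # an electron at a million m/s
ball = de_broglie_m(0.43, 10.0)             # a soccer ball kicked at 10 m/s

print(f"electron:    {electron:.1e} m (comparable to atomic spacing)")
print(f"soccer ball: {ball:.1e} m (hopelessly unmeasurable)")
```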
Physics was now ready for the formalization of quantum mechanics, and
in 1925, the Austrian physicist Erwin Schrödinger formulated the equation
that is named after him and that describes the evolution of the quantum
world. In the microscopic realm, the concreteness of objects is replaced by
the uncertainty of probability.
F = ma

This law, Newton’s second law of motion, tells us that if we know the interaction of a body with its
surrounding environment, that is, the force F, we can derive the acceleration
a, which in essence means knowing its motion. Put simply, the elegance and
power of this law lie in its statement that the motion of a body is completely
and unequivocally determined by its relationship with the world.
In other words, the same force causes different motions depending on the
mass of the body to which it is applied. This is something we all know quite
well from experience. We need only think of the different effect we obtain by throwing, with the same force of our arms, a soccer ball or a rock of the same
size. Therefore, mass describes the inertia of a body with respect to the
application of a force: the greater the mass, the less impact that a given
force will have on the body.
Newton’s law, as is true for all of classical mechanics, is deterministic:
given a force and the characteristics of the body’s motion in a specific
instant, we are able to predict with absolute accuracy the trajectory of the
body. This predictive capacity of the law is quite admirably displayed, for
example, in the description of the motion of the planets or in space voyages.
In July 1969, NASA scientists succeeded in taking two men to a precise
point on the Moon after a voyage of more than 384,000 kilometers. In
February 2021, the successors of those scientists guided the Perseverance
rover onto the surface of Mars after a voyage of about seven months and
480 million kilometers, landing it precisely in the right place. All of this
was thanks to mechanics, with which it was possible to calculate with
extreme precision the trajectory of the spaceships.
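Determinism, in computational terms, means exactly this: the same force law plus the same initial conditions always yields the same predicted trajectory. A toy sketch of our own, integrating Newton’s equation step by step for a projectile under constant gravity:

```python
# Deterministic prediction with Newton's second law: integrate a = F/m
# in small time steps; identical inputs always give identical outputs.

def landing_point(x0, y0, vx0, vy0, g=9.81, dt=1e-4):
    x, y, vx, vy = x0, y0, vx0, vy0
    while y >= 0.0:
        vy -= g * dt     # acceleration due to gravity acts on vy
        x += vx * dt     # positions follow from velocities
        y += vy * dt
    return x

print(landing_point(0, 0, 10, 10))   # ~20.4 m
print(landing_point(0, 0, 10, 10))   # the same, to the last digit
```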
Classical mechanics works, and it allows us to predict the future.
Or maybe not.
“Ibis et redibis non morieris in bello”: “You will go, you will return, never
in war will you perish,” or “You will go, you will return never, in war you
will perish.” What a difference a comma makes! The Cumaean Sibyl was
clever, but predicting the future has been a human ambition since well
before the birth of science. Prophets, witch doctors, and fortune-tellers have
always had an audience. The hope that, with enough acumen and the right
equipment—whether it was animal innards, the smoke clouds of a campfire,
or a crystal ball—one could foresee what had not yet happened has always
been well rooted in humanity. It is not hard to imagine, therefore, how
much this hope was nourished by Newton’s Principia, published in 1687.
With Newton, the prediction of the future became science and no longer
a wager or a question of interpretation. His equation of motion makes it
possible to predict with certainty the position of a body. If to that we add the nineteenth-century development of electromagnetism (itself completely deterministic) and the fact that all systems are constituted by elementary building blocks, we can see how, at the dawn of the twentieth century, the dream of predicting the future seemed to be at hand, once sufficient capability of calculation had been acquired.
But they were jumping the gun. In the first decades of the twentieth
century, physics, the very same discipline whose classical mechanics
inspired the dream of deterministic predictability, began to undermine the
very foundations of classical theory.
iℏ ∂ψ/∂t = −(ℏ²/2m) ∇²ψ + Vψ   (3)

Don’t let the apparent complexity scare you. Sure, a full understanding
of it is something reserved for specialists in the field, but in reality this
equation is in many ways analogous to Newton’s equation, which we saw
earlier. In this case, too, the starting point is the interaction of a particle with
the outside world—here represented by potential energy V—which then
serves as the basis for calculating the solution for predicting the future.
Only in this case the object of the prediction is not the precise position of
the particle but the probability of finding it in any given place. The function
ψ, which we obtain by resolving Schrödinger’s equation, describes, in fact,
a wave in complete coherence with de Broglie’s hypothesis. The wave
function ψ does not tell us, however, precisely where the particle is, but
only where probabilistically we could find it. The determinism of the
classical mechanics of Newton’s laws is ousted by the uncertainty of
quantum mechanics.
This does not mean that classical mechanics is wrong; rather, it only
works in certain realms. Quantum effects are visible only in the microscopic
world. On the macroscopic scale—where macroscopic includes everything
from a grain of sand to a planet—classical mechanics works just fine, as
we have seen. Just as special relativity extends the area of validity of
physical laws in conditions of high velocity, so quantum mechanics extends
their validity in conditions of microscopic dimensions, that is, on the atomic
or subatomic scale. And just as a universal physical constant—the speed of
light—is the hallmark of relativity, so another physical constant, Planck’s
constant, is the signature of quantum effects. Note, for example, that it also appears, through the reduced constant ℏ = h/2π, in Schrödinger’s equation (3).
Newton’s law (F = ma) and Schrödinger’s equation are thus basic tools
with which physics describes the world, whether in the classical or the
quantum version. Although separated by centuries and symbols of
complementary worlds, they are also joined by what may look like a simple
letter of the alphabet, m, which signifies a fundamental property of any
object under study, whether it is a neutron or the Apollo 11 space capsule:
mass.
In both classical and quantum physics, mass plays the crucial role of
mediating the interaction of a body with forces, or rather with the world. In
certain cases, other properties of a body also play this role. Electric charge
and velocity, for example, mark the interaction with the electromagnetic
field, but mass is present whatever the force might be.
The mass of bodies is also central in another fundamental physical
process: universal gravitation. It was, of course, Newton who discovered
how two objects, by reason of their having mass, attract each other through
the force of universal gravitation, which is directly proportional to the
masses of the bodies and gradually decreases as the bodies grow farther
apart. Technically, the amplitude of the force of gravity between two bodies
of masses m1 and m2 separated by a distance r is expressed as

F = G m1m2 / r²

where G is the universal gravitational constant.
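Putting numbers into the formula (standard astronomical values, inserted by us) gives a sense of the force that keeps our planet in orbit:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.

G = 6.674e-11   # universal gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    return G * m1_kg * m2_kg / r_m**2

# Earth and Sun, separated by one astronomical unit:
f = gravitational_force(5.972e24, 1.989e30, 1.496e11)
print(f"Earth-Sun attraction: {f:.2e} newtons")   # ~3.5e22 N
```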
October 21
On October 21, 1944, Max Planck was waiting anxiously, or perhaps with
resignation, for the outcome of the trial of his son Erwin, which just two
days later would conclude with the death sentence. On October 21, 1520,
Ferdinand Magellan discovered the strait that bears his name. On the same
date in 1879, Thomas Edison filed his application for a patent on the
incandescent light bulb; in 1833, Alfred Nobel was born; in 1917, Dizzy
Gillespie; and in 1995, Doja Cat.
A few minutes’ research on the internet is enough to discover dozens of
other events or famous birthdays that occurred on October 21. Considering
that a year has 365 days and that history-making events are much more
numerous, you don’t have to be a statistician to realize that there is nothing
special about October 21. Except with regard to the metric system: there are
not millions of units but rather only seven, and yet the definitions of not one
but two of these units were literally revolutionized on October 21. This is,
to say the least, extraordinary.
As we have seen in the preceding chapter, on October 21, 1983, the 17th
General Conference on Weights and Measures defined the meter in terms of
the speed of light. Twenty-eight years later, on October 21, 2011, the 24th
meeting of the same conference definitively decreed the end of an era: the
oldest of the artifacts used for the definition of a fundamental unit of
measurement, the prototype kilogram, was retired. The solidity of a piece of
precious metal—like all human works, inevitably transitory—was replaced
by the solidity of nature, universal and available to everybody. This new
solidity was obtained in part, in an apparent paradox, by way of the
universal constant that had undermined the certainty of classical mechanics
and that most evokes indeterminacy: Planck’s constant.
E = mc²

This equation ties together energy E, mass m, and the speed of light c.
In quantum mechanics, energy enters with the formula that expresses Planck’s quantum, which we have been talking about in this chapter:

E = hf

where f is the frequency of the radiation.
Pay attention! This is the same physical quantity, energy, that, thanks to
relativity and quantum mechanics, can be expressed as a function either of
the speed of light c, amply addressed in the preceding chapter, or of
Planck’s constant h. Energy thus becomes the bridge between relativity and
quantum mechanics, and above all, it permits us to write mass as a function
of two physical constants, which are universal and thus immutable, namely
c and h. Far from being a mere artifice for specialists in the field, the
relationship between mass and the two universal constants is of great
practical use for the new definition of the kilogram. Indeed, when the need
became evident to replace that perishable piece of metal that was the
prototype kilogram with something more durable, physicists went to work
and found various experimental methods for using the relationship that
binds mass with h and c. Creating a unit of measurement necessarily involves the need to use it in practice. Specifically, since c and h are known with great accuracy, it was necessary to devise an experiment that would permit an equally accurate measurement of mass, and thus to put the new definition of the kilogram into practice.
The primary experimental instrument is the Kibble balance: a two-pan
balance, no different in principle from those of 2000 BCE, just a little more
technological. On one pan, you put the mass to be weighed, while the
second balances the first. Rather than comparing the unknown mass to the
weight of another mass, as in ordinary balances, the Kibble balance works
by using an electromagnetic force. The value of this electromagnetic force
can be measured with great accuracy by exploiting two quantum effects: the
Josephson effect and the quantum Hall effect, and it is then expressed as a
function of Planck’s constant (omnipresent in quantum equations). If the
value of the force is both fixed and known precisely—something that has
been made possible in recent decades thanks to very refined experiments—
the Kibble balance can measure a mass according to the new definition of the kilogram: universal, accurate, and independent of material objects.
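The working principle can be caricatured in a few lines. In one mode the balance equates electrical power to mechanical power, U·I = m·g·v, so a mass follows from a voltage, a current, a velocity, and the local gravitational acceleration, with U and I themselves tied to Planck’s constant through the Josephson and quantum Hall effects. A schematic sketch (all the instrument readings below are invented for illustration):

```python
# Kibble (watt) balance, schematically: electrical power U * I balances
# mechanical power m * g * v, hence m = U * I / (g * v).

def kibble_mass(voltage_v: float, current_a: float,
                velocity_ms: float, g_local: float) -> float:
    return voltage_v * current_a / (g_local * velocity_ms)

# Invented, but plausible, readings for a roughly 1 kg artifact:
m = kibble_mass(voltage_v=0.5, current_a=0.03924,
                velocity_ms=0.002, g_local=9.81)
print(f"measured mass: {m:.4f} kg")   # ~1.0000 kg
```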
h is truly small. Its value expressed in units of the international system is 6.62607015 × 10⁻³⁴ joule-seconds, a number we can also write—if the publisher does not reprimand us for the consumption of ink—as 0.000000000000000000000000000000000662607015. An international banking code number is child’s play by comparison.
This is the measure that was chosen as the basis for the new definition of the kilogram, which rests, therefore, on Planck’s constant.
In the final days of April 1945, the Red Army raised the Soviet flag over the Reichstag in Berlin, and Hitler committed suicide in his bunker. On May 8, Nazi
Germany surrendered. The activities of the Manhattan Project, however,
continued without pause. At 5:29 in the morning of July 16, 1945, in the
desert of Jornada del Muerto, near the city of Socorro, New Mexico, an
artificial dawn lit up the sky. It was the Trinity Test, the first explosion of an
atomic bomb, the trial of ordnance that a few days later would raze the city
of Hiroshima to the ground. Among those present at the Trinity Test was
Enrico Fermi (his typed eyewitness testimony, which follows, is held at the
National Archives, RG 227, OSRD-S1 Committee, box 82, folder 6
“Trinity”):
On the morning of the 16th of July, I was stationed at the Base Camp at Trinity in a position
about ten miles from the site of the explosion.
The explosion took place at about 5:30 A.M. I had my face protected by a large board in
which a piece of dark welding glass had been inserted. My first impression of the explosion
was the very intense flash of light, and a sensation of heat on the parts of my body that were
exposed. Although I did not look directly towards the object, I had the impression that
suddenly the countryside became brighter than in full daylight. I subsequently looked in the
direction of the explosion through the dark glass and could see something that looked like a
conglomeration of flames that promptly started rising. After a few seconds, the rising flames
lost their brightness and appeared as a huge pillar of smoke with an expanded head like a
gigantic mushroom that rose rapidly beyond the clouds probably to a height of the order of
30,000 feet. After reaching its full height, the smoke stayed stationary for a while before the
wind started dispersing it.
About 40 seconds after the explosion the air blast reached me. I tried to estimate its
strength by dropping from about six feet small pieces of paper before, during and after the
passage of the blast wave. Since at the time, there was no wind, I could observe very
distinctly and actually measure the displacement of the pieces of paper that were in the
process of falling while the blast was passing. The shift was about 2 1/2 meters, which, at the
time, I estimated to correspond to the blast that would be produced by ten thousand tons of
T.N.T.
1. Eugenio Fubini was an Italian physicist who studied under Fermi in Rome and taught at the
University of Turin until being forced to leave Italy in 1938 by the racial laws. In the United States
he went on to become assistant secretary of defense in the Kennedy administration and then vice
president and chief scientist at IBM before founding his own consulting firm.
2. Gian Carlo Wick was an Italian theoretical physicist who was Fermi’s assistant in Rome and
after 1946 a professor at several American universities.
FOUR
The Kelvin
To Your Health!
“If we look at a glass of wine closely enough we see the entire universe.”
You might think such a statement was spoken after its author had already
had an intimate look at several of his own full glasses. After all, it has been
very well known since ancient times how much wine can favor inebriation
and its consequent fantasies. Archaeological evidence of the first large-scale
production of wine has been found near modern-day Tbilisi, Georgia, and
dates back to 6,000 BCE. It might surprise you, therefore, that this paean to the glass of wine concludes a classic essay by the Nobel Prize winner
Richard Feynman, entitled “The Relation of Physics to Other Sciences,”
published in his book Six Easy Pieces. In reality, however, the bond
between wine and physics—and science in general—is much stronger than
is generally thought. Feynman continues:
[In the glass of wine, there is] the twisting liquid which evaporates depending on the wind
and weather, the reflections in the glass, and our imagination adds the atoms. The glass is a
distillation of the earth’s rocks, and, in its composition we see the secrets of the universe’s
age, and the evolution of stars. What strange array of chemicals are in the wine? How did
they come to be? There are the ferments, the enzymes, the substrates, and the products. There
in wine is found the great generalization: all life is fermentation. Nobody can discover the
chemistry of wine without discovering, as did Louis Pasteur, the cause of much disease.
The advent of the scientific revolution, at the turn of the seventeenth century,
brought two important innovations in the description of natural phenomena.
The first was a shift toward abstraction and the replacement of qualitative
descriptions with mathematical ones. This is summarized quite nicely by
Galileo in The Assayer (Il saggiatore): “Philosophy [that is, natural
philosophy] is written in this grand book—I mean the Universe—which
stands continually open to our gaze, but it cannot be understood unless one
first learns to comprehend the language and interpret the characters in
which it is written. It is written in the language of mathematics, and its
characters are triangles, circles, and other geometrical figures, without
which it is humanly impossible to understand a single word of it; without
these, we are left wandering aimlessly in a dark labyrinth.”
Galileo himself provides an admirable example of this revolutionary
approach in his descriptions of motion and inertia, in which he examines
motion apart from the complications of contingent effects, such as friction,
concentrating on ideal properties.
The second innovation involved turning to measurement as an essential
method to describe nature. This led to the development of new instruments,
including those for the measurement of temperature, which today is one of
the seven quantities of the international system and certainly one of the best
known and most used. The sensations of the different gradations of hot and
cold are intrinsic to human experience. Since ancient times, humanity has
understood how influential temperature is in our lives, as well as in nature
and its processes. Perhaps the most obvious example is the changing of the
seasons. No wonder then that with the advent of the Renaissance,
thermometry attracted the interest of scientists.
Galileo is credited with the invention, around 1592, of the first
thermometer or, more properly, the thermoscope. This was a useful
instrument for comparing the temperatures of two objects or for measuring
variations in temperature, but it did not give absolute values. Galileo’s
thermoscope was essentially a glass tube open on one end and with a bulb
on the other. Partially filled with water or wine, the tube was immersed,
open end down, in a container full of the same liquid, so that the bulb end
was filled with air. Putting the bulb in contact with the object whose
temperature was to be measured caused the air in the bulb to contract or
expand, depending on whether the temperature of the object was lower or
higher than the ambient temperature. If the air contracted, the level of the
liquid in the tube rose, indicating that the object was colder than the
thermoscope. If, instead, the air expanded, bubbles were created, which
gurgled in the liquid and caused its level to lower. This principle of
measurement had been known since the time of ancient Greece, when Philo
of Byzantium and Hero of Alexandria developed air thermoscopes. As in
many other fields, however, the prevalence of Aristotelian theories put an
end to the development of the sciences for more than a millennium.
Nevertheless, by Galileo’s time, there was a need for different
instruments to give the same reading when used in the same circumstances.
In the case of thermometry, one solution was to use instruments that were
exactly the same, though at the time this was not simple to do. Another
possibility, much easier, was to make the readings of different instruments
comparable, using a common point of reference. The basis is the principle
of causality, according to which similar effects have similar causes. In the
case of temperature, one starts with the observation that a certain
thermometer always gives the same reading every time it is put into contact
with different samples of melting ice, which is the point of reference in this
case. From the constancy of the effect (or the consistency of the
thermometer reading) one can deduce the constancy of the cause. This leads
to the conclusion that the same phenomenon, characterized by a constant
temperature, is at work in the various samples of melting ice. Consequently,
if another thermometer is immersed in melting ice, it will produce the same
temperature reading that was recorded on the first thermometer in a similar
situation, because the same cause must always have the same effect.
Among the first to use a scale based on points of reference was the
Venetian Giovanni Francesco Sagredo, a close friend of Galileo. Sagredo
built a series of air thermometers, which he claimed produced identical
results. Using them, he provided quantitative measurements, establishing a
scale that read 360 degrees at the apex of summer heat, 100 degrees in
snow, and zero degrees in a mixture of snow and salt. It was known that salt
water freezes at temperatures below that at which pure water does, 0°C
(32°F). It seems reasonable, therefore, that Sagredo chose snow and the
mixture of snow and salt as points of reference for fixing his temperature
values. Sagredo showed his passion and enthusiasm for the new horizons
opened up by the measurement of temperature in a letter he wrote to Galileo
in 1613: “The instrument for measuring heat, invented by Your Excellency,
has been reduced by me in various convenient and exquisite forms, so that
the difference of the temperature from one room to another is seen up to
100 degrees. I have with these speculated about a number of wondrous
things, as, for example, that in winter the air is colder than ice or snow.”
We owe the application of the thermometer to measuring body
temperature to Santorio Santorio, born in 1561 in Capodistria (modern-day
Koper, Slovenia), then a dominion of the Most Serene Republic of Venice.
An acquaintance of Galileo, he was called to teach medicine at the
University of Padua in 1611, where up until one year earlier Galileo himself
had also been on the faculty. Santorio was one of the pioneers of the use of
quantitative physical measurements in medicine, bringing to this discipline
the experimental method with which, in those very same years, Galileo was
revolutionizing science. Inspired by Galileo’s findings on the motion of the
pendulum, Santorio invented the pulsilogium, a device for measuring the
heartbeat. He was also the first physician to observe variations in human
body temperature and to interpret them as indicators of health or illness.
Santorio modified the air thermometer, inserting the bulb into the mouth of
the patient. For the gradations on the tube, he used two points of reference:
the temperatures of snow and the flame of a candle.
A Question of Equilibrium
In the same period that Santorio was teaching humanity how to take its
temperature, Swedish engineers were at work building what was believed to
be one of the most powerful warships of the time, the Vasa. On August 10,
1628, an amazed crowd lined the docks of the port of Stockholm where, in
the presence of the king, the ship was launched. But the crowd’s enthusiasm
soon turned to dismay. Less than a mile from the launch ramp the Vasa
suddenly sank, owing to what were supposedly harmless gusts of wind,
taking 30 crew members down with it. The ship was armed with 64 bronze
cannons, distributed on two decks. The upper deck of cannons, added at the
wish of the king, made the ship too tall with respect to its width and
therefore very unstable. Another flaw was that the wood structure of the
Vasa was thicker on its left side than on its right. The ship’s carpenters
appear to have used different systems of measurement. Indeed,
archaeologists have found four rulers used by the workers who built the
ship: two are calibrated in Swedish feet, which had 12 inches, while the
other two are calibrated in Amsterdam feet, which had only 11.
The Vasa fiasco is just one of many cases in which the use of different
scales of measurement for the same project has led to miserable failures.
We have already mentioned the Mars Climate Orbiter, which disintegrated
in the atmosphere of Mars because some engineers had used metric units
while others had used English units, but similar stories abound. One
example is Air Canada flight 143, which was scheduled to fly on July 23,
1983, between Montreal and Edmonton with an intermediate stop in
Ottawa. Due to a series of unfortunate circumstances, including the failure
of the automatic fuel quantity indication system, the plane was refueled
with fuel that had to be calculated by hand using a dipstick inserted in the
tanks. The dipstick reading was in centimeters, which had to be converted
into liters and finally into kilograms, using the appropriate conversion
factors. But in the final calculation, the conversion was made from liters to
pounds instead of kilograms. Because of this error, along with other issues,
the plane could not reach its destination and was forced to make an
emergency landing—luckily without consequences for the passengers and
crew—on a motor racing track on a former Royal Canadian Air Force base
in Gimli, Manitoba (the location has since given this flight the popular
name of “Gimli glider”). Reaching agreement on units of measurement is
no simple matter, as we have seen in the preceding chapters. Temperature is
no exception: witness the division in the world today between the use of
Fahrenheit and Celsius.
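The arithmetic behind the Gimli incident is worth spelling out. Jet fuel has a density of roughly 0.80 kilograms per liter, which is about 1.77 pounds per liter; multiply liters by the second factor while believing the result is kilograms, and you think you have more than twice the fuel actually on board. A sketch with illustrative numbers of our own (not the flight’s actual figures):

```python
# Unit-conversion blunder, schematically: liters * (lb per liter),
# with the result mistakenly read as kilograms.

LITERS = 7_700                    # illustrative dipstick reading
KG_PER_L = 0.80                   # approximate density of jet fuel
LB_PER_L = KG_PER_L * 2.20462     # the same density in pounds per liter

actual_kg = LITERS * KG_PER_L
believed_kg = LITERS * LB_PER_L   # pounds, misread as kilograms

print(f"fuel actually on board:  {actual_kg:,.0f} kg")
print(f"fuel the crew computed:  {believed_kg:,.0f} 'kg'")
print(f"overestimate factor:     {believed_kg / actual_kg:.2f}")  # ~2.2
```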
Less than a century had gone by since the development of the first
thermoscopes when scientists began to conceive of a universal scale for the
measurement of temperature. Newton and the Danish astronomer Ole
Rømer were among the first to come up with universal scales in the early
1700s, but it was not until 1724 that Daniel Fahrenheit proposed the scale
that bears his name, still used today in the United States. He also gets the
credit for the idea of using mercury as a thermometric liquid, a
transformative choice. Thanks to its elevated coefficient of expansion, the
use of mercury significantly improved the accuracy of thermometers. For a
given temperature variation, mercury expands much more than water or
alcohol, thus allowing for a more accurate visualization of the temperature
itself. Fahrenheit chose as reference points the temperature of a solution of
water, ice, and ammonium chloride, to which he assigned a value of 0, and
the average temperature of the human body, which he fixed at 96, also
noting, however, the value of 32 for the temperature of melting ice. Today,
the Fahrenheit scale, the official scale in the United States, is based on two
reference points separated by 180 degrees (indicated with °F): the
temperature of melting ice, fixed at 32°F, and that of the boiling point of
water, 212°F.
In 1742, the scientific world witnessed the debut of an alternative scale,
which would come to be dominant, this one invented by the Swedish
astronomer Anders Celsius. In an article that would later become famous,
Celsius defended the two reference points he chose for his thermometric
scale (originally selected by Santorio but not yet universally accepted).
Celsius identified the points in the temperature of melting ice and boiling
water in standard conditions of pressure, separating them by 100 degrees.
Originally, Celsius chose to assign the value of 100 to melting ice, and 0 to
boiling water, but the convention was reversed shortly after his death.
Today, this system bears his name and the indication °C.
In the late 1970s, Italy fell in love with Fantozzi, the endearingly hapless
white-collar time-card puncher created by comedian Paolo Villaggio, and
the hero of nine extraordinarily successful movies. In the second film in the
series, the off-screen narrator introduces a typical Fantozzi moment, as the
beleaguered bookkeeper returns home from the office and prepares to watch
the broadcast of the Italy versus England soccer match. “Fantozzi had a
fantastic plan: socks, underwear, flannel bathrobe, tray table in front of the
television screen, mouth-watering super-thick onion omelet, a six-pack of
ice-cold Peroni, berserk cheering, and unbridled burping.”
In Italy, if you want to talk about the temperature for serving beer, a
tribute to the great Paolo Villaggio and ice-cold Peroni is obligatory. It is
certainly true that in the 1970s, beer by definition was meant to be drunk
cold. (Cue the inevitable comments after a trip to England about the
lukewarm beer in the pubs.) Since then, however, the enormous evolution
of the range of beer offerings and the consequent refinement of beer
drinkers’ palates have accustomed us to serving temperatures that range
from around 0°C to nearly 16–18°C, depending on the brew. A rich
literature has grown up around this theme, which has received attention
even from a high-brow newspaper like the Wall Street Journal. With all due
respect to Fantozzi and ice-cold Peroni, the thermometer has become an
indispensable instrument for the proper enjoyment of a glass of beer.
Naturally, in this case, too, units of measurement must be used with caution,
given that in Europe a Pilsner is supposed to be imbibed at around 4–6°C,
while in the United States the same beer is served at 38–45°F. In case of
error, there are obvious repercussions for the taste experience.
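Avoiding those repercussions takes only the standard conversion between the two scales, °F = °C × 9/5 + 32. A minimal sketch (the serving temperatures are the ones quoted above; the function is ours):

```python
# Celsius <-> Fahrenheit conversion: F = C * 9/5 + 32.

def c_to_f(t_c: float) -> float:
    return t_c * 9 / 5 + 32

def f_to_c(t_f: float) -> float:
    return (t_f - 32) * 5 / 9

# A European Pilsner at 4-6 degrees C lands inside the American 38-45 F window:
for t in (4, 6):
    print(f"{t} °C = {c_to_f(t):.1f} °F")
```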
The relationship between temperature and beer actually goes much
deeper, proof that wine is not the only beverage with an important stake in
this particular physical quantity. Temperature is a fundamental quantity in
thermodynamics, the branch of physics that studies macroscopic processes
involving exchanges of energy between systems and their environment, the
transformation of mechanical work to heat, and, vice versa, of heat into
mechanical work. Heat is a form of energy. More precisely, it is the energy
transferred between two bodies at different temperatures, with the warmer
body transferring energy to the cooler and thus becoming cooler itself.
One of the fathers of this discipline is James Prescott Joule, an English
brewer from Salford, in Lancashire. Joule is credited with the
demonstration, in the 1840s, of the equivalence between mechanical work
and heat, both of which are mechanisms for transferring energy to a system.
In one famous experiment, Joule showed that the temperature of water in a
container can be raised by using a mechanical process, specifically, by
making a sort of propeller rotate inside it. The mechanical energy used to
keep the propeller turning is converted, thanks to friction, into thermal
energy in the water. Joule laid the groundwork for modern thermodynamics
and, in particular, for the basic principle of the conservation of energy—the
first law of thermodynamics—disproving caloric theory. Caloric was
thought to be a kind of invisible and immaterial self-repellent fluid that
could flow from hotter to colder bodies and whose concentration explained
an object’s higher or lower temperature. Instead, Joule demonstrated that
heat, too, is a form of energy transfer. He obtained his results thanks to
highly accurate measurements of temperature, the fruit of his supreme
experimental skill, which is said to have derived from his practice of
brewing and, therefore, from his familiarity with chemistry and
instrumentation. In the cemetery of the suburb of Brooklands, south of
Manchester, his tombstone is engraved with the number 772.55, which
corresponds to his most accurate measurement—performed in 1878—of the
factor of equivalence between mechanical energy and heat (expressed,
naturally, in English units: foot-pounds force per British thermal unit . . . ).
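Translated into modern units, the tombstone number holds up well. A small sketch (the conversion factors are standard; treating the result as joules per calorie is our own arithmetic):

```python
# Joule's 772.55 ft*lbf per BTU, converted to joules per calorie.

FT_LBF_TO_J = 1.35582    # joules in one foot-pound force
CAL_PER_BTU = 252.0      # calories in one British thermal unit (approximate)

j_per_btu = 772.55 * FT_LBF_TO_J   # mechanical energy per unit of heat
j_per_cal = j_per_btu / CAL_PER_BTU

print(f"{j_per_cal:.2f} J/cal (the modern value is about 4.19)")  # ~4.16
```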
Beer Molecules
E = 3/2 kB T   (1)

This formula states that the temperature T of the room in which you are reading these lines is proportional to the average kinetic energy E of the molecules of air, that is, to the square of the average velocity of those same
molecules. The hotter the air, the faster the molecules move. The constant
of proportionality between energy and temperature is kB, Boltzmann’s constant, which has a universal value equal to 1.380649 × 10⁻²³ joules per kelvin in the international system of
units. At room temperature, the air molecules move at about 1,800
kilometers per hour!
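That last figure follows directly from equation (1): setting 3/2 kB T equal to the average kinetic energy 1/2 m v² gives v = √(3 kB T / m). A quick sketch (the mean mass of an “air molecule,” about 29 atomic mass units, is a standard value we supply ourselves):

```python
import math

# Root-mean-square speed of air molecules, from 3/2 kB T = 1/2 m v^2.

K_B = 1.380649e-23   # Boltzmann's constant, J/K
M_AIR = 4.81e-26     # average mass of an air molecule, kg (~29 u)

def rms_speed_ms(temperature_k: float) -> float:
    return math.sqrt(3 * K_B * temperature_k / M_AIR)

v = rms_speed_ms(293)   # a room at about 20 degrees C
print(f"{v:.0f} m/s = {v * 3.6:.0f} km/h")   # ~500 m/s, about 1,800 km/h
```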
The extraordinary elegance of physics lies in part in its ability to express
the bond between the infinitely small and the macroscopic world, between
an atom and a blimp, by way of a simple formula of just a few characters,
as in equation (1). This is crucial to expressing temperature with an
appropriate scale, a scale that bears the name of a small waterway and that
was proposed in 1848.
Brothers
The history of physics is filled with strange anecdotes. Like Joule,
Boltzmann also had a physics formula engraved on his tombstone, namely,
the equation for entropy. And if Harald Bohr’s notoriety as a soccer player
didn’t manage to overshadow that of his brother, Niels, the same applies to
the brothers Thomson. James and William were born two years apart in
Belfast, James in 1822 and William in 1824. James was a scientist and
inventor. He was the one who initiated the study of wine tears that we
discussed in the opening paragraphs of this chapter. Even the most expert
enologists probably do not remember him, however, since the description of
those alcohol tears was attributed to the Italian Carlo Marangoni, who
perfected it. As in the case of the Bohrs, only one of the Thomson brothers
entered the pantheon of physics, and it was not James. For his scientific
merits, the younger brother was the first British scientist to be named to the
House of Lords, receiving the title of Baron Kelvin. The title refers to a
little stream 35 kilometers long that flows north of Glasgow and whose
worldwide fame is due to its flowing by the laboratory of William
Thomson.
Thomson was an eclectic scientist and was involved in laying the first
transatlantic undersea telegraphic cable. His fame, however, is primarily
tied to thermodynamics, and he is credited with the introduction, in 1848, of
the temperature scale that bears his name. Although, in terms of everyday
use, it is not as well known as the Celsius and Fahrenheit scales, the Kelvin
scale is a keystone of thermodynamics since its definition is independent of
the properties of a substance, such as water or the human body. The unitary
increment of the Kelvin scale is identical to that of the Celsius scale, but
instead of fixing zero to the temperature of melting ice, it defines zero as
the coldest possible point for matter (–273.15°C). Nothing can be colder
than this “absolute zero.” The Kelvin scale, therefore, is an absolute scale
and describes the energy of motion of the microscopic
components of matter: atoms and molecules. In that sense, temperature
expressed in kelvin is exactly that which is used in the formula E = 3/2 kB T,
mentioned in the previous section.
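For the practically minded, the relation between the two scales is a simple
offset, as this minimal Python sketch illustrates:

    def celsius_to_kelvin(t_celsius: float) -> float:
        # The scales differ only by a fixed offset; one degree is the same size on both.
        return t_celsius + 273.15

    print(celsius_to_kelvin(-273.15))   # 0.0, absolute zero
    print(celsius_to_kelvin(0.01))      # 273.16, the triple point of water
    print(celsius_to_kelvin(100.0))     # 373.15, water boiling at sea level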
The kelvin (indicated as K) has been the base unit for thermodynamic
temperature since 1954, when the General Conference on Weights and
Measures adopted it. To make a unit of measurement concrete, or rather to
convert its definition into practice, you need an experimental method. For
the unit of temperature, the procedure sets out not to achieve 1 K but rather
to attain 273.16 K, fixed at the triple point of water. This is the single
combination of temperature and pressure at which water coexists in thermal
equilibrium in its three phases: solid, liquid, and gas. It is a valid universal
standard because the triple point always occurs at exactly the same
temperature, which is precisely 273.16 K. Until 2019, the kelvin was
defined, therefore, as “the fraction 1/273.16 of the thermodynamic
temperature of the triple point of water.”
As with the other units of measurement, everything changed with the
revision of the international system in line with universal physical
constants. In 2019, the kelvin was redefined using Boltzmann’s constant,
now determined with extreme accuracy. This definition is based on the
assumption that the fixed numerical value of Boltzmann’s kB constant is
1.380649 × 10⁻²³ kg m² s⁻² K⁻¹, where kilogram, meter, and second are
identified in terms of the fundamental constants that we saw earlier.
However, though it no longer defines the kelvin, the triple point of water
still remains a convenient and practical way to calibrate thermometers.
The kelvin is the unit of measurement of temperature universally used in
physics, but Celsius is the dominant scale in everyday life and in multiple
practical applications. Whether because of tradition, the elegance and
simplicity of its two easily remembered reference points—0 for freezing
water (melting ice) and 100 for boiling water—or because it expresses
many of the temperatures used in daily life in the small numbers we tend to
prefer, the Celsius scale is used practically everywhere in the world today.
Only the United States, some Pacific islands, the Cayman Islands, and
Liberia use Fahrenheit as their official temperature scale.
An Unreachable Goal
The record for the lowest temperature ever recorded in the contiguous
United States is –57°C (–70°F), recorded on January 20, 1954, at Rogers
Pass, Montana, which cuts through the Rocky Mountains. If that seems
cold, it’s nothing compared to the world record, which appears to have been
set in the Antarctic at the Russian Vostok research station on July 21, 1983,
with a recorded temperature of –89.2°C (–128.6°F). But even that seems
rather mild compared to the –240°C (–400°F) measured by NASA’s Lunar
Reconnaissance Orbiter space probe in a crater near the lunar South Pole.
Yet –240°C is still a long way from absolute zero. Physics laboratories,
however, are able to obtain temperatures very close to it. In 2014, for
example, at the Gran Sasso laboratories of the Italian National Institute for
Nuclear Physics, researchers recorded a temperature of six millikelvins, a
remarkable result because it was obtained in a relatively large volume of
one cubic meter. In much smaller volumes, even lower temperatures are
reached, just a few hundred billionths of a kelvin above absolute zero.
Temperatures close to absolute zero are of interest to physicists because,
in those conditions, the behavior of matter is very different. At those
temperatures, the thermal, electric, and magnetic properties of many
substances undergo remarkable changes. Two important phenomena that
occur under certain critical temperatures are superconductivity and
superfluidity. Superconductive materials offer no resistance to the flow
of electricity and are thus used, for example, when it is necessary to
generate intense magnetic fields, as in the LHC (Large Hadron Collider)
particle accelerator at CERN (the European Organization for Nuclear
Research). The LHC, in fact, has over 1,700 magnets that keep the particles
on the right trajectories, all made of superconductive material and some of
which weigh as much as 28 tons.
Regardless, absolute zero is a theoretically unreachable goal. The third
law of thermodynamics states that as the temperature approaches absolute
zero, it becomes more and more difficult to remove heat from a body, and
thereby cool it. Reaching absolute zero in a finite time and using finite
energy is therefore impossible. Quantum mechanics, which replaced the
certainties of classical mechanics with the concept of probability, poses a
further obstacle. Heisenberg’s uncertainty principle states that an
experiment, no matter how accurate it may be, can never exactly determine
both the position and the velocity of a particle, or to be more precise, its
momentum, as given by the product of velocity and mass. This principle
also dictates a limit to the accuracy with which the energy of a system can
be determined during a certain time of observation.
In other words, the product of the precision with which you can measure
the energy in a system ΔE and the duration of the interval of time Δt during
which the measurement is performed cannot go below a certain limit. This
is described in the formula ΔE · Δt ≥ h/4π, where h is Planck’s constant,
which we have discussed in previous chapters. Establishing that a system is
at the temperature of absolute zero would involve, therefore, determining
with absolute precision that its energy is zero (ΔE = 0), something that
could be done only by hypothesizing an unrealistic infinite time of
observation. From another point of view, taking an object to absolute zero
would mean precisely stopping each of its atoms in a distinct point. This
would require fixing the exact position and the exact quantity of motion of
those atoms, which is again a contradiction of quantum mechanics.
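Rearranging Heisenberg’s inequality makes the argument explicit:

ΔE · Δt ≥ h/4π   ⇒   Δt ≥ h/(4π · ΔE)

As the desired precision ΔE shrinks toward zero, the required observation
time Δt grows without limit: certifying exactly zero energy would take
forever.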
On March 2, the supersonic Concorde jet made its maiden voyage. At its
cruising altitude, some 17,000 meters (55,000 feet) above sea level, the
temperature was about 57°C (70.6°F) below zero. On July 20, humankind
took its first steps on the lunar surface. During Neil Armstrong and Buzz
Aldrin’s sojourn the temperature oscillated between –23°C and 7°C (–9.4°F
and 44.6°F). On August 15, the mud of Woodstock welcomed a generation
of young people with the music of Joan Baez, Janis Joplin, and many
others. The daytime temperature on those days was around 28°C (82.4°F),
and it dropped all the way to 12°C (53.6°F) at night. It appears, however,
that few of those present noticed. We are speaking, naturally, about 1969. In
the spring of that year, a handful of English scientists left for Moscow,
carrying in their luggage a thermometer to measure a temperature of 10
million degrees. A thermometer that demonstrated how science could be an
instrument of peace, even at the height of the Cold War.
In those years, the tension between the two blocs, Western and Soviet,
was red hot, and the nuclear arms race knew no rest. In just nine years,
between 1960 and 1969, the USSR and the USA had conducted 660 bomb
tests, and the world was living in a state of terror. Parallel to this military
research, however, progressed a research program for the peaceful
exploitation of nuclear energy. The first process to be studied and put into
practice was nuclear fission, in which a heavy nucleus, struck by a neutron,
divides, and in so doing liberates energy. In 1951, EBR-1, the first
experimental fission reactor able to produce electric energy, began
operation in the United States. The energy generated by EBR-1 could
illuminate just four 200-watt light bulbs, but it was a historic event. In the
Soviet Union, on June 27, 1954, the first civil nuclear power station was
activated with the evocative name of Atom Mirny (peaceful atom). One
year later, in Arco, Idaho, the reactor BORAX-III was already able to light
up an entire city. In Europe the Calder Hall power station near Seascale,
England, began operation in 1956, while the first fission power station in
Italy entered into service in 1963.
Shortly after the end of World War II, however, an alternative process to
fission came under serious study: nuclear fusion. As we have seen in our
discussion of the kilogram, in the fusion process two lightweight nuclei of
hydrogen isotopes combine, thanks to the prevalence of nuclear interaction.
In the reaction, part of the mass of the reactants is converted into energy. As
in the case of fission, the energy liberated by a single reaction is much
greater than the energy obtainable from normal chemical combustion
reactions, and no CO2 is produced. The enormous advantage of fusion over
fission, moreover, is that it produces no long-lasting nuclear waste. The process is
intrinsically safe, and the fuel (water and lithium minerals) is substantially
unlimited. It is no surprise that research of this kind takes on strategic
importance. Therefore, at the height of the Cold War, the competition
between the two blocs with regard to fusion was extremely, even
dangerously, intense. Being able to exploit such an energy source would
amount to an immense economic and political advantage. In light of this,
when, in the summer of 1968, on the occasion of the Third International
Conference on Plasma Physics and Controlled Nuclear Fusion Research,
the Soviets announced that the temperatures of the fuel in their experiment
had reached 10 million kelvin, a lot of people in the West broke out into a
cold sweat.
The reason for all the agitation was precisely the physics of fusion. To
make the process happen, it is necessary to heat the two hydrogen isotope
nuclei to very high temperatures in order to overcome the natural repulsive
force that exists between them. Both nuclei, in fact, have a positive charge,
and because of this electrostatic force—the Coulomb force—they tend to
repel each other. If, however, they manage to get sufficiently close, they
combine due to attractive nuclear forces. To get them close enough to each
other, they must be heated to millions of kelvin so as to exploit the motion
of thermal agitation, which we discussed earlier. At elevated temperatures,
the nuclei move very fast, and therefore they have sufficient kinetic energy
to overcome the repulsive barrier. At those temperatures, matter reaches the
so-called status of plasma, an ionized gas, which is, in fact, the fuel of
fusion. Scientists aim to heat the plasma of future fusion reactors to a
temperature of 150 million degrees, a temperature greater than that at the
center of the Sun.
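To attach a number to that thermal agitation, here is a rough Python
estimate of the average kinetic energy per nucleus at the 150-million-kelvin
target, using the formula E = 3/2 kB T we met earlier in this chapter; the
conversion to electron volts is an added convenience, not a figure from the
text:

    k_B = 1.380649e-23                  # Boltzmann constant, J/K
    T_fusion = 150e6                    # target plasma temperature, K
    E = 1.5 * k_B * T_fusion            # average kinetic energy per particle, joules
    E_keV = E / 1.602176634e-19 / 1e3   # the same energy in kilo-electron volts
    print(f"{E:.2e} J, about {E_keV:.0f} keV per particle")   # ~3.1e-15 J, ~19 keV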
For plasma, you need a container able to bear elevated thermal loads, one
whose walls do not degrade. To meet this need, the physicists who
work on nuclear fusion have designed special doughnut-shaped steel
containers in which the plasma is confined by an intense magnetic field.
The magnetic field exercises a force on the charged particles in motion that
compose the plasma.
One of the main experiments of the 1960s was called T-3, conducted at
the Kurchatov Institute of Atomic Energy in Moscow. To perform the
experiment, the Soviet researchers created a special configuration of the
magnetic field using a machine called a tokamak, conceived a few years
earlier by their compatriots Andrei Sakharov and Igor Tamm. In 1968, after
several years of experiments, the scientists declared that they had heated the
plasma of T-3 to as much as 10 million kelvin. Taking into consideration
also other parameters of plasma, this was a spectacular result, which would
have given the Soviets temporary supremacy in such a strategic field.
Fortunately, despite the political divisions of the period, the scientific
channels between East and West had remained open. So it was that, in
accordance with the best scientific spirit, an independent evaluation was
proposed. The Soviet physicists, aware of the value of their results, invited
their British colleagues at the Culham Laboratory to come to Moscow in
person to measure the temperature of their experiment. In addition to
representing the “competition,” the British possessed a very special
thermometer, since they were experts in the measurement of the
temperature of plasmas by using the laser, then only recently invented.
Coming at the apex of the Cold War, the Russians’ proposal was bold
and anything but simple. The political and diplomatic implications and
difficulties were daunting, but both sides expected great benefits from the
enterprise. For the Soviets, it would provide confirmation of their
measurements and their supremacy. For the British, in contrast, it was a
spectacular testing ground and an international stage for their applied
physics and, in particular, for the technique of temperature measurement
known as Thomson scattering, which they were perfecting in those years.
This was a difficult technique based on tracking the light of a laser beam
scattered by the electrons in motion inside the plasma.
Despite the mutual diffidence and the complications, the mission came
off. The group of British scientists departed for Moscow, accompanied by
five tons of equipment. After weeks of preparation, their measurements
were successful and confirmed what their Soviet colleagues had reported
the previous year, opening the way to the international success of the
tokamak configuration. Just a few months later, the United States
transformed its main experiment in the Princeton laboratory into a tokamak
and quickly obtained similar results. In short, the tokamak configuration
became the leading player in worldwide research on controlled
thermonuclear fusion.
And science demonstrated that it could knock down walls.
FIVE
The Ampere
Scientific Wonders
Turning to another subject, I will proceed to explain by what law of nature it comes about that
iron can be attracted by that stone which the Greeks call the magnet after the name of its place
of origin, the territory of Magnesia. This stone is regarded by people with astonishment; for it
often forms a chain of rings suspended from itself.
Around the second century BCE, when the Great Wall of China was under
construction, the Chinese knew that a magnet hanging from a silk thread
always pointed in the same direction. The instrument was the precursor of
the compass, but at the time, it was used only as a tool of divination. It
would be a long wait of over a thousand years
before it would become a useful aid for orientation and navigation. As
Massimo Guarnieri recounts in an article published in IEEE Industrial
Electronics Magazine, by the early second millennium CE, the compass
came to be used first for military purposes on land and later for maritime
navigation, which had previously relied only on the stars. The first mention
of a compass in Europe dates to 1190 in Alexander Neckam’s De naturis
rerum, and it is still not clear whether it arrived in the old continent from
China or was developed independently.
Today, we know that the compass works because the needle made of
magnetic material undergoes a force owing to the Earth’s magnetic field,
which always tends to orient the needle along the north-south axis.
Although its effects are very familiar, the origin of the terrestrial magnetic
field is not fully known, and much research is being done to explain it. We
do know for certain that the terrestrial magnetic field is related to the
electric currents flowing in our planet’s core of molten metals, but the
mechanism with which these currents sustain themselves is still a mystery.
Curious readers may want to look on the internet for the images of the
computerized simulations by scientists at the University of California, Santa
Cruz, which represent the magnetic field inside the Earth as though it were
a gigantic bowl of spaghetti!
We are indebted to Hans Christian Ørsted for the experiment that
demonstrated that electric currents are at the origin of the magnetic field.
Legend has it that, in 1820, as he was conducting a demonstrative lesson on
electrical and magnetic phenomena, Ørsted noted with wonder that a
compass needle moved when it was placed near a wire carrying electric
current. As sometimes happens, the anecdote appears not to be historically
accurate. An interesting contribution by Roberto de Andrade Martins to
Nuova Voltiana: Studies on Volta and His Times, volume 3, recounts that
the reality of scientific discovery is sometimes more complex than the way
it is simplified in the story. Nevertheless, the fact remains that the
observation was surprising, since, up to that moment, electricity
(represented here by the wire traversed by the current) and magnetism (the
compass needle) were described as two mutually extraneous phenomena.
Ørsted, however, demonstrated that it was indeed an electric current that
had generated a magnetic field.
News of Ørsted’s discovery spread rapidly, and it was Ampère who
conducted further and crucial experiments that confirmed and amplified the
Dane’s discovery and who developed the theory that described them.
Ampère then discovered that not only did a magnetic needle undergo a
force when it was in proximity to a wire traversed by a current but the same
phenomenon also occurred even if the needle was replaced by another wire
also traversed by a current. Lest you think that all of these
developments were merely academic exercises for specialists in the
field, the principle of the force exerted by a magnetic field on an electric
current is used in electric motors, for example, in washing machines. If you
think about all the science there is behind it, the laundry basket of dirty
clothes certainly becomes much more fascinating.
Ampère was able to determine the amount of force between two wires
traversed by current as a function of the distance between them. His
findings were the basis for the definition of the ampere as a unit of
measurement for electric current until 2019. The definition was
cumbersome and not very practical: that constant current which, if
maintained in two straight parallel conductors of infinite length, of
negligible circular cross-section, and placed one meter apart in vacuum,
would produce between these conductors a force equal to 2 × 10⁻⁷ newton
per meter of length. Don’t let the definition scare you. There is no need to
go into the details. In substance, what this complicated sentence says is the
following: take two long wires traversed by the same current, place them at
a distance of one meter, and measure the force that attracts them to each
other. When this force equals a predefined amount, then those two wires are
traversed by one ampere. The predefined amount corresponds precisely to
those above-mentioned two-tenths of a millionth of a newton per meter of
wire. And there lies the first practical difficulty.
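Incidentally, the predefined amount is exactly what the textbook formula
for the force per unit length between two parallel wires, F/L = μ0 I²/(2πd),
yields; the formula itself is standard physics not spelled out in the text.
Before 2019 the magnetic constant μ0 was fixed by definition at exactly
4π × 10⁻⁷, so a quick Python check reproduces the number:

    import math

    mu_0 = 4 * math.pi * 1e-7    # magnetic constant, exact by definition before 2019
    I = 1.0                      # one ampere flowing in each wire
    d = 1.0                      # wires one meter apart

    force_per_meter = mu_0 * I**2 / (2 * math.pi * d)
    print(force_per_meter)       # 2e-07 newtons per meter, as in the old definition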
Two-tenths of a millionth of a newton is a very small force. To give you
an idea of how small, the weight force of a person who weighs 70
kilograms (154 pounds) (technically, the force of gravity with which he or
she is attracted by Earth) is about 700 newtons. Then there is the fact that,
according to the definition, the wires must be infinitely long and, above all,
that the ampere, despite being an electrical quantity, is defined in
mechanical terms, that is, as a force. The unit of measurement of force, the
newton, is not fundamental but is derived from the unit of mass in the
international system, the kilogram. Finally, we have seen that the value of
the prototype kilogram conserved in Sèvres drifted over time and that this
value drift limited the accuracy of its derived units. In sum, in both practice
and theory, the definition of the ampere in effect until 2019 was not
satisfactory. Once again, in order to resolve the problem, we turn to the
pillars of nature, or rather to another fundamental constant: the value of the
elementary charge (e).
At the beginning of this chapter, we recalled that an atom is made up of
protons, electrons, and neutrons. Protons and electrons carry electric
charges of equal magnitude but opposite sign: positive for the proton,
negative for the electron. The magnitude of this charge is dubbed
elementary. The name derives from the
evidence that electric charge is found in nature only in quantities that are
exact multiples of the elementary charge, as happens, for example, with
eggs. Think of a container full of eggs: a supermarket carton, a wholesale
case, a tractor trailer. No matter how many the eggs, they will always be a
multiple of the unit. The same is true of the charge. Rub a plastic comb on
the sleeve of your wool jacket. Like the ancients’ pieces of amber, it will
charge and it will be able to attract small pieces of paper. However great the
charge deposited on the comb, it will always be an exact multiple of the
elementary charge.
The value of the elementary charge is indicated with an e. It is a
universal constant, and e = 1.602176634 × 10⁻¹⁹ coulombs. The coulomb is
the unit of measurement of the electric charge. In the international system it
is a derived unit, hence not fundamental. It owes its name to Charles-
Augustin de Coulomb, a French physicist born in 1736, also one of the 72
scientists immortalized on the cornice of the Eiffel Tower. As can be seen
from its numerical value, the elementary charge is extremely small
compared to a coulomb. To make one coulomb it takes an enormous
number of elementary charges, about 6 billion billion of them (or precisely
6.24150907446 × 10¹⁸, the exact inverse of 1.602176634 × 10⁻¹⁹). For
convenience, let’s call it N.
At the beginning of the chapter, we saw that electric current is tied to the
movement of electric charges. To be precise, the electric current that
traverses a wire is defined as the quantity of electric charge—measured in
coulombs—that passes through a section of the wire in one second.
According to the new definition, approved in 2019, the unit of measurement
of an ampere corresponds, therefore, to the passage of N (the enormous
number above) elementary charges per second. The ampere, too, therefore,
has been liberated from human artifacts (wires, masses, and so on) and has
finally been entrusted only to the universal constants of nature, in this case
the elementary charge.
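The new definition lends itself to a one-line check in Python, using the
exact value of e fixed in 2019:

    e = 1.602176634e-19    # elementary charge in coulombs, exact since 2019
    N = 1 / e              # elementary charges in one coulomb
    print(f"{N:.11e}")     # 6.24150907446e+18

    # One ampere is one coulomb per second, that is, N elementary charges per second.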
Electricity and Sustainable Development
On November 20, 1985, when the two appeared for the closing ceremony,
the “personal chemistry was apparent. The easy and relaxed attitude toward
each other, the smiles, the sense of purpose, all showed through.” The two
were Mikhail Gorbachev and Ronald Reagan, at the time respectively
general secretary of the Soviet Union and president of the United States,
who were meeting for the first time at a bilateral summit of the two
superpowers. The description was provided by George Shultz, the
American secretary of state. The two leaders met at the height of the Cold
War to discuss the arms race and especially the possibility of reducing the
number of nuclear arms. Held in Geneva, the meeting was the first
American-Soviet summit in more than six years, during a period in which
the number of nuclear warheads had grown sharply and the strategic
relations between the United States and the USSR, as well as the stability of
the world, were entrusted to the doctrine of “mutual assured destruction.”
That doctrine held that if one of the two countries launched a first strike
against the other, the second would react and the ensuing nuclear war would
destroy them both.
Despite the lack of tangible progress on specific measures regarding
nuclear arms, the Geneva summit was a turning point for Soviet-U.S.
relations and marked the beginning of the reduction of atomic arsenals,
which has continued up to the present (even though the situation is not
totally reassuring, since there are still 9,500 nuclear warheads in military
stockpiles for potential use).
Beyond the topics related to the arms race, the two heads of state also
spoke about the peaceful use of nuclear energy. The official communiqué at
the close of the summit noted that the “two leaders emphasized the potential
importance of the work aimed at utilizing controlled thermonuclear fusion
for peaceful purposes and, in this connection, advocated the widest
practicable development of international cooperation in obtaining this
source of energy, which is essentially inexhaustible, for the benefit for all
mankind.” This commitment was soon translated into the start of ITER (in
Latin, “the way”), a huge international project for the study of nuclear
fusion (the process we discussed at the end of the previous chapter on the
kelvin). One year later, an agreement was reached among the European
Union, Japan, the Soviet Union, and the United States for the joint design of
the program. The People’s Republic of China and the Republic of Korea
signed on to the project in 2003, followed by India in 2005. Despite the
good intentions expressed by Gorbachev and Reagan, it took almost twenty
years to formalize an executive agreement that allowed the construction of
ITER to begin. This achievement speaks volumes about the urgency that the
great economic powers have now attributed to the search for carbon-free
sources for the production of electrical energy as an alternative to fossil
fuels.
Today, the construction of ITER is proceeding apace near Aix-en-
Provence, in the south of France, and the first significant results will be
seen starting from 2030. The goal of ITER—a reactor that, just to give you
an idea of its size, will be as high as a ten-story building—is to demonstrate
the scientific and technological feasibility of controlled thermonuclear
fusion. ITER will have to produce an amount of thermal power from fusion
reactions 10 times as great (500 million watts) as the amount needed to fire
up the reactor (50 million watts). ITER will also have the task of laying the
groundwork for the next and definitive step: the construction of an
experimental reactor, called Demo, able to demonstrate on a large scale the
potential production of electricity. If everything goes as planned, Demo will
bring fusion into operation in the second half of this century,
offering humanity an important tool to fight the environmental crisis.
ITER belongs to the tokamak category, a type of experiment for the
study of fusion in a toroidal or doughnut-shaped apparatus, which we
discussed in the previous chapter. Its basic working elements are an electric
current that flows in plasma—the super-high-temperature ionized gas that is
contained in the device and is the fuel for the fusion—and a magnetic field.
In essence, in a tokamak reactor, the fusion reactions and their consequent
liberation of energy happen in the plasma. It must be heated to a
temperature on the order of 150 million kelvin—about 10 times that of the
interior of the Sun—and remain confined in a stable and stationary manner
inside the reactor without interacting with its metallic walls, something that
would drastically decrease its performance. The confinement is obtained by
offsetting the expansive force coming from the variation in pressure with an
electromagnetic force. The situation is similar to that of an automobile tire.
The air inside the tire has a pressure of about two atmospheres, double the
ambient pressure on the outside. The confinement of air at high pressure
inside the tire is achieved mechanically, that is, by the elastic inner tube. It
is the inner tube that exerts a force in opposition to the expansive force
originating in the difference between the internal and external pressure of
the tire. In the plasma of a tokamak reactor, the situation is similar. The hot
plasma in the core of the device is at a higher pressure than the plasma on
the outer edges. To counteract the consequent expansive force, a balancing
force is needed.
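To give the analogy a sense of scale: the confining magnetic pressure
grows as B²/(2μ0). Here is a rough, illustrative Python comparison with the
two atmospheres in the tire, using a tokamak-scale field of six tesla (the
value that DTT, discussed later in this chapter, will reach):

    import math

    mu_0 = 4 * math.pi * 1e-7         # magnetic constant, SI units
    B = 6.0                           # tokamak-scale magnetic field, tesla
    p_magnetic = B**2 / (2 * mu_0)    # magnetic pressure, pascals
    print(f"{p_magnetic:.1e} Pa, about {p_magnetic / 101325:.0f} atm")   # ~1.4e7 Pa, ~141 atm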
As we have seen in the chapter on the kilogram, Newton’s fundamental
law (F = ma) tells us that if the interaction of a body with its surrounding
environment is known, that is, its force F, then we can calculate the
acceleration a, which basically means knowing its motion. This equation is
also valid in situations of static equilibrium, in which the sum of all the
forces acting on the body must be null and, consequently, so must its
acceleration (and, by definition of static equilibrium, its velocity). This is
also applied, therefore, to the study of
the confinement of plasma, which requires determining a force in
opposition to the expansive force of the pressure. That force is obtained by
making an electric current flow inside the plasma and, at the same time,
applying a magnetic field to the plasma. The mathematics of the solution is
relatively simple and is expressed by the elegant equation

∇p = J × B

On the left, the term ∇p indicates the force coming from the pressure of
the plasma, which must be balanced by the force originating from the
interaction between the electric current J that flows in the plasma and the
magnetic field B.
As sometimes happens in science, putting an elegant equation into
practice can require a considerable engineering effort, and this is
undoubtedly the case with ITER. The current that flows in the plasma, in
fact, measures 15 million amperes. By way of comparison, it is over a
million times greater than the current that flows in the circuit of an electric
oven in an average kitchen. Producing such a current, the necessary
magnetic fields, the ultra-high vacuum container that holds the plasma, and
a variety of auxiliary components requires cutting-edge technology. The
magnetic fields, for example, are produced by magnets that use the
principle of superconductivity, which we discussed in the chapter on the
kelvin, and building the structure of ITER requires as much steel as it took
to build the Eiffel Tower.
A major contribution to the practical realization of fusion will come
from this broad international research effort. Important experimental
devices are being used to study fusion in the United States, such as the DIII-
D tokamak at General Atomics in San Diego, NSTX at the Princeton
Plasma Physics Laboratory, HBT at the Columbia University Plasma
Laboratory, MST at the University of Wisconsin–Madison, the SPARC
device under development thanks to a collaboration between MIT Plasma
Science and Fusion Center and the private fusion startup Commonwealth
Fusion Systems (CFS), and other devices at additional universities and
research centers. In Italy, a new experiment is taking place at the Divertor
Tokamak Test (DTT) facility. DTT employs cutting-edge technology
conceived in the laboratories of ENEA (Italy’s national atomic energy
agency) in Frascati and designed by researchers from ENEA, from Italian
universities and research centers, and from Eni, Italy’s global energy
company. With its laboratories in Frascati, Padua, and Milan and many
other research centers, Italy is at the forefront of the study of nuclear fusion.
The core of DTT is a steel doughnut about six meters in diameter. Inside
its core—caged in by a six-tesla magnetic field, among the highest values
ever reached in a large tokamak—the facility will produce a plasma that at
its maximum performance will reach a temperature of about 7 million
degrees Celsius. DTT’s main objective is to be a laboratory of innovation
for the study of the intense flows of power released by a fusion reactor. A
considerable fraction of the plasma’s energy is conveyed, in fact, to a
peripheral area of the tokamak known as the divertor. Recent experiments
seem to indicate that the power flows that are discharged in the divertor are
concentrated on relatively small surfaces with thermal loads per unit of
surface area equal to, or even greater than, those on the surface of the Sun.
A decidedly “hot” problem for the development of fusion for which DTT
will have to find a solution.
That 10 Percent
SIX
The Mole
Orange Peels
I was a chemist in a chemical plant, in a chemical laboratory (this too has been narrated), and
I stole in order to eat. If you do not begin as a child, learning how to steal is not easy; it had
taken me several months before I could repress the moral commandments and acquired the
necessary techniques. . . . I stole like [Buck of The Call of the Wild] and like the foxes: at
every favorable opportunity but with sly cunning and without exposing myself. I stole
everything except the bread of my companions.
From the point of view, precisely, of substances that you could steal with profit, that
laboratory was virgin territory, waiting to be explored. There was gasoline and alcohol, banal
and inconvenient loot: many stole them, at various points in the plant, the offer was high and
also the risk, since liquids require receptacles. This is the great problem of packaging, which
every experienced chemist knows: and it was well known to God Almighty, who solved it
brilliantly, as he is wont to, with cellular membranes, eggshells, the multiple peel of oranges,
and our own skins, because after all we too are liquids. Now, at that time, there did not exist
polyethylene, which would have suited me perfectly since it is flexible, light, and splendidly
impermeable: but it is also a bit too incorruptible, and not by chance God Almighty himself,
although he is a master of polymerization, abstained from patenting it: He does not like
incorruptible things.
This passage is taken from The Periodic Table, by Primo Levi (translated
by Raymond Rosenthal), a marvelous book of chemistry and life,
considered by the Royal Institution of Great Britain to be the best science
book ever written.
In 1937, Levi began his studies in chemistry at the University of Turin.
Chemistry is a science that studies matter: how it is made, its structure, its
properties, and the transformations of the substances from which it is
constituted and how they react. Chemistry is everywhere in our lives and in
all of our sensory perceptions: sight, touch, hearing, smell, and taste. Levi
—a graduate of a classical high school (liceo)—was fascinated by it, and he
expressed his fascination in another famous passage from The Periodic
Table:
I tried to explain to him some of the ideas that at the time I was confusedly cultivating. That
the nobility of Man, acquired in a hundred centuries of trial and error, lay in making himself
the conqueror of matter, and that I had enrolled in chemistry because I wanted to remain
faithful to this nobility. That conquering matter is to understand it, and understanding matter
is necessary to understanding the universe and ourselves: and that therefore Mendeleev’s
Periodic Table, which just during those weeks we were laboriously learning to unravel, was
poetry, loftier and more solemn than all the poetry we had swallowed down in liceo.
Moplen!
For Primo Levi, one bottle would have been enough. In 2021, according to
the authoritative website Statista, we produced worldwide 583 billion
plastic bottles. This number translates into 49 billion bottles a month, 1.6
billion per day, 67 million per hour, and about 1 million per minute. If we
were to pile them up, one on top of another, at the same rate of production,
it would take one minute to make a column that reached . . . the
International Space Station.
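The rate arithmetic is easy to verify; in this Python sketch the 30-centimeter
bottle height is an assumption chosen for the example, not the author’s
figure, and it puts the one-minute stack in the neighborhood of the
International Space Station’s roughly 400-kilometer altitude:

    bottles_per_year = 583e9
    per_minute = bottles_per_year / (365 * 24 * 60)
    print(f"{per_minute:.2e} bottles per minute")     # about 1.1 million

    bottle_height_m = 0.30                            # assumed height of one bottle
    stack_km = per_minute * bottle_height_m / 1000
    print(f"{stack_km:.0f} km stacked per minute")    # roughly 330 km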
All these bottles amount to an enormous quantity of plastic, adding to
that which is manufactured for other purposes. The cumulative production
from the 1950s to today amounts to 8.5 billion tons of plastic, most of
which is still with us. With plastic, for the first time in human history, we
have produced and consumed on a global scale a material that lasts much
longer than we do because it takes such a long time to biodegrade. The only
hope is to recycle it, something we started to do only recently and still in
minimal amounts. As a result, since we started to mass-produce it after
World War II, plastic has accumulated: more than 6 billion of those 8.5
billion tons are scattered over land and sea, polluting our planet. As
reported by National Geographic Magazine, the oceans contain an
estimated 5.25 trillion pieces of plastic detritus, most of which do not float
but sink down to the depths, with devastating consequences for the
environment. A dramatic emergency, which we are only now starting to
recognize but that Primo Levi prophetically perceived as early as 1975,
when he wrote The Periodic Table.
Those were the years when the world was falling in love with plastic.
Light, resistant, colorful, durable—even too much so, in hindsight—plastic
became one of the symbols of modernity and the economic boom. The
Italian chemical industry played a starring role in the development of this
new material. Giulio Natta invented isotactic polypropylene and won the
Nobel Prize in Chemistry in 1963. Isotactic polypropylene, a scary name that conjures
images of mad, white-coated laboratory scientists. A world away from the
impression created when it was called by its commercial name—Moplen—
pronounced with a toothy grin in Italian television commercials by the great
comic actor Gino Bramieri (1928–1996). Throughout the 1960s, ads for
plastic were a regular feature of Carosello, the ten-minute advertising slot
screened every evening at around 8:45 p.m. The smiling Bramieri,
surrounded by plastic buckets, colanders, coffee cups, and toy cars,
crooned, “Mo-mo-moplen!” That’s right, Moplen was the cheery,
comforting nickname invented by Italy’s advertising “mad men” to sell the
isotactic polypropylene that revolutionized our homes in the sixties’
economic boom.
In the concentration camps, lightweight and splendidly impermeable
polyethylene, the most common among the plastics, would have been very
useful for Primo Levi. Especially polyethylene terephthalate, or PET, a
thermoplastic widely used in food packaging, bottles included. Which,
however, with characteristic subtle irony, Levi describes as “a bit too
incorruptible.” In nature, it takes hundreds of years for it to biodegrade.
Among the mythical 72 inscribed on the Eiffel Tower is one who almost
didn’t make it: Gustave de Coriolis, who gave his name to an important
physical phenomenon known as the Coriolis effect. The Coriolis force is
observable in a body moving in a rotating frame of reference like the Earth.
It is responsible, for example, for the formation of cyclonic and anticyclonic
weather systems in the atmosphere, and it is important in ballistics. A force
that would be lacking if the Earth did not rotate. Giovanni Battista Riccioli
(1598–1671), Jesuit and Ptolemaic astronomer, intuited its existence, but
because he still did not have the instruments to measure it and, above all,
because he was in some way a victim of his own geocentric prejudice, he
concluded that since (in his view) the Earth did not move, his theory must
have been mistaken. In so doing, he left the honor of the discovery and the
fame to Coriolis 150 years later, contenting himself with a lesser place in
the history of science, thanks to his drawing of one of the first lunar maps
and to having an asteroid and a lunar crater named after him.
Someone who got it right but was misunderstood by his contemporaries
was the Italian scientist Lorenzo Romano Amedeo Carlo Avogadro, Count
of Quaregna and Cerreto. Amedeo to his friends, fellow scientists, and
students. Born in Turin in 1776, he studied law, specializing in canon law. It
turned out that codes and pandects were not his cup of tea, so he shifted his
interest to science, rapidly attaining brilliant results that were to become
pillars of modern chemistry. In particular, he outlined the law that today
bears his name. It states that equal volumes of different gases, at the same
temperature and pressure, contain the same number of molecules. The same
finding was made some years later by André-Marie Ampère, whom we
discussed extensively in the preceding chapter.
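A minimal numerical illustration of the law, via the ideal gas relation
N = PV/(kB T); the relation and the chosen conditions are illustrative
additions, not from the text. The count of molecules in a liter of gas at 0°C
and atmospheric pressure comes out the same whatever the gas:

    k_B = 1.380649e-23    # Boltzmann constant, J/K
    P = 101325.0          # atmospheric pressure, Pa
    V = 1e-3              # one liter, in cubic meters
    T = 273.15            # zero degrees Celsius, in kelvin

    N = P * V / (k_B * T)          # the identity of the gas never enters
    print(f"{N:.3e} molecules")    # about 2.687e+22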
In his writings, Avogadro introduced the distinction between
“elementary molecules” and “compound molecules,” and he hypothesized
the possibility of the division of compound molecules. As Marco Ciardi
notes in his book Il segreto degli elementi (The Secrets of the Elements),
Avogadro’s elementary molecules were, at the time, thought of not as real
physical entities but rather as abstract entities of a mathematical nature.
These were extremely innovative concepts for the time, and they were
ignored by the scientific community despite providing a coherent
explanation of otherwise incomprehensible experimental observations.
Only in 1860, four
years after Avogadro’s death, and thanks to the contribution of another
Italian chemist, Stanislao Cannizzaro, was his theory’s fundamental value
finally recognized, leading, over the span of a few decades, to the
identification of “elementary molecules” with atoms and “compound
molecules” with molecules in today’s parlance.
The scant recognition that Avogadro enjoyed in life has been
compensated by giving his name to what has become the most important
universal constant in chemistry. Avogadro’s number has today made
possible the redefinition of one of the seven fundamental units of the
international system, the mole.
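To convey the size of the constant, one small illustrative computation; the
180-gram glass of water and the rounded 18 g/mol molar mass are assumed
values for the example:

    N_A = 6.02214076e23    # Avogadro's number, fixed exactly in the 2019 SI
    mass_g = 180.0         # an assumed glass of water, in grams
    molar_mass = 18.0      # approximate molar mass of water, grams per mole

    molecules = (mass_g / molar_mass) * N_A    # ten moles of water
    print(f"{molecules:.1e} molecules")        # about 6.0e+24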
[This] new life must start pulsating right away in Italian universities. . . . My purpose is to
promote forthwith the free intellectual training ground of university studies . . . where it is
possible to discuss and experience what freedom is, what are the economic and political
doctrines that are to be welcomed or rejected, what, finally, are the supreme interests of the
Nation, of the people, of working people. This must be the new air that penetrates Italian
universities right now, this the new breath that must be granted right away to the young
people of our universities.
Gentlemen, in these hours of anguish, amidst the ruins of an implacable war, the academic
year of our University reopens. That none of us, dear young people, should lack the spirit of
salvation. When it is with us, everything will rise again, that which was wrongly destroyed,
and everything will be accomplished, that which was rightly hoped for. Young people, trust in
Italy. Trust in its good fortune, as long as it is sustained by your discipline and your courage:
trust in the Italy that must continue to live for the joy and the honor of the world, in the Italy
that cannot fall into servitude without plunging into darkness the civilization of peoples. On
this day 9 November 1943 in the name of this Italy of workers, artists, and scientists, I declare
open the 722nd year of the University of Padua.
SEVEN
The Candela
The history of science has not been kind to Jan Ingenhousz. He is not
known by many, although he did acquire a bit of notoriety thanks to the
“doodle” (the modified version of its logo) that Google dedicated to him on
December 8, 2017, the 287th anniversary of his birth. Yet Ingenhousz
certainly deserves an important place in our collective memory. In 1779, he
made a fundamental contribution to our understanding of photosynthesis,
the process by which plants convert sunlight into chemical energy.
A few years earlier, Joseph Priestley conducted experiments that
demonstrated how a plant was able to regenerate the oxygen consumed by a
candle that burned out under a closed bell jar. Ingenhousz deserves credit
for having understood the decisive role of light in association with plants.
He noted that leaves produce oxygen in sunlight and carbon dioxide in
darkness. He published these findings in 1779, thus exercising a decisive
influence on further research into vegetable life in the centuries that
followed.
Today, we know that plants, algae, and cyanobacteria use sunlight,
water, and carbon dioxide to create oxygen and store energy in the form of
sugar molecules. Photosynthesis is a crucial process for life on Earth
because it enables the collection and transformation of an enormous amount
of solar energy. Most living beings rely on photosynthesis to produce the
complex organic molecules they use as an energy source. The sugars
produced during photosynthesis, such as glucose, are the basis for the more
complex molecules synthesized by the photosynthetic cell. Consider that, on
average, photosynthesis on Earth uses about 30 trillion watts, about five or
six times the power demand created by all human activities.
Beyond the transformation of energy, another effect of photosynthesis,
also fundamental for life, is the release of oxygen into the terrestrial
atmosphere. Most photosynthetic organisms generate oxygen as a by-
product of this process, and the advent of photosynthesis changed life on
Earth forever. Photosynthetic organisms also remove great quantities of
carbon dioxide from the atmosphere and use the carbon atoms to construct
organic molecules.
Epilogue
Measures for Measure
We have come to the end of our journey of discovery of the seven measures
of the world.
The international system of units of measurement is a powerful and
universal tool for understanding nature, the world, and ourselves. Thanks to
decades of work, metrology has finally arrived at a system of measurement
that no longer depends on human experience but rather rests on
unchangeable natural properties. If by some quirk of fate humans were to
vanish from the Earth, together with all measuring tools that we have made
—meters, balances, clocks—a new alien population who came to colonize
the planet could reconstruct our system of measurement as is, because the
speed of light and Planck’s constant will never change. The system of
measurement remains, however, a tool, and as such its primary added value
lies in the mastery of those who use it. A chisel is a simple piece of iron as
long as it sits on the table, but it becomes a tool that frees David from the
marble when it is in the hands of Michelangelo. In the same way, a set of
electric and light measurements can be nothing more than a dry series of
numbers, or they can become some of the experimental foundations of
quantum mechanics, when interpreted by Albert Einstein in his analysis of
the photoelectric effect.
Measuring is fundamental for our lives, for our well-being, and for the
progress of human knowledge, but measurements must be made and used
well. Now that we have discovered the beauty of that intellectual
construction called a system of measures, we must recall the care and
attention that must be put into its use, especially when it is being used for
making collective decisions. By limiting ourselves to an incomplete set of
measurements for describing or interpreting an event or a system we risk
losing sight of complexity and many precious elements. A measuring
process voluntarily limited to a subset of the quantities that describe a
phenomenon can turn an impartial instrument of knowledge into a creator
of distortions. The incredible leap forward of CT (computed tomography),
which we encountered at the beginning of our journey, lies precisely in its
having replaced the single point of view of traditional radiography with a
multitude of observations from various angles.
If we want to describe the environmental qualities of electric cars, for
example, it is not sufficient to limit ourselves to measuring the reduction of
CO2 emissions coming from each vehicle. We also need to consider where
the electricity that powers those cars comes from and how much CO2 was
released in other places in order to produce it—places that may be quite far
from where the energy is used. Otherwise, we focus on the clean air where
electric cars are plentiful and believe that we have solved the problem, but
we forget that if the electricity comes mainly from fossil fuels, as happens
today, we are simply shifting the pollution from one place to another.
If we measure the success of a health care system only by the quantity of
services it performs, without asking ourselves if sufficient resources are
also being invested in the quality of those services and without determining
whether the primary consideration is profit or the patient, we risk making
wrong and discriminatory choices. If we base our evaluation of science
more on the number of publications than on the quality and the impact of
the scholarship, research has no future. If we evaluate people solely on the
basis of aseptic, preestablished performance parameters, we lose part of our
humanity and, incidentally, render less productive the environment in which
those people work. If we renounce complexity and simplify all our
evaluations with classifications, boxes, and divisions, we restrict ourselves
to incomplete measurements that will inspire hypersimplified decisions and
policies, incapable of constructing a future worthy of ourselves.
Measures are a precious tool, but they must be interpreted by humans,
using science and its method. Conscious that measurement is a fundamental
element of understanding, scientists have codified the measuring process
and made it universal so that its results can be shared and verified at any
moment and serve as the basis for theory. The analysis and choice of which
quantity to measure are fundamental aspects of experimental practice, with
the objective of describing as broadly as possible the system under
examination and taking into account all potential points of view, whether
real or imaginary. The discussion of experimental findings must be critical,
and their reproducibility is fundamental to drawing solid conclusions.
Science is not an automatic dispenser of certainties, from which anyone
can take what they need. On the contrary, scientific discoveries are the fruit
of doubts and errors, which, for researchers, are not a reason for shame but
rather a powerful instrument of knowledge. And they make science more
human. Indeed, errors and doubts are just as fundamental for life as they are
for research. Gianni Rodari, the great Italian educator and author of
children’s books, puts it this way in his Il libro degli errori (The Book of
Errors), published by Einaudi in 1964: “Errors are necessary, useful as
bread and often beautiful; for example, the tower of Pisa.” The history of
science is there to teach us that.
Great scientists like William Thomson, Baron Kelvin, Albert Einstein,
and Enrico Fermi made mistakes. Kelvin erred in assessing the age of the
Earth. Fermi thought he had found transuranic elements and did not realize
that what he was observing was instead the fission of the uranium nucleus.
Einstein introduced the cosmological constant to make relativity compatible
with a universe that he mistakenly believed was static. All these errors,
however, were fruitful. Kelvin’s work, though it led to a mistaken result,
nevertheless succeeded in transforming the study of the age of the Earth
into a new science, which would soon determine the right result of 4.5
billion years. Fermi’s conclusions on the presumed transuranic elements
were the stimulus for the discovery by Lise Meitner, Otto Hahn, and Fritz
Strassmann, made shortly thereafter, of uranium fission. Hahn himself
admitted that he, Meitner, and Strassmann would never have been interested
in uranium if it had not been for Fermi. Einstein’s cosmological constant
was actually an ingenious intuition, though at the time he arrived at it by
erroneous hypotheses. Decades later, in fact, it would be rediscovered by
astrophysicists to explain the accelerating expansion of the universe.
These errors, like many others in the history of science, were generative,
and they catalyzed turning points for scientific thought. Because, for every
finding reached, for every measurement made, science does not stop. On
the contrary, it poses ever more questions. The enthusiasm for a discovery
is ephemeral, while doubt is a scientist’s lifelong companion. Doubt that—
to quote the Nobel Prize winner Richard Feynman—“is not to be feared”
but to be “welcomed as the possibility of a new potential for human
beings.” For science, to doubt means to be free from reverential awe of
preestablished authorities and ideas, because science is democratic and “one
person, one vote” really does hold true, provided everyone has had the
opportunity to bear the burden and the beauty of study. This is an
opportunity that has to be given to everyone. This freedom allows us to take
new paths, measure unexplored quantities, propose disruptive and
revolutionary visions. A way of thinking that can also be applied outside of
science, encouraging us not to wallow in a vision of progress and well-
being that is purely financial and economistic.
If we put aside the simplistic narratives of much of our politics and mass
media as well as the ex cathedra pronouncements that sometimes come
from academia, the alliance between science and society can help make the
ineluctable infinite complexity of the world that surrounds us accessible and
manageable for everyone, without fear. Abandoning a forcedly coherent
narrative—whether it be straightforward or esoteric—involves setting out
on a narrower and steeper road, but one that can lead to messages whose
content and value are much more elevated.
In the choice of tools with which to measure the world, humanity has
entrusted itself to nature. We now have to entrust ourselves to the
intelligence of individuals and communities so that those tools can permit
us to invent the new measure for a sustainable relationship with nature and
a well-being that is truly collective and universal.
Acknowledgments
In this book, you have encountered some very big numbers, such as
Avogadro’s number or the distance expressed in kilometers between the star
Proxima Centauri and Earth. Even they are too small, however, to represent
the amount of acknowledgments I owe to the many people who have helped
me on this journey.
I begin with him without whom this book would simply not have come
to be: Alessandro Marzo Magno, historian, excellent writer, and lifelong
friend. I owe him my introduction to my Italian publisher Laterza, his
encouragement to undertake a new project and return to book writing, as
well as his attentive reading of the manuscript and his many valuable
suggestions.
I asked many people for opinions; they all responded generously.
Giovanni Busetto, Alessandro De Angelis—Galileo scholar and
popularizer—and Leonardo Giudicotti, my colleagues in the Department of
Physics and Astronomy at the University of Padua, read the manuscript
with care and gave me very useful advice.
I had the good fortune to meet Mauro Sambi, Professor of General and
Inorganic Chemistry at our university, during the months in which I was
writing. I am grateful to him for his meticulous review of the chapter on the
mole, for the correction of my inaccuracies, and, above all, for having
generously shared his time and reflections.
The revision of an expert on the subject such as Marco Pisani, physicist
at the National Institute of Metrological Research in Turin, was crucial in
improving various passages of the book.
To Maestro Federico Maria Sardelli goes my gratitude for having
examined—and corrected—my improbable incursions into music and for a
pleasurable explanation on the theme of musical tempi and the relationship
between composer and performer.
I also owe a debt of thanks to Marina Santi, who reminded me in this
period that not everything in life can or should be measured; to Alessandra
Viola for her helpful advice; to Lia Di Trapani, Agnese Gualdrini, and
everyone at the publishing house Laterza, who followed me and helped me
in this adventure; to my colleagues, women and men, who, day after day,
make the University of Padua, the RFX Consortium, and the DTT facility
places of research and study in which I have learned so much and to which I
am deeply indebted.
“Grazie” to Gregory Conti, who did a great job translating this book, and
a special thanks to Jean Thomson Black of Yale University Press, who
gave me the opportunity to be part of the prestigious YUP community, as
well as to her editorial assistant, Elizabeth Sylvia, and to manuscript
editor Laura Jones Dooley for their fine work on the manuscript.
My heartfelt thanks go to all those who, over the years and in various
ways, have stood by me in life, even when I didn’t know how to recognize
or appreciate their support. A special thanks to Annamaria and Carlo for
everything they have taught me, in this period, too. And the most grateful
thoughts to Andrea, my most important and precious reader.
All these people, and many others, have sustained and helped me with
generosity. The errors and inaccuracies that remain in the book are
exclusively my responsibility.
Suggestions for Further Reading
Index
immunization, 174–75
incandescent light bulb, 181
Indra Musikclub, 1–2
inertia, 35, 94, 110
Ingenhousz, Jan, 174, 175
Institut de France, 131–32
International Bureau of Weights and Measures (Bureau international des
poids et mesures, BIPM), 8, 10, 26, 82
International Energy Agency (IEA), 133
International Prototype Kilogram (IPK), 11, 82
international system of units of measurement (SI), 9, 10–11, 122, 183, 185
fundamental units, 134, 135, 163, 181, 183
isochronism of short swings, 51–52
isotactic polypropylene, 157–58
isotopes, 103, 126, 148
Italian Electro-Technical Association, 137
Italian National Institute for Nuclear Physics, 123
Italian National Metrological Research Institute, 69
Italy, 25–26, 74–75, 156
fascism in, 166–67
nuclear fusion in, 146
University of Padua, 166–70
University of Turin, 74n1, 154, 169
ITER, 143–45
nanoseconds, 47
Napoleon (Bonaparte), 25, 131–32
NASA (National Aeronautics and Space Administration), 66, 94
National Institute of Standards and Technology (NIST), 58
Natta, Giulio, 157
navigation, 55, 138, 194
Nazism, 32, 73, 76, 77, 78, 169
Neckam, Alexander, 139
neutrons, 135, 141
Newton, Isaac, 33, 59–60, 67, 93, 116
Principia, 95
newtons, 140–41
Newton’s equation, 94
Newton’s laws, 97, 98, 145
Ninth Symphony (Beethoven), 53
NIST (National Institute of Standards and Technology), 58
Noah’s ark, 19–20
Nobel, Alfred, 99
Nobel Prize: in Chemistry, 76, 90, 157
in Medicine, 2
in Physics, 10, 16, 17, 28, 48, 74, 75, 78, 85, 91, 93, 96, 107, 188
Noctes Atticae (Attic Nights), 50
NSTX, 146
nuclear arms, 143. See also atomic bombs
nuclear energy, 74, 125–26, 143, 158, 176–77
nuclear fission, 125
chain reaction, 74
power stations, 103, 126
nuclear fusion, 83, 103–4, 126, 143, 146, 147–48
nuclear medicine, 158
nuclear power, 103–4, 126
nuclear reactors, 146–47, 176
nuclear waste, 126
nucleus (of an atom), 135
numbers, 163–66
Avogadro’s number, 163, 165–66
Operation Valkyrie, 77
Oppenheimer, Robert, 74, 103
optics, 30
Ørsted, Hans Christian, 137, 138, 140
ounces, 81
oxidation, 159
oxygen: atoms of, 164, 165
and photosynthesis, 175–76
in water, 164–65
paces (passuum), 4, 21
Palatine Gallery of the Pitti Palace, 173
Papen, Franz von, 77
Parks, Rosa, 16–17
Pasteur, Louis, 108
pendulums, 51–52
pennies, 81
People’s Republic of China. See China
periodic table, 155–56
Periodic Table, The (Levi), 154, 157
Persistence of Memory, The (Dalí), 60–61
Perzy, Erwin, 46
Philo of Byzantium, 111
photocatalytic processes, 161
photoelectric effect, 28, 91–92, 186
photometry, 180, 182
photons, 92
photosynthesis, 175–76, 179
physics: applied, 128
astro-, 188
atomic, 155
and atomic power, 27, 103
classical, 34, 88–89, 91, 93, 95, 97
electricity and, 135–36, 137
experimental, 134
experiments in, 34
Fermi and, 75, 78
laws of, 34, 59, 63, 67, 98
measurement in, 27–30, 41
modern, 50, 85
new discoveries in, 27–28, 35–36, 92–93
and the Nobel Prize, 28, 90
nuclear, 31
quantum, 86, 93, 97
stereotypes associated with, 31–32, 51
sublime, 169
theoretical, 58
theories of, 100
and the theory of relativity, 35–36
universal constants of, 41
use of kelvin by, 122–23, 126
and wine, 107, 109, 113, 118, 120. See also Einstein, Albert; mechanics
Piazza Montecitorio, 49
Picasso, Pablo, 60
piezoelectric materials, 55
Planck, Emma, 77
Planck, Erwin, 75, 77–78, 99
Planck, Grete, 77
Planck, Karl, 77
Planck, Max, 74, 75–78, 87, 88, 92, 99
Planck’s constant (h), 12, 29, 57, 88–89, 97, 99, 100, 101, 124, 185
Planck’s quantum, 100
planetary motion, 33, 60, 68, 94, 98
plasma, 144–45
state of, 127
plastic: beneficial uses for, 158–59
bottles, 156–58
disposal of, 159
isotactic polypropylene, 157–58
thermo-, 158
Plato, 137
Pliny the Elder, 137
pollution: air, 149, 161
atmospheric, 161
avoiding, 161
particle, 148
shifting locations of, 186
vehicle exhaust, 161
water, 161
polyethylene, 158
polyethylene terephthalate (PET), 158
Possanti, Vincenzo, 52
pounds, 80, 81
Presley, Elvis, 1
pressure, 120
atmospheric, 182
of gases, 162
in a tire, 145
Priestley, Joseph, 175
Princeton Plasma Physics Laboratory, 146
probability, 135
prototypes: cubits, 20
kilogram, 11, 82, 99, 100, 101, 141, 182
laser, 37
meter, 6, 25, 26–27, 29, 30–31
pendulum clock, 52
protons, 135, 141
Proxima Centauri, 65–66, 67
pulsilogium, 112
Pyramid of Giza, 20
pyramids, 20
racial segregation, 16
radiation, 48
electromagnetic, 29, 57, 160, 177–79
and krypton, 30–31, 40
X-rays, 3, 28
radio, 33
radioactive decay, 96
radiography, 186
Rayleigh (Lord), 88
Reagan, Ronald, 142–43
relativity, theory of: and the cosmological constant, 188
Einstein as father of, 15, 17, 61
general, 67–68, 69, 83, 84
and the kilogram, 100
and the meter, 17
and Persistence of Memory (Dalí), 61
and space-time, 17, 36
special, 35–36, 47, 67, 69, 97
and the speed of light, 12, 32, 97
Sun as proof of, 86
and time, 47, 61, 64, 67–68, 69
understanding, 85. See also Galilean invariance (Galilean relativity)
repulsive barrier, 127
Riccioli, Giovanni Battista, 162
road building, 6, 21–22
Rodari, Gianni, 187
rods, 5
Roman Empire, 4–5, 21–22
Rømer, Ole, 116
Röntgen, Wilhelm, 28
Royal Air Force (RAF), 133
Royal Astronomical Society, 84
Royal Swedish Academy of Sciences, 89
Russia, 84. See also Soviet Union
Úcar, Iñaki, 54
Uffizi Galleries, 173
ultraviolet light, 91
United States: beer temperature in, 118
Einstein in, 15–17
Fermi in, 74
incandescent light bulbs in, 87, 181
prototypes in, 82
record low temperature, 123
and the Soviet Union, 142–43
study of nuclear fusion in, 125, 128, 146
time measurement in, 56, 58
use of Fahrenheit scale in, 116, 122
units of measurement: associated with body parts, 4–5, 23
different scales of, 115–16
electrical, 137
English, 115, 119
local, 22–23, 24
metric, 115
prototypes (cubit), 20
prototypes (kilogram), 11, 82, 99, 100, 101, 141, 182
prototypes (meter), 6, 25, 26–27, 29, 30–31
universal system of, 7. See also international system of units of
measurement (SI)
universal constants, 141, 142, 163, 165–66, 183
Avogadro’s number, 163, 165–66
elementary charge (e), 141, 183
Kcd, 183
physical constant kB, 120
Planck’s constant (h), 12, 29, 57, 88–89, 97, 99, 100, 101, 124, 185
speed of light (c), 35–37, 41, 100, 183
universe, expansion of, 188
University Carlos III (Madrid), 54
University of California, Santa Cruz, 139
University of Chicago, 74
University of Padua, 166–70
University of Paris, 93
University of Turin, 74n1, 154, 169
University of Wisconsin–Madison, 146
uranium fission, 188. See also nuclear fission
vaccinations, 174–75
van den Broek, Antonius, 156
Vasa (warship), 114–15
velocity (v), 10, 40–41, 63–64
Via Panisperna Boys, 74
Villaggio, Paolo, 117
vision, importance of, 180
Vitruvian Man, 5
Vitruvius Pollio, Marcus, 4–5
Volta, Alessandro, 131–32, 133, 134, 137
voltaic pile, 132
Voltaire, 131, 174, 175
volts and voltage, 133, 135–36
Vox, Patrick, 3
X-radiation (X-rays), 3, 28
yards, 20
yardsticks, 4
zero, absolute, 121, 123–24