At CERN, the European Organization for Nuclear Research, physicists and engineers are probing
the fundamental structure of the universe. They use the world's largest and most complex
scientific instruments to study the basic constituents of matter – the fundamental particles. The
particles are made to collide together at close to the speed of light. The process gives the
physicists clues about how the particles interact, and provides insights into the fundamental laws
of nature.
The instruments used at CERN are purpose-built particle accelerators and detectors. Accelerators
boost beams of particles to high energies before the beams are made to collide with each other or
with stationary targets. Detectors observe and record the results of these collisions.
Founded in 1954, the CERN laboratory sits astride the Franco-Swiss border near Geneva. It was
one of Europe's first joint ventures and now has 21 member states.
The name CERN
The name CERN is derived from the acronym for the French "Conseil Européen pour la
Recherche Nucléaire", or European Council for Nuclear Research, a provisional body founded in
1952 with the mandate of establishing a world-class fundamental physics research organization
in Europe. At that time, pure physics research concentrated on understanding the inside of the
atom, hence the word "nuclear".
Today, our understanding of matter goes much deeper than the nucleus, and CERN's main area
of research is particle physics – the study of the fundamental constituents of matter and the
forces acting between them. Because of this, the laboratory operated by CERN is often referred
to as the European Laboratory for Particle Physics.
The big bang should have created equal amounts of matter and antimatter. So why is there far
more matter than antimatter in the universe?
In 1928, British physicist Paul Dirac wrote down an equation that combined quantum theory and
special relativity to describe the behaviour of an electron moving at a relativistic speed. The
equation – which won Dirac the Nobel prize in 1933 – posed a problem: just as the equation x² = 4
can have two possible solutions (x = 2 or x = −2), so Dirac's equation could have two solutions, one
for an electron with positive energy, and one for an electron with negative energy. But classical
physics (and common sense) dictated that the energy of a particle must always be a positive
number.
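As an illustration of the two-valued energy the text alludes to, the relativistic energy-momentum relation (a standard result shown here for context, not Dirac's full equation) already admits both signs:

```latex
% Illustrative only: the relativistic energy-momentum relation, showing how
% both energy signs arise, in analogy with x^2 = 4 having x = +2 and x = -2.
\[
  E^{2} = p^{2}c^{2} + m^{2}c^{4}
  \quad\Longrightarrow\quad
  E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}} .
\]
```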
Dirac interpreted the equation to mean that for every particle there exists a corresponding
antiparticle, exactly matching the particle but with opposite charge. For the electron there should
be an "antielectron", for example, identical in every way but with a positive electric charge. The
insight opened the possibility of entire galaxies and universes made of antimatter.
But when matter and antimatter come into contact, they annihilate – disappearing in a flash of
energy. The big bang should have created equal amounts of matter and antimatter. So why is
there far more matter than antimatter in the universe?
On 4 July 2012, the ATLAS and CMS experiments at CERN's Large Hadron Collider announced
they had each observed a new particle in the mass region around 126 GeV. This particle is
consistent with the Higgs boson but it will take further work to determine whether or not it is the
Higgs boson predicted by the Standard Model. The Higgs boson, as proposed within
the Standard Model, is the simplest manifestation of the Brout-Englert-Higgs mechanism. Other
types of Higgs bosons are predicted by other theories that go beyond the Standard Model.
On 8 October 2013 the Nobel prize in physics was awarded jointly to François Englert and
Peter Higgs "for the theoretical discovery of a mechanism that contributes to our understanding
of the origin of mass of subatomic particles, and which recently was confirmed through the
discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at
CERN's Large Hadron Collider."
CERN's main focus is particle physics – the study of the fundamental constituents of matter – but
the physics programme at the laboratory is much broader, ranging from nuclear to high-energy
physics, from studies of antimatter to the possible effects of cosmic rays on clouds.
Since the 1970s, particle physicists have described the fundamental structure of matter using an
elegant series of equations called the Standard Model. The model describes how everything that
they observe in the universe is made from a few basic blocks called fundamental particles,
governed by four forces. Physicists at CERN use the world's most powerful particle accelerators
and detectors to test the predictions and limits of the Standard Model. Over the years it has
explained many experimental results and precisely predicted a range of phenomena, such that
today it is considered a well-tested physics theory.
But the model describes only about 4% of the known universe, and questions remain. Will we see a
unification of forces at the high energies of the Large Hadron Collider (LHC)? Why is gravity so
weak? Why is there more matter than antimatter in the universe? Is there more exotic physics
waiting to be discovered at higher energies? Will we discover evidence for a theory
called supersymmetry at the LHC? Or understand the Higgs boson that gives particles mass?
Physicists at CERN are looking for answers to these questions and more – find out more below.
In August 1912, Austrian physicist Victor Hess made a historic balloon flight that opened a new
window on matter in the universe. As he ascended to 5300 metres, he measured the rate of
ionization in the atmosphere and found that it increased to some three times that at sea level. He
concluded that penetrating radiation was entering the atmosphere from above. He had discovered
cosmic rays.
These high-energy particles arriving from outer space are mainly (89%) protons – nuclei of
hydrogen, the lightest and most common element in the universe – but they also include nuclei of
helium (10%) and heavier nuclei (1%), all the way up to uranium. When they arrive at Earth,
they collide with the nuclei of atoms in the upper atmosphere, creating more particles, mainly
pions. The charged pions can swiftly decay, emitting particles called muons. Unlike pions, these
do not interact strongly with matter, and can travel through the atmosphere to penetrate below
ground. The rate of muons arriving at the surface of the Earth is such that about one per second
passes through a volume the size of a person’s head.
Cosmic accelerators
Just how do cosmic rays reach such high energies? Where are the natural accelerators? The
lowest energy cosmic rays arrive from the Sun in a stream of charged particles known as the
solar wind, but pinning down the origin of the higher-energy particles is made difficult as they
twist and turn in the magnetic fields of interstellar space.
Clues have come through studying high-energy gamma rays from outer space. These are far
fewer than the charged cosmic rays, but being electrically neutral they are not influenced by
magnetic fields. They generate showers of secondary particles that can be detected on Earth and
which point back towards the point of origin of the gamma rays. Sources of the highest energy
gamma rays in our own galaxy, the Milky Way, include the remnants of supernovae, such as the
famous Crab Nebula; the shock waves from these stellar explosions have long been proposed as
possible natural accelerators. Other sources of ultra-high-energy gamma rays lie in other
galaxies, where exotic objects such as supermassive black holes may drive the acceleration. There
is also evidence that the highest-energy charged cosmic rays have similar origins in other
galaxies.
Galaxies in our universe seem to be achieving an impossible feat. They are rotating with such
speed that the gravity generated by their observable matter could not possibly hold them
together; they should have torn themselves apart long ago. The same is true of galaxies in
clusters, which leads scientists to believe that something we cannot see is at work. They think
something we have yet to detect directly is giving these galaxies extra mass, generating the extra
gravity they need to stay intact. This strange and unknown matter was called “dark matter” since
it is not visible.
Dark matter
Unlike normal matter, dark matter does not interact with the electromagnetic force. This means it
does not absorb, reflect or emit light, making it extremely hard to spot. In fact, researchers have
been able to infer the existence of dark matter only from the gravitational effect it seems to have
on visible matter. Dark matter seems to outweigh visible matter roughly six to one, making up
about 26% of the universe. Here's a sobering fact: the matter we know and that
makes up all stars and galaxies only accounts for 4% of the content of the universe! But what is
dark matter? One idea is that it could contain "supersymmetric particles" – hypothesized particles
that are partners to those already known in the Standard Model. Experiments at the Large Hadron
Collider (LHC) may provide more direct clues about dark matter.
Many theories say the dark matter particles would be light enough to be produced at the LHC. If
they were created at the LHC, they would escape through the detectors unnoticed. However, they
would carry away energy and momentum, so physicists could infer their existence from the
amount of energy and momentum “missing” after a collision. Dark matter candidates arise
frequently in theories that suggest physics beyond the Standard Model, such as supersymmetry
and extra dimensions. One theory suggests the existence of a “Hidden Valley”, a parallel world
made of dark matter having very little in common with matter we know. If one of these theories
proved to be true, it could help scientists gain a better understanding of the composition of our
universe and, in particular, how galaxies hold together.
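A rough sketch of the "missing momentum" bookkeeping described above. The particle list and numbers below are invented for illustration only; a real analysis works with full detector reconstruction, but the principle is the same: the visible transverse momenta should balance, so any imbalance points to something the detector did not see.

```python
import math

# Hypothetical reconstructed visible particles in one collision event:
# each entry is (transverse momentum in GeV, azimuthal angle phi in radians).
visible_particles = [
    (45.2, 0.30),   # e.g. an electron
    (38.7, 2.95),   # e.g. a jet
    (22.1, -1.10),  # e.g. another jet
]

# Vector-sum the visible transverse momenta; the "missing" transverse
# momentum is minus that sum, so its magnitude is the same.
sum_px = sum(pt * math.cos(phi) for pt, phi in visible_particles)
sum_py = sum(pt * math.sin(phi) for pt, phi in visible_particles)
missing_pt = math.hypot(sum_px, sum_py)

print(f"Missing transverse momentum: {missing_pt:.1f} GeV")
```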
Dark energy
Dark energy makes up approximately 70% of the universe and appears to be associated with the
vacuum in space. It is distributed evenly throughout the universe, not only in space but also in
time – in other words, its effect is not diluted as the universe expands. The even distribution
means that dark energy does not have any local gravitational effects, but rather a global effect on
the universe as a whole. This leads to a repulsive force, which tends to accelerate the expansion
of the universe. The rate of expansion and its acceleration can be measured by observations
based on the Hubble law. These measurements, together with other scientific data, have
confirmed the existence of dark energy and provide an estimate of just how much of this
mysterious substance exists.
Why is gravity so much weaker than the other fundamental forces? A small fridge magnet is
enough to create an electromagnetic force greater than the gravitational pull exerted by planet
Earth. One possibility is that we don’t feel the full effect of gravity because part of it spreads to
extra dimensions. Though it may sound like science fiction, if extra dimensions exist, they could
explain why the universe is expanding faster than expected, and why gravity is weaker than the
other forces of nature.
A question of scale
In our everyday lives, we experience three spatial dimensions, and a fourth dimension of time.
How could there be more? Einstein’s general theory of relativity tells us that space can expand,
contract, and bend. Now if one dimension were to contract to a size smaller than an atom, it
would be hidden from our view. But if we could look on a small enough scale, that hidden
dimension might become visible again. Imagine a person walking on a tightrope. She can only
move backward and forward; but not left and right, nor up and down, so she only sees one
dimension. Ants living on a much smaller scale could move around the cable, in what would
appear like an extra dimension to the tightrope-walker.
How could we test for extra dimensions? One option would be to find evidence of particles that
can exist only if extra dimensions are real. Theories that suggest extra dimensions predict that, in
the same way as atoms have a low-energy ground state and excited high-energy states, there
would be heavier versions of standard particles in other dimensions. These heavier versions of
particles – called Kaluza-Klein states – would have exactly the same properties as standard
particles (and so be visible to our detectors) but with a greater mass. If CMS or ATLAS were to
find a Z- or W-like particle (the Z and W bosons being carriers of the electroweak force) with a
mass 100 times larger for instance, this might suggest the presence of extra dimensions. Such
heavy particles can only be revealed at the high energies reached by the Large Hadron
Collider (LHC).
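For readers who want the textbook form behind these "heavier versions", a simple model with one extra dimension curled up on a circle of radius R gives a tower of masses. This is a generic illustration in natural units (ħ = c = 1), not a measurement or a prediction quoted in the text:

```latex
% Kaluza-Klein tower for one circular extra dimension of radius R
% (natural units); m_0 is the ordinary particle's mass.
\[
  m_{n}^{2} \;=\; m_{0}^{2} + \frac{n^{2}}{R^{2}}, \qquad n = 0, 1, 2, \dots
\]
```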
A little piece of gravity?
Some theorists suggest that a particle called the “graviton” is associated with gravity in the same
way as the photon is associated with the electromagnetic force. If gravitons exist, it should be
possible to create them at the LHC, but they would rapidly disappear into extra dimensions.
Collisions in particle accelerators always create balanced events – just like fireworks – with
particles flying out in all directions. A graviton might escape our detectors, leaving an empty
zone that we notice as an imbalance in momentum and energy in the event. We would need to
carefully study the properties of the missing object to work out whether it is a graviton escaping
to another dimension or something else. This method of searching for missing energy in events is
also used to look for dark matter or supersymmetric particles.
Microscopic black holes
Another way of revealing extra dimensions would be through the production of “microscopic
black holes”. What exactly we would detect would depend on the number of extra dimensions,
the mass of the black hole, the size of the dimensions and the energy at which the black hole
occurs. If micro black holes do appear in the collisions created by the LHC, they would
disintegrate rapidly, in around 10⁻²⁷ seconds. They would decay into Standard Model or
supersymmetric particles, creating events containing an exceptional number of tracks in our
detectors, which we would easily spot. Finding more on any of these subjects would open the
door to yet unknown possibilities.
For a few millionths of a second, shortly after the big bang, the universe was filled with an
astonishingly hot, dense soup made of all kinds of particles moving at near light speed. This
mixture was dominated by quarks – fundamental bits of matter – and by gluons, carriers of the
strong force that normally “glue” quarks together into familiar protons and neutrons and other
species. In those first evanescent moments of extreme temperature, however, quarks and gluons
were bound only weakly, free to move on their own in what’s called a quark-gluon plasma.
To recreate conditions similar to those of the very early universe, powerful accelerators make
head-on collisions between massive ions, such as gold or lead nuclei. In these heavy-ion
collisions the hundreds of protons and neutrons in two such nuclei smash into one another at
energies of upwards of a few trillion electronvolts each. This forms a minuscule fireball in which
everything “melts” into a quark-gluon plasma.
The fireball instantly cools, and the individual quarks and gluons (collectively called partons)
recombine into a blizzard of ordinary matter that speeds away in all directions. The debris
contains particles such as pions and kaons, which are made of a quark and an antiquark; protons
and neutrons, made of three quarks; and even copious antiprotons and antineutrons, which may
combine to form the nuclei of antiatoms as heavy as helium. Much can be learned by studying
the distribution and energy of this debris. An early discovery was that the quark-gluon plasma
behaves more like a perfect fluid with small viscosity than like a gas, as many researchers had
expected.
One type of debris is rare but particularly instructive. In an initial heavy-ion collision, pairs of
quarks or gluons may slam directly into each other and scatter back-to-back – a spurt of energy
that quickly condenses to a jet of pions, kaons, and other particles. First observed in accelerator-
based experiments in the early 1980s, jets are fundamental to quantum chromodynamics, the
theory that explains how quarks and gluons can combine depending on their different “colours”
(a quantum property that has nothing to do with visible colours).
In heavy-ion collisions, the first evidence for jets was seen in 2003 in
the STAR and PHENIX experiments at Brookhaven National Laboratory’s Relativistic Heavy Ion
Collider (RHIC) in the US. These jets showed a remarkable difference from those in simpler
collisions, however. In the most striking measurement, STAR observed that one of the two back-
to-back jets was invariably “quenched,” sometimes weakened and sometimes completely
extinguished. The further a jet has to push through the dense fireball of a heavy-ion collision –
30 to 50 times as dense as an ordinary nucleus – the more energy it loses.
Jets are “hard probes”, by nature strongly interacting but moving so fast and with so much
energy that they are often not completely absorbed by the surrounding quarks and gluons in the
quark-gluon plasma. The degree of jet quenching – a figure that emerges in data from millions of
collision events – plus the jets' orientation, directionality, composition, and how they transfer
energy and momentum to the medium, reveal what’s inside the fireball and thus the properties of
the quark-gluon plasma.
Recently the ALICE, ATLAS and CMS experiments at CERN’s Large Hadron Collider (LHC)
have confirmed the phenomenon of jet quenching in heavy-ion collisions. The much greater
collision energies at the LHC push measurements to much higher jet energies than are accessible
at RHIC, allowing new and more detailed characterization of the quark-gluon plasma.
Theoretical understanding of these measurements is challenging, however, and is one of the most
important problems in quantum chromodynamics today.
Scientists at CERN are trying to find out what the smallest building blocks of matter are.
All matter except dark matter is made of molecules, which are themselves made of atoms. Inside
the atoms, there are electrons spinning around the nucleus. The nucleus itself is generally made
of protons and neutrons but even these are composite objects. Inside the protons and neutrons,
we find the quarks, but these appear to be indivisible, just like the electrons.
Quarks and electrons are some of the elementary particles we study at CERN and in other
laboratories. But physicists have found more of these elementary particles in various
experiments, so many in fact that researchers needed to organize them, just like Mendeleev did
with his periodic table.
This is summarized in a concise theoretical model called the Standard Model. Today, we have a
very good idea of what matter is made of, how it all holds together and how these particles
interact with each other.
The Standard Model has worked beautifully to predict what experiments have shown so far about
the basic building blocks of matter, but physicists recognize that it is incomplete. Supersymmetry
is an extension of the Standard Model that aims to fill some of the gaps. It predicts a partner
particle for each particle in the Standard Model. These new particles would solve a major
problem with the Standard Model – fixing the mass of the Higgs boson. If the theory is correct,
supersymmetric particles should appear in collisions at the LHC.
At first sight, the Standard Model seems to predict that all particles should be massless, an idea
at odds with what we observe around us. Theorists have come up with a mechanism to give
particles masses that requires the existence of a new particle, the Higgs boson. However, it is a
puzzle why the Higgs boson should be light, as interactions between it and Standard-Model
particles would tend to make it very heavy. The extra particles predicted by supersymmetry
would cancel out the contributions to the Higgs mass from their Standard-Model partners,
making a light Higgs boson possible. The new particles would interact through the same forces
as Standard-Model particles, but they would have different masses. If supersymmetric particles
were included in the Standard Model, the interactions of its three forces – electromagnetism and
the strong and weak nuclear forces – could have the exact same strength at very high energies, as
in the early universe. A theory that unites the forces mathematically is called a grand unified
theory, a dream of physicists including Einstein.
Supersymmetry would also link the two different classes of particles known as fermions and
bosons. Particles like those in the Standard Model are classified as fermions or bosons based on a
property known as spin. Fermions all have half of a unit of spin, while the bosons have 0, 1 or 2
units of spin. Supersymmetry predicts that each of the particles in the Standard Model has a
partner with a spin that differs by half of a unit. So bosons are accompanied by fermions and vice
versa. Linked to their differences in spin are differences in their collective properties. Fermions
are very standoffish; every one must be in a different state. On the other hand, bosons are very
clannish; they prefer to be in the same state. Fermions and bosons seem as different as could be,
yet supersymmetry brings the two types together.
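To make the pairing concrete, here is a small illustrative listing of Standard Model particles and the superpartners conventionally predicted for them. The names and spins follow the usual convention; this is a sketch for orientation, not an exhaustive list from the text.

```python
# Illustrative pairing of Standard Model particles with their conventional
# supersymmetric partners; spins are in units of hbar.
partners = [
    # (Standard Model particle, spin, superpartner, spin)
    ("electron", 0.5, "selectron", 0.0),
    ("quark",    0.5, "squark",    0.0),
    ("photon",   1.0, "photino",   0.5),
    ("gluon",    1.0, "gluino",    0.5),
]

for sm_name, sm_spin, susy_name, susy_spin in partners:
    print(f"{sm_name:8s} (spin {sm_spin}) <-> {susy_name:9s} (spin {susy_spin})")
```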
Finally, in many theories scientists predict the lightest supersymmetric particle to be stable and
electrically neutral and to interact weakly with the particles of the Standard Model. These are
exactly the characteristics required for dark matter, thought to make up most of the matter in the
universe and to hold galaxies together. The Standard Model alone does not provide an
explanation for dark matter. Supersymmetry is a framework that builds upon the Standard
Model’s strong foundation to create a more comprehensive picture of our world. Perhaps the
reason we still have some of these questions about the inner workings of the universe is because
we have so far only seen half of the picture.
The theories and discoveries of thousands of physicists since the 1930s have resulted in a
remarkable insight into the fundamental structure of matter: everything in the universe is found
to be made from a few basic building blocks called fundamental particles, governed by four
fundamental forces. Our best understanding of how these particles and three of the forces are
related to each other is encapsulated in the Standard Model of particle physics. Developed in the
early 1970s, it has successfully explained almost all experimental results and precisely predicted
a wide variety of phenomena. Over time and through many experiments, the Standard Model has
become established as a well-tested physics theory.
Matter particles
All matter around us is made of elementary particles, the building blocks of matter. These
particles occur in two basic types called quarks and leptons. Each group consists of six particles,
which are related in pairs, or “generations”. The lightest and most stable particles make up the
first generation, whereas the heavier and less stable particles belong to the second and third
generations. All stable matter in the universe is made from particles that belong to the first
generation; any heavier particles quickly decay to the next most stable level. The six quarks are
paired in the three generations – the “up quark” and the “down quark” form the first generation,
followed by the “charm quark” and “strange quark”, then the “top quark” and “bottom (or
beauty) quark”. Quarks also come in three different “colours” and only mix in such ways as to
form colourless objects. The six leptons are similarly arranged in three generations – the
“electron” and the “electron neutrino”, the “muon” and the “muon neutrino”, and the “tau” and
the “tau neutrino”. The electron, the muon and the tau all have an electric charge and a sizeable
mass, whereas the neutrinos are electrically neutral and have very little mass.
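The three generations described above can be laid out as simple data. The grouping follows the text; the electric charges (in units of the elementary charge) are standard values added here for context.

```python
# Matter particles of the Standard Model, grouped by generation.
# Charges are in units of the elementary charge e.
quarks = {
    1: [("up", +2/3), ("down", -1/3)],
    2: [("charm", +2/3), ("strange", -1/3)],
    3: [("top", +2/3), ("bottom", -1/3)],
}
leptons = {
    1: [("electron", -1), ("electron neutrino", 0)],
    2: [("muon", -1), ("muon neutrino", 0)],
    3: [("tau", -1), ("tau neutrino", 0)],
}

for gen in (1, 2, 3):
    names = [name for name, _ in quarks[gen] + leptons[gen]]
    print(f"Generation {gen}: " + ", ".join(names))
```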
Discovered in 1983 by physicists at the Super Proton Synchrotron at CERN, the Z boson is a
neutral elementary particle. Like its electrically charged cousin, the W, the Z boson carries the
weak force.
The weak force is essentially as strong as the electromagnetic force, but it appears weak because
its influence is limited by the large mass of the Z and W bosons. Their mass limits the range of
the weak force to about 10⁻¹⁸ metres, and it vanishes altogether beyond the radius of a single
proton.
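The quoted range of about 10⁻¹⁸ metres follows from a standard order-of-magnitude estimate, range ≈ ħc/(Mc²), using the Z boson mass. The sketch below simply carries out that arithmetic; the constants are rounded.

```python
# Order-of-magnitude estimate of the weak force's range from the
# uncertainty principle: range ~ hbar*c / (M*c^2), with M the Z boson mass.
hbar_c_MeV_fm = 197.327     # hbar*c in MeV*femtometres
m_z_MeV = 91187.6           # Z boson mass in MeV/c^2 (approximate)

range_fm = hbar_c_MeV_fm / m_z_MeV   # in femtometres (1 fm = 1e-15 m)
range_m = range_fm * 1e-15

print(f"Approximate range of the weak force: {range_m:.1e} m")  # ~2e-18 m
```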
Enrico Fermi was the first to put forth a theory of the weak force in 1933, but it was not until the
1960s that Sheldon Glashow, Abdus Salam and Steven Weinberg developed the theory in its
present form, when they proposed that the weak and electromagnetic forces are actually different
manifestations of one electroweak force.
By emitting an electrically charged W boson, the weak force can cause a particle such as the
proton to change its charge by changing the flavour of its quarks. In 1958, Sidney Bludman
suggested that there might be another arm of the weak force, the so-called "weak neutral
current," mediated by an uncharged partner of the W bosons, which later became known as the Z
boson.
Physicists working with the Gargamelle bubble chamber experiment at CERN presented the first
convincing evidence to support this idea in 1973. Neutrinos are particles that interact only via the
weak interaction, and when the physicists shot neutrinos through the bubble chamber they were
able to detect evidence of the weak neutral current, and hence indirect evidence for the Z boson.
At the end of the 1970s, CERN converted what was then its biggest accelerator, the Super Proton
Synchrotron, to operate as a proton-antiproton collider, with the aim of producing W and Z
bosons directly. Both types of particle were observed there for the first time in 1983. The bosons
were then studied in more detail at CERN and at Fermi National Accelerator Laboratory in the
US.
During the 1990s, the Large Electron-Positron collider at CERN and the SLAC Linear Collider
in the US produced millions of Z bosons for further study.
These results culminated in the need to search for the final piece of the Standard Model –
the Higgs boson. In July 2012, scientists at CERN announced that they had observed a new
particle consistent with the appearance of a Higgs boson.
Although more time and analysis is needed to determine if this is the particle predicted by the
Standard Model, the discovery of the elusive Z bosons set the stage for this important
development.
In the 1860s, James Clerk Maxwell recognized the similarities between electricity and
magnetism and developed his theory of a single electromagnetic force. A similar discovery came
a century later, when theorists began to develop links between electromagnetism, with its
obvious effects in everyday life, and the weak force, which normally hides within the atomic
nucleus.
Support for these ideas came first from the Gargamelle experiment at CERN – when physicists
found the first direct evidence of the weak neutral current, which required the existence of a
neutral particle to carry the weak fundamental force. Further support came from the Nobel-prize-
winning discovery of the W and Z particles, which carry the electroweak force.
But it is only at the higher energies explored in particle collisions at CERN and other laboratories
that the electromagnetic and weak forces begin to act on equal terms. Will the unification of
other forces emerge at even higher energies? Experiments already show that the effect of the
strong force becomes weaker as energies increase. This is a good indication that at incredibly
high energies, the strengths of the electromagnetic, weak and strong forces are probably the
same. The energies involved are at least a thousand million times greater than particle
accelerators can reach, but such conditions would have existed in the early universe, almost
immediately (10⁻³⁴ seconds) after the big bang.
Pushing the concept a step further, theorists even contemplate the possibility of including gravity
at still higher energies, thereby unifying all of the fundamental forces into one. This "unified
force" would have ruled in the first instants of the universe, before its different components
separated out as the universe cooled. Although at present we cannot recreate conditions with
energy high enough to test these ideas directly, we can look for the consequences of “grand
unification” at lower energies, for instance at the Large Hadron Collider. A very popular idea
suggested by such a unification is called supersymmetry.
Discovered in 1983, the W boson is a fundamental particle. Together with the Z boson, it is
responsible for the weak force, one of four fundamental forces that govern the behaviour of
matter in our universe. Particles of matter interact by exchanging these bosons, but only over
short distances.
The W boson, which is electrically charged, changes the very make-up of particles. It switches
protons into neutrons, and vice versa, through the weak force, triggering nuclear fusion and
letting stars burn. This burning also creates heavier elements and, when a star dies, those
elements are tossed into space as the building blocks for planets and even people.
The weak force was combined with the electromagnetic force in theories of a unified
electroweak force in the 1960s, in an effort to make the basic physics mathematically consistent.
But the theory called for the force-carrying particles to be massless, even though scientists knew
the theoretical W boson had to be heavy to account for its short range. Theorists accounted for
the mass of the W by introducing another unseen mechanism. This became known as the Higgs
mechanism, which calls for the existence of a Higgs boson.
As announced in July of 2012 at CERN, scientists have discovered a boson that looks much like
the particle predicted by Peter Higgs, among others. While this boson is not yet confirmed as the
Higgs boson predicted to make sense of the electroweak force, the W boson had a large part in
its discovery.
In March 2012, scientists at Fermilab in the US confirmed the most precise measurement of the
W boson’s mass to date, at 80.385 ± 0.016 GeV/c². According to the predictions of
the Standard Model, which takes into account electroweak theory and the theory of the Higgs
mechanism, the W boson at that mass should point to the Higgs boson at a mass of less than 145
GeV. Both the ATLAS and CMS collaborations place the mass of the new Higgs-like boson at
about 125 GeV, well within range.
Time, a measured or measurable period, is a continuum that lacks spatial dimensions. Time is of
philosophical interest and is also the subject of mathematical and scientific investigation.
This belief in Heilsgeschichte (salvational history) has been derived by Islām and Christianity
from Judaism and Zoroastrianism. Late in the 12th century, the Christian seer Joachim of
Fiore saw this divinely ordained spiritual progress in the time flow as unfolding in a series of
three ages—those of the Father, the Son, and the Spirit. Karl Jaspers, a 20th-century Western
philosopher, has discerned an “axis age”—i.e., a turning point in human history—in the 6th
century BC, when Confucius, the Buddha, Zoroaster, Deutero-Isaiah, and Pythagoras were alive
contemporaneously. If the “axis age” is extended backward in time to the original Isaiah’s
generation and forward to Muḥammad’s, it may perhaps be recognized as the age in which
humans first sought to make direct contact with the ultimate spiritual reality behind phenomena
instead of making such communication only indirectly through their nonhuman and social
environments.
The belief in an omnipotent creator god, however, has been challenged. The creation of time, or
of anything else, out of nothing is difficult to imagine; and, if God is not a creator but is merely a
shaper, his power is limited by the intractability of the independent material with which he has
had to work. Plato, in the Timaeus, conceived of God as being a nonomnipotent shaper and thus
accounted for the manifest element of evil in phenomena. Marcion, a 2nd-century Christian
heretic, inferred from the evil in phenomena that the creator was bad and held that a “stranger
god” had come to redeem the bad creator’s work at the benevolent stranger’s cost. Zoroaster saw
the phenomenal world as a battlefield between a bad god and a good one and saw time as the
duration of this battle. Though he held that the good god was destined to be the victor, a god who
needs to fight and win is not omnipotent. In an attenuated form, this evil adversary appears in the
three Judaic religions as Satan.
Observation of historical phenomena suggests that, in spite of the manifestness of evil, there has
been progress in the history of life on this planet, culminating in the emergence of humans who
know themselves to be sinners yet feel themselves to be something better than inanimate
matter. Charles Darwin, in his theory of the selection of mutations by the environment, sought to
vindicate apparent progress in the organic realm without recourse to an extraneous god. In the
history of Greek thought, the counterpart of such mutations was the swerving of atoms. After
Empedocles had broken up the indivisible, motionless, and timeless reality of Parmenides and
Zeno into four elements played upon alternately by Love and Strife, it was a short step for the
Atomists of the 5th century BC, Leucippus and Democritus, to break up reality still further into
an innumerable host of minute atoms moving in time through a vacuum. Granting that one
single atom had once made a single slight swerve, the build-up of observed phenomena could be
accounted for on Darwinian lines. Democritus’ account of evolution survives in the fifth book
of De rerum natura, written by a 1st-century-BC Roman poet, Lucretius. The credibility of both
Democritus’ and Darwin’s accounts of evolution depends on the assumption that time is real and
that its flow has been extraordinarily long.
Heracleitus had seen in phenomena a harmony of opposites in tension with each other and had
concluded that War (i.e., Empedocles’ Strife and the Chinese Yang) “is father of all and king of
all.” This vision of Strife as being the dominant and creative force is grimmer than that of Strife
alternating on equal terms with Love and Yang with Yin. In the 19th-century West, Heracleitus’
vision has been revived in the view of G.W.F. Hegel, a German Idealist, that progress occurs
through a synthesis resulting from an encounter between a thesis and an antithesis. In political
terms, Heracleitus’ vision has reappeared in Karl Marx’s concept of an encounter between the
bourgeoisie and the proletariat and the emergence of a classless society without a government.
In the Zoroastrian and Jewish-Christian-Islāmic vision of the time flow, time is destined to be
consummated—as depicted luridly in the Revelation to John—in a terrifying climax. It has
become apparent that history has been accelerating, and accumulated knowledge of the past has
revealed, in retrospect, that the acceleration began about 30,000 years ago, with the transition
from the Lower to the Upper Paleolithic Period, and that it has taken successive “great leaps
forward” with the invention of agriculture, with the dawn of civilization, and with the
progressive harnessing—within the last two centuries—of the titanic physical forces of
inanimate nature. The approach of the climax foreseen intuitively by the prophets is being felt,
and feared, as a coming event. Its imminence is, today, not an article of faith but a datum of
observation and experience.
Arnold Joseph Toynbee
Newtonian mechanics, as studied in the 18th century, was mostly concerned with periodic
systems that, on a large scale, remain constant throughout time. Particularly notable was the
proof of the stability of the solar system that was formulated by Pierre-Simon, marquis de
Laplace, a mathematical astronomer. Interest in systems that develop through time came about in
the 19th century as a result of the theories of the British geologist Sir Charles Lyell, and others,
and the Darwinian theory of evolution. These theories led to a number of biologically inspired
metaphysical systems, which were often—as with Henri Bergson and Alfred North Whitehead—
rather romantic and contrary to the essentially mechanistic spirit of Darwin himself (and also of
present-day molecular biology).
Contemporary philosophies of time
Time in 20th-century philosophy of physics
TIME IN THE SPECIAL THEORY OF RELATIVITY
Since the classic interpretation of Einstein’s special theory of relativity by Hermann Minkowski,
a Lithuanian-German mathematician, it has been clear that physics has to do not with two
entities, space and time, taken separately, but with a unitary entity space–time, in which,
however, timelike and spacelike directions can be distinguished. The Lorentz transformations,
which in special relativity define shifts in velocity perspectives, were shown by Minkowski to be
simply rotations of space–time axes. The Lorentz contraction of moving rods and
the time dilatation of moving clocks turns out to be analogous to the fact that different-sized
slices of a sausage are obtained by altering the direction of the slice: just as there is still the
objective (absolute) sausage, so also Minkowski restores the absolute to relativity in the form of
the invariant four-dimensional object, and the invariance (under the Lorentz transformation) of
the space–time interval and of certain fundamental physical quantities such as action (which has
the dimensions of energy times time, even though neither energy nor time is separately
invariant).
Process philosophers charge the Minkowski universe with being a static one. The philosopher of
the manifold denies this charge, saying that a static universe would be one in which all temporal
cross sections were exactly similar to one another and in which all particles (considered as four-
dimensional objects) lay along parallel lines. The actual universe is not like this, and that it is not
static is shown in the Minkowski picture by the dissimilarity of temporal cross sections and the
nonparallelism of the world lines of particles. The process philosopher may say that change, as
thus portrayed in the Minkowski picture (e.g., with the world lines of particles at varying
distances from one another), is not true Bergsonian change, so that something has been left out.
But if time advances up the manifold, this would seem to be an advance with respect to
a hypertime, perhaps a new time direction orthogonal to the old one. Perhaps it could be a fifth
dimension, as has been used in describing the de Sitter universe as a four-dimensional
hypersurface in a five-dimensional space. The question may be asked, however, what advantage
such a hypertime could have for the process philosopher and whether there is process through
hypertime. If there is, one would seem to need a hyper-hypertime, and so on to infinity. (The
infinity of hypertimes was indeed postulated by John William Dunne, a British inventor and
philosopher, but the remedy seems to be a desperate one.) And if no such regress into hypertimes
is postulated, it may be asked whether the process philosopher would not find the five-
dimensional universe as static as the four-dimensional one. The process philosopher may
therefore adopt the expedient of Henri Bergson, saying that temporal process (the extra
something that makes the difference between a static and a dynamic universe) just cannot be
pictured spatially (whether one supposes four, five, or more dimensions). According to Bergson,
it is something that just has to be intuited and cannot be grasped by discursive reason. The
philosopher of the manifold will find this unintelligible and will in any case deny that anything
dynamic has been left out of his world picture. This sort of impasse between process
philosophers and philosophers of the manifold seems to be characteristic of the present-day state
of philosophy.
The theory of relativity implies that simultaneity is relative to a frame of axes. If one frame of
axes is moving relative to another, then events that are simultaneous relative to the first are not
simultaneous relative to the second, and vice versa. This paradox leads to another difficulty for
process philosophy over and above those noted earlier. Those who think that there is a continual
coming into existence of events (as the present rushes onward into the future) can be asked
“Which present?” It therefore seems difficult to make a distinction between a real present (and
perhaps past) as against an as-yet-unreal future. Philosophers of the manifold also urge that to
talk of events becoming (coming into existence) is not easily intelligible. Enduring things and
processes, in this view, can come into existence; but this simply means that as four-dimensional
solids they have an earliest temporal cross section or time slice.
When talking in the fashion of Minkowski, it is advisable, according to philosophers of the
manifold, to use tenseless verbs (such as the “equals” in “2 + 2 equals 4”). One can say that all
parts of the four-dimensional world exist (in this tenseless sense). This is not, therefore, to say
that they all exist now, nor does it mean that Minkowski events are “timeless.” The tenseless verb
merely refrains from dating events in relation to its own utterance.
The power of the Minkowski representation is illustrated by its manner of dealing with the
so-called clock paradox, which deals with two twins, Peter and Paul. Peter remains on Earth
(regarded as at rest in an inertial system) while Paul is shot off in a rocket at half the velocity of
light, rapidly decelerated at Alpha Centauri (about four light-years away), and shot back to Earth
again at the same speed. Assuming that the period of turnabout is negligible compared with those
of uniform velocity, Paul, as a four-dimensional object, lies along the sides AC and CB of a
space–time triangle, in which A and B are the points of his departure and return and C that of his
turnaround. Peter, as a four-dimensional object, lies along AB. Now, special relativity implies
that on his return Paul will be rather more than two years younger than Peter. This is a matter of
two sides of a triangle not being equal to the third side: AC + CB < AB. The “less than”—
symbolized < —arises from the semi-Euclidean character of Minkowski space–time, which calls
for minus signs in its metric (or expression for the interval between two events, which is
ds = √(c²dt² − dx² − dy² − dz²)). The paradox has been held to result from the fact that, from Paul’s
point of view, it is Peter who has gone off and returned; and so the situation is symmetrical, and
Peter and Paul should each be younger than the other—which is impossible. This is to forget,
however, the asymmetry reflected in the fact that Peter has been in only one inertial system
throughout, and Paul has not; Paul lies along a bent line, Peter along a straight one.
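The "rather more than two years" can be checked with elementary special relativity: Peter's elapsed time is the round-trip distance divided by the speed, and Paul's proper time is shorter by the factor √(1 − v²/c²). The sketch below uses the rounded figures given in the text (four light-years, half the speed of light).

```python
import math

v = 0.5            # Paul's speed as a fraction of the speed of light
distance = 4.0     # Earth to Alpha Centauri, roughly, in light-years

earth_time = 2 * distance / v                    # Peter's elapsed time: 16 years
proper_time = earth_time * math.sqrt(1 - v**2)   # Paul's elapsed (proper) time
age_difference = earth_time - proper_time

print(f"Peter ages {earth_time:.2f} years")                  # 16.00
print(f"Paul ages  {proper_time:.2f} years")                 # ~13.86
print(f"Paul returns {age_difference:.2f} years younger")    # ~2.14
```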
The general theory of relativity predicts a time dilatation in a gravitational field, so that, relative
to someone outside of the field, clocks (or atomic processes) go slowly. This retardation is a
consequence of the curvature of space–time with which the theory identifies the gravitational
field. As a very rough analogy, a road may be considered that, after crossing a plain, goes over a
mountain. Clearly, one mile as measured on the humpbacked surface of the mountain is less than
one mile as measured horizontally. Similarly—if “less” is replaced by “more” because of the
negative signs in the expression for the metric of space–time—one second as measured in the
curved region of space–time is more than one second as measured in a flat region. Strange things
can happen if the gravitational field is very intense. It has been deduced that so-called black
holes in space may occur in places where extraordinarily massive or dense aggregates of matter
exist, as in the gravitational collapse of a star. Nothing, not even radiation, can emerge from such
a black hole. A critical point is the so-called Schwarzschild radius measured outward from the
centre of the collapsed star—a distance, perhaps, of the order of 10 kilometres. Something falling
into the hole would take an infinite time to reach this critical radius, according to the space–time
frame of reference of a distant observer, but only a finite time in the frame of reference of the
falling body itself. From the outside standpoint the fall has become frozen. But from the point of
view of the frame of the falling object, the fall continues to zero radius in a very short time
indeed—of the order of only 10 or 100 microseconds. Within the black hole spacelike and
timelike directions change over, so that to escape again from the black hole is impossible for
reasons analogous to those that, in ordinary space–time, make it impossible to travel faster than
light. (To travel faster than light a body would have to lie—as a four-dimensional object—in a
spacelike direction instead of a timelike one.)
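The "order of 10 kilometres" quoted for the critical radius can be checked directly from the Schwarzschild formula r_s = 2GM/c². The sketch below evaluates it for a few stellar masses; the constants are rounded.

```python
# Schwarzschild radius r_s = 2*G*M / c^2 for a few stellar masses,
# checking the text's "of the order of 10 kilometres".
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

for n_suns in (1, 3, 10):
    r_s = 2 * G * n_suns * M_sun / c**2
    print(f"{n_suns:2d} solar masses -> r_s = {r_s / 1000:.1f} km")
# 1 -> ~3.0 km, 3 -> ~8.9 km, 10 -> ~29.5 km
```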
As a rough analogy two country roads may be considered, both of which go at first in a northerly
direction. But road A bends round asymptotically toward the east; i.e., it approaches ever closer
to a line of latitude. Soon road B crosses this latitude and is thus to the north of all parts of road
A. Disregarding the Earth’s curvature, it takes infinite space for road A to get as far north as that
latitude on road B; i.e., near that latitude an infinite number of “road A northerly units” (say,
miles) correspond to a finite number of road B units. Soon road B gets “beyond infinity” in road
A units, though it need be only a finite road.
Rather similarly, if a body should fall into a black hole, it would fall for only a finite time, even
though it were “beyond infinite” time by external standards. This analogy does not do justice,
however, to the real situation in the black hole—the fact that the curvature becomes infinite as
the star collapses toward a point. It should, however, help to alleviate the mystery of how a finite
time in one reference frame can go “beyond infinity” in another frame.
Most cosmological theories imply that the universe is expanding, with the galaxies receding from
one another (as is made plausible by observations of the red shifts of their spectra), and that the
universe as it is known originated in a primeval explosion at a date of the order of 15 × 10⁹ years
ago. Though this date is often loosely called “the creation of the universe,” there is no reason to
deny that the universe (in the philosophical sense of “everything that there is”) existed at an
earlier time, even though it may be impossible to know anything of what happened then. (There
have been cosmologies, however, that suggest an oscillating universe, with explosion, expansion,
contraction, explosion, etc., ad infinitum.) And a fortiori, there is no need to say—as Augustine
did in his Confessions as early as the 5th century AD—that time itself was created along with the
creation of the universe, though it should not too hastily be assumed that this would lead to
absurdity, because common sense could well be misleading at this point.
A British cosmologist, E.A. Milne, however, proposed a theory according to which time in a
sense could not extend backward beyond the creation time. According to him there are two
scales of time, “τ time” and “t time.” The former is a time scale within which the laws of
mechanics and gravitation are invariant, and the latter is a scale within which those of
electromagnetic and atomic phenomena are invariant. According to Milne τ is proportional to the
logarithm of t (taking the zero of t to be the creation time); thus, by τ time the creation is
infinitely far in the past. The logarithmic relationship implies that the constant of
gravitation G would increase throughout cosmic history. (This increase might have been
expected to show up in certain geological data, but apparently the evidence is against it.)
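Milne's logarithmic relation between the two scales is often written in the schematic form below, with t measured from the creation event and t₀ the present epoch. The exact constants vary between presentations, so treat this as an illustration of why the creation lies infinitely far in the past on the τ scale rather than as Milne's precise formula:

```latex
% Schematic form of Milne's relation between the two time scales.
\[
  \tau = t_{0}\,\ln\!\frac{t}{t_{0}} + t_{0},
  \qquad t \to 0 \;\Longrightarrow\; \tau \to -\infty .
\]
```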
TIME IN MICROPHYSICS
Special problems arise in considering time in quantum mechanics and in particle interactions.
QUANTUM-MECHANICAL ASPECTS OF TIME
In quantum mechanics it is usual to represent measurable quantities by operators in an abstract
many-dimensional (often infinite-dimensional) so-called Hilbert space. Nevertheless, this space
is an abstract mathematical tool for calculating the evolution in time of the energy levels of
systems—and this evolution occurs in ordinary space–time. For example, in the
formula AH - HA = iℏ(dA/dt), in which i is √(−1) and ℏ is 1/2π times Planck’s constant, h,
the A and H are operators, but the t is a perfectly ordinary time variable. There may be something
unusual, however, about the concept of the time at which quantum-mechanical events occur,
because according to the Copenhagen interpretation of quantum mechanics the state of a
microsystem is relative to an experimental arrangement. Thus energy and time are conjugate: no
experimental arrangement can determine both simultaneously, for the energy is relative to one
experimental arrangement, and the time is relative to another. (Thus, a more relational sense of
“time” is suggested.) The states of the experimental arrangement cannot be merely relative to
other experimental arrangements, on pain of infinite regress; and so these have to be described
by classical physics. (This parasitism on classical physics is a possible weakness in quantum
mechanics over which there is much controversy.)
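The operator equation quoted above, AH − HA = iℏ(dA/dt), can be checked numerically on a toy system. The sketch below uses an invented spin-1/2 example with a diagonal Hamiltonian (so the time-evolution operator is easy to write down) and compares the commutator with a numerical time derivative; the system and the numbers are purely illustrative.

```python
import numpy as np

hbar, omega = 1.0, 2.0  # natural units for this toy example

# Toy system: H = (hbar*omega/2) * sigma_z (diagonal), observable A = sigma_x.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * hbar * omega * np.array([[1, 0], [0, -1]], dtype=complex)

def A_heisenberg(t):
    """Heisenberg-picture operator A(t) = U(t)^dagger A U(t), U = exp(-iHt/hbar)."""
    U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))  # valid because H is diagonal
    return U.conj().T @ sigma_x @ U

t, dt = 0.3, 1e-6
A_t = A_heisenberg(t)
dA_dt = (A_heisenberg(t + dt) - A_heisenberg(t - dt)) / (2 * dt)  # central difference

# The text's equation: A H - H A should equal i*hbar*(dA/dt).
print(np.allclose(A_t @ H - H @ A_t, 1j * hbar * dA_dt))  # expect True
```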
The relation between time uncertainty and energy uncertainty, in which their product is equal to
or greater than h/4π, ΔEΔt ≥ h/4π, has led to estimates of the theoretical minimum measurable
span of time, which comes to something of the order of 10⁻²⁴ second and hence to speculations
that time may be made up of discrete intervals (chronons). These suggestions are open to a very
serious objection, viz., that the mathematics of quantum mechanics makes use of continuous
space and time (for example, it contains differential equations). It is not easy to see how it could
possibly be recast so as to postulate only a discrete space–time (or even a merely dense one). For
a set of instants to be dense, there must be an instant between any two instants. For it to be a
continuum, however, something more is required, viz., that every set of instants earlier (later)
than any given one should have an upper (lower) bound. It is continuity that enables modern
mathematics to surmount the paradox of extension framed by the Pre-Socratic Eleatic Zeno—a
paradox comprising the question of how a finite interval can be made up of dimensionless points
or instants.
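One way to arrive at the quoted order of 10⁻²⁴ second is to assume that the energy uncertainty available to a measurement cannot exceed the rest energy of a typical hadron, roughly 1 GeV, and then apply the uncertainty relation. The sketch below carries out that rough estimate; it is an order-of-magnitude argument under that assumption, not a derivation given in the text.

```python
import math

h = 6.626e-34            # Planck's constant, J*s
GeV_in_joules = 1.602e-10

# Assume the largest usable energy uncertainty is about the rest energy of a
# proton-like hadron (~1 GeV); the uncertainty relation then bounds delta_t.
delta_E = 1.0 * GeV_in_joules
delta_t = h / (4 * math.pi * delta_E)

print(f"Minimum measurable time span ~ {delta_t:.1e} s")  # ~3e-25 s, i.e. roughly 10^-24 s
```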
TIME IN PARTICLE INTERACTIONS
Until recently it was thought that the fundamental laws of nature are time symmetrical. It is true
that the second law of thermodynamics, according to which randomness always increases, is
time asymmetrical; but this law is not strictly true (for example, the phenomenon of Brownian
motion contravenes it), and it is now regarded as a statistical derivative of the fundamental laws
together with certain boundary conditions. The fundamental laws of physics were long thought
also to be charge symmetrical (for example, an antiproton together with a positron behave like
a proton and electron) and to be symmetrical with respect to parity (reflection in space, as in a
mirror). The experimental evidence now suggests that all three symmetries are not quite exact
but that the laws of nature are symmetrical if all three reflections are combined: charge, parity,
and time reflections forming what can be called (after the initials of the three parameters) a CPT
mirror. The time asymmetry was shown in certain abstruse experiments concerning the decay
of K mesons that have a short time decay into two pions and a long time decay into three pions.
Another striking temporal asymmetry on the macro level, viz., that spherical waves are often
observed being emitted from a source but never contracting to a sink, has been stressed by Sir
Karl Popper, a 20th-century Austrian and British philosopher of science. By considering
radiation as having a particle aspect (i.e., as consisting of photons), Costa de Beauregard has
argued that this “principle of retarded waves” can be reduced to the statistical Boltzmann
principle of increasing entropy and so is not really different from the previously discussed
asymmetry. These considerations also provide some justification for the common-sense idea that
the cause–effect relation is a temporally unidirectional one, even though the laws of nature
themselves allow for retrodiction no less than for prediction.
A third striking asymmetry on the macro level is that of the apparent mutual recession of the
galaxies, which can plausibly be deduced from the red shifts observed in their spectra. It is still
not clear whether or how far this asymmetry can be reduced to the two asymmetries already
discussed, though interesting suggestions have been made.
The statistical considerations that explain temporal asymmetry apply only to large assemblages
of particles. Hence, any device that records time intervals will have to be macroscopic and to
make use somewhere of statistically irreversible processes. Even if one were to count the swings
of a frictionless pendulum, this counting would require memory traces in the brain, which would
function as a temporally irreversible recording device.
Temporal rhythms in both plants and animals (including humans) are dependent on temperature,
and experiments on human subjects have shown that, if their temperature is raised, they
underestimate the time between events.
Despite these facts, the Lockean notion that the estimation of time depends on the succession
of sensations is still to some degree true. People who take the drugs hashish and mescaline, for
example, may feel their sensations following one another much more rapidly. Because there are
so many more sensations than normal in a given interval of time, time seems to drag, so that a
minute may feel like an hour. Similar illusions about the spans of time occur in dreams.
It is unclear whether most discussions of so-called biological and psychological time have much
significance for metaphysics. As far as the distorted experiences of time that arise through drugs
(and in schizophrenia) are concerned, it can be argued that there is nothing surprising in the fact
that pathological states can make people misestimate periods of time, and so it can be claimed
that facts of this sort do not shed any more light on the philosophy of time than facts about
mountains looking near after rainstorms and looking far after duststorms shed on the philosophy
of space.
The idea that psychological studies of temporal experience are philosophically important is
probably connected with the sort of Empiricism that was characteristic of Locke and still more of
the Empiricists George Berkeley and David Hume and their successors. The idea of time had
somehow to be constructed out of the primitive experience of ideas succeeding one another.
Nowadays, concept formation is thought of as more of a social phenomenon involved in the
“picking up” of a language; thus, contemporary philosophers have tended to see the problem
differently: humans do not have to construct their concepts from their own immediate sensations.
Even so, the learning of temporal concepts surely does at least involve an immediate
apprehension of the relation of “earlier” and “later.” A mere succession of sensations, however,
will go no way toward yielding the idea of time: if one sensation has vanished entirely before the
other is in consciousness, one cannot be immediately aware of the succession of sensations.
What Empiricism needs, therefore, as a basis for constructing the idea of time is an experience of
succession as opposed to a succession of experiences. Hence, two or more ideas that are related
by “earlier than” must be experienced in one single act of awareness. William James, a U.S.
Pragmatist philosopher and also a pioneer psychologist, popularized the term specious
present for the span of time covered by a single act of awareness. His idea was that at a given
moment of time a person is aware of events a short time before that time. (Sometimes he spoke
of the specious present as a saddleback looking slightly into the future as well as slightly into the
past, but this was inconsistent with his idea that the specious present depended on lingering
short-term memory processes in the brain.) He referred to experiments by the German
psychologist Wilhelm Wundt that showed that the longest group of arbitrary sounds that a person
could identify without error lasted about six seconds. Other criteria perhaps involving other sense
modalities might lead to slightly different spans of time, but the interesting point is that, if there
is such a specious present, it cannot be explained solely by ordinary memory traces: if one hears
a “ticktock” of a clock, the “tick” is not remembered in the way in which a “ticktock” 10 minutes
ago is remembered. The specious present is perhaps not really specious: the idea that it was
specious depended on an idea that the real (nonspecious) present had to be instantaneous.
If perception is considered as a certain reliable way of being caused to have true beliefs about the
environment by sensory stimulation, there is no need to suppose that these true beliefs have to be
about an instantaneous state of the world. It can therefore be questioned whether the
term specious is a happy one.
Two matters discussed earlier in connection with the philosophy of physics have implications for
the philosophy of mind: (1) the integration of space and time in the theory of relativity makes it
harder to conceive of immaterial minds that exist in time but are not even localizable in space;
(2) the statistical explanation of temporal asymmetry explains why the brain has memory traces
of the past but not of the future and, hence, helps to explain the unidirectional nature of temporal
consciousness. It also gives reasons for skepticism about the claims of parapsychologists to have
experimental evidence for precognition; or it shows, at least, that if these phenomena do exist
they are not able to be fitted into a cosmology based on physics as it exists today.
PRINCIPAL SCALES
Numerous time scales have been formed; several important ones are described in detail in
subsequent sections of this article. The abbreviations given here are derived from English or
French terms. Universal Time (UT; mean solar time on the prime meridian of Greenwich,
England), Coordinated Universal Time (UTC; the basis of legal, civil time), and leap seconds are
treated under the heading Rotational time. Ephemeris Time (ET; the first correct dynamical
time scale) is treated in the section Dynamical time, as are Barycentric Dynamical Time (TDB)
and Terrestrial Dynamical Time (TDT), which are more accurate than Ephemeris Time because
they take relativity into account. International Atomic Time (TAI; introduced in 1955) is covered
in the section Atomic time.
RELATIVISTIC EFFECTS
Accuracies of atomic clocks and modern observational techniques are so high that the small
differences between classical mechanics (as developed by Newton in the 17th century) and
relativistic mechanics (according to the special and general theories of relativity proposed
by Einstein in the early 20th century) must be taken into account. The equations of motion that
define TDB include relativistic terms. The atomic clocks that form TAI, however, are corrected
only for height above sea level, not for periodic relativistic variations, because all fixed terrestrial
clocks are affected identically. TAI and TDT differ from TDB by calculable periodic variations.
Apparent positions of celestial objects, as tabulated in ephemerides, are corrected for the Sun’s
gravitational deflection of light rays.
CLOCKS
The atomic clock provides the most precise time scale. It has made possible new, highly accurate
techniques for measuring time and distance. These techniques, involving radar, lasers, spacecraft,
radio telescopes, and pulsars, have been applied to the study of problems in celestial mechanics,
astrophysics, relativity, and cosmogony.
Atomic clocks serve as the basis of scientific and legal clock times. A single clock, atomic
or quartz-crystal, synchronized with either TAI or UTC provides the SI second (that is, the
second as defined in the International System of Units), TAI, UTC, and TDT immediately with
high accuracy.
TIME UNITS AND CALENDAR DIVISIONS
The familiar subdivision of the day into 24 hours, the hour into 60 minutes, and the minute into
60 seconds dates to the ancient Egyptians. When the increasing accuracy of clocks led to the
adoption of the mean solar day, which contained 86,400 seconds, this mean solar second became
the basic unit of time. The adoption of the SI second, defined on the basis of atomic phenomena,
as the fundamental time unit has necessitated some changes in the definitions of other terms.
In this article, unless otherwise indicated, second (symbolized s) means the SI second; a minute
(m or min) is 60 s; an hour (h) is 60 m or 3,600 s. An astronomical day (d) equals 86,400 s. An
ordinary calendar day equals 86,400 s, and a leap-second calendar day equals 86,401 s. A
common year contains 365 calendar days and a leap year, 366.
The system of consecutively numbering the years of the Christian Era was devised by Dionysius
Exiguus in about 525; it included the reckoning of dates as either AD or BC (the year before AD 1
was 1 BC). The Julian calendar, introduced by Julius Caesar in the 1st century BC, was then in use,
and any year whose number was exactly divisible by four was designated a leap year. In
the Gregorian calendar, introduced in 1582 and now in general use, the centurial years are
common years unless their numbers are exactly divisible by 400; thus, 1600 was a leap year, but
1700 was not.
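As an illustration of the two leap-year rules just described, here is a minimal Python sketch (the function names are illustrative, not from any standard library):

```python
# A minimal sketch of the two leap-year rules described above.
def is_julian_leap_year(year: int) -> bool:
    # Julian rule: every year divisible by four is a leap year.
    return year % 4 == 0

def is_gregorian_leap_year(year: int) -> bool:
    # Gregorian rule: centurial years are common unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The examples from the text: 1600 was a leap year, but 1700 was not.
assert is_gregorian_leap_year(1600) and not is_gregorian_leap_year(1700)
# Under the Julian rule both would have been leap years.
assert is_julian_leap_year(1600) and is_julian_leap_year(1700)
```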
A calendar month may contain 28 to 31 calendar days; the average is 30.437. The synodic
month, the interval from New Moon to New Moon, averages 29.531 d.
ASTRONOMICAL YEARS AND DATES
In the Julian calendar, a year contains either 365 or 366 days, and the average is 365.25 calendar
days. Astronomers have adopted the term Julian year to denote an interval of 365.25 d, or
31,557,600 s. The corresponding Julian century equals 36,525 d. For convenience in specifying
events separated by long intervals, astronomers use Julian dates (JD) in accordance with a
system proposed in 1583 by the French classical scholar Joseph Scaliger and named in honour of
his father, Julius Caesar Scaliger. In this system days are numbered consecutively from 0.0,
which is identified as Greenwich mean noon of the day assigned the date Jan. 1, 4713 BC, by
reckoning back according to the Julian calendar. The modified Julian date (MJD), defined by the
equation MJD = JD - 2,400,000.5, begins at midnight rather than noon and, for the 20th and 21st
centuries, is expressed by a number with fewer digits. For example, Greenwich mean noon of
Nov. 14, 1981 (Gregorian calendar date), corresponds to JD 2,444,923.0; the preceding midnight
occurred at JD 2,444,922.5 and MJD 44,922.0.
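The JD and MJD bookkeeping can be checked against the worked example above. The sketch below assumes the standard integer algorithm for converting a Gregorian calendar date to a Julian day number, which the article itself does not give:

```python
# Julian day number for a Gregorian calendar date (a standard integer
# algorithm, assumed here; the text defines only JD and MJD themselves).
def julian_day_number(year: int, month: int, day: int) -> int:
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

def jd_at_greenwich_noon(year: int, month: int, day: int) -> float:
    # JD is a whole number at Greenwich mean noon of the given date.
    return float(julian_day_number(year, month, day))

def mjd_from_jd(jd: float) -> float:
    # MJD = JD - 2,400,000.5, so MJD changes at midnight rather than noon.
    return jd - 2_400_000.5

# The worked example from the text: Greenwich mean noon, Nov. 14, 1981.
jd_noon = jd_at_greenwich_noon(1981, 11, 14)        # 2,444,923.0
jd_prev_midnight = jd_noon - 0.5                    # 2,444,922.5
mjd_prev_midnight = mjd_from_jd(jd_prev_midnight)   # 44,922.0
assert jd_noon == 2_444_923.0 and mjd_prev_midnight == 44_922.0
```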
Historical details of the week, month, year, and various calendars are treated in the
article calendar.
Rotational time
The Earth’s rotation causes the stars and the Sun to appear to rise each day in the east and set in
the west. The apparent solar day is measured by the interval of time between two successive
passages of the Sun across the observer’s celestial meridian, the visible half of the great circle
that passes through the zenith and the celestial poles. One sidereal day (very nearly) is measured
by the interval of time between two similar passages of a star. Fuller treatments of astronomical
reference points and planes are given in the articles astronomical map; and celestial mechanics.
The plane in which the Earth orbits about the Sun is called the ecliptic. As seen from the Earth,
the Sun moves eastward on the ecliptic 360° per year, almost one degree per day. As a result, an
apparent solar day is nearly four minutes longer, on the average, than a sidereal day. The
difference varies, however, from 3 minutes 35 seconds to 4 minutes 26 seconds during the year
because of the ellipticity of the Earth’s orbit, in which at different times of the year it moves at
slightly different rates, and because of the 23.44° inclination of the ecliptic to the Equator. In
consequence, apparent solar time is nonuniform with respect to dynamical time.
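The "nearly four minutes" figure follows directly from this drift; a rough check, using the round value of 365.25 days per year for the average case:

```python
# Average excess of the apparent solar day over the sidereal day.
sun_drift_deg_per_day = 360.0 / 365.25       # ~0.9856 degrees eastward per day
minutes_per_degree = 24 * 60 / 360.0         # the Earth turns one degree in ~4 minutes
excess_minutes = sun_drift_deg_per_day * minutes_per_degree
print(f"{excess_minutes:.2f} min")           # ~3.94 min, i.e. "nearly four minutes"
```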
A sundial indicates apparent solar time.
The introduction of the pendulum as a timekeeping element to clocks during the 17th century
increased their accuracy greatly and enabled more precise values for the equation of time to be
determined. This development led to mean solar time as the norm; it is defined below. The
difference between apparent solar time and mean solar time, called the equation of time, varies
from zero to about 16 minutes.
The measures of sidereal, apparent solar, and mean solar time are defined by the hour angles of
certain points, real or fictitious, in the sky. Hour angle is the angle, taken to be positive to the
west, measured along the celestial equator between an observer’s meridian and the hour circle on
which some celestial point or object lies. Hour angles are measured from zero through 24 hours.
Sidereal time is the hour angle of the vernal equinox, a reference point that is one of the two
intersections of the celestial equator and the ecliptic. Because of a small periodic oscillation, or
wobble, of the Earth’s axis, called nutation, there is a distinction between the true and mean
equinoxes. The difference between true and mean sidereal times, defined by the two equinoxes,
varies from zero to about one second.
Apparent solar time is the hour angle of the centre of the true Sun plus 12 hours. Mean solar
time is 12 hours plus the hour angle of the centre of the fictitious mean Sun. This is a point that
moves along the celestial equator with constant speed and that coincides with the true Sun on the
average. In practice, mean solar time is not obtained from observations of the Sun.
Instead, sidereal time is determined from observations of the transit across the meridian of stars,
and the result is transformed by means of a quadratic formula to obtain mean solar time.
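The article does not state the formula used in practice; one commonly quoted version is the IAU 1982 expression for Greenwich mean sidereal time at 0h UT1, reproduced below from standard references (treat the coefficients as an assumption to be verified):

```python
def gmst_at_0h_ut1(jd_ut1_midnight: float) -> float:
    """Greenwich mean sidereal time at 0h UT1, in seconds of sidereal time.

    IAU 1982 polynomial, quoted from standard references (an assumption;
    the article only says that a quadratic-type formula is used).
    """
    t = (jd_ut1_midnight - 2451545.0) / 36525.0   # Julian centuries of UT1 from J2000.0
    gmst = (24110.54841
            + 8640184.812866 * t
            + 0.093104 * t * t
            - 6.2e-6 * t ** 3)
    return gmst % 86400.0
```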
STANDARD TIME
Local mean solar time depends upon longitude; it is advanced by four minutes per degree
eastward. In 1869 Charles F. Dowd, principal of a school in Saratoga Springs, N.Y., proposed
the use of time zones, within which all localities would keep the same time. Others, including Sir
Sandford Fleming, a Canadian civil engineer, strongly advocated this idea. Time zones were
adopted by U.S. and Canadian railroads in 1883.
In October 1884 an international conference held in Washington, D.C., adopted the meridian of
the transit instrument at the Royal Observatory, Greenwich, as the prime, or zero, meridian. This
led to the adoption of 24 standard time zones; the boundaries are determined by local authorities
and in many places deviate considerably from the 15° intervals of longitude implicit in the
original idea. The times in different zones differ by an integral number of hours; minutes and
seconds are the same.
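A minimal sketch of the nominal relationship between longitude, local mean solar time, and time zone (ignoring the legal deviations just noted):

```python
# Nominal relationships only; legal zone boundaries deviate from 15-degree lines.
def local_mean_time_offset_minutes(longitude_deg_east: float) -> float:
    # Local mean solar time runs four minutes ahead per degree of east longitude.
    return 4.0 * longitude_deg_east

def nominal_zone_offset_hours(longitude_deg_east: float) -> int:
    # Ideal zones are centred on multiples of 15 degrees of longitude.
    return round(longitude_deg_east / 15.0)

print(local_mean_time_offset_minutes(75.0))   # 300.0 minutes, i.e. 5 hours
print(nominal_zone_offset_hours(75.0))        # nominal zone UTC+5
```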
The International Date Line is a line in the mid-Pacific Ocean near 180° longitude. When one
travels across it westward a calendar day is added; one day is dropped in passing eastward. This
line also deviates from a straight path in places to accommodate national boundaries and waters.
During World War I, daylight-saving time was adopted in various countries; clocks were
advanced one hour to save fuel by reducing the need for artificial light in evening hours. During
World War II, all clocks in the United States were kept one hour ahead of standard time for the
interval Feb. 9, 1942–Sept. 30, 1945, with no changes made in summer. Beginning in 1967, by
act of Congress, the United States has observed daylight-saving time in summer, though state
legislatures retain the power to pass exempting laws, and a few have done so.
The day begins at midnight and runs through 24 hours. In the 24-hour system of reckoning, used
in Europe and by military agencies of the United States, the hours and minutes are given as a
four-digit number. Thus 0028 means 28 minutes past midnight, and 1240 means 40 minutes past
noon. Also, 2400 of May 15 is the same as 0000 of May 16. This system allows no uncertainty as
to the epoch designated.
In the 12-hour system there are two sets of 12 hours; those from midnight to noon are
designated AM(ante meridiem, “before noon”), and those from noon to midnight are
designated PM (post meridiem, “after noon”). The use of AM and PM to designate either noon or
midnight can cause ambiguity. To designate noon, either the word noon or 1200 or 12 M should
be used. To designate midnight without causing ambiguity, the two dates between which it falls
should be given unless the 24-hour notation is used. Thus, midnight may be written: May 15–16
or 2400 May 15 or 0000 May 16.
UNIVERSAL TIME
Until 1928 the standard time of the zero meridian was called Greenwich Mean Time (GMT).
Astronomers used Greenwich Mean Astronomical Time (GMAT), in which the day begins at
noon. In 1925 the system was changed so that GMT was adopted by astronomers, and in 1928
the International Astronomical Union (IAU) adopted the term Universal Time (UT).
In 1955 the IAU defined several kinds of UT. The initial values of Universal Time obtained at
various observatories, denoted UT0, differ slightly because of polar motion. A correction is
added for each observatory to convert UT0 into UT1. An empirical correction to take account of
annual changes in the speed of rotation is then added to convert UT1 to UT2. UT2 has since been
superseded by atomic time.
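The article gives no formulas for these corrections. For illustration, the conventional seasonal term used to form UT2 from UT1 is sketched below; the coefficients are the conventional BIH/IERS values quoted from standard references, not from this article, and should be treated as an assumption:

```python
import math

def ut2_minus_ut1_seconds(besselian_year_fraction: float) -> float:
    """Conventional seasonal correction UT2 - UT1, in seconds.

    Coefficients are quoted from standard references, not from this
    article; verify before relying on them.
    """
    t = 2.0 * math.pi * besselian_year_fraction
    return (0.022 * math.sin(t) - 0.012 * math.cos(t)
            - 0.006 * math.sin(2.0 * t) + 0.007 * math.cos(2.0 * t))
```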
VARIATIONS IN THE EARTH’S ROTATION RATE
The Earth does not rotate with perfect uniformity, and the variations have been classified as (1)
secular, resulting from tidal friction, (2) irregular, ascribed to motions of the Earth’s core, and (3)
periodic, caused by seasonal meteorological phenomena.
Separating the first two categories is very difficult. Observations made since 1621, after the
introduction of the telescope, show irregular fluctuations about a decade in duration and a long
one that began about 1650 and is not yet complete. The large amplitude of this effect makes it
impossible to determine the secular variation from data accumulated during an interval of only
about four centuries. The record is supplemented, however, by reports—not always reliable—of
eclipses that occurred tens of centuries ago. From this extended set of information it is found
that, relative to dynamical time, the length of the mean solar day increases secularly about 1.6
milliseconds per century, the rate of the Earth’s rotation decreases about one part per million in
5,000 years, and rotational time loses about 30 seconds per century squared.
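These three secular figures are mutually consistent, as a short back-of-the-envelope check shows:

```python
# Consistency check of the secular figures quoted above (round numbers).
day_lengthening_s_per_century = 1.6e-3        # the day grows ~1.6 ms per century
days_per_century = 36525

# Fractional slowdown of the rotation rate over 5,000 years (50 centuries):
print(f"{(day_lengthening_s_per_century / 86400) * 50:.1e}")   # ~9e-7, about one part per million

# The accumulated lag of rotational time behind dynamical time grows quadratically:
# after t centuries each day is (1.6 ms * t) too long, so
# lag(t) ~ 0.5 * 1.6 ms * 36,525 * t**2.
print(f"{0.5 * day_lengthening_s_per_century * days_per_century:.0f} s per century squared")  # ~29 s
```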
The annual seasonal term, nearly periodic, has a coefficient of about 25 milliseconds.
TIME DETERMINATION
The classical, astrometric methods of obtaining UT0 are, in essence, determinations of the instant
at which a star crosses the local celestial meridian. Instruments used include the transit, the
photographic zenith tube, and the prismatic astrolabe.
The transit is a small telescope that can be moved only in the plane of the meridian. The observer
generates a signal at the instant that the image of the star is seen to cross a very thin cross hair
aligned in the meridian plane. The signal is recorded on a chronograph that simultaneously
displays the readings of the clock that is being checked.
The photographic zenith tube (PZT) is a telescope permanently mounted in a precisely vertical
position. The light from a star passing almost directly overhead is refracted by the lens, reflected
from the perfectly horizontal surface of a pool of mercury, and brought to a focus just beneath
the lens. A photographic plate records the images of the star at clock times close to that at which
it crosses the meridian. The vertical alignment of the PZT minimizes the effects of atmospheric
refraction. From the positions of the images on the plate, the time at which the star transits the
meridian can be accurately compared with the clock time. The distance of the star from the
zenith (north or south) also can be ascertained. This distance varies slightly from year to year and
is a measure of the latitude variation caused by the slight movement of the Earth’s axis of
rotation relative to its crust.
The prismatic astrolabe is a refinement of the instrument used since antiquity for measuring the
altitude of a star above the horizon. The modern device consists of a horizontal telescope into
which the light from the star is reflected from two surfaces of a prism that has three faces at 60°
angles. The light reaches one of these faces directly from the star; it reaches the other
after reflection from the surface of a pool of mercury. The light traversing the separate paths is
focused to form two images of the star that coincide when the star reaches the altitude of 60°.
This instant is automatically recorded and compared with the reading of a clock. Like the PZT,
the prismatic astrolabe detects the variation in the latitude of the observatory.
Dynamical time
Dynamical time is defined descriptively as the independent variable, T, in the differential
equations of motion of celestial bodies. The gravitational ephemeris of a planet tabulates its
orbital position for values of T. Observation of the position of the planet makes it possible to
consult the ephemeris and find the corresponding dynamical time.
The most sensitive index of dynamical time is the position of the Moon because of the rapid
motion of that body across the sky. The equations that would exactly describe the motion of the
Moon in the absence of tidal friction, however, must be slightly modified to account for the
deceleration that this friction produces. The correction is made by adding an empirical term, αT2,
to the longitude, λ, given by gravitational theory. The need for this adjustment was not
recognized for a long time.
The American astronomer Simon Newcomb noted in 1878 that fluctuations in λ that he had
found could be due to fluctuations in rotational time; he compiled a table of Δt, its difference
from the time scale based on uniform rotation of the Earth. Realizing that nonuniform rotation of
the Earth should also cause apparent fluctuations in the motion of Mercury, Newcomb searched
for these in 1882 and 1896, but the observational errors were so large that he could not confirm
his theory.
A large fluctuation in the Earth’s rotational speed, ω, began about 1896, and its effects on the
apparent motions of both the Moon and Mercury were described by the Scottish-born astronomer
Robert T.A. Innes in 1925. Innes proposed a time scale based on the motion of the Moon, and his
scale of Δt from 1677 to 1924, based on observations of Mercury, was the first true dynamical
scale, later called Ephemeris Time.
EPHEMERIS TIME
Further studies by the Dutch astronomer Willem de Sitter in 1927 and by Harold Spencer Jones
(later Sir Harold, Astronomer Royal of England) in 1939 confirmed that ω had secular and
irregular variations. Using their results, the U.S. astronomer Gerald M. Clemence in 1948
derived the equations needed to define a dynamical scale numerically and to convert
measurements of the Moon’s position into time values. The fundamental definition was based on
the Earth’s orbital motion as given by Newcomb’s tables of the Sun of 1898. The IAU adopted
the dynamical scale in 1952 and called it Ephemeris Time (ET). Clemence’s equations were used
to revise the lunar ephemeris published in 1919 by the American mathematician Ernest W.
Brown to form the Improved Lunar Ephemeris (ILE) of 1954.
EPHEMERIS SECOND
The IAU in 1958 defined the second of Ephemeris Time as 1/31,556,925.9747 of the tropical
year that began at the instant specified, in astronomers’ terms, as 1900 January 0 d 12h, “the
instant, near the beginning of the calendar year AD 1900, when the geocentric mean longitude of
the Sun was 279° 41′ 48.04″ ”—that is, Greenwich noon on Dec. 31, 1899. In 1960 the General
Conference of Weights and Measures (CGPM) adopted the same definition for the SI second.
Since, however, 1900 was past, this definition could not be used to obtain the ET or SI second. It
was obtained in practice from lunar observations and the ILE and was the basis of the
redefinition, in 1967, of the SI second on the atomic time scale. The present SI second thus
depends directly on the ILE.
The ET second defined by the ILE is based in a complex manner on observations made up to
1938 of the Sun, the Moon, Mercury, and Venus, referred to the variable, mean solar time.
Observations show that the ET second equals the average mean solar second from 1750 to 1903.
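As a check on the 1958 definition, the stated number of seconds corresponds to a tropical year of about 365.2422 mean solar days:

```python
# The 1958 definition: 1 ET second = 1/31,556,925.9747 of the 1900.0 tropical year.
tropical_year_1900_seconds = 31_556_925.9747
print(tropical_year_1900_seconds / 86_400)    # ~365.24219878 mean solar days
```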
Terrestrial Dynamical Time (TDT) is an auxiliary scale defined by the equation TDT = TAI + 32.184 s. Its unit is the SI
second. The constant difference between TDT and TAI makes TDT continuous with ET for
periods before TAI was defined (mid-1955). TDT is the time entry in apparent geocentric
ephemerides.
The definitions adopted require that TDT = TDB - R, where R is the sum of the periodic,
relativistic terms not included in TAI. Both the above equations for TDT can be valid only if
dynamical and atomic times are equivalent (see below Atomic time: SI second).
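A minimal sketch of the two relations just described (leap-second handling between UTC and TAI is outside the scope of this fragment):

```python
# TDT differs from TAI by a fixed offset, and from TDB by the periodic terms R.
TDT_MINUS_TAI = 32.184   # seconds, per the definition quoted above

def tdt_from_tai(tai_epoch_s: float) -> float:
    # Epochs here are simply counts of SI seconds on the respective scales.
    return tai_epoch_s + TDT_MINUS_TAI

def tdt_from_tdb(tdb_epoch_s: float, r_s: float) -> float:
    # R is the sum of the periodic relativistic terms not included in TAI.
    return tdb_epoch_s - r_s
```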
For use in almanacs the barycentric coordinates of the Earth and a body at epoch TDB are
transformed into the coordinates of the body as viewed from the centre of the Earth at the epoch
TDT when a light ray from the body would arrive there. Almanacs tabulate these geocentric
coordinates for equal intervals of TDT; since TDT is available immediately from TAI,
comparisons between computed and observed positions are readily made.
Since Jan. 1, 1984, the principal ephemerides in The Astronomical Almanac, published jointly by
the Royal Greenwich Observatory and the U.S. Naval Observatory, have been based on a highly
accurate ephemeris compiled by the Jet Propulsion Laboratory, Pasadena, Calif., in cooperation
with the Naval Observatory. This task involved the simultaneous numerical integration of the
equations of motion of the Sun, the Moon, and the planets. The coordinates and velocities at a
known time were based on very accurate distance measurements (made with the aid of radar,
laser beams, and spacecraft), optical angular observations, and atomic clocks.
Atomic time
BASIC PRINCIPLES
The German physicist Max Planck postulated in 1900 that the energy of an atomic oscillator is
quantized; that is to say, it equals hν, where h is a constant (now called Planck’s constant) and ν
is the frequency. Einstein extended this concept in 1905, explaining that electromagnetic
radiation is localized in packets, later referred to as photons, of frequency ν and energy E = hν.
Niels Bohr of Denmark postulated in 1913 that atoms exist in states of discrete energy and that a
transition between two states differing in energy by the amount ΔE is accompanied
by absorption or emission of a photon that has a frequency ν = ΔE/h. For detailed information
concerning the phenomena on which atomic time is based, see electromagnetic
radiation, radioactivity, and quantum mechanics.
In an unperturbed atom, not affected by neighbouring atoms or external fields, the energies of the
various states depend only upon intrinsic features of atomic structure, which are postulated not to
vary. A transition between a pair of these states involves absorption or emission of a photon with
a frequency ν0, designated the fundamental frequency associated with that particular transition.
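For example, the cesium transition used for atomic time (its frequency is given later in this article) corresponds to a photon energy of only a few tens of microelectron volts:

```python
# Photon energy E = h * nu for the cesium transition frequency quoted later
# in this article (9,192,631,770 Hz).
h = 6.62607e-34                      # Planck's constant, joule-seconds
nu = 9_192_631_770                   # hertz
E = h * nu                           # ~6.1e-24 joule
print(E / 1.602e-19)                 # ~3.8e-5 eV, a few tens of microelectron volts
```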
ATOMIC CLOCKS
Transitions in many atoms and molecules involve sharply defined frequencies in the vicinity of
10¹⁰ hertz, and, after dependable methods of generating such frequencies were developed during
World War II for microwave radar, they were applied to problems of timekeeping. In 1946
principles of the use of atomic and molecular transitions for regulating the frequency of
electronic oscillators were described, and in 1947 an oscillator controlled by a quantum transition
of the ammonia molecule was constructed. An ammonia-controlled clock was built in 1949 at the
National Bureau of Standards, Washington, D.C.; in this clock the frequency did not vary by
more than one part in 10⁸. In 1954 an ammonia-regulated oscillator of even higher precision—
the first maser—was constructed.
In 1938 the so-called resonance technique of manipulating a beam of atoms or molecules was
introduced. This technique was adopted in several attempts to construct a cesium-beam atomic
clock, and in 1955 the first such clock was placed in operation at the National Physical
Laboratory, Teddington, Eng.
In practice, the most accurate control of frequency is achieved by detecting the interaction of
radiation with atoms that can undergo some selected transition. From a beam of cesium vapour, a
magnetic field first isolates a stream of atoms that can absorb microwaves of the fundamental
frequency ν0. Upon traversing the microwave field, some—not all—of these atoms do absorb
energy, and a second magnetic field isolates these and steers them to a detector. The number of
atoms reaching the detector is greatest when the microwave frequency exactly matches ν0, and
the detector response is used to regulate the microwave frequency. The frequency of the cesium
clock is νt = ν0 + Δν, where Δν is the frequency shift caused by slight instrumental perturbations
of the energy levels. This frequency shift can be determined accurately, and the circuitry of the
clock is arranged so that νt is corrected to generate an operational frequency ν0 + ε, where ε is the
error in the correction. The measure of the accuracy of the frequency-control system is the
fractional error ε/ν0, which is symbolized γ. Small, commercially built cesium clocks attain
values of γ of ±1 or 2 × 10⁻¹²; in a large, laboratory-constructed clock, whose operation can be
varied to allow experiments on factors that can affect the frequency, γ can be reduced to ±5 × 10⁻¹⁴.
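A fractional frequency error γ accumulates into a proportional timing error; the figures quoted above translate into drifts of the order of microseconds per year:

```python
# Accumulated timing error from a constant fractional frequency error gamma.
SECONDS_PER_YEAR = 365.25 * 86_400            # ~3.156e7 s

def drift_seconds_per_year(gamma: float) -> float:
    return gamma * SECONDS_PER_YEAR

print(drift_seconds_per_year(2e-12))          # commercial cesium clock: ~6.3e-5 s (~63 microseconds)
print(drift_seconds_per_year(5e-14))          # laboratory standard: ~1.6e-6 s
```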
Between 1955 and 1958 the National Physical Laboratory and the U.S. Naval Observatory
conducted a joint experiment to determine the frequency maintained by the cesium-beam clock
at Teddington in terms of the ephemeris second, as established by precise observations of the
Moon from Washington, D.C. The radiation associated with the particular transition of the
cesium-133 atom was found to have the fundamental frequency ν0 of 9,192,631,770 cycles per
second of Ephemeris Time.
The merits of the cesium-beam atomic clock are that (1) the fundamental frequency that governs
its operation is invariant; (2) its fractional error is extremely small; and (3) it is convenient to
use. Several thousand commercially built cesium clocks, weighing about 70 pounds (32
kilograms) each, have been placed in operation. A few laboratories have built large cesium-beam
oscillators and clocks to serve as primary standards of frequency.
RELATIVISTIC EFFECTS
A clock displaying TAI on Earth will have periodic, relativistic deviations from the dynamical
scale TDB and from a pulsar time scale PS (see below Pulsar time). These variations,
denoted R above, were demonstrated in 1982–84 by measurements of the pulsar PSR 1937+21.
The main contributions to R result from the continuous changes in the Earth’s speed and distance
from the Sun. These cause variations in the transverse Doppler effect and in the red shift due to
the Sun’s gravitational potential. The frequency of TAI is higher at aphelion (about July 3) than
at perihelion (about January 4) by about 6.6 parts in 10¹⁰, and TAI is more advanced in epoch by
about 3.3 milliseconds on October 1 than on April 1.
By Einstein’s theory of general relativity a photon produced near the Earth’s surface should be
higher in frequency by 1.09 parts in 10¹⁶ for each metre above sea level. In 1960 the U.S.
physicists Robert V. Pound and Glen A. Rebka measured the difference between the frequencies
of photons produced at different elevations and found that it agreed very closely with what was
predicted. The primary standards used to form the frequency of TAI are corrected for height
above sea level.
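The quoted frequency and epoch variations are consistent with each other, and the 1.09 parts in 10¹⁶ per metre follows from gh/c²; a short check, with g and c taken at their usual rounded values (an assumption not stated in the text):

```python
import math

SECONDS_PER_YEAR = 365.25 * 86_400
OMEGA = 2.0 * math.pi / SECONDS_PER_YEAR      # annual angular frequency, rad/s

# A sinusoidal fractional-frequency variation of amplitude y produces an epoch
# variation of amplitude y / OMEGA, a quarter period later -- hence frequency
# extremes near perihelion/aphelion (January/July) and epoch extremes near
# April/October.
freq_amplitude = 6.6e-10 / 2.0                # half the aphelion-perihelion difference
epoch_peak_to_peak = 2.0 * freq_amplitude / OMEGA
print(f"{epoch_peak_to_peak * 1e3:.1f} ms")   # ~3.3 ms, as quoted above

# Gravitational frequency shift per metre of height: g*h/c**2.
g, c = 9.81, 2.998e8                          # rounded values (assumed here)
print(f"{g / c**2:.2e} per metre")            # ~1.09e-16, i.e. 1.09 parts in 10^16
```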
Two-way, round-the-world flights of atomic clocks in 1971 produced changes in clock epochs
that agreed well with the predictions of special and general relativity. The results have been cited
as proof that the gravitational red shift in the frequency of a photon is produced when the photon
is formed, as predicted by Einstein, and not later, as the photon moves in a gravitational field. In
effect, gravitational potential is a perturbation that lowers the energy of a quantum state.
Pulsar time
A pulsar is believed to be a rapidly rotating neutron star whose magnetic and rotational axes do
not coincide. Such bodies emit sharp pulses of radiation, at a short period P, detectable by radio
telescopes. The emission of radiation and energetic subatomic particles causes the spin rate to
decrease and the period to increase. Ṗ, the rate of increase in P, is essentially constant, but
sudden changes in the period of some pulsars have been observed.
Although pulsars are sometimes called clocks, they do not tell time. The times at which their
pulses reach a radio telescope are measured relative to TAI, and values of P and Ṗ are derived
from these times. A time scale formed directly from the arrival times would have a secular
deceleration with respect to TAI, but if P for an initial TAI and Ṗ (assumed constant) are
obtained from a set of observations, then a pulsar time scale, PS, can be formed such that δ, the
difference between TAI and PS, contains only periodic and irregular variations. PS remains valid
as long as no sudden change in P occurs.
It is the variations in δ, allowing comparisons of time scales based on very different processes at
widely separated locations, that make pulsars extremely valuable. The chief variations are
periodic, caused by motions of the Earth. These motions bring about (1) relativistic variations in
TAI and (2) variations in distance, and therefore pulse travel time, from pulsar to telescope.
Observations of the pulsar PSR 1937+21, corrected for the second effect, confirmed the
existence of the first. Residuals (unexplained variations) in δ averaged one microsecond for 30
minutes of observation. This pulsar has the highest rotational speed of any known pulsar, 642
rotations per second. Its period P is 1.55 milliseconds, increasing at the rate Ṗ of 3.3 × 10⁻¹²
second per year; the speed decreases by one part per million in 500 years.
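The quoted figures for PSR 1937+21 can be cross-checked directly:

```python
# Cross-check of the figures quoted for PSR 1937+21.
P = 1.55e-3          # period in seconds (rounded value from the text)
P_dot = 3.3e-12      # period increase, seconds per year

print(1.0 / P)             # ~645 rotations per second with the rounded period
                           # (the unrounded period gives the quoted 642)
print(500 * P_dot / P)     # ~1.1e-6: about one part per million in 500 years
```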
Continued observations of such fast pulsars should make it possible to determine the orbital
position of the Earth more accurately. These results would provide more accurate data
concerning the perturbations of the Earth’s motion by the major planets; these in turn would
permit closer estimates of the masses of those planets. Residual periodic variations in δ, not due
to the sources already mentioned, might indicate gravitational waves. Irregular variations could
provide data on starquakes and inhomogeneities in the interstellar medium.
Radiometric time
Atomic nuclei of a radioactive element decay spontaneously, producing other elements and
isotopes until a stable species is formed. The life span of a single atom may have any value, but a
statistical quantity, the half-life of a macroscopic sample, can be measured; this is the time in
which one-half of the sample disintegrates. The age of a rock, for example, can be determined by
measuring ratios of the parent element and its decay products.
The decay of uranium to lead was first used to measure long intervals, but the decays of
potassium to argon and of rubidium to strontium are more frequently used now. Ages of the
oldest rocks found on the Earth are about 3.5 × 10⁹ years. Those of lunar rocks and meteorites
are about 4.5 × 10⁹ years, a value believed to be near the age of the Earth.
Radiocarbon dating provides ages of formerly living matter within a range of 500 to 50,000
years. While an organism is living, its body contains about one atom of radioactive carbon-14,
formed in the atmosphere by the action of cosmic rays, for every 10¹² atoms of stable carbon-12.
When the organism dies, it stops exchanging carbon with the atmosphere, and the ratio of
carbon-14 to carbon-12 begins to decrease with the half-life of 5,730 years. Measurement of this
ratio determines the age of the specimen.
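A minimal sketch of the age calculation just described (the function and constant names are illustrative):

```python
import math

HALF_LIFE_C14_YEARS = 5_730.0
LIVING_RATIO = 1e-12       # ~1 carbon-14 atom per 10**12 carbon-12 atoms while alive

def radiocarbon_age_years(measured_ratio: float) -> float:
    # ratio(t) = LIVING_RATIO * 0.5 ** (t / half-life); solve for t.
    return HALF_LIFE_C14_YEARS * math.log2(LIVING_RATIO / measured_ratio)

# Example: a ratio of one-quarter of the living value means two half-lives,
# about 11,460 years.
print(radiocarbon_age_years(0.25e-12))   # ~11460.0
```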
A goal in timekeeping has been to obtain a scale of uniform time, but forming one presents
problems. If, for example, dynamical and atomic time should have a relative secular acceleration,
then which one (if either) could be considered uniform?
By postulates, atomic time is the uniform time of electromagnetism. Leaving aside relativistic
and operational effects, are SI seconds formed at different times truly equal? This question
cannot be answered without an invariable time standard for reference, but none exists. The
conclusion is that no time scale can be proved to be uniform by measurement. This is of no
practical consequence, however, because tests have shown that the atomic clock provides a time
scale of very high accuracy.
Via Slashdot, I came by this article describing an apparently major breakthrough in quantum
physics at Harvard, Princeton, and Cal Tech, allowing particle calculations that are normally
incredibly long and complex to be boiled down to a relatively simple geometric object. What
would have been hundreds of pages long is now mind-numbingly simple (from the perspective of
a theoretical physicist), and may ultimately be extended to form the basis of a unified physics
that elegantly encompasses all known phenomena. The oddest implication of the work is that
space and time may both be illusions, and that the universe may actually be an unchanging
geometric object. The article is long and involved, but understandable and fully worth reading.
One of the more irritating features of quantum physics has been its mathematical "wordiness" -
the need to engage in math that runs to thousands upon thousands of terms and that often requires
supercomputers just to work out relatively simple particle interactions at a fundamental level.
What the new research indicates is that this is doing things the incredibly hard and stupid way,
and that it is completely unnecessary. Instead, they've found that a higher-dimensional geometric
object they're calling an "amplituhedron" (yeah, descriptiveness trumped aesthetics there) can be
defined whose volume, computable with calculations simple enough to fit on a napkin, does the same work as
500 pages of ordinary quantum algebra. Here's a representation of an amplituhedron: