READING PASSAGE 1
You should spend about 20 minutes on Questions 1–13, which are based on Reading
Passage 1 below.
Dino discoveries
When news breaks of the discovery of a new species of dinosaur, you would be
forgiven for thinking that the scientists who set out in search of the fossils are the ones
who made the find. The reality tells a different story, as Cavan Scott explains.
The BBC series Planet Dinosaur used state-of-the-art computer graphics to bring to life
the most impressive of those dinosaurs whose remains have been discovered in the
past decade. One of these is Gigantoraptor erlianensis. Discovered in 2005, it stands
more than three metres high at the hip and is the biggest bird-like dinosaur ever
unearthed. Yet its discoverer, Xu Xing of Beijing’s Institute of Vertebrate Palaeontology
and Paleoanthropology, was not even looking for it at the time. He was recording a
documentary in the Gobi Desert, Inner Mongolia.
‘The production team were filming me and a geologist digging out what we thought were
sauropod bones,’ says Xu, ‘when I realised the fossils were something else entirely.’
Gigantoraptor, as it later became known, turned out to be an oviraptorid, a theropod
with a bird-like beak. Its size was staggering. The largest oviraptorid previously
discovered had been comparable in size to an emu; the majority were about as big as a
turkey. Here was a creature that was probably about eight metres long, if the bone
analysis was anything to go by.
Sometimes it is sheer opportunism that plays a part in the discovery of a new species.
In 1999, the National Geographic Society announced that the missing link between
dinosaurs and modern birds had finally been found. Named Archaeoraptor
liaoningensis, the fossil in question appeared to have the head and body of a bird, with
the hind legs and tail of a 124-million-year-old dromaeosaur – a family of small
theropods that includes the bird-like Velociraptor made famous by the Jurassic Park films.
There was a good reason why the fossil looked half-bird, half-dinosaur. CT scans
almost immediately proved the specimen was bogus and had been created by an
industrious Chinese farmer who had glued two separate fossils together to create a
profitable hoax.
But while the palaeontologists behind the announcement were wiping egg off their
faces, others, including Xu, were taking note. The head and body of the fake composite
belonged to Yanornis martini, a primitive fish-eating bird from around 120 million years
ago. The dromaeosaur tail and hind legs, however, were covered in what looked like
fine proto-feathers. That fossil turned out to be something special. In 2000, Xu named it
Microraptor and revealed that it had probably lived in the treetops. Although it couldn’t
fly, its curved claws provided the first real evidence that dinosaurs could have climbed
trees. Three years later, Xu and his team discovered a closely related Microraptor
species that changed everything. ‘Microraptor had two salient features,’ Xu explains.
‘Long feathers were attached not just to its forearms but also to its legs and claws. Then we
noticed that these long feathers had asymmetrical vanes, a feature often associated
with flight capability. This meant that we might have found a flying dinosaur.’
Some extraordinary fossils have remained hidden in a collection and almost forgotten.
For the majority of the 20th century, the palaeontology community had ignored the
frozen tundra of north Alaska. There was no way, scientists believed, that cold-blooded
dinosaurs could survive in such bleak, frigid conditions. But according to Alaskan
dinosaur expert Tony Fiorillo, they eventually realised they were missing a trick.
‘The first discovery of dinosaurs in Alaska was actually made by a geologist called
Robert Liscomb in 1961,’ says Fiorillo. ‘Unfortunately, Robert was killed in a rockslide
the following year, so his discoveries languished in a warehouse for the next two
decades.’ In the mid-1980s, managers at the warehouse stumbled upon the box
containing Liscomb’s fossils during a spring clean. The bones were sent to the United
States Geological Survey, where they were identified as belonging to Edmontosaurus, a
duck-billed hadrosaur. Today, palaeontologists roam this frozen treasure trove
searching for remains locked away in the permafrost.
The rewards are worth the effort. While studying teeth belonging to the relatively
intelligent Troodon theropod, Fiorillo discovered that the teeth of the Alaskan Troodon were
double the size of those of its southern counterpart. ‘Even though the morphology of
individual teeth resembled that of Troodon, the size was significantly larger than that of
the Troodon found in warmer climates.’ Fiorillo says that the reason lies in the Troodon’s
large eyes, which allowed it to hunt at dawn and at dusk – times when other dinosaurs
would have struggled to see. In the polar conditions of Cretaceous Alaska, where the
Sun would all but disappear for months on end, this proved a useful talent. ‘Troodon
adapted for life in the extraordinary light regimes of the polar world. With this advantage,
it took over as Alaska’s dominant theropod,’ explains Fiorillo. Finding itself at the top of
the food chain, the dinosaur evolved to giant proportions.
It is true that some of the most staggering of recent developments have come from
palaeontologists being in the right place at the right time, but this is no reflection on their
knowledge or expertise. After all, not everyone knows when they’ve stumbled upon
something remarkable. When Argentine sheep farmer Guillermo Heredia uncovered
what he believed was a petrified tree trunk on his Patagonian farm in 1988, he had no
way of realising that he’d found a 1.5-metre-long tibia of the largest sauropod ever
known to walk the Earth. Argentinosaurus was 24 metres long and weighed 75 tonnes.
The titanosaur was brought to the attention of the scientific community in 1993 by
Rodolfo Coria and Jose Bonaparte of the National Museum of Natural Sciences in
Buenos Aires. Coria points out that most breakthroughs are not made by scientists, but
by ordinary folk. ‘But the real scientific discovery is not the finding; it’s what we learn
from that finding.’ While any one of us can unearth a fossil, it takes dedicated scientists
to see beyond the rock.
READING PASSAGE 2
You should spend about 20 minutes on Questions 14-26, which are based on Reading
Passage 2 below.
Art to the aid of technology
What caricatures can teach us about facial recognition, by Ben Austen
A Our brains are incredibly agile machines, and it is hard to think of anything they do
more efficiently than recognize faces. Just hours after birth, the eyes of newborns are
drawn to facelike patterns. An adult brain knows it is seeing a face within 100
milliseconds, and it takes just over a second to realize that two different pictures of a
face, even if they are lit or rotated in very different ways, belong to the same person.
B Perhaps the most vivid illustration of our gift for recognition is the magic of
caricature—the fact that the sparest cartoon of a familiar face, even a single line dashed
off in two seconds, can be identified by our brains in an instant. It is often said that a
good caricature looks more like a person than the person themselves. As it happens,
this notion, counterintuitive though it may sound, is actually supported by research. In
the field of vision science, there is even a term for this seeming paradox—the caricature
effect—a phrase that hints at how our brains misperceive faces as much as perceive
them.
C Human faces are all built pretty much the same: two eyes above a nose that’s above
a mouth, the features varying from person to person generally by mere millimetres. So
what our brains look for, according to vision scientists, are the outlying features—those
characteristics that deviate most from the ideal face we carry around in our heads, the
running average of every “visage” we have ever seen. We code each new face we
encounter not in absolute terms but in the several ways it differs markedly from the
mean. In other words, we accentuate what is most important for recognition and largely
ignore what is not. Our perception fixates on the upturned nose, the sunken eyes or the
fleshy cheeks, making them loom larger. To better identify and remember people, we
turn them into caricatures.
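For readers who find a worked example helpful, the following is a minimal Python sketch of this idea: a face is coded as deviations from a running-average face, and those deviations are then exaggerated. The feature names, measurements and exaggeration factor are invented purely for illustration and are not taken from the research described above.

import numpy as np

# Invented landmark measurements (in mm), purely for illustration.
FEATURES = ["inter-eye distance", "nose length", "mouth width", "jaw width"]

mean_face = np.array([62.0, 50.0, 48.0, 120.0])   # the running average of faces seen so far
new_face = np.array([61.0, 58.0, 47.0, 131.0])    # a face we have just encountered

# Code the new face not in absolute terms but as deviations from the mean ...
deviation = new_face - mean_face

# ... then caricature it by exaggerating those deviations (the factor is arbitrary).
caricature = mean_face + 1.5 * deviation

for name, dev, value in zip(FEATURES, deviation, caricature):
    print(f"{name}: {dev:+.1f} mm from the mean, caricatured to {value:.1f} mm")

The print-out makes the point of the paragraph concrete: the long nose and wide jaw stand out as large deviations, while the near-average eye spacing and mouth width barely register.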
D Ten years ago, we all imagined that as soon as surveillance cameras had been
equipped with the appropriate software, the face of a crime suspect would stand out in a
crowd. Like a thumbprint, its unique features and configuration would offer a biometric
key that could be immediately checked against any database of suspects. But now a
decade has passed, and face-recognition systems still perform miserably in real-world
conditions. Just recently, a couple who accidentally swapped passports at an airport in
England sailed through electronic gates that were supposed to match their faces to file
photos.
E All this leads to an interesting question. What if, to secure our airports and national
landmarks, we need to learn more about caricature? After all, it’s the skill of the
caricaturist—the uncanny ability to quickly distil faces down to their most salient
features—that our computers most desperately need to acquire. Clearly, better cameras
and faster computers simply aren’t going to be enough.
F At the University of Central Lancashire in England, Charlie Frowd, a senior lecturer in
psychology, has used insights from caricature to develop a better police-composite
generator. His system, called EvoFIT, produces animated caricatures, with each
successive frame showing facial features that are more exaggerated than the last.
Frowd’s research supports the idea that we all store memories as caricatures, but with
our own personal degree of amplification. So, as an animated composite depicts faces
at varying stages of caricature, viewers respond to the stage that is most recognizable
to them. In tests, Frowd’s technique has increased positive identifications from as low
as 3 percent to upwards of 30 percent.
H Pawan Sinha, director of MIT’s Sinha Laboratory for Vision Research, and one of the
nation’s most innovative computer-vision researchers, contends that these simple,
exaggerated drawings can be objectively and systematically studied and that such work
will lead to breakthroughs in our understanding of both human and machine-based
vision. His lab at MIT is preparing to computationally analyze hundreds of caricatures
this year, from dozens of different artists, with the hope of tapping their intuitive
knowledge of what is and isn’t crucial for recognition. He has named this endeavor the
Hirschfeld Project, after the famous New York Times caricaturist Al Hirschfeld.
J On a given face, four of 20 such Hirschfeld attributes, as Sinha plans to call them, will
be several standard deviations greater than the mean; on another face, a different
handful of attributes might exceed the norm. But in all cases, it’s the exaggerated areas
of the face that hold the key. As matters stand today, an automated system must
compare its target faces against the millions of continually altering faces it encounters.
But so far, the software doesn’t know what to look for amid this onslaught of variables.
Armed with the Hirschfeld attributes, Sinha hopes that computers can be trained to
focus on the features most salient for recognition, tuning out the others. ‘Then,’ Sinha
says, ‘the sky is the limit’.
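As a rough Python sketch of the kind of feature selection described here, the function below flags attributes lying far from the population mean; the two-standard-deviation threshold and the synthetic data are assumptions made only for the example, not details of Sinha's project.

import numpy as np

def salient_attributes(face, mean, std, threshold=2.0):
    """Return the indices of attributes lying more than `threshold`
    standard deviations from the population mean - the exaggerated
    areas a recognition system might concentrate on."""
    z_scores = (face - mean) / std
    return np.flatnonzero(np.abs(z_scores) > threshold)

# Toy data: 20 made-up attributes measured over many ordinary faces.
rng = np.random.default_rng(0)
population = rng.normal(size=(1000, 20))
mean, std = population.mean(axis=0), population.std(axis=0)

# A target face in which four attributes deviate strongly from the norm.
target = mean.copy()
target[[3, 7, 11, 16]] += 3.0 * std[[3, 7, 11, 16]]

print(salient_attributes(target, mean, std))   # -> [ 3  7 11 16]

A system trained this way would compare faces on the handful of flagged attributes and tune out the rest, rather than weighing every measurement equally.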
READING PASSAGE 3
You should spend about 20 minutes on Questions 27-40, which are based on Reading
Passage 3 below.
Mind readers
It may one day be possible to eavesdrop on another person’s inner voice. Duncan
Graham-Rowe explains
As you begin to read this article and your eyes follow the words across the page, you
may be aware of a voice in your head silently muttering along. The very same thing
happens when we write: a private, internal narrative shapes the words before we
commit them to text.
What if it were possible to tap into this inner voice? Thinking of words does, after all,
create characteristic electrical signals in our brains, and decoding them could make it
possible to piece together someone’s thoughts. Such an ability would have phenomenal
prospects, not least for people unable to communicate as a result of brain damage. But
it would also carry profoundly worrisome implications for the future of privacy.
The first scribbled records of electrical activity in the human brain were made in 1924 by
a German doctor called Hans Berger using his new invention – the
electroencephalogram (EEG). This uses electrodes placed on the skull to read the
output of the brain's billions of nerve cells or neurons. By the mid-1990s, the ability to
translate the brain's activity into readable signals had advanced so far that people could
move computer cursors using only the electrical fields created by their thoughts.
The electrical impulses such innovations tap into are produced in a part of the brain
called the motor cortex, which is responsible for muscle movement. To move a cursor
on a screen, you do not think 'move left' in natural language. Instead, you imagine a
specific motion like hitting a ball with a tennis racket. Training the machine to realise
which electrical signals correspond to your imagined movements, however, is time
consuming and difficult. And while this method works well for directing objects on a
screen, its drawbacks become apparent when you try using it to communicate. At best,
you can use the cursor to select letters displayed on an on-screen keyboard. Even a
practised mind would be lucky to write 15 words per minute with that approach.
Speaking, we can manage 150.
Matching the speed at which we can think and talk would require devices that could
instantly translate the electrical signals of someone's inner voice into sound produced
by a speech synthesiser. To do this, it is necessary to focus only on the signals coming
from the brain areas that govern speech. However, real mind reading requires some
way to intercept those signals before they hit the motor cortex.
The translation of thoughts to language in the brain is an incredibly complex and largely
mysterious process, but this much is known: before they end up in the motor cortex,
thoughts destined to become spoken words pass through two 'staging areas' associated
with the perception and expression of speech.
The first is called Wernicke's area, which deals with semantics - in this case, ideas
based in meaning, which can include images, smells or emotional memories. Damage
to Wernicke's area can result in the loss of semantic associations: words can't make
sense when they are decoupled from their meaning. Suffer a stroke in that region, for
example, and you will have trouble understanding not just what others are telling you,
but what you yourself are thinking.
The second is called Broca's area, agreed to be the brain's speech-processing centre.
Here, semantics are translated into phonetics and, ultimately, word components. From
here, the assembled sentences take a quick trip to the motor cortex, which activates the
muscles that will turn the desired words into speech. Injure Broca's area, and though
you might know what you want to say, you just can't send those impulses.
When you listen to your inner voice, two things are happening. You 'hear' yourself
producing language in Wernicke's area as you construct it in Broca's area. The key to
mind reading seems to lie in these two areas.
The work of Bradley Greger in 2010 broke new ground by marking the first-ever
excursion beyond the motor cortex into the brain's language centres. His team used
electrodes placed inside the skull to detect the electrical signatures of whole words,
such as 'yes', 'no', 'hot', 'cold', 'thirsty', 'hungry', etc. Promising as it is, this approach
requires a new signal to be learned for each new word. English contains a quarter of a
million distinct words. And though this was the first instance of monitoring Wernicke's
area, it still relied largely on the facial motor cortex.
Greger decided there might be another way. The building blocks of language are called
phonemes, and the English language has about 40 of them – the 'kuh' sound in 'school',
for example, or the 'sh' in 'shy'. Every English word contains some subset of these
components. Decode the brain signals that correspond to the phonemes, and you would
have a system to unlock any word at the moment someone thinks it.
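A toy Python sketch of that final lookup step is given below. It assumes some upstream decoder has already turned brain signals into a sequence of phoneme labels; the lexicon and phoneme spellings are invented for the example and do not come from Greger's work.

# Invented phoneme spellings and lexicon, for illustration only.
LEXICON = {
    ("s", "k", "uw", "l"): "school",
    ("sh", "ay"): "shy",
    ("y", "eh", "s"): "yes",
    ("n", "ow"): "no",
}

def phonemes_to_word(decoded):
    """Map a decoded phoneme sequence to a word; with roughly 40 phonemes,
    any English word can in principle be assembled this way."""
    return LEXICON.get(tuple(decoded), "<unknown>")

print(phonemes_to_word(["s", "k", "uw", "l"]))   # -> school
print(phonemes_to_word(["sh", "ay"]))            # -> shy

The appeal of the approach is the contrast in scale: a decoder needs to learn only about 40 phoneme signatures rather than a quarter of a million word-level ones.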
In 2011, Eric Leuthardt and his colleague Gerwin Schalk positioned electrodes over the
language regions of four fully conscious people and were able to detect the phonemes
'oo', 'ah', 'eh' and 'ee'. What they also discovered was that spoken phonemes activated
both the language areas and the motor cortex, while imagined speech – that inner voice –
boosted the activity of neurons in Wernicke's area. Leuthardt had effectively read his
subjects' minds. 'I would call it brain reading,' he says. To arrive at whole words,
Leuthardt's next step is to expand his library of sounds and to find out how the
production of phonemes translates across different languages.
For now, the research is primarily aimed at improving the lives of people with locked-in
syndrome, but the ability to explore the brain's language centres could revolutionise
other fields. The consequences of these findings could ripple out to more general
audiences who might like to use extreme hands-free mobile communication
technologies that can be manipulated by inner voice alone. For linguists, it could provide
previously unobtainable insight into the neural origins and structures of language.
Knowing what someone is thinking without needing words at all would be functionally
indistinguishable from telepathy.