General Certificate of Education Ordinary Level

2217 Geography November 2011


Principal Examiner Report for Teachers

GEOGRAPHY
Paper 2217/11
Paper 11

General comments

The examination was considered appropriate for the ability range of candidates and a high level of
differentiation was achieved throughout. Some excellent responses were seen to all questions (whichever
were opted for) and candidates were able to show their level of ability and understanding and gain A*/A
grades. The quality of responses seemed to be improving compared with previous years. The full range of
grades was seen across the paper.

The more structured questions, worth fewer marks, allowed all candidates to achieve positively. Likewise,
questions referring to source materials provided all candidates with positive opportunities to gain marks and
source material was generally well used.

Inevitably there were candidates who performed poorly in the examination; this may be due to a variety of
factors, e.g. poor preparation for this type of examination, a lack of understanding, or linguistic difficulties in
fully understanding questions set in another language.

Many candidates use geographical terminology appropriately and confidently and are able to recall case
studies in detail, particularly when they are case studies local to them or from within their own country.
Nevertheless there are still many candidates who fail to give place specific information in order to gain the
full Level 3 marks (having given some very detailed Level 2 responses). Weaker candidates tend to list their
responses in bullet point form and as a result do not gain more than Level 1.

Candidates would be advised to practise answering questions that require comparisons to be made, as they
tend to list ideas rather than making direct comparisons.

The following detailed comments for individual questions will focus upon candidates’ strengths and
weaknesses and are intended to help Centres better prepare their candidates for future examinations.

The following items of general advice, which have been provided previously in this report, need to be given
to future candidates who should:

● make the choice of questions with care, ensuring that for each question they choose they have a
named case study about which they can write in detail and with confidence.
● answer the three chosen questions in order, starting with the one with which they are the most
confident, and finishing with the one with which they are least confident (in case they run out of
time).
● read the entire question first before answering any part, in order to decide which section requires
which information to avoid repetition of answers.
● highlight the command words and possibly other key words so that answers are always relevant to
the question.
● use the mark allocations in brackets as a guide to the amount of detail or number of responses
required, not devoting too much time to those questions worth few marks, but ensuring that those
worth more marks are answered in sufficient detail.
● consider carefully their answers to the case studies and ensure that the focus of each response is
correct, rather than including all facts about the chosen topic or area, developing each point fully
rather than writing extensive lists of simple, basic points. It is better to fully develop three ideas
rather than write extensive lists consisting of numerous simple points.
● study the resources such as maps, graphs, diagrams and extracts carefully, using appropriate facts
and statistics derived from resources to back up an answer and interpreting them by making
appropriate comments, rather than just copying parts of them.

Comments on specific questions

Question 1

This question was by far the most popular choice made by candidates, as over two thirds elected to answer it.

(a) (i) The majority of candidates correctly answered within the allowed range of 16.8 to 17 for the total
percentage of the population of country A aged between 0 and 4 years. Some candidates just gave
either the male or female side (8.5%), which did not gain a mark.

(ii) The majority of candidates gained both marks for correctly identifying that country B has the
highest life expectancy and country A has the highest dependency ratio. A few candidates failed to
gain the second mark as they either left it blank or wrote the same answer twice i.e. country B.

(iii) A wide range of responses was seen here as many candidates referred to age groups and birth
and death rates rather than the shapes of the pyramids. Those who described the base, top and
middle of the pyramids gained the full three marks, as the question asked for three differences
between the shapes of the two pyramids. For example ‘Country A has a wider base; Country B is
wider in the middle; Country B is taller; Country B has a wider apex’.

(iv) The majority of candidates were able to gain at least two or three marks for answers referring to
‘a large percentage of 0-14 year olds; high birth rates; high death rates; low life expectancy’, which
were the most common responses used to explain how the population structure of country A is
typical of an LEDC. Some candidates who gained full marks also correctly recognised that there is
‘a high dependency ratio and there are decreasing numbers in the 15-64 age groups’. Some
candidates also included irrelevant details explaining why the population structure has these
features, for example ‘a lack of family planning’.

(b) (i) Some good comparative answers were seen here with many candidates able to gain one or two
marks at least for comparing the sizes and age structures of the population in Africa and Europe in
2000. Most commonly seen responses were ‘Africa’s population was slightly larger than Europe’s;
Europe had a larger percentage of over 65’s; Africa had a larger percentage of 0-14’s’. Some
references were made to changes by 2025, which were not relevant. For comparative questions
candidates should be encouraged to compare properly using words like ‘whereas’ rather than
writing two discrete accounts.

(ii) Many good responses were seen here and candidates were able to develop their answers and in
many cases gain the full five marks available. Responses included ideas such as ‘longer life
expectancy; better treatment for a named disease or diseases; improved healthcare facilities or
examples; improved food supplies; vaccinations; education or awareness of healthy living’ to name
just a few examples of why there is an expected increase in the percentage of population over the
age of 65 by 2025.

(c) Most candidates were able to gain marks, with many achieving Level 2 for developed responses,
but fewer candidates gained full Level 3 marks. Most candidates were able to name a
relevant country and most could describe the problems caused by an increase in the percentage of
people over the age of 65 in simple terms such as ‘increased percentage of elderly dependents;
higher taxes; need for money to be spent on care of the elderly’. Some candidates were able to
gain four or five marks for Level 2 by expanding their responses such as ‘increased elderly
dependents puts an increased strain on the working population; need for more money to be spent
on care homes or healthcare’.

Question 2

Question 2 was the third most popular choice made by candidates as approximately half of all candidates
selected this question.

(a) (i) The majority of candidates were able to correctly give a definition for urbanisation. Many different
ways of phrasing this were expressed with some being more clearly expressed than others. Most
gained the mark for ideas such as ‘a growth in urban areas; an increase in the number of people
living in urban areas or rural areas become built up’.

(ii) The majority of candidates were able to gain the full two marks here for correctly identifying a
country where 75% or more of the population lived in urban areas in the year 2000, such as
‘USA/Canada/Brazil/UK/France’ to name a few. The second mark was gained for candidates
correctly identifying a continent in which there were some countries with less than 45% of the
population living in urban areas in the year 2000, such as ‘Asia/South East Asia or Africa’.

(iii) Many candidates were able to gain at least one mark for describing the distribution of the world’s
fastest growing cities. Many gained a mark for referring to continents or LEDC’s, which was to a
maximum of one mark. Many others referred to the ‘south’ but then added ‘hemisphere’, which
was not allowed. Some reference was seen in relation to coastal locations and some mentioned
‘tropical’ (in a variety of ways).

(iv) Some excellent full mark responses were seen to this question with candidates being able to
describe the environmental problems caused by the expansion of towns and cities into the
surrounding rural areas. Some candidates referred to problems for people but most could refer to
at least one or two environmental problems. Most commonly seen responses were: ‘loss of
habitat; deforestation; atmospheric pollution; floods; impact on food chains or ecosystems’.

(b) (i) This question was generally well answered by almost all candidates. They were able to use Figure
4 to suggest three reasons why many people are moving to towns and cities in Botswana. The
most common responses were: ‘good transport links or accessibility due to roads or railways;
mines for work; access to water supplies’. Most candidates gained the full three marks.

(ii) Many good responses were seen as candidates were able to identify the problems caused for
people who live in such rapidly growing urban areas in LEDC’s with issues such as ‘unemployment;
not enough housing; squatter settlements grow; service provision being inadequate i.e. not enough
schools or health services and poverty’ being well understood.

(c) Many candidates used South American examples of towns and cities and quite often compared an
area of squatter settlement with a more affluent area. A few very good accounts were seen from
candidates although most did not reach Level 3, as they did not give developed comparisons of the
location and characteristics of the two contrasting areas. Many candidates gave a lot of detail on
one area but limited detail on the other. Those candidates who compared directly, for example ‘Area
A has houses which are self built but area B has houses which have been professionally built’ were
most likely to identify and develop clear comparisons. Those candidates who described areas
separately often did not compare them in detail. Some good inclusion of place specific detail was
seen but could not always be credited, as there was not enough detail elsewhere in the response
to have reached the top of Level 2.

Question 3

This question was the least popular choice made by candidates (sixth), with only approximately one quarter
of all candidates choosing it.

(a) (i) The majority of candidates were able to correctly identify the river feature being shown as a
‘Meander’. Thus, most candidates gained the mark.

(ii) Similarly the majority of candidates correctly identified the areas on Figure 5 where the processes
of erosion and deposition are taking place as X and Y respectively. Alternatively candidates may
have said ‘erosion on the outside of the bend and deposition on the inside of the bend’.

(iii) The vast majority of candidates gained two marks most commonly for referring to ‘high velocity
near X and low near Y’ in order to describe how the velocity varies across the river cross section.
Some candidates made references to ‘higher velocity in deeper water; lower velocity near the bed
or higher velocity near the surface’ thereby gaining the full three marks available.

(iv) Overall the vast majority of candidates struggled to explain why a flood plain and levees may
develop in the lower course of a river. One mark was most commonly scored mainly for reference
to flooding. There was a common misconception that deposition occurs because the river itself flows
slowly; in fact it is the stagnant or slowly moving flood water which has overflowed that causes the
deposition to occur, especially when it retreats, evaporates or is absorbed. Very little reference was
seen to coarser materials being
deposited to form levees and finer materials being deposited to form flood plains. A second mark
was often awarded for candidates explaining that the process repeats and levees build up over
time.

(b) (i) Most candidates were able to show some understanding of why the removal of forest to build the
airport road is likely to increase the risk of flooding in the area. Most common responses referred
to ‘lack of absorption or interception; soil will become compacted or saturated; more overland flow’.
Also some good references were made to the impact of replacement of soil with tarmac. It was
pleasing to see that hardly anyone copied out parts of the extract (Figure 6).

(ii) This question differentiated well and many good answers were seen to explain why in many
LEDC’s, large numbers of people live close to rivers which may flood. A wide range of reasons
was given, most commonly referring to ‘flat building land; use of river for communications/transport;
water for irrigation; use of river for fish for food supply’. Some responses concentrated on one idea
mostly ‘water supply’ without developing the answer beyond this. Some candidates included
tangential references to tourism or children playing in the river.

(c) The majority of candidates were able to name a river which they had studied and most of them
were able to gain some marks for explaining what had been done to reduce flooding. Candidates
most commonly gained marks for reference to building embankments or dams. Level 2 responses
were seen, for example ‘build higher banks so that the river will have greater capacity’; however,
many candidates did not give three developed ideas with place specific information in order to gain
Level 3 marks.

Question 4

This was the fourth most popular choice made by candidates. Just under half of all candidates
selected this question.

(a) (i) Only a few candidates could define drought. Many candidates merely referred to the ‘shortage of
water’ idea, and those who referred to ‘lack of rain’ did not include the ‘long period of time’ idea
and hence did not gain the mark.

(ii) Many varied responses were seen, which may be due to some candidates missing the word
‘location’ in the question. The majority of candidates gained the mark for the similarity between the
locations of the areas affected by drought and tropical storms with the ‘in tropical areas’ idea. The
second mark, for a difference between them, was usually gained for ‘droughts usually occur inland
whilst tropical storms usually occur near the coast’. Most candidates gained the similarity mark with
fewer gaining the difference mark.

(iii) This question was generally well answered and candidates were mostly able to describe three
hazards for people, which result from tropical storms. The majority of responses most commonly
included ideas such as ‘hurricane/strong winds; flooding; damage to roads and bridges; damage or
loss of housing; loss of electricity supplies’.

(iv) The majority of candidates gained one or two marks but answers tended to be vague. The majority
of candidates could not explain why earthquakes and volcanic eruptions occur in similar areas
beyond stating that they ‘occur at or near to plate boundaries, or that there is movement or a
description of the movement’. Only a few candidates identified that there is a ‘build up of pressure’.
Very few gained the full four marks available.

(b) (i) This question was generally well answered and many candidates included the use of statistics.
The majority of candidates were able to justify their answer as to whether bigger earthquakes
cause more deaths by using the data provided. Most commonly candidates said ‘yes they do as
the Indonesian earthquake in 2004 had a magnitude of 9.0 and killed 283 000 people’ or
candidates said ‘no or recognised that there is no obvious link or pattern to be made’ and justified
this by saying ‘in the USA a 9.2 earthquake only killed 125 people whereas 9500 deaths were
caused in India by an earthquake measuring only 6.2’. Some candidates included reasoning which
was a requirement of part (ii) rather than part (i). Nevertheless, the vast majority of candidates
gained the full three marks available here.

(ii) A wide range of answers were seen to this question suggesting reasons for the variation in the
number of deaths caused by the earthquakes listed in Figure 8. Many candidates showed a good
understanding of a range of factors most commonly including: ‘magnitude; population density;
quality of housing; how prepared people are; level of economic development’. Many candidates
gained full marks and developed their answers clearly and in detail.

(c) This question was generally well answered with many candidates gaining Level 2 marks and some
going on to gain Level 3. The majority of candidates were able to name a volcano, which had
erupted and describe one or more effects. There were very few who confused effects with causes
and some candidates named the country rather than the volcano. Some excellent responses
related to `local` South or Central American volcanoes. There were some well learned examples of
Mount St Helens, although not all were accurate, as some candidates referred to thousands of deaths
and others referred to people being killed by lava, which is not true of Mount St Helens. The
majority of marks were gained for ideas such as ‘people killed by lava or toxic fumes; houses being
destroyed by lava; roads and railways being inaccessible as they were covered in ash’. Place
specific detail often included dates of the eruption, number of deaths, names of settlements or
rivers.

Question 5

This question was the second most popular choice made by candidates with over half of all candidates
selecting to answer it.

(a) (i) Virtually all candidates were able to correctly identify the percentage of Bolivia’s water used for
agriculture as 80%.

(ii) This question was also well answered by the majority of candidates with them able to gain the full
two marks for correctly identifying that the largest percentage of water used for domestic use is
‘Denmark’ and for industry/electricity generation it is ‘France’.

(iii) As with previous comparative questions, the candidates who gave comparative ideas tended to
score full marks. Those who listed the rank order for each made it much more difficult for
themselves to do so. Ideas such as ‘a greater proportion used for agriculture in LEDC’s; a greater
proportion used for industry/electricity generation in MEDC’s; a greater proportion used for
domestic purposes in MEDC’s’ scored well, rather than naming individual countries and quoting the
amount that each uses for each sector.

(iv) This question differentiated well with many candidates able to gain at least one or two marks for
suggesting the reasons for the variation in the use of water between countries at different levels of
economic development. Most candidates gained marks for reference to the relative importance of
agriculture and industry in LEDC’s/MEDC’s respectively. Candidates gaining full marks were rarely
seen.

(b) (i) This question also differentiated well as many varied responses were seen. Candidates mostly
gained two marks with some gaining all three. Candidates were usually able to state one way in
which the economic activity shown in Photograph A may cause water pollution by referring to
‘disposal of effluent/sewage or waste into the sea’. Fewer candidates gained the mark for
Photograph B as they did not recognise that agriculture could cause water pollution and some said
that it could not cause water pollution. Some candidates gained the second mark for identifying
that Photograph C could cause water pollution from ‘oil spillages or fuel leaks’.

(ii) Varied responses were seen for this question with some excellent well developed references to the
impacts of water pollution on people and the environment for full marks. Some candidates did not
develop their ideas beyond ‘killing fish’ and unspecified diseases. The most commonly seen
developed responses referred to ‘water borne diseases; impacts on food chains or ecosystems;
fishing industry declines; build up of algae and eutrophication’.

(c) The majority of candidates were able to name a valid LEDC or an area like the Sahel to explain
how water shortages cause problems for the people living there. Many candidates were able to
gain Level 2 marks but few candidates included place specific detail in order to gain Level 3.
Answers most commonly referred to ideas such as: ‘crop destroyed; have no water to drink; people
killed’ for Level 1. More developed responses for Level 2 included: ‘people have to walk long
distances to find water; lower crop yield leads to lack of food supply; death due to starvation or
malnutrition’.

Question 6

This was the fifth most popular choice by candidates, with just over one third selecting this
question.

(a) (i) Virtually all candidates were able to recognise that the general relationship between the percentage
of people employed in agriculture and the GNP per person was a negative relationship and gained
the mark.

(ii) This was another well answered question with virtually all candidates naming ‘Tunisia’ as the
country that does not fit the general relationship which was stated in (a)(i) and most candidates
were able to give the reason as ‘its percentage employed in agriculture is relatively high for its
GNP’. Therefore almost all candidates gained the full two marks.

(iii) Many varied responses were seen to this question with most candidates gaining at least one mark
for explaining why a high percentage of the population of LEDC’s work in agriculture with the main
reason being given as ‘lack of machinery’. Some candidates gained a second mark for identifying
that many ‘are subsistence farmers’. Few candidates gained the full three marks available.

(iv) This question differentiated well with many varied answers showing in most cases some
understanding of why many LEDC’s suffer from food shortages, even though a large
percentage of their population are farmers. Most common responses referred to ideas such as:
‘drought; flooding; overcultivation; soil erosion and ideas related to lack of finances to buy fertilizers
or pesticides or machinery’. Many candidates gained two or three marks with fewer gaining full
marks or just one.

(b) (i) Many good answers were seen to this question with most of them gaining full marks. In order to
identify three differences in energy used between France and Kenya the most frequently used
ideas included: ‘more use of oil in France; more use of coal in France; Wood is used in Kenya but
not in France; Gas is used in France but not in Kenya’. A few candidates used statistics but did not
interpret them, e.g. ‘60% of wood used in Kenya; 15% of gas used in France’.

(ii) Most candidates referred to the availability of different fuel types in different countries to show why
the importance of different fuel types varies from country to country. Some candidates also
referred to the availability of finances to develop or exploit different fuel types. Relatively few
candidates gained full marks but those that did developed their ideas with reference to
environmental conditions, e.g. being able to develop solar or wind energy.

(c) Many excellent responses were seen describing the likely impacts of global warming on named
areas which candidates had studied, referring to people and the natural environment. Full marks were
awarded in most cases as accurate references were made to at least two named areas along with
excellent developed ideas. However, some candidates gained simple Level 1 marks as they only
referred to ice melting and sea level rising. A few candidates did not gain any marks, as they seemed
to have misunderstood the question and wrote about deforestation in Amazonia. Most
commonly seen Level 3 ideas included: ‘ice melts at the polar regions causing sea level to rise
leading to coastal flooding in lowland areas such as Bangladesh; loss of species like polar bears is
likely due to lack of ice’.


GEOGRAPHY
Paper 2217/12
Paper 12

General Comments

2011 was the first year in which candidates used a question and answer booklet to write their answers, and
this was the second paper of this style, the first being in May/June. This format was well received and in the
vast majority of cases candidates made effective use of the space provided. Some candidates required
space on the extra lined page which was provided for their use and only a few used extra sheets of lined
paper. As question and answer booklets will be the format used in future it is important that candidates are
made aware that they should:

● write all their answers in the spaces provided in the booklet and not use additional sheets unless
it is absolutely necessary.
● write only on the lines provided, not underneath the final line or elsewhere on the page (e.g. in
any area of unused space at the bottom of a page).
● continue any answers for which they do not have space on the lined page(s) at the back of the
booklet. If they do this they must indicate that they have done this (e.g. by writing `continued on
Page XX`) and carefully write the number of the question at the beginning of the extra part of
their answer.

The examination was considered appropriate for all abilities and a high level of differentiation was achieved
throughout. Most answers were well presented, used relevant material, and showed a good understanding
of command words and key words and phrases in the questions. Excellent responses to all questions were
seen and all candidates, including those who gained A*/A grades, were able to show their level of ability.
Structured questions referring to source materials provided candidates of all levels with opportunities to gain
marks, and allowed candidates to achieve positively. Generally skills questions were well answered, though
reading from a pie chart and identifying the ‘main features’ with reference to a photograph were exceptions.
Whilst there were some candidates who for a variety of reasons performed poorly in the examination (e.g.
lack of understanding or linguistic difficulties), these were few in number and most made a genuine attempt
at answering their chosen questions.

Able and well prepared candidates attempted to use geographical terminology, some with great confidence,
and were able to recall detailed and place specific case studies. However some candidates need to learn
their terminology more fully. In this examination life expectancy, amenities, built environment and
greenhouse effect are examples of terms which were not well known by significant numbers of candidates.
Candidates should avoid using vague terms that are not precise enough to gain marks and these should
always be qualified in some way. Examples seen in this session include health facilities, disease, services,
facilities, amenities, natural disaster, pollution, erosion, weathering, natural beauty, political problems and
quality of life.

The use of local case studies is often helpful as candidates find them more meaningful to themselves, and
therefore easier to learn. It is also worth noting that small scale, detailed case studies are often more
effective than ones covering a large area. Many candidates are able to give developed Level 2 responses
though they need to include at least three developed statements to meet the full requirements of Level 2.
Some candidates include sufficient developed ideas but to improve further they should try to also include
place specific detail in order to achieve full Level 3 marks. Those candidates who list their responses in
bullet point form or make simple, brief points only gain marks in the Level 1 range and in order to improve
their performance they should try to develop each point they make.

It is becoming increasingly apparent that candidates from some Centres are using case study answers,
which have been included in previous mark schemes. Whilst there is much merit in teachers making use of
past mark schemes and familiarising candidates with the style used, care needs to be taken that candidates
are not encouraged to learn and repeat phrases from within old mark schemes rather than showing real
knowledge and understanding of their chosen case study. It must be remembered that ideas listed in mark
schemes are simply a guide to Examiners and if the case studies are used with candidates the materials
should be taught, as with any case study, and the place specific details (and others) included in context of
the example chosen. On no account should candidates be given a list of place specific phrases and be
encouraged to learn them without developing any understanding of the case study.

The following items of general advice, most of which have been included previously in this report, need to be
given to future candidates who should:

● choose their three questions with care, ensuring that for each question they have a named case
study about which they can write in detail and with confidence.
● answer the three chosen questions in order, starting with the one in which they are the most
confident, and finishing with the one in which they are least confident (in case they run out of
time).
● be succinct and immediately focus on answering the question as limited space is available in the
answer booklet. Long introductory preambles are not required.
● read the entire question first before answering any part, in order to decide which section requires
which information to avoid repetition of material which is irrelevant.
● highlight the command words and other key words so that answers are always relevant to the
question.
● use the mark allocations in brackets, and space provided in the question and answer booklet as
a guide to the amount of detail or number of responses required, not devoting too much time to
those questions worth few marks, but ensuring that those worth more marks are answered in
sufficient detail.
● consider carefully their answers to the case studies and ensure that the focus of each response
is correct, rather than including all the facts known about the chosen topic or area, developing
each point fully rather than writing extensive lists of simple, basic points. It is better to fully
develop three ideas rather than write extensive lists consisting of numerous simple points.
● study the resources such as maps, photographs, graphs, diagrams and extracts carefully,
selecting and using appropriate facts and statistics to support answers and interpreting them by
making appropriate comments, rather than just copying parts of them. Remember to include
units when statistics are quoted and refer to years and/or figures when describing graphs.

The following specific comments on individual questions will focus upon candidates’ strengths and
weaknesses and are intended to help Centres better prepare their candidates for future examinations.

Comments on specific questions

Question 1

This was a very popular question with many candidates scoring high marks.

(a) (i) Virtually all candidates correctly identified Stage 1.

(ii) Most candidates scored one or two marks for appropriate references to birth and death rates.
Some candidates lost a mark by wrongly suggesting that birth rate was increasing and sometimes
the answer needed further development as it just stated that there was a difference between birth
and death rates.

(iii) Many candidates suggested acceptable reasons and the content was well understood. Most
answers concentrated on the availability of, the use of, or the knowledge about, contraception,
however a range of other valid reasons were seen.

(iv) Many candidates correctly identified a country and its stage. Some LEDCs were incorrectly
identified as being in Stage 1. Most candidates who correctly identified a country gained generic
marks about the levels of birth rate and death rate, but very few suggested figures to gain full
marks.

(b) (i) Many candidates described life expectancy of the two countries without making a direct
comparison. Where candidates answered the question by making comparative statements they
usually scored marks for a descriptive comparison or increase and decrease. The best answers
also compared figures, usually for 1955 and 2005.

(ii) The most common responses focused on improvements in medical care, water supply and
sanitation. Decreases in life expectancy were often explained by reference to conflict, famine and
drought.

(c) Most candidates chose China as their example, and scored relatively well on this case study.
Many wrote in detail about the one child policy though some did not develop enough of the ideas
which they expressed. Some candidates confused China with Japan, and many gave reasons for
the policy or described its effects, which led to lengthy and irrelevant sections in their answers, but
they were less certain of the ways in which it is implemented. Some gave rather extreme views (e.g. 2nd
children being killed by the government). There is some confusion about exactly what the current
One Child policy involves.

Question 2

This was of medium popularity. Those who chose it seemed to answer most parts well, though part (c)
caused problems for some candidates.

(a) (i) Most candidates scored this mark.

(ii) Many candidates failed to score because they did not understand ‘amenities’ or focused on
amenities in the area as a whole rather than in the houses. Those who scored marks usually
referred to water, electricity and sanitation.

(iii) Many candidates gave acceptable answers. A variety of locations were suggested.

(iv) Many candidates scored marks by suggesting ideas such as the expense of existing housing and
the inability of the migrants to the city to pay for these. Some candidates also included the idea of
migration into the city though relatively few explored the idea of the lack of sufficient housing stock,
as a result of lack of funding by the authorities or due to increasing rates of population growth
(natural and through migration) in LEDC cities.

(b) (i) Most candidates identified some problems, with air pollution, noise pollution and traffic congestion
the three most common answers. Weaker answers included overpopulation or overcrowding.

(ii) Most candidates chose traffic congestion or air pollution and were able to suggest specific
measures to earn credit. The better answers developed their ideas to explain how they might solve
the problem further. ‘Build more roads’ was a common answer which was too vague for credit.

(c) Those who answered in the spirit of the question and syllabus (i.e. a new building or even a current
one being changed) came up with worthwhile gains/losses, though descriptions were often weak
and rarely developed. Those who saw a change as the use of the Internet, a fall in the price of
goods, the use of credit cards, bar codes or some other minuscule difference could not gain many
marks. Some good answers used local examples of developments in Harare and some other
African cities. The poorest responses were for London and New York, where candidates just
described what was there now.

Question 3

A small number of candidates attempted this question. Those candidates choosing this question generally
knew about the topic(s) but some chose it inadvisedly and tended to include irrelevant material in their
answers.

(a) (i) Most candidates correctly identified the photograph taken closest to the source.

(ii) Most candidates accurately described the differences in the river characteristics.

(iii) Most candidates suggested either B or D, but the development of the explanation was often weak
and usually only related to the speed of flow and/or gradient.

(iv) Many candidates understood these terms well and were able to score full marks.

(b) (i) Many candidates did not understand what a cross section diagram was and some labelled features
on the map. Where they did comprehend the question many scored two marks by identifying the
river cliff and slip off slope but fewer candidates correctly showed differences in the depth of water.
In order to do this the cross section shown needed to be labelled with the points P and Q, which
some candidates did not include.

(ii) Whilst most candidates expressed some idea of the oxbow lake being created by the cutting of the
neck of the meander/the river flowing straight, many answers lacked accuracy in describing the
processes which resulted in that occurring. Higher scoring answers knew the sequence of
processes and scored full marks.

(c) Candidates were more confident when dealing with human aspects. Developed ideas often
referred to agriculture, settlements, fishing or other employment. Weaker answers focused only on
negative impacts. Many good answers used Bangladesh or the Nile or Ganges as a case study,
although most candidates got no further than good crops on alluvial soil, and damage by flooding.
There were a minority of excellent case studies which were more local to candidates, some
containing specific details referring to crocodiles, snakes and other water-dwelling creatures, as
well as the risks of malaria and water-borne disease.

Question 4

This was the least popular question. Whilst high quality responses were seen to all parts many other
candidates wrote weak or irrelevant answers.

(a) (i) Virtually all candidates identified chemical weathering as the correct answer.

(ii) Most candidates gave the correct figures; a small minority did not include the units.

(iii) Many candidates showed good understanding of the process and scored maximum marks for the
ideas of expansion, contraction and stress in the outer layers.

(iv) Whilst candidates could identify different types of weathering, they usually did not provide much
explanation of why warm, wet conditions encouraged these types of weathering.

(b) (i) Candidates usually identified appropriate features from the photograph and many scored full
marks. Better answers included detailed description of the rocks for which they often scored three
marks.

(ii) Many candidates failed to make the link between freeze-thaw weathering and Photograph E.
Perhaps they had not read the stem to (b) which referred to a mountain area in a temperate
climate. Then they focused their answer on exfoliation and scored zero. Many candidates who did
make the link to freeze-thaw weathering explained the process in detail and many included
diagrams in their explanation.

(c) Many candidates wrote about drought and scored within Level 2. Frequently their answers
focussed on the difficulty of obtaining food supplies and its effects. Other common ideas were the
effects on crops and livestock. Most of these answers gave an appropriate example but few
included place specific detail. Weaker answers focused on the causes of drought or more generic
desertification themes. Candidates who chose a tropical storm usually focused on the effects of
flooding.

Question 5

This question was popular with answers ranging from the very weak through to those of the highest quality.

(a) (i) Many candidates misinterpreted the pie charts and gave the answer 15%, which was incorrect.

(ii) Many candidates failed to gain any marks with vague or inaccurate responses, such as references
to scenery and the quality of services and vague phrases like `lots of things to do`.

(iii) Many candidates scored three marks. The main error was to ignore `natural` and suggest other
attractions.

(iv) Some candidates did not understand the term ‘built environment’. Where candidates did score well
it was either by listing different examples of built attractions or explaining their attraction in a
generic way.

(b) (i) The most common answers referred to difficulties in the road/ railway infrastructure, the lack of
hotels and the water supply system or sewage system. Weaker candidates did not use accurate
enough language to earn the credit and suggested difficulties such as poor transport and wet
climate.

(ii) Many candidates scored well. Some answers were limited to three or four marks because they did
not meet the required balance of benefits and disadvantages. The most common benefits
suggested were economic growth, the provision of jobs and infrastructure improvement.
Disadvantages were not done quite so well but the adverse effects on culture, noise pollution,
traffic congestion and the effects of domination by foreign firms were suggested in high scoring
answers.

(c) Too many candidates missed the emphasis of the effect of tourism on the natural environment and
gave detailed descriptions of the effect of tourism on an area or country or on people which had
already been covered in earlier questions. The most common developed impacts were the loss of
habitats which threatened species, and the effects of litter on wild animals. Better answers were
characterised by a focus on a specific area such as the Great Barrier Reef, whilst weaker answers
were less focused and covered a whole country such as Kenya.

Question 6

This was another popular question which differentiated well. There were many impressive answers,
including the case study, however some candidates continue to struggle with their understanding of global
warming.

(a) (i) Most candidates gave an appropriate explanation, if not always well expressed.

(ii) Many candidates failed to include reference to pollution of the air and therefore failed to score for
their suggestion about agriculture as their references to pesticides or herbicides did not state that it
was the spraying that would cause air pollution. Answers were usually more specific about fumes
or exhaust emissions from transport.

(iii) The links between air pollution and breathing difficulties and water pollution and disease such as
cholera were the most common valid answers. However some candidates did not link a type of
pollution with a problem faced by people (asthma from air pollution, lack of visibility from smog, ear
problems from noise etc.). The weakest candidates just listed types of pollution.

(b) (i) Generally candidates used the information from the article well. A few ignored the resource and
gave general concerns for no credit.

(ii) More able candidates correctly identified that carbon dioxide is a greenhouse gas and went on to
explain the need for its reduction. Many answers were a reverse of the required answer but were
credited for the ideas about the build up of gases. Many weaker candidates exhibited the usual
confusion with the hole in the ozone layer and even acid rain.

(iii) There were many excellent answers with high degrees of detail often accompanied with
explanations as to how carbon dioxide levels would be reduced. The use of alternative fuels,
afforestation and the importance of more public transport were the most common ideas. Some
candidates referred to catalytic convertors (which increase CO2) and scrubbers for power
station/factory emissions (which reduce sulphur dioxide), whilst a small minority described
strategies which are not yet, or perhaps never will be, in use, e.g. water powered cars and methods
of carbon sequestration. Some answers could not be credited because the strategies described
were too extreme.

(c) This was quite well done though causes and effects were generally combined in statements rather
than developed separately which would have achieved higher credit. Simple responses such as
litter/sewage/industrial waste were seen but so were many that gave full details of eutrophication
and of named factories that pollute rivers in a variety of ways. Good descriptions of agricultural
fertilizers affecting run-off into rivers and named diseases caused by water contamination were
seen. Popular examples included the Ganges and the Nile though many different examples were
used. Weaker candidates only suggested several simple statements about different causes or
effects, without including the development which would have achieved higher marks.


GEOGRAPHY
Paper 2217/13
Paper 13

General Comments:

The examination was considered appropriate for the ability range of candidates and a high level of
differentiation was achieved throughout. Many excellent responses to all questions (whichever were opted
for) were seen and candidates were able to show their level of ability and gain A*/A grades.

The more structured questions worth fewer marks allowed all candidates to achieve positively. Also,
questions referring to source materials provided all candidates with positive opportunities to gain marks.

Inevitably there were candidates who performed poorly in the examination; this may be due to a variety of
factors, e.g. poor preparation for this type of examination, a lack of understanding, or linguistic difficulties in
fully understanding questions set in another language. However, it has been noted that the
standard and quality of work seen from candidates is continuing to improve overall.

Many candidates use geographical terminology appropriately and confidently and are able to recall case
studies in detail, particularly when they are case studies local to them or from within their own country.
Nevertheless there are still many candidates who fail to give place specific information in order to gain the
full Level 3 marks (having given some very detailed Level 2 responses). Weaker candidates tend to list their
responses in bullet point form and as a result do not gain more than Level 1.

The following detailed comments for individual questions will focus upon candidates’ strengths and
weaknesses and are intended to help centres better prepare their candidates for future examinations.

The following items of general advice, which have been provided previously in this report, need to be given
to future candidates who should:

• make the choice of questions with care, ensuring that for each question they choose they have a
named case study about which they can write in detail and with confidence.
• answer the three chosen questions in order, starting with the one with which they are the most
confident, and finishing with the one with which they are least confident (in case they run out of
time).
• read the entire question first before answering any part, in order to decide which section requires
which information to avoid repetition of answers.
• highlight the command words and possibly other key words so that answers are always relevant to
the question.
• use the mark allocations in brackets as a guide to the amount of detail or number of responses
required, not devoting too much time to those questions worth few marks, but ensuring that those
worth more marks are answered in sufficient detail.
• consider carefully their answers to the case studies and ensure that the focus of each response is
correct, rather than including all facts about the chosen topic or area, developing each point fully
rather than writing extensive lists of simple, basic points. It is better to fully develop three ideas
rather than write extensive lists consisting of numerous simple points.
• study the resources such as maps, graphs, diagrams and extracts carefully, using appropriate facts
and statistics derived from resources to back up an answer and interpreting them by making
appropriate comments, rather than just copying parts of them.

Comments on specific questions:

Question 1

This question was by far the most popular choice by candidates with approximately three quarters choosing
this question. It was well answered in the majority of cases.

(a) (i) The majority of candidates correctly identified that the number of deaths in Denmark in 1970 was
71 000 and gained the mark.

(ii) A – Most candidates were able to correctly identify a year where there were more births than
deaths choosing from 1970 – 1980 and 1988 – 2005.
B – Likewise, most candidates were able to correctly identify a year where there were more
emigrants than immigrants, choosing from 1973 – 1976 and 1981/1982.

(iii) Many candidates made an impressive attempt at this question with accurate calculations. A few
candidates just gave the figures and calculations for natural increase or just the migration figure
and not both. Also a few candidates added figures together rather than subtracting them. The
majority of candidates were able to read off the figures accurately within the tolerance allowed and
thus gained the full three marks as follows: births minus deaths (64 000 to 65 000 minus 54 000 to
55 000) plus immigration minus emigration (52 000 to 53 000 minus 45 000 to 46 000), giving an
answer of 15 000 to 19 000.
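
For illustration only, and assuming hypothetical mid-range readings within the tolerances quoted above,
the expected calculation combines natural increase with net migration along these lines:

    total change = (births − deaths) + (immigrants − emigrants)
                 ≈ (64 500 − 54 500) + (52 500 − 45 500)
                 = 10 000 + 7 000
                 = 17 000 (within the accepted range of 15 000 to 19 000)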

(iv) Many excellent responses were seen here, with the majority of candidates scoring three or four
marks. They were able to identify and describe trends and/or refer accurately to the statistics. A
few candidates made references to birth and death rates, which did not score any marks. Most
candidates were able to show that migration fluctuates, identify an increase or decrease with an
appropriate year or range, correctly identify peak and/or trough years, and state the overall trend
that there was more immigration than emigration, with the exception of the period between 1974
and 1976.

(b)(i) There was a wider variety of responses here with some candidates easily gaining the full three
marks by naming examples such as Vietnam to Australia, Mexico to USA, China to USA, India to
UK. Some candidates gave examples of either MEDC’s to another MEDC or LEDC’s to another
LEDC. Also some candidates gave examples of types of migration such as forced, voluntary or
economic.

(ii) Many good answers were seen gaining marks for pull factors such as employment opportunities,
higher pay, better education or healthcare, better quality of life, good hygiene or sanitation to name
a few. Many candidates developed their answer and gave examples such as better healthcare with
more treatments for diseases available. Some candidates focused on push factors rather than pull
factors and thus lost marks; however, some of those candidates also gave opposite statements for
pull factors and so managed to gain the marks anyway.

(c) The majority of candidates scored marks up to Level 2 standard but fewer gained Level 3. There
were many valid generic ideas (which could be true of many case studies) but very few gave place
specific detail. Many candidates chose China as a case study which was unfortunate as the
responses were more pertinent to countries such as India than China. Also candidates included
reference to the one child policy which was not relevant. However, many candidates were able to
score four or five marks for ideas such as: ‘there are high birth rates so that children can help to
work on farms’ or ‘birth rates are high so that children can look after their parents in their old age’.

Question 2

This was the third most popular choice made by candidates and was selected by approximately half of all
candidates. It was generally well answered.

(a) (i) Most candidates scored the mark for correctly stating the buildings are scattered or spread out.
Some phraseology used was unclear.

(ii) The majority of candidates gained two marks for correctly identifying that B is nucleated and C is
linear.

(iii) The majority of candidates were able to gain at least two marks for correctly identifying that B has
grown around the crossroads and C has developed along the road. Many candidates also
identified that C is restricted by the river on one side and the steep slopes on the other. Some also
recognised that B is flat all around and thus scored the full three marks.

(iv) Some candidates focused all of their response on a single issue such as transport, which limited
the number of marks gained. However, many excellent full mark responses were seen.
Candidates referred to accessibility, the advantages of being close to a river for food or water
supply, proximity to a bridge, and flat land to allow development without restrictions to growth.

(b) (i) More varied responses were seen to this question. Many candidates were able to describe the
distribution referring to ideas such as settlements are usually on lower land, near to roads/tracks or
footpaths and avoid forest areas. Some candidates did not describe the distribution but rather
listed features from the map without referring to the distribution of settlements.

(ii) Some candidates repeated their responses from the previous question rather than giving reasons.
However, the majority of candidates gave excellent answers referring to ideas such as
accessibility, water supply, irrigation, flat land for farming or fertile land to name a few.

(c) A wide range of examples were seen from candidates. Many selected two large cities to compare
despite the question asking for two settlements of different population size. Responses which
compared a large city with a smaller town or village from the same country and which was known
or familiar to the candidate scored the highest marks. Many responses contained a lot of place
specific detail yet they could not access the Level 3 marks as there was insufficient detail at Level
2. Candidates who made direct comparisons between services scored the highest marks e.g.
Settlement A only has a primary school whereas Settlement B has primary and secondary schools
with a university or college. Some candidates went into more detail on one settlement and did not
compare it with the second settlement.

Question 3

This was a less popular choice with candidates and was the fourth most popular question selected overall.
Responses seen were varied as shown below.

(a) (i) Virtually all candidates correctly identified ‘C’ as the photograph showing a coastal landform formed
by erosion.

(ii) The vast majority of candidates were able to correctly match the landform with the corresponding
photograph as follows: beach is Photograph B, headland is Photograph C, marsh is Photograph D
and sand dunes are Photograph A. Therefore most candidates scored both marks.

(iii) This question differentiated well as some candidates were not able to score any marks here as
they referred to ideas such as ‘next to the sea’ or ‘lots of rain’ or ‘muddy’ all of which did not gain
any credit. Candidates who were able to gain marks mostly did so for responses most commonly
referring to the ‘sheltered’ idea and ‘slow flowing water’. The few candidates who scored full marks
also referred to ‘flat land’ and ‘large amounts of sediment’.

(iv) This question was generally well answered with the majority of candidates scoring at least three out
of the four marks available. Candidates gained marks for ideas like: ‘wind blows sand and deposits
it around an obstacle which over time builds up’. Only a few candidates added that Marram grass
then colonises the dunes. A few candidates referred to the waves building up the dunes or
described the process of longshore drift, which did not score any marks.

(b) (i) Most candidates were able to gain marks for ‘constructive waves having more swash and
destructive waves having more backwash’ and ‘constructive waves deposit material whereas
destructive waves erode’. However, this question also differentiated well as some candidates were
not able to describe the differences between constructive and destructive waves even though there
is a resource to help them.

(ii) This question also differentiated well as most candidates could explain at least one process with
some candidates explaining two or more. The most common responses referred to corrasion
having a sandpaper effect on the cliff and hydraulic action creating pressure as waves break against
the cliff, trapping air in the cracks in the rock. A few candidates referred to acid rain, which did
not gain any credit.

(c) Mixed responses were seen to this question thus differentiating well between candidates. The
hazards most commonly selected by candidates were coastal erosion and tropical storms with a
wide variety of locations given. The most popular examples were Hurricane Katrina or
Bangladesh, which usually gained high marks as they often contained place specific information
along with detailed Level 2 statements. For example, ‘people had to relocate as their homes were
flooded or damaged’ and ‘flooded roads made it difficult for people to escape or for help to get in’. Such
developed statements, along with numbers of deaths or injuries or dates, meant that candidates
could gain Level 3 marks.

Question 4

This was the least popular (6th) choice made by candidates. A variety of responses were seen and the
question overall differentiated well.

(a) (i) The majority of candidates gained the mark for correctly estimating the figure for the annual
average precipitation at Agades from within the range allowed: 51 – 199.

(ii) Many varied responses were seen to this question and many candidates correctly made reference
to the low latitude/near the equator idea, the sun being directly overhead or the lack of cloud. Some
candidates gave responses that were more applicable to the following question and therefore did
not gain any marks in some instances.

(iii) Candidates tended to answer this question better than the previous one and some excellent
responses were seen. The most common correct explanations as to why rainfall is low in areas such
as Agades referred to the distance from the sea, winds blowing over land and the dominance of high
pressure.

(b) (i) Most candidates were able to gain 2 or 3 marks in response to this question most commonly
stating that there is an uneven distribution, rain falls between April and September and August
being the wettest month. Some candidates were too specific, quoting figures for individual years
and months without interpreting them, for example ‘August in year 2 had 67 mm’.

(ii) This question differentiated well between candidates as a wide range of responses were seen.
Many responses showed a good understanding as to how the rainfall distribution may affect the
people living in and around Agades including ideas like: not enough water for domestic use; they
may suffer from dehydration; they will have to walk long distances to collect water; animals will die;
and less food supply, to name a few.

(iii) This question also differentiated well between candidates with many excellent responses being
seen showing good knowledge and understanding of desertification. Many candidates were able to
develop their ideas to show why desertification occurs for example: there is drought coupled with
deforestation as trees are needed for firewood, which leads to a loss of nutrients in the soil.
Overgrazing of animals and using land for agriculture in marginal areas further reduces soil fertility
and as the soil is exposed the sun bakes the soil hard. Some candidates did not understand why
desertification occurs and struggled to gain any marks as they simply referred to the weather being
dry or trees dying which did not address the question.

(c) The majority of candidates were able to name a desert area and to describe the
features of the natural vegetation, with long or wide-spreading roots, sparse vegetation and
spiky leaves being the most commonly referred to. Some candidates were also able to explain
how the vegetation can survive in the desert, for example: cacti have long roots to search for water
beneath the surface, spiky leaves reduce water loss etc. A few candidates were able to achieve
full Level 3 marks for also correctly naming examples of desert plants (not just cactus), along with the
relevant detailed Level 2 responses.

Question 5

This question was the second most popular choice made by candidates and it also differentiated well.

(a) (i) The vast majority of candidates correctly named a country where population growth is greater than
the increase in food production with the most common response being Madagascar.

(ii) This question was also well answered with the most common natural factors being identified for
causing food shortages as ‘drought, flooding, hurricanes or infertile soils’. A few candidates simply
stated natural hazards or disasters without specifying which.

(iii) Some very perceptive responses were seen here as to how economic and political factors can
cause food shortages with responses referring to ideas like: poverty and not being able to afford
food; war and corruption. The majority of candidates gained at least one or two marks.

(iv) This question differentiated well. Many candidates were able to describe the effects of food
shortages in LEDCs by referring to starvation or death and malnutrition, gaining at least one or two
marks. Some candidates were then able to develop their ideas by thinking of the further implications
of this and named deficiency diseases such as marasmus, or noted that people became too weak to
work or could not plant crops, thereby gaining the full four marks.

(b) (i) The majority of candidates recognised the downward trend in employment in agriculture for one
mark. Most candidates were also able to gain a second mark for use of data in their response for
example ‘halved between 1985 and 2010’. Fewer candidates were able to gain the third mark for
referring to trends between different years. Some candidates also took figures from the wrong side
of the graph.

(ii) This question differentiated well as many varied responses were seen. Some candidates were
able to develop their ideas and gain full marks. The most common responses to show why the
value of agricultural output per worker had increased referred to ideas like: increased
mechanization, with greater use of tractors, more use of fertilizers therefore adding nutrients to the
soil, more irrigation so that crops do not dry out which increases yields. Many candidates gained
fewer marks as they tended to focus upon the decreasing workforce without explaining that despite
this there were increases in output for the reasons outlined above.

(c) A small number of very good developed answers were seen to this question but the majority of
candidates were able to gain either top Level 1 or bottom Level 2 marks. If candidates relied solely
upon systems diagrams they were unlikely to gain marks beyond Level 1. Some candidates were
able to name a relevant farming area. Most candidates listed inputs, processes and outputs of the
farming system chosen without any development and therefore gained Level 1 marks for ideas
such as ‘soil, harvesting and crops’. Some candidates developed their ideas further by explaining
that ‘harvesting was done using a combine harvester’, but very few responses contained relevant place
specific detail to gain full Level 3 marks.

Question 6

This question was not very popular with candidates and was the fifth most popular choice.

(a) (i) The vast majority of candidates correctly identified ‘Nepal’ as the country that uses the largest
percentage of fuelwood to supply energy.

(ii) Most candidates gained the full two marks for providing two different uses of fuelwood in LEDCs,
most commonly for ‘cooking and heating’. Some candidates did not gain any marks as they wrote
about the use of ‘wood’ generally rather than the use of ‘fuelwood’.

(iii) A. This question differentiated well with some perceptive responses. Many candidates referred to
toxic fumes causing problems for the people who live in LEDCs, whilst more perceptive candidates
referred to ideas such as travelling long distances to collect fuelwood and the possibility of fire or
named diseases through breathing in toxic fumes. Some candidates also included reference to the
natural environment, which was not required here.

B. This section was also mostly well answered with the majority of candidates referring to the
impacts on the local natural environment such as ‘increased carbon dioxide, deforestation and soil
erosion’. Some candidates, as in part A above, wrongly referred to impacts on local people, which
was not required in this section.

(b) (i) Many good responses were seen to this question, with many candidates scoring the full three marks.
Most candidates either used the resource well or understood the idea of heat being trapped in the
atmosphere. Candidates gained marks by describing that ‘heat from the sun passes through the
atmosphere and bounces back from the surface and is then trapped by a layer of greenhouse
gases’. There were very few responses that made irrelevant references to ozone depletion or acid
rain.

(ii) The majority of candidates were able to score highly on this question. Most candidates were able
to give some developed ideas as to why people are concerned about global warming. Candidates
most commonly gained marks for explaining that ‘rising sea levels due to melting ice in the polar
regions is likely to cause flooding in lowland coastal areas’. Some candidates listed simple ideas
such as ‘ice caps melting, animals will die out such as polar bears, more hurricanes’ to name a few.
Most candidates were able to score at least three marks here.

(c) This question differentiated well as most candidates were able to gain some marks with some
excellent responses being seen. Many candidates were able to identify a country or area and
many candidates scored Level 1 marks for providing simple statements such as ‘building reservoirs
or dams’ to show how water supplies are being developed in that area. More developed responses
for Level 2 included ideas such as ‘building reservoirs in areas where there is high rainfall’ or
‘importing water supplies from neighbouring countries via underground pipelines’. Some
candidates also included place specific information such as names of dams or rivers, which gave
them access to the full Level 3 marks, provided they also included the relevant detailed Level 2 statements.


GEOGRAPHY
Paper 2217/21
Investigation and Skills

Section A

The number of candidates for this timezone variant of 2217 Paper 2 was too small for worthwhile comment. The
Principal Examiner comments for the IGCSE Paper 4 which makes up Section B of the 2217 Paper 2 follow
for reference.

Section B

General comments

Most candidates found this examination enabled them to demonstrate what they knew, understood and
could do. The overall range of marks went from 1 to 59 out of 60 - a wider range than in previous years - with
weaker candidates scoring on the practical questions, such as drawing and interpreting maps and graphs,
and those of higher ability scoring well on the more challenging sections requiring explanation and
judgement especially regarding hypotheses. Overall the two questions proved to be of equal difficulty.

There is less general advice to be given for areas for improvement with this paper compared with others. As
there are no choices to make, it is difficult to miss sections out, although some candidates omit graph
completion questions which are usually ‘easier’ to answer. Although there were no reports of time issues
some candidates do write too much in some sub-sections. They should be encouraged to answer more
succinctly and perhaps give more thought to their answers. Most points for teachers to bear in mind, when
preparing candidates for future Paper 41 questions, relate to misunderstanding or ignoring command words
and the use of equipment in fieldwork. Particular questions where candidates did not score well also often
related to them not fully reading the question, for example Question 1(a) (ii) where candidates described
sphere of influence rather than explaining how it would vary. Questions which require candidates to develop
their own hypothesis or investigation methodology are common on this paper. This is an area which centres
could practise with candidates.

Centres need to realise that, although this is an Alternative to Coursework examination, candidates will still
be expected to show that they know how fieldwork equipment is used and appropriate fieldwork techniques
even if they have only limited opportunity for fieldwork within the centre. For example, Questions 2(c)(i), (ii)
and (iii) required candidates to describe weather recording instruments and methods.

Comments on specific questions

Question 1

(a) (i) Most candidates selected the correct definition. Giving candidates a choice of answer appeared to
help them to focus on the correct meaning.

(ii) Few candidates scored full marks. The most common suggestion for variation was related to the
number of services provided in different sized settlements. Better answers also referred to
specialised services or the different order of services being provided. A common mistake was that
candidates described how the sphere of influence would vary rather than explaining why it varied.

(b) (i) This question was answered well by most candidates. They made sensible criticisms of the three
questions or suggested what questions could have been asked to improve the questionnaire.

(ii) This question was poorly answered. Most candidates did not appreciate that 10% is an appropriate
sample size when it involves asking 125 candidates. Many candidates suggested that all
candidates should be interviewed which is not appropriate for sampling. Some candidates who
responded ‘yes’ then referred to the questionnaire itself being short or quick to answer, rather than
relating their answer to sample size.

(iii) Candidates generally showed good understanding of sampling methodology. Many referred to
choosing candidates from each year or age group or School grade. They also recognised the
importance of an appropriate gender balance. Good ideas were also expressed about how this
sampling could be done by incorporating different sampling techniques within the classes chosen.
Weaker answers typically just named sampling techniques with no application to the task.

(c) (i) Most candidates correctly completed both parts of the task.

(ii) Almost all candidates correctly completed the pictogram.

(iii) Again almost all candidates completed the choropleth map accurately.

(iv) The quality of answers varied greatly. Whilst weaker answers recognised that a pictogram is easy
to read and compare areas, the best answers referred to being able to distinguish patterns, make
comparisons between districts, and group districts within categories.

(v) The question differentiated well. Many excellent responses recognised that the hypothesis was not
correct and supported this conclusion with appropriate data from the maps. In contrast and despite
obvious evidence some candidates tried to argue that the hypothesis was fully correct. Some
candidates who reached the correct conclusion did not score maximum marks because they used
statistics without interpreting them, for example, that some areas which were located near to the
School had few candidates attending. Candidates need to be aware that they cannot gain
maximum marks merely by quoting statistics.

(d) (i) Almost all candidates calculated the correct percentage figure.

(ii) Most candidates completed the pie graph accurately and shaded the sections correctly.

(iii) The question achieved good differentiation. Most candidates recognised that bus was the most
popular method of travel and gave evidence to support this fact. Some candidates only referred to
one mode of transport and so failed to score maximum marks. The best answers referred to car,
bus and train travel.

(iv) Most candidates gained credit by referring to further investigation into distance and / or time.
However, many different suggestions were credited, including investigation into correlation
between age and method of travel, and whether candidates only used one method of travel to
School.

Question 2

(a) Whilst almost all candidates correctly named a thermometer to measure temperature far fewer
were able to name a hygrometer or wet and dry bulb thermometer to measure relative humidity.

(b) Many candidates recognised the need to record measurements before and after the investigation in
order to show changes or comparison as a means of testing the hypothesis. However, a common
error was to focus on the idea of taking several measurements in order to calculate an average
figure which would be more accurate or reliable. This was not the purpose of the additional
readings.

(c) (i) Whilst almost all candidates attempted to draw a diagram, the quality was very variable. The more
accurate answers were often contained within the text. Candidates should be able to draw simple
diagrams of weather instruments as well as explain their use. A few excellent candidates drew
detailed annotated diagrams which scored maximum marks on the diagram alone.

(ii) Candidates generally scored better on this question than the previous one. They made use of the
diagrams provided to stimulate their explanation. There were many detailed answers explaining
how the instruments worked and how they were used to indicate wind speed and direction.
Weaker answers were unclear on how wind speed is measured and how a wind vane shows the
direction of wind (i.e. it points to where the wind is coming from, not where the wind is going to).
Answers included basic errors such as reference to the whole anemometer spinning, the wind vane
spinning and the compass points turning rather than the wind vane. Such errors may have been
the result of candidates not having used these instruments in fieldwork.

(iii) Some candidates were confused over whether the index pointer reads off the pressure or is used
to indicate change in pressure. Whilst most candidates did correctly refer to change of pressure
few explained that the index pointer was set to a previous reading. Most candidates thought that it
showed the average or ‘normal’ pressure.

(iv) Surprisingly most candidates did not know the correct unit of measurement of cloud cover. There
were many incorrect answers including metres, kilometres squared and octaves.

(d) (i) Most candidates correctly read the pressure reading on the barometer. However, a common error was
to read the measurement shown by the index pointer.

(ii) Nearly all candidates accurately plotted the rainfall bar.

(iii) Again nearly all candidates completed the line graph accurately.

(iv) The wind rose was also completed accurately by most candidates. Where candidates lost a mark it
was usually because the bar to indicate wind speed was inaccurate.

(v) This question proved to be more difficult than others in this section. Whilst many candidates were
able to draw an appropriate diagram some candidates shaded it too darkly to gain credit. Weaker
answers confused cumulus cloud with cumulo-nimbus cloud.

(vi) Nearly all candidates correctly completed the cloud cover diagram.

(e) (i) Many candidates recognised the inverse relationship and described it well. They also supported
their decision by using two sets of contrasting atmospheric pressure and rainfall data. A small
minority of candidates lost a mark by not giving the units of measurement.

(ii) Candidates also answered this section well. Candidates described the relationships between
atmospheric pressure and wind characteristics and usually supported them with appropriate data.
More answers focused on wind speed than wind direction but generally candidates showed good
understanding of the hypothesis.


GEOGRAPHY
Paper 2217/22
Investigation and Skills

Key Messages

● Practical skills questions need to be completed precisely.
● Given data should be interpreted to show understanding.
● In Section B, careful analysis should be backed up with evidence.

General comments

This paper was comparable to previous sessions, with candidates responding well to some of the longer
written sections such as Question 3(b), Question 5(c)(ii), Question 6(b), Question 8(b)(vi) and Question
8(c)(iii). Question 1(e)(iii) and Question 3(a) proved to be more difficult and there is still evidence of poor
understanding of the term “relief”.

Candidates could further improve descriptive answers with careful locational comments, as illustrated by
Questions 1(e)(ii) and (iii), Question 3 and Question 4(a). They should also take care to include
appropriate units on numerical answers.

In Section B, Question 8 was more popular than Question 7 by a ratio of about 5:1. The sub-sections here
contained fewer omissions than on previous occasions. Candidates need further practice on considering
how to extend an investigation, which is often the last part of these questions.

Comments on specific questions

Section A

Question 1

(a) The paper began with a couple of grid references. The confluence of the Ngezi and Runde rivers
was found in 3417, while the reservoirs at Ingezi Station were found at 288228 (or 288227) and
291226 (or 290225), though only one of these was needed. Candidates were not always sure
which square to choose for the confluence, perhaps due to the braiding of the river along this
section. Some gave a six-figure reference for the confluence, which is not really appropriate for
something that takes up such a large area.

(b) The trigonometrical station was at a height of 1329.6 metres. A number of candidates were not
sure whether to write this or 150/P (the reference of the station). However, examination of the
surrounding contours should have pointed them to the correct answer. It was also necessary to
indicate “metres”. Descending from the trigonometrical station, the steepest slope is to the north,
shown by the closeness of the contour lines in this direction.

(c) Examination of the map, in conjunction with Fig. 1, should have enabled candidates to identify A as
a weir, B as a dam, and C as staff quarters. It is important to locate the area on the map extract
and not try to identify the features solely from Fig. 1, since this is designed to show the candidates
where to look on the extract, rather than be an exact copy of the map. Thus most candidates were
correct for B and C but many had guessed bridge for A, when weir is clearly labelled next to the
feature on the map extract. Similarly D is labelled causeway on the map, though other expressions
such as “ford” or “through the river” were acceptable.

(d) To complete the cross-section in Fig. 2, candidates had to position labels at the following locations
measured from the western edge of the section at 300200: the railway at 22 - 24 mm, Ngezi river at
63 - 68 mm and the western slope of Gwembudzi from 102 mm to the top of the peak. The easiest
of these was the western slope, which could be indicated without the need to make any
measurements. Many candidates had a correct response, while some had labelled at 800 m rather
than above 800 m. The river was also fairly easy, being relatively wide. The dip of the valley in the
section line also provided a clue, though some candidates had labelled in the Runde valley instead.
For the railway it was necessary to make a measurement in order to achieve the required
accuracy. Measuring from 300200 to the railway, along the section line on the map and then
transferring this measurement to Fig. 2, which has the same scale, is a good way to do this. Some
candidates had tried to use the height scale and had drawn horizontal lines to intersect the section
line. However, this tends to be inaccurate on gentle slopes, as was the case at the railway.

(e) Square 3719 contains the hut at an altitude above 800 m. In this case a six-figure grid reference
was an acceptable alternative. Candidates were then asked to describe the distribution of huts in
Fig. 3. Consequently many pointed out that the huts were mostly below 800 m, but other
appropriate comments included “edge of cultivation”, “on track / cut line / game trail”, “in sparse
bush” and “near streams / small rivers”, though it needed to be clear that these were the tributaries
rather than the main Runde river. Many candidates wanted to use the terms “nucleated”, “linear”
and “dispersed” but it was important to use them in relation to the features of the area e.g. “linear
along the track”.

The last part of this question involved description of the relief and drainage of the same area.
There was plenty of scope here, with more than five possible points for each of relief and drainage,
though candidates had to have at least one from each. Common responses included mention of
river tributaries forming a dendritic pattern and the presence of high land with steeper slopes in the
north. Some mentioned the highland without locating it properly - referring to the top of Fig. 3,
rather than the north. Some spotted the rapids, though others had assumed the R was for
reservoir.

Question 2

(a) This was done well. The major urban areas in Australia are largely on the coast and mostly in the
east or south-east of the country. Many candidates noted this and then mentioned Perth in the
SW, or the absence of major urban areas in Northern Territory or Tasmania, for the third mark.

(b) Here a simple calculation was required: 340 000 people divided by 2000 square kilometres giving a
population density of 170 people per square kilometre. The most likely error was a
misplaced decimal point, but most candidates had a correct answer.
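
For reference, the expected working described above is simply:

\[ \text{population density} = \frac{340\,000\ \text{people}}{2\,000\ \text{km}^2} = 170\ \text{people per km}^2 \]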

(c) Reading from the graph in Fig. 5, the population density of Queensland is 2.5 people per square
kilometre. Candidates were expected to have a minimum of “per square kilometre” as units
defining their answer. Some candidates had rounded the figure, presumably assuming that the
answer could not be a fraction of a person. Most managed a correct plot for part (ii) and had the
answer “4” for part (iii).

(d) To answer here, candidates had to look at the data in Fig. 5 and then shade on Fig. 4. Most
correctly identified Victoria, but some shaded over the graph on Fig. 5 rather than the map. A few
opted for Northern Territory: the least densely populated state.

Question 3

Photograph A was divided into three areas, X, Y and Z, each of which had different characteristics of relief
and vegetation, giving plenty of scope for candidates to comment.

(a) Due to the varied landscape within the one photograph, it was not enough to make general
statements, such as “it is flat”, without giving them some placement, such as “it is flat in the
distance / Area Z”. Other valid points included mention of the higher foreground, the steep forested
slope, or the V-shaped valley. Those who did not use any placement often finished up with
contradictory statements. Some based placement on assumptions about the direction of the camera in
relation to the compass. Some candidates did use a systematic approach and made comment on
each of the three areas. Some were unclear as to the meaning of “relief” and included vegetation
within this section.

(b) Those who included vegetation in part (a) were then repeating the information in part (b), though
usually in more detail resulting in some good answers. Area X consisted of grass and low growing
plants, while Z consisted of fields of grass or crops with scattered trees. Area Y was more
complicated, with greater scope for scoring marks, though some elaboration was necessary
beyond the basic “grass and trees”. With forest on the steeper slope and trees along the valley
bottom and the field boundaries, there were potentially three marks for trees alone. Grass needed
to be located on the lower or flatter land. Candidates often scored for forest in Area Y, but the
other points were rarely seen.

Question 4

(a) This section required a different approach from that in Question 2(a). Apart from location along
the plate boundaries, it was necessary to relate the fold mountains to the continents, such as “they
are to the west of South America”, rather than simply “they are in the west”, referring to Fig. 6 as a
whole. Most candidates made the plate boundary point but they did not always go further. Some
tried to describe in relation to the lettered zones A - D.

(b) Fold mountains form where the plates are coming towards each other, i.e. converging, which is a
destructive boundary. Most candidates described this in some way. A few had put constructive.

(c) Most candidates successfully completed Table 1, with ticks against B and D. A few had an
additional tick, usually at C. Table 1 then provided the information for Table 2, where “all
volcanoes are in earthquake zones” and “all fold mountains are in earthquake zones” were the
correct answers. Candidates appeared to find this quite difficult and were perhaps trying to use
knowledge rather than Table 1.

Question 5

(a) Almost all candidates placed the labels at suitable positions on Fig. 7.

(b) Most candidates had drawn graph axes and placed labels on them to indicate rainfall and time. For
the third mark, the type of graph needed to be shown. Some candidates had produced a bar
graph, while a number had incorrectly chosen a line graph.

(c) S3 was the best site for the rain gauge and many candidates had selected this. They then went
on, in part (ii), to suggest inaccuracies due to obstruction / sheltering, dripping and splashing at the
other sites. There were some good answers in this section.

Question 6

(a) There were two coal-fired power stations shown in Scotland. Many candidates had a correct
answer. Some had put 18, as they had not seen the national boundary and assumed that the
whole area was Scotland. The station furthest south was Kingsnorth.

(b) The small coal-fired power stations are mostly coastal. Most candidates noted this though some
then got distracted into giving reasons. Others then named the two inland stations and then either
used more detailed locations such as “one near the capital city” or gave the amount in each
country. Either approach was valid and many candidates scored three marks.

(c) Ratcliffe-on-Soar produces 2000 megawatts. In this case, candidates usually remembered to state
the units of their answer. Many successfully completed Fig. 9. The most common error here was
to show 2600 megawatts, due to incorrect interpretation of the scale. Finally, in Section A, almost
all candidates recognised Drax as having the largest capacity.

Section B

General comments

Most candidates found this examination enabled them to demonstrate what they knew, understood and
could do. Weaker candidates scored well on the practical questions, such as drawing graphs and completing
diagrams, while those of higher ability scored well on the more challenging sections requiring explanation,
comparison and judgement, especially regarding hypotheses. Fewer candidates scored high marks (over 50
out of 60) than in previous sessions, largely due to difficulties with Questions 1(d) and 2(e).

There is less general advice to be given for areas for improvement with this paper compared with others. As there
are no choices to make, it is difficult to miss sections out – though a small minority of candidates do. There
were no reports of time issues as the booklet format does not allow or encourage over-writing of sub-
sections. Most points for teachers to consider, when preparing candidates for future Paper 42 questions,
relate to misunderstanding or ignoring command words, the use of equipment in fieldwork and formulating
practical hypotheses that could be realistically tested in the field. Particular questions where candidates do
not score well also often relate to them not fully reading the question or taking time to thoroughly understand
the resources referred to. Such failings mean that some candidates do not obtain a mark in line with their
geographical ability.

Centres need to be aware that, although this is an Alternative to Coursework examination, candidates will
still be expected to show that they know how fieldwork equipment is used even if they have only limited
opportunities within the Centre to use it. Question 1 required candidates to have experience of systematic
and random sampling techniques, a bi-polar environmental scoring system, and some ideas about how
natural features could be investigated by fieldwork on two coastal areas. Question 2 required candidates to
have experience drawing scatter graphs, carrying out village surveys, completing tables for services and
have some ideas for investigating changes in a village in an MEDC.

A few tips to pass on to candidates:

● When answering Hypotheses questions that ask whether you agree or not, always give your opinion
first before any supporting evidence. This will usually be Yes, No or partially/to some extent.
● When giving figures in an answer always give the Units if they are not stated for you.
● Read questions carefully and identify the command word e.g. Describe, Explain...
● When asked to compare, make judgements e.g. higher, lower, rather than just list comparative
statistics.
● Check you are using the resources that a question refers you to (e.g. Table 3, Fig. 2).
● Take into account the marks awarded. Examiners do not expect you to be writing outside of the
lines provided so do not write a paragraph when only two lines are given – this wastes time.
● If you have to write more than the lines allowed indicate this with a phrase such as “continued on
page 14”. This is very helpful to the Examiner in finding your additional answers.

Comments on specific questions

Question 1

(a) (i) There are still too many vague answers regarding systematic sampling. Examiners are looking for
clear understanding that this involves sampling at regular or equal intervals e.g. every 10th house.
Stating that it is in order, organised or step-by-step is not enough to distinguish it from other
sampling techniques e.g. using random numbers produces an order and is organised. Sampling
techniques as stated in the syllabus are a key area for centres to work on when preparing for this
examination.

(ii) Random sampling is a recognised alternative technique and it needs to be taught as thoroughly as
systematic sampling. Too many candidates do not appear to know the formal technique. They
regard random sampling as, for example, throwing quadrats around, or being able to choose your
friends/interviewees. While these loosely meet a definition of random, the technique taught should
be more sophisticated e.g. the use of random number tables. Then candidates could clearly
compare the advantages and disadvantages of both. One advantage of systematic is that it is
quicker to choose sites than working them out by random numbers; having an equal interval also
avoids the temptation to bias, though random numbers do this too. Most candidates realised
systematic sampling was quicker and, on this occasion, credit was also awarded for it being a
fair/unbiased system, given that many candidates were unclear about what random sampling should
involve. Future marking of random sampling techniques may be less generous.

(b) (i) This was done well. Most candidates stated that it might be hard to distinguish the different sizes
of material and that it would be a subjective judgement between different people. Common
answers also alluded to the problem of mixed materials and other beach material being present.
For credit in the latter case, candidates should have suggested an example of other beach material
e.g. shells, wood.

(ii) Many candidates gained 2 marks for correct plotting at 18 degrees; a number were also credited
both marks for doing it correctly but the wrong way round. A significant minority however did not
attempt the pie chart.

(iii) The question asked candidates to describe how proportions differed. Some candidates just stated
the percentages at each beach with no comparative statement given. Those that scored well made
qualitative judgements as required e.g. more sand at Beach X than Y. A number used the Sites
1/2 and 3/4 despite the question clearly asking for a comparison between the beaches. This is an
example where the command word Describe... means write in words, not just state figures from the
data provided. Some candidates compared sites on the same beach rather than between
beaches.

(iv) Although the majority chose Beach Y correctly, too many made erroneous judgements and thereby
could not gain any of the 4 marks available. The data provided, if carefully studied, could have no
other outcome. Many candidates also wrote about Beach X even if they correctly chose Beach Y.
Those that chose Beach Y could recognise the significance of Sites 3 and 4, their relative positions
at the low water mark and the cliff, and could describe how the size of pebbles changed up the
beach while also quoting some back-up data e.g. shingle increased from 20-50%.

(c) (i) This was quite well done. Some candidates described how they would carry out the survey rather
than go through the decisions needed before carrying it out but the majority did suggest various
things that needed decisions e.g. when to carry it out, how to decide what would count as litter,
how to make consistent judgements on the scoring system and where the sites should be as well
as how the candidates might group themselves.

(ii) The majority of candidates could plot and shade -2 for plastic at Site C and +1 for plastic at Site D.
A number did not fully shade the +2 bar – they just shaded the +1 to +2 section and a few put the
bar on the wrong row or on the wrong side of 0.

(iii) The most common similarities were that Sites A and B both reached +1 or that there were no negative
scores at Sites A and B or that they scored the same for no wood or plastic. Differences proved
more challenging and here some candidates were confused by the bi-polar system. They seemed
to judge that more shading to the right, such as +2, meant more litter than a +1 score. Not only
was this incorrect but it would also have influenced later judgements. It was not true
that there was more litter in B than A because B had a higher score; rather the reverse. If
candidates had looked more closely at Fig. 5 before carrying on, it was clear that the key
descriptors (e.g. no glass, lots of glass etc.) meant that a high positive score meant no litter and a
high negative score meant lots of litter.

(iv) Most candidates agreed that there was a variation in environmental impact and gained 1 mark for
this. However several did not gain the second mark because they interpreted the bi-polar scores in
reverse i.e. that Beach X was more littered than Beach Y when careful observation of the scoring
system revealed that Beach X had less litter than Beach Y.

(v) Those candidates that had concluded that Beach X was cleaner than Beach Y often went on to
gain marks for suggesting that the hotels or council employed staff to keep the beach clean or that
there was an anti-litter policy or there were litter bins. They also linked the caravan site to the litter
at Beach Y and suggested that, as it was far from the urban area, there was less concern over litter
here. Candidates who misread the bi-polar system gave reasons why they thought Beach X was
more polluted, mostly linked to the number of tourists and residents there.

(d) (i) The emphasis in this question was to get away from the human effects explored in the previous
questions and get candidates to think of an investigation to compare the natural features of the two
areas of coast. The phrase “natural features” appeared to be misunderstood by many candidates
who frequently suggested aspects of tourism and urbanisation to investigate. Hypotheses that did
not focus on natural features of coasts were not credited, for example those about wind speed or air quality.
Those that were credited included comparing beach profiles, the effects of longshore drift at both beaches
and differences in vegetation. Too many were impossible, unrealistic or irrelevant to the idea of
natural features on the two coasts.

(ii) Candidates who had suggested a hypothesis in (i) that was unacceptable were still allowed a
maximum of 2 marks out of 4 if they suggested sensible ways of going about their investigation.
Quite a number however gave vague statements that showed little awareness of fieldwork
techniques e.g. count the tourists, look at the beach, carry out a questionnaire, check whether
people are staying at hotels, and measure the length of the beach. There was little explanation as
to how they would carry out these activities. There is scope for centres to teach how to carry out
small-scale investigations of various topics based on those to be taught in the syllabus e.g. river
studies, coastal studies, urban studies. Incorporating these into a scheme of work when the topics
are taught would be a sensible move. Past examination papers provide an excellent vehicle for
covering the majority of fieldwork-based questions that have appeared in recent years.

Question 2

(a) (i) It was expected that candidates would be able to suggest two different types of secondary data that
could produce the village populations and many did suggest using a past census, village archives,
and the Internet or various records. However, a large proportion of candidates focused on primary methods and
made unrealistic suggestions such as knocking on doors and asking how many people lived there,
counting people in the street or asking the village headman (these are MEDC villages as stated in
the rubric). Counting the houses and multiplying by an average number of people was accepted as
an estimate but some just suggested counting the houses.

(ii) Opinions were divided equally between which method was the best. Candidates who chose A
recognised that it would ensure all services were listed and none would be missed out; many
suggested that using B would not be as comprehensive. Those choosing B suggested that it would be
less time-consuming as all that was needed was ticks on a sheet for each type rather than a list of
every service. They also recognised that a pre-prepared list would show what was not present and
would make for easy comparison of types. Any of these arguments gained credit. A few
candidates did not choose A or B but wrote Systematic, Random, Observation or other generic
methods that were not listed.

(iii) The best candidates realised that the focus of this question was about each pair of candidates
working in different villages; not about any problems candidates might meet in general by carrying
out a survey. Disadvantages such as each pair using different methods, lack of opportunity to
check with others that the correct procedure was being carried out, the lack of time to do a full
survey were all suggested. General issues such as people not being willing to answer, being busy,
privacy issues were not credited as these are not specifically a disadvantage for separate pairs of
candidates.

(b) (i) Almost all correctly placed the three ticks for the mark. Most also put crosses to match the other
rows; this was good practice though those that left blanks were not penalised.

(ii) Apart from the few candidates who did not attempt this, the rest all correctly totalled the ticks to 6.

(iii) It was very surprising to see that only a small number of candidates recognised the railway station
as the highest order service shown. Almost 90% of candidates put the post box which was the only
service present in each village – the most popular – but was not the highest order service. Other
answers gaining no credit included Ince (the village with most services), 9 and the general
store/surgery. Centres need to make clearer to candidates what high and low order services are
and be sure candidates can identify examples of each. This was a relatively easy mark lost by
many.

(iv) Various labels were added to the horizontal axis but most used the word population e.g. number of
population, population size, and village population and gained the mark. A few wrote “horizontal
axis” in the space having read the instruction too literally.

(v) Ince was well-plotted. The mark was given for the plot so adding the name was not essential but,
bearing in mind the names by the other plots, it was good practice to add Ince by the plot and most
did this.

(vi) This was well done. Most candidates recognised Ince had the highest population and number of
services and compared it with Stanley adding population figures and service statistics thereby
gaining 3 marks. A few just stated figures for population and services without any comparisons
made so credit was limited. It is important that statistics are used to support judgements, not just
“lifted” from data and stated - on their own they become meaningless. A few did identify and state
the positive correlation on the graph.

(vii) Most candidates could gain a straightforward mark for recognising the increased need for more
services due to the greater population. Only a few however gained the second mark for adding
that more people meant threshold populations were met for various services e.g. Schools, shops,
and that more people meant businesses could develop for profit. Given the answers to (iii),
Centres would do well to focus on settlement hierarchies, high/low order goods and services while
linking this to the sphere of influence. These topics were not well understood.

(c) (i) This was well done; almost all gained 3 marks for matching the reason to the three answers given.
Some candidates gave the wrong reason for answer 2, stating “work in or near the village” despite the
quote saying that the person worked in the city 40 km away and took 30 minutes to get to work.
The reason had to be “Good access to the motorway”.

(ii) Well plotted by almost all candidates. A few plotted the smaller bar at 4 or 5; a very small number
did not attempt the graph.

(iii) This was quite well done with most candidates scoring 3 or 4 marks; a number did not give any
statistical support, thereby limiting the credit to 2 or 3 marks. Candidates should have disagreed
with the hypothesis and then listed the main reasons from the graph, e.g. good access to the
motorway, while noting that 67/113, i.e. over 50%, lived there for reasons different from those in the
hypothesis. Those listed in the hypothesis were among the least popular rankings, with only 28/113
or around 25%. Supporting judgements on hypotheses will always require some use of data from
the resources to back them up.
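
For clarity, the proportions quoted above work out as:

\[ \frac{67}{113} \approx 59\% \ (\text{i.e. over } 50\%) \qquad \text{and} \qquad \frac{28}{113} \approx 25\% \]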

(iv) In this answer, general problems in carrying out a survey were allowed as well as any specific to
Bethel. Here candidates could gain 1 mark for personnel issues such as too busy, unwilling, and
uncooperative though not for people telling lies – how would they know? Other marks were given
for recognition that Bethel was a large village so a pair of candidates might find it difficult to survey
a sufficient sample size in the time allotted. The fact that some people may have been at work
giving an unbalanced response was also credited. What was not allowed were suggestions that
the candidates would get tired, there was no cafe to have a snack or a rest or that they would not
be able to get there as there was no railway station.

(d) The answers to this were disappointing. The question clearly stated “in addition to population
changes” yet far too many candidates gave investigations that would look at past and present
populations or ideas relating to migration in and out of the village. A few candidates just suggested
looking at village changes without specifying which. Few answers involved the study of change but
credit up to 2 marks was given for topics that could be carried out in the present e.g. traffic flow
within the village, employment, litter/environmental surveys, housing types. The best candidates
suggested looking at changing services (past and future), changing housing/buildings/land-use
(past and future). How to access and use past records, photos, and maps however was never
really explained by most candidates.


GEOGRAPHY
Paper 2217/23
Investigation and Skills

Section A

The number of candidates for this timezone variant of 2217 Paper 2 was too small for worthwhile comment.

The Principal Examiner comments for the IGCSE Paper 4 which makes up Section B of the 2217 Paper 2
follow for reference.

Section B

General comments

Most candidates found this examination enabled them to demonstrate what they knew, understood and
could do. The overall range of marks went from 1 to 58 out of 60 - a similar range to previous years - with
weaker candidates scoring on the practical questions, such as drawing and interpreting graphs and maps,
and those of higher ability scoring well on the more challenging sections requiring explanation and
judgement especially regarding hypotheses. Overall Question 2 proved to be slightly easier than Question
1.

There is less general advice to be given for areas for improvement with this paper compared with others. As
there are no choices to make, it is difficult to miss sections out, although some candidates omit graph
completion questions which are usually ‘easier’ to answer. Although there were no reports of time issues
some candidates do write too much in some sub-sections. They should be encouraged to answer more
succinctly and perhaps give more thought to their answers. Most points for teachers to bear in mind, when
preparing candidates for future Paper 43 questions, relate to misunderstanding or ignoring command words
and the use of appropriate fieldwork techniques. Particular questions where candidates did not score well
also often related to them not fully understanding the question, for example Questions 1(a) (ii) and 1 (c) (ii).
Questions which require candidates to develop their own hypothesis or investigation methodology are
common on this paper. This is an area which Centres could practise with candidates.

Centres need to realise that, although this is an Alternative to Coursework examination, candidates will still
be expected to show that they know how fieldwork equipment is used and appropriate fieldwork techniques
even if they have only limited opportunity for fieldwork within the centre. For example Question 2(b)
required candidates to describe how to make measurements of a river channel, and Question 2(f) instructed
them to suggest appropriate questions for a questionnaire.

Comments on specific questions

Question 1

(a) (i) Generally sensible advice was suggested with particular emphasis on the dangers of deep, fast
flowing water, slippery rocks, and the need to wear suitable clothing and footwear. Answers which
were not credited included wearing life jackets and the need for swimming lessons. Some weaker
answers gave advice about data collection rather than safety.

(ii) Many candidates did not understand what a pilot study is or why it might be useful. However, the
idea of a practice session is becoming more appreciated but without a real understanding of how it
might benefit a study. Some candidates still took the literal meaning of ‘pilot’ and wrote about
views from an aeroplane to see the whole river.

(b) Most candidates showed understanding of the measurement techniques and equipment. Weaker
answers did not make it clear that a width measurement must go from bank to bank rather than just
across the river. Similarly the best answers specified that to measure the depth of the river the
ruler must touch the river bed and the measurement must be read where the water level comes up
to on the ruler.

(c) (i) Most candidates completed the cross section accurately by plotting the points, joining them and
shading the appropriate area.

(ii) Whilst better candidates measured the wetted perimeter very accurately, many candidates gave an
answer of 6.5 metres which showed a lack of understanding of the diagram and the idea of a
wetted perimeter.

(iii) Most candidates showed an understanding that the speed of flow will slow down due to increased
friction with the channel.

(iv) Many candidates recognised that all three measurements increased downstream. However, some
candidates did not refer to all three variables and did not gain the mark. Whilst many candidates
gave correct data some weaker answers included inaccurate statistics and lack of units.

(d) (i) Answers varied in accuracy and understanding shown. Answers needed to refer to the length or
longest axis of the pebble measured with the ruler. Many candidates did not refer to the roundness
score chart which suggests that they may not have followed the instruction to look at Figure 3.

(ii) Most candidates completed the bar graph accurately, although a small minority did not attempt the
question.

(iii) Most candidates agreed that there was a relationship between the two variables. They recognised
that smaller pebbles are also rounder.

(iv) The most commonly suggested reason for the change was erosion or attrition. Better candidates
also explained why this would happen more downstream.

(e) This question was answered well by many candidates who made appropriate suggestions for
improvement. Amongst the more common suggestions were to take more measurements at the
existing sites, to make measurements at more sites along the river or a different river, to increase
the pebble sample size, to take more measurements at different times of the year when the river
conditions may vary.

Question 2

(a) (i) As in Question 1, many sensible suggestions were made on the methodology of the traffic survey.
More common suggestions focused on how and when to do the counting, equipment that would be
needed, and appropriate grouping of the candidates.

(ii) Most candidates showed an understanding of a tally method of counting. They recognised that it
was an easy counting method, especially if vehicles were passing quickly. Fewer candidates
referred to the benefit of easy and accurate totalling of results after the count was completed.

(b) (i) Nearly all candidates identified the correct road.

(ii) Most candidates drew the two bars accurately in the appropriate places. Again a small minority did
not attempt the question.

(iii) The question differentiated well. Better answers recognised that the hypothesis was incorrect or
partially incorrect. They then supported this conclusion with appropriate statistics from different
roads. The best answers also stated that there was little variation on any of the roads, or that
traffic varied more between roads than with distance from the town centre. Weaker answers used
statistics from morning and evening surveys which was not the focus of this hypothesis.

(c) (i) The question was difficult for many candidates but better candidates did make an appropriate
suggestion. These candidates realised that the results were similar at both sites on each road so it
would be simpler to use one set of data. Another good response was that results were only
needed from one site because the hypothesis focused on times of the day not distance from the
centre.

(ii) Most candidates drew the flow lines accurately and with the arrows pointing in the correct direction.
Only a small minority drew the arrows pointing incorrectly.

(iii) Nearly all candidates ranked the roads in the correct order.

(iv) Most candidates agreed that the hypothesis was correct and gave appropriate supporting data to
describe the different pattern of movement at 08.00 and 17.00. Weaker answers did not compare
different times on the same road, or compared movement in and out of the town centre at one time
only.

(d) This question was a good discriminator. Candidates generally suggested appropriate
improvements such as conducting another survey at midday, using more candidates to check the
accuracy of counting, and to repeat the count on another working day or at the weekend.

(e) Many candidates realised that the beach might affect traffic flow along the two roads leading to it.
Consequently they suggested that there would be more traffic at the weekend or on sunny days as
more people went to the beach. Weaker candidates continued to concentrate on traffic going to
and from the town centre and so missed the important focus on the beach.

(f) (i) Many suitable hypotheses were suggested which focused on people’s attitudes to the traffic-free
zone. A minority of candidates did not understand the meaning of traffic-free zone and so their
suggestions were vague.

(ii) Candidates who had suggested an appropriate hypothesis generally related their questions to it
well. The questions concentrated on whether the interviewee was for or against the traffic-free
zone and the benefits or disadvantages of it. If the candidates’ hypothesis was unacceptable they
were still able to gain marks by suggesting suitable questions about a traffic-free zone.
