AQA Psychology for A Level Year 2 Revision Guide 2nd Edition – Knowledge Check answers

PLEASE NOTE: This document contains suggested model answers that would achieve a good mark if you gave them in an exam. They are designed to help guide and instruct you, but they should not be considered definitive or the only answers you could give.
Page 11
1. The first systematic experimental attempt to study the mind by breaking up conscious awareness
into basic structures of thoughts, images and sensations. Isolating the structure of consciousness in
this way is called structuralism.
2. In 1879, Wundt opened the first experimental psychology lab with the aim of describing the
nature of human consciousness (the ‘mind’). He pioneered the method of introspection – the first
attempt to study the mind by breaking up conscious awareness into basic structures of thoughts,
images and sensations. Isolating the structure of consciousness in this way is called structuralism.
The same standardised instructions were given to all participants so procedures could be repeated
(replicated). For instance, participants were given a ticking metronome and they would report their
thoughts, images and sensations, which were then recorded.
Wundt recorded the introspections within a controlled lab environment and all participants were
tested in the same way. For this reason, Wundt's research can be considered a forerunner to the scientific approaches that later emerged in psychology. Other aspects of this research would be
considered unscientific, however. Wundt relied on participants self-reporting their ‘private’ mental
processes. Such data is subjective and participants may not have wanted to reveal some of the
thoughts they were having. Participants would also not have had exactly the same thoughts every
time, so establishing general principles would not have been possible (one of the key aims of
science).
3. Watson (1913) argued that introspection was subjective, in that it varied from person to person.
According to the behaviourist approach, ‘scientific’ psychology should only study phenomena that
can be observed and measured. B.F. Skinner (1953) brought the language and rigour of the natural
sciences into psychology. The behaviourists’ focus on learning, and the use of carefully controlled lab
studies, would dominate psychology for the next few decades.
Many claim that a scientific approach to the study of human thought and experience is neither possible nor desirable, as there are important differences between the subject matter of psychology and
the natural sciences. Also, there are approaches in psychology that employ methods that are much
less rigorous and controlled than the behaviourist approach – such as the humanistic and
psychodynamic approaches which rely on more subjective methods such as case studies.
Page 13
1. Classical conditioning is a form of learning in which a neutral stimulus (e.g. bell) can come to elicit
a new learned response (conditioned response, CR) through association.
2. Rats and pigeons were placed in specially designed cages (Skinner boxes). When a rat activated a
lever (or a pigeon pecked a disc) it was rewarded with a food pellet. A desirable consequence led to
behaviour being repeated. If pressing a lever meant an animal avoided an electric shock, the
behaviour would also be repeated.
3. Positive reinforcement – receiving a reward when behaviour is performed – makes it more likely
to be repeated. Thus a child could be encouraged to come home by 9pm by being allowed to stay out until 10pm at the weekend if they do.
Negative reinforcement – when an animal or human produces behaviour that avoids something
unpleasant. Before the child leaves the house they could be warned that if they are not in by 9pm,
they will be grounded for the rest of the week.
4. The behaviourist approach is only concerned with studying behaviour that can be observed and
measured. It is not concerned with mental processes. Introspection was rejected by
behaviourists as its concepts were vague and difficult to measure. Behaviourists tried to maintain
more control and objectivity within their research and relied on lab studies to achieve this. They also
suggest that the processes that govern learning are the same in all species, so animals (e.g. rats, cats,
dogs and pigeons) can replace humans as experimental subjects.
Pavlov introduced the concept of classical conditioning by training dogs to salivate at the sound of a
bell. Pavlov showed how a neutral stimulus (bell) can come to elicit a new learned response
(conditioned response) through association – by presenting the bell and food together on several
occasions.
Skinner placed rats and pigeons in specially designed cages (Skinner boxes). When a rat activated a
lever (or a pigeon pecked a disc) it was rewarded with a food pellet. A desirable consequence led to
behaviour being repeated. If pressing a lever meant an animal avoided an electric shock, the
behaviour would also be repeated. This is operant conditioning – behaviour is shaped and
maintained by its consequences.
One strength of behaviourism is that it uses well-controlled research. The approach has focused on
the careful measurement of observable behaviour within controlled lab settings. Behaviourists have
broken behaviour down into stimulus–response units and studied causal relationships. This suggests
that behaviourist experiments have scientific credibility.
However, this approach may oversimplify learning and ignore important influences on behaviour
(e.g. thought). Other approaches (e.g. social learning and cognitive) incorporate mental processes.
This suggests learning is more complex than just what we can observe.
Another strength is that behaviourist laws of learning have real-world application. The principles of
conditioning have been applied to a broad range of real-world behaviours and problems. Token
economy systems reward appropriate behaviour with tokens that are exchanged for privileges
(operant conditioning). These are successfully used in prisons and psychiatric wards. This increases
the value of the behaviourist approach because it has widespread application.
One limitation is that behaviourism is a form of environmental determinism. The approach sees all
behaviour as determined by past experiences that have been conditioned and ignores any influence
that free will may have on behaviour. Skinner suggested that free will was an illusion. When
something happens we may think, ‘I made the decision to do that’ but our past conditioning
determined the outcome. This is an extreme position and ignores the influence of conscious
decision-making processes on behaviour (as suggested by the cognitive approach).
Page 15
1. Children are more likely to imitate the behaviour of people with whom they identify. Such role
models are similar to the observer, tend to be attractive and have high status. For instance, a little
boy may identify with Justin Bieber because of his popularity, attractiveness and boundless talent.
3. To learn to bake a cake a child must first pay attention to the actions of their mother. The child must
store the sequence of events in memory (retention) – the ingredients, lining the cake tin, etc. The
child must be capable of reproducing the behaviour – they must have access to the correct utensils
and be physically capable of imitating the actions. Finally, the child must be motivated to reproduce
the behaviour. They may have observed cake-making behaviour being rewarded in the past – such as
the look on their mum’s happy face as she tucks into what she has made (vicarious reinforcement).
4. Bandura agreed with the behaviourist approach that learning occurs through experience.
However, he also proposed that learning takes place in a social context through observation and
imitation of others' behaviour. Children (and adults) observe other people’s behaviour and take note
of its consequences. Behaviour that is seen to be rewarded (reinforced) is much more likely to be
copied than behaviour that is punished. Bandura called this vicarious reinforcement.
Mediational (cognitive) processes play a crucial role in learning. There are four mediational
processes in learning:
1. Attention – whether behaviour is noticed.
2. Retention – whether behaviour is remembered.
3. Motor reproduction – being able to do it.
4. Motivation – the will to perform the behaviour.
The first two processes relate to the learning of behaviour, the last two relate to the performance of
behaviour (so, unlike behaviourism, learning and performance do not have to occur together).
Finally, identification with role models is also important. Children are more likely to imitate the
behaviour of people with whom they identify. Such role models are similar to the observer, tend to
be attractive and have high status.
One strength is that SLT emphasises the importance of cognitive factors. Neither classical conditioning nor operant conditioning can offer a comprehensive account of human learning on its own
because cognitive factors are omitted. Humans and animals store information about the behaviour
of others and use this to make judgements about when it is appropriate to perform certain actions.
This shows that SLT provides a more complete explanation of human learning than the behaviourist
approach by recognising the role of mediational processes.
However, recent research suggests that observational learning is controlled by mirror neurons in the
brain, which allow us to empathise with and imitate other people. This suggests that SLT may make
too little reference to the influence of biological factors on social learning.
One limitation is that SLT relies too heavily on evidence from contrived lab studies. Many of Bandura’s
ideas were developed through observation of children's behaviour in lab settings and this raises the
problem of demand characteristics. The main purpose of a Bobo doll is to hit it. So, the children in
those studies may have been behaving as they thought was expected. Thus, the research may tell us
little about how children actually learn aggression in everyday life.
Another strength is that SLT has real-world application. Social learning principles can account for how
children learn from other people around them, as well as through the media, and this can explain
how cultural norms are transmitted. This has proved useful in understanding a range of behaviours
such as how children come to understand their gender role by imitating role models in the media.
This increases the value of SLT as it can account for real-world behaviour.
Page 17
1. Schema are packages of information developed through experience. They act as a ‘mental
framework’ for the interpretation of incoming information received by the cognitive system. Babies
are born with simple motor schema for innate behaviours such as sucking and grasping, but as we
get older, our schema become more sophisticated.
2. A theoretical model is a sequence of boxes and arrows, often represented as a flow diagram,
which represents the passage of information through the cognitive system. The information
processing approach suggests that information flows through a sequence of stages that include
input, storage and retrieval, as in the multi-store model of memory. This model shows how sensory
information is registered, then passed through STM and LTM where it is retained unless forgotten.
3. Cognitive neuroscience is the scientific study of the influence of brain structures (neuro) on
mental processes (cognition). With advances in brain-scanning technology in the last twenty years,
scientists have been able to describe the neurological basis of mental processing. This involves
pinpointing those brain areas/structures that control particular cognitive processes. This includes
research in memory that has linked episodic and semantic memories to opposite sides of the
prefrontal cortex in the brain. Scanning techniques have also proven useful in establishing the
neurological basis of some disorders, e.g. the parahippocampal gyrus and OCD.
4. In direct contrast to the behaviourist approach, the cognitive approach argues that mental
processes should be studied, e.g. studying perception and memory. Mental processes are ‘private’
and cannot be observed, so cognitive psychologists study them indirectly by making inferences
(assumptions) about what is going on inside people’s heads on the basis of their behaviour.
Cognitive psychologists emphasise the importance of schema: packages of information developed
through experience which act as a ‘mental framework’ for the interpretation of incoming
information received by the cognitive system.
One strength is that the cognitive approach uses scientific and objective methods. Cognitive psychologists have always employed controlled and rigorous methods of study, e.g. lab studies, in order to infer cognitive processes at work. In addition, the two fields of biology and cognitive
psychology come together (cognitive neuroscience) to enhance the scientific basis of study. This
means that the study of the mind has established a credible, scientific basis.
However, the use of inference means cognitive psychology can occasionally be too abstract and
theoretical. Also, research often uses artificial stimuli (such as word lists). Therefore, research on
cognitive processes may lack external validity and may not represent everyday experience.
5. In direct contrast to the behaviourist approach, the cognitive approach argues that mental
processes should be studied, e.g. studying perception and memory. Mental processes are ‘private’
and cannot be observed, so cognitive psychologists study them indirectly by making inferences
(assumptions) about what is going on inside people’s heads on the basis of their behaviour.
Cognitive psychologists emphasise the importance of schema: packages of information developed
through experience which act as a ‘mental framework’ for the interpretation of incoming
information received by the cognitive system.
Theoretical models are used to describe and explain how ‘unseen’ cognitive processes work. The
information processing model suggests that information flows through the cognitive system in a
sequence of stages that include input, storage and retrieval, as in the multi-store model of memory.
The ‘computer analogy’ suggests similarities in how computers and human minds process
information. For instance, the use of a central processor (the brain), changing of information into a
useable code and the use of ‘stores’ to hold information.
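
To make the computer analogy concrete, here is a toy sketch in Python (invented purely for illustration – it is not part of the revision guide or any published model; the class name, capacity and rules are assumptions) of information flowing through input, storage and retrieval stages, loosely echoing the multi-store model:

# Toy illustration of the information-processing idea: input, storage, retrieval.
# Invented teaching sketch; names, capacities and rules are assumptions.

class MemoryStores:
    STM_CAPACITY = 7                      # often cited as about 7 +/- 2 items

    def __init__(self):
        self.stm = []                     # short-term store
        self.ltm = set()                  # long-term store

    def attend(self, item):
        """Input: attended information enters the short-term store."""
        self.stm.append(item)
        if len(self.stm) > self.STM_CAPACITY:
            self.stm.pop(0)               # displacement: oldest item is lost

    def rehearse(self, item):
        """Storage: rehearsal transfers an item to the long-term store."""
        if item in self.stm:
            self.ltm.add(item)

    def retrieve(self, item):
        """Retrieval: material is brought back from the long-term store."""
        return item in self.ltm

A sequence such as memory.attend('word'), memory.rehearse('word'), memory.retrieve('word') mirrors the input–storage–retrieval flow described above.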
One strength is that the cognitive approach uses scientific and objective methods. Cognitive psychologists have always employed controlled and rigorous methods of study, e.g. lab studies, in order to infer cognitive processes at work. In addition, the two fields of biology and cognitive
psychology come together (cognitive neuroscience) to enhance the scientific basis of study. This
means that the study of the mind has established a credible, scientific basis.
However, the use of inference means cognitive psychology can occasionally be too abstract and
theoretical. Also, research often uses artificial stimuli (such as word lists). Therefore, research on
cognitive processes may lack external validity and may not represent everyday experience.
Another strength of the approach is the application to everyday life. The cognitive approach is
dominant in psychology today and has been applied to a wide range of practical and theoretical
contexts. For instance, artificial intelligence (AI) and the development of robots, the treatment of
depression and improving eyewitness testimony. This supports the value of the cognitive approach.
One limitation is that the approach is based on machine reductionism. Although there are
similarities between the operations of the human mind and computers (inputs-outputs, central
processor, storage systems), the computer analogy has been criticised. For instance, emotion and
motivation have been shown to influence accuracy of recall, e.g. in eyewitness accounts. These
factors are not considered within the computer analogy. This suggests that machine reductionism
may weaken the validity of the cognitive approach.
Page 19
1. The mind and body are one and the same. From the biological approach, the mind lives in the
brain – meaning that all thoughts, feelings and behaviour ultimately have a physical basis. This is in
contrast to the cognitive approach, which sees the mind as separate from the brain.
Behaviour has a neurochemical and genetic basis. Neurochemistry explains behaviour, for example
low levels of serotonin in OCD. Psychological characteristics (e.g. intelligence) are inherited in the
same way as physical characteristics (e.g. height).
2. A person’s genotype is their actual genetic make-up. Phenotype is the way that genes are
expressed through physical, behavioural and psychological characteristics. The expression of
genotype (phenotype) is influenced by environmental factors. For example, PKU is a genetic disorder
(genotype), the effects of which can be prevented by a restricted diet (phenotype).
3. Any genetically determined behaviour that enhances survival and reproduction will be passed on
to future generations. Such genes are described as adaptive and give the possessor and their
offspring advantages. For instance, attachment behaviours in newborns promote survival and are
therefore adaptive and naturally selected.
The mind and body are one and the same. From the biological approach, the mind lives in the brain
– meaning that all thoughts, feelings and behaviour ultimately have a physical basis. This is in
contrast to the cognitive approach which sees the mind as separate from the brain.
Twin studies are used to investigate the genetic basis of behaviour. Concordance rates between
twins are calculated – the extent to which twins share the same characteristic. Higher concordance
rates among identical (monozygotic, MZ) twins than non-identical (dizygotic, DZ) twins is evidence of
a genetic basis.
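
As a rough illustration of the arithmetic behind a pairwise concordance rate (a minimal sketch – the figures below are invented for teaching purposes, not data from any real twin study):

# Hypothetical illustration of pairwise concordance in a twin study.
# All figures are invented; this is not data from published research.

def concordance(both_affected, one_affected):
    """Of all twin pairs with at least one affected twin,
    the proportion in which both twins show the characteristic."""
    return both_affected / (both_affected + one_affected)

# Suppose 40 of 100 MZ pairs are both affected (60 pairs have one affected twin),
# versus 15 of 100 DZ pairs both affected (85 pairs with one affected twin).
mz = concordance(40, 60)   # 0.40
dz = concordance(15, 85)   # 0.15
print(f"MZ concordance: {mz:.0%}, DZ concordance: {dz:.0%}")
# MZ > DZ is taken as evidence of a genetic contribution to the characteristic.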
However, antidepressant drugs do not work for everyone. Cipriani et al. (2018) compared 21
antidepressant drugs and found wide variations in their effectiveness. This challenges the value of
the biological approach as it suggests that brain chemistry alone may not account for all cases of
depression.
Another strength is that the biological approach uses scientific methods. In order to investigate both
genetic and neurochemical factors, the biological approach makes use of a range of precise and
objective methods. These include scanning techniques (e.g. fMRI), which assess biological processes
in ways that are not open to bias. This means that the biological approach is based on objective and
reliable data.
One limitation is that biological explanations are determinist. Biological explanations tend to be
determinist in that they see human behaviour as governed by internal, genetic causes over which we
have no control. However, the way genotype is expressed (phenotype) is heavily influenced by the
environment. Not even genetically identical twins look and think exactly the same. This suggests that
the biological view is too simplistic and ignores the mediating effects of the environment.
Page 21
1. The unconscious mind is a vast storehouse of biological drives and instincts that have been
repressed during childhood. The psychodynamic approach explains all behaviour as determined by
unconscious conflicts over which we have no control. Even something as apparently random as a
'slip of the tongue' is driven by unconscious forces and has deep symbolic meaning – so mistakenly
describing our partner’s new dress as ‘fattening’ rather than ‘flattering’ may reveal our true feelings!
2. Defence mechanisms are used by the Ego to keep the Id 'in check' and reduce anxiety. Denial is
when we refuse to acknowledge reality, so someone may continue to turn up for work even though
they have lost their job.
3. The oral stage occurs from 0 to 1 years and the focus of pleasure is the mouth; the mother’s
breast is the object of desire.
4. The psychodynamic approach suggests that the unconscious mind has an important influence on
behaviour. Freud proposed that the mind is made up of the conscious mind – what we are aware of
at any one time; the preconscious mind – we may become aware of thoughts through dreams and
‘slips of the tongue’; the unconscious mind – a vast storehouse of biological drives and instincts that
influence our behaviour.
Freud also introduced the tripartite structure of personality and claimed that the dynamic
interaction between the three parts determines behaviour. The Id is the primitive part of the
personality which operates on the pleasure principle and demands instant gratification. The
Ego works on the reality principle and is the mediator between the Id and Superego. Finally,
the Superego is our internalised sense of right and wrong. It is based on the morality principle
and punishes the Ego through guilt for wrongdoing.
Freud proposed five psychosexual stages that determine adult personality. Each stage is
marked by a different conflict that the child must resolve to move on to the next. Any conflict
that is unresolved leads to fixation where the child becomes ‘stuck’ and carries behaviours
associated with that stage through to adult life. For instance, the Oedipus complex is an
important psychosexual conflict occurring at the phallic stage which influences gender role
and the formation of moral values.
Although psychoanalysis is claimed to be successful for clients with mild neuroses, it is inappropriate,
even harmful, for more serious mental disorders (such as schizophrenia). Therefore Freudian
therapy (and theory) may not apply to mental disorders where a client has lost touch with
reality.
Another strength is that the psychodynamic approach has explanatory power. Freud’s theory is
controversial and often bizarre, but it has had huge influence on Western contemporary
thought. It has been used to explain a wide range of behaviours (moral, mental disorders) and
drew attention to the influence of childhood on adult personality. This suggests that, overall,
the psychodynamic approach has had a positive influence on psychology and modern-day
thinking. This contrasts with the humanistic approach, which has been described as a loose set
of abstract concepts and has had limited application in psychology and society as a whole.
Page 23
1. A parent who sets boundaries on their love for their child (conditions of worth) by claiming ‘I will
only love you if...’ is storing up psychological problems – related to their sense of self-worth – for
that child in future. For instance, a father may say to his teenage daughter, ‘I will only love you if you
stop seeing that boy’.
2. Self-actualisation refers to the innate tendency that each of us has to want to achieve our full
potential and become the best we can possibly be. In Maslow’s hierarchy of needs the four lower
levels (deficiency needs) must be met before the individual can work towards self-actualisation – a
growth need.
However, the concept of self-actualisation is a vague, abstract idea that is difficult to test – what
exactly is someone’s potential? This means that the humanistic approach, and the concept of self-
actualisation, lacks empirical evidence to support it.
3. In Maslow’s hierarchy of needs the four lower levels (deficiency needs such as food, water and
safety) must be met before the individual (baby, child or adult) can work towards self-actualisation –
a growth need. Self-actualisation refers to the innate tendency that each of us has to want to
achieve our full potential and become the best we can possibly be.
One strength is that Maslow’s hierarchy is anti-reductionist. Humanistic psychologists reject any
attempt to break up behaviour and experience into smaller components. They advocate holism – the
idea that subjective experience can only be understood by considering the whole person (their
relationships, past, present and future, etc.). This approach may have more validity than its
alternatives by considering meaningful human behaviour within its real-world context.
4. In Maslow’s hierarchy of needs the four lower levels (deficiency needs such as food, water and
safety) must be met before the individual (baby, child or adult) can work towards self-actualisation
– a growth need. Self-actualisation refers to the innate tendency that each of us has to want to
achieve our full potential and become the best we can possibly be.
One strength is that Maslow’s hierarchy is anti-reductionist. Humanistic psychologists reject any
attempt to break up behaviour and experience into smaller components. The hierarchy suggests
there are multiple needs that must be met before humans can meet their potential. This
approach has validity as it considers meaningful human behaviour within its real-world context.
However, humanistic psychology has relatively few concepts that can be reduced to single
variables and measured and this applies to Maslow’s hierarchy too. Self-actualisation is a
hypothetical concept that cannot be observed or measured in a laboratory in the same way that
ideas within, say, the behaviourist approach can be. This means that Maslow’s hierarchy and
humanistic psychology in general is short on empirical evidence to support its claims.
5. The humanistic approach emphasises the study of subjective experience rather than general laws – a person-centred approach. The concept of self-
actualisation is central and refers to the innate tendency that each of us has to want to achieve our
full potential and become the best we can possibly be. In Abraham Maslow’s hierarchy of needs the
four lower levels (deficiency needs) must be met before the individual can work towards self-
actualisation – a growth need.
Carl Rogers argued that personal growth requires an individual’s concept of self to be congruent
with their ideal self (the person they want to be). If too big a gap exists between the two selves, the
person will experience a state of incongruence and self-actualisation isn’t possible.
In Rogers’ client-centred therapy (counselling) the aim is to increase feelings of self-worth and
reduce incongruence between the self-concept and the ideal self. An effective therapist should
provide the client with three things: genuineness, empathy and unconditional positive regard (which
the client may not have received from their parents) so as to remove the psychological barriers that
may be preventing self-actualisation.
One strength is that humanistic psychology is anti-reductionist. Humanistic psychologists reject any
attempt to break up behaviour and experience into smaller components. They advocate holism – the
idea that subjective experience can only be understood by considering the whole person (their
relationships, past, present and future, etc.). This approach may have more validity than its
alternatives by considering meaningful human behaviour within its real-world context.
However, humanistic psychology, unlike behaviourism, has relatively few concepts that can be
reduced to single variables and measured. This means that humanistic psychology in general is short
on empirical evidence to support its claims.
Another strength is the approach is a positive one. Humanistic psychologists have been praised for
promoting a positive image of the human condition – seeing people as in control of their lives and
having the freedom to change. Freud saw human beings as slaves to their past and claimed all of us
existed somewhere between ‘common unhappiness and absolute despair’. Therefore, humanistic
psychology offers a refreshing and optimistic alternative.
One limitation is that the approach may be guilty of a cultural bias. Many humanistic ideas (e.g. self-
actualisation) would be more associated with individualist cultures such as the United States.
Collectivist cultures such as India, which emphasise the needs of the group, may not identify so
easily with the ideals and values of humanistic psychology. Therefore, it is possible that the approach
does not apply universally and is a product of the cultural context within which it was developed.
Page 25
1. Both approaches offer psychological therapies that are designed to deal with anxiety-related
disorders. Freud saw these as emerging from unconscious conflicts and overuse of defence
mechanisms, whereas humanistic therapy is based on the idea that reducing incongruence will
stimulate personal growth.
2. Behaviourists suggest that all behaviour is environmentally determined by external forces that we
cannot control. Skinner famously said that free will is an ‘illusion’ and even behaviour that appears
freely chosen is the result of our reinforcement history. Although social learning theorists agree that
we are influenced by our environment to some extent, they also believe that we exert some
influence upon it (reciprocal determinism). They also place more emphasis on cognitive factors
suggesting that we have some control over when we perform particular behaviours.
3. In terms of views on development, the cognitive approach proposes stage theories of child
development, particularly the idea of concept formation (schema) as children get older. This is in
some ways similar to the biological approach, which suggests that genetically determined
maturational changes influence behaviour, for example cognitive/intellectual development. So
cognitive advances are not possible until the child is physiologically and genetically ‘ready’.
The cognitive approach recognises that many of our information-processing abilities are innate, but
are constantly refined by experience. The biological approach would place less emphasis on the
influence of experience and instead claims that ‘anatomy is destiny’: behaviour stems from the
genetic blueprint we inherit from our parents. This is an extreme nature approach and distinct from
the interactionist approach offered by the cognitive approach.
The cognitive approach advocates machine reductionism in its use of the computer analogy to
explain human information processing. This ignores the influence of emotion and motivation on
behaviour. The biological approach is also reductionist and explains human behaviour at the level of
the gene or neuron – underplaying ‘higher level’ explanations at a cultural or societal level.
Finally, the cognitive approach has led to cognitive therapies such as cognitive behaviour therapy
(CBT) which has been used in the treatment of depression and aims to eradicate faulty thinking. In
contrast, psychoactive drugs that have been developed by biological psychologists to regulate
chemical imbalances in the brain have revolutionised the treatment of mental disorders. Although
such drugs are relatively cheap and fast-acting, they may not be as effective in the long term as
cognitive therapies which lead to greater insight.
Chapter 2 Biopsychology
Page 27
1. When a stressor is perceived – for instance, your psychology teacher tells you that you have an
important test in the morning – the hypothalamus triggers activity in the sympathetic branch of the
ANS. The ANS changes from its normal resting state (the parasympathetic state) to the
physiologically aroused sympathetic state. The stress hormone adrenaline is released from the
adrenal medulla into the bloodstream. Adrenaline triggers physiological changes in the body, e.g.
increased heart rate, dilation of the pupils, decreased production of saliva. This is called the ‘fight or
flight response’. The body will slowly return to its resting state but the response may be reactivated
when you walk into the test room in the morning!
2. The autonomic nervous system (ANS) governs vital functions in the body such as breathing, heart
rate, digestion, sexual arousal and stress responses.
The somatic nervous system (SNS) governs muscle movement and receives information from
sensory receptors.
3. The major endocrine gland is the pituitary gland, located in the brain. It is called the ‘master
gland’ because it controls the release of hormones from all the other endocrine glands in the body.
The adrenal gland secretes adrenaline, which is released during the stress response and causes
physiological changes in the body, such as increased heart rate.
4. The nervous system is a specialised network of cells and our body’s primary communication
system. The endocrine system works alongside and supports the nervous system, controlling vital functions in the body through the action of hormones. It works much more slowly than the nervous system but has widespread and powerful effects.
Page 29
1. Motor neurons connect the CNS to effectors such as muscles and glands, whereas relay neurons
connect sensory neurons to motor or other relay neurons.
2. Neurons vary in size but all have the same basic structure:
Cell body (or soma) – includes a nucleus which contains the genetic material of the cell.
Dendrites – branch-like structures that carry nerve impulses from neighbouring neurons towards the
cell body.
Axon – carries the electrical impulse away from the cell body down the neuron.
Terminal buttons at the end of the axon communicate with the next neuron in the chain across the
synapse.
4. When the electrical impulse reaches the end of the neuron (the presynaptic terminal) it triggers
the release of neurotransmitter from tiny sacs called synaptic vesicles. Once the neurotransmitter
crosses the gap, it is taken up by the postsynaptic receptor site on the next neuron. The chemical
message is converted back into an electrical impulse and the process of electrical transmission begins again.
Page 31
1. Motor area: at the back of the frontal lobe (both hemispheres), it controls voluntary movement.
Damage may result in loss of control over fine motor movements.
Somatosensory area: at the front of the parietal lobes, it processes sensory information from the
skin (touch, heat, pressure, etc.). The amount of somatosensory area devoted to a particular body
part denotes its sensitivity.
Visual area: in the occipital lobe at the back of the brain. Each eye sends information from the right
visual field to the left visual cortex, and from the left visual field to the right visual cortex.
2. Petersen et al. (1988) used brain scans to show activity in Wernicke's area during a listening task
and in Broca's area during a reading task, suggesting these areas of the brain have different
functions. Also, a study of long-term memory by Tulving et al. (1994) revealed semantic and episodic
memories are located in different parts of the prefrontal cortex. Dougherty et al. (2002) reported on
44 people with OCD who had had a cingulotomy (isolating the cingulate gyrus). At a 32-week follow-
up, 30% met the criteria for successful response to surgery and 14% for partial response.
3. A limitation of localisation theory is the existence of contradictory research. The work of Lashley
(1950) suggests higher cognitive functions (e.g. learning processes) are not localised but distributed
in a more holistic way in the brain. Lashley removed up to 50% of the cortex in rats learning the
route through a maze. No one area was more important than any other in terms of the rats’ ability
to learn the route. As learning required every part of the cortex rather than just particular areas, this
suggests learning is too complex to be localised and involves the whole of the brain.
Another limitation is that the language localisation model has been questioned. Dick and Tremblay
(2016) found that very few researchers still believe language is only in Broca’s and Wernicke’s areas.
Advanced techniques (e.g. fMRI) have identified regions in the right hemisphere and the thalamus.
This suggests that, rather than being confined to a couple of key areas, language may be organised
more holistically in the brain, which contradicts localisation theory.
4. Scientists in the early 19th century supported the holistic theory that all parts of the brain were
involved in processing thought and action. But specific areas of the brain were later linked with
specific physical and psychological functions (localisation theory). If an area of the brain is damaged
(as in the example) through illness or injury, the function associated with that area is also affected.
At the back of the frontal lobe in both hemispheres is the motor area, which controls voluntary
movement. Damage, as in the brain-injured client, may result in loss of control over fine motor
movements on the opposite side of the body from the damaged hemisphere. The somatosensory
area is at the front of the parietal lobes. It processes sensory information from the skin (touch, heat,
pressure, etc.). The amount of somatosensory area devoted to a particular body part denotes its
sensitivity. Damage to this area will result in a lack of sensitivity to touch, heat, pressure, etc.
Broca’s area was identified by Paul Broca in the 1860s, in the left frontal lobe. Damage to this area
causes Broca’s aphasia, which is characterised by speech that is slow, laborious and lacking in
fluency. Broca’s patients (as in the example) may have difficulty finding words and naming certain
objects. Wernicke’s area deals with language comprehension and was identified by Karl Wernicke in
the 1870s in the left temporal lobe. Damage to this area causes Wernicke’s aphasia, characterised by
problems in understanding language (although people are still able to produce language), resulting
in fluent but meaningless speech. People with Wernicke’s aphasia will often produce nonsense
words (neologisms) as part of the content of their speech.
One strength of localisation theory is support from neurosurgery. Neurosurgery is used to treat
mental disorders, e.g. a cingulotomy involves isolating the cingulate gyrus – dysfunction of this area
may be a cause of OCD. Dougherty et al. (2002) studied 44 people with OCD who had a cingulotomy.
At follow-up, 30% met the criteria for successful response and 14% for partial response. The success
of such procedures strongly suggests that behaviours associated with serious mental disorders may
be localised.
Another strength of localisation theory is brain scan evidence to support it. Petersen et al. (1988)
used brain scans to show activity in Wernicke’s area during a listening task and in Broca’s area during
a reading task. Also, a study of long-term memory by Tulving et al. (1994) revealed semantic and
episodic memories are located in different parts of the prefrontal cortex. There now exist a number
of sophisticated and objective methods for measuring activity in the brain, providing sound scientific
evidence of localisation of function.
That said, Lashley removed areas of the cortex (up to 50%) in rats learning the route through a maze.
Learning required all of the cortex rather than being confined to a particular area. This suggests that
higher cognitive processes (e.g. learning) are not localised but distributed in a more holistic way in
the brain.
One limitation is that the language localisation model has been questioned. Dick and Tremblay (2016)
found that very few researchers still believe language is only in Broca’s and Wernicke’s areas.
Advanced techniques (e.g. fMRI) have identified regions in the right hemisphere and the thalamus.
This suggests that, rather than being confined to a couple of key areas, language may be organised
more holistically in the brain, which contradicts localisation theory.
Page 33
1. Eleven split-brain participants were studied by Sperry (1968). An image or word was projected to
the right visual field (RVF, processed by the left hemisphere, LH), and the same, or different, image
was projected to the left visual field (LVF, processed by the right hemisphere, RH). Presenting the
image to one hemisphere meant that the information could not be conveyed from that hemisphere
to the other.
When an object is shown to the RVF, the participant can describe what is seen (due to the language
centres in the LH). When an object is shown to LVF, the participant cannot name the object (no
language centres in RH). They can, however, select a matching object behind a screen using their left
hand. They can also select an object closely associated with the picture (e.g. an ashtray if the picture
was a cigarette). When a pinup picture was shown to the LVF, the participant giggled but reported
seeing nothing. This demonstrates how certain functions are lateralised in the brain, and shows that
the LH is verbal and the RH is ‘silent’ but emotional.
2. As above.
3. One limitation is that the idea of an analyser (left) versus synthesiser (right) brain may be wrong. There may be different functions in the RH and LH, but research suggests people do not have a dominant side that creates a different personality. Nielsen et al. (2013) analysed 1000 brain scans and found that people did use certain hemispheres for certain tasks, but there was no evidence of a dominant side. This suggests that the notion of right- or left-brained people (e.g. an ‘artist’ brain) is wrong.
4. Sperry devised a unique procedure to test his split-brain participants as a way of investigating
hemispheric lateralisation. An image or word is projected to a participant’s right visual field
(processed by the left hemisphere) and another image to the left visual field (processed by the right
hemisphere). In the neurotypical brain, the corpus callosum ‘shares’ information between both
hemispheres. In the split brain, the information cannot be conveyed from the chosen hemisphere to
the other.
When an object is shown to the RVF, the participant easily describes what is seen. When an object is
presented to the LVF, the participant says, ‘there’s nothing there’. This is because to describe objects
in the LVF would require the RH and this hemisphere usually lacks language centres. Messages
received by the RH are normally relayed via the corpus callosum to language centres in the LH.
When an object is shown to the LVF, the participant cannot name it but can select a matching object using their left hand (connected to the RH, which receives information from the LVF). The left hand can also select an object associated with an image presented to the LVF (e.g. an ashtray selected in response to a picture of a cigarette). In each case, the participant cannot verbally identify what they have seen (because the LH is needed for this) but they can ‘understand’ what the object is (using the RH) and select the corresponding object.
This evidence suggests that the two hemispheres have different functions.
One strength is support from more recent split-brain studies. Luck et al. (1989) showed that split-
brain participants are better than normal controls, e.g. twice as fast at identifying the odd one out in
an array of similar objects. In the normal brain, the LH’s superior processing abilities are ‘watered
down’ by the inferior right hemisphere (Kingstone et al. 1995). This supports Sperry’s earlier findings
that the ‘left brain’ and ‘right brain’ are distinct in terms of functions and abilities.
One limitation is that causal relationships are hard to establish. In Sperry’s research the behaviour of
the split-brain participants was compared to a neurotypical control group. However, none of the
control group had epilepsy. Any differences between the groups may be due to epilepsy not the
split-brain (a confounding variable). This means that some of the unique features of the split-brain
participants’ cognitive abilities might have been due to their epilepsy.
One final issue is the ethics of the split-brain studies. Sperry’s participants were not deliberately
harmed and procedures were explained in advance to gain informed consent. However, participants
may not have understood they would be tested for many years, and participation was stressful. This
suggests that there was no deliberate harm but the negative consequences make the study
unethical.
Page 35
1. The brain is plastic in the sense that its structure is not static; synaptic connections are lost,
reformed and ‘pruned’ throughout life, particularly in childhood.
2. Functional recovery of the brain after trauma is an important example of neural plasticity –
healthy brain areas take over functions of areas damaged, destroyed or even missing. The brain is
able to rewire and reorganise itself by forming new synaptic connections close to the area of
damage. Secondary neural pathways that would not typically be used to carry out certain functions
are activated or ‘unmasked’ to enable functioning to continue.
3. One strength of plasticity and recovery research is its real-world application. Understanding
processes involved in plasticity has contributed to the field of neurorehabilitation. Understanding
axonal growth encourages new therapies. For example, constraint-induced movement therapy
involves massed practice with an affected arm while the individual’s unaffected arm is restrained.
This shows that research into functional recovery helps medical professionals know when
interventions can be made.
One limitation is that cognitive reserve affects functional recovery of the brain. Evidence suggests a
person’s educational attainment may influence how well the brain functionally adapts after injury.
Schneider et al. (2014) found the more time brain injury patients had spent in education (an
indication of their cognitive reserve), the greater their chances of a disability-free recovery. 40% of
patients who achieved DFR had more than 16 years’ education compared to about 10% of patients
who had less than 12 years’ education. This suggests that cognitive reserve is a crucial factor in
determining how well the brain adapts after trauma.
4. During infancy, the brain experiences a rapid growth in synaptic connections, peaking at about
15,000 per neuron at age 2–3 years (Gopnik et al. 1999). As we age, rarely-used connections are deleted (synaptic pruning) and frequently-used connections are strengthened. It was once thought
these changes were limited to childhood. But recent research suggests neural connections can
change or be formed at any time, due to learning and experience.
The concept of plasticity is supported by studies which reflect the content of the newspaper article.
Maguire et al. (2000) found significantly greater grey matter volume in the posterior hippocampus
in London taxi drivers than in a matched control group. This part of the brain is linked with the
development of spatial and navigational skills. As part of their training, London cabbies take a
complex test called ‘The Knowledge’ to assess their recall of city streets and possible routes. This
learning experience appears to alter the structure of the taxi drivers’ brains! The longer they had
been in the job, the more pronounced was the structural difference.
Plasticity is also supported by Draganski et al. (2006) who imaged the brains of medical students
three months before and after final exams. Learning-induced changes were seen in the posterior
hippocampus and the parietal cortex, presumably as a result of revising for the exams.
Functional recovery of the brain after trauma is an important example of neural plasticity – healthy
brain areas take over functions of areas that are damaged, destroyed or even missing. The brain
‘rewires’ itself by forming new synaptic connections. Secondary neural pathways that would not
typically be used to carry out certain functions are activated or ‘unmasked’ to enable functioning to
continue.
One limitation of plasticity is possible negative behavioural consequences. The brain’s adaptation to
prolonged drug use leads to poorer cognitive functioning in later life, as well as an increased risk of
dementia (Medina et al. 2007). Also, 60–80% of amputees have phantom limb syndrome – they experience sensations in the missing limb, thought to be due to reorganisation in the somatosensory cortex. This suggests that the brain’s
ability to adapt to damage is not always beneficial and may lead to physical and psychological
problems.
One strength of plasticity is that it may not decline sharply with age. Bezzola et al. (2012)
demonstrated how 40 hours of golf training produced changes in the neural representations in
participants aged 40–60. Using fMRI, motor cortex activity in the novice golfers increased compared
to a control group, suggesting positive effects after training. This shows that neural plasticity can
continue throughout the lifespan.
Finally, seasonal plasticity occurs in response to environmental changes, e.g. the suprachiasmatic
nucleus (SCN) shrinks in spring and expands in autumn (Tramontin and Brenowitz 2000). However,
much of the work on seasonal plasticity has been done on animals, most notably songbirds. Human
behaviour may be controlled differently. This suggests that animal research may be a useful starting
point but can’t simply be generalised to humans.
Page 37
1. fMRI is conducted on live brains whereas post-mortem examinations involve analysis of the brains
of dead people. Post-mortems tend to involve the brains of people who have experienced some
unusual form of deficit in life. fMRI is equally likely to be performed on neurotypical brains.
2. Electroencephalogram (EEG) measures electrical activity within the brain via electrodes using a
skull cap (like a swimming cap with the electrodes attached to it). The scan recording represents the
brainwave patterns generated from thousands of neurons. This shows overall brain activity. EEG is
often used as a diagnostic tool. For example, unusual arrhythmic patterns of brain activity may
indicate abnormalities such as epilepsy, tumours or sleep disorders.
Event-related potentials (ERPs) are what is left when all extraneous brain activity from an EEG
recording is filtered out. This is done using a statistical technique, leaving only those responses that
relate to the presentation of a specific stimulus or performance of a certain task (for example). ERPs
are types of brainwave that are triggered by particular events. Research has revealed many different
forms of ERP and how these are linked to cognitive processes (e.g. perception and attention).
3. A limitation of post-mortems is that causation may be an issue. Observed damage in the brain
may not be linked to the deficits under review but to some other related trauma or decay.
Another limitation is that post-mortem studies raise ethical issues of consent from the patient before
death. Patients may not be able to provide informed consent (e.g. patient HM) and families may be
unwilling to do so.
4. Electroencephalogram (EEG) measures electrical activity within the brain via electrodes using a
skull cap (like a swimming cap with the electrodes attached to it). The scan recording represents the
brainwave patterns generated from thousands of neurons. This shows overall brain activity. EEG is
often used as a diagnostic tool. For example, unusual arrhythmic patterns of brain activity may
indicate abnormalities such as epilepsy, tumours or sleep disorders.
Event-related potentials (ERPs) are what is left when all extraneous brain activity from an EEG
recording is filtered out. This is done using a statistical technique, leaving only those responses that
relate to the presentation of a specific stimulus or performance of a certain task (for example). ERPs
are types of brainwave that are triggered by particular events. Research has revealed many different
forms of ERP and how these are linked to cognitive processes (e.g. perception and attention).
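
The ‘statistical technique’ referred to here is typically time-locked averaging across many trials: random background activity cancels out, while the response locked to the stimulus remains. A minimal sketch of that averaging logic in Python, using entirely synthetic data (all numbers are invented for illustration):

# Sketch of ERP extraction by averaging, on synthetic data.
# Assumes EEG epochs already time-locked to a repeated stimulus.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500                # 200 presentations, 500 time points
t = np.linspace(0, 0.5, n_samples)            # 0–500 ms after stimulus onset

# A small stimulus-locked response buried in much larger background activity.
erp_true = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)   # hypothetical peak ~300 ms
epochs = erp_true + rng.normal(0, 5, size=(n_trials, n_samples))

# Averaging across trials filters out activity unrelated to the stimulus.
erp_estimate = epochs.mean(axis=0)
print(f"Peak of averaged waveform at about {t[erp_estimate.argmax()]*1000:.0f} ms")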
Post-mortem examinations involve the analysis of a person’s brain following their death. Areas of
the brain are examined to establish the likely cause of a deficit or disorder that the person
experienced in life. This may also involve comparison with a neurotypical brain in order to assess the
extent of the difference.
A strength of EEG is that it has contributed to our understanding of the stages of sleep. It also has
high temporal resolution and can detect brain activity at a resolution of a single millisecond.
A limitation of EEG is that it produces a generalised signal from thousands of neurons which makes it
difficult to know the exact source of neural activity. EEG can’t distinguish the activity of different but
adjacent neurons.
A strength of ERPs is that they provide a very specific measurement of neural processes – more specific than raw EEG. Another strength is that, like EEGs, they have excellent temporal resolution, especially compared to fMRI. A limitation of ERPs is a lack of standardisation in methodology between studies, which makes it difficult to confirm findings in studies involving ERPs. Another limitation is that background ‘noise’ and extraneous material must be completely eliminated, which may not always be easy to achieve.
A strength of post-mortems is that they provided the foundation for understanding the brain. Broca and Wernicke both relied on post-mortem studies to link language deficits to damage in the brain. A
limitation of post-mortems is that causation may be an issue. Observed damage in the brain may not
be linked to the deficits under review but to some other related trauma or decay. Another limitation
is post-mortem studies raise ethical issues of consent from the patient before death. Patients may
not be able to provide informed consent (e.g. patient HM).
Page 39
1. The circadian rhythm is a type of biological rhythm which lasts for about 24 hours (circa meaning
‘about’ and diem meaning ‘day’). There are several important types of circadian rhythm such as the
sleep/wake cycle.
2. Folkard et al. (1985) studied a group of 12 people who lived in a dark cave for three weeks, going
to bed when the clock said 11.45 pm and waking when it said 7.45 am. The researchers gradually
speeded up the clock (unbeknown to the participants) so an apparent 24-hour day eventually lasted
only 22 hours. Only one participant comfortably adjusted to the new regime. This suggests the
existence of a strong free-running circadian rhythm that cannot easily be overridden by changes in
the external environment.
3. One strength of circadian rhythm research is practical application to shift work. Boivin et al. (1996)
found shift workers experience a lapse of concentration around 6 am (a circadian trough) so
mistakes and accidents are more likely. Research also suggests a link between shift work and poor
health, with shift workers three times more likely to develop heart disease (Knutsson 2003). Thus,
research into the sleep/wake cycle may have economic implications in terms of how best to manage
worker productivity.
One limitation is that generalisations are difficult to make. Studies of the sleep/wake cycle often use
small groups of participants (e.g. Aschoff and Wever), or even single individuals (e.g. Siffre).
Participants may not be representative of the wider population and this limits the ability to make
meaningful generalisations. Siffre observed that his internal clock ticked much more slowly at 60
than when he was younger. This suggests that, even when the same person is involved, there are
factors that may prevent general conclusions being drawn.
4. French caver Siffre spent long periods in dark caves to examine the effects of free-running
biological rhythms – two months (in 1962) in the caves of the Southern Alps and six months (in the
1970s) in a Texan cave (when he was 60). In each case study, Siffre’s free-running circadian rhythm
settled down to just above the usual 24 hours (about 25 hours). Importantly, he did have a regular
sleep/wake cycle.
Aschoff and Wever found a similar circadian rhythm in a comparable study. A group of participants
spent four weeks in a World War 2 bunker deprived of natural light (Aschoff and Wever 1976). All
but one (whose sleep/wake cycle extended to 29 hours) displayed a circadian rhythm between 24
and 25 hours. Siffre’s experience and the bunker study suggest that the ‘natural’ sleep/wake cycle
may be slightly longer than 24 hours but is entrained by exogenous zeitgebers associated with our
24-hour day (e.g. number of daylight hours, typical mealtimes, etc.). This suggests the existence of a strong free-running circadian rhythm of about 25 hours that persists in the absence of external cues but is normally entrained to 24 hours by them.
One strength of circadian rhythm research is application to shift work. Shift work creates
desynchronisation of biological rhythms. Boivin et al. (1996) found shift workers experience a lapse
of concentration around 6 am (a circadian trough) so accidents are more likely. Research also
suggests a link between shift work and poor health, with shift workers three times more likely to
develop heart disease (Knutsson 2003). Thus, research into the sleep/wake cycle may have economic
implications in terms of how best to manage shift work.
However, the research is correlational, therefore desynchronisation may not be the cause of
observed difficulties. For example, Solomon (1993) concluded that high divorce rates in shift workers
might be due to missing out on important family events. This suggests that it may not be biological
factors that create the adverse consequences associated with shift work.
Another strength is real-world application to medical treatment. Circadian rhythms co-ordinate the
body’s basic processes (e.g. heart rate, hormone levels) with implications for chronotherapeutics
(timing medication to maximise effects on the body). Aspirin reduces heart attacks, which are most
likely in the morning. Bonten et al. (2015) found taking aspirin is most effective last thing at night.
This shows that circadian rhythm research can help increase the effectiveness of drug treatments.
One limitation is that generalisations are difficult to make. Studies of the sleep/wake cycle often use
small groups of participants (e.g. Aschoff and Wever), or even single individuals (e.g. Siffre).
Participants may not be representative of the wider population and this limits the ability to make
meaningful generalisations. Siffre observed that his internal clock ticked much more slowly at 60
than when he was younger. This suggests that, even when the same person is involved, there are
factors that may prevent general conclusions being drawn.
Page 41
1. Biological rhythms that occur many times a day are ultradian rhythms, for example the stages of
sleep. Infradian rhythms take more than a day to complete, for example the female menstrual cycle.
2. Stern and McClintock (1998) studied 29 women with irregular periods. Pheromones were taken
from some of the women at different stages of their cycles, via a cotton pad under their armpits.
These pads were cleaned with alcohol and later rubbed on the upper lips of the other participants.
68% of women experienced changes to their cycle which brought them closer to the cycle of their
‘odour donor’. This suggests that the female menstrual cycle can be synchronised.
3. One strength is research on the menstrual cycle shows its evolutionary basis. For our distant
ancestors it may have been advantageous for females to menstruate together and become pregnant
at the same time. In a social group, this would allow babies who had lost their mothers to have
access to breast milk, thereby improving their chances of survival. This suggests that synchronisation
is an adaptive strategy.
One limitation is the methodology used in synchronisation studies. Commentators argue that there
are many factors that may change a woman's menstrual cycle and act as confounding variables in
research (e.g. stress, changes in diet). So any pattern of synchronisation (e.g. in Stern and
McClintock's study) may have occurred by chance. This may be why other studies (e.g. Trevathan
et al. 1993) have not replicated Stern and McClintock’s original findings. This suggests that menstrual
synchrony studies are flawed.
4. Stern and McClintock (1998) studied 29 women with irregular periods (the female menstrual cycle
is an example of an infradian rhythm). Pheromones were taken from some of the women at
different stages of their cycles, via a cotton pad under their armpits. These pads were cleaned with
alcohol and later rubbed on the upper lips of the other participants. 68% of women experienced
changes to their cycle which brought them closer to the cycle of their ‘odour donor’. This suggests
that the female menstrual cycle can be synchronised.
One strength is research on the menstrual cycle shows its evolutionary basis. For our distant
ancestors it may have been advantageous for females to menstruate together and become pregnant
at the same time. In a social group, this would allow babies who had lost their mothers to have
access to breast milk, thereby improving their chances of survival. This suggests that synchronisation
is an adaptive strategy.
One limitation is the methodology used in synchronisation studies. Commentators argue that there
are many factors that may change a woman's menstrual cycle and act as confounding variables in
research (e.g. stress, changes in diet). So any pattern of synchronisation (e.g. in Stern and
McClintock's study) may have occurred by chance. This may be why other studies (e.g. Trevathan
et al. 1993) have not replicated Stern and McClintock’s original findings. This suggests that menstrual
synchrony studies are flawed.
Our pattern of sleep is an example of an ultradian rhythm and occurs in 90-minute periods. Each
period is divided into five stages, each characterised by a different level of brainwave activity
(monitored using EEG). Stages 1 and 2 are light sleep where a person may be easily woken. In stage
1, brain waves are high frequency and have a low amplitude (alpha waves). In stage 2, the alpha
waves continue but there are occasional random changes in pattern called ‘sleep spindles’. Stages 3
and 4 are deeper sleep where it is difficult to rouse someone. Deep sleep or slow wave sleep (SWS)
is characterised by individual waves which have lower frequency and higher amplitude. Stage 5 is
REM sleep. The body is paralysed yet brain activity closely resembles that of the awake brain. During
this time, the brain produces theta waves and the eyes occasionally move around, thus rapid eye
movement (REM). Dreams are most often experienced during REM sleep, but may also occur in deep
sleep.
One strength is understanding age-related changes in sleep. SWS reduces with age. Growth
hormone is produced during SWS so this becomes deficient in older people. van Cauter et al. (2000)
suggest that reduced SWS may explain impairments in old age. SWS can be improved using relaxation and medication. This suggests that knowledge of ultradian rhythms has practical value.
One limitation is individual differences in sleep stages. Tucker et al. (2007) found large differences
between participants in the duration of stages 3 and 4. They suggest that these differences are
biologically determined. This makes it difficult to describe ‘normal sleep’ in any meaningful way.
Page 43
1. Endogenous pacemakers are internal biological ‘clocks’ such as the suprachiasmatic nucleus which
maintain regular rhythms within our body. Exogenous zeitgebers refer to external changes in the
environment, such as changes in the pattern of light which affect or entrain our biological rhythms.
2. The influence of the SCN on the sleep/wake cycle was demonstrated with chipmunks and
hamsters. DeCoursey et al. (2000) destroyed SCN connections in the brains of 30 chipmunks which
were returned to their natural habitat and observed for 80 days. Their sleep/wake cycle disappeared
and many were killed by predators. This demonstrated the importance of the SCN in maintaining a
regular sleep/wake cycle.
3. Light can reset the body’s main endogenous pacemaker (SCN), and also has an indirect influence
on key processes in the body controlling hormone secretion, blood circulation, etc. Campbell and
Murphy (1998) woke 15 participants at various times and shone a light on the backs of their knees –
producing a deviation in the sleep/wake cycle of up to 3 hours. This suggests that light is a powerful
exogenous zeitgeber detected by skin receptor sites and does not necessarily rely on the eyes to
influence the SCN.
4. The suprachiasmatic nucleus (SCN) is a tiny bundle of nerve cells in the hypothalamus which helps
maintain circadian rhythms (e.g. sleep/wake cycle). The influence of the SCN on the sleep/wake
cycle was demonstrated with chipmunks and hamsters. DeCoursey et al. (2000) destroyed SCN
connections in the brains of 30 chipmunks which were returned to their natural habitat and
observed for 80 days. Their sleep/wake cycle disappeared and many were killed by predators. This
demonstrated the importance of the SCN in maintaining a regular sleep/wake cycle.
Light is a key exogenous zeitgeber that influences the sleep/wake cycle. Light can reset the body’s
main endogenous pacemaker (SCN), and also has an indirect influence on key processes in the body
controlling hormone secretion, blood circulation, etc. Campbell and Murphy (1998) woke 15
participants at various times and shone a light on the backs of their knees – producing a deviation in
the sleep/wake cycle of up to 3 hours. This suggests that light is a powerful exogenous zeitgeber
detected by skin receptor sites and does not necessarily rely on the eyes to influence the SCN.
Social cues also have an important influence on the sleep/wake cycle. The sleep/wake cycle is fairly
random in human newborns, but most babies are entrained by about 16 weeks. Schedules imposed
by parents are a key influence, including adult-determined mealtimes and bedtimes.
One limitation of SCN research is that it may obscure other body clocks. Body clocks (peripheral
oscillators) are found in many organs and cells (e.g. lungs, skin). They are highly influenced by the
actions of the SCN but can act independently. Damiola et al. (2000) showed how changing feeding
patterns in mice altered circadian rhythms of cells in the liver for up to 12 hours, leaving the SCN
unaffected. This suggests there may be many other complex influences on the sleep/wake cycle,
aside from the master clock (SCN).
Another issue is the ethics of such research. Animal studies of the sleep/wake cycle are justified
because there are similar mechanisms in all mammals, so generalisations can be made to the human
brain. However, a disturbing issue is the ethics involved. Animals were exposed to considerable risk
in the DeCoursey et al. study and most died as a result. This suggests that studies like these cannot
be justified and researchers should find alternative ways of studying endogenous pacemakers.
One limitation is that the effects of exogenous zeitgebers differ in different environments.
Exogenous zeitgebers do not have the same effect on people who live in places where there is very
little darkness in summer and very little light in winter. For instance, the Inuit people of the Arctic Circle have similar sleep patterns all year round, despite spending around six months in almost total
darkness. This suggests the sleep/wake cycle is primarily controlled by endogenous pacemakers that
can override environmental changes in light.
Another limitation is case study evidence undermines the effects of exogenous cues. Miles et al.
(1977) reported the case of a man, blind from birth, with an abnormal circadian rhythm of 24.9
hours. Despite exposure to social cues, such as mealtimes, his sleep/wake cycle could not be
adjusted. This suggests that social cues alone are not effective in resetting the biological rhythm and
the natural body clock is stronger.
Page 45
1. [Scattergraph showing the relationship between accuracy of memory recall and age, with ‘Memory recall’ on the y-axis and ‘Age in years’ on the x-axis; the points slope downwards from left to right, showing a negative correlation.]
2. The relationship could be described as a negative correlation: as age increases, the accuracy of memory recall falls. The correlation is moderately strong at –.72 (the closer the coefficient is to +1 or –1, the stronger the relationship).
3. Correlation coefficients indicate the strength of the correlation and have a value somewhere
between –1 and +1. The closer the coefficient is to 1 (+1 or –1), the stronger the relationship
between the co-variables. The closer to zero, the weaker the relationship is.
4. A correlation of –.72 would indicate a moderately strong negative relationship between the co-
variables.
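A short sketch (using made-up data, not the data in the question, and Pearson’s r as the example coefficient) shows how such a value is calculated and read – the sign gives the direction of the relationship and the size gives its strength:
```python
# Minimal sketch (made-up data): computing and reading a correlation coefficient.
import numpy as np

age    = np.array([20, 30, 40, 50, 60, 70, 80])
recall = np.array([18, 16, 17, 13, 12, 9, 8])   # memory accuracy scores

r = np.corrcoef(age, recall)[0, 1]              # Pearson's r
print(round(float(r), 2))  # about -0.97: negative and close to -1, a strong
                           # negative correlation; noisier data would give a
                           # weaker value such as -.72
```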
Page 47
1. This would involve conducting an in-depth study of the father’s behaviour over a long period of time (a
longitudinal study). In order to get the most reliable and detailed data the researcher might conduct
interviews with the mother of the baby, asking her about the amount of time spent with the baby at
various ages, the type of activities undertaken, e.g. play, care, feeding. They might additionally ask the
father to keep a daily log of similar behaviour so that the two could be cross-referenced.
The researcher could choose to either study what they considered a ‘typical’ father, for example one that
worked 9–5, five days a week, or they may choose a more unusual example, one who works nightshifts
and is not awake much of the time that the baby is, or perhaps a father that works away a lot, or one
who is always at home.
There could be opportunities for both quantitative data (e.g. how many hours per week spent interacting
with the baby) and qualitative data about how the father feels about being a father, etc.
2. Thematic analysis produces qualitative data whereas content analysis produces quantitative data. Content analysis is a form of observational study where people’s behaviour is studied via spoken or written forms of interaction, e.g. diaries, articles, etc. The material can be coded into categories and counted to produce quantitative data (content analysis), or it can be examined for any recurring themes that can be identified, which is thematic analysis.
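To illustrate how coding turns qualitative material into quantitative data, here is a minimal sketch using the categories suggested in question 1 (play, care, feeding); the coded entries are invented:
```python
# Minimal sketch (invented data): tallying coded categories in a content
# analysis. Each diary entry has been assigned one behavioural category.
from collections import Counter

coded_entries = ["play", "care", "feeding", "play", "play", "care"]
print(Counter(coded_entries))  # Counter({'play': 3, 'care': 2, 'feeding': 1})
```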
Page 49
1. Reliability is a measure of consistency, and validity is the extent to which the findings are legitimate
and genuine in terms of real-world behaviour. So, for example, in the case of an experiment, its reliability
would relate to the extent to which it would bear replication and produce the same results whereas its
validity would be the extent to which it was measuring what it set out to measure – real-world memory,
for example.
2. Inter-observer reliability could first be tested by asking two observers to observe specific behavioural
categories independently over the same time period. Examples of some behavioural categories are:
crossing the road without pressing the button or pressing the button and waiting for the green man
before crossing the road. The results of their observations could then be correlated to measure inter-
observer reliability. A correlation of less than +.8 would indicate a need to improve the reliability. This
could be done by redefining categories (e.g. adding another category of pressing the button but crossing
the road before the green man lights up), retraining the observers and then a repetition of the above
procedure until a correlation above +.8 was achieved.
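As an illustration of ‘correlating the observations’, a minimal sketch (with invented tallies for ten observation intervals) applying the +.8 rule of thumb:
```python
# Minimal sketch (invented data): checking inter-observer reliability by
# correlating two observers' tallies over the same ten observation intervals.
import numpy as np

observer_a = np.array([4, 2, 5, 3, 6, 1, 4, 5, 2, 3])
observer_b = np.array([4, 3, 5, 2, 6, 1, 5, 5, 2, 3])

r = np.corrcoef(observer_a, observer_b)[0, 1]
verdict = "acceptable" if r >= 0.8 else "retrain observers / redefine categories"
print(round(float(r), 2), verdict)  # about 0.94, so acceptable
```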
3. The researcher might first have assessed face validity. This is the most basic method and involves
simply 'eyeballing' the experiment to check that it appears to be measuring attachment behaviour, or
they might even ask an expert to check.
An alternative would be to check concurrent validity. If the experiment is a valid measure of attachment behaviour, then its results should closely match those produced by a standardised, accepted measure such as the Strange Situation. A correlation exceeding +.8 between performance on the experiment and on the existing measure would indicate validity.
Page 50
1. Chi-squared:
The data is nominal (either binge-drinker or not, and over-50 or under-30).
The analysis is a test of difference.
The data is unrelated.
2. Wilcoxon:
The analysis is a test of difference.
The data is ordinal.
The design is related (repeated measures).
3. Chi-squared:
The data is nominal (either depressed or happy, and working day or night shifts).
The analysis is a test of difference.
The data is unrelated.
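All three answers follow the same routine: identify the level of measurement, whether the analysis is a test of difference or correlation, and whether the design is related or unrelated. A minimal sketch encoding just the combinations used on this page (a deliberate simplification of the full test-choice table):
```python
# Minimal sketch: the test-choice logic used in the answers above, covering
# only the combinations on this page (not the full statistics table).
def choose_test(level, analysis, design):
    if analysis == "difference":
        if level == "nominal" and design == "unrelated":
            return "Chi-squared"
        if level == "ordinal" and design == "related":
            return "Wilcoxon"
    return "consult the full test table"

print(choose_test("nominal", "difference", "unrelated"))  # Chi-squared (Q1, Q3)
print(choose_test("ordinal", "difference", "related"))    # Wilcoxon (Q2)
```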
Page 51
1. This means the probability that the observed effect (the result) occurred by chance is equal to or less than 5%. We would then be justified in rejecting the null hypothesis.
2. Probability is a measure of the likelihood that a particular event will occur, where 0 is a statistical
impossibility and 1 a statistical certainty. Psychologists tend to work at the 5% level to decide whether
the null hypothesis is accepted or rejected.
3. A Type I error occurs when the null hypothesis is rejected and the alternative hypothesis accepted
when actually the null hypothesis is true. A Type II error occurs when the null hypothesis is accepted
when actually the alternative hypothesis is true.
A Type I error is most likely to occur when the selected significance level is too lenient (e.g. 10%),
whereas a Type II error is most likely to occur when the selected significance level is too strict (e.g. 1%).
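A simulation makes the link between the significance level and Type I errors concrete: when the null hypothesis really is true, the proportion of ‘significant’ results matches the level chosen. A minimal sketch with simulated data (assumes scipy is available):
```python
# Minimal sketch (simulated data): when the null hypothesis is true, the rate
# of 'significant' results equals the significance level - these are Type I
# errors. A stricter level (1%) gives fewer Type I errors but more Type II.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
p = np.array([ttest_ind(rng.standard_normal(30), rng.standard_normal(30)).pvalue
              for _ in range(2000)])

print(np.mean(p <= 0.05))  # about 0.05: Type I error rate at the 5% level
print(np.mean(p <= 0.01))  # about 0.01: stricter 1% level, fewer Type I errors
```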
Page 53
1. df = 12 + 16 – 2 = 26
2. rs = 1 – (6 × ΣD²) / (n(n² – 1)) = 1 – (6 × 309) / (20(20² – 1)) = 1 – (1854 / 7980) = 0.77 (to two decimal places).
3. As the calculated value (0.77) is greater than the critical value (0.380) for a one-tailed test when p =
0.05, we can conclude that this result is significant.
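These calculations can be checked mechanically. A minimal sketch reproducing the degrees of freedom from question 1 and the Spearman’s rho from question 2:
```python
# Minimal sketch: checking the calculations above.
# Q1: degrees of freedom for an unrelated design, df = N1 + N2 - 2.
n1, n2 = 12, 16
print(n1 + n2 - 2)  # 26

# Q2: Spearman's rho, rs = 1 - (6 * sum of D^2) / (n * (n^2 - 1)),
# with the sum of squared rank differences = 309 and n = 20.
sum_d2, n = 309, 20
rho = 1 - (6 * sum_d2) / (n * (n ** 2 - 1))
print(round(rho, 2))  # 0.77, which exceeds the 0.380 critical value (Q3)
```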
Page 54
1. An abstract is a short summary (about 150–200 words in length) that includes all the major elements:
the aims and hypotheses, method/procedure, results and conclusions. It appears at the start of a report.
3. The method section should be detailed to allow replication and is subdivided as follows:
Design – e.g. independent groups, naturalistic observation, etc. and justification given for each choice.
Sample – how many participants, biographical/demographic information (as long as this does not
compromise anonymity), the sampling method and target population.
Apparatus/materials – detail of any assessment instruments used and other relevant materials.
Procedure – a 'recipe-style' list of everything that happened in the investigation. This includes a verbatim
record of everything that was said to participants: briefing, standardised instructions and debriefing.
Ethics – how these were addressed within the study.
Page 55
2. Empirical methods such as experimental and observational methods emphasise the importance of
data collection based on direct, sensory experience. Early empiricists such as John Locke saw knowledge
as determined only by experience and sense perception. A theory cannot claim to be scientific unless it
has been empirically tested, so empirical methods are essential to the scientific process.
3. Hypothesis testing is a key part of developing a theory. Theory construction depends on being able to
make clear and precise predictions on the basis of the theory (i.e. to state a number of possible
hypotheses). A hypothesis can then be tested using scientific methods to determine whether it will be
supported or refuted.
Paradigms are shared sets of assumptions and methods and a shift occurs as a result of a scientific
revolution. Once there is too much contradictory research to allow acceptance of a current paradigm,
then researchers tend to shift to different assumptions and methods.
4. Falsifiability is a key criterion of a scientific theory: any genuine theory needs to be both testable and capable of being proved wrong. Replicability, also a feature of science, means that we must be able to obtain the same findings across a number of settings in order to trust them and see how generalisable they are.
Page 57
1. Universality refers to any underlying characteristic of human beings that is capable of being applied to
all, despite differences of experience and upbringing. The universality of findings in psychology is threatened by gender bias.
Bias is a tendency to treat one individual or group in a different way from others; in terms of gender, that means treating males and females differently. Concerns of bias are raised when research or theories offer a view that might not justifiably represent the experience and behaviour of men or women (usually women).
2. Concerns about gender bias are raised when research or theories offer a view that might not justifiably
represent the experience and behaviour of men or women (usually women).
Alpha bias is said to occur where the differences between the sexes are presented as real, enduring,
fixed and inevitable. These differences occasionally heighten the value of women, but are more likely to
devalue females in relation to males. For example, it is claimed that sexual promiscuity in males is naturally selected and genetically determined, whereas females who engage in the same behaviour are seen as going against their ‘nature’. This amounts to an exaggeration of the difference between the sexes (alpha bias).
On the other hand, beta bias occurs when differences between men and women are underestimated; for
example, where female participants are not included in the research process and it is assumed that
research findings apply equally to both sexes. For example, early research into fight or flight was based
exclusively on male animals but the fight or flight response was assumed to be a universal response to a
threatening situation.
Alpha and beta bias are consequences of androcentrism (male-centredness). Psychology has
traditionally been a subject dominated by males – a list of 100 famous psychologists contained just six
females. This leads to female behaviour being misunderstood and even pathologised (taken as a sign
of illness).
3. Gender bias (in the form of alpha bias and beta bias) promotes sexism in the research process. Women
are underrepresented in university departments (Murphy et al. 2014). Research is more likely to be
conducted by males which may disadvantage females. For example, a male researcher may expect
female participants to be irrational and unable to complete complex tasks (Nicolson 1995), which may
mean they underperform. This means that the institutional structures and methods of psychology may
produce findings that are gender-biased.
A further limitation is research challenging bias may not be published. Formanowicz et al. (2018)
analysed 1000 articles relating to gender bias – such research is funded less often and is published by
less prestigious journals. This still held true when gender bias was compared to ethnic bias, and when
other factors were controlled (e.g. the gender of the author(s) and methodology). This suggests that
gender bias in psychological research may not be taken as seriously as other forms of bias.
4. Concerns about gender bias are raised when research or theories offer a view that might not justifiably
represent the experience and behaviour of men or women.
Alpha bias is a form of gender bias that exaggerates differences between males and females. Differences
between the sexes are usually presented as fixed and inevitable. These differences occasionally heighten
the value of women, but are more likely to devalue females in relation to males. Examples of alpha bias
include psychodynamic theory. Freud (1905) claimed children, in the phallic stage, desire their opposite-
sex parent. This is resolved by identification with their same-sex parent. But a girl’s identification is
weaker, creating a weaker Superego and weaker moral development. There are also examples of alpha
bias favouring females; for example, Chodorow (1968) said that daughters and mothers are more
connected than sons and mothers because of biological similarities – so women develop better bonds
and empathy for others.
Beta bias minimises the differences between men and women. Ignoring or underestimating differences
between men and women often occurs when female participants are not included in the research
process but it is assumed that research findings apply equally to both sexes. Alpha and beta bias are
consequences of androcentrism. Psychology has traditionally been a subject dominated by males – a
recent list of 100 famous psychologists contained just six females. This leads to female behaviour being
misunderstood and even pathologised (taken as a sign of illness).
The quote on the website is an example of alpha bias. Differences between males and females are
exaggerated – the suggestion that all men are better at maths, and all women are better at talking. This
creates misleading stereotypes about what males and females are capable of, and may impact upon their
choice of subjects at school, eventual career and their own perception of themselves.
Gender bias (in the form of alpha bias and beta bias) promotes sexism in the research process. Women
are underrepresented in university departments (Murphy et al. 2014). Research is more likely to be
conducted by males which may disadvantage females. For example, a male researcher may expect
female participants to be irrational and unable to complete complex tasks (Nicolson 1995), which may
mean they underperform. This means that the institutional structures and methods of psychology may
produce findings that are gender-biased.
A further limitation is research challenging bias may not be published. Formanowicz et al. (2018)
analysed 1000 articles relating to gender bias – such research is funded less often and is published by
less prestigious journals. This still held true when gender bias was compared to ethnic bias, and when
other factors were controlled (e.g. the gender of the author(s) and methodology). This suggests that
gender bias in psychological research may not be taken as seriously as other forms of bias.
Page 59
1. Universality refers to any underlying characteristic of human beings that is capable of being applied to
all, despite differences of experience and upbringing. The universality of findings in psychology is
threatened by culture bias as findings often only reflect the culture of those studied.
Bias is an inclination for or a prejudice against one individual or group. Concerns of bias are raised
when research or theories offer a view that might not justifiably represent the experience and
behaviour of different cultures. Culture bias means judging a particular behaviour from the
standpoint of one particular culture – usually a WEIRD one (Westernised, Educated people from
Industrialised, Rich Democracies) as this is where most research takes place. This means that any cultural
differences in behaviour will inevitably be seen as ‘abnormal’, ‘inferior’ or ‘unusual’.
2. Many critics argue that although psychology may claim to have unearthed truths about people all
over the world (universality), in reality, findings from studies only apply to the particular groups of
people who were studied (i.e. show cultural bias). Researchers have wrongly assumed that findings
from studies of WEIRD people (Westernised, Educated people from Industrialised, Rich
Democracies) can be applied all over the world. This bias can also lead to ethnocentrism, the belief
in the superiority of one’s own cultural group.
This is exemplified by Ainsworth and Bell’s Strange Situation research, which is criticised for reflecting only the norms and values of American culture. They identified that anxiety on
separation was the key defining variable of attachment type and that the ideal (or secure) attachment
was the infant showing moderate distress when left alone by the mother figure. When cultural variations
were noted they tended to be misinterpreted, for example German mothers were seen as cold and
rejecting rather than encouraging independence in their children. Thus the Strange Situation was
revealed as an inappropriate measure of attachment type for non-US children.
Ainsworth and Bell’s research illustrates an imposed etic – they studied behaviours within a single
culture (America) and then assumed their ideal attachment type could be applied universally.
3. Cultural relativism helps to avoid cultural bias. The ‘facts’ that psychologists discover may only make
sense from the perspective of the culture within which they were discovered. Being able to recognise
this is one way of avoiding cultural bias in research.
Cross-cultural research can challenge dominant individualist ways of thinking and viewing the world. This
may provide us with a better understanding of human nature. However, research (e.g. Ekman 1989)
suggests that facial expressions for emotions (such as disgust) are the same all over the world, so some
behaviours are universal. This suggests a full understanding of human behaviour requires both universal and culturally-relative perspectives, though for too long the universal view dominated.
4. A review found that 68% of research participants came from the United States, and 96% from
industrialised nations (Henrich et al. 2010). Another review found that 80% of research participants were
undergraduates studying psychology (Arnett 2008).
What we know about human behaviour has a strong cultural bias. Henrich et al. coined the term WEIRD
to describe the group of people most likely to be studied by psychologists – Westernised, Educated
people from Industrialised, Rich Democracies. If the norm or standard for a particular behaviour is set by
WEIRD people, then the behaviour of people from non-Western, less educated, agricultural and poorer
cultures is inevitably seen as ‘abnormal’, ‘inferior’ or ‘unusual’.
A key form of cultural bias is ethnocentrism which refers to the perceived superiority of one culture over
others. In psychological research this may be communicated through a view that any behaviour that
does not conform to a European/American standard is somehow deficient or underdeveloped. One
example of ethnocentrism is the Strange Situation. Ainsworth and Bell’s (1970) research on attachment
type reflected the norms of US culture. They suggested that ideal (secure) attachment was defined as a
baby showing moderate distress when left alone by the mother figure. This has led to misinterpretation
of child-rearing practices in other countries which deviated from the US norm, e.g. Japanese babies are
rarely left on their own, and are therefore more likely to be classed as insecurely attached as they
showed distress on separation (Takahashi 1986).
One limitation is that many classic studies in psychology are culturally-biased. Both Asch’s and Milgram’s
original studies were conducted with white middle-class US participants. Replications of these studies in
different countries produced rather different results. Asch-type experiments in collectivist cultures found
significantly higher rates of conformity than the original studies in the US, an individualist culture (e.g.
Smith and Bond 1993). This suggests our understanding of topics such as social influence should only be
applied to individualist cultures.
However, the individualism–collectivism distinction may no longer apply due to increasing global media,
e.g. Takano and Osaka (1999) found that 14 of 15 studies comparing the US and Japan found no evidence
of individualist versus collectivist differences. This suggests that cultural bias in research may be less of
an issue in more recent psychological research.
One strength of recognising cultural bias is the emergence of cultural psychology. Cultural psychology is
the study of how people shape and are shaped by their cultural experience (Cohen 2017). It is an
emerging field that takes an emic approach. Research is conducted from inside a culture, often alongside
local researchers using culturally-based techniques. Fewer cultures are considered when comparing
differences (usually just two). This suggests that modern psychologists are mindful of the dangers of
cultural bias and are taking steps to avoid it.
Another limitation is ethnic stereotyping. Gould (1981) explained how the first intelligence tests led to
eugenic social policies in America. During WWI psychologists gave IQ tests to 1.75 million army recruits.
Many test items were ethnocentric (e.g. name US presidents) so recruits from south-eastern Europe and
African-Americans scored lowest and were deemed genetically inferior. This illustrates how cultural bias
can be used to justify prejudice and discrimination towards ethnic and cultural groups.
Page 61
1. Hard determinism implies that free will is not possible as our behaviour is always caused by internal or
external events beyond our control, whereas soft determinism suggests behaviour can also be
determined by our free will in the absence of coercion.
Whilst hard determinism is completely compatible with the aims of science, which assume that what we
do is dictated by internal or external forces that we cannot control, soft determinism requires scientists
to determine the forces acting upon us whilst acknowledging our freedom to make choices.
2. Biological determinism is the belief that behaviour is caused by biological (genetic, hormonal,
evolutionary) influences that we cannot control. This is of course associated with the biological
approach.
Environmental determinism is the belief that behaviour is caused by features of the environment (such
as systems of reward and punishment) that we cannot control. This is very much associated with the
behaviourist approach.
Psychic determinism is the belief that behaviour is caused by unconscious conflicts that we cannot
control. An example of such an approach is the psychodynamic model.
3. One strength is evidence supports determinism. Libet et al. (1983) asked participants to randomly flick
their wrist and say when they felt the will to move. Brain activity was also measured. The unconscious
brain activity leading up to the conscious decision to move came half a second before the participant’s
conscious decision to move. This may be interpreted as meaning that even our most basic experiences of
free will are actually determined by our brain before we are aware of them.
One limitation of determinism is the role of responsibility in law. The hard determinist stance is not
consistent with the way in which our legal system operates. In court, offenders are held responsible for
their actions. Indeed, the main principle of our legal system is that the defendant exercised their free will
in committing the crime. This suggests that, in the real world, determinist arguments do not work.
Determinism places psychology on an equal footing with other more established sciences and has
led to valuable real-world applications, such as therapies. However free will has intuitive appeal.
Most of us see ourselves as making our own choices rather than being ‘pushed’ by forces we cannot
control. Some people (e.g. a child of a criminal parent) prefer to think that they are free to self-
determine. This suggests that if psychology wants to position itself alongside the natural sciences,
determinist accounts are likely to be preferred. However, common-sense experience may be better
understood by an analysis of free will.
4. The free will and determinism debate centres on whether or not human beings are free to choose
their thoughts and actions or whether the biological and environmental influences on our behaviour are
causal. Hard determinism (fatalism) refers to the view that all human action has a cause and it should be
possible to identify these causes. Soft determinism refers to the view that human action has a cause but
people also have conscious mental control over behaviour.
Biological determinism is the belief that behaviour is caused by biological (genetic, hormonal,
evolutionary) influences that we cannot control. This is of course associated with the biological
approach. Environmental determinism is the belief that behaviour is caused by features of the
environment (such as systems of reward and punishment) that we cannot control. This is very much
associated with the behaviourist approach. Psychic determinism is the belief that behaviour is caused by
unconscious conflicts that we cannot control and is typically associated with the psychodynamic
approach.
The notion that human behaviour is orderly and obeys laws places psychology on an equal footing with
other more established sciences, increasing its credibility. Another strength is that the prediction and
control of human behaviour has led to the development of treatments and therapies (e.g. drug
treatments to manage schizophrenia). The experience of schizophrenia (losing control over thoughts and
behaviour) suggests some behaviours are determined (no-one ‘chooses’ to have schizophrenia).
One strength of free will is it has practical value. Roberts et al. (2000) looked at adolescents who had a
strong belief in fatalism – that their lives were ‘decided’ by events outside of their control. These
individuals were at greater risk of developing depression. People who exhibit an internal, rather than
external, locus of control are more likely to be optimistic. This suggests that, even if we do not have free
will, the fact that we believe we do may have a positive impact on mind and behaviour.
One limitation is evidence doesn’t support free will, it supports determinism. Libet et al. (1983) asked
participants to randomly flick their wrist and say when they felt the will to move. Brain activity was also
measured. The unconscious brain activity leading up to the conscious decision to move came half a
second before the participant’s conscious decision to move. This may be interpreted as meaning that
even our most basic experiences of free will are actually determined by our brain before we are aware of
them.
However, the fact that people only become consciously aware of a decision milliseconds after the brain has begun to enact it does not mean they did not make the decision to act. Our consciousness of the decision may simply be a ‘read-out’ of our sometimes unconscious decision-making. This suggests such evidence is not an appropriate challenge to free will.
One limitation of determinism is the role of responsibility in law. The hard determinist stance is not
consistent with the way in which our legal system operates. In court, offenders are held responsible for
their actions. Indeed, the main principle of our legal system is that the defendant exercised their free will
in committing the crime. This suggests that, in the real world, determinist arguments do not work.
Page 63
1. The debate centres around the contributions of the two factors to the development of a behaviour.
Early nativists (e.g. Descartes) argued that human characteristics are innate, in other words the result of
heredity and genes, whereas empiricists (e.g. Locke) argue that the mind is a blank slate at birth upon
which experience writes.
As an example, IQ has been researched and the heritability is said to be around 0.5 which suggests that
both nature (genes) and nurture (environment) have a role to play in the development of someone’s
intelligence.
2. The nature–nurture debate centres around the contributions of these two factors to the development
of a behaviour. Nativists argue that human characteristics are innate, in other words the result of
heredity and genes, whereas empiricists argue that the mind is a blank slate at birth upon which
experience writes.
Extreme beliefs in the influence of heredity or environment may have negative implications for how we
view human behaviour. Nativists suggest genes determine behaviour and characteristics (‘anatomy is
destiny’). This has led to controversies such as linking ethnicity to eugenics policies. So increasing
recognition that human behaviour is influenced by both nature and nurture is a more reasonable way to
approach the study of human behaviour.
A strong commitment to either a nature or nurture position corresponds to a belief in hard determinism.
The nativist perspective suggests ‘anatomy is destiny’, whilst empiricists argue that interaction with the
environment is all. These equate to biological determinism and environmental determinism, showing
how nature–nurture links to other debates.
3. The extreme nativist stance is determinist and has led to controversy, e.g. linking ethnicity, genetics
and intelligence, and eugenic policies. Empiricists suggest that any behaviour can be changed by altering
environmental conditions (e.g. aversion therapy). This may lead to a society that controls and
manipulates its citizens. This shows that both positions, taken to extremes, may have dangerous
consequences for society so a moderate, interactionist position is preferred.
4. Early nativists (e.g. Descartes, 17th century) argued that human characteristics are innate – the result of our genes. Psychological characteristics (e.g. intelligence or personality) are determined by biological factors, just like eye colour or height; this is nature, the influence of heredity. Empiricists (e.g. Locke, 17th century) argued the mind is a blank slate at birth, shaped by interaction with the environment (nurture), e.g. the behaviourist approach. The nature–nurture debate considers the relative contribution of each of these influences.
The nature–nurture debate is not really a debate because all characteristics combine nature and nurture
(even eye colour is only .80 heritable). For example attachment can be explained in terms of quality of
parental love (Bowlby 1958) or child’s temperament (Kagan 1984). Environment and heredity therefore
interact.
This suggests that the quote in the question is correct, that nature and nurture cannot be separated and
we are all a combination of both influences. It is meaningless to try and disentangle the two influences as they both influence each other as soon as (and possibly before) we are born.
One strength in nature–nurture research is adoption studies. If adopted children are more similar to
their adoptive parents, this suggests environmental influence. If they are more similar to their biological
parents, this suggests genetic influence. Rhee and Waldman (2002) found in a meta-analysis of adoption
studies that genetic influences accounted for 41% of variance in aggression. This shows how research can
separate nature and nurture influences.
However, children create their own nurture by selecting environments appropriate to their nature. For
example, a naturally aggressive child will choose aggressive friends and become more aggressive (what
Plomin called ‘niche-picking’). This suggests that it does not make sense to look at evidence of either
nature or nurture.
Another strength of the nature–nurture debate is support for epigenetics. In 1944, the Nazis blocked the
distribution of food to the Dutch people and 22,000 died of starvation (the Dutch Hunger Winter). Susser
and Lin (1992) found that women who became pregnant during the famine had low birth weight babies
who were twice as likely to develop schizophrenia. This suggests that the life experiences of previous
generations can leave epigenetic ‘markers’ that influence the health of their offspring.
Page 65
1. The notion of levels of explanation suggests there are different ways of viewing the same phenomena
in psychology and some are more reductionist than others. (Reductionist approaches analyse behaviour
by breaking it down into smaller units.)
For example, OCD may be understood at a socio-cultural level as it involves behaviour most people
would regard as odd (e.g. repetitive hand-washing). It could also be understood at a psychological
level by focusing on the individual's experience of having obsessive thoughts, or a physical level
involving the sequence of movements involved in washing one’s hands, or an environmental level
involving learning experiences. The level of reductionism increases in each case with the
physiological level focusing on abnormal functioning in the frontal lobes and the neurochemical level
explaining the OCD in terms of underproduction of serotonin.
2. Biological reductionist stances assume that all behaviour is at some level biological and can be explained
through neurochemical, neurophysiological, evolutionary and genetic influences. This assumption has
been successfully applied to the explanation and treatment of mental disorder, for example depression
treatment by antidepressants.
The behaviourist approach is built on environmental reductionism and behaviourists study observable
behaviour, breaking complex learning down into simple stimulus–response links.
In an environmental reductionist approach the key unit of analysis occurs at the physical level. The
behaviourist approach, as an example, is not concerned with cognitive processes at the psychological
level. Instead the mind is regarded as a 'black box' and irrelevant to our understanding of behaviour. This
approach has been successfully applied in behaviour management in schools and other settings.
3. One strength of reductionism is its scientific status. In order to conduct well-controlled research, variables need to be operationalised – target behaviours broken down into constituent parts. This makes it possible to conduct experiments or record observations (behavioural categories) in a way that is objective and reliable. This scientific approach gives psychology greater credibility, placing it on equal terms with the natural sciences.
However, reductionist explanations at the level of the gene or neurotransmitter do not include an
analysis of the context within which behaviour occurs and therefore lack meaning. This suggests that
reductionist explanations can only ever form part of an explanation.
One limitation of reductionism is the need for higher level explanations. There are aspects of social
behaviour that only emerge within a group context and cannot be understood in terms of the individual
group members. For example, the Stanford prison study could not be understood by observing the
participants as individuals, it was the behaviour of the group that was important. This shows that, for
some behaviours, higher (or even holistic) level explanations provide a more valid account.
4. The holism–reductionism debate discusses which position is preferable for psychology – study the
whole person (holism) or study component parts (reductionism)? As soon as you break down the ‘whole’
it isn’t holistic. Reductionism can be broken down into levels of explanation. Holism proposes that it only
makes sense to study a whole system – the whole is greater than the sum of its parts (Gestalt
psychology). For example, humanistic psychology focuses on experience, which can’t be reduced to biological units, and qualitative methods investigate themes.
Reductionism is based on the scientific principle of parsimony – that all phenomena should be explained
using the simplest (lowest level) principles. For example, OCD may be understood in different ways:
• Socio-cultural level – behaviour most people would regard as odd (e.g. repetitive handwashing).
• Psychological level – the individual’s experience of having obsessive thoughts.
• Physical level – the sequence of movements involved in washing one’s hands.
• Environmental/behavioural level – learning experiences (conditioning).
• Physiological level – abnormal functioning in the frontal lobes.
• Neurochemical level – underproduction of serotonin.
We can argue about which is the ‘best’ explanation of OCD, but each level is more reductionist than the
one before.
One limitation of holism is that it may lack practical value. Holistic accounts of human behaviour become
hard to use as they become more complex, which presents researchers with a practical dilemma. If many
different factors contribute to, say, depression, then it becomes difficult to know which is most
influential and which to prioritise for treatment. This suggests that holistic accounts may lack practical
value (whereas reductionist accounts may be better).
One strength of reductionism is its scientific status. In order to conduct well-controlled research,
variables need to be operationalised – target behaviours broken down into constituent parts. This makes it
possible to conduct experiments or record observations (behavioural categories) in a way that is
objective and reliable. This scientific approach gives psychology greater credibility, placing it on equal
terms with the natural sciences.
However, reductionist explanations at the level of the gene or neurotransmitter do not include an
analysis of the context within which behaviour occurs and therefore lack meaning. This suggests that
reductionist explanations can only ever form part of an explanation.
One limitation of reductionism is the need for higher level explanations. There are aspects of social
behaviour that only emerge within a group context and cannot be understood in terms of the individual
group members. For example, the Stanford prison study could not be understood by observing the
participants as individuals, it was the behaviour of the group that was important. This shows that, for
some behaviours, higher (or even holistic) level explanations provide a more valid account.
Page 67
1. Idiographic approaches aim to describe the nature of the individual, so people are studied as unique
entities with their own subjective experiences, motivations and values. There is no attempt to compare
these to a larger group standard or norm. Humanistic psychology is the best example of this approach.
Rogers and Maslow were interested only in documenting the conscious experience of the individual or
‘self’, rather than producing general laws of behaviour.
The nomothetic approach aims to produce general laws of behaviour which can then provide a
benchmark against which people can be compared, classified and measured. Future behaviour can also
be predicted and controlled if necessary. Behaviourist research would meet the criteria of the
nomothetic approach, for example when it tests the response of a large group of people to a given
stimulus.
2. Idiographic approaches aim to describe the nature of the individual so people are studied as unique
entities with their own subjective experiences, motivations and values, whereas the nomothetic
approach aims to produce general laws of behaviour which can then provide a benchmark against which
people can be compared, classified and measured.
Idiographic approaches typically employ psychological methods which produce qualitative data such as
case studies and unstructured interviews, whereas the nomothetic approach would be more likely to use
questionnaires and psychological tests to establish how people are similar to or different from one
another.
3. One limitation of the nomothetic approach is that it focuses on general laws and may ‘lose the whole
person’ within psychology. For example, knowing that there is a 1% lifetime risk of schizophrenia says
little about what having the disorder is actually like – an understanding which might be useful when
developing therapies. This means that, in its search for generalities, the nomothetic approach may
sometimes fail to relate to ‘experience’.
4. The idiographic–nomothetic debate is a debate over which position is preferable for psychology: the
detailed study of one individual or one group to provide in-depth understanding (idiographic), or the
study of larger groups with the aim of discovering norms, universal principles or ‘laws’ of behaviour
(nomothetic). The two approaches may both have a place within a scientific study of the person.
The nomothetic approach is associated with quantitative research. General principles of behaviour (laws)
are developed which are then applied in individual situations, such as in therapy. Hypotheses are
formulated, samples of people (or animals) are gathered and data analysed for its statistical significance.
Nomothetic approaches seek to quantify (count) human behaviour.
One strength is that the idiographic and nomothetic approaches work together. The idiographic approach
uses in-depth qualitative methods, which complement the nomothetic approach by providing detail. In-depth
case studies such as HM (damaged memory) may reveal insights about normal functioning which
contribute to our overall understanding. This suggests that even though the focus is on fewer individuals,
the idiographic approach may help form ‘scientific’ laws of behaviour.
However, the idiographic approach on its own is restricted as there is no baseline for comparison. This
suggests that it is difficult to build effective general theories of human behaviour in the complete
absence of nomothetic research.
Another strength is that both approaches fit with the aims of science. Nomothetic research (like the
natural sciences) seeks objectivity through standardisation, control and statistical testing. Idiographic
research also seeks objectivity through triangulation (comparing a range of studies), and reflexivity
(researchers examine their own biases). This suggests that both the nomothetic and idiographic
approaches raise psychology’s status as a science.
One limitation of the nomothetic approach is that it focuses on general laws and may ‘lose the whole
person’ within psychology. For example, knowing that there is a 1% lifetime risk of schizophrenia says
little about what having the disorder is actually like – an understanding which might be useful when
developing therapies. This means that, in its search for generalities, the nomothetic approach may
sometimes fail to relate to ‘experience’.
Page 69
1. Ethical implications are the impact that psychological research may have at a societal level, in terms of
influencing public policy and/or the way in which certain groups of people are regarded. For example,
Bowlby’s argument, that mother love in infancy is as important for mental health as vitamins are for
physical health, influenced the way in which at least one generation of children were raised. It may have
also influenced the UK government’s decision not to offer free child care places to children under five
(despite the fact that this is typical in other European countries).
2. The phrasing of the research question influences how the findings are interpreted. For example, if a
research study is looking at ‘alternative relationships’ this is likely to focus on homosexual relationships
and may overlook heterosexual ones because ‘alternative’ suggests alternative to heterosexual
relationships (Kitzinger and Coyle 1995). Also, there is the issue of dealing with participants which may
include informed consent, confidentiality and psychological harm. For example, when interviewing
victims of domestic abuse, participants may worry about an ex-partner finding out what they said and
also participants may find the experience of talking about abusive experiences stressful.
3. Same as above.
4. Psychologists must be aware of the consequences of research for the research participants or for the
group of people represented by the research. Some research is more socially sensitive (e.g. studying
depression) but even seemingly innocuous research (e.g. long-term memory in a student population)
may have consequences (e.g. for exam policy).
The potential consequences of research studies and/or theories should be considered at all stages of the
research process. For instance, the phrasing of the research question influences how the findings are
interpreted. For example, if a research study is looking at ‘alternative relationships’ this is likely to focus
on homosexual relationships and may overlook heterosexual ones because ‘alternative’ suggests
alternative to heterosexual relationships (Kitzinger and Coyle 1995). Another issue is dealing with
participants and the issues of informed consent, confidentiality and psychological harm. For example,
when interviewing victims of domestic abuse, participants may worry about an ex-partner finding out
what they said and also participants may find the experience of talking about abusive experiences
stressful. Finally, there is a need to consider in advance how findings might be used, especially because
findings may give scientific credence to prejudices. For example, the use of early (flawed) IQ tests in
America during World War I led to prejudice against Eastern Europeans and lower immigration quotas.
One strength of socially sensitive research (SSR) is the benefits for the group that is studied. The DSM-I
listed homosexuality as a ‘sociopathic personality disorder’ but this was finally removed in 1973, as a
result of the Kinsey report (Kinsey et al. 1948). Anonymous interviews with over 5000 men about their
sexual behaviour concluded that homosexuality is a normal variant of human sexual behaviour. This
illustrates the importance of researchers tackling topics that are sensitive.
However there may be negative consequences that could have been anticipated, e.g. research on the
‘criminal gene’ implies that people can’t be held responsible for their wrongdoing. This suggests that,
when researching socially sensitive topics, there is a need for very careful consideration of the possible
outcomes and their consequences.
Another strength is that policymakers rely on SSR. The government needs research when developing
social policy related to child care, education, mental health provision, crime etc. It is better to base such
policies on scientific research rather than politically-motivated views. For example the ONS (Office for
National Statistics) is responsible for collecting, analysing and disseminating objective statistics about the
UK’s economy, society and population. This means that psychologists also have an important role to play
in providing high quality research on socially sensitive topics.
Finally, one limitation is that poor research design may have a long-term impact. For example, Burt’s
(1955) research claimed that IQ is genetic, fixed and apparent by age 11. This led to the 11+ exam which
meant not all children had the same educational opportunities. The research was later revealed to be
based on invented evidence but the system didn’t change and continues in parts of the UK today (e.g.
Kent and Belfast). Therefore any SSR needs to be planned with the greatest care to ensure the findings
are valid because of the enduring effects on particular groups of people.
Chapter 5 Relationships
Page 71
2. Partner preference is based on anisogamy. Male sperm are continuously produced from puberty
to old age whereas female ova are produced at intervals for a limited time. Therefore there are
many fertile males but fewer fertile females. This gives rise to two strategies for choosing a partner.
The preferred strategy of females is inter-sexual. Quality of mates is more important than quantity
because ova are relatively rare. The female makes a greater investment of resources before, during
and after the birth of her offspring. Although both sexes are choosy, the consequences of making a
wrong choice are much more serious for the female than for the male. Therefore, the female’s
optimum mating strategy is to select a genetically fit partner who can provide resources. This leaves
the males competing for the opportunity to mate with the fertile female and the attributes of fit
partners are passed on through generations.
The preferred strategy of males is intra-sexual. Quantity of mates is more important than quality because
sperm is plentiful. So males prefer youthful partners because they are likely to be fertile, and males seek
indicators of fertility such as a narrow waist and wider hips. The winner of competition between males for
a fertile female is the one who reproduces and passes on to his offspring the characteristics that
contributed to his victory. These include physical characteristics such as size. But, more controversially,
they also include psychological and behavioural characteristics such as deceitfulness, intelligence and
aggression. Female preference drives the evolutionary selection of these characteristics in males.
3. One limitation is that social and cultural influences are underestimated. Partner preferences have
been impacted over time by changing social norms and cultural practices. The changes have
occurred too rapidly to be explained in evolutionary terms. The wider availability of contraception
and changing roles in the workplace mean women’s partner preferences are no longer resource-
oriented (Bereczkei et al. 1997). This suggests that partner preferences today are likely to be due to
both evolutionary and cultural influences – a theory which fails to explain both is limited.
However, one strength of the relationship is evidence supporting intra-sexual selection. Buss’s (1989)
cross-cultural study in 33 countries found that females still placed greater value on resource-related
attributes than males did (e.g. ambition, prospects). Males valued physical attractiveness and youth
(indicator of fertility) more than females did. This outcome reflects consistent gender differences in
partner preferences that support predictions of sexual selection.
4. Partner preference is based on anisogamy. Male sperm are continuously produced from puberty
to old age whereas female ova are produced at intervals for a limited time. Therefore there are
many fertile males but fewer fertile females. This gives rise to two strategies for choosing a partner.
The preferred strategy of females is inter-sexual. Quality of mates is more important than quantity
because ova are relatively rare. The female makes a greater investment of resources before, during
and after the birth of her offspring. Although both sexes are choosy, the consequences of making a
wrong choice are much more serious for the female than for the male. Therefore, the female’s
optimum mating strategy is to select a genetically fit partner who can provide resources. This leaves
the males competing for the opportunity to mate with the fertile female and the attributes of fit
partners are passed on through generations.
The preferred strategy of males is intra-sexual. Quantity of mates is more important than quality because
sperm is plentiful. So males prefer youthful partners because they are likely to be fertile, and males seek
indicators of fertility such as a narrow waist and wider hips. The winner of competition between males for
a fertile female is the one who reproduces and passes on to his offspring the characteristics that
contributed to his victory. These include physical characteristics such as size. But, more controversially,
they also include psychological and behavioural characteristics such as deceitfulness, intelligence and
aggression. Female preference drives the evolutionary selection of these characteristics in males.
Research into inter-sexual selection is also supportive of the relationship. Clark and Hatfield (1989) sent
students to approach other students and ask, ‘I have been noticing you around campus. I find you to be
very attractive. Would you go to bed with me tonight?’. No female students agreed in response to
requests from males. But 75% of males did agree to female requests. This supports the suggestion of
female choosiness and that males have evolved a different strategy to ensure their reproductive success.
However, the relationship suggested by evolutionary theory between sexual selection and reproductive
behaviour is simplistic. It is not the case that one strategy is adaptive for all males and another strategy
adaptive for all females. It depends on the length of the relationship. Buss and Schmitt (2016) argue that
males and females seeking long-term relationships in fact adopt similar mating strategies – both are very
choosy and look for loyalty and kindness. The true picture of partner preference is complex and nuanced
because it takes account of the context of reproductive behaviour.
Another limitation is that social and cultural influences are underestimated. Partner preferences have
been impacted over time by changing social norms and cultural practices. The changes have occurred too
rapidly to be explained in evolutionary terms. The wider availability of contraception and changing roles
in the workplace mean women’s partner preferences are no longer resource-oriented (Bereczkei et al.
1997). This suggests that partner preferences today are likely to be due to both evolutionary and cultural
influences – a theory which fails to explain both is limited.
Despite this, another strength of the relationship is some evidence supporting intra-sexual selection.
Buss’s (1989) cross-cultural study in 33 countries found that females still placed greater value on
resource-related attributes than males did (e.g. ambition, prospects). Males valued physical
attractiveness and youth (indicator of fertility) more than females did. This outcome reflects
consistent gender differences in partner preferences that support predictions of sexual selection.
Page 73
1. Self-disclosure is the concept of revealing personal information about yourself. Romantic partners
reveal more about their true selves as their relationship develops. These self-disclosures about one’s
deepest thoughts and feelings can strengthen a romantic bond when used appropriately.
2. Altman and Taylor (1973) proposed the social penetration theory of self-disclosure, suggesting that
self-disclosure is quite limited at the start of a relationship. As the relationship progresses there is a
gradual process of revealing your inner self to your partner. Revealing personal information is a sign of
trust and the partner will reciprocate with revealing information about themselves, too. Increasing
disclosure means that the romantic partners tend to penetrate into each other’s lives and gain a greater
understanding of each other.
Both breadth and depth of self-disclosure are key according to the social penetration theory. Breadth is
narrow at the start of a relationship because if too much information is revealed this may be off-putting
and one partner may decide to quit the relationship. As a relationship develops, more layers are
gradually revealed and we are likely to reveal more intimate information including painful memories,
secrets, etc.
Reis and Shaver (1988) suggest that, in addition to a broadening and deepening of self-disclosure, there
must be reciprocity. In other words, successful relationships involve disclosure from one partner which is
received sensitively by the other, and this in turn should lead to further self-disclosure from the
receiving partner.
3. One strength is that there is some support from research studies. For example, Sprecher and Hendrick
(2004) found strong correlations between several measures of satisfaction and self-disclosure in
heterosexual couples. Men and women who used self-disclosure (and those who believed their partners
also disclosed) were more satisfied with and committed to their romantic relationship. Whilst this is
supportive of a relationship between self-disclosure and satisfaction we cannot make causal assumptions
from such data. Therefore, it gives only limited support to the concept of self-disclosure being a key
component of committed romantic relationships.
4. Altman and Taylor (1973) proposed the social penetration theory of self-disclosure, suggesting that
self-disclosure is quite limited at the start of a relationship. As the relationship progresses there is a
gradual process of revealing the inner self to the partner. In the case of Asad and Sabiha, if they were on
a first date it would not be expected that very personal information would be revealed. Sabiha is
therefore likely to be surprised at Asad’s revelation that he wants to get married and start a family very
soon.
Increasing disclosure means that the romantic partners tend to penetrate into each other’s lives and gain
a greater understanding of each other. So Sabiha would have expected that level of disclosure to have
occurred more gradually than it did.
According to the social penetration theory, breadth of self-disclosure is narrow at the start of a
relationship and if too much information is revealed this may be off-putting and one partner may decide
to quit the relationship as happened here when Sabiha decided not to go on a second date.
Reis and Shaver (1988) suggest that, in addition to a broadening and deepening of self-disclosure, there
must be reciprocity. In this scenario, Sabiha seemed unwilling to engage in similar levels of self-
disclosure, and therefore the trust that comes from reciprocal disclosure did not develop and the
relationship faltered.
There is evidence in support of the role of self-disclosure. Sprecher and Hendrick (2004) found strong
correlations between several measures of satisfaction and self-disclosure in heterosexual couples. Men
and women who used self-disclosure (and those who believed their partners also disclosed) were more
satisfied with and committed to their romantic relationship. Whilst this is supportive of a relationship
between self-disclosure and satisfaction we cannot make causal assumptions from such data. Therefore,
it gives only limited support to the concept of self-disclosure being a key component of committed
romantic relationships.
Haas and Stafford (1998) also found that 57% of homosexual men and women reported open and honest
self-disclosure was a maintenance strategy. Couples used to ‘small talk’ can be encouraged to increase
self-disclosure in order to deepen their own relationships. This highlights the importance of self-
disclosure and suggests the theory can be used to support people having relationship problems.
However, Tang et al. (2013) concluded that people in the US (an individualist culture) self-disclose
significantly more sexual thoughts and feelings than people in China (a collectivist culture). Both levels of
self-disclosure are linked to relationship satisfaction in those cultures but nevertheless the pattern of
self-disclosure is different. Social penetration theory is therefore a limited explanation of romantic
relationships and not necessarily generalisable to other cultures.
Page 75
1. Shackelford and Larsen (1997) found that people with symmetrical faces are rated as more attractive.
It is thought that this is a signal of genetic fitness that cannot be faked (which makes it an ‘honest’
signal). The associated ‘robust’ genes are likely to be passed on and therefore symmetry is perpetuated.
Research by Dion et al. (1972) found that physically attractive people are consistently rated as kind,
strong, sociable and successful compared with unattractive people. This is known as the ‘halo effect’ and
suggests we hold preconceived ideas about the attributes of attractive people. We believe that all their
other attributes are overwhelmingly positive.
On the other hand, the matching hypothesis (Walster et al. 1966) states that people choose romantic
partners who are roughly of similar physical attractiveness to each other, rather than the most attractive
available. To do this we have to make a realistic judgement about our own ‘value’ to a potential partner
and choose one of similar rating to avoid potential rejection.
2. Psychologists have sought to explain why physical attractiveness seems to be quite so important in
forming relationships. Whilst the evolutionary explanation would propose that we would seek attractive
mates either because this is a sign of genetic fitness (e.g. facial symmetry) or that it promotes caring
instincts (e.g. neotenous, baby-like features), the matching hypothesis by Walster et al. (1966) proposes
that rather than seeking the most attractive partner, as might be suggested by evolutionary theories, we
in fact choose a partner whose attractiveness matches ours.
To do this we need to assess our own value to a potential partner. For example, if we judge ourselves as
6/10 then we are likely to seek a mate of a similar level of attractiveness.
The argument is that by seeking the most attractive mate (who may provide us with the best offspring)
we run the risk of being rejected because the partner we aim for is ‘out of our league’ in terms of
attractiveness. As such we choose someone who we judge to be in the ‘same league’.
3. A limitation of the matching hypothesis is that it is challenged by real-world research outside the
lab.
Taylor et al. (2011) studied actual online date choices rather than preferences. Their analysis of
activity logs of an online dating site showed that daters generally wanted to meet with potential
partners who were more physically attractive than themselves. The matching hypothesis predicts
that daters would limit their choices to people of a similar level of attractiveness, so this contradicts
its central prediction.
Therefore the matching hypothesis may lack validity because it does not generalise easily from
laboratory situations to physical attractiveness in the real world.
However, Walster et al.’s (1966) initial study failed to support the theory as they found students
preferred partners who were more physically attractive rather than matching their level. On the other
hand, Feingold’s (1988) meta-analysis of studies of ‘actual’ partners found a significant correlation in
ratings of attractiveness between them. These findings from more realistic studies support the
hypothesis even though the original studies did not.
There is also cultural consistency in what is considered attractive. Cunningham et al. (1995) found large
eyes, small nose and prominent cheekbones in females were rated as highly attractive by white, Asian
and Hispanic males. This consistency suggests physical attractiveness is culturally independent and may
have evolutionary roots.
A second factor affecting attraction is self-disclosure. Altman and Taylor (1973) proposed the social
penetration theory suggesting that self-disclosure is quite limited at the start of a relationship. As the
relationship progresses there is a gradual process of revealing the inner self to the partner. Revealing
personal information is a sign of trust and the partner will reciprocate with revealing information about
themselves too. Increasing disclosure means that the romantic partners tend to penetrate into each
other’s lives and gain a greater understanding of each other.
There is evidence in support of the role of self-disclosure. Sprecher and Hendrick (2004) found strong
correlations between several measures of satisfaction and self-disclosure in heterosexual couples. Men
and women who used self-disclosure (and those who believed their partners also disclosed) were more
satisfied with and committed to their romantic relationship. This supports the concept of self-disclosure
being a key component of committed romantic relationships.
Haas and Stafford (1998) also found that 57% of homosexual men and women reported open and honest
self-disclosure was a maintenance strategy. Couples used to ‘small talk’ can be encouraged to increase
self-disclosure in order to deepen their own relationships. This highlights the importance of self-
disclosure and suggests the theory can be used to support people having relationship problems.
However, Tang et al. (2013) concluded that people in the US (an individualist culture) self-disclose
significantly more sexual thoughts and feelings than people in China (a collectivist culture). Both levels of
self-disclosure are linked to relationship satisfaction in those cultures but nevertheless the pattern of
self-disclosure is different. Social penetration theory is therefore a limited explanation of romantic
relationships and not necessarily generalisable to other cultures.
Page 77
1. Social demography refers to factors such as geographical location and social class. These two, for
example, have been shown to rule out a large number of available partners. This means many
relationships are formed between partners who share similar characteristics (homogamy).
Similarity in attitudes refers to the fact we find partners who share our basic values attractive in the
earlier stages of a relationship, so we tend to discount available individuals who differ markedly from us
in their attitudes.
Complementarity refers to the fact that similarity becomes less important as a relationship develops, and
is replaced by a need for your partner to balance your traits with opposite ones of their own.
2. According to the filter theory there are three main factors that act as filters to narrow down our range
of partner choice to a field of desirables. The first is social demography. This includes geographical
location (or proximity), social class and level of education, for example. You are much more likely to meet
people who are physically close and share several of these demographic characteristics. The key benefit
of proximity is accessibility. It doesn’t require much effort to meet people who live in the same area, go
to the same school or university, and so on.
The second filter is similarity in attitudes. Partners will often share important beliefs and values, partly
because the field of availables has already been narrowed by the first filter to those who have significant
social and cultural characteristics in common. There is a need for partners in the earlier stages of a
relationship to agree over basic values, the things that really matter to them. Similarity is such a powerful
influence on attraction in the early stages that Byrne (1997) calls it the law of attraction.
3. One strength is research support for two filters from Kerckhoff and Davis’s original (1962) study. They
found that relationship closeness was associated with similarity of values in partners who had been
together less than 18 months. Complementarity of needs was associated with closeness after this time.
This supports the view that similarity is important in the early stages of a relationship, but
complementarity is more important later on, as predicted by the theory.
One limitation is the lack of replication of the original findings. Levinger (1974) has suggested that
social change and difficulties in defining the depth of a relationship could be the reason for lack of
replicability. Kerckhoff and Davis (1962) assumed that partners who had been together for over 18
months were more committed but this might not be the case in all cultures or cases today. This
suggests that filter theory is based on research evidence that lacks validity.
Another limitation is that actual similarity may be less important than perceived similarity. Montoya
et al.’s (2008) meta-analysis showed that actual similarity affected attraction only in short-term lab-
based interactions. In real-world relationships, perceived similarity was a stronger predictor of
attraction. This implies that the theory has the link the wrong way round – partners perceive greater
similarity as they become more attracted, so similarity is an effect of attraction and not a cause.
A final limitation is that complementarity does not always predict satisfaction. For instance, Markey
and Markey (2013) found greatest relationship satisfaction in lesbian couples of equal dominance.
Therefore similarity of needs rather than complementarity may be associated with long-term
satisfaction, at least in some couples.
4. According to filter theory, there are three main factors that act as filters to narrow down our range of
partner choice to a field of desirables. The first is social demography. This includes geographical location
(or proximity), social class, level of education, etc. You are more likely to meet people who are physically
close and share these demographic characteristics. This supports the view that ‘birds of a feather flock
together’, meaning that people are attracted to people similar to themselves in these ways. The filter
theory would therefore suggest that we actually filter out ‘opposites’ rather than being attracted to
them.
The second filter is similarity of attitudes. Partners will often share important beliefs and values, partly
because the field of availables has already been narrowed by the first filter to those who have significant
social and cultural characteristics in common. There is a need for partners in the earlier stages of a
relationship to agree over basic values, the things that really matter to them. So in terms of the quote
the filter theory certainly suggests that once again birds of a feather flock together and similarity is
attractive.
One strength is research support for two filters from Kerckhoff and Davis’s original (1962) study. They
found that relationship closeness was associated with similarity of values in partners who had been
together less than 18 months. Complementarity of needs was associated with closeness after this time.
This supports the view that ‘birds of a feather flock together’ in the early stages of a relationship, but
opposites are attractive (complementarity) later on, as predicted by the theory.
One limitation is the lack of replication of the original findings. Levinger (1974) has suggested that
social change and difficulties in defining the depth of a relationship could be the reason for lack of
replicability. Kerckhoff and Davis (1962) assumed that partners who had been together for over 18
months were more committed but this might not be the case in all cultures or cases today. This
suggests that filter theory is based on research evidence that lacks validity.
Another limitation is that actual similarity may be less important than perceived similarity. Montoya
et al.’s (2008) meta-analysis showed that actual similarity affected attraction only in short-term lab-
based interactions. In real-world relationships, perceived similarity was a stronger predictor of
attraction. This implies that the theory has the link the wrong way round – birds of a feather flock
together as they become more attracted, so similarity is an effect of attraction and not a cause.
A final limitation is that complementarity does not always predict satisfaction. For instance, Markey
and Markey (2013) found greatest relationship satisfaction in lesbian couples of equal dominance.
Therefore similarity of needs rather than complementarity may be associated with long-term
satisfaction (which suggests that birds of a feather continue to flock together in longer-term
relationships), at least in some couples.
Page 79
1. Social exchange theory assumes that romantic partners act out of self-interest in exchanging rewards
and costs. A satisfying and committed relationship is maintained when rewards exceed costs and
potential alternatives are less attractive than the current relationship.
Rewards could include companionship, sex, and emotional support. Costs might include time, stress,
energy, compromise, and so on. Also, in economic terms, a relationship incurs another kind of cost: an
opportunity cost. Your investment of time and energy in your current relationship means using resources
that you cannot invest elsewhere – this could include other romantic relationships or even friendships.
The theory assumes that we measure the profit in a romantic relationship first by the comparison level
(CL) – the amount of reward that you believe you deserve to get. It develops out of our experiences of
previous relationships and social norms which feed into our expectations of the current one. We consider
a relationship worth pursuing if our CL is high.
The second measure of profit considers the comparison level for alternatives (CLalt) where we compare
our current profit with the potential rewards and costs from another relationship. SET predicts that we
will stay in our current relationship as long as we believe it is more rewarding than the alternatives.
2. The answer to question 1 above could also be used here (an alternative theory is filter theory – see
answers for p77).
3. One limitation is that studies into SET ignore the role of equity. What matters to most romantic
partners is not the balance of rewards and costs but the partners’ perception that the ratio of rewards to
costs is fair. This neglect of equity means that SET is a limited explanation that does not account for a
significant proportion of research findings that confirm the importance of equity.
Another limitation is that SET deals in concepts that are vague and hard to quantify. Unlike in research,
real-world rewards/costs are subjective and hard to define because they vary, e.g. ‘having your partner’s
loyalty’ is not rewarding for everyone. Also comparison levels are problematic – it is unclear what the
values of CL and CLalt need to be before individuals feel dissatisfied. This means SET is difficult to test in
a valid way.
4. Social exchange theory assumes that romantic partners act out of self-interest in exchanging rewards
and costs. SET suggests that a satisfying and committed relationship is maintained when rewards exceed
costs and potential alternatives are less attractive than the current relationship. Dom is striving to ensure
that Shelley also feels rewarded but is struggling to maintain the same levels as she is giving him. They
are both likely to be considering the profit of their relationship by offsetting the costs such as time and
stress against these rewards.
SET assumes that we measure the profit in a romantic relationship first by the comparison level (CL) –
the amount of reward that you believe you deserve to get. It develops out of our experiences of previous
relationships and social norms which feed into our expectations of the current one. We consider a
relationship worth pursuing if our CL is high, so whether Dom and Shelley’s relationship continues will
depend on their expectations and also their judgement of the comparison level for alternatives (CLalt).
This is where they will compare their current profit with the potential rewards and costs from another
relationship. SET predicts Dom and Shelley will stay in their relationship as long as they believe it is more
rewarding than the alternatives.
A strength of SET is support from research studies. For instance, Kurdek (1995) measured SET variables in
gay, lesbian and heterosexual couples. The partners who were most committed perceived the most
rewards and fewest costs. They also viewed alternatives as relatively unattractive. Clearly, Dom is highly
sensitive to the rewards and costs (possibly feelings of guilt) in the relationship. The findings match
predictions from SET, confirming the theory’s validity in a wide range of romantic partners.
However, applying economic ideas to romantic relationships may be inappropriate. Clark and Mills
(2011) argue that communal relationships (e.g. romantic partners) involve giving and receiving of
rewards without thinking of profit. At the start of a romantic relationship, tallying of exchanges might be
viewed with some suspicion and even distaste, as is suggested by Dom’s reaction here. This suggests that
SET may not provide a suitable explanation for all types of relationships.
Another limitation of SET is that it ignores equity. What matters to most romantic partners is not the
exact amount of rewards and costs but the ratio of the two. Satisfied partners perceive that the ratio of
rewards to costs is fair. For example, Dom may be gaining more rewards than Shelley, but perhaps he is
also experiencing the most costs. This neglect of equity means that SET is a limited explanation that does
not account for a significant proportion of research findings.
A further limitation is that SET deals in concepts that are vague and hard to quantify. Unlike in research,
real-world rewards/costs are subjective and hard to define because they vary, e.g. ‘having your partner’s
loyalty’ is not rewarding for everyone. Also comparison levels are problematic – it is unclear what the
values of CL and CLalt need to be before individuals feel dissatisfied. This means SET is difficult to test in
a valid way.
A final limitation of SET is that the direction of cause and effect may be wrong. SET claims we become
dissatisfied after we conclude that costs outweigh rewards and/or that alternatives are more
attractive (i.e. these factors cause dissatisfaction). But Argyle (1987) suggests that it is only once we
become dissatisfied that we monitor costs and rewards or consider alternatives. As long as Dom and
Shelley are satisfied, they probably won’t even notice attractive alternatives. Therefore, considering
costs/alternatives is caused by dissatisfaction rather than the reverse.
Page 81
1. Equity theory, like SET, acknowledges the impact of rewards and costs on relationship satisfaction, but
criticises SET for ignoring the central role of equity, the perception partners have that the distribution of
rewards and costs in the relationship is fair. Walster et al.’s equity theory (1978) suggests that partners
have a need for equity.
Both underbenefitting and overbenefitting can lead to dissatisfaction. The underbenefitted partner is
likely to be the least satisfied and their feelings may be evident in anger and resentment but the
overbenefitted partner is still likely to feel discomfort and shame.
In equity theory, it is not the size or amount of the rewards and costs that matters – it’s the ratio of the
two to each other. For example, if one partner puts a lot into the relationship but at the same time gets a
lot out of it, then that will seem fair enough.
It is argued that the sense of inequity impacts negatively on relationships. The greater the perceived
inequity, the greater the dissatisfaction and this applies to both the overbenefitted and underbenefitted
partner.
The consequences may change over the course of the relationship. For example, at the start it may feel
perfectly natural to contribute more than you receive but if this continues as the relationship develops
then satisfaction with the relationship may fall.
2. Social exchange theory (SET) assumes that romantic partners act out of self-interest in exchanging
rewards and costs and as long as rewards exceed costs and potential alternatives are less attractive than
the current relationship, it will continue.
The theory assumes that we measure the profit in a romantic relationship first by the comparison level
(CL) – the amount of reward that you believe you deserve to get (based on previous experience, social
norms, etc.) We consider a relationship worth pursuing if our CL is high. The second measure of profit
considers the comparison level for alternatives (CLalt) where we compare our current profit with the
potential rewards and costs from another. SET predicts that we will stay in our current relationship as
long as we believe it is more rewarding than the alternatives.
On the other hand, Walster et al.’s equity theory (1978) suggests that partners have a need for equity
and as such consider the ratio of rewards and costs. Both underbenefitting and overbenefitting can lead
to dissatisfaction. The underbenefitted partner is likely to be the least satisfied and their feelings may be
evident in anger and resentment but the overbenefitted partner is still likely to feel discomfort and
shame.
It is argued that the sense of inequity impacts negatively on relationships. The greater the perceived
inequity, the greater the dissatisfaction and this applies to both the overbenefitted and underbenefitted
partner.
3. Aumer-Ryan et al. (2007) found couples in an individualist culture (the US) linked satisfaction to equity
but partners in a collectivist culture (Jamaica) were most satisfied when they were overbenefitting. This
is not predicted by equity theory as it assumes that everyone is motivated to achieve equity.
Furthermore, this was true of both men and women, suggesting it is a consistent culturally-based rather
than gender-based difference. So the assumption that equity is key to satisfying relationships in all
cultures is not supported and means that the theory is limited in its ability to account for all romantic
relationships.
4. Equity theory, like SET, acknowledges the impact of rewards and costs on relationship satisfaction, but
criticises SET for ignoring the central role of equity, the perception partners have that the distribution of
rewards and costs in the relationship is fair. Walster et al.’s equity theory (1978) suggests that partners
have a need for equity.
Both underbenefitting and overbenefitting can lead to dissatisfaction. The underbenefitted partner is
likely to be the least satisfied and their feelings may be evident in anger and resentment but the
overbenefitted partner is still likely to feel discomfort and shame.
In equity theory, it is not the size or amount of the rewards and costs that matters – it’s the ratio of the
two to each other. For example, if one partner puts a lot into the relationship but at the same time gets a
lot out of it, then that will seem fair enough. It is argued that the sense of inequity impacts negatively on
relationships. The greater the perceived inequity, the greater the dissatisfaction and this applies to both
the overbenefitted and underbenefitted partner.
The consequences may change over the course of the relationship. For example, at the start it may feel
perfectly natural to contribute more than you receive but if it continues as the relationship develops
then satisfaction with the relationship may fall.
There is some research support for equity theory. For example, Utne et al. (1984) found that newly-weds
who considered their relationship equitable were more satisfied than those who considered themselves
as over- or underbenefitting. So it would seem that profit is not the key issue in judging relationships,
rather it is equity. This research supports the central predictions of equity theory supporting its validity
as an explanation of romantic relationships.
However, Berg and McQuinn (1986) found that equity did not increase in their longitudinal study of
dating couples, as equity theory would predict. Nor did equity distinguish between those relationships
which ended and those that continued. Variables such as self-disclosure appeared to be
more important. This is a strong criticism because it was based on real couples studied over time.
Another limitation is that equity is a culturally-limited concept. Aumer-Ryan et al. (2007) found couples
in an individualist culture (the US) linked satisfaction to equity but partners in a collectivist culture
(Jamaica) were most satisfied when they were overbenefitting. This is not predicted by equity theory as
it assumes that everyone is motivated to achieve equity. Furthermore, this was true of both men and
women, suggesting it is a consistent culturally-based rather than gender-based difference.
Also, Huseman et al. (1987) suggest that some people are less sensitive to equity than others. Some
partners are happy to contribute more than they get (benevolents, who tolerate being underbenefitted).
Others believe they
deserve to be overbenefitted and accept it without feeling distressed or guilty (entitleds). This shows that
far from being a universal characteristic, a desire for equity is subject to individual differences.
Page 83
1. Satisfaction is the extent to which romantic partners feel the rewards of the relationship exceed the
costs.
Investment describes the resources associated with a romantic relationship which the partners would
lose if the relationship were to end.
2. Social exchange theory (SET) assumes that romantic partners act out of self-interest in exchanging
rewards and costs and, for as long as rewards exceed costs and potential alternatives are less attractive
than the current relationship, it will continue.
The theory assumes that we measure the profit in a romantic relationship first by the comparison level
(CL) – the amount of reward that you believe you deserve to get (based on previous experience, social
norms, etc.) We consider a relationship worth pursuing if our CL is high. The second measure of profit
considers the comparison level for alternatives (CLalt) where we compare our current profit with the
potential rewards and costs from another. SET predicts that we will stay in our current relationship as
long as we believe it is more rewarding than the alternatives.
Rusbult’s (2011) investment model emphasises the central importance of commitment in relationships,
suggesting that it depends on satisfaction and comparison with alternatives, which are very similar to
elements of SET, but also on investment size. ‘Investment’ refers to the extent and importance of the
resources associated with the relationship. An investment can be understood as anything we would lose
if the relationship were to end – these could be intrinsic (e.g. money and possessions) or extrinsic
resources (mutual friendships, etc.). The theory suggests that if we know the state of these factors we
can confidently predict the commitment to the relationship.
3. One strength is that the supporting evidence is based on self-report techniques, which are an
appropriate research method since the model is based on subjective judgements about size of investment and
alternatives. What matters, according to the model, is the partners’ subjective perceptions of their
investments. This is a methodological strength because it is a more valid test of the model.
However, Goodfriend and Agnew (2008) argue that there is more to investment than just the resources
you have already put into a relationship. Early in a relationship partners make very few actual
investments but they do invest in future plans. It is future plans that motivate partners to commit so that
the plans can become reality. This means that the original model is a limited explanation as it fails to
consider the true complexity of investment.
4. Rusbult’s (2011) investment model further developed SET, suggesting that commitment depends on
satisfaction level, comparison with alternatives (CLalt) and investment size. A satisfying relationship is
one where the partners are getting more out of the relationship than they expect, given social norms and
their previous experiences. Cath agrees with this view, suggesting that level of satisfaction determines
commitment. Commitment is also determined through investment size according to Rusbult’s model;
these investments include all of the resources associated with a romantic relationship which would be
lost if the relationship ended.
The satisfaction level is the extent to which partners feel the rewards of the romantic relationship
exceed the costs, and the comparison with alternatives is a judgement about whether a relationship with
a different partner would reduce costs and increase reward. Katie suggests that commitment determines
satisfaction whilst Rusbult claims the effect is the other way around, i.e. if investments are increasing
and satisfaction is high, then the relationship is likely to continue.
Cath, on the other hand, agrees with Rusbult in that she believes that commitment matters more than
satisfaction. This explains why, for example, a dissatisfied partner stays in a relationship when their level
of investment is high. They will be willing to work hard to repair problems in the relationship so their
investment is not wasted.
One strength is that the supporting evidence is based on self-report techniques, which are an appropriate research method since the model is based on subjective judgements about the size of investments and
alternatives. What matters, according to the model, is the partners’ subjective perceptions of their
investments. This is a methodological strength because it is a more valid test of the model.
Le and Agnew's (2003) review found that satisfaction, comparison with alternatives and investment size
all predicted relationship commitment. Where commitment was greatest, relationships were most stable
and lasted longest. This chimes with the view of Cath rather than Katie. The support is particularly strong
given that the results were true for men and women in either heterosexual or homosexual relationships.
This suggests that the claim that these factors are universally important in relationships is valid.
However, much of this research is correlational. No matter how strong the correlation, it does not follow
that one variable causes the other. Perhaps the more committed you are to a relationship, the more
investment you are willing to make, which reflects the view of Katie rather than of the model. Therefore,
it is unclear that the model has uncovered which factors cause commitment.
Rusbult and Martz (1995) found that women who reported making the greatest investment and who had
the fewest attractive alternatives were the most likely to return to the partners who had abused them.
The concept of satisfaction as important to relationship duration cannot explain this tendency but the
level of commitment can. This is closer to Katie’s view because it suggests that there are strong
influences against letting an investment ‘go to waste’. Therefore the model can explain the apparently
inexplicable behaviour of staying in an abusive relationship.
However, Goodfriend and Agnew (2008) argue that there is more to investment than just the resources
you have already put into a relationship. Early in a relationship, partners make very few actual
investments but they do invest in future plans. It is future plans that motivate partners to commit so that
the plans can become reality. This means that the original model is a limited explanation as it fails to
consider the true complexity of investment.
Page 85
1. The intra-psychic phase starts when someone thinks, ‘I can’t stand this anymore’, indicating a
determination that something has to change. A partner becomes dissatisfied with the relationship in its
current form. They then brood on the reasons for this, usually focusing on their partner’s
shortcomings. The dissatisfied partner tends to keep this to themselves but may share their thoughts
with a trusted friend, weighing up the pros and cons of continuing.
The social phase begins when the dissatisfied partner concludes, ‘I mean it’. Once a partner wants to end the relationship, they will seek support, particularly from joint friends. These friends may be encouraged to choose a side, but others may try to prevent the break-up by acting as a go-between.
Once the news is public, though, this is usually the point of no return.
2. The model proposes that the ending of a relationship is not a one-off event but a process that takes
time and goes through four distinct phases. Each phase is characterised by a partner reaching a threshold
where their perception of the relationship changes. The partner may reassess and decide the
relationship isn't so bad, halting the process of breakdown. Or they cross the threshold and move on to
the next stage of the model.
The intra-psychic phase starts when someone thinks, ‘I can’t stand this anymore’, indicating a
determination that something has to change. A partner becomes dissatisfied with the relationship in its
current form. They then brood on the reasons for this, usually focusing on their partner’s
shortcomings. The dissatisfied partner tends to keep this to themselves but may share their thoughts
with a trusted friend, weighing up the pros and cons of continuing.
The dyadic phase is initiated by the threshold, ‘I would be justified in withdrawing’. Once a partner
concludes that they are justified in ending the relationship they have to discuss this with their partner.
Dissatisfactions about equity, commitment etc. are aired.
The social phase begins when the dissatisfied partner concludes, ‘I mean it’. Once a partner wants to end the relationship, they will seek support, particularly from joint friends. These friends may be encouraged to choose a side, but others may try to prevent the break-up by acting as a go-between.
Once the news is public, though, this is usually the point of no return.
Finally, the grave dressing phase begins when the partner thinks, ‘It’s now inevitable’. Once the end
becomes inevitable, a suitable story of the relationship and its end is prepared for wider
consumption. This is likely to include an attempt to ensure that the storyteller will be judged most
favourably and to enable the partner to ‘move on’.
3. Duck argued that relationship breakdown is not a single one-off event but a process over time.
Breakdown goes through four distinct phases, each one marked by a threshold or a point at which a
partner realises their perception of the relationship has changed (e.g. ‘I would be justified in
withdrawing’).
4. The model proposes that the ending of a relationship is not a one-off event but one that goes through
four distinct phases, so, as the news item suggests, it can be regarded as a process which takes time.
Each phase is characterised by a partner reaching a threshold where their perception of the relationship
changes. The partner may reassess and decide the relationship isn't so bad, halting the process of
breakdown, so the theory concurs with the news item’s view that relationships can be saved at almost
every stage. Or they cross the threshold and move on to the next stage of the model.
The intra-psychic phase starts when someone thinks, ‘I can’t stand this anymore’, indicating a
determination that something has to change. A partner becomes dissatisfied with the relationship in its
current form. They then brood on the reasons for this, usually focusing on their partner’s
shortcomings. The dissatisfied partner tends to keep this to themselves but may share their thoughts
with a trusted friend, weighing up the pros and cons of continuing, so indeed breakdown is not
inevitable.
The dyadic phase is initiated by the threshold, ‘I would be justified in withdrawing’. Once a partner
concludes that they are justified in ending the relationship they have to discuss this with their partner.
Dissatisfactions about equity, commitment etc. are aired.
The social phase begins when the dissatisfied partner concludes, ‘I mean it’. Once a partner wants to end the relationship, they will seek support, particularly from joint friends. These friends may be encouraged to choose a side, but others may try to prevent the break-up by acting as a go-between.
Once the news is public, though, this is usually the point of no return.
Finally, the grave dressing phase begins when the partner thinks, ‘It’s now inevitable’. Once the end
becomes inevitable, a suitable story of the relationship and its end is prepared for wider
consumption. This is likely to include an attempt to ensure that the storyteller will be judged most
favourably and to enable the partner to ‘move on’.
Rollie and Duck (2006) added a resurrection phase in which ex-partners use what they have learned from
the last relationship to prepare for a future one. This refined version also clarifies a point not highlighted
in the news item – that movement through the stages is neither linear nor inevitable and partners may
return to an earlier phase. This suggests that the original phase model is only a partial explanation of the process of relationship breakdown.
The news item is right to say ‘…relationships can be saved at almost every stage’, and the model suggests
that some repair strategies are more effective at one stage than another. For example, in the intra-
psychic stage partners could brood more positively. It would be less helpful to encourage brooding if a
person had already reached the social phase. This suggests that the model can lead to supportive
suggestions that may help people through this difficult time in their lives.
Felmlee (1995) suggests a ‘fatal attraction’ theory stating that the attributes that partners found
attractive at the start of a relationship can often become too much. For example, someone who was
attracted to a ‘so funny’ partner may then decide to end the relationship because the partner ‘fails to
take life seriously’. This highlights the fact that Duck's model only tells us what happens and not why.
Finally, Moghaddam et al. (1993) propose that relationships in individualist cultures are mostly voluntary and
end quite often, whilst in collectivist cultures relationships are more frequently ‘obligatory’ and less easy
to end. The whole concept of a relationship differs between cultures and therefore the process of
relationship breakdown is likely to differ. This is a limitation because it means that the model can only be
applied to some cultures and types of relationship.
Page 87
1. Sproull and Kiesler (1986) suggest that virtual relationships are less effective due to the lack of nonverbal cues (e.g. physical appearance, emotional responses) that we rely on in FtF relationships. Lack of cues about emotional state (voice and facial expressions) leads to de-individuation. People then feel freer from the constraints of social norms (disinhibition), which leads to blunt and even aggressive communication and, in turn, a reluctance to self-disclose.
Walther’s (2011) hyperpersonal model suggests that early self-disclosure means that virtual relationships
develop quickly. Such relationships can become more intense and intimate than FtF ones. Self-disclosure
differs in virtual relationships because a person can manipulate their online presentation. The sender of
a message can be selective about what and how they present when self-disclosing (both hyperhonest
and hyperdishonest). The message receiver gains a positive impression of the sender and gives feedback
that reinforces the sender’s online self-selected presentation.
2. Since all the research chosen above relates to self-disclosure, the above answer would be appropriate
here too.
3. There is support for the idea that the absence of gating in virtual relationships is helpful for some
people. McKenna and Bargh (2000) studied online communication by shy and socially anxious
people. In this group, 71% of the romantic relationships initially formed online survived more than
two years, compared to 49% formed offline (Kirkpatrick and Davis 1994). This suggests that shy
people do benefit online presumably because the gating that obstructs FtF relationships is absent
online.
However, theories need to include the fact that relationships are usually conducted both online and
offline. The interaction between people online will influence the interaction in the FtF relationship,
including the level and speed of self-disclosure. As such these two kinds of communication have to be
considered together and not separately. This suggests that current theories may underestimate the
complexity of virtual relationships including the role of gating.
4. Sproull and Kiesler (1986) suggest that CMC relationships are less effective due to the lack of
nonverbal cues (e.g. physical appearance, emotional responses) – in FtF relationships we rely on these
cues. Lack of cues about emotional state (voice and facial expressions) leads to de-individuation. People
then feel freer from the constraints of social norms (disinhibition) and this would at least partly explain
the fact that abusive posts do appear in social media. The blunt and even aggressive communication that
occurs may lead to a reluctance to self-disclose and even, according to the article, a decision to close
down social media accounts.
Walther’s (2011) hyperpersonal model suggests that early self-disclosure means that virtual relationships
develop quickly. Such relationships can become more intense and intimate. However, the intensity of
virtual relationships can also lead to them ending quickly, and this is borne out by the suggestion that
people are simply deleting their accounts as an easy way of getting rid of relationships that have turned
abusive.
Walther and Tidwell (1995) assert that cues in virtual relationships are simply different from those in FtF
ones. They found that there are plenty of cues in virtual relationships but they are just not the non-
verbal ones that we recognise in FtF communication. Emoticons and acronyms (e.g. LOL) are considered
effective substitutes in virtual relationships for the lack of the usual non-verbal cues, so the proposal that
there are reduced cues in virtual relationships appears unfounded. This suggests that there may be no
differences in self-disclosure between virtual and FtF relationships, which does not support reduced cues
theory.
Whitty and Joinson (2009) found supporting evidence for both hyperhonest and hyperdishonest online
disclosures. Questions asked in online discussions tend to be direct, probing and intimate (hyperhonest);
dating profiles can be misleading (hyperdishonest). This is quite different from FtF conversations. This is
consistent with the prediction of the model that these are distinctive types of disclosure in virtual
relationships and this may partly explain the existence of abusive messages.
There is support for the idea that the absence of gating in virtual relationships is helpful for some
people. McKenna and Bargh (2000) studied online communication by shy and socially anxious
people. In this group, 71% of the romantic relationships initially formed online survived more than
two years, compared to 49% formed offline (Kirkpatrick and Davis 1994). This suggests that shy
people do benefit online presumably because the gating that obstructs FtF relationships is absent
online. This reminds us that not all posts are abusive and there is a positive side to social media.
However, theories need to include the fact that relationships are usually conducted both online and
offline. The interaction between people online will influence the interaction in the FtF relationship,
including the level and speed of self-disclosure. As such these two kinds of communication have to be
considered together and not separately. This suggests that current theories may underestimate the
complexity of virtual relationships including the role of gating.
From online e-commerce forms through to Facebook and online dating, the level of self-disclosure
varies considerably. People disclose more in areas that they consider private (e.g. Facebook statuses that
will only be seen by ‘friends’) and disclose less on webforms that involve the collection of data. This
means that the validity of theories that consider all virtual relationships in the same way will be limited.
Page 89
1. The Celebrity Attitude Scale (CAS) was used by Maltby et al. (2006) to identify three levels of parasocial
relationship. The first level is ‘entertainment-social’. This is the least intense level where celebrities are
viewed as sources of entertainment and fuel for social interaction so, for example, a number of people
might enjoy chatting about Beyoncé’s latest releases and even her pregnancy.
The second level is ‘intense-personal’, an intermediate level where someone becomes more personally
involved with a celebrity and this may include obsessive thoughts. So someone might want to contact
Beyoncé and dream about being her best friend, going on holiday together, etc.
The third level is ‘borderline-pathological’, the strongest level of celebrity worship where fantasies are
uncontrollable and behaviour is more extreme. The need to be close to Beyoncé might lead to trying to
be where she is and getting jealous, etc.
2. This theory, based on Bowlby’s evolutionary explanation, links early difficulties in attachment with
difficulties in forming successful relationships later in life. The early relationships are thought to be a
template for future ones through the medium of the internal working model. Such difficulties may lead
to a preference for parasocial relationships to replace those within one’s own social circle, as parasocial
relationships do not require the same social skills.
Ainsworth (1979) identified two attachment types associated with unhealthy emotional development:
insecure–resistant and insecure–avoidant. Insecure–resistant types are most likely to form parasocial
relationships because they want to have their unfulfilled needs met in a relationship where there is no
real threat of rejection. Insecure–avoidant types prefer to avoid the pain and rejection of any type of
relationship, either social or parasocial.
3. Maltby et al. (2005) studied female adolescents who reported an intense-personal relationship with a
female celebrity whose body shape they admired. The participants tended to have a poor body image.
The researchers speculated that this could be a precursor to development of an eating disorder. This
study supports the model because it shows a correlation between the level of parasocial relationship and
poor psychological functioning.
However, this and other studies are correlational. It is very unclear whether parasocial involvement
causes poor body image or a pre-existing poor body image triggers increasing ‘addiction’ to celebrity
worship. Alternatively a third variable, such as a characteristic of the individual’s personality (e.g.
neuroticism), could cause both. This does not help us to prevent the more dangerous and disturbing
forms of parasocial relationships. This means the absorption-addiction model is limited in its
explanatory power and its application for supporting people whose celebrity worship has become
problematic.
4. The Celebrity Attitude Scale (CAS) was used by Maltby et al. (2006) to identify three levels of parasocial relationship. The first level is ‘entertainment-social’. This is the least intense level where celebrities are
viewed as sources of entertainment and fuel for social interaction so, in Denise’s case, this may have
been the level of involvement her parents were happy for Denise to have with Zoella.
The next level is ‘intense-personal’, where someone becomes more personally involved with a celebrity
and this may include obsessive thoughts. Her parents may be worried that Denise has got to this level as
she is spending increasing amounts of time on the channel. If her parents recognise that there is a
gradual increase in her involvement with Zoella, they may be concerned that she may be heading for the
borderline-pathological level, the strongest level of celebrity worship where fantasies are uncontrollable
and behaviour is more extreme.
Maltby et al. (2005) studied female adolescents who reported an intense-personal relationship with a
female celebrity whose body shape they admired. The participants tended to have a poor body image.
The researchers speculated that this could be a precursor to development of an eating disorder. This
study supports the model because it shows a correlation between the level of parasocial relationship and
poor psychological functioning.
There is no evidence that Denise is at risk of an eating disorder. However, as she has only just started
secondary school, there may be a possibility that she is struggling to make friends. According to the
absorption-addiction model, someone in the absorption phase is seeking fulfilment in celebrity worship, identifying with celebrities and their more exciting lives. Denise may be triggered towards a
higher level by a stressful life event and starting school would certainly count as one of these.
However, much of the research is correlational. It is very unclear whether parasocial involvement
causes poor body image or a pre-existing poor body image triggers increasing ‘addiction’ to celebrity
worship. Alternatively a third variable, such as a characteristic of the individual’s personality (e.g.
neuroticism), could cause both. This does not help us to prevent the more dangerous and disturbing
forms of parasocial relationships. This means the absorption-addiction model is limited in its
explanatory power and its application for supporting people such as Denise whose celebrity worship
has become problematic.
Perhaps Denise’s parasocial involvement with Zoella is linked to attachment experiences in her
childhood. In Ainsworth’s (1979) terms, Denise may have developed an insecure–resistant
attachment as a child. This would make her prone to forming parasocial relationships because she
would want to have her unfulfilled needs met in a relationship where there is no real threat of
rejection.
A strength of this theory is that it explains why people from many different cultures have a desire for
parasocial involvement (which could explain the huge popularity of online celebrities such as Zoella).
There is support for this from a study by Dinkha et al. (2015) who compared a collectivist culture
(Kuwait) with an individualist one (US). The researchers found that people with an insecure–resistant
attachment type in both cultures were most likely to form intense parasocial relationships with TV
characters. This supports the view that people like Denise may have childhood attachment
experiences that disposed them to celebrity worship, regardless of their cultural background.
However, other evidence is not so supportive. For example, McCutcheon et al. (2006) found that
attachment insecurity was not related to the likelihood of forming a parasocial relationship with a
celebrity. Insecurely-attached participants were no more likely to form such relationships than
participants with secure attachments. This suggests that Denise’s parasocial relationship with Zoella
may not have developed as a way of compensating for attachment issues after all.
Chapter 6 Gender
Page 91
1. Sex is an innate or biological status (nature) whereas gender is a psychosocial status (nurture). Sex
is determined by genetic make-up, namely chromosomes, which influence hormonal and anatomical
differences that distinguish males and females whereas gender reflects all the attitudes, behaviours
and roles we associate with being masculine or feminine.
2. Sex is an innate or biological status (nature) whereas gender is a psychosocial status (nurture). Sex
is determined by genetic make-up, namely chromosomes, which influence hormonal and anatomical
differences.
3. Sex-role stereotypes are shared by a culture or social group and consist of expectations regarding
how males and females should behave. These expectations are transmitted through a society and
reinforced by members of it (e.g. parents, peers, etc.).
Sex-role stereotypes may or may not represent something real. Some expectations have some basis
in reality. Research confirms sex-role stereotypes in the media. A study of TV adverts (Furnham and
Farragher 2000) found men were more likely to be shown in autonomous roles in professional
contexts, whereas women were seen occupying familial roles in domestic settings. This, along with
other studies, demonstrates both the existence of sex-role stereotypes and the role the media has in
reinforcing them.
4. Sex-role stereotypes are shared by a culture or social group and consist of expectations regarding
how males and females should behave. These expectations are transmitted through a society and
reinforced by members of it (e.g. parents, peers, etc.).
Sex-role stereotypes may or may not represent something real. Some expectations have some basis
in reality. Research confirms sex-role stereotypes in the media. A study of TV adverts (Furnham and
Farragher 2000) found men were more likely to be shown in autonomous roles in professional
contexts, whereas women were seen occupying familial roles in domestic settings. This, along with other studies, demonstrates both the existence of sex-role stereotypes and the role the media has in
reinforcing them.
Brian makes sure the car is working properly, puts the bins out and is responsible for the TV remote
control. Shirley does most of the housework, including the cooking, and is the one who remembers
everybody’s birthdays. These behaviours conform to sex-role stereotypes – popular beliefs about
what men and women ‘do’, which are transmitted throughout society and its members.
Page 93
2. The Bem Sex Role Inventory (BSRI) was developed by asking 50 male and 50 female judges to rate
200 traits in terms of how desirable they were for men and women. The traits that scored highest in each category became the 20 masculine and 20 feminine traits on the scale. Twenty neutral
items were also added. The BSRI was then piloted with over 1000 students and the results broadly
corresponded with the participants’ own description of their gender identity. This resulted in a scale
that is able to distinguish between masculine, feminine, androgynous and undifferentiated gender identities.
A follow-up study involving a smaller sample of the same students revealed similar scores when the
students were tested a month later. This suggests that the scale has high test-retest reliability.
Stereotypical ideas of masculinity and femininity have changed since the BSRI was developed 40
years ago. Also, it was devised by a panel who were all from the US. This suggests that the BSRI may
lack temporal validity, be culturally biased and not be a suitable measure of gender identity today.
3. One criticism is that the links made between well-being and androgyny as measured by the scale
are challenged. Bem emphasised that androgynous individuals are more psychologically healthy
because they are more able to deal with scenarios that demand a masculine, feminine or
androgynous response. In other words, they are more flexible and able to cope with a variety of
situations. However, some researchers (e.g. Adams and Sherer 1985) have argued that people who
display a greater proportion of masculine traits are better adjusted than androgynous people as
these traits are more highly valued in Western society. This suggests that the BSRI did not take
adequate account of the social and cultural context in which it was developed.
4. The Bem Sex Role Inventory (BSRI) was developed by asking 50 male and 50 female judges to rate
200 traits in terms of how desirable they were for men and women. The traits that scored highest in each category became the 20 masculine and 20 feminine traits on the scale. Twenty neutral items were also added. The BSRI was then piloted with over 1000 students and the results broadly corresponded with the participants’ own description of their gender identity. This resulted in a scale that is able to distinguish between masculine, feminine, androgynous and undifferentiated gender identities.
A follow-up study involving a smaller sample of the same students revealed similar scores when the
students were tested a month later. This suggests that the scale has high test-retest reliability.
One strength is that gender identity is measured quantitatively. Bem’s numerical approach is useful
when it is necessary to quantify a dependent variable but Spence (1984) suggests a qualitative
approach may represent gender identity better. One compromise is to combine different scales. For
example, the Personal Attribute Questionnaire (PAQ) adds another dimension (instrumentality and
expressivity) to Bem’s masculinity–femininity dimension. This suggests that quantitative together
with qualitative approaches may be useful for studying different aspects of gender identity.
Another strength is that the BSRI has been found to be both valid and reliable. Development of the
scale involved 50 males and 50 females judging 200 traits in terms of gender desirability. The top 20
in each case were used. Piloting with 1000 students showed the BSRI reflected their gender identity
(validity). A follow-up study involving a smaller sample of the same students produced similar scores
when the students were tested a month later, suggesting high test-retest reliability. Together this
evidence suggests that the BSRI had a degree of both validity and reliability at the time it was
developed.
That said, stereotypical ideas of masculinity and femininity have changed since the BSRI was
developed 40 years ago. Also, it was devised by a panel who were all from the US. This suggests that
the BSRI may lack temporal validity, be culturally biased and not be a suitable measure of gender
identity today.
One limitation is that people may lack insight into their gender identity. Gender is a social construct
which may be more open to interpretation than, say, sex (which is a biological fact). Furthermore,
the questionnaire’s scoring system is subjective and people’s application of the 7-point scale may
differ. This suggests that the BSRI may not be a scientific way of assessing gender identity.
Page 95
1. Testosterone is a hormone which controls the development of male sex organs before birth. It is
present in both sexes and is linked to aggressive behaviour.
Oestrogen is a hormone which controls female sexual characteristics including menstruation. During
the menstrual cycle, some women experience heightened emotionality and irritability known as
premenstrual tension or premenstrual syndrome.
Oxytocin is typically produced in larger amounts by women than men and stimulates lactation and
bonding after birth. It is said that it may explain why females are more interested in intimacy in
relationships than men.
2. It is the 23rd pair of chromosomes, made from DNA, which determines the biological sex of a
foetus. Under a microscope these chromosomes appear either X- or Y-shaped. The female sex chromosome pattern is XX and the male is XY.
A baby’s sex is determined by whether the sperm that fertilises the egg carries an X or a Y chromosome, since an X is always gained from the ovum. The Y chromosome carries a gene called the sex-determining
region Y (SRY). This causes the testes to develop and androgens to be produced in a male embryo.
Without these androgens, the embryo develops into a female.
There are exceptions, though. About one in 600 males has Klinefelter’s syndrome, which is characterised by an XXY chromosomal structure. Individuals who have this condition are biological
males with male anatomy but an additional X chromosome – 10% of cases are identified prenatally
but up to 66% may not be aware of it. Diagnosis often comes about accidentally via a medical
examination for some unrelated condition.
Furthermore, one in 5000 females has Turner’s syndrome, which is caused by an absence of one of the two X chromosomes, leading to 45 rather than 46 chromosomes. This is characterised as an XO
chromosomal structure.
3. One strength is that evidence supports the role of testosterone. Wang et al. (2000) gave 227
hypogonadal men (men with low levels of testosterone) testosterone therapy for 180 days.
Testosterone replacement improved sexual function, libido and mood, and significantly increased
muscle strength in the sample. This study suggests that testosterone exerts a powerful and direct
influence on male sexual and physical behaviour even in adult males.
However, in another study increasing testosterone levels in healthy young men did not significantly
increase either interactional (frequency of sexual intercourse) or non-interactional (libido)
components of sexual behaviour (O’Connor et al. 2004). This suggests that, in ‘normal’ adults,
additional testosterone has no effects on sexual or aggressive behaviour – though this doesn’t
challenge the role of testosterone in early development.
4. It is the 23rd pair of chromosomes, made from DNA, which determines the biological sex of a
foetus. Under a microscope these chromosomes appear either X- or Y-shaped. The female sex
chromosome pattern is XX and the male is XY. A baby’s sex is determined by whether the sperm that fertilises the egg carries an X or a Y chromosome, since an X is always gained from the ovum. The Y chromosome carries a
gene called the sex-determining region Y (SRY). This causes the testes to develop and androgens to
be produced in a male embryo. Without these androgens the embryo develops into a female. It is
likely that Caster has the female chromosomal pattern of XX.
Hormonal explanations of sex and gender focus on the fact that prenatally hormones act upon brain
development and cause the development of the reproductive organs. At puberty, a burst of
hormonal activity triggers the development of secondary sexual characteristics such as pubic hair.
Males and females produce the same hormones but in different concentrations; for example,
testosterone plays a key role in male development and aggression. In this case, Caster has much
higher levels of testosterone than other women and this could be argued, as in the scenario, to give
her an unfair advantage over female runners. On the other hand, oestrogen plays the key role in female development and behaviour, controlling female sexual characteristics including menstruation. It is likely that, in Caster’s case, oestrogen did not play a full part in her development as a female during foetal development, resulting in her lack of ovaries.
One strength is that evidence supports the role of testosterone. Wang et al. (2000) gave 227
hypogonadal men (men with low levels of testosterone) testosterone therapy for 180 days.
Testosterone replacement improved sexual function, libido and mood, and significantly increased
muscle strength in the sample. This study suggests that testosterone exerts a powerful and direct
influence on male sexual and physical behaviour even in adult males.
However, in another study increasing testosterone levels in healthy young men did not significantly
increase either interactional (frequency of sexual intercourse) or non-interactional (libido)
components of sexual behaviour (O’Connor et al. 2004). This suggests that, in ‘normal’ adults,
additional testosterone has no effects on sexual or aggressive behaviour – though this doesn’t
challenge the role of testosterone in early development.
One limitation is that biological accounts ignore social factors. Hofstede et al. (2010) claim that
gender roles are more about social factors than biology. Countries that value competition and
independence above community (individualist cultures), e.g. US and UK, are more masculine, and
masculine traits are more valued than in collectivist cultures. This challenges biological explanations
of gender behaviour and suggests social factors may ultimately be more important in shaping gender
behaviour and attitudes.
Another limitation is that biological explanations are reductionist. Accounts that reduce gender to
the level of chromosomes and hormones exclude alternative explanations. Cognitive explanations
include the influence of, for example, schema. Psychodynamic explanations include the importance
of childhood experiences. This suggests that gender is more complex than its biological influences
alone.
Page 97
1. Atypical sex chromosome patterns occur when the chromosomal pattern of XX for females and XY
for males develops differently. For example, in the case of the one in 600 males who has Klinefelter’s syndrome, the pattern of the 23rd pair of chromosomes is XXY. Individuals who have this condition are
biological males with male anatomy but an additional X chromosome.
2. Approximately one in 5000 females has Turner’s syndrome, which is caused by an absence of one of the two X chromosomes, leading to 45 rather than 46 chromosomes. The chromosomal pattern is
therefore XO rather than the XX usually associated with females.
Individuals with Turner’s syndrome have the following physical characteristics: no menstrual cycle as
their ovaries fail to develop, leaving them sterile; a broad ‘shield’ chest and no development of breasts
at puberty; low-set ears and a ‘webbed’ neck; hips that are not much bigger than the waist.
The syndrome also has an impact on psychological characteristics: a high reading ability but also
social immaturity and lower-than-average performance on spatial, visual memory and mathematical
tasks.
3. One strength of the research is its contribution to the nature–nurture debate. Comparing both
chromosome-typical and atypical individuals highlights psychological and behavioural differences.
For example, Turner’s syndrome is associated with higher verbal ability. It might be logically inferred
that these differences have a biological basis and are a direct result of the abnormal chromosomal
structure. This would suggest that innate ‘nature’ influences have a powerful effect on psychology
and behaviour.
However, behavioural differences may result from social influences. Social immaturity in Turner’s syndrome may arise because individuals are treated that way due to their physically immature appearance. This shows that
it could be wrong to assume that psychological and behavioural differences in people with atypical
sex chromosome patterns are due to nature.
4. Approximately one in 5000 females has Turner’s syndrome, which is caused by an absence of one of the two X chromosomes, leading to 45 rather than 46 chromosomes. The chromosomal pattern is
therefore XO rather than the XX usually associated with females.
Individuals with Turner’s syndrome have the following physical characteristics: no menstrual cycle as
their ovaries fail to develop, leaving them sterile; a broad ‘shield’ chest and no development of breasts
at puberty; low-set ears and a ‘webbed’ neck; hips that are not much bigger than the waist.
The syndrome also has an impact on psychological characteristics: a high reading ability but also
social immaturity and lower-than-average performance on spatial, visual memory and mathematical
tasks.
About one in 600 males has Klinefelter’s syndrome. These individuals are biological males with male anatomy
but have an additional X chromosome. 10% of cases are identified prenatally but up to 66% may not
be aware of it. It is associated with lack of body hair, health problems, some breast development at
puberty (gynaecomastia) and underdevelopment of genitals. Additionally, individuals tend to have poor language skills, to be shy, and to lack cognitive skills such as problem-solving.
One strength of the research is its contribution to the nature–nurture debate. Comparing both
chromosome-typical and atypical individuals highlights psychological and behavioural differences.
For example, Turner’s syndrome is associated with higher verbal ability. It might be logically inferred
that these differences have a biological basis and are a direct result of the abnormal chromosomal
structure. This would suggest that innate ‘nature’ influences have a powerful effect on psychology
and behaviour.
However, behavioural differences may result from social influences. Social immaturity in Turner’s syndrome may arise because individuals are treated that way due to their physically immature appearance. This shows that
it could be wrong to assume that psychological and behavioural differences in people with atypical
sex chromosome patterns are due to nature.
Another strength of research is its application to managing the conditions. Continued research into
atypical sex chromosome patterns leads to earlier and more accurate diagnoses and positive
outcomes. A study of 87 individuals with Klinefelter’s syndrome showed that those identified when
young benefitted in terms of managing their condition (Herlihy et al. 2011). This suggests that
increased awareness of these conditions has real-world application.
One limitation is that there may be a sampling issue. Generally, only those people who have the most severe symptoms are included in the Klinefelter’s database, so the typical profile may be distorted. Prospective studies show that the majority of those with Klinefelter’s do not have
cognitive or psychological problems, and many are highly successful (Boada et al. 2009). This
suggests that the typical picture of Klinefelter’s (and Turner’s) syndrome may be exaggerated.
Page 99
1. Gender constancy, according to Kohlberg, was the final stage of development, around age 6, when
children recognise that gender is consistent across time and situations, and this understanding is
applied to other people’s gender as well as their own. They are no longer fooled by changes in
outward appearance; for example, a man in a dress is still a man underneath.
2. Kohlberg’s theory takes a cognitive-developmental approach considering both the child’s thinking
about their gender and the changes in thinking over time. He suggested that transition from stage to
stage is gradual rather than sudden.
According to Kohlberg’s cognitive-developmental approach the first stage of gender identity occurs
around 2 years and is when a child is correctly able to identify themselves as a boy or a girl. It is
followed by the ability to identify others but there is no sense that it is a permanent state so, for
example, a two-and-a-half-year-old boy may be heard to say, ‘when I grow up I will be a mummy’.
Around the age of 4 children acquire gender stability. With this comes the realisation that they will
always stay the same gender and that this is an aspect of themselves that remains consistent over
time. That said, children of this age cannot apply this logic to other people in other situations. They
are often confused by external changes in appearance – they may describe a man who has long hair
as a woman.
Gender constancy, according to Kohlberg, was the final stage of development, around age 6 when
children recognise that gender is consistent across time and situations, and this understanding is
applied to other people’s gender as well as their own. They are no longer fooled by changes in
outward appearance; for example, a man in a dress is still a man underneath.
3. One limitation is the methodology of supporting studies. Bem (1989) suggests it is no wonder
younger children are confused by changes in appearance because our culture demarcates gender
through, for example, clothes and hairstyle. Bem found 40% of children aged 3–5 demonstrated constancy if they were first shown a naked photo of the child to be identified. This suggests the typical way of
testing gender constancy may misrepresent what younger children actually know.
Another limitation is that there may be different degrees of constancy. Martin et al. (2002) suggest an
initial degree of constancy may help children choose friends or seek gender information, for
instance, and develops before age 6. A second degree (which develops later) may heighten
responsiveness to gender norms under conditions of conflict, such as choosing appropriate clothes
or attitudes. This suggests that the acquisition of constancy may be a more gradual process and
begins earlier than Kohlberg thought.
4. Kohlberg’s theory takes a cognitive-developmental approach considering both the child’s thinking
about their gender and the changes in thinking over time. He suggested that transition from stage to
stage is gradual rather than sudden.
According to Kohlberg’s cognitive-developmental approach the first stage of gender identity occurs
around 2 years and is when a child is correctly able to identify themselves as a boy or a girl. It is
followed by the ability to identify others but there is no sense that it is a permanent state so, for
example, a two-and-a-half-year-old boy may be heard to say, ‘when I grow up I will be a mummy’. So
five-year-old Ryan would have achieved this stage, recognising that his daddy was male.
The next stage, around the age of 4, is where children acquire gender stability or the realisation that
they will always stay the same gender and that this is an aspect of themselves that remains
consistent over time. That said, children of this age cannot apply this logic to other people in other
situations. This would explain why Ryan thought his dad might have become a ‘lady’ as he was taking
on a role typically associated with females.
In order to recognise that his father could not have changed gender, Ryan would need to be a little older (around 6) when, according to Kohlberg, gender constancy occurs. This is where children
recognise that gender is consistent across time and situations, and this understanding is applied to
other people’s gender as well as their own.
One limitation is the methodology of supporting studies. Bem (1989) suggests it is no wonder
younger children are confused by changes in appearance because our culture demarcates gender
through, for example, clothes and hairstyle. Bem found 40% of children aged 3–5 demonstrated constancy if they were first shown a naked photo of the child to be identified. This suggests the typical way of
testing gender constancy may misrepresent what younger children actually know.
Another limitation is that there may be different degrees of constancy. Martin et al. (2002) suggest an
initial degree of constancy may help children choose friends or seek gender information, for
instance, and develops before age 6. A second degree (which develops later) may heighten
responsiveness to gender norms under conditions of conflict, such as choosing appropriate clothes
or attitudes. This suggests that the acquisition of constancy may be a more gradual process and
begins earlier than Kohlberg thought.
Page 101
1. Gender schema theory (GST) suggests that understanding of gender changes with age. GST also
suggests that children actively structure their own learning of gender.
Gender schema develop after gender identity. Schema are mental constructs that develop via experience and are used to organise our knowledge – in this case, what we know in relation to gender and gender-appropriate behaviour.
Martin and Halverson suggest that first a child establishes gender identity (around 2–3 years). The
child then begins to look around for further information to develop their schema. Gender-
appropriate schema expand over time to include a range of behaviours and personality traits based
on stereotypes (e.g. boys liking trucks and girls liking dolls). The schema directs the child’s behaviour (e.g. ‘I am a boy so I play with trucks’). This reinforces existing ideas about gender. By 6 years of age
Martin and Halverson suggest children have acquired a rather fixed and stereotypical idea about
what is appropriate for their gender.
The theory suggests that children pay more attention to, and have a better understanding of, the
schema appropriate to their own gender (ingroup) than those of the opposite sex (outgroup).
Ingroup identity bolsters the child’s level of self-esteem as there is always a tendency to judge
ingroups more positively. At around 8 years of age children develop elaborate schema for both genders.
2. As GST was chosen to answer the question above, the response is equally valid here.
3. One strength is that GST has research support. Martin and Halverson (1983) found that children
under 6 were more likely to recall gender-appropriate photographs than gender-inappropriate ones
when tested a week later. Children tended to change the gender of the person carrying out the
gender-inappropriate activity in the photographs when asked to recall them. This supports gender
schema theory which predicts that children under 6 would do this (in contrast with Kohlberg who
said this happens in older children).
One limitation is that gender identity probably develops earlier. Zosuls et al. (2009) analysed twice-
weekly reports from 82 mothers on their children’s language from 9–21 months and videotapes of
the children at play. Children labelled themselves as a ‘boy’ or ‘girl’ (gender identity), on average, at
19 months – almost as soon as they began to communicate. This suggests that Martin and Halverson
may have underestimated children’s ability to use gender labels for themselves.
However, for Martin and Halverson the ages are averages rather than absolutes. It is the sequence
of development that is more important. This suggests that Zosuls et al.’s finding is not a fundamental
criticism of the theory.
Another strength is that GST can account for cultural differences. Cherry (2019) argues that gender
schema not only influence how people process information but also what counts as culturally-
appropriate gender behaviour. In societies where perceptions of gender have less rigid boundaries,
children are more likely to acquire non-standard gender stereotypes. This contrasts with some other
explanations of gender development, such as psychodynamic theory, which suggests gender identity
is more driven by unconscious biological urges.
4. Kohlberg’s theory takes a cognitive-developmental approach considering both the child’s thinking
about their gender and the changes in thinking over time. According to Kohlberg’s cognitive-
developmental approach, the first stage of gender identity occurs around 2 years when a child is
correctly able to identify themselves as a boy or a girl. It is followed by the ability to identify others’
gender but there is no sense that it is a permanent state. Around the age of 4 children acquire
gender stability. With this comes the realisation that they will always stay the same gender and that
this is an aspect of themselves that remains consistent over time. Gender constancy was the final
stage of development, around age 6, when children recognise that gender is consistent across time
and situations, and this understanding is applied to other people’s gender as well as their own.
Kohlberg’s stages are heavily influenced by changes in the developing child’s brain and subsequent
cognitive and intellectual maturation. The biological basis of the theory is supported by Munroe et
al.’s (1984) cross-cultural evidence of Kohlberg’s stages in countries as far afield as Kenya, Samoa
and Nepal. This suggests that gender development has a considerable maturational element and
universality, supporting a biological approach.
Gender schema theory (GST) suggests that understanding of gender changes with age and that
children actively structure their own learning of gender. Gender schema develop after gender
identity and are mental constructs that develop via experience. They are used by us to organise our
knowledge and contain what we know in relation to gender and gender-appropriate behaviour.
GST suggests a child establishes gender identity (around 2–3 years) and the gender-appropriate
schema expand over time to include a range of behaviours and personality traits based on
stereotypes and also direct the child’s behaviour. By 6 years of age it is suggested that children have
acquired a rather fixed and stereotypical idea about what is appropriate for their gender. Finally,
around 8 years of age, children develop elaborate schema for both genders.
One strength is that GST has research support. Martin and Halverson (1983) found that children
under 6 were more likely to recall gender-appropriate photographs than gender-inappropriate ones
when tested a week later. Children tended to change the gender of the person carrying out the
gender-inappropriate activity in the photographs when asked to recall them. This supports gender
schema theory which predicts that children under 6 would do this (in contrast with Kohlberg who
said this happens in older children).
One limitation is that gender identity probably develops earlier. Zosuls et al. (2009) analysed twice-
weekly reports from 82 mothers on their children’s language from 9–21 months and videotapes of
the children at play. Children labelled themselves as a ‘boy’ or ‘girl’ (gender identity), on average, at
19 months – almost as soon as they began to communicate. This suggests that Martin and Halverson
may have underestimated children’s ability to use gender labels for themselves.
However, for Martin and Halverson the ages are averages rather than absolutes. It is the sequence
of development that is more important. This suggests that Zosuls et al.’s finding is not a fundamental
criticism of the theory.
Another strength is that GST can account for cultural differences. Cherry (2019) argues that gender schema not only influence how people process information but also determine what counts as culturally appropriate gender behaviour. In societies where perceptions of gender have less rigid boundaries, children are more likely to acquire non-standard gender stereotypes. This contrasts with some other explanations of gender development, such as psychodynamic theory, which suggests gender identity is driven more by unconscious biological urges.
Page 103
1. The Oedipus complex in boys is said to stem from the boy’s desire for his mother and hatred of his
father. During what Freud referred to as the phallic stage, boys develop incestuous feelings towards
their mother and feel a jealous hatred for their father, who has what they desire (the mother). Then, recognising that their father is more powerful, they fear that, on discovering their desire for their mother, he will castrate them.
The Electra complex in girls stems from resentment of the mother. The term was coined by Jung to describe the conflict that girls experience; Freud referred to the girl’s conflict as penis envy. During the phallic stage girls feel competition with their mother for their father’s love. Girls also resent their mother because they believe that she is responsible for their lack of a penis.
2. According to Freud the phallic stage (the third of his psychosexual stages) is the key time for
gender development. During this stage, at around 3–6 years, boys experience the Oedipus complex.
They develop incestuous feelings towards their mother and feel a jealous hatred towards their
father, who has what they desire (the mother). Then, recognising that their father is more powerful, they fear that, on discovering their desire for their mother, he will castrate them.
The Electra complex in girls stems from resentment of the mother. The term was coined by Jung to describe the conflict that girls experience; Freud referred to the girl’s conflict as penis envy. During the phallic stage girls feel competition with their mother for their father’s love. Girls also resent their mother because they believe that she is responsible for their lack of a penis.
Resolution of this conflict is through identification with the same-sex parent. For a boy, the conflict
between his desires and his castration anxiety is resolved when the boy gives up his love for his
mother and begins to identify with his father. Girls acknowledge that they will never have the penis
that they desire and so they substitute this with a desire to have their own children and through this
they finally identify with their mother and her gender. Identification with the same-sex parent leads
to internalisation. Boys adopt the attitudes and values of their father, and girls adopt those of their
mother.
3. One strength is that there is some support for the Oedipus complex. Freud’s theory implies that, for boys, ‘normal’ development depends on being raised by at least one male parent. There is some
support for this idea. Rekers and Morey (1990) rated the gender identity of 49 boys (aged 3–11).
75% of those judged ‘gender disturbed’ had no biological or substitute father living with them. This
suggests that being raised with no father may have a negative impact upon gender identity, in line
with what Freud’s theory would predict.
In contrast, Bos and Sandfort (2010) compared 63 children with lesbian parents and 68 children from
‘traditional’ families. There were no differences in terms of psychosocial adjustment or gender
identity. This contradicts Freud’s theory as it suggests that fathers are not necessary for healthy
gender identity development.
4. According to Freud the phallic stage (the third of his psychosexual stages) is the key time for
gender development. During this stage, at around 3–6 years, boys experience the Oedipus complex.
They develop incestuous feelings towards their mother and feel a jealous hatred towards their
father, who has what they desire (the mother). Then, recognising that their father is more powerful, they fear that, on discovering their desire for their mother, he will castrate them.
The Electra complex in girls stems from resentment of the mother. The term was coined by Jung to describe the conflict that girls experience; Freud referred to the girl’s conflict as penis envy. During the phallic stage girls feel competition with their mother for their father’s love. Girls also resent their mother because they believe that she is responsible for their lack of a penis.
Resolution of this conflict is through identification with the same-sex parent. For a boy, the conflict
between his desires and his castration anxiety is resolved when the boy gives up his love for his
mother and begins to identify with his father. Girls acknowledge that they will never have the penis
that they desire and so they substitute this with a desire to have their own children and through this
they finally identify with their mother and her gender. Identification with the same-sex parent leads
to internalisation. Boys adopt the attitudes and values of their father, and girls adopt those of their
mother.
One strength is that there is some support for the Oedipus complex. Freud’s theory implies that, for boys, ‘normal’ development depends on being raised by at least one male parent. There is some support
for this idea. Rekers and Morey (1990) rated the gender identity of 49 boys (aged 3–11). 75% of
those judged ‘gender disturbed’ had no biological or substitute father living with them. This suggests
that being raised with no father may have a negative impact upon gender identity, in line with what
Freud’s theory would predict.
In contrast, Bos and Sandfort (2010) compared 63 children with lesbian parents and 68 children from
‘traditional’ families. There were no differences in terms of psychosocial adjustment or gender
identity. This contradicts Freud’s theory as it suggests that fathers are not necessary for healthy
gender identity development.
One limitation is that Freud’s theory does not fully explain female development. Freud’s idea of penis envy has been criticised as merely reflecting the era in which he lived and worked, when men held most of the power. Horney (1942) argued that in fact men’s womb envy was more prominent (a reaction to women’s ability to nurture and sustain life). This challenges the idea that female gender development was founded on a desire to be like men (an androcentric bias).
Another limitation is that the theory is pseudoscientific. Freud is criticised for the lack of rigour in his methods (case studies). Also, many of his concepts (e.g. penis envy) are unconscious and so untestable. This makes Freud’s theory pseudoscientific (not genuine science) as his key ideas cannot be falsified, i.e. proved wrong through scientific testing. This questions the validity of Freud’s theory as it is not based on sound scientific evidence.
Page 105
1. Social learning theory (SLT) acknowledges the role of social context in gender development.
Gender behaviour is learned from observing others and being reinforced for the imitation of the
behaviour. SLT draws attention to the influence of the environment (nurture) in shaping gender
development. Influences can include peers, parents, teachers, culture and the media.
Children are reinforced directly for gender-appropriate behaviour. For example, boys may be praised
for being active and assertive and punished for being passive or gentle. Differential reinforcement
explains why boys and girls learn distinctly different gender behaviours – they are reinforced for
different behaviours, which they then reproduce.
There is also indirect reinforcement. Firstly, there is vicarious reinforcement, which means that if the consequences of another person’s behaviour are favourable, that behaviour is more likely to be imitated by a child (e.g. if a girl sees her mother being complimented when wearing a pretty dress). On the other hand, vicarious punishment means that if the consequences of a behaviour are seen to be unfavourable (i.e. punished), the behaviour is less likely to be imitated (e.g. if a little boy sees another boy teased for displaying feminine behaviour he is unlikely to copy it).
2. Social learning theory (SLT) focuses on the role of social context and learning in gender
development suggesting that gender behaviour is learned from observing others, whereas the
psychodynamic approach focuses on the role of conflicts during the phallic stage of psychosexual
development.
SLT assumes that gender develops as a result of differential reinforcement in both a direct and
vicarious manner, whereas the psychodynamic approach suggests that the process of development necessitates conflict, identification with the same-sex parent and finally internalisation of gender.
3. One strength is that SLT can explain cultural changes. There is more androgyny (less of a clear-cut
distinction between stereotypically masculine and feminine behaviour) in many societies today than
there was in, say, the 1950s. This shift in social expectations and cultural norms means new forms of
gender behaviour are unlikely to be punished and may be reinforced. This shows that social learning, rather than biology, can better explain gender behaviour (cognitive factors could also explain cultural changes in terms of schema/stereotypes).
4. Social learning theory (SLT) acknowledges the role of social context in gender development.
Gender behaviour is learned from observing others and being reinforced for the imitation of the
behaviour. SLT draws attention to the influence of the environment (nurture) in shaping gender
development. Influences can include peers, parents, teachers, culture and the media.
Children are reinforced directly for gender-appropriate behaviour. For example, boys may be praised
for being active and assertive and punished for being passive or gentle. Differential reinforcement
explains why boys and girls learn distinctly different gender behaviours – they are reinforced for
different behaviours, which they then reproduce.
There is also indirect reinforcement. Firstly, there is vicarious reinforcement, which means that if the consequences of another person’s behaviour are favourable, that behaviour is more likely to be imitated by a child (e.g. if a girl sees her mother being complimented when wearing a pretty dress). On the other hand, vicarious punishment means that if the consequences of a behaviour are seen to be unfavourable (i.e. punished), the behaviour is less likely to be imitated (e.g. if a little boy sees another boy teased for displaying feminine behaviour he is unlikely to copy it).
One strength is supporting evidence for differential reinforcement. Smith and Lloyd (1978) observed adults with babies aged 4–6 months who (irrespective of their actual sex) were dressed half the time
in boys’ clothes and half the time in girls’ clothes. Babies assumed to be boys were encouraged to be
adventurous and active and given a hammer-shaped rattle. Babies assumed to be girls were
reinforced for passivity, given a doll and praised for being pretty. This suggests that gender-
appropriate behaviour is stamped in at an early age through differential reinforcement and supports
the SLT explanation of gender development.
However, differential reinforcement may not be the cause of gender differences. Adults may
respond to innate gender differences in their own children e.g. encouraging naturally boisterous
boys to be active. This suggests that it is likely that social learning is only part of the explanation of
how children acquire gender-related behaviours.
Another strength is that SLT can explain cultural changes. There is more androgyny (less of a clear-
cut distinction between stereotypically masculine and feminine behaviour) in many societies today
than there was in, say, the 1950s. This shift in social expectations and cultural norms means new
forms of gender behaviour are unlikely to be punished and may be reinforced. This shows that social learning, rather than biology, can better explain gender behaviour (cognitive factors could also explain cultural changes in terms of schema/stereotypes).
One limitation is that SLT does not explain the developmental process. The implication of SLT is that
modelling of gender-appropriate behaviour can occur at any age, i.e. from birth onwards. It’s illogical
that children who are, say, two years old learn in the same way as children who are nine (this
conflicts with Kohlberg’s theory, for instance). This shows that the influence of age and maturation (i.e. development) on learning gender concepts is not considered by SLT, and this is a limitation.
Page 107
1. Mead’s (1935) research on cultural groups in New Guinea supported the cultural determination of
gender roles. The Arapesh people were gentle and responsive (similar to the stereotype of
femininity in industrialised societies). The Mundugumor people were aggressive and hostile (similar
to the stereotype of masculinity in industrialised societies). And finally the Tchambuli women were
dominant and they organised village life, whilst men were passive and considered to be decorative
(the reverse of gender behaviour in industrialised societies).
In contrast, Buss (1995) found consistent mate preferences in 37 countries studied across all
continents. In all cultures women sought men offering wealth and resources and men looked for
youth and physical attractiveness. Munroe and Munroe (1975) found that in most societies, division
of labour is organised along gender lines.
2. Children are most likely to imitate role models who are the same sex as they are and who are
engaging in gender-appropriate behaviour. This maximises the chance of gender-appropriate
behaviours being reinforced.
Bussey and Bandura (1999) found that the media provides rigid gender stereotypes, for example
men are independent, ambitious and advice-givers; women are dependent, unambitious and advice-
seekers. Furnham and Farragher (2000) found that men were more likely to be shown in
autonomous roles within professional contexts, whereas women were often seen occupying familial
roles within domestic settings.
Seeing other people perform gender-appropriate behaviours increases a child’s belief that they are
capable of such behaviours (their self-efficacy). Mitra et al. (2019) found girls in India who watched a
programme challenging gender stereotypes were more likely to see themselves as capable of
working outside the home than non-viewers.
3. One strength is that the influence of culture has research support. In industrialised cultures,
changing expectations of women are a function of their increasingly active role in the workplace
(Hofstede 2001). In traditional societies women are still homemakers as a result of social, cultural
and religious pressures. This suggests that gender roles are very much determined by the cultural
context.
One limitation is that Mead’s research has been criticised. Freeman (1983) studied the Samoan
people after Mead’s study, and claimed Mead had been misled by some of her participants. He also
claimed Mead’s preconceptions of what she would find had influenced her reading of events
(observer bias and ethnocentrism). This suggests that Mead’s interpretations may not have been
objective and questions the conclusions that she drew.
4. Mead’s (1935) research on cultural groups in New Guinea supported the cultural determination of
gender roles. The Arapesh people were gentle and responsive (similar to the stereotype of
femininity in industrialised societies). The Mundugumor people were aggressive and hostile (similar
to the stereotype of masculinity in industrialised societies). And finally the Tchambuli women were
dominant and they organised village life, whilst men were passive and considered to be decorative
(the reverse of gender behaviour in industrialised societies).
In contrast, Buss (1995) found consistent mate preferences in 37 countries studied across all
continents. In all cultures women sought men offering wealth and resources and men looked for
youth and physical attractiveness. Munroe and Munroe (1975) found that in most societies, division
of labour is organised along gender lines.
Children are most likely to imitate role models who are the same sex as they are and who are
engaging in gender-appropriate behaviour. This maximises the chance of gender-appropriate
behaviours being reinforced.
Bussey and Bandura (1999) found that the media provides rigid gender stereotypes, for example
men are independent, ambitious and advice-givers; women are dependent, unambitious and advice-
seekers. Furnham and Farragher (2000) found that men were more likely to be shown in
autonomous roles within professional contexts, whereas women were often seen occupying familial
roles within domestic settings.
Seeing other people perform gender-appropriate behaviours increases a child’s belief that they are
capable of such behaviours (their self-efficacy). Mitra et al. (2019) found girls in India who watched a
programme challenging gender stereotypes were more likely to see themselves as capable of
working outside the home than non-viewers.
One strength is that the influence of culture has research support. In industrialised cultures,
changing expectations of women are a function of their increasingly active role in the workplace
(Hofstede 2001). In traditional societies women are still homemakers as a result of social, cultural
and religious pressures. This suggests that gender roles are very much determined by the cultural
context.
One limitation is that Mead’s research has been criticised. Freeman (1983) studied the Samoan
people after Mead’s study, and claimed Mead had been misled by some of her participants. He also
claimed Mead’s preconceptions of what she would find had influenced her reading of events
(observer bias and ethnocentrism). This suggests that Mead’s interpretations may not have been
objective and questions the conclusions that she drew.
One strength of media influence is that it has a theoretical basis. The more time individuals spend
‘living’ in the media world, the more they believe it reflects the social reality of the ‘outside’ world
(cultivation theory). Bond and Drogos (2014) found a positive correlation between time spent
watching Jersey Shore and permissive attitudes towards casual sex (other factors controlled). This
suggests the media ‘cultivates’ perception of reality and this affects gender behaviour (e.g. sexual
behaviour).
One limitation is there may not be a causal relationship. Durkin (1985) argues that even very young
children are not passive recipients of media messages, and family norms are a bigger influence. If
media representations confirm gender roles held by the family, norms are reinforced in a child’s
mind. If not, then they are likely to be rejected. This suggests that media influences are secondary to
other influences, such as family.
Page 109
1. Gender dysphoria occurs when there is a mismatch between a person’s biological sex and the
gender they feel they are. DSM-5 specifically excludes atypical gender conditions with a biological
basis (e.g. Klinefelter’s syndrome).
2. One biological explanation is the genetic explanation. Coolidge et al. (2002) studied 157 twin pairs (MZ and DZ) and suggested that 62% of the cases of gender dysphoria (GD) identified could be accounted for by genetic variance. Heylens et al. (2012) found that nine (39%) of their sample of MZ twins were concordant for GD, but none of the DZ twins were.
Social constructionism suggests that confusion (dysphoria) arises because people have to select a
gender. Therefore dysphoria is not pathological (a mental disorder) but due to social factors. For
example, McClintock (2015) studied biological males in New Guinea born with female genitals due to
a genetic condition. At puberty their genitals change and the individuals are accepted as kwolu-aatmwol
(females-then-males). However, after contact with the West kwolu-aatmwol are seen as abnormal
instead of normal.
3. One limitation is that brain sex theory assumptions have been challenged. Hulshoff Pol et al.
(2006) scanned transgender individuals’ brains during hormone treatment and found the size of the
bed nucleus of the stria terminalis (BST) had changed significantly. Kruijver et al. (2000) and Zhou et al. (1995) examined the BST post-mortem, after transgender individuals had received hormones during
gender reassignment treatment. This suggests that differences in the BST may have been an effect of
hormone therapy, rather than the cause of gender dysphoria.
One strength is that there may be other brain differences. Rametti et al. (2011) analysed brains of
both male and female transgender individuals, crucially before they began hormone treatment as
part of gender reassignment. In most cases, the distribution of white matter corresponded more
closely to the gender the individuals identified themselves as being rather than their biological sex.
This suggests that there are early differences in the brains of transgender individuals.
4. The bed nucleus of the stria terminalis (BST) is involved in emotional responses and male sexual
behaviour in rats. This area is larger in men than women and is female-sized in transgender females
(Kruijver et al. 2000). People with gender dysphoria (GD) have a BST which is the size of the sex they
identify with, not the size of their biological sex. This fits with the experience of transgender people who feel, from early childhood, that they were born the wrong sex (Zhou et al. 1995).
Social constructionism suggests that confusion (dysphoria) arises because people have to select a
gender. Therefore dysphoria is not pathological (a mental disorder) but due to social factors. For
example, McClintock (2015) studied biological males in New Guinea born with female genitals due to
a genetic condition. At puberty their genitals change and the individuals are accepted as kwolu-aatmwol
(females-then-males). However, after contact with the West kwolu-aatmwol are seen as abnormal
instead of normal.
One limitation is that brain sex theory assumptions have been challenged. Hulshoff Pol et al. (2006)
scanned transgender individuals’ brains during hormone treatment and found the size of the bed
nucleus of the stria terminalis (BST) had changed significantly. Kruijver et al. (2000) and Zhou et al. (1995) examined the BST post-mortem, after transgender individuals had received hormones during
gender reassignment treatment. This suggests that differences in the BST may have been an effect of
hormone therapy, rather than the cause of gender dysphoria.
One strength is that there may be other brain differences. Rametti et al. (2011) analysed brains of
both male and female transgender individuals, crucially before they began hormone treatment as
part of gender reassignment. In most cases, the distribution of white matter corresponded more
closely to the gender the individuals identified themselves as being rather than their biological sex.
This suggests that there are early differences in the brains of transgender individuals.
One strength is evidence of more than two gender roles. Some cultures recognise more than two genders, e.g. the fa’afafine of Samoa, challenging the binary distinction between male and female. Increasing numbers of people now describe themselves as non-binary, showing that cultural changes now match the lived experience of many. This suggests that gender identity (and dysphoria) is best seen as a social construction rather than a biological fact.
Some people with GD will decide to have gender reassignment surgery. However, GD may not
continue through to adulthood – only 12% of GD girls were still GD at 24 years old (Drummond et al.
2008). This suggests that gender reassignment surgery before the age of consent must be very
carefully managed with appropriate support and safeguards.
Page 111
1. Piaget referred to his idea of units of knowledge as ‘schema’. Each schema contains our
understanding of an object, person or idea. Schema become increasingly complex during development as we gain more experience of those objects, people and ideas and acquire more information about each one.
Equilibration takes place when we have encountered new information and built it into our
understanding of a topic, either by assimilating it into an existing schema or accommodating to it by
forming a new one. Disequilibrium drives development as it is an unpleasant experience. Once
everything is balanced again we reach a state of equilibration.
2. Assimilation takes place when we understand a new experience and remove disequilibrium
through assimilating information into our existing schema, whereas accommodation takes place in
response to dramatically new experiences and involves either radically changing current schema or
forming new ones. For example, a child in a family with dogs can adapt to the existence of different
dog breeds by assimilating them into their dog schema (assimilation) whereas on meeting a cat they
may initially think of cats as dogs but then accommodate to the existence of a separate species
called cats. This will involve altering the animal/pet schema to include cats and forming a new ‘cat’
schema.
3. Piaget asserts that maturation causes changes in the way children think, not just in how much they know. The theory focuses on what motivates development and on how knowledge develops. He
suggests that cognitive development includes the construction of increasingly detailed schema
which help us organise our knowledge. The few innate schema are built on in infancy and as adults
we build schema for people, objects, physical actions and for more abstract ideas like justice or
morality.
We are motivated to learn when we experience disequilibrium. For example, when a child cannot
make sense of their world because existing schema are insufficient, they are motivated to reduce
the discomfort and return to a state of equilibration. They can do this through two key processes: the first is assimilation, when new experiences can be understood by incorporating them into existing schema.
Secondly, when this is not possible because the experience is radically different from the existing
schema, then accommodation involves the creation of whole new schema or wholesale changes to
existing ones.
For example, a child with pet cats who has not come across dogs (has no dog schema) on meeting a
dog will initially try to incorporate the dog into their cat schema. When the dog acts rather
differently (e.g. sitting when told to, barking, etc.), then the child needs to do something more
dramatic than assimilation. The child will accommodate by forming a separate dog schema. Both
development and equilibration have taken place.
4. Piaget asserts that maturation causes changes in the way children think, not just in how much they know. The theory focuses on what motivates development and on how knowledge develops. He
suggests that cognitive development includes the construction of increasingly detailed schema
which help us organise our knowledge. The few innate schema are built on in infancy and as adults
we build schema for people, objects, physical actions and for more abstract ideas like justice or
morality.
We are motivated to learn when we experience disequilibrium. For example, when a child cannot
make sense of their world because existing schema are insufficient, they are motivated to reduce
the discomfort and return to a state of equilibration. They can do this through two key processes: the first is assimilation, when new experiences can be understood by incorporating them into existing schema.
Secondly, when this is not possible because the experience is radically different from the existing
schema, then accommodation involves the creation of whole new schema or wholesale changes to
existing ones.
For example, a child with pet cats who has not come across dogs (has no dog schema) on meeting a
dog will initially try to incorporate the dog into their cat schema. When the dog acts rather
differently (e.g. sitting when told to, barking, etc.), then the child needs to do something more
dramatic than assimilation. The child will accommodate by forming a separate dog schema. Both
development and equilibration have taken place.
A strength of Piaget’s theory is support from research. Howe et al. (1992) put 9–12-year-olds in
groups to discuss how objects move down a slope. They found that the level of children’s knowledge
and understanding increased after the discussion. This means that the children formed their own
individual mental representations of the topic – as Piaget would have predicted.
Another strength is that Piaget’s ideas have revolutionised teaching, ensuring that activity-oriented
classrooms allow children to learn in a more natural way as children actively engage in tasks that
allow them to construct their own understanding of the curriculum. At A level, discovery learning may take the form of ‘flipped’ lessons, where students read up on content, forming their own basic mental
representation of the topic prior to teaching. This shows how Piaget-inspired approaches may
facilitate the development of individual mental representations of the world.
On the other hand, although Piaget’s theory has been influential, there is little firm evidence that
discovery learning is more effective than direct teaching. Lazonder and Harmsen (2016) reviewed
the evidence and concluded that input from others, rather than discovery per se, is the crucial
element. Therefore discovery learning is less effective than we would expect if Piaget’s theory was
correct.
However, whilst Piaget recognised teachers are important for setting up discovery situations for
children, other theories suggest that the role of others in learning is more central. For example,
Vygotsky argued that learning is more of a social process and more advanced learning is possible
only with the help of experts or peers. This suggests that Piaget’s theory is somewhat limited in its
explanation of the cognitive development process.
Furthermore, Piaget believed that disequilibrium was the motivating factor in cognitive development
but not all children are equally motivated to remove disequilibrium. Piaget studied children from
middle-class families who may have been more motivated to learn than other children. If the role of equilibration, a central part of his explanation, is doubted, this weakens the validity of his theory.
Page 113
1. Object permanence refers to the ability to realise that an object still exists when it passes out of
the visual field. So, for example, when a baby’s rattle gets lost over the side of a high chair it ceases
to exist in their view and they lose interest. Piaget believed that the ability to understand that it
continues to exist despite not being seen appears at around eight months of age.
Conservation is the name given to the ability to realise that quantity remains the same even when the appearance of an object or group of objects changes. For example, the volume of liquid stays the same whether it is in a short fat or a long thin glass. Piaget believed that this skill develops at around 7 years, at the start of the concrete operations stage; children in the pre-operational stage cannot yet conserve.
2. During the sensorimotor stage (approximately 0–2 years) a baby’s focus is on physical sensations
and the basic co-ordination between what they see and their body movement. During this stage,
they also come to understand that other people are separate objects, and they acquire some basic
language.
A key skill developed during this stage is that of object permanence (the understanding that objects
still exist when they are out of sight). Before 8 months, children immediately switch their attention
away from an object once it is out of sight, but after 8 months children continue to look for it. It is assumed that at this point children understand that objects continue to exist when removed from view.
4. Piaget suggested that there were four stages of development each with a different level of
reasoning ability. Movement through the stages is said to occur through schema and
disequilibrium/equilibration and he proposed that all children develop through the same sequence
of stages.
During the sensorimotor stage (approximately 0–2 years) a baby’s focus is on physical sensations
and the basic co-ordination between what they see and their body movement. Before 8 months,
children immediately switch their attention away from an object once it is out of sight but after 8
months children continue to look for it. This suggests that children then understand that objects
continue to exist and that they have developed object permanence.
During the pre-operational stage (2–7 years) children are said to be egocentric (unable to perceive
matters from a point of view other than their own) as tested by Piaget and Inhelder’s (1956) three
mountains task. Furthermore, they have no sense of class inclusion, for example younger children
cannot simultaneously see a dog as both a member of the dog class and the animal class.
At the start of the concrete operations stage (at around 7 years), children have mastered conservation (an understanding that quantity remains constant even when appearance changes) and are improving at egocentrism and class inclusion tasks. They continue to have
some reasoning problems, though, and can only reason about (operate on) physical objects in their presence (hence ‘concrete’ operations).
Piaget suggested that abstract reasoning develops during the formal operations stage (11+ years).
Children can now focus on the form of an argument and not be distracted by its content.
A limitation of Piaget’s stage theory is evidence that challenges his methods. Piaget’s method may
have led the children to think something must have changed (or why would the researcher ask the
question?). McGarrigle and Donaldson (1974) found that in a conservation of number task, if the
counters were moved accidentally by a ‘naughty teddy’, 72% of children under 7 correctly said the
number was the same as before. This suggests that Piaget underestimated the conservation ability
of children aged 4–6 years as children of this age can conserve, as long as they are not put off by the
way they are questioned.
Another limitation is more evidence challenging Piaget’s findings concerning class inclusion. Siegler
and Svetina (2006) found that when 5-year-olds received feedback that pointed out subsets, they did
develop an understanding of class inclusion contrary to Piaget’s belief that sufficient intellectual
development for class inclusion was not possible until around 7 years. This again suggests that Piaget underestimated children’s cognitive abilities and calls into question the validity of his stages.
The assertions about egocentrism are not supported either. Hughes (1975) found that even at 3½ years a child could position a boy doll in a model building with two intersecting walls so that the doll could not be seen by a policeman doll, succeeding 90% of the time. 4-year-olds could do this 90% of the time even when there were two police officers to hide from. This again suggests the manner of Piaget’s studies and tasks led him to underestimate children’s intellectual abilities.
However, it is important to note that these criticisms boil down to challenges to Piaget’s claims about the ages at which children pass through stages. For example, Hughes argues that children can decentre
at a younger age than Piaget thought, but decentring as a cognitive ability that develops through
stages is not in question. Therefore the core principles of Piaget’s stages stand, although the
methods he used meant the timings were inaccurate.
Page 115
1. The zone of proximal development (ZPD) refers to the gap between a child’s current level of
development, defined by the cognitive tasks they can perform unaided, and what they can
potentially do with the right help from a more expert other.
Scaffolding is the process of helping a learner cross the ZPD and advance as much as they can, given
their stage of development. Typically, the level of help given in scaffolding declines as the learner
crosses the ZPD.
2. Whilst Vygotsky agreed with Piaget that children develop reasoning skills sequentially, he believed
that this process was mainly dependent on social processes. He claimed that knowledge is first
intermental (between someone more expert and someone less expert) and then intramental (within
the individual). He went on to suggest that cultural differences in learning could be explained through differing experiences: reasoning abilities are acquired via contact with those around us, and children pick up the mental ‘tools’ that are most important for life from the world they live in.
Vygotsky referred to the ZPD as the gap between what a child knows or can do alone, and what the
child is capable of, following interaction with someone more expert. The role of a teacher was to
guide the child through this gap. The most advanced (formal) reasoning can only be achieved with
the help of experts, not simply through exploration.
Experts use scaffolding to help learners cross the ZPD and advance as much as they can, given their
stage of development. Typically, the level of help given in scaffolding declines as the learner crosses
the ZPD. Progressive scaffolding strategies range from demonstration, the most help (e.g. mother draws an object with the child), down to general prompts, the least help (e.g. mother says, ‘Now draw something else.’).
3. Wood et al. (1976) observed children learning with adults and identified five aspects of scaffolding which enable learners to traverse Vygotsky’s ZPD. They concluded that these were ways in which an adult can help a child better understand and perform a task. They also noted the strategies that the ‘experts’ (suggested by Vygotsky to be essential to more complex development) use when scaffolding, concluding that the level of help given in scaffolding declines from ‘demonstration’ (most help) to ‘general prompts’ (least help). An adult is more likely to use high-level help strategies when first helping, then gradually withdraws the help as the child grasps the task. The progressive strategies are:
• Demonstration – e.g. adult draws an object with crayons.
• Preparation – e.g. adult helps child grasp a crayon.
• Indication of materials – e.g. adult points at crayon.
• Specific verbal instructions – e.g. adult says ‘What about the green crayon?’.
• General prompts – e.g. adult says ‘Now draw something else’.
4. Whilst Vygotsky agreed with Piaget that children develop reasoning skills sequentially, he believed
that this process was mainly dependent on social processes. He claimed that knowledge is first
intermental (between someone more expert and someone less expert) and then intramental (within
the individual). This ‘expert’ does not have to be an adult and can just as easily be an expert peer, as Uriah has observed.
Vygotsky went on to suggest that cultural differences in learning could be explained through differing experiences: reasoning abilities are acquired via contact with those around us, and children pick up the mental ‘tools’ that are most important for life from the world they live in. Once again, children can acquire these from other children.
He referred to the ZPD as the gap between what a child knows or can do alone, and what the child is capable of following interaction with someone more expert. A ‘teacher’, or a more expert peer as Uriah suggests, can guide other children across this gap through scaffolding. The most advanced (formal) reasoning can only be achieved with the help of experts, not simply through exploration, which is borne out by Uriah’s personal observations.
A strength of Vygotsky’s theory is research support for the ZPD. Roazzi and Bryant (1998) found that
4–5-year-olds performed better on a ‘number of sweets’ challenge when working with peers (who
offered support on estimating) rather than alone. This demonstrates that children can develop more
advanced reasoning skills when working with more expert people and supports both the validity of
the ZPD as a developmental concept and Uriah’s observations.
There is also research support for Vygotsky’s concept of scaffolding. Conner and Cross (2003) found
in observations of children at intervals between the ages of 16 and 54 months that mothers used
less direct intervention as children developed. This supports the idea that the level of help given by
an expert partner declines over time as suggested by the process of scaffolding and the process by
which children move through their ZPD.
Another strength is real-world application of the theory. Educational techniques such as group work,
peer tutoring and individual adult assistance are all based on Vygotsky’s ideas and are increasingly
used in the 21st century. Van Keer and Verhaeghe (2005) found that 7-year-olds tutored by 10-year-
olds, in addition to their whole-class teaching, progressed further in reading than a control group
who only had class teaching. This suggests that Vygotsky was correct in assuming that more able
people, even if they are essentially peers, can enhance development and learning as suggested by
Uriah.
On the other hand, although Vygotsky’s ideas about social interaction have found real-world application, this is not universal. Liu and Matthews (2005) point out that classes of up to 50 children in China learn very effectively in lecture-style classrooms with little interaction with peers or teachers. Therefore Vygotsky may have overestimated the importance of scaffolding in learning.
There is evidence to support Vygotsky’s idea that interaction with a more experienced other can enhance learning (e.g. Conner and Cross). However, if Vygotsky was right about interactive learning, we would expect children learning together to learn the same things, yet what they learn varies considerably. This means that Piaget might have described learning better than Vygotsky, in spite of Vygotsky’s useful emphasis on interaction.
Page 117
1. Knowledge of the physical world refers to the extent to which we understand how the physical
world works. Baillargeon’s violation of expectation research aims to investigate this understanding.
An example of this knowledge is object permanence, the understanding that objects continue to
exist when they leave the visual field. There is a debate concerning the age at which children
develop this kind of knowledge.
2. Baillargeon and Graber (1987) showed 24 infants, aged 5–6 months, a tall and a short rabbit pass
behind a screen with a window. In the expected condition the tall rabbit can be seen passing the
window but the short one cannot. In the unexpected condition neither rabbit appeared at the
window. Measurements were taken as to how long the infants spent looking at each condition.
It was found that the infants looked for an average of 33.07 seconds at the unexpected event as
compared to 25.11 seconds in the expected condition. The researchers interpreted this as meaning
that the infants were surprised at the unexpected condition. For them to be surprised it follows that
they must have known that the tall rabbit should have reappeared at the window. This
demonstrates an understanding of object permanence.
3. There were always criticisms of Piaget’s methods for studying children’s knowledge of the physical world. He assumed that when a baby shifted attention away from an out-of-sight object this meant that the child no longer knew it existed. However, the child might have shifted attention simply because they lost interest. The VOE method is probably a better method for investigating whether a child has some understanding of the permanent nature of objects because it eliminates this confounding variable, giving the VOE method better validity than some alternatives.
However, whilst Baillargeon’s studies clearly show that infants look for significantly longer at some scenes than others, what they demonstrate is only that babies behave as we might expect them to if they understood the physical world. We are assuming how a baby would behave in response to a violation of expectations, and babies might not actually look at unexpected events for longer than expected events. Also, although infants look for different lengths of time at different events, this
merely means that they see them as different but there could be any number of reasons why they
find one scene more interesting than another. This means that the VOE method may not be an
entirely valid way of investigating infant understanding of the physical world.
4. Baillargeon suggested that infants in the sensorimotor stage may have a better-developed
understanding of the physical world than proposed by Piaget. For example, Piaget suggested that
infants did not reach for a hidden object because they lacked an understanding of object
permanence but Baillargeon suggested it might be because they didn’t have the necessary motor
skills.
Baillargeon considered that the methods used by Piaget led him to underestimate children’s abilities
and she developed the violation of expectation (VOE) technique to compare infant reactions to an
expected and an unexpected event and thus could make inferences about the infant’s cognitive
abilities.
Furthermore, Baillargeon et al. (2012) proposed that we are born with a physical reasoning system
(PRS) to enable us to learn details of the physical world more easily. This primitive awareness
becomes more sophisticated as we learn from experience. Baillargeon referred to object persistence
(like Piaget’s object permanence) and claimed that this was one such ability. The PRS means infants are predisposed to attend to and learn from unexpected events. An innate PRS means that, when an infant is shown an unexpected occurrence (e.g. the tall rabbit event where the tall rabbit does not appear), it draws their attention. This will help them to develop their understanding of the physical world.
A strength is that, whilst Piaget assumed that when an infant failed to search for a hidden object the infant thought it no longer existed, the use of the VOE technique enables us to control for the possibility that they may simply have lost interest. This means that Baillargeon’s explanation provides a more valid account of infant abilities than Piagetian theories.
On the other hand, Piaget himself pointed out that acting in accordance with a principle is not the
same as understanding it. Even if babies recognise and give more attention to unexpected events,
this doesn’t mean they understand them. To understand something means to think about it
consciously and apply reasoning about different aspects of the world. Therefore, even though babies
do appear to respond to unexpected conditions as Baillargeon suggested, this may not represent a
change in their cognitive abilities.
A methodological issue is that babies’ responses may not be to the unexpectedness of the event. All
VOE shows is that babies find certain events more interesting. We are inferring a link between this
response and object permanence. Actually, the different levels of interest in the two different events
may be for any number of reasons. This means that the VOE method may not be a valid way to study
a very young child’s understanding of the physical world.
Another strength is that the PRS can explain why physical understanding is universal. We all have a
good understanding of the physical world regardless of culture and experience. So if we drop a key
ring we all understand that it will fall to the ground. This universal understanding suggests that a
basic understanding of the physical world is innate. Otherwise we would expect cultural and
individual differences. This means that Baillargeon’s PRS appears to be a good account of infant
cognitive abilities.
Page 119
1. Social cognition describes the mental processes we make use of when engaged in social
interaction. For example, we make decisions on how to behave based on our understanding of a
social situation. One of the skills this requires is perspective-taking, which is the ability to appreciate
a social situation from the perspective (point of view) of other people. This cognitive ability underlies
much of our normal social interaction and both the understanding and the decision-making are
cognitive processes.
2. Selman (1976) proposed five stages of social cognitive development suggesting that development
through these stages is based on both maturity and experience.
In Stage 0 (3–6 years, egocentric) the child cannot reliably distinguish between their own emotions
and those of others. They can generally identify emotional states in others but do not understand
what social behaviour might have caused them. In Stage 1 (6–8 years, social-informational) they
start to be able to tell the difference between their own point of view and that of others, but they
can usually focus on only one of these perspectives.
It is in Stage 2 (8–10 years, self-reflective) that the child can put themselves in the position of
another person and fully appreciate their perspective. They still cannot take on more than one
viewpoint at a time until Stage 3 (10–12 years, mutual). In the final stage, Stage 4 (12+ years, social
and conventional system), they recognise that sometimes understanding others' viewpoints is not
enough to allow people to reach agreement and that social conventions are needed to keep order.
3. Selman (1971) looked at changes that occurred with age in children’s responses to scenarios in
which they were asked to take the role of different people in a social situation – 30 boys and 30 girls
took part in the study, 20 aged four, 20 aged five and 20 aged six. All were individually given a task
designed to measure perspective-taking ability. This involved asking them how each person felt in
various scenarios. For example, one scenario featured a child called Holly who has promised her
father she will no longer climb trees, but who then comes across her friend whose kitten is stuck up
a tree. The task was to describe and explain how each person would feel if Holly did or did not climb
the tree to rescue the kitten.
4. Selman (1971) looked at changes that occurred with age in children’s responses to scenarios in
which they were asked to take the role of different people in a social situation. Based on children’s
typical responses to perspective-taking scenarios at different ages, Selman (1976) proposed five
stages of social cognitive development suggesting that development through these stages is based
on both maturity and experience.
In Stage 0 (3–6 years, egocentric) the child cannot reliably distinguish between their own emotions
and those of others. They can generally identify emotional states in others but do not understand
what social behaviour might have caused them. In Stage 1 (6–8 years, social-informational) they
start to be able to tell the difference between their own point of view and that of others, but they
can usually focus on only one of these perspectives.
It is in Stage 2 (8–10 years, self-reflective) that the child can put themselves in the position of
another person and fully appreciate their perspective. They still cannot take on more than one
viewpoint at a time until Stage 3 (10–12 years, mutual). In the final stage, Stage 4 (12+ years, social
and conventional system), they recognise that sometimes understanding others' viewpoints is not
enough to allow people to reach agreement and that social conventions are needed to keep order.
A strength is support from Selman’s (1971) own research. He looked at changes that occurred with
age in children’s responses to scenarios in which they were asked to take the role of different people
in a social situation. Boys and girls aged 4, 5 and 6 years were individually given a task designed to
measure perspective-taking ability. This involved asking them how each person felt in various
scenarios. For example, one scenario featured a child called Holly who has promised her father she
will no longer climb trees, but who then comes across her friend whose kitten is stuck up a tree. The
task was to describe and explain how each person would feel if Holly did or did not climb the tree to
rescue the kitten. The findings revealed a number of distinct levels of perspective-taking, as outlined
above. These correlated with age, showing a clear developmental sequence as predicted by Selman’s
theory.
This was further supported by longitudinal follow-up studies which confirm that perspective-taking
develops with age. This is a strength of the levels idea generally, particularly as it is supported by a
range of evidence.
However, the evidence is mixed as to how important perspective-taking is. Buijzen and Valkenburg
(2008) found a negative correlation between age, perspective-taking and coercive behaviour,
suggesting that perspective-taking is important in developing prosocial behaviour. However, Gasser
and Keller (2009) found that bullies displayed no difficulties in perspective-taking. This suggests that
perspective-taking may not be a key element in healthy social development.
Another limitation is that Selman’s theory looks only at cognitive factors, whereas children’s social
development involves more than their developing cognitive abilities. For example, internal factors
(e.g. empathy) and external factors (e.g. family atmosphere) are important and it is likely that social
development is due to a combination of these. This means that Selman’s account of perspective-taking is too narrow as an explanation of children’s social development.
Wu and Keysar (2007) found that young adult Chinese participants did significantly better in
perspective-taking than matched Americans. This indicates that the development of perspective-
taking is influenced by sociocultural inputs and not just maturity. However, Selman believed that his
stages of perspective-taking were based primarily on cognitive maturity and so were universal
(Vassallo 2017). This suggests there may be an interaction between nature and nurture, and perhaps Selman wrongly downplayed the role of nurture.
Page 121
1. The Sally–Anne study was Baron-Cohen et al.’s (1985) method of studying theory of mind.
Children were told a story involving two dolls, Sally and Anne. Sally places a marble in her basket, but
when Sally is not looking Anne moves the marble to her box. The task is to work out where Sally will
look for her marble. Understanding that Sally does not know that Anne has moved the marble
requires an understanding of Sally’s false belief about where the marble is.
Baron-Cohen et al. (1985) recruited 20 high-functioning children diagnosed with autism spectrum
disorder (ASD) and control groups of 14 children with Down syndrome and 27 without a diagnosis
and individually administered the Sally–Anne test to them.
They found that 85% of children in the control groups correctly identified where Sally would look for
her marble, suggesting that they had the social cognition skill. However, only four of the children in
the ASD group (20%) could answer this correctly. This dramatic difference demonstrated that ASD
involves a ToM deficit and a problem with social cognition.
2. Baron-Cohen et al. (1985) recruited 20 high-functioning children diagnosed with autism spectrum
disorder (ASD) and control groups of 14 children with Down syndrome and 27 without a diagnosis
and individually administered the Sally–Anne test to them.
They found that 85% of children in the control groups correctly identified where Sally would look for
her marble, suggesting that they had the social cognition skill. However, only four of the children in
the ASD group (20%) could answer this correctly. This dramatic difference demonstrated that ASD
involves a ToM deficit and a problem with social cognition.
Older children with ASD can succeed on false belief tasks despite problems with empathy and social communication, raising questions as to whether ASD can be explained by ToM deficits.
Baron-Cohen et al. (1997) developed the Eyes Task as a more challenging test of ToM and found that
adults with high-functioning ASD struggled on this task. This supports the idea that ToM deficits
might be the cause of ASD.
3. One limitation is the reliance on false belief tasks to test the theory. Bloom and German (2000)
suggest that false belief tasks require other cognitive abilities (e.g. visual memory) as well as ToM, so
‘failure’ may be due to a memory deficit and not ToM deficits. Furthermore, children who cannot
perform well on false belief tasks still enjoy pretend-play, which requires a ToM. This means that
false belief tasks may not really measure ToM, so the theory lacks valid supporting evidence.
4. Theory of mind (ToM) is a personal theory or belief about what other people know, are feeling or
thinking and is tested differently according to age. Meltzoff (1988) allowed children to observe
adults placing beads into a jar. In the experimental condition adults appeared to struggle with this
and dropped the beads, whereas in the control condition the adults successfully placed the beads in
the jar. In both conditions toddlers successfully placed the beads in the jar, suggesting that they
were imitating what the adult intended to do rather than what they actually did, demonstrating
ToM.
Baron-Cohen et al. (1985) recruited 20 high-functioning children diagnosed with ASD and control
groups of 14 children with Down syndrome and 27 without a diagnosis and individually administered
the Sally–Anne test to them (a false belief task developed to test whether children can understand
that people can believe something that is not true).
They found that 85% of children in the control groups correctly identified where Sally would look for
her marble, suggesting that they had the social cognition skill. However, only four of the children in
the ASD group (20%) could answer this correctly. This dramatic difference demonstrated that ASD
involves a ToM deficit and a problem with social cognition and it has also been suggested that ToM
deficits might in fact be a complete explanation of ASD.
Baron-Cohen et al. (1997) developed the Eyes Task as a more challenging test of ToM and found that
adults with high-functioning ASD struggled with this task. This supports the idea that ToM deficits
might be the cause of ASD.
One limitation is the reliance on false belief tasks to test the theory. Bloom and German (2000)
suggest that false belief tasks require other cognitive abilities (e.g. visual memory) as well as ToM, so
‘failure’ may be due to a memory deficit and not ToM deficits. Furthermore, children who cannot
perform well on false belief tasks still enjoy pretend-play, which requires a ToM. This means that
false belief tasks may not really measure ToM, so the theory lacks valid supporting evidence.
One strength of ToM research is its application to understanding ASD. People with ASD find ToM tests difficult, which shows they do have problems understanding what others think. This in turn
explains why people with ASD find social interaction difficult – because they don’t pick up cues for
what others are thinking and feeling. This means that ToM research has real-world relevance.
However, ToM does not provide a complete explanation for ASD. Not everyone with ASD
experiences ToM problems, and ToM problems are not limited to people with ASD (Tager-Flusberg
2007). This means that there must be other factors that are involved in ASD, and the association
between ASD and ToM is not as strong as first believed.
Perner (2002) suggests that ToM develops in line with other cognitive abilities (domain-general). A Piagetian view such as this holds that ToM is based on an innate ability which develops with age.
However, Astington (1998) takes a more Vygotskian approach, focusing on the social influences that
affect ToM and suggesting we internalise our ToM during early interactions with adults. This is
supported by Liu et al. (2004), who found that ToM appeared at different ages in different
cultures. This means that the rate of development is modified by the social environment – nature
and nurture.
Page 123
1. The mirror neuron system consists of special brain cells called mirror neurons distributed in
several areas of the brain. Mirror neurons are unique because they fire both in response to personal
action and action on the part of others. These special neurons may be involved in social cognition,
allowing us to interpret intention and emotion in others.
2. Rizzolatti et al. (2002) noted that the same area of a monkey’s motor cortex became activated both when the monkey observed a researcher reaching for his lunch and when the monkey itself reached for food. The researchers later confirmed that it was the same brain cells firing in both cases.
Gallese and Goldman (1998) suggested that mirror neurons respond not just to observed actions but
to intentions behind behaviour and that we need to understand the intentions of others to interact
socially. The research on mirror neurons suggests we simulate the action of others in our own brains
and thus experience their intentions through our mirror neurons.
Ramachandran (2011) suggested that mirror neurons have shaped human evolution, in particular how we have evolved as a social species. Furthermore, his research suggested that mirror neurons enable us to
understand intention, emotion and perspective. These are fundamental requirements for living in
large groups with the complex social roles and rules that characterise human culture.
3. A strength is research that supports the link between ASD and mirror neuron deficits, e.g. finding reduced thickness of the pars opercularis in participants with ASD (Hadjikhani 2007). Other studies
using fMRI have shown lower activity in brain areas associated with mirror neurons in participants
with ASD. This suggests a cause of ASD may lie in the mirror neuron system. However, a systematic
review of studies by Hamilton (2013) concluded that evidence was highly inconsistent and results
hard to interpret. This means there may not be a link between ASD and mirror neuron activity after
all.
4. The mirror neuron system consists of special brain cells called mirror neurons distributed in
several areas of the brain. They are unique because they fire both in response to personal action and
action on the part of others. These special neurons may be involved in social cognition, allowing us
to interpret intention and emotion in others.
Rizzolatti et al. (2002) noted that the same area of a monkey’s motor cortex became activated both when the monkey observed a researcher reaching for his lunch and when the monkey itself reached for food. The researchers later confirmed that it was the same brain cells firing in both cases.
Gallese and Goldman (1998) suggested that mirror neurons respond not just to observed actions but
to intentions behind behaviour and that we need to understand the intentions of others to interact
socially. The research on mirror neurons suggests we simulate the action of others in our own brains
and thus experience their intentions through our mirror neurons.
Ramachandran (2011) suggested that mirror neurons have shaped human evolution, in particular how we have evolved as a social species. Furthermore, his research suggested that mirror neurons enable us to
understand intention, emotion and perspective. These are fundamental requirements for living in
large groups with the complex social roles and rules that characterise human culture.
Support for the role of mirror neurons comes from research by Haker et al. (2012). They
demonstrated via fMRI scans that Brodmann’s Area 9 (a part of the brain rich in mirror neurons) is
involved in contagious yawning. Mouras et al. (2008) found that when men watched heterosexual
pornography, activity in the pars opercularis was seen immediately before sexual arousal. This
suggests that mirror neurons produced perspective-taking, making the pornography arousing. Both
studies support the importance of mirror neurons in social cognition through activation when
empathy or perspective-taking take place.
However, evidence for mirror neuron activity usually comes from brain scanning. This technique
identifies activity levels in regions of the brain but cannot measure activity in individual brain cells.
Inserting electrodes is the only way of measuring activity at a cellular level, which is not ethically possible in humans. Therefore there is no gold standard for measuring mirror neuron activity in humans (Bekkali et al. 2019), and so no direct evidence of such activity.
A strength is research that supports the link between ASD and mirror neuron deficits, e.g. finding reduced thickness of the pars opercularis in participants with ASD (Hadjikhani 2007). Other studies
using fMRI have shown lower activity in brain areas associated with mirror neurons in participants
with ASD. This suggests a cause of ASD may lie in the mirror neuron system. However, a systematic
review of studies by Hamilton (2013) concluded that evidence was highly inconsistent and results
hard to interpret. This means there may not be a link between ASD and mirror neuron activity after
all.
Some research shows that mirror neurons are involved in physical perspective-taking. Maranesi et
al. (2017) found that specific mirror neurons in monkeys’ motor cortex fired according to the
position and angle from which experimenters gestured. This shows that physical perspective is
encoded by mirror neurons, consistent with Piaget’s view that physical and social perspective-taking
are part of the same phenomenon.
On the other hand, a recent review by Bekkali et al. (2019) concluded that there is only weak
evidence linking mirror neurons to social cognition in humans. If physical and social perspective-
taking were closely linked we would expect more consistent evidence, e.g. showing abnormal
structure and function in mirror neuron-rich brain regions in people with deficits in perspective-
taking.
Chapter 8 Schizophrenia
Page 125
1. Hallucinations are sensory experiences that have no basis in reality or are distorted perceptions of
real things. For example, hearing voices or seeing people who aren’t there.
Delusions are beliefs that have no basis in reality. For example, beliefs about being a very important
person or the victim of a conspiracy.
2. Positive symptoms are additional experiences beyond those of ordinary existence, such as
hallucinations. Negative symptoms lead to a loss of usual abilities and experiences, such as avolition.
3. Co-morbidity is the occurrence of two illnesses together, which confuses diagnosis and treatment.
Around half of all people with schizophrenia are also diagnosed with depression.
Symptom overlap is when two or more conditions share symptoms, calling into question the validity of the classification. For instance, schizophrenia shares some symptoms with the mania phase of bipolar
disorder, such as disorganised language and thinking.
4. Diagnosis and classification are interlinked. To diagnose a specific disorder, we need to be able to
distinguish one disorder from another. Classification involves identifying symptoms that go together
to produce a disorder. Diagnosis is when clinicians identify symptoms and use a classification system
to identify the disorder (e.g. depression, OCD, schizophrenia etc.). There are two main classification
systems in use: DSM-5, in which one positive symptom must be present (delusions, hallucinations or speech disorganisation), and ICD-10, in which two or more negative symptoms are sufficient for diagnosis (e.g. avolition and speech poverty). (ICD-11 has been published but was not used for diagnosis until 2022.)
One limitation of diagnosis of schizophrenia is low validity. Criterion validity involves seeing whether
different procedures used to assess the same individuals arrive at the same diagnosis. Cheniaux et
al. (2009) had two psychiatrists independently assess the same 100 clients. 68 were diagnosed with
schizophrenia with ICD and 39 with DSM. This means that schizophrenia is either over- or
underdiagnosed, suggesting that criterion validity is low.
That said, in the Osório et al. study there was excellent agreement between clinicians using different procedures that were both derived from the DSM system. This means that criterion validity for schizophrenia is good provided diagnosis takes place within a single diagnostic system.
Another limitation is co-morbidity with other conditions. If conditions often co-occur then they
might be a single condition. Schizophrenia is commonly diagnosed with other conditions. For example, Buckley et al. (2009) concluded that schizophrenia is co-morbid with depression (50% of
cases), substance abuse (47%) or OCD (23%). This suggests that schizophrenia may not exist as a
distinct condition.
A further limitation is gender bias. Men are diagnosed with schizophrenia more often than women,
in a ratio of 1.4:1 (Fischer and Buchanan 2017). This could be because men are more genetically
vulnerable, or women have better social support, masking symptoms. This means that some women
with schizophrenia are not diagnosed so miss out on helpful treatment.
Page 127
1. Neural correlates are measurements of the structure or function of the brain that correlate with
the positive or negative symptoms of schizophrenia. For instance, loss of motivation (avolition) in
schizophrenia may be explained by low activity levels in the ventral striatum.
3. One strength is the strong evidence base. Family studies (e.g. Gottesman, facing page) show risk
increases with genetic similarity. One twin study found 33% concordance for MZ and 7% for DZ twins
(Hilker et al. 2018). Adoption studies (e.g. Tienari et al. 2004) show that biological children of parents
with schizophrenia are at greater risk even if they grow up in an adoptive family. This shows that
some people are more vulnerable to schizophrenia because of their genes.
One limitation is evidence for environmental risk factors. Biological risk factors include birth
complications (Morgan et al. 2017) and smoking THC-rich cannabis in teenage years (Di Forti et al.
2015). Psychological risk factors include childhood trauma, e.g. 67% of people with schizophrenia (versus 38% of matched controls) reported at least one childhood trauma (Mørkved et al. 2017). This means genes
alone cannot provide a complete explanation for schizophrenia.
4. There is a strong relationship between the genetic similarity of family members and their shared risk of developing schizophrenia. Gottesman’s (1991) family study found identical twins (100% genes shared) have a 48% shared risk of schizophrenia. Siblings (50% genes shared) have a 9% shared risk. Schizophrenia is polygenic (requires several genes) and aetiologically heterogeneous (risk is affected by different combinations of genes).
One strength is the strong evidence base. Family studies (e.g. Gottesman, facing page) show risk
increases with genetic similarity. One twin study found 33% concordance for MZ and 7% for DZ twins
(Hilker et al. 2018). Adoption studies (e.g. Tienari et al. 2004) show that biological children of parents
with schizophrenia are at greater risk even if they grow up in an adoptive family. This shows that
some people are more vulnerable to schizophrenia because of their genes.
One limitation is evidence for environmental risk factors. Biological risk factors include birth
complications (Morgan et al. 2017) and smoking THC-rich cannabis in teenage years (Di Forti et al.
2015). Psychological risk factors include childhood trauma, e.g. 67% of people with schizophrenia (versus 38% of matched controls) reported at least one childhood trauma (Mørkved et al. 2017). This means genes
alone cannot provide a complete explanation for schizophrenia.
More recent versions of the dopamine hypothesis have focused on low levels of dopamine – hypodopaminergia – in the prefrontal cortex (responsible for thinking and decision-making).
One strength is support for dopamine in the symptoms of schizophrenia. Amphetamines (increase
DA) mimic symptoms (Curran et al. 2004). Antipsychotic drugs (reduce DA) reduce intensity of
symptoms (Tauscher et al. 2014). Candidate genes act on the production of DA or DA receptors. This
strongly suggests that dopamine is involved in the symptoms of schizophrenia.
One limitation is evidence for a central role for glutamate. Post-mortem and scanning studies found
raised glutamate in people with schizophrenia (McCutcheon et al. 2020). Also, several candidate
genes for schizophrenia are believed to be involved in glutamate production or processing. This
means that a strong case can be made for a role for other neurotransmitters in schizophrenia.
Page 129
1. Lower levels of information processing in some areas of the brains of people with schizophrenia
suggest cognition is impaired. For example, reduced processing in the ventral striatum is associated
with negative symptoms. Meta-representation is the cognitive ability to reflect on thoughts and
behaviour (Frith et al. 1992). Dysfunction of meta-representation disrupts our ability to recognise our thoughts as our own, which could lead to the sensation of hearing voices (hallucinations) and of having thoughts placed in the mind by others (delusions).
2. Double-bind theory – Bateson et al. (1972) described how a child may be regularly trapped in situations where they fear doing the wrong thing, but receive conflicting messages about
what counts as wrong. They cannot express their feelings about the unfairness of the situation.
When they ‘get it wrong’ (often) they are punished by withdrawal of love – they learn the world is
confusing and dangerous, leading to disorganised thinking and delusions.
Expressed emotion (EE) is the level of emotion (mainly negative) expressed towards the person with
schizophrenia and includes verbal criticism of the individual, hostility towards them and emotional
over-involvement in their life. High levels of EE cause stress in the person and may trigger
onset of schizophrenia or relapse.
3. One strength is evidence for dysfunctional thought processing. Stirling et al. (2006) compared
performance on a range of cognitive tasks (e.g. Stroop task) in people with and without
schizophrenia. As predicted by central control theory, people with schizophrenia took over twice as
long on average to name the font colours. This supports the view that the cognitive processes of
people with schizophrenia are impaired.
4. The double-bind theory is one form of family dysfunction explanation for schizophrenia. Bateson et al. (1972) described how a child may be regularly trapped in situations where they fear doing the wrong thing, but receive conflicting messages about what counts as wrong. They cannot
express their feelings about the unfairness of the situation. When they ‘get it wrong’ (often) they are punished by withdrawal of love – they learn the world is confusing and dangerous, leading to
disorganised thinking and delusions.
Expressed emotion (EE) is the level of emotion (mainly negative) expressed towards the person with
schizophrenia and includes verbal criticism of the individual, hostility towards them and emotional
over-involvement in their life. High levels of EE cause stress in the person and may trigger
onset of schizophrenia or relapse.
One strength is evidence linking family dysfunction to schizophrenia. A review by Read et al. (2005)
reported that adults with schizophrenia are disproportionately likely to have insecure attachment
(Type C or D). Also, 69% of women and 59% of men with schizophrenia have a history of physical
and/or sexual abuse. This strongly suggests that family dysfunction does make people more
vulnerable to schizophrenia.
One limitation is the poor evidence base for any of the explanations. There is almost no evidence to
support the importance of traditional family-based theories, e.g. the schizophrenogenic mother and the double bind. Both theories are based on clinical observation of patients and informal assessment of
the personality of the mothers of patients. This means that family explanations have not been able
to explain the link between childhood trauma and schizophrenia. It is far more likely that patients have inherited a biological vulnerability (through a faulty gene or genes) which may be triggered by trauma. This supports the diathesis-stress model of
schizophrenia.
Research in this area may be useful, e.g. showing that insecure attachment and childhood trauma
affect vulnerability to schizophrenia. However, research is socially sensitive because it can lead to
parent-blaming. This creates additional stress for parents already seeing their child experience
schizophrenia and taking responsibility for their care. This means that research into family dysfunction and schizophrenia will always be controversial, but is arguably worthwhile for its potential benefits.
Page 131
1. Typical antipsychotic drugs (e.g. chlorpromazine) work by acting as antagonists in the dopamine
system and aim to reduce the action of dopamine – they are strongly associated with the dopamine
hypothesis. Dopamine antagonists work by blocking dopamine receptors in the synapses in the
brain, reducing the action of dopamine.
Atypical antipsychotics (e.g. clozapine) aim to improve the effectiveness of drugs in suppressing
psychoses such as schizophrenia and also minimise the side effects. Clozapine acts on dopamine,
glutamate and serotonin to improve mood as well as cognitive functioning.
2. Typical antipsychotic drugs (e.g. chlorpromazine) have been around since the 1950s. They work by
acting as antagonists in the dopamine system and aim to reduce the action of dopamine – they are
strongly associated with the dopamine hypothesis. Dopamine antagonists work by blocking
dopamine receptors in the synapses in the brain, reducing the action of dopamine.
Initially, dopamine levels build up after taking chlorpromazine, but then production is reduced. This
normalises neurotransmission in key areas of the brain, which in turn reduces symptoms like
hallucinations. Chlorpromazine also acts on histamine receptors, which appears to produce a sedative effect. Therefore it is also used to calm anxious patients when they are first admitted to
hospital.
3. A limitation is that antipsychotic drugs may simply be a ‘chemical cosh’. Antipsychotics may have
been used in hospital situations to calm patients and make them easier for staff to work with, rather
than to benefit the patients themselves. However, calming people distressed by hallucinations and
delusions probably makes them feel better, and allows them to engage with other treatments (e.g.
CBT) and services. On balance there are clear benefits to using antipsychotics to calm people with
schizophrenia and in the absence of a better alternative they should probably be prescribed.
4. Typical antipsychotic drugs (e.g. chlorpromazine) have been around since the 1950s. They work by
acting as antagonists in the dopamine system and aim to reduce the action of dopamine – they are
strongly associated with the dopamine hypothesis. Dopamine antagonists work by blocking
dopamine receptors in the synapses in the brain, reducing the action of dopamine. Initially,
dopamine levels build up after taking chlorpromazine, but then production is reduced. This
normalises neurotransmission in key areas of the brain, which in turn reduces symptoms like
hallucinations.
Atypical antipsychotics (e.g. clozapine) bind to dopamine receptors as chlorpromazine does but also
act on serotonin and glutamate receptors. Clozapine is more effective than typical antipsychotics – it reduces depression and anxiety as well as improving cognitive functioning. It also improves
mood, which is important as up to 50% of people with schizophrenia attempt suicide.
One strength of antipsychotics is evidence of their effectiveness. Thornley et al. (2003) reviewed
data from 13 trials (1121 participants) and found that chlorpromazine was associated with better
functioning and reduced symptom severity compared with placebo. There is also support for the
benefits of atypical antipsychotics. Meltzer (2012) concluded that clozapine is more effective than
typical antipsychotics, and that it is effective in 30–50% of treatment-resistant cases. This means
that, as far as we can tell, antipsychotics work.
However, most studies are of short-term effects only and some data sets have been published
several times, exaggerating the size of the evidence base (Healy 2012). Also benefits may be due to
the calming effects of drugs rather than real effects on symptoms. This means the evidence of
effectiveness is less impressive than it seems.
One limitation of antipsychotic drugs is the likelihood of side effects. Typical antipsychotics are
associated with dizziness, agitation, sleepiness, weight gain, etc. Long-term use can lead to lip-
smacking and grimacing due to dopamine supersensitivity (a condition known as tardive dyskinesia).
The most serious side effect is neuroleptic malignant syndrome (NMS) caused by blocking dopamine
action in the hypothalamus (which can be fatal due to disrupted regulation of several body systems).
This means that antipsychotics can do harm as well as good and individuals may avoid them
(reducing effectiveness).
Another limitation of antipsychotics is that we do not know why they work. The use of most of these
drugs is strongly tied up with the dopamine hypothesis and the idea that there are higher-than-usual
levels of dopamine in the subcortex of people with schizophrenia. But there is evidence that this
may not be correct and that dopamine levels in other parts of the brain are too low rather than too
high. If so, most antipsychotics shouldn’t work. This means that antipsychotics may not be the best
treatment to opt for – perhaps some other factor is involved in their apparent success.
Page 133
1. Family therapy aims to reduce levels of expressed emotion (EE), especially negative emotions such
as anger and guilt which create stress. Reducing stress is important to reduce the likelihood of
relapse. The therapist encourages family members to form a therapeutic alliance whereby they all
agree on the aims of therapy. The therapist also tries to improve families’ beliefs about and
behaviour towards schizophrenia. A further aim is to ensure that family members achieve a balance
between caring for the individual with schizophrenia and maintaining their own lives.
2. The aims of CBT in general are to help clients identify irrational thoughts (e.g. delusions and
hallucinations) and try to change them. The treatment usually consists of 5–20 sessions, individually
or in a group. CBT helps clients to understand their symptoms. Clients are helped to make sense of
how their delusions and hallucinations impact on their feelings and behaviour. For example, a client
may hear voices and believe they are demons so they will be very afraid. Normalisation involves
explaining to the client that hearing voices is an ordinary experience.
3. One strength of family therapy is evidence of its effectiveness. McFarlane (2016) concluded family
therapy is effective for schizophrenia and relapse rates were reduced by 50–60%. It is particularly promising during the period when mental health first starts to decline. NICE recommends family
therapy. This means that family therapy is good for people with both early and ‘full-blown’
schizophrenia.
Another strength is the benefits for the whole family. Therapy is not just for the benefit of the
identified patient but also for the families that provide the bulk of care for people with schizophrenia
(Lobban and Barrowclough). Family therapy lessens the negative impact of schizophrenia on the family and strengthens the ability of the family to give support. This means family therapy has wider
benefits beyond the obvious positive impact on the identified patient.
4. Family therapy aims to reduce levels of expressed emotion (EE), especially negative emotions such
as anger and guilt which create stress. Reducing stress is important to reduce the likelihood of
relapse. The therapist encourages family members to form a therapeutic alliance whereby they all
agree on the aims of therapy. The therapist also tries to improve families’ beliefs about and
behaviour towards schizophrenia. A further aim is to ensure that family members achieve a balance
between caring for the individual with schizophrenia and maintaining their own lives.
One strength of family therapy is evidence of its effectiveness. McFarlane (2016) concluded family
therapy is effective for schizophrenia and relapse rates were reduced by 50–60%. It is particularly promising during the period when mental health first starts to decline. NICE recommends family
therapy. This means that family therapy is good for people with both early and ‘full-blown’
schizophrenia.
Another strength is the benefits for the whole family. Therapy is not just for the benefit of the
identified patient but also for the families that provide the bulk of care for people with schizophrenia
(Lobban and Barrowclough). Family therapy lessens the negative impact of schizophrenia on the family and strengthens the ability of the family to give support. This means family therapy has wider
benefits beyond the obvious positive impact on the identified patient.
The aims of CBT in general are to help clients identify irrational thoughts (e.g. delusions and
hallucinations) and try to change them. The treatment usually consists of 5–20 sessions, individually
or in a group. CBT helps clients to understand their symptoms. Clients are helped to make sense of
how their delusions and hallucinations impact on their feelings and behaviour. For example, a client
may hear voices and believe they are demons so they will be very afraid. Normalisation involves
explaining to the client that hearing voices is an ordinary experience.
One strength of CBT is evidence for its effectiveness. Jauhar et al. (2014) reviewed 34 studies of CBT
for schizophrenia, and concluded that there is evidence for significant effects on symptoms. Pontillo
et al. (2016) found reductions in auditory hallucinations. Clinical advice from NICE (2019)
recommends CBT for people with schizophrenia. This means both research and clinical experience
support CBT for schizophrenia.
One limitation is the quality of the evidence. Thomas (2015) points out that different studies have
focused on different CBT techniques and people with different symptoms. Overall modest benefits
of CBT for schizophrenia may conceal a range of effects of different techniques on different
symptoms. This means that it is hard to say how effective CBT will be for treating a particular person
with schizophrenia.
Page 135
1. Token economies are reward systems (operant conditioning) used to manage the behaviour of
people with schizophrenia who spend long periods in psychiatric hospitals. Tokens (e.g. coloured
discs) are given to individuals who carry out desirable behaviours (e.g. getting dressed, making a
bed, etc.). This reward reinforces the desirable behaviour and because it is given immediately it
prevents ‘delay discounting’ (reduced effect of a delayed reward).
Tokens have no value in themselves but can be swapped later for tangible rewards (e.g. sweets, a
walk outside, etc.). They are secondary reinforcers because they only have value due to the learned
association (classical conditioning) with innate primary reinforcers.
2. As above.
3. One limitation is the ethical issues raised. Professionals have the power to control people’s
behaviour and this means imposing one person’s norms on to others (e.g. a patient may like to look
scruffy). Also restricting the availability of pleasures to people who don’t behave as desired means
that very ill people, already experiencing distressing symptoms, have an even worse time. This
means that benefits of token economies may be outweighed by the impact on freedom and short-
term reduction in quality of life.
4. Ayllon and Azrin (1968) used a token economy in a schizophrenia ward. A gift token was given for
every tidying act and tokens were later swapped for privileges e.g. films. Token economies were
extensively used in the 1960s and 70s but there was a decline in the UK due to a shift towards care
in the community rather than hospitals, and because of ethical concerns. Token economies nevertheless remain a standard approach to managing schizophrenia in many parts of the world.
Institutionalisation occurs in long-term hospital treatment. Matson et al. (2016) identified three
categories of institutional behaviour that can be tackled using token economies: personal care,
condition-related behaviours (e.g. apathy) and social behaviour. Modifying these behaviours does
not cure schizophrenia but has two major benefits. First, token economies improve the quality of life
within the hospital setting, e.g. putting on make-up or becoming more sociable with other residents.
Second, individuals are encouraged to return to more ‘normal’ behaviour, making it easier to adapt
back into the community, e.g. getting dressed or making the bed.
One strength is evidence of effectiveness. Glowacki et al. (2016) identified seven high quality studies
published between 1999 and 2013 on the effectiveness of token economies in a hospital setting. All
the studies showed a reduction in negative symptoms and a decline in frequency of unwanted
behaviours. This supports the value of token economies.
That said, seven studies is quite a small evidence base. One issue with such a small number of
studies is the file drawer problem – a bias towards publishing positive findings. This means that
there is a serious question over the effectiveness of token economies.
One limitation is the ethical issues raised. Professionals have the power to control people’s
behaviour and this means imposing one person’s norms on to others (e.g. a patient may like to look
scruffy). Also restricting the availability of pleasures to people who don’t behave as desired means
that very ill people, already experiencing distressing symptoms, have an even worse time. This
means that benefits of token economies may be outweighed by the impact on freedom and short-
term reduction in quality of life.
Another limitation is the existence of more pleasant and ethical alternatives. Other approaches do
not raise ethical issues, e.g. art therapy is a high-gain, low-risk approach to managing schizophrenia
(Chiang et al. 2019). Even if the benefits of art therapy are modest, this is true for all approaches to the treatment and management of schizophrenia, and art therapy is a pleasant experience. This means
that art therapy might be a good alternative to token economies – there are no side effects or
ethical abuses.
Page 137
1. In the original diathesis-stress model, diathesis was entirely the result of a single ‘schizogene’.
Meehl (1962) argued that someone without this gene should never develop schizophrenia, no
matter how much stress they were exposed to. But a person who does have the gene is vulnerable
to the effects of chronic stress (e.g. a schizophrenogenic mother). The schizogene is necessary but
not sufficient for the development of schizophrenia.
Turkington et al. (2006) suggest it is possible to believe in biological causes of schizophrenia and still
practise CBT to relieve psychological symptoms. But this requires adopting an interactionist model –
it is not possible to adopt a purely biological approach, tell patients that their condition is purely
biological (no psychological significance to their symptoms) and then treat them with CBT.
2. As above.
3. One limitation of the original diathesis-stress model is that it is oversimplified. Multiple genes increase
vulnerability, each with a small effect on its own – there is no schizogene. Stress comes in many
forms, including dysfunctional parenting. Researchers now believe stress can also include biological
factors. For example, Houston et al. (2008) found childhood sexual trauma was a diathesis and
cannabis use a trigger. This means that there are multiple factors, biological and psychological,
affecting both diathesis and stress.
4. In the original diathesis-stress model, diathesis was entirely the result of a single ‘schizogene’.
Meehl (1962) argued that someone without this gene should never develop schizophrenia, no
matter how much stress they were exposed to. But a person who does have the gene is vulnerable
to the effects of chronic stress (e.g. a schizophrenogenic mother). The schizogene is necessary but
not sufficient for the development of schizophrenia.
Turkington et al. (2006) suggest it is possible to believe in biological causes of schizophrenia and still
practise CBT to relieve psychological symptoms. But this requires adopting an interactionist model –
it is not possible to adopt a purely biological approach, tell patients that their condition is purely
biological (no psychological significance to their symptoms) and then treat them with CBT.
Thea’s father was also diagnosed with schizophrenia, which suggests Thea possesses the gene or the
combination of genes required to increase her vulnerability/predisposition to schizophrenia
(diathesis). This, coupled with her traumatic and unpredictable childhood which acts as the
environmental trigger (stress), is sufficient for her to develop schizophrenia. Thea’s psychiatrist
appears to recognise the importance of an interactionist approach to treatment. The antipsychotics
will stabilise her symptoms, which means Thea will be more receptive to the benefits of CBT.
One strength is support for the dual role of vulnerability and stress. Tienari et al. (2004) studied
children adopted away from mothers diagnosed with schizophrenia. The adoptive parents’ parenting
styles were assessed and compared with a control group of adoptees with no genetic risk. A child-
rearing style with high levels of criticism and conflict and low levels of empathy was implicated in the
development of schizophrenia but only for children with a high genetic risk. This shows that a
combination of genetic vulnerability and family stress leads to increased risk of schizophrenia.
One limitation of the original diathesis-stress model is that it is oversimplified. Multiple genes increase
vulnerability, each with a small effect on its own – there is no schizogene. Stress comes in many
forms, including dysfunctional parenting. Researchers now believe stress can also include biological
factors. For example, Houston et al. (2008) found childhood sexual trauma was a diathesis and
cannabis use a trigger. This means that there are multiple factors, biological and psychological,
affecting both diathesis and stress.
Page 139
1. Taste aversion is a predisposition to learn to avoid potentially toxic foods. Bitter compounds in food are usually a reliable warning sign of toxins or that the food has gone off, so it is beneficial to survival to be able to detect these compounds quickly. This aversion is present before any learning of taste preferences has taken place, strongly suggesting an innate mechanism at work.
2. The evolutionary explanation accounts for the preferences seen in babies in terms of adaptiveness and survival. For example, the preference for sweetness exists because sweetness is a reliable signal of high-energy food, for salt because salt is necessary for many essential cell functions, and for fat because fat is high in calories and makes foods more palatable.
Neophobia is the innate predisposition to avoid any new foods, which is an adaptive behaviour as it
reduces the potential safety risks until we learn they are safe. It diminishes once we learn that
specific foods will not poison us or cause us to become ill and gives way to a different evolutionary
mechanism that encourages consumption of a more varied diet and important nutrients.
Taste aversion is said to occur to help us avoid potentially toxic foods, which are often signalled by a bitter taste. It is therefore beneficial to be able to detect these compounds quickly. This aversion is present before any learning of taste preferences has taken place, strongly suggesting an innate mechanism at work.
3. Torres et al.’s (2008) review of studies concluded that humans do tend to prefer high-fat foods in
periods of stress. Stress triggers the fight or flight response which creates high energy demands.
Therefore an increased fat preference during times of stress supports the view that such a
preference is important for survival.
However, there is evidence that neophobia is no longer adaptive in the modern food environment and can be a disadvantage. Most food consumed in many parts of the world is sold by retailers and outlets subject to strict laws, and is safer than it has ever been, offering little threat to survival.
Caution about trying new foods in childhood (neophobia) protected us from sickness and death but
now it prevents us from eating safe foods from an early age. Therefore neophobia restricts a child’s
diet and limits access to a wider variety of safe foods that provide nutritional benefits.
4. The evolutionary explanation accounts for the preferences seen in babies in terms of adaptiveness and survival. For example, the preference for sweetness exists because sweetness is a reliable signal of high-energy food, for salt because salt is necessary for many essential cell functions, and for fat because fat is high in calories and makes foods more palatable.
Neophobia is the innate predisposition to avoid any new foods, which is an adaptive behaviour as it
reduces the potential safety risks until we learn they are safe. It diminishes once we learn that
specific foods will not poison us or cause us to become ill and gives way to a different evolutionary
mechanism that encourages consumption of a more varied diet and important nutrients.
Taste aversion is said to occur to help us avoid potentially toxic foods, which are often signalled by a bitter taste. It is therefore beneficial to be able to detect these compounds quickly. This aversion is present before any learning of taste preferences has taken place, strongly suggesting an innate mechanism at work.
Torres et al.’s (2008) review of studies concluded that humans do tend to prefer high-fat foods in
periods of stress. Stress triggers the fight or flight response which creates high energy demands.
Therefore an increased fat preference during times of stress supports the view that such a
preference is important for survival.
However, there is evidence that neophobia is no longer adaptive in the modern food environment and can be a disadvantage. Most food consumed in many parts of the world is sold by retailers and outlets subject to strict laws, and is safer than it has ever been, offering little threat to survival.
Caution about trying new foods in childhood (neophobia) protected us from sickness and death but
now it prevents us from eating safe foods from an early age. Therefore neophobia restricts a child’s
diet and limits access to a wider variety of safe foods that provide nutritional benefits.
There are other limitations to some aspects of the theory. For example, taste aversions are not universal, as shown by Drewnowski et al. (2001). They found that some people cannot taste the
bitter-tasting chemical PROP but others are very sensitive to it and avoid foods containing it. We would not expect such variation if taste aversion were an adaptive trait. It seems that some adaptive preferences are not selected in the way we would expect according to evolutionary theory.
On the other hand, PROP insensitivity may be linked to other traits that are adaptive. Some bitter
compounds in some foods may protect against cancer. People who cannot detect the bitterness may
be benefitting in another way. This suggests that a preference for bitter foods in our evolutionary
history could be an adaptive trait after all.
Finally, one of the greatest concerns regarding the evolutionary approach to food preferences is that
it cannot explain cultural differences. Cashdan (1998) argues culture plays the main role in
determining which foods are accepted and rejected, and a role in ethnic identity. However,
evolutionary factors may be at work – different cultures share similar food preferences, and food
preferences are difficult to change. Therefore, evolutionary influences seem to be more important in
food preferences because they underlie even cultural differences.
Page 141
1. The social learning theory of food preferences proposes that children acquire the food
preferences of role models they observe eating certain foods. The effect is greatest when the model
is rewarded and the child identifies with them. As such, family influences are the most obvious social
influence on preference learning because parents are ‘gatekeepers’ of children’s eating.
2. One social influence on food preference is explained by social learning theory. Children acquire
the food preferences of role models they observe eating certain foods. This modelling is beneficial because it ensures children eat foods that are obviously safe because others are eating them; without it, toddlers would try to eat potentially dangerous foods. The effect of modelling is greatest
when the model is rewarded and the child identifies with them. This means the family is the most
obvious social influence on preference learning because parents are ‘gatekeepers’ of children’s
eating.
Cultural factors are probably the most powerful influences on food preference, as culture
determines to a large extent which foods children are exposed to in the first place. In most cases
children learn cultural rules related to food from eating with family members. Cultural norms
influence preferences, e.g. ‘meat and two veg’ and believing the main Sunday meal must be roast
dinner are common cultural ideals in British households. These norms are established through vicarious reinforcement (e.g. children see their parents enjoying these foods, which is rewarding) and through classical conditioning (e.g. we associate many foods we eat as adults with the happiness of eating them while growing up).
3. Flavour-flavour learning attempts to explain both food preferences and aversions. However, the
evidence is much stronger for aversion learning than preference learning. For instance, Baeyens et al. (1996) found that pairing a new food with a soapy-flavoured chemical called Tween created a lasting
aversion to the food. This suggests that the classical conditioning explanation can account for the
development of food aversions.
Unfortunately, the same study suggested that classical conditioning is less successful in explaining
food preferences. The researchers paired a new food with a sweet flavour for one group of students and with a neutral flavour for another group; there was no difference in flavour preference between the two groups. Therefore, there is in fact very little evidence that classical conditioning via flavour-flavour learning is a valid explanation for food preferences.
There is support for a social learning approach to food preferences. Jansen and Tenney (2001) found
children’s most preferred taste was an energy-dense drink taken at the same time as a teacher who
clearly enjoyed it. The children identified with and imitated the teacher’s preference for the drink, and the preference was reinforced because they observed that drinking it was rewarding. This evidence
supports the roles of identification and vicarious reinforcement in social learning of preferences.
4. One explanation of food preference that includes social influences is operant conditioning.
Parents and older siblings often provide rewards (e.g. praise) or punishments for younger children
eating certain foods. In both Maricel’s and Jade’s cases praise for eating like the rest of the family
might have reinforced the preference. However, it is still notoriously hard to establish a preference
for some foods (e.g. green vegetables) in children using rewards, which suggests that social learning
is probably a more powerful form of food preference learning than operant conditioning.
In terms of SLT, the chances are that Maricel and Jade both acquired the food preferences of role
models they observed eating certain foods. The effect of modelling would have been greatest when
the models – most likely parents – were rewarded and Maricel and Jade identified with them. This
means the family is the most obvious social influence on preference learning in both cases because
Maricel’s and Jade’s parents would have been ‘gatekeepers’ of their eating.
There is support for a social learning approach to food preferences. Jansen and Tenney (2001) found
children’s most preferred taste was an energy-dense drink taken at the same time as a teacher who
clearly enjoyed it. The children identified with and imitated the teacher’s preference for the drink, and the preference was reinforced because they observed that drinking it was rewarding. This evidence
supports the roles of identification and vicarious reinforcement in social learning of preferences and
may explain both sets of preferences.
Another social influence on Maricel’s and Jade’s preferences may have been television. As children
become independent of their parents’ food choices, other models become more important. Maricel
and Jade may have encountered many TV programmes in which culturally-appropriate food
preferences were promoted (e.g. characters in soap operas eating fish and chips).
However, although family influences on preferences can last a lifetime, the social learning effects of
TV are less persistent. Hare-Bruun et al. (2011) found that children who watched the most TV also had the
most unhealthy food preferences. But this link was much weaker in a six-year follow-up, and
disappeared altogether for girls. This suggests that as children get older, close friends may be more
powerful social influences on long-term preferences.
Cultural norms are also known to establish preferences: fish and chips is a British tradition and sapin-sapin is a Filipino one, which may also account for Jade’s and Maricel’s preferences. Such norms are established through vicarious reinforcement (e.g. children seeing parents enjoying these foods, which is rewarding) and through classical conditioning (e.g. we associate many foods we eat as adults with the happiness of eating them while growing up).
This is supported by research into cultural change. A major cultural change in many societies has
been the increasing availability of food outside the home. This has generally encouraged a
preference for ‘fast food’ which is high in fat, salt and sugar. So whereas fish and chips would once
have been seen as an occasional ‘treat’, it is now a regular part of many people’s diet. Therefore
wider cultural changes strongly influence the type of foods people eat.
Page 143
1. The hypothalamus regulates the level of glucose (energy source) in the blood. Glucose-sensing
neurons in the hypothalamus detect fluctuations in blood glucose concentration and the
hypothalamus also regulates glucose by directing insulin and anti-insulin hormones (e.g. glucagon) in
the pancreas.
Ghrelin is an appetite stimulant secreted by the stomach. The longer we go without food (the more
empty our stomach becomes) the more ghrelin is released. The level is detected by receptors in the
arcuate nucleus of the hypothalamus. When levels rise above a set point the arcuate nucleus signals
the lateral hypothalamus (LH) to secrete Neuropeptide Y (NPY).
Leptin is an appetite suppressant which is secreted by adipose cells. Leptin blood level increases with
fat level and is detected by the ventromedial hypothalamus (VMH). When the level of leptin
increases beyond a set point a person feels full and stops eating.
2. The hypothalamus controls both neural and hormonal mechanisms in the control of eating. It
regulates the level of glucose (energy source) in the blood and it is also the site of glucose-sensing
neurons which detect fluctuations in blood glucose concentration. Hormonally the hypothalamus
regulates glucose by directing insulin and anti-insulin hormones (e.g. glucagon) in the pancreas.
The dual-centre model of eating suggests that there are two structures of the hypothalamus that
provide homeostatic control. The ‘on switch’ is the lateral hypothalamus (LH) which contains cells to
detect glucose levels in the liver. The ‘off switch’ is in the ventromedial hypothalamus (VMH) – eating
leads to a rise in levels of glucose in the bloodstream and liver (glycogen) – detected by cells in the
VMH. The VMH is triggered once levels increase past a set point – LH activity is inhibited at the same
time, so the person becomes satiated (feels full and stops eating).
Hormonal mechanisms
Ghrelin and leptin are both hormones with the former being an appetite stimulant and the latter a
suppressant. The longer we go without food (the emptier the stomach becomes) the more ghrelin is released –
the level is detected by receptors in the arcuate nucleus of the hypothalamus. When levels rise
above a set point the arcuate nucleus signals the LH to secrete Neuropeptide Y (NPY). Meanwhile
leptin blood level increases with fat level and is detected by the VMH – part of the VMH satiety
mechanism. When the level of leptin increases beyond a set point a person feels full and stops
eating.
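The set-point logic described above can be summarised in a short illustrative sketch (Python). This is only a toy model of the mechanism as described – the threshold values, variable names and function are hypothetical, not taken from the research:

    # Toy model of the dual-centre / set-point logic described above.
    # All threshold values are hypothetical, chosen only to illustrate the mechanism.

    GHRELIN_SET_POINT = 1.0  # hypothetical hunger threshold
    LEPTIN_SET_POINT = 1.0   # hypothetical satiety threshold

    def eating_signal(ghrelin_level, leptin_level):
        """Return which hypothalamic 'switch' dominates, per the dual-centre model."""
        if leptin_level > LEPTIN_SET_POINT:
            # VMH satiety mechanism triggered; LH activity inhibited at the same time
            return "stop eating (VMH 'off switch')"
        if ghrelin_level > GHRELIN_SET_POINT:
            # Arcuate nucleus signals the LH to secrete NPY, stimulating appetite
            return "start eating (LH 'on switch')"
        return "no strong signal (homeostatic balance)"

    print(eating_signal(ghrelin_level=1.4, leptin_level=0.6))  # empty stomach -> eat
    print(eating_signal(ghrelin_level=0.3, leptin_level=1.5))  # high fat/leptin -> stop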
3. One limitation of hormonal models of eating control is that social and cultural influences are
underplayed. Woods (2004) points out that the LH feeding centre detects falling blood glucose and
stimulates hunger only in severe energy deprivation. Neurochemistry plays a lesser role in everyday
eating onset, which is more controlled by social/cultural factors (e.g. times of day for meals).
This suggests a biological approach ignores potentially important nonbiological factors that may
contribute more to controlling eating behaviour.
A further limitation is that this account is oversimplified. Valassi et al. (2008) argue that biological contributions to eating behaviour are numerous. CCK (cholecystokinin) is a hormone that activates the vagus nerve, which sends signals to the hypothalamus to ‘stop eating’. It may be a more powerful
appetite suppressant than leptin. This suggests that a relatively straightforward homeostatic account
does not accurately reflect the true complexity of eating control.
4. The hypothalamus controls both neural and hormonal mechanisms in the control of eating. It
regulates the level of glucose (energy source) in the blood and it is also the site of glucose-sensing
neurons which detect fluctuations in blood glucose concentration. Hormonally the hypothalamus
regulates glucose by directing insulin and anti-insulin hormones (e.g. glucagon) in the pancreas.
The dual-centre model of eating suggests that there are two structures of the hypothalamus that
provide homeostatic control. The ‘on switch’ is the lateral hypothalamus (LH) which contains cells to
detect glucose levels in the liver. The ‘off switch’ is in the ventromedial hypothalamus (VMH) – eating
leads to a rise in levels of glucose in the bloodstream and liver (glycogen) – detected by cells in the
VMH. The VMH is triggered once levels increase past a set point – LH activity is inhibited at the same
time, so the person becomes satiated (feels full and stops eating).
A strength of the dual-centre model of eating is research support. Hetherington and Ranson (1942)
found that lesioning the VMH of rats made them hyperphagic (overeat) and severely obese. Anand
and Brobeck (1951) lesioned the LH of rats and found aphagia (cessation of eating/starvation). This
confirms the homeostatic mechanism – two brain centres with opposing functions as predicted by
the dual-centre model. However, Gold (1973) claims Hetherington and Ranson’s operation also
damaged the rats’ paraventricular nucleus (PVN) and that when lesions are limited to the VMH,
hyperphagia does not occur. This suggests that physiological control of eating behaviour may involve
more than two brain centres.
However, Valassi et al. (2008) argue that biological contributions to eating behaviour are numerous.
CCK (cholecystokinin) is a hormone that activates the vagus nerve, which sends signals to the hypothalamus
to ‘stop eating’. It may be a more powerful appetite suppressant than leptin. This suggests that a
relatively straightforward homeostatic account does not accurately reflect the true complexity of
eating control.
Ghrelin and leptin are both hormones with the former being an appetite stimulant and the latter a
suppressant. The longer we go without food (the emptier the stomach becomes) the more ghrelin is released –
the level is detected by receptors in the arcuate nucleus of the hypothalamus. When levels rise
above a set point the arcuate nucleus signals the LH to secrete Neuropeptide Y (NPY). Meanwhile
leptin blood level increases with fat level and is detected by the VMH – part of the VMH satiety
mechanism. When the level of leptin increases beyond a set point a person feels full and stops
eating.
One limitation of hormonal models of eating control is that social and cultural influences are
underplayed. Woods (2004) points out that the LH feeding centre detects falling blood glucose and
stimulates hunger only in severe energy deprivation. Neurochemistry plays a lesser role in everyday
eating onset, which is more controlled by social/cultural factors (e.g. times of day for meals).
This suggests a biological approach ignores potentially important nonbiological factors that may
contribute more to controlling eating behaviour.
A limitation of both neural and hormonal mechanisms is that our knowledge is based mostly on
animal research. We should be cautious about extrapolating findings to humans without considering
differences between species that may make generalisations invalid. This is because eating behaviour
is more complex in humans than in rats, e.g. there are more influences affecting human eating
behaviour. But studying rats may still be a valid way of understanding neural and hormonal mechanisms, as most structures found in a human brain are present in a rat brain too. This suggests that animal research can still support the model’s account of basic mechanisms, provided we generalise with caution.
Page 145
1. These are any explanations of anorexia nervosa (AN) in terms of dysfunctions of the brain and
nervous system. This includes the activity of brain structures such as the hypothalamus, and
neurotransmitters such as serotonin and dopamine. For instance, decreased dopamine levels are
associated with AN. Kaye et al. (1991) found that levels of the dopamine metabolite HVA were lower
in recovered AN participants compared with controls.
2. The genetic explanation focuses on the fact that AN tends to run in families whereas the neural
explanation focuses on the effects of the neurotransmitters serotonin and dopamine on AN
behaviours and the associated anxiety.
The genetic explanation has been able to identify at least one candidate gene (Ephx2) that codes for
an enzyme involved in cholesterol metabolism. Many people in the acute phase of AN have abnormally
high levels of cholesterol whereas serotonin research indicates underactivity of the serotonin system
in AN.
3. A strength of the dopamine explanation is it is supported by research evidence. Kaye et al. (1999)
compared severely underweight women diagnosed with AN with women who had no history of
eating disorders. The levels of the dopamine metabolite HVA were 30% lower in the women with
AN, on average. This strongly suggests that a dysfunction of dopamine metabolism contributes to the
symptoms of AN.
A limitation of the serotonin explanation is that other neurotransmitters are involved. Nunn et al. (2012)
argue that serotonin alone does not distinguish between people with and without AN. Serotonin
accounts for some features of AN but not others. AN is better explained by considering interactions
between serotonin and noradrenaline. Other neurotransmitters (e.g. GABA) are also involved. This is
a reminder that neurotransmitter systems do not operate in isolation; instead there are complex
interactions. But the explanation is recent and remains to be fully tested.
4. Serotonin has been found to be involved in many AN-related behaviours (e.g. obsessiveness) and
is therefore on the side of the ‘chemical imbalances’ argument. For example, Bailer and Kaye (2011)
found that low levels of 5-HIAA (serotonin metabolite) in people with AN return to normal after short-
term weight recovery – levels actually increase beyond normal in the long term. Decreased
dopamine levels are also associated with AN. Kaye et al. (1991) found HVA (dopamine metabolite)
levels were lower in recovered AN participants compared with controls.
A strength of the dopamine explanation is it is supported by research evidence. Kaye et al. (1999)
compared severely underweight women diagnosed with AN with women who had no history of
eating disorders. The levels of the dopamine metabolite HVA were 30% lower in the women with
AN, on average. This strongly suggests that a dysfunction of dopamine metabolism contributes to the
symptoms of AN.
A limitation of the serotonin explanation is that other neurotransmitters are involved. Nunn et al. (2012)
argue that serotonin alone does not distinguish between people with and without AN. Serotonin
accounts for some features of AN but not others. AN is better explained by considering interactions
between serotonin and noradrenaline. Other neurotransmitters (e.g. GABA) are also involved. This is
a reminder that neurotransmitter systems do not operate in isolation; instead there are complex
interactions. But the explanation is recent and remains to be fully tested.
On the other hand, studies of MZ and DZ twins show anorexia nervosa (AN) does run in families.
Holland et al. (1988) found an MZ concordance rate of 56% but only 5% for DZs. A candidate-gene
association study (CGAS) by Scott-Van Zeeland et al. (2014) sequenced 152 candidate genes possibly
linked with features of AN and found only one gene associated with AN: Ephx2 (epoxide hydrolase
2). But this gene is significant because it codes for an enzyme involved in cholesterol metabolism.
People in the acute phase of AN do have abnormally high levels of cholesterol.
Boraska et al. (2014) identified 72 separate genetic variations but none were significantly related to
AN, possibly because the study was not sensitive enough to detect genetic influences.
One limitation of twin studies is that they may lack validity. The assumption of ‘equal environments’
may be incorrect. It can be argued that MZ twins are treated more similarly than DZs by parents,
other family members, friends, teachers. They spend more time together and may even have a
closer bond than DZs. Greater environmental similarity for MZs suggests heritability estimates are
artificially inflated and genetic influences on AN are not as great as twin studies suggest.
One strength is that gene studies illustrate the polygenic nature of AN. Gene studies have been
unsuccessful in identifying any single gene for AN; many candidate genes have been discarded. No
single gene can be responsible for the wide variety of physical and psychological symptoms of AN
(e.g. appetite loss, body image distortions). Therefore gene studies have shown that AN is polygenic,
which means that many genes make important but modest contributions to the disorder.
It is therefore likely that AN cannot be understood in terms of genes alone. AN is best understood in
terms of genes that create a vulnerability to AN (diathesis) that only expresses itself when the
individual tries to lose weight (a stressor) – this is the diathesis-stress model. People lose weight for
many reasons and non-biological risk factors play a triggering role. This suggests that, whilst biological explanations are still valid, they must be seen in a wider context of other non-genetic factors.
Page 147
1. Enmeshment is where members of an ‘anorexic family’ are overinvolved and overprotective. Their
self-identities are bound up with each other. Roles are poorly defined and there is little privacy.
Minuchin et al. (1978) noted this as a key characteristic of a dysfunctional family system in causing
eating disorders.
Autonomy refers to our experience of freedom in deciding how we should behave, and
independence from others. For example, AN may be caused by an adolescent daughter’s struggle
against dependence on her domineering and intrusive family.
2. The central difference is that autonomy is a goal and control is a means of achieving that goal.
Someone with AN wishes to achieve independence from her domineering and intrusive family and
experience freedom in deciding for herself how to behave. This is autonomy. In order to achieve this,
she engages in behaviours such as self-starvation to control her self-identity as a person
independent of her family. She controls her destiny by controlling her body.
3. Family systems theory (FST) is a psychodynamic theory of anorexia nervosa (AN) by Minuchin et
al. (1978) which suggests four problematic features of a typical ‘anorexic family’.
Firstly, members of anorexic families are overinvolved with each other and the boundaries are
‘fuzzy’ (enmeshment). Family members spend lots of time together and can impinge on each other’s
privacy. An adolescent daughter in an anorexic family tries to differentiate her identity and assert
her independence by refusing to eat.
Secondly, there is an issue of overprotectiveness where family members constantly defend each
other from external threats. Obsessive nurturing reinforces family loyalty leaving no room for
independence. Palazzoli (1974) described an enmeshed family in which the mother of a daughter
with AN saw her role as a personal sacrifice. The mother felt that all her decisions were for her
daughter’s benefit and not her own. It is then much easier to blame the daughter with AN when
things go wrong.
Rigidity of interactions is also characteristic of AN families. Problems arise when situations change
due to pressure – the family is too rigid to adapt so is thrown into crisis. An adolescent daughter
seeks independence but the rest of the family quash her attempt at self-differentiation and she may
turn to AN behaviour.
Finally, family members take whatever steps are necessary to prevent or suppress conflict (e.g. no
discussion of issues where difference of opinion might arise). So, problems are not resolved and
continue to fester until crisis develops. The daughter continues to refuse to eat, starving herself
while her family refuses to accept there is anything to discuss.
4. Family systems theory (FST) is a psychodynamic theory of anorexia nervosa (AN) by Minuchin et
al. (1978) which suggests four problematic features of a typical ‘anorexic family’. Firstly, members of
anorexic families are overinvolved with each other and the boundaries are ‘fuzzy’ (enmeshment).
Family members spend lots of time together and can impinge on each other’s privacy. An adolescent
daughter in an anorexic family tries to differentiate her identity and assert her independence by
refusing to eat.
Secondly, there is an issue of overprotectiveness where family members constantly defend each
other from external threats. Obsessive nurturing reinforces family loyalty leaving no room for
independence. Palazzoli (1974) described an enmeshed family in which the mother of a daughter
with AN saw her role as a personal sacrifice. The mother felt that all her decisions were for her
daughter’s benefit and not her own. It is then much easier to blame the daughter with AN when
things go wrong.
Rigidity of interactions is also characteristic of AN families. Problems arise when situations change
due to pressure – the family is too rigid to adapt so is thrown into crisis. An adolescent daughter
seeks independence but the rest of the family quash her attempt at self-differentiation and she may
turn to AN behaviour.
Finally, family members take whatever steps are necessary to prevent or suppress conflict (e.g. no
discussion of issues where difference of opinion might arise). So, problems are not resolved and
continue to fester until crisis develops. The daughter continues to refuse to eat, starving herself
while her family refuses to accept there is anything to discuss.
One strength of FST is support from evidence such as Strauss and Ryan (1987), who found that women
diagnosed with AN showed greater disturbances of autonomy than women who did not have AN.
For instance, they had a more rigid and controlling way of regulating their own behaviour, and
differentiated less clearly between their own and their families’ identities (they were enmeshed).
These findings show that desire for autonomy, when it is frustrated, is a risk factor for AN in females.
However, these findings are challenged by Aragona et al. (2011) who found that families of females
with AN were no more enmeshed/rigid than non-AN families. These contradictory findings may be
due to vague concepts being defined differently across studies. This means that it is difficult to find conclusive support for FST, and ultimately it is not a scientific theory because its concepts cannot be tested.
A strength of FST, though, is that it has led to behavioural family systems therapy (BFST) which aims
to disentangle family relationships and reduce parental control over the eating of the individual with
AN. Robin et al. (1995) reported that this was successful in 6 out of 11 females with AN after 16
months of BFST, and three more had recovered after another year. FST-based therapy therefore
appears to have practical value.
One limitation is that family influences on AN depend on other factors. Davis et al. (2004) studied
such mediating factors and found that family interactions affected eating disorders only in
adolescents with high anxiety. Young et al. (2004) found that family factors had no effect on eating
disorders in cases where there was no depression and no peer influences. These mediating factors are
mostly independent of family factors which shows that family factors alone cannot explain AN.
FST explains two features of AN that other theories struggle with: its tendency to appear in
adolescence (link with autonomy) and its much greater incidence in females. However, it follows
that FST has trouble explaining AN in non-adolescent females and in males, and it also ignores the
role of fathers in family dysfunction. Therefore FST may be a useful and valid theory of AN in most
cases, but it is worth bearing in mind that the theory is limited in scope.
Page 149
1. From the observer’s perspective, modelling is imitating the behaviour of a role model so, for
example, an adolescent might copy the diet of a favourite celebrity. The role model is ‘modelling’ or
demonstrating the specific behaviour, e.g. disordered eating, that may be imitated by an observer.
Reinforcement is the consequence of AN behaviour that increases the likelihood of that behaviour
being repeated. For example, in the early days an individual with AN may be rewarded with praise
from others for losing some weight.
2. The social learning theory (SLT) explains direct and indirect learning and can be used to explain
anorexia nervosa (AN). Direct learning of AN involves classical and operant conditioning of an
individual’s behaviour whereas indirect learning involves observation of other people and the
modelling and imitation of a behaviour which can be vicariously reinforced.
So, it suggests that AN is acquired indirectly through imitating an observed model. That model
provides a ‘template’ to imitate (modelling) and could exist in real life (e.g. a family member) or be
symbolic (e.g. a cartoon character). SLT suggests that the observations modify social norms by
establishing acceptable or usual behaviour (e.g. a child observes an older sibling restricting their
food intake and learns that this is ‘normal’) and the impact is greatest if the child identifies with
the model.
Vicarious reinforcement is said to increase the chance that the eating behaviour will be imitated
because, if the model is rewarded, then the child learns that the behaviour (losing weight) has
positive consequences. The media is a powerful transmitter of cultural ideals of body shape/size. The
ideal body shape for women has become thinner over time (e.g. Size Zero).
3. The media is influential in AN because it provides a rich source of modelling and vicarious
reinforcement. Music videos, magazines, websites, social media and television all communicate
cultural ideals about body shape and size. As the ideal in many cultures has become thinner and
thinner, this is the body presented as something for young women to aim for.
The power of models is enhanced by identification. Young women may identify with the glamour of
celebrities in the media who conform to the ‘thin ideal’. They may be motivated to imitate them by
losing weight through dieting and exercise. This behaviour is vicariously reinforced by the rewarding
fame, success, wealth, etc. that young women observe in female role models in the media.
4. Social learning theory (SLT) explains direct and indirect learning and can be used to explain
anorexia nervosa (AN). Direct learning of AN involves classical and operant conditioning of an
individual’s behaviour whereas it is also suggested AN is acquired indirectly through imitating an
observed model. That model provides a ‘template’ to imitate and the observations modify social
norms by establishing acceptable or usual behaviour and the impact is greatest if the child identifies
with the model. In Lillia’s case the role model is her mother and she is imitating the dysfunctional
eating that she has seen from her mother (who she identifies with) and this style of eating has
become the norm in her view.
Vicarious reinforcement is said to increase the chance that the eating behaviour will be imitated: if the model is rewarded, the child learns that the behaviour (losing weight) has positive consequences. In this case Lillia sees her mum being rewarded with positive consequences (praise
from her dad) and is therefore more likely to imitate the behaviour. Media may further reinforce the
behaviour and is known to have a significant impact.
One strength of the SLT explanation is research support such as Becker et al.’s (2002) natural
experiment when TV was introduced to the island of Fiji in 1995. In that year, 13% of a sample of
adolescent girls gained a high score on a questionnaire measuring eating disorder risk. Three years
later the figure for another sample of girls was 29%. The higher figure may be explained by a new
cultural ideal of female body shape broadcast on TV and influencing girls on Fiji. This shows that
eating disorders can be the outcome of social learning processes and suggests that the same
mechanism could be responsible in Lillia’s case with her mum taking the role of the model rather
than the media.
SLT also explains cultural changes linked to AN. AN is still more common in some cultures than
others but incidence rates are increasing rapidly and SLT can explain this in terms of changing
cultural norms. For example, Chisuwa and O’Dea (2010) found increased rates of AN in Japan in the
last 40 years, as traditional values favouring plumpness are displaced by the thinness ideal from
individualist cultures (e.g. the US). SLT shows this change is driven in part by media representations
but also, in Lillia’s case, by family members’ perceptions.
Studies of MZ and DZ twins show that AN runs in families. Holland et al. (1988) found an MZ
concordance rate of 56% but only 5% for DZs. A candidate-gene association study (CGAS) by Scott-
Van Zeeland et al. (2014) sequenced 152 candidate genes possibly linked with features of AN and
found only one gene associated with AN: Ephx2 (epoxide hydrolase 2). But this gene is significant
because it codes for an enzyme involved in cholesterol metabolism. People in the acute phase of AN
do have abnormally high levels of cholesterol. So it is possible that Lillia has simply inherited AN
from her mother (or at least inherited a vulnerability to developing it).
However, a limitation of gene studies is that the search for a single gene is futile and we cannot
therefore be sure that Lillia has inherited the AN genes. Several candidate genes have been put
forward but no one gene can be found responsible for the wide variety of physical and psychological
symptoms in AN (e.g. appetite loss, body image distortions, fear of weight gain). Furthermore,
single-gene studies divert attention from understanding the true polygenic nature of AN.
Page 151
1. Cognitive distortions are faulty, biased and irrational ways of thinking that mean we perceive
ourselves, other people and the world inaccurately and usually negatively. For example, people with
AN become more and more critical of their own bodies and they misinterpret their emotional states
as ‘feeling fat’, even as they get thinner and thinner.
Irrational beliefs (thoughts) are defined in Ellis’s model as thoughts that are likely to interfere with a
person’s happiness. Dysfunctional thoughts such as, ‘If I don’t control my weight, I’m worthless’ can
lead to AN.
2. Williamson et al. (1993) researched cognitive distortions by asking participants to choose from
silhouettes of increasing size to match their own body; 37 participants diagnosed with anorexia and
a control group of 95 participants with no eating disorder estimated their current body size and their
ideal size. It was found that the participants with AN were significantly less accurate in their size
estimates than the control participants, with a marked tendency to overestimate their size. The ideal
body shape for the AN participants was also significantly thinner than it was for the controls.
Treasure and Schmidt (2013) have focused on irrational beliefs and proposed a cognitive
interpersonal maintenance model of AN which, among other things, suggests that people with AN
experience problems with set-shifting. This means they find it difficult to switch fluently from one
task to another that requires a different set of cognitive skills. Instead, they tend to apply
persistently the same skills in a changed situation where they are no longer useful. Once a
vulnerable individual gets started on the weight loss process, they rigidly persist with it and continue
to perceive themselves as needing to lose weight. In effect, their weight loss is a solution to a
problem that no longer exists, but they are unable to perceive this accurately.
3. Social learning theory also explains cultural changes linked to AN. AN is still more
common in some cultures than others but incidence rates are increasing rapidly and SLT can explain
this in terms of changing cultural norms. For example, Chisuwa and O’Dea (2010) found increased
rates of AN in Japan in the last 40 years, as traditional values favouring plumpness are displaced by
the thinness ideal from individualist cultures (e.g. the US). SLT shows this change is driven in part by
media representations.
One strength of the cognitive explanation is research support for disturbed perceptions. Sachdev et
al. (2008) found no differences in brain activity between people with AN and non-AN controls when
they viewed images of other people’s bodies. However, when viewing images of their own bodies,
AN participants showed less activity (than non-AN controls) in parts of the brain involved in attention. This shows
that disturbed perceptions exist in AN in terms of how people with AN attend to their own body.
4. The cognitive theory suggests that distortions are a cause of anorexia nervosa (AN) and this idea is central to the diagnosis of AN in DSM-5 (2013). People with AN filter experiences of life through
three factors, the first being disturbed perceptions about body shape and weight. Disturbed
perceptions cause preoccupations with thoughts of food, eating and body shape which in turn lead
to behaviours such as food restriction and checking (e.g. constantly looking in the mirror). People
with AN misinterpret emotional states as ‘feeling fat’, even as they get thinner.
Overestimation of body size and weight is another cognitive distortion associated with AN.
Williamson et al. (1993) asked people with AN and a non-AN control group to estimate current and
ideal body sizes and found that AN participants’ estimates were significantly less accurate, with a
marked tendency to overestimate size and their ideal body size was significantly thinner than for
controls.
Irrational beliefs are views and attitudes held by people with AN that do not make sense. Such thoughts give rise
to automatic negative thoughts (Beck), for example: ‘If I don’t control my weight, I’m worthless’ (all-
or-nothing thinking). Perfectionism is a key irrational belief in AN and, for example, a person who
exhibits perfectionism will feel that they must meet demanding standards in all areas of life but
especially eating, body shape and weight loss.
According to the theory people with AN also have problems switching fluently between tasks
requiring a different set of cognitive skills (set-shifting). They apply the same skills in a changed
situation where they are no longer useful. For example when a vulnerable person begins a weight
loss process, they rigidly persist and continue to perceive themselves as needing to lose weight. They
cannot switch to a more adaptive way of thinking.
One strength of the cognitive explanation is research support for disturbed perceptions. Sachdev et
al. (2008) found no differences in brain activity between people with AN and non-AN controls when
they viewed images of other people’s bodies. However, when viewing images of their own bodies,
AN participants showed less activity (than non-AN controls) in parts of the brain involved in attention. This shows
that disturbed perceptions exist in AN in terms of how people with AN attend to their own body.
Another strength is that there is support for perfectionism. Halmi et al. (2012) studied women
diagnosed with AN, who completed the SIAB to assess current symptoms and the EATATE Interview
to retrospectively measure childhood perfectionism. They found that childhood perfectionism
(e.g. schoolwork perfectionism) was associated with current AN symptoms. This suggests
perfectionism precedes onset of AN, so is a potential risk factor for development of the disorder.
On the other hand, Cornelissen et al. (2013) used a morphing task where women adjusted a
computerised image of themselves until it matched their estimate of body size. There was no
significant difference between women with and without AN in the correlation between estimated
and actual body mass index (BMI). This suggests that women with AN do not have a distorted body
perception, challenging a key element of the cognitive theory of AN.
There is evidence that challenges the cognitive theory’s view that cognitive factors are causal in the
development of AN. This is a very strong claim, but it is just as likely that cognitive factors are effects
of AN rather than causes. For instance, Murphy et al. (2010) studied preoccupations with body
shape which probably develop after AN begins, given that people with AN become more and more
critical of their bodies as AN progresses. Another example is misperception of body shape and size.
Someone who already has AN may overestimate their body size rather than overestimation being a
cause. This suggests that cognitive factors are more likely to be consequences of AN, but they do
affect how the disorder develops over time.
Page 153
1. The difference is the level of biological processes at which obesity is explained. The neural
explanation is at the level of brain and nervous system activity. This includes the idea that
dysfunctions of biochemistry in the form of neurotransmitters cause obesity (e.g. levels of both
serotonin and dopamine may be low).
The genetic explanation is the ‘lower’ level of the two because it can also explain the neural dysfunction
in obesity. Genes associated with variations in BMI are transmitted through generations of family
members. These genes may cause obesity indirectly by affecting neurotransmitter levels.
2. It is suggested that genes associated with variations in body mass index (BMI) are transmitted
through generations of families. This is confirmed by the observation that obesity often runs in
families.
Family studies have established a BMI concordance rate for obesity in first-degree relatives of 20–
50% (Chaput et al. 2014). Twin studies have found MZ concordance rates for obesity are 61–80%,
which suggests a substantial genetic component (Nan et al. 2012).
There is no single genetic cause of obesity; many genes are thought to be involved, with small effects interacting to produce an overall outcome. For example, Locke et al. (2015) found 97 genes associated with variations in BMI, but together these accounted for only 2.7% of the variation.
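The arithmetic behind this polygenic point is worth making explicit. Here is a minimal sketch (Python), using only the two figures cited above from Locke et al. (2015) and assuming, purely for illustration, that the effect is spread evenly across the genes:

    # Illustrative arithmetic only: 97 genes jointly explain 2.7% of BMI variation,
    # so the average contribution of any one gene is tiny (assuming an even split,
    # which is a simplification made purely for illustration).
    n_genes = 97
    total_variation_explained = 2.7  # per cent, as cited from Locke et al. (2015)

    print(total_variation_explained / n_genes)  # ~0.028% of BMI variation per gene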
3. One biological explanation for obesity is the neural explanation which focuses on the role of
neurotransmitters.
One strength is that this explanation is supported by evidence concerning serotonin. Ohia et al.
(2013) highlight the importance in obesity of one serotonin receptor in particular, the 2C receptor.
Studies of ‘knockout’ mice with no functioning 2C receptors show they develop late-onset obesity.
This is evidence of a link between obesity and a dysfunctional serotonin system, at least in mice.
There is also research support for the role of dopamine. Spitz et al. (2000) looked at the dopamine
D2 receptor which has been implicated in obesity in many studies. They compared the genomes of
obese and non-obese participants and found that one version of the gene that codes for the D2
receptor was twice as prevalent in the obese participants. It seems that people who inherit fewer D2 receptors have reduced dopamine activity and so they experience less dopamine-activated pleasurable
reward from eating. This makes them more likely to overeat in order to get satisfaction. This
supports the view that a dysfunction of dopamine activity is involved in obesity.
4. The genetic argument of the website is based on the idea that genes associated with variations in
body mass index (BMI) are transmitted through generations of families. This is confirmed by the
observation that obesity often runs in families. Family studies have established a BMI concordance
rate for obesity in first-degree relatives of 20–50% (Chaput et al. 2014). Twin studies have found MZ
concordance rates for obesity are 61–80%, which suggests a substantial genetic component (Nan et
al. 2012), which certainly agrees with the website article.
There is no single genetic cause of obesity. The article is correct in using the term ‘genes’ (i.e. plural).
Many genes are thought to be involved, with small effects interacting to produce an overall
outcome. For example, Locke et al. (2015) found 97 genes associated with variations in BMI, but together these accounted for only 2.7% of the variation in BMI. However, concordance rates of less than 100% point to the fact that genetics can only account for some obesity, so it does not follow that obesity is definitely genetic.
This means that there are potential alternative explanations. For example, even putting aside
psychological theories, there are neural explanations for obesity. Serotonin signals to the hypothalamus that we have eaten to satiety. Dysfunctions of the serotonin system can result in
abnormally low levels of serotonin and therefore inaccurate satiety signals are sent to the
hypothalamus. The result is that eating behaviour is disinhibited (i.e. not controlled), leading to
carbohydrate cravings (i.e. desire for energy-dense foods including sugars) causing weight gain
through excess calories.
Although this shows that obesity may indeed not always be directly genetic, as suggested by the
website article, there may well be an indirect effect because genes determine serotonin activity (e.g.
number of serotonin receptors).
One strength is a plausible mechanism to explain how genes work. Genes may influence responses
to the environment (O’Rahilly and Farooqi 2008). For example, genes may affect sensitivity to food-related cues and influence neurotransmitter systems linked with obesity. This ability to explain how genes operate
in obesity increases the validity of the genetic explanation.
There is further evidence challenging the role of genes. Paracchini et al. (2005) conducted a meta-
analysis of 25 studies investigating genes possibly involved in regulating leptin (LEP gene) and leptin
receptors (LEPR gene). The study found no evidence of a link between these genes and obesity.
Whatever the role of leptin in obesity, it does not have a solely genetic basis. This suggests that
obesity is a complex phenomenon and other non-genetic factors are important in its causation and
development.
Further support for the view that obesity is not explained by genetics alone comes from research into the role of serotonin. Ohia et al. (2013) highlight the importance in obesity of one serotonin receptor
in particular, the 2C receptor. Studies of ‘knockout’ mice with no functioning 2C receptors show they
develop late-onset obesity. This is evidence of a link between obesity and a dysfunctional serotonin
system, at least in mice.
Page 155
1. Restraint theory suggests that dieters deliberately restrict their food/calorie intake. But this is self-
defeating because restrained eaters become more preoccupied with food rather than less.
The effect of this is often disinhibition. According to this theory, restrained eaters are vulnerable to
food-related cues which can be internal (e.g. mood) or external (e.g. media images, odours). These
cues may trigger a loss of control over eating, resulting in a binge.
2. The boundary model assumes that both hunger and satiety are aversive. For example, when
energy levels dip below a ‘set point’ we feel an aversive state of hunger and are motivated to eat
whilst eating to fullness creates an aversive state of discomfort so we are motivated to stop eating.
The model describes a zone of biological indifference (ZBI) when we feel neither hungry nor full. In
this zone, psychological factors (cognitive and social) have more influence than biological ones on
food intake. It is argued that the ZBI is wider for restrained eaters and people who restrict food
intake have a lower hunger boundary and a higher satiety boundary. As such, more of their eating behaviour is under cognitive rather than biological control, making them vulnerable to disinhibited eating.
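The boundary model’s zones can be captured in a minimal sketch (Python). The boundary values below are hypothetical and purely illustrative; the point is that a restrained eater’s lower hunger boundary leaves them in the cognitively controlled zone where an unrestrained eater would already feel biological hunger:

    # Toy model of the boundary model described above; all boundary values are
    # hypothetical and chosen only to illustrate the zones.

    def eating_zone(energy_state, hunger_boundary, satiety_boundary):
        """Classify the current state according to the boundary model."""
        if energy_state < hunger_boundary:
            return "aversive hunger -> biologically motivated to eat"
        if energy_state > satiety_boundary:
            return "aversive fullness -> biologically motivated to stop eating"
        return "zone of biological indifference -> cognitive/social control"

    # Same energy state, but the restrained eater's wider ZBI (lower hunger
    # boundary, higher satiety boundary) keeps them under cognitive control.
    print(eating_zone(0.3, hunger_boundary=0.4, satiety_boundary=0.7))  # unrestrained
    print(eating_zone(0.3, hunger_boundary=0.2, satiety_boundary=0.9))  # restrained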
3. One strength is support for food-related cues in disinhibition. Boyce and Kuijer (2014) showed
images of thinness to restrained (dieters) and unrestrained eaters, then measured food intake in
a ten-minute ‘taste test’ where they had access to unlimited snacks. Restrained eaters ate
significantly more than unrestrained eaters after seeing the images (food-related cues), with no
difference for neutral images, e.g. furniture. This shows that food-related cues act as disinhibitors
which may trigger overeating and obesity in restrained eaters.
Further support comes from a study by Wardle and Beales (1988) who randomly allocated 27 obese
women to a diet (restrained), exercise or control group. Restrained eaters ate significantly more
because they experienced occasional disinhibition and binged beyond feeling full. This shows that
restraint leading to disinhibition is a causal factor in overeating, which can lead to weight gain and obesity.
4. Restraint theory suggests that, in restraining their eating, a dieter has to think about eating much of the time and thus exert cognitive control, e.g. by categorising foods into ‘good’ and ‘bad’ and creating rules about which foods are allowed and which are forbidden. The outcome is that the restrained eater becomes more preoccupied with food, not less, and no longer simply eats when hungry and stops when full. Their eating behaviour becomes disinhibited. Periods of restrained eating are often
followed by disinhibited eating in which the individual eats as much as they want, leading to a loss of
control in the presence of a disinhibitor, a food-related cue, either internal (e.g. mood) or external
(e.g. media images). Restrained eaters are sensitive to these cues and vulnerable to loss of control
leading to unrestrained eating (a binge).
One strength is support for food-related cues in disinhibition. Boyce and Kuijer (2014) showed
images of thinness to restrained (dieters) and unrestrained eaters, then measured food intake in
a ten-minute ‘taste test’ where they had access to unlimited snacks. Restrained eaters ate
significantly more than unrestrained eaters after seeing the images (food-related cues), with no
difference for neutral images, e.g. furniture. This shows that food-related cues act as disinhibitors
which may trigger overeating and obesity in restrained eaters.
Further support comes from a study by Wardle and Beales (1988) who randomly allocated 27 obese
women to a diet (restrained), exercise or control group. Restrained eaters ate significantly more
because they experienced occasional disinhibition and binged beyond feeling full. This shows that
restraint leading to disinhibition is a causal factor in overeating, which can lead to weight gain and obesity.
However, in a prospective study, Savage et al. (2009) found that increases in restrained eating were
linked to decreases in weight in 163 women over a six-year period. Therefore restrained eating leads
to weight loss rather than weight gain in the long term, the opposite outcome to that predicted by
restraint theory.
The boundary model assumes that both hunger and satiety are aversive. For example, when energy
levels dip below a ‘set point’ we feel an aversive state of hunger and are motivated to eat, whilst eating to fullness creates an aversive state of discomfort, so we are motivated to stop eating.
The model describes a zone of biological indifference (ZBI) when we feel neither hungry nor full. In
this zone, psychological factors (cognitive and social) have more influence than biological ones on
food intake. It is argued that the ZBI is wider for restrained eaters and people who restrict food
intake have a lower hunger boundary and a higher satiety boundary. As such, more of their eating behaviour is under cognitive rather than biological control, making them vulnerable to disinhibited eating.
However, the role of restraint is complex. Two forms of restraint are rigid restraint (all-or-nothing
approach to limiting food intake) and flexible restraint (allows limited amounts of ‘forbidden’ foods
without triggering disinhibition). Only rigid restraint is likely to lead to obesity and this could explain
why Savage et al. (2009) found that restrained eating can produce weight loss. The fact that the
boundary model presents restraint as a single behaviour does not reflect its true nature and makes
this a limited approach to understanding obesity.
Many studies supporting the boundary model are lab experiments. For example, Boyce and Kuijer
measured disinhibited eating with a ten-minute taste test, allowing participants to eat as much as they liked. This situation is artificial and highly controlled, and quite unlike most real-world food-
related environments. When restrained eaters break their diets in the real world, most compensate
for disinhibition afterwards (i.e. restrict calorie intake even further). This does not happen in lab
experiments. Therefore, lab experiments may be useful for establishing psychological causes of
obesity, but they tell us little about real-world obesity.
Page 157
1. The spiral model (Heatherton and Polivy 1992) suggests that diet failure leads to a sense of
personal deficiency. Food-restricted dieting often begins in adolescence when an ‘unsatisfactory’
body shape leads to low self-esteem and a desire to lose weight. Initial success is often followed by
the weight being regained and a sense of personal deficiency. This creates a downward spiral
whereby dieters do not radically rethink their approach but simply make a bigger effort and
experience more frustration and emotional distress, making them vulnerable to disinhibited eating. Metabolic changes in the body make weight loss physically more difficult (e.g. ghrelin levels increase,
leptin levels decrease) and the result is more failure followed by more attempts to ‘diet harder’,
lowering of self-esteem and even an increase in depression.
2. Adriaanse et al. (2011) showed that there is an ironic rebound effect in dieting. Just being
presented with a statement such as ‘I will not eat chocolate when I am sad’ reinforces the
association between ‘chocolate’ and ‘being sad’. This makes the link more accessible in memory and
easier to recall. They also showed that this ironic effect is not just cognitive but is behavioural too.
Participants who were presented with such statements ate more unhealthy snacks and consumed
more calories in the following week than a control group. This finding shows how just thinking of
oneself as dieting can lead to the failure of the diet.
The researchers showed that restricted eating diets often fail because food becomes more salient
when a diet imposes rules about eating. So the paradoxical outcome of trying to suppress a thought
about food is to make disinhibited eating more likely.
This can be the trigger for a spiral into dieting failure because when eating becomes disinhibited in
this way the dieter may try even harder not to think about food, which makes it more likely they do,
leading to further disinhibition.
3. Ironic processes theory is supported by Adriaanse et al. (2011) who found exposure to statements like ‘When I am sad, I will not eat chocolate’ reinforced the association between ‘being sad’ and ‘eating chocolate’, making the link more accessible in memory and recall more likely. This so-called ironic rebound effect is behavioural as well as cognitive because snack diaries showed participants ate more unhealthy snacks and calories than the control group in the following week, confirming the difficulty of suppressing thoughts of eating once they become accessible in memory.
However, although evidence shows ironic processes operate in eating behaviour it is unclear how far
they account for success and failure of dieting. The effects of ironic processes are exaggerated in
‘snapshot’ laboratory experiments and are less relevant to real-life attempts to lose weight over
time. This suggests other factors are likely to be more important in determining a diet’s success.
4. Heatherton and Polivy’s spiral model suggests that diet failure leads to a sense of personal
deficiency. Food-restricted dieting often begins in adolescence when body dissatisfaction leads to
low self-esteem and a desire to lose weight. There is initial success but weight is often regained, leading to a sense of personal deficiency. A downward spiral is created in which dieters do not radically rethink their approach; instead they make a bigger effort and experience more frustration and emotional distress, making them vulnerable to disinhibited eating. The resultant metabolic changes in the body make weight loss physically more difficult, and the result is more failure followed by more attempts to ‘diet harder’, lowering of self-esteem and an increase in depression.
The spiral model has practical uses: a key lesson of the model is to prevent the lowering of self-esteem and thus avoid the worst consequences of diet failure. For example, people who think about
avoiding putting on weight rather than trying to lose it are less likely to experience disinhibited
eating because their self-esteem is higher (Lowe and Kleifield 1988). This may be a better plan for
Leander and Uday.
According to ironic processes theory, being on a diet increases preoccupation with food and is one
reason why people like Leander suggest that diets don’t work. The paradoxical outcome of trying to
suppress a thought is to make it more likely and dieters label certain foods as ‘forbidden’ so they
stand out. This leads to increased thinking about food and disinhibition of eating, loss of control,
excessive food intake and dieting failure.
Ironic processes theory is supported by Adriaanse et al. (2011) who found exposure to statements like ‘When I am sad, I will not eat chocolate’ reinforced the association between ‘being sad’ and ‘eating chocolate’, making the link more accessible in memory and recall more likely. This so-called ironic rebound effect is behavioural as well as cognitive because snack diaries showed participants ate more unhealthy snacks and calories than the control group in the following week, confirming the difficulty of suppressing thoughts of eating once they become accessible in memory.
However, although evidence shows ironic processes operate in eating behaviour, it is unclear how
far they account for success and failure of dieting. The effects of ironic processes are exaggerated in
‘snapshot’ laboratory experiments and are less relevant to real-life attempts to lose weight over
time. This suggests other factors are likely to be more important in determining a diet’s success.
Disinhibition theory suggests that dieters make a conscious effort to restrain eating, so behaviour is under cognitive control – Uday may well believe this. However, dieters tend to experience cognitive distortions and are vulnerable to internal and external food-related cues tempting them to break their diet. Research therefore suggests Leander is right: Uday may experience disinhibited eating and consume many calories very quickly, resulting in him losing no more weight than someone not dieting.
Ogden (2010) suggests that disinhibition theory (and other theories claiming dieting is
counterproductive) has trouble explaining why some people lose weight even when preoccupied
with food. These people are a minority but obviously include people with anorexia who lose weight
through restricted eating and people with an internal locus of control. So, disinhibition theory lacks
validity because it does not apply to all cases of people dieting to lose weight.
Chapter 10 Stress
Page 159
1. The first stage is the alarm reaction. The sympathetic branch of the autonomic nervous system
(ANS) is activated by the hypothalamus. This stimulates the adrenal medulla to release adrenaline
and noradrenaline to prepare the body for fight or flight.
The second stage is resistance. The body tries to adapt by resisting the stressor. The body’s
resources are consumed at a harmful rate (e.g. stress hormones become depleted). The
parasympathetic branch is activated to conserve energy.
The third stage is exhaustion. The adaptation to the chronic stressor fails because resources needed
to resist it are drained. The symptoms of sympathetic arousal (e.g. raised heart rate) damage the
adrenal glands and suppress the immune system. Stress-related illnesses (e.g. raised blood pressure,
coronary heart disease and depression) are now more likely.
2. The hypothalamic-pituitary-adrenal (HPA) system is self-regulating via a negative feedback loop – cortisol in the bloodstream is monitored at the
pituitary and the hypothalamus. High levels of cortisol trigger reduction in both CRF and ACTH,
resulting in a corresponding reduction in cortisol.
3. The sympathomedullary pathway (SAM) controls the fight or flight response and therefore the
body’s acute, short-term response to a stressor. The hypothalamus activates the sympathetic branch
of the ANS, which stimulates the adrenal medulla to release adrenaline and noradrenaline into the
bloodstream (heart beats faster, muscles tense, liver converts stored glycogen into glucose to
provide energy to fuel the fight or flight response). Once the stressor stops, the parasympathetic
nervous system is activated and physiological arousal decreases – the priority now is energy
conservation, the rest and digest response.
The hypothalamic-pituitary-adrenal system (HPA) controls the body’s chronic response to long-term
stress. The hypothalamus produces corticotropin releasing factor (CRF). This is detected by the
anterior lobe of the pituitary gland and causes the release of adrenocorticotropic hormone (ACTH).
ACTH is detected by the adrenal cortex which secretes cortisol. HPA is self-regulating via a negative
feedback loop – cortisol in the bloodstream is monitored at the pituitary and the hypothalamus.
High levels of cortisol trigger reduction in both CRF and ACTH, resulting in a corresponding reduction
in cortisol.
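This negative feedback logic can be made concrete with a short sketch. The code below is purely a hypothetical illustration (the update rules and constants are invented for clarity, not a physiological model), but it captures the loop described above: when cortisol rises above its set point, CRF and ACTH release fall, and cortisol falls with them.

# A minimal sketch of the HPA negative feedback loop (hypothetical
# constants, not a physiological model): high cortisol suppresses CRF
# and ACTH, which in turn lowers cortisol back towards its set point.

def simulate_hpa(steps=15, setpoint=1.0, gain=0.5):
    crf, acth, cortisol = 1.0, 1.0, 2.0   # start with cortisol above the set point
    for t in range(steps):
        error = cortisol - setpoint             # monitored at pituitary/hypothalamus
        crf = max(0.0, crf - gain * error)      # high cortisol: reduced CRF
        acth = max(0.0, acth - gain * error)    # high cortisol: reduced ACTH
        cortisol += 0.5 * acth - 0.6 * cortisol  # secretion (driven by ACTH) minus clearance
        print(f"t={t:2d}  CRF={crf:.2f}  ACTH={acth:.2f}  cortisol={cortisol:.2f}")

simulate_hpa()

In this toy version the cortisol level settles towards the set point instead of rising without limit, which is the defining property of a self-regulating negative feedback loop.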
4. In the GAS model, the first stage is the alarm reaction. The sympathetic branch of the autonomic
nervous system (ANS) is activated by the hypothalamus. This stimulates the adrenal medulla to
release adrenaline and noradrenaline to prepare the body for fight or flight.
The second stage is resistance. The body tries to adapt by resisting the stressor. The body’s
resources are consumed at a harmful rate (e.g. stress hormones become depleted). The
parasympathetic branch is activated to conserve energy.
The third stage is exhaustion. The adaptation to the chronic stressor fails because resources needed
to resist it are drained. The symptoms of sympathetic arousal (e.g. raised heart rate) damage the
adrenal glands and suppress the immune system. Stress-related illnesses (e.g. raised blood pressure,
coronary heart disease and depression) are now more likely.
One strength of the GAS is that there is evidence to support it. Selye (1936) subjected rats to
stressors (e.g. extreme cold, surgical injury). He found the same collection of responses (‘syndrome’)
regardless of the stressor. Stress was a general body response appearing after 6–48 hours that was not unique to a specific stressor. He tracked the response to the stressor through the resistance and
exhaustion stages. This suggests the body’s general response to a stressor is a physiological reality as
Selye argued, at least in rats.
One limitation of the GAS is that it may not be a general response to stressors. Key to the GAS is that
the stress response is non-specific (i.e. it is always the same, regardless of the stressor). Mason
(1971) replicated Selye’s procedures with monkeys. Effects varied depending on the stressor
(extreme cold increased urinary cortisol; extreme heat reduced it). This challenges the central
concept of Selye’s theory by showing specific stressors can produce specific patterns of responses,
undermining the validity of the GAS.
The sympathomedullary pathway (SAM) controls the fight or flight response and therefore the
body’s acute, short-term response to a stressor. The hypothalamus activates the sympathetic branch
of the ANS, which stimulates the adrenal medulla to release adrenaline and noradrenaline into the
bloodstream (heart beats faster, muscles tense, liver converts stored glycogen into glucose to
provide energy to fuel the fight or flight response). The hypothalamic-pituitary-adrenal system (HPA)
controls the body’s chronic response to long-term stress. The hypothalamus produces corticotropin
releasing factor (CRF). This is detected by the anterior lobe of the pituitary gland and causes the
release of adrenocorticotropic hormone (ACTH). ACTH is detected by the adrenal cortex which
secretes cortisol.
A limitation of research into this physiological stress response is that psychological factors are
ignored. Cognitive appraisal was demonstrated in Speisman et al.’s (1964) study, in which students
watched a gruesome medical procedure on film while their heart rates were measured. If the traumatic nature of the operation was emphasised, heart rates increased; if it was described as a voluntary rite of passage, heart rates decreased. It is difficult for a purely physiological explanation to account for
this finding.
A strength of research is that it offers real-world benefits. Addison’s disease is a rare disorder of the adrenal glands in which people cannot produce cortisol. Stress can trigger a life-threatening Addisonian
crisis (confusion, abnormal heart rhythm, drop in blood pressure). This can be treated with self-
administered cortisol replacement therapy which allows people to lead relatively normal lives.
Therefore a better understanding of stress physiology has improved the lives of some people.
Page 161
1. Immunosuppression occurs when stress prevents the immune system from carrying out its usual
task of identifying and destroying antigens. Stress can cause immunosuppression directly (cortisol
inhibits production of immune cells) or indirectly (lifestyle behaviours).
Cardiovascular disorders (CVDs) are disorders of the heart and blood vessels – including coronary
heart disease (CHD) and stroke (blocked blood vessels in the brain). Some evidence shows stress has
immediate effects on CVDs (acute) as well as longer-term effects (chronic).
2. Wilbert-Lampen et al. (2008) found that on the days Germany played in the 1996 football World
Cup, cardiac emergencies in Germany increased by 2.66 times compared with a control period. The
acute emotional stress of watching a favourite football team more than doubled participants’ risk of
a cardiovascular event. Yusuf et al. (2004) found that there are several chronic stressors linked to
CVDs including workplace stress and stressful life events (a greater contribution than obesity). These
contribute to the development of CVDs but they also make existing disorders worse.
3. One limitation is that some research shows that stress can be protective. An assumption
underlying stress and illness research is that stress suppresses the immune system. But some studies
show stress can have immune-enhancing effects. Dhabhar (2008) subjected rats to mild stressors
which stimulated a major immune response. Immune cells (e.g. lymphocytes) flooded into the
bloodstream and body tissues to protect against acute stress – chronic stress may be more
damaging. This suggests that the relationship between stress, the immune system and illnesses is
complex and not yet fully understood.
4. Immunosuppression through stress can occur directly. Cortisol produced by the hypothalamic-
pituitary-adrenal system (HPA) inhibits the production of immune cells. It can also occur indirectly as
stress influences lifestyle behaviours (smoking, drinking) that have a negative effect on immune
functioning. Kiecolt-Glaser et al. (1984) obtained blood samples from 75 medical students, tested
before the exam period (low-stress) and on the day of the first exam (high-stress). They also
completed questionnaires measuring sources of stress and self-reported psychological symptoms.
The activity of natural killer (NK) and killer T cells decreased between the first and second samples –
and this was evidence of an immune response suppressed by a common stressor. Decline was
greatest in those students who reported feeling lonely and who were experiencing other sources of
stress (e.g. life events). Immunosuppression may explain Fabrizio’s sniffles, aches and pains as his
recent experiences have reduced immune functioning.
CVDs are disorders of the heart and blood vessels – including coronary heart disease (CHD) and
stroke (blocked blood vessels in the brain). Some evidence shows stress has immediate effects on
CVDs (acute) as well as longer-term effects (chronic). It is worrying that Fabrizio is experiencing an
irregular heartbeat, which suggests chronic stress may be having long-term effects, and he may be
more susceptible to cardiovascular disorder.
One limitation is that some research shows that stress can be protective, which does not appear to
be Fabrizio’s experience. An assumption underlying stress and illness research is that stress
suppresses the immune system. But some studies show stress can have immune-enhancing effects.
Dhabhar (2008) subjected rats to mild stressors which stimulated a major immune response.
Immune cells (e.g. lymphocytes) flooded into the bloodstream and body tissues to protect against
acute stress – chronic stress may be more damaging. This suggests that the relationship between
stress, the immune system and illnesses is complex and not yet fully understood.
Another limitation is the effects of stress on CVDs are mostly indirect. The evidence for stress as an
indirect factor in CVDs is much stronger than evidence that it directly causes CVDs. Stress can
increase the risk of heart attack in people who already have CVDs. Orth-Gomér et al. (2000) found
that marital conflict for women with CVDs created stress that tripled the risk of heart attack. This
suggests that stress increases vulnerability to CVDs, mainly through indirect effects (e.g. lifestyle).
Perhaps the source of Fabrizio’s problems lies elsewhere (his relationship?) and this job loss has
made it worse.
However there is some evidence that chronic stressors are linked to CVDs. Yusuf et al. (2004)
conducted the INTERHEART study across 53 countries. They found that there are several chronic
stressors linked to CVDs including workplace stress and stressful life events (a greater contribution
than obesity). These contribute to the development of CVDs but they also make existing disorders
worse. This research confirms that Fabrizio’s girlfriend may be right and he is experiencing a chronic
response to the stressful life event of losing his job. This is of great concern given that he has an
irregular heartbeat.
Page 163
1. Life changes are major sources of stress – the really important events that happen to us from time to time, for example getting married or divorced, the death of a close relative, changes to financial state (better or worse) or the birth of a new child. Life changes are stressful because you have to make a major psychological adjustment to adapt to changed circumstances – the bigger the change, the greater the adjustment
and associated stress. Life changes are cumulative – they add together to create more stress
because they require even more change to adapt. This applies as much in relation to positive life
changes as to negative ones.
2. Rahe et al. (1970) found a significant positive correlation (of +.118) between LCU scores of navy
personnel for the six months prior to departure and illness scores aboard ship. Those who
experienced the most stressful life changes in the final six months before leaving had the most
(severe) illnesses on ship. The researchers concluded that life changes were a reasonably robust
predictor of later illness. Lietzén et al. (2011) found that having a high level of life change stress was
a reliable predictor of asthma onset. This link could not be explained by other well-established
common risk factors such as smoking or having a pet at home.
3. One strength of the life changes concept is supportive research evidence. Lietzén et al. (2011)
found a high level of life change was a reliable predictor of asthma onset. This link was not explained
by known risk factors (e.g. pet at home or smoking). This study suggests that stressful life changes
can contribute to the onset of a chronic illness.
One limitation of life changes research is it ignores individual differences. Stress is perceived
differently by different individuals, e.g. moving house will be more stressful to somebody when it is
due to a lack of money rather than as a result of being better off. Byrne and Whyte (1980) tried to
predict who would experience myocardial infarction (heart attack) based on SRRS scores. This only
worked when they took into account the subjective interpretations that participants gave to their
life changes. This suggests that the classic life changes approach fails to consider the impact of
individual differences in how life changes are perceived, reducing the validity of this approach as an
explanation of stress.
4. Life changes are major sources of stress – the really important events that happen to us from time to time, for example getting married or divorced, the death of a close relative, changes to financial state (better or worse) or the birth of a new child. Life changes are stressful because you have to make a major psychological adjustment to adapt to changed circumstances – the bigger the change, the greater the adjustment
and associated stress. Life changes are cumulative – they add together to create more stress
because they require even more change to adapt. This applies as much in relation to positive life
changes as to negative ones. This can be related to Tad and Tadita’s experiences. Even though some
of the life changes they have experienced are positive, all life changes require significant
psychological adjustment. The cumulative effect of these will mean that both will have experienced
a considerable amount of stress.
Holmes and Rahe’s (1967) Social readjustment rating scale (SRRS) assigns each life change a number of life change units (LCUs). The higher the LCU value, the more adjustment the life change needs, making it more stressful (e.g. divorce is 73 LCUs, marriage is 50). Participants tick all the life changes they recall over the previous months (usually 12). We can assume that Tad and Tadita would score highly on the SRRS,
and a high score is correlated with high experience of stress.
One strength of the life changes concept is supportive research evidence. Lietzén et al. (2011) found
a high level of life change was a reliable predictor of asthma onset. This link was not explained by
known risk factors (e.g. pet at home or smoking). This study suggests that stressful life changes can
contribute to the onset of a chronic illness. All this research confirms that it is events in Tad and
Tadita’s lives that have caused them stress.
However, one limitation of life changes research is it ignores individual differences. Stress is
perceived differently by different individuals. For example, the stress Tadita felt because she became
pregnant depends on various things, such as whether it was planned or unexpected. Byrne and
Whyte (1980) tried to predict who would experience myocardial infarction (heart attack) based on
SRRS scores. This only worked when they took into account the subjective interpretations that
participants gave to their life changes. This suggests that the classic life changes approach fails to
consider the impact of individual differences in how life changes are perceived, reducing the validity
of this approach as an explanation of stress. Although Tad and Tadita have experienced stress from
life events, it is very unlikely they will have done so to the same extent.
Another limitation of life changes research is it assumes all change is stressful. The SRRS mixes
together different types of life changes (e.g. positive and negative). But positive and negative
changes may have different effects. Turner and Wheaton (1995) found negative life changes caused most of the stress measured by the SRRS. This could be due to the frustration associated with negative life changes. Depending on his exact circumstances, Tad may well have found moving house particularly stressful because it was the result of relationship breakdown. On the other hand, Tadita got married,
which presumably was a positive life change. This challenges the validity of the life changes
approach, because positive and negative life changes have different effects.
Page 165
1. Daily hassles are frequent and everyday irritations and frustrations. They range from minor
inconveniences (e.g. can’t find keys) to greater pressures and difficulties (e.g. not enough time). Each
hassle on its own does not have the impact of a significant life change, but their added effects leave
us feeling stressed.
Life changes are major events in a person’s life that happen much less often than daily hassles but
may be more stressful because they require significant psychological adjustment (e.g. getting
married, losing one’s job or experiencing a bereavement).
2. Kanner et al. (1981) found significant positive correlations between hassle frequency and
psychological symptoms at the start and end of the study. The more hassles the participants
experienced the more severe were the psychological symptoms of depression and anxiety. Hassles
were a stronger predictor of psychological symptoms than life changes both during the ten months
of the study and from 2½ years earlier. Ivancevich (1986) showed that daily hassles are strong
predictors of poor health, poor job performance and absenteeism from work. Relatively minor
everyday stressors can accumulate and have significant effects in the workplace.
3. One strength is the daily hassles concept has research evidence to support it. Ivancevich (1986)
found that daily hassles were strong predictors of poor health, poor job performance and
absenteeism from work. There is a substantial body of research to suggest that daily hassles are a
more valid explanation of stress than life changes.
However, Ivancevich’s study (and others) was based on retrospective self-report. Participants had to
recall daily hassles from over the previous month. Because they are relatively minor, hassles are
easily forgotten or their significance could be misremembered and exaggerated. This means the
validity of some hassles research might be doubtful.
4. According to Lazarus et al. (1980) daily hassles range from minor inconveniences (e.g. can’t find
keys) to greater pressures and difficulties (e.g. not enough time). Each hassle on its own does not
have the impact of a significant life change – but their added effects leave us feeling stressed.
Stressfulness of daily hassles depends on psychological appraisal. Lazarus argued that when we
experience a hassle we engage in primary appraisal – we work out subjectively how threatening it is
to our psychological health. If we deem that the hassle is threatening we engage in secondary
appraisal – we subjectively consider how well equipped we are to cope with the hassle.
The Hassles and uplifts scale (HSUP) is a self-report measure of how many hassles are experienced
and how severe they are, as well as uplifts – the small, daily pleasant and enjoyable things that offset
the stress of hassles (e.g. getting on well with friends).
Effects of life changes and daily hassles are different. Life changes have indirect effects – they are
distal sources of stress. Daily hassles have direct and immediate effects on our everyday lives – they
are proximal sources of stress.
One strength is the daily hassles concept has research evidence to support it. Ivancevich (1986)
found that daily hassles were strong predictors of poor health, poor job performance and
absenteeism from work. There is a substantial body of research to suggest that daily hassles are a
more valid explanation of stress than life changes.
However, Ivancevich’s study (and others) was based on retrospective self-report. Participants had to
recall daily hassles from over the previous month. Because they are relatively minor, hassles are
easily forgotten or their significance could be misremembered and exaggerated. This means the
validity of some hassles research might be doubtful.
Another limitation is that hassles research is mostly correlational. There are many studies showing
strong positive correlations between stress, hassles and various outcomes. But even the strongest
correlation does not demonstrate causation. Because another, unmeasured, factor may be involved,
we cannot conclude that hassles cause stress. For instance, people who are depressed may
experience daily hassles intensely and at the same time feel stressed. Hassles and stress appear to
be linked, but it is the depression that is causal. Therefore the link between hassles, stress and illness
may be indirect and depend on other factors.
On the other hand, a strength of hassles research is that it can account for individual differences.
Lazarus emphasises that how stressful a hassle is depends on how we interpret it. For example, one
person who loses their keys will perceive it as a disaster, but another person will not. This is primary
appraisal. One person believes they can cope, but the other falls to pieces. This is secondary
appraisal. Therefore the daily hassles approach incorporates the idea that people differ in their
perception of what makes a hassle and this has differing effects on our health and behaviour.
Page 167
1. Workload refers to the demands a job makes on an employee. Some jobs make great demands of
time and/or effort and so an employee will experience overload. Conversely, other employees might
experience underload because their jobs are relatively undemanding.
Control is the degree of freedom an employee has to perform their job as they wish. For example, they may have greater leeway to make decisions, take longer to perform a task or determine the steps involved in doing so.
2. Research has shown that high workload and a lack of control are both stressful in the workplace.
In the Whitehall Studies, Bosma et al. (1997) showed that employees who reported low job control
at the start of the study were more likely to have coronary heart disease five years later. This was
true even when other risk factors (e.g. lifestyle, diet) were statistically accounted for and also was
found across all job grades. If a job lacks control, higher status does not reduce the risks of stress to
health.
Johansson et al. (1978) found that ‘finishers’ in a Swedish sawmill had higher levels of stress
hormones than cleaners. Their hormone levels were higher even before they got to work and
increased over the day (whereas the cleaners’ levels decreased). Finishers also experienced more
stress-related illness and absenteeism. The key difference was that finishers had much less control
over their work than the cleaners did.
3. One limitation of the job demands-control model is that it is simplistic. Lack of control is a
significant stressor for many workers (at least in some cultures) but is not the only one. How much
stress a worker experiences is the outcome of a complex interaction between the kind of work they
do, how well they use coping mechanisms and their perception of how much control they have. The
job demands-control model ignores other factors and lacks validity because of a simplistic focus on
just control and workload.
Another limitation is the model may not explain cultural differences. Györkös et al. (2012) reviewed
cross-cultural studies and found a lack of job control was perceived as more stressful in individualist
cultures (e.g. UK and US). However, in collectivist cultures (e.g. China and other Asian countries)
control was considered less desirable. The concept of job control may be a culture-specific notion
reflecting individualist ideals of equity and personal rights. It may not generalise to collectivist
cultures which prioritise the good of wider society.
4. The newspaper item reflects the view of Karasek (1979) in his job demands-control model of
workplace stress. The article and the model state that the demands of a job (e.g. work overload) can
lead to poor health, dissatisfaction, and absenteeism. But this relationship depends upon the
amount of control an employee has over their work. So when two people have equally demanding
jobs (because the workload is too great) only the one who lacks control becomes ill. According to
Karasek, having control acts as a ‘buffer’ against the negative effects of job overload.
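The buffering logic can be expressed as a simple decision rule. The sketch below is a hypothetical illustration (the scores and the cut-off value are invented), though the four job categories it prints (high-strain, active, passive and low-strain) are the labels commonly used for the quadrants of Karasek's model.

# A minimal sketch of Karasek's job demands-control logic: only the
# combination of high demands and low control predicts strain, because
# control 'buffers' the effect of demands. Scores and cut-off are hypothetical.

def job_type(demands, control, cutoff=5):
    high_demands, high_control = demands >= cutoff, control >= cutoff
    if high_demands and not high_control:
        return "high-strain job (illness most likely)"
    if high_demands and high_control:
        return "active job (demands buffered by control)"
    if not high_demands and not high_control:
        return "passive job"
    return "low-strain job"

# Two equally demanding jobs: only the one lacking control is high strain.
print(job_type(demands=8, control=2))   # high-strain job (illness most likely)
print(job_type(demands=8, control=8))   # active job (demands buffered by control)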
The key sources of stress identified by the newspaper article have been explored in two major
studies.
Bosma et al. (1997) investigated control in the Whitehall Studies, prospective studies of over 10,000
civil servants in a wide range of job grades. The researchers found that employees who reported low
job control at the start of the study were more likely to have CHD five years later – even when other
risk factors (e.g. lifestyle, diet) were statistically accounted for. This finding also existed across all job
grades – status and support given to higher grade civil servants did not offset risk of developing CHD
if the job lacked control.
Johansson et al. (1978) investigated workload, control and stress. A natural experiment was
conducted in a Swedish sawmill which compared a group of wood ‘finishers’ and a group of cleaners.
Measures of employee illness, absenteeism, and levels of the stress hormones adrenaline and
noradrenaline were taken. Finishers had little control over their work because it was dictated by the
machine – but job demands were high because it was complex, skilled and carried a lot of
responsibility. The researchers found higher levels of stress hormones in finishers overall – levels were higher even before they got to work and increased over the day (whereas the cleaners’ levels decreased). There was
more stress-related illness and absenteeism among finishers.
So overall the research picture is slightly different from the view offered by the newspaper item. It
appears that having ‘too much work to do’ is not as stressful as having low job control. However,
both studies provide support for the newspaper’s view that lack of job control is potentially
dangerous, especially the Whitehall Studies because they were prospective and showed that lack of
control predicts negative outcomes.
One limitation of the job demands-control model is that it is simplistic. Lack of control is a significant
stressor for many workers (at least in some cultures) but is not the only one. How much stress a
worker experiences is the outcome of a complex interaction between the kind of work they do, how
well they use coping mechanisms and their perception of how much control they have. The job
demands-control model ignores other factors and lacks validity because of a simplistic focus on just
control and workload. The newspaper article is therefore equally simplistic in failing to highlight
other potential sources of stress.
Another limitation is the model may not explain cultural differences. Györkös et al. (2012) reviewed
cross-cultural studies and found a lack of job control was perceived as more stressful in individualist
cultures (e.g. UK and US). However, in collectivist cultures (e.g. China and other Asian countries)
control was considered less desirable. The concept of job control may be a culture-specific notion
reflecting individualist ideals of equity and personal rights. It may not generalise to collectivist
cultures which prioritise the good of wider society. The newspaper article is taking a very narrow
view of what is meant by job control.
Page 169
1. The main difference is that one is a subjective measure and the other is objective.
Self-report scales measure subjective judgements of stress using questionnaires, such as the SRRS.
They provide valuable information about psychological factors linked to stress.
Physiological measures (e.g. the skin conductance response) monitor the effects of the autonomic
nervous system and are therefore objective measures of stress. These are more difficult to ‘fake’ because there is less risk of socially desirable responding.
2. One consequence of stress is that we sweat more – human skin is a good conductor of electricity
and sweat enhances that – the more we sweat, the more conductance there is. To measure
conductance, electrodes are attached to the index and middle fingers of one hand to detect
sweating. A tiny current (which cannot be felt) is applied to the electrodes to measure how much electricity is conducted. Conductance is measured in microsiemens – the signal is amplified and displayed
on a screen. Tonic conductance is a baseline measure taken when we are not experiencing a
stressful stimulus. It is compared against phasic conductance, which occurs when a stimulus is
applied.
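The tonic/phasic comparison amounts to simple arithmetic on the conductance readings. The sketch below is a minimal illustration using hypothetical microsiemens values, not real recordings.

# A minimal sketch of comparing tonic (baseline) and phasic (stimulated)
# skin conductance. The readings below are hypothetical, in microsiemens.

tonic = [2.1, 2.0, 2.2, 2.1]    # resting conductance, before the stimulus
phasic = [2.3, 3.0, 3.4, 2.9]   # conductance after the stimulus is applied

baseline = sum(tonic) / len(tonic)
peak = max(phasic)
response = peak - baseline       # rise above baseline = response to the stimulus

print(f"baseline (tonic) = {baseline:.2f} microsiemens")
print(f"peak (phasic)    = {peak:.2f} microsiemens")
print(f"response         = {response:.2f} microsiemens")

A response is only meaningful relative to the person's own baseline, which is why the tonic measure is taken first.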
3. One strength of self-report is that it is a valid way to measure stress. Stress is personal so the best
way to understand it is to ask people about their experiences. Asking questions about experiences
‘makes sense’ to people as a way to measure stress, so people are more honest. Therefore the
findings of studies based on self-report measures are true reflections of the stress participants feel.
However, Dohrenwend et al. (1990) found that the most stressed people made the most negative
interpretations of scale items (e.g. ‘Serious illness’). This means there is an inbuilt bias that inflates
stress scores and reduces the validity of self-report measures.
One limitation is that self-report scales mix causes and effects of stress. SRRS and HSUP items
(causes of stress) overlap with symptoms (effects of stress), e.g. ‘Personal injury or illness’ (SRRS).
This is like saying, ‘You have a stress-related illness because you are experiencing a personal illness’ –
scales reflect illness, they do not predict it. This is why self-report measures should be abandoned
and replaced by direct observations of behaviour.
4. Self-report measures of stress include the Social readjustment rating scale (SRRS) created by
Holmes and Rahe (1967). It uses medical records to identify events in patients’ lives that happened
not long before they became ill. There are 43 life events, and a life change unit (LCU) score is
provided for each as a measure of stress. The LCU was calculated for each life event by asking a group of people to estimate the readjustment required by each event, using marriage (500 units) as a
baseline. The SRRS is used by asking participants to indicate which life events they have experienced
in the past 12 months – LCUs for these are added to give an overall (global) stress score.
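The scoring itself is a simple lookup-and-sum. In the sketch below, only the two LCU values quoted in this guide (divorce = 73, marriage = 50) are taken from the scale; the third entry and the participant's ticked events are made up for illustration.

# A minimal sketch of SRRS scoring: tick the life events experienced in
# the past 12 months, then sum their LCU values for a global stress score.
# Divorce (73) and marriage (50) are quoted in this guide; the third
# entry and the ticked events are hypothetical.

lcu_values = {
    "divorce": 73,
    "marriage": 50,
    "change in financial state": 38,   # illustrative value
}

ticked = ["marriage", "change in financial state"]   # hypothetical participant

global_score = sum(lcu_values[event] for event in ticked)
print(f"Global stress score = {global_score} LCUs")  # 88 LCUs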
One strength of self-report is that it is a valid way to measure stress. Stress is personal so the best
way to understand it is to ask people about their experiences. Asking questions about experiences
‘makes sense’ to people as a way to measure stress, so people are more honest. Therefore the
findings of studies based on self-report measures are true reflections of the stress participants feel.
However, Dohrenwend et al. (1990) found that the most stressed people made the most negative
interpretations of scale items (e.g. ‘Serious illness’). This means there is an inbuilt bias that inflates
stress scores and reduces the validity of self-report measures.
One limitation is that self-report scales mix causes and effects of stress. SRRS and HSUP items
(causes of stress) overlap with symptoms (effects of stress), e.g. ‘Personal injury or illness’ (SRRS).
This is like saying, ‘You have a stress-related illness because you are experiencing a personal illness’ –
scales reflect illness, they do not predict it. This is why self-report measures should be abandoned
and replaced by direct observations of behaviour.
Physiological measures of stress measure arousal in the autonomic nervous system (ANS) which is
produced by stress. One consequence is that we sweat more – human skin is a good conductor of
electricity and sweat enhances that – the more we sweat, the more conductance there is. To
measure conductance, electrodes are attached to the index and middle fingers of one hand to
detect sweating. A tiny current (which cannot be felt) is applied to the electrodes to measure how much electricity is conducted. Conductance is measured in microsiemens – the signal is amplified
and displayed on a screen.
A limitation of SCRs is individual differences. SCR measurement recognises people have different
patterns of skin conductance, so a baseline measure (tonic conductance) is taken before a stimulus is
presented. However, some people are stabiles (SCRs vary little when they are at rest, and are not
much influenced by internal thoughts or external events). Others are labiles (produce a lot of SCRs
even when resting). This suggests the SCR measurement is not a straightforward matter of
comparing baseline SCRs (tonic) against stimulated SCRs (phasic).
However, one strength of SCRs (and other physiological measures) is that they are not affected by
personal biases. Skin conductance, blood pressure and hormone secretion are all reliably associated
with stress. As noted above, physiological measures have a ‘baseline’ which varies from person to person. But this can be accounted for, and as long as it is, physiological measures are free of the biases that affect self-reports (e.g. cortisol levels are not affected by social desirability but SRRS
scores are). This means that physiological measures are considered to be more scientific measures of
the body’s physiological stress response.
Page 171
1. Friedman and Rosenman identified the characteristics of Type B personality – relaxed, tolerant,
reflective, ‘laid back’ and less competitive than Type As. Type C people demonstrate pathological
niceness, are ‘people pleasers’, compliant, passive and self-sacrificing. They avoid conflict by
repressing emotions, especially anger (particularly relevant to cancer-proneness). Temoshok (1987)
proposed Type C is linked with cancer.
2. Friedman and Rosenman (1959) conducted the Western collaborative group study (WCGS) on
3000 males in California who were medically assessed as free of coronary heart disease (CHD) at the
start of the study. They were assessed for personality type by answering 25 questions in a structured
interview. The interviews were conducted to incite Type A-related behaviour (e.g. the interviewer
would be aggressive and frequently interrupt the participants). Eight and a half years later (Friedman and Rosenman 1974), 257 men had developed CHD. 70% of these had been assessed at the start of the study as Type A – considerably more than the Type Bs who developed CHD. Type As had higher levels of adrenaline and noradrenaline and higher blood pressure and cholesterol. This suggests that Type A personality makes people vulnerable to stressors because impatience and hostility cause a raised physiological stress response.
3. Friedman and Rosenman’s (1974) study used an interview procedure. 3000 males in California
were medically assessed as free of coronary heart disease at the start of the study. They were
assessed for personality type by answering 25 questions in a structured interview. The interviews
were conducted to incite Type A-related behaviour (e.g. the interviewer would be aggressive and
frequently interrupt).
Dattore et al. (1980) used a different methodology to assess Type C – self-report questionnaires.
They studied 200 veterans of the Vietnam War, 75 of whom were cancer patients and the rest
formed a control group of people with non-cancer diagnoses. They had all completed scales to
measure repression of emotions and symptoms of depression several years before they were
diagnosed. So, like the Friedman and Rosenman study, this was prospective.
4. Friedman and Rosenman (1959) observed that patients with coronary heart disease (CHD) shared
a pattern of behaviour, which they called Type A personality: competitive (driven, achievement-
motivated, ambitious, aware of status), time urgent (fast-talking, impatient, proactive, multitaskers),
and hostile (aggressive, intolerant and quick to anger). Characteristics of a Type B personality are
being relaxed, tolerant, reflective, ‘laid back’ and less competitive than Type As. Type C people
demonstrate pathological niceness, are ‘people pleasers’, compliant, passive and self-sacrificing.
They avoid conflict by repressing emotions, especially anger (particularly relevant to cancer-
proneness). Temoshok (1987) proposed Type C is linked with cancer.
Friedman and Rosenman (1959) conducted the Western collaborative group study (WCGS) on 3000
males in California who were medically assessed as free of coronary heart disease (CHD) at the start
of the study. They were assessed for personality type by answering 25 questions in a structured
interview. The interviews were conducted to incite Type A-related behaviour (e.g. the interviewer
would be aggressive and frequently interrupt the participants). Eight and a half years later (Friedman and Rosenman 1974), 257 men had developed CHD. 70% of these had been assessed at the start of the study as Type A – considerably more than the Type Bs who developed CHD. Type As had higher levels of adrenaline and noradrenaline and higher blood pressure and cholesterol. This suggests that Type A personality makes people vulnerable to stressors because impatience and hostility cause a raised physiological stress response.
One strength of the Type A/B concept is that it can be used practically to improve health-related
outcomes. For example, Ragland and Brand (1988) followed up men from Friedman and Rosenman’s
original study who had survived a heart attack. Type B survivors were more likely to die after several
years than Type As. This was an unexpected result, but one explanation is that Type As were more likely to change their behaviour after surviving the first heart attack (e.g. becoming less driven, ambitious and busy) and thus avoided further stress. This suggests that research findings can be used to convince Type As to change and live longer.
However, the same study also highlights how some of the research is gender-biased. All of the participants were men (as in the original study), which means some of our knowledge of the role of personality is based on the male stress response. This might be of less relevance to women. This is
an example of beta bias, or applying findings from males to females without further testing. This
means that practical advice about surviving CVDs may not work as well for women as it does for
men.
Another limitation of the Type A concept is that it is too broad. Type A personality includes too many
different traits. Research focus moved to the hostility component of Type A (hostile people are
selfish, manipulative, mistrusting and contemptuous) to explain the link between stress and CHD.
Carmelli et al. (1991) found very high CHD-related deaths after 27 years in a subgroup of WCGS men
with high hostility scores. Therefore, it looks like it is not the broad Type A personality that is linked
to illness but the narrower hostility component.
Evidence suggests two distinct personality types (A and B) that respond to stress differently. Type As
are more likely to deal with stress in a way that harms their health. However, other evidence shows
this link is weak and correlational – inconsistent and contradictory findings suggest the Type A/B
distinction is blurred. Therefore Type A is no longer a particularly useful concept because it cannot
be used to predict who will become ill in response to stress.
Page 173
1. Commitment: hardy people are deeply involved in relationships, activities and themselves. They
throw themselves wholeheartedly into life, optimistic they will learn something valuable.
Challenge: hardy people are resilient and welcome change as an opportunity or a challenge rather
than a threat. They recognise life is unpredictable, but this is exciting and stimulating.
Control: hardy people have a strong belief that they are in charge of events. They actively strive to
influence environments rather than being powerless and passive observers of life passing by.
2. Kobasa (1979) used the Schedule of recent experiences (forerunner to the SRRS) to measure life
events in male American managers. She also measured illness and noted the number of days taken
off work. Many managers who experienced high levels of stress over the previous three years
became ill with a high level of absenteeism. But some of them coped with stress without becoming ill. These managers all scored highly on measures of challenge, commitment and control (i.e. hardiness).
Maddi (1987) studied managers and supervisors at an American company that went through one of
the biggest reorganisations in history. It was extremely stressful for people who were fortunate
enough to keep their jobs. About two-thirds of the managers had significant declines in performance
and health, including CVDs, depression and drug abuse. But the other third flourished: they felt
happier and more satisfied at work and were energised by the stress. Again these were the ones
who scored highly on the Three Cs. They saw the stressful events as a challenge which they could
control and worked at being committed to change.
3. One limitation is that the concept of hardiness may be too broad. Hull et al. (1987) argued that research should focus on control and, to a lesser extent, commitment, as research shows control is so important to well-being. However, Contrada (1989) claims that challenge is the most important
component of hardiness. This suggests the concept of hardiness is so broad it has very little validity
and may not exist at all.
One strength is that many research studies show that hardiness has an important role in how we
respond to stressors. An example is a study by Contrada (1989) who studied the cardiovascular
responses of male students to a stressful laboratory task. Students who scored highly on a measure of hardiness showed a lower blood pressure response to the task. The students with the
lowest blood pressure also had Type B personalities, which shows that hardiness interacts with other
individual differences. This shows that hardiness affects the physiological stress response and may
protect from some stress-related illnesses.
4. Kobasa (1979) proposed hardiness is a set of personality characteristics that protect us against
stress. Maddi (1986) argues hardiness gives us ‘existential courage’ – the will or determination to
keep going despite the setbacks life throws at us and uncertainties about the future.
It would appear that Padraig is a ‘hardy’ individual as he sees the changes at work as an opportunity
rather than a stressor (this links to one of the Three Cs – challenge). His desire to work hard relates
to the concept of commitment. Finally, he appears to be in control of events and his reaction to the
changes, rather than being controlled by them. For this reason, he is likely to have the existential
courage to cope with the changing circumstances.
A strength is research support for the hardiness concept. For example, Maddi (1987) studied
managers and supervisors at an American company that went through one of the biggest
reorganisations in history, mirroring the events at Padraig’s college. It was extremely stressful for
people who were fortunate enough to keep their jobs. About two-thirds of the managers
experienced poor performance and ill health, including CVDs, depression and drug abuse. But, like
Padraig, the other third flourished, they felt happier and more satisfied at work and were energised
by the stress. These were the ones who scored highly on the Three Cs, as Padraig probably would if
he were to be assessed. They saw the stressful events as a challenge which they could control and
worked at being committed to change.
Another strength is that it may be possible to develop hardiness in the real world. Maddi and Kobasa
have worked with many organisations to increase challenge, control and commitment in employees
to help reduce the effects of stress. This may be of benefit not so much to Padraig but to some of his
colleagues who are overwhelmed by the stress of the reorganisation. Therefore being able to
develop hardiness in some people could help them to respond more positively to stress and prevent
poor health, absenteeism and poor performance.
One limitation is that the concept of hardiness may be too broad. There seems to be an element of
control at the heart of both commitment and challenge. It could be that Padraig’s positive response
is entirely due to his feeling of control rather than to a vague concept of hardiness. Hull et al. (1987) argued that research should focus on control and, to a lesser extent, commitment, as research shows control is so important to well-being. However, Contrada (1989) claims that challenge is the most
important component of hardiness. This suggests the concept of hardiness is so broad it has very
little validity and may not exist at all.
Page 175
1. Drug therapy is treatment of stress that involves chemicals that affect the functioning of the brain
and nervous system. For example, benzodiazepines (e.g. diazepam) reduce the anxiety associated
with stress by reducing central nervous system (CNS) arousal. They tap into one way the body
naturally combats anxiety. The mode of action of BZs involves GABA, which is a neurotransmitter
that inhibits activity of most neurons in the brain. During synaptic transmission, GABA combines with
receptors on the postsynaptic neuron. This makes it less likely that the postsynaptic neuron will fire
so neural activity is slowed. BZs enhance this natural inhibition, lowering CNS activity even further.
2. Benzodiazepines (e.g. diazepam) reduce the anxiety associated with stress by reducing central
nervous system (CNS) arousal. They tap into one way the body naturally combats anxiety. The mode
of action of BZs involves GABA, which is a neurotransmitter that inhibits activity of most neurons in
the brain. During synaptic transmission, GABA combines with receptors on the postsynaptic neuron.
This makes it less likely that the postsynaptic neuron will fire so neural activity is slowed. BZs
enhance this natural inhibition, lowering CNS activity even further.
Beta-adrenergic blockers (beta blockers or BBs) act on the sympathetic nervous system to reduce
sympathetic arousal, a key part of stress-related anxiety. BBs (e.g. atenolol) are prescribed to reduce
blood pressure and treat heart problems. BBs stop beta-adrenergic receptors being stimulated by
adrenaline and noradrenaline. This slows heart rate, reduces blood pressure, etc., reducing the heart’s need for oxygen. BBs reduce stress-related anxiety without altering alertness because they don’t operate
directly on the brain. So they are ideal for people who want to eliminate physical symptoms of stress
but remain alert (e.g. stage performers, surgeons).
3. One strength of BZs is high-quality research evidence that shows they are effective. In a double-
blind placebo-controlled trial, half the participants take a placebo (an inactive version of the drug)
but neither they nor the researcher knows who is taking it. A review of high-quality studies by
Baldwin et al. (2013) concluded there is good evidence that BZs are significantly more effective than
placebo in reducing acute anxiety. This is strong evidence that BZs are a good choice of drug
treatment for people wishing to reduce anxiety, at least in the short term.
One strength of BBs is that research evidence shows they are effective. Kelly (1980) concluded that
BBs were effective for treating everyday anxieties associated with public speaking, exam nerves and
even civil disturbances of living in Northern Ireland in the 1970s. Studies consistently demonstrate
BBs may be even more effective when used with other drugs such as BZs (Hayes and Schulz 1987).
Therefore, drug combination therapy with BBs and BZs may be the best way to treat the
physiological symptoms of stress for most people.
One limitation of drug therapy is side effects. BZs can cause breathing problems and paradoxical
reactions (opposite effects) e.g. impulsive behaviours and uncontrollable emotions (Gaind and
Jacoby 1978). BBs may reduce heart rate and blood pressure too much in some people, and so are
not suitable for people with diabetes or severe depression. Therefore side effects are problematic because a person may stop taking the drug, making it ineffective.
4. Benzodiazepines (BZs, e.g. diazepam) reduce the anxiety associated with stress by reducing central
nervous system (CNS) arousal. They tap into one way the body naturally combats anxiety. The mode
of action of BZs involves GABA, which is a neurotransmitter that inhibits activity of most neurons in
the brain. During synaptic transmission, GABA combines with receptors on the postsynaptic neuron.
This makes it less likely that the postsynaptic neuron will fire so neural activity is slowed. BZs
enhance this natural inhibition, lowering CNS activity even further.
One strength of BZs is high-quality research evidence that shows they are effective. In a double-blind
placebo-controlled trial, half the participants take a placebo (an inactive version of the drug) but
neither they nor the researcher knows who is taking it. A review of high-quality studies by Baldwin et
al. (2013) concluded there is good evidence that BZs are significantly more effective than placebo in
reducing acute anxiety. This is strong evidence that BZs are a good choice of drug treatment for
people wishing to reduce anxiety, at least in the short term.
Beta-adrenergic blockers (beta blockers or BBs) act on the sympathetic nervous system to reduce
sympathetic arousal, a key part of stress-related anxiety. BBs (e.g. atenolol) are prescribed to reduce
blood pressure and treat heart problems. BBs stop beta-adrenergic receptors being stimulated by
adrenaline and noradrenaline. This slows heart rate, reduces blood pressure, etc., reducing the heart’s need for oxygen. BBs reduce stress-related anxiety without altering alertness because they don’t operate
directly on the brain. So they are ideal for people who want to eliminate physical symptoms of stress
but remain alert (e.g. stage performers, surgeons).
One strength of BBs is that research evidence shows they are effective. Kelly (1980) concluded that
BBs were effective for treating everyday anxieties associated with public speaking, exam nerves and
even civil disturbances of living in Northern Ireland in the 1970s. Studies consistently demonstrate
BBs may be even more effective when used with other drugs such as BZs (Hayes and Schulz 1987).
Therefore, drug combination therapy with BBs and BZs may be the best way to treat the
physiological symptoms of stress for most people.
One limitation of drug therapy is side effects. BZs can cause breathing problems and paradoxical
reactions (opposite effects) e.g. impulsive behaviours and uncontrollable emotions (Gaind and
Jacoby 1978). BBs may reduce heart rate and blood pressure too much in some people, and so are
not suitable for people with diabetes or severe depression. Therefore side effects are problematic because a person may stop taking the drug, making it ineffective.
Drugs have costs because of side effects, and dependency is an issue because BZs are addictive with
long-term use. Drugs also do not offer a cure for anxiety/stress. However, there are benefits because
they give short-term relief, which means psychological therapies can be used. They are also cost-
effective and non-disruptive. Therefore the benefits outweigh the costs as long as anti-anxiety drugs
are only used to relieve short-term stress.
Page 177
1. Stress inoculation therapy is a form of cognitive behaviour therapy which is based on the idea that
the way in which people cognitively appraise a situation determines their stress level. In contrast,
drug therapy works on the idea that the best way to tackle stress is to target the physiological
effects associated with it.
2. There are three phases involved in SIT. In the conceptualisation phase, the client and therapist
work together to identify and understand the stressors the client faces. The client is educated about
the nature of stress and its effects. There should be a warm and collaborative rapport between
therapist and client.
In the skills acquisition and rehearsal phase, the client learns skills to cope with stress (e.g.
relaxation, social skills, communication, cognitive restructuring). The major element of skills
acquisition is learning to monitor and use self-talk. The client uses coping self-statements (‘You can
do this!’, ‘Stick to the plan!’) to replace anxious internal dialogue. The client plans in advance how to
cope using their learnt skills when stress occurs.
In the real-life application and follow-through phase, the therapist creates opportunities for the
client to try out skills in a safe environment. Various techniques are used to increase realism of
stressful situations (e.g. role playing, visualisation, virtual reality, mobile apps). Learned skills are
gradually transferred to the real world through homework tasks for the client to deliberately seek
out moderately stressful situations and use their coping skills in everyday life (‘personal
experiments’). The client later feeds back to the therapist for discussion and further work if
necessary.
3. One strength of SIT is its flexibility. SIT incorporates a wide variety of stress management
techniques in the skills acquisition phase. It can be used with individuals, couples, groups and in a
variety of settings. This means techniques can be tailored to specific needs – some skills are more
suitable for elderly people or people with learning difficulties. SIT can even be adapted for use
online. This suggests that SIT is so flexible it has the potential to be an effective method of managing
any form of stress.
4. Benzodiazepines (BZs, e.g. diazepam) reduce the anxiety associated with stress by reducing central
nervous system (CNS) arousal. They tap into one way the body naturally combats anxiety. The mode
of action of BZs involves GABA, which is a neurotransmitter that inhibits activity of most neurons in
the brain. During synaptic transmission, GABA combines with receptors on the postsynaptic neuron.
This makes it less likely that the postsynaptic neuron will fire so neural activity is slowed. BZs
enhance this natural inhibition, lowering CNS activity even further.
One strength of BZs is high-quality research evidence that shows they are effective. In a double-blind
placebo-controlled trial, half the participants take a placebo (an inactive version of the drug) but
neither they nor the researcher knows who is taking it. A review of high-quality studies by Baldwin et
al. (2013) concluded there is good evidence that BZs are significantly more effective than placebo in
reducing acute anxiety. This is strong evidence that BZs are a good choice of drug treatment for
people wishing to reduce anxiety, at least in the short term.
One limitation of all drug therapies is side effects. Side effects of anti-stress drugs include
drowsiness, respiration problems and paradoxical reactions (opposite outcomes to ones you expect
from treatment, e.g. impulsive behaviours, uncontrollable emotional responses). BBs also reduce
heart rate and blood pressure too much in some individuals and are not suitable for people with
diabetes or severe depression. A person might stop taking the drug because of side effects, so
anxiety symptoms return. This means that side effects need to be carefully weighed up against the
benefits of the drug, and also against alternatives including psychological therapies (e.g. stress
inoculation therapy).
Stress inoculation therapy is a form of cognitive behaviour therapy which is based on the idea that
the way in which people cognitively appraise a situation determines their stress level. There are
three phases involved in SIT: conceptualisation (the client and therapist work together to identify
and understand stressors the client faces), skills acquisition and rehearsal (the client learns skills to
cope with stress, e.g. relaxation, social skills, communication, cognitive restructuring), and real-life
application and follow-through (the therapist creates opportunities for the client to try out skills in a
safe environment).
One strength of SIT is its flexibility. SIT incorporates a wide variety of stress management techniques
in the skills acquisition phase. It can be used with individuals, couples, groups and in a variety of
settings. This means techniques can be tailored to specific needs – some skills are more suitable for
elderly people or people with learning difficulties. SIT can even be adapted for use online. This
suggests that SIT is so flexible it has the potential to be an effective method of managing any form of
stress.
One limitation of SIT is that it is a very demanding therapy. Clients must make big commitments of
time and effort and be highly motivated. Training can be lengthy and involves self-reflection and
learning new skills. Applying SIT techniques to real life is especially challenging – some people find it
difficult to use coping self-statements when experiencing the anxiety of a stressful situation. These
demands, and the sense of failure when coping attempts do not succeed, mean some people don’t continue treatment, making it unsuccessful
in many cases.
Page 179
1. The main difference concerns the role of cognitive factors. Stress inoculation therapy is a form of
cognitive behaviour therapy which is based on the idea that the way in which people cognitively
appraise a situation determines their stress level. Biofeedback trains people to control involuntary
physiological processes (e.g. heart rate, muscle tension). The client is connected to a machine which
converts physiological activity into visual and/or auditory signals. There is no emphasis placed on
cognitive factors.
2. Biofeedback trains people to control involuntary physiological processes (e.g. heart rate, muscle
tension) by connecting them to machines that give visual and/or auditory feedback of the processes
(e.g. a tone representing muscular tension).
Phase 1 of training is an educational phase with a lot of input from the trainer/therapist. The client
learns to become aware of their physiological responses.
In Phase 2 the client applies learned stress management techniques. The client monitors the effect
of changes – for example, they can see that changed breathing causes a change on a visual display in
the desired direction (e.g. altering the line of the graph). Biofeedback from the machine is rewarding
and reinforces the client’s behaviour, making further success more likely (i.e. operant conditioning).
In Phase 3, once the client becomes aware of their physiological response and how to control it (e.g.
reducing heart rate), they transfer control to everyday life. They practise stress management
techniques in stressful situations rather than in the therapy room.
3. One strength of biofeedback is research evidence. Lemaire et al. (2011) trained medical doctors to
use a biofeedback device three times a day over a 28-day period. The doctors also completed a
questionnaire measuring perception of how stressed they were. Mean stress score for biofeedback
users fell significantly over the course of the study. The corresponding score for a control group also
fell but by a much smaller amount. This suggests biofeedback has benefits in helping to improve the
psychological state of someone experiencing stress.
One limitation of biofeedback is that its effectiveness depends on what is measured. Lemaire et al. (2011)
found that biofeedback had very little effect on objective, physiological indicators of the stress
response (e.g. blood pressure) – no more so than placebo. Therefore the effectiveness of
biofeedback depends on the outcome measure – what it is you actually aim to ‘treat’.
4. Benzodiazepines (BZs, e.g. diazepam) reduce the anxiety associated with stress by reducing central
nervous system (CNS) arousal. They tap into one way the body naturally combats anxiety. The mode
of action of BZs involves GABA, which is a neurotransmitter that inhibits activity of most neurons in
the brain. During synaptic transmission, GABA combines with receptors on the postsynaptic neuron.
This makes it less likely that the postsynaptic neuron will fire so neural activity is slowed. BZs
enhance this natural inhibition, lowering CNS activity even further.
One strength of BZs is high-quality research evidence that shows they are effective. In a double-blind
placebo-controlled trial, half the participants take a placebo (an inactive version of the drug) but
neither they nor the researcher knows who is taking it. A review of high-quality studies by Baldwin et
al. (2013) concluded there is good evidence that BZs are significantly more effective than placebo in
reducing acute anxiety. This is strong evidence that BZs are a good choice of drug treatment for
people wishing to reduce anxiety, at least in the short term. This is strong support for Parveneh’s
statement that drugs are effective for ‘most people’.
However, one limitation of all drug therapies is side effects. Side effects of anti-stress drugs include
drowsiness, respiration problems and paradoxical reactions (opposite outcomes to ones you expect
from treatment, e.g. impulsive behaviours, uncontrollable emotional responses). BBs also reduce
heart rate and blood pressure too much in some individuals and are not suitable for people with
diabetes or severe depression. A person might stop taking the drug because of side effects, so
anxiety symptoms return. This means that Parveneh needs to carefully weigh up the side effects
against the benefits of the drug, and also against alternatives including psychological therapies (e.g.
biofeedback).
Biofeedback trains people to control involuntary physiological processes (e.g. heart rate, muscle
tension). In Phase 1 the client is connected to a machine which converts physiological activity into
visual and/or auditory signals. In Phase 2 the client applies learned stress management techniques.
The client monitors the effect of changes – for example, they can see that changed breathing causes
a change on a visual display in the desired direction (e.g. altering the line of the graph). In Phase 3,
once the client becomes aware of their physiological response and how to control it (e.g. reducing
heart rate), they transfer control to everyday life.
One strength of biofeedback is research evidence. Lemaire et al. (2011) trained medical doctors to
use a biofeedback device three times a day over a 28-day period. The doctors also completed a
questionnaire measuring perception of how stressed they were. Mean stress score for biofeedback
users fell significantly over the course of the study. The corresponding score for a control group also
fell but by a much smaller amount. This suggests biofeedback has benefits in helping to improve the
psychological state of someone experiencing stress. Percy might well benefit from biofeedback given
that he had the necessary drive and perseverance to succeed with it.
One limitation of biofeedback is that its effectiveness depends on what is measured. Lemaire et al. (2011)
found that biofeedback had very little effect on objective, physiological indicators of the stress
response (e.g. blood pressure) – no more so than placebo. Therefore the effectiveness of
biofeedback depends on the outcome measure – what it is you actually aim to ‘treat’. So everything
depends on what Percy means by ‘it worked’. Perhaps he meant that biofeedback helped make him
‘feel better’. But the effects on stress-related risk factors for CVD are much less clear.
Page 181
1. Men tend to use problem-focused methods. Lazarus and Folkman (1984) suggest problem-
focused methods reduce stress by tackling the root causes in a direct, practical and rational way. For
example, taking control to remove or escape from stress, learning new skills such as time
management or relaxation techniques.
Women tend to use emotion-focused methods. Lazarus and Folkman suggest emotion-focused
methods reduce stress indirectly by tackling the anxiety associated with a stressor. For example,
various forms of avoidance such as keeping busy and using cognitive appraisal to think about the
stressor more positively.
2. Peterson et al. (2006) assessed coping strategies of men and women diagnosed as infertile – using
several measures, including the Ways of Coping questionnaire. Men are more likely to use planful
problem-solving – a feature of a problem-focused approach. Women are more likely to accept blame
and use various avoidance tactics – characteristics of an emotion-focused approach.
Taylor et al. (2000) argue from an evolutionary perspective that fight or flight is disadvantageous for
females because confronting or fleeing from a predator makes it hard to protect one’s offspring.
They argue that a different response has evolved in females – tend and befriend. Tending involves
protecting, calming and nurturing offspring, and blending in with the environment. Befriending involves
seeking support from social networks at times of stress in order to cope.
Oxytocin is mainly a female hormone. It promotes feelings of goodwill and affiliation with others,
and helps the body recover more quickly from physiological effects of stressors. Taylor et al. (2002)
found higher levels of oxytocin linked with lower cortisol levels only in female participants. The
female sex hormone oestrogen increases the effects of oxytocin, but male hormones (e.g.
testosterone) reduce the effects – so oxytocin effects are stronger in women, creating a reduced
stress response.
3. One limitation is that there is no clear distinction between the coping strategies that men and
women are thought to use. Many studies find more gender similarities than differences. Peterson et al. (2006) found that men and women often use strategies that are hard to
categorise as problem-focused or emotion-focused. For example, seeking social support from others
can be classified as either or both. Both genders use social support a lot, sometimes to seek
information (problem-focused) and sometimes to help them feel better (emotion-focused).
Therefore the distinction between emotion- and problem-focused strategies is unworkable and it
is not valid to conclude that women mostly use one and men the other.
Another limitation is that many studies use retrospective recall. Participants have to recall which
methods they have used in the past to cope with stress. According to de Ridder (2000), women only
appear to use emotion-focused strategies more because they recall doing so more often than men.
When a concurrent method of recall is used (in which participants report their strategies at regular
intervals during the day), the gender difference disappears. This means the gender difference in use
of coping strategies is an illusion that depends on what participants can remember.
4. Men tend to use problem-focused methods. Lazarus and Folkman (1984) suggest problem-
focused methods reduce stress by tackling the root causes in a direct, practical and rational way. For
example, taking control to remove or escape from stress, learning new skills such as time
management or relaxation techniques. Women tend to use emotion-focused methods. Lazarus and
Folkman suggest emotion-focused methods reduce stress indirectly by tackling the anxiety
associated with a stressor. For example, various forms of avoidance such as keeping busy and using
cognitive appraisal to think about the stressor more positively.
Peterson et al. (2006) assessed coping strategies of men and women diagnosed as infertile – using
several measures, including the Ways of Coping questionnaire. Men are more likely to use planful
problem-solving – a feature of a problem-focused approach. Women are more likely to accept blame
and use various avoidance tactics – characteristics of an emotion-focused approach.
Taylor et al. (2000) argue from an evolutionary perspective that fight or flight is disadvantageous for
females because confronting or fleeing from a predator makes it hard to protect one’s offspring.
They argue that a different response has evolved in females – tend and befriend. Tending involves
protecting, calming and nurturing offspring, and blending in with the environment. Befriending involves
seeking support from social networks at times of stress in order to cope.
One limitation is that there is no clear distinction between the coping strategies that men and
women are thought to use. Many studies find more gender similarities than differences. Peterson et al. (2006) found that men and women often use strategies that are hard to
categorise as problem-focused or emotion-focused. For example, seeking social support from others
can be classified as either or both. Both genders use social support a lot, sometimes to seek
information (problem-focused) and sometimes to help them feel better (emotion-focused).
Therefore the distinction between emotion- and problem-focused strategies is unworkable and it
is not valid to conclude that women mostly use one and men the other.
Another limitation is that many studies use retrospective recall. Participants have to recall which
methods they have used in the past to cope with stress. According to de Ridder (2000), women only
appear to use emotion-focused strategies more because they recall doing so more often than men.
When a concurrent method of recall is used (in which participants report their strategies at regular
intervals during the day), the gender difference disappears. This means the gender difference in use
of coping strategies is an illusion that depends on what participants can remember.
A strength of the ‘tend and befriend’ concept is evidence to support it. Tamres et al. (2002) found
women were significantly more likely than men to seek social support – a central part of the
befriending response to stress. Women are more likely to create, maintain and use social networks
to promote caring for others (mainly offspring), which means that they are likely to receive support
from others at times of stress that reduces its negative impact. This suggests that there are gender
differences in social support/tend and befriend, with this response being more prevalent in females.
On the other hand, fight or flight may sometimes be a more adaptive response for females than tend
and befriend. Protecting offspring is a complex task that benefits from the ability to respond flexibly.
It is adaptive for females sometimes to be aggressive to protect offspring. Similarly, men can use
tend and befriend as a coping response in situations where it is more adaptive than fight or flight.
This suggests that a strict gender distinction in the use of tend and befriend is actually blurred and
complex.
Page 183
1. Schaefer et al. (1981) suggest instrumental support could be: physically doing something (e.g.
giving someone a lift to the hospital); providing information (e.g. telling someone what you know
about stress).
Emotional support is what we provide when we say ‘I really feel for you’, or ‘I’m sorry you’re going
through such a tough time’ – it expresses warmth, concern, affection, empathy and love.
Esteem support is when we reinforce someone’s faith in themselves and their belief in their ability
to tackle a stressful situation. Increasing their confidence in themselves reduces feelings of stress.
2. Cohen et al.’s (2015) investigation into hugs as a form of social support showed that the
participants who experienced the most stress (interpersonal conflicts such as arguments) were most
likely to become ill (after being exposed to a common cold virus). Those who perceived they had
greater social support had a significantly reduced risk of illness. Hugs accounted for up to one-third
of the protective effect of social support. Participants who had the most frequent hugs were less
likely to become infected (or symptoms were less severe). This shows that perceived social support
is a buffer against stress.
Fawzy et al. (1993) studied patients with malignant melanoma (skin cancer) and showed that when
they received support from a group for just six weeks (one session a week), six years later they had
better NK cell functioning and were more likely to be alive and free of cancer compared with
patients in a control group.
3. One strength is research evidence to confirm the beneficial effects of social support. A wealth of
research links various forms of social support with well-being, and absence of support with illness.
Fawzy et al. (1993) studied patients with malignant melanoma (skin cancer) and showed that when
they received support from a group for just six weeks (one session a week), six years later they had
better NK cell functioning and were more likely to be alive and free of cancer compared with
patients in a control group. This shows that beneficial effects of social support can be substantial and
long-lasting. The validity of these findings is greater because the study was well-controlled and
prospective (social support predicted outcome several years on).
One limitation is that social support does not benefit men and women equally. Research shows women
and men benefit from social support but in different ways. It depends on the type of social support.
Luckow et al.’s (1998) review of studies showed that women used emotional support much more
than men, but men did use instrumental support more. This suggests that men may only benefit
from the support of others in certain circumstances.
4. Schaefer et al. (1981) suggest instrumental support could be physically doing something (e.g.
giving someone a lift to the hospital), providing information (e.g. telling someone what you know
about stress). Emotional support is what we provide when we say ‘I really feel for you’, or ‘I’m sorry
you’re going through such a tough time’ – it expresses warmth, concern, affection, empathy and
love. Esteem support is when we reinforce someone’s faith in themselves and their belief in their
ability to tackle a stressful situation. Increasing their confidence in themselves reduces feelings of
stress.
Cohen et al. (2015) telephoned healthy adult participants every evening for 14 consecutive days to
report how many hugs they’d received that day. They also completed a questionnaire on perceived
social support. Researchers placed participants in quarantine, exposed them to a common cold virus
and monitored them for illness (stress acts as an immunosuppressant so we expect people who are
more stressed to become ill).
Participants who experienced the most stress (interpersonal conflicts such as arguments) were most
likely to become ill. Those who perceived they had greater social support had a significantly reduced
risk of illness – hugs accounted for up to one-third of the protective effect of social support.
Participants who had the most frequent hugs were less likely to become infected (or symptoms were
less severe). This shows that perceived social support is a buffer against stress.
One strength is research evidence to confirm the beneficial effects of social support. A wealth of
research links various forms of social support with well-being, and absence of support with illness.
Fawzy et al. (1993) studied patients with malignant melanoma (skin cancer) and showed that when
they received support from a group for just six weeks (one session a week), six years later they had
better NK cell functioning and were more likely to be alive and free of cancer compared with
patients in a control group. This shows that beneficial effects of social support can be substantial and
long-lasting. The validity of these findings is greater because the study was well-controlled and
prospective (social support predicted outcome several years on).
One limitation is that social support does not benefit men and women equally. Research shows women
and men benefit from social support but in different ways. It depends on the type of social support.
Luckow et al.’s (1998) review of studies showed that women used emotional support much more
than men, but men did use instrumental support more. This suggests that men may only benefit
from the support of others in certain circumstances.
Another limitation is that support can have negative effects. Emotional support from friends,
relatives and online sources is usually welcomed, but instrumental support from these sources
can be unreliable. Even emotional support from a friend/relative can be unhelpful, e.g. they go with
us to a hospital appointment and we feel more anxious. This suggests that social support is not
universally beneficial but depends on many factors.
A final limitation is that social support may be less beneficial than personality characteristics such as
hardiness. According to Kobasa (1979), commitment, challenge and control are characteristics of the
hardy person which do not depend on support from others. Hardiness is also a more reliable buffer
against stress because social support can backfire and have negative effects. Some people may also
find it easier to develop their hardiness than to acquire a circle of friends to offer support. Therefore
social support has an important role to play in coping with stress but its value may have been
exaggerated.
Chapter 11 Aggression
Page 185
1. Papez (1937) and MacLean (1952) linked the limbic system to emotions, e.g. aggression. The
system includes the hypothalamus, amygdala and parts of the hippocampus. Speed and sensitivity of
limbic system responses to stimuli are important predictors of aggressive behaviour in humans. The
amygdala in particular plays a key role in how we assess and respond to environmental threats.
Scans have shown that aggressive reactions are associated with a fast and heightened response by
the amygdala.
Normal levels of serotonin in the orbitofrontal cortex are inhibitory: they are linked with reduced firing of neurons and greater behavioural self-control. Decreased serotonin disturbs this mechanism, reducing self-control and increasing impulsive behaviours, including aggression (Denson et al. 2012). Virkkunen et al. (1994) compared levels of a serotonin metabolite (5-HIAA) in the cerebrospinal fluid of violent impulsive and non-impulsive offenders. Levels were significantly lower in the impulsive offenders, implying disrupted serotonin functioning.
2. Testosterone is a hormone responsible for the development of masculine features. It helps regulate
social behaviour via influence on areas of the brain involved in aggression. Dolan et al. (2001) found a
positive correlation between testosterone levels and aggressive behaviours in male offenders in UK
maximum security hospitals. Most offenders had personality disorders (e.g. psychopathy) and had
histories of impulsively violent behaviour.
Animal studies (Giammanco et al. 2005) show that experimental increases in testosterone are related to
aggressive behaviour. The converse is also true – a decrease in testosterone leads to a reduction in aggression in castration studies.
3. One limitation is that the neural (limbic system) explanation excludes other possibilities. Limbic
structures like the amygdala function in tandem with the non-limbic orbitofrontal cortex (OFC) to
maintain self-control and inhibit aggression. Coccaro et al. (2007) showed OFC activity is reduced in
people with psychiatric disorders that feature aggression. This shows that the neural regulation of
aggression is more complex than theories focusing on the amygdala suggest.
However, there is supporting evidence for the role of serotonin. Research shows drugs that increase
serotonin activity also reduce levels of aggressive behaviour. Berman et al. (2009) found that participants
given a serotonin-enhancing drug called paroxetine gave fewer and less intense electric shocks to a
confederate than people in a placebo group. This was only true of participants who had a prior history of
aggressive behaviour, but is evidence of a link between serotonin function and aggression that goes
beyond correlational findings.
4. Papez (1937) and MacLean (1952) linked the limbic system to emotions, e.g. aggression. The system
includes the hypothalamus, amygdala and parts of the hippocampus. Speed and sensitivity of limbic
system responses to stimuli are important predictors of aggressive behaviour in humans. The amygdala
in particular plays a key role in how we assess and respond to environmental threats so disruption to this
part of the brain, for example through damage, might account for Petra’s aggression. Scans have shown
that aggressive reactions (in Petra’s case lashing out) are associated with a fast and heightened response by the amygdala.
One limitation is that the neural (limbic system) explanation excludes other possibilities. Limbic
structures like the amygdala function in tandem with the non-limbic orbitofrontal cortex (OFC) to
maintain self-control and inhibit aggression. Coccaro et al. (2007) showed OFC activity is reduced in
people with psychiatric disorders that feature aggression. This shows that the neural regulation of aggression is more complex than theories focusing on the amygdala suggest: it cannot be explained by the limbic system alone, so the limbic system is unlikely to be the sole explanation of Petra’s behaviour.
However, there is supporting evidence for the role of serotonin. Research shows drugs that increase
serotonin activity also reduce levels of aggressive behaviour. Berman et al. (2009) found that participants
given a serotonin-enhancing drug called paroxetine gave fewer and less intense electric shocks to a
confederate than people in a placebo group. This was only true of participants who had a prior history of
aggressive behaviour, but is evidence of a link between serotonin function and aggression that goes
beyond correlational findings. It is possible that Petra has a serotonin dysfunction given that she is also
depressed and has sleep problems, both of which have been linked with serotonin.
Testosterone is a hormone responsible for the development of masculine features. It helps regulate
social behaviour via influence on areas of the brain involved in aggression. Dolan et al. (2001) found a
positive correlation between testosterone levels and aggressive behaviours in male offenders in UK
maximum security hospitals. Most offenders had personality disorders (e.g. psychopathy) and had
histories of impulsively violent behaviour. Whilst this study was conducted on males only, it would be
reasonable to suggest that the hormone might have at least a similar effect on women and may be one
explanation for Petra’s behaviour.
However, evidence for the role of testosterone in human aggression is mixed as some research shows
other hormones have a significant role, too. Carré and Mehta’s (2011) dual-hormone hypothesis claims
high testosterone leads to aggression but only when cortisol is low – high cortisol blocks its influence on
aggressive behaviour. So the combined action of testosterone and cortisol might actually be a better
explanation for Petra’s aggression than testosterone alone.
Reaching conclusions about Petra’s aggressive behaviour is difficult given that much of the research into
neural and hormonal mechanisms is based on non-human studies. Aggression in Petra is more complex
than it is in rats or even monkeys. Cognitive factors are likely to play an important role in Petra’s
aggressive behaviour, unlike in the case of animals. Therefore, animal studies can help us understand
hormonal and neural influences on aggression but findings must be treated cautiously because human
aggression is more complex.
Page 187
1. Twin studies show genetic factors account for about 50% of variance in aggressive behaviour. Coccaro
et al. (1997) studied adult male monozygotic (MZ) and dizygotic (DZ) twins. For direct physical
aggression, the researchers found concordance rates of 50% for MZ twins and 19% for DZs. For verbal
aggression, the figures were 28% for MZ twins and 7% for DZ twins.
A dysfunction in the operation of the MAOA gene may lead to abnormal activity of the MAOA enzyme,
which affects levels of serotonin (low levels of this are linked to aggression). Brunner et al. (1993) found
that 28 male members of a Dutch family involved in impulsively violent behaviour possessed the low-
activity variant of the MAOA gene (MAOA-L).
Frazzetto et al. (2007) found an association between antisocial aggression and the MAOA-L gene variant
in adult males but only in those who experienced significant trauma (e.g. sexual or physical abuse) during
the first 15 years of life. Those who had not experienced trauma were not especially aggressive as adults
even if they possessed the MAOA-L gene variant.
2. Monoamine oxidase A (MAOA) is an enzyme which ‘mops up’ neurotransmitters after a nerve impulse
has been transmitted between neurons. It breaks down the neurotransmitter (e.g. serotonin) into
constituent chemicals to be recycled or excreted. Production of this enzyme is determined by the MAOA
gene and a dysfunction in the operation of this gene may lead to abnormal activity of the MAOA enzyme,
which affects levels of serotonin (low levels of this are linked to aggression). Specifically, Lea and Chambers (2007) linked aggression to inheritance of the low-activity variant of the MAOA gene (MAOA-L).
Brunner et al. (1993) found that 28 male members of a Dutch family involved in impulsively violent
behaviour possessed the MAOA-L gene variant. Frazzetto et al. (2007) found an association between
antisocial aggression and the MAOA-L gene variant in adult males but only in those who experienced
significant trauma (e.g. sexual or physical abuse) during the first 15 years of life. Those who had not
experienced trauma were not especially aggressive as adults even if they possessed the MAOA-L gene
variant.
3. One problem with interpreting genetic research is that it is difficult to separate genetic and
environmental factors. For example, Frazzetto et al. (2007) found an association between antisocial
aggression and the MAOA-L gene variant in adult males but only in those who experienced significant
trauma (e.g. sexual or physical abuse) during the first 15 years of life. Those who had not experienced
trauma were not especially aggressive as adults even if they possessed the MAOA-L gene variant. This
may explain why it has proven surprisingly difficult to identify the precise genetic mechanisms involved
in aggression.
Another limitation is that the mechanism of the MAOA-serotonin-aggression link is unclear. Research has
linked aggression with low levels of serotonin. But the MAOA-L gene variant causes low activity of the
MAOA enzyme, which in turn should lead to higher serotonin. This is because the low-activity variant
does not deactivate serotonin, leaving more serotonin in the synapse for transmission. So it is more
accurate to say that serotonin levels are disrupted in people with the MAOA-L variant. This shows that
the link between the MAOA gene, serotonin and aggression is not yet fully understood.
4. Twin studies show genetic factors account for about 50% of variance in aggressive behaviour. Coccaro
et al. (1997) studied adult male monozygotic (MZ) and dizygotic (DZ) twins. For direct physical
aggression, the researchers found concordance rates of 50% for MZ twins and 19% for DZs. For verbal
aggression the figures were 28% for MZ twins and 7% for DZ twins. From this we might suggest that
Estelle’s mum is correct and her father’s genes have played a part in Estelle’s aggressive behaviour.
Adoption studies show genetic factors account for about 41% of variance in aggressive behaviour.
Similarities in aggressive behaviour between an adopted child and their biological parents suggest genetic influences are operating, but similarities with adoptive parents suggest environmental factors. This
reminds us that the genetic role in Estelle’s aggression is only part of the story and her environment will
also have played its part.
The mechanism through which Estelle might have inherited this characteristic is abnormal activity of the
MAOA enzyme, which affects levels of serotonin (low levels of this are linked to aggression). Brunner et
al. (1993) found that 28 male members of a Dutch family involved in impulsively violent behaviour
possessed the low-activity variant of the MAOA gene. Estelle’s father may have passed on this low-activity MAOA gene variant.
One problem with interpreting genetic research is that it is difficult to separate genetic and
environmental factors. For example, Frazzetto et al. (2007) found an association between antisocial
aggression and the MAOA-L gene variant in adult males but only in those who experienced significant
trauma (e.g. sexual or physical abuse) during the first 15 years of life. Those who had not experienced
trauma were not especially aggressive as adults even if they possessed the MAOA-L gene variant. There
is no evidence that this was the cause of Estelle’s father’s violence, but it may explain why it has proven
surprisingly difficult to identify the precise genetic mechanisms involved in aggression.
Another limitation is that the mechanism of the MAOA-serotonin-aggression link is unclear. Research has
linked aggression with low levels of serotonin. But the MAOA-L gene variant causes low activity of the
MAOA enzyme, which in turn should lead to higher serotonin. This is because the low-activity variant
does not deactivate serotonin, leaving more serotonin in the synapse for transmission. So it is more
accurate to say that serotonin levels are disrupted in people with the MAOA-L variant. This shows that
the link between the MAOA gene, serotonin and aggression is not yet fully understood.
Finally, another limitation of genetic research is problems with the validity of twin studies. In every pair
of twins, both individuals share the same environment (because each pair is raised together). But DZ
twins may not share their environment to the same extent that MZ twins share theirs. Yet twin research
assumes they do – this is the equal environments assumption. The assumption may be wrong because
one aspect of the environment is the way twins are treated by others. MZ twins are treated very
similarly, especially by parents (e.g. praising them equally for being aggressive). DZs are treated in less
similar ways. This means that concordance rates are inflated and genetic influences on aggression may
not be as great as twin studies suggest.
Page 189
1. One difference is where each comes in the chain of environmental influence on behaviour.
An innate releasing mechanism (IRM) is a built-in physiological process or structure (e.g. a network of
neurons in the brain). It acts as a ‘filter’ to identify threatening stimuli in the environment and is
activated by an environmental stimulus.
An IRM triggers a fixed action pattern (FAP). A FAP is a relatively unchanging behavioural sequence
(ritualistic) found in every individual of a species (universal) and follows an inevitable course which
cannot be altered before it is completed (ballistic).
2. An innate releasing mechanism (IRM) is a built-in physiological process or structure (e.g. a network of
neurons in the brain). It acts as a ‘filter’ to identify threatening stimuli in the environment and is
activated by an environmental stimulus. For instance, a network of neurons in a region of the brain may
fire in response to seeing an aggressive facial expression. This aggressive environmental stimulus triggers
the IRM which in turn ‘releases’ a highly specific sequence of behaviours called a fixed action pattern
(FAP).
3. One strength of this explanation is support from research by Brunner et al. (1993) which shows the
low-activity variant of the MAOA gene is closely associated with aggressive behaviour in humans,
suggesting an innate biological basis. There is also evidence for IRMs for aggression in the brain – activity
in the limbic system (especially the amygdala) triggers aggressive behaviour in humans and other
animals. As the ethological explanation argues that aggression is genetically determined, its validity is
supported by evidence that demonstrates the genetic and physiological basis of aggression.
However, cultural differences present huge problems for the explanation. Nisbett (1993) found that
when white males from the southern United States were insulted in a research situation, they were more
likely than northern white males to become aggressive. This was only true for reactive aggression
triggered by arguments, so Nisbett concluded the difference was caused by a culture of honour –
impulsive aggression was a learned social norm. It is difficult for ethological theory, with its view of
aggression as instinctive, to explain how culture can override innate influences.
4. The ethological explanation of aggression suggests that it is adaptive in that it reduces competition
and establishes dominance. For example, if a defeated animal is not killed but forced into territory
elsewhere, this reduces competition pressure and also reduces the possibility of starvation because it
may find new resources. Aggression also helps establish dominance hierarchies (e.g. a male
chimpanzee’s dominance gives him special status including mating rights over females).
Much aggression is suggested to be ritualistic. For example, Lorenz (1966) observed that most intra-species
aggression consisted mainly of ritualistic signalling (e.g. displaying teeth) and rarely became physical.
Intra-species aggression usually ends with an appeasement display – this indicates acceptance of defeat
and inhibits aggression in the winner, preventing injury to the loser. This is adaptive because every
aggressive encounter ending with the death of an individual could threaten the existence of the species.
An innate releasing mechanism (IRM) is a built-in physiological process or structure (e.g. a network of
neurons in the brain). It acts as a ‘filter’ to identify threatening stimuli in the environment and is
activated by an environmental stimulus, whereas an FAP is what is triggered by that IRM. The FAP is a
relatively unchanging behavioural sequence (ritualistic) found in every individual of a species (universal)
and follows an inevitable course which cannot be altered before it is completed (ballistic).
One strength of this explanation is support from research by Brunner et al. (1993) which shows the low-
activity variant of the MAOA gene is closely associated with aggressive behaviour in humans, suggesting
an innate biological basis. There is also evidence for IRMs for aggression in the brain – activity in the
limbic system (especially the amygdala) triggers aggressive behaviour in humans and other animals. As
the ethological explanation argues that aggression is genetically determined, its validity is supported by
evidence that demonstrates the genetic and physiological basis of aggression.
However, cultural differences present huge problems for the explanation. Nisbett (1993) found that
when white males from the southern United States were insulted in a research situation, they were more
likely than northern white males to become aggressive. This was only true for reactive aggression
triggered by arguments, so Nisbett concluded the difference was caused by a culture of honour –
impulsive aggression was a learned social norm. It is difficult for ethological theory, with its view of
aggression as instinctive, to explain how culture can override innate influences.
Furthermore, Goodall (2010) observed male chimps from one community systematically slaughter the
members of another group in a co-ordinated and premeditated fashion. This happened despite the
victims offering signals of appeasement and defencelessness – these did not inhibit the aggression of the
attacking chimps as predicted by the ethological explanation. Goodall’s observations challenge the view
of the ethological explanation that aggression has evolved into a self-limiting and relatively physically
harmless ritual.
There is also the question as to whether FAPs are actually fixed. Hunt (1973) points out that
sequences of behaviours that appear to be fixed and unchanging are actually much more influenced
by environmental factors and learning experiences than Lorenz thought. Lorenz believed that aggression is inevitable, as demonstrated by the unchanging and unalterable nature of FAPs, but this is now considered an outdated view. This means that FAPs are more flexible than implied and
many ethologists now prefer the term ‘modal action pattern’ to reflect this. Therefore patterns of
aggressive behaviour are much more flexible than Lorenz thought, especially in humans.
Page 191
1. The evolutionary explanation suggests that men in our evolutionary past who could avoid cuckoldry
(having to raise offspring that are not their own) were more reproductively successful, so psychological
mechanisms have evolved to increase anti-cuckoldry behaviours in men (e.g. sexual jealousy felt more
strongly by men than women). It is, according to evolutionary theory, these mechanisms that drive the
often aggressive mate retention strategies men use to keep their partners and prevent them from
‘straying’ – these were adaptive in our evolutionary history.
2. Sexual jealousy is a key motivator of aggression in males because men – unlike women – can never be
sure that they are really their child’s parent. Men in our evolutionary past who could avoid cuckoldry
(having to raise offspring that are not their own) were more reproductively successful, so psychological
mechanisms have evolved to increase anti-cuckoldry behaviours in men (e.g. sexual jealousy felt more
strongly by men than women). It is, according to evolutionary theory, these mechanisms that drive the
often aggressive mate retention strategies men use to keep their partners and prevent them from
‘straying’ – these were adaptive in our evolutionary history.
Wilson and Daly (1996) identified aggressive mate retention strategies which have evolved in males.
One is direct guarding which involves males monitoring their partner’s behaviour (e.g. checking their
movements). Another is negative inducements, which are threats to harm either the partner (‘I’ll kill
you if you have an affair’) or the self (‘I’ll kill myself if you leave me’). Wilson et al. (1995) found that
such strategies are closely linked to aggression. Men who employed them were twice as likely to
inflict physical violence on their partners as men who did not use them.
3. On the one hand, many research studies demonstrate that mate retention strategies are associated with
sexual jealousy and aggression. For example, Wilson et al. (1995) found that men who employed them
were twice as likely to inflict physical violence on their partners as men who did not use them. This
suggests that aggression may have evolved as an adaptive behaviour for males to achieve goals related
to reproduction.
However, a limitation is that there are wide cultural differences in aggressive behaviour. Aggression
is not universal because there are some cultures where it appears to be almost non-existent. For
example, the !Kung San people have very negative attitudes towards the use of aggression, which is discouraged from childhood in both boys and girls and is therefore rare. Since some
cultures do not show aggressiveness, such behaviour may not necessarily be adaptive.
4. The evolutionary explanation suggests that men are more sexually jealous than women and that is
because of the threat of cuckoldry (having to raise offspring that are not their own). This can be
considered to be a ‘waste of his resources’ because it contributes to survival of a rival’s genes and leaves
the ‘father’ with fewer resources to invest in his own future offspring. As such, aggressive mate retention
strategies (the abuse in this example) were a method that men used in our evolutionary past to keep
their partners faithful. As such, this aggressive behaviour was adaptive and would also be universal
(behaviour seen ‘in lots of different parts of the world’).
Wilson and Daly (1996) identify two major mate retention strategies involving aggression: direct
guarding – a man’s vigilance over a partner’s behaviour (e.g. checking who they’ve been seeing); and
negative inducements (e.g. threats of consequences for infidelity – ‘I’ll kill myself if you leave me.’).
On the one hand many research studies demonstrate mate retention strategies are associated with
sexual jealousy and aggression. For example, Wilson et al. (1995) found that men who employed them
were twice as likely to inflict physical violence on their partners as men who did not use them. This
suggests that aggression may have evolved as an adaptive behaviour for males to achieve goals related
to reproduction.
A limitation is that there are wide cultural differences in aggressive behaviour. Aggression is not
universal because there are some cultures where it appears to be almost non-existent. For example,
the !Kung San people have very negative attitudes towards the use of aggression, which is discouraged from childhood in both boys and girls and is therefore rare. Since some
cultures do not show aggressiveness, such behaviour may not necessarily be adaptive.
However, there is controversy over exactly how harmless the !Kung really are. Lee (1979) describes their homicide rate as high for such a supposedly peaceable people. These contradictory findings may be due to differences in how ‘outsider’ researchers perceive behaviour in other cultures – perceptions that may be biased by their expectations.
Researchers have traditionally viewed bullying as a maladaptive behaviour (e.g. due to poor social skills or childhood abuse), but our evolutionary ancestors may have used it as an adaptive strategy to increase their chances of survival by promoting their own health and creating reproduction opportunities. According to Volk et al. (2012), bullying in men signals dominance (attracting a mate in the example), acquisition of resources and strength, and also wards off potential rivals (keeping a partner in the example).
However, women could be prone to bullying for evolutionary reasons, too. Women use bullying
behaviour to secure a partner’s fidelity, which means he continues to provide resources for future
offspring.
One strength of evolutionary explanations is that they can point us towards ways to reduce bullying.
Several interventions are based on the assumption that addressing a bully’s deficiencies will reduce
their bullying. Yet bullying is still prevalent. Ellis et al. (2016) suggest instead a different strategy
based on the evolutionary view that bullying is adaptive for the bully because they benefit from it.
The ‘meaningful roles’ intervention aims to increase the costs of bullying and the rewards of
prosocial (non-bullying) alternatives. For example, bullies might be given roles in school that provide
them with a different source of status. Therefore viewing bullying as adaptive may present
opportunities for reducing bullying in real-world situations where nothing else has worked.
Page 193
1. Dollard et al.’s (1939) hypothesis suggests that frustration always leads to aggression, and aggression
is always the result of frustration. This is based on the psychodynamic approach – aggression is a
psychological drive similar to biological drives (e.g. hunger) and we experience frustration if our attempt
to achieve a goal is blocked by an external factor. The aggressive drive leads to aggressive
thoughts/behaviour (violent fantasy, verbal outburst, physical violence). Expression of the aggressive
drive in behaviour is cathartic because the aggression created by the frustration is satisfied, which in turn
reduces the drive, making further aggression less likely – we feel better for getting it ‘off our chest’.
Aggression may be expressed indirectly, displaced onto an alternative target that is weaker and is
available (an object, a pet, younger sibling, etc.).
2. Geen (1968) arranged for participants to be insulted as they completed a task. These participants gave
the strongest (fake) electric shocks when they had the opportunity, because they had experienced the
highest level of frustration. A non-frustrated control group gave the lowest level of shock.
Berkowitz and LePage (1967) created frustration in their participants, who gave the highest level of
(fake) shock to a confederate when there were two guns on a nearby table – this is the weapon
effect.
Marcus-Newhall et al.’s (2000) meta-analysis showed that frustrated participants who were
prevented from being aggressive towards the source of frustration were likely to aggress against an
innocent target instead – displaced aggression is a reliable phenomenon.
3. However, Bushman (2002) found that participants who vented their anger by hitting a punch bag
became angrier and more aggressive rather than less. Using venting to reduce anger is like using petrol
to put out a fire. Bushman argues it does not work even for people who believe in its value. In fact, the
better people feel after venting, the more aggressive they are according to Bushman. This casts doubt on
the validity of a central assumption of the hypothesis.
4. Dollard et al.’s (1939) hypothesis suggests that frustration always leads to aggression, and aggression
is always the result of frustration. This is based on the psychodynamic approach – aggression is a
psychological drive similar to biological drives (e.g. hunger) and we experience frustration if our attempt
to achieve a goal is blocked by an external factor. In the case of the students, they were likely to have been aiming for higher grades because they were working very hard. The theory suggests that the aggressive drive leads to aggressive behaviour, but this was only the case for Camilla (she head-butted a wall). As such, Ricardo’s response (going quiet and having another go) is not easily explained by the
theory that says frustration always leads to aggression. According to the theory, Camilla should feel
better afterwards as expression of the aggressive drive in behaviour is cathartic. The aggression created
by the frustration is satisfied, which in turn reduces the drive, making further aggression less likely – she
should feel better for getting it ‘off her chest’.
In this case the aggression is indirect (it wasn’t the wall’s fault!) because the cause of the frustration is abstract (the examining board for creating such tricky questions), too powerful to vent anger at directly (e.g. the teacher who assigned the low mark) or unavailable (e.g. the teacher left before the grade was seen).
Marcus-Newhall et al. (2000) conducted a meta-analysis of 49 studies of displaced aggression to test the
frustration-aggression hypothesis. Participants who were provoked but unable to retaliate directly
against the source of their frustration were significantly more likely to aggress against an innocent party
than people who were not frustrated. This supports a central claim of the hypothesis that frustration
always leads to aggression and is reliably displaced against another target if the true source of frustration
is unavailable.
However, Bushman (2002) found that participants who vented their anger by hitting a punch bag
became angrier and more aggressive rather than less. Using venting to reduce anger is like using petrol
to put out a fire. Bushman argues it does not work even for people who believe in its value. In fact, the
better people feel after venting, the more aggressive they are according to Bushman. This casts doubt on
the validity of a central assumption of the hypothesis.
Another limitation is that the link between frustration and aggression is more complex than the F-A
hypothesis suggested. Both research and everyday experience show that frustration does not always
lead to aggression, and that aggression can occur without frustration. Someone who feels frustrated may
behave in a variety of ways. Camilla behaved aggressively, but Ricardo’s response was quite different.
Helplessness and determination are also potential responses to frustration. This means the F-A
hypothesis lacks validity because it fails to explain why aggression arises in some situations but not in others.
However, recognising this, Berkowitz (1989) reformulated the hypothesis in his negative affect theory,
arguing that frustration is just one of many aversive stimuli that create negative feelings. Aggression is
triggered by negative feelings generally rather than by frustration specifically. The outcome of frustration
can be a range of responses, only one of which is aggression. This is a strength because it highlights the
flexibility of the hypothesis.
Page 195
1. Social learning theory suggests that not only is aggression learned directly through positive and
negative reinforcement (operant conditioning), but also indirectly through observation. It is proposed
that in fact the latter explains most aggressive behaviour. For example, a child will learn through parental
and other models how aggressive behaviour is performed. They also learn the consequences of
aggression in the same indirect way (vicarious reinforcement). Social learning requires attention,
retention, reproduction and motivation.
The frustration-aggression hypothesis suggests that frustration always leads to aggression, and
aggression is always the result of frustration. Aggression is a psychological drive similar to biological
drives and we experience frustration if our attempt to achieve a goal is blocked by an external factor.
The aggressive drive leads to aggressive behaviour. Expression of the aggressive drive in behaviour can
be direct or indirect but is said to be cathartic because the aggression created by the frustration is
satisfied, which in turn reduces the drive making further aggression less likely.
2. Social learning theory tells us that aggression is learned directly through positive and negative
reinforcement (operant conditioning), but also indirectly through observation. Indirect or vicarious
learning in fact explains most aggressive behaviour. For example, a child will learn through parental and
other models how aggressive behaviour is performed. They also learn the consequences of aggression in
the same indirect way (vicarious reinforcement). SLT also considers the cognitive requirements for
aggression and suggests first that attention to a model’s aggressive action is necessary. The observer
must then retain or remember that behaviour, reproduce it and finally be motivated to repeat it. Our
self-efficacy in relation to aggression increases every time the behaviour produces rewards.
3. One strength of SLT is support from research studies such as Poulin and Boivin (2000). They found
most aggressive boys (aged 9 to 12 years old) formed friendships with other aggressive boys. Therefore
they were exposed frequently to models of physical aggression (each other) and to its reinforcing
consequences (including rewarding approval). For example, the boys observed each other being
rewarded for using proactive aggression (bullying peers to get what they wanted). This supports the
conditions necessary for imitation predicted by SLT as an explanation for aggression.
However, this study did not find similar outcomes for all types of aggression. It was only proactive
aggression (‘cold-blooded’) that was learnt through social learning processes. The boys were much less
likely to imitate each other’s reactive aggression (i.e. ‘hot-blooded’ outbursts). This may be because the
consequences of reactive aggression, being unpredictable, are less reinforcing – a boy who
uses angry and impulsive aggression may find himself on the receiving end of the same. Therefore SLT is
limited because it is a relatively weak explanation of reactive aggression.
4. Social learning theory suggests that not only is aggression learned directly through positive and
negative reinforcement (operant conditioning), but also indirectly through observation. If Tabitha is
exposed regularly to aggressive behaviour amongst her friends, then she will learn vicariously. She will
also be learning the consequences of aggression in the same indirect way (vicarious reinforcement) so if
they are experiencing positive consequences she will be learning that aggression is rewarding.
In terms of the cognitive element of the theory, Tabitha is certainly paying attention to the models (the
girls she is hanging around with) and she finds them ‘interesting and exciting’. This suggests that not only
has she remembered their behaviour, but since she is also excited by it she is showing clear signs of
motivation to repeat it and become aggressive herself – she will therefore just need the ability to
reproduce the aggressive behaviour in order to meet all the conditions. Finally, SLT suggests that repeated exposure to aggression means that self-efficacy increases every time the behaviour produces rewards, hence Tabitha’s growing confidence.
One strength of SLT is support from research studies such as Poulin and Boivin (2000). They found most
aggressive boys (aged 9 to 12 years old) formed friendships with other aggressive boys. Therefore they
were exposed frequently to models of physical aggression (each other) and to its reinforcing
consequences (including rewarding approval). For example, the boys observed each other being
rewarded for using proactive aggression (bullying peers to get what they wanted). This supports the
conditions necessary for imitation predicted by SLT as an explanation for aggression, which are similar to
the conditions of Tabitha’s situation.
However, this study did not find similar outcomes for all types of aggression. It was only proactive
aggression (‘cold-blooded’) that was learnt through social learning processes. The boys were much less
likely to imitate each other’s reactive aggression (i.e. ‘hot-blooded’ outbursts). This may be because the
consequences of reactive aggression, being unpredictable, are less reinforcing – a boy who
uses angry and impulsive aggression may find himself on the receiving end of the same. Therefore SLT is
limited because it is a relatively weak explanation of reactive aggression. Even so, as bullying tends to
use more proactive aggression, SLT still provides a valid explanation of Tabitha’s experience.
A strength of SLT is that it acknowledges that Tabitha is not a passive recipient of reinforcement – she is
shaping her own aggressive behaviour by choosing situations which reward aggression, in other words by
remaining with her current friendship group. A way to reduce aggression is to break this cycle by
encouraging aggressive children to form friendships with children who do not habitually behave
aggressively. The same social learning processes that would otherwise lead Tabitha into aggressive
behaviour can be harnessed in a more constructive direction as she imitates the behaviour of rewarded
non-aggressive models. Therefore SLT provides a practical benefit of understanding Tabitha’s aggression
and encouraging her to break the cycle she is in by spending more time with other friends.
Page 197
1. The de-individuation explanation suggests that aggression occurs as a result of environmental factors,
such as crowds or darkness, which reduce the constraints we place on our aggression, whereas SLT
suggests we learn the behaviour from our own direct reinforcement and the vicarious reinforcement of
models around us. In de-individuation, the process involves believing that personal responsibility for any
aggression is shared, for example in a crowd, whereas SLT suggests a more complex cognitive process of
attention, retention, reproduction and motivation.
2. De-individuation occurs when we become part of a crowd, lose restraint and behave in ways we
otherwise would not. We might become aggressive because we lose our sense of self-identity, disregard
social norms and no longer feel responsible for our behaviour. The responsibility is shared throughout
the crowd, so we feel less guilt about being aggressive because we are not personally responsible.
Zimbardo (1969) argued that de-individuated behaviours are emotional, impulsive and anti-
normative. In this state we ‘live for the moment’, fail to plan and stop monitoring our behaviour.
Aggression is promoted by conditions of de-individuation such as darkness and masks because they
provide anonymity.
As a de-individuated part of a faceless crowd, we are more likely to become aggressive because our
private self-awareness is reduced. Our attention becomes focused on events around us and we think
less about our own feelings, becoming less self-critical. Our public self-awareness is also reduced.
We believe we can behave aggressively because we are less likely to be judged by others so we do
not care how others see us.
3. There is certainly research support for the de-individuation theory. For example, Douglas and McGarty
(2001) looked at aggressive online behaviour in chatrooms and use of instant messaging. They found a
strong correlation between anonymity and ‘flaming’ (posting hostile messages). The most aggressive
messages were sent by those who hid their identities. This supports a link between anonymity, de-
individuation and aggressive behaviour in a context that has even greater relevance today with the
possibilities of social media.
Another strength is that de-individuation can explain the surprisingly aggressive behaviour of ‘baiting
crowds’. Mann (1981) investigated cases of ‘suicidal jumpers’ (e.g. people jumping from buildings and
bridges). He found 21 examples in US newspapers of crowds gathering to encourage (‘bait’) people to
jump, often in very aggressive ways. This was more likely to happen when the conditions matched those
predicted by de-individuation theory, e.g. when it was dark, the crowds were large and the jumpers were
distant. This suggests there is validity to the idea that people can become aggressive as part of a de-
individuated faceless crowd.
However, some research shows that the conditions for de-individuation do not necessarily lead to
aggression. In their ‘deviance in the dark’ study, Gergen et al. (1973) put strangers in a darkened room and
told them to do what they wanted as they could not be identified and would never meet again. They
soon started kissing and touching each other. Despite a guarantee of anonymity creating the conditions
for de-individuation, aggressive behaviour was not an outcome of this study and de-individuation cannot
explain why this was not the case.
4. Social learning theory tells us that aggression is learned directly through positive and negative
reinforcement (operant conditioning), but also indirectly through observation. Indirect or vicarious
learning in fact explains most aggressive behaviour. For example, a child will learn through parental and
other models how aggressive behaviour is performed. They also learn the consequences of aggression in
the same indirect way (vicarious reinforcement). Social learning requires attention, retention,
reproduction and motivation.
One strength of SLT is support from research studies such as Poulin and Boivin (2000). They found most
aggressive boys (aged 9 to 12 years old) formed friendships with other aggressive boys. Therefore they
were exposed frequently to models of physical aggression (each other) and to its reinforcing
consequences (including rewarding approval). For example, the boys observed each other being
rewarded for using proactive aggression (bullying peers to get what they wanted). This supports the
conditions necessary for imitation predicted by SLT as an explanation for aggression.
However, this study did not find similar outcomes for all types of aggression. It was only proactive
aggression (‘cold-blooded’) that was learnt through social learning processes. The boys were much less
likely to imitate each other’s reactive aggression (i.e. ‘hot-blooded’ outbursts). This may be because the
consequences of reactive aggression, being unpredictable, are less reinforcing – a boy who
uses angry and impulsive aggression may find himself on the receiving end of the same. Therefore SLT is
limited because it is a relatively weak explanation of reactive aggression.
The de-individuation theory argues that aggressive behaviour is usually constrained by social norms but
when we experience de-individuated conditions we lose individual self-identity and responsibility for our
behaviour. For example, when we are in a crowd, responsibility is shared throughout the crowd – we
ignore social norms and experience less personal guilt at harmful aggression directed at others.
Anonymity (in crowds or in the dark) reduces private self-awareness because our attention is focused
outwardly to the events around us. This means we think less about our own beliefs and feelings – we are
less self-critical and evaluative. Anonymity also reduces public self-awareness because we realise we are
anonymous and our behaviour is less likely to be judged by others.
There is certainly research support for the de-individuation theory. For example, Douglas and McGarty
(2001) looked at aggressive online behaviour in chatrooms and use of instant messaging. They found a
strong correlation between anonymity and ‘flaming’ (posting hostile messages). The most aggressive
messages were sent by those who hid their identities. This supports a link between anonymity, de-
individuation and aggressive behaviour in a context that has even greater relevance today with the
possibilities of social media.
Another strength is that de-individuation can explain the surprisingly aggressive behaviour of ‘baiting
crowds’. Mann (1981) investigated cases of ‘suicidal jumpers’ (e.g. people jumping from buildings and
bridges). He found 21 examples in US newspapers of crowds gathering to encourage (‘bait’) people to
jump, often in very aggressive ways. This was more likely to happen when the conditions matched those
predicted by de-individuation theory, e.g. when it was dark, the crowds were large and the jumpers were
distant. This suggests there is validity to the idea that people can become aggressive as part of a de-
individuated faceless crowd.
However, some research shows that the conditions for de-individuation do not necessarily lead to
aggression. In their ‘deviance in the dark’ study, Gergen et al. (1973) put strangers in a darkened room and
told them to do what they wanted as they could not be identified and would never meet again. They
soon started kissing and touching each other. Despite a guarantee of anonymity creating the conditions
for de-individuation, aggressive behaviour was not an outcome of this study and de-individuation cannot
explain why this was not the case.
Page 199
1. Dispositional explanations suggest that inmates bring with them into prisons a subculture typical of
criminality – including beliefs, values, norms, attitudes, learning experiences and personal characteristics
(e.g. gender and ethnicity). Inmates import these to negotiate their way through the unfamiliar prison
environment in which existing inmates use aggression to establish power, status and access to resources.
Situational explanations focus on harsh prison conditions and the way that they cause stress for inmates, who cope by behaving aggressively. One example is the deprivation caused by an unpredictable prison regime that regularly uses ‘lock-ups’ to control behaviour.
2. Dispositional explanations suggest that inmates bring with them into prisons a subculture typical of
criminality – including beliefs, values, norms, attitudes, learning experiences and personal characteristics
(e.g. gender and ethnicity). Inmates import these to negotiate their way through the unfamiliar prison
environment in which existing inmates use aggression to establish power, status and access to resources.
DeLisi et al. (2011) studied juvenile delinquents in California institutions who imported several negative
dispositional features. This included violent behaviour specifically but also childhood trauma, anger and
histories of substance abuse, which could explain the high levels of aggression in the institutions. The
inmates in the DeLisi et al. study were more likely to engage in suicidal activity and sexual misconduct,
and committed more acts of physical violence compared with a control group of inmates with fewer
negative dispositional features.
3. There is certainly support for individual-level factors as predictors of aggression but research shows
that some situational variables are also highly influential. Cunningham et al. (2010) analysed inmate
homicides in Texas prisons and found motivations for the behaviours were linked to some of the
deprivations suggested by Clemmer’s model. For example, many homicides followed arguments between
inmates when ‘boundaries’ were judged to have been crossed, often involving drugs and possessions. As
these are factors predicted by the deprivation model to make aggression more likely, these findings
support the validity of a situational explanation.
On the other hand, the deprivation model predicts that lack of freedoms, such as heterosexual contact, would lead to high levels of aggression in prisons, but the available evidence does not support this. Hensley et
al.’s (2002) study found that allowing conjugal visits was not associated with reduced aggressive
behaviour in the institution. This shows situational factors do not necessarily affect prison violence and
casts some doubt on the validity of the deprivation model.
4. Dispositional explanations suggest that inmates bring with them into prisons a subculture typical of
criminality – including beliefs, values, norms, attitudes, learning experiences and personal characteristics
(e.g. gender and ethnicity). Inmates import these to negotiate their way through the unfamiliar prison
environment in which existing inmates use aggression to establish power, status and access to resources.
This is the point of view expressed by the student who suggests the aggression is because of ‘the people
in them’. They are suggesting that the ‘type’ of person placed in prison is likely to have greater
tendencies towards aggression than non-inmates.
Camp and Gaes (2005) placed half of their male inmate participants in low-security Californian prisons
and the other half in high-security prisons and found that there was no significant difference in
aggressive misconduct between the two groups. This supports the idea that features of the prison
environment are less important predictors of aggressive behaviour than characteristics of inmates and as
such supports the dispositional view of the first student.
However, it is argued that the dispositional explanation is inadequate because it ignores the roles of prison officials and factors linked to the running of prisons. DiIulio (1991) proposed an administrative control model
(ACM) which states that poorly managed prisons are more likely to experience the most serious forms of
inmate violence (e.g. homicides, rioting), suggesting that these factors are more important than the
inmate characteristics focused on by dispositional explanations.
Situational explanations focus on harsh prison conditions and the way that the deprivations involved (lack of choices and freedom) cause stress for inmates, who cope by behaving aggressively. This is the view expressed by the second student, who is saying that the aggression is best explained by ‘the way prisons are run’. One example would be unpredictable prison regimes that regularly use ‘lock-ups’ to control behaviour. This could disrupt the few benefits and positives that prisoners have (for example education or television time), causing stress.
There is certainly support for individual-level factors as predictors of aggression but research shows that
some situational variables are also highly influential. Cunningham et al. (2010) analysed inmate
homicides in Texas prisons and found motivations for the behaviours were linked to some of the
deprivations suggested by Clemmer’s model. For example, many homicides followed arguments between
inmates when ‘boundaries’ were judged to have been crossed, often involving drugs and possessions. As
these are factors predicted by the deprivation model to make aggression more likely, these findings
support the validity of a situational explanation.
On the other hand, the deprivation model predicts that lack of freedoms, such as heterosexual contact, would lead to high levels of aggression in prisons, but the available evidence does not support this. Hensley et
al.’s (2002) study found that allowing conjugal visits was not associated with reduced aggressive
behaviour in the institution. This shows situational factors do not necessarily affect prison violence and
casts some doubt on the validity of the deprivation model.
The most realistic view may be an interactionist one, though – that institutional aggression is due to a combination of the individual characteristics imported into the prison by inmates and the deprivations of the prison situation itself. This view is also more realistic because it better reflects the complex nature of institutional aggression, which is unlikely to have just one cause (or set of causes) as assumed by the importation and deprivation models.
Page 201
1. Computer games may lead to aggressive behaviour for two main reasons. Firstly, the player takes a
more active role compared to a relatively passive television viewer. Secondly, game-playing is more
directly rewarding for a player, so direct learning through operant conditioning is a key process.
Bartholow and Anderson’s (2002) participants played a violent or non-violent computer game for ten
minutes – then carried out the Taylor competitive reaction time task, a standard lab measure of
aggression (choosing volumes of noise blasts). Those who played the violent game selected significantly
higher noise levels compared with non-violent players, highlighting the potential influence of computer
games on aggression.
2. Bartholow and Anderson’s (2002) participants who played a violent computer game for ten minutes
selected significantly higher noise levels on the Taylor competitive reaction time task compared with
players of a non-violent game. This is an indicator of high aggression because the task involves punishing
a (non-existent) opponent with blasts of white noise.
DeLisi et al. (2013) found that aggressive behaviour was positively correlated with how much time
juvenile offenders spent playing violent computer games. According to these researchers, this shows that
the link is so well-established that aggression should be considered a public health issue and computer game violence a significant risk factor.
The research also shows that the effects of violent computer games depend on how aggression is defined. Studies often define it in terms of physical violence (e.g. blasting white noise) but, although
all violence is aggression, not all aggression is violence.
3. There are numerous methodological issues involved with researching media influences. For example,
the Taylor competitive reaction time task measures aggression as the volume of noise selected by participants as punishment, which is an unrealistic measure. Many studies are correlational and so we
cannot conclude that media influences cause aggression. As such this casts doubt on the validity of the
link between the two.
On the other hand, the link has been researched by the full range of methodologies. Individual studies
may be limited but the strengths of one often compensate for the limitations of another (e.g. internal
and external validity). For example, correlational studies tend to measure media exposure in real-world
situations such as people’s homes (external validity), whereas lab experiments do not. Therefore, taken
together, a range of different methodologies come to similar conclusions suggesting exposure to violent
media may have a causal influence on aggressiveness.
4. Bartholow and Anderson’s (2002) participants played a violent or non-violent computer game for ten
minutes – then carried out the Taylor competitive reaction time task, a standard lab measure of
aggression (choosing volume of noise blasts). Those who played the violent game selected significantly
higher noise levels compared with non-violent players, highlighting the potential influence of computer
games on aggression.
There are numerous methodological issues involved with researching the effects of violent computer
games. For example, the Taylor competitive reaction time task measures aggression as the volume of noise selected by participants as punishment, which is an unrealistic measure. Many studies are
correlational and so we cannot conclude that media influences cause aggression. As such this casts doubt
on the validity of the link between the two.
On the other hand, the link has been researched by the full range of methodologies. Individual studies
may be limited but the strengths of one often compensate for the limitations of another (e.g. internal
and external validity). For example, correlational studies tend to measure game-playing in real-world
situations such as people’s homes (external validity), whereas lab experiments do not. Therefore, taken
together, a range of different methodologies come to similar conclusions suggesting exposure to violent
computer game-playing may have a causal influence on aggressiveness.
There is also support from correlational studies. DeLisi et al. (2013) found that aggressive behaviour was
positively correlated with how much time juvenile offenders spent playing violent computer games.
According to these researchers, this shows that the link is so well-established that aggression should be
considered a public health issue and computer game violence a significant risk factor.
However, we cannot make causal explanations from such evidence. A positive correlation between
violent computer games and aggression could arise for more than one reason: violent games may cause
people to be aggressive or else people who are already aggressive choose to play violent games.
Direction of causality cannot be settled by such studies, so we cannot presume that the computer games are the cause of the aggression.
Another strength is that the findings of research, especially laboratory studies, can be explained by social
learning theory. SLT is described by Anderson et al. (2017) as a convincing theoretical framework to
explain media effects on aggression (as shown by Bandura’s Bobo doll studies). As it is widely accepted that exposure to violence at home is harmful to children, it logically makes sense that computer games are another source of social learning. Children are more likely to imitate aggressive behaviours
when they see them being rewarded in computer games (vicarious reinforcement), especially if they
identify with on-screen characters. This enhances the validity of research because having a unifying
explanation to account for findings is a key feature of science.
However, this is a research area that is plagued by unsupported conclusions. It is important we maintain
a sense of balance about this issue. Many research studies are methodologically weak (e.g. confounding
variables, poor sampling methods). As pointed out above, many studies are correlational so it is not clear
that game-playing is causal in aggression. Finally, lab studies lack external validity because real-world
game-playing occurs in very different conditions. Therefore some researchers are guilty of drawing
premature conclusions about game-playing and aggression based on findings that lack validity.
Page 203
1. Desensitisation refers to how repeated exposure to violent media reduces normal levels of
physiological and psychological arousal associated with anxiety, making aggressive behaviour more likely.
The desensitisation can be psychological as well as physiological, so repeated exposure also promotes a
belief that using aggression as a method of resolving conflict is socially acceptable.
Disinhibition refers to how normal social constraints against aggression can be weakened by repeated
exposure via the media so people become ‘freer’ about using aggression. If exposure to aggressive
behaviours makes that behaviour appear normative and socially sanctioned, these behaviours then
appear temporarily socially acceptable and therefore more likely.
2. The cognitive priming explanation suggests that repeated experience of aggressive media can provide
us with a ‘script’ about how violent situations may ‘play out’. Huesmann (1998) argues that this script is
stored in memory so we become ‘ready’ (primed) to be aggressive. This is an automatic process because
a script can direct our behaviour without us being aware of it and the script is triggered when we
encounter cues in a situation that we perceive as aggressive.
Concern has been raised about the possible impact of aggressively derogatory lyrics about women in
setting up aggressive scripts through cognitive priming. In a study by Fischer and Greitemeyer
(2006), male participants heard songs featuring aggressively derogatory lyrics about women.
Compared with when they listened to neutral lyrics, participants later recalled more negative
qualities about women and behaved more aggressively towards a female confederate. Similar
results were found with female participants and ‘men-hating’ lyrics.
3. A strength of the disinhibition explanation is research support. Berkowitz and Alioto (1973) showed participants a film depicting aggression as vengeance. The participants gave more (fake) electric shocks of longer duration to a confederate, suggesting that media violence may disinhibit aggressiveness when it is presented as
justified. This finding demonstrates the link between removal of social constraints and subsequent
aggressive behaviour.
Another strength is that disinhibition can explain the influence of cartoon violence. When children watch
aggression in cartoons they do not learn specific behaviours from cartoon models as many of them are
not physically possible. Children learn social norms instead. The aggression carried out by cartoon
models is socially normative, especially when it goes unpunished. This supports the disinhibition
hypothesis because children learn from cartoons that aggression is rewarding and achieves goals in a
socially acceptable way.
4. In desensitisation, repeated exposure to violent media reduces normal levels of physiological and
psychological arousal associated with anxiety, making aggressive behaviour more likely. The
desensitisation can be psychological as well as physiological, so repeated exposure also promotes a belief
that using aggression as a method of resolving conflict is socially acceptable. This may weaken negative
attitudes towards violence, reduce empathy felt for victims and encourage minimisation of injuries
sustained by them.
A strength of this explanation is supporting research evidence. Krahé et al. (2011) showed participants
violent (and non-violent) film clips while measuring physiological arousal using skin conductance.
Habitual viewers of violent media showed lower arousal when watching violent film clips, and level of
arousal was negatively correlated with unprovoked aggression in a ‘noise blast’ task. This suggests that
lower arousal levels in violent media users reflect desensitisation to the effects of violence leading to a
greater willingness to be aggressive.
However, Krahé et al. (2011) failed to find a link between media viewing, lower arousal and provoked
(reactive) aggression. This may be because catharsis occurred – viewing violent media acts as a safety
valve, allowing participants to release aggressive impulses without behaving violently. This suggests that
catharsis (i.e. the frustration-aggression hypothesis) might be a better explanation of what happened
than desensitisation.
In disinhibition, normal social constraints against aggression can be weakened by repeated
exposure via the media, and people become ‘freer’ about using aggression. If exposure to aggressive
behaviours makes that behaviour appear normative and socially sanctioned, these behaviours then
appear temporarily socially acceptable and therefore more likely.
A strength of the disinhibition explanation is research support. Berkowitz and Alioto (1973) showed participants a film depicting aggression as vengeance. The participants gave more (fake) electric shocks of longer duration to a confederate, suggesting that media violence may disinhibit aggressiveness when it is presented as
justified. This finding demonstrates the link between removal of social constraints and subsequent
aggressive behaviour.
Another strength is that disinhibition can explain the influence of cartoon violence. When children watch
aggression in cartoons they do not learn specific behaviours from cartoon models as many of them are
not physically possible. Children learn social norms instead. The aggression carried out by cartoon
models is socially normative, especially when it goes unpunished. This supports the disinhibition
hypothesis because children learn from cartoons that aggression is rewarding and achieves goals in a
socially acceptable way.
The cognitive priming explanation suggests that repeated experience of aggressive media can provide us
with a ‘script’ about how violent situations may ‘play out’. Huesmann (1998) argues that this script is
stored in memory so we become ‘ready’ (primed) to be aggressive. This is an automatic process because
a script can direct our behaviour without us being aware of it and the script is triggered when we
encounter cues in a situation that we perceive as aggressive.
A strength is that understanding how cognitive priming influences aggression has useful practical
application and can potentially save lives. Whether situations break into violence depends on how
individuals interpret cues, which in turn depends on the scripts stored in memory. Bushman and Anderson (2002)
claim someone who habitually watches violent media accesses stored aggressive scripts more readily.
This raises the possibility that effective interventions could reduce aggressive behaviour by challenging
hostile cognitive scripts and encouraging habitual violent media users to consider alternatives.
Page 205
1. Organised offenders are characterised by evidence of planning the crime – the victim is
deliberately targeted and the killer/rapist may have a ‘type’ of victim. They also show a high degree
of control during the crime and little evidence is left behind at the scene.
Disorganised offenders are characterised by little evidence of planning, suggesting the offence may
have been spontaneous. The crime scene reflects the impulsive nature of the act – the victim’s body
is still at the scene and the crime shows little control on the part of the offender.
2. The top-down approach involves matching the crime/offender to pre-existing templates. The pre-
existing template was developed by the FBI by interviewing 36 sexually-motivated murderers and
using this data, together with characteristics of their crimes, to create two categories (organised and
disorganised). If the data from a crime scene matched some of the characteristics of one category
we could then predict other characteristics that would be likely. Murderers or rapists are classified in one of these two categories based on this evidence. This then informs the
investigation.
The organised and disorganised distinction is based on the idea that serious offenders have certain
signature ‘ways of working’. These generally correlate with a particular set of social and
psychological characteristics that relate to the individual.
Organised offenders are characterised by evidence of planning the crime – the victim is deliberately
targeted and the killer/rapist may have a ‘type’ of victim. They also show a high degree of control
during the crime and little evidence is left behind at the scene.
Disorganised offenders are characterised by little evidence of planning, suggesting the offence may
have been spontaneous. The crime scene reflects the impulsive nature of the act – the victim’s body
is still at the scene and the crime shows little control on the part of the offender.
3. One limitation is that the evidence for top-down profiling was flawed. Canter et al. (2004) argue that the FBI agents did not select a random or even a large sample, nor did the sample include different kinds of offender. There was no standard set of questions, so each interview was different and the interviews were therefore not really comparable. This suggests that top-down profiling does not have a sound, scientific basis.
4. The top-down approach involves matching the crime/offender to pre-existing templates. The pre-
existing template was developed by the FBI. Murderers or rapists are classified in one of two
categories (organised and disorganised) based on this evidence. This then informs the investigation.
The organised and disorganised distinction is based on the idea that serious offenders have certain
signature ‘ways of working’. These generally correlate with a particular set of social and
psychological characteristics that relate to the individual. Organised offenders are characterised by
evidence of planning the crime – the victim is deliberately targeted and the killer/rapist may have a
‘type’ of victim. They show a high degree of control during the crime and little evidence is left behind
at the scene.
Disorganised offenders are characterised by little evidence of planning, suggesting the offence may
have been spontaneous. The crime scene reflects the impulsive nature of the act – the victim’s body is still at the scene and the crime shows little control on the part of the offender.
The murder scene that Octavia is investigating appears to be an example of an organised offender.
This is because there is very little physical evidence, which means the offender has tried to ‘cover
their tracks’ suggesting it was a planned and not a spontaneous act. The fact that there was evidence
that the murder was ‘organised and controlling’ is further evidence of an organised killer. Octavia can now infer other things about the murderer, such as that they are likely to be intelligent and charismatic, and to have a skilled job and a family. This helps narrow down the list of suspects.
One strength is research support for an organised category. Canter et al. (2004) looked at 100 US
serial killings. Smallest space analysis was used to assess the co-occurrence of 39 aspects of the
serial killings. This analysis revealed a subset of behaviours in many of the serial killings which matched the FBI’s typology for organised offenders. This suggests that a key component of the FBI typology
approach has some validity.
However, Godwin (2002) argues that, in reality, most killers have multiple contrasting characteristics
and don’t fit into one ‘type’. This suggests that the organised–disorganised typology is probably
more of a continuum.
Page 207
1. In the top-down approach the profiler matches the crime/offender to pre-existing templates.
Murderers or rapists are classified in one of two categories (organised and disorganised) based on
this evidence.
Unlike the US top-down approach, the British bottom-up model does not begin with fixed
typologies. Instead, the profile is ‘data-driven’ and emerges as the investigator rigorously scrutinises
the details of a particular offence.
2. Lundrigan and Canter (2001) collated information from 120 murder cases involving serial killers in
the US. Smallest space analysis revealed spatial consistency in the behaviour of the killers. The
location of each body disposal site was plotted and a ‘centre of gravity’ was identified – the
offender’s base was invariably in the centre of the pattern. The effect was more noticeable for
‘marauders’ (offenders travelling short distances). This supports Canter’s claim that spatial
information can be a key factor in determining the base of an offender, and thus aiding their
identification.
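The ‘centre of gravity’ idea can be understood, in its simplest form, as the average of the plotted crime-location coordinates. The sketch below (in Python, with invented coordinates rather than data from Lundrigan and Canter, and a plain mean rather than the full smallest space analysis) is an illustration of the calculation only:

    # 'Centre of gravity' in geographical profiling: the mean of the
    # plotted disposal-site coordinates. The coordinates are invented
    # for illustration; real analyses use map data and richer models.
    disposal_sites = [(2.0, 3.0), (4.0, 7.0), (5.0, 2.0), (3.0, 4.0)]

    xs = [x for x, _ in disposal_sites]
    ys = [y for _, y in disposal_sites]

    centre_of_gravity = (sum(xs) / len(xs), sum(ys) / len(ys))
    print(centre_of_gravity)  # (3.5, 4.0) - predicted area of the offender's base

For a ‘marauder’, who travels only short distances to offend, the base is especially likely to lie close to this central point.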
3. One strength is that evidence supports investigative psychology. Canter and Heritage (1990)
conducted an analysis of 66 sexual assault cases using smallest space analysis. Several behaviours
were identified in most cases (e.g. using impersonal language). Each individual displayed a pattern of
such behaviours, and this helps establish whether two or more offences were committed by the
same person (known as ‘case linkage’). This supports one of the basic principles of investigative
psychology (and the bottom-up approach) that people are consistent in their behaviour.
4. Unlike the US top-down approach, the British bottom-up model does not begin with fixed
typologies. Instead, the profile is ‘data-driven’ and emerges as the investigator rigorously scrutinises
the details of a particular offence. The aim is to generate a picture of the offender’s characteristics,
routines and background through analysis of the evidence.
In investigative psychology, statistical procedures detect patterns of behaviour that are likely to
occur (or coexist) across crime scenes. This is done to develop a statistical ‘database’ which then acts
as a baseline for comparison. Features of an offence can be matched against this database to
suggest potentially important details about the offender, their personal history, family background,
etc. A central concept is interpersonal coherence – the way an offender behaves at the scene
(including how they ‘interact’ with the victim) may reflect their behaviour in everyday situations (e.g.
controlling, apologetic, etc.); i.e. their behaviour ‘hangs together’ (has coherence). This might tell the
police something about how the offender relates to women (for example) more generally.
In geographical profiling the locations of crime scenes are used to infer the likely home or
operational base of an offender – known as ‘crime mapping’. Location can also be used alongside
psychological theory to create hypotheses about the offender and their modus operandi (habitual
way of working).
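For illustration only (not part of the model answer): a minimal sketch of the 'centre of gravity' idea behind crime mapping, assuming crime locations are given as flat (x, y) coordinates. The coordinates are invented; real geographical profiling works with map data and far more elaborate statistical models.

```python
# Hypothetical sketch of crime mapping: estimating an offender's likely base
# as the centre of gravity (mean point) of the crime or disposal sites.

def centre_of_gravity(sites: list[tuple[float, float]]) -> tuple[float, float]:
    """Mean x and y of all recorded sites - a naive estimate of the base."""
    xs = [x for x, _ in sites]
    ys = [y for _, y in sites]
    return sum(xs) / len(sites), sum(ys) / len(sites)

disposal_sites = [(2.0, 3.0), (4.0, 1.0), (3.0, 5.0), (5.0, 3.0)]
print(centre_of_gravity(disposal_sites))  # (3.5, 3.0) - search near this point
```

For a 'marauder' who travels only short distances, the sites tend to form a rough circle around the base, so this naive mean point can be informative; for an offender who travels far from any base it would be misleading.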
One strength is that evidence supports investigative psychology. Canter and Heritage (1990)
analysed 66 sexual assault cases using smallest space analysis. Several behaviours
were identified in most cases (e.g. using impersonal language). Each individual displayed a pattern of
such behaviours, and this helps establish whether two or more offences were committed by the
same person (known as ‘case linkage’). This supports one of the basic principles of investigative
psychology (and the bottom-up approach) that people are consistent in their behaviour.
However, the database is made up of only solved crimes which are likely to be those that were
straightforward to link together. This is a circular argument and suggests that investigative
psychology may tell us little about crimes that have few links between them and therefore remain
unsolved.
Another strength is that evidence also supports geographical profiling. Lundrigan and Canter (2001)
collated information from 120 murder cases in the US. Smallest space analysis revealed spatial
consistency – a centre of gravity. Although offenders leave their home base in different directions
when disposing of a body, the disposal sites form a circular pattern around the base, especially in the
case of marauders. This supports the view that geographical information can be used to identify an offender.
One limitation is that geographical profiling may not be sufficient on its own. Recording of crime is
not always accurate; it can vary between police forces, and an estimated 75% of crimes are not even
reported to police. Even if crime data is correct, other factors are important, e.g. timing of the
offence and the age and experience of the offender (Ainsworth 2001). This suggests that
geographical information alone may not always lead to the successful capture of an offender.
Page 209
1. Lombroso proposed that criminals were ‘genetic throwbacks’ – a primitive sub-species who were
biologically different from non-criminals. This is the ‘atavistic form’. Offenders were seen by
Lombroso as lacking evolutionary development. Their savage and untamed nature meant that they
would find it impossible to adjust to civilised society and would inevitably turn to crime.
2. Lombroso proposed that criminals were ‘genetic throwbacks’ – a primitive sub-species who were
biologically different from non-criminals. This is the ‘atavistic form’. Offenders were seen by
Lombroso as lacking evolutionary development. Their savage and untamed nature meant that they
would find it impossible to adjust to civilised society and would inevitably turn to crime. Therefore
Lombroso saw offending behaviour as an innate tendency and thus was proposing a new perspective
(for his time) that the offender was not at fault. In this way his ideas were revolutionary. Lombroso
argued the offender subtype could be identified by physiological ‘markers’.
These ‘atavistic’ characteristics are biologically determined and are mainly features of the head and
face that make criminals appear physically different from the rest of us. For example, the atavistic
form included a narrow, sloping brow, a strong prominent jaw, high cheekbones and facial
asymmetry.
3. One strength of Lombroso’s theory is that it changed criminology. Lombroso (the ‘father of modern
criminology’, Hollin 1989) shifted the emphasis in crime research away from the moralistic towards the scientific.
Also, in describing how particular types of people are likely to commit particular types of crime, the
theory heralded offender profiling. This suggests that Lombroso made a major contribution to the
science of criminology.
However, many of the features that Lombroso identified as atavistic (curly hair, dark skin) are most
likely to be found among people of African descent, a view that fitted 19th-century eugenic attitudes
(to prevent some groups from breeding). This suggests that his theory might be more subjective
than objective, influenced by racist prejudices.
4. Lombroso proposed that criminals were ‘genetic throwbacks’ – a primitive sub-species who were
biologically different from non-criminals. This is the ‘atavistic form’. Offenders were seen by
Lombroso as lacking evolutionary development. Their savage and untamed nature meant that they
would find it impossible to adjust to civilised society and would inevitably turn to crime. Therefore
Lombroso saw offending behaviour as an innate tendency and thus was proposing a new perspective
(for his time) that the offender was not at fault. In this way his ideas were revolutionary. Lombroso
argued the offender subtype could be identified by physiological ‘markers’.
These ‘atavistic’ characteristics are biologically determined and are mainly features of the head and
face that make criminals appear physically different from the rest of us. For example, the atavistic
form included a narrow, sloping brow, a strong prominent jaw, high cheekbones and facial
asymmetry.
One strength of Lombroso’s theory is that it changed criminology. Lombroso (the ‘father of modern
criminology’, Hollin 1989) shifted the emphasis in crime research away from the moralistic towards the scientific.
Also, in describing how particular types of people are likely to commit particular types of crime, the
theory heralded offender profiling. This suggests that Lombroso made a major contribution to the
science of criminology.
However, many of the features that Lombroso identified as atavistic (curly hair, dark skin) are most
likely to be found among people of African descent, a view that fitted 19th-century eugenic attitudes
(to prevent some groups from breeding). This suggests that his theory might be more subjective
than objective, influenced by racist prejudices.
One limitation is that evidence contradicts the link between atavism and crime. Goring (1913) compared
3000 offenders and 3000 non-offenders and found no evidence that offenders are a distinct group
with unusual facial and cranial characteristics. He did suggest though that many people who commit
crime have lower-than-average intelligence (offering limited support for atavistic theory). This
challenges the idea that offenders can be physically distinguished from the rest of the population,
therefore they are unlikely to be a subspecies.
Another limitation is Lombroso’s methods were poorly controlled. Lombroso didn’t compare his
offender sample with a control group, and therefore failed to control confounding variables. For
example, modern research shows that social conditions (e.g. poverty) are associated with offending
behaviour, which would explain some of Lombroso’s links (Hay and Forrest 2009). This suggests that
Lombroso’s research does not meet modern scientific standards.
Page 211
1. Crowe (1972) found that adopted children whose biological mother had a criminal record had a
50% risk of having a criminal record at 18 years of age, whereas adopted children whose biological
mother didn’t have a criminal record had only a 5% risk.
A genetic analysis of about 800 offenders by Tiihonen et al. (2015) suggested two genes that may be
associated with violent crime. The MAOA gene regulates serotonin and has been linked to aggressive
behaviour. The CDH13 gene is linked to substance abuse and ADHD. The study found that 5–10% of
all severe violent crime in Finland is attributable to the MAOA and CDH13 genotypes.
2. Twin and adoption studies suggest genes predispose offenders to crime. Christiansen (1977)
studied over 3500 twin pairs in Denmark, finding a concordance for offender behaviour of 35% for
MZ males and 13% for DZ males (slightly lower rates for females). This supports a genetic
component in offending.
There may be neural differences in the brains of offenders and non-offenders. Raine et al. (2000)
found reduced activity and an 11% reduction in the volume of grey matter in the prefrontal cortex of
people with APD compared to controls. This is the part of the brain that regulates emotional
behaviour.
3. One strength is support for the link between crime and the frontal lobe. Kandel and Freed (1989)
researched people with frontal lobe damage, including the prefrontal cortex. They found evidence of
impulsive behaviour, emotional instability and inability to learn from mistakes. This supports the
idea that structural abnormalities in the brain are a causal factor in offending behaviour.
One limitation is that the link between neural differences and APD is complex. Farrington et al. (1981)
studied adult males with high APD scores. They were raised by a convicted parent and physically
neglected. These early experiences may have caused APD and associated neural differences, e.g.
reduced activity in the frontal lobe due to trauma. This suggests that the relationship between
neural differences, APD and offending is complex and there may be intervening variables.
4. The genetic explanation of offending behaviour is supported by twin and adoption studies, which
suggest that genes predispose offenders to crime. Christiansen (1977) studied over 3500 twin pairs
in Denmark, finding a concordance for offender behaviour of 35% for MZ males and 13% for DZ
males (slightly lower rates for females). This supports a genetic component in offending.
Crowe (1972) also found that adopted children whose biological mother had a criminal record had a
50% risk of having a criminal record at 18 years of age, whereas adopted children whose biological
mother didn’t have a criminal record had only a 5% risk.
One limitation of genetic explanations is that twin studies assume equal environments, i.e. that
environmental influences affect MZ and DZ twin pairs equally. However, because MZ twins look
identical, people (especially parents) tend to treat
them more similarly which, in turn, affects their behaviour. Therefore higher concordance rates for
MZs may be because they are treated more similarly than DZs, suggesting conclusions lack validity.
One strength is the support for a diathesis-stress model of offending. Mednick et al. (1984) studied
13,000 Danish adoptees, recording whether each had at least one court conviction. They found that
conviction rates were 13.5% (where neither biological nor adoptive parents had convictions), 20%
(where one biological parent had a conviction), and 24.5% (where both adoptive and biological parents had a conviction). This
data suggests that both genetic inheritance and the environment influence criminality – supporting
the diathesis-stress model of crime.
The neural explanation of offending behaviour proposes there may be neural differences in the
brains of offenders and non-offenders. For example, antisocial personality disorder (APD) is
associated with a lack of empathy and reduced emotional responses. Many convicted offenders
have a diagnosis of APD. Raine et al. (2000) found reduced activity and an 11% reduction in the
volume of grey matter in the prefrontal cortex of people with APD compared to controls. This is the
part of the brain that regulates emotional behaviour.
One strength of neural explanations is support for the link between crime and the frontal lobe.
Kandel and Freed (1989) researched people with frontal lobe damage, including the prefrontal
cortex. They found evidence of impulsive behaviour, emotional instability and inability to learn from
mistakes. This supports the idea that structural abnormalities in the brain are a causal factor in
offending behaviour.
One limitation is that the link between neural differences and APD is complex. Farrington et al. (1981)
studied adult males with high APD scores. They were raised by a convicted parent and physically
neglected. These early experiences may have caused APD and associated neural differences, e.g.
reduced activity in the frontal lobe due to trauma. This suggests that the relationship between
neural differences, APD and offending is complex and there may be intervening variables.
Page 213
1. Eysenck suggested personality types are innate and based on the nervous system we inherit.
Extraverts have an underactive nervous system, which means they seek excitement and stimulation
and engage in risk-taking. Neurotic individuals have a high level of reactivity in the sympathetic
nervous system – they respond quickly to situations of threat (fight or flight). This means they tend
to be nervous, jumpy and overanxious so their behaviour is difficult to predict. Psychotic individuals
are suggested to have higher levels of testosterone – they are cold, unemotional and prone to
aggression.
The criminal personality type is a combination of personality types: neurotic extravert + high
psychoticism. To explain, neurotics are unstable and therefore prone to overreact to situations of
threat. Extraverts seek more arousal and thus engage in dangerous activities. Psychotics are
aggressive and lacking empathy.
Eysenck saw criminal behaviour as developmentally immature in that it is selfish and concerned with
immediate gratification. Criminals are impatient and cannot wait for things – so they are more likely
to act antisocially.
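For illustration only (not part of the model answer): a minimal sketch of the 'criminal personality' combination described above – high E, N and P together. The cut-off scores are invented for this example; real EPQ scoring uses standardised norms rather than simple thresholds.

```python
# Hypothetical sketch of Eysenck's criminal personality: high scores on all
# three dimensions (E, N, P). Cut-offs and scores are invented examples.

CUT_OFFS = {"E": 15, "N": 15, "P": 10}  # assumed thresholds, not real norms

def fits_criminal_personality(scores: dict[str, int]) -> bool:
    """True only if the person scores above the cut-off on E, N and P."""
    return all(scores[dim] > cut_off for dim, cut_off in CUT_OFFS.items())

print(fits_criminal_personality({"E": 18, "N": 17, "P": 12}))  # True
print(fits_criminal_personality({"E": 18, "N": 9, "P": 12}))   # False - low N
```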
2. Eysenck and Eysenck (1977) compared 2070 male prisoners’ scores on the EPQ with 2422 male
controls. On measures of E, N and P (across all the age groups that were sampled) prisoners
recorded higher average scores than controls. This agrees with the predictions of the theory that
offenders rate higher than average across the three dimensions Eysenck identified.
However, Farrington et al. (1981) conducted a meta-analysis and reported that offenders tended to
score high on measures of P, but not for E and N. Also there is inconsistent evidence of different
cortical arousal in extraverts and introverts (Küssner 2017). This means that some of the central
assumptions of the criminal personality have been challenged.
3. See above.
4. Eysenck suggested personality types are innate and based on the nervous system we inherit.
Extraverts have an underactive nervous system, which means they seek excitement and stimulation
and engage in risk-taking. Neurotic individuals have a high level of reactivity in the sympathetic
nervous system – they respond quickly to situations of threat (fight or flight). This means they tend
to be nervous, jumpy and overanxious so their behaviour is difficult to predict. Psychotic individuals
are suggested to have higher levels of testosterone – they are cold, unemotional and prone to
aggression.
The criminal personality type is a combination of personality types: neurotic extravert + high
psychoticism. To explain, neurotics are unstable and therefore prone to overreact to situations of
threat. Extraverts seek more arousal and thus engage in dangerous activities. Psychotics are
aggressive and lacking empathy.
Eysenck saw criminal behaviour as developmentally immature in that it is selfish and concerned with
immediate gratification. Criminals are impatient and cannot wait for things – so they are more likely
to act antisocially.
Cruz would appear to have an extravert personality as he is charming, friendly and outgoing. He is
also unfeeling, selfish, anxious and tense. Cruz’s unfeeling and selfish nature suggests he would
score high on measures of psychoticism. His anxiety and tenseness are characteristics of
neuroticism. Thus, Cruz would appear to have all three elements of the criminal personality so it
comes as little surprise that he is currently in prison for serious assault.
One limitation is the view that all offending is explained by personality. Moffitt (1993) distinguished
between offending behaviour that only occurs in adolescence (adolescence-limited) and that which
continues into adulthood (life-course-persistent). She considers persistence in offending behaviour
to be a reciprocal process between individual personality traits and environmental reactions to
those traits. This is a more complex picture than Eysenck suggested, that offending behaviour is
determined by an interaction between personality and the environment.
Another limitation is that cultural factors are not taken into account. Bartol and Holanchock (1979)
studied Hispanic and African-American offenders in a New York maximum security prison, dividing
them into six groups based on offending history and offences. All six groups were less extraverted than
a non-offender control group. Bartol and Holanchock suggested this was because the sample was a
different cultural group from that investigated by Eysenck. This questions the generalisability of the
criminal personality – it may be a culturally relative concept.
Page 215
1. Hostile attribution bias describes the tendency to judge ambiguous situations as threatening.
Schönenberg and Jusyte (2014) found violent offenders were more likely than non-offenders to
perceive ambiguous facial expressions as angry and hostile. Offenders misread non-aggressive cues
(e.g. being ‘looked at’) and this can trigger a disproportionate and violent response.
Minimalisation describes the downplaying of the significance of a crime in a way that reduces a
person’s sense of guilt. For example, burglars may describe themselves as ‘doing a job’ or
‘supporting my family’ as a way of minimising the seriousness of their actions and their sense of
guilt. This is particularly likely in sex offenders – Barbaree (1991) found that 54% of rapists denied they
had committed an offence at all and a further 40% minimised the harm they had caused to the
victim.
2. Kohlberg proposed that people’s decisions and judgements about right and wrong can be
classified using his stage theory of moral development. The higher the stage the more sophisticated the
reasoning. Kohlberg et al. (1973) used a moral dilemma technique (e.g. the Heinz dilemma) and
found offenders tend to be at the pre-conventional level, whereas non-criminals progress to the
conventional level and beyond.
The pre-conventional level is characterised by a need to avoid punishment and gain rewards and a
less mature, childlike reasoning. Offenders may commit crime if they can get away with it or gain
rewards (e.g. money, respect). Research shows that offenders are often self-centred (egocentric)
and display poorer social perspective-taking skills (Chandler 1973). Individuals who reason at a
higher level tend to empathise more and exhibit behaviours such as honesty, generosity and non-
violence.
3. One limitation is that cognitive distortions depend on the type of offence. Howitt and Sheldon (2007)
found that non-contact sex offenders (accessed sexual images on the internet) used more cognitive
distortions than contact sex offenders (physically abused children). Those who had a previous history
of offending were also more likely to use distortions as a justification for their behaviour. This
suggests that cognitive distortions are not used in the same way by all offenders.
4. Kohlberg proposed that as children get older their decisions and judgements about right and
wrong become more sophisticated. A person’s level of reasoning (thinking) affects their behaviour.
Offenders are at a lower, less mature level. Kohlberg et al. (1973) used a moral dilemma technique
(e.g. the Heinz dilemma) and found that offenders tend to be at the preconventional level, whereas
non-offenders progress higher. The pre-conventional level is characterised by a need to avoid
punishment and gain rewards and a less mature, childlike reasoning. Offenders may commit crime if
they can get away with it or gain rewards (e.g. money, respect).
One strength is that evidence supports the role of moral reasoning. Palmer and Hollin (1998)
compared the moral reasoning of offenders and non-offenders using the SRM-SF scale (11 moral
dilemma-related questions). Offenders showed less mature moral reasoning than the non-offender
group on items such as not taking things that belong to someone else. This is consistent with Kohlberg’s theory, and suggests his theory of
criminality has validity.
One limitation is that moral reasoning may depend on the type of offence. Thornton and Reid (1982)
found that people whose crimes were for financial gain (e.g. robbery) were more likely to show
pre-conventional reasoning than those whose crimes were impulsive (e.g. assault). Pre-conventional moral
reasoning tends to be associated with crimes in which offenders believe they have a good chance of
evading punishment. This suggests that Kohlberg’s theory may not apply to all forms of crime.
There are two cognitive biases linked to offending: hostile attribution bias and minimalisation.
Hostile attribution bias describes the tendency to judge ambiguous situations as threatening.
Schönenberg and Jusyte (2014) found violent offenders were more likely than non-offenders to
perceive ambiguous facial expressions as angry and hostile. Offenders misread non-aggressive cues
(e.g. being ‘looked at’) and this can trigger a disproportionate and violent response.
Minimalisation describes the downplaying of the significance of a crime in a way that reduces a
person’s sense of guilt. For example, burglars may describe themselves as ‘doing a job’ or
‘supporting my family’ as a way of minimising the seriousness of their actions and their sense of
guilt.
One strength of cognitive distortions is its application to therapy. In cognitive behaviour therapy,
offenders are helped to ‘face up’ to what they have done and have a less distorted view of their
actions. Studies (e.g. Harkins et al. 2010) suggest that reduced denial and minimalisation in therapy
is associated with less reoffending. This suggests that the theory of cognitive distortions has practical
value.
One limitation is that cognitive distortions depend on the type of offence. Howitt and Sheldon (2007)
found that non-contact sex offenders (accessed sexual images on the internet) used more cognitive
distortions than contact sex offenders (physically abused children). Those who had a previous history
of offending were also more likely to use distortions as a justification for their behaviour. This
suggests that cognitive distortions are not used in the same way by all offenders.
Page 217
1. Sutherland (1924) developed a set of scientific principles that could explain all types of offending.
Individuals learn the values, attitudes, techniques and motives for offending behaviour through
interaction with others – these ‘others’ are different from one person to the next (hence, differential
association). His theory ignores the effects of class or ethnic background; what matters is who you
associate with.
Offending behaviour is acquired through the process of learning. Learning occurs through
interactions with significant others who the child values most and spends most time with, such as
family and peer group. Offending arises from two factors: learned attitudes towards offending and
learning of specific offending acts. When a person is socialised into a group they will be exposed to
certain values and attitudes. This includes values and attitudes toward the law – some of these will
be pro-crime, some will be anti-crime. Sutherland argues that if the number of pro-crime attitudes
the person comes to acquire outweighs the number of anti-crime attitudes, they will go on to
offend.
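For illustration only (not part of the model answer): a toy sketch of Sutherland's threshold claim – offending is predicted when learned pro-crime attitudes outnumber anti-crime ones. The counts are invented; as the evaluation in answer 4 notes, such exposures cannot really be measured in practice.

```python
# Toy sketch of Sutherland's claim: offending is predicted when learned
# pro-crime attitudes outnumber anti-crime attitudes. Counts are invented -
# in practice these exposures cannot be measured, which is a key criticism.

def predicts_offending(pro_crime: int, anti_crime: int) -> bool:
    """Sutherland's rule: pro-crime attitudes must outweigh anti-crime ones."""
    return pro_crime > anti_crime

print(predicts_offending(pro_crime=12, anti_crime=7))  # True
print(predicts_offending(pro_crime=3, anti_crime=9))   # False
```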
2. One difference between differential association theory and the genetic theory of offending is
where offending is seen to originate. Sutherland’s theory argues that offending behaviour is the
result of nurture, i.e. it develops within a dysfunctional family environment and is learned through
association with inappropriate role models. The genetic theory meanwhile emphasises the role of
nature, i.e. that offending behaviour is innate and the result of an inherited predisposition.
3. One strength of differential association theory is the shift of focus. Sutherland moved emphasis
away from early biological explanations (e.g. Lombroso) and from theories of offending as the
product of individual weakness or immorality. Differential association theory draws attention to
deviant social circumstances and environments as being more to blame for offending than deviant
people. This approach offers a more realistic solution to offending than eugenics (the biological
solution) or punishment (the morality solution).
However, the theory risks stereotyping people from impoverished, crime-ridden backgrounds. It
ignores the fact that people may choose not to offend despite such influences – not everyone who is
exposed to pro-crime attitudes goes on to offend.
4. Sutherland (1924) developed a set of scientific principles that could explain all types of offending.
Individuals learn the values, attitudes, techniques and motives for offending behaviour through
interaction with others – these ‘others’ are different from one person to the next (hence, differential
association). His theory ignores the effects of class or ethnic background; what matters is who you
associate with.
Offending behaviour is acquired through the process of learning. Learning occurs through
interactions with significant others who the child values most and spends most time with, such as
family and peer group. Offending arises from two factors: learned attitudes towards offending and
learning of specific offending acts. When a person is socialised into a group they will be exposed to
certain values and attitudes. This includes values and attitudes toward the law – some of these will
be pro-crime, some will be anti-crime. Sutherland argues that if the number of pro-crime attitudes
the person comes to acquire outweighs the number of anti-crime attitudes, they will go on to
offend.
One strength of differential association theory is the shift of focus. Sutherland moved emphasis
away from early biological explanations (e.g. Lombroso) and from theories of offending as the
product of individual weakness or immorality. Differential association theory draws attention to
deviant social circumstances and environments as being more to blame for offending than deviant
people. This approach offers a more realistic solution to offending than eugenics (the biological
solution) or punishment (the morality solution).
However, the theory risks stereotyping people from impoverished, crime-ridden backgrounds, which
is the viewpoint of the politician. It ignores the fact that people may choose not to offend despite such
influences – not everyone who is exposed to pro-crime attitudes goes on to offend.
Another strength is that the theory has wide reach. Whilst some crimes (e.g. burglary) are clustered
in inner-city working class communities, other crimes are clustered in more affluent groups, as the
psychologist points out. Sutherland was particularly interested in so-called ‘white-collar’ or
corporate offences and how this may be a feature of middle-class groups who share deviant norms.
This shows that it is not just the ‘lower’ classes who commit offences and that differential
association can be used to explain all offences.
One limitation is difficulty testing the theory’s predictions. Sutherland promised a scientific and
mathematical framework for predicting offending behaviour, but the concepts can’t be
operationalised. It is unclear how we can measure the numbers of pro- or anti-crime attitudes a
person is exposed to – so how can we know at what point offending would be triggered? This means
the theory does not have scientific credibility.
Page 219
1. Freud’s psychodynamic approach suggests that the Superego is guided by the morality principle
leading to feelings of guilt for wrongdoing. Blackburn (1993) argued that if the Superego is
inadequate (weak, deviant or over-harsh) then the Id (governed by the pleasure principle) is given
‘free rein’ – an uncontrolled Id means that offending behaviour is inevitable.
A weak Superego comes about through absence of the same-sex parent. During the phallic stage the
Superego is formed through the resolution of the Oedipus complex (or Electra complex). If the same-
sex parent is absent during this stage a child cannot internalise a fully-formed Superego as there is
no opportunity for identification. This would make offending behaviour more likely.
The deviant Superego is when the child internalises deviant values. A child internalises the same-sex
parent’s moral attitudes to form their Superego. If these internalised moral attitudes are deviant this
would lead to a deviant Superego and to offending behaviour.
Finally, an over-harsh Superego creates a deep-seated need for punishment which committing crimes can satisfy. An
excessively punitive or overly harsh parent creates a child who has an over-harsh Superego and the
child is crippled by guilt and anxiety. This may (unconsciously) drive the individual to perform
criminal acts in order to satisfy the Superego’s overwhelming need for punishment.
2. Kohlberg proposed that people’s decisions and judgements about right and wrong can be classified
using his stage theory of moral development. The higher the stage the more sophisticated the
reasoning. Kohlberg et al. (1973) used a moral dilemma technique and found criminal offenders tend
to be at the pre-conventional level – non-criminals progress to the conventional level and beyond.
The pre-conventional level is characterised by a need to avoid punishment and gain rewards and less
mature, childlike reasoning. Offenders may commit crime if they can get away with it or gain
rewards (e.g. money, respect).
3. One limitation of Freudian theory is that it is gender-biased. Psychodynamic theory assumes girls
develop a weaker Superego than boys – they do not experience castration anxiety, so have less need
to identify with their mothers. However, there are 20 times more men than women in prison and
Hoffman (1975) found no gender differences in children’s moral behaviour. This suggests there is
alpha bias at the heart of Freud’s theory and means it may not be appropriate as an explanation of
offending behaviour.
Another limitation is that Bowlby’s theory is based on an association. Lewis (1954) analysed 500
interviews with young people, and found that maternal deprivation was a poor predictor of future
offending and the ability to form close relationships in adolescence. Even if there is a link there are
countless other reasons for it, for example maternal deprivation may be due to growing up in
poverty. This suggests that maternal deprivation may be one of the reasons for later offending
behaviour, but not the only reason.
4. Freud proposed that the Superego is guided by the morality principle and leads to feelings of guilt
for wrongdoing and feelings of pride for moral behaviour. Blackburn (1993) argued that if the
Superego is somehow inadequate then the Id (governed by the pleasure principle) is given ‘free rein’
and is not properly controlled – an uncontrolled Id means that criminal behaviour is inevitable. There
are three types of ‘inadequate’ Superego: weak, deviant or over-harsh.
Bowlby (1944) argued that a warm, continuous relationship with a mother-figure was crucial to
future relationships, well-being and development. A loss of attachment in infancy (maternal
deprivation) could lead to affectionless psychopathy (lack of empathy and guilt) and increased
likelihood of delinquency. Bowlby supported his claims with his investigation of 44 juvenile thieves.
He found that 14 of the thieves showed signs of affectionless psychopathy – 12 of these had
experienced prolonged separation from their mothers in infancy. In a control group, only two had
experienced prolonged separation (maternal deprivation). Bowlby concluded that the effects of
maternal deprivation had caused affectionless psychopathy and delinquent behaviour among
juvenile thieves.
Ashton can be very cruel to others but never seems to feel any guilt. This suggests he has an
inadequate Superego – the part of the personality that forces the Ego to experience guilt for
wrongdoing. Without an adequately functioning Superego, the Id is allowed free rein and is not
properly controlled. Ashton never expresses warmth or positive emotion towards others. This may
suggest he has developed the affectionless psychopathy personality type, which is characterised by
lack of empathy and cruelty. This may have come about through maternal deprivation in Ashton’s
childhood but we cannot know this from the text.
One strength is research support for the link to the Superego. Goreta (1991) conducted a Freudian-
style analysis of ten offenders referred for psychiatric treatment. In all those assessed, disturbances
in Superego formation were diagnosed. Each offender experienced the need for punishment
manifesting itself as a desire to commit acts of wrongdoing and offend (possibly due to an over-harsh
Superego). This evidence seems to support the role of psychic conflicts and an over-harsh Superego
as a basis for offending.
If this theory were correct, though, we would expect harsh, punitive parents to raise children who
often experience guilt. Evidence suggests that the opposite is true – such children rarely express guilt
(Kochanska et al. 2001). This calls into question the relationship between a strong, punitive internal
parent and excessive feelings of guilt within the child.
One limitation of Freudian theory is that it is gender-biased. Psychodynamic theory assumes girls
develop a weaker Superego than boys – they do not experience castration anxiety, so have less need
to identify with their mothers. However, there are 20 times more men than women in prison and
Hoffman (1975) found no gender differences in children’s moral behaviour. This suggests there is
alpha bias at the heart of Freud’s theory and means it may not be appropriate as an explanation of
offending behaviour.
Another limitation is that Bowlby’s theory is based on an association. Lewis (1954) analysed 500
interviews with young people, and found that maternal deprivation was a poor predictor of future
offending and the ability to form close relationships in adolescence. Even if there is a link there are
countless other reasons for it, for example maternal deprivation may be due to growing up in
poverty. This suggests that maternal deprivation may be one of the reasons for later offending
behaviour, but not the only reason.
Page 221
1. Token economy systems are managed by prison staff to modify the behaviour of inmates. Based
on operant conditioning, desirable inmate behaviours are rewarded (reinforced) with tokens.
Desirable behaviours might include avoiding conflict, being quiet in the cell, following rules and so
on. Tokens are not rewarding in themselves but they are rewarding because they can be exchanged
for something desirable. The subsequent reward will vary according to the institution, but may
include exchanging tokens for a phone call to a loved one, time in the gym or exercise yard, extra
cigarettes or food.
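For illustration only (not part of the model answer): a minimal sketch of a token economy ledger based on the description above. The behaviours, rewards and token values are invented – in a real programme they would be agreed by staff (and, ideally, prisoners) in advance.

```python
# Minimal sketch of a token economy. Tokens are earned for desirable
# behaviours and exchanged for rewards; all values here are invented examples.

TOKEN_VALUES = {"avoided conflict": 2, "followed rules": 1, "quiet in cell": 1}
REWARD_COSTS = {"phone call": 5, "gym time": 3, "extra food": 2}

class TokenLedger:
    def __init__(self) -> None:
        self.balance = 0

    def reinforce(self, behaviour: str) -> None:
        """Award tokens immediately after a desirable (target) behaviour."""
        self.balance += TOKEN_VALUES[behaviour]

    def exchange(self, reward: str) -> bool:
        """Spend tokens on a reward - the exchange makes tokens reinforcing."""
        if self.balance >= REWARD_COSTS[reward]:
            self.balance -= REWARD_COSTS[reward]
            return True
        return False

ledger = TokenLedger()
for behaviour in ["avoided conflict", "followed rules", "quiet in cell", "avoided conflict"]:
    ledger.reinforce(behaviour)
print(ledger.exchange("phone call"), ledger.balance)  # True 1
```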
2. Recidivism refers to reoffending. Recidivism rates in ex-prisoners tell us to what extent prison acts
as an effective deterrent. Rates vary with age, crime committed and country. The US, Australia and
Denmark record rates over 60%. In Norway rates may be as low as 20% (Yukhnenko et al. 2019). This
last figure is significant because in Norway there is less emphasis on incarceration and greater
emphasis on rehabilitation and skills development.
3. One limitation is the negative effects of custodial sentencing. Bartol (1995) said prison is ‘brutal,
demeaning and generally devastating’. Suicide rates in prisons (England and Wales) are nine times
higher than in the general population. The Prison Reform Trust (2014) found that 25% of women and
15% of men in prison reported symptoms of psychosis (e.g. schizophrenia). This supports the view
that oppressive prison regimes may be detrimental to psychological health which could impact on
rehabilitation.
One strength is that prison provides training and treatment. The Vera Institute of Justice (Shirley
2019) claims that offenders who take part in college education programmes are 43% less likely to
reoffend following release. This will improve employment opportunities on release, which reduces
the likelihood of reoffending. This suggests that prison may be a worthwhile experience assuming
offenders are able to access these programmes.
4. There are several psychological effects that are associated with time in prison. First, stress and
depression – suicide rates and self-harm are higher in prison than in the general population. Dagny’s
self-harming suggests she is suffering psychologically in prison and is experiencing stress and
depression.
Second, institutionalisation, which describes the inability to function outside of prison having
adapted to the norms and routines of prison life. Dagny has forgotten how to do things for herself,
which suggests she has become too accustomed to the norms and routines of prison life. She may
struggle to adjust to life on the ‘outside’ if she were ever released.
Third, prisonisation, which describes behaviours that are unacceptable outside prison and which are
encouraged via socialisation into an ‘inmate code’. Dagny is no longer shocked by what goes on
inside prison. This suggests she has become socialised into the ‘inmate code’ and sees things that
would be unacceptable outside prison as trivial and ‘run of the mill’.
Bartol (1995) said prison is ‘brutal, demeaning and generally devastating’. Suicide rates in prisons
(England and Wales) are nine times higher than in the general population. The Prison Reform Trust
(2014) found that 25% of women and 15% of men in prison reported symptoms of psychosis (e.g.
schizophrenia). This supports the view that oppressive prison regimes may be detrimental to
psychological health which could impact on rehabilitation.
One strength is that prison provides training and treatment. The Vera Institute of Justice (Shirley
2019) claims that offenders who take part in college education programmes are 43% less likely to
reoffend following release. This will improve employment opportunities on release, which reduces
the likelihood of reoffending. This suggests that prison may be a worthwhile experience assuming
offenders are able to access these programmes.
Prisons can become ‘universities for crime’. Alongside the legitimate skills that offenders may
acquire during their time in prison, they may also undergo a more dubious ‘education’. Differential
association theory suggests time spent with hardened criminals may give younger inmates the
chance to learn ‘tricks of the trade’ from experienced offenders. This may undermine attempts to
rehabilitate prisoners, making reoffending more likely.
Page 223
1. Behaviour modification programmes are designed with the aim of reinforcing obedient behaviour
whilst punishing disobedience in the hope that it dies out (becomes extinct). It is based on operant
conditioning because desirable inmate behaviours are rewarded (reinforced) with tokens. Desirable
behaviours might include avoiding conflict, being quiet in the cell, following rules and so on.
Primary reinforcers might include a phone call to a loved one, time in the gym, extra cigarettes or
food. Target behaviours within the custodial setting are operationalised by breaking them down into
component parts, e.g. ‘interaction with other prisoners’ may be broken down into ‘speaking politely
to others’, ‘not touching others’, etc. Each ‘unit’ of behaviour should be objective and measurable
and agreed with staff and prisoners in advance.
One limitation is that there is little rehabilitative value. Some treatments (e.g. anger management)
are longer lasting because they involve understanding causes of, and taking responsibility for, one’s
own behaviour. In contrast, offenders can ‘play along’ with a token economy system to access
rewards, but this produces little change in their overall character. This may explain why, once the
token economy is discontinued, an offender may quickly regress back to their former behaviour.
4. The behaviourist approach proposes that behaviour is learned and therefore it should be possible
to unlearn behaviour using the same principles. Behaviour modification programmes are designed
with the aim of reinforcing obedient behaviour whilst punishing disobedience in the hope that it dies
out (becomes extinct). Tokens are given to reinforce desirable behaviours. Token economy systems
are managed by prison staff to modify the behaviour of inmates. This is based on operant
conditioning – desirable inmate behaviours are rewarded (reinforced) with tokens. Desirable
behaviours might include avoiding conflict, being quiet in the cell, following rules and so on.
A strength of behaviour modification is that it is easy to implement. Behaviour modification does not
require a specialist professional, unlike other forms of treatment (e.g. anger management). Token
economy systems can be designed and implemented by virtually anyone. They
are cost-effective and easy to follow once methods have been established. This suggests that
behaviour modification techniques can be established in most prisons and accessed by most
prisoners.
One limitation is that there is little rehabilitative value. Some treatments (e.g. anger management)
are longer lasting because they involve understanding causes of, and taking responsibility for, one’s
own behaviour. In contrast, offenders can ‘play along’ with a token economy system to access
rewards, but this produces little change in their overall character. This may explain why, once the
token economy is discontinued, an offender may quickly regress back to their former behaviour.
Novaco (1975) suggests that cognitive factors trigger the emotional arousal that comes before
aggressive acts. Novaco’s argument is that, in some people, anger is quick to surface in situations
they perceive to be threatening or anxiety-inducing. Anger management programmes are a form of
cognitive behaviour therapy (CBT) in which the individual is taught to recognise the cognitive factors
that trigger their anger and loss of control and develop behavioural techniques that bring about
conflict resolution without the need for violence.
One limitation is that success depends on individual factors. Howells et al. (2005) found that
participation in an anger management programme had little overall impact when compared to a
control group who received no treatment. However, progress was made with offenders who showed
intense levels of anger before the programme and offenders who were motivated to change
(‘treatment readiness’). This suggests that anger management may only benefit offenders who fit a
certain profile.
Another limitation is that anger management is expensive. Anger management programmes require
highly-trained specialists who are used to dealing with violent offenders. Many prisons may not have
the resources. In addition, change takes time and commitment, and this is ultimately likely to add to
the expense of delivering effective programmes. This suggests that effective anger management
programmes may not be a realistic option in most prisons.
Page 225
1. The behaviourist approach proposes that behaviour is learned and therefore it should be possible
to unlearn behaviour using the same principles. Behaviour modification programmes are designed
with the aim of reinforcing obedient behaviour whilst punishing disobedience in the hope that it dies
out (becomes extinct). Tokens are given to reinforce desirable behaviours. Token economy systems
are managed by prison staff to modify the behaviour of inmates. This is based on operant
conditioning – desirable inmate behaviours are rewarded (reinforced) with tokens. Desirable
behaviours might include avoiding conflict, being quiet in the cell, following rules and so on.
Novaco (1975) suggests that cognitive factors trigger the emotional arousal that comes before
aggressive acts. Novaco’s argument is that, in some people, anger is quick to surface in situations
they perceive to be threatening or anxiety-inducing. Anger management programmes are a form of
cognitive behaviour therapy (CBT) in which the individual is taught to recognise the cognitive factors
that trigger their anger and loss of control and develop behavioural techniques that bring about
conflict resolution without the need for violence.
2. Same as above.
3. One limitation is that success depends on individual factors. Howells et al. (2005) found that
participation in an anger management programme had little overall impact when compared to a
control group who received no treatment. However, progress was made with offenders who showed
intense levels of anger before the programme and offenders who were motivated to change
(‘treatment readiness’). This suggests that anger management may only benefit offenders who fit a
certain profile.
4. Anger management programmes are a form of cognitive behaviour therapy (CBT). An individual is
taught to recognise the cognitive factors that trigger their anger and loss of control and to develop
behavioural techniques that bring about conflict resolution without the need for violence.
Stage 1 is cognitive preparation. This stage requires the offender to reflect on past experience – they
learn to identify triggers to anger and the ways their interpretation of events may be irrational. For
instance, the offender may interpret someone looking at them as confrontation. In redefining the
situation as non-threatening, the therapist is attempting to break what may be an automatic
response for the offender.
Stage 2 is skills acquisition. Offenders are introduced to a range of techniques and skills to help them
deal with anger-provoking situations. Techniques may be cognitive (positive self-talk to promote
calmness), behavioural (assertiveness training to communicate more effectively, becoming
automatic if practised) and physiological (methods of relaxation and/or meditation).
Stage 3 is application practice. Offenders are given the opportunity to practise their skills in a
carefully monitored environment. For example, role play between the offender and therapist may
involve re-enacting scenarios that led to anger and violence in the past. If the offender deals
successfully with the role play this is given positive reinforcement by the therapist.
One strength is that its benefits outlast those of behaviour modification. Unlike behaviour modification, anger
management tackles the causes of offending, i.e. the cognitive processes that trigger anger, and
ultimately, offending behaviour. This may give offenders new insight into the cause of their
criminality, allowing them to self-discover ways of managing themselves outside of prison. This
suggests that anger management is more likely than behaviour modification to lead to permanent
behavioural change.
However, whilst anger management may have an effect on offenders in the short term, it may not
help them cope with triggers in real-world situations (Blackburn 1993). This suggests that, in the end,
anger management may not reduce reoffending.
One limitation is that success depends on individual factors. Howells et al. (2005) found that
participation in an anger management programme had little overall impact when compared to a
control group who received no treatment. However, progress was made with offenders who showed
intense levels of anger before the programme and offenders who were motivated to change
(‘treatment readiness’). This suggests that anger management may only benefit offenders who fit a
certain profile.
Another limitation is that anger management is expensive. Anger management programmes require
highly-trained specialists who are used to dealing with violent offenders. Many prisons may not have
the resources. In addition, change takes time and commitment, and this is ultimately likely to add to
the expense of delivering effective programmes. This suggests that effective anger management
programmes may not be a realistic option in most prisons.
Page 227
1. Restorative justice (RJ) is a process of managed collaboration between offender and survivor (the
preferred term for ‘victim’) based on the principles of healing and empowerment. The survivor is
given the opportunity to explain how the incident affected them (including emotional distress) – an
important part of the rehabilitative process.
Novaco (1975) suggests that cognitive factors trigger the emotional arousal that comes before
aggressive acts. Novaco’s argument is that, in some people, anger is quick to surface in situations
they perceive to be threatening or anxiety-inducing. Anger management programmes are a form of
cognitive behaviour therapy (CBT).
2. Restorative justice is less about ‘retribution’ (punishing the offender) and more about ‘reparation’
(repairing the harm caused). RJ seeks to focus on two things: the survivor (victim) of the crime and
their recovery, and the offender and their recovery/rehabilitation process.
RJ programmes can be quite diverse but most share key features. A trained mediator supervises the
meeting in a non-courtroom setting where the offender voluntarily meets with the survivor(s). The
meeting is face-to-face or remote via video link. The survivor explains how the incident affected
them, so the offender can understand the effects of their crime. There is active rather than passive
involvement of all parties with a focus on positive outcomes for both survivors and offenders. Other
relevant community members may be involved and explain further consequences (e.g. neighbours,
friends, family members). RJ may occur pre-trial and may affect sentencing; it may be given as an
alternative to prison (especially if the offender is young); or it can take place while the offender is
serving a prison sentence, as an incentive to reduce the length of the sentence.
3. One strength of RJ is that it supports the needs of survivors. The Restorative Justice Council
(Shapland et al. 2008) reported the results of a 7-year project: 85% of survivors said they were
satisfied with the process, 78% would recommend it, about 60% said the process made them feel
better about the incident, and 2% said it made them feel worse. This suggests that restorative justice
is a worthwhile experience and helps survivors of crime cope with the aftermath of the incident.
Another strength is that RJ leads to a decrease in offending. In a meta-analysis, Strang et al. (2013)
found offenders who experienced RJ were less likely to reoffend – though the reduction was larger in
cases of violent crime compared with property crime. Bain (2012) found lowered recidivism with
adult offenders who had one-to-one contact with their survivor (rather than community contact).
This suggests that RJ has a positive impact on reoffending, perhaps more so for some types of
offence and some approaches than others.
One limitation is that offenders may abuse the system. The success of RJ hinges on an offender
genuinely feeling regret for their actions. Van Gijseghem (2003) suggests that offenders may use
restorative justice to avoid punishment, play down their faults or even take pride in their
relationship with the survivor. This would explain why not all offenders ultimately benefit from
restorative justice and go on to reoffend.
4. Restorative justice is less about ‘retribution’ (punishing the offender) and more about ‘reparation’
(repairing the harm caused). RJ seeks to focus on two things: the survivor (victim) of the crime and
their recovery, and the offender and their recovery/rehabilitation process.
RJ programmes can be quite diverse but most share key features. A trained mediator supervises the
meeting in a non-courtroom setting where the offender voluntarily meets with the survivor(s). The
meeting is face-to-face or remote via video link. The survivor explains how the incident affected
them, so the offender can understand the effects of their crime. There is active rather than passive
involvement of all parties with a focus on positive outcomes for both survivors and offenders. Other
relevant community members may be involved and explain further consequences (e.g. neighbours,
friends, family members).
One strength of RJ is that it supports the needs of survivors. The Restorative Justice Council
(Shapland et al. 2008) reported the results of a 7-year project: 85% of survivors said they were
satisfied with the process, 78% would recommend it, about 60% said the process made them feel
better about the incident, and 2% said it made them feel worse. This suggests that restorative justice
is a worthwhile experience and helps survivors of crime cope with the aftermath of the incident.
Novaco (1975) suggests that cognitive factors trigger the emotional arousal that comes before
aggressive acts. Novaco’s argument is that, in some people, anger is quick to surface in situations
they perceive to be threatening or anxiety-inducing. Anger management programmes are a form of
cognitive behaviour therapy (CBT). The individual is taught to recognise the cognitive factors that
trigger their anger and loss of control, and to develop behavioural techniques that bring about
conflict resolution without the need for violence.
One strength is that the benefits of anger management outlast those of behaviour modification.
Unlike behaviour modification, anger management tackles the causes of offending, i.e. the cognitive
processes that trigger anger and, ultimately, offending behaviour. This may give offenders new
insight into the cause of their
criminality, allowing them to self-discover ways of managing themselves outside of prison. This
suggests that anger management is more likely than behaviour modification to lead to permanent
behavioural change.
However, whilst anger management may have an effect on offenders in the short term, it may not
help them cope with triggers in real-world situations (Blackburn 1993). This suggests that, in the
end, anger management may not reduce reoffending.
Chapter 13 Addiction
Page 228
1. Addiction is a disorder in which a person takes a substance or carries out a behaviour that
provides pleasure but eventually becomes compulsive and has harmful consequences. It is marked
by psychological and/or physical dependence, tolerance and withdrawal. For example, someone may
be addicted to smoking if they experience physical withdrawal symptoms and strong cravings
(dependence) when they cannot smoke, and find they need to smoke more in order to get the same
effect (tolerance).
2. Physical dependence occurs when a withdrawal syndrome is produced by stopping the drug whereas
psychological dependence refers to the compulsion to experience the rewarding effects of a drug
(cravings).
Physical dependence can only be determined when the individual reduces or stops their
intake/behaviour and withdrawal appears, whereas psychological dependence is experienced by the
individual throughout the process of taking a drug/carrying out a behaviour.
3. Tolerance occurs when an individual’s response to a drug is reduced. This means they need even
greater doses to produce the same effect on behaviour. Tolerance is caused by repeated exposure to
a drug.
Page 229
1. A risk factor is any internal or external influence that increases the likelihood that a person will start
using addictive substances or engage in addictive behaviours. Examples include peer influences, family
influences, genetic vulnerability and stress. These factors also contribute to someone increasing their
current level of use/engagement.
2. Peers are an important risk factor in addiction. In fact, relationships with friends become the most
important risk factor as children get older, as they become increasingly independent of family influences.
Even when an adolescent’s peers have not used drugs themselves, the peers’ attitudes towards drug
use can still be highly influential.
3. Some people may inherit from their parents a vulnerability or predisposition to dependence. The
mechanism for this could be that genes determine the activity of neurotransmitter systems such as
dopamine. These systems in turn affect behaviours that predispose someone to dependence, e.g.
impulsivity. For example, the number of dopamine D2 receptors in the brain is genetically controlled and
addiction is associated with an abnormally low concentration of them.
A strength of this explanation is that there is research support for it from adoption studies. For example,
Kendler et al. (2012) investigated Swedish adults who, as children, had been adopted away from
biological families in which at least one parent had an addiction. Compared with a control group, these
adults had a significantly increased risk of developing an addiction themselves. This finding is supported
by twin studies and strongly suggests that genetic predisposition may be the central risk factor in
addiction.
This is further supported when we look at the roles of other factors. No single risk factor is causal in
addiction, but they all appear to interact with a genetic vulnerability. Peer and family influences, stress
and personality are all proximate factors and you need to go further back in the chain of causes to
explain them. Therefore, genetic vulnerability may be the ultimate factor that influences all the others.
Page 231
1. Family influences: Parents may approve of addiction. Livingston et al. (2010) found that when
parents allowed their children to drink alcohol at home in their final school year, their children were
more likely to drink excessively at college the next year. Parents may simply have little interest in
monitoring their child’s behaviour. Adolescents are more likely to start using alcohol where it is an
everyday feature of family life or where there is a history of alcohol addiction.
Peers: Peer behaviour does not have to specifically concern drugs. Instead a group norm that favours
rule-breaking generally can be influential. O’Connell et al. (2009) suggest there are three major
elements to peer influence for alcohol addiction. First, attitudes about drinking are influenced by
associating with peers who use alcohol. Second, peers provide more opportunities to use alcohol.
Third, individuals overestimate how much their peers are drinking and attempt to keep up with the
perceived norm.
2. Some people may inherit from their parents a vulnerability or predisposition to dependence. The
mechanism for this could be that genes determine the activity of neurotransmitter systems such as
dopamine. These systems in turn affect behaviours that predispose someone to dependence, e.g.
impulsivity. For example, the number of dopamine D2 receptors in the brain is genetically controlled and
addiction is associated with an abnormally low concentration of them. Fewer receptors means less
dopamine activity. As dopamine is associated with pleasurable reward, addiction may be a way of
compensating for a lack of reward due to dopamine deficiency.
3. A strength of the genetic vulnerability explanation is that there is research support for it from
adoption studies. For example, Kendler et al. (2012) investigated Swedish adults who, as children, had
been adopted away from biological families in which at least one parent had an addiction. Compared
with a control group, these adults had a significantly increased risk of developing an addiction
themselves. This finding is supported by twin studies and strongly suggests that genetic predisposition
may be the central risk factor in addiction.
A limitation of stress as a risk factor is to do with cause and effect. Many studies have shown a
strong positive correlation between stressful experiences and addiction behaviours. But stress may
not be a risk factor – it depends which develops first. If the addiction develops first, it then creates
stress because of its negative effects on relationships and finances, etc. Stress and addiction are thus
strongly correlated, but in this case the addiction caused the stress. Therefore correlational studies –
common in this area – cannot help us choose between these two competing explanations of the link.
4. Tim’s risk of addiction may best be explained in terms of genetic vulnerability. Some people may
inherit from their parents a vulnerability or predisposition to dependence. The mechanism for this
could be that genes determine the activity of neurotransmitter systems such as dopamine. These
systems in turn affect behaviours that predispose someone to dependence, e.g. impulsivity. For
example, the number of dopamine D2 receptors in the brain is genetically controlled and addiction is
associated with an abnormally low concentration of them. As Tim comes from a family of people
addicted to alcohol, he may have inherited a genetic predisposition to reduced D2 receptors in the
brain.
A strength of the genetic vulnerability explanation is that there is research support for it from adoption
studies. For example, Kendler et al. (2012) investigated Swedish adults who, as children, had been
adopted away from biological families in which at least one parent had an addiction. Compared with a
control group, these adults had a significantly increased risk of developing an addiction themselves. This
finding is supported by twin studies and strongly suggests that genetic predisposition may be the central
risk factor in addiction.
However, it should be noted that Tim has not inherited an ‘alcohol addiction’ as such. He may have
inherited a vulnerability that is triggered by other risk factors. These other factors might not be
experienced by Tim (e.g. no stressful life events). So there is nothing inevitable about Tim developing
an addiction just because it is common in his family.
Evidence suggests there is no ‘addictive personality’ so perhaps Kim can be reassured that there is
no ‘sort of person’ who is addicted to gambling. However, some traits (e.g. hostility) may be linked
to addiction. Antisocial personality disorder (APD) is strongly correlated with addiction-related
behaviour and begins in early adolescence. The key component is impulsivity: risk-taking, a lack of
planning and a preference for immediate gratification. Kim may be concerned that she has some of
these traits and this could increase her risk of gambling addiction.
Jim may already be a smoker but the stress he is experiencing might have increased his usage. This
could be a response to traumatic events that Jim experienced in childhood, which may influence
how he copes with stressors. Andersen and Teicher (2008) argue that early experiences of trauma
have damaging effects on the developing brain in a sensitive period which creates a vulnerability to
later stress. The stressors Jim is experiencing now may trigger that vulnerability, causing him to self-
medicate with nicotine.
On the other hand, a limitation of stress as a risk factor is to do with cause and effect. Many studies
have shown a strong correlation between stressful experiences and addiction. But stress may not be
a risk factor because the addiction might develop first. Perhaps Jim was already addicted to nicotine
before he became stressed. The addiction then creates stress because of its negative effects on
relationships and finances, etc. Stress and addiction are thus strongly correlated but in this case the
addiction caused the stress. Therefore correlational studies – common in this area – cannot help us
choose between these two competing explanations of the link.
Page 233
1. Brain neurochemistry concerns chemicals in the brain that regulate biological and psychological
functioning. A chemical closely linked to nicotine addiction is the neurotransmitter dopamine. Some
neurons that produce dopamine are in the ventral tegmental area (VTA) of the brain. These neurons
have acetylcholine (ACh) receptors that also respond to nicotine – these receptors are called nicotinic
acetylcholine receptors (nAChRs).
2. Some neurons that produce dopamine are in the ventral tegmental area (VTA) of the brain. These
neurons have acetylcholine (ACh) receptors that also respond to nicotine – these receptors are
called nicotinic acetylcholine receptors (nAChRs). When the neurotransmitter dopamine is released
from the VTA it is transmitted along the mesolimbic pathway to the nucleus accumbens to be
released in the frontal cortex. At the same time, dopamine is also transmitted along the
mesocortical pathway to be released directly in the frontal cortex. The dopamine system creates a
sense of reward and pleasure (e.g. reduced anxiety, mild euphoria, increased alertness) which
gradually becomes associated with smoking through operant conditioning.
Once activated, nAChRs immediately shut down – they are desensitised, which leads to a reduction
in active neurons (downregulation). This continues as long as the person smokes regularly. But when
they stop for a time (e.g. when asleep) nAChRs become functional again, so dopamine neurons
resensitise and become available (upregulation). However, there is no nicotine to bind with the
receptors, so they are overstimulated by ACh and the smoker experiences withdrawal until they
smoke another cigarette. This reactivates the dopamine system, causing pleasure and reinforcing
smoking behaviour.
3. One strength is that there is supporting research evidence. McEvoy et al. (1995) studied smoking
behaviour in people with schizophrenia, some of whom were taking haloperidol, a dopamine
antagonist drug treatment for schizophrenia. Haloperidol treatment increased smoking in this
sample of participants. It appears that this was a form of self-medication, an attempt to achieve the
nicotine ‘hit’ by increasing dopamine release, supporting the central role of dopamine in nicotine
neurochemistry.
One limitation of the brain neurochemistry explanation is that it does not fully explain withdrawal.
The explanation argues that withdrawal depends mainly on the amount of nicotine in the blood. But
Gilbert (1995) points out that these factors are not strongly correlated. Withdrawal can be mild or
severe almost independently of nicotine levels in the blood. Withdrawal instead depends much
more on environment and personality, e.g. people who are strongly neurotic usually experience
worse symptoms than people who are emotionally stable. Therefore withdrawal is better explained
by other factors without reference to nicotine neurochemistry.
4. Some neurons that produce dopamine are in the ventral tegmental area (VTA) of the brain. These
neurons have acetylcholine (ACh) receptors that also respond to nicotine – these receptors are
called nicotinic acetylcholine receptors (nAChRs). When the neurotransmitter dopamine is released
from the VTA it is transmitted along the mesolimbic pathway to the nucleus accumbens to be
released in the frontal cortex. At the same time, dopamine is also transmitted along the
mesocortical pathway to be released directly in the frontal cortex. The dopamine system creates a
sense of reward and pleasure (e.g. reduced anxiety, mild euphoria, increased alertness) which
gradually becomes associated with smoking through operant conditioning.
Once activated, nAChRs immediately shut down – they are desensitised, which leads to a reduction
in active neurons (downregulation). This continues as long as the person smokes regularly. But when
they stop for a time (e.g. when asleep) nAChRs become functional again, so dopamine neurons
resensitise and become available (upregulation). However, there is no nicotine to bind with the
receptors, so they are overstimulated by ACh and the smoker experiences withdrawal until they
smoke another cigarette. They experience strong cravings to smoke again and, when they do, the
dopamine system is reactivated, causing pleasure and reinforcing smoking behaviour.
One strength is that there is supporting research evidence. McEvoy et al. (1995) studied smoking
behaviour in people with schizophrenia, some of whom were taking haloperidol, a dopamine
antagonist drug treatment for schizophrenia. Haloperidol treatment increased smoking in this
sample of participants. It appears that this was a form of self-medication, an attempt to achieve the
nicotine ‘hit’ by increasing dopamine release, supporting the central role of dopamine in nicotine
neurochemistry.
However, a limitation of the explanation is that it only considers dopamine. Any such explanation of
nicotine addiction is limited because there are many other neural mechanisms involved. The current
picture suggests a highly complex interaction of several systems, such as GABA and endogenous
opioids, as well as dopamine. Therefore an explanation based on dopamine alone is at best
incomplete.
Another limitation of the brain neurochemistry explanation is that it does not fully explain
withdrawal. The explanation argues that withdrawal depends mainly on the amount of nicotine in
the blood. But Gilbert (1995) points out that these factors are not strongly correlated. Withdrawal
can be mild or severe almost independently of nicotine levels in the blood. Withdrawal instead
depends much more on environment and personality, e.g. people who are strongly neurotic usually
experience worse symptoms than people who are emotionally stable. Therefore withdrawal is better
explained by other factors without reference to nicotine neurochemistry.
Page 235
1. Any other stimuli present at the same time as (or just before) smoking (and intake of nicotine)
become associated with the pleasurable effect of smoking (i.e. classical conditioning has taken
place). These stimuli become secondary reinforcers (rewarding in their own right). Certain
environments (e.g. pubs) and certain people or objects (e.g. a lighter) create a sense of anticipation
and pleasure and thus become secondary reinforcers. The secondary reinforcers also act as cues,
because their presence produces a similar response to nicotine itself.
2. Smoking is intrinsically rewarding (not learned). It doesn’t have to be learned because of the
biologically determined effects of nicotine on the dopamine reward system. The pleasure created by
nicotine reinforces the behaviour so the individual is more likely to smoke again. Any other stimuli
present at the same time as (or just before) smoking (and intake of nicotine) become associated with
the pleasurable effect of smoking (i.e. classical conditioning has taken place). These stimuli become
secondary reinforcers (rewarding in their own right). Certain environments (e.g. pubs) and certain
people or objects (e.g. a lighter) create a sense of anticipation and pleasure and thus become
secondary reinforcers. The secondary reinforcers also act as cues, because their presence produces a
similar response to nicotine itself. This is called cue reactivity and is indicated by three main
elements: a self-reported desire to smoke, physiological signs of reactivity to a cue (e.g. heart rate),
and objective behavioural indicators when the cue is present (e.g. how many ‘draws’ are taken on
the cigarette).
Another strength is that learning theory forms the basis of treatment programmes for nicotine addiction.
Aversion therapy uses counterconditioning by associating smoking with a self-administered electric
shock (aversive stimulus). Smith (1988) found that 52% of clients who completed such a programme
were still abstaining after one year. This suggests that treatments based on learning theory can save NHS
resources, improve health and save lives.
4. One explanation for nicotine addiction is operant conditioning. If the consequence of a behaviour
is rewarding to an individual, then that behaviour is more likely to occur again. Smoking can create
feelings of mild euphoria, which positively reinforce the smoking behaviour. Nicotine is a powerful
reinforcer because of its physiological effects on the dopamine reward system in the mesolimbic
pathway. Nicotine stimulates the release of dopamine which produces the feeling of mild euphoria.
One strength of the operant conditioning explanation is support from non-human animal studies.
Levin et al. (2010) gave rats the choice of self-administering doses of nicotine or water by licking one
of two water spouts (one with nicotine). The rats licked the nicotine water spout significantly more
often. This behaviour increased in frequency with every subsequent training session. The effects of
nicotine positively reinforce nicotine self-administration in rats, suggesting a similar mechanism in
humans.
Even so, nicotine addiction in humans is undoubtedly more complex than it is in rats. For example,
cognitive factors influence learning processes which means humans think about reinforcers in a way
that rats do not. There are also strong subjective desires/cravings in human cue reactivity that are
hard to understand in rats. Therefore, animal studies can help us understand learning processes in
addiction but findings must be treated cautiously because other factors are involved in human
addiction which make it more complex.
Another explanation for nicotine addiction is cue reactivity through classical conditioning. Any other
stimuli present at the same time as (or just before) smoking (and intake of nicotine) become
associated with the pleasurable effect of smoking (i.e. classical conditioning has taken place). These
stimuli become secondary reinforcers (rewarding in their own right). Certain environments (e.g.
pubs) and certain people or objects (e.g. a lighter) create a sense of anticipation and pleasure and
thus become secondary reinforcers. Even the seemingly harsh feeling of smoke hitting the back of
the throat can become a secondary reinforcer because it is associated with the pleasurable impact of
nicotine.
Carter and Tiffany’s (1999) meta-analysis looked at studies that presented smokers and non-smokers
with images of smoking-related cues (e.g. lighters and ashtrays). Cravings were measured through
self-reported ratings and physiological measures such as heart rate were also taken. Dependent
smokers reacted most strongly to these cues (e.g. increased arousal and cravings). This suggests that
dependent smokers learn secondary associations between smoking-related stimuli and the
pleasurable effects of smoking, making this behaviour more likely to occur again.
A further strength is the real-life application of learning theory. Aversion therapy works on the basis
of counterconditioning nicotine addiction by associating the pleasurable effects of smoking with an
aversive stimulus such as a painful electric shock. Smith (1988) found that 52% of participants who
gave themselves electric shocks whenever they engaged in smoking-related behaviours were still
abstaining after one year. Such effective applications of learning theory have measurable and
significant practical benefits in terms of reducing NHS spending and improving health.
However, studies of the effectiveness of aversion therapy are sometimes methodologically weak.
The above study is a case in point because it lacked a placebo control group. This undermines the
validity of any conclusions – we cannot tell whether 52% of participants abstaining after one year is a
good outcome when there is nothing to compare the figure with. Also, higher-quality studies suggest
that any benefits of aversion therapy are relatively short-lived compared with other therapies (Hajek
and Stead 2001). Therefore learning theory may not be a useful basis for an effective treatment of
nicotine addiction after all.
Page 237
1. A partial reinforcement schedule leads to more persistent behaviour change. When only some
bets are rewarded there is an unpredictability about which gambles will pay off, which is enough to
maintain the gambling even when most gambles are not rewarded. A variable reinforcement
schedule is a partial reinforcement schedule where the intervals between rewards vary. This kind of
reinforcement schedule is highly unpredictable. For example, a slot machine might pay out after an
average of 25 spins, but not on every 25th spin. The first payout might be on the 11th spin, then the
21st, then the 38th, etc.
2. Positive reinforcement in gambling comes from a direct gain (e.g. winning money), and from the
‘buzz’ that accompanies a gamble (which is exciting). Negative reinforcement occurs because
gambling can offer a distraction from aversive stimuli (e.g. the anxieties of everyday life). Skinner’s
research with rats found that continuous reinforcement schedules do not lead to persistent
behaviour change. A partial reinforcement schedule leads to more persistent behaviour change.
When only some bets are rewarded there is an unpredictability about which gambles will pay off,
which is enough to maintain the gambling even when most gambles are not rewarded. A variable
reinforcement schedule is a partial reinforcement schedule where the intervals between rewards
vary. This kind of reinforcement schedule is highly unpredictable. Whilst it takes longer for learning
to be established if the reinforcement schedule is variable, once it is established it is more resistant
to extinction. The gambler learns that they will not win with every gamble, but they will eventually
win if they persist (and then the gambling is reinforced). This explains why some people continue to
gamble despite big losses.
3. One explanation for gambling is learning theory. One strength of the learning theory explanation
is research support. Dickerson (1979) found high-frequency (dependent) gamblers in natural settings
were more likely than low-frequency gamblers to place bets in the last two minutes before a race.
These gamblers may delay betting to prolong the rewarding excitement of the ‘build up’ (e.g. the
tension they get from the radio commentary heard in the betting shop). This is evidence for the role
of positive reinforcement on gambling behaviour in frequent gamblers in a more ‘real-life’ setting
than a psychology lab.
4. Positive reinforcement in gambling comes from a direct gain (e.g. winning money), and from the
‘buzz’ that accompanies a gamble (which is exciting). Negative reinforcement occurs because
gambling can offer a distraction from aversive stimuli (e.g. the anxieties of everyday life). Skinner’s
research with rats found that continuous reinforcement schedules do not lead to persistent
behaviour change. A partial reinforcement schedule leads to more persistent behaviour change.
When only some bets are rewarded there is an unpredictability about which gambles will pay off,
which is enough to maintain the gambling even when most gambles are not rewarded. A variable
reinforcement schedule is a partial reinforcement schedule where the intervals between rewards
vary. This kind of reinforcement schedule is highly unpredictable. Whilst it takes longer for learning
to be established if the reinforcement schedule is variable, once it is established it is more resistant
to extinction. The gambler learns that they will not win with every gamble, but they will eventually
win if they persist (and then the gambling is reinforced). This explains why some people continue to
gamble despite big losses.
Cue reactivity explains how associated stimuli can trigger gambling. In the course of their gambling,
an individual will experience many secondary reinforcers – things they associate with the exciting
arousal experienced through gambling. For example, the features referred to in the question such as
the noises and flashing lights from the slot machine can all cue the arousal that the gambler craves.
These low-level reminders are difficult to avoid. These cues can both maintain gambling and cause
its reinstatement after a period of abstinence.
One strength of the learning theory explanation is research support. Dickerson (1979) found high-
frequency (dependent) gamblers in natural settings were more likely than low-frequency gamblers
to place bets in the last two minutes before a race. These gamblers may delay betting to prolong the
rewarding excitement of the ‘build up’ (e.g. the tension they get from the radio commentary heard
in the betting shop). This is evidence for the role of positive reinforcement on gambling behaviour in
frequent gamblers in a more ‘real-life’ setting than a psychology lab.
However, we should note that this study did have some methodological problems. For example, only
one person observed betting behaviour in the shops. This meant there was no way to check the
reliability of the observations, which would normally have been done by calculating a correlation
between two observers’ observations (inter-observer reliability). This means that observer bias may
not have been eliminated so the findings of the study might not be valid.
Learning theory attempts to explain the whole cycle of addiction, from initiation through
maintenance and cessation to relapse. But a limitation is that some psychologists believe that parts
of the cycle are poorly explained by learning theory. For example, people who dabble with gambling
experience the same reinforcements as people who become addicted. Most people who try
gambling never become addicted even though they observe others enjoying it, experience rewarding
excitement and are distracted from everyday stress. So there must be other factors involved in
addiction. Perhaps a genetic vulnerability or ways of thinking about gambling may explain why the
addiction cycle begins for some people but not for others. Therefore, learning theory can explain
some aspects of the addiction cycle, but others may be better explained by biological and cognitive
theories.
Page 239
1. The cause of gambling addiction lies in the fact that addicts hold beliefs about gambling that are
irrational (i.e. cognitive biases). Such cognitions may involve attention and/or memory processes –
addiction occurs and is maintained due to the selective attention to and memory of gambling-
related information. One example concerns perceived skill and judgement – gambling addicts have
an illusion of control and overestimate their skill against chance (e.g. believing themselves especially
skilled at choosing lottery numbers).
2. We all have expectations about the future benefit and costs of our behaviour. If people expect the
benefits of gambling to outweigh the costs, then addiction becomes more likely. This sounds like a
conscious decision but it is not. This is because memory and attention processes do not operate in a
rational and logical manner. The cause of gambling addiction lies in the fact that addicts hold beliefs
about gambling that are irrational (i.e. cognitive biases). Such cognitions may involve attention
and/or memory processes – addiction occurs and is maintained due to the selective attention to and
memory of gambling-related information. An example of a cognitive bias concerns perceived skill
and judgement. Addicted gamblers have an illusion of control and overestimate their skill against
chance (e.g. believing themselves especially skilled at choosing lottery numbers).
3. In terms of the cognitive explanation, we all have expectations about the future benefit and costs of
our behaviour. If people expect the benefits of gambling to outweigh the costs, then addiction becomes
more likely. This sounds like a conscious decision but it is not. This is because memory and attention
processes do not operate in a rational and logical manner. The cause of gambling addiction lies in the
fact that addicts hold beliefs about gambling that are irrational (i.e. cognitive biases). Such cognitions
may involve attention and/or memory processes – addiction occurs and is maintained due to the
selective attention to and memory of gambling-related information. An example of a cognitive bias
concerns perceived skill and judgement. Addicted gamblers have an illusion of control and overestimate
their skill against chance (e.g. believing themselves especially skilled at choosing lottery numbers).
There is also a learning explanation. Cue reactivity explains how associated stimuli can trigger
gambling. In the course of their gambling, an individual will experience many secondary reinforcers –
things they associate with the exciting arousal experienced through gambling. For example, the
things referred to in the question such as the noises and flashing lights from the slot machine can all
cue the arousal that the gambler craves. These low-level reminders are difficult to avoid. These cues
can both maintain gambling and cause its reinstatement after a period of abstinence.
4. According to cognitive theory, we all have expectations about the future benefit and costs of our
behaviour. If people expect the benefits of gambling to outweigh the costs, then addiction becomes
more likely. But memory and attention processes do not operate in a rational and logical manner.
The cause of gambling addiction lies in the fact that addicts hold beliefs about gambling that are
irrational (i.e. cognitive biases). Such cognitions may involve attention and/or memory processes –
addiction occurs and is maintained due to the selective attention to and memory of gambling-
related information.
Rickwood et al. (2000) identified four categories of cognitive bias. First, skill and
judgement – gambling addicts have an illusion of control and overestimate their skill against chance.
Second, personal traits/ritual behaviours – addicts believe they are especially lucky or engage in
superstitious behaviour. Third, selective recall – gamblers remember their wins but ignore/forget
their losses. And fourth, faulty perceptions – gamblers have distorted views of chance (e.g. believing
that a losing streak cannot last).
One strength is the evidence supporting the cognitive theory. Michalczuk et al. (2011) compared 30
addicted gamblers with a non-gambling control group. The addicted gamblers had significantly
higher levels of gambling-related cognitive biases. They were also more impulsive and were more
likely to prefer immediate rewards, even if the rewards were smaller than those they could gain if
they waited. These findings support the view that there is a strong cognitive component to gambling
addiction.
However, this study does highlight one limitation of cognitive research into gambling addiction.
Cognitive biases were measured using the Gambling-Related Cognitions Scale, which gives a score
covering five types of bias (illusion of control, etc.). A gambler’s high score could mean that they
have frequent biased cognitions (as the study suggests). But it could equally mean that they use their
beliefs to justify their behaviour and their thinking isn’t biased at all. Therefore the findings of this
study may not truly reflect a gambler’s actual beliefs about gambling.
Another strength is that cognitive theory highlights how cognitive biases appear to be automatic in
addicted gamblers. McCusker and Gettings (1997) asked participants to complete a modified Stroop
task. Participants had to pay attention to ink colour while ignoring word meanings. Gamblers took
longer to do this compared to a control group when gambling words were shown. This suggests
gamblers have an automatic cognitive bias to pay attention to such information. This supports the
view of the cognitive explanation that many cognitive biases influence addiction and operate
without us even being aware we have them. This is very difficult for a purely learning-based theory
to explain.
Another limitation is that cognitive biases are only proximate explanations of gambling behaviour. In
other words, cognitive theory describes the addicted gambler’s biased beliefs about chance but does
not explain what causes these beliefs. To understand this, we have to go back further in the chain of
causation to find the ultimate explanation. For example, it may be that gamblers have learned to
think in a biased way, which suggests that learning theory may be a more valid explanation of the
true causes of gambling addiction.
Page 241
1. One strength of drug therapy is research evidence that it is effective. Hartmann-Boyce et al.
(2018) did a meta-analysis of high-quality studies into NRT and concluded that all forms of NRT were
more effective in helping smokers quit than placebo or no therapy at all. Using NRT increased the
rate of quitting by up to 60%, without clients becoming dependent on the nicotine in the NRT
product. Therefore NRT is an effective drug therapy which may save lives, improve health and
reduce costs to the NHS.
However, a limitation of this study is that the researchers only included studies that had been
published. This means there is a risk of publication bias. Published studies are more likely to show
‘positive’ results, i.e. supporting the effectiveness of NRT. Studies with non-significant results or that
show no effect are not usually published because a negative effect is not interesting. The
researchers did write to manufacturers of NRT products to track down unpublished studies but the
response was poor. This means that NRT may not be as effective as the findings of this meta-analysis
suggest.
2. One drug therapy is nicotine replacement therapy (NRT) which uses gum, inhalers or patches to
give the smoker a clean, controlled dose of nicotine which operates neurochemically just like
nicotine from cigarettes. Nicotine is an agonist which activates nAChRs in the mesolimbic pathway of
the brain and stimulates dopamine release in the nucleus accumbens into the frontal cortex. The
amount of nicotine can be reduced by using smaller and smaller patches which means the
withdrawal syndrome can be managed over a period of several weeks, reducing the unpleasantness
of the symptoms.
A promising candidate for drug treatment of gambling addiction is the opioid antagonist naltrexone
(normally used to treat heroin addiction). Gambling may tap into the same dopamine reward system
as heroin, nicotine and other drugs. Opioid antagonists enhance the release of the neurotransmitter
GABA in the mesolimbic pathway. Increased GABA activity reduces the release of dopamine in the
nucleus accumbens (and ultimately the frontal cortex). This has been linked with subsequent
reductions in gambling behaviour.
3. One strength of drug therapy is research evidence that it is effective. Hartmann-Boyce et al.
(2018) did a meta-analysis of high-quality studies into NRT and concluded that all forms of NRT were
more effective in helping smokers quit than placebo or no therapy at all. Using NRT increased the
rate of quitting by up to 60%, without clients becoming dependent on the nicotine in the NRT
product. Therefore NRT is an effective drug therapy which may save lives, improve health and
reduce costs to the NHS.
4. There are three main types of drug therapy used to treat addiction. Aversive drugs produce
unpleasant consequences such as vomiting when paired with a substance of addiction such as
alcohol. A client associates drinking alcohol with unpleasant outcomes rather than with enjoyment
(classical conditioning). Agonists are drug substitutes, providing a similar effect to the addictive
substance. They stabilise the individual because they are used to control the withdrawal syndrome.
Antagonists block neuron receptor sites so that the substance of dependence cannot have its usual
effects, especially the feeling of euphoria.
One agonist drug therapy is nicotine replacement therapy (NRT) which uses gum, inhalers or patches
to give the smoker a clean, controlled dose of nicotine which operates neurochemically just like
nicotine from cigarettes. Nicotine is an agonist which activates nAChRs in the mesolimbic pathway of
the brain and stimulates dopamine release in the nucleus accumbens into the frontal cortex. The
amount of nicotine can be reduced by using smaller and smaller patches which means the
withdrawal syndrome can be managed over a period of several weeks, reducing the unpleasantness
of the symptoms.
A promising candidate for drug treatment of gambling addiction is the opioid antagonist naltrexone
(normally used to treat heroin addiction). Gambling may tap into the same dopamine reward system
as heroin, nicotine and other drugs. Opioid antagonists enhance the release of the neurotransmitter
GABA in the mesolimbic pathway. Increased GABA activity reduces the release of dopamine in the
nucleus accumbens (and ultimately the frontal cortex). This has been linked with subsequent
reductions in gambling behaviour.
One strength of drug therapy is research evidence that it is effective. Hartmann-Boyce et al. (2018)
did a meta-analysis of high-quality studies into NRT and concluded that all forms of NRT were more
effective in helping smokers quit than placebo or no therapy at all. Using NRT increased the rate of
quitting by up to 60%, without clients becoming dependent on the nicotine in the NRT product.
Therefore NRT is an effective drug therapy which may save lives, improve health and reduce costs to
the NHS.
However, a limitation of this study is that the researchers only included studies that had been
published. This means there is a risk of publication bias. Published studies are more likely to show
‘positive’ results, i.e. supporting the effectiveness of NRT. Studies with non-significant results or that
show no effect are not usually published because a negative effect is not interesting. The
researchers did write to manufacturers of NRT products to track down unpublished studies but the
response was poor. This means that NRT may not be as effective as the findings of this meta-analysis
suggest.
A limitation of all drug therapies is side effects. Common side effects of NRT are sleep disturbances,
dizziness and headaches. In relation to gambling, the dose of naltrexone required leads to worse side
effects than when the drug is used to treat opioid addiction. Such side effects mean there is a risk
that the individual will discontinue the therapy, especially when they have also lost the pleasurable
effects of the addiction. The risk of side effects should be carefully weighed against the benefits of
the drug therapy, and against psychological alternatives such as covert sensitisation.
Another strength is the removal of addiction stigma. Drug therapy encourages a growing perception
that drug addiction is a medical problem. Research is rapidly revealing the neurochemical and
genetic basis of addiction. This is changing the view that addiction is a form of psychological or moral
failure. Addiction therefore becomes less stigmatised as more people accept that it may not be the
addicted person’s ‘fault’. This is a strength because in turn it could encourage more addicts to seek
treatment.
Page 243
1. The main difference is that aversion therapy is in vivo (the unpleasant stimulus is actually
experienced) whereas covert sensitisation is in vitro (the unpleasant stimulus is imagined rather than
actually experienced). Traditional aversion therapy is actually experienced by the client in the form
of an unpleasant consequence associated with the addictive drug or behaviour through classical
conditioning. As an addiction can develop through repeated associations between a drug and the
pleasurable state of arousal caused by it, it follows that the addiction can be reduced by associating
the drug with an unpleasant state (counterconditioning). In covert sensitisation, the client imagines
the unpleasant consequences rather than experiencing them in reality.
3. One limitation is that aversion studies suffer from methodological problems. Hajek and Stead
(2001) reviewed 25 studies of aversion therapy for nicotine addiction, claiming it was impossible to
judge its effectiveness because the studies had glaring methodological problems. In most studies
‘blind’ procedures were not used, so the researchers who evaluated the outcomes of the studies
knew which participants had received therapy or placebo. Such inbuilt biases generally make
therapy appear more effective than it actually is, which challenges the validity of the findings.
Another limitation of aversion therapy is that it lacks long-term effectiveness. Fuller et al. (1986)
gave disulfiram to a group of people addicted to alcohol every day for one year. These participants
and a placebo control group had weekly counselling sessions for six months as well. After one year,
there was no difference in total abstinence from drinking between the two groups. This suggests
that traditional aversion therapy is no more effective for alcohol addiction than placebo, so it may be
that counselling had the greater impact.
One limitation is that aversion studies suffer from methodological problems. Hajek and Stead (2001)
reviewed 25 studies of aversion therapy for nicotine addiction, claiming it was impossible to judge its
effectiveness because the studies had glaring methodological problems. In most studies ‘blind’
procedures were not used, so the researchers who evaluated the outcomes of the studies knew
which participants had received therapy or placebo. Such inbuilt biases generally make therapy
appear more effective than it actually is, which challenges the validity of the findings.
Another limitation of aversion therapy is that it lacks long-term effectiveness. Fuller et al. (1986)
gave disulfiram to a group of people addicted to alcohol every day for one year. These participants
and a placebo control group had weekly counselling sessions for six months as well. After one year,
there was no difference in total abstinence from drinking between the two groups. This suggests
that traditional aversion therapy is no more effective for alcohol addiction than placebo, so it may be
that counselling had the greater impact.
Traditional aversion therapy has been largely superseded by covert sensitisation. This is a type of
aversion therapy, but in vitro rather than in vivo, in that the unpleasant stimulus is imagined rather
than actually experienced. People with nicotine addiction are first encouraged to relax, then to
conjure up a vivid image of themselves smoking a cigarette (CS), followed by the most unpleasant
consequences (CR) such as vomiting (including graphic details of smells, sights, etc.). The association
formed (classical conditioning) should reduce smoking behaviour.
A strength of covert sensitisation is research support. McConaghy et al. (1983) found that after one
year, gambling addicts who had received covert sensitisation were much more likely to have reduced
their gambling activity than those who received aversion therapy. The participants also reported
experiencing fewer and less intense gambling cravings than the aversion-treated participants. This is
one of many studies suggesting covert sensitisation is a highly promising treatment for addiction to
alcohol, nicotine and gambling.
Both interventions can be evaluated in terms of ethical issues. Ethics is a limitation of traditional
aversion therapy: inflicting nausea and pain can be seen as unethical, and clients could lose their
dignity by vomiting in social situations. However, ethics is a strength of covert sensitisation, which
generally avoids such criticism. It does not induce vomiting or other self-shaming behaviours,
allowing individuals to retain their dignity and self-esteem. This means that aversion therapy is
questionable because the ethical costs are high but the benefits in terms of effectiveness are low.
The relationship is the other way round for covert sensitisation, making it the preferred intervention.
Page 245
1. Cognitive behaviour therapy (CBT) has two key elements: cognitive – identify, tackle and replace
cognitive distortions that underlie the addiction (functional analysis), and behavioural – skills-
training helps a client develop coping behaviours to avoid the high-risk situations that trigger the
addiction-related behaviour. This is in contrast to aversion therapy which just deals with the learned
behavioural aspects of addiction and not the cognitive aspects.
2. Cognitive behaviour therapy (CBT) aims to tackle distorted thinking and develop coping
behaviours. CBT has two key elements. The cognitive element aims to identify, tackle and replace
cognitive biases that underlie the addiction (functional analysis). The behavioural element includes
skills-training, which helps the client develop coping behaviours to avoid the high-risk situations that
trigger the addiction-related behaviour.
CBT starts with the client and therapist together identifying the high-risk situations that lead to the
client’s substance abuse or gambling. The therapist reflects on what the client is thinking before,
during and after such a situation. The therapist’s role in the relationship is to challenge the client’s
cognitive biases. Cognitive restructuring confronts and challenges faulty beliefs. For example, a
gambler may hold faulty beliefs about probability, randomness and control in gambling. In the initial
education phase, the therapist may give the client information about how to challenge these faulty
beliefs.
People seeking treatment for addiction may have a huge range of problems but only one way of
dealing with them – their addiction. CBT helps to replace this strategy with more constructive ones
by developing new skills. These include specific skills such as anger management or assertiveness
training but also broader social skills to help clients cope with encountering the drug of addiction in
social situations.
3. One strength of CBT is research support. Petry et al. (2006) found that gamblers assigned to a
treatment condition (Gamblers Anonymous meetings + CBT) were gambling less than a control
group (GA meetings only) 12 months later. An important feature of this study is that the participants
were randomly allocated to the CBT group or the control group, and there were no significant
differences in the extent of their gambling at the start. Therefore, these findings are strong evidence
that CBT is effective in treating gambling addiction, from a methodologically sound study.
One limitation, however, is a lack of long-term gains. Cowlishaw et al. (2012) found that CBT has
definite beneficial effects for up to three months after treatment. However, after 9–12 months,
there were no significant differences between CBT and control groups. In addition, the researchers
also concluded that the studies they reviewed were of such poor methodological quality that they
probably overestimated the efficacy of treatment with CBT. Therefore, CBT may be effective in
reducing gambling behaviour, but the ‘durability of therapeutic gain’ is unclear.
Another strength is that CBT is especially useful in preventing relapse. Relapse is not an unusual
event in addiction recovery. Addiction is really a cycle of cessation and relapse, so a therapy that can
prevent relapse is very beneficial. CBT presents a very realistic view of recovery and has built into it
the probability of relapse. Relapse is therefore not seen as a failure but as an opportunity for clients
to engage in further cognitive restructuring and learning. Relapse is inevitable but also manageable
as long as the client’s psychological and social functioning improves. Therefore, as long as clients
stick with CBT, it can help them to recover quickly from relapse by maintaining a stable lifestyle.
4. Cognitive behaviour therapy (CBT) aims to tackle distorted thinking and develop coping
behaviours. CBT has two key elements. The cognitive element aims to identify, tackle and replace
cognitive biases that underlie the addiction (functional analysis). The behavioural element includes
skills-training, which helps the client develop coping behaviours to avoid the high-risk situations that
trigger the addiction-related behaviour.
CBT starts with the client and therapist together identifying the high-risk situations that lead to the
client’s substance abuse or gambling. Together they reflect on what the client is thinking before,
during and after such a situation. The therapist’s role in the relationship is to challenge the client’s
cognitive biases. This process of functional analysis continues throughout the treatment, not just at
the beginning of the therapy.
One strength of CBT is research support. Petry et al. (2006) found that gamblers assigned to a
treatment condition (Gamblers Anonymous meetings + CBT) were gambling less than a control
group (GA meetings only) 12 months later. An important feature of this study is that the participants
were randomly allocated to the CBT group or the control group, and there were no significant
differences in the extent of their gambling at the start. Therefore, these findings are strong evidence, from a methodologically sound study, that CBT is effective in treating gambling addiction.
One limitation, however, is a lack of long-term gains. Cowlishaw et al. (2012) found that CBT has
definite beneficial effects for up to three months after treatment. However, after 9–12 months,
there were no significant differences between CBT and control groups. In addition, the researchers concluded that the studies they reviewed were of such poor methodological quality that they
probably overestimated the efficacy of treatment with CBT. Therefore, CBT may be effective in
reducing gambling behaviour, but the ‘durability of therapeutic gain’ is unclear.
One limitation is that aversion therapy studies suffer from methodological problems. Hajek and Stead (2001)
reviewed 25 studies of aversion therapy for nicotine addiction, claiming it was impossible to judge its
effectiveness because the studies had glaring methodological problems. In most studies ‘blind’
procedures were not used, so the researchers who evaluated the outcomes of the studies knew
which participants had received therapy or placebo. Such inbuilt biases generally make therapy
appear more effective than it actually is, which challenges the validity of the findings.
Another limitation of aversion therapy is that it lacks long-term effectiveness. Fuller et al. (1986)
gave daily disulfiram for one year to a group of people addicted to alcohol. These participants
and a placebo control group had weekly counselling sessions for six months as well. After one year,
there was no difference in total abstinence from drinking between the two groups. This suggests
that traditional aversion therapy is no more effective for alcohol addiction than placebo, so it may be
that counselling had the greater impact.
Page 247
1. According to Ajzen’s (1985, 1991) theory of planned behaviour (TPB), changes in addiction-related
behaviour can be predicted from our intentions to change, which in turn are influenced by three
factors.
‘Personal attitudes’ refers to the entire collection of attitudes that the addicted person holds about
their addiction. Their overall attitude is formed from weighing up the balance of favourable and
unfavourable attitudes. For example, ‘it gives me a thrill’ and ‘it’s an escape’ versus ‘I lose more
money than I win’ and ‘it makes me feel anxious’.
Subjective norms are the addicted person’s beliefs about whether key people in their life would
approve or disapprove of their addictive behaviour. If the person concludes that others are unhappy
about their gambling, for instance, this would make them less likely to plan/intend to gamble. The
most influential aspect of subjective norms is the addicted person’s perception. For example,
parents may express favourable attitudes towards something in general (e.g. getting drunk) but
disapprove of their own children doing it. Nevertheless, what matters is the child’s perception, which may well be that their parents approve.
Perceived behavioural control is about how much control we think we have over our behaviour. This
is called self-efficacy. For example, does the addicted gambler believe they are capable of giving up
gambling? This may be related to their perception of resources available to them (e.g. support, time,
skill, determination).
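Ajzen (1991) treats intention as a weighted function of these three factors. Purely as a revision aid (the weights and example scores below are hypothetical, not values from the theory), the structure of the prediction can be sketched in Python:

# Minimal sketch of the TPB prediction: intention to change is a weighted
# sum of the three factors. Weights and example scores are hypothetical.

def intention_to_change(attitude, subjective_norm, perceived_control,
                        w_att=0.4, w_norm=0.3, w_control=0.3):
    """Each factor scored from 0 (works against change) to 1 (supports change)."""
    return (w_att * attitude
            + w_norm * subjective_norm
            + w_control * perceived_control)

# A gambler whose attitudes weigh slightly against quitting (0.4), who
# believes others strongly disapprove of their gambling (0.9), but who has
# low self-efficacy about giving up (0.2):
print(intention_to_change(0.4, 0.9, 0.2))  # approx. 0.49: a weak-to-moderate intention

The point of the sketch is simply that no single factor determines the intention: a strong subjective norm cannot fully compensate for low perceived behavioural control.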
2. The theory of planned behaviour could be used to change addictive behaviour by changing the
addicted person’s subjective norms. For instance, adolescents often overestimate how much their
peers are drinking, so providing messages such as ‘Other people are not drinking as much as you
think’ could help change subjective norms as long as the source is credible.
An intervention could also change the person’s perceived behavioural control by increasing their
self-efficacy. This could involve encouraging them to adopt an optimistic outlook and develop
confidence in their ability not to gamble, for instance. Support from other people could also develop
perceived control.
3. One strength is that there is some research support. Hagger et al. (2011) found that the TPB’s
three factors all predicted an intention to limit drinking. Intentions were also found to influence
actual alcohol consumption after one and three months. These findings support predictions derived
from the theory, which suggests it is valid. However, the study failed to predict some alcohol-related
behaviours (e.g. binge-drinking), so the success of the TPB depends on the behaviour being
measured. This suggests that even supportive research indicates that the predictive validity of the
TPB is limited.
4. According to Ajzen’s (1985, 1991) theory of planned behaviour (TPB), changes in addiction-related
behaviour can be predicted from our intentions to change, which in turn are influenced by three
factors.
‘Personal attitudes’ refers to the entire collection of attitudes that the addicted person holds about
their addiction. Their overall attitude is formed from weighing up the balance of favourable and
unfavourable attitudes. For example, ‘it gives me a thrill’ and ‘it’s an escape’ versus ‘I lose more
money than I win’ and ‘it makes me feel anxious’.
Subjective norms are the addicted person’s beliefs about whether key people in their life would
approve or disapprove of their addictive behaviour. If the person concludes that others are unhappy
about their gambling, for instance, this would make them less likely to plan/intend to gamble. The
most influential aspect of subjective norms is the addicted person’s perception. For example,
parents may express favourable attitudes towards something in general (e.g. getting drunk) but
disapprove of their own children doing it. Nevertheless, what matters is the child’s perception, which may well be that their parents approve.
Perceived behavioural control is about how much control we think we have over our behaviour. This
is called self-efficacy. For example, does the addicted gambler believe they are capable of giving up
gambling? This may be related to their perception of resources available to them (e.g. support, time,
skill, determination).
Simon’s intention to change would be key to whether he is able to stop smoking. Simon must weigh
up the pros and cons of smoking and this will determine whether he has a favourable or
unfavourable attitude towards it. For example, does the thrill or buzz he gets from smoking
outweigh his fear of the negative effects on his health? Also, what are Simon’s perceptions of the
social norms of smoking? Do others around him think it is a disgusting habit? Do they think less of
Simon for continuing to smoke? Finally, what is Simon’s perceived behavioural control? Does he see
himself as able to give up smoking, or does he think he lacks sufficient willpower?
One strength is that there is some research support. Hagger et al. (2011) found that the TPB’s three
factors all predicted an intention to limit drinking. Intentions were also found to influence actual
alcohol consumption after one and three months. These findings support predictions derived
from the theory, which suggests it is valid. However, the study failed to predict some alcohol-related
behaviours (e.g. binge-drinking), so the success of the TPB depends on the behaviour being
measured. This suggests that even supportive research indicates that the predictive validity of the
TPB is limited.
One limitation is that the TPB does not explain the intention-behaviour gap. Miller and Howell
(2005) found strong support for the element of TPB that predicts gambling intentions from attitudes,
norms and perceived behavioural control in underage teenagers. However, the model did not
predict the occurrence of actual gambling behaviour. Psychologists now question whether TPB is an
effective model of behaviour change. If the theory can’t predict behaviour change, it is difficult to
create drug-related interventions that bridge the gap between the intention to reduce a behaviour and the behaviour itself.
Page 249
1. Ajzen’s (1985, 1991) theory of planned behaviour (TPB) suggests we change behaviours in a
rational way, evaluating positive and negative consequences. Addiction-related behaviour can be
predicted from a person’s intentions. These intentions arise from three key influences: first, personal
attitudes towards the addiction, second, subjective norms (i.e. the perception of what others think)
and third, perceived behavioural control.
Prochaska and DiClemente (1983) suggest a six-stage model in which overcoming addiction is a
cyclical process. The model is based on two insights about behavioural change: first, people differ in
how ready they are to change, and second, the usefulness of a treatment intervention depends on
the stage the person has reached.
2. Prochaska and DiClemente (1983) suggest a six-stage model in which overcoming addiction is a
cyclical process.
In Stage 1 Precontemplation, the person is not thinking about changing their addiction-related
behaviour within the next six months, because of either denial or demotivation. Intervention should
focus on helping the person consider the need for change.
In Stage 2 Contemplation, the person is now thinking about making a change in the next six months.
Intervention should focus on helping them see that the pros outweigh the cons and help them reach
a decision to change.
In Stage 3 Preparation, the individual believes that the benefits are greater than the costs and has
decided to make a change within the next month. But because they have not decided how to make
the change, intervention should give individuals support in constructing a plan (e.g. to ring a
helpline).
In Stage 4 Action, the person has done something to change their addictive behaviour in the last six
months (e.g. they have removed alcohol from the house). Intervention should focus on coping skills
needed to quit.
In Stage 5 Maintenance, the person has maintained some behavioural change (e.g. stopped
gambling) for more than six months. Intervention should focus on relapse prevention.
In Stage 6 Termination, abstinence becomes automatic and the person no longer returns to
addictive behaviours to cope with anxiety, stress, loneliness, etc. Intervention is not required.
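Because each stage is tied to a recommended intervention focus, the model can be summarised, purely as a revision aid, as a stage-to-intervention mapping. The Python sketch below simply restates the six stages described above; the dictionary structure is a convenient summary, not part of Prochaska and DiClemente’s model itself:

# Revision summary: each stage paired with the intervention focus the model
# recommends. Stage names follow Prochaska and DiClemente (1983).

STAGES = {
    "Precontemplation": "Help the person consider the need for change.",
    "Contemplation": "Help them see that the pros of change outweigh the cons.",
    "Preparation": "Support them in constructing a plan (e.g. ring a helpline).",
    "Action": "Focus on the coping skills needed to quit.",
    "Maintenance": "Focus on relapse prevention.",
    "Termination": "No intervention is required.",
}

def intervention_for(stage):
    """The model's key insight: match the intervention to the stage reached."""
    return STAGES[stage]

# The process is cyclical: relapse returns the person to an earlier stage
# (e.g. from Maintenance back to Contemplation) rather than ending it.
print(intervention_for("Contemplation"))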
3. A strength is that the model recognises the true dynamic nature of addictive behaviour.
Traditional theories have considered recovery from addiction as an ‘all-or-nothing’ event. However,
the six-stage model stresses a dynamic and continuing process and the importance of time. This is
why the model proposes that behavioural change occurs through six stages of varying duration for
each person and that these stages may not be linear. Therefore, the six-stage model provides a
realistic view of the complex and active nature of addiction and recovery.
4. Prochaska and DiClemente (1983) suggest a six-stage model in which overcoming addiction is a
cyclical process.
In Stage 1 Precontemplation, the person is not thinking about changing their addiction-related
behaviour within the next six months, because of either denial or demotivation. Intervention should
focus on helping the person consider the need for change.
In Stage 2 Contemplation, the person is now thinking about making a change in the next six months.
Intervention should focus on helping them see that the pros outweigh the cons and help them reach
a decision to change.
In Stage 3 Preparation, the individual believes that the benefits are greater than the costs and has
decided to make a change within the next month. But because they have not decided how to make
the change, intervention should give individuals support in constructing a plan (e.g. to ring a
helpline).
In Stage 4 Action, the person has done something to change their addictive behaviour in the last six
months (e.g. they have removed alcohol from the house). Intervention should focus on coping skills
needed to quit.
In Stage 5 Maintenance, the person has maintained some behavioural change (e.g. stopped
gambling) for more than six months. Intervention should focus on relapse prevention.
In Stage 6 Termination, abstinence becomes automatic and the person no longer returns to
addictive behaviours to cope with anxiety, stress, loneliness, etc. Intervention is not required.
A strength is that the model recognises the true dynamic nature of addictive behaviour. Traditional
theories have considered recovery from addiction as an ‘all-or-nothing’ event. However, the six-
stage model stresses a dynamic and continuing process and the importance of time. This is why the
model proposes that behavioural change occurs through six stages of varying duration for each
person and that these stages may not be linear. Therefore, the six-stage model provides a realistic
view of the complex and active nature of addiction and recovery.
Another strength of the model is the positive attitude to relapse. DiClemente et al. (2004) suggest
that ‘relapse is the rule rather than the exception’. The model does not view relapse as a failure, but
as an inevitable part of the dynamic process of behaviour change. The model takes relapse seriously
and does not underestimate its potential to blow change off course. Changing behaviour may require
several attempts to reach the maintenance or termination stages. This means the model has face
validity with clients and is more acceptable because they can see it is realistic about relapse.
A limitation is contradictory research that challenges the model. Taylor et al. (2006) carried out a
major review of available evidence for NICE, which included several meta-analyses. They concluded
that the model is no more effective than any other stage model in changing nicotine addiction. They
also stated that there is no valid evidence for the existence of such clearly-defined stages as those in
the model. This suggests that despite optimistic claims made for the model by some, the overall
research picture is negative.
Considering this further, a related limitation is the arbitrary nature of the stages. It is impossible in
real addictions to distinguish one stage from another. Kraft et al. (1999) claim that the six stages of
the model can be reduced to just two useful ones – precontemplation, plus all the others grouped
together. This is a real problem because each stage is supposed to be linked to an intervention, but
this lack of validity suggests the stage divisions are not a useful approach. Therefore the stage model has little
usefulness either for understanding change in addictive behaviour over time or for recommending
treatments.