NOTE 5 - Validity and Data Gathering Technique


VALIDITY

Validity refers to whether a measure actually measures what it is supposed to measure; it is the degree to which a research study measures what it intends to measure. If a measure is unreliable, it is also invalid.

That is, if you do not know what a measure is measuring, it certainly cannot be said to be measuring what it is supposed to measure. More specifically, validity applies to both the design and the methods of your research. Validity in data collection means that your findings truly represent the phenomenon you claim to measure. Valid claims are solid claims.

There are two main types of validity, internal and external.


o Internal validity refers to the validity of the measurement and test itself.
o External validity refers to the ability to generalize the findings to the target population.

Both are very important in analyzing the appropriateness, meaningfulness, and usefulness of a research study.

TYPES OF VALIDITY
Following are the validity types that are typically mentioned in texts and research papers when
talking about the quality of measurement. Each type views validity from a different perspective
and evaluates different relationships between measurements.

A. Face Validity.
Face validity refers to the degree to which a test appears to measure what it purports to measure.
The stakeholders can easily assess face validity. Although this is not a very ‘scientific’ type of validity, it may be an essential component in enlisting the motivation of stakeholders. If the stakeholders do not believe the measure is an accurate assessment of the ability, they may become disengaged from the task.

B. Predictive Validity.
Predictive validity refers to whether a new measure of something has the same predictive
relationship with something else that the old measure had. In predictive validity, we assess the
operationalization’s ability to predict something it should theoretically be able to predict. For
instance, we might theorize that a measure of math ability should be able to predict how well
a person will do in an engineering-based profession.

We could give our measure to experienced engineers and see if there is a high correlation
between scores on the measure and their salaries as engineers. A high correlation would provide
evidence for predictive validity; it would show that our measure can correctly predict
something that we theoretically think it should be able to predict.
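
To make this concrete, the check above reduces to a single correlation. Below is a minimal Python sketch with invented data; the variable names and scores (math_score, job_rating) are hypothetical, not drawn from any real study.

```python
# Predictive-validity sketch: correlate current test scores with a
# later performance criterion. All data are invented for illustration.
from scipy.stats import pearsonr

# Hypothetical math-ability scores collected now...
math_score = [62, 75, 58, 90, 70, 84, 66, 79]
# ...and a later criterion for the same eight people, e.g. supervisor
# ratings of engineering performance (also hypothetical).
job_rating = [3.1, 4.0, 2.8, 4.7, 3.5, 4.4, 3.0, 4.1]

r, p = pearsonr(math_score, job_rating)
print(f"r = {r:.2f}, p = {p:.3f}")
# A high positive r would count as evidence of predictive validity;
# a near-zero r would count against it.
```

The same computation applies to the criterion-related example in the next section, with an established measure such as the GRE subject test standing in as the criterion.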

C. Criterion-Related Validity.
Criterion-related validity tests a measure against some external standard, or criterion. When the measure is a compound measure with several different parts or indicators, each part should be related to the criterion variable to which the measure is linked in a hypothesis. When you expect future performance based on the scores a measure obtains now, correlate the obtained scores with that performance.

The later performance is called the criterion, and the current score is the predictor. Criterion-related validity is used to predict future or current performance; it correlates test results with another criterion of interest. For example, suppose a physics program designs a measure to assess cumulative student learning throughout the major.

The new measure could be correlated with a standardized measure of ability in the discipline, such as the GRE subject test. The higher the correlation between the established measure and the new measure, the more faith stakeholders can have in the new assessment tool.

D. Content Validity.
In content validity, you essentially check the operationalization against the relevant content
domain for the construct. This approach assumes that you have a good detailed description of
the content domain, something that’s not always true.

In content validity, the criterion is the construct definition itself: it is a direct comparison. In criterion-related validity, by contrast, we usually make a prediction about how the operationalization will perform based on our theory of the construct.

When we want to find out whether the entire content of the behavior/construct/area is represented in the test, we compare the test task with the content of the behavior. This is a logical method, not an empirical one. For example, if we want to test knowledge of Bangladesh's geography, it is not fair to have most questions limited to the geography of Australia.

E. Convergent Validity.
Convergent validity refers to whether two different measures of presumably the same thing are consistent with each other, that is, whether they converge on the same measurement. In convergent validity, we examine the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to.

For example, to show the convergent validity of a test of arithmetic skills, we might correlate the scores on the test with scores on other tests that purport to measure basic math ability, where high correlations would be evidence of convergent validity.

Or, if SAT scores and GRE scores are convergent, then someone who scores high on one test
should also score high on the other. Different measures of ideology should classify the same
people the same way. If they do not, then they lack convergent validity.

F. Concurrent Validity.
Concurrent validity is the degree to which the scores on a test are related to the scores on another, already established test administered at the same time, or to some other valid criterion available at the same time.

This compares the results from a new measurement technique to those of a more established
technique that claims to measure the same variable to see if they are related. Often two
measurements will behave in the same way, but are not necessarily measuring the same
variable; therefore, this kind of validity must be examined thoroughly.

For example, if we come up with a way of assessing manic depression, our measure should be able to distinguish between people who are diagnosed with manic depression and those diagnosed with paranoid schizophrenia.

If we want to assess the concurrent validity of a new measure of empowerment, we might give the measure to both migrant farm workers and farm owners, theorizing that our measure should show that the farm owners are higher in empowerment. As in any discriminating test, the results are more powerful if you are able to show that you can discriminate between two groups that are very similar.
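
The farm-worker example is a known-groups comparison, which can be sketched as a simple two-sample test. The group labels and empowerment scores below are hypothetical placeholders, assuming higher scores mean greater empowerment.

```python
# Known-groups sketch of concurrent validity: the new empowerment
# measure should separate two groups that theory says it should.
# All scores are invented for illustration.
from scipy.stats import ttest_ind

owner_scores  = [78, 82, 75, 88, 80, 84]  # hypothetical farm owners
worker_scores = [55, 61, 49, 66, 58, 52]  # hypothetical migrant workers

t, p = ttest_ind(owner_scores, worker_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
# If owners score significantly higher, the measure behaves as theory
# predicts, supporting its concurrent validity.
```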

G. Construct Validity.
Construct validity is used to ensure that the measure actually measures what it is intended to measure (i.e., the construct), and not other variables. Using a panel of ‘experts’ familiar with the construct is one way in which this type of validity can be assessed.

The experts can examine the items and decide what each specific item is intended to measure. Construct validity also concerns whether the measurements of a variable in a study behave in exactly the same way as the variable itself.

This involves examining past research regarding different aspects of the same variable. It is also the degree to which a test measures an intended hypothetical construct. For example, suppose we want to validate a measure of anxiety.

If we hypothesize that anxiety increases when subjects are under the threat of an electric shock, then the threat of an electric shock should increase anxiety scores.
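
The shock-threat hypothesis translates into a simple within-subjects comparison. The sketch below runs a paired t-test on invented anxiety scores; both samples are hypothetical and assume the same subjects are measured twice.

```python
# Construct-validity sketch: anxiety scores should rise under threat
# of electric shock if the measure really taps anxiety. Data invented.
from scipy.stats import ttest_rel

baseline = [12, 15, 10, 18, 14, 11, 16, 13]  # same subjects, no threat
threat   = [19, 22, 15, 25, 20, 17, 24, 18]  # same subjects, under threat

t, p = ttest_rel(threat, baseline)
print(f"t = {t:.2f}, p = {p:.3f}")
# A significant rise under threat is consistent with the measure
# capturing the anxiety construct; no change would undermine it.
```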

H. Discriminant Validity.
In discriminant validity, we examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to.

For instance, to show the discriminant validity of a Head Start program, we might gather
evidence that shows that the program is not similar to other early childhood programs that don’t
label themselves as Head Start programs.

Or, to show the discriminant validity of a test of arithmetic skills, we might correlate the scores on the test with scores on tests of verbal ability, where low correlations would be evidence of discriminant validity.
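
Convergent and discriminant validity are often inspected together in one correlation table: the arithmetic test should correlate highly with other math measures and weakly with verbal measures. The sketch below builds such a table from simulated scores; the data and variable names are assumptions for illustration only.

```python
# One correlation matrix can show convergent validity (high r between
# two math measures) and discriminant validity (low r between math and
# verbal measures) at the same time. All scores are simulated.
import numpy as np

rng = np.random.default_rng(0)
math_ability = rng.normal(50, 10, 100)              # hypothetical math test 1
math_test_2 = math_ability + rng.normal(0, 4, 100)  # tracks test 1 (convergent)
verbal_test = rng.normal(50, 10, 100)               # unrelated (discriminant)

r_matrix = np.corrcoef([math_ability, math_test_2, verbal_test])
print(np.round(r_matrix, 2))
# Expect a high off-diagonal r for the two math tests and a near-zero
# r between either math test and the verbal test.
```
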
DATA GATHERING TECHNIQUES
Data gathering techniques in psychology refer to the methods used by researchers to collect
information and data for their studies. These techniques are crucial in helping researchers
understand human behavior, emotions, thoughts, and other psychological phenomena. There
are various types of data gathering techniques used in psychology, each with its own strengths
and limitations.

1. Observational Techniques.
Observational techniques involve observing and recording behavior in a natural setting without
interfering with the subjects. This method allows researchers to study behavior as it occurs
naturally, providing valuable insights into human behavior. Observational techniques can be
used in both quantitative and qualitative research.

2. Surveys and Questionnaires.
Surveys and questionnaires are commonly used in psychology to gather data from a large
number of participants. These tools typically consist of a series of questions that participants
answer, providing researchers with quantitative data that can be analyzed statistically. Surveys
and questionnaires are useful for studying attitudes, beliefs, and behaviors.

3. Interviews.
Interviews involve direct interaction between the researcher and the participant, allowing for
in-depth exploration of thoughts, feelings, and experiences. Interviews can be structured, semi-
structured, or unstructured, depending on the research goals. Interviews are particularly useful
for gathering qualitative data and gaining a deeper understanding of complex psychological
phenomena.

4. Psychological Tests.
Psychological tests are standardized measures used to assess various aspects of an individual’s
psychological functioning, such as intelligence, personality, and mental health. These tests can
provide researchers with objective and reliable data that can be used to make inferences about
an individual’s psychological characteristics.

5. Case Studies.
Case studies involve an in-depth examination of a single individual or a small group of
individuals. Researchers collect detailed information about the participants’ background,
experiences, and behaviors, often using a variety of data gathering techniques. Case studies
are useful for studying rare or unique phenomena and providing rich, detailed descriptions of
psychological processes.

6. Experimental Methods.
Experimental methods involve manipulating variables and observing the effects on behavior.
Researchers use experimental designs to establish cause-and-effect relationships between
variables, allowing them to test hypotheses and draw conclusions about psychological
phenomena. Experimental methods are commonly used in controlled laboratory settings but
can also be adapted for field research.

7. Physiological Measures.
Physiological measures involve recording biological responses, such as heart rate, brain
activity, and hormonal levels, to study psychological processes. These measures provide
objective data that can complement self-report measures and behavioral observations.
Physiological measures are particularly useful for studying the physiological correlates of
psychological phenomena.
