Research Design



Natalia Anggrarini, M.Pd.


Wiralodra University
Research in Education
Research Design
Research design refers to the overall strategy used to carry out research: a succinct, logical plan for tackling an established research question through the collection, analysis, interpretation, and discussion of data.
1. Experimental Designs - Outline
1. Define experimental research, describe when to use it, and how to develop it
2. Identify the key characteristics of experimental designs
3. State the types of experimental designs
4. Recognize potential ethical issues in experimental research
5. Describe the steps in conducting an experiment
6. Evaluate the quality of an experimental study
What is an experiment?
In an experiment, you test an idea (or practice or procedure) to determine whether it
influences an outcome or dependent variable.
Steps:
1. Decide on the idea (practice or procedure) to test
2. Assign individuals to experience it
3. Determine whether those who experienced the idea performed better on some outcome than those who did not
When do we use an experiment?
- When we want to establish cause and effect between independent and dependent variables
- When we want to test whether the independent variable influences the dependent variable
- When we have two or more groups to study
What are the key characteristics of experiments?
◆ Random assignment
◆ Control over extraneous variables
◆ Manipulation of the treatment conditions
◆ Outcome measures
◆ Group comparisons
◆ Threats to validity
Random Assignment
Random assignment is the process of assigning individuals at random to groups (or to different groups) in an experiment, to avoid bias and to control extraneous characteristics that might influence the outcome.
Random assignment distinguishes a rigorous TRUE EXPERIMENT from an adequate, but less rigorous, QUASI-EXPERIMENT.
Equating the groups means that the researcher randomly assigns individuals to groups and equally distributes any variability of individuals between or among the groups or conditions in the experiment.
With random selection, by contrast, we select a sample that is representative of the population so that the results obtained during the study generalize to that population.
Random assignment and random selection are therefore distinguished by their different purposes.
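A minimal sketch of random assignment in Python (the participant labels, group count, and seed are all illustrative assumptions):

```python
import random

def randomly_assign(participants, n_groups=2, seed=42):
    """Shuffle participants, then deal them round-robin into groups,
    so every individual has an equal chance of landing in any group.
    (Function name, group count, and seed are illustrative.)"""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    groups = [[] for _ in range(n_groups)]
    for i, person in enumerate(pool):
        groups[i % n_groups].append(person)
    return groups

# 20 hypothetical participants split into two equal groups.
treatment, control = randomly_assign([f"S{i}" for i in range(1, 21)])
print(len(treatment), len(control))  # 10 10
```

Because assignment depends only on the shuffle, any pre-existing differences among participants are spread across the groups by chance rather than by choice.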
Control over extraneous variables
Random assignment controls the extraneous variables that might influence the practice and the outcome. It is decided by the investigator before the experiment begins.
Extraneous factors are any influences in the selection of participants, the procedures, the statistics, or the design that are likely to affect the outcome and provide an alternative explanation for the results.
Pretest – Posttest
A pretest provides a measure on some attribute or characteristic that you assess for participants
in an experiment before they receive a treatment.
Weaknesses of a pretest:
1. It takes time and effort to administer
2. It can raise participants' expectations about the outcome
3. It can influence the experimental treatment
4. Pretest scores might affect posttest scores (participants anticipate the answers from the pretest)
A posttest is a measure on some attribute or characteristic that is assessed for participants in an
experiment after a treatment.
Covariates
Because pretests may affect aspects of the experiment, they are often statistically controlled for by using the procedure of covariance rather than by simply comparing them with posttest scores.
Covariates are variables that the researcher controls for using statistics and that relate
to the dependent variable but that do not relate to the independent variable.
Matching of participants
Matching is the process of identifying one or more personal characteristics that
influence the outcome and assigning individuals with that characteristic equally to
the experimental and control groups.
Typically, experimental researchers match on one or two of the following
characteristics: gender, pretest scores, or individual abilities.
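Pair matching on a single characteristic can be sketched as: rank participants on the matching variable, then split each adjacent pair at random between the two groups. The participant names and pretest scores here are hypothetical:

```python
import random

def matched_assignment(scores, seed=7):
    """Sort participants by a matching characteristic (here, a
    hypothetical pretest score), then split each adjacent pair at
    random between the experimental and control groups."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)        # chance decides which member goes where
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

scores = {"A": 51, "B": 89, "C": 60, "D": 85, "E": 58, "F": 92}
exp, ctl = matched_assignment(scores)
print(exp, ctl)
```

Each group thus receives one member of every matched pair, so the groups start out comparable on the matched characteristic.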
Homogenous sample
Another approach used to make the groups comparable is to choose homogeneous
samples by selecting people who vary little in their personal characteristics.
When the experimenter assigns students to the two classes, the more similar they are
in personal characteristics or attributes, the more these characteristics or attributes are
controlled in the experiment.
Blocking variables
A blocking variable is a variable the researcher controls before the experiment starts
by dividing (or “blocking”) the participants into subgroups (or categories) and
analyzing the impact of each subgroup on the outcome.
The variable (e.g., gender) can be blocked into males and females; similarly, high
school grade level can be blocked into four categories: freshmen, sophomores,
juniors, and seniors.
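Blocking can be sketched as stratified random assignment: group participants by the blocking variable first, then assign at random within each block so every group receives a similar mix. The participants and grade levels below are illustrative:

```python
import random

def block_randomize(participants, block_of, n_groups=2, seed=3):
    """Group participants by a blocking variable (e.g., grade level),
    then randomly assign within each block so each experimental group
    gets a similar mix of blocks."""
    rng = random.Random(seed)
    blocks = {}
    for p in participants:
        blocks.setdefault(block_of[p], []).append(p)
    groups = [[] for _ in range(n_groups)]
    for members in blocks.values():
        rng.shuffle(members)              # randomize within the block
        for i, p in enumerate(members):
            groups[i % n_groups].append(p)
    return groups

grade = {"P1": "freshman", "P2": "freshman", "P3": "senior",
         "P4": "senior", "P5": "junior", "P6": "junior"}
g1, g2 = block_randomize(list(grade), grade)
print(g1, g2)
```

This guarantees the blocking variable is balanced across groups, which plain random assignment only achieves on average.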
Manipulating Treatment Conditions
In experimental treatment, the researcher physically intervenes to alter the conditions
experienced by the experimental unit.
Procedure:
1. Identify a treatment variable
2. Identify the condition (the level) of variables
3. Manipulate the treatment conditions
Treatment variables
In Quantitative research, independent variables influence or affect the dependent
variables.
The two major types of independent variables are treatment and measured variables.
In experiments, treatment variables are independent variables that the researcher
manipulates to determine their effect on the outcome, or dependent variable.
Treatment variables are categorical variables measured using categorical scales.

Condition
In experiments, treatment variables need to have two or more categories, or levels. Levels are the categories of a treatment variable. For example, a "type of instruction" variable might have two levels: lecture and discussion.
Intervening in the Treatment Conditions
The experimental researcher manipulates one or more of the treatment variable
conditions.
In other words, in an experiment, the researcher physically intervenes (manipulates with interventions) in one or more conditions so that individuals experience something different in the experimental conditions than in the control conditions.
This means that to conduct an experiment, you need to be able to manipulate at least
one condition of an independent variable. It is easy to identify some situations in
which you might measure an independent variable and obtain categorical data but not
be able to manipulate one of the conditions.
Outcome Measures
In experiments, the outcome (or response, criterion, or posttest) is the dependent
variable that is the presumed effect of the treatment variable. It is also the effect
predicted in a hypothesis in the cause-and-effect equation.
Examples of dependent variables in experiments might be:
◆ Achievement scores on a criterion-referenced test
◆ Test scores on an aptitude test
Good outcome measures are sensitive to treatments in that they respond to the
smallest amount of intervention.
Group Comparisons
In an experiment, you also compare scores for different treatments on an outcome. A
group comparison is the process of a researcher obtaining scores for individuals or groups
on the dependent variable and comparing the means and variance both within the group
and between the groups.
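The comparison described above can be sketched with standard-library tools: compute each group's mean and within-group variance, then a Welch-style t statistic for the between-group difference. The scores are hypothetical, and this omits degrees of freedom and p-values:

```python
from statistics import mean, variance
from math import sqrt

def compare_groups(a, b):
    """Compare two groups on an outcome: each group's mean and
    within-group (sample) variance, plus a Welch-style t statistic
    for the between-group difference. A sketch, not a full test."""
    m_a, m_b = mean(a), mean(b)
    v_a, v_b = variance(a), variance(b)
    t = (m_a - m_b) / sqrt(v_a / len(a) + v_b / len(b))
    return m_a, m_b, v_a, v_b, t

treatment = [78, 85, 88, 80, 84]
control = [70, 72, 75, 68, 71]
m_a, m_b, v_a, v_b, t = compare_groups(treatment, control)
print(m_a, m_b, v_a, v_b, t)
```

A large t relative to its reference distribution would suggest the between-group difference exceeds what within-group variability alone would produce.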

Threats to Validity
A final idea in experiments is to design them so that the inferences you draw are true or
correct. Threats to drawing these correct inferences need to be addressed in experimental
research.
Threats to validity refer to specific reasons why we can be wrong when we make an inference in an experiment because of covariance, causation constructs, or whether the causal relationship holds over variations in persons, settings, treatments, and outcomes (Shadish, Cook, & Campbell, 2002).
Types of Validity
◆ Statistical conclusion validity, which refers to the appropriate use of statistics (e.g.,
violating statistical assumptions, restricted range on a variable, low power) to infer
whether the presumed independent and dependent variables covary in the experiment.
◆ Construct validity, which means the validity of inferences about the constructs (or
variables) in the study.
◆ Internal validity, which relates to the validity of inferences drawn about the cause
and effect relationship between the independent and dependent variables.
◆ External validity, which refers to the validity of the cause-and-effect relationship
being generalizable to other persons, settings, treatment variables, and measures.
Threats to internal validity
Threats to internal validity are problems in drawing correct inferences about whether the covariation (i.e., the variation in one variable contributes to the variation in the other variable) between the presumed treatment variables and the outcome reflects a causal relationship (Shadish, Cook, & Campbell, 2002).
The following categories address threats that typically occur during an experiment and relate to the procedures of the study:
1. Testing
2. Instrumentation
Threats to external validity
Threats to external validity are problems that threaten our ability to draw correct
inferences from the sample data to other persons, settings, treatment variables, and
measures. According to Cook and Campbell (1979) , three threats may affect this
generalizability:
1. Interaction of selection and treatment
2. Interaction of history and treatment
Types of Experimental Designs
