Research Design
In both of these examples, we have two categories within each treatment variable.
In experiments, treatment variables need to have two or more categories, or levels;
in experimental terms, levels are the categories of a treatment variable.
Intervening in the Treatment Conditions
The experimental researcher manipulates one or more conditions of the treatment
variable. In other words, in an experiment, the researcher physically intervenes in
one or more conditions so that individuals in the experimental conditions experience
something different from individuals in the control conditions.
This means that to conduct an experiment, you need to be able to manipulate at least
one condition of an independent variable. It is easy to identify situations in which
you might measure an independent variable and obtain categorical data but not be able
to manipulate one of the conditions (e.g., you can measure whether participants
smoke, but you cannot ethically assign individuals to a smoking condition).
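As a hypothetical illustration of creating manipulated conditions, the short Python sketch below randomly assigns made-up participant IDs to a treatment condition and a control condition (random assignment is one common way to set up such conditions; the IDs and group sizes are assumptions, not from any real study):

```python
import random

# Hypothetical participant IDs (illustrative only)
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(1)                # make the example reproducible
random.shuffle(participants)  # randomize the order of participants
half = len(participants) // 2

# Each individual experiences only one condition of the treatment variable
treatment_group = participants[:half]  # receives the intervention
control_group = participants[half:]    # does not receive the intervention

print("Treatment condition:", treatment_group)
print("Control condition:  ", control_group)
```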
Outcome Measures
In experiments, the outcome (or response, criterion, or posttest) is the dependent
variable that is the presumed effect of the treatment variable. It is also the effect
predicted in a hypothesis in the cause-and-effect equation.
Examples of dependent variables in experiments might be:
◆ Achievement scores on a criterion-referenced test
◆ Test scores on an aptitude test
Good outcome measures are sensitive to treatments in that they respond to the
smallest amount of intervention.
Group Comparisons
In an experiment, you also compare scores for different treatments on an outcome. A
group comparison is the process of a researcher obtaining scores for individuals or
groups on the dependent variable and comparing the means and variance both within
groups and between groups.
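A minimal sketch of such a comparison, assuming two small hypothetical groups of posttest scores (the numbers are illustrative, not real data), might report each group's mean and within-group variance and then test the between-group difference with an independent-samples t test:

```python
from statistics import mean, variance
from scipy.stats import ttest_ind

# Hypothetical posttest scores for the two conditions
treatment_scores = [85, 88, 90, 79, 92, 86]
control_scores = [78, 82, 75, 80, 77, 81]

# Within-group summaries: means and sample variances
print("Treatment: mean =", round(mean(treatment_scores), 1),
      "variance =", round(variance(treatment_scores), 1))
print("Control:   mean =", round(mean(control_scores), 1),
      "variance =", round(variance(control_scores), 1))

# Between-group comparison: independent-samples t test
t_stat, p_value = ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```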
Threats to Validity
A final idea in experiments is to design them so that the inferences you draw are true or
correct. Threats to drawing these correct inferences need to be addressed in experimental
research.
Threats to validity refer to specific reasons why we can be wrong when we make an
inference in an experiment about covariation, causation, constructs, or whether the
causal relationship holds over variations in persons, settings, treatments, and
outcomes (Shadish, Cook, & Campbell, 2002).
Types of Validity
◆ Statistical conclusion validity, which refers to the appropriate use of statistics
to infer whether the presumed independent and dependent variables covary; threats
include violating statistical assumptions, restricted range on a variable, and low
power (see the power-analysis sketch after this list).
◆ Construct validity, which means the validity of inferences about the constructs (or
variables) in the study.
◆ Internal validity, which relates to the validity of inferences drawn about the cause
and effect relationship between the independent and dependent variables.
◆ External validity, which refers to the validity of the cause-and-effect relationship
being generalizable to other persons, settings, treatment variables, and measures.
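To make the low-power threat concrete, a power analysis can estimate how many participants a design needs before a real treatment effect is likely to be detected. The sketch below assumes the statsmodels library and an illustrative medium effect size; the specific numbers are assumptions, not recommendations:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed design parameters (illustrative only)
effect_size = 0.5  # medium standardized mean difference (Cohen's d)
alpha = 0.05       # Type I error rate
power = 0.80       # desired probability of detecting a true effect

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```

Under these assumptions, roughly 64 participants per group are needed; running the experiment with far fewer risks a low-power threat to statistical conclusion validity.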
Threats to internal validity
Threats to internal validity are problems in drawing correct inferences about whether
the covariation (i.e., the variation in one variable contributes to the variation in
the other variable) between the presumed treatment variables and the outcome reflects
a causal relationship (Shadish, Cook, & Campbell, 2002).
The following category addresses threats that typically occur during an experiment
and relate to the procedures of the study:
1. Testing
2. Instrumentation
Threats to external validity
Threats to external validity are problems that threaten our ability to draw correct
inferences from the sample data to other persons, settings, treatment variables, and
measures. According to Cook and Campbell (1979), three threats may affect this
generalizability:
1. Interaction of selection and treatment
2. Interaction of setting and treatment
3. Interaction of history and treatment
Types of Experimental Designs