
Lecture Week 3

Quality of Measurement Instruments; Introduction to SPSS
Introduction to Research Methods & Statistics
2013 – 2014

Hemmo Smit
Overview

- Quality of Measurement Instruments
- Introduction to SPSS

Read:
- Leary: Chapter 3 (pp. 53-70)
Two aspects of the quality of a measure

Aim of research: explaining variability (week 1)

Ideal: variability of the measurement = variability of the characteristic

1. Reliability = the extent to which we measure correctly, i.e. without measurement error (random error).

2. Validity = the extent to which we measure what we intended to measure, i.e. without bias (systematic error).
Reliability

= the extent to which we measure without measurement error

Observed score (the measurement) = Systematic score (true score) + Error (measurement error)

Total variance = Systematic variance + Error variance

Reliability = proportion of variance accounted for:

    Reliability = Systematic (true score) variance / Total (observed) variance
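To make the decomposition concrete, here is a minimal simulated sketch in Python (an illustration added to these notes, not part of the lecture; in real data the true scores are unobservable, so the ratio can only be estimated):

import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(loc=50, scale=10, size=1000)  # systematic (true) part
errors = rng.normal(loc=0, scale=5, size=1000)         # random measurement error
observed = true_scores + errors                        # observed = true + error

# Reliability = true-score variance / total (observed) variance
reliability = true_scores.var() / observed.var()
print(f"Reliability ~ {reliability:.2f}")  # about 10**2 / (10**2 + 5**2) = 0.80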

Reliability coefficient
- Lies between 0 and 1
- Rule of thumb: .70 or higher is sufficient
- NOTE! For diagnostic purposes higher reliability is required!
Determine with repeated measurement

1) Test-retest reliability

2) Parallel form reliability

3) Interitem reliability (also: Internal Consistency)

4) Replication (whole study)



Reliability coefficients (1)

1) Test-Retest Reliability
- One measurement or whole instrument
- Measure twice and compare outcomes
- Consistency of a measurement over time

2) Parallel Form Reliability


- One measurement or whole instrument
- Same as test-retest, but with two parallel instruments (see the correlation sketch below)
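Both coefficients come down to correlating two sets of scores. A minimal Python sketch with made-up scores (the variable names are illustrative); in SPSS the same correlation is available via Analyze > Correlate > Bivariate:

import numpy as np

score_t1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # first administration
score_t2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # retest (or parallel form)

r = np.corrcoef(score_t1, score_t2)[0, 1]  # Pearson r as reliability estimate
print(f"Test-retest reliability: r = {r:.2f}")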

Reliability coefficients (2)

3) Interitem Reliability (also: Internal consistency)


- For whole instrument
- Coherence of the items in the instrument (scale)

4) Replication
- For whole study
- Repeat the whole study and compare the outcomes
Internal Consistency

An instrument consists of items that all (aim to) measure the same underlying construct / concept.

Repeated measurement:
- Each item is a small measurement instrument
- All items are parallel test forms of each other

Respondents' scores on the items are consistent: high-high and low-low (beware of reverse scoring; see the sketch below).

Beware! Internal consistency will always be high if you ask almost the same question 10 times → remember content validity!
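On reverse scoring, a minimal sketch assuming a 1-5 Likert scale (the item values are made up): a negatively worded item must be flipped with (max + min) - score before scores are summed or internal consistency is computed.

import numpy as np

item_negative = np.array([1, 2, 1, 5, 4, 2])  # negatively worded item, 1-5 scale
item_reversed = (5 + 1) - item_negative       # flips 1<->5, 2<->4; 3 stays 3
print(item_reversed)                          # [5 4 5 1 2 4]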

Measures of Internal consistency

1) Item-total correlation

2) Split-half reliability

3) Cronbach's Alpha
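All three measures can be computed from an item matrix (rows = respondents, columns = items). Below is a minimal Python sketch on simulated data, added for illustration; in SPSS, α and the corrected item-total correlations are produced by Analyze > Scale > Reliability Analysis.

import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                     # common construct
items = latent + rng.normal(scale=0.8, size=(200, 6))  # 6 items measuring it
k = items.shape[1]
total = items.sum(axis=1)

# 1) Corrected item-total correlation: item vs. sum of the remaining items
for i in range(k):
    r_it = np.corrcoef(items[:, i], total - items[:, i])[0, 1]
    print(f"item {i + 1}: corrected item-total r = {r_it:.2f}")

# 2) Split-half reliability (odd vs. even items), Spearman-Brown corrected
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
r_hh = np.corrcoef(half1, half2)[0, 1]
print(f"split-half (Spearman-Brown): {2 * r_hh / (1 + r_hh):.2f}")

# 3) Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance)
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")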

Assessing Cronbach’s α

The closer to 1, the higher the internal consistency.

α        Assessment
< .60    Insufficient
.60-.80  Reasonable
> .80    Good

Note. For diagnostic purposes a higher α is required!


Reliability: Categories of error variance

Observed score (the measurement) = Systematic score (true score) + Error (measurement error)

Sources of error variance:
1) Transient states
2) Stable attributes
3) Situational factors
4) Characteristics of the measure
5) Mistakes

Increasing the reliability of a measure

→ Eliminating measurement error

1) Standardize administration of the measure


2) Clarify instructions and questions
3) Train observers
4) Minimize errors in coding data

Validity

= the extent to which we measure what we intended to measure

Observed score (the measurement) = Systematic score (true score) + Error (measurement error)

Systematic score = Construct score (X) + Bias (systematic error)

Note: Validity requires reliability, but not the other way around.
Validity Measurement Instruments (1)

1) Face validity
Does it appear to measure what it’s supposed to measure?

2) Content validity
- Does the measure cover all aspects of the construct?
- Requires independent observers
Note: not in Leary
Validity Measurement Instruments (2)

3) Construct Validity
Does a measure relate to other measures as it should? (See the sketch after this list.)
a) Convergent validity: strong correlations with instruments that measure comparable constructs (or strong negative correlations with opposing ones)
b) Discriminant validity: weak or no correlation with instruments that measure different, unrelated constructs

4) Criterion-Related Validity
Does a measure relate to a particular behavioral criterion?
a) Concurrent Validity: present behavior
b) Predictive Validity: future behavior
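A minimal sketch of such a construct-validity check on simulated data (the scale names are hypothetical): a new anxiety scale should correlate strongly with an established anxiety scale (convergent) and hardly at all with an unrelated measure such as a vocabulary test (discriminant).

import numpy as np

rng = np.random.default_rng(2)
anxiety_new = rng.normal(size=300)                                # new scale
anxiety_established = anxiety_new + rng.normal(scale=0.5, size=300)
vocabulary = rng.normal(size=300)                                 # unrelated construct

r_conv = np.corrcoef(anxiety_new, anxiety_established)[0, 1]
r_disc = np.corrcoef(anxiety_new, vocabulary)[0, 1]
print(f"convergent r = {r_conv:.2f} (expected: strong)")
print(f"discriminant r = {r_disc:.2f} (expected: near zero)")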
Validity of a Study

Statistical validity: Was the data analysis done correctly?

Internal validity: Have alternative explanations been ruled out?

External validity: Is the result generalizable?

Construct validity: Are all measurement instruments valid?


SPSS – Variable view (1)
SPSS – Variable view (2)
SPSS – Variable view: Type
SPSS – Variable view: Values (1)
SPSS – Variable view: Values (2)
SPSS – Variable view: Missing
SPSS – Variable view: Measure
SPSS – Menu: Data
SPSS – Menu: Transform
SPSS – Menu: Analyze
SPSS – Menu: Graphs
SPSS – Menu: Help
What have you learned today?

- What are reliability and validity?
- What are the different types of reliability?
- What are the different sources of random error?
- What are the different types of validity?
- How to interpret a reliability coefficient.


Next week

Inspecting data: Distributions

Read:
Leary: Chapter 6
Howell: Chapter 2
