Methods of Research Week 14 Assessment
WEEK 14 ASSESSMENT
2. How adequately do the questions in the instrument represent that which is being
measured?
Content validity: determines whether a test is representative of all features of the concept.
3. Do the items that the instrument contains logically reflect that which is being
measured?
Face validity: determines whether, on the surface, the items appear to reflect what is being measured.
4. Are there a variety of different types of evidence (test scores, teacher ratings,
correlations, etc.) that all measure this variable?
Criterion validity: the results correspond to those of a different test of the same construct.
5. How well do the scores obtained using this instrument predict future performance?
Content validity: to obtain valid findings, the content of a test, survey, or measurement procedure must cover all important aspects of the subject it aims to assess.
1. Construct validity refers to whether the test measures the construct it is meant to measure.
3. Face validity: Does the test's content appear to be appropriate for its purposes?
4. Criterion validity: Do the findings correlate with a separate test of the same construct?
5. It should be noted that these are types of test validity, which establish the accuracy of the measure itself. When doing experimental research, you must also examine internal and external validity, which deal with experimental design and the generalizability of outcomes. Construct validity assesses whether a measurement instrument genuinely reflects the construct we are interested in measuring; it is critical for determining a method's overall validity.
A construct is a concept or trait that cannot be directly observed but can be measured by examining other indicators associated with it. Constructs can be individual characteristics, such as intelligence, obesity, job satisfaction, or depression, or broader concepts applied to organizations or social groups, such as gender equality, corporate social responsibility, or freedom of speech.
Face validity considers how suitable the content of a test seems on the surface. It is similar to content validity, but it is a more informal and subjective assessment. Criterion validity evaluates how well your test results correspond to the results of another test. The criterion is a different measure of the same thing, generally a well-established or widely used test that is already considered valid. To assess criterion validity, compute the correlation between your measurement results and the criterion measurement results. A strong correlation indicates that your test is measuring what it is supposed to measure.
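To make the procedure concrete, here is a minimal Python sketch of this correlation check. The score lists are hypothetical, invented purely for illustration, and the 0.7 cut-off mentioned in the comment is only a common rule of thumb, not a fixed standard.

    # Sketch: checking criterion validity by correlating a new measure
    # against an established criterion measure (hypothetical scores).
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    new_test_scores = [12, 15, 11, 18, 14, 20, 9, 16]   # scores on the new instrument
    criterion_scores = [14, 16, 10, 19, 15, 21, 8, 17]  # scores on the established test

    r = correlation(new_test_scores, criterion_scores)
    print(f"Criterion validity coefficient: r = {r:.2f}")
    # A strong positive correlation (often taken as r > 0.7, though the
    # threshold varies by field) suggests the new test measures the same
    # thing as the criterion.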
1. Consistency throughout time—would the findings have been the same if the test or
evaluation had been performed on a different day or at a different time?
2. Consistency across tasks—would the outcome have been the same if different tasks
had been chosen to test the learning?
3. Consistency among markers—would the findings have been the same if the
evaluation had been graded by another marker?
The greater the consistency, the more reliable the results. However, no results are completely reliable; there is always some random variation that influences the assessment.
An illustration from everyday life may help clarify reliability. When we measure the length of a room, the consistency of the results depends on the device we use. A traditional meter ruler, for example, will provide far more reliable measurements than an elastic tape measure. Because the ruler is rigid and sturdy, when we use it to measure the room there is a high degree of agreement between measurements.
In contrast, the elastic tape measure must be stretched just enough to indicate an accurate meter, resulting in far less reliable readings. Sometimes we stretch the tape too far and underestimate the length; sometimes we do not stretch it far enough and overestimate it. The elasticity of the tape introduces a random element into the measuring procedure, making our measurements less consistent and hence less reliable.
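This picture can be made concrete with a small simulation. The Python sketch below is a toy model, assuming a 5-meter room and normally distributed measurement error, with a much larger error for the elastic tape; the specific error sizes are invented for illustration.

    # Toy simulation: a reliable (low-noise) ruler vs. an unreliable
    # (high-noise) elastic tape, each measuring the same 5.00 m room.
    import random
    from statistics import mean, stdev

    random.seed(42)     # fixed seed so the demonstration is repeatable
    true_length = 5.00  # meters
    n_trials = 100

    # Each measurement = true length + random error; the tape's error is larger.
    ruler = [true_length + random.gauss(0, 0.005) for _ in range(n_trials)]
    tape = [true_length + random.gauss(0, 0.15) for _ in range(n_trials)]

    print(f"Ruler: mean = {mean(ruler):.3f} m, spread (SD) = {stdev(ruler):.3f} m")
    print(f"Tape:  mean = {mean(tape):.3f} m, spread (SD) = {stdev(tape):.3f} m")
    # Both instruments average close to 5 m, but the tape's readings vary far
    # more from trial to trial: similar accuracy on average, lower reliability.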
When the outcomes of an assessment are reliable, we can be confident that repeated or equivalent assessments will produce consistent results. This allows us to make more general claims about a student's level of achievement, which is extremely useful when we use assessment data to make decisions about teaching and learning, or when we report back to students and their parents or caregivers.
In the test/retest method, the same questionnaire, or an equivalent version, is given to the same set of participants twice. The two sets of results are then compared to calculate the reliability coefficient, which indicates the consistency of the results over time or between equivalent versions of the same test.
Imagine that you are conducting a study for which you must develop a sociology survey questionnaire for grade 7 students. You develop a 30-point survey questionnaire and distribute it to a class of 12 grade 7 students. You then administer the questionnaire again one month later, to the day. The students' scores on the two administrations are listed below.
Student   First administration   Second administration
A         17                     15
B         22                     18
C         25                     21
D         12                     15
E          7                     14
F         28                     27
G         27                     24
H          8                      5
I         21                     25
J         24                     21
K         27                     27
L         21                     19
1. What observations can you make about the reliability of the questionnaire? Explain.
Several students' scores fluctuate noticeably between the two administrations (Student E, for example, went from 7 to 14), so the reliability of the questionnaire is questionable. The test/retest coefficient for these data can be computed directly, as in the sketch below.
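A minimal Python sketch of that computation, using the Pearson correlation as the test/retest reliability coefficient, follows; it is a rough check rather than a full reliability analysis.

    # Test/retest reliability for the 12 students in the table above,
    # computed as the Pearson correlation between the two administrations.
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    first_admin = [17, 22, 25, 12, 7, 28, 27, 8, 21, 24, 27, 21]    # students A-L
    second_admin = [15, 18, 21, 15, 14, 27, 24, 5, 25, 21, 27, 19]

    r = correlation(first_admin, second_admin)
    print(f"Test/retest reliability coefficient: r = {r:.2f}")  # about 0.88 here
    # The overall coefficient is fairly high, but large individual swings
    # such as Student E's (7 -> 14) still warrant caution about single scores.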
2. What factors might affect the reliability of the 30-point survey questionnaire?
Factors might include the students' stress levels, events in their personal lives, and whether the students remembered the items from the first administration.