

Lucyana Simarmata

21202241095
PBI C
Educational Research
WEEK 11
Topic: Validity and reliability
1. Summarize information on reliability and validity from Educational
Research by Burke Johnson and Larry Christensen, pp. 163-176
• Reliability and validity are the two most important psychometric
properties to consider when using a test or assessment procedure.
• Reliability is the consistency or stability of test scores, while validity
is the accuracy of the conclusions or interpretations we make from test
scores. If test scores are not stable or consistent, then the question of
their validity becomes irrelevant.

A. Reliability: refers to the consistency or stability of a set of test scores.
• In reliability there is the term reliability coefficient: a correlation
coefficient used as a measure of reliability. A reliability coefficient of
zero means there is no reliability at all (if we obtain a negative
correlation, we treat it as no reliability and assume something is wrong
with the test).
• Test-retest reliability: the consistency or stability of test scores
over time.
- In test-retest reliability, test scores are considered reliable if the
same participants take the same test twice, with a time interval
between the two administrations, and the scores from the first and
second administrations are essentially the same; if the first and
second scores differ substantially, the test scores are considered
unreliable. (A small computational sketch appears after this list.)
- The length of the interval between the two administrations is very
important and influential.
• Equivalent-forms reliability: the consistency of a group of individuals'
scores on alternative forms of a test designed to measure the same
characteristic.
- The success of this method depends on the ability to construct two
equivalent forms of the test. It is difficult to create two truly
equivalent versions of a test, because the two versions cannot include
the same items.
• Internal consistency: refers to how consistently the items on a test
measure a single construct or concept.
- A homogeneous test is a unidimensional test in which all the items
measure a single construct.
- Split-half reliability involves splitting a test into two equal halves
and then assessing the consistency of scores across the two halves,
specifically by correlating the scores from the two halves (also
illustrated in the sketch after this list).
• Interscorer reliability: refers to the degree of agreement or consistency
between two or more scorers, judges, or raters.
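To make the reliability coefficients above concrete, here is a minimal sketch in Python (not from the textbook; all scores and item responses are invented for illustration) that computes a test-retest coefficient and a split-half coefficient as simple correlations. The Spearman-Brown step-up applied to the split-half estimate is a standard correction for full test length, though it is not mentioned in the notes above.

import numpy as np

# Test-retest reliability: the same ten students take the same test twice,
# with a time interval in between; reliability is the correlation between
# the two sets of scores (near 1 = stable scores, near 0 = no reliability).
time1 = np.array([78, 85, 62, 90, 71, 88, 95, 67, 74, 81])  # hypothetical
time2 = np.array([80, 83, 65, 92, 70, 86, 94, 70, 72, 79])  # hypothetical
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {test_retest_r:.2f}")

# Split-half reliability: split one administration's items into two halves
# (odd- vs. even-numbered items), correlate the half scores, then apply the
# Spearman-Brown step-up to estimate reliability for the full-length test.
# Hypothetical item matrix: 10 students x 8 items, each scored 0 or 1.
items = np.array([
    [1, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
])
odd_half = items[:, 0::2].sum(axis=1)     # items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)    # items 2, 4, 6, 8
half_r = np.corrcoef(odd_half, even_half)[0, 1]
split_half_r = 2 * half_r / (1 + half_r)  # Spearman-Brown step-up
print(f"Split-half reliability (corrected): {split_half_r:.2f}")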

B. Validity: defined as the appropriateness of the interpretations,
conclusions, and actions we make on the basis of test scores.
- Validity evidence is the empirical evidence and theoretical rationale
that support the interpretations and actions we make on the basis of
the score or scores obtained from an assessment procedure.
• When we use content-related evidence, we evaluate the extent to which the
evidence indicates that the items, tasks, or questions on our test
adequately represent the domain of interest.
• When we examine the internal structure of a test, we check whether the
different items really do measure separate dimensions. A useful technique
for examining the internal structure of a test is factor analysis, a
statistical procedure that analyses the relationships among the items to
determine whether a test is unidimensional (i.e., all items measure one
construct) or multidimensional (i.e., different sets of items tap
different constructs or components of a broader construct).
• Evidence based on relations to other variables:
- Criterion-related evidence refers to the extent to which scores from a
test can be used to predict or infer performance on some criterion,
such as another test or future performance.
- Convergent evidence is based on the relationship between the focal test
scores and scores from independent measures of the same construct.
- Discriminant evidence is obtained when the scores on the focal test are
not highly correlated with scores from other tests designed to measure
theoretically different constructs. This information is important
because it also matters to demonstrate what our test does not measure.
(A small sketch of convergent and discriminant correlations appears
after this list.)
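As an illustration of the convergent and discriminant evidence just described, here is a minimal Python sketch (all scores are hypothetical and invented for this example): scores on the focal test should correlate strongly with an independent measure of the same construct and only weakly with a measure of a theoretically different construct.

import numpy as np

# Hypothetical scores for ten students on the focal test, an independent
# test of the same construct, and a measure of a different construct.
focal_test      = np.array([55, 62, 47, 70, 58, 66, 74, 50, 61, 68])
same_construct  = np.array([52, 60, 45, 72, 55, 68, 70, 48, 63, 66])
other_construct = np.array([18, 25, 12, 20, 30, 10, 22, 15, 28, 16])

# Convergent evidence: high correlation with an independent measure of the
# same construct. Discriminant evidence: low correlation with a measure of
# a theoretically different construct.
convergent_r = np.corrcoef(focal_test, same_construct)[0, 1]
discriminant_r = np.corrcoef(focal_test, other_construct)[0, 1]
print(f"Correlation with same-construct measure:      {convergent_r:.2f}")
print(f"Correlation with different-construct measure: {discriminant_r:.2f}")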
2. Summarize information on trustworthiness from the attached article
entitled Strategies for ensuring trustworthiness in qualitative research
projects.
A. Credibility: the main criterion by which researchers ensure that the study
measures or tests what it is actually intended to measure, i.e., that the
findings reflect what actually happened.
In the article the author describes several provisions for promoting credibility:
a. The adoption of research methods that are well established, both in
qualitative investigation in general and in information science in particular.
b. The development of an early familiarity with the culture of participating
organisations before the first data collection dialogues take place.
c. Random sampling of individuals to serve as informants. Using a random
process counters claims that the researcher selected volunteers unfairly.
Preece points out that random sampling also helps to ensure that any
"unknown influences" are distributed evenly within the sample.
d. Triangulation. Triangulation may entail the use of several data collection
techniques, such as focus groups, individual interviews, and observation,
which are the main strategies for gathering data in much qualitative
research. Triangulation can also involve informants from several
organizations, which reduces the influence on the study of local factors
peculiar to one institution.
e. Tactics to help ensure honesty in informants when contributing data. Each
person who is approached should be given the opportunity to refuse to
participate, so that the data collection sessions involve only those who are
genuinely willing to take part and prepared to offer data freely.
f. Iterative questioning. This may involve repeating or rephrasing questions
to elicit related information from the informant, and using probes to obtain
detailed information.
g. Negative case analysis. In one form of negative case analysis, the
researcher refines a hypothesis repeatedly until it accounts for every case
in the data.
h. Frequent debriefing sessions between the researcher and his or her
superiors, such as a project director or steering group. The investigator's
perspective may broaden during a discussion as others share their
perspectives and experiences.
i. Peer scrutiny of the research project. It is important to welcome
opportunities for colleagues, peers, and academics to review the project, as
well as feedback provided to the researcher during any presentations (such
as those at conferences) that are made during the project.
j. The researcher's “reflective commentary”.
k. Background, qualifications and experience of the investigator.
l. Member checks.
m. Thick description of the phenomenon under scrutiny.
n. Examination of previous research findings to assess the degree to which
the project's results are congruent with those of past studies.

B. Transferability: Merriam writes that external validity “concerns the
extent to which the findings of one study can be applied to other
situations”.
There are a few things to consider before making any transference efforts:
a. The number of organizations taking part in the study and where they are
based;
b. Any restrictions in the type of people who contributed data;
c. The number of participants involved in the fieldwork;
d. The data collection methods that were employed;
e. The number and length of the data collection sessions;
f. The time period over which the data was collected.

C. Dependability
To address the issue of dependability more directly, the processes within
the study should be reported in detail, enabling a future researcher to
repeat the work, if not necessarily to obtain the same results.

D. Confirmability
The concept of confirmability is the qualitative investigator's comparable
concern to objectivity. Steps should be taken to help ensure, as far as
possible, that the findings are the result of the experiences and ideas of
the informants rather than of the characteristics and preferences of the
researcher.
