Revision Notes - Unit On A Page
Rigor A good theoretical base and sound methodological design give rigor to the research.
Rigor indicates carefulness and degree of exactitude in research.
Testability Scientific research lends itself to testing logically developed hypotheses to see
whether or not the data support the educated conjecture or hypothesis.
Replicability The results of the tests of hypotheses should be supported again and again when
the same type of research is conducted in other similar circumstances.
Precision Precision refers to the closeness of the findings to reality based on a sample.
& Confidence It reflects the degree of exactness and accuracy of the results obtained from the
sample; in statistics, precision is captured by the confidence interval.
Confidence refers to the probability that our estimates are correct, so that we can
confidently claim that 95% of the time our results will be true and there is only a 5%
chance of our results being false.
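The 95% confidence claim above corresponds to a confidence interval around a sample estimate. A minimal Python sketch, using hypothetical data and the normal approximation (z = 1.96 for 95% confidence):

```python
import statistics

# Hypothetical sample of measurements (illustrative data only)
sample = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9]

n = len(sample)
mean = statistics.mean(sample)
# Standard error of the mean, from the sample standard deviation
se = statistics.stdev(sample) / n ** 0.5

# 95% confidence level -> z = 1.96 under the normal approximation
z = 1.96
lower, upper = mean - z * se, mean + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

A narrower interval indicates greater precision; demanding higher confidence (e.g., 99%) widens the interval, illustrating the trade-off between precision and confidence.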
Objectivity The conclusions drawn through the interpretation of the results of data analysis
should be objective; that is, they should be based on the facts of the findings derived
from actual data and not on our own subjective or emotional values.
Generalizability It refers to the scope of replicability of the research findings from one organizational
setting to others; the wider the range of replicability of the solutions generated by
the research, the more useful the research is to its users.
Parsimony Simplicity in explaining the phenomenon or problem that occurs, and in generating
solutions to the problem, is preferred to a complex research framework.
Problem Statement
A good problem statement contains:
Ethics:
Ethical conduct should also be reflected in the behavior of the researchers who conduct the
investigation, the participants who provide the data, the analysts who provide the results, and
the entire research team that presents the interpretation of the results and suggests alternative
solutions.
Rights and obligations of the researcher: objectivity; protection of the right to confidentiality
of both subjects and clients; and issues of advocacy research
Qualitative Research is appropriate for the study of: Practices (types of behaviour), Encounters,
Roles, Relationships, Groups, and Lifestyles/Subcultures
Exploratory Research
Exploratory research is undertaken when not much is known about the situation at hand, or when no
information is available on how similar problems or research issues have been solved in the past. In such
cases, extensive preliminary work needs to be done to understand what is occurring, assess the magnitude of
the problem, and/or gain familiarity with the phenomena in the situation.
Exploratory research often relies on secondary data such as review of literature and/or qualitative
approaches to data gathering. Exploratory studies are usually not generalizable to the population.
Sources of Data:
Primary data: information obtained first-hand by the researcher on the variables of interest for the
specific purpose of the study. Examples: individuals, focus groups, panels
Secondary data: information gathered from sources already existing. Examples: company records or
archives, government publications, industry analyses offered by the media, web sites, the Internet,
and so on.
Interviews:
Unstructured interviews:
the interviewer does not enter the interview setting with a planned sequence of questions to
be asked of the respondent.
Structured interviews:
Conducted when it is known at the outset what information is needed.
The interviewer has a list of predetermined questions to be asked of the respondents either
personally, through the telephone, or via the computer.
Telephone vs Personal
Self-administered
Projective methods
Word association technique – respondents associate a word with the first thing that comes
to mind (attitude or feelings)
Thematic apperception tests (TAT) – respondents weave a story around a picture
Observation: Observation involves going into ‘the field’ - the factory, the supermarket, the waiting room,
the office, or the trading room - watching what workers, consumers, or day traders do, and
describing, analyzing, and interpreting what one has seen.
Types of observations:
Controlled observation occurs when observation is carried out under carefully arranged
conditions.
Uncontrolled observation is an observational technique that makes no attempt to control,
manipulate or influence the situation.
Participant observation is an approach that has frequently been used in case studies, ethnographic
studies, and grounded theories. In Participant observation the researcher gathers data by
participating in the daily life of the group or organization under study.
In Non-participant observation, the researcher is never directly involved in the actions of the
actors, but observes them from outside the actors’ visual horizon, for instance, via a one-way mirror
or a camera.
In Structured observational studies, the researcher has a predetermined set of categories of
activities or phenomena planned to be studied. Structured observation is generally quantitative in
nature.
In unstructured observation, the researcher records practically everything that is observed.
Unstructured observation is claimed to be the hallmark of qualitative research.
Concealment of observation relates to whether members of the social group under study are told
that they are being studied. A primary advantage of concealed observation is that the research
subjects are not influenced by the awareness that they are being observed.
Unconcealed observation is more obtrusive, perhaps upsetting the authenticity of the behaviour
under study.
Descriptive Designs:
Survey - Survey studies collect data at a single point in time, much like a still-life
camera takes a snapshot.
Experimental Approach: a study design in which the researcher might create an artificial
setting, control some variables, and manipulate the independent variable to establish
cause-and-effect relationships.
Experimental Designs:
Ex post facto experimental design: Studying subjects who have already been
exposed to a stimulus and comparing them to those not so exposed, so as to
establish cause-and-effect relationships.
In a quasi-experimental design, the researcher substitutes statistical "controls" for
the absence of physical control of the experimental situation. The most common
quasi-experimental design is the Comparison Group Pre-test/Post-test Design. This
design is the same as the classic controlled experimental design except that the
subjects cannot be randomly assigned to either the experimental or the control
group, or the researcher cannot control which group will get the treatment. In other
words, participants do not all have the same chance of being in the control or the
experimental groups, or of receiving or not receiving the treatment.
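The logic of the comparison group pre-test/post-test design can be shown with simple arithmetic: the estimated treatment effect is the treatment group's gain beyond the comparison group's gain. A minimal sketch using hypothetical group means:

```python
# Illustrative pre-test/post-test mean scores (hypothetical data)
treatment = {"pre": 62.0, "post": 74.0}   # group receiving the treatment
comparison = {"pre": 60.0, "post": 65.0}  # non-randomized comparison group

# Change within each group from pre-test to post-test
treatment_gain = treatment["post"] - treatment["pre"]     # 12.0
comparison_gain = comparison["post"] - comparison["pre"]  # 5.0

# Estimated treatment effect: the extra gain beyond the comparison group
effect = treatment_gain - comparison_gain
print(effect)  # 7.0
```

The comparison group's gain serves as the statistical "control" for change that would have occurred without the treatment, since random assignment is not possible.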
Direct Observation
Experiments
Surveys/Questionnaires *
Records
* If items on the questionnaires are open-ended, the data being collected are qualitative in nature.
Organization
Topical
Distant to close
Debate
Chronological
Seminal
Dependent Variable
The dependent variable is the variable a researcher is interested in. It depends on other
factors that are measured. These variables are expected to change as a result of an
experimental manipulation of the independent variable or variables. It is the presumed
effect.
Independent Variable
An independent variable is a variable believed to affect the dependent variable. It is
stable and unaffected by the other variables you are trying to measure. It refers to the
condition of an experiment that is systematically manipulated by the investigator. It is the
presumed cause.
Scales of Measurement
Nominal Scale: A scale of measurement in which the scale values represent categories that
only differ from one another qualitatively (i.e., differ in “type” rather than in
“amount”). Variables measured using a nominal scale are also known as
“qualitative” variables. Examples are: ethnic group, major, religious
affiliation, course of study, gender, etc.
For nominal scale, think of names.
Ordinal Scale: A scale of measurement in which the scale values represent categories that
differ quantitatively in terms of order, but in which the intervals between
numbers (i.e., between categories) cannot be assumed to be equal. For
example, John is taller than Jane. This statement does not state how much
taller.
For ordinal scale, you could think of order or ranking.
Interval Scale: A scale of measurement in which the distance between any two adjacent
scores is the same as the distance between any other two adjacent scores.
However, there is no “true” or “neutral” zero point and therefore meaningful
ratios cannot be formed. Numbers are spread across equal intervals without
a natural zero point.
Ratio Scale: With ratio scales, the scale values are numbers that represent equal
distances in some attribute, and there also is an absolute zero point. Thus,
meaningful ratios can be formed. Examples: length, height, time, number of
errors made performing a task, exam grade, number of tickets sold, etc.
Evaluating Measures
Accuracy: The degree to which an instrument yields results that agree with an accepted
standard. Examples: time, length, weight, decibels, etc.
Reliability: The extent to which a measure yields consistent results. The following are
different types of reliability:
Parallel-form: When responses on two comparable sets of measures tapping into the same
construct are highly correlated, we have parallel-form reliability.
Interitem
Consistency: The interitem consistency reliability is a test of the consistency of
respondents’ answers to all the items in a measure. The most common test
of internal consistency is Cronbach’s coefficient alpha.
In general, reliabilities less than 0.60 are considered to be poor, those
in the 0.70 range, acceptable, and those over 0.80 good.
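Cronbach's alpha can be computed directly from its formula, alpha = k/(k-1) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal Python sketch with hypothetical responses to a four-item measure:

```python
import statistics

# Hypothetical responses: rows = respondents, columns = items on the scale
scores = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
]

k = len(scores[0])          # number of items
items = list(zip(*scores))  # responses grouped per item
totals = [sum(row) for row in scores]  # each respondent's total score

item_var = sum(statistics.variance(item) for item in items)
total_var = statistics.variance(totals)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(round(alpha, 2))
```

With this illustrative data the alpha comes out above 0.80, which by the rule of thumb above would be considered good.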
Validity: The extent to which a measure truly assesses what it is claimed to measure;
the degree to which the measure achieves the aims for which it was
designed. The following are different types of validity:
Face Validity: The extent to which a measure simply appears to be a reasonable measure
of some trait.
Content validity: The degree to which the content of a measure (e.g., an IQ test, job aptitude
test, achievement test) covers a representative sample of the domain (e.g.,
intelligence, job skills, knowledge) being assessed.
Criterion validity: The degree to which scores on a measure can meaningfully predict some
current or future behaviour.
Sampling Designs (Probability Sampling)
Sample Design Description
Simple random All elements in the population are considered and each
sampling element has an equal chance of being chosen.
Systematic sampling Every nth element in the population is chosen, starting from
a randomly selected element among the first n.
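Both probability designs can be sketched in a few lines of Python, using a hypothetical population of 100 elements and a sample size of 10:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = list(range(1, 101))  # hypothetical population of 100 elements
n = 10                            # desired sample size

# Simple random sampling: every element has an equal chance of selection
simple_sample = random.sample(population, n)

# Systematic sampling: every nth element, starting from a random point
step = len(population) // n       # sampling interval (here, 10)
start = random.randrange(step)    # random start within the first interval
systematic_sample = population[start::step]

print(simple_sample)
print(systematic_sample)
```

Note that systematic sampling gives evenly spaced elements, which only yields unbiased samples when the population list has no periodic ordering.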
The abstract
Chapter 2 – Methodology
Theoretical framework
Type of design and the assumptions that underlie it
The role of the researcher (including qualifications and assumptions)
Selection and description of the site and participants
Data collection strategies
Methods of achieving validity
Chapter 3 - Findings
Relationship to literature
Relationship to theory
Relationship to practice
References
Appendices
Preliminary Pages
1. Title Page
2. Abstract
3. Acknowledgements
4. Table of Contents
5. List of Tables
6. List of Figures