Practical Research 2


CHAPTER 4 UNDERSTANDING DATA AND WAYS TO SYSTEMATICALLY COLLECT DATA

LESSON 1 Choosing Appropriate Quantitative Research Design

research design
-is your overall concept or strategy to put together the components of your study in a logical manner. Additionally, the design ensures that the research problem is appropriately addressed.

The research problem and questions shall determine the type of research design you should use.

Types of Quantitative Research Designs

A. Exploratory Research Design
 is often used to establish an initial understanding and background information about a research study of interest, often with very few or no earlier related studies found relevant to the research study.
 This research design is described as an informal or unstructured way of investigating available sources. You may conduct library searches, secondary data analysis, experience surveys, opinionnaires, case analyses, focus groups, projective techniques and Internet searches.
 Secondary data include information which you may gather from books, journals, proceedings, newsletters, magazines, annual reports and many others. Experience surveys refer to gathering data from key informants about a research topic. In case analysis, you may review past experiences or situations that may have some similarities with the present research problem. You can also gather small groups of people and conduct focus group discussions. Through an unstructured discussion, you can gain information relevant to the research study that you plan to undertake.

B. Descriptive Research Design
 is used to gather information on current situations and conditions. It helps provide answers to the questions of who, what, when, where and how of a particular research study. Descriptive research studies provide accurate data after subjecting them to a rigorous procedure and using large amounts of data from large numbers of samples. This design leads to logical conclusions and pertinent recommendations.
 However, the descriptive research design is dependent to a high degree on data collection instrumentation for the measurement of data and analysis.

According to Polit and Hungler (1999), the following research designs are classified as descriptive designs:

• Survey
-The survey research design is usually used in securing opinions and trends through the use of questionnaires and interviews.
-A survey is used in gathering data from institutions, government and businesses to help in decision-making regarding change strategies, improving practices, and analyzing views on choice of products or market research. Surveys can be conducted face-to-face or online. Online surveys are widely used because gathering data from the target respondents or completing questionnaires is fast using the Internet.

• Correlation Research
-is used for research studies aimed to determine the existence of a relationship between two or more variables and to determine the degree of the relationship.

• Evaluation Research
-is conducted to elicit useful feedback from a variety of respondents from various fields to aid in decision making or policy formulation.
Commonly used types of evaluation based on the purpose of the study are:
 Formative evaluation is used to determine the quality of implementation of a project, the efficiency and effectiveness of a program, and the assessment of organizational processes such as procedures, policies, guidelines, human resource development and the like.
 On the other hand, Summative evaluation is done after the implementation of the program. It examines the outcomes, products or effects of the program.

examples of formative evaluation:

Needs Assessment
-Evaluates the need for the program or project.

Process Evaluation
-Evaluates the process of implementation of a program.

Implementation Evaluation
-Evaluates the efficiency or effectiveness of a project or program.
Program Monitoring
-Evaluates the performance and implementation of an unfinished program. The evaluation is done prior to the completion of the program. It helps improve implementation and achieve best results.

examples of summative evaluation:

Secondary Data Analysis
-You may examine existing data for analysis.

Impact Evaluation
-This is used to evaluate the overall effect of the program in its entirety.

Outcome Evaluation
-This is done to determine if the program has caused useful effects based on target outcomes.

Cost-effectiveness Evaluation
-Also called cost-benefit analysis, it compares the relative costs to the outcomes or results of some courses of action.

C. Causal Research Design
 is used to measure the impact that an independent variable (causing effect) has on another variable (being affected) or why certain results are obtained. A valid conclusion may be derived when an association between the independent variable and the dependent variable is obtained. It can also be used to identify the extent and nature of cause-and-effect relationships. Causal research can help businesses determine how decisions may affect operations.


LESSON 2 Describing Sample Size and Sampling Procedures

Sample Size Determination
-A sample (n) is a selection of respondents for a research study to represent the total population (N). Making a decision about sample size for a survey is important. Too large a sample may mean a waste of resources, both human and financial. On the other hand, too small a sample decreases the utilization of the results.

The following are some reasons for the use of samples:
1. A sample saves time compared to doing a complete census, which requires more time.
2. A sample saves money because it is less costly than conducting a complete census.
3. A sample allows more particular attention to be given to a number of elements than when doing a census.
4. There is a greater error in reporting results of a census caused by inexperienced interviewers. There is less sampling error in a survey.
5. Some research studies in industry may only be performed on a sample of items, for example, testing the length of time a battery will last.

SLOVIN'S Formula in Determining the Sample Size
-Population (N) consists of the members of a group that a researcher is interested in studying, members who usually have common or similar characteristics.
-Margin of error (e) is the allowable error margin in research. A 95% confidence interval gives a margin of error of 5%; a 98% confidence interval gives a margin of error of 2%; a 99% confidence interval gives a 1% margin of error.

n = N / (1 + Ne²)
WHERE
n = sample size
N = total population
e = margin of error
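A minimal Python sketch of the computation with Slovin's formula, using an assumed population of 5,000 respondents and a 5% margin of error purely for illustration:

import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    # Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# Assumed figures: N = 5,000 and e = 0.05 (5% margin of error)
print(slovin_sample_size(5000, 0.05))  # 371, since 5000 / (1 + 5000 * 0.0025) = 370.37...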

Sampling Procedures
-a formal process of choosing the correct subgroup, called a sample, from a population to participate in a research study. The subgroup shall be representative of the larger group from which it was selected.

Probability Sampling Procedures
-The most important characteristic of probability sampling procedures is the random selection of the samples.
-Specifically, each sample (n) or element from the population (N) has an equal chance of selection under a given sampling technique.
Simple Random Sampling
-This is the most frequently used type of probability sampling technique. It is characterized by the idea that the chance of selection is the same for every member of the population.
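Because every member of the population must have the same chance of selection, simple random sampling is usually done by drawing at random from a complete list of the population. A minimal sketch, assuming a hypothetical numbered sampling frame of 5,000 respondents:

import random

population = list(range(1, 5001))        # assumed sampling frame: respondents numbered 1 to 5,000
sample = random.sample(population, 500)  # 500 members drawn without replacement, each equally likely
print(sorted(sample)[:10])               # first ten selected respondent numbers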
Systematic Random Sampling
-follows specific steps and procedures in doing the random selection of the samples. It requires a list of the elements, and every nth element in the list is drawn for inclusion in the sample. If, for instance, you have a list of 5,000 persons and you need a sample of 500, here are the steps to follow:
 Divide the number of elements in the population by the desired sample size. In this case, you divide 5,000 by 500, which gives a value of 10.
 Choose a random number between one and the value you obtained from Step 1. In this example, you choose a number between 1 and 10; let's say you choose 5.
 Starting with the number you picked, which is 5, you take every tenth (10th) element (from Step 1), using 5 as your starting point. Thus, you have to select the samples whose numbers are 5, 15, 25, 35, 45 and so on until you reach the required sample size of 500, as shown in the sketch below.
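The steps above amount to computing the sampling interval (N divided by n), picking a random start between 1 and that interval, and then taking every nth element. A short sketch of that procedure, reusing the assumed list of 5,000 persons and a sample of 500:

import random

frame = list(range(1, 5001))          # assumed list of 5,000 persons
sample_size = 500
interval = len(frame) // sample_size  # Step 1: 5,000 / 500 = 10
start = random.randint(1, interval)   # Step 2: random start between 1 and 10
sample = frame[start - 1::interval]   # Step 3: every 10th person beginning at the start
print(sample[:5])                     # e.g. [5, 15, 25, 35, 45] when the start is 5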
Stratified Random Sampling
-the population is first divided into two or more mutually exclusive categories based on your variables of interest in the research study. The population is organized into homogeneous subsets before drawing the samples. With stratified random sampling, the population is divided into subpopulations called strata.
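To keep the sample representative, researchers commonly draw from each stratum in proportion to its share of the population (proportional allocation). A minimal sketch, using hypothetical strata sizes that are not taken from the module:

import random

# Hypothetical strata: grade level mapped to student numbers
strata = {
    "Grade 11": list(range(1, 601)),     # 600 students
    "Grade 12": list(range(601, 1001)),  # 400 students
}
total = sum(len(members) for members in strata.values())
sample_size = 100

stratified_sample = {}
for name, members in strata.items():
    share = round(sample_size * len(members) / total)  # proportional allocation per stratum
    stratified_sample[name] = random.sample(members, share)

print({name: len(chosen) for name, chosen in stratified_sample.items()})  # {'Grade 11': 60, 'Grade 12': 40}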
Cluster Sampling
-Most large-scale surveys use the cluster sampling method.
-is used when the target respondents in a research study are spread across a geographical location. In this method, the population is divided into groups called clusters which are heterogeneous in nature and are mutually exclusive. A random sampling technique is used on relevant clusters to be included in the study.

-Cluster sampling may be classified as single-stage or two-stage cluster sampling; there also exists multi-stage cluster sampling.

-In single-stage cluster sampling, all the members from each of the selected clusters are used in the sampling process.

-In two-stage cluster sampling, a subset of elements within each selected cluster is randomly selected for inclusion in the sample.

-In multi-stage sampling, more than two steps are taken in selecting clusters from clusters.

Non-Probability Sampling Procedures
There are situations when the researcher cannot employ random selection. In cases where probability sampling is not applicable, you may consider some non-probability sampling alternatives.

Convenience Sampling
-This is a method of selecting samples that are available and are capable of participating in a research study on a current issue. This method is sometimes called haphazard or availability sampling.

Snowball Sampling
-is a technique where the researcher identifies a key informant about a research topic of interest and then asks that respondent to refer or identify another respondent who can participate in the study. The identification of the samples follows a multiplier effect; that is, one person is asked to refer the researcher to another respondent, and so on.

Purposive Sampling
-sometimes called judgmental or subjective sampling, it employs a procedure in which samples are chosen for a special purpose. It may involve members of a limited group of the population.

Quota Sampling
-is gathering a representative sample from a group based on certain characteristics of the population chosen by the researcher. Usually the population is divided into specific groups based on specific conditions.
-The main difference between stratified random sampling and quota sampling can be explained in a way that in quota sampling, you use non-random selection.


LESSON 3 Designing the Questionnaire and Establishing Validity and Reliability

Designing The Questionnaire
A questionnaire is an instrument for collecting data. It consists of a series of questions to which respondents provide answers for a research study.

Step 1 - Background
You do basic research on the background of the chosen variable or construct. Choose a construct that you can use to craft the purpose and objective of the questionnaire.
In research, the term construct refers to a trait or characteristic that you would like to evaluate or measure.

There are five (5) main types of variables:

Dependent variables
-These are variables that you are trying to explain.

Independent or explanatory variables
-These are variables that cause, influence or explain a change in the dependent variable. There may be one or more independent variables in a research study.

Control variables
-These are variables that are used to test for a possible erroneous relationship between the identified independent and dependent variables.
-It is possible that the observed relationship between the dependent and independent variables may be explained by the presence of another variable.

Continuous variables
-These are variables defined on a continuous scale.

Discrete variables
-These are variables which can also be counted but must be a whole number. Some variables are continuous but reported as discrete when they are rounded off.

Step 2 - Questionnaire Conceptualization
-Choose the response scale to use. This is how you want your respondents to answer the questions in your study. You can choose from the following response scales:

-Yes/No/Don't Know
-This type of response scale allows the respondent to select only one answer. The "don't know" answer is the neutral response.

Likert Scale
-is a very popular rating scale used by researchers to measure behaviors and attitudes quantitatively. It consists of choices that range from one extreme to another, from which respondents choose the degree of their opinions. It is the best tool for measuring the level of opinions.
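As an illustration (the scale labels and responses below are assumptions, not taken from the module), a five-point Likert item is usually scored by assigning a number to each response category and then averaging the scores:

# Assumed five-point Likert scale for an attitude statement
likert_scale = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = ["Agree", "Strongly Agree", "Neutral", "Agree"]  # hypothetical answers from four respondents
scores = [likert_scale[answer] for answer in responses]
print(sum(scores) / len(scores))  # mean rating: 4.0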
-Generate the items or questions of the questionnaire based on the purpose and objectives of the research study, keeping the following guidelines in mind:
-The questions should be clear, concise and simple, using a minimum number of words. Avoid a lengthy and confusing layout.
-Classify your questions under each statement based on your problem statement.
-Questions should be consistent with the needs of the study.
-Avoid sensitive or highly debatable questions.

types of questions

Dichotomous question
-This is a "Yes/No" or "Like/Dislike" question where only two (2) choices are provided. Male/Female and Good/Bad are also examples of dichotomous choices.

Open-ended question
-This type of question usually answers the question "why". It allows the respondents to give their ideas and insights on a particular issue. This type of question gives additional challenge to the researcher, who must review each response before assigning codes and analyzing the data.

Closed questions
-These are also called multiple-choice questions. These questions may consist of three or more mutually exclusive questions with different categories.

Rank-order scale questions
-Respondents are asked to rank their choices on each statement or item. Ranking requires that a set of items be ranked in order to compare each item to all others.

Rating scale questions
-You construct a scale like those examples given for Likert scale ratings.

Step 3 - Establish the Validity of the Questionnaire
-Validity is traditionally defined as the "degree to which a test measures what it claims, or purports, to be measuring" (Brown, 1996).
-A questionnaire undergoes a validation procedure to make sure that it accurately measures what it aims to do. A valid questionnaire helps to collect reliable and accurate data.

ways to assess the validity of a set of measurements:

Face validity
This is a superficial or subjective assessment. The questionnaire appears to measure the construct or variable that the research study is supposed to measure.
Content validity
-is most often measured by experts or people who are familiar with the construct being measured. The experts are asked to provide feedback on how well each question measures the variable or construct under study. The experts make judgements about the degree to which the test items or statements match the test objectives or specifications.

Criterion-related validity
-This type of validity measures the relationship between a measure and an outcome. Criterion-related validity can be further divided into concurrent and predictive validity.

Concurrent validity
This type of validity measures how well the results of an evaluation or assessment correlate with other assessments measuring the same variables or constructs.

Predictive validity
This measures how well the results of an assessment can predict a relationship between the construct being measured and future behaviour.

Construct validity
This is concerned with the extent to which a measure is related to other measures as specified in a theory or previous research. It is an experimental demonstration that a test is measuring the construct it claims to be measuring.

Step 4 - Establish the Reliability of the Questionnaire
-Reliability indicates the accuracy or precision of the measuring instrument (Norland, 1990). It refers to a condition where the measurement process yields consistent responses over repeated measurements. To apply this concept in research, you need a questionnaire that is reliable. You need questions that yield consistent scores when asked repeatedly.

ways to assess the reliability of a questionnaire:

Test-retest reliability
-This is the simplest method of assessing reliability. The same test or questionnaire is administered twice and the correlation between the two sets of scores is computed.

Split-half method
-This method is also called equivalent or parallel forms. In this method, two different tests covering the same topics are used and the correlation between the two sets of scores is calculated.
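In both test-retest and split-half reliability, the figure actually reported is the correlation between the two sets of scores, usually Pearson's r. A minimal sketch with hypothetical scores from five respondents:

def pearson_r(x, y):
    # Pearson correlation coefficient between two equally long lists of scores
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    norm_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    norm_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (norm_x * norm_y)

first_administration = [12, 15, 9, 18, 14]    # hypothetical scores from the first testing
second_administration = [13, 14, 10, 17, 15]  # same respondents on the retest
print(round(pearson_r(first_administration, second_administration), 2))  # about 0.97, a high estimate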
Internal consistency
This method is used in assessing the reliability of questions measured on an interval or ratio scale. The reliability estimate is based on a single form of a test administered on a single occasion. One popular formula to measure internal consistency is called Cronbach's alpha. This can be computed using manual and electronic computations such as the Statistical Package for the Social Sciences. Cronbach's alpha can range from 0 to 1.
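For reference, Cronbach's alpha is computed from the item variances and the variance of the respondents' total scores: alpha = (k / (k - 1)) * (1 - (sum of item variances) / (variance of total scores)), where k is the number of items. A small sketch with hypothetical responses (rows are respondents, columns are items):

def cronbach_alpha(item_scores):
    # item_scores: one list per respondent, each holding that respondent's scores on k items
    k = len(item_scores[0])

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_variances = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_variance = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical 5-point responses from four respondents on three items
answers = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(answers), 2))  # 0.9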
Step 5 - Pilot Testing of the Questionnaire
-Pre-testing or pilot testing a questionnaire is important before you use it to collect data. Through this process, you can identify questions or statements which are not clear to the participants, or there might be some problems with the relevance of the questionnaire to the current study.
Such remarks may be any of the following:
 "Delete this statement. I don't understand the question/statement."
 "Revise the question/statement. Indicate the specific variables to be measured."
 "Retain the question/statement. This is good."
 "There are missing options in the list of choices."
 "The question is so long. It's getting boring."

Step 6 - Revise the Questionnaire
After identifying the problem areas in your questionnaire, revise the instrument as needed based on the feedback provided during the pre-testing or pilot testing. The best questionnaire is one that is edited and refined towards producing clear questions arranged logically and in sequential order. The questionnaire should match the research objectives.


LESSON 4 Planning Data Collection Procedures

Data collection refers to the process of gathering information. The data that you will collect should be able to answer the questions you posed in your Statement of the Problem.

Types Of Quantitative Data Collection Procedures

A. Observation
-This method of gathering data is usually used in situations where the respondents cannot answer the researcher's questions to obtain information for a research study.
-The observation is structured to elicit information that could be coded to give numerical data. As a researcher, you have to prepare a checklist using appropriate rating scales that may categorize the behaviour, attitude or attribute that you are observing to answer the questions posed in your study. As you observe, you will record your observations by using check marks or cross marks on your checklist.

B. Survey
Quantitative data can be collected using four (4) main types of survey:

Sample survey
-The researcher collects data from a sample of a population to estimate the attributes or characteristics of the population.

Administrative data
-This is a survey on the organization's day-to-day operations. This kind of data is now supported with various ICT tools and software, making it easy for organizations, especially government, schools, industry and NGOs, to update their records efficiently and effectively and to put up their own Management Information Systems (MIS).

Census
-The researcher collects data from the selected population. It is an official count or survey of a population with details on demographic, economic and social data such as age, sex, education, marital status, household size, occupation, religion, employment data, educational qualifications, and housing. The collected data are usually used by government or private firms for planning purposes and development strategies.

Tracer studies
-This is a regular survey with a sample of those surveyed within a specific time or period.
-In school settings, tracer studies are used by educational institutions to follow up their graduates. The survey is usually sent to a random sample one or two years after graduation from their courses. Tracer studies gather data on work or employment, current occupation and competencies needed in the workplace to determine gaps in the curriculum and other related activities between academe and industry.

C. Quantitative Interview
-The interview may be used for both quantitative and qualitative research studies. Both research methods involve the participation of the researcher and the respondent.
-In conducting a quantitative interview, the researcher prepares an interview guide or schedule. It contains the list of questions and answer options that the researcher will read to the respondent. The interview guide may contain closed-ended questions and a few open-ended questions as well, delivered in the same format and same order to every respondent.

D. Questionnaire
-A questionnaire may be standardized or researcher-made.

A standardized questionnaire has gone through the process of psychometric validation and has been piloted and revised. Sauro (2012) provided the advantages of a standardized usability questionnaire:

Validity
It has undergone the process of validation procedures. That is, it determines how well the questionnaire measures what it is intended to measure.

Reliability
The repeatability of the questionnaire has been tested. It refers to how consistent responses are to the questions.

Sensitivity
It is often measured using resampling procedures to see how well the questionnaire can differentiate at a fraction of the sample size.

Objectivity
To attain this measure, practitioners or experts are requested to verify statements of other practitioners in the same field.

Quantification
The standardized questionnaire has undergone statistical analysis.

Norms
The standardized questionnaire has normalized references and databases which allow one to convert raw scores to percentile ranks.

For a researcher-made questionnaire that has been developed by the researcher specifically for a research study, the following should be discussed:
1. the corrections and suggestions made on the draft to improve the instrument
2. the different persons involved in the correction and refinement of the research instrument
3. the pre-testing efforts and subsequent instrument revisions
4. the type of items used in the instrument
5. the reliability of the data and evidence of validity
6. the steps involved in scoring, and guidelines for interpretation