Nature of Research
Descriptive-survey- suitable wherever the subjects vary among themselves and one is interested
in knowing the extent to which different conditions and situations obtain among these
subjects.
Descriptive-normative survey- Good and Scates (1972) stressed that “the term normative is
sometimes used because surveys are frequently made to ascertain the normal or typical condition
for practice or to compare local test results with a state or national norm.”
Descriptive-analysis- determines or describes the nature of an object by separating it into its
parts. Its purpose is to discover the nature of things.
Descriptive-classification- this method is employed in natural science subjects such as Biology,
Botany, Zoology, and the like. The specimens collected are classified from phylum down to species.
Descriptive-evaluative- this design appraises carefully the worth of the current study.
Descriptive-comparative- the researcher considers two variables (not manipulated) and
establishes a formal procedure to compare them and conclude that one is better than the other.
Correlational-survey- this is designed to determine the relationship between two variables (X and Y),
whether the relationship is perfect, very high, high, marked or moderate, slight, or negligible.
Longitudinal survey- this involves an extended period of investigation of the same subjects at
two or more points in time.
Types of Experimental Research (Calmorin, 2012)
Single-group design- this design involves a single treatment with two or more levels
Two-group design- two comparable groups are employed as experimental and control groups or
two comparable groups are both experimental groups.
Two-pair group design- elaboration of the two-group design wherein there are two control
groups and two experimental groups.
Randomized Complete Block- uses groups of test plants or animals as subjects of the study;
the subjects are studied once, but the treatments applied are replicated to determine the cause of
change.
Correlational- used to determine the relationship between two dependent variables, X and Y, and
how they are influenced by the independent variable.
Applied research aims at finding a solution to an immediate problem facing a society or an
industrial/business organisation, while fundamental research is mainly concerned with
generalisations and with the formulation of a theory.
Conceptual research is that related to some abstract idea(s) or theory. It is generally used by
philosophers and thinkers to develop new concepts or to reinterpret existing ones. On the other
hand, empirical research relies on experience or observation alone, often without due regard for
system and theory. It is data-based research, coming up with conclusions that are capable of
being verified by observation or experiment; it is also called experimental research.
Historical research is that which utilizes historical sources like documents, remains, etc. to study
events or ideas of the past, including the philosophy of persons and groups at any remote point of
time.
The purpose of research is to discover answers to questions through the application of scientific
procedures. The main aim of research is to find out the truth which is hidden and which has not
been discovered as yet. Though each research study has its own specific purpose, we may think
of research objectives as falling into a number of following broad groupings:
To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object
in view are termed as exploratory or formulative research studies);
To portray accurately the characteristics of a particular individual, situation or a group (studies
with this object in view are known as descriptive research studies);
To determine the frequency with which something occurs or with which it is associated with
something else (studies with this object in view are known as diagnostic research studies);
To test a hypothesis of a causal relationship between variables (such studies are known as
hypothesis-testing research studies).
Characteristics of a Good Research
Empirical
Research is based on direct experience or observation by the researcher. The collection of data
relies on practical experience without the benefit of scientific knowledge or theory.
Logical
Research is based on valid procedures and principles. Scientific investigation is done in an
orderly manner, so that the researcher has confidence in the results. Logical examination of the
procedure used in the research enables the researcher to draw valid conclusions; thus, the logic of
valid research makes it important for decision-making.
Cyclical
Research is a cyclical process. It starts with a problem and ends with a problem. For instance, a
researcher who completes his study states his findings and draws up his conclusions and
recommendations. From his recommendations, many problems may crop up as subjects for further
study; hence, the cycle is repeated.
Analytical
Research utilizes proven analytical procedures in gathering data, whether historical, descriptive,
experimental, or any alternative research method. In historical research, the data gathered focus
on the past; in descriptive research, on the present; in experimental research, on the future; and
in alternative methods, on the past, present, or future.
Replicability
The research design and procedures are replicated to enable the researcher to arrive at valid
and conclusive results. The similarities and differences of replicated studies can be compared.
The more replications there are, the more valid and conclusive the results will be.
Critical
Research exhibits careful and precise judgment. A high level of confidence must be
established, i.e., at the 1.0 percent or 5.0 percent level of significance. Based on these levels, the
researcher can be precise in interpreting whether the results are significant or
insignificant, and whether to reject or accept the hypothesis.
Methodical
Research is conducted in a methodical manner, without bias, using systematic methods and
procedures.
There are 10 qualities of a good researcher, which form the acronym RESEARCHER. These qualities
are:
Research-oriented
Efficient
Scientific
Effective
Active
Resourceful
Creative
Honest
Economical
Religious
A researcher who possesses these qualities is the kind of investigator the government needs
because he can respond to the socioeconomic development of the country and compete globally.
Intellectual curiosity
A researcher undertakes deep thinking and inquiry into the things, problems, and situations around
him. He is keen to get information on these problems and situations, often because of their
unusualness and newness.
Prudence
The researcher is careful to conduct his research study at the right time and at the right place
wisely, efficiently and economically. In other words, he does the right thing at the right time.
Healthy criticism
The researcher is always doubtful as to the truthfulness of the results. Normally, the investigator
doubts the authenticity or validity of his findings even if the data are gathered honestly. For
instance, after administering questionnaires to the subjects of the study, the researcher doubts
whether the subjects answered the items correctly.
Intellectual honesty
An intelligent researcher is honest in collecting and gathering data or facts in order to arrive at
honest results. The success or failure of his research lies in his hands.
Intellectual creativity
A productive and resourceful investigator always creates new research. He enjoys inventing
unique, novel, and original studies and considers research his hobby. In other words, a
creative researcher is also innovative.
When we hear the term "scientific method," we tend to think of it only as it is used in science,
not in research; however, in doing research we utilize the scientific method to arrive at our
conclusions.
The scientific method attempts to explain the natural occurrences (phenomena) of the universe
by using a logical, consistent, systematic method of investigation: information (data) collection,
data analysis, hypothesis formulation, testing (experimentation), and refinement, to arrive at a
well-tested, well-documented explanation that is well supported by evidence, called a theory.
The scientific method is a process for gathering data and processing information. It provides
well-defined steps to standardize how scientific knowledge is gathered through a logical, rational
problem-solving method.
To better understand the process of the scientific method, consider the following sequence of steps:
1. Problem/Objectives
2. Hypotheses
3. Theoretical/Conceptual Framework
4. Assumptions
5. Review of Related Literature
6. Research design
7. Data collection
8. Data processing and statistical treatment
9. Analysis and Interpretation
10. Summary, Conclusions and Recommendations
To make it easier to understand: a population is the set of all objects under study; a sample is any
subset of a population.
Step 6: Collecting the data
In dealing with any real-life problem, it is often found that the data at hand are inadequate,
and hence it becomes necessary to collect data that are appropriate. There are several ways of
collecting the appropriate data, which differ considerably in terms of money costs, time, and
other resources at the disposal of the researcher.
Primary data can be collected either through experiment or through survey. If the researcher
conducts an experiment, he observes some quantitative measurements, or the data, with the help
of which he examines the truth contained in his hypothesis. But in the case of a survey, data can
be collected by any one or more of the following ways:
o By observation
o Through personal interview
o Through telephone interviews
o By mailing of questionnaires
o Through schedules
Step 9: Hypothesis-testing
After analysing the data as stated above, the researcher is in a position to test the hypotheses, if
any, he had formulated earlier. Do the facts support the hypotheses, or do they happen to be
contrary? This is the usual question which should be answered while testing hypotheses. Various
tests, such as the chi-square test, t-test, and F-test, have been developed by statisticians for this
purpose. The hypotheses may be tested through the use of one or more of such tests, depending
upon the nature and object of the research inquiry. Hypothesis-testing will result in either
accepting the hypothesis or in rejecting it. If the researcher had no hypotheses to start with,
generalisations established on the basis of data may be stated as hypotheses to be tested by
subsequent research in times to come.
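As a sketch of how one such test works, the chi-square statistic mentioned above can be computed directly from its formula. The observed and expected counts below are invented for illustration, and the critical value is the standard tabled value for 2 degrees of freedom at the 0.05 level:

```python
# Illustrative chi-square goodness-of-fit test using only the standard library.
# The observed and expected counts are hypothetical.

observed = [48, 35, 17]   # e.g., survey responses: agree / neutral / disagree
expected = [40, 40, 20]   # counts expected under the null hypothesis

# Chi-square statistic: sum of (O - E)^2 / E over all categories
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for 2 degrees of freedom at the 5% significance level
# (taken from a standard chi-square table)
critical_value = 5.991

print(f"chi-square = {chi_square:.3f}")
if chi_square > critical_value:
    print("Reject the null hypothesis at the 0.05 level.")
else:
    print("Fail to reject the null hypothesis at the 0.05 level.")
```

Here the computed statistic falls below the critical value, so the researcher would fail to reject the null hypothesis for these invented data.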
1. The researcher selects a problem area and specifies research questions. From personal
experience, professional readings or perhaps discussion with colleagues, the
researcher selects a problem area for study.
2. The researcher examines and searches databases to review existing results and
define terms. At this stage, the researcher tries to find out what other researchers have
done to answer similar questions. By consulting books, educational encyclopedias,
professional journals, and electronic databases, the researcher gains insights about
what others have done and what conclusions were drawn from their research. The
researcher knows that his work is based on certain assumptions, one of which is that
it will add to the body of knowledge. His aim is to help other researchers reach some
agreement about the controversy surrounding the teaching of writing to pupils with
learning disabilities.
3. The researcher formulates researchable questions. At this point, the researcher returns
to the questions with a view to answering one or more of them.
4. The researcher selects a research design.
5. The researcher determines the research method.
6. If applicable, the researcher describes and selects the respondents to be used in the
study.
7. The researcher selects testing methods/procedures for the study.
8. The researcher conducts the study.
9. The researcher analyzes the data and determines the implications of the research.
After conducting the study and collecting the data, the researcher analyzes the data
using appropriate statistical (quantitative) and non-statistical (qualitative) methods,
after which he determines what implications the results of the study have for other
researchers.
10. The researcher publishes the results of his study. After the research has been
completed, a written report is made. The researcher describes:
o The reason for conducting the study
o The conclusions he has made about the research
o The steps undertaken to select the respondents
o The instructional programs/manuals
o The statistical results (if any)
Generally speaking, a research process starts with a research problem which the investigator has
identified as researchable and which has implications for government thrusts. Based on the major
problem identified, he formulates the specific problems/objectives of the study. Using these
specific problems/objectives as a basis, he formulates hypotheses which are tested to arrive at a
scientific conclusion, either to reject or to accept them. He constructs a theoretical or conceptual
framework as a basis for properly describing the relationships of the variables to be used in the
study. He then states the assumptions clearly to provide the foundation of the study.
The next step is to review related literature and studies to determine the similarities
and differences between the findings and those of past studies, and to gain insights into the aspects
of the problem that are critical and controversial. Then he uses the most appropriate research design in his
study. From the research design, he can decide the definite research instrument for collecting
data and these data are processed either manually or by machine, whichever is more convenient,
economical, and accurate, using the correct statistical tools so that a reasonably precise
analysis and interpretation of results can be attained. After the analysis and interpretation of
results, he finally summarizes the whole study, draws conclusions based on the findings and
hypotheses tested, and makes recommendations for further research. Such recommendations
should dovetail with the conclusion.
Accuracy -- a term used in survey research to refer to the match between the target
population and the sample.
Affective Measures -- procedures or devices used to obtain quantified descriptions of
an individual's feelings, emotional states, or dispositions.
Aggregate -- a total created from smaller units. For instance, the population of a
county is an aggregate of the populations of the cities, rural areas, etc. that comprise
the county. As a verb, it refers to total data from smaller units into a large unit.
Anonymity -- a research condition in which no one, including the researcher, knows
the identities of research participants.
Baseline -- a control measurement carried out before an experimental treatment.
Behaviorism -- school of psychological thought concerned with the observable,
tangible, objective facts of behavior, rather than with subjective phenomena such as
thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the
study of mental states such as feelings and fantasies to the extent that they can be
directly observed and measured.
Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are
not immediately susceptible to rigorous proof.
Benchmarking -- systematically measuring and comparing the operations and
outcomes of organizations, systems, processes, etc., against agreed upon "best-in-
class" frames of reference.
Bias -- a loss of balance and accuracy in the use of research methods. It can appear in
research via the sampling frame, random sampling, or non-response. It can also occur
at other stages in research, such as while interviewing, in the design of questions, or
in the way data are analyzed and presented. Bias means that the research findings will
not be representative of, or generalizable to, a wider population.
Case Study -- the collection and presentation of detailed information about a
particular participant or small group, frequently including data derived from the
subjects themselves.
Causal Hypothesis -- a statement hypothesizing that the independent variable affects
the dependent variable in some way.
Causal Relationship -- the relationship established that shows that an independent
variable, and nothing else, causes a change in a dependent variable. It also establishes
how much of a change is shown in the dependent variable.
Causality -- the relation between cause and effect.
Central Tendency -- any way of describing or characterizing typical, average, or
common values in some distribution.
Chi-square Analysis -- a common non-parametric statistical test which compares an
expected proportion or ratio to an actual proportion or ratio.
Claim -- a statement, similar to a hypothesis, which is made in response to the
research question and that is affirmed with evidence based on research.
Classification -- ordering of related phenomena into categories, groups, or systems
according to characteristics or attributes.
Cluster Analysis -- a method of statistical analysis where data that share a common
trait are grouped together. The data is collected in a way that allows the data collector
to group data according to certain characteristics.
Cohort Analysis -- group by group analytic treatment of individuals having a
statistical factor in common to each group. Group members share a particular
characteristic [e.g., born in a given year] or a common experience [e.g., entering a
college at a given time].
Confidentiality -- a research condition in which no one except the researcher(s)
knows the identities of the participants in a study. It refers to the treatment of
information that a participant has disclosed to the researcher in a relationship of trust
and with the expectation that it will not be revealed to others in ways that violate the
original consent agreement, unless permission is granted by the participant.
Confirmability [Objectivity] -- the findings of the study could be confirmed by
another person conducting the same study.
Construct -- refers to any of the following: something that exists theoretically but is
not directly observable; a concept developed [constructed] for describing relations
among phenomena or for other research purposes; or, a theoretical definition in which
concepts are defined in terms of other concepts. For example, intelligence cannot be
directly observed or measured; it is a construct.
Construct Validity -- seeks an agreement between a theoretical concept and a
specific measuring device, such as observation.
Constructivism -- the idea that reality is socially constructed. It is the view that
reality cannot be understood outside of the way humans interact, and that
knowledge is constructed, not discovered. Constructivists believe that learning is
more active and self-directed than either behaviorism or cognitive theory would
postulate.
Content Analysis -- the systematic, objective, and quantitative description of the
manifest or latent content of print or nonprint communications.
Context Sensitivity -- awareness by a qualitative researcher of factors such as values
and beliefs that influence cultural behaviors.
Control Group -- the group in an experimental design that receives either no
treatment or a different treatment from the experimental group. This group can thus
be compared to the experimental group.
Controlled Experiment -- an experimental design with two or more randomly
selected groups [an experimental group and control group] in which the researcher
controls or introduces the independent variable and measures the dependent variable
at least two times [pre- and post-test measurements].
Correlation -- a common statistical analysis, usually abbreviated as r, that measures
the degree of relationship between pairs of interval variables in a sample. The range
of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship
between two variables.
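As an illustration of the range described above, r can be computed directly from its definition (the covariation of paired values divided by the product of their variation); the paired values below are hypothetical:

```python
# Illustrative computation of the Pearson correlation coefficient r
# using only the standard library; the data values are invented.
from math import sqrt

x = [2, 4, 6, 8, 10]          # e.g., hours of study (hypothetical)
y = [50, 60, 65, 80, 90]      # e.g., test scores (hypothetical)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Numerator: sum of cross-products of deviations from the means
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
# Denominator: square roots of the sums of squared deviations
ss_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
ss_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = cov / (ss_x * ss_y)
print(f"r = {r:.3f}")   # r always falls between -1.00 and +1.00
```

For these invented values r comes out close to +1, indicating a very high positive relationship.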
Covariate -- a product of the correlation of two related variables times their standard
deviations. Used in true experiments to measure the difference of treatment between
them.
Credibility -- a researcher's ability to demonstrate that the object of a study is
accurately identified and described based on the way in which the study was
conducted.
Critical Theory -- an evaluative approach to social science research, associated with
Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze
society, opposing the political orthodoxy of modern communism. Its goal is to
promote human emancipatory forces and to expose ideas and systems that impede
them.
Data -- factual information [as measurements or statistics] used as a basis for
reasoning, discussion, or calculation.
Data Mining -- the process of analyzing data from different perspectives and
summarizing it into useful information, often to discover patterns and/or systematic
relationships among variables.
Data Quality -- this is the degree to which the collected data [results of measurement
or observation] meet the standards of quality to be considered valid [trustworthy] and
reliable [dependable].
Deductive -- a form of reasoning in which conclusions are formulated about
particulars from general or universal premises.
Dependability -- being able to account for changes in the design of the study and the
changing conditions surrounding what was studied.
Dependent Variable -- a variable that varies due, at least in part, to the impact of the
independent variable. In other words, its value “depends” on the value of the
independent variable. For example, in the variables “gender” and “academic major,”
academic major is the dependent variable, meaning that your major cannot determine
whether you are male or female, but your gender might indirectly lead you to favor
one major over another.
Deviation -- the distance between the mean and a particular data point in a given
distribution.
Discourse Community -- a community of scholars and researchers in a given field
who respond to and communicate to each other through published articles in the
community's journals and presentations at conventions. All members of the discourse
community adhere to certain conventions for the presentation of their theories and
research.
Discrete Variable -- a variable that is measured solely in whole units, such as
gender and number of siblings.
Distribution -- the range of values of a particular variable.
Effect Size -- the amount of change in a dependent variable that can be attributed to
manipulations of the independent variable. A large effect size exists when the value
of the dependent variable is strongly influenced by the independent variable. It is the
mean difference on a variable between experimental and control groups divided by
the standard deviation on that variable of the pooled groups or of the control group
alone.
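The formula in this entry (mean difference between groups divided by the pooled standard deviation) can be sketched in a few lines; the group scores below are invented for illustration:

```python
# Illustrative effect-size calculation matching the definition above:
# mean difference between groups divided by the pooled standard deviation.
# The scores are hypothetical.
from statistics import mean, stdev
from math import sqrt

experimental = [85, 88, 90, 92, 95]
control      = [78, 80, 82, 85, 85]

n1, n2 = len(experimental), len(control)
s1, s2 = stdev(experimental), stdev(control)

# Pooled standard deviation of the two groups
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

effect_size = (mean(experimental) - mean(control)) / pooled_sd
print(f"effect size = {effect_size:.2f}")
```

A larger value indicates that the dependent variable is more strongly influenced by the treatment.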
Empirical Research -- the process of developing systematized knowledge gained
from observations that are formulated to support insights and generalizations about
the phenomena being researched.
Epistemology -- concerns knowledge construction; asks what constitutes knowledge
and how knowledge is validated.
Ethnography -- method to study groups and/or cultures over a period of time. The
goal of this type of research is to comprehend the particular group/culture through
immersion into the culture or group. Research is completed through various methods
but, since the researcher is immersed within the group for an extended period of time,
more detailed information is usually collected during the research.
External Validity -- the extent to which the results of a study are generalizable or
transferable.
Factor Analysis -- a statistical test that explores relationships among data. The test
explores which variables in a data set are most related to each other. In a carefully
constructed survey, for example, factor analysis can yield information on patterns of
responses, not simply data on a single response. Larger tendencies may then be
interpreted, indicating behavior trends rather than simply responses to specific
questions.
Field Studies -- academic or other investigative studies undertaken in a natural
setting, rather than in laboratories, classrooms, or other structured environments.
Focus Groups -- small, roundtable discussion groups charged with examining
specific topics or problems, including possible options or solutions. Focus groups
usually consist of 4-12 participants, guided by moderators to keep the discussion
flowing and to collect and report the results.
Framework -- the structure and support that may be used as both the launching point
and the on-going guidelines for investigating a research problem.
Generalizability -- the extent to which research findings and conclusions from a
study conducted on a specific sample or group can be applied to the population at large.
Grounded Theory -- practice of developing other theories that emerge from
observing a group. Theories are grounded in the group's observable experiences, but
researchers add their own insight into why those experiences exist.
Group Behavior -- behaviors of a group as a whole, as well as the behavior of an
individual as influenced by his or her membership in a group.
Hypothesis -- a tentative explanation based on theory to predict a causal relationship
between variables.
Independent Variable -- the conditions of an experiment that are systematically
manipulated by the researcher. A variable that is not impacted by the dependent
variable, and that itself impacts the dependent variable. In the earlier example of
"gender" and "academic major," (see Dependent Variable) gender is the independent
variable.
Individualism -- a theory or policy having primary regard for the liberty, rights, or
independent actions of individuals.
Inductive -- a form of reasoning in which a generalized conclusion is formulated
from particular instances.
Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher
using inductive analysis starts with answers, but formulates questions throughout the
research process.
Internal Consistency -- the extent to which all questions or items assess the same
characteristic, skill, or quality.
Internal Validity -- the rigor with which the study was conducted [e.g., the study's
design, the care taken to conduct measurements, and decisions concerning what was
and was not measured]. It is also the extent to which the designers of a study have
taken into account alternative explanations for any causal relationships they explore.
In studies that do not explore causal relationships, only the first of these definitions
should be considered when assessing internal validity.
Margin of Error -- the permissible or acceptable deviation from the target or a
specific value; the allowance for slight error, miscalculation, or changing
circumstances in a study.
Measurement -- process of obtaining a numerical description of the extent to which
persons, organizations, or things possess specified characteristics.
Meta-Analysis -- an analysis combining the results of several studies that address a
set of related hypotheses.
Methodology -- a theory or analysis of how research does and should proceed.
Methods -- systematic approaches to the conduct of an operation or process. It
includes steps of procedure, application of techniques, systems of reasoning or
analysis, and the modes of inquiry employed by a discipline.
Mixed-Methods -- a research approach that uses two or more methods from both the
quantitative and qualitative research categories. It is also referred to as blended
methods, combined methods, or methodological triangulation.
Modeling -- the creation of a physical or computer analogy to understand a particular
phenomenon. Modeling helps in estimating the relative magnitude of various factors
involved in a phenomenon. A successful model can be shown to account for
unexpected behavior that has been observed, to predict certain behaviors, which can
then be tested experimentally, and to demonstrate that a given theory cannot account
for certain phenomena.
Models -- representations of objects, principles, processes, or ideas often used for
imitation or emulation.
Naturalistic Observation -- observation of behaviors and events in natural settings
without experimental manipulation or other forms of interference.
Norm -- the norm in statistics is the average or usual performance. For example,
students usually complete their high school graduation requirements when they are 18
years old. Even though some students graduate when they are younger or older, the
norm is that any given student will graduate when he or she is 18 years old.
Null Hypothesis -- the proposition, to be tested statistically, that the experimental
intervention has "no effect," meaning that the treatment and control groups will not
differ as a result of the intervention. Investigators usually hope that the data will
demonstrate some effect from the intervention, thus allowing the investigator to reject
the null hypothesis.
Panel Study -- a longitudinal study in which a group of individuals is interviewed at
intervals over a period of time.
Participant -- individuals whose physiological and/or behavioral characteristics and
responses are the object of study in a research project.
Peer-Review -- the process in which the author of a book, article, or other type of
publication submits his or her work to experts in the field for critical evaluation,
usually prior to publication. This is standard procedure in publishing scholarly
research.
Policy -- governing principles that serve as guidelines or rules for decision making
and action in a given area.
Policy Analysis -- systematic study of the nature, rationale, cost, impact,
effectiveness, implications, etc., of existing or alternative policies, using the theories
and methodologies of relevant social science disciplines.
Population -- the target group under investigation. The population is the entire set
under consideration. Samples are drawn from populations.
Predictive Measurement -- use of tests, inventories, or other measures to determine
or estimate future events, conditions, outcomes, or trends.
Principal Investigator -- the scientist or scholar with primary responsibility for the
design and conduct of a research project.
Probability -- the chance that a phenomenon will occur randomly. As a statistical
measure, it is shown as p [the "p" factor].
Questionnaire -- structured sets of questions on specified subjects that are used to
gather information, attitudes, or opinions.
Random Sampling -- a process used in research to draw a sample of a population
strictly by chance, yielding no discernible pattern beyond chance. Random sampling
can be accomplished by first numbering the population, then selecting the sample
according to a table of random numbers or using a random-number computer
generator. The sample is said to be random because there is no regular or discernible
pattern or order. Random sample selection is used under the assumption that
sufficiently large samples assigned randomly will exhibit a distribution comparable to
that of the population from which the sample is drawn. The random assignment of
participants increases the probability that differences observed between participant
groups are the result of the experimental intervention.
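The numbering-and-drawing procedure described above can be sketched with the standard library's random-number generator; the population size and sample size below are arbitrary:

```python
# Illustrative simple random sampling: number the population, then draw
# members strictly by chance, as described above.
import random

random.seed(42)  # fixed seed so the example is reproducible

# A hypothetical population of 500 numbered members
population = list(range(1, 501))

# Draw a simple random sample of 25 members without replacement
sample = random.sample(population, 25)

print(sample)
```

Because every member has the same chance of selection, a sufficiently large sample drawn this way tends to mirror the distribution of the population.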
Reliability -- the degree to which a measure yields consistent results. If the
measuring instrument [e.g., survey] is reliable, then administering it to similar groups
would yield similar results. Reliability is a prerequisite for validity. An unreliable
indicator cannot produce trustworthy results.
Representative Sample -- sample in which the participants closely match the
characteristics of the population, and thus, all segments of the population are
represented in the sample. A representative sample allows results to be generalized
from the sample to the population.
Rigor -- degree to which research methods are scrupulously and meticulously carried
out in order to recognize important influences occurring in an experimental study.
Sample -- the subset of the population researched in a particular study. Usually, attempts are made
to select a "sample population" that is considered representative of groups of people
to whom results will be generalized or transferred. In studies that use inferential
statistics to analyze results, or which are designed to be generalizable, sample size is
critical: generally, the larger the sample, the higher the likelihood of a
representative distribution of the population.
Sampling Error -- the degree to which the results from the sample deviate from
those that would be obtained from the entire population, because of random error in
the selection of respondents and the corresponding reduction in reliability.
Saturation -- a situation in which data analysis begins to reveal repetition and
redundancy and when new data tend to confirm existing findings rather than expand
upon them.
Standard Deviation -- a measure of variation that indicates the typical distance
between the scores of a distribution and the mean; it is determined by taking the
square root of the average of the squared deviations in a given distribution. It can be
used to indicate the proportion of data within certain ranges of scale values when the
distribution conforms closely to the normal curve.
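The definition above (the square root of the average squared deviation from the mean) can be verified with a short Python sketch; the scores are invented for illustration:

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical distribution of scores

# Step 1: the mean of the distribution.
mean = sum(scores) / len(scores)                       # 5.0

# Step 2: square each score's deviation from the mean.
squared_deviations = [(x - mean) ** 2 for x in scores]

# Step 3: average the squared deviations and take the square root.
sd = math.sqrt(sum(squared_deviations) / len(scores))  # population SD

print(sd)                          # 2.0
print(statistics.pstdev(scores))   # 2.0 -- matches the library routine
```

A standard deviation of 2.0 means a typical score in this distribution lies about two units from the mean of 5.0.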
Statistical Analysis -- application of statistical processes and theory to the
compilation, presentation, discussion, and interpretation of numerical data.
Statistical Bias -- characteristics of an experimental or sampling design, or the
mathematical treatment of data, that systematically affects the results of a study so as
to produce incorrect, unjustified, or inappropriate inferences or conclusions.
Statistical Significance -- the probability that the difference between the outcomes of
the control and experimental groups is great enough that it is unlikely to be due solely
to chance. The probability that the null hypothesis can be rejected at a predetermined
significance level [0.05 or 0.01].
Statistical Tests -- researchers use statistical tests to make quantitative decisions
about whether a study's data indicate a significant effect from the intervention and
allow the researcher to reject the null hypothesis. That is, statistical tests show
whether the differences between the outcomes of the control and experimental groups
are great enough to be statistically significant. If differences are found to be
statistically significant, it means that the probability [likelihood] that these
differences occurred solely due to chance is relatively low. Most researchers agree
that a significance value of .05 or less [i.e., at most a 5% probability that a difference
this large arose solely by chance] sufficiently determines significance.
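The logic of a significance test, asking how often a difference this large would arise by chance alone, can be illustrated with a simple permutation test in pure Python; the control and experimental scores below are invented for illustration:

```python
import random

random.seed(0)

control      = [72, 75, 70, 68, 74, 71, 69, 73]  # hypothetical scores
experimental = [78, 82, 77, 80, 79, 81, 76, 83]

observed_diff = (sum(experimental) / len(experimental)
                 - sum(control) / len(control))

# Permutation test: repeatedly shuffle all scores into two random
# groups and count how often a difference at least as large as the
# observed one appears by chance alone.
pooled = control + experimental
n_extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]
    if abs(sum(b) / 8 - sum(a) / 8) >= abs(observed_diff):
        n_extreme += 1

p_value = n_extreme / trials
print(observed_diff)   # 8.0
print(p_value < 0.05)  # True -- difference unlikely to be due to chance
```

Here the p-value falls below the predetermined .05 level, so the null hypothesis of no difference between the groups would be rejected.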
Testing -- the act of gathering and processing information about individuals' ability,
skill, understanding, or knowledge under controlled conditions.
Theory -- a general explanation about a specific behavior or set of events that is
based on known principles and serves to organize related events in a meaningful way.
A theory is not as specific as a hypothesis.
Treatment -- the stimulus or intervention applied to participants; the manipulation of the independent variable whose effect on the dependent variable is being studied.
Trend Samples -- method of sampling different groups of people at different points
in time from the same population.
Unit of Analysis -- the basic entity or phenomenon being analyzed by a study
and for which data are collected in the form of variables.
Validity -- the degree to which a study accurately reflects or assesses the specific
concept that the researcher is attempting to measure. A method can be reliable,
consistently measuring the same thing, but not valid.
Variable -- any characteristic or trait that can vary from one person to another [race,
gender, academic major] or for one person over time [age, political beliefs].
Weighted Scores -- scores in which the components are modified by different
multipliers to reflect their relative importance.
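A weighted score is simply each component multiplied by its weight, then summed. A small Python sketch with hypothetical grade components and weights:

```python
# Hypothetical components of a final grade and their weights;
# the weights reflect relative importance and sum to 1.0.
components = {"quizzes": 80, "midterm": 90, "final_exam": 70}
weights    = {"quizzes": 0.2, "midterm": 0.3, "final_exam": 0.5}

weighted_score = sum(components[k] * weights[k] for k in components)
print(weighted_score)  # 78.0
```

Note that the final exam, with the largest weight, pulls the weighted score (78.0) below the simple average of the three components (80.0).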
In the research process, the first and foremost step is selecting and properly defining a research
problem. A researcher must find the problem and formulate it so that it becomes amenable to
research. Like a medical doctor, a researcher must examine all the symptoms (presented or
observed) concerning a problem before making a correct diagnosis. To define a problem
correctly, a researcher must first know what a problem is.
Now that we know the definition of a problem, let’s define the research problem.
Definition of Research Problem
A research problem is the main organizing principle guiding the analysis of your paper. The
problem under investigation offers us an occasion for writing and a focus that governs what we
want to say. It represents the core subject matter of scholarly communication, and the means by
which we arrive at other topics of conversations and the discovery of new knowledge and
understanding.
As you will remember from the previous module, formulating a research problem is the first step
in the research process. But what factors should be considered in selecting a research problem?
There are five factors that determine whether a problem is researchable or not.
These factors are as follows: (Calmorin, 2012)
1. The problem exists in the locality or country, but there is no known solution to
the problem
2. The solution can be answered by using statistical methods or techniques
3. There are probable solutions but they are not yet tested
4. The occurrence of phenomena requires scientific investigation to arrive at
a precise solution
5. There are serious needs/problems of the people that demand research
Based on the foregoing factors, the investigator can choose a researchable problem. In writing a
thesis, the research problem is formulated in Chapter 1: The Problem, Rationale and
Background. For example, a study may pose the following questions:
1. What is the achievement and nutritional intake of teacher education students who
reside at home and boarding houses?
2. Which of the teacher education students, residing at home or boarding houses, have
higher achievement?
3. Is there a significant difference in the achievement and nutritional intake of teacher
education students who reside at home and in boarding houses?
The characteristics of a research problem are easy to remember through the acronym SMART,
which stands for: (Kothari, 2004)
Specific
Measurable
Achievable/ Attainable
Realistic
Time-bound
Specific
The problem should be specifically stated.
For example:
-In experimental research: “What is the mean growth increment of Eucheuma cultured in the
municipal waters of Estancia, Iloilo, Philippines using lantay and hanging methods?”
(Calmorin, 2012)
-In descriptive research: “What is the mean performance of teachers in the city and province of
Iloilo, Philippines?” (Calmorin, 2012)
Measurable
The problem is easy to measure using research instruments, apparatus, or equipment.
For example:
-In experimental research: the equipment and apparatus used in collecting data on the cultivation
of Eucheuma using lantay and hanging methods are a weighing scale, for the weight of the
seaweed; a refractometer, for the salinity of the water; a thermometer, for the water temperature;
a DO meter, for the dissolved oxygen of the water; and a pH meter, for the acidity of the water.
(Calmorin, 2012)
-In descriptive research: the instruments used in gathering data are questionnaires, tests,
checklists, and many others. (Calmorin, 2012)
Achievable (Attainable)
The data are achievable using correct statistical tools to arrive at precise results.
For example:
-In experimental research: the t-test is the appropriate statistical tool for this specific problem:
“Is there a significant difference in the mean growth increment of Eucheuma cultured in the
municipal waters of Estancia, Iloilo, Philippines using lantay and hanging methods?” If the
results show a significant difference, this means one method (lantay or hanging) yields a better
mean growth increment than the other. If no significant difference exists, this means the growth
increments of Eucheuma using the lantay and hanging methods are almost the same. (Calmorin,
2012)
-In descriptive research: Friedman’s two-way ANOVA (Analysis of Variance) is the statistical
tool used to determine the significant difference of the achievement and nutritional intake of
teacher education students who reside at home or in boarding houses. If a significant difference
exists, this means the achievement and nutritional intake of teacher education students who
reside at home or in boarding houses really differ from each other. If no significant difference
exists, this means they are almost the same. (Calmorin, 2012)
Realistic
Real results are attained because they are gathered scientifically and not manipulated or
maneuvered.
Time-bound
A time frame is required for every activity; the shorter the time to completion, the better.
There are several sources of research problems that can be investigated. The sources of research
problems are: (Calmorin, 2012)
Current and past researches are rich sources of research problems, even for research replication
using the same instrument, apparatus, or equipment.
For example: an experimental study may be replicated using the same method, species of test
plants or animals, apparatus, and equipment, but conducted in a different location or research
station to compare the similarities and differences of the results.
You can also replicate research and check whether the conclusion of the previous researcher is
still valid in modern times.
Recommendations for future research made by the authors of theses, dissertations, and research
journal articles are also sources of research problems.
For example: the researcher of a study recommends that future researchers use a larger
population or sample size.
4. Original and creative ideas of the researcher based on the problems met in the
locality and country.
For example: a research problem met in the locality is that there is a large amount of mineral-water
bottle waste, and the researcher looks for ways it can be used to lessen the waste in the
community. The researcher creates a monoblock chair for students out of mineral-water bottle
waste.
Criteria for Selecting Research Problem
When selecting a research problem/topic there are a number of considerations to keep in mind
which will help to ensure that your study will be manageable and that you remain motivated.
These considerations are: (Kumar, 2011)
Interest – Interest should be the most important consideration in selecting a research problem. A
research endeavour is usually time consuming, and involves hard work and possibly unforeseen
problems. If you select a topic which does not greatly interest you, it could become extremely
difficult to sustain the required motivation and put in enough time and energy to complete it.
Magnitude – You should have sufficient knowledge about the research process to be able to
visualise the work involved in completing the proposed study. Narrow the topic down to
something manageable, specific and clear. It is extremely important to select a topic that you can
manage within the time and with the resources at your disposal. Even if you are undertaking a
descriptive study, you need to consider its magnitude carefully.
Measurement of concepts – If you are using a concept in your study (in quantitative studies),
make sure you are clear about its indicators and their measurement. For example, if you plan to
measure the effectiveness of a health promotion programme, you must be clear as to what
determines effectiveness and how it will be measured. Do not use concepts in your research
problem that you are not sure how to measure. This does not mean you cannot develop a
measurement procedure as the study progresses. While most of the developmental work will be
done during your study, it is imperative that you are reasonably clear about the measurement of
these concepts at this stage.
Level of expertise – Make sure you have an adequate level of expertise for the task you are
proposing. Allow for the fact that you will learn during the study and may receive help from your
research supervisor and others, but remember that you need to do most of the work yourself.
Relevance – Select a topic that is of relevance to you as a professional. Ensure that your study
adds to the existing body of knowledge, bridges current gaps or is useful in policy formulation.
This will help you to sustain interest in the study.
Availability of data – If your topic entails collection of information from secondary sources
(office records, client records, census or other already-published reports, etc.) make sure that this
data is available and in the format you want before finalising your topic.
Ethical issues – Another important consideration in formulating a research problem is the ethical
issues involved. In the course of conducting a research study, the study population may be
adversely affected by some of the questions (directly or indirectly); deprived of an intervention;
expected to share sensitive and private information; or expected to be simply experimental
‘guinea pigs’. How ethical issues can affect the study population and how ethical problems can
be overcome should be thoroughly examined at the problem-formulation stage.
Steps in formulating a research problem (Kumar, 2011)
Step 1: Identify a broad field or subject area of interest to you
Ask yourself what topic interests you. For example, your interest is HIV, which is a broad
field.
Step 2: Dissect the broad area into subareas.
Now that you have a topic that interests you, you can dissect it into subareas. For example, if
the topic is HIV, it can be divided into subareas:
When looking for a research journal article, the first thing you will read is the title. The title is
very important because it arouses the interest of readers and other researchers and tells what the
research is all about.
Some readers read only the title and the abstract of a study, and very few read the full article,
but a precise and specific title can entice them to read yours.
Here are some few guidelines in writing the title: (Tullu, 2019)
Example of titles and the comments/ remarks on the title: (Tullu, 2019)
*The titles in the examples are from studies made by the author, Milind Tullu
The research proposal always starts with an Introduction. Admittedly, organizing and
conveying your ideas here is quite challenging. The purpose of the introduction is to
present the background information of the study in order to place the study in its proper
perspective. It establishes the framework for the research so that readers can easily
understand it and see how the research is related to previous research. The Introduction
captures the reader's attention by conveying the problem and why the research is being
conducted.
It is noteworthy that in writing the Introduction, the review of literature plays an important role
and serves two main functions:
1. it acquaints you, the researcher, with the available literature in the area of study,
thereby broadening your knowledge base
(d) targeting an audience and noting the significance of the problem for this audience
- who are the major players, stakeholders
- demographics
- geography (e.g., Metro Manila or Calabarzon)
- who will benefit from the study (significant contribution of your research)
The significance of the study is basically the importance of your research. This section is often
referred to as the “rationale” or justification in which you try to convince your target audience
that the study is worth doing. Why is your study important?
The significance of a study must be stated in the Introduction section of the research paper.
While stating the significance, you must highlight how your research will be beneficial to the
development of science and the society in general.
You can first outline the significance in a broader sense by stating how your research will
contribute to the broader problem in your field and gradually narrow it down to demonstrate the
specific group that will benefit from your research. While writing the significance of your study,
you must answer these questions:
Use the following checklists to fine tune the significance of the study:
1. Why is this study important?
2. What are the implications of doing it?
3. How does your study link to other knowledge?
4. How does it stand to inform or influence policy making?
5. What new perspective will your study introduce to the subject?
6. What benefits might your study have for others in the subject area or to
the general public?
7. How is your study expected to resolve lingering questions or gaps in
knowledge in your field of study?
8. How is your study expected to develop better theoretical models in the
field of medical technology or health sector?
9. How will your study change the way people do their jobs in a particular
field, or the way people live?
Example:
Fig 1: Sample of significance of the study
The Introduction: Hypotheses
Definition
A classic definition of a hypothesis is “it is a tentative conclusion or answer to a specific
question at the beginning of the investigation”. It is an educated guess about the answer to a
specific question.
Importance of Hypotheses:
1. Hypotheses serve as an important link between the research question and the design of the
study. A hypothesis rephrases the research question and turns it into a testable or measurable
statement.
2. Hypotheses may identify the anticipated direction of the proposed relationship between stated
variables. Information regarding directionality of a relationship between variables is usually not
contained in the actual research question.
Hence, a hypothesis is a hunch, assumption, suspicion, assertion or an idea about a phenomenon,
relationship or situation, the reality or truth of which you do not know. A researcher calls these
assumptions, assertions, statements or hunches hypotheses and they become the basis of an
enquiry. In most studies the hypothesis will be based upon either previous studies or your own or
someone else’s observations.
Testing the Hypotheses:
Testing of hypotheses employs statistical procedures in which the investigator draws inferences
about the population from a study sample. Hypotheses are used often in experiments or
intervention trials in which investigators compare groups.
Forms of Hypotheses:
1. Null Hypothesis
It represents the traditional approach: It makes a prediction that in the general population, no
relationship or no significant difference exists between groups on a variable.
The wording is, “There is no difference (or relationship)” between the groups.
In Module 3, you have learned the basics of formulating the research question. By now, you have
finalized your research problem relating to the field of medical technology.
The statement of the problem is the focal point of any research and a bridge between the
literature review and the research methodology. The aims and objectives should lead directly
to your research questions.
Oftentimes, the research problem is confused with the research question. Unfortunately, many
researchers do not clearly identify the research problem; as a result, the study objectives are
difficult to relate to the other aspects of the research.
Here are some tips for writing the Statement of the Problem. An effective problem statement is
concise and concrete. It should:
In academic research, writing a problem statement can help you contextualize and understand the
significance of your research problem. A problem statement can be several paragraphs long and
serve as the basis for your research proposal, or it can be condensed into just a few sentences
in the introduction.
Scope and limitations are two terms that address the details of a research project.
Scope refers to the problem or issue that the researcher wants to study with the
project.
Limitations is the term used for constraints that impact the researcher’s ability to
effectively study the scope of the project.
For example, if your study is on the impact of sleep deprivation among medical technology
students, you will need to consider the following:
Always acknowledge a study's limitations. It is far better for you to identify and acknowledge
your study’s limitations than to have them pointed out by your professor and be graded down
because you appear to have ignored them.
Keep in mind that acknowledgement of a study's limitations is an opportunity to make
suggestions for further research. If you do connect your study's limitations to suggestions for
further research, be sure to explain the ways in which these unanswered questions may become
more focused because of your study.
When discussing the limitations of your research, be sure to:
Ethical Considerations
You are required to submit the research proposal to the CEU-Ethics Research Committee (CEU-
ERC) for review and approval prior to conducting any study-related activities.
All academic institutions are particular about any ethical issues that research may have. In other
words, all research must be conducted ethically, and much of the preparation of your research
proposal and the actual research is concerned with ensuring that your proposed research will be
ethical in all its aspects.
It is important that when conducting research, you identify any ethical issues and describe how
you propose to deal with them. You need to look at the ethical issues particularly from the
viewpoint of your respondents and, in case of any potential ‘harm’, psychological or otherwise,
you need to detail the mechanism in place to deal with it.
Being ethical means adhering to the code of conduct that has evolved over the years for an
acceptable professional practice. Any deviation from this code of conduct is considered as
unethical and the greater the deviation, the more serious the breach.
Ethical issues in research can be looked at as they relate to research participants, researchers,
and sponsoring organizations (e.g., PCHRD).
Ethical Considerations on Research Participants:
o Improper collection of information
o Lack of informed consent, or informed consent obtained improperly
o Providing incentives that may affect the decision-making of participants
o Obtaining sensitive information not relevant to the study
o Possibility of causing harm to participants
o Maintaining confidentiality
It is the responsibility of the Ethics Committee to ensure that the rights and safety of
research participants are protected.
Conceptual framework
Miles and Huberman (1994) defined a conceptual framework as a visual or written product, one
that “explains, either graphically or in narrative form, the main things to be studied—the key
factors, concepts, or variables—and the presumed relationships among them” (p. 18).
The most important thing to understand about your conceptual framework is that it is primarily a
conception or model of what is out there that you plan to study, and of what is going on with
these things and why—a tentative theory of the phenomena that you are investigating. This
theory will help you to assess and refine your goals, develop realistic and relevant research
questions, select appropriate methods, identify potential validity threats, and justify your
research.
Definition of Terms
An alphabetical list of important terms or acronyms that you define, particularly ambiguous
terms or those used in a special way.
Your thesis proposal will likely include terms that are not widely known outside of your
discipline. These terms include particular theoretical constructs, formulas, operational definitions
that differ from colloquial definitions, schools of thought and discipline-specific acronyms. This
part of your proposal offers the reader a list of definitions of these terms.
o How you define such terms could considerably affect how the reader
understands your thesis
o Be sure to use these terms in a consistent manner throughout your
proposal and thesis
1. Define only the terms, words, or phrases that have special or unique meanings in
your study.
2. Define terms operationally, i.e., how they are used in the study.
3. The researcher may develop his/her own definition.