APPLYING EDUCATIONAL RESEARCH Report
CONTENTS
Part I: The Contribution of Educational Research
* Action Research: it is carried out to improve one's own practice. It does not aim to produce findings that can be generalized.
Characteristics of Educational Research that show its scientific character:
-Creation of concepts and procedures that are shared, precise, and publicly accessible
-Replicability of findings
-Refutability of knowledge claims
-Control for researcher errors and biases
-Literature search
-Research design
-Sampling
-Variables & methods of data collection
---
*Publications indexed by preliminary sources are either secondary or primary sources.
---
1. Abstract
2. Introduction: problem statement, discussion of previous relevant research, indicate preliminary sources and
Preparing a report of the Literature Review:
It describes the state of knowledge about a topic or the questions investigated.
*It is useful to maintain a record of your search of preliminary sources, to avoid duplicating effort and to recall the search steps when writing your report.
*How do you know which preliminary source to use? There are many; Appendix 2 presents a list.
*Current Index to Journals in Education (CIJE) and Resources in Education (RIE) are convenient for Education. Both are in ERIC (www.eric.ed.gov/about/about.html) (www.searcheric.org)
*An inappropriate preliminary source will make you miss many publications.
*The search is based on the use of Descriptors. So, it is important to identify good descriptors of your topic and avoid
using ones that are not indexed.
Selecting Descriptors:
*It is important to determine whether the descriptors you are using exist in, or match, the descriptors of the database (the ERIC Thesaurus helps to this end); otherwise you will not find the needed information.
*The ERIC Thesaurus shows the category to which your descriptor belongs (broader term), the topics or descriptors that fall under your descriptor (narrower terms), related/associated terms, etc.
*ERIC can advise you to change your descriptor into an appropriate or existing one.
*You can narrow (pin down) your search by using codes, such as * (truncation), and/or (connectors), +, -, quotation marks, parentheses (), combinations of them, specific words/time spans, etc.
*If you don't locate any publications, you should reconsider your descriptors and your choice of a preliminary source.
---
*Defining a Research Strategy:
-Based on key questions related to your information needs.
-Define descriptors or key words from those questions.
-Keep records of the search procedures, to improve the process, to avoid repeating searches, and to describe the search in your report.
-Examine search procedures conducted by other researchers/experts.
-Search by Thesaurus Descriptors to check whether your key words are descriptors of the database, and see which ones are or which are associated with them. By this means, we can refine our descriptors and search process.
-Decide the amount of detail of the entries (author, etc.) by clicking the option for limiting/retrieving according to your request. You can also combine key words with these detail fields (author, etc.).
---
*Searching an electronic preliminary source by conducting a free-text search (natural-language search):
-Retrieves entries where a particular word or set of words (key words) appears anywhere in the entry.
-For key words of more than one word, surround the entire phrase with quotation marks to avoid retrieving entries that have nothing to do with the topic.
-We can combine descriptors and keywords by using connectors (and / or). "Or" is indicated by a comma (,).
-It is also possible and useful to combine connectors, the truncation mark *, and quoted keyword phrases to limit the search (see the sketch after this list). E.g.: parent* and high risk students and academic achievement
e.g.: (curriculum alignment) and (achievement gains, academic achievement, national competency
tests)
-All of these search procedures should be kept in a record.
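A minimal sketch in Python (purely illustrative; build_query is a hypothetical helper, not part of any database's API) of how the conventions above, quoted phrases, connectors, and the truncation mark, combine into one free-text query string:

def build_query(phrases, connector="and"):
    """Quote multi-word phrases and join all terms with a boolean connector."""
    terms = ['"{}"'.format(p) if " " in p else p for p in phrases]
    return " {} ".format(connector).join(terms)

# parent* keeps its truncation mark; multi-word phrases get quotation marks.
print(build_query(["parent*", "high risk students", "academic achievement"]))
# parent* and "high risk students" and "academic achievement"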
---
*Obtaining publications after an electronic search:
-Ask the librarian to locate journal articles and books in the library or through interlibrary loans.
-Microfiche: more than 1,000 ERIC Resource Collections (http://searcheric.org/derec.htm). This link doesn't work/no longer exists.
-Order a hard copy for a fee: ERIC Document Reproduction Service (www.edrs.com)
-Order PDF format for a fee: E*subscribe (www.edrs.com/products/suscription.cfm)
-Request a copy from the author or institution (www.aera.net)
*It is necessary to distinguish between secondary sources based on primary reviews and those based on secondary reviews, and between quantitative and qualitative studies.
*Examples of secondary sources are textbooks, encyclopedias, (literature reviews, theoretical frameworks?)
*Carrying out a literature review search of primary sources has some advantages over relying on a secondary source.
*To avoid wasting time, you should define your research problem/questions, select appropriate preliminary sources, define correct descriptors/key words, and identify relevant primary and secondary sources.
*Secondary sources are useful for policy makers in making decisions; they are treated as authoritative, although there is no personal responsibility for the findings or content.
*There are two types of secondary sources: primary source analyses and professional reviews.
*Primary source analyses focus on quantitative and qualitative research studies.
*Professional reviews tend to analyze a wide variety of sources and focus on implications for practice.
*Primary source analyses of quantitative research synthesize (combine) quantitative findings to obtain a single result from many sources or to determine statistical significance (a generalizable result). The statistical techniques are vote counting, chi-square, and meta-analysis. The information necessary for these processes includes means, standard deviations, and p, F, or t values.
-Meta-Analysis is the most widely used quantitative review method. It calculates the effect size.
-Effect size is used to determine the effectiveness of a particular program. It indicates the degree to which participants
in the experimental group show superior performance compared to a comparison group.
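A minimal sketch of one common effect-size calculation (Cohen's d, the standardized mean difference, with all scores hypothetical). The notes' general definition of effect size does not prescribe this exact formula; the pooled-SD d is simply the most familiar variant:

import statistics

def cohens_d(experimental, comparison):
    """Standardized mean difference: (mean_e - mean_c) / pooled SD."""
    n1, n2 = len(experimental), len(comparison)
    v1, v2 = statistics.variance(experimental), statistics.variance(comparison)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(experimental) - statistics.mean(comparison)) / pooled_sd

# Hypothetical scores for an experimental and a comparison group.
print(round(cohens_d([78, 85, 90, 88, 81], [70, 75, 80, 72, 78]), 2))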
*Primary source analysis of qualitative research uses a method for synthesizing qualitative research studies that acknowledges the unique characteristics of each case while also identifying concepts and principles that are present across cases.
-Qualitative case studies typically are reported in a narrative form.
*Professional reviews are sources that draw implications for educational practice. They typically use nontechnical language to describe research findings, with brief and selective citations of primary sources.
*Professional reviews examine published research studies, primary source analyses, and theoretical writings related to
professional practice improvement.
*To locate published literature reviews we can use ERIC, combining specific descriptors or key words in the entry search. E.g., combine meta-analysis or literature review with key words related to the topic we need (curriculum alignment), using the connector and.
Sections of a quantitative research report: Abstract, Introduction, Methods (sampling procedures, measures or materials, research design and procedures), Results, and Discussion.
*The introductory section presents: the study's purpose, variables, hypotheses/questions/objectives, and previous research findings or other relevant information.
*Construct: an abstraction from one or more observed phenomena that acquires a meaning and a name (concept name or conceptualization).
*Hypotheses usually are formulated on the basis of theory and previous research findings. If there is no theoretical basis for formulating them, then questions or objectives are formulated instead. These guide the study design.
*The literature review should include opposing findings, to avoid bias.
*In the Method section, it is important to reduce random sampling error, to ensure that the sample really represents the population.
*Target population, accessible population, research sample.
*The size of the errors tends to become smaller as we select a larger random sample.
*Sampling errors in nonrandom samples cannot be estimated by mathematical procedures.
*Types of sampling: probabilistic (simple random, stratified random, cluster, proportional random sampling); non-probabilistic (ad hoc, convenience/volunteer, locally available, purposive; e.g., men-women). A sketch of two of the probabilistic strategies follows.
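A minimal sketch of the two most common probabilistic strategies named above, using Python's standard library (the population of student IDs and the strata are hypothetical):

import random

population = list(range(1, 101))  # hypothetical student IDs 1..100

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 10)

# Stratified random sampling: draw separately within each stratum
# (e.g. men-women), so both strata are represented in the sample.
strata = {"men": population[:60], "women": population[60:]}
stratified_sample = {name: random.sample(members, 5)
                     for name, members in strata.items()}
print(simple_sample, stratified_sample)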
*The response rate to a survey questionnaire or interview should be above 70 percent.
*Population validity refers to the degree to which the sample of individuals in the study is representative of the
population from which it was selected.
-Researchers should demonstrate that the sample and the accessible population are similar to the target population (share the same characteristics/variables relevant to the research problem).
*Measures are tests or instruments for gathering information; they are described in terms of the constructs they are intended to measure (variable operationalization), their scoring procedures, and their validity and reliability.
*Information on measures (tests/instruments) commonly applied in educational practice can be found in the Mental Measurements Yearbook or Test Reviews Online.
*Types of measures: paper-and-pencil tests, questionnaires, interviews, direct observation, content analysis.
*Tests usually measure one or two variables; questionnaires typically measure many variables.
*If an interview was a primary measure in a research study, the report should include at least the main questions.
*Direct observation tends to yield more accurate data compared to questionnaires or interviews.
*There should be evidence that the researcher's inferences from the observational data are valid and reliable.
*Tests of validity of measures are important to determine the quality of tests. The Standards for Educational and Psychological Testing is the reference book for this.
*Test validity means making sure through evidence (and theory) that the measure measures what is really intended (the construct). E.g.: a score that represents how much a student has learned about an intended subject.
*Test validity entails showing evidence (and theory) regarding: test content, internal structure, relationship to other variables, response process, and consequences of testing.
*Reliability: results are the same when testing with the same test over time and over the same target population.
*Reliability entails that a test or other measurement tool is free of measurement error.
-Measurement error: the difference between the scores that individuals actually obtain on a test and their true
scores if it were possible to obtain a perfect measure of their performance.
*Procedures for estimating test reliability: different types of measurement error in a test are estimated through these procedures: item consistency, stability of measurement (test-retest), consistency of administration and scoring, standard error of measurement, and item response theory (whether tests are reliable for individuals at different skill levels). A sketch of an item-consistency estimate follows.
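A minimal sketch of an item-consistency estimate (Cronbach's alpha, one standard index of item consistency; all item scores hypothetical):

import statistics

def cronbach_alpha(items):
    """items: one list of scores per test item, same respondents throughout.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variances = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / statistics.variance(totals))

# Hypothetical 3-item test answered by 5 respondents.
print(round(cronbach_alpha([[3, 4, 5, 2, 4], [2, 4, 5, 3, 4], [3, 5, 4, 2, 5]]), 2))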
*How many test reliability procedures should the researcher carry out and include in the study? As many as possible, depending on the complexity of the study; one type of reliability is typically of most concern.
*Limitations to tests of Validity and Reliability: A measure may be valid and reliable for one population, but not for
another.
In the process of testing validity and reliability, three aspects are entailed: (1) the instruments for gathering information (the most important); (2) the construct process (conceptualization, variable analysis/correlational statistics, and factor analysis are implied); (3) the sample (for estimating population/sample validity, to assure that the sample represents the population; external validity). The validity and reliability tests are applied to instruments or measures; the reliability test, to the sample.
*Depending on the research design, the description of the procedures might be brief or quite detailed. Descriptive research designs generally are simple; experimental designs require more detailed explanations (of each of the experimental treatments).
*The results section of a quantitative research report presents the results of the statistical analysis.
*Discussion section: conclusions, meaning and implications of the results, interpretations, shortcomings, recommendations for further research. The researcher's judgments are supported by the research results as well as the results of the previous research cited.
*We can make better judgments about the appropriateness of the use of statistics when publishing or reviewing a research study.
*We must know the most important statistical techniques.
*The type of statistical procedure or technique to be applied depends on the type of score.
*Continuous scores: there are 2 types, Raw score and Derived score.
-Derived scores provide a quantitative comparison of each raw score relative to a comparison group or standard (norming sample): age/grade equivalents, percentiles, standard scores, rank scores.
*Gain scores: the difference in an individual's score on the measure from one time to the next (a comparison between two scores of the same individual).
*Categorical scores: discrete and nonordered; discrete when only one category can be chosen. There are also dichotomous categorical variables (and multi-categorical variables).
*Descriptive statistics are useful to summarize all the data in the form of a few simple numerical expressions (mean, SD, median, mode, frequency counts, and percentages).
*Range and standard deviation: how much variation is present in the individual scores. You can see whether the scores are at or near the mean or vary widely from it.
*Standard Deviation SD: how much individual scores vary around the mean score.
*Normal curve: a normal probability distribution. Scores are normally distributed when the majority of individuals obtain scores near or surrounding the mean score of the whole sample.
*Calculating the mean and the standard deviation of the scores on a measure is very important for determining other statistics (the variability of the scores) and for providing a succinct summary of the raw data.
*Standard deviation can be expressed as a single numerical value or in units.
*Standard deviation units are representative values associated with the raw standard deviation value: typically three positive units (+1, +2, +3) and three negative ones (-1, -2, -3), with the central unit 0 (zero) representing the mean.
*A large SD value indicates that the individuals in the sample are heterogeneous: they are not alike with respect to the measured variable and the mean. It does not represent a normal curve.
*Standard deviation is meaningful for continuous and gain scores but not for categorical ones.
*The variability of categorical scores is examined just by looking at the frequency counts or percentages in each category.
Concept of SD and variance from the calculation/formula: SD is the square root of the variance; therefore, to calculate the SD, the variance must be calculated first. Variance is the average of the squared differences (the difference between the mean and each individual's value) in a distribution.
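A minimal sketch of the calculation just described, with hypothetical raw scores (population formula for the variance):

import statistics

scores = [72, 75, 78, 80, 80, 82, 85, 88]  # hypothetical raw scores

mean = statistics.mean(scores)
variance = statistics.pvariance(scores)  # average squared difference from the mean
sd = variance ** 0.5                     # SD is the square root of the variance
print(mean, variance, round(sd, 2))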
*Correlational statistics can be used to describe the extent of the relationship between variables in mathematically precise terms. A correlation coefficient is calculated; it is represented by the symbol r (the r value).
*If two variables are involved, a bivariate correlational statistic is calculated.
*Multivariate correlational statistics explore the relationship between more than two variables at the same time. A
correlational procedure known as multiple regression can be used.
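A minimal sketch (hypothetical data; numpy assumed available) of a bivariate r value and a multiple-regression fit with two predictors:

import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6])        # hypothetical study hours
scores = np.array([55, 60, 62, 70, 75, 80]) # hypothetical test scores

# Bivariate correlational statistic: the r value between two variables.
r = np.corrcoef(hours, scores)[0, 1]

# Multiple regression: predict scores from two predictor variables at once.
attendance = np.array([80, 85, 82, 90, 95, 97])
X = np.column_stack([np.ones_like(hours), hours, attendance])  # intercept + predictors
coefficients, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(round(r, 3), coefficients)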
Inferential Statistics:
*The purpose is to generalize from the sample to the population: to make inferences about the population from the results obtained from the sample.
*It is useful for making inferences about a population based on the descriptive statistics that are calculated on data
from a sample that represents this population.
*To conclude that the results are generalizable, we first must reject the possible explanation that the results are a chance finding. This can be done by replication studies or effect-size studies.
*The basic purpose of any inferential statistic is to test the null hypothesis.
*The null hypothesis is the explanation that an observed result for a sample is a chance finding.
*p and statistical significance: the p value refers to the percentage of occasions that a chance difference (result) between mean scores of a certain magnitude will occur when the population means are identical. The lower the p value, the less often a chance difference occurs, and therefore the more likely it is that the null hypothesis is false. E.g., a p value of .001 indicates that the null hypothesis is more likely to be false than would a p value of .01.
*A p value of .05 is generally considered sufficient to reject the null hypothesis in educational research.
*When the obtained result is statistically significant, the null hypothesis is rejected.
*Various inferential statistics are called tests of statistical significance; they provide the grounds for rejecting the null hypothesis and thus for concluding that the obtained result is not a chance one and can be generalized to the population.
*Different types of inferential statistics are used in educational research to test the generalizability of the different types of results obtained in research studies. The main types are: the t test, analysis of variance, analysis of covariance, the chi-square test, and parametric vs. nonparametric tests.
*The t test is used to determine whether an observed difference (result) between the mean scores of two groups on a measure is likely to have occurred by chance or reflects a true difference.
*To reject the null hypothesis, the calculated t value must be higher than the t value obtained from the t-distribution table at a statistical significance level of .05, meaning that the observed difference between the two groups does not fall within the margin of chance.
*The t test can also be used to determine whether observed correlation coefficients occurred by chance.
*The t test can compare only two means at a time (related to the independent variable). For three or more mean scores, analysis of variance is used.
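A minimal sketch of a two-sample t test with scipy (hypothetical scores; scipy.stats.ttest_ind returns the t value and its p value):

from scipy import stats

group_a = [78, 85, 90, 88, 81, 79]  # hypothetical scores, group A
group_b = [70, 75, 80, 72, 78, 74]  # hypothetical scores, group B

# Two-sample t test: is the difference between the two means a chance finding?
t_value, p_value = stats.ttest_ind(group_a, group_b)
if p_value < .05:
    print("t = %.2f, p = %.3f: reject the null hypothesis" % (t_value, p_value))
else:
    print("t = %.2f, p = %.3f: the result may be a chance finding" % (t_value, p_value))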
*Analysis of variance (ANOVA) is a test used to determine the likelihood that the differences between three or more mean scores occurred by chance.
*Analysis of variance yields an inferential statistic called an F value, which is compared with the F value from the F-distribution table. If the obtained value exceeds the table value, the null hypothesis is rejected, and therefore the results/differences are generalizable to the population.
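A minimal sketch of a one-way ANOVA for three hypothetical groups (scipy computes the F value and its p value directly, so no table lookup is needed):

from scipy import stats

# Hypothetical scores for three groups.
g1 = [70, 72, 68, 75]
g2 = [80, 78, 82, 79]
g3 = [90, 88, 85, 91]

# One-way ANOVA: a small p value rejects the null hypothesis for 3+ means.
f_value, p_value = stats.f_oneway(g1, g2, g3)
print("F = %.2f, p = %.4f" % (f_value, p_value))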
Analysis of Covariance
*It is a statistical technique used to make the groups involved in the experiment equivalent on a pretest.
*In this method, each individual's posttest score is adjusted up or down to take into account his/her pretest performance.
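A minimal sketch of the pretest adjustment expressed as a linear model (ANCOVA as ordinary least squares; pandas and statsmodels assumed available, all data hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pretest/posttest scores for an experimental and a control group.
df = pd.DataFrame({
    "group":    ["exp"] * 5 + ["ctrl"] * 5,
    "pretest":  [50, 55, 60, 52, 58, 51, 54, 59, 53, 57],
    "posttest": [70, 78, 82, 73, 80, 60, 63, 69, 62, 66],
})

# ANCOVA as a linear model: the group effect on posttest scores,
# with each posttest score adjusted for pretest performance.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.summary())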
The chi-square Test
*This is the appropriate inferential statistic for categorical data (not in continuous or rank-score form).
*It is also used to determine whether the results or differences of a study occurred by chance. (The type of research need not be experimental.)
*The chi-square (X2) value is associated with a p value; if p is less than .01, the null hypothesis that the results occurred by chance is rejected, and therefore the results can be generalized to the population.
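A minimal sketch of a chi-square test on a hypothetical 2x2 contingency table of categorical counts (scipy returns the X2 value and its p value):

from scipy import stats

# Hypothetical 2x2 contingency table of categorical counts:
# rows = method A / method B, columns = passed / failed.
observed = [[30, 10], [18, 22]]

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print("X2 = %.2f, p = %.4f" % (chi2, p_value))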
*Nonparametric tests of significance are those whose variables are in the form of categories that do not form an interval scale, such as the chi-square test.
*Parametric tests: measures are in the form of equal intervals between scores, and the scores are normally distributed about the mean score.
Practical Significance and Effect Size
*Statistical significance means that the result probably did not occur by chance and can be generalized. The practical significance of statistical results is a matter of judgment; e.g., a small effect on a learning result may or may not be considered important.
*This process involves the calculation of the effect size as an aid in determining the practical significance.
*Effect size of 0.33 or more is considered to have practical significance.
*Practical significance involves judgments about whether an observed result is sufficiently large to have implications for practice.
Procedures in Statistical Analysis
*Statistical analysis can be done by hand, or with the Statistical Package for the Social Sciences (SPSS) or the Statistical Analysis System (SAS).
---
CASE STUDIES
*Case studies are systematic investigations that attempt to satisfy research standards for validity and reliability.
*They involve in-depth inquiry into a particular phenomenon in its natural context.
*The purpose of case studies is usually description, evaluation, or explanation of particular phenomena.
*Purposeful sampling is usually used in the selection of participants.
Elements included in a report (research process) of a case study: Introduction, Research design, Sampling procedure, Measures (data-collection procedures), Data analysis (theoretical saturation, interpretational analysis, reflective analysis, findings), Discussion.
*An interpretivist view or orientation is an important criterion in case studies (in contrast to the positivist orientation of some case studies).
*Interpretivism: Construction of the reality (meanings) by the individuals who participate in a research.
*Interpretational analysis involves an explicit category coding system.
*Coding data is an important aspect of interpretational analysis. It is done by classification and categorization (hierarchical categorization/hierarchical coding).
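A minimal sketch (purely illustrative; the interview segments and codes are hypothetical) of a hierarchical category coding system: each segment is tagged category/subcategory, then grouped by broad category:

from collections import defaultdict

# Hypothetical interview segments tagged with hierarchical codes
# of the form "broad category/narrower subcategory".
coded_segments = {
    "I never have time to plan lessons":  "workload/planning",
    "My mentor checks in every week":     "support/mentoring",
    "Grading takes my whole weekend":     "workload/assessment",
}

# Group the coded segments under their broad (top-level) category.
by_category = defaultdict(list)
for segment, code in coded_segments.items():
    by_category[code.split("/")[0]].append(segment)
print(dict(by_category))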
*The etic and emic perspectives are crucial in the data analysis.
*Interpretation, or interpretational analysis, is crucial in case studies. It involves identifying constructs, themes, and patterns that make sense of the data.
*There is a model of interpretational analysis based on the theory known as grounded theory. It is used to discover theory that is grounded in the data in an inductive fashion, and it involves coding data.
*In the interpretational analysis process, findings are analyzed using both deductive and inductive strategies (approaches). The deductive approach is based on theories or concepts generated a priori; the inductive approach implies discovering unanticipated concerns.
*It is important to practice reflexivity, which means engaging in self-reflection to identify, communicate, and attempt to reduce the effects of one's personal biases.
*Reflective analysis is an important aspect of data analysis. It involves reflecting on the findings based on both the conceptual framework and a deep personal process of pondering the phenomenon.
*The most common forms of data collection are individual or focus group interviews, participant observation, analysis
of documents and media, and use of paper-and-pencil measures.
*Data analysis is sometimes done by reflective analysis, relying on the researcher's own intuition and personal judgment.
*Case studies offer thick descriptions of their cases to enable reflections about applicability in other settings.
*Evaluative strategies to demonstrate the credibility and trustworthiness of case study findings include: usefulness, participant involvement, inclusion of quantitative data, long-term observation, coding checks, member checking, triangulation, contextual completeness, chain of evidence, and reflexivity.
---
ACTION RESEARCH
*Action research is closely tied to educators' practice because of its potential to solve particular problems of practice. Its purpose is not building theory or generalizable knowledge.
*Systematic data collection, analysis, and reflection distinguish action research from other approaches of problem
solving.
*Action research aims to improve educators' practice.
*Action research can be carried out in specific classrooms and departments.
*Action research's starting point is a problem of practice seen from an insider's perspective, but outsiders can help design the study.
*Action researchers can use a number of validity criteria to achieve maximum credibility and trustworthiness for their research studies.
*Action research implies practical interventions or implemented procedures (taking action) to change a reality.
*Convenience sampling is usually used in Action research.
*Action research usually presents limited review of research literature, emphasizing secondary sources.
*In action research, researchers can begin by taking a new action before collecting any data. Reflection follows action, and reflection is appropriate at various points throughout the project.
*Design steps of action research: selection of a focus for the study, data collection (observation, evaluation of changes in behavior and skills), taking action, reflection, continuation or modification of practices, preparing a report of findings. The steps are not always performed in the same order, especially those related to data collection, taking action, and reflection.
*Journaling is a critical tool: a written record of practice.
* Evaluation of credibility and trustworthiness. Qualitative criteria: outcome validity, process validity, democratic validity, catalytic validity, dialogic validity.
-Outcome validity: the study led to a resolution of the problem.
-Process validity: adequacy of the process, triangulation, literature review.
-Democratic validity: project done in collaboration, multiple perspectives, ethics and social justice.
-Catalytic validity: reorients the focus and energizes/mobilizes participants toward a new view of reality.
-Dialogic validity: based on extensive dialogue with peers about findings, alternative explanations.
*An action research project's credibility can also be based on quantitative criteria, with reference to descriptive research, group comparison research, correlational research, and experimental research.
---
MIXED-METHODS RESEARCH
-The decision about which method or approach to use is based on the research question, objective or scope.
-Mixed-methods researchers use both quantitative and qualitative techniques, either concurrently or sequentially, to address the same or related research questions.
-Mixed methods research reports include both quantitative results (typically based on statistical analysis) and qualitative
results (typically based on the search for themes in observational or verbal data).
-The discussion section of a mixed-methods research report should include: a summary of findings, implications, further
research reflection, limitations.
-Qualitative studies can rely on the study of a single case, while quantitative studies rely on samples that represent a whole population.
-In qualitative studies, the purpose is to discover concepts and theories inductively from people's responses, through a grounded theory process. In quantitative studies, the purpose is to use known concepts and theories to create new research knowledge.
-Types of mixed-methods research: sequential-explanatory research design, sequential-transformative design, concurrent-triangulation design.
-Sequential-explanatory research design: quantitative data are collected and analyzed first, followed by qualitative data that help explain the quantitative results.
-Sequential-transformative research design: uses a theoretical perspective to guide both the qualitative and quantitative phases.
-Concurrent-triangulation research design: uses quantitative and qualitative findings to determine points of agreement or contradiction relating to the same research question; to corroborate data.
-Either the qualitative or the quantitative phase can occur first in the study; they can be presented in various sequences.
-Qualitative data can be used to amplify or deepen quantitative findings.
-Organization of mixed-methods research reports: they vary in how they are organized, but generally include Introduction, Research design, Sampling procedure, Measures, Results, Discussion.
-Quantitative and qualitative methods and results can be presented in various sequences.
-Quantitative data and methods are not subordinated to qualitative ones, or the reverse; both types of data are respected as sources for inquiring into the research question.
-Mixed-methods research design imposes no restrictions on the types of measures that are permissible.
-In the results section of the report, quantitative and qualitative analyses are often presented separately and then integrated in a subsequent section.
-Evaluating reports of mixed-methods studies: use both qualitative and quantitative evaluation methods.
-Sampling is chosen according to the technique or method for gathering data: if quantitative, randomization; if qualitative, purposeful or convenience sampling strategies.
Examples:
Question: How do music teachers in urban schools go about their work? Methods: phase 1, interviews; phase 2, survey.
Question: How should teachers respond to student errors during mathematics instruction? Methods: class video-taping, interviews.