Basic Statistical Tools in Research and Data Analysis



Zulfiqar Ali and S Bala Bhaskar

Variables

Figure 1: Classification of variables into qualitative (nominal, ordinal) and quantitative (discrete and continuous; interval and ratio scales) types.

 Categorical variable: variables that can be put into categories. For example, the category “Toothpaste Brands” might contain the
variables Colgate and Aquafresh.
 Confounding variable: extra variables that have a hidden effect on your experimental results.
 Continuous variable: a variable with infinite number of values, like “time” or “weight”.
 Control variable: a factor in an experiment which must be held constant. For example, in an experiment to determine whether
light makes plants grow faster, you would have to control for soil quality and water.
 Dependent variable: the outcome of an experiment. As you change the independent variable, you watch what happens to the
dependent variable.
 Discrete variable: a variable that can only take on a certain number of values. For example, “number of cars in a parking lot” is
discrete because a car park can only hold so many cars.
 Independent variable: a variable that is not affected by anything that you, the researcher, do. Usually plotted on the x-axis.
 A measurement variable has a number associated with it. It’s an “amount” of something, or a “number” of something.
 Nominal variable: another name for categorical variable.
 Ordinal variable: similar to a categorical variable, but there is a clear order. For example, income levels of low, middle, and high
could be considered ordinal.
 Qualitative variable: a broad category for any variable that can’t be counted (i.e. has no numerical value). Nominal and ordinal
variables fall under this umbrella term.
 Quantitative variable: A broad category that includes any variable that can be counted, or has a numerical value associated with
it. Examples of variables that fall into this category include discrete variables and ratio variables.
 Random variables are associated with random processes and give numbers to outcomes of random events.
 A ranked variable is an ordinal variable; a variable where every data point can be put in order (1st, 2nd, 3rd, etc.).
 Ratio variables: similar to interval variables, but have a meaningful zero point.
Experimental research: In experimental research, the aim is to manipulate an independent variable(s) and then examine the effect that
this change has on a dependent variable(s). Since it is possible to manipulate the independent variable(s), experimental research has the
advantage of enabling a researcher to identify a cause and effect between variables. For example, take our example of 100 students
completing a maths exam where the dependent variable was the exam mark (measured from 0 to 100), and the independent variables
were revision time (measured in hours) and intelligence (measured using IQ score). Here, it would be possible to use an experimental
design and manipulate the revision time of the students. The tutor could divide the students into two groups, each made up of 50
students. In "group one", the tutor could ask the students not to do any revision. Alternatively, "group two" could be asked to do 20 hours
of revision in the two weeks prior to the test. The tutor could then compare the marks that the students achieved.

Non-experimental research: In non-experimental research, the researcher does not manipulate the independent variable(s). This is not
to say that it is impossible to do so, but it would be either impractical or unethical. For example, a researcher may be interested in
the effect of illegal, recreational drug use (the independent variable(s)) on certain types of behaviour (the dependent variable(s)).
However, whilst possible, it would be unethical to ask individuals to take illegal drugs in order to study what effect this had on certain
behaviours. As such, a researcher could ask both drug and non-drug users to complete a questionnaire that had been constructed to
indicate the extent to which they exhibited certain behaviours. Whilst it is not possible to identify the cause and effect between the
variables, we can still examine the association or relationship between them. In addition to understanding the difference between
dependent and independent variables, and experimental and non-experimental research, it is also important to understand the different
characteristics amongst variables. This is discussed next.

Categorical and Continuous Variables


Categorical variables are also known as discrete or qualitative variables. Categorical variables can be further categorized as
either nominal, ordinal or dichotomous.
Nominal variables are variables that have two or more categories, but which do not have an intrinsic order. For example, a real estate
agent could classify their types of property into distinct categories such as houses, condos, co-ops or bungalows. So "type of property" is
a nominal variable with 4 categories called houses, condos, co-ops and bungalows. Of note, the different categories of a nominal variable
can also be referred to as groups or levels of the nominal variable. Another example of a nominal variable would be classifying where
people live in the USA by state. In this case there will be many more levels of the nominal variable (50 in fact).

Dichotomous variables are nominal variables which have only two categories or levels. For example, if we were looking at gender, we
would most probably categorize somebody as either "male" or "female". This is an example of a dichotomous variable (and also a
nominal variable). Another example might be if we asked a person if they owned a mobile phone. Here, we may categorise mobile phone
ownership as either "Yes" or "No". In the real estate agent example, if type of property had been classified as either residential or
commercial then "type of property" would be a dichotomous variable.

Ordinal variables are variables that have two or more categories, just like nominal variables, but the categories can also be ordered or
ranked. So if you asked someone if they liked the policies of the Democratic Party and they could answer either "Not very much", "They
are OK" or "Yes, a lot" then you have an ordinal variable. Why? Because you have 3 categories, namely "Not very much", "They are OK"
and "Yes, a lot" and you can rank them from the most positive (Yes, a lot), to the middle response (They are OK), to the least positive (Not
very much). However, whilst we can rank the levels, we cannot place a "value" to them; we cannot say that "They are OK" is twice as
positive as "Not very much" for example.

Quantitative variables
Quantitative or numerical data are subdivided into discrete and continuous measurements. Discrete numerical data are recorded as a
whole number such as 0, 1, 2, 3,… (integer), whereas continuous data can assume any value. Observations that can be counted constitute
the discrete data and observations that can be measured constitute the continuous data. Examples of discrete data are number of
episodes of respiratory arrests or the number of re-intubations in an intensive care unit. Similarly, examples of continuous data are the
serial serum glucose levels, partial pressure of oxygen in arterial blood and the oesophageal temperature.
A hierarchical scale of increasing precision can be used for observing and recording the data which is based on categorical, ordinal,
interval and ratio scales.
Categorical or nominal variables are unordered. The data are merely classified into categories and cannot be arranged in any particular
order. If only two categories exist (as in gender: male and female), it is called dichotomous (or binary) data. The various causes of re-
intubation in an intensive care unit, such as upper airway obstruction, impaired clearance of secretions, hypoxemia, hypercapnia,
pulmonary oedema and neurological impairment, are examples of categorical variables.
Ordinal variables have a clear ordering between the categories. However, the ordered data may not have equal intervals. Examples are the
American Society of Anesthesiologists status or Richmond agitation-sedation scale.
Interval variables are similar to ordinal variables, except that the intervals between the values of the interval variable are equally
spaced. A good example of an interval scale is the Fahrenheit degree scale used to measure temperature. With the Fahrenheit scale, the
difference between 70° and 75° is equal to the difference between 80° and 85°: The units of measurement are equal throughout the full
range of the scale.
Ratio scales are similar to interval scales, in that equal differences between scale values have equal quantitative meaning. However, ratio
scales also have a true zero point, which gives them an additional property. For example, the system of centimetres is an example of a
ratio scale. There is a true zero point and the value of 0 cm means a complete absence of length. The thyromental distance of 6 cm in an
adult may be twice that of a child in whom it may be 3 cm.
STATISTICS: DESCRIPTIVE AND INFERENTIAL STATISTICS
Descriptive statistics[4] try to describe the relationship between variables in a sample or population. Descriptive statistics provide a
summary of data in the form of mean, median and mode. Inferential statistics[4] use a random sample of data taken from a population to
describe and make inferences about the whole population. It is valuable when it is not possible to examine each member of an entire
population.

Table 1: Example of descriptive and inferential statistics


Descriptive statistics
The extent to which the observations cluster around a central location is described by the central tendency and the spread towards the
extremes is described by the degree of dispersion.
Measures of central tendency
The measures of central tendency are mean, median and mode.[6] Mean (or the arithmetic average) is the sum of all the scores divided
by the number of scores. The mean may be influenced profoundly by extreme values. For example, the average stay of
organophosphorus poisoning patients in ICU may be influenced by a single patient who stays in ICU for around 5 months because of
septicaemia. The extreme values are called outliers. The formula for the mean is

Mean, $\bar{x} = \frac{\sum x}{n}$,
where x = each observation and n = number of observations. Median[6] is defined as the middle of a distribution in ranked data (with
half of the observations in the sample above and half below the median value), while mode is the most frequently occurring value in a
distribution. Range defines the spread, or variability, of a sample.[7] It is described by the minimum and maximum values of the
variables. If we rank the data and, after ranking, group the observations into percentiles, we get a better picture of the pattern of
spread of the variables. In percentiles, we rank the observations into 100 equal parts. We can then describe the 25th, 50th, 75th or any other
percentile. The median is the 50th percentile. The interquartile range covers the middle 50% of the
observations about the median (25th to 75th percentile). Variance[7] is a measure of how spread out the distribution is. It gives an indication
of how closely individual observations cluster about the mean value. The variance of a population is defined by the following formula:

$\sigma^2 = \frac{\sum (X_i - X)^2}{N}$

where $\sigma^2$ is the population variance, $X$ is the population mean, $X_i$ is the ith element from the population and $N$ is the number of elements in the population. The variance of a sample is defined by a slightly different formula:

$s^2 = \frac{\sum (x_i - x)^2}{n - 1}$

where $s^2$ is the sample variance, $x$ is the sample mean, $x_i$ is the ith element from the sample and $n$ is the number of elements in the sample. The formula for the variance of a population has $N$ as the denominator, whereas the sample variance uses $n - 1$. The expression $n - 1$ is known as the degrees of freedom and is one less than the number of observations: once the sample mean is fixed, every observation is free to vary except the last, whose value is then determined. The variance is measured in squared units. To make the interpretation of the data simple and to retain the basic unit of observation, the square root of the variance is used. The square root of the variance is the standard deviation (SD).[8] The SD of a population is defined by the following formula:

$\sigma = \sqrt{\frac{\sum (X_i - X)^2}{N}}$

where $\sigma$ is the population SD, $X$ is the population mean, $X_i$ is the ith element from the population and $N$ is the number of elements in the population. The SD of a sample is defined by a slightly different formula:

$s = \sqrt{\frac{\sum (x_i - x)^2}{n - 1}}$

where $s$ is the sample SD, $x$ is the sample mean, $x_i$ is the ith element from the sample and $n$ is the number of elements in the sample. An example of the calculation of variance and SD is illustrated in Table 2.

Table 2: Example of mean, variance and standard deviation
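The same calculations can be sketched in Python (using NumPy, on made-up observations rather than the values in the table); the ddof argument switches the denominator between N and n − 1:

import numpy as np

# Hypothetical observations (e.g., serum glucose values); purely illustrative
x = np.array([4.5, 5.1, 5.8, 6.2, 7.0, 9.3])

n = x.size
mean = x.sum() / n               # mean = sum of scores / number of scores
pop_var = np.var(x, ddof=0)      # population variance: denominator N
samp_var = np.var(x, ddof=1)     # sample variance: denominator n - 1
pop_sd = np.sqrt(pop_var)        # population SD
samp_sd = np.std(x, ddof=1)      # sample SD

print(mean, pop_var, samp_var, pop_sd, samp_sd)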


Normal distribution or Gaussian distribution
Most of the biological variables usually cluster around a central value, with symmetrical positive and negative deviations about this
point.[1] The standard normal distribution curve is a symmetrical bell-shaped. In a normal distribution curve, about 68% of the scores
are within 1 SD of the mean. Around 95% of the scores are within 2 SDs of the mean and 99% within 3 SDs of the mean [Figure 2].

Figure 2: Normal distribution curve
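These proportions can be checked numerically from the standard normal cumulative distribution function, assuming SciPy is available:

from scipy.stats import norm

# Proportion of a normal distribution lying within k SDs of the mean
for k in (1, 2, 3):
    prop = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {prop:.4f}")   # ~0.6827, ~0.9545, ~0.9973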


Skewed distribution
It is a distribution with an asymmetry of the variables about its mean. In a negatively skewed distribution [Figure 3], the mass of the
distribution is concentrated on the right of the figure, leading to a longer left tail. In a positively skewed distribution [Figure 3], the mass
of the distribution is concentrated on the left of the figure, leading to a longer right tail.

Figure 3: Curves showing negatively skewed and positively skewed distributions


Inferential statistics
In inferential statistics, data from a sample are analysed to make inferences about the larger population from which the sample was
drawn. The purpose is to answer or test hypotheses. A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon.
Hypothesis tests are thus procedures for making rational decisions about the reality of observed effects.
Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 (where 0
indicates impossibility and 1 indicates certainty).
In inferential statistics, the term ‘null hypothesis’ (H0 ‘H-naught,’ ‘H-null’) denotes that there is no relationship (difference) between the
population variables in question.[9]
The alternative hypothesis (H1 or Ha) denotes that a relationship (difference) between the variables is expected to be true.[9]
The P value (or the calculated probability) is the probability of obtaining the observed result (or one more extreme) by chance if the null
hypothesis is true. The P value is a number between 0 and 1 and is interpreted by researchers in deciding whether to reject or retain the null hypothesis [Table 3].

Table 3: P values with interpretation


If the P value is less than the arbitrarily chosen value (known as α or the significance level), the null hypothesis (H0) is rejected [Table 4].
However, if the null hypothesis (H0) is incorrectly rejected, this is known as a Type I error.[11] Further details regarding alpha error, beta
error and sample size calculation, and the factors influencing them, are dealt with in another section of this issue by Das S et al.[12]

Table 4: Illustration of the null hypothesis


PARAMETRIC AND NON-PARAMETRIC TESTS
Numerical data (quantitative variables) that are normally distributed are analysed with parametric tests.[13]
The two most basic prerequisites for parametric statistical analysis are:
The assumption of normality, which specifies that the means of the sample groups are normally distributed
The assumption of equal variance, which specifies that the variances of the samples and of their corresponding populations are equal.
However, if the distribution of the sample is skewed towards one side or the distribution is unknown due to the small sample size, non-
parametric[14] statistical techniques are used. Non-parametric tests are used to analyse ordinal and categorical data.
Parametric tests
The parametric tests assume that the data are on a quantitative (numerical) scale, with a normal distribution of the underlying
population. The samples have the same variance (homogeneity of variances). The samples are randomly drawn from the population, and
the observations within a group are independent of each other. The commonly used parametric tests are the Student's t-test, analysis of
variance (ANOVA) and repeated measures ANOVA.
Student's t-test
Student's t-test is used to test the null hypothesis that there is no difference between the means of the two groups. It is used in three
circumstances:
To test if a sample mean (as an estimate of a population mean) differs significantly from a given population mean (this is a one-sample t-
test)

The formula for the one-sample t-test is

$t = \frac{\bar{X} - \mu}{SE}$

where $\bar{X}$ = sample mean, $\mu$ = population mean and SE = standard error of the mean.
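As a minimal sketch (hypothetical exam scores, SciPy assumed available), the one-sample t statistic can be computed directly from this formula and checked against scipy.stats.ttest_1samp:

import numpy as np
from scipy.stats import ttest_1samp

sample = np.array([72, 75, 68, 80, 77, 74, 70, 79])   # hypothetical sample
mu = 70.0                                              # hypothesised population mean

# Formula: t = (sample mean - population mean) / SE, with SE = s / sqrt(n)
se = sample.std(ddof=1) / np.sqrt(sample.size)
t_manual = (sample.mean() - mu) / se

t_stat, p_value = ttest_1samp(sample, popmean=mu)
print(t_manual, t_stat, p_value)   # the two t values agree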
To test if the population means estimated by two independent samples differ significantly (the unpaired t-test). The formula for the
unpaired t-test is:

$t = \frac{\bar{X}_1 - \bar{X}_2}{SE}$

where $\bar{X}_1 - \bar{X}_2$ is the difference between the means of the two groups and SE denotes the standard error of the difference.
To test if the population means estimated by two dependent samples differ significantly (the paired t-test). A usual setting for paired t-test
is when measurements are made on the same subjects before and after a treatment.
The formula for the paired t-test is:

$t = \frac{\bar{d}}{SE}$

where $\bar{d}$ is the mean difference and SE denotes the standard error of this difference.
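The unpaired and paired forms can be sketched the same way with made-up data; scipy.stats.ttest_ind handles two independent groups (for example the revision experiment described earlier, with invented marks) and scipy.stats.ttest_rel handles before/after measurements on the same subjects:

import numpy as np
from scipy.stats import ttest_ind, ttest_rel

group1 = np.array([65, 70, 72, 68, 74, 71])     # e.g., no revision (hypothetical marks)
group2 = np.array([75, 80, 78, 82, 77, 79])     # e.g., 20 hours of revision (hypothetical marks)
t_unpaired, p_unpaired = ttest_ind(group1, group2, equal_var=True)

before = np.array([120, 132, 140, 128, 135])    # paired measurements on the same subjects (hypothetical)
after  = np.array([115, 128, 131, 124, 130])
t_paired, p_paired = ttest_rel(before, after)   # equivalent to a one-sample test on d = before - after

print(p_unpaired, p_paired)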
The group variances can be compared using the F-test. The F-test is the ratio of the two variances (var 1/var 2). If F differs significantly from 1.0,
then it is concluded that the group variances differ significantly.
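A sketch of this variance-ratio F-test, with the larger sample variance placed in the numerator and the tail probability read from the F distribution (hypothetical data):

import numpy as np
from scipy.stats import f

a = np.array([65, 70, 72, 68, 74, 71])          # hypothetical group 1
b = np.array([60, 85, 66, 90, 58, 88])          # hypothetical group 2

var_a, var_b = a.var(ddof=1), b.var(ddof=1)
F = max(var_a, var_b) / min(var_a, var_b)        # ratio of variances (larger / smaller)
df1 = (a.size - 1) if var_a >= var_b else (b.size - 1)   # numerator degrees of freedom
df2 = (b.size - 1) if var_a >= var_b else (a.size - 1)   # denominator degrees of freedom
p = 2 * f.sf(F, df1, df2)                        # two-tailed p-value
print(F, p)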
Analysis of variance
The Student's t-test cannot be used for comparison of three or more groups. The purpose of ANOVA is to test if there is any significant
difference between the means of two or more groups.
In ANOVA, we study two variances – (a) between-group variability and (b) within-group variability. The within-group variability (error
variance) is the variation that cannot be accounted for in the study design. It is based on random differences present in our samples.
However, the between-group variability (or effect variance) is the result of our treatment. These two estimates of variance are compared using the
F-test.
A simplified formula for the F statistic is:

$F = \frac{MS_b}{MS_w}$

where $MS_b$ is the mean square between the groups and $MS_w$ is the mean square within groups.
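The following sketch computes MSb, MSw and F explicitly for three hypothetical groups and checks the result against scipy.stats.f_oneway:

import numpy as np
from scipy.stats import f_oneway

groups = [np.array([23, 25, 27, 22]),            # hypothetical group data
          np.array([30, 32, 29, 31]),
          np.array([26, 28, 27, 25])]

all_data = np.concatenate(groups)
grand_mean = all_data.mean()
k, N = len(groups), all_data.size

ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_between = ss_between / (k - 1)                # between-group mean square (MSb)
ms_within = ss_within / (N - k)                  # within-group mean square (MSw)
F_manual = ms_between / ms_within

F_scipy, p = f_oneway(*groups)
print(F_manual, F_scipy, p)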
Repeated measures analysis of variance
As with ANOVA, repeated measures ANOVA analyses the equality of means of three or more groups. However, a repeated measures
ANOVA is used when all members of a sample are measured under different conditions or at different points in time.
As the variables are measured from a sample at different points of time, the measurement of the dependent variable is repeated. Using a
standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures: The data violate
the ANOVA assumption of independence. Hence, in the measurement of repeated dependent variables, repeated measures ANOVA should
be used.
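A repeated measures ANOVA can be sketched with the AnovaRM class from statsmodels (assumed installed), which expects long-format data with one row per subject per condition; the data below are invented:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: each subject measured under three conditions
df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["baseline", "1h", "24h"] * 4,
    "score":     [5.0, 6.2, 5.8, 4.8, 6.0, 5.5, 5.2, 6.5, 6.0, 4.9, 5.9, 5.6],
})

res = AnovaRM(df, depvar="score", subject="subject", within=["condition"]).fit()
print(res)   # F statistic for the within-subject factor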
Non-parametric tests
When the assumptions of normality are not met and the sample means are not normally distributed, parametric tests can lead to
erroneous results. Non-parametric (distribution-free) tests are used in such situations as they do not require the normality
assumption.[15] Non-parametric tests may fail to detect a significant difference when compared with a parametric test. That is, they
usually have less power.
As is done for the parametric tests, the test statistic is compared with known values for the sampling distribution of that statistic and the
null hypothesis is accepted or rejected. The types of non-parametric analysis techniques and the corresponding parametric analysis
techniques are delineated in Table 5.

Table 5: Analogues of parametric and non-parametric tests


Median test for one sample: The sign test and Wilcoxon's signed rank test
The sign test and Wilcoxon's signed rank test are used for median tests of one sample. These tests examine whether one instance of
sample data is greater or smaller than the median reference value.
Sign test
This test examines the hypothesis about the median θ0 of a population. It tests the null hypothesis H0: θ = θ0. When the observed value (Xi)
is greater than the reference value (θ0), it is marked with a + sign. If the observed value is smaller than the reference value, it is marked with a − sign.
If the observed value is equal to the reference value (θ0), it is eliminated from the sample.
If the null hypothesis is true, there will be an equal number of + signs and − signs.
The sign test ignores the actual values of the data and only uses + or − signs. Therefore, it is useful when it is difficult to measure the
values.
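A sketch of the sign test against a reference median θ0, dropping ties and applying a binomial test to the number of + signs (scipy.stats.binomtest, available in SciPy 1.7 and later; the observations are hypothetical):

import numpy as np
from scipy.stats import binomtest

x = np.array([7.2, 6.8, 8.1, 7.9, 6.5, 8.4, 7.0, 7.6])   # hypothetical observations
theta0 = 7.0                                              # reference median

plus = int(np.sum(x > theta0))           # observations above the reference value
minus = int(np.sum(x < theta0))          # observations below it (ties are dropped)
result = binomtest(plus, n=plus + minus, p=0.5)   # under H0, + and - are equally likely
print(plus, minus, result.pvalue)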
Wilcoxon's signed rank test
A major limitation of the sign test is that we lose the quantitative information in the data and merely use the + or − signs.
Wilcoxon's signed rank test not only examines the observed values in comparison with θ0 but also takes into consideration the relative
sizes, adding more statistical power to the test. As in the sign test, if there is an observed value that is equal to the reference value θ0, this
observed value is eliminated from the sample.
Wilcoxon's rank sum test ranks all data points in order, calculates the rank sum of each sample and compares the difference in the rank
sums.
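For the one-sample case, Wilcoxon's signed rank test can be sketched by shifting the data by the reference median θ0 and calling scipy.stats.wilcoxon, which ranks the absolute deviations and drops zero differences by default (hypothetical data):

import numpy as np
from scipy.stats import wilcoxon

x = np.array([7.2, 6.8, 8.1, 7.9, 6.5, 8.4, 7.0, 7.6])   # hypothetical observations
theta0 = 7.0                                              # reference median

stat, p = wilcoxon(x - theta0)    # ranks the absolute deviations and uses their signs
print(stat, p)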
Mann-Whitney test
It is used to test the null hypothesis that two samples have the same median or, alternatively, whether observations in one sample tend
to be larger than observations in the other.
Mann–Whitney test compares all data (xi) belonging to the X group and all data (yi) belonging to the Y group and calculates the
probability of xi being greater than yi: P(xi > yi). The null hypothesis states that P(xi > yi) = P(xi < yi) = 1/2, while the alternative
hypothesis states that P(xi > yi) ≠ 1/2.
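A minimal Mann-Whitney sketch on two hypothetical independent samples:

import numpy as np
from scipy.stats import mannwhitneyu

x = np.array([12, 15, 11, 18, 14, 13])     # hypothetical group X
y = np.array([20, 22, 17, 25, 19, 21])     # hypothetical group Y

U, p = mannwhitneyu(x, y, alternative="two-sided")   # tests whether P(x > y) = 1/2
print(U, p)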
Kolmogorov-Smirnov test
The two-sample Kolmogorov-Smirnov (KS) test was designed as a generic method to test whether two random samples are drawn from
the same distribution. The null hypothesis of the KS test is that both distributions are identical. The statistic of the KS test is a distance
between the two empirical distributions, computed as the maximum absolute difference between their cumulative curves.
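A two-sample KS sketch on simulated data; ks_2samp returns the maximum distance between the two empirical cumulative distributions and its p-value:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=50)    # hypothetical sample 1
b = rng.normal(loc=0.5, scale=1.0, size=50)    # hypothetical sample 2

D, p = ks_2samp(a, b)    # D = maximum |ECDF_a - ECDF_b|
print(D, p)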
Kruskal-Wallis test
The Kruskal–Wallis test is a non-parametric test to analyse the variance.[14] It analyses if there is any difference in the median values of
three or more independent samples. The data values are ranked in an increasing order, and the rank sums calculated followed by
calculation of the test statistic.
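A Kruskal-Wallis sketch on three hypothetical independent samples:

from scipy.stats import kruskal

g1 = [7, 9, 6, 8, 7]         # hypothetical samples
g2 = [12, 11, 14, 10, 13]
g3 = [9, 10, 8, 11, 9]

H, p = kruskal(g1, g2, g3)   # ranks all values together and compares the rank sums
print(H, p)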
Jonckheere test
In contrast to the Kruskal–Wallis test, in the Jonckheere test there is an a priori ordering of the groups, which gives it more statistical power than the Kruskal–
Wallis test.[14]
Friedman test
The Friedman test is a non-parametric test for testing the difference between several related samples. The Friedman test is an alternative
to repeated measures ANOVA, which is used when the same parameter has been measured under different conditions on the same
subjects.[13]
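A Friedman test sketch, where each list holds the measurements of one condition on the same hypothetical subjects:

from scipy.stats import friedmanchisquare

# Hypothetical repeated measurements: the same 5 subjects under three conditions
cond1 = [5.1, 4.8, 5.5, 5.0, 4.9]
cond2 = [6.0, 5.9, 6.3, 6.1, 5.8]
cond3 = [5.6, 5.4, 5.9, 5.7, 5.5]

stat, p = friedmanchisquare(cond1, cond2, cond3)
print(stat, p)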
Tests to analyse the categorical data
Chi-square test, Fisher's exact test and McNemar's test are used to analyse categorical or nominal variables. The Chi-square test
compares the frequencies and tests whether the observed data differ significantly from the expected data if there were no
differences between groups (i.e., the null hypothesis). It is calculated as the sum of the squared difference between the observed (O) and the
expected (E) data (or the deviation, d), divided by the expected data, by the following formula:

$\chi^2 = \sum \frac{(O - E)^2}{E}$
A Yates correction factor is used when the sample size is small. Fisher's exact test is used to determine if there are non-random
associations between two categorical variables. It does not assume random sampling, and instead of referring a calculated statistic to a
sampling distribution, it calculates an exact probability. McNemar's test is used for paired nominal data. It is applied to a 2 × 2 table with
paired-dependent samples. It is used to determine whether the row and column frequencies are equal (that is, whether there is ‘marginal
homogeneity’). The null hypothesis is that the paired proportions are equal. The Mantel-Haenszel Chi-square test is a multivariate test as
it analyses multiple grouping variables. It stratifies according to the nominated confounding variables and identifies any that affect the
primary outcome variable. If the outcome variable is dichotomous, then logistic regression is used.
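The categorical tests can be sketched on hypothetical 2 × 2 tables; scipy.stats.chi2_contingency applies the Yates correction to 2 × 2 tables by default, fisher_exact gives an exact probability, and McNemar's test is taken here from statsmodels (assumed installed):

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical 2 x 2 table of observed frequencies (groups x outcome)
table = np.array([[20, 30],
                  [35, 15]])

chi2, p_chi2, dof, expected = chi2_contingency(table)   # Yates correction applied for 2 x 2
odds_ratio, p_fisher = fisher_exact(table)              # exact probability for small samples

# Hypothetical paired data (before vs after) for McNemar's test
paired = np.array([[30, 10],
                   [5, 25]])
p_mcnemar = mcnemar(paired, exact=True).pvalue          # tests marginal homogeneity

print(p_chi2, p_fisher, p_mcnemar)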
