
Nakagawa et al. BMC Biology (2017) 15:18
DOI 10.1186/s12915-017-0357-7

REVIEW Open Access

Meta-evaluation of meta-analysis: ten appraisal questions for biologists

Shinichi Nakagawa1,2*, Daniel W. A. Noble1, Alistair M. Senior3,4 and Malgorzata Lagisz1

Abstract
Meta-analysis is a statistical procedure for analyzing the combined data from different studies, and can be a major source of concise, up-to-date information. The overall conclusions of a meta-analysis, however, depend heavily on the quality of the meta-analytic process, and an appropriate evaluation of the quality of meta-analysis (meta-evaluation) can be challenging. We outline ten questions biologists can ask to critically appraise a meta-analysis. These questions could also act as simple and accessible guidelines for the authors of meta-analyses. We focus on meta-analyses using non-human species, which we term 'biological' meta-analysis. Our ten questions are aimed at enabling a biologist to evaluate whether a biological meta-analysis embodies 'mega-enlightenment', a 'mega-mistake', or something in between.

Keywords: Effect size, Biological importance, Non-independence, Meta-regression, Meta-research, Publication bias, Quantitative synthesis, Reporting bias, Statistical significance, Systematic review

* Correspondence: [email protected]
All authors contributed equally to the preparation of this manuscript
1 Evolution & Ecology Research Centre and School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia
2 Diabetes and Metabolism Division, Garvan Institute of Medical Research, 384 Victoria Street, Darlinghurst, Sydney, NSW 2010, Australia
Full list of author information is available at the end of the article

© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Meta-analyses can be important and informative, but are they all?
Last year marked 40 years since the coining of the term 'meta-analysis' by Gene Glass in 1976 [1, 2]. Meta-analyses, in which data from multiple studies are combined to evaluate an overall effect, or effect size, were first introduced to the medical and social sciences, where humans are the main species of interest [3–5]. Decades later, meta-analysis has infiltrated different areas of the biological sciences [6], including ecology, evolutionary biology, conservation biology, and physiology. Here non-human species, or even ecosystems, are the main focus [7–12]. Despite this somewhat later arrival, interest in meta-analysis has been rapidly increasing in the biological sciences. We have argued that the remarkable surge in interest over the last several years may indicate that meta-analysis is superseding traditional (narrative) reviews as a more objective and informative way of summarizing biological topics [8].
It is likely that the majority of us (biologists) have never conducted a meta-analysis. Chances are, however, that almost all of us have read at least one. Meta-analysis can not only provide quantitative information (such as overall effects and consistency among studies), but also qualitative information (such as dominant research trends and current knowledge gaps). In contrast to that of many medical and social scientists [3, 5], the training of a biologist does not typically include meta-analysis [13] and, consequently, it may be difficult for a biologist to evaluate and interpret a meta-analysis. As with original research studies, the quality of meta-analyses varies immensely. For example, recent reviews have revealed that many meta-analyses in ecology and evolution miss, or perform poorly, several critical steps that are routinely implemented in the medical and social sciences [14, 15] (but also see [16, 17]).
The aim of this review is to provide ten appraisal questions that one should ask when reading a meta-analysis (cf. [18, 19]), although these questions could also be used as simple and accessible guidelines for researchers conducting meta-analyses. In this review, we only deal with 'narrow sense' or 'formal' meta-analyses, where a statistical model is used to combine common effect sizes across studies, and the model takes into account sampling error, which is a function of the sample size upon which each effect size is based (more details below; for discussions on the definitions of meta-analysis, see [15, 20, 21]). Further, our emphasis is on 'biological' meta-analyses, which deal with non-human species, including model organisms (nematodes, fruit flies, mice, and rats [22]) and non-model

organisms, multiple species, or even entire ecosystems. For medical and social science meta-analyses concerning human subjects, large bodies of literature and excellent guidelines already exist, especially from overseeing organizations such as the Cochrane Collaboration and the Campbell Collaboration. We refer to the literature and the practices from these 'experienced' disciplines where appropriate. An overview and roadmap of this review is presented in Fig. 1. Clearly, we cannot cover all details, but we cite key references in each section so that interested readers can follow up.

Fig. 1. Mapping the process (on the left) and main evaluation questions (on the right) for meta-analysis. References to the relevant figures (Figs. 2, 3, 4, 5 and 6) are included in the blue ovals.

Q1: Is the search systematic and transparently documented?
When we read a biological meta-analysis, it used to be (and probably still is) common to see a statement like "a comprehensive search of the literature was conducted" without mention of the date and type of databases the authors searched. Documentation of keyword strings and inclusion criteria is often also very poor, making replication of search outcomes difficult or impossible. Superficial documentation also makes it hard to tell whether the search really was comprehensive, and, more importantly, systematic.
A comprehensive search attempts to identify (almost) all relevant studies/data for a given meta-analysis, and would thus not only include multiple major databases for finding published studies, but also make use of various lesser-known databases to locate reports and unpublished studies. Despite the common belief that search results should be similar among major databases, overlaps can sometimes be only moderate. For example, the overlap in search results between Web of Science and Scopus (two of the most popular academic databases) is only 40–50% in many major fields [23]. As well as reading that a search is comprehensive, it is not uncommon to read that a search was systematic. A systematic search needs to follow a set of pre-determined protocols aimed at minimizing bias in the resulting data set. For example, a search of a single database, with pre-defined focal questions, search strings, and inclusion/exclusion criteria, can be considered systematic, negating some bias,

though not necessarily being comprehensive. It is notable that a comprehensive search is preferable but not necessary (and often very difficult to achieve), whereas a systematic search is a must [24].
For most meta-analyses in medicine and the social sciences, the search steps are systematic and well documented for reproducibility. This is because these studies follow a protocol named the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [25, 26]; note that a meta-analysis should usually be a part of a systematic review, although a systematic review may or may not include a meta-analysis. The PRISMA statement facilitates transparency in reporting meta-analytic studies. Although it was developed for the health sciences, we believe that the details of the four key elements of the PRISMA flow diagram ('identification', 'screening', 'eligibility', and 'included') should also be reported in a biological meta-analysis [8]. Figure 2 shows: A) the key ideas of the PRISMA statement, which the reader should compare with the content of a biological meta-analysis; and B) an example of a PRISMA diagram, which should be included as part of meta-analysis documentation. The bottom line is that one should assess whether search and screening procedures are reproducible and systematic (if not comprehensive; to minimize potential bias), given what is described in the meta-analytic paper [27, 28].

Q2: What question and what effect size?
A meta-analysis should not just be descriptive. The best meta-analyses ask questions or test hypotheses, as is the case with original research. The meta-analytic questions and hypotheses addressed will generally determine the types of effect size statistics the authors use [29–32], as we explain below. The three broad groups of effect size statistics are based on: 1) the difference between the means of two groups (for example, control versus treatment); 2) the relationship, or correlation, between two variables; and 3) the incidence of two outcomes (for example, dead or alive) in two groups (often represented in a 2 by 2 contingency table); see [3, 7] for comprehensive lists of effect size statistics. The corresponding common effect size statistics are: 1) the standardized mean difference (SMD; often referred to as d, Cohen's d, Hedges' d, or Hedges' g) and the natural logarithm (log) of the response ratio (denoted as either lnR or lnRR [33]); 2) Fisher's z-transformed correlation coefficient (often denoted as Zr); and 3) the natural logarithm of the odds ratio (lnOR) and of the relative risk (lnRR; not to be confused with the response ratio).
We have also used and developed methods associated with less common effect size statistics, such as the log hazard ratio (lnHR) for comparing survival curves [34–37], and the log coefficient of variation ratio (lnCVR) for comparing differences between the variances, rather than the means, of two groups [38–40]. It is important to assess whether a study used an appropriate effect size statistic for the focal question. For example, when the authors are interested in the effect of a certain treatment, they should typically use SMD or the response ratio, rather than Zr. Most biological meta-analyses will use one of the standardized effect sizes mentioned above. These effect sizes are referred to as standardized because they are unit-less (dimension-less), and thus are comparable across studies, even if those studies use different units for reporting (for example, size can be measured by weight [g] or length [cm]). However, unstandardized effect sizes (raw mean differences or regression coefficients) can be used, as happens in the medical and social sciences, when all studies use common and directly comparable units (for example, blood pressure [mmHg]).
That being said, a biological meta-analysis will often bring together original studies of different types (such as combinations of experimental and observational studies). As a general rule, SMD is considered a better fit for experimental studies, whereas Zr is better for observational (correlational) studies. In some cases different effect sizes might be calculated for different studies in a meta-analysis and then be converted to a common type prior to analysis: for example, Zr and SMD (and also lnOR) are inter-convertible. Thus, if we were, for example, interested in the effect of temperature on growth, we could combine results from experimental studies that compare mean growth at two temperatures (SMD) with results from observational studies that compare growth across a temperature gradient (Zr) in a single meta-analysis by transforming the SMD from the experimental studies to Zr [29–32].

Q3: Is non-independence taken into account?
Statistical non-independence occurs when data points (in this case, effect sizes) are somewhat related to each other. For example, multiple effect sizes may be taken from a single study, making such effect sizes correlated. Failing to account for non-independence among effect sizes (or data points) can lead to erroneous conclusions [14, 41–44], typically an invalid conclusion of statistical significance (type I error; also see Q7). Many authors do not correct for non-independence (see [15]). There are two main reasons for this: the authors may be unaware of non-independence among effect sizes, or they may have difficulty in appropriately accounting for the correlated structure despite being aware of the problem.
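One common form of this dependence, several effect sizes drawn from the same study, can be encoded explicitly rather than ignored. The following is a minimal illustrative sketch (ours, not the authors' code; the effect sizes, variances, study labels, and the assumed within-study correlation are all invented) of how correlated effects can be given a block-structured sampling variance-covariance matrix, which is then used to estimate the overall effect by generalized least squares:

```python
import numpy as np

# Hypothetical data: five effect sizes from three studies; study 3
# contributes three estimates that share subjects and are correlated.
y = np.array([0.30, 0.10, 0.40, 0.50, 0.35])   # effect sizes
v = np.array([0.05, 0.08, 0.04, 0.06, 0.05])   # sampling variances
study = np.array([1, 2, 3, 3, 3])              # study membership
rho = 0.5  # assumed correlation between effects within a study

# Build the sampling (co)variance matrix V: the diagonal holds v_i;
# effects from the same study get covariance rho * sqrt(v_i * v_j),
# while effects from different studies remain uncorrelated (0).
k = len(y)
V = np.diag(v)
for i in range(k):
    for j in range(k):
        if i != j and study[i] == study[j]:
            V[i, j] = rho * np.sqrt(v[i] * v[j])

# Generalized least squares estimate of the overall effect b0; the
# correlated cluster from study 3 is down-weighted instead of being
# counted as three fully independent data points.
ones = np.ones(k)
Vinv = np.linalg.inv(V)
b0 = (ones @ Vinv @ y) / (ones @ Vinv @ ones)
se = np.sqrt(1.0 / (ones @ Vinv @ ones))
```

If the five estimates were instead treated as independent (a diagonal V), the overall standard error would be understated; building V explicitly is, conceptually, what packages such as metafor do when such dependence is modeled.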

To help the reader to detect non-independence where the authors have failed to take it into account, we have illustrated four common types of dependent effect sizes in Fig. 3, with the legend including a biological example for each type. Phylogenetic relatedness (Fig. 3d) is unique to biological meta-analyses that include multiple species [14, 42, 45]. Correction for phylogenetic non-independence can now be implemented in several mainstream software packages, including metafor [46].

Fig. 3. Common sources of non-independence in biological meta-analyses. a–d Hypothetical examples of the four most common scenarios of non-independence. Orange lines and arrows indicate correlations between effect sizes. Each effect size estimate (gray boxes, 'ES') is the ratio of (or difference between) the means of two groups (control versus treatment). Scenarios a, b, and d may apply to other types of effect sizes (e.g., correlation), while scenario c is unique to situations where two or more groups are compared to one control group. a Multiple effect sizes can be calculated from a single study. Effect sizes in study 3 are not independent of each other because the effects (ES3 and ES4) are derived from two experiments using samples from the same population. For example, a study exposed females and males to increased temperatures, and the results are reported separately for the two sexes. b Effect sizes taken from the same study (study 3) are derived from different traits measured on the same subjects, resulting in correlations among these effect sizes. For example, body mass and body length are both indicators of body size, with studies 1 and 2 reporting just one of these measurements and study 3 reporting both for the same group of individuals. c Effect sizes can be correlated via contrast with a common 'control' group of individuals; for example, both effect sizes from study 3 share a common control treatment. A study may, for example, compare a balanced diet (control) with two levels of a protein-enriched diet. d In a multi-species study, effect sizes can be correlated when they are based on data from organisms from the same taxonomic unit, due to shared evolutionary history. Effect sizes taken from studies 3 and 4 are not independent, because these studies were performed on the same species (Sp. 3). Additionally, all species share a phylogenetic history, and thus all effect sizes can be correlated with one another in accordance with the time since evolutionary divergence between species.

Where non-independence goes uncorrected because of the difficulty of appropriately accounting for the correlated structure, it is usually because the non-independence is incompatible with the two traditional meta-analytic models (the fixed-effect and the random-effects models; see Q4) that are implemented in widely used software (for example, Metawin [47]). Therefore, it was (and still is) common to see averaging of non-independent effect sizes or the selection of one among several related effect sizes. These solutions are not necessarily incorrect (see [48]), but they may be limiting, and they clearly lead to a loss of information [14, 49]. The reader should be aware that it is preferable to model non-independence directly by using multilevel meta-analytic models (see Q4) if the dataset contains a sufficient number of studies (complex models usually require a large sample size) [14].

Fig. 2. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). a The main components of a systematic review or meta-analysis. The data search (identification) stage should, ideally, be preceded by the development of a detailed study protocol and its preregistration. Searching at least two literature databases, along with other sources of published and unpublished studies (using backward and forward citations, reviews, field experts, own data, and grey and non-English literature), is recommended. It is also necessary to report search dates and exact keyword strings. The screening and eligibility stage should be based on a set of predefined study inclusion and exclusion criteria. Criteria might differ for the initial screening (title, abstract) compared with the full-text screening, but both need to be reported in detail. It is good practice to have at least two people involved in screening, with a plan in place for disagreement resolution and for calculating disagreement rates. It is recommended that the list of studies excluded at the full-text screening stage, with reasons for their exclusion, is reported. It is also necessary to include a full list of studies included in the final dataset, with their basic characteristics. The extraction and coding (included) stage may also be performed by at least two people (as is recommended in medical meta-analysis). The authors should record the figures, tables, or text fragments within each paper from which the data were extracted, as well as report intermediate calculations, transformations, simplifications, and assumptions made during data extraction. These details make tracing mistakes easier and improve reproducibility. Documentation should include: a summary of the dataset, information on data and study details requested from authors, details of software used, and code for analyses (if applicable). b It is now becoming compulsory to present a PRISMA diagram, which records the flow of information starting from the data search and leading to the final data set. WoS, Web of Science.

Q4: Which meta-analytic model?
There are three main kinds of meta-analytic models, which differ in their assumptions about the data being analyzed, but for all three the common and primary goal is to estimate an overall effect (but see Q5). These models are: i) fixed-effect models (also referred to as common-effect models [31]); ii) random-effects models [50]; and iii) multilevel (hierarchical) models [14, 49]. We have depicted these three kinds of models in Fig. 4. When assessing a meta-analysis, the reader should be

aware of the different assumptions each model makes. For the fixed-effect (Fig. 4a) and random-effects (Fig. 4b) models, all effect sizes are assumed to be independent (that is, one effect per study, with no other sources of non-independence; see Q3). The other major assumption of a fixed-effect model is that all effect sizes share a common mean, and thus that variation among data is solely attributable to sampling error (that is, the sampling variance, vi, which is related to the sample size for each effect size; Fig. 4a). This assumption, however, is unrealistic for most biological meta-analyses (see [22]), especially those involving multiple populations, species, and/or ecosystems [14, 51]. The use of a fixed-effect model could be justified where the effect sizes are obtained from the same species or population (assuming one effect per study and that the effect sizes are independent of each other). Random-effects models relax the assumption that all studies are based on samples from the same underlying population, meaning that these models can be used when different studies are likely to quantify different underlying mean effects (for example, one study design yields a different effect than another), as is likely to be the case for a biological meta-analysis (Fig. 4b). A random-effects model needs to quantify the between-study variance, τ2, and estimating this variance correctly requires a sample size of perhaps over ten effect sizes. Thus, random-effects models may not be appropriate for a meta-analysis with very few effect sizes, and fixed-effect models may be appropriate in such situations (bearing in mind the aforementioned assumptions). Multilevel models relax the assumptions of independence made by fixed-effect and random-effects models; that is, for example, these models allow for multiple effect sizes to come from the same study, which may be the case if one study contains several different experimental treatments, or the same experimental treatment is applied across species within one study. The simplest multilevel model, depicted in Fig. 4c, includes study effects, but it is probably not difficult to imagine this multilevel approach being extended to incorporate more 'levels', such as species effects, as well (for more details see [13, 14, 41, 45, 49, 51–54]; incorporating the types of non-independence described in Fig. 3b–d requires modeling of correlation and covariance matrices).

Fig. 4. Visualizations of the three main types of meta-analytic models and their assumptions. a The fixed-effect model can be written as yi = b0 + ei, where yi is the observed effect for the ith study (i = 1…k; orange circles), b0 is the overall effect (overall mean; thick grey line and black diamond) for all k studies, and ei is the deviation from b0 for the ith study (dashed orange lines); ei is distributed with the sampling variance vi (orange curves). Note that this variance is sometimes called the within-study variance in the literature, but we reserve this term for the multilevel model below. b The random-effects model can be written as yi = b0 + si + ei, where b0 is the overall mean for different studies, each of which has a different study-specific mean (green squares and green solid lines), deviating by si (green dashed lines) from b0; si is distributed with a variance of τ2 (the between-study variance; green curves). Note that this is the conventional notation for the between-study variance, but in a biological meta-analysis it can be referred to as, say, σ2[study]. The other notation is as above. Displayed at the top right of the panel is the formula for the heterogeneity statistic for the random-effects model, I2 = τ2/(τ2 + v), where v is a typical sampling variance (perhaps most easily conceptualized as the average value of the sampling variances, vi). c The simplest multilevel model can be written as yij = b0 + si + uij + eij, where uij is the deviation from si for the jth effect size of the ith study (blue triangles and dashed blue lines) and is distributed with the variance σ2 (the within-study variance, which may also be denoted σ2[effect size]; blue curves), eij is the deviation from uij, and the other notation is as above. Each of the k studies has m effect sizes (j = 1…m). Displayed at the top right is the multilevel meta-analysis formula for the heterogeneity statistic, in which both the numerator and denominator include the within-study variance, σ2, in addition to what appears in the formula for the random-effects model.

It is important for you, as the reader, to check whether the authors, given their data, employed an appropriate model or set of models (see Q3), because results from inappropriate models could lead to erroneous conclusions. For example, applying a fixed-effect model when a random-effects model is more appropriate may lead to errors in both the estimated magnitude of the overall effect and its uncertainty [55]. As can be seen from Fig. 4, each of the three main meta-analytic models assumes that effect sizes are distributed around an overall effect (b0). The reader should also be aware that this estimated overall effect (the meta-analytic mean) is most commonly presented in an accompanying forest plot(s) [22, 56, 57].
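The quantities involved can be made concrete in a few lines of code. The sketch below (Python with NumPy; the summary statistics are invented, and the estimators shown are the textbook inverse-variance and DerSimonian-Laird ones, not code from any particular meta-analysis) computes lnRR effect sizes from group means (Q2), pools them under both fixed-effect and random-effects models (Q4), and reports the heterogeneity statistics Q, τ2, and I2 that Q5 discusses below:

```python
import numpy as np

# Illustrative summary data from k = 4 hypothetical studies:
# treatment and control means, SDs, and sample sizes.
m_t = np.array([10.2, 11.5,  9.8, 12.0])
m_c = np.array([ 8.9,  9.1,  9.5, 10.1])
sd_t = np.array([2.1, 2.8, 1.9, 3.0])
sd_c = np.array([1.8, 2.5, 2.0, 2.7])
n_t = np.array([12, 20, 15, 18])
n_c = np.array([12, 18, 15, 20])

# Q2: log response ratio (lnRR) and its sampling variance v_i
# (the standard delta-method approximation).
y = np.log(m_t / m_c)
v = sd_t**2 / (n_t * m_t**2) + sd_c**2 / (n_c * m_c**2)
k = len(y)

# Q4, fixed-effect model: inverse-variance weighted mean.
w = 1.0 / v
b0_fe = np.sum(w * y) / np.sum(w)

# Q5: Cochran's Q, then the DerSimonian-Laird estimate of tau^2
# (truncated at zero).
Q = np.sum(w * (y - b0_fe) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Q4, random-effects model: the weights now incorporate tau^2.
w_re = 1.0 / (v + tau2)
b0_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# Q5: I^2 as in Fig. 4, using the mean sampling variance as the
# 'typical' v: heterogeneity as a proportion of total variance.
i2 = tau2 / (tau2 + np.mean(v))

print(f"lnRR pooled (RE) = {b0_re:.3f} +/- {1.96 * se_re:.3f}")
print(f"Q = {Q:.2f}, tau^2 = {tau2:.4f}, I^2 = {i2:.1%}")
```

Dedicated software such as metafor implements these estimators (and better ones, such as REML) together with standard errors, tests, and diagnostics; the point here is only that the fixed-effect and random-effects means differ in exactly one respect, namely whether τ2 enters the weights.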

Figure 5a is a forest plot of the kind that is typically seen meta-analysis under consideration. The quantification and
in medical and social sciences, with both overall means reporting of heterogeneity statistics is essential for any
from the fixed-effect or the common effect meta-analysis meta-analysis, and you need to make sure some or
(FEMA/CEMA) model, and the random-effects meta- combinations of these three statistics are reported in a
analysis (REMA) model. In a multiple-species meta-analysis, meta-analysis before making generalisations based on the
you may see an elaborate forest plot such as that in Fig. 5b. overall mean effect (except when using fixed-effect models).

Q5: Is the level of consistency among studies Q6: Are the causes of variation among studies
reported? investigated?
The overall effect reported by a meta-analysis cannot be After quantifying variation among effect sizes beyond
properly interpreted without an analysis of the heterogen- sampling variation (I2 ), it is important to understand the
eity, or inconsistency, among effect sizes. For example, an factors, or moderators, that might explain this additional
overall mean of zero can be achieved when effect sizes are variation, because it can elucidate important processes
all zero (homogenous; that is, the between-study variance mediating variation in the strength of effect. Moderators
is 0) or when all effect sizes are very different (heteroge- are equivalent to explanatory (independent) variables or
neous; the between study variance is >0) but centered on predictors in a normal linear model [8, 49, 62]. For ex-
zero, and clearly one should draw different conclusions in ample, in a meta-analysis examining the effect of experi-
each case. Rather disturbingly, we have recently found that mentally increased temperature on growth using SMD
in ecology and evolutionary biology, tests of heterogeneity (control versus treatment comparison) studies might vary
and their corresponding statistics (τ2, Q, and I2) are only in the magnitude of temperature increase: say 10 versus
reported in about 40% of meta-analyses [58]. Cochran’s Q 20 °C in the first study, but 12 versus 16 °C in the second.
(often referred to as Qtotal or QT) is a test statistic for the In this case, the moderator of interest is the temperature
between-study variance (τ2), which allows one to assess difference between control and treatment groups (10 °C
whether the estimated between-study variance is non-zero for the first study and 4 °C for the second). This difference
(in other words, whether a fixed-effect model is appropri- in study design may explain variation in the magnitude of
ate as this model assumes τ2 = 0) [59]. As a test statistic, Q the observed effect sizes (that is, the SMD of growth at
is often presented with a corresponding p value, which is the two temperatures). Models that examine the effects of
interpreted in the conventional manner. However, if pre- moderators are referred to as meta-regressions. One im-
sented without the associated τ2, Q can be misleading be- portant thing to note is that meta-regression is just a spe-
cause, as is the case with most statistical tests, Q is more cial type of weighted regression. Therefore, the usual
likely to be significant when more studies are included standard practices for regression analysis also apply to
even if τ2 is relatively small (see also Q7); the reader meta-regression. This means that, as a reader, you may
should therefore check whether both statistics are pre- want to check for the inclusion of too many predictors/
sented. Having said that, the magnitude of the between- moderators in a single model, or ‘over-fitting’ (the rule of
study variance (τ2) can be hard to interpret because it is thumb is that the authors may need at least ten effect sizes
dependent on the scale of the effect size. The heterogen- per estimated moderator) [64], and for ‘fishing expedi-
eity statistic, I2, which is a type of intra-class correlation, tions’ (also known as ‘data dredging’ or ‘p hacking’; that is,
has also been recommended as it addresses some of the is- non-hypothesis-based exploration for statistical signifi-
sues associated with Q and τ2 [60, 61]. I2 ranges from 0 to cance [28, 65, 66]).
1 (or 0 to 100%) and indicates how much of the variation Moderators can be correlated with each other (that is,
in effect sizes is due to the between-study variance (τ2; be subject to the multicollinearity problem) and this de-
Fig. 4b) or, more generally, the proportion of variance not pendence, in turn, could lead authors to attribute an effect
attributable to sampling (error) variance (v ; see Fig. 4b, c; to the wrong moderator [67]. For example, in the afore-
for more details and extensions, see [13, 14, 49, 58]). Ten- mentioned meta-analysis of temperature on growth, the
tatively suggested benchmarks for I2 are low, medium, and study may claim that females grew faster than males when
high heterogeneity of 25, 50, and 75% [61]. These values exposed to increased temperatures. However, if most fe-
are often used in meta-analyses in medical and social sci- males came from studies where higher temperature in-
ences for interpreting the degree of heterogeneity [62, 63]. creases were used but males were usually exposed to small
However, we have shown that the average I2 in meta- increases, the moderators for sex and temperature would
analyses in ecology and evolution may be as high as 92%, be confounded. Accordingly, the effect may be due to the
which may not be surprising as these meta-analyses are severity of the temperature change rather than a sex effect.
not confined to a single species (or human subjects) [58]. Readers should check whether the authors have examined
Accordingly, the reader should consider whether these potential confounding effects of moderators and reported
conventional benchmarks are applicable to the biological how different potential moderators are related to one
Nakagawa et al. BMC Biology (2017) 15:18 Page 8 of 14

a b

c d

Fig. 5. Examples of forest plots used in a biological meta-analysis to represent effect sizes and their associated precisions. a A conventional forest plot displaying the magnitude and uncertainty (95% confidence interval, CI) of each effect size in the dataset, as well as reporting the associated numerical values and a reference to the original paper. The sizes of the shapes representing point estimates are usually scaled based on their precision (1/standard error). Diamonds at the bottom of the plot display the estimated overall mean based on both fixed-effect/'common-effect' meta-analysis (FEMA/CEMA) and random-effects meta-analysis (REMA) models. b A forest plot that has been augmented to display the phylogenetic relationships between the different taxa in the analysis; the estimated d seems on average to be higher in some clades than in others. A diamond at the bottom summarizes the aggregate mean as estimated by a multi-level meta-analysis accounting for the given phylogenetic structure. On the right is the number of effect sizes for each species (k); similarly, one could also display the number of individuals/sample size (n) where only one effect size per species is included. c As well as displaying the overall effect (diamond), forest plots are sometimes used to display the mean effects from different sub-groups of the data (e.g., effects separated by sex or treatment type), as estimated with data sub-setting or meta-regression, or even a slope from meta-regression (indicating how an effect changes with an increasing continuous variable, e.g., dosage). d Different magnitudes of the correlation coefficient (r), with associated 95% CIs, p values, and the sample size on which each estimate is based. The space is shaded according to effect magnitude based on established guidelines; light grey, medium grey, and dark grey correspond to small, medium, and large effects, respectively.
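The two pooled means summarized by the diamonds in a forest plot like Fig. 5a can be sketched in a few lines. This is a minimal, illustrative Python implementation with invented data (real analyses would use dedicated software such as R's metafor [46]): the fixed-effect/common-effect mean weights each effect size by its inverse sampling variance, while the random-effects mean adds the DerSimonian-Laird moment estimate of the between-study variance (tau^2) to each weight.

```python
def fixed_effect_mean(effects, ses):
    """Common-effect (FEMA/CEMA) pooled mean: inverse-variance weights."""
    w = [1.0 / se ** 2 for se in ses]
    return sum(wi * es for wi, es in zip(w, effects)) / sum(w)

def dl_tau2(effects, ses):
    """DerSimonian-Laird moment estimator of between-study variance (tau^2)."""
    w = [1.0 / se ** 2 for se in ses]
    mean = fixed_effect_mean(effects, ses)
    q = sum(wi * (es - mean) ** 2 for wi, es in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)

def random_effects_mean(effects, ses):
    """REMA pooled mean: weights incorporate tau^2, so precise studies
    dominate less than they do under the common-effect model."""
    tau2 = dl_tau2(effects, ses)
    w = [1.0 / (se ** 2 + tau2) for se in ses]
    return sum(wi * es for wi, es in zip(w, effects)) / sum(w)

# Invented standardized mean differences (d) and their standard errors
d_vals = [0.2, 0.5, 0.8, 0.1, 0.6]
se_vals = [0.10, 0.15, 0.20, 0.12, 0.25]
print(round(fixed_effect_mean(d_vals, se_vals), 3),
      round(random_effects_mean(d_vals, se_vals), 3))  # → 0.309 0.394
```

Note how the two estimates diverge when heterogeneity is present, which is why forest plots often show both diamonds, as in Fig. 5a.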
another. It is also important to know the sources of the moderator data; for example, species-specific data can be obtained from sources (papers, books, databases) other than the primary studies from which effect sizes were taken (Q1). Meta-regression results can be presented in a forest plot, as in Fig. 5c (see also Q6 and Fig. 6e, f; standardization of moderators may often be required when analyzing them [68]).

Another way of exploring heterogeneity is to run separate meta-analyses on data subsets (for example, separating effect sizes by the sex of exposed animals). This is similar to running a meta-regression with categorical moderators (often referred to as subgroup analysis), with the key difference being that the authors can obtain heterogeneity statistics (such as I2) for each subset in a subset analysis [69]. It is important to note that many meta-analytic studies include more than one meta-analysis, because several different types of data are included even though these data pertain to one topic (for example, the effect of increased temperature not only on body growth but also on parasite load). You, as a reader, will need to evaluate whether the authors' sub-grouping or sub-setting of their data makes sense biologically; hopefully the authors will have provided clear justification (Q1).

Q7: Are effects interpreted in terms of biological importance?
Meta-analyses should focus on biological importance (which is reflected in estimated effects and their uncertainties) rather than on p values and statistical significance, as is outlined in Fig. 5d [29, 70–72]. It should be clear to most readers that interpreting results only in terms of statistical significance (p values) can be misleading. For example, in terms of their magnitudes and uncertainties, ES4 and ES6 in Fig. 5d are nearly identical, yet ES4 is statistically significant while ES6 is not. Also, ES1–3 are all what people describe as 'highly significant', but their magnitudes of effect, and thus their biological relevance, are very different. The term 'effective thinking' refers to the philosophy of placing emphasis on the interpretation of the overall effect size in terms of biological importance rather than statistical significance [29]. It is useful for the reader to know that each of ES1–3 in Fig. 5d can be classified as what Jacob Cohen proposed as small, medium, and large effects, which are r = 0.1, 0.3, and 0.5, respectively [73]; for SMD, the corresponding benchmarks are d (SMD) = 0.2, 0.5, and 0.8 [29, 61]. Researchers may have good intuition for the biological relevance of a particular r value, but this may not be the case for SMD; thus, it may be helpful to know that Cohen's benchmarks for r and d are comparable. Having said that, these benchmarks, along with those for I2, have to be used carefully, because what constitutes a biologically important effect magnitude can vary according to the biological question and system (for example, a 1% difference in fitness would not matter in ecological time but it certainly does over evolutionary time). We stress that authors should primarily be discussing their effect sizes (point estimates) and uncertainties in terms of interval estimates (confidence intervals, or credible intervals, CIs) [29, 70, 72]. Meta-analysts can certainly note statistical significance, which is related to CI width, but a direct description of precision may be more useful. Note that effect magnitude and precision are exactly what are displayed in forest plots (Fig. 5).

Q8: Has publication bias been considered?
Meta-analysts have to assume that research is published regardless of statistical significance, and that authors have not selectively reported results (that is, that there is no publication bias and no reporting bias) [74–76]. This is unlikely. Therefore, meta-analysts should check for publication bias using statistical and graphical tools. The reader should know that the commonly used methods for assessing publication bias are funnel plots (Fig. 6a, b), radial (Galbraith) plots (Fig. 6c), and Egger's (regression) tests [57, 77, 78]; these methods visually or statistically (Egger's test) help to detect funnel asymmetry, which can be caused by publication bias [79]. However, you should also know that funnel asymmetry may be an artifact of too few effect sizes. Further, funnel asymmetry can result from heterogeneity (non-zero between-study variance, τ2) [77, 80]. Some readily implementable methods for correcting for publication bias also exist, such as trim-and-fill methods [81, 82] or the use of the p curve [83]. The reader should be aware that these methods have shortcomings; for example, the trim-and-fill method can under- or overestimate an overall effect size, while the p curve probably only works when effect sizes come from tightly controlled experiments [83–86] (see Q9; note that 'selection modeling' is an alternative approach, but it is more technically difficult [79]). A less contentious topic in this area is time-lag bias, where the magnitudes of an effect diminish over time [87–89]. This bias can be easily tested with a cumulative meta-analysis and visualized using a forest plot [90, 91] (Fig. 6d) or a bubble plot combined with meta-regression (Fig. 6e; note that journal impact factor can also be associated with the magnitudes of effect sizes [92], Fig. 6f).

Alarmingly, meta-reviews have found that only half of meta-analyses in ecology and evolution assessed publication bias [14, 15]. Disappointingly, there are no perfect solutions for detecting and correcting for publication bias, because we never really know with certainty what kinds of data are actually missing (although usually statistically non-significant and small effect sizes are under-represented in the dataset; see also Q9). Regardless, the existing tools should still be used, and the presentation of results from at least two different methods is recommended.

Fig. 6. Graphical assessment tools for testing for publication bias. a A funnel plot showing greater variance among effects that have larger standard errors (SE) and that are thus more susceptible to sampling variability. Some studies in the lower right corner of the plot, opposite to most major findings, with large SE (less likely to detect significant results) are potentially missing (not shown), suggesting publication bias. b Often funnel plots are depicted using precision (1/SE), giving a different perspective on publication bias, where studies with low precision (or large SE) are expected to show greater sampling variability compared to studies with high precision (or low SE). Note that the data in panel b are the same as in panel a, except that a trim-and-fill analysis has been performed in b. A trim-and-fill analysis estimates the number of studies missing from the meta-analysis and creates 'mirrored' studies on the opposite side of the funnel (unfilled dots) to estimate how the overall effect size estimate is impacted by these missing studies. c Radial (Galbraith) plot in which the slope should be close to zero if little publication bias exists, indicating little asymmetry in a corresponding funnel plot (compare it with b); radial plots are closely associated with Egger's tests. d Cumulative meta-analysis showing how the effect size changes as the number of studies on a particular topic increases. In this situation, the addition of effect size estimates led to convergence on an overall estimate of 0.36, and the confidence intervals decrease as the precision of the estimate increases. e Bubble plot showing a temporal trend in effect size (Zr) across years. Here effect sizes are weighted by their precision; larger bubbles indicate more precise estimates and smaller bubbles less precise ones. f Bubble plot of the relationship between effect size and the impact factors of journals, indicating that larger magnitudes of effect sizes (the absolute values of Zr) tend to be published in higher impact journals.
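As an illustration of the regression-based asymmetry tests mentioned under Q8, here is a deliberately simplified Python sketch of the core of Egger's test [77]: the standardized effect size (effect/SE) is regressed on precision (1/SE), and an intercept far from zero flags funnel asymmetry. The data are invented, and a real analysis would also compute a standard error and p value for the intercept (for example, via regtest in R's metafor [46]).

```python
def egger_intercept(effects, ses):
    """Core of Egger's regression [77]: regress the standardized effect
    (effect/SE) on precision (1/SE) by ordinary least squares and return
    the intercept; values far from zero suggest funnel asymmetry."""
    y = [es / se for es, se in zip(effects, ses)]
    x = [1.0 / se for se in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx        # roughly the pooled effect (cf. radial plots)
    return my - slope * mx   # the asymmetry diagnostic

# Invented symmetric 'funnel': every study estimates the same effect
print(round(egger_intercept([0.3, 0.3, 0.3], [0.1, 0.2, 0.3]), 6))  # ~0

# Invented asymmetric case: small, imprecise studies report larger effects
print(round(egger_intercept([0.2, 0.3, 0.6, 0.9], [0.05, 0.1, 0.3, 0.5]), 2))  # → 1.62
```

The positive intercept in the second, asymmetric example reflects exactly the pattern a funnel plot would show: the least precise studies report inflated effects, consistent with missing small, non-significant studies.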
Q9: Are results really robust and unbiased?
Although meta-analyses from the medical and social sciences are often accompanied by sensitivity analysis [69, 93], biological meta-analyses are often devoid of such tests. Sensitivity analyses include not only running meta-analysis and meta-regression without influential effect sizes or studies (for example, many effect sizes that come from one study, or one clear outlier effect size; sometimes also termed 'subset analysis'), but also, for example, comparing meta-analytic models with and without modeling non-independence (Q3–5), or other alternative analyses [44, 93]. Analyses related to publication bias could generally also be regarded as part of a sensitivity analysis (Q8). In addition, it is worthwhile checking whether the authors discuss missing data [94, 95] (different from publication bias; Q8). Two major cases of missing data in meta-analysis are: 1) a lack of the information required to obtain sampling variances for a portion of the dataset (for example, missing standard deviations); and 2) missing information for moderators [96] (for example, most studies report the sex of the animals used but a few studies do not). For the former, the authors should run models both with and without the effect sizes that lack sampling variance information; note that without sampling variances (that is, unweighted meta-analysis) the analysis becomes a normal linear model [21]. For both cases 1 and 2, the authors could use data imputation techniques (as yet, this is not standard practice). Although data imputation methods are rather technical, their implementation is becoming easier [96–98]. Furthermore, it may often be important to consider the sample size (the number and precision of constituent effect sizes) and statistical power of a meta-analysis. One of the main reasons to conduct meta-analysis is to increase statistical power. However, where an overall effect is expected to be small (as is often the case with biological phenomena), it is possible that a meta-analysis may be underpowered [99–101].

Q10: Is the current state (and lack) of knowledge summarized?
In the discussion of a meta-analysis, it is reasonable to expect the authors to discuss what conventional wisdom the meta-analysis has confirmed or refuted and what new insights the meta-analysis has revealed [8, 19, 71, 100]. New insights from meta-analyses are known as 'review-generated evidence' (as opposed to 'study-generated evidence') [18] because only the aggregation of studies can generate such insights. This is analogous to comparative analyses bringing biologists a novel understanding of a topic that would be impossible to obtain from studying a single species in isolation [14]. Because meta-analysis brings available (published) studies together in a systematic and/or comprehensive way (but see Q1), the authors can also summarize less quantitative themes along with the meta-analytic results. For example, the authors could point out what types of primary studies are lacking (that is, identify knowledge gaps). Also, the study should provide clear future directions for the topic under investigation [8, 19, 71, 100]; for example, what types of empirical work are required to push the topic forward. An obvious caveat is that the value of these new insights, knowledge gaps, and future directions is contingent upon the answers to the previous nine questions (Q1–9).

Post meta-evaluation: more to think about
Given that we are advocates of meta-analysis, we are certainly biased in saying 'meta-analyses are enlightening'. A more nuanced interpretation of what we really mean is that meta-analyses are enlightening when they are done well. Mary Smith and Gene Glass published the first research synthesis carrying the label of 'meta-analysis' in 1977 [102]. At the time,
their study and the general concept were ridiculed with the term 'mega-silliness' [103] (see also [16, 17]). Although the results of this first meta-analysis on the efficacy of psychotherapies still stand strong, it is possible that a meta-analysis contains many mistakes. In a similar vein, Robert Whittaker warned that the careless use of meta-analyses could lead to 'mega-mistakes', reinforcing his case by drawing upon examples from ecology [104, 105].

Even where a meta-analysis is conducted well, a future meta-analysis can sometimes yield a completely opposing conclusion from the original (see [106] for examples from medicine and the reasons why). Thus, medical and social scientists are aware that updating meta-analyses is extremely important, especially given that time-lag bias is a common phenomenon [87–89]. Although updating is still rare in biological meta-analyses [8], we believe this should become part of the research culture in the biological sciences. We appreciate the view of John Ioannidis, who wrote, "Eventually, all research [both primary and meta-analytic] can be seen as a large, ongoing, cumulative meta-analysis" [106] (cf. effective thinking; Fig. 6d).

Finally, we have to note that we have just scratched the surface of the enormous subject of meta-analysis. For example, we did not cover other relevant topics such as multilevel (hierarchical) meta-analytic and meta-regression models [14, 45, 49], which allow more complex sources of non-independence to be modeled, as well as multivariate (multi-response) meta-analyses [107] and network meta-analyses [108]. Many of the ten appraisal questions above, however, are also relevant for these extended methods. More importantly, we believe that asking the ten questions above will readily equip biologists with the knowledge necessary to differentiate among mega-enlightenment, mega-mistakes, and something in-between.

Acknowledgements
We are grateful for comments on our article from the members of I-DEEL. We also thank John Brookfield, one anonymous referee, and the BMC Biology editorial team for comments, which significantly improved our article. SN acknowledges an ARC (Australian Research Council) Future Fellowship (FT130100268); DWAN is supported by an ARC Discovery Early Career Research Award (DE150101774) and a UNSW Vice-Chancellor's Fellowship. AMS is supported by a Judith and David Coffey Fellowship from the University of Sydney.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Evolution & Ecology Research Centre and School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. 2 Diabetes and Metabolism Division, Garvan Institute of Medical Research, 384 Victoria Street, Darlinghurst, Sydney, NSW 2010, Australia. 3 Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia. 4 School of Mathematics and Statistics, University of Sydney, Sydney, NSW 2006, Australia.

References
1. Glass GV. Primary, secondary, and meta-analysis research. Educ Res. 1976;5:3–8.
2. Glass GV. Meta-analysis at middle age: a personal history. Res Synth Methods. 2015;6(3):221–31.
3. Cooper H, Hedges LV, Valentine JC. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009.
4. Hedges L, Olkin I. Statistical methods for meta-analysis. New York: Academic Press; 1985.
5. Egger M, Smith GD, Altman DG. Systematic reviews in health care: meta-analysis in context. 2nd ed. London: BMJ; 2001.
6. Arnqvist G, Wooster D. Meta-analysis: synthesizing research findings in ecology and evolution. Trends Ecol Evol. 1995;10:236–40.
7. Koricheva J, Gurevitch J, Mengersen K. Handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013.
8. Nakagawa S, Poulin R. Meta-analytic insights into evolutionary ecology: an introduction and synthesis. Evol Ecol. 2012;26:1085–99.
9. van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O'Collins V, Macleod MR. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7(3):e1000245.
10. Stewart G. Meta-analysis in applied ecology. Biol Lett. 2010;6(1):78–81.
11. Stewart GB, Schmid CH. Lessons from meta-analysis in ecology and evolution: the need for trans-disciplinary evidence synthesis methodologies. Res Synth Methods. 2015;6(2):109–10.
12. Lortie CJ, Stewart G, Rothstein H, Lau J. How to critically read ecological meta-analyses. Res Synth Methods. 2015;6(2):124–33.
13. Nakagawa S, Kubo T. Statistical models for meta-analysis in ecology and evolution (in Japanese). Proc Inst Stat Math. 2016;64(1):105–21.
14. Nakagawa S, Santos ESA. Methodological issues and advances in biological meta-analysis. Evol Ecol. 2012;26:1253–74.
15. Koricheva J, Gurevitch J. Uses and misuses of meta-analysis in plant ecology. J Ecol. 2014;102:828–44.
16. Page MJ, Moher D. Mass production of systematic reviews and meta-analyses: an exercise in mega-silliness? Milbank Q. 2016;94(5):515–9.
17. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(5):485–514.
18. Cooper HM. Research synthesis and meta-analysis: a step-by-step approach. 4th ed. London: SAGE; 2010.
19. Rothstein HR, Lorite CJ, Stewart GB, Koricheva J, Gurevitch J. Quality standards for research syntheses. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 323–38.
20. Vetter D, Rücker G, Storch I. Meta-analysis: a need for well-defined usage in ecology and conservation biology. Ecosphere. 2013;6:1–24.
21. Morrissey M. Meta-analysis of magnitudes, differences, and variation in evolutionary parameters. J Evol Biol. 2016;29(10):1882–904.
22. Vesterinen HM, Sena ES, Egan KJ, Hirst TC, Churolov L, Currie GL, Antonic A, Howells DW, Macleod MR. Meta-analysis of data from animal studies: a practical guide. J Neurosci Methods. 2014;221:92–102.
23. Mongeon P, Paul-Hus A. The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics. 2016;106(1):213–28.
24. Côté IM, Jennions MD. The procedure of meta-analysis in a nutshell. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 14–24.
25. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6:e1000100. doi:10.1371/journal.pmed.1000100.
26. Moher D, Liberati A, Tetzlaff J, Altman DG, Group P. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Internal Med. 2009;151:264–9.
27. Ellison AM. Repeatability and transparency in ecological research. Ecology. 2010;91(9):2536–9.
28. Parker TH, Forstmeier W, Koricheva J, Fidler F, Hadfield JD, Chee YE, Kelly CD, Gurevitch J, Nakagawa S. Transparency in ecology and evolution: real problems, real solutions. Trends Ecol Evol. 2016;31(9):711–9.
29. Nakagawa S, Cuthill IC. Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol Rev. 2007;82:591–605.
30. Borenstein M. Effect size for continuous data. In: Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009. p. 221–35.
31. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. West Sussex: Wiley; 2009.
32. Fleiss JL, Berlin JA. Effect sizes for dichotomous data. In: Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009. p. 237–53.
33. Hedges LV, Gurevitch J, Curtis PS. The meta-analysis of response ratios in experimental ecology. Ecology. 1999;80(4):1150–6.
34. Hector KL, Lagisz M, Nakagawa S. The effect of resveratrol on longevity across species: a meta-analysis. Biol Lett. 2012. doi:10.1098/rsbl.2012.0316.
35. Lagisz M, Hector KL, Nakagawa S. Life extension after heat shock exposure: assessing meta-analytic evidence for hormesis. Ageing Res Rev. 2013;12(2):653–60.
36. Nakagawa S, Lagisz M, Hector KL, Spencer HG. Comparative and meta-analytic insights into life-extension via dietary restriction. Aging Cell. 2012;11:401–9.
37. Garratt M, Nakagawa S, Simons MJ. Comparative idiosyncrasies in life extension by reduced mTOR signalling and its distinctiveness from dietary restriction. Aging Cell. 2016;15(4):737–43.
38. Nakagawa S, Poulin R, Mengersen K, Reinhold K, Engqvist L, Lagisz M, Senior AM. Meta-analysis of variation: ecological and evolutionary applications and beyond. Methods Ecol Evol. 2015;6(2):143–52.
39. Senior AM, Nakagawa S, Lihoreau M, Simpson SJ, Raubenheimer D. An overlooked consequence of dietary mixing: a varied diet reduces interindividual variance in fitness. Am Nat. 2015;186(5):649–59.
40. Senior AM, Gosby AK, Lu J, Simpson SJ, Raubenheimer D. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight. Evol Med Public Health. 2016;2016(1):244–55.
41. Mengersen K, Jennions MD, Schmid CH. Statistical models for the meta-analysis of non-independent data. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 255–83.
42. Lajeunesse MJ. Meta-analysis and the comparative phylogenetic method. Am Nat. 2009;174(3):369–81.
43. Chamberlain SA, Hovick SM, Dibble CJ, Rasmussen NL, Van Allen BG, Maitner BS. Does phylogeny matter? Assessing the impact of phylogenetic information in ecological meta-analysis. Ecol Lett. 2012;15:627–36.
44. Noble DWA, Lagisz M, O'Dea RE, Nakagawa S. Non-independence and sensitivity analyses in ecological and evolutionary meta-analyses. Mol Ecol. 2017; in press. doi:10.1111/mec.14031.
45. Hadfield J, Nakagawa S. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters. J Evol Biol. 2010;23:494–508.
46. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Software. 2010;36(3):1–48.
47. Rosenberg MS, Adams DC, Gurevitch J. MetaWin: statistical software for meta-analysis. 2nd ed. Sunderland: Sinauer; 2000.
48. Marín-Martínez F, Sánchez-Meca J. Averaging dependent effect sizes in meta-analysis: a cautionary note about procedures. Spanish J Psychol. 1999;2:32–8.
49. Cheung MWL. Modeling dependent effect sizes with three-level meta-analyses: a structural equation modeling approach. Psychol Methods. 2014;19:211–29.
50. Sutton AJ, Higgins JPT. Recent developments in meta-analysis. Stat Med. 2008;27(5):625–50.
51. Mengersen K, Schmid CH, Jennions MD, Gurevitch J. Statistical models and approaches to inference. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 89–107.
52. Lajeunesse MJ. Meta-analysis and the comparative phylogenetic method. Am Nat. 2009;174:369–81.
53. Lajeunesse MJ. On the meta-analysis of response ratios for studies with correlated and multi-group designs. Ecology. 2011;92:2049–55.
54. Lajeunesse MJ, Rosenberg MS, Jennions MD. Phylogenetic nonindependence and meta-analysis. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 284–99.
55. Borenstein M, Hedges LV, Higgins JPT, Rothstein H. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods. 2010;1:97–111.
56. Vetter D, Rucker G, Storch I. Meta-analysis: a need for well-defined usage in ecology and conservation biology. Ecosphere. 2013;4(6):1–24.
57. Anzures-Cabrera J, Higgins JPT. Graphical displays for meta-analysis: an overview with suggestions for practice. Res Synth Methods. 2010;1(1):66–80.
58. Senior AM, Grueber CE, Kamiya T, Lagisz M, O'Dwyer K, Santos ESA, Nakagawa S. Heterogeneity in ecological and evolutionary meta-analyses: its magnitudes and implications. Ecology. 2016; in press.
59. Cochran WG. The combination of estimates from different experiments. Biometrics. 1954;10(1):101–29.
60. Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21:1539–58.
61. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60.
62. Huedo-Medina TB, Sanchez-Meca J, Marin-Martinez F, Botella J. Assessing heterogeneity in meta-analysis: Q statistic or I-2 index? Psychol Methods. 2006;11(2):193–206.
63. Rucker G, Schwarzer G, Carpenter JR, Schumacher M. Undue reliance on I-2 in assessing heterogeneity may mislead. BMC Med Res Methodol. 2008;8:79.
64. Harrell FEJ. Regression modeling strategies with applications to linear models, logistic regression, and survival analysis. New York: Springer; 2001.
65. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):696–701.
66. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–66.
67. Lipsey MW. Those confounded moderators in meta-analysis: good, bad, and ugly. Ann Am Acad Polit Social Sci. 2003;587:69–81.
68. Schielzeth H. Simple means to improve the interpretability of regression coefficients. Methods Ecol Evol. 2010;1(2):103–13.
69. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. West Sussex: Wiley-Blackwell; 2009.
70. Cumming G, Finch S. A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educ Psychol Meas. 2001;61:532–84.
71. Jennions MD, Lorite CJ, Koricheva J. Role of meta-analysis in interpreting the scientific literature. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 364–80.
72. Thompson B. What future quantitative social science research could look like: confidence intervals for effect sizes. Educ Res. 2002;31:25–32.
73. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Lawrence Erlbaum; 1988.
74. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: prevention, assessment and adjustments. Chichester: Wiley; 2005.
75. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8(3):e1000344.
76. Moller AP, Jennions MD. Testing and adjusting for publication bias. Trends Ecol Evol. 2001;16(10):580–6.
77. Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315:629–34.
78. Sterne JAC, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol. 2001;54:1046–55.
79. Sutton AJ. Publication bias. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009. p. 435–52.
80. Lau J, Ioannidis JPA, Terrin N, Schmid CH, Olkin I. Evidence based medicine–the case of the misleading funnel plot. BMJ. 2006;333(7568):597–600.
81. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–63.
82. Duval S, Tweedie R. A nonparametric "trim and fill" method of accounting for publication bias in meta-analysis. J Am Stat Assoc. 2000;95(449):89–98.
83. Simonsohn U, Nelson LD, Simmons JP. p-curve and effect size: correcting for publication bias using only significant results. Perspect Psychol Sci. 2014;9(6):666–81.
84. Terrin N, Schmid CH, Lau J, Olkin I. Adjusting for publication bias in the presence of heterogeneity. Stat Med. 2003;22(13):2113–26.
85. Bruns SB, Ioannidis JPA. p-curve and p-hacking in observational research. PLoS One. 2016;11(2):e0149144.
86. Schuch FB, Vancampfort D, Rosenbaum S, Richards J, Ward PB, Veronese N, Solmi M, Cadore EL, Stubbs B. Exercise for depression in older adults: a meta-analysis of randomized controlled trials adjusting for publication bias. Rev Bras Psiquiatr. 2016;38(3):247–54.
87. Jennions MD, Moller AP. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution. Proc R Soc Lond B Biol Sci. 2002;269(1486):43–8.
88. Trikalinos TA, Ioannidis JP. Assessing the evolution of effect sizes over time. In: Rothstein H, Sutton AJ, Borenstein M, editors. Publication bias in meta-analysis: prevention, assessment and adjustments. Chichester: Wiley; 2005. p. 241–59.
89. Koricheva J, Jennions MD, Lau J. Temporal trends in effect sizes: causes, detection and implications. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 237–54.
90. Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995;48(1):45–57. discussion 59–60.
91. Leimu R, Koricheva J. Cumulative meta-analysis: a new tool for detection of temporal trends and publication bias in ecology. Proc R Soc Lond B Biol Sci. 2004;271(1551):1961–6.
92. Murtaugh PA. Journal quality, effect size, and publication bias in meta-analysis. Ecology. 2002;83(4):1162–6.
93. Greenhouse JB, Iyengar S. Sensitivity analysis and diagnostics. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation; 2009. p. 417–34.
94. Lajeunesse MJ. Recovering missing or partial data from studies: a survey. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 195–206.
95. Nakagawa S, Freckleton RP. Missing inaction: the dangers of ignoring missing data. Trends Ecol Evol. 2008;23(11):592–6.
96. Ellington EH, Bastille-Rousseau G, Austin C, Landolt KN, Pond BA, Rees EE, Robar N, Murray DL. Using multiple imputation to estimate missing data in meta-regression. Methods Ecol Evol. 2015;6(2):153–63.
97. Gurevitch J, Nakagawa S. Research synthesis methods in ecology. In: Fox GA, Negrete-Yankelevich S, Sosa VJ, editors. Ecological statistics: contemporary theory and application. Oxford: Oxford University Press; 2015. p. 201–28.
98. Nakagawa S. Missing data: mechanisms, methods and messages. In: Fox GA, Negrete-Yankelevich S, Sosa VJ, editors. Ecological statistics. Oxford: Oxford University Press; 2015. p. 81–105.
99. Ioannidis J, Patsopoulos N, Evangelou E. Uncertainty in heterogeneity estimates in meta-analyses. BMJ. 2007;335:914–6.
100. Jennions MD, Lorite CJ, Koricheva J. Using meta-analysis to test ecological and evolutionary theory. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 38–403.
101. Lajeunesse MJ. Power statistics for meta-analysis: tests for mean effects and homogeneity. In: Koricheva J, Gurevitch J, Mengersen K, editors. The handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 348–63.
102. Smith ML, Glass GV. Meta-analysis of psychotherapy outcome studies. Am Psychologist. 1977;32(9):752–60.
103. Eysenck HJ. Exercise in mega-silliness. Am Psychologist. 1978;33(5):517.
104. Whittaker RJ. Meta-analyses and mega-mistakes: calling time on meta-analysis of the species richness-productivity relationship. Ecology. 2010;91(9):2522–33.
105. Whittaker RJ. In the dragon's den: a response to the meta-analysis forum contributions. Ecology. 2010;91(9):2568–71.
106. Ioannidis JP. Meta-research: the art of getting it wrong. Res Synth Methods. 2010;3:169–84.
107. Jackson D, Riley R, White IR. Multivariate meta-analysis: potential and promise. Stat Med. 2011;30(20):2481–98.
108. Salanti G, Schmid CH. Special issue on network meta-analysis: introduction from the editors. Res Synth Methods. 2012;3(2):69–70.
