How to Write a Systematic Review Article and Meta-Analysis
Introduction
General definitions are one thing; the practical benefit of writing reviews is
another. Why would a novice author/researcher engage in this activity? Why is it
important? What benefits can it bring? First, it provides the authors with a gen-
eral understanding of the subject matter they study as part of their area of exper-
tise. Each field of study has its own terminology, and the more specific a topic
is, the greater the terminological differences that may be found among authors.
It is therefore important to produce a good description and critical appraisal of
existing evidence concerning the topic being explored. Another objective is to
integrate the findings generated by different studies into a meaningful body of
evidence. The process of writing a review article will help the authors obtain a
unique perspective on the issue and assist them in processing the results from
many investigators into a consistent form. It will then be possible to summarize
the results and interpret the existing evidence in a new light. To increase one’s
chances of having a review article accepted for publication, it is useful to address
topical issues in a given field or areas of research featuring a number of hetero-
geneous and controversial studies where a consistent approach is needed.
What is a Review?
A review that performs a systematic search using explicit inclusion and exclusion criteria but does not include a quantitative evaluation of study findings is referred to in this chapter as a hybrid narrative review. Hybrid narrative
reviews provide authors greater freedom to interpret and integrate study results
and conclusions compared with systematic reviews but still allow the reader to
determine the authenticity of the author’s findings. These reviews are particu-
larly important for theory development and problem identification, especially
when the peer-reviewed literature may be incomplete and when important
studies may not use rigorous experimental or longitudinal designs.
Meta-analyses are a step beyond systematic reviews; they require a quantita-
tive analysis of previously published findings.
The following sections discuss the steps involved in creating systematic
reviews and meta-analyses. Although not explicitly mentioned, much of the
information applies to hybrid narrative reviews as well. Because traditional
narrative reviews are no longer viewed favorably, they will not be discussed.
It is strongly recommended, however, that before writing any article, authors
should first choose a journal to which to submit their work, because journals differ subtly in how they define manuscript types. Authors should study
thoroughly the guidelines for authors and keep them on hand to reference
while writing the article. This may save a great deal of time spent on final revi-
sions or even make them unnecessary.
The aim of a systematic review is set in the same way as in an original research
study; the article must contribute something new to the given research field.
The specific aim should correspond with the research questions. It may be, for
example, “to provide a systematic review of the results of studies published
from 2000 to 2012 that investigate the specific relationship between the level
of parental control and alcohol use among children and adolescents.” Alter-
natively, it may be “to classify parenting strategies in relation to alcohol-using
children aged 12–15” or “to make a critical appraisal of recent studies of the
emotional bond in young adults who use cannabis.”
The aims are typically stated in the last paragraph of the introduction. The
aims then determine the choice of the specific procedure used to search sources
and process and present the results. In the concluding section of the study, it
should be stated whether and to what extent the aims have been fulfilled.
The primary and most important data sources are electronic databases, typi-
cally accessed through university libraries. Because access to specific papers
may be limited as a result of financial constraints, the levels of access granted
to students and staff will depend on the resources of the university subscribing
to the journals. Thus, you may find that although you can get into a number of
databases, you may be able to access only a few full texts (as the others require
payment) and have mostly abstracts available, which may not be sufficient for
systematic reviews. This is dealt with in more detail in the next point.
In the field of addictology, we recommend using the following databases:
• SCOPUS: http://www.scopus.com
• ProQuest Central: http://search.proquest.com/index
• PsycARTICLES: http://www.apa.org/pubs/databases/psycarticles/index.aspx
Nevertheless, databases and full-text studies are not the only data sources. It
is also possible to include conference presentations if the conference abstracts
have been published. At the same time, some journals may be reluctant to accept these types of publications because they have not undergone a standard peer-review process. Also, a quality literature search should not disregard print
sources, such as monographs; articles in peer-reviewed, non-indexed jour-
nals; handbooks and manuals pertaining to the relevant topic; graduate theses;
and dissertations. These can be included in the category “Records identified
through other sources” in the PRISMA (Preferred Reporting Items for System-
atic Reviews and Meta-Analyses) study flow diagram (see below).
We recommend keeping scrupulous notes on the articles read, either using
EndNote or a separate database of references. This is relevant to all research but
particularly to reviews.
The relevant publications, the results of which are to be processed, are selected
according to the classification criteria that follow.
The first item, Records identified through database searching, shows the number
of publications found in databases on the basis of the selection criteria. The item
Additional records identified through other sources refers to the number of pub-
lications found in information sources other than those available online (these
are typically print documents, such as research reports, handbooks, and manu-
als). Another step involves the elimination of duplicate articles. If you work with
multiple databases, it is very likely that the same publication will be selected
several times. Such duplicates should therefore be removed. This process is very
easy if you use a citation manager. When using EndNote, for example, this can
be achieved by simply activating the “Find duplicates” function.
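To illustrate the logic outside a citation manager, here is a minimal Python sketch that removes duplicates by comparing normalized titles; the record structure and field names are illustrative assumptions, not a prescribed export format.

```python
# Minimal duplicate-removal sketch (assumed record format with a "title" field).
def deduplicate(records):
    """Keep the first occurrence of each normalized title."""
    seen, unique = set(), []
    for rec in records:
        # Normalize case and whitespace so trivial variants match.
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Parental control and alcohol use", "source": "SCOPUS"},
    {"title": "Parental Control and  Alcohol Use", "source": "ProQuest"},  # duplicate
    {"title": "Parenting styles and cannabis use", "source": "ProQuest"},
]
print(len(deduplicate(records)))  # 2
```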
Then you can focus on the articles. The item Records screened indicates the number of publications that remained after the exclusion of duplicates; these records are then screened on the basis of their abstracts. The number of articles eliminated on the basis of this examination of abstracts is indicated in the Records excluded box. On the other hand, articles for which the full text
is available (these should make up as large a proportion of the initial set of
records as possible) are assessed in the next step and their final number is given
under Full-text articles assessed for eligibility. When reading through the stud-
ies, you should continue to bear in mind the selection criteria (ideally, with a
checklist on your desk) and check carefully that they are met in the studies
under scrutiny. If a more rigorous design is applied, you can also create a table
specifically for the selection and assessment of publications. If you come across
articles that do not meet the selection criteria, you should state the reasons for
such ineligibility and the respective number of studies; see the item Full-text
articles excluded with reasons. The last figure shows the final number of articles
included in the study. This example contains two alternatives—Studies included
in qualitative evaluation and Studies included in quantitative evaluation—but
one item only, for example, Studies included in quantitative evaluation, is also
possible. For more information about the PRISMA study flow diagram method,
including further illustrations of the procedure or the PRISMA checklist that
helps in keeping a record of the process, visit http://www.prisma-statement.org/statement.htm.
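Whichever tool you use, the counts reported in the flow diagram must remain arithmetically consistent from one box to the next. A minimal sketch of this bookkeeping, with hypothetical numbers:

```python
# Hypothetical PRISMA flow-diagram counts; each box follows from the previous one.
identified_db = 240      # records identified through database searching
identified_other = 12    # additional records identified through other sources
duplicates = 57          # duplicates removed across databases

records_screened = identified_db + identified_other - duplicates
records_excluded = 140   # rejected on the basis of the abstract
fulltext_assessed = records_screened - records_excluded
fulltext_excluded = 39   # full-text articles excluded, with reasons
studies_included = fulltext_assessed - fulltext_excluded

print(records_screened, fulltext_assessed, studies_included)  # 195 55 16
```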
Interpretation of Results
The results of the studies you have obtained will be further summarized in a
structured form—ideally a table—according to the classification criteria. It is
advisable to compare the qualitative and quantitative perspectives of the stud-
ies when processing the results. (Although meta-analysis is not always the goal,
it is useful to take quantitative as well as qualitative approaches into account.)
When using a quantitative point of view, you can follow the number of stud-
ies that used a longitudinal versus cross-sectional design, how many studies
applied a standardized methodology versus a methodology developed specifi-
cally for the purposes of the study, or how many studies had their samples of
participants well balanced in terms of representativeness and how many did
not. On the other hand, a qualitative perspective makes it possible to look for
broader aspects of the works and fine subtleties in the results that have been
ascertained.
There are a number of available tools that can serve as a guide when examin-
ing study methodologies and results. The Consolidated Standards of Report-
ing Trials (CONSORT) statement provides a standardized way to report and
interpret the results of randomized clinical trials (Schulz et al., 2010). The pri-
mary tool is a 25-item checklist that contains questions on how the trial was
designed, the data analyzed, and the results interpreted. The Strengthening the
Reporting of Observational studies in Epidemiology (STROBE) and Transpar-
ent Reporting of Evaluations with Nonrandomized Designs (TREND) state-
ments are similar checklists for studies using observational study designs (von
Elm et al., 2007; Des Jarlais et al., 2004). If a more quantitative analysis of study
design is desired, the recommendations of the Grades of Recommendation,
Assessment, Development, and Evaluation (GRADE) working group may be
used (Atkins et al., 2004). These recommendations contain a point system that
can be used in combination with the CONSORT, STROBE, or TREND state-
ments to further differentiate among studies. Although useful, the results of
using these tools should not be considered absolute; rather, they serve as guides toward a more structured appraisal of the quality of the studies under review.
Once the results have been processed and interpreted, what is probably the
most challenging part comes next. For one thing, you may be quite tired by
now, because the previous systematic procedure was rather demanding in
terms of attention and endurance, and now you need to think about the results
and compare them with the conclusions drawn by other relevant studies and
with each other. In particular, this requires you to bring a new perspective to
the subject matter under study, singling out and discussing the most salient findings
from the results. Importantly, the discussion should compare and evaluate the
results against other relevant research projects rather than against the presenta-
tion of the author’s opinions on the issue. Each idea or result presented in the discussion should therefore be anchored in the published evidence.
[Table: Overview of the reviewed studies (Bahr & Hoffmann, 2010; Barnes et al., 2000; Burk et al., 2011; Choquet et al., 2008; Clausen, 1996), classified by country, study design (longitudinal vs. cross-sectional), age category, number of respondents, parental involvement, parenting styles, and methods used.]
You should also consider the limitations of the studies under review; small sample sizes may strongly affect study generalizability. You may also face your own
limitations, particularly regarding the inclination toward a selective choice of
studies, where certain studies may not be included, either deliberately or inad-
vertently. Because citation bias may significantly compromise the results, you
should try to avoid it at all costs if you want to arrive at a conclusion that is
relevant to the field. If you fail to do so, it is most likely that reviewers will dis-
cover such a bias, as it is their job to examine related studies in the given area
of research.
The last aspect to consider during the interpretation process is the statisti-
cal versus clinical significance of studies. In a large number of cases, you will
find results that are not reflected in clinical practice, despite being significant.
Therefore, it is important to maintain contact with clinical practitioners (or
consult other experts) and be able to compare the results with real life. You can
then discuss in the conclusion how statistical significance relates to clinical relevance.
For addiction science, the critical evaluation of systematic reviews is quite
important. It is the key to the correct interpretation of selected data from par-
ticular studies, it provides background for comparing findings, and it can help
to identify potentially disproportionate or inhomogeneous interpretations of
findings. It has always been a sensitive issue in the context of publishing addic-
tion science because of potential conflicts of interest, and the history of the
field contains examples of published papers in which researchers intentionally
distorted data. The tendency to interpret data in a different way and present
specific points of view can be a potential source of bias (Bero & Jadad, 1997).
For example, there are many examples of contrasting study findings in the area
of tobacco policy depending on whether the study was or was not sponsored by
the tobacco industry (Glantz, 2005).
Meta-Analysis
A meta-analysis follows much the same process as a systematic review but requires a more complicated analysis. There are also
similarities with primary intervention trials, in which one focuses on how well
an intervention works. However, in a meta-analysis, the researcher looks across
studies to determine the magnitude of effects. It is worth following a system-
atic guideline such as PRISMA to establish a framework for the review (Moher
et al., 2009).
The first step is to formulate the research question. Decide the keywords
you will use to search for articles, the date from which you wish articles to be
included, and the inclusion and exclusion criteria. Search the databases you
have chosen for articles that meet your subject and eligibility criteria. It is also
worth looking at reference lists from the articles you have selected to find other
articles not so far identified.
Once the articles for inclusion have been identified they will need to be coded
according to the variables chosen for the meta-analysis. Because these coding
decisions are not always clear, two raters are often used to obtain some meas-
ure of reliability either by percent agreement or by a kappa coefficient. Enter
the data extracted into a database, recording the relevant details of each study, including, for example, type of intervention, follow-up periods, sample size,
type of control group, and research design.
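As an illustration of these reliability measures, the following Python sketch computes percent agreement and Cohen’s kappa for two raters’ coding decisions; the study codes are hypothetical.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if the raters coded independently.
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical design codes assigned to ten studies by two raters.
rater1 = ["RCT", "cohort", "RCT", "cohort", "RCT", "case-control", "RCT", "cohort", "RCT", "RCT"]
rater2 = ["RCT", "cohort", "cohort", "cohort", "RCT", "case-control", "RCT", "RCT", "RCT", "RCT"]
print(percent_agreement(rater1, rater2))        # 0.8
print(round(cohens_kappa(rater1, rater2), 2))   # 0.63
```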
One of the problems in comparing a number of studies is that studies will
report diverse outcomes according to the model they used. A “common currency” of effect sizes therefore needs to be established so that comparisons and aggregation can be made. Finney
and Moyer (2010) suggest that the most common effect sizes used are stand-
ardized mean difference, odds ratio, and correlation coefficient. The standardized
mean difference is “the difference between means on a continuous outcome
variable for an intervention and a comparison condition, typically divided by
the pooled standard deviation of the two groups” (Finney & Moyer, 2010, p. 321). By using standard deviations, one can measure by how many standard
deviations, or what proportion of standard deviations, the intervention is per-
forming better than the control group.
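A minimal sketch of this calculation, using hypothetical group statistics:

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """(intervention mean - comparison mean) / pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical: intervention group drinks 4.2 days/month (SD 2.1, n = 50),
# control group 5.1 days/month (SD 2.4, n = 48).
d = standardized_mean_difference(4.2, 2.1, 50, 5.1, 2.4, 48)
print(round(d, 2))  # -0.40: intervention about 0.4 SD below control
```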
Another method of measuring effect size is the odds ratio. The odds of an outcome are the probability of it occurring divided by the probability of it not occurring; the odds ratio is the ratio of these odds under the two conditions. An odds ratio of 1.00 would show that there was
no difference between treatment and a control condition in which there were
two possible outcomes.
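A minimal sketch, with hypothetical 2 × 2 counts:

```python
def odds_ratio(events_tx, no_events_tx, events_ctl, no_events_ctl):
    """Odds of the outcome under treatment divided by odds under control."""
    return (events_tx / no_events_tx) / (events_ctl / no_events_ctl)

# Hypothetical: 30 of 100 abstinent in treatment, 18 of 100 in control.
print(round(odds_ratio(30, 70, 18, 82), 2))  # 1.95 (1.00 would mean no difference)
```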
The third method is the correlation coefficient, which can be used to express
the relationship between a continuous intervention dimension (which is unu-
sual in addiction studies) and the outcome (Finney & Moyer, 2010).
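Because studies report in different metrics, the three effect sizes often have to be converted into a single one before aggregation. The sketch below uses standard approximate conversion formulas from the meta-analysis literature; the input values are hypothetical.

```python
import math

def d_from_log_odds_ratio(log_or):
    """Cohen's d approximated from a log odds ratio (logistic model)."""
    return log_or * math.sqrt(3) / math.pi

def r_from_d(d, n1, n2):
    """Correlation coefficient approximated from d, given the group sizes."""
    a = (n1 + n2) ** 2 / (n1 * n2)  # correction factor; a = 4 for equal groups
    return d / math.sqrt(d**2 + a)

d = d_from_log_odds_ratio(math.log(1.95))
print(round(d, 2))                      # 0.37
print(round(r_from_d(d, 50, 50), 2))    # 0.18
```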
We have now established a method of calculating effect sizes, and, to find
out whether there is indeed an effect and what that effect is, we must now
aggregate them across the studies we have reviewed. This can be done with a
fixed-effects or a random-effects approach. These two approaches deal with the
study sampling errors, with the former assuming that the error in estimating
186 Publishing Addiction Science
the population effect size comes from random factors associated with subject-
level sampling, whereas the latter assumes that there are study sampling errors
in addition to subject-level sampling errors. A random-effects model is used
more frequently because of its greater generalizability, although the fixed-effects
model has a greater statistical power. Effects from larger sample sizes have less
variance across studies and are therefore more precise. To test whether the
overall effect size varies from zero, it is best to use specific statistical software
designed to conduct meta-analyses (Finney & Moyer, 2010).
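A minimal sketch of both approaches, using inverse-variance weighting and the DerSimonian-Laird estimate of between-study variance; the effect sizes and variances are hypothetical, and dedicated software remains preferable in practice:

```python
def pool(effects, variances):
    """Fixed-effect and random-effects (DerSimonian-Laird) pooled estimates."""
    # Fixed effect: inverse-variance weighted mean.
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Between-study variance tau^2 from Cochran's Q.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Random effects: weights incorporate tau^2.
    w_re = [1 / (v + tau2) for v in variances]
    random_ = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return fixed, random_, tau2

effects = [0.50, 0.01, 0.60, 0.10]      # e.g., standardized mean differences
variances = [0.01, 0.01, 0.02, 0.02]    # their sampling variances
fixed, random_, tau2 = pool(effects, variances)
print(round(fixed, 2), round(random_, 2), round(tau2, 3))  # 0.29 0.3 0.073
```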
As with systematic reviews, a table should be presented detailing all the arti-
cles included in the study and describing all the relevant characteristics, includ-
ing author, date of data collection, the main outcome findings, and methods of
collecting the data. A forest plot that shows the range of findings for each study is
also often included, allowing the effects of the intervention to be compared across studies.
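A minimal sketch of such a plot with matplotlib; the study labels, effect sizes, and confidence intervals are hypothetical.

```python
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Pooled"]
effects = [0.30, 0.10, 0.45, 0.27]
ci_low = [0.02, -0.34, 0.06, 0.09]
ci_high = [0.58, 0.54, 0.84, 0.45]

y = range(len(studies))
xerr = [[e - lo for e, lo in zip(effects, ci_low)],   # distance to lower bound
        [hi - e for e, hi in zip(effects, ci_high)]]  # distance to upper bound
plt.errorbar(effects, y, xerr=xerr, fmt="s", capsize=3)
plt.axvline(0, linestyle="--")    # line of no effect
plt.yticks(y, studies)
plt.gca().invert_yaxis()          # first study at the top
plt.xlabel("Effect size (95% CI)")
plt.tight_layout()
plt.show()
```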
References
Atkins, D., Best, D., Briss, P. A., Eccles, M., Falck-Ytter, Y., Flottorp, S., . . .,
Zaza, S., & GRADE Working Group. (2004). Grading quality of evidence
and strength of recommendations. BMJ, 328, 1490.
Pouget, E. R., Hagan, H., & Des Jarlais, D. (2012). Meta-analysis of hepatitis C
seroconversion in relation to shared syringes and drug preparation equip-
ment. Addiction, 107, 1057–1065.
Schulz, K. F., Altman, D. G., Moher, D., & CONSORT Group. (2010). CON-
SORT 2010 statement: Updated guidelines for reporting parallel group ran-
domised trials. PLoS Medicine, 7, e1000251.
Smith, V., Devane, D., Begley, C. M., & Clarke, M. (2011). Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology, 11, 15.
Society for the Study of Addiction (2015). Instructions for authors. Addiction. Retrieved August 11, 2015, from http://www.addictionjournal.org/pages/authors.
Stroup, D. F., Berlin, J. A., Morton, S. C., Olkin, I., Williamson, G. D.,
Rennie, D., . . ., & Thacker, S. B. (2000). Meta-analysis of observational stud-
ies in epidemiology: A proposal for reporting. Meta-analysis of Observational
Studies in Epidemiology (MOOSE) group. JAMA, 283, 2008–2012.
Viechtbauer, W. (2007). Hypothesis testing for population heterogeneity in
meta-analysis. British Journal of Mathematical and Statistical Psychology,
60, 64–75.
von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C.,
Vandenbroucke, J. P., & STROBE Initiative. (2007). The Strengthening the
Reporting of Observational Studies in Epidemiology (STROBE) statement:
Guidelines for reporting observational studies. PLoS Medicine, 4, e296.