Abstract
Background: The term severe acute respiratory infection (SARI) encompasses a heterogeneous group of respiratory
illnesses. Grading the severity of SARI is currently reliant on indirect disease severity measures such as respiratory
and heart rate, and the need for oxygen or intensive care. With the lungs being the primary organ system involved
in SARI, chest radiographs (CXRs) are potentially useful for describing disease severity. Our objective was to develop
and validate a SARI CXR severity scoring system.
Methods: We completed validation within an active SARI surveillance project, with SARI defined using the World Health
Organization case definition of an acute respiratory infection with a history of fever, or measured fever of ≥ 38 °C; and
cough; and with onset within the last 10 days; and requiring hospital admission. We randomly selected 250 SARI cases.
Admission CXR findings were categorized as: 1 = normal; 2 = patchy atelectasis and/or hyperinflation and/or bronchial
wall thickening; 3 = focal consolidation; 4 = multifocal consolidation; and 5 = diffuse alveolar changes.
Initially, four radiologists scored CXRs independently. Subsequently, a pediatrician, physician, two residents, two medical
students, and a research nurse independently scored CXR reports. Inter-observer reliability was determined using a
weighted Kappa (κ) for comparisons between radiologists; radiologists and clinicians; and clinicians. Agreement was
defined as moderate (κ > 0.4–0.6), good (κ > 0.6–0.8) and very good (κ > 0.8–1.0).
Results: Agreement between the two pediatric radiologists was very good (κ = 0.83, 95 % CI 0.65–1.00) and between the
two adult radiologists was good (κ = 0.75, 95 % CI 0.57–0.93).
Agreement of the clinicians with the radiologists was moderate-to-good (pediatrician: κ = 0.65; pediatric resident: κ = 0.69;
physician: κ = 0.68; resident: κ = 0.67; research nurse: κ = 0.49; medical students: κ = 0.53 and κ = 0.56).
Agreement between clinicians was good-to-very good (pediatrician vs. physician: κ = 0.85; vs. pediatric resident: κ = 0.81;
vs. medicine resident: κ = 0.76; vs. research nurse: κ = 0.75; vs. medical students: κ = 0.63 and 0.66).
Following review of discrepant CXR report scores by clinician pairs, κ values for radiologist-clinician agreement ranged
from 0.59 to 0.70 and for clinician-clinician agreement from 0.97 to 0.99.
Conclusions: This five-point CXR scoring tool, suitable for use in poorly- and well-resourced settings and by clinicians of
varying experience levels, reliably describes SARI severity. The resulting numerical data enables epidemiological
comparisons of SARI severity between different countries and settings.
Keywords: Influenza, Humans, Radiography, Thoracic, Respiratory tract infections, Validation studies
* Correspondence: [email protected]
¹Starship Children’s Hospital, Auckland, New Zealand. ²The SHIVERS study, Auckland and Wellington, New Zealand.
Full list of author information is available at the end of the article
© 2015 Taylor et al. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver
(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
…ordinal. Weighted Kappa scores were defined as showing ‘poor’ (κ ≤ 0.2), ‘fair’ (>0.2 to 0.4), ‘moderate’ (>0.4 to 0.6), ‘good’ (>0.6 to 0.8) or ‘very good’ (>0.8 to 1.0) agreement [20].
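To make the weighting and the agreement bands concrete, the following is a minimal Python sketch, not the authors' code. The paper states only that the Kappa was weighted to account for the ordinal scale, so the choice of linear weights here is an assumption, and the rater scores are hypothetical.

```python
# Minimal sketch (not the authors' code): linearly weighted Cohen's kappa
# for two raters' five-point CXR severity scores, mapped to the agreement
# bands used in the paper [20]. Linear weights are an assumption; the
# paper says only that weighting accounted for the ordinal scale.
from sklearn.metrics import cohen_kappa_score

def agreement_strength(kappa: float) -> str:
    """Map a kappa value to the paper's agreement categories."""
    if kappa <= 0.2:
        return "poor"
    if kappa <= 0.4:
        return "fair"
    if kappa <= 0.6:
        return "moderate"
    if kappa <= 0.8:
        return "good"
    return "very good"

# Hypothetical severity scores (1-5) assigned to the same ten CXRs.
rater_a = [1, 2, 2, 3, 4, 5, 2, 1, 3, 3]
rater_b = [1, 2, 3, 3, 4, 4, 2, 1, 3, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kappa:.2f} ({agreement_strength(kappa)})")
```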
Following completion of the scoring and identification of the CXR reports with discrepant scores, the pediatrician then met individually with each of the other clinicians to determine if we could achieve a consensus severity score for these reports. We then recalculated the κ scores for the radiologist-clinician and clinician-clinician comparisons.

Sample-size estimates
We based sample-size estimates upon the five-point ordinal scale scoring system, assuming a distribution of scores of 10 %, 25 %, 25 %, 25 %, and 15 % across the five categories. For a sample-size of 200, the Cohen’s Kappa measure of agreement will have a confidence interval in the order of ±0.14 (assuming 100 % agreement and weighting to allow for the ordinal nature of the scores). Given the potential for the actual distributions across categories to differ from these assumptions, we increased the sample-size to 250.
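This sample-size reasoning can be sanity-checked by simulation. The sketch below is our illustration, not the authors' method (which is not described in detail): it draws repeated samples of size n from the assumed score distribution under an assumed near-perfect agreement structure and reports the spread of the weighted kappa estimates. The resulting half-width depends on the assumed agreement structure, so it will not necessarily reproduce the ±0.14 figure.

```python
# Illustrative Monte Carlo check of the sample-size reasoning, under
# stated assumptions; the authors' exact calculation is not described.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 250
probs = [0.10, 0.25, 0.25, 0.25, 0.15]  # assumed distribution over scores 1-5

kappas = []
for _ in range(1000):
    a = rng.choice([1, 2, 3, 4, 5], size=n, p=probs)
    # Assumed agreement structure: second rater matches 90 % of the time,
    # otherwise differs by one category (clipped to the 1-5 range).
    noise = rng.choice([-1, 0, 1], size=n, p=[0.05, 0.90, 0.05])
    b = np.clip(a + noise, 1, 5)
    kappas.append(cohen_kappa_score(a, b, weights="linear"))

lo, hi = np.percentile(kappas, [2.5, 97.5])
print(f"n = {n}: 95 % of kappa estimates fall in {lo:.2f}-{hi:.2f} "
      f"(half-width ±{(hi - lo) / 2:.2f})")
```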
Results

Study sample demographics, clinical illness, respiratory viral isolates and CXR abnormalities (Table 1)
The median (interquartile range) age of the children with SARI was 1 (0–3) year and of the adults was 60 (42–75) years. Sixty-five (50 %) of the adults were ever smokers, of whom 18 (28 %) were current smokers. The most common presenting syndromes among the children were suspected pneumonia (42 %) and suspected bronchiolitis (36 %), and among the adults were suspected pneumonia (39 %) and febrile illness with respiratory symptoms (25 %). Median length of hospital stay for both children and adults was 3 days. Ten percent (children 17 %, adults 3 %) required intensive care. Laboratory testing identified influenza viruses in 23 % of SARI cases and non-influenza respiratory viruses (respiratory syncytial virus, rhinovirus, parainfluenza virus types 1–3, adenovirus, or human metapneumovirus) in 43 %. In 12 (10 %) children and one (1 %) adult, co-detection of influenza and a non-influenza virus occurred. The proportion of SARI cases that were influenza positive was similar for children versus adults (21 % vs. 25 %, P = 0.43). A larger proportion of the SARI cases in children, compared to adults, were positive for non-influenza respiratory viruses (81 % vs. 25 %, P < 0.001). A larger proportion of the SARI cases in children, compared to adults, were assigned a principal discharge diagnosis code for a respiratory illness (95 % vs. 74 %, P < 0.001). The distribution of CXR scores across the five scoring categories differed between children and adults (P < 0.001; Fig. 3).

Table 1 Demographic, clinical, and respiratory viral characteristics, and discharge diagnoses of a random sample of 250 patients hospitalized with a severe acute respiratory infection and identified by active surveillance

Variable                                              Children (n1 = 125)   Adults (n2 = 125)
Demographics
  Age in years, median (IQRᵃ)                         1 (0–3)               60 (42–75)
  Male gender, n (%)                                  70 (56)               66 (53)
  Ethnicity, n (%)
    European and other                                56 (45)               75 (60)
    Māori                                             22 (17)               13 (10)
    Pacific                                           36 (29)               19 (15)
    Asian                                             11 (9)                18 (15)
  Self-defined healthᵇ, n (%)
    Excellent                                         51 (42)               11 (9)
    Very good                                         32 (26)               39 (32)
    Good                                              25 (20)               44 (36)
    Fair                                              5 (4)                 20 (16)
    Poor                                              10 (8)                9 (7)
  Smoking history (adults only)
    Ever smoker, n (%)                                –                     65 (50)
    Current smoker, n (%)                             –                     18 (14)
Clinical features of SARI illness
  Presenting syndromeᶜ, n (%)
    Suspected acute upper respiratory tract infection 6 (5)                 3 (3)
    Suspected croup                                   4 (3)                 0 (0)
    Suspected bronchiolitis                           42 (36)               0 (0)
    Suspected pneumonia                               50 (42)               47 (39)
    Exacerbation of adult chronic lung disease        0 (0)                 11 (9)
    Exacerbation of asthma                            7 (6)                 7 (6)
    Exacerbation of childhood chronic lung disease    1 (1)                 0 (0)
    Respiratory failure                               0 (0)                 3 (3)
    Febrile illness with respiratory symptoms         3 (3)                 30 (25)
    Other suspected acute respiratory infection       5 (4)                 18 (15)
  Length of stay in days, median (IQRᵃ)               3 (2–5)               3 (2–6)
  Intensive care unit admission, n (%)                21 (17)               4 (3)
Respiratory viral testing and results
  Influenza virus identifiedᵈ, n (%)                  26 (21)               31 (25)
  Non-influenza respiratory virus identifiedᵉ, n (%)  80 (81)               27 (25)
Discharge diagnosis categoryᶠ
  Respiratory                                         119 (95)              93 (74)
  Cardiovascular                                      0 (0)                 6 (5)
  Infectious diseases                                 4 (3)                 6 (5)
  Other organ systems                                 2 (2)                 20 (16)

ᵃ IQR = interquartile range
ᵇ n1 = 123, n2 = 123
ᶜ n1 = 118, n2 = 119. Suspected upper respiratory tract infection includes coryza and pharyngitis; exacerbation of adult chronic lung disease includes chronic obstructive lung disease, emphysema, and bronchitis; exacerbation of childhood chronic lung disease includes bronchiectasis and cystic fibrosis; febrile illness with respiratory symptoms includes shortness of breath
ᵈ n1 = 125, n2 = 124. Child: influenza A(H1N1)pdm09 n = 7, influenza A(H3N2) n = 9, influenza A (not subtyped) n = 1, influenza B n = 9. Adult: influenza A(H1N1)pdm09 n = 8, influenza A(H3N2) n = 12, influenza A (not subtyped) n = 5, influenza B n = 6
ᵉ n1 = 99, n2 = 109. Child: respiratory syncytial virus n = 49, rhinovirus n = 24, parainfluenza virus n = 3, adenovirus n = 13, human metapneumovirus n = 5. Adult: respiratory syncytial virus n = 7, rhinovirus n = 13, parainfluenza virus n = 2, adenovirus n = 0, human metapneumovirus n = 4
ᶠ Based upon ICD principal discharge diagnosis codes

Chest radiograph scoring agreement

Radiologist with radiologist agreement
Agreement within pairs of radiologists who scored the radiographs was ‘very good’ for the pediatric radiologists (κ = 0.83) and ‘good’ for the adult radiologists (κ = 0.75) (Table 2).

Clinician with radiologist agreement
The κ values for agreement of clinicians with radiologists ranged from 0.49 to 0.69. Agreement of the clinicians’ scoring of the CXR reports with the senior radiologists’ scoring of the CXR images was ‘good’ for the pediatrician (κ = 0.65), internal medicine physician (κ = 0.68), internal medicine resident (κ = 0.66), and pediatric resident (κ = 0.69); and ‘moderate’ for the two medical students (κ = 0.53 and 0.56) and the research nurse (κ = 0.49) (Table 2).
Fig. 3 Distribution of radiologists’ chest radiograph scores for children and adults hospitalized with a severe acute respiratory infection
Clinician with clinician agreement
Agreement between clinician pairs was ‘very good’ for the pediatrician versus the internal medicine physician (κ = 0.85) and the pediatrician versus the pediatric resident (κ = 0.81); and ‘good’ for comparisons between the pediatrician and the internal medicine resident (κ = 0.76), medical students (κ = 0.63 and 0.66), and research nurse (κ = 0.75) (Table 3).

Clinician-radiologist agreement and clinician-clinician agreement following clinician review of chest radiographs with scoring discrepancies
Following review by clinician pairs of the CXR reports for which their scores were discrepant, and determination of whether a consensus score was possible, we recalculated radiologist-clinician and clinician-clinician agreement. The radiologist-clinician κ values ranged from 0.59 to 0.70 following this second CXR report review. The changes in κ values for agreement of the clinicians’ scoring with the radiologists’ scoring following this second CXR report review were smaller for the pediatrician (+0.03), internal medicine physician (−0.01), internal medicine resident (−0.03) and pediatric resident (+0.01), and larger for the medical students (+0.09 and +0.14) and research nurse (+0.10) (Table 4).
Table 2 Agreement between radiologists in scoring severe acute respiratory infection CXRs from their reading of the digital CXR images, and agreement in scoring severe acute respiratory infection CXRs: clinicians’ reading of CXR reports versus radiologists’ reading of CXRs

Health professional                              Weighted Kappa (95 % CI)   Strength of agreementᵃ
Radiologist agreement
  Pediatric radiologists                         0.83 (0.65 to 1.00)        ‘Very good’
  Adult radiologists                             0.75 (0.57 to 0.93)        ‘Good’
Radiologist-clinician agreement
  Radiologist vs. pediatrician                   0.65 (0.52 to 0.78)        ‘Good’
  Radiologist vs. internal medicine physician    0.68 (0.55 to 0.80)        ‘Good’
  Radiologist vs. internal medicine resident     0.66 (0.53 to 0.78)        ‘Good’
  Radiologist vs. pediatric resident             0.69 (0.56 to 0.82)        ‘Good’
  Radiologist vs. medical student 1              0.56 (0.44 to 0.69)        ‘Moderate’
  Radiologist vs. medical student 2              0.53 (0.40 to 0.66)        ‘Moderate’
  Radiologist vs. research nurse                 0.49 (0.36 to 0.62)        ‘Moderate’

ᵃ Agreement: weighted Kappa ≤0.2 = ‘poor’, >0.2 to 0.4 = ‘fair’, >0.4 to 0.6 = ‘moderate’, >0.6 to 0.8 = ‘good’, >0.8 to 1.0 = ‘very good’
CI = confidence interval
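The paper does not state how the confidence intervals in Table 2 were derived. One common approach, shown below as a hedged sketch with hypothetical scores, is a case-resampling bootstrap around the weighted kappa point estimate.

```python
# Hedged sketch: bootstrap 95 % CI for a weighted kappa, mirroring the
# "Weighted Kappa (95 % CI)" presentation in Table 2. Bootstrapping is an
# assumption; the paper does not say how its CIs were computed.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_with_ci(a, b, n_boot=2000, seed=0):
    a, b = np.asarray(a), np.asarray(b)
    rng = np.random.default_rng(seed)
    point = cohen_kappa_score(a, b, weights="linear")
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(a), size=len(a))  # resample cases
        boot.append(cohen_kappa_score(a[idx], b[idx], weights="linear"))
    lo, hi = np.nanpercentile(boot, [2.5, 97.5])
    return point, lo, hi

# Hypothetical radiologist and clinician scores for the same CXRs.
radiologist = [1, 2, 2, 3, 4, 5, 2, 1, 3, 3, 4, 2]
clinician   = [1, 2, 3, 3, 4, 4, 2, 1, 3, 2, 4, 2]
k, lo, hi = kappa_with_ci(radiologist, clinician)
print(f"kappa = {k:.2f} (95 % CI {lo:.2f} to {hi:.2f})")
```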
Table 3 Agreement between clinician pairs in classification of CXR abnormalities in patients with a severe acute respiratory infection

Clinician-clinician combination                  Weighted Kappa (95 % CI)   Strength of agreementᵃ   CXRs with discrepant scores (n = 250), n (%)
Agreement after independent review
  Pediatrician vs. internal medicine physician   0.85 (0.73 to 0.98)        ‘Very good’              39 (16)
  Pediatrician vs. internal medicine resident    0.76 (0.63 to 0.88)        ‘Good’                   48 (19)
  Pediatrician vs. pediatric resident            0.81 (0.68 to 0.95)        ‘Very good’              51 (20)
  Pediatrician vs. medical student 1             0.66 (0.53 to 0.78)        ‘Good’                   67 (27)
  Pediatrician vs. medical student 2             0.63 (0.50 to 0.76)        ‘Good’                   70 (28)
  Pediatrician vs. research nurse                0.75 (0.62 to 0.88)        ‘Good’                   56 (22)
Agreement after combined review of CXRs with discrepant scores
  Pediatrician vs. internal medicine physician   0.98 (0.90 to 1.06)        ‘Very good’              3 (1)
  Pediatrician vs. internal medicine resident    0.99 (0.87 to 1.12)        ‘Very good’              4 (2)
  Pediatrician vs. pediatric resident            0.97 (0.84 to 1.09)        ‘Very good’              5 (2)
  Pediatrician vs. medical student 1             0.99 (0.86 to 1.11)        ‘Very good’              3 (1)
  Pediatrician vs. medical student 2             0.98 (0.85 to 1.10)        ‘Very good’              3 (1)
  Pediatrician vs. research nurse                0.99 (0.86 to 1.11)        ‘Very good’              6 (2)

ᵃ Agreement: weighted Kappa ≤0.2 = ‘poor’, >0.2 to 0.4 = ‘fair’, >0.4 to 0.6 = ‘moderate’, >0.6 to 0.8 = ‘good’, >0.8 to 1.0 = ‘very good’
CI = confidence interval
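The right-hand column of Table 3 counts the reports on which a clinician pair disagreed. Identifying these is mechanical, as in the sketch below (the scores are hypothetical; in the study the discrepancies were resolved at an in-person consensus meeting, not in software).

```python
# Illustrative only: list the CXR reports on which two clinicians'
# severity scores differ, i.e. the inputs to the consensus-review step.
def discrepant_reports(scores_a, scores_b):
    """Return (report index, score A, score B) for each disagreement."""
    return [(i, a, b)
            for i, (a, b) in enumerate(zip(scores_a, scores_b))
            if a != b]

# Hypothetical scores for eight reports.
pediatrician = [1, 2, 2, 3, 4, 5, 2, 1]
nurse = [1, 3, 2, 3, 4, 4, 2, 2]

for idx, a, b in discrepant_reports(pediatrician, nurse):
    print(f"report {idx}: pediatrician = {a}, nurse = {b} -> flag for consensus review")
```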
Agreement on CXR report scoring for all pairs of clinicians following this consensus meeting was ‘very good’, with κ scores ranging from 0.97 to 0.99 (Table 3).
The distribution of CXR scores skewed more to the lower (more normal) scores than was anticipated in the study sample-size calculation. However, across all comparisons the κ estimates had average confidence intervals of ±0.12 (range ±0.08 to ±0.18), which was in keeping with our sample-size estimate.

Discussion
Using a novel five-point ordinal scoring system, we described the agreement of clinicians with radiologists and between clinicians in the interpretation of CXR abnormalities in patients with SARI. We observed ‘good’ to ‘very good’ inter-observer agreement between radiologists who reviewed the original radiographs and applied the scoring system. Agreement between radiologists and of radiologists with clinicians was ‘moderate’ to ‘very good’. Inter-observer agreement between clinicians of various levels of experience was ‘good’ to ‘very good’. Following a consensus review by clinician pairs of radiograph reports with discrepant scores, clinician agreement with the radiologists improved for the clinicians who were less experienced in CXR interpretation, and agreement between all clinician pairs became ‘very good’.
Table 4 Agreement in classification of CXR abnormalities in patients with a severe acute respiratory infection: clinicians’ reading of CXR reports following clinician-clinician review of discrepant scores versus radiologists’ reading of CXRs

Radiologist-clinician combination                Weighted Kappa (95 % CI)   Strength of agreementᵃ
  Radiologist vs. pediatrician                   0.68 (0.60 to 0.76)        ‘Good’
  Radiologist vs. internal medicine physician    0.67 (0.59 to 0.76)        ‘Good’
  Radiologist vs. adult medical resident         0.65 (0.56 to 0.74)        ‘Good’
  Radiologist vs. pediatric medical resident     0.70 (0.62 to 0.78)        ‘Good’
  Radiologist vs. medical student 1              0.65 (0.56 to 0.74)        ‘Good’
  Radiologist vs. medical student 2              0.67 (0.59 to 0.76)        ‘Good’
  Radiologist vs. research nurse                 0.59 (0.48 to 0.69)        ‘Moderate’

ᵃ Agreement: weighted Kappa ≤0.2 = ‘poor’, >0.2 to 0.4 = ‘fair’, >0.4 to 0.6 = ‘moderate’, >0.6 to 0.8 = ‘good’, >0.8 to 1.0 = ‘very good’
CI = confidence interval
Our study used data collected from 250 prospectively enrolled SARI cases (125 pediatric; 125 adult) selected randomly from a larger number of SARI cases identified by active surveillance within a defined region and study period. The CXRs from the children were more severely abnormal than those obtained from the adults. The largest differences between the CXRs from children and adults were in the proportions with CXR severity scores of 1, ‘normal’ (pediatric 11 %, adult 56 %), and of 2, ‘patchy atelectasis and/or hyperinflation and/or bronchial wall thickening’ (pediatric 63 %, adult 29 %). We postulate that this is due to age-related differences in lung anatomy, with young children having smaller airways with increased airway resistance; fewer alveoli and reduced alveolar surface area; and a more elastic and compliant chest wall compared to adults [21].
For large epidemiological studies of SARI, interpretation of CXRs by radiologists is both costly and time consuming. In contrast with the storage and subsequent review of a digital CXR image, a CXR report can be stored as a simple text document and read without the requirement for sophisticated software. Our approach allows inclusion of CXR data into a numerical data set without the need for complex methods, such as digital algorithms. Clinical personnel with less specialist training may use this approach and achieve acceptable levels of agreement with radiologists and with more experienced clinicians, especially if the opportunity for reviewing reports with discrepant scores is included.
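As a concrete illustration of this point (a hypothetical sketch; the field names and values are invented, not taken from the study data set), once each report carries a 1-5 score the data reduce to an ordinary numerical variable that standard tools can tabulate without any image handling:

```python
# Hypothetical sketch: scored CXR reports as a plain numerical variable
# in a surveillance data set; no images or specialised software required.
import csv
from collections import Counter
from io import StringIO

surveillance_csv = StringIO(
    "case_id,age_group,cxr_score\n"
    "S001,child,2\n"
    "S002,adult,1\n"
    "S003,child,4\n"
    "S004,adult,3\n"
    "S005,child,2\n"
)

scores = {"child": [], "adult": []}
for row in csv.DictReader(surveillance_csv):
    scores[row["age_group"]].append(int(row["cxr_score"]))

for group, vals in scores.items():
    print(group, dict(Counter(vals)))  # score distributions, cf. Fig. 3
```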
A potential weakness of our study is that we compared radiologists’ scores from the original CXR images to clinicians’ scores of the corresponding reports. However, we felt it was important to show the scoring tool was valid when applied to CXRs by radiologists, given that their interpretation is the gold standard. Establishing that there was agreement between radiologists was a necessary first step before proceeding with comparisons between radiologists and clinicians.
With few admissions to intensive care and very few CXRs with scores of ‘5’, we cannot be sure that agreement for extremely abnormal CXRs is high. However, most of the disagreement in previously reported studies is in the mid-range of our categorization system, e.g. the differentiation between bronchiolitis and pneumonia in children [11]. Our validation was limited to the first CXR of the hospital admission, so it may not have included the most abnormal radiograph from each patient. However, for surveillance this is most appropriate.
Application of this CXR scoring system allows for an evaluation of the relationship between the severity of CXR abnormalities and exposure to factors that potentially prevent respiratory disease, for example, the pneumococcal conjugate vaccine [22]. It also allows for an evaluation of the relationship between the severity of CXR changes on hospital admission and subsequent health care utilization, for example, intensive care unit admission. Being able to include numerical data that describes the severity of chest radiograph abnormalities may allow for increased precision in the application of tools that use vital sign and laboratory abnormalities to assist in clinical decision making in patients with SARI [23, 24].
Our scoring system is relatively simple, and simpler, for example, than the approach developed for the scoring of CXRs from patients with chronic respiratory conditions such as cystic fibrosis [25]. Given that we were describing an acute respiratory illness, we specifically excluded description of the presence of chronic disease, non-respiratory disease, and/or complications from this validation study. We believe that separate description and recording of such abnormalities and of the acute changes related to SARI is more appropriate.
The inter-observer agreements achieved in this study were ‘moderate’ (κ >0.4 to 0.6) to ‘very good’ (κ >0.8 to 1.0). This is an acceptable result when compared to other studies that have examined inter-observer reliability in assessing CXRs, for example in adult community-acquired pneumonia, where κ values were less than 0.50 [10–12]. The inter-observer agreements achieved in this study also compare favorably with those found when using the WHO criteria for radiologically confirmed pneumonia in children to examine the efficacy of pneumococcal conjugate vaccines in preventing pneumonia (κ = 0.58) [22]. We postulate that better agreement was reached with our scoring system because it only required a description of the presence of abnormalities rather than an interpretation of whether or not these changes identified a specific syndrome, for example pneumonia or bronchiolitis [26]. Poor agreement between clinicians on the finer details of chest radiograph interpretation is evident in studies of both adults and children with community-acquired pneumonia [26, 27].
Consistent with the published literature, agreement of clinicians with radiologists and agreement between clinicians varied with the level of clinician experience in reading CXR reports [7, 27–29]. Including clinicians with less clinical experience in CXR interpretation (medical students and a research nurse) and demonstrating that they could reach reasonable levels of agreement compared with more experienced colleagues shows promise for the application of this tool by such members of research teams.
Our sample was too small to allow us to reliably investigate or describe the chest radiographic features of population subgroups defined, for example, by presenting syndrome, intensity of care required, or respiratory viruses detected. The first two years of SHIVERS surveillance identified more than 3,500 cases of SARI. As we plan for five years of surveillance within the SHIVERS project, we anticipate that we will have sufficient study power to complete these important subgroup analyses and will now be able to include this measure of CXR severity in our analyses.
Conclusions
Our CXR report scoring tool provides a reliable description of SARI severity and is suitable for use in poorly- and well-resourced settings and by clinicians of varying experience levels. The resulting numerical data enables epidemiological comparisons of SARI severity between different countries and settings.
6. Vaillant L, La Ruche G, Tarantola A, Barboza P, epidemic intelligence team at InVS. Epidemiology of fatal cases associated with pandemic H1N1 influenza 2009. Euro Surveill. 2009;14(33):1–6.
7. Cherian T, Mulholland EK, Carlin JB, Ostensen H, Amin R, de Campo M, et al. Standardized interpretation of paediatric chest radiographs for the diagnosis of pneumonia in epidemiological studies. Bull WHO. 2005;83(5):353–9.
8. Levine OS, O’Brien KL, Deloria-Knoll M, Murdoch DR, Feikin DR, DeLuca AN, et al. The pneumonia etiology research for child health project: a 21st century childhood pneumonia etiology study. Clin Infect Dis. 2012;54 Suppl 2:S93–101.
9. Fineberg HV. Pandemic preparedness and response–lessons from the H1N1 influenza of 2009. New Engl J Med. 2014;370(14):1335–42.
10. Kramer MS, Roberts-Brauer R, Williams RL. Bias and ‘overcall’ in interpreting chest radiographs in young febrile children. Pediatrics. 1992;90(1 Pt 1):11–3.
11. McCarthy PL, Spiesel SZ, Stashwick CA, Ablow RC, Masters SJ, Dolan Jr TF. Radiographic findings and etiologic diagnosis in ambulatory childhood pneumonias. Clin Pediatr (Phila). 1981;20(11):686–91.
12. Swingler GH. Observer variation in chest radiography of acute lower respiratory infections in children: a systematic review. BMC Med Imaging. 2001;1(1):1–5.
13. O’Grady KA, Torzillo PJ, Ruben AR, Taylor-Thomson D, Valery PC, Chang AB. Identification of radiological alveolar pneumonia in children with high rates of hospitalized respiratory infections: comparison of WHO-defined and pediatric pulmonologist diagnosis in the clinical context. Pediatr Pulmonol. 2012;47(4):386–92.
14. Zhao C, Gan Y, Sun J. Radiographic study of severe influenza-A (H1N1) disease in children. Eur J Radiol. 2011;79(3):447–51.
15. Riquelme R, Torres A, Rioseco ML, Ewig S, Cilloniz C, Riquelme M, et al. Influenza pneumonia: a comparison between seasonal influenza virus and the H1N1 pandemic. Eur Respir J. 2011;38(1):106–11.
16. Soudack M, Ben-Shlush A, Raviv-Zilka L, Jacobson J, Mendelson E. Chest radiograph findings in children with laboratory confirmed pandemic H1N1 virus infection. J Med Imaging Radiat Oncol. 2011;55(3):275–8.
17. Jartti A, Rauvala E, Kauma H, Renko M, Kunnari M, Syrjala H. Chest imaging findings in hospitalized patients with H1N1 influenza. Acta Radiol. 2011;52(3):297–304.
18. Huang QS, Baker M, McArthur C, Roberts S, Williamson D, Grant C, et al. Implementing hospital-based surveillance for severe acute respiratory infections caused by influenza and other respiratory pathogens in New Zealand. Western Pac Surveill Response J. 2014;5(2):23–30.
19. Code of health and disability services consumers’ rights. http://www.hdc.org.nz/media/24833/leaflet%20code%20of%20rights.pdf. Accessed 22 December 2015.
20. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.
21. Bramson RT, Griscom NT, Cleveland RH. Interpretation of chest radiographs in infants with cough and fever. Radiology. 2005;236(1):22–9.
22. Hansen J, Black S, Shinefield H, Cherian T, Benson J, Fireman B, et al. Effectiveness of heptavalent pneumococcal conjugate vaccine in children younger than 5 years of age for prevention of pneumonia: updated analysis using World Health Organization standardized interpretation of chest radiographs. Pediatr Infect Dis J. 2006;25(9):779–81.
23. Fine MJ, Auble TE, Yealy DM, Hanusa BH, Weissfeld LA, Singer DE, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. New Engl J Med. 1997;336(4):243–50.
24. Lim WS, van der Eerden MM, Laing R, Boersma WG, Karalus N, Town GI, et al. Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study. Thorax. 2003;58(5):377–82.
25. Cleveland RH, Stamoulis C, Sawicki G, Kelliher E, Zucker EJ, Wood C, et al. Brasfield and Wisconsin scoring systems have equal value as outcome assessment tools of cystic fibrosis lung disease. Pediatr Radiol. 2014;44(5):529–34.
26. Moncada DC, Rueda ZV, Macias A, Suarez T, Ortega H, Velez LA. Reading and interpretation of chest X-ray in adults with community-acquired pneumonia. Braz J Infect Dis. 2011;15(6):540–6.
27. Johnson J, Kline JA. Intraobserver and interobserver agreement of the interpretation of pediatric chest radiographs. Emerg Radiol. 2010;17(4):285–90.
28. Aviram G, Bar-Shai A, Sosna J, Rogowski O, Rosen G, Weinstein I, et al. H1N1 influenza: initial chest radiographic findings in helping predict patient outcome. Radiology. 2010;255(1):252–9.
29. Edwards M, Lawson Z, Morris S, Evans A, Harrison S, Isaac R, et al. The presence of radiological features on chest radiographs: how well do clinicians agree? Clin Radiol. 2012;67(7):664–8.