Disease Prediction Models and Operational Readiness
Courtney D. Corley1*, Laura L. Pullum2, David M. Hartley3, Corey Benedum1, Christine Noonan1,
Peter M. Rabinowitz4, Mary J. Lancaster1
1 Pacific Northwest National Laboratory, Richland, Washington, United States of America, 2 Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States of
America, 3 Georgetown University Medical Center, Washington, DC, United States of America, 4 Yale University School of Medicine, New Haven, Connecticut, United
States of America
Abstract
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents
and can forecast the occurrence of a disease event. We define a disease event to be a biological event with a focus on the
One Health paradigm. These events are characterized by evidence of infection and/or a disease condition. We reviewed
models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models
involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched
commercial and government databases and harvested Google search results for eligible models, using terms and phrases
provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and
ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was
established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As
a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models,
classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6),
spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources
(e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A
component of this review is the identification of verification and validation (V&V) methods applied to each model, if any
V&V method was reported. All models were classified as either having undergone Some Verification or Validation method,
or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease
prediction models based upon established Technology Readiness Level definitions.
Citation: Corley CD, Pullum LL, Hartley DM, Benedum C, Noonan C, et al. (2014) Disease Prediction Models and Operational Readiness. PLoS ONE 9(3): e91989.
doi:10.1371/journal.pone.0091989
Editor: Niko Speybroeck, Université Catholique de Louvain, Belgium
Received November 2, 2012; Accepted February 19, 2014; Published March 19, 2014
Copyright: © 2014 Corley et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This study was supported through a contract to Pacific Northwest National Laboratory from the National Biosurveillance Integration Center, Office of
Health Affairs, and the Science and Technology Directorate, Chemical and Biological Division, Threat Characterization and Attribution Branch, of the U.S.
Department of Homeland Security (DHS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the
manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: [email protected]
Table 1. Sample search terms and phrases.
Biosurveillance; Bioterror* and model; CBRN model*; Disease forecast; Disease outbreak origin; Epidemic model*; Infectious disease surveillance; Pathogen detection; Population dynamic + outbreak; Remote sensing + disease forecast; Spatial disease model; Vector-borne disease model
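The Methods describe de-duplicating the citations retrieved with these search terms across multiple databases. A minimal sketch of that de-duplication step is shown below; the record layout (a `title` field) is an assumption for illustration, not the actual database export schema.

```python
# Hedged sketch of citation de-duplication by normalized title.
# The {"title": ...} record shape is assumed, not the real export format.

def normalize(title):
    """Case-fold and strip non-alphanumerics so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(citations):
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for record in citations:
        key = normalize(record["title"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

if __name__ == "__main__":
    records = [
        {"title": "Climate and satellite indicators to forecast Rift Valley fever epidemics"},
        {"title": "Climate and Satellite Indicators to Forecast Rift Valley Fever Epidemics."},
        {"title": "Environmental signatures associated with cholera epidemics"},
    ]
    print(len(deduplicate(records)))  # -> 2
```

In practice a real pipeline would also match on authors and year, since different papers can share a title; normalized-title matching is only the simplest useful key.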
pathogens (e.g., Aphthae epizooticae), and plant pathogens (e.g., soybean and wheat rusts). Examples of evidence of condition include accidental or deliberate events affecting air or water quality (e.g., volcanic ash, pesticide runoff), economically motivated adulteration of the food and pharmaceutical supply, and intentional exposure. In the context of this article, a biosurveillance model is broadly defined as an abstract computational, algorithmic, statistical, or mathematical representation that produces informative output related to event detection or event risk [23]. The model is formulated with a priori knowledge and may ingest, process, and analyze data. A biosurveillance model may be proactive or anticipatory (e.g., used to detect or forecast an event, respectively), it may assess risk, or it may be descriptive (e.g., used to understand the dynamics or drivers of an event) [23].

There also is a true lack of implementation of such models in routine surveillance and control activities; as a result, there is not an active effort to build and improve capacity for such model implementation in the future [24–27]. When it comes to emerging infectious disease events, or the intentional or accidental release of a bioterrorism agent, most such pathogens are zoonotic (transmitted from animal to human) in origin [28–30]. Therefore, in assessing disease prediction models for biosurveillance preparedness, it is reasonable to include a focus on agents of zoonotic origin that could arise from wildlife or domestic animal populations or could affect such animal populations concurrently with human populations [31]. To date, surveillance systems for tracking disease events in animals and humans have arisen largely in isolation, leading to calls for better integration of human and animal disease surveillance data streams [32] to better prepare for emerging and existing disease threats. Recent reports have shown some utility for such linkage [26,33].

Two critical characteristics differentiate this work from other infectious disease modeling systematic reviews (e.g., [34–38]). First, we reviewed models that attempted to predict or forecast the disease event (not simply predict transmission dynamics). Second, we considered models involving pathogens of concern as determined by the U.S. National Select Agent Registry as of June 2011 (http://www.selectagents.gov).

Methods

Subject matter experts were asked to supply keywords and phrases salient to the research topic. A sample of the keywords and phrases used is shown in Table 1. Multiple searches were conducted in bibliographic databases covering the broad areas of medicine, physical and life sciences, the physical environment, government, and security. There were no restrictions placed on publication date or language of publication. Abstracts and citations of journal articles, books, books in a series, book sections or chapters, edited books, theses and dissertations, conference proceedings and abstracts, and technical reports containing the keywords and phrases were reviewed. The publication dates of the search results returned are bound by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. The databases queried resulted in 12,152 citations being collected. Irrelevant citations on the topics of sexually transmitted diseases, cancer, and diabetes were retrieved. We de-duplicated and removed extraneous studies, resulting in a collection of 6,503 publications. We also collected 13,767 web documents based on Google queries, often referred to as Google harvesting. We down-selected the web documents for theses and dissertations, reducing this number to 21. Citations not relevant to the study of select agents, such as those on sexually transmitted diseases, cancer, and diabetes, were identified and removed, leaving 6,524 documents. See Checklist S1 for a list of information sources used in this study.

Next, we filtered citations by hand based upon the definition of a biosurveillance model presented in the introduction and for select agents, which resulted in 117 curated papers. Of these 117 papers, 54 were considered relevant to the study based on our selection criteria; however, 10 of these dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. See Figure S1 for a graphic summary of the data reduction methodology and Checklist S1 for the PRISMA guidelines used for the evaluation of the 44 papers. To enable real-time collaboration and sharing of the literature, the citations were exported to the Biosurveillance Model Catalog housed at http://BioCat.pnnl.gov.

The models in the selected publications were classified in the categories listed below. These categories are not mutually exclusive, and publications involving multiple modeling and analytic approaches were assigned to multiple categories.

Risk Assessment models correlate risk factors for a specific location based upon weather and other covariates to calculate disease risk, similar to a forest fire warning. This type of model is commonly referred to as ecological niche modeling or disease risk mapping [39–41].

Event Prediction models assign a probability for when and where a disease event is likely to occur based upon specific data sources and variables. The difference between event prediction and risk assessment is the end product of the model: in the former, the output is the location and time period in which a disease outbreak will occur, while a risk assessment model provides the risk of an outbreak occurring under specified conditions [42–44].

Spatial models forecast the geographic spread of a disease after it occurs based upon the relationship between the outbreak and primarily geospatial factors. It should be noted that spatial models can be considered dynamical models in that they change in time, e.g., spatial patch models [45–47].

Dynamical models examine how a specific disease moves through a population. These models may include parameters, such as movement restrictions, that have the effect of interventions on the severity of an epidemic or epizootic. These models may be used to predict and understand the dynamics of how a disease will spread through a naïve population or when the pathogenicity will change [48,49].

Event Detection models attempt to identify outbreaks either through sentinel groups or through the collection of real-time diagnostic, clinical, or syndromic data, detecting spikes in signs, symptoms, or syndromes that are indicative of an event (e.g., event-based biosurveillance) [50,51].

The disease agents examined in this study were taken from the U.S. National Select Agent Registry and include human, plant, and animal pathogens. The agents described within these models are grouped non-exclusively by their mode of transmission: direct contact, vector-borne, water- or soil-borne, and non-specific.

Next, we analyzed the data sources in order to find ways to improve the operational use of biosurveillance models. These non-mutually exclusive data source categories were: "Epidemiological Data from the Same Location"; "Epidemiological Data from a Different Location"; "Governmental and Non-Governmental Organizations"; "Satellite (Remote Sensing)"; "Simulated"; "Laboratory Diagnostic"; "Expert Opinion"; and "Literature." If a paper cited any form of literature that was not epidemiological, weather, or population data, it was categorized within the literature group. An example of this is references to the preferred natural habitat or survival requirements for a disease agent. Papers that cited epidemiological data from a location independent of the validation data were grouped under "Epidemiological Data from a Different Location," "Simulated Data," and "Experimental Data." Papers grouped under "Expert Opinion" did not explicitly state from whom or what type of data was used.

In addition to the model data sources, twelve non-mutually exclusive variable categories were identified to facilitate understanding of how these models could be used effectively by the research and operational communities. Models with variables describing location or distance and rainfall or temperature were categorized as "Geospatial" and "Climatic," respectively. Models that took into account the epidemiological (population-level) characteristics of the disease were grouped together as "Epidemiological." Variables that dealt specifically with the agent or etiology were categorized under "Etiology." Population size, density, and other related variables were grouped into either "Affected Population" (i.e., the animal, plant, or human population affected by the disease) or "Vectors and Other Populations" (i.e., populations of the vector or any other population that may be considered within the model but that was not affected by the disease). Models that utilized remote sensing data such as the "normalized difference vegetation index" (NDVI), a measurement used to determine the amount of living green vegetation in a targeted area, were grouped within "Satellite (Remote Sensing)." "Agricultural" techniques, such as tillage systems, were also identified as variables in some models, as were "Clinical" and "Temporal" variables. The final two variable types identified were "Topographic and Environmental," such as altitude or forest type, and "Social, Cultural, and Behavioral," which included religious affiliations and education.

There are many verification and validation (V&V) standards (e.g., ISO/IEC 15288-2008 [52], IEEE Std 1012-2012 [53], ISO/IEEE 12207 [54]) and definitions, including some that are specifically focused on modeling and simulation: NASA-STD-7009 [55], the Verification, Validation, and Accreditation Recommended Practices Guide from the U.S. Department of Defense (U.S. DoD) Modeling & Simulation Coordination Office [56], U.S. Army TRADOC Reg 5-11 [57], the U.S. Navy Best Practices Guide for Verification, Validation, and Accreditation of Legacy Modeling and Simulation [58], and U.S. DoD MIL-STD-3022 [59]. For instance, the U.S. DoD definition of verification for modeling and simulation is "the process of determining that a model implementation and its associated data accurately represent the developer's conceptual description and specifications" [56]. The U.S. DoD definition of validation for modeling and simulation is "the process of determining the degree to which a model and its associated data provide an accurate representation of the real world from the perspective of the intended uses of the model" [56]. In the words of Boehm, verification answers the question "Did we build the system right?" and validation answers "Did we build the right system?" [60]. Further, the "official certification that a model, simulation, or federation of models and simulations and its associated data is acceptable for use for a specific purpose" is its accreditation [56], which answers the question of whether the model or simulation is credible enough to be used.

All models were classified as either a) having undergone Some V&V method or b) No V&V, based only on the paper(s) cited for that model. Those models classified as having undergone Some V&V were further classified based upon the type of V&V method(s) applied. The V&V method classifications used were "Statistical Verification"; "Sensitivity Analysis (verification)"; "Specificity and Sensitivity (verification)"; "Verification using Training Data"; "Validation using Temporally Independent Data"; and "Validation using Spatially and Temporally Independent Data." In general, no conclusions on model credibility can be based on the types of V&V methods used, given that a) none of the papers were focused on model V&V, and b) seldom are all aspects of V&V reported upon in the types of papers surveyed. The most frequently used verification method is some form of statistical verification. It is important to note that verification methods do not necessarily imply that a model is correct. In this type of verification, methods such as Kappa (used to assess the degree to which two or more persons, examining the same data, agree on the assignment of data to categories), area under the receiver operating characteristic (ROC) curve, goodness of fit, and other statistical values are examined to help measure the ability of the model to accurately describe or predict the outbreak. Several models plotted observed data against predicted data as a V&V technique. This technique was further delineated depending on whether the observed data were part of the model's training data (verification), temporally independent of the training data (validation), or temporally and spatially independent of the training data (validation). The remaining models applied verification methods such as sensitivity analysis, which examined whether a model functioned as it was believed to when different values were input into important variables, or specificity and sensitivity metrics, which measure the ability to determine true positives and negatives. We acknowledge that not all of these V&V techniques are applicable to every model type. Also note that the use of a verification or validation method does not constitute complete verification or validation of the model. For instance, the IEEE standard for software verification and validation (IEEE Std 1012-2005) includes five V&V processes, supported by ten V&V activities, in turn implemented by 79 V&V tasks. To put this in the perspective of the study, the V&V methods noted herein are at or below the level of task. Assessment of inherent biases present within the source documents and models reviewed is beyond the scope of this study.

Results and Analysis

The publications' models were categorized as follows (see Table 2): event prediction (n = 4), spatial (n = 26), ecological niche
Table 2. Model categories (citations; number of models).
Dynamical [74–82] 9
Event Detection [80,83–87] 6
Event Prediction [74,88–90] 4
Review Articles [91–93] 3*
Risk Assessment [21,62,75–78,88,91,94–107] 28
Spatial [61,74,75,78,79,83–85,89,94–99,108,109] 26
(n = 28), diagnostic or clinical (n = 6), spread or response (n = 9), and reviews (n = 3). The event prediction type includes only four models, possibly explained by the difficulty of creating a model that truly predicts disease events. In general, these models were applied to (or involved) small or special populations (e.g., populations with chronic diseases). According to Favier et al., the lack of prediction models could be addressed by taking a "toy model" and creating a predictive model [61]. If models that are similar to predictive models, such as risk assessment models, could be modified into such, the number of predictive models could be increased.

Transmission Mode

The transmission modes of the models' disease agents spanned the following: direct contact (n = 24), vector-borne (n = 15), water- or soil-borne (n = 7), and non-specific (n = 3) (see Table 3). Direct contact and vector-borne models accounted for approximately 84% of all of the evaluated models.

Table 3 footnote: If a model involved multiple agents in different categories, the paper was placed in multiple groups.
doi:10.1371/journal.pone.0091989.t003

Data Sources and Variables

The data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) and variable parameters (e.g., etiology, climatic, spatial, cultural) for each model were recorded and reviewed (see Table 4). The two categories that contained the most data sources were "Epidemiological Data from the Same Location" (n = 25), such as a previous outbreak, and data gathered from an organization, such as census data. Thirty-two models used some type of "Literature" (n = 14); an important fact is that the majority of data used in the models were scientifically measured.

Categories of variables and parameters utilized in the models supplemented the data sources. The two largest groupings were "Geospatial" and "Climatic" variables. According to Eisen et al. [62], models that do not use epidemiological data produce results with lower confidence, such that users may not trust the results or may not trust that the findings are relevant. Similarly, users may not have faith that models are structured in a biologically meaningful way if biologic or epidemiologic data do not appear in a model [63]. Nonetheless, before incorporating epidemiological data in disease event prediction models, further research is needed to determine whether such data will increase the model's robustness, sensitivity, and specificity. Factors such as the accuracy and precision of epidemiological data will influence this analysis.

To better understand the relationship between the variables and the disease agent's mode of transmission, a graph (Figure 1) was created to show the distribution of different modes of transmission cited for each variable type used in the evaluated models. Table 5 shows the distribution of citations for each variable type. It was noted without surprise that, as more research was done on a mode of transmission, more variables were examined. Furthermore, the variables "Vectors or Other Populations" and "Social, Cultural, Behavioral" were underutilized in the evaluated models. This is unfortunate because these variables typically have a seasonal abundance pattern. Further, human socio-cultural behaviors greatly impact the interactions between human and vector populations, and seasonal meteorological variation can strongly affect vector abundance and competence [64]. Relatively few disease prediction models were identified in which the causative agent was water- or soil-borne [20,65].

Figure 1. The Percentage of Citations Placed in Each Variable Group by Transmission Mode (if a model contained variables from multiple groups, it was placed in each respective group).
doi:10.1371/journal.pone.0091989.g001

Table 5 footnote: If a model contained variables from multiple groups, it was placed in each respective group.
doi:10.1371/journal.pone.0091989.t005

Verification and Validation Methods

The V&V methods applied to each model, if any, were also analyzed; see Table 6. Among the types of papers surveyed, few aspects of V&V are typically reported. The majority of models selected for this study were subjected to some method of verification or validation. Publications on many applications of predictive models typically state statistical, sensitivity analysis, and training data test results. These are necessary, though insufficient, methods to determine the credibility, verification, or validation of a model. For instance, the IEEE standard for software verification and validation (IEEE Std 1012-2005) includes five V&V processes, supported by ten V&V activities, which are in turn implemented by 79 V&V tasks. To put this into perspective, the V&V methods noted herein are at or below the level of task. The papers reported the use of V&V methods for many models but not for others, and in the latter case it is unclear whether V&V methods were not used or merely unreported. Another positive observation is the significant use of real epidemiological data to examine aspects of model validity. Even though "Validation using Spatially and Temporally Independent Data" was used for one of the smallest sets of models, use of actual data versus predicted data for validation tests was reported for approximately 33% of the models. The reader is encouraged to understand that the use of a verification or validation method does not constitute complete verification or validation of the model [66–68].

Operational Readiness

Given the importance of these models to national and international health security [69], we note the importance of a categorization scheme that defines a model's viability for use in an operational setting. To our knowledge, none exists, but below we illustrate one possibility, based upon the "technology readiness level" (TRL) originally defined by NASA [70] to evaluate the technology readiness of space development programs. It is important to note that NASA TRLs were not developed to cover modeling and simulation, much less biosurveillance models, so the definitions require modification. In the public health domain, TRLs can assist decision makers in understanding the operational
readiness level, maturity, and utility of a disease event or prediction model. Advantages of utilizing the TRL paradigm are that it can provide a common understanding of biosurveillance model maturity, inform risk management, support decision making concerning government-funded research and technology investments, and support decisions concerning the transition of technology. We also point out characteristics of TRLs that may limit their utility: the operational readiness of a model does not necessarily align with technology maturity (V&V); a mature disease prediction or forecasting model may possess a greater or lesser degree of readiness for use in a particular geographic region than one of lower maturity; and numerous additional factors must be considered, including the relevance of the model's operational environment, cost, technological accessibility, sustainability, etc.

"Operational readiness" is a concept that is user and intended use dependent. A model that one user may consider ready may not suffice for another user. Different users have different needs according to their missions. For example, in the case of surveillance models, some users will need to see everything reported by event-based surveillance systems (i.e., they are unconcerned with specificity, but sensitivity is of high value to them), while other users may demand low false alarm rates (i.e., specificity is important for their needs) [71,72]. The Operational Readiness Level rating of any given model will thus depend upon the diverse questions and purposes to which any given model is applied.

An initial scheme modifying these definitions is shown in Table 7. In such a scheme, the models would be characterized based on how the model was validated, what type of data was used to validate the model, and the validity of the data used to create the model. The V&V of predictive models, regardless of realm of application, is an area that requires better definition and techniques. The results of model V&V can be used in the definition of model operational readiness; however, the readiness level definitions must also be accompanied by data validation, uncertainty quantification, and model fitness-for-use evaluations, many of which are areas of active research [73].

Discussion

Our study was conducted to characterize published select-agent pathogen models that are capable of predicting disease events, in order to determine opportunities for expanded research and to define operational readiness levels [38]. Out of an initial collection of 6,524 items, 44 papers met inclusion criteria and were systematically reviewed. Models were classified as one or more of the following: event prediction, spatial, ecological niche,
Table 6. Verification and validation method classifications (citations; number of models).
No V&V [61,78,86,92,109] 5
Sensitivity Analysis (verification) [79–82,94,99,112,113] 8
Specificity and Sensitivity (verification) [1,75,84,95] 4
Statistical Verification [21,75,77,79,82,83,88,94–97,99–106,108,115] 21
Validation using Spatially and Temporally Independent Data [79,90] 2
Validation using Temporally Independent Data [84,88,97,102,103,111] 6
Verification using Training Data [21,75,81,85,89,97,104,105,107,112,115] 11
If a model used multiple methods for its verification or validation, it was categorized in each respective group.
doi:10.1371/journal.pone.0091989.t006
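As a concrete illustration of the simpler verification statistics discussed above, the sketch below computes sensitivity, specificity, and Cohen's kappa from paired observed and predicted outbreak labels. The label sequences are hypothetical and are not drawn from any reviewed model.

```python
# Hedged sketch: verification statistics for a binary outbreak predictor.
# Labels are hypothetical (1 = outbreak observed/predicted, 0 = none).

def confusion_counts(observed, predicted):
    """Return (tp, fp, tn, fn) for paired binary sequences."""
    tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
    fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
    tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
    fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fp, tn, fn):
    """Agreement between observed and predicted labels beyond chance."""
    n = tp + fp + tn + fn
    p_observed = (tp + tn) / n
    # Chance agreement estimated from the marginal frequencies of each label.
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)
    p_no = ((tn + fp) / n) * ((tn + fn) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

if __name__ == "__main__":
    observed  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
    predicted = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0]
    tp, fp, tn, fn = confusion_counts(observed, predicted)
    sens, spec = sensitivity_specificity(tp, fp, tn, fn)
    kappa = cohens_kappa(tp, fp, tn, fn)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")
    # prints: sensitivity=0.75 specificity=0.83 kappa=0.58
```

Whether such statistics count as verification or validation depends on the data used: computed against the model's own training data they verify fit; computed against temporally (and spatially) independent data they speak to validity.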
Table 7. Initial Definitions of Operational Readiness Levels for Disease Prediction Models.
Level Definition
doi:10.1371/journal.pone.0091989.t007
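To make the idea of validation-based readiness levels concrete, the sketch below encodes one hypothetical way a model's readiness could be ranked from the kind of V&V evidence reported for it. The ordering (stronger independence of validation data implies higher readiness) is an illustrative assumption and does not reproduce the definitions in Table 7.

```python
# Hypothetical sketch: ranking a model's operational readiness from the type
# of V&V evidence reported for it. The rank values below are illustrative
# assumptions, not the Table 7 definitions.

# V&V evidence classes from this review, ordered weakest (0) to strongest (3).
VV_EVIDENCE_RANK = {
    "none": 0,
    "verification_training_data": 1,
    "statistical_verification": 1,
    "sensitivity_analysis": 1,
    "specificity_and_sensitivity": 1,
    "validation_temporally_independent": 2,
    "validation_spatially_temporally_independent": 3,
}

def readiness_rank(evidence):
    """Return the strongest evidence rank among a model's reported V&V methods."""
    if not evidence:
        return 0
    return max(VV_EVIDENCE_RANK[e] for e in evidence)

if __name__ == "__main__":
    model_a = ["statistical_verification", "verification_training_data"]
    model_b = ["validation_spatially_temporally_independent"]
    print(readiness_rank(model_a))  # -> 1 (verification-only evidence)
    print(readiness_rank(model_b))  # -> 3 (independent-data validation)
```

As the text notes, any real scheme would also need to weigh data validation, uncertainty quantification, and fitness for the intended use, not just the validation evidence class.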
References
1. Anderson RM, May RM (1991) Infectious diseases of humans: dynamics and control. Oxford University Press.
2. Boily MC, Masse B (1997) Mathematical models of disease transmission: a precious tool for the study of sexually transmitted diseases. Can J Public Health 88: 255–265.
3. Angulo JJ (1987) Interdisciplinary approaches in epidemic studies—II: Four geographic models of the flow of contagious disease. Soc Sci Med 24: 57–69.
4. Riley S (2007) Large-scale Spatial-transmission Models of Infectious Disease. Science 316: 1298–1301.
5. Perez L, Dragicevic S (2009) An agent-based approach for modeling dynamics of contagious disease spread. Int J Health Geogr 8: 50.
6. Van den Broeck W, Gioannini C, Goncalves B, Quaggiotto M, Colizza V, et al. (2011) The GLEaMviz computational tool, publicly available software to explore realistic epidemic spreading scenarios at the global scale. BMC Infect Dis 11: 50.
7. Miller JC, Slim AC, Volz EM (2012) Edge-based compartmental modelling for infectious disease spread. J Royal Soc Interface 9: 890–906.
8. Siettos CI, Russo L (2013) Mathematical modeling of infectious disease dynamics. Virulence 4: 297–306.
9. Brownstein JS, Holford TR, Fish D (2003) A climate-based model predicts the spatial distribution of the Lyme disease vector Ixodes scapularis in the United States. Environmental Health Perspectives 111: 1152–1157.
10. Vasquez-Prokopec GM, Bisanzio D, Stoddard ST, Paz-Soldan V, Morrison AC, et al. (2013) Using GPS technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment. PLoS One 8: e58802.
11. Chan EH, Sahai V, Conrad C, Bronstein JS (2011) Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance. PLoS Negl Trop Dis 5: e1206.
12. Bush RM, Bender CA, Subbarao K, Cox NJ, Fitch WM (1999) Predicting the Evolution of Human Influenza A. Science 286: 1921–1925.
13. Liao Y-C, Lee M-S, Ko C-Y, Hsiung CA (2008) Bioinformatics models for predicting antigenic variants of influenza A/H3N2 virus. Bioinformatics 24: 505–512.
14. Johnson DA, Alldredge JR, Vakoch DL (1996) Potato late blight forecasting models for the semiarid environment of south-central Washington. Phytopathology 86: 480–484.
15. Yuen JE, Hughes G (2002) Bayesian analysis of plant disease prediction. Plant Pathology 51: 407–412.
16. Hashimoto S, Murakami Y, Taniguchi K, Nagaid M (2000) Detection of epidemics in their early stage through infectious disease surveillance. International Journal of Epidemiology 29: 905–910.
17. Jackson C, Vynnycky E, Hawker J, Olowokure B, Mangtani P (2013) School closures and influenza: systematic review of epidemiological studies. BMJ Open 3: e002149.
18. Kawaguchi R, Miyazono M, Noda T, Takayama Y, Sasai Y, et al. (2009) Influenza (H1N1) 2009 outbreak and school closure, Osaka Prefecture, Japan. Emerg Infect Dis 15: 1685.
19. Brown ST, Tai JH, Bailey RR, Cooley PC, Wheaton WD, et al. (2011) Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost? A computational simulation of Pennsylvania. BMC Public Health 11: 353.
20. Ford TE, Colwell RR, Rose JB, Morse SS, Rogers DJ, et al. (2009) Using satellite images of environmental changes to predict infectious disease outbreaks. Emerging Infectious Diseases 15: 1341–1346.
21. de Magny GC, Murtugudde R, Sapiano MRP, Nizam A, Brown CW, et al. (2008) Environmental signatures associated with cholera epidemics. Proceedings of the National Academy of Sciences of the United States of America 105: 17676–17681.
22. Linthicum KJ, Anyamba A, Tucker CJ, Kelley PW, Myers MF, et al. (1999) Climate and satellite indicators to forecast Rift Valley fever epidemics in Kenya. Science 285: 397–400.
23. Corley CD, Lancaster MJ, Brigantic RT, Chung JS, Walters RA, et al. (2012)
41. Jentes ES, Poumerol G, Gershman MD, Hill DR, Lemarchand J, et al. (2011) The revised global yellow fever risk map and recommendations for vaccination, 2010: consensus of the Informal WHO Working Group on Geographic Risk for Yellow Fever. The Lancet Infectious Diseases 11: 622–632.
42. Eisen L, Eisen RJ (2011) Using geographic information systems and decision support systems for the prediction, prevention and control of vector-borne diseases. Annu Rev Entomol 56: 41–61.
43. Tatem AJ, Hay SI, Rogers DJ (2006) Global traffic and disease vector dispersal. Proceedings of the National Academy of Sciences of the United States of America 103: 6242–6247.
44. Matsuda F, Ishimura S, Wagatsuma Y, Higashi T, Hayashi T, et al. (2008) Prediction of epidemic cholera due to Vibrio cholerae O1 in children younger than 10 years using climate data in Bangladesh. Epidemiology and Infection 136: 73–79.
45. Brooker S, Hotez PJ, Bundy DAP (2010) The Global Atlas of Helminth Infection: Mapping the Way Forward in Neglected Tropical Disease Control. PLoS Negl Trop Dis 4: e779.
46. Murray KA, Retallick RWR, Puschendorf R, Skerratt LF, Rosauer D, et al. (2011) Assessing spatial patterns of disease risk to biodiversity: implications for the management of the amphibian pathogen, Batrachochytrium dendrobatidis. Journal of Applied Ecology 49: 163–173.
47. Tatem AJ, Baylis M, Mellor PS, Purse BV, Capela R, et al. (2003) Prediction of bluetongue vector distribution in Europe and north Africa using satellite imagery. Veterinary Microbiology 97: 13–29.
48. Estrada-Peña A, Zatansever Z, Gargili A, Aktas M, Uzun R, et al. (2007) Modeling the spatial distribution of crimean-congo hemorrhagic fever outbreaks in Turkey. Vector Borne & Zoonotic Diseases 7: 667–678.
49. Liccardo A, Fierro A (2013) A lattice model for influenza spreading. PLoS One 8: e63935.
50. Keller M, Blench M, Tolentino H, Freifeld CC, Mandl KD, et al. (2009) Use of unstructured event-based reports for global infectious disease surveillance. Emerg Infect Dis 15: 689–695.
51. Takla A, Velasco E, Benzler J (2012) The FIFA Women's World Cup in Germany 2011: A practical example for tailoring an event-specific enhanced
Assessing the continuum of event-based biosurveillance through an operational infectious disease surveillance system. BMC Public Health 12: 576.
lens. Biosecur Bioterror 10: 131–141. 52. ISO/IEC (2008) ISO/IEC 15288:2008 Systems and software engineering —
24. Halliday J, Daborn C, Auty H, Mtema Z, Lembo T, et al. (2012) Bringing System life cycle processes. International Organization for Standardization/
together emerging and endemic zoonoses surveillance: shared challenges and a International Electrotechnical Commission.
common solution. Philosophical Transactions of the Royal Society B: 53. IEEE (2012) IEEE Std 1012-2012 IEEE Standard for System and Software
Biological Sciences 367: 2872–2880. Verification and Validation. IEEE Standards Association.
25. Halliday JEB, Meredith AL, Knobel DL, Shaw DJ, Bronsvoort BMDC, et al. 54. ISO/IEC-IEEE (2008) ISO/IEC 12207:2008, IEEE Std 12207-2008 Systems
(2007) A framework for evaluating animals as sentinels for infectious disease and Software Engineering – Software Life Cycle Processes. International
surveillance. Journal of the Royal Society Interface 4: 973–984. Organization for Standardization/International Electrotechnical Commission
26. Scotch M, Brownstein J, Vegso S, Galusha D, Rabinowitz P (2011) Human vs. and Institute of Electrical and Electronics Engineers.
Animal Outbreaks of the 2009 Swine-Origin H1N1 Influenza A epidemic. 55. NASA (2008) NASA-STD-7009 Standard for Models and Simulations.
EcoHealth 8: 376–380. National Aeronautics and Space Administration.
27. Scotch M, Odofin L, Rabinowitz P (2009) Linkages between animal and 56. U.S DoD (2011) Verification, Validation, and Accreditation (VV&A)
human health sentinel data. Bmc Veterinary Research 5: 1–9. Recommended Practices Guide (RPG). Modeling & Simulation Coordination
28. Woolhouse MEJ, Matthews L, Coen P, Stringer SM, Foster JD, et al. (1999) Office, U.S. Department of Defense.
Population dynamics of scrapie in a sheep flock. Philosophical Transactions of 57. U.S Army (1998) TRADOC Reg 5-11 U.S. Army Training and Doctrine
the Royal Society of London Series B-Biological Sciences 354: 751–756. Command (TRADOC) Models and Simulations (M&S) and Data Manage-
29. Woolhouse M, Gaunt E (2007) Ecological origins of novel human pathogens. ment. United States Army Training and Doctrine Command.
Critical Reviews in Microbiology 33: 231–242. 58. U.S Navy (2005) Best Practices Guide for Verification, Validation, and
30. Daszak P, Cunningham AA, Hyatt AD (2000) Emerging Infectious Diseases of Accreditation of Legacy Modeling and Simulation. Department of the Navy,
Wildlife— Threats to Biodiversity and Human Health. Science 287: 443–449. Navy Modeling & Simulation Office.
31. Rabinowitz P, Gordon Z, Chudnov D, Wilcox M, Odofin L, et al. (2006) 59. U.S DoD (2008) MIL-STD-3022 Documentation of Verification, Validation &
Animals as sentinels of bioterrorism agents. Emerging Infectious Diseases 12: Accreditation (VV&A) for Models and Simulations. U.S. Department of
647–652. Defense, Modeling and Simulation Coordination Office.
32. Models PCoI-S, Workshop GfCAtaIOHBS—A, Medicine Io (2012) Informa- 60. Boehm BW (1981) Software Engineering Economics; Yeh RT, editor: Prentice-
tion Sharing and Collaboration: Applications to Integrated Biosurveillance: Hall.
Workshop Summary: The National Academies Press. 61. Favier C, Chalvet-Monfray K, Sabatier P, Lancelot R, Fontenille D, et al.
33. Daszak P (2009) A Call for ‘‘Smart Surveillance’’: A Lesson Learned from (2006) Rift Valley fever in West Africa: the role of space in endemicity. Trop
H1N1. EcoHealth 6: 1–2. Med Int Health 11: 1878–1888.
34. Lloyd-Smith JO, George D, Pepin KM, Pitzer VE, Pulliam JRC, et al. (2009) 62. Eisen RJ, Eisen L (2008) Spatial Modeling of human risk of exposure to vector-
Epidemic Dynamics at the Human-Animal Interface. Science 326: 1362–1367. borne pathogens based on epidemiological versus arthropod vector data.
35. Feighner BH, Eubank S, Glass RJ, Davey VJ, Chrétien JP, et al. (2009) Journal of Medical Entomology 45: 181–192.
Infectious disease modeling and military readiness. Emerging Infectious 63. Margevicius KJ, Generous N, Taylor-McCabe KJ, Brown M, Daniel WB, et al.
Diseases 15: e1. (2014) Advancing a framework to enable characterization and evaluation of
36. Bravata DM, Sundaram V, McDonald KM, Smith WM, Szeto H, et al. (2004) data streams useful for biosurveillance. PLoS ONE 9: e83730.
Evaluating detection and diagnostic decision support systems for bioterrorism 64. Hartley DM, Barker CM, Le Menach A, Niu T, Gaff HD, et al. (2012) Effects
response. Emerg Infect Dis 10: 100–108. of Temperature on Emergence and Seasonality of West Nile Virus in
37. Rolka H, Burkom H, Cooper GF, Kulldorff M, Madigan D, et al. (2007) Issues California. The American Journal of Tropical Medicine and Hygiene 86: 884–
in applied statistics for public health bioterrorism surveillance using multiple 894.
data streams: Research needs {. Statistics in Medicine 26: 1834–1856. 65. Pascual M (2000) Cholera Dynamics and El Nino-Southern Oscillation.
38. Prieto DM, Das TK, Savachkin AA, Uribe A, Izurieta R, et al. (2012) A Science 289: 1766–1769.
systematic review to identify areas of enhancements of pandemic simulation 66. Pullum LL, Cui X. Techniques and Issues in Agent-Based Model Validation;
models for operational use at provincial and local levels. BMC Public Health 2012; Boston, MA.
12: 251. 67. Pullum LL, Cui X. A Hybrid Sensitivity Analysis Approach for Agent-based
39. Costa J, Peterson AT, Beard CB (2002) Ecologic niche modeling and Disease Spread Models; 2012; Boston, MA.
differentiation of poplations of Triatoma brasiliensis Neiva, 1911, the most 68. Koopman J (2004) Modeling Infection Transmission. Annual Review of Public
important Chagas’ disease vector in northeastern Brazil (Hemiptera, Reduvi- Health 25: 303–326.
idae, Triatominae). Am J Trop Med Hyg 67: 516–520. 69. Bernard KW (2013) Health and national security: A contemporary collision of
40. Knorr-Held L, Besag J (1998) Modelling risk from a disease in time and space. cultures. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and
Stat Med 17: 2045–2060. Science 11: 157–162.
70. Mankins JC (1995) Technology Readiness Levels. 93. Yamamoto T, Tsutsui T, Nishiguchi A, Kobayashi S (2008) Evaluation of
71. Hartley DM (2014) Using social media and other Internet data for public surveillance strategies for bovine brucellosis in Japan using a simulation model.
health surveillance: The importance of talking. Milbank Quarterly In press. Preventive Veterinary Medicine 86: 57–74.
72. Métras R, Collins LM, White RG, Alonso S, Chevalier V, et al. (2011) Rift 94. Fichet-Calvet E, Rogers DJ (2009) Risk maps of lassa fever in West Africa. Plos
Valley fever epidemiology, surveillance, and control: what have models Neglected Tropical Diseases 3: e388.
contributed? Vector-Borne and Zoonotic Diseases 11: 761–771. 95. Green AL, Dargatz DA, Herrero MV, Seitzinger AH, Wagner BA, et al. (2005)
73. Pitman R, Fisman D, Zaric GS, Postma M, Kretzschmar M, et al. (2012) Risk factors associated with herd-level exposure of cattle in Nebraska, North
Dynamic Transmission Modeling: A Report of the ISPOR-SMDM Modeling Dakota, and South Dakota to bluetongue virus. American Journal of
Good Research Practices Task Force-5. Value in health: the journal of the Veterinary Research 66: 853–860.
International Society for Pharmacoeconomics and Outcomes Research 15: 96. Kim DR, Ali M, Thiem VD, Park JK, von Seidlein L, et al. (2008) Geographic
828–834. analysis of shigellosis in Vietnam. Health and Place 14: 755–767.
74. Jewell CP, Kypraios T, Christley RM, Roberts GO (2009) A novel approach to 97. Kolivras KN, Comrie AC (2003) Modeling valley fever (coccidioidomycosis)
real-time risk prediction for emerging infectious diseases: A case study in Avian incidence on the basis of climate conditions. Int J Biometeorol 47: 87–101.
Influenza H5N1. Preventive Veterinary Medicine 91: 19–28. 98. Lipp EK, Huq A, Colwell R (2002) Effects Of Global Climate On Infectious
75. Erraguntla M, Ramachandran S, Chang-Nien W, Mayer RJ (2010) Avian Disease: The Cholera Model. Clinical Microbiology Reviews 15: 757–770.
Influenza Datamining Using Environment, Epidemiology, and Etiology 99. Lockhart CY (2008) Surveillance for diseases of poultry with specific reference
Surveillance and Analysis Toolkit (E3SAT). 2010 43rd Hawaii International to avian influenza: Massey University.
Conference on System Sciences (HICSS-43); Honolulu, HI. IEEE. pp. 7 pp. 100. Baptista-Rosas RC, Hinojosa A, Riquelme M (2007) Ecological Niche
76. Hadorn DC, Racloz V, Schwermer H, Stark KDC (2009) Establishing a cost- Modeling of Coccidioides spp. in Western North American Deserts. Annals
effective national surveillance system for Bluetongue using scenario tree of the New York Academy of Sciences 1111: 35–46.
modelling. Veterinary Research40: Article 57. 101. Chhetri BK, Perez AM, Thurmond MC (2010) Factors associated with spatial
77. Hutber AM, Kitching RP, Pilipcinec E (2006) Predictions for the timing and clustering of foot-and-mouth disease in Nepal. Tropical Animal Health and
use of culling or vaccination during a foot-and-mouth disease epidemic. Production 42: 1441–1449.
Research in Veterinary Science 81: 31–36. 102. Cooke III WH, Grala K, Wallis RC (2006) Avian GIS models signal human
78. Mayer D, Reiczigel J, Rubel F (2008) A Lagrangian particle model to predict risk for West Nile virus in Mississippi. International Journal of Health
the airborne spread of foot-and-mouth disease virus. Atmospheric Environment Geographics 5: Article 36.
42: 466–479. 103. Daniel M, Kolar J, Zeman P, Pavelka K, Sadlo J (1998) Predictive map of
79. Martı́nez-López B, Ivorra B, Ramos AM, Sánchez-Vizcaı́no JM (2011) A novel Ixodes vicinus high-incidence habitats and a tick-borne encephalitis risk
spatial and stochastic model to evaluate the within- and between-farm assessment using satellite data. Experimental & Applied Acarology 22: 417–
transmission of classical swine fever virus. I. General concepts and description 433.
of the model. Veterinary Microbiology 147: 300–309. 104. Eisen RJ, Griffith KS, Borchert JN, MacMillan K, Apangu T, et al. (2010)
80. Rubel F, Fuchs K (2005) A decision-support system for real-time risk Assessing human risk of exposure to plague bacteria in northwestern Uganda
assessment of airborne spread of the foot-and-mouth disease virus. Methods based on remotely sensed predictors. American Journal of Tropical Medicine
Inf Med 44: 590–595. and Hygiene 82: 904–911.
81. Bos MEH, Van Boven M, Nielen M, Bouma A, Elders ARW, et al. (2007) 105. Eisen RJ, Reynolds PJ, Ettestad P, Brown T, Enscore RE, et al. (2007)
Estimating the day of highly pathogenic avian influenza (H7N7) virus Residence-linked human plague in New Mexico: A habitat-suitability model.
introduction into a poultry flock based on mortality data. Veterinary Research American Journal of Tropical Medicine and Hygiene 77: 121–125.
38: 493–504. 106. Adjemian JCZ, Girvetz EH, Beckett L, oley JE (2006) Analysis of Genetic
82. Verdugo C, Cardona CJ, Carpenter TE (2009) Simulation of an early warning Algorithm for Rule-Set Production (GARP)Modeling Approach for Predicting
system using sentinel birds to detect a change of a low pathogenic avian Distributions of Fleas Implicatedas Vectors of Plague, Yersinia pestis, in
influenza virus (LPAIV) to high pathogenic avian influenza virus (HPAIV). California. Journal of Medical Entomology 43: 93–103.
Preventive Veterinary Medicine 88: 109–119. 107. Anyamba A, Chretien J-P, Small J, Tucker CJ, Formenty PB, et al. (2009)
83. Mongkolsawat C, Kamchai T (2009) GIS Modeling for Avian Influenza Risk Prediction of a Rift Valley fever outbreak. Proceedings of the National
Areas. International Journal of Geoinformatics 5: 7–12. Academy of Sciences of the United States of America 106: 955–959.
84. Ortiz-Pelaez A, Pfeiffer DU, Tempia S, Otieno FT, Aden HH, et al. (2010) 108. Kleinman K, Lazarus R, Platt R (2004) A generalized linear mixed models
Risk mapping of Rinderpest sero-prevalence in Central and Southern Somalia approach for detecting incident clusters of disease in small areas, with an
based on spatial and network risk factors. Bmc Veterinary Research 6: 22. application to biological terrorism. Am J Epidemiol 159: 217–224.
85. Racloz V, Venter G, Griot C, Stark KDC (2008) Estimating the temporal and 109. Munar-Vivas O, Morales-Osorio JG, Castañeda-Sánchez DA (2010) Use of
spatial risk of bluetongue related to the incursion of infected vectors into field-integrated information in GIS-based maps to evaluate Moko disease
Switzerland. BMC Vet Res 4: 42. (Ralstonia solanacearum) in banana growing farms in Colombia. Crop
86. Radosavljevic V, Belojevic G (2009) A new model of bioterrorism risk Protection 29: 936–941.
assessment. Biosecur Bioterror 7: 443–451. 110. Hartley DM, Nelson NP, Walters RA, Arthur R, Yangarber R, et al. (2010)
87. Purse BV, Baylis M, Tatem AJ, Rogers DJ, Mellor PS, et al. (2004) Predicting The landscape of international event-based biosurveillance. Emerging Health
the risk of bluetongue through time: climate models of temporal patterns of Threats Journal 3: Article e3.
outbreaks in Israel. Revue Scientifique Et Technique-Office International Des 111. Kong X, Wallstrom GL, Hogan WR (2008) A temporal extension of the
Epizooties 23: 761–775. Bayesian aerosol release detector. Lecture Notes in Computer Science
88. Cappelle J, Girard O, Fofana B, Gaidet N, Gilbert M Ecological Modeling of (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
the Spatial Distribution of Wild Waterbirds to Identify the Main Areas Where in Bioinformatics); Raleigh, NC. Springer Verlag. pp. 97–107.
Avian Influenza Viruses are Circulating in the Inner Niger Delta, Mali. 112. Lu HM, Zeng D, Chen H (2008) Bioterrorism event detection based on the
EcoHealth ePub: 1–11. In Press. Markov switching model: A simulated anthrax outbreak study; Taipei,
89. Mubangizi M, Mwebaze E, Quinn JA (2009) Computational Prediction of Taiwan.IEEE. pp. 76–81.
Cholera Outbreaks; Kampala. ICCIR. 113. Nordin JD, Goodman MJ, Kulldorff M, Ritzwoller DP, Abrams AM, et al.
90. Schaafsma AW, Hooker DC (2007) Climatic models to predict occurrence of (2005) Simulated anthrax attacks and syndromic surveillance. Emerg Infect Dis
Fusarium toxins in wheat and maize. Int J Food Microbiol 119: 116–125. 11: 1394–1398.
91. Marechal F, Ribeiro N, Lafaye M, Guell A (2008) Satellite imaging and vector- 114. Martin PAJ, Cameron A.R., Greiner M. (2006) Demonstrating freedom from
borne diseases: the approach of the French National Space Agency (CNES). disease using multiple complex data sources 1: A new methodology based on
Geospatial Health 3: 1–5. scenario trees. Preventive Veterinary Medicine.
92. Wagner MM (2002) Models of computer-based outbreak detection. The 115. Kolivras KN (2010) Changes in dengue risk potential in Hawaii, USA, due to
Reference Librarian 39: 343–362. climate variability and change. Climate Research 42: 1–11.