Decisions-to-Data using Level 5 Information Fusion
Erik Blasch
Air Force Research Laboratory, Information Directorate, Rome, NY, 13441
ABSTRACT
Over the last decade, there has been interest in presenting information fusion solutions to the user and ways to
incorporate visualization, interaction, and command and control. In this paper, we explore Decisions-to-Data (D2D) in
information fusion design: (1) sensing: from data to information (D2I) processing, (2) reporting: from human computer
interaction (HCI) visualizations to user refinement (H2U), and (3) disseminating: from collected to resourced (C2R)
information management. D2I supports net-centric intelligent situation awareness that includes processing of
information from non-sensor resources for mission effectiveness. H2U reflects that completely automated systems are not realizable, requiring Level 5 user refinement for efficient decision making. Finally, C2R moves from immediate data collection to fusion of information over an enterprise (e.g., data mining, database queries and storage, and source analysis for pedigree). Together, the D2I, H2U, and C2R concepts serve as informative themes for future complex information fusion interoperability standards, integration of man and machine, and efficient networking for distributed user situation understanding.
Keywords: Information Fusion, Data to Decisions, Virtual Worlds, Data Fusion Information Group, Enterprise, Info. Management
1. INTRODUCTION
The paradigm of the conference is multi-sensor interoperability, integration and networking for persistent intelligence,
surveillance, and reconnaissance (ISR) [1, 2]. A growing trend is to look at methods of data-to-decisions; however, we view it as Decisions-to-Data (D2D). Information fusion seeks to reduce uncertainty, associate data, and enable knowledge elucidation through data valuation. Uncertainty comes from many sources, including sensors, entities, and the environment, and the subsequent processing over interpretation, context, language, and users [3]. Assessing the quality of merged and combined information requires objective and subjective uncertainty measures, reasoning, and system design [4]. Figure 1 demonstrates that information fusion (sensing) is a function of access to the data through the network (enterprise), information management processes [5], and coordination with the user (reporting) [6, 7, 8]. Future successes of information fusion system designs over streaming data will be impacted by information management (e.g., cloud-enabled distributed network environment) and end user coordination (e.g., distributed clients).

Figure 1: Information Fusion in the Enterprise.
From the seminal book on information fusion [9], the Joint Directors of Laboratories (JDL) model was proposed [10].
Subsequent revisions [11, 12] to the model incorporate new directions such as context [13]. The JDL model was revised
for the proposed Data Fusion Information Group (DFIG) model [3, 14].
Key elements of contemporary information fusion melding include: (1) sensing: mission awareness of data to
information, (2) reporting: human interfaces to user involvement, and (3) dissemination: collected to resourced
information management. Currently, a common theme is data to decisions (D2D) over joint data management (JDM)
[15,16]; however, this is proposed as a bottom-up solution, whereas a top-down perspective (e.g., evidence-based queries) is also needed from decisions-to-data (D2D). In this paper, we revisit the development of an information fusion architecture motivated from system design solutions based on information management, enterprise technologies, and user interaction.

Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR V,
edited by Michael A. Kolodny, Proc. of SPIE Vol. 9079, 907903 · © 2014 SPIE
CCC code: 0277-786X/14/$18 · doi: 10.1117/12.2050264
Current advances in available processing, sensor collection, data storage, and data distribution have afforded more
complex, distributed, and operational information fusion systems (IFSs). IFSs notionally consist of low-level
information fusion (LLIF) (e.g., data collection, registration [17], and target tracking association in time and space [18])
and high-level information fusion (HLIF) (e.g., situational awareness [19], threat assessment [20], user coordination
[21], and mission control [22]). HLIF challenges [23] include: resource management [24], network-centric architectures
[25], and spectrum sharing [26]; which are elements of a cloud computing environment of access, storage, and retrieval.
Contemporary HLIF research focuses on information management [27] and systems design [28].
HLIF and LLIF can benefit from the advances in enterprise computing, but there are few reports that bring these technologies together, much less document their impact on operational decision-making. Contemporary topics of interest include security [29], service-oriented computing [30, 31], and integrated intelligence (such as the Open
Geospatial Consortium (OGC) [32, 33]). There are examples of Google’s Cloud Fusion service [34] which brings
information together, but the hosting and linking of information provides a common repository that still leaves the user
with the goal of associating data and deriving the value of information. One example from Google Fusion is the linking
of people to a location; however, there is little in the way of determining the quality, credibility, availability, quantity,
and type of data that is needed to combine the information in a meaningful way to make more informed decisions.
Future command and control (C2) systems for intelligence analyst situation awareness [35] require methods in HLIF for the creation and maintenance of data, displays for decision making [36], and reduction in mental workload [37]. One example of an information management challenge is spatial image analysis, which includes data storage, parallel computation, high-bandwidth communications, automatic pattern recognition, and human interfaces [38].
Section 2 covers D2I modeling. Section 3 presents H2U methods such as virtual worlds. Section 4 describes collected to
resourced (C2R) information management. Section 5 discusses the recent trends in enterprise cloud computing and
implications for the management of information fusion. An example application is presented in Section 6 for video
tracking and Section 7 provides conclusions.
2. DATA TO INFORMATION (D2I) MODELING
2.1 High-Low Fusion Level Distinctions
Information fusion is a technique to combine multiple sources of data, distributions [39], or information over various
system-level processes as described in the Data Fusion Information Group (DFIG) model [3, 14, 24], depicted in Figure
2. In the DFIG model, the goal was to separate the data fusion and resource management (RM) functions and highlight the user involvement. RM is divided into sensor control (L4), user refinement (L5), and platform placement/resource collection (L6) to meet mission objectives. Data fusion includes object (L1), situation (L2), and impact (L3) assessment, such as sense-making of threats, courses of action, game-theoretic decisions, and intent analysis, to help refine the estimation and information needs for different actions. RM can be aided by enterprise computing aspects of data acquisition, access, recall, and storage services.
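The DFIG level separation described above can be sketched in code. This is a minimal illustration of the split between the data fusion levels (L0-L3) and the resource management levels (L4-L6); the enum names are paraphrased from the text, not part of any standard API.

```python
from enum import Enum

class DFIGLevel(Enum):
    """Levels of the DFIG model; names paraphrased from the model description."""
    L0_PREPROCESSING = 0    # data preprocessing
    L1_OBJECT = 1           # object assessment
    L2_SITUATION = 2        # situation assessment
    L3_IMPACT = 3           # impact assessment
    L4_SENSOR_CONTROL = 4   # resource management: sensor control
    L5_USER_REFINEMENT = 5  # resource management: user refinement
    L6_MISSION = 6          # resource management: platform/mission management

def is_resource_management(level: DFIGLevel) -> bool:
    """The DFIG model separates data fusion (L0-L3) from resource management (L4-L6)."""
    return level.value >= 4

print(is_resource_management(DFIGLevel.L5_USER_REFINEMENT))  # True
print(is_resource_management(DFIGLevel.L2_SITUATION))        # False
```

A routing function like this is one place where an enterprise service could dispatch incoming requests either to fusion processing or to management services.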
Figure 2: DFIG Information Fusion model (L = Level).
Information fusion across all levels includes many uncertainty sources, methods, and measures [40]. As a challenge, both the hardware (i.e., components and sensors) and the software (i.e., algorithms and networks) layers add to the complexity of system design. Recent efforts include the uncertainty representation and reasoning evaluation framework
(URREF) working group [http://eturwg.c4i.gmu.edu] [4] that is looking at enterprise level analysis over hardware and
software uncertainty representations to standardize terminology for downstream information fusion processes.
High-Level Information Fusion (HLIF) (as referenced to levels beyond Level 1) is the ability of a fusion system, through
knowledge, expertise, and understanding to: capture awareness and complex relations, reason over past and future
events, use direct sensing exploitations and tacit reports, and discern the usefulness and intention of results to meet
system-level goals. Designs of real-world information fusion systems imply distributed information source coordination
(network), organizational concepts (command), and environmental understanding (context). Additionally, there is a
need for automated processes that provide functionality in support of user decision processes, particularly at higher
levels requiring reasoning and inference, which is typically done by a human. For example, a cloud-enabled service can greatly enhance attributes of timeliness, availability, usability, and relevance, which benefit both LLIF and HLIF through situation awareness [41].
The DFIG model and enterprise computing services share a common goal to provide information (over the cloud) for
situation awareness. Cloud services store outputs, access information, support processing, and provide dissemination
over asynchronous services. Using the DFIG paradigm, Level 4 (sensor management) could use a cloud service to access information, Level 5 (user refinement) can be the end-user applications that query information, and Level 6 (mission management) can provide filtering and control of information dissemination to the correct users. Inherent in the analysis is that Level 0 (data preprocessing) assumes the data is already resident in the cloud environment. Next, we discuss situation awareness and assessment in an enterprise network (i.e., cloud enabled) to focus on information processing.
2.2 Situation Awareness
There are two main groups addressing situational information: the engineering information fusion community (i.e.
Situation Assessment [SA]) and the human factors community (i.e., Situation Awareness [SAW]). SAW is a mental state, while SA provides the supports (e.g., fusion products) for that state, which requires a common transformation between the two representations.
Given the developments of SAW and SA, we combine the ideas into an integrated information fusion situation awareness (IFSA) model in which the role of SA stratifies the object/event analysis. The IFSA combines elements of the community models: the SAW reference model with the DFIG elements of a combined L2/L3 analysis and user refinement (L5). The IFSA model is presented in Figure 3.
Figure 3: Information Fusion Situation Assessment Model.
The right side of Figure 3 captures the needs of the user and their ability to observe and orient themselves to the information. As the user requests information for their SAW, they must regress over the data they have and what they need. A cloud environment can provide these services. The information fusion system provides the elements of the information from the left side of Figure 3, with alerts that call attention to situations of interest. The user can coordinate with any level to update the SAW and control data collection. Finally, we note that there are needs between resource management (e.g., airborne assets, web pages) versus mission management (e.g., goals, policies, and doctrines), as shown in the bottom of Figure 3. What is not detailed in the DFIG/IFSA models is access to the information about the real world (which is in constant flux over political, social, and environmental contexts).
While the IFSA model captures SA and SAW issues, other considerations are metrics, model refinement, and practical
use. The interchange between the “us” and “them” refers to an environment, such as a cloud enterprise, which requires
analysis of security, access, and authentication of users to obtain information. Future SA/SAW needs include methods of
human interfaces to user involvement (H2U) such as virtual worlds – discussed in Section 3.
2.3 Information Fusion Modeling in the Enterprise
Current trends in information fusion are data mining, the enterprise architecture, and communications [30]. Different
mission applications require coordination over (1) data: models, storage/accesses control, and process and transport flow,
(2) architecture (e.g. service-oriented architecture), and the (3) enterprise (e.g. service bus, computing environment, and
the cloud). Figure 4 highlights the needs of the user, elements of data mining [42], and data flow in the enterprise.
Recently, Solano and Jernigan [43, 44] present an enterprise architecture to manage intelligence products for mission
objectives highlighting data formats (e.g., schemas, unstructured, and metadata); data processes (e.g. access, ingest,
cleansing, profiling, and ontology workflows); and database management services (DBMS). What is needed toward an enterprise solution for information fusion architectures is Collection to Resourced (C2R) information management, discussed in Section 4. Cloud technology can serve as a basis for access to resourced information but requires enterprise methods such as service-oriented architecture (SOA) information management services.
Figure 4: Information Fusion Enterprise Model (Adapted from [27])
3. HUMAN COMPUTER INTERACTION TO USER INVOLVEMENT (H2U)
H2U focuses on Level 5 (user refinement) techniques to determine the correct level of user involvement such as access
to data, reporting, and supporting mission objectives. Virtual environments (VE) are human-computer interfaces in
which the computer facilitates a multi-dimensional, model-based representation of an environment that interactively
responds to and is controlled by the behavior of one or more users. The term 'synthetic world' refers to a subset of VE’s
where the models represent a mix of real and
hypothetical (synthetic) data. They are
specifically designed to do analytic work that is
Air
shared across a community of users [45].
Virtual worlds
Time Lines
Document
Evidence Files
Virtual worlds can help in processing workflow
Collections _\
management (command) to dynamically exploit
E idence Fio
Sclama
information (context) and time as a dimension to
provide more efficient use of the data (over a
network as producer or consumer [46]). One
example to help an intelligence analyst is a
MindSnap (mental bookmark), shown in Figure.
5. MindSnap helps record decision points in an
Custom
Encyclopedias
analyst’s workflow, which can be used to
reference and restore workspaces to the point
Figure 5: MindSnaps example. (From Morrison [45])
where decisions were made in the analytic
process. From Figure 5, there are different evidence file products needed to support the schemas, hypothesis,
geographical products, documents, and other visual analytics. Future information fusion management solutions require
H2U designs, evaluations, and updates to determine the correct technology (i.e. user display) to support a user-defined
operating picture (UDOP). The UDOP would enable user coordination over data and information access and control over
the enterprise.
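The MindSnap idea of recording and restoring workspace state at decision points can be illustrated with a short sketch. The class and field names below are hypothetical, invented for this example; the actual MindSnap implementation in [45] is not described at this level of detail.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class MindSnap:
    """Hypothetical record of one decision point in an analyst's workflow."""
    label: str
    timestamp: datetime
    workspace_state: Dict[str, str]  # e.g., open evidence files, schemas, hypotheses

class AnalystWorkflow:
    """Minimal sketch: snapshot and restore the workspace at decision points."""
    def __init__(self) -> None:
        self.workspace: Dict[str, str] = {}
        self.snaps: List[MindSnap] = []

    def snap(self, label: str) -> None:
        # Copy the workspace so later edits do not alter the bookmark.
        self.snaps.append(MindSnap(label, datetime.now(), dict(self.workspace)))

    def restore(self, label: str) -> None:
        # Roll the workspace back to the most recent snap with this label.
        for s in reversed(self.snaps):
            if s.label == label:
                self.workspace = dict(s.workspace_state)
                return
        raise KeyError(label)

wf = AnalystWorkflow()
wf.workspace["hypothesis"] = "convoy forming"
wf.snap("initial-assessment")
wf.workspace["hypothesis"] = "routine traffic"
wf.restore("initial-assessment")
print(wf.workspace["hypothesis"])  # convoy forming
```

In a UDOP setting, such bookmarks would let the analyst return to the exact evidence configuration behind each earlier decision.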
4. COLLECTION TO RESOURCED (C2R) INFORMATION MANAGEMENT
4.1 Information Management (IM) Model
The goal of IM is to maximize the ability (effectiveness) of a user to act upon information that is produced or consumed
within the enterprise. There are several means by which this can be accomplished:
- Reducing barriers to effective information use by providing notification, mediation, access control, and persistence services;
- Providing an information space wherein information is managed directly, rather than delegating IM responsibilities to applications that produce and consume information;
- Focusing on consumer needs rather than producer preferences to ensure that information can be effectively presented and used;
- Providing tools to assess information quality and suitability; and
- Exploiting producer-provided characterization of information to support automated management and dissemination of information. [31]

If these means can be accomplished, they can make applications (e.g., simultaneous tracking and identification) less complicated and enable the enterprise to be more agile in adapting to changing requirements and environments.
There are several best practices that help achieve the goals of information management. Organizations will greatly improve the interoperability and agility of their future net-centric information fusion (and command and control) systems by:
1. Adopting dedicated information management infrastructures (e.g., cloud computing);
2. "Packaging" information for dissemination and management;
3. Creating simple, ubiquitous services that are independent of operating system and programming language;
4. Using a common syntax and semantics for common information attributes such as location, time, and subject; and
5. Adopting interfaces among producers, consumers, and brokers that are simple, effective, and well-documented.

Figure 6: Information Management (IM) Model.
If appropriately employed, these best practices can reduce the complexity of information fusion systems, allow for effective control of the information space, and facilitate more effective sharing of information over an enterprise environment.
Viewing data as a "managed information object" (MIO) means information fusion can be viewed as a process that uses the tenets of an enterprise environment. A MIO comprises a payload and metadata that characterize the object, such as topic, time, and location. It is desirable that all of the information needed for making management decisions be present in or referenced within the metadata in a form that permits efficient processing. Figure 6 presents an IM model which
illustrates the extended relation of the actors coordinating through the enterprise with the various layers and inner circles
providing the protocol for information service access and dissemination [31, 47].
An important element of characterization is the concept of type. The type of an object (e.g., video data) is useful for
determining how the information should be characterized and for setting policy on its appropriate use. Type is distinct
from format in that type relates to the information purpose (e.g. scanned human intelligence reports), whereas the format
(e.g. JPEG) relates to its encoding. While format is essential for processing or presenting the information, type is more
important for determining management of the information. People or autonomous agents interact with the managed
information enterprise environment by producing and consuming information or by managing it. Figure 6 lists the actors
and their activities/services within an IM enterprise.
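The MIO structure and the type/format distinction above can be sketched directly. The field names and the priority table below are illustrative assumptions, not part of any defined IM standard; the point is that management policy keys on the information's type (purpose), not its format (encoding).

```python
from dataclasses import dataclass

@dataclass
class ManagedInformationObject:
    """An MIO: a payload plus metadata characterizing it (topic, time, location).
    The info_type/fmt split mirrors the text's type-versus-format distinction."""
    payload: bytes
    topic: str
    time: str
    location: str
    info_type: str  # purpose, e.g. "scanned HUMINT report" -- drives management policy
    fmt: str        # encoding, e.g. "JPEG" -- drives processing and presentation

def dissemination_priority(mio: ManagedInformationObject) -> int:
    """Illustrative management policy keyed on type, not format (lower = sooner)."""
    priorities = {"threat report": 0, "scanned HUMINT report": 1, "video data": 2}
    return priorities.get(mio.info_type, 3)

mio = ManagedInformationObject(b"...", "border crossing", "2014-05-01T12:00Z",
                               "36.1N 115.2W", "video data", "JPEG")
print(dissemination_priority(mio))  # 2
```

Because the policy reads only metadata, a broker can make dissemination decisions without opening the payload, which is the efficiency the text asks for.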
4.2 Service Layers
A set of service layers is defined that use artifacts to perform specific IM activities and are inherent in enterprise environments. An artifact is a piece of information that is acted upon by a service or that influences the behavior of the service (e.g., a policy). The service layers defined by the model are: Security, Workflow, Quality of Service (QoS), Transformation, Brokerage, and Maintenance, as shown in Table 1. These services are intelligent agents that utilize the information space within the architecture.
Table 1: Service Layers

Security: Control access, log transactions, audit logs, negotiate security policy with federated information spaces, transform identity, and sanitize content
Workflow: Manage workflow model configurations, instantiate and maintain workflows, assess and optimize workflow performance
QoS: Respond to client context, allocate resources to clients, QoS policy mediation, prioritize results, and replicate information
Transformation: Contextualize information, transform MIOs, support state- and context-sensitive processing, support user-defined processing functions, support manager-defined processing functions
Broker: Process queries, support browsing, maintain subscriptions, notify consumers, process requests for information and advertisements, support federated information space proxies
Maintenance: Post MIOs, verify adherence to standards, manage MIO lifecycle and performance, retrieve specific MIOs from repositories, support configuration management of information models
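The Broker layer's core activities (maintain subscriptions, notify consumers, process queries) can be sketched as a small publish/subscribe service. This is a minimal illustration with MIOs reduced to dictionaries; real brokerage services would add the federation, advertisement, and proxy functions listed in Table 1.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    """Sketch of the Brokerage layer: maintain subscriptions, notify consumers,
    and answer queries over posted MIOs (reduced here to plain dicts)."""
    def __init__(self) -> None:
        self.subscriptions: Dict[str, List[Callable]] = defaultdict(list)
        self.store: List[dict] = []

    def subscribe(self, topic: str, consumer: Callable[[dict], None]) -> None:
        self.subscriptions[topic].append(consumer)

    def post(self, mio: dict) -> None:
        self.store.append(mio)
        for consumer in self.subscriptions[mio["topic"]]:
            consumer(mio)  # notify each consumer subscribed to this topic

    def query(self, topic: str) -> List[dict]:
        # Query processing: return all stored MIOs matching the topic.
        return [m for m in self.store if m["topic"] == topic]

received: List[dict] = []
b = Broker()
b.subscribe("tracks", received.append)
b.post({"topic": "tracks", "payload": "track-017"})
print(len(b.query("tracks")), len(received))  # 1 1
```

Note that producers and consumers never address each other directly; the broker mediates, which is what lets the information space be managed as its own entity.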
4.3 Information Spaces
The information space is a collection of catalogues, repositories, and databases that provide common functions for storage, retrieval, and lifecycle management. The information space operates on managed information objects. An information space is thus a key element of future coordination between enterprise computing and information fusion.

Due to the rapidly increasing number of sensors and monitors, the expanding coverage of the physical world has necessitated the creation of a higher-dimensional information space. Within a high-dimensional space, it is non-trivial to tailor computing resources for a specific information fusion task while maximizing system utility. It is highly undesirable to index through the entire space (effectiveness); task complexity must also be limited (efficiency) to reduce unexpected delays or even the risk of task failure.
Enterprise computing platforms can meet the indexing challenge by providing highly elastic information fusion over the high-dimensional information space. The elasticity and illusion of infinite resources make transactional tasks practically feasible, such as scalable database management systems (DBMS) [48, 49]. With certain initial mission goals, a user can start with traditional data processing using a subset of the operations/resources; extra functions and resources can then be added on demand through elastic scalability. In any service context, the capability for dynamic job reassignment enables an enterprise system to seamlessly match fluctuations in resource requirements.
4.4 Layered View of the Cloud
Using elements of the DFIG, the enterprise, and the information management model, sensing, networking, and reporting can be realized. Figure 7 presents the layered information where the end-user (operator or machine) desires quality information as fused products from data, which requires various methods and services from sensor collections to information delivery. "Sensors/Sources" can be viewed as a general term, as it relates to physical sensors, humans, and database services (e.g., data mining) that seek data from the environment and process it as a transducer for analysis.
Current trends in information fusion share common developments with cloud computing, such as agent-based network service architectures, ontologies [50], and metrics [51], to combine physics-based sensing and human-based reporting using fusion products.

Figure 7: Layered Information Services.
5. ENTERPRISE CLOUD COMPUTING
5.1 Network Clustering and Cloud Computing
One recent concern for the scientific community is the ability to process large amounts of data (e.g., biological health
science, social economics, and law enforcement). Examples of methods include cluster, cloud, grid, and heterogeneous
computing [52] which are compared by Schadt, et al. [53].
Cluster computing - uses a standard technique in information fusion of Bayesian Networks;
Heterogeneous computing - includes speed-up methods such as parallelism from a graphical processing unit (GPU);
Grid computing - comprises network of distributed agents to solve a task; and
Cloud computing - searches databases for relevant information such as that many clusters can be transformed to
work within the cloud to access relevant data [53, 54].
Cloud computing service layers include:
Software as a service (SaaS) - hosts the application and data at the provider's own data center (e.g., Google Maps);
Platform as a service (PaaS) - provides a hosted platform on which applications are built and deployed; and
Infrastructure as a service (IaaS) - provides computing, storage, and networking resources over the internet.
Given various computing environments, there is interest in high-performance computing in a cloud-enabled environment. Information fusion applications can make use of the enterprise architecture (see Figure 6), which could include local networking and cloud information management for image processing. The question is: what is the value of the cloud? Cloud auditing would enable access to large data sets for a priori information (IaaS), the ability to exploit streaming data in the cloud as a service (SaaS), and the association of different data sets from different platform applications (PaaS). The cloud environment enables data sharing, storing, and indexing, while providing
security and time scaling for information fusion.
5.2 Google Fusion Tables
Thanks to its abundant computing power, a cloud computing environment is proposed to conduct data management, integration, and collaboration tasks. In particular, outsourcing computing-intensive information fusion tasks to a cloud service is a natural solution for applications in which either on-site computing power is insufficient or decision making
requires integrated analysis of data collected by distributed sensors or monitors. For example, many research efforts have
been reported to relieve the burden of information fusion for wireless sensor networks (WSNs) to cloud service
platforms [55, 56]. Google Fusion Tables [37, 57, 58] illustrates important design principles for cloud-based information
fusion applications.
Initially launched in June 2009, the Google Fusion Tables service is a cloud-based data management and integration service [37], which aimed to meet three important requirements [57]: supporting collaborative operations among multiple users and/or organizations; ease of use; and seamless integration with web services.
The objective of Google Fusion Tables is to exploit the cloud computing facility to achieve highly efficient data utility. The guiding principles the design follows [56], which enable continuous improvement in both the user experience and the performance of Fusion Tables, include: seamless integration with web services; emphasis on ease of use; incentives for data sharing; and collaboration.
Scalability and throughput are the main challenges to handle. In particular, there are hundreds of thousands of tables with different schemas, sizes, and query characteristics. A two-layer storage stack is adopted in Google Fusion Tables: Bigtable and Megastore. Bigtable stores information in the form of (key, value) tuples, which are stored and sharded on the key. Writing and flexible reading operations are supported by Bigtable. As a library on top of Bigtable, Megastore provides higher-level primitives. The library is used for three purposes: i) maintaining property indexes, ii) providing table-level transactions, and iii) replicating tables across multiple data centers. These cloud-based data management and information fusion services support collaborative data processing. One large-data processing example is Wide-Area Motion Imagery (WAMI) target tracking and identification (ID).
6. EXAMPLE: WAMI FOR TARGET TRACKING AND IDENTIFICATION
Information fusion developments include large data (e.g., imagery), flexible autonomy (e.g., from moving airborne platforms over communication systems), and human coordination for situation awareness, which require HLIF metrics [59, 60, 61]. Figure 8 demonstrates a layered-architecture (Figure 7) imagery data collection example using electro-optical (EO) cameras and Wide-Area Motion Imagery (WAMI). Using information fusion for situation awareness based on imagery includes: (i) tracking targets in images (fusion over time) [62], (ii) identifying targets using different sensors
(fusion over frequency) [63], and (iii) linking target measurements over wide areas (fusion over space) [64]. Metadata examples include the Cursor on Target paradigm with a limited XML schema of target allegiance, location uncertainty, and priority. The multiple imagery sources could be viewed as agents in the architecture [65]. Inherent in the illustration is the collection from different sensors; however, what is needed is enterprise-stored information on the physical (terrain), resource (sensors), and social (objects) context that is easily accessible from cloud services.
Figure 8: Wide Area Motion Imagery (WAMI) data.
Simultaneous target tracking and identification using imagery and text [66, 67, 68, 69, 70, 71, 72] requires a priori
information for enhanced accuracy, timeliness, and confidence in decision making; while balancing throughput and cost.
Examples from the cloud include the vehicle data, the social (e.g. rhythm of the city), and the political (rules of the road)
context. Given WAMI [73, 74] data, shown in Figure 8, we seek the benefits that are enabled from an enterprise
network. For example, when the user designates an area of interest, the machine can then detect and track targets (L1 fusion). After a few time steps, the machine can access information through data mining (L2/L3 fusion) from the cloud to enhance the Bayesian analysis of the situation. Together, cloud computing and information fusion aid the determination (L5 fusion) of the target type and activity. Finally, the results are used to query the sensors to get more information (L4 fusion), store the results, and disseminate them back to the cloud for mission awareness (L6 fusion).
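The closed loop just described (L1 detection/tracking, L2/L3 context mining, L5 user refinement, L4 sensor re-tasking, L6 dissemination) can be sketched as a pipeline. Every function here is a stub standing in for a real fusion service, and the `cloud` dictionary is a hypothetical stand-in for an enterprise context store.

```python
def detect_and_track(aoi: str) -> list:
    """L1 stub: detect and track targets in the designated area of interest."""
    return [f"track-{aoi}-1"]

def user_review(assessment: dict) -> str:
    """L5 stub: the user refines the machine's situation assessment."""
    return "confirm" if assessment["tracks"] else "re-collect"

def wami_workflow(area_of_interest: str, cloud: dict) -> dict:
    """Sketch of the D2D loop in the text; each step stands in for a service."""
    tracks = detect_and_track(area_of_interest)           # L1: object assessment
    context = cloud.get(area_of_interest, {})             # L2/L3: mined cloud context
    assessment = {"tracks": tracks, "context": context}   # situation analysis
    decision = user_review(assessment)                    # L5: user refinement
    tasking = {"sensor_request": decision}                # L4: re-task the sensors
    cloud[area_of_interest] = assessment                  # L6: disseminate to mission
    return tasking

cloud: dict = {}
print(wami_workflow("AOI-7", cloud)["sensor_request"])  # confirm
```

On a second pass over the same area, the L2/L3 step would find the stored assessment in the cloud, which is exactly the enterprise benefit the text is after.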
For the process analysis, we combine elements of cluster computing (i.e., information fusion by combining relevant information for a Bayesian analysis of data and exploited features), cloud computing (i.e., database analysis of a priori target identity information), and Google Fusion Tables tenets. PaaS hosts the simultaneous tracking and identification (STID) application, SaaS maintains the Bayesian processing, and IaaS supports the data passing and messaging. From Figure 1, we address four areas: the data, the network, information management, and the user applications. Figure 9 plots the four computing process metrics for the WAMI tracking application.
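As a minimal sketch of the Bayesian processing step, a cloud-supplied prior over target types can be combined with a likelihood derived from exploited sensor features; all numbers below are illustrative assumptions.

```python
# A priori target identity distribution fetched from the cloud database
prior = {"sedan": 0.5, "truck": 0.3, "tank": 0.2}
# p(observed feature | target type) from the exploited imagery features
likelihood = {"sedan": 0.2, "truck": 0.7, "tank": 0.1}

# Bayes rule: posterior ∝ prior × likelihood, then normalize
unnorm = {t: prior[t] * likelihood[t] for t in prior}
z = sum(unnorm.values())
posterior = {t: p / z for t, p in unnorm.items()}

print(max(posterior, key=posterior.get))  # prints 'truck'
```

Even a modest prior from the enterprise (here favoring "sedan") is overturned when the sensor evidence strongly supports another type, which is exactly the accuracy benefit claimed for cloud-augmented fusion.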
From Figure 10, we see that sensors require the most throughput for the raw data; however, in themselves they have the least collaboration in the network. For the track and ID applications, since the raw data is converted into tracks and ID reports, they have the least throughput but the most collaboration (as facilitated through the enterprise). Likewise, applications are timely and have reasonable processor utilization. Information services take the most time as they pass data around the enterprise, with moderate throughput and collaboration. Finally, the enterprise takes the least time passing data and requires the most processor utilization.
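One way the normalized comparison in Figures 9 and 10 could be produced is per-metric max scaling; the raw values below are illustrative assumptions chosen only to match the qualitative trends described above, not measurements from the paper.

```python
# process: (throughput, utilization, timeliness, collaboration) -- illustrative
raw = {
    "sensor":       (900.0, 0.30, 0.10, 0.05),
    "enterprise":   (300.0, 0.90, 0.05, 0.60),
    "info_service": (400.0, 0.50, 0.80, 0.50),
    "track_id":     (100.0, 0.60, 0.20, 0.90),
}

# Column-wise view of each metric across the four processes
metrics = list(zip(*raw.values()))

# Scale each value by the per-metric maximum so all axes share [0, 1]
normalized = {
    name: tuple(v / max(col) for v, col in zip(vals, metrics))
    for name, vals in raw.items()
}
print(normalized["sensor"][0])  # sensor throughput normalizes to 1.0
```

Per-metric scaling is what lets throughput (large raw numbers) sit on the same radar-style plot as utilization or collaboration (fractions).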
Figure 9: Normalized metrics of information processes (throughput, utilization, timeliness, collaboration) for the sensor source data, enterprise service, information service, and track/ID applications.
Figure 10: Normalized metrics of information processes.
The processing of larger data sets enables a progression from LLIF (object tracking and identification) to HLIF (situation awareness and analysis), evaluated using the URREF [75, 76, 77, 78]. New paradigms using Google Fusion Tables can enhance Sensor, User, and Mission (SUM) resource management over information for tracking [79]. Such examples from the cloud enable linking individual targets to group behaviors [80], road contextual information [81], and net-centric sensor management [82] to
extend dynamic track lifetime. With an enterprise architecture, new elements of LLIF/HLIF become available, such as high-performance computing solutions [83], trust-based search over communication network systems [84], and exploration of political and cultural effects [85, 86]. Without enterprise technology, access, storage, and recall of data from large databases are not practical for real-time applications.
The use of cloud and enterprise technology for decisions-to-data could extend to many multimodal sources of data. For imagery, there are numerous image fusion examples, including night vision [87], which requires objective and subjective analysis among users, machines, and network services [88].
Another data-driven application that can benefit from the interaction of Level 5 fusion and cloud technology is cyber resiliency for data security [89, 90]. Network security enhances situation awareness [91] and cyber-physical system analysis of sensor and information data [92]. An example is tracking and identification of information in data streaming from an unmanned air vehicle to detect network disruptions [93].
7. CONCLUSIONS
Decisions-to-data requires an appreciation of the user (Level 5 fusion) interacting with data through a network. The network includes information fusion management, database management systems (DBMS), and the enterprise itself. Enterprise services built on a service-oriented architecture (SOA) can be designed to enable decisions from data. The decisions are evaluated against network metrics of throughput, utilization, timeliness, and collaboration. Additional analysis includes uncertainty quantification, data movement, virtual worlds, and applications such as tracking and identification fusion. A user interface over extreme-scale visual analytics [94] will continue to push the field of decisions-to-data and information fusion systems. Future high-level information fusion management will require standards and techniques for data-to-information (D2I) processing, human computer interaction displays to user involvement (H2U), and collected-to-resourced (C2R) information management.
Essentially, the three important points highlighted in the paper include:
1) D2I: The computation needed varies based on the situation and information in the cloud that affords data to be
processed for information (Information Service);
2) H2U: The ability to connect through a cloud enables the combination of different sensors and users for collaborative
information fusion (Communication Service); and
3) C2R: The information needed varies over many conditions, and the cloud's storage ability affords a refined estimate of the collected information (Enterprise Service).
Acknowledgements
This work was sponsored by the Air Force Office of Scientific Research DDDAS program which is greatly appreciated. The views
and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official
policies, either expressed or implied, of Air Force Research Laboratory, or the U.S. Government.
REFERENCES
[1] Marusich, L. R., Buchler, N., Bakdash, J. Z., “Human Limits to Cognitive Information Fusion in a Military Decision-Making Task,” Int. Command and Control Research and Tech. Symp. (ICCRTS), (2014).
[2] Buchler, N., Marusich, L. R., Sokoloff, S., “The Warfighter Associate: Decision-support software agent for the management of intelligence, surveillance, and reconnaissance (ISR) assets,” Proc. SPIE, 9079, (2014).
[3] Blasch, E., Kadar, I., Salerno, J., Kokar, M. M., Das, S., Powell, G. M., Corkill, D. D., et al., “Issues and Challenges in Situation Assessment (Level 2 Fusion),” J. of Advances in Information Fusion, Vol. 1, No. 2, pp. 122-139, Dec. (2006).
[4] Costa, P. C. G., Laskey, K. B., Blasch, E., Jousselme, A-L., “Towards Unbiased Evaluation of Uncertainty Reasoning: The URREF Ontology,” Int. Conf. on Info Fusion, (2012).
[5] Blasch, E., “Sensor, User, Mission (SUM) Resource Management and their interaction with Level 2/3 fusion,” Int. Conf. on Info Fusion, (2006).
[6] Blasch, E., Plano, S., “Level 5: User Refinement to aid the Fusion Process,” Proc. of SPIE, Vol. 5099, (2003).
[7] Blasch, E., “Introduction to Level 5 Fusion: the Role of the User,” Chapter 19 in Handbook of Multisensor Data Fusion, 2nd Ed., Eds. M. E. Liggins, D. Hall, and J. Llinas, CRC Press, (2008).
[8] Blasch, E., Breton, R., et al., “User Information Fusion Decision Making Analysis with the C-OODA Model,” Int. Conf. on Info Fusion, (2011).
[9] Waltz, E., Llinas, J., [Multisensor Data Fusion], Artech House, Norwood, MA, (1990).
[10] Kessler, O., et al., “Functional Description of the Data Fusion Process, technical report for the Office of Naval Technology Data Fusion Development Strategy,” Naval Air Development Center, Warminster, PA, Nov. (1991).
[11] Steinberg, A. N., Bowman, C. L., White, F. E., “Revisions to the JDL model,” Joint NATO/IRIS Conf., (1998).
[12] Blasch, E., Plano, S., “DFIG Level 5 (User Refinement) issues supporting Situational Assessment Reasoning,” Int. Conf. on Info Fusion, (2005).
[13] Snidaro, L., Visentini, I., Llinas, J., Foresti, G. L., “Context in fusion: some considerations in a JDL Perspective,” Int’l Conf. on Information Fusion, (2013).
[14] Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F., "Revisiting the JDL model for Information
Exploitation," Int’l Conf. on Info Fusion, (2013).
[15] Blasch, E., Russell, S., Seetharaman, G., “Joint Data Management for MOVINT Data-to-Decision Making,” Int. Conf. on Info Fusion, (2011).
[16] Preece, A., Pizzocaro, D., Braines, D., Mott, D., de Mel, G., Pham, T., “Integrating hard and soft Information Sources for D2D Using Controlled
Natural Language,” Int’l Conf. on Information Fusion, (2012).
[17] Wu, Y., et al., “Feature Based Background Registration in Wide Area Motion Imagery,” Proc. SPIE, Vol. 8402, (2012).
[18] Yang, C., Blasch, E., “Pose Angular-Aiding for Maneuvering Target Tracking,” Int. Conf. on Info Fusion, (2005).
[19] Chen, G., Shen, D., Kwan, C., et al., “Game Theoretic Approach to Threat Prediction and Situation Awareness,” J. of Advances in Information
Fusion, Vol. 2, No. 1, 1-14, June (2007).
[20] Blasch, E., “Modeling Intent for a target tracking and identification Scenario,” Proc. of SPIE, Vol. 5428, (2004).
[21] Blasch, E., “Situation, Impact, and User Refinement,” Proc. of SPIE, Vol. 5096, (2003).
[22] Blasch, E., Hanselman, P., “Information Fusion for Information Superiority," IEEE Nat. Aerospace and Elec. Conf., (2000).
[23] Blasch, E., Lambert, D. A., Valin, P., Kokar, M. M., Llinas, J., Das, S., et al., “High Level Information Fusion (HLIF) Survey of Models, Issues,
and Grand Challenges,” IEEE Aerospace and Elec. Sys. Mag., Vol. 27, No. 9, Sept. (2012).
[24] Blasch, E., Kadar, I., Hintz, K., Biermann, J., Chong, C-Y., Das, S., “Resource Management Coordination with Level 2/3 Fusion Issues and
Challenges,” IEEE Aerospace and Electronic Systems Magazine, Vol. 23, No. 3, pp. 32-46, Mar. (2008).
[25] Liggins, M. E., Chang, K-C., “Distributed Fusion Architectures, Algorithms and Performance within a network-Centric Architecture,” Chapter
17 in Handbook of Multisensor Data Fusion 2nd Ed, (eds). M. E. Liggins, et al., CRC Press, (2008).
[26] Tian, X., Tian, Z., et al., “Performance Analysis of Sliding Window Energy Detection for Spectrum Sensing under Low SNR conditions,”
submitted to Wireless Communications and Mobile Devices, Elsevier, May (2014).
[27] Kessler, O., White, F., “Data Fusion Perspectives and Its Role in Information Processing,” Ch.2 in Handbook of Multisensor Data Fusion 2nd Ed,
(eds.). M. E. Liggins, D. Hall, and J. Llinas, CRC Press, (2008).
[28] Blasch, E., Bossé, E., Lambert, D. A., [High-Level Information Fusion Management and Systems Design], Artech House, Norwood, MA, (2012).
[29] Mazur, S., Blasch, E., Chen, Y., Skormin, V., “Mitigating Cloud Computing Security Risks using a Self-Monitoring Defensive Scheme,” Proc.
IEEE Nat. Aerospace Electronics Conf (NAECON), (2011).
[30] Li, B., Yan, X-Q., “Modeling of Ambient Intelligence Based on Information Fusion and Service Oriented Computing,” Int’l Conf on Ubiquitous
Information Tech and Apps, (2010).
[31] Chen, G., Blasch, E., Shen, D., Chen, H., Pham, K., “Services Oriented Architecture (SOA) based Persistent ISR Simulation System,” Proc. of
SPIE, Vol. 7694, (2010).
[32] Khan, Z., Ludlow, D., McClatchey, R., Anjum, A., “An Architecture for Integrated Intelligence in Urban management using cloud computing,”
J. of Cloud Computing, Vol. 1., No 1. (2012).
[33] Blasch, E., Deignan, P. B. Jr., Dockstader, S. L., et al., “Contemporary Concerns in Geographical/Geospatial Information Systems (GIS)
Processing,” Proc. IEEE Nat. Aerospace Elec. Conf. (NAECON), (2011).
[34] Halevy, A., Shapley, R., “Google Fusion Tables,” Research Blog, June 9, (2009), http://googleresearch.blogspot.com/2009/06/google-fusion-tables.html.
[35] Heuer, R. J., [Psychology of Intelligence Analysis], Center for the Study of Intelligence, (1999).
[36] Blasch, E. P., “Assembling a distributed fused Information-based Human-Computer Cognitive Decision Making Tool,” IEEE Aerospace and
Electronic Systems Magazine, Vol. 15, No. 5, pp. 11-17, May (2000).
[37] Parasuraman, R., Sheridan, T. B., Wickens, C. D., “Situation awareness, mental workload, and trust in automation: Viable, empirically supported
cognitive engineering constructs,” J. of Cog. Eng. and Decision Making, Vol. 2 (2), pp 140-160, (2008).
[38] Tangney, J., “AFOSR Programs in Higher Level Information Fusion,” Int. Conf. on Information Fusion, (2002).
[39] Blasch, E., Hensel, M., “Fusion of Distributions for Radar Clutter modeling,” Int. Conf. on Info Fusion, (2004).
[40] Costa, P.C.G., Carvalho, R.N., Laskey, K.B., Park, C.Y., “Evaluating Uncertainty Representation and Reasoning in HLF systems,” Int. Conf.on
Info. Fusion, (2011).
[41] Salerno, J., Blasch, E., et al.., “Evaluating algorithmic techniques in supporting situation awareness,” Proc. of SPIE, Vol. 5813, (2005).
[42] Waltz, E., “Information understanding: integrating data fusion and data mining processes,” Int. Symp on Circuits & Sys., (1998).
[43] Solano, M. A., Jernigan, G., “Enterprise data architecture principles of High-Level Multi-INT fusion: A pragmatic guide for implementing a
heterogeneous data exploitation,” Int. Conf. on Info Fusion, (2012).
[44] Solano, M. A., Carbone, J., “Systems engineering for information fusion: Towards enterprise multi-level fusion integration,” Int’l Conf. on Info
Fusion, (2013).
[45] Smith, C. A. P., Kisiel, K. W., Morrison, J. G., [Working Through Synthetic Worlds], Ashgate, (2009).
[46] Moore, R. A., Schermerhorn, J. H. , Oonk, H. M., Morrison, J. G., “Understanding and Improving Knowledge Transaction in Command and
Control,” Int. Command and Control Research and Tech. Symp. (ICCRTS), (2003).
[47] Linderman, M., Haines, S., Siegel, B., Chase, G., et al., “A Reference Model for Information Management to Support Coalition Information
Sharing Needs,” Int. Command and Control Research and Tech. Symp. (ICCRTS), (2005).
[48] Das, S., Agrawal, D., El Abbadi, A., “ElasTraS: An Elastic Transactional Data Store in the Cloud,” USENIX Conference on Hot Topics in Cloud
Computing, (2009).
[49] Agrawal, D., Das, S., El Abbadi, A., “Big Data and Cloud Computing: Current State and Future Opportunities,” EDBT, (2011).
[50] Blasch, E., “Ontological Issues in Higher Levels of Information Fusion: User Refinement of the Fusion Process,” Int. Conf. on Info Fusion,
(2003).
[51] Yu, W., Wang, X., Fu, X., Xuan, D., Zhao, W., “An Invisible Localization Attack to Internet Threat Monitors,” IEEE Tr. on Parallel and
Distributed Systems (TPDS), Vol. 20, No 11, November (2009).
[52] Liu, B., Blasch, E., Chen, Y., Aved, A. J., Hadiks, A., Shen, D., Chen, G., “Information Fusion in a Cloud Computing Era: A Systems-Level
Perspective,” IEEE Aerospace and Electronic Systems Magazine, (2014).
[53] Schadt, E. E., Linderman, M. D., Sorenson, J., Lee, L., Nolan, P., “Computational solutions to large-scale data management and analysis,” Nature Reviews Genetics, 11, 647-657, 1 September (2010).
[54] Grauer-Gray, S., Kambhamettu, C., Palaniappan, K., “GPU implementation of belief propagation using CUDA for cloud tracking and
reconstruction,” Pattern Rec. in Remote Sensing (PRRS), (2008).
[55] Kurschl, W., Beer, W., “Combining cloud computing and wireless sensor networks,” Int’l Conf. on Information Integration and Web-based
Applications & Services, (2009).
[56] Tan, K.-L., “What's NExT?: Sensor + Cloud!?,” International Workshop on Data Management for Sensor Networks, (2010).
[57] Gonzalez, H., Halevy, A., Jensen, C. S., Langen, A., et al., "Google Fusion Tables: Data Management, Integration and Collaboration in the
Cloud," ACM Symp. on Cloud Computing, (2010).
[58] Gonzalez, H., Halevy, A., Jensen, C. S., Langen, A., et al., "Google Fusion Tables: Web-Centered Data Management and Collaboration," Int’l
Conf. on Management of data, (2010).
[59] Blasch, E., Breton, R., Valin, P., “Information Fusion Measures of Effectiveness (MOE) for Decision Support,” Proc. SPIE 8050, (2011).
[60] Blasch, E. P., Valin, P., Bossé, E., “Measures of Effectiveness for High-Level Fusion,” Int’l Conf. on Info Fusion, (2010).
[61] Blasch, E., Pribilski, M., Daughtery, B., et al. “Fusion Metrics for Dynamic Situation Analysis,” Proc. of SPIE, Vol. 5429, (2004).
[62] Ling, H., Bai, L., et al., “Robust Infrared Vehicle Tracking Across Target Change using L1 regularization,” Int. Conf. on Info Fusion, (2010).
[63] Wu, Y., et al., “Multiple Source Data Fusion via Sparse Representation for Robust Visual Tracking,” Int. Conf. on Info Fusion, (2011).
[64] Ling, H., et al., “Evaluation of Visual Tracking in Extremely Low Frame Rate Wide Area Motion Imagery,” Int. Conf. on Info Fusion, (2011).
[65] López, J. M., Herrero, J. G., Rodríguez, F. J., Corredera, J. R., “Cooperative management of a net of intelligent surveillance agent sensors,”
International Journal of Intelligent Systems, 18 (3), 279-307, (2003)
[66] Blasch, E., Yang, C., Kadar, I., “Summary of Tracking and Identification Methods,” Proc. SPIE, Vol. 9091, (2014).
[67] Blasch, E., Nagy, J., Aved, A., Pottenger, W. M., Schneider, M., Hammoud, R., Jones, E. K., Basharat, A., Hoogs, A., Chen, G., Shen, D., Ling, H., “Context aided Video-to-Text Information Fusion,” Int’l Conf. on Information Fusion, (2014).
[68] Hammoud, R. I., Sahin, C. S., Blasch, E. P, and Rhodes, B. J. “Multi-Source Multi-Modal Activity Recognition in Aerial Video Surveillance,”
submitted to IEEE International Computer Vision and Pattern Recognition Conference, (2014).
[69] Blasch, E., Dezert, J., Pannetier, B., “Overview of Dempster-Shafer and Belief Function Tracking Methods,” Proc. SPIE, Vol. 8745, (2013).
[70] Mei, X., Ling, H., Wu, Y., Blasch, E., Bai, L. “Efficient Minimum Error Bounded Particle Resampling L1 Tracker with Occlusion Detection,”
IEEE Trans. on Image Processing (T-IP), (2013).
[71] Blasch, E., Straka, O., Yang, C., Qiu, D., Šimandl, M., Ajgl, J., “Distributed Tracking Fidelity-Metric Performance Analysis Using Confusion
Matrices,” Int. Conf. on Info Fusion, (2012).
[72] Kahler, B., Blasch, E., “Decision-Level Fusion Performance Improvement from Enhanced HRR Radar Clutter Suppression,” J. of. Advances in
Information Fusion, Vol. 6, No. 2, Dec. (2011).
[73] Palaniappan, K., Bunyak, F., Kumar, P., et al., “Efficient feature extraction and likelihood fusion for vehicle tracking in low frame rate airborne
video,” Int. Conf. Information Fusion, (2010).
[74] Pelapur, R., Candemir, S., Poostchi, M., Bunyak, F., et al., “Persistent target tracking using likelihood fusion in wide-area and full motion video
sequences,” Int. Conf. Information Fusion, (2012).
[75] Blasch, E., Costa, P. C. G., Laskey, K. B., Stampouli, D., Ng, G. W., Schubert, J., Nagi, R., Valin, P., “Issues of Uncertainty Analysis in High-Level Information Fusion – Fusion2012 Panel Discussion,” Int. Conf. on Info Fusion, (2012).
[76] Blasch, E., Costa, P. C. G., Laskey, K. B., Ling, H., Chen, G., “The URREF Ontology for Semantic Wide Area Motion Imagery Exploitation,”
Semantic Technologies for Intelligence, Defense, and Security (STIDS), pp. 88-95, October, (2012).
[77] Blasch, E., Laskey, K. B., Jousselme, A-L., Dragos, V., Costa, P. C. G., Dezert, J., “URREF Reliability versus Credibility in Information Fusion
(STANAG 2511),” Int’l Conf. on Info Fusion, (2013).
[78] Blasch, E., Jøsang, A., Dezert, J., et al., “URREF Self-Confidence in Information Fusion Trust,” Int’l. Conf. on Info. Fusion, (2014).
[79] Blasch, E. P. et al., “JDL Level 5 Fusion model ‘user refinement’ issues and applications in group Tracking,” Proc. SPIE, Vol. 4729, (2002).
[80] Blasch, E. P., Connare, T., “Improving Track maintenance Through Group Tracking,” Proc of the Workshop on Estimation, Tracking, and
Fusion; A Tribute to Yaakov Bar Shalom, 360 –371, May (2001).
[81] Yang, C., et al., “Fusion of Tracks with Road Constraints,” J. of. Advances in Information Fusion, Vol. 3, No. 1, 14-32, (2008).
[82] Yang, C., Kaplan, L., Blasch, E., “Performance Measures of Covariance and Information Matrices in Resource Management for Target State
Estimation,” IEEE Tr. on Aerospace and Electronic Systems, Vol. 48, No. 3, pp. 2594 – 2613, (2012).
[83] Cheng, E., Ma, L., Blaisse, A., et al., “Efficient Feature Extraction from Wide Area Motion Imagery by MapReduce in Hadoop,” Proc. SPIE,
Vol. 9089, (2014).
[84] Shen, D., Chen, G., et al., “A Trust-based Sensor Allocation Algorithm in Cooperative Space Search Problems,” Proc. SPIE, Vol. 8044, (2011).
[85] Blasch, E., Valin, P., Bosse, E., Nilsson, M., et al., “Implication of Culture: User Roles in Information Fusion for Enhanced Situational
Understanding,” Int. Conf. on Info Fusion, (2009).
[86] Blasch, E., Salerno, J., Yang, S. J., Fenstermacher, L., Endsley, M., Grewe, L., “Summary of Human, Social, Cultural, Behavioral (HCSB)
Modeling for Information Fusion,” Proc. SPIE, Vol. 8745, (2013).
[87] Liu, Z., Blasch, E., Xue, Z., Langaniere, R., Wu, W., “Objective Assessment of Multiresolution Image Fusion Algorithms for Context
Enhancement in Night Vision: A Comparative Survey,” IEEE Trans. Pattern Analysis and Machine Intelligence, 34(1):94-109, (2012).
[88] Zheng, Y., Dong, W., et al., “Qualitative and quantitative comparisons of multispectral night vision colorization techniques,” Optical Engineering, Vol. 51, Issue 8, Aug. (2012).
[89] Dsouza, G., Hariri, S., Al-Nashif, Y., Rodriguez, G. “Building resilient cloud services using DDDAS and moving target defense,” Int. J. Cloud
Computing, (2013).
[90] Blasch, E., Al-Nashif, Y., Hariri, S., “Static versus Dynamic Data Information Fusion analysis using DDDAS for Cyber Trust,” International
Conference on Computational Science, Procedia Computer Science, (2014).
[91] Ge, L., Yu, W., Shen, D., Chen, G., Pham, K., et al., “Toward Effectiveness and Agility of Network Security Situational Awareness using
Moving Target Defense (MTD),” Proc. SPIE, Vol. 9085, (2014).
[92] Yu, W., Xu, G., Pham, K., et al., “A Framework for Cyber-Physical System Security Situation Awareness,” in [Principles of cyber-physical
systems], W. Yu and Sajal Das (eds.), Cambridge University Press, (2014).
[93] Wei, S., Ge, L., Yu, W., et al., “Simulation Study of Unmanned Aerial Vehicle Communication Networks addressing Bandwidth Disruptions,”
Proc. SPIE, Vol. 9085, (2014).
[94] Wong, P. C., Shen, H.-W., Johnson, C. R., Chen, C., Ross, R. B., “Top 10 Challenges in Extreme-Scale Visual Analytics,” IEEE Computer
Graphics and Applications, Jul/ Aug. (2012).