Biomedicine Meets Semantic Web: Juan Bernabé Moreno



Biomedicine meets Semantic Web


Juan Bernabé Moreno
University of Granada, Department of Computer Science and Artificial Intelligence

Abstract— Bio-sciences have a lot of potential to be the best fit to profit from the power of the semantic web. This work explains how ontologies have become mainstream within biomedical research, and discusses the main scenarios ontologies are used in. Finally, after summarizing the conclusions, a brief outlook is provided.

Index Terms— Semantic Web, Data sources, Integration, Biomedicine, Life Sciences

I. INTRODUCTION

The amount of available knowledge doubles every 5 years, but this increase is even faster in the Life Sciences, which is why the digital revolution has become a cornerstone of day-to-day work in the Life Sciences.
In 1953 Watson and Crick discovered the structure of DNA. Almost 47 years later, the first draft of the human genome was published, which was considered one of the most significant scientific landmarks of all time, comparable with the invention of the wheel or the splitting of the atom [11].
Biomedicine nowadays is the result of a very fast evolution within the last years, becoming a data-driven, internet-based science. Data-driven because of the masses of data that high-throughput experiments produce on a daily basis: more than 16,000 million DNA bases, more than 25,000 protein structures with an average of ca. 400 residues, more than 130k annotated protein sequences –SWISS-PROT [12]–, more than 850k protein sequences –TrEMBL [13]–, more than 14 million scientific articles available –PubMed [14]–, etc. Internet-driven because of the incipient development of telemedicine, the shift from a preventive to a curative medicine supported by information technologies, the online collaboration platforms that break down the communication barriers, etc.

Current information state and the underlying problems
The biggest challenge and effort driver in the IT resources available in the life sciences is the integration of existing disparate data sources.
The same investment is made over and over by several projects because of emerging requirements that make the integration with legacy systems very tedious.
The integration and recycling of information sources lead to the creation of several data bases. From the developer point of view this situation obviously presents drawbacks:
1) High redundancy among the resources implies wasting of development capacity
2) Parallel scanning of resources is almost impossible. Plenty of queries have to be executed in order to retrieve the available information
3) Relevant dependencies or contradictions between data remain hidden, because the information is spread out among different data bases
4) Users have to get used to different data models and different interfaces, which requires ramp-up time
5) Need for custom development to integrate each and every data source
6) Suboptimal data exploitation: small data bases stay unexploited. Adding new data sources requires manual intervention for discovery and binding, which means the model does not scale.
There have been approaches to overcome such problems (see section III.B).
The result of these approaches was in some cases a good short-term solution, but not sustainable. Adding a new data source implied remapping against the other databases' schemas or against a central data schema and eventually building a new connector, which led to a cost explosion and to a poorly designed home-grown system.
The problem starts right after a new data source project goes live: the developed data source becomes difficult to access from outside because of the lack of a semantic basis and application context.

Figure 1 Before Semantic Web in place

Figure 2 Semantic Web enabled scenario

II. THE SEMANTIC WEB


As per W3C [1], the semantic web is about giving well-defined meaning to the information to enable people and computers to work in cooperation.
The semantic web extends the ordinary web in two major aspects:
• The information is expressed in a special machine-targeted language (instead of a wide range of natural languages for human consumption)
• The data is formally and semantically interlinked, whereas the web is a set of informally interlinked information

The Semantic Web to demolish communication barriers
The Semantic Web shouldn't be seen as a new technology, but rather as a completely new idea to organize the knowledge. The benefits of structuring the knowledge in the scientific community have shifted the semantic web into a key role rather than an unnecessary overhead.
Older, artificially created communication barriers between the members of the scientific community can be demolished in a way that the entire community will be able to profit from any small contribution of any individual.
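To make the idea of formally interlinked, machine-targeted information concrete, the following minimal sketch (not part of the original paper) uses the Python rdflib library to express a protein record and its link to a Gene Ontology term as RDF triples; the namespace and identifiers are illustrative assumptions, not real resource URIs.

# Minimal sketch: expressing formally interlinked biomedical facts as RDF triples.
# The namespace, protein accession and GO identifier below are illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/biomed/")   # hypothetical namespace
GO = Namespace("http://example.org/go/")       # stand-in for Gene Ontology URIs

g = Graph()
protein = URIRef(EX["P12345"])                 # hypothetical protein record

g.add((protein, RDF.type, EX.Protein))
g.add((protein, RDFS.label, Literal("example kinase")))
g.add((protein, EX.annotatedWith, GO["GO_0016301"]))   # e.g. "kinase activity"
g.add((GO["GO_0016301"], RDFS.label, Literal("kinase activity")))

# Serialize in Turtle: the same statements are now machine-readable and linkable.
print(g.serialize(format="turtle"))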
III. BIOINFORMATICS AS PERFECT CANDIDATE

As indicated before, the amount of data produced by the different research efforts and the number of algorithms created to work out this information is huge, and the trend is to make them available over the internet for the community. Additionally, people are more and more recognizing the advantages of adhering to open standards and it is no longer unusual that researchers from the bioinformatics community foster the creation of new data standards as they are required. XML as publishing format is pushing out the proprietary free-text, position-based formats.
The number of data sources providing valuable information over the internet is exponentially increasing, and their integration –jointly querying– and interoperation has become the first concern.
On the other hand, the upcoming internet applications are more and more allowing information retrieval over the internet by moving to service invocation architectures (REST and web services) [15].

Figure 3 Typical biomedical content information item

A. Biomedicine Data Sources
Even if most data bases can be accessed by a web interface, ftp and email access methods are still widely supported. The underlying data models differ substantially from each other, ranging from full-fledged object-oriented data bases to file-based models.
The content information is commonly structured as per the diagram shown in Figure 3, with header, annotation and actual information.
The majority of data bases can be accessed by queries that retrieve information based on the occurrence of certain text (or stemmed text) within a data item –what is known as full-text search– or within certain predefined fields (e.g. the abstract of an article). Boolean searching operations (and, or, not) are commonly supported, as well as wildcards –full support of regular expressions is hardly ever offered–. The interface given to the user is usually form-based, but sometimes console access is also given to allow the usage of a data querying language (DQL), usually with certain restrictions. As mentioned above, programmatic access is usually supported by means of web services, REST, etc. The result set is usually a (paged) set of the entries matching the query that can be reduced by refining the query.
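As an illustration of such programmatic access, the sketch below queries PubMed through the NCBI E-utilities REST interface (part of the Entrez system mentioned in the next subsection); the search term and result handling are illustrative assumptions, and the endpoint behaviour should be checked against the current NCBI documentation.

# Sketch: programmatic (REST) access to a biomedical data source, here PubMed
# via the NCBI E-utilities "esearch" endpoint. Term and paging are illustrative.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",                    # target database
    "term": "polycomb AND chromatin",  # boolean-style query
    "retmax": 20,                      # page size of the returned id list
    "retmode": "json",                 # ask for a machine-readable result
}

response = requests.get(ESEARCH, params=params, timeout=30)
response.raise_for_status()

result = response.json()["esearchresult"]
print("matching records:", result["count"])
print("first page of PubMed ids:", result["idlist"])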
Very well-known data bases are GenBank and DDBJ for nucleotide sequences, SWISS-PROT and ENZYME for protein sequences, BLOCKS and PROSITE for protein families, and PDB and MMDB for 3D macromolecular structures.

B. Biomedicine Data Base Interoperation
Since the information is available over the internet and scattered across different data bases, there have been several approaches to address the integration and interoperation concern:
1) Link-driven federation
Most of the existing web interfaces that offer multi-database querying are based on this mechanism. The data source system is often implemented by using files and specialized retrieval packages, and the integration is done by means of cross-reference indexes between the data items. The query processing time is low and the interface is easy to use, but the syntax-based ad-hoc linkage doesn't address the heterogeneities in the terminology used by the disparate data sources. This mechanism presents serious scaling problems, i.e. incorporating a new data source requires re-indexing the system (a minimal sketch of such cross-reference linkage is given after this list). The best known systems running this method are SRS from Lion Bioscience/Biowisdom [16] and Entrez [17].
2) Data warehouses
Central databases that keep a copy of the data from the different sources in a central schema (e.g. Atlas [18]).
3) Query optimizers
They are basically applications that enable the user to create queries against different data sources in a comfortable way. They often rely on view integration, where the different schemas are integrated to form a global one, which is queried in a high-level language (e.g. Discovery Link [19]). Other examples are BioKleisli [21], K2 [22], TINet [20], P/FDM [24], TAMBIS [23], etc.
4) Middleware frameworks
Intended to query different data models by means of different interfaces.
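As a rough illustration of the cross-reference indexes behind link-driven federation (sketched with made-up accession numbers, not the actual SRS or Entrez data structures), the index simply maps an identifier in one source to the identifiers of related items in other sources, and every new source means rebuilding these maps:

# Sketch: link-driven federation via cross-reference indexes (illustrative data).
# Each index maps an accession in one data bank to related accessions elsewhere.
cross_refs = {
    ("swissprot", "P12345"): [("pdb", "1ABC"), ("prosite", "PS00107")],
    ("genbank", "X03456"):   [("swissprot", "P12345")],
}

def follow_links(source, accession, seen=None):
    """Collect every record reachable from (source, accession) by syntactic links."""
    seen = seen or set()
    for target in cross_refs.get((source, accession), []):
        if target not in seen:
            seen.add(target)
            follow_links(*target, seen=seen)
    return seen

# Jump from a nucleotide entry to the linked protein and structure records.
print(follow_links("genbank", "X03456"))
# Adding a new data bank means regenerating cross_refs against every existing
# source, which is exactly the scaling problem described above.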
IV. ONTOLOGIES

In a more and more data-intensive world, computers play a key role in helping people manage the information explosion. Ontologies have become the cornerstone to structure complex knowledge domains and establish standards. Ontologies have been defined as "a way to express formally a shared understanding of information" [25].

A. Ontologies between Philosophy and Computer Science
As indicated in [2], the fact that "ontologies" is a plural raises the major difference between the philosophical and the computer science approach to the term.
A philosophical ontology would encompass the whole of the universe, but computer scientists allow the existence of multiple, overlapping ontologies, each focused on a particular domain.
Indeed an understanding of the ontology of a particular domain may be crucial to any understanding of the domain. The combination of ontologies, and communication between them, is therefore a major issue within computer science, although such issues are problematic with the philosophical use of the term. At the limit, an ontology that perfectly expresses one person's understanding of the world is useless for anyone else with a different view of the world. Communication between ontologies is necessary to avoid this type of solipsism.

B. Ontologies usage in biomedical use cases

Biomedicine has profited from ontology-driven technologies in many ways [26]:

1) Reference for naming things

The motivation behind it is establishing a set of controlled terms for labeling entities in databases and data sets. It ensures the consensus between people on the name to be given to a certain entity, and the consensus between people and machines to identify and name things. The immediate consequence of this consensus is the fact that computers can help researchers to make sense of the massive data available to perform analysis on. The challenging side is the variety of synonymous terms and polysemy or lexical ambiguity, defined as "the ambiguity of an individual word or phrase that can be used (in different contexts) to express two or more different meanings" in [27].
The biggest effort driver is the unification of disparate data that are labeled differently in different data sources. Thus, where the ontology adds value is in "fixing" the terminology so that people can label medical entities in a consistent way. Additionally, synonymy, acronyms and abbreviations can augment the ontologies.
The most popular example is the Gene Ontology [28]. Its entities have is-a and part-of relations to other entities, providing the basis for representing biological knowledge. These relations support the creation of computer reasoning applications, which can infer subsumption (is-a relations) or composition (part-of relations) between entities.
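A minimal sketch of this kind of reasoning (using a hand-written fragment of is-a links rather than the real Gene Ontology files) shows how subsumption can be inferred by following the transitive closure of is-a relations:

# Sketch: inferring subsumption over is-a relations, with a tiny hand-made
# fragment standing in for the Gene Ontology (terms and links are illustrative).
IS_A = {
    "protein kinase activity": ["kinase activity"],
    "kinase activity": ["catalytic activity"],
    "catalytic activity": ["molecular_function"],
}

def ancestors(term):
    """All terms that subsume `term`, following is-a links transitively."""
    result = set()
    for parent in IS_A.get(term, []):
        result.add(parent)
        result.update(ancestors(parent))
    return result

def is_subsumed_by(term, candidate):
    return candidate in ancestors(term)

print(ancestors("protein kinase activity"))
# -> {'kinase activity', 'catalytic activity', 'molecular_function'}
print(is_subsumed_by("protein kinase activity", "catalytic activity"))  # True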
The ad-hoc usage of the GO ontology allows the querying of many Model Organism Databases (MODs) thanks to the disambiguation of meanings.
Looking beyond the mere terminology fixing, the GO is used as a basis for term extraction and better information retrieval on life sciences documents (e.g. the GoPubMed project [29]).
Ontologies are commonly used to provide a common way of describing the patient information in health records (see the US National Library of Medicine UMLS) [30].
The description of audiovisual information is also addressed by the usage of ontologies to provide names for anatomy, pathology and observations in images (e.g. the Open Microscopy Environment –OME–).

Figure 4 A chromatin-associated multiprotein complex containing Polycomb Group proteins. GO View

2) Representation of encyclopedic knowledge

The second natural step to capture and represent knowledge is by means of rich relationships between the entities of a domain. The textual description of complex knowledge gives humans the possibility to access this knowledge, but not the machines. Using well-defined, univocal, standardized relationships to structure and make explicit the knowledge enables access to it by both machines and humans.

The Foundation Model of Anatomy (FMA) [31] is a very popular example that specifies canonical knowledge for the anatomy domain (entities and relationships). It is the result of the collaboration between anatomists and knowledge engineers and, unlike other ontologies, it has not been created for a particular application, but with the long-term goal of providing digitally accessible encyclopedic knowledge for anatomy.

3) Information model specification

Specifying information and data models using ontologies instead of UML provides several advantages (a sketch of such a specification is given after this list):
• Explicit specification of the terms used to express information in the biomedical domain
• Augmented capabilities, like making relationships among data types explicit and automatic reasoning –subsumption and composition–
• Visualization capabilities for complex structures (like the ones offered by Protégé [32])
• Publishing of the information model in the semantic web (if standards –like OWL [33]– have been adhered to)
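The following sketch (an assumption about how such a model might look, using the Python rdflib library and made-up class names rather than any published biomedical ontology) shows an information model expressed with OWL/RDFS constructs, so that subsumption between data types is explicit and machine-processable:

# Sketch: a tiny information model expressed with OWL/RDFS instead of UML.
# Class names and the namespace are illustrative, not a published ontology.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/model/")
g = Graph()

for cls in ("Experiment", "MicroarrayExperiment", "Sample"):
    g.add((EX[cls], RDF.type, OWL.Class))

# Explicit subsumption: every MicroarrayExperiment is an Experiment.
g.add((EX.MicroarrayExperiment, RDFS.subClassOf, EX.Experiment))

# Explicit relationship between data types: an Experiment uses a Sample.
g.add((EX.usesSample, RDF.type, OWL.ObjectProperty))
g.add((EX.usesSample, RDFS.domain, EX.Experiment))
g.add((EX.usesSample, RDFS.range, EX.Sample))

print(g.serialize(format="turtle"))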
Figure 5 BioPax ontology browsed with Protégé
Ontologies for this purpose have been adopted in the microarray world. Microarray is the term standing for modern bio-molecular analysis systems used to generate molecular-level biomarkers for a variety of biological states and medical diseases. The creation of large amounts of microarray data and the creation of databases to enable sharing of these data quickly raised the need for standards in describing microarray experiments and results. The MGED ontology aims at providing a common terminology and information schema for annotating microarray experimental results, resolving ambiguity about how microarray experiments are described and providing a mechanism for query expansion exploiting subsumption relationships [34].
Another remarkable example is the Ontology of Biomedical Investigation, which is a more generic approach targeting the description of biological and medical investigations [35].

4) Specification of data exchange format
The emergence of multiple data bases containing related biomedical information requires a mechanism to specify a standard exchange format. The ontological capabilities for structuring information are more and more used for this purpose.
The BioPax organization has been working for years on defining a standard for representing metabolic, biochemical, transcription regulation, protein synthesis and signal transduction pathways [36]. It has already been taken as the standard by the leading pathway resources like the Kyoto Encyclopedia of Genes and Genomes [37], BioCyc [38] or Reactome [39].

5) Semantic based Information Integration
The integration scenario of heterogeneous yet related data sources currently requires manual ad-hoc processing based on syntactic methods (e.g. linking objects with the same name, facing polysemy, acronym, abbreviation and synonymy related issues). Specifying the semantics of the data in a variety of databases can enable researchers to integrate heterogeneous data across different databases. Linking entities in different data sources based on shared characteristics, supported by an ontology that provides a common declarative foundation to describe biomedical content, has proven to be a better approach (a minimal sketch of this idea follows below). The additional ontological reasoning capabilities can support the linking process, resolve ambiguity and, at the end of the day, facilitate the integration and validation of disparate information.
The TAMBIS project implements an ontology-driven integration middleware [23].
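The sketch below (illustrative records and ontology identifiers, not taken from any of the systems cited here) contrasts name-based linking with linking through a shared ontology term, which is the essence of semantics-based integration:

# Sketch: semantic integration by linking records through shared ontology ids
# instead of through their (ambiguous) local names. All data is illustrative.
source_a = [
    {"name": "p53",        "go_id": "GO:0006915"},   # apoptotic process
    {"name": "BRCA1 gene", "go_id": "GO:0006281"},   # DNA repair
]
source_b = [
    {"label": "TP53 protein",       "go_id": "GO:0006915"},
    {"label": "DNA repair protein", "go_id": "GO:0006281"},
]

# Name-based linking would miss "p53" vs "TP53 protein"; the shared ontology
# annotation makes the correspondence explicit and machine-checkable.
by_go = {}
for record in source_a:
    by_go.setdefault(record["go_id"], {})["a"] = record
for record in source_b:
    by_go.setdefault(record["go_id"], {})["b"] = record

for go_id, linked in by_go.items():
    print(go_id, "->", linked.get("a", {}).get("name"), "|", linked.get("b", {}).get("label"))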

Figure 6 Information flow in TAMBIS



1- The user interacts with Query Formulation Dialogues, expressing queries in terms of the biological model. The dialogues are driven by the content of the model, guiding the user towards sensible queries. The query is then passed to the transformation process, which may require further user input to refine and instantiate the query.
2- The Terminology Server provides services for reasoning about concept models, answering questions like: what can I say about proteins? Or what are the parents of concept X? It communicates with other modules through a well-defined interface.
3- The Services Knowledge Base links the biological ontology with the sources and their schemas. This information is used by the transformation process to determine which source should be used.
4- Query Transformation takes the conceptual, source-independent queries and rewrites them to produce executable query plans. To do this it requires knowledge about the biological sources and the services they offer. Information about particular user preferences –say favourite databases or analysis methods– may also be incorporated by the query planner. The query plans are then passed to the wrappers.
5- The Wrapper Service coordinates the execution of the query and sends each component to the appropriate source. Results are collected and returned to the user. (A rough sketch of this flow is given after this list.)
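A very rough sketch of steps 3–5 (with a hypothetical source registry and query shapes; this is not TAMBIS code) can make the rewrite-and-dispatch idea concrete:

# Sketch: rewriting one conceptual query into per-source query plans and
# collecting the results, loosely following steps 3-5 above (illustrative only).
SERVICES = {  # stands in for the Services Knowledge Base
    "swissprot": {"concepts": {"Protein"}, "template": "txt search {value}"},
    "enzyme":    {"concepts": {"Enzyme", "Protein"}, "template": "EC lookup {value}"},
}

def plan(conceptual_query):
    """Pick the sources that cover the concept and instantiate their query syntax."""
    concept, value = conceptual_query["concept"], conceptual_query["value"]
    return [
        (name, meta["template"].format(value=value))
        for name, meta in SERVICES.items()
        if concept in meta["concepts"]
    ]

def execute(plans):
    # A real wrapper service would call each source; here we just echo the plan.
    return {source: f"results of '{query}'" for source, query in plans}

conceptual = {"concept": "Protein", "value": "kinase"}
print(execute(plan(conceptual)))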
6) Computer reasoning with data
The competitive advantage of representing the knowledge by means of ontologies is the possibility to exploit it by means of computer reasoning, i.e. the capability of making inferences based on the knowledge contained in the ontology, the contextual information and the asserted facts. For a scientist the panorama looks like a huge amount of well-structured information and a set of tools to analyze this information and allow for drawing meaningful inferences.
This step means shifting from a mere information retrieval mindset to a meaning-of-information mindset. Typically, when a researcher is formulating a hypothesis, it is extremely difficult to verify that the available data support this hypothesis and, if not, to figure out where the inconsistencies are. The need for tools capable of querying and interpreting the information at hand is becoming more and more pressing.
The HyBrow [40] or Hypothesis Browser allows for evaluating alternative hypotheses by applying biological knowledge to integrated biological data –such as gene expression, protein interactions and annotations–.
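In the simplest possible terms (and with invented facts rather than HyBrow's actual event model), checking a hypothesis against asserted facts can be sketched like this:

# Sketch: evaluating a hypothesis against asserted facts, in the spirit of
# hypothesis-evaluation tools like HyBrow (facts and statements are invented).
facts = {
    ("geneA", "expressed_in", "liver"),
    ("proteinA", "interacts_with", "proteinB"),
    ("geneA", "encodes", "proteinA"),
}

hypothesis = [
    ("geneA", "expressed_in", "liver"),
    ("proteinA", "interacts_with", "proteinC"),   # not supported by the data
]

supported = [triple for triple in hypothesis if triple in facts]
unsupported = [triple for triple in hypothesis if triple not in facts]

print("supported statements:  ", supported)
print("unsupported statements:", unsupported)
if unsupported:
    print("hypothesis is not fully supported by the integrated data as asserted")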
Figure 7 "Understanding cycle" proposed in the HyBrow project

As extensively discussed in [10], in recent years several formalisms have been proposed for modeling biochemical processes qualitatively [4][5][6] or quantitatively [7][8]. The tools that are being developed integrate a graphical user interface and a simulator, but only a very small subset manage to provide true reasoning capabilities on the processes. For example, the focus of Biocham [9] has been on the design of a biochemical rule language and a query language for the model in temporal logic, both intended to be used by biologists. Biocham is a language and a programming environment for modeling biochemical systems, making simulations, and querying such models in temporal logic, composed of: a rule-based language for modeling biochemical systems, a simple simulator, a powerful query language based on Computation Tree Logic (CTL) and several interfaces for the automatic evaluation of CTL queries.

V. CONCLUSION AND RESEARCH DIRECTIONS
This work explains the challenges the Life Sciences are faced with, and how semantic web technologies are being used to address the major integration problems.
Ontologies, the semantic web cornerstone, are being adopted to solve a wide range of problems, among them the establishment of controlled vocabularies, the representation of knowledge, the specification of information models overcoming certain limitations of classic UML, the specification of exchange formats to transfer knowledge between distributed systems, the integration capabilities empowered by the usage of semantics, or the usage of automatic reasoning techniques to discover or infer hidden knowledge inherent to the data model.
The uptake of this technique is so widespread that many institutions are starting to own the development of a particular ontology. As an immediate result, the number of reference ontologies in biomedicine is constantly increasing.
The future will bring more formalism and therefore better analytical possibilities. Collaboration platforms for the community development of ontologies will also be a big area of research, as well as knowledge sharing for ontology re-usage and expansion. Another point to be addressed in the near future is the mapping and overlapping of ontologies.

REFERENCES

[1] Berners-Lee, T., Hendler, J. and Lassila, O., "The Semantic Web," Scientific American, 284(5), 2001, pp. 34-43.
[2] Parry, D. ACM International Conference Proceeding Series, Vol. 54: Proceedings of the second workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation - Volume 32, New Zealand, 2004.
[3] De Roure, D., Frey, J., Michaelides, D., Page, K. Collaborative Technologies and Systems, 2006 (CTS 2006), International Symposium on, 14-17 May 2006, pp. 411-418.
[4] Regev, A., Silverman, W., and Shapiro, E. (2001). Representation and simulation of biochemical processes using the pi-calculus process algebra. In Proceedings of the Pacific Symposium on Biocomputing.
[5] Nagasaki, M., Onami, S., Miyano, S., and Kitano, H. (2000). Biocalculus: Its concept, and an application for molecular interaction. In Currents in Computational Molecular Biology, volume 30 of Frontiers Science Series.
[6] Eker, S., Knapp, M., Laderoute, K., Lincoln, P., Meseguer, J., and Sönmez, M. K. (2002). Pathway logic: Symbolic analysis of biological signaling. In Proceedings of the seventh Pacific Symposium on Biocomputing.
[7] Matsuno, H., Doi, A., Nagasaki, M., and Miyano, S. (2000). Hybrid Petri net representation of gene regulatory network. In Pacific Symposium on Biocomputing.
[8] Hofestädt, R. and Thelen, S. (1998). Quantitative modeling of biochemical networks. In Silico Biology.
[9] Chabrier-Rivier, N., Fages, F., and Soliman, S. (2004). The biochemical abstract machine BIOCHAM. In Danos, V. and Schächter, V., editors, CMSB'04: Proceedings of the second Workshop on Computational Methods in Systems Biology, Lecture Notes in Computer Science. Springer-Verlag.
[10] State-of-the-art in Bioinformatics, Reasoning on the Web with Rules and Semantics (REWERSE Project), Deliverable A2-D1, available at http://rewerse.net/deliverables/a2-d1.pdf (accessed March 2009).
[11] Available at http://news.bbc.co.uk/2/hi/science/nature/805803.stm (accessed March 2009).
[12] Available at http://www.expasy.ch/sprot/ (accessed March 2009).
[13] Available at http://www.trembl.org/ (accessed March 2009).
[14] Available at http://www.ncbi.nlm.nih.gov/pubmed/ (accessed March 2009).
[15] Fielding, Roy T. and Taylor, Richard N. (2002). "Principled Design of the Modern Web Architecture," ACM Transactions on Internet Technology (TOIT), New York: Association for Computing Machinery.
[16] Available at http://www.biowisdom.com/solutions/srs/ (accessed March 2009).
[17] Available at http://www.ncbi.nlm.nih.gov/sites/gquery (accessed March 2009).
[18] Available at http://bioinformatics.ubc.ca/atlas (accessed March 2009).
[19] Available at https://www3.ibm.com/solutions/lifesciences/solutions/discoverylink.html (accessed March 2009).
[20] Eckman, B., Kosky, A., and Laroco, L. (2001). Extending traditional query-based integration approaches for functional characterization of post-genomic data.
[21] Davidson, S., Overton, C., Tannen, V., and Wong, L. (1997). BioKleisli: A digital library for biomedical researchers. Journal of Digital Libraries.
[22] Davidson, S., Crabtree, J., Brunk, B., Schug, J., Tannen, V., Overton, C., and Stoeckert, C. (2001). K2/Kleisli and GUS: Experiments in integrated access to genomic data sources. IBM Systems Journal, Issue on Deep computing for the life sciences.
[23] Goble, C., Stevens, R., Ng, G., Bechhofer, S., Paton, N., Baker, P., Peim, M., and Brass, A. (2001). Transparent access to multiple bioinformatics information sources. IBM Systems Journal, Issue on Deep computing for the life sciences.
[24] Kemp, G., Angelopoulos, N., and Gray, P. (2000). A schema-based approach to building a bioinformatics database federation. In Proceedings of the IEEE International Symposium on Bioinformatics and Biomedical Engineering.
[25] Noy, N., et al. Creating Semantic Web contents with Protégé-2000. IEEE Intelligent Systems 16(2), March/April 2001.
[26] Smith, B., Nigam, S. Ontologies in biomedicine: how to make use of them (tutorial). University at Buffalo, USA, available at http://bioontology.org/wiki/images/d/d2/ISMB_2007_Handout.pdf
[27] Slomin, D., Tengi, R. WordNet, Princeton University Cognitive Science Lab.
[28] Gene Ontology, available at http://www.geneontology.org/ (accessed March 2009).
[29] GoPubMed project, available at http://www.gopubmed.com/ (accessed March 2009).
[30] US National Library of Medicine, available at http://www.nlm.nih.gov/ (accessed March 2009).
[31] Foundation Model of Anatomy, available at http://sig.biostr.washington.edu/projects/fm/FME/aboutFME.html (accessed March 2009).
[32] Protégé, Stanford University, available at http://protege.stanford.edu (accessed March 2009).
[33] OWL Web Ontology Language, available at http://www.w3.org/TR/owl-features/ (accessed March 2009).
[34] MGED, available at http://mged.sourceforge.net/ontologies/index.php (accessed March 2009).
[35] Ontology of Biomedical Investigation, available at http://obi-ontology.org/page/Main_Page (accessed March 2009).
[36] Biological Pathways Exchange (BioPAX), available at http://www.biopax.org/ (accessed March 2009).
[37] Kyoto Encyclopedia of Genes and Genomes, available at http://www.genome.jp/kegg (accessed March 2009).
[38] BioCyc, available at http://biocyc.org (accessed March 2009).
[39] Reactome, available at http://www.reactome.org/ (accessed March 2009).
[40] Hypotheses Browser (HyBrow), available at http://www.hybrow.org/ (accessed March 2009).