Pediatric Clinics of North America, 2009
Journal of Cybersecurity, 2017
Surveillance is recognised as a commonplace social phenomenon, employed by governments, companies and communities for a wide variety of reasons. Surveillance is fundamental to cybersecurity, as it provides tools for prevention and detection; it is also a source of controversies related to privacy and freedom. Building on general studies of surveillance, we identify and analyse concepts that are central to surveillance. To do this we employ formal methods based on elementary algebra. First, we show that disparate forms of surveillance have a common structure and can be unified by abstract mathematical concepts. The model shows that (i) finding identities and (ii) sorting identities into categories are fundamental in conceptualising surveillance. Second, we develop a formal model that theorises identity as abstract data that we call identifiers. The model views identity through the computational lens of the theory of abstract data types. We examine the ways identifier...
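As a loose illustration of this abstract-data-type view (the class and operations below are assumptions made for this sketch, not the authors' formal model), an identifier can be modelled as a value that supports only construction, matching and categorisation:

    from dataclasses import dataclass

    # Hypothetical sketch: an identifier as an abstract data type whose only
    # observable operations are construction, matching and categorisation.
    @dataclass(frozen=True)
    class Identifier:
        value: str   # the raw token (a name, number, account, biometric hash, ...)
        scheme: str  # the naming scheme that issued it

    def same_identity(a: Identifier, b: Identifier) -> bool:
        # (i) finding identities: identifiers only match within one scheme
        return a.scheme == b.scheme and a.value == b.value

    def categorise(ident: Identifier, categories: dict[str, set[Identifier]]) -> list[str]:
        # (ii) sorting identities into categories
        return [name for name, members in categories.items() if ident in members]

    watchlist = {"persons_of_interest": {Identifier("4711", "passport")}}
    print(categorise(Identifier("4711", "passport"), watchlist))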
2018
Any broadcast organization that remains static runs the risk of being overtaken by newer, more agile alternatives. To remain competitive, broadcasters must constantly work to increase process velocity, accuracy, and flexibility. These goals cannot be reached without reducing time to market, manual touch-points, and associated labor costs. A major hurdle on this road to efficiency is the absence of a universal method to identify content, resulting in unnecessary manual workflows and time- and resource-consuming communications with third parties for the production, processing, and exchange of content. Root causes of these impracticalities include problems with work identification during the acquisition, reconciliation, and de-duplication of assets obtained from multiple sources, which place high demands on limited resources and cause delays or reduce content capacity. A necessary element in solving this problem is the use of globally unique and persistent works identification. As such, it w...
2010
The primary objective of most gene expression studies is the identification of one or more gene signatures: lists of genes whose transcriptional levels are uniquely associated with a specific biological phenotype. Whilst thousands of experimentally derived gene signatures are published, their potential value to the community is limited by their computational inaccessibility. Gene signatures are embedded in published article figures, tables or supplementary materials, and are frequently presented using non-standard gene or probeset nomenclature. We present GeneSigDB (http://compbio.dfci.harvard.edu/genesigdb), a manually curated database of gene expression signatures. GeneSigDB release 1.0 focuses on cancer and stem cell gene signatures and was constructed from more than 850 publications, from which we manually transcribed 575 gene signatures. Most gene signatures (n = 560) were successfully mapped to the genome to extract standardized lists of EnsEMBL gene identifiers. GeneSigDB provides the original gene signature, the standardized gene list and a fully traceable gene mapping history for each gene, from the original transcribed data table through to the standardized list of genes. The GeneSigDB web portal is easy to search, allows users to compare their own gene lists to those in the database, and supports downloading gene signatures in most common gene identifier formats.
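A minimal sketch of the kind of comparison the portal offers, assuming both the user list and the signatures are expressed as Ensembl gene identifiers (the signature contents below are invented for illustration, not taken from GeneSigDB):

    # Illustrative signatures and user list, both as Ensembl gene identifiers.
    signatures = {
        "SIG_A": {"ENSG00000141510", "ENSG00000012048"},
        "SIG_B": {"ENSG00000139618", "ENSG00000141510"},
    }
    user_genes = {"ENSG00000141510", "ENSG00000155657"}

    # Rank signatures by overlap with the user's list.
    for name, members in sorted(signatures.items(),
                                key=lambda kv: len(user_genes & kv[1]),
                                reverse=True):
        shared = user_genes & members
        print(name, len(shared), sorted(shared))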
Computer, 2014
Open governments use the Web as a global dataspace for their datasets. It is in the interest of these governments to be interoperable with other governments worldwide, yet there is currently no way to identify which datasets to be interoperable with, nor to measure that interoperability. In this article we discuss the possibility of comparing the identifiers used within various datasets as a way to measure semantic interoperability. We introduce three metrics to express the interoperability between two datasets: identifier interoperability, relevance and the number of conflicts. The metrics are calculated from a list of statements indicating, for each pair of identifiers in the system, whether they identify the same concept or not. While considerable effort is needed to collect these statements, the return is high: not only are relevant datasets identified, but machine-readable feedback is also provided to the data maintainer.
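A hedged sketch of how such metrics might be computed from pairwise statements; the definitions below are plausible readings of the abstract, not the paper's exact formulas:

    from collections import Counter

    # Each statement says whether an identifier from dataset A and one from
    # dataset B denote the same concept (values invented for illustration).
    statements = [
        ("A:city/1", "B:town/9", True),
        ("A:city/2", "B:town/9", True),   # B:town/9 matched twice -> a conflict
        ("A:city/3", "B:town/4", False),
    ]

    ids_a = {a for a, _, _ in statements}
    matched_a = {a for a, _, same in statements if same}
    b_hits = Counter(b for _, b, same in statements if same)

    identifier_interoperability = len(matched_a) / len(ids_a)  # share of A's identifiers with a counterpart in B
    relevance = sum(1 for *_, same in statements if same) / len(statements)
    conflicts = sum(1 for n in b_hits.values() if n > 1)       # one B identifier claimed by several A identifiers

    print(identifier_interoperability, relevance, conflicts)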
Journal of Information Security, 2012
When initializing cryptographic systems or running cryptographic protocols, the randomness of critical parameters, such as keys or key components, is one of the most crucial aspects. However, randomly chosen parameters come with the intrinsic chance of duplicates, which may ultimately cause cryptographic systems including RSA, ElGamal and zero-knowledge proofs to become insecure. For digital identifiers, we need uniqueness in order to correctly identify a specific action or object. Unfortunately, we also need randomness here: without it, actions become linkable to each other or to their initiator's digital identity. So ideally the employed (cryptographic) parameters should fulfill two potentially conflicting requirements simultaneously: randomness and uniqueness. This article proposes an efficient mechanism to provide both attributes at the same time, without highly constraining the first and never violating the second. After defining five requirements on random number generators and discussing related work, we describe the core concept of the generation mechanism. Subsequently, we prove the postulated properties (security, randomness, uniqueness, efficiency and privacy protection) and present some application scenarios, including system-wide unique parameters, cryptographic keys and components, identifiers and digital pseudonyms.
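One well-known way to reconcile randomness and uniqueness (not necessarily the mechanism proposed in this article) is to encrypt a never-repeating counter with a block cipher, i.e. apply a pseudorandom permutation; the sketch below assumes the third-party pycryptodome package:

    import secrets
    from Crypto.Cipher import AES   # third-party: pycryptodome

    key = secrets.token_bytes(16)         # per-system secret key
    cipher = AES.new(key, AES.MODE_ECB)   # a block cipher is a permutation, so
                                          # distinct inputs never collide

    def unique_random_id(counter: int) -> str:
        block = counter.to_bytes(16, "big")   # the counter never repeats -> uniqueness
        return cipher.encrypt(block).hex()    # the ciphertext looks random -> unlinkability

    print(unique_random_id(1))
    print(unique_random_id(2))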
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2000
The interactor normalization task (INT) is to identify genes that play the interactor role in protein-protein interactions (PPIs), to map these genes to unique IDs, and to rank them according to their normalized confidence. INT has two subtasks: gene normalization (GN) and interactor ranking. The main difficulties of INT GN are identifying genes across species and using full papers instead of abstracts. To tackle these problems, we developed a multistage GN algorithm and a ranking method, which exploit information in different parts of a paper. Our system achieved a promising AUC of 0.43471. Using the multistage GN algorithm, we have been able to improve system performance (AUC) by 1.719 percent compared to a one-stage GN algorithm. Our experimental results also show that with full text, versus abstract only, INT AUC performance was 22.6 percent higher.
Dlib, 2004
This paper focuses on the use of NISO OpenURL and MPEG-21 Digital Item Processing (DIP) to disseminate complex objects and their contained assets, in a repository architecture designed for the Research Library of the Los Alamos National Laboratory. In the architecture, the MPEG-21 Digital Item Declaration Language (DIDL) is used as the XML-based format to represent complex digital objects. Through an ingestion process, these objects are stored in a multitude of autonomous OAI-PMH repositories. An OAI-PMH compliant Repository Index keeps track of the creation and location of all those repositories, whereas an Identifier Resolver keeps track of the location of individual complex objects and contained assets. An MPEG-21 DIP Engine and an OpenURL Resolver facilitate the delivery of various disseminations of the stored objects. While these aspects of the architecture are described in the context of the LANL library, the paper also briefly touches on their more general applicability.
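A rough sketch of an OpenURL-style request to such a resolver; the parameter names follow the OpenURL ContextObject key/value format, while the object identifier, service identifier and resolver host are invented for illustration:

    from urllib.parse import urlencode

    params = {
        "url_ver": "Z39.88-2004",                 # OpenURL ContextObject version
        "rft_id": "info:lanl-repo/i/1234-5678",   # hypothetical identifier of the stored object
        "svc_id": "info:lanl-repo/svc/didl",      # hypothetical requested dissemination service
    }
    print("https://resolver.example.org/openurl?" + urlencode(params))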
BMC Bioinformatics, 2007
Background: Researchers involved in annotating large numbers of gene, clone or protein identifiers are usually required to perform a one-by-one conversion for each identifier. In fields such as microarray experiments, this number may be around 30,000.
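A tiny sketch of the batch alternative to one-by-one conversion; the probeset-to-symbol mapping below is illustrative, not taken from the paper:

    # Illustrative probeset-to-symbol mapping; a real study would load tens of
    # thousands of entries from an annotation source.
    mapping = {
        "1007_s_at": "DDR1",
        "1053_at": "RFC2",
        "117_at": "HSPA6",
    }
    probesets = ["1007_s_at", "117_at", "unknown_at"]

    converted = [(p, mapping.get(p, "UNMAPPED")) for p in probesets]
    print(converted)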
it - Information Technology, 2009
The fast increase of the routing table size in the default-free zone (DFZ) is a major concern for the scalability of the Internet and a threat for its effective operation in the future. Proposals exist to modify the current routing architecture in order to decelerate the growth of the routing tables in the DFZ, but they are difficult to deploy. The locator/identifier split (Loc/ID) principle is significantly different from routing and addressing in today's Internet, but it is expected to improve routing scalability. We explain its basic idea, address interworking issues, point out design options, and review current implementation proposals.
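A toy sketch of the Loc/ID idea: a stable endpoint identifier is resolved through a mapping system to topological locators used for forwarding (all addresses below are documentation examples, not a concrete proposal's format):

    # Identifier-to-locator mapping (addresses are documentation examples).
    mapping_system = {
        "2001:db8:1d::42": ["192.0.2.1", "198.51.100.7"],
    }

    def forward(identifier: str, payload: bytes) -> tuple[str, bytes]:
        locators = mapping_system[identifier]   # resolve, then encapsulate toward a locator
        return locators[0], payload             # the identifier itself never changes

    print(forward("2001:db8:1d::42", b"hello"))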
Contrapoderes en la democracia constitucional ante la amenaza populista, 2024
Multilingual Margins, 2014
Brain Behavior and Immunity Integrative, 2024
Rome, IAI, January 2024, 5 p. (IAI Commentaries ; 24|03), 2024
Construyendo Bóvedas Tabicadas II, 2022
GAHPERD Journal, 2016
Albores, revista de ciencias políticas y sociales, 2024
Je vám to pojem? K 90. narozeninám Karla Skalického (Is that a notion to you? On the 90th anniversary of Karel Skalický), 2024
La dirigencia política argentina, 2023
Kahramanmaraş Sütçü İmam Üniversitesi Sosyal Bilimler Dergisi
Brain, Behavior, and Immunity, 2002
Facta Universitatis, Series: Mechanical Engineering, 2018
Journal of Lipid Research, 2013
The American Journal of Medicine, 2011
2013 IEEE Workshop on Applications of Computer Vision (WACV), 2013
Journal of Algebra, 1996