International Semantic Web Conference, Oct 1, 2020
This paper describes three new Geospatial Linked Data (GLD) quality metrics that help evaluate conformance to standards. Standards conformance is a key quality criterion, for example for FAIR data. The metrics were implemented in the open source Luzzu quality assessment framework and used to evaluate four public geospatial datasets, which showed a wide variation in standards conformance. This is the first set of Linked Data quality metrics developed specifically for GLD.
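To make the idea of a standards-conformance metric concrete, the Python/rdflib sketch below checks what share of geo:asWKT geometry values carry the GeoSPARQL wktLiteral datatype. This is an illustrative metric in the spirit of the paper, not one of its three published metrics, and the input file name is hypothetical.

    from rdflib import Graph, Namespace
    from rdflib.term import Literal

    GEO = Namespace("http://www.opengis.net/ont/geosparql#")

    def wkt_datatype_conformance(graph: Graph) -> float:
        """Share of geo:asWKT objects typed as geo:wktLiteral (1.0 if no geometries)."""
        total = conformant = 0
        for _, _, obj in graph.triples((None, GEO.asWKT, None)):
            total += 1
            if isinstance(obj, Literal) and obj.datatype == GEO.wktLiteral:
                conformant += 1
        return conformant / total if total else 1.0

    g = Graph()
    g.parse("geospatial-dataset.ttl", format="turtle")  # hypothetical input dataset
    print(f"WKT datatype conformance: {wkt_datatype_conformance(g):.2%}")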
I hereby certify that this material, which I now submit for assessment on the programme of study leading to the award of PhD in Electronic Engineering, is entirely my own work and has not been taken from the work of others save to the extent that such work has been cited and acknowledged within the text of my work.
Zenodo (CERN European Organization for Nuclear Research), May 10, 2022
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
2021 IEEE International Conference on Big Data (Big Data), Dec 15, 2021
Smart city governments face several barriers in adopting open data initiatives. Some of these barriers are related to the attribution rights of open licences, as reusers often misattribute open data. As a result, publishers do not know which open data is reused and for what, leaving them unaware of the return on investment and hindering the sustainability of open data initiatives. In addition, reusers who do not fully comply with the right of attribution of open data licences may face legal problems. To overcome these pitfalls, this paper envisions an approach that extends open data publication standards with elements from open-source software development standards, to support reusers in achieving proper attribution of open data. Our approach is a first attempt to both (i) enable publishers to gather information to understand open data reuse in smart cities, and (ii) support reusers in avoiding legal issues related to open data licence violation.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
This chapter presents the Trusted Integrated Knowledge Dataspace (TIKD), a trusted data sharing approach, based on Linked Data technologies, that supports compliance with the General Data Protection Regulation (GDPR) for personal data handling as part of the data security infrastructure for sensitive application environments such as healthcare. State-of-the-art shared dataspaces typically do not consider sensitive data and privacy-aware log records as part of their solutions, defining only how to access data. TIKD complements existing dataspace security approaches through trusted data sharing that includes personal data handling, data privileges, pseudonymization of user activity logging, and privacy-aware data interlinking services. TIKD was implemented on the Access Risk Knowledge (ARK) Platform, a socio-technical risk governance system, and deployed as part of the ARK-Virus Project, which aims to govern the risk management of personal protective equipment (PPE) across a group of collaborating healthcare institutions. The ARK Platform was evaluated, both before and after implementing TIKD, using the ISO 27001 Gap Analysis Tool (GAT), which determines information security standard compliance, and the ISO 27701 standard for privacy information. The results of the security and privacy evaluations indicated that compliance with ISO 27001 increased from 50% to 85% and compliance with ISO 27701 increased from 64% to 90%. This shows that implementing TIKD provides a trusted data sharing environment with significantly improved compliance with both information security and privacy standards.
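As an illustration of one TIKD ingredient, pseudonymization of user activity logging, the following minimal Python sketch replaces user identifiers with keyed hashes before a log record is shared. The key handling, identifiers and record structure are assumptions for illustration, not taken from the chapter.

    import datetime
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"replace-with-a-managed-secret"  # in practice, obtained from a key management service

    def pseudonymize(user_id: str) -> str:
        """Deterministic pseudonym: the same user always maps to the same opaque token."""
        return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

    def log_activity(user_id: str, action: str, resource: str) -> str:
        record = {
            "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
            "user": pseudonymize(user_id),  # no raw identifier is stored in the shared log
            "action": action,
            "resource": resource,
        }
        return json.dumps(record)

    print(log_activity("alice@hospital.example", "read", "ppe-risk-assessment/42"))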
Ontologies are widely considered the building blocks of the Semantic Web, and with them comes the issue of data interoperability. As ontologies are not necessarily labelled in the same natural language, one way to achieve semantic interoperability is by means of cross-lingual ontology mapping. Translation techniques are often used as an intermediate step to translate the conceptual labels within an ontology. This approach essentially removes the natural language barrier in the mapping environment and enables the application of monolingual ontology mapping tools. This paper shows that the key to this translation-based approach to cross-lingual ontology mapping lies in selecting appropriate ontology label translations in a given mapping context. Appropriateness of translations in the context of cross-lingual ontology mapping differs from the ontology localisation point of view, as the former aims to generate correct mappings whereas the latter aims to adapt specifications of conceptualisations to target communities. This paper further demonstrates that the mapping outcome of the translation-based cross-lingual ontology mapping approach is conditioned on the translations selected in the intermediate label translation step. In particular, this paper presents the design, implementation and evaluation of a novel cross-lingual ontology mapping system, SOCOM++. SOCOM++ provides configurable properties that can be manipulated by a user in the process of selecting label translations in an effort to adjust the subsequent mapping outcome. The evaluation shows that, for the same pair of ontologies, the mappings between them can be adjusted by tuning the translations of the ontology labels. This finding has not been shown in previous research.
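A minimal sketch of the translation-based mapping idea described above (not the SOCOM++ system itself): source-ontology labels are passed through a translation step and then compared with target-ontology labels by a monolingual string matcher. The translate() function is a placeholder for any machine translation service, and the labels and threshold are illustrative.

    from difflib import SequenceMatcher

    def translate(label: str, target_lang: str = "en") -> str:
        """Placeholder for a machine translation call; returns the label unchanged here."""
        return label

    def match_labels(source_labels, target_labels, threshold=0.8):
        mappings = []
        for src in source_labels:
            src_translated = translate(src)
            for tgt in target_labels:
                score = SequenceMatcher(None, src_translated.lower(), tgt.lower()).ratio()
                if score >= threshold:
                    mappings.append((src, tgt, round(score, 2)))
        return mappings

    print(match_labels(["Universität", "Student"], ["University", "Student"]))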
This paper describes the H2020 ALIGNED project (#644055), which investigates RDF-based data quality, semantic model-driven software engineering, enterprise linked data for software and data engineering process management, and the engineering of data-intensive systems based on linked data.
This paper presents DELTA-LD, an approach that detects and classifies the changes between two versions of a linked dataset. It contributes to the state of the art: firstly, by proposing a classification that distinctly identifies resources that have had both their IRIs and representation changed and resources that have had only their IRI changed; secondly, by automatically selecting the appropriate resource properties to identify the same resources in different versions of a linked dataset with different IRIs and similar representations. The paper also presents the DELTA-LD change model to represent the detected changes. This model captures information about both changed resources and changed triples in a linked dataset during its evolution, bridging the gap between resource-centric and triple-centric views of changes. As a result, a single change detection mechanism can support several diverse use cases such as interlink maintenance and replica synchronization. The paper, in addition, describes an experiment conducted to examine the accuracy of DELTA-LD in detecting the changes between the person snapshots of DBpedia. The results indicate that DELTA-LD outperforms state-of-the-art approaches by up to 4% in terms of F-measure. It is demonstrated that the proposed classification of changes helped to identify up to 1,529 additional updated resources compared to the existing classification of resource-level changes. By means of a case study, we also demonstrate the automatic repair of broken interlinks using the changes detected by DELTA-LD and represented in the DELTA-LD change model, showing how 100% of the broken interlinks between the DBpedia person snapshot 3.7 and Freebase were repaired.
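The sketch below shows the basic version-comparison idea in Python with rdflib; it is a simplified illustration rather than the DELTA-LD algorithm or change model. It diffs the triple sets of two dataset versions and reports resources that have both additions and removals, a rough analogue of "updated" resources; the file names are hypothetical.

    from collections import defaultdict
    from rdflib import Graph

    def diff_versions(old_path: str, new_path: str):
        old, new = Graph(), Graph()
        old.parse(old_path, format="turtle")
        new.parse(new_path, format="turtle")
        added = set(new) - set(old)
        removed = set(old) - set(new)

        changes = defaultdict(lambda: {"added": [], "removed": []})
        for s, p, o in added:
            changes[s]["added"].append((p, o))
        for s, p, o in removed:
            changes[s]["removed"].append((p, o))
        # Resources with both additions and removals have had their representation changed.
        return {s: c for s, c in changes.items() if c["added"] and c["removed"]}

    updated = diff_versions("person-snapshot-old.ttl", "person-snapshot-new.ttl")  # illustrative files
    print(f"{len(updated)} resources updated")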
This paper identifies good practice for building high-usability Linked Data mobile apps, demonstrates these practices through a prototype app for a commercial service, and evaluates the usability achieved. The state of the art in mobile Linked Data technologies and design guidelines for mobile search user interfaces were reviewed and analyzed to define good practice. This paper addresses the lack of research on mobile applications using Linked Data and on consumer-oriented applications of Semantic Web technology. The user study consisted of lab-based experimental investigations and surveys of real customers. The experiments showed that it is possible for a consumer-oriented Linked Data mobile application to achieve high usability, comparable to a commercial web-based solution on desktop devices. This provides evidence that our categorization of good practices in mobile Linked Data app design has broader applicability for mobile Linked Data applications.
The GDPR requires Data Controllers and Data Protection Officers (DPO) to maintain a Register of Processing Activities (ROPA) as part of overseeing the organisation’s compliance processes. The ROPA must include information from heterogeneous sources such as (internal) departments with varying IT systems and (external) data processors. Current practices use spreadsheets or proprietary systems that lack machine-readability and interoperability, presenting barriers to automation. We propose the Data Processing Catalogue (DPCat) for the representation, collection and transfer of ROPA information as catalogues in a machine-readable and interoperable manner. DPCat is based on the Data Catalog Vocabulary (DCAT), its extension DCAT Application Profile for data portals in Europe (DCAT-AP), and the Data Privacy Vocabulary (DPV). It represents a comprehensive semantic model developed from GDPR’s Article 30 and an analysis of the 17 ROPA templates from EU Data Protection Authorities (DPA). To d...
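For illustration, the Python/rdflib sketch below records a single processing activity as a DCAT dataset annotated with Data Privacy Vocabulary terms. It only gestures at the DPCat approach; the specific DPV properties and concepts used here are assumptions chosen for readability, not a statement of the actual DPCat profile, and the example IRIs are hypothetical.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCAT, DCTERMS, RDF

    DPV = Namespace("https://w3id.org/dpv#")
    EX = Namespace("https://example.org/ropa/")  # hypothetical organisational namespace

    g = Graph()
    entry = EX["processing-activity-001"]
    g.add((entry, RDF.type, DCAT.Dataset))
    g.add((entry, DCTERMS.title, Literal("Staff payroll processing")))
    g.add((entry, DPV.hasPurpose, DPV.HumanResourceManagement))      # illustrative DPV terms
    g.add((entry, DPV.hasPersonalDataCategory, DPV.Financial))
    g.add((entry, DPV.hasLegalBasis, DPV.LegalObligation))
    print(g.serialize(format="turtle"))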
This Deliverable summarizes the results of the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG), which was established and chaired by members of the SPECIAL Consortium (co-chairs: Axel Polleres (WU), Bert Bos (ERCIM)) as an initiative to build a community around the development of joint machine-readable vocabularies for interoperability in the context of data privacy. Details about the community group, along with information on how to participate and provide feedback on the drafts, are available at: https://www.w3.org/community/dpvcg/
Validating Interlinks between Linked Data Datasets with the SUMMR Methodology, 2016
Linked Data datasets use interlinks to connect semantically similar resources across datasets. As datasets evolve, a resource's locator can change, which can cause interlinks that contain old resource locators to no longer dereference and become invalid. Validating interlinks, through validating the resource locators within them, when a dataset has changed is important to ensure interlinks work as intended. In this paper we introduce the SPARQL Usage for Mapping Maintenance and Reuse (SUMMR) methodology. SUMMR is an approach for Mapping Maintenance and Reuse (MMR) that provides query templates, based on standard SPARQL queries, for MMR activities. This paper describes SUMMR and two experiments: a lab-based evaluation of SUMMR's mapping maintenance query templates and a deployment of SUMMR on the DBpedia v.2015-10 release to detect invalid interlinks. The lab-based evaluation involved detecting interlinks that had become invalid due to changes in resource locators, and repairing those invalid interlinks. The results show that the SUMMR templates and approach can be used to effectively detect and repair invalid interlinks. SUMMR's query template for discovering invalid interlinks was applied to the DBpedia v.2015-10 release, discovering 53,418 invalid interlinks in that release.
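The sketch below illustrates the general idea of SPARQL-driven interlink validation in Python; it is not one of SUMMR's actual query templates. It selects owl:sameAs targets from a SPARQL endpoint and flags targets that no longer dereference; the endpoint, result limit and timeout are illustrative.

    import requests
    from SPARQLWrapper import SPARQLWrapper, JSON

    QUERY = """
    SELECT DISTINCT ?target
    WHERE { ?source <http://www.w3.org/2002/07/owl#sameAs> ?target }
    LIMIT 100
    """

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # illustrative endpoint
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    for binding in results["results"]["bindings"]:
        target = binding["target"]["value"]
        try:
            ok = requests.head(target, allow_redirects=True, timeout=5).status_code < 400
        except requests.RequestException:
            ok = False
        if not ok:
            print("Invalid interlink target:", target)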
Seshat: The Global History Databank, 2015
The vast amount of knowledge about past human societies has not been systematically organized and, therefore, remains inaccessible for empirically testing theories about cultural evolution and historical dynamics. For example, what evolutionary mechanisms were involved in the transition from the small-scale, uncentralized societies in which humans lived 10,000 years ago to the large-scale societies of today, with their extensive division of labor, great differentials in wealth and power, and elaborate governance structures? Why do modern states sometimes fail to meet the basic needs of their populations? Why do economies decline, or fail to grow? In this article, we describe the structure and uses of a massive databank of historical and archaeological information, Seshat: The Global History Databank. The data we are currently entering in Seshat will allow us and others to test theories explaining how modern societies evolved from ancestral ones, and why modern societies vary so much in their capacity to satisfy their members' basic human needs.
Ontology Mapping Representations: a Pragmatic Evaluation, 2009
A common approach to mitigate the effects of ontology heterogeneity is to discover and express the specific correspondences (mappings) between different ontologies. An open research question is: how should such ontology mappings be represented? In recent years several proposals for an ontology mapping representation have been published, but as yet no format is officially standardised or generally accepted in the community. In this paper we present the results of a systematic analysis of ontology mapping representations to provide a pragmatic, state-of-the-art overview of their characteristics. In particular, we are interested in how current ontology mapping representations can support the management of ontology mappings (sharing, re-use, alteration) as well as how suitable they are for different mapping tasks.
Ontology Consistency and Instance Checking for Real World Linked Data, 2015
Many large ontologies have been created which make use of OWL's expressiveness for specification. However, tools to ensure that instance data complies with the schema are often not well integrated with triple stores and cannot detect certain classes of schema-instance inconsistency due to the assumptions of the OWL axioms. This can lead to lower-quality, inconsistent data. We have developed a simple ontology consistency and instance checking service, SimpleConsist [8]. We also define a number of ontology design best-practice constraints on OWL or RDFS schemas. Our implementation allows the user to specify which constraints should be applied to schema and instance data.
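As a simple illustration of instance checking against a schema (not the SimpleConsist service itself), the Python/rdflib sketch below reports triples whose subject is not asserted to be an instance of the property's rdfs:domain. The input file names are hypothetical and no RDFS inference is applied.

    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS

    schema, data = Graph(), Graph()
    schema.parse("schema.ttl", format="turtle")       # hypothetical schema file
    data.parse("instances.ttl", format="turtle")      # hypothetical instance data

    # Map each property to its declared rdfs:domain class.
    domains = {p: cls for p, cls in schema.subject_objects(RDFS.domain)}

    for s, p, o in data:
        expected = domains.get(p)
        if expected is not None and (s, RDF.type, expected) not in data:
            print(f"Possible violation: {s} used with {p} but not typed as {expected}")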
Effective, collaborative integration of software and big data engineering for Web-scale systems is now a crucial technical and economic challenge. This requires new combined data and software engineering processes and tools. Semantic metadata standards and linked data principles provide a technical grounding for such integrated systems, given an appropriate model of the domain. In this paper we introduce the ALIGNED suite of ontologies, specifically designed to model the information exchange needs of combined software and data engineering. These ontologies are deployed in web-scale, data-intensive system development environments in both the commercial and academic domains. We exemplify the usage of the suite on a complex collaborative software and data engineering scenario from the legal information system domain.
This paper presents a new OWL RL ontology, the Reasoning Violations Ontology (RVO), which describes both ABox and TBox reasoning errors produced by DL reasoners. This is to facilitate the integration of reasoners into data engineering tool-chains. The ontology covers violations of OWL 2 direct semantics and syntax detected at both the schema and instance level over the full range of OWL 2 and RDFS language constructs. Thus it is useful for reporting results to other tools when a reasoner is applied to linked data, RDFS vocabularies or OWL ontologies, for example for quality evaluations such as consistency, completeness or integrity. RVO supports supervised or semi-supervised error localisation and repair by defining properties that both identify the statement or triple where a violation is detected and provide context information on the violation which may help the repair process. In a case study we show how the ontology can be used by a reasoner and a supervised repair process to accelerate high-quality ontology development and provide automated constraint checking feedback on instance data. RVO is also being used to enable the integration of reasoning results into multi-vendor data quality tool chains within the ALIGNED H2020 project.
This paper reflects on six years of developing semantic data quality tools and curation systems for both a large-scale social sciences data collection and a major web of data hub. This experience has led the author to believe in using organisational value as a mechanism for automating data quality management in order to deal with Big Data volumes and variety. However, there are many challenges in developing these automated systems, and this discussion paper sets out a set of challenges with respect to the current state of the art and identifies a number of potential avenues for researchers to tackle these challenges.
This paper describes OWL ontology re-engineering from the wiki-based social science codebook (thesaurus) developed by the Seshat: Global History Databank. The ontology describes human history as a set of over 1,500 time series variables and supports variable uncertainty, temporal scoping, annotations and bibliographic references. The ontology was developed to transition from traditional social science data collection and storage techniques to an RDF-based approach. RDF supports automated generation of high-usability data entry and validation tools, data quality management, incorporation of facts from the web of data, and management of the data curation lifecycle. This ontology re-engineering exercise identified several pitfalls in modelling social science codebooks with semantic web technologies; provided insights into the practical application of OWL to complex, real-world modelling challenges; and has enabled the construction of new, RDF-based tools to support the large-scale Seshat data curation effort. The Seshat ontology is an exemplar of a set of ontology design patterns for modelling uncertainty or temporal bounds in standard RDF. Thus the paper provides guidance for deploying RDF in the social sciences. Within Seshat, OWL-based data quality management will assure the data is suitable for statistical analysis. Publication of Seshat as high-quality, linked open data will enable other researchers to build on it.
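For readers unfamiliar with such patterns, the sketch below shows one common way to temporally scope a variable's value in plain RDF, by attaching the value and its date bounds to an intermediate node. It illustrates the general pattern, not necessarily the exact Seshat design pattern; the namespace, property names and example values are assumptions.

    from rdflib import BNode, Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("https://example.org/seshat/")  # hypothetical namespace
    g = Graph()

    obs = BNode()  # intermediate node carrying the value and its temporal scope
    g.add((EX.RomanEmpire, EX.population, obs))
    g.add((obs, RDF.value, Literal(60_000_000, datatype=XSD.integer)))
    g.add((obs, EX.earliestDate, Literal("0001-01-01", datatype=XSD.date)))
    g.add((obs, EX.latestDate, Literal("0200-12-31", datatype=XSD.date)))
    print(g.serialize(format="turtle"))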
Linked Open Data consists of a large set of structured data knowledge bases which have been linked together, typically using equivalence statements. These equivalences usually take the form of owl:sameAs statements linking individuals, but links between classes are far less common. Often, the lack of linking between classes is because the relationships cannot be described as elementary one-to-one equivalences. Instead, complex correspondences referencing multiple entities in logical combinations are often necessary if we want to describe how the classes in one ontology are related to the classes in a second ontology. In this paper we introduce a novel Bayesian Restriction Class Correspondence Estimation (Bayes-ReCCE) algorithm, an extensional approach to detecting complex correspondences between classes. Bayes-ReCCE operates by analysing features of matched individuals in the knowledge bases, and uses Bayesian inference to search for complex correspondences between the classes these individuals belong to. Bayes-ReCCE is designed to provide meaningful results even when only small amounts of matched instances are available. We demonstrate this capability empirically, showing that the complex correspondences generated by Bayes-ReCCE have a median F1 score of over 0.75 when compared against a gold standard set of complex correspondences between Linked Open Data knowledge bases covering the geographical and cinema domains. In addition we discuss how metadata produced by Bayes-ReCCE can be included in the correspondences to encourage reuse, by allowing users to make more informed decisions on the meaning of the relationship described in the correspondences.
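To make the extensional, Bayesian idea concrete, the sketch below computes a Beta-Binomial posterior for a candidate correspondence from counts of matched instances that satisfy a target restriction. It illustrates the style of inference only and is not the Bayes-ReCCE algorithm; the counts and the uniform prior are illustrative.

    def posterior_mean(satisfied: int, total: int, alpha: float = 1.0, beta: float = 1.0) -> float:
        """Posterior mean of the probability that a matched instance satisfies the
        candidate restriction, under a Beta(alpha, beta) prior."""
        return (satisfied + alpha) / (total + alpha + beta)

    matched_instances = 40     # source-class instances linked by owl:sameAs to the target KB
    satisfy_restriction = 36   # of those, how many fall under the candidate restriction class
    support = posterior_mean(satisfy_restriction, matched_instances)
    print(f"Posterior support for the candidate correspondence: {support:.2f}")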