Lecture Notes in Business Information Processing, 2018
Today’s organizations are socio-technical systems in which human workers increasingly perform knowledge work. Interactions between knowledge workers, clerks, and systems are essentially speech acts controlling the necessity and flow of activities in semi-structured and ad-hoc processes. IT support for knowledge work does not necessarily require any predefined process model, and often none is available. To capture what is going on, a rising number of approaches for process modeling, analysis, and support classify interactions and derive process-related information. The frequency and diversity of speech acts have only been examined within delimited domains, but not in the larger setting of a reference model covering different types of work and domains, multiple stakeholders, and interacting processes. Therefore, we have investigated interactions in the IT Infrastructure Library (ITIL). ITIL is a collection of predefined processes, functions, and roles that constitute best practices in the realm of IT service management (ITSM). For ITIL-based processes, we demonstrate the importance, prevalence, and diversity of interactions in triggers, and that further abstraction of interactions can improve the reusability of process patterns. Hence, at least in ITSM, applying speech act theory bears great potential for process improvement.
Lecture Notes in Business Information Processing, 2016
Speech acts have been proposed to improve the design of interactive systems for decades. Nevertheless, they have not yet made their way into common practice in software engineering or even process modeling. Various types of workflow management systems have been successful in supporting or even automating mostly predictable, schema-based process patterns without the explicit use of speech acts as design primitives. Yet, today's work is increasingly characterized by unpredictable collaborative processes, called knowledge work. Some types of knowledge work are supported by case management tools, which typically provide regulated access to case-related information. But communicative acts are not supported sufficiently. Since knowledge workers are well aware of the pragmatic dimension of their communicative acts, we believe that bringing this awareness of the nature of a speech act to a case management tool will allow for better support of unregulated knowledge-intensive processes. In this paper, we propose a speech-act-based approach to improve the effectiveness of knowledge work. We thereby enhance case management systems by making them aware of speech acts. Speech-act-related micro processes can then be used to prevent misunderstandings, increase process transparency, and make useful inferences.
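As a concrete illustration of the idea, a speech-act-aware case management component could derive open commitments directly from the message stream. The following sketch is purely illustrative — the performative set and all class names are assumptions, not the paper's design:

```python
from dataclasses import dataclass
from enum import Enum

class Performative(Enum):
    REQUEST = "request"
    PROMISE = "promise"
    DECLINE = "decline"
    REPORT_DONE = "report_done"

@dataclass
class SpeechAct:
    sender: str
    receiver: str
    performative: Performative
    content: str

class CommitmentTracker:
    """Derives open commitments from a stream of observed speech acts."""
    def __init__(self):
        self.open_commitments = {}  # content -> debtor (who promised)

    def observe(self, act: SpeechAct):
        if act.performative is Performative.PROMISE:
            self.open_commitments[act.content] = act.sender
        elif act.performative is Performative.REPORT_DONE:
            self.open_commitments.pop(act.content, None)
```

A REQUEST could analogously open an expectation that is discharged by a PROMISE or DECLINE; micro processes of this kind are what make unanswered requests and unfulfilled commitments visible to the tool.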
To elaborate main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. In a deployment phase of three years in a 1,200-bed university hospital, where the system underwent significant improvements, the system's functionality and its software design have been analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to that of other commercial products; its components are embedded in a vendor-specific application framework, and standard interfaces are used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool-generated applications are workflow-enabled and embedded into the overall database schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. Further work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to healthcare processes and changing requirements.
Knowledge workers already face a broad range of tools to support their work, e.g., adaptive case management systems, tailored information systems, groupware, and other (process) support systems. Case data is scattered across many systems, and the overlapping structured, semi-structured, and ad-hoc processes involved further impede keeping track of related data and activities. Organizations are socio-technical entities, and interactions have a significant impact on their success. Today, around 50% of the work in the US is knowledge work, and other countries show a similar tendency. Improving the integration of appropriate tools for knowledge work and augmenting support for interactions therefore offers to increase productivity in a very influential part of the workforce. Knowledge workers are well aware of the pragmatic intention of their communicative acts, but currently their systems are not. We suggest using Speech Act Theory to enable useful inferences and to improve the integration of the various tools for knowledge work. A focus on interactions raises awareness for the pragmatic intention and commitments in particular. It can help provide line markings for knowledge workers by facilitating compliance monitoring for interactions and artifacts stemming from many participating systems and manual documentation. Interactions already tie many separate systems together, and standardizing as well as partially automating them can therefore further simplify integration. Speech-act-based adaptive case management offers to increase process transparency, enable useful inferences, and integrate structured, semi-structured, and ad-hoc processes.
Proceedings of the 1st ACM SIGMOD Joint International Workshop on Graph Data Management Experiences & Systems (GRADES) and Network Data Analytics (NDA)
Analytical SQL queries are a valuable source of information. Query log analysis can provide insight into the usage of datasets and uncover knowledge that cannot be inferred from source schemas or content alone. To unlock this potential, flexible mechanisms for meta-querying are required. Syntactic and semantic aspects of queries must be considered along with contextual information. We present an extensible framework for analyzing SQL query logs. Query logs are mapped to a multi-relational graph model and queried using domain-specific traversal expressions. To enable concise and expressive meta-querying, semantic analyses are conducted on normalized relational algebra trees with accompanying schema lineage graphs. Syntactic analyses can be conducted on corresponding query texts and abstract syntax trees. Additional metadata makes it possible to inspect the temporal and social context of each query. In this demonstration, we show how query log analysis with our framework can support data source discovery and facilitate collaborative data science. The audience can explore an exemplary query log to locate queries relevant to a data analysis scenario, conduct graph analyses on the log, and assemble a customized log-monitoring dashboard.
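To make the graph model tangible, here is a toy version of a multi-relational query-log graph with a label-directed traversal. The edge labels and the `~label` convention for backward traversal are our own illustrative choices, not the framework's actual expression language:

```python
# Toy multi-relational graph over a query log: (source, edge label, target).
edges = [
    ("q1", "reads", "sales"),
    ("q1", "issued_by", "alice"),
    ("q2", "reads", "sales"),
    ("q2", "reads", "customers"),
    ("q2", "issued_by", "bob"),
]

def traverse(start, *labels):
    """Follow a sequence of edge labels from a start node.
    A label prefixed with '~' follows edges of that label backwards."""
    frontier = {start}
    for label in labels:
        nxt = set()
        for src, lbl, dst in edges:
            if label == lbl and src in frontier:
                nxt.add(dst)
            elif label == "~" + lbl and dst in frontier:
                nxt.add(src)
        frontier = nxt
    return frontier
```

For example, `traverse("sales", "~reads", "issued_by")` answers "who issued queries that read the sales table?" by walking backwards along `reads` and forwards along `issued_by`.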
Introduction: In cooperative environments like medical supply centers, the new role of the clinical practice manager, who is responsible for supervision and controlling of the center's financial development, creates new demands on IT infrastructure: the insular information systems of all participating …
Introduction and background: To improve the quality of treatment and to avoid unnecessary costs, the effective use of information and communication technology to support cooperative patient treatment is essential. The IHE XDS specification …
In recent years, many production case management (PCM) and adaptive case management (ACM) systems have been introduced into the daily workflow of knowledge workers. Across research papers and case studies, the claims about the nature and requirements of knowledge work in general seem to vary. When choosing or creating a case management (CM) solution, one typically has the target knowledge workers and their domain-specific requirements in mind. But knowledge work shows a huge variety of modes of operation, complexity, and collaboration. We want to increase transparency on which features are covered by well-known and award-winning systems for different types of knowledge workers and different classes of systems. This may not unveil gaps between requirements and offered solutions, but it can uncover differences in solutions for varying user bases. We performed a literature review of 48 winners of the WfMC Awards for Excellence in Case Management from 2011 to 2016 and analyzed the case studies with regard to targeted knowledge workers, advertised features, and type of system. Different types of knowledge workers showed different biases toward certain system types and toward features concerning collaboration and variability of processes.
Starting from an analysis of the logical foundations of data replication, this chapter first discusses the objectives and constraints of data replication. Subsequently, methods for synchronizing replicas based on the classical correctness criterion of one-copy serializability are presented. This chapter thus lays the groundwork for the application-oriented concepts of data replication discussed in the following chapter.
Adaptive Datenreplikation in verteilten Systemen, 1997
A data management system provides a service for a variety of very different applications. The applications' requirements on the data management system are not limited to reliably storing data and consistently providing data that is as up to date as possible; they also include, in particular, handling data efficiently. In this context, efficiency comprises not only the demand for short access times but also, for example, the demand for the highest possible availability of a data item. Consistent processing of concurrent application transactions works against efficiency defined in this way. Notably, for some applications efficient access is actually more important than consistency, while for other applications consistent access is indispensable. To ensure the best possible adaptation to the specific needs of an application, it therefore seems sensible to enable the data management system to react appropriately to the applications' requirements. This requires providing means that allow an application to specify an application-specific compromise with respect to the trade-off between consistency and efficiency.
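The central point — letting each application specify its own compromise on the consistency/efficiency trade-off — can be sketched as a tiny replica-selection policy. All names and the staleness model are illustrative assumptions, not the book's concepts in code:

```python
from dataclasses import dataclass

@dataclass
class AccessSpec:
    """Application-specific compromise between consistency and efficiency."""
    max_staleness_s: float      # tolerated age of the replica's data, in seconds
    require_1sr: bool = False   # demand one-copy serializability (fresh data only)

def choose_replica(spec, replicas):
    """replicas: list of (name, staleness_s, latency_ms).
    Pick the lowest-latency replica that satisfies the application's spec."""
    if spec.require_1sr:
        candidates = [r for r in replicas if r[1] == 0.0]
    else:
        candidates = [r for r in replicas if r[1] <= spec.max_staleness_s]
    return min(candidates, key=lambda r: r[2])[0] if candidates else None
```

An application that tolerates stale reads is routed to a fast cache replica, while one demanding one-copy serializability pays the latency of the up-to-date primary — exactly the per-application compromise the text argues for.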
Introduction and background: In the course of increasing networking in healthcare, medical data is increasingly annotated semantically in XML format [ref:1], [ref:2], [ref:3], [ref:4], [ref:5], [ref:6]. With native XML databases, there …
Communications in Computer and Information Science, 2017
Writing effective analytical queries requires data scientists to have in-depth knowledge of the existence, semantics, and usage context of data sources. Once gathered, such knowledge is informally shared within a specific team of data scientists, but usually is neither formalized nor shared with other teams. Potential synergies remain unused. We introduce our novel approach of Query-driven Knowledge-Sharing Systems (QKSS). A QKSS extends a data management system with knowledge-sharing capabilities to facilitate user collaboration without altering data analysis workflows. Collective knowledge from the query log is extracted to support data source discovery and data integration. Knowledge is formalized to enable its sharing across data scientist teams.
The paper describes an ongoing project that pursues the idea of query-driven data integration. Instead of first creating a common global schema and fetching, transforming, and loading the data to be integrated, we start with the queries. They are taken as a specification of information need and thus as the overall purpose of integration. Two repositories are being developed: one for all information related to the queries and one for potential data sources to which those queries may refer. Queries may have very different forms, and thus there are many different ways in which they can be used to make the integration effort more efficient.
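A minimal sketch of the two repositories, with queries treated as specifications of information need. Extracting table names via a regex is a deliberate oversimplification of real SQL parsing, and all identifiers are hypothetical:

```python
import re

class QueryRepository:
    """Stores queries as specifications of information need."""
    def __init__(self):
        self.queries = []

    def add(self, sql):
        self.queries.append(sql)

    def referenced_names(self):
        """Crudely extract names mentioned after FROM/JOIN across all queries."""
        names = set()
        for q in self.queries:
            names |= set(re.findall(r"(?:FROM|JOIN)\s+(\w+)", q, re.IGNORECASE))
        return names

class SourceRepository:
    """Registers potential data sources the queries may refer to."""
    def __init__(self):
        self.sources = {}  # name -> description

    def register(self, name, description):
        self.sources[name] = description

    def match(self, names):
        """Map each demanded name to a known candidate source, or None."""
        return {n: self.sources.get(n) for n in names}
```

Matching the names demanded by the query repository against the source repository immediately shows which information needs are already served and which still lack a data source — the integration effort is driven by actual queries rather than by an up-front global schema.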
Inadequate availability of patient information is a major cause of medical errors and affects costs in healthcare. Traditional approaches to information integration in healthcare do not solve the problem. Applying a document-oriented paradigm to systems integration enables inter-institutional information exchange in healthcare. The goal of the proposed architecture is to provide information exchange between strictly autonomous healthcare institutions, bridging the gap between primary and secondary care. In a long-term healthcare data distribution scenario, the patient has to maintain sovereignty over any personal health information. Thus, the traditional publish-subscribe architecture is extended by a phase of human mediation within the data flow. DEUS essentially decouples the roles of information author and information publisher into distinct actors, resulting in a triangular data flow. The interaction scenario will be motivated. The significance of human mediation will be discussed …
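The triangular flow — author, mediating patient, publisher — can be pictured in a few lines of code. This illustrates only the decoupling idea; DEUS's actual interfaces are not described in the abstract, so every name here is an assumption:

```python
class MediatedExchange:
    """Author -> patient (human mediator) -> publisher: nothing is
    published without the patient's explicit consent."""
    def __init__(self):
        self.pending = []    # authored documents awaiting the patient's decision
        self.published = []  # documents released to the publisher role

    def author_submits(self, doc):
        self.pending.append(doc)

    def patient_decides(self, doc, approve):
        self.pending.remove(doc)
        if approve:
            self.published.append(doc)
```

The mediation phase is the key difference from plain publish-subscribe: the author never publishes directly, so patient sovereignty is preserved by construction.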
In this paper, we present an approach to determine the smallest possible number of perceptrons in a neural net such that the topology of the input space can be learned sufficiently well. We introduce a general procedure based on persistent homology to investigate topological invariants of the manifold on which we suspect the data set to lie. We specify the required dimensions precisely, assuming that there is a smooth manifold on or near which the data are located. Furthermore, we require that this space is connected and has a commutative group structure in the mathematical sense. These assumptions allow us to derive a decomposition of the underlying space whose topology is well known. We use the representatives of the k-dimensional homology groups from the persistence landscape to determine an integer dimension for this decomposition. This number is the dimension of the embedding that is capable of capturing the topology of the data manifold. We derive the theory and validate it e…
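A compact connected space with commutative (Lie) group structure is an n-torus, whose k-th Betti number is the binomial coefficient C(n, k). Under that reading of the abstract's assumptions, observed Betti numbers determine the integer dimension as sketched below — a toy consistency check, not the paper's actual procedure:

```python
from math import comb

def torus_dimension(betti):
    """Infer n such that the observed Betti numbers match the n-torus,
    for which b_k = C(n, k); return None if no torus matches."""
    n = betti[1] if len(betti) > 1 else 0  # for the n-torus, b_1 = n
    expected = [comb(n, k) for k in range(len(betti))]
    return n if expected == list(betti) else None
```

In practice the Betti numbers would be read off from the persistence landscape of the data, as the abstract describes; the function above only checks that they are consistent with some torus and reports its dimension.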
In this study, Voronoi interpolation is used to interpolate a set of points drawn from a topological space with higher homology groups on its filtration. The technique is based on the Voronoi tessellation, which induces a natural dual map to the Delaunay triangulation. Advantage is taken of this fact by calculating the persistent homology after each iteration to capture the changing topology of the data. The boundary points are identified as critical. The bottleneck and Wasserstein distances serve as measures of quality between the original point set and the interpolation. If the norm of the two distances exceeds a heuristically determined threshold, the algorithm terminates. We give the theoretical basis for this approach and justify its validity with numerical experiments.
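The termination criterion relies on distances between persistence diagrams. As an illustration of the bottleneck distance only (a real pipeline would use a TDA library such as GUDHI), here is a brute-force version for very small diagrams, where each point may also be matched to its projection onto the diagonal:

```python
from itertools import permutations

def _diag(p):
    """Project a (birth, death) point onto the diagonal birth == death."""
    m = (p[0] + p[1]) / 2
    return (m, m)

def bottleneck(X, Y):
    """Brute-force bottleneck distance between small persistence diagrams.
    Each diagram is augmented with the diagonal projections of the other's
    points, so every point can be matched; cost is the L-infinity distance,
    and matching two diagonal points is free."""
    A = list(X) + [_diag(q) for q in Y]
    B = list(Y) + [_diag(p) for p in X]
    diag_a = [False] * len(X) + [True] * len(Y)
    diag_b = [False] * len(Y) + [True] * len(X)
    best = float("inf")
    for perm in permutations(range(len(B))):
        cost = 0.0
        for i, j in enumerate(perm):
            if diag_a[i] and diag_b[j]:
                d = 0.0  # diagonal matched to diagonal costs nothing
            else:
                d = max(abs(A[i][0] - B[j][0]), abs(A[i][1] - B[j][1]))
            cost = max(cost, d)
        best = min(best, cost)
    return best
```

A single feature of persistence 2 against an empty diagram yields distance 1 (half its persistence), matching the usual definition; the factorial matching loop is of course only viable for the tiny diagrams of a didactic example.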
Papers by Richard Lenz