Weather and atmospheric patterns are often persistent. The simplest weather forecasting method is the so-called persistence model, which assumes that the future state of a system will be similar (or equal) to the present state. Machine learning (ML) models are widely used in different weather forecasting applications, but they need to be compared with the persistence model to analyse whether they provide a competitive solution to the problem at hand. In this paper, we devise a new model for predicting low visibility at airports using the concept of mixture of experts. The visibility level is coded as two different ordered categorical variables: cloud height and runway visual height. The underlying system in this application is stagnant in approximately 90% of the cases, and standard ML models fail to improve on the performance of the persistence model. Because of this, instead of trying to simply beat the persistence model using ML, we use persistence as a baseline and learn an ordinal neural network model that refines its results by focusing on learning weather fluctuations. The results show that the proposal outperforms persistence and other ordinal autoregressive models, especially for longer prediction horizons and for the runway visual height variable.
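The following minimal sketch illustrates the persistence baseline described above; the array of ordinal visibility codes, the horizon, and the accuracy metric are hypothetical placeholders, not the paper's data or evaluation protocol.

```python
import numpy as np

def persistence_forecast(series, horizon=1):
    """Persistence baseline: the state at time t + horizon is predicted to equal the state at t."""
    return series[:-horizon]

# Hypothetical sequence of ordinal visibility codes (higher = better visibility).
observed = np.array([2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 2, 2])
horizon = 1
baseline_pred = persistence_forecast(observed, horizon)
truth = observed[horizon:]
print("persistence accuracy:", float(np.mean(baseline_pred == truth)))
# Any candidate ML model should be scored against this baseline on the same split.
```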
An increasing number of sectors that affect human lives are using Machine Learning (ML) tools. Hence, the need to understand their working mechanisms and to evaluate their fairness in decision-making is becoming paramount, ushering in the era of Explainable AI (XAI). In this contribution we introduce a few intrinsically interpretable models which are also capable of dealing with missing values, in addition to extracting knowledge from the dataset and about the problem. These models also allow visualisation of the classifier and its decision boundaries: they are the angle-based variants of Learning Vector Quantization. We demonstrate the algorithms on a synthetic dataset and a real-world one (the heart disease dataset from the UCI repository). The newly developed classifiers helped in investigating the complexities of the UCI dataset as a multiclass problem. When the dataset was treated as a binary class problem, the performance of the developed classifiers was comparable to that reported in the literature, with the additional value of interpretability.
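As a rough illustration of an angle-based prototype classifier, here is a simplified LVQ1-style sketch using cosine/angle dissimilarity with one prototype per class; this is not the authors' exact algorithm, and the toy data are invented for the demo.

```python
import numpy as np

def angle_dissimilarity(x, w):
    # Angle between the two vectors; a smaller angle means more similar.
    cos = np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def train_angle_lvq(X, y, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    # One prototype per class, initialised at the class mean.
    protos = {c: X[y == c].mean(axis=0).copy() for c in np.unique(y)}
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            winner = min(protos, key=lambda k: angle_dissimilarity(x, protos[k]))
            sign = 1.0 if winner == c else -1.0   # attract the correct class, repel otherwise
            protos[winner] = protos[winner] + sign * lr * (x - protos[winner])
    return protos

def predict(protos, X):
    return np.array([min(protos, key=lambda k: angle_dissimilarity(x, protos[k])) for x in X])

# Tiny synthetic demo: two directions in the plane.
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]])
y = np.array([0, 0, 1, 1])
print(predict(train_angle_lvq(X, y), X))   # expected: [0 0 1 1]
```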
Reservoir computing is a popular approach to design recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results as verified by practitioners.
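A minimal numerical sketch of this rank-based memory measure, assuming a linear reservoir x(t+1) = W x(t) + w_in u(t); the reservoir size and scalings below are illustrative and not taken from the paper.

```python
import numpy as np

def controllability_matrix(W, w_in):
    """Krylov / controllability matrix [w_in, W w_in, ..., W^(N-1) w_in]
    of a linear reservoir x(t+1) = W x(t) + w_in * u(t)."""
    N = W.shape[0]
    cols, v = [], w_in.astype(float)
    for _ in range(N):
        cols.append(v)
        v = W @ v
    return np.column_stack(cols)

rng = np.random.default_rng(0)
N = 20
w_in = rng.standard_normal(N)

# Cyclic (ring) reservoir vs. a dense random reservoir, both scaled to spectral radius 0.9.
W_cycle = 0.9 * np.roll(np.eye(N), 1, axis=0)
W_rand = rng.standard_normal((N, N))
W_rand *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rand)))

for name, W in [("cycle", W_cycle), ("random", W_rand)]:
    rank = np.linalg.matrix_rank(controllability_matrix(W, w_in))
    print(f"{name}: rank of the controllability matrix = {rank}")
```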
Predicting investors' reactions to financial and political news is important for the early detection of stock market jitters. Evidence from several recent studies suggests that online social media could improve prediction of stock market movements. However, utilizing such information to predict strong stock market fluctuations has not been explored so far. In this work, we propose a novel event detection method on Twitter, tailored to detect financial and political events that influence a specific stock market. The proposed approach applies a bursty topic detection method to a stream of tweets related to finance or politics, followed by a classification process which filters out events that do not influence the examined stock market. We train our classifier to recognise real events using solely information about stock market volatility, without the need for manual labeling. We model Twitter events as feature vectors that encompass a rich variety of information, such as the geographical distribution of tweets, their polarity, information about their authors, as well as information about bursty words associated with the event. We show that utilizing only information about tweet polarity, as most previous studies do, wastes important information. We apply the proposed method to high-frequency intra-day data from the Greek and Spanish stock markets and show that our financial event detector successfully predicts most of the stock market jitters.
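A toy version of the burst-detection step: flagging words whose count in the latest time window deviates strongly from their history. The paper's actual bursty topic detection method, and the `windows` input format, are assumptions here.

```python
import numpy as np
from collections import Counter

def bursty_words(windows, z_threshold=3.0):
    """Flag words whose count in the latest time window deviates strongly from their history.
    `windows` is a list of token lists, one per time window (hypothetical input format)."""
    vocab = {w for win in windows for w in win}
    bursts = []
    for w in vocab:
        counts = np.array([Counter(win)[w] for win in windows], dtype=float)
        history, latest = counts[:-1], counts[-1]     # assumes several historical windows
        z = (latest - history.mean()) / (history.std() + 1e-9)
        if z > z_threshold:
            bursts.append((w, float(z)))
    return sorted(bursts, key=lambda t: -t[1])

windows = [["ecb", "rates"], ["ecb", "budget"], ["rates"],
           ["election", "election", "election", "vote"]]
print(bursty_words(windows, z_threshold=2.0))   # "election" (and "vote") flagged as bursty
```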
IEEE Transactions on Emerging Topics in Computational Intelligence, Oct 1, 2021
Along with the great success of deep neural networks, there is also growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of the neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers from the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
Recent years have seen the advancement of data-driven paradigms in population-based and evolutionary optimization. This reflects, on the one hand, the mere abundance of available data, and on the other hand progress in the refinement of previously available machine learning methods. Surprisingly, deep pattern recognition methods emerging from the study of neural networks have only been sparingly applied. This is unexpected, as the complex data generated by evolutionary search algorithms can be considered tedious and intractable for manual analysis with mere practical intuition. In this work, we therefore explore opportunities to employ deep networks to directly learn problem characteristics of continuous optimization problems, particularly from data obtained during initial runs of an optimization algorithm. We find that a graph neural network, trained upon a graph representation of continuous search spaces, shows higher validation accuracy than more traditional approaches and retrieves characteristics within the latent space which are better at distinguishing different continuous optimization problems. We hope that our study can pave the way towards new approaches which allow us to learn problem-dependent algorithm components and recall these from predictions on inputs generated during the run-time of an optimization algorithm.
The notion of learning from different problem instances, although an old and known one, has in recent years regained popularity within the optimization community. Notable endeavors have been drawing inspiration from machine learning methods as a means for algorithm selection and solution transfer. Surprisingly, however, approaches centered around internal sampling models have not been revisited, even though notable algorithms of this kind have been established over the last decades. In this work, we progress along this direction by investigating a method that allows us to learn an evolutionary search strategy reflecting rough characteristics of a fitness landscape. This model of a search strategy is represented through a flexible mixture-based distribution, which can subsequently be transferred to and adapted for similar problems of interest. We validate this approach in two series of experiments: we first demonstrate the efficacy of the recovered distributions and subsequently investigate the transfer, using a systematic approach from the literature to generate benchmarking scenarios.
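A compact sketch of the general idea of learning and reusing a mixture-based search distribution, written as an EDA-style loop on a toy sphere function; the objective, population sizes, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sphere(X):
    # Toy fitness function; the paper targets benchmark fitness landscapes instead.
    return np.sum(X ** 2, axis=1)

rng = np.random.default_rng(1)
dim, pop_size, elite_frac, generations = 5, 200, 0.2, 30
X = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
for _ in range(generations):
    elite = X[np.argsort(sphere(X))[: int(elite_frac * pop_size)]]
    # Fit a flexible mixture model to the elite samples ...
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          reg_covar=1e-4, random_state=0).fit(elite)
    # ... and sample the next population from it. The fitted mixture is the
    # search-strategy model that could later be transferred to a similar problem.
    X, _ = gmm.sample(pop_size)
print("best fitness found:", float(sphere(X).min()))
```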
The European Symposium on Artificial Neural Networks, Apr 1, 2018
In this paper, we contribute a self-adaptive approach, namely interaction-awareness, which adopts self-aware principles to dynamically manage and maintain knowledge of the interactions between volunteer services in the volunteer computing paradigm. Such knowledge can inform the adaptation decisions, leading to an increase in the precision of selecting and composing services. We evaluate the approach using a volunteer storage composition scenario. The evaluation results show the advantages of dynamic knowledge management in self-adaptive volunteer computing in selecting dependable services and satisfying a higher number of requests.
Engineering volunteer services calls for novel self-adaptive approaches for dynamically managing the process of composing and/or allocating volunteer services. As these services tend to be published and withdrawn without restrictions, the uncertainties, dynamism and 'dilution of control' related to selection and composition decisions make these complex problems. These services tend to exhibit periodic performance patterns, which are often repeated over a certain time period. Consequently, awareness of such periodic patterns enables the prediction of service performance, leading to better adaptation. In this paper, we contribute a self-adaptive approach, namely time-awareness, which combines self-aware principles with dynamic histograms to dynamically manage and maintain the periodic trends of service performance and their evolution over time. Such knowledge can inform the adaptation decisions, leading to an increase in the precision of selecting and composing services. We evaluate the approach using a volunteer storage composition scenario. The evaluation results show the advantages of dynamic knowledge management in self-adaptive volunteer computing in selecting dependable services and satisfying a higher number of requests.
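To make the idea concrete, here is a toy time-awareness store based on per-hour sample buckets; the hour-of-day periodicity, the latency values, and the median-based prediction are illustrative assumptions, and the paper uses dynamic histograms rather than this simplified structure.

```python
import numpy as np
from collections import defaultdict

class TimeAwareness:
    """Toy time-awareness store: per hour-of-day buckets of a service's observed
    performance (e.g., response time), used to predict periodic behaviour."""
    def __init__(self):
        self.samples = defaultdict(list)   # hour of day -> observed values

    def observe(self, hour, value):
        self.samples[hour % 24].append(value)

    def predict(self, hour, default=float("inf")):
        vals = self.samples[hour % 24]
        return float(np.median(vals)) if vals else default

store = TimeAwareness()
for day in range(7):                       # hypothetical week of observations
    store.observe(9, 120 + 5 * day)        # busy morning slot: slower
    store.observe(3, 40 + day)             # quiet night slot: faster
print("expected latency at 09:00:", store.predict(9))
print("expected latency at 03:00:", store.predict(3))
# A composer could prefer the service/time-slot pairing with the lower predicted latency.
```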
ACM Transactions on Autonomous and Adaptive Systems, Jun 30, 2019
Background: Self-awareness has recently been receiving attention in computing systems for enriching autonomous software systems operating in dynamic environments. Objective: We aim to investigate the adoption of computational self-awareness concepts in autonomic software systems, and to motivate future research directions on self-awareness and related problems. Method: We conducted a systematic literature review to compile the studies related to the adoption of self-awareness in software engineering and to explore how self-awareness is engineered and incorporated in software systems. From 865 studies, 74 were selected as primary studies. We analysed the studies from multiple perspectives, such as motivation, inspiration, and engineering approaches, among others. Results: The results show that self-awareness has been used to enable self-adaptation in systems that exhibit uncertain and dynamic behaviour. Despite recent attempts to define and engineer self-awareness in software engineering, there is no consensus on the definition of self-awareness. Also, the distinction between self-aware and self-adaptive systems has not been systematically treated. Conclusions: Our survey reveals that self-awareness for software systems is still a formative field and that there is growing attention to incorporating self-awareness for better reasoning about adaptation decisions in autonomic systems. Many pending issues and open problems remain, outlining possible research directions. CCS Concepts: • General and reference → Surveys and overviews; General literature; • Social and professional topics → Software selection and adaptation; • Software and its engineering → Software configuration management and version control systems;
We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feed-forward spiking neuron networks (Spike-Prop [1]) to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs.
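For reference, a Moore machine of the kind the networks are trained to induce can be specified in a few lines; this parity machine is only an illustrative example and not necessarily one used in the paper.

```python
# Hypothetical two-state Moore machine (output = parity of the 1s seen so far), the kind
# of unbounded-memory temporal structure a recurrent spiking network would be trained on.
TRANSITIONS = {("even", 0): "even", ("even", 1): "odd",
               ("odd", 0): "odd",  ("odd", 1): "even"}
OUTPUT = {"even": 0, "odd": 1}

def run_moore(inputs, start="even"):
    state, outputs = start, []
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        outputs.append(OUTPUT[state])
    return outputs

print(run_moore([1, 0, 1, 1, 0]))   # -> [1, 1, 0, 1, 1]
```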
Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. Selection and composition decisions tend to encounter numerous uncertainties: service consumers and applications have little control over these services and tend to be uncertain about their level of support for the desired functional and non-functional properties. In this paper, we contribute a self-awareness framework that implements two levels of awareness, Stimulus-awareness and Time-awareness. The former responds to basic changes in the environment, while the latter takes into consideration the historical performance of the services. We have used volunteer service computing as an example to demonstrate the benefits that self-awareness can introduce to self-adaptation. We have compared the Stimulus- and Time-awareness approaches with a recent Ranking approach from the literature. The results show that the Time-awareness level has the advantage of satisfying a higher number of requests with lower time cost.
Monthly Notices of the Royal Astronomical Society, Jul 12, 2019
The mass and spin distributions of compact binary gravitational-wave sources are currently uncertain due to the complicated astrophysics involved in their formation. Multiple sub-populations of compact binaries representing different evolutionary scenarios may be present among sources detected by Advanced LIGO and Advanced Virgo. In addition to hierarchical modelling, unmodelled methods can aid in determining the number of sub-populations and their properties. In this paper, we apply Gaussian mixture model clustering to 1000 simulated gravitational-wave compact binary sources from a mixture of five sub-populations. Using both mass and spin as input parameters, we determine how many binary detections are needed to accurately determine the number of sub-populations and their mass and spin distributions. In the most difficult case that we consider, where two sub-populations have identical mass distributions but differ in their spin, which is poorly constrained by gravitational-wave detections, we find that ∼400 detections are needed before we can identify the correct number of sub-populations.
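A small sketch of the clustering idea, using a BIC-based choice of the number of Gaussian components on a hypothetical two-parameter catalogue; the sub-population parameters, sample sizes, and model-selection criterion below are assumptions, not the paper's simulation setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_n_components(samples, max_k=8):
    """Choose the number of Gaussian components by minimum BIC (a common heuristic;
    the paper's model-selection procedure may differ)."""
    bics = [GaussianMixture(n_components=k, covariance_type="full", random_state=0)
            .fit(samples).bic(samples) for k in range(1, max_k + 1)]
    return int(np.argmin(bics)) + 1

# Hypothetical catalogue: columns = (chirp mass in solar masses, effective spin).
rng = np.random.default_rng(0)
catalogue = np.vstack([
    rng.normal([10.0, 0.0], [1.5, 0.05], size=(400, 2)),   # e.g. an isolated-binary channel
    rng.normal([30.0, 0.3], [4.0, 0.15], size=(400, 2)),   # e.g. a dynamical-formation channel
])
print("inferred number of sub-populations:", best_n_components(catalogue))
```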
A fluorescence-based assay for the detection of kynurenine in urine for low-cost and high-throughput analysis in clinical laboratory settings.
Application of interpretable machine learning techniques to medical datasets facilitates early and fast diagnoses, along with deeper insight into the data. Furthermore, the transparency of these models increases trust among application domain experts. Medical datasets face common issues such as heterogeneous measurements, imbalanced classes with limited sample size, and missing data, which hinder the straightforward application of machine learning techniques. In this paper we present a family of prototype-based (PB) interpretable models which are capable of handling these issues. The models introduced in this contribution show comparable or superior performance to alternative techniques applicable in such situations. However, unlike ensemble-based models, which have to compromise on easy interpretation, the PB models here do not. Moreover, we propose a strategy of harnessing the power of ensembles while maintaining the intrinsic interpretability of the PB models.
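One way to see how a prototype-based classifier can tolerate missing values is to compute distances only over the observed features. The following nearest-prototype sketch, with NaN-masked distances and class-mean prototypes, is a simplification and not the PB models proposed in the paper.

```python
import numpy as np

def nan_distance(x, w):
    """Squared Euclidean distance computed only over the observed (non-NaN) features."""
    mask = ~np.isnan(x)
    if not mask.any():
        return float("inf")
    d = x[mask] - w[mask]
    return float(np.dot(d, d) / mask.sum())   # normalise by the number of observed features

def fit_prototypes(X, y):
    # One prototype per class: per-feature mean over the observed values.
    return {c: np.nanmean(X[y == c], axis=0) for c in np.unique(y)}

def predict(protos, X):
    return np.array([min(protos, key=lambda c: nan_distance(x, protos[c])) for x in X])

X = np.array([[1.0, 2.0, np.nan], [0.9, np.nan, 0.1], [5.0, 6.0, 7.0], [np.nan, 6.5, 7.2]])
y = np.array([0, 0, 1, 1])
print(predict(fit_prototypes(X, y), X))   # expected: [0 0 1 1]
```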
Individual ECG electrodes of a multi-lead measuring system can have a variable impact on the solution of the inverse problem. In this study, we investigated the model-based relevance of individual ECG electrodes to identify the position of the stimulation electrode using the inverse solution with a single dipole as an equivalent electrical heart generator. We used four torso ECG mapping datasets from the EDGAR database recorded during ventricular stimulation in three animal torso-tank experiments and one human measurement. The relevance of electrodes, expressed as their weighted contributions to the inverse solution, was determined by the singular value decomposition of a transfer matrix calculated for the given position of the stimulation electrode. The results showed that gradual omission of the electrodes with the highest weighted contributions to the inverse solution worsens the localization. However, missing a small number of such electrodes has little or no effect on the localization. One dataset was more robust to the gradual omission of electrodes with the highest contributions, and the localization significantly deteriorated only after skipping 92% of the electrodes. Further study showed that using only a small subset of electrodes with the highest weighted contributions to the inverse solution leads to the same or even better localization results than using all electrodes.
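A sketch of ranking electrodes by an SVD-derived contribution score, here taken as the row loadings of the left singular vectors weighted by the singular values; the transfer-matrix dimensions and the exact weighting are assumptions, as the paper's definition may differ.

```python
import numpy as np

def electrode_contributions(A):
    """Rank electrodes (rows of the transfer matrix A, leads x dipole components) by a
    weighted contribution score derived from the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sqrt((U ** 2) @ (s ** 2))      # per-electrode contribution, weighted by singular values
    order = np.argsort(scores)[::-1]           # electrodes from most to least relevant
    return order, scores

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 3))               # hypothetical 64-lead transfer matrix, single dipole
order, scores = electrode_contributions(A)
print("ten most relevant electrodes:", order[:10])
```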