The use of machine learning in automatic speaker identification and localization systems has recently seen significant advances. However, this progress comes at the cost of complex models, heavy computation, and growing numbers of microphone arrays and training data. Therefore, in this work, we propose a new end-to-end identification and localization model based on a simple fully connected deep neural network (FC-DNN) and just two input microphones. This model can jointly or separately localize and identify an active speaker with high accuracy in single- and multi-speaker scenarios by exploiting a new data augmentation approach. In this regard, we propose using a novel Mel Frequency Cepstral Coefficients (MFCC) based feature called Shuffled MFCC (SHMFCC) and its variant Difference Shuffled MFCC (DSHMFCC). In order to test our approach, we analyzed the performance of the proposed identification and localization model on the new features at different noise and reverberation cond...
In this work, we propose a framework to enhance the communication abilities of speech-impaired patients in an intensive care setting via lip-reading. Medical procedures, such as a tracheotomy, cause the patient to lose the ability to utter speech with little to no impact on the habitual lip movement. Consequently, we developed a framework to predict the silently spoken text by performing visual speech recognition, i.e., lip-reading. In a two-stage architecture, frames of the patient’s face are used to infer audio features as an intermediate prediction target, which are then used to predict the uttered text. To the best of our knowledge, this is the first approach to bring visual speech recognition into an intensive care setting. For this purpose, we recorded an audio-visual dataset in the University Hospital of Aachen’s intensive care unit (ICU) with a language corpus hand-picked by experienced clinicians to be representative of their day-to-day routine. With a word error rate of 6...
2020 9th Mediterranean Conference on Embedded Computing (MECO)
In this work, a new hybrid algorithm for disease risk classification is proposed. The proposed methodology is based on Dynamic Time Warping (DTW) and can be applied to time series from various domains, such as the vital sign time series available in medical big data. To validate our methodology, we applied it to risk classification for sepsis, one of the most challenging problems within the area of medical data analysis. In the first step, the algorithm uses different statistical properties of the time series features. Furthermore, using differently labeled training data sets, we created a DTW Barycenter Averaging (DBA) template for each feature. In the second step, validation data sets and DTW are used to validate the precision of classification, and the final results are compared. The performance of our methodology is validated with real medical data and on six different criteria definitions for sepsis. Results show that our algorithm performed, in the best case, with precision and recall of 96.38% and 90.90%, respectively.
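The two-step scheme described above, per-class averaged templates followed by DTW matching, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the templates stand in for DBA barycenters, and the series values are toy data.

```python
# Sketch of DTW-based nearest-template classification, assuming one
# averaged template (e.g. a DBA barycenter) per class. The class names
# and series below are hypothetical toy data, not the paper's.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(series, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(series, templates[label]))

templates = {"sepsis": [0, 1, 3, 6, 8], "control": [0, 0, 1, 1, 2]}
print(classify([0, 1, 2, 5, 9], templates))  # "sepsis": closer to the rising template
```

DTW's warping makes the match tolerant to the local time shifts that are common in vital-sign series, which is the property the classification step relies on.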
Proceedings of the Genetic and Evolutionary Computation Conference Companion
Miniature autonomous sensory agents (MASA) can play a profound role in the exploration of hardly accessible unknown environments, thus impacting many applications such as monitoring of underground infrastructure, exploration for natural resources, e.g. oil and gas, or even diagnostic exploration of the human body. However, using MASA presents a wide range of challenges due to limitations of the available hardware resources caused by their scaled-down size. Consequently, these agents are kinetically passive, i.e. they cannot be guided through the environment. Furthermore, their communication range and rate is limited, which affects the quality of localization and, consequently, mapping. In addition, conducting real-time localization and mapping is not possible. As a result, Simultaneous Localization and Mapping (SLAM) techniques are not suitable and a new problem definition is needed. In this paper we introduce what we dub the Centralized Offline Localization And Mapping (COLAM) problem, highlighting its key elements; we then present a model to solve it. In this model, evolutionary algorithms (EAs) are used to optimize agents' resources off-line for energy-efficient environment mapping. Furthermore, we illustrate a modified version of the Vietoris-Rips complex, dubbed the Trajectory Incorporated Vietoris-Rips (TIVR) complex, as a tool to conduct mapping. Finally, we project the proposed model on real experiments and present results. CCS CONCEPTS: • Computing methodologies → Evolvable hardware; • Computer systems organization → Sensor networks;
The availability of Big Data has increased significantly in many areas in recent years. Insights from these data sets lead to optimized processes in many industries, which is why understanding as well as gaining knowledge through analyses of these data sets is becoming increasingly relevant. In the medical field, especially in intensive care units, fast and appropriate treatment is crucial due to the usually critical condition of patients. The patient data recorded here is often very heterogeneous and the resulting database models are very complex, so that accessing and thus using this data requires technical background knowledge. We have focused on the development of a web application that is primarily aimed at clinical staff and researchers. It is an easily accessible visualization and benchmarking tool that provides a graphical interface for the MIMIC-III database. The anonymized datasets contained in MIMIC-III include general information about patients as well as characteristics such as vital signs and laboratory measurements. These datasets are of great interest because they can be used to improve digital decision support systems and clinical processes. Therefore, in addition to visualization, the application can be used by researchers to validate anomaly detection algorithms and by clinical staff to assess disease progression. For this purpose, patient data can be individualized through modifications such as increasing and decreasing vital signs and laboratory parameters so that disease progression can be simulated and subsequently analyzed according to the user's specific needs.
Background In recent years, the volume of medical knowledge and health data has increased rapidly. For example, the increased availability of electronic health records (EHRs) provides accurate, up-to-date, and complete information about patients at the point of care and enables medical staff to have quick access to patient records for more coordinated and efficient care. With this increase in knowledge, the complexity of accurate, evidence-based medicine tends to grow all the time. Health care workers must deal with an increasing amount of data and documentation. Meanwhile, relevant patient data are frequently overshadowed by a layer of less relevant data, causing medical staff to often miss important values or abnormal trends and their importance to the progression of the patient’s case. Objective The goal of this work is to analyze the current laboratory results for patients in the intensive care unit (ICU) and classify which of these lab values could be abnormal the next time the...
Proceedings of the Genetic and Evolutionary Computation Conference Companion
In this work, we propose a framework to evolve the morphology and behavior of resource-constrained miniaturized sensory agents. One key application of this framework is the exploration of unknown environments. Consequently, we demonstrate its ability to solve a multi-objective optimization problem by applying it to an environment exploration task. The goal is to find the position of a pollution source in a U-shaped fluid pipe with unknown fluid dynamics properties. Firstly, morphological evolution finds the optimal physical shape for an agent so that it passes through six target points inside the pipe, i.e. it follows a certain trajectory. Based on that, behavioral evolution then optimizes the sensor controller of the agent and tries to solve a Pareto-optimization problem whose objectives are (1) to accurately measure chemical particle concentrations, (2) to measure the depth of the agent while in the environment, and (3) to minimize power consumption while using the sensors by altering their operating frequencies. We executed 24,000 simulations; the best agent hit the target points with 91% accuracy and reached the maximum possible performance in the first and second objectives while reducing power usage by 86.6% compared to a fixed frequency rate.
Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2017
Advancement in the miniaturization of autonomous sensory agents can play a profound role in many applications such as the exploration of unknown environments; however, due to their miniature size, power limitations pose a severe challenge. In this paper, inspired by biological instinctive behaviour, we introduce an instinct-driven dynamic hardware reconfiguration design scheme using evolutionary algorithms on behaviour trees. Moreover, this scheme is projected on an application scenario of autonomous sensory agents exploring an inaccessible dynamic environment. In this scenario, the agent's compression behaviour, introduced as an instinct, is critical due to the limited energy available on the agents. This emphasises the role of optimizing agents' resources through dynamic hardware reconfiguration. In that regard, the presented approach is demonstrated using two compression techniques: zero-order hold and wavelet compression. Behavioural and hardware-based power models of these techniques, integrated with behaviour trees (BT), are implemented to facilitate off-line learning of the optimum on-line behaviour, thus facilitating dynamic reconfiguration of the agents' hardware.
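Of the two compression techniques named above, zero-order hold is simple enough to sketch: a sample is transmitted only when it deviates from the last transmitted value by more than a threshold, and the receiver "holds" the previous value otherwise. The threshold and signal below are illustrative, not taken from the paper.

```python
# Minimal zero-order-hold (ZOH) compression sketch (illustrative data).

def zoh_compress(samples, threshold):
    """Return (index, value) pairs that must actually be transmitted."""
    kept = []
    last = None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > threshold:
            kept.append((i, s))
            last = s
    return kept

def zoh_decompress(kept, length):
    """Rebuild the full signal by holding each kept value until the next one."""
    out, value, nxt = [], None, 0
    for i in range(length):
        if nxt < len(kept) and kept[nxt][0] == i:
            value = kept[nxt][1]
            nxt += 1
        out.append(value)
    return out

signal = [1.0, 1.1, 1.05, 3.0, 3.02, 3.1, 0.5]
kept = zoh_compress(signal, threshold=0.5)
print(kept)                                # [(0, 1.0), (3, 3.0), (6, 0.5)]
print(zoh_decompress(kept, len(signal)))   # [1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 0.5]
```

The energy saving comes from transmitting 3 samples instead of 7; the threshold is exactly the kind of parameter the evolved behaviour tree could tune at runtime.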
2021 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), 2021
Speaker identification using deep learning is an active topic with wide applications in voice-authenticated and voice-activated systems, forensic investigations, security, and remote transactions. Mel frequency cepstral coefficients (MFCC) feature-based speaker identification systems are proven to perform very well under clean speech conditions. However, robustness degrades rapidly under adverse conditions and with short speech durations. To overcome this limitation, we propose a new speaker identification pipeline grounded on novel MFCC-based features called shuffled MFCC (SHMFCC), along with a new data augmentation approach. Moreover, the system uses a simple and efficient deep neural network model with variable-duration input speech signals and performs remarkably well on different datasets with different environmental and noise conditions and for a high number of speakers.
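The abstract does not spell out how SHMFCC is constructed, so the following is only a hypothetical sketch of the general idea its name suggests: permuting the time axis of an MFCC matrix keeps the per-frame coefficients (which carry speaker identity) while discarding their order, which is one way to generate augmented training samples. The function name and toy matrix are this sketch's own inventions.

```python
# Hypothetical frame-shuffling augmentation sketch; NOT the paper's
# definition of SHMFCC, just an illustration of frame-level shuffling.
import random

def shuffle_mfcc_frames(mfcc, seed=0):
    """mfcc: list of frames, each frame a list of cepstral coefficients."""
    rng = random.Random(seed)
    frames = list(mfcc)          # copy so the original matrix is untouched
    rng.shuffle(frames)
    return frames

# Toy 4-frame, 3-coefficient "MFCC" matrix (illustrative values only).
mfcc = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]
shuffled = shuffle_mfcc_frames(mfcc, seed=42)
print(sorted(shuffled) == sorted(mfcc))  # True: same frames, new order
```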
2018 International Conference on Computing, Networking and Communications (ICNC), 2018
Effective exploration of hardly accessible unknown environments impacts many applications such as monitoring underground infrastructure, exploration of natural resources, e.g. oil and gas, and in-vivo exploration for diagnostic purposes. In many of these scenarios, the environment is only accessible by miniaturized agents (mm-sized or less). Consequently, miniature autonomous sensory agents (MASA) can play an imperative role. However, MASA present a wide range of challenges caused by their scaled-down size. Firstly, these agents are strongly limited in energy; consequently, special design constraints must be set. Secondly, these agents are usually kinetically passive; thus, the widely used Simultaneous Localization and Mapping (SLAM) techniques cannot be used. As a result, a new problem definition is needed, where the agents' motion is not controlled and, consequently, localization and mapping are conducted offline. In this paper, we formally introduce what we dub the Centralized Offline Localization And Mapping (COLAM) problem, highlighting its objectives and constraints. Furthermore, we illustrate a comprehensive solution to this problem using a non-dominated sorting genetic algorithm (NSGA) based framework to optimize the agents' resources and conduct adequate localization and mapping. Finally, we apply this solution to two different environments, each with different physical properties.
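NSGA-style frameworks like the one mentioned above rest on Pareto dominance. As a minimal sketch of that core operation, the following extracts the non-dominated front from a set of objective vectors (minimization assumed; the points are illustrative, e.g. hypothetical energy/error pairs for candidate configurations).

```python
# Pareto dominance and non-dominated front extraction (minimization).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Illustrative (energy, localization error) pairs for four configurations.
points = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5)]
print(pareto_front(points))  # (2.5, 3.5) drops out: dominated by (2.0, 3.0)
```

A full NSGA additionally sorts the remaining points into successive fronts and uses crowding distance for diversity; this sketch shows only the dominance test those steps build on.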
Big Data Analytics for Cyber-Physical Systems, 2019
The fast-growing digitalization of medicine has facilitated the collection of patient data in databases. A smart city must facilitate such databases in its hospitals. However, handling this big data requires new strategies for filtering and processing such high volumes of patient data. In this regard, Machine Learning (ML) algorithms are increasingly applied to support bedside decisions on patient treatment, preprocess medical data, and help to choose adequate medications. Consequently, the choice of the ML modeling scheme is imperative. In contrast to conventional classification methodologies used in ML, such as Artificial Neural Networks (ANNs), we present a novel concept based on stochastic models. Stochastic Petri Nets (SPNs) are a powerful modeling scheme used to describe the dynamic behavior of biochemical kinetics. This chapter presents a framework for building stochastic models from sequences sampled from observed SPNs by means of the Expectation Maximization (EM) algorithm and Continuous-Time Markov Chains (CTMCs). Apart from its theoretical interest, we expect the framework to be useful for classifying different stochastic processes and identifying the differences between them in a big data context. An important application of the newly developed framework is disease diagnosis, especially in Intensive Care Units (ICUs). To test the framework, we conducted simulations using SPNs with different structures, using the Gillespie algorithm to generate data. After training, classification results show an F1 score of 95% using linear Support Vector Machines (SVMs) and 89% using the Classification and Regression Trees (CART) algorithm.
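The Gillespie algorithm used above to sample SPN trajectories can be sketched for the simplest case, a birth-death process: species X gains one unit at rate k1 and loses one at rate k2·X. The rates and initial count below are illustrative, not the chapter's models.

```python
# Minimal Gillespie (stochastic simulation algorithm) sketch for a
# birth-death process; parameters are illustrative toy values.
import random

def gillespie(x0, k1, k2, t_end, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(0.0, x0)]
    while t < t_end:
        a1, a2 = k1, k2 * x          # propensities of birth and death
        a0 = a1 + a2
        if a0 == 0:
            break                    # no reaction can fire
        t += rng.expovariate(a0)     # exponential waiting time to next event
        if rng.random() < a1 / a0:   # choose which reaction fires
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie(x0=10, k1=1.0, k2=0.1, t_end=5.0)
print(len(traj), traj[-1])           # event count and final (time, count)
```

Each simulated trajectory is one sampled sequence; a corpus of such sequences is what the EM/CTMC training stage would consume.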
A well-established notion in Evolutionary Computation (EC) is the importance of the balance between exploration and exploitation. Data structures (e.g. for solution encoding), evolutionary operators, selection, and fitness evaluation facilitate this balance. Furthermore, the ability of an Evolutionary Algorithm (EA) to provide efficient solutions typically depends on the specific type of problem. In order to obtain the most efficient search, it is often necessary to incorporate any available knowledge (both at the algorithmic and domain level) into the EA. In this work, we develop an ontology to formally represent knowledge in EAs. Our approach makes use of knowledge in the EC literature and can be used to suggest efficient strategies for solving problems by means of EC. We call our ontology the "Evolutionary Computation Ontology" (ECO). In this contribution, we show one possible use of it, i.e. to establish a link between algorithm settings and problem types. We also show that the ECO can be used as an alternative to the available parameter selection methods and as a supporting tool for algorithmic design.
Recent developments in the miniaturization of hardware have facilitated the use of robots or mobile sensory agents in many applications, such as the exploration of GPS-denied, hardly accessible unknown environments. This includes underground resource exploration and water pollution monitoring. One problem in scaling down robots is the significant emphasis it puts on power consumption due to the limited energy available online. Furthermore, the design of adequate controllers for such agents is challenging, as representing the system mathematically is difficult due to its complexity. In that regard, Evolutionary Algorithms (EAs) are a suitable choice for developing the controllers. However, the solution space for evolving those controllers is relatively large because of the wide range of tunable parameters available on the hardware, in addition to the numerous objectives which appear on different design levels. A recently-proposed method, dubbed the Instinct Evolution Schem...
Architecture of Computing Systems – ARCS 2020, 2020
Increasing the efficiency of parallel software development is one of the key obstacles in taking advantage of heterogeneous multi-core architectures. Efficient and reliable compiler technology is required to identify the trade-off between multiple design goals at once. The most crucial objectives are application performance and processor power consumption, and including memory power in this multi-objective optimisation problem is of utmost importance. Therefore, this paper proposes the heuristic MORAM to solve this three-dimensional Pareto front calculation. Furthermore, it is integrated into a commercially available framework to conduct a detailed evaluation and applicability study. MORAM is assessed with representative benchmarks on two different platforms and contrasted with a state-of-the-art evolutionary multi-objective algorithm. On average, MORAM produces 6% better Pareto fronts, while it is at least ...
ACM Journal on Emerging Technologies in Computing Systems, 2021
Logic locking is a prominent technique to protect the integrity of hardware designs throughout the integrated circuit design and fabrication flow. However, in recent years, the security of locking schemes has been thoroughly challenged by the introduction of various deobfuscation attacks. As in most research branches, deep learning is being introduced into the domain of logic locking as well. Therefore, in this article, we present SnapShot, a novel attack on logic locking that is the first of its kind to utilize artificial neural networks to directly predict a key bit value from a locked synthesized gate-level netlist without using a golden reference. Hereby, the attack uses a simpler yet more flexible learning model compared to existing work. Two different approaches are evaluated. The first approach is based on a simple feedforward fully connected neural network. The second approach utilizes genetic algorithms to evolve more complex convolutional neural network architectures speciali...
One of the main reasons for the success of Evolutionary Algorithms (EAs) is their general-purposeness, i.e. the fact that they can be applied in a straightforward manner to a broad range of optimization problems, without any specific prior knowledge. On the other hand, it has been shown that incorporating a priori knowledge, such as expert knowledge or empirical findings, can significantly improve the performance of an EA. However, integrating knowledge into EAs poses numerous challenges. It is often the case that the features of the search space are unknown, hence any knowledge associated with the search space properties can hardly be used. In addition, a priori knowledge is typically problem-specific and hard to generalize. In this paper, we propose a framework, called the Knowledge Integrated Evolutionary Algorithm (KIEA), which facilitates the integration of existing knowledge into EAs. Notably, the KIEA framework is EA-agnostic, i.e. it works with any evolutionary algorithm; problem-independent, i.e. it is not dedicated to a specific type of problem; and expandable, i.e. its knowledge base can grow over time. Furthermore, the framework integrates knowledge while the EA is running, thus optimizing the consumption of computational power. In the preliminary experiments shown here, we observe that the KIEA framework produces in the worst case an 80% improvement in convergence time, w.r.t. the corresponding "knowledge-free" EA counterpart.
Big Data Analytics for Cyber-Physical Systems, 2019
Miniaturized Autonomous Sensory Agents (MASAs) can play a pivotal role in the smart cities of tomorrow, from monitoring underground infrastructure, such as pollution in water pipes, to the exploration of natural resources such as oil and gas. These smart agents will not only detect anomalies; they are expected to provide sufficient data to facilitate mapping the detected anomalies while cleverly adapting their behaviour to changes in the environment. However, given these objectives and due to MASAs' miniaturization, conventional design methods are not suitable. On the one hand, it is not possible to use the widely adopted Simultaneous Localization and Mapping (SLAM) schemes because of the hardware limitations on board MASAs; moreover, the targeted environments for MASAs in a smart city are typically GPS-denied, hardly accessible, and either completely or partially unknown. On the other hand, designing MASAs' hardware and their autonomous on-line behaviour presents an additional challenge. In this chapter, we present a framework dubbed Evolutionary Localization and Mapping (EVOLAM), which uses Multi-objective Evolutionary Algorithms (MOEAs) to tackle the design and algorithmic challenges of using MASAs to monitor infrastructure. This framework facilitates offline localization and mapping while adaptively tuning offline hardware constraints and online behaviour. In addition, we present different types of MOEAs that can be used within the framework. Finally, we project EVOLAM on a case study, thus highlighting the effectiveness of MOEAs in solving different complex localization and mapping problems.
Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2021
In this work, we propose a novel approach for reinforcement learning driven by evolutionary computation. Our algorithm, dubbed Evolutionary-Driven Reinforcement Learning (Evo-RL), embeds the reinforcement learning algorithm in an evolutionary cycle, where we distinctly differentiate between purely evolvable (instinctive) behaviour and purely learnable behaviour. Furthermore, we propose that this distinction is decided by the evolutionary process, thus allowing Evo-RL to adapt to different environments. In addition, Evo-RL facilitates learning in environments with rewardless states, which makes it more suited for real-world problems with incomplete information. To show that Evo-RL leads to state-of-the-art performance, we present the performance of different state-of-the-art reinforcement learning algorithms when operating within Evo-RL and compare it with the case when these same algorithms are executed independently. Results show that reinforcement learning algorithms embedded within our Evo-RL approach significantly outperform the stand-alone versions of the same RL algorithms on OpenAI Gym control problems with rewardless states, constrained by the same computational budget.
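The outer evolutionary cycle described above can be caricatured with a toy task: each gene is either a fixed ("instinctive") action or None (left open for learning), and evolution itself decides which slots stay open. Everything here is illustrative, not the paper's algorithm; in particular, the inner learner is a trivial placeholder that defaults open slots to action 0, where Evo-RL proper would run an RL algorithm.

```python
# Toy sketch of an evolvable-vs-learnable split under a (1+1)-EA.
import random

TARGET = [1, 0, 1, 1]          # toy task: produce this action sequence

def learn(genome):
    # Placeholder "learner": open (None) slots fall back to action 0.
    return [0 if g is None else g for g in genome]

def fitness(genome):
    return sum(a == b for a, b in zip(learn(genome), TARGET))

def mutate(genome, rng):
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] = rng.choice([0, 1, None])   # evolution decides what is instinct
    return child

rng = random.Random(1)
best = [None] * len(TARGET)               # start with no instincts at all
for _ in range(500):                      # (1+1)-EA outer evolutionary cycle
    child = mutate(best, rng)
    if fitness(child) >= fitness(best):
        best = child
print(fitness(best) == len(TARGET))       # instincts + learner solve the task
```

The point of the caricature: wherever the default learner suffices (target action 0), evolution is free to leave the gene open, mirroring how Evo-RL lets the evolutionary process draw the instinct/learning boundary.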
The use of machine learning in automatic speaker identification and localization systems has rece... more The use of machine learning in automatic speaker identification and localization systems has recently seen significant advances. However, this progress comes at the cost of using complex models, computations, and increasing the number of microphone arrays and training data. Therefore, in this work, we propose a new end-to-end identification and localization model based on a simple fully connected deep neural network (FC-DNN) and just two input microphones. This model can jointly or separately localize and identify an active speaker with high accuracy in single and multi-speaker scenarios by exploiting a new data augmentation approach. In this regard, we propose using a novel Mel Frequency Cepstral Coefficients (MFCC) based feature called Shuffled MFCC (SHMFCC) and its variant Difference Shuffled MFCC (DSHMFCC). In order to test our approach, we analyzed the performance of the identification and localization proposed model on the new features at different noise and reverberation cond...
In this work, we propose a framework to enhance the communication abilities of speech-impaired pa... more In this work, we propose a framework to enhance the communication abilities of speech-impaired patients in an intensive care setting via reading lips. Medical procedure, such as a tracheotomy, causes the patient to lose the ability to utter speech with little to no impact on the habitual lip movement. Consequently, we developed a framework to predict the silently spoken text by performing visual speech recognition, i.e., lip-reading. In a two-stage architecture, frames of the patient’s face are used to infer audio features as an intermediate prediction target, which are then used to predict the uttered text. To the best of our knowledge, this is the first approach to bring visual speech recognition into an intensive care setting. For this purpose, we recorded an audio-visual dataset in the University Hospital of Aachen’s intensive care unit (ICU) with a language corpus hand-picked by experienced clinicians to be representative of their day-to-day routine. With a word error rate of 6...
2020 9th Mediterranean Conference on Embedded Computing (MECO)
In this work, a new hybrid algorithm for disease risk classification is proposed. The proposed me... more In this work, a new hybrid algorithm for disease risk classification is proposed. The proposed methodology is based on Dynamic Time Warping (DTW). This methodology can be applied to time series from various domains such as vital sign time series available in medical big data. To validate our methodology, we applied it to risk classification for sepsis, which is one of the most challenging problems within the area of medical data analysis. In the first step the algorithm uses different statistical properties of time series features. Furthermore, using differently labeled training data sets, we created a DTW Barycenter Averaging (DBA) on each feature. In the second step, validation data sets and DTW are used to validate the precision of classification and the final results are compared. The performance of our methodology is validated with real medical data and on six different criteria definitions for the sepsis diseases. Results show that our algorithm performed, in the best case, with precision and recall of 96,38% and 90,90%, respectively.
Proceedings of the Genetic and Evolutionary Computation Conference Companion
Miniature autonomous sensory agents (MASA) can play a profound role in the exploration of hardly ... more Miniature autonomous sensory agents (MASA) can play a profound role in the exploration of hardly accessible unknown environments, thus, impacting many applications such as monitoring of underground infrastructure or exploration for natural resources, e.g. oil and gas, or even human body diagnostic exploration. However, using MASA presents a wide range of challenges due to limitations of the available hardware resources caused by their scaled-down size. Consequently, these agents are kinetically passive, i.e. they cannot be guided through the environment. Furthermore, their communication range and rate is limited, which a ects the quality of localization and, consequently, mapping. In addition, conducting real-time localization and mapping is not possible. As a result, Simultaneous Localization and Mapping (SLAM) techniques are not suitable and a new problem de nition is needed. In this paper we introduce what we dub as the Centralized O ine Localization And Mapping (COLAM) problem, highlighting its key elements, then we present a model to solve it. In this model evolutionary algorithms (EAs) are used to optimize agents' resources o-line for an energy-e cient environment mapping. Furthermore, we illustrate a modi ed version of Vietoris-Rips Complex we dub as Trajectory Incorporated Vietoris-Rips (TIVR) complex as a tool to conduct mapping. Finally, we project the proposed model on real experiments and present results. CCS CONCEPTS •Computing methodologies → Evolvable hardware; •Computer systems organization → Sensor networks;
The availability of Big Data has increased significantly in many areas in recent years. Insights ... more The availability of Big Data has increased significantly in many areas in recent years. Insights from these data sets lead to optimized processes in many industries, which is why understanding as well as gaining knowledge through analyses of these data sets is becoming increasingly relevant. In the medical field, especially in intensive care units, fast and appropriate treatment is crucial due to the usually critical condition of patients. The patient data recorded here is often very heterogeneous and the resulting database models are very complex, so that accessing and thus using this data requires technical background knowledge. We have focused on the development of a web application that is primarily aimed at clinical staff and researchers. It is an easily accessible visualization and benchmarking tool that provides a graphical interface for the MIMIC-III database. The anonymized datasets contained in MIMIC-III include general information about patients as well as characteristics such as vital signs and laboratory measurements. These datasets are of great interest because they can be used to improve digital decision support systems and clinical processes. Therefore, in addition to visualization, the application can be used by researchers to validate anomaly detection algorithms and by clinical staff to assess disease progression. For this purpose, patient data can be individualized through modifications such as increasing and decreasing vital signs and laboratory parameters so that disease progression can be simulated and subsequently analyzed according to the user's specific needs.
Background In recent years, the volume of medical knowledge and health data has increased rapidly... more Background In recent years, the volume of medical knowledge and health data has increased rapidly. For example, the increased availability of electronic health records (EHRs) provides accurate, up-to-date, and complete information about patients at the point of care and enables medical staff to have quick access to patient records for more coordinated and efficient care. With this increase in knowledge, the complexity of accurate, evidence-based medicine tends to grow all the time. Health care workers must deal with an increasing amount of data and documentation. Meanwhile, relevant patient data are frequently overshadowed by a layer of less relevant data, causing medical staff to often miss important values or abnormal trends and their importance to the progression of the patient’s case. Objective The goal of this work is to analyze the current laboratory results for patients in the intensive care unit (ICU) and classify which of these lab values could be abnormal the next time the...
Proceedings of the Genetic and Evolutionary Computation Conference Companion
In this work, we propose a framework to evolve the morphology and behavior of resource-constrained miniaturized sensory agents. One key application of this framework is the exploration of unknown environments. Consequently, we demonstrate its ability to solve a multi-objective optimization problem by applying it to an environment exploration task. The goal is to find the position of a pollution source in a U-shaped fluid pipe with unknown fluid dynamics properties. Firstly, morphological evolution is set to find the optimal physical shape for an agent so that it passes through six target points inside the pipe, i.e., it follows a certain trajectory. Based on that, behavioral evolution then optimizes the sensor controller of an agent and tries to solve a Pareto-optimization problem where the objectives are (1) to accurately measure chemical particle concentrations, (2) to measure the depth of the agent while in the environment, and (3) to minimize power consumption while using the sensors by altering their operation frequencies. We executed 24,000 simulations; the best agent hit the target points with 91% accuracy and reached the maximum possible performance in the first and second objectives while reducing power usage by 86.6% compared to a fixed frequency rate.
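The Pareto-optimization at the heart of the behavioral evolution rests on the standard dominance test, which can be sketched as below. This is a generic illustration, not the paper's implementation; the three-element objective vectors (concentration error, depth error, power use) are invented stand-ins, and all objectives are assumed to be minimized.

```python
# Minimal sketch of Pareto dominance and non-dominated filtering,
# the mechanism underlying multi-objective behavioral evolution.
# Objective vectors here are illustrative, not from the paper.

def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# (concentration error, depth error, power use) per candidate controller
agents = [(0.10, 0.2, 50), (0.05, 0.3, 40), (0.20, 0.4, 60)]
print(pareto_front(agents))  # [(0.1, 0.2, 50), (0.05, 0.3, 40)]
```

Algorithms such as NSGA-II build on exactly this test, adding ranking and diversity preservation on top of it.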
Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2017
Advancement in the miniaturization of autonomous sensory agents can play a profound role in many applications, such as the exploration of unknown environments; however, due to their miniature size, power limitations pose a severe challenge. In this paper, inspired by biological instinctive behaviour, we introduce an instinct-driven dynamic hardware reconfiguration design scheme using evolutionary algorithms on behaviour trees. Moreover, this scheme is projected onto an application scenario of autonomous sensory agents exploring an inaccessible dynamic environment. In this scenario, the agents' compression behaviour, introduced as an instinct, is critical due to the limited energy available on the agents. This emphasises the role of optimizing the agents' resources through dynamic hardware reconfiguration. In that regard, the presented approach is demonstrated using two compression techniques: zero-order hold and wavelet compression. Behavioural and hardware-based power models of these techniques, integrated with behaviour trees (BTs), are implemented to facilitate off-line learning of the optimum on-line behaviour, thus facilitating dynamic reconfiguration of the agents' hardware.
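Of the two compression techniques, zero-order hold is the simpler: a sample is transmitted only when it deviates from the last transmitted value by more than a threshold, and the receiver holds the previous value in between. A minimal sketch, with threshold and signal values chosen purely for illustration:

```python
# Sketch of zero-order-hold compression as used for energy saving on
# miniaturized agents. Threshold and signal values are illustrative.

def zoh_compress(samples, threshold):
    """Keep a sample only when it deviates from the last kept value
    by more than `threshold`. Returns (index, value) pairs."""
    kept = [(0, samples[0])]
    last = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if abs(s - last) > threshold:
            kept.append((i, s))
            last = s
    return kept

signal = [1.0, 1.1, 1.05, 2.0, 2.02, 3.5]
print(zoh_compress(signal, threshold=0.5))  # [(0, 1.0), (3, 2.0), (5, 3.5)]
```

In the paper's setting, the choice between such a cheap scheme and the costlier wavelet compression is what the evolved behaviour tree decides at run time, based on the power models.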
2021 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), 2021
Speaker identification using deep learning is an active ongoing topic with wide applications in voice-authenticated and voice-activated systems, forensic investigations, security, and remote transactions. Mel frequency cepstral coefficients (MFCC) feature-based speaker identification systems are proven to perform very well under clean speech conditions. However, their robustness degrades rapidly under adverse conditions and short speech durations. To overcome this limitation, we propose a new speaker identification pipeline grounded on novel MFCC-based features called shuffled MFCC (SHMFCC), along with a new data augmentation approach. Moreover, the system uses a simple and efficient deep neural network model with variable-duration input speech signals and is shown to perform remarkably well on different datasets with different environmental and noise conditions and for a high number of speakers.
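The shuffling idea can be illustrated on a precomputed MFCC matrix. The abstract does not specify exactly how SHMFCC shuffles the coefficients, so this sketch makes one plausible assumption, permuting the order of per-frame MFCC vectors to generate extra training examples, and should be read as an illustration rather than the paper's definition.

```python
import random

# Hedged sketch of MFCC-frame shuffling as a data augmentation step.
# Assumption (not confirmed by the abstract): shuffling operates on the
# temporal order of per-frame coefficient vectors.

def shuffled_mfcc(mfcc_frames, seed=None):
    """Return a copy of the MFCC frame sequence in permuted order,
    so the model sees coefficient statistics rather than one fixed
    temporal ordering."""
    rng = random.Random(seed)
    frames = list(mfcc_frames)
    rng.shuffle(frames)
    return frames

# 4 frames x 3 coefficients (toy values standing in for real MFCCs)
frames = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
augmented = [shuffled_mfcc(frames, seed=s) for s in range(3)]  # 3 augmented copies
```

Each augmented copy contains the same frames as the original, which is the point: the speaker's coefficient distribution is preserved while the ordering varies.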
2018 International Conference on Computing, Networking and Communications (ICNC), 2018
Effective exploration of hardly accessible unknown environments impacts many applications, such as monitoring underground infrastructure, exploration of natural resources, e.g., oil and gas, and in-vivo exploration for diagnostic purposes. In many of these scenarios, the environment is only accessible by miniaturized agents (mm-sized or less). Consequently, miniature autonomous sensory agents (MASA) can play an imperative role. However, MASA present a wide range of challenges caused by their scaled-down size. Firstly, these agents are strongly limited in energy; consequently, special design constraints must be set. Secondly, these agents are usually kinetically passive; thus, the widely used Simultaneous Localization and Mapping (SLAM) techniques cannot be used. As a result, a new problem definition is needed, where the agents' motion is not controlled and, consequently, localization and mapping are conducted offline. In this paper, we formally introduce what we dub the Centralized Offline Localization And Mapping (COLAM) problem, highlighting its objectives and constraints. Furthermore, we illustrate a comprehensive solution to this problem using a non-dominated sorting genetic algorithm (NSGA) based framework to optimize the agents' resources and conduct adequate localization and mapping. Finally, we apply this solution to two different environments, each with different physical properties.
Big Data Analytics for Cyber-Physical Systems, 2019
The fast-growing digitalization of medicine has facilitated the collection of patient data in databases. A smart city must facilitate such databases in its hospitals. However, handling this big data requires new strategies for filtering and processing such high volumes of patient data. In this regard, Machine Learning (ML) algorithms are increasingly applied to support bedside decisions on patient treatment, preprocess medical data, and help to choose adequate medications. Consequently, the choice of the ML modeling scheme is imperative. In contrast to conventional classification representation methodologies used in ML, such as Artificial Neural Networks (ANNs), we present a novel concept based on stochastic models. Stochastic Petri Nets (SPNs) are a powerful modeling scheme used to describe the dynamic behavior of biochemical kinetics. This chapter presents a framework for building stochastic models based on the sampled sequences from observed SPNs by means of the Expectation Maximization (EM) algorithm and Continuous-Time Markov Chains (CTMCs). Apart from its theoretical interest, we expect the framework to be useful for the classification of different stochastic processes and for identifying the differences between them in a big data context. An important application of the newly developed framework is disease diagnosis, especially in Intensive Care Units (ICUs). To test the framework, we conducted simulations using SPNs with different structures, using the Gillespie algorithm to generate data. After training, classification results show an F1 score of 95% using linear Support Vector Machines (SVMs) and 89% using the Classification and Regression Trees (CART) algorithm.
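The Gillespie algorithm used here to generate training data is a standard stochastic simulation: repeatedly sample an exponentially distributed waiting time from the total propensity, then pick one reaction proportionally to its propensity. A self-contained sketch on a toy birth-death process (not one of the paper's SPNs):

```python
import random

# Generic Gillespie stochastic simulation, sketched for illustration.
# The birth-death network below is a toy example, not a network from
# the chapter's experiments.

def gillespie(rates, stoich, state, t_max, seed=0):
    """Simulate a reaction network. `rates[i](state)` is the propensity
    of reaction i; `stoich[i]` is its state-change vector. Returns the
    sampled trajectory as (time, state) pairs."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(state))]
    while t < t_max:
        props = [r(state) for r in rates]
        total = sum(props)
        if total == 0:
            break                              # no reaction can fire
        t += rng.expovariate(total)            # exponential waiting time
        pick, acc = rng.random() * total, 0.0
        for i, p in enumerate(props):          # pick reaction ~ propensity
            acc += p
            if pick <= acc:
                state = [s + d for s, d in zip(state, stoich[i])]
                break
        traj.append((t, tuple(state)))
    return traj

# Toy birth-death process: X -> X+1 at rate 1.0, X -> X-1 at rate 0.1*X
traj = gillespie(rates=[lambda s: 1.0, lambda s: 0.1 * s[0]],
                 stoich=[(1,), (-1,)], state=[5], t_max=10.0)
```

Sequences sampled this way from differently structured networks are exactly the kind of labeled data on which the SVM and CART classifiers can then be trained.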
A well-established notion in Evolutionary Computation (EC) is the importance of the balance between exploration and exploitation. Data structures (e.g., for solution encoding), evolutionary operators, selection, and fitness evaluation facilitate this balance. Furthermore, the ability of an Evolutionary Algorithm (EA) to provide efficient solutions typically depends on the specific type of problem. In order to obtain the most efficient search, it is often necessary to incorporate any available knowledge (both at the algorithmic and domain level) into the EA. In this work, we develop an ontology to formally represent knowledge in EAs. Our approach makes use of knowledge in the EC literature and can be used for suggesting efficient strategies for solving problems by means of EC. We call our ontology the "Evolutionary Computation Ontology" (ECO). In this contribution, we show one possible use of it, i.e., to establish a link between algorithm settings and problem types. We also show that the ECO can be used as an alternative to the available parameter selection methods and as a supporting tool for algorithmic design.
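The link between problem types and algorithm settings can be pictured as a simple queryable mapping. The entries below are invented placeholders for illustration; the actual ECO is a formal ontology, not a Python dictionary.

```python
# Illustrative sketch of the kind of query ECO is meant to answer:
# given a problem type, suggest EA settings. All entries are invented
# placeholders, not contents of the actual ontology.

eco_suggestions = {
    "combinatorial": {"encoding": "permutation", "crossover": "order-based"},
    "continuous":    {"encoding": "real-valued", "crossover": "blend"},
}

def suggest_settings(problem_type):
    """Look up suggested settings, with a generic fallback."""
    return eco_suggestions.get(problem_type, {"encoding": "bitstring"})

print(suggest_settings("continuous"))  # {'encoding': 'real-valued', 'crossover': 'blend'}
```

An ontology adds to this picture what a flat lookup cannot: formal relations between concepts, so that suggestions can be inferred for problem types never explicitly entered.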
Recent developments in the miniaturization of hardware have facilitated the use of robots or mobile sensory agents in many applications, such as the exploration of GPS-denied, hardly accessible unknown environments. This includes underground resource exploration and water pollution monitoring. One problem in scaling down robots is that it puts significant emphasis on power consumption due to the limited energy available online. Furthermore, the design of adequate controllers for such agents is challenging, as representing the system mathematically is difficult due to its complexity. In that regard, Evolutionary Algorithms (EAs) are a suitable choice for developing the controllers. However, the solution space for evolving those controllers is relatively large because of the wide range of possible tunable parameters available on the hardware, in addition to the numerous objectives that appear at different design levels. A recently-proposed method, dubbed Instinct Evolution Schem...
Architecture of Computing Systems – ARCS 2020, 2020
Increasing the efficiency of parallel software development is one of the key obstacles to taking advantage of heterogeneous multi-core architectures. Efficient and reliable compiler technology is required to identify the trade-off between multiple design goals at once. The most crucial objectives are application performance and processor power consumption. Including memory power in this multi-objective optimisation problem is of utmost importance. Therefore, this paper proposes the heuristic MORAM to solve this three-dimensional Pareto front calculation. Furthermore, it is integrated into a commercially available framework to conduct a detailed evaluation and applicability study. MORAM is assessed with representative benchmarks on two different platforms and contrasted with a state-of-the-art evolutionary multi-objective algorithm. On average, MORAM produces 6% better Pareto fronts, while it is at least...
ACM Journal on Emerging Technologies in Computing Systems, 2021
Logic locking is a prominent technique to protect the integrity of hardware designs throughout the integrated circuit design and fabrication flow. However, in recent years, the security of locking schemes has been thoroughly challenged by the introduction of various deobfuscation attacks. As in most research branches, deep learning is being introduced into the domain of logic locking as well. Therefore, in this article we present SnapShot, a novel attack on logic locking that is the first of its kind to utilize artificial neural networks to directly predict a key bit value from a locked synthesized gate-level netlist without using a golden reference. Hereby, the attack uses a simpler yet more flexible learning model compared to existing work. Two different approaches are evaluated. The first approach is based on a simple feedforward fully connected neural network. The second approach utilizes genetic algorithms to evolve more complex convolutional neural network architectures speciali...
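The first approach reduces key guessing to supervised classification: a fixed-length encoding of a locked netlist region goes in, a probability for the key bit comes out. The single logistic unit below is a deliberately minimal stand-in for the feedforward network; the encoding, weights, and function name are all invented for illustration and are not SnapShot's actual model.

```python
import math

# Toy stand-in for the feedforward key-bit predictor. The encoding
# (gate-type counts) and weights are illustrative assumptions only.

def predict_key_bit(features, weights, bias):
    """Single logistic unit: maps a fixed-length netlist encoding to
    the predicted probability that the key bit is 1."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

encoding = [3, 1, 0, 2]  # e.g. counts of gate types around the key gate
p = predict_key_bit(encoding, weights=[0.4, -0.2, 0.1, 0.3], bias=-0.5)
print(round(p, 2))  # 0.75 -> predict key bit = 1
```

The real attack replaces this unit with a trained multi-layer network (or an evolved CNN), but the decision rule, thresholding a learned probability per key bit, is the same.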
One of the main reasons for the success of Evolutionary Algorithms (EAs) is their general-purposeness, i.e., the fact that they can be applied in a straightforward manner to a broad range of optimization problems, without any specific prior knowledge. On the other hand, it has been shown that incorporating a priori knowledge, such as expert knowledge or empirical findings, can significantly improve the performance of an EA. However, integrating knowledge into EAs poses numerous challenges. It is often the case that the features of the search space are unknown, hence any knowledge associated with the search space properties can hardly be used. In addition, a priori knowledge is typically problem-specific and hard to generalize. In this paper, we propose a framework, called Knowledge Integrated Evolutionary Algorithm (KIEA), which facilitates the integration of existing knowledge into EAs. Notably, the KIEA framework is EA-agnostic, i.e., it works with any evolutionary algorithm; problem-independent, i.e., it is not dedicated to a specific type of problem; and expandable, i.e., its knowledge base can grow over time. Furthermore, the framework integrates knowledge while the EA is running, thus optimizing the consumption of computational power. In the preliminary experiments shown here, we observe that the KIEA framework produces, in the worst case, an 80% improvement in convergence time with respect to the corresponding "knowledge-free" EA counterpart.
Big Data Analytics for Cyber-Physical Systems, 2019
Miniaturized Autonomous Sensory Agents (MASAs) can play a pivotal role in the smart cities of tomorrow, from monitoring underground infrastructure, such as pollution in water pipes, to the exploration of natural resources such as oil and gas. These smart agents will not only detect anomalies; they are expected to provide sufficient data to facilitate mapping the detected anomalies while cleverly adapting their behaviour to the changes presented in the environment. However, given these objectives and due to MASAs' miniaturization, conventional design methods are not suitable for MASAs. On the one hand, it is not possible to use the widely adopted Simultaneous Localization and Mapping (SLAM) schemes because of the hardware limitations on board MASAs. Furthermore, the targeted environments for MASAs in a smart city are typically GPS-denied, hardly accessible, and either completely or partially unknown. Additionally, designing MASAs' hardware and their autonomous on-line behaviour presents a further challenge. In this chapter, we present a framework dubbed Evolutionary Localization and Mapping (EVOLAM), which uses Multi-objective Evolutionary Algorithms (MOEAs) to tackle the design and algorithmic challenges of using MASAs to monitor infrastructure. This framework facilitates offline localization and mapping, while adaptively tuning offline hardware constraints and online behaviour. In addition, we present different types of MOEAs that can be used within the framework. Finally, we project EVOLAM onto a case study, thus highlighting MOEA effectiveness in solving different complex localization and mapping problems.
Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2021
In this work, we propose a novel approach for reinforcement learning driven by evolutionary computation. Our algorithm, dubbed Evolutionary-Driven Reinforcement Learning (Evo-RL), embeds the reinforcement learning algorithm in an evolutionary cycle, where we distinctly differentiate between purely evolvable (instinctive) behaviour and purely learnable behaviour. Furthermore, we propose that this distinction is decided by the evolutionary process, thus allowing Evo-RL to adapt to different environments. In addition, Evo-RL facilitates learning in environments with rewardless states, which makes it more suited for real-world problems with incomplete information. To show that Evo-RL leads to state-of-the-art performance, we present the performance of different state-of-the-art reinforcement learning algorithms when operating within Evo-RL and compare it with the case when these same algorithms are executed independently. Results show that reinforcement learning algorithms embedded within our Evo-RL approach significantly outperform the stand-alone versions of the same RL algorithms on OpenAI Gym control problems with rewardless states, constrained by the same computational budget.
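The outer evolutionary cycle can be sketched as a generic generational loop in which each individual's fitness evaluation would, in the full method, run the inner RL loop on the residual learnable behaviour. The skeleton below is a hedged illustration: selection scheme, mutation rate, and the bitstring stand-in for an instinctive behaviour are all assumptions, and the toy fitness replaces any actual RL training.

```python
import random

# Skeleton of an evolutionary cycle in the spirit of Evo-RL. In the
# real method, `fitness` would train and evaluate an RL agent on top of
# the individual's instinctive behaviour; here it is a toy stand-in.

def evo_cycle(fitness, mutate, init, generations=20, pop_size=8, seed=0):
    """Truncation-selection evolutionary loop: keep the better half,
    refill the population with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

# Toy stand-in: evolve a 10-bit "instinct"; score is just the bit count.
best = evo_cycle(fitness=sum,
                 mutate=lambda ind, rng: [b ^ (rng.random() < 0.2) for b in ind],
                 init=lambda rng: [rng.randint(0, 1) for _ in range(10)])
```

What Evo-RL adds on top of such a loop is the key design choice stated above: which parts of the behaviour are fixed by evolution and which are left for the inner RL algorithm to learn is itself subject to the evolutionary process.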
Papers by Ahmed Hallawa