2008 Seventh IEEE International Symposium on Network Computing and Applications, 2008
... Each experiment was conducted using four codecs: replication, Reed-Solomon codes [4], Information Dispersal [3], and Tornado codes [1]. All of the servers contained an Intel® Pentium® 4 CPU with 2 MB of L2 cache, clocked at 3.2 GHz. ...
This paper presents a highly efficient method of localizing faults on an electrical distribution grid while maintaining a high level of accuracy. First, the grid is broken down into segments, each with a unique identifier. The solution then analyses the travelling waves present on the electrical grid at the time of the fault. These signals are measured by sensors at multiple locations and used to generate a key that is then utilized, in conjunction with a segmented graphical representation of the electrical distribution grid, to determine the location of the fault. The results show that the proposed approach is effective.
Autonomic computing investigates how systems can achieve (user-)specified "control" outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and interdependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can occur at different levels of granularity, from full to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
2020 IEEE 6th World Forum on Internet of Things (WF-IoT), 2020
In the Smart Grid environment, the advent of intelligent measuring devices facilitates monitoring appliance electricity consumption. This data can be used to apply Demand Response (DR) in residential houses through data analytics and data mining techniques. In this research, we introduce a smart system foundation that is applied to users' disaggregated power consumption data. This system encourages users to apply DR by changing their behaviour from heavier operation modes to lighter modes, and by shifting their usage to off-peak hours. First, we apply Cross Correlation (XCORR) to detect the times at which an appliance is being used. We then use the Dynamic Time Warping (DTW) [13] algorithm to recognize the operation mode used.
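The abstract above names a two-step pipeline (cross-correlation to detect when an appliance runs, DTW to identify its operation mode) without giving details. The following is a minimal, hypothetical sketch of that idea; the signal values, mode templates, and function names are invented for illustration and are not from the paper.

```python
import math

def xcorr_peak(signal, template):
    """Offset where a sliding window best matches the template,
    scored by normalized cross-correlation (Pearson r)."""
    n = len(template)
    t_mean = sum(template) / n
    t = [x - t_mean for x in template]
    t_norm = math.sqrt(sum(x * x for x in t))
    best_off, best_r = 0, float("-inf")
    for off in range(len(signal) - n + 1):
        w = signal[off:off + n]
        w_mean = sum(w) / n
        wc = [x - w_mean for x in w]
        w_norm = math.sqrt(sum(x * x for x in wc))
        r = 0.0 if w_norm == 0 or t_norm == 0 else \
            sum(a * b for a, b in zip(wc, t)) / (w_norm * t_norm)
        if r > best_r:
            best_off, best_r = off, r
    return best_off

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Invented aggregate signal: idle baseline, then a "heavy-mode" run.
heavy_mode = [2.0, 2.1, 2.0, 1.9, 2.0, 2.1]   # invented mode template
light_mode = [0.8, 0.9, 0.8, 0.8, 0.9]        # invented mode template
signal = [0.1] * 10 + heavy_mode + [0.1] * 10

offset = xcorr_peak(signal, heavy_mode)               # when was it used?
detected = signal[offset:offset + len(heavy_mode)]
mode = min(("heavy", "light"),                        # which mode was it?
           key=lambda name: dtw_distance(
               detected, heavy_mode if name == "heavy" else light_mode))
```

Normalized correlation is used so a partially overlapping window cannot outscore an exact match; the paper may well use a different detector.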
2019 IEEE Global Communications Conference (GLOBECOM), 2019
Introducing resiliency in a legacy power grid is one of the core mandates of smart grid architecture. Resiliency enables electrical distribution companies to maintain service to their customers in the event of a grid failure. Its main advantages appear during a fault event: balancing the power load and overcoming the fault state by minimizing its impact. Although researchers have proposed many schemes for post-failure events to isolate faults (under certain conditions), there is no overall framework to evaluate the resiliency of an existing power grid and to improve it, if required, so that it recovers from fault incidents successfully. In this paper, we propose a two-phase resiliency framework for the smart power grid. The first phase assesses the resiliency of a power grid, providing important information regarding its weaknesses and strengths. The second phase processes the output of phase one to increase resiliency by employing redundant grid elements, such as power lines and poles, to back up and strengthen vulnerable components. This preventive framework increases overall grid resiliency and enables an efficient self-recovery policy upon any possible power failure. To achieve this goal, we utilize graph-theoretic metrics, mainly the distortion metric, to implement our proposed framework. The framework is implemented and its effectiveness under link failure events is demonstrated on three IEEE bus systems: 14, 30, and 118.
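The abstract invokes a graph-theoretic distortion metric without defining it. One common reading — and it is only an assumption here — measures how much pairwise shortest-path distances stretch when the grid loses links. A toy sketch, with topology and node labels invented:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distortion(adj_full, adj_sub):
    """Average stretch of pairwise distances after edge loss:
    mean over still-connected pairs of d_sub(u, v) / d_full(u, v)."""
    nodes = sorted(adj_full)
    ratios = []
    for i, u in enumerate(nodes):
        df = bfs_dist(adj_full, u)
        ds = bfs_dist(adj_sub, u)
        for v in nodes[i + 1:]:
            if v in ds:                      # pair still connected
                ratios.append(ds[v] / df[v])
    return sum(ratios) / len(ratios)

# Toy 4-node ring as a tiny stand-in for a power-grid topology,
# and the same grid after the 1-4 link fails.
ring = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
broken = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

stretch = distortion(ring, broken)   # > 1 means paths got longer
```

A stretch close to 1 suggests the grid tolerates the failure well; larger values flag vulnerable links that phase two might back up.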
We consider here Parallel Communicating (PC) grammar systems with features inspired by Hoare's model of Communicating Sequential Processes (CSP). Specifically, we consider (1) nondeterministic queries (in two variants: using sets of query symbols, such that one symbol from each set has to be answered, and querying by nonterminals, such that the queried grammars are those having the specified symbol as axiom), (2) flags indicating that a component of the system is ready to communicate, and (3) patterns of the communicated strings. The generative power of the obtained PC grammar systems is investigated, in comparison with the power of the usual classes of PC grammar systems and with the classes of grammars in the Chomsky hierarchy. (Research supported by Grants OGP0007877 and S365A2 of the Natural Sciences and Engineering Research Council of Canada, and a University of Western Ontario start-up grant.)
Security and fault tolerance are growing concerns for today's information processing needs. However, these two problems have rarely been addressed together, in part because the common solutions conflict: fault tolerance is usually accomplished by replicating and dispersing vital components, while security is usually accomplished by minimizing and concentrating them. This thesis presents an algorithmic solution to the problem of integrating these two concepts by making existing trusted third-party authentication protocols fault tolerant. This is shown for the general case and then applied to the Kerberos protocol. Formal methods employing modal logics have proven valuable in determining whether the goals of authentication are met and whether there are any security weaknesses. However, the formal methods in existence today generally ignore replication. Therefore, in order to formally prove to a high degree of confidence that t...
2018 IEEE World Engineering Education Conference (EDUNINE), 2018
The field of e-learning has emerged as a topic of interest in academia due to the increased ease of accessing the Internet using smart-phones and wireless devices. One of the challenges facing e-learning platforms is how to keep students motivated and engaged. Moreover, it is also crucial to identify the students that might need help in order to make sure their academic performance doesn't suffer. To that end, this paper investigates the relationship between student engagement and academic performance. The Apriori association rule algorithm is used to derive a set of rules that relate student engagement to academic performance. Analysis of the experimental results using the confidence and lift metrics shows that a positive correlation exists between students' engagement level and their academic performance in a blended e-learning environment. In particular, it is shown that higher engagement often leads to better academic performance. This cements previous work that linked engagement and academic performance in traditional classrooms.
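The abstract evaluates mined rules with confidence and lift. As an illustration of those two metrics only — the transactions below are invented, not the paper's data:

```python
# Invented student records: each transaction holds an engagement level
# and a performance level, mirroring the kind of rule the paper mines.
transactions = [
    {"engagement=high", "performance=high"},
    {"engagement=high", "performance=high"},
    {"engagement=high", "performance=low"},
    {"engagement=low", "performance=low"},
    {"engagement=low", "performance=low"},
    {"engagement=low", "performance=high"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """Confidence normalized by the consequent's base rate; > 1 means
    the antecedent makes the consequent more likely than chance."""
    return confidence(antecedent, consequent) / support(consequent)

rule_conf = confidence({"engagement=high"}, {"performance=high"})
rule_lift = lift({"engagement=high"}, {"performance=high"})
```

A lift above 1 is what supports the abstract's claim of a positive correlation between engagement and performance; Apriori itself additionally prunes candidate itemsets by a minimum-support threshold.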
The predominant risk factor for diabetic foot ulcers (DFU), peripheral neuropathy, results in loss of protective sensation and is associated with abnormally high plantar pressures. DFU prevention strategies strive to reduce these high plantar pressures. Nevertheless, several constraints should be acknowledged regarding the research supporting the link between plantar pressure and DFUs, which may explain the low predictive ability reported in prospective studies. The majority of studies assess vertical, rather than shear, barefoot plantar pressure in laboratory-based environments rather than during daily activity. Few studies investigated pressure specific to the location of a previous DFU. Previous studies focus predominantly on walking, although studies monitoring activity suggest that more time is spent on other weight-bearing activities, where a lower "peak" plantar pressure might be applied over a longer duration. Although further research is needed, this may indicate that an expression of cumulative pressure applied over time could be a more relevant parameter than peak pressure. Studies indicated that providing pressure feedback might reduce plantar pressures, with an emerging potential use of smart technology; however, further research is required. Further pressure analyses, across all weight-bearing activities and referring to location-specific pressures, are required to improve our understanding of the pressures resulting in DFUs and to improve the effectiveness of interventions.
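The suggestion that cumulative pressure over time may matter more than peak pressure can be made concrete with a small numeric sketch; the pressure samples below are invented and purely illustrative:

```python
# Invented plantar-pressure samples (kPa) at 1-second intervals for two
# activities: a brief high-peak walking strike vs. prolonged low-peak
# standing. Shows how a pressure-time integral can rank activities
# differently than peak pressure does.
walking = [0, 260, 0, 0, 0, 0, 0, 0, 0, 0]
standing = [80] * 10
dt = 1.0  # seconds per sample

def peak(p):
    """Peak pressure (kPa) over the recording."""
    return max(p)

def pressure_time_integral(p, dt):
    """Cumulative exposure: sum of pressure * duration (kPa*s)."""
    return sum(x * dt for x in p)

peaks = (peak(walking), peak(standing))
ptis = (pressure_time_integral(walking, dt),
        pressure_time_integral(standing, dt))
```

By peak pressure, walking looks far riskier (260 vs 80 kPa); by the cumulative measure, standing accumulates the larger exposure (800 vs 260 kPa·s) — the kind of reversal the abstract hints at.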
The MANDAS project has defined a layered architecture for the management of distributed applications. In this paper we examine a vertical slice of this architecture, namely the management applications and services related to configuration management. We introduce an information model which captures the configuration information for distributed applications and discuss a repository service based on the model. We define a ...
2012 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2012
Trust is a concept that has been used to support better decision-making when there is incomplete information. Trust requires evidence, which may come from multiple sources; one or more evidence sources may be used in trust calculation. This paper presents a middleware that takes this into account, the algorithms used, and experimental results.
Third International Conference on Autonomic and Autonomous Systems (ICAS'07), 2007
... This shows that it is possible to implement the client on other platforms and still use the server application on the Linux platform. ... 2. Balan, R. K., Flinn, J., Satyanarayanan, M., Sinnamohideen, S., Yang, H., "The Case for Cyber Foraging," in Proceedings of the 10th ACM SIGOPS ...
Proceedings of the IEEE Third International Workshop on Systems Management, 1998
Today's computing environments are becoming more and more distributed in nature. At the same time, the applications used in these environments are becoming more complicated and are being used in more mission-critical roles in the enterprise. Consequently, users' demands for performance, reliability, and availability are increasing rapidly. To meet these needs, a high level of quality of service must ...
2009 Fourth International Conference on Computer Sciences and Convergence Information Technology, 2009
The challenging problem in geocasting is distributing packets to all the nodes within the geocast region with high probability. Sending packets to a node in the geocast region cannot guarantee delivery to all nodes in the region, because there may be some islands in which nodes are not connected to each other within the region while they may ...
2014 IEEE Network Operations and Management Symposium (NOMS), 2014
Increasingly, applications are moving into the cloud, which is in fact supported by large-scale data centres on the ground. These data centres are complex systems to manage, and centralized solutions may be unable to meet the required scale or to make efficient use of their networks. In this paper, we propose a hierarchical approach to dynamic resource management in data centres, where we leverage the topology of the data centre network to design the management hierarchy. We define a set of aggregate metrics at various levels in the hierarchy to convey system state information to higher management levels, and define managers' responsibilities and interactions. We evaluate our proposed approach through simulation. Experiments show that this management approach greatly reduces the flow of management data across the data centre network, thus reducing network overhead. Index Terms—data center management; virtualized infrastructure management; cloud management; energy management; SLA management
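The paper's aggregate metrics are not specified in the abstract; as a sketch of the general idea — each management level forwarding a single aggregate upward instead of raw host data — consider the following, with tree layout and utilization numbers invented:

```python
# Hypothetical sketch of hierarchical aggregation: rather than every
# host reporting to one central manager, each level forwards a single
# aggregate (here, mean CPU utilization) to its parent, shrinking the
# volume of management traffic crossing the data-centre network.

def aggregate(node):
    """Post-order walk: a leaf (host) reports its own utilization;
    an inner manager reports the mean of its children's aggregates."""
    if "util" in node:                 # leaf: a physical host
        return node["util"]
    child_values = [aggregate(c) for c in node["children"]]
    return sum(child_values) / len(child_values)

# Toy data-centre tree: root manager -> two rack managers -> hosts.
datacentre = {
    "children": [
        {"children": [{"util": 0.2}, {"util": 0.4}]},   # rack A
        {"children": [{"util": 0.6}, {"util": 0.8}]},   # rack B
    ]
}

overall = aggregate(datacentre)   # each level sends one number upward
```

With N hosts and a k-level hierarchy, the root receives O(number of top-level children) values per reporting period instead of O(N), which is the traffic reduction the experiments measure.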
Papers by Hanan Lutfiyya