This chapter contains sections titled: Overview, Introduction, A Summary of Task-Scheduling Results for Multiprocessor Systems, Priority-Driven Preemptive Scheduling Approach, Static Table-Driven Scheduling Approach, Dynamic Planning-Based Scheduling Approach, Dynamic Best-Effort Scheduling Approach, Integrated Scheduling of Hard and Quality of Service-Degradable Tasks, Real-Time Scheduling with Feedback Control, Summary, Exercises, References
Proceedings 16th International Parallel and Distributed Processing Symposium, 2002
In this paper, we present a novel and comprehensive resource management solution for the autonomous hot-spot convergence system (AHSCS) that uses a sensor web. The solution responds to a call for solutions issued at WPDRTS 2002. It involves system analysis and design and the development of a new resource management methodology, which we call Feedback-based Adaptive Resource Management (FARM). FARM combines the advantages of feedback control scheduling, path-based scheduling, value-based scheduling, and survivability strategies to provide dependable (predictable, reliable, and secure) services to the AHSCS.
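The feedback-control-scheduling idea that FARM builds on can be illustrated with a minimal admission-control loop: measure the deadline-miss ratio each control period and nudge the admitted load toward a setpoint. The setpoint, controller gain, and utilization bounds below are illustrative assumptions, not the FARM design itself.

```python
# Minimal sketch of feedback control scheduling (the idea FARM builds on).
# The setpoint, gain, and utilization bounds are illustrative assumptions only.

def admission_control_step(measured_miss_ratio, admitted_utilization,
                           setpoint=0.05, gain=0.5,
                           min_util=0.1, max_util=0.9):
    """One control period: adjust the admitted CPU utilization so the
    observed deadline-miss ratio tracks the setpoint."""
    error = setpoint - measured_miss_ratio        # positive -> room to admit more work
    admitted_utilization += gain * error          # proportional correction
    return max(min_util, min(max_util, admitted_utilization))

# A burst of misses (20%) pulls the admitted load down over a few periods.
util = 0.8
for miss_ratio in (0.20, 0.12, 0.06, 0.04):
    util = admission_control_step(miss_ratio, util)
    print(round(util, 3))
```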
Many time-critical applications require predictable performance. Tasks corresponding to these applications have deadlines that must be met despite the presence of faults. Failures can occur either due to processor faults or due to task errors. To tolerate both processor and task failures, the copies of every task have to be mutually excluded in space and in time in the schedule. We assume that each task has two versions, namely a primary copy and a backup copy. We believe that the position of the backup copy in the task queue relative to the position of the primary copy (the distance) is a crucial parameter that affects the performance of any fault-tolerant dynamic scheduling algorithm. To study the effect of the distance parameter, we make fault-tolerant extensions to the well-known myopic scheduling algorithm [Ramamritham et al., IEEE Trans. Parallel Distrib. Syst. 1(2) (1990) 184], a dynamic scheduling algorithm capable of handling resource constraints among tasks. We have conducted an extensive simulation to study the effect of the distance parameter on the schedulability of the fault-tolerant myopic scheduling algorithm.
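The distance parameter can be pictured as how far behind its primary a backup copy sits in the dispatch queue. The queue representation and the distance value in the sketch below are assumptions made for illustration; the actual algorithm additionally enforces space and time exclusion between the two copies on different processors.

```python
# Illustrative sketch of the "distance" parameter: the backup copy of a task
# is inserted a fixed number of positions after its primary in the task queue.
# Real fault-tolerant scheduling also keeps the two copies on different processors.

def place_backup(queue, primary_index, distance):
    """Insert the backup copy `distance` positions after its primary,
    or at the tail if the queue is shorter than that."""
    kind, task_id = queue[primary_index]
    assert kind == "primary"
    position = min(primary_index + distance, len(queue))
    queue.insert(position, ("backup", task_id))
    return queue

# Primaries T1..T4 queued; the backup of T1 is placed at distance 2.
q = [("primary", f"T{i}") for i in range(1, 5)]
print(place_backup(q, 0, 2))
# [('primary', 'T1'), ('primary', 'T2'), ('backup', 'T1'), ('primary', 'T3'), ('primary', 'T4')]
```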
Real-time computing is an enabling technology for many current and future applications and is becoming increasingly pervasive. Many complex applications, such as automated factories, defense systems, space systems, and telecommunication systems, exhibit a high degree of dynamic workload and distributed sensor processing, and require end-to-end performance and dependability guarantees. Parallel and distributed systems are natural candidates for satisfying such requirements due to their potential for high performance and fault tolerance. This calls for systematic research in several aspects of real-time systems (system specification, modeling and verification, architectures, resource management, languages, and databases) tailored to parallel and distributed systems. The goal of this special issue is to publish some recent research contributions in this important area. This special issue includes a set of seven research papers selected from the International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS), 2003. Each paper received three reviews during the WPDRTS selection process, and the revised versions of the papers were subsequently checked by the editors for additional material. These seven papers cover several key issues in parallel and distributed real-time systems, such as modeling, scheduling, middleware, and performance evaluation. The first paper, "Robust Scheduling in Team Robotics," by L.B. Becker, E. Nett, S. Schemmer, and M. Gergeleit, develops a scheduling algorithm that aims to provide predictable performance, based on expected execution times of tasks, in the presence of unpredictability in execution times. The second paper, "Reliable Event-Triggered Systems for Mechatronic Applications," by C. Siemers, R. Falsett, R. Seyer, and K. Ecker, develops hardware enhancements that allow building mechatronic systems with real-time and reliability features. The third paper, "COSMIC: A Real-Time Event-based Middleware for the CAN-Bus," by J. Kaiser, C. Brudna, and C. Mitidieri, presents an event model
Vulnerability assessment is a requirement of NERC's cybersecurity standards for electric power systems. Its purpose is to study the impact of a cyber attack on supervisory control and data acquisition (SCADA) systems. Compliance with the standard has become increasingly challenging as systems become more widely dispersed. Interdependencies between the computer communication system and the physical infrastructure also become more complex as information technologies are further integrated into devices and networks. This paper proposes a vulnerability assessment framework to systematically evaluate the vulnerabilities of SCADA systems at three levels: system, scenarios, and access points. The proposed method is based on cyber systems embedded with the firewall and password models, the primary modes of protection in the power industry today. The impact of a potential electronic intrusion is evaluated by its potential loss of load in the power system. This capability is enabled by integrating a logic-based simulation method with a module for power flow computation. The IEEE 30-bus system is used to evaluate the impact of attacks launched from outside or from within the substation networks. Countermeasures are identified for improving cybersecurity.
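One way to picture the scenario-level evaluation is to weight the loss of load caused by each intrusion scenario by the estimated chance of penetrating its access point. The access points, probabilities, megawatt figures, and the additive scoring rule below are illustrative assumptions, not the framework's actual model.

```python
# Hypothetical sketch of scenario-level vulnerability scoring: weight the loss
# of load caused by each intrusion scenario by the estimated chance of
# penetrating its access point. All figures and the additive scoring rule are
# illustrative assumptions, not the paper's model.

scenarios = [
    # (access point, probability the intrusion succeeds, loss of load in MW)
    ("substation dial-up modem",  0.30, 120.0),
    ("corporate VPN gateway",     0.10, 400.0),
    ("vendor maintenance link",   0.20,  60.0),
]

def system_vulnerability(scenarios):
    """Expected loss of load summed over scenarios (a simple additive index)."""
    return sum(p * loss for _, p, loss in scenarios)

for name, p, loss in scenarios:
    print(f"{name}: expected loss {p * loss:.1f} MW")
print("system-level index:", system_vulnerability(scenarios))
```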
IEEE Transactions on Parallel and Distributed Systems, 1993
Most real-time scheduling algorithms schedule tasks with respect to their worst-case computation times. Resource reclaiming refers to the problem of utilizing the resources left unused by a task when it executes in less than its worst-case computation time, or when a task is deleted from the current schedule. Resource reclaiming is a very important issue in dynamic real-time multiprocessor environments. In this paper, we present dynamic resource reclaiming algorithms that are effective, avoid any run-time anomalies, and have a bounded overhead cost that is independent of the number of tasks in the schedule. Each task is assumed to have a worst-case computation time, a deadline, and a set of resource requirements. The algorithms utilize the information given in a multiprocessor task schedule and perform on-line local optimization. The effectiveness of the algorithms is demonstrated through simulation studies. The algorithms have also been implemented in the Spring Kernel [15].
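The basic idea behind reclaiming can be shown with a deliberately simplified uniprocessor sketch: when a task finishes before its worst-case time, the next task in the schedule is dispatched immediately and the saved time is recorded. The task set and timing values are illustrative; the actual algorithms additionally handle multiprocessor schedules and resource constraints so that starting tasks early cannot cause run-time anomalies.

```python
# Deliberately simplified, uniprocessor sketch of resource reclaiming: when a
# task finishes before its worst-case computation time, the next scheduled task
# is dispatched immediately and the reclaimed time is accumulated. The paper's
# algorithms handle multiprocessor schedules and resource constraints as well.

def run_with_reclaiming(schedule, actual_times):
    """schedule: list of (task, scheduled_start, worst_case_time);
    actual_times: the actual execution time of each task."""
    clock, reclaimed = 0.0, 0.0
    for (task, sched_start, wcet), actual in zip(schedule, actual_times):
        start = clock                                # dispatch as soon as the CPU is free
        reclaimed += max(sched_start - start, 0.0)   # time pulled forward vs. the plan
        clock = start + actual
        print(f"{task}: started {start}, finished {clock}")
    return reclaimed

sched = [("T1", 0.0, 4.0), ("T2", 4.0, 3.0), ("T3", 7.0, 2.0)]
print("reclaimed:", run_with_reclaiming(sched, [2.0, 3.0, 1.0]))   # reclaimed: 4.0
```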
The issue of providing fault tolerance in real-time communication has been a problem of growing importance. There are two basic approaches for satisfying fault-tolerance requirements in real-time communication: (i) the forward error recovery approach and (ii) the detect-and-recover approach. The first approach is well suited for hard real-time communication, whereas the second is well suited for soft real-time communication. Neither basic approach is well suited for supporting both hard and soft real-time communication. In this paper, we propose an integrated scheme that not only supports such mixed communication requirements but also improves the call acceptance rate significantly due to its efficient resource allocation mechanisms, such as traffic dispersion and backup multiplexing. The effectiveness of the proposed scheme has been evaluated through extensive simulation studies.
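Backup multiplexing rests on the observation that, under a single-link-failure model, backups whose primary channels share no link can never be activated at the same time and can therefore share reserved bandwidth. The data structures and numbers in this sketch are illustrative assumptions, not the paper's mechanism in detail.

```python
# Sketch of backup multiplexing under a single-link-failure assumption: backups
# whose primary channels are link-disjoint can share reserved bandwidth on a
# common backup link. Data structures and figures are illustrative.

def backup_bandwidth_needed(backups):
    """backups: list of (primary_links, bandwidth) for backups routed over one
    link. Only primaries sharing a link can fail together, so the reservation
    is the worst case over any single link failure."""
    demand_per_link = {}
    for primary_links, bw in backups:
        for link in primary_links:
            demand_per_link[link] = demand_per_link.get(link, 0) + bw
    return max(demand_per_link.values(), default=0)

# Two primaries share link "e3", so their backups may be needed simultaneously;
# the third primary is disjoint, so its backup bandwidth is multiplexed.
backups = [({"e1", "e3"}, 2), ({"e3", "e4"}, 3), ({"e7"}, 4)]
print(backup_bandwidth_needed(backups))   # 5, rather than 2 + 3 + 4 = 9
```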
In this paper, we address the problem of best-effort scheduling of (m, k)-firm real-time streams in multihop networks. Existing solutions to the problem ignore scalability considerations because they maintain a separate queue and per-stream state information for each stream. In this context, we propose a scheduling algorithm, EDBP, which is scalable (its state is fixed, independent of the number of streams) with little degradation in performance.
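EDBP builds on the distance-based priority (DBP) idea for (m, k)-firm streams: a stream's urgency reflects how many consecutive deadline misses it can still absorb before fewer than m of its last k instances meet their deadlines. The sketch below shows that distance computation; the history encoding (most recent outcome first) is an assumption made for illustration.

```python
# Sketch of the distance-based priority (DBP) computation that (m, k)-firm
# schedulers such as EDBP build on: a stream's urgency is the number of
# consecutive deadline misses it can still absorb before fewer than m of its
# last k instances meet their deadlines. The history encoding is illustrative.

def distance_to_failure(history, m, k):
    """history: the last k outcomes, most recent first (True = deadline met)."""
    met = 0
    for position, ok in enumerate(history[:k], start=1):
        if ok:
            met += 1
            if met == m:
                # The m-th most recent met deadline sits at `position`;
                # the stream can tolerate k - position more misses.
                return k - position + 1
    return 0          # already violating (m, k): treat as most urgent

# A (3,5)-firm stream: the lower the distance, the higher its priority.
print(distance_to_failure([True, False, True, True, True], m=3, k=5))   # 2
print(distance_to_failure([False, True, False, True, True], m=3, k=5))  # 1
```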
With the proliferation of multimedia group applications, the construction of multicast trees satisfying Quality of Service (QoS) requirements is becoming a problem of prime importance. Multicast groups are usually classified as sparse or pervasive groups depending on the physical distribution of group members. They are also classified, based on the temporal characteristics of group membership, into static and dynamic groups. In this paper, we propose two algorithms for constructing multicast trees for multimedia group communication in which the members are sparse and static. The proposed algorithms use a constrained distributed unicast routing algorithm for generating low-cost, bandwidth- and delay-constrained multicast trees. These algorithms have lower message complexity and call setup time because they iteratively add paths, rather than edges, to partially constructed trees. We study the performance of these algorithms in terms of call acceptance rate, call setup time, and multicast tree cost through simulation, comparing them with a recently proposed algorithm [14] for the same problem. The simulation results indicate that the proposed algorithms provide higher call acceptance rates, lower setup times, and comparable tree costs.
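The path-wise construction idea can be sketched as follows: each member is attached to the partially built tree by a least-delay path from any tree node, and the path is accepted only if the source-to-member delay bound still holds. The graph, delay bound, and use of plain Dijkstra (rather than the paper's constrained distributed unicast routing algorithm) are illustrative assumptions.

```python
# Sketch of path-wise multicast tree construction: each member is grafted onto
# the partial tree via a least-delay path from any tree node, accepted only if
# the end-to-end delay bound holds. Plain Dijkstra stands in for the paper's
# constrained distributed unicast routing; the graph and bound are illustrative.

import heapq

def least_delay_path(graph, start_costs, target):
    """Dijkstra seeded with {node: source-to-node delay}; returns (delay, path)."""
    heap = [(cost, node, [node]) for node, cost in start_costs.items()]
    heapq.heapify(heap)
    seen = set()
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == target:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

def build_tree(graph, source, members, delay_bound):
    delay_to, tree_edges = {source: 0}, []
    for member in members:                        # attach one member at a time
        delay, path = least_delay_path(graph, delay_to, member)
        if delay > delay_bound:
            print(f"reject {member}: end-to-end delay {delay} exceeds bound")
            continue
        tree_edges += list(zip(path, path[1:]))   # graft the whole path, not one edge
        running = delay_to[path[0]]
        for u, v in zip(path, path[1:]):
            running += graph[u][v]
            delay_to[v] = running                 # record source-to-node delay
    return tree_edges

g = {"s": {"a": 1, "b": 4}, "a": {"m1": 2, "b": 1}, "b": {"m2": 3}, "m1": {}, "m2": {}}
print(build_tree(g, "s", ["m1", "m2"], delay_bound=6))
# [('s', 'a'), ('a', 'm1'), ('a', 'b'), ('b', 'm2')]
```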
... in the loss of a large volume of data, which will in turn affect a large user community. ... of the desirable properties that a survivable network might satisfy: (i) fast and efficient detection of faults, (ii) ...
Security was not an inherent feature of the Internet when it was originally deployed. The tremendous success and growth of the wired Internet has led to a wealth of applications ranging from e-commerce to grid computing. Quality of Service (QoS), reliability, and security are necessities for many of these applications. Furthermore, the growing number of wireless devices capable of connecting to the Internet has made QoS, reliability, and security important issues in the wireless world as well. A considerable amount of research has been done on all the aspects highlighted above, and much more remains to be done to secure the next-generation Internet and provide QoS. The term Trusted Internet refers to a next-generation Internet that is capable of providing QoS, reliability, and security guarantees to applications and end users. We are pleased to present this special issue of the Journal of High Speed Networks (JHSN), consisting of selected papers from the Third Annual Trusted Internet Workshop (TIW) 2004, which was held in conjunction with the International Conference on High-Performance Computing (HiPC) on December 22, 2004 in Bangalore, India. The goal of the workshop was to provide a forum for researchers and practitioners to present and discuss their work and exchange ideas in the areas of Internet QoS, Internet reliability, and Internet security. A similar special issue consisting of selected papers from the 2003 workshop was published last year as JHSN Vol. 13, No. 4, Dec. 2004. The workshop received 30 paper submissions. Each submission was reviewed by at least three reviewers, following which nine papers, some regular and the rest short, were selected for presentation at the workshop. For this special issue, we have selected six of these papers. Each of the six papers is an enhanced version of its workshop counterpart and has been reviewed again by the special issue co-editors. The papers represent a good sample of the ongoing research in this area, and we hope that they will stimulate further advances. The paper titled "SCIT-DNS: Critical infrastructure protection through secure DNS server dynamic updates" by Y. Huang, D. Arsenault, and A. Sood presents a secure implementation framework for DNS servers. The framework, called "Self-Cleaning Intrusion Tolerance," eliminates the risk of keeping the private keys of the DNS server online to sign dynamic updates, and uses hardware redundancy. An implementation of the framework is also presented. The paper titled "SPEE: A Secure Program Execution Environment tool using code integrity checking" by O. Gelbart, B. Narahari, and R. Simha attempts to create a secure program execution environment by complementing existing code security tools with the addition of program checking and program/user authentication. To this