Highly varying job demands generally consist of many short jobs mixed with several long jobs. In this paper, we consider a simple scenario where two job streams with different levels of demand must be processed by the same server. We study the performance of several round-robin variants and FCFSP in such a scenario. The simulation results show that, on the one hand, by employing immediate preemption to favor newly arrived jobs, round-robin can effectively reduce the mean response time for the short-job stream while only slightly increasing the response time for the long-job stream. On the other hand, by assuming the availability of job-stream information and always favoring the short-job stream, FCFSP may improve performance. However, to improve performance further, other information, if available (e.g., the characteristics of each individual stream), should be considered.
Scheduling has a great influence on computer system performance. Many recent system designs use policies that give priority to short jobs, as it is often the case that the majority of jobs are short. SRPT (Shortest Remaining Processing Time first), which serves the job that needs the least amount of service to complete, is known to produce the optimum mean response time. However, SRPT requires exact advance knowledge of each job's service time, which is not practical. FCFS and PS, which shares the system capacity equally among all jobs, can be used if the CPU time requirement of each job is not available beforehand. However, FCFS is detrimental even when there is only moderate variability in job service times. In practice, PS must be implemented by RR with time-slicing, which incurs non-negligible job-switching overhead for small time-slices. Over time, further strategies have been presented in addition to PS, for example LCFSPR (Last Come First Serve with Pre-emptive Resume), LAT (Least-Attained-Time), and SRT (Shortest Residual-Time). A common feature of these strategies is that preemption is employed to favour possibly short jobs. Among these strategies, some are simple to use and incur little overhead, while others incur more. For example, RR and LCFSPR only require insertion of jobs at the front or back of the queue (with complexity O(1)), whereas LAT and SRT need to maintain a sorted queue (with complexity O(log n)) and keep track of the attained and residual times of individual jobs. LCFSPR also yields the M/M/1 result, but it leads to situations where short jobs can get stuck behind long jobs. In this paper, we mainly consider the issue of handling newly arrived jobs in implementing the RR strategy. A research issue is then how time-slicing performs if large time-slices have to be used.
In this paper, we investigate several RR variants through Discrete Event Simulation. Our results show that, by favouring newly arrived jobs, the performance of RR with large time-slices can be better than that of ideal PS. The simple immediate-preemption scheme, which serves new jobs immediately by preempting the currently active job, is shown to further improve the performance of RR.
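To make the immediate-preemption discipline concrete, here is a minimal single-server sketch in Python (our illustration, not the simulator used in the paper): a newly arrived job preempts the job in service, which rejoins the back of the queue with its remaining work.

```python
from collections import deque

def simulate_rr(jobs, q, immediate_preempt=False):
    """Toy single-server round-robin with time-slice q.
    jobs: list of (arrival_time, size).  With immediate_preempt=True,
    a newly arrived job preempts the job in service; the preempted job
    rejoins the back of the queue.  Returns the mean response time."""
    pending = sorted(jobs)              # future arrivals, by arrival time
    queue = deque()                     # entries: [arrival_time, remaining]
    t, i, resp = 0.0, 0, []
    while i < len(pending) or queue:
        if not queue:                   # server idle: jump to next arrival
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            queue.append([pending[i][0], pending[i][1]])
            i += 1
        job = queue.popleft()
        run = min(q, job[1])            # slice expiry or job completion
        if immediate_preempt and i < len(pending):
            run = min(run, pending[i][0] - t)   # ... or the next arrival
        t += run
        job[1] -= run
        if job[1] <= 1e-12:
            resp.append(t - job[0])     # job finished
        elif immediate_preempt and i < len(pending) and abs(pending[i][0] - t) < 1e-12:
            queue.append(job)                   # preempted job to the back,
            queue.appendleft(list(pending[i]))  # new arrival to the front
            i += 1
        else:
            queue.append(job)           # slice expired: back of the queue
    return sum(resp) / len(resp)
```

With `jobs = [(0, 10), (1, 1)]` and `q = 4`, plain RR yields a mean response time of 9.5 while immediate preemption yields 6.0: the short job's response time drops from 8 to 1 while the long job's (11) is unchanged.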
Sarah Tasneem, Math and Computer Science, Eastern Connecticut State University, Willimantic, CT 06226, [email protected]. Bhatti and Friedrich [5] discuss classifying web requests into high, medium, and low priorities using such information as IP addresses and ...
International Journal of Computers and Their Applications, 2010
Abstract: Real-time multi-process scheduling is commonly used in many control situations, where it is important to achieve job completions within specific time intervals. This paper investigates the potential improvement that might be achieved when additional ...
It has been observed in recent years that in many applications service-time demands are highly variable. Without foreknowledge of the exact service times of individual jobs, processor sharing is an effective theoretical strategy for handling such demands. In practice, however, processor sharing must be implemented by time-slicing with a round-robin discipline. In this paper, we investigate how round-robin performs when job-switching overhead is taken into account. Based on recent results, we assume that the best strategy is for new jobs to preempt the one in service. By analyzing time-slicing with overhead, we derive the effective utilization parameter and give a good approximation for the lower bound of the time-slice under a given system load and overhead. The simulation results show that, for both exponential and non-exponential distributions, the system blow-up points agree with what the effective utilization parameter predicts. Furthermore, when overhead is taken into account, an optimum time-slice value exists for a particular environment.
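The overhead analysis can be sketched as follows (our reconstruction under stated assumptions, not necessarily the paper's exact derivation): if every time-slice of length q carries a fixed switching overhead c, then with the continuous approximation ceil(x/q) ≈ x/q a job of size x holds the server for roughly x·(1 + c/q), so the effective utilization becomes ρ_eff ≈ ρ·(1 + c/q) with ρ the base load, and stability (ρ_eff < 1) yields a lower bound on the usable time-slice.

```python
def effective_utilization(rho, q, c):
    """Overhead-aware load under the continuous approximation
    ceil(x/q) ~ x/q: each slice of length q adds a fixed switching
    overhead c, so the server spends (q + c) per q units of service."""
    return rho * (1.0 + c / q)

def min_timeslice(rho, c):
    """Smallest time-slice q keeping effective_utilization(rho, q, c) < 1:
    rho * (1 + c/q) < 1  =>  q > c * rho / (1 - rho)."""
    assert 0 < rho < 1, "the base load must itself be stable"
    return c * rho / (1.0 - rho)
```

For example, at ρ = 0.8 with overhead c = 0.01, the bound is q > 0.04; at exactly q = 0.04 the effective utilization reaches 1.0, the blow-up point.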
Abstract: Scheduling can require analyzing not only the total computation time of a task, but also the remaining execution time R(Δt) after accumulated time Δt. Often a software program's execution time is characterized by a single value (mean). When scheduling is ...
In systems where customer service demands are only known probabilistically, there is very little to distinguish between jobs. Therefore, no universal optimum scheduling strategy or algorithm exists. If the distribution of job times is known, then the residual time (the expected time remaining for a job), based on the service it has already received, can be calculated. In a detailed discrete event simulation, we have explored the use of this function for increasing the probability that a job will meet its deadline. We have tested many different distributions with a wide range of variance (σ²) and shape, four of which are reported here. We compare with RR and FCFS, and find that for all distributions studied our algorithm performs best. We also studied the use of two slow servers versus one fast server, and found that they provide comparable performance; in a few cases the double-server system does better.
Highly varying job demands generally consist of many short jobs mixed with several long jobs. In principle, without foreknowledge of the exact service times of individual jobs, processor sharing is an effective theoretical strategy for handling such demands. In practice, however, processor sharing must be implemented by time-slicing, which incurs non-negligible job-switching overhead for small time-slices. A research issue is then how time-slicing performs if large time-slices have to be used. In this paper, we investigate several round-robin variants, and the results from Discrete Event Simulation show that, by favoring newly arrived jobs, the performance of round-robin with large time-slices can be better than that of ideal processor sharing. The simple immediate-preemption scheme, which serves new jobs immediately by preempting the currently active job, is shown to further improve the performance of round-robin.
One of the most cost-effective fault-tolerant schemes in commercial computing systems is the use of redundant multi-threading execution. These systems detect faults by redundantly executing more than one (usually two) instances of a program and comparing the architected states of these instances. In such systems, although a fault might decrease overall performance, it will not affect the correctness of the execution. In this paper, through simulations, we investigate the fault effects of redundant multi-threading execution by injecting transient faults into redundant computing systems. The results are presented and discussed here.
Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a greater real estate of chip area. Also, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three different categories, namely cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of those fault-tolerant techniques with a fixed typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmark. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three different methods.
In general, fault-tolerant cache schemes can be classified into three different categories, namely cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This paper re-examines each of those fault-tolerant techniques with a fixed typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmark. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three different methods.
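As an illustration of the first category, here is a toy direct-mapped cache with cache-line disabling (our sketch, not the SPEC2000-driven simulator used in the paper): lines flagged as faulty are simply never filled, so every access that maps to a disabled line is a forced miss served from the next memory level.

```python
class DisablingCache:
    """Toy direct-mapped cache modelling cache-line disabling:
    faulty lines are removed from use rather than repaired."""
    def __init__(self, n_lines, faulty=()):
        self.n = n_lines
        self.faulty = set(faulty)        # indices of disabled lines
        self.tags = [None] * n_lines     # stored tag per line
        self.hits = self.misses = 0

    def access(self, addr):
        """Return True on a hit, False on a miss."""
        line = addr % self.n
        tag = addr // self.n
        if line in self.faulty:          # disabled line: always a miss
            self.misses += 1
            return False
        if self.tags[line] == tag:
            self.hits += 1
            return True
        self.tags[line] = tag            # fill on miss
        self.misses += 1
        return False
```

Repeated accesses to an address mapping to a healthy line hit after the first fill, while an address mapping to a disabled line misses every time; the extra misses are the performance cost this category trades for tolerating the defect.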
Papers by Sarah Tasneem