
Open-Source Simulators for Cloud Computing:

Comparative Study and Challenging Issues

Wenhong Tian a,∗, Minxian Xu a, Aiguo Chen b,∗, Guozhong Li a, Xinyang Wang a, Yu Chen a

a School of Information and Software Engineering, University of Electronic Science and Technology of China, China
b School of Computer Science and Engineering, University of Electronic Science and Technology of China, China

∗ Corresponding author. Email addresses: [email protected] (Wenhong Tian), [email protected] (Minxian Xu), [email protected] (Aiguo Chen), [email protected] (Guozhong Li), [email protected] (Xinyang Wang), [email protected] (Yu Chen)

Accepted for publication in Simulation Modelling Practice and Theory (SIMPAT), June 1st, 2015

Abstract
Resource scheduling in Infrastructure as a Service (IaaS) is one of the keys to large-scale Cloud applications. Extensive research on all related issues in real environments is extremely difficult because it requires developers to consider network infrastructure and environments that may be beyond their control; in addition, network conditions cannot be controlled or predicted. Evaluating the performance of workload models and Cloud provisioning algorithms in a repeatable manner under different configurations is also difficult. For these reasons, simulators have been developed. To better understand and apply the state of the art in Cloud computing simulators, and to improve them, we study four well-known open-source simulators. They are compared in terms of architecture, modeling elements, simulation process, performance metrics and scalability in performance. Finally, a few challenging issues are outlined as future research trends.
Keywords: Cloud Computing, Data centers, Simulators for Cloud computing, Resource Scheduling

1. Introduction

Cloud computing is developed based on various recent advancements in virtualization, Grid computing, Web computing, utility computing and related technologies. Cloud computing provides both platforms and applications on demand through the Internet or an intranet [19]. Some of the key benefits of Cloud computing include the hiding and abstraction of complexity, virtualized resources and efficient use of distributed resources. Some examples of emerging Cloud computing platforms are Google App Engine [15], IBM Blue Cloud [16], Amazon EC2 [6], and Microsoft Azure [20]. Cloud computing allows the sharing, allocation and aggregation of software, computational and storage network resources on demand. Cloud computing is still considered to be in its infancy, as there are many challenging issues to be resolved [19][1][23]. Youseff et al. [18] establish a detailed ontology dissecting the Cloud into five main layers from top to bottom: Cloud application (SaaS), Cloud software environment (PaaS), Cloud software infrastructure (IaaS), software kernel and hardware (HaaS), and illustrate their interrelations as well as their inter-dependency on preceding technologies.

A Cloud data center can be a distributed network structure composed of many computing nodes (such as servers), storage nodes and network devices. Each node is formed by a series of resources such as CPU, memory and network bandwidth, and each resource has its corresponding properties. There are many different types of resources for Cloud providers. The definitions and models used in this paper are intended to be general enough to be applied by a variety of Cloud providers. In this paper, we focus on Infrastructure as a Service (IaaS) in Cloud data centers.

In a traditional data center, applications are tied to specific physical servers that are often over-provisioned to deal with workload surges and unexpected failures [5]. Such configuration rigidity makes data centers expensive to maintain, with wasted energy and floor space, low resource utilization and significant management overheads. With virtualization technology, today's Cloud data centers become more flexible, secure and capable of on-demand allocation.

One key technology that plays an important role in Cloud data centers is resource scheduling. One of the challenging scheduling problems in Cloud data centers is to consider the allocation and migration of reconfigurable virtual machines together with the integrated features of the hosting physical machines.
It is extremely difficult to conduct extensive research on all these problems on real platforms, because application developers cannot control or manage the network environment; what is more, network conditions cannot be predicted or controlled.

Research on dynamic and large-scale distributed environments can instead be carried out by building a data center simulation system that supports visualized modeling and simulation of large-scale applications in Cloud infrastructures. A data center simulation system can describe the application workload, which includes user information, data center position, the number of users and data centers, and the amount of resources in each data center. Using this information, the data center simulation system generates requests and allocates these requests to virtual machines.

By using a data center simulation system, application developers can evaluate suitable strategies such as distributing data center resources reasonably, selecting data centers that match special requirements, improving resource utilization and load balancing, reducing total energy consumption, reducing costs and so on. We first look at some closely related work.

1.1. Related Work

There is quite intensive research on cloud simulators. In this paper, we concentrate on open-source simulators to which we have easier access. Dumitrescu and Foster [8] introduce the GangSim tool for grid scheduling. Buyya et al. introduce the GridSim [24] toolkit for modeling and simulation of distributed resource management for grid computing. Calheiros et al. [25] introduce modeling and simulation of Cloud computing environments at the application level; a few simple scheduling algorithms such as time-shared and space-shared are discussed and compared. Sakellari et al. [14] present a survey of mathematical models, simulation approaches and testbeds in cloud computing, which aims to enable researchers to find suitable modelling approaches and simulation implementations. Ikram et al. [2] introduce a novel cloud resource management service model whose simulation-based evaluations mainly focus on dynamic service composition applications. Huu et al. [27] propose a scheme for modeling and experimenting with combined smart sleep and power scaling algorithms in energy-aware data center networks. Guerout et al. [26] provide a survey on energy-aware simulation techniques with DVFS (Dynamic Voltage and Frequency Scaling). CloudAnalyst [7] aims to achieve optimal scheduling among user groups and data centers based on the current configuration. Both CloudSim and CloudAnalyst are based on SimJava [13] and GridSim [24], which treat a Cloud data center as a large resource pool and consider application-level workloads. Kliazovich et al. [10] propose an energy-aware simulation environment named GreenCloud for Cloud data centers at the packet level. Nunez et al. [4] introduce a new cloud infrastructure simulator named iCanCloud written in C++ and compare its performance with CloudSim. Tian et al. [32] propose CloudSched, a novel lightweight simulation tool for VM scheduling with lifecycles in Cloud data centers.

1.2. Comparative Guideline of Open-Source Cloud Simulators

Cloud simulators can be divided into various categories according to their features. In this section, we give a brief comparison across different categories by extending the comparison categories in [9]. Open-source simulators are selected because we can study their source code in detail, develop new algorithms and improve them if necessary. The four open-source simulators, namely CloudSim, iCanCloud, GreenCloud and CloudSched, are representative of many related simulators, and we study their architecture design, modeling elements, simulation process, performance metrics and scalability. These simulators have common features, such as their architecture, modeling elements and simulation process, as well as their own characteristics, such as focusing on different service layers and using different performance metrics. CloudSim is a well-known simulator for cloud computing; it can be extended easily, but currently it does not consider parallel experiments or lifecycles of VMs. iCanCloud implements parallel experiments but does not consider energy consumption or VM migration. GreenCloud models detailed energy consumption for different physical components. CloudSched can model the lifecycle of requests and provides different metrics for load balance, energy efficiency, utilization, etc. The four open-source cloud data center simulators (CloudSim, GreenCloud, iCanCloud, CloudSched) are compared in Table 1.

Platform: The platform a simulator is built on binds it to some specific features. CloudSim and CloudSched are both implemented in Java, so they can be executed on any machine with a JVM installed; however, being built on GridSim and SimJava, CloudSim is heavy to execute. GreenCloud is an extension of the NS2 network simulator and is a packet-level simulator. iCanCloud is based on OMNeT, which can simulate in-depth physical-layer entities.
Table 1: Comparison Guideline

Items | CloudSim [25] | GreenCloud [10] | iCanCloud [4] | CloudSched [32]
Platform | any | NS2 | OMNeT, MPI | any
Programming Language | Java | C++/OTcl | C++ | Java
Availability | Open Source | Open Source | Open Source | Open Source
Graphical Support | Limited (via CloudAnalyst) | N | N | Y
Physical Models | N | Limited (via plug-in) | Y | Y
Models for public cloud | N | N | Y | Y
Parallel experiments | N | N | Y | N
Energy Consumption | Y | Y | N | Y
Migration algorithms | Y | N | N | Y
Simulation time | seconds | tens of minutes | seconds | seconds
Memory space | small | large | medium | small

Language: The languages in which the simulators are implemented are tied to their platforms. CloudSim and CloudSched are implemented in Java, while GreenCloud requires a combination of C++ and OTcl, and iCanCloud is written in C++.

Availability: The four simulators under discussion are free and open source, available for public download.

Graphical support: The original CloudSim provides no graphical interface; a graphical interface is supported through CloudAnalyst. However, full support is not provided in CloudAnalyst: only the configurations and results can be presented. We therefore label it as limited, and the same reasoning applies to GreenCloud. CloudSched and iCanCloud allow the whole scheduling process to be shown in their interfaces.

Physical server models: The level of detail of the simulated components reflects the precision of the simulator and the validity of the results. iCanCloud and CloudSched provide detailed simulation of the physical counterparts involved in scheduling, which can trace resource utilization in physical servers and information about rejected requests. GreenCloud needs a plug-in for this kind of simulation, and with it can even capture packet loss. CloudSim treats the resource pool as a whole.

Models for public cloud providers: Amazon, as a cloud provider, has published its VM models, and by using these specifications better scheduling effects can be obtained. Both iCanCloud and CloudSched use the model suggested by Amazon, in which physical machine and virtual machine specifications are pre-defined.

Parallel experiments: Parallel experiments combine more than one machine to process the tasks together. Support for running experiments on multiple machines together is a main feature of iCanCloud, and that feature is not present in the other three simulators.

Energy consumption model: An energy consumption model enables a simulator to compare the energy efficiency of different scheduling strategies and algorithms. Except for iCanCloud, the other three simulators support energy consumption modeling. The energy consumption model implemented in GreenCloud can trace every element in a data center. A DVFS energy consumption model is provided in CloudSim through extension tools. CloudSched provides energy consumption metrics for different scheduling algorithms.

Migration algorithms: Migration algorithms are proposed to satisfy specific objectives, for instance dealing with overload in load-balancing applications, reducing the total number of running machines to save energy, improving resource utilization and so on. CloudSim and CloudSched support migration algorithms, while the other two simulators do not.

Scalability: This mainly concerns how fast the simulator runs (simulation time) and how much memory it consumes as the total number of requests increases, especially to a large amount. We provide a comparison in the performance evaluation.

In summary, CloudSim, GreenCloud, iCanCloud and CloudSched are open source and available for download. CloudSim and GreenCloud offer no graphical interface support, while CloudSched and iCanCloud both provide user interfaces. CloudSched and iCanCloud support physical server models, and GreenCloud supports physical models with a plug-in. In addition, CloudSched and iCanCloud offer models for public cloud providers. Parallel experiments are supported only in iCanCloud, but iCanCloud is also the only one that does not support an energy consumption model. CloudSim and CloudSched implement migration algorithms while the others do not. In the following sections, we provide an in-depth comparative study in terms of architecture design, simulation process, modeling elements, performance metrics and scalability in performance.
The remainder of this paper is organized as follows: sections 2 to 6 give detailed comparisons of CloudSim, GreenCloud, iCanCloud and CloudSched from different views. Section 2 compares the architecture and main features of these simulators; section 3 compares how elements are modeled in the different simulators; section 4 presents the basic simulation process and compares minor differences among the simulators; section 5 lists the metrics in use; section 6 shows how performance is evaluated in these simulators; finally, conclusions about cloud simulators are given.

2. Comparison 1: Architecture and Main Features

In this section, we discuss the simulators' architectures.

Figure 1: The Architecture of CloudSim [25]

Fig. 1 shows the multi-layered design and implementation of CloudSim. At the fundamental layer, management of applications, hosts of VMs and dynamic system states is provided. By extending the core VM provisioning functionality, a Cloud provider can also study the efficiency of different strategies at this layer. At the top layer, the User Code represents the basic entities for hosts; by extending entities at this layer, a developer can enable the application to generate requests with a variety of approaches and configurations, model cloud scenarios, implement custom applications, etc.

Figure 2: Three-tier data center architecture of GreenCloud [10]

The GreenCloud structure is mapped onto the three-tier data center architecture shown in Fig. 2, which is the most common architecture. Basically, the architecture is composed of an access layer, an aggregation layer and a core layer. Servers are placed at the access layer and are responsible for task execution. Switches and links form the interconnection fabric that delivers workload to any of the computing servers for execution at the aggregation layer, while the core layer constitutes the core of the network fabric. On top of this structure, GreenCloud models workloads that can represent various cloud user services.

Figure 3: The Architecture of iCanCloud [4]

iCanCloud adopts the architecture shown in Fig. 3, which is also a layered architecture. The bottom of the architecture consists of the hardware models layer, which basically contains the models in charge of modeling the hardware parts of a system. A set of system calls is connected with the hardware models layer in the basic systems API module; in this module, the system calls are provided as an Application Programming Interface for all applications running in a VM. The next layer up is a VM repository, which contains a collection of VMs previously defined by the user. The cloud hypervisor is at the upper layer, managing all produced jobs and the instances of VMs where those jobs are executed. The top of the architecture contains a definition of the entire cloud system.

CloudSched is implemented under a simplified layered architecture, as shown in Fig. 4.
Figure 4: A Simplified Layered Architecture of CloudSched [32]

From top to bottom: at the top layer there is an interface for a user to select resources and send requests; basically, a few types of virtual machines are preconfigured for a user to choose from. The next layer down is the core scheduling layer: once user requests are generated, they are forwarded to this level, which is responsible for choosing appropriate data centers and physical machines based on the user requests. CloudSched provides support for modeling and simulation of Cloud data centers, especially for allocating virtual machines (consisting of CPU, memory, storage, bandwidth, etc.) to suitable physical machines. This layer can manage a large number of Cloud data centers consisting of thousands of physical machines, and different scheduling algorithms can be applied in different data centers based on customers' characteristics. At the bottom layer are the Cloud resources, which include physical machines and virtual machines, both consisting of certain amounts of CPU, memory, storage and bandwidth.

In summary, from the architecture view, the compared simulators all adopt a layered architecture whose layers can be divided into three main parts, each responsible for some basic functions. At the bottom layer, these simulators provide management of servers (in GreenCloud and iCanCloud) or hosts of VMs (in CloudSim and CloudSched). The middle layer is in charge of scheduling the tasks (comparing the efficiency of different algorithms or strategies). At the top layer, interfaces for users are offered, including the configurations or scenarios that can be set by the users in all these simulators. Beyond these basic functions, some extra functions are added in the different simulators, while the basic ones are quite similar.

3. Comparison 2: Building Blocks in Simulators

In this section, we discuss the building blocks, i.e., the elements modeled in each simulator.

3.1. Modeling Cloud Data Centers

In CloudSim and CloudAnalyst, the infrastructure-level services related to the clouds are simulated by modeling the data center entity. In CloudSim, an entity represents an instance of a component, like a data center or a host. The data center entity manages a number of host entities, and these hosts can be assigned to one or more VMs based on an allocation policy. A host represents a physical computing server in a Cloud, with processing capability including CPU, memory, storage, etc. In the data center, both hosts and VMs can be managed during their life cycles.

In GreenCloud, elements are modeled based on the multi-tier data center architecture. Servers, switches and links, and workloads constitute the basic elements of GreenCloud. Servers are responsible for task execution, quite similar to the servers in CloudSim, and workloads can be viewed as the VM requests (tasks) of the CloudSim simulator. The switches and links form the interconnection fabric that delivers workload to any of the computing servers for execution in a timely manner. While VMs come in a variety of specifications in CloudSim or CloudSched, workloads in GreenCloud are divided into three types: Computationally Intensive Workloads, Data Intensive Workloads and Balanced Workloads.

In iCanCloud, the element model has some differences. The main difference lies in the modeling of servers: the hardware model represents the resources provided in the simulator, and VM instances take the place of the servers of the other simulators. A data center represents a set of virtual machines, and the VMs are responsible for executing the scheduled jobs, which are lists of tasks submitted by users.

In CloudSched, the core hardware infrastructure related to the Cloud is modeled with a data center component that handles VM requests. The data center component is mainly composed of a set of hosts, which are responsible for managing VM activity during the VMs' life cycles. A host is a component that represents a physical computing node in a Cloud: it is assigned a pre-configured processing capability (expressed as computing power in CPU units), memory, bandwidth, storage, and a scheduling policy for allocating processing cores to virtual machines. A VM can be represented in a similar way to a host.
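All four simulators ultimately describe a data center as a collection of hosts characterized by a few capacity dimensions. The following Java sketch illustrates that shared abstraction; the class and field names (Host, DataCenter, cpuUnits, etc.) are illustrative assumptions and are not taken from any of the simulators' actual APIs.

```java
// Minimal host/data-center abstraction shared (conceptually) by the four simulators.
// Names and fields are illustrative, not any simulator's real API.
import java.util.ArrayList;
import java.util.List;

class Host {
    final int id;
    final double cpuUnits;   // total CPU capacity in normalized units (e.g., EC2 Compute Units)
    final double memGB;      // memory in GB
    final double bwGB;       // network bandwidth, labeled BW in Table 2/3
    final double storageGB;  // local storage
    double usedCpu, usedMem, usedBw, usedStorage;

    Host(int id, double cpuUnits, double memGB, double bwGB, double storageGB) {
        this.id = id; this.cpuUnits = cpuUnits; this.memGB = memGB;
        this.bwGB = bwGB; this.storageGB = storageGB;
    }

    double cpuUtilization() { return usedCpu / cpuUnits; }
}

class DataCenter {
    final List<Host> hosts = new ArrayList<>();

    void addHost(Host h) { hosts.add(h); }

    // Average CPU utilization over all hosts, weighted by host CPU capacity.
    double averageCpuUtilization() {
        double used = 0, total = 0;
        for (Host h : hosts) { used += h.usedCpu; total += h.cpuUnits; }
        return total == 0 ? 0 : used / total;
    }
}
```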
3.2. Modeling Virtual Machine Allocation

VM allocation is the process of creating VM instances on hosts that match the critical resources, configurations and requirements of the Cloud provider. With virtualization technologies, Cloud computing provides flexibility in resource allocation. For example, a PM (physical machine) with two processing cores can host two or more VMs on each core concurrently. VMs can be allocated only if the total amount of processing power used by all VMs on a host does not exceed the capacity available on that host.

Taking the widely used example of Amazon EC2 [6], we show that a uniform view of different types of VMs is possible. Table 2 shows eight types of virtual machines from Amazon EC2 online information. The speed per CPU core is measured in EC2 Compute Units, each C.U. being equivalent to a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. We can therefore form three types of PMs (or PM pools) based on compute units. In a real Cloud data center, for example, a physical machine with 2×68.4 GB memory, 16 cores × 3.25 units and 2×1690 GB storage can be provided. In this or a similar way, a uniform view of different types of virtual machines can be formed. This kind of classification provides a uniform view of virtualized resources for heterogeneous virtualization platforms (e.g., Xen, KVM, VMware) and brings great benefits for virtual machine management and allocation. Customers only need to select suitable types of VMs based on their requirements. There are eight types of VMs in EC2 as shown in Table 2, where MEM is memory in GB and CPU is normalized to units (each CPU unit is equal to a 1 GHz 2007 Intel Pentium processor [6]). Three types of PMs are considered for the heterogeneous case, as shown in Table 3.

Table 2: 8 types of virtual machines (VMs) in Amazon EC2

MEM (GB) | CPU (units) | BW (GB) | VM type
1.7 | 1 (1 core x 1 unit) | 160 | 1-1(1)
7.5 | 4 (2 cores x 2 units) | 850 | 1-2(2)
15.0 | 8 (4 cores x 2 units) | 1690 | 1-3(3)
17.1 | 6.5 (2 cores x 3.25 units) | 420 | 2-1(4)
34.2 | 13 (4 cores x 3.25 units) | 850 | 2-2(5)
68.4 | 26 (8 cores x 3.25 units) | 1690 | 2-3(6)
1.7 | 5 (2 cores x 2.5 units) | 350 | 3-1(7)
7.0 | 20 (8 cores x 2.5 units) | 1690 | 3-2(8)

Table 3: 3 types of physical machines (PMs) in Amazon EC2

CPU (units) | MEM (GB) | BW (GB) | Pmin | Pmax
16 (4 cores x 4 units) | 30 | 3380 | 210 W | 300 W
52 (16 cores x 3.25 units) | 136.8 | 3380 | 420 W | 600 W
40 (16 cores x 2.5 units) | 14 | 3380 | 350 W | 500 W

CloudSim supports the development of custom application service models that can be deployed within a VM, and its users are required to extend the core Cloudlet object to implement their application services. To be exact, VMs or jobs in CloudSim and iCanCloud can only be allocated to hosts that have enough resources, such as memory and storage.

Workloads in GreenCloud require the complete satisfaction of their two main requirements, computational and communicational, which define the amount of computing that has to be executed before a given deadline and the size of the data transfers that must be performed prior to, during, and after the workload execution.

Currently, CloudSched implements dynamic load-balancing, utilization maximization and energy-efficient scheduling algorithms. Other algorithms, such as reliability-oriented and cost-oriented ones, can be applied as well.
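The allocation rule above — a VM fits only if every requested dimension still fits into the host's remaining capacity — can be sketched as follows, reusing the Host class from the example in Section 3.1. The VM figures mirror a few rows of Table 2; the class and method names are again illustrative, not any simulator's real interface.

```java
// Feasibility check for placing a VM on a host: every requested dimension must fit
// into the remaining capacity. VM figures follow Table 2 (illustrative subset only).
class VmType {
    final double cpuUnits, memGB, bwGB;
    VmType(double cpuUnits, double memGB, double bwGB) {
        this.cpuUnits = cpuUnits; this.memGB = memGB; this.bwGB = bwGB;
    }
    // Three of the eight EC2-style types from Table 2: (CPU units, MEM GB, BW GB).
    static final VmType TYPE_1_1 = new VmType(1.0, 1.7, 160);
    static final VmType TYPE_1_2 = new VmType(4.0, 7.5, 850);
    static final VmType TYPE_2_1 = new VmType(6.5, 17.1, 420);
}

class Allocator {
    // Returns true if the host can still accommodate the VM in all dimensions.
    static boolean fits(Host h, VmType vm) {
        return h.usedCpu + vm.cpuUnits <= h.cpuUnits
            && h.usedMem + vm.memGB   <= h.memGB
            && h.usedBw  + vm.bwGB    <= h.bwGB;
    }

    static boolean allocate(Host h, VmType vm) {
        if (!fits(h, vm)) return false;   // reject if any dimension would be exceeded
        h.usedCpu += vm.cpuUnits;
        h.usedMem += vm.memGB;
        h.usedBw  += vm.bwGB;
        return true;
    }
}
```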
3.3. Modeling Customer Requirements

CloudSim models customer requirements by deploying VM instances, and users can extend the core Cloudlet object to implement their application services. A VM instance may require resources such as memory, storage and bandwidth on the host to enable its allocation, which means assigning specific CPU cores and amounts of memory and bandwidth to specific VMs.

GreenCloud models customer requirements by configuring the workload arrival rate/pattern to the data center following a predefined distribution (such as an exponential distribution), or by generating requests from trace log files. In addition, different random distributions can be configured to trigger the arrival time of a workload as well as to specify its size. This flexibility enables users to adopt various choices to investigate network conditions, traffic load and the influence on different switching components. Moreover, trace-driven workload generation makes simulating the workload arrival process more realistic.

In iCanCloud, VMs are the building blocks for creating cloud systems. In both the application repository and the VM repository, collections of predefined models can be customized by the user. Those models are used to configure the corresponding jobs that will be executed in a specific VM instance in the system. New application models can also be easily added to the system.

CloudSched models customer requirements by randomly generating different types of VMs and allocating the VMs based on appropriate scheduling algorithms in different data centers. The arrival process, service time distribution and required capacity distribution of requests can be generated according to random processes; the arrival rate of customers' requests can be controlled, and the distribution over different types of VM requirements can be set as well. A real-time VM request can be represented as an interval vector: vmID(VM typeID, start-time, end-time, requested capacity). For example, vm1(1, 0, 6, 0.25) indicates that the request ID is 1, the virtual machine is of type 1 (corresponding to integer 1), the start-time is 0 and the end-time is 6 (here 6 means the end-time is the sixth slot). Other requests can be represented in similar ways. Fig. 5 shows the life cycles of virtual machine allocation in a slotted time window using two PMs, where PM#1 hosts vm1, vm2 and vm3 while PM#2 hosts vm4, vm5 and vm6. Notice that at any slot, the total capacity constraint of a PM has to be met by all VMs allocated on it, and each VM has a start-time and end-time constraint.

Figure 5: An Example of User Requests and Allocation

In summary, in order to satisfy the flexibility and extensibility of customer requirements, these simulators all provide predefined configurations as well as interfaces for extension. CloudSim can extend the core Cloudlet object; GreenCloud can generate customer requests from trace log files; iCanCloud can modify the application models in the application and VM repositories; CloudSched can change the VM and PM specifications in its configuration files.
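CloudSched's interval representation vmID(VM typeID, start-time, end-time, requested capacity) and the per-slot capacity constraint of Fig. 5 translate directly into code. The sketch below is illustrative only: the class names are ours and we assume the end slot is exclusive, so it mirrors the idea rather than CloudSched's actual implementation.

```java
// A request occupies `capacity` (a share of one PM, e.g. 0.25) during slots [start, end).
import java.util.List;

class VmRequest {
    final int id, typeId;
    final int start, end;      // slot indices; end assumed exclusive
    final double capacity;     // normalized share of one PM
    VmRequest(int id, int typeId, int start, int end, double capacity) {
        this.id = id; this.typeId = typeId; this.start = start; this.end = end; this.capacity = capacity;
    }
    boolean activeAt(int slot) { return slot >= start && slot < end; }
}

class SlotCapacityCheck {
    // The total capacity used on a PM must stay within 1.0 in every slot (Fig. 5 constraint).
    static boolean feasible(List<VmRequest> vmsOnPm, int horizon) {
        for (int slot = 0; slot < horizon; slot++) {
            double load = 0;
            for (VmRequest r : vmsOnPm) if (r.activeAt(slot)) load += r.capacity;
            if (load > 1.0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // vm1(1, 0, 6, 0.25) from the text: type 1, slots 0..6, a quarter of a PM.
        VmRequest vm1 = new VmRequest(1, 1, 0, 6, 0.25);
        VmRequest vm2 = new VmRequest(2, 2, 2, 8, 0.5);
        System.out.println(SlotCapacityCheck.feasible(List.of(vm1, vm2), 10)); // true
    }
}
```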
4. Comparison 3: Simulation Process

Generally, the simulation process for cloud data centers can be divided into four main parts: 1) generating customer requests; 2) initiating data centers; 3) defining the allocation policy; 4) collecting and outputting results. The simulators discussed in this paper all have these four parts, though some differences exist in how the basic parts are extended.

Generating customer requests: Requests are generated in this phase and prepared to be allocated. In different simulators, the request generation approaches may vary, and the preparation process before request allocation also has minor differences. Requests in CloudSim and CloudSched are generated as VM instances and put into different queues in different phases; for example, a waiting queue holds requests waiting to be executed. Workloads are produced in GreenCloud with sizes following an exponential distribution. Jobs in iCanCloud can be submitted by the user or by a predefined model as a list and are then added to the waiting queue to be executed.

Initiating data centers: In this phase, data centers are started to provide resources. The discussed simulators are quite similar in initializing cloud data centers: they initialize the servers/hosts to offer resources like CPU, memory and storage. Note that the servers/hosts may be geographically separated, i.e., located in different data centers.

Defining allocation policies: The allocation policy describes the scheduling process, including when and how to allocate a specific request to a specific server/host. The allocation policy is tightly related to the goal of scheduling; for instance, load balancing and energy saving may use different allocation policies. In CloudSim and iCanCloud, a First Come First Served (FCFS) policy is implemented as a basic choice. CloudSched provides several load-balancing policies to compare performance, and GreenCloud contains DVFS (Dynamic Voltage and Frequency Scaling) policies to evaluate energy-saving effects.

Collecting and outputting results: After the scheduling process is completed, results are gathered to evaluate the performance of a policy. Except for CloudSim, the simulators present part of the simulation results in their user interfaces. Similarly, the evaluated indices vary with the scheduling goals. The comparison indices and typical outputs are introduced in the following sections.
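The four phases above can be strung together into a tiny driver. The sketch below reuses the VmRequest and SlotCapacityCheck classes from the previous example and uses a simple first-fit rule as a stand-in allocation policy; it mirrors the shared workflow rather than any particular simulator's API.

```java
// Skeleton of the common simulation workflow: generate requests, initialize the
// data center, apply an allocation policy, then collect results. Illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SimulationSkeleton {
    public static void main(String[] args) {
        Random rnd = new Random(42);

        // 1) Generate customer requests (random capacity shares and lifetimes here).
        List<VmRequest> requests = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            int start = rnd.nextInt(50);
            requests.add(new VmRequest(i, 1 + rnd.nextInt(8), start, start + 1 + rnd.nextInt(20),
                                       0.125 * (1 + rnd.nextInt(4))));
        }

        // 2) Initiate the data center: a pool of identical PMs, each with capacity 1.0 per slot.
        int pmCount = 20, horizon = 80;
        List<List<VmRequest>> pms = new ArrayList<>();
        for (int p = 0; p < pmCount; p++) pms.add(new ArrayList<>());

        // 3) Allocation policy: first PM whose slot-by-slot capacity still fits (first fit).
        int rejected = 0;
        for (VmRequest r : requests) {
            boolean placed = false;
            for (List<VmRequest> pm : pms) {
                pm.add(r);
                if (SlotCapacityCheck.feasible(pm, horizon)) { placed = true; break; }
                pm.remove(pm.size() - 1);   // undo and try the next PM
            }
            if (!placed) rejected++;
        }

        // 4) Collect and output results.
        System.out.println("rejected requests: " + rejected);
    }
}
```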
5. Comparison 4: Performance Metrics

For different scheduling objectives there are different performance metrics. In this section, we discuss some common metrics adopted in cloud simulators, for example for utilization maximization, load balancing and energy efficiency; other metrics for different objectives can easily be extended from these. Note that the four simulators use quite different metrics; here we just try to cover the metrics that are applied in the four simulators. Table 4 summarizes the metric names, the metric objectives and the simulators that adopt each metric.

5.1. Metrics for Maximizing Resource Utilization

In the following, we first review two metrics for maximizing resource utilization; these two metrics are the basis for the load balancing and energy efficiency metrics in the following subsections.

(1). Average resource utilization. The average utilization of CPU, memory, hard disk and network bandwidth can be computed, and an integrated utilization of all these resources can also be used.

(2). The total number of PMs used. It is closely related to the average and overall utilization of a Cloud data center.

5.2. Metrics for Multi-dimensional Load-Balancing

In view of the advantages and disadvantages of existing metrics for resource scheduling [5][28][21][31], integrated measurements of the total imbalance level of a Cloud data center and of each server have been developed for load-balancing strategies [33]. The following parameters are considered:

(1). Average CPU utilization (CPU_i^U) of a single CPU i: for example, if the observed period is one minute and CPU utilization is recorded every 10 seconds, then CPU_i^U is the average of the six recorded values of CPU i. This metric represents the average load on a single CPU during a period of observation.

(2). Average utilization of all CPUs in a Cloud data center: let CPU_i^n be the total number of CPUs of server i; then the average utilization of all CPUs is

CPU_u^A = \frac{\sum_i^N CPU_i^U \, CPU_i^n}{\sum_i^N CPU_i^n}    (1)

where N is the total number of physical servers in the Cloud data center. Similarly, the average utilization of memory and network bandwidth of server i, and of all memories and all network bandwidth in the Cloud data center, can be defined as MEM_i^U, NET_i^U, MEM_u^A and NET_u^A respectively.

(3). Integrated load imbalance value (ILB_i) of server i: variance is widely used in statistics as a measure of how far a set of numbers is spread out. Using variance, an integrated load imbalance value ILB_i of server i is defined as

ILB_i = \frac{(Avg_i - CPU_u^A)^2 + (Avg_i - MEM_u^A)^2 + (Avg_i - NET_u^A)^2}{3}    (2)

where

Avg_i = (CPU_i^U + MEM_i^U + NET_i^U)/3    (3)

ILB_i can be used to indicate the load imbalance level when comparing the utilization of CPU, memory and network bandwidth of a single server.

(4). The imbalance value of all CPUs, memories and network bandwidth: using variance, the imbalance value of all CPUs in a data center is defined as

IBL_{CPU} = \sum_i^N (CPU_i^U - CPU_u^A)^2    (4)

Similarly, the imbalance values of memory (IBL_{mem}) and network bandwidth (IBL_{net}) can be calculated. The total imbalance value of all servers in a Cloud data center is then given by

IBL_{tot} = \sum_i^N ILB_i    (5)

(5). Average imbalance value of a physical server: the average imbalance value of a physical server is defined as

IBL_{avg}^{PM} = IBL_{tot}/N    (6)

where N is the total number of servers. As its name suggests, this value can be used to measure the average imbalance level of all physical servers.

(6). Average imbalance value of a Cloud data center (CDC): the average imbalance value of a Cloud data center is defined as

IBL_{avg}^{CDC} = (IBL_{CPU} + IBL_{mem} + IBL_{net})/N    (7)

(7). Average running time: the average running time for processing the same amount of tasks can be compared between different scheduling algorithms.

(8). Makespan: in CloudSched, it is defined as the maximum load (or average utilization) over all PMs, while in some other simulators it is defined as the longest processing time over all PMs.

(9). Utilization efficiency: it is defined here as the minimum load on any PM divided by the maximum load on any PM.
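Once per-server utilizations are available, the imbalance metrics of Eqs. (2)-(7) are a few lines of arithmetic. The sketch below follows the formulas literally; variable names are ours, and for simplicity the data-center averages are unweighted, whereas Eq. (1) weights by the number of CPUs per server.

```java
// Load-imbalance metrics following Eqs. (2)-(7); inputs are per-server utilizations in [0,1].
public class ImbalanceMetrics {
    // cpu[i], mem[i], net[i]: average utilization of server i over the observation window.
    public static void report(double[] cpu, double[] mem, double[] net) {
        int n = cpu.length;
        // Data-center-wide averages; Eq. (1) weights by CPU count, here simplified to an unweighted mean.
        double cpuA = mean(cpu), memA = mean(mem), netA = mean(net);

        double iblTot = 0, iblCpu = 0, iblMem = 0, iblNet = 0;
        for (int i = 0; i < n; i++) {
            double avgI = (cpu[i] + mem[i] + net[i]) / 3.0;                          // Eq. (3)
            double ilbI = (sq(avgI - cpuA) + sq(avgI - memA) + sq(avgI - netA)) / 3.0; // Eq. (2)
            iblTot += ilbI;                                                          // Eq. (5)
            iblCpu += sq(cpu[i] - cpuA);                                             // Eq. (4)
            iblMem += sq(mem[i] - memA);
            iblNet += sq(net[i] - netA);
        }
        System.out.println("IBL_tot       = " + iblTot);
        System.out.println("IBL_avg (PM)  = " + iblTot / n);                         // Eq. (6)
        System.out.println("IBL_avg (CDC) = " + (iblCpu + iblMem + iblNet) / n);     // Eq. (7)
    }

    private static double mean(double[] x) {
        double s = 0; for (double v : x) s += v; return s / x.length;
    }
    private static double sq(double x) { return x * x; }
}
```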
5.3. Metrics for Energy-efficiency

(1). Energy consumption model: most of the energy consumption in data centers comes from computation, disk storage, the network and cooling systems. In [5], the authors propose a power consumption model for a blade server:

P = 14.5 + 0.2 U_{CPU} + (4.5 \times 10^{-8}) U_{mem} + 0.003 U_{disk} + (3.1 \times 10^{-8}) U_{net}    (8)

where U_{CPU}, U_{mem}, U_{disk} and U_{net} are the utilization of the CPU, memory, hard disk and network interface respectively. From this formulation it can be observed that, apart from the CPU, the other factors such as memory, hard disk and network interface have a very small impact on the total energy consumption.

In [3], the authors found that CPU utilization is typically proportional to the overall system load, and hence proposed the following power model:

P(U) = k P_{max} + (1 - k) P_{max} U    (9)

where P_{max} is the maximum power of a server, k is the fraction of power consumed when a server is idle (studies show that on average k is about 0.7), and U is the CPU utilization.

In GreenCloud, Dynamic Voltage/Frequency Scaling (DVFS) is considered, and the power consumption of an average server can be expressed as

P = P_{fixed} + P_f \times f^3    (10)

where P_{fixed} accounts for the portion of the consumed power that does not scale with the operating frequency f, while P_f is the frequency-dependent CPU power consumption.

The energy consumed by a switch and all its transceivers can be defined as

P_{switch} = P_{chassis} + n_{linecards} \cdot P_{linecard} + \sum_{r=0}^{R} n_{ports,r} \cdot P_r    (11)

where P_{chassis} is the power consumed by the switch hardware, P_{linecard} is the power consumed by an active network line card, and P_r is the power consumed by a port (transceiver) running at rate r.

In a real environment, the utilization of the CPU may change over time due to workload variability; thus the CPU utilization is a function of time, represented as u(t). The total energy consumption of a physical machine (E_i) can therefore be defined as an integral of the power consumption function over a period of time:

E_i = \int_{t_0}^{t_1} P(u(t)) \, dt    (12)

When the average utilization is adopted, u(t) = u and E_i = P(u)(t_1 - t_0).

(2). The total energy consumption of a Cloud data center: the energy consumption is computed as the sum of the energy consumed by all PMs:

E_{cdc} = \sum_{i=1}^{n} E_i    (13)

It should be noted that the energy consumption of all VMs on the PMs is included.

(3). The total number of PMs used: this is the total number of PMs used for the given set of VM requests; it is important for energy efficiency.

(4). The total power-on time of all PMs used: according to the energy consumption equation of each PM, the total power-on time is a key factor.
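With piecewise-constant utilization, the linear power model of Eq. (9) and the energy sums of Eqs. (12)-(13) also reduce to a short routine. This is a sketch under that assumption, not code taken from CloudSim, GreenCloud or CloudSched.

```java
// Energy accounting with the linear power model P(U) = k*Pmax + (1-k)*Pmax*U  (Eq. 9),
// assuming utilization is constant within each sampling interval (Eq. 12 becomes a sum).
public class EnergyModel {
    static final double K = 0.7;   // idle power fraction reported in the text

    static double power(double pmax, double u) {
        return K * pmax + (1 - K) * pmax * u;
    }

    // utilization[t] is the (constant) CPU utilization of one PM during interval t of length dtSeconds.
    static double pmEnergyJoules(double pmax, double[] utilization, double dtSeconds) {
        double e = 0;
        for (double u : utilization) e += power(pmax, u) * dtSeconds;
        return e;
    }

    // Eq. (13): total energy of the data center is the sum over all powered-on PMs.
    static double dataCenterEnergy(double[][] utilPerPm, double pmax, double dtSeconds) {
        double total = 0;
        for (double[] util : utilPerPm) total += pmEnergyJoules(pmax, util, dtSeconds);
        return total;
    }

    public static void main(String[] args) {
        double[][] util = { {0.2, 0.5, 0.8}, {0.0, 0.1, 0.9} };  // two PMs, three 1-hour intervals
        System.out.println(dataCenterEnergy(util, 300.0, 3600.0) + " J");
    }
}
```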
5.4. C/P (Cost per Task) Metric

In iCanCloud, in order to deal with the complexity added by an infrastructure following a pay-as-you-go basis, the C/P metric is defined as

C/P = C_T = \frac{C_h \, T_{exe} \, I}{i \, N_c^2} \left\lfloor \frac{T_{exe} \, I}{i \, N_{vm} \, N_c} \right\rfloor    (14)

where T_{exe} is the task execution time, and the values of I and i correspond to the whole tracing interval and the tracing interval per task, that is, the grain of the application. N_{vm} and N_c are the number of virtual machines and the number of cores per virtual machine, and C_h is the machine's usage price per hour. In this way, the best infrastructure setup is the one that produces the lowest C/P value.

5.5. Confidence Interval

Confidence intervals can be calculated for the different metrics as follows. Let x_1, x_2, x_3, ..., x_n be the calculated metrics (such as IBL_{tot} and E_{cdc} values) from n repeated simulations. Then the mean is

x_{mean} = \frac{1}{n} \sum_{i=1}^{n} x_i    (15)

the standard deviation s is

s = \sqrt{\frac{\sum_{i=1}^{n} (x_{mean} - x_i)^2}{n-1}}    (16)

and the confidence interval at the 95% confidence level (normal distribution) is given by

\left( x_{mean} - 1.96 \frac{s}{\sqrt{n}}, \; x_{mean} + 1.96 \frac{s}{\sqrt{n}} \right)    (17)
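Eqs. (15)-(17) are the standard sample mean, sample standard deviation and 95% normal-approximation interval; a direct, illustrative transcription follows.

```java
// 95% confidence interval over repeated simulation outputs (Eqs. 15-17).
public class ConfidenceInterval {
    // Returns {lower, upper} for the mean of x at 95% confidence (normal approximation).
    static double[] ci95(double[] x) {
        int n = x.length;
        double mean = 0;
        for (double v : x) mean += v;
        mean /= n;                                   // Eq. (15)

        double ss = 0;
        for (double v : x) ss += (v - mean) * (v - mean);
        double s = Math.sqrt(ss / (n - 1));          // sample standard deviation, Eq. (16)

        double half = 1.96 * s / Math.sqrt(n);       // Eq. (17)
        return new double[] { mean - half, mean + half };
    }

    public static void main(String[] args) {
        double[] iblTotRuns = { 0.042, 0.047, 0.040, 0.045, 0.044 };  // e.g., IBL_tot from 5 runs
        double[] ci = ci95(iblTotRuns);
        System.out.printf("95%% CI: [%.4f, %.4f]%n", ci[0], ci[1]);
    }
}
```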
Table 4: Metrics Comparison Guideline

Metrics | Optimization Objective | Simulators
average resource utilization | maximizing resource utilization | All Four
total number of PMs (hosts) needed | maximizing resource utilization | All Four
average CPU utilization | load balancing | All Four
average utilization of all CPUs in a cloud datacenter | load balancing | All Four
integrated load imbalance value of a server | load balancing | CloudSched
imbalance value of all CPUs | load balancing | CloudSched
average imbalance value of a physical server | load balancing | CloudSched
average imbalance value of a Cloud datacenter | load balancing | CloudSched
total simulation time | all | All Four
makespan or longest processing time | load balancing | CloudSim, CloudSched
energy consumption model | energy-efficiency | CloudSim, GreenCloud, CloudSched
total energy consumption of a Cloud data center | energy-efficiency | CloudSim, GreenCloud, CloudSched
total number of PMs used | energy-efficiency | CloudSim, GreenCloud, CloudSched
total power-on time of all PMs | energy-efficiency | CloudSim, GreenCloud, CloudSched
cost per task | C/P | iCanCloud
confidence interval | confidence interval | CloudSched

6. Comparison 5: Performance Evaluation

In this section, we discuss the performance comparison of iCanCloud with CloudSim and of CloudSim with CloudSched, with a focus on scalability. We also compare the typical outputs of all the compared simulators.

6.1. Performance Comparison of iCanCloud and CloudSim

6.1.1. Experimental Environment Settings

In the comparison between iCanCloud and CloudSim, jobs in CloudSim are modeled by configuring an input size, a processing length and an output size. The jobs in the simulation experiments have a 5 MB input size, a 30 MB output size and a 1,200,000 MI processing length. In addition, the jobs take advantage of all the available CPU capacity on the VMs, and the VMs used are rated at 9,500 MIPS. A new application model is developed in iCanCloud to execute the same functionality as in CloudSim. The experimental environment is a computer with a Core i3 CPU and 4 GB of RAM.

6.1.2. Performance Comparison

Fig. 6(a) shows the execution time comparison of CloudSim and iCanCloud: the x-axis gives the number of jobs executed in each experiment, the y-axis gives the number of VMs and their type, and the z-axis gives the time required to execute each experiment (measured in seconds) on a log scale. It is clear that both simulators need more execution time as the number of jobs increases, while increasing the number of VMs affects the two simulators differently. When the number of VMs exceeds 2500, the execution time stays stable in iCanCloud, while it is influenced directly by both the number of VMs and the number of jobs in CloudSim. In most experimental cases with 50,000 jobs or fewer, iCanCloud is faster than CloudSim, and in all tests with 250k jobs iCanCloud is faster. Across all tests, iCanCloud shows better execution-time performance than CloudSim.

Fig. 6(b) presents the memory consumption comparison of each experiment for CloudSim and iCanCloud. It can be noticed in this graph that iCanCloud requires more memory than CloudSim. Up to 1000 VMs, the amount of memory required by both simulators is similar; however, beyond 1000 VMs, the amount of memory required by iCanCloud grows much faster than that of CloudSim.

In general, iCanCloud is faster in large-scale experiments and provides better scalability, but requires more memory than CloudSim.

6.2. Performance Comparison of CloudSim and CloudSched

6.2.1. Experimental Environment Settings

The comparison between CloudSim and CloudSched is a bit more complex than the one in section 6.1. A new constructor is created with start-time and end-time parameters, which refer to the lifecycle of a request, and the file size of a request represents its required capacity. The start-time and end-time generation approaches are the same in both simulators; servers (named VMs in CloudSim) and requests (named cloudlets in CloudSim) both adopt the EC2 specifications. A List Scheduling algorithm is implemented in both simulators, in which requests are allocated to the PM with the lowest utilization. The experimental environment is a Dell computer with a Core i5 CPU and 8 GB of RAM.
Figure 6: Performance comparison of CloudSim vs. iCanCloud [4]

Figure 7: Performance comparison of CloudSim vs. CloudSched
6.2.2. Performance Comparison

Fig. 7(a) illustrates the time consumption of each experiment: the x-axis shows the number of requests in each experiment for CloudSim and CloudSched, the y-axis shows the number of PMs and the simulator they belong to, and the z-axis shows the time, in milliseconds, required to simulate each experiment. It is again apparent that larger numbers of requests and PMs need more time in both simulators. When the number of VMs is less than 10,000, CloudSched always takes less time to complete the simulation. For 25,000 and 50,000 VMs, CloudSched takes less time than CloudSim, but it takes longer when the number of PMs is more than 5,000. As the ratio of the number of VMs to the number of PMs increases, e.g. 500,000:500, CloudSim shows its strength. Note that the ratio of VMs to PMs may vary from a few to a few tens in a real cloud data center.

Fig. 7(b) shows the memory consumption of each simulation in CloudSim and CloudSched. In cases where the number of VMs is relatively small, from 1,000 to 10,000, CloudSched needs a little more memory (several megabytes) to execute the simulations. As the number of requests becomes larger, however, CloudSched uses much less memory than CloudSim; the largest difference occurs when the number of requests is 500,000. The reason is that the VM and PM models in CloudSched are simpler than the models in CloudSim.

In general, CloudSched takes less time when the ratio of the number of VM requests to the number of PMs is not too large (e.g. below 100), and uses much less memory than CloudSim.

6.3. Typical Outputs Compared

In Fig. 8, we compare the performance of four energy-conscious resource management strategies against a benchmark technique, NPA (Non-Power-Aware). In the benchmark technique, the processors are operated at the highest possible processing capacity (100%) and energy optimization is not considered during the provisioning of VMs to hosts. The first energy-conscious strategy for comparison is DVFS-enabled, which means that the VMs are resized during the simulation based on the dynamics of the CPU utilization of the host. The other strategies are extensions of the DVFS policy: the MU (minimum utilization) strategy allocates VMs on the nodes with minimal utilization; the RS (random selection) strategy randomly allocates VMs to hosts; the MC (maximum correlation) strategy allocates VMs on the hosts with maximal correlation. All these extended strategies put idle nodes into sleep mode to save total energy, and perform live migration of VMs every 5 s to adapt the allocation: a VM can be migrated to another host if this operation reduces energy consumption. In our simulation, the requests arrive randomly and we vary the number of hosts and VMs to obtain data on the energy consumption and the number of migrations. From our simulations, the MC strategy shows the best energy-efficiency, as shown in Fig. 8(b). As for the number of migrations, NPA and DVFS both have no migrations, while among the other three strategies the MC strategy requires the fewest migrations in most cases. The data shown in Figure 8 are the average of 5 repeated simulations.

Figure 9: Typical Output of GreenCloud

In Fig. 9, with GreenCloud simulations, we collect the total energy consumption under variable data center load (varying from 0.0, 0.3, 0.6 to 1.0) and a variable number of servers (varying from 100 to 400), for both DVFS-only and DNS+DVFS power management schemes. An x-axis label like (100, 0.0) represents a test with 100 servers and load 0.0, and (400, 1.0) a test with 400 servers and load 1.0. In our simulations, we set the type of workload to HPC (High Performance Computing), and the results are averaged over 5 runs of the random number generator. From the bar chart, it is clear that the total energy consumption increases as the number of servers increases. It also shows that the DVFS scheme is only slightly sensitive to the input load of the servers, while by contrast the DNS+DVFS scheme is clearly sensitive to the variable load. We also observe that with the same number of servers and identical loads, the DNS+DVFS scheme saves more energy than the DVFS scheme.
Figure 8: Typical Output of CloudSim

Figure 10: Typical Output of iCanCloud [4]

In iCanCloud, Fig. 10 illustrates the results gathered by executing the model of the Phobos application along with the results of the same application implemented on iCanCloud. The figure shows the C/P metric for the experiments, where the small instance type recommended by Amazon EC2 is used and the number of VMs and the tracing intervals are varied. From the results, we can notice that in some cases, using the same size for the interval (in years) and increasing the number of VMs causes an upward trend in the C/P metric: increasing the number of VMs yields the same execution time, which leads to an increase in the cost of that configuration. Besides that, the mathematical model does not represent the time spent performing I/O operations. Because there are still some problems with the installation of the current release of iCanCloud, we could not test more data and instead use the results from its original publication.

Figure 11: Typical CloudSched Outputs: Average Imbalance Values of a Cloud Data Center when PMs=100

In CloudSched, Fig. 11 shows the average imbalance level of a cloud data center, where five different scheduling algorithms for load balancing are compared: the ZHCJ algorithm introduced in [28], the ZHJZ algorithm [21], the LIF algorithm [30], a random (Rand) algorithm, and Round-Robin (Round).
In these simulations, the different requests are generated as follows: the total number of arrivals (requests) can be set randomly; all requests follow a Poisson arrival process and have exponentially distributed lengths; the maximum length of requests can be set; and for each set of inputs (requests), simulations are run six times, with all results shown in this paper being the average of the six runs. In these simulations, the number of PMs is fixed at 100, the number of requests varies from 250 to 1500, and a PC with a 2 GHz CPU and 2 GB memory is used for all simulations. From these simulations, we observe that the LIF algorithm outperforms the other four algorithms in terms of average imbalance values, which shows that LIF has a better load-balancing effect than the others.

7. Conclusions and Future Work

In this paper, we compare four open-source simulators, namely CloudSim, GreenCloud, iCanCloud and CloudSched. These simulators can simulate cloud data center scenarios at different layers of the cloud computing architecture. We provide detailed comparisons of these simulators in terms of their architectures, element modeling, simulation process, performance metrics and outputs. Considering the complexity of networks and the difficulty of controlling network traffic, simulators are crucial tools for research. We can see that none of them is perfect in all aspects, and there is still much work to do to improve them. One suggestion is to use different tools, or combinations of them, for different optimization objectives such as load balance and energy efficiency. For future work, there are still quite a few challenging issues for cloud simulation:

• Modeling different Cloud layers. As compared in this paper, each tool may focus on one layer. Currently there is still a lack of tools that can model all Cloud layers (IaaS, PaaS and SaaS).

• High extensibility. When new policies and algorithms are added, a modular design of the simulators can ensure that new modules can be added easily; currently, all four simulators still need improvement in this respect.

• Ease of use and repeatability. The simulators should enable users to set up simulations easily and quickly, with easy-to-use graphical user interfaces and outputs. They should accept inputs from text files and output to text files, and should be able to save simulation inputs and outputs so that modelers can repeat experiments, ensuring that repeated simulations yield identical results.

• Considering user priority. This is a real requirement, but currently the four simulators do not consider it yet. Different priority policies can be created so that users have different priorities for certain types of VMs, allowing more realistic scenarios to be considered.

• Supporting multiple or federated data centers. A simulator should be able to reflect and model the multiple or federated data centers found in the real world. CloudAnalyst provides a framework by extending CloudSim, and there is still much work to improve it.

Acknowledgment

This research is sponsored by the National Natural Science Foundation of China (NSFC) (Grant Number: 61150110486).

References

[1] A. Beloglazov, J. Abawajy, R. Buyya, Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing, Future Generation Computer Systems, vol. 28, issue 5, pp. 755-768, May 2012.
[2] A. Ikram, A. Anjum, N. Bessis, A cloud resource management model for the creation and orchestration of social communities, Simulation Modelling Practice and Theory, vol. 50, pp. 130-150, 2015.
[3] A. Legrand, L. Marchal, H. Casanova, Scheduling distributed applications: the SimGrid simulation framework, in Proceedings of the 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid, 2003.
[4] A. Nunez, J. Vazquez-Poletti, A. Caminero, et al., iCanCloud: A Flexible and Scalable Cloud Infrastructure Simulator, Journal of Grid Computing, vol. 10, pp. 185-209, 2012.
[5] A. Singh, M. Korupolu, D. Mohapatra, Server-Storage Virtualization: Integration and Load Balancing in Data Centers, in Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, pp. 1-12, 2008.
[6] Amazon EC2, http://aws.amazon.com/ec2/
[7] B. Wickremasinghe, et al., CloudAnalyst: A CloudSim-based Tool for Modelling and Analysis of Large Scale Cloud Computing Environments, in Proceedings of the 24th IEEE International Conference on Advanced Information Networking and Applications (AINA 2010), Perth, Australia, April 20-23, 2010.
[8] C. L. Dumitrescu, I. Foster, GangSim: a simulator for grid scheduling studies, in Proceedings of the IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2005), Cardiff, UK, 2005.
[9] D. Economou, S. Rivoire, C. Kozyrakis, P. Ranganathan, Full-System Power Analysis and Modeling for Server Environments, Stanford University / HP Labs Workshop on Modeling, Benchmarking, and Simulation (MoBS), June 18, 2006.
[10] D. Kliazovich, P. Bouvry, S. U. Khan, GreenCloud: A packet-level simulator of energy-aware cloud computing data centers, IEEE Global Telecommunications Conference, pp. 1-5, 2010.
[11] DMTF Cloud Management, http://www.dmtf.org/standards/cloud, 2013.
[12] Eucalyptus, www.eucalyptus.com, 2013.
[13] F. Howell, R. McNab, SimJava: A discrete event simulation library for Java, in Proceedings of the First International Conference on Web-Based Modeling and Simulation, 1998.
[14] G. Sakellari, G. Loukas, A survey of mathematical models, simulation approaches and testbeds used for research in cloud computing, Simulation Modelling Practice and Theory, vol. 39, pp. 92-103, Dec. 2013.
[15] Google App Engine, http://code.google.com/intl/zh-CN/appengine/, 2013.
[16] IBM Blue Cloud, http://www.ibm.com/grid/, 2013.
[17] L. Luo, W. Wu, W. T. Tsai, D. Di, F. Zhang, Simulation of power consumption of cloud data centers, Simulation Modelling Practice and Theory, vol. 39, pp. 152-171, Dec. 2013.
[18] L. Youseff, et al., Toward A Unified Ontology Of Cloud Computing, in Proceedings of the Grid Computing Environments Workshop (GCE'08), 2008.
[19] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, Above the Clouds: A Berkeley View of Cloud Computing, Technical Report No. UCB/EECS-2009-28, University of California at Berkeley, USA, Feb. 10, 2009.
[20] Microsoft Windows Azure, http://www.microsoft.com/windowsazure, 2013.
[21] H. Zheng, L. Zhou, J. Wu, Design and Implementation of Load Balancing in Web Server Cluster System, Journal of Nanjing University of Aeronautics & Astronautics, vol. 38, no. 3, Jun. 2006.
[22] Hebrew University, Experimental Systems Lab, www.cs.huji.ac.il/labs/parallel/workload, 2012.
[23] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility, Future Generation Computer Systems, 25(6): 599-616, Elsevier Science, Amsterdam, The Netherlands, June 2009.
[24] R. Buyya, M. Murshed, GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing, Concurrency and Computation: Practice and Experience, 14(13-15), Wiley Press, Nov.-Dec. 2002.
[25] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, R. Buyya, CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms, Software: Practice and Experience, vol. 41, no. 1, pp. 23-50, Wiley Press, New York, USA, January 2011.
[26] T. Guerout, T. Monteil, G. Costa, R. Calheiros, R. Buyya, M. Alexandru, Energy-aware Simulation with DVFS, Simulation Modelling Practice and Theory, vol. 39, pp. 76-91, Dec. 2013.
[27] T. N. Huu, N. P. Ngoc, H. T. Thu, et al., Modeling and experimenting combined smart sleep and power scaling algorithms in energy-aware data center networks, Simulation Modelling Practice and Theory, vol. 39, pp. 20-40, 2013.
[28] T. Wood, et al., Black-box and Gray-box Strategies for Virtual Machine Migration, in Proceedings of the Symposium on Networked Systems Design and Implementation (NSDI), 2007.
[29] W. Tian, Adaptive Dimensioning of Cloud Data Centers, in Proceedings of the 8th IEEE International Conference on Dependable, Autonomic and Secure Computing (DASC-09), Chengdu, China, December 12-14, 2009.
[30] W. Tian, X. Liu, C. Jin, Y. Zhong, LIF: A Dynamic Scheduling Algorithm for Cloud Data Centers Considering Multi-dimensional Resources, Journal of Information and Computational Science, vol. 10, issue 12, 2013.
[31] W. Tian, Y. Zhao, Y. Zhong, M. Xu, C. Jing, Dynamic and Integrated Load-balancing Scheduling Algorithms for Cloud Data Centers, China Communications, vol. 8, issue 6, pp. 117-126, 2011.
[32] W. Tian, Y. Zhao, M. Xu, Y. Zhong, X. Sun, A Toolkit for Modeling and Simulation of Real-time Virtual Machine Allocation in a Cloud Data Center, IEEE Transactions on Automation Science and Engineering, pp. 1-9, 2013.
[33] W. Zhang, Research and Implementation of Elastic Network Service, PhD dissertation, National University of Defense Technology, China (in Chinese), 2000.
