Virtualization
Unit 1
OVERVIEW OF VIRTUALIZATION
Q.1 What is Virtualization? Explain the five stages Virtualization Process.
A. "Half the work that is done in the world is to make things appear what they are
not." (E. R. Beadle).
Virtualization is the ability to run multiple operating systems on a single physical
system and share the underlying hardware resources.
It is the process by which one computer hosts the appearance of many computers.
Virtualization is used to improve IT throughput and costs by using physical
resources as a pool from which virtual resources can be allocated.
Virtualization is a technology that transforms hardware into software.
Virtualization allows us to run multiple operating systems as VMs on a single
computer.
"Virtualization software makes it possible to run multiple operating systems and
multiple applications on the same server at the same time‖.
"It enables businesses to reduce IT costs while increasing the efficiency, utilization
and flexibility of their existing computer hardware.―
The technology behind virtualization is known as a virtual machine monitor
(VMM), or hypervisor, which separates compute environments from the actual
physical infrastructure.
Five Stages of the Virtualization Process:
1. Discovery: The first step begins with datacentre inventories and the identification
of potential virtualization candidates.
2. Virtualization: The second step focuses on gaining a complete understanding of
the value choices that virtualization can offer.
3. Hardware maximization: The third step focuses on hardware recovery and how
you can make judicious investments when adding new hardware or replacing older
systems.
4. Architecture: The fourth step looks at the architecture you must prepare to
properly introduce virtualization technologies into your datacentre practices.
5. Management: The last step focuses on the update of the management tools you
use to maintain complete virtualization scenarios in your new dynamic datacentre.
Desktop Virtualization
Desktop Virtualization (DeskV) allows you to rely on virtual machines to
provision desktop systems. Desktop virtualization has several advantages, not
the least of which is the ability to centralize desktop deployments and reduce
distributed management costs, because users access centralized desktops
through a variety of thin or unmanaged devices.
Desktop virtualization centralizes desktop deployments so that you can gain
complete control over them, letting users rely on a variety of endpoints—thin
computing devices, unmanaged PCs, home PCs, or public PCs—to access your
corporate desktop infrastructure, once again through the Remote Desktop
Connection (RDC).
The main difference between DeskV and PresentV, or presentation virtualization,
often called Terminal Services or server-based computing, is that in PresentV,
users must share the desktop environment with all of the other users connecting
to the server. In DeskV, each user gets access to their own desktop, limiting
the potential impact of the applications they need on other desktop sessions.
DeskV can be quite a time-saver compared to the cost of managing distributed
systems throughout your infrastructure. If you have existing desktops, you can
turn them into unmanaged devices because all you need from the physical
workstation are three things:
1. A base operating system, which can be anything from Windows XP to Vista
NITESH SHUKLA Page 2
VIRTUALIZATION
Network Virtualization
Network virtualization is implemented in software and is able to modify the
behaviour of both control and data planes.
Network virtualization introduces the possibility of transforming physical
connections and devices into simpler logical entities, both improving resource
utilization and reducing design complexities. These techniques include
EtherChannel, Virtual PortChannel (vPC), and Layer 2 multipathing with FabricPath.
Network virtualization techniques can help with network partitioning, resource
optimization, management consolidation, and network extension.
Network Virtualization (NetV) lets you control available bandwidth by splitting
it into independent channels that can be assigned to specific resources. For
example, the simplest form of network virtualization is the virtual local area
network (VLAN), which creates a logical segregation of a physical network.
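The logical segregation a VLAN provides comes down to a small tag inserted into each Ethernet frame. As a rough sketch (the function names here are ours, not part of any networking library), the 802.1Q tag can be built like this:

```python
import struct

# The 4-byte 802.1Q VLAN tag is inserted into an Ethernet frame after
# the destination and source MAC addresses:
#   TPID (0x8100) | PCP (3 bits) | DEI (1 bit) | VID (12 bits)
TPID_8021Q = 0x8100

def vlan_tag(vid, pcp=0, dei=0):
    """Return the 4-byte 802.1Q tag for VLAN id `vid` (0-4095)."""
    if not 0 <= vid <= 0xFFF:
        raise ValueError("VLAN id must fit in 12 bits")
    tci = (pcp << 13) | (dei << 12) | vid   # Tag Control Information
    return struct.pack("!HH", TPID_8021Q, tci)

def tag_frame(frame, vid):
    """Insert a VLAN tag after the 12-byte destination+source MAC header."""
    return frame[:12] + vlan_tag(vid) + frame[12:]
```

A switch that understands these tags can keep traffic for VLAN 100 logically separate from VLAN 200 even though both share the same physical wire.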
Storage Virtualization
Interconnect: Encompasses the network or medium between the host and the
storage device.
Storage virtualization also allows multiple physical arrays to work together as a
single system, bringing advantages such as data redundancy and management
consolidation.
Storage Virtualization (StoreV) is used to merge physical storage from multiple
devices so that they appear as one single storage pool. The storage in this pool can
take several forms: direct attached storage (DAS), network attached storage
(NAS), or storage area networks (SANs); and it can be linked to through several
protocols: Fibre Channel, Internet SCSI (iSCSI), Fibre Channel over Ethernet
(FCoE), or even the Network File System (NFS).
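Conceptually, merging several physical devices into one pool is bookkeeping over aggregate capacity, from which logical units are carved. The sketch below is purely illustrative; the class and method names are ours, not any vendor's API:

```python
# Illustrative sketch: several physical devices presented as one
# logical pool, from which LUNs are allocated.
class StoragePool:
    def __init__(self):
        self.devices = {}     # device name -> capacity in GB
        self.luns = {}        # LUN name -> size in GB

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    @property
    def capacity(self):
        return sum(self.devices.values())

    @property
    def allocated(self):
        return sum(self.luns.values())

    def create_lun(self, name, size_gb):
        if self.allocated + size_gb > self.capacity:
            raise ValueError("pool exhausted")
        self.luns[name] = size_gb

pool = StoragePool()
pool.add_device("das0", 500)    # direct attached storage
pool.add_device("san0", 2000)   # SAN array
pool.create_lun("vm-store", 100)
```

To the consumer, "vm-store" is simply a 100 GB unit; which physical array backs it is hidden behind the pool.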
Though storage virtualization is not a requirement for server virtualization, one of
the key strengths you will be able to obtain from storage virtualization is the
ability to rely on thin provisioning: the assignment of a logical unit number (LUN)
of storage of a given size, but provisioning it only on an as-needed basis.
For example, if you create a LUN of 100 gigabytes (GB) and you are only using
12GB, only 12GB of actual storage is provisioned. This significantly reduces the
cost of storage since you only pay as you go.
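A file-level analogue of thin provisioning is a sparse file: the logical size is fixed up front, but disk blocks are allocated only as data is actually written. This sketch assumes a filesystem with sparse-file support (most Linux filesystems qualify):

```python
import os
import tempfile

# Sketch: thin provisioning with a sparse file. The file's logical size
# (the "LUN" size the consumer sees) is set up front, but the filesystem
# allocates blocks only where data has actually been written.
path = os.path.join(tempfile.mkdtemp(), "thin.img")
with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)        # logical size: 100 MB
    f.write(b"\xff" * (1024 * 1024))     # actually write only 1 MB

st = os.stat(path)
logical = st.st_size                     # what the consumer sees
physical = st.st_blocks * 512            # what is really provisioned
print(logical, physical)                 # physical stays far below logical
```

Exactly as in the 100 GB / 12 GB example above, only the written portion consumes real storage.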
The remainder of the taxonomy is based on whether the guest and host use
the same ISA. Using this taxonomy, Figure 0-15 shows the "space" of virtual
machines that we have identified.
On the left side of the figure are process VMs. There are two types of process
VMs where the host and guest instruction sets are the same.
The first is multi-programmed systems, where virtualization is a natural part
of multiprogramming and is supported on most of today’s systems.
The second is dynamic optimizers, which transform guest instructions only
by optimizing them, and then execute them natively.
The two types of process VMs that do provide emulation are dynamic
translators and HLL VMs. HLL VMs are connected to the VM taxonomy
via a "dotted line" because their process-level interface is at a different, higher
level than the other process VMs.
On the right side of the figure are system VMs. These range from Classic OS
VMs and Hosted VMs, where replication – and providing isolated system
environments – is the goal, to Whole System VMs and CoDesigned VMs
where emulation is the goal.
With Whole System VMs, performance is often secondary, in favor of
accurate functionality, while with Co-Designed VMs, performance (or power
efficiency) is the major goal.
Here, Co-Designed VMs are "dotted line" connected because their interface
is at a lower level than other system VMs.
A virtual machine (VM) is an emulation of a computer system. Virtual machines
operate based on the computer architecture and functions of a real or hypothetical
computer, and their implementations may involve specialized hardware, software,
or a combination of both.
Classification of virtual machines can be based on the degree to which they
implement functionality of targeted real machines. That way, system virtual
machines (also known as full virtualization VMs) provide a complete substitute for
the targeted real machine and a level of functionality required for the execution of
a complete operating system. On the other hand, process virtual machines are
designed to execute a single computer program by providing an abstracted and
platform-independent program execution environment.
Different virtualization techniques are used based on the desired usage. Native
execution is based on direct virtualization of the underlying raw hardware, thus it
provides multiple "instances" of the same architecture a real machine is based on,
capable of running complete operating systems. Some virtual machines can also
emulate different architectures and allow execution of software applications and
operating systems written for another CPU or architecture. Operating-system-level
virtualization allows the resources of a computer to be partitioned via the kernel's
support for multiple isolated user space instances, which are usually called
containers and may look and feel like real machines to the end users.
Some computer architectures are capable of hardware-assisted virtualization,
which enables efficient full virtualization by using virtualization-specific hardware
capabilities, primarily from the host CPUs.
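On Linux, those virtualization-specific CPU capabilities are visible as feature flags: "vmx" for Intel VT-x and "svm" for AMD-V. A hypervisor checks for one of these before enabling hardware-assisted full virtualization. A minimal, Linux-specific sketch:

```python
# Sketch (Linux-specific): hardware-assisted virtualization shows up as
# CPU feature flags in /proc/cpuinfo -- "vmx" for Intel VT-x and "svm"
# for AMD-V. A hypervisor checks these before using hardware assists.
def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return set()                  # not Linux, or /proc unavailable
    found = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            words = set(line.split(":", 1)[1].split())
            found |= words & {"vmx", "svm"}
    return found

print(hw_virt_flags() or "no hardware virtualization flags found")
```

An empty result means the CPU (or the environment, e.g. a VM without nested virtualization) offers no hardware assists.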
A process VM, sometimes called an application virtual machine, or Managed
Runtime Environment (MRE), runs as a normal application inside a host OS and
supports a single process. It is created when that process is started and destroyed
when it exits. Its purpose is to provide a platform-independent programming
environment that abstracts away details of the underlying hardware or operating
system, and allows a program to execute in the same way on any platform.
A process VM provides a high-level abstraction – that of a high-level
programming language (compared to the low-level ISA abstraction of the system
VM). Process VMs are implemented using an interpreter; performance comparable
to compiled programming languages is achieved by the use of just-in-time
compilation.
This type of VM has become popular with the Java programming language, which
is implemented using the Java virtual machine. Other examples include the Parrot
virtual machine, and the .NET Framework, which runs on a VM called the
Common Language Runtime. All of them can serve as an abstraction layer for any
computer language.
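At its core, a process VM executes platform-independent bytecode through an interpreter loop. The toy stack machine below is purely illustrative; real VMs such as the JVM or CLR add verification, garbage collection, and just-in-time compilation on top of this idea:

```python
# Sketch: the heart of a process VM is an interpreter loop over a
# platform-independent instruction set. The same "bytecode" runs
# unchanged on any host that carries the interpreter.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * 4, expressed as portable stack-machine instructions
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

The program never mentions the host's real ISA, which is exactly what makes "write once, run anywhere" possible.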
A special case of process VMs are systems that abstract over the communication
mechanisms of a (potentially heterogeneous) computer cluster.
1. Paravirtualization:
Paravirtualization is virtualization in which the guest operating system
(the one being virtualized) is aware that it is a guest, and accordingly has
drivers that, instead of issuing hardware commands, simply issue
commands directly to the host operating system. This includes memory
and thread management as well, which usually rely on privileged processor
instructions that are unavailable to a guest.
Paravirtualization is different from full virtualization, where the unmodified
OS does not know it is virtualized and sensitive OS calls are trapped using
binary translation at run time.
In paravirtualization, these instructions are handled at compile time when
the non-virtualizable OS instructions are replaced with hypercalls.
The advantage of paravirtualization is lower virtualization overhead, but the
performance advantage of paravirtualization over full virtualization can
vary greatly depending on the workload. Most user space workloads gain
very little, and near native performance is not achieved for all workloads.
As paravirtualization cannot support unmodified operating systems (e.g.
Windows 2000/XP), its compatibility and portability are poor.
Paravirtualization can also introduce significant support and maintainability
issues in production environments as it requires deep OS kernel
modifications.
The invasive kernel modifications tightly couple the guest OS to the
hypervisor with data structure dependencies, preventing the modified guest
OS from running on other hypervisors or native hardware.
The open source Xen project is an example of paravirtualization that
virtualizes the processor and memory using a modified Linux kernel and
virtualizes the I/O using custom guest OS device drivers.
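The hypercall idea can be sketched in a few lines: instead of touching emulated hardware registers and trapping, the modified guest driver calls straight into an interface the hypervisor exposes. All class and function names below are invented for illustration; they are not Xen's actual interface:

```python
# Sketch: a paravirtualized guest driver bypasses device emulation and
# issues hypercalls directly to the hypervisor.
class Hypervisor:
    def __init__(self):
        # hypercall table exposed to paravirtualized guests
        self.hypercalls = {"send_packet": self.send_packet}
        self.wire = []

    def send_packet(self, data):
        self.wire.append(data)        # the host performs the real I/O
        return len(data)

class ParavirtGuestDriver:
    """A guest NIC driver modified to issue hypercalls directly."""
    def __init__(self, hypervisor):
        self.hypercall = hypervisor.hypercalls

    def transmit(self, data):
        # no trap, no emulated registers: one direct call into the host
        return self.hypercall["send_packet"](data)

hv = Hypervisor()
nic = ParavirtGuestDriver(hv)
nic.transmit(b"hello")
```

The coupling the text warns about is visible here: the guest driver is written against this specific hypercall table and would not run on bare hardware or another hypervisor.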
2. Full Virtualization:
Full Virtualization is virtualization in which the guest operating system is
unaware that it is in a virtualized environment; hardware is virtualized by
the host operating system so that the guest can issue commands to what it
thinks is actual hardware, though these are really just simulated hardware
devices created by the host.
Full Virtualization is done with a hardware emulation tool and processor-based
virtualization support that allows you to run unmodified guest kernels
that are not "aware" they are being virtualized. The result is that you give up
some performance on these platforms.
Windows, NetWare, and most closed-source OSs require full virtualization.
Many of these guests have PV drivers available, though, which allow
devices like disks, network cards, etc., to run with improved performance.
Full virtualization is called "full" because the entire system's resources are
abstracted by the virtualization software layer.
Full virtualization has proven highly successful for:
sharing a computer system among multiple users;
isolating users from each other (and from the control program);
emulating new hardware to achieve improved reliability, security, and
productivity.
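The trap-and-emulate mechanism mentioned above can be sketched as follows; the instruction set, handlers, and names are invented purely for illustration:

```python
# Sketch: trap-and-emulate, the mechanism behind full virtualization.
# Unprivileged guest instructions run directly; sensitive ones trap to
# the VMM, which emulates them against virtual device state.
class TrapToVMM(Exception):
    pass

def guest_execute(instr, regs):
    """Direct execution path: only unprivileged instructions allowed."""
    op = instr[0]
    if op == "mov":
        regs[instr[1]] = instr[2]
    elif op == "add":
        regs[instr[1]] += instr[2]
    else:
        raise TrapToVMM(instr)        # sensitive instruction: trap out

def vmm_run(program):
    regs, vdisk = {}, []              # virtual CPU state, virtual device
    for instr in program:
        try:
            guest_execute(instr, regs)
        except TrapToVMM:
            if instr[0] == "out":     # VMM emulates the I/O port write
                vdisk.append(regs[instr[1]])
    return regs, vdisk

# mov ax,5 ; add ax,2 ; out ax  -- only "out" reaches the VMM
regs, vdisk = vmm_run([("mov", "ax", 5), ("add", "ax", 2), ("out", "ax")])
```

The guest never learns that "out" touched a Python list rather than a real device, which is exactly the illusion full virtualization maintains.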
Type 1:
Also known as native or bare-metal hypervisors, these run directly on the host
computer's hardware to control the hardware resources and to manage guest
operating systems; the guest operating systems run on a level above the
hypervisor. Examples of Type 1 hypervisors include VMware ESXi, Citrix
XenServer, and Microsoft Hyper-V.
Type 2:
Also known as hosted hypervisors, these run within a conventional operating
system just as other computer programs do; a guest operating system runs as
a process on the host. Examples of Type 2 hypervisors include VMware
Workstation and Oracle VirtualBox.
The VMM is the primary software behind virtualization environments and
implementations. When installed on a host machine, the VMM facilitates the
creation of VMs, each with its own operating system (OS) and applications.
The VMM manages the backend operation of these VMs by allocating the
necessary computing, memory, storage, and other input/output (I/O) resources.
The VMM also provides a centralized interface for managing the entire operation,
status, and availability of VMs that are installed on a single host or spread
across different, interconnected hosts.
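That backend bookkeeping, carving host CPU and memory out of a shared pool for each VM, can be sketched as follows; all class and method names are illustrative, not any real VMM's API:

```python
# Sketch: a VMM allocating host resources to the VMs it creates and
# returning them to the pool when a VM is destroyed.
class VMM:
    def __init__(self, host_cpus, host_mem_gb):
        self.free_cpus, self.free_mem = host_cpus, host_mem_gb
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("insufficient host resources")
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb,
                          "state": "running"}

    def destroy_vm(self, name):
        vm = self.vms.pop(name)       # reclaim the VM's resources
        self.free_cpus += vm["cpus"]
        self.free_mem += vm["mem_gb"]

vmm = VMM(host_cpus=16, host_mem_gb=64)
vmm.create_vm("web01", cpus=4, mem_gb=8)
vmm.create_vm("db01", cpus=8, mem_gb=32)
```

The same accounting view is what the VMM's centralized management interface reports: which VMs exist, their state, and how much of the host remains free.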