Virtualization in Cloud Computing
Hypervisors take the physical resources and separate them so they can be utilized by
the virtual environment. They can sit on top of an OS or they can be directly installed
onto the hardware. The latter is how most enterprises virtualize their systems.
The Xen hypervisor is an open source software program that is responsible for
managing the low-level interactions that occur between virtual machines (VMs) and
the physical hardware. In other words, the Xen hypervisor enables the simultaneous
creation, execution and management of various virtual machines in one physical
environment.
With the help of the hypervisor, the guest OS, normally interacting with true
hardware, is now doing so with a software emulation of that hardware; often, the guest
OS has no idea it's on virtualized hardware.
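As a hedged illustration of how management tooling talks to a hypervisor such as Xen, the short Python sketch below uses the libvirt bindings to connect to a hypervisor and list its guests. The connection URI and the availability of the libvirt-python package are assumptions made for this example, not details from the text above.

    import libvirt

    # Connect to a local hypervisor; "xen:///system" targets Xen,
    # "qemu:///system" would target KVM/QEMU instead (assumes libvirt-python is installed).
    conn = libvirt.open("xen:///system")
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()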
While the performance of this virtual system is not equal to the performance of the
operating system running on true hardware, the concept of virtualization works
because most guest operating systems and applications don't need the full use of the
underlying hardware.
This allows for greater flexibility, control and isolation by removing the dependency
on a given hardware platform. While initially meant for server virtualization, the
concept of virtualization has spread to applications, networks, data and desktops.
[Figure: A side-by-side view of a traditional versus a virtual architecture]
System users then work with and perform computations within the virtual environment.
Overview
Virtualization is technology that lets you create useful IT services using resources that
are traditionally bound to hardware. It allows you to use a physical machine’s full
capacity by distributing its capabilities among many users or environments.
In more practical terms, imagine you have 3 physical servers with individual dedicated
purposes. One is a mail server, another is a web server, and the last one runs internal
legacy applications. Each server is being used at about 30% capacity—just a fraction of
their running potential. But since the legacy apps remain important to your internal
operations, you have to keep them and the third server that hosts them, right?
Traditionally, yes. It was often easier and more reliable to run individual tasks on
individual servers: 1 server, 1 operating system, 1 task. It wasn’t easy to give 1 server
multiple brains. But with virtualization, you can split the mail server into 2 unique ones
that can handle independent tasks so the legacy apps can be migrated. It’s the same
hardware, you’re just using more of it more efficiently.
Keeping security in mind, you could split the first server again so it could handle another
task—increasing its use from 30%, to 60%, to 90%. Once you do that, the now empty
servers could be reused for other tasks or retired altogether to reduce cooling and
maintenance costs.
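The consolidation arithmetic in this example is simple enough to check directly. The tiny Python sketch below is an illustrative aside, not part of the original text: it computes how many 30%-utilized workloads fit on one host under a 90% utilization cap.

    # Illustrative numbers taken from the example above.
    per_workload_utilization = 30   # percent of one server used by each workload
    utilization_cap = 90            # percent; leave headroom above this

    workloads_per_host = utilization_cap // per_workload_utilization   # 3
    resulting_utilization = workloads_per_host * per_workload_utilization
    print(f"Workloads per host: {workloads_per_host}")
    print(f"Resulting utilization: {resulting_utilization}%")   # 30% -> 60% -> 90%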
A brief history of virtualization
While virtualization technology can be sourced back to the 1960s, it wasn’t widely
adopted until the early 2000s. The technologies that enabled virtualization—
like hypervisors—were developed decades ago to give multiple users simultaneous
access to computers that performed batch processing. Batch processing was a popular
computing style in the business sector that ran routine tasks thousands of times very
quickly (like payroll).
But, over the next few decades, other solutions to the many users/single machine
problem grew in popularity while virtualization didn’t. One of those other solutions was
time-sharing, which isolated users within operating systems—inadvertently leading
to other operating systems like UNIX, which eventually gave way to Linux®. All the
while, virtualization remained a largely unadopted, niche technology.
Fast forward to the 1990s. Most enterprises had physical servers and single-vendor
IT stacks, which didn’t allow legacy apps to run on a different vendor’s hardware. As
companies updated their IT environments with less-expensive commodity servers,
operating systems, and applications from a variety of vendors, they were bound to
underused physical hardware—each server could only run 1 vendor-specific task.
This is where virtualization really took off. It was the natural solution to 2 problems:
companies could partition their servers and run legacy apps on multiple operating
system types and versions. Servers started being used more efficiently (or not at all),
thereby reducing the costs associated with purchase, set up, cooling, and maintenance.
Virtualization’s widespread applicability helped reduce vendor lock-in and made it the
foundation of cloud computing. It’s so prevalent across enterprises today that
specialized virtualization management software is often needed to help keep track of it
all.
When the virtual environment is running and a user or program issues an instruction
that requires additional resources from the physical environment, the hypervisor relays
the request to the physical system and caches the changes—which all happens at close
to native speed (particularly if the request is sent through an open source hypervisor
based on KVM, the Kernel-based Virtual Machine).
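As a rough illustration of whether a given Linux host can use KVM for this kind of near-native execution, the sketch below (an illustrative addition, not from the source) checks for hardware virtualization CPU flags and for the /dev/kvm device node.

    import os

    # Linux-only, illustrative check for hardware virtualization support.
    with open("/proc/cpuinfo") as f:
        flags = f.read()

    hw_virt = ("vmx" in flags) or ("svm" in flags)   # Intel VT-x or AMD-V flags
    print("Hardware virtualization flags present:", hw_virt)
    print("/dev/kvm device available:", os.path.exists("/dev/kvm"))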
Types of virtualization
Data virtualization
Data that’s spread all over can be consolidated into a single source. Data virtualization
allows companies to treat data as a dynamic supply—providing processing capabilities
that can bring together data from multiple sources, easily accommodate new data
sources, and transform data according to user needs. Data virtualization tools sit in front
of multiple data sources and allow them to be treated as a single source, delivering the
needed data—in the required form—at the right time to any application or user.
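To make the "single source in front of many sources" idea concrete, here is a minimal, purely illustrative Python sketch of a virtualization layer that answers queries by merging records from several hypothetical in-memory sources; the class and source names are invented for the example.

    # Hypothetical sources standing in for a database, a billing feed, etc.
    SOURCES = {
        "crm":     [{"customer": "Acme", "region": "EU"}],
        "billing": [{"customer": "Acme", "balance": 1200}],
    }

    class DataVirtualizationLayer:
        """Presents several underlying sources as one queryable view."""
        def __init__(self, sources):
            self.sources = sources

        def query(self, customer):
            # Merge every record about the customer across all sources.
            merged = {}
            for records in self.sources.values():
                for rec in records:
                    if rec.get("customer") == customer:
                        merged.update(rec)
            return merged

    layer = DataVirtualizationLayer(SOURCES)
    print(layer.query("Acme"))   # {'customer': 'Acme', 'region': 'EU', 'balance': 1200}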
Desktop virtualization
Easily confused with operating system virtualization—which allows you to deploy
multiple operating systems on a single machine—desktop virtualization allows a central
administrator (or automated administration tool) to deploy simulated desktop
environments to hundreds of physical machines at once. Unlike traditional desktop
environments that are physically installed, configured, and updated on each machine,
desktop virtualization allows admins to perform mass configurations, updates, and
security checks on all virtual desktops.
Server virtualization
Servers are computers designed to process a high volume of specific tasks really well
so other computers—like laptops and desktops—can do a variety of other tasks.
Virtualizing a server lets it do more of those specific functions and involves
partitioning it so that its components can be used to serve multiple functions. Doing so:
Reduces bulk hardware costs, since the computers don’t require such high out-of-the-box capabilities.
Increases security, since all virtual instances can be monitored and isolated.
Limits time spent on IT services like software updates.
Architecture of Virtualization
Virtualization architecture is a conceptual model that describes how virtualization is
organized. Virtualization is critical to cloud computing: in the cloud, end users share
data through applications referred to as clouds, and with virtualization they can share
the entire IT infrastructure as well.
End users typically pay a fee to the third parties who provide these cloud services and
who supply the different versions of applications requested by the end users.
https://www.guru99.com/virtualization-cloud-computing.html
Types of virtualization architecture
There are two major types of virtualization architecture: hosted and bare-
metal. It's important to determine the type that will be used before
implementing virtualized systems.
Hosted architecture
In this type of setup, a host OS is installed on the hardware first, followed by the
virtualization software. That software, a hypervisor or virtual machine (VM) monitor,
is what allows multiple guest OSes, or VMs, to be installed on the hardware and build
out the virtualization architecture. Once the hypervisor is in place, applications can be
installed and run on the VMs just as they are on physical machines.
Hosted architecture is typically used for:
software development
running legacy applications
simplifying system configuration
Bare-metal architecture
In this architecture, a hypervisor is installed directly on the hardware instead
of on top of an OS. The installation for the hypervisor and VMs happens in the
same way as with hosted architecture. A bare-metal virtualization architecture
is suitable for applications that provide real-time access or perform some type
of data processing, and it offers higher scalability.
A Virtual Machine Cluster, also known as a VM Cluster, takes this technology one step further. It
is a group of several virtual machines hosted on multiple physical servers that are
interconnected and managed as a single entity. In simpler terms, a VM Cluster is a collection of
virtual machines that offer better performance, higher availability, and simplified management
than standalone virtual machines.
Another significant advantage of a VM cluster is its scalability. Organizations can quickly and
easily add or remove virtual machines as per their requirement without disrupting the existing
virtual machines. Since the resources are distributed across multiple physical servers, it is
easier to allocate computing resources as per the need of the application or service.
In a VM Cluster, each virtual machine is assigned its own set of resources and tasks. It has its
own operating system and applications, and the data resides within it. However, from the end-
users’ perspective, the virtual machines in the cluster appear as a single virtual environment.
This makes it easier for the IT team to manage the virtual machines as well as monitor and
troubleshoot issues that may arise.
The management of a VM cluster can be done through software tools that offer a centralized
management console. Such tools allow the IT team to monitor virtual machines’ status,
configure settings, and perform other administrative functions. In addition, VM clusters enable
businesses to create virtualized test environments that can be used to develop new
applications, prototypes, and conduct testing.
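To make the idea of a centralized console concrete, here is a small, hedged Python sketch that polls several cluster hosts over libvirt and aggregates the status of their VMs. The host URIs are placeholders, and the example assumes the libvirt-python package plus SSH access to each host.

    import libvirt

    # Placeholder host URIs; adjust to real cluster nodes.
    HOSTS = ["qemu+ssh://host1/system", "qemu+ssh://host2/system"]

    for uri in HOSTS:
        conn = libvirt.open(uri)
        try:
            running = [d.name() for d in conn.listAllDomains() if d.isActive()]
            print(f"{uri}: {len(running)} running VM(s) -> {running}")
        finally:
            conn.close()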
In conclusion, a virtual machine cluster is the next step in virtualization technology, offering a
more robust and redundant computing environment with high availability, scalability, and
simplified management. Organizations that run critical applications and services, operate many
virtual machines, or need elasticity and the ability to handle peaks in demand will benefit from
deploying a VM Cluster. With minimal hardware requirements, it can be an affordable way for
many businesses to enhance the efficiency and reliability of their computing environment.
Apart from the virtualization architecture, there are various software solutions used in
virtualization.
These are just a few examples of virtualization architectures and software solutions. The
choice of architecture and software depends on specific requirements, use cases, and
the desired level of isolation and management capabilities.
What is app virtualization?
App virtualization (application virtualization) is the separation of an installation of
an application from the client computer accessing it.
From the user's perspective, the application works just like it would if it lived on the
user's device. The user can move or resize the application window as well as carry out
keyboard and mouse operations. There might be subtle differences at times, but for
the most part, the user should have a seamless experience.
App virtualization technology makes it possible to run applications that might conflict
with a user's desktop applications or with other virtualized applications.
The use of peripheral devices can get more complicated with app virtualization,
especially when it comes to printing. System monitoring products can also have
trouble with virtualized applications, making it difficult to troubleshoot and isolate
performance issues.
1. Performance Overhead: Virtualization introduces a layer of abstraction between the physical hardware
and virtual machines, which can result in a slight performance overhead. Although this overhead has
significantly reduced with advancements in virtualization technologies, resource-intensive workloads or
improper resource allocation can impact performance.
2. Resource contention: In a virtualized environment, multiple virtual machines share physical resources
such as CPU, memory, and storage. If resources are not allocated and managed properly, resource
contention can occur. This can lead to performance degradation and impact the performance of virtual
machines running on the same host.
4. Single Point of Failure: While virtualization can enhance overall system availability, it also introduces a
potential single point of failure—the hypervisor or virtualization layer. If the hypervisor fails, all the
virtual machines running on it may be affected. Implementing proper high availability and backup
strategies can help mitigate this risk.
5. Compatibility Issues: Although virtualization provides improved compatibility in many cases, there can
still be instances where certain applications or drivers may not work properly in a virtualized
environment. This can be due to software licensing restrictions, hardware dependencies, or specific
configurations that are not supported within the virtualization environment.
7. Licensing and Compliance: Virtualization can introduce complexities in software licensing and
compliance. Some software vendors have specific licensing requirements for virtualized environments,
and organizations need to ensure compliance with these licensing terms to avoid legal and financial
issues.
To mitigate these pitfalls, it is important to carefully plan and design virtualization deployments, properly
allocate resources, implement monitoring and management tools, and stay updated with best practices
and vendor guidelines. Regular performance monitoring and capacity planning can help optimize
resource usage and identify and resolve potential issues before they impact critical workloads.
Pitfalls
Mismatching Servers
One of the great things about virtual machines is that they can be easily
created and migrated from server to server according to needs. However, this
can also create problems sometimes because IT staff members may get
carried away and deploy more Virtual Machines than a server can handle.
This will actually lead to a loss of performance that can be quite difficult to
spot. A practical way to work around this is to have some policies in place
regarding VM limitations and to make sure that the employees adhere to
them.
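One way to enforce such a policy is to compare the vCPUs allocated on a host against its physical CPU count. The sketch below is a hedged example using the libvirt Python bindings; the 4:1 overcommit limit is an illustrative policy value, not a recommendation from the text.

    import libvirt

    MAX_VCPU_OVERCOMMIT = 4.0   # illustrative policy limit (vCPUs per physical CPU)

    conn = libvirt.open("qemu:///system")
    try:
        host_cpus = conn.getInfo()[2]   # number of physical CPUs on this host
        vcpus = sum(d.maxVcpus() for d in conn.listAllDomains() if d.isActive())
        ratio = vcpus / host_cpus
        print(f"vCPU overcommit ratio: {ratio:.1f}:1")
        if ratio > MAX_VCPU_OVERCOMMIT:
            print("Policy violation: this host is carrying more VMs than allowed")
    finally:
        conn.close()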
Misplacing Applications
1. Detection/Discovery
You can't manage what you can't see! IT departments are often unprepared for the
complexity associated with understanding what VMs (virtual machines) exist and which
are active or inactive. To overcome these challenges, discovery tools need to extend to
the virtual world by identifying Virtual Machine Disk Format (.vmdk) files and how many
exist within the environment. This identifies both active and inactive VMs.
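A very basic discovery pass can be as simple as walking a datastore and counting disk files. The Python sketch below is illustrative only; the datastore path is a placeholder.

    from pathlib import Path

    DATASTORE = Path("/var/lib/vmware/datastore1")   # placeholder path

    vmdk_files = sorted(DATASTORE.rglob("*.vmdk"))   # find VM disk files
    print(f"Found {len(vmdk_files)} .vmdk file(s)")
    for f in vmdk_files:
        print(" -", f)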
2. Correlation
Difficulty in understanding which VMs are on which hosts and identifying which business
critical functions are supported by each VM is a common and largely unforeseen
problem encountered by IT departments employing virtualization. Mapping guest-to-host
relationships and grouping VMs by criticality and application is a best practice when
implementing virtualization.
3. Configuration management
Ensuring VMs are configured properly is crucial in preventing performance bottlenecks
and security vulnerabilities. Complexities in VM provisioning and offline VM patching are
a frequent issue for IT departments. A technical-controls configuration management
database (CMDB) is critical to understanding the configurations of VMs, especially
dormant ones. The CMDB provides the current state of a VM even if it is dormant,
allowing a technician to update the configuration by auditing and making changes to the
template.
1. Resource Pooling: Virtualization allows for the creation of a shared pool of virtual
machines (VMs) across different physical hosts in the grid. This pooling of resources
enables efficient utilization of computing power and storage capacity, as VMs can be
dynamically allocated to tasks based on demand.
3. Isolation and Security: Virtualization provides isolation between VMs, ensuring that
applications and processes running within each VM do not interfere with each other.
This isolation enhances security by containing any potential vulnerabilities or attacks
within individual VMs, reducing the risk of compromising the entire grid.
4. Migration and Load Balancing: Virtual machine migration enables the live movement
of VMs between physical hosts within the grid without interrupting running applications.
This capability can be leveraged for load balancing, as VMs can be dynamically
migrated to balance the workload across the grid, optimizing resource utilization and
improving overall performance (a minimal migration sketch follows this list).
5. Fault Tolerance and High Availability: By using virtualization technologies like live
migration and clustering, grid environments can achieve fault tolerance and high
availability. In the event of a physical host failure or maintenance, VMs can be
automatically migrated to other hosts, ensuring minimal downtime and uninterrupted
grid operations.
6. Virtual Appliance Deployment: Grid computing can benefit from the concept of virtual
appliances, which are pre-configured VMs with specific software stacks or applications.
These virtual appliances can be easily deployed within the grid, simplifying the setup
and configuration process for different tasks or services.
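As referenced in item 4 above, live migration can be driven programmatically. The following hedged sketch uses libvirt's migrate call; the host URIs and domain name are placeholders, and a real setup would also need shared storage and compatible hosts.

    import libvirt

    src = libvirt.open("qemu+ssh://host1/system")   # placeholder source host
    dst = libvirt.open("qemu+ssh://host2/system")   # placeholder destination host
    try:
        dom = src.lookupByName("web-vm-01")          # placeholder domain name
        # Request a live migration so the guest keeps running during the move.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
        print("Live migration requested for web-vm-01")
    finally:
        src.close()
        dst.close()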
Virtualization example
Consider a company that needs servers for three functions:
The email application requires more storage capacity and a Windows operating system.
The customer-facing application requires a Linux operating system and high processing
power to handle large volumes of website traffic.
The internal business application requires iOS and more internal memory (RAM).
To meet these requirements, the company sets up three different dedicated physical servers for
each application. The company must make a high initial investment and perform ongoing
maintenance and upgrades for one machine at a time. The company also cannot optimize its
computing capacity. It pays 100% of the servers’ maintenance costs but uses only a fraction of their
storage and processing capacities.
With virtualization, the company creates three digital servers, or virtual machines, on a single
physical server. It specifies the operating system requirements for the virtual machines and can use
them like the physical servers. However, the company now has less hardware and fewer related
expenses.
Infrastructure as a service
The company can go one step further and use a cloud instance or virtual machine from a cloud
computing provider such as AWS. AWS manages all the underlying hardware, and the company can
request server resources with varying configurations. All the applications run on these virtual servers
without the users noticing any difference. Server management also becomes easier for the
company’s IT team.
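For example, requesting such a cloud instance can be done with a few API calls. The sketch below uses the boto3 library for AWS EC2; the AMI ID and instance type are placeholders, and configured credentials and permissions are assumed.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])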
What is virtualization?
To properly understand Kernel-based Virtual Machine (KVM), you first need to understand some
basic concepts in virtualization. Virtualization is a process that allows a computer to share its
hardware resources with multiple digitally separated environments. Each virtualized environment
runs within its allocated resources, such as memory, processing power, and storage. With
virtualization, organizations can switch between different operating systems on the same server
without rebooting.
Virtual machine
A virtual machine is a software-defined computer that runs on a physical computer with a separate
operating system and computing resources. The physical computer is called the host machine and
virtual machines are guest machines. Multiple virtual machines can run on a single physical
machine. Virtual machines are abstracted from the computer hardware by a hypervisor.
Hypervisor
The hypervisor is a software component that manages multiple virtual machines in a computer. It
ensures that each virtual machine gets the allocated resources and does not interfere with the
operation of other virtual machines. There are two types of hypervisors.
Type 1 hypervisor
Also known as a bare-metal hypervisor, the type 1 hypervisor runs directly on the host machine's
hardware. This is how most enterprises virtualize their server systems.
Type 2 hypervisor
Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2
hypervisors are suitable for end-user computing.
https://www.tutorialspoint.com/cloud_computing/cloud_computing_virtualization.htm
Virtualization is a fundamental technology that underpins cloud computing. In fact,
virtualization is a key enabler of the cloud computing model, providing the foundation for
its scalability, resource pooling, and multi-tenancy capabilities. Here's how virtualization
is used in cloud computing:
3. Software as a Service (SaaS): SaaS providers may utilize virtualization to host and
deliver their applications to end-users. Virtualization allows for the consolidation of
multiple instances of the application on a shared infrastructure, providing efficient
resource utilization and multi-tenancy capabilities.
4. Resource Pooling and Elasticity: Virtualization enables resource pooling in the cloud,
where physical resources such as compute, storage, and networking are abstracted and
shared among multiple VMs or containers. This pooling allows for dynamic allocation of
resources based on demand, enabling scalability and elasticity to handle fluctuating
workloads.
https://www.geeksforgeeks.org/virtual-machine-security-in-cloud/
For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-segmentation
to reduce the potential attack surface.
Introduction
The anatomy of cloud computing can be defined as the structure of the cloud. It is not the same as
cloud architecture: anatomy does not describe the technology the cloud depends on to work,
whereas architecture defines and describes the technology it runs over. The anatomy of cloud
computing can therefore be considered part of the architecture of the cloud.
Cloud storage architectures include a front end that exposes an API for accessing storage. In
traditional storage systems this API is the Small Computer Systems Interface (SCSI) protocol; in the
cloud, however, these protocols are evolving and may be an internal protocol for implementing
specific features or a standard interface to the physical disks.
The storage logic is a layer of middleware that sits behind the front end. On top of traditional
data-placement algorithms, this layer adds a range of capabilities such as replication and data
reduction. Finally, the back end implements data storage at the physical level.
Components of cloud anatomy
Application
The uppermost layer is the application layer. In this layer, any application can be executed.
Platform
This component comprises platforms that are in charge of the application's execution. This
platform bridges the gap between infrastructure and application.
Virtualised Infrastructure
The infrastructure is made up of resources that the other components operate on. This allows the
user to perform computations.
Virtualisation
Virtualization is the process of overlaying logical resource components on top of physical
resources. The infrastructure is made up of discrete and autonomous logical components.
Server/Storage/Datacentre
This is the physical component of the cloud provided by servers and storage units.
Now we will discuss layers of the anatomy of cloud computing. Some of them are discussed
below.
Cloud computing is a model for delivering on-demand computing resources over the
internet. It allows users to access a shared pool of computing resources, such as
servers, storage, databases, and applications, without the need for local infrastructure
or maintenance. The anatomy of cloud computing can be understood by exploring its
key components:
1. Clients: Clients are the end-user devices or software applications that interact with
the cloud. They can be desktop computers, laptops, smartphones, or other connected
devices. Clients communicate with the cloud infrastructure through various protocols
and interfaces.
By understanding these components, one can grasp the anatomy of cloud computing
and how its various elements work together to provide flexible, scalable, and on-
demand computing capabilities.
https://www.niallkennedy.com/blog/2009/03/cloud-computing-stack.html
By decoupling physical hardware from an operating system, a virtual infrastructure can help
organizations achieve greater IT resource utilization, flexibility, scalability and cost savings.
These benefits are especially helpful to small businesses that require reliable infrastructure but
can’t afford to invest in costly physical hardware.
Host: A virtualization layer that manages resources and other services for
virtual machines. Virtual machines run on these individual hosts, which
continuously perform monitoring and management activities in the
background. Multiple hosts can be grouped together to work on the same
network and storage subsystems, culminating in combined computing and
memory resources to form a cluster. Machines can be dynamically added or
removed from a cluster.
Hypervisor: A software layer that enables one host computer to
simultaneously support multiple virtual operating systems, also known as
virtual machines. By sharing the same physical computing resources, such
as memory, processing and storage, the hypervisor stretches available
resources and improves IT flexibility.
Virtual machine: These software-defined computers encompass operating
systems, software programs and documents. Managed by a virtual
infrastructure, each virtual machine has its own operating system called a
guest operating system.
The key advantage of virtual machines is that IT teams can provision them
faster and more easily than physical machines without the need for
hardware procurement. Better yet, IT teams can easily deploy and suspend
a virtual machine, and control access privileges, for greater security. These
privileges are based on policies set by a system administrator.
User interface: This front-end element lets administrators view and manage virtual
infrastructure components by connecting directly to the server host or through a
browser-based interface.
CPU Virtualization
With CPU virtualization, all the virtual machines act as physical machines, and the computing
resources are shared so that they can be used together efficiently. Each virtual machine receives a
share of the single physical CPU allocated to it when the hosting services receive requests, yet the
virtualized CPU generates the same output just as a physical machine does. This emulation offers
great portability and facilitates working in a single environment.
In software-based CPU virtualization, ordinary application code executes directly on the processor,
while privileged code is translated first and the translated code is executed in its place. The
translated code is larger in size and slower to run, whereas unprivileged code runs smoothly and
fast.
Hardware-assisted CPU virtualization is available on certain processors. Here, the guest uses a
different version of code and mode of execution known as guest mode, and system calls run faster
because the guest code relies mainly on hardware assistance; however, each exit from guest mode
to root mode carries a cost that eventually slows execution down.
Despite emulating the specific software behavior of a CPU model, the virtual machine still helps in
detecting the processor model on which the system runs, and processor models differ in the variety
of features they offer. Applications that spend processing power waiting for instructions to be
executed first introduce overhead into the virtualized environment.
Because all the computing resources sit on a single server and processing is carried out according
to CPU instructions shared among all the systems involved, less hardware is required and physical
machine usage is reduced, so cost is very low and time is saved. The machines are also kept
separate from each other. CPU virtualization provides good backup of computing resources, since
the data is stored on a single system, and offers better options for retrieving data, so the desired
data reaches the client without any hassle while maintaining atomicity. Virtualization ensures the
desired data reaches the desired clients through the medium, checks whether any constraints
exist, and removes them quickly.
Storage Virtualization
There has traditionally been a strong link between the physical host and its locally
installed storage devices. That paradigm has been changing drastically, however, and local
storage is often no longer needed: as the technology progresses, more advanced storage
devices come to the market that provide more functionality and make local storage obsolete.
Storage virtualization is a major component of storage servers, in the form of functional
RAID levels and controllers. Operating systems and applications can access the disks
directly and write to them. The controllers configure the local storage in RAID groups and
present the storage to the operating system depending upon the configuration. However, the
storage is abstracted, and the controller determines how to write the data or retrieve the
requested data for the operating system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested
blocks at LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented over the Ethernet network to the operating system.
NAS presents the storage as file operations (like NFS). SAN technologies present the
storage as block-level storage (like Fibre Channel). In both cases, the operating system
issues its storage instructions as if the storage were a locally attached device.
Storage Tiering: Building on the storage pool concept, storage tiering analyzes the most
commonly used data and places it on the highest performing storage pool, while the least
used data is placed on the lowest performing storage pool.
This operation is done automatically without any interruption of service to the data
consumer.
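As a toy illustration of the tiering idea, the Python sketch below labels files as hot or cold by last-access time; the 7-day threshold and the pool path are illustrative assumptions, and real tiering engines work at the block or pool level rather than on individual files.

    import time
    from pathlib import Path

    POOL = Path("/srv/data")          # placeholder storage pool path
    THRESHOLD = 7 * 24 * 3600         # 7 days, in seconds (illustrative)

    now = time.time()
    for f in POOL.rglob("*"):
        if f.is_file():
            tier = "hot" if (now - f.stat().st_atime) < THRESHOLD else "cold"
            print(f"{tier:4s}  {f}")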
The pooling of NAS resources makes it easier to handle file migrations in the
background, which will help improve performance. Typically, NAS systems are
not that complex to manage, but storage virtualization greatly simplifies the
task of managing multiple NAS devices through a single management
console.
Overview
Network virtualization is the transformation of a network that was once hardware-
dependent into a network that is software-based. Like all forms of IT virtualization, the
basic goal of network virtualization is to introduce a layer of abstraction between
physical hardware and the applications and services that use that hardware.
With network virtualization, digital service providers can optimize their server resources
(i.e., fewer idle servers), use standard servers for functions that once required expensive
proprietary hardware, and generally improve the speed, flexibility, and reliability of their
networks.
Whether it’s virtual reality in remote surgery or smart grids allowing ambulances to
safely speed through traffic lights, new advancements offer the promise of radically
improved and optimized experiences. But the traditionally hardware-dependent
networks of many service providers must be transformed to accommodate this
innovation. Network virtualization offers service providers the agility and scalability they
need to keep up.
Just as hyperscale public cloud providers have demonstrated how cloud-
native architectures and open source development can accelerate service delivery,
deployment, and iteration, telecommunication service providers can take this same
approach to operate with greater agility, flexibility, resilience, and security. They can
manage infrastructure complexity through automation and a common horizontal
platform. They can also meet the higher consumer and enterprise expectations of
performance, safety, ubiquity, and user experience. With cloud-native architectures and
automation, providers can more rapidly change and add services and features to better
respond to customer needs and demands.
With these services virtualized, providers can distribute network functions across
different servers or move them around as needed when demand changes. This
flexibility helps improve the speed of network provisioning, service updates, and
application delivery, without requiring additional hardware resources. The segmentation
of workloads into VMs or containers can also boost network security.
This approach provides the ability to spin workloads up and down with minimal effort.
https://www.techtarget.com/searchnetworking/What-is-network-virtualization-Everything-you-need-to-know