
Medi-Caps University,

Indore
Cloud Computing
CS3EL10

Name: Manish Panwar


ENROLLMENT NO: EN19CS305027
Class: CS (IOT)

Submitted to:
Manish Panwar
INDEX

S.NO  Experiment

1     Create Amazon Free Tier Account
2     Create your First EC2 Windows Instance
3     Launching RDS Instance in AWS
4     Assigning Elastic IP Addresses to Instance (Static IP Address)
5     Configure AWS S3 Bucket
6     Create VPC – Virtual Private Cloud with Subnet, Internet Gateway and Route Table
7     Create AWS Elastic Load Balancer (ELB)
8     Case Study of Google App Engine
9     Case Study of Xen Hypervisor
10    Case Study of OpenStack


Experiment-1

Aim - Create Amazon Free Tier Account


Theory -
Follow these quick steps to register for an AWS Free Tier account.

1. Open your favourite browser and navigate to the AWS Free Tier signup page.

2. Click on the Create a Free Account button as highlighted below.

3. On the Sign up for AWS page, provide the following details:

● Email address: Provide a valid email address. Make sure you have not already used this email address to register an AWS account.

● Password: Provide a strong password.


● Confirm password: Re-enter the same password for confirmation.

● AWS account name: Provide a name for your AWS account. Note that you can change the account name later from the account settings page after signup.

Finally, click the Continue (Step 1 of 5) button.

4. On the Contact Information section, provide the following details:

● How do you plan to use AWS? Choose Personal or Business based on your needs.

● Full Name: Provide your full name.

● Phone Number: Provide your phone number with your country code.

● Country or Region: Select your country from the dropdown.

● Address: Provide your complete address, including your city, state, postal code, etc.

Read and accept the terms and conditions of the AWS Customer Agreement.
Finally, click the Continue (Step 2 of 5) button to move to the next step.

5. On the Billing Information section, provide the following details:

● Credit or Debit card number: Provide your card number and make sure you have entered it correctly.

● Expiration date: Provide the expiration date of your credit card.

● Cardholder’s name: Provide the name of the cardholder.

● CVV: Enter the correct CVV of your card.


● Billing address: You can choose the contact address that you
have provided before or you can also add a new address by
selecting the Use a new address radio button.

● Do you have a PAN? Choose Yes and provide the PAN number, or choose No and add your PAN details later on the tax settings page.

Finally, click the Verify and Continue (Step 3 of 5) button to move to the next step.

6. Now enter the OTP that you received on your mobile for a small verification transaction, and then click the Next button. For me it was 2 rupees, as I chose India as my country; based on your country, you will see a similarly minimal transaction. Amazon holds this amount temporarily just to verify your identity, and the hold may take 3 to 5 days to be released.
7. Now is the time to verify your Phone on the Confirm your
Identity section. Provide the below details.
● How should we send you the verification code?
Select the text message radio button. You can also choose
the Phone call option.

● Country or Region Code: Select your country or region code.

● Cell Phone Number: Provide the number of your cell phone.

● Security Check: Type the characters shown in the captcha exactly.

Click on the Send SMS button to receive the SMS on your mobile.

8. Enter the code you have received and then click on the Verify
Code button.

9. It will show you now that “Your identity has been verified
successfully.” Then click on the Continue button.

10. Now, on the next window, you will see three plans:

● Basic Plan (Free)
● Developer Plan
● Business Plan

Select a plan based on your needs. The Basic plan is free of cost; check the pricing before selecting either of the other two plans.
11. Finally, you will see the Registration Confirmation page. It might take 30 minutes to 1 hour for your AWS account to be activated. You will receive an email confirmation once your AWS account is activated.
Experiment-2

Aim - Create your First EC2 Windows instance


Theory –

Step 1: Launch an instance


To launch an instance

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

2. From the console dashboard, choose Launch Instance.

3. The Choose an Amazon Machine Image (AMI) page displays a list of basic configurations, called Amazon Machine Images (AMIs), that serve as templates for your instance. Select the AMI for Windows Server 2016 Base or later. Notice that these AMIs are marked "Free tier eligible."

4. On the Choose an Instance Type page, you can select the hardware configuration of your instance. Select the t2.micro instance type, which is selected by default. The t2.micro instance type is eligible for the free tier. In Regions where t2.micro is unavailable, you can use a t3.micro instance under the free tier. For more information, see AWS Free Tier.

5. On the Choose an Instance Type page, choose Review and Launch to let the wizard complete the other configuration settings for you.

6. On the Review Instance Launch page, under Security Groups, you'll see that the wizard created and selected a security group for you. You can use this security group, or alternatively you can select the security group that you created when getting set up, using the following steps:
a. Choose Edit security groups.
b. On the Configure Security Group page, ensure that Select
an existing security group is selected.
c. Select your security group from the list of existing
security groups, and then choose Review and Launch.
7. On the Review Instance Launch page, choose Launch.

8. When prompted for a key pair, select Choose an existing key pair, then select the key pair that you created when getting set up.

Warning

Don't select Proceed without a key pair. If you launch your instance
without a key pair, then you can't connect to it.

When you are ready, select the acknowledgement check box, and
then choose Launch Instances.
9. A confirmation page lets you know that your instance is launching.
Choose View Instances to close the confirmation page and return
to the console.

10. On the Instances screen, you can view the status of the launch. It takes a short time for an instance to launch. When you launch an instance, its initial state is pending. After the instance starts, its state changes to running and it receives a public DNS name. (If the Public IPv4 DNS column is hidden, choose the settings icon in the top-right corner, toggle on Public IPv4 DNS, and choose Confirm.)

11. It can take a few minutes for the instance to be ready so that you
can connect to it. Check that your instance has passed its status
checks; you can view this information in the Status check column.
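Under the hood, the launch wizard in the steps above submits an EC2 RunInstances API call. The sketch below builds the equivalent request parameters; the parameter names follow the RunInstances API, while the AMI ID, key-pair name, and security-group ID are made-up placeholders, not real resources.

```python
def build_run_instances_request(ami_id, key_name, security_group_id):
    """Parameters the Launch Instance wizard effectively submits.

    Names mirror the EC2 RunInstances API; the IDs passed in are
    hypothetical placeholders, not real resources.
    """
    return {
        "ImageId": ami_id,                        # e.g. a Windows Server 2016 Base AMI
        "InstanceType": "t2.micro",               # free-tier eligible (step 4)
        "KeyName": key_name,                      # existing key pair (step 8)
        "SecurityGroupIds": [security_group_id],  # security group (step 6)
        "MinCount": 1,
        "MaxCount": 1,                            # launch exactly one instance
    }

request = build_run_instances_request("ami-0abcd1234", "my-key-pair", "sg-0123456789")
print(request["InstanceType"])  # prints t2.micro
```

Passing such a dictionary to an EC2 client would launch exactly one t2.micro instance with the chosen key pair and security group, mirroring the wizard's defaults.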

Step 2: Connect to your instance

To connect to a Windows instance, you must retrieve the initial administrator password and then enter this password when you connect to your instance using Remote Desktop. It takes a few minutes after instance launch before this password is available.

The name of the administrator account depends on the language of the operating system. For example, for English it's Administrator, for French it's Administrateur, and for Portuguese it's Administrador. For more information, see Localized Names for Administrator Account in Windows in the Microsoft TechNet Wiki.

If you've joined your instance to a domain, you can connect to your instance using domain credentials you've defined in AWS Directory Service. On the Remote Desktop login screen, instead of using the local computer name and the generated password, use the fully qualified user name for the administrator (for example, corp.example.com\Admin) and the password for this account.

If you receive an error while attempting to connect to your instance, see Remote Desktop can't connect to the remote computer.


To connect to your Windows instance using an RDP client

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, select Instances. Select the instance
and then choose Connect.
3. On the Connect to instance page, choose the RDP client tab,
and then choose Get password.

4. Choose Browse and navigate to the private key (.pem) file you
created when you launched the instance. Select the file and
choose Open to copy the entire contents of the file to this
window.
5. Choose Decrypt Password. The console displays the default
administrator password for the instance under Password, replacing
the Get password link shown previously. Save the password in a
safe place. This password is required to connect to the instance.
6. Choose Download remote desktop file. Your browser prompts you
to either open or save the RDP shortcut file. When you have
finished downloading the file, choose Cancel to return to the
Instances page.
● If you opened the RDP file, you'll see the Remote
Desktop Connection dialog box.
● If you saved the RDP file, navigate to your downloads
directory, and open the RDP file to display the dialog box.
7. You may get a warning that the publisher of the remote
connection is unknown. Choose Connect to continue to connect to
your instance.
8. The administrator account is chosen by default. Copy and paste
the password that you saved previously.

Tip

If you receive a "Password Failed" error, try entering the password manually. Copying and pasting content can corrupt it.

9. Due to the nature of self-signed certificates, you may get a warning that the security certificate could not be authenticated. Use the following steps to verify the identity of the remote computer, or simply choose Yes (Windows) or Continue (Mac OS X) if you trust the certificate.
● If you are using Remote Desktop Connection on a
Windows computer, choose View certificate. If you are
using Microsoft Remote Desktop on a Mac, choose Show
Certificate.
● Choose the Details tab, and scroll down
to Thumbprint (Windows) or SHA1 Fingerprints (Mac OS X).
This is the unique identifier for the remote computer's
security certificate.
● In the Amazon EC2 console, select the instance,
choose Actions, Monitor and troubleshoot, Get system log.
● In the system log output, look for RDPCERTIFICATE-
THUMBPRINT. If this value matches the thumbprint or
fingerprint of the certificate, you have verified the identity
of the remote computer.
● If you are using Remote Desktop Connection on a Windows
computer, return to the Certificate dialog box and choose
OK. If you are using Microsoft Remote Desktop on a Mac,
return to the Verify Certificate and choose Continue.
● [Windows] Choose Yes in the Remote Desktop
Connection window to connect to your
instance.
[Mac OS X] Log in as prompted, using the default
administrator account and the default administrator password
that you recorded or copied previously. Note that you might
need to switch spaces to see the login screen. For more
information, see Add spaces and switch between them.
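Behind the Get password and Decrypt Password buttons, the console fetches the Base64-encoded, RSA-encrypted password blob (the EC2 GetPasswordData operation) and decrypts it with your .pem private key. A minimal sketch of the shape of that flow follows; the RSA step is stubbed out as a callable, since real decryption needs your actual private key and a crypto library.

```python
import base64

def decrypt_admin_password(password_data_b64, rsa_decrypt):
    """Sketch of what the console's Decrypt Password button does.

    password_data_b64: Base64 blob, as returned by EC2 GetPasswordData.
    rsa_decrypt: callable performing RSA decryption with your .pem
    private key (stubbed here; a real tool would use a crypto library).
    """
    ciphertext = base64.b64decode(password_data_b64)  # undo transport encoding
    return rsa_decrypt(ciphertext).decode("utf-8")    # recover the plaintext password

# Illustration with an identity "cipher" standing in for RSA:
blob = base64.b64encode(b"Example-Passw0rd!").decode()
print(decrypt_admin_password(blob, lambda ct: ct))  # prints Example-Passw0rd!
```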

Step 3: Clean up your instance

After you've finished with the instance that you created for this tutorial,
you should clean up by terminating the instance. If you want to do
more with this instance before you clean up, see Next steps.

Important

Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it.

If you launched an instance that is not within the AWS Free Tier, you'll
stop incurring charges for that instance as soon as the instance status
changes to shutting down or terminated. To keep your instance for later,
but not incur charges, you can stop the instance now and then start it
again later. For more information, see Stop and start your instance.

To terminate your instance


1. In the navigation pane, choose Instances. In the list of instances,
select the instance.
2. Choose Instance state, Terminate instance.
3. Choose Terminate when prompted for confirmation.
Amazon EC2 shuts down and terminates your instance. After
your instance is terminated, it remains visible on the console for
a short while, and then the entry is automatically deleted. You
cannot remove the terminated instance from the console
display yourself.
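The instance lifecycle described in steps 10-11 and the terminate procedure above can be summarised as a small state machine. This is a simplification covering only the states this experiment mentions plus stop/start; the event names are informal.

```python
# Simplified EC2 instance lifecycle from steps 10-11 and the terminate
# procedure above. State names match EC2; event names are informal.
TRANSITIONS = {
    ("pending", "started"): "running",
    ("running", "terminate"): "shutting-down",
    ("shutting-down", "done"): "terminated",
    ("running", "stop"): "stopping",
    ("stopping", "done"): "stopped",
    ("stopped", "start"): "pending",
}

def next_state(state, event):
    # Unknown events leave the state unchanged; nothing leaves "terminated".
    return TRANSITIONS.get((state, event), state)

state = "pending"
for event in ("started", "terminate", "done"):
    state = next_state(state, event)
print(state)  # prints terminated -- there is no way to reconnect from here
```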
Experiment-3

Aim - Launching RDS Instance in AWS


Theory –
With AWS Explorer, you can launch an instance of any of the database
engines supported by Amazon RDS. The following walkthrough shows the
user experience for launching an instance of Microsoft SQL Server
Standard Edition, but the user experience is similar for all supported
engines.

To launch an Amazon RDS instance

1. In AWS Explorer, open the context (right-click) menu for the Amazon RDS node and choose Launch DB Instance. Alternatively, on the DB Instances tab, choose Launch DB Instance.

2. In the DB Engine Selection dialog box, choose the type of database engine to launch. For this walkthrough, choose Microsoft SQL Server Standard Edition (sqlserver-se), and then choose Next.
3. In the DB Engine Instance Options dialog box, choose
configuration options.
In the DB Engine Instance Options and Class section, you can
specify the following settings.
License Model
The license model varies depending on the type of database engine:

Engine Type            License
Microsoft SQL Server   license-included
MySQL                  general-public-license
Oracle                 bring-your-own-license
DB Instance Version
Choose the version of the database engine you would like to use. If
only one version is supported, it is selected for you.
DB Instance Class
Choose the instance class for the database engine. Pricing for
instance classes varies. For more information, see Amazon RDS
Pricing.
Perform a multi AZ deployment
Select this option to create a multi-AZ deployment for enhanced
data durability and availability. Amazon RDS provisions and
maintains a standby copy of your database in a different
Availability Zone for automatic failover in the event of a scheduled
or unplanned outage. For information about pricing for multi-AZ
deployments, see the pricing section of the Amazon RDS detail
page. This option is not supported for Microsoft SQL Server.
Upgrade minor versions automatically
Select this option to have AWS automatically perform minor
version updates on your RDS instances for you.

In the RDS Database Instance section, you can specify the following
settings.

Allocated Storage
The minimums and maximums for allocated storage depend on the type of database engine:

Engine                                  Minimum (GB)   Maximum (GB)
MySQL                                   5              1024
Oracle Enterprise Edition               10             1024
Microsoft SQL Server Express Edition    30             1024
Microsoft SQL Server Standard Edition   250            1024
Microsoft SQL Server Web Edition        30             1024
DB Instance Identifier
Specify a name for the database instance. This name is not case-
sensitive. It will be displayed in lowercase form in AWS Explorer.
Master User Name
Type a name for the administrator of the database instance.
Master User Password
Type a password for the administrator of the database instance.
Confirm Password
Type the password again to verify it is correct.
4. In the Additional Options dialog box, you can specify the following settings.
Database Port
This is the TCP port the instance will use to communicate on the
network. If your computer accesses the Internet through a firewall,
set this value to a port through which your firewall allows traffic.
Availability Zone
Use this option if you want the instance to be launched in a
particular Availability Zone in your region. The database instance
you have specified might not be available in all Availability Zones
in a given region.
RDS Security Group
Select an RDS security group (or groups) to associate with your
instance. RDS security groups specify the IP address, Amazon EC2
instances, and AWS accounts that are allowed to access your
instance. For more information about RDS security groups,
see Amazon RDS Security Groups. The Toolkit for Visual Studio
attempts to determine your current IP address and provides the
option to add this address to the security groups associated with
your instance. However, if your computer accesses the Internet
through a firewall, the IP address the Toolkit generates for your
computer may not be accurate. To determine which IP address to
use, consult your system administrator.
DB Parameter Group
(Optional) From this drop-down list, choose a DB parameter
group to associate with your instance. DB parameter groups
enable you
to change the default configuration for the instance. For more
information, go to the Amazon Relational Database Service User
Guide and this article.
When you have specified settings on this dialog box, choose Next.

5. The Backup and Maintenance dialog box enables you to specify whether Amazon RDS should back up your instance and, if so, how long the backup should be retained. You can also specify a window of time during which the backups should occur.
This dialog box also enables you to specify if you would like
Amazon RDS to perform system maintenance on your instance.
Maintenance includes routine patches and minor version upgrades.
The window of time you specify for system maintenance cannot
overlap with the window specified for backups.
Choose Next.
6. The final dialog box in the wizard allows you to review the settings for your instance. If you need to modify settings, use the Back button. If all the settings are correct, choose Launch.
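The allocated-storage limits from the table in this experiment can be checked programmatically. A small sketch follows; the engine names and limits are taken directly from the table above.

```python
# Allocated storage limits (GB) per engine, from the table in this experiment.
STORAGE_LIMITS = {
    "MySQL": (5, 1024),
    "Oracle Enterprise Edition": (10, 1024),
    "Microsoft SQL Server Express Edition": (30, 1024),
    "Microsoft SQL Server Standard Edition": (250, 1024),
    "Microsoft SQL Server Web Edition": (30, 1024),
}

def storage_ok(engine, gb):
    """Return True when the requested storage is within the engine's range."""
    lo, hi = STORAGE_LIMITS[engine]
    return lo <= gb <= hi

print(storage_ok("MySQL", 20))                                   # prints True
print(storage_ok("Microsoft SQL Server Standard Edition", 100))  # prints False (below the 250 GB minimum)
```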
Experiment-4

Aim - Assigning Elastic IP Addresses to Instance (Static IP Address)
Theory –
Once you create a new EC2 instance, it is assigned a new public IP address. This IP address is not static, so it will change (a new IP will be assigned) when certain actions are performed on your instance, such as restarting it.

If you have this public IP address configured in your DNS A record, the IP address can change at any time. When a new IP address is assigned to your instance, visitors cannot reach your website because the record still points to the old IP address, over which you no longer have any control.

To overcome this issue, AWS provides Elastic IP addresses: static IPv4 addresses designed for dynamic cloud computing. With an Elastic IP, you can rapidly remap the address to any instance in your account.

Prerequisites

● A running EC2 Instance.

To allocate an Elastic IP address and assign it to an instance using the console

1. Open the Amazon VPC console.
2. Choose Elastic IPs.
3. Choose Allocate new address.
4. Choose Allocate.

Note

If your account supports EC2-Classic, first choose VPC.

5. Select the Elastic IP address from the list, choose Actions, and
then choose Associate address.
6. Choose Instance or Network interface, and then select either
the instance or network interface ID. Select the private IP address
with which to associate the Elastic IP address, and then
choose Associate.
Create Elastic IP address

Log in to your AWS Management Console, navigate to Compute >> EC2, and under Network and Security click Elastic IPs.

In this screen click Allocate Elastic IP address.

Now a new IPv4 address will get allocated.


Associate Elastic IP address
Once an IP is allocated you need to associate it with the EC2 Instance.

To do so, select the checkbox for your IP address, click Actions, and choose Associate Elastic IP address.
For Resource type, choose Instance, and then choose the instance from the dropdown in the Instance field.
If the Elastic IP address was previously associated with another resource and you need to re-associate it, enable the checkbox under Reassociation.
Click Associate.

Release Elastic IP address

If you no longer wish to use the allocated IP address, you can release it to prevent any unnecessary billing.

To do so, select the checkbox for your IP address, click Actions, and choose Release Elastic IP address.
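The allocate / associate / release lifecycle described above can be modelled as simple bookkeeping. This is purely illustrative; the address below is from the reserved documentation range and the instance ID is made up.

```python
class ElasticIPs:
    """Toy model of the allocate/associate/release flow described above."""

    def __init__(self):
        self.assoc = {}  # address -> instance id (None while unassociated)

    def allocate(self, address):
        self.assoc[address] = None          # allocated but not yet associated

    def associate(self, address, instance_id):
        self.assoc[address] = instance_id   # remapping replaces any prior association

    def release(self, address):
        del self.assoc[address]             # stop billing for the unused address

eips = ElasticIPs()
eips.allocate("203.0.113.10")                # documentation-range example address
eips.associate("203.0.113.10", "i-0abc123")  # hypothetical instance id
print(eips.assoc["203.0.113.10"])            # prints i-0abc123
eips.release("203.0.113.10")
print("203.0.113.10" in eips.assoc)          # prints False
```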
Experiment-5

Aim - Configure AWS S3 Bucket


Theory –

Step-1 To create a bucket


1. Sign in to the AWS Management Console and open the Amazon
S3 console.
2. Choose Create bucket.
The Create bucket page opens.
3. In Bucket name, enter a DNS-compliant name for your bucket.
The bucket name must:
● Be unique across all of Amazon S3.
● Be between 3 and 63 characters long.
● Not contain uppercase characters.
● Start with a lowercase letter or number.

After you create the bucket, you can't change its name. For
information about naming buckets, see Bucket naming rules.

Important

Avoid including sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
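The naming rules listed in step 3 can be expressed as a quick check. This is a simplified validator covering only the rules above; the full S3 rules restrict more (for example, IP-address-style names), and global uniqueness can only be checked by S3 itself.

```python
import re

def bucket_name_ok(name):
    """Check the naming rules from step 3 (simplified; real S3 has more rules)."""
    if not (3 <= len(name) <= 63):        # between 3 and 63 characters long
        return False
    if name != name.lower():              # must not contain uppercase characters
        return False
    if not re.match(r"[a-z0-9]", name):   # start with a lowercase letter or number
        return False
    return True                           # uniqueness is enforced by S3, not here

print(bucket_name_ok("my-lab-bucket-2021"))  # prints True
print(bucket_name_ok("My_Bucket"))           # prints False (contains uppercase)
print(bucket_name_ok("ab"))                  # prints False (shorter than 3 characters)
```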

4. In Region, choose the AWS Region where you want the bucket to
reside.
Choose a Region that is close to you geographically to minimize
latency and costs and to address regulatory requirements. Objects
stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS
Regions, see AWS Service Endpoints in the Amazon Web Services
General Reference.
5. Keep the remaining settings set to the defaults.
6. Choose Create bucket.

You have created a bucket in Amazon S3.


Step-2 To upload an object to a bucket
1. Open the Amazon S3 console.
2. In the Buckets list, choose the name of the bucket that you want
to upload your object to.
3. On the Objects tab for your bucket, choose Upload.
4. Under Files and folders, choose Add files.
5. Choose a file to upload, and then choose Open.
6. Choose Upload.

You have successfully uploaded an object to your bucket.


To download an object from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console.
2. In the Buckets list, choose the name of the bucket that you want
to download an object from.
3. You can download an object from an S3 bucket in any of
the following ways:
● Choose the name of the object that you want to download.

On the Overview page, select the object and, from the Actions menu, choose Download or Download as if you want to download the object to a specific folder.
● Choose the object that you want to download and then from
the Object actions menu choose Download or Download as
if you want to download the object to a specific folder.
● If you want to download a specific version of the object,
choose the name of the object that you want to download.
Choose
the Versions tab and then from the Actions menu
choose Download or Download as if you want to download
the object to a specific folder.

You have successfully downloaded your object.
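The upload and download steps above correspond to S3's PutObject and GetObject operations. Below is a sketch of the request parameters; the bucket and key names are hypothetical, and VersionId mirrors the Versions-tab download flow.

```python
def put_object_request(bucket, key, body):
    """Parameters for S3 PutObject, as issued by the console Upload button."""
    return {"Bucket": bucket, "Key": key, "Body": body}

def get_object_request(bucket, key, version_id=None):
    """Parameters for S3 GetObject; version_id selects a specific object
    version, matching the Versions-tab download flow described above."""
    request = {"Bucket": bucket, "Key": key}
    if version_id is not None:
        request["VersionId"] = version_id
    return request

upload = put_object_request("my-lab-bucket", "notes.txt", b"hello")  # hypothetical names
download = get_object_request("my-lab-bucket", "notes.txt", "v1")
print(download["VersionId"])  # prints v1
```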


Experiment- 6

Aim- Create VPC - Virtual Private Cloud with Subnet, Internet Gateway and Route Table.

Step-1 Create VPC → Go to the console and search for VPC. Then click Launch VPC Wizard, choose the right template, provide the necessary details, and hit Create.

Step-2 Create three subnets in different Availability Zones. Go to Subnets on the left panel and click Create subnet. Provide a name for the subnet and attach it to the concerned VPC. Repeat the step three times to create three subnets, each in a different Availability Zone, as highlighted below in the snapshots.
Step-3 Create an internet gateway and attach it to the VPC. Click on Internet gateways on the left panel, provide a name, and click Create. I have named it lab internet gateway. Once created, select the gateway and click the Actions drop-down menu. There will be an option to attach it to a specific VPC. Once attached, go back to your VPC menu.

Step-4 Create another internet gateway and try to attach it to the same VPC. Follow the same steps as above, and observe that only one gateway can be attached to a given VPC.

Step-5 Out of the three subnets, make two publicly accessible and leave one private by creating a public-access route table. First, create a route table and attach it to the concerned VPC. Once the VPC is attached and the route table is created, click the Routes tab and edit the route table. Click Add route and enter 0.0.0.0/0, the CIDR block that matches all internet-bound traffic. Set the target of that route to the internet gateway we created in the steps above. Now, to give two of our subnets public access, go to the Subnet associations tab, edit the subnet associations, select two subnets, and hit Save. This way, we have created a route table that targets the internet gateway, and we have associated two subnets with that route table so they have public access through the gateway.

Step-6 Now that we have attached the subnet to the VPC and routing
table to the gateway, I am attaching the security group. The security
groups would allow the traffic in and out of our VPC. For inbound traffic, I
am allowing HTTP and SSH, and for outbound traffic, I am allowing full
access. This means that only HTTP and SSH elements can enter the VPC
but everything inside the VPC can be sent outside without any restriction.
First, click on security groups on the left panel and click “create security
groups” and then provide security group with a new name, provide a
description and attach the concerning VPC. Then provide the inbound and
outbound traffic rules as shown in the snapshot. Once done, click create
security groups and it will be attached to the VPC.
Step-7 Now that we have set up the whole environment, it is important to keep track of every activity inside our network. For that, we have to create a VPC flow log. First, click on the VPC we have created, then click on the Flow logs tab, and then click the blue Create flow log button. Select the filter, which tells the flow log what type of activity to monitor: for example, only rejected requests in our network, only accepted requests, or both. I have selected All. Now create a destination log group; this log group will contain all the log information from our network. Additionally, one can send logs to an S3 bucket as well. In order to access the log group, we also have to attach an IAM policy that allows the flow log to write to CloudWatch log groups. Finally, we hit Save.
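The routing behaviour set up in Step-5, where a 0.0.0.0/0 route sends internet-bound traffic to the gateway while the VPC's own range stays local, can be sketched with Python's ipaddress module. The VPC CIDR and gateway name below are hypothetical examples.

```python
import ipaddress

# Hypothetical route table from Step-5: a local route for the VPC's own
# CIDR block, and a default route (0.0.0.0/0) targeting the internet gateway.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/16"), "local"),    # traffic within the VPC
    (ipaddress.ip_network("0.0.0.0/0"), "lab-igw"),    # everything else -> gateway
]

def route_for(dest_ip):
    """Longest-prefix match, the way a VPC route table chooses a target."""
    matches = [(net, target) for net, target in ROUTES
               if ipaddress.ip_address(dest_ip) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route_for("10.0.1.25"))      # prints local
print(route_for("93.184.216.34"))  # prints lab-igw
```

Since 0.0.0.0/0 has prefix length 0, any more specific route (like the VPC's local route) always wins the longest-prefix match, which is why in-VPC traffic never leaves through the gateway.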
Experiment-7

Aim – Create an Application load balancer.


Theory –

Step 1: Configure a target group

Configuring a target group allows you to register targets such as EC2 instances. The target group that you configure in this step is used as the target group in the listener rule when you configure your load balancer.

To configure your target group

1. Open the Amazon EC2 console.


2. In the left navigation pane, under Load Balancing, choose
Target Groups.
3. Choose Create target group.
4. In the Basic configuration section, set the following parameters:
a. For Choose a target type, select Instance to specify targets by
instance ID or IP addresses to specify targets by IP address. If
the target type is a Lambda function, you can enable health
checks by selecting Enable in the Health checks section.
b. For Target group name, enter a name for the target group.
c. Modify the Port and Protocol as needed.
d. If the target type is IP addresses, choose IPv4 or IPv6 as the
IP address type, otherwise skip to the next step.
Note that only targets that have the selected IP address type
can be included in this target group. The IP address type
cannot be changed after the target group is created.
e. For VPC, select a virtual private cloud (VPC) with the
targets that you want to include in your target group.
f. For Protocol version, select HTTP1 when the request
protocol is HTTP/1.1 or HTTP/2; select HTTP2, when the request
protocol is HTTP/2 or gRPC; and select gRPC, when the
request protocol is gRPC.
5. In the Health checks section, modify the default settings as
needed. For Advanced health check settings, choose the health
check port, count, timeout, interval, and specify success codes. If
health checks consecutively exceed the Unhealthy threshold count,
the load balancer takes the target out of service. If health checks
consecutively exceed the Healthy threshold count, the load
balancer puts the target back in service.
6. (Optional) Add one or more tags as follows:
a. Expand the Tags section.
b. Choose Add tag.
c. Enter the tag Key and tag Value. Allowed characters are
letters, spaces, numbers (in UTF-8), and the following special
characters: + - = . _ : / @. Do not use leading or trailing spaces.
Tag values are case-sensitive.
7. Choose Next.
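The healthy/unhealthy threshold behaviour described in step 5 can be sketched as a counter over consecutive health-check results. The threshold values below are arbitrary examples for illustration, not ALB defaults.

```python
def target_health(results, healthy_threshold=3, unhealthy_threshold=2):
    """Track one target through consecutive health-check results.

    results: iterable of booleans, True meaning the check passed.
    Threshold values here are arbitrary examples, not ALB defaults.
    """
    state, fails, passes = "healthy", 0, 0
    for passed in results:
        fails = 0 if passed else fails + 1    # consecutive failures
        passes = passes + 1 if passed else 0  # consecutive passes
        if state == "healthy" and fails >= unhealthy_threshold:
            state = "unhealthy"               # taken out of service
        elif state == "unhealthy" and passes >= healthy_threshold:
            state = "healthy"                 # put back in service
    return state

print(target_health([True, False, False]))              # prints unhealthy
print(target_health([False, False, True, True, True]))  # prints healthy
```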

Step 2: Register targets

You can register EC2 instances, IP addresses, or Lambda functions as targets in a target group. This step is optional when creating a load balancer, but you must register targets to ensure that your load balancer routes traffic to them.

1. On the Register targets page, add one or more targets as follows:

● If the target type is Instances, select one or more instances, enter one or more ports, and then choose Include as pending below.
● If the target type is IP addresses, do the following:
a. Select a network VPC from the list, or choose Other
private IP addresses.
b. Enter the IP address manually, or find the IP
address using instance details. You can enter up to
five IP addresses at a time.
c. Enter the ports for routing traffic to the specified
IP addresses.
d. Choose Include as pending below.
● If the target type is Lambda, select a Lambda function, or
enter a Lambda function ARN, and then choose Include
as pending below.
2. Choose Create target group.
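The Include as pending below action in step 1 can be sketched as building a pending-targets list with one shape per target type. All IDs, ports, and the truncated Lambda ARN below are hypothetical.

```python
def include_as_pending(pending, target_type, target_id, port=None):
    """Model of 'Include as pending below' for each target type above."""
    entry = {"Type": target_type, "Id": target_id}
    if port is not None:
        entry["Port"] = port          # instance and IP targets route to a port
    pending.append(entry)             # Lambda targets carry no port
    return pending

pending = []
include_as_pending(pending, "instance", "i-0abc123", port=80)     # hypothetical instance
include_as_pending(pending, "ip", "10.0.1.25", port=8080)         # hypothetical private IP
include_as_pending(pending, "lambda", "arn:aws:lambda:...:demo")  # hypothetical, truncated ARN
print(len(pending))  # prints 3
```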
Step 3: Configure a load balancer and a listener

To create an Application Load Balancer, you must first provide basic


configuration information for your load balancer, such as a name,
scheme, and IP address type. Then, you provide information about your
network, and one or more listeners. A listener is a process that checks for
connection requests. It is configured with a protocol and a port for
connections from clients to the load balancer. For more information about
supported protocols and ports.

To configure your load balancer and listener

1. Open the Amazon EC2 console.


2. In the navigation pane, under Load Balancing, choose
Load Balancers.
3. Choose Create Load Balancer.
4. Under Application Load Balancer, choose Create.
5. Basic configuration
a. For Load balancer name, enter a name for your load
balancer. For example, my-alb. The name of your Application
Load Balancer must be unique within your set of Application
Load Balancers and Network Load Balancers for the Region.
Names can have a maximum of 32 characters and can contain only alphanumeric characters and hyphens. They cannot begin or end with a hyphen, or begin with internal-.
b. For Scheme, choose Internet-facing or Internal. An internet-
facing load balancer routes requests from clients to targets
over the internet. An internal load balancer routes requests to
targets using private IP addresses.
c. For IP address type, choose IPv4 or Dualstack. Use IPv4 if
your clients use IPv4 addresses to communicate with the load
balancer. Choose Dualstack if your clients use both IPv4 and
IPv6 addresses to communicate with the load balancer.
Note: If the load balancer is an internal load balancer, you
must choose IPv4.
6. Network mapping
a. For VPC, select the VPC that you used for your EC2 instances.
If you selected Internet-facing for Scheme, only VPCs with
an internet gateway are available for selection.
b. For Mappings, select two or more Availability Zones and
corresponding subnets. Enabling multiple Availability Zones
increases the fault tolerance of your applications.
For internet-facing load balancers, you can select an Elastic
IP address for each Availability Zone. This provides your load
balancer with static IP addresses.
For an internal load balancer, you can assign a private IP
address from the IPv4 range of each subnet instead of letting
AWS assign one for you.
You can select only one subnet per zone. If you
enabled Dualstack mode for the load balancer, select subnets
with associated IPv6 CIDR blocks. You can specify one of the
following:
● Subnets from two or more Availability Zones
● Subnets from one or more Local Zones
● One Outpost subnet
7. For Security groups, select an existing security group, or create a
new one.
The security group for your load balancer must allow it to
communicate with registered targets on both the listener port and
the health check port. The console can create a security group for
your load balancer on your behalf with rules that allow this
communication. You can also create a security group and select it
instead. For more information, see Recommended rules.
(Optional) To create a new security group for your load balancer,
choose Create a new security group.
8. For Listeners and routing, the default is a listener that accepts
HTTP traffic on port 80. You can keep the default listener settings,
modify the protocol, or modify the port. Choose Add listener to
add a new listener (for example, an HTTPS listener).
If you create an HTTPS listener, configure the required Secure
listener settings. Otherwise, go to the next step.
When you use HTTPS for your load balancer listener, you must
deploy an SSL certificate on your load balancer. The load balancer
uses this certificate to terminate the connection and decrypt
requests from clients before sending them to the targets. For more
information, see SSL certificates. Additionally, specify the security
policy that the load balancer uses to negotiate SSL connections
with the clients. For more information, see Security policies.
For Default SSL certificate, do one of the following:
a. If you created or imported a certificate using AWS Certificate
Manager, select From ACM, and then select the certificate.
b. If you uploaded a certificate using IAM, select From IAM, and
then select the certificate.
c. If you want to import a certificate to ACM or IAM, enter a
certificate name. Then, paste the PEM-encoded private
key and body.
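If no certificate is available yet, a self-signed one can be generated with OpenSSL to produce the PEM-encoded private key and body for option (c). The file names and subject below are placeholders; self-signed certificates are for testing only (clients will see a trust warning), so prefer an ACM-issued certificate in production:

```shell
# Generate a throwaway self-signed certificate and private key
# (placeholder file names and subject; for testing only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout my-key.pem -out my-cert.pem \
  -subj "/CN=my-alb.example.com"

# my-key.pem holds the PEM-encoded private key; my-cert.pem holds
# the PEM-encoded certificate body to paste into the console.
```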
9. Tag and create
a. (Optional) Add a tag to categorize your load balancer. Tag keys
must be unique for each load balancer. Allowed characters are
letters, spaces, numbers (in UTF-8), and the following special
characters: + - = . _ : / @. Do not use leading or trailing spaces.
Tag values are case-sensitive.
b. Review your configuration, and choose Create load balancer.
A few default attributes are applied to your load balancer
during creation. You can view and edit them after creating the load
balancer.
Step 4: Test the load balancer

After creating your load balancer, you can verify that your EC2 instances
pass the initial health check. You can then check that the load balancer is
sending traffic to your EC2 instance.

To test the load balancer

1. After the load balancer is created, choose Close.


2. In the navigation pane, under Load Balancing, choose
Target Groups.
3. Select the newly created target group.
4. Choose Targets and verify that your instances are ready. If the
status of an instance is initial, it's typically because the instance is
still in the process of being registered. This status can also indicate
that the instance has not passed the minimum number of health
checks to be considered healthy. After the status of at least one
instance is healthy, you can test your load balancer.
5. In the navigation pane, under Load Balancing, choose
Load Balancers.
6. Select the newly created load balancer.
7. Choose Description and copy the DNS name of the load balancer
(for example, my-load-balancer-1234567890abcdef.elb.us-east-
2.amazonaws.com). Paste the DNS name into the address field of an
internet-connected web browser. If everything is working, the
browser displays the default page of your server.
Experiment-8

Aim – Case Study of Google App Engine


Theory –
History of the Google Search Engine
Founders Larry Page and Sergey Brin named the search engine they built
"Google," a play on the word "googol," the mathematical term for a 1
followed by 100 zeros. The name reflects the immense volume of
information that exists. Google's mission is to organize the world's
information and make it universally accessible and useful.

QUICK FACTS
● Founded: 1998
● Founders: Larry Page and Sergey Brin
● Incorporation: September 4, 1998
● Initial public offering (NASDAQ): August 19, 2004
● Headquarters: 1600 Amphitheatre Parkway, Mountain View,
CA 94043
● Offices: Locations of offices around the world.
● Management: Executives and board of directors.
● Motto: Don't Be Evil.

POTENTIAL PROFITABILITY OF THE INDUSTRY


Google’s main competitors, Yahoo and Microsoft (operating under their
respective brands, MSN and Live Search), posted revenues of $7.0 billion
and $51.1 billion, respectively (Google, 2007).
There is a dizzying amount of money made in this industry. Presently,
Google commands 57% of internet searches in the United States
(AgenceFrancePresse, 2008). This large market share enables them to
improve the quality of their search results and targeted ads more quickly
than their competitors. This creates a sort of self-perpetuating draw for
customers as the search results constantly improve. Yahoo and Microsoft
lag behind with 23% and 11% respective market shares (Figure 1)
(AgenceFrancePresse, 2008).
The competitive rivalry is strong and ongoing in this industry because
large amounts of advertising dollars flow to the website that has captured
the largest volume of searches. Hitwise reported that Google's market
share in the US was 72.11% in February 2009. "Yahoo Search, MSN Search
and Ask.com received 17.04%, 5.56% and 3.74%, respectively, and are down
year-over-year at -17%, -20%, and -10%, respectively."

SUCCESSES & FAILURES IN THE INDUSTRY


Google Inc., starting from just a smart algorithm, has developed an
entirely new business model, has become the world's leading search
engine within a few years, has developed winning applications such as
Google Earth, Google Video, Google Maps, and Gmail, and is enjoying
huge success. Starting from scratch, Google has won the challenge
against a giant like Microsoft and against the previous search engine
market leaders Yahoo, Lycos, AltaVista, HotBot, and Excite.

CURRENT FIRM LEVEL STRATEGY


Google regularly explores all three manners of diversification with
new start-ups, with acquisition, and with strategic alliances.

● START-UPS
Google has a rule that employees can spend 20% of the time working on
pet projects that are not part of their job description. Such motivation
helps Google innovate and diversify into previously untapped businesses
but usually still makes use of their core competencies and capabilities. In
fact, both Gmail and Google News started off as 20% projects.

● ACQUISITIONS
Several of Google’s products are derived from acquisitions including
Docs, Earth, and YouTube. These products have expanded Google’s
brand and brought the previous users of these services to Google.
DoubleClick added the banner component of Google’s advertising
business and brought along significant revenue to Google’s income
statement.

● ALLIANCES
It is interesting to note that Google and Yahoo recently explored an
alliance for advertising, but federal judges threatened an antitrust
investigation, so Google backed out. Because the withdrawal was prompt
and respectful of the other partners, the move did not cause a financial
setback. Yahoo and Google in fact have a history together: back in the
early 2000s, Google provided all of Yahoo’s search results. Google has in
the past started organizations
to leverage the power of alliances. One example is OpenSocial which
allows developers to create applications that will work on all the member
companies’ websites. By giving developers a common API, the alliance
hopes to draw some of the attention away from Facebook, which is the
largest social networking site. Google also created the Open Handset
Alliance to promote the use of its open source Android operating system.
This alliance leverages the capabilities of both phone manufacturers and
independent developers to compete with Microsoft’s Windows Mobile
platform, RIM’s Blackberry, and Apple’s iPhone. Google understands the
wealth in diversification. Exploring new opportunities constantly over a
solid base of research could prove profitable with the use of products that
can reduce cost – cost of production, advertisements, etc. These new
products are crucial in gaining leverage in the constantly changing market
and providing an alternative industry if need be. Google understands that
valuable profits and minimized risk can be garnered with international
operations. The company’s international revenue totaled over $2.7 billion
in the second quarter of 2008, or 52% of their total revenue (Google, 2008).

Experiment-9
Aim- Case Study of Xen hypervisor.

Theory-
What is Xen hypervisor?
Xen is a type 1 hypervisor that creates logical pools of system resources
so that many virtual machines can share the same physical resources.

Xen is a hypervisor that runs directly on the system hardware. Xen
inserts a virtualization layer between the system hardware and the virtual
machines, turning the system hardware into a pool of logical computing
resources that Xen can dynamically allocate to any guest operating
system. The operating systems running in virtual machines interact with
the virtual resources as if they were physical resources.

Figure 1 shows a system with Xen running virtual machines.

Figure 1. The Xen architecture

Xen is running three virtual machines. Each virtual machine is running a
guest operating system and applications independent of other virtual
machines while sharing the same physical resources.

Features:
The following are key concepts of the Xen architecture:

● Full virtualization.
● Xen can run multiple guest OSes, each in its own VM.
● Instead of a driver, lots of great stuff happens in the Xen daemon.

1) Full virtualization
Most hypervisors are based on full virtualization, which means that they
completely emulate all hardware devices to the virtual machines. Guest
operating systems do not require any modification and behave as if they
each have exclusive access to the entire system.

Full virtualization often includes performance drawbacks because
complete emulation usually demands more processing resources (and
more overhead) from the hypervisor. Xen is based on paravirtualization; it
requires that the guest operating systems be modified to support the Xen
operating environment. However, the user space applications and
libraries do not require modification.

Operating system modifications are necessary for reasons like:

● So that Xen can replace the operating system as the most
privileged software.
● So that Xen can use more efficient interfaces (such as virtual
block devices and virtual network interfaces) to emulate devices
— this increases performance.

2) Xen can run multiple guest OSes, each in its own VM

Xen can run several guest operating systems each running in its own
virtual machine or domain. When Xen is first installed, it
automatically creates the first domain, Domain 0 (or dom0).

Domain 0 is the management domain and is responsible for managing
the system. It performs tasks like building additional domains (or virtual
machines), managing the virtual devices for each virtual machine,
suspending virtual machines, resuming virtual machines, and migrating
virtual machines. Domain 0 runs a guest operating system and is
responsible for the hardware devices.

3) Instead of a driver, lots of great stuff happens in the Xen daemon

The Xen daemon, xend, is a Python program that runs in dom0. It is the
central point of control for managing virtual resources across all the
virtual machines running on the Xen hypervisor. Most of the command
parsing, validation, and sequencing happens in user space in xend and
not in a driver.

IBM supports the SUSE Linux Enterprise Edition (SLES) 10 version of Xen
which supports the following configuration:

● Four virtual machines per processor and up to 64 virtual
machines per physical system.
● SLES 10 guest operating systems (paravirtualized only).

The Xen Architecture

● Xen is an open source hypervisor program developed by Cambridge
University. Xen is a micro-kernel hypervisor, which separates the
policy from the mechanism. The Xen hypervisor implements all the
mechanisms, leaving the policy to be handled by Domain 0, as
shown in Figure 3.5. Xen does not include any device drivers natively
[7]. It just provides a mechanism by which a guest OS can have
direct access to the physical devices. As a result, the size of the Xen
hypervisor is kept rather small. Xen provides a virtual environment
located between the hardware and the OS. A number of vendors
are in the process of developing commercial Xen hypervisors,
among them are Citrix XenServer [62] and Oracle VM [42].

● The core components of a Xen system are the hypervisor, kernel,
and applications. The organization of the three components is
important. Like other virtualization systems, many guest OSes can
run on top of the hypervisor. However, not all guest OSes are
created equal, and one in particular controls the others.
● The guest OS, which has control ability, is called Domain 0, and
the others are called Domain U. Domain 0 is a privileged guest OS
of Xen. It is first loaded when Xen boots, without any file system
drivers being available. Domain 0 is designed to access hardware
directly and manage devices. Therefore, one of the responsibilities
of Domain 0 is to allocate and map hardware resources for the
guest domains (the Domain U domains).

● For example, Xen is based on Linux and its security level is C2. Its
management VM is named Domain 0, which has the privilege to
manage other VMs implemented on the same host. If Domain 0 is
compromised, the hacker can control the entire system. So, in the
VM system, security policies are needed to improve the security of
Domain 0. Domain 0, behaving as a VMM, allows users to create,
copy, save, read, modify, share, migrate, and roll back VMs as
easily as manipulating a file, which flexibly provides tremendous
benefits for users. Unfortunately, it also brings a series of security
problems during the software life cycle and data lifetime.

● Traditionally, a machine’s lifetime can be envisioned as a straight
line where the current state of the machine is a point that
progresses monotonically as the software executes. During this
time, configuration changes are made, software is installed, and
patches are applied. In such an environment, the VM state is akin to
a tree: At any point, execution can go into N different branches
where multiple instances of a VM can exist at any point in this tree
at any given time. VMs are allowed to roll back to previous states in
their execution (e.g., to fix configuration errors) or rerun from the
same point many times (e.g., as a means of distributing
dynamic content or circulating a “live” system image).

Deploying virtualization
To deploy virtualization for Xen:

● Install Xen on the system.


● Create and configure virtual machines (this includes the
guest operating system).

Install the Xen software using one of the following methods:

● Interactive install: Use this procedure to install directly on a
dedicated virtual machine on the Xen server. This dedicated virtual
machine is referred to as the client computer in the install procedure.
● Install from CommCell console: Use this procedure to install
remotely on a dedicated virtual machine on the Xen server.
See Related topics for more info on deploying virtualization.
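To make the second step concrete, a paravirtualized guest domain is typically described by a small configuration file that Domain 0 uses to build the VM. The sketch below is a hypothetical minimal guest config in the xl/xm format; the kernel paths, logical volume, and bridge name are assumptions that vary by distribution:

```
# /etc/xen/guest1.cfg -- hypothetical minimal paravirtualized domU
name    = "guest1"                         # unique domain name
kernel  = "/boot/vmlinuz-xen"              # PV-capable guest kernel (path assumed)
ramdisk = "/boot/initrd-xen"               # matching initial ramdisk (path assumed)
memory  = 1024                             # guest RAM in MB
vcpus   = 2                                # virtual CPUs
disk    = ['phy:/dev/vg0/guest1,xvda,w']   # virtual block device backed by an LV
vif     = ['bridge=xenbr0']                # virtual NIC attached to bridge xenbr0
```

Domain 0 can then build and inspect the guest with the standard toolstack commands, for example `xl create /etc/xen/guest1.cfg` and `xl list`.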

Managing your virtual machines

There are several virtual machine managers available including:

Open-source managers: OpenXenManager, an open-source clone of Citrix’s
XenServer XenCenter, manages both XCP and Citrix’s XenServer. Xen
Cloud Control System (XCCS) is a lightweight front-end package for the
excellent Xen Cloud Platform cloud computing system. Zentific, a web-
based management interface for the effective control of virtual machines
running upon the Xen hypervisor.
Commercial managers: Convirture: ConVirt is a centralized management
solution that lets you provision, monitor, and manage the complete life
cycle of your Xen deployment. Citrix XenCenter is a Windows-native
graphical user interface for managing Citrix XenServer and XCP. Versiera
is a web-based Internet technology designed to securely manage and
monitor both cloud environments and enterprises with support for Linux,
FreeBSD, OpenBSD, NetBSD, OS X, Windows, Solaris, OpenWRT, and DD-
WRT.

Choosing Xen
On the pro side:

● The Xen server is built on the open source Xen hypervisor and
uses a combination of paravirtualization and hardware-assisted
virtualization. This collaboration between the OS and the
virtualization platform enables the development of a simpler
hypervisor that delivers highly optimized performance.
● Xen provides sophisticated workload balancing that captures
CPU, memory, disk I/O, and network I/O data; it offers two
optimization modes: one for performance and another for density.
● The Xen server takes advantage of a unique storage integration
feature called the Citrix Storage Link. With it, the sysadmin can
directly leverage features of arrays from such companies as HP,
Dell EqualLogic, NetApp, EMC, and others.
● The Xen server includes multicore processor support, live
migration, physical-server-to-virtual-machine conversion (P2V) and
virtual-to-virtual conversion (V2V) tools, centralized multiserver
management, real-time performance monitoring, and speedy
performance for Windows and Linux.

On the con side:


● Xen has a relatively large footprint and relies on Linux in dom0.
● Xen relies on third-party solutions for hardware device
drivers, storage, backup and recovery, and fault tolerance.
● Xen gets bogged down with anything with a high I/O rate
or anything that sucks up resources and starves other VMs.
● Xen’s integration can be problematic; it could become a burden
on your Linux kernel over time.
● XenServer 5 is missing 802.1Q virtual local area network (VLAN)
trunking; as for security, it doesn’t offer directory services
integration, role-based access controls, or security logging and
auditing of administrative actions.

Experiment- 10
Aim- Case Study of Open Stack.
Theory-
What is OpenStack?
OpenStack is a collection of open source software modules and tools that
provides a framework to create and manage both public cloud and
private cloud infrastructure.
OpenStack delivers infrastructure-as-a-service functionality -- it pools,
provisions and manages large concentrations of compute, storage and
network resources. These resources, which include bare metal hardware,
virtual machines (VMs) and containers, are managed through application
programming interfaces (APIs) as well as an OpenStack dashboard. Other
OpenStack components provide orchestration, fault management and
services intended to support reliable, high availability operations.
Businesses and service providers can deploy OpenStack on premises (in
the data center to build a private cloud), in the cloud to enable or drive
public cloud platforms, and at the network edge for distributed computing
systems.

What does OpenStack do?


To create a cloud computing environment, an organization typically builds
off of its existing virtualized infrastructure, using a well-established
hypervisor such as VMware vSphere, Microsoft Hyper-V or KVM. However,
cloud computing offers more than just virtualization -- a public or private
cloud provides extensive provisioning, lifecycle automation, user self-
service, cost reporting and billing, orchestration and other features.
Installing OpenStack software on top of a virtualized environment forms a
cloud operating system. An organization can use that to organize,
provision and manage large pools of heterogeneous compute, storage
and network resources. Whereas an IT administrator typically provisions
and manages resources in a more traditional virtualized environment,
OpenStack enables individual users to provision resources through
management dashboards and an API.
This cloud-based infrastructure created through OpenStack supports an
array of use cases, including web hosting, big data projects, software-as-a-
service delivery or container deployment.
OpenStack competes most directly with other open source cloud
platforms, including Eucalyptus and Apache CloudStack. Some also see it
as an alternative to public cloud platforms such as Amazon Web Services
or Microsoft Azure, and some smaller public cloud providers use
OpenStack as the native cloud platform.

How does OpenStack work?


OpenStack is not an application in the traditional sense, but rather a
platform composed of several dozen separate components, called projects,
which interoperate with each other through APIs. Each component is
complementary, but not all components are required to create a basic
cloud. Organizations can install only select components that build the
features and functionality in a desired cloud environment.
OpenStack also relies on two additional foundation technologies: a base
operating system, such as Linux, and a virtualization platform, such as
VMware or Citrix. The OS handles the commands and data exchanged
from OpenStack, while the virtualization engine manages the virtualized
hardware resources used by OpenStack projects.
Once the OS, virtualization platform and OpenStack components are
deployed and configured properly, administrators can provision and
manage the instanced resources that applications require. Actions and
requests made through a dashboard produce a series of API calls, which
are authenticated through a security service and delivered to the
destination component, which executes the associated tasks.
As a simple example, an administrator logs into OpenStack and manages
the cloud environment through a dashboard. Administrators can create
and connect new compute instances and storage instances, and configure
network behaviors. Additionally, an administrator might connect various
other services, such as to monitor the performance of a provisioned
instance and employ resource billing and chargeback.
The OpenStack platform's vast scope and sheer number of interrelated
components can be confusing, and even daunting. Most OpenStack
adopters start with a small number of essential components and
gradually deploy other components over time to build out their cloud's
operational and business capabilities.

What are the different OpenStack components?


The OpenStack cloud platform is an amalgam of software components.
These components are shaped by open source contributions from the
developer community, and OpenStack adopters can choose to implement
some or all of these components as business needs dictate. The following
map shows all OpenStack components, as of April 2021.
[Figure: Map of all OpenStack components (as of April 2021), their
functions and interactions. Source: openstack.org, licensed under
Creative Commons Attribution 4.0.]
OpenStack setups vary, but typically start with a handful of central
components: compute (Nova), VM images (Glance), networking (Neutron),
storage (Cinder or Swift), identity management (Keystone) and resource
management (Placement).

What are the pros and cons of OpenStack?


Many enterprises that deploy and maintain an OpenStack infrastructure
enjoy several advantages, including that it is:
Affordable. OpenStack is available freely as open source software
released under the Apache 2.0 license. This means there is no upfront
cost to acquire and use OpenStack.
Reliable. With almost a decade of development and use, OpenStack
provides a comprehensive and proven production-ready modular platform
upon which an enterprise can build and operate a private or public cloud.
Its rich set of capabilities includes scalable storage, good performance
and high data security, and it enjoys broad acceptance across industries.
Vendor-neutral. Because of OpenStack's open source nature, some
organizations also see it as a way to avoid vendor lock-in, as an overall
platform as well as its individual component functions.
But potential adopters must also consider some drawbacks, such as the
following:
Complexity. Because of its size and scope, OpenStack requires an IT staff
with significant knowledge to deploy the platform and make it work. In
some cases, an organization might require additional staff or a consulting
firm to deploy OpenStack, which adds time and cost.
Support. As open source software, OpenStack is not owned or directed by
any one vendor or team. This can make it difficult to obtain support for
the technology, beyond the open source community.
Consistency. The OpenStack component suite is always in flux as new
components are added and others are deprecated.
To reduce the complexity of an OpenStack deployment, and to gain direct
access to technical support, an organization can select an OpenStack
distribution from a vendor. This is a version of the open source platform
packaged with other components, such as an installation program and
management tools. It often comes with technical support options.
An organization has many OpenStack distributions to choose from,
including the Red Hat OpenStack platform, the Mirantis Cloud Platform
and the Rackspace OpenStack private cloud.

OpenStack vs. other cloud platforms


Even simple clouds are complex and require extensive automation,
orchestration and management to operate. This means there are few
direct alternatives to OpenStack that are practical and proven. However,
there are some options that can help organizations combine the benefits
of cloud and on-premises capabilities to simplify or speed an enterprise's
adoption of next-generation technology.
Kubernetes (containers)
Organizations with small, dynamic container-based environments may
balk at OpenStack's embrace of traditional VMs. They may instead opt for
a pure container-based approach using a platform such as Kubernetes.
Hybrid cloud stacks
The three major public cloud providers all provide managed offerings
for on-premises clouds, with a strong emphasis on hybrid cloud
adoption.
AWS Outposts, Azure Stack and Google Anthos all offer appliances that
sit within a local data center to facilitate a range of services that mimic
the providers' public services and capabilities.
VMware vCloud
Given the vast enterprise investments in virtualization technology, it's
natural to consider building a private cloud based on VMware's vCloud
Suite. VMware has partnerships with cloud providers, notably AWS, to
support such hybrid cloud projects. However, VMware software is
proprietary and requires licensing, and it may offer fewer capabilities and
less flexibility than an open source platform such as OpenStack.
Public clouds only
Plenty of organizations decide that the breadth and reliability of public
cloud services fulfil their requirements, thereby avoiding the need to
invest financially and intellectually in a private cloud infrastructure.

How to get started with OpenStack


OpenStack adoption is a process, not an event. There are potentially
dozens of components to understand, install and employ. Organizations
that seek to build a private cloud based on OpenStack need time,
financial investment and support from upper management.
Testing. OpenStack adoption typically starts with a technology evaluation
-- a test drive to see what an OpenStack setup looks like and how it
operates. The OpenStack Public Cloud Passport offers trial programs from
various OpenStack public cloud providers. Organizations that prefer to
install and run OpenStack locally for a hands-on examination can use
the DevStack distribution, which focuses on the dashboard and
OpenStack administration/user interactions and can be installed on a
single computer.
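For such a hands-on evaluation, DevStack is configured through a local.conf file in the devstack checkout. The fragment below is a hypothetical minimal example; the passwords and host IP are placeholders to replace with your own values:

```
[[local|localrc]]
# Placeholder passwords for the core services -- choose your own.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Address of the machine running DevStack (an assumption).
HOST_IP=192.168.1.10
```

Running ./stack.sh from the devstack directory then installs and starts the default set of services, including the Horizon dashboard.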

[Figure: Screenshot of the OpenStack Horizon dashboard, from which IT
admins can view usage and manage instances, volumes, networking
and other functions. Source: openstack.org, licensed under Creative
Commons Attribution 3.0.]
Preparation. Once an organization chooses to adopt OpenStack, it
must prepare to address the following three elements:
Education. Learn more about OpenStack components, how they
operate and how they're used.
Support. Identify and engage with OpenStack support services, from
simply finding online communities to identifying competent OpenStack
employees and third-party contractors.
Infrastructure. Identify the hardware infrastructure to initially deploy
OpenStack, which may require procurement and installation.
Deployment. Organizations should consider starting with limited,
proof-of-concept OpenStack projects. As an example, the OpenStack Compute
Starter Kit focuses on just five components: Nova (compute), Glance (VM
images), Keystone (identity management), Neutron (networking) and
Placement (resource usage and tracking).
Expansion. As an organization gains expertise in the OpenStack
environment, it may want to expand its OpenStack deployment through
additional components. It is highly unlikely that every business use case
will need every available component, so organizations can select
components, such as monitoring or billing, that fit specific business goals.

OpenStack releases
OpenStack versions are released in the spring and fall of each year.
These releases follow an alphabetical naming scheme, starting with the
initial Austin release in 2010.
OpenStack releases 2010-2019
The original OpenStack releases -- Austin, Bexar and Cactus -- are no
longer available. Releases between 2012 and 2016 are all at end-of-life
status as of late 2021: Diablo, Essex, Folsom, Grizzly, Havana, Icehouse,
Juno, Kilo, Liberty, Mitaka and Newton.
OpenStack releases from 2017-2019 are now in what's called extended
maintenance status: Ocata, Pike, Queens, Rocky, Stein and Train.
OpenStack releases 2020-2021
OpenStack releases in 2020, Ussuri and Victoria, are actively maintained
and supported by the community.
The Wallaby OpenStack release arrived in April 2021. Notable
improvements in Wallaby focused on role-based access control and
integration with other open source projects, including Ceph (distributed
storage), Kubernetes (container orchestration) and Prometheus
(monitoring and alerts).
Future OpenStack releases
The Xena version of OpenStack has an anticipated release in October 2021.
The Yoga release is expected in March 2022.

OpenStack Foundation
OpenStack was originally developed through a partnership between the
U.S. National Aeronautics and Space Administration and Rackspace, a
managed hosting and cloud computing service provider. In September
2012, the OpenStack Foundation was created as an independent non-
profit organization to oversee the OpenStack platform and community,
governed by a board of directors comprised of many direct and indirect
competitors, including IBM, Intel and VMware.
In October 2020, the OpenStack Foundation was relaunched as the Open
Infrastructure Foundation (OpenInfra) with a mission to more broadly
support other open source infrastructure communities and foster
continued development around public, private and hybrid clouds.
Various OpenInfra projects involve artificial intelligence and machine
learning, CI/CD software development paradigms, container
infrastructure and edge computing.

OpenStack platform providers


While comprehensive and capable, an OpenStack platform is difficult to
deploy from scratch. The OpenStack market provides a variety of
alternatives, including the following:
Distributions. Organizations can choose pre-packaged software offerings
that include or support OpenStack. Examples include VMware
Integrated OpenStack, Debian, SUSE OpenStack Cloud and Red Hat
OpenStack Platform.
Appliances. These combine OpenStack software with vendors' selected
hardware to accelerate deployment. Examples include IBM Spectrum
Scale with OpenStack Swift, and the Dell EMC Ready Architecture for Red
Hat OpenStack Platform.
Managed private cloud. Third-party organizations can support and help
with local OpenStack deployment and operation. Examples include IBM
Bluemix Private Cloud Local, Rackspace OpenStack Private Cloud and
Tencent Cloud TStack.
Hosted private cloud. Some organizations cannot deploy and manage
a private cloud on-site, and instead rely on third-party providers to
handle the hardware and management of OpenStack-based private
clouds.
Examples include IBM Bluemix Private Cloud, Canonical's Managed
OpenStack and Rackspace OpenStack Private Cloud.
Public cloud. Various public cloud providers offer services based on
OpenStack technology, such as Rackspace Public Cloud, VEXXHOST Public
Cloud and Elastx OpenStack.
