
Cloud Computing 18CS72

Unit-3: AWS
Amazon Web Services, Inc. is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go
basis.

• VMs can be used to share computing resources both flexibly and safely.
• Amazon has been a leader in providing public cloud services (http://aws.amazon.com/).
• Amazon applies the IaaS model in providing its services. Figure 4.21 shows the AWS
architecture.
• EC2 provides the virtualized platforms that host the VMs on which cloud applications run.
  S3 (Simple Storage Service) provides the object-oriented storage service for users.
• EBS (Elastic Block Store) provides a block storage interface which can be used to support
  traditional applications.
• SQS stands for Simple Queue Service; its job is to ensure a reliable message service between
  two processes (a short sketch of this pattern follows the list).
• The message can be kept reliably even when the receiver processes are not running.
• Users can access their objects through SOAP with either browsers or other client
programs which support the SOAP standard.
• AWS Import/Export allows one to ship large volumes of data to and from EC2 by
shipping physical disks; it is well known that this is often the highest bandwidth
connection between geographically distant systems.
• Amazon CloudFront implements a content distribution network (CDN).
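
The SQS messaging pattern described above can be sketched with the boto3 Python SDK. This is a
minimal illustration only; it assumes boto3 is installed, AWS credentials are configured, and the
queue name "demo-queue" is a hypothetical placeholder.

    import boto3

    sqs = boto3.client("sqs")

    # Create (or look up) a queue and send a message; SQS stores the message durably
    # even if no receiver process is currently running.
    queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="job-42: resize uploaded image")

    # A separate receiver process can pick the message up later.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        print("received:", msg["Body"])
        # Delete the message only after it has been processed successfully.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])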

Microsoft Windows Azure


In 2008, Microsoft launched the Windows Azure platform to meet the challenges of cloud
computing. This platform is built over Microsoft data centers. Figure 4.22 shows the overall
architecture of Microsoft's cloud platform. The platform is divided into three major component
platforms. Windows Azure offers a cloud platform built on the Windows OS and based on Microsoft
virtualization technology.

Applications are installed on VMs deployed on the data-center servers. Azure manages all
servers, storage, and network resources of the data center. On top of the infrastructure are the
various services for building different cloud applications. Cloud-level services provided by the
Azure platform are introduced below.

• Live service: Users can access Microsoft Live applications and use the data involved across
multiple machines concurrently.
• .NET service: This package supports application development on local hosts and execution on
cloud machines.
• SQL Azure: This service makes it easier for users to access and use the relational database
associated with SQL Server in the cloud (a minimal connection sketch follows this list).
• SharePoint service: This provides a scalable and manageable platform for users to develop their
special business applications as upgraded web services.
• Dynamics CRM service: This provides software developers a business platform for managing
CRM applications in finance, marketing, sales, and promotions.
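
As a minimal sketch of using SQL Azure from application code, the snippet below opens a
connection with the pyodbc package. The server name, database, and credentials are hypothetical
placeholders, and an ODBC driver for SQL Server is assumed to be installed.

    import pyodbc

    # Hypothetical connection details for an Azure-hosted SQL database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=tcp:myserver.database.windows.net,1433;"
        "DATABASE=mydb;UID=myuser;PWD=mypassword;Encrypt=yes"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 name FROM sys.tables")  # query the cloud-hosted relational database
    for row in cursor.fetchall():
        print(row[0])
    conn.close()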


GAE: Google App Engine is a cloud computing platform as a service for developing and hosting
web applications in Google-managed data centers. Applications are sandboxed and run across
multiple servers. Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET,
and Ruby applications, although it can also support other languages via "custom runtimes".
The service is free up to a certain level of consumed resources, but only in the standard
environment, not in the flexible environment. Fees are charged for additional storage, bandwidth,
or instance hours required by the application.
Advantages of GAE include:

• Readily available servers with no configuration requirement
• Automatic scaling all the way down to "free" when resource usage is minimal
• Automated cloud computing tools

Building applications on the cloud is gaining traction because it accelerates business opportunities
while ensuring availability, security, accessibility, and scalability. However, to start creating web
applications, you require a suitable cloud computing technology. This is where Google App Engine
fits in, by allowing you to build and host web applications on a fully managed serverless platform.
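
As a minimal sketch, a standard-environment App Engine application in Python can be as small as
the handler below. It assumes Flask is listed in requirements.txt and that an app.yaml containing
"runtime: python39" sits next to it; the app is then deployed with the gcloud app deploy command.

    # main.py -- minimal App Engine (standard environment) web application.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # App Engine routes incoming HTTP requests to this handler and scales the
        # number of serving instances automatically (down to zero when idle).
        return "Hello from Google App Engine!"

    if __name__ == "__main__":
        # Local development only; in production App Engine runs the app behind its own front end.
        app.run(host="127.0.0.1", port=8080, debug=True)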

Public, Private, and Hybrid Clouds:

Public: A public cloud is an IT model where on-demand computing services and infrastructure
are managed by a third-party provider and shared with multiple organizations using the public
Internet. A public cloud makes computing resources available to anyone for purchase, and
multiple users typically share its use.

Private: A private cloud (also known as an internal cloud or corporate cloud) is a cloud
computing environment in which all hardware and software resources are dedicated to a single
organization. Private clouds attempt to achieve customization and offer higher efficiency,
resiliency, security, and privacy.


Hybrid clouds operate in the middle, with many compromises in terms of resource sharing.

The concept of cloud computing has evolved from cluster, grid, and utility computing. Cluster
and grid computing leverage the use of many computers in parallel to solve problems of any size.
Utility and Software as a Service (SaaS) provide computing resources as a service with the
notion of pay per use. Cloud computing leverages dynamic resources to deliver large numbers of
services to end users. Cloud computing is a high-throughput computing (HTC) paradigm
whereby the infrastructure provides the services through a large data center or server farms. The
cloud computing model enables users to share access to resources from anywhere at any time
through their connected devices.

Hybrid Clouds
A hybrid cloud is built with both public and private clouds, as shown at the lower-left corner of
Figure 4.1. Private clouds can also support a hybrid cloud model by supplementing local
infrastructure with computing capacity from an external public cloud. For example, the Research
Compute Cloud (RC2) is a private cloud, built by IBM, that interconnects the computing and IT
resources at eight IBM Research Centers scattered throughout the United States, Europe, and
Asia. A hybrid cloud provides access to clients, the partner network, and third parties. In
summary, public clouds promote standardization, preserve capital investment, and offer
application flexibility.


Fat-tree switch network topology: Figure 4.10 shows a fat-tree switch network design for data-
center construction. The fat-tree topology is applied to interconnect the server nodes. The
topology is organized into two layers.

Server nodes are in the bottom layer, and edge switches are used to connect the nodes in the
bottom layer. The upper layer aggregates the lower-layer edge switches. A group of aggregation
switches, edge switches, and their leaf nodes form a pod. Core switches provide paths among
different pods.

• The fat-tree structure provides multiple paths between any two server nodes. This
provides fault-tolerant capability with an alternate path in case of some isolated link
failures.
• The failure of an aggregation switch or a core switch will not affect the connectivity of
the whole network.
• The failure of any edge switch can only affect a small number of end server nodes.
• The extra switches in a pod provide higher bandwidth to support cloud applications in
massive data movement.
• The building blocks used are low-cost Ethernet switches, which reduces the cost considerably.
• The routing table provides extra routing paths in case of failure. The routing algorithms
are built inside the switches.
• The end server nodes in the data center are not affected during a switch failure, as long as
the alternate routing path does not fail at the same time.
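
As a worked example of the structure above, the classic k-ary fat-tree built from identical
k-port switches has k pods, k/2 edge and k/2 aggregation switches per pod, and (k/2)^2 core
switches, supporting k^3/4 servers. The Python sketch below (an illustrative assumption that the
data center follows this standard design) computes those sizes.

    def fat_tree_sizes(k: int) -> dict:
        """Sizes of a k-ary fat-tree built from identical k-port switches."""
        assert k % 2 == 0, "k (ports per switch) must be even"
        half = k // 2
        return {
            "pods": k,
            "edge_switches_per_pod": half,
            "aggregation_switches_per_pod": half,
            "core_switches": half * half,      # (k/2)^2 equal-cost paths between pods
            "hosts_per_edge_switch": half,
            "total_hosts": (k ** 3) // 4,      # k pods * (k/2) edge switches * (k/2) hosts
        }

    if __name__ == "__main__":
        # With 48-port commodity Ethernet switches the fabric supports 27,648 servers.
        print(fat_tree_sizes(48))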


Unit-4: MapReduce, Eucalyptus architecture, AWS S3 SOAP principles, Google File System
architecture

MapReduce: MapReduce is a software framework and programming model used for processing
huge amounts of data. MapReduce programs in cloud computing are parallel in nature and are
therefore very useful for performing large-scale data analysis using multiple machines in a
cluster. The input to each phase is a set of key-value pairs.

There has been substantial interest in "data parallel" languages largely aimed at loosely coupled
computations which execute over different data samples. The language and runtime provide
efficient execution of "many-task" problems, which are well known as successful grid
applications.

The major advantage of MapReduce is that it is easy to scale data processing over multiple
computing nodes. Under the MapReduce model, the data processing primitives are called
mappers and reducers. Decomposing a data processing application into mappers and reducers is
sometimes nontrivial. But, once we write an application in the MapReduce form, scaling the
application to run over hundreds, thousands, or even tens of thousands of machines in a cluster is
merely a configuration change. This simple scalability is what has attracted many programmers
to use the MapReduce model.
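
As a minimal sketch of the model, the classic word-count example below runs the mapper and
reducer primitives in a single Python process; a real framework such as Hadoop would execute
many mappers and reducers in parallel across the cluster and handle the shuffle stage for you.

    from collections import defaultdict

    def mapper(line):
        # Map phase: emit intermediate (key, value) pairs -- here (word, 1) for each word.
        for word in line.split():
            yield (word.lower(), 1)

    def reducer(word, counts):
        # Reduce phase: combine all values that share the same key.
        return (word, sum(counts))

    def map_reduce(lines):
        groups = defaultdict(list)
        for line in lines:                        # "shuffle": group intermediate pairs by key
            for key, value in mapper(line):
                groups[key].append(value)
        return [reducer(key, values) for key, values in groups.items()]

    if __name__ == "__main__":
        print(map_reduce(["the cloud scales", "the cloud is elastic"]))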

AWS S3 SOAP principles

Amazon S3 provides a simple web services interface that can be used to store and retrieve any
amount of data, at any time, from anywhere on the web. S3 provides the object-oriented storage
service for users. Users can access their objects through Simple Object Access Protocol (SOAP)
with either browsers or other client programs which support SOAP. SQS is responsible for
ensuring a reliable message service between two processes, even if the receiver processes are not
running.

Figure 6.24 shows the S3 execution environment. The fundamental operation unit of S3 is called
an object. Each object is stored in a bucket and retrieved via a unique, developer-assigned key. In
other words, the bucket is the container of the object. Besides its unique key, the object has
other attributes such as values, metadata, and access control information. From the programmer's
perspective, the storage provided by S3 can be viewed as a very coarse-grained key-value pair.

Through the key-value programming interface, users can write, read, and delete objects containing
from 1 byte to 5 gigabytes of data each. There are two types of web service interface for the user
to access the data stored in Amazon clouds.


One is a REST (web 2.0) interface, and the other is a SOAP interface. Here are some key features of S3:
• Redundant through geographic dispersion.
• Designed to provide 99.999999999 percent durability and 99.99 percent availability of objects
over a given year with cheaper reduced redundancy storage (RRS).
• Authentication mechanisms to ensure that data is kept secure from unauthorized access. Objects
can be made private or public, and rights can be granted to specific users.
• Per-object URLs and ACLs (access control lists).
• Default download protocol of HTTP. A BitTorrent protocol interface is provided to lower costs
for high-scale distribution.
• Storage costs from $0.055 (for more than 5,000 TB stored) to $0.15 per GB per month, depending
on the total amount stored.
• The first 1 GB per month of input or output is free; thereafter, transfers outside an S3 region
cost $0.08 to $0.15 per GB.
• There is no data transfer charge for data transferred between Amazon EC2 and Amazon S3
within the same region or for data transferred between the Amazon EC2 Northern Virginia
region and the Amazon S3 U.S. Standard region (as of October 6, 2010).
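
The key-value interface described above (write, read, delete by bucket and key) can be sketched
with the boto3 Python SDK, which talks to the REST interface. The bucket name and key below are
hypothetical placeholders, and AWS credentials are assumed to be configured.

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-demo-bucket", "notes/unit4.txt"

    # Write: the object (value) is stored in a bucket under a developer-assigned key.
    s3.put_object(Bucket=bucket, Key=key, Body=b"MapReduce and GFS notes")

    # Read: retrieve the object by bucket + key.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    print(body.decode())

    # Delete: remove the object from the bucket.
    s3.delete_object(Bucket=bucket, Key=key)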

Google File System: Google File System (GFS) is a scalable distributed file system (DFS)
created by Google Inc. to accommodate Google's expanding data-processing requirements. The
architecture of the file system is based on a master node, which contains a global map of the
file system and keeps track of the status of all the storage nodes, and a pool of chunk servers,
which provide distributed storage space in which to store files. GFS (or GoogleFS, not to be
confused with the GFS Linux file system) is a proprietary distributed file system developed by
Google to provide efficient, reliable access to data using large clusters of commodity hardware.

A GFS cluster consists of multiple nodes. These nodes are divided into two types: one master
node and multiple chunkservers. Each file is divided into fixed-size chunks, which the
chunkservers store. Each chunk is assigned a globally unique 64-bit label by the master node at
the time of creation, and logical mappings of files to their constituent chunks are maintained.
Each chunk is replicated several times throughout the network.
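
The division of labor between the master (metadata only) and the chunkservers (chunk data) can be
illustrated with the toy Python sketch below. The 64 MB chunk size and the replication factor of 3
are GFS's published defaults; everything else (server count, placement rule) is a simplifying
assumption for illustration only.

    import uuid

    CHUNK_SIZE = 64 * 1024 * 1024            # GFS's default fixed chunk size (64 MB)
    REPLICAS = 3                             # each chunk is replicated several times

    master = {}                              # file name -> ordered list of 64-bit chunk handles
    chunkservers = [dict() for _ in range(5)]   # chunk handle -> chunk bytes, one dict per server

    def write_file(name, data):
        handles = []
        for offset in range(0, len(data), CHUNK_SIZE):
            handle = uuid.uuid4().int >> 64          # globally unique 64-bit label
            chunk = data[offset:offset + CHUNK_SIZE]
            for i in range(REPLICAS):                # naive placement on REPLICAS different servers
                chunkservers[(handle + i) % len(chunkservers)][handle] = chunk
            handles.append(handle)
        master[name] = handles                       # the master stores only metadata, never file data

    write_file("web-crawl.log", b"x" * (70 * 1024 * 1024))   # a 70 MB file -> 2 chunks
    print(len(master["web-crawl.log"]), "chunk handles recorded by the master")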

Eucalyptus architecture: Eucalyptus is an open-source software platform for implementing IaaS
(Infrastructure as a Service) in a hybrid cloud or private cloud computing environment. It pools
together existing virtualized infrastructure to create cloud resources for storage as a service,
network as a service, and infrastructure as a service. Eucalyptus is short for Elastic Utility
Computing Architecture for Linking Your Programs To Useful Systems.


Eucalyptus announced a formal agreement with Amazon Web Services (AWS) in March 2012, allowing
administrators to move instances between Amazon Elastic Compute Cloud (EC2) and a Eucalyptus
private cloud to create a hybrid cloud. The agreement also allows Eucalyptus to work with
Amazon's product teams to develop AWS-compatible features.

Eucalyptus can be easily deployed on existing IT infrastructure, so organizations can enjoy the
advantages of both the private cloud and hybrid cloud models.
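
Because Eucalyptus exposes an EC2-compatible API, standard AWS client tooling can be pointed at a
private Eucalyptus cloud. The sketch below uses the boto3 Python SDK with a hypothetical
Eucalyptus endpoint and placeholder credentials; the endpoint URL and region name are assumptions
for illustration.

    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://compute.eucalyptus.example.com:8773/",  # hypothetical private-cloud endpoint
        region_name="eucalyptus",
        aws_access_key_id="EUCA_ACCESS_KEY",        # placeholder credentials issued by the private cloud
        aws_secret_access_key="EUCA_SECRET_KEY",
    )

    # The same API calls that work against Amazon EC2 work against the private cloud,
    # which is what makes moving instances between the two (hybrid operation) practical.
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])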

Unit-5: Cloud and cloudlets, architecture of Facebook

Cloud and cloudlets


Cloud                                          Cloudlet
Large-scale data center                        Small-scale data center
Hard and soft state                            Soft state only
Professionally managed                         Self-managed
Machine room with power conditioning           "Data center in a box"
Tolerates high Internet latency                Requires low Internet latency
More than a hundred users can share at a time  Only a few users share at a time
Far from the mobile user                       Near to the mobile user
Centralized ownership                          Decentralized ownership

Architecture of Facebook: Facebook is a social networking site that makes it easy to connect and
share with family and friends online. Users sign up for free profiles and can connect with
friends, work colleagues, or people they do not know. They can share pictures, music, videos, and
articles, as well as their own thoughts and opinions, with as many people as they like.

Facebook is one of the biggest tech companies that does not use AWS or Azure; it does not rely on
any public cloud to store its information, running its own data centers instead.
