Unit 1 ICC
Architecture of cloud
1. Client Infrastructure
Client infrastructure is the front-end component. It provides the graphical user interface (GUI) through which clients interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access according to the client's
requirement.
i. Software as a Service (SaaS) – Mostly, SaaS applications run directly through the
web browser, meaning we do not need to download and install these applications.
Examples: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is quite similar to SaaS, but the difference is
that PaaS provides a platform for software creation, whereas with SaaS we can access
software over the internet without the need for any platform. Examples: Windows Azure,
Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It provides virtualized computing resources
such as servers, storage, and networking over the internet. Examples: Amazon EC2,
Google Compute Engine.
4.Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual
machines.
5. Storage
Storage provides a huge amount of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services at the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage,
network devices, virtualization software, and other resources needed to support the
cloud computing model.
7. Management
Management is used to manage back-end components such as application, service, runtime cloud, storage, infrastructure, and security, and to coordinate between them.
8. Security
Security is an in-built back-end component of cloud computing; it implements security mechanisms to protect cloud resources.
9. Internet
The Internet is the medium through which the front end and back end interact and
communicate with each other.
Cloud computing offers various deployment models to cater to different needs and
preferences. Here's how on-premise and cloud infrastructure compare, followed by a
breakdown of the three primary deployment models:
On Premise vs On Cloud:
● Ownership: On premise, organizations own and manage all hardware, software, and data within their physical data centers; on cloud, the cloud provider owns and manages the underlying infrastructure.
● Control: On premise, organizations have complete control over security, customization, and performance; on cloud, they may have some control over configurations and access management, but not the physical infrastructure itself.
● Scaling: On premise, scaling resources (servers, storage) can be complex and time-consuming; on cloud, scaling is typically easier and faster.
● Cost: On premise involves upfront costs for hardware, software licenses, and ongoing maintenance; the cloud typically follows a pay-as-you-go model, where users pay only for the resources they use (a cost sketch follows this comparison).
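To make the cost row concrete, here is a minimal Python sketch comparing a hypothetical upfront on-premise outlay against pay-as-you-go cloud billing. All prices, maintenance figures, and usage hours are illustrative assumptions, not real vendor quotes.

```python
# Hypothetical cost comparison: on-premise upfront spend vs. cloud pay-as-you-go.
# Every figure below is an illustrative assumption, not a real price.

UPFRONT_HARDWARE = 50_000    # assumed one-time server and licence cost
ANNUAL_MAINTENANCE = 8_000   # assumed yearly on-premise upkeep
CLOUD_RATE_PER_HOUR = 0.50   # assumed hourly price of one cloud instance

def on_premise_cost(years: int) -> float:
    """Total on-premise cost: upfront outlay plus yearly maintenance."""
    return UPFRONT_HARDWARE + ANNUAL_MAINTENANCE * years

def cloud_cost(hours_per_year: int, years: int) -> float:
    """Pay-as-you-go: billed only for the hours actually used."""
    return CLOUD_RATE_PER_HOUR * hours_per_year * years

if __name__ == "__main__":
    for hours in (2_000, 8_760):  # part-time use vs. running 24x7 all year
        print(f"{hours} h/yr over 3 yrs -> on-prem: ${on_premise_cost(3):,.0f}, "
              f"cloud: ${cloud_cost(hours, 3):,.0f}")
```

Under these assumed numbers, light usage strongly favours pay-as-you-go, while near-constant 24x7 usage narrows the gap, which is exactly the trade-off the table describes.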
1. Public Cloud:
● This is the most common type of cloud, where resources are shared among
multiple organizations over the internet.
● Computing resources are managed and operated by the Cloud Service Provider
(CSP). The CSP looks after the supporting infrastructure and ensures that the
resources are accessible to and scalable for the users.
● Public cloud is available at a lower cost than private and hybrid clouds.
● Public Cloud is less secure because resources are shared publicly.
● Performance depends upon the high-speed internet network link to the cloud
provider.
● Public cloud providers such as Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud Platform offer a wide range of services, including
IaaS, PaaS, and SaaS.
2. Private Cloud:
● A private cloud is dedicated to a single organization. It can be hosted on-
premises or in a data center owned by the organization or a third-party
provider.
● The organization has full control over the cloud because it is managed by the
organization itself, so it does not need to depend on anyone else.
● Customizable to meet specific business needs and compliance regulations.
● Private clouds offer more control and security than public clouds, but they can
be more expensive to set up and maintain.
● Limited access to the latest advancements and innovations offered by public
cloud providers.
● Reduced flexibility and agility compared to public cloud options.
3. Hybrid Cloud:
● A hybrid cloud combines public and private clouds, allowing organizations to
leverage the benefits of both. This can be useful for organizations that need to
keep sensitive data on-premises while using public cloud resources for other
applications.
● Hybrid cloud enables organizations to optimize costs by utilizing the cost-effective
public cloud for non-sensitive workloads while keeping mission-critical applications
and data on the more secure private cloud. This approach allows for efficient
resource allocation and cost management.
● A hybrid cloud facilitates seamless integration between on-premises infrastructure and
cloud environments.
● It provides greater control over sensitive data and compliance requirements.
● Managing a hybrid cloud is complex because it is difficult to manage more
than one type of deployment model.
Q. Evolution of cloud
The phrase “Cloud Computing” was first introduced in the 1950s to describe internet-related
services, and it evolved from distributed computing to the modern technology known as
cloud computing. Cloud services include those provided by Amazon, Google, and Microsoft.
Cloud computing allows users to access a wide range of services stored in the cloud or on the
Internet. Cloud computing services include computer resources, data storage, apps, servers,
development tools, and networking protocols.
Distributed Systems
Distributed systems consist of multiple independent systems appearing as a single entity to
users, designed for resource sharing and efficiency. They feature scalability, concurrency,
availability, heterogeneity, and fault independence. The challenge was the need for all
systems to be geographically co-located, leading to the development of mainframe, cluster,
and grid computing.
Mainframe Computing
Introduced in 1951, mainframes are powerful, reliable machines handling large-scale data
and transactions with high fault tolerance and minimal downtime. They enhanced processing
power but were costly, prompting the development of cluster computing as a cheaper
alternative.
Cluster Computing
In the 1980s, cluster computing emerged as a cost-effective alternative to mainframes.
Machines in a cluster, connected by a high-bandwidth network, offered high computation
capabilities and easy scalability, though they still faced geographical limitations, which led to
grid computing.
Grid Computing
In the 1990s, grid computing allowed systems in different locations, connected via the
internet, to work together. It solved some issues but introduced new ones, like low bandwidth
and network problems, paving the way for cloud computing.
Virtualization
Introduced 40 years ago, virtualization creates a virtual layer over hardware, allowing
multiple instances to run simultaneously. It's a key technology in cloud computing,
underpinning services like Amazon EC2 and VMware vCloud, with hardware virtualization
being the most common.
Web 2.0
Web 2.0 enables cloud services to interact with clients through dynamic, interactive web
pages, enhancing web flexibility. It powered social media's rise, gaining popularity in 2004,
with examples including Google Maps, Facebook, and Twitter.
Service Orientation
Service orientation, a model for cloud computing, supports low-cost, flexible, and evolving
applications. It introduced Quality of Service (QoS), including Service Level Agreements
(SLAs), and Software as a Service (SaaS).
Utility Computing
Utility computing defines service provisioning techniques, offering compute, storage, and
infrastructure services on a pay-per-use basis.
Cloud Computing
Cloud computing involves storing and accessing data and programs on remote servers hosted
on the internet, rather than on local drives or servers. It's also known as Internet-based
computing, offering resources as services via the internet.
(LONGER ANSWER)
Distributed Systems
Distributed System is a composition of multiple independent systems but all of them are
depicted as a single entity to the users. The purpose of distributed systems is to share
resources and also use them effectively and efficiently. Distributed systems possess
characteristics such as scalability, concurrency, continuous availability, heterogeneity, and
independence in failures. But the main problem with this system was that all the systems
were required to be present at the same geographical location. Thus, to solve this problem,
distributed computing led to three more types of computing: mainframe computing,
cluster computing, and grid computing.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable
computing machines. These are responsible for handling large data such as massive input-
output operations. Even today these are used for bulk processing tasks such as online
transactions etc. These systems have almost no downtime with high fault tolerance. After
distributed computing, these increased the processing capabilities of the system. But these
were very expensive. To reduce this cost, cluster computing came as an alternative to
mainframe technology.
Cluster Computing
In the 1980s, cluster computing came as an alternative to mainframe computing. Each
machine in the cluster was connected to the others by a high-bandwidth network. Clusters
were far cheaper than mainframe systems while still being capable of high computation,
and new nodes could easily be added when required. But the problem of geographical
restriction still persisted, and to solve it, grid computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced. Different systems were
placed at entirely different geographical locations, all connected via the internet.
These systems belonged to different organizations, so the grid consisted of
heterogeneous nodes. Although it solved some problems, new ones emerged as the
distance between the nodes increased, chiefly the low availability of high-bandwidth
connectivity along with other network-related issues. Thus, cloud computing is often
referred to as the "successor of grid computing".
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a
virtual layer over the hardware which allows the user to run multiple instances
simultaneously on the hardware. It is a key technology used in cloud computing. It is the base
on which major cloud computing services such as Amazon EC2, VMware vCloud, etc work
on. Hardware virtualization is still one of the most common types of virtualization.
Web 2.0
Web 2.0 is the interface through which the cloud computing services interact with the clients.
It is because of Web 2.0 that we have interactive and dynamic web pages. It also increases
flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook,
Twitter, etc. Needless to say, social media is possible only because of this technology,
which gained major popularity in 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost,
flexible, and evolvable applications. Two important concepts were introduced in this
computing model. These were Quality of Service (QoS) which also includes the SLA
(Service Level Agreement) and Software as a Service (SaaS).
Utility Computing
Utility Computing is a computing model that defines service provisioning techniques for
services such as compute services along with other major services such as storage,
infrastructure, etc which are provisioned on a pay-per-use basis.
Cloud Computing
Cloud Computing means storing and accessing the data and programs on remote servers that
are hosted on the internet instead of the computer’s hard drive or local server. Cloud
computing is also referred to as Internet-based computing; it is a technology where
resources are provided as a service through the Internet to the user. The data that is
stored can be files, images, documents, or any other storable data.
● Cost Efficiency: Cloud computing provides flexible pricing to users through the
pay-as-you-go model. It helps lessen capital expenditure on infrastructure,
particularly for small and medium-sized businesses.
● Flexibility and Scalability: Cloud services facilitate the scaling of resources based on
demand, ensuring that businesses can handle varying workloads efficiently without
large hardware investments during periods of low demand (a minimal autoscaling
sketch follows this list).
● Collaboration and Accessibility: Cloud computing provides easy access to data and
applications from anywhere over the internet. This encourages collaborative team
participation from different locations through shared documents and projects in real
time, resulting in higher-quality and more productive outputs.
● Automatic Maintenance and Updates: Cloud providers such as AWS take care of
infrastructure management and automatically apply software updates as new versions
are released. This guarantees that companies always have access to the newest
technologies and can focus completely on business operations and innovation.
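As promised in the scalability bullet above, here is a minimal Python sketch of the scale-on-demand idea: an autoscaler that picks an instance count to match load. The per-instance capacity, fleet limits, and demand trace are all illustrative assumptions.

```python
# Minimal autoscaling sketch: adjust the instance count to follow demand.
# Capacity, limits, and the demand trace are illustrative assumptions.

INSTANCE_CAPACITY = 100               # assumed requests/sec one instance handles
MIN_INSTANCES, MAX_INSTANCES = 1, 10  # assumed fleet limits

def desired_instances(load_rps: int) -> int:
    """Scale out when load rises and in when it falls, paying only for what runs."""
    needed = -(-load_rps // INSTANCE_CAPACITY)  # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

if __name__ == "__main__":
    demand_trace = [50, 250, 720, 400, 90]  # hypothetical requests/sec per hour
    for hour, load in enumerate(demand_trace):
        print(f"hour {hour}: load={load} rps -> run {desired_instances(load)} instance(s)")
```

Real cloud autoscalers (e.g., AWS Auto Scaling groups) apply the same threshold idea, though against richer metrics such as CPU utilization.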
The F5 IBM Reference Architecture is a framework that combines technologies from two
companies, F5 Networks and IBM, to help businesses manage and run their applications
more efficiently in the cloud.
Key Points:
1. F5 Networks’ Role:
○ Application Delivery: F5 provides tools that help make sure applications are
running smoothly. These tools include load balancers that distribute traffic to
prevent any one server from getting overwhelmed, and security features like
firewalls that protect against cyber attacks (a minimal load-balancing sketch
follows this list).
2. IBM’s Role:
○ Cloud Infrastructure: IBM offers the cloud platform where these applications
are hosted. Their cloud can be a mix of on-premises (your own servers),
private cloud (dedicated servers for your use), and public cloud (shared
servers). IBM also adds artificial intelligence (AI) services to make
applications smarter and more efficient.
3. Working Together:
○ Integration: The architecture allows the tools from F5 and IBM to work
together seamlessly. For example, IBM’s cloud can automatically scale up or
down based on the application’s needs, and F5’s tools can ensure that this
happens securely and without downtime.
4. Who Benefits:
○ Businesses Modernizing Applications: Companies looking to update their old
applications can use this architecture to move them to the cloud, making them
faster and more secure.
○ Security and Compliance: It helps businesses keep their applications safe and
meet legal requirements for data protection.
○ High Availability: By using this architecture, businesses can ensure that their
applications are always available to users, even during heavy traffic or
technical issues.
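As referenced in point 1, here is a minimal Python sketch of the round-robin load balancing that tools like F5's perform. The server names are hypothetical placeholders, and a real balancer would also run health checks and TLS termination.

```python
# Minimal round-robin load balancer sketch: hand each request to the next
# server in turn. Server names are hypothetical placeholders.
import itertools

class RoundRobinBalancer:
    """Distributes requests evenly so no single server gets overwhelmed."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(list(servers))

    def route(self, request: str) -> str:
        server = next(self._cycle)
        return f"{request} -> {server}"

if __name__ == "__main__":
    lb = RoundRobinBalancer(["app-server-1", "app-server-2", "app-server-3"])
    for i in range(5):
        print(lb.route(f"request-{i}"))
```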
Summary:
In simple terms, the F5 IBM Reference Architecture is like a blueprint that shows how to use
the best tools from F5 and IBM to build and run secure, efficient, and reliable applications in
the cloud.
Service-Oriented Architecture (SOA)
SOA creates interoperability between apps and services. It ensures existing applications can
be easily scaled, while simultaneously reducing costs related to the development of business
service solutions. Each service provides a business capability, and services can also
communicate with each other across platforms and languages. Developers use SOA to reuse
services in different systems or combine several independent services to perform complex
tasks.
For example, multiple business processes in an organization require the user authentication
functionality. Instead of rewriting the authentication code for all business processes, you can
create a single authentication service and reuse it for all applications. Similarly, almost all
systems across a healthcare organization, such as patient management systems and electronic
health record (EHR) systems, need to register patients. These systems can call a single,
common service to perform the patient registration task.
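Here is a minimal Python sketch of that reuse idea: one authentication service shared by two unrelated systems instead of each re-implementing login. The class names, user, and credential store are hypothetical; a real SOA deployment would expose the service over a network API.

```python
# Sketch of service reuse: one shared authentication service, many consumers.
# Users and credentials are hypothetical stand-ins for a real identity store.

class AuthService:
    """Single, shared authentication capability."""

    def __init__(self):
        self._users = {"alice": "s3cret"}  # stand-in credential store

    def authenticate(self, username: str, password: str) -> bool:
        return self._users.get(username) == password

class PatientPortal:
    """One business system reusing the shared service."""

    def __init__(self, auth: AuthService):
        self.auth = auth

    def login(self, user: str, pwd: str) -> str:
        return "welcome" if self.auth.authenticate(user, pwd) else "denied"

class BillingSystem:
    """A second, unrelated system reusing the very same service."""

    def __init__(self, auth: AuthService):
        self.auth = auth

    def login(self, user: str, pwd: str) -> bool:
        return self.auth.authenticate(user, pwd)

if __name__ == "__main__":
    shared_auth = AuthService()
    print(PatientPortal(shared_auth).login("alice", "s3cret"))  # welcome
    print(BillingSystem(shared_auth).login("alice", "wrong"))   # False
```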
Key Components of SOA:
● Services:
Business Services: These encapsulate distinct business capabilities, such as order
processing or patient registration, and are the core building blocks of an SOA.
Infrastructure Services: These support the technical operations of the system, such as
authentication, logging, or message routing. They ensure the smooth functioning of
business services.
● Service Contract:
A service contract defines the interaction rules between a service provider and
consumers, specifying what the service does, how it can be invoked, and the expected
inputs and outputs. This contract is often expressed using standards like WSDL (Web
Services Description Language).
● Service Interface:
The service interface is the access point through which consumers interact with the
service. It typically defines the operations available, their parameters, and the
communication protocol (e.g., SOAP, REST).
● Service Implementation:
This is the actual code or logic that performs the service's function. It handles the
business logic, data processing, and integration with other services or data sources
(a minimal contract-and-implementation sketch follows this list).
● Service Registry:
The service registry is a directory where services are published so that consumers can
discover and bind to them; it stores metadata such as service contracts and endpoints.
● Enterprise Service Bus (ESB):
The service bus is a communication backbone that enables services to interact with
each other. It manages message routing, transformation, and protocol conversion,
ensuring seamless communication between disparate services. The ESB also provides
mediation, orchestration, and monitoring capabilities.
● Service Consumers:
These are applications, systems, or users that consume the services. They interact with
the services through the service interface to perform specific business tasks.
● Service Composition:
Individual services can be composed (orchestrated or choreographed) into larger
composite services that implement end-to-end business processes.
● Governance:
SOA governance involves the policies, guidelines, and processes that ensure the
services are developed, deployed, and managed in alignment with business goals and
IT standards. Governance also covers security, compliance, and performance
monitoring.
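As referenced under Service Implementation, here is a minimal Python sketch of a contract, an implementation, and a consumer. In real SOA the contract would be a WSDL or similar document; here an abstract base class plays that role, and all names are illustrative.

```python
# Contract, implementation, and consumer in miniature. In production SOA the
# contract would live in WSDL/OpenAPI; an abstract base class stands in here.
from abc import ABC, abstractmethod

class PatientRegistrationContract(ABC):
    """Service contract: the operation, its inputs, and its output."""

    @abstractmethod
    def register_patient(self, name: str, dob: str) -> str:
        """Registers a patient and returns a patient ID."""

class PatientRegistrationService(PatientRegistrationContract):
    """Service implementation: the logic behind the contract."""

    def __init__(self):
        self._next_id = 1

    def register_patient(self, name: str, dob: str) -> str:
        pid = f"P{self._next_id:04d}"
        self._next_id += 1
        return pid

def admit(service: PatientRegistrationContract, name: str, dob: str) -> str:
    """Service consumer: depends only on the contract, never the implementation."""
    return service.register_patient(name, dob)

if __name__ == "__main__":
    print(admit(PatientRegistrationService(), "Alice", "1990-01-01"))  # P0001
```

Because the consumer codes against the contract, the implementation can be swapped or moved to another platform without touching the consumer, which is the interoperability point made above.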
Advantages of SOA:
● Reusability:
Services are designed to be reusable across different applications and projects, which
reduces development time and effort. This leads to more efficient use of resources and
consistency across the organization.
● Interoperability:
SOA allows different systems, built on various platforms and using different
technologies, to communicate with each other via standard protocols (e.g., SOAP,
REST). This promotes integration and data sharing across heterogeneous
environments.
● Scalability and Agility:
SOA enables businesses to quickly adapt to changing market conditions and business
requirements. New services can be added or modified without disrupting existing
systems, supporting rapid innovation and responsiveness.
● Improved Maintainability:
SOA’s modular design makes it easier to maintain and update individual services
without affecting the entire system. This leads to better system stability and easier
troubleshooting.
● Enhanced Collaboration:
Clearly defined service contracts give separate development teams and business units a
shared interface to build against, promoting collaboration between business and IT.
Disadvantages of SOA:
● Complexity:
Designing, composing, and governing many distributed services is more complex than
building a single monolithic application.
● Performance Overhead:
Each service call crosses a process or network boundary and involves message
serialization and validation, which adds latency compared with in-process calls.
● Security Concerns:
Exposing services over a network increases the attack surface, making security a
significant concern. SOA implementations must incorporate robust security measures,
including encryption, authentication, and authorization, to protect sensitive data and
services.
● Governance Challenges:
Keeping many independently owned services aligned with common policies, versioning
rules, and standards requires sustained governance effort.
● Dependency Management:
As services come to depend on one another, a change in one service can ripple through
its consumers, so dependencies and versions must be tracked carefully.
● Cultural Shift:
Adopting SOA often requires a cultural shift within the organization, as it promotes a
more collaborative and decentralized approach to development. This can be
challenging for teams accustomed to traditional, monolithic development practices.
● Tooling and Skill Requirements:
SOA requires specialized tools for service design, development, testing, and
monitoring. Additionally, it demands expertise in areas such as service design,
governance, and security, which may require training or hiring new talent.
The Enterprise Service Bus (ESB) is a software architecture that connects all the services
together over a bus-like infrastructure. It acts as the communication center of an SOA,
linking multiple systems, applications, and data sources and connecting them without
disruption.
Characteristics of an ESB
1. Message Routing:
○ ESB intelligently routes messages between services based on predefined rules,
ensuring that each message reaches the correct destination. It supports both
synchronous and asynchronous communication.
2. Protocol Mediation:
○ ESB supports multiple communication protocols (e.g., HTTP, JMS, SOAP,
REST) and can mediate between them. This allows services using different
protocols to interact seamlessly.
3. Message Transformation:
○ ESB can transform data formats between different systems (e.g., XML to
JSON), enabling interoperability between services that use different data
structures (see the ESB sketch after this list).
4. Service Orchestration:
○ ESB can coordinate the execution of multiple services to achieve a complex
business process, often through workflow engines or business process
management tools.
5. Service Registry and Discovery:
○ ESB often includes a service registry that allows for the dynamic discovery
and binding of services, making it easier to locate and invoke services across
the enterprise.
6. Security:
○ ESB provides security features such as authentication, authorization,
encryption, and message integrity to ensure secure communication between
services.
7. Scalability:
○ ESB is designed to handle a high volume of service interactions, providing
scalability to meet the needs of large and complex enterprises.
8. Monitoring and Management:
○ ESB offers tools for monitoring, managing, and auditing service interactions,
which helps in maintaining service health and ensuring compliance with
service level agreements (SLAs).
9. Error Handling and Fault Tolerance:
○ ESB provides mechanisms for error handling, such as retries and alternative
routing, ensuring that service failures are managed gracefully and do not
disrupt the overall system.
10. Decoupling:
○ ESB decouples service consumers and providers, meaning that changes in one
service do not necessarily require changes in others, thus improving system
flexibility.
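As referenced in characteristic 3, here is a minimal Python sketch of an ESB core: rule-based message routing (characteristic 1) plus an XML-to-JSON transformation (characteristic 3). Message types, handlers, and the flat XML shape are illustrative assumptions; a real ESB adds protocol mediation, security, and monitoring on top.

```python
# Minimal ESB sketch: rule-based routing plus XML-to-JSON transformation.
# Message types and handlers are hypothetical; real ESBs do far more.
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_payload: str) -> str:
    """Message transformation: flatten a simple XML document into JSON."""
    root = ET.fromstring(xml_payload)
    return json.dumps({child.tag: child.text for child in root})

class MiniESB:
    def __init__(self):
        self._routes = {}  # message type -> destination service (a callable)

    def register(self, msg_type: str, handler) -> None:
        """Routing rule: messages of this type go to this service."""
        self._routes[msg_type] = handler

    def send(self, msg_type: str, xml_payload: str):
        """Route the message to the right service, transforming it on the way."""
        handler = self._routes.get(msg_type)
        if handler is None:
            raise ValueError(f"no route for message type {msg_type!r}")
        return handler(xml_to_json(xml_payload))

if __name__ == "__main__":
    bus = MiniESB()
    bus.register("order", lambda body: f"order service received {body}")
    print(bus.send("order", "<order><id>42</id><item>widget</item></order>"))
```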
Advantages of an ESB:
1. Simplified Integration:
○ ESB reduces the complexity of integrating multiple systems by providing a
unified platform for communication and interaction, enabling faster and more
reliable integration projects.
2. Enhanced Flexibility:
○ ESB allows for easy addition, removal, or modification of services without
impacting other components, providing flexibility to adapt to changing
business requirements.
3. Improved Reusability:
○ Services integrated via an ESB can be reused across different applications and
processes, reducing redundancy and development time.
4. Centralized Management:
○ ESB provides a single point of control for managing service interactions,
enabling better governance, monitoring, and auditing of service activity.
5. Scalability and Performance:
○ ESB can handle a high volume of transactions and scale to meet the needs of
large enterprises, ensuring consistent performance even as the number of
integrated services grows.
6. Protocol and Format Independence:
○ By supporting multiple protocols and data formats, ESB enables
communication between heterogeneous systems, ensuring interoperability
across different technologies and platforms.
7. Resilience and Fault Tolerance:
○ ESB enhances system reliability by providing mechanisms for error handling,
retries, and alternative routing, ensuring that failures in one part of the system
do not lead to overall system failure (a minimal failover sketch follows this list).
8. Streamlined Development and Deployment:
○ With its centralized infrastructure and reusable components, ESB streamlines
the development and deployment of new services and integrations, reducing
time-to-market for new initiatives.
9. Cost Efficiency:
○ By reducing the complexity of integrations and enabling service reuse, ESB
can lead to lower development and maintenance costs over time.
10. Security and Compliance:
○ ESB provides robust security features, ensuring that service interactions are
protected and compliant with regulatory requirements, which is particularly
important in industries with stringent security and privacy standards.
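As referenced in advantage 7, here is a minimal Python sketch of retry-then-reroute fault tolerance. The endpoints and failure rate are simulated assumptions; a real ESB would apply such policies declaratively rather than in hand-written code.

```python
# Retry with alternative routing, in miniature. The primary endpoint's failure
# behaviour is simulated; all names are hypothetical.
import random

def flaky_primary(msg: str) -> str:
    if random.random() < 0.7:  # simulated 70% failure rate
        raise ConnectionError("primary endpoint unavailable")
    return f"primary handled {msg}"

def backup(msg: str) -> str:
    return f"backup handled {msg}"

def send_with_failover(msg: str, retries: int = 2) -> str:
    """Try the primary a few times; reroute to the backup if it keeps failing."""
    for _ in range(retries + 1):
        try:
            return flaky_primary(msg)
        except ConnectionError:
            continue  # retry the primary
    return backup(msg)  # alternative routing after exhausting retries

if __name__ == "__main__":
    print(send_with_failover("invoice-123"))
```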