Virtualization (MSA)


Virtualization Technology

Virtual Machines (VMs):

• VMs are software-based machines running on top of a physical machine.

• Each VM runs a full-fledged operating system (OS) independent of the host.

• They offer isolation between different VMs, ensuring that processes and applications within
one VM do not interfere with others.

• Use Case: Ideal when different services require completely separate OS environments.

Advantages:

• Strong isolation between VMs.

• Can run multiple OS types (e.g., Linux, Windows) on the same physical machine.

Disadvantages:

• High overhead since each VM needs its own OS.

• Slower boot time compared to containers.

• Requires more resources (CPU, memory) due to full OS instances.


Containers:

• Containers are lightweight alternatives to virtual machines that do not include a full operating system.

• They share the host OS kernel while still maintaining isolation between containers using
namespaces and cgroups.

• Use Case: Ideal for microservices because they require fewer resources and start faster than
VMs.

Advantages:

• Lightweight: No need to run a separate OS for each container.

• Faster startup: Containers start almost instantly compared to VMs.

• Efficient resource utilization: Lower memory and CPU usage.

Disadvantages:

• Slightly less isolated than VMs since they share the host OS.

• Not suitable for running applications that need different kernels or OS types.

Containerization Platforms:

1. Docker

o The most popular container platform, known for ease of use.

o Provides a registry (Docker Hub) for sharing container images.

2. Rancher

o A container management platform for deploying and managing Kubernetes clusters.

o Suitable for large-scale microservices orchestration.

3. Podman

o A container engine similar to Docker but focuses on security by running without a daemon.

4. Containerd

o A lightweight container runtime designed to manage containers efficiently at a low level in the system.

5. LXC (Linux Containers)

o An early container technology that allows running Linux systems in isolated environments.
Virtual Machines (VMs) vs Containers

• VMs are software-based machines that run inside a physical machine; containers are lightweight virtual environments that share the host OS.

• Each VM has its own complete OS; containers share the host OS kernel.

• VMs are large because they include a full OS; containers are small since they don’t have their own OS.

• VMs are slower to boot (minutes) since the entire OS needs to load; containers are faster to start (seconds) due to the shared OS.

• VMs provide stronger isolation between systems; containers offer weaker isolation (shared kernel).

• VMs are heavy on CPU and memory since each VM runs its own OS; containers are lightweight, making better use of system resources.

• VMs are best for running different OS types (e.g., Windows and Linux); containers are ideal for microservices or apps needing quick scaling.

• Eg: VMware, VirtualBox, Hyper-V (VMs); Docker, Podman, Kubernetes (containers).


What is Docker?
• Docker is an open-source platform designed to automate the deployment, scaling, and
management of applications using containers.

• It allows developers to package applications and their dependencies into a single, portable
unit (container) that can run consistently across different environments, whether on a
developer's laptop, in a testing environment, or in production.

• Docker simplifies the software delivery process by enabling applications to run in isolation
from each other, making it easier to manage different services and versions.

1. Docker Image

• A Docker Image is like a blueprint for creating containers.

• It includes:

o Application code

o Dependencies (like libraries and runtime)

o Configuration (like environment variables)

• Read-only template: Docker images cannot be changed once built.

Example: A Docker image for a web service might contain the app code, a web server, and all
required libraries.

2. Docker Container

• A container is a running instance of a Docker image.

• Containers are isolated environments where applications run.

• You can run multiple containers from the same image.

Example: Multiple containers can run the same image with different configurations for development,
testing, and production.

3. Docker Registry

• A Docker registry is like a storage system for Docker images.

• Docker Hub is a public registry where anyone can push/pull images.

• Private registries can also be created within organizations for secure image distribution.

Use Case: Store your microservice images on Docker Hub or in a private registry and pull them when
deploying.
4. Docker Compose

• Docker Compose helps define and manage multi-container environments.

• Useful for microservices where:

o Multiple services (like APIs, databases) run in separate containers.

o Dependencies exist (e.g., a web app depends on a database).

How it works: It uses a docker-compose.yml file to define services and their relationships.
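
As a rough illustration, a minimal docker-compose.yml could look like the sketch below. The service names, image tags, ports, and credentials are placeholders chosen for this example, not values prescribed by these notes:

# docker-compose.yml (sketch): a hypothetical web API that depends on a PostgreSQL database
services:
  api:
    image: myapp:1.0            # placeholder application image
    ports:
      - "9090:8080"             # host port 9090 -> container port 8080
    environment:
      - DB_HOST=db              # the api reaches the database by its service name
    depends_on:
      - db                      # start the database container before the api
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example

Running docker compose up -d would then start both containers on a shared network where they can reach each other by service name.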

5. Docker vs Docker Compose

• Docker manages individual containers; Docker Compose manages multi-container applications.

• Docker uses Dockerfiles to build images; Docker Compose uses a docker-compose.yml to define services.

• Docker focuses on a single container at a time; Docker Compose handles multiple interconnected services.

6. Advantages of Docker

1. Consistency Across Environments:

o Problem: Applications behave differently in development, testing, or production.

o Solution: Containers provide consistent environments, ensuring that microservices run the same everywhere.

2. Portability:

o Problem: Moving services across systems or cloud providers can cause compatibility
issues.

o Solution: Containers are portable and run on any machine supporting Docker.

3. Versioning and Rollbacks:

o Problem: Managing different versions and reverting changes can be challenging.

o Solution: Docker images support versioning. If an update fails, you can roll back to a
previous image version.

4. Development Speed and CI/CD Integration:

o Problem: Building, testing, and deploying without automation can be slow and
prone to errors.

o Solution: Docker integrates with CI/CD pipelines, improving efficiency and automation.
7. What Docker Cannot Solve

1. Managing multiple containers across multiple hosts: Requires tools like Kubernetes.

2. Automated scaling of services: Docker alone doesn't scale services automatically.

3. Automated rolling updates/zero downtime: Needs orchestration tools like Kubernetes or Swarm.

4. Managing persistent data: Docker doesn't handle persistent data across container restarts
well.

5. Resource management: It doesn’t manage CPU and memory across a cluster of containers.

6. Service Discovery: Requires integration with tools for service registration and discovery (like
Eureka).
Creating a Docker Image for a Spring Boot Application
Prerequisites

• You need a WAR file generated by IntelliJ after building your Spring Boot service.

• Example of the WAR file: demo-0.0.1-SNAPSHOT.war

1. Create a Dockerfile

• Create a file named Dockerfile and place it in the base project folder (e.g., inside the demo
folder).

Contents of the Dockerfile

FROM container-registry.oracle.com/java/openjdk:latest

ARG JAR_FILE=build/libs/demo-0.0.1-SNAPSHOT.war

COPY ${JAR_FILE} myDemo.jar

ENTRYPOINT ["java", "-jar", "/myDemo.jar"]

Explanation of Dockerfile Instructions

• FROM: Specifies the base image. Here, it uses the latest OpenJDK image from Oracle.

• ARG JAR_FILE: Declares an argument that holds the path to the WAR file.

• COPY: Copies the specified WAR file into the container and renames it to myDemo.jar.

• ENTRYPOINT: Specifies the command to run when the container starts, which is the Java
command to execute the JAR file.

2. Build the Docker Image

To create a Docker image from the Dockerfile, run the following command in the terminal from the
demo directory:

docker build -t firstdockerimage .

• -t firstdockerimage: Tags the image with the name firstdockerimage.

• .: Indicates the context (current directory) for the build.

3. Running a Container

To run a container based on the created image, use the following command:

docker run -d -p 9090:8080 --name myContainer firstdockerimage

Command Breakdown

• docker run: Creates and runs a new container from the specified image.

• -d: Runs the container in detached mode (in the background).


• -p 9090:8080: Maps port 8080 of the container (where the microservice is running) to port
9090 on your host.

• --name myContainer: Assigns a name to the container (you can choose any name).

• firstdockerimage: The name of your Docker image to run.

Accessing the Microservice

After running the command, you can access your microservice at:

http://localhost:9090

4. Checking Container Status

To check if the container is running, use:

docker ps -a

5. Stopping a Container

To stop the running container, use:

docker stop myContainer

6. Executing Commands Inside the Container

To run commands inside a running Docker container, use the docker exec command. This is useful for
debugging or monitoring:

docker exec -it myContainer <command>

• -it: Allows you to interact with the container's terminal.

• <command>: Replace with the command you want to execute inside the container.
Kubernetes
Why Do We Need Kubernetes?

Docker is great for running individual containers, but it falls short on its own when you need to:

1. Manage multiple containers across multiple hosts

2. Scale services up or down automatically based on traffic

3. Update applications without downtime

4. Ensure data is preserved across restarts

5. Distribute resources (CPU & memory) efficiently across containers

6. Make services discoverable automatically (so they can talk to each other)

That’s where Kubernetes comes in: it is an orchestration platform that manages all of these challenges.

Core Concepts of Kubernetes

1. Pod

• The smallest unit of deployment in Kubernetes.

• A pod can contain one or more containers.

• For microservices, the best practice is one container per pod to keep things simple and
isolated.
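
As an illustration, a minimal single-container pod manifest might look like this sketch (the pod name is a placeholder; the image is the one pushed to GitHub later in these notes):

# pod.yaml (sketch): one container per pod
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: ghcr.io/username/firstdockerimage:latest   # replace username with your GitHub username
      ports:
        - containerPort: 8080                            # port the Spring Boot service listens on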

2. ReplicaSet

• Ensures a certain number of pod replicas are always running.

• Example: If you specify 3 replicas of a pod, Kubernetes will keep 3 instances running.
If one crashes, it will automatically start a new one to replace it.
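
In practice you rarely write a ReplicaSet by hand (a Deployment, described below, creates one for you), but a standalone manifest for the 3-replica example could look roughly like this sketch (names and labels are placeholders):

# replicaset.yaml (sketch): keep 3 identical pods running
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-replicaset
spec:
  replicas: 3                    # Kubernetes replaces any pod that crashes to keep 3 running
  selector:
    matchLabels:
      app: demo
  template:                      # pod template used to create replacement pods
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: ghcr.io/username/firstdockerimage:latest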

3. Service

• Groups multiple pods providing the same function.

• Example: You may have 5 instances of an Order Processing service running as pods.
A Service ensures that traffic is distributed to the right pods and allows other services to
discover it.

4. Deployment

• Used to manage stateless applications (apps that don’t store data across restarts).
• Pods are dynamic (they are created with random names, and if they stop, new ones are
created).

• Ideal for apps that don’t need to save data.

5. StatefulSet

• Used for managing stateful applications (apps that need to keep data across restarts).

• Pods have stable identities (like consistent hostnames), so even if a pod is restarted, it can
reconnect to the right storage.

• Works with PersistentVolumeClaims (PVC) to store data permanently.
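
A rough sketch of a StatefulSet that requests per-pod storage through a volumeClaimTemplate is shown below. All names, the image, and the storage size are illustrative assumptions, and it presumes a headless Service named demo-db exists:

# statefulset.yaml (sketch): a stateful workload with its own persistent storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db           # headless Service that gives pods stable network identities
  replicas: 1
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: postgres:16     # example stateful application
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # each pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi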

PersistentVolumeClaim (PVC)

• PVC is a request for storage by an application or user.

• It abstracts storage, meaning the application doesn’t need to know the details of the
underlying storage.
It just asks for storage, and Kubernetes makes sure the request is fulfilled.
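
A standalone claim (as opposed to the volumeClaimTemplates a StatefulSet generates automatically, as in the sketch above) might look like this; the name and size are placeholders:

# pvc.yaml (sketch): an application asking Kubernetes for 1Gi of storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi               # the app only states how much storage it needs, not where it comes from
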
How Kubernetes Works

1. Kubernetes Control Plane: Manages the overall cluster and decides which pod should run on
which node.

2. Pods: Each pod contains a container running a microservice.

3. HTTP Communication: The microservices (inside containers) communicate with each other
over HTTP.
Using GitHub as a Container Image Registry
1. Create a Personal Access Token (PAT) on GitHub

1. Login to GitHub

2. Settings:

o Click your profile picture in the top-right corner.

o Go to Settings from the dropdown.

3. Developer Settings:

o Scroll down and click Developer settings in the sidebar.

4. Personal Access Tokens (classic):

o Click Tokens (classic) in the sidebar.

o Click Generate new token.

5. Set Token Scopes:

o Check the following scopes:
  write:packages
  read:packages

2. Login to GitHub Registry via Command Line

Use the following command to log in to GitHub’s container registry:

echo your_github_token | docker login ghcr.io -u your_github_username --password-stdin

• Replace:

o your_github_token with the Personal Access Token (PAT) you generated.

o your_github_username with your GitHub username.

3. Push Docker Image to GitHub Registry

1. Tag the Docker Image:

docker tag firstdockerimage ghcr.io/username/firstdockerimage:latest

Replace username with your GitHub username.

2. Push the Docker Image to Registry:

docker push ghcr.io/username/firstdockerimage:latest

Deploying the Docker Image in Kubernetes


4. Create a Kubernetes Secret

The Kubernetes Secret will store your GitHub credentials, so the cluster can pull images from
your private GitHub registry.

kubectl create secret docker-registry github-registry-secret \

--docker-server=ghcr.io \

--docker-username=<UserName> \

--docker-password=<PAT> \

--docker-email=<GitHub Email ID>

• Replace:

o <UserName>: Your GitHub username.

o <PAT>: The Personal Access Token from GitHub.

o <GitHub Email ID>: Your GitHub email.

5. Deployment and Service YAML Files

1. Deployment YAML:

o Defines how many replicas (pods) to run and what image to use.

o The deployment ensures pods are running and automatically restarted if needed.

2. Service YAML:

o Exposes your application to other services or clients.

o Provides stable IP and load balances traffic among the pods.
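
As a sketch, the two files for the image pushed earlier might look like this. The names, labels, and the LoadBalancer service type are illustrative choices (the service type determines how the External IP mentioned below is obtained):

# deployment.yaml (sketch): 3 replicas of the image from ghcr.io, pulled with the secret created in step 4
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: ghcr.io/username/firstdockerimage:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: github-registry-secret   # the secret created earlier

# service.yaml (sketch): exposes the pods above and load balances traffic among them
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: LoadBalancer             # provides an external IP on clusters that support it (an assumption here)
  selector:
    app: demo                    # matches the pod labels in deployment.yaml
  ports:
    - port: 80                   # service port
      targetPort: 8080           # container port of the Spring Boot app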

6. Apply YAML Files

1. Apply Deployment YAML:

kubectl apply -f deployment.yaml

o Creates pods and pulls the image from GitHub using the secret.

2. Apply Service YAML:

kubectl apply -f service.yaml

o Exposes the pods to other services or users.

7. Check the Status of Pods and Services

• View running services:


kubectl get services

• View running pods:

kubectl get pods

Once the deployment is up, access your application using the External IP and Port.

8. Scaling the Deployment

• You can change the number of replicas (pods) by editing the deployment.yaml file.

• After making changes, reapply the deployment:

kubectl apply -f deployment.yaml
