19DCS148 AC File
PRACTICAL: 1
AIM:
To implement Cloud-based infrastructures and services, it is necessary to set up the complete
system environment. CloudSim is very useful here because researchers and industry-based
developers can focus on the specific system design issues they want to investigate without being
concerned about low-level details, and it provides a simulation environment for implementing
cloud-based infrastructure solutions.
Overview of Cloudsim functionalities:
Support for modeling and simulation of large-scale Cloud computing data centers.
Support for modeling and simulation of virtualized server hosts, with customizable policies for
provisioning host resources to virtual machines.
Support for modeling and simulation of data center network topologies and message-passing
applications.
Support for dynamic insertion of simulation elements, and stopping and resuming of a simulation.
Support for user-defined policies for allocation of hosts to virtual machines and policies for
allocation of host resources to virtual machines.
Perform Cloud Computing Set up using Cloudsim Tool:
1) Introduction to Cloudsim tool.
2) Perform Installation steps of Cloudsim on NetBeans.
THEORY:
CloudSim is a simulation toolkit that supports modeling and simulation of the core functionality of
a cloud, such as job/task queues, processing of events, creation of cloud entities (datacenters,
datacenter brokers, etc.), communication between different entities, and implementation of broker
policies. This toolkit provides:
Flexibility to switch between space-shared and time-shared allocation of processing cores to
virtualized services.
You can download the type of setup that matches your requirements from the above-mentioned web page.
Double-click on the downloaded setup file (or right-click and run it) to start the installation.
Wait a while until the setup is properly installed on the computer.
After completion of the setup, you can click on the "Finish" button; you can also register the
software for further assistance, because it is free software.
Now you can start NetBeans for further use.
Select the "Java with Ant" folder, then select the first option, Java Application, and press Next.
Now give a name to the project as you wish.
Now browse to the cloudsim folder which you extracted from the zip file, go to that folder and
select "cloudsim-3.0.3.jar".
CONCLUSION:
In this practical, we learnt about NetBeans and CloudSim. We installed both tools in our system.
PRACTICAL: 2
AIM:
Cloud computing aims to deliver reliable, secure, fault-tolerant, and scalable infrastructure for
Internet-based application services. It is a tremendously challenging task to model and schedule
different applications and services on real cloud infrastructure, which requires handling
different workload and energy-performance parameters. Consider the real-world analogy in
CloudSim and
perform the following programs:
1) Write a program in cloudsim using NetBeans IDE to create a datacenter with one
host and run four cloudlets on it.
2) Write a program in cloudsim using NetBeans IDE to create a datacenter with three
hosts and run three cloudlets on it.
CODE:
(1) Write a program in cloudsim using NetBeans IDE to create a datacenter with one host
and run four cloudlets on it.
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
/**
 * @param args the command line arguments
 */
// NOTE: only excerpts of the program are reproduced below; the datacenter, broker,
// host and cloudlet creation code follows the standard CloudSim examples.
private static List<Cloudlet> cloudletList;
private static List<Vm> vmlist;

try {
    // Initialize the CloudSim library
    int num_user = 1;                 // number of cloud users
    Calendar calendar = Calendar.getInstance();
    boolean trace_flag = false;       // do not trace events
    CloudSim.init(num_user, calendar, trace_flag);

    // VM description
    int vmid = 0;
    int mips = 1000;
    long size = 10000;                // image size (MB)
    int ram = 512;                    // VM memory (MB)
    long bw = 1000;
    int pesNumber = 1;                // number of CPUs
    String vmm = "Xen";               // virtual machine monitor

    Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
            new CloudletSchedulerTimeShared());
    vmlist.add(vm);
    broker.submitVmList(vmlist);

    // Cloudlet description (the four cloudlets share these properties)
    int id = 0;
    long length = 400000;
    long fileSize = 300;
    long outputSize = 300;
    UtilizationModel utilizationModel = new UtilizationModelFull();

    broker.submitCloudletList(cloudletList);

    // Run the simulation
    CloudSim.startSimulation();
    CloudSim.stopSimulation();
    }
}

// createDatacenter(): host description (excerpt)
int hostId = 0;
int ram = 2048;          // host memory (MB)
long storage = 1000000;  // host storage
int bw = 10000;
return datacenter;

// printCloudletList(): excerpt
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
    Log.print("SUCCESS");
OUTPUT :
(2) Write a program in cloudsim using NetBeans IDE to create a datacenter with three hosts
and run three cloudlets on it.
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
// NOTE: only excerpts of the program are reproduced below.
try {
    // Initialize the CloudSim library
    int num_user = 1;
    Calendar calendar = Calendar.getInstance();
    boolean trace_flag = false;
    CloudSim.init(num_user, calendar, trace_flag);

    // VM description: three identical VMs, one per host
    int vmid = 0;
    int mips = 250;
    long size = 10000;      // image size (MB)
    int ram = 512;          // VM memory (MB)
    //int ram = 1024;
    long bw = 1000;
    int pesNumber = 1;
    String vmm = "Xen";

    Vm vm0 = new Vm(vmid++, brokerId, mips, pesNumber, ram, bw, size, vmm,
            new CloudletSchedulerTimeShared());
    Vm vm1 = new Vm(vmid++, brokerId, mips, pesNumber, ram, bw, size, vmm,
            new CloudletSchedulerTimeShared());
    Vm vm2 = new Vm(vmid++, brokerId, mips, pesNumber, ram, bw, size, vmm,
            new CloudletSchedulerTimeShared());

    //vmlist.add(vm);
    vmlist.add(vm0);
    vmlist.add(vm1);
    vmlist.add(vm2);
    broker.submitVmList(vmlist);

    // Cloudlet description (three cloudlets are created and submitted)
    int id = 0;
    broker.submitCloudletList(cloudletList);

    // Run the simulation
    CloudSim.startSimulation();
    CloudSim.stopSimulation();
    Log.printLine("Cloudsim finished");
}
catch (Exception ex) {
    ex.printStackTrace();
    Log.printLine("Unwanted error occurred");
}
}

// createDatacenter() / createBroker(): excerpts
int hostId = 0;
int ram = 2048;          // host memory (MB)
long storage = 1000000;
int bw = 10000;
return null;
}
return broker;
}

private static void printCloudletList(List<Cloudlet> list) {
    int size = list.size();
    Cloudlet cloudlet;
    // (loop over the returned cloudlets elided)
    if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
        Log.print("SUCCESS");
OUTPUT :
CONCLUSION:
In this practical, we learnt about cloud architecture and implemented several different scenarios using
different numbers of datacenters, hosts and cloudlets.
PRACTICAL: 3
AIM:
Perform following using Cloud Analyst:
1. Install Cloud Analyst and integrate it with NetBeans. Monitor the performance of the
existing algorithms given in Cloud Analyst.
2. Modify or propose a new load balancing algorithm compatible with Cloud Analyst.
THEORY:
CloudAnalyst:
Cloud Analyst is a tool developed at the University of Melbourne whose goal is to support
evaluation of social network applications according to the geographic distribution of users and data
centers.
In this tool, communities of users and the data centers supporting the social network are
characterized and, based on their location, parameters such as the user experience while using
the social network application and the load on the data centers are obtained/logged.
PRACTICAL:
Download CloudAnalyst from
http://www.cloudbus.org/cloudsim/CloudAnalyst.zip
Extract Files from the Zip file which will give following folder structure.
If you want to run it from the command line, type the following command in cmd:
java -cp jars\simjava2.jar;jars\gridsim.jar;jars\iText-2.1.5.jar;classes;. cloudsim.ext.gui.GuiMain
Here we are creating 5 copies of one of the hosts, so it gives us a total of 6 hosts.
We can also customize the user base, which models a group of users and generates traffic
representing those users.
You can save this configuration as well, in case you want to use it later. It is stored as a .sim
file: XML data is generated and saved as the .sim file.
Then we can run the simulation, which gives us an overall report of the simulation.
CONCLUSION:
In this practical, we learnt about Cloud Analyst and simulated a simple case with a single datacenter
with 6 hosts and a single user base.
PRACTICAL: 4
AIM:
Perform following using Google Cloud Platform:
1. Introduction to Google Cloud.
2. Perform Google Cloud Hands-on Labs.
Create and setup a Virtual Machine, GCP Essentials and Compute Engine:
1.Qwik Start - Windows on Google Cloud Platform.
2.Compute Engine: Qwik Start – Windows
THEORY:
Introduction To Google Cloud Platform
Google has been one of the leading software and technology developers in the world. Every year
Google comes up with different innovations and advancements in the technological field that help
people all over the world.
In recent years, Google Cloud Platform is one such innovation that has seen an increase
in usage because more and more people are adopting the cloud. Since there has been great
demand for computing, a number of Google Cloud services have been launched for global
customers.
Google Compute Engine: This computing engine is the IaaS service introduced by
Google, which effectively provides VMs similar to Amazon EC2.
Google Cloud App Engine: App Engine is the PaaS service for hosting applications
directly. It is a very powerful and important platform which helps to develop mobile and
web applications.
Google Cloud Container Engine: this service allows the user to run Docker containers on the
Google Cloud Platform, orchestrated by Kubernetes.
Google Cloud Storage: the ability to store data and important resources on the cloud platform is very
important. Google Cloud Platform is popular for its storage facilities and allows users to back up
or store data on cloud servers which can be accessed from anywhere at any time.
Google BigQuery Service: the Google BigQuery Service is an efficient data analysis service which
enables users to analyze big data for their business. It also has a high-capacity storage facility
which can hold terabytes of data.
Google Cloud Dataflow: Cloud Dataflow allows users to manage consistent parallel data-processing
pipelines. It helps to manage the lifecycle of the Compute Engine servers of the pipelines that
are being processed.
Google Cloud Job Discovery: the Google Cloud Platform is also a great source for job search, career
options etc. The advanced search engine and machine learning capabilities make it possible to find
different ways of finding jobs and business opportunities.
Google Cloud Test Lab: this service provided by Google allows users to test their apps with the
help of physical and virtual devices present in the cloud. The various instrumentation tests and robotic
tests allow users to get more insights about their applications.
Google Cloud Endpoints: this feature helps users to develop and maintain secure
application programming interfaces (APIs) running on the Google Cloud Platform.
Google Cloud Machine Learning Engine: as the name suggests, this element of Google
Cloud helps users develop models and structures, enabling them to concentrate on
machine learning capabilities and frameworks.
Category     Services
Compute      Compute Engine, App Engine
Migration    Data Transfer, Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer Service
Networking   Virtual Private Cloud (VPC), Cloud Load Balancing, Cloud Armor, Cloud CDN, Cloud Interconnect, Cloud DNS, Network Service Tiers
Lab fundamentals
Features and components
Regardless of topic or expertise level, all labs share a common interface. The lab that you're
taking should look similar to this:
Clicking the Start Lab button creates a temporary Google Cloud environment, with all the necessary
services and credentials enabled, so you can get hands-on practice with the lab's material. This
also starts a countdown timer.
Credit
The price of a lab. 1 Credit is usually equivalent to 1 US dollar (discounts are available when
you purchase credits in bulk). Some introductory-level labs (like this one) are free. The more
specialized labs cost more because they involve heavier computing tasks and demand more
Google Cloud resources.
Time
Specifies the amount of time you have to complete a lab. When you click the Start Lab button,
the timer will count down until it reaches 00:00:00. When it does, your temporary Google Cloud
environment and resources are deleted. Ample time is given to complete a lab, but make sure you
don't work on something else while a lab is running: you risk losing all of your hard work!
Score
Many labs include a score. This feature is called "activity tracking" and ensures that you
complete specified steps in a lab. To pass a lab with activity tracking, you need to complete all
the steps in order. Only then will you receive completion credit.
Switch between the two browser tabs to read the instructions and then perform the tasks.
Depending on your physical computer setup, you could also move the two tabs to separate
monitors.
You actually have access to more than one Google Cloud project. In fact, in some labs you may
be given more than one project in order to complete the assigned tasks.
1. In the Google Cloud Console title bar, next to your project name, click the drop-down
menu.
2. In the Select a project dialog, click All. The resulting list of projects includes a
"Qwiklabs Resources" project.
Project ID
A Google Cloud project is an organizing entity for your Google Cloud resources. It
often contains resources and services; for example, it may hold a pool of virtual machines, a
set of databases, and a network that connects them together. Projects also contain settings
and permissions, which specify security rules and who has access to what resources.
A Project ID is a unique identifier that is used to link Google Cloud resources and APIs to your
specific project. Project IDs are unique across Google Cloud: there can be only one
qwiklabs-gcp-xxx..., which makes it globally identifiable.
These credentials represent an identity in the Cloud Identity and Access Management (Cloud
IAM) service. This identity has access permissions (a role or roles) that allow you to work with
Google Cloud resources in the project you've been allocated. These credentials
are temporary and will only work for the access time of the lab. When the timer reaches
00:00:00, you will no longer have access to your Google Cloud project with those credentials.
Account.
2. Copy the Username from the Connection Details pane, paste it in the Email or
phone field, and click Next.
5. On the Welcome student! page, check Terms of Service to agree to Google Cloud's
terms of service, and click Agree and continue.
Your project has a name, ID, and number. These identifiers are frequently used when interacting
with Google Cloud services. You are working with one project to get experience with a specific
service or feature of Google Cloud.
The Google Cloud Console title bar also contains a button labeled with a three-line icon:
1. On the Navigation menu ( ), click IAM & Admin. This opens a page that contains a
list of users and specifies permissions and roles granted to specific accounts.
1. On the Navigation menu ( ), click APIs & Services > Library. The left pane, under
the header CATEGORY, displays the different categories available.
2. In the API search bar, type Dialogflow, and then click Dialogflow API. The Dialogflow
description page opens.
The Dialogflow API allows you to build rich conversational applications (e.g., for Google
Assistant) without having to understand the underlying machine learning and natural language
schema.
3. Click Enable.
4. Click the back button in your browser to verify that the API is now enabled.
5. Click Try this API. A new browser tab displays documentation for the Dialogflow API.
Explore this information, and close the tab when you're finished.
6. To return to the main page of the Cloud Console, on the Navigation menu, click Cloud
overview.
Now that you're finished with the lab, click End Lab and then click Submit to confirm it.
1. In the Cloud Console, on the Navigation menu , click Compute Engine > VM
instances, and then click Create Instance.
4. In the Boot disk section, click Change to begin configuring your boot disk.
5. Under Operating system select Windows Server and under Version select Windows
Server 2012 R2 Datacenter, and then click Select. Leave all other settings as their
defaults.
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected,
you are already authenticated, and the project is set to your PROJECT_ID. The output contains
a line that declares the PROJECT_ID for this session:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and
supports tab-completion.
3. (Optional) You can list the active account name with this command:
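The command itself is not reproduced in this copy; the standard gcloud command for listing the active account is:

gcloud auth list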
ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
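Again the command is not shown in this copy; the usual command for listing the configured project ID is:

gcloud config list project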
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
However the server instance may not yet be ready to accept RDP connections, as it takes a while
for all the OS components to initialize.
To see whether the server instance is ready for an RDP connection, run the following command
at your Cloud Shell terminal command line:
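The command is not shown in this copy; a typical way to check this is to read the instance's serial port output (the instance name and zone below are placeholders):

gcloud compute instances get-serial-port-output [INSTANCE_NAME] --zone [ZONE]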
Repeat the command until you see the following in the command output, which tells you that the
OS components have initialized and the Windows Server is ready to accept your RDP connection
(attempt in the next step).
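The password-reset command referenced in the next step is not reproduced here; it was presumably of this form (instance name and zone are placeholders):

gcloud compute reset-windows-password [INSTANCE_NAME] --zone [ZONE] --user admin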
If asked Would you like to set or reset the password for [admin] (Y/n)?, enter Y.
If you are using a Chromebook or other machine at a Google Cloud event there is likely an RDP
app already installed on the computer. Click the icon as below, if it is present, in the lower left
corner of the screen and enter the external IP of your VM.
If you are not on Windows but using Chrome, you can connect to your server through RDP
directly from the browser using the Spark View extension. Click on Add to Chrome. Then,
click Launch app button.
Add your VM instance's External IP as your Domain. Click Connect to confirm you want to
connect.
To paste, press CTRL+V (if you are a Mac user, CMD+V will not work).
If you are in a PowerShell window, be sure that you have clicked into the window, or else
the paste shortcut won't work.
If you are pasting into PuTTY, right-click.
CONCLUSION:
In this practical, we learnt about the basics of Google Cloud Platform as well as how to create a
Windows instance in the cloud.
PRACTICAL: 5
AIM:
Introduction to Cloud Shell and gcloud on Google Cloud. Perform the following tasks:
Practice using gcloud commands.
Connect to compute services hosted on Google Cloud.
THEORY:
Task 1. Configure your environment
In this section, you'll learn about aspects of the development environment that you can adjust.
1. Copy your project ID to your clipboard or text editor. The project ID is listed in 2 places:
o In the Cloud Console, on the Dashboard, under Project info. (Click Navigation
menu ( ), and then click Cloud overview > Dashboard.)
o On the lab tab near your username and password.
2. In Cloud Shell, run the following gcloud command, to view the project id for your
project:
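The command itself is not shown in this copy; the usual command to print the configured project ID is:

gcloud config list project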
3. In Cloud Shell, run the following gcloud command to view details about the project:
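The command is not reproduced here; a command that shows the project details is, for example:

gcloud compute project-info describe --project $(gcloud config get-value project)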
3. To verify that your variables were set properly, run the following commands
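The export steps for the variables are not shown in this copy; assuming PROJECT_ID and ZONE environment variables were exported earlier, they can be checked with:

echo $PROJECT_ID
echo $ZONE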
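The create command itself is not reproduced in this copy. Based on the instance name used later in this practical (gcelab2), it was presumably something like the following (the machine type is an assumption):

gcloud compute instances create gcelab2 --machine-type e2-medium --zone $ZONE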
Command details
o If you omit the --zone flag, the gcloud tool can infer your desired zone based on
your default properties. Other required instance settings, such as machine
type and image, are set to default values if not specified in the create command.
Click Check my progress to verify your performed task. If you have successfully created
a virtual machine with the gcloud tool, an assessment score is displayed.
o To open help for the create command, run the following command:
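The help command is not reproduced here; it is:

gcloud compute instances create --help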
5. List the Firewall rules for the default network where the allow rule matches an ICMP
rule:
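The command is not shown in this copy; a filter of roughly this form lists the matching rules (the exact filter expression is an assumption):

gcloud compute firewall-rules list --filter="network='default' AND ALLOW:'icmp'"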
gcloud compute makes connecting to your instances easy. The gcloud compute ssh command
provides a wrapper around SSH, which takes care of authentication and the mapping of instance
names to IP addresses.
1. To connect to your VM with SSH, run the following command:
gcloud compute ssh gcelab2 --zone $ZONE
Output:
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
This tool needs to create the directory
[/home/gcpstaging306_student/.ssh] before being able to generate SSH Keys.
Do you want to continue? (Y/n)
2. To continue, type Y.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase)
3. To leave the passphrase empty, press ENTER twice.
4. Install nginx web server on to virtual machine:
sudo apt install -y nginx
5. You don't need to do anything here, so to disconnect from SSH and exit the remote shell,
run the following command:
exit
Output:
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
default-allow-icmp      default  INGRESS    65534     icmp                                False
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp        False
default-allow-rdp       default  INGRESS    65534     tcp:3389                            False
default-allow-ssh       default  INGRESS    65534     tcp:22                              False
Viewing logs is essential to understanding the working of your project. Use gcloud to access the
different logs available on Google Cloud.
1. View the available logs on the system:
gcloud logging logs list
Output:
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/GCEGuestAgent
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/OSConfigAgent
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/autoscaler.googleapis.com%2Fstatus_change
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/cloudaudit.googleapis.com%2Factivity
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/cloudaudit.googleapis.com%2Fdata_access
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/cloudaudit.googleapis.com%2Fsystem_event
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Fautoscaler
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Finstance_group_manager_events
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Fshielded_vm_integrity
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/run.googleapis.com%2Fstderr
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/run.googleapis.com%2Fstdout
2. View the logs that relate to compute resources:
gcloud logging logs list --filter="compute"
Output:
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Fautoscaler
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Finstance_group_manager_events
NAME: projects/qwiklabs-gcp-01-4b75909db302/logs/compute.googleapis.com%2Fshielded_vm_integrity
3. Read the logs related to the resource type of gce_instance:
gcloud logging read "resource.type=gce_instance" --limit 5
4. Read the logs for a specific virtual machine:
gcloud logging read "resource.type=gce_instance AND labels.instance_name='gcelab2'" --limit 5
CONCLUSION:
In this practical, we learnt how to run and use the Google Cloud Shell and gcloud commands.
PRACTICAL: 6
AIM:
Perform Cluster orchestration with Google Kubernetes Engine.
THEORY:
Active Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
A cluster consists of at least one cluster master machine and multiple worker machines
called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes
processes necessary to make them part of the cluster.
1. Create a cluster:
   a. gcloud container clusters create --machine-type=e2-medium --zone=us-west4-c lab-cluster
GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides
the Deployment object for deploying stateless applications like web servers. Service objects define
rules and load balancing for accessing your application from the internet.
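The cluster-credentials and Deployment-creation steps are not reproduced in this copy; they were presumably similar to the following, where the sample image name is an assumption:

gcloud container clusters get-credentials lab-cluster --zone=us-west4-c
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0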
2. To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your
application to external traffic, run the following kubectl expose command:
a. kubectl expose deployment hello-server --type=LoadBalancer --port 8080
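The intermediate step of finding the EXTERNAL-IP referenced below is not reproduced; the usual command for inspecting the Service is:

kubectl get service hello-server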
4. To view the application from your web browser, open a new tab and enter the following
address, replacing [EXTERNAL IP] with the EXTERNAL-IP for hello-server.
a. http://[EXTERNAL-IP]:8080
Conclusion:
In this practical, we have deployed a containerized application to Kubernetes Engine and then deleted it.
PRACTICAL: 7
AIM:
Set Up Network and HTTP Load Balancers on Google Cloud Platform.
THEORY:
Active Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
Task 1: Set the default region and zone for all resources
For this load balancing scenario, create three Compute Engine VM instances and install Apache on
them, then add a firewall rule that allows HTTP traffic to reach the instances.
The code provided sets the zone to <filled in at lab start>. Setting the tags field lets you reference
these instances all at once, such as with a firewall rule. These commands also install Apache on
each instance and give each instance a unique home page.
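The instance-creation commands themselves are not reproduced in this copy. For each of the three VMs they were presumably of this form (names, tag, machine type and image are assumptions; the startup script installs Apache and writes a unique home page), followed by a firewall rule allowing HTTP to the tagged instances:

gcloud compute instances create www1 \
  --zone=<filled in at lab start> \
  --tags=network-lb-tag \
  --machine-type=e2-small \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "<h3>Web Server: www1</h3>" | tee /var/www/html/index.html'

gcloud compute firewall-rules create www-firewall-network-lb \
  --target-tags network-lb-tag --allow tcp:80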
5. Run the following to list your instances. You'll see their IP addresses in the EXTERNAL_IP column:
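The listing command is not shown in this copy; it is:

gcloud compute instances list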
6. Verify that each instance is running with curl, replacing [IP_ADDRESS] with the IP address for
each of your VMs:
a. curl http://[IP_ADDRESS]
3. Add a target pool in the same region as your instances. Run the following to create the target
pool and use the health check, which is required for the service to function:
gcloud compute target-pools create www-pool \
  --region <filled in at lab start> --http-health-check basic-check
1. Enter the following command to view the external IP address of the www-rule forwarding
rule used by the load balancer:
gcloud compute forwarding-rules describe www-rule --region <filled in at lab start>
4. Use curl command to access the external IP address, replacing IP_ADDRESS with an external IP
address from the previous command:
while true; do curl -m1 $IPADDRESS; done
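The step that sets the IPADDRESS variable used in the loop above is not reproduced; it was presumably something like:

IPADDRESS=$(gcloud compute forwarding-rules describe www-rule --region <filled in at lab start> --format="get(IPAddress)")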
HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed
globally and operate together using Google's global network and control plane. You can configure
URL rules to route some URLs to one set of instances and route other URLs to other instances.
Requests are always routed to the instance group that is closest to the user, if that group has enough
capacity and is appropriate for the request. If the closest group does not have enough capacity, the
request is sent to the closest group that does have capacity.
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance
group. The managed instance group provides VMs running the backend servers of an external
HTTP load balancer. For this lab, backends serve their own hostnames.
1. First, create the load balancer template:
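The template command itself is not shown in this copy; it was presumably of roughly this form (template name, machine type and image are assumptions; the allow-health-check tag matches the firewall fragment below):

gcloud compute instance-templates create lb-backend-template \
  --network=default \
  --tags=allow-health-check \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud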
--target-tags=allow-health-check \
--rules=tcp:80
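The two flags above are the tail of a firewall-rule command whose beginning is missing in this copy; the full command was presumably similar to the following (the rule name and source ranges are assumptions based on Google's published health-check ranges):

gcloud compute firewall-rules create fw-allow-health-check \
  --network=default \
  --action=allow \
  --direction=ingress \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check \
  --rules=tcp:80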
4. Now that the instances are up and running, set up a global static external IP address that your
customers use to reach your load balancer
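The command for this step is not reproduced; reserving a global static IP is typically done with (the address name is an assumption):

gcloud compute addresses create lb-ipv4-1 --ip-version=IPV4 --global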
8. Create a URL map to route the incoming requests to the default backend service:
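The URL-map command is not shown in this copy; it is typically of this form (map and backend-service names are assumptions):

gcloud compute url-maps create web-map-http --default-service web-backend-service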
10. Create a global forwarding rule to route incoming requests to the proxy:
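The forwarding-rule command is not reproduced; it is typically of this form (rule, proxy and address names are assumptions):

gcloud compute forwarding-rules create http-content-rule \
  --address=lb-ipv4-1 \
  --global \
  --target-http-proxy=http-lb-proxy \
  --ports=80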
1. In the Cloud Console, from the Navigation menu, go to Network services > Load balancing.
3. In the Backend section, click on the name of the backend and confirm that the VMs
are Healthy. If they are not healthy, wait a few moments and try reloading the page.
4. When the VMs are healthy, test the load balancer using a web browser, going
to http://IP_ADDRESS/, replacing IP_ADDRESS with the load balancer's IP address.
This may take three to five minutes. If you do not connect, wait a minute, and then reload the browser.
Your browser should render a page with content showing the name of the instance that served the page,
along with its zone (for example, Page served from: lb-backend-group-xxxx).
Conclusion:
In this practical, we have built a network load balancer and an HTTP(S) load balancer, and practiced
using an instance template and a managed instance group.
PRACTICAL: 8
AIM:
Create and Manage Cloud Resources: Challenge Lab on Google cloud Platform.
THEORY:
Active Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
In the Cloud Console, on the top left of the screen, select Navigation menu > Compute Engine >
VM Instances:
You will serve the site via nginx web servers, but you want to ensure that the environment is fault-
tolerant. Create an HTTP load balancer with a managed instance group of 2 nginx web servers.
Use the following code to configure the web servers; the team will replace this with their own
configuration later.
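The startup code referenced above is not included in this copy; the usual challenge-lab script (the file name is an assumption) installs nginx and stamps the page with the host name:

cat << 'EOF' > startup.sh
#!/bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- "s/nginx/Google Cloud Platform - $HOSTNAME/" /var/www/html/index.nginx-debian.html
EOF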
--default-service web-server-backend
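The flag above is the tail of a URL-map creation command whose beginning is missing here; it was presumably of this form (the URL-map name is an assumption; the backend service name is taken from the flag itself):

gcloud compute url-maps create web-server-map --default-service web-server-backend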
Conclusion:
In this practical, we have learnt how to create an instance and a Kubernetes cluster, and how to set up an
HTTP load balancer.
PRACTICAL: 9
AIM:
1. Create and Setup Amazon Elastic Compute Cloud (EC2) on Amazon cloud Platform.
2. Create and setup monitoring service for AWS cloud resources and the applications you
run on AWS (Amazon CloudWatch).
3. Create an AWS Identity and Access Management (IAM) group and user, attach a policy
and add a user to a group.
THEORY:
Task 1: Create and Setup Amazon Elastic Compute Cloud (EC2) on Amazon cloud Platform.
1. First, log in to your AWS account and click on "Services" on the left of the AWS
Management Console, i.e. the primary screen. Then, from the drop-down menu of options, click
on "EC2". Here is the image attached to refer to.
2. In a while, the EC2 console will be loaded onto your screen. Once it is done, from the list of options
on the left in the navigation pane, click on “Instances”. Please refer to the image attached ahead for a
better understanding.
3. A new fresh screen will be loaded in a while. In the right corner, there will be an orange box
named “Launch Instance”. Click on that box and wait. Here is the image to refer to.
4. Now the process of launching an EC2 instance will start. The next screen will display a
number of options for choosing your AMI (Amazon Machine Image). Horizontally, on the menu
bar, you will see a 7-step procedure to be followed for successfully launching an instance. I
have chosen "Amazon Linux 2 AMI" as my AMI. Then go ahead and click "Next". Refer to the
image for any confusion.
5. Now comes sub-step 2 of the 7-step process of creating the instance, i.e. "Choose Instance
Type". I have chosen "t2.micro" as my instance type because I am a free-tier user and this
instance type is eligible for free use. Then click "Next". Refer to the image attached ahead for
better understanding.
6. Next comes sub-step 3 of the 7-step process of creating the instance, i.e. "Configure
Instance". Here we confirm all the configurations we need for our EC2. By default, the
configurations are filled in; we just confirm them or alter them as per our needs and click "Next"
to proceed. Here's the image for better understanding and resolving confusion.
7. Next comes sub step 4 out of the 7-step process of creating the instance, i.e. “Add Storage”.
Here we will look at the pre-defined storage configurations and modify them if they are not
aligned as per our requirements. Then click “Next”. Here’s the image of the storage window
attached ahead to understand better.
8. Next comes sub-step 5 out of the 7-step process of creating the instance, i.e. “Add Tags”.
Here we will just click “Next” and proceed ahead. Here’s the image to refer to.
9. Now we will complete the 6th sub step out of the 7-step process of creating the instance,
which is “Configure Security Group”. In Security Group, we have to give a group name and a
group description, followed by the number and type of ports to open and the source type. In
order to resolve confusion please refer to the image attached ahead.
10. Now we will complete the last step of the process of creating the instance, which is "Review".
In review, we will finally launch the instance, and then a new dialog box will appear asking for
the "Key Pair". Key pairs are used for authenticating the user when connecting to your EC2
instance. We are given two options: choose an existing key pair, or create a new one and
download it before launching. It is not necessary to create a new key pair every time; you can
use a previous one as well. Here is the image of the window attached.
Task 2: Create and setup monitoring service for AWS cloud resources and the applications you
run on AWS (Amazon CloudWatch).
Notifying the website management team about the instance on which the website is hosted: whenever
the CPU utilization of the instance (on which the website is hosted) goes above 80%, a CloudWatch
alarm is triggered. This alarm then activates the SNS topic, which sends the alert email to the
attached subscribers.
1. Let us assume that you have already launched an instance with the name tag ‘instance’.
3. You will be directed to this dashboard. Now specify the name and display name.
8. Select Email as the protocol and specify the email addresses of the subscribers in Endpoint. Click on
Create subscription. Now go to the mailbox of the specified email ID and confirm the
subscription.
9. Go to the cloudwatch dashboard on the AWS management console. Click on Metrics in the
left pane.
12. This dashboard shows the components of Amazon CloudWatch, such as Namespace, Metric
Name, Statistics, etc.
13. Select "Greater" as the threshold condition. Also, specify the threshold value (i.e. 80). Click on
Next.
14. Click on Select an existing SNS topic, and specify the name of the SNS topic you created earlier.
15. Specify the name of the alarm and a description (which is optional). Click on Next and then click on
Create alarm.
16. You can see the graph, which notifies whenever CPU utilization goes above 80%.
Task 3: Create an AWS Identity and Access Management (IAM) group and user, attach a
policy and add a user to a group.
Select Users → click Add user, provide a username, and select one or both access
types (programmatic access and AWS Management Console access); select an auto-generated
password or a custom password (give your own password).
Click on Next: Tags. Provide a key and value for your user, which will be helpful for
searching when you have many IAM users.
Click on Review, check all the configurations, and make changes if needed.
Click on Create user and your IAM user is successfully created; as you have chosen
programmatic access, an access key ID and a secret access key are generated.
Give the group a name → Next step. Give permissions / attach policies to the group.
Click on the next step (check group configuration and make changes if needed).
By default, an IAM group does not have any IAM users; we have to add a user to it and
remove the user if required.
To add an IAM user to an IAM group: inside the IAM group that you have created, go to
Users → click on Add Users to Group → select the user → click Add User. The user is successfully added.
Conclusion:
In this practical, we have learnt how to create and set up Amazon Elastic Compute Cloud (EC2), how to
create and set up a monitoring service for AWS cloud resources and the applications you run on AWS
(Amazon CloudWatch), and how to create an AWS Identity and Access Management (IAM) group and user,
attach a policy, and add a user to a group.
PRACTICAL: 10
AIM:
Create and setup Amazon Simple Storage Service (Amazon S3) Block Public Access on
Amazon Cloud Platform.
THEORY:
Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
These cloud computing web services provide a variety of basic abstract technical infrastructure and
distributed computing building blocks and tools. One of these services is Amazon Elastic Compute
Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all
the time, through the Internet. AWS's virtual computers emulate most of the attributes of a real
computer, including hardware central processing units (CPUs) and graphics processing units (GPUs)
for processing; local/RAM memory; hard-disk/SSD storage; a choice of operating systems;
networking; and pre-loaded application software such as web servers, databases, and customer
relationship management (CRM).
The AWS technology is implemented at server farms throughout the world, and maintained by the
Amazon subsidiary. Fees are based on a combination of usage (known as a "Pay-as-you-go" model),
hardware, operating system, software, or networking features chosen by the subscriber, as well as required
availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS
computer, a dedicated physical computer, or clusters of either. As part of the subscription agreement,
Amazon provides security for subscribers' systems. AWS operates from many global geographical
regions including 6 in North America.
Amazon markets AWS to subscribers as a way of obtaining large scale computing capacity more
quickly and cheaply than building an actual physical server farm. All services are billed based on usage,
but each service measures usage in varying ways. As of 2017, AWS owns 33% of all cloud (IaaS,
PaaS) while the next two competitors Microsoft Azure and Google Cloud have 18%, and 9%
respectively, according to Synergy Group.
PRACTICAL:
Login to your AWS account and go to products and select Amazon Simple Storage Service (S3).
Before you begin hosting your awesome static website out of S3, you need a bucket first. For this blog
post, it is critical that your bucket has the same name as your domain name.
If your website domain is www.my-awesome-site.com, then your bucket name must be www.my-
awesome-site.com.
The reasoning for this has to do with how requests are routed to S3. The request comes into the bucket,
and then S3 uses the Host header in the request to route to the appropriate bucket.
Alright, you have your bucket. It has the same name as your domain name, yes? Time to configure the
bucket for static website hosting.
Navigate to S3 in the AWS Console.
Click into your bucket.
Click the “Properties” section.
Click the “Static website hosting” option.
Select “Use this bucket to host a website”.
Your bucket is configured for static website hosting, and you now have an S3 website url like this
http://www.my-awesome-site.com.s3-website-us-east-1.amazonaws.com/.
By default, any new buckets created in an AWS account deny you the ability to add a public access
bucket policy. This is in response to the recent leaky buckets where private information has been
exposed to bad actors. However, for our use case, we need a public access bucket policy. To allow this
you must complete the following steps before adding your bucket policy.
Click into your bucket.
Select the “Permissions” tab at the top.
Under “Public Access Settings” we want to click “Edit”.
Change “Block new public bucket policies”, “Block public and cross-account access if bucket
has public policies”, and “Block new public ACLs and uploading public objects” to be false
and Save.
Now you must update the Bucket Policy of your bucket to have public read access to anyone in the
world. The steps to update the policy of your bucket in the AWS Console are as follows:
Navigate to S3 in the AWS Console.
Click into your bucket.
Click the “Permissions” section.
Select “Bucket Policy”.
Add the following Bucket Policy and then Save
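The policy itself is not reproduced in this copy; a typical public-read policy for the example bucket looks like this (replace the bucket name in the Resource ARN with your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.my-awesome-site.com/*"
    }
  ]
}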
Remember S3 is a flat object store, which means each object in the bucket represents a key without
any hierarchy. While the AWS S3 Console makes you believe there is a directory structure, there isn’t.
Everything stored in S3 is keys with prefixes.
Conclusion:
In this practical, we learnt about AWS and hosted our static website on AWS using S3 service.
PRACTICAL: 11
AIM:
Create and deploy project using AWS Amplify Hosting Service of AWS.
THEORY:
Amazon Web Services are some of the most useful products we have access to. One such service that
is becoming increasingly popular as days go by is AWS Amplify. It was released in 2018 and it runs
on Amazon’s cloud infrastructure. It is in direct competition with Firebase, but there are features that
set them apart.
Why is it needed?
User experience on any application is the most important aspect that needs to be taken care of.
AWS Amplify helps unify the user experience across platforms such as web and mobile, making
it easier for a user to choose whichever one they are more comfortable with. It is useful for
front-end development as it helps with building and deployment. Many who use it claim that it
actually makes full-stack development a lot easier with its scalability.
Main features:
Can be used for authenticating users which are powered by Amazon Cognito.
With help from Amazon AppSync and Amazon S3, it can securely store and sync data
seamlessly between applications.
As it is serverless, making changes to any back-end related cases has become simpler. Hence,
less time is spent on maintaining and configuring back-end features.
It also allows for offline synchronization.
It promotes faster app development.
It is useful for implementing Machine Learning and AI-related requirements as it is powered
by Amazon Machine learning services.
It is useful for continuous deployment.
Various AWS services are used for the various functionalities AWS Amplify offers. The main
components are libraries, UI components, and the CLI toolchain. It also provides static web
hosting using the AWS Amplify Console.
Task 1: Log in to the AWS Amplify Console and choose Get Started under Deploy.
Conclusion:
In this practical, we have learnt how to create and deploy a project using the AWS Amplify Hosting
service.
PRACTICAL: 12
AIM:
Simulating networks using iFogSim.
THEORY:
iFogSim is a java programming language based API that inherits the established API of Cloudsim to
manage its underlying discrete event-based simulation. It also utilizes the API of CloudsimSDN for
relevant network-related workload handling.
iFogSim Simulation Toolkit is another simulator used for the implementation of the Fog computing-
related research problem.
This practical will help you to follow the simulation-based approach of iFogSim and leverage various
benefits:
the iFogSim simulator has huge potential to simulate a research-based use case; corresponding to the
promising results, the solution can then be deployed to the existing system with minimum cost and
effort involved.
Installing iFogSim
The iFogSim library can be downloaded from the URL https://github.com/Cloudslab/iFogSim. This
library is written in Java, and therefore the Java Development Kit (JDK) is required to customise
and work with the toolkit.
After downloading the toolkit in compressed Zip format, it is extracted and a folder iFogSim-master
is created. The iFogSim library can be executed in any Java-based integrated development
environment (IDE) like Eclipse, NetBeans, JCreator, JDeveloper, jGRASP, BlueJ, IntelliJ IDEA or
JBuilder.
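For example, the toolkit can be fetched and unpacked from a shell as follows (assuming git and unzip are available):

git clone https://github.com/Cloudslab/iFogSim.git
# or, if the zip archive was downloaded instead:
unzip iFogSim-master.zip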
In order to integrate iFogSim with the Eclipse IDE, we need to create a new project in the IDE.
Once the library is set up, the directory structure of iFogSim can be viewed in the Eclipse IDE
in Project Name -> src.
There are numerous packages with Java code for different implementations of fog computing, IoT and
edge computing.
To work with iFogSim in the graphical user interface (GUI) mode, there is a file
called FogGUI.java in org.fog.gui.example. This file can be directly executed in the IDE, and there are
different cloud and fog components that can be imported in the simulation working area as shown in
Figure 3.
In the Fog Topology Creator, there is a Graph menu, with the option to import a topology.
Conclusion:
In this practical, we have learnt how to simulate networks using iFogSim.
PRACTICAL: 13
AIM:
A comparative study of Docker Engine on Windows Server vs the Linux platform: compare
the feature sets and implementations of Docker on Windows and Linux. Build and run
your first Docker Windows Server container: a walkthrough of installing Docker on Windows
10, building a Docker image and running a Windows container.
THEORY:
What does it mean to the Windows community?
It means that Windows Server 2016 natively supports Docker containers from now onwards and offers two
deployment options – Windows Server Containers and Hyper-V Containers, which offer an
additional level of isolation for multi-tenant environments. The extensive partnership integrates across
the Microsoft portfolio of developer tools, operating systems and cloud infrastructure.
In case you are a Linux enthusiast like me, you must be curious to know how differently Docker
Engine works on the Windows Server platform in comparison to the Linux platform. In this post, I am
going to spend a considerable amount of time talking about the architectural differences, the CLI that
works under both platforms, and further details about Dockerfiles, Docker Compose and the state of Docker
Swarm on the Windows platform.
Let us first talk about the architectural differences of Windows containers vs Linux containers.
Looking at the Docker Engine on Linux architecture, sitting on top are CLI tools like Docker Compose,
the Docker client CLI, Docker Registry, etc., which talk to the Docker REST API. Users communicate and
interact with the Docker Engine and, in turn, the engine communicates with containerd. containerd spins
up runC or another OCI-compliant runtime to run containers. At the bottom of the architecture are
underlying kernel features like namespaces, which provide isolation, and control groups, which
implement resource accounting and limiting, providing many useful metrics; they also help ensure
that each container gets its fair share of memory, CPU and disk I/O and, more importantly, that a single
container cannot bring the system down by exhausting one of those resources.
Under Windows, it is a slightly different story. The architecture looks the same for most of the top-level
components (the same Remote API and the same tools, such as Docker Compose and Swarm), but as we move
down, the architecture looks different. In case you are new to the Windows kernel, the kernel within
Windows is somewhat different from that of Linux because Microsoft takes a somewhat different
approach to the kernel's design. The term "kernel mode" in Microsoft terminology refers not only to the
kernel itself but also to the HAL (hal.dll) and various system services. Various managers for Objects,
Processes, Memory, Security, Cache, Plug and Play (PnP), Power, Configuration and I/O, collectively
called the Windows Executive (ntoskrnl.exe), are available. There is no kernel feature specifically called
namespaces or cgroups on Windows. Instead, Microsoft came up with a new version of Windows
Server 2016 introducing a "Compute Service Layer" at the OS level, which provides namespace, resource
control and UFS-like capabilities. Also, as you see below, there is no containerd and runC concept
available under the Windows platform. The Compute Service Layer provides a public interface to containers
and takes responsibility for managing containers, like starting and stopping them, but it
doesn't maintain their state as such. In short, it replaces containerd on Windows and abstracts the low-level
capabilities which the kernel provides.
You need a Windows Server 2016 Evaluation build 14393 or later to try the newer Docker Engine on
Windows Server 2016. If you try to follow the usual Docker installation process on an older Windows 2016
TP5 system, you will get an error when you run the following:
Start-Service Docker
Now you can search for plenty of Dockerized Windows applications using the command below:
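The search command is not shown in this copy; for example, images published by Microsoft can be searched with:

docker search microsoft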
Important points:
3. You can't commit a running container and build an image out of it. (This is very much possible on
the Linux platform.)
Building containers using a Dockerfile is supported on the Windows Server platform. Let's pick a
sample MySQL Dockerfile to build a MySQL container. I found one available in a GitHub
repository and want to see whether the Dockerfile is supported or not. The sample Dockerfile looks
somewhat like the one shown below:
FROM microsoft/windowsservercore
LABEL Description="MySql" Vendor="Oracle" Version="5.6.29"
RUN powershell -Command \
    $ErrorActionPreference = 'Stop'; \
    Invoke-WebRequest -Method Get -Uri https://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.29-winx64.zip -OutFile c:\mysql.zip ; \
    Expand-Archive -Path c:\mysql.zip -DestinationPath c:\ ; \
    Remove-Item c:\mysql.zip -Force
RUN SETX /M Path %path%;C:\mysql-5.6.29-winx64\bin
RUN powershell -Command \
    $ErrorActionPreference = 'Stop'; \
    mysqld.exe --install ; \
    Start-Service mysql ; \
    Stop-Service mysql ; \
    Start-Service mysql
RUN type NUL > C:\mysql-5.6.29-winx64\bin\foo.mysql
RUN echo UPDATE user SET Password=PASSWORD('mysql123') WHERE User='root'; FLUSH PRIVILEGES; >> C:\mysql-5.6.29-winx64\bin\foo.mysql
RUN mysql -u root mysql < C:\mysql-5.6.29-winx64\bin\foo.mysql
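To build and run the image from the directory containing this Dockerfile, the usual commands are (the image and container names below are just illustrative):

docker build -t mysql-windows .
docker run -d --name mysqldb mysql-windows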
This just brings up the MySQL image perfectly. I have my own version of a Dockerized MySQL image,
which is still in progress; I still need to populate the Docker image details.
Conclusion:
In this practical, we have learnt how Docker Engine works on the Windows and Linux operating systems,
and we also built a Docker image and ran a Windows container.