OS Activity - Vivek
Windows Operating System
Features:
· It is designed to run on standard x86 hardware from Intel and AMD, so most hardware
vendors, such as Dell and HP, make drivers for Windows.
· It supports enhanced performance by utilizing multi-core processors.
· It comes preloaded with many productivity tools which help you complete all types of
everyday tasks on your computer.
· Windows has a very large user base, so there is a much larger selection of available
software programs and utilities.
· Windows is backward compatible, meaning old programs can run on newer versions.
· Hardware is automatically detected, eliminating the need to manually install device
drivers.
LINUX Operating System
The Linux OS is an open-source operating system project: a freely distributed, cross-platform
operating system based on UNIX. It was originally developed by Linus Torvalds, and the
name Linux comes from the Linux kernel. It is basically the system software on a computer
that allows apps and users to perform specific tasks on the computer. The development of the
Linux operating system pioneered open-source development and became a symbol of
software collaboration.
Features:
· Linux is free: it can be downloaded from the Internet and redistributed under GNU licenses,
and it has excellent community support.
· The Linux OS is easily portable, which means it can be installed on various types of devices,
like mobile phones and tablet computers.
· It is a multi-user, multitasking operating system.
· Bash is the default Linux command interpreter (shell), which can be used to execute commands (see the short example after this list).
· Linux provides multiple levels of file structure, i.e. a hierarchical structure in which all
the files required by the system and those created by users are arranged.
· Linux provides user security using authentication features; threat detection and resolution
are also fast because Linux is mainly community driven.
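As a small illustration of the Bash bullet above, here is a minimal sketch of executing commands through the shell (who and wc are standard utilities):

#!/bin/bash
# Print who is logged in, then count the logged-in sessions.
who
who | wc -l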
Real-Time Operating System
A real-time operating system is a type of operating system used in computing systems that
require strict completion deadlines for all tasks that need to be performed. A real-time OS is
critical in applications that need immediate and deterministic behavior, such as industrial
control systems, aerospace and defense, medical devices, and the automotive industry.
Overall, a real-time operating system ensures that a system is reliable, safe, and efficient.
· In a real-time OS, the kernel restores the state of the task and passes control of the CPU to
that task.
VxWorks Operating System
In this article, you will learn about the VxWorks operating system's history and architecture,
capabilities, functions, and features.
It supports the AMD/Intel architecture, the ARM architecture, the POWER architecture,
and the RISC-V architecture. On 32 and 64-bit processors, the real-time operating system
may be utilized in multicore mixed modes, symmetric multiprocessing, multi-OS
architectures, and asymmetric multiprocessing.
The VxWorks development environment contains the kernel, board support packages, the
Wind River Workbench development suite, and third-party software and hardware
technologies. The real-time operating system in VxWorks 7 version has been redesigned for
modularity and upgradeability, with the operating system kernel separated from middleware,
applications, and other packages. Scalability, security, safety, connectivity, and graphics have
all been enhanced to meet the demands of the Internet of Things (IoT).
VxWorks provides a range of features, including the scalability, security, safety, connectivity,
and graphics capabilities noted above.
Android Operating System
Android is a mobile operating system based on a modified version of the Linux kernel and
other open-source software, designed primarily for touchscreen mobile devices such as
smartphones and tablets. Android is developed by a partnership of developers known as the
Open Handset Alliance and commercially sponsored by Google. It was unveiled in
November 2007, with the first commercial Android device, the HTC Dream, launched in
September 2008.
It is free and open-source software. Its source code is known as the Android Open Source
Project (AOSP), primarily licensed under the Apache License. However, most Android
devices ship with additional proprietary software pre-installed, most notably Google Mobile
Services (GMS), which includes core apps such as Google Chrome, the digital distribution
platform Google Play, and the associated Google Play Services development platform.
o About 70% of Android smartphones run Google's ecosystem, some with a vendor-
customized user interface and software suite, such as TouchWiz and later One UI by
Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and
LineageOS. However, the "Android" name and logo are trademarks of Google, which
imposes standards that restrict "uncertified" devices outside its ecosystem from using
Android branding.
Below are some unique features and characteristics of the Android operating system:
1. Near Field Communication (NFC)
Most Android devices support NFC, which allows electronic devices to interact across short
distances easily. The main goal here is to create a payment option that is simpler than
carrying cash or credit cards, and while the market hasn't exploded as many experts had
predicted, there may be an alternative in the works, in the form of Bluetooth Low Energy
(BLE).
2. Infrared Transmission
The Android operating system supports a built-in infrared transmitter that allows you to use
your phone or tablet as a remote control.
3. Automation
The Tasker app allows control of app permissions and also automates them.
4. Wireless App Downloads
You can download apps on your PC by using the Android Market or third-party options
like AppBrain. Then it automatically syncs them to your Droid, and no plugging is required.
5. Storage and Battery Swap
Android phones also have unique hardware capabilities. Google's OS makes it possible to
upgrade, replace, and remove your battery that no longer holds a charge. In addition, Android
phones come with SD card slots for expandable storage.
6. Custom Home Screens
While it's possible to hack certain phones to customize the home screen, Android comes with
this capability from the get-go. Download a third-party launcher like Apex or Nova, and you
can add gestures, new shortcuts, or even performance enhancements for older-model devices.
7. Widgets
Apps are versatile, but sometimes you want information at a glance instead of having to open
an app and wait for it to load. Android widgets let you display just about any feature you
choose on the home screen, including weather apps, music widgets, or productivity tools that
helpfully remind you of upcoming meetings or approaching deadlines.
8. Custom ROMs
Because the Android operating system is open source, developers can tweak the current OS
and build their own versions, which users can download and install in place of the stock OS. Some
are filled with features, while others change the look and feel of a device. Chances are, if
there's a feature you want, someone has already built a custom ROM for it.
Evolution of Operating System
A computer system has many resources, both software and hardware, that are required to
finish a task. Generally, the required resources are file storage, CPU, memory, input and
output devices, and so on. The operating system acts as the controller of all the above-
mentioned resources and assigns them to the specific programs executed to perform the task.
Hence, the operating system is a resource manager that handles resources from both the user
view and the system view. The evolution of the operating system runs from programming
with punch cards all the way to training machines to speak and interpret any language.
1. Serial Processing
Serial processing developed from the 1940s to the 1950s, when programmers worked directly
with the hardware components, without an operating system. The problems here were
scheduling and setup time. Users signed up for blocks of machine time, and much of the
computer's time was wasted. Setup time was spent loading the compiler, saving the compiled
program, loading the source program, linking, and buffering. If any intermediate error
occurred, the process had to start over.
Though it was inconvenient for the users, batch processing was designed to keep the
expensive computer as busy as possible by running a stream of queued jobs. Memory
protection kept the memory area containing the monitor from being altered, and a timer
prevented any job from monopolizing the system. The processor still sat idle while the input
and output devices were in use, which meant poor utilization of CPU time.
Multiprogramming is used to manage multiple interactive jobs. Processor time is shared
among multiple users, and many users can simultaneously access the system via terminals.
Printing terminals required programs with a command-line user interface, where the user
typed responses to prompts or typed commands; the interaction scrolled down like a roll of
paper.
Video terminals replaced printing terminals and displayed fixed-size characters. Some were
used to draw forms on the screen, but many were used with scrolling, like a glass teletype.
Personal computers became affordable in the mid-1970s. The first commercially feasible
personal computer, the Altair 8800, came onto the market and shook up business values. The
Altair did not have an operating system, because it had only light-emitting diodes and toggle
switches for input and output, so people started to use floppy disks and connected terminals.
Digital Research implemented the CP/M operating system in 1976 for the Altair and similar
computers. Later, DOS and CP/M had command-line interfaces similar to those of the time-
sharing operating systems. These computers were dedicated to single users and did not
support shared use.
After years of research on large computers and enhancements in hardware, the Macintosh
became commercially and economically feasible. Research prototypes such as Sketchpad,
still being refined in many research labs, formed the basis of the products that followed.
· The derivatives of CP are CP-VM, CP/M, CP/M-86, DOS, DR-DOS, and FreeDOS.
· The Microsoft Windows line includes Windows 3.x, Windows 95/98, Windows XP,
Windows Vista, Windows 7, and Windows 8.
· The derivatives of MULTICS are UNIX, Xenix, Linux, QNX, VSTa, RISC iX, Mac
OS X, and so on.
· The derivatives of VMS are OS/2, ReactOS, KeyKOS, OS/360, and OS/400.
The real-time operating system is an advanced multi-feature operating system applied when
there are rigid time requirements for the flow of data or the operation of a processor. The
distributed operating system is an interconnection between two or more nodes, but the
processors do not share memory; it is also called a loosely coupled system.
Fedora
This flavour of operating system is the foundation for the commercial Red Hat Enterprise
Linux version. It weighs more on features and functionality along with free software. Fedora
can be used freely within a community and has third-party repositories; this makes it an
unlicensed distro version of the Linux OS with a community-driven character. This category
of OS receives a regular update every 6 months, making it more scalable in terms of
performance.
Unlike Ubuntu, Fedora doesn’t make his desktop environment or other software. Fedora
project uses “upstream” software facilitating a platform that integrates all the upstream
software without adding their custom tools or patching it much. Fedora arrived with the
GNOME 3 Desktop Environment by default, even supposing it to be available for other
desktop environments also.
Red Hat
Red Hat Enterprise Linux is a commercial Linux distribution intended for servers and
workstations. It is among the most favored versions of Linux OS and relies on the open-source
Fedora version.
This version offers long release cycles to ensure stability among its features. It is
trademarked to prohibit the Red Hat Enterprise Linux software from being redistributed.
Nonetheless, the core software is free and open source.
CentOS
CentOS is a community version of Redhat. It is a community project that takes the Red Hat
Enterprise Linux code, removes all Red Hat’s trademarks, and makes it available for free
use and distribution.
It is available for free, and support comes from the community as opposed to Red Hat itself.
Activity-2
In OS virtualization, the virtualized environment accepts commands from any of the users
operating it and performs the different tasks on the same machine by running different
applications.
In operating system virtualization, one application does not interfere with another even
though they are functioning on the same computer.
The kernel of an operating system allows more than one isolated user-space instance to exist.
These instances are called software containers, and they act as virtualization engines.
These are the reasons why we use Operating System Virtualization in Cloud Computing.
The operating system of the computer manages all the software and hardware of the
computer. With the help of the operating system, several different computer programs can
run at the same time. This is done by using the CPU of the computer. Every program runs
successfully thanks to a few components of the computer being coordinated by the operating
system.
2. Compare VMS and Containers
Virtual machines and Containers are two ways of deploying multiple, isolated services
on a single platform.
Virtual Machine:
It runs on top of emulating software called the hypervisor, which sits between the
hardware and the virtual machine. The hypervisor is the key to enabling virtualization.
It manages the sharing of physical resources among the virtual machines. Each virtual
machine runs its own guest operating system. Virtual machines are less agile and have lower
portability than containers.
Container:
It sits on top of a physical server and its host operating system. Containers share a
common operating system kernel that requires care and feeding for bug fixes and patches.
They are more agile and have higher portability than virtual machines.
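As a rough illustration of the difference, here is a hedged sketch (it assumes Docker and QEMU/KVM are installed and that a guest disk image named ubuntu-vm.qcow2 exists):

# Container: shares the host kernel, so it starts in seconds.
docker run --rm -it ubuntu:22.04 bash
# Virtual machine: boots a complete guest OS on top of the hypervisor.
qemu-system-x86_64 -enable-kvm -m 2048 -hda ubuntu-vm.qcow2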
Benefits of Virtualization
· Virtualized services help businesses scale faster and be more flexible.
1. Cut Your IT Expenses
Lower spending is one of the main virtualization benefits. When using virtualization
technology, one physical server hosts many virtual machines. You can use these machines
for different purposes. Half of business owners consider this point to be very important.
In the event of a disaster, an operator can retrieve a file with all the VM data from
a computer in minutes. This caters to business continuity, trustworthiness, and resilience.
Developers won’t need to request a new computer with the required operating
system. Instead, they can complete their tasks. Thus, the independence of DevOps
makes for another critical point in the list of the benefits of virtualization.
8. Greater ROI
Among the other benefits of virtualization is the idea of getting a significant return
on smaller investments. In the past, you had to spend a lot of money on setting up an
on-site working environment and buying hardware.
Now, you can just spend some time setting up a machine, purchasing a license
(from Microsoft Azure or AWS), or just getting access for starters. According to
Statista, enterprises spend just 9% of their IT budget on virtualization.
Aside from this small initial investment, the rest of the money you earn is yours.
Add here that you are then able to dedicate more time to your business rather than
setting up a new environment.
However, the number of data leaks in 2020 remained high. In the US alone,
155.8 million people were affected by data leaks, and the cause was almost always
attributed to weak security.
Security enhancement is one of the most important benefits of virtualization.
Virtual firewalls give you the best of two worlds. First, they are isolated, like other
virtual applications. Thus, VMs are safe from viruses and malicious attacks of
different sorts. Second, they are cheaper and simpler to install, maintain, and
update.
In terms of virtualization benefits, you get higher visibility of what’s going on both
in virtual and physical environments. This enables faster provisioning of resources.
You also react to adverse events more quickly.
In some cases, the disadvantages of virtualization are also a piece of the puzzle.
For example, setting up virtualization takes more time compared to using local systems.
Also, with time you may still face scalability issues, since you cannot grow endlessly;
at some point, you will need to expand your hardware base.
Activity-3
Introduction
There are many different Linux filesystems available, each with its own advantages and
disadvantages. In this section, we will compare the four most popular Linux filesystems:
Ext2, Ext3, Ext4 and Btrfs.
Ext2 is the oldest of the four filesystems, and is still used by many Linux distributions. It is a
very stable filesystem, but does not support features such as journaling or extended attributes.
Ext3 is a journaled version of Ext2, and is therefore more reliable. However, it does not
support some of the newer features found in other filesystems such as extent-based allocation
or delayed allocation.
A Linux filesystem is the underlying software component that provides access to the data on
a storage device. The three earliest Linux filesystems are Ext, Ext2, and Ext3. Each has
its own strengths and weaknesses, which we will explore below.
Ext: The original extended filesystem was the first filesystem created specifically for Linux,
released in 1992. It is simple and straightforward, but does not have some of the features that
newer filesystems have.
Ext2: The second version of the Ext filesystem was released in 1993. It added support for
larger file sizes and extended attributes.
Ext3: The third version of the Ext filesystem was released in 2001. It added journaling,
which helps to prevent data loss in the case of a power failure or system crash.
Overview of Ext2, Ext3
There are many different Linux filesystems available, but the most popular are Ext2, Ext3,
Ext4 and Btrfs. All of these filesystems have their own advantages and disadvantages, so it's
important to choose the right one for your needs.
Ext2 is the oldest of the four filesystems, and it’s also the simplest. It doesn’t have any
journaling features, so it’s not as reliable as the other options. However, it’s also much faster
than the other options, so it’s a good choice if you need maximum performance.
Ext3 is a slightly newer version of Ext2 that includes journaling. This makes it more reliable,
but it also means that it’s slightly slower. However, most people feel that the reliability is
worth the trade-off in speed.
There are a few different types of Linux filesystems available, each with its own set of pros
and cons. Here’s a quick rundown of the most popular ones:
– EXT2: One of the most popular Linux filesystems, EXT2 is known for being fast and
stable. However, it doesn’t support journaling, which means that data can be lost in the event
of a power failure or system crash.
– EXT3: An extension of EXT2, EXT3 adds journaling support to help prevent data loss.
It’s also compatible with most major Linux distributions.
– XFS: Another popular Linux filesystem, XFS is known for being scalable and efficient. It
is a journaling filesystem, so its metadata is protected against power failures and system
crashes.
There are many different types of Linux filesystems available, each with its own benefits and
drawbacks. Below, we'll take a look at four of the most popular filesystems: Ext2, Ext3,
Ext4, and Btrfs.
Ext2:
Ext2 is the oldest filesystem in this list, having been first introduced in 1993. Despite its age,
it's still widely used thanks to its simplicity and reliability. One downside of Ext2 is that it
doesn’t support journaling, which means that data can be lost in the event of a power failure
or system crash.
Ext3:
Ext3 was introduced in 2001 and is a journaled version of Ext2. This means that data is less
likely to be lost in the event of a power failure or system crash. However, Ext3 is not as
widely used as Ext2 due to its slightly higher overhead.
There are many different types of Linux filesystems available, and choosing the right one for
your system can be a daunting task. Below, we will compare the most popular Linux
filesystems: Ext2, Ext3, Ext4 and Btrfs.
Ext2:
Ext2 is the oldest and one of the most popular Linux filesystems. It is not a journaling
filesystem: it does not keep a log of changes made to the filesystem, which keeps it simple
and fast but makes recovery after a crash harder. Because it avoids journaling overhead, it
remains a reasonable choice for systems that require high performance.
Ext3:
Ext3 is an improved version of Ext2 that adds support for journaling. This makes it even
more reliable than Ext2, but it also comes with a performance penalty. If you need maximum
reliability, then Ext3 is a good choice. However, if you need maximum performance, then
you should consider using another filesystem such as Ext4 or Btrfs.
2. Discuss the file-mount and unmount system calls.
The mount system call makes a directory accessible by attaching the root directory of one file
system to another directory. In UNIX, directories are represented by a tree structure, and
mounting means attaching a file system to a branch of that tree. This means the file system
found on one device can be attached to the tree. The location in the system where the file
system is attached is called a mount point.
Example:
mount -t type device dir
- This will attach (mount) the file system found on device, of type type, to the directory dir.
- The umount system call does the opposite: it unmounts or detaches the attached file system
from the target or mount point. A file system that is open or in use by some process cannot be
detached.
The attaching of one file system to another file system is done using the mount system call.
At the time of mounting, one directory tree is essentially spliced onto a branch of another
directory tree. The mount takes two arguments: one, the mount point, which is a directory in
the current file naming system; two, the file system to mount at that point. When a CD-ROM
is inserted into the system, the file system on the device /dev/cdrom is automatically mounted
on a directory in the system.
The unmount system call is used to detach a file system.
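A minimal shell sketch of mounting and unmounting (the device name /dev/sdb1 and the vfat type are assumptions; root privileges are required):

mkdir -p /mnt/usb                  # create the mount point directory
mount -t vfat /dev/sdb1 /mnt/usb   # attach the device's filesystem at /mnt/usb
ls /mnt/usb                        # the device's files are now visible here
umount /mnt/usb                    # detach; fails if the filesystem is still in use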
Activity-4:
· In UNIX, the fork() function creates a new process by duplicating the existing process.
The new process is called the child process and has its own unique process ID (PID).
· In Windows, the CreateProcess() function creates a new process by creating a new
process image.
1. Purpose: fork() is used to create a new process in UNIX, which is a clone of the parent
process with a separate memory space. CreateProcess() in Windows is used to create a new
process with a specified executable file to run.
2. Inheritance: In UNIX, fork() duplicates the parent process, inheriting its file descriptors,
environment variables, and memory layout. In Windows, CreateProcess() doesn't inherit
these attributes directly; instead, it requires explicit configuration during process creation.
3. Memory: fork() shares the same memory layout between parent and child processes
initially, with copy-on-write protection. CreateProcess() assigns a completely separate
memory space for the new process in Windows.
4. Process ID: fork() returns the child's process ID in the parent process and zero in the child
process. CreateProcess() returns a PROCESS_INFORMATION structure containing the new
process's handle and process ID.
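Because bash itself calls fork() to run a subshell, the UNIX side can be sketched directly from the shell (a minimal illustration of fork semantics, not the C API):

#!/bin/bash
# $$ is the parent shell's PID; $BASHPID reports the PID of the current bash
# process, so inside the ( ... ) subshell it shows the forked child's PID.
echo "parent PID: $$"
(
  echo "child PID: $BASHPID, forked from $$"
) &
wait   # the parent waits for the child, analogous to waitpid() in C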
Activity-5:
All the processes in a system require some resources, such as the central processing unit
(CPU), file storage, input/output devices, etc., to execute. Once the execution is finished, the
process releases the resources it was holding. However, when many processes run on a
system, they compete for the resources they require for execution. This may give rise to a
deadlock situation.
A deadlock is a situation in which more than one process is blocked because each is holding a
resource and also requires some resource that is acquired by another process. Therefore,
none of the processes gets executed. A deadlock can arise only if the following four
conditions hold simultaneously:
· Mutual Exclusion: Only one process can use a resource at any given time i.e. the
resources are non-sharable.
· Hold and wait: A process is holding at least one resource at a time and is waiting
to acquire other resources held by some other process.
· No preemption: A resource can be released only voluntarily by the process holding
it, i.e. after that process finishes execution.
· Circular Wait: A set of processes are waiting for each other in a circular fashion.
For example, let's say there is a set of processes {P0, P1, P2, P3} such that P0 depends
on P1, P1 depends on P2, P2 depends on P3, and P3 depends on P0. This creates a
circular relation among all these processes, and they have to wait forever to be
executed (a runnable sketch of this situation follows this list).
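The circular-wait sketch promised above uses flock(1) (the lock-file paths are placeholders). Each command holds one lock and then requests the other, so running them in two terminals at roughly the same time blocks both forever:

# Terminal 1: take lock A, then ask for lock B.
( flock 8; sleep 2; flock 9; echo "T1 done" ) 8>/tmp/lockA 9>/tmp/lockB
# Terminal 2: take lock B, then ask for lock A.
( flock 9; sleep 2; flock 8; echo "T2 done" ) 9>/tmp/lockB 8>/tmp/lockA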
Deadlock can be handled in four ways: prevention, avoidance, detection and recovery, and
ignorance. The first two methods are used to ensure the system never enters a deadlock.
Deadlock Prevention
This is done by restraining the ways a request can be made. Since deadlock occurs when all
the above four conditions are met, we try to prevent any one of them, thus preventing a
deadlock.
Deadlock Avoidance
When a process requests a resource, the deadlock avoidance algorithm examines the
resource-allocation state. If allocating that resource would send the system into an unsafe
state, the request is not granted.
Therefore, it requires additional information, such as how many resources of each type are
required by a process. If the system would enter an unsafe state, it has to step back to
avoid deadlock.
Deadlock Detection and Recovery
We let the system fall into a deadlock, and if it happens, we detect it using a detection
algorithm and try to recover.
Deadlock Ignorance
In this method, the system assumes that deadlock never occurs. Since deadlock situations are
infrequent, some systems simply ignore them. Operating systems such as UNIX and
Windows follow this approach. However, if a deadlock occurs, we can reboot the system, and
the deadlock is resolved automatically.
What is a Process?
A process can create other processes to perform multiple tasks at a time; the created
processes are known as clone or child process, and the main process is known as the parent
process. Each process contains its own memory space and does not share it with the other
processes. It is known as the active entity. In memory, a typical process is laid out as stack,
heap, data, and text segments.
A process in an OS can remain in any of the following states: new, ready, running, waiting (blocked), or terminated.
When we start executing the program, the processor begins to process it. It takes the
following steps:
o Firstly, the program is loaded into the computer's memory in binary code after
translation.
o A program requires memory and other OS resources to run. The resources, such as
registers, a program counter, and a stack, are provided by the OS.
o A register can have an instruction, a storage address, or other data that is required by
the process.
o The program counter maintains the track of the program sequence.
o The stack has information on the active subroutines of a computer program.
o A program may have several instances of itself, and each instance of the running
program is known as an individual process.
A thread is a subset of a process and is also known as a lightweight process. A process
can have more than one thread, and these threads are managed independently by the
scheduler. All the threads within one process are interrelated. Threads share some common
information, such as the data segment, code segment, and open files, with their peer threads,
but each thread contains its own registers, stack, and program counter.
o When a process starts, OS assigns the memory and resources to it. Each thread within
a process shares the memory and resources of that process only.
o Threads are mainly used to improve the processing of an application. In reality, only a
single thread executes at a time, but fast context switching between threads gives the
illusion that threads run in parallel.
o If a single thread executes in a process, it is known as a single-threaded process, and if
multiple threads execute simultaneously, it is known as multithreading.
A thread is a single sequence stream within a process. Threads have some of the same
properties as processes, so they are called lightweight processes. Threads are executed one
after another but give the illusion that they are executing in parallel. Each thread has
different states.
Each thread has
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other as they share the code, data, OS resources etc.
Types of Threads:
1. User Level Thread (ULT) – Implemented in a user-level library; these threads are not
created using system calls. Thread switching does not need to call the OS or cause an
interrupt to the kernel. The kernel doesn't know about user-level threads and manages them
as if they were single-threaded processes.
· Advantages of ULT –
· Can be implemented on an OS that doesn't support multithreading.
· Simple representation, since a thread has only a program counter, register set,
and stack space.
· Simple to create, since no intervention of the kernel is required.
· Thread switching is fast, since no OS calls need to be made.
· Limitations of ULT –
· Little or no coordination between the threads and the kernel.
· If one thread causes a page fault, the entire process blocks.
2. Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead
of a thread table in each process, the kernel itself has a thread table (a master one) that keeps
track of all the threads in the system. In addition, the kernel also maintains the traditional
process table to keep track of the processes. The OS kernel provides system calls to create
and manage threads.
· Advantages of KLT –
· Since the kernel has full knowledge of the threads in the system, the scheduler
may decide to give more time to processes having a large number of threads.
· Good for applications that frequently block.
· Limitations of KLT –
· Slower and less efficient than user-level threads.
· Each thread requires a thread control block, which adds overhead.
Activity-6:
In this section, you will learn about the difference between paging and swapping in the
operating system. But before discussing the differences, you must know about paging and
swapping individually.
Paging is accomplished by dividing RAM into fixed-size sections known as frames. A
process's logical memory is divided into identical fixed-size units called pages. The hardware
determines the page size and frame size. Since a process must be executed from main
memory, whenever a process has to run, its pages are loaded from the source, or backing
store, into any free frames in main memory.
A memory management technique called swapping removes inactive programs from the
computer system's main memory. Any process must be in memory to execute, but it can be
temporarily swapped out of memory to a backing store and then returned to memory to
continue its execution. Swapping is done to free memory for the operation of other
processes.
The swapping mechanism typically impacts performance, but it also aids in executing many
large processes concurrently. Swapping is another name for a technique of memory
compression. Generally, low-priority processes are swapped out so that higher-priority
processes may be loaded and executed.
Key Differences between the Paging and Swapping
There are various key differences between Paging and Swapping in the operating system.
Some main differences between Paging and Swapping in the operating system are as follows:
1. Paging is a memory management method that enables the system to store and retrieve
data from secondary storage for use in main memory. In contrast, swapping
temporarily transfers a process from primary to secondary memory.
2. Paging is more flexible than swapping because paging transfers pages. On the other
hand, Swapping is less flexible.
3. There are many processes in the main memory during swapping. On the other hand,
there are some processes in the main memory while paging.
4. Swapping involves processes switching between the main memory and secondary
memory. On the other hand, pages are equal-size memory blocks that transfer
between the main memory and secondary memory during paging.
5. Swapping allows the CPU to access processes more quickly. On the other hand,
paging allows virtual memory to be implemented.
6. Swapping is appropriate for heavy workloads. On the other hand, the paging is
appropriate for light to medium workloads.
7. Swapping allows multiprogramming. In contrast, paging allows a process's physical
address space to be non-contiguous, which prevents external fragmentation.
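These differences can be observed on a live system. Below is a small sketch using standard Linux tools (vmstat from procps and swapon from util-linux are assumed to be installed):

vmstat 1 5       # the si/so columns show pages swapped in/out per second
free -h          # shows total, used, and free swap space
swapon --show    # lists the active swap areas (partitions or files)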
There are various head-to-head comparisons between Paging and Swapping. Some
differences between Paging and Swapping are as follows:
Features | Paging | Swapping
Flexibility | Paging is more flexible, as only pages of a process are moved. | Swapping is less flexible, because it moves the entire process back and forth between RAM and the backing store.
Main Functionality | During paging, pages (equal-size memory chunks) travel between the primary and secondary memory. | Swapping involves whole processes switching between main memory and secondary memory.
Workloads | Paging is appropriate for light to medium workloads. | Swapping is appropriate for heavy workloads.
Usage | Paging allows virtual memory to be implemented. | Swapping allows the CPU to access processes more quickly.
Processes | There are some processes in the main memory during paging. | There are many processes in the main memory during swapping.
Activity-7:
The straight answer is that different shells cater to specific user preferences and requirements.
Each shell has unique features and styles, making it suitable for a particular set of tasks and
UX requirements.
Users can choose the shell that aligns with their workflow or requirements. While all shells
can accomplish essential operations, shells vary in the level of flexibility and user
experience.
Let’s now discuss the 8 popular types of Linux shells and explore the benefits and features of
these shells.
The following discussion will help you pick the right shell that fits your particular workflows
and operational usage.
Note that you need a basic understanding of Linux commands and using the command-line
utilities available on a typical Linux distribution.
The Bourne Shell (sh)
While lacking advanced features, the Bourne Shell has the necessary basic scripting and
automation capabilities. As a result, Bourne Shell scripts are highly portable across Unix-like
systems, making them reliable for simple command-line tasks and script execution.
Features
Advanced Features
It has no advanced features, but it’s used as the base for other shells.
Limitations
Example
Here’s an example of a Bourne Shell script:
#!/bin/sh
echo "Hello, World!"
In this script:
The #!/bin/sh line means the script should be run in the Bourne Shell.
When you run the script, the echo “Hello, World!” line prints “Hello, World!” on the screen.
The /bin/sh file still exists on modern Linux systems, but it acts as a link to another shell,
such as Bash (on many distributions) or dash (on Ubuntu and Debian). This link is set up to
behave like the Bourne shell.
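To try the script, save it to a file (assumed here to be named hello.sh) and run it:

chmod +x hello.sh   # make the script executable
./hello.sh          # prints: Hello, World!
sh hello.sh         # alternatively, run it explicitly with sh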
The C Shell, often abbreviated as csh, is another shell with a long association with Linux.
Developed by Bill Joy at the University of California, Berkeley, in the late 1970s, the C
Shell is known for its unique syntax and command-line editing capabilities. It was the first
shell that introduced the command history feature.
Features
Advanced Features
· The C shell introduced the command history feature that tracks and recalls previously
executed commands.
· Users can create custom aliases for frequently used apps.
· It introduced tilde (~) to represent the user’s home directory for enhanced
convenience.
· Csh incorporated a built-in expression grammar for more flexible command
execution.
Limitations
· The C shell was criticized for its syntax inconsistencies, which can confuse even
advanced users.
· It lacked full support for standard input/output (stdio) file handles and functions,
limiting specific capabilities.
· Its limited recursion abilities meant that users couldn’t use sequences of complex
commands.
Compared to the Bourne shell, the C shell improved readability and performance. Its
interactive features and innovations influenced the development of subsequent Unix shells.
The TENEX C Shell (tcsh for short) is an upgraded version of the C Shell. It can remember
past commands, helps complete file names, and generally allows more complex scripts. Many
Unix systems come with tcsh already installed.
Features
Advanced Features
Limitations
The KornShell, or ksh, was developed by David Korn at AT&T Bell Laboratories. It
combines the best features of the Bourne Shell and the C Shell, offering a powerful and
user-friendly shell with advanced scripting capabilities. This shell has superior speed
performance compared to the C and Bourne shells.
Features:
Advanced Features
Limitations
Activity-8
If you're working in IT, you might need to schedule various repetitive tasks as part of your
automation processes.
For example, you could schedule a particular job to periodically execute at specific times of
the day. This is helpful for performing daily backups, monthly log archiving, weekly file
deletion to create space, and so on.
And if you use Linux as your OS, you'll use something called a cron job to make this happen.
What is a cron?
Cron is a job scheduling utility present in Unix-like systems. The crond daemon enables cron
functionality and runs in the background. Cron reads the crontab (cron tables) to run
predefined scripts.
By using a specific syntax, you can configure a cron job to schedule scripts or other
commands to run automatically.
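For instance, the recurring tasks mentioned earlier could look like this in a crontab (a sketch; the script paths are placeholders):

0 2 * * *   /home/user/backup.sh        # daily backup at 02:00
0 0 1 * *   /home/user/archive-logs.sh  # monthly log archiving on the 1st
30 4 * * 0  /home/user/cleanup.sh       # weekly cleanup on Sundays at 04:30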
For individual users, the cron service checks the following file: /var/spool/cron/crontabs
Contents of /var/spool/cron/crontabs
If you get a permission error when running crontab, it means you don't have permission to use cron.
· crontab -e: edits crontab entries to add, delete, or edit cron jobs.
· crontab -l: list all the cron jobs for the current user.
· crontab -u username -l: list another user's crons.
· crontab -u username -e: edit another user's crons.
When you list crons, you'll see entries like the example below. In such an entry:
· Weekdays (0-6) are the days of the week on which the command runs; here, 0 is Sunday.
· sh indicates that the script should be run with the sh shell (/bin/sh).
· /path/to/script.sh specifies the path to the script.
Below is the summary of the cron job syntax.
* * * * * sh /path/to/script/script.sh
| | | | |  |
| | | | |  Command or Script to Execute
| | | | |
| | | | Day of the Week (0-6)
| | | |
| | | Month of the Year (1-12)
| | |
| | Day of the Month (1-31)
| |
| Hour (0-23)
|
Min (0-59)
It is okay if you are unable to grasp this all at once. You can practice and generate cron
schedules with the crontab guru.
1. Create a script called date-script.sh which prints the system date and time
and appends it to a file. The script is shown below:
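A minimal script matching this description (the output file name date-out.txt is taken from step 4; writing it to the home directory is an assumption):

#!/bin/bash
# Append the current date and time to date-out.txt on each run.
echo "$(date)" >> "$HOME/date-out.txt"

Scheduling it to run every minute would use a crontab entry such as:
* * * * * /bin/sh /path/to/date-script.sh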
4. Check the output of the file date-out.txt. According to the script, the
system date should be printed to this file every minute.
Output of our cron job.
Activity-9
Static IP Address and Dynamic IP Address are both used to identify a computer on a network
or on the Internet.
· A static IP address is provided by the Internet Service Provider and remains fixed as
long as the system is connected to the network.
· A dynamic IP address is provided by DHCP; generally, a company gets a single
static IP address and then generates dynamic IP addresses for its computers within
the organization's network.
Read through this article to find out more about static and dynamic IP addresses and how
they are different from each other.
What is an IP Address?
An IP address is provided by the Internet Service Provider and is called the logical address of
a computer connected to a network. Every unique instance linked to any computer
communication network employing the TCP/IP communication protocols is given an IP
address.
When network nodes connect to a network, the Dynamic Host Configuration Protocol
(DHCP) server allocates IP addresses. DHCP assigns IP addresses from a pool of available
addresses that are part of the addressing system as a whole. Even though DHCP only offers
dynamic addresses, many machines reserve static IP addresses given to a single entity and
cannot be used again.
IP addresses are generally represented by a 32-bit unsigned binary value. It is represented in a
dotted decimal format. For example, "192.165.20.40" is a valid IP address.
A static IP address is explicitly allocated to a device rather than one that a DHCP server has
assigned. Because it does not change, it is called static.
Static IP addresses can be configured on routers, phones, tablets, desktops, laptops, and any
other device that can use an IP address. This can be done either by the router that hands out
IP addresses or by manually typing the IP address into the device.
If you want to host a website from your home, have a file server on your network, utilize
networked printers, forward ports to a specific device, run a print server, or use a remote
access application, you'll need a static IP address. DNS servers are an example of a static IP
address at work.
An ISP gives you a dynamic IP address that you can use for a limited time. If a dynamic
address isn't in use, it can be allocated to another device automatically. DHCP or PPPoE are
used to assign dynamic IP addresses.
Internet Service Providers and networks with many connecting clients or end-nodes
commonly use dynamic IP addresses. A DHCP server handles the task of assigning,
reassigning, and altering dynamic IP addresses. The scarcity of static IP addresses on IPv4 is
one of the key reasons for using dynamic IP addresses. Dynamic IP addresses allow a single
IP address to be swapped across many nodes to get around this problem.
The following table highlights the major differences between a static IP address and a
dynamic IP address −
Feature | Static IP Address | Dynamic IP Address
Changes | A static IP address does not get changed with time. | A dynamic IP address can be changed at any time.
Device tracking | A device using a static IP address can be traced easily. | A device using a dynamic IP address is difficult to trace.
Conclusion
To conclude, static IP addresses are provided by ISPs and remain fixed, while dynamic IP
addresses are assigned by DHCP and change regularly, for example each time a user
reconnects.
2. Compare and study the different options offered by Linux for package
management.
Overview
Packages in Linux are similar to executable installer files in the Windows operating system,
but they are not themselves executable. A package in Linux is a compressed software archive
file containing all the files included with a software application that provides some
functionality. Packages can be command-line utilities, GUI applications, or software
libraries. Installing a package is the same as installing any application, software, or utility in
Windows.
Regarding package management in Linux: package management is the term used to signify
the installation and maintenance of packages on your system. Package managers reduce the
need to manually download and install the various dependencies required by software.
Packages
A package contains all the necessary data required for the installation and maintenance of the
software package. These packages are created by someone known as a package maintainer. A
package maintainer takes care of the packages. They ensure active maintenance, bug fixes if
any, and the final compilation of the package.
Repositories
These Packages are present in the Repositories that contain packages specially designed,
compiled, and maintained for each Linux version and distribution. These Repositories contain
thousands of packages created by the distribution vendors. Sometimes projects handle their
own packaging and distribution.
Dependencies
Some packages might require some other pre-installed software to function correctly. A
resource or software that a package depends on is called its dependency. Dependencies
include metadata on how to build the code you depend on and information on where to find
the files containing it. The package manager takes care of all these problems for you. It will
install, modify, upgrade, update, and remove package files and provide dependency
resolution. Resolving a dependency means suppose you have software that requires Python
version 3.1. You are trying to install another software that requires version 3.3, which causes
a conflict, and this conflict will be required to resolve before proceeding with the installation
of this software. The package manager facilitates this. The package manager also ensures we
receive the original and authentic package by verifying their certificates and checksum to
ensure they have not been modified.
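As a concrete sketch of dependency resolution in action (a Debian/Ubuntu system with apt is assumed; the package chosen and the exact dependency list will vary):

$ sudo apt install gimp
# Before asking for confirmation, apt computes the dependency closure:
#   The following additional packages will be installed:
#     gimp-data libbabl-0.1-0 libgegl-0.4-0 ... (output abridged)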
In simple terms, a package manager is a software tool used for package management in Linux
i.e. to manage the installation, removal, and updating of various software packages. It can be
thought of as a hub for all software packages available for your system. The package manager
keeps track of all the installed packages on the system, including their dependencies, and uses
this information to resolve conflicts and handle updates.
Using a package manager in Linux can save us a lot of time and effort compared to manually
installing software and its dependencies. When we install a package, the package manager
automatically checks if any other software is required for it to work correctly and installs
these dependencies for us. This relieves us from the problem of figuring out what other
software is needed and installing it manually. A package manager can also
automatically check for updates to installed packages and install them for us. This helps us to
keep our system up-to-date and secure.
Package managers can be of two types based on their functions. The first type is low-level,
which handles installing a package, upgrading a package, or checking which packages are
installed. The other type is high-level, which also handles dependency resolution.
· DPKG – This is the abbreviation for the Debian-based Package Management System. All
Debian-based Linux systems and their distros use DPKG. DPKG is used with packages
made for Debian-based Linux, which end with the .deb extension. However, it cannot
download and install packages and their dependencies automatically.
· APT - APT is the abbreviation for Advanced Packaging Tool. It is the most widely
used tool and the default package manager available in Ubuntu and other Debian-
based distros.
o To install a package with apt, use the following command sudo apt install
package_name // This command will install the package with the name
package_name, change it according to the package name you wish to install
o To remove a package with apt, use the following command sudo apt remove
package_name // This command will remove the package with the name
package_name. However, this doesn’t remove the dependencies and package
configurations.
o To completely remove the package with apt, use the following command sudo
apt purge package_name // This command completely removes the package as
well as the dependencies and configuration of the packages.
o To remove any leftover dependencies, use the following command sudo apt
autoremove // This will automatically remove any dependencies or leftovers
from previously removed packages.
o The apt update command: sudo apt update // This command gets a copy of the
latest version of all the packages installed in our system from the repositories.
Please note this does not upgrade any packages and only fetches the latest
version of the package.
o The apt upgrade command: sudo apt upgrade // This command will check the
list of available upgrades and then upgrade the packages one by one. Usually,
this command is run after "sudo apt update", so that the list of available
updates is refreshed first with the update command and the upgrade is then
done with the sudo apt upgrade command.
o To upgrade one specific package as per the requirement sudo apt upgrade
package_name // This command will only upgrade that specific package.
However, you need to run the update command first to get an update, and then
you can upgrade the package.
· APT and APT-GET
o APT and apt-get are very similar. You can consider apt a more modern,
user-friendly implementation of apt-get. apt is more commonly used than
apt-get, but apt-get still has its own uses, such as running low-level
commands.
· YUM – This is the abbreviation for "Yellowdog Updater, Modified". It was once
known as YUP, or Yellowdog Updater. This package manager is primarily used in
Red Hat Enterprise Linux. It is a high-level package manager that can perform
functions such as dependency resolution. As yum downloads and installs packages
itself, it does not require manually downloaded files.
o To install a package with yum, use the following command yum install
package_name // This command will install the package with the name
package_name, and change it according to the package name you wish to
install.
o To remove a package with yum, use the following command yum remove
package_name // This command will remove the package named
package_name and resolve any dependencies
o To update a package using yum, use the following command yum update
package_name // This command will automatically resolve any dependencies
and update the package to the latest stable version.
o The update command: yum update // This command will automatically fetch
and install all the updates available for your system.
· DNF – This is the abbreviation for "Dandified YUM". This package manager is the
successor to YUM. It includes several improvements, such as better performance
and quicker dependency resolution.
· RPM – This is the abbreviation for "Red Hat Package Manager". This package
manager is used in Red Hat-based Linux operating systems such as Fedora, CentOS,
etc. RPM is used with packages made for Red Hat-based Linux, which end with the
.rpm extension. It is a low-level package manager that can perform functions such as
installation, upgrade, and removal of packages. RPM requires the package to be
downloaded before it can be installed.
o To install a package with RPM, use the following command rpm -i
package_name.rpm // This command will install the package with name
package_name.rpm
o To upgrade a package with RPM, use the following command rpm -U
package_name.rpm // This command will upgrade the package using
package_name.rpm
o To remove or erase a package with RPM, use the following command rpm -e
package_name // This command will remove the package named
package_name
· Pacman – Lastly, we have a very famous package manager called Pacman, an
abbreviation for "package manager". This package manager is used mainly in Arch
Linux and Arch Linux-based distros. In addition to automatically obtaining and
installing all required packages, Pacman is capable of resolving dependencies.
Pacman simplifies the process of installing and maintaining packages.
o To install a package with pacman, use the following command pacman -S
package_name // This command will install the package with name
package_name
o To upgrade all packages with pacman, use the following command pacman -
Syu // This command will update all the packages in the system. It
synchronizes with the repositories and updates all the system packages based
on the updates available.
o To remove or erase a package with pacman, use the following
command pacman -Rs package_name // This command will remove the
package named package_name
Various vendors provide their package manager and package format. Some package
managers do allow the usage of multiple packaging formats to be used. Some of the prevalent
packaging formats include:
o The .rpm package extension was designed and developed by the Red Hat
Linux distribution and used in the Red Hat Package manager (RPM)
o The .deb package was designed and developed by the Debian Linux
distribution. They are majorly used in Debian-based Linux and distros.
o The .tar format is short for Tape Archive. It is used for creating an archive,
combining multiple files and directories into one file. Tar archives do not
compress the contained files and directories.
o The .gz archives are created after direct compression using the GZIP Utility.
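To make the last two formats concrete, here is a small sketch of creating, compressing, and extracting an archive (the file and directory names are placeholders):

tar -cf project.tar project/   # bundle the project/ directory into one archive
gzip project.tar               # compress it with GZIP, producing project.tar.gz
tar -xzf project.tar.gz        # extract a gzip-compressed tar in one step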
Activity-10
· OpenLDAP has been one of the most popular choices for implementing the LDAP
protocol since its inception in 1998.
· However, as more LDAP and directory solutions enter the scene, understanding each
and deciding which best suits your needs becomes more challenging.
· OpenLDAP Overview
· OpenLDAP is command-line driven software that allows IT admins to build and
manage an LDAP directory. Due to its minimal UI and reliance on the CLI, it requires
an in-depth knowledge of the LDAP protocol and directory structure.
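· As a flavor of that CLI-driven workflow, a directory query with OpenLDAP's client tools might look like this (the server URI, base DN, and uid are placeholders):
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jdoe)"
# -x: simple authentication, -H: server URI, -b: search base, then a filter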
· OpenLDAP's Benefits
· OpenLDAP often wins out over its competitors for its cost, flexibility, and OS-
agnosticism. We’ll cover these below, and then dive into the OpenLDAP alternatives
it’s most often up against.
· Low Costs
· OpenLDAP is free from a software perspective (of course, not free to implement if
you include somebody’s time, hosting costs, etc.). This is a significant driving factor
in its popularity, making OpenLDAP a common choice for startups and lean IT
teams.
· While the software is free, OpenLDAP incurs hidden costs in its maintenance and
management. Since it is distributed as source code that needs to be built into the
"service", the challenge of OpenLDAP is installing, configuring, and implementing
the code as a working directory service instance.
· For MSPs, every additional client multiplies this challenge, as each individual
customer generally requires their own OpenLDAP instance. Due to this hurdle, some
organizations and MSPs opt for a more user-friendly and feature-rich option.
· OS-Agnosticism
· OpenLDAP supports Windows, Mac, and Linux operating systems. This contrasts
with other solutions, like Microsoft AD; as a Windows product, AD fares better with
Windows than with other operating systems.
· OpenLDAP isn’t the only OS-agnostic solution, however. Other directory solutions,
like JumpCloud, are OS-agnostic as well.
· Flexibility
· Being open-source makes OpenLDAP incredibly flexible. Its minimal UI and code-
reliant functionality don’t lock users into predetermined workflows; rather, IT can
manipulate the software to do exactly what they need.
· This gives it broad applicability; however, the minimal interface also requires more
expertise than competing solutions. We’ll get into this trade-off next.
· Where OpenLDAP Falls Short
· Limited Scope
· By only working with LDAP, OpenLDAP’s directory approach is more narrow than
other solutions on the market. As SaaS and cloud-based solutions replace legacy-
owned software, the number of protocols different solutions use to authenticate and
authorize users is growing. Modern directory services have begun to follow suit with
multi-protocol approaches. These allow the directory to unify more resources — not
just those that are compatible with LDAP — and connect them with users.
· A robust multi-protocol directory like JumpCloud, for example, can unify resources
that use LDAP, SAML, SCIM, RADIUS, and many other protocols.
· OpenLDAP Alternatives
· While there are many directory solutions out there, there are few big competitors
OpenLDAP often goes up against.
· Because both OpenLDAP and JumpCloud are free to try, we recommend testing each
out in your own environment with a small subset or test environment. This will allow
you to experience the pros and cons of each and evaluate which would work better for
your team and environment.
Activity-11
Every computer is connected to some other computer through a network, whether internally
or externally, to exchange information. This network can be as small as a few computers
connected in your home or office, or as large and complicated as a big university network or
the entire Internet.
Maintaining a system's network is the task of the system/network administrator. Their tasks
include network configuration and troubleshooting.
ifconfig: ifconfig is short for interface configurator. This command is used for network
inspection, initializing an interface, enabling or disabling an IP address, and configuring an
interface with an IP address. It also shows details of the network and route interfaces, such as:
o MTU
o MAC address
o IP address
Syntax:
ifconfig
ip: It is the updated and latest edition of the ifconfig command. Like ifconfig, this command
provides the information of every network interface. Also, it can be used to get information
about a particular interface.
Syntax:
1. ip a
2. ip addr
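For example, to inspect a single interface or bring it up (again assuming an interface named
eth0):
1. ip addr show dev eth0
2. ip link set eth0 up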
traceroute: The traceroute command is one of the most helpful commands in the
networking field. It is used to troubleshoot the network: it identifies the delay and determines
the pathway to our target by listing every hop (router) on the route along with the round-trip
time to each.
Syntax:
1. traceroute <destination>
tracepath: The tracepath command is the same as the traceroute command, and it is used to
find network delays. Besides, it does not need root privileges. By default, it comes pre-
installed in Ubuntu. It traces the path to the destination and recognizes all hops in it. It
identifies the point at which the network is weak if our network is not strong enough.
Syntax:
1. tracepath <destination>
ping: It is short for Packet Internet Groper. The ping command is one of the widely used
commands for network troubleshooting. Basically, it inspects the network connectivity
between two different nodes.
Syntax:
1. ping <destination>
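For example, to send exactly four echo requests and then stop (the -c option sets the packet
count):
1. ping -c 4 <destination>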
netstat: It is short for network statistics. It gives statistical figures for many interfaces,
including open sockets, connection information, and routing tables.
Syntax:
1. netstat
ss: This command is the substitute for the netstat command. The ss command is more
informative and much faster than netstat. The ss command's faster response is possible
because it fetches its information directly from kernel space.
Syntax:
1. ss
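For example, to list all listening TCP and UDP sockets with numeric ports (standard ss
options):
1. ss -tuln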
nslookup: The nslookup command is an older counterpart of the dig command. It, too, is
utilized for DNS-related problems.
Syntax:
1. nslookup <domainname>
dig: dig is short for Domain Information Groper. The dig command is an improved version
of the nslookup command. It is utilized in DNS lookups to query DNS name servers.
Also, it is used to troubleshoot DNS-related problems. Mainly, it is used to verify DNS
mappings, host addresses, MX records, and every other kind of DNS record, for the best
understanding of the DNS topology.
Syntax:
1. dig <domainname>
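For example, to fetch only a domain's MX records, or to query a specific name server
(8.8.8.8 is just an example resolver):
1. dig <domainname> MX
2. dig @8.8.8.8 <domainname>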
route: The route command shows and modifies the routing table of our system.
Basically, a route is used to find the best way to transfer packets toward a destination.
Syntax:
1. route
host: The host command shows the IP address for a hostname and the domain name for an
IP address. It is also used to perform DNS lookups for DNS-related issues.
Syntax:
1. host <domainname>
2. host -t <recordType> <domainname>
arp: The arp command is short for Address Resolution Protocol. This command is used to
view and add content to the kernel's ARP table.
Syntax:
1. arp
hostname: It is a simple command which is used to see and set the system's hostname.
Syntax:
1. hostname
curl and wget: These commands are used to download files from the internet via the CLI.
curl must be given the -O option to save the file, while wget can be used directly.
curl Syntax:
1. curl -O <fileLink>
wget Syntax:
1. wget <fileLink>
mtr: The mtr command is a mix of the traceroute and ping commands. It continuously
displays information about the transferred packets, using the ping time of every hop. It is
also used to spot network problems.
Syntax:
1. mtr <destination>
whois: The whois command fetches all the information related to a website. We can get
information such as the owner and the registration details.
Syntax:
1. whois <websiteName>
ifplugstatus: The ifplugstatus command checks whether a cable is currently plugged into a
network interface. It is not available in Ubuntu directly; we can install it (it is provided by
the ifplugd package) with the help of the below command:
1. sudo apt-get install ifplugd
Syntax:
1. ifplugstatus
tcpdump: The tcpdump command is widely used for network analysis along with the other
Linux networking commands. It captures the traffic passing through a network interface and
displays it. When troubleshooting the network, this kind of packet-level access is crucial.
Syntax:
1. tcpdump -i <network_device>
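For example, to capture only web traffic on an assumed interface named eth0 without
resolving names (-n):
1. tcpdump -i eth0 -n port 80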
Activity-12:
A server is a central repository where information and computer programs are held and
accessed by the programmer within the network. Web servers and application servers are
two kinds of servers: the former is employed to deliver websites, while the latter deals with
application operations performed between users and the back-end business applications of
the organization.
Web Server:
It is a computer program that accepts requests for data and sends back the specified
documents. A web server is a computer where web content is stored. Essentially, a web
server is used to host websites, though other kinds of servers also exist, such as gaming,
storage, FTP, and email servers.
Example of Web Servers:
· Apache Tomcat
· Resin
Application server:
It encompasses a Web container as well as an EJB container. Application servers organize
the runtime environment for enterprise applications. An application server is a kind of server
designed to install, operate, and host applications and services for users, IT departments, and
organizations. In it, a user interface as well as HTTP and RPC/RMI protocols are used.
Examples of Application Server:
· Weblogic
· JBoss
· Websphere
Differences between a web server and an application server:
· Web servers arrange the run environment for web applications, while application
servers arrange the run environment for enterprise applications.
· A web server's capacity is lower than an application server's, while an application
server's capacity is higher than a web server's.
· In a web server, HTML and HTTP protocols are used, while in an application server,
GUI as well as HTTP and RPC/RMI protocols are used.
· Web servers support processes that are not resource-intensive, while application
servers support resource-intensive processes.
· Web server examples are Apache HTTP Server and Nginx, while application server
examples are JBoss and Glassfish.
2. Identify the role of a virtual host.
The term Virtual Host refers to the practice of running multiple websites
(like enterprise1.test.com and enterprise2.test.com) on one device. Virtual hosts can be "IP-
based", meaning that we have a distinct IP address for each website, or "name-based",
meaning that we have more than one name running on each IP address. The fact that they
are running on the same physical server is not apparent to the end user.
The Apache server was one of the first servers to support IP-based virtual hosts. Versions 1.1
and later of Apache support both name-based and IP-based virtual hosts. Name-based virtual
hosts are sometimes also known as non-IP or host-based virtual hosts.
In its early days, virtual hosting began with the aim of hosting multiple websites on one
device. It also came to mean sharing an individual machine's resources, like CPU and
memory. The resources are utilized and shared in such a manner that maximum capacity is
achieved.
With the development of cloud computing, the virtual host serves more aims than ever,
including solutions like virtual storage hosting, virtual server hosting, virtual application
hosting, and sometimes even entire virtual data center hosting.
There are several ways to set up a virtual host; the approaches utilized today are listed and
explained below:
o IP-based
o Name-based
o Port-based
IP-based
It is one of the easiest methods and can be utilized to apply distinct directives based on the
IP address. In IP-based virtual hosting, we use a distinct IP for each domain.
Multiple IP addresses point to the server's different domains, with a single IP serving a
single domain. This kind of virtual hosting is achieved by configuring more than one IP
address on one server.
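As a minimal Apache sketch (the addresses 192.0.2.10 and 192.0.2.20 and the directory
names are assumptions for illustration), IP-based hosting distinguishes sites by the address
in each VirtualHost block:
<VirtualHost 192.0.2.10:80>
    DocumentRoot /var/www/site1
</VirtualHost>
<VirtualHost 192.0.2.20:80>
    DocumentRoot /var/www/site2
</VirtualHost>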
Name-based
These are the most frequently and commonly used virtual hosts today. This method uses a
single IP address for every domain on the given server. When a browser attempts to connect
to the server, it sends a message telling the server which domain name it is trying to connect
to. The server inspects its host configuration and returns the correct website for the given
domain name.
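A minimal name-based sketch in Apache, reusing the example domains from above (the
directory names are assumed), distinguishes sites only by ServerName on a shared address:
<VirtualHost *:80>
    ServerName enterprise1.test.com
    DocumentRoot /var/www/enterprise1
</VirtualHost>
<VirtualHost *:80>
    ServerName enterprise2.test.com
    DocumentRoot /var/www/enterprise2
</VirtualHost>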
Port-based
This type of virtual hosting is similar to IP-based virtual hosting. The main difference
between the two is that, rather than using a distinct IP address for each virtual host, we use
distinct ports, with the server configured to respond with a different website depending on
the server port.
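A minimal port-based sketch in Apache (port 8080 and the directory names are assumptions)
adds a Listen directive per port and matches VirtualHost blocks on the port rather than the
name or address:
Listen 80
Listen 8080
<VirtualHost *:80>
    DocumentRoot /var/www/site1
</VirtualHost>
<VirtualHost *:8080>
    DocumentRoot /var/www/site2
</VirtualHost>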
Virtual Hosting
Virtual hosting is a technique for hosting more than one domain name (with isolated handling
of each name) on one server. It allows a server to share its resources, like processor cycles
and memory, without requiring every service provided to use the same hostname. The term
virtual hosting is generally used in reference to web servers, but the principle carries over to
other internet services.
Shared web hosting is one of its most widely used applications. The price of shared
web hosting is lower than that of a dedicated web server because many customers can be
hosted on one server. Also, it is very common for one entity to want to use more than one
name on the same machine, so that the names can reflect the services provided rather than
where those services happen to be hosted.
o There are two primary types of virtual hosting: IP-based and name-based.
o Name-based virtual hosting utilizes a hostname presented by the client.
o It saves IP addresses and the related administrative overhead.
o However, the protocol being served must supply the hostname at the right
point.
o There are serious difficulties with name-based virtual hosting with TLS/SSL.
o IP-based virtual hosting utilizes a separate IP address for each hostname; it can be
implemented with any protocol but needs a dedicated IP address per domain name
served.
o Port-based virtual hosting is also possible in principle but is rarely used in
practice because it is unfamiliar to users.
IP-based and name-based virtual hosting can be merged: a server may contain more than
one IP address and serve more than one name on a few or each of those IP addresses. It
can be helpful when using TLS/SSL with wildcard certificates.
Introduction
The Apache HTTP server is a popular open-source web server that offers flexibility, power,
and widespread support for developers. Apache server configuration does not take place in a
single monolithic file, but instead happens through a modular design where new files can be
added and modified as needed. Within this modular design, you can create an individual site
or domain called a virtual host.
Using virtual hosts, one Apache instance can serve multiple websites. Each domain or
individual site that is configured using Apache will direct the visitor to a specific directory
holding that site’s information. This is done without indicating to the visitor that the same
server is also responsible for other sites. This scheme is expandable without any software
limit as long as your server can handle the load.
In this guide, you will set up Apache virtual hosts on an Ubuntu 20.04 server. During this
process, you’ll learn how to serve different content to different visitors depending on which
domains they are requesting by creating two virtual host sites.
Note: If you do not have domains available at this time, you can use test values locally on
your computer. Step 6 of this tutorial will show you how to test and configure your test
values. This will allow you to validate your configuration even though your content won’t be
available to other visitors through the domain name.
Step 1 — Creating the Directory Structure
The first step is to create a directory structure that will hold the site data that you will be
serving to visitors.
Your document root, the top-level directory that Apache looks at to find content to serve, will
be set to individual directories under the /var/www directory. You will create a directory here
for each of the virtual hosts.
Within each of these directories, you will create a public_html directory.
The public_html directory contains the content that will be served to your visitors. The parent
directories, named here as your_domain_1 and your_domain_2, will hold the scripts and
application code to support the web content.
Use these commands, with your own domain names, to create your directories:
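1. sudo mkdir -p /var/www/your_domain_1/public_html
2. sudo mkdir -p /var/www/your_domain_2/public_html
Next, assign ownership of the directories to your regular user and grant standard read
permissions (typical commands for this step; adjust them to your own setup):
1. sudo chown -R $USER:$USER /var/www/your_domain_1/public_html /var/www/your_domain_2/public_html
2. sudo chmod -R 755 /var/www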
Your web server now has the permissions it needs to serve content, and your user should be
able to create content within the necessary folders. The next step is to create content for your
virtual host sites.
Virtual host files are the files that specify the actual configuration of your virtual hosts and
dictate how the Apache web server will respond to various domain requests.
Apache comes with a default virtual host file called 000-default.conf. You can copy this file
to create virtual host files for each of your domains.
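For the first domain, that copy would look like this (using the default Apache paths on
Ubuntu):
1. sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/your_domain_1.conf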
/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Within this file, customize the items for your first domain and add some additional directives.
This virtual host section matches any requests that are made on port 80, the default HTTP
port.
First, change the ServerAdmin directive to an email that the site administrator can receive
emails through:
/etc/apache2/sites-available/your_domain_1.conf
ServerAdmin admin@your_domain_1
After this, add two additional directives. The first, called ServerName, establishes the base
domain for the virtual host definition. The second, called ServerAlias, defines further names
that should match as if they were the base name. This is useful for matching additional hosts
you defined. For instance, if you set the ServerName directive to example.com you could
define a ServerAlias to www.example.com, and both will point to this server’s IP address.
Add these two directives to your configuration file after the ServerAdmin line:
/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_1
ServerName your_domain_1
ServerAlias www.your_domain_1
DocumentRoot /var/www/html
...
</VirtualHost>
Next, change the document root location for this domain. Edit
the DocumentRoot directive to point to the directory you created for this host:
/etc/apache2/sites-available/your_domain_1.conf
DocumentRoot /var/www/your_domain_1/public_html
Here is an example of the virtual host file with all of the adjustments made above:
/etc/apache2/sites-available/your_domain_1.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_1
ServerName your_domain_1
ServerAlias www.your_domain_1
DocumentRoot /var/www/your_domain_1/public_html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
...
</VirtualHost>
Create your second configuration file by copying over the file from your first virtual host site:
1. sudo cp /etc/apache2/sites-available/your_domain_1.conf /etc/apache2/sites-available/your_domain_2.conf
You now need to modify all of the pieces of information to reference your second domain.
When you are finished, it should look like this:
/etc/apache2/sites-available/your_domain_2.conf
<VirtualHost *:80>
...
ServerAdmin admin@your_domain_2
ServerName your_domain_2
ServerAlias www.your_domain_2
DocumentRoot /var/www/your_domain_2/public_html
...
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
...
</VirtualHost>
Now that you have created your virtual host files, you must enable them. Apache includes
some tools that allow you to do this.
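The a2ensite tool enables each site by its configuration file name:
1. sudo a2ensite your_domain_1.conf
2. sudo a2ensite your_domain_2.conf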
There will be output for both sites, similar to the example below, reminding you to reload
your Apache server:
Output
Enabling site example.com.
To activate the new configuration, you need to run:
systemctl reload apache2
Before reloading your server, disable the default site defined in 000-default.conf by using
the a2dissite command:
1. sudo a2dissite 000-default.conf
Output
Site 000-default disabled.
To activate the new configuration, you need to run:
systemctl reload apache2
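Next, test for configuration errors; the Syntax OK output below comes from Apache's
configtest:
1. sudo apache2ctl configtest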
Output
...
Syntax OK
When you are finished, restart Apache to make these changes take effect.
Optionally, you can check the status of the server after all these changes with this command:
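These are the standard systemctl invocations for the restart and the optional status check:
1. sudo systemctl restart apache2
2. sudo systemctl status apache2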
Your server should now be set up to serve two websites. If you’re using real domain names,
you can skip Step 6 and move on to Step 7. If you’re testing your configuration locally,
follow Step 6 to learn how to test your setup using your local computer.
Now that you have your virtual hosts configured, you can test your setup by going to the
domains that you configured in your web browser:
http://your_domain_1
You can also visit your second host page and view the file you created for your second site:
http://your_domain_2
If both of these sites work as expected, you’ve successfully configured two virtual hosts on
the same server.
Note: If you adjusted your local computer’s hosts file, like in Step 6 of this tutorial, you
may want to delete the lines you added now that you verified that your configuration works.
This will prevent your hosts file from being filled with entries that are no longer necessary.
3 virtual host types
Someone who wants to visit your website types in an address and hopes to end up in the right
destination. Virtual servers handle that query in a few different ways.
· Internet protocol (IP) address. Use a different IP for each domain, but point them to
one server. Allow that one server to resolve multiple IP addresses.
· Name. Use one IP for all domains on the server. During connection, ask your visitors
which site they'd like to visit. After that query, resolve the visit to the proper site.
· Port. Assign each website to a different port on the server.
Each approach has trade-offs:
· Delays. Choose the name-based system, and some browsers will struggle to
authenticate the site. Your visitors could be told your site is not secure, or may wait
a long time for your site to load.
· Complexity. It takes little coding to set up IP addresses for each site, but you may run
out of available IP addresses to use. And you must keep track of which address
corresponds to each site.
Activity-13
A solid-state drive (SSD) is a type of mass storage device used in place of a spinning hard
disk drive (HDD). Solid-state drives have no moving parts and information is saved onto
integrated circuits (ICs).
Although SSDs serve the same functions as hard drives, their internal parts are different.
SSDs store data using flash memory, allowing them to access data much faster than hard
drives.
Benefits of using a solid-state drive (SSD)
SSDs have access times of 35-100 microseconds, making them 25 to 100 times faster than an
HDD. An SSD is also more efficient and dependable: it uses less power, has faster access
times, increases battery life, and offers quicker file transfers.
Since there are no moving parts, an SSD is more resistant to drops and shocks, making it
more resilient against data loss caused by physical or external trauma.
The absence of a rotating metal platter to store data and a moving read arm also means an
SSD makes little noise; in an HDD, by contrast, the rotation of the metal platter and the
movement of the read arm create noise.
Lastly, an SSD is considerably more compact than an HDD because there are no moving parts.
This also means that solid-state drives are more suitable for portable electronic devices such
as tablets and cellular phones [2].
Everything is faster
SSDs enable “instant on” allowing your system to boot almost immediately. Imagine sitting
in class being able to access LEARN instantly, and being able to switch slides during a
lecture without waiting.
Seamless multitasking
The improved data access of an SSD allows computers to run multiple programs with ease.
Sometimes being a student means tackling a number of things at once.
Seamless multitasking gives you the opportunity to not only maximize your learning but
gives you the ability to conquer more than one task on one screen.
Since SSDs have no moving parts that are susceptible to damage, they are extremely durable
and reliable.
SSDs use flash memory, which means they're able to maintain more consistent operating
temperatures; this will not only keep overall system temperatures down but also help ensure
your system stays alive for longer.
The longer your computer can stay alive, the fewer worries you will have about buying a
new computer or losing your files.
Better gaming
Faster data access speeds enable faster load times. If you are a part of the Games Institute,
there is an increased chance at first strikes and a seamless gaming experience.
Flexible storage
SSDs are available in multiple forms. Some forms like mSATA are able to plug into your
system’s motherboard allowing the drive to work alongside your existing hard drive.
Flexible storage is significant, especially for students. The growing number of assignments
stored on your computer will eventually slow it down; flexible storage allows you to
organize your computer and lets it run more efficiently.
The increased speed of a SSD means you will be able to get more done in less time. This
means you have more time to better your academic and personal development.
Solid-state drives use semiconductor chips to store data. The chips used in a solid-state drive
deliver non-volatile memory, meaning the data stays even without power.
SSDs cannot overwrite existing information; they have to erase it first. However, when you
delete a file in Windows or Mac OS, it is not erased immediately – the space is marked as
available for re-use. In order to actually re-use this space, the SSD has to be given a “TRIM”
command. Once there are enough pages to be erased, then the SSD will do a “garbage
collection” operation and delete the data as a block.
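On Linux, for example, a TRIM pass can be triggered manually with the standard fstrim
utility (many distributions also schedule this periodically via fstrim.timer):
1. sudo fstrim -v /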
SSDs have more space available than what is advertised because of over-provisioning. Over-
provisioning is storage that is not available to the operating system but is instead used for
internal tasks. The over-provisioned space takes up a small percentage of the overall solid-
state drive.
Block remapping occurs at around the 70% full mark: when there is no data left to delete,
the solid-state drive moves files around in a cycle, which causes the drive to slow down.
The last process is wear levelling, the process designed to extend the life of a solid-state
drive. It arranges data so that the erase cycles are distributed evenly throughout the blocks of
the device.
What is RAID?
RAID (redundant array of independent disks) is a way of storing the same data in different
places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive
failure. There are different RAID levels, however, and not all have the goal of providing
redundancy.
RAID arrays appear to the operating system (OS) as a single logical drive.
RAID employs the techniques of disk mirroring or disk striping. Mirroring copies identical
data onto more than one drive. Striping partitions the data and spreads it over multiple disk
drives.
Each drive's storage space is divided into units ranging from a sector of 512 bytes up to
several megabytes. The stripes of all the disks are interleaved and addressed in order. Disk
mirroring and disk striping can also be combined in a RAID array.
In a multiuser system, better performance requires a stripe wide enough to hold the typical or
maximum size record, enabling overlapped disk I/O across drives.
RAID levels
RAID devices use different versions, called levels. The original paper that coined the term
and developed the RAID setup concept defined six levels of RAID -- 0 through 5. This
numbered system enabled those in IT to differentiate RAID versions. The number of levels
has since expanded and has been broken into three categories: standard, nested and
nonstandard RAID levels.
RAID 0. This configuration uses striping but provides no redundancy of data. It offers the
best performance of the basic levels but no fault tolerance, since the failure of any one drive
destroys the array.
RAID 1. Also known as disk mirroring, this configuration consists of at least two drives that
duplicate the storage of data. There is no striping. Read performance is improved, since either
disk can be read at the same time. Write performance is the same as for single disk storage.
RAID 2. This configuration uses striping across disks, with some disks storing error checking
and correcting (ECC) information. RAID 2 also uses a dedicated Hamming code parity, a
linear form of ECC. RAID 2 has no advantage over RAID 3 and is no longer used.
RAID 3. This technique uses striping and dedicates one drive to storing parity information.
The embedded ECC information is used to detect errors. Data recovery is accomplished by
calculating the exclusive OR (XOR) of the information recorded on the other drives.
Because an I/O operation
addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this reason, RAID 3
is best for single-user systems with long record applications.
RAID 4. This level uses large stripes, which means a user can read records from any single
drive. Overlapped I/O can then be used for read operations. Because all write operations are
required to update the parity drive, no I/O overlapping is possible.
RAID 5. This level is based on parity block-level striping. The parity information is striped
across each drive, enabling the array to function, even if one drive were to fail. The array's
architecture enables read and write operations to span multiple drives. This results in
performance better than that of a single drive, but not as high as a RAID 0 array. RAID 5
requires at least three disks, but it is often recommended to use at least five disks for
performance reasons.
RAID 5 arrays are generally considered to be a poor choice for use on write-intensive
systems because of the performance impact associated with writing parity data. When a disk
fails, it can take a long time to rebuild a RAID 5 array.
RAID 6. This technique is similar to RAID 5, but it includes a second parity scheme
distributed across the drives in the array. The use of additional parity enables the array to
continue functioning, even if two disks fail simultaneously. However, this extra protection
comes at a cost. RAID 6 arrays often have slower write performance than RAID 5 arrays.
Nested RAID levels
Some RAID levels that are based on a combination of RAID levels are referred to as nested
RAID. Here are some examples of nested RAID levels.
RAID 10 (RAID 1+0). Combining RAID 1 and RAID 0, this level is often referred to as
RAID 10, which offers higher performance than RAID 1, but at a much higher cost. In RAID
1+0, the data is mirrored and the mirrors are striped.
RAID 01 (RAID 0+1). RAID 0+1 is similar to RAID 1+0, except the data organization
method is slightly different. Rather than creating a mirror and then striping it, RAID 0+1
creates a stripe set and then mirrors the stripe set.
RAID 03 (RAID 0+3, also known as RAID 53 or RAID 5+3). This level uses striping in
RAID 0 style for RAID 3's virtual disk blocks. This offers higher performance than RAID 3,
but at a higher cost.
RAID 50 (RAID 5+0). This configuration combines RAID 5 distributed parity with RAID 0
striping to improve RAID 5 performance without reducing data protection.
Nonstandard RAID levels
RAID 7. A nonstandard RAID level based on RAID 3 and RAID 4 that adds caching. It
includes a real-time embedded OS as a controller, caching via a high-speed bus and other
characteristics of a standalone computer.
Adaptive RAID. This level enables the RAID controller to decide how to store the parity on
disks. It will choose between RAID 3 and RAID 5. The choice depends on what RAID set
type will perform better with the type of data being written to the disks.
Linux MD RAID 10. This level, provided by the Linux kernel, supports the creation of
nested and nonstandard RAID arrays. Linux software RAID can also support the creation of
standard RAID 0, RAID 1, RAID 4, RAID 5 and RAID 6 configurations.
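As an illustrative sketch, a software RAID 5 array could be created on Linux with mdadm,
assuming three spare partitions named /dev/sdb1, /dev/sdc1, and /dev/sdd1:
1. sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
2. cat /proc/mdstat
The second command reports the array's status, including the progress of the initial build.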
Benefits of RAID
Advantages of RAID include the following:
· When a large amount of data needs to be restored. If a drive fails and data is lost, that
data can be restored quickly, because this data is also stored in other drives.
· When uptime and availability are important business factors. If data needs to be
restored, it can be done quickly without downtime.
· When working with large files. RAID provides speed and reliability when working with
large files.
· When an organization needs to reduce strain on physical hardware and increase
overall performance. As an example, a hardware RAID card can include additional
memory to be used as a cache.
· When having I/O disk issues. RAID will provide additional throughput by reading and
writing data from multiple drives, instead of needing to wait for one drive to perform
tasks.
· When cost is a factor. The cost of a RAID array is lower than it was in the past, and
lower-priced disks are used in large numbers, making it cheaper.