CIT421 NET-CENTRIC COMPUTING SUMMARY


JTECH EDUCATIONAL CONSULTS

https://api.whatsapp.com/send?phone=23409032650760
TEL:09032650760,07064298170
Email:[email protected]
MOTTO: Embracing Education



Q Explain the concept of distributed systems
Answer
A distributed system is a system whose components are located on different networked computers, which
communicate and coordinate their actions by passing messages to one another, in order to appear as
a single system to the end-user. The computers in a distributed system can be physically close together and
connected by a local network, or they can be geographically distant and connected by a wide area network.
Functionality
There are two general ways that distributed systems function:
a. Each component of the system works to achieve a common goal and the end-user views results as one combined
unit.
b. Each component has its own end-user and the distributed system facilitates sharing resources or communication
services.

Q Describe the Architectural Models of Distributing Computing


Answer:
Architectural models
Distributed systems generally consist of four different basic architectural models:
a. Client-server — Clients contact the server for data, then format it and display it to the end-user.
b. Three-tier — Information about the client is stored in a middle tier rather than on the client, to simplify
application deployment.
c. n-tier — Generally used when the server needs to forward requests to additional enterprise services on the
network.
d. Peer-to-peer — There are no additional nodes used to provide services or manage resources. Responsibilities are
uniformly distributed among components in the system, known as peers, which can serve as either client or server.

Q What is distributed computing?


Answer:
Distributed Computing is a much broader technology that has been around for more than three decades now.
Distributed computing is computing over distributed autonomous computers that communicate only over a network.

Q Explain the term Service Orientation


Answer:
Service orientation is the underlying paradigm that defines the architecture of a cloud computing system. Cloud
computing is often summarized with the acronym XaaS, meaning Everything-as-a-Service, which clearly underlines
the central role of service orientation

Q Describe the concept, Virtualization


Answer:
Virtualization is another element that plays a fundamental role in cloud computing. This technology is a core feature
of the infrastructure used by cloud providers. Virtualization concept is more than 40 years old, but cloud computing

introduces new challenges, especially in the management of virtual environments, whether they are abstractions of
virtual hardware or a runtime environment

Q Classify Mobile and Cloud Computing


Answer:
Mobile Computing
Mobile Computing is a technology that allows transmission of data, voice and video via a computer or any other
wireless enabled device without having to be connected to a fixed physical link

Internet with Mobile Devices


Mobile communication
• The mobile communication refers to the infrastructure put in place to ensure that seamless and reliable
communication goes on
• These would include the protocols, services, bandwidth, and portals necessary to facilitate
and support the stated services
• The data format is also defined at this stage
• This ensures that there is no collision with other existing systems which offer the same service.

Mobile Software
• Mobile software is the actual program that runs on the mobile hardware
• It deals with the characteristics and requirements of mobile applications
• This is the engine of the mobile device
• It is the operating system of the appliance
• It's the essential component that operates the mobile device

Q Mention 5 examples of mobile hardware


Answer:
• Mobile hardware includes mobile devices or device components that receive or access the service of mobility
• They would range from portable laptops, smartphones, tablet PCs, and Personal Digital Assistants
• They are usually classified in the following categories:

Personal Digital Assistant (PDA)


• The main purpose of this device is to act as an electronic organizer or day planner that is portable, easy to
use and capable of sharing information with your computer systems.
• PDA is an extension of the PC, not a replacement
• These systems are capable of sharing information with a computer system through a process or service
known as synchronization
• Both devices will access each other to check for changes or updates in the individual devices
• The use of infrared and Bluetooth connections enables these devices to always be synchronized.

• With PDA devices, a user can browse the internet, listen to audio clips, watch video clips, edit and modify
office documents, and many more services
• The device has a stylus and a touch sensitive screen for input and output purposes

Smartphones
• It combines the features of a PDA with those of a mobile phone or camera phone
• It has a superior edge over other kinds of mobile phones
• Smartphones have the capability to run multiple programs concurrently
• These phones include high-resolution touch screens and web browsers that can access and properly
display standard web pages rather than just mobile-optimized sites
• They offer high-speed data access via Wi-Fi and high-speed cellular broadband

The most common mobile Operating Systems (OS) used by modern smartphones include:
a. Google's Android
b. Apple's iOS
c. Nokia's Symbian
d. RIM's BlackBerry OS
e. Samsung's Bada
f. Microsoft's Windows Phone, and embedded Linux distributions such as Maemo and MeeGo. Such
operating systems can be installed on different phone models, and typically each device can receive multiple OS
software updates over its lifetime.

Tablet PC and iPads


• This mobile device is larger than a mobile phone or a PDA, integrates a touch screen, and is
operated using touch-sensitive motions on the screen. They are often controlled by a pen or by the touch of a finger
• They are usually in slate form and are light in weight. Examples would include iPads, Galaxy Tabs,
BlackBerry PlayBooks etc.
• They offer the same functionality as portable computers
• They support mobile computing in a far superior way and have enormous processing horsepower
• Users can edit and modify document files, access high speed internet, stream video and audio data,
receive and send e-mails, attend/give lectures and presentations among its very many other functions
• They have excellent screen resolution and clarity

Advantages

• Location Flexibility
• This has enabled users to work from anywhere as long as there is a connection established
• A user can work without being in a fixed position
• Their mobility ensures that they are able to carry out numerous tasks at the same time and perform their
stated jobs.

• Saves Time
• The time consumed or wasted while travelling from different locations or to the office and back, has been
slashed
• One can now access all the important documents and files over a secure channel or portal and work as if
they were on their computer
• It has enhanced telecommuting in many companies
• It has also reduced unnecessary incurred expenses

• Enhanced Productivity
• Users can work efficiently and effectively from whichever location they find comfortable
• This in turn enhances their productivity level

• Ease of Research
• Research has been made easier, since users earlier were required to go to the field and search for facts and
feed them back into the system
• It has also made it easier for field officers and researchers to collect and feed data from wherever they are
without making unnecessary trips to and from the office to the field

• Entertainment
• Video and audio recordings can now be streamed on-the-go using mobile computing
• It's easy to access a wide variety of movies, educational and informative material
• With the improvement and availability of high speed data connections at considerable cost, one is able to
get all the entertainment they want as they browse the internet for streamed data
Streamlining of Business Processes

• Business processes are now easily available through secured connections
• Looking into security issues, adequate measures have been put in place to ensure authentication and
authorization of the user accessing the services
• Some business functions can be run over secure links and sharing of information between business
partners can also take place

Security Issues
• Mobile computing has its fair share of security concerns as any other technology
• Due to its nomadic nature, it's not easy to monitor the proper usage
• Users might have different intentions on how to utilize this privilege
• Improper and unethical practices such as hacking, industrial espionage, pirating, online fraud and
malicious destruction are just a few of the problems experienced by mobile computing
• Another big problem plaguing mobile computing is credential verification
• When users share usernames and passwords, it poses a major threat to security
• This being a very sensitive issue, most companies are very reluctant to implement mobile computing due
to the dangers of misrepresentation
• The problem of identity theft is very difficult to contain or eradicate

Current Trends
• Below is a list of the current mobile technologies, starting with 5G, the hottest
mobile technology available in the market.
• 5G
• In telecommunications, 5G is the fifth generation technology standard for broadband cellular networks,
which cellular phone companies began deploying worldwide in 2019, and is the planned successor to the 4G
networks which provide connectivity to most current cellphones. 5G networks are predicted to have more than 1.7
billion subscribers worldwide by 2025, according to the GSM Association. Like its predecessors, 5G networks
are cellular networks, in which the service area is divided into small geographical areas called cells.

• 4G
• 4G is the fourth generation of broadband cellular network technology, succeeding 3G, and preceding 5G.
A 4G system must provide capabilities defined by the ITU in IMT-Advanced. Potential and current applications include
amended mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing, and 3D
television.
• The first-release WiMAX standard was commercially deployed in South Korea in 2006 and has since
been deployed in most parts of the world.

• 3G or third generation
• 3G mobile telecommunications is a generation of standards for mobile phones and mobile
telecommunication services fulfilling the International Mobile Telecommunications-2000 (IMT-2000) specifications
by the International Telecommunication Union. Application services include wide-area wireless voice telephone,
mobile Internet access, video calls and mobile TV, all in a mobile environment.

• Global Positioning System (GPS)


• The Global Positioning System (GPS) is a space-based satellite navigation system that provides location
and time information in all weather, anywhere on or near the Earth, where there is an unobstructed line of sight to
four or more GPS satellites
• The GPS program provides critical capabilities to military, civil and commercial users around the world

• Long Term Evolution (LTE)


• LTE is a standard for wireless communication of high-speed data for mobile phones and data terminals
• It is based on the GSM/EDGE and UMTS/HSPA network technologies, increasing the capacity and speed
using new modulation techniques

• WiMAX

• WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communications standard
designed to provide 30 to 40 megabit-per-second data rates, with the latest update providing up to 1 Gbit/s for fixed
stations
• It is a part of a fourth generation or 4G wireless-communication technology
• WiMAX far surpasses the 30-metre wireless range of a conventional Wi-Fi Local Area Network (LAN),
offering a metropolitan area network with a signal radius of about 50 km

• Near Field Communication


• Near Field Communication (NFC) is a set of standards for smartphones and similar devices to establish
radio communication with each other by touching them together or bringing them into close proximity, usually no
more than a few centimeters
• Present and anticipated applications include contactless transactions, data exchange, and simplified setup
of more complex communications such as Wi-Fi. Communication is also possible between an NFC device and an
unpowered NFC chip, called a "tag"

Q Explain the concept of network Security


Answer:
Network security is a term that describes the security tools, tactics and security policies designed
to monitor, prevent and respond to unauthorized network intrusion, while also protecting digital assets, including
network traffic. Network security includes hardware and software technologies (including resources such as savvy
security analysts, hunters, and incident responders) and is designed to respond to the full range of potential threats
targeting your network.

The Three Key Focuses of Network Security


There are three key focuses that should serve as a foundation of any network security strategy: protection, detection
and response.
a. Protection entails any tools or policies designed to prevent network security intrusion.
b. Detection refers to the resources that allow you to analyze network traffic and quickly identify problems before
they can do harm.
c. Response is the ability to react to identified network security threats and resolve them as quickly as possible.

Q Explain the importance of network security


Answer
-Network security tools and devices exist to help organizations protect not only their sensitive information, but also
their overall performance, reputation and even their ability to stay in business.
-Companies that fall prey to cyberattacks often find themselves crippled from the inside out, unable to deliver
services or effectively address customer needs. Similarly, networks play a major role in internal company processes,
and when they come under attack, those processes may grind to a halt, further hampering an organization’s ability to
conduct business or even resume standard operations.
-But perhaps even more damaging is the detrimental effect that a network breach can have on your business’s
reputation.

Q Identify and explain the network security tools and techniques.


Answer:

1. Access control If threat actors cannot access your network, the amount of damage they will be able to do will be
extremely limited. But in addition to preventing unauthorized access, be aware that even authorized users can also
be potential threats.

2. Anti-malware software Malware, in the form of viruses, trojans, worms, keyloggers, spyware, etc. are designed
to spread through computer systems and infect networks. Anti-malware tools are a kind of network security software
designed to identify dangerous programs and prevent them from spreading.

3. Anomaly detection It can be difficult to identify anomalies in your network without a baseline understanding of
how that network should be operating.

4. Application security For many attackers, applications are a defensive vulnerability that can be exploited.
Application security helps establish security parameters for any applications that may be relevant to your network
security.

5. Data loss prevention (DLP) Often, the weakest link in network security is the human element. DLP technologies
and policies help protect staff and other users from misusing and possibly compromising sensitive data or allowing
said data out of the network.

6. Email security As with DLP, email security is focused on shoring up human-related security weaknesses. Via
phishing strategies (which are often very complex and convincing), attackers persuade email recipients to share
sensitive information via desktop or mobile device, or inadvertently download malware into the targeted network.

7. Endpoint security The business world is becoming increasingly bring your own device (BYOD), to the point
where the distinction between personal and business computer devices is almost non-existent.

8. Firewalls Firewalls function much like gates that can be used to secure the borders between your network and the
internet. Firewalls are used to manage network traffic, allowing authorized traffic through while blocking access to
non-authorized traffic.

9. Intrusion prevention systems Intrusion prevention systems (also called intrusion detection) constantly scan and
analyze network traffic/packets, so that different types of attacks can be identified and responded to quickly.

10. Network segmentation There are many kinds of network traffic, each associated with different security risks.
Network segmentation allows you to grant the right access to the right traffic, while restricting traffic from
suspicious sources.

11. Security information and event management (SIEM) Sometimes simply pulling together the right
information from so many different tools and resources can be prohibitively difficult — particularly when time is an
issue.

12. Virtual private network (VPN) VPN tools are used to authenticate communication between secure networks
and an endpoint device.

13. Web security Including tools, hardware, policies and more, web security is a blanket term to describe the
network security measures businesses take to ensure safe web use when connected to an internal network.

14. Wireless security Generally speaking, wireless networks are less secure than traditional networks. Thus, strict
wireless security measures are necessary to ensure that threat actors aren’t gaining access.

Q Explain the concept of Client Server category of networks.


Answer:
Client Server Computing
In client-server computing, the client requests a resource and the server provides that resource. A server may serve
multiple clients at the same time while a client is in contact with only one server. Both the client and server usually
communicate via a computer network but sometimes they may reside in the same system.

Characteristics of Client Server Computing
The salient points for client server computing are as follows:
Client server computing works with a system of request and response. The client sends a request to
the server and the server responds with the desired information.
The client and server should follow a common communication protocol so they can easily interact with
each other. All the communication protocols are available at the application layer.
A server can only accommodate a limited number of client requests at a time. So it uses a priority-based
system to respond to the requests.
Denial of Service (DoS) attacks hinder a server's ability to respond to authentic client requests by
inundating it with false requests.
An example of a client server computing system is a web server. It returns the web pages to the clients
that requested them.
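As an illustrative sketch (not part of the course text), the request-and-response pattern can be shown with Python's standard socketserver module. The EchoHandler name and the "HELLO" reply protocol are invented for this demo; a real web server would speak HTTP instead.

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    """Server side: read one request line and respond with the desired information."""
    def handle(self):
        request = self.rfile.readline().strip()
        self.wfile.write(b"HELLO " + request + b"\n")

def ask_server(port, name):
    """Client side: send a request to the server and wait for its response."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(name + b"\n")
        return sock.makefile("rb").readline().strip()

if __name__ == "__main__":
    # Port 0 lets the OS pick a free port; ThreadingTCPServer serves
    # multiple clients at the same time, as described above.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(ask_server(port, b"client-1").decode())  # HELLO client-1
    server.shutdown()
```

Note how the client blocks until the server responds: that request/response cycle is the defining characteristic listed above.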

Q What are the differences Between Client Server and Peer to Peer computing?
Answer:
The major differences between client server computing and peer to peer computing are as follows:
In client server computing, a server is a central node that services many client nodes. On the other hand, in a peer
to peer system, the nodes collectively use their resources and communicate with each other.
In client server computing, the server is the one that communicates with the other nodes. In peer to peer
computing, all the nodes are equal and share data with each other directly.
Client server computing is believed to be a subcategory of peer to peer computing.

Q What are the Advantages and Disadvantages of Client Server computing?


Answer:
Advantages of Client Server Computing
The different advantages of client server computing are −
All the required data is concentrated in a single place i.e. the server. So it is easy to protect the data and
provide authorisation and authentication.
The server need not be located physically close to the clients. Yet the data can be accessed efficiently.
It is easy to replace, upgrade or relocate the nodes in the client server model because all the nodes are
independent and request data only from the server.
All the nodes, i.e. clients and the server, may not be built on similar platforms, yet they can easily
facilitate the transfer of data.

Disadvantages of Client Server Computing


The different disadvantages of client server computing are −
If all the clients simultaneously request data from the server, it may get overloaded. This may lead to
congestion in the network.
If the server fails for any reason, then none of the requests of the clients can be fulfilled. This leads to
failure of the client server network.
The cost of setting up and maintaining a client server model is quite high.

Q Explain the concept of an App

Answer:
Web app
An interactive computer program, built with web technologies (HTML, CSS, JS), which stores (Database, Files) and
manipulates data (CRUD), and is used by a team or single user to perform tasks over the internet.

Q Enumerate the prerequisite for building a web application


Answer:
Prerequisites for Building a Web Application
To make a data-centric web app from the bottom-up, it is advantageous to understand:
1. Backend language (e.g. Python, Ruby) - control how your web app works
2. Web front end (HTML, CSS, Javascript) - for the look and feel of your web app
3. DevOps (Github, Jenkins) - Deploying / hosting your web app

If you do not have any experience with the points above, you need not worry. You have two options:
1. Learn the points above - there are lots of resources online to help you. I’d recommend Codecademy.
2. Use a web app builder like Budibase - As a builder, Budibase will remove the need to learn a backend
language. On top of that, Budibase will also take care of a lot of your DevOps tasks such as hosting.

Q Describe the steps for building a web application


Answer:
Building a Web Application

Step 1 – Source an idea


Before making a web app, you must first understand what you intend on building, and more importantly, the reason
for it.
Your idea should stem from solving someone’s problem. Ideally, your own problem.
It is important that developers choose an idea which interests them. Ask yourself:
How much time do I have to build this app?
What am I interested in?
What apps do I enjoy using?
What do I like about these apps?
How much time/money will this app save or generate for me (as a user)?
How much will it improve my life

Step 2 – Market Research


Once you have chosen your idea(s), it is important to research the market to see:
1. If a similar product exists
2. If a market exists

The number 1 reason start-ups fail is the failure to achieve product-market fit.
“Product/market fit means being in a good market with a product that can satisfy that market.”
To quickly find out if a similar web app exists, use the following tools to search for your idea:
1. Google
2. Patent and trademark search
3. Betalist
4. Product hunt

If a similar product exists, do not worry.

Step 3 - Define your web app's functionality


You have got your idea, you have validated the market, it is now time to list everything you want your app to do.
A common mistake here is to get carried away. The more functionality you add, the longer it will take to build your
web app. Quite often, the longer a web app takes to build, the more frustration you will experience.

For direction, I have included a list of basic functions required for a simple CRM app.

Users can create an account
Users can retrieve lost passwords
Users can change their passwords
Users can create new contacts

Step 4 - Sketch your web app


There are multiple stages of designing a web app.
The first stage is sketching using a notebook (with no lines) and pen/pencil
After steps 1, 2 and 3, you should have an idea of what your web app is, who your users are, and the features it will
have.
When sketching, consider the following:
Navigation
Branding
Forms
Buttons
Any other interactive elements

Step 5 – Plan your web app's workflow


It is time to put yourself in the shoes of your user. Here, we are going to plan your web app's workflow.
After you have finished analysing your competitor’s web apps, it is time to write down different workflows for your
app. Consider the following points:
How does a user signup
Do they receive a verification email
How does a user login
How does a user change their password
How does a user navigate through the app
How does a user change their user settings
How does a user pay for the app
How does a user cancel their subscription

Step 6 – Wireframing / Prototyping Your Web Application


Ok, it’s time to turn those sketches and that new-found understanding of your web application into a
wireframe/prototype.
What is wireframing / prototyping
Wireframing is the process of designing a blueprint of your web application. Prototyping is taking wireframing a
step further, adding an interactive display.
You can prototype/wireframe using the following tools:
Sketch (macOS)
InVision Studio (macOS)
Adobe XD (macOS, Windows)
Figma (Web, macOS, Windows, Linux)
Balsamiq (macOS, Windows, Web)

Step 7 – Seek early validation


You have now got a beautiful wireframe/prototype which visually describes your web app.
Before starting the development stage, I would like to share the following tips:
1. Attempt to get a small section of your app fully working. What we would call a “Complete Vertical”.
o Building the smallest possible section will allow you to piece all the bits together, and iron out those
creases early.
o You will get great satisfaction early by having something working - great motivation.
o Create things that you know you will throw away later - if it gets you something working now.
2. At the start - expect things to change a lot as you learn and discover what you have not thought about.
o Have faith that your app will stabilise.
o Do not be afraid to make big changes.

3. Spend time learning your tools.
o You may feel like you are wasting your time, reading, or experimenting with “hello world”. Learning the correct
way to do things will have a huge positive, cumulative effect on your productivity over time.
o Where possible, “Go with the grain” of your tools. Realise that as soon as you step out of the normal flow
/ usage of your toolset, you are on your own and could be in a deep time sink. There are always exceptions to this of
course!
4. Do not avoid issues that need to be fixed.
o Face your issues head on - they will never go away and will only grow in stature.

Step 8 – Architect and build your database


So, we know roughly our web application’s functionality, what it looks like, and the pages required. Now it is time
to determine what information we will store in our database.

Q What is a Database
Answer:
Database
A database is simply a collection of data! Data can be stored to disk, or in memory on a server, or both. You could
create a folder on your hard drive, store a few documents, and call it a database.
A Database Management System (DBMS) is a system that provides you with consistent APIs to (most commonly):
Create databases, update and delete databases
Read and write data to databases
Secure access to a database by providing levelled access to different areas and functions
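The "consistent APIs" a DBMS provides can be sketched with Python's built-in sqlite3 module. The table name and sample row are invented for the demo; an in-memory database stands in for a real one.

```python
import sqlite3

# In-memory database for the demo; a real app would use a file or a server DBMS
conn = sqlite3.connect(":memory:")

# Create: define a place to store records
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")

# Write: insert data (parameterized to avoid SQL injection)
conn.execute("INSERT INTO contacts (name) VALUES (?)", ("Ada",))
conn.commit()

# Read: query the data back
rows = conn.execute("SELECT name FROM contacts").fetchall()
print(rows)  # [('Ada',)]
```

The same create/read/write pattern applies whatever DBMS you choose; only the connection and dialect change.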

Database types
There are many types of database for many different purposes. A web app will most commonly use one of the
following:
a. SQL
You should use a SQL database if your data is very relational. Your data is relational if you have multiple, well
defined record types that have relationships between them.
b. Document Database
You should use a document database if your data is not very relational. Document databases store “documents”.
Each record in your database is simply a big blob of structured data - often in JSON format.
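The contrast between the two types can be sketched in Python. The customer/order shapes below are invented for illustration: the relational style keeps two well-defined record types linked by a key, while the document style stores one self-contained blob of structured data.

```python
import json

# Relational style: two record types with a relationship between them
customers = [{"id": 1, "name": "Acme"}]
orders = [{"id": 10, "customer_id": 1, "total": 99.0}]  # key points back to customers

# Document style: one big blob of structured data per record, often JSON
document = {
    "id": 1,
    "name": "Acme",
    "orders": [{"id": 10, "total": 99.0}],  # nested instead of joined
}
print(json.dumps(document))
```

If you find yourself constantly nesting related records inside each other, your data is relational and a SQL database is likely the better fit.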

Physical separation
Every one of your clients has a separate database (although they could share a database server with others). This makes it
much more difficult to make a mistake that leads to data leakage.
Pros:
Most secure
More scalable

Cons:
Managing, maintaining and upgrading is more complex
Querying all your clients’ data together is more difficult

Logical separation
All of your clients are stored in one giant database.
Every time you need to get data for a single client, you must remember to include a filter for the client, e.g.
SELECT * FROM customers WHERE customerClientId = 1234
Pros:
Easier to get started
Easier to maintain and upgrade
Can easily query all your clients’ data with one query

Cons:
Easy to make a mistake that will result in a data breach

More difficult to scale
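A minimal sketch of the logical-separation pattern, using Python's sqlite3 module. The table layout and client IDs are invented for the demo; the point is that every query goes through a helper that always applies the client filter, so the "forgot the filter" mistake mentioned above cannot slip in.

```python
import sqlite3

# One giant shared database holding two different clients' data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, clientId INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, 1234, "Ada"), (2, 5678, "Bob")],
)

def customers_for_client(client_id):
    """All reads go through here, so the client filter is never forgotten."""
    return conn.execute(
        "SELECT name FROM customers WHERE clientId = ?", (client_id,)
    ).fetchall()

print(customers_for_client(1234))  # [('Ada',)]
```

Centralising the filter in one helper is a cheap defence against the data-breach risk listed under the cons.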

Step 9 - Build the frontend


Note: In reality, you will build your backend and frontend at the same time. But for this post, we’ll keep it simple.
The Frontend is the visual element of your web application. It defines what you see and interact with. The frontend
is developed with HTML, CSS, and JavaScript.
First, you need to set up your development environment. The components of this will be:
1. A code editor, such as VS Code, Sublime Text
2. A compilation and packaging framework:
1. Webpack
2. Gulp
3. Grunt

3. A frontend framework (strictly not necessary, but highly advised unless you are an experienced frontend
developer):
1. React
2. Ember
3. Vue
4. Svelte

4. Configuring your packaging tool to talk to your backend - which is most likely running on a different port on
localhost.

Step 10 - Build your backend


What do we mean by the backend?
The backend is typically what manages your data. This refers to databases, servers, and everything the user can’t see
within a web application.
When building your web app, you need to choose between:
1. Server Pages (Multiple Page Application)
2. Single Page Application

“But isn’t this the frontend?” - I hear you say. Yes! But your choice will affect how you develop your backend.
The primary jobs of the backend will be to:
Provide HTTP endpoints for your frontend, which allow it to operate on your data. E.g. Create, Read,
Update and Delete (“CRUD”) records.
Authenticate users (verify they are who they say they are: aka log them in).
Authorization. When a logged in user makes a request, the backend will determine whether they are
allowed (authorized) to perform the requested action.
Serve the frontend
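The backend's CRUD-plus-authorization job can be sketched without any web framework. Everything here is invented for the demo (the in-memory records dict, the permissions set); a real backend would expose these functions as HTTP endpoints and store records in a database.

```python
# In-memory "backend": CRUD operations guarded by an authorization check.
records = {}
next_id = 1

def authorized(user, action):
    """Authorization: is this (already authenticated) user allowed to act?"""
    return action in user.get("permissions", set())

def create(user, data):
    global next_id
    if not authorized(user, "create"):
        raise PermissionError("not allowed")
    records[next_id] = data
    next_id += 1
    return next_id - 1

def read(user, record_id):
    if not authorized(user, "read"):
        raise PermissionError("not allowed")
    return records[record_id]

admin = {"name": "ada", "permissions": {"create", "read"}}
rid = create(admin, {"title": "hello"})
print(read(admin, rid))  # {'title': 'hello'}
```

Update and delete follow the same shape: check authorization first, then operate on the record.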
Step 11 - Host your web application
What is hosting
Hosting involves running your web app on a particular server.
When using Budibase, this step can be automated with Budibase hosting. With Budibase, you are still required to
buy a domain.
If you are not using Budibase to host your web application, follow these quick steps:
1. Buy a domain - Namecheap
2. Buy/Setup an SSL certificate - Let’s Encrypt
3. Choose a cloud provider:
1. Amazon
2. MS Azure
3. Google Cloud Platform
4. Lower cost: Digital Ocean / Linode - if you are happy managing your own VMs
5. Zeit Now, Heroku, Firebase are interesting alternatives that aim to be faster and easier to get things done
- you should read about what they offer.

Step 12 - Deploy your web app
You have sourced your idea, validated it, designed and developed your web app, and chosen your hosting provider.
You’re now at the last step. Well done!
The deployment step is how your web application gets from source control on your computer to your
cloud hosting from step 11.

Q What is a firewall?
Answer:
A firewall forms a barrier through which the traffic going in each direction must pass. A firewall security policy
dictates which traffic is authorized to pass in each direction.

Q What is Personal Firewall


Answer
A personal firewall controls the traffic between a personal computer or workstation on one side and the Internet or
enterprise network on the other side. Personal firewall functionality can be used in the home environment and on
corporate intranets.

Q What are the benefits of a host-based firewall?
Answer
A host-based firewall is a software module used to secure an individual host. Such modules are available in many
operating systems or can be provided as an add-on package. Like conventional stand-alone firewalls, host-resident
firewalls filter and restrict the flow of packets. A common location for such firewalls is a server.
There are several benefits to the use of a server-based or workstation-based firewall:
• Filtering rules can be tailored to the host environment. Specific corporate security policies for servers can be implemented, with different filters for servers used for different applications.
• Protection is provided independent of topology. Thus both internal and external attacks must pass through
the firewall.
• Used in conjunction with stand-alone firewalls, the host-based firewall provides an additional layer of
protection.

• A new type of server can be added to the network, with its own firewall, without the necessity of altering the network firewall configuration.
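The idea of filtering rules tailored to a particular host can be sketched as a tiny rule table. The rule format and field names below are invented for illustration, not the syntax of any real firewall:

```python
# A toy packet filter: each rule matches on direction, port and protocol.
# A host-based firewall on a web server might allow only HTTP/HTTPS in.
RULES = [
    {"direction": "in", "port": 80, "protocol": "tcp", "action": "allow"},
    {"direction": "in", "port": 443, "protocol": "tcp", "action": "allow"},
]
DEFAULT_ACTION = "deny"  # traffic not explicitly authorized is blocked

def filter_packet(direction, port, protocol):
    # Return the action of the first matching rule, else the default.
    for rule in RULES:
        if (rule["direction"] == direction
                and rule["port"] == port
                and rule["protocol"] == protocol):
            return rule["action"]
    return DEFAULT_ACTION
```

A database server on the same network would carry a different rule table (say, allowing only its database port from the intranet), which is exactly what "tailored to the host environment" means.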

Q Explain the Parallel programming Model
Answer
Parallel Programming Models
A parallel programming model is a set of program abstractions for fitting parallel activities from the application to
the underlying parallel hardware. It spans over different layers: applications, programming languages, compilers,
libraries, network communication, and I/O systems. Two widely known parallel programming models are:
a. shared memory and
b. message passing,
but there are also different combinations of both.
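The two models can be contrasted in a few lines. Here threads stand in for shared-memory tasks and a queue stands in for an explicit message channel; this is a sketch of the two styles, not MPI or any particular library:

```python
import threading
import queue

# Shared-memory model: tasks read and write a common address space,
# so access must be synchronized (here, with a lock).
counter = 0
lock = threading.Lock()

def shared_memory_task():
    global counter
    for _ in range(1000):
        with lock:
            counter += 1

threads = [threading.Thread(target=shared_memory_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Message-passing model: tasks keep private state and communicate
# explicitly by exchanging messages over a channel.
channel = queue.Queue()
received = []

def sender():
    channel.put("hello")           # explicit send

def receiver():
    received.append(channel.get())  # explicit receive (blocks until sent)

pair = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in pair:
    t.start()
for t in pair:
    t.join()
```

Note the asymmetry: in the shared-memory version the data is never "sent" anywhere, only protected; in the message-passing version nothing is shared, only exchanged.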

Q Enumerate and explain 4 of the parallel programming models
Answer
The data-parallel programming model is also among the most important ones, as it was revived by the increasing popularity of MapReduce and GPGPU (General-Purpose computing on Graphics Processing Units).
a. In the shared-memory programming model, tasks share a common address space, which they read and
write in an asynchronous manner.
b. In the message-passing programming model, tasks have private memories, and they communicate
explicitly via message exchange.

Mainstream parallel programming environments are based on augmenting traditional sequential programming
languages with low-level parallel constructs (library function calls and/or compiler directives).

MPI
MPI (the Message Passing Interface) is a library of routines with bindings in Fortran, C, and C++. It is an example of an explicitly parallel API that implements the message-passing model via library function calls.

OpenMP
On the other side, OpenMP is an example of a mainly implicit parallel API intended for shared-memory multiprocessors. It exploits parallelism through compiler directives and library function calls.

MapReduce Parallel Programming Model
One of the most widely used parallel programming models today is MapReduce. MapReduce is easy both to learn
and use, and is especially useful in analyzing large datasets.
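The classic illustration of the model is a word count, where map emits (word, 1) pairs and reduce sums the counts grouped per word. This is a single-process sketch of the programming model, not a distributed implementation:

```python
from collections import defaultdict

def map_phase(document):
    # map: emit a (word, 1) pair for every word in the input split
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # reduce: sum the values grouped under each key (word)
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Each "document" would normally be a split processed by a different node.
pairs = []
for doc in ["big data big ideas", "big datasets"]:
    pairs.extend(map_phase(doc))
word_counts = reduce_phase(pairs)
```

In a real MapReduce system the framework, not the programmer, handles partitioning the input, shuffling the pairs to reducers, and recovering from node failures; the programmer writes only the two functions.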

OpenCL
OpenCL has some advantages over other parallel programming models. First of all, it is the only one of the “open”
standards for which there actually are implementations by all major vendors—unlike for OpenMP or OpenACC.

The CUDA programming model
The CUDA programming model is a parallel programming model that provides an abstract view of how processes
can be run on underlying GPU architectures. The evolution of GPU architecture and the CUDA programming
language have been quite parallel and interdependent.

Q What is message-passing programming?
Answer:
Messages
A message transfer is when data moves from variables in one sub-program to variables in another sub-program. The
message consists of the data being sent.

The message-passing programming model
The sequential paradigm for programming is a familiar one. The programmer has a simplified view of the target
machine as a single processor which can access a certain amount of memory. He or she therefore writes a single
program to run on that processor.

Single-Program-Multiple-Data (SPMD)
Message-passing paradigm involves a set of sequential programs, one for each processor. In reality, it is rare for a
parallel programmer to make full use of this generality and to write a different executable for each processor.

Dependency analysis
When examining an artifact for re-use you might want to understand what it depends on. Developing a service that
has a dependency on a large number of other distinct systems is likely to result in something that has to be
revalidated every time each of those dependencies changes (which might therefore be quite often).
To undertake a typical dependency analysis, perform the following steps:
1. Identify the artifact with dependencies you want to analyze.
2. Trace through any relationships defined on that artifact and identify the targets of the relationships. This impact analysis thus results in a list of "dependencies" that the selected artifact depends on.
3. If these "dependencies" also depend on other artifacts, then the selected artifact will also have an indirect dependency. The impact analysis must therefore act recursively, looking for relationships from any of the "dependencies".

How dependencies are found
When impact analysis is started, it does not change the direction of processing through the graph. For example, an
object, B, has a dependency on object C, and object B is depended on by objects A and D, as shown in Figure 1
below.
If object A is selected for analysis, the results list includes object B and object C. Although object D also depends on objects B and C, the analysis only traces forwards through the dependencies; it will not find objects that sit backwards in the dependency hierarchy and are not directly linked to the selected object, so object D will not be in the results list.
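The recursive analysis and its one-way direction can be sketched over a small graph matching the example (A depends on B, B on C, and D on B and C). The dictionary encoding is illustrative:

```python
# artifact -> set of artifacts it directly depends on
DEPENDS_ON = {
    "A": {"B"},
    "B": {"C"},
    "C": set(),
    "D": {"B", "C"},
}

def dependencies(artifact, seen=None):
    # Recursively collect direct and indirect dependencies, always
    # tracing in the "depends on" direction, never backwards.
    if seen is None:
        seen = set()
    for target in DEPENDS_ON.get(artifact, set()):
        if target not in seen:
            seen.add(target)
            dependencies(target, seen)
    return seen
```

Calling dependencies("A") yields B and C; D never appears, because the traversal does not reverse direction to find objects that merely depend on B or C.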

Introduction to Open Specification for Multi-Processing (OpenMP)
OpenMP stands for Open specifications for Multi-Processing, developed through collaborative work between interested parties from the hardware and software industry, government and academia. It is an Application Program Interface (API) that is used to explicitly direct multi-threaded, shared-memory parallelism. API components include compiler directives, runtime library routines and environment variables.

Brief History of OpenMP
In 1991, the Parallel Computing Forum (PCF) group invented a set of directives for specifying loop parallelism in Fortran programs. X3H5, an ANSI subcommittee, developed an ANSI standard based on PCF. In 1997, the first version of OpenMP for Fortran was defined by the OpenMP Architecture Review Board. A binding for C/C++ was introduced later. Version 3.1 has been available since 2011.

Thread
A process is an instance of a computer program that is being executed. It contains the program code and its current
activity. A thread of execution is the smallest unit of processing that can be scheduled by an operating system.
The thread model is an extension of the process model.

A Process
A process contains all the information needed to execute the program:
Process ID
Program code
Data on run-time stack
Global data
Data on heap
Each process has its own address space. In multitasking, processes are given time slices in a round-robin fashion. If
computer resources are assigned to another process, the status of the present process has to be saved, in order that
the execution of the suspended process can be resumed at a later time.

Differences between threads and processes
A thread is contained inside a process. Multiple threads can exist within the same process and share resources such
as memory. The threads of a process share the latter’s instructions (code) and its context (values that its variables
reference at any given moment). Different processes do not share these resources.

OpenMP Programming Model

Shared memory, thread-based parallelism. OpenMP is based on the existence of multiple threads in the shared
memory programming paradigm. A shared memory process consists of multiple threads.

Explicit Parallelism
In explicit parallelism, the programmer has full control over parallelization. OpenMP is not an automatic parallel programming model.

Compiler Directive Based
Most OpenMP parallelism is specified through the use of compiler directives which are embedded in the source code.
OpenMP is not: necessarily implemented identically by all vendors; meant for distributed-memory parallel systems (it is designed for shared-address-space machines); or guaranteed to make the most efficient use of shared memory.

Fork-Join Parallelism
FORK: An OpenMP program begins as a single process: the master thread. The master thread executes sequentially until the first parallel region construct is encountered.

JOIN
When the threads complete executing the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread.
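The fork-join pattern can be sketched with ordinary threads: the master thread forks a team of workers at the parallel region and joins them all before continuing sequentially. This is a sketch of the pattern only, not OpenMP itself:

```python
import threading

results = [0] * 4  # one slot per worker thread, written in the region

def region(thread_num):
    # the work enclosed in the "parallel region"
    results[thread_num] = thread_num * thread_num

# FORK: the master thread creates a team of threads.
team = [threading.Thread(target=region, args=(i,)) for i in range(4)]
for t in team:
    t.start()

# JOIN: the team synchronizes and terminates; only the master continues.
for t in team:
    t.join()
# From here on, execution is sequential again in the master thread.
```

In OpenMP the fork and join are implicit at the braces of a #pragma omp parallel block; here they correspond to the start() and join() calls.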

A “Pragma”
It stands for "pragmatic information". A pragma is a way to communicate information to the compiler. The information is non-essential in the sense that the compiler may ignore it and still produce a correct object program.

OpenMP | Hello World program
Prerequisite: OpenMP | Introduction with Installation Guide. In this section, we will learn how to create a parallel Hello World program using OpenMP.
Steps to Create a Parallel Program
1. Include the header file: We have to include the OpenMP header for our program along with the
standard header files.

//OpenMP header
#include <omp.h>

2. Specify the parallel region:

In OpenMP, we need to mark the region which we are going to make parallel using the directive #pragma omp parallel. This directive is used to fork additional threads to carry out the work enclosed in the parallel region.
#pragma omp parallel
{
//Parallel region code
}
So, here we write:
#pragma omp parallel
{
printf("Hello World... from thread = %d\n",
omp_get_thread_num());
}
3. Set the number of threads: we can set the number of threads to execute the program using the OMP_NUM_THREADS environment variable:
export OMP_NUM_THREADS=5

Compile and Run:

Compile:
gcc -o hello -fopenmp hello.c

Execute:

./hello
Below is the complete program for the above approach:

//OpenMP Hello World program
#include <omp.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
#pragma omp parallel
{
printf("Hello World... from thread = %d\n",
omp_get_thread_num());
}
return 0;
}

Since we specified the number of threads to be executed as 5, 5 threads will execute the same print statement at the same point in time.

Definition of Program Evaluation
Evaluation is the systematic application of scientific methods to assess the design, implementation, improvement or
outcomes of a program (Rossi & Freeman, 1993; Short, Hennessy, & Campbell, 1996).

Purposes for Program Evaluation
• Demonstrate program effectiveness to funders
• Improve the implementation and effectiveness of programs
• Better manage limited resources
• Document program accomplishments
• Justify current program funding
• Support the need for increased levels of funding
• Satisfy ethical responsibility to clients to demonstrate positive and negative effects of program participation (Short, Hennessy, & Campbell, 1996)
• Document program development and activities to help ensure successful replication

Barriers
Program evaluations require funding, time and technical skills: requirements that are often perceived as diverting
limited program resources from clients. Program staff are often concerned that evaluation activities will inhibit
timely accessibility to services or compromise the safety of clients.

Overcoming Barriers
Collaboration is the key to successful program evaluation. In evaluation terminology, stakeholders are defined as
entities or individuals that are affected by the program and its evaluation (Rossi & Freeman, 1993; CDC, 1999).
Involvement of these stakeholders is an integral part of program evaluation. Stakeholders include but are not limited
to program staff, program clients, decision makers, and evaluators.

Types of Evaluation
Context Evaluation
Investigating how the program operates or will operate in a particular social, political, physical and economic
environment.

Formative Evaluation
Assessing needs that a new program should fulfill (Short, Hennessy, & Campbell, 1996), examining the early stages
of a program's development (Rossi & Freeman, 1993), or testing a program on a small scale before broad
dissemination (Coyle, Boruch, & Turner, 1991).

Process Evaluation
Examining the implementation and operation of program components. Sample question: Was the program
administered as planned?

Impact Evaluation
Investigating the magnitude of both positive and negative changes produced by a program (Rossi & Freeman, 1993).

Outcome Evaluation
Assessing the short and long-term results of a program. Sample question: What are the long-term positive effects of
program participation?

Performance or Program Monitoring
Similar to process evaluation, differing only by providing regular updates of evaluation results to stakeholders rather
than summarizing results at the evaluation's conclusion (Rossi & Freeman, 1993; Burt, Harrell, Newmark, Aron, &
Jacobs, 1997).

Logic Models
Logic models are flowcharts that depict program components. These models can include any number of program
elements, showing the development of a program from theory to activities and outcomes.

Communicating Evaluation Findings
Preparation, effective communication and timeliness are required to ensure the utility of evaluation findings. Questions
that should be answered at the evaluation's inception include: what will be communicated? to whom? by whom? and
how? The target audience must be identified and the report written to address their needs including the use of non-
technical language and a user-friendly format (National Committee for Injury Prevention and Control, 1989).

Models for Parallel Computing
A Model for Parallel Programming is an abstraction and is machine architecture independent. A model can be
implemented on various hardware and memory architectures. There are several parallel programming models like
Shared Memory model, Threads model, Message Passing model, Data Parallel model and Hybrid model etc.

Shared Memory Model
In the shared-memory programming model, tasks share a common address space, which they read and write asynchronously.

Threads Model
In this model a single process can have multiple, concurrent execution paths. The main program is scheduled to run by the native operating system. It loads and acquires all the necessary software and user resources to activate the process.

Message Passing Model
In the message-passing model, there exists a set of tasks that use their own local memories during computation. Multiple tasks can reside on the same physical machine as well as across an arbitrary number of machines.

Data Parallel Model
In the data parallel model, most of the parallel work focuses on performing operations on a data set. The data set is
typically organised into a common structure, such as an array or a cube.

Hybrid model
Hybrid models are generally tailor-made models suited to specific applications; they fall into the category of mixed models. New application-oriented models of this type keep appearing.

Single Program Multiple Data (SPMD)
SPMD is actually a "high level" programming model that can be built upon any combination of the previously
mentioned parallel programming models. A single program is executed by all tasks simultaneously.

Multiple Program Multiple Data (MPMD)
Like SPMD, MPMD is actually a “high level” programming model that can be built upon any combination of the
previously mentioned parallel programming models.

Q Explain the concept of computer forensics
Answer
Computer Forensics is a scientific method of investigation and analysis in order to gather evidence from the digital
devices or computer networks and components which is suitable for presentation in a court of law or legal body.

Digital Forensics is defined as the process of preservation, identification, extraction, and documentation of computer
evidence which can be used by the court of law. It is a science of finding evidence from digital media like a
computer, mobile phone, server, or network.

Q Explain the characteristics of digital forensics
Answer
Characteristics of Digital Forensics
• Identification
• Preservation
• Analysis
• Documentation
• Presentation

Distributed Systems
A distributed system is a computing environment in which various components are spread across multiple computers
(or other computing devices) on a network. These devices split up the work, coordinating their efforts to complete
the job more efficiently than if a single device had been responsible for the task.

Elements of Distributed Systems
Distributed systems have evolved over time, but today’s most common implementations are largely designed to operate via the internet and, more specifically, the cloud.
For Example:
• A distributed system begins with a task, such as rendering a video to create a finished product ready for
release.
• The web application, or distributed applications, managing this task — like a video editor on a client
computer:
• splits the job into pieces
• An algorithm gives one frame of the video to each of a dozen different computers (or nodes) to complete
the rendering
• Once the frame is complete, the managing application gives the node a new frame to work on
• This process continues until the video is finished and all the pieces are put back together
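The rendering example above can be sketched as a work queue: the managing application hands a frame to each free node and keeps assigning frames until the job is done. The nodes are simulated here by threads, and the "rendering" is a placeholder:

```python
import queue
import threading

frames = queue.Queue()
for frame_number in range(12):   # the job, split into pieces
    frames.put(frame_number)

rendered = []
rendered_lock = threading.Lock()

def node():
    # Each node repeatedly takes a frame, "renders" it, then asks
    # for more work, until no frames remain.
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return
        with rendered_lock:
            rendered.append(frame)   # placeholder for real rendering

nodes = [threading.Thread(target=node) for _ in range(3)]
for n in nodes:
    n.start()
for n in nodes:
    n.join()
# All pieces can now be put back together in frame order.
```

The queue plays the role of the managing application: no node ever gets the same frame twice, and a fast node simply ends up rendering more frames than a slow one.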

Patterns in a Distributed System
A Software Design Pattern is defined as an ideal solution to a contextualized programming problem. Patterns are reusable solutions to common problems that represent the best practices available at the time.
When thinking about the challenges of a distributed computing platform, the trick is to:
• break it down into a series of interconnected patterns
• simplify the system into smaller, more manageable and more easily understood components, which helps abstract a complicated architecture

Patterns are commonly used to describe distributed systems, such as:
• Command and Query Responsibility Segregation (CQRS) and
• two-phase commit (2PC)

Different combinations of patterns are used to design distributed systems, and each approach has unique benefits and
drawbacks.
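Two-phase commit, one of the patterns named above, can be sketched in a few lines: the coordinator first asks every participant to prepare (vote), and commits only if all vote yes, otherwise aborts everywhere. The participant class here is purely illustrative:

```python
class Participant:
    # A toy participant that votes according to can_commit.
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        return self.can_commit   # phase 1 vote

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1 (voting): every participant must vote yes to proceed.
    votes = [p.prepare() for p in participants]
    # Phase 2 (completion): commit everywhere, or abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"
```

The benefit is atomicity across nodes; the drawback, in a real system, is that participants block while waiting for the coordinator's decision, which is one reason alternative patterns exist.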

Benefits of Distributed Systems
• Greater flexibility: It is easier to add computing power as the need for services grows. In most cases
today, you can add servers to a distributed system on the fly.

• Reliability: A well-designed distributed system can withstand failures in one or more of its nodes without
severely impacting performance. In a monolithic system, the entire application goes down if the server goes down.
• Enhanced speed: Heavy traffic can bog down a single server, impacting performance for everyone.
• Geo-distribution: Distributed content delivery is both intuitive for any internet user, and vital for global
organizations

Challenges of Distributed Systems
Distributed systems are considerably more complex than monolithic computing environments, and raise a number of
challenges around design, operations and maintenance vis-à-vis:
Increased opportunities for failure:
• The more systems added to a computing environment, the more opportunity there is for failure
• If a system is not carefully designed and a single node crashes, the entire system can go down
• Distributed systems are designed to be fault tolerant, but that fault tolerance isn’t automatic or fool-proof

Synchronization process challenges:
• Distributed systems work without a global clock
• It requires careful programming to ensure that processes are properly synchronized to avoid transmission
delays that result in errors and data corruption
• In a complex system — such as a multiplayer video game — synchronization can be challenging,
especially on a public network that carries data traffic

Imperfect scalability:
• Doubling the number of nodes in a distributed system does not necessarily double performance
• Architecting an effective distributed system that maximizes scalability is a complex undertaking that
needs to take into account load balancing, bandwidth management and other issues.

More complex security:
• Managing a large number of nodes in a heterogeneous or globally distributed environment creates
numerous security challenges
• A single weak link in a file system or larger distributed system network can expose the entire system to
attack.

Risks of Distributed Systems
The challenges of distributed systems as outlined above create a number of correlating risks such as:
Security:
• Distributed systems are as vulnerable to attack as any other system, but their distributed nature creates a much larger attack surface that exposes organizations to threats.

Risk of network failure:
• Distributed systems are beholden to public networks in order to transmit and receive data
• If one segment of the internet becomes unavailable or overloaded, distributed system performance may
decline.

Governance and control issues:
• Distributed systems lack the governability of monolithic, single-server-based systems, creating auditing
and adherence issues around global privacy laws such as GDPR
• Globally distributed environments can impose barriers to providing certain levels of assurance and impair
visibility into where data resides.

Cost control:
Unlike centralized systems, the scalability of distributed systems allows administrators to easily add additional
capacity as needed, which can also increase costs.

• Pricing for cloud-based distributed computing systems is based on usage (such as the number of memory resources and CPU power consumed over time)
• If demand suddenly spikes, organizations can face a massive bill.

Access Control in Distributed Systems
Administrators use a variety of approaches to manage access control in distributed computing environments, ranging from traditional access control lists (ACLs) to role-based access control (RBAC).
One of the most promising access control mechanisms for distributed systems is attribute-based access control (ABAC), which controls access to objects and processes using rules that include:
• information about the user
• the action requested and
• the environment of that request.
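An ABAC decision combines exactly those three kinds of attributes. The policy below (administrators may do anything; resource owners may delete only during business hours) is entirely made up for illustration:

```python
def abac_allows(user, action, environment):
    # Rule 1: administrators may perform any action.
    if user.get("role") == "admin":
        return True
    # Rule 2: resource owners may delete, but only during
    # business hours (an environment attribute of the request).
    if (action == "delete"
            and user.get("is_owner")
            and 9 <= environment.get("hour", 0) < 17):
        return True
    # Default: deny anything not explicitly allowed.
    return False
```

Contrast this with an ACL (a per-object list of identities) or RBAC (permissions attached to roles only): the ABAC rule can consult the request's context, such as time of day, without enumerating users.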

Application of distributed systems
• Distributed systems are used when a workload is too great for a single computer or device to handle
• They are also helpful in situations when the workload is subject to change, such as e-commerce traffic on
Cyber Monday
• Virtually every internet-connected web application that exists is built on top of some form of distributed system.
• Some of the most common examples of distributed systems:
• Telecommunications networks (including cellular networks and the fabric of the internet)
• Graphical and video-rendering systems
• Scientific computing, such as protein folding and genetic research
• Airline and hotel reservation systems
• Multiuser video conferencing systems
• Cryptocurrency processing systems (e.g. Bitcoin)
• Peer-to-peer file-sharing systems (e.g. BitTorrent)
• Distributed community compute systems (e.g. Folding@Home)
• Multiplayer video games
• Global, distributed retailers and supply chain management (e.g. Amazon)

Types of Distributed Deployments
• Distributed deployments can range from tiny, single-department deployments on local area networks to large-scale, global deployments
• In addition to their size and overall complexity, organizations can consider deployments based on:
• the size and capacity of their computer network
• the amount of data they’ll consume
• how frequently they run processes
• whether they’ll be scheduled or ad hoc
• the number of users accessing the system
• capacity of their data center and the necessary data fidelity and availability requirements.

Based on these considerations, distributed deployments are categorized as:
• Departmental, small enterprise
• Medium enterprise or large enterprise
• Distributed systems can also evolve over time, transitioning from departmental to small enterprise as the
enterprise grows and expands.

Factors Determining the Need for Distributed Systems
• They’re essential to the operations of wireless networks, cloud computing services and the internet
• Distributed systems provide scalability and improved performance in ways that monolithic systems
cannot

• Distributed systems can offer features that would be difficult or impossible to develop on a single system
• For example, if the Master Catalog does not see the segment bits it needs for a restore, it can ask the other off-site node or nodes to send the segments
• Virtually everything you do now with a computing device takes advantage of the power of distributed
systems
• sending an email
• playing a game or
• reading this article on the web.

Models and Architectures of Distributed Systems
There are two models and architectures of distributed systems:
• Client-server systems:
• the most traditional and simple type of distributed system, involve a multitude of networked computers
that interact with a central server for data storage, processing or other common goal
• the client requests a resource and the server provides that resource
• A server may serve multiple clients at the same time while a client is in contact with only one server
• Both the client and server usually communicate via a computer network and so they are a part of
distributed systems.
• Cell phone networks are an advanced type of distributed system that share workloads among handsets,
switching systems and internet-based devices
• Peer-to-peer networks:
• workloads are distributed among hundreds or thousands of computers all running the same software
• The peer to peer systems contains nodes that are equal participants in data sharing
• All the tasks are equally divided between all the nodes
• The nodes interact with each other as required as they share resources
• This is done with the help of a network.

Characteristics of a Distributed system
• Scalability: The ability to grow as the size of the workload increases is an essential feature of distributed
systems, accomplished by adding additional processing units or nodes to the network as needed.
• Concurrency: Distributed system components run simultaneously. They’re also characterized by the lack
of a “global clock,” when tasks occur out of sequence and at different rates.
• Availability/fault tolerance: If one node fails, the remaining nodes can continue to operate without
disrupting the overall computation.
• Transparency: An external programmer or end user sees a distributed system as a single computational
unit rather than as its underlying parts, allowing users to interact with a single logical device rather than being
concerned with the system’s architecture.
• Heterogeneity: In most distributed systems, the nodes and components are often asynchronous, with
different hardware, middleware, software and operating systems. This allows the distributed systems to be extended
with the addition of new components.
• Replication: Distributed systems enable shared information and messaging, ensuring consistency
between redundant resources, such as software or hardware components, improving fault tolerance, reliability and
accessibility.

Distributed Tracing
Distributed tracing is a method for monitoring applications — typically those built on a micro-services architecture
— which are commonly deployed on distributed systems.

Distributed Objects
The distributed object paradigm
It provides abstractions beyond those of the message-passing model. In object-oriented programming, objects are
used to represent an entity significant to an application.
Each object encapsulates:
the state or data of the entity: in Java, such data is contained in the instance variables of each object;
the operations of the entity, through which the state of the entity can be accessed or updated.

Local Objects vs. Distributed Objects
Local objects are those whose methods can only be invoked by a local process, a process that runs on the same
computer on which the object exists.
A distributed object is one whose methods can be invoked by a remote process, a process running on a
computer connected via a network to the computer on which the object exists.

The Distributed Object Paradigm
In a distributed object paradigm, network resources are represented by distributed objects.
To request service from a network resource, a process invokes one of its operations or methods, passing data as
parameters to the method.
The method is executed on the remote host, and the response is sent back to the requesting process as a return
value.
The message-passing paradigm is data-oriented, while the distributed-object paradigm is action-oriented: the focus is on the invocation of the operations, while the data passed takes on a secondary role.
Although less intuitive to human beings, the distributed-object paradigm is more natural to object-oriented software development.

Distributed Objects
A distributed object is provided, or exported, by a process called the object server. A facility, here called an object
registry, must be present in the system architecture for the distributed object to be registered.

Distributed Object Systems/Protocols
The distributed object paradigm has been widely adopted in distributed applications, for which a large number of
mechanisms based on the paradigm are available. Among the most well-known of such mechanisms are:
Java Remote Method Invocation (RMI),
the Common Object Request Broker Architecture (CORBA) systems,
the Distributed Component Object Model (DCOM),
mechanisms that support the Simple Object Access Protocol (SOAP).

Remote Procedure Call & Remote Method Invocation
Remote Procedure Calls (RPC)
Remote Method Invocation has its origin in a paradigm called Remote Procedure Call
The remote procedure call model:
A procedure call is made by one process to another, with data passed as arguments.
Upon receiving a call:
1. the actions encoded in the procedure are executed
2. the caller is notified of the completion of the call and
3. a return value, if any, is transmitted from the callee to the caller
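The three steps of the call model can be mimicked locally: the caller marshals the procedure name and arguments into a message, the callee executes the named procedure, and the return value travels back. This is a toy simulation of the idea, not a real RPC framework:

```python
import json

# The callee's table of registered procedures (illustrative).
PROCEDURES = {"add": lambda a, b: a + b}

def handle_request(request_bytes):
    # Callee side: decode the message, execute the encoded procedure
    # (step 1), and transmit the return value back (steps 2 and 3).
    request = json.loads(request_bytes)
    result = PROCEDURES[request["procedure"]](*request["args"])
    return json.dumps({"status": "completed", "return": result})

def call(procedure, *args):
    # Caller side: marshal the call into a message, "send" it,
    # then unmarshal the reply into an ordinary return value.
    request_bytes = json.dumps({"procedure": procedure, "args": args})
    reply = json.loads(handle_request(request_bytes))
    return reply["return"]
```

In a real RPC system the two json.dumps/json.loads boundaries would be a network hop, and the marshalling is generated for the programmer so that call("add", 2, 3) looks just like a local procedure call.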

Local Procedure Call and Remote Procedure Call
Remote Procedure Calls (RPC)
Since its introduction in the early 1980s, the Remote Procedure Call model has been widely in use in network
applications.
There are two prevalent APIs for this paradigm:
the Open Network Computing (ONC) Remote Procedure Call, which evolved from the RPC API that originated from Sun Microsystems in the early 1980s;
the other well-known API is the Open Group Distributed Computing Environment (DCE) RPC.

Q. Define Malware Analysis
Malware analysis is the process of determining the purpose and functionality of a piece of malware. This process
will reveal what type of harmful program has infected your network, the damage it’s capable of causing, and most
importantly how to remove it.
Q. List and explain Types of Malwares
Virus: Viruses are pieces of malware that require human intervention to propagate to other machines.
Worm: Unlike Viruses, Worms do not need the help of humans to move to other machines. They can
spread easily and can infect a high number of machines in a short amount of time.

Trojan: These appear to be normal programs that have a legitimate function, like a game or a utility program. But underneath the innocent-looking user interface, a Trojan performs malicious tasks without the user being aware.
Spyware: Spyware is software that gathers personal or confidential information from user systems
without their knowledge.
Keylogger: This is a special type of spyware. It is specialized in recording the keystrokes made by the
user.
Ransomware: Ransomware is a form of malware that encrypts a victim's files. The attacker then demands a ransom from the victim, promising to restore access to the data upon payment.

Remote Method Invocation
Remote Method Invocation (RMI) is an object-oriented implementation of the Remote Procedure Call model. It is
an API for Java programs only. Using RMI, an object server exports a remote object and registers it with a directory
service.
Syntactically:
1. A remote object is declared with a remote interface, an extension of the Java interface.
2. The remote interface is implemented by the object server.
3. An object client accesses the object by invoking the remote methods associated with the objects using syntax
provided for remote method invocations.

The Java RMI Architecture


Object Registry
The RMI API allows a number of directory services to be used for registering a distributed object.
A simple directory service called the RMI registry, rmiregistry, is provided with the Java Software
Development Kit (SDK).
The RMI Registry is a service whose server, when active, runs on the object server's host machine, by
convention and by default on TCP port 1099.

The Interaction between the Stub and the Skeleton


A time-event diagram describing the interaction between the stub and the skeleton:
The API for the Java RMI
The Remote Interface
The Server-side Software
The Remote Interface Implementation
Stub and Skeleton Generations
The Object Server
The Client-side Software

The Remote Interface


A Java interface serves as a template for classes:
it contains declarations or signatures of methods whose implementations are to be supplied by classes that
implement the interface.
A Java remote interface is an interface that extends the java.rmi.Remote interface, which allows the interface to be
implemented using RMI syntax.
Other than the Remote extension and the RemoteException that must be specified with each method signature, a
remote interface has the same syntax as a regular or local Java interface.

A sample remote interface


The java.rmi.RemoteException must be listed in the throws clause of each method signature.
This exception is raised when errors occur during the processing of a remote method call, and the exception is
required to be caught in the method caller’s program.
Causes of such exceptions include exceptions that may occur during inter-process communications, such as
access failures and connection failures, as well as problems unique to remote method invocations, including errors
resulting from the object, the stub, or the skeleton not being found.
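The sample interface itself is not reproduced above, so the following is a minimal sketch of what such a remote interface might look like (the names SomeInterface and method1 are assumptions for illustration, not part of the original):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// A remote interface extends java.rmi.Remote, and every method
// signature must list java.rmi.RemoteException in its throws clause.
interface SomeInterface extends Remote {
    // Hypothetical remote method; any remote method follows this pattern.
    String method1() throws RemoteException;
}

public class Main {
    public static void main(String[] args) {
        // Reflection check: the interface is indeed a Remote interface.
        System.out.println(Remote.class.isAssignableFrom(SomeInterface.class)); // true
    }
}
```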

The Server-side Software
An object server is an object that provides the methods of and the interface to a distributed object.
Each object server must
implement each of the remote methods specified in the interface,
register an object which contains the implementation with a directory service.
It is recommended that the two parts be provided as separate classes.

The Remote Interface Implementation


A class which implements the remote interface should be provided.
The syntax is similar to a class that implements a local interface.
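As a sketch (again using the hypothetical SomeInterface from earlier, an assumed name), the implementing class might look like this; extending UnicastRemoteObject is the conventional way to make the object remotely invocable:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface (an assumption, not from the original text).
interface SomeInterface extends Remote {
    String method1() throws RemoteException;
}

// The implementation class provides a body for each remote method.
// Extending UnicastRemoteObject means the constructor exports the object
// so that it can accept incoming remote calls.
class SomeImpl extends UnicastRemoteObject implements SomeInterface {
    SomeImpl() throws RemoteException {
        super(); // exports this object
    }

    // Apart from the RemoteException, this is ordinary local Java code.
    public String method1() throws RemoteException {
        return "Hello from the object server!";
    }
}

public class Main {
    public static void main(String[] args) {
        // Reflection check only, so no socket is opened here.
        System.out.println(SomeInterface.class.isAssignableFrom(SomeImpl.class)); // true
    }
}
```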

UML diagram for the SomeImpl class


Stub and Skeleton Generations
In RMI, each distributed object requires two proxies, one for the object server and one for the object client, known
as the object's skeleton and stub, respectively.
These proxies are generated from the implementation of a remote interface using a tool provided with the Java
SDK:
the RMI compiler rmic.
o rmic <class name of the remote interface implementation>
For example:
o rmic SomeImpl
As a result of the compilation, two proxy files will be generated, each prefixed with the implementation class
name:
SomeImpl_Skel.class
SomeImpl_Stub.class.
The stub file for the object
The stub file for the object, as well as the remote interface file, must be shared with each object client – these files
are required for the client program to compile.
A copy of each file may be provided to the object client by hand.
In addition, the Java RMI has a feature called “stub downloading” which allows a stub file to be obtained by a
client dynamically.

The Object Server


The object server class is a class whose code instantiates and exports an object of the remote interface
implementation.
The Naming class provides methods for storing and obtaining references from the registry.
o In particular, the rebind method allows an object reference to be stored in the registry with a URL in the form of:
rmi://<host name>:<port number>/<reference name>
o The rebind method will overwrite any reference in the registry bound with the given reference name.
o If the overwriting is not desirable, there is also a bind method.
o The host name should be the name of the server, or simply “localhost”.
o The reference name is a name of your choice, and should be unique in the registry.
When an object server is executed, the exporting of the distributed object causes the server process to begin to
listen and wait for clients to connect and request the service of the object.
An RMI object server is a concurrent server: each request from an object client is serviced using a separate
thread of the server.
Note that if a client process invokes multiple remote method calls, these calls will be executed concurrently
unless provisions are made in the client process to synchronize the calls.
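Putting the pieces together, a minimal object server (with a client in the same JVM, purely for illustration) might look like the sketch below. This assumes TCP port 1099 is free on the local machine; the names SomeInterface, SomeImpl, and SomeName are invented:

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface and implementation.
interface SomeInterface extends Remote {
    String method1() throws RemoteException;
}

class SomeImpl extends UnicastRemoteObject implements SomeInterface {
    SomeImpl() throws RemoteException { super(); }
    public String method1() throws RemoteException { return "Hello!"; }
}

public class Main {
    private static String result; // cached so the round trip runs only once

    public static String runServerAndClient() {
        if (result != null) return result;
        try {
            // Server side: start a registry in-process on the default port
            // and bind the exported object under a URL-style name.
            LocateRegistry.createRegistry(1099);
            SomeImpl obj = new SomeImpl(); // exporting happens in the constructor
            Naming.rebind("rmi://localhost:1099/SomeName", obj);

            // Client side: look up the reference and cast it to the
            // INTERFACE type (never to the implementation class).
            SomeInterface h = (SomeInterface) Naming.lookup("rmi://localhost:1099/SomeName");
            result = h.method1(); // remote call with local-method syntax

            UnicastRemoteObject.unexportObject(obj, true); // let the JVM exit
        } catch (Exception e) {
            result = "error: " + e;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(runServerAndClient());
    }
}
```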

The RMI Registry


A server exports an object by registering it under a symbolic name with a server known as the RMI registry.
A server, called the RMI Registry, is required to run on the host of the server which exports remote objects.
The RMI Registry is a server located at TCP port 1099 by default.
It can be invoked dynamically in the server class:

import java.rmi.registry.LocateRegistry;

LocateRegistry.createRegistry ( 1099 );…

Alternatively, an RMI registry can be activated by hand using the rmiregistry utility :
rmiregistry <port number>
where the port number is a TCP port number.
If no port number is specified, port number 1099 is assumed.

The Client-side Software


The program for the client class is like any other Java class.
The syntax needed for RMI involves
o locating the RMI Registry in the server host,
and
o looking up the remote reference for the server object; the reference can then be cast to the remote interface class
and the remote methods invoked.

Looking up the remote object


The lookup method of the Naming class is used to retrieve the object reference, if any, previously stored in the
registry by the object server.
Note that the retrieved reference must be cast to the remote interface (not its implementation) class.

Invoking the Remote Method


The remote interface reference can be used to invoke any of the methods in the remote interface, as in the
example:
String message = h.method1();
System.out.println(message);
Note that the syntax for the invocation of the remote methods is the same as for local methods.
It is a common mistake to cast the object retrieved from the registry to the interface implementation class or the
server object class.
Instead it should be cast as the interface class.

UML Component Diagrams


UML Component diagrams model the physical aspects of object-oriented systems. They are used for
visualizing, specifying, and documenting component-based systems, and also for constructing executable systems
through forward and reverse engineering.

Component Diagram at a Glance

A component diagram breaks down the actual system under development into various high levels of functionality.
Each component is responsible for one clear aim within the entire system and only interacts with other essential
elements on a need-to-know basis.

The example above shows the internal components of a larger component:


The data (account and inspection ID) flows into the component via the port on
the right-hand side and is converted into a format the internal components can use.
The data then passes to and through several other components via various
connections before it is output at the ports on the left.
It is important to note that the internal components are surrounded by a large
'box', which can be the overall system itself (in which case there would not be a component symbol in the top right
corner) or a subsystem or component of the overall system (in this case the 'box' is a component itself).

Basic Concepts of Component Diagram


A component represents a modular part of a system that encapsulates its contents and whose manifestation is
replaceable within its environment.
A high-level, abstracted view of a component in UML 2 can be modeled as:
1. A rectangle with the component's name
2. A rectangle with the component icon
3. A rectangle with the stereotype text and/or icon

A high-level, abstracted view of a component

Interface
The example below shows two types of component interfaces:

Provided Interface symbols with a complete circle at their end represent an interface that the component provides -
this "lollipop" symbol is shorthand for a realization relationship of an interface classifier.

Required Interface symbols with only a half circle at their end (a.k.a. sockets) represent an interface that the
component requires (in both cases, the interface's name is placed near the interface symbol itself).

Component Diagram Example - Using Interface (Order System)

Subsystems
The subsystem classifier is a specialized version of a component classifier. Because of this, the subsystem notation
element inherits all the same rules as the component notation element.

A Sub-system

Port
Ports are represented using a square along the edge of the system or a component. A port is often used to help
expose required and provided interfaces of a component.

A Port
Relationships
Graphically, a component diagram is a collection of vertices and arcs. It commonly contains components, interfaces,
and dependency, aggregation, constraint, generalization, association, and realization relationships. It may also
contain notes and constraints.

Modelling Source Code


Either by forward or reverse engineering, identify the set of source code files of
interest and model them as components stereotyped as files.

For larger systems, use packages to show groups of source code files.
Consider exposing a tagged value indicating such information as the version
number of the source code file, its author, and the date it was last changed. Use tools to manage the value of this tag.
Model the compilation dependencies among these files using dependencies.
Again, use tools to help generate and manage these dependencies.

Component Diagram Example - C++ Code with versioning


Modelling an Executable Release
Identify the set of components you'd like to model. Typically, this will involve
some or all the components that live on one node, or the distribution of these sets of components across all the nodes
in the system.
Consider the stereotype of each component in this set. For most systems, you'll
find a small number of different kinds of components (such as executables, libraries, tables, files, and documents).
You can use the UML's extensibility mechanisms to provide visual cues (clues) for these stereotypes.
For each component in this set, consider its relationship to its neighbors. Most
often, this will involve interfaces that are exported (realized) by certain components and then imported (used) by
others.

Modelling a Physical Database


Identify the classes in your model that represent your logical database schema.
Select a strategy for mapping these classes to tables. You will also want to
consider the physical distribution of your databases.

To visualize, specify, construct, and document your mapping, create a
component diagram that contains components stereotyped as tables.
Where possible, use tools to help you transform your logical design into a
physical design.

Distributed Transactions
A distributed transaction includes one or more statements that, individually or as a group, update data on two or
more distinct nodes of a distributed database.

Two Types of Permissible Operations in Distributed Transactions:


DML and DDL Transactions
Transaction Control Statements

DML and DDL Transactions


The following are the DML and DDL operations supported in a distributed transaction:
CREATE TABLE AS SELECT
DELETE
INSERT (default and direct load)
LOCK TABLE
SELECT
SELECT FOR UPDATE

You can execute DML and DDL statements in parallel, and INSERT direct load statements serially, but note the
following restrictions:
All remote operations must be SELECT statements.
These statements must not be clauses in another distributed transaction.
If the table referenced in the table_expression_clause of an INSERT, UPDATE,
or DELETE statement is remote, then execution is serial rather than parallel.
You cannot perform remote operations after issuing parallel DML/DDL or
direct load INSERT.
If the transaction begins using XA or OCI, it executes serially.
No loopback operations can be performed on the transaction originating the
parallel operation. For example, you cannot reference a remote object that is actually a synonym for a local object.
If you perform a distributed operation other than a SELECT in the transaction,
no DML is parallelized.

Transaction Control Statements
The following are the supported transaction control statements:
COMMIT
ROLLBACK
SAVEPOINT
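The semantics of the three control statements can be sketched with a toy in-memory model. This illustrates behaviour only, not how Oracle implements them; all names here are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of COMMIT / ROLLBACK / SAVEPOINT semantics on an
// in-memory list of changes. This is not a database implementation.
public class Main {
    static List<String> committed = new ArrayList<>(); // durable state
    static List<String> pending = new ArrayList<>();   // current transaction

    static void change(String s) { pending.add(s); }

    // SAVEPOINT: remember how much work the transaction has done so far.
    static int savepoint() { return pending.size(); }

    // ROLLBACK TO SAVEPOINT: undo only the work done after the savepoint.
    static void rollbackTo(int sp) { pending.subList(sp, pending.size()).clear(); }

    // ROLLBACK: discard the whole transaction.
    static void rollback() { pending.clear(); }

    // COMMIT: make all pending changes durable.
    static void commit() { committed.addAll(pending); pending.clear(); }

    public static void main(String[] args) {
        change("INSERT row 1");
        int sp = savepoint();
        change("UPDATE row 2");
        rollbackTo(sp);         // the UPDATE is undone, the INSERT survives
        commit();
        System.out.println(committed);
    }
}
```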

Node Roles

Clients
A node acts as a client when it references information from a database on another node. The referenced node is a
database server. In Figure 2, the node sales is a client of the nodes that host the warehouse and finance databases.

Database Servers
A database server is a node that hosts a database from which a client requests data.
In Figure 2, an application at the sales node initiates a distributed transaction that accesses data from the warehouse
and finance nodes.

Local Coordinators
A node that must reference data on other nodes to complete its part in the distributed transaction is called a local
coordinator. In Figure 2, sales is a local coordinator because it coordinates the nodes it directly references:
warehouse and finance.

A local coordinator is responsible for coordinating the transaction among the nodes it communicates directly with by:
Receiving and relaying transaction status information to and from those nodes

Passing queries to those nodes
Receiving queries from those nodes and passing them on to other nodes
Returning the results of queries to the nodes that initiated them

Global Coordinator
The node where the distributed transaction originates is called the global coordinator. The database application
issuing the distributed transaction is directly connected to the node acting as the global coordinator.
The global coordinator performs the following operations during a distributed transaction:
Sends all of the distributed transaction SQL statements, remote procedure calls,
and so forth to the directly referenced nodes, thus forming the session tree
Instructs all directly referenced nodes other than the commit point site to
prepare the transaction
Instructs the commit point site to initiate the global commit of the transaction if
all nodes prepare successfully
Instructs all nodes to initiate a global rollback of the transaction if there is an
abort response

Commit Point Site


The job of the commit point site is to initiate a commit or roll back operation as instructed by the global coordinator.

The commit point site is distinct from all other nodes involved in a distributed transaction in these ways:
The commit point site never enters the prepared state. Consequently, if the
commit point site stores the most critical data, this data never remains in-doubt, even if a failure occurs.
The commit point site commits before the other nodes involved in the
transaction. In effect, the outcome of a distributed transaction at the commit point site determines whether the
transaction at all nodes is committed or rolled back:

How a Distributed Transaction Commits


A distributed transaction is considered committed after all non-commit-point sites are prepared, and the transaction
has been actually committed at the commit point site. The redo log at the commit point site is updated as soon as the
distributed transaction is committed at this node.

Commit Point Strength


Every database server must be assigned a commit point strength. If a database server is referenced in a distributed
transaction, the value of its commit point strength determines which role it plays in the two-phase commit.
The commit point site, which is determined at the beginning of the prepare phase, is selected only from the nodes
participating in the transaction. The following sequence of events occurs:
1. Of the nodes directly referenced by the global coordinator, the database selects
the node with the highest commit point strength as the commit point site.
2. The initially-selected node determines if any of the nodes from which it has to
obtain information for this transaction has a higher commit point strength.
3. Either the node with the highest commit point strength directly referenced in the
transaction or one of its servers with a higher commit point strength becomes the commit point site.

The following conditions apply when determining the commit point site:
A read-only node cannot be the commit point site.
If multiple nodes directly referenced by the global coordinator have the same
commit point strength, then the database designates one of these as the commit point site.
If a distributed transaction ends with a rollback, then the prepare and commit
phases are not needed.
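The selection rule above can be sketched as a small function. This is a simplification: it assumes read-only nodes have already been excluded and breaks ties by taking the first candidate seen; the node names and strength values are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of commit point site selection: among the candidate nodes,
// pick the one with the highest commit point strength.
public class Main {
    static String chooseCommitPointSite(Map<String, Integer> strengths) {
        String site = null;
        int best = Integer.MIN_VALUE;
        for (Map.Entry<String, Integer> e : strengths.entrySet()) {
            // Strictly greater, so ties keep the first candidate seen.
            if (e.getValue() > best) { best = e.getValue(); site = e.getKey(); }
        }
        return site;
    }

    public static void main(String[] args) {
        Map<String, Integer> nodes = new LinkedHashMap<>();
        nodes.put("sales", 45);
        nodes.put("warehouse", 120);
        nodes.put("finance", 45);
        System.out.println(chooseCommitPointSite(nodes)); // warehouse
    }
}
```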

Two-Phase Commit Mechanism


Unlike a transaction on a local database, a distributed transaction involves altering data on multiple databases.
The database ensures the integrity of data in a distributed transaction using the two-phase commit mechanism.

In the prepare phase, the initiating node in the transaction asks the other participating nodes to promise to commit
or roll back the transaction.

During the commit phase, the initiating node asks all participating nodes to commit the transaction.

Prepare Phase
The first phase in committing a distributed transaction is the prepare phase. In this phase, the database does not
actually commit or roll back the transaction. Instead, all nodes referenced in a distributed transaction (except the
commit point site, described in the "Commit Point Site") are told to prepare to commit.
By preparing, a node:
Records information in the redo logs so that it can subsequently either commit
or roll back the transaction, regardless of intervening failures
Places a distributed lock on modified tables, which prevents reads

Read-Only Response
When a node is asked to prepare, and the SQL statements affecting the database do not change any data on the node,
the node responds with a read-only message. The message indicates that the node will not participate in the commit
phase.
There are three cases in which all or part of a distributed transaction is read-only:

Steps in the Prepare Phase
To complete the prepare phase, each node excluding the commit point site performs the following steps:
1. The node requests that its descendants, that is, the nodes subsequently
referenced, prepare to commit.
2. The node checks to see whether the transaction changes data on itself or its
descendants. If there is no change to the data, then the node skips the remaining steps and returns a read-only
response.
3. The node allocates the resources it needs to commit the transaction if data is changed.
4. The node saves redo records corresponding to changes made by the transaction
to its redo log.
5. The node guarantees that locks held for the transaction are able to survive a
failure.
6. The node responds to the initiating node with a prepared response or, if its
attempt or the attempt of one of its descendants to prepare was unsuccessful, with an abort response.

Commit Phase
The second phase in committing a distributed transaction is the commit phase. Before this phase occurs, all nodes
other than the commit point site referenced in the distributed transaction have guaranteed that they are prepared, that
is, they have the necessary resources to commit the transaction.

Steps in the Commit Phase


The commit phase consists of the following steps:
1. The global coordinator instructs the commit point site to commit.
2. The commit point site commits.
3. The commit point site informs the global coordinator that it has committed.
4. The global and local coordinators send a message to all nodes instructing them to
commit the transaction.

5. At each node, the database commits the local portion of the distributed
transaction and releases locks.
6. At each node, the database records an additional redo entry in the local redo log,
indicating that the transaction has committed.

7. The participating nodes notify the global coordinator that they have committed.
When the commit phase is complete, the data on all nodes of the distributed system is consistent.
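The coordinator's decision logic across the two phases above can be sketched as follows. This is a toy model, not Oracle internals; the vote strings are invented:

```java
import java.util.Arrays;
import java.util.List;

// Toy two-phase commit decision: ask every non-commit-point node to
// prepare; if all vote "prepared" (or "read-only"), instruct the commit
// point site to commit, otherwise instruct a global rollback.
public class Main {
    // Each element is one node's phase-1 vote:
    // "prepared", "read-only", or "abort".
    static String decide(List<String> votes) {
        for (String v : votes) {
            if (v.equals("abort")) return "rollback"; // any abort forces rollback
        }
        return "commit"; // all prepared or read-only: global commit proceeds
    }

    public static void main(String[] args) {
        System.out.println(decide(Arrays.asList("prepared", "read-only", "prepared"))); // commit
        System.out.println(decide(Arrays.asList("prepared", "abort")));                 // rollback
    }
}
```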
Guaranteeing Global Database Consistency
Each committed transaction has an associated system change number (SCN) to uniquely identify the changes made
by the SQL statements within that transaction. The SCN functions as an internal timestamp that uniquely identifies a
committed version of the database.
In a distributed system, the SCNs of communicating nodes are coordinated when all of the following actions occur:
A connection occurs using the path described by one or more database links
A distributed SQL statement executes
A distributed transaction commits

Forget Phase
After the participating nodes notify the commit point site that they have committed, the commit point site can forget
about the transaction. The following steps occur:
1. After receiving notice from the global coordinator that all nodes have committed,
the commit point site erases status information about this transaction.
2. The commit point site informs the global coordinator that it has erased the status
information.
3. The global coordinator erases its own information about the transaction.
In-Doubt Transactions
The two-phase commit mechanism ensures that all nodes either commit or perform a rollback together. What
happens if any of the three phases fails because of a system or network error? The transaction becomes in-doubt.
Distributed transactions can become in-doubt in the following ways:
A server machine running Oracle Database software crashes
A network connection between two or more Oracle Databases involved in
distributed processing is disconnected
An unhandled software error occurs

Automatic Resolution of In-Doubt Transactions


In the majority of cases, the database resolves the in-doubt transaction automatically. Assume that there are two
nodes, local and remote, in the following scenarios. The local node is the commit point site.

Relevance of System Change Numbers for In-Doubt Transactions


A system change number (SCN) is an internal timestamp for a committed version of the database. The Oracle
Database server uses the SCN clock value to guarantee transaction consistency.

Flat & Nested Distributed Transactions


A transaction is a series of object operations that must be done in an ACID-compliant manner. ACID connotes:
Atomicity – The transaction is completed entirely or not at all.
Consistency – The transaction moves the system from one consistent state to
another.
Isolation – It is carried out separately from other transactions.

Durability – Once completed, it is long lasting.

Transactions Commands:
Begin – initiate a new transaction.
Commit – End a transaction and the changes made during the transaction are
saved. Also, it allows other transactions to see the modifications you’ve made.
Abort – End a transaction and all changes made during the transaction will be undone.

Roles for Running a Transaction Successfully :


Client – The transactions are issued by the clients.
Coordinator – Controls the execution of the entire transaction
(handles Begin, Commit and Abort).
Server – Every component that accesses or modifies a resource is subject to
transaction control.

Flat & Nested Distributed Transactions


If a client transaction calls actions on multiple servers, it is said to be distributed. Distributed transactions can be
structured in two different ways:
1. Flat transactions
2. Nested transactions

Flat Transactions: A flat transaction has a single initiating point (Begin) and a single end point (Commit or Abort).

Limitations of a Flat Transaction:


All work is lost in the event of a crash.
Only one DBMS may be used at a time.
No partial rollback is possible

Concurrency
Concurrency means multiple computations are happening at the same time. Concurrency is everywhere in modern
programming, whether we like it or not:
Multiple computers in a network
Multiple applications running on one computer
Multiple processors in a computer (today, often multiple processor cores on a
single chip)

In fact, concurrency is essential in modern programming:


Web sites must handle multiple simultaneous users.
Mobile apps need to do some of their processing on servers (“in the cloud”).

Two Models for Concurrent Programming


There are two common models for concurrent programming:
Shared memory and
Message passing.

Shared memory
In the shared memory model of concurrency, concurrent modules interact by reading and writing shared objects in
memory. Examples of the shared-memory model:
A and B might be two processors (or processor cores) in the same computer,
sharing the same physical memory.
A and B might be two programs running on the same computer, sharing a
common filesystem with files they can read and write.
A and B might be two threads in the same Java program (we will explain what a
thread is below), sharing the same Java objects.

Message Passing
In the message-passing model, concurrent modules interact by sending messages to each other through a
communication channel. Modules send off messages, and incoming messages to each module are queued up for
handling. Examples include:
A and B might be two computers in a network, communicating by network
connections.
A and B might be a web browser and a web server – A opens a connection to B,
asks for a web page, and B sends the web page data back to A.
A and B might be an instant messaging client and server.
A and B might be two programs running on the same computer whose input and
output have been connected by a pipe, like ls | grep typed into a command prompt.
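The message-passing model can be sketched in Java with a pair of blocking queues acting as the communication channel; incoming messages queue up until the receiving module handles them. The module and message names are invented:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal message-passing sketch: module A sends a request to module B
// over one queue (the channel), and B replies on a second queue.
public class Main {
    public static String roundTrip() {
        BlockingQueue<String> toB = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> toA = new ArrayBlockingQueue<>(10);

        Thread b = new Thread(() -> {
            try {
                String request = toB.take();    // B handles its queued message
                toA.put("reply to " + request); // and sends a message back
            } catch (InterruptedException ignored) { }
        });
        b.start();

        try {
            toB.put("GET /index.html"); // A sends a request to B
            String reply = toA.take();  // A waits for the reply
            b.join();
            return reply;
        } catch (InterruptedException e) {
            return "interrupted";
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // reply to GET /index.html
    }
}
```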

Processes, Threads, Time-slicing


The message-passing and shared-memory models are about how concurrent
modules communicate. The concurrent modules themselves come in two different kinds: processes and threads.

Process.
A process is an instance of a running program that is isolated from other
processes on the same machine. In particular, it has its own private section of the machine’s memory.
The process abstraction is a virtual computer. It makes the program feel like it
has the entire machine to itself – like a fresh computer has been created, with fresh memory, just to run that program.

Thread
A thread is a locus of control inside a running program. Think of it as a place in the program that is being run, plus
the stack of method calls that led to that place and through which it will be necessary to return.
Just as a process represents a virtual computer, the thread abstraction represents
a virtual processor.

Threads are automatically ready for shared memory, because threads share all
the memory in the process.

Time Slicing
When there are more threads than processors, concurrency is simulated by time
slicing, which means that the processor switches between threads.

Interleaving
Here is one thing that can happen. Suppose two cash machines, A and B, are both working on a deposit at the same
time. Here is how the deposit() step typically breaks down into low-level processor instructions:
get balance (balance=0)
add 1
write back the result (balance=1)
If A and B both read the balance before either writes back, both write back balance=1 and one of the two deposits is lost.
Race Condition
A race condition means that the correctness of the program (the satisfaction of postconditions and invariants)
depends on the relative timing of events in concurrent computations A and B. When this happens, we say “A is in a
race with B.”
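The deposit race can be sketched in Java. Making deposit() synchronized forces the three steps to execute as a unit, so the interleaving that loses an update cannot occur and the final balance is deterministic (the method and thread structure here are invented for illustration):

```java
// Two threads each run the three-step deposit sequence many times.
// With synchronized, only one thread at a time can be between
// "get balance" and "write back the result".
public class Main {
    static int balance = 0;

    static synchronized void deposit() {
        int b = balance; // get balance
        b = b + 1;       // add 1
        balance = b;     // write back the result
    }

    public static int runDeposits(int perThread) {
        balance = 0;
        Runnable work = () -> { for (int i = 0; i < perThread; i++) deposit(); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException ignored) { }
        return balance;
    }

    public static void main(String[] args) {
        System.out.println(runDeposits(100000)); // 200000
    }
}
```

Removing the synchronized keyword re-introduces the race: the two threads' read/add/write steps can interleave, and the final balance becomes timing-dependent.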

Reordering
The race condition on the bank account balance can be explained in terms of different interleavings of sequential
operations on different processors.

Service-Oriented Architecture (SOA)


Service-Oriented Architecture can be defined as "services" that provide a platform by which disparate systems can
communicate with each other.

A Service

Services represent building blocks that allow users to organize information in ways that are familiar to them.

A service is commonly characterized by these four properties:


1. It logically represents a business activity with a specified outcome.
2. It is self-contained
3. It is a black box for its consumers
4. It may consist of other underlying services

The 6 Defining Concepts of SOA


In October 2009, a manifesto about service-oriented architecture was published. This manifesto states that there are
six core values of SOA:
1. Business value is more important than technical strategy.
2. Strategic goals are more valuable than project-specific benefits.
3. Intrinsic interoperability is greater than custom integration.
4. Shared services over specific-purpose implementations.
5. Flexibility is given more importance than optimization.
6. Evolutionary refinement is more important than pursuit of initial perfection.
Q. What are the ten commandments for computer ethics?
Answer
i. Thou shalt not use a computer to harm other people.
ii. Thou shalt not interfere with other people's computer work.
iii. Thou shalt not snoop around in other people's files.
iv. Thou shalt not use a computer to steal.
v. Thou shalt not use a computer to bear false witness.
vi. Thou shalt not use or copy software for which you have not paid.
vii. Thou shalt not use other people's computer resources without authorization.
viii. Thou shalt not appropriate other people's intellectual output.
ix. Thou shalt think about the social consequences of the program you write.
x. Thou shalt use a computer in ways that show consideration and respect.

Q. Explain the three levels of computer ethics.


Answer
First level: The basic level, where computer ethics tries to sensitize people
to the fact that computer technology has social and ethical consequences. Newspapers, TV news programs, and
magazines have highlighted the topic of computer ethics by reporting on events relating to computer viruses,
software ownership lawsuits, computer-aided bank robbery, computer malfunction, etc.
Second level: Someone who takes an interest in computer ethics collects
cases and examples, clarifies them, looks for similarities and differences, reads related works, and attends relevant
events in order to make preliminary assessments after comparing them.
Third level: Referred to as 'theoretical' computer ethics, this level applies scholarly
theories to computer ethics cases and concepts in order to deepen the understanding of issues.

Cloud Computing
Cloud Computing is the delivery of computing services – such as servers, databases, storage and networking –
over the Internet. These services are usually offered by so-called Cloud Providers, which typically charge based on
usage.

Capabilities of Cloud Computing
Generally, these are a few of the capabilities of Cloud Computing:
Create new apps and services
Store, back up and recover data
Deliver software
Analyse data for pattern recognition
Streaming.

Besides these capabilities, Cloud Computing also offers a number of benefits.
Cost – using cloud services lowers the amount that organizations need to spend on
buying hardware and software tools to set up infrastructure for their needs.
Speed – when an organization needs more resources, provisioning additional
resources in cloud services can be done in minutes.
Scaling – the ability to scale elastically on demand is the main and most common use case
of cloud services: processing power, storage, bandwidth, or whatever the demand is, in
less than a minute.
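Elastic scaling of this kind is usually driven by simple threshold rules on observed load. The sketch below is a hypothetical autoscaling policy, not tied to any real provider; the thresholds and instance limits are illustrative assumptions:

```python
def scale_decision(cpu_utilization, current_instances,
                   scale_up_at=0.80, scale_down_at=0.30,
                   min_instances=1, max_instances=10):
    """Return the new instance count for a simple threshold-based autoscaler."""
    if cpu_utilization > scale_up_at and current_instances < max_instances:
        return current_instances + 1   # demand high: provision one more instance
    if cpu_utilization < scale_down_at and current_instances > min_instances:
        return current_instances - 1   # demand low: release one instance
    return current_instances           # within the comfort band: no change

# Example: utilization at 90% with 3 running instances triggers a scale-up.
print(scale_decision(0.90, 3))  # → 4
```

Real providers apply the same idea with richer metrics and cool-down periods, but the decision at its core is a comparison of measured demand against configured thresholds.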

Categories of Cloud Computing models


Software as a Service (SaaS)
The providers of this model of Cloud Computing usually deliver a web-based application in which
the users of the service operate.

Platform as a Service (PaaS)


Platform as a Service is another Cloud Computing model in which the third-party provider provides the necessary
hardware and software tools – usually required for development or research – over the Internet.

Infrastructure as a Service (IaaS)


According to most surveys, IaaS is the most common cloud-based model
offered by service providers.

Mobile Cloud Computing (MCC)


In the consumer space, mobility players such as Apple, Google, and Microsoft all offer variants of cloud-based apps
and private storage.

Advantages of Mobile & Cloud Computing


Mobile Cloud Computing offers a number of advantages when using cloud services. Some of the
most important ones are listed below:
Flexibility – one of the key advantages of MCC is that cloud
information can be used anywhere; all you need is a mobile device of any kind that is paired or
configured with the organization's cloud platform.
Real-time available data – accessing data in real time is no longer a
challenge while you are out of the office.
No upfront payments – last, but not least, payments. Commonly, cloud
applications require no payment until they are used. It is mostly pay-for-use, which helps grow the
adoption of the model.
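The pay-for-use model above can be illustrated with a toy billing calculation. The resource names and rates here are made-up assumptions, not any provider's actual price list:

```python
# Hypothetical pay-per-use rates: the customer is charged only for what is consumed.
RATES = {
    "compute_hours": 0.05,   # price per instance-hour
    "storage_gb":    0.02,   # price per GB-month stored
    "bandwidth_gb":  0.01,   # price per GB transferred
}

def monthly_bill(usage):
    """Sum usage * rate over each metered resource; unused resources cost nothing."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

# 100 instance-hours and 50 GB of storage, no bandwidth charges this month.
print(monthly_bill({"compute_hours": 100, "storage_gb": 50}))  # → 6.0
```

Note that an empty usage dictionary yields a bill of zero, which is exactly the "no upfront payment" property described above.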

Disadvantages of Mobile & Cloud Computing (MCC)
Wherever there are advantages, there are sure to be disadvantages as well. The following are
some of the most important disadvantages of Mobile and Cloud Computing:
Security – a major concern with Cloud Computing is security and data
integration. When mobile devices are involved, the attention must be twice as high: unprotected information can be
easily sniffed.
Internet connection – the flexibility of MCC, which allows users to
access data from anywhere, requires an Internet connection.
Performance – given the smaller size and lower hardware performance of mobile devices, it is
understandable that performance with MCC will be at a much lower level.

The Top Threats in the usage of Mobile and Cloud Computing.


Data Loss
Using Cloud Computing is much like outsourcing data to the service provider.
This increases the risk of exposing important data, a risk that did not arise in traditional computing.
The following measures can lower the risk:
- Encryption of data in transit;
- Use of access control tools;
- Regular backups.
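The second mitigation, access control, can be sketched as a minimal role-based check. The roles and permission sets below are hypothetical examples; real cloud platforms provide much richer identity and access management:

```python
# Hypothetical role-to-permission mapping for data held in the cloud.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role, action):
    """Grant an action only if the role's permission set contains it (deny by default)."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("viewer", "read"))    # → True
print(is_allowed("viewer", "delete"))  # → False
```

The deny-by-default behaviour (an unknown role gets an empty permission set) is the important design choice: data is never exposed merely because a role was left unconfigured.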

Insecure API
Usually, the communication between a client (in this case, a mobile device
handled by a company employee) and the server (which is somewhere in the cloud) is done through an
Application Programming Interface (API).
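A common defence against insecure APIs is to require every request to carry an authentication credential and to send it only over HTTPS. The sketch below builds (but does not send) such a request using Python's standard library; the URL and token are placeholders, not a real endpoint:

```python
import urllib.request

def build_authenticated_request(url, token):
    """Attach a bearer token so the server can reject unauthenticated callers."""
    return urllib.request.Request(
        url,                                          # HTTPS keeps the token from being sniffed
        headers={"Authorization": "Bearer " + token}, # credential checked on every call
    )

req = build_authenticated_request("https://api.example.com/v1/data", "PLACEHOLDER_TOKEN")
print(req.get_header("Authorization"))  # → Bearer PLACEHOLDER_TOKEN
```

An API that checks this header on every request, instead of trusting the client, removes the easiest attack path between the mobile device and the cloud server.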

