Nonfunctional Requirements


Scalability - distributing a large number of users across servers; workloads are moved to the cloud

Integrating and accommodating future system growth.


System load. Main load measures:
number of users; number of nodes and resources; number of transactions; data volume handled by the DS
Elasticity: the ease with which a system component can be added/removed; the system can easily expand and contract
horizontal: add nodes to the system; vertical: add resources to a single node in the system
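A minimal sketch of horizontal scaling: users are partitioned across server nodes by hashing their IDs, so adding nodes spreads the load. Node names and the user ID are hypothetical.

```python
import hashlib

def assign_node(user_id: str, nodes: list[str]) -> str:
    """Map a user to one of the available nodes by hashing its ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]  # horizontal scaling: add more nodes here
print(assign_node("user-42", nodes))
```

Note that plain modulo hashing reshuffles most users when a node is added; consistent hashing is the usual refinement that limits that movement.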
Openness - easily add new components
An open system is one which conforms to well defined interfaces, thus allowing
flexibility and extensibility.
Open system main features:
Built up from heterogeneous components (not dependent on a single technology)
Interoperability
Portability
Access to shared resources (through published interfaces)
Conform to international standards
International standards organizations (ISO)
Standards initiated by industry consortia (W3C, OMG)
Standards initiated by companies (Sun)
Heterogeneity - not dependent on a single technology
Heterogeneity of a system results from different technologies for:
Programming languages; Networks; HW; OSs; Components; Data management
Fault tolerance and Reliability
Even when components fail, the DS should continue to provide its set of services.
Objectives: detecting the failure and making the failure transparent to the user (hide,
attenuate, tolerate).
Important criteria in the DS architecture.
Partial failure distinguishes a DS from a centralized system: in a DS, the system
continues to operate in the presence of faults. The system management's role is to identify
the faulty components and substitute their services.
Achieving fault tolerance:
Hardware redundancy
Replication
Software recovery
DS algorithm design: the failure of one component must not collapse the algorithm; none of
the DS components knows the system's global state and there is no global clock, so DS
components take their decisions based on local machine state
Basic concepts:
1. Availability - probability that the system is operating correctly at a given moment in
time
2. Reliability - time interval in which the system can run continuously without failure
Key elements: redundancy and replication

enforced in the design stage: limit the number of simultaneous operations on
system-critical components + avoid centralized algorithms
3. Safety (robustness) - even when the DS fails to operate correctly, no damage occurs
4. Maintainability - how easily a failed system can be repaired (yields high availability,
especially if failed components can be automatically detected and repaired)
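Availability and maintainability are related numerically: availability is commonly estimated as MTBF / (MTBF + MTTR), so better maintainability (a lower mean time to repair) directly raises availability. A small sketch with made-up figures:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is operating correctly:
    MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures: a failure every 1000 h on average.
print(f"{availability(1000, 1.0):.4%}")   # fast repairs: high availability
print(f"{availability(1000, 10.0):.4%}")  # slower repairs lower availability
```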
Types of failures:
crash failure; omission failure; timing failure; response failure; arbitrary (Byzantine) failure

Resource sharing
Motivation: cost effectiveness, facilitates workgroup applications, cooperation
Problems: integrity and security of the shared resources
Solution: a resource manager for resource access (authorization, authentication)
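A minimal sketch of such a resource manager: it first authenticates the caller (here via a shared token), then checks an access-control list before granting use of the resource. All user names, tokens, and resource names are hypothetical.

```python
class ResourceManager:
    """Gates access to shared resources: authenticate, then authorize."""

    def __init__(self):
        self._tokens = {"alice": "s3cret"}        # authentication data
        self._acl = {"report.pdf": {"alice"}}     # resource -> allowed users

    def access(self, user: str, token: str, resource: str) -> str:
        if self._tokens.get(user) != token:           # authentication
            raise PermissionError("authentication failed")
        if user not in self._acl.get(resource, set()):  # authorization
            raise PermissionError("not authorized")
        return f"{user} granted access to {resource}"

rm = ResourceManager()
print(rm.access("alice", "s3cret", "report.pdf"))
```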

Security
Model and actors
1. Resource provider (RP) - offers security-critical data
or application resources
2. Hacker (H) - can corrupt, access, replace,
delay/deny access to resources
3. Security Service Provider (SP) - protect critical
resources from attack by providing protection services:
integrity, confidentiality, authorization and access, identity,
authenticity, availability, non repudiation, auditing
4. Security System Beneficiary (SB) - benefits from:
integrity verification (verify the accuracy of data/applications)
confidentiality preservation (assurance that the confidentiality of data is enforced
over time)
authorization/access permission (authorized access permissions to critical data have
been provided)
identity verification (means of verifying the identity of data/application sources)
Cryptography
a fundamental component of any security solution
provides integrity and confidentiality protection
important for mechanisms of identity, authenticity, non-repudiation
uses cryptographic methods with symmetric and asymmetric keys
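As a small illustration of the integrity side, Python's standard library can compute a keyed hash (HMAC) over a message with a symmetric key; the receiver recomputes the tag to detect tampering. The key and message here are made up.

```python
import hashlib
import hmac

key = b"shared-symmetric-key"          # hypothetical secret known to both parties
message = b"transfer 100 to account 7"

# Sender attaches a keyed hash (message authentication code) to the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest())
tampered = hmac.compare_digest(
    tag, hmac.new(key, b"transfer 900 to account 7", hashlib.sha256).hexdigest())
print(ok, tampered)  # True False
```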
Identity and authentication
password-based
physical-token-based
biometrics-based
certificate-based (a block of data containing information that identifies a principal)
Access control
Role based
Firewall based

Domain based
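Role-based access control can be sketched in a few lines: roles map to permission sets, users map to roles, and a check looks the chain up. All role, user, and permission names are hypothetical.

```python
# Role-based access control: permissions attach to roles, not to users.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "viewer": {"read"},
}
USER_ROLES = {"ana": "admin", "bob": "viewer"}

def allowed(user: str, permission: str) -> bool:
    """Check whether the user's role grants the requested permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(allowed("bob", "read"), allowed("bob", "delete"))  # True False
```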

Performance - analysis phase


Speed (responsiveness and throughput), scalability and stability requirements
General application features that may impact performance:
1. Objects and data (general information architecture)
How many objects, their sizes, total amount of data being manipulated
How are the manipulations expected to be performed (DB access, file access, object
storage etc.)
How data is accessed and queried, and how often
=>Resource specification
2. Users
How many simultaneous users
What level of concurrency is expected
3. Transactions
What is a transaction for the application?
Are there multiple types of transactions?
Specify details: nr of objects created/deleted/changed, duration of transactions,
expected transaction amount (trans/sec)
4. Distribution specifics
Distributed application parts may cause performance drawbacks: network
communication is slower than inter-process communication on the same machine,
inter-process communication is slower than component-to-component communication
within the same process
Trade-off: good design favors decoupled components, while good performance favors close
coupling
Need to analyze expected performance gains and drawbacks of distributed
computing
5. Shared resources
Identify all shared resources and the performance costs associated with forcing
unique access to them; where these costs are high, specify an
alternative mechanism allowing the efficient use of shared resources (e.g., journaling file updates)
6. Extra features
The more features, the worse the performance and the more effort needed to improve
it
Minimize extra features in the requirements
Design should be extensible to incorporate extra features instead of including them in
the requirements
Design decisions affecting performance:
1. How long a transaction will be
2. How often data or objects need to be updated
3. Object locations
4. Persistency (object persistency and how persistency is achieved)
5. How data is manipulated

6. How components interact


7. How tightly coupled the subsystems are
8. Responses to errors
9. Retry capabilities
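The last two design decisions (error responses and retry capabilities) are often combined as a retry wrapper with exponential backoff, so transient failures are absorbed without hammering the failing component. The flaky operation and the delay values below are hypothetical (delays shortened for illustration).

```python
import time

def with_retries(operation, attempts: int = 4, base_delay: float = 0.01):
    """Call operation(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                          # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}
def flaky():
    """Simulated operation that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```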
Bottlenecks and performance: check the throughput of each component to determine the bottleneck
client and server processes
network transfer rates (peak and average)
network interface card throughput
Router speed, disk I/O
Middleware/queueing transfer rates
database access, update and transaction rates
Operating system loads
Performance boosting design elements
Queues
Asynchronous communications and activities
Parallelizable activities
Minimized serialization points
Balanced workloads across multiple servers
Redundant servers and automatic switching capabilities
Activities that can be configured at runtime to run in different locations
Short transactions
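Several of the elements above (queues, asynchronous activities, parallelizable work, balanced workloads) combine in the classic worker-pool pattern, sketched here with the standard library; the squaring step is a stand-in for real work.

```python
import queue
import threading

tasks = queue.Queue()          # decouples producers from workers
results = []
lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:       # sentinel: no more work for this worker
            break
        with lock:
            results.append(item * item)  # stand-in for the real activity
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for n in range(10):            # producer enqueues work asynchronously
    tasks.put(n)
for _ in workers:              # one sentinel per worker to shut down
    tasks.put(None)
for w in workers:
    w.join()
print(sorted(results))
```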
Communication in DS and Performance
key factor: minimize the amount of necessary communication (too many messages
passed between distributed components is the main cause of performance
problems)
avoid communication overhead:
data transfer = significant overhead: keep data near the processors
any task should be able to run in several locations: choose the location that
provides the best performance
avoid generating distributed garbage: distributed garbage collection =
significant overhead
reduce the cost of keeping data synchronized: minimize data duplication
reduce data-transfer costs by duplicating (balance: find optimal duplication
points)
cache distributed data wherever possible
use compression to reduce time to transfer data
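The last point can be illustrated with the standard library: compressing repetitive data before transfer trades some CPU time for a much smaller payload on the wire. The payload here is made up.

```python
import zlib

payload = b"sensor-reading:42;" * 500          # repetitive data compresses well
compressed = zlib.compress(payload, level=6)

print(len(payload), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == payload  # lossless round trip
```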

Transparency - the user shouldn't know the application is distributed


The DS should hide its complexity from the client (user). Hierarchically structured: the
lower levels support the higher levels.

1. Access
same interface to a service (which could be local or remote)
the user cannot distinguish a local resource from a remote one
2. Location
a client does not need to know the location (machine, IP address) of the resource it
uses
3. Migration
sometimes components need relocation (e.g., for load balancing)
this should be done without notifying the client
4. Replication (copies of components)
main issues: generating replicas and keeping them updated with the original
main benefits: system load distribution, scalability, reliability; may improve system
response
key concept for load balancing
create database replicas in order to keep data near the processes
5. Concurrency
the same resource can be concurrently used by many clients without interference or
awareness
main issue: integrity of the shared resource
6. Scalability
users are not aware of the system being scaled up
7. Performance
the user is not aware of how a certain level of performance is achieved
techniques that may influence performance transparency: load balancing, replicas,
component migration
8. Failure
server components can recover themselves
relies on replication and the other transparency mechanisms
DS management must hide the failure from the client
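Failure transparency can be sketched on the client side: requests try replicas in turn, so the caller never observes a single replica's crash. The replica set and the simulated failure below are hypothetical.

```python
class Replica:
    """Simulated server replica that may be down."""
    def __init__(self, name: str, alive: bool = True):
        self.name, self.alive = name, alive

    def handle(self, request: str) -> str:
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def transparent_call(replicas, request: str) -> str:
    """Hide individual replica failures from the caller by failing over."""
    for replica in replicas:
        try:
            return replica.handle(request)
        except ConnectionError:
            continue                  # fail over to the next replica
    raise RuntimeError("all replicas failed")

replicas = [Replica("r1", alive=False), Replica("r2")]
print(transparent_call(replicas, "GET /status"))  # served by r2 despite r1 being down
```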
