Unit 5
A firewall, in a vehicle, is an insulated metal barrier that keeps the hot and dangerous moving parts of the engine separate from the flammable interior where the passengers sit. In information security, a firewall serves a similar purpose: it keeps the untrusted outside network separate from the trusted inside network.
The firewall may be a separate computer system, a service running on an existing router
or server, or a separate network containing a number of supporting devices.
Development of Firewalls
First Generation
The first generation of firewalls is called packet filtering firewalls, because these simple networking devices filter packets based on their headers as they travel to and from the organization's networks.
In this case the firewall examines every incoming packet header and can selectively filter
packets (accept or reject) based on these and other factors:
Address
Packet type
Port request (HTTP or Telnet)
First generation firewalls scan network data packets looking for compliance with, or violation of, the rules of the firewall's database.
A first generation firewall inspects packets at the network layer, or Layer 3, of the OSI model.
If it finds a packet that matches a restriction, it simply refuses to forward it from one
network to another.
The restrictions most commonly implemented in packet filtering firewalls are based on a
combination of the following:
Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) source and
destination port requests.
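A minimal sketch of this header-based filtering in Python may help. The rules, addresses, and ports below are invented for illustration; real packet filters use far richer rule languages, but the first-match logic is the same:

```python
# Toy packet-filtering firewall: rules match on header fields only.
# All addresses and rules here are hypothetical examples.

RULES = [
    # (source prefix, destination prefix, protocol, dest port, action)
    ("10.10.",  "*",      "tcp", 23,   "deny"),   # block Telnet leaving the LAN
    ("*",       "10.10.", "tcp", 80,   "allow"),  # allow HTTP to the web server
    ("*",       "*",      "*",   None, "deny"),   # default deny
]

def filter_packet(src, dst, proto, port):
    """Return 'allow' or 'deny' based on the first matching rule."""
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if r_src != "*" and not src.startswith(r_src):
            continue
        if r_dst != "*" and not dst.startswith(r_dst):
            continue
        if r_proto != "*" and proto != r_proto:
            continue
        if r_port is not None and port != r_port:
            continue
        return action
    return "deny"

print(filter_packet("192.0.2.7", "10.10.0.5", "tcp", 80))  # allow
print(filter_packet("10.10.0.9", "192.0.2.1", "tcp", 23))  # deny
```

Note that the decision uses only the packet header fields; a first generation firewall has no memory of earlier packets, which is exactly the limitation later generations address.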
Second Generation
The application firewall is frequently a dedicated computer, separate from the filtering router, and is quite commonly used in conjunction with a filtering router. The application firewall is also known as a proxy server, since it runs special software designed to serve as a proxy for a service request.
Cache Servers
These servers can store the most recently accessed pages in their internal cache, and are therefore also called cache servers.
DMZ
A demilitarized zone (DMZ) is an intermediate area between a trusted network and an
untrusted network.
Third Generation
The next generation of firewalls, stateful inspection firewalls, keeps track of each network connection established between internal and external systems using a state table. These state tables track the state and context of each packet in the conversation by recording which station sent what packet and when. The additional processing required to manage and verify packets against the state table can, however, expose the system to a DoS attack.
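A toy sketch of such a state table may clarify the idea. The addresses, ports, and timeout value below are invented; the point is that an inbound packet is only accepted if it replies to a connection an internal host already opened:

```python
# Minimal stateful-inspection sketch: a state table keyed by the connection
# 4-tuple records when each outbound connection was last seen.
state_table = {}
TIMEOUT = 60  # seconds of inactivity before an entry is purged (assumed value)

def outbound(src, sport, dst, dport, now):
    # An internal host initiates a connection: record its state.
    state_table[(src, sport, dst, dport)] = now

def inbound_allowed(src, sport, dst, dport, now):
    # Inbound packets are allowed only as replies to tracked connections.
    key = (dst, dport, src, sport)  # reversed tuple: reply to our outbound
    last = state_table.get(key)
    if last is None or now - last > TIMEOUT:
        state_table.pop(key, None)   # stale or unknown: drop the entry
        return False
    state_table[key] = now           # refresh the state entry
    return True

outbound("10.0.0.5", 40000, "198.51.100.9", 80, now=0)
print(inbound_allowed("198.51.100.9", 80, "10.0.0.5", 40000, now=5))  # True
print(inbound_allowed("203.0.113.4", 80, "10.0.0.5", 40000, now=5))   # False
```

The DoS exposure mentioned above follows directly from this design: every tracked connection consumes a table entry, so an attacker who opens many connections can exhaust the table.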
Fourth Generation
The fourth generation firewall, also known as a dynamic packet filtering firewall, allows only a particular packet with a particular source, destination, and port address to enter.
Fifth Generation
The final form of firewall is the kernel proxy, a specialized form that works under the Windows NT Executive, which is the kernel of Windows NT.
It evaluates packets at multiple layers of the protocol stack by checking security in the
kernel as data is passed up and down the stack.
Firewall Architectures
Most organizations with an Internet connection have some form of a router as the interface to the Internet at the perimeter between the organization's internal networks and the external service provider.
Many of these routers can be configured to filter packets that the organization does not allow into
the network.
This is a simple but effective means of lowering the organization's risk of external attack.
Drawbacks of this approach include a lack of auditing and strong authentication. Also, the complexity of the access control lists used to filter the packets can grow to the point of degrading network performance.
The screened host firewall architecture combines the packet filtering router with a separate, dedicated firewall, such as an application proxy server.
This approach allows the router to prescreen packets to minimize the network traffic and load on
the internal proxy.
sacrificial host
Since the bastion host stands as a sole defender on the network perimeter, it is also commonly referred to as the sacrificial host.
Advantage: the proxy requires an external attacker to compromise two separate systems before the attack can access internal data.
A technology known as network address translation is commonly implemented with this
architecture.
The screened subnet firewall consists of two or more internal bastion hosts behind a packet filtering router, with each host protecting the trusted network.
The first general model consists of two filtering routers, with one or more dual-homed bastion
hosts between them.
Connections from the outside or untrusted network are routed through an external
filtering router.
Connections from the outside or untrusted network are routed into and then out of a
routing firewall to the separate network segment known as the DMZ.
Connections into the trusted internal network are allowed only from the DMZ bastion
host servers.
SOCKS Servers
SOCKS is the protocol for handling TCP traffic through a proxy server.
The SOCKS system is a proprietary circuit-level proxy server that places special SOCKS client-
side agents on each workstation.
The general approach is to place the filtering requirements on the individual workstation.
Selecting the optimum firewall for your organization depends on a number of factors.
The most important of these is the extent to which the firewall design provides the desired
protection.
When evaluating a firewall for your networks, your questions should cover the following topics:
1. What type of firewall technology offers the right balance between protection and cost
for the needs of the organization?
2. What features are included in the base price? What features are available at extra cost?
Are all cost factors known?
3. How easy is it to set up and configure the firewall? How accessible are the staff technicians who can configure and maintain it?
4. Can the candidate firewall adapt to the growing network in the target organization?
Once the firewall architecture and technology have been selected, the initial configuration and ongoing management of the firewall(s) need to be considered.
Anyone who has done application programming can appreciate the problems associated with debugging both syntax errors and logic errors.
Syntax errors in firewall policies are usually easy to identify, as the systems alert the
administrator of incorrectly configured policies.
Best Practices for Firewalls
This section outlines some of the best business practices for firewall use:
1. All traffic from the trusted network is allowed out. This allows members of the
organization to access the services they need. Filtering and logging of outbound traffic is
possible when indicated by specific organizational policy goals.
2. The firewall device is never accessible directly from the public network.
3. Simple Mail Transport Protocol (SMTP) data is allowed to pass through the firewall, but
it should all be routed to a well-configured SMTP gateway to securely filter and route
messaging traffic.
4. Telnet (terminal emulation) access to all internal servers from the public networks should
be blocked.
5. When Web services are offered outside the firewall, HTTP traffic should be denied from
reaching your internal networks by using some form of proxy access or DMZ
architecture.
Information security intrusion detection systems (IDSs) work like a burglar alarm.
When the alarm detects a violation of its configuration (as in an opened or broken window), it
activates the alarm.
This alarm can be an audible and visual (noise and lights), or it can be a silent alarm that sends a
message to a monitoring company.
A host-based IDS resides on a particular computer or server, known as the host, and monitors
activity on that system.
Most host-based IDSs work on the principle of configuration or change management: the system records file sizes, locations, and other attributes of the files, and then reports when one or more of these attributes changes, when new files are created, and when existing files are deleted.
Host-based IDSs can also monitor system logs for predefined events.
Host-based IDSs examine these files and logs to determine whether an attack has occurred and whether it was successful, and they report this information to the administrator.
An IDS maintains its own log files. Therefore, when hackers successfully modify a system's logs in an attempt to cover their tracks, the IDS provides independent verification that the attack occurred.
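The change-management principle above can be sketched in a few lines: snapshot each file's size and hash, then report anything that changed, appeared, or disappeared. The file names and contents in the demo are invented:

```python
import hashlib, os, tempfile

# Sketch of host-based IDS change detection: baseline file attributes,
# then compare a later snapshot against the baseline.

def snapshot(paths):
    state = {}
    for p in paths:
        with open(p, "rb") as f:
            data = f.read()
        state[p] = (len(data), hashlib.sha256(data).hexdigest())
    return state

def compare(baseline, current):
    changed = [p for p in baseline if p in current and baseline[p] != current[p]]
    created = [p for p in current if p not in baseline]
    deleted = [p for p in baseline if p not in current]
    return changed, created, deleted

# Demo: baseline a file, tamper with it, then detect the change.
d = tempfile.mkdtemp()
path = os.path.join(d, "passwd")
with open(path, "w") as f:
    f.write("root:x:0:0")
baseline = snapshot([path])
with open(path, "w") as f:
    f.write("root::0:0")      # attacker removes the password field
changed, created, deleted = compare(baseline, snapshot([path]))
print(changed == [path], created, deleted)   # True [] []
```

Because the baseline is stored by the IDS itself, tampering with the monitored file cannot hide the change, which is the independent-verification property described above.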
Host-based IDS work through the configuration and classification of various categories of
systems and data files.
Host-based IDS that are managed can monitor multiple computers simultaneously.
They do this by storing a client file on each monitored host and by making that host report back to the master console, which is usually located on the systems administrator's computer.
This master console monitors the information provided from the managed clients and notifies the
administrator when predetermined attack conditions occur.
When a predefined condition occurs, network-based IDS respond and notify the appropriate
administrator.
Network IDSs, therefore, require a much more complex configuration and maintenance program than host-based IDSs.
Network IDS must match known and unknown attack strategies against their knowledge
base to determine whether or not an attack has occurred.
Network IDSs result in many more false positive readings, as the systems attempt to read into the pattern of activity on the network to determine what is normal and what is not.
Signature-based IDS
The problem with this approach is that the signatures must be continually updated as new attack
strategies are identified.
Failure to stay current allows attacks using new strategies to succeed.
Another weakness of this method is the time frame over which attacks occur.
If attackers are slow and methodical, they may slip undetected through the IDS, as their actions may not match a signature that includes factors based on the duration of the events.
The only way to resolve this is to collect and analyze data over longer periods of time, which requires substantially larger data storage capability and additional processing capacity.
Statistical Anomaly-based IDS
Another common method used in IDS is the statistical anomaly-based IDS (stat IDS) or
behavior-based IDS.
The stat IDS collects data from normal traffic and establishes a baseline. Once the baseline is established, the IDS periodically samples network activity, using statistical methods, and compares the samples to the baseline. When the activity falls outside the baseline parameters (known as a clipping level), the IDS notifies the administrator.
The baseline variables can include a host's memory or CPU usage, network packet types, and
packet quantities.
The advantage of this approach is that the system can detect new types of attacks, as it looks for
abnormal activity of any type.
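The baseline-and-clipping-level idea above can be sketched directly. The traffic figures are invented sample data, and the three-sigma clipping level is an assumed choice:

```python
from statistics import mean, stdev

# Statistical anomaly sketch: build a baseline from "normal" samples,
# then flag any sample outside the clipping level (mean +/- 3 sigma).

def build_baseline(samples):
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, sigmas=3.0):
    mu, sd = baseline
    return abs(value - mu) > sigmas * sd

# Hypothetical packets-per-second counts observed during normal operation.
normal_pkts_per_sec = [100, 104, 98, 101, 99, 103, 97, 102]
baseline = build_baseline(normal_pkts_per_sec)

print(is_anomalous(101, baseline))   # False: within normal variation
print(is_anomalous(500, baseline))   # True: likely a flood or scan
```

Notice the trade-off the text describes: a new attack that changes traffic volume is caught without any signature, but a legitimate burst of activity outside the baseline would also trip the alarm (a false positive).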
Encryption Definitions
Before introducing the tools and functions popular in encryption security solutions, it is useful to review some basic definitions:
Encipher: To encrypt or convert plaintext to ciphertext.
Key or Cryptovariable: The information used in conjunction with the algorithm to
create the ciphertext from the plaintext. The key can be a series of bits used in a
mathematical algorithm, or the knowledge of how to manipulate the plaintext.
Keyspace: The entire range of values that can possibly be used to construct an individual
key.
6. Encryption Operations
In encryption the most commonly used algorithms include two functions: substitution and
transposition. In a substitution cipher, you substitute one value for another. For example, you
can substitute the message character with the character three values to the right in the alphabet.
Like the substitution operation, the transposition cipher is simple to understand but can be complex to decipher if properly used. Unlike the substitution cipher, the transposition cipher (or permutation cipher) simply rearranges the values within a block to create the ciphertext.
Plaintext: 001001010110101110010101010101001001
Key: 1→4, 2→8, 3→1, 4→5, 5→7, 6→2, 7→6, 8→3 (the bit in position 1 moves to position 4, the bit in position 2 to position 8, and so on)
The plaintext is broken into 8-bit blocks (for ease of discussion), and the corresponding ciphertext is produced by applying the key to each block of the plaintext.
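Both operations can be sketched in a few lines: a shift-by-3 substitution on letters, and a bit transposition using the permutation key from the example above:

```python
# Substitution: replace each letter with the letter three positions later
# in the alphabet (wrapping around from Z to A). Uppercase A-Z assumed.
def substitute(text, shift=3):
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

# Transposition: the permutation from the example, position 1->4, 2->8,
# 3->1, 4->5, 5->7, 6->2, 7->6, 8->3, applied to an 8-bit block.
PERM = {1: 4, 2: 8, 3: 1, 4: 5, 5: 7, 6: 2, 7: 6, 8: 3}

def transpose_block(bits):
    out = [""] * 8
    for src, dst in PERM.items():
        out[dst - 1] = bits[src - 1]   # bit at position src moves to dst
    return "".join(out)

print(substitute("DOG"))            # GRJ
print(transpose_block("00100101"))  # 11100000
```

Note that neither operation changes the length of the data; substitution changes the values, while transposition changes only their positions.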
Vernam Cipher
The Vernam cipher was developed at AT&T and uses a one-time set of characters, the value of which is added to the block of text. The resulting sum is then converted to text. When
the two are added, if the values exceed 26, 26 is subtracted from the total (Modulo 26). The
corresponding results are then converted back to text as shown in the example below.
Plaintext:         M  Y  D  O  G  H  A  S  F  L  E  A  S
Plaintext values:  13 25 04 15 07 08 01 19 06 12 05 01 19
One-time pad:      F  P  Q  R  N  S  B  I  E  H  T  Z  L
Pad values:        06 16 17 18 14 19 02 09 05 08 20 26 12
Sum:               19 41 21 33 21 27 03 28 11 20 25 27 31
After subtraction: 19 15 21 07 21 01 03 02 11 20 25 01 05
Ciphertext:        S  O  U  G  U  A  C  B  K  T  Y  A  E
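The example above can be verified in a few lines of Python, with letters valued A=1 through Z=26 and sums over 26 reduced by 26:

```python
# Vernam cipher as described above: add letter values (A=1..Z=26),
# subtracting 26 whenever the sum exceeds 26 (modulo-26 on 1..26).
def vern(ch, pad):
    s = (ord(ch) - 64) + (ord(pad) - 64)   # letter values 1..26
    if s > 26:
        s -= 26
    return chr(s + 64)

plaintext = "MYDOGHASFLEAS"
pad       = "FPQRNSBIEHTZL"
cipher = "".join(vern(c, p) for c, p in zip(plaintext, pad))
print(cipher)   # SOUGUACBKTYAE
```

Running this confirms the worked table: for instance, M (13) plus F (06) gives 19, which is S.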
7. Encryption Methods
Symmetric Encryption
The method described in the example above, which requires the same key to both encipher and decipher the message, is known as private key encryption, or symmetric encryption.
Symmetric encryption indicates that the same key, also known as a secret key, is used to
conduct both the encryption and decryption of the message. Symmetric encryption methods can
be extremely efficient, requiring minimal processing to either encrypt or decrypt the message.
The problem is that both the sender and the receiver must know the encryption key. If either copy of the key is compromised, an intermediary can decrypt and read the messages. One of the
challenges of symmetric key encryption is getting a copy of the key to the receiver, a process that
must be conducted out of band (meaning through an alternate channel or band than the one
carrying the ciphertext) to avoid interception. Figure illustrates the concept of symmetric
encryption.
Symmetric Encryption Example
There are a number of popular symmetric encryption cryptosystems. One of the most
familiar is Data Encryption Standard (DES). DES was developed in 1977 by IBM and is based
on the Data Encryption Algorithm (DEA), which uses a 64-bit block size and a 56-bit key. The
algorithm begins by adding parity bits to the key (resulting in 64 bits) and then applies the key in
16 rounds of XOR, substitution, and transposition operations. With a 56-bit key, the algorithm has 2^56 possible keys to choose from (over 72 quadrillion).
DES is a federally approved standard for non-classified data. DES was cracked in 1997 when Rivest-Shamir-Adleman (RSA) put a bounty on the algorithm. The term RSA reflects the last names of the three developers of the RSA algorithm.
Triple DES, or 3DES, was developed as an improvement to DES and uses up to three keys in succession. It is substantially more secure than DES, not only because it uses up to three keys to DES's one, but also because it performs three different encryption operations, as described below:
1. In the first operation, 3DES encrypts the message with key 1, then decrypts it with key 2,
and then it encrypts it with key 1 again. Decrypting with a different key is essentially
another encryption, but it reverses the application of the traditional encryption operations.
Essentially, [E{D[E(M,K1)],K2},K1].
2. In the second operation, it encrypts the message with key 1, then it encrypts it again with
key 2, and then it encrypts it a third time with key 1 again, or [E{E[E(M,K1)],K2},K1].
3. In the third operation, 3DES encrypts the message three times with three different keys;
[E{E[E(M,K1)],K2}, K3]. This is the most secure level of encryption possible with 3
DES.
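The three key compositions can be illustrated with a toy one-byte XOR "cipher" standing in for DES (an assumption made purely to show how the E and D operations compose; XOR is its own inverse, so encryption and decryption are the same operation here):

```python
# Toy illustration of the three 3DES modes. XOR stands in for real DES
# purely to show the key composition; it is NOT secure.

def E(msg, key):             # "encrypt": XOR every byte with the key byte
    return bytes(b ^ key for b in msg)

D = E                        # for XOR, decryption is the same operation

M, K1, K2, K3 = b"secret", 0x5A, 0xC3, 0x0F

ede_2key = E(D(E(M, K1), K2), K1)     # mode 1: E-D-E with two keys
eee_2key = E(E(E(M, K1), K2), K1)     # mode 2: E-E-E with two keys
eee_3key = E(E(E(M, K1), K2), K3)     # mode 3: E-E-E with three keys

# Applying the inverse operations in reverse order recovers the plaintext:
print(D(E(D(ede_2key, K1), K2), K1))  # b'secret'
```

The point of the composition is that an attacker must now defeat two or three keys applied in sequence rather than one, which is why mode 3, with three independent keys, is the strongest.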
The successor to 3DES is the Advanced Encryption Standard (AES). AES is based on the Rijndael block cipher, which has a variable block length and a key length of 128, 192, or 256 bits.
Asymmetric Encryption
Asymmetric encryption, also known as public key encryption, uses two different but related keys: if Key A is used to encrypt a message, only Key B can decrypt it, and if Key B is used to encrypt a message, only Key A can decrypt it. This technique has its highest value when one key is used as a private key, and the
other key as a public key. Why is it called the public key? The public key is stored in a public
location, where anyone can use it. Obviously the private key is called that, because it must be
kept private, or else there is no benefit from the encryption. The private key, as its name
suggests, is a secret known only to the owner of the key pair. Alex at ABC Corporation wants to
send an encrypted message to Rachel at XYZ Corporation. Alex goes to a public key registry and obtains Rachel's public key. Remember, the foundation of asymmetric encryption is that the same key cannot be used to both encrypt and decrypt the same message. So when Rachel's public key is used to encrypt the message, only Rachel's private key can be used to decrypt it, and that private key is held by Rachel alone.
The problem with asymmetric encryption is that it requires four keys to hold a single
conversation between two parties. If four organizations want to frequently exchange
communications, they each have to manage their private key and four public keys. It can be
confusing to determine which public key is needed to encrypt a particular message. With more
organizations in the loop, the problem expands. Also, asymmetric encryption is not as efficient as
symmetric encryptions in terms of CPU computations. As a result, the hybrid system described in
the section on Public Key Infrastructure is more commonly used, instead of a pure asymmetric
system.
Digital Signatures
An interesting thing happens when the asymmetric process is reversed, that is, when the private key is used to encrypt a short message. The public key can be used to decrypt it, and the fact that
the message was sent by the organization that owns the private key cannot be refuted. This is
known as non-repudiation, which is the foundation of digital signatures. Digital Signatures are
encrypted messages that are independently verified as authentic by a central facility (registry).
RSA
One of the most popular public key cryptosystems is RSA. As described earlier, RSA
stands for Rivest-Shamir-Adleman, its developers. RSA is the first public key encryption
algorithm developed and published for commercial use. RSA is very popular and is part of both
Microsoft and Netscape Web browsers. There are a number of extensions to the RSA algorithm
including: RSA Encryption Scheme Optimal Asymmetric Encryption Padding (RSAES-
OAEP); and RSA Signature Scheme with Appendix Probabilistic Signature Scheme (RSASSA-
PSS).
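A tiny-number RSA sketch can make both the encryption direction and the reversed signature direction concrete. The primes and exponent below are textbook-scale illustrations only; real RSA uses primes hundreds of digits long (the modular inverse via `pow(e, -1, phi)` assumes Python 3.8+):

```python
# Toy RSA: key generation, encryption, and the reversed (signature) use.
p, q = 61, 53
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (modular inverse of e)

m = 65                                   # message encoded as a number < n
c = pow(m, e, n)                         # encrypt with the public key
print(pow(c, d, n))                      # decrypt with the private key -> 65

# Reversed process (digital signature): encrypt with the private key,
# and anyone holding the public key can verify -- non-repudiation.
sig = pow(m, d, n)
print(pow(sig, e, n) == m)               # True
```

The second half is exactly the digital-signature idea from the previous section: because only the key-pair owner could have produced `sig`, a successful verification with the public key cannot be repudiated.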
PKI
Public Key Infrastructure is the entire set of hardware, software, and cryptosystems
necessary to implement public key encryption. PKI systems are based on public key
cryptosystems and include digital certificates and certificate authorities (CAs). Common
implementations of PKI include: systems to issue digital certificates to users and servers;
encryption enrollment; key issuing systems; tools for managing the key issuance; verification
and return of certificates; and any other services associated with PKI.
Integrity: A digital certificate demonstrates that the content signed by the certificate has
not been altered while being moved from server to client.
Privacy: Digital certificates keep information from being intercepted during transmission
over the Internet.
Authorization: Digital certificates issued in a PKI environment can replace user IDs and
passwords, enhance security, and reduce some of the overhead required for authorization
processes and controlling access privileges for specific transactions.
Non-repudiation: Digital certificates can validate actions, making it less likely that
customers or partners can later repudiate a digitally signed transaction, such as an online
purchase.
8. Securing Authentication
One last set of cryptosystems discussed here provides secure third-party authentication.
The first is Kerberos, named after the three-headed dog of Greek mythology (spelled Cerberus in Latin), which guarded the gates to the underworld. Kerberos uses symmetric key encryption to validate an individual user to various network resources. Kerberos keeps a database containing the private keys of clients and servers, which, in the case of a client, is the client's encrypted password. Network services running on servers in the network register with Kerberos, as do the clients that use those services. The Kerberos system knows these private keys and can
authenticate one network node (client or server) to another; for example, Kerberos can
authenticate a client to a print service. To understand Kerberos, think of a friend introducing you
around at a party. Kerberos also generates temporary session keys, which are private keys given
to the two parties in a conversation. The session key is used to encrypt all communications
between these two parties. Typically a user logs into the network, is authenticated to the
Kerberos system, and is then authenticated by the Kerberos system itself to other resources on the network.
Kerberos consists of three interacting services, all using a database library:
1. Authentication server (AS), which is a Kerberos server that authenticates clients and
servers.
2. Key Distribution Center (KDC), which generates and issues session keys.
3. Kerberos ticket granting service (TGS), which provides tickets to clients who request
services. In Kerberos a ticket is an identification card for a particular client that verifies
to the server that the client is requesting services and that the client is a valid member of
the Kerberos system and therefore authorized to receive services. The ticket consists of
the client's name and network address, a ticket validation starting and ending time, and
the session key, all encrypted in the private key of the server from which the client is
requesting services.
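The ticket structure just described can be sketched as a small data object. The "encryption" below is a placeholder XOR keystream (a stated assumption; real Kerberos uses proper symmetric ciphers), and all names and keys are invented:

```python
import json
from hashlib import sha256

# Structural sketch of a Kerberos ticket. toy_encrypt is a stand-in for
# a real symmetric cipher, keyed by a hash of the server's secret.
def toy_encrypt(data: bytes, key: str) -> bytes:
    stream = sha256(key.encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt    # the XOR keystream is self-inverse

def make_ticket(client, address, session_key, server_key, lifetime=3600):
    ticket = {
        "client": client,             # client's name
        "address": address,           # client's network address
        "valid_from": 0,              # fixed times for a reproducible demo
        "valid_to": lifetime,
        "session_key": session_key,
    }
    # Encrypted in the private key of the server providing the service,
    # so only that server can read and validate it.
    return toy_encrypt(json.dumps(ticket).encode(), server_key)

blob = make_ticket("alice", "10.0.0.5", "sess-42", server_key="printer-secret")
print(json.loads(toy_decrypt(blob, "printer-secret"))["client"])  # alice
```

The design point mirrors the text: the client carries the ticket but cannot read or alter it, because it is sealed under a key only the target server (and the KDC) knows.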
Kerberos works based on the following principles:
The KDC knows the secret keys of all clients and servers on the network.
The KDC initially exchanges information with the client and server by using these
secret keys.
Kerberos authenticates a client to a requested service on a server through TGS and by
issuing temporary session keys for communications between the client and KDC, the
server and KDC, and the client and server.
Communications then take place between the client and server using these temporary session keys.
In reply, the client receives:
A client/TGS session key for future communications between the client and the TGS, [Kc,tgs], encrypted with the client's key.
A ticket-granting ticket (TGT), which contains the client name, client address, ticket valid times, and the client/TGS session key, all encrypted in the TGS's private key.
Figure: Kerberos Scenario
(1) The client requests services from the TGS by sending the server name (s), the TGT, and an authenticator containing the client name, time stamp, and optional session key, all encrypted in the client/TGS session key: [c,t,k]Kc,tgs
There are a number of components in the physical design of a successful access control system. The most important is the need for strong authentication, which means two-factor authentication. When considering access control, you address:
Authentication
Authentication is the validation of a user's identity; in other words, are you who you claim to be?
A passphrase is a plain-language phrase, typically longer than a password, from which a virtual password is derived.
What You Have
This category includes dumb cards, such as ID cards or ATM cards with magnetic stripes containing the digital (and often encrypted) user personal identification number (PIN), against which user input is compared. A better version is the smart card, which contains a computer chip that can verify and validate a number of pieces of information above and beyond the PIN. Another device often used is the token, a computer chip in a display that presents a number used to support remote login authentication.
Asynchronous tokens use a challenge- response system, in which the server challenges
the user during login with a numerical sequence. The user places this sequence into the token and
receives a response. The user then enters the response into the system to gain access. This system
does not require the synchronization of the previous system and therefore does not suffer from mistiming issues.
What You Are
This involves the entire area of biometrics discussed earlier. Biometrics includes:
Fingerprints
Palm scan
Hand geometry
Hand topology
Keyboard dynamics
ID cards (face representation)
Facial recognition
Retina scan
Iris scan
Voice recognition
With all these metrics, only three human characteristics are considered truly unique:
Fingerprints
Retina of the eye (blood vessel pattern)
Iris of the eye (random pattern of features found in the iris, including freckles, pits, striations, vasculature, coronas, and crypts)
Figure 1 depicts some of these human recognition characteristics.
Figure: Recognition Characteristics
Most of the technologies that scan human characteristics convert these images to some
form of minutiae. Minutiae are unique points of reference that are digitized and stored in an
encrypted format. Each subsequent scan is also digitized and then compared with the encoded
value to determine if users are whom they claim to be. The problem is that some human
characteristics can change over time, due to normal development, injury, or illness.
What You Do
The fourth and final area of authentication addresses something the user performs or
something they produce. This includes technology in the areas of signature recognition and voice
recognition, or at least signature capture, for authentication during a purchase. The customer
signs his or her signature on a special pad, with a special stylus that captures the signature. The
signature is digitized and either simply saved for future reference, or compared to a database for
validation. Currently, the technology for signature capturing is much more widely accepted than
that for signature comparison, because signatures change over time due to a number of factors,
including age, fatigue, and the speed with which the signature is written. Voice recognition works
similarly. There are several voice recognition software packages on the market today. These
monitor the analog waveforms of a human's speech and attempt to convert them into on-screen
text. Voice recognition for authentication is much simpler, as the captured and digitized voice is
only compared to a stored version for authentication, rather than for text recognition. Systems
that use voice recognition provide the user with a phrase that they are expected to read. This phrase is then compared to a stored version for authentication, for example: "My voice is my password, please verify me. Thank you."
Effectiveness of Biometrics
Biometric technologies are evaluated on three basic criteria: first, the false reject rate,
which is the percentage of authorized users that are denied access; second, the false accept rate,
which is the percentage of unauthorized users allowed access; finally, the crossover error rate,
which is the point at which the number of false rejections equals the false acceptances.
The false reject rate is the percentage or value associated with the rate at which authentic
users are denied or prevented access to authorized areas, as a result of a failure in the biometric
device. This error rate is also known as a Type I error. This error rate, while a nuisance to authorized users, is probably of the least concern to security personnel. Rejection of an authorized individual represents no threat to security, but simply an impediment to authentic use.
As a result, it is often overlooked, until the rate increases to a level high enough to irritate users.
False Accept Rate
The false accept rate is the percentage or value associated with the rate at which
fraudulent or nonusers are allowed access to systems or areas as a result of a failure in the
biometric device. This error rate is also known as a Type II error. This type of error is
unacceptable to security, as it represents a clear breach of security. Frequently multiple measures
of authentication are required to back up a device that may fail and result in the admission of
mistakenly accepted individuals.
The crossover error rate is the point at which the number of false rejections equals the number of false acceptances; it is also known as the equal error rate. This is possibly the most common and important overall measure of the accuracy of a biometric system. Most biometric systems can be adjusted to compensate for both false positive and false negative errors. Adjustment to one extreme creates a system that requires perfect matches and results in high false rejects but almost no false accepts. Adjustment to the other extreme allows low false rejects but produces high false accepts. The trick is to find the balance with low false accepts but also low false rejects, to both ensure security and minimize the frustration level of authentic users. The optimal setting is somewhere near the equal error rate, or CER.
CERs are used to compare various biometrics and may vary by manufacturer. A biometric
device that provides a CER of one percent is considered superior to one with a CER of five
percent.
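Finding the CER can be sketched as a threshold sweep: compute FRR and FAR at each candidate match-score threshold and look for the point where they are equal. The score lists below are invented sample data:

```python
# Sweep a match-score threshold to estimate the crossover error rate,
# the point where the false reject rate equals the false accept rate.
# Both score lists are hypothetical sample data.

genuine  = [0.91, 0.85, 0.58, 0.95, 0.62, 0.83, 0.90, 0.87]  # authorized users
impostor = [0.40, 0.55, 0.72, 0.35, 0.60, 0.50, 0.45, 0.68]  # unauthorized

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)    # Type I errors
    far = sum(s >= threshold for s in impostor) / len(impostor) # Type II errors
    return frr, far

# Pick the threshold minimizing |FRR - FAR|: an estimate of the CER point.
gap, threshold = min((abs(rates(t / 100)[0] - rates(t / 100)[1]), t / 100)
                     for t in range(0, 101))
print(threshold, rates(threshold))
```

Raising the threshold pushes the system toward high false rejects and low false accepts; lowering it does the reverse. The balanced operating point near where the two rates meet is the CER described above.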
Access Controls
There are a number of physical access controls that are uniquely suited to governing the physical entry and exit of people to and from the organization's facilities. Sometimes physical security control technologies overlap with logical security control technologies. Some of these
overlaps include biometrics, smart cards, or wireless enabled keycards, which are used for
controlling access to locked doors, information assets, and information system resources.
Before examining access controls, you need to understand the concept of a secure facility
and its design. The organization's general management oversees physical security. Commonly,
access controls for a building are operated by a group called facilities management.
From the point of view of facilities management, a secure facility is a physical location that
has been engineered with controls designed to minimize the risk of attacks from physical threats.
The concept of the secure facility brings to mind military bases, maximum-security prisons, and
nuclear power plants. A secure facility can use the natural terrain, traffic flow, and urban
development to its advantage. A secure facility can complement these features with protection
mechanisms, such as fences, gates, walls, guards and alarms.
There are a number of physical security controls and issues that the organization's
communities of interest should consider when implementing physical security inside and outside
the facility.
Walls, fencing, and gates are among the oldest and most reliable methods of providing physical security on the premises. These controls deter unauthorized access to the facility.
Guards
Guards and security agencies have the ability to apply human reasoning. Other controls are either static, and therefore unresponsive to actions, or they are programmed to respond with specific actions to specific stimuli. Guards are employed to evaluate each situation as it arises and to make reasoned responses.
Most guards have clear standard operating procedures (SOPs) that help them to act
decisively in unfamiliar situations. In the military, guards are provided with general orders ( see
the Offline on guard duty) and special orders that are unique to their posts.
Dogs
If the organization is protecting highly valuable resources, dogs can be a valuable part of
physical security if they are integrated into the plan correctly and managed properly. Guard dogs
are useful because their keen sense of smell and hearing can detect intrusions.
One area of access control that ties physical security with information access control is
the use of identification (ID) cards and name badges. An ID card is typically worn concealed,
whereas a name badge is visible. These devices can serve a number of purposes. First, they are
simple forms of biometrics (facial recognition) to identify and authenticate an authorized
individual with access to the facility.
Another inherent weakness of access control technologies is the human factor known as
tailgating.
There are two types of locks: mechanical and electromechanical. A mechanical lock may rely on a key made of carefully shaped pieces of metal that turn tumblers to release secured loops of steel, aluminum, or brass (as in padlocks), or on a dial that causes the proper rotation of slotted discs until the slots on multiple discs are aligned, permitting the retraction of a securing bolt (as in combination and safe locks).
Locks are divided into four categories: manual, programmable, electronic, and biometric.
Manual locks, such as padlocks and combination locks, are commonplace and well understood.
These locks are often preset by the manufacturer and therefore unchangeable. Manual locks are
installed into doors and cannot be changed, except by highly trained locksmiths.
Electronic locks can be integrated into alarm systems and other building management systems.
Some locks require keys that contain computer chips. These smart cards can carry critical
information, provide strong authentication, and offer a number of other features. Keycard readers
based on smart cards are commonly used for securing computer rooms, communication closets,
and other restricted areas.
The card reader can track entry and provide accountability. Individuals can be allowed or
denied access depending on their current status, without requiring replacement of the lock. A
specialized type of keycard reader is the proximity reader, which does not require the insertion
of the keycard into the reader. Instead, the individual simply places the card within the lock's
range to be recognized.
The most sophisticated locks are biometric locks. Finger, palm, and hand readers, iris and
retina scanners, and voice and signature readers fall into this category.
The procedure must take into account that locks fail in one of two ways: when the lock of
a door fails and the door becomes unlocked, that is a fail-safe lock; when the lock of a door fails
and the door remains locked, this is a fail-secure lock. A fail-safe lock usually secures an exit,
where it is essential for human safety in the event of a fire. A fail-secure lock is used when
human safety is not a factor.
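The two failure modes can be sketched as a small decision; the function name and return values below are illustrative, not taken from any real access-control product:

```python
# Hypothetical sketch of the two lock failure modes described above.

def door_state_on_failure(lock_type: str) -> str:
    """Return the door state when the lock mechanism fails.

    A fail-safe lock releases on failure (human safety first);
    a fail-secure lock stays locked on failure (asset protection first).
    """
    if lock_type == "fail-safe":
        return "unlocked"   # e.g., a fire exit: people can always get out
    if lock_type == "fail-secure":
        return "locked"     # e.g., a storage area with no one inside
    raise ValueError(f"unknown lock type: {lock_type}")
```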
Mantraps
A mantrap is a small enclosure that has an entry point and a different exit point. The
individual entering the facility, area, or room, enters the mantrap, requests access through some
form of electronic or biometric lock and key, and if verified, is allowed to exit the mantrap into
the facility.
This is called a mantrap, because if the individual is denied entry, the mantrap does not allow
exit until a security official overrides the automatic locks of the enclosure.
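The mantrap's admit-or-hold behavior can be sketched as a tiny state machine. The class name, methods, and IDs below are hypothetical illustrations of the logic described above:

```python
# Illustrative sketch of mantrap control logic: verified occupants exit into
# the facility; everyone else is held until a security official overrides.

class Mantrap:
    def __init__(self, authorized_ids):
        self.authorized_ids = set(authorized_ids)
        self.occupant = None
        self.override = False

    def enter(self, person_id):
        """A person steps into the enclosure; the outer door closes behind them."""
        self.occupant = person_id

    def security_override(self):
        """A security official releases the automatic locks."""
        self.override = True

    def request_exit(self):
        """The inner door opens only for verified credentials or an override."""
        if self.occupant in self.authorized_ids or self.override:
            person, self.occupant = self.occupant, None
            self.override = False
            return f"{person}: admitted"
        return "denied: held until security override"
```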
Electronic Monitoring
To record events within a specific area that guards and dogs might miss, or to record
events in areas where other types of physical controls are not practical, monitoring equipment
can be used. Many of you are accustomed to video monitoring, with cameras viewing you from
odd corners and with the ever present silver globes found in many retail stores. On the other end
of these cameras are video cassette recorders (VCRs) and related machinery that capture the
video feed. Electronic monitoring includes closed-circuit television (CCTV) systems, some of
which collect constant video feeds while others rotate input from a number of cameras, sampling
each area in turn.
The burglar alarm is common in residential and commercial environments. These alarms
detect intrusions into unauthorized areas and notify either a local or remote security agency to
react.
These systems rely on a number of sensors that detect the intrusion: motion detectors, thermal
detectors, glass breakage detectors, weight sensors, and contact sensors. Motion detectors detect
movement within a confined space and are either active or passive. Some motion sensors emit
energy beams, usually in the form of infrared or laser light, ultrasonic sound waves, or some
other form of electromagnetic radiation. If the beam reflected from the monitored room is
disrupted, the alarm is activated. Other types of motion sensors are passive, in that they read
energy from the monitored space, usually infrared, and detect rapid changes in the energy in the
monitored area.
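The passive approach, detecting a rapid change in measured energy, reduces to a simple threshold test. The threshold value below is an arbitrary assumption for illustration:

```python
# Sketch of a passive sensor: flag a rapid change in measured infrared
# energy between consecutive readings. The threshold is an assumed value.

def detect_motion(readings, threshold=5.0):
    """Return True if any consecutive pair of readings differs by more
    than `threshold` (a rapid change in the monitored energy)."""
    return any(abs(b - a) > threshold
               for a, b in zip(readings, readings[1:]))
```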
Computer rooms and wiring and communications closets are facilities that require special
attention to ensure the confidentiality, integrity, and availability of information. Logical access
controls are easily defeated if an attacker gains physical access to the computing equipment.
The space above the ceiling, below the floor above, is called a plenum and is usually 12
to 24 inches high.
Fire Safety
Fire suppression systems are devices installed and maintained to detect and respond to a
fire, potential fire, or combustion danger. These devices typically work by denying an
environment one of the three requirements for a fire to burn: temperature (an ignition source),
fuel, and oxygen.
While the temperature of ignition or flame point may vary by combustible fuel type, it
can be as low as a few hundred degrees. Paper, the most common combustible in the office, has a
flame point of 451 degrees Fahrenheit. Paper can reach that temperature when it is exposed to a
carelessly dropped cigarette butt, malfunctioning electrical equipment, or other accidental or
purposeful misadventures.
Water and water mist systems, which are described in detail in subsequent paragraphs,
work to reduce the temperature of the flame to extinguish it and to saturate some categories of
fuels to prevent ignition. Carbon dioxide (CO2) systems rob a fire of its oxygen. Soda acid
systems deny a fire its fuel, preventing it from spreading. Gas-based systems, such as halon and
its Environmental Protection Agency-approved replacements, disrupt the fire's chemical reaction
but leave enough oxygen for people to survive for a short time.
Fire Detection
Fire detection systems fall into two general categories: manual and automatic. Manual
fire detection systems include human responses, such as calling the fire department, as well as
manually activated alarms, such as those that trigger sprinklers and gaseous systems. Care is
needed when manually triggered alarms are tied directly to suppression systems, as false alarms
are not uncommon. During the chaos of a fire evacuation, an attacker can easily slip into offices
and obtain sensitive information.
There are three basic types of fire detection systems: thermal detection, smoke detection,
and flame detection. The thermal detection systems contain a sophisticated heat sensor that
operates in one of two ways. In the first, known as fixed temperature, the sensor detects when
the ambient temperature in an area reaches a predetermined level, usually between 135 degrees
Fahrenheit and 165 degrees Fahrenheit (57 to 74 degrees Centigrade). In the second, known as
rate-of-rise, the sensor detects an unusually rapid increase in the area temperature within a
relatively short period of time.
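The two sensing modes can be sketched as follows; the fixed trip point mirrors the range in the text, while the per-interval rate threshold is an assumed value:

```python
# Sketch of the two thermal-detection modes described above.

def fixed_temperature_alarm(temp_f, trip_point_f=135):
    """Fixed-temperature sensor: alarm once ambient reaches the trip point."""
    return temp_f >= trip_point_f

def rate_of_rise_alarm(samples_f, max_rise_per_interval=15):
    """Rate-of-rise sensor: alarm on an unusually rapid increase between
    consecutive samples, even below the fixed trip point."""
    return any(b - a > max_rise_per_interval
               for a, b in zip(samples_f, samples_f[1:]))
```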
Smoke detection systems are perhaps the most common means of detecting a potentially
dangerous fire, and they are required by building codes in most residential buildings.
Smoke detectors operate in one of three ways.
In the first, photoelectric sensors project and detect an infrared beam across an area. If
the beam is interrupted (presumably by smoke), the alarm or suppression system is
activated.
In the second, an ionization sensor detects the by-products of combustion as they enter a
small chamber and change its electrical conductivity.
The third category of smoke detectors is the air-aspirating detector. Air-aspirating
detectors are very sophisticated systems, which are used in high-sensitivity areas.
The third major category of fire detection systems is the flame detector. The flame
detector is a sensor that detects the infrared or ultraviolet light produced by an open flame. These
systems require a direct line of sight with the flame and compare the flame's signature to a
database to determine whether or not to activate the alarm and suppression systems.
Fire Suppression
Class A: Fires that involve ordinary combustible fuels such as wood, paper, textiles,
rubber, cloth, and trash. Class A fires are extinguished by agents that interrupt the ability
of the fuel to be ignited. Water and multipurpose dry chemical fire extinguishers are ideal
for these types of fires.
Class B: Fires fueled by combustible liquids or gases, such as solvents, gasoline, paint,
lacquer, and oil. Class B fires are extinguished by agents that remove oxygen from the
fire. Carbon dioxide, multipurpose dry chemical, and halon fire extinguishers are ideal for
these types of fires.
Class C: Fires involving energized electrical equipment or appliances. Class C fires are
extinguished with agents that must be nonconducting.
Carbon dioxide, multipurpose dry chemical, and halon fire extinguishers are ideal for
these types of fires. Never use a water fire extinguisher on a Class C fire.
Class D: Fires fueled by combustible metals, such as magnesium, lithium, and sodium.
Fires of this type require special extinguishing agents and techniques.
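The class-to-agent pairings above can be captured in a simple lookup table. This is a study aid mirroring the text, not fire-safety guidance:

```python
# Lookup table of appropriate extinguishing agents per fire class,
# as listed in the text above.

EXTINGUISHING_AGENTS = {
    "A": {"water", "multipurpose dry chemical"},
    "B": {"carbon dioxide", "multipurpose dry chemical", "halon"},
    "C": {"carbon dioxide", "multipurpose dry chemical", "halon"},
    "D": {"special agents"},   # combustible metals need special techniques
}

def safe_agent(fire_class, agent):
    """Return True if the agent is appropriate for the given fire class."""
    return agent in EXTINGUISHING_AGENTS.get(fire_class, set())
```

Note that `safe_agent("C", "water")` is False, matching the warning never to use a water extinguisher on a Class C fire.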
Manual and automatic fire response can include installed systems designed to apply
suppressive agents. These are usually either sprinkler or gaseous systems. All sprinkler systems
are designed to apply liquid, usually water, to all areas in which a fire has been detected. In
sprinkler systems, the organization can implement wet-pipe, dry-pipe, or pre-action systems. A
wet-pipe system has pressurized water in all pipes and has some form of valve in each protected
area. When the system is activated, the valves are opened allowing water to sprinkle the area.
In contrast, a dry-pipe system contains pressurized air in the pipes rather than water. The
pressurized air holds valves closed, keeping the water away from the target areas. When a fire is
detected and the sprinkler heads are activated, the pressurized air escapes, water fills the pipes,
and exits through the sprinkler heads. This reduces the risk of accidental activation of the system
with its resulting damage.
A third type of sprinkler system is the pre-action system. Unlike either the wet- or dry-
pipe systems, the pre-action system has a two-phase response to a fire. The system is normally
maintained with nothing in the delivery pipes. When a fire has been detected, the first phase is
initiated, and valves allow water to enter the system. A variation of the pre-action system is the
deluge system, in which the valves are kept open, and as soon as the first phase is activated,
valves allow water to be immediately applied to various areas without waiting for the second
phase to trigger the individual heads.
Water mist sprinklers are the newest form of sprinkler systems and rely on microfine
mists instead of traditional shower-type sprays. Water mist systems work like traditional water
systems by reducing the ambient temperature around the flame, thereby minimizing the flame's
ability to sustain the temperature needed to maintain combustion.
Chemical gas systems can be used in the suppression of fires. These systems are either
self-pressurizing or must be pressurized with an additional agent. Until recently there were only
two major types of gaseous systems: carbon dioxide and halon. Carbon dioxide robs a fire of its
oxygen supply. Halon is one of a few chemicals designated as a clean agent, which means that it
does not leave any residue when dry, nor does it interfere with the operation of electrical or
electronic equipment. Unlike carbon dioxide, halon does not rob the fire of its oxygen and
produces instead a chemical reaction with the flame to extinguish it. As a result, it is much safer
than carbon dioxide when people are present and the system is activated. These alternative clean
agents include the following:
FM-200 (very similar to halon 1301) is safe in occupied areas.
FE-13 (trifluoromethane) is one of the newest and safest clean agents.
It is also important to have fire suppression systems, both manual and automatic, inspected and
tested regularly.
Scanning and Analysis Tools
Scanning and analysis tools are part of an attack protocol: they collect the information
that an attacker needs to launch a successful attack. One of the preparatory parts of the attack
protocol is the collection of publicly available information about a potential target, a process
known as footprinting. The attacker uses public Internet data sources to perform keyword
searches to identify the network addresses of the organization.
Port Scanner
Port scanning utilities, or port scanners, are tools used by both attackers and defenders
to identify (or fingerprint) the computers that are active on a network, the ports and services
active on those computers, the functions and roles the machines are fulfilling, and other useful
information. Port scanners can scan for specific types of computers, protocols, or resources, or
their scans can be generic. It is helpful to understand the network environment so that you can
use the tool most suited to the data collection task at hand.
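A minimal TCP connect scan, the simplest kind of port scan described above, might look like the sketch below. The host and port values are illustrative; only scan systems you are authorized to test:

```python
# Sketch of a TCP connect scan: attempt a connection to each port and
# report the ones that accept. Real scanners (e.g., Nmap) use faster and
# stealthier techniques, but the idea is the same.

import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(1, 1024))` would list the well-known ports open on the local machine.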
Operating System Detection Tools
Once the target system's OS is known, all of the vulnerabilities to which it is susceptible
can easily be determined.
There are many tools that use networking protocols to determine a remote computer's
OS.
One specific tool worth mentioning is XProbe, which uses ICMP to determine the
remote OS.
When run, XProbe sends many different ICMP queries to the target host.
As reply packets are received, XProbe matches these responses from the target's TCP/IP
stack against its own internal database of known responses.
Because most OSs have a unique way of responding to ICMP requests, XProbe is very
reliable in finding matches and thus detecting the operating systems of remote computers.
System and network administrators should take note of this and restrict the use of ICMP
through their organization's firewalls and, when possible, within its internal networks.
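Sending raw ICMP probes requires administrative privileges, so the sketch below covers only the matching step XProbe performs: comparing an observed response profile against a signature database. Every signature value here is a made-up placeholder, not a real fingerprint:

```python
# Sketch of OS fingerprint matching: compare observed ICMP response
# traits against a database of known per-OS behaviors. All values are
# hypothetical placeholders.

SIGNATURES = {
    "HypotheticalOS 1.0": {"ttl": 64,  "tos": 0x00, "df_bit": True},
    "HypotheticalOS 2.0": {"ttl": 128, "tos": 0x00, "df_bit": False},
}

def match_os(observed):
    """Return OS names whose known traits match every observed field."""
    return [name for name, sig in SIGNATURES.items()
            if all(sig.get(field) == value
                   for field, value in observed.items())]
```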
Vulnerability Scanners
Active vulnerability scanners scan networks for highly detailed information. An active
scanner:
- initiates traffic on the network in order to determine security holes.
- identifies exposed usernames and groups, shows open network shares, and exposes
configuration problems and other vulnerabilities in servers.
One example is GFI LANguard Network Security Scanner (NSS), which is available as
freeware for noncommercial use. Another example of a vulnerability scanner is Nessus, a
professional freeware utility that uses IP packets to identify the hosts available on the network,
the services (ports) they are offering, the operating system and OS version they are running, the
type of packet filters and firewalls in use, and dozens of other characteristics of the network.
A passive vulnerability scanner is one that listens in on the network and determines
vulnerable versions of both server and client software. Examples include Tenable Network
Security's Passive Vulnerability Scanner (PVS) and Sourcefire's RNA product. Passive scanners
do not require vulnerability analysts to get approval prior to testing; these tools simply monitor
the network connections to and from a server to obtain a list of vulnerable applications.
Packet Sniffers
A packet sniffer is a network tool that collects copies of packets from the network and
analyzes them.
It can provide a network administrator with valuable information for diagnosing and
resolving networking issues.
In the wrong hands, however, a sniffer can be used to eavesdrop on network traffic.
There are both commercial and open-source sniffers: Sniffer is a commercial product,
and Snort is open-source software.
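Capturing live traffic also requires elevated privileges, so this sketch shows the analysis half of a sniffer: decoding the source and destination addresses from a raw IPv4 header, much as a protocol analyzer would after capture:

```python
# Sketch of packet analysis: extract the source and destination IP
# addresses from a captured IPv4 header (fixed 20-byte portion).

import struct

def parse_ipv4_header(raw: bytes):
    """Return (src, dst) dotted-quad addresses from a raw IPv4 header."""
    # Skip the first 12 bytes (version/IHL, TOS, length, ID, flags,
    # TTL, protocol, checksum); the next two 4-byte fields are src, dst.
    src, dst = struct.unpack("!12x4s4s", raw[:20])
    return (".".join(map(str, src)), ".".join(map(str, dst)))
```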
Digital Certificates
A digital certificate is an electronic document or container file that contains a key value and
identifying information about the entity that controls the key. The certificate is often issued and
certified by a third party, usually a certificate authority (CA).
Applications use different types of digital certificates to accomplish their assigned functions, as follows:
The CA application suite issues and uses certificates (keys).
Mail applications use Secure/Multipurpose Internet Mail Extension (S/MIME)
certificates.
Development applications use object-signing certificates to identify signers of object
oriented code and scripts.
Web servers and Web application servers use Secure Sockets Layer (SSL) certificates.
Web clients use client SSL certificates.
Two popular certificate types are those created using Pretty Good Privacy (PGP) and
those created using applications that conform to the International Telecommunication
Union's ITU-T X.509 version 3 standard.
An X.509 v3 certificate binds a distinguished name (DN), which uniquely identifies a
certificate entity, to a user's public key.
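The binding a certificate performs can be sketched conceptually: a DN and public key vouched for by an issuer's signature. The hash-based "signature" below is a stand-in for real CA signing, and all names and values are illustrative:

```python
# Conceptual sketch of certificate issuance and verification: the issuer
# binds a DN to a public key; tampering with either breaks verification.
# A real CA uses asymmetric signatures, not a shared-secret hash.

import hashlib

def issue_certificate(dn: str, public_key: str, ca_secret: str) -> dict:
    payload = f"{dn}|{public_key}"
    signature = hashlib.sha256((payload + ca_secret).encode()).hexdigest()
    return {"subject_dn": dn, "public_key": public_key,
            "signature": signature}

def verify_certificate(cert: dict, ca_secret: str) -> bool:
    payload = f"{cert['subject_dn']}|{cert['public_key']}"
    expected = hashlib.sha256((payload + ca_secret).encode()).hexdigest()
    return cert["signature"] == expected
```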
Hybrid Cryptography Systems
The most common hybrid system is based on the Diffie-Hellman key
exchange, which is a method for exchanging private keys using public key encryption. Diffie-
Hellman key exchange uses asymmetric encryption to exchange session keys.
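The exchange can be sketched with toy-sized numbers; real deployments use parameters thousands of bits long:

```python
# Diffie-Hellman key exchange with tiny illustrative parameters:
# each side publishes g^private mod p, and both derive the same secret
# without ever transmitting it.

# Public parameters (shared openly)
p, g = 23, 5

def dh_shared_secret(private_a, private_b):
    """Simulate both sides of the exchange and return the shared secret."""
    public_a = pow(g, private_a, p)          # Alice sends this to Bob
    public_b = pow(g, private_b, p)          # Bob sends this to Alice
    secret_a = pow(public_b, private_a, p)   # Alice's computation
    secret_b = pow(public_a, private_b, p)   # Bob's computation
    assert secret_a == secret_b              # both arrive at the same key
    return secret_a
```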
Steganography
The word steganography means the art of secret writing. The most popular modern version of
steganography involves hiding information within files that contain digital pictures or other
images.
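The idea can be sketched with least-significant-bit (LSB) hiding, one common steganographic technique, applied here to a plain byte array standing in for pixel data:

```python
# Sketch of LSB steganography: store each message bit in the lowest bit
# of one "pixel" byte, changing each carrier byte by at most 1.

def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Return a copy of `pixels` with `message` embedded in the LSBs."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def reveal(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes previously embedded with hide()."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))
```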
Secure Sockets Layer (SSL)
- Developed by Netscape.
- A protocol that uses public key encryption to secure a channel over the Internet, thus
enabling secure communications. It provides:
data encryption
integrity
server authentication
client authentication
- The SSL protocol uses a handshaking mechanism.
- SSL provides two protocol layers within the TCP framework:
The SSL Record Protocol is responsible for the fragmentation, compression, encryption,
and attachment of an SSL header to the plaintext prior to transmission.
Standard HTTP provides the Internet communication services between client and host
without consideration for encryption of the data that is transmitted between client and
server.
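The Record Protocol steps named above (fragment, compress, encrypt, attach a header) can be sketched as a toy pipeline. The fragment size, XOR "cipher", and header layout below are illustrative, not the real SSL/TLS record format:

```python
# Toy sketch of the Record Protocol pipeline: fragment the plaintext,
# compress each fragment, "encrypt" it (XOR stands in for a real cipher),
# and prepend a small type/length header.

import struct
import zlib

def make_records(plaintext: bytes, key: int, fragment_size: int = 16):
    """Split plaintext into protected records."""
    records = []
    for i in range(0, len(plaintext), fragment_size):
        fragment = plaintext[i:i + fragment_size]
        compressed = zlib.compress(fragment)
        encrypted = bytes(b ^ key for b in compressed)
        header = struct.pack("!BH", 23, len(encrypted))  # type, length
        records.append(header + encrypted)
    return records

def read_records(records, key: int) -> bytes:
    """Reverse the pipeline and reassemble the plaintext."""
    out = b""
    for rec in records:
        _, length = struct.unpack("!BH", rec[:3])
        payload = bytes(b ^ key for b in rec[3:3 + length])
        out += zlib.decompress(payload)
    return out
```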
Secure HTTP (S-HTTP)
Secure HTTP (S-HTTP) provides for the encryption of individual messages transmitted via the
Internet between a client and server.
S-HTTP is the application of SSL over HTTP, which allows the encryption
of all information passing between two computers through a protected and secure virtual
connection.
S-HTTP is designed for sending individual messages over the Internet; therefore, a
session must be established for each individual exchange of data by encrypting and
decrypting the keys.
S-HTTP can provide:
- confidentiality
- authentication
- data integrity