UNIT - 6
Syllabus: Data Security: data control, encrypt everything, regulatory and standards compliance;
Network Security: firewall rules, network intrusion detection; Host Security: system hardening,
antivirus protection, host intrusion detection, data segmentation, credential management;
Compromise response.
Data Security :
● Physical security defines how you control physical access to the servers that support
your infrastructure.
● The cloud still has physical security constraints. After all, there are actual servers
running somewhere.
● When selecting a cloud provider, you should understand their physical security protocols
and the things you need to do on your end to secure your systems against physical
vulnerabilities.
Data Control :
● The big chasm between traditional data centers and the cloud is the location of your data
on someone else’s servers.
● Companies that have outsourced their data centers to a managed services provider have
already crossed part of that chasm; what cloud services add is the inability to see or touch
the servers on which their data is hosted.
● The significance of this change is partly an emotional matter, but it also presents some real
business challenges.
● The main practical problem is that factors that have nothing to do with your business can
compromise your operations and your data.
● For example, any of the following events could create trouble for your infrastructure:
• The cloud provider declares bankruptcy and its servers are seized or it ceases
operations.
• A third party with no relationship to you (or, worse, a competitor) sues your
cloud provider and obtains a blanket subpoena granting access to all servers
owned by the cloud provider.
• Failure of your cloud provider to properly secure portions of its infrastructure—
especially in the maintenance of physical access controls—results in the
compromise of your systems.
● The solution is to do two things you should be doing anyway but are probably lax about:
encrypt everything and keep off-site backups.
• Encrypt sensitive data in your database and in memory. Decrypt it only in
memory for the duration of the need for the data. Encrypt your backups and
encrypt all network communications.
• Choose a second provider and use automated, regular backups (for which
many open source and commercial solutions exist) to make sure any current and
historical data can be recovered even if your cloud provider were to disappear
from the face of the earth.
● Let’s examine how these measures deal with each scenario, one by one.
● When the cloud provider goes down :
○ This scenario has a number of variants: bankruptcy, deciding to take the
business in another direction, or a widespread and extended outage. Whatever is
going on, you risk losing access to your production systems due to the actions of
another company. You also risk that the organization controlling your data might
not protect it in accordance with the service levels to which they may have been
previously committed.
● When a subpoena compels your cloud provider to turn over your data :
○ If the subpoena is directed at you, you obviously have to turn over the data to the
courts regardless of what precautions you take, and these legal requirements
apply whether your data is in the cloud or on your own internal IT infrastructure.
● When your cloud provider fails to adequately protect their network:
○ When you select a cloud provider, you absolutely must understand how they
treat physical, network, and host security. Though it may sound counterintuitive,
the most secure cloud provider is one in which you never know where the
physical server behind your virtual instance is running.
Encrypt Everything :
● In the cloud, your data is stored somewhere; you just don’t know exactly where.
● However, you know some basic parameters:
• Your data lies within a virtual machine guest operating system, and you control the
mechanisms for access to that data.
• Network traffic exchanging data between instances is not visible to other virtual hosts.
• For most cloud storage services, access to data is private by default. Many, including
Amazon S3, nevertheless allow you to make that data public.
● Encrypt your network traffic :
○ No matter how lax your current security practices, you probably have network
traffic encrypted—at least for the most part. A nice feature of the Amazon cloud
is that one virtual server cannot sniff the traffic of another. I still
recommend against relying on this feature, since it may not be true of other
providers. Furthermore, Amazon might roll out a future feature that renders this
protection measure obsolete. You should therefore encrypt all network traffic, not
just web traffic.
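A minimal sketch of what this means in practice, using Python’s standard ssl module to wrap ordinary service-to-service TCP traffic in TLS (the host name and port are placeholders, not values from the text):

import socket
import ssl

# Hypothetical internal service; replace with one of your own back-end hosts.
BACKEND_HOST = "db-proxy.internal.example.com"
BACKEND_PORT = 5433

# Client-side TLS context that verifies the server certificate.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# Wrap a plain TCP socket so every byte on the wire is encrypted.
with socket.create_connection((BACKEND_HOST, BACKEND_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=BACKEND_HOST) as tls_sock:
        tls_sock.sendall(b"ping")
        print(tls_sock.recv(1024))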
● Encrypt your Backups :
○ When you bundle your data for backups, you should be encrypting it using some
kind of strong cryptography, such as PGP. You can then safely store it in a
moderately secure cloud storage environment like Amazon S3, or even in a
completely insecure environment. Encryption eats up CPU. As a result, I
recommend first copying your files in plain text over to a temporary backup
server whose job it is to perform encryption.
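For example, a nightly backup job on that temporary server might shell out to the standard gpg tool and only then push the result off-site; the archive path, passphrase file, and bucket name below are hypothetical, and the aws CLI is assumed to be installed:

import subprocess

ARCHIVE = "/backups/db-dump.tar.gz"          # hypothetical dump produced earlier
PASSPHRASE_FILE = "/etc/backup/passphrase"   # readable only by the backup user

# Symmetric encryption with AES-256; produces ARCHIVE + ".gpg".
subprocess.run(
    ["gpg", "--batch", "--yes",
     "--passphrase-file", PASSPHRASE_FILE,
     "--symmetric", "--cipher-algo", "AES256", ARCHIVE],
    check=True,
)

# Only the encrypted file leaves the backup server (bucket name is a placeholder).
subprocess.run(
    ["aws", "s3", "cp", ARCHIVE + ".gpg", "s3://example-offsite-backups/"],
    check=True,
)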
● Encrypt your File systems :
○ Each virtual server you manage will mount ephemeral storage devices or block
storage devices. The failure to encrypt ephemeral devices poses only a moderate
risk in an EC2 environment, because the EC2 Xen system zeros out that storage
when your instance terminates. Block storage devices, however, persist
independently of any single instance, so you should encrypt the filesystems you
create on them.
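For block storage volumes, one common approach (an assumption here, not something prescribed by the text) is LUKS encryption driven by the standard cryptsetup tool; the device name, key file, and mount point in this one-time provisioning sketch are placeholders, and on later boots only the open and mount steps are repeated:

import subprocess

DEVICE = "/dev/xvdf"               # hypothetical attached block storage volume
KEY_FILE = "/root/.volume-key"     # key material kept outside the machine image
MAPPED_NAME = "securedata"

def run(cmd):
    """Run a command and fail loudly if it returns non-zero."""
    subprocess.run(cmd, check=True)

# One-time setup: format the volume as a LUKS container (destroys existing data).
run(["cryptsetup", "luksFormat", "--batch-mode", "--key-file", KEY_FILE, DEVICE])

# Open the container, create a filesystem on the decrypted mapping, and mount it.
run(["cryptsetup", "open", "--key-file", KEY_FILE, DEVICE, MAPPED_NAME])
run(["mkfs.ext4", f"/dev/mapper/{MAPPED_NAME}"])
run(["mount", f"/dev/mapper/{MAPPED_NAME}", "/mnt/secure"])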
Regulatory and Standards Compliance :
Most problems with regulatory and standards compliance lie not with the cloud, but in the fact
that the regulations and standards written for Internet applications predate the acceptance of
virtualization technologies. In other words, chances are you can meet the spirit of any
particular specification, but you may not be able to meet the letter of the specification.
From a security perspective, you’ll encounter three kinds of issues in standards and regulations:
“How” issues
These result from a standard such as PCI or regulations such as HIPAA or SOX, which
govern how an application of a specific type should operate in order to protect certain
concerns specific to its problem domain.
For example, HIPAA defines how you should handle personally identifying health care
data.
“Where” issues
These result from a directive such as Directive 95/46/EC that governs where you can
store certain information.
One key impact of this particular directive is that the private data on EU citizens may not
be stored in the United States (or any other country that does not treat private data in the
same way as the EU).
“What” issues
These result from standards prescribing very specific components to your infrastructure.
For example, PCI prescribes the use of antivirus software on all servers processing
credit card data.
Network Security
Amazon’s cloud has no perimeter. Instead, EC2 provides security groups that define firewall-like
traffic rules governing what traffic can reach virtual servers in that group. Although I often
speak of security groups as if they were virtual network segments protected by a firewall, they
most definitely are not virtual network segments, due to the following:
• Two servers in two different Amazon EC2 availability zones can operate in the same
security group.
• A server may belong to more than one security group.
• Servers in the same security group may not be able to talk to each other at all.
• Servers in the same security group may not share any IP characteristics—they may
even be in different class address spaces.
• No server in EC2 can see the network traffic bound for other servers (this is not necessarily
true for other cloud systems). If you try placing your virtual Linux server in promiscuous
mode, the only network traffic you will see is traffic originating from or destined for your
server.
Firewall Rules
Typically, a firewall protects the perimeter of one or more network segments. This structure
requires you to move through several layers—or perimeters—of network protection, in the
form of firewalls, to gain access to increasingly sensitive data.
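In EC2, those layers are expressed as security group rules rather than physical firewalls. A hedged sketch using the boto3 library follows; the group IDs are placeholders, the web group accepts HTTPS from anywhere, and the database group accepts MySQL only from the application group:

import boto3

ec2 = boto3.client("ec2")

# Hypothetical group IDs created earlier with create_security_group().
WEB_SG = "sg-0aaa111122223333"
APP_SG = "sg-0bbb111122223333"
DB_SG = "sg-0ccc111122223333"

# Outer perimeter: anyone on the Internet may reach the web tier over HTTPS.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Inner perimeter: only members of the application group may reach MySQL.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)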
Host Security
Host security describes how your server is set up for the following tasks:
• Preventing attacks.
• Minimizing the impact of a successful attack on the overall system.
• Responding to attacks when they occur.
Given the assumption that your services are vulnerable, your most significant tool in
preventing attackers from exploiting a vulnerability once it becomes known is the rapid rollout
of security patches.
In the cloud, rolling out a patch across the infrastructure takes three simple steps:
1. Patch your AMI with the new security fixes.
2. Test the results.
3. Relaunch your virtual servers.
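A rough boto3 sketch of those three steps follows; the instance IDs, AMI name, and instance type are placeholders, and step 2 is reduced to a comment because testing is specific to your application:

import boto3

ec2 = boto3.client("ec2")

PATCHED_INSTANCE = "i-0123456789abcdef0"   # instance you patched and verified by hand
OLD_INSTANCES = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"]

# 1. Capture the patched configuration as a new machine image (AMI).
image = ec2.create_image(InstanceId=PATCHED_INSTANCE, Name="app-server-patched")
new_ami = image["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[new_ami])

# 2. Test the results (omitted: launch one instance from new_ami and run your checks).

# 3. Relaunch: start replacements from the patched AMI, then retire the old servers.
ec2.run_instances(ImageId=new_ami, MinCount=len(OLD_INSTANCES),
                  MaxCount=len(OLD_INSTANCES), InstanceType="t3.medium")
ec2.terminate_instances(InstanceIds=OLD_INSTANCES)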
System Hardening
Prevention begins when you set up your machine image. As you get going, you will experiment
with different configurations and constantly rebuild images. Once you have found a
configuration that works for a particular service profile, you should harden the system before
creating your image.
Server hardening is the process of disabling or removing unnecessary services and eliminating
unimportant user accounts. Tools such as Bastille Linux can make the process of hardening
your machine images much more efficient. Once you install Bastille Linux, you execute the
interactive scripts that ask you questions about your server. It then proceeds to disable services
and accounts. In particular, it makes sure that your hardened system meets the following
criteria:
• No network services are running except those necessary to support the server’s function.
• No user accounts are enabled on the server except those necessary to support the services
running on the server or to provide access for users who need it.
• All configuration files for common server software are configured to the most secure
settings.
• All necessary services run under a nonprivileged role user account (e.g., run MySQL as
the mysql user, not root).
• When possible, run services in a restricted filesystem, such as a chroot jail.
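A hardening tool is ultimately just automating checks like these. The small audit sketch below, using only the Python standard library and run on the server itself, reports accounts that still have a login shell and the TCP services currently listening, so you can compare the output against the criteria above:

import subprocess

# Accounts that can still log in: anything in /etc/passwd with a real shell.
with open("/etc/passwd") as passwd:
    for line in passwd:
        user, _, _, _, _, _, shell = line.strip().split(":")
        if shell not in ("/usr/sbin/nologin", "/sbin/nologin", "/bin/false"):
            print(f"login-capable account: {user} ({shell})")

# Network services currently listening for TCP connections.
listening = subprocess.run(["ss", "-tln"], capture_output=True, text=True)
print(listening.stdout)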
Antivirus Protection
Some regulations and standards require the implementation of an antivirus (AV) system on
your servers. It’s definitely a controversial issue, since an AV system with an exploit is itself an
attack vector and, on some operating systems, the ratio of AV exploits to known viruses
is relatively high.
Personally, I have mixed feelings about AV systems. They are definitely necessary in some
circumstances, but a risk in others. For example, if you are accepting the upload of photos or
other files that could be used to deliver viruses that are then served to the public, you have an
obligation to use some kind of antivirus software in order to protect your site from becoming
a mechanism for spreading the virus.
Unfortunately, not all AV systems are created equal. Some are written better than others,
and some protect you much better than others. Finally, some servers simply don’t have an
operational profile that makes viruses, worms, and trojans viable attack vectors.
Host Intrusion Detection
A host intrusion detection system (HIDS) monitors the state of your server and watches for
signs of compromise. In the cloud, you should always opt for a centralized configuration: it
centralizes your rules and analysis so that it is much easier to keep your HIDS infrastructure
up to date.
The downside of an HIDS is that it requires CPU power to operate, and thus can eat up
resources on your server. By going with a centralized deployment model, however, you can
push a lot of that processing onto a specialized intrusion detection server.
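The centralized model can be illustrated with a toy agent: a lightweight process on each host watches for suspicious events and forwards them to a dedicated analysis server, keeping the heavy processing off your application servers. The collector URL below is hypothetical, and a real deployment would use a purpose-built HIDS such as OSSEC rather than this sketch:

import json
import time
import urllib.request

COLLECTOR = "http://ids.internal.example.com:8080/alerts"   # hypothetical central IDS server
AUTH_LOG = "/var/log/auth.log"

def send_alert(line):
    """Forward one suspicious log line to the central analysis server."""
    body = json.dumps({"host": "web-01", "event": line}).encode()
    req = urllib.request.Request(COLLECTOR, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

# Tail the auth log and report failed SSH logins; the analysis happens centrally.
with open(AUTH_LOG) as log:
    log.seek(0, 2)                      # start at the end of the file
    while True:
        line = log.readline()
        if not line:
            time.sleep(1)
            continue
        if "Failed password" in line:
            send_alert(line.strip())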
Data Segmentation
In addition to assuming that the services on your servers have security exploits, you should
further assume that eventually one of them will be compromised. Obviously, you never want
any server to be compromised. The best infrastructure, however, is tolerant of—in fact, it
assumes—the compromise of any individual node. This tolerance is not meant to encourage
lax security for individual servers, but is meant to minimize the impact of the compromise of
specific nodes. Making this assumption provides you with a system that has the following
advantages:
• Access to your most sensitive data requires a full system breach.
• The compromise of the entire system requires multiple attack vectors with potentially
different skill sets.
• The downtime associated with the compromise of an individual node is negligible or
nonexistent.
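One concrete way to get these properties is to keep encrypted data and the key needed to read it on different nodes, so that reaching the sensitive data really does require a full system breach. The sketch below uses the cryptography package’s Fernet recipe (an assumption for illustration, not something prescribed by the text); only a separate, tightly locked-down service holds the key, while the web and database tiers store ciphertext:

# Runs only on the dedicated decryption node; the web/database tier never sees this key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # generated once and stored only on this node
cipher = Fernet(key)

def encrypt_card_number(card_number: str) -> bytes:
    """Called via an internal API; the caller stores only the returned ciphertext."""
    return cipher.encrypt(card_number.encode())

def decrypt_card_number(token: bytes) -> str:
    """Only this segmented service can turn stored ciphertext back into a card number."""
    return cipher.decrypt(token).decode()

# Example: the web tier submits a value, keeps the ciphertext, and discards the plaintext.
token = encrypt_card_number("4111111111111111")
print(decrypt_card_number(token))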
Credential Management
Your machine images should have no user accounts embedded in them. In fact, you should
never allow password-based shell access to your virtual servers. The most secure approach
to providing access to virtual servers is the dynamic delivery of public SSH keys to target
servers. In other words, if someone needs access to a server, you should provide her
credentials to the server when it starts up or via an administrative interface instead of
embedding that information in the machine image.
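A sketch of that dynamic delivery follows: a boot-time script pulls a public key passed in as EC2 user data from the instance metadata service and appends it to an administrative account’s authorized_keys file. The account name and the assumption that the user data holds exactly one public key are illustrative choices, not requirements from the text:

import os
import urllib.request

# EC2 instance metadata endpoint; here the user data is assumed to be one public key.
USER_DATA_URL = "http://169.254.169.254/latest/user-data"
SSH_DIR = "/home/admin/.ssh"                       # hypothetical administrative account
AUTHORIZED_KEYS = os.path.join(SSH_DIR, "authorized_keys")

public_key = urllib.request.urlopen(USER_DATA_URL, timeout=5).read().decode().strip()

os.makedirs(SSH_DIR, mode=0o700, exist_ok=True)
with open(AUTHORIZED_KEYS, "a") as keys:
    keys.write(public_key + "\n")
os.chmod(AUTHORIZED_KEYS, 0o600)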
Another approach is to use existing cloud infrastructure management tools or build your own
that enable you to store user credentials outside the cloud and dynamically add and remove
users to your cloud servers at runtime. This approach, however, requires an administrative
service running on each host and thus represents an extra attack vector against your server.
Compromise Response
Because you should be running an intrusion detection system, you should know very quickly
if and when an actual compromise occurs. If you respond rapidly, you can take advantage of
the cloud to eliminate exploit-based downtime in your infrastructure.
When you detect a compromise on a physical server, the standard operating procedure is a
painful, manual process:
1. Remove intruder access to the system, typically by cutting the server off from the rest of
the network.
2. Identify the attack vector. You don’t want to simply shut down and start over, because
the vulnerability in question could be on any number of servers. Furthermore, the
intruder very likely left a rootkit or other software to permit a renewed intrusion after you
remove the original problem that let him in. It is therefore critical to identify how the
intruder compromised the system, if that compromise gave him the ability to compromise
other systems, and if other systems have the same vulnerability.
3. Wipe the server clean and start over. This step includes patching the original vulnerability
and rebuilding the system from the most recent uncompromised backup.
4. Launch the server back into service and repeat the process for any server that has the same
attack vector.
This process is very labor intensive and can take a long time. In the cloud, the response is
much simpler: you can snapshot the compromised server’s storage for later forensic analysis,
terminate the instance, and launch a clean replacement from your machine image in minutes.
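A hedged boto3 sketch of that simpler response follows; the instance, volume, AMI, and quarantine security group identifiers are placeholders:

import boto3

ec2 = boto3.client("ec2")

COMPROMISED_INSTANCE = "i-0badbadbadbadbad0"
COMPROMISED_VOLUME = "vol-0123456789abcdef0"
CLEAN_AMI = "ami-0cafecafecafecafe"
QUARANTINE_SG = "sg-0forensicsonly000"     # group that allows no inbound traffic

# Preserve the evidence: snapshot the volume for offline forensic analysis.
ec2.create_snapshot(VolumeId=COMPROMISED_VOLUME,
                    Description="forensic copy of compromised server")

# Cut the intruder off by swapping the instance into a quarantine security group.
ec2.modify_instance_attribute(InstanceId=COMPROMISED_INSTANCE, Groups=[QUARANTINE_SG])

# Replace the node from a known-clean, patched machine image, then retire the old one.
ec2.run_instances(ImageId=CLEAN_AMI, MinCount=1, MaxCount=1, InstanceType="t3.medium")
ec2.terminate_instances(InstanceIds=[COMPROMISED_INSTANCE])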
Section B:
1) Explain data control in data security.
2) List the basic parameters of encryption and explain how to encrypt different types of data.
3) Explain the issues of regulatory and standards compliance.
4) Explain firewalls with the help of an example.
5) Explain network intrusion detection in detail.
6) Explain host security and write a short note on:
a) System hardening
b) Antivirus protection
c) Host intrusion detection
d) Data segmentation
e) Credential management
7) Explain the process of compromise response.