DDoS Attacks


Demystifying DoS attacks

Background:
Over the last two years, the term “DDoS attack” has made its way into the public media stream.
Today even non-technical people are aware of the existence and potential impact of such attacks. In
years past, DDoS attacks have been dominated by “volumetric” attacks usually generated by
compromised PCs that are grouped together in large-scale botnets. Some well-publicized examples
include the DDoS attacks against UK-based online betting sites where the hackers extorted the
gambling firms, and the politically motivated DDoS attacks against the Georgian government.

Not only are attacks increasing in size, but they are also increasing in complexity as new types of
DDoS attacks continue to emerge and threaten the availability of Internet-facing businesses and
services. Conduct a quick search on the Internet and it’s not difficult to find media coverage
regarding online banking, e-commerce and even social media sites that have been victims of
application-layer DDoS attacks. The motivation? Most of the time it’s for financial gain, but other
incentives include political "hacktivism" or just plain old ego. And thanks to a growing trend of do-it-
yourself attack tools and “botnets for hire,” even a computer novice can execute a successful DDoS
attack.

For example, possibly one of the most publicized series of DDoS attacks happened in 2010 when a
group of Wikileaks supporters and hacktivists known as "Anonymous" used social media sites to
recruit and instruct supporters on how to download, configure and execute an application-layer DoS
attack against several targets (the group called these attacks “Operation Payback”). For those
supporters who were not computer-savvy enough to conduct the DDoS attacks themselves, there
was an option to “Volunteer your PC for the Cause,” in which case a member of Anonymous would
take over the supporter’s PC and make it part of the botnet!

1.1 What is a Denial of Service?


As the name implies, a DoS attack is a denial of service to a victim trying to access a resource. In many
cases the attack relies on a protocol flaw, some form of network amplification, or both.

Denial of Service is also an attack on a computer system or network that causes a loss of service to
users, typically the loss of network connectivity and services through the consumption of the victim
network's bandwidth or the overloading of the victim system's computational resources.

The motivation for DoS attacks is not to break into a system. Instead, it is to deny the legitimate use
of the system or network to others who need its services. One can say that this will typically happen
through one of the following means:

1. Denying critical resources to intended users.
2. Denying communication between systems.
3. Bringing the network or the system down, or having it operate at a reduced speed that affects
productivity.
4. Hanging the system, which is more dangerous than crashing it, since there is no automatic reboot;
productivity can be disrupted indefinitely.

The DoS concept is easily applied to the networked world. Routers and servers can handle a finite
amount of traffic at any given time based on factors such as hardware performance, memory and
bandwidth. If this limit or rate is surpassed, new requests will be rejected. As a result, legitimate
traffic will be ignored and the device's users will be denied access. So, an attacker who wishes to
disrupt a specific service or device can do so by simply overwhelming the target with packets
designed to consume all available resources.

A DoS is not a traditional "crack", in which the goal of the attacker is to gain unauthorized privileged
access, but it can be just as malicious. The point of DoS is disruption and inconvenience. Success is
measured by how long the chaos lasts. When turned against crucial targets, such as root DNS
servers, the attacks can be very serious in nature. DoS threats are often among the first topics that
come up when discussing the concept of information warfare. They are simple to set up, difficult to
stop, and very efficient.

http://www.youtube.com/watch?feature=player_detailpage&v=jc-S4fa5BxQ

1.2 Distributed Denial of Service Attack


A DDoS can be thought of as an advanced form of a traditional DoS attack. Instead of one attacker
flooding a target with traffic, numerous machines are used in a “master-slave”, multi-tiered
configuration.

The process is relatively simple. A cracker breaks into a large number of Internet-connected
computers and installs the DDoS software package (of which there are several variations). The DDoS
software allows the attacker to remotely control the compromised computer, thereby making it a
“slave”. From a “master” device, the cracker can inform the slaves of a target and direct the attack.
Thousands of machines can be controlled from a single point of contact. Start time, stop time, target
address and attack type can all be communicated to slave computers from the master machine via
the Internet. Dedicated to this single purpose, one machine can generate several megabytes of
traffic, and several hundred machines can generate gigabytes. With this in mind, it's easy to see
how devastating this sudden flood of activity can be for virtually any target.

The network exploit techniques vary. With enough machines participating, any type of attack will be
effective: ICMP echo requests directed toward a broadcast address (smurf attacks), bogus HTTP
requests, fragmented packets, or simply random traffic. The target will eventually become so
overwhelmed that it crashes, or its quality of service degrades until it is worthless. The attack can be
directed at any networked device:
routers (effectively targeting an entire network), servers (Web, mail, DNS) or specific machines
(firewalls, IDS).

But what makes a DDoS difficult to deal with? Obviously the sudden, rapid flood of traffic will catch
the eye of any competent administrator. Unfortunately, though, all of this traffic will likely be
spoofed, a technique in which the true source address is hidden. An inspection of these
packets will yield little information other than the router that sent it (your upstream router). This
means there isn’t an obvious rule that will allow the firewall to protect against the attack, as the
traffic often appears legitimate and can come from anywhere.
Figure: Sample anatomy of a DDoS attack

2.0 Various DoS types

2.1 Direct Flooding Attacks


The simplest case of a DoS attack is the direct flooding attack. In this case, the attacker sends
packets directly from his computer(s) to the victim’s site. In the attack, the source address of the
packets may be forged. There are many tools available to allow this type of attack for a variety of
protocols including ICMP, UDP and TCP. Some common tools include stream2, synhose, synk7,
synsend, and hping2. This type of attack usually has an amplification factor of 1 to 1; that is, for each
packet sent by the attacker, one packet is received by the victim.

2.2 Smurf and Fraggle Attacks


One of the earliest reflective attacks was the smurf attack. The smurf attack is performed by
sending an ICMP echo request (ping) packet, with the victim's address as the source address, to a
network's broadcast address. The fraggle attack is similar except that it uses UDP packets. If the
network router and network servers are configured to respond to a ping sent to the broadcast
address, all the servers on that subnet will respond to the forged source IP address, flooding the
victim's site.

2.3 Teardrop attacks


A Teardrop attack involves sending mangled IP fragments with overlapping, over-sized payloads to
the target machine. This can crash various operating systems due to a bug in their TCP/IP
fragmentation re-assembly code. Windows 3.1x, Windows 95 and Windows NT operating systems,
as well as Linux kernels prior to versions 2.0.32 and 2.1.63, are vulnerable to this attack. Around
September 2009, a vulnerability in Windows Vista was also referred to as a "teardrop attack," but
that attack targeted SMB2, which sits at a higher layer than the IP fragments the original teardrop used.
2.4 Peer-to-peer attacks
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS
attacks. The most aggressive of these peer-to-peer DDoS attacks exploits DC++. Peer-to-peer attacks
are different from regular botnet-based attacks. With peer-to-peer there is no botnet and the
attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a
"puppet master," instructing clients of large peer-to-peer file sharing hubs to disconnect from their
peer-to-peer network and to connect to the victim's website instead. As a result, several thousand
computers may aggressively try to connect to a target website. While a typical web server can
handle a few hundred connections per second before performance begins to degrade, most web
servers fail almost instantly under five or six thousand connections per second. With a moderately
large peer-to-peer attack, a site could potentially be hit with up to 750,000 connections in short
order. The targeted web server will be plugged up by the incoming connections.
While peer-to-peer attacks are easy to identify with signatures, the large number of IP addresses
that need to be blocked (often over 250,000 during the course of a large-scale attack) means that
this type of attack can overwhelm mitigation defenses. Even if a mitigation device can keep blocking
IP addresses, there are other problems to consider. For instance, there is a brief moment where the
connection is opened on the server side before the signature itself comes through. Only once the
connection is opened to the server can the identifying signature be sent and detected, and the
connection torn down. Even tearing down connections takes server resources and can harm the
server. This method of attack can be prevented by specifying in the peer-to-peer protocol which
ports are allowed or not. If port 80 is not allowed, the possibilities for attack on websites can be very
limited.

2.5 ICMP
The reflective ICMP attack uses public sites that respond to ICMP echo request packets to flood the
victim's site. Most well-known public sites block ICMP to their networks as a result. However,
routers respond very efficiently to ICMP and, if not properly rate limited, can be an excellent
reflective media. This attack by itself does not amplify the packets sent to the victim’s site. If used
in conjunction with a remote controlled network of computers, this attack can be very difficult to
trace.

2.6 TCP SYN


The TCP SYN flood attack is a protocol violation attack that is used in several variations. In the
simplest case, an attacker sends the first packet (with the SYN bit set) of the well-known TCP three-
way handshake. The victim responds with the second packet, sent back to the source address with
the SYN and ACK bits set. The attacker never responds to the reply packet, either on purpose or because the
source address of the packet is forged. In the original attack, the victim’s TCP receive queues would
be filled up, denying new TCP connections. Most modern UNIX and Windows implementations have
fixed this issue by increasing the queue size and rate limiting the number of TCP SYN packets
allowed. TCP SYN cookies are another way to mitigate this type of attack, using cryptographic
techniques to create the server's initial sequence number. SYN cookie TCP stack implementations
are available for many popular operating systems. A variation to this attack uses public servers as a
reflective media to flood the victim with TCP SYN ACK packets. In this case, the attacker spoofs the
source address of the TCP SYN packet with the victim’s address. The packet is sent to a public server
that provides a public TCP service (such as HTTP). The server sends a TCP SYN ACK packet to the
victim's host. The victim, having not sent the original packet, either ignores the packets or sends a
TCP RST packet. The technique can achieve amplification factors of 3 to 5 through the retry packets
sent by the reflection servers.
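
To make the SYN cookie idea mentioned above concrete, the following is a minimal Python sketch under simplified assumptions (it is not the implementation used by any particular operating system): the server derives its initial sequence number from a keyed hash over the connection 4-tuple and a coarse time counter, so it can verify the returning ACK without keeping a half-open connection in its queue.

import hashlib
import hmac
import time

SECRET = b"per-boot server secret"   # hypothetical key known only to the server

def syn_cookie(src_ip, src_port, dst_ip, dst_port, mss_index):
    # 24-bit keyed hash over the 4-tuple and a counter that changes every
    # 64 seconds, with a small MSS index packed into the low 8 bits.
    t = int(time.time()) >> 6
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
    tag = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:3], "big")
    return ((tag << 8) | (mss_index & 0xFF)) & 0xFFFFFFFF

def verify_syn_cookie(cookie, src_ip, src_port, dst_ip, dst_port):
    # Accept the current or previous time window; return the MSS index
    # encoded in the cookie, or None if the cookie does not verify.
    now = int(time.time()) >> 6
    for t in (now, now - 1):
        msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
        tag = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:3], "big")
        if cookie >> 8 == tag:
            return cookie & 0xFF
    return None

Because the cookie can be recomputed from the ACK itself, half-open connections created by spoofed SYNs consume no server memory.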

2.7 UDP attacks


The UDP protocol can be very efficient for DoS/DDoS attacks. UDP is a stateless protocol and does
not have any acknowledgement mechanism by design. Due to its status as a connectionless
protocol, when data packets are sent via UDP, no handshakes are required between sender and
receiver, and all received packets must be processed. This can lead to bandwidth saturation when a
large number of UDP packets are sent to a victim system, preventing legitimate service requests
from reaching it. The Slammer worm spread extremely quickly in part because its single-packet UDP
exploit did not require a response from the targeted computer.

2.8 TTL Expiration


The TTL expiration attack relies on ICMP control messages to flood the victim. In this attack, the
source address is forged to match the victim’s address. The TTL for the packet is set to a low value
that will expire in transit at a high speed router. When the TTL of the packet reaches zero, the
router drops the packet and sends an ICMP TTL expired message to the source address, in this case
the victim's site. Since TTL expiration is often handled on the line card in an ASIC, this can be an
extremely fast reflective media. The best defense against this type of attack is rate limiting ICMP on all routers in the
service provider’s network. Some network equipment vendors are now offering the ability to turn
off TTL expiration processing, with the side effect of breaking traceroute.

2.9 Fragmentation Attacks


Packet fragmentation can be used in two distinct areas: evasion of IDS detection and as a DoS
mechanism. As a DoS mechanism, fragmentation is used to exhaust a system’s resources while
trying to reassemble the packets. These types of attacks have occurred against Check Point firewalls,
Cisco routers and Windows computers.

2.10 Remote Controlled Network Attacks


Remote controlled network attacks involve the attacker compromising a series of computers and
placing an application or agent on the computers. The computer then listens for commands from a
central control computer. The compromise of computers can either be done manually or
automatically through a worm or virus. Typical control channels include IRC channels, direct port
communication, or even ICMP ping packets. Other versions can operate almost completely
stealthily. They can spoof both the source and destination addresses. The zombie listens passively
(in non-promiscuous mode) for TCP SYN packets on different destination ports in a specific order.
When the ports are matched, either from a specific IP address or from any IP address, a user-defined
function is called. The attacker could use the packet header fields to determine what command to
run and what IP address to attack. Attacks can be launched from the compromised computers either
directly at a target or through one of the reflective media described below. Remote controlled
attacks are very difficult to trace back to the original control computer. A distributed reflective DoS
attack is especially difficult to trace and is explained in detail in a later section.
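
From the defender's side, the port-sequence trigger described above can be watched for. The Python sketch below uses a made-up port sequence rather than one taken from any real agent, and simply flags a source that sends SYNs to the monitored ports in the expected order.

# Hypothetical trigger sequence; a real agent's ports would not be known in advance.
TRIGGER_SEQUENCE = (1111, 2222, 3333)

progress = {}   # source IP -> position reached within the trigger sequence

def observe_syn(src_ip, dst_port):
    """Return True when src_ip has hit the monitored ports in order."""
    step = progress.get(src_ip, 0)
    if dst_port == TRIGGER_SEQUENCE[step]:
        step += 1
        if step == len(TRIGGER_SEQUENCE):
            progress[src_ip] = 0
            return True
        progress[src_ip] = step
    else:
        progress[src_ip] = 0
    return False

# Example: the third SYN in the right order trips the alert.
for port in (1111, 2222, 3333):
    if observe_syn("203.0.113.5", port):
        print("possible covert trigger from 203.0.113.5")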
2.10.1 Encryption
Although encryption is a necessary security tool to protect the data of organizations and individuals,
criminals have used it for decades to hide the secrets of their misdeeds. After security analysts and
law enforcement agencies discovered that botmasters utilize unencrypted IRC channel directives to
control botnets, attackers now encrypt the command and control signals of their botnets.

2.10.2 Fast-Flux
The evolution of the technology that attackers are taking advantage of continues today with the
recent trend in fast-flux networks. Here, botnets manipulate DNS records to hide malicious Web
sites behind a rapidly changing network of compromised hosts acting as proxies. The fast-flux trend
reflects the need for attackers to try to mask the source of their attacks so that they are able to
sustain the botnet for as long as possible.

2.11 Reflective Flooding Attacks


Reflective attacks forge the source address of the IP packets with the victim’s IP address and send
them to an intermediate host. When the intermediate host sends a reply, it is sent to the victim’s
destination address, flooding the victim. Depending on the type of protocol used and the
application and configuration involved, amplification factors of 3 to several hundred are possible.
Reflective attacks can be difficult to trace to the original attacker because the flood packets are
actually sent from intermediate servers. In many types of reflective attacks, the intermediate
servers are usually well-known, public servers such as www.amazon.com, www.cnn.com, etc. The
victim's service provider cannot block access to these sites and many times ends up blocking all
traffic to the victim's site in order to let other network traffic get through.
2.11.1 DRDoS
A distributed reflective DoS (DRDoS) attack uses a remote controlled collection of computers to
spray spoofed packets to a reflective media, typically servers or routers in one of the reflective
attacks described above. In a DRDoS UDP or TCP flood attack the controlling computer is two layers
removed from the packets received at the victim's site, which makes this attack especially difficult
to trace.

While the DNS servers utilized in these types of attacks were not compromised, they did have a flaw
that allowed them to be used as reflectors: they were open, recursive DNS servers. That is, they
performed all of the recursive queries necessary to service a DNS client without requiring that the
client be from the same network as they were. This poses a problem similar to that of open mail
relays. As such, network operators should view open, recursive DNS servers as being just as
important to secure as open mail relays.
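
As a defensive illustration, the short Python sketch below checks whether a given name server will recurse for an arbitrary client, which is exactly the property that makes it usable as a reflector. It assumes the third-party dnspython package; the server address shown is a placeholder.

import dns.flags
import dns.message
import dns.query   # from the third-party "dnspython" package

def is_open_recursive(server_ip, probe_name="example.com."):
    """Return True if the server performs recursion for outside clients."""
    query = dns.message.make_query(probe_name, "A")
    query.flags |= dns.flags.RD                    # ask the server to recurse
    try:
        response = dns.query.udp(query, server_ip, timeout=3)
    except Exception:
        return False                               # no usable answer
    # Recursion-available flag set plus an actual answer means it recursed for us.
    return bool(response.flags & dns.flags.RA) and len(response.answer) > 0

if __name__ == "__main__":
    print(is_open_recursive("192.0.2.53"))          # placeholder address to audit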

3.0 DDoS Attacks Using Tools


Background:

Denial of service attack programs, root kits, and network sniffers have been around in the computer
underground for a very long time. They have not gained nearly the same level of attention by the
general public as did the Morris Internet Worm of 1988, but have slowly progressed in their
development. As more and more systems have come to be required for business, research,
education, the basic functioning of government, and now entertainment and commerce from
people’s homes, the increasingly large number of vulnerable systems has converged with the
development of these tools to create a situation that resulted in distributed denial of service attacks
that took down the largest e-commerce and media sites on the Internet. Meanwhile, researchers say
the uptick in recent years of DDoS attacks used for political 'hacktivism,' extortion and other criminal
purposes can be attributed, in part, to the proliferation of DDoS tools.

3.1 List of DDoS Tools used widely by Attackers


3.1.1 LOIC DDoS

The LOIC tool has been in the news for quite some time now. LOIC (Low Orbit Ion Cannon) is
an open source network stress testing application written in C#. LOIC performs a denial-of-service
(DoS) attack (or, when used by multiple individuals, a DDoS attack) on a target site by flooding the
server with TCP or UDP packets with the intention of disrupting the service of a particular host. The
code also issues HTTP requests to the target site and contains logic intended to avoid adversely
affecting the browser being used. Target changes are communicated to participants via an IRC
channel. From the looks of it, the code could easily be modified to "autofire" rather than require a
user to choose to participate. Detailed information on LOIC can be found at the link below:

http://www.simpleweb.org/reports/loic-report.pdf

3.1.2 Dirt Jumper

Dirt Jumper is a bot that performs DDoS attacks on URLs provided by its command-and-control
(C&C) server. Each infected system makes an outbound connection to the C&C and receives
instructions on which sites to attack. Analysis revealed that this particular piece of malware was
launching DDoS attacks, and there is direct evidence of DDoS attacks on two Russian websites. One
of these was a gaming website; the other was involved in selling a popular smartphone.

Further research determined that this malware was also used in attacks on yet another Russian
gaming site, test attacks on various other sites, attacks on a large corporation's load balancer, and a
damaging attack on a Russian electronic trading platform. Just like many other DDoS bot families,
Dirt Jumper (a.k.a. Russkill) continues to undergo active development to help feed a market that is
hungry for DDoS services.

3.1.3 Apache Killer

The developers behind the open source Apache Foundation issued a warning for all users of the
Apache HTTPD Web Server, as an attack tool had been made available on the Internet and had
already been spotted in active use. The bug in question is a denial-of-service vulnerability that
allows the attacker to execute the attack remotely and consume a great amount of memory and
CPU with only a modest number of requests directed at the Web server. And even though the
vulnerability was spotted more than four years ago by Google security engineer Michal Zalewski, it
had never been patched. The attack can be deployed against all versions in the 1.3 and 2.0 lines, but
as the Foundation no longer supports the 1.3 line, a patch will be issued only for Apache 2.0 and 2.2.
3.1.4 THC SSL DoS/DDoS

THC-SSL-DOS is a tool to verify the performance of SSL. Establishing a secure SSL connection requires
roughly 15 times more processing power on the server than on the client. THC-SSL-DOS exploits this
asymmetric property by overloading the server and knocking it off the Internet. This problem affects
all SSL implementations today. Vendors have been aware of this problem since 2003 and the topic
has been widely discussed. The attack further exploits the SSL secure renegotiation feature to
trigger thousands of renegotiations via a single TCP connection.

3.1.4.1 Comparing flood DDoS vs. SSL-exhaustion attacks

A traditional flood DDoS attack cannot be mounted from a single DSL connection. This is because the
bandwidth of a server is far superior to the bandwidth of a DSL connection: A DSL connection is not
an equal opponent to challenge the bandwidth of a server.

This is turned upside down for THC-SSL-DOS: the processing cost of SSL handshakes is far lower on
the client side, so a laptop on a DSL connection can challenge a server on a 30 Gbit/s link. Traditional
DDoS attacks based on flooding are suboptimal because servers are prepared to handle large
amounts of traffic; clients are constantly sending requests to the server even when not under
attack.

The SSL handshake is only done at the beginning of a secure session and only if security is required.
Servers are not prepared to handle large numbers of SSL handshakes. The worst attack scenario is
an SSL-exhaustion attack mounted from thousands of clients (SSL-DDoS).

DDoS Reactive Mechanisms

The reactive mechanisms (also referred to as Early Warning Systems) try to detect the attack and
respond to it immediately. Hence, they restrict the impact of the attack on the victim. Again, there is
the danger of characterizing a legitimate connection as an attack. For that reason it is necessary for
researchers to be very careful.
The main detection strategies are signature detection, anomaly detection, and hybrid systems.

Signature-based methods search for patterns (signatures) in observed network traffic that match
known attack signatures from a database. The advantage of these methods is that they can easily
and reliably detect known attacks; the drawback is that attacks without a matching signature go
undetected, and the signature database must always be kept up to date in order to retain the
reliability of the system.

Sample of signature-based detection:
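
As a purely illustrative Python sketch (the fields and rules below are simplified stand-ins, not signatures from any real IDS), signature matching amounts to comparing observed packets against a database of known patterns:

# Each signature names the fields a packet must carry to match it.
SIGNATURES = [
    {"name": "smurf reply flood", "proto": "icmp", "icmp_type": 0},
    {"name": "SYN flood", "proto": "tcp", "flags": "S", "dst_port": 80},
]

def matches(packet, signature):
    return all(packet.get(field) == value
               for field, value in signature.items() if field != "name")

def detect(packets):
    for packet in packets:
        for signature in SIGNATURES:
            if matches(packet, signature):
                yield signature["name"], packet

# A spoofed SYN toward port 80 trips the second rule.
sample = [{"proto": "tcp", "flags": "S", "dst_port": 80, "src": "198.51.100.7"}]
for name, pkt in detect(sample):
    print(name, pkt["src"])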


Anomaly-based methods compare the parameters of the observed network traffic with normal
traffic. Hence it is possible for new attacks to be detected. However, in order to prevent false
alarms, the model of "normal traffic" must always be kept updated and the threshold for
categorizing an anomaly must be properly adjusted.

Sample of anomaly-based detection:
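
Again for illustration only, the Python sketch below flags an interval whose packet count deviates too far from a rolling baseline of recent traffic; the window size and threshold are arbitrary values that, as noted above, must be tuned to avoid false alarms.

from collections import deque

class RateAnomalyDetector:
    """Flag intervals whose packet count exceeds the recent mean by a factor."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)   # rolling model of "normal traffic"
        self.threshold = threshold

    def observe(self, packets_in_interval):
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(packets_in_interval)
        if not baseline:
            return False                       # not enough data to judge yet
        return packets_in_interval > self.threshold * baseline

detector = RateAnomalyDetector()
for count in (100, 110, 95, 105, 5000):        # the last interval simulates a flood
    if detector.observe(count):
        print("possible flood:", count, "packets in this interval")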

Finally, hybrid systems combine both of these methods. Such systems update their signature
database with attacks detected by anomaly detection. Again the danger is great, because an attacker
can fool the system into characterizing normal traffic as an attack. In that case the Intrusion
Detection System (IDS) itself becomes an attack tool. Thus IDS designers must be very careful,
because their work can boomerang.

After detecting the attack, the reactive mechanisms respond to it. Relieving the impact of the
attack is the primary concern. Some mechanisms react by limiting the accepted traffic rate. This
means that legitimate traffic is also blocked. In this case the solution comes from traceback
techniques that try to identify the attacker. If attackers are identified, despite their efforts to spoof
their address, then it is easy to filter their traffic. Filtering is efficient only if attackers' detection is
correct. In any other case filtering can become an attacker's tool.

Reacting to DoS/DDoS
Unfortunately, the options are somewhat limited because most DDoS attacks use spoofed source
addresses that are likely generated at random. So what can be done?

Create a whitelist of the IP addresses and protocols you must allow if prioritizing traffic during an
attack.

The "ip verify unicast reverse-path" (or non-Cisco equivalent) command should be enabled on the
input interface of the upstream connection. This feature drops spoofed packets, a major difficulty in
defeating DDoS attacks, before they can be routed. Additionally, make sure incoming traffic with
source addresses from reserved ranges (e.g., 192.168.0.0/16) is blocked. This filter will drop packets
whose sources are obviously incorrect.

Ingress and egress filtering techniques are also crucial to the prevention of DDoS attacks. These
simple ACLs, if properly and consistently implemented by ISPs and large networks, could eliminate
spoofed packets from reaching the Internet, greatly reducing the time involved in tracking down
attackers. The filters, when placed on border routers, ensure that incoming traffic does not have a
source address originating from the private network and, more importantly, that outbound traffic
does have an address originating from the internal network. RFC 2267 is a great foundation for such
filtering techniques.
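
The same ingress/egress logic can be sketched in a few lines of Python using the standard ipaddress module; the internal prefix below is a placeholder, and the reserved ranges are the usual RFC 1918 and loopback blocks.

import ipaddress

INTERNAL = ipaddress.ip_network("203.0.113.0/24")          # placeholder internal prefix
RESERVED = [ipaddress.ip_network(p) for p in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8")]

def allow(direction, src_ip):
    """RFC 2267-style border check; direction is 'in' or 'out'."""
    src = ipaddress.ip_address(src_ip)
    if direction == "in":
        # Inbound packets must not claim an internal or reserved source.
        return src not in INTERNAL and not any(src in net for net in RESERVED)
    # Outbound packets must carry a source from the internal network.
    return src in INTERNAL

print(allow("in", "192.168.0.9"))     # False: spoofed reserved source
print(allow("out", "203.0.113.42"))   # True: legitimate internal source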

Rate Limiting

A better option for immediate relief, one available to most ISPs, would be to "rate limit" the
offending traffic type. Rate limiting restricts the amount of bandwidth a specific type of traffic can
consume at any given moment. This is accomplished by dropping the limited packets received when
the threshold is exceeded. It's useful when a specific packet is used in the attack. Cisco provides this
example for limiting ICMP packets used in a flood:

interface xy
 rate-limit output access-group 2020 3000000 512000 786000 conform-action transmit exceed-action drop

access-list 2020 permit icmp any any echo-reply
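
The committed-access-rate behaviour of the rate-limit command can be pictured in software as a token bucket; the Python sketch below uses figures loosely mirroring the example (a 3 Mbit/s sustained rate with a 512,000-byte burst) and is only a conceptual model, not router code.

import time

class TokenBucket:
    """Transmit packets while tokens remain; drop once rate and burst are exceeded."""

    def __init__(self, rate_bps=3_000_000, burst_bytes=512_000):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def conform(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                    # conform-action: transmit
        return False                       # exceed-action: drop

bucket = TokenBucket()
print(bucket.conform(1500))                # a single echo-reply-sized packet conforms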

This example brings up an interesting problem, which was noted earlier. What if the offending traffic
appears to be completely legitimate? For instance, rate limiting a SYN flood directed at a Web server
will reject both good and bad traffic, since all legitimate connections require the initial 3-way
handshake of TCP. It's a difficult problem, without an easy answer. Such concerns make DDoS
attacks extremely tricky to handle without making some compromises.
Route Filter Techniques

Blackhole routing and sinkhole routing can be used when the network is under attack. These
techniques try to temporarily mitigate the impact of the attack. The first directs the targeted traffic
to a null interface, where it is silently dropped. At first glance, it would seem perfect to "blackhole"
malicious traffic, but is it always possible to isolate malicious from legitimate traffic? If the victims
know the exact IP addresses the attack is coming from, they can drop traffic originating from those
sources. This way, the attack's impact is restricted because the victims do not consume CPU time or
memory as a consequence of the attack; only network bandwidth is consumed. However, if the
attackers' IP addresses cannot be distinguished and all traffic is blackholed, then legitimate traffic is
dropped as well. In that case, this filtering technique fails.

Sinkhole routing involves routing suspicious traffic to a valid IP address where it can be analyzed.
There, traffic that is found to be malicious is rejected (routed to a null interface); otherwise it is
routed to the next hop. A sniffer on the sinkhole router can capture traffic and analyze it. This
technique is not as severe as the previous one. The effectiveness of each mechanism depends on the
strength of the attack. Specifically, sinkholing cannot react to a severe attack as effectively as
blackholing. However, it is a more sophisticated technique, because it is more selective in rejecting
traffic.
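
In software terms, the decision made at a sinkhole can be pictured as in the Python sketch below; the classification step is left as a stub because it stands in for whatever sniffer or IDS analysis the operator actually runs.

def looks_malicious(packet):
    """Stub for the analysis performed on traffic diverted to the sinkhole."""
    return packet.get("flagged", False)

def sinkhole(packet, forward, drop):
    """Analyze a suspicious packet, then drop it or route it to the next hop."""
    if looks_malicious(packet):
        drop(packet)        # the equivalent of routing to a null interface
    else:
        forward(packet)     # legitimate traffic continues toward its destination

sinkhole({"src": "198.51.100.9", "flagged": True},
         forward=lambda p: print("forward", p),
         drop=lambda p: print("drop", p))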

Filtering malicious traffic seems to be an effective countermeasure against DDoS. The closer to the
attacker the filtering is applied, the more effective it is. This is natural, because when traffic is
filtered by victims, they "survive," but the ISP's network is already flooded. Consequently, the best
solution would be to filter traffic on the source; in other words, filter zombies' traffic.

Three filtering possibilities have been reported so far, differing in the criteria used for the filters. The
first is filtering on the source address. This would be the best filtering method if we knew each time
who the attacker is. However, this is not always possible because attackers usually use spoofed IP
addresses. Moreover, DDoS attacks usually derive from thousands of zombies and this makes it too
difficult to discover all the IP addresses that carry out the attack. And even if all these IP addresses
are discovered, a filter that rejects thousands of IP addresses is practically impossible to deploy.

The second filtering possibility is filtering on the service. This tactic presupposes that we know the
attack mechanism. In this case, we can filter traffic toward a specific UDP port or a TCP connection
or ICMP messages. But what if the attack is directed toward a very common port or service? Then
we must either reject every packet (even if it is legitimate) or suffer the attack.

Finally, there is the possibility of filtering on the destination address. DDoS attacks are usually
addressed to a restricted number of victims, so it seems to be easy to reject all traffic toward them.
But this means that legitimate traffic is also rejected. In the case of a large-scale attack this should
not be a problem, because the victims would soon break down anyway and the ISP would not be
able to serve anyone. Filtering thus prevents the victims themselves from breaking down by simply
keeping them isolated.

Attack Distribution and/or Isolation – Anycast

IPv4 anycast implementations have been in use on the Internet for a long time now. Particularly
suited for single-response UDP queries, DNS anycast architectures are in use in most Tier 1 Internet
providers' backbones. Anycast can be used for both authoritative and recursive DNS
implementations. Several root name servers implement anycast architectures to mitigate DDoS
attacks.
Black hole filtering is a specialized form of anycast. Sinkholes can use anycast to distribute the load
of an attack across many locations.

Many DNS anycast implementations are done using eBGP announcements. Anycast networks can be
contained in a single AS or span multiple ASes across the globe. Anycast provides two distinct
advantages in regard to DoS/DDoS attacks. In a DoS attack, anycast localizes the effect of the attack.
In a DDoS attack, the attack is distributed over a much larger number of servers, distributing the load
of the attack and allowing the service to better withstand it.

The main disadvantage of an anycast implementation is brownout conditions. This is where the
server is still functioning but running at full capacity. Some legitimate queries go unanswered due to
resource exhaustion. This may be due to a DoS/DDoS attack or failure of a neighbouring anycast
server without adequate reserve capacity. If this resource is taken fully off-line and queries are
redirected through anycast to the next server, a cascading effect can result taking down the entire
service. To prevent this from occurring, a true secondary anycast system is needed, separate from
the primary anycast. This allows one area to failover to an independent anycast system.

Due diligence is needed when setting up and maintaining an eBGP anycast system. All BGP routing
parameters should be set the same for each anycast site. If a configuration error causes one site's
route to be preferred over the others, that site will act as a magnet for the traffic and the entire
service can go down (for as long as that route is advertised).

Difficulties in Defending DoS/DDoS


Development of detection and defense tools is very complicated. Designers must think in advance
of every possible situation, because every weakness can be exploited. The difficulties include:

Any attempt at filtering the incoming flow means that legitimate traffic will also be rejected. And if
legitimate traffic is rejected, how will applications that wait for that information react? On the other
hand, if zombies number in the thousands or millions, their traffic will flood the network and
consume all the bandwidth. In that case filtering is useless because nothing can travel over the
network. Filtering is efficient only if attackers' detection is correct; in any other case filtering can
become an attacker's tool.

Attack packets usually have spoofed IP addresses, which makes it more difficult to trace them back
to their source. Furthermore, it is possible that intermediate routers and ISPs may not cooperate in
this attempt. Sometimes attackers, by spoofing source IP addresses, create counterfeit armies:
packets might appear to derive from thousands of IP addresses, while the zombies number only a
few tens.

Defense mechanisms are applied in systems with differences in software and architecture, and those
systems are managed by users with different levels of knowledge. Developers must design a
platform independent of all these parameters.
Conclusion
Although DDoS attacks have largely escaped the front page of major news organizations over the
past few years, replaced by elaborate identity theft, spam, and phishing schemes, the threat still
remains. In fact, attack architectures and technology have evolved so rapidly that enterprises large
and small should be concerned. Unfortunately, it appears the attacks will only increase in complexity
and magnitude as computer network technology permits.

Whether attackers are driven by financial, political, religious, or technical motives, the tools that
they have at their disposal have changed the dynamics of network security. Whereas firewall
management used to be a sufficient strategy to manage attacks, botnets and reflectors have since
reduced the effectiveness of blocking attacks at the network edge.

Attack techniques continue to advance and the number of software vulnerabilities continues to
increase. Internet worms that previously took days or weeks to spread now take minutes. Service
providers and vendors are quickly adapting to the new landscape. Defense in depth must be
practiced by service providers as zero-day exploits are released.

DDoS attacks are a difficult challenge for the Internet community. The reality of the situation is that
only the biggest attacks are fully investigated; the smaller ones that happen every day slip through
the cracks. And while a bevy of products exist, most are not practical for smaller networks and
providers. Ultimately, you are in charge of dealing with and defending against a DDoS. This means
knowing how to respond when under attack: identifying traffic, designing and implementing filters,
and conducting the follow-up investigation. Preparation and planning are, by far, the best methods
for mitigating DDoS attacks and risk.

The bottom line: Never before has it been easier to execute a DDoS attack.

References:

[1] "Computer Security Incident Handling Guide," NIST Special Publication 800-61 Revision 1.

http://csrc.nist.gov/publications/nistpubs/800-61-rev1/SP800-61rev1.pdf

[2] P. Ferguson and D. Senie, "Network Ingress Filtering: Defeating Denial of Service Attacks which
employ IP Source Address Spoofing," RFC 2267.

[3] http://ddos.arbornetworks.com/

[4] http://www.symantec.com/business/security_response/weblog/

[5] http://blog.spiderlabs.com/
